url stringlengths 14 1.76k | text stringlengths 100 1.02M | metadata stringlengths 1.06k 1.1k |
|---|---|---|
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=52A37&jrnl=one&onejrnl=proc | American Mathematical Society
AMS eContent Search Results
Matches for: msc=(52A37) AND publication=(proc) Sort order: Date Format: Standard display
Results: 1 to 8 of 8 found Go to page: 1
[1] Vladimir Kadets. Coverings by convex bodies and inscribed balls. Proc. Amer. Math. Soc. 133 (2005) 1491-1495. MR 2111950.
[2] Wieslaw Kubis. Perfect cliques and $G_\delta$ colorings of Polish spaces. Proc. Amer. Math. Soc. 131 (2003) 619-623. MR 1933354.
[3] Mark McConnell. The rational homology of toric varieties is not a combinatorial invariant. Proc. Amer. Math. Soc. 105 (1989) 986-991. MR 954374.
[4] M. Deza and P. Frankl. Bounds on the maximum number of vectors with given scalar products. Proc. Amer. Math. Soc. 95 (1985) 323-329. MR 801348.
[5] Z. Füredi and I. Palásti. Arrangements of lines with a large number of triangles. Proc. Amer. Math. Soc. 92 (1984) 561-566. MR 760946.
[6] Peter B. Borwein. Sylvester's problem and Motzkin's theorem for countable and compact sets. Proc. Amer. Math. Soc. 90 (1984) 580-584. MR 733410.
[7] K. S. Sarkaria. A "Riemann hypothesis" for triangulable manifolds. Proc. Amer. Math. Soc. 90 (1984) 325-326. MR 727259.
[8] Mau Hsiang Shih and Hann Tzong Wang. Unit lemniscates contained in the unit ball. Proc. Amer. Math. Soc. 86 (1982) 451-454. MR 671213.
All eight articles are available free of charge as PDFs, with abstracts, references, and article information.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248697519302368, "perplexity": 2263.6911202054152}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104631.25/warc/CC-MAIN-20170818082911-20170818102911-00658.warc.gz"} |
https://superuser.com/questions/1215219/how-to-delete-itunes-music-folder-from-command-prompt-on-windows | # How to delete iTunes Music folder from command prompt on Windows
If I have to delete the music folder of iTunes on a Windows PC, is it possible to do it from the command prompt? If so, how?
• You just want to delete the music (not iTunes itself)? Is there a specific reason? – N. Cornet Jun 1 '17 at 11:25
• Yes, only the music, because I have a lot of old music that I don't like anymore – Bryan Savian Jun 1 '17 at 12:00
• @BryanSavian What do you mean by prompt commands? Do you mean Windows terminal DOS commands? – esQmo_ Jun 1 '17 at 12:09
• Yes I mean Windows terminal DOS commands – Bryan Savian Jun 1 '17 at 13:05
1. Open iTunes and navigate to Library > Songs;
2. Press Ctrl+A to select all your music;
3. Right-click on the selection and choose Delete from Library;
4. Select Delete Songs and then Move to Recycle Bin;
1. Open the Command Prompt as administrator (by right-clicking on Start icon and selecting Command Prompt (Admin));
2. If your iTunes folder is not on the C: drive, type D: (for drive D:) and hit ENTER. Navigate to your iTunes folder using cd \path\to\iTunes\"iTunes Media"\ and hit ENTER (make sure you are not inside the Music folder itself; if you are, type cd ..);
3. To remove the folder, type rd Music /s /q and hit ENTER (/s removes all subdirectories and files; /q runs quietly, skipping the confirmation prompt); | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22213825583457947, "perplexity": 10007.644353305284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146160.21/warc/CC-MAIN-20200225202625-20200225232625-00420.warc.gz"} |
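For scripting the same removal, here is a minimal cross-platform Python sketch of the delete step (the folder layout below is a hypothetical stand-in, not your real iTunes Media path):

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical stand-in for ...\iTunes\iTunes Media\Music -- substitute
# your real library path before using this for real.
media = Path(tempfile.mkdtemp()) / "iTunes Media"
music = media / "Music"
music.mkdir(parents=True)
(music / "old_song.mp3").write_bytes(b"")

# Equivalent of `rd Music /s /q`: delete the folder and everything in it
# without asking for confirmation. Unlike "Move to Recycle Bin" in the
# iTunes UI, this is not recoverable.
shutil.rmtree(music)
print(music.exists())  # False
```

As with `rd /s /q`, there is no undo, so double-check the path before running.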
http://obnv.pogrzebyluban.pl/activities-for-teaching-algebraic-expressions.html | Giving activities before, during and after listening means that students are not just listening but are engaged in the task, and actually doing something with what they hear. context feel? 3. These questions can be used to create tests or exams. On this page you will find: a complete list of all of our math worksheets, lessons, math homework, and quizzes. The teaching of mathematics around big ideas offers students opportunities to develop a sophisticated understanding of mathematics concepts and processes, and helps them to. Math Play has a large collection of free online math games for elementary and middle school students. Below are just a few suggestions for activities to make vocabulary practice fun. Substitution - Evaluate the expressions by substituting numbers for letters. Learn to greet and thank people, and ask for help in English. Lesson activities include games, puzzles, and warm-ups, as well as activities to teach and practice each of the core skills of language learning: speaking English Club offers listening and repeating activities for ESL students to practice English pronunciation. Luckily, teaching reading to more imaginative students is easy with these character analysis activities. They also can step back for an overview and shift perspective. An algebraic expression containing only one term is called a Monomial. Be sure students show all steps in evaluating the expression. Advance Preparation For the optional Readiness activity in Part 3, students will need Algebra Election Cards and the Electoral Vote Map from Lesson 6 11. The total area can be found by adding these two expressions, then substituting 3 into the expression for x. You need to first teach students how to write and evaluate numerical expressions to get a feel for the basic setup of what an algebraic expression may look like in a mathematical format. 
The memory box does not necessarily mean this is the way to start teaching a topic. See more ideas about Middle school math, Teaching math, Algebraic expressions. Algebra for All – Introducing Algebra. A only applies when you have more than one term. 6th Grade: Describe simple relationships by creating and analyzing tables, equations, and expressions. Kick math instruction into high gear with a collection made up of lesson plans, activities, worksheets, videos, and apps created to. Evaluating the original expression at x = 1 gives 3(1)^2 - 6(1) + 6 = 3. This free online course will teach you about advanced algebraic concepts and their applications in a simple and easy way. Upon completion of this advanced linear algebra course, you will be able to simplify expressions. These are great skills not only for science and math but also for our daily activities. This math worksheet was created on 2019-02-08 and has been viewed 670 times this week and 6,355 times this month. Algebra Expressions are needed in computer apps which are written to process real world situations. kids worksheet chapter 1. Aug 12, 2020 - Writing Numerical Expressions Worksheet. Introduction to Algebra. The numbers are constants. FYI, in 7th grade, the blue assessment for simplifying expressions and solving basic equations is a repeat of the blue Algebraic Expressions and Integers. Through verbally expressing their ideas and responding to others, your students will develop their self-confidence, as well as enhance their communication and critical thinking skills, which are vital. Thanks for sharing such effective teaching strategies. Translating from verbal to algebraic and algebraic to verbal expressions. No matter how much you enjoy teaching English, having to come up with ESL worksheets regularly can get frustrating.
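The evaluation at x = 1 mentioned above can be checked with a one-line function; the quadratic 3x^2 - 6x + 6 is taken from that worked sentence:

```python
def p(x):
    # The quadratic evaluated in the text above: 3x^2 - 6x + 6
    return 3 * x**2 - 6 * x + 6

print(p(1))  # 3, matching the worked value in the text
```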
These activities support student development of number sense and the concept of mathematical operations: 2006 Mathematics Game challenges students to "use only the digits 2, 0, 0, and 6, in any order, with the operations +, -, x, ÷, ^(raised to a power), sqrt (square root), and ! (factorial), to write expressions for the counting numbers 1. Students may be tired or have other things on their minds and diving straight into a textbook or grammar explanation can be quite jarring. Algebra Jeopardy Game. Instead of relying on memorization, students first develop an understanding of linear relationships and then develop an understanding of how these relationships are represented by algebraic expressions and equations. Nicaragua Fourth and Fifth Review Under the Three-Year. The activities below help practice building up expressions using Algebra Tiles, using zero pairs — i. Easy & Engaging ESL Activities and Mini-Books for Every Classroom: Terrific Teaching Tips, Games, Mini-Books & More to Help New Students from Every Nation Build Basic English Vocabulary and Feel Welcome!. advanced kids worksheet formulas PDF. 0- Knowledge of Algebra, Patterns, and Functions Topic B: Expressions, Equations, and Inequalities. Apr 4, 2018 - Explore Bethy's board "Algebraic Expressions" on Pinterest. Teaching reading doesn't need to end with the story. Second, you will need to read through the lesson created by Catherine DeFrancisco. Visit our TeachingEnglish website for more lesson plans and activities. Example: A monomial like 3abc can be written as 3 x a x b x c, 3,a,b and c are factors of 3abc. Teaching algebra is quite demanding and difficult for both the new teacher and the students. As an educator, you probably understand the importance of diversifying your teaching materials. Communicative teaching is often organized in the three-phase framework. Instant access to millions of Study Resources, Course Notes, Test Prep, 24/7 Homework Help, Tutors, and more. 
Take time to learn. The more REAL English phrases and expressions you listen to, the more fluent you will become. Thank you so much for the great work! This is one of the best, if not the best program that teaches. Comparing Bits and Pieces Problem 3. An algebraic expression combines both numbers and letters using the arithmetic operations of addition (+), subtraction (–), multiplication (·), and division (÷) to express a quantity. Write and evaluate numerical expressions involving whole number exponents. The video lessons with the practice and written instruction were very helpful. Warm-up activity. Visit Mike's blog, Teaching Games, for more great ideas. A: One of the properties of exponents states the following: (a/b)^n = a^n / b^n. and how do the speakers in the given. My all time favorite activity for practicing letters (and sight words too) is This activity can be downloaded for free from my Teachers Pay Teachers store. Exaggerate actions and facial expressions to really engage them! Translate algebraic expressions into English phrases, and translate English phrases into algebraic expressions. Principles of teaching vocabulary. You may select from 2, 3, or 4 terms with addition, subtraction, and multiplication. Honors Algebra II introduces students to advanced functions, with a focus on developing a strong conceptual grasp of the expressions that define them. An expression such as a^2 + 110 is an algebraic expression because it is a combination of variables, numbers, and at least one operation. These Algebraic Expressions Worksheets will create algebraic statements for the student to simplify. The lists are also organized by the key K12 Common Core math content categories (geometry, measurement & data, etc.). Useful resources and activities for teaching English learners of all levels - lesson plans, vocabulary sheets, listening and reading practice, conversation topics - general and business. 7th grade practice. Activities are a great way to make reading fun.
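The quoted exponent property, (a/b)^n = a^n / b^n, is easy to spot-check with exact rational arithmetic (the values 5, 8, 3 are one illustrative choice):

```python
from fractions import Fraction

a, b, n = 5, 8, 3
lhs = Fraction(a, b) ** n   # (a/b)^n
rhs = Fraction(a**n, b**n)  # a^n / b^n
print(lhs == rhs, lhs)      # True 125/512
```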
Bored with Algebra? Confused by Algebra? Hate Algebra? We can fix that. In a hurry? Browse our pre-made printable worksheets library with a You can create printable tests and worksheets from these Algebraic Expressions questions! Select one or more questions using the checkboxes above. • Students will practice solving proportions. Click on the cards to find matching pairs. This is a useful activity for introducing prime factorization by continuing the roots to their prime factors. Writing Expressions. I have been using projects, games, cooperative learning, and interactive activities to teach mathematics at the middle school, high school, and college levels for the last 12 years. Unlimited practice is available on each topic which allows thorough mastery of the concepts. This article presents some useful expressions for debating. 15 Card Activity Find matches for your expressions. Algebra: Expressions (6th grade math) No teams 1 team 2 teams 3 teams 4 teams 5 teams 6 teams 7 teams 8 teams 9 teams 10 teams Custom Press F11 Select menu option View > Enter Fullscreen for full-screen mode. This is a great activity to bring all of the skills together and provides a good opportunity for teachers and parents to monitor and assess a pupil's. Algebra is just like a puzzle where we start with something like "x − 2 = 4" and we want to end up with something like "x = 6". Teaching aids and materials support the lesson plan and assist learning. The product is a multiplication of the factors. Objectives Most teachers who employ the Grammar Translation Method to teach English would probably tell you that (for their students at least) the most fundamental reason for learning the language is give learners access to English literature, develop their minds "mentally" through foreign language. Throughout the 4-week unit students will receive opportunities to make important brain connections, as they experience algebra in different ways, forms and representations. 
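The prime-factorization activity mentioned above (continuing a factor tree down to prime factors) can be sketched as plain trial division:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, e.g. 60 -> [2, 2, 3, 5]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```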
Pre-Algebra concepts are presented in this unit, including order of operations, and writing algebraic expressions and equations. Course content includes a review of basic mathematical concepts and operations; solving equations and inequalities; graphing; solving systems of equations and inequalities; exponents; polynomials; factoring; rational expressions and equations; and an introduction to roots and rational. Defining a variable in an algebraic expression and equation. AMANDA HILLIARD. terms in an algebraic expression that can be combined (with tiles this is represented by the tiles having the same size and shape and basic colour). keystage 3 Interactive Worksheets to help your child understand Algebra: Expressions in Maths Year 7. Although proficiency in arithmetic operations is important to becoming proficient in algebra, the recommendations advocate algebra instruction that moves students beyond superficial mathematics knowledge and toward a deeper understanding of algebra. Slader teaches you how to learn with step-by-step textbook solutions written by subject matter experts. *Remember that before teaching abroad, many countries (including China), require teachers to hold a Bachelor's Degree and a minimum 120 hour TEFL. Algebraic Expressions. Expanding algebraic expressions. Clicking on a topic's name will open a list of its current curriculum. Quizzes, tests, exercises and puzzles to help you learn English as a Second Language (ESL) This project of The Internet TESL Journal (iteslj. Math Chimp was created by educators and is ideal for children, parents and teachers. knowledge and their use of instructional strategies for teaching in the patterns, functions and algebra strand of the K-5 Mathematics Standards of Learning. Stations review basic skills in preparation for the first unit. I would like to help you with fun ways to teach algebraic expressions as it was my favorite topic in math.
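Since order of operations comes up here, note that Python's own precedence (brackets, then exponents, then multiplication/division, then addition/subtraction) matches the convention being taught, which makes it handy for checking worked examples:

```python
# Without brackets, exponentiation binds first, then multiplication:
print(2 + 3 * 4 ** 2)    # 2 + 3*16 = 50

# Brackets change the order of evaluation:
print((2 + 3) * 4 ** 2)  # 5*16 = 80
```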
This free online course will teach you about advanced algebraic concepts and their applications in simple and easy Upon completion of this advanced linear algebra course, you will be able to simplify expressions These are great skills not only for science and math but also for our daily activities. The memory box does not mean necessarily this is the way to start teaching a topic. Algebraic expressions by Christie Harp 24003 views. You can say very little with. In this activity, students sort cards to strengthen their understanding of multiple representations, including: algebraic expression, verbal description, table of values, and algebra-tile model. Sellers will prove to be an. Year 7 Interactive Maths - Second Edition Consider the expression a ( b + c ). Write and evaluate numerical expressions involving whole number exponents. Communicative teaching is often organized in the three-phase framework. The teacher might need to demonstrate how the. , 7x), and operations that involve numbers and variables (e. ) 1-7 Guide Notes SE - The Distributive Property (FREEBIE) 4. *Remember that before teaching abroad, many countries (including China), require teachers to hold a Bachelor's Degree and a minimum 120 hour TEFL. First of all, when using activities for teaching vocabulary there are two key points we must remember You can use a good and simple English dictionary for the meanings of words. You can find here materials for teaching English based on TED Talks and other videos worth watching. Unlimited practice is available on each topic which allows thorough mastery of the concepts. Algebra is a branch of mathematics that substitutes letters for numbers. Key words: generation, language skills, interact, new approaches. Both teacher and student describe their strategies, activities, approaches, thoughts, and responses as they move week by week through the experience of teaching and ReadWriteThink. Algebra Review Jeopardy Game. 
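The expression a(b + c) considered above expands by the distributive property to ab + ac; a quick numeric spot-check over random integers:

```python
import random

# Distributive property: a(b + c) = ab + ac
for _ in range(1000):
    a, b, c = (random.randint(-100, 100) for _ in range(3))
    assert a * (b + c) == a * b + a * c
print("a(b + c) = ab + ac verified on 1000 random triples")
```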
The activity - Free Let's Get Dressed Game by Teaching Talking. There are conventions for writing algebraic expressions:. Such utterances will benefit from the teacher teaching the correct forms. teaching and learning of algebra. air practice test. D: The situation describes a geometric series, with a common ratio of 2. There are many types of listening activities. Free activities and icebreakers for online teaching as a freelance online trainer or teacher. Sponsored “That’s one of the most challenging skills to teach students because it’s a very abstract skill,” Walkington said. time worksheets year 2 time exercise for grade 2 worksheets for year 2 triangle congruence sss and sas worksheet answer k kuta software infinite geometry sss and sas congruence answers kuta software. This Algebraic Expressions Worksheet will produce a great handout to help students learn the symbols for different words and phrases in word problems. Most will respond, "x plus y. interpret the solutions of the equation as the x-value(s) of the intersection point(s) of. High School Algebra Curriculum. I find doing the PCK Map a useful exercise because it helps me link concepts, synthesize my teaching knowledge about the topic, not leave out important ideas in the course of the teaching and of course in planning the details of the. So, the verbal expression the product of 2 and m can be used to describe the algebraic expression 2 m. Expressions Activities on patterns and algebra from the student area; Presentation on introducing algebra; Poster detailing connections between linear patterns and other syllabus topics; Teacher resource booklet from workshop 5; Student worksheet on representing algebraic expressions using arrays. Studies have shown that younger students can only focus for about It's not always easy, but with experience and these tips for teaching young learners, you'll. Here are 21 free to use icebreakers for online teaching that you can use. 
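For the multiple-choice answer above describing a geometric series with common ratio 2: the sum of the first n terms is a(r^n - 1)/(r - 1). A sketch with an assumed first term of 3 (the original problem's numbers are not given in the excerpt):

```python
def geometric_sum(a, r, n):
    # Sum of the first n terms: a + ar + ar^2 + ... + ar^(n-1)
    return a * (r**n - 1) // (r - 1)

terms = [3 * 2**k for k in range(5)]       # 3, 6, 12, 24, 48
print(sum(terms), geometric_sum(3, 2, 5))  # 93 93
```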
This game is an interactive math quiz which kids could use to test their skills online. Return from this Algebraic Expressions Millionaire Game to the Millionaire Math Games, Algebra Math Games, or Math Play. Algebraic thinking is fundamental to functioning in business, industry, science, technology and daily life. In this post, I am describing activities where students practice using past modal verbs for speculation and deduction. Similarly, in algebra, an algebraic expression can be written as a product of its factors. And if you want to share your lesson plans on a personal blog or with other teachers in your school, making your lesson plan engaging will make all the difference! GRI Non-Teaching Recruitment 2020 The Gandhigram Rural Institute University offers a non-teaching vacancy. • Monday–Thursday's activities provide a one- or two-step word problem. Below are just a few suggestions for activities to make vocabulary practice fun. The Place of Grammar in Language Teaching. You can evaluate algebraic expressions by replacing the variables with numbers and then finding the numerical value of the expression. Sequences - Work out the 4th, 5th, 10th, 20th and nth terms in a number sequence. The aim of the first activity is to become familiar with the story of Sam and James playing a game of football. This questionnaire is addressed to teachers of mathematics, who are asked to supply information about their academic and professional backgrounds, instructional practices, and attitudes towards teaching mathematics. Trinomial.
Operation | Verbal expressions
add (+) | plus, sum, more than, increased by
subtract (−) | minus, difference, less than, decreased by, take away, less
Write an expression that shows how many books Susan has. Income and Cost Formulas. The idea behind creating any document is to convey the message to the reader.
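Translating a verbal phrase into an expression and then "replacing the variables with numbers", as described above, can be modeled directly; the phrase used here is my own illustration, not one from the worksheet:

```python
# "The sum of n and 7, decreased by 2" -> n + 7 - 2
def expression(n):
    return n + 7 - 2

# Evaluating means substituting a number for the variable:
print(expression(10))  # 15
```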
Algebra has a reputation for being difficult, but Math Games makes struggling with it a thing of the past. Snowstorm: Students write down what they learned on a piece of 15. Sometimes it helps to look at a simpler case before venturing into the. The Algebra standards cover how to successfully replace all the numbers your students have ever learned about with letters, and why it will serve them well. Here we will explore the world of using established formulas and appropriate units of measure to calculate the area and volume of shapes as well as creating and evaluating algebraic expressions by substituting a given value for each variable. Upon completion of this lesson, students will be able to: give examples of different types of algebraic expressions ; distinguish between different types of algebraic expressions. Here are a set of practice problems for the Algebra notes. Teaching active vocabulary is important for an advanced student in terms of their own creativity. Students can also practice using the third conditional to express regret. See more ideas about Algebraic expressions, Teaching math, Middle school math. Year 5 Maths Worksheets Pdf. Algebra 1 Test Practice. This includes solving algebraic equations, factoring algebraic expressions, working with rational expressions, and graphing linear equations. Play below. 9 13 ? 5 3 11 2. If you teach in a. High School: Algebra. Essential Questions: How does the result change when the value of the variable is changed? What words or symbols indicate which operation? How can mathematical symbols model verbal. Algebraic Expressions 5 Questions | 1345 Attempts Multiplication, Algebra, Mathematics Contributed By:. The printable translating phrases worksheets in this page provide prolific practice to 6th grade, 7th grade, and 8th grade students on expressing the phrases as algebraic expressions like linear expressions, single & multiple variable expressions, equations and inequalities. 
High school math worksheets for math teachers and math students. 1 digit multiplication worksheets higher math worksheets beginning multiplication worksheets 2nd grade geometry worksheets grade 3 freshman math lessons fractions to decimals and decimals to fractions first grade learning third grade. Fourth Grade Math Worksheets. Students will simplify and operate with radical expressions. Write and interpret numerical expressions. So what are educational learning theories and how can we use them in our teaching practice? There are so many out there, how do we know which are still relevant and which will work for our classes? There are 3 main schema's of learning theories; Behaviourism, Cognitivism and Constructivism. Worksheet 2:6 Factorizing Algebraic Expressions Section 1 Finding Factors Factorizing algebraic expressions is a way of turning a sum of terms into a product of smaller ones. • Students will practice solving proportions. Addition and Subtraction. All good reasons to make sure that your vocabulary teaching is interesting, useful and effective, don't you think? See below for some fun activities to make the lessons engaging for students of all levels. When we simplify this rational expression, we have to be careful how we simplify or reduce the fraction. I run a highly interactive classroom that promotes student participation and critical thinking. Thus, an algebraic expression consists of numbers, variables, and operations. WordPress Shortcode. Keeping it fresh with lots of different ways of learning will help students (and the teacher) avoid getting burned out or tired of working with vocabulary. Mar 16, 2020 - Explore Jodi Weissman's board "Algebra grade 6" on Pinterest. This algebraic order of operations PDF tasks KS4 students with simplifying expressions (26 questions), inserting brackets, Where required, to make identities true (8 questions) and looking at five expressions to spot where errors occurred. Ratio And Proportion Worksheet. 
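Factorizing by "finding factors", as in the worksheet section named above, usually starts with the greatest common factor, e.g. 6x^2 + 9x = 3x(2x + 3). The coefficient part of that step (the example expression is mine, not the worksheet's):

```python
from math import gcd

# GCF of the coefficients of 6x^2 + 9x; with the shared factor x,
# the full GCF is 3x and the factorization is 3x(2x + 3).
coeffs = [6, 9]
g = gcd(*coeffs)
inside = [c // g for c in coeffs]
print(g, inside)  # 3 [2, 3]
```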
Pearson Texas kids worksheet 1 answers. It carefully guides students from the basics to the more advanced techniques required to be successful in the next course, Intermediate Algebra. Recall that only like terms can be added or subtracted. Writing Algebraic Expressions Activity Pack 5 Low Prep Activities for Independent Practice!This product includes 5 low prep, engaging activities to practice writing algebraic expressions. • converting fractions to lowest terms. The Personal Math Trainer powered by Knewton feature, or one of the other digital features, offer unlimited practice, real-time feedback, and a variety of question types and learning aids to help teach Algebra, Geometry, and Algebra 2. Save time planning lessons. x in kids worksheet means. Algebraic expression definition, a symbol or a combination of symbols used in algebra, containing one or more numbers, variables, and arithmetic operations: how to solve algebraic expressions. Upon completion of this lesson, students will be able to: give examples of different types of algebraic expressions ; distinguish between different types of algebraic expressions. I was pleased that you incorporated all types of learning in the lessons - visual, auditory, and written/doing. Discover thousands of teacher-tested classroom activities to inspire and engage your students. The series covers a brief revision of number systems. Jordan’s father also states that the x-values in these expressions is equal to 3. Answer key is included for easy checking. AEverything you need to introduce and practice writing algebraic expressions. Most will respond, "x plus y. Teaching styles, also called teaching methods, are considered to be the general principles, educational, and management strategies for classroom instruction. Fun Algebra Activities for the Classroom Whether you are a homeschooling parent or a teacher, fun algebra activities are a great way to engage kids and teach them important concepts. 
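"Only like terms can be added or subtracted," as stated above; combining like terms is just summing the coefficients attached to identical variable parts:

```python
from collections import Counter

# 3x + 2y - x + 4y: group coefficients by variable part.
terms = [("x", 3), ("y", 2), ("x", -1), ("y", 4)]
combined = Counter()
for var, coeff in terms:
    combined[var] += coeff
print(dict(combined))  # {'x': 2, 'y': 6}, i.e. 2x + 6y
```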
Algebraic Expressions Worksheets and Quizzes Combining Like Terms Algebraic Expression: Parts of an Expression Writing Expressions Algebraic Expressions Worksheets: Combining Like Terms Variables And Expressions Worksheets Simplify Expressions Worksheets Evaluating Expression Worksheets Pre Algebra Word Problem Worksheets Distributive Property. Most multiple choice problems expect your students to take an algebraic approach, and to make algebraic mistakes. They then have to match up the expressions. Algebra Review. Thus, an algebraic expression consists of numbers, variables, and operations. Teaching children requires patience and a sense of fun and playfulness. • converting fractions to lowest terms. If teachers use visual aids regularly, students will expect to learn the next language topic by using visual aids, because each visual aid for them is an interesting learning tool. Task-based language teaching (TBLT), also known as task-based instruction (TBI), focuses on the use of authentic language and on asking students to do meaningful tasks using the target language. You may select from 2, 3, or 4 terms with addition, subtraction, and multiplication. Year 7 Maths Worksheets Pdf. Baltrop: Area Of Composite Figures Worksheet. Basically, when factoring algebraic expressions, you will first look for the GCF and use your GCF to make your polynomial look like a. They have been taught to a variety of abilities, so find the PPT that suits your class best. Here is given types of games and how to use them in teaching young learners. Math Chimp was created by educators and is ideal for children, parents and teachers. These hands-on activities can help teachers when introducing this complicated concept, making the abstract tangible and bringing the lesson home. The traditional pairs or Pelmanism game adapted to test knowledge of equivalent algebraic expressions. 
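For the pairs/Pelmanism matching game on equivalent expressions mentioned above, equivalence of two candidate cards can be spot-checked by evaluating both at many points (agreement on a large sample is strong evidence for polynomial identities, though not a proof in general):

```python
import random

def agree_everywhere(f, g, trials=200):
    # Evaluate both expressions at random integer points.
    return all(f(x) == g(x) for x in (random.randint(-50, 50) for _ in range(trials)))

match = agree_everywhere(lambda x: (x + 1) ** 2, lambda x: x**2 + 2 * x + 1)
mismatch = agree_everywhere(lambda x: 2 * (x + 3), lambda x: 2 * x + 3)
print(match, mismatch)  # True False
```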
These display materials and printable resources will support the teaching of algebra in your primary and secondary classrooms. These materials help students understand how to solve basic algebra equations. Each term can be a variable, a number and a variable, or a number and many variables with or without exponents, as long as everything is being multiplied. 10,636 views. Algebraic Expression This is a practice to sharpen skill on algebraic expression. Pre-Algebra Worksheets. use a spreadsheet model to analyse a real-life problem and link spreadsheet formulae to algebraic expressions; form equivalent algebraic expressions in context; Activity 1: variables and constants. terms in an algebraic expression that can be combined (with tiles this is represented by the tiles having the same size and shape and basic colour). context feel? 3. Circumlocutionis a roundabout expression of meaning. Course content includes a review of basic mathematical concepts and operations; solving equations and inequalities; graphing; solving systems of equations and inequalities; exponents; polynomials; factoring; rational expressions and equations; and an introduction to roots and rational. The letter o is usually not used because it can be mistaken for 0 (zero). We are talented in algebra. 1 Apply properties of operations as strategies to add, subtract, factor, and expand linear expressions with rational coefficients. The activities below help practice building up expressions using Algebra Tiles, using zero pairs — i. ESL warm-up activities are essential in the English classroom. Then you can show one or more pictures that express "exhilaration". Use a real-life situation to apply the concepts of variables and The next part of the activity provides students with the opportunity to apply algebraic reasoning to 3. These activities are perfect for math workshop stations, homework, and independent practice. Parenthetical Expression Quiz. 
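The zero-pair idea from the algebra-tile activities above (a +1 tile and a −1 tile cancel to zero) can be sketched by repeatedly removing one tile of each sign:

```python
# Model -3 + 5 with unit tiles: five +1 tiles and three -1 tiles.
tiles = [+1] * 5 + [-1] * 3
while +1 in tiles and -1 in tiles:
    tiles.remove(+1)  # take away one zero pair
    tiles.remove(-1)
print(tiles, sum(tiles))  # [1, 1] 2  -> -3 + 5 = 2
```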
Instant access to millions of Study Resources, Course Notes, Test Prep, 24/7 Homework Help, Tutors, and more. Teaching the Pragmatics of. See our ESL lesson plans and worksheets for teaching adults which touch contemporary topics. Visualize multiplying and factoring algebraic expressions using tiles. Such tasks can include visiting a doctor, conducting an interview, or calling customer service for help. Some activities only work well once with a class and many activities can only be used to teach a specific aspect of grammar, language function or speaking topic. Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols. Kids can use our free, exciting games to play and compete with their friends as they progress in this subject! Algebra concepts that pupils can work on here include: solving and writing variable equations to find answers to real-world problems. By algebraic expression we mean a combination of letters and operation symbols such that if numbers are substituted instead of the letters and the operations are performed, a number results (Sfard, personal communication). In teaching pupils a foreign language the teacher should bear this in mind when preparing for the vocabulary work during the lesson. Therefore, the given expression can also be written as $5^3/8^3$. Elementary Algebra is written in a clear and concise manner, making no assumption of prior algebra experience. In the fall of 2007, I decided to teach Algebra 1 differently. Write and evaluate numerical expressions involving whole number exponents. Teaching division can be a real challenge. Teaching styles, also called teaching methods, are considered to be the general principles, educational, and management strategies for classroom instruction. As you review the next example, notice how the distributive property was used first, then the algebraic expression was simplified. 
Apr 26, 2020 - Explore Mary Gill's board "Algebraic Expressions" on Pinterest. Equations become meaningful, not memorized. The word "common" means shared by each term. Join with the largest and the most active Facebook page on Cambridge Teaching Knowledge Test (TKT). Free activities and icebreakers for online teaching as a freelance online trainer or teacher. Houghton Mifflin Harcourt Online Store; Math Expressions Resources for Students; Math Expressions Resources for Families. Substitution - Evaluate the expressions by substituting numbers for letters. com, where unknowns are common and variables are the norm. Teaching English as a Foreign Language For Dummies®. WeAreTeachers receives a few cents when you buy using our links, at no cost to you. The rational expression above is extremely basic. Tips for Teaching a Conversation Class for Adults. Written by experts of Language Link Russia. Sine and Cosine Graphs. Algebra - Table of Contents. • evaluate arithmetic and algebraic expressions involving integers and including brackets and exponents, emphasizing the need for knowing and following the order of operations. Teaching vocabulary at primary school (Olena Remez, English teacher): divide the class into two groups. Using the problems given, your students must convert these problems into algebraic equations. Desmos offers best-in-class calculators, digital math activities, and curriculum to help every student love math and love learning math. Algebra Games and Activities for 5th Graders. 
Problem of the Day: Ray and Katrina are wandering through the wildlife preserve. Here you can find interactive games designed to make math drills fun and entertaining. As an educator, you probably understand the importance of diversifying your teaching materials. If you teach in a. Children need lots of stimulation all the time. Computers use letter variables and mathematical symbols in the algorithms in their programs, rather than full word English sentences. Understand that any equation in x can be interpreted as the equation f(x) = g(x), and. The printable translating phrases worksheets in this page provide prolific practice to 6th grade, 7th grade, and 8th grade students on expressing the phrases as algebraic expressions like linear expressions, single & multiple variable expressions, equations and inequalities. Confidence is a great help. The English language is exceedingly complex, with numerous nuances that must be learned. As we think about algebraic reasoning, it may also help to define the term algebra. Students need to be able to translate common words to math symbols. Slope & Y-intercept. Education resources, designed specifically with parents in mind. Get unstuck. Below you will find links to projects that enhance and elaborate on the concepts taught throughout this course. They should know that certain words or phrases imply certain operations. In the case of a binomial 2xz + 5xy, we can write it as x(2z + 5y); here, x and (2z + 5y) are factors of the binomial 2xz + 5xy. Teachers can use the dashboard to create different classes, add students, choose specific games each class can access, and monitor students' activity. 
Algebraic expressions. Select your answer by clicking on its button. Working with expressions and equations, including formulas, is an integral part of the curriculum in Grades 7 and 8. $$4\cdot x-3$$ First we substitute x with 5. Learn Vocabulary Using Fun Activities. Save time planning lessons. Variables and Algebraic Expressions. When verifying whether two algebraic expressions are equal to each other, we can either recall the relevant laws or substitute values for the variables to test. Webinars for teachers. Write Algebraic Expressions. These Algebraic Expressions Worksheets are a good resource for students in the 5th Grade through the 8th Grade. It includes an algebraic expressions worksheet where the students must solve problems that will help them work their way through a maze! Registering on the website also gives access to a ton of Algebra 2 activities. Enduring Understanding (Big Ideas): Algebraic expressions can represent words. To help you with tactile learning activities for teaching, I've made a list of tactile activities for you. Learn to greet and thank people, and ask for help in English. In our number sense activities students will learn ways to adapt numbers and to use grouping symbols that will help them understand and use algebraic expressions. The following suggestions will help build success based on being confident, thorough, and. 
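The substitute-then-calculate procedure above can be sketched as a tiny Python function. This is an illustrative snippet, not part of any worksheet referenced here; the function name is made up:

```python
def evaluate_4x_minus_3(x):
    """Evaluate the expression 4·x − 3 for a given value of x."""
    return 4 * x - 3

# Substituting x = 5: 4·5 − 3 = 20 − 3 = 17
print(evaluate_4x_minus_3(5))  # → 17
```

Substituting several values for x is also a quick way to check whether two expressions are (probably) equivalent, as mentioned above.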
Click on the "Solution" link for each problem to go to the page containing the solution. The rest of the week will include a variety of review activities to provide practice with the operation words. Writing Algebraic Expressions. Fifth Grade Math: Operations and Algebraic Thinking Standards. Many teaching strategies work for any classroom, no matter what the age of the students or the subject. Teaching Speaking: Goals and Techniques for Teaching Speaking. The minimum value of the expression is 3. These worksheets cover all the basic concepts of algebra and algebraic expressions for the CBSE students. Help children learn how to simplify algebraic expressions with our Year 7-8 algebra worksheets. Find algebraic expressions lesson plans and teaching resources. PRACTICE TEACHING LESSON, MATHEMATICS VI, EXPLICIT TEACHING, December 12, 2017 (Experimental Group). Unfortunately, extra curricular activities for students are increasingly relegated to the backseat nowadays, due to highly sedentary lifestyles. The Distributive Property Activity – Cupcakes and Algebra Solving Equations Christmas Coloring Worksheets Maze Solving Equations Activities Translating Algebraic Expressions Cinco De Mayo – Theoretical and Experimental Probability How to Teach Dividing Polynomials How to Teach Simplifying Radicals. - You should have a word. Algebra has a reputation for being difficult, but Math Games makes struggling with it a thing of the past. Learn about integers, equations, function machines and more. 
Worksheet ID 1284018 (Language: English; School subject: Math; Grade/level: Grade 7; Age: 5-14; Main content: Algebra; Other contents: like terms, basic arithmetic operations). The learning goal was very clear and nothing. After presenting a story, I prepared a gap-fill text to check students' understanding of the story. The tools are appropriate for use with any high school mathematics curriculum and compatible with the Common Core State Standards for Mathematics in terms of content and mathematical practices. Students can download free printable worksheets for. The students have to sit quietly and listen to a lecture on the present perfect, for example, before they actually get to do anything. Algebra Expressions are needed in computer apps which are written to process real world situations. What are algebraic expressions? What are the different types of algebraic expressions, and what are the value and degree of an algebraic expression? Let us learn the. "Teaching materials" is a generic term used to describe the resources teachers use to deliver instruction. A variable is a symbol, usually a letter, that represents one or more numbers. They are very helpful and informative for teachers. Algebra is just like a puzzle where we start with something like "x − 2 = 4" and we want to end up with something like "x = 6". An algebraic expression is a mathematical expression that consists of variables, numbers and operations. I have to write a program that tests whether two algebraic expressions are equivalent. Controlled practice: see practice. A wonderful applet about using balancing to. Next: What Are Algebraic Fractions. This is a fun and easy way to evaluate. One systematic method, however, is as follows. Add any tactile learning activity to your teaching, so your tactile learner will remember the lesson. Algebra has more possible letter combinations than the entire English language. 
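The "puzzle" above, going from x − 2 = 4 to x = 6, is solved by undoing the subtraction: add 2 to both sides. A minimal sketch, with a helper name made up here for illustration:

```python
def solve_x_minus_a_equals_b(a, b):
    """Solve x - a = b by adding a to both sides: x = b + a."""
    return b + a

# x - 2 = 4  ->  x = 6
x = solve_x_minus_a_equals_b(2, 4)
print(x)  # → 6
assert x - 2 == 4  # check the answer by substituting it back
```

The substitute-back check at the end mirrors how students should verify their own solutions.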
Apr 4, 2018 - Explore Bethy's board "Algebraic Expressions" on Pinterest. Example: a monomial like 3abc can be written as 3 × a × b × c; 3, a, b and c are factors of 3abc. Writing Algebraic Expressions Notes and Activities, Common Core Standard: 6. This questionnaire is addressed to teachers of mathematics, who are asked to supply information about their academic and professional backgrounds, instructional practices, and attitudes towards teaching mathematics. First, you will either need a copy of the Dr. Everything you need to introduce and practice writing algebraic expressions. See more ideas about Middle school math, Teaching math, Algebraic expressions. Since your class has been selected as part of a nationwide sample. An Algebra Toolkit, available online, can help the students refresh their prerequisite skills. You can evaluate algebraic expressions by replacing the variables with numbers and then finding the numerical value of the expression. If you wish to develop an active lifestyle and learn several essential skills, I recommend you to try out some good co-curricular activities. On page 2 of the workshop is a table containing the activities and suggested amount of time necessary to accomplish each activity. Activity-Based Learning. Have you tried these practical activities to help students with vocabulary learning? There's something for all ages and levels. The teacher concentrates on conjunctions, time expressions, pronouns, etc. 
The Personal Math Trainer powered by Knewton feature, or one of the other digital features, offers unlimited practice, real-time feedback, and a variety of question types and learning aids to help teach Algebra, Geometry, and Algebra 2. While rigorous enough to be used as a college or high school text, the format is reader friendly, particularly in this Second Edition, and clear enough to be used for self-study in a non-classroom environment. Sine and Cosine Rule. Algebraic expressions can both represent verbal expressions and communicate the meaning of the verbal expression. Textbook assignments and certain diagrams, for example, reference Prentice Hall's Algebra 1 (California Edition). Printable ESL Lesson Plans and ESL Materials for TEFL/TESOL teachers. Furthermore, the worksheets contain a mixed exercise on simplifying. "Pre-test" material enables readers to target problem areas quickly and skip areas. 1.7 Solving Absolute Value Equations and Inequalities. This activity is part of the On the Cutting Edge Peer Reviewed Teaching Activities collection. To teach a foreign language effectively the teacher needs teaching aids and teaching materials. Sharing One Hundred Things: exponential notation is used in the place-value thinking required by Investigation 3 and is explicitly introduced in Problem 3. Furthermore, students must be able to share the results of their use of mathematics. The activities are similar to those your participants can use in teaching children, but are more complex and demanding. 
The first step of factorising an expression is to 'take out' any common factors which the terms have. Our best resources for at-home teaching and learning of algebra. Once students understand the basics of how to solve numerical expressions using their knowledge of the order of operations, they can move on to learning about. "I'm teaching Algebra 2 for the first time this year, and I'm so glad I joined the Algebra 2 Teacher Community! This has saved me so much time and relieved the stress of teaching something new! Thank you!!". Expanding algebraic expressions. Multiply often or multiply once: it is your choice. Write expressions that record operations with numbers and with letters standing for numbers. Here’s a great example of that tendency (from a past NY Regents exam). Below are just a few suggestions for activities to make vocabulary practice fun. This lesson is best implemented with students working in groups of 2-4. Such utterances will benefit from the teacher teaching the correct forms. On one slip ask them to list 2-3 things they like about math. It's in our blood. Algebraic Fractions. 1.4 Rewriting Equations and Formulas. Lesson Objective: Students will be able to write and read expressions in which letters stand for numbers. Key Stage 3 Interactive Worksheets to help your child understand Algebra: Expressions in Maths Year 7. Grouped by level of study. There are many different approaches you can use in. This text is written in such a way as to maintain maximum flexibility and usability. 
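Taking out a common factor can be illustrated on the numeric coefficients with Python's gcd. This sketch handles integer coefficients only and is an assumption-laden illustration, not taken from any of the materials above:

```python
from functools import reduce
from math import gcd

def factor_out_gcf(coefficients):
    """'Take out' the greatest common factor of a polynomial's integer
    coefficients, returning (gcf, remaining coefficients)."""
    common = reduce(gcd, coefficients)
    return common, [c // common for c in coefficients]

# 6x² + 9x = 3(2x² + 3x): the GCF of the coefficients 6 and 9 is 3.
print(factor_out_gcf([6, 9]))  # → (3, [2, 3])
```

For a shared variable factor, as in 2xz + 5xy = x(2z + 5y), the same "take out what every term shares" idea applies, just to letters instead of numbers.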
Here we give Algebraic Expressions and Inequalities study material notes (PDF) for those who are preparing for competitive examinations. Grammar worksheets. Previous: Operations with Algebraic Fractions. Essential Questions: How does the result change when the value of the variable is changed? What words or symbols indicate which operation? How can mathematical symbols model verbal. Worksheet 2.6: Factorizing Algebraic Expressions. KS3 Maths curriculum area (Algebra): simplify and manipulate algebraic expressions to maintain equivalence by collecting like terms, multiplying a single term over a bracket, taking out common factors, and expanding products of two. Active learning is based on constructivism, a learning theory that asserts that learners construct their own understanding of a topic by building upon their prior knowledge. Centre for Teaching Excellence, University of Waterloo. Read the STAR Sheet for the strategy listed above. While root analysis is taught explicitly, the ultimate goal is for readers to use this strategy independently. Writing Algebraic Expressions Activity Pack: 5 Low Prep Activities for Independent Practice! This product includes 5 low prep, engaging activities to practice writing algebraic expressions. Teaching Algebra can be challenging. From the NCETM website: "This multi-media resource has been developed with teachers. Coinage of words is the creation of non-existent words. Activities include vocabulary quizzes, crossword puzzles, wordsearch games, wordmatch quizzes, and listening and reading exercises. 
Directions: Choose the algebraic expression that correctly represents the phrase provided. These are quick, review-type activities that I use after I teach translating algebraic expressions. Activities in Communicative Language Teaching are focused on students in realistic situations. The activities were the outcome of a Teacher Development workshop conducted by Trinity College. Summarize the components of the strategy. Here you'll find a variety of worksheets on which students will practice evaluating algebraic expressions with variables. 1-7 Assignment - The Distributive Property (FREEBIE). Lessons are practical in nature, informal in tone, and contain many worked examples and warnings about problem areas and probable "trick" questions. The activities in this book help students develop such skills. Traditionally, the teaching method is defined as a method of interrelated and interdependent activities of the educator and trainees. To disclose the method more specifically, you need to consider it at the level of techniques: specific ways of organizing the activities of trainees. Practice prealgebra with our popular math games. $$4\cdot 5-3$$ And then we calculate the answer. To take this activity a step further, ask students to write down their questions and hand them in. 
com is the home to the highest quality math games, videos & worksheets online. Expressions Activities on patterns and algebra from the student area; Presentation on introducing algebra; Poster detailing connections between linear patterns and other syllabus topics; Teacher resource booklet from workshop 5; Student worksheet on representing algebraic expressions using arrays. Terms are the separate values in an expression. This is a comprehensive collection of free printable math worksheets for grade 7 and for pre-algebra, organized by topics such as expressions, integers, one-step equations, rational numbers, multi-step equations, inequalities, speed, time & distance, graphing, slope, ratios, proportions, percent, geometry, and pi. Fun Algebra Activities for the Classroom: whether you are a homeschooling parent or a teacher, fun algebra activities are a great way to engage kids and teach them important concepts. To succeed, they need a firm grounding in high school algebra terminology. Get your students engaged in simplifying and solving linear equations using Mangahigh’s maths game ‘Jabara’. Specifically, the study tested students’ ability to turn story problems into algebraic equations -- what’s called algebraic expression writing. for teaching the pragmatics of complaining. An algebraic expression combines both numbers and letters using the arithmetic operations of addition (+), subtraction (–), multiplication (·), and division (÷) to express a quantity. $$5^3=125$$ Where 5 is called the base and 3 is called the exponent. Alongside online courses for teachers, you'll also find a range of relevant degrees from leading universities. With a "learn-by-doing" approach, it reviews and teaches elementary and some intermediate algebra. We turn complicated text content into engaging, exciting, and accurate video content. 
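The base/exponent idea, and the earlier rewriting of a quotient raised to a power as $5^3/8^3$, can be checked directly in Python. An illustrative snippet only:

```python
base, exponent = 5, 3
power = base ** exponent  # 5 raised to the 3rd power
print(power)  # → 125

# A power distributes over a quotient: (5/8)³ equals 5³/8³.
assert (5 / 8) ** 3 == 5 ** 3 / 8 ** 3
```

Both sides of the final check are exact here (5/8 is exactly representable in binary), so the comparison is safe despite using floats.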
Write and Interpret Numerical Expressions. Handbooks provide teachers with useful tools and strategies for language teaching. Each unit of CMP 3 includes many activities that develop understanding and proficiency in work on such modeling tasks. Using expressions like "I would like to argue that…". The "Improving Learning in Mathematics" (or Standards Units) resources are, in my opinion, some of the finest ever produced. There are several tactile learning activities listed below which you can add to lessons for your tactile learners. Writing Basic Algebraic Expressions: for each operation, an example written numerically and an example with a variable, starting with addition. From kindergarten to elementary you’ll find K-5 resources, including phonics worksheets and numeracy games. Translate algebraic expressions into English phrases, and translate English phrases into algebraic expressions. Here is a video showcasing a fun way to teach algebra to Math students. 
From middle school through to high school we have everything from Spanish lessons to algebra activities, as well as Common Core-aligned lessons and revision guides for tests. Below is the list of audio lessons of the most common expressions in English. Share My Lesson is a destination for educators who dedicate their time and professional expertise to provide the best education for students everywhere. The need to teach in general, and to teach the English language effectively in particular, is the challenge. Therefore, the study of vocabulary has occupied the central place in teaching and learning activities. Creative closure activities: quick activities that can be used to check for understanding or emphasize key information at the end of a lesson. By teaching aids we mean various devices which can help the foreign language teacher in presenting linguistic material to his pupils and fixing it in their memory. Kindergarten Operations and Algebraic Thinking. At the bottom of this lesson there are Guided Notes, a Slide Show, and a Sets Worksheet to help you out with teaching Sets to your students. Teaching online platforms make it possible for teachers to teach conveniently and students to learn things almost in a classroom-like environment. Icebreakers are an important part of any training program, as they encourage people to participate from the start of a session and to get to know each other. GOAL: Add and subtract algebraic expressions. What resources do you use while teaching Advanced students? Teaching Methodologies Quiz. 
Question 5: Boolean algebra is a strange sort of math. They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. Read the General Guidelines for Teaching Algebra provided at the beginning of this case study. The teacher can also copy and paste words for revision to the online flashcard site Study Stack. Lesson activities include games, puzzles, and warm-ups, as well as activities to teach and practice each of the core skills of language learning: speaking. English Club offers listening and repeating activities for ESL students to practice English pronunciation. Rule I for the teacher: while teaching pupils vocabulary, use various accessories (objects, pictures, movements, gestures, facial expressions, etc.). See more ideas about Algebraic expressions, Teaching math, Middle school math. This page provides sample 5th Grade Number tasks and games from our 5th Grade Math Centers eBook. AMANDA HILLIARD. Each element and operator in a mathematical equation has its own name and there are standardized phrases that are used to describe the relationships between them. 1. The student, given rational, radical, or polynomial expressions, will a) add, subtract, multiply, divide, and simplify rational algebraic expressions; b) add, subtract, multiply, divide, and simplify radical expressions containing rational numbers and variables, and expressions containing rational exponents; c) write radical expressions as expressions containing rational exponents and. For example, the complete set of rules for Boolean addition is as follows: $$0+0=0$$ $$0+1=1$$ $$1+0=1$$ $$1+1=1$$ Suppose a student saw this for the very first time, and was quite puzzled by it. 
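A puzzled student can verify that the four Boolean addition rules are exactly logical OR. A minimal sketch (the function name here is made up for illustration):

```python
def boolean_add(a, b):
    """Boolean 'addition' is logical OR: the result is 1 whenever at
    least one input is 1, which is why 1 + 1 = 1 in Boolean algebra."""
    return a | b

# Enumerate the complete set of Boolean addition rules:
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {boolean_add(a, b)}")
```

Running the loop reproduces the four equations above: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 1.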
Think, pair and share. The teacher writes on the board an activity like "brush your teeth." Core Math Tools is a suite of interactive software tools for algebra and functions, geometry and trigonometry, and statistics and probability. The idea behind creating any document is to convey the message to the reader. Algebraic expression lesson plans and worksheets from thousands of teacher-reviewed resources. The 26 lessons in the Algebra 1, Module 4 collection teach students how to use polynomials. In this algebraic expression instructional activity, students translate a written expression to an algebraic equation. The following resources are teaching and learning activities that can be adapted and used in a range of classroom situations, with large and small groups of students. 
Grounded in the work of Freudenthal, materials developed cooperatively by mathematics. Algebra: Expressions (6th grade math). Most will respond, "x plus y." Links to teaching and learning guides, NCEA resources, PLCs, and other useful sites. This is a useful activity for introducing prime factorization by continuing the roots to their prime factors. In this post, I am describing activities where students practice using past modal verbs for speculation and deduction. No matter the obstacles. Read the sentences and determine how to write the algebraic expression or equations. Students must master these symbols so that they can correctly analyze the problems they will be doing. Students begin studying these skills through the use of manipulatives, or physical tools that represent objects, as early as pre-school, and continue building their skills, adding and subtracting ever larger numbers through elementary school. Study these extracurricular activities examples and samples that you can learn from when writing your activities list for the common application. Those that don't require learners to produce language in response are easier than those that do. 
"Teaching materials" is a generic term used to describe the resources teachers use to deliver instruction. The Distributive Property Activity – Cupcakes and Algebra Solving Equations Christmas Coloring Worksheets Maze Solving Equations Activities Translating Algebraic Expressions Cinco De Mayo – Theoretical and Experimental Probability. Algebra 1, Algebra 2 and Precalculus Algebra. The printable translating phrases worksheets in this page provide prolific practice to 6th grade, 7th grade, and 8th grade students on expressing the phrases as algebraic expressions like linear expressions, single & multiple variable expressions, equations and inequalities. Read the writing of other ESL teachers - or send something in yourself. Coolmath Algebra has hundreds of really easy to follow lessons and examples. Teaching ESL to children is challenging but also very rewarding. What are algebraic expressions ? What are different types of algebraic expressions, what is the value and degree of an algebraic expression? Let us learn the. Through explorations,. Teachers have access to simulation-specific tips and video primers, resources for teaching with simulations, and activities shared by our teacher community. Try our Pre-Algebra lessons below, or browse other units of instruction. Evaluate each expression if x 2, y 7, and z 4. Using the app, you can quickly remake and assign your favorite classroom papers as online, digital activities for your students. The letter o is usually not used because it can be mistaken for 0 (zero). To take this activity a step further, ask students to write down their questions and hand them in. We have learned that, in in an algebraic expression, letters can stand for numbers. There's no question that this is an This activity is a great way to start your order of operations lesson because it creates a feeling of Introduce the step-by-step method for evaluating algebraic expressions by explaining the. 
Lesson activities include games, puzzles, and warm-ups, as well as activities to teach and practice each of the core skills of language learning: speaking English Club offers listening and repeating activities for ESL students to practice English pronunciation. Characters in different situations - Create a situation. At the end, there shouldn't be any more adding, subtracting, multiplying, or dividing left to do. Under Challenges, teachers can search the games and Prodigi quizzes that target specific Common Core standards and assign them to students. The English language is exceedingly complex, with numerous nuances that must be learned. Resources for teachers. Terms are the separate values in an expression. This Algebraic Expressions Worksheet will produce a great handout to help students learn the symbols for different words and phrases in word problems. Include expressions that arise from formulas used in real-world problems. While root analysis is taught explicitly, the ultimate goal is for readers to use this strategy independently. $$20-3=17$$ An expression that represents repeated multiplication of the same factor is called a power e. Learn how to evaluate expressions with variables. All good reasons to make sure that your vocabulary teaching is interesting, useful and effective, don't you think? See below for some fun activities to make the lessons engaging for students of all levels. Use this illustrated algebra worksheet with your KS3 or KS4 Maths class to help them learn to use and write algebraic expressions. Enduring Understanding (Big Ideas): Algebraic expressions can represent words. The author, Samuel Chukwuemeka aka Samdom For Peace gives credit to Our Lord, Jesus Christ. Quizzes, tests, exercises and puzzles to help you learn English as a Second Language (ESL) This project of The Internet TESL Journal (iteslj. They are rich, challenging, well thought-out and well resourced. 
The letters e and i have special values in algebra and are usually not used as variables. Teachers can differentiate in a number of ways: how students access content, the types of activities students do to master a concept, what the end Based on student investigation and hands-on projects, inquiry-based learning is a teaching method that casts a teacher as a supportive figure who provides. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24620716273784637, "perplexity": 2216.555586594963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524270.28/warc/CC-MAIN-20210121070324-20210121100324-00738.warc.gz"} |
https://www.mbatious.com/topic/468/minimum-weighing-requirements | # Minimum Weighing Requirements
• Pan balance and spring balance: finding the minimum number of weights.

To measure every integer weight from 1 to 100 kg, what is the minimum number of weights required so that all of them can be measured?
First, understand that this is a pan balance: if we want to measure 40 kg, we have two ways.
1. Put a 40 kg weight in a single pan.
2. Put weights w1 and w2 on the two pans, where w1 - w2 = 40.
Now we need to measure all weights between 1 and 100, so we start from 1: to measure 1 kg we need a 1 kg weight.
Next we want to measure 2 kg. We could use a 2 kg weight, but we want to use as few weights as possible, so we use a 3 kg weight instead,
since 3 - 1 = 2. So after 1 kg we need a 3 kg weight.
Now to measure 4 we have 1 + 3 = 4, so no extra weight is needed. Next is 5, and again we try to get it using differences:
9 - 4 = 5, so we use a 9 kg weight instead. Now we have 3 weights:
1, 3, 9 kg. Now check the pattern: we are getting powers of 3. The weights needed are 1, 3, 9, 27, 81, 243, 729, ... and so on.
Here we want to measure up to 100 kg,
and we have 1 + 3 + 9 + 27 + 81 = 121 >= 100, so we need only 5 weights.
DIRECT WAY
Find the smallest k with (3^k - 1)/2 >= 100; the sum 1 + 3 + ... + 3^(k-1) = (3^k - 1)/2 is the total reach of k weights. Since (3^4 - 1)/2 = 40 < 100 and (3^5 - 1)/2 = 121 >= 100, k = 5 will be our answer.
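The claim that the five weights 1, 3, 9, 27, 81 suffice for every integer from 1 to 100 on a pan balance is easy to verify by brute force. The sketch below (my own illustration, not from the original post) tries every assignment of each weight to the object's pan, the opposite pan, or neither:

```python
from itertools import product

weights = [1, 3, 9, 27, 81]

def pan_balance_measurable(target, weights):
    """True if `target` equals some signed sum of the weights.

    Coefficient +1: weight on the pan opposite the object;
    -1: weight on the object's pan; 0: weight unused.
    """
    return any(
        sum(c * w for c, w in zip(coeffs, weights)) == target
        for coeffs in product((-1, 0, 1), repeat=len(weights))
    )

# Every integer from 1 to 100 kg is reachable with five ternary weights.
assert all(pan_balance_measurable(n, weights) for n in range(1, 101))
# Four weights (total reach (3^4 - 1)/2 = 40) already fail at 41 kg.
assert not pan_balance_measurable(41, [1, 3, 9, 27])
```

The `(-1, 0, 1)` coefficients are exactly the balanced-ternary digits, which is why powers of 3 appear.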
Now, if it is a spring balance, we can only use sums, because there are no two pans there. In that case,
we need a 1 kg weight, then a 2 kg weight, so 3 can be measured with the help of 1 + 2;
then we need a 4 kg weight: for 5 we have 4 + 1, for 6 = 4 + 2, and for 7 = 4 + 2 + 1;
so we next need an 8 kg weight.
Now check the pattern: 1, 2, 4, 8, 16, 32, 64, ...
So we use powers of 2 in the case of a spring balance,
and the smallest k with 2^k - 1 >= 100 is k = 7 (2^6 - 1 = 63 < 100, while 2^7 - 1 = 127 >= 100), so we need 7 weights.
So if we are using a pan balance we work with powers of 3, and if we are using a spring balance we work with powers of 2.
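The spring-balance count can be checked the same way: with no second pan there is no subtraction, so only plain subset sums are available. This check (again my own, not from the original post) confirms that seven binary weights cover 1 to 100 while six do not:

```python
from itertools import product

def spring_measurable(target, weights):
    """True if `target` is a plain subset sum (no second pan, so no subtraction)."""
    return any(
        sum(c * w for c, w in zip(coeffs, weights)) == target
        for coeffs in product((0, 1), repeat=len(weights))
    )

seven = [2**i for i in range(7)]   # 1, 2, 4, 8, 16, 32, 64
# Seven binary weights reach every value from 1 to 100 (in fact up to 127).
assert all(spring_measurable(n, seven) for n in range(1, 101))
# Six weights sum to 63, so 64 kg is out of reach.
assert not spring_measurable(64, [2**i for i in range(6)])
```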
What is the minimum number of weighing operations required to measure 63 kg of wheat, if only one weight of 1 kg is available?
Direct method: only one 1 kg weight is available, so there is no pan-balance trick here; this behaves like the spring-balance case,
and 2^n - 1 >= 63 gives n = 6 (after n weighings the total measured is at most 2^n - 1 kg).
Method 2:
1st weighing: 1 kg (using the 1 kg weight); total measured so far = 1 kg.
2nd weighing: 2 kg (the 1 kg weight plus 1 kg of measured wheat); total = 3 kg.
3rd weighing: 4 kg (the weight plus 3 kg of wheat); total = 7 kg.
4th weighing: 8 kg (the weight plus 7 kg of wheat); total = 15 kg.
5th weighing: 16 kg (the weight plus 15 kg of wheat); total = 31 kg.
6th weighing: 32 kg (the weight plus 31 kg of wheat); total = 63 kg.
So 6 weighings.
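Method 2 is just repeated doubling: each weighing uses the 1 kg weight plus all wheat measured so far, so the running totals follow 1, 3, 7, 15, 31, 63. A tiny loop (an illustration of mine, not part of the original post) reproduces the count:

```python
def weighings_to_measure(target_kg, weight_kg=1):
    """Count weighings when already-measured wheat is reused as a counterweight."""
    total, count = 0, 0
    while total < target_kg:
        measured = weight_kg + total   # the weight plus all wheat measured so far
        total += measured              # running totals: 1, 3, 7, 15, 31, 63, ...
        count += 1
    return count

assert weighings_to_measure(63) == 6   # matches 2^6 - 1 = 63
```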
Given 5 coins, out of which one coin is lighter: how many weighings, at minimum, are required to figure out the odd coin?
We need 3^k >= N;
the minimum such value of k gives the minimum number of weighings.
Here N = 5, so 3^k >= 5 gives minimum k = 2, so we need a minimum of 2 weighings;
or if N = 12, then 3^k >= 12 gives minimum k = 3, so we need a minimum of 3.
In a set of 400 balls, all balls except one, which is lighter than the rest, are of equal weight. What is the minimum number of weighings required to identify the lighter ball using a two-pan balance?
Direct way
3^n >= 400, so n = 6.
Divide the balls into 3 (nearly) equal groups, weigh two of the groups against each other, and keep the group that must contain the lighter ball.
With at most 6 such weighings you will know which ball is the lighter one.
For 3 balls, just 1 weighing is required (keep 1 in each pan and set the 3rd aside).
For 4-9 balls, 2 weighings are required.
For 10-27 balls, 3 weighings are required.
For 28-81 balls, 4 weighings are required, and so on.
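The bracketed counts above are just the smallest k with 3^k >= N: each weighing has three outcomes (left heavy, right heavy, balanced), so k weighings distinguish at most 3^k candidates. A small helper (my own, using integer arithmetic to avoid floating-point logarithms) computes it:

```python
def min_weighings(n_balls):
    """Smallest k with 3**k >= n_balls: each weighing has 3 outcomes,
    so k weighings can single out one ball among at most 3**k."""
    k, reach = 0, 1
    while reach < n_balls:
        reach *= 3
        k += 1
    return k

assert min_weighings(3) == 1
assert min_weighings(9) == 2
assert min_weighings(27) == 3
assert min_weighings(28) == 4
assert min_weighings(400) == 6
```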
http://jmre.dlut.edu.cn/cn/ch/reader/view_abstract.aspx?flag=1&file_no=20120103&journal_id=cn | C. SELVARAJ, L. MADHUCHELVI. Characterization of $2$-Primal Near-Rings[J]. Journal of Mathematical Research with Applications, 2012, 32(1): 19-25
Characterization of $2$-Primal Near-Rings
DOI:10.3770/j.issn:2095-2651.2012.01.003
Authors and affiliations: C. SELVARAJ, Department of Mathematics, Periyar University, Salem-636011, Tamilnadu, India; L. MADHUCHELVI, Department of Mathematics, Sri Sarada College, Salem-636016, Tamilnadu, India.
In 1999, Kim and Kwak asked the following question: "Is a ring $R$ $2$-primal if $O_{P}\subseteq P$ for each $P\in m{\rm Spec}(R)$?". In this paper, we prove that if $O_{P}$ has the IFP for each $P \in m{\rm Spec}(N)$, then $O_{P} \subseteq P$ for each $P \in m{\rm Spec}(N)$ if and only if $N$ is a $2$-primal near-ring, and we also give a characterization of $2$-primal near-rings by using their minimal $0$-prime ideals.
http://stats.stackexchange.com/questions/46711/confusion-related-to-higher-order-markov-chain | # confusion related to higher order markov chain
I was reading this book related to machine learning. It states that for an $M$th-order Markov chain, the number of parameters is $K^{M-1}(K-1)$, where $M$ is the order and $K$ is the number of states. I am not sure how this is derived.
For example, let's say I have three states {Sunny, Cloudy, Rainy}, so at each time step I look at the previous two states. If I use the above formula, I get $3^{2-1}\cdot(3-1) = 6$, which seems too low, I guess. I should have the following parameters:
P(Sunny|Cloudy,Rainy)
P(Sunny|Cloudy,Sunny)
P(Sunny|Cloudy,Cloudy)
P(Sunny|Rainy,Rainy)
P(Sunny|Rainy,Sunny)
P(Sunny|Rainy,Cloudy)
P(Sunny|Sunny,Rainy)
P(Sunny|Sunny,Sunny)
P(Sunny|Sunny,Cloudy)
and so on. That is a lot of parameters; isn't it higher than 6? I am a bit confused. It should be $K^{M}(K-1)$, I guess.
I think you're right about the counting. For a Markov model of order $1$ with $K$ states, you need $K\times K$ parameters, minus $K$ parameters, since the conditional probabilities must sum to one for each of the $K$ states you condition on; hence, $K\times K - K = K(K-1)$.
For order $M$, there are $K^M$ conditioning histories, so it generalizes to $K^{M+1} - K^{M} = K^M (K-1).$
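The count can be checked by explicitly enumerating the conditional distributions: for $K$ states and order $M$ there are $K^M$ possible histories, each carrying $K-1$ free probabilities (the $K$th is fixed by normalization). This small check (mine, not from the thread) uses the weather example from the question:

```python
from itertools import product

def free_parameters(K, M):
    """Number of free parameters in an order-M Markov chain on K states."""
    histories = list(product(range(K), repeat=M))   # K**M conditioning histories
    return len(histories) * (K - 1)                 # K-1 free probs per history

# Order-2 chain on {Sunny, Cloudy, Rainy}: 9 histories x 2 = 18 parameters,
# not the 6 suggested by K**(M-1) * (K-1).
assert free_parameters(3, 2) == 3**2 * (3 - 1) == 18
assert free_parameters(3, 1) == 3 * 2   # ordinary Markov chain: K(K-1)
```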
https://matheuscmss.wordpress.com/2011/10/21/typical-smooth-cocycles-over-a-hyperbolic-basis-have-non-zero-lyapunov-exponents-i/ | Posted by: matheuscmss | October 21, 2011
## Typical smooth cocycles over a hyperbolic basis have non-zero Lyapunov exponents I
Last spring (more precisely, April-May 2011) I participated in a “Groupe de Travail” organized by Sylvain Crovisier and Jerome Buzzi around the theme “Cocycles over hyperbolic dynamics”. As you can see in this webpage here, after a preparatory talk by F. Ledrappier (on his theorem on the vanishing of exponents and determinism of the measures on projective spaces which are invariant under the action of random sequences of matrices), I gave two expository talks (one on April 29 and another on May 20) about Marcelo Viana’s article “Almost all cocycles over any hyperbolic system have nonvanishing Lyapunov exponents”.
My plan is to make the notes I prepared for these expositions available here: today’s post is a slightly expanded version of my notes for the first expository talk, and a future post will correspond to my notes for the second (and final) exposition.
Linear Cocycles
Let ${\pi:\mathcal{E}\rightarrow M}$ be a vector bundle whose fibers are isomorphic to ${\mathbb{K}^d}$ where ${d\geq 1}$ and ${\mathbb{K}=\mathbb{R} \textrm{ or } \mathbb{C}}$. We say that a vector bundle automorphism ${F:\mathcal{E}\rightarrow\mathcal{E}}$ is a linear cocycle over a transformation ${f:M\rightarrow M}$ if ${\pi\circ F=f\circ\pi}$.
Example 1 Given ${f:M\rightarrow M}$ and a matrix-valued function ${A:M\rightarrow GL(d,\mathbb{K})}$, we can form a linear cocycle ${F}$ by considering the (trivial) vector bundle ${M\times\mathbb{K}^d}$ and defining ${F(x,v)=(f(x),A(x)v)}$.
Example 2 Given a diffeomorphism of a manifold ${f:M\rightarrow M}$, its derivative ${Df:TM\rightarrow TM}$ is a linear cocycle over ${f}$. We call ${Df}$ the derivative cocycle.
Let ${F:\mathcal{E}\rightarrow\mathcal{E}}$ be a measurable linear cocycle over an invertible map ${f:M\rightarrow M}$ preserving a probability measure ${\mu}$. Suppose that ${\mathcal{E}}$ comes equipped with a family ${\|.\|}$ of norms ${\|.\|_x}$ on its fibers ${\mathcal{E}_x}$, ${x\in M}$, such that ${\log\|F_x^{\pm1}\|}$ is ${\mu}$-integrable. Here, ${F_x}$ is the linear map ${F_x:\mathcal{E}_x\rightarrow\mathcal{E}_{f(x)}}$ induced by ${F}$. In this context, Oseledets theorem implies that, for ${\mu}$-almost every ${x\in M}$, we have a splitting
$\displaystyle \mathcal{E}_x=E^1_x\oplus\dots\oplus E^k_x, \quad k=k(x)$
and a collection of real numbers ${\lambda_1(F,x)>\dots>\lambda_k(F,x)}$ such that
$\displaystyle \lim\limits_{n\rightarrow\pm\infty}\frac{1}{n}\log\|F_x^n(v_i)\|=\lambda_i(F,x)$
for every ${v_i\in E_x^i-\{0\}}$. Moreover, the Lyapunov exponents ${\lambda_i(F,x)}$ and the Oseledets subspaces ${E_x^i}$ depend measurably on ${x}$.
Remark 1 Lyapunov exponents ${\lambda_i(F,x)}$ are constant along ${f}$-orbits. Therefore, if ${\mu}$ is ergodic, then the Lyapunov exponents are constant (${\mu}$-almost everywhere). In this case, we denote by ${\lambda_i(F,\mu)}$ the value of these constants.
In the sequel, we will be interested in the positivity of the largest Lyapunov exponent
$\displaystyle \lambda^+(F,x):=\lim\limits_{n\rightarrow+\infty}\frac{1}{n}\log\|F_x^n\|=\lambda_1(F,x)$
under appropriate smoothness conditions on the linear cocycle ${F}$. In particular, we will need the following definitions:
Definition 1 Given ${r\in\mathbb{N}}$ and ${0\leq\nu\leq 1}$, the set ${\mathcal{G}^{r,\nu}(f,\mathcal{E})}$ denotes the set of ${C^r}$ linear cocycles ${F}$ over ${f}$ whose ${r}$th derivative is ${\nu}$-Hölder continuous. Here, whenever ${r\geq 1}$, we assume that the basis ${M}$ and the vector bundle ${\pi:\mathcal{E}\rightarrow M}$ have ${C^r}$-structures. Moreover, given a Riemannian metric ${\langle,\rangle}$ on ${\mathcal{E}}$, we denote by ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$ the subset of ${\mathcal{G}^{r,\nu}(f,\mathcal{E})}$ consisting of linear cocycles ${F}$ verifying ${\det F_x=1}$ for all ${x\in M}$. Below, we will equip ${\mathcal{G}^{r,\nu}(f,\mathcal{E})}$ and ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$ with its natural ${C^{r+\nu}}$-topology.
Setting
In his article, Marcelo considers linear cocycles over two classes of hyperbolic systems: uniformly hyperbolic homeomorphisms and non-uniformly hyperbolic diffeomorphisms. Below we explain the main features of these systems.
Given a continuous map ${f:M\rightarrow M}$ of a compact metric space and a point ${x\in M}$, the stable set of ${x}$ is
$\displaystyle W^s(x)=\{y\in M: \textrm{dist}(f^n(y),f^n(x))\rightarrow 0 \textrm{ as } n\rightarrow+\infty\}$
and the stable set of size ${\varepsilon>0}$ is
$\displaystyle W^s_{\varepsilon}(x)=\{y\in M: \textrm{dist}(f^n(y),f^n(x))\leq\varepsilon, \,\,\forall\, n\geq 0\}.$
If in addition ${f}$ is invertible, one can define unstable sets and unstable sets of size ${\varepsilon>0}$ by replacing ${f^n}$ by ${f^{-n}}$ in the previous definitions.
The first class of systems is:
Definition 2 A homeomorphism ${f:M\rightarrow M}$ is uniformly hyperbolic if there are ${K,\tau,\varepsilon,\delta>0}$ such that, for every ${x\in M}$,
• ${\textrm{dist}(f^n(y_1),f^n(y_2))\leq K e^{-\tau n}\textrm{dist}(y_1,y_2)}$ for all ${y_1,y_2\in W^s_{\varepsilon}(x)}$ and ${n\geq 0}$;
• ${\textrm{dist}(f^{-n}(z_1),f^{-n}(z_2))\leq K e^{-\tau n}\textrm{dist}(z_1,z_2)}$ for all ${z_1,z_2\in W^u_{\varepsilon}(x)}$ and ${n\geq 0}$;
• if ${\textrm{dist}(x_1,x_2)\leq\delta}$, then ${\#(W^u_{\varepsilon}(x_1)\cap W^s_{\varepsilon}(x_2))=1}$; denoting by ${[x_1,x_2]}$ the unique point in ${W^u_{\varepsilon}(x_1)\cap W^s_{\varepsilon}(x_2)}$, we also require that ${[x_1,x_2]}$ depends continuously on ${x_1}$ and ${x_2}$.
The second class of systems is:
Definition 3 Let ${f:M\rightarrow M}$ be a ${C^{1+\alpha}}$-diffeomorphism (${\alpha>0}$) of a compact manifold ${M}$ and ${\mu}$ be a ${f}$-invariant non-atomic probability. We say that ${(f,\mu)}$ is (non-uniformly) hyperbolic if the Lyapunov exponents ${\lambda_i(f,x)=\lambda_i(Df,x)}$ of the derivative cocycle ${Df}$ are nonzero at ${\mu}$-almost every ${x\in M}$.
Given ${(f,\mu)}$ non-uniformly hyperbolic and ${x\in M}$ such that the Lyapunov exponents ${\lambda_i(Df,x)}$ and the Oseledets subspaces ${E_x^i}$ are well-defined, we denote by ${E^s_x}$, resp. ${E^u_x}$, the sum of all Oseledets subspaces associated to negative, resp. positive, Lyapunov exponents. Starting from seminal works of Pesin, we now have a whole literature (sometimes called Pesin theory) dedicated to the nice properties of non-uniformly hyperbolic systems. For our purposes, we will need the following properties ensured by the so-called Pesin stable manifold theorem: for ${\mu}$-almost every ${x\in M}$, there are ${C^1}$-disks ${W_{loc}^s(x)}$ and ${W_{loc}^u(x)}$ passing through ${x}$ (a.k.a. Pesin local stable and unstable manifolds of ${x}$) such that
• at ${x}$, we have that ${W_{loc}^s(x)}$ is tangent to ${E_x^s}$ and ${W_{loc}^u(x)}$ is tangent to ${E_x^u}$;
• for every ${\tau_x<\min |\lambda_i(x,Df)|}$, there exists ${K_x>0}$ such that, for any ${n\in\mathbb{N}}$,
• (a) ${\textrm{dist}(f^n(y_1),f^n(y_2))\leq K_x e^{-n\tau_x}\textrm{dist}(y_1,y_2)}$ for any ${y_1,y_2\in W_{loc}^s(x)}$;
• (b) ${\textrm{dist}(f^{-n}(z_1), f^{-n}(z_2))\leq K_xe^{-n\tau_x}\textrm{dist}(z_1,z_2)}$ for any ${z_1,z_2\in W_{loc}^u(x)}$;
• ${f(W_{loc}^s(x))\subset W_{loc}^s(x)}$ and ${f(W^u_{loc}(x))\supset W_{loc}^u(x)}$;
• ${W^s(x)=\bigcup\limits_{n=0}^{\infty}f^{-n}(W_{loc}^s(x))}$ and ${W^u(x)=\bigcup\limits_{n=0}^{\infty}f^n(W_{loc}^u(x))}$.
Moreover, the constants ${\tau_x}$, ${K_x}$ and the sizes of the disks ${W_{loc}^s(x)}$ and ${W_{loc}^u(x)}$ can be chosen to depend measurably on ${x}$.
In a nutshell, Pesin stable manifold theorem says that the measurable plane fields ${E^s_x}$ and ${E^u_x}$ can be locally integrated into local disks ${W^s_{loc}(x)}$ and ${W^u_{loc}(x)}$ whose sizes depend measurably on ${x}$. Also, the distances of iterates of points in such local disks are exponentially contracted (in the future or in the past) by an exponential rate essentially equal to the Lyapunov exponent of the center ${x}$ of these disks. Finally, any point whose (future or past) iterates converge to the (future or past) iterates of ${x}$ (i.e., any point in the stable ${W^s(x)}$ or unstable ${W^u(x)}$ sets of ${x}$) must approach it in an exponential way (this is expressed by the fact that, after iterating an adequate number of times, the orbit enters the local disks ${W^s_{loc}(x)}$ or ${W^u_{loc}(x)}$). For a proof of Pesin stable manifold theorem, we recommend Pesin’s original article or the article of A. Fathi, M. Herman and J.-C. Yoccoz.
The fact that the objects depend measurably on the points allows us to define the so-called hyperbolic blocks. Roughly speaking, these are “large” compact sets where the objects appearing in Pesin stable manifold theorem depend continuously on the point. More precisely, from the measurable dependence of objects and Luzin theorem, for every ${K, \tau>0}$, we can select a compact set ${\mathcal{H}(K,\tau)}$ such that
• ${\tau_x\geq\tau}$ and ${K_x\leq K}$ for any ${x\in\mathcal{H}(K,\tau)}$;
• the disks ${W_{loc}^s(x)}$ and ${W_{loc}^u(x)}$ depend continuously on ${x\in\mathcal{H}(K,\tau)}$;
• ${\mu(\mathcal{H}(K,\tau))\rightarrow 1}$ when ${\tau\rightarrow 0}$ and ${K\rightarrow\infty}$.
In particular, the sizes of ${W_{loc}^s(x)}$ and ${W_{loc}^u(x)}$ and their angles ${\angle(E_x^s,E_x^u)}$ are uniformly bounded away from zero for ${x\in\mathcal{H}(K,\tau)}$.
Finally, the class of invariant measures ${\mu}$ considered by Marcelo are the measures with a local product structure. In a few words, these are measures whose relationship with the Pesin stable and unstable manifolds is nice. More concretely, given a hyperbolic block ${\mathcal{H}(K,\tau)}$, we take a small constant ${\delta>0}$ such that, for any two points ${x,y\in\mathcal{H}(K,\tau)}$ with ${\textrm{dist}(x,y)\leq\delta}$, we have ${\#(W_{loc}^s(x)\cap W_{loc}^u(y))=1}$. In the sequel, we denote by ${[x,y]}$ the unique point in ${W^s_{loc}(x)\cap W_{loc}^u(y)}$, and we observe that the corresponding map ${(x,y)\mapsto [x,y]}$ is continuous. Given ${x\in\mathcal{H}(K,\tau)}$, we define
$\displaystyle \mathcal{N}_x^s(\delta):=\mathcal{N}_x^s(K,\tau,\delta) = \{z\in W^s_{loc}(x)\cap W^u_{loc}(y): \textrm{dist}(y,x)\leq\delta\}\subset W_{loc}^s(x),$
$\displaystyle \mathcal{N}_x^u(\delta):=\mathcal{N}_x^u(K,\tau,\delta) = \{z\in W^u_{loc}(x)\cap W^s_{loc}(y): \textrm{dist}(y,x)\leq\delta\}\subset W_{loc}^u(x)$
and
$\displaystyle \mathcal{N}_x(\delta)=[\mathcal{N}_x^s(\delta),\mathcal{N}_x^u(\delta)],$
i.e., ${\mathcal{N}_x(\delta)}$ is the image of ${\mathcal{N}_x^s(\delta)\times\mathcal{N}_x^u(\delta)}$ under the map ${[.,.]}$. Pictorially, ${\mathcal{N}_x(\delta)}$ is a small “rectangle” around ${x}$ whose sides are the stable piece ${\mathcal{N}_x^s(\delta)}$ and the unstable piece ${\mathcal{N}_x^u(\delta)}$.
In this context, we have the following definition:
Definition 4 We say that ${\mu}$ has local product structure if, for every ${\mu}$-generic ${x}$ and ${\delta>0}$ as above, we have that the restriction of ${\mu}$ to ${\mathcal{N}_x(\delta)}$ is equivalent to the product measure ${\nu^u\times\nu^s}$, where ${\nu^u}$ is the projection of ${\mu}$ to ${\mathcal{N}_x^u(\delta)}$ and ${\nu^s}$ is the projection of ${\mu}$ to ${\mathcal{N}_x^s(\delta)}$.
Remark 2 This definition makes sense in both cases of non-uniformly hyperbolic diffeomorphisms and uniformly hyperbolic homeomorphisms. In the case of volume-preserving non-uniformly hyperbolic diffeomorphisms ${(f,Leb)}$, the Lebesgue measure ${Leb}$ has local product structure because, as it was shown by Y. Pesin, the stable and unstable manifolds form absolutely continuous laminations. More generally, every hyperbolic measure ${\mu}$ with absolutely continuous disintegration along the stable and unstable laminations has local product structure (as it was shown by C. Pugh and M. Shub). Finally, any equilibrium measure ${\mu}$ associated to the restriction ${f|_{\Lambda}}$ of an Axiom A ${C^{1+\alpha}}$ diffeomorphism ${f}$ to a basic set ${\Lambda}$ and a Hölder continuous potential ${\phi:M\rightarrow\mathbb{R}}$ (i.e., a probability measure ${\mu}$ verifying the variational principle ${h_{\mu}(f)+\int\phi d\mu = \sup\limits_{\nu \, f-\textrm{invariant probability}}h_{\nu}(f)+\int\phi d\nu}$) has local product structure (see e.g. R. Bowen’s book).
Statement of results
In the case of ${(f,\mu)}$ non-uniformly hyperbolic, Marcelo showed the following results:
Theorem 5 For all ${r\in\mathbb{N}}$ and ${0\leq\nu\leq 1}$ with ${r+\nu>0}$ and ${\mu}$ ergodic hyperbolic measure with local product structure, the set of cocycles ${F\in\mathcal{S}^{r,\nu}(f,\mathcal{E})}$ whose top Lyapunov exponent is positive, i.e.,
$\displaystyle \lambda^+(F,x):=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\log\|A^n(x)\|>0 \quad \textrm{for}\,\mu-\textrm{almost every } x\in M$
is open and dense in ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$.
Remark 3 Even though I didn’t check all details, I think that the previous statement could be generalized to “non-uniformly hyperbolic homeomorphisms” ${(f,\mu)}$ in the sense that, besides the following Pesin theory like properties
• for ${\mu}$-a.e. ${x}$, there are constants ${K_x<\infty}$, ${\tau_x>0}$ and “local” stable and unstable disks ${W^s_{loc}(x)}$, ${W^u_{loc}(x)}$ such that, for any ${n\in\mathbb{N}}$,
• (a) ${\textrm{dist}(f^n(y_1),f^n(y_2))\leq K_x e^{-n\tau_x}\textrm{dist}(y_1,y_2)}$ for any ${y_1,y_2\in W_{loc}^s(x)}$;
• (b) ${\textrm{dist}(f^{-n}(z_1), f^{-n}(z_2))\leq K_xe^{-n\tau_x}\textrm{dist}(z_1,z_2)}$ for any ${z_1,z_2\in W_{loc}^u(x)}$;
• ${K_x}$, ${\tau_x}$ and the sizes of the local disks ${W^s_{loc}(x)}$, ${W^u_{loc}(x)}$ depend measurably on ${x}$;
• ${f(W_{loc}^s(x))\subset W_{loc}^s(x)}$ and ${f(W^u_{loc}(x))\supset W_{loc}^u(x)}$;
• ${W^s(x)=\bigcup\limits_{n=0}^{\infty}f^{-n}(W_{loc}^s(x))}$ and ${W^u(x)=\bigcup\limits_{n=0}^{\infty}f^n(W_{loc}^u(x))}$;
• the local stable and unstable disks are topologically transverse in the sense that, for any hyperbolic block ${\mathcal{H}(K,\tau)}$ (where ${K_x\leq K}$ and ${\tau_x\geq\tau}$), one can find ${\delta>0}$ such that ${\#W^s_{loc}(x)\cap W^u_{loc}(y)=1}$ whenever ${x,y\in\mathcal{H}(K,\tau)}$ and ${\textrm{dist}(x,y)\leq\delta}$;
• ${\mu}$ has local product structure in the sense that ${\mu|_{\mathcal{N}_x(\delta)}}$ is equivalent to the product measure ${\nu^s\times\nu^u}$ where ${\nu^{s/u}}$ is the projection of ${\mu}$ to ${\mathcal{N}_x^{s/u}(\delta)}$, and ${\mathcal{N}_x(\delta)}$, ${\mathcal{N}_x^{s/u}(\delta)}$ are defined in the same way as above,
one also impose the following “Katok’s shadowing lemma” like property:
• there is a countable family ${\mathcal{K}_m}$ of “hyperbolic blocks” (say ${\mathcal{K}_m\subset\mathcal{H}(K_m,\tau_m)}$) with ${\mu(\mathcal{K}_m)\rightarrow 1}$ as ${m\rightarrow\infty}$ such that, for each ${j\in\mathbb{N}}$ and ${\gamma>0}$, there are constants ${K,\tau,\rho,\varepsilon}$ such that for every ${z\in\mathcal{K}_j}$ and ${\kappa\geq 1}$ with ${f^{\kappa}(z)\in\mathcal{K}_j}$ and ${\textrm{dist}(f^{\kappa}(z),z)<\varepsilon}$, there is a periodic point ${p\in M}$ of period ${\kappa}$ satisfying:
• (a) ${p\in\mathcal{H}(K,\tau)}$
• (b) ${W^s_{loc}(p)}$ and ${W^u_{loc}(p)}$ have size ${\rho}$ (at least) and they are topologically transverse to the local stable and unstable disks of any ${w\in\mathcal{K}_j}$ in a ${\rho}$-neighborhood of ${p}$;
• (c) ${\textrm{dist}(f^{i}(p),f^{i}(z))<\gamma}$ for every ${0\leq i\leq \kappa}$.
However, this notion of “non-uniformly hyperbolic homeomorphisms” is not very useful as the main source of examples of such systems are precisely non-uniformly hyperbolic diffeomorphisms.
Essentially by combining Theorem 5 with the ergodic decomposition theorem, one is able to derive the following corollary:
Corollary 6 For every ${r\in\mathbb{N}}$, ${0\leq \nu\leq 1}$ with ${r+\nu>0}$, and ${\mu}$ hyperbolic measure (not necessarily ergodic) with local product structure, the set ${\mathcal{A}}$ of cocycles ${F}$ with ${\lambda^+(F,x)>0}$ for ${\mu}$-a.e. ${x}$ is a Baire residual subset of ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$.
On the other hand, in the case of uniformly hyperbolic homeomorphisms, Marcelo is able to recover the full conclusion of Theorem 5 even for non-ergodic measures:
Corollary 7 For every ${r\in\mathbb{N}}$, ${0\leq \nu\leq 1}$ with ${r+\nu>0}$, ${f}$ uniformly hyperbolic homeomorphism, and ${\mu}$ probability measure with local product structure, the set ${\mathcal{A}}$ of cocycles ${F}$ with ${\lambda^+(F,x)>0}$ for ${\mu}$-a.e. ${x}$ is open, dense and its codimension is ${\infty}$ in ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$.
Remark 4 In Corollary 7, if one assumes that the cocycle ${F}$ is dominated (roughly speaking this means that the dynamics on the fibers has expansion/contraction rates situated “between” the expansion/contraction rates of the base dynamics ${f}$), then the set ${\mathcal{A}}$ may be chosen independently of measure ${\mu}$ (as it was shown by C. Bonatti, X. Gomez-Mont and M. Viana), and the Lyapunov spectrum is simple, i.e., the multiplicity of all Lyapunov exponents of ${F\in\mathcal{A}}$ is 1 (as it was shown by C. Bonatti and M. Viana).
Partly motivated by the results mentioned in the remark above, Marcelo conjectures that:
Conjecture. Theorem 5 and Corollaries 6, 7 remain true if one replaces “${\lambda^+(F,x)>0}$” by “simple Lyapunov spectrum” in their conclusions.
Remark 5 (Historical remark) These results are along the same lines as previous theorems of H. Furstenberg, A. Raugi, Y. Guivarc’h, I. Goldsheid and G. Margulis on the Lyapunov spectrum of identically and independently distributed (i.i.d. for short) random products of matrices, of Bonatti, Gomez-Mont and Viana, and Bonatti and Viana, on the Lyapunov spectrum of cocycles over uniformly hyperbolic homeomorphisms, and of Avila and Viana on the Lyapunov spectrum of the so-called Kontsevich-Zorich cocycle.
Remark 6 The assumption ${r+\nu>0}$ is necessary:
• by the works of J. Bochi, and J. Bochi, M. Viana, we know that vanishing exponents may be locally ${C^0}$ generic (i.e., one can construct ${C^0}$ open sets of cocycles where vanishing exponents is a ${C^0}$ Baire residual property);
• by the works of L. Arnold, N. D. Cong and A. Arbieto, J. Bochi, we know that vanishing exponents is a ${L^p}$ generic property for all ${1\leq p<\infty}$.
Our long-term goal here is to present the proof of Theorem 5. In the next section, we describe some of the main steps towards this result.
Strategy of proof of Theorem 5
We begin with some (technical) preliminary reductions. Firstly, we notice that, up to replacing the metric ${d(x,y)}$ by ${d(x,y)^{\nu}}$ (${\nu>0}$), one can assume that our cocycles are Lipschitz, i.e., ${\nu=1}$ and ${F\in \mathcal{S}^{r,1}(f,\mathcal{E})}$, ${r\geq 0}$. Secondly, as we’re going to see, all subsequent arguments will be local in nature, so that one can also assume that ${\mathcal{E} = M\times\mathbb{K}^d}$ (where ${\mathbb{K} = \mathbb{C}}$ or ${\mathbb{R}}$). In particular, under this assumption, we can think of ${\mathcal{S}^{r,\nu}(f,\mathcal{E})}$ as the set
$\displaystyle \left\{A:M\rightarrow SL(d,\mathbb{K}): A\in C^{r,\nu}\right\}$
equipped with the norm
$\displaystyle \|A\|_{r,\nu} = \max\limits_{0\leq i\leq r}\sup\limits_{x\in M} \|D^i A(x)\| + \sup\limits_{x\neq y}\frac{\|D^rA(x)-D^rA(y)\|}{d(x,y)^{\nu}}$
In the sequel, an important role will be played by the projective cocycle ${f_A: M\times\mathbb{P}(\mathbb{K}^d)\rightarrow M\times\mathbb{P}(\mathbb{K}^d)}$ naturally associated to a linear cocycle ${(f,A)}$.
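Before moving on, a quick numerical aside (a toy of my own, not part of Marcelo's argument) may help to fix ideas: for the constant cocycle ${A(x) = \textrm{diag}(2,1/2)\in SL(2,\mathbb{R})}$ over any base map, the top Lyapunov exponent is ${\log 2}$, and the finite-time quantities ${\frac{1}{n}\log\|A^n v\|}$ converge to it for a generic vector ${v}$.

```python
import math

# Toy illustration (not part of the argument in the post): the constant
# cocycle A(x) = diag(2, 1/2) in SL(2,R) over an arbitrary base map f.
# Its top Lyapunov exponent is log 2, and the finite-time estimates
# (1/n) log ||A^n v|| already converge to it for a generic vector v.

def apply_power(v, n):
    """Return A^n v for A = diag(2, 1/2)."""
    return (v[0] * 2.0**n, v[1] * 0.5**n)

def top_exponent_estimate(v, n):
    """Finite-time estimate (1/n) log ||A^n v|| of lambda^+."""
    w = apply_power(v, n)
    return math.log(math.hypot(w[0], w[1])) / n

est = top_exponent_estimate((1.0, 1.0), 60)
print(est, math.log(2))  # the two values agree to high precision
```

On the projective level, the corresponding ${f_A}$ attracts every direction other than the vertical one to the horizontal axis; this attraction is the geometric counterpart of the positive exponent.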
In this language, the proof of Theorem 5 can be divided into the following steps:
• First step: One shows that ${\lambda^+(A,x)=0}$ for ${\mu}$-a.e. ${x}$ implies that the cocycle is dominated at ${x}$ (in a sense that is slightly weaker than the one studied by Bonatti, Gomez-Mont, Viana). Then, one shows that this domination at ${\mu}$-a.e. ${x}$ implies the existence of nice stable and unstable manifolds for the projective cocycle ${f_A}$ at ${(x,\xi)\in\{x\}\times\mathbb{P}(\mathbb{K}^d)}$ (and, moreover, these ${f_A}$-invariant manifolds are graphs over the stable and unstable manifolds for ${f}$ at ${x}$). In particular, these ${f_A}$-invariant manifolds can be used to define stable holonomies ${h_{x,y}^s:\{x\}\times\mathbb{P}(\mathbb{K}^d)\rightarrow \{y\}\times\mathbb{P}(\mathbb{K}^d)}$ for two points ${x,y\in M}$ in the same stable manifold and unstable holonomies ${h_{x,z}^u: \{x\}\times\mathbb{P}(\mathbb{K}^d)\rightarrow \{z\}\times\mathbb{P}(\mathbb{K}^d)}$ for two points ${x,z\in M}$ in the same unstable manifold.
• Second step: By the compactness of the projective space ${\mathbb{P}(\mathbb{K}^d)}$, we have that ${M\times\mathbb{P}(\mathbb{K}^d)}$ always supports ${f_A}$-invariant measures ${m}$ projecting to ${\mu}$ under the natural projection ${M\times\mathbb{P}(\mathbb{K}^d)\rightarrow M}$. By Rokhlin’s disintegration theorem (see Rokhlin’s original article, and also a modern exposition by Marcelo of the same result), any such probability measure ${m}$ can be disintegrated into an essentially unique family ${\{m_x: x\in M\}}$, that is, we can write
$\displaystyle m(A)=\int_M m_x(A\cap(\{x\}\times\mathbb{P}(\mathbb{K}^d)))d\mu(x)$
and the family is unique in the sense that any two families verifying the previous equation must coincide up to a set of zero ${\mu}$-measure. By a result of F. Ledrappier, we will see that ${\lambda^+(A,x)=0}$ implies ${m_y = (h_{x,y}^s)_*m_x}$ and ${m_z = (h_{x,z}^u)_*m_x}$. In other words, the disintegration ${\{m_x:x\in M\}}$ is invariant under stable and unstable holonomies whenever ${\lambda^+(A,x)=0}$. Actually, as I already mentioned, F. Ledrappier gave a talk on his result at the “groupe de travail” as a “preparation” for my expositions. So, during my talks, previous knowledge of F. Ledrappier’s theorem was assumed, and I will also do so here: we’ll content ourselves to state and use Ledrappier’s theorem without further mention of its proof (even though this is a very interesting theorem lying at the heart of this proof). For more details, I recommend reading the original article (since it is not very long). In any case, the invariance under holonomies of the disintegration of ${m}$ can be used to show that the map ${x\rightarrow m_x}$ is continuous. In particular, as it is known that periodic points are dense when ${(f,\mu)}$ is non-uniformly hyperbolic, this will allow us to say that the dynamical behavior of periodic points affects the entire dynamics. Notice that this “contamination by periodic points” (as Marcelo likes to call it) is only possible when ${\lambda^+(A,x)=0}$, and it is quite remarkable: even though the set of periodic points has zero ${\mu}$-measure (as ${\mu}$ is non-atomic), the “non-wildness” of the cocycle (expressed by the property ${\lambda^+(A,x)=0}$) allows us to say that they matter for the global dynamics. Of course, this is a particularity of linear cocycles with vanishing exponents and it is far from being true in general.
• Third step: Using certain nice properties of the so-called blocks of domination (analogs of Pesin’s hyperbolic blocks for ${f_A}$), one can construct an arbitrarily large number of periodic points, all of them being dynamically related (“heteroclinically linked by their invariant manifolds”). Here, it is crucial that ${\mu}$ has local product structure!
• Fourth step: In the case ${\mathbb{K}=\mathbb{C}}$, we will complete the proof of Theorem 5 by means of the following argument. Given ${\ell\in\mathbb{N}}$, we select ${p_1,\dots,p_{2\ell}}$ pairwise distinct periodic points of ${f}$. Recall that a matrix of ${SL(d,\mathbb{C})}$ having some eigenvalues with the same norm is a phenomenon of codimension 1 (at least). Hence, the set of cocycles ${A}$ failing to be “typical” at each of ${p_1,\dots,p_{2\ell}}$ (here ${A}$ is typical at ${p_j}$ when ${A^{\kappa(p_j)}(p_j)}$ has eigenvalues of pairwise distinct norms, where ${\kappa(p_j)}$ is the ${f}$-period of ${p_j}$) has codimension ${\geq 2\ell\geq\ell}$. On the other hand, if ${\lambda^+(A,x)=0}$ at ${\mu}$-a.e. ${x}$, by the second step above we have that ${(h_{p_j,q}^u)_* m_{p_j} = m_q = (h_{p_k,q}^s)_* m_{p_k}}$ for all ${q\in W^u(p_j)\cap W^s(p_k)}$. Moreover, when the cocycle ${A}$ is typical over ${p_1,\dots,p_{2\ell}}$, we know that ${m_{p_j}}$ is a linear combination of Dirac measures supported on the eigenspaces of ${A^{\kappa(p_j)}(p_j)}$. Hence, the equality ${(h_{p_j,q}^u)_* m_{p_j} = (h_{p_k,q}^s)_* m_{p_k}}$ implies that the ${h_{p_j,q}^u}$-image of some eigenspace of ${A^{\kappa(p_j)}(p_j)}$ coincides with the ${h_{p_k,q}^s}$-image of some eigenspace of ${A^{\kappa(p_k)}(p_k)}$. As we’re going to see later, this coincidence at one heteroclinic point ${q\in W^u(p_j)\cap W^s(p_k)}$ is a positive codimension phenomenon, so that its validity at all heteroclinic points is a codimension ${\geq\ell}$ phenomenon. Because ${\ell\in\mathbb{N}}$ is an arbitrary integer, we see that the set of cocycles ${A}$ with ${\lambda^+(A,x)=0}$ at ${\mu}$-a.e. ${x}$ has codimension ${\infty}$, that is, one gets Theorem 5 in the case ${\mathbb{K}=\mathbb{C}}$.
However, in the remaining case ${\mathbb{K}=\mathbb{R}}$, we cannot proceed as above: indeed, the set of matrices in ${SL(d,\mathbb{R})}$ with a pair of complex conjugate eigenvalues is open, so that we can’t say anymore that “a matrix with some eigenvalues of the same norm is a codimension 1 phenomenon”. In particular, this case will introduce a few technical issues that we prefer to comment on only in due time.
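Before discussing the strategy further, let me illustrate the disintegration appearing in the second step with a finite toy example (my own; the weights are made up). Over a two-point base, the conditional measures ${m_x}$ are just conditional probabilities, and the defining identity ${m = \int m_x \, d\mu}$ can be checked exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Finite baby case (my own illustration) of Rokhlin's disintegration: a
# measure m on M x P with M = {'a','b'} and P = {0,1}; its marginal mu on M
# and the conditional measures m_x(xi) = m(x, xi) / mu(x).
m = {('a', 0): F(1, 10), ('a', 1): F(3, 10),
     ('b', 0): F(9, 20), ('b', 1): F(3, 20)}

mu = {}                                   # marginal measure on the base M
for (x, xi), w in m.items():
    mu[x] = mu.get(x, F(0)) + w

m_x = {x: {xi: m[(x, xi)] / mu[x] for (y, xi) in m if y == x} for x in mu}

# each m_x is a probability measure, and integrating them recovers m
print(m_x['a'][1], sum(m_x['b'].values()))  # 3/4 1
```

Of course, Rokhlin's theorem is about the measurable (non-atomic) setting; the finite case above only serves to make the formulas concrete.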
This being said, let me mention that we’ll split the proof of Theorem 5 into two blog posts (in a way more or less corresponding to my two talks at the “groupe de travail”). More precisely, in the remainder of today’s post we will give more details on the first step above, and we will leave the discussion of the other three steps for a subsequent post.
1st step of the proof of Theorem 5: domination and invariant foliations
We start the final section of today’s post with the notion of “domination”. Given ${(f,\mu)}$ a non-uniformly hyperbolic system, ${A:M\rightarrow SL(d,\mathbb{K})}$ a ${C^{r,\nu}}$-cocycle and ${\mathcal{H}(K,\tau)}$ a hyperbolic block of ${(f,\mu)}$, we define, for each ${N\in\mathbb{N}}$, ${\theta>0}$, ${D_A(N,\theta)}$ as the set of points ${x\in M}$ such that
$\displaystyle \prod\limits_{j=0}^{k-1}\|A^N(f^{jN}(x))\|\cdot\|A^N(f^{jN}(x))^{-1}\|\leq e^{kN\theta}$
and
$\displaystyle \prod\limits_{j=0}^{k-1}\|A^{-N}(f^{-jN}(x))\|\cdot\|A^{-N}(f^{-jN}(x))^{-1}\|\leq e^{kN\theta}$
for all ${k\geq 1}$ (${k\in\mathbb{N}}$).
Definition 8 We say that ${x}$ is ${s}$-dominated (${s\geq 1}$) if ${x\in\mathcal{H}(K,\tau)\cap D_A(N,\theta)}$ where ${s\theta<\tau}$.
Roughly speaking, since the parameters ${K}$ and ${\tau}$ control the rates of hyperbolicity of ${f}$ at a point ${x\in\mathcal{H}(K,\tau)}$, we see that the domination condition says that the “strength of hyperbolicity” (measured by ${\theta}$) of the cocycle ${A}$ along the fibers ${\mathbb{K}^d}$ can’t surpass the strength of hyperbolicity of the base dynamics ${f}$ (measured by ${s}$ and ${\tau}$): this is the content of the condition ${s\theta<\tau}$. In other words, if we consider the dynamics of the projective cocycle ${f_A}$, then the domination condition ${s\theta<\tau}$ is some sort of quantitative partial hyperbolicity of ${f_A}$ at ${x}$: the stable and unstable directions of ${f_A}$ correspond to the ones of ${f}$, while the “central / dominated” direction is the fiber direction ${\{0\}\times\mathbb{K}^d}$ (as, given a matrix ${B}$, the norms of its projective action ${B_{\#}}$ and of its inverse ${B_{\#}^{-1}}$ are bounded from above by ${\|B\|\cdot\|B^{-1}\|}$; in particular, this justifies the choice of the expressions ${\prod\limits_{j=0}^{k-1}\|A^N(f^{jN}(x))\|\cdot\|A^N(f^{jN}(x))^{-1}\|}$ and ${\prod\limits_{j=0}^{k-1}\|A^{-N}(f^{-jN}(x))\|\cdot\|A^{-N}(f^{-jN}(x))^{-1}\|}$ in the previous definition to measure the strength of hyperbolicity of [iterates of] ${f_A}$ at the fiber directions).
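The bound on the projective action invoked above can be tested numerically. The following sketch (my own illustration, in dimension ${d = 2}$, using the sine of the angle as projective metric) samples random matrices and checks that the projective action never expands distances by more than ${\|B\|\cdot\|B^{-1}\|}$:

```python
import math, random

# Numerical sanity check (my own) of the fact quoted above: the projective
# action B_# of an invertible 2x2 matrix B expands the projective metric
# (here: sine of the angle between two lines) by at most ||B|| * ||B^{-1}||.

def spec_norm(B):
    """Largest singular value of a 2x2 matrix via the eigenvalues of B^T B."""
    (a, b), (c, d) = B
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    return math.sqrt((p + r + math.sqrt((p - r)**2 + 4*q*q)) / 2)

def inv(B):
    (a, b), (c, d) = B
    det = a*d - b*c
    return ((d/det, -b/det), (-c/det, a/det))

def act(B, v):
    (a, b), (c, d) = B
    return (a*v[0] + b*v[1], c*v[0] + d*v[1])

def proj_dist(u, v):
    return abs(u[0]*v[1] - u[1]*v[0]) / (math.hypot(*u) * math.hypot(*v))

random.seed(0)
worst = 0.0
for _ in range(500):
    B = tuple(tuple(random.gauss(0, 1) for _ in range(2)) for _ in range(2))
    u = (random.gauss(0, 1), random.gauss(0, 1))
    v = (random.gauss(0, 1), random.gauss(0, 1))
    det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
    if abs(det) < 1e-3 or proj_dist(u, v) < 1e-6:
        continue  # skip nearly degenerate samples
    ratio = proj_dist(act(B, u), act(B, v)) / proj_dist(u, v)
    worst = max(worst, ratio / (spec_norm(B) * spec_norm(inv(B))))

print(worst <= 1 + 1e-9)  # True: the bound ||B||.||B^{-1}|| is respected
```

In fact, for ${2\times 2}$ matrices the sharp Lipschitz constant of ${B_{\#}}$ in this metric is exactly the ratio of the singular values of ${B}$, which coincides with ${\|B\|\cdot\|B^{-1}\|}$.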
In the sequel, we will study the relationship between vanishing exponents and domination.
Proposition 9 For ${\mu}$-a.e. ${x}$ with ${\lambda^+(A,x)=0}$, we have that ${x}$ is ${s}$-dominated for every ${s\geq 1}$.
In a few words, this proposition says that the vanishing of Lyapunov exponents at ${\mu}$-typical points implies “${\infty}$-domination” (i.e., ${s}$-domination for all ${s\geq 1}$). To prove this proposition, we will need the following Lemma.
Lemma 10 For all ${\delta>0}$ and ${\mu}$-a.e. ${x\in M}$, there exists ${N=N(x)\geq 1}$ such that
$\displaystyle \frac{1}{k}\sum\limits_{j=0}^{k-1}\frac{1}{N}\log\|A^N(f^{jN}(x))\|\leq\lambda^+(A,x)+\delta$
for every ${k\geq 1}$.
Proof: Take ${\varepsilon>0}$ such that ${4\varepsilon\sup\limits_{z\in M}\log\|A(z)\|<\delta}$ and ${\eta\geq 1}$ a large integer with
$\displaystyle \mu(\Delta_{\eta})\geq 1-\varepsilon^2$
where ${\Delta_{\eta} = \left\{x\in M: \frac{1}{\eta}\log\|A^{\eta}(x)\|\leq \lambda^+(A,x)+\frac{\delta}{2}\right\}}$. Let ${\tau(x)}$ be the “average sojourn time” of the ${f^{\eta}}$-orbit of ${x}$ inside ${\Delta_{\eta}}$ and put ${\Gamma_{\eta}=\{x\in M: \tau(x)\geq 1-\varepsilon\}}$. By sub-multiplicativity of norms, we have that
$\displaystyle \frac{1}{k}\sum\limits_{j=0}^{k-1}\frac{1}{\ell\eta}\log\|A^{\ell\eta}(f^{j\ell\eta}(x))\|\leq \frac{1}{k\ell}\sum\limits_{j=0}^{k\ell-1}\frac{1}{\eta}\log\|A^{\eta}(f^{j\eta}(x))\|$
for any ${\ell\geq 1}$. For ${x\in\Gamma_{\eta}}$, fix ${\ell=\ell(x)}$ large so that
$\displaystyle \#\{j\in\{0,\dots,n-1\}:f^{j\eta}(x)\notin\Delta_{\eta}\}\leq (1-\tau(x)+\varepsilon)n$
for each ${n\geq\ell}$. It follows that
$\displaystyle \begin{array}{rcl} \frac{1}{k\ell}\sum\limits_{j=0}^{k\ell-1}\frac{1}{\eta}\log\|A^{\eta}(f^{j\eta}(x))\| &\leq& \lambda^+(A,x)+\frac{\delta}{2} + (1-\tau(x)+\varepsilon)\sup\log\|A\| \\ &<& \lambda^+(A,x)+\delta \end{array}$
By putting the previous two inequalities together, we see that ${\mu}$-a.e. ${x\in\Gamma_{\eta}}$ satisfies the conclusion of the Lemma with ${N=\ell\eta=N(x)}$. On the other hand,
$\displaystyle \mu(\Gamma_{\eta}) + (1-\varepsilon) \mu(M-\Gamma_{\eta}) \geq \int \tau(x)d\mu = \mu(\Delta_{\eta})\geq 1-\varepsilon^2,$
so that ${\mu(\Gamma_{\eta})\geq 1-\varepsilon}$. Since ${\varepsilon>0}$ is arbitrary, by letting ${\varepsilon\rightarrow 0}$, we see that the proof of the Lemma is complete. $\Box$
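The only property of the matrix norm used in the chain of displayed inequalities is its sub-multiplicativity along orbits, i.e., ${\|A^{m+n}(x)\|\leq\|A^n(f^m(x))\|\cdot\|A^m(x)\|}$. As a quick sanity check (my own illustration, with the Frobenius norm standing in for the operator norm):

```python
import math, random

# Sanity check (my own illustration) of the sub-multiplicativity behind the
# displayed inequality: for a sub-multiplicative matrix norm (here the
# Frobenius norm), ||A_{k-1} ... A_1 A_0|| <= ||A_{k-1}|| ... ||A_0||, with
# the A_j playing the role of A^eta evaluated along an orbit of f^eta.
random.seed(1)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def frob(A):
    return math.sqrt(sum(x * x for row in A for x in row))

mats = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
        for _ in range(8)]

prod = [[float(i == j) for j in range(3)] for i in range(3)]
for A in mats:
    prod = matmul(A, prod)   # left-multiply, as in A^n(x) = A(f^{n-1}x)...A(x)

print(frob(prod) <= math.prod(frob(A) for A in mats))  # True
```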
Corollary 11 Let ${\theta>0}$ and ${\lambda\geq 0}$ with ${d\cdot\lambda<\theta}$ (where ${d}$ is the dimension of the fiber ${\mathbb{K}^d}$). Then, for ${\mu}$-a.e. ${x}$ with ${\lambda^+(A,x)\leq\lambda}$, one has ${x\in D_A(N,\theta)}$ for some ${N\geq 1}$.
Proof: Take ${\delta>0}$ with ${d(\lambda+\delta)<\theta}$, and ${x\in M}$, ${N=N(x)\geq 1}$ satisfying the conclusion of the previous Lemma, i.e.,
$\displaystyle \frac{1}{k}\sum\limits_{j=0}^{k-1}\frac{1}{N}\log\|A^N(f^{jN}(x))\|\leq\lambda^+(A,x)+\delta$
Since ${A\in SL(d,\mathbb{K})}$, we have that ${\det A^N(z)=1}$ for all ${z\in M}$, and, a fortiori, ${\|A^N(z)^{-1}\|\leq\|A^N(z)\|^{d-1}}$ for all ${z\in M}$. Hence,
$\displaystyle \frac{1}{kN}\sum\limits_{j=0}^{k-1}\log\left(\|A^N(f^{jN}(x))\|\cdot\|A^N(f^{jN}(x))^{-1}\|\right) \leq d(\lambda^+(A,x)+\delta) < \theta,$
that is, ${x\in D_A(N,\theta)}$. $\Box$
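The inequality ${\|A^N(z)^{-1}\|\leq\|A^N(z)\|^{d-1}}$ used in this proof follows from the singular values: they multiply to ${|\det| = 1}$, so the smallest one is at least ${\|A^N(z)\|^{-(d-1)}}$. Here is a quick check in the case ${d = 2}$ (my own illustration), where the two sides actually coincide:

```python
import math

# Quick check (my own illustration) of the inequality used above: if
# |det(A)| = 1 then ||A^{-1}|| <= ||A||^{d-1}, because the singular values
# satisfy s_1 ... s_d = 1.  For d = 2 the singular values are s and 1/s,
# so both sides are equal.

def spec_norm(B):
    """Largest singular value of a 2x2 matrix via the eigenvalues of B^T B."""
    (a, b), (c, d) = B
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d
    return math.sqrt((p + r + math.sqrt((p - r)**2 + 4*q*q)) / 2)

a, b, c, d = 2.0, 1.0, 0.5, 3.0          # an arbitrary invertible matrix
s = math.sqrt(abs(a*d - b*c))
A = ((a/s, b/s), (c/s, d/s))             # rescaled so that |det(A)| = 1
A_inv = ((A[1][1], -A[0][1]), (-A[1][0], A[0][0]))   # +/- the inverse of A

print(spec_norm(A_inv), spec_norm(A))    # equal: the equality case d = 2
```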
At this stage, it is not hard to check that Proposition 9 is a direct consequence of this corollary.
Remark 7 One of the reasons that Marcelo treats the issue of the vanishing of the top Lyapunov exponent ${\lambda^+(A,x)}$ but not the simplicity one is more or less explained by the proof of Proposition 9: while the absence of a positive top exponent (i.e., ${\lambda^+=0}$) implies a certain “domination” (a crucial ingredient in the proof of Theorem 5 as it allows the existence of holonomies and contamination by periodic points), it is not obvious that the absence of simplicity implies some sort of domination or other nice property allowing for the “contamination by periodic points” argument. In fact, as the reader can see from these two articles of C. Bonatti, M. Viana, and A. Avila, M. Viana, in all situations (as far as I know) where one can deduce simplicity, either the cocycle is (a priori) assumed to be dominated (the case of the Bonatti-Viana article) or locally constant (the case of the Avila-Viana article). In the former case, the existence of holonomies follows from the arguments we present below, while in the latter case, the existence of holonomies is granted for free (even in absence of domination).
Our next goal is to derive the existence of nice strong stable and unstable manifolds of the projective cocycle ${f_A}$ at ${(x,\xi)}$ (which are Lipschitz graphs over the stable and unstable manifolds of ${f}$ at ${x}$) whenever ${x}$ is ${2}$-dominated. To do so, we need a preliminary result about the existence of holonomies at ${1}$-dominated points.
Proposition 12 There exists ${L>0}$ such that, for every ${1}$-dominated point ${x}$, say ${x\in\mathcal{H}(K,\tau)\cap D_A(N,\theta)}$ with ${\theta<\tau}$, and ${y,z\in W^s_{loc}(x)}$, the following limit
$\displaystyle H_{y,z}^s:=H_{A,y,z}^s:=\lim\limits_{n\rightarrow+\infty} A^n(z)^{-1} A^n(y)$
(the stable holonomy between ${y}$ and ${z}$) exists, and, moreover,
$\displaystyle \|H_{y,z}^s-Id\|\leq L\textrm{dist}(y,z)$
and ${H_{y,z}^s = H_{x,z}^s\circ H_{y,x}^s}$.
The proof of this proposition relies on the following “bounded distortion” type lemma:
Lemma 13 There exists a constant ${C=C(A,K,\tau,N)>0}$ such that
$\displaystyle \|A^n(y)\|\cdot\|A^n(z)^{-1}\|\leq C e^{n\theta}$
for all ${y,z\in W^s_{loc}(x)}$, ${x\in D_A(N,\theta)}$.
Proof: Firstly, we observe that
$\displaystyle \|A^n(y)\|\cdot\|A^n(z)^{-1}\|\leq C_1\prod\limits_{j=0}^{k-1}\|A^N(f^{jN}(y))\|\cdot\|A^N(f^{jN}(z))^{-1}\|$
where ${k=\lfloor n/N\rfloor}$ and ${C_1=C_1(A,N)}$.
Secondly, since ${A}$ is a Lipschitz cocycle, one has, for some constant ${L_1 = L_1(A,N)}$,
$\displaystyle \frac{\|A^N(f^{jN}(y))\|}{\|A^N(f^{jN}(x))\|}\leq \exp(L_1 \textrm{dist}(f^{jN}(x),f^{jN}(y)))\leq \exp(L_1\cdot K\cdot e^{-jN\tau})$
and
$\displaystyle \frac{\|A^N(f^{jN}(z))^{-1}\|}{\|A^N(f^{jN}(x))^{-1}\|}\leq \exp(L_1\textrm{dist}(f^{jN}(x),f^{jN}(z)))\leq \exp(L_1\cdot K\cdot e^{-jN\tau})$
whenever ${y,z\in W^s_{loc}(x)}$.
Finally, by the domination assumption ${x\in D_A(N,\theta)}$, we have
$\displaystyle \prod\limits_{j=0}^{k-1}\|A^N(f^{jN}(x))\|\cdot\|A^N(f^{jN}(x))^{-1}\|\leq e^{kN\theta}\leq e^{n\theta}$
By putting these estimates together, one can check that the Lemma follows (with ${C = C_1\exp(L_1\cdot K\cdot\sum\limits_{j=0}^{\infty}e^{-jN\tau})}$). $\Box$
Now we can complete the proof of Proposition 12:
Proof: We claim that ${A^n(z)^{-1}A^n(y)}$ is a Cauchy sequence. Indeed, we observe that
$\displaystyle \|A^{n+1}(z)^{-1}A^{n+1}(y) - A^n(z)^{-1}A^n(y)\|\leq \|A^n(z)^{-1}\|\cdot\|A(f^n(z))^{-1}A(f^n(y))-Id\|\cdot\|A^n(y)\|$
Since ${A}$ is Lipschitz,
$\displaystyle \|A(f^n(z))^{-1}A(f^n(y))-Id\|\leq L_2\textrm{dist}(f^n(y),f^n(z))\leq L_2 K e^{-n\tau}\textrm{dist}(y,z)$
whenever ${y,z\in W^s_{loc}(x)}$. By combining these estimates with Lemma 13, one obtains
$\displaystyle \|A^{n+1}(z)^{-1}A^{n+1}(y) - A^n(z)^{-1}A^n(y)\|\leq C L_2 K e^{n(\theta-\tau)}\textrm{dist}(y,z) \ \ \ \ \ (1)$
Because ${\theta<\tau}$ (by ${1}$-domination), the claim is proved. In particular, the limit ${H_{y,z}^s:=\lim\limits_{n\rightarrow+\infty} A^n(z)^{-1} A^n(y)}$ exists and it satisfies
$\displaystyle \|H_{y,z}^s-Id\|\leq L \textrm{dist}(y,z)$
with ${L=CL_2K\sum\limits_{n=0}^{\infty}e^{n(\theta-\tau)}}$. Finally, the verification of the identity ${H_{y,z}^s = H_{x,z}^s\circ H_{y,x}^s}$ is left as an exercise to the reader. $\Box$
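A commuting toy model (my own illustration; the whole point of Proposition 12 is of course to handle the non-commuting case via the estimates above) may help to visualize the holonomy limit:

```python
import math

# Commuting toy model (my own) of the holonomy limit: take f(t) = t/2
# contracting the "stable manifold" [0, 1], and let A(t) be the rotation by
# angle t.  Since rotations commute, A^n(z)^{-1} A^n(y) is the rotation by
# the angle sum_{j<n} (f^j(y) - f^j(z)), which converges geometrically to
# the rotation by 2(y - z); that limit is the stable holonomy H^s_{y,z}.

def rot(a):
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def approx_holonomy(y, z, n):
    """A^n(z)^{-1} A^n(y); the inverse of a rotation is its transpose."""
    Hy = Hz = ((1.0, 0.0), (0.0, 1.0))
    for _ in range(n):
        Hy, Hz = mul(rot(y), Hy), mul(rot(z), Hz)
        y, z = y / 2, z / 2
    Hz_inv = ((Hz[0][0], Hz[1][0]), (Hz[0][1], Hz[1][1]))
    return mul(Hz_inv, Hy)

y, z = 0.3, 0.5
H = approx_holonomy(y, z, 60)
limit = rot(2 * (y - z))
err = max(abs(H[i][j] - limit[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)  # True: the products have converged to H^s_{y,z}
```

Note that in this toy model ${\|H^s_{y,z}-Id\|\leq 2\,\textrm{dist}(y,z)}$, in agreement with the Lipschitz estimate of the proposition.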
Corollary 14 There exists ${\widetilde{L}>0}$ such that for every ${2}$-dominated ${x}$, say ${x\in\mathcal{H}(K,\tau)\cap D_A(N,\theta)}$, ${2\theta<\tau}$, and ${y,z\in W^s_{loc}(x)}$, the following limit
$\displaystyle H_{f^j(y),f^j(z)}^s:=\lim\limits_{n\rightarrow\infty} A^n(f^j(z))^{-1} A^n(f^j(y)) = A^j(z)\cdot H_{y,z}^s\cdot A^j(y)^{-1}$
exists and it satisfies
$\displaystyle \|H_{f^j(y),f^j(z)}^s-Id\|\leq\widetilde{L} e^{j(2\theta-\tau)}\textrm{dist}(y,z)\leq\widetilde{L}\textrm{dist}(y,z)$
Proof: Since ${A^n(f^j(z))^{-1} A^n(f^j(y)) = A^j(z) [A^{n+j}(z)^{-1}A^{n+j}(y)] A^j(y)^{-1}}$, the desired limit exists (by Proposition 12) and it is ${A^j(z)\cdot H_{y,z}^s\cdot A^j(y)^{-1}}$. Moreover, by the bounded distortion type Lemma 13 and the estimate (1) above (with ${n}$ replaced by ${n+j}$), one obtains
$\displaystyle \|A^{n+1}(f^j(z))^{-1} A^{n+1}(f^j(y))-A^n(f^j(z))^{-1} A^n(f^j(y))\|\leq C e^{j\theta}\cdot C L_2 K e^{(n+j)(\theta-\tau)}\textrm{dist}(y,z).$
By summing over ${n\in\mathbb{N}}$, we deduce the last statement of the Corollary. $\Box$
Remark 8 Note that if ${x}$ is dominated for ${A}$, say ${x\in D_A(N,\theta)}$, then ${x}$ is dominated for ${B}$ whenever ${B}$ is sufficiently ${C^0}$ close to ${A}$: more precisely, for each ${\theta'>\theta}$, we can select a ${C^0}$ neighborhood ${\mathcal{U}}$ of ${A}$ such that ${x\in D_B(N,\theta')}$ when ${B\in\mathcal{U}}$. Similarly, the reader can check that the constants ${L_1, L_2, L}$ and ${\widetilde{L}}$ above can be taken uniform in a ${C^0}$ neighborhood of ${A}$. In particular, all statements above hold uniformly in a ${C^0}$ neighborhood of ${A}$.
Closing today’s post, we study the dependence of the holonomies on the cocycle ${A}$. In this direction, we get the following result under a ${3}$-domination assumption.
Lemma 15 Assume that ${x}$ is ${3}$-dominated for ${A}$, say ${x\in\mathcal{H}(K,\tau)\cap D_A(N,\theta)}$, ${3\theta<\tau}$. Then, there is a ${C^{r,\nu}}$ neighborhood ${\mathcal{U}}$ of ${A}$ such that, for every ${y,z\in W^s_{loc}(x)}$, the map
$\displaystyle B\in\mathcal{U}\mapsto H_{B,y,z}^s$
is ${C^1}$ and its derivative is
$\displaystyle \partial_B H_{B,y,z}^s(\dot{B}) = \sum\limits_{i=0}^{\infty} B^i(z)^{-1}\left[H_{B,f^i(y),f^i(z)}^s B(f^i(y))^{-1} \dot{B}(f^i(y)) - B(f^i(z))^{-1}\dot{B}(f^i(z))H_{B,f^i(y),f^i(z)}^s\right]B^i(y)$
Proof: Fix ${\theta'>\theta}$ with ${3\theta'<\tau}$. By the previous remark, we can select a ${C^{r,\nu}}$ neighborhood ${\mathcal{U}}$ of ${A}$ such that ${x\in\mathcal{H}(K,\tau)\cap D_B(N,\theta')}$ for every ${B\in\mathcal{U}}$. By the bounded distortion type Lemma 13, ${\|B^i(z)^{-1}\|\cdot\|B^i(y)\|\leq C e^{i\theta'}}$, and by the previous corollary, ${\|H_{B,f^i(y),f^i(z)}^s-Id\|\leq \widetilde{L} e^{i(2\theta'-\tau)}\textrm{dist}(y,z)}$. On the other hand, ${\|B(f^i(y))^{-1} \dot{B}(f^i(y))\|\leq \|B^{-1}\|_{r,\nu}\|\dot{B}\|_{r,\nu}}$ and
$\displaystyle \|B(f^i(y))^{-1} \dot{B}(f^i(y)) - B(f^i(z))^{-1} \dot{B}(f^i(z))\|\leq 2 L_3 \|\dot{B}\|_{r,\nu} \textrm{dist}(f^i(y),f^i(z))\leq 2L_3 K e^{-i\tau}\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z)$
where ${L_3=\sup\{\|B\|_{r,\nu}:B\in\mathcal{U}\}}$. It follows that the expression above defining ${\partial_B H^s_{B,y,z}(\dot{B})}$ converges as
$\displaystyle \begin{array}{rcl} \|\partial_B H^s_{B,y,z}(\dot{B})\| &\leq& \sum\limits_{i=0}^{\infty}Ce^{i\theta'}2L_3[\widetilde{L}e^{i(2\theta'-\tau)} + Ke^{-i\tau}]\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z) \\ &\leq& \widetilde{C}\sum\limits_{i=0}^{\infty} e^{i(3\theta'-\tau)}\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z)<\infty \end{array}$
where ${\widetilde{C} = 2L_3C(\widetilde{L}+K)}$.
Now, we recall that ${H^n_{B,y,z}:=B^n(z)^{-1}B^n(y)\rightarrow H^s_{B,y,z}}$ as ${n\rightarrow\infty}$ and each ${H^n_{B,y,z}}$ is ${C^1}$ in the ${B}$-variable with derivative
$\displaystyle \begin{array}{rcl} \partial_B H_{B,y,z}^n (\dot{B}) &=& B^n(z)^{-1}\sum\limits_{i=0}^{n-1}B^{n-i}(f^i(y))B(f^i(y))^{-1}\dot{B}(f^i(y)) B^i(y) \\ &-& \sum\limits_{i=0}^{n-1} B^i(z)^{-1} B(f^i(z))^{-1} \dot{B}(f^i(z)) B^{n-i}(f^i(z))^{-1} B^n(y) \end{array}$
Thus, our task is reduced to showing that ${\partial_B H_{B,y,z}^n (\dot{B})\rightarrow \partial_B H_{B,y,z}^s (\dot{B})}$ uniformly. Keeping this goal in mind, we observe that the previous corollary implies that
$\displaystyle \|H^{n-i}_{B,f^i(y),f^i(z)} - H^s_{B,f^i(y),f^i(z)}\|\leq \widetilde{L} e^{i\theta'}e^{n(\theta'-\tau)} \textrm{dist}(y,z)$
for each ${0\leq i\leq n-1}$. Thus, the difference between the ith terms of ${\partial_B H^n_{B,y,z}(\dot{B})}$ and ${\partial_B H^s_{B,y,z}(\dot{B})}$ is bounded by
$\displaystyle 2Ce^{i\theta'}\widetilde{L}e^{i\theta'}e^{n(\theta'-\tau)}L_3\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z) = \widehat{C} e^{2i\theta'}e^{n(\theta'-\tau)}\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z)$
where ${\widehat{C} = 2C\widetilde{L}L_3}$. Putting this estimate together with the bounds of the previous paragraph applied to the terms ${i\geq n}$, we deduce that
$\displaystyle \|\partial_B H^s_{B,y,z}(\dot{B})-\partial_B H^n_{B,y,z}(\dot{B})\|\leq \left(\widehat{C}\sum\limits_{i=0}^{n-1}e^{2i\theta'}e^{n(\theta'-\tau)} + \widetilde{C} \sum\limits_{i=n}^{\infty} e^{i(3\theta'-\tau)}\right)\|\dot{B}\|_{r,\nu}\textrm{dist}(y,z).$
Because ${3\theta'<\tau}$, we get that the right-hand side of this estimate goes to ${0}$ as ${n\rightarrow\infty}$. This proves the Lemma. $\Box$
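In the same commuting toy model as before (my own illustration, much simpler than the setting of Lemma 15, since there the holonomy is an explicit rotation), the ${C^1}$ dependence of the holonomy on the cocycle can be checked by finite differences:

```python
import math

# Finite-difference check (my own commuting toy, far simpler than Lemma 15)
# that the holonomy depends differentiably on the cocycle: for the family
# B_c(t) = rotation by c*t over f(t) = t/2, the holonomy is the rotation by
# a(c) = 2c(y - z) (up to a 2^{-n} tail), so dH/dc is explicit.

def rot(a):
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def holonomy(c, y, z, n=60):
    return rot(sum(c * (y - z) / 2**j for j in range(n)))

y, z, c, h = 0.3, 0.5, 1.2, 1e-6
Hp, Hm = holonomy(c + h, y, z), holonomy(c - h, y, z)
fd = [[(Hp[i][j] - Hm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

a, da = 2 * c * (y - z), 2 * (y - z)          # dH/dc = a'(c) * dR/da
exact = ((-da * math.sin(a), -da * math.cos(a)),
         (da * math.cos(a), -da * math.sin(a)))

err = max(abs(fd[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err < 1e-6)  # True: central differences match the exact derivative
```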
So, this is all for today! Next time, we will complete the proof of Theorem 5 by discussing the remaining steps in the strategy of proof presented above.
https://www.physicsforums.com/threads/query-about-induced-voltage-and-open-circuit-current.819053/

# Query about induced voltage and open-circuit current
1. Jun 14, 2015
### Ohmer
When we move a wire through a magnetic field, a magnetic force appears that pushes charges toward one end of the wire (following the equation F = q⋅(v×B)), until an opposing force on the charges is produced by the electric field (E) created by the charge distribution.
This causes the EMF or induced voltage. When we close the circuit the current flows thanks to the electric field…
but what happens when the circuit is open? In real generators the wires are coils in which a sinusoidal EMF is induced, which means that charges shift back and forth sinusoidally from one end of the wire to the other without any load. If the current through a section of the wire is dQ/dt, why do we say there is no current? Does this have to do with the fact that the magnetic field does no work and does not affect the kinetic energy of the electrons? Does this shifting charge distribution generate power losses?
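As a rough numeric sketch of the motional-EMF mechanism described above (all numbers are made up for illustration):

```python
# Rough numbers (illustration only, all values assumed): a straight wire of
# length L moving at speed v perpendicular to a uniform field B develops a
# motional EMF of emf = B * L * v, the integrated effect of F = q * (v x B).
B = 0.5   # magnetic field, tesla   (assumed)
L = 0.2   # wire length, metres     (assumed)
v = 3.0   # wire speed, metres/sec  (assumed)

emf = B * L * v
print(emf)  # about 0.3 volts across the open ends of the wire
```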
2. Jun 14, 2015
### Jeff Rosenbury
Why is there no load? Although one section of wire may not have a load, somewhere down the wire there is a load (at least in a closed circuit). That load causes a backup of charge carriers, like a traffic jam. The backup goes all the way back to the section in the magnetic field which provides the motive force for the charge carriers.
BTW, the kinetic energy of the charge carriers is small. They typically drift at a fraction of a millimeter per second and have almost no mass. The energy is in the electric and magnetic fields (which "move" at the speed of light).
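For a sense of scale, here is a rough drift-velocity estimate (my own numbers, for a 1 mm² copper wire carrying 1 A; the carrier density is the standard textbook value for copper):

```python
# Order-of-magnitude estimate (numbers assumed for illustration) of how
# slowly the charge carriers drift: v_d = I / (n * q * A) for a copper wire.
I = 1.0        # current, amperes (assumed)
n = 8.5e28     # free-electron density of copper, per m^3
q = 1.602e-19  # elementary charge, coulombs
A = 1.0e-6     # cross-section, m^2 (a 1 mm^2 wire)

v_d = I / (n * q * A)
print(v_d)  # about 7e-5 m/s, i.e. a fraction of a millimetre per second
```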
3. Jun 14, 2015
### davenn
because he is asking about a single bit of wire, with open ends
http://meetings.aps.org/Meeting/DFD05/Event/37954

### Session NQ: Richtmyer-Meshkov Instability
11:01 AM–1:37 PM, Tuesday, November 22, 2005
Hilton Chicago Room: Stevens 2
Chair: Christopher Tomkins, Los Alamos National Laboratory
Abstract ID: BAPS.2005.DFD.NQ.5
### Abstract: NQ.00005 : Richtmyer-Meshkov Instability of a Membraneless, Sinusoidal Gas Interface
11:53 AM–12:06 PM
Results are presented from a series of shock tube experiments studying the Richtmyer-Meshkov instability (RMI) for the case of a 2-D single mode gas interface. The membraneless interface is formed by the head-on flow of nitrogen, seeded with acetone, and sulfur-hexafluoride which creates a stagnation surface. A sinusoidal interface is created by oscillating two rectangular pistons that are initially flush with the shock tube walls. The RMI is studied for varying incident shock strengths (1.3 $\le M \le$ 4) by imaging the interface with planar laser-induced fluorescence, once immediately before shock arrival and at two different post-shock times. The experimental images and the growth rates of non-dimensionalized geometrical features are compared to numerical simulations using the \textit{Raptor} code (LLNL) which takes advantage of the Piecewise Linear Method (PLM) with Adaptive Mesh Refinement (AMR) to solve the Navier-Stokes equations.
http://mathhelpforum.com/number-theory/148840-canonical-decomp-2-a.html

Math Help - Canonical Decomp 2
1. Canonical Decomp 2
Is there an easy way to approach this number or just start dividing away?
10,510,100,501
2. Originally Posted by dwsmith
Is there an easy way to approach this number or just start dividing away?
10,510,100,501
Hint: $(x^2+1)^5 = x^{10} + 5x^8 + 10x^6 + 10x^4 + 5x^2 + 1$.
3. I am not sure how that is supposed to help, but if I could see the connection, how would I know to come up with $(x^2+1)^5$?
4. Originally Posted by dwsmith
I am not sure how that is supposed to help, but if I could see the connection, how would I know to come up with $(x^2+1)^5$?
Second (blatant) hint: Try putting x = 10 in that binomial expansion.
I have to admit that I cheated in order to find that method. I plugged the number 10510100501 in here in order to find the factors. When I saw the answer I could see why it came out that way.
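Putting the two hints together, a quick check confirms the factorization:

```python
# Working out the two hints above: with x = 10 the binomial expansion gives
# (10^2 + 1)^5 = 101^5 = 10,510,100,501, and 101 is prime, so the
# canonical decomposition of the number is 101^5.

def is_prime(p):
    return p > 1 and all(p % k for k in range(2, int(p**0.5) + 1))

n = 10_510_100_501
print(101**5 == n, is_prime(101))  # True True
```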
https://proofwiki.org/wiki/Category:Definitions/Superfactorials | # Category:Definitions/Superfactorials
This category contains definitions related to Superfactorials.
Related results can be found in Category:Superfactorials.
The superfactorial of $n$ is defined as:
$n\$ = \displaystyle \prod_{k \mathop = 1}^n k! = 1! \times 2! \times \cdots \times \left({n - 1}\right)! \times n!$, where $k!$ denotes the factorial of $k$.
## Pages in category "Definitions/Superfactorials"
The following 2 pages are in this category, out of 2 total.
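A direct implementation of this definition (a minimal Python sketch) is straightforward; for example, the superfactorial of 4 is 1!·2!·3!·4! = 288:

```python
from math import factorial

def superfactorial(n):
    """sf(n) = 1! * 2! * ... * n!"""
    result = 1
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

assert superfactorial(1) == 1
assert superfactorial(4) == 1 * 2 * 6 * 24   # = 288
```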
https://murphmath.wordpress.com/2013/07/ | You are currently browsing the monthly archive for July 2013.
I use several different computers, so I keep all of my files on Dropbox. To keep my LaTeX style files synced up, I created a texmf folder on Dropbox. Then, by adding a path on each computer, TeX will automatically find the style files there. I’m posting detailed instructions because I always forget how to add the path. (These instructions are for macOS and TeX Live.)
First, you need a texmf tree on your Dropbox. Start by making a folder called “texmf”; this can be put in any directory on your Dropbox. Inside texmf, put another folder called “tex”, and inside tex put a folder called “latex”. Now make a folder for your style file inside latex; I called mine “dropsty”. So you should have a nested set of folders that looks like:
~/Dropbox/…/texmf/tex/latex/dropsty/dropsty.sty
Now you need to tell tex how to find this folder. To do this, open the terminal. Type “kpsewhich texmf.cnf”. This will give you the path to a config file. For me it returns
/usr/local/texlive/2012basic/texmf.cnf
texmf.cnf is the config file we want to edit.
(Here’s how I edited the file, skip this if you know how. Type “cd /usr/local/texlive/2012basic/” to get to the directory with the file. (Type “ls” to see what’s in the folder, type “pwd” to see where you are.) Now you’ll have to make sure you can edit the file, so type “sudo chmod 777 texmf.cnf”. You’ll be prompted to enter your password. Now “emacs texmf.cnf”. You can navigate using the arrow keys. To save, hold “control” and type “x s”.)
Here’s what texmf.cnf looks like by default:
% (Public domain.)
% This texmf.cnf file should contain only your personal changes from the
% original texmf.cnf (for example, as chosen in the installer).
%
% That is, if you need to make changes to texmf.cnf, put your custom
% settings in this file, which is …/texlive/YYYY/texmf.cnf, rather than
% the distributed file (which is …/texlive/YYYY/texmf/web2c/texmf.cnf).
% And include *only* your changed values, not a copy of the whole thing!
%
TEXMFLOCAL = $SELFAUTOPARENT/texmf-local
TEXMFHOME = ~/Library/texmf
TEXMFVAR = ~/Library/texlive/2012basic/texmf-var
TEXMFCONFIG = ~/Library/texlive/2012basic/texmf-config
We want to add a new path to TEXMFHOME. All we do is put a colon after the current path, and add “~/Dropbox/…/texmf”. In my case, it looks like this:
https://brilliant.org/problems/calculator-problem-2/ | # Calculator problem
Algebra Level 2
Note: This problem is incorrect.
I want to calculate the square root of 7 correct to at least 5 decimal places. However, my calculator is dysfunctional and only the green keys work (the red keys do not).
How many key presses are required to display the square root of 7 starting from the All Clear (0) position.
For example, if I wanted to get $0.1$, then starting from 0, pressing $10^x$ will give me 1. Pressing $10^x$ again will give me $10$. Pressing $1/x$ will give me $0.1$.
Edit: Sorry. Somebody has pointed out that it can be done in 4 steps using the factorial of pi. He is right. I was under the impression that the factorial function was defined only for non-negative integers.
Someone else has also pointed a way of doing this with fewer than 9 steps, so the options are not correct.
Anyway, for those still interested, try finding the minimum number of steps needed.
http://export.arxiv.org/abs/1311.0475 | math.CO
# Title: Majority out-dominating functions in digraphs
Abstract: At least two different notions have been published under the name "majority domination in graphs": majority dominating functions and majority dominating sets. In this work we extend the former concept to digraphs. Given a digraph $D=(V,A)$, a function $f : V \rightarrow \{-1,1\}$ such that $f(N^+[v])\geq 1$ for at least half of the vertices $v$ in $V$ is a majority out-dominating function (MODF) of $D$. The weight of a MODF $f$ is $w(f)=\sum\limits_{v\in V}f(v)$, and the minimum weight of a MODF in $D$ is the majority out-domination number of $D$, denoted $\gamma^+_{maj}(D)$. In this work we introduce these concepts and prove some results about them, among which is the fact that the decision problem of finding a majority out-dominating function of a given weight is NP-complete.
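The MODF condition is easy to check computationally for small instances. The sketch below is illustrative only; the 4-cycle digraph and the functions tested are made up, not taken from the paper:

```python
def is_modf(vertices, arcs, f):
    """Check whether f: V -> {-1, 1} is a majority out-dominating function,
    i.e. f(N+[v]) >= 1 holds for at least half of the vertices v."""
    good = 0
    for v in vertices:
        closed = {v} | {w for (u, w) in arcs if u == v}  # closed out-neighborhood N+[v]
        if sum(f[x] for x in closed) >= 1:
            good += 1
    return 2 * good >= len(vertices)

# Hypothetical directed 4-cycle: 0 -> 1 -> 2 -> 3 -> 0
V = [0, 1, 2, 3]
A = [(0, 1), (1, 2), (2, 3), (3, 0)]

assert is_modf(V, A, {v: 1 for v in V})                # the all-ones function is always a MODF
assert not is_modf(V, A, {0: 1, 1: 1, 2: -1, 3: -1})   # most closed sums fall below 1
```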
Comments: 12 pages, 2 figures
Subjects: Combinatorics (math.CO)
Cite as: arXiv:1311.0475 [math.CO] (or arXiv:1311.0475v1 [math.CO] for this version)
## Submission history
From: Martin Manrique [view email]
[v1] Sun, 3 Nov 2013 15:19:41 GMT (13kb)
http://labiogene.org/spip.php?article131 | Fatty Acid Composition of Human Colostrums of Burkinabe Women. Carbone Virginia, Musumeci Maria, Simpore Jacques, Saggese Paola, d'Agata Alfonsina and Musumeci Salvatore. Pakistan Journal of Biological Sciences, 2006, 9(6): 1028-1032. Abstract: The aim of this study is to compare the colostrum lipid composition of 53 Burkinabe women, collected in 2005 at the Maternity of Centre Medical Saint Camille in Ouagadougou (Burkina Faso), with similar data obtained from breast milk five years earlier, and thus to show the evolution of this important aliment. The fatty acid composition of colostrum samples was determined by gas-liquid chromatography-mass spectrometry. Saturated lipids (C8:0-C14:0) showed a progressively increasing trend in the Burkinabe women's colostrum with respect to the values measured five years earlier. The C15:0-C24:0 fractions were consistently higher, but their trends were progressively decreasing. The 18:2n-6 fraction (linoleic acid) reached its highest value on the third day post partum. The 18:3n-3 fraction was consistently higher on the second and third days. The 20:4n-6 (arachidonic acid) and LC n-6 PUFA were lower from the first day, but with a trend to increase. The 22:6n-3 and LC n-3 PUFA were also consistently lower. The 18:2n-6/18:3n-3 and LC n-6/LC n-3 ratios were lower and higher, respectively, compared with those measured five years earlier. These results suggest the need to improve the alimentary habits of mothers in order to restore a balanced n-6/n-3 PUFA ratio in their colostrums.
https://ece4uplp.com/indirect-method-of-generation-of-fm-signal/ | # Indirect method of generation of FM signal
The indirect method of generating an FM signal is also known as the Armstrong method. Here a crystal oscillator generates the carrier signal, which provides very high stability compared to the direct method. This method generates a WBFM signal in two steps: first, a phase modulator generates a NBFM signal; then the NBFM signal is converted to WBFM using a frequency multiplier.
In NBFM the modulation index is small and the distortion is very low. We prefer a phase modulator to generate NBFM because its generation is easy. The frequency multiplier multiplies the incoming carrier frequency along with the frequency deviation $\Delta f$; hence NBFM is converted into WBFM with a large frequency deviation as well.
Frequency multiplier:-
The frequency multiplier consists of a non-linear device followed by a Band Pass Filter (BPF); the non-linear device is memoryless.
If the input to the non-linear device is an FM wave with carrier frequency $f_{c}$ and deviation $\Delta f$, then the output consists of a DC component and 'n' frequency-modulated waves with carrier frequencies $f_{c}, 2f_{c}, 3f_{c}, \ldots, nf_{c}$ and frequency deviations $\Delta f, 2\Delta f, 3\Delta f, \ldots, n\Delta f$. The BPF is designed to pass the FM wave centered at the frequency $nf_{c}$ with frequency deviation $n\Delta f$ and to suppress all other FM components. Thus a frequency multiplier generates a WBFM wave from a NBFM wave.
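The scaling of the carrier frequency by the non-linearity can be illustrated numerically. The sketch below (illustrative parameter values, with an ideal memoryless cubic non-linearity standing in for the real device) shows the spectral peak of an NBFM signal moving from $f_{c}$ to $3f_{c}$:

```python
import numpy as np

# Illustrative values (not from the article): 1 s of signal, 1 Hz FFT bin spacing
fs = 8192.0
N = 8192
t = np.arange(N) / fs
fc, fm, beta = 500.0, 20.0, 0.2   # NBFM: small modulation index beta

nbfm = np.exp(1j * (2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t)))
tripled = nbfm ** 3               # ideal memoryless cubic non-linearity (n = 3)

freqs = np.fft.fftfreq(N, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.fft(tripled)))]
# the spectral peak moves from fc to n*fc, and the deviation scales to n*beta*fm
assert abs(peak - 3 * fc) < 1.0
```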
Generation of WBFM by Armstrong’s method:-
Armstrong’s method is an indirect method used to generate a WBFM signal having both the desired frequency deviation and the desired carrier frequency.
The block diagram consists of a two-stage multiplier with an intermediate stage of frequency translation.
http://www.geog.com.cn/CN/10.11821/xb200002012 | • 论文 •
### 冰川槽谷横剖面沿程变化及其对冰川动力的反映
1. 北京大学城市与环境学系,北京100871
• 收稿日期:1999-09-12 修回日期:1999-01-18 出版日期:2000-03-15 发布日期:2000-03-15
• 基金资助:
国家自然科学基金资助项目(49671075)
### The Cross-section Variation of Glacial Valley and Its Reflection to the Glaciation
LI Ying kui, LIU Geng nian
1. Department of Urban and Environmental Sciences, Peking University, Beijing 100871
• Received:1999-09-12 Revised:1999-01-18 Online:2000-03-15 Published:2000-03-15
• Supported by:
National Natural Science Foundation of China,No.49671075
Abstract: A new model based on the gradient width-depth ratio (GWDR) is proposed, and the longitudinal variation of glacial valleys is presented by applying this model to survey data of glacial-valley cross-sections in the middle and west of the Tian Shan Mountains. The GWDR of a glacial-valley cross-section is defined as the ratio between the width of the cross-section at a given contour line and the corresponding depth; it describes the integrated characteristics of a cross-section and supports comparative morphological analysis of different cross-sections. Statistics show that the relationship between the GWDR and its corresponding depth follows a power function. Two parameters (A_f, a measure of the breadth of the valley floor, and B_f, a measure of the steepness of the valley sides) are used to describe this relationship. According to their planar shapes, glacial valleys are classified as single valleys and multi-valleys, and multi-valleys are subdivided into a simple valley section, a confluent valley section and a single-flow section. Based on measurements of 49 cross-profiles of glacial valleys in the middle and west of the Tian Shan Mountains, the longitudinal variations of glacial valleys are summarized as follows: (1) In single valleys, the two parameters (|A_f| and |B_f|) of the GWDR model increase from the head to the snow line, and the valley becomes wider and steeper on its two walls; conversely, they decrease from the snow line to the end of the valley, and the valley becomes narrower and gentler. (2) In multi-valleys, |A_f| and |B_f| increase from the simple valley section to the confluent valley section, and decrease from the confluent valley section to the single-flow section. These characteristics reflect the differences in glaciation along the valley.
The glaciation near the snow line is greater than upstream and downstream in a single valley because the glacier reaches its maximum temperature, thickness, and velocity at that location. In multi-valleys, confluence becomes the dominant influence on glaciation, so the glaciation at confluence locations is greater than at other locations.
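A power-function relationship between GWDR and depth, as described in the abstract, can be recovered from cross-section data by a least-squares line fit in log-log space. The sketch below uses made-up (depth, GWDR) pairs, not the paper's Tian Shan measurements:

```python
import math

# Hypothetical (depth, GWDR) pairs following GWDR = A_f * depth**B_f
# with A_f = 8 and B_f = -0.5 (made-up values, noiseless)
data = [(d, 8.0 * d ** -0.5) for d in (5.0, 10.0, 20.0, 40.0, 80.0)]

# least-squares line fit of log(GWDR) = log(A_f) + B_f * log(depth)
xs = [math.log(d) for d, _ in data]
ys = [math.log(g) for _, g in data]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b_f = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a_f = math.exp(ybar - b_f * xbar)

assert abs(b_f - (-0.5)) < 1e-9   # steepness exponent recovered
assert abs(a_f - 8.0) < 1e-6      # floor-breadth coefficient recovered
```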
• P931.4
https://www.physicsforums.com/threads/inflection-points-and-intervals.720604/ | Inflection Points and Intervals
1. Nov 3, 2013
Qube
1. The problem statement, all variables and given/known data
Suppose that a continuous function f(x) has horizontal tangent lines at x = -1, x = 0, and x = 1. If f"(x) = 60x^3 - 30x, then which of the following statements is/are true?
A) f(x) has a local max at x = 1
B) f(x) has a local min at x = -1
C) f(x) has an inflection point at x = 0
2. Relevant equations
Local maximums occur at critical points.
All points at which horizontal tangent lines occur are critical points: a horizontal tangent line at a point means the derivative exists and equals zero there, and the point lies in the domain of the function.
Therefore, x = -1, 0, and 1 are critical points.
We can use the second derivative test to test for local extrema.
3. The attempt at a solution
f"(-1) = - 30. x = -1 is a local max. B is true.
f"(1) = 30. x = 1 is a local min. A is true.
f(x) has an inflection point at x = 0; the second derivative is 0 at x = 0 and x = ±1/sqrt(2).
f"(x) changes sign around x = 0 from being positive in the interval (-1/sqrt(2), 0) and (0, 1/sqrt(2)).
Therefore, all three are true.
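The sign computations behind the second-derivative test can be checked directly:

```python
def fpp(x):
    """Second derivative given in the problem: f''(x) = 60x^3 - 30x."""
    return 60 * x ** 3 - 30 * x

assert fpp(-1) == -30            # negative: local maximum at x = -1
assert fpp(1) == 30              # positive: local minimum at x = 1
assert fpp(-0.5) > 0 > fpp(0.5)  # sign change across x = 0: inflection point
```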
2. Nov 3, 2013
Staff: Mentor
Your reasoning looks good to me.
https://bitbucket.org/kenko/getemx/src/58ff83036a4d/README.rst
# GETEMX
## Building & Installation
It should be possible to build and install getemx using cabal.
## Invocation
getemx accepts no options on the command line. Its arguments should consist simply of .emx files; the simplest way to invoke it is:
$ getemx *.emx
where the .emx files are in the current directory.
getemx will read a file in your home directory called ".emxdownloader" which can define options to control its behavior. Options may either be boolean or string; the values of boolean options must be one of "f", "t", "false", or "true" while string options may be any string. The syntax of the .emxdownloader file is very simple:
option = value
Any amount of whitespace may occur before or after the "=". The following are boolean options:
• replace_underscores: if true, underscores in the filename will be replaced by spaces. True by default.
• replace_apostrophe_identity: if true, an HTML-escaped apostrophe entity in filenames will be replaced by a literal apostrophe ("'"). True by default.
• get_art: if true, cover art will be downloaded for each album. True by default.
Currently the only string options control the filenames of the downloaded files. There are two classes here: dldir specifies a directory relative to which further processing will take place, while dlfmt and dlfmt_multidisc specify how to process individual files. The latter two accept a number of replacement options:
The defaults for the string options are:
• dldir: ., that is, whatever directory getemx is run from
• dlfmt: %(a)/%(A)/%(a) - %(A) - %(n) - %(t)
• dlfmt_multidisc: %(a)/%(A): %(D)/%(a) - %(A): %(D) - %(n) - %(t)
dlfmt_multidisc is used if a track is being downloaded that belongs to a set with more than one disc; otherwise, dlfmt is used. Note that at present the default for dlfmt_multidisc will probably do the wrong thing on OS X. Note also that neither of the default values ends with %(e): the file extension is supplied automatically if it is not explicitly specified.
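Assuming the replacement codes mean %(a) = artist, %(A) = album, %(n) = track number and %(t) = title (the README's own key list did not survive extraction, so these meanings are a guess), the expansion of dlfmt can be emulated with Python's mapping-style % formatting:

```python
import re

def expand(fmt, tags):
    # turn getemx-style "%(a)" into Python's "%(a)s" before %-formatting;
    # hypothetical emulation, not getemx's actual implementation
    return re.sub(r"%\((\w+)\)", r"%(\1)s", fmt) % tags

tags = {"a": "Artist", "A": "Album", "n": "01", "t": "Title"}
path = expand("%(a)/%(A)/%(a) - %(A) - %(n) - %(t)", tags)
assert path == "Artist/Album/Artist - Album - 01 - Title"
```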
A ~/.emxdownloader file that set every option to its default value could look like this:
get_art = t
replace_underscores = t
replace_apostrophe_identity = t
dldir = .
dlfmt = %(a)/%(A)/%(a) - %(A) - %(n) - %(t)
dlfmt_multidisc = %(a)/%(A): %(D)/%(a) - %(A) - %(n) - %(t)
"Could" because one could also write "true" out in full for "t". | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3559996783733368, "perplexity": 6493.241815106689}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936459277.13/warc/CC-MAIN-20150226074059-00182-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/determine-whether-the-sequence-converges-or-diverges.530143/ | # Determine whether the sequence converges or diverges
1. Sep 14, 2011
### dangish
Hey all,
University started for me last week; however, I was unable to attend until now. I just e-mailed my professor: there is a quiz next week, I have no notes, and he will not give them to me. I'm wondering if anyone knows any good online sites that could help me catch up. Here are some of the questions, to give you an idea of what I am looking for:
Question 1: Determine whether the sequence converges or diverges. If it converges, find the
limit.
(a) {1 + [(-1)^n]/2 }
(b) {1 + [(-1)^n]/3n }
(c) {sin(n)/n}
Question 2: Determine whether the sequence is increasing, decreasing or not monotonic. Is the sequences bounded? If the sequence is convergent, find its limit.
(a) { sqrt(n) / (1 + sqrt(n)) }
(b) { 2 + 1/3^n }
Question 3: Determine whether the series is convergent or divergent. If it is convergent, find
its sum.
(a) $\sum \frac{4n+2}{4n-2}$
Any advice on some good reference material would be GREATLY appreciated.. Thanks in advance!
2. Sep 14, 2011
### lanedance
3. Sep 14, 2011
### Char. Limit
Re: Convergence/Divergence
For question three, just see what the summand tends to as n tends to infinity. If it doesn't tend to zero, your sum won't converge.
4. Sep 14, 2011
### dangish
Re: Convergence/Divergence
Wouldn't both the numerator and denominator go to infinity?
5. Sep 14, 2011
### Char. Limit
Re: Convergence/Divergence
Yes, but that doesn't tell us much. For this one, you can rewrite the function as such:
$$\frac{4n+2}{4n-2} = \frac{4n-2+4}{4n-2} = 1 + \frac{4}{4n-2}$$
Try evaluating the limit from here.
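A quick numeric sketch of that limit: the terms approach 1 rather than 0, so the series cannot converge.

```python
def a(n):
    """Summand of the series: (4n + 2) / (4n - 2) = 1 + 4 / (4n - 2)."""
    return (4 * n + 2) / (4 * n - 2)

# the terms tend to 1, not 0, so the series diverges by the limit (divergence) test
assert abs(a(10 ** 6) - 1.0) < 1e-5
assert all(a(n) > 1.0 for n in range(1, 100))
```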
6. Sep 14, 2011
### dangish
Re: Convergence/Divergence
So it is convergent? And it's sum is 1?
7. Sep 14, 2011
### Char. Limit
Re: Convergence/Divergence
No, that's not the series. That's just the summand, the term in the series. You do know the limit test, right?
Limit test:
If an does not tend to 0 as n tends to infinity, then $\sum a_n$ diverges.
8. Sep 14, 2011
### dangish
Re: Convergence/Divergence
Like I said, haven't been to class yet. I'll try and get some notes tomorrow, thanks for the help though I appreciate it man.
http://mathhelpforum.com/calculus/5980-finding-dy-dx.html | # Thread: finding the dy/dx
1. ## finding the dy/dx
I am having trouble finding the dy/dx for
1) y = (1/3)x^3
2) y = (3/4)x^5
3) y = 4 square root_/x3
please note the small numbers are powers.
2. Originally Posted by rpatel
I am having trouble finding the dy/dx for
1) y = (1/3)x^3
2) y = (3/4)x^5
3) y = 4 square root_/x3
please note the small numbers are powers.
Hello,
if y = a*x^r, where a and r are real numbers, then dy/dx = a*r*x^(r-1)
With your problems you'll get:
1) dy/dx=(1/3)*3*x^2 = x^2
2) dy/dx = 15/4*x^4
And now I'm only guessing:
Transform the root into a power: y = x^(1/4)
3) dy/dx = (1/4)*x^(-3/4)
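These power-rule derivatives can be sanity-checked with a central-difference approximation:

```python
def numeric_deriv(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# y = (1/3)x^3  ->  dy/dx = x^2, so the slope at x = 2 is 4
assert abs(numeric_deriv(lambda x: x ** 3 / 3, 2.0) - 4.0) < 1e-4
# y = (3/4)x^5  ->  dy/dx = (15/4)x^4, so the slope at x = 2 is 60
assert abs(numeric_deriv(lambda x: 3 * x ** 5 / 4, 2.0) - 60.0) < 1e-3
```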
Greetings
EB
https://geo.libretexts.org/Bookshelves/Geology/Book%3A_Geological_Structures_-_A_Practical_Introduction_(Waldron_and_Snyder)/01%3A_Topics/1.02%3A_Orientation_of_Structures | # 1.2: Orientation of Structures
## Lines and Planes
### Linear and planar features in geology
Almost all work on geologic structures is concerned in one way or another with lines and planes.
The following are examples of linear features that one might observe in rocks, together with some kinematic deductions from them:
• glacial striae (which reveal the direction of ice movement);
• the fabric or lineation produced by alignment of amphiboles seen in metamorphic rocks (which reveal the direction of stretching acquired during deformation);
• and the alignment of elongate clasts or fossil shells in sedimentary rocks (which reveals current direction).
Examples of planar features include:
• tabular igneous intrusive bodies such as dykes and sills;
• bedding planes in sedimentary rocks;
• the fabric or foliation produced by alignment of sheet silicate minerals such as mica in metamorphic rocks, which reveals the direction of flattening during deformation;
• joints and faults produced by the failure of rocks in response to stress (and which therefore reveal the orientation of stress at some time in the past).
Notice that although several of the above descriptive observations lead to kinematic inferences, only the last one allows us to draw dynamic conclusions!
### Bearings
To describe almost any structure, we need to say something about its orientation (also known as its attitude): Does it run north-south, or perhaps east-west, or somewhere in between? A direction relative to north is called a bearing. In most geologic work, bearings are specified as azimuths.
An azimuth is a bearing measured clockwise from north.
An azimuth of 000° represents north, 087° is just a shade north of east, 225° represents southwest, and 315° represents northwest.
Notice that it is best to use a three-digit number for azimuths. This helps to avoid confusion with inclinations (below). The degree symbol is often omitted when recording large numbers of azimuths.
Confusingly, there are other methods of specifying an azimuth. In the United States, bearings are often specified using quadrants.
In the quadrants method of measuring bearings, angles are measured starting at either due north or due south (whichever is closest), and measured by counting degrees toward the east or west.
Here are the four azimuths above, converted to the quadrants representation:
| Azimuth | Quadrant bearing |
| --- | --- |
| 000° | N00E |
| 087° | N87E |
| 225° | S45W |
| 315° | N45W |
Because it is more confusing, especially when doing calculations, we will not use the quadrants method much in this manual. However, you need to be prepared to understand measurements recorded as quadrants, especially when reading books and geologic reports published in the U.S.
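The azimuth-to-quadrant conversion is mechanical enough to automate. A minimal Python sketch (the function name is ours, not from the manual):

```python
def azimuth_to_quadrant(azimuth):
    """Convert an azimuth (degrees clockwise from north) to quadrant notation."""
    azimuth = azimuth % 360
    if azimuth <= 90:
        return "N%02dE" % azimuth          # NE quadrant: from north toward east
    elif azimuth < 180:
        return "S%02dE" % (180 - azimuth)  # SE quadrant: from south toward east
    elif azimuth <= 270:
        return "S%02dW" % (azimuth - 180)  # SW quadrant: from south toward west
    else:
        return "N%02dW" % (360 - azimuth)  # NW quadrant: from north toward west
```

For the four azimuths tabulated above, this reproduces N00E, N87E, S45W, and N45W.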
Azimuths are typically measured with a compass, which uses the Earth’s magnetic field as a reference direction. In most parts of the Earth, the magnetic field is not aligned exactly north-south.
The magnetic declination is the azimuth of the Earth’s magnetic field.
Magnetic declination varies from place to place and varies slowly over time. Currently (2020) the declination in Edmonton is about 014°.
Most geological compasses have a mechanism for compensating for declination. Of course, the compass must be adjusted for the particular area in which you are working.
### Inclinations
Another type of measurement is often used in structural geology:
An inclination is an angle of slope measured downward relative to horizontal.
A horizontal line has an inclination of 00°, and a vertical one is inclined 90°. Always use two digits for inclination, to distinguish inclinations from azimuths (three digits).
Inclinations are measured using a device called a clinometer or inclinometer. Geological compasses typically have a built-in clinometer, so one instrument can be used for measuring both types of angle. However, you must hold the compass differently in each case:
To measure an azimuth precisely, using the Earth’s magnetic field, you must hold the compass horizontal;
To measure an inclination, you are using the Earth’s gravity field, and the compass must be held in a vertical plane.
### Orientation of a line
To specify the orientation of a line requires two measurements, called plunge and trend:
The plunge of a line is its inclination, measured downward relative to horizontal;
The trend of a line is its azimuth, measured in the direction of plunge.
So, a line with plunge 07 and trend 007 slopes downward very gently in a direction just east of north. 227-87 specifies a line that plunges very steeply towards the SW.
There are several different conventions for writing plunge and trend measurements: some geologists write the plunge first and some write it second. The best way to keep things clear is to always use three digits for the trend and two for the plunge. In addition, it’s sometimes helpful to specify the compass direction, just as a check, e.g.
025-37 NE
### Orientation of a plane
To specify the orientation of a plane, we also need two measurements, an azimuth and an inclination. The dip of a plane is its inclination. It’s important when measuring dip to measure the steepest possible slope in the plane. If you are in doubt, imagine water running down the surface; it will take the steepest path, in the direction of dip.
The dip of a plane is the inclination of the steepest line in the plane.
The azimuth of a plane is a bit more complicated. There are several different directions that we might measure. If we measure the direction in which the plane slopes downhill, then we are measuring dip direction.
The dip direction of the plane is the azimuth of the steepest line in the plane.
However, dip direction is not easy to measure accurately with many compasses, because the slope of the plane varies rather gradually on either side of the dip direction. For this reason, many geologists prefer to measure the strike, which refers to the direction of a horizontal line drawn on the surface.
The strike of a plane is the azimuth of a horizontal line that lies in the plane.
There are two directions in which we could measure the strike, 180° apart! The dip direction is clockwise from one, and counterclockwise from the other. In most Canadian geological field work, the right-hand rule (‘RHR‘) is used to avoid this ambiguity.
Right-hand rule: When you are facing in the strike direction, the plane dips downward to your right.
An equivalent statement is that strike is always 90° counter-clockwise from the dip direction.
It’s a good idea to add a rough compass direction to the dip measurement, just as a check that right-hand rule measurement has been done correctly. For example:
345/45 NE
specifies a plane that dips at 45° with strike roughly NNW. The dip direction is clockwise from the strike, so the dip direction is ENE – but ‘NE’ indicates that we have the direction right.
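Under the right-hand rule, converting between strike and dip direction is a single modular addition. A sketch (function names are ours):

```python
def dip_direction_from_strike(strike):
    """RHR: dip direction is 90 degrees clockwise from the strike."""
    return (strike + 90) % 360

def strike_from_dip_direction(dip_direction):
    """Strike is 90 degrees counter-clockwise from the dip direction."""
    return (dip_direction - 90) % 360
```

For the plane 345/45 NE above, `dip_direction_from_strike(345)` returns 75, i.e. ENE, consistent with the check letters.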
#### Other conventions for defining the orientation of a plane
Unfortunately, there are several other conventions to resolve strike ambiguity.
Some geologists prefer to record whichever strike direction is less than 180°, and use letters (e.g. 'NE') to resolve the ambiguity. In this convention ('strike, dip, alphabetic dip direction') the above measurement would be written:
165/45 NE
Other geologists prefer to record dip direction and dip. In the ‘dip-direction, dip’ (DDD) convention, the above measurement would be written:
075,45
In the UK the strike has sometimes been specified so that the dip direction is counterclockwise from the strike, though confusingly this convention is also called ‘right-hand rule’. If you want to know the logic for this convention, ask a British geologist! (It has nothing to do with driving on the left side of the road.) In this convention, our plane would be:
165/45
In most work for this course, planes will be specified using the (Canadian) right-hand rule. However, you should be prepared, as geologists, to work with data collected using any of the other conventions.
### Relationship of lines to planes
Often it’s possible to measure several different linear and planar structures at a single outcrop. Sometimes there are special relationships between these structures. The following sections describe some of these relationships.
#### Intersecting planes
If two planar structures have different orientations, they will intersect in space. The intersection of two non-parallel planes defines a line (Fig 5). The orientation of the intersection line depends only on the orientation of the two planes. (If we change the position of one or both planes but keep their orientation constant, the location of the line of intersection will change, but not its orientation.) There are many situations that you will meet in this manual where planes intersect. The following are particularly important:
• The intersection of a geological surface with the topographic surface (the ground) is called the surface trace or outcrop trace (or just trace) of that surface. Geological maps are typically divided into areas of different colours (for different rock units) that are bounded by lines; these lines on the map are the traces of the geological surfaces that separate the units.
• The intersection of a fault plane with a planar rock unit that the fault displaces produces a line called the fault cut-off or cutoff.
• The two sides, or limbs, of a fold may intersect on a line called the fold hinge.
• The truncation, at an unconformity, of an older planar rock unit or surface by a younger one with a different orientation in space produces a line which may be called the subcrop, or subcrop limit.
#### Line that lies in a plane
On any given plane, it’s possible to draw an infinite number of lines that are parallel to, or ‘lie in‘, the plane. Some examples are current lineations that lie in bedding planes, and striations on fault planes that lie in the fault plane itself.
The orientation of a line that lies in a plane may be specified by rake or pitch. Unlike an azimuth (which is measured from north in a horizontal plane) or an inclination (which is measured from horizontal in a vertical plane) a rake is measured from horizontal in an inclined plane as shown in Fig. 6. As with strike, there are several conventions for specifying rake. We recommend measuring the rake of a line from the ‘right-hand rule’ strike direction, clockwise when looking down on the surface, as an angle between 000° and 180°.
For example, a geologist may record a fault surface like this:
Fault plane 075/78 SE; Slickenlines rake 108°
On a vertical plane the rake of a line is the same as its plunge. On all other planes, rake ≥ plunge.
Remember: it only makes sense to measure a rake when a line lies in a plane.
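The relation rake ≥ plunge follows from a standard identity that is not derived in this section: for a line with rake r lying in a plane of dip δ, sin(plunge) = sin(r)·sin(δ). A Python sketch of that relation (function name ours):

```python
import math

def plunge_from_rake(rake_deg, dip_deg):
    """Plunge of a line lying in a plane: sin(plunge) = sin(rake) * sin(dip).
    Rake is measured from the RHR strike direction, 0-180 degrees."""
    s = math.sin(math.radians(rake_deg)) * math.sin(math.radians(dip_deg))
    return math.degrees(math.asin(s))
```

On a vertical plane (dip 90°) with rake ≤ 90° this reduces to plunge = rake, as stated above; since sin(dip) ≤ 1, the plunge can never exceed the rake.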
#### Pole to a plane
There’s also an infinite number of other lines are not parallel to any given plane (they may pierce the plane). One special line is perpendicular to any given plane: it’s sometimes called the pole to the plane. We will meet poles to planes in a later section of the course.
### Contours
#### What are contours?
Contours are curving lines on a map that are widely used in the Earth sciences to show the variation of some quantity over the Earth’s surface. You are probably most familiar with topographic contours that show the shape of the land surface. However, Earth science uses many other types of contours such as:
• Magnetic contours: Variations in the strength of the Earth’s magnetic field;
• Isobars: Variations in air pressure;
• Isopachs: Variations in the thickness of a stratigraphic unit;
• Structure contours: Variations in the elevation above sea-level or depth below sea-level of a geological surface.
In each of these cases a numerical quantity, such as the elevation of a surface, varies from place to place, and the contour lines illustrate that spatial variation.
A contour is a curving line on a map that separates higher values of some quantity from lower values.
A contour can also be thought of as a line connecting points at which the measured quantity has constant value. Each contour line is labelled with this constant value; a map covered with contour lines is a useful expression of the spatial variation of the measured quantity.
(Note: This property is sometimes used as a definition of a contour. For example, a topographic contour is sometimes defined ‘as a line joining points of equal elevation‘. Although this is a satisfactory definition, it is harder to apply in practice, for two reasons. First, when the data are sparse, for example when working with drilled wells, it may be difficult to find any points of exactly equal elevation; locating such points requires interpolation. Second, it is very easy, when threading contours, to end up with “lower” points on both sides of the same contour line. This is always wrong! So, it is imperative when drawing a contour to remember that it has a ‘high’ side and a ‘low’ side, so that it always separates higher and lower values.)
Often, the measured quantity is the elevation of the Earth’s surface, above or below sea level. A topographic contour can be considered as a line on the ground separating points of higher and lower elevation. It can also be thought of as the line of intersection of the ground surface with a horizontal plane. Below sea level, contours showing the elevation of the sea floor are known as bathymetric contours.
On most topographic maps, topographic contours are separated by a constant interval: for example, contours on a map might be drawn at 310, 320, 330, 340 m etc. The spacing of the contours is called the contour interval. In this example the contour interval is 10 m.
A structure contour (Fig. 8) is a contour line on a geologic surface, such as the top or bottom of a rock formation, a fault, or an unconformity. Typically, structure contours are drawn on surfaces that are buried underground. However, sometimes it’s possible to guess where a geologic surface was before it was eroded away; structure contours are then drawn for this imaginary surface above ground! Just like a topographic contour, a structure contour is the line of intersection of the contoured surface with a horizontal plane.
#### Strike, dip, and contours
Because structure contours are by definition lines of constant elevation, they are parallel to the strike of the geologic surface. They are sometimes called strike lines. So, given a pattern of structure contours it’s possible to determine the strike of the surface at any point.
The dip of the surface controls how far apart the contours are. Where a surface dips steeply, the contours are close together; where the surface is near-horizontal, the contours are far apart. The horizontal spacing of contours, as measured on the map, is called the contour spacing. There is a simple relationship between the dip δ of a surface and the spacing of its contours.
tan (δ) = contour interval / contour spacing
If a surface is planar (i.e. the strike and dip are constant) then the contours will be parallel, equally spaced, straight lines. Thus you can readily determine the orientation of a surface from the azimuth and spacing of its structure contours.
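The tangent relationship above inverts easily, so a few structure contours give the dip directly. A minimal sketch (function name ours):

```python
import math

def dip_from_contours(contour_interval, contour_spacing):
    """Dip angle in degrees, from tan(dip) = contour interval / contour spacing.
    Both arguments must be in the same length units, with the spacing measured
    at true map scale."""
    return math.degrees(math.atan(contour_interval / contour_spacing))
```

For example, structure contours at a 10 m interval spaced 10 m apart indicate a dip of 45°; the same interval spaced 1 km apart indicates a near-horizontal surface.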
### Contours and outcrop traces
On a geologic map, a geologic surface such as the boundary between two map-units appears as a line, called the outcrop trace or topographic trace of that boundary (Fig. 9). Typically, the outcrop traces of geological units are quite complicated, curving lines, because they are affected both by the dip of the geologic surface and the complex shape of the topographic surface. Because of this, in areas of topographic relief, it’s often possible to use the outcrop trace of a boundary between two rock units to make inferences about the strike and dip of the units.
The precise orientation of a surface can be determined from its outcrop trace because its position and elevation are known at every point where the trace crosses a topographic contour line. These intersection points can be used for drawing structure contours (Fig. 10). Thus, for example, the 400 m structure contour is constructed by connecting all points where the outcrop trace crosses the 400 m topographic contour. Once a number of structure contours have been drawn, the orientation of the surface may be determined from the spacing and orientation of the structure contours.
Conversely, if structure contours of a geologic surface are known, its trace can be determined by connecting points where the geologic and topographic surfaces have the same elevation; i.e. the trace connects points where structure and topographic contours with the same elevation cross one another.
Where the elevation of a structure contour is greater than topographic elevation, this means the geological surface is “above ground”, and has thus been removed by erosion at that location. Conversely, where the elevation of a structure contour is less than topographic elevation, this means the geological surface is below ground, or in the subsurface, and can be encountered by excavation or drilling. The outcrop trace of a geological surface can thus be thought of as a line that separates a region where that surface is present below ground, from another region where the surface has been eroded away above the present-day ground.
There are some general considerations when constructing geologic traces (Fig. 11).
• The outcrop trace of a horizontal geological surface is parallel to the topographic contours.
• The outcrop trace of a vertical geological surface is a straight line parallel to the strike; it ignores topographic contours.
• The outcrop traces of dipping surfaces show V-shapes as they cross valleys and ridges; these regions are particularly useful in determining strike and dip.
• In general, the V-shape formed as a trace crosses a river valley points in the direction of dip. (This is known as the “rule of vees”.) The only exception occurs when the dip is in the same direction as the slope of the valley, but gentler than the gradient of the river; then the V-shapes point up-dip.
• For planar surfaces with shallow dip (gentler than the typical hill slopes of topography in the region) the outcrop trace will generally follow topographic contours quite closely, crossing them at widely spaced intervals.
• In such regions, the relative position of a top or bottom contact of a unit can be inferred from the local topography. For example, if the position of the bottom trace of a unit is known then the top of the unit must be exposed at a higher elevation.
A geologic trace should never cross a topographic contour except at a point where a structure contour and a topographic contour of the same elevation intersect.
This page titled 1.2: Orientation of Structures is shared under a CC BY-NC license and was authored, remixed, and/or curated by John Waldron & Morgan Snyder (Open Education Alberta).
https://brilliant.org/problems/pulleys-pulleys-everywhere-3/

# Pulleys Pulleys Everywhere -3
The system is shown in the figure; a man pulls the rope from both sides with constant speed $$u$$. What is the speed of the block? ($$M$$ can only move vertically.)
https://www.khanacademy.org/math/cc-eighth-grade-math/cc-8th-linear-equations-functions/cc-8th-graphing-prop-rel/e/graphing-proportional-relationships

# Graphing proportional relationships
### Problem
Keith is saving money for a car. He has saved the same amount each year for the past three years, and records how much he has at the end of each year in the table below.
|  | Year 1 | Year 2 | Year 3 |
| --- | --- | --- | --- |
| Total amount saved | $1500 | $3000 | $4500 |
What is Keith's unit rate of change of dollars with respect to time; that is, how much does Keith save in one year?
The unit rate is ______ dollars per year.
Graph the proportional relationship described above, with the x-coordinate representing years, and the y-coordinate representing amount saved in thousands of dollars.
https://iwaponline.com/wst/article-abstract/79/12/2387/68884/Degradation-of-norfloxacin-in-aqueous-solution?redirectedFrom=fulltext

## Abstract
The frequent detection of antibiotics in water bodies gives rise to concerns about their removal technology. In this study, the degradation kinetics and mechanisms of norfloxacin (NOR), a typical fluoroquinolone pharmaceutical, under UV/peroxydisulfate (UV/PDS) treatment were investigated. NOR could be degraded effectively by this process; the degradation rate increased with increasing PDS dosage but decreased with increasing NOR concentration. In real water, the degradation of NOR was slower than in ultrapure water, which indicates that laboratory results cannot be used directly to predict the natural fate of antibiotics. Further experiments suggested that degradation of NOR was fastest under neutral conditions, that the presence of HA or FA inhibited the degradation of NOR, and that the inorganic ions tested (NO3−, Cl−, CO32− and HCO3−) had no significant effect. The total organic carbon (TOC) removal rate (40%) indicated that NOR was not completely mineralized; six transformation products were identified, and possible degradation pathways of NOR were proposed. These results suggest that UV/PDS technology could be used for advanced treatment of wastewater containing fluoroquinolones.
http://jeffreyhorner.tumblr.com/post/28130157240/rapache-1-2-0-released

July 27, 2012
rApache 1.2.0 Released
With this release comes a minor change in behavior: for requests that have been configured with RFileEval, RFileHandler, or using the r-script handler, rApache will set the working directory to the file’s directory.
For instance with a Rook deployment like this:
```apache
<Location /hmisc>
  SetHandler r-handler
  RFileEval "/home/hornerj/Hmisc/config.R:Rook::Server$call(app)"
</Location>
```
It makes sense to change the working directory to /home/hornerj/Hmisc. That way, the examples in the Rook package can work without change.
Also, for:
```apache
<Directory /home/hornerj/rapache/test/brew>
  SetHandler r-script
  RHandler brew::brew
</Directory>
```
and a request of /home/hornerj/rapache/test/brew/simple.html, it makes sense to set the working directory to:
/home/hornerj/rapache/test/brew
Or if the request was /home/hornerj/rapache/test/brew/subdir/foo.html, it makes sense to set it to:
/home/hornerj/rapache/test/brew/subdir
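In every case the rule is the same: the working directory becomes the directory part of the file being evaluated. A Python sketch of that mapping, for illustration only (the behaviour itself is rApache's; the helper name is ours):

```python
import os.path

def working_directory_for(script_path):
    """Directory rApache 1.2.0 switches to before evaluating this file."""
    return os.path.dirname(script_path)
```

For the examples above, `working_directory_for("/home/hornerj/rapache/test/brew/subdir/foo.html")` yields `/home/hornerj/rapache/test/brew/subdir`.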
Yay for minor releases!
9:50am | URL: http://tmblr.co/Zf5rDyQCi2Au
https://codereview.stackexchange.com/questions/177911/java-aes-256-gcm-file-encryption

# Java AES-256 GCM file encryption
I wrote my first file encryption program. It encrypts a file with AES-256 GCM and stores the IV and salt prepended to the file content, so it's likely that I did some things in a less-than-ideal way.
I would like you to look at my code and point out errors or places where it is possible to make better.
import javax.crypto.*;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.*;
import java.security.GeneralSecurityException;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Base64;
import java.util.logging.Level;
import java.util.logging.Logger;
public class FileCryptor implements Serializable {
private static final Logger LOGGER = Logger.getLogger(FileCryptor.class.getName());
private static final int DEFAULT_GCM_AUTHENTICATION_TAG_SIZE_BITS = 128;
private static final int DEFAULT_GCM_IV_NONCE_SIZE_BYTES = 12;
private static final int DEFAULT_PBKDF2_ITERATIONS = 65536;
private static final int DEFAULT_PBKDF2_SALT_SIZE_BYTES = 32;
private static final int DEFAULT_AES_KEY_LENGTH_BITS = 256;
private static final String DEFAULT_CIPHER = "AES";
private static final String DEFAULT_CIPHERSCHEME = "AES/GCM/NoPadding";
private static final String DEFAULT_PBKDF2_SCHEME = "PBKDF2WithHmacSHA256";
private int gcmAuthenticationTagSizeBits = DEFAULT_GCM_AUTHENTICATION_TAG_SIZE_BITS;
private int gcmIvNonceSizeBytes = DEFAULT_GCM_IV_NONCE_SIZE_BYTES;
private int pbkdf2Iterations = DEFAULT_PBKDF2_ITERATIONS;
private int pbkdf2SaltSizeBytes = DEFAULT_PBKDF2_SALT_SIZE_BYTES;
private int aesKeyLengthBits = DEFAULT_AES_KEY_LENGTH_BITS;
private String cipher = DEFAULT_CIPHER;
private String cipherscheme = DEFAULT_CIPHERSCHEME;
private String pbkdf2Scheme = DEFAULT_PBKDF2_SCHEME;
/**
* Creates a new empty FileCryptor object
*/
public FileCryptor() {
}
/**
* Generates a randomly filled byte array
*
* @param sizeInBytes length of the array in bytes
* @return byte array containing random values
* @throws NoSuchAlgorithmException
*/
private static byte[] generateRandomArry(int sizeInBytes) throws NoSuchAlgorithmException {
/* generate random salt */
final byte[] salt = new byte[sizeInBytes];
SecureRandom random = SecureRandom.getInstanceStrong();
random.nextBytes(salt);
return salt;
}
/**
* Encrypts the provided file using the provided password and writes the
* result to a new file with extension ".enc"
*
* @param password password from which the encryption key is derived
* @param path path to the readable input file
* @throws GeneralSecurityException
*/
public void encrypt(String password, String path) throws GeneralSecurityException {
/* Derive the key*/
SecretKeyFactory factory = SecretKeyFactory.getInstance(pbkdf2Scheme);
byte[] newSalt = generateRandomArry(pbkdf2SaltSizeBytes);
KeySpec keyspec = new PBEKeySpec(password.toCharArray(), newSalt, pbkdf2Iterations, aesKeyLengthBits);
SecretKey tmp = factory.generateSecret(keyspec);
SecretKey key = new SecretKeySpec(tmp.getEncoded(), cipher);
Cipher myCipher = Cipher.getInstance(cipherscheme);
byte[] newNonce = generateRandomArry(gcmIvNonceSizeBytes);
GCMParameterSpec spec = new GCMParameterSpec(gcmAuthenticationTagSizeBits, newNonce);
myCipher.init(Cipher.ENCRYPT_MODE, key, spec);
try (
FileInputStream fileInputStream = new FileInputStream(path);
FileOutputStream fileOutputStream = new FileOutputStream(path + ".enc");
CipherOutputStream encryptedOutputStream = new CipherOutputStream(fileOutputStream, myCipher);
) {
// write IV/nonce
fileOutputStream.write(newNonce);
// write salt
fileOutputStream.write(newSalt);
byte[] buffer = new byte[32];
int bytesRead;
// copy the plaintext through the encrypting stream
while ((bytesRead = fileInputStream.read(buffer)) != -1) {
    encryptedOutputStream.write(buffer, 0, bytesRead);
}
} catch (IOException e) {
LOGGER.log(Level.SEVERE, e.getMessage(), e);
throw new SecurityException(e.getMessage(), e);
}
}
/**
* Decrypts the encrypted file using the provided password.
*
* @param password password used when the file was encrypted
* @param path path to a previously encrypted file to be decrypted
* @throws GeneralSecurityException
*/
public void decrypt(String password, String path) throws GeneralSecurityException {
byte[] myNonce = new byte[gcmIvNonceSizeBytes];
byte[] mySalt = new byte[pbkdf2SaltSizeBytes];
try (
FileInputStream fileInputStream = new FileInputStream(path);
) {
    // read the IV/nonce and salt prepended to the ciphertext
    DataInputStream headerStream = new DataInputStream(fileInputStream);
    headerStream.readFully(myNonce);
    headerStream.readFully(mySalt);
} catch (IOException e) {
LOGGER.log(Level.SEVERE, e.getMessage(), e);
throw new SecurityException(e.getMessage(), e);
}
/* Derive the key*/
SecretKeyFactory factory = SecretKeyFactory.getInstance(pbkdf2Scheme);
KeySpec keyspec = new PBEKeySpec(password.toCharArray(), mySalt, pbkdf2Iterations, aesKeyLengthBits);
SecretKey tmp = factory.generateSecret(keyspec);
SecretKey key = new SecretKeySpec(tmp.getEncoded(), cipher);
Cipher myCipher = Cipher.getInstance(cipherscheme);
GCMParameterSpec spec = new GCMParameterSpec(gcmAuthenticationTagSizeBits, myNonce);
myCipher.init(Cipher.DECRYPT_MODE, key, spec);
try (
FileOutputStream fileOutputStream = new FileOutputStream(path.substring(0, path.length() - 4));
FileInputStream fileInputStream = new FileInputStream(path);
CipherInputStream cipherInputStream = new CipherInputStream(fileInputStream, myCipher);
) {
byte[] skipped = new byte[gcmIvNonceSizeBytes+pbkdf2SaltSizeBytes];
int countReadBytesSkipped = fileInputStream.read(skipped);
byte[] buffer = new byte[32];
int count;
while ((count = cipherInputStream.read(buffer)) > 0) {
fileOutputStream.write(buffer, 0, count);
}
} catch (IOException e) {
LOGGER.log(Level.SEVERE, e.getMessage(), e);
throw new SecurityException(e.getMessage(), e);
}
}
}
• Not a good idea to implement crypto algorithms by yourself, except for exercising. Oct 14 '17 at 14:58
• @BillalBEGUERADJ I wouldn't say that this post necessarily does that. It seems to use existing algorithms. Oct 14 '17 at 15:12
• Possible duplicate of AES Encryption/Decryption with key Feb 14 '18 at 14:44
• Looks like a follow-up post to me... – user34073 Feb 14 '18 at 16:55
• Please try and clean up your code in your fave IDE first. If you don't do that you'll get things like import java.util.Base64; which just isn't used. Feb 11 '20 at 3:33
## Elephant in the room
Let's first try and identify the elephant in the room. There is one big problem with using GCM for file decryption: tag validation. With GCM there are two choices: either you output unvalidated chunks of data, or you buffer all data until the tag is validated. The first option is the one any reasonable low-level API designer should offer, leaving it up to the user to handle invalid tags.
In Java, invalid tags result in an AEADBadTagException (or just a BadPaddingException in some older / third-party implementations). Unfortunately the Java devs had some struggle with this as well, so there have been multiple implementations of both the cipher and the CipherInputStream & CipherOutputStream, if I remember correctly. To be honest, I lost track of this issue (and most of my knowledge of it) some time ago.
Anyway, you should really test how the current Java versions handle the situation. But it is very important that for your program you make sure that:
1. not all data is buffered (for large files);
2. you indicate to the user that the decryption has failed and that cleanup is necessary.
Because if the data is not all buffered, you don't want to leave the user with a partial file whose content was never validated. Some kind of cleanup strategy is required.
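One such strategy, sketched here in Python with an HMAC standing in for the GCM tag (the function, key, and container layout below are invented for illustration, not taken from the post): decrypt into a temporary file and publish it only once the tag validates, deleting the partial output otherwise.

```python
import hashlib
import hmac
import os
import tempfile

KEY = b"k" * 32  # demo key

def decrypt_to(path_in: str, path_out: str) -> None:
    # Hypothetical container: payload || 32-byte HMAC-SHA256 tag.
    with open(path_in, "rb") as f:
        blob = f.read()
    payload, tag = blob[:-32], blob[-32:]
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path_out) or ".")
    try:
        with os.fdopen(fd, "wb") as out:
            out.write(payload)  # a streaming decryptor would write chunks here
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication tag mismatch")
        os.replace(tmp, path_out)  # publish only fully validated output
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)  # never leave partial, unauthenticated output behind
        raise
```

The point is that the unvalidated file never appears under its final name.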
## Design
The class requires one password per file. That's OK, but mind that this gets cumbersome fast for multiple files, especially since the PBKDF2 function needs to be run each time - even if you use the same password. Creating a key once and then reusing it makes more sense, and it simplifies your methods. You still have a random IV anyway.
The parameters of the function say that you need to provide a path to a writable file. However, that's not how path is actually used. All side effects should be made clear to the user, and the format of the encrypted file should be documented as well.
The design is relatively OK. The streams seem to be used correctly. The security parameters seem correct as well, kudos. You are using streaming with a small (too small?) buffer size - which isn't configurable and is stated as a literal instead of a constant. There is precious little encoding / decoding going on, which is exactly how file handling should be; little to no stringified code to be found, except maybe for the parameters (char[] and File make more sense to me).
The iteration count is already on the small side; it is very much expected to change per system, so I would write it to the file as well. Including a versioning header is always a good idea.
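For instance, a small self-describing header could carry a version byte and the iteration count in front of the salt and nonce. A Python sketch (the magic bytes and layout are hypothetical, not from the post):

```python
import struct

MAGIC = b"FC01"  # hypothetical magic bytes identifying the container format

def pack_header(version: int, iterations: int, salt: bytes, nonce: bytes) -> bytes:
    # magic | version (1 byte) | iterations (4 bytes, big-endian)
    # | salt length | salt | nonce length | nonce
    return (MAGIC + struct.pack(">BI", version, iterations)
            + bytes([len(salt)]) + salt + bytes([len(nonce)]) + nonce)

def unpack_header(blob: bytes):
    assert blob[:4] == MAGIC, "not an encrypted container"
    version, iterations = struct.unpack_from(">BI", blob, 4)
    off = 4 + struct.calcsize(">BI")
    salt_len = blob[off]
    salt = blob[off + 1: off + 1 + salt_len]
    off += 1 + salt_len
    nonce_len = blob[off]
    nonce = blob[off + 1: off + 1 + nonce_len]
    return version, iterations, salt, nonce
```

Round-tripping the header recovers every parameter needed for decryption, so the iteration count can change later without breaking old files.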
There is too little checking on the parameters and on the files themselves, especially when it comes to error generation. Java does have more extensive file libraries since the NIO libraries were introduced.
The exception handling is not very good. There is too little distinction made between various exception classes, especially when it comes to e.g. ciphers not being found and input / output errors.
## FileCryptor class
public class FileCryptor implements Serializable {
Why would this kind of class be serializable? That just doesn't make sense. Besides that, you should include a serialization constant in case your inner class design is changed.
private int gcmAuthenticationTagSizeBits = DEFAULT_GCM_AUTHENTICATION_TAG_SIZE_BITS;
That's nice: you can reconfigure or upgrade your class without having to change your methods. However, a subclass cannot use these fields because they are private. And if you upgrade, then you can always introduce them. Don't increase the state / fields unless necessary and directly use the constants. The constants will get inlined, speeding up the Java code.
On the other hand:
private int pbkdf2Iterations = DEFAULT_PBKDF2_ITERATIONS;
is something you do immediately want to be upgradable, so much so that it could be saved with the file.
## generateRandomArry method
The method name is missing an "a": "Arry" should be "Array".
final byte[] salt = new byte[sizeInBytes];
Sorry? What salt? If you just return it as a generic array then you should not name the variable salt.
SecureRandom random = SecureRandom.getInstanceStrong();
That's overdoing it, getInstanceStrong() is for long term key generation. Just use new SecureRandom().
## encrypt method
password.toCharArray(),
The whole idea of char[] is that you can zero it out. So passing it as a string doesn't make a whole lot of sense.
Note too that the PBKDF2 function implementation of the standard JVM only uses the lower 8 bits of each char (rather stupidly if you ask me). I would make sure that the characters are not outside that range, or you may end up with something that is encrypted with a different password than you might have thunk. This is particularly "fun" if somebody uses a Chinese password or if you try and get compatibility with other runtimes.
From the PBEKeySpec class documentation:
You convert the password characters to a PBE key by creating an instance of the appropriate secret-key factory. For example, a secret-key factory for PKCS #5 will construct a PBE key from only the low order 8 bits of each password character, whereas a secret-key factory for PKCS #12 will take all 16 bits of each character.
And, as PBKDF2 is specified in the PKCS#5 Password Based Encryption (PBE) standards, this is from the algorithms page of Java:
Password-based key-derivation algorithm defined in PKCS #5: Password-Based Cryptography Specification, Version 2.1 using the specified pseudo-random function (PRF). Example: PBKDF2WithHmacSHA256.
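The consequence of that low-order-8-bits rule is easy to reproduce with Python's hashlib.pbkdf2_hmac; the truncation helper below mimics the PKCS #5 behaviour quoted above (it is an illustration, not how Java itself is invoked):

```python
import hashlib

def low8_bytes(password: str) -> bytes:
    # Keep only the low-order 8 bits of each character, mimicking the
    # PKCS #5 secret-key factory behaviour described above.
    return bytes(ord(c) & 0xFF for c in password)

def derive(pw: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pw, b"fixed-salt", 10_000, dklen=32)

# 'A' (U+0041) and 'Ł' (U+0141) share the low byte 0x41, so the
# low-8-bits scheme silently derives the same key for both passwords:
assert derive(low8_bytes("A-secret")) == derive(low8_bytes("\u0141-secret"))

# A full UTF-8 encoding of the password keeps them distinct:
assert derive("A-secret".encode()) != derive("\u0141-secret".encode())
```

Two visually different passwords collapse to the same key under the 8-bit scheme, which is exactly the compatibility trap described above.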
byte[] buffer = new byte[32];
int count;
while ((count = fileInputStream.read(buffer)) > 0) {
encryptedOutputStream.write(buffer, 0, count);
}
InputStream has a transferTo method since Java 9.
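The equivalent chunked-copy idiom in Python, for comparison, is shutil.copyfileobj, which loops over fixed-size chunks internally much like transferTo:

```python
import io
import shutil

src = io.BytesIO(b"\x00" * 100_000)
dst = io.BytesIO()
# Copies in fixed-size chunks without ever buffering the whole payload.
shutil.copyfileobj(src, dst, length=32)
assert dst.getvalue() == b"\x00" * 100_000
```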
throw new SecurityException(e.getMessage(), e);
No, that cannot be right. Think of your own message and possibly throw your own checked exception. A security exception "file cannot be opened" doesn't make sense.
From the documentation:
Thrown by the security manager to indicate a security violation.
... that's not it. Probably you wanted to reuse GeneralSecurityException, but that would also be wrong.
## decrypt method
byte[] myNonce = new byte[gcmIvNonceSizeBytes];
Oh, my, don't start with the my prefix now, just use the same names as in the encryption method.
int countReadBytesNonce = fileInputStream.read(myNonce);
Please use readFully. This won't fail on files, but will fail for other input streams.
/* Derive the key*/
If you have to write that down then you need to introduce a method. Duplicate code.
path.length() - 4
Oh, right, that's never going to fail right? Try running that on "LPT1.enc" on Windows. Please check your input before doing that.
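A defensive variant of that suffix handling, sketched in Python (the ".enc" convention is taken from the code above; the helper name is mine): validate before slicing instead of blindly dropping four characters.

```python
SUFFIX = ".enc"

def output_path(path: str) -> str:
    # Refuse names that don't carry the suffix, or that are nothing but it.
    if not path.endswith(SUFFIX) or len(path) <= len(SUFFIX):
        raise ValueError(f"expected a non-empty name ending in {SUFFIX}: {path!r}")
    return path[: -len(SUFFIX)]

assert output_path("report.pdf.enc") == "report.pdf"
```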
byte[] skipped = new byte[gcmIvNonceSizeBytes+pbkdf2SaltSizeBytes];
Just keep the file open please.
• "Java only uses the lower 8 bits of each char" — can you be more specific what you mean with "Java" here? It would be nice to have a citation here. Feb 11 '20 at 7:16
• By the way, Windows also doesn't allow LPT1.enc to be created since the base name of the file must not be reserved. I experienced this when some Git repository had a file called include/aux.h. Feb 12 '20 at 1:29
• Hmm yeah, bad example maybe. I'll think of another one. But the main thing is to validate the files before continuing (although you could expect that to happen outside of the functionality provided by the methods, which is why I'm more in favor of passing File, no need to create that object multiple times). Feb 12 '20 at 1:30
https://jump.dev/JuMP.jl/stable/tutorials/linear/diet/

The diet problem
This tutorial solves the classic "diet problem", also known as the Stigler diet.
Required packages
This tutorial requires the following packages:
using JuMP
import DataFrames
import HiGHS
Formulation
Suppose we wish to cook a nutritionally balanced meal by choosing the quantity of each food $f$ to eat from a set of foods $F$ in our kitchen.
Each food $f$ has a cost, $c_f$, as well as a macro-nutrient profile $a_{m,f}$ for each macro-nutrient $m \in M$.
Because we care about a nutritionally balanced meal, we set some minimum and maximum limits for each nutrient, which we denote $l_m$ and $u_m$ respectively.
Furthermore, because we are optimizers, we seek the minimum cost solution.
With a little effort, we can formulate our dinner problem as the following linear program:
$$
\begin{aligned}
\min & \sum\limits_{f \in F} c_f x_f \\
\text{s.t.}\ \ & l_m \le \sum\limits_{f \in F} a_{m,f} x_f \le u_m, && \forall m \in M \\
& x_f \ge 0, && \forall f \in F
\end{aligned}
$$
In the rest of this tutorial, we will create and solve this problem in JuMP, and learn what we should cook for dinner.
Data
First, we need some data for the problem:
foods = DataFrames.DataFrame(
[
"hamburger" 2.49 410 24 26 730
"chicken" 2.89 420 32 10 1190
"hot dog" 1.50 560 20 32 1800
"fries" 1.89 380 4 19 270
"macaroni" 2.09 320 12 10 930
"pizza" 1.99 320 15 12 820
"salad" 2.49 320 31 12 1230
"milk" 0.89 100 8 2.5 125
"ice cream" 1.59 330 8 10 180
],
["name", "cost", "calories", "protein", "fat", "sodium"],
)
9×6 DataFrame
 Row │ name       cost  calories  protein  fat  sodium
     │ Any        Any   Any       Any      Any  Any
─────┼─────────────────────────────────────────────────
   1 │ hamburger  2.49  410       24       26   730
   2 │ chicken    2.89  420       32       10   1190
   3 │ hot dog    1.5   560       20       32   1800
   4 │ fries      1.89  380       4        19   270
   5 │ macaroni   2.09  320       12       10   930
   6 │ pizza      1.99  320       15       12   820
   7 │ salad      2.49  320       31       12   1230
   8 │ milk       0.89  100       8        2.5  125
   9 │ ice cream  1.59  330       8        10   180
Here, $F$ is foods.name and $c_f$ is foods.cost. (We're also playing a bit loose with the term "macro-nutrient" by including calories and sodium.)
Tip
Although we hard-coded the data here, you could also read it in from a file. See Getting started with data and plotting for details.
We also need our minimum and maximum limits:
limits = DataFrames.DataFrame(
[
"calories" 1800 2200
"protein" 91 Inf
"fat" 0 65
"sodium" 0 1779
],
["name", "min", "max"],
)
4×3 DataFrame
 Row │ name      min   max
     │ Any       Any   Any
─────┼─────────────────────
   1 │ calories  1800  2200
   2 │ protein   91    Inf
   3 │ fat       0     65
   4 │ sodium    0     1779
JuMP formulation
Now we're ready to convert our mathematical formulation into a JuMP model.
First, create a new JuMP model. Since we have a linear program, we'll use HiGHS as our optimizer:
model = Model(HiGHS.Optimizer)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: HiGHS
Next, we create a set of decision variables x, indexed over the foods in the foods DataFrame. Each x has a lower bound of 0.
@variable(model, x[foods.name] >= 0);
Our objective is to minimize the total cost of purchasing food. We can write that as a sum over the rows in foods.
@objective(
model,
Min,
sum(food["cost"] * x[food["name"]] for food in eachrow(foods)),
);
For the next component, we need to add a constraint that our total intake of each component is within the limits contained in the limits DataFrame. To make this more readable, we introduce a JuMP @expression
for limit in eachrow(limits)
intake = @expression(
model,
sum(food[limit["name"]] * x[food["name"]] for food in eachrow(foods)),
)
@constraint(model, limit.min <= intake <= limit.max)
end
What does our model look like?
print(model)
Min 2.49 x[hamburger] + 2.89 x[chicken] + 1.5 x[hot dog] + 1.89 x[fries] + 2.09 x[macaroni] + 1.99 x[pizza] + 2.49 x[salad] + 0.89 x[milk] + 1.59 x[ice cream]
Subject to
410 x[hamburger] + 420 x[chicken] + 560 x[hot dog] + 380 x[fries] + 320 x[macaroni] + 320 x[pizza] + 320 x[salad] + 100 x[milk] + 330 x[ice cream] ∈ [1800.0, 2200.0]
24 x[hamburger] + 32 x[chicken] + 20 x[hot dog] + 4 x[fries] + 12 x[macaroni] + 15 x[pizza] + 31 x[salad] + 8 x[milk] + 8 x[ice cream] ∈ [91.0, Inf]
26 x[hamburger] + 10 x[chicken] + 32 x[hot dog] + 19 x[fries] + 10 x[macaroni] + 12 x[pizza] + 12 x[salad] + 2.5 x[milk] + 10 x[ice cream] ∈ [0.0, 65.0]
730 x[hamburger] + 1190 x[chicken] + 1800 x[hot dog] + 270 x[fries] + 930 x[macaroni] + 820 x[pizza] + 1230 x[salad] + 125 x[milk] + 180 x[ice cream] ∈ [0.0, 1779.0]
x[hamburger] ≥ 0.0
x[chicken] ≥ 0.0
x[hot dog] ≥ 0.0
x[fries] ≥ 0.0
x[macaroni] ≥ 0.0
x[pizza] ≥ 0.0
x[salad] ≥ 0.0
x[milk] ≥ 0.0
x[ice cream] ≥ 0.0
Solution
Let's optimize and take a look at the solution:
optimize!(model)
solution_summary(model)
* Solver : HiGHS
* Status
Result count : 1
Termination status : OPTIMAL
Message from the solver:
"kHighsModelStatusOptimal"
* Candidate solution (result #1)
Primal status : FEASIBLE_POINT
Dual status : FEASIBLE_POINT
Objective value : 1.18289e+01
Objective bound : 0.00000e+00
Relative gap : Inf
Dual objective value : 1.18289e+01
* Work counters
Solve time (sec) : 3.02315e-04
Simplex iterations : 6
Barrier iterations : 0
Node count : -1
Success! We found an optimal solution. Let's see what the optimal solution is:
for food in foods.name
println(food, " = ", value(x[food]))
end
hamburger = 0.6045138888888871
chicken = 0.0
hot dog = 0.0
fries = 0.0
macaroni = 0.0
pizza = 0.0
salad = 0.0
milk = 6.9701388888888935
ice cream = 2.5913194444444447
That's a lot of milk and ice cream! And sadly, we only get 0.6 of a hamburger.
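As a quick sanity check outside JuMP (plain Python, not part of the tutorial), the reported quantities satisfy every limit and reproduce the objective value of about 11.829; the calories, protein, and sodium constraints all turn out to be binding:

```python
# Quantities reported by HiGHS (every other food is zero in the solution).
qty = {
    "hamburger": 0.6045138888888871,
    "milk": 6.9701388888888935,
    "ice cream": 2.5913194444444447,
}
# Per-unit (cost, calories, protein, fat, sodium), copied from the data above.
data = {
    "hamburger": (2.49, 410, 24, 26, 730),
    "milk":      (0.89, 100, 8, 2.5, 125),
    "ice cream": (1.59, 330, 8, 10, 180),
}

cost = sum(data[f][0] * q for f, q in qty.items())
assert abs(cost - 11.8289) < 1e-3  # matches the solver's objective value

calories, protein, fat, sodium = (
    sum(data[f][i] * q for f, q in qty.items()) for i in range(1, 5)
)
assert 1800 - 1e-6 <= calories <= 2200   # binding at the lower limit
assert protein >= 91 - 1e-6              # binding
assert fat <= 65                         # slack
assert sodium <= 1779 + 1e-6             # binding at the upper limit
```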
We can also use the function Containers.rowtable to easily convert the result into a DataFrame:
table = Containers.rowtable(value, x; header = [:food, :quantity])
solution = DataFrames.DataFrame(table)
9×2 DataFrame
 Row │ food       quantity
     │ String     Float64
─────┼─────────────────────
   1 │ hamburger  0.604514
   2 │ chicken    0.0
   3 │ hot dog    0.0
   4 │ fries      0.0
   5 │ macaroni   0.0
   6 │ pizza      0.0
   7 │ salad      0.0
   8 │ milk       6.97014
   9 │ ice cream  2.59132
This makes it easy to perform analyses of our solution:
filter!(row -> row.quantity > 0.0, solution)
3×2 DataFrame
 Row │ food       quantity
     │ String     Float64
─────┼─────────────────────
   1 │ hamburger  0.604514
   2 │ milk       6.97014
   3 │ ice cream  2.59132
Problem modification
JuMP makes it easy to take an existing model and modify it by adding extra constraints. Let's see what happens if we add a constraint that we can buy at most 6 units of milk or ice cream combined.
@constraint(model, x["milk"] + x["ice cream"] <= 6)
optimize!(model)
solution_summary(model)
* Solver : HiGHS
* Status
Result count : 1
Termination status : INFEASIBLE
Message from the solver:
"kHighsModelStatusInfeasible"
* Candidate solution (result #1)
Primal status : NO_SOLUTION
Dual status : INFEASIBILITY_CERTIFICATE
Objective value : 1.18289e+01
Objective bound : 0.00000e+00
Relative gap : Inf
Dual objective value : 3.56146e+00
* Work counters
Solve time (sec) : 5.18560e-04
Simplex iterations : 0
Barrier iterations : 0
Node count : -1
Uh oh! There exists no feasible solution to our problem. Looks like we're stuck eating ice cream for dinner.
Tip
https://stupidsid.com/previous-question-papers/download/elements-of-civil-engg-engg-mechanics-7170
VTU First Year Engineering (P Cycle) (Semester 1)
Elements of Civil Engg. & Engg. Mechanics
June 2013
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1(a)(i) Geotechnical engineering involves the study of ?
A) Water
B) Soil
C) Air
D) All of these
1 M
A) Inside the city
C) Around the city
D) None of these
1 M
1(a)(iii) The part of civil engineering which deals with waste water and solid waste is called,
A) Water supply engineering
B) Geotechnical engineering
C) Sanitary engineering
D) Structural engineering
1 M
1(a)(iv) A bascule bridge is a,
A) Floating bridge
B) Arch bridge
C) Suspension bridge
D) Movable bridge
1 M
1(b) Write a note on role of civil engineer in infrastructural development.
10 M
1(c) Name the different types of roads as per Nagpur plan.
6 M
2(a)(i) Moment of a force can be defined as the product of force and ______ distance from the line of action of the force to the moment center.
A) Least
B) Maximum
C) Any
D) None of these
1 M
2(a)(ii) Effect of force on a body depends on,
A) Direction
B) Magnitude
C) Position
D) All of these
1 M
2(a)(iii) The forces which meet at one point and have their line of action in different planes are called
A) Coplanar concurrent forces
B) Non coplanar concurrent forces
C) Non coplanar non concurrent forces
D) None of these
1 M
2(a)(iv) Couple means two forces acting parallel,
A) Equal in magnitude and in the same direction.
B) Not equal in magnitude but in the same direction .
C) Equal in magnitude but opposite in direction.
D) None of these
1 M
2(b) Define force and state its characteristics.
6 M
2(c) Determine the magnitude and direction of the resultant for the system of forces shown in Q2 (c). Use classical method.
10 M
3(a)(i) The technology of finding the resultant of a system of forces is called,
A) Resultant
B) Resolution
C) Composition
D) None of these
1 M
3(a)(ii) Equilibriant is nothing but a resultant,
A) Equal in magnitude and in the same direction.
B) Equal in magnitude but opposite in direction.
C) Not equal in magnitude but in the same direction.
D) Not equal in magnitude and opposite in direction.
1 M
3(a)(iii) If two forces P and Q (P > Q) act on the same straight line but in opposite directions, their resultant is
A) P + Q
B) P / Q
C) Q - P
D) P - Q
1 M
3(a)(iv) In coplanar concurrent force system if ΣH = 0 then the resultant is
A) Horizontal
B) Vertical
C) Moment
D) None of these
1 M
3(b) State and prove Varignon's theorem of the moments.
6 M
3(c) Two spheres each of radius 100mm and weight 5kN is in a rectangular box as shown in fig. Calculate reactions at point of contacts.
10 M
4(a)(i) Moment of total area about its centroidal axis is
A) Twice the area
B) Three times the area
C) Zero
D) None of these
1 M
4(a)(ii) The centroid of a semicircle of radius R about its centroidal diametric axis is
A) 3R/4π
B) 3R/8π
C) 4R/π
D) 4R/3π
1 M
4(a)(iii) An axis over which one half of the plane figure is just mirror of the other half which is
A) Bottom most axis of the figure
B) Axis of symmetry
C) Unsymmetrical axis
D) None of these
1 M
4(a)(iv) Centroid of a rectangle of base width b and depth d is
A) b/3 end d/3
B) b/2 and d/2
C) b/4 and d/4
D) None of these.
1 M
4(b) Determine the centroid of a triangle by the method of integration.
6 M
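Although the question asks for an integration by hand, the standard result that the centroid of a triangle lies at one third of its height above the base is easy to cross-check numerically (a Python midpoint-rule summation over thin horizontal strips; the particular b and h are arbitrary):

```python
# Triangle of base b and height h, apex above the base: strip width
# tapers linearly from b at y = 0 to 0 at y = h.
b, h, n = 6.0, 9.0, 200_000
area = moment = 0.0
for i in range(n):
    y = (i + 0.5) * h / n        # mid-height of the strip
    w = b * (1 - y / h)          # strip width at that height
    dA = w * h / n
    area += dA
    moment += y * dA

ybar = moment / area
assert abs(area - 0.5 * b * h) < 1e-3   # area = bh/2
assert abs(ybar - h / 3) < 1e-3         # centroid at h/3 above the base
```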
4(c) Locate the centroid of the lamina shown in fig. with respect to point O.
10 M
5(a)(i) The necessary condition of equilibrium of a coplanar concurrent force system is that the algebraic sum of ______ must be zero.
A) Horizontal and vertical forces
B) Moment of forces
C) Horizontal, vertical and moment of forces
D) None of these
1 M
5(a)(ii) In non concurrent force system if ΣH = 0, ΣV = 0 then the resultant is
A) Horizontal
B) Vertical
C) Moment
D) Zero
1 M
5(a)(iii) The force which is equal and opposite to the resultant is
A) Resultant force
B) Force
C) Equilibriant
D) None of these
1 M
5(a)(iv) The procedure of resolution is
A) To find the resultant of the force system
B) To break up an inclined force into two components
C) To find the equilibriant
D) None of these
1 M
5(b) Determine the reactions at the points of contact for the sphere shown in fig.
6 M
5(c) Determine the angle θ for the system of strings ABCD in equilibrium as shown in fig.
10 M
6(a)(i) Statically determinate beams are,
A) The beams which can be analyzed completely using static equations of equilibrium
B) The beams which can be without using static equations of equilibrium
C) Fixed beams
D) None of these
1 M
6(a)(ii) Fixed beams are,
A) One end is fixed and the other is simply supported
B) Both ends are fixed
C) Both ends are roller supported
D) One end is fixed and the other is free.
1 M
6(a)(iii) The number of reaction components at the fixed end of a beam is,
A) 2
B) 3
C) 4
D) None of these
1 M
6(a)(iv) U.D.L. stands for
D) All of these
1 M
6(b) Explain different types of supports.
6 M
6(c) Determine the reaction at the support for the beam shown in fig.
10 M
7(a)(i) Angle of friction is angle between
A) the incline and horizontal
B) the normal reaction and friction force
C) the weight of the body and the friction force
D)Normal reaction and the resultant.
1 M
7(a)(ii) The force of friction developed at the contact surface is always
A) Parallel to the plane and along the direction of the applied force
B) Perpendicular to the plane
C) Parallel to the plane and opposite to the direction of the motion
D) All of these.
1 M
7(a)(iii) The maximum inclination of the plane on which the body free from external forces can repose is called
A) Cone of friction
B) Angle of friction
C) Angle of repose
D) None of these
1 M
7(a)(iv) The force of friction depends on
A) Area of contact
B) Roughness of the surface
C) Both area of contact and roughness of the surface
D) None of these.
1 M
7(b) State the laws of static friction.
4 M
7(c) A uniform ladder of length 15m and weight 750N rests against a vertical wall making an angle of 60° with the horizontal. Co-efficient of friction between the wall and the ladder is 0.3 and between the ground and the ladder is 0.25. A man weighing 500N ascends the ladder. How long will he be able to go before the ladder slips?
12 M
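One way to set up the limiting-equilibrium computation for this classic ladder problem (a Python sketch; it assumes limiting friction at both contacts and measures the man's position x along the ladder from its base):

```python
import math

L, W, M = 15.0, 750.0, 500.0      # ladder length (m), ladder & man weights (N)
theta = math.radians(60)          # inclination to the horizontal
mu_g, mu_w = 0.25, 0.3            # friction: ground, wall

# ΣH = 0 and ΣV = 0 at the point of slipping give the normal reactions:
# Ng + mu_w*Nw = W + M  and  Nw = mu_g*Ng
Ng = (W + M) / (1 + mu_w * mu_g)  # ground normal reaction
Nw = mu_g * Ng                    # wall normal reaction

# Moments about the base: the two weights against the wall normal
# and the (upward) wall friction acting at the top of the ladder.
x = (Nw * L * math.sin(theta) + mu_w * Nw * L * math.cos(theta)
     - W * (L / 2) * math.cos(theta)) / (M * math.cos(theta))
# x ≈ 6.47 m: roughly how far along the ladder the man can climb before it slips
```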
8(a)(i) The unit of radius of Gyration is
A) mm
B) mm2
C) mm3
D) mm4
1 M
8(a)(ii) The moment of inertia of an area about an axis which is in a plane perpendicular to the area is called
B) Polar moment of inertia
C) Second moment of area
D) None of these
1 M
8(a)(iii) The moment of inertia of a circle with 'd' as its diameter about its centroidal axis is
A) (πd²)/32
B) (πd²)/64
C) (πd⁴)/32
D) (πd⁴)/64
1 M
8(a)(iv) The moment of inertia of a square of side b about an axis through its centroid is
A) b⁴/12
B) b⁴/8
C) b⁴/36
D) b³/12
1 M
8(b) State and prove parallel axis theorem.
6 M
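The theorem is easy to verify numerically for a b × d rectangle, where the moment of inertia about the base should equal the centroidal value plus A(d/2)² (a Python strip summation; the dimensions are arbitrary):

```python
# Second moment of area about the base, by summing thin horizontal strips.
b, d, n = 4.0, 6.0, 100_000
I_base = sum(((i + 0.5) * d / n) ** 2 * b * d / n for i in range(n))

I_centroid = b * d**3 / 12        # standard result for a rectangle
A = b * d

# Parallel axis theorem: I_base = I_centroid + A * (d/2)^2 = b*d^3/3
assert abs(I_base - (I_centroid + A * (d / 2) ** 2)) < 1e-2
assert abs(I_base - b * d**3 / 3) < 1e-2
```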
8(c) Find the moment of inertia of the region shown in Fig. about horizontal axis AB and also find the radius of Gyration about the same axis.
10 M
http://etna.math.kent.edu/volumes/2011-2020/vol44/abstract.php?vol=44&pages=124-139

## Iterative methods for symmetric outer product tensor decomposition
Na Li, Carmeliza Navasca, and Christina Glenn
### Abstract
We study the symmetric outer product for tensors. Specifically, we look at decompositions of a fully (partially) symmetric tensor into a sum of rank-one fully (partially) symmetric tensors. We present an iterative technique for third-order partially symmetric tensors and fourth-order fully and partially symmetric tensors. We include several numerical examples which indicate faster convergence for the new algorithms than for the standard method of alternating least squares.
### Key words
multilinear algebra, tensor products, factorization of matrices
### AMS subject classifications

15A69, 15A23
http://dopovidi-nanu.org.ua/en/archive/2018/1/10

# Shape of Earth's lithosphere and geotectonics
Tserklevych, A. L., Shylo, Y. A.
Dopov. Nac. akad. nauk Ukr. 2018, 1:67-72. https://doi.org/10.15407/dopovidi2018.01.067
Section: Geosciences
Language: Russian
Abstract: Computer simulation of the reorientation of the Earth's lithosphere figure reveals certain regularities that reflect structure-forming processes. It is shown that the shape of the lithosphere surface has a different orientation relative to the geoid figure. The horizontal forces acting in the upper shell of the planet are calculated by introducing the concept of an "evolutionary deviation of a plumb line" and assuming that the tangential forces are proportional to this angle, defined as the angle between the plumb-line direction in a past geological epoch and the plumb-line direction at the same point today. The calculated fields of tangential force vectors show good consistency with the directions of the space-time displacements of Earth's continents and tectonic plates, and agree with the horizontal movements of GNSS stations. This is quite convincing evidence that, under the long-term action of vortex rotational-gravitational forces, the lithospheric masses acquire the property of creep.
Keywords: evolutionary deviation of a plumb line, shape of the lithosphere, vortex geodynamics
https://jeopardylabs.com/print/6th-grade-math-staar-review | Number, Operation, & Quantitative Reasoning
Patterns, Relationships, & Algebraic Thinking
Geometry and Spatial Reasoning
Measurement
Probability and Statistics
### 100
Order the times from slowest to fastest: 18.09, 8.09, 8.091, 8.91
What is 18.09, 8.91, 8.091, 8.09
### 100
Decimal equivalent of 21%
What is .21
### 100
What is an angle that measures 125°?
What is an obtuse angle.
### 100
What is the volume of a cube with dimensions of 5 cm x 5 cm x 5 cm?
What is 125 cubic cm.
### 100
The number of birds Tom saw each day last week: 11, 7, 5, 10, 7, 9, 8. What is the median?
What is 8.
### 200
Order from greatest to least: 3/8, 4/5, 1/2
What is 4/5, 1/2, 3/8
### 200
75% of the students have taken their class picture. What fraction have NOT taken their class picture?
What is 1/4
### 200
What is always double the radius?
What is diameter.
### 200
How many yards are in 24 feet?
What is 8 yards.
### 200
The high temperatures for last week: 70°, 72°, 78°, 83°, 82°. What is the mean?
What is 77°.
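Both statistics answers in this column can be confirmed with Python's statistics module:

```python
import statistics

birds = [11, 7, 5, 10, 7, 9, 8]
assert statistics.median(birds) == 8   # middle value of 5, 7, 7, 8, 9, 10, 11

temps = [70, 72, 78, 83, 82]
assert statistics.mean(temps) == 77    # 385 / 5
```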
### 300
Prime factorization of 120.
What is 2 x 2 x 2 x 3 x 5
### 300
Teresa has 3 more dollars than Chase. Chase has 2 less dollars than Hunter. Hunter has $5. How much money does Teresa have?
What is $6.
### 300
A triangle has angle measurements of 50° and 45°. What is the missing angle measurement?
What is 85°.
### 300
What is the perimeter of a rectangle with a length of 10 inches and a width of 5 inches.
What is 30 inches.
### 300
For breakfast you first have a choice of oatmeal, cereal, breakfast sandwich or eggs, then you may choose a banana, apple or orange. How many different breakfast combinations can you make?
What is 12 possible combinations.
### 400
Greatest common factor of: 9, 18, 81
What is 9.
### 400
A scale on a map reads: 2 cm = 15 miles. From your home to Galveston it is 10 cm. How many miles is Galveston from your home?
What is 75 miles.
### 400
What is the area of a circle with a radius of 6 cm?
What is about 113.04 cm squared (using 3.14 for pi).
### 400
You are carpeting a bedroom that is 7 feet by 9 feet. How much carpet would you need to purchase?
What is 63 square feet.
### 400
What is the next number in this pattern: 21, 24, 22, 25, 23...
What is 26.
### 500
Least common multiple of: 12, 48, 60
What is 240.
### 500
Out of every 10 people 3 have a car. If there are 40 people, how many have a car?
What is 12 people.
### 500
A parallelogram has an angle measuring 95°. What are the other angle measurements?
What is 95°, 85°, 85°.
### 500
You are planting a square garden. If it measures 9 meters on one side how much fencing will you need?
What is 36 meters.
### 500
A spinner is divided into 11 equal sections and labeled with numbers 1 through 11. What is the probability of NOT spinning an even number?
What is 6/11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6091417074203491, "perplexity": 3857.3801644630294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999876.81/warc/CC-MAIN-20190625172832-20190625194832-00138.warc.gz"} |
https://www.centralbanking.com/central-banking/news/1410703/fukui-boj-favourable-money-supply | # Fukui says BOJ wants 'favourable money supply'
Bank of Japan Governor Toshihiko Fukui told Parliament Wednesday 17 March that the central bank wants to achieve "favourable money supply" growth and that the economy hasn't recovered enough to allow the BOJ to adopt an explicit inflation target.
"We are attempting to make our monetary policy effective, and we would like to eventually achieve a favourable expansion of money supply," Fukui said at the budget committee of the upper house of parliament in Tokyo. "Still, it will take a while."
#### Latest issue
###### Central Banking Journal
Read the latest edition of the Central Banking journal | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1776028275489807, "perplexity": 10341.606593548533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948047.85/warc/CC-MAIN-20180426012045-20180426032045-00547.warc.gz"} |
https://planetmath.org/minimaxinequality | # minimax inequality
The minimax inequality was first proved by John von Neumann. It is the starting point for the theory of two-player zero-sum static games.
${\bf Theorem~1:}~~\underline{\rm minimax~inequality,~simple~strategies}$
For any $m\times n$ matrix $A_{i,j}$, we have
(1)$~~\displaystyle{\max_{1\leq i\leq m}\min_{1\leq j\leq n}A_{i,j}\leq\min_{1\leq j\leq n}\max_{1\leq i\leq m}A_{i,j}}$
(2)$~~\displaystyle{\max_{1\leq i\leq m}\min_{1\leq j\leq n}A_{i,j}=\min_{1\leq j\leq n}\max_{1\leq i\leq m}A_{i,j}}$ if and only if $A_{i,{\tilde{j}}}\leq A_{{\tilde{i}},{\tilde{j}}}\leq A_{{\tilde{i}},j}~~\forall i,j$ is valid for some $({\tilde{i}},{\tilde{j}})$
For a 2-player zero-sum game, the entry $A_{i,j}$ is interpreted as the payoff when player 1 has chosen the $i^{th}$ strategy and player 2 has chosen the $j^{th}$ strategy. The value $A_{{\tilde{i}},{\tilde{j}}}$ is known as the value of the game.
${\bf Proof}$ Since $\displaystyle{\min_{1\leq j\leq n}A_{i,j}\leq A_{i,j}\leq\max_{1\leq i\leq m}A_{i,j}~~\forall i,j}$, and the LHS is independent of $j$ while the RHS is independent of $i$, we obtain $~~\displaystyle{\max_{1\leq i\leq m}\min_{1\leq j\leq n}A_{i,j}\leq\min_{1\leq j\leq n}\max_{1\leq i\leq m}A_{i,j}}$
${\bf Theorem~2:}~~\underline{\rm minimax~inequality,~mixed~strategies}$
Let $\displaystyle{S_{m}=\{x\in{\mathbb{R}}^{m}~|~x_{i}\geq 0~\forall i~,~\sum_{i=1}^{m}x_{i}=1\}\subseteq{\mathbb{R}}^{m}}$. For any $m\times n$ matrix $A_{i,j}$, we have
$~~~~\displaystyle{\max_{x\in S_{m}}\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}=\min_{y\in S_{n}}\max_{x\in S_{m}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}}$
Here $0\leq x_{i}\leq 1$ is interpreted as the probability that Player 1 will choose strategy $i$ while $0\leq y_{j}\leq 1$ is the probability that Player 2 will choose strategy $j$.
${\bf Proof}$ For any $x\in S_{m}$ and any $y\in S_{n}$ we have $\displaystyle{\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\leq\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}}$
Taking the maximum over $x\in S_{m}$ on both sides, we have $\displaystyle{\max_{x\in S_{m}}\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\leq\max_{x\in S_{m}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}}$
Taking the minimum over $y\in S_{n}$ on both sides, we have $\displaystyle{v_{1}=\max_{x\in S_{m}}\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\leq v_{2}=\min_{y\in S_{n}}\max_{x\in S_{m}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}}$
The proof of the other half of the inequality takes two steps:
Step 1$~~$Suppose there is a $y\in S_{n}$ such that $\displaystyle{\sum_{j=1}^{n}A_{i,j}y_{j}\leq 0~~\forall i}$ $~~\Rightarrow~~$ for every ${\tilde{x}}\in S_{m}$ we have $\displaystyle{\sum_{i=1}^{m}\left(\sum_{j=1}^{n}A_{i,j}y_{j}\right){\tilde{x}}_{i}\leq 0}$
$\Rightarrow~~\displaystyle{\max_{x\in S_{m}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\leq 0}$ $~~\Rightarrow~~\displaystyle{v_{2}=\min_{y\in S_{n}}\max_{x\in S_{m}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\leq 0}~~~~(*1)$
Step 2$~~$Suppose there is an $x\in S_{m}$ such that $\displaystyle{\sum_{i=1}^{m}A_{i,j}x_{i}>0~~\forall j}$ $~~\Rightarrow~~$ for every ${\tilde{y}}\in S_{n}$ we have $\displaystyle{\sum_{j=1}^{n}\left(\sum_{i=1}^{m}A_{i,j}x_{i}\right){\tilde{y}}_{j}\geq 0}$
$\Rightarrow~~\displaystyle{\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\geq 0}$ $~~\Rightarrow~~\displaystyle{v_{1}=\max_{x\in S_{m}}\min_{y\in S_{n}}\sum_{i=1}^{m}\sum_{j=1}^{n}A_{i,j}x_{i}y_{j}\geq 0}~~~~(*2)$
Combining (*1) and (*2) we see that either $0\leq v_{1}$ or $v_{2}\leq 0$ is the case, and $v_{1}<0<v_{2}$ cannot be valid. Repeating the same procedure for the matrix ${\tilde{A}}_{i,j}=A_{i,j}-\lambda$, we see that $v_{1}-\lambda<0<v_{2}-\lambda$ is invalid, i.e. $v_{1}<\lambda<v_{2}$ is not valid for any $\lambda$. Since $\lambda$ is arbitrary, we conclude that $v_{2}\leq v_{1}$.
An entire theory on minimax has already been developed, and it is one of the major research areas in optimization theory. The following are some good sources for further reference:
## References
• 1 V.F.Demyanov and V.N.Malozemov, Introduction to Minimax, Keter Publishing House Jerusalem Ltd, 1974.
Title minimax inequality MinimaxInequality 2013-03-22 16:57:16 2013-03-22 16:57:16 bchui (10427) bchui (10427) 32 bchui (10427) Theorem msc 91A99 msc 91A05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 58, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9825927019119263, "perplexity": 409.0383218655964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00419.warc.gz"} |
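Theorem 1 is easy to check numerically. The sketch below (plain Python; the function name is just an illustration) verifies that max-min never exceeds min-max for random matrices, and shows a small matrix with a saddle point where equality holds:

```python
import random

def check_minimax(m=4, n=6, trials=1000):
    """Verify max_i min_j A[i][j] <= min_j max_i A[i][j] on random matrices."""
    for _ in range(trials):
        A = [[random.uniform(-10, 10) for _ in range(n)] for _ in range(m)]
        lhs = max(min(row) for row in A)                              # max-min
        rhs = min(max(A[i][j] for i in range(m)) for j in range(n))   # min-max
        assert lhs <= rhs
    return True

# A matrix with a saddle point at (i~, j~) = (1, 0), so equality holds:
# A[i][0] <= A[1][0] <= A[1][j] for all i, j.
A = [[1, 0],
     [2, 3]]
lhs = max(min(row) for row in A)
rhs = min(max(A[i][j] for i in range(2)) for j in range(2))
print(check_minimax(), lhs == rhs == 2)   # → True True
```

The saddle-point condition of part (2) is exactly what makes the two order-of-optimization values coincide.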
http://mathhelpforum.com/advanced-algebra/193386-bound-sum-products.html | ## Bound on the sum of products
Let's say that I know that:
$\displaystyle \sum_{i,j} a_{ij} \leq r$, where $\displaystyle 0 \leq a_{ij} \leq 0.5$ and $\displaystyle r \geq 0$ and $\displaystyle 1 \leq i,j \leq n$
and I also know that
$\displaystyle \sum_{i,j} y_{ij} \leq d$, where $\displaystyle y_{ij} \geq 0$ and $\displaystyle d \geq 0$ and $\displaystyle y$ is zero-diagonal and $\displaystyle 1 \leq i,j \leq n$
What can we say about the following?
$\displaystyle \sum_{i,j} a_{ij} y_{ij} \leq ?$
So far the best I can prove is
$\displaystyle \sum_{i,j} a_{ij} y_{ij} \leq \frac{d}{2}$
Is there a tighter bound? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9588695168495178, "perplexity": 214.86458829220112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945111.79/warc/CC-MAIN-20180421090739-20180421110739-00026.warc.gz"} |
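For what it is worth, $d/2$ appears tight without extra coupling between $a$ and $y$: since $a_{ij}\leq 0.5$ pointwise, $\sum a_{ij}y_{ij}\leq 0.5\sum y_{ij}\leq d/2$, and when $r$ is large enough to allow $a_{ij}=0.5$ on the support of $y$, the bound is attained. A quick numeric sanity check (plain Python; the function name is illustrative):

```python
import random

def max_product_sum(n=5, d=10.0, trials=2000):
    """Empirically confirm sum(a_ij * y_ij) <= d/2 for random feasible a, y."""
    worst = 0.0
    for _ in range(trials):
        a = [[random.uniform(0.0, 0.5) for _ in range(n)] for _ in range(n)]
        # y is nonnegative with zero diagonal; rescale so its total equals d
        y = [[0.0 if i == j else random.uniform(0.0, 1.0) for j in range(n)]
             for i in range(n)]
        total_y = sum(map(sum, y))
        scale = d / total_y
        s = sum(a[i][j] * y[i][j] * scale for i in range(n) for j in range(n))
        assert s <= d / 2 + 1e-9
        worst = max(worst, s)
    return worst   # approaches d/2 as a_ij nears 0.5 on y's support
```

A tighter bound would need to use $r$, e.g. something like $\min(d/2,\ r\cdot\max_{ij} y_{ij})$ when $r$ is small, but that depends on how the supports of $a$ and $y$ overlap.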
http://arxiv-export-lb.library.cornell.edu/abs/2301.13291 | hep-th
# Title: Effective approach to the Antoniadis-Mottola model: quantum decoupling of the higher derivative terms
Abstract: We explore the decoupling of massive ghost mode in the $4D$ (four-dimensional) theory of the conformal factor of the metric. The model was introduced by Antoniadis and Mottola in [1] and can be regarded as a close analog of the fourth-derivative quantum gravity. The analysis of the derived one-loop nonlocal form factors includes their asymptotic behavior in the UV and IR limits. In the UV (high energy) domain, our results reproduce the Minimal Subtraction scheme-based beta functions of [1]. In the IR (i.e., at low energies), the diagrams with massive ghost internal lines collapse into tadpole-type graphs without nonlocal contributions and become irrelevant. On the other hand, those structures that contribute to the running of parameters of the action and survive in the IR, are well-correlated with the divergent part (or the leading in UV contributions to the form factors), coming from the effective low-energy theory of the conformal factor. This effective theory describes only the light propagating mode. Finally, we discuss whether these results may shed light on the possible running of the cosmological constant at low energies.
Comments: 32 pages, 17 figures
Subjects: High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)
Cite as: arXiv:2301.13291 [hep-th] (or arXiv:2301.13291v1 [hep-th] for this version)
## Submission history
From: Wagno Cesar e Silva [view email]
[v1] Mon, 30 Jan 2023 21:08:40 GMT (1080kb,D)
Link back to: arXiv, form interface, contact. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44217708706855774, "perplexity": 1750.6193123001883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00735.warc.gz"} |
http://softpanorama.org/Admin/admin_horror_stories.shtml | Softpanorama classification of sysadmin horror stories
Data loss is a calamity of the technological age that affects all of us, but only a few can contribute to it to the extent a system administrator can ;-)
10-15 minutes spent re-reading this page once a month can help to avoid some of the situations described below. A spectacular blunder is often too valuable to be forgotten, as it tends to repeat itself in a year or two ;-).
Version 2.3 (Oct 3, 2019)
Introduction
"More systems have been wiped out by admins
than any hacker could do in a lifetime"
Rick Furniss
“Experience fails to teach where there is no desire to learn.”
"Everything happens to everybody sooner or later if there is time enough."
George Bernard Shaw
“Experience is the most expensive teacher, but a fool will learn from no other.”
Benjamin Franklin
Unix system administration is an interesting and complex craft. It's good if your work demands use of your technical skills, creativity, and judgment. If it doesn't, then you're in the absurd world of Dilbertized cubicle farms and bureaucratic stupidity. Unfortunately, that happens too.
There is a lot of deep elegance in Unix, and a talented sysadmin, like any talented craftsman, is able to expose this hidden beauty by masterful manipulation of complex objects using classic Unix utilities and the command line, which often amazes observers with a Windows background. In Unix administration you need to improvise on the job to get things done, create your own tools, and master the command-line environment; if you want to operate at an advanced level you can't go "by the manual", you need to improvise. Unfortunately, some of those improvisations produce unexpected side effects ;-)
In a way, the craft includes not only the execution of complex sequences of commands: blunders, and the folklore about them, are also a legitimate part of it. It's human to err, after all, and if you are working as root such an error can easily wipe out a vital part of the system. If you are unlucky, it is a production system. If you are especially unlucky, there is no backup. It is the presence or absence of a recent backup that often distinguishes a horror story from a minor nuisance. That's why many veteran sysadmins create a personal backup before doing anything complex and/or risky. At the very least you should back up /etc on your first login as root each day; that can be done from /root/.bash_profile or a similar dot file.
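The daily /etc backup idea can be sketched as a small shell function (a sketch only: the snapshot directory and the idea of sourcing it from /root/.bash_profile are assumptions to adapt, not a finished tool):

```shell
#!/bin/bash
# snapshot_dir: tar up a directory once per day, skipping if today's
# archive already exists. Intended to be sourced from /root/.bash_profile
# and called as: snapshot_dir /etc /root/etc-snapshots
snapshot_dir() {
    local src="${1:-/etc}"
    local snapdir="${2:-/root/etc-snapshots}"
    local snap="$snapdir/$(basename "$src")-$(date +%F).tar.gz"
    [ -f "$snap" ] && return 0                 # already snapshotted today
    mkdir -p "$snapdir" || return 1
    tar -czf "$snap" -C "$(dirname "$src")" "$(basename "$src")"
}
```

Running this as root on first login gives you a dated archive of /etc to diff against after any misadventure.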
In a sysadmin job the conditions are often far from perfect. Some admins are overworked, some lack the necessary knowledge (or lost it, as a given feature is seldom used), and some are just lazy and try to cut corners. Trying to cut corners is not necessarily bad per se, as in real work it is often necessary to disobey established rules and act boldly and decisively. But like operating with a sharp blade, that entails risks, and sometimes people get burned. As Larry Wall put it: "We all agree on the necessity of compromise. We just can't agree on when it's necessary to compromise."
Sh*t happens, but there is a system in any madness ;-). That's why it is important to try to classify typical sysadmin mistakes. People learn from experience, and that's why each sysadmin should maintain his own lab journal. Regardless of the reason, every mistake should be documented, as it constitutes an important lesson pointing to a whole class of similar possible errors. As the saying goes, "never waste a good crisis". It is an opportunity to make your system safer and prevent similar occurrences in the future. For example, in many cases when a simple mistake causes serious problems, we observe the absence of backups and the absence of a baseline, often both.
The most common blunder in Unix/Linux is probably wiping out useful data with a wrong rm command. This class of errors is often called Creative uses of rm. This danger cannot be fully avoided even with a wrapper such as saferm, and nothing can replace an up-to-date backup, so doing a backup before any large-scale deletion or file reorganization is a must.
What is really bad is that after a disaster people tend to react impulsively, trying to save the situation, and those steps often dramatically increase the damage. So rule number 1 is: after a disaster happens, do not rush into action. Take time to analyze the "crime scene" and, diligently, like a real detective, preserve all evidence. Document everything. It might be crucial to solving the problem you have.
Then do your research. I had a recent case where a RAID 5 array was lost. Investigation showed that it actually consisted of two RAID 5 virtual drives, with the second mostly unused. The partition was configured as a logical volume in Linux LVM consisting of two PVs (physical volumes), and there is documentation on the Internet on how to recover from this situation (see Recovery of LVM partitions). The best case is when you have only one missing PV (preferably the second, which was the case in this SNAFU). This information allowed most of the files to be recovered. I created a lab from an unused server, experimented with this method for a day, and then managed to recover most of the files on the production server. If I had panicked and simply reinitialized everything and reformatted the partition, the data would have been lost.
If data are lost, look for places where a copy may exist. If a user works on multiple servers, he or she often copies files between them, and at least part of the data can be restored from such a copy.
There are several steps that you can do to mitigate the damage:
1. Pay attention to creation of solid backup infrastructure. If necessary, private. Missing backup is the root of all evil
2. Block operations on all level-2 system directories and the /etc config files via a wrapper. Block operations on any directory that is in your favorite-directories list, if you have one (this requires a script). Replace the potentially dangerous alias of rm in the default RHEL installation (where rm is aliased to rm -i for the root user) with something more reasonable. The rm='rm -i' alias is a horror because once you get used to it, you automatically expect rm to prompt you by default before removing files. Of course, one day you'll run it on an account that hasn't that alias set, and before you understand what's going on, it is too late.
3. Many such blunders occur because you type the potentially destructive command directly on the command line. Use an editor that allows command-line execution instead (for example, vim has this capability). When you operate on a backup copy of a system directory, it is easy to automatically type the name of the directory with a slash in front of it (rm -r /etc instead of rm -r etc) because you are conditioned to type it this way; you can first rename the directory to something else. Use an absolute path for rm in all cases.
• It is safer to type such a command in the editor first, or at least to type the options and arguments and only then type the name of the command.
• It is better to use file managers like WinSCP or Midnight Commander, as they provide visual feedback and you do not need to type the name of the file or directory, which eliminates a whole class of potential errors.
• You can use a wrapper for the rm command: a script that checks for common blunders. Such a script will not let you delete any system directory or certain important files. Among other useful preventive checks, it can introduce a delay between hitting Enter and the start of the operation, and list several of the files to be deleted (for example, the first five and the last three) along with the total count. Prototypes of such scripts are available and can be adapted to your needs. The problem with this solution is that after a disaster you enhance such a script and use it for a while, but then you often "regress" to the usual way of doing things until the next disaster strikes.
• You can execute a move command instead (moving, for example, to a /Trash folder) and then delete files and directories from the /Trash folder, which has, say, a 90-day expiration period for the files in it.
4. You should never do a large-scale deletion of files in a hurry, or while distracted and doing other tasks simultaneously. Major blunders often happen in attempts to urgently "free space": instead of junk, you get rid of important files or databases. View any large-scale deletion as a surgical operation, which requires a clean environment, patience, and a cool head. The ability to resist user pressure is a virtue of a sysadmin.
5. In any case, rereading this or similar pages periodically, as a kind of "sysadmin safety training", helps to avoid typical rm blunders (especially including ".." in the list of directories to be deleted, as in the classic rm .*). Please note that after you read about them the awareness lasts just a couple of weeks, or a month at best. After that you firmly forget about those things and the danger returns. So maintaining a proper level of awareness is an important part of the art of Unix system administration. You probably need to schedule a rereading of Creative uses of rm in your calendar. I do.
The 13th of each month is a very appropriate day for sysadmin safety training and self-study on this topic.
I use it for updating this set of pages ;-)
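The wrapper idea from item 3 above can be sketched in a few lines of shell (purely illustrative: the protected-path list, trash location, and function name are assumptions to adapt, not a production tool):

```shell
#!/bin/bash
# saferm: instead of deleting, refuse critical paths and move everything
# else into a trash directory, to be purged later (e.g. by a cron job).
TRASH="${TRASH:-$HOME/.trash}"
PROTECTED="/ /etc /bin /sbin /usr /var /boot /root /home"

saferm() {
    mkdir -p "$TRASH" || return 1
    local target abs p
    for target in "$@"; do
        abs=$(readlink -f -- "$target")        # resolve to an absolute path
        for p in $PROTECTED; do
            if [ "$abs" = "$p" ]; then
                echo "saferm: refusing to touch protected path: $abs" >&2
                return 1
            fi
        done
        # a timestamp suffix avoids name collisions in the trash directory
        mv -- "$target" "$TRASH/$(basename "$target").$(date +%s%N)" || return 1
    done
}
```

A companion cron entry such as `find "$TRASH" -mindepth 1 -mtime +90 -delete` would implement the 90-day expiration mentioned above.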
Another common and disastrous blunder for any Unix sysadmin who juggles many dozens of servers is performing an operation on the wrong server. Such blunders vary from rebooting the production server instead of the QA (or testing) server, to removing a file on the original filesystem instead of the backup filesystem (while the file does not exist in the backup), and many others. See Performing the operation on a wrong server.
A similar blunder is an accidental reboot. The cause can be as simple as disconnecting the wrong power cord, or executing the reboot command in the wrong terminal window. Using different terminal background colors can help in the latter case.
Learning from your own mistakes, as well as from the mistakes of others, is an important part of learning the craft. But the awareness does not last long. That's why it is important to periodically reread such pages: they can prevent some horrible blunders.
SNAFU as a classic career-limiting move for a Unix/Linux sysadmin ;-)
The term SNAFU and the phrase "Houston, we have a problem" often mean the same thing
SNAFU is an acronym that is widely used to stand for the sarcastic expression Situation Normal: All Fucked Up. The term is often used as a synonym for "sysadmin blunder", and it implicitly includes some effort to cover up an embarrassing situation.
So efforts to avoid SNAFUs are well justified, and have existed since the early 90s, if not earlier. That's the reason for this and similar pages on the web (including the original Anatoly Ivasyuk collection of Sysadmin Horror Stories, created in the early 90s). All of them, while far from perfect, still represent useful study material, similar to any course in shell programming, and should be treated as such. That means studied and periodically refreshed. The latter, as already mentioned, is especially important because the awareness fades in a month or two. Eventually, in a year or so, the information is wiped from your memory by new information and the flow of new problems typical for any sysadmin job, and the person inevitably regresses to the old ("dangerous") way of doing things, for example abandoning the use of a "protective" wrapper for the rm command. This is especially typical for the Creative uses of rm, Missing backup horror stories, and Performing the operation on a wrong server types of SNAFU.
Usually the memory fades quickly and in several months or a year most of us are quite ready to repeat them again ;-)
IMHO periodic rereading of such pages is the only realistic way to keep the awareness at a proper level. This is similar to how large enterprises conduct monthly security awareness training; and while the latter often degenerates into a useless exercise, it might be very useful to incorporate a few slides about typical sysadmin blunders into it.
After half a century of Unix existence (which is also the duration of existence of the problem with the expansion of the ".*" pattern on the command line), many Unix/Linux sysadmins still do not understand, or don't remember, the danger of rm -rf .* .
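To see why rm -rf .* is so dangerous, it is enough to look at what the shell expands ".*" to (a throwaway demo in a scratch directory; never run the rm itself):

```shell
#!/bin/bash
# Demonstrate the ".*" glob trap without deleting anything.
demo=$(mktemp -d)
mkdir "$demo/.hidden"
cd "$demo"
echo .*
# In bash this prints ". .. .hidden" -- the pattern matches "." and "..",
# so "rm -rf .*" would recurse into the PARENT directory as well.
```

Some shells (zsh, or bash with certain glob options) exclude "." and "..", which is exactly why the habit is dangerous: it seems safe right up until it isn't.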
Reading those pages can help. In addition, keep a personal journal of your SNAFUs. A typical SNAFU, like a traffic accident, is a confluence of several mistakes, simultaneous maneuvers, misunderstandings, etc.; also, as in the army, incompetent bosses often play a prominent role in such incidents, creating unnecessary and harmful pressure in an already very stressful situation.
Periodically browsing this personal log is really important, as each of these incidents can easily represent, to put it politically correctly, a "career-limiting move", sometimes resulting in termination of employment.
But there is always a silver lining in each dark cloud. When handled properly, incidents stimulate learning and the personal growth of a system administrator, although in many cases there are less painful ways to grow your knowledge, including knowledge of bad incidents. That's why this set of pages was created. Reading it and other similar pages might help an aspiring sysadmin to avoid blunders for which many people have already paid the price, which in certain cases included termination of employment...
There are several fundamental reasons for the blunders sysadmins commit:
1. Absence of backup. This is the No. 1 reason why a mistake becomes a disaster. See Missing backup horror stories for more information.
One thing that distinguishes a professional Unix sysadmin from an amateur is the attitude to backup and the level of knowledge of backup technologies. A professional sysadmin knows all too well that often the difference between a major SNAFU and a nuisance is the availability of an up-to-date backup of the data.
Here is a pretty telling poem from an unknown source (well, originally Paul McCartney :-) on the subject:
Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.
2. "Overconfidence and a false sense of security", which often demonstrate themselves in the absence of testing of complex commands. This is so important that it deserves a separate page.
A false sense of security invites performing dangerous actions without proper preparation and checking. All of us think that we are great on the command line, and in most cases (say 99.9%) this is true. But there is the other 0.1%. The fact that you have used the command line for a decade or more does not shield you from committing horrible blunders if you are not careful, especially if you prefer, as many sysadmins do, to work as root. Verifying commands by typing them in an editor first, and only then running them, is a good practice, especially if the server on which you are working is hundreds of miles away. It is so easy one day, absolutely automatically, to type something like
rm * 171206.log
instead of
rm *171206.log
(the accidental space after the asterisk turns the first command into "delete everything in the directory").
Our brains sometimes tend to play jokes on us.
Another aspect of the same problem is that the complexity of the environment, and hidden interactions between components, are ignored, and you jump into action without investigating the possible consequences of the move. For example, even a trivial operation like fixing the way the calendar year is represented (the so-called "year 2000 problem") proved to be a very complex mess. Similarly, even a simple upgrade of a compiler or interpreter, done at the request of one user, can disrupt the work of others. This is typical for both hardware and software operations. For example, a sysadmin sometimes shuts himself out of a remote box by performing a network reconfiguration that does not take into account the type of network connection he is using. Of course, these days you usually have a remote server management unit (DRAC/iLO, etc.), but the problem is that it may not work. Such units can crash, as was typical for certain versions of the Dell iDRAC 7 (Resetting frozen iDRAC without unplugging the server) and HP iLO. For several years the iLO on the HP ProLiant DL580 G7 (otherwise a pretty decent and very reliable four-socket 4U server) did not last more than a week, and to reboot it you needed to disconnect the power cables from the server (a pretty idiotic solution for rack servers; but this is HP, with its very complex and capricious hardware).
If the server is remote and those two mishaps happen simultaneously, you have a problem.
3. Excessive zeal. As Talleyrand advised young diplomats: "First and foremost try to avoid excessive zeal." That very wise recommendation is fully applicable to sysadmin activities. It is especially important regarding efforts to "improve security", which often lead to horrible SNAFUs and more often than not do not improve overall security. Often doing nothing NOW is the optimal course of action. It gives you time to think about the situation and understand it better.
• "Do not jump into the action until the next morning" is often not bad advice.
• Another trivial corollary of this maxim is that you should never start anything important before a vacation ("to finish everything before vacation" -- the road to hell is really paved with good intentions ;-), unless you really want your vacation to be spoiled ;-).
• Excessive zeal is probably the source of the most horrible blunders. Rush is a form of excessive zeal. Doing something "quick and fast" to help the company, or your manager, or your colleagues, can often turn into an unmitigated disaster.
4. Inability to resist requests to violate established procedures when you are pressed. First of all, this is related to the violation of Rule No. 1: create a backup before starting any activity that can screw up the OS or important components.
5. Believing the user's version of the situation, without checking the gory details. First of all, users often do not understand what they want, so blindly following their instructions is a sure recipe for disaster. Always ask yourself: does this particular user know what they want? Often, if you check, the answer is: no, no, and again no. For example, if a user wrote you an email requesting that a newer version of the R language interpreter be installed on your servers ASAP, because the previous version is too old (which is true), without checking you might miss the real meaning of his message, which is quite different from the requested action (and that means that following his request leads to a rather big SNAFU):
1. The user is a typical luser (idiot/novice/incompetent) who knows neither Linux nor R well and tried to install some R package (or a group of packages). When the installation failed, he/she just jumped to the conclusion that the problem is with the R interpreter version, because he heard that there is a newer one.
2. The user inherited some code which he does not understand, and it does not run under the currently installed interpreter. In his infinite wisdom the user decided that the problem is not with him/her but with the R interpreter.
3. Combination of (1) and (2).
4. Some other reason with incompetence as the root cause.
If in this case you jump to action and update the interpreter, you can now face several more serious problems:
1. You might need to restore everything from backup after another user complains (and that means, for example, on all 16 or more servers that you just updated, spending a good part of your weekend ;-). At this point you might discover that not all servers have a recent backup, which spells trouble.
2. The problem the original user faces becomes much worse, and now you are in the loop to help him/her, because it is you who made it worse.
3. Multiple users start experiencing serious problems with their R scripts.
Another, more humiliating story of the same type, from Opensource.com:
The accidental spammer (An anonymous story)
It's a pretty common story that new sysadmins have to tell: they set up an email server and don't restrict access as a relay, and months later they discover they've been sending millions of spam emails across the world. That's not what happened to me.
I set up a Postfix and Dovecot email server; it was running fine, it had all the right permissions and all the right restrictions. It worked brilliantly for years. Then one morning, I was given a file of a few hundred email addresses. I was told it was an art organization list, and there was an urgent announcement that must be made to the list as soon as possible. So, I got right on it. I set up an email list, I wrote a quick sed command to pull out the addresses from the file, and I imported all the addresses. Then, I activated everything.
6. Loss of situational awareness. The latter is the ability to identify, process, and comprehend the critical elements of information about what is happening; the state of being alert to any, often subtle, clues. When you are tired, you often lose part or all of your situational awareness and are inclined to perform reckless actions. So the most horrible SNAFUs often happen when you are tired, exhausted, or sleepy.
Another source of the loss of situational awareness is lack of preparedness, when the person has already forgotten important details about a particular procedure or subsystem, due to very infrequent problems with it, but fails to RTFM and jumps into action.
In this sense, while a long troubleshooting session can be beneficial, as only this way do you get a "mental picture" (like an air traffic controller) of what is happening, extremely long troubleshooting sessions (all-nighters) are counterproductive (and even dangerous) for just this reason: in such conditions you can accidentally destroy with one stroke a vital part of the OS, or find some other creative way to make the situation worse. Working too long a shift when dealing with a SNAFU often creates a much bigger problem than the one you were dealing with.
Avoiding any complex or potentially destructive operation when you are tired is prudent advice, but due to the specifics of sysadmin work, with its unpredictable load peaks, it is very difficult to follow. Here are a couple of tips:
• When you can visualize something instead of just relying on your "mental picture", do it. Multiple terminal sessions to the same box are in this sense a must. In some cases an OFM-style file manager like Midnight Commander, and/or an X interface (for example via VNC) with a GUI file manager like Worker in addition to the command window, might help, as it provides context that is lacking on the pure command line. Removing files with mc is a much safer operation than using the command line, as you have visual feedback.
• The connection should be secure and reliable, as a sudden disconnect in the middle of a long operation can amplify the damage. Even such a simple utility as screen can prevent problems caused by a sudden disconnect from the remote box while performing an operation that cannot be interrupted. Using nohup with all sensitive operations is also a good practice.
• All your operations should be logged, and backups should be taken continually before and after important changes. Most terminal emulators allow you to create a log. This option should be used to create your private database. Some scripts can be written to clean the logs and convert them to more useful info pages.
• Always make a backup of /etc at the start of each day. Make a full backup of the system if you are doing something dangerous. The loss of two hours is nothing in comparison with a week of frantic troubleshooting and restore operations that spill into weekends. With 8TB USB 3.0 drives available, creating up to 8TB of backup in one session is not a problem.
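The daily /etc backup habit can be sketched as a small script. The paths under /tmp are demo assumptions; on a real box SRC would be /etc, DEST would be a separate disk or NFS share, and the script would run from a profile script or cron.

```shell
# Date-stamped tarball of a directory, verified after creation.
# SRC/DEST are demo locations; in real use SRC=/etc.
SRC=/tmp/demo-etc
DEST=/tmp/etc-backups
mkdir -p "$SRC" "$DEST"
echo "dummy config" > "$SRC/demo.conf"

archive="$DEST/$(basename "$SRC")-$(date +%Y%m%d).tar.gz"
tar -czf "$archive" -C "$(dirname "$SRC")" "$(basename "$SRC")"

# Never trust a backup you have not at least listed:
tar -tzf "$archive"
```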
7. Skipping the "cool down" period after a disaster. Often the most damage is caused not by the SNAFU itself but by subsequent hasty and badly thought out "recovery" actions. People tend to react to a disaster on an emotional basis, with feelings outweighing logic, and rush into action trying to save the situation while making it worse. Humans are "wired" biologically to fear first and think later. So after experiencing a first, often relatively minor problem, sysadmins often overreact and commit a huge blunder, trying to correct the error without full understanding of the situation. At this point a minor problem becomes a real SNAFU.
The key in facing any serious problem is to give yourself some "cool down" period. Just a couple of minutes of thinking about the problem can save you from making a misguided move that makes the situation tremendously worse, sometimes irreparably so. In any case, creating a backup is not a step you can skip. This is the step that differentiates an amateur from a professional.
8. Misperception of the complexity of the environment and associated risks. Modern hardware and system software are way too complex, and dealing with components of a modern server sometimes becomes a minefield (especially if this happens rarely and previous lessons and knowledge are long forgotten), which can also lead to disasters. For example, the HP P410 RAID controller has the interesting property of "forgetting" its configuration in certain circumstances if you remove a drive that is not used by the controller while the server is up. In this case on reboot you get something like
<4>cciss 0000:05:00.0: cciss: Trying to put board into performant mode
<4>cciss 0000:05:00.0: Placing controller into performant mode
<6> cciss/c0d0: unknown partition table
Formally it should allow hot swapping and removal of an inactive drive. But here your jaw drops, especially if you realize that you have no recent backup.
9. Reckless driving. The desire to "cut corners" is often connected with being tired, personal problems, excessive hubris, bravado, being over-caffeinated, etc. It is very similar to reckless driving. The absence of testing of complex commands, discussed above, can also be classified as an example of "reckless driving". Linux is such a complex OS, with so many important commands, that it is impossible to remember all the gory details. You need to refresh your memory by consulting your notes, man pages and the Web first. If this is not done and you rely on your intuition in using some feature, you can be badly disappointed :-(. For example, people often forget that ".*" matches "." and ".." and run rm or another "destructive" command without first testing what set of files is affected on the production server.
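The ".*" trap is easy to demonstrate safely. A minimal sketch in a scratch directory:

```shell
# In a directory containing one hidden file, the glob .* also expands
# to "." and "..", so a recursive command on .* climbs into the parent.
mkdir -p /tmp/dotglob-demo
cd /tmp/dotglob-demo
touch .hidden

printf '%s\n' .*        # prints ".", ".." and ".hidden"

# A safer pattern that skips "." and ".." (though it misses names
# beginning with two dots, such as "..foo"):
printf '%s\n' .[!.]*    # prints only ".hidden"
```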
An “Ohnosecond” is defined as the period of time between when you hit the Enter key and you realize what you just did.
There is a difference between a test server and a production server, in the sense that any action on a production server should be verified prior to execution. As in traffic incidents involving reckless drivers, a reckless sysadmin is aware of the risk and consciously disregards it.
State laws usually define reckless driving as “driving with a willful or a wanton disregard for the safety of persons or property,” or in similar terms. Courts consider alcohol and drug use as a factor in deciding whether the driver’s actions were reckless.
Raising situation awareness by doing self-safety training
"Those Who Forget History Are Doomed to Repeat It"
Multiple authors
"Those who cannot remember the past are condemned to repeat it."
George Santayana
Having even a primitive recording of your blunders in the form of, say, a plain text file, HTML page, Word document, or special logbook is a good way to increase situational awareness. Some blunders are repetitive.
People are usually unable to learn from blunders committed by others. They prefer to make their own... And even in this case, after a year or two the lesson is typically completely forgotten.
Re-reading the description of your own blunder typically provokes a strong emotional reaction and reinforces understanding of the dangers related to this blunder. This type of "emotional memory" is very important in helping to avoid a similar blunder in the future. That means that periodic review of descriptions of your own blunders is a really necessary part of the sysadmin arsenal. Re-reading those descriptions should be treated as periodic (for example, once a quarter) self-safety training, much like safety training in large corporations.
I can attest that those 10-15 minutes spent on re-reading and enhancing this material once a month can help to avoid some of the situations described below. A spectacular blunder is often too valuable to be forgotten, as it tends to repeat itself ;-). And people tend to commit the same blunders again and again. If you read some of the stories from the late '90s, they often sound as if they were written yesterday.
Reading about somebody else's blunder does not fully convey the gravity of the situation in which you can find yourself by repeating it. But it can serve as a weaker substitute for a log of your own blunders. For example, the understanding that dealing with files and directories starting with a dot in Unix requires extreme caution can probably be acquired only by committing one (just one) such blunder.
Dealing with RAID controllers is another area that requires extreme caution, good planning and the availability of a verified backup. Sometimes even a routine firmware update turns into an unmitigated disaster. This is also an area where the difference between a minor nuisance and a major disaster is the presence of a recent backup.
Some typical cases of loss of situational awareness
Here is a list, compiled by the author, of typical cases of the loss of situational awareness:
• Performing some critical operation on the wrong server. If you have multiple terminal sessions to servers with similar names, at some point you can find yourself performing an operation on the wrong server. One of the simplest countermeasures is to change the background of the terminal of the server on which you are performing critical operations to yellow (or some other distinct color). This can be done in Teraterm. If you use a Windows desktop to connect to Unix servers, use MSVDM to create multiple desktops and change the background of each to make typing a command in the wrong terminal window less likely. If you prefer to work as root, switch to root only on the server that you are working on. Use your regular ID and sudo on the others.
• Failure to keep a record of your steps and to verify steps before applying them to the production box. For example, people often forget that ".*" matches "." and ".." and run rm or another "destructive" command without first testing what set of files is affected on the production server. There is a difference between a test server and a production server, in the sense that any action on a production server should be verified prior to execution. The similarity with traffic incidents here is that, like a reckless driver, a reckless sysadmin is aware of the risk and consciously disregards it.
• Distraction by the noise typical of a large datacenter. I strongly recommend noise-cancelling headphones for a noisy datacenter; they greatly reduced my noise/stress level during days of datacenter work.
• Sleep deprivation, which leads to worsening mood and communication skills, inability to focus, and decreased mental and physical performance. Chronic sleep deprivation can lead to neurotic behavior.
• Excessive use of caffeine. Caffeine does not help in case of sleep deprivation and leads to side effects including overexcited, aggressive behavior and a related set of blunders. In high doses caffeine can push the heart rate too high.
• Extreme fatigue, especially after multi-hour troubleshooting binges. Stressful situations usually increase the chance that fatigue will impair your abilities.
• Confusion, or use of "gut feeling" instead of consulting the man page, the Web and, if applicable, your notes, when using some obscure command switches and such.
• Departure from standard operating procedures, taking shortcuts and sudden change of plan.
• Ambiguity of the environment, like the presence of an etc directory in your home directory that can be confused with the system /etc directory, so that the command rm -r /etc is entered automatically when you want to delete its content, because the absolute path is hardwired in your brain.
• Fixation or preoccupation with the speed (meeting deadline) instead of quality.
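The first item above (a visually distinct terminal per host) can also be enforced in the shell prompt itself. A sketch for bash; the prod* hostname pattern is a made-up example:

```shell
# Color the prompt by host class so a production shell is unmistakable:
# white on red for hosts matching prod*, black on green elsewhere.
case "$(hostname -s)" in
  prod*) host_color='\[\e[41;97m\]' ;;   # production: white on red
  *)     host_color='\[\e[42;30m\]' ;;   # everything else: black on green
esac
PS1="${host_color}\u@\H\[\e[0m\]:\w\$ "   # user@full-hostname:cwd$
echo "prompt set for $(hostname)"
```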
History of this effort
On this page we present the "Softpanorama classification of sysadmin horror stories". It is not the first such effort and hopefully not the last one. And we need to pay proper tribute to the pioneer in this area -- Anatoly Ivasyuk.
The author is indebted to Anatoly Ivasyuk, who created the original "The Unofficial Unix Administration Horror Story Summary", which exists in two major versions.
One thing that we need in this area is a good classification. While Anatoly Ivasyuk made the first and most difficult step, more can be done. One such classification, created by the author, is presented below.
This page and related subpages can be viewed as an attempt to create more relevant classification of sysadmin blunders, reorganize the existing material and enhance the content by adding more modern stories.
The issues connected with ego and hubris
hubris: Overbearing pride or presumption; arrogance: "There is no safety in unlimited technological hubris" (McGeorge Bundy)
"All the world's a stage, / And all the men and women merely players; / They have their exits and their entrances, / And one man in his time plays many parts." Shakespeare, As You Like It, Act 2, Scene 7, 139-143
"I think there's a lot of naivete and hubris within our mix of personalities." - Ian Williams
Hubris (/ˈhjuːbrɪs/, also hybris, from ancient Greek ὕβρις) describes a personality quality of extreme or foolish pride or dangerous over-confidence.[1] In its ancient Greek context, it typically describes behavior that defies the norms of behavior or challenges the gods, and which in turn brings about the downfall, or nemesis, of the perpetrator of hubris (Hubris - Wikipedia)
Larry Wall once said that "The three chief virtues of a programmer are: Laziness, Impatience and Hubris." I assume that this was a joke, and it is not really true even for programmers. But for system administrators those three qualities are mortal sins, especially the last two. Hubris alone will never let you be a good system administrator. That's what distinguishes system administrators from artists.
We're all victims of our own hubris at times. Success usually breeds a degree of hubris, but some people are more affected than others. The problems start when people are too shy to ask more experienced colleagues for advice or information, because they are afraid to demonstrate that they do not know something which others assume they know. Sometimes this is the reason that leads to disasters.
If the senior, more experienced sysadmin looks at you like you’re an idiot, ask him why. It's better to be thought an idiot for asking than proven to be an idiot by not asking!
Backup = ( Full + Removable tapes (or media) + Offline + Offsite + Tested )
Vivek Gite
1. Creative uses of rm with unintended consequences. This is an intrinsic, unavoidable danger in Linux, like using a sharp blade or a chainsaw. Blunders happen very infrequently, but even a single one can be devastating, and if it happens on a production server it can cost you your job. That means that the level of knowledge of the intricacies of the rm command directly correlates with the level of qualification of a Linux sysadmin. Please read the recommendations in Creative uses of rm with unintended consequences. They were created as a generalization of unfortunate episodes (usually called SNAFUs) of many sysadmins, including myself.
2. Missing backup. Please remember that the backup is the last chance for you to restore the system if something goes terribly wrong. That means that before any dangerous step you need to locate the backup and check that it exists. Making another backup is also a good idea, so that you have two or more recent copies. At least attempting to browse the backup and see whether the data are intact is a must.
3. Missing baseline and losing the initial configuration in the stream of changes. The most typical mistake in network troubleshooting and optimization is losing your initial configuration. This might also mean lack of preparation and lack of situational awareness. You need to take several steps to prevent this blunder from occurring, and the most important of them are baselines and backups.
4. Locking yourself out
• Accidentally cutting access to the remote box or hosing in some way your remote network connection (locking yourself out). For example, changing firewall rules and not testing them before logout.
• Forgetting the root password on a remote box. This is a very common problem, especially if access to the remote box is rare. Generally, if regular passwords are used, it is useful to wear an electronic watch with a memo pad, like a Casio (but not a smartwatch ;-)
5. Performing an operation on the wrong computer. The naming schemes used by large corporations usually do not have enough distance between names to avoid such blunders. Also, if you work in multiple terminals and do not distinguish them by color, you can easily make such a blunder. For example, you can type XYZ300 instead of XYZ200 and log in to the wrong box. If you are in a hurry and do not check the name, you proceed with an operation intended for a different box. Another common situation is when you have several terminal windows open and in a hurry start working on the wrong server. That's why it is important that the shell prompt shows the name of the host (but this is not enough; in a terminal the color of the background is also important, probably more important). If you have both a production server and a QA server for some application, it is wise never to have terminals to both opened simultaneously while doing something tricky and potentially disastrous (if done on the wrong box). Reopening a session is not a big deal, but it can save you from some very unpleasant situations.
6. Forgetting which directory you are in and executing a command in the wrong directory. This is a common mistake if you work under severe time pressure or are very tired.
7. Regular-expression-related blunders. Novice sysadmins usually do not realize that '.*' also matches '..', often with disastrous consequences if commands like chmod, chown or rm are used recursively or in a find command.
8. Find filesystem traversal errors and other errors related to find. This is a very common class of errors, and it is covered on a separate page: Typical Errors In Using Find.
9. Side effects of performing operations on home or application directories due to links to system directories. This is a pretty common mistake, and I have committed it myself several times, with various, but always unpleasant, consequences.
10. Misunderstanding the syntax of an important command and/or not testing a complex command before execution on a production box. Such errors are often made under time pressure. One such case is using recursive rm, chown, chmod or find commands. Each of them deserves a category of its own.
11. Ownership changing blunders. Those are common when using chown with find, so you need to test the command first.
12. Excessive zeal in improving the security of the system ;-). A lot of current security recommendations are either stupid or counterproductive. In the hands of an overly enthusiastic and semi-competent administrator they become a weapon that no hacker can ever match. I think more systems have been destroyed by idiotic security measures than by hackers.
13. Mistakes done under time pressure. Some of them were discussed above, but generally time pressure serves as a powerful catalyst for the most devastating mistakes.
14. Patching horrors
15. Unintended consequences of automatic system maintenance scripts
16. Side effects/unintended consequences of multiple sysadmin working on the same box
17. Premature or misguided optimization and/or cleanup of the system. Changing settings without full understanding of the consequences of such changes. Misguided attempts to get rid of unwanted files or directories (cleaning the system).
18. Mistakes made because of the differences between various Unix/Linux flavors. For example, in Solaris run level 5 means reboot, while in Linux run level 5 is a running system with networking and X11.
19. Stupid or preventable mistakes including those when dealing with complex server hardware.
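For the find-related errors above (items 8 and 10), one universal precaution is a dry run: print the matches first, and only then attach the destructive action. A sketch with throwaway demo paths:

```shell
# Dry-run habit for destructive find commands: inspect with -print,
# then repeat the identical expression with the action.
mkdir -p /tmp/find-demo/keep
touch /tmp/find-demo/old.log /tmp/find-demo/keep/new.log

find /tmp/find-demo -maxdepth 1 -name '*.log' -print    # see what matches
find /tmp/find-demo -maxdepth 1 -name '*.log' -delete   # then act

ls /tmp/find-demo/keep    # new.log is untouched
```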
Some personal experience
Cleaning NFS mounted home directory to save space
To speed up installation of the server I mounted my home directory from another server. Then I forgot about it, and it remained mounted. CentOS 6.9 was installed on the server. Later a researcher asked to reinstall RHEL on it, as one of his applications was supported only on RHEL, and I started with backing up all critical directories "just in case". Thinking that I already had a copy of my home directory elsewhere, I decided to shrink space on the /home filesystem and, not realizing that it was NFS-mounted, deleted it.
Reboot of wrong server
Such commands as reboot or mkinitrd can be pretty devastating when applied to the wrong server. This mishap happens to a lot of administrators, including myself, so it is prudent to take special measures to make it less probable.
This situation is often made more probable by the non-fault-tolerant naming schemes employed in many corporations, where names of servers differ by one symbol. For example, the scheme serv01, serv02, serv03 and so on is a pretty dangerous naming scheme, as server names differ by only a single digit and thus errors like working on the wrong server are much more probable.
The typical case of the loss of situational awareness is performing some critical operation on the wrong server. If you use a Windows desktop to connect to Unix servers, use MSVDM to create multiple desktops and change the background of each to make typing a command in the wrong terminal window less likely.
Even more complex schemes like Bsn01dls9 or Nyc02sns10, where the first three letters encode the location, followed by a numeric suffix and then the vendor of the hardware and the OS installed, are prone to such errors. My impression is that unless the first letters differ, there is a substantial chance of working on the wrong server. Using favorite sports team names is a better strategy, and those "formal" names can be used as aliases.
If you try to distill the essence of horror stories most of them were upgraded from errors to horror stories due to inadequate backups.
Having a good recent backup is the key feature that distinguishes a mere nuisance from a full-blown disaster. This point is very difficult to understand for novice enterprise administrators. Paraphrasing Bernard Shaw, we can say: "Experience keeps the most expensive school, but most sysadmins are unable to learn anywhere else". Please remember that in an enterprise environment you will almost never be rewarded for innovations and contributions, but in many cases you will be severely punished for blunders. In other words, typical enterprise IT is a risk-averse environment, and you had better understand that sooner rather than later...
Rush and absence of planning are probably the second most important reason. In many cases the sysadmin is stressed, and that impairs judgment.
Forgetting to chroot affected subtree
Another typical reason is abuse of privileges. Having access to root does not mean that you need to perform all operations as root. For example, such a simple operation as
cd /home/joeuser
chown -R joeuser:joeuser .*
performed as root causes substantial problems and time lost in recovering the ownership of system files. Computers are really fast now, and on a modern server such an operation takes only a second or two :-(.
Even with user privileges there will be some damage: it will affect all world writable files and directories.
This is the case where chroot can provide tremendous help (note that this works only if a chown binary and its libraries are available inside the new root):
cd /home/joeuser
chroot /home/joeuser
chown -R joeuser:joeuser .*
Abuse of root privileges
Another typical reason is abuse of root privileges. Using sudo or RBAC (on Solaris) you can avoid some unpleasant surprises. Another good practice is to use screen, with one screen for root operations and another for operations that can be performed under your own ID or under the privileges of the wheel group (or another group to which all sysadmins belong).
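Separating root work into its own screen session, and shielding long operations from disconnects, can look like the sketch below. The screen commands are shown as illustration (session names are arbitrary); the runnable part uses nohup.

```shell
# Named screen session for root work (illustration only):
#   screen -S root-work      # start the dedicated root session
#   screen -ls               # list sessions after a disconnect
#   screen -r root-work      # reattach and continue

# nohup detaches a single long command from the terminal, so a dropped
# connection does not kill it; output is captured in a log file.
nohup sh -c 'sleep 1; echo "long job finished"' > /tmp/longjob.log 2>&1 &
wait $!
cat /tmp/longjob.log
```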
Many Unix sysadmin horror stories are related to unintended consequences and unanticipated side effects of particular Unix commands, such as find and rm, performed with root privileges. Unix is a complex OS, and many intricate details (like the behavior of commands like rm -r .* or chown -R a:a .*) can easily be forgotten from one encounter to another, especially if a sysadmin works with several flavors of Unix, or with Unix and Windows servers.
For example, recursive deletion of files, either via rm -r or via find -exec rm {} \;, has a lot of pitfalls that can destroy a server pretty thoroughly in less than a minute if run without testing.
Some of those pitfalls can be viewed as a deficiency of the rm implementation (it should automatically block deletion of system directories like /, /etc and so on unless the -f flag is specified; but Unix lacks a system attribute for files, although in some cases the sticky bit on directories (like /tmp) can help).
That means that it is wise to use a wrapper for rm. There are several more or less usable approaches to writing such a wrapper:
• A configurable blacklist of files and directories that should never be removed. This is the approach implemented in the Perl script safe-rm.
• Redefining rm as mv to a junk directory, which should be cleaned periodically with find -mtime +7 or similar.
• Displaying the first several targets before executing the actual rm command, along with the total number of affected files, and asking for confirmation. In this case rm is wrapped in a shell function, as on the command line rm is usually typed without a path.
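The second approach (rm as mv to a junk directory) can be sketched as a shell function. The ~/.junk location and the timestamp suffix are arbitrary choices, and this toy version deliberately ignores rm options like -r and -f:

```shell
# Redefine rm as a move into a junk directory; files are parked there
# instead of destroyed, and can be purged later by age.
JUNK="$HOME/.junk"
mkdir -p "$JUNK"

rm() {
    local f
    for f in "$@"; do
        # skip anything that does not exist (including option-like args)
        [ -e "$f" ] && mv -- "$f" "$JUNK/$(basename "$f").$(date +%s)"
    done
}

# Periodic cleanup, e.g. from cron: purge junk older than 7 days.
#   find "$HOME/.junk" -mtime +7 -delete

touch /tmp/victim.txt
rm /tmp/victim.txt      # parked, not gone
ls "$JUNK"
```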
Another important source of blunders is time pressure. Trying to do something quickly and cutting corners (such as skipping the creation of a verified backup) often leads to substantial downtime. "Hurry slowly" is one of the sayings that are very true for sysadmins, but unfortunately very difficult to follow. In any case, always back up the /etc directory on your login (this should be done from a profile or bashrc script).
Sometimes your emotional state contributes to the problems: you didn't have much sleep, or your mind was distracted by personal problems. On such days it is important to slow down and be extra cautious. Doing nothing in such cases is much better than creating another SNAFU.
Typos are another common source of serious, sometimes disastrous, errors. One rule should be followed (but as the memory of the last incident fades, this rule, like any safety rule, is usually forgotten :-): if you are working as root and performing dangerous operations, never type the directory path. Always copy it from the history if possible, or list it via the ls command and copy it from the screen.
I once automatically typed /etc instead of etc while trying to delete a directory to free space in a backup directory on a production server (/etc is probably engraved in a sysadmin's head, as it is typed so often, and can be substituted for etc subconsciously). I realized that it was a mistake and cancelled the command, but it was a fast server and one third of /etc was gone. The rest of the day was spoiled... Actually not completely: I learned quite a bit about the behavior of AIX in this situation and about the structure of the AIX /etc directory that day, so each such disaster is actually a great learning experience, almost like a one-day or even one-week training course ;-). But it's much less nerve-wracking to get this knowledge from a regular course...
Another interesting lesson: having a backup was not enough in this case. Backup software can silently stop working, leaving the server with the illusion of a backup rather than an actual backup. That happens with HP Data Protector, which is too complex a piece of software to operate reliably. The same can be true for ssh- and rsync-based backups: something in the configuration changes, and it goes unnoticed until it is too late. And this was a remote server in a datacenter across the country. I restored the directory on another, non-production server (overwriting the /etc directory on that second box, with the help of operations; tell me about cascading errors and Murphy's law :-). Then netcat helped to transfer the tar file.
In such cases network services with authentication stop working, and the only way to transfer files is a CD/DVD, a USB drive, or netcat. That's why it is useful to have netcat on servers: it is the last-resort file transfer program for when services with authentication, like ftp or scp, stop working. It is especially useful if the datacenter is remote.
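A transfer like the one above looks roughly as follows. The hostname and port are placeholders, and exact flags vary between netcat builds, so check yours first. The runnable part below only sanity-checks the tar-over-a-pipe idea locally, with cat standing in for the network hop.

```shell
# Receiver (run first; some netcat builds want 'nc -l 3000' without -p):
#   nc -l -p 3000 > etc.tar.gz
# Sender (push /etc as a compressed tar stream; -w 3 closes after idle):
#   tar czf - -C / etc | nc -w 3 receiver.example.com 3000
#
# Local sanity check of the same pipeline, runnable anywhere:
work=$(mktemp -d)
mkdir -p "$work/etc"
echo "127.0.0.1 localhost" > "$work/etc/hosts"
tar czf - -C "$work" etc | cat > "$work/etc.tar.gz"
tar tzf "$work/etc.tar.gz"
```

Note that netcat sends everything in the clear with no authentication, which is exactly why it keeps working when the authenticated services are broken; use it only as a last resort.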
What other authors are saying
Linux Server Hacks, Volume Two Tips & Tools for Connecting, Monitoring, and Troubleshooting William von Hagen, Brian K. Jones
Avoid Common Junior Mistakes
Get over the junior admin hump and land in guru territory.
No matter how "senior" you become, and no matter how omnipotent you feel in your current role, you will eventually make mistakes. Some of them may be quite large. Some will wipe entire weekends right off the calendar. However, the key to success in administering servers is to mitigate risk, have an exit plan, and try to make sure that the damage caused by potential mistakes is limited. Here are some common mistakes to avoid on your road to senior-level guru status.
Don't Take the root Name in Vain
Try really hard to forget about root. Here's a quick comparison of the usage of root by a seasoned vet versus by a junior administrator.
Solid, experienced administrators will occasionally forget that they need to be root to perform some function. Of course they know they need to be root as soon as they see their terminal filling with errors, but running su - root occasionally slips their mind. No big deal. They switch to root, they run the command, and they exit the root shell. If they need to run only a single command, such as a make install, they probably just run it like this:
$ su -c 'make install'

This will prompt you for the root password and, if the password is correct, will run the command and dump you back to your lowly user shell.

A junior-level admin, on the other hand, is likely to have five terminals open on the same box, all logged in as root. Junior admins don't consider keeping a terminal that isn't logged in as root open on a production machine, because "you need root to do anything anyway." This is horribly bad form, and it can lead to some really horrid results. Don't become root if you don't have to be root!

Building software is a good example. After you download a source package, unzip it in a place you have access to as a user. Then, as a normal user, run your ./configure and make commands. If you're installing the package to your ~/bin directory, you can run make install as yourself. You only need root access if the program will be installed into directories to which only root has write access, such as /usr/local.

My mind was blown one day when I was introduced to an entirely new meaning of "taking the root name in vain." It doesn't just apply to running commands as root unnecessarily. It also applies to becoming root specifically to grant unprivileged access to things that should only be accessible by root! I was logged into a client's machine (as a normal user, of course), poking around because the user had reported seeing some odd log messages. One of my favorite commands for tracking down issues like this is ls -lahrt /etc, which does a long listing of everything in the directory, reverse sorted by modification time. In this case, the last thing listed (and hence, the last thing modified) was /etc/shadow. Not too odd if someone had added a user to the local machine recently, but it so happened that this company used NIS+, and the permissions had been changed on the file!
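The build-as-user advice can be seen end to end with a trivial stand-in for a real package. The hello-demo script and the ~/.local prefix are arbitrary choices for this sketch; for a real autotools package the equivalent flow is ./configure --prefix="$HOME/.local" && make && make install.

```shell
# Install into a per-user prefix instead of /usr/local -- no root needed,
# because everything stays under $HOME. 'hello-demo' stands in for a real
# package's build output.
PREFIX="$HOME/.local"
mkdir -p "$PREFIX/bin"
printf '#!/bin/sh\necho hello from a user-level install\n' > "$PREFIX/bin/hello-demo"
chmod +x "$PREFIX/bin/hello-demo"
"$PREFIX/bin/hello-demo"
```

Adding $HOME/.local/bin to PATH then makes such installs usable without ever touching a root shell.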
I called the number they'd told me to call if I found anything, and a junior administrator admitted that he had done that himself because he was writing a script that needed to access that file. Ugh.

Don't Get Too Comfortable

Junior admins tend to get really into customizing their environments. They like to show off all the cool things they've recently learned, so they have custom window manager setups, custom logging setups, custom email configurations, custom tunneling scripts to do work from their home machines, and, of course, custom shells and shell initializations. That last one can cause a bit of headache.

If you have a million aliases set up on your local machine and some other set of machines that mount your home directory (thereby making your shell initialization accessible), things will probably work out for that set of machines. More likely, however, is that you're working in a mixed environment with Linux and some other Unix variant. Furthermore, the powers that be may have standard aliases and system-wide shell profiles that were there long before you were. At the very least, if you modify the shell, you have to test that everything you're doing works as expected on all the platforms you administer. Better is just to keep a relatively bare-bones administrative shell.

Sure, set the proper environment variables, create three or four aliases, and certainly customize the command prompt if you like, but don't fly off into the wild blue yonder sourcing all kinds of bash completion commands, printing the system load to your terminal window, and using shell functions to create your shell prompt. Why not? Well, because you can't assume that the same version of your shell is running everywhere, or that the shell was built with the same options across multiple versions of multiple platforms! Furthermore, you might not always be logging in from your desktop.
Ever see what happens if you mistakenly set up your initialization file to print stuff to your terminal's titlebar without checking where you're coming from? The first time you log in from a dumb terminal, you'll realize it wasn't the best of ideas. Your prompt can wind up being longer than the screen!

Just as versions and build options for your shell can vary across machines, so too can "standard" commands -- drastically! Running chown -R has wildly different effects on Solaris than it does on Linux machines, for example. Solaris will follow symbolic links and keep on truckin', happily skipping about your directory hierarchy and recursively changing ownership of files in places you forgot existed. This doesn't happen under Linux. To get Linux to behave the same way, you need to use the -H flag explicitly. There are lots of commands that exhibit different behavior on different operating systems, so be on your toes!

Don't Perform Production Commands "Off the Cuff"

Many environments have strict rules about how software gets installed, how new machines are built and pushed into production, and so on. However, there are also thousands of sites that don't enforce any such rules, which quite frankly can be a bit scary. Not having the funds to come up with a proper testing and development environment is one thing. Having a blatant disregard for the availability of production services is quite another.

When performing software installations, configuration changes, mass data migrations, and the like, do yourself a huge favor (actually, a couple of favors):

• Script the procedure! Script it and include checks to make sure that everything in the script runs without making any assumptions. Check to make sure each step has succeeded before moving on.
• Script a backout procedure.
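That advice can be sketched as a script that takes a backup first, verifies each step, and restores on failure. Every path here is a scratch stand-in for a real production location.

```shell
# Scripted change with a verified backup and an automatic backout path.
# $app is a scratch directory standing in for a live config location.
set -u
app=$(mktemp -d)
echo "old setting" > "$app/app.conf"

backup="$app.bak"
cp -a "$app" "$backup" || { echo "backup failed, aborting" >&2; exit 1; }

# step: apply the change, then verify it before declaring success
echo "new setting" > "$app/app.conf"
if grep -q "new setting" "$app/app.conf"; then
    echo "change applied and verified"
else
    echo "verification failed, backing out" >&2
    rm -rf "$app" && mv "$backup" "$app"
fi
```

The shape matters more than the specifics: backup before touching anything, check after every step, and make the backout a tested code path rather than something improvised at 3 a.m.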
If you've moved all the data, changed the configuration, added a user for an application to run as, and installed the application, and something blows up, you really will not want to spend another 40 minutes cleaning things up so that you can get things back to normal. In addition, if things blow up in production, you could panic, causing you to misjudge, mistype, and possibly make things worse. Script it!

The process of scripting these procedures also forces you to think about the consequences of what you're doing, which can have surprising results. I once got a quarter of the way through a script before realizing that there was an unmet dependency that nobody had considered. This realization saved us a lot of time and some cleanup as well.

Ask Questions

The best tip any administrator can give is to be conscious of your own ignorance. Don't assume you know every conceivable side effect.

Dr. Nikolai Bezroukov

Old News ;-)

"Those Who Forget History Are Doomed to Repeat It"

"Those who cannot remember the past are condemned to repeat it." -- George Santayana

An "Ohnosecond" is defined as the period of time between when you hit enter and when you realize what you just did.

[Nov 08, 2019] What breaks our systems: A taxonomy of black swans by Laura Nolan

Oct 25, 2018 | opensource.com

Find and fix outlier events that create issues before they trigger severe production problems.

Black swans are a metaphor for outlier events that are severe in impact (like the 2008 financial crash). In production systems, these are the incidents that trigger problems that you didn't know you had, cause major visible impact, and can't be fixed quickly and easily by a rollback or some other standard response from your on-call playbook. They are the events you tell new engineers about years after the fact.
Black swans, by definition, can't be predicted, but sometimes there are patterns we can find and use to create defenses against categories of related problems. For example, a large proportion of failures are a direct result of changes (code, environment, or configuration). Each bug triggered in this way is distinctive and unpredictable, but the common practice of canarying all changes is somewhat effective against this class of problems, and automated rollbacks have become a standard mitigation.

As our profession continues to mature, other kinds of problems are becoming well-understood classes of hazards with generalized prevention strategies.

Black swans observed in the wild

All technology organizations have production problems, but not all of them share their analyses. The organizations that publicly discuss incidents are doing us all a service. The following incidents each describe one class of problem and are by no means isolated instances. We all have black swans lurking in our systems; it's just that some of us don't know it yet.

Hitting limits

Running headlong into any sort of limit can produce very severe incidents. A canonical example of this was Instapaper's outage in February 2017. I challenge any engineer who has carried a pager to read the outage report without a chill running up their spine. Instapaper's production database was on a filesystem that, unknown to the team running the service, had a 2TB limit. With no warning, it stopped accepting writes. Full recovery took days and required migrating its database.

Limits can strike in various ways. Sentry hit limits on maximum transaction IDs in Postgres. Platform.sh hit size limits on a pipe buffer. SparkPost triggered AWS's DDoS protection. Foursquare hit a performance cliff when one of its datastores ran out of RAM.

One way to get advance knowledge of system limits is to test periodically.
Good load testing (on a production replica) ought to involve write transactions and should involve growing each datastore beyond its current production size. It's easy to forget to test things that aren't your main datastores (such as Zookeeper). If you hit limits during testing, you have time to fix the problems. Given that resolution of limits-related issues can involve major changes (like splitting a datastore), time is invaluable.

When it comes to cloud services, if your service generates unusual loads or uses less widely used products or features (such as older or newer ones), you may be more at risk of hitting limits. It's worth load testing these, too. But warn your cloud provider first.

Finally, where limits are known, add monitoring (with associated documentation) so you will know when your systems are approaching those ceilings. Don't rely on people still being around to remember.

Spreading slowness

"The world is much more correlated than we give credit to. And so we see more of what Nassim Taleb calls 'black swan events' -- rare events happen more often than they should because the world is more correlated." -- Richard Thaler

HostedGraphite's postmortem on how an AWS outage took down its load balancers (which are not hosted on AWS) is a good example of just how much correlation exists in distributed computing systems. In this case, the load-balancer connection pools were saturated by slow connections from customers that were hosted in AWS. The same kinds of saturation can happen with application threads, locks, and database connections -- any kind of resource monopolized by slow operations.

HostedGraphite's incident is an example of externally imposed slowness, but often slowness can result from saturation somewhere in your own system creating a cascade and causing other parts of your system to slow down. An incident at Spotify demonstrates such spread -- the streaming service's frontends became unhealthy due to saturation in a different microservice.
Enforcing deadlines for all requests, as well as limiting the length of request queues, can prevent such spread. Your service will serve at least some traffic, and recovery will be easier because fewer parts of your system will be broken.

Retries should be limited with exponential backoff and some jitter. An outage at Square, in which its Redis datastore became overloaded due to a piece of code that retried failed transactions up to 500 times with no backoff, demonstrates the potential severity of excessive retries. The Circuit Breaker design pattern can be helpful here, too.

Dashboards should be designed to clearly show utilization, saturation, and errors for all resources so problems can be found quickly.

Thundering herds

Often, failure scenarios arise when a system is under unusually heavy load. This can arise organically from users, but often it arises from systems. A surge of cron jobs that starts at midnight is a venerable example. Mobile clients can also be a source of coordinated demand if they are programmed to fetch updates at the same time (of course, it is much better to jitter such requests).

Events occurring at pre-configured times aren't the only source of thundering herds. Slack experienced multiple outages over a short time due to large numbers of clients being disconnected and immediately reconnecting, causing large spikes of load. CircleCI saw a severe outage when a GitLab outage ended, leading to a surge of builds queued in its database, which became saturated and very slow.

Almost any service can be the target of a thundering herd. Planning for such eventualities -- and testing that your plan works as intended -- is therefore a must. Client backoff and load shedding are often core to such approaches. If your systems must constantly ingest data that can't be dropped, it's key to have a scalable way to buffer this data in a queue for later processing.

Automation systems are complex systems

"Complex systems are intrinsically hazardous systems."
-- Richard Cook, MD

The trend for the past several years has been strongly towards more automation of software operations. Automation of anything that can reduce your system's capacity (e.g., erasing disks, decommissioning devices, taking down serving jobs) needs to be done with care. Accidents (due to bugs or incorrect invocations) with this kind of automation can take down your system very efficiently, potentially in ways that are hard to recover from.

Christina Schulman and Etienne Perot of Google describe some examples in their talk Help Protect Your Data Centers with Safety Constraints. One incident sent Google's entire in-house content delivery network (CDN) to disk-erase. Schulman and Perot suggest using a central service to manage constraints, which limits the pace at which destructive automation can operate, and being aware of system conditions (for example, avoiding destructive operations if the service has recently had an alert).

Automation systems can also cause havoc when they interact with operators (or with other automated systems). Reddit experienced a major outage when its automation restarted a system that operators had stopped for maintenance. Once you have multiple automation systems, their potential interactions become extremely complex and impossible to predict.

It will help to deal with the inevitable surprises if all this automation writes logs to an easily searchable, central place. Automation systems should always have a mechanism to allow them to be quickly turned off (fully or only for a subset of operations or targets).

Defense against the dark swans

These are not the only black swans that might be waiting to strike your systems.
There are many other kinds of severe problem that can be avoided using techniques such as canarying, load testing, chaos engineering, disaster testing, and fuzz testing -- and of course designing for redundancy and resiliency. Even with all that, at some point your system will fail.

To ensure your organization can respond effectively, make sure your key technical staff and your leadership have a way to coordinate during an outage. For example, one unpleasant issue you might have to deal with is a complete outage of your network. It's important to have a fail-safe communications channel completely independent of your own infrastructure and its dependencies. For instance, if you run on AWS, using a service that also runs on AWS as your fail-safe communication method is not a good idea. A phone bridge or an IRC server that runs somewhere separate from your main systems is good. Make sure everyone knows what the communications platform is and practices using it.

Another principle is to ensure that your monitoring and your operational tools rely on your production systems as little as possible. Separate your control and your data planes so you can make changes even when systems are not healthy. Don't use a single message queue for both data processing and config changes or monitoring, for example -- use separate instances. In SparkPost: The Day the DNS Died, Jeremy Blosser presents an example where critical tools relied on the production DNS setup, which failed.

The psychology of battling the black swan

Dealing with major incidents in production can be stressful. It really helps to have a structured incident-management process in place for these situations. Many technology organizations (including Google) successfully use a version of FEMA's Incident Command System.
There should be a clear way for any on-call individual to call for assistance in the event of a major problem they can't resolve alone. For long-running incidents, it's important to make sure people don't work for unreasonable lengths of time and get breaks to eat and sleep (uninterrupted by a pager). It's easy for exhausted engineers to make a mistake or overlook something that might resolve the incident faster.

Learn more

There are many other things that could be said about black (or formerly black) swans and strategies for dealing with them. If you'd like to learn more, I highly recommend these two books dealing with resilience and stability in production: Susan Fowler's Production-Ready Microservices and Michael T. Nygard's Release It!

Laura Nolan will present What Breaks Our Systems: A Taxonomy of Black Swans at LISA18, October 29-31 in Nashville, Tennessee, USA.

[Nov 08, 2019] How to prevent and recover from accidental file deletion in Linux (Enable Sysadmin)

A trash utility such as trashy (Trashy · GitLab) might make sense in simple cases. But often massive file deletions are about attempts to free space.

Nov 08, 2019 | www.redhat.com

Back up

You knew this would come first. Data recovery is a time-intensive process and rarely produces 100% correct results. If you don't have a backup plan in place, start one now.

Better yet, implement two. First, provide users with local backups with a tool like rsnapshot. This utility creates snapshots of each user's data in a ~/.snapshots directory, making it trivial for them to recover their own data quickly. There are a great many other open source backup applications that permit your users to manage their own backup schedules.

Second, while these local backups are convenient, also set up a remote backup plan for your organization. Tools like AMANDA or BackupPC are solid choices for this task. You can run them as a daemon so that backups happen automatically.
Backup planning and preparation pay for themselves in both time and peace of mind. There's nothing like not needing emergency response procedures in the first place.

Ban rm

On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data.

Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forego their rm command for the more complete shred, which really removes their data. In other words, most terminal users invoke the rm command because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un-rm. Still, using those tools takes up their administrator's precious time. Don't let your users -- or yourself -- fall prey to this breach of logic.

If you really want to remove data, then rm is not sufficient: use the shred -u command instead, which overwrites and then thoroughly deletes the specified data. However, if you don't actually want to remove data permanently, don't use rm, because it has no undo feature. Instead, use trashy or trash-cli to "delete" files into a trash bin from your terminal, like so:

$ trash ~/example.txt
$ trash --list
example.txt

One advantage of these commands is that the trash bin they use is the same as your desktop's trash bin. With them, you can recover your trashed files by opening either your desktop Trash folder or the terminal.

If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself:

$ echo "alias rm='trash'" >> ~/.bashrc
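The mechanism behind such tools is simple. trash-cli is the real utility; the function below is a stand-in written for this sketch (with a simplified directory layout compared to the freedesktop.org Trash spec that trash-cli follows), so the idea can be demonstrated even where trash-cli isn't installed: "trashing" is just a move into a holding directory.

```shell
# Minimal stand-in for the 'trash' command: move files into a holding
# directory instead of unlinking them.
TRASH_DIR="$HOME/.local/share/Trash/files"
trash_demo() {
    mkdir -p "$TRASH_DIR"
    mv -- "$@" "$TRASH_DIR/"
}

doomed=$(mktemp)
trash_demo "$doomed"
ls "$TRASH_DIR" | head   # the file is recoverable, not gone
```

Because the operation is a rename rather than an unlink, recovery is a plain mv back to where the file came from.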
Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mis-typed an rm command.
Respond efficiently
Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:
• If someone was careless with their trash bin habits or messed up dangerous remove or shred commands, then you need to recover a deleted file.
• If someone accidentally overwrote a partition table, then the files aren't really lost. The drive layout is.
• In the case of a dying hard drive, recovering data is secondary to the race against decay to recover the bits themselves (you can worry about carving those bits into intelligible files later).
No matter how the problem began, start your rescue mission with a few best practices:
• Stop using the drive that contains the lost data, no matter what the reason. The more you do on this drive, the more you risk overwriting the data you're trying to rescue. Halt and power down the victim computer, and then either reboot using a thumb drive, or extract the damaged hard drive and attach it to your rescue machine.
• Do not use the victim hard drive as the recovery location. Place rescued data on a spare volume that you're sure is working. Don't copy it back to the victim drive until it's been confirmed that the data has been sufficiently recovered.
• If you think the drive is dying, your first priority after powering it down is to obtain a duplicate image, using a tool like ddrescue or Clonezilla .
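A first imaging pass might look like the commented lines below; the device name and mount points are placeholders, so check the ddrescue manual before running it on real hardware. The runnable part demonstrates the same image-first principle with plain dd on a scratch file.

```shell
# Typical ddrescue first pass: -n skips scraping bad areas, and the map
# file lets later passes resume where this one left off.
#   ddrescue -n /dev/sdX /mnt/rescue/sdX.img /mnt/rescue/sdX.map
#
# Image-first demo with dd on a scratch file (runs anywhere):
victim=$(mktemp)
image=$(mktemp)
printf 'important bits' > "$victim"
dd if="$victim" of="$image" bs=4k conv=noerror status=none
cmp "$victim" "$image" && echo "image matches source"
```

Once you have the image, run every recovery tool against the image, never the original: the dying drive gets read exactly once.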
Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem. Two such tools are Scalpel and TestDisk, both of which operate just as well on a disk image as on a physical drive.
Practice (or, go break stuff)
At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques.
Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.
[Nov 08, 2019] My first sysadmin mistake by Jim Hall
"... I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the ..."
Nov 08, 2019 | opensource.com
I ran the rm command in the wrong directory. As root. I thought I was deleting some stale cache files for one of our programs. Instead, I wiped out all files in the /etc directory by mistake. Ouch.
My clue that I'd done something wrong was an error message that rm couldn't delete certain subdirectories. But the cache directory should contain only files! I immediately stopped the rm command and looked at what I'd done. And then I panicked. All at once, a million thoughts ran through my head. Did I just destroy an important server? What was going to happen to the system? Would I get fired?
Fortunately, I'd run rm * and not rm -rf * so I'd deleted only files. The subdirectories were still there. But that didn't make me feel any better.
Immediately, I went to my supervisor and told her what I'd done. She saw that I felt really dumb about my mistake, but I owned it. Despite the urgency, she took a few minutes to do some coaching with me. "You're not the first person to do this," she said. "What would someone else do in your situation?" That helped me calm down and focus. I started to think less about the stupid thing I had just done, and more about what I was going to do next.
I put together a simple strategy: Don't reboot the server. Use an identical system as a template, and re-create the /etc directory.
Once I had my plan of action, the rest was easy. It was just a matter of running the right commands to copy the /etc files from another server and edit the configuration so it matched the system. Thanks to my practice of documenting everything, I used my existing documentation to make any final adjustments. I avoided having to completely restore the server, which would have meant a huge disruption.
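If remote authentication still works, the copy-from-a-template step can be sketched like this (the hostname is a placeholder; always review with a dry run before writing into a real /etc). The runnable part mimics the copy with local directories.

```shell
# Pull configuration from an identical "template" host, reviewing first:
#   rsync -avn template-host:/etc/ /etc/   # -n: dry run, inspect the list
#   rsync -av  template-host:/etc/ /etc/   # then copy for real, and edit
#                                          # host-specific files afterwards
#
# Local stand-in demo with two scratch directories:
template=$(mktemp -d)
damaged=$(mktemp -d)
echo "shared setting" > "$template/app.conf"
cp -a "$template/." "$damaged/"
```

The host-specific edits afterwards (hostname, network configuration, ssh host keys, and the like) are exactly where written documentation pays off.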
To be sure, I learned from that mistake. For the rest of my years as a systems administrator, I always confirmed what directory I was in before running any command.
I also learned the value of building a "mistake strategy." When things go wrong, it's natural to panic and think about all the bad things that might happen next. That's human nature. But creating a "mistake strategy" helps me stop worrying about what just went wrong and focus on making things better. I may still think about it, but knowing my next steps allows me to "get over it."
[Nov 08, 2019] How to use Sanoid to recover from data disasters Opensource.com
Nov 08, 2019 | opensource.com
Sanoid's companion tool, syncoid, uses filesystem-level snapshot replication to move data from one machine to another, fast. For enormous blobs like virtual machine images, we're talking several orders of magnitude faster than rsync.
If that isn't cool enough already, you don't even necessarily need to restore from backup if you lost the production hardware; you can just boot up the VM directly on the local hotspare hardware, or the remote disaster recovery hardware, as appropriate. So even in case of catastrophic hardware failure , you're still looking at that 59m RPO, <1m RTO.
Backups -- and recoveries -- don't get much easier than this.
root@box1:~# syncoid pool/images/vmname root@box2:poolname/images/vmname
Or if you have lots of VMs, like I usually do... recursion!
root@box1:~# syncoid -r pool/images/vmname root@box2:poolname/images/vmname
This makes it not only possible, but easy to replicate multiple-terabyte VM images hourly over a local network, and daily over a VPN. We're not talking enterprise 100mbps symmetrical fiber, either. Most of my clients have 5mbps or less available for upload, which doesn't keep them from automated, nightly over-the-air backups, usually to a machine sitting quietly in an owner's house.
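A schedule like that is driven by sanoid's configuration file. A fragment in the style of the project's shipped examples might look like the following; the dataset name is illustrative, and the retention numbers are one reasonable policy, not a recommendation.

```ini
# /etc/sanoid/sanoid.conf (fragment): sanoid runs from cron, taking and
# pruning snapshots per these policies, while syncoid handles replication.
[pool/images]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```

With autosnap and autoprune on, the snapshot rotation is entirely hands-off; syncoid then ships whatever snapshots exist to the backup target.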
Preventing your own Humpty Level Events
Sanoid is open source software, and so are all its dependencies. You can run Sanoid and Syncoid themselves on pretty much anything with ZFS. I developed it and use it on Linux myself, but people are using it (and I support it) on OpenIndiana, FreeBSD, and FreeNAS too.
You can find the GPLv3 licensed code on the website (which actually just redirects to Sanoid's GitHub project page), and there's also a Chef Cookbook and an Arch AUR repo available from third parties.
[Nov 07, 2019] What breaks our systems A taxonomy of black swans Opensource.com
Nov 07, 2019 | opensource.com
What breaks our systems: A taxonomy of black swans Find and fix outlier events that create issues before they trigger severe production problems. 25 Oct 2018 Laura Nolan Feed 147 up 2 comments Image credits : Eumelincen . CC0 x Subscribe now
Get the highlights in your inbox every week.
https://opensource.com/eloqua-embedded-email-capture-block.html?offer_id=70160000000QzXNAA0
Black swans, by definition, can't be predicted, but sometimes there are patterns we can find and use to create defenses against categories of related problems.
For example, a large proportion of failures are a direct result of changes (code, environment, or configuration). Each bug triggered in this way is distinctive and unpredictable, but the common practice of canarying all changes is somewhat effective against this class of problems, and automated rollbacks have become a standard mitigation.
As our profession continues to mature, other kinds of problems are becoming well-understood classes of hazards with generalized prevention strategies.
Black swans observed in the wild
All technology organizations have production problems, but not all of them share their analyses. The organizations that publicly discuss incidents are doing us all a service. Each of the following incidents illustrates one class of problem, and none of them are isolated instances. We all have black swans lurking in our systems; it's just that some of us don't know it yet.
Hitting limits
Running headlong into any sort of limit can produce very severe incidents. A canonical example of this was Instapaper's outage in February 2017 . I challenge any engineer who has carried a pager to read the outage report without a chill running up their spine. Instapaper's production database was on a filesystem that, unknown to the team running the service, had a 2TB limit. With no warning, it stopped accepting writes. Full recovery took days and required migrating its database.
Limits can strike in various ways. Sentry hit limits on maximum transaction IDs in Postgres . Platform.sh hit size limits on a pipe buffer . SparkPost triggered AWS's DDoS protection . Foursquare hit a performance cliff when one of its datastores ran out of RAM .
One way to get advance knowledge of system limits is to test periodically. Good load testing (on a production replica) ought to involve write transactions and should involve growing each datastore beyond its current production size. It's easy to forget to test things that aren't your main datastores (such as Zookeeper). If you hit limits during testing, you have time to fix the problems. Given that resolution of limits-related issues can involve major changes (like splitting a datastore), time is invaluable.
When it comes to cloud services, if your service generates unusual loads or uses less widely used products or features (such as older or newer ones), you may be more at risk of hitting limits. It's worth load testing these, too. But warn your cloud provider first.
Finally, where limits are known, add monitoring (with associated documentation) so you will know when your systems are approaching those ceilings. Don't rely on people still being around to remember.
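As a sketch of that kind of limit monitoring, a small shell function can compare current usage against a documented ceiling -- the 80% threshold and the use of filesystem capacity here are illustrative, not from the article:

```shell
# check_usage: warn when a resource crosses a percentage of its known limit.
# The 80% default threshold is illustrative; document and set it per resource.
check_usage() {
    name=$1 pct=$2 threshold=${3:-80}
    case $pct in *[!0-9]*|'') return 0 ;; esac   # skip non-numeric fields
    if [ "$pct" -ge "$threshold" ]; then
        echo "WARN: $name at ${pct}% of its known limit"
    fi
}

# Feed it live filesystem figures from df:
df -P | awk 'NR>1 { gsub("%","",$5); print $6, $5 }' | while read -r m p; do
    check_usage "$m" "$p"
done
```

The same shape works for any resource with a known ceiling -- transaction IDs, connection counts, queue depths -- as long as something exports the current figure.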
"The world is much more correlated than we give credit to. And so we see more of what Nassim Taleb calls 'black swan events' -- rare events happen more often than they should because the world is more correlated."
-- Richard Thaler
HostedGraphite's postmortem on how an AWS outage took down its load balancers (which are not hosted on AWS) is a good example of just how much correlation exists in distributed computing systems. In this case, the load-balancer connection pools were saturated by slow connections from customers that were hosted in AWS. The same kinds of saturation can happen with application threads, locks, and database connections -- any kind of resource monopolized by slow operations.
HostedGraphite's incident is an example of externally imposed slowness, but often slowness can result from saturation somewhere in your own system creating a cascade and causing other parts of your system to slow down. An incident at Spotify demonstrates such spread -- the streaming service's frontends became unhealthy due to saturation in a different microservice. Enforcing deadlines for all requests, as well as limiting the length of request queues, can prevent such spread. Your service will serve at least some traffic, and recovery will be easier because fewer parts of your system will be broken.
Retries should be limited with exponential backoff and some jitter. An outage at Square, in which its Redis datastore became overloaded due to a piece of code that retried failed transactions up to 500 times with no backoff, demonstrates the potential severity of excessive retries. The Circuit Breaker design pattern can be helpful here, too.
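A minimal sketch of that retry pattern in shell follows; the attempt cap and the 32-second backoff ceiling are illustrative limits (Square's actual code is not shown in the article):

```shell
# Retry a command with capped exponential backoff plus jitter.
# 5 attempts and a 32s cap are illustrative, not from the article.
retry_with_backoff() {
    max_attempts=5 attempt=1 cap=32
    until "$@"; do
        [ "$attempt" -ge "$max_attempts" ] && return 1
        backoff=$(( 1 << (attempt - 1) ))        # 1s, 2s, 4s, 8s, ...
        [ "$backoff" -gt "$cap" ] && backoff=$cap
        # Fractional jitter keeps a herd of clients from retrying in lockstep.
        sleep "${backoff}.$(( RANDOM % 100 ))"
        attempt=$(( attempt + 1 ))
    done
}
```

Combined with a circuit breaker, a bounded schedule like this keeps a failing dependency from being hammered hundreds of times in a row.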
Dashboards should be designed to clearly show utilization, saturation, and errors for all resources so problems can be found quickly.
Thundering herds
Often, failure scenarios arise when a system is under unusually heavy load. This can arise organically from users, but often it arises from systems. A surge of cron jobs that starts at midnight is a venerable example. Mobile clients can also be a source of coordinated demand if they are programmed to fetch updates at the same time (of course, it is much better to jitter such requests).
Events occurring at pre-configured times aren't the only source of thundering herds. Slack experienced multiple outages over a short time due to large numbers of clients being disconnected and immediately reconnecting, causing large spikes of load. CircleCI saw a severe outage when a GitLab outage ended, leading to a surge of builds queued in its database, which became saturated and very slow.
Almost any service can be the target of a thundering herd. Planning for such eventualities -- and testing that your plan works as intended -- is therefore a must. Client backoff and load shedding are often core to such approaches.
If your systems must constantly ingest data that can't be dropped, it's key to have a scalable way to buffer this data in a queue for later processing.
Automation systems are complex systems
"Complex systems are intrinsically hazardous systems."
-- Richard Cook, MD
The trend for the past several years has been strongly towards more automation of software operations. Automation of anything that can reduce your system's capacity (e.g., erasing disks, decommissioning devices, taking down serving jobs) needs to be done with care. Accidents (due to bugs or incorrect invocations) with this kind of automation can take down your system very efficiently, potentially in ways that are hard to recover from.
Christina Schulman and Etienne Perot of Google describe some examples in their talk Help Protect Your Data Centers with Safety Constraints . One incident sent Google's entire in-house content delivery network (CDN) to disk-erase.
Schulman and Perot suggest using a central service to manage constraints, which limits the pace at which destructive automation can operate, and being aware of system conditions (for example, avoiding destructive operations if the service has recently had an alert).
Automation systems can also cause havoc when they interact with operators (or with other automated systems). Reddit experienced a major outage when its automation restarted a system that operators had stopped for maintenance. Once you have multiple automation systems, their potential interactions become extremely complex and impossible to predict.
It will help to deal with the inevitable surprises if all this automation writes logs to an easily searchable, central place. Automation systems should always have a mechanism to allow them to be quickly turned off (fully or only for a subset of operations or targets).
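As a sketch, such a kill switch can be as simple as a flag file that every destructive operation checks before acting -- the flag path and the "decommission" action here are invented for illustration:

```shell
# Destructive automation guarded by a kill-switch flag file.
# The flag path and the decommission action are illustrative.
KILL_SWITCH=${KILL_SWITCH:-/etc/automation/disabled}

decommission_host() {
    host=$1
    if [ -e "$KILL_SWITCH" ]; then
        echo "kill switch $KILL_SWITCH present; refusing to touch $host" >&2
        return 1
    fi
    # Real work would go here; log it somewhere central and searchable.
    echo "decommissioning $host"
}
```

Operators can then halt all destructive actions with a single touch of the flag file, and re-enable them by removing it -- no code change or redeploy in the middle of an incident.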
Defense against the dark swans
These are not the only black swans that might be waiting to strike your systems. Many other kinds of severe problems can be avoided using techniques such as canarying, load testing, chaos engineering, disaster testing, and fuzz testing -- and of course by designing for redundancy and resiliency. Even with all that, at some point your system will fail.
To ensure your organization can respond effectively, make sure your key technical staff and your leadership have a way to coordinate during an outage. For example, one unpleasant issue you might have to deal with is a complete outage of your network. It's important to have a fail-safe communications channel completely independent of your own infrastructure and its dependencies. For instance, if you run on AWS, using a service that also runs on AWS as your fail-safe communication method is not a good idea. A phone bridge or an IRC server that runs somewhere separate from your main systems is good. Make sure everyone knows what the communications platform is and practices using it.
Another principle is to ensure that your monitoring and your operational tools rely on your production systems as little as possible. Separate your control and your data planes so you can make changes even when systems are not healthy. Don't use a single message queue for both data processing and config changes or monitoring, for example -- use separate instances. In SparkPost: The Day the DNS Died , Jeremy Blosser presents an example where critical tools relied on the production DNS setup, which failed.
The psychology of battling the black swan
Dealing with major incidents in production can be stressful. It really helps to have a structured incident-management process in place for these situations. Many technology organizations ( including Google ) successfully use a version of FEMA's Incident Command System. There should be a clear way for any on-call individual to call for assistance in the event of a major problem they can't resolve alone.
For long-running incidents, it's important to make sure people don't work for unreasonable lengths of time and get breaks to eat and sleep (uninterrupted by a pager). It's easy for exhausted engineers to make a mistake or overlook something that might resolve the incident faster.
There are many other things that could be said about black (or formerly black) swans and strategies for dealing with them. If you'd like to learn more, I highly recommend these two books dealing with resilience and stability in production: Susan Fowler's Production-Ready Microservices and Michael T. Nygard's Release It! .
Laura Nolan will present What Breaks Our Systems: A Taxonomy of Black Swans at LISA18 , October 29-31 in Nashville, Tennessee, USA.
[Nov 07, 2019] How to prevent and recover from accidental file deletion in Linux Enable Sysadmin
trashy ( Trashy · GitLab ) might make sense in simple cases. But often deletions are about increasing free space.
Nov 07, 2019 | www.redhat.com
Back up
You knew this would come first. Data recovery is a time-intensive process and rarely produces 100% correct results. If you don't have a backup plan in place, start one now.
Better yet, implement two. First, provide users with local backups with a tool like rsnapshot . This utility creates snapshots of each user's data in a ~/.snapshots directory, making it trivial for them to recover their own data quickly.
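A minimal rsnapshot configuration for such snapshots might look like the following sketch. The paths and retention counts are illustrative, rsnapshot requires literal tab characters between fields, and note that rsnapshot itself keeps one snapshot_root -- a per-user ~/.snapshots view like the one described above would typically be a symlink or bind mount into it (an assumption here, not rsnapshot's default behavior):

```
config_version	1.2
snapshot_root	/home/.snapshots/
retain	hourly	6
retain	daily	7
backup	/home/	localhost/
```

A cron job then invokes `rsnapshot hourly` and `rsnapshot daily` at the matching intervals; unchanged files are hard-linked between snapshots, so the space cost stays modest.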
There are a great many other open source backup applications that permit your users to manage their own backup schedules.
Second, while these local backups are convenient, also set up a remote backup plan for your organization. Tools like AMANDA or BackupPC are solid choices for this task. You can run them as a daemon so that backups happen automatically.
Backup planning and preparation pay for themselves in both time and peace of mind. There's nothing like not needing emergency response procedures in the first place.
Ban rm
On modern operating systems, there is a Trash or Bin folder where users drag the files they don't want out of sight without deleting them just yet. Traditionally, the Linux terminal has no such holding area, so many terminal power users have the bad habit of permanently deleting data they believe they no longer need. Since there is no "undelete" command, this habit can be quite problematic should a power user (or administrator) accidentally delete a directory full of important data.
Many users say they favor the absolute deletion of files, claiming that they prefer their computers to do exactly what they tell them to do. Few of those users, though, forgo their rm command for the more complete shred , which really removes their data. In other words, most terminal users invoke the rm command because it removes data, but take comfort in knowing that file recovery tools exist as a hacker's un-rm. Still, using those tools takes up an administrator's precious time. Don't let your users -- or yourself -- fall prey to this breach of logic.
If you really want to remove data, then rm is not sufficient. Use the shred -u command instead, which overwrites and then thoroughly deletes the specified data.
However, if you don't want to actually remove data, don't use rm . This command is not feature-complete: it has no undo, even though its deletions can sometimes be reversed with recovery tools. Instead, use trashy or trash-cli to "delete" files into a trash bin while using your terminal, like so:
$ trash ~/example.txt
$ trash --list
example.txt
One advantage of these commands is that the trash bin they use is the same as your desktop's trash bin, so you can recover your trashed files either by opening your desktop Trash folder or from the terminal.
If you've already developed a bad rm habit and find the trash command difficult to remember, create an alias for yourself:
$ echo "alias rm='trash'"

Even better, create this alias for everyone. Your time as a system administrator is too valuable to spend hours struggling with file recovery tools just because someone mis-typed an rm command.

Respond efficiently

Unfortunately, it can't be helped. At some point, you'll have to recover lost files, or worse. Let's take a look at emergency response best practices to make the job easier. Before you even start, understanding what caused the data to be lost in the first place can save you a lot of time:

• If someone was careless with their trash bin habits or messed up dangerous remove or shred commands, then you need to recover a deleted file.
• If someone accidentally overwrote a partition table, then the files aren't really lost. The drive layout is.
• In the case of a dying hard drive, recovering data is secondary to the race against decay to recover the bits themselves (you can worry about carving those bits into intelligible files later).

No matter how the problem began, start your rescue mission with a few best practices:

• Stop using the drive that contains the lost data, no matter what the reason. The more you do on this drive, the more you risk overwriting the data you're trying to rescue. Halt and power down the victim computer, and then either reboot using a thumb drive, or extract the damaged hard drive and attach it to your rescue machine.
• Do not use the victim hard drive as the recovery location. Place rescued data on a spare volume that you're sure is working. Don't copy it back to the victim drive until the data has been confirmed sufficiently recovered.
• If you think the drive is dying, your first priority after powering it down is to obtain a duplicate image, using a tool like ddrescue or Clonezilla .

Once you have a sense of what went wrong, it's time to choose the right tool to fix the problem. Two such tools are Scalpel and TestDisk , both of which operate just as well on a disk image as on a physical drive.

Practice (or, go break stuff)

At some point in your career, you'll have to recover data. The smart practices discussed above can minimize how often this happens, but there's no avoiding this problem. Don't wait until disaster strikes to get familiar with data recovery tools. After you set up your local and remote backups, implement command-line trash bins, and limit the rm command, it's time to practice your data recovery techniques. Download and practice using Scalpel, TestDisk, or whatever other tools you feel might be useful. Be sure to practice data recovery safely, though. Find an old computer, install Linux onto it, and then generate, destroy, and recover. If nothing else, doing so teaches you to respect data structures, filesystems, and a good backup plan. And when the time comes and you have to put those skills to real use, you'll appreciate knowing what to do.

[Nov 06, 2019] Sysadmin 101 Leveling Up by Kyle Rankin

Nov 06, 2019 | www.linuxjournal.com

This is the fourth in a series of articles on systems administrator fundamentals. These days, DevOps has made even the job title "systems administrator" seem a bit archaic, like the "systems analyst" title it replaced. These DevOps positions are rather different from sysadmin jobs of the past, with a much larger emphasis on software development far beyond basic shell scripting, and as a result they often are filled by people with software development backgrounds and little prior sysadmin experience. In the past, a sysadmin would enter the role at a junior level and be mentored by a senior sysadmin on the team, but in many cases these days, companies go quite a while with cloud outsourcing before their first DevOps hire. As a result, the DevOps engineer might be thrust into the role at a junior level with no mentor around apart from search engines and Stack Overflow posts.
In the first article in this series, I explained how to approach alerting and on-call rotations as a sysadmin. In the second article , I discussed how to automate yourself out of a job. In the third , I covered why and how you should use tickets. In this article, I describe the overall sysadmin career path and what I consider the attributes that might make you a "senior sysadmin" instead of a "sysadmin" or "junior sysadmin", along with some tips on how to level up.

Keep in mind that titles are pretty fluid and loose things, and that they mean different things to different people. Also, it will take different people different amounts of time to "level up" depending on their innate sysadmin skills, their work ethic and the opportunities they get to gain more experience. That said, be suspicious of anyone who leveled up to a senior level in any field in only a year or two -- it takes time in a career to make the kinds of mistakes and learn the kinds of lessons you need to learn before you can move up to the next level.

Kyle Rankin is a Tech Editor and columnist at Linux Journal and the Chief Security Officer at Purism. He is the author of Linux Hardening in Hostile Networks , DevOps Troubleshooting , The Official Ubuntu Server Book , Knoppix Hacks , Knoppix Pocket Reference , Linux Multimedia Hacks and Ubuntu Hacks , and also a contributor to a number of other O'Reilly books. Rankin speaks frequently on security and open-source software including at BsidesLV, O'Reilly Security Conference, OSCON, SCALE, CactusCon, Linux World Expo and Penguicon. You can follow him at @kylerankin.

[Nov 06, 2019] 7 Ways to Make Fewer Mistakes at Work by Carey-Lee Dixon

May 31, 2015 | www.linkedin.com

Digital Marketing Executive at LASCO Financial Services

Though mistakes are not intentional and are inevitable, that doesn't mean we should take a carefree approach to getting things done.
There are some mistakes we make in the workplace which could be easily avoided if we paid a little more attention to what we were doing. Agree? We've all made them, and possibly mulled over a few silly mistakes we have made in the past. But I am here to tell you that mistakes don't make you a 'bad' person; they're more of a great learning experience - of what you can do better and how you can get it right the next time. And having made a few silly mistakes in my work life, I guarantee that if you adopt a few of the approaches I have been applying in my own work life, you too will make fewer mistakes at work.

1. Give your full attention to what you are doing

Dedicate uninterrupted time to accomplish that [important] task. Do whatever it takes to get it done with your full attention, so if it means eliminating distractions, taking breaks in between and working with a to-do list, do it. Trying to send emails, edit that blog post and do whatever else at the same time may lead to you making a few unwanted mistakes.

Tip: Eliminate distractions.

2. Ask questions

Often, we make mistakes because we didn't ask that one question. Either we were too proud to, or we thought we had it 'covered.' Unsure about the next step to take or how to undertake a task? Do your homework and ask someone who is more knowledgeable than you are, someone who can guide you accordingly. Worried about what others will think? Who cares? Asking questions only makes you smarter, not dumb. And so what if others think you are dumb? Their opinion doesn't matter anyway; asking questions helps you to make fewer mistakes, and as my mom would say, 'Put on the mask and ask'. Each task usually comes with a challenge and requires you to learn something new, so use the resources available to you, like more experienced colleagues, to get all the information that will enable you to make fewer mistakes.

Tip: Do your homework. Ask for help.

3. Use checklists

Checklists can be used to help you structure what needs to be done before you publish that article or submit that project. They are quite useful, especially when you have a million things to do. Since I am responsible for getting multiple tasks done, I often use checklists/to-do lists to keep me structured and to ensure I don't leave anything undone. In general, lists are great, and using one to detail things to do, or the steps required to move to the next stage, will help to minimize errors, especially when you have a number of things on your plate. And did I mention, Richard Branson is also big on lists . That's how he gets a lot of things done.

4. Review, review, review

Carefully review your work. I must admit, I get a little paranoid about delivering error-free work. Like, seriously, I don't like making mistakes and often beat myself up if I send an email with some silly grammatical errors. And that's why reviewing your work before you click send is a must-do. Often, we submit our work with errors because we are working against a tight deadline and didn't give ourselves enough time to review what was done. The last thing you really need is your boss on your neck for the document that was due last week, which you just completed without much time to review it. So, if you have spent endless hours working on a project, are proud of your work and ready to show it to the team - take a break and come back to review it. Taking a break and then getting back to review what was done will allow you to find those mistakes before others can. And yes, the checklist is quite useful in the review process - so use it.

Tip: Get a second eye.

5. Get a second eye

Even when you have done a careful review, chances are there will still be mistakes. It happens. So getting a second eye, especially from a more experienced person, can find that one error you overlooked. Sometimes we overlook the details because we are in a hurry or not 100% focused on the task at hand; getting that other set of eyes to check for errors or an important point you missed is always useful.

Tip: Get a second eye from someone more experienced or knowledgeable.

6. Allow enough time

In making mistakes at work, I realise I am more prone to making mistakes when I am working against a tight deadline . Failure to allow enough time for a project or for review can lead to missed requirements and incompleteness, which results in failure to meet desired expectations. That's why it is essential to be smart in estimating the time needed to accomplish a task, which should include time for review. Ideally, you want to give yourself enough time to do research, complete a document/project, review what was done and ask for a second eye , so setting realistic schedules is most important in making fewer mistakes.

Tip: Limit working against tight deadlines.

7. Learn from others' mistakes

No matter how much you know or think you know, it is always important to learn from the mistakes of others. What silly mistakes did a co-worker make that caused a big stir in the office? Make note of it and intentionally try not to make the same mistakes too. Some of the greatest lessons are those we learn from others. So pay attention to past mistakes made, what they did right, what they didn't nail and how they got out of the rut.

Tip: Pay close attention to the mistakes others make.

Remember, mistakes are meant to teach you, not break you. So if you make mistakes, it only shows that sometimes we need to take a different approach to getting things done. No one wants to make mistakes; I sure don't. But that does not mean we should be afraid of them.
I have made quite a few mistakes in my work life, which has only proven that I need to be more attentive and that I need to ask for help more than I usually do. So, take the necessary steps to make fewer mistakes, but at the same time, don't beat yourself up over the ones you make.

A great resource on mistakes in the workplace: Mistakes I Made at Work . A great resource on focusing on less and increasing productivity: One Thing .

____________________________________________________

For more musings, career lessons and tips that you can apply to your personal and professional life, visit my personal blog, www.careyleedixon.com . I enjoy working on being the best version of myself and helping others to grow in their personal and professional lives while doing what matters. For questions or to book me for writing/speaking engagements on career and personal development, email me at careyleedixon@gmail.com

[Nov 06, 2019] 10+ mistakes Linux newbies make - TechRepublic

Nov 06, 2019 | www.techrepublic.com

7: Giving up too quickly

Here's another issue I see all too often. After a few hours (or a couple of days) working with Linux, new users will give up for one reason or another. I understand giving up when they realize something simply doesn't work (such as when they MUST use a proprietary application or file format). But seeing Linux not work under average demands is rare these days. If you see new Linux users getting frustrated, try to give them a little extra guidance. Sometimes getting over that initial hump is the biggest challenge they will face.

[Nov 06, 2019] Destroying multiple production databases by Jan Gerrit Kootstra

Aug 08, 2019 | www.redhat.com

In my 22-year career as an IT specialist, I encountered two major issues where -- due to my mistakes -- important production databases were blown apart. Here are my stories.
Freshman mistake

The first time was in the late 1990s when I started working at a service provider for my local municipality's social benefit agency. I got an assignment as a newbie system administrator to remove retired databases from the server where databases for different departments were consolidated. Due to a typing error on a top-level directory, I removed two live database files instead of the one retired database. What was worse, due to the complexity of the database consolidation, other databases were hit during the restore, too. Repairing all databases took approximately 22 hours.

What helped

A good backup that was tested each night by recovering an empty file at the end of the tar archive catalog, after the backup was made.

Future-looking statement

It's important to learn from our mistakes. What I learned is this:

• Write down the steps you will perform and have them checked by a senior sysadmin. It was the first time I did not ask for a review by one of the seniors. My bad.
• Be nice to colleagues from other teams. It was a DBA that saved me.
• Do not copy such a complex setup of sharing databases over shared file systems.
• Before doing a life cycle management migration, go for a separation of the filesystems per database to avoid the complexity and reduce the chances of human error.
• Change your approach: Later in my career, I always tried to avoid lift-and-shift migrations.

Senior sysadmin mistake

In a period when partly offshoring IT activities was common practice in order to reduce costs, I had to take over a database filesystem extension on a Red Hat 5 cluster. Given that I had set up this system a couple of years before, I had not checked the current situation. I assumed the offshore team was familiar with the need to attach all shared LUNs to both nodes of the two-node cluster. My bad; never assume.

As an Australian tourist once mentioned when a friend and I were on a vacation in Ireland after my Latin grammar school graduation: "Do not make an arse out of you and me." Or, another phrase: "Assuming is the mother of all mistakes." Well, I fell into my own trap. I went for the filesystem extension on the active node, and without checking the passive node's ( node2 ) status, tested a failover. Because we had agreed to run the database on node2 until the next update window, I had put myself in trouble. As the databases started to fail, we brought the database cluster down. No issues yet, but all hell broke loose when I ran a filesystem check on an LVM-based system with missing physical volumes.

Looking back, I would call myself stupid. Running pvs , lvs , or vgs would have alerted me that LVM detected issues. Also, comparing multipath configuration files would have revealed probable issues. So, next time, I would first check whether LVM contains issues before going for the last resort: a filesystem check and trying to fix the millions of errors. Most of the time you will destroy files anyway.

What saved my day

What saved my day back then was:

• My good relations with colleagues across all teams, where a short talk with a great storage admin created the correct zoning to the required LUNs, and I got great help from a DBA who had deep knowledge of the clustered databases.
• A good database backup.
• Great management and a great service manager. They kept the annoyed customer away from us.
• Not making promises I could not keep, like: "I will fix it in three hours." Instead, statements such as the one below help keep the customer satisfied: "At the current rate of fixing the filesystem, I cannot guarantee a fix within so many hours. As we just passed the 10% mark, I suggest we stop this approach and discuss another way to solve the issue."

Future-looking statement

I definitely learned some things. For example, always check the environment you're about to work on before any change. Never assume that you know how an environment looks -- change is a constant in IT. Also, share what you learned from your mistakes. Train offshore colleagues instead of blaming them, and inform them about the impact the issue had on the customer's business. A continent's major transport hub cannot be put on hold due to a sysadmin's mistake. A shutdown of the transport hub might have been needed if we had failed to solve the issue, and the backup site in a data centre of another service provider would have been hurt too. Part of the hub is a harbour, and we could have blown up part of the harbour next to a village of about 10,000 people if both a cotton ship and an oil tanker had gotten lost on the harbour master's map and collided.

General lessons learned

I learned some important lessons overall from these and other mistakes:

• Be humble enough to admit your mistakes.
• Be arrogant enough to state that you are one of the few people who can help fix the issues you caused.
• Show leadership of the solvers' team, or at least make sure that all of the team's roles will be fulfilled -- including the customer relations manager.
• Take back the role of problem-solver after the team is created, if that is what was requested.
• "Be part of the solution, do not become part of the problem," as a colleague says.

I cannot stress this enough: Learn from your mistakes to avoid them in the future, rather than learning how to make them on a weekly basis.

Jan Gerrit Kootstra, Solution Designer (for Telco network services). Red Hat Accelerator.

[Nov 06, 2019] My 10 Linux and UNIX Command Line Mistakes by Vivek Gite

May 20, 2018 | www.cyberciti.biz

I had only one backup copy of my QT project and I just wanted to get a directory called functions.
I ended up deleting the entire backup (note the -c switch instead of -x):

cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions

I had no backup. Similarly, I ended up running an rsync command and deleted all new files by overwriting them from the backup set (I have since switched to rsnapshot):

rsync -av -delete /dest /src

Again, I had no backup.

"All men make mistakes, but only wise men learn from their mistakes" -- Winston Churchill.

From all those mistakes I have learned that:

1. You must keep a good set of backups. Test your backups regularly, too.
2. The clear choice for preserving all data of UNIX file systems is dump, which is the only tool that guarantees recovery under all conditions. (See the Torture-testing Backup and Archive Programs paper.)
3. Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot.
4. Use CVS/git to store configuration files.
5. Wait and read the command line twice before hitting the damn [Enter] key.
6. Use your well-tested perl/shell scripts and open source configuration management software such as Puppet, Ansible, CFEngine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and more.

Mistakes are inevitable, so have you made any mistakes that have caused some sort of downtime? Please add them in the comments section below.

[Oct 25, 2019] Get inode number of a file on linux - Fibrevillage

Oct 25, 2019 | www.fibrevillage.com

An inode is a data structure in UNIX operating systems that contains important information pertaining to files within a file system. When a file system is created in UNIX, a set amount of inodes is created as well. Usually, about 1 percent of the total file system disk space is allocated to the inode table.

How do we find a file's inode?

ls -i command: display inode

$ ls -i /etc/bashrc
131094 /etc/bashrc
131094 is the inode of /etc/bashrc.
Stat Command: display Inode
$ stat /etc/bashrc
  File: '/etc/bashrc'
  Size: 1386      Blocks: 8          IO Block: 4096   regular file
Device: fd00h/64768d    Inode: 131094      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-12-10 10:01:29.509908811 -0800
Modify: 2013-06-06 11:31:51.792356252 -0700
Change: 2013-06-06 11:31:51.792356252 -0700

find command: display inode

$ find ./ -iname sysfs_fc_tools.tar -printf '%p %i\n'
./sysfs_fc_tools.tar 28311964
Notes:
%p stands for file path
%i stands for inode number
tree command: display inode under a directory
# tree -a -L 1 --inodes /etc
/etc
├── [ 132896] a2ps
├── [ 132898] a2ps.cfg
├── [ 132897] a2ps-site.cfg
├── [ 133315] acpi
...
use cases for inode numbers
Use find / -inum XXXXXX -print to find the full path of each file pointing to inode XXXXXX.
Though you can extend the example into an rm action, I discourage doing so, both for security concerns with the find command and because on another filesystem the same inode number refers to a completely different file.
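Since an inode number is only meaningful within one filesystem, it is worth pinning find to a single filesystem with -xdev when resolving one. A minimal sketch under a throwaway directory (GNU stat's -c %i assumed):

```shell
# Create a file with a second hard link: two names, one inode.
d=$(mktemp -d)
touch "$d/a"
ln "$d/a" "$d/b"

# Resolve the inode number, then list every name pointing at it,
# without crossing into other mounted filesystems (-xdev).
ino=$(stat -c %i "$d/a")
find "$d" -xdev -inum "$ino"
```

Both $d/a and $d/b are printed, since hard links share a single inode.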
filesystem repair
If you have bad luck with your filesystem, most of the time running fsck will fix it. It helps if you have the filesystem's inode info at hand.
This is another big topic, I'll have another article for it.
[Oct 25, 2019] Howto Delete files by inode number by Erik
Feb 10, 2011 | erikimh.com
linux administration - tips, notes and projects
Ever mistakenly pipe output to a file with special characters that you couldn't remove?
-rw-r--r-- 1 eriks eriks 4 2011-02-10 22:37 --fooface
Good luck. Anytime you pass any sort of command to this file, it's going to interpret it as a flag. You can't fool rm, echo, sed, or anything else into actually deeming this a file at this point. You do, however, have an inode for every file.
[eriks@jaded: ~]$ rm -f --fooface
rm: unrecognized option '--fooface'
Try 'rm ./--fooface' to remove the file '--fooface'.
Try 'rm --help' for more information.
[eriks@jaded: ~]$ rm -f '--fooface'
rm: unrecognized option '--fooface'
Try 'rm ./--fooface' to remove the file '--fooface'.
Try 'rm --help' for more information.
So now what, do you live forever with this annoyance of a file sitting inside your filesystem, never to be removed or touched again? Nah.
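For what it's worth, on modern GNU coreutils the hint in rm's own error message does work, so the dash-prefixed name can be removed without inode tricks. A quick sketch in a throwaway directory:

```shell
cd "$(mktemp -d)"
touch -- '--fooface'   # '--' tells touch to stop parsing options
rm ./--fooface         # a ./ prefix can't be mistaken for an option

touch -- '--fooface'
rm -- --fooface        # or end rm's own option parsing with '--'
```

The inode route below remains handy when a name contains bytes you can't even type at the terminal.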
We can remove a file, simply by an inode number, but first we must find out the file inode number:
[eriks@jaded: ~]$ ls -il | grep foo

Output:
508160 drwxr-xr-x 3 eriks eriks 4096 2010-10-27 18:13 foo3
500724 -rw-r--r-- 1 eriks eriks 4 2011-02-10 22:37 --fooface
589907 drwxr-xr-x 2 eriks eriks 4096 2010-11-22 18:52 tempfoo
589905 drwxr-xr-x 2 eriks eriks 4096 2010-11-22 18:48 tmpfoo
The number you see prior to the file permission set is actually the inode # of the file itself.
Hint: 500724 is the inode number of the file we want removed.
Now use find command to delete file by inode:
# find . -inum 500724 -exec rm -i {} \;
There she is.
[eriks@jaded: ~]$ find . -inum 500724 -exec rm -i {} \;
rm: remove regular file './--fooface'? y

[Oct 25, 2019] unix - Remove a file on Linux using the inode number - Super User

Oct 25, 2019 | superuser.com

Some other methods include:

escaping the special chars:

[~]$ rm \"la\*
use the find command and only search the current directory. The find command can search for inode numbers, and has a handy -delete switch:
[~]$ ls -i
7404301 "la*
[~]$ find . -maxdepth 1 -type f -inum 7404301
./"la*
[~]$ find . -maxdepth 1 -type f -inum 7404301 -delete
[~]$ ls -i
[~]$

Maybe I'm missing something, but... rm '"la*'

Anyways, filenames don't have inodes, files do. Trying to remove a file without removing all filenames that point to it will damage your filesystem.

[Oct 25, 2019] Linux - Unix Find Inode Of a File Command

Jun 21, 2012 | www.cyberciti.biz

... ... ..

stat Command: Display Inode

You can also use the stat command as follows:

$ stat fileName-Here
$ stat /etc/passwd

Sample outputs:

  File: '/etc/passwd'
  Size: 1644      Blocks: 8          IO Block: 4096   regular file
Device: fe01h/65025d    Inode: 25766495    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2012-05-05 16:29:42.000000000 +0530
Modify: 2012-05-05 16:29:20.000000000 +0530
Change: 2012-05-05 16:29:21.000000000 +0530

Posted by: Vivek Gite. The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

[Sep 29, 2019] IPTABLES makes corporate security scans go away

Sep 29, 2019 | www.reddit.com

r/ShittySysadmin • Posted by u/TBoneJeeper, 1 month ago

In a remote office location, corporate's network security scans can cause many false alarms and even take down services if they are tickled the wrong way. Dropping all traffic from the scanner's IP is a great time/resource-saver. No vulnerability reports, no follow-ups with corporate. No time for that.

level 1 · name_censored_ · 9 points · 1 month ago

Seems a bit like a bandaid to me.

• A good shitty sysadmin breaks the corporate scanner's SMTP, so it can't report back.
• A great shitty sysadmin spins up their own scanner instance (rigged to always report AOK) and fiddles with arp/routing/DNS to hijack the actual scanner.
• A 10x shitty sysadmin installs a virus on the scanner instance, thus discrediting the corporate security team for years.

level 2 · TBoneJeeper · 3 points · 1 month ago

Good ideas, but sounds like a lot of work. Just dropping their packets had the desired effect and took 30 seconds.

level 3 · name_censored_ · 5 points · 1 month ago

No-one ever said being lazy was supposed to be easy.

level 2 · spyingwind · 2 points · 1 month ago

To be serious, closing unused ports is good practice. Even better if used services can only be accessed from known sources, such as the DB only allowing access from the app server. A jump box, like a guacd server for remote access to things like RDP and SSH, would help reduce the threat surface. Or go further and set up Ansible/Chef/etc to allow only authorized changes.

level 3 · gortonsfiJr · 2 points · 1 month ago

Except, seriously, in my experience the security teams demand that you make big security holes for them in your boxes so that they can hammer away at them looking for security holes.

level 4 · asmiggs · 1 point · 1 month ago

Security teams will always invoke the worst case scenario: 'what if your firewall is borked?', 'what if your jumpbox is hacked?', etc. You can usually give their scanner exclusive access to get past these things, but surprise surprise, the only worst case scenario I've faced is 'what if your security scanner goes rogue?'.

level 5 · gortonsfiJr · 1 point · 1 month ago

What if you lose control of your AD domain and some rogue agent gets domain admin rights? Also, we're going to need domain admin rights.

...Is this a test?

level 6 · spyingwind · 1 point · 1 month ago

What if an attacker was pretending to be a security company? No DA access! You can plug in anywhere, but if port security blocks your scanner, then I can't help. Also, only 80 and 443 are allowed into our network.

level 3 · TBoneJeeper · 1 point · 1 month ago

Agree.
But in rare cases, the ports/services are still used (maybe rarely), yet have "vulnerabilities" that are difficult to address. Some of these scanners hammer services so hard, trying every CGI/PHP/Java exploit known to man in rapid succession, that older hardware/services cannot keep up and get wedged. I remember every Tuesday night I would have to go restart services, because this is when they were scanned. Either vendor support for this software version was no longer available, or it would simply require too much time to open vendor support cases to report the issues, argue with 1st-level support, escalate, work with engineering, test fixes, etc.

level 1 · rumplestripeskin · 1 point · 1 month ago

Yes... and use Ansible to update iptables on each of your Linux VMs.

level 1 · rumplestripeskin · 1 point · 1 month ago

I know somebody who actually did this.

level 2 · TBoneJeeper · 2 points · 1 month ago

Maybe we worked together :-)

[Sep 04, 2019] Basic Trap for File Cleanup

Sep 04, 2019 | www.putorius.net

Using a trap to clean up is simple enough. Here is an example of using trap to clean up a temporary file on exit of the script.

#!/bin/bash
trap "rm -f /tmp/output.txt" EXIT
yum -y update > /tmp/output.txt
if grep -qi "kernel" /tmp/output.txt; then
    mail -s "KERNEL UPDATED" user@example.com < /tmp/output.txt
fi

NOTE: It is important that the trap statement be placed at the beginning of the script to function properly. Any command above the trap can exit and not be caught in the trap.

Now if the script exits for any reason, it will still run the rm command to delete the file. Here is an example of me sending SIGINT (CTRL+C) while the script was running.

# ./test.sh
^Cremoved '/tmp/output.txt'

NOTE: I added verbose (-v) output to the rm command so it prints "removed". The ^C signifies where I hit CTRL+C to send SIGINT.

This is a much cleaner and safer way to ensure the cleanup occurs when the script exits.
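The same pattern can be hardened a little by using mktemp instead of a fixed /tmp name; a sketch, with an echo standing in for the yum output above:

```shell
#!/bin/bash
# mktemp avoids name clashes and the symlink tricks a fixed /tmp name invites.
tmpfile=$(mktemp) || exit 1
trap 'rm -f -- "$tmpfile"' EXIT   # single quotes: expanded when the trap fires

echo "kernel updated" > "$tmpfile"   # stand-in for the real command output
grep -qi "kernel" "$tmpfile" && echo "match found"
```

The single-quoted trap body means $tmpfile is looked up at exit time, so the trap keeps working even if the variable is reassigned later in the script.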
Using EXIT (0) instead of a single defined signal (i.e., SIGINT -- 2) ensures the cleanup happens on any exit, even successful completion of the script.

[Aug 26, 2019] linux - Avoiding accidental 'rm' disasters - Super User

Aug 26, 2019 | superuser.com

Mr_Spock, May 26, 2013 at 11:30

Today, using sudo -s, I wanted to rm -R ./lib/, but I actually ran rm -R /lib/. I had to reinstall my OS (Mint 15) and re-download and re-configure all my packages. Not fun. How can I avoid similar mistakes in the future?

Vittorio Romeo, May 26, 2013 at 11:55

First of all, stop executing everything as root. You never really need to do this. Only run individual commands with sudo if you need to. If a normal command doesn't work without sudo, just call sudo !! to execute it again.

If you're paranoid about rm, mv and other operations while running as root, you can add the following aliases to your shell's configuration file:

[ $UID = 0 ] && \
alias rm='rm -i' && \
alias mv='mv -i' && \
alias cp='cp -i'
These will all prompt you for confirmation ( -i ) before removing a file or overwriting an existing file, respectively, but only if you're root (the user with ID 0).
Don't get too used to that though. If you ever find yourself working on a system that doesn't prompt you for everything, you might end up deleting stuff without noticing it. The best way to avoid mistakes is to never run as root and think about what exactly you're doing when you use sudo .
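Another option besides -i aliases is a soft-delete helper that moves targets into a trash directory instead of unlinking them. A sketch; the del name and trash path are made up for illustration:

```shell
# Hypothetical "soft rm": park files in a trash directory instead of deleting.
TRASH="${TRASH:-$HOME/.local/share/trash-sketch}"
del() {
    mkdir -p "$TRASH" || return 1
    mv -- "$@" "$TRASH/"    # '--' so names starting with '-' survive
}
```

del important.txt parks the file in $TRASH, from which a real rm can later purge it deliberately; unlike an rm alias, it doesn't train you to expect a safety net on other systems.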
[Aug 26, 2019] bash - How to prevent rm from reporting that a file was not found
Aug 26, 2019 | stackoverflow.com
pizza ,Apr 20, 2012 at 21:29
I am using rm within a BASH script to delete many files. Sometimes the files are not present, so it reports many errors. I do not need this message. I have searched the man page for a command to make rm quiet, but the only option I found is -f , which from the description, "ignore nonexistent files, never prompt", seems to be the right choice, but the name does not seem to fit, so I am concerned it might have unintended consequences.
• Is the -f option the correct way to silence rm ? Why isn't it called -q ?
• Does this option do anything else?
Keith Thompson ,Dec 19, 2018 at 13:05
The main use of -f is to force the removal of files that would not be removed using rm by itself (as a special case, it "removes" non-existent files, thus suppressing the error message).
You can also just redirect the error message using
$ rm file.txt 2> /dev/null

(or your operating system's equivalent). You can check the value of $? immediately after calling rm to see if a file was actually removed or not.
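Putting those two pieces together -- silencing the message but still branching on the exit status -- might look like this sketch with a throwaway file:

```shell
f=$(mktemp)            # a file that exists

rm "$f" 2>/dev/null && echo "removed"
# The file is gone now, so a plain rm fails with a non-zero status...
rm "$f" 2>/dev/null || echo "already gone"
# ...while -f treats a missing operand as success:
rm -f "$f" && echo "-f reports success either way"
```

This is why -f silences scripts: it collapses "removed" and "was never there" into the same successful exit status.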
vimdude ,May 28, 2014 at 18:10
Yes, -f is the most suitable option for this.
tripleee ,Jan 11 at 4:50
-f is the correct flag, but for the test operator, not rm
[ -f "$THEFILE" ] && rm "$THEFILE"
this ensures that the file exists and is a regular file (not a directory, device node etc...)
mahemoff ,Jan 11 at 4:41
\rm -f file will never report not found.
Idelic ,Apr 20, 2012 at 16:51
As far as rm -f doing "anything else", it does force ( -f is shorthand for --force ) silent removal in situations where rm would otherwise ask you for confirmation. For example, when trying to remove a file not writable by you from a directory that is writable by you.
Keith Thompson ,May 28, 2014 at 18:09
I had the same issue with csh. The only solution I had was to create a dummy file that matched the pattern before running "rm" in my script.
[Aug 26, 2019] shell - rm -rf return codes
Aug 26, 2019 | superuser.com
SheetJS ,Aug 15, 2013 at 2:50
Can anyone let me know the possible return codes for the command rm -rf other than zero, i.e., the possible return codes for failure cases? I want to know a more detailed reason for the failure of the command, not just that the command failed (returned non-zero).
Adrian Frühwirth ,Aug 14, 2013 at 7:00
To see the return code, you can use echo $? in bash. To see the actual meaning, some platforms (like Debian Linux) have the perror binary available, which can be used as follows:

$ rm -rf something/; perror $?
rm: cannot remove 'something/': Permission denied
OS error code 1: Operation not permitted

rm -rf automatically suppresses most errors. The most likely error you will see is 1 (Operation not permitted), which will happen if you don't have permissions to remove the file. -f intentionally suppresses most errors.

Adrian Frühwirth, Aug 14, 2013 at 7:21

Grabbed coreutils from git... looking at exit we see:

openfly@linux-host:~/coreutils/src$ cat rm.c | grep -i exit
if (status != EXIT_SUCCESS)
exit (status);
/* Since this program exits immediately after calling 'rm', rm need not
atexit (close_stdin);
usage (EXIT_FAILURE);
exit (EXIT_SUCCESS);
usage (EXIT_FAILURE);
error (EXIT_FAILURE, errno, _("failed to get attributes of %s"),
exit (EXIT_SUCCESS);
exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);
Now looking at the status variable....
openfly@linux-host:~/coreutils/src$ cat rm.c | grep -i status
usage (int status)
  if (status != EXIT_SUCCESS)
    exit (status);
  enum RM_status status = rm (file, &x);
  assert (VALID_STATUS (status));
  exit (status == RM_ERROR ? EXIT_FAILURE : EXIT_SUCCESS);

Looks like there isn't much going on there with the exit status. I see EXIT_FAILURE and EXIT_SUCCESS and not anything else, so basically 0 and 1 / -1.

To see specific exit() syscalls and how they occur in a process flow, try this:

openfly@linux-host:~/$ strace rm -rf $whatever

Fairly simple.

ref: http://www.unix.com/man-page/Linux/EXIT_FAILURE/exit/

[Jul 26, 2019] The day the virtual machine manager died by Nathan Lager

"Dangerous" commands like dd should probably always be typed first in an editor and executed only once you have verified that you did not make a blunder. A good decision was to go home and think the situation over, not to aggravate it with impulsive attempts to correct it, which typically only make things worse. The lack of checks on the health of his backups suggests that this guy is an arrogant sucker, despite his 20 years of sysadmin experience.

Notable quotes:
"... I started dd as root, over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! ..."
"... Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home. ..."

Jul 26, 2019 | www.redhat.com

... ... ...

See, my RHEV manager was a VM running on a stand-alone Kernel-based Virtual Machine (KVM) host, separate from the cluster it manages. I had been running RHEV since version 3.0, before hosted engines were a thing, and I hadn't gone through the effort of migrating. I was already in the process of building a new set of clusters with a new manager, but this older manager was still controlling most of our production VMs. It had filled its disk again, and the underlying database had stopped itself to avoid corruption.
See, for whatever reason, we had never set up disk space monitoring on this system. It's not like it was an important box, right?

So, I logged into the KVM host that ran the VM, and started the well-known procedure of creating a new empty disk file and then attaching it via virsh. The procedure goes something like this: become root, use dd to write a stream of zeros to a new file, of the proper size, in the proper location, then use virsh to attach the new disk to the already-running VM. Then, of course, log into the VM and do your disk expansion.

I logged in, ran sudo -i, and started my work. I ran cd /var/lib/libvirt/images, ran ls -l to find the existing disk images, and then started carefully crafting my dd command:

dd ... bs=1k count=40000000 if=/dev/zero ... of=./vmname-disk ...

Which was the next disk again? <Tab> of=vmname-disk2.img <Back arrow, Back arrow, Back arrow, Back arrow, Backspace> Don't want to dd over the existing disk, that'd be bad. Let's change that 2 to a 3, and Enter. OH CRAP, I CHANGED THE 2 TO A 2 NOT A 3!

<Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C><Ctrl+C>

I still get sick thinking about this. I'd done the stupidest thing I possibly could have done: I started dd as root, over the top of an EXISTING DISK ON A RUNNING VM. What kind of idiot does that?! (The kind that's at work late, trying to get this one little thing done before he heads off to see his friend. The kind that thinks he knows better, and thought he was careful enough not to make such a newbie mistake. Gah.)

So, how fast does dd start writing zeros? Faster than I can move my fingers from the Enter key to the Ctrl+C keys. I tried a number of things to recover the running disk from memory, but all I did was make things worse, I think. The system was still up, but still broken like it was before I touched it, so it was useless.

Since my VMs were still running, and I'd already done enough damage for one night, I stopped touching things and went home.
The next day I owned up to the boss and co-workers pretty much the moment I walked in the door. We started taking an inventory of what we had, and what was lost. I had taken the precaution of setting up backups ages ago. So, we thought we had that to fall back on. I opened a ticket with Red Hat support and filled them in on how dumb I'd been. I can only imagine the reaction of the support person when they read my ticket. I worked a help desk for years, I know how this usually goes. They probably gathered their closest coworkers to mourn for my loss, or get some entertainment out of the guy who'd been so foolish. (I say this in jest. Red Hat's support was awesome through this whole ordeal, and I'll tell you how soon. ) So, I figured the next thing I would need from my broken server, which was still running, was the backups I'd diligently been collecting. They were on the VM but on a separate virtual disk, so I figured they were safe. The disk I'd overwritten was the last disk I'd made to expand the volume the database was on, so that logical volume was toast, but I've always set up my servers such that the main mounts -- / , /var , /home , /tmp , and /root -- were all separate logical volumes. In this case, /backup was an entirely separate virtual disk. So, I scp -r 'd the entire /backup mount to my laptop. It copied, and I felt a little sigh of relief. All of my production systems were still running, and I had my backup. My hope was that these factors would mean a relatively simple recovery: Build a new VM, install RHEV-M, and restore my backup. Simple right? By now, my boss had involved the rest of the directors, and let them know that we were looking down the barrel of a possibly bad time. We started organizing a team meeting to discuss how we were going to get through this. I returned to my desk and looked through the backups I had copied from the broken server. All the files were there, but they were tiny. 
Like, a couple hundred kilobytes each, instead of the hundreds of megabytes or even gigabytes that they should have been. Happy feeling, gone. Turns out, my backups were running, but at some point after an RHEV upgrade, the database backup utility had changed. Remember how I said this system had existed since version 3.0? Well, 3.0 didn't have an engine-backup utility, so in my RHEV training, we'd learned how to make our own. Mine broke when the tools changed, and for who knows how long, it had been getting an incomplete backup -- just some files from /etc . No database. Ohhhh ... Fudge. (I didn't say "Fudge.") I updated my support case with the bad news and started wondering what it would take to break through one of these 4th-floor windows right next to my desk. (Ok, not really.) At this point, we basically had three RHEV clusters with no manager. One of those was for development work, but the other two were all production. We started using these team meetings to discuss how to recover from this mess. I don't know what the rest of my team was thinking about me, but I can say that everyone was surprisingly supportive and un-accusatory. I mean, with one typo I'd thrown off the entire department. Projects were put on hold and workflows were disrupted, but at least we had time: We couldn't reboot machines, we couldn't change configurations, and couldn't get to VM consoles, but at least everything was still up and operating. Red Hat support had escalated my SNAFU to an RHEV engineer, a guy I'd worked with in the past. I don't know if he remembered me, but I remembered him, and he came through yet again. About a week in, for some unknown reason (we never figured out why), our Windows VMs started dropping offline. They were still running as far as we could tell, but they dropped off the network, Just boom. Offline. In the course of a workday, we lost about a dozen windows systems. 
All of our RHEL machines were working fine, so it was just some Windows machines, and not even every Windows machine -- about a dozen of them. Well great, how could this get worse? Oh right, add a ticking time bomb. Why were the Windows servers dropping off? Would they all eventually drop off? Would the RHEL systems eventually drop off? I made a panicked call back to support, emailed my account rep, and called in every favor I'd ever collected from contacts I had within Red Hat to get help as quickly as possible. I ended up on a conference call with two support engineers, and we got to work. After about 30 minutes on the phone, we'd worked out the most insane recovery method. We had the newer RHEV manager I mentioned earlier, that was still up and running, and had two new clusters attached to it. Our recovery goal was to get all of our workloads moved from the broken clusters to these two new clusters. Want to know how we ended up doing it? Well, as our Windows VMs were dropping like flies, the engineers and I came up with this plan. My clusters used a Fibre Channel Storage Area Network (SAN) as their storage domains. We took a machine that was not in use, but had a Fibre Channel host bus adapter (HBA) in it, and attached the logical unit numbers (LUNs) for both the old cluster's storage domains and the new cluster's storage domains to it. The plan there was to make a new VM on the new clusters, attach blank disks of the proper size to the new VM, and then use dd (the irony is not lost on me) to block-for-block copy the old broken VM disk over to the newly created empty VM disk. I don't know if you've ever delved deeply into an RHEV storage domain, but under the covers it's all Logical Volume Manager (LVM). The problem is, the LV's aren't human-readable. They're just universally-unique identifiers (UUIDs) that the RHEV manager's database links from VM to disk. These VMs are running, but we don't have the database to reference. So how do you get this data? virsh ... 
Luckily, I managed KVM and Xen clusters long before RHEV was a thing that was viable. I was no stranger to libvirt 's virsh utility. With the proper authentication -- which the engineers gave to me -- I was able to virsh dumpxml on a source VM while it was running, get all the info I needed about its memory, disk, CPUs, and even MAC address, and then create an empty clone of it on the new clusters. Once I felt everything was perfect, I would shut down the VM on the broken cluster with either virsh shutdown , or by logging into the VM and shutting it down. The catch here is that if I missed something and shut down that VM, there was no way I'd be able to power it back on. Once the data was no longer in memory, the config would be completely lost, since that information is all in the database -- and I'd hosed that. Once I had everything, I'd log into my migration host (the one that was connected to both storage domains) and use dd to copy, bit-for-bit, the source storage domain disk over to the destination storage domain disk. Talk about nerve-wracking, but it worked! We picked one of the broken windows VMs and followed this process, and within about half an hour we'd completed all of the steps and brought it back online. We did hit one snag, though. See, we'd used snapshots here and there. RHEV snapshots are lvm snapshots. Consolidating them without the RHEV manager was a bit of a chore, and took even more leg work and research before we could dd the disks. I had to mimic the snapshot tree by creating symbolic links in the right places, and then start the dd process. I worked that one out late that evening after the engineers were off, probably enjoying time with their families. They asked me to write the process up in detail later. I suspect that it turned into some internal Red Hat documentation, never to be given to a customer because of the chance of royally hosing your storage domain. 
Somehow, over the course of 3 months and probably a dozen scheduled maintenance windows, I managed to migrate every single VM (of about 100 VMs) from the old zombie clusters to the working clusters. This migration included our Zimbra collaboration system (10 VMs in itself), our file servers (another dozen VMs), our Enterprise Resource Planning (ERP) platform, and even Oracle databases. We didn't lose a single VM and had no more unplanned outages. The Red Hat Enterprise Linux (RHEL) systems, and even some Windows systems, never fell to the mysterious drop-off that those dozen or so Windows servers did early on.

During this ordeal, though, I had trouble sleeping. I was stressed out and felt so guilty for creating all this work for my co-workers that I even had trouble eating. No exaggeration, I lost 10 lbs.

So, don't be like Nate. Monitor your important systems, check your backups, and for all that's holy, double-check your dd output file. That way, you won't have drama, and can truly enjoy Sysadmin Appreciation Day!

Nathan Lager is an experienced sysadmin, with 20 years in the industry. He runs his own blog at undrground.org, and hosts the Iron Sysadmin Podcast.

[Apr 29, 2019] When disaster hits, you need to resolve things quickly and efficiently, with panic being the worst enemy

The amount of training and previous experience becomes a crucial factor in such situations. It is rarely just one thing that causes an "accident"; there are multiple contributors here.

Notable quotes:
"... Panic in my experience stems from a number of things here, but two crucial ones are: ..."
"... not knowing what to do, or learned actions not having any effect ..."

Apr 29, 2019 | www.nakedcapitalism.com

...I suspect that for both of those, when they hit, you need to resolve things quickly and efficiently, with panic being the worst enemy.
Panic in my experience stems from a number of things here, but two crucial ones are:

• input overload
• not knowing what to do, or learned actions not having any effect

Both of them can be, to a very large extent, overcome with training, training, and more training (actually practising the emergency situation, not just reading about it and filling in questionnaires).

[Mar 26, 2019] I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day.

Mar 26, 2019 | twitter.com

SwiftOnSecurity, 7:07 PM - 25 Mar 2019

I wiped out a call center by mistyping the user profile expiration purge parameters in a script before leaving for the day. https://twitter.com/soniagupta504/status/1109979183352942592

SwiftOnSecurity, 7:08 PM - 25 Mar 2019

Luckily most of it was backed up with a custom-built user profile roaming system, but still it was down for an hour and a half and degraded for more...

[Mar 01, 2019] Molly-guard for CentOS 7 UoB Unix by dg12158

Sep 21, 2015 | bris.ac.uk

Since I was looking at this already and had a few things to investigate and fix in our systemd-using hosts, I checked how plausible it is to insert a molly-guard-like password prompt as part of the reboot/shutdown process on CentOS 7 (i.e. using systemd). Problems encountered include:

• Asking for a password from a service/unit in systemd -- use systemd-ask-password, which needs some agent setup to reply to it correctly?
• The reboot command always walls a message to all logged-in users before it even runs the new reboot-molly unit, as it expects a reboot to happen. The argument --no-wall stops this, but that requires a change to the reboot command -- hence back to the original problem of replacing packaged files/symlinks with RPM.
• The reboot.target unit is a "systemd.special" unit, which means that it has some special behaviour and cannot be renamed. We can modify it, of course, by editing the reboot.target file.
• How do we get a systemd unit to run first and block anything later from running until it is complete? (In fact, to abort the reboot, but just for this one time rather than being set as permanently failed. Reboot failing is a bit of a strange situation for it to be in.) The dependencies appear to work, but the reboot target is quite keen on running other items from the dependency list -- I'm more than likely doing something wrong here!

So for now this is shelved. It would be nice to have a solution, though, so any hints from systemd experts are gratefully received! (Note that CentOS 7 uses systemd 208, so new features in later versions which would help won't be available to us.)

This entry was posted in Uncategorized by dg12158.

[Mar 01, 2019] molly-guard protects machines from accidental shutdowns/reboots by ruchi

Nov 28, 2009 | www.ubuntugeek.com

molly-guard installs a shell script that overrides the existing shutdown/reboot/halt/poweroff commands and first runs a set of scripts, which all have to exit successfully before molly-guard invokes the real command. One of the scripts checks for existing SSH sessions. If any of the four commands are called interactively over an SSH session, the shell script prompts you to enter the name of the host you wish to shut down. This should adequately prevent you from accidental shutdowns and reboots.

This shell script passes the commands through to the respective binaries in /sbin and should thus not get in the way if called non-interactively or locally. The tool is basically a replacement for halt, reboot and shutdown to prevent such accidents.

Install molly-guard in ubuntu:

sudo apt-get install molly-guard

or click on the following link: apt://molly-guard

Now that it's installed, try it out (on a non-production box). Here you can see it save me from rebooting the box Ubuntu-test:

Ubuntu-test:~$ sudo reboot
W: molly-guard: SSH session detected!
Please type in hostname of the machine to reboot: ruchi
Good thing I asked; I won't reboot Ubuntu-test ...
W: aborting reboot due to 30-query-hostname exiting with code 1.
Ubuntu-Test:~$

By default you're only protected on sessions that look like SSH sessions (that have $SSH_CONNECTION set). If, like us, you use a lot of virtual machines and RILOE cards, edit /etc/molly-guard/rc and uncomment ALWAYS_QUERY_HOSTNAME=true. Now you should be prompted for any interactive session.
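The 30-query-hostname check shown in the transcript above can be approximated in a few lines of shell -- a hypothetical reimplementation for illustration only, not the script molly-guard actually ships:

```shell
# check_hostname: sketch of molly-guard's hostname-confirmation idea.
# Returns non-zero (aborting the real reboot) unless the operator types
# the name of the machine they are actually logged in to.
check_hostname() {
  expected="$(hostname 2>/dev/null || uname -n)"
  printf 'Please type in hostname of the machine to reboot: '
  read -r typed
  if [ "$typed" != "$expected" ]; then
    echo "Good thing I asked; I won't reboot $expected ..."
    return 1   # non-zero exit stops the wrapper from running the real command
  fi
}
```

A wrapper would call this before exec'ing /sbin/reboot, exactly as the transcript shows molly-guard doing.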
[Mar 01, 2019] Confirm before executing shutdown-reboot command on linux by Ilija Matoski
"... rushing to leave and was still logged into a server so I wanted to shutdown my laptop, but what I didn't notice is that I was still connected to the remote server. ..."
Oct 23, 2017 | matoski.com
I was rushing to leave and wanted to shut down my laptop, but what I didn't notice is that I was still connected to a remote server. Luckily, before pressing enter I noticed I was not on my machine but on the remote server. So I was thinking there should be a very easy way to prevent this from happening again, to me or to anyone else.
So first we need to create a new bash script at /usr/local/bin/confirm with the contents below, and give it execute permissions:
#!/usr/bin/env bash
echo "About to execute $1 command"
echo -n "Would you like to proceed y/n? "
read reply
if [ "$reply" = y -o "$reply" = Y ]
then
    $1 "${@:2}"
else
    echo "$1 ${@:2} cancelled"
fi

Now the only thing left to do is to set up aliases so they go through this command to confirm instead of directly calling the command. So I create the following files:

/etc/profile.d/confirm-shutdown.sh

alias shutdown="/usr/local/bin/confirm /sbin/shutdown"

/etc/profile.d/confirm-reboot.sh

alias reboot="/usr/local/bin/confirm /sbin/reboot"

Now when I actually try to do a shutdown/reboot it will prompt me like so:

ilijamt@x1 ~$ reboot
Before proceeding to perform /sbin/reboot, please ensure you have approval to perform this task
Would you like to proceed y/n? n
/sbin/reboot cancelled
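One possible refinement of this script (my own sketch, not from the original post) is to combine it with molly-guard's idea and only prompt when $SSH_CONNECTION indicates a remote session, so local shutdowns stay friction-free:

```shell
# confirm_remote CMD [ARGS...] -- prompt before running CMD, but only when
# the session looks remote ($SSH_CONNECTION set); hypothetical helper name.
confirm_remote() {
  if [ -z "$SSH_CONNECTION" ]; then
    "$@"                       # local session: run the command directly
    return
  fi
  printf 'About to execute %s on %s. Proceed y/n? ' "$1" "$(hostname)"
  read -r reply
  case "$reply" in
    y|Y) "$@" ;;
    *)   echo "$* cancelled" ;;
  esac
}
```

The aliases above would then point at this function (or a script wrapping it) instead of the unconditional confirm.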
[Feb 21, 2019] https://github.com/MikeDacre/careful_rm
Feb 21, 2019 | github.com
rm is a powerful *nix tool that simply drops a file from the drive index. It doesn't delete it or put it in a Trash can; it just de-indexes it, which makes the file hard to recover unless you want to put in the work, and pretty easy to recover if you are willing to spend a few hours trying (use shred to actually securely erase files).
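The distinction matters in practice: rm only unlinks, while the shred alternative the README mentions overwrites the file's blocks before removing it. A quick illustration (GNU coreutils shred assumed; note that overwriting is not reliable on journaling or copy-on-write filesystems):

```shell
# Create a throwaway file, then overwrite its contents and unlink it.
tmp="$(mktemp)"
echo "secret data" > "$tmp"
shred -u -n 3 "$tmp"          # overwrite 3 times, then remove (-u)
ls "$tmp" 2>/dev/null || echo "gone"
```

By contrast, a plain `rm "$tmp"` would leave the old blocks intact on disk until they happen to be reused.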
careful_rm.py is inspired by the -I interactive mode of rm and by safe-rm . safe-rm adds a recycle bin mode to rm, and the -I interactive mode adds a prompt if you delete more than a handful of files or recursively delete a directory. ZSH also has an option to warn you if you recursively rm a directory.
These are all great, but I found them unsatisfying. What I want is for rm to be quick and not bother me for single file deletions (so rm -i is out), but to let me know when I am deleting a lot of files, and to actually print a list of files that are about to be deleted . I also want it to have the option to trash/recycle my files instead of just straight deleting them.... like safe-rm , but not so intrusive (safe-rm defaults to recycle, and doesn't warn).
careful_rm.py is fundamentally a simple rm wrapper that accepts all of the same commands as rm, but with a few additional options and features. In the source code CUTOFF is set to 3, so deleting more files than that will prompt the user. Also, deleting a directory will prompt the user separately, with a count of all files and subdirectories within the folders to be deleted.
Furthermore, careful_rm.py implements a fully integrated trash mode that can be toggled on with -c. It can also be forced on by adding a file at ~/.rm_recycle, or toggled on only for $HOME (the best idea) by ~/.rm_recycle_home. The mode can be disabled on the fly by passing --direct, which forces off recycle mode.

The recycle mode tries to find the best location to recycle to on MacOS or Linux; on MacOS it also tries to use Apple Script to trash files, which means the original location is preserved (note that AppleScript can be slow; you can disable it by adding a ~/.no_apple_rm file, but Put Back won't work). The best locations for trashes go in this order:

1. $HOME/.Trash on Mac or $HOME/.local/share/Trash on Linux
2. <mountpoint>/.Trashes on Mac or <mountpoint>/.Trash-$UID on Linux
3. /tmp/$USER_trash

The best trash can that avoids volume hopping is always favored, as moving across file systems is slow. If the trash does not exist, the user is prompted to create it; they then also have the option to fall back to the root trash (/tmp/$USER_trash) or just rm the files.
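The Linux half of that fallback order can be sketched as a tiny shell function -- a simplified illustration only (the function name is invented here; the real careful_rm.py also handles MacOS, AppleScript, and prompting to create missing trash directories):

```shell
# pick_trash MOUNTPOINT HOME_MOUNTPOINT -- print the trash directory to use,
# following the Linux fallback order described above.
pick_trash() {
  mp="$1"; home_mp="$2"
  user="${USER:-$(id -un)}"
  if [ "$mp" = "$home_mp" ]; then
    echo "$HOME/.local/share/Trash"   # same volume as $HOME: home trash
  elif [ -d "$mp/.Trash-$(id -u)" ]; then
    echo "$mp/.Trash-$(id -u)"        # per-volume trash avoids slow cross-volume moves
  else
    echo "/tmp/${user}_trash"         # root trash as last resort
  fi
}
```

For example, deleting files on the same volume as $HOME selects the home trash, while files on an unprepared mount fall through to /tmp.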
/tmp/$USER_trash is almost always used for deleting system/root files, but note that you most likely do not want to save those files, and straight rm is generally better.

[Feb 21, 2019] https://github.com/lagerspetz/linux-stuff/blob/master/scripts/saferm.sh by Eemil Lagerspetz

A shell script that tries to implement the trash can idea.

Feb 21, 2019 | github.com

#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
## Made by Eemil Lagerspetz
##
## Started on Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##
version="1.16";

... ... ...

[Feb 21, 2019] The rm='rm -i' alias is a horror

Feb 21, 2019 | superuser.com

The rm='rm -i' alias is a horror because after a while of using it, you will expect rm to prompt you by default before removing files. Of course, one day you'll run it with an account that doesn't have that alias set and, before you understand what's going on, it is too late.

... ... ...

If you want safe aliases but don't want to risk getting used to the commands working differently on your system than on others, you can disable rm like this:

alias rm='echo "rm is disabled, use remove or trash or /bin/rm instead."'

Then you can create your own safe alias, e.g.

alias remove='/bin/rm -irv'

or use trash instead.

[Feb 21, 2019] Ubuntu Manpage trash - Command line trash utility.

Feb 21, 2019 | manpages.ubuntu.com

Provided by: trash-cli_0.12.9.14-2_all

NAME
trash - Command line trash utility.

SYNOPSIS
trash [arguments] ...

DESCRIPTION
The trash-cli package provides a command line trashcan utility compliant with the FreeDesktop.org Trash Specification. It remembers the name, original path, deletion date, and permissions of each trashed file.

ARGUMENTS
Names of files or directories to move to the trashcan.

EXAMPLES
$ cd /home/andrea/
$ touch foo bar
$ trash foo bar
BUGS
Report bugs to http://code.google.com/p/trash-cli/issues
AUTHORS
Trash was written by Andrea Francia <andreafrancia@users.sourceforge.net> and Einar Orn
Olason <eoo@hi.is>. This manual page was written by Steve Stalcup <vorian@ubuntu.com>.
Changes made by Massimo Cavalleri <submax@tiscalinet.it>.
SEE ALSO
trash-list(1), trash-restore(1), trash-empty(1), and the FreeDesktop.org Trash
Specification at http://www.ramendik.ru/docs/trashspec.html.
Both are released under the GNU General Public License, version 2 or later.
[Jan 29, 2019] hardware - Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition expected behavior
Dec 04, 2012 | serverfault.com
My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch.
This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine.
However, we've recently seen a number of units where after a number of hard-power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this Question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.
My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns?
My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user-files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below)
My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time.
Which of us is right?
Embedded-PC-failsafe:~# ls
Embedded-PC-failsafe:~# umount /mnt/unionfs
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Invalid inode number for '.' in directory inode 46948.
Fix<y>? yes
Directory inode 46948, block 0, offset 12: directory corrupted
Salvage<y>? yes
Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes
Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes
Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes
Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes
Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes
Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes
Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes
Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes
Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes
Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? yes
Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes
Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes
Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes
Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes
Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes
Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes
Couldn't fix parent of inode 46948: Couldn't find parent directory entry
Pass 4: Checking reference counts
Unattached inode 46945
Connect to /lost+found<y>? yes
Inode 46945 ref count is 2, should be 1. Fix<y>? yes
Inode 46953 ref count is 5, should be 4. Fix<y>? yes
Pass 5: Checking group summary information
Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517
Fix<y>? yes
Free blocks count wrong for group #6 (17247, counted=17611).
Fix<y>? yes
Free blocks count wrong (161691, counted=162055).
Fix<y>? yes
Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)
Fix<y>? yes
Free inodes count wrong for group #6 (7608, counted=7624).
Fix<y>? yes
Free inodes count wrong (61919, counted=61935).
Fix<y>? yes
embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~#
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Directory entry for '.' in ... (46948) is big.
Split<y>? yes
Missing '..' in directory inode 46948.
Fix<y>? yes
Setting filetype for entry '..' in ... (46948) to 2.
Pass 3: Checking directory connectivity
'..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953).
Fix<y>? yes
Pass 4: Checking reference counts
Inode 2 ref count is 12, should be 13. Fix<y>? yes
Pass 5: Checking group summary information
embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
Embedded-PC-failsafe:~#
Embedded-PC-failsafe:~# e2fsck /dev/sda3
e2fsck 1.41.3 (12-Oct-2008)
embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
asked Dec 4 '12 by Jeremy Friesner
• Have you all thought of changing to ext4 or ZFS? – mdpc Dec 4 '12 at 2:14
• I've thought about changing to ext4, at least... would that help address this issue? Would ZFS be better still? – Jeremy Friesner Dec 4 '12 at 2:17
• 1 Neither option would fix this. We still use devices with supercapacitors in ZFS, and battery or flash-protected cache is recommended for ext4 in server applications. – ewwhite Dec 4 '12 at 2:54
You're both wrong (maybe?)... ext3 is coping the best it can with having its underlying storage removed so abruptly.
Your SSD probably has some type of onboard cache. You don't mention the make/model of SSD in use, but this sounds like a consumer-level SSD versus an enterprise or industrial-grade model .
Either way, the cache is used to help coalesce writes and prolong the life of the drive. If there are writes in transit, the sudden loss of power is definitely the source of your corruption. True enterprise and industrial SSDs have supercapacitors that maintain power long enough to move data from cache to nonvolatile storage, much in the same way battery-backed and flash-backed RAID controller caches work.
If your drive doesn't have a supercap, the in-flight transactions are being lost, hence the filesystem corruption. ext3 is probably being told that everything is on stable storage, but that's just a function of the cache. – ewwhite, Dec 4 '12
• Sorry to you and everyone who upvoted this, but you're just wrong. Handling the loss of in progress writes is exactly what the journal is for, and as long as the drive correctly reports whether it has a write cache and obeys commands to flush it, the journal guarantees that the metadata will not be inconsistent. You only need a supercap or battery backed raid cache so you can enable write cache without having to enable barriers, which sacrifices some performance to maintain data correctness. – psusi Dec 5 '12 at 19:12
• @psusi The SSD in use probably has cache explicitly enabled or relies on an internal buffer regardless of the OS_level setting. The data in that cache is what a supercapacitor-enabled SSD would protect. – ewwhite Dec 5 '12 at 19:30
• The data in the cache doesn't need protecting if you enable IO barriers. Most consumer type drives ship with write caching disabled by default and you have to enable it if you want it, exactly because it causes corruption issues if the OS is not careful. – psusi Dec 5 '12 at 19:35
• @pusi Old now, but you mention this: as long as the drive correctly reports whether it has a write cache and obeys commands to flush it, the journal guarantees that the metadata will not be inconsistent. That's the thing: because of storage controllers that tend to assume older disks, SSDs will sometimes lie about whether data is flushed. You do need that supercap. – Joel Coel Aug 9 '15 at 22:01
You are right and your coworker is wrong. Barring something going wrong, the journal makes sure you never have inconsistent fs metadata. You might check with hdparm to see if the drive's write cache is enabled. If it is, and you have not enabled IO barriers (off by default on ext3, on by default in ext4), then that would be the cause of the problem.
The barriers are needed to force the drive write cache to flush at the correct time to maintain consistency, but some drives are badly behaved and either report that their write cache is disabled when it is not, or silently ignore the flush commands. This prevents the journal from doing its job. – psusi, Dec 5 '12
• -1 for reading-comprehension... – ewwhite Dec 5 '12 at 19:34
• @ewwhite, maybe you should try reading, and actually writing a useful response instead of this childish insult. – psusi Dec 5 '12 at 19:36
• +1 this answer probably could be improved, as any other answer in any QA. But at least provides some light and hints. @downvoters: improve the answer yourselves, or comment on possible flows, but downvoting this answer without proper justification is just disgusting! – Alberto Dec 6 '12 at 21:44
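The disagreement above boils down to a small decision table: metadata journaling can only be trusted if the volatile write cache is off, or barriers (cache flushes) are enabled and honoured. A sketch of that logic as a function (the name is invented for illustration; it also assumes the drive does not lie about flushes, which is exactly the failure mode Joel Coel describes):

```shell
# journal_is_safe CACHE BARRIERS -- each "on" or "off"; prints safe/unsafe.
journal_is_safe() {
  cache="$1"; barriers="$2"
  if [ "$cache" = "off" ] || [ "$barriers" = "on" ]; then
    echo safe      # no volatile cache, or flushes order the journal writes
  else
    echo unsafe    # cached writes can be reordered or lost on power cut
  fi
}
```

The ext3-default combination the answer warns about -- write cache on, barriers off -- is the one that comes out unsafe.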
[Jan 29, 2019] xfs corrupted after power failure
Highly recommended!
Oct 15, 2013 | www.linuxquestions.org
katmai90210
hi guys,
i have a problem. yesterday there was a power outage at one of my datacenters, where i have a relatively large fileserver. 2 arrays, 1 x 14 tb and 1 x 18 tb both in raid6, with a 3ware card.
after the outage, the server came back online, the xfs partitions were mounted, and everything looked okay. i could access the data and everything seemed just fine.
today i woke up to lots of i/o errors, and when i rebooted the server, the partitions would not mount:
Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN [<ffffffff80056933>] pdflush+0x0/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff80056a84>] pdflush+0x151/0x1fb
Oct 14 04:09:17 kp4 kernel: [<ffffffff800cd931>] wb_kupdate+0x0/0x16a
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032c2b>] kthread+0xfe/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfc1>] child_rip+0xa/0x11
Oct 14 04:09:17 kp4 kernel: [<ffffffff800a3ab7>] keventd_create_kthread+0x0/0xc4
Oct 14 04:09:17 kp4 kernel: [<ffffffff80032b2d>] kthread+0x0/0x132
Oct 14 04:09:17 kp4 kernel: [<ffffffff8005dfb7>] child_rip+0x0/0x11
Oct 14 04:09:17 kp4 kernel:
Oct 14 04:09:17 kp4 kernel: XFS internal error XFS_WANT_CORRUPTED_RETURN at line 279 of file fs/xfs/xfs_alloc.c. Caller 0xffffffff88342331
Oct 14 04:09:17 kp4 kernel:
got a bunch of these in dmesg.
The array is fine:
[root@kp4 ~]# tw_cli
//kp4> focus c6
//kp4/c6> show
Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
------------------------------------------------------------------------------
u0 RAID-6 OK - - 256K 13969.8 RiW ON
u1 RAID-6 OK - - 256K 16763.7 RiW ON
VPort Status Unit Size Type Phy Encl-Slot Model
------------------------------------------------------------------------------
p0 OK u1 2.73 TB SATA 0 - Hitachi HDS723030AL
p1 OK u1 2.73 TB SATA 1 - Hitachi HDS723030AL
p2 OK u1 2.73 TB SATA 2 - Hitachi HDS723030AL
p3 OK u1 2.73 TB SATA 3 - Hitachi HDS723030AL
p4 OK u1 2.73 TB SATA 4 - Hitachi HDS723030AL
p5 OK u1 2.73 TB SATA 5 - Hitachi HDS723030AL
p6 OK u1 2.73 TB SATA 6 - Hitachi HDS723030AL
p7 OK u1 2.73 TB SATA 7 - Hitachi HDS723030AL
p8 OK u0 2.73 TB SATA 8 - Hitachi HDS723030AL
p9 OK u0 2.73 TB SATA 9 - Hitachi HDS723030AL
p10 OK u0 2.73 TB SATA 10 - Hitachi HDS723030AL
p11 OK u0 2.73 TB SATA 11 - Hitachi HDS723030AL
p12 OK u0 2.73 TB SATA 12 - Hitachi HDS723030AL
p13 OK u0 2.73 TB SATA 13 - Hitachi HDS723030AL
p14 OK u0 2.73 TB SATA 14 - Hitachi HDS723030AL
Name OnlineState BBUReady Status Volt Temp Hours LastCapTest
---------------------------------------------------------------------------
bbu On Yes OK OK OK 0 xx-xxx-xxxx
i googled for solutions and i think i jumped the horse by doing
xfs_repair -L /dev/sdc
it would not clean it with xfs_repair /dev/sdc, and everybody pretty much says the same thing.
this is what i was getting when trying to mount the array.
Filesystem Corruption of in-memory data detected. Shutting down filesystem xfs_check
Did i jump the gun by using the -L switch :/ ?
jefro
Here is the RH data on that.
[Jan 29, 2019] an HVAC tech that confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF.
Jan 29, 2019 | thwack.solarwinds.com
George Sutherland Jul 8, 2015 9:58 AM (in response to RandyBrown) had a similar thing happen with an HVAC tech who confused the BLACK button that got pushed to exit the room with the RED button clearly marked EMERGENCY POWER OFF. Clear plastic cover installed within 24 hours.... after 3 hours of recovery!
PS... He told his boss that he did not do it.... the camera that focused on the door told a much different story. He was persona non grata at our site after that.
[Jan 29, 2019] HVAC units greatly help to increase reliability
Jan 29, 2019 | thwack.solarwinds.com
Worked at a bank. 6" raised floor. Liebert cooling units on floor with all network equipment. Two units developed a water drain issue over a weekend.
About an hour into Monday morning, devices, servers, routers, in a domino effect starting shorting out and shutting down or blowing up, literally.
Opened the floor tiles to find three inches of water.
We did not have water alarms on the floor at the time.
Shortly after the incident, we did.
But the mistake was very costly and multiple 24 hour shifts of IT people made it a week of pure h3ll.
[Jan 29, 2019] In a former life, I had every server crash over the weekend when the facilities group took down the climate control and HVAC systems without warning
Jan 29, 2019 | thwack.solarwinds.com
• In a former life, I had every server crash over the weekend when the facilities group took down the climate control and HVAC systems without warning.
[Jan 29, 2019] [SOLVED] Unable to mount root file system after a power failure
Jan 29, 2019 | www.linuxquestions.org
07-01-2012, 12:56 PM # 1 damateem LQ Newbie Registered: Dec 2010 Posts: 8 Rep:

Unable to mount root file system after a power failure

We had a storm yesterday and the power dropped out, causing my Ubuntu server to shut off. Now, when booting, I get:

[ 0.564310] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

It looks like file system corruption, but I'm having a hard time fixing the problem. I'm using Rescue Remix 12-04 to boot from USB and get access to the system.

sudo fdisk -l shows the hard drive as:

/dev/sda1: Linux
/dev/sda2: Extended
/dev/sda5: Linux LVM

sudo lvdisplay shows LV names as:

/dev/server1/root
/dev/server1/swap_1

sudo blkid shows types as:

/dev/sda1: ext2
/dev/sda5: LVM2_member
/dev/mapper/server1-root: ext4
/dev/mapper/server1-swap_1: swap

I can mount sda1 and server1/root and all the files appear normal, although I'm not really sure what issues I should be looking for. On sda1, I see a grub folder and several other files. On root, I see the file system as it was before I started having trouble.

I've run the following fsck commands and none of them report any errors:

sudo fsck -f /dev/sda1
sudo fsck -f /dev/server1/root
sudo fsck.ext2 -f /dev/sda1
sudo fsck.ext4 -f /dev/server1/root

and I still get the same error when the system boots. I've hit a brick wall. What should I try next? What can I look at to give me a better understanding of what the problem is?

Thanks, David
07-02-2012, 05:58 AM # 2 syg00 LQ Veteran Registered: Aug 2003 Location: Australia Distribution: Lots ... Posts: 17,415 Rep:

Might depend a bit on what messages we aren't seeing. Normally I'd reckon that means that either the filesystem or disk controller support isn't available. But with something like Ubuntu you'd expect that to all be in place from the initrd, and that is on the /boot partition, which shouldn't be subject to update activity in a normal environment -- unless maybe you're real unlucky and an update was in flight.

Can you chroot into the server (disk) install and run from there successfully?
07-02-2012, 06:08 PM # 3 damateem LQ Newbie Registered: Dec 2010 Posts: 8 Original Poster Rep:

I had a very hard time getting the Grub menu to appear. There must be a very small window for detecting the shift key. Holding it down through the boot didn't work. Repeatedly hitting it at about twice per second didn't work. Increasing the rate to about 4 hits per second got me into it.

Once there, I was able to select an older kernel (2.6.32-39-server). The non-booting kernel was 2.6.32-40-server. 39 booted without any problems.

When I initially set up this system, I couldn't send email from it. It wasn't important to me at the time, so I planned to come back and fix it later. Last week (before the power drop), email suddenly started working on its own. I was surprised because I hadn't specifically performed any updates. However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.

Next, I'm going to try updating to the latest kernel and see if it has the same problem.

Thanks, David
07-02-2012, 06:24 PM # 4 frieza Senior Member Contributing Member Registered: Feb 2002 Location: harvard, il Distribution: Ubuntu 11.4, DD-WRT micro plus ssh, lfs-6.6, Fedora 15, Fedora 16 Posts: 3,233 Rep:

imho auto updates are dangerous. If you want my opinion, make sure auto updates are off and only have the system tell you there are updates; that way you can choose not to install them during a power failure.

As for a possible future solution for what you went through: unlike other keys, the shift key being held doesn't register as a stuck key to the best of my knowledge, so you can hold the shift key to get into grub. After that, edit the recovery line (the e key) to say at the end:

init=/bin/bash

then boot the system using the keys specified at the bottom of the screen. Once booted to a prompt, you would run

Code:
fsck -f {root partition}

(In this state, the root partition should be either not mounted or mounted read-only, so you can safely run an fsck on the drive.) The -f flag forces a check even when the filesystem seems clean, giving a more thorough scan than a standard run of fsck. Then reboot, and hopefully that fixes things.

Glad things seem to be working for the moment though.
07-02-2012, 06:32 PM # 5
suicidaleggroll LQ Guru Contributing Member
Registered: Nov 2010 Location: Colorado Distribution: OpenSUSE, CentOS Posts: 5,573
Rep:
Quote:
Originally Posted by damateem However, I seem to remember setting up automatic updates, so perhaps an auto update was done that introduced a problem, but it wasn't seen until the reboot that was forced by the power outage.
I think this is very likely. Delayed reboots after performing an update can make tracking down errors impossibly difficult. I had a system a while back that wouldn't boot, turns out it was caused by an update I had done 6 MONTHS earlier, and the system had simply never been restarted afterward.
07-04-2012, 10:18 AM # 6 damateem LQ Newbie Registered: Dec 2010 Posts: 8 Original Poster Rep:

I discovered the root cause of the problem. When I attempted the update, I found that the boot partition was full. So I suspect that caused issues for the auto update, but they went undetected until the reboot.

I next tried to purge old kernels using the instructions at http://www.liberiangeek.net/2011/11/...neiric-ocelot/ but that failed because a previous install had not completed, and it couldn't complete because of the full partition. So I had no choice but to manually rm the oldest kernel and its associated files. With that done, the command

apt-get -f install

got far enough that I could then purge the unwanted kernels. Finally,

sudo apt-get update
sudo apt-get upgrade

brought everything up to date. I will be deactivating the auto updates.

Thanks for all the help! David
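A full /boot blocking kernel updates is a common enough failure mode that it is worth sketching how to pick removal candidates without ever touching the running kernel -- a hypothetical helper, not something from the thread:

```shell
# purgeable_kernels RUNNING_RELEASE PKG... -- print the installed kernel image
# packages that are candidates for removal (everything except the package
# matching the currently running release).
purgeable_kernels() {
  running="$1"; shift
  for pkg in "$@"; do
    case "$pkg" in
      *"$running"*) ;;        # never remove the running kernel
      *) echo "$pkg" ;;
    esac
  done
}

# Typical use on a Debian/Ubuntu system (illustrative):
#   purgeable_kernels "$(uname -r)" $(dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}')
```

The output is the list you would then hand to apt-get purge, after double-checking it by eye.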
[Jan 29, 2019] A new term PEBKAC
Jan 29, 2019 | thwack.solarwinds.com
P roblem
E xists
B etween
K eyboard
A nd
C hair
or the most common fault: the "id ten t" error, i.e. ID10T
[Jan 29, 2019] Are you sure?
Jan 29, 2019 | thwack.solarwinds.com
RichardLetts
Jul 13, 2015 8:13 PM Dealing with my ISP:
Me: There is a problem with your head-end router, you need to get an engineer to troubleshoot it
Them: no the problem is with your cable modem and router, we can see it fine on our network
Me: That's interesting because I powered it off and disconnected it from the wall before we started this conversation.
Them: Are you sure?
Me: I'm pretty sure that the lack of blinky lights means it's got no power but if you think it's still working fine then I'd suggest the problem at your end of this phone conversation and not at my end.
[Jan 29, 2019] Your tax dollars at government IT work
Jan 29, 2019 | thwack.solarwinds.com
My story is about required processes... We need to add DHCP entries to the DHCP server. Here is the process. Receive the request. Write a 5-page document (no exaggeration) detailing who submitted the request, why it was submitted, what the solution would be, and the detailed steps of the solution, including a spreadsheet showing how each field would be completed, plus backup procedures. Produce a second document with a pre-execution test plan and a post-execution test plan in minute detail. Submit to the CAB board for review; submit to a higher-level advisory board for review; attend the CAB meeting for formal approval; attend an additional approval board meeting if the data center is in freeze; attend the post-implementation board for lessons learned... Lesson learned: now I know where our tax dollars go...
Jan 29, 2019 | www.reddit.com
highlord_fox Moderator | /r/sysadmin Sock Puppet 10 points 11 points 12 points 3 years ago (1 child)
9-10 year old Poweredge 2950. Four drives, 250GB ea, RAID 5. Not even sure the fourth drive was even part of the array at this point. Backups consist of cloud file-level backup of most of the server's files. I was working on the server, updating the OS, rebooting it to solve whatever was ailing it at the time, and it was probably about 7-8PM on a Friday. I powered it off, and went to power it back on.
SHIT SHIT SHIT SHIT SHIT SHIT SHIT . Power it back off. Power it back on.
I stared at it, and hope I don't have to call for emergency support on the thing. Power it off and back on a third time.
OhThankTheGods
I didn't power it off again until I replaced it, some 4-6 months later. And then it stayed off for a good few weeks, before I had to buy a Perc 5i card off eBay to get it running again. Long story short, most of the speed issues I was having were due to the card dying. AH WELL.
EDIT: Formatting.
[Jan 29, 2019] Extra security can be a dangerous thing
"... Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs... ..."
Jul 20, 2017 | www.linuxjournal.com
Anonymous, 11/08/2002
At an unnamed location it happened thus... The customer had been using a home built 'tar' -based backup system for a long time. They were informed enough to have even tested and verified that recovery would work also.
Everything had been working fine, and they even had to do a recovery which went fine. Well, one day something evil happened to a disk and they had to replace the unit and do a full recovery.
Things looked fine until someone noticed that a directory with critically important and sensitive data was missing. Turned out that some manager had decided to 'secure' the directory by doing 'chmod 000 dir' to protect the data from inquisitive eyes when the data was not being used.
Of course, tar complained about the situation and returned with non-null status, but since the backup procedure had seemed to work fine, no one thought it necessary to view the logs...
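The failure mode above — tar exits non-zero, nobody looks — is easy to guard against. A minimal sketch (not the site's actual script; paths and names are illustrative) that treats any non-zero tar status as a hard failure instead of assuming the backup "seemed to work fine":

```shell
#!/usr/bin/env bash
# Run tar for a backup and fail loudly on any non-zero exit status,
# rather than letting a silently incomplete archive pass for a backup.

backup_dir() {
    src="$1"
    dest="$2"
    logfile="${3:-/tmp/backup.log}"
    if tar -cf "$dest" "$src" 2>>"$logfile"; then
        echo "backup OK: $dest"
    else
        # $? here is still tar's exit status, since tar was the 'if' condition
        echo "backup FAILED (tar exit $?) - check $logfile" >&2
        return 1
    fi
}
```

A chmod 000 directory would have made tar return non-zero here, and the run would have ended with a visible FAILED message instead of a quiet log entry.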
[Jan 29, 2019] Backing things up with rsync
"... rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/ ..."
Jul 20, 2017 | www.linuxjournal.com
Anonymous on Fri, 11/08/2002 - 03:00.
The Subject, not the content, really brings back memories.
Imagine this: you're tasked with complete control over the network in a multi-million dollar company. You've had some experience in the real world of network maintenance, but mostly you've learned from breaking things at home.
Time comes to implement a backup routine (yes, this was a startup company). You carefully consider the best way to do it and decide copying data to a holding disk before the tape run would be perfect in the situation: faster restore if the holding disk is still alive.
So off you go configuring all your servers for ssh pass through, and create the rsync scripts. Then before the trial run you think it would be a good idea to create a local backup of all the websites.
You log on to the web server, create a temp directory and start testing your newly advanced rsync skills. After a couple of goes, you think you're ready for the real thing, but you decide to run the test one more time.
Everything seems fine so you delete the temp directory. You pause for a second and your mouth drops open wider than it has ever opened before, and a feeling of terror overcomes you. You want to hide in a hole and hope you didn't see what you saw.
I RECURSIVELY DELETED ALL THE LIVE CORPORATE WEBSITES ON FRIDAY AFTERNOON AT 4PM!
Anonymous on Sun, 11/10/2002 - 03:00.
This is why it's ALWAYS A GOOD IDEA to use Midnight Commander or something similar to delete directories!!
...Root for (5) years and never trashed a filesystem yet (knockwoody)...
Anonymous on Fri, 11/08/2002 - 03:00.
rsync with ssh as the transport mechanism works very well with my nightly LAN backups. I've found this page to be very helpful: http://www.mikerubel.org/computers/rsync_snapshots/
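The rotating hard-link snapshot scheme described on the linked page can be sketched roughly like this (directory layout and rotation depth are illustrative, not prescriptive): cp -al makes the new snapshot share disk space with the previous one, and rsync then rewrites only the files that changed.

```shell
#!/usr/bin/env bash
# Rough sketch of Mike Rubel-style rsync snapshots: each snapshot.N
# looks like a full copy, but unchanged files are hard links.

snapshot() {
    src="$1"
    dest="$2"
    mkdir -p "$dest"
    rm -rf "$dest/snapshot.2"                       # drop the oldest
    if [ -d "$dest/snapshot.1" ]; then mv "$dest/snapshot.1" "$dest/snapshot.2"; fi
    if [ -d "$dest/snapshot.0" ]; then cp -al "$dest/snapshot.0" "$dest/snapshot.1"; fi
    rsync -a --delete "$src/" "$dest/snapshot.0/"   # update only changed files
}
```

For remote sources, the same call works with rsync's ssh transport, e.g. a user@host:/path source instead of a local directory.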
[Jan 29, 2019] It helps if somebody checks whether the equipment really has power, but this step is often skipped
"... On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost. ..."
Jan 29, 2019 | thwack.solarwinds.com
I've had a few horrors; here are a few...
Had to travel from Cheshire to Glasgow (4+hours) at 3am to get to a major high street store for 8am, an hour before opening. A switch had failed and taken out a whole floor of the store. So I prepped the new switch, using the same power lead from the failed switch as that was the only available lead / socket. No power. Initially thought the replacement switch was faulty and I would be in trouble for not testing this prior to attending site...
On closer inspection, noticed this power lead was only half in the socket... I connected this back to the original switch, grabbed the "I.T manager" and asked him to "just push the power lead"... his face? Looked like Casper the friendly ghost.
Problem solved at a massive expense to the company due to the out of hours charges. Surely that would be the first thing to check? Obviously not...
The same thing happened in Aberdeen, a 13 hour round trip to resolve a fault on a "failed router". The router looked dead at first glance, but after taking the side panel off the cabinet, I discovered it always helps if the router is actually plugged in...
Yet the customer clearly said everything is plugged in as it should be and it "must be faulty"... It does tend to appear faulty when not supplied with any power...
[Jan 29, 2019] It can be hot inside the rack
Jan 29, 2019 | thwack.solarwinds.com
Shortly after I started my first remote server-monitoring job, I started receiving, one by one, traps for servers that had gone heartbeat missing/no-ping at a remote site. I looked up the site, and there were 16 total servers there, of which about 4 or 5 (and counting) were already down. Clearly not network issues. I remoted into one of the ones that was still up, and found in the Windows event viewer that it was beginning to overheat.
I contacted my front-line team and asked them to call the site to find out if the data center air conditioner had gone out, or if there was something blocking the servers' fans or something. He called, the client at the site checked and said the data center was fine, so I dispatched IBM (our remote hands) to go to the site and check out the servers. They got there and called in laughing.
There was construction in the data center, and the contractors, being thoughtful, had draped a painter's dropcloth over the server racks to keep off saw dust. Of COURSE this caused the servers to overheat. Somehow the client had failed to mention this.
...so after all this went down, the client had the gall to ask us to replace the servers "just in case" there was any damage, despite the fact that each of them had shut itself down in order to prevent thermal damage. We went ahead and replaced them anyway. (I'm sure they were rebuilt and sent to other clients, but installing these servers on site takes about 2-3 hours of IBM's time on site and 60-90 minutes of my remote team's time, not counting the rebuild before recycling.)
Oh well. My employer paid me for my time, so no skin off my back.
[Jan 29, 2019] "Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home.
Jan 29, 2019 | thwack.solarwinds.com
jm_sysadmin Expert Jul 8, 2015 7:04 AM
I was just starting my IT career, and I was told a VIP user couldn't VPN in, and I was asked to help. Everything checked out with the computer, so I asked the user to try it in front of me. He took out his RSA token, knew what to do with it, and it worked.
I also knew this user had been complaining of this issue for some time, and I wasn't the first person to try to fix this. Something wasn't right.
I asked him to walk me through every step he took from when it failed the night before.
"Sure, I get out my laptop, plug in the network cable, get on the internet from home. I start the VPN client, take out this paper with the code on it, and type it in..." Yup. He wrote down the RSA token's code before he went home. See that little thing was expensive, and he didn't want to lose it. I explained that the number changes all time, and that he needed to have it with him. VPN issue resolved.
[Jan 29, 2019] How electricians can help to improve server uptime
"... "Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried. ..."
Jan 29, 2019 | thwack.solarwinds.com
This happened back when we had an individual APC UPS for each server. Most of the servers were really just whitebox PCs in a rack mount case running a server OS.
The facilities department was doing some planned maintenance on the electrical panel in the server room over the weekend. They assured me that they were not going to touch any of the circuits for the server room, just for the rooms across the hallway. Well, they disconnected power to the entire panel. Then they called me to let me know what they did. I was able to remotely verify that everything was running on battery just fine. I let them know that they had about 20 minutes to restore power or I would need to start shutting down servers. They called me again and said,
"Oh my God, the server room is full of smoke!" Somehow they hooked up things wrong and fed 220v instead of 110v to all the circuits. Every single UPS was dead. Several of the server power supplies were fried.
And a few motherboards didn't make it either. It took me the rest of the weekend kludging things together to get the critical systems back online.
[Jan 28, 2019] Testing the backup system as the main source of power outages
Highly recommended!
Jan 28, 2019 | thwack.solarwinds.com
gcp Jul 8, 2015 10:33 PM
Many years ago I worked at an IBM Mainframe site. To make systems more robust they installed a UPS system for the mainframe with battery bank and a honkin' great diesel generator in the yard.
During the commissioning of the system, they decided to test the UPS cutover one afternoon - everything goes *dark* in seconds. Frantic running around to get power back on and MF restarted and databases recovered (afternoon, remember? during the work day...). Oh! The UPS batteries were not charged! Oops.
Over the next few weeks, they did two more 'tests' during the working day, with everything going *dark* in seconds for various reasons. Oops.
Then they decided - perhaps we should test this outside of office hours. (YAY!)
Still took a few more efforts to get everything working - the diesel generator wouldn't start automatically; fixed that, then forgot to fill up the diesel tank, so cutover was fine until the fuel ran out.
Many, many lessons learned from this episode.
[Jan 28, 2019] False alarm: bad smell in machine room caused by an electrical light, not a server
Jan 28, 2019 | www.reddit.com
radiomix Jack of All Trades 5 points 6 points 7 points 3 years ago (2 children)
I was in my main network facility, for a municipal fiber optic ring. Outside were two technicians replacing our backup air conditioning unit. I walk inside after talking with the two technicians, turn on the lights and begin walking around just visually checking things around the room. All of a sudden I started smelling that dreaded electric hot/burning smell. In this place I have my core switch, primary router, a handful of servers, some customer equipment and a couple of racks for my service provider. I start running around the place like a mad man sniffing all the equipment. I even called in the AC technicians to help me sniff.
After 15 minutes we could not narrow down where it was coming from. Finally I noticed that one of the fluorescent lights had not come on. I grabbed a ladder and opened it up.
The ballast had burned out on the light, and it just so happened to be the light right in front of the AC vent, blowing the smell all over the room.
The last time I had smelled that smell in that room a major piece of equipment went belly up and there was nothing I could do about it.
benjunmun 2 points 3 points 4 points 3 years ago (0 children)
The exact same thing has happened to me. Nothing quite as terrifying as the sudden smell of ozone as you're surrounded by critical computers and electrical gear.
[Jan 28, 2019] Loss of power problems: Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.
Jan 28, 2019 | www.reddit.com
eraser_6776 VP IT/Sec (a damn suit) 9 points 10 points 11 points 3 years ago (1 child)
May 22, 2004. There was a rather massive storm here that spurred one of the biggest tornadoes recorded in Nebraska ( www.tornadochaser.net/hallam.html ), and I was a sysadmin for a small company. It was a Saturday, aka beer day, and as all hell was breaking loose my friends' and roommates' pagers and phones were all going off. "Ha ha!" I said, looking at a silent cellphone, "sucks to be you!"
Next morning around 10 my phone rings, and I groggily answer it because it's the owner of the company. "You'd better come in here, none of the computers will turn on" he says. Slight panic, but I hadn't received any emails. So it must have been breakers, and I can get that fixed. No problem.
I get into the office and something strikes me. That eerie sound of silence. Not a single machine is on.. why not? Still shaking off too much beer from the night before, I go into the server room and find out why I didn't get paged. Machines are running, but every switch in the cabinet is dead. Some servers are dead. Panic sets in.
I start walking around the office trying to turn on machines and.. dead. All of them. Every last desktop won't power on. That's when panic REALLY set in.
In the aftermath I found out two things - one, when the building was built, it was built with a steel roof and steel trusses. Two, when my predecessor had the network cabling wired he hired an idiot who didn't know fire code and ran the network cabling, conveniently, along the trusses into the ceiling. Thus, when lightning hit the building it had a perfect ground path to every workstation in the company. Some servers that weren't in the primary cabinet had been wired to a wall jack (which, in turn, went up into the ceiling then back down into the cabinet because you know, wire management!). Thankfully they were all "legacy" servers.
The only thing that saved the main servers was that Cisco 2924 XL-EN's are some badass mofo's that would die before they let that voltage pass through to the servers in the cabinet. At least that's what I told myself.
All in all, it ended up being one of the longest work weeks ever as I first had to source a bunch of switches, fast to get things like mail and the core network back up. Next up was feeding my buddies a bunch of beer and pizza after we raided every box store in town for spools of Cat 5 and threw wire along the floor.
Finally I found out that CDW can and would get you a whole lot of desktops delivered to your door with your software pre-installed in less than 24 hours if you have an open checkbook. Thanks to a great insurance policy, we did. Shipping and "handling" for those were more than the cost of the machines (again, this was back in 2004 and they were business desktops so you can imagine).
Still, for weeks after I had non-stop user complaints that generally involved "..I think this is related to the lightning ". I drank a lot that summer.
[Jan 28, 2019] Format of wrong partition initiated during RHEL install
"... Look at the screen, check out what it is doing, realize that the installer had grabbed the backend and he had said yeah, format all (we are not sure exactly how he did it). ..."
Jan 28, 2019 | www.reddit.com
kitched 5 points 6 points 7 points 3 years ago (2 children)
~10 years ago. 100GB drives on a node attached to an 8TB SAN. Cabling is all hooked up as we are adding this new node to manage the existing data on the SAN. A guy was training up to help, so we let him install Red Hat and go through the GUI setup. We did not pay attention to him, and after a while wondered what was taking so long. Walk over to him and he is still staring at the install screen and says, "Hey guys, this format sure is taking a while".
Look at the screen, check out what it is doing, and realize that the installer had grabbed the backend and he had said yeah, format all (we are not sure exactly how he did it).
Middle of the day, better kick off the tape restore for 8TB of data.
[Jan 28, 2019] I still went to work that day: tired, grumpy, hyped on caffeine, teetering between consciousness and a comatose state
Big mistake. This is a perfect state in which to commit some big SNAFU.
Jan 28, 2019 | thwack.solarwinds.com
I was the on-call technician for the security team supporting a Fortune 500 logistics company, in fact it was my first time being on-call. My phone rings at about 2:00 AM and the help desk agent says that the Citrix portal is down for everyone. This is a big deal because it's a 24/7 shop with people remoting in all around the world. While not strictly a security appliance, my team was responsible for the Citrix Access Gateway that was run on a NetScaler. Also on the line are the systems engineers responsible for the Citrix presentation/application servers.
I log in, check the appliance, look at all of the monitors, everything is reporting up. After about 4 hours of troubleshooting and trying everything within my limited knowledge of this system we get my boss on the line to help.
It came down to this: the Citrix team didn't troubleshoot anything and it was the StoreFront and broker servers that were having the troubles; but since the CAG wouldn't let people see any applications they instantly pointed the finger at the security team and blamed us.
I still went to work that day, tired, grumpy and hyped on caffeine, teetering between consciousness and a comatose state, for two reasons: the Citrix team didn't know how to do their job, and I was too tired to ask the investigating questions like "when did it stop working? has anything changed? what have you looked at so far?".
Long story short, don't drink soda late at night, especially near your laptop! Soda spills are not easy to clean up.
Jan 28, 2019 | thwack.solarwinds.com
mickyred 1 point 2 points 3 points 4 years ago (1 child)
cpbills Sr. Linux Admin 1 point 2 points 3 points 4 years ago (0 children)
They exist. This is why 'good' employers provide coffee.
[Jan 28, 2019] Something about the meaning of the word space
Jul 13, 2015 | thwack.solarwinds.com
Jul 13, 2015 7:44 AM
Trying to walk a tech through some switch config.
me: type config space t
them: it doesn't work
me: <sigh> <spells out config> space the single letter t
them: it still doesn't work
--- try some other rudimentary things ---
me: uh, are you typing in the word 'space'?
them: you said to
[Jan 28, 2019] Happy Sysadmin Appreciation Day 2016
Jan 28, 2019 | opensource.com
dale.sykora on 29 Jul 2016 Permalink
I have a horror story from another IT person. One day they were tasked with adding a new server to a rack in their data center. They added the server... being careful not to bump a cable to the nearby production servers, SAN, and network switch. The physical install went well. But when they powered on the server, the ENTIRE RACK went dark. Customers were not happy. :( It turns out that the power circuit they attached the server to was already at max capacity, and thus they caused the breaker to trip. Lessons learned... use redundant power and monitor power consumption.
Another issue was being a newbie on a Cisco switch and making a few changes and thinking the innocent sounding "reload" command would work like Linux does when you restart a daemon. Watching 48 link activity LEDs go dark on your vmware cluster switch... Priceless
[Jan 28, 2019] The ghost of the failed restore
"... Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time. ..."
Nov 01, 2018 | opensource.com
In a well-known data center (whose name I do not want to remember), one cold October night we had a production outage in which thousands of web servers stopped responding due to downtime in the main database. The database administrator asked me, the rookie sysadmin, to recover the database's last full backup and restore it to bring the service back online.
But, at the end of the process, the database was still broken. I didn't worry, because there were other full backup files in stock. However, even after doing the process several times, the result didn't change.
With great fear, I asked the senior sysadmin what to do to fix this behavior.
"You remember when I showed you, a few days ago, how the full backup script was running? Something about how important it was to validate the backup?" responded the sysadmin.
"Of course! You told me that I had to stay a couple of extra hours to perform that task," I answered. "Exactly! But you preferred to leave early without finishing that task," he said. "Oh my! I thought it was optional!" I exclaimed.
"It was, it was "
Moral of the story: Even with the best solution that promises to make the most thorough backups, the ghost of the failed restoration can appear, darkening our job skills, if we don't make a habit of validating the backup every time.
[Jan 28, 2019] The danger of a single backup harddrive (USB or not)
"... In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy. ..."
Nov 08, 2002 | www.linuxjournal.com
Anonymous on Fri, 11/08/2002
Why don't you just buy an extra hard disk and have a copy of your important data there. With today's prices it doesn't cost anything.
Anonymous on Fri, 11/08/2002 - 03:00.
A lot of people seem to have this idea, and in many situations it should work fine.
However, there is the human factor. Sometimes simple things go wrong (as simple as copying a file), and it takes a while before anybody notices that the contents of the file are not what is expected. This means you have to have many "generations" of backup of the file in order to be able to restore it, and in order not to put all the "eggs in the same basket" each of the file backups should be on a separate physical device.
Also, backing up to another disk in the same computer will probably not save you when lighting strikes, as the backup disk is just as likely to be fried as the main disk.
In real life, the backup strategy and hardware/software choices to support it is (as most other things) a balancing act. The important thing is that you have a strategy, and that you test it regularly to make sure it works as intended (as the main point is in the article). Also, realizing that achieving 100% backup security is impossible might save a lot of time in setting up the strategy.
(I.e. you have to say that this strategy has certain specified limits, like not being able to restore a file to its intermediate state sometime during a workday, only to the state it had when it was last backed up, which should be a maximum of xxx hours ago and so on...)
Hallvard P
[Jan 28, 2019] Those power cables ;-)
Jan 28, 2019 | opensource.com
John Fano on 31 Jul 2016
I was reaching down to power up the new UPS as my guy was stepping out from behind the rack and the whole rack went dark. His foot caught the power cord of the working UPS and pulled it just enough to break the contacts and since the battery was failed it couldn't provide power and shut off. It took about 30 minutes to bring everything back up..
Things went much better with the second UPS replacement. :-)
[Jan 28, 2019] "Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"
Jan 28, 2019 | opensource.com
SemperOSS on 13 Sep 2016 Permalink This one seems to be a classic too:
Working for a large UK-based international IT company, I had a call from the newest guy in the internal IT department: "The main server, you know ..."
"Yes?"
"I was cleaning out somebody's homedir ..."
"Yes?"
"Well, the server stopped running properly ..."
"Yes?"
"... and I can't seem to get it to boot now ..."
"Oh-kayyyy. I'll just totter down to you and give it an eye."
I went down to the basement where the IT department was located and had a look at his terminal screen on his workstation. Going back through the terminal history, just before a hefty amount of error messages, I found his last command: 'rm -rf /home/johndoe /*'. And I probably do not have to say that he was root at the time (it was them there days before sudo, not that that would have helped in his situation).
"Right," I said. "Time to get the backup." I knew I had to leave when I saw his face start twitching and he whispered: "Backup ...?"
==========
Bonus entries from same company:
It was the days of the 5.25" floppy disks (Wikipedia is your friend, if you belong to the younger generation). I sometimes had to ask people to send a copy of a floppy to check why things weren't working properly. Once I got a nice photocopy and another time, the disk came with a polite note attached ... stapled through the disk, to be more precise!
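The fatal command in the story above, 'rm -rf /home/johndoe /*', died on a single stray space: the shell split it into two arguments, the second expanding to every top-level directory. A tiny dry-run helper (hypothetical, not from the story) makes such expansions visible before anything is deleted:

```shell
#!/usr/bin/env bash
# Print exactly the arguments rm -rf would receive, one per line,
# after the shell has done its word splitting and glob expansion.

dry_rm() {
    printf 'rm -rf would be given %d argument(s):\n' "$#"
    printf '  %s\n' "$@"
}
```

Running `dry_rm /home/johndoe /*` on a typical system prints /home/johndoe plus /bin, /etc, /home and the rest, which is the moment to notice the typo; `dry_rm /home/johndoe/*` (no stray space) lists only that user's files.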
[Jan 28, 2019] regex - Safe rm -rf function in shell script
Jan 28, 2019 | stackoverflow.com
community wiki, 5 revs, May 23, 2017 at 12:26
This question is similar to What is the safest way to empty a directory in *nix?
I'm writing a bash script which defines several path constants and will use them for file and directory manipulation (copying, renaming and deleting). Often it will be necessary to do something like:

rm -rf "/${PATH1}"
rm -rf "${PATH2}/"*

While developing this script I want to protect myself from mistyping names like PATH1 and PATH2 and avoid situations where they are expanded to the empty string, thus resulting in wiping the whole disk. I decided to create a special wrapper:
rmrf() {
    if [[ $1 =~ "regex" ]]; then
        echo "Ignoring possibly unsafe path ${1}"
        exit 1
    fi
    shopt -s dotglob
    rm -rf -- $1
    shopt -u dotglob
}

Which will be called as:

rmrf "/${PATH1}"
rmrf "${PATH2}/"*

Regex (or sed expression) should catch paths like "*", "/*", "/**/", "///*" etc. but allow paths like "dir", "/dir", "/dir1/dir2/", "/dir1/dir2/*". Also I don't know how to enable shell globbing in a case like "/dir with space/*". Any ideas?

EDIT: this is what I came up with so far:

rmrf() {
    local RES
    local RMPATH="${1}"
    SAFE=$(echo "${RMPATH}" | sed -r 's:^((\.?\*+/+)+.*|(/+\.?\*+)+.*|[\.\*/]+|.*/\.\*+)$::g')
    if [ -z "${SAFE}" ]; then
        echo "ERROR! Unsafe deletion of ${RMPATH}"
        return 1
    fi
    shopt -s dotglob
    if [ '*' == "${RMPATH: -1}" ]; then
        echo rm -rf -- "${RMPATH/%\*/}"*
        RES=$?
    else
        echo rm -rf -- "${RMPATH}"
        RES=$?
    fi
    shopt -u dotglob
    return $RES
}

Intended use is (note an asterisk inside quotes):

rmrf "${SOMEPATH}"
rmrf "${SOMEPATH}/*"

where $SOMEPATH is not a system or /home directory (in my case all such operations are performed on a filesystem mounted under the /scratch directory).
CAVEATS:
• not tested very well
• not intended to use with paths possibly containing '..' or '.'
• should not be used with user-supplied paths
• rm -rf with asterisk probably can fail if there are too many files or directories inside $SOMEPATH (because of limited command line length) - this can be fixed with a 'for' loop or the 'find' command

SpliFF, Jun 14, 2009 at 13:45

I've found a big danger with rm in bash is that bash usually doesn't stop for errors. That means that:

cd $SOMEPATH
rm -rf *

Is a very dangerous combination if the change directory fails. A safer way would be:

cd $SOMEPATH && rm -rf *

Which will ensure the rm won't run unless you are really in $SOMEPATH. This doesn't protect you from a bad $SOMEPATH but it can be combined with the advice given by others to help make your script safer.

EDIT: @placeybordeaux makes a good point that if $SOMEPATH is undefined or empty, cd doesn't treat it as an error and returns 0. In light of that, this answer should be considered unsafe unless $SOMEPATH is validated as existing and non-empty first. I believe cd with no args should be an illegal command, since at best it performs a no-op and at worst it can lead to unexpected behaviour, but it is what it is.

Sazzad Hissain Khan, Jul 6, 2017 at 11:45
nice trick, I am one stupid victim.

placeybordeaux, Jun 21, 2018 at 22:59
If $SOMEPATH is empty won't this rm -rf the user's home directory?
SpliFF, Jun 27, 2018 at 4:10
@placeybordeaux The && only runs the second command if the first succeeds - so if cd fails rm never runs

placeybordeaux, Jul 3, 2018 at 18:46
@SpliFF at least in ZSH the return value of cd $NONEXISTANTVAR is 0
@SpliFF at least in ZSH the return value of cd $NONEXISTANTVAR is 0placeybordeaux Jul 3 '18 at 18:46 ruakh ,Jul 13, 2018 at 6:46 Instead of cd$SOMEPATH , you should write cd "${SOMEPATH?}" . The ${varname?} notation ensures that the expansion fails with a warning-message if the variable is unset or empty (such that the && ... part is never run); the double-quotes ensure that special characters in $SOMEPATH , such as whitespace, don't have undesired effects. – ruakh Jul 13 '18 at 6:46 community wiki 2 revs ,Jul 24, 2009 at 22:36 There is a set -u bash directive that will cause exit, when uninitialized variable is used. I read about it here , with rm -rf as an example. I think that's what you're looking for. And here is set's manual . ,Jun 14, 2009 at 12:38 I think "rm" command has a parameter to avoid the deleting of "/". Check it out. Max ,Jun 14, 2009 at 12:56 Thanks! I didn't know about such option. Actually it is named --preserve-root and is not mentioned in the manpage. – Max Jun 14 '09 at 12:56 Max ,Jun 14, 2009 at 13:18 On my system this option is on by default, but it cat't help in case like rm -ri /* – Max Jun 14 '09 at 13:18 ynimous ,Jun 14, 2009 at 12:42 I would recomend to use realpath(1) and not the command argument directly, so that you can avoid things like /A/B/../ or symbolic links. Max ,Jun 14, 2009 at 13:30 Useful but non-standard command. I've found possible bash replacement: archlinux.org/pipermail/pacman-dev/2009-February/008130.htmlMax Jun 14 '09 at 13:30 Jonathan Leffler ,Jun 14, 2009 at 12:47 Generally, when I'm developing a command with operations such as ' rm -fr ' in it, I will neutralize the remove during development. One way of doing that is: RMRF="echo rm -rf" ...$RMRF "/${PATH1}" This shows me what should be deleted - but does not delete it. I will do a manual clean up while things are under development - it is a small price to pay for not running the risk of screwing up everything. 
The notation ' "/${PATH1}" ' is a little unusual; normally, you would ensure that PATH1 simply contains an absolute pathname.
Using the metacharacter with '"${PATH2}/"*' is unwise and unnecessary. The only difference between using that and using just '"${PATH2}"' is that if the directory specified by PATH2 contains any files or directories with names starting with dot, then those files or directories will not be removed. Such a design is unlikely and is rather fragile. It would be much simpler just to pass PATH2 and let the recursive remove do its job. Adding the trailing slash is not necessarily a bad idea; the system would have to ensure that $PATH2 contains a directory name, not just a file name, but the extra protection is rather minimal.

Using globbing with 'rm -fr' is usually a bad idea. You want to be precise and restrictive and limiting in what it does - to prevent accidents. Of course, you'd never run the command (the shell script you are developing) as root while it is under development - that would be suicidal. Or, if root privileges are absolutely necessary, you neutralize the remove operation until you are confident it is bullet-proof.

Max, Jun 14, 2009 at 13:09
To delete subdirectories and files starting with dot I use "shopt -s dotglob". Using rm -rf "${PATH2}" is not appropriate because in my case PATH2 can only be removed by the superuser, and this results in error status for the "rm" command (and I verify it to track other errors).
Jonathan Leffler ,Jun 14, 2009 at 13:37
Then, with due respect, you should use a private sub-directory under $PATH2 that you can remove. Avoid glob expansion with commands like 'rm -rf' like you would avoid the plague (or should that be A/H1N1?). – Jonathan Leffler Jun 14 '09 at 13:37

Max, Jun 14, 2009 at 14:10
Meanwhile I've found this perl project: http://code.google.com/p/safe-rm/

community wiki too much php, Jun 15, 2009 at 1:55
If it is possible, you should try to put everything into a folder with a hard-coded name which is unlikely to be found anywhere else on the filesystem, such as 'foofolder'. Then you can write your rmrf() function as:

rmrf() {
    rm -rf "foofolder/$PATH1"
    # or
    rm -rf "$PATH1/foofolder"
}

There is no way that function can delete anything but the files you want it to.

vadipp, Jan 13, 2017 at 11:37
Actually there is a way: if PATH1 is something like ../../someotherdir – vadipp Jan 13 '17 at 11:37

community wiki btop, Jun 15, 2009 at 6:34
You may use set -f (cf. help set) to disable filename generation (*).

community wiki Howard Hong, Oct 28, 2009 at 19:56
You don't need to use regular expressions. Just assign the directories you want to protect to a variable and then iterate over the variable, e.g.:

protected_dirs="/ /bin /usr/bin /home $HOME"
for d in $protected_dirs; do
    if [ "$1" = "$d" ]; then
        rm=0
        break
    fi
done
if [ "${rm:-1}" -eq 1 ]; then
    rm -rf "$1"
fi

Add the following code to your ~/.bashrc:

# safe delete
move_to_trash () { now="$(date +%Y%m%d_%H%M%S)"; mv "$@" ~/.local/share/Trash/files/"$@_$now"; }
alias del='move_to_trash'
# safe rm
alias rmi='rm -i'

Every time you need to rm something, first consider del; you can change the trash folder. If you do need to rm something, you can go to the trash folder and use rmi.

One small bug for del is that when you del a folder, for example my_folder, it should be del my_folder and not del my_folder/, since for a possible later restore I attach the time information at the end ("$@_$now"). For files, it works fine.

[Jan 28, 2019] That's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem

Jan 28, 2019 | www.reddit.com

VexingRaven 1 point 2 points 3 points 3 years ago (1 child)
Not really a horror story but definitely one of my first "Oh shit" moments. I was the FNG helpdesk/sysadmin at a company of 150 people. I start getting calls that something (I think it was Outlook) wasn't working in Citrix, apparently something broken on one of the Citrix servers. I'm 100% positive it will be fixed with a reboot (I've seen this before on individual PCs), so I diligently start working to get people off that Citrix server (one of three) so I can reboot it. I get it cleared out, hit Reboot... And almost immediately get a call from the call center manager saying every single person just got kicked off Citrix. Oh shit. But there was nobody on that server!

Apparently that server also housed the Secure Gateway server, which my senior hadn't bothered to tell me or simply didn't know (set up by a consulting firm). Whoops. Thankfully the servers were pretty fast and people's sessions reconnected a few minutes later, no harm no foul. And on the plus side, it did indeed fix the problem.
And that's how I learned to always check with somebody else before rebooting a production server, no matter how minor it may seem. [Jan 14, 2019] Safe rm stops you accidentally wiping the system! @ New Zealand Linux Jan 14, 2019 | www.nzlinux.com 1. Francois Marier October 21, 2009 at 10:34 am Another related tool, to prevent accidental reboots of servers this time, is molly-guard: http://packages.debian.org/sid/molly-guard It asks you to type the hostname of the machine you want to reboot as an extra confirmation step. [Jan 10, 2019] When idiots are offloaded to security department, interesting things with network eventually happen Highly recommended! Security department often does more damage to the network then any sophisticated hacker can. Especially if they are populated with morons, as they usually are. One of the most blatant examples is below... Those idiots decided to disable Traceroute (which means ICMP) in order to increase security. Notable quotes: "... Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems. ..." "... Also really stupid. A competent attacker (and only those manage it into your network, right?) is not even slowed down by things like this. ..." "... Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast reactionary process. Who do you really think you're hurting? Yes another example of how ignorant opinions can become common sense. ..." "... Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. ..." "... You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. 
Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes. ..." "... You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass. ..." "... In short, he's a moron. I have reason to suspect you might be, too. ..." "... No, TCP/IP is not working fine. It's broken and is costing you performance and $$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours. ..." "... It's another example of security by stupidity which seldom provides security, but always buys added cost. ..." "... A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net] ..." "... Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead for linux, a new stack with it's own bugs and peculiarities was cobbled up. ..." "... Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually. ..." May 27, 2018 | linux.slashdot.org jfdavis668 ( 1414919 ) , Sunday May 27, 2018 @11:09AM ( #56682996 ) Re:So ( Score: 5 , Interesting) Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems. Anonymous Coward writes: Re: ( Score: 2 , Insightful) What is the point? If an intruder is already there couldn't they just upload their own binary? Hylandr ( 813770 ) , Sunday May 27, 2018 @05:57PM ( #56685274 ) Re: So ( Score: 5 , Interesting) They can easily. 
And often they will compile their own tools, versions of Apache, etc. At best it slows down incident response and resolution while doing nothing to prevent discovery of their networks. If you only use VLANs to segregate your architecture, you're boned.
ruir ( 2709173 ) writes: Re: ( Score: 3 ) Disable all ICMP is not feasible as you will be disabling MTU negotiation and destination unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. Or they can do reverse traceroute at least until the border edge of your firewall via an external site. DamnOregonian ( 963763 ) , Sunday May 27, 2018 @04:32PM ( #56684858 ) Re:So ( Score: 4 , Insightful) You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices thinking it is somehow secure (and they are very fucking wrong) but nobody blocks all ICMP, except for very very dim witted humans, and only on endpoint nodes. DamnOregonian ( 963763 ) writes: Re: ( Score: 3 ) That's hilarious... I am *the guy* who runs the network. I am our senior network engineer. Every line in every router -- mine. You have no idea what you're talking about, at any level. "disabled ICMP" - state statement alone requires such ignorance to make that I'm not sure why I'm even replying to ignorant ass. DamnOregonian ( 963763 ) writes: Re: ( Score: 3 ) Nonsense. I conceded that morons may actually go through the work to totally break their PMTUD, IP error signaling channels, and make their nodes "invisible" I understand "networking" at a level I'm pretty sure you only have a foggy understanding of. I write applications that require layer-2 packet building all the way up to layer-4. In short, he's a moron. I have reason to suspect you might be, too. DamnOregonian ( 963763 ) writes: Re: ( Score: 3 ) A CDS is MAC. Turning off ICMP toward people who aren't allowed to access your node/network is understandable. They can't get anything else though, why bother supporting the IP control channel? 
CDS does *not* say turn off ICMP globally. I deal with CDS, SSAE16 SOC 2, and PCI compliance daily. If your CDS solution only operates with a layer-4 ACL, it's a pretty simple model, or You're Doing It Wrong (TM) nyet ( 19118 ) writes: Re: ( Score: 3 ) > I'm not a network person IOW, nothing you say about networking should be taken seriously. kevmeister ( 979231 ) , Sunday May 27, 2018 @05:47PM ( #56685234 ) Homepage Re:So ( Score: 4 , Insightful) No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident because TCP/IP is very good about dealing with broken networks, like yours.
The problem is that doing this requires things like packet fragmentation, which greatly increases router CPU load and reduces the maximum PPS of your network, as well as resulting in dropped packets requiring re-transmission. It may also result in window collapse followed by slow-start; though rapid recovery mitigates much of this, it's still not free.
It's another example of security by stupidity which seldom provides security, but always buys added cost.
Hylandr ( 813770 ) writes:
Re: ( Score: 3 )
As a server engineer I am experiencing this with our network team right now.
Do you have some reading that I might be able to further educate myself? I would like to be able to prove to the directors why disabling ICMP on the network may be the cause of our issues.
Zaelath ( 2588189 ) , Sunday May 27, 2018 @07:51PM ( #56685758 )
Re:So ( Score: 4 , Informative)
A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net]
Bing Tsher E ( 943915 ) , Sunday May 27, 2018 @01:22PM ( #56683792 ) Journal
Re: Denying ICMP echo @ server/workstation level t ( Score: 5 , Insightful)
Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead, for Linux, a new stack with its own bugs and peculiarities was cobbled up.
Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually.
[Jan 10, 2019] saferm Safely remove files, moving them to GNOME/KDE trash instead of deleting by Eemil Lagerspetz
Jan 10, 2019 | github.com
#!/bin/bash
##
## saferm.sh
## Safely remove files, moving them to GNOME/KDE trash instead of deleting.
##
## Started on Mon Aug 11 22:00:58 2008 Eemil Lagerspetz
## Last update Sat Aug 16 23:49:18 2008 Eemil Lagerspetz
##
version="1.16";
## flags (change these to change default behaviour)
recursive="" # do not recurse into directories by default
verbose="true" # set verbose by default for inexperienced users.
force="" #disallow deleting special files by default
unsafe="" # do not behave like regular rm by default
## possible flags (recursive, verbose, force, unsafe)
# don't touch this unless you want to create/destroy flags
flaglist="r v f u q"
# Colours
blue='\e[1;34m'
red='\e[1;31m'
norm='\e[0m'
## trashbin definitions
# this is the same for newer KDE and GNOME:
trash_desktops="$HOME/.local/share/Trash/files"
# if neither is running:
trash_fallback="$HOME/Trash"
# use .local/share/Trash?
use_desktop=$( ps -U$USER | grep -E "gnome-settings|startkde|mate-session|mate-settings|mate-panel|gnome-shell|lxsession|unity" )
# mounted filesystems, for avoiding cross-device move on safe delete
filesystems=$( mount | awk '{print$3; }' )
if [ -n "$use_desktop" ]; then
    trash="${trash_desktops}"
    infodir="${trash}/../info"
    for k in "${trash}" "${infodir}"; do
        if [ ! -d "${k}" ]; then mkdir -p "${k}"; fi
    done
else
    trash="${trash_fallback}"
fi
usagemessage() {
    echo -e "This is ${blue}saferm.sh$norm $version. LXDE and Gnome3 detection. Will ask to unsafe-delete instead of cross-fs move. Allows unsafe (regular rm) delete (ignores trashinfo). Creates trash and trashinfo directories if they do not exist. Handles symbolic link deletion. Does not complain about different user any more.\n"
    echo -e "Usage: ${blue}/path/to/saferm.sh$norm [${blue}OPTIONS$norm] [$blue--$norm] ${blue}files and dirs to safely remove$norm"
    echo -e "${blue}OPTIONS$norm:"
    echo -e "$blue-r$norm      allows recursively removing directories."
    echo -e "$blue-f$norm      Allow deleting special files (devices, ...)."
    echo -e "$blue-u$norm      Unsafe mode, bypass trash and delete files permanently."
    echo -e "$blue-v$norm      Verbose, prints more messages. Default in this version."
    echo -e "$blue-q$norm      Quiet mode. Opposite of verbose."
    echo ""
}

detect() {
    if [ ! -e "$1" ]; then fs=""; return; fi
    path=$(readlink -f "$1")
    for det in $filesystems; do
        match=$( echo "$path" | grep -oE "^$det" )
        if [ -n "$match" ]; then
            if [ ${#det} -gt ${#fs} ]; then
                fs="$det"
            fi
        fi
    done
}
trashinfo() {
#gnome: generate trashinfo:
bname=$( basename -- "$1" )
fname="${trash}/../info/${bname}.trashinfo"
    cat > "${fname}" <<EOF
[Trash Info]
Path=$PWD/${1}
DeletionDate=$( date +%Y-%m-%dT%H:%M:%S )
EOF
}
setflags() {
    for k in $flaglist; do
        reduced=$( echo "$1" | sed "s/$k//" )
        if [ "$reduced" != "$1" ]; then
            flags_set="$flags_set $k"
        fi
    done
    for k in $flags_set; do
        if [ "$k" == "v" ]; then
            verbose="true"
        elif [ "$k" == "r" ]; then
            recursive="true"
        elif [ "$k" == "f" ]; then
            force="true"
        elif [ "$k" == "u" ]; then
            unsafe="true"
        elif [ "$k" == "q" ]; then
            unset verbose
        fi
    done
}
performdelete() {
# "delete" = move to trash
    if [ -n "$unsafe" ]; then
        if [ -n "$verbose" ]; then echo -e "Deleting $red$1$norm"; fi
        # UNSAFE: permanently remove files.
        rm -rf -- "$1"
    else
        if [ -n "$verbose" ]; then echo -e "Moving $blue$k$norm to $red${trash}$norm"; fi
        mv -b -- "$1" "${trash}" # moves and backs up old files
    fi
}

askfs() {
    detect "$1"
    if [ "${fs}" != "${tfs}" ]; then
        unset answer
        until [ "$answer" == "y" -o "$answer" == "n" ]; do
            echo -e "$blue$1$norm is on $blue${fs}$norm. Unsafe delete (y/n)?"
            read -n 1 answer
        done
        if [ "$answer" == "y" ]; then
            unsafe="yes"
        fi
    fi
}

complain() {
    msg=""
    if [ ! -e "$1" -a ! -L "$1" ]; then # does not exist
        msg="File does not exist:"
    elif [ ! -w "$1" -a ! -L "$1" ]; then # not writable
        msg="File is not writable:"
    elif [ ! -f "$1" -a ! -d "$1" -a -z "$force" ]; then # Special or sth else.
        msg="Is not a regular file or directory (and -f not specified):"
    elif [ -f "$1" ]; then # is a file
        act="true" # operate on files by default
    elif [ -d "$1" -a -n "$recursive" ]; then # is a directory and recursive is enabled
        act="true"
    elif [ -d "$1" -a -z "${recursive}" ]; then
        msg="Is a directory (and -r not specified):"
    else
        # not file or dir. This branch should not be reached.
        msg="No such file or directory:"
    fi
}

asknobackup() {
    unset answer
    until [ "$answer" == "y" -o "$answer" == "n" ]; do
        echo -e "$blue$k$norm could not be moved to trash. Unsafe delete (y/n)?"
        read -n 1 answer
    done
    if [ "$answer" == "y" ]; then
        unsafe="yes"
        performdelete "${k}"
        ret=$?
        # Reset temporary unsafe flag
        unset unsafe
        unset answer
    else
        unset answer
    fi
}

deletefiles() {
    for k in "$@"; do
        fdesc="$blue$k$norm"
        complain "${k}"
        if [ -n "$msg" ]; then
            echo -e "$msg $fdesc."
        else
            # actual action:
            if [ -z "$unsafe" ]; then
                askfs "${k}"
            fi
            performdelete "${k}"
            ret=$?
            # Reset temporary unsafe flag
            if [ "$answer" == "y" ]; then unset unsafe; unset answer; fi
            #echo "MV exit status: $ret"
            if [ ! "$ret" -eq 0 ]; then
                asknobackup "${k}"
            fi
            if [ -n "$use_desktop" ]; then
                # generate trashinfo for desktop environments
                trashinfo "${k}"
            fi
        fi
    done
}

# Make trash if it doesn't exist
if [ ! -d "${trash}" ]; then
    mkdir "${trash}"
fi

# find out which flags were given
afteropts="" # boolean for end-of-options reached
for k in "$@"; do
    # if starts with dash and before end of options marker (--)
    if [ "${k:0:1}" == "-" -a -z "$afteropts" ]; then
        if [ "${k:1:2}" == "-" ]; then # if end of options marker
            afteropts="true"
        else # option(s)
            setflags "$k" # set flags
        fi
    else # not starting with dash, or after end-of-opts
        files[++i]="$k"
    fi
done

if [ -z "${files[1]}" ]; then # no parameters?
    usagemessage # tell them how to use this
    exit 0
fi

# Which fs is trash on?
detect "${trash}"
tfs="$fs"

# do the work
deletefiles "${files[@]}"

[Oct 22, 2018] linux - If I rm -rf a symlink will the data the link points to get erased, too?

Notable quotes:
"... Put it in other words, those symlink files will be deleted. The files they "point"/"link" to will not be touched. ..."

Oct 22, 2018 | unix.stackexchange.com

user4951, Jan 25, 2013 at 2:40
This is the contents of the /home3 directory on my system:
./ backup/ hearsttr@ lost+found/ randomvi@ sexsmovi@ ../ freemark@ investgr@ nudenude@ romanced@ wallpape@
I want to clean this up but I am worried because of the symlinks, which point to another drive. If I say rm -rf /home3 will it delete the other drive?

John Sui
rm -rf /home3 will delete all files and directories within home3 and home3 itself, which includes the symlink files, but it will not "follow" (dereference) those symlinks. Put it in other words, those symlink files will be deleted. The files they "point"/"link" to will not be touched.

[Oct 22, 2018] Does rm -rf follow symbolic links?

Jan 25, 2012 | superuser.com

I have a directory like this:

$ ls -l
total 899166
drwxr-xr-x 12 me scicomp 324 Jan 24 13:47 data
-rw-r--r-- 1 me scicomp 84188 Jan 24 13:47 lod-thin-1.000000-0.010000-0.030000.rda
drwxr-xr-x 2 me scicomp 808 Jan 24 13:47 log
lrwxrwxrwx 1 me scicomp 17 Jan 25 09:41 msg -> /home/me/msg
And I want to remove it using rm -r .
However I'm scared rm -r will follow the symlink and delete everything in that directory (which is very bad).
I can't find anything about this in the man pages. What would be the exact behavior of running rm -rf from a directory above this one?
LordDoskias Jan 25 '12 at 16:43, Jan 25, 2012 at 16:43
How hard it is to create a dummy dir with a symlink pointing to a dummy file and execute the scenario? Then you will know for sure how it works! –
hakre ,Feb 4, 2015 at 13:09
X-Ref: If I rm -rf a symlink will the data the link points to get erased, too? ; Deleting a folder that contains symlinkshakre Feb 4 '15 at 13:09
Susam Pal ,Jan 25, 2012 at 16:47
Example 1: Deleting a directory containing a soft link to another directory.
susam@nifty:~/so$ mkdir foo bar
susam@nifty:~/so$ touch bar/a.txt
susam@nifty:~/so$ ln -s /home/susam/so/bar/ foo/baz
susam@nifty:~/so$ tree
.
├── bar
│ └── a.txt
└── foo
└── baz -> /home/susam/so/bar/
3 directories, 1 file
susam@nifty:~/so$ rm -r foo
susam@nifty:~/so$ tree
.
└── bar
└── a.txt
1 directory, 1 file
susam@nifty:~/so$

So, we see that the target of the soft-link survives.

Example 2: Deleting a soft link to a directory

susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│   └── a.txt
└── baz -> /home/susam/so/bar

2 directories, 1 file

susam@nifty:~/so$ rm -r baz
susam@nifty:~/so$ tree
.
└── bar
    └── a.txt

1 directory, 1 file

susam@nifty:~/so$
Only, the soft link is deleted. The target of the soft-link survives.
Example 3: Attempting to delete the target of a soft-link
susam@nifty:~/so$ ln -s /home/susam/so/bar baz
susam@nifty:~/so$ tree
.
├── bar
│ └── a.txt
└── baz -> /home/susam/so/bar
2 directories, 1 file
susam@nifty:~/so$ rm -r baz/
rm: cannot remove 'baz/': Not a directory
susam@nifty:~/so$ tree
.
├── bar
└── baz -> /home/susam/so/bar
2 directories, 0 files
The file in the target of the symbolic link does not survive.
The above experiments were done on a Debian GNU/Linux 9.0 (stretch) system.
Wyrmwood ,Oct 30, 2014 at 20:36
rm -rf baz/* will remove the contents – Wyrmwood Oct 30 '14 at 20:36
Buttle Butkus ,Jan 12, 2016 at 0:35
Yes, if you do rm -rf [symlink], then the contents of the original directory will be obliterated! Be very careful. – Buttle Butkus Jan 12 '16 at 0:35
frnknstn ,Sep 11, 2017 at 10:22
Your example 3 is incorrect! On each system I have tried, the file a.txt will be removed in that scenario. – frnknstn Sep 11 '17 at 10:22
Susam Pal ,Sep 11, 2017 at 15:20
@frnknstn You are right. I see the same behaviour you mention on my latest Debian system. I don't remember on which version of Debian I performed the earlier experiments. In my earlier experiments on an older version of Debian, either a.txt must have survived in the third example or I must have made an error in my experiment. I have updated the answer with the current behaviour I observe on Debian 9 and this behaviour is consistent with what you mention. – Susam Pal Sep 11 '17 at 15:20
Ken Simon ,Jan 25, 2012 at 16:43
Your /home/me/msg directory will be safe if you rm -rf the directory from which you ran ls. Only the symlink itself will be removed, not the directory it points to.
The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself.
> ,Jan 25, 2012 at 16:54
"The only thing I would be cautious of, would be if you called something like "rm -rf msg/" (with the trailing slash.) Do not do that because it will remove the directory that msg points to, rather than the msg symlink itself." - I don't find this to be true. See the third example in my response below. – Susam Pal Jan 25 '12 at 16:54
Andrew Crabb ,Nov 26, 2013 at 21:52
I get the same result as @Susam ('rm -r symlink/' does not delete the target of symlink), which I am pleased about as it would be a very easy mistake to make. – Andrew Crabb Nov 26 '13 at 21:52
,
rm should remove files and directories. If the file is a symbolic link, the link is removed, not the target; rm does not interpret the symbolic link. Consider, for example, the behavior when deleting 'broken' links: rm exits with 0, not non-zero, because removing the link itself succeeds.
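The behavior described in this thread can be checked quickly in a scratch directory; the paths below are temporary and purely illustrative:

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir "$tmp/target"
touch "$tmp/target/a.txt"
ln -s "$tmp/target" "$tmp/link"

rm -r "$tmp/link"             # no trailing slash: removes only the symlink
if [ -e "$tmp/target/a.txt" ]; then survived=yes; else survived=no; fi

ln -s "$tmp/nonexistent" "$tmp/broken"
if rm "$tmp/broken"; then broken_rm=ok; fi   # rm exits 0 on a broken link

echo "target survived: $survived; broken-link rm: $broken_rm"
rm -r "$tmp"
```

Both observations match the thread: the link target survives, and removing a dangling link is still a success for rm.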
[Oct 05, 2018] Unix Admin. Horror Story Summary, version 1.0 by Anatoly Ivasyuk
Oct 05, 2018 | cam.ac.uk
From: mfraioli@grebyn.com (Marc Fraioli)
Organization: Grebyn Timesharing
Well, here's a good one for you:
I was happily churning along developing something on a Sun workstation, and was getting a number of annoying permission denieds from trying to write into a directory hierarchy that I didn't own. Getting tired of that, I decided to set the permissions on that subtree to 777 while I was working, so I wouldn't have to worry about it.
Someone had recently told me that rather than using plain "su", it was good to use "su -", but the implications had not yet sunk in. (You can probably see where this is going already, but I'll go to the bitter end.)
Anyway, I cd'd to where I wanted to be, the top of my subtree, and did su -. Then I did chmod -R 777. I then started to wonder why it was taking so damn long when there were only about 45 files in 20 directories under where I (thought) I was. Well, needless to say, su - simulates a real login, and had put me into root's home directory, /, so I was proceeding to set file permissions for the whole system to wide open.
I aborted it before it finished, realizing that something was wrong, but this took quite a while to straighten out.
Marc Fraioli
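The trap in this story is that su - simulates a full login and silently changes the working directory. One defensive pattern is to chain cd and the recursive chmod so the chmod can never run from the wrong place; this sketch re-creates it on a throwaway tree (paths are illustrative):

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/subtree/dir"
touch "$tmp/subtree/dir/file"

# cd && chmod on one line: if the cd fails, the chmod never runs,
# so the recursion cannot start from wherever 'su -' dropped you.
cd "$tmp/subtree" && chmod -R 777 .

perms=$(ls -l "$tmp/subtree/dir/file" | cut -c1-10)
echo "$perms"
cd / && rm -r "$tmp"
```

An explicit pwd before any recursive command serves the same purpose interactively.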
[Oct 05, 2018] One wrong find command can create one weak frantic recovery efforts
Ahh, the hazards of working with sysadmins who are not ready to be sysadmins in the first place
Oct 05, 2018 | cam.ac.uk
From: jerry@incc.com (Jerry Rocteur)
Organization: InCC.com Perwez Belgium
Horror story,
I sent one of my support guys to do an Oracle update in Madrid.
As instructed he created a new user called esf and changed the files
in /u/appl to owner esf, however in doing so he *must* have cocked up
his find command, the command was:
find /u/appl -user appl -exec chown esf {} \;
He rang me up to tell me there was a problem, I logged in via x25 and
about 75% of files on system belonged to owner esf.
VERY little worked on system.
What a mess, it took me a while and I came up with a brain wave to
fix it but it really screwed up the system.
Moral: be *very* careful of find execs, get the syntax right!!!!
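One way to apply that moral is to preview what a find -exec will touch before letting it act. A sketch on a throwaway directory; chmod stands in here for the chown in the story, which would need root:

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b"

# Dry run first: -print shows exactly which files the -exec would hit.
find "$tmp" -type f -print

# Only after checking the list, run the real action, scoped to one tree:
find "$tmp" -type f -exec chmod 600 {} \;

perms=$(ls -l "$tmp/a" | cut -c1-10)
echo "$perms"
rm -r "$tmp"
```

Had the engineer in the story previewed with -print, the overly broad match would have been obvious before any ownership changed.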
[Oct 05, 2018] When some filenames are etched in your brain you can type them several times repeating the same blunder again and again by Anatoly Ivasyuk
"... I was working on a line printer spooler, which lived in /etc. I wanted to remove it, and so issued the command "rm /etc/lpspl." There was only one problem. Out of habit, I typed "passwd" after "/etc/" and removed the password file. Oops. ..."
Oct 05, 2018 | cam.ac.uk
From Unix Admin. Horror Story Summary, version 1.0 by Anatoly Ivasyuk
From: tzs@stein.u.washington.edu (Tim Smith)
Organization: University of Washington, Seattle
I was working on a line printer spooler, which lived in /etc. I wanted to remove it, and so issued the command "rm /etc/lpspl." There was only
one problem. Out of habit, I typed "passwd" after "/etc/" and removed the password file. Oops.
I called up the person who handled backups, and he restored the password file.
A couple of days later, I did it again! This time, after he restored it, he made a link, /etc/safe_from_tim.
About a week later, I overwrote /etc/passwd, rather than removing it. After he restored it again, he installed a daemon that kept a copy of /etc/passwd, on another file system, and automatically restored it if it appeared to have been damaged.
Fortunately, I finished my work on /etc/lpspl around this time, so we didn't have to see if I could find a way to wipe out a couple of filesystems...
--Tim Smith
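The restore daemon at the end of the story can be sketched in a few lines. This is entirely hypothetical: one check cycle on a scratch file rather than the real /etc/passwd, and with no daemon loop:

```shell
#!/bin/sh
tmp=$(mktemp -d)
printf 'root:x:0:0::/root:/bin/sh\n' > "$tmp/passwd"

# Keep a side copy, as the daemon in the story did.
cp "$tmp/passwd" "$tmp/passwd.bak"

rm "$tmp/passwd"              # simulate the damage

# One check cycle: restore when the watched file is missing or empty.
if [ ! -s "$tmp/passwd" ]; then
    cp "$tmp/passwd.bak" "$tmp/passwd"
    restored=yes
fi

[ -s "$tmp/passwd" ] && echo "watched file intact: restored=$restored"
rm -r "$tmp"
```

A real version would keep the copy on another filesystem, as the story notes, and run the check from cron or a loop with a sleep.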
[Oct 05, 2018] Due to a configuration change I wasn't privy to, the software I was responsible for rebooted all the 911 operators servers at once
Oct 05, 2018 | www.reddit.com
ardwin 5 years ago (9 children)
Due to a configuration change I wasn't privy to, the software I was responsible for rebooted all the 911 operators servers at once.
cobra10101010 5 years ago (1 child)
Oh God..that is scary in true sense..hope everything was okay
ardwin 5 years ago (0 children)
I quickly learned that the 911 operators, are trained to do their jobs without any kind of computer support. It made me feel better.
reebzor 5 years ago (1 child)
I did this too!
edit: except I was the one that deployed the software that rebooted the machines
vocatus 5 years ago (0 children)
Hey, maybe you should go apologize to ardwin. I bet he was pissed.
[Oct 05, 2018] sudo yum -y remove krb5 (this removes coreutils)
Oct 05, 2018 | www.reddit.com
DrGirlfriend Systems Architect 2 points 3 points 4 points 5 years ago (5 children)
• sudo yum -y remove krb5 (this removes coreutils)
• deleted a production LUN rather than the development LUN - destroyed several months worth of assets for a book that was scheduled to go to print in a few weeks (found a good backup that was "only" two weeks out of date)
• forgot to "wr mem" on new ATM routers at a remote site 70 miles away
2960G 2 points 3 points 4 points 5 years ago (1 child)
+1 for the "yum -y". Had the 'pleasure' of fixing a box one of my colleagues did "yum -y remove openssl". Through utter magic managed to recover it without reinstalling :-)
chriscowley DevOps 0 points 1 point 2 points 5 years ago (0 children)
How? I would probably have curled the RPMs off the repo, unpacked them with cpio, and put the files into place manually (been there).
vocatus NSA/DOD/USAR/USAP/AEXP [ S ] 0 points 1 point 2 points 5 years ago (1 child)
That last one gave me the shivers.
[Oct 05, 2018] Trying to preserve connection after networking change while working on the core switch remotely backfired, as sysadmin forgot to cancel scheduled reload comment after testing change
"... That was a fun day.... What's worse is I was following a change plan, I just missed the "reload cancel". Stupid, stupid, stupid, stupid. ..."
Oct 05, 2018 | www.reddit.com
Making some network changes in a core switch, use 'reload in 5' as I wasn't 100% certain the changes wouldn't kill my remote connection.
Changes go in, everything stays up, no apparent issues. Save changes, log out.
"All monitoring for customer is showing down except the edge firewalls".
... as soon as they said it I knew I forgot to cancel the reload.
0xD6 5 years ago
This one hit pretty close to home having spent the last month at a small Service Provider with some serious redundancy issues. We're working through them one by one, but there is one outage in particular that was caused by the same situation... Only the scope was pretty "large".
Performed change, was distracted by phone call. Had an SMS notifying me of problems with a legacy border that I had just performed my changes on. See my PuTTY terminal and my blood starts to run cold. "Reload requested by 0xd6".
...Fuck I'm thinking, but everything should be back soon, not much I can do now.
However, not only did our primary transit terminate on this legacy device, our old non-HSRP L3 gateways and BGP nail down routes for one of our /20s and a /24... So, because of my forgotten reload I withdrew the majority of our network from all peers and the internet at large.
That was a fun day.... What's worse is I was following a change plan, I just missed the "reload cancel". Stupid, stupid, stupid, stupid.
[Oct 05, 2018] I learned a valuable lesson about pressing buttons without first fully understanding what they do.
Oct 05, 2018 | www.reddit.com
This is actually one of my standard interview questions since I believe any sys admin that's worth a crap has made a mistake they'll never forget.
Here's mine, circa 2001. In response to a security audit, I had to track down which version of the Symantec Antivirus was running and which definition file was installed on every machine in the company. I had been working through this for a while and got a bit reckless.
There was a button in the console that read 'Virus Sweep'. Thinking it'd get the info from each machine and give me the details, I pressed it. I was wrong.
Very wrong. Instead it proceeded to initiate a virus scan on every machine, including all of the servers.
Less than 5 minutes later, many of our older servers and most importantly our file servers froze. In the process, I took down a trade floor for about 45 minutes while we got things back up. I learned a valuable lesson about pressing buttons without first fully understanding what they do.
[Oct 05, 2018] A newbie turned a production server off to replace a monitor
Oct 05, 2018 | www.reddit.com
just_call_in_sick 5 years ago (1 child)
A friend of the family was an IT guy and he gave me the usual high school unpaid intern job. My first day, he told me that a computer needed the monitor replaced. He gave me this 13" CRT and sent me on my way. I found the room (a wiring closet) with a tiny desk and a large desktop tower on it.
I TURNED OFF THE COMPUTER and went about replacing the monitor. I think it took about 5 minutes for people to start wondering why they could no longer use the file server or save the files they had been working on all day.
It turns out that you don't have to turn off computers to replace the monitor.
[Oct 05, 2018] Sometimes one extra space makes a big difference
Oct 05, 2018 | cam.ac.uk
From: rheiger@renext.open.ch (Richard H. E. Eiger)
Organization: Olivetti (Schweiz) AG, Branch Office Berne
In article <1992Oct9.100444.27928@u.washington.edu> tzs@stein.u.washington.edu
(Tim Smith) writes:
> I was working on a line printer spooler, which lived in /etc. I wanted
> to remove it, and so issued the command "rm /etc/lpspl." There was only
> one problem. Out of habit, I typed "passwd" after "/etc/" and removed
>
[deleted to save space]
>
> --Tim Smith
Here's another story. Just imagine having the sendmail.cf file in /etc. Now, I was working on the sendmail stuff and had come up with lots of sendmail.cf.xxx files which I wanted to get rid of, so I typed "rm -f sendmail.cf. *". At first I was surprised about how much time it took to remove some 10 files or so. By the time I finally saw what had happened and hit the interrupt key, it was way too late, though.
Fortune has it that I'm a very lazy person. That's why I never bothered to just back up directories with data that changes often. Therefore I managed to restore /etc successfully before rebooting... :-) Happy end, after all. Of course I had lost the only well working version of my sendmail.cf...
Richard
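The fatal expansion in Richard's story can be demonstrated safely with echo instead of rm. The file names below are invented to mirror the story; the point is that the stray space turns one glob into two words, and the bare * matches everything.

```shell
#!/bin/sh
# Demonstrates why the stray space in "rm -f sendmail.cf. *" was fatal:
# the shell sees two words, and the bare * expands to every file in the
# directory. File names are invented to mirror the story.
demo=$(mktemp -d)
cd "$demo"
touch sendmail.cf sendmail.cf.test1 sendmail.cf.test2 hosts passwd

meant=$(echo sendmail.cf.*)   # one pattern: only the .xxx variants
typed=$(echo sendmail.cf. *)  # two words: * matches everything here

echo "meant: $meant"
echo "typed: $typed"
cd / && rm -r "$demo"
```

Running `echo` (or `ls`) on the pattern first, as several commenters below suggest, would have shown the difference before anything was deleted.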
[Oct 05, 2018] Deletion of files purpose of which you do not understand sometimes backfire by Anatoly Ivasyuk
Oct 05, 2018 | cam.ac.uk
Unix Admin. Horror Story Summary, version 1.0 by Anatoly Ivasyuk
From: philip@haas.berkeley.edu (Philip Enteles)
Organization: Haas School of Business, Berkeley
As a new system administrator of a Unix machine with limited space I thought I was doing myself a favor by keeping things neat and clean. One
day as I was 'cleaning up' I removed a file called 'bzero'.
Strange things started to happen: vi didn't work, then the complaints started coming in. Mail didn't work. The compilers didn't work. About this time the REAL system administrator poked his head in and asked what I had done.
Further examination showed that bzero is the zeroed memory without which the OS had no operating space so anything using temporary memory was non-functional.
The repair? Well, things are tough to do when most of the utilities don't work. Eventually the REAL system administrator took the system to single user and rebuilt the system, including full restores from a tape system. The moral is: don't be too anal about things you don't understand.
Take the time to learn what those strange files are before removing them and screwing yourself.
Philip Enteles
[Oct 05, 2018] Danger of hidden symlinks
Oct 05, 2018 | cam.ac.uk
From: cjc@ulysses.att.com (Chris Calabrese)
Organization: AT&T Bell Labs, Murray Hill, NJ, USA
>On an old DECstation 3100
I was deleting last semesters users to try to dig up some disk space, I also deleted some test users at the same time.
One user took longer than usual, so I hit control-C and tried ls. "ls: command not found"
Turns out that the test user had / as the home directory and the remove user script in Ultrix just happily blew away the whole disk.
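The guard the Ultrix remove-user script was missing can be sketched in a few lines. `remove_user_home` is a hypothetical helper name; it echoes the rm it would run rather than executing it, and refuses anything that is empty, `/`, or outside the expected tree.

```shell
#!/bin/sh
# Hypothetical sanity check before recursively removing an account's
# home directory: echo the rm rather than run it, and refuse obviously
# dangerous values like "" or /.
remove_user_home() {
    home=$1
    case "$home" in
        ""|/)    echo "refusing to recurse into '$home'" >&2; return 1 ;;
        /home/*) echo rm -rf -- "$home" ;;
        *)       echo "unexpected home '$home', skipping" >&2; return 1 ;;
    esac
}
```

With a check like this, a test user whose home directory was `/` would have been skipped with a warning instead of taking the whole disk with it.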
[Oct 05, 2018] Hidden symlinks and recursive deletion of the directories
Oct 05, 2018 | www.reddit.com
mavantix Jack of All Trades, Master of Some; 5 years ago (4 children)
I was cleaning up old temp folders of junk on Windows 2003 server, and C:\temp was full of shit. Most of it junk. Rooted deep in the junk, some asshole admin had apparently symlink'd sysvol to a folder in there. Deleting wiped sysvol.
There were no usable backups; well, there were, but ArcServe was screwed by lack of maintenance.
Spent days rebuilding policies.
...and no I didn't tell this story to teach any of your little princesses to do the same when you leave your company.
[Oct 05, 2018] Automatically putting a slash in front of a directory with a system-like name (bin, etc, usr, var) - names which are all etched in sysadmin memory
This is why you should never type an rm command directly on the command line. Type it in an editor first.
Oct 05, 2018 | www.reddit.com
aultl Senior DevOps Engineer
rm -rf /var
I was trying to delete /var/named/var
nekoeth0 Linux Admin, 5 years ago
Haha, that happened to me too. I had to use a live distro, chroot, copy, what not. It was fun!
[Oct 05, 2018] I corrupted a 400TB data warehouse.
Oct 05, 2018 | www.reddit.com
I corrupted a 400TB data warehouse.
Took 6 days to restore from tape.
mcowger VCDX | DevOps Guy 8 points 9 points 10 points 5 years ago (0 children)
Meh - happened a long time ago.
Had a big Solaris box (E6900) running Oracle 10 for the DW. Was going to add some new LUNs to the box and also change some of the fiber pathing to go through a new set of faster switches. Had the MDS changes prebuilt, confirmed them with another admin, went through change control, etc.
Did fabric A, which went through fine, and then did fabric B without pausing or checking that the new paths came up on side A before I knocked over side B (in violation of my own approved plan). For the briefest of instants, there were no paths to the devices and Oracle was configured in full async write mode :(. Instant corruption of the tables that were active. Tried to do use archivelogs to bring it back, but no dice (and this is before Flashbacks, etc). So we were hosed.
Had to have my DBA babysit the RMAN restore for the entire weekend :(. 1GBe links to backup infrastructure.
RCA resulted in MANY MANY changes to the design of that system, and me just barely keeping my job.
invisibo DevOps 2 points 3 points 4 points 5 years ago (0 children)
You just made me say "holy shit!" out loud. You win.
FooHentai 2 points 3 points 4 points 5 years ago (0 children)
Ouch.
I dropped a 500Gb RAID set. There were 2 identical servers in the rack right next to each other. Both OpenFiler, both unlabeled. Didn't know about the other one and was told to 'wipe the OpenFiler'. Got a call half an hour later from a team wondering where all their test VMs had gone.
vocatus NSA/DOD/USAR/USAP/AEXP [ S ] 1 point 2 points 3 points 5 years ago (0 children)
I have to hear the story.
[Oct 02, 2018] Rookie almost wipes customer's entire inventory unbeknownst to sysadmin
"... At that moment, everything from / and onward began deleting forcefully and Reginald described his subsequent actions as being akin to "flying flat like a dart in the air, arms stretched out, pointy finger fully extended" towards the power switch on the mini computer. ..."
Oct 02, 2018 | theregister.co.uk
"I was going to type rm -rf /*.old* – which would have forcibly removed all /dgux.old stuff, including any sub-directories I may have created with that name," he said.
But – as regular readers will no doubt have guessed – he didn't.
"I fat fingered and typed rm -rf /* – and then I accidentally hit enter instead of the "." key."
At that moment, everything from / and onward began deleting forcefully and Reginald described his subsequent actions as being akin to "flying flat like a dart in the air, arms stretched out, pointy finger fully extended" towards the power switch on the mini computer.
"Everything got quiet."
Reginald tried to boot up the system, but it wouldn't. So instead he booted up off a tape drive to run the mini Unix installer and mounted the boot "/" file system as if he were upgrading – and then checked out the damage.
"Everything down to /dev was deleted, but I was so relieved: I hadn't deleted the customer's database, only system files."
Reginald did what all the best accident-prone people do – kept the cock-up to himself, hoped no one would notice and started covering his tracks, by recreating all the system files.
Over the next three hours, he "painstakingly recreated the entire device tree by hand", at which point he could boot the machine properly – "and even the application worked out".
Jubilant at having managed the task, Reginald tried to keep a lid on the heart that was no doubt in his throat by this point and closed off his work, said goodbye to the sysadmin and went home to calm down. Luckily no one was any the wiser.
"If the admins read this message, this would be the first time they hear about it," he said.
"At the time they didn't come in to check what I was doing, and the system was inaccessible to the users due to planned maintenance anyway."
Did you feel the urge to confess to errors no one else at your work knew about? Do you know someone who kept something under their hat for years? Spill the beans to Who, Me? by emailing us here. ®
Re: If rm -rf /* doesn't delete anything valuable
Eh? As I read it, Reginald kicked off the rm -rf /*, then hit the power switch before it deleted too much. The tape rescue revealed that "everything down to /dev" had been deleted, i.e. everything in / beginning with a, b, c and some d. On a modern system that might include /boot and /bin, but evidently it was not a total disaster on Reg's server.
Anonymous Coward
I remember discovering the hard way that when you delete an email account in Thunderbird and it asks if you want to delete all the files associated with it, it actually means do you want to delete the entire directory tree below where the account is stored .... so, as I discovered, saying "yes" when the reason you are deleting the account is because you'd just created it in the wrong place in the directory tree is not a good idea - instead of just deleting the new account I nuked all the data associated with all our family email accounts!
big_D
bpfh Monday 1st October 2018 10:05 GMT
Re: .cobol
"Delete is right above Rename in the bloody menu"
Probably designed by the same person who designed the crontab app then, with the command line options -e to edit and -r to remove immediately without confirmation. Misstype at your peril...
I found this out - to my peril - about 3 seconds before I realised that it was a good idea for a server's crontab to include a daily executed crontab -l > /foo/bar/crontab-backup.txt ...
Jason Bloomberg
Re: .cobol
I went to delete the original files, but I only got as far as "del *.COB" before hitting return.
I managed a similar thing but more deliberately; belatedly finding "DEL FOOBAR.???" included files with no extensions when it didn't on a previous version (Win3.1?).
That wasn't the disaster it could have been but I've had my share of all-nighters making it look like I hadn't accidentally scrubbed a system clean.
Down not across
Re: .cobol
Probably designed by the same person who designed the crontab app then, with the command line options -e to edit and -r to remove immediately without confirmation. Misstype at your peril...
Using crontab -e is asking for trouble even without mistypes. I've seen too many corrupted or truncated crontabs after someone has edited them with crontab -e. crontab -l > crontab.txt; vi crontab.txt; crontab crontab.txt is a much better way.
You mean not everyone has crontab entry that backs up crontab at least daily?
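Spelled out, the safe-editing workflow from the comment above, plus the daily self-backup entry the posters joke about, might look like this (paths are examples, not anyone's actual setup):

```shell
crontab -l > /root/crontab.txt     # snapshot the live table first
vi /root/crontab.txt               # edit the copy, not the live table
crontab /root/crontab.txt          # install only after reviewing it

# Inside the crontab itself, the daily self-backup safety net
# (the % must be escaped in crontab entries):
# 0 3 * * * crontab -l > /var/backups/crontab-$(date +\%u).txt
```

Keeping a dated copy per weekday also means a fat-fingered `crontab -r` costs at most one day of edits.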
MrBanana
Re: .cobol
"WAH! I copied the .COBOL back to .COB and started over again. As I knew what I wanted to do this time, it only took about a day to re-do what I had deleted."
When this has happened to me, I end up with better code than I had before. Re-doing the work gives you a better perspective. Even if functionally no different it will be cleaner, well commented, and laid out more consistently. I sometimes now do it deliberately (although just saving the first new version, not deleting it) to clean up the code.
big_D
Re: .cobol
I totally agree, the resultant code was better than what I had previously written, because some of the mistakes and assumptions I'd made the first time round and worked around didn't make it into the new code.
Woza
Reminds me of the classic
https://www.ee.ryerson.ca/~elf/hack/recovery.html
Anonymous South African Coward
Re: Reminds me of the classic
https://www.ee.ryerson.ca/~elf/hack/recovery.html
Was about to post the same. It is a legendary classic by now.
Chairman of the Bored
One simple trick...
...depending on your shell and its configuration, a zero-size file in each directory you care about called '-i' will force the rampaging recursive rm, mv, or whatever back into interactive mode. By and large it won't defend you against mistakes in a script, but it's definitely saved me from myself when running an interactive shell.
It's proven useful enough to earn its own cronjob that runs once a week and features a 'find -type d' and touch '-i' combo on systems I like.
Glad the OP's mad dive for the power switch saved him, I wasn't so speedy once. Total bustification. Hence this one simple trick...
Now if I could ever fdisk the right f$cking disk, I'd be set!
PickledAardvark
Re: One simple trick...
"Can't you enter a command to abort the wipe?" Maybe. But you still have to work out what got deleted.
On the first Unix system I used, an admin configured the rm command with a system alias so that rm required a confirmation. Annoying after a while but handy when learning.
When you are reconfiguring a system, delete/rm is not the only option. Move/mv protects you from your errors. If the OS has no move/mv, then copy, and verify before you delete.
Doctor Syntax
Re: One simple trick...
"Move/mv protects you from your errors."
Not entirely. I had a similar experience with mv. I was left with a running shell, so I could cd through the remains of the file system and list files with echo *, but not repair it. Although we had the CDs (SCO) to reboot, the system required a specific driver which wasn't included on the CDs and hadn't been provided by the vendor. It took most of a day before they emailed the correct driver to put on a floppy before I could reboot. After that it only took a few minutes to put everything back in place.
Chairman of the Bored
Re: One simple trick...
@Chris Evans: yes, there are a number of things you can do. Just like Windows, a quick ctrl-C will abort an rm operation taking place in an interactive shell. Destroying the window in which the interactive shell running rm is running will work too (alt-F4 in most window managers, or 'x' out of the window). If you know the process id of the rm process you can 'kill $pid' or do a 'killall -KILL rm'.
Couple of problems:
(1) law of maximum perversity says that the most important bits will be destroyed first in any accident sequence
(2) by the time you realize the mistake there is no time to kill rm before law 1 is satisfied
The OP's mad dive for the power button is probably the very best move... provided you are right there at the console. And provided the big red switch is actually connected to anything
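The weekly "find -type d + touch" combo described in this thread might look like the sketch below. `plant_i_files` is a hypothetical name, and the `{}/-i` form relies on GNU find replacing `{}` inside an argument; the leading slash in each expanded path is what keeps touch from parsing `-i` as an option.

```shell
#!/bin/sh
# Plant an empty file literally named -i in every directory under a
# given root, so an unquoted "rm -rf *" expands it first and drops rm
# into interactive mode. plant_i_files is an invented helper name;
# {}/-i substitution inside an argument is a GNU find behaviour.
plant_i_files() {
    find "$1" -type d -exec touch '{}/-i' \;
}
```

Run weekly from cron over the trees you care about, as the poster describes. It only guards the unquoted-glob case: `rm -rf /path` never sees the decoy file.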
Colin Bull 1
cp can also be dangerous
After several years working in a DOS environment I got a job as project Manager / Sys admin on a Unix based customer site for a six month stint. On my second day I wanted to use a test system to learn the software more, so decided to copy the live order files to the test system.
Unfortunately I forgot the trailing full stop as it was not needed in DOS - so the live order index file overwrote the live order data file. And the company only took orders for next-day delivery, so it wiped all current orders.
Luckily it printed a sales acknowledgement every time an order was placed, so I escaped death and learned never to miss the second parameter of the cp command.
Anonymous Coward
I'd written a script to deploy the latest changes to the live environment. Worked great. Except one day I'd entered a typo and it was now deploying the same files to the remote directory, over and over again.
It did that for 2 whole years with around 7 code releases. Not a single person realised the production system was running the same code after each release with no change in functionality. All the customer cared about was 'was the site up?'
Not a single person realised. Not the developers. Not the support staff. Not me. Not the testers. Not the customer. Just made you think... wtf had we been doing for 2 years???
Yet Another Anonymous coward
Look on the bright side, any bugs your team had introduced in those 2 years had been blocked by your intrinsically secure script
Prst. V.Jeltz
div not a single person realised. not the developers. not the support staff. not me. not the testers. not the customer. just made you think... wtf had we been doing for 2 years???
That is classic! Not surprised about the AC!
Bet some of the beancounters were less than impressed, probably on the customer side :)
Anonymous Coward
Re: ...then there's backup stories...
Many years ago (pre-internet times) a client phoned at 5:30 on a Friday afternoon. It was their IT guy wanting to run through the steps involved in recovering from a backup. Their US headquarters had had a hard disk fail on their accounting system. He was talking the Financial Controller through a recovery, and while he knew his stuff he just wanted to double-check everything.
8pm the same night the phone rang again - how soon could I fly to the states? Only one of the backup tapes was good. The financial controller had put the sole remaining good backup tape in the drive, then popped out to get a bite to eat at 7pm because it was going to be a late night. At 7:30pm the scheduled backup process copied the corrupted database over the only remaining backup.
Saturday was spent on the phone trying to talk them through everything I could think of.
Sunday afternoon I was sitting in a private jet winging its way to their US HQ. Three days of very hard work later we'd managed to recreate the accounting database from pieces of corrupted databases and log files. Another private jet ride home - this time the pilot was kind enough to tell me there was a cooler full of beer behind my seat.
Olivier2553
Re: Welcome to the club!
"Lesson learned: NEVER decide to "clean up some old files" at 4:30 on a Friday afternoon. You WILL look for shortcuts and it WILL bite you on the ass."
Do not do anything of any significance on a Friday. At all. Any major change, big operation, etc. must be made by Thursday at the latest, so in case of a cock-up, you have Friday (plus the two weekend days) to repair it.
JQW
I once wiped a large portion of a hard drive after using find with exec rm -rf {} - due to not taking into account the fact that some directories on the system had spaces in them.
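The whitespace-safe forms of that find/rm combination are worth spelling out. Handing names to rm via `-exec ... {} +` keeps them out of the shell's word splitting entirely, which is what bit the poster; the `cache*` layout below is an invented example.

```shell
#!/bin/sh
# Demonstrate whitespace-safe find/rm on a throwaway tree containing
# directory names with spaces. -exec passes find's raw filenames
# directly to rm, so no word splitting ever happens.
root=$(mktemp -d)
mkdir -p "$root/keep" "$root/cache one" "$root/cache two"

# -prune stops find descending into directories it is about to delete.
find "$root" -type d -name 'cache*' -prune -exec rm -rf -- {} +

# NUL-separated pipeline alternative (GNU find/xargs extensions):
#   find "$root" -type f -name '*.tmp' -print0 | xargs -0 rm -f --
```

The unquoted backtick-substitution style (`rm -rf \`find ...\``) is exactly the form that splits "cache one" into two arguments.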
Will Godfrey
Defensive typing
I've long been in the habit of entering dangerous commands partially in reverse, so in the case of the OP's one I'd have done:
' -rf /*.old* '
then gone back to the start of the line and entered the ' rm ' bit.
sisk
A couple months ago on my home computer (which has several Linux distros installed and which all share a common /home because I apparently like to make life difficult for myself - and yes, that's as close to a logical reason as I have for having multiple distros installed on one machine) I was going to get rid of one of the extraneous Linux installs and use the space to expand the root partition for one of the other distros. I realized I'd typed /dev/sdc2 instead of /dev/sdc3 at the same time that I verified that, yes, I wanted to delete the partition. And sdc2 is where the above-mentioned shared /home lives. Doh.
Fortunately I have a good file server and a cron job running rsync every night, so I didn't actually lose any data, but I think my heart stopped for a few seconds before I realized that.
Kevin Fairhurst
Came in to work one Monday to find that the Unix system was borked... On investigation it appeared that a large number of files & folders had gone missing, probably through someone doing an incorrect rm.
Our systems were shared with our US office who supported the UK outside of our core hours (we were in from 7am to ensure trading was ready for 8am, they were available to field staff until 10pm UK time) so we suspected it was one of our US counterparts who had done it, but had no way to prove it.
Rather than try and fix anything, they'd gone through and deleted all logs and history entries so we could never find the evidence we needed!
Restoring the system from a recent backup brought everything back online again, as one would expect!
DavidRa
Sure they did, but the universe invented better idiots
Of course. However, the incompletely-experienced often choose to force bypass that configuration. For example, a lot of systems aliased rm to "rm -i" by default, which would force interactive confirmations. People would then say "UGH, I hate having to do this" and add their own customisations to their shells/profiles etc:
unalias rm
alias rm='rm -f'
Lo and behold, now no silly confirmations, regardless of stupidity/typos/etc.
[Jul 30, 2018] Sudo related horror story
Jul 30, 2018 | www.sott.net
A new sysadmin decided to scratch his itch in the sudoers file, in the standard definition of additional sysadmins via the wheel group:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
he replaced ALL with localhost:
## Allows people in group wheel to run all commands
%wheel localhost=(ALL) ALL
then without testing he distributed this file to all servers in the datacenter. The sysadmins who worked after him discovered that the sudo su - command no longer worked and that they couldn't get root using their tried and true method ;-)
[Apr 22, 2018] Unix Horror Stories: The good thing about Unix is that when it screws up, it does so very quickly
"... you probably don't want that user owning /bin/nologin. ..."
Aug 04, 2011 | unixhorrorstories.blogspot.com
Unix Horror Stories: The good thing about Unix is that when it screws up, it does so very quickly
The project to deploy a new, multi-million-dollar commercial system on two big, brand-new HP-UX servers at a brewing company that shall not be named had been running on time and within budget for several months. Just a few steps remained; among them, the migration of users from the old servers to the new ones.
The task was going to be simple: just copy the home directories of each user from the old server to the new ones, and a simple script to change the owner so as to make sure that each home directory was owned by the correct user. The script went something like this:
#!/bin/bash
cat /etc/passwd | while read line
do
    USER=$(echo $line | cut -d: -f1)
    HOME=$(echo $line | cut -d: -f6)
    chown -R $USER $HOME
done
[NOTE: the script does not filter out system ids from userids and that's a grave mistake. also it was run before it was tested ; -) -- NNB]
As you see, this script is pretty simple: obtain the user and the home directory from the password file, and then execute the chown command recursively on the home directory. I copied the files, executed the script, and thought, great, just 10 minutes and all is done.
That's when the calls started.
It turns out that while I was executing those seemingly harmless commands, the server was under acceptance test. You see, we were just one week away from going live and the final touches were everything that was required. So the users in the brewing company started testing if everything they needed was working like in the old servers. And suddenly, the users noticed that their system was malfunctioning and started making furious phone calls to my boss and then my boss started to call me.
And then I realized I had trashed the server. Completely. My console was still open and I could see the processes starting to fail, one by one, reporting very strange messages to the console that didn't look any good. I started to panic. My workmate Ayelen and I (who had just copied my script and executed it on the mirror server) realized only too late that the home directory of the root user was / - the root filesystem - so we had changed the owner of every single file in the filesystem to root!!! That's what I love about Unix: when it screws up, it does so very quickly, and thoroughly.
There must be a way to fix this , I thought. HP-UX has a package installer like any modern Linux/Unix distribution, that is swinstall . That utility has a repair command, swrepair . So the following command put the system files back in order, needing a few permission changes on the application directories that weren't installed with the package manager:
swrepair -F
But the story doesn't end here. The next week, we were going live, and I knew that the migration of the users would be for real this time, not just a test. My boss and I were going to the brewing company when he received a phone call. Then he turned to me and asked, "What was the command that you used last week?". I told him and noticed that he was dictating it very carefully. When we arrived, we saw why: before the final deployment, a Unix administrator from the company had made the same mistake I did, but this time people from the whole country were connecting to the system, and he was receiving phone calls from a lot of angry users. Luckily, the mistake could be fixed, and we all, young and old, went back to reading the HP-UX manual. Those things can come in handy sometimes!
Moral of this story: before doing anything to users' home directories, take the time to check where the user IDs of actual users start - usually at 500, but it's configuration-dependent - because system users' IDs are lower than that.
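That moral can be folded straight back into the script. The sketch below uses a hypothetical helper, `fix_home_owners`, which echoes the chown commands instead of running them; the UID threshold is 500 as in the story (1000 on many modern distributions).

```shell
#!/bin/sh
# Safer version of the chown loop from the story: skip any account
# whose UID is below the regular-user threshold, and never recurse
# from /. fix_home_owners is an invented name; it echoes rather than
# executes the chown commands.
fix_home_owners() {    # $1 = passwd-format file, $2 = minimum UID
    while IFS=: read -r user _ uid _ _ home _; do
        [ "$uid" -ge "$2" ] || continue    # system account: skip
        [ "$home" = / ] && continue        # root's home in the story!
        echo chown -R "$user" "$home"
    done < "$1"
}
```

`fix_home_owners /etc/passwd 500` would have skipped root (UID 0, home /) and left the root filesystem alone.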
Send in your Unix horror story, and it will be featured here in the blog!
Greetings,
Agustin
This script is so dangerous. You are giving home directories to, say, the apache user, and you probably don't want that user owning /bin/nologin.
[Apr 22, 2018] Unix Horror story script question (unix.com forums, Shell Programming and Scripting)
Apr 22, 2018 | www.unix.com
scottsiddharth Registered User
Unix Horror story script question
This text and script are borrowed from the "Unix Horror Stories" document. It states as follows:
"""""Management told us to email a security notice to every user on our system (at that time, around 3000 users). A certain novice administrator on our system wanted to do it, so I instructed them to extract a list of users from /etc/passwd, write a simple shell loop to do the job, and throw it in the background.
Here's what they wrote (Bourne shell)...
for USER in `cat user.list`
do
    mail $USER < message.text &
done
Have you ever seen a load average of over 300??? """"" END
My question is this: what is wrong with the script above? Why did it find a place in the Horror stories? It worked well when I tried it. Maybe he intended to throw the whole script in the background and not just the mail part. But even so it works just as well... So?
Thunderbolt, Registered User
RE: Unix Horror story script question
I think it does well deserve to be placed in the Horror stories. Whether or not the given server has an SMTP service role, this script tries to run 3000 mail commands in parallel, one per recipient. Have you ever tried it with 3000 valid e-mail IDs? You can feel the heat of the CPU (sar 1 100).
P.S.: I did not test it, but theoretically affirmed. Best regards.
Quote: Originally Posted by scottsiddharth: "Thank you for the reply. But isn't that exactly what the real admin asked the novice admin to do? Is there a better script or solution?"
Well, let me try to make it sequential to reduce the CPU load, though it will take number-of-users * SLP_INT (default 1) seconds to execute...
# Interval between consecutive mail commands, in seconds; minimum 1 second.
SLP_INT=1
for USER in `cat user.list`
do
    mail $USER < message.text
    [ -z "${SLP_INT}" ] && sleep 1 || sleep "${SLP_INT}"
done
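The sequential fix proposed in the thread can be tidied into a small function. `send_all` and `MAIL_CMD` are hypothetical names introduced here; `MAIL_CMD` exists so the loop can be dry-run with `echo` instead of actually sending 3000 messages.

```shell
#!/bin/sh
# One mail process at a time with a configurable pause, instead of
# 3000 backgrounded at once. send_all and MAIL_CMD are invented names;
# set MAIL_CMD=echo for a dry run.
send_all() {    # $1 = recipient list (one per line), $2 = message file
    while read -r user; do
        ${MAIL_CMD:-mail} "$user" < "$2"
        sleep "${SLP_INT:-1}"
    done < "$1"
}
```

Set `SLP_INT` before calling to tune the pacing, e.g. `SLP_INT=1; send_all user.list message.text`. The load average stays near 1 instead of 300.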
[Apr 22, 2018] THE classic Unix horror story (r/programming)
"... rm -rf ~/ ..."
Apr 22, 2008 | www.reddit.com
probablycorey 10 years ago (35 children)
A little trick I use to ensure I never delete the root or home dir... Put a file called -i in / and ~
If you ever call rm -rf *, -i (the request confirmation option) will be the first path expanded. So your command becomes...
rm -rf -i
Catastrophe Averted!
mshade 10 years ago (0 children)
That's a pretty good trick! Unfortunately it doesn't work if you specify the path of course, but will keep you from doing it with a PWD of ~ or /.
Thanks!
aythun 10 years ago (2 children)
Or just use zsh. It's awesome in every possible way.
brian@xenon:~/tmp/test% rm -rf *
zsh: sure you want to delete all the files in /home/brian/tmp/test [yn]?
rex5249 10 years ago (1 child)
I keep an daily clone of my laptop and I usually do some backups in the middle of the day, so if I lose a disk it isn't a big deal other than the time wasted copying files.
MyrddinE 10 years ago (1 child)
Because we are creatures of habit. If you ALWAYS have to type 'yes' for every single deletion, it will become habitual, and you will start doing it without conscious thought.
Warnings must only pop up when there is actual danger, or you will become acclimated to, and cease to see, the warning.
This is exactly the problem with Windows Vista, and why so many people harbor such ill-will towards its 'security' system.
zakk 10 years ago (3 children)
and if I want to delete that file?!? ;-)
alanpost 10 years ago (0 children)
I use the same trick, so either of:
$ rm -- -i or $ rm ./-i
will work.
emag 10 years ago (0 children)
rm /-i ~/-i
nasorenga 10 years ago * (2 children)
The part that made me the most nostalgic was his email address: mcvax!ukc!man.cs.ux!miw
Gee whiz, those were the days... (Edit: typo)
floweryleatherboy 10 years ago (6 children)
One of my engineering managers wiped out an important server with rm -rf. Later it turned out he had a giant stock of kiddy porn on company servers.
monstermunch 10 years ago (16 children)
Whenever I use rm -rf, I always make sure to type the full path name in (never just use *) and put the -rf at the end, not after the rm. This means you don't have to worry about hitting "enter" in the middle of typing the path name (it won't delete the directory because the -rf is at the end) and you don't have to worry as much about data deletion from accidentally copy/pasting the command somewhere with middle click or if you redo the command while looking in your bash history.
Hmm, couldn't you alias "rm -rf" to mv the directory/files to a temp directory to be on the safe side?
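A minimal sketch of that "alias rm -rf to mv into a temp directory" idea follows. `safe_rm` and `TRASH_DIR` are invented names; a real setup would also need periodic cleanup, or it runs straight into the full-partition and exhausted-inode problems raised in the replies.

```shell
#!/bin/sh
# Hypothetical trash-instead-of-delete wrapper: move targets into a
# timestamped directory under $TRASH_DIR rather than unlinking them.
# Needs a cron job to expire old trash in any real deployment.
safe_rm() {
    trash="${TRASH_DIR:-$HOME/.trash}/$(date +%Y%m%d-%H%M%S).$$"
    mkdir -p "$trash" && mv -- "$@" "$trash"/
}
```

The `--` guard means even a file named `-rf` gets moved rather than parsed as an option, and nothing is gone until the trash is emptied.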
branston 10 years ago (8 children)
Aliasing 'rm' is fairly common practice in some circles. It can have its own set of problems however (filling up partitions, running out of inodes...)
amnezia 10 years ago (5 children)
you could alias it with a script that prevents rm -rf * being run in certain directories.
jemminger 10 years ago (4 children)
you could also alias it to 'ls' :)
derefr 10 years ago * (1 child)
One could write a daemon that lets the oldest files in that directory be "garbage collected" when those conditions are approaching. I think this is, in a roundabout way, how Windows' shadow copy works.
branston 10 years ago (0 children)
Could do. Think we might be walking into the over-complexity trap however. The only time I've ever had an rm related disaster was when accidentally repeating an rm that was already in my command buffer. I looked at trying to exclude patterns from the command history but csh doesn't seem to support doing that so I gave up.
A decent solution just occurred to me for when the underlying file system supports snapshots (UFS2 for example). Just snap the fs on which the to-be-deleted items live prior to the delete. That needs barely any IO to do and you can set the snapshots to expire after 10 minutes.
Hmm... Might look at implementing that..
mbm 10 years ago (0 children)
Most of the original UNIX tools took the arguments in strict order, requiring that the options came first; you can even see this on some modern *BSD systems.
shadowsurge 10 years ago (1 child)
I just always format the command with ls first just to make sure everything is in working order. Then my neurosis kicks in and I do it again... and a couple more times just to make sure nothing bad happens.
Jonathan_the_Nerd 10 years ago (0 children)
If you're unsure about your wildcards, you can use echo to see exactly how the shell will expand your arguments.
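The suggestion above in practice: prefixing the command with echo prints the expansion without running anything.

```shell
# Preview a wildcard before committing to it: echo shows exactly
# what the shell will pass to rm, and deletes nothing.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a.o b.o notes.txt
echo rm *.o    # prints: rm a.o b.o
```

An accidental `echo rm * .o` would reveal the extra space immediately, since every filename in the directory shows up in the output.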
splidge 10 years ago (0 children)
A better trick IMO is to use ls on the directory first.. then when you are sure that's what you meant, type rm -rf !$ to delete it.
earthboundkid 10 years ago * (0 children)
Ever since I got burned by letting my pinky slip on the enter key years ago, I've been typing echo path first, then going back and adding the rm after the fact.
zerokey 10 years ago * (2 children)
Great story. Halfway through reading, I had a major wtf moment. I wasn't surprised by the use of a VAX, as my old department just retired their last VAX a year ago. The whole time, I'm thinking, "hello..mount the tape hardware on another system and, worst case scenario, boot from a live cd!" Then I got to, "The next idea was to write a program to make a device descriptor for the tape deck" and looked back at the title and realized that it was from 1986 and realized, "oh..oh yeah...that's pretty fucked."
iluvatar 10 years ago (0 children)
Great story. Yeah, but really, he had way too much of a working system to qualify for true geek godhood. That title belongs to Al Viro. Even though I've read it several times, I'm still in awe every time I see that story...
cdesignproponentsist 10 years ago (0 children)
FreeBSD has backup statically-linked copies of essential system recovery tools in /rescue, just in case you toast /bin, /sbin, /lib, ld-elf.so.1, etc. It won't protect against a rm -rf / though (and is not intended to), although you could chflags -R schg /rescue to make them immune to rm -rf.
clytle374 10 years ago * (9 children)
It happens. I tried a few months back to rm -rf bin to delete a directory and did a rm -rf /bin instead. First thought: That took a long time. Second thought: What do you mean ls not found. I was amazed that the desktop survived for nearly an hour before crashing.
earthboundkid 10 years ago (8 children)
This really is a situation where GUIs are better than CLIs.
There's nothing like the visual confirmation of seeing what you're obliterating to set your heart into the pit of your stomach.
jib 10 years ago (0 children)
If you're using a GUI, you probably already have that. If you're using a command line, use mv instead of rm. In general, if you want the computer to do something, tell it what you want it to do, rather than telling it to do something you don't want and then complaining when it does what you say.
earthboundkid 10 years ago (3 children)
Yes, but trash cans aren't manly enough for vi and emacs users to take seriously. If it made sense and kept you from shooting yourself in the foot, it wouldn't be in the Unix tradition.
earthboundkid 10 years ago (1 child)
1. Are you so low on disk space that it's important for your trash can to be empty at all times?
2. Why should we humans have to adapt our directory names to route around the busted-ass-ness of our tools? The tools should be made to work with capital letters and spaces. Or better, use a GUI for deleting so that you don't have to worry about OMG, I forgot to put a slash in front of my space!
Seriously, I use the command line multiple times every day, but there are some tasks for which it is just not well suited compared to a GUI, and (bizarrely, considering it's one thing the CLI is most used for) one of them is moving around and deleting files.
easytiger 10 years ago (0 children)
That's a very simple bash/ksh/python/etc script.
1. script a move op to a hidden dir on the /current/ partition.
2. alias this to rm
3. wrap rm as an alias to delete the contents of the hidden folder with confirmation
mattucf 10 years ago (3 children)
I'd like to think that most systems these days don't have / set as root's home directory, but I've seen a few that do. :/
dsfox 10 years ago (0 children)
This is a good approach in 1986. Today I would just pop in a bootable CDROM.
fjhqjv 10 years ago * (5 children)
That's why I always keep stringent file permissions and never act as the root user. I'd have to try to rm -rf, get a permission denied error, then retype sudo rm -rf and then type in my password to ever have a mistake like that happen.
But I'm not a systems administrator, so maybe it's not the same thing.
toast_and_oj 10 years ago (2 children)
I aliased "rm -rf" to "omnomnom" and got myself into the habit of using that. I can barely type "omnomnom" when I really want to, let alone when I'm not really paying attention. It's saved one of my projects once already.
shen 10 years ago (0 children)
I've aliased "rm -rf" to "rmrf". Maybe I'm just a sucker for punishment. I haven't been bit by it yet, the defining word being yet.
robreim 10 years ago (0 children)
I would have thought tab completion would have made omnomnom potentially easier to type than rm -rf (since the -rf part needs to be typed explicitly)
immure 10 years ago (0 children)
It's not.
lespea 10 years ago (0 children)
Before I ever do something like that I make sure I don't have permissions so I get an error, then I press up, home, and type sudo <space> <enter> and it works as expected :)
kirun 10 years ago (0 children)
And I was pleased the other day how easy it was to fix the system after I accidentally removed kdm, konqueror and kdesktop... but these guys are hardcore.
austin_k 10 years ago (0 children)
I actually started to feel sick reading that. I've been in an IT disaster before where we almost lost a huge database. Ugh.. I still have nightmares.
umilmi81 10 years ago (4 children)
Task number 1 with a UNIX system: alias rm to rm -i. Call the explicit path when you want to avoid the -i (i.e. /bin/rm -f). Nobody is too cool to skip this basic protection.
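umilmi81's rule, written out as it might appear in a shell startup file (the scratch file below is illustrative):

```shell
# Make every interactive rm prompt before deleting.
alias rm='rm -i'

# Bypass the alias deliberately by calling the binary by full path:
scratch=$(mktemp)       # an illustrative throwaway file
/bin/rm -f "$scratch"   # explicit path: no -i, no prompt
```

The bypass is the point: the safety net stays on by default, and skipping it takes a visible, conscious act.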
flinchn 10 years ago (0 children)
I did an application install at an LE agency last fall - stupid me: mv ./etc ./etcbk <> mv /etc /etcbk
Ahh, that damned period.
DrunkenAsshole 10 years ago (0 children)
Were the "*"s really needed for a story that has plagued, at one point or another, all OS users?
xelfer 10 years ago (0 children)
Is the home directory for root / for some unix systems? I thought 'cd' then 'rm -rf *' would have deleted whatever's in his home directory (or whatever $HOME points to)
srparish 10 years ago (0 children)
Couldn't he just have used the editor to create the etc files he wanted, and used cpio as root to copy that over as an /etc?
sRp
stox 10 years ago (1 child)
Been there, done that. Have the soiled underwear to prove it. Amazing what kind of damage you can recover from given enough motivation.
sheepskin 10 years ago * (0 children)
I had a customer do this; he killed it about the same time. I told him he was screwed and I'd charge him a bunch of money to take down his server, rebuild it from a working one and put it back up. But the customer happened to have a root ftp session up, and was able to upload what he needed to bring the system back. By the time he was done I rebooted it to make sure it was cool and it booted all the way back up.
Of course I've also had a lot of customers that have done it, and they were screwed, and I got to charge them a bunch of money.
jemminger 10 years ago (0 children)
pfft. that's why lusers don't get root access.
supersan 10 years ago (2 children)
i had the same thing happen to me once.. my c:\ drive was running ntfs and i accidentally deleted the "ntldr" system file in the c:\ root (because the name didn't figure much).. then later, i couldn't even boot in safe mode! and my bootable disk didn't recognize the c:\ drive because it was ntfs!! so sadly, i had to reinstall everything :( wasted a whole day over it..
b100dian 10 years ago (0 children)
Yes, but that's a single file. I suppose anyone can write hex into mbr to copy ntldr from a samba share!
bobcat 10 years ago (0 children)
http://en.wikipedia.org/wiki/Emergency_Repair_Disk
boredzo 10 years ago (0 children)
Neither one is the original source. The original source is Usenet, and I can't find it with Google Groups. So either of these webpages is as good as the other.
docgnome 10 years ago (0 children)
In 1986? On a VAX?
MarlonBain 10 years ago (0 children)
This classic article from Mario Wolczko first appeared on Usenet in 1986.
amoore 10 years ago (0 children)
I got sidetracked trying to figure out why the fictional antagonist would type the extra "/ " in "rm -rf ~/ ".
Zombine 10 years ago (2 children)
...it's amazing how much of the system you can delete without it falling apart completely. Apart from the fact that nobody could login (/bin/login?), and most of the useful commands had gone, everything else seemed normal.
Yeah. So apart from the fact that no one could get any work done or really do anything, things were working great!
I think a more rational reaction would be "Why on Earth is this big, important system on which many people rely designed in such a way that a simple easy-to-make human error can screw it up so comprehensively?" or perhaps "Why on Earth don't we have a proper backup system?"
daniels220 10 years ago (1 child)
The problem wasn't the backup system, it was the restore system, which relied on the machine having a "copy" command. Perfectly reasonable assumption that happened not to be true.
Zombine 10 years ago * (0 children)
Neither backup nor restoration serves any purpose in isolation. Most people would group those operations together under the heading "backup;" certainly you win only a semantic victory by doing otherwise. Their fail-safe data-protection system, call it what you will, turned out not to work, and had to be re-engineered on-the-fly.
I generally figure that the assumptions I make that turn out to be entirely wrong were not "perfectly reasonable" assumptions in the first place. Call me a traditionalist.
[Apr 22, 2018] rm and Its Dangers (Unix Power Tools, 3rd Edition)
Apr 22, 2018 | docstore.mik.ua
14.3. rm and Its Dangers
Under Unix, you use the rm command to delete files. The command is simple enough; you just type rm followed by a list of files. If anything, rm is too simple. It's easy to delete more than you want, and once something is gone, it's permanently gone. There are a few hacks that make rm somewhat safer, and we'll get to those momentarily. But first, here's a quick look at some of the dangers.
To understand why it's impossible to reclaim deleted files, you need to know a bit about how the Unix filesystem works. The system contains a "free list," which is a list of disk blocks that aren't used. When you delete a file, its directory entry (which gives it its name) is removed. If there are no more links ( Section 10.3 ) to the file (i.e., if the file only had one name), its inode ( Section 14.2 ) is added to the list of free inodes, and its datablocks are added to the free list.
Well, why can't you get the file back from the free list? After all, there are DOS utilities that can reclaim deleted files by doing something similar. Remember, though, Unix is a multitasking operating system. Even if you think your system is a single-user system, there are a lot of things going on "behind your back": daemons are writing to log files, handling network connections, processing electronic mail, and so on. You could theoretically reclaim a file if you could "freeze" the filesystem the instant your file was deleted -- but that's not possible. With Unix, everything is always active. By the time you realize you made a mistake, your file's data blocks may well have been reused for something else.
When you're deleting files, it's important to use wildcards carefully. Simple typing errors can have disastrous consequences. Let's say you want to delete all your object ( .o ) files. You want to type:
% rm *.o
But because of a nervous twitch, you add an extra space and type:
% rm * .o
It looks right, and you might not even notice the error. But before you know it, all the files in the current directory will be gone, irretrievably.
If you don't think this can happen to you, here's something that actually did happen to me. At one point, when I was a relatively new Unix user, I was working on my company's business plan. The executives thought, so as to be "secure," that they'd set a business plan's permissions so you had to be root ( Section 1.18 ) to modify it. (A mistake in its own right, but that's another story.) I was using a terminal I wasn't familiar with and accidentally created a bunch of files with four control characters at the beginning of their name. To get rid of these, I typed (as root ):
# rm ????*
This command took a long time to execute. When about two-thirds of the directory was gone, I realized (with horror) what was happening: I was deleting all files with four or more characters in the filename.
The story got worse. They hadn't made a backup in about five months. (By the way, this article should give you plenty of reasons for making regular backups ( Section 38.3 ).) By the time I had restored the files I had deleted (a several-hour process in itself; this was on an ancient version of Unix with a horrible backup utility) and checked (by hand) all the files against our printed copy of the business plan, I had resolved to be very careful with my rm commands.
[Some shells have safeguards that work against Mike's first disastrous example -- but not the second one. Automatic safeguards like these can become a crutch, though . . . when you use another shell temporarily and don't have them, or when you type an expression like Mike's very destructive second example. I agree with his simple advice: check your rm commands carefully! -- JP ]
-- ML
[Apr 22, 2018] How to prevent a mistaken rm -rf for specific folders?
"... Probably your best bet with it would be to alias rm -ri into something memorable like kill_it_with_fire . This way whenever you feel like removing something, go ahead and kill it with fire. ..."
Jan 20, 2013 | unix.stackexchange.com
amyassin, Jan 20, 2013 at 17:26
I think pretty much people here mistakenly ' rm -rf 'ed the wrong directory, and hopefully it did not cause a huge damage.. Is there any way to prevent users from doing a similar unix horror story ?? Someone mentioned (in the comments section of the previous link ) that
... I am pretty sure now every unix course or company using unix sets rm -fr to disable accounts of people trying to run it or stop them from running it ...
Is there any implementation of that in any current Unix or Linux distro? And what is the common practice to prevent that error even from a sysadmin (with root access)?
It seems that there was some protection for the root directory ( / ) in Solaris (since 2005) and GNU (since 2006). Is there anyway to implement the same protection way to some other folders as well??
To give it more clarity, I was not asking about general advice about rm usage (and I've updated the title to indicate that more), I want something more like the root folder protection: in order to rm -rf / you have to pass a specific parameter: rm -rf --no-preserve-root / .. Is there similar implementations for customized set of directories? Or can I specify files in addition to / to be protected by the preserve-root option?
mattdm, Jan 20, 2013 at 17:33
1) Change management 2) Backups. – mattdm Jan 20 '13 at 17:33
Keith, Jan 20, 2013 at 17:40
probably the only way would be to replace the rm command with one that doesn't have that feature. – Keith Jan 20 '13 at 17:40
sr_, Jan 20, 2013 at 18:28
safe-rm maybe – sr_ Jan 20 '13 at 18:28
Bananguin, Jan 20, 2013 at 21:07
most distros do alias rm='rm -i' which makes rm ask you if you are sure.
Besides that: know what you are doing. Only become root if necessary. For any user with root privileges, security of any kind must be implemented in and by the user. Hire somebody if you can't do it yourself. Over time any countermeasure becomes equivalent to the alias line above if you can't wrap your own head around the problem. – Bananguin Jan 20 '13 at 21:07
midnightsteel, Jan 22, 2013 at 14:21
@amyassin using rm -rf can be a resume generating event. Check and triple check before executing it – midnightsteel Jan 22 '13 at 14:21
Gilles, Jan 22, 2013 at 0:18
To avoid a mistaken rm -rf, do not type rm -rf .
If you need to delete a directory tree, I recommend the following workflow:
• If necessary, change to the parent of the directory you want to delete.
• mv directory-to-delete DELETE
• Explore DELETE and check that it is indeed what you wanted to delete
• rm -rf DELETE
Never call rm -rf with an argument other than DELETE . Doing the deletion in several stages gives you an opportunity to verify that you aren't deleting the wrong thing, either because of a typo (as in rm -rf /foo /bar instead of rm -rf /foo/bar ) or because of a braino (oops, no, I meant to delete foo.old and keep foo.new ).
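The staged workflow above can be run end to end as follows (the directory names are illustrative):

```shell
# Staged deletion: rename, inspect, then delete only the literal
# name DELETE, so rm -rf never takes a typo-prone path argument.
parent=$(mktemp -d)
mkdir -p "$parent/old-build/objs"   # stand-in for the doomed tree
cd "$parent" || exit 1
mv old-build DELETE   # rename first; nothing is destroyed yet
ls DELETE             # verify the contents really are expendable
rm -rf DELETE         # the only argument rm -rf ever receives
```

The rename is cheap and reversible; only the final step is destructive, and by then the target has been looked at twice.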
If your problem is that you can't trust others not to type rm -rf, consider removing their admin privileges. There's a lot more that can go wrong than rm .
Always make backups .
Periodically verify that your backups are working and up-to-date.
Keep everything that can't be easily downloaded from somewhere under version control.
With a basic unix system, if you really want to make some directories undeletable by rm, replace (or better shadow) rm by a custom script that rejects certain arguments. Or by hg rm .
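One way such a shadowing script might look, as a shell function with an example protect list (the list and the name `saferm` are assumptions; extend the patterns to the directories you care about):

```shell
# Sketch of the "shadow rm" idea: refuse a short list of protected
# paths, hand everything else to the real rm via `command`.
saferm() {
    for arg in "$@"; do
        case "$arg" in
            /|/etc|/etc/*|/bin|/bin/*)
                echo "saferm: refusing to touch '$arg'" >&2
                return 1 ;;
        esac
    done
    command rm "$@"
}
```

Because the check runs before rm is invoked at all, a protected path anywhere in the argument list aborts the whole command rather than deleting the unprotected arguments first.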
Some unix variants offer more possibilities.
• On OSX, you can set an access control list on a directory preventing deletion of the files and subdirectories inside it, without preventing the creation of new entries or modification of existing entries: chmod +a 'group:everyone deny delete_child' somedir (this doesn't prevent the deletion of files in subdirectories: if you want that, set the ACL on the subdirectory as well).
• On Linux, you can set rules in SELinux, AppArmor or other security frameworks that forbid rm to modify certain directories.
amyassin, Jan 22, 2013 at 9:41
Yeah backing up is the most amazing solution, but I was thinking of something like the --no-preserve-root option, for other important folder.. And that apparently does not exist even as a practice... – amyassin Jan 22 '13 at 9:41
Gilles, Jan 22, 2013 at 20:32
@amyassin I'm afraid there's nothing more (at least not on Linux). rm -rf already means "delete this, yes I'm sure I know what I'm doing". If you want more, replace rm by a script that refuses to delete certain directories. – Gilles Jan 22 '13 at 20:32
Gilles, Jan 22, 2013 at 22:17
@amyassin Actually, I take this back. There's nothing more on a traditional Linux, but you can set Apparmor/SELinux/ rules that prevent rm from accessing certain directories. Also, since your question isn't only about Linux, I should have mentioned OSX, which has something a bit like what you want. – Gilles Jan 22 '13 at 22:17
qbi, Jan 22, 2013 at 21:29
If you are using rm * and the zsh, you can set the option rmstarwait :
setopt rmstarwait
Now the shell warns when you're using the * :
> zsh -f
> setopt rmstarwait
> touch a b c
> rm *
zsh: sure you want to delete all the files in /home/unixuser [yn]? _
When you reject it ( n ), nothing happens. Otherwise all files will be deleted.
Drake Clarris, Jan 22, 2013 at 14:11
EDIT as suggested by comment:
You can change the attribute of the file or directory to immutable, and then it cannot be deleted even by root until the attribute is removed.
chattr +i /some/important/file
This also means that the file cannot be written to or changed in any way, even by root. Another attribute apparently available that I haven't used myself is the append attribute ( chattr +a /some/important/file ). Then the file can only be opened in append mode, meaning no deletion as well, but you can add to it (say, a log file). This means you won't be able to edit it in vim for example, but you can do echo 'this adds a line' >> /some/important/file . Using > instead of >> will fail.
These attributes can be unset using a minus sign, i.e. chattr -i file
Otherwise, if this is not suitable, one thing I practice is to always ls /some/dir first, and then instead of retyping the command, press up arrow, then CTRL-A, then delete the ls and type in my rm -rf if I need it. Not perfect, but by looking at the results of ls, you know beforehand if it is what you wanted.
NlightNFotis, Jan 22, 2013 at 8:27
One possible choice is to stop using rm -rf and start using rm -ri . The extra i parameter there is to make sure that it asks if you are sure you want to delete the file.
Probably your best bet with it would be to alias rm -ri into something memorable like kill_it_with_fire . This way whenever you feel like removing something, go ahead and kill it with fire.
amyassin, Jan 22, 2013 at 14:24
I like the name, but isn't f the exact opposite of the i option?? I tried it and it worked though... – amyassin Jan 22 '13 at 14:24
NlightNFotis, Jan 22, 2013 at 16:09
@amyassin Yes it is. For some strange kind of fashion, I thought I only had r in there. Just fixed it. – NlightNFotis Jan 22 '13 at 16:09
Silverrocker, Jan 22, 2013 at 14:46
To protect against an accidental rm -rf * in a directory, create a file called "-i" (you can do this with emacs or some other program) in that directory. The shell will expand the * to include -i, which rm will then parse as its interactive flag and drop into interactive mode.
For example: You have a directory called rmtest with the file named -i inside. If you try to rm everything inside the directory, rm will first get -i passed to it and will go into interactive mode. If you put such a file inside the directories you would like to have some protection on, it might help.
Note that this is ineffective against rm -rf rmtest .
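Creating the decoy needs the same `./` trick that deleting it does, since touch would otherwise parse -i as an option:

```shell
# Plant a file literally named "-i" so that a later glob expansion
# hands rm the interactive flag before any real filenames.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch ./-i important.txt
# A subsequent `rm *` expands to `rm -i important.txt`, so rm
# prompts for each file instead of deleting silently.
```

Note this relies on glob order putting `-i` before the filenames, and (as observed above) it does nothing against a command that names the directory itself.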
ValeriRangelov, Dec 21, 2014 at 3:03
If you understand the C programming language, I think it is possible to rewrite the rm source code and make a little patch for the kernel. I saw this on one server: it was impossible to delete some important directories, and when you typed 'rm -rf /directory' it sent email to the sysadmin.
[Apr 21, 2018] Any alias of rm is a very stupid idea
"... If you want a safety net, do "alias del='rm -I --preserve-root'", ..."
Feb 14, 2017 | www.cyberciti.biz
Art Protin June 12, 2012, 9:53 pm
Any alias of rm is a very stupid idea (except maybe alias rm=echo fool).
A co-worker had such an alias. Imagine the disaster when, visiting a customer site, he did "rm *" in the customer's work directory and all he got was the prompt for the next command after rm had done what it was told to do.
If you want a safety net, do "alias del='rm -I --preserve-root'".
Drew Hammond March 26, 2014, 7:41 pm
^ This x10000.
I've made the same mistake before and it's horrible.
[Mar 28, 2018] Sysadmin wiped two servers, left the country to escape the shame by Simon Sharwood
Mar 26, 2018 | theregister.co.uk
"This revolutionary product allowed you to basically 'mirror' two file servers," Graham told The Register . "It was clever stuff back then with a high speed 100Mb FDDI link doing the mirroring and the 10Mb LAN doing business as usual."
Graham was called upon to install said software at a British insurance company, which involved a 300km trip on Britain's famously brilliant motorways with a pair of servers in the back of a company car.
Maybe that drive was why Graham made a mistake after the first part of the job: getting the servers set up and talking.
"Sadly the software didn't make identifying the location of each disk easy," Graham told us. "And – ummm - I mirrored it the wrong way."
"The net result was two empty but beautifully-mirrored servers."
Oops.
Graham tried to find someone to blame, but as he was the only one on the job that wouldn't work.
His next instinct was to run, but as the site had a stack of Quarter Inch Cartridge backup tapes, he quickly learned that "incremental back-ups are the work of the devil."
Happily, all was well in the end.
[Dec 07, 2017] First Rule of Usability Don't Listen to Users
"... So, do users know what they want? No, no, and no. Three times no. ..."
Dec 07, 2017 | www.nngroup.com
But ultimately, the way to get user data boils down to the basic rules of usability
• Watch what people actually do.
• Do not believe what people say they do.
• Definitely don't believe what people predict they may do in the future.
... ... ...
So, do users know what they want? No, no, and no. Three times no.
Finally, you must consider how and when to solicit feedback. Although it might be tempting to simply post a survey online, you're unlikely to get reliable input (if you get any at all). Users who see the survey and fill it out before they've used the site will offer irrelevant answers. Users who see the survey after they've used the site will most likely leave without answering the questions. One question that does work well in a website survey is "Why are you visiting our site today?" This question goes to users' motivation and they can answer it as soon as they arrive.
[Dec 07, 2017] The rogue DHCP server
"... from Don Watkins ..."
Dec 07, 2017 | opensource.com
from Don Watkins
I am a liberal arts person who wound up being a technology director. With the exception of 15 credit hours earned on my way to a Cisco Certified Network Associate credential, all of the rest of my learning came on the job. I believe that learning what not to do from real experiences is often the best teacher. However, those experiences can frequently come at the expense of emotional pain. Prior to my Cisco experience, I had very little experience with TCP/IP networking and the kinds of havoc I could create, albeit innocently, due to my lack of understanding of the nuances of routing and DHCP.
At the time our school network was an Active Directory domain with DHCP and DNS provided by a Windows 2000 server. All of our staff access to email, the Internet, and network shares was served this way. I had been researching the use of the K12 Linux Terminal Server ( K12LTSP ) project and had built a Fedora Core box with a single network card in it. I wanted to see how well my new project worked, so without talking to my network support specialists I connected it to our main LAN segment. In a very short period of time our help desk phones were ringing with principals, teachers, and other staff who could no longer access their email, printers, shared directories, and more. I had no idea that the Windows clients would see another DHCP server on our network (my test computer) and pick up an IP address and DNS information from it.
I had unwittingly created a "rogue" DHCP server and was oblivious to the havoc that it would create. I shared with the support specialist what had happened and I can still see him making a bee-line for that rogue computer, disconnecting it from the network. All of our client computers had to be rebooted along with many of our switches which resulted in a lot of confusion and lost time due to my ignorance. That's when I learned that it is best to test new products on their own subnet.
[Jul 20, 2017] The ULTIMATE Horrors story with recovery!
"... By yet another miracle of good fortune, the terminal from which the damage had been done was still su'd to root (su is in /bin, remember?), so at least we stood a chance of all this working. ..."
Nov 08, 2002 | www.linuxjournal.com
Anonymous on Fri, 11/08/2002 - 03:00.
Its here .. Unbeliveable..
[I had intended to leave the discussion of "rm -r *" behind after the compendium I sent earlier, but I couldn't resist this one.
I also received a response from rutgers!seismo!hadron!jsdy (Joseph S. D. Yao) that described building a list of "dangerous" commands into a shell and dropping into a query when a glob turns up. They built it in so it couldn't be removed, like an alias. Anyway, on to the story! RWH.] I didn't see the message that opened up the discussion on rm, but thought you might like to read this sorry tale about the perils of rm....
(It was posted to net.unix some time ago, but I think our postnews didn't send it as far as it should have!)
----------------------------------------------------------------
Have you ever left your terminal logged in, only to find when you came back to it that a (supposed) friend had typed "rm -rf ~/*" and was hovering over the keyboard with threats along the lines of "lend me a fiver 'til Thursday, or I hit return"? Undoubtedly the person in question would not have had the nerve to inflict such a trauma upon you, and was doing it in jest. So you've probably never experienced the worst of such disasters....
It was a quiet Wednesday afternoon. Wednesday, 1st October, 15:15 BST, to be precise, when Peter, an office-mate of mine, leaned away from his terminal and said to me, "Mario, I'm having a little trouble sending mail." Knowing that msg was capable of confusing even the most capable of people, I sauntered over to his terminal to see what was wrong. A strange error message of the form (I forget the exact details) "cannot access /foo/bar for userid 147" had been issued by msg.
My first thought was "Who's userid 147?; the sender of the message, the destination, or what?" So I leant over to another terminal, already logged in, and typed
grep 147 /etc/passwd
/etc/passwd: No such file or directory.
Instantly, I guessed that something was amiss. This was confirmed when in response to
ls /etc
I got
ls: not found.
I suggested to Peter that it would be a good idea not to try anything for a while, and went off to find our system manager. When I arrived at his office, his door was ajar, and within ten seconds I realised what the problem was. James, our manager, was sat down, head in hands, hands between knees, as one whose world has just come to an end. Our newly-appointed system programmer, Neil, was beside him, gazing listlessly at the screen of his terminal. And at the top of the screen I spied the following lines:
# cd
# rm -rf *
Oh, *****, I thought. That would just about explain it.
I can't remember what happened in the succeeding minutes; my memory is just a blur. I do remember trying ls (again), ps, who and maybe a few other commands beside, all to no avail. The next thing I remember was being at my terminal again (a multi-window graphics terminal), and typing
cd /
echo *
I owe a debt of thanks to David Korn for making echo a built-in of his shell; needless to say, /bin, together with /bin/echo, had been deleted. What transpired in the next few minutes was that /dev, /etc and /lib had also gone in their entirety; fortunately Neil had interrupted rm while it was somewhere down below /news, and /tmp, /usr and /users were all untouched.
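The trick works because globbing is done by the shell itself, not by ls. A small sketch (a modern re-creation, not the original VAX session; the demo directory is invented) of how a built-in echo can survey a directory when /bin is gone:

```shell
# With ls deleted, "echo *" still works: the shell expands the glob
# itself and echo is a built-in, so no external binary is needed.
rm -rf /tmp/no_ls_demo
mkdir -p /tmp/no_ls_demo
cd /tmp/no_ls_demo
touch alpha beta gamma
echo *    # alpha beta gamma
```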
Meanwhile James had made for our tape cupboard and had retrieved what claimed to be a dump tape of the root filesystem, taken four weeks earlier. The pressing question was, "How do we recover the contents of the tape?". Not only had we lost /etc/restore, but all of the device entries for the tape deck had vanished. And where does mknod live?
You guessed it, /etc.
How about recovery across Ethernet of any of this from another VAX? Well, /bin/tar had gone, and thoughtfully the Berkeley people had put rcp in /bin in the 4.3 distribution. What's more, none of the Ether stuff wanted to know without /etc/hosts at least. We found a version of cpio in /usr/local, but that was unlikely to do us any good without a tape deck.
Alternatively, we could get the boot tape out and rebuild the root filesystem, but neither James nor Neil had done that before, and we weren't sure that the first thing to happen would be that the whole disk would be re-formatted, losing all our user files. (We take dumps of the user files every Thursday; by Murphy's Law this had to happen on a Wednesday).
Another solution might be to borrow a disk from another VAX, boot off that, and tidy up later, but that would have entailed calling the DEC engineer out, at the very least. We had a number of users in the final throes of writing up PhD theses, and the loss of maybe a week's work (not to mention the machine downtime) was unthinkable.
So, what to do? The next idea was to write a program to make a device descriptor for the tape deck, but we all know where cc, as and ld live. Or maybe make skeletal entries for /etc/passwd, /etc/hosts and so on, so that /usr/bin/ftp would work. By sheer luck, I had a gnuemacs still running in one of my windows, which we could use to create passwd, etc., but the first step was to create a directory to put them in.
Of course /bin/mkdir had gone, and so had /bin/mv, so we couldn't rename /tmp to /etc. However, this looked like a reasonable line of attack.
By now we had been joined by Alasdair, our resident UNIX guru, and as luck would have it, someone who knows VAX assembler. So our plan became this: write a program in assembler which would either rename /tmp to /etc, or make /etc, assemble it on another VAX, uuencode it, type in the uuencoded file using my gnu, uudecode it (some bright spark had thought to put uudecode in /usr/bin), run it, and hey presto, it would all be plain sailing from there. By yet another miracle of good fortune, the terminal from which the damage had been done was still su'd to root (su is in /bin, remember?), so at least we stood a chance of all this working.
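The round trip in that plan can be sketched with base64, the modern analogue of uuencode (file names here are invented; in the story the "carry" step was retyping the encoded text into an editor by hand):

```shell
# base64 plays the role uuencode played in the story: turn a small
# binary into typable ASCII, move the text to the stricken machine,
# and decode it back into a working binary on the far side.
printf '\177ELF' > /tmp/tiny.bin          # stand-in for the 76-byte helper
base64 /tmp/tiny.bin > /tmp/tiny.b64      # ASCII form, safe to hand-copy
base64 -d /tmp/tiny.b64 > /tmp/tiny.out   # decode on the far side
cmp -s /tmp/tiny.bin /tmp/tiny.out && echo "round trip intact"
```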
Off we set on our merry way, and within only an hour we had managed to concoct the dozen or so lines of assembler to create /etc. The stripped binary was only 76 bytes long, so we converted it to hex (slightly more readable than the output of uuencode), and typed it in using my editor. If any of you ever have the same problem, here's the hex for future reference:
070100002c000000000000000000000000000000000000000000000000000000
0000dd8fff010000dd8f27000000fb02ef07000000fb01ef070000000000bc8f
8800040000bc012f65746300
I had a handy program around (doesn't everybody?) for converting ASCII hex to binary, and the output of /usr/bin/sum tallied with our original binary. But hang on---how do you set execute permission without /bin/chmod? A few seconds thought (which as usual, lasted a couple of minutes) suggested that we write the binary on top of an already existing binary, owned by me...problem solved.
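The permission trick still works today, and is worth a sketch (paths invented; a chmod is used here only to set up the pre-existing executable that the story already had): shell redirection replaces a file's contents but not its mode, so writing over an executable you own yields a new, already-executable program.

```shell
# Overwriting an existing executable inherits its execute bit,
# because ">" truncates and rewrites contents without touching
# the file's permission mode -- no chmod needed afterwards.
printf '#!/bin/sh\necho original\n' > /tmp/victim
chmod 755 /tmp/victim                                # an executable we own
printf '#!/bin/sh\necho replaced\n' > /tmp/victim    # contents only change
/tmp/victim    # still executable: prints "replaced"
```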
So along we trotted to the terminal with the root login, carefully remembered to set the umask to 0 (so that I could create files in it using my gnu), and ran the binary. So now we had a /etc, writable by all.
From there it was but a few easy steps to creating passwd, hosts, services, protocols, (etc), and then ftp was willing to play ball. Then we recovered the contents of /bin across the ether (it's amazing how much you come to miss ls after just a few, short hours), and selected files from /etc. The key file was /etc/rrestore, with which we recovered /dev from the dump tape, and the rest is history.
Now, you're asking yourself (as I am), what's the moral of this story? Well, for one thing, you must always remember the immortal words, DON'T PANIC. Our initial reaction was to reboot the machine and try everything as single user, but it's unlikely it would have come up without /etc/init and /bin/sh. Rational thought saved us from this one.
The next thing to remember is that UNIX tools really can be put to unusual purposes. Even without my gnuemacs, we could have survived by using, say, /usr/bin/grep as a substitute for /bin/cat. And the final thing is, it's amazing how much of the system you can delete without it falling apart completely. Apart from the fact that nobody could login (/bin/login?), and most of the useful commands had gone, everything else seemed normal. Of course, some things can't stand life without say /etc/termcap, or /dev/kmem, or /etc/utmp, but by and large it all hangs together.
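The grep-for-cat substitution the author mentions is simple: grep with an empty pattern matches every line, so it prints a whole file (a quick demonstration with an invented file name):

```shell
# grep '' matches every line of the file, so it can stand in for a
# missing /bin/cat when browsing text files.
printf 'one\ntwo\nthree\n' > /tmp/grep_as_cat
grep '' /tmp/grep_as_cat    # prints the file, line by line, like cat
```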
I shall leave you with this question: if you were placed in the same situation, and had the presence of mind that always comes with hindsight, could you have got out of it in a simpler or easier way?
Answers on a postage stamp to:
Mario Wolczko
------------------------------------------------------------------------
Dept. of Computer Science ARPA: miw%uk.ac.man.cs.ux@cs.ucl.ac.uk
The University USENET: mcvax!ukc!man.cs.ux!miw
Manchester M13 9PL JANET: miw@uk.ac.man.cs.ux
U.K. 061-273 7121 x 5699
[Jul 20, 2017] These Guys Didn't Back Up Their Files, Now Look What Happened
"... Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else. ..."
Jul 20, 2017 | www.makeuseof.com
Back in college, I used to work just about every day as a computer cluster consultant. I remember a month after getting promoted to a supervisor, I was in the process of training a new consultant in the library computer cluster. Suddenly, someone tapped me on the shoulder, and when I turned around I was confronted with a frantic graduate student – a 30-something year old man who I believe was Eastern European based on his accent – who was nearly in tears.
"Please need help – my document is all gone and disk stuck!" he said as he frantically pointed to his PC.
Now, right off the bat I could have told you three facts about the guy. One glance at the blue screen of the archaic DOS-based version of Wordperfect told me that – like most of the other graduate students at the time – he had not yet decided to upgrade to the newer, point-and-click style word processing software. For some reason, graduate students had become so accustomed to all of the keyboard hot-keys associated with typing in a DOS-like environment that they all refused to evolve into point-and-click users.
The second fact, gathered from a quick glance at his blank document screen and the sweat on his brow told me that he had not saved his document as he worked. The last fact, based on his thick accent, was that communicating the gravity of his situation wouldn't be easy. In fact, it was made even worse by his answer to my question when I asked him when he last saved.
"I wrote 30 pages."
Calculated out at about 600 words a page, that's 18000 words. Ouch.
Then he pointed at the disk drive. The floppy disk was stuck, and from the marks on the drive he had clearly tried to get it out with something like a paper clip. By the time I had carefully fished the torn and destroyed disk out of the drive, it was clear he'd never recover anything off of it. I asked him what was on it.
"My thesis."
Making Backups of Backups
If there is anything I learned during those early years of working with computers (and the people that use them), it was how critical it is to not only save important stuff, but also to save it in different places. I would back up floppy drives to those cool new zip drives as well as the local PC hard drive. Never, ever had a single copy of anything.
Unfortunately, even today, people have not learned that lesson. Whether it's at work, at home, or talking with friends, I keep hearing stories of people losing hundreds to thousands of files, sometimes losing data worth real dollars in the time and resources that went into creating it.
To drive that lesson home, I wanted to share a collection of stories that I found around the Internet about some recent cases where people suffered that horrible fate, from thousands of files to entire drives' worth of data completely lost. These are people whose only remaining option is to start running recovery software and praying, or in other cases paying thousands of dollars to a data recovery firm and hoping there's something to find.
Not Backing Up Projects
The first example comes from Yahoo Answers, where a user who provided only a "?" for a user name (out of embarrassment, probably) posted:
"I lost all my files from my hard drive? help please? I did a project that took me 3 days and now i lost it, its powerpoint presentation, where can i look for it? its not there where i save it, thank you"
The folks answering immediately dove into suggesting that the person run recovery software, and one person suggested that the person run a search on the computer for *.ppt.
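On a Unix system, that kind of search for stray presentation files would be a one-liner with find (the directory tree here is invented for illustration):

```shell
# Hunt for any .ppt files below a starting directory; in a real
# recovery you would start from $HOME or / instead of /tmp/ppt_hunt.
rm -rf /tmp/ppt_hunt
mkdir -p /tmp/ppt_hunt/docs
touch /tmp/ppt_hunt/deck.ppt /tmp/ppt_hunt/docs/old.ppt /tmp/ppt_hunt/notes.txt
find /tmp/ppt_hunt -name '*.ppt'
```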
... ... ...
Doing Backups Wrong
Then there's the scenario of actually trying to do a backup and doing it wrong, losing all of the files on the original drive. That was the case for a person who posted on Tech Support Forum that, after purchasing a brand-new Toshiba laptop and attempting to transfer old files from an external hard drive, they inadvertently wiped the files on the external drive.
Please someone help me I last week brought a Toshiba Satellite laptop running windows 7, to replace my blue screening Dell vista laptop. On plugged in my sumo external hard drive to copy over some much treasured photos and some of my (work – music/writing.) it said installing driver. it said completed I clicked on the hard drive and found a copy of my documents from the new laptop and nothing else.
While the description of the problem is a little broken, from the sound of it, the person thought they were backing up from one direction, while they were actually backing up in the other direction. At least in this case not all of the original files were deleted, but a majority were.
[Jul 20, 2017] How Toy Story 2 Almost Got Deleted... Except That One Person Made A Home Backup
"... Panic can lead to further problems ..."
May 01, 2018 | Techdirt
Here's a random story, found via Kottke , highlighting how Pixar came very close to losing a very large portion of Toy Story 2 , because someone did an rm * (non geek: "remove all" command). And that's when they realized that their backups hadn't been working for a month. Then, the technical director of the film noted that, because she wanted to see her family and kids, she had been making copies of the entire film and transferring it to her home computer. After a careful trip from the Pixar offices to her home and back, they discovered that, indeed, most of the film was saved:
Now, mostly, this is just an amusing little anecdote, but two things struck me:
How in the world do they not have more "official" backups of something as major as Toy Story 2 ? In the clip they admit that it was potentially 20 to 30 man-years of work that may have been lost. It makes no sense to me that this would rely on a single backup system. I wonder if the copy, made by technical director Galyn Susman, was outside of corporate policy. You would have to imagine that at a place like Pixar, there were significant concerns about things "getting out," and so the policy likely wouldn't have looked all that kindly on copies being used on home computers.
The Mythbusters folks wonder if this story was a little over-dramatized , and others have wondered how the technical director would have "multiple terabytes of source material" on her home computer back in 1999. That resulted in an explanation from someone who was there that what was deleted was actually the database containing the master copies of the characters, sets, animation, etc. rather than the movie itself. Of course, once again, that makes you wonder how it is that no one else had a simple backup. You'd think such a thing would be backed up in dozens of places around the globe for safe keeping...
Hans B PUFAL ( profile ), 18 May 2012 @ 5:53am
Reminds me of ....
Some decades ago I was called to a customer site, a bank, to diagnose a computer problem. On my arrival early in the morning I noted a certain panic in the air. On querying my hosts I was told that there had been an "issue" the previous night and that they were trying, unsuccessfully, to recover data from backup tapes. The process was failing and panic ensued.
Though this was not the problem I had been called on to investigate, I asked some probing questions, made a short phone call, and provided the answer, much to the customer's relief.
What I found was that for months if not years the customer had been performing backups of indexed sequential files, that is data files with associated index files, without once verifying that the backed-up data could be recovered. On the first occasion of a problem requiring such a recovery they discovered that they just did not work.
The answer? Simply recreate the index files from the data. For efficiency reasons (this was a LONG time ago) the index files referenced the data files by physical disk addresses. When the backup tapes were restored the data was of course no longer at the original place on the disk and the index files were useless. A simple procedure to recreate the index files solved the problem.
Clearly whoever had designed that system had never tested a recovery, nor read the documentation which clearly stated the issue and its simple solution.
So here is a case of making backups, but then finding them flawed when needed.
Anonymous Coward , 18 May 2012 @ 6:00am
Re: Reminds me of ....
That's why, in the IT world, you ALWAYS do a "dry run" when you want to deploy something, and you monitor the heck out of critical systems.
Rich Kulawiec , 18 May 2012 @ 6:30am
Two notes on backups
1. Everyone who has worked in computing for any period of time has their own backup horror story. I'll spare you mine, but note, as a general observation, that large organizations/corporations tend to opt for incredibly expensive, incredibly complex, incredibly overblown backup "solutions" sold to them by vendors rather than using the stock, well-tested, reliable tools that they already have. (e.g., "why should we use dump, which is open-source/reliable/portable/tested/proven/efficient/etc., when we could drop $40K on closed-source/proprietary/non-portable/slow/bulky software from a vendor?") Okay, okay, one comment: in over 30 years of working in the field, the second-worst product I have ever had the misfortune to deal with is Legato (now EMC) NetWorker.

2. Hollywood has a massive backup and archiving problem. How do we know? Because they keep telling us about it. There are a series of self-promoting commercials that they run in theaters before movies, in which they talk about all of the old films that are slowly decaying in their canisters in vast warehouses, and how terrible this is, and how badly they need charitable contributions from the public to save these treasures of cinema before they erode into dust, etc. Let's skip the irony of Hollywood begging for money while they're paying professional liar Chris Dodd millions and get to the technical point: the easiest and cheapest way to preserve all of these would be to back them up to the Internet. Yes, there's a one-time expense of cleaning up the analog versions and then digitizing them at high resolution, but once that's done, all the copies are free. There's no need for a data center or elaborate IT infrastructure: put 'em on BitTorrent and let the world do the work. Or give copies to the Internet Archive. Whatever -- the point is that once we get past the analog issues, the only reason this is a problem is that they made it a problem by refusing to surrender control.
saulgoode ( profile ), 18 May 2012 @ 6:38am

Re: Two notes on backups
"Real Men don't make backups. They upload it via ftp and let the world mirror it." - Linus Torvalds

Anonymous Coward , 18 May 2012 @ 7:02am

What I suspect is that she was copying the rendered footage. If the footage was rendered at a resolution and rate fitting the DVD spec, that'd put the raw footage at around 3GB to 4GB for a full 90 min, which just might fit on the 10GB HDDs that were available back then on a laptop computer (remember how small OSes were back then). Even losing just the rendered raw footage (or even processed footage) would be a massive setback. It takes a long time across a lot of very powerful computers to render film-quality footage. If it was processed footage, then it's even more valuable, as that takes a lot of man-hours of post-fx work to make raw footage presentable to a consumer audience.

aldestrawk ( profile ), 18 May 2012 @ 8:34am

a retelling by Oren Jacob
Oren Jacob, the Pixar director featured in the animation, has made a comment on the Quora post that explains things in much more detail. The narration and animation were telling a story, as in storytelling. Despite the "99% true" caption at the end, a lot of details were left out, which misrepresented what had happened. Still, it was a fun tale for anyone who has dealt with backup problems. Oren Jacob's retelling in the comment makes it much more realistic and believable. The terabytes-level figure came from whoever posted the video on Quora; the video itself never mentions the actual amount of data lost or the total amount the raw files represent. Oren says, vaguely, that it was much less than a terabyte. There were backups! The last one was from two days previous to the delete event. The backup was flawed in that it produced files that, when tested by rendering, exhibited errors. They ended up patching a two-month-old backup together with the home computer version (two weeks old).
This was labor intensive, as some 30k files had to be individually checked.

The moral of the story:
• Firstly, always test a restore at some point when implementing a backup system.
• Secondly, don't panic! Panic can lead to further problems. They could well have introduced corruption in files by abruptly unplugging the computer.
• Thirdly, don't panic! Despite, somehow, deleting a large set of files, these can be recovered apart from a backup system. Deleting files, under Linux as well as just about any OS, only involves deleting the directory entries. There is software which can recover those files as long as further use of the computer system doesn't end up overwriting what is now free space.

Mason Wheeler , 18 May 2012 @ 10:01am

Re: a retelling by Oren Jacob
"Panic can lead to further problems. They could well have introduced corruption in files by abruptly unplugging the computer."
What's worse? Corrupting some files or deleting all files?

aldestrawk ( profile ), 18 May 2012 @ 10:38am

Re: Re: a retelling by Oren Jacob
In this case they were not dealing with unknown malware that was steadily erasing the system as they watched. There was, apparently, a delete event at a single point in time that had repercussions that made things disappear while people worked on the movie. I'll bet things disappeared when whatever editing was being done required a file to be refreshed. A refresh operation would make the related object disappear when the underlying file was no longer available. Apart from the set of files that had already been deleted, more files could have been corrupted when the computer was unplugged. Having said that, this occurred in 1999, when they were probably using the ext2 filesystem under Linux. These days most everyone uses a filesystem that includes journaling, which protects against corruption that may occur when a computer loses power. Ext3 is a journaling filesystem and was introduced in 2001.

In 1998 I had to rebuild my entire home computer system.
A power glitch introduced corruption in a Windows 95 system file, and use of a Norton recovery tool rendered the entire disk into a handful of unusable files. It took me ten hours to rebuild the OS, re-install all the added hardware and software, and copy personal files back from backup floppies. The next day I went out and bought a UPS. Nowadays, sometimes the UPS for one of my computers will fail during one of the three dozen power outages a year I get here. I no longer have problems with that, because of journaling.

Danny ( profile ), 18 May 2012 @ 10:49am

I've gotta story like this too
I've posted in the past on Techdirt that I used to work for Ticketmaster. There is an interesting TM story that I don't think ever made it into the public, so I will tell it now.

Back in the 1980s, each TM city was on an independent computer system (PDP Unibus systems with RM05 or CDC 9766 disk drives). The drives were fixed/removable boxes about the size of a washing machine, the removable disk platters about the size of the proverbial breadbox. Each platter held 256MB formatted. Each city had its own operations policies, but generally the systems ran with mirrored drives, the database was backed up every night, and archival copies were made monthly. In Chicago, where I worked, we did not have offsite backup in the 1980s.

The Bay Area had the most interesting system for offsite backup. The Bay Area BASS operation, bought by TM in the mid-1980s, had a deal with a taxi driver. They would make their nightly backup copies in house, and make an extra copy on a spare disk platter. This cabbie would come by the office about 2am each morning, and they'd put the spare disk platter in his trunk, swapping it for the previous day's copy that had been in his trunk for 24 hours. So, for the cost of about two platters ($700 at the time) and whatever cash they'd pay the cabbie, they had a mobile offsite copy of their database circulating the Bay Area at all times.
When the World Series earthquake hit in October 1988, the TM office in downtown Oakland was badly damaged. The only copy of the database that survived was the copy in the taxi cab.
That incident led TM corporate to establish much more sophisticated and redundant data redundancy policies.
aldestrawk ( profile ), 18 May 2012 @ 11:30am
Re: I've gotta story like this too
I like that story. Not that it matters anymore, but taxi-cab storage was probably a bad idea. The disks were undoubtedly the "Winchester" type, and when powered down the head would be parked on a "landing strip". Still, subjecting these drives to jolts from a taxi riding over bumps in the road could damage the head or cause it to be misaligned. You would have known, though, if that had actually turned out to be a problem. Also, I wouldn't trust a taxi driver with the company database. Although that is probably due to an unreasonable bias against cab drivers. I won't mention the numerous arguments with them (not in the U.S.) over fares, and the one physical fight with a driver who nearly ran me down while I was walking.
Huw Davies , 19 May 2012 @ 1:20am
Re: Re: I've gotta story like this too RM05s are removable pack drives. The heads stay in the washing machine size unit - all you remove are the platters.
That One Guy ( profile ), 18 May 2012 @ 5:00pm
What I want to know is this... She copied bits of a movie to her home system... how hard did they have to pull in the leashes to keep Disney's lawyers from suing her to infinity and beyond after she admitted she'd done so (never mind the fact that her doing so apparently saved them years of work...)?
Lance , 3 May 2014 @ 8:53am
http://thenextweb.com/media/2012/05/21/how-pixars-toy-story-2-was-deleted-twice-once-by-technology-and-again-for-its-own-good/
Evidently, the film data only took up 10 GB in those days. Nowhere near TB...
[Jul 20, 2017] Scary Backup Stories by Paul Barry
"... All the tapes were then checked, and they were all ..."
Nov 07, 2002 | Linux Journal
The dangers of not testing your backup procedures and some common pitfalls to avoid.
Backups. We all know the importance of making a backup of our most important systems. Unfortunately, some of us also know that realizing the importance of performing backups often is a lesson learned the hard way. Everyone has their scary backup stories. Here are mine.

Scary Story #1
Like a lot of people, my professional career started out in technical support. In my case, I was part of a help-desk team for a large professional practice. Among other things, we were responsible for performing PC LAN backups for a number of systems used by other departments. For one especially important system, we acquired fancy new tape-backup equipment and a large collection of tapes. A procedure was put in place, and before-you-go-home-at-night backups became standard.

Some months later, a crash brought down the system, and all the data was lost. Shortly thereafter, a call came in for the latest backup tape. It was located and dispatched, and a recovery was attempted. The recovery failed, however, as the tape was blank. A call came in for the next-to-last backup tape. Nervously, it was located and dispatched, and a recovery was attempted. It failed as well: this tape, too, was blank. Amid long silences and pink-slip glares, panic started to set in as the tape from three nights prior was called up. This attempt resulted in a lot of shouting.
All the tapes were then checked, and they were all blank. To add insult to injury, the problem wasn't only that the tapes were blank--they weren't even formatted! The fancy new backup equipment wasn't smart enough to realize the tapes were not formatted, so it allowed them to be used. Note: writing good data to an unformatted tape is never a good idea.
Now, don't get me wrong, the backup procedures themselves were good. The problem was that no one had ever tested the whole process--no one had ever attempted a recovery. Is it any wonder, then, that each recovery failed?
For backups to work, you need to do two things: (1) define and implement a good procedure and (2) test that it works.
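Requirement (2) deserves a concrete sketch: the only real test of a backup is a restore. A minimal, invented example using tar and a scratch directory (in production the archive would live on tape or a remote host, and the comparison would run against a dedicated test area):

```shell
# Back up a tree, restore it somewhere else, and compare the two.
# If diff reports nothing, the backup can actually be recovered.
rm -rf /tmp/bk
mkdir -p /tmp/bk/src /tmp/bk/restore
printf 'payroll data\n' > /tmp/bk/src/data.txt
tar -C /tmp/bk/src -cf /tmp/bk/backup.tar .
tar -C /tmp/bk/restore -xf /tmp/bk/backup.tar
diff -r /tmp/bk/src /tmp/bk/restore && echo "restore verified"
```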
To this day, I can't fathom how my boss (who had overall responsibility for the backup procedures) managed not to get fired over this incident. And what happened there has always stayed with me.
A Good Solution
When it comes to doing backups on Linux systems, a number of standard tools can help avoid the problems discussed above. Marcel Gagné's excellent book (see Resources) contains a simple yet useful script that not only performs the backup but verifies that things went well. Then, after each backup, the script sends an e-mail to root detailing what occurred.
I'll run through the guts of a modified version of Marcel's script here, to show you how easy this process actually is. This bash script starts by defining the location of a log and an error file. Two mv commands then copy the previous log and error files to allow for the examination of the next-to-last backup (if required):
#! /bin/bash
backup_log=/usr/local/.Backups/backup.log
backup_err=/usr/local/.Backups/backup.err
mv $backup_log $backup_log.old
mv $backup_err $backup_err.old
With the log and error files ready, a few echo commands append messages (note the use of >>) to each of the files. The messages include the current date and time (which is accessed using the back-ticked date command). The cd command then changes to the location of the directory to be backed up. In this example, that directory is /mnt/data, but it could be any location:
echo "Starting backup of /mnt/data: `date`." >> $backup_log
echo "Errors reported for backup/verify: `date`." >> $backup_err
cd /mnt/data
The backup then starts, using the tried and true tar command. The -cvf options request the creation of a new archive (c), verbose mode (v) and the name of the file/device to backup to (f). In this example, we backup to /dev/st0, the location of an attached SCSI tape drive:
tar -cvf /dev/st0 . 2>>$backup_err

Any errors produced by this command are sent to STDERR (standard error). The above command exploits this behaviour by appending anything sent to STDERR to the error file as well (using the 2>> directive). When the backup completes, the script then rewinds the tape using the mt command, before listing the files on the tape with another tar command (the -t option lists the files in the named archive). This is a simple way of verifying the contents of the tape. As before, we append any errors reported during this tar command to the error file. Additionally, informational messages are added to the log file at appropriate times:

mt -f /dev/st0 rewind
echo "Verifying this backup: `date`" >>$backup_log
tar -tvf /dev/st0 2>>$backup_err
echo "Backup complete: `date`" >>$backup_log
To conclude the script, we concatenate the error file to the log file (with cat), then e-mail the log file to root (where the -s option to the mail command allows the specification of an appropriate subject line):
cat $backup_err >>$backup_log
mail -s "Backup status report for /mnt/data" root < $backup_log

And there you have it: Marcel's deceptively simple solution to performing a verified backup and e-mailing the results to an interested party. If only we'd had something similar all those years ago.

... ... ...

[May 07, 2017] centos - Do not play those dangerous games with resizing of partitions unless absolutely necessary. Copying to an additional drive (can be USB), repartitioning, and then copying everything back is a safer bet

www.softpanorama.org

In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition, and you'd have the extra room. However, the number of possible things that can go wrong there is just astronomical, so I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space. --womble

[May 05, 2017] As Unix does not have a rename command, usage of mv for renaming can lead to a SNAFU

www.softpanorama.org

If the destination does not exist, mv behaves as a rename command; but if the destination exists and is a directory, mv moves the source into it, one level down. For example, suppose you have directories /home and /home2 and want to move all subdirectories from /home2 into an empty /home. You can't use "mv home2 home": if you forget to remove the empty /home directory first, mv will silently create a /home/home2 directory, and you have a real problem if these are user home directories.

[May 05, 2017] The key problem with the cp utility is that it does not preserve the timestamps of files. Windows users expect a copy command to preserve attributes, but this is not true of the Unix cp command. Using the -r option without the -p option destroys all timestamps.
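That timestamp loss is easy to demonstrate (file names invented; test's -nt operator compares modification times):

```shell
# A plain cp stamps the copy with "now"; cp -p keeps the original
# modification time, which matters for backups, make, and rsync.
touch -t 202001010000 /tmp/cp_src          # give the source an old mtime
cp /tmp/cp_src /tmp/cp_plain               # plain copy: mtime becomes "now"
cp -p /tmp/cp_src /tmp/cp_kept             # -p: original mtime preserved
[ /tmp/cp_plain -nt /tmp/cp_src ] && echo "plain copy lost the timestamp"
```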
-p -- Preserve the characteristics of the source_file. Copy the contents, modification times, and permission modes of the source_file to the destination files.
You might wish to create an alias alias cp='cp -p', as I can't imagine a case where the regular Unix behaviour is desirable.
[Feb 14, 2017] My 10 UNIX Command Line Mistakes
Feb 14, 2017 | www.cyberciti.biz
Destroyed named.conf
I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running:
./mkzone example.com > /var/named/chroot/etc/named.conf
Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to get a directory called functions. I ended up deleting the entire backup (note the -c switch instead of -x):
cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions
I had no backup. Similarly, I ended up running an rsync command that deleted all new files by overwriting them from the backup set (now I've switched to rsnapshot):
rsync -av -delete /dest /src
Again, I had no backup.
[Feb 12, 2017] Vendor support vs. local support
We had a client that said their IBM application was running slow because of the "network". (The mysterious place that packets vanish into like a black hole... lol) I explained to them that the application spans two data centers in separate states across several different pieces of equipment. They said they didn't feel like going through the process of opening another ticket with IBM, since IBM would require them to gather a bunch of logs and do a lot of investigation work on their side. Instead they decided to punt it over to the networking team by opening a ticket/incident that read something along the lines that their application was slow due to network-related issues. To help get things moving along I set up a weekly call to get a status on where we were with the troubleshooting process. The first thing I would do was a roll call.
I would ask who was on the line and then very specifically ask if IBM was on the call. Every time they informed us that IBM wasn't on the call and hadn't been engaged. We were at a standstill, and the calls would end very quickly after roll call because IBM was the missing piece. We needed someone with enough knowledge of the application to tell us what exactly was slow so we could track it down across the network. Based on the client's initial thought process of punting it over to networking, you can imagine how well they knew their application. Needless to say, after a few weeks of roll call they asked me to cancel the meetings, since they had contacted IBM and tweaked a few application settings that corrected the problems. The issue was resolved on our end by a simple roll call, which was strategically done to get this problem routed to the proper group despite the client's laziness....
[Feb 12, 2017] Stupidity of the manager effect
So the Exchange server had a bit of a hiccup one day, back when I was on the help desk. There was an hour window where one of the databases got behind and the queues had to catch up. This caused ~200 users to have slow or unresponsive Outlook clients. I got an angry call from someone in accounting after about 20 minutes of downtime, and she proceeded to assume the role of tech support manager:
Her: So is the email down?
Me: Yes, we've notified our system administrators and they have already fixed the issue. We're waiting for the server to return to normal, but for now it's playing catchup.
Her: So how are you going to prevent the help desk from getting swamped with calls? Don't you think it would be a good idea to help deflect the calls you're getting?
Me: We're actually not that swamped. The outage only applies to 205 users in the company that are on that specific database.
Her: Ok, but what are you going to do about it? What about those 205 people who are having problems? Shouldn't you notify them?
How hard is it to send a mass email letting them know that the server is down?
Me: I... don't think they would get the email if the email server is down for them.
Her: Well I'm going to send a mass email to the accounting department; I suggest you do the same for the rest of the company.
[Feb 12, 2017] Just the push of a button in an open datacenter
atreides71 Jul 15, 2015 6:49 PM
My first job was in a Hewlett Packard reseller company. The small datacenter was in plain sight from the lobby, so our sales executives could talk to visitors about the infrastructure we were using to run the company systems (ERP, email, BI, etc.), and they had the bad habit of letting people in so they could see the different solutions up close. One day one of those executives must have left the door open; it was summer holiday time, so we had a visit from a reseller accompanied by his young son, who quickly found that the door was open, came into the datacenter, and pushed a single button: the on/off button of the Progress database server that kept the ERP information. He did so and left the datacenter without being noticed. In just a couple of minutes we had a lot of calls from all the branch offices asking about the ERP service; it took us 1 or 2 hours to find the failure, check the RAID status and the database integrity, and put it online again. We were in a meeting looking for the root cause of the outage until someone had the idea to check the video from the security cameras; then we found who was really responsible for the failure of the system. Since then, the datacenter has remained closed.
[Feb 11, 2017] Being way too lazy is not always beneficial
When a customer gets a replacement disk for their SAN and doesn't replace it for a week saying "I just couldn't bring myself to care about the SAN this week." and then another disk goes bad the next day.
mleon Jul 15, 2015 9:13 AM
[Feb 10, 2017] An inventive idea of reusing the socket into which the switch was plugged
jimtech18 Jul 31, 2015 1:34 PM
No hazard pay: Replacing a failing switch in a high-pressure test lab (one with signs that warn of the danger of pinhole leaks being able to KILL you). Up near the top of the stupid-tall 20' step ladder, the lab tech holding the ladder tells me about the guy who fell off this same ladder last year and broke his hip (now he tells me). (Who puts a switch in the ceiling supports anyway? Apparently there used to be a wall there that the switch was mounted to. Construction guys removed the wall, so the switch and wires just got moved up and mounted at the ceiling! Duh, what else would you do? Long before my time.) Back to the challenge at hand. As I'm messing around with the switch, the hydrogen alarm mounted near the ceiling starts wailing, and the guy on the floor says "That's not good!" and leaves the room; remember him, he was steadying the ladder that someone fell off of last year, the one I am still at the top of. He soon returns and holds the ladder as I climb all the way down; it seems like twice as far as when I climbed up. By this time the hydrogen alarm has stopped, and both techs say that there is nothing to worry about and that I should finish the switch replacement so they can get back to work. Of course, as a SYSADMIN, I go back up the crappy 20' step ladder and finish swapping out the failed switch with a POE-powered one; problem resolved. I take the failed switch back to my office and it works fine. What? How could that be? Turns out the extension cord that the switch (mounted at the ceiling) was plugged into had been unplugged by the first tech who was HELPING me because he needed an outlet.
Once I pointed out the cause of the whole issue he said "Oh yeah, that's where that cord goes. Oh well, it's fixed now and I get to keep using the outlet, thanks".
[Feb 08, 2017] A side effect of simultaneous changes on many boxes can be a networking storm when boxes start communicating all at once
Deltona Jul 31, 2015 4:50 AM
I was supposed to do some routine redundancy tests at a remote site in another country. After implementing and testing everything successfully, I enabled EnergyWise on a couple hundred switches in one go. The broadcast storm that followed brought everything in the DC down to a halt. It took me two hours to figure out why this happened and I missed my flight back home. A couple of months later, a dozen-plus firmwares were released to address this issue.
[Feb 07, 2017] Troubleshooting method for networking problems: work up the OSI model - layer 1 - check the cabling
Troubleshooting method - work up the OSI model - layer 1 - check the cabling. After checking the cabling, check the cabling again. Before you're ready to escalate, ask for help, check the cabling again.
[Feb 06, 2017] The way to keep senior management informed
I was working for Network Operations in a company several years back. It was a small company and we had a VP that was not tech savvy. We were having an issue one day, and he came running into the Network Operations Center asking what was going on. One of our coworkers looked at him and said, relax, it is no big deal, we have everything under control. He asked what was the problem. Our coworker said, "the flux capacitor stopped working, but we got it restarted." The VP said OK, turned around and left the room to go report to the execs about our Flux Capacitor issue....
[Feb 05, 2017] Cutting yourself off from a networked server by taking the eth0 interface down and then up
jemertz Mar 30, 2016 10:26 AM
When working in a remote lab, on a Linux server which you're connecting to through eth0, use:
ifdown eth0; ifup eth0
not:
ifdown eth0
ifup eth0
Doing it on one line means the interface comes back up right after it goes down. Doing it on two lines means you lose the connection before you can type the second line. I figured this out the hard way, and haven't made the same mistake a second time.
[Feb 04, 2017] How do I fix the mess created by accidentally untarred files in the current dir, aka a tar bomb
Highly recommended! In such cases the UID of the files is often different from the UID of "legitimate" files in the polluted directories, and you can probably use this fact for quick elimination of the tar bomb. The idea of using the list of files from the tar bomb to eliminate the offending files also works if you observe some precautions -- some directories that were created can have the same names as existing directories. Never do rm in -exec or via xargs without testing.
Notable quotes:
"... You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking! ..."
"... Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done. ..."
"... One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column. ..."
"... That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested. ..."
"... The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. ..."
"...
The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want. ..."
Feb 04, 2017 | superuser.com
linux - Undo tar file extraction mess - Super User
First, issue
tar tf archive.tar
and tar will list the contents line by line. This can be piped to xargs directly, but beware: do the deletion very carefully. You don't want to just rm -r everything that tar tf tells you, since it might include directories that were not empty before unpacking! You could do
tar tf archive.tar | xargs -d'\n' rm -v
tar tf archive.tar | sort -r | xargs -d'\n' rmdir -v
to first remove all files that were in the archive, and then the directories that are left empty. sort -r (glennjackman suggested tac instead of sort -r in the comments to the accepted answer, which also works since tar's output is regular enough) is needed to delete the deepest directories first; otherwise a case where dir1 contains a single empty directory dir2 will leave dir1 after the rmdir pass, since it was not empty before dir2 was removed. This will generate a lot of
rm: cannot remove `dir/': Is a directory
rmdir: failed to remove `dir/': Directory not empty
rmdir: failed to remove `file': Not a directory
Shut this up with 2>/dev/null if it annoys you, but I'd prefer to keep as much information on the process as possible. And don't do it until you are sure that you match the right files. And perhaps try rm -i to confirm everything. And have backups, eat your breakfast, brush your teeth, etc.
===
List the contents of the tar file like so:
tar tzf myarchive.tar.gz
Then, delete those file names by iterating over that list:
while IFS= read -r file; do echo "$file"; done < <(tar tzf myarchive.tar.gz)
This will still just list the files that would be deleted. Replace echo with rm if you're really sure these are the ones you want to remove. And maybe make a backup to be sure.
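Both cleanup passes (files first, then leftover directories) can be rehearsed end-to-end in a throwaway directory before touching anything real. The sketch below (all archive and file names are hypothetical) builds a tiny tar bomb, "explodes" it, and then removes exactly what the archive listed:

```shell
# Hypothetical demo: build a tiny tar bomb, explode it, then clean up
# using only the archive's own listing. Runs entirely in a temp dir.
d=$(mktemp -d) && cd "$d" || exit 1
mkdir sub && touch sub/f1 top1      # files a careless archive would drop
tar czf bomb.tgz sub top1
rm -rf sub top1
tar xzf bomb.tgz                    # the "explosion"

# Pass 1: remove plain files named in the archive
tar tzf bomb.tgz | while IFS= read -r f; do
    if [ -f "$f" ]; then rm -v -- "$f"; fi
done

# Pass 2: remove now-empty directories, deepest first
tar tzf bomb.tgz | sort -r | while IFS= read -r f; do
    if [ -d "$f" ]; then rmdir -v -- "$f"; fi
done

ls    # only bomb.tgz itself is left
```

The same two loops, pointed at the real archive, reproduce the recipe above; the rehearsal just proves they delete nothing beyond the listing.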
In a second pass, remove the directories that are left over:
while IFS= read -r file; do rmdir "$file"; done < <(tar tzf myarchive.tar.gz)
This prevents directories that already existed before the extraction from being deleted. Another nice trick by @glennjackman, which preserves the order of files, starting from the deepest ones. Again, remove echo when done.
tar tvf myarchive.tar | tac | xargs -d'\n' echo rm
This could then be followed by the normal rmdir cleanup. Here's a possibility that will take the extracted files and move them to a subdirectory, cleaning up your main folder.
#!/usr/bin/perl -w
use strict;
use Getopt::Long;

my $clean_folder = "clean";
my $DRY_RUN;
die "Usage: $0 [--dry] [--clean=dir-name]\n"
    if (!GetOptions("dry!" => \$DRY_RUN, "clean=s" => \$clean_folder));

# Protect the 'clean_folder' string from shell substitution
$clean_folder =~ s/'/'\\''/g;

# Process the "tar tv" listing and output a shell script.
print "#!/bin/sh\n" if (!$DRY_RUN);

while (<>)
{
    chomp;

    # Strip out permissions string and the directory entry from the 'tar' list
    my $perms  = substr($_, 0, 10);
    my $dirent = substr($_, 48);

    # Drop entries that are in subdirectories
    next if ($dirent =~ m:/.:);

    # If we're in "dry run" mode, just list the permissions and the
    # directory entries.
    if ($DRY_RUN)
    {
        print "$perms|$dirent\n";
        next;
    }

    # Emit the shell code to clean up the folder
    $dirent =~ s/'/'\\''/g;
    print "mv -i '$dirent' '$clean_folder'/.\n";
}
Save this to the file fix-tar.pl and then execute it like this:
$ tar tvf myarchive.tar | perl fix-tar.pl --dry
This will confirm that your tar list is like mine. You should get output like:
-rw-rw-r--|batch
-rw-rw-r--|book-report.png
-rwx------|CaseReports.png
-rw-rw-r--|caseTree.png
-rw-rw-r--|tree.png
drwxrwxr-x|sample/
If that looks good, then run it again like this:
$ mkdir cleanup
$ tar tvf myarchive.tar | perl fix-tar.pl --clean=cleanup > fixup.sh
The fixup.sh script will be the shell commands that will move the top-level files and directories into a "clean" folder (in this instance, the folder called cleanup). Have a peek through this script to confirm that it's all kosher. If it is, you can now clean up your mess with:
$ sh fixup.sh
I prefer this kind of cleanup because it doesn't destroy anything that isn't already destroyed by being overwritten by that initial tar xv. Note: if that initial dry run output doesn't look right, you should be able to fiddle with the numbers in the two substr function calls until they look proper. The $perms variable is used only for the dry run, so really only the $dirent substring needs to be proper. One other thing: you may need to use the tar option --numeric-owner if the user names and/or group names in the tar listing make the names start in an unpredictable column.
===
That kind of (antisocial) archive is called a tar bomb because of what it does. Once one of these "explodes" on you, the solutions in the other answers are way better than what I would have suggested. The best "solution", however, is to prevent the problem in the first place. The easiest (laziest) way to do that is to always unpack a tar archive into an empty directory. If it includes a top level directory, then you just move that to the desired destination. If not, then just rename your working directory (the one that was empty) and move that to the desired location. If you just want to get it right the first time, you can run tar -tvf archive-file.tar | less and it will list the contents of the archive so you can see how it is structured and then do what is necessary to extract it to the desired location to start with. The t option also comes in handy if you want to inspect the contents of an archive just to see if it has something you're looking for in it. If it does, you can, optionally, just extract the file(s) you want.
[Feb 04, 2017] Restoring deleted /tmp folder
Jan 13, 2015 | cyberciti.biz
As my journey continues with Linux and Unix shell, I made a few mistakes. I accidentally deleted the /tmp folder.
To restore it, all you have to do is:
mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints
Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command's destination directories, serving much like a "home for directories". The danger is in creating too complex a CDPATH. Often a single directory works best. For example, export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by: jonbauman
I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:
The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use): CDPATH=".:~:~/Library"
This way, no matter where I am in the directory tree, I can just cd dirname, and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents
/Users/baumanj/Documents
$ cd Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]
cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM
Check out the bash command shopt -s cdable_vars
From the man bash page:
cdable_vars If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.
With this set, if I give the following bash command: export d="/Users/chap/Desktop"
I can then simply type cd d to change to my Desktop directory. I put the shopt command and the various export commands in my .bashrc file.
[Aug 04, 2015] My 10 UNIX Command Line Mistakes by Vivek Gite
The thread of comments after the article is very educational. We reproduce only a small fraction.
June 21, 2009
Anyone who has never made a mistake has never tried anything new. -- Albert Einstein.
Here are a few mistakes that I made while working at the UNIX prompt. Some mistakes caused me a good amount of downtime. Most of these mistakes are from my early days as a UNIX admin.
userdel Command
The file /etc/deluser.conf was configured to remove the home directory (it was done by the previous sysadmin and it was my first day at work) and mail spool of the user to be removed. I just wanted to remove the user account and I ended up deleting everything (note -r was activated via deluser.conf):
userdel foo
... ... ...
Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to get a directory called functions. I ended up deleting the entire backup (note the -c switch instead of -x):
cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions
I had no backup.
Similarly, I ended up running an rsync command that deleted all new files by overwriting them from the backup set (now I've switched to rsnapshot):
rsync -av -delete /dest /src
Again, I had no backup.
Deleted Apache DocumentRoot
I had symlinks for my web server docroot (/home/httpd/http was symlinked to /www). I forgot about the symlink. To save disk space, I ran rm -rf on the http directory. Luckily, I had a full working backup set.
... ... ...
Public Network Interface Shutdown
I wanted to shut down the VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH:
ifconfig eth1 down
Firewall Lockdown
I made changes to sshd_config and changed the ssh port number from 22 to 1022, but failed to update the firewall rules. After a quick kernel upgrade, I rebooted the box. I had to call the remote data center tech to reset the firewall settings. (Now I use a firewall reset script to avoid lockdowns.)
Typing UNIX Commands on the Wrong Box
I wanted to shut down my local Fedora desktop system, but I issued halt on a remote server (I was logged into the remote box via SSH):
halt
service httpd stop
Wrong CNAME DNS Entry
Created a wrong DNS CNAME entry in the example.com zone file. The end result: a few visitors went to /dev/null:
echo 'foo 86400 IN CNAME lb0.example.com' >> example.com && rndc reload
Failed To Update Postfix RBL Configuration
In 2006 ORDB went out of operation. But I failed to update my Postfix RBL settings. One day ORDB was re-activated and it was returning every IP address queried as being on its blacklist. The end result was a disaster.
Conclusion
All men make mistakes, but only wise men learn from their mistakes -- Winston Churchill.
From all those mistakes I've learnt that:
1. Backup = ( Full + Removable tapes (or media) + Offline + Offsite + Tested )
2. The clear choice for preserving all data of UNIX file systems is dump, which is the only tool that guarantees recovery under all conditions. (See the Torture-testing Backup and Archive Programs paper.)
3.
Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot. 4. Use CVS to store configuration files. 5. Wait and read the command line again before hitting the damn [Enter] key. 6. Use your well-tested perl / shell scripts and open source configuration management software such as Puppet, Cfengine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and so on.
Mistakes are inevitable, so did you make any mistakes that caused some sort of downtime? Please add them into the comments below.
Jon June 21, 2009, 2:42 am
My all time favorite mistake was a simple extra space:
cd /usr/lib
ls /tmp/foo/bar
I typed
rm -rf /tmp/foo/bar/ *
instead of
rm -rf /tmp/foo/bar/*
The system doesn't run very well without all of its libraries……
Vinicius August 21, 2010, 5:42 pm
I did something similar on a remote server. I was going to type 'chmod -R 755 ./' but I typed 'chmod -R 755 /' |:
Daniel December 30, 2013, 9:40 pm
I typed 'chmod -R 777' to allow all files to have rwx permissions from all users (RPi). Doesn't work that well without sudo!
robert wlaschin May 1, 2012, 9:57 pm
Hm… I was trying to format a USB flash drive:
dd if=big_null_file of=/dev/sdb
Unfortunately /dev/sdb was my local secondary drive; sdc was the USB … shucks. I discovered this after I rebooted.
Jeff April 21, 2011, 10:46 pm
I did something similar on my first day as a junior admin. As root, I copied my buddy's dot files (.profile, etc.) from his home directory to mine because he had some cool customizations. He also had some scripts in a directory called .scripts/ that he wanted me to copy. I gave myself ownership of the dot files and the contents of the .scripts directory with this command:
cd ~jeff; chown -R jeff .*
It was only later that I realized that ".*" matched "." and "..", so my userid owned the entire machine… which happened to be our production Oracle database.
That was 15 years ago and we've both changed jobs a few times, but that friend reminds me of that mistake every time I see him. Garry April 11, 2014, 8:02 pm I once had a bunch of dot files I wanted to remove. So I did: rm -r .* This, of course, includes ".." – recursively. I had taken over SysAdmin of a server. The server had a cron job that ran, as root, that cd'ed into a directory and did a find, removing any files older than 3 days. It was to clean up the log files of some program they had. They quit using the program. About a year later, someone removed the directory. The cron job ran. The cd into the log file directory didn't work, but the cron job kept going. It was still in / – removing any files older than 3 days old! I restored the filesystems and went home to get some sleep, thinking I would investigate root cause after I had some rest. As soon as my head hit the pillow, the phone rang. "It did it again". The cron job had run again. Lastly, I once had an accidental copy & paste, which renamed (mv) /usr/lib. Did you know the "mv" command uses libraries in /usr/lib? I found that out the hard way when I discovered I could not move it back to its original pathname. Nor could I copy it (cp uses /usr/lib). An "Ohnosecond" is defined as the period of time between when you hit enter and you realize what you just did. Michael Shigorin April 12, 2014, 8:14 am That's why set -e or #!/bin/sh -e (in this particular case I'd just tell find that_dir … though). --[The -e flag's long name is errexit, causing the script to immediately exit on the first error. -- NNB] My .. incident has taught me to hit tab just in case to see what actually gets removed; BTW zsh is very helpful in that regard, it has some safety net means for the usual * ~ cases - but then again touching nothing with destructive tools when tired, especially as root, is a bitter but prudent decision. 
Regarding /usr/lib: ALT Linux coreutils are built properly ;-) (although there are some leftovers as we've found when looking with some Gentoo guys at the LVEE conference)
georgesdev June 21, 2009, 9:15 am
Never type anything such as:
rm -rf /usr/tmp/whatever
Maybe you are going to hit Enter by mistake before the end of the line. You would then, for example, erase your whole disk starting at /. If you want to use the -rf option, add it at the end of the line:
rm /usr/tmp/whatever -rf
And even this way, read your line twice before adding -rf.
Daniel Hoherd May 4, 2012, 4:58 pm
Another good test is to first do "echo rm -rf /dir/whatever/*" to see the expansion of the glob and what will be deleted. I especially do this when writing loops, then just pipe to bash when I know I've got it right.
Denis November 23, 2010, 9:27 am
I think it is a good practice to use the -i parameter with -rf:
rm -rfi /usr/tmp/whatever
-i will ask you whether you are sure you want to delete all that stuff.
John February 25, 2011, 11:11 am
I worked with a guy who always used "rm -rf" to delete anything. And he always logged in as root. Another worker set the stage for him by creating a file called "~" in a visible location (that would be a file entered as "\~", so as not to expand to the user's home directory). User one then dealt with that file with "rm -rf ~". This was when the root home directory was / and not something like /root. You got it.
Cody March 22, 2011, 1:33 pm
(Note to mod: put this in the wrong place initially; sorry about that. Here is the correct place.)
This reminds me of when I told a friend a way to auto-log out on login (many ways but this would be more obscure). He then told someone who was "annoying" him to try it on his shell. The end result was that this person was furious. Quite so. And although I don't find it so funny now (keyword not as – I still think it's amusing), I found it hilarious then (hey, I was young and as obnoxious as can be!).
The command, for what it's worth:
echo 'PS1=`kill -9 0`' >> ~/.bash_profile
Yes, that's setting the prompt to run the command kill -9 0 upon sourcing of ~/.bash_profile, which means: kill that shell. Bad idea! I don't even remember what inspired me to think of that command as this was years and years ago. However, it does bring up an important point:
Word to the wise: if you do not know what a command does, don't run it! Amazing how many fail that one…
Peter Odding January 7, 2012, 6:40 pm
I once read a nasty trick that's also fun in a very sadistic kind of way:
echo 'echo sleep 1 >> ~/.profile' >> /home/unhappy-user/.profile
The idea is that every time the user logs in it will take a second longer than the previous time… This stacks up quickly and gets reallllly annoying :-)
Daniel April 23, 2015, 10:53 am
What about
echo "PS1=$PS1 ; sleep 1" >> ~/.bash_profile
I'm not sure if it works, but it's pretty cool.
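Daniel Hoherd's earlier tip (prefix a destructive glob with echo to preview its expansion) is worth rehearsing; here is a small sketch with throwaway file names in a temporary directory:

```shell
# Hypothetical demo of previewing a glob before deleting with it.
d=$(mktemp -d) && cd "$d" || exit 1
touch a.o b.o keep.c
echo rm *.o        # preview: prints "rm a.o b.o", deletes nothing
rm *.o             # run it only after the preview looks right
ls                 # keep.c survives
```

The preview costs one word of typing and makes mistakes like `rm *>o` or a stray space visible before anything is destroyed.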
3ToKoJ June 21, 2009, 9:26 am
public network interface shutdown … done
typing unix command on wrong box … done
Delete apache DocumentRoot … done
Firewall lockdown … done, with a NAT rule redirecting the configuration interface of the firewall to another box; a serial connection saved me
I can add being trapped by aptitude keeping track of previously planned but not executed actions, like "remove slapd from the master directory server"
UnixEagle June 21, 2009, 11:03 am
Rebooted the wrong box
While adding an alias to the main network interface I ended up changing the main IP address; the system froze right away and I had to call for a reboot
Instead of appending text to an Apache config file, I overwrote its contents
Firewall lockdown while changing the ssh port
Wrongly ran a script containing a recursive chmod and chown as root on /; it caused me about 12 hours of downtime and a complete re-install
Some mistakes are really silly, and when they happen, you can't believe you did that; but every mistake, regardless of its silliness, should be a learned lesson.
If you make a trivial mistake, you should not just overlook it; you have to think about the reasons that made you do it, like: you didn't have much sleep, or your mind was occupied with personal life, etc.
I like Einstein's quote: you really have to make mistakes to learn.
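The append-versus-overwrite slip in the list above comes down to a single character; a quick sketch with a temporary file shows the difference:

```shell
# Hypothetical demo: '>>' appends, '>' silently truncates first.
f=$(mktemp)
echo "Listen 80" > "$f"                 # create the file with one line
echo "ServerName example.com" >> "$f"   # append: file now has two lines
wc -l < "$f"
echo "oops" > "$f"                      # overwrite: previous contents gone
wc -l < "$f"
```

On a config file you cannot easily regenerate, a cp of the file before editing (or keeping it in version control, as the article's conclusion suggests) turns this one-character slip into a non-event.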
smaramba June 21, 2009, 11:31 am
Typing a unix command on the wrong box and firewall lockdown are all-time classics: been there, done that. But for me the absolute worst, on Linux, was checking a mounted filesystem on a production server…
fsck /dev/sda2
The root filesystem was rendered unreadable. System down. Dead. Users really pissed off. Fortunately there was a full backup and the machine rebooted within an hour.
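A cheap guard against that mistake is to check the mounts table before running fsck. This sketch is a hypothetical wrapper (the function names are made up, echo stands in for the real fsck, and the optional second argument to is_mounted exists only so the check can be tested against a fake mounts file):

```shell
# Hypothetical guard: refuse to fsck a device that appears in the
# mounts table (defaults to /proc/mounts; a file path can be injected
# for testing).
is_mounted() {
    grep -q "^$1 " "${2:-/proc/mounts}"
}

safe_fsck() {
    if is_mounted "$1"; then
        echo "refusing: $1 is mounted"
        return 1
    fi
    echo "would run: fsck -n $1"   # replace echo with the real fsck
}
```

Called as safe_fsck /dev/sda2 on a live root filesystem, the wrapper prints a refusal instead of corrupting the mounted filesystem.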
Don May 10, 2011, 4:14 pm
I know this thread is a couple of years old but …
Using lpr from the command line, forgetting that I was logged in to a remote machine in another state. My print job contained sensitive information which was now on a printer several hundred miles away! Fortunately, a friend intercepted the message and emailed me while I was trying to figure out what was wrong with my printer :-)
od June 21, 2009, 12:50 pm
"Typing UNIX Commands on Wrong Box"
Yea, I did that one too. Wanted to shut down my own vm but I issued init 0 on a remote server which I accessed via ssh. And oh yes, it was the production server.
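A small safeguard against halting the wrong box is to make the command demand the hostname back before it runs. A sketch (the Debian molly-guard package implements the same idea for real; the function name is made up):

```shell
# Hypothetical wrapper: refuse to halt unless the operator retypes
# the hostname of the machine the shell is actually running on.
confirm_halt() {
    printf 'About to halt %s. Type the hostname to confirm: ' "$(hostname)"
    read -r answer
    if [ "$answer" = "$(hostname)" ]; then
        echo "confirmed"      # the real version would run /sbin/halt here
    else
        echo "aborted"
        return 1
    fi
}
```

Aliasing `halt` and `init` to such a wrapper on production boxes turns "oh yes, it was the production server" into a harmless prompt.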
Adi June 21, 2009, 10:24 pm
tar -czvf /path/to/file file_archive.tgz
instead of:
tar -czvf file_archive.tgz /path/to/file
I ended up destroying that file and had no backup, as this command was intended to provide the first backup – it was on the DHCP Linux production server and the file was dhcpd.conf!
"rm" Is Forever
The principles above combine into real-life horror stories. A series of
exchanges on the Usenet news group alt.folklore.computers illustrates
our case:
Date: Wed, 10 Jan 90
From: djones@megatest.uucp (Dave Jones)
Subject: rm *
Newsgroups: alt.folklore.computers
Anybody else ever intend to type:
% rm *.o
And type this by accident:
% rm *>o
Now you've got one new empty file called "o", but plenty of room
for it!
Actually, you might not even get a file named "o" since the shell documentation
doesn't specify if the output file "o" gets created before or after the
wildcard expansion takes place. The shell may be a programming language,
but it isn't a very precise one.
Date: Wed, 10 Jan 90 15:51 CST
From: ram@attcan.uucp
Subject: Re: rm *
Newsgroups: alt.folklore.computers
I too have had a similar disaster using rm. Once I was removing a file
system from my disk which was something like /usr/foo/bin. I was in
/usr/foo and had removed several parts of the system by:
% rm -r ./etc
…and so on. But when it came time to do ./bin, I missed the period.
System didn't like that too much.
Unix wasn't designed to live after the mortal blow of losing its /bin directory.
An intelligent operating system would have given the user a chance to
recover (or at least confirm whether he really wanted to render the operating
system inoperable).
Unix aficionados accept occasional file deletion as normal. For example,
consider the following excerpt from the comp.unix.questions FAQ:
6) How do I "undelete" a file?
Someday, you are going to accidentally type something like:
% rm * .foo
and find you just deleted "*" instead of "*.foo". Consider it a
rite of passage.
Of course, any decent systems administrator should be doing
regular backups, so check whether a backup copy of your file is available.
"A rite of passage"? In no other industry could a manufacturer take such a
cavalier attitude toward a faulty product. "But your honor, the exploding
gas tank was just a rite of passage." "Ladies and gentlemen of the jury, we
will prove that the damage caused by the failure of the safety catch on our …"
[Footnote: comp.unix.questions is an international bulletin-board where users new to the
Unix Gulag ask questions of others who have been there so long that they don't
know of any other world. The FAQ is a list of Frequently Asked Questions garnered …]
Changing rm's Behavior Is Not an Option
After being bitten by rm a few times, the impulse rises to alias the rm command
so that it does an "rm -i" or, better yet, to replace the rm command
with a program that moves the files to be deleted to a special hidden directory,
such as ~/.deleted. These tricks lull innocent users into a false sense
of security.
Date: Mon, 16 Apr 90 18:46:33 199
From: Phil Agre <agre@gargoyle.uchicago.edu>
To: UNIX-HATERS
Subject: deletion
On our system, "rm" doesn't delete the file, rather it renames in some
obscure way the file so that something called "undelete" (not
"unrm") can get it back.
… course I can always undelete them. Well, no I can't. The Delete File
command in Emacs doesn't work this way, nor does the D command
in Dired. This, of course, is because the undeletion protocol is not
part of the operating system's model of files but simply part of a
kludge someone put in a shell command that happens to be called
"rm."
As a result, I have to keep two separate concepts in my head, "deleting"
a file and "rm'ing" it, and remind myself of which of the two of
them I am actually performing when my head says to my hands
"delete it."
Some Unix experts follow Phil's argument to its logical absurdity and
maintain that it is better not to make commands like rm even a slight bit
friendly. They argue, though not quite in the terms we use, that trying to
make Unix friendlier, to give it basic amenities, will actually make it
worse. Unfortunately, they are right.
[Sep 04, 2014] Blunders with expansion of tar files, structure of which you do not understand
If you expand a tar file whose structure you do not understand in a production directory, you can accidentally overwrite files and change the ownership of directories, and then spend a lot of time restoring the status quo. It is safer to expand such tar files in /tmp first, inspect the results, and only then decide whether to copy some directories over or re-expand the tar file, this time in the production directory.
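A sketch of that safer workflow. The archive here is a stand-in built on the spot so the steps are runnable; in real life you would start at the `tar -tzf` step with the mystery archive:

```shell
# Build a throwaway demo archive standing in for the unknown tar file.
work=$(mktemp -d)
cd "$work"
mkdir -p demo/etc && echo "x=1" > demo/etc/app.conf
tar -czf backup.tgz demo

tar -tzf backup.tgz                 # 1. list the contents first
mkdir staging
tar -xzf backup.tgz -C staging      # 2. expand in a scratch area, not in production
ls staging/demo/etc                 # 3. inspect, then copy out deliberately
```

The `-t` listing alone reveals whether the archive uses absolute paths or will splatter files into the current directory.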
[Sep 03, 2014] Doing operation in a wrong directory among several similar directories
Sometimes directories are very similar, for example numbered directories created by some application, such as task0001, task0002, ... task0256. In this case you can easily perform an operation on the wrong directory. For example, you send tech support a tar file of a directory that, instead of test data, contains a production run.
[Oct 17, 2013] Crontab file - The UNIX and Linux Forums
The loss of a crontab is serious trouble. This is one of the typical sysadmin blunders (Crontab file - The UNIX and Linux Forums):
Hi All,
I created a crontab entry in a cron.txt file and accidentally entered
crontab cron.txt.
Now my previous crontab -l entries are not showing up, which means I removed the scheduling of the previous jobs by running the command "crontab cron.txt".
How do I revert back to the previously scheduled jobs?
Thanks.
In this case, if you do not have a backup, your only remedy is to try to extract the cron commands from /var/log/messages.
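A cheap insurance habit against exactly this blunder: save the current crontab to a dated file before replacing anything (the backup path is illustrative):

```shell
# Keep a dated copy of the current crontab so a stray
# "crontab cron.txt" can be undone.
backup="$HOME/crontab.backup.$(date +%Y%m%d)"
crontab -l > "$backup" 2>/dev/null || echo "# no existing crontab" >> "$backup"
echo "current crontab saved to $backup"
# to restore later:
#   crontab "$backup"
```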
[Jul 17, 2012] My 10 UNIX Command Line Mistakes
Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to get a directory called functions. I ended up deleting the entire backup (note the -c switch instead of -x):
cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions
I had no backup. Similarly, I ended up running an rsync command and deleted all new files by overwriting them from the backup set (now I've switched to rsnapshot):
rsync -av --delete /dest /src
Deleted Apache DocumentRoot
I had symlinks for my web server docroot (/home/httpd/http was symlinked to /www). I forgot about the symlink. To save disk space, I ran rm -rf on the http directory. Luckily, I had a full working backup set.
Public Network Interface Shutdown
I wanted to shut down the VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH:
ifconfig eth1 down
Firewall Lockdown
I made changes to sshd_config and changed the ssh port from 22 to 1022, but failed to update the firewall rules. After a quick kernel upgrade, I rebooted the box. I had to call a remote data center tech to reset the firewall settings. (Now I use a firewall-reset script to avoid lockdowns.)
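A common trick to avoid this kind of lockout is to schedule an automatic revert before touching the rules, and cancel it only once you have confirmed you can still get in. A sketch (the flush command and the five-minute delay are illustrative):

```shell
# Safety net: if we lock ourselves out, the rules are flushed
# automatically in 5 minutes and we can reconnect.
(sleep 300 && iptables -F) &
revert_pid=$!
# ... apply the new firewall rules here, then open a NEW ssh
# session to verify you can still log in ...
kill "$revert_pid"   # made it back in: cancel the automatic revert
```

The same pattern works with `at now + 5 minutes` instead of a background `sleep`, which survives the ssh session dying.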
Typing UNIX Commands on Wrong Box
I wanted to shut down my local Fedora desktop system, but I issued halt on a remote server (I was logged into the remote box via SSH):
halt
service httpd stop
Conclusion
All men make mistakes, but only wise men learn from their mistakes -- Winston Churchill.
From all those mistakes I've learnt that:
1. Backup = ( Full + Removable tapes (or media) + Offline + Offsite + Tested )
2. The clear choice for preserving all data on UNIX file systems is dump, which is the only tool that guarantees recovery under all conditions (see the Torture-testing Backup and Archive Programs paper).
3. Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot.
4. Use CVS to store configuration files.
5. Wait and read the command line again before hitting the damn [Enter] key.
6. Use your well-tested perl/shell scripts and open source configuration management software such as Puppet, Cfengine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and so on.
Mistakes are inevitable, so have you made any mistakes that caused some sort of downtime? Please add them in the comments below.
[May 17, 2012] Pixar's The Movie Vanishes, How Toy Story 2 Was Nearly Lost
In the 2010 animated short Studio Stories: The Movie Vanishes, Pixar's Oren Jacob and Galyn Susman tell how a big chunk of the Toy Story 2 movie files was nearly lost due to the accidental use of a Linux rm command (and a poor backup system). This short was included in the Toy Story 2 DVD extras.
Pixar studio stories - The movie vanishes (full) - YouTube
[Mar 16, 2012] Using right command in a wrong place
From email to Editor of Softpanorama...
This happened with OpenView. It has a command for agent reinstallation, opc-inst -r. The problem is that it needs to be run on the node, not on the server, and it does not accept any arguments.
In this case it was run on the server, with predictable results. This was a production server of a large corporation, so you can imagine the level of stress involved in putting out this fire...
[Oct 14, 2011] Nasty surprise with the command cd joeuser; chown -R joeuser:joeuser .*
This is a classic case of the side effect of the .* pattern combined with the -R flag: .* matches "..", so the recursion climbs into the parent directory and traverses the whole tree. The key issue here is not to panic. Recovery is possible even if you do not have a map of all file permissions (and you had better make one on a regular basis). On an RPM-based system, the first step is to restore ownership:
for p in $(rpm -qa); do rpm --setugids $p; done
A similar approach can be used for restoring permissions:
for p in $(rpm -qa); do rpm --setperms $p; done
[Jul 22, 2011] Mailbag by Marcello Romani
Feb 02, 2011 | LG #186
Hi, I had a horror story similar to Ben's, about two years ago. I backed up a PC and reinstalled the OS with the backup USB disk still attached. The OS I was reinstalling was a version of Windows (2000 or XP, I don't remember which). When the partition-creation screen appeared, the list items looked a bit different from what I was expecting, but by the time I realized why, my fingers had already pressed the keys, deleting the existing partitions and creating a new NTFS one. Luckily, I stopped just before the "quick format" command... Searching the net for data recovery software, I came across TestDisk, which is targeted at partition table recovery. I was lucky enough to have wiped out only that portion of the USB disk, so in less than an hour I was able to regain access to all of my data. Since then I always "safely remove" USB disks from the machine before doing anything potentially dangerous, and check "fdisk -l" at least three times before deciding that the arguments to "dd" are written correctly...
Marcello Romani
[Jul 03, 2011] Be careful with naming servers
Some applications, like Oracle products, are sensitive to the DNS names you use, especially the hostname. They store it in multiple places, and there is no easy way to change it in all those places after the Oracle product is installed. They also accept only the long hostname (i.e. box.location.firm.com), not the short one.
If you mess up your hostname and a DBA has installed an Oracle product, you usually need to reinstall the box.
Such errors can happen if you copy files from one server to another to speed up the installation and forget to modify the /etc/hosts file, or modify it incorrectly.
[Jun 03, 2011] Sysadmin Tales of Terror by Carla Schroder
February 19, 2003 | Enterprise Networking Planet
Cover One's Behind With Glory
Now let's be honest, documentation is boring and no fun. I don't care; just do it. Keep a project diary. Record everything you find. You don't want to shoulder the blame for someone else's mistakes or malfeasance. It is unlikely you'll get into legal trouble, but the possibility always exists. Record progress and milestones as well. Those in management tend to have short memories and limited attention spans when it comes to technical matters, so put everything in writing and make a point of reviewing your progress periodically. No need to put on long, windy presentations -- take ten minutes once a week to hit the high points. Emphasize the good news; after all, as the ace sysadmin, it is your job to make things work. Any dork can make a mess; it takes a real star to deliver the goods.
Be sure to couch your progress in terms meaningful to the person(s) you're talking to. A non-technical manager doesn't want to hear how many scripts you rewrote or how many routers you re-programmed. She wants to hear "Group A's email works flawlessly now, and I fixed their database server so it doesn't crash anymore. No more downtime for Group A." That kind of talk is music to a manager's ears.
Managing Users
In every business there are certain key people who wield great influence. They can make or break you. Don't focus exclusively on management -- the people who really run the show are the secretaries and administrative assistants. They know more than anyone about how things work, what's really important, and who is really important. Consult them. Listen to them. Suck up to them. Trust me, this will pay off handsomely. Also worth cultivating are relationships with the cleaning and maintenance people -- they see things no one else even knows about.
When you're new on the job and still figuring things out, the last thing you need is to field endless phone calls from users with problems. Make them put it in writing -- email, yellow pad, elaborate trouble-ticket system, whatever suits you. This gives you useful information and time to do some triage.
Managing Remote Users
If you have remote offices under your care, the phone can save a lot of travel. There's almost always one computer-savvy person in every office; make this person your ally and helper. At very least, this person will be able to give you coherent, understandable explanations. At best, they will be your remote hands and eyes, and will save you much trouble.
Such a person may be a candidate for training and possibly transferring to IT. Some people are afraid of helping someone like this for fear of losing out to them in some way. The truth, though, is that you never lose by helping people, so don't let that idea scare you off from giving a boost to a worthy person.
Getting Help
We all know how to use Google, Usenet, and other online resources to get assistance. By all means, don't be too proud -- ask! And by all means, don't be stupid either -- use a fake name and don't mention the company you work for. There's absolutely no upside to making such information public; there are, however, many downsides to doing so, like inviting security breaches, giving away too much information, making your company look bad, and besmirching your own reputation.
As I said at the beginning, these are strategies that have served me well. Feel free to send me your own ideas; I especially love to hear about true-life horror stories that have happy endings.
Resources
[Jun 20, 2010] IT Resource Center forums - greatest blunders
Bill McNAMARA
I've done this with people looking over my shoulder (while in single-user mode):
echo "/dev/vg00/lvol6 /tmp vxfs delaylog 0 2" > /etc/fstab
reboot!!
Other good ones:
mv /dev/ /Dev
(try it - and don't ask why!!)
Later,
Bill
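Blunders like that fstab one-liner (a single `>` clobbering the whole file) are why some admins enable the shell's noclobber flag, which makes `>` refuse to truncate an existing file. A sketch (`set -C` is the POSIX spelling; `>|` stays available when you really mean to overwrite):

```shell
set -C                       # noclobber: ">" will not truncate existing files
tmpfile=$(mktemp)            # mktemp creates the file, so it already exists
echo "first" > "$tmpfile" 2>/dev/null || true   # refused: file already exists
echo "appended" >> "$tmpfile"                   # ">>" is still allowed
```

With noclobber on, the fstab command above would have failed loudly instead of silently replacing the file.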
Christian Gebhardt
Hi
As a newbie in UNIX I had an Oracle test installation on a production system:
production directory: /u01/...
test directory: /test/u01/...
Deleting the test installation:
cd /test
rm /u01
OOPS ...
After several bdf commands I noticed that the wrong lvol was shrinking and stopped the delete command with Ctrl-C.
The database still worked without most of its binaries and libraries, and after a restore from tape, without stopping and starting the database, all was OK.
I love oracle ;-)
Chris
harry d brown jr
Learning hpux? Naw, that's not it....maybe it was learning to spell aix?? sco?? osf?? Nope, none of those.
The biggest blunder:
One morning I came in at my usual time of 6am, and an operator asked me what was wrong with one of our production servers (servicing 6 banks). Well, nothing worked at the console (it was already logged in as root). Even a "cat *" produced nothing but another shell prompt. I stopped and restarted the machine, and when it attempted to come back up it didn't have any OS to run. Major issue, but we got our backup tapes from that night and restored the machine back to normal. I was clueless (sort of like today).
The next morning, the same operator caught me again, and this time I was getting angry (imagine that). Same crap, different day. Nothing was on any disk. This of course was before we had RAID available (not that it would have helped). So we restored the system from that night's backups, and by 8am the banks had their systems up.
So now I had to fix this issue, but where the hell to start? I knew that production batch processing was done by 9pm, and that the backups started right after that. The backups completed around 1am, and they were good backups, because we never lost a single transaction. But around 6am the stuff hit the fan. So I had a time frame, 1am-6am, in which something was clobbering the system. I went through the crons, but nothing really stood out, so I had to really dive into them. This is the code (well, almost) I found in the script:
cd /tmp/uniplex/trash/garbage
rm -rf *
As soon as I saw those two lines, I realized that I was the one who had caused the system to crap out every morning. See, I needed some disk space, and while doing some house cleaning I deleted the sub-directory "garbage" from the /tmp/uniplex/trash directory. Of course the script is run by root; it attempted to cd to a now non-existent directory, which failed, so cron was still cd'd to "/", and it then proceeded to "rm -rf *" my system!
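Harry's two-line trap is why cleanup scripts must never assume the cd worked. A defensive sketch, written as a function (the default path comes from his story):

```shell
# Only delete if we actually reached the target directory.
clean_garbage() {
    dir=${1:-/tmp/uniplex/trash/garbage}
    cd "$dir" || { echo "refusing to clean: $dir is missing" >&2; return 1; }
    rm -rf ./*   # runs only if the cd succeeded
}
```

The `cd … || return` guard means a missing directory produces a logged refusal instead of an `rm -rf *` from `/`.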
live free or die
harry
Bill Hassell
I guess my blunder sets the record for "most clobbered machines" in one day:
I created an inventory script to be used in the Response Center to track all the systems across the United States (about 320 systems). These are all test and problem replication machines but necessary for the R/C engineers to replicate customer problems.
The script was written about 1992 to handle version 7.0 and higher. By about 1995 I had a number of useful scripts, and it seemed reasonable to drop these onto all 300 machines as part of the inventory process (so far, so good). Then about that time 10.01 was released and I made a few changes to the script. One was to change the useful-scripts location from /usr/local/bin to /usr/contrib/bin because of bad directory permissions. I considered 'fixing' the bad permissions, but since these systems must represent the customer environment, I decided to move everything.
Enter the shell option -u. I did not use that option in my scripts, and due to a spelling error an environment variable used in an rm -r was null, thus removing the entire /usr/local directory on 320 machines overnight.
Needless to say, I never write scripts without set -u at the top of the script.
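Bill's rule is easy to demonstrate. With `set -u`, a misspelled variable is a hard error instead of a silent empty string inside an `rm -r` (the directory name here is made up):

```shell
target_dir=/usr/local/oldstuff   # illustrative
# Without -u, the misspelled $targt_dir expands to "", turning
# "rm -r $targt_dir/bin" into "rm -r /bin". With -u, the shell aborts.
if ( set -u; : "$targt_dir" ) 2>/dev/null; then
    echo "typo went unnoticed"
else
    echo "typo caught by set -u"
fi
```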
John Poff
The good news is that after that mess they decided that we would never start a DR drill at midnight!
JP
Dave Johnson
Here is my worst.
We use BCs (Business Copies) on our XP512. We stop the application, resync the BC, split the BC, start the application, mount the BC on the same server, and start the backup to tape from the BC. Well, I had to add a LUN to the primary and the BC. I recreated the BC but forgot to change the script that mounts the BC to include the new LUN. The error vgimport gives when you do not include all the LUNs is just a warning, and it makes the volume group available anyway. The backups seemed to be working just fine.
Well, two months go by. I did not have enough available disk space to test my backups (that has since been changed). Then I decided to be proactive about deleting old files. So I wrote a script:
cd /the/directory/I/want/to/thin/out
find . -mtime +30 -exec rm {} \;
Well, that was scheduled in cron to run just before backups one night. The next morning I got the call that the system was not responding. (I guessed later that the cd command had failed and the find had run from /.)
After a reboot I found lots of files missing from /etc, /var, /usr, /stand and so on. No problem: just rebuild from the make_recovery tape created two nights before, then restore the rest from backup.
Well, step 1 was fine, but the backup tape was bad. The database was incomplete. It took 3 days (that is, 24 hours per day) to find the most recent tape with a valid database. Then we had to reload all the data. After the 3rd day I was able to turn recovery over to the developers. It took about a week to get the application back on-line.
I have sent a request to HP to have the vgimport command changed so that a vgimport that does not specify all the LUNs will fail unless some new command-line parameter is used. They had not yet provided this "enhancement" as of the last time I checked, a couple of months ago. I now test for this condition and send mail to root, as well as failing the BC mount, if it occurs.
Dave Unverhau
This is probably not too uncommon... I needed to shut down a server for service (one of several lined up along the floor... no, not racked). I grabbed the keyboard sitting on that box, quickly typed the shutdown string (with a -hy 0, of course), and got ready to service the box.
...ALWAYS make sure the keyboard is sitting on the box to which it is connected!
Deepak Extross
We had a developer who claimed that when he ran his program, it complained about /usr/bin/ld. (This was because of a missing shared library, he later discovered.) It was decided to back up /usr/bin/ld and replace it with the ld from another machine on which his program worked.
No sooner was ld moved than all hell broke loose.
Users got core dumps in response to even simple commands like "ls", "pwd", "cd"... New users could not telnet into the system, and those who were logged in were frozen in their tracks.
Both the developer and admin are still working with us...
RAC
Well, I was very, very new to HP-UX. I wanted to set up a PPP connection with a password borrowed from a friend so that I could browse the net.
I did not realize that the remote support modem cannot dial out from the remote support port.
I went through all the documents available and created device files a dozen times, but it never worked. In anguish I did
rm -fr `ls -ltr | tail -4 | awk '{print $9}'`
(that, to pacify myself that I know complex commands). But alas, I was in /sbin/rc3.d. I thought this was not going to work and left it at that. A colleague, not aware of this, rebooted the system for a Veritas NetBackup problem. Within the next two hours an HP engineer was on-site, called by the colleague. I watched the whole recovery process, repeatedly saying "I want to learn, I want to learn". Then I came to know that it could not be done.
Dave Johnson
Hey Bill, when I reinstalled the OS from the make_recovery tape it wiped out the script I wrote and the item in the cron. There is no evidence of what happened or who was responsible. I did, however, go straight to my boss to confess and take the blame. That, above all, is probably the strongest reason, next to being able to recover at least some of the data, why I was not terminated for it. Did I mention in the first post this happened in Feb of 2002????
Simon Hargrave
1. On a live Sun Enterprise server, you turn the key one way for maintenance, and one way for off. I wanted to turn it to maintenance but wasn't sure which way to turn it. Guess which way I chose...
2. On an XP512 I accidentally business-copied a new LUN over the top of a live LUN, because I put the wrong LUN ID in!!! Luckily the live data's backup had finished a full 3 minutes earlier... phew!
3. I can't take credit for this one, my ex-boss did it, but I had to include it. On Solaris he added a filesystem in the vfstab file, but put the wrong device in the raw-device field. Consequently all backups backed up the wrong device, so when the data got trashed and required restoring, it... um... didn't exist on tape! Luckily for him he'd left the company 2 months before, and I was left to explain what a halfwit he was ;)
Dave Chamberlin
I have stepped in TAR on a couple of occasions.
I moved a tar file from a production box to a development box, but I had tarred it with an absolute path. When I untarred it, it overwrote the existing directory, destroying all the developers' updates! I have also been burned by the fact that xvf and cvf are very close on the keyboard, so my command to extract a tar once came out as tar -cvf, which of course erased the tar file. The only other bad blunder was doing an lvreduce on a mounted file system; I thought I was recovering space without affecting the other files on the volume. Luckily, they were backed up...
Martin Johnson
One of my coworkers decided to set up a pseudo-root (UID=0) account for himself. He used useradd to create the account and made / his home directory. He was unaware that useradd does a "chown -R" on the home directory. So he became the owner of all files on the system. This was a pop3 mail server, and the mail services did not like the change. My coworker left for the day, leaving me with angry VPs looking over my shoulder demanding to know when email services would be back.
Marty <the coworker is now known as "chown boy">
fg
Greatest gaffe: taking the word of someone who I thought knew what they were doing and had taken the proper precautions to ensure a recovery method for a rebuild of filesystems. To make a long story short: no backup, no make_recovery, and then the filesystems were rebuilt. Data was lost and had to be rebuilt. We recovered most of the data except for the previous 24 hrs. Moral of the story: always have backups and make_recovery tapes done.
Richard Darling
When I upgraded from 10.20 to 11.0 I finished the system install and then used cpio to copy my user applications. One of the vendors had originally had their app installed in /usr (before my time), and I copied the app up one directory and wiped out /usr. By the way, I didn't back up the installation before the cpio copy.
It was a Friday night and I wanted to get out... figured I could back up after getting the apps copied over... learnt an important lesson regarding backups that night...
RD
Belinda Dermody
Writing a script to chmod -R to r/w for the world on a dir. Not doing a check to see if I was in the proper directory, and all of a sudden my bin directory files were all 666. Luckily I had multiple windows and it hadn't gotten to the sbin directory yet. Had a few inquiries about why certain commands wouldn't work before I got it all back correctly. From then on, I do the cd and check the $? return status before I issue any remove or chmod commands.
Ian Kidd
I was going to vi a script that performs a cold-backup of an oracle database. Since we prefer not to be root all the time, we use sudo.
So I typed "sudo", but then was interrupted by someone. When that person left, I typed the name of the script. Nothing appeared on the screen immediately, so I went to get a coffee.
When I came back, I saw "sudo {script}" and realized - one minute before the DBAs started screaming that their database was down - that I had started a cold backup in the middle of a production day.
Duncan Edmonstone
My worst two:
Installing a server in a major call centre of a US bank...
I built the OS as required by our apps team in the US, and following our build standards put the system into trusted mode.
They installed the app and realised they'd forgotten to ask me to put the system into NIS (the system could be used by any of the call centre reps in over 40 call centers - a total of 15,000 NIS-based accounts!). It's the middle of the night in the UK, so the apps team gets a US admin to set up the system as an NIS client. (Yes, it shouldn't work when the box is trusted, but it does!)
The next day, the apps team is complaining about some stuff not working - can I take the system out of trusted mode so we can rule that out? Sure I can - I run tsconvert and wait.... and wait.... and wait.... hmmm - this usually takes about 30 seconds - what gives?
I try to open another window to check what's happening - I can't log in as root; the password that worked two minutes ago no longer works!
Next, root-filesystem-full messages start to scroll up the screen!
It turns out that tsconvert was busy taking ALL the NIS accounts and putting them in the /etc/passwd file (yes, all 15,000 of them) and guess what? There's a root account in NIS!
All I can say is thank god for good backups!
The other one was a typical junior admin mistake which comes from not understanding shell file name generation fully:
A user can't log in. I take a look at his home directory and note that the permissions on his .profile are incorrect. I also note that the other '.' files are incorrect, so I do this:
cd /home/user
chmod 400 .*
I call the user and tell him to try again - he says he still can't log in! Huh?
So I go back and carry on looking for the problem, but before I know it the phone is ringing off the hook! No-one can log in now!
And then it dawns on me
I type the following:
cd /home/user
echo .*
and that returns (of course)
. .. .cshrc .exrc .login .profile .sh_history
Oops. I hadn't just changed the permissions on the user's '.' files - I had also changed the permissions on the user's directory and (crucially!) the user's parent directory, /home!
These days I always use echo to check my file name pattern matching logic when doing this kind of thing...
We live and learn
Duncan
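Duncan's habit of testing the glob with echo can be paired with a pattern that leaves "." and ".." alone. A sketch in a throwaway directory:

```shell
d=$(mktemp -d) && cd "$d"
touch .profile .cshrc regular.txt
echo .*        # . .. .cshrc .profile   <- the glob includes the parent!
echo .[!.]*    # .cshrc .profile        <- dotfiles only
```

The `.[!.]*` pattern requires a non-dot second character, so it can never match "." or ".." (it does miss rare names starting with two dots).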
Vincent Fleming
I have been way too fortunate not to have really blundered all that bad (I've mostly done development), but one I've seen was a real good one...
The "security auditor", who apparently knew absolutely nothing about UNIX, was reviewing our development system and decided that /tmp having world read/write permissions was not a good thing for security - so, in the middle of the day, he did chmod 744 /tmp ... suddenly, 200+ developers (including myself) on the machine (it was a *very* large machine back in 1990) were unable to save their editor sessions!
So, of course, I used the "wall" command to point out their error so they could fix it quickly and I could save my 2+ hours of edits:
$ wall who's the moron who changed the permissions on /tmp????
The funny thing was that I was the one they escorted out of the building that day...
The hazards of being a contractor and publically humiliating an employee...
Jerry Jordak
This one wasn't my fault, but is still funny.
One time, we had to add disk space to one of our servers. My manager at the time was also in charge of the EMC disk environment, so he allocated an extra disk to the server. I configured the disk into the OS, did a pvcreate on it, and proceeded to add it to the volume group, extend the filesystem, etc...
At about the same time, another one of our servers started going absolutely nuts. It turned out that he had accidentally given me a disk that was already allocated to that other system. That drive had held the binaries for that server's application. Oops...
Tom Danzig
As root:
find / -u 201 -chown dansmith
I did this after changing a user ID to another number. I should have used "-user", not "-u" (I had usermod on my mind). The system gladly ignored the -u and started changing all files to user dansmith (/etc/passwd, /etc/hosts, etc). Needless to say, the system was hosed.
Was able to recover fine from make_recovery tape. Fortunately this was also a test box and not production.
Oh well ... live and learn! Mistakes are only bad if you don't learn from them.
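For the record, the find idiom Tom was reaching for looks like `find / -user 201 -exec chown dansmith {} +` (UID 201 and dansmith come from his story). Printing the matches first is a safer habit than going straight to -exec; a runnable sketch in a throwaway directory:

```shell
# The intended fix, reconstructed:
#   find / -user 201 -exec chown dansmith {} +
# Safer habit: a print-only dry run first, so you see what would change.
d=$(mktemp -d)
touch "$d/mine.txt"
find "$d" -user "$(id -u)" -print   # lists mine.txt; no chown yet
```

Only after the dry-run output looks right do you append the `-exec chown … {} +`.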
Mark Fenton
Back in '92 on an NIS network, I meant to wipe out a particular user's directory, but was one level up from it when I issued rm -r *. It took three hours to restore all the home directories on the network....
Last year, I discovered that newer is not necessarily better. Updating DB software, I blithely stopped the DB, copied the new software in, and restarted. Users couldn't get any processing done that day -- it seems there was a conversion program that was *supposed* to run that didn't. But that wasn't the blunder -- the blunder was that the most recent backup had been two days previous, so all the previous day's processing was gone... (and that had been an overtime day, too!)
Keely Jackson
My greatest blunder:
The guy who set up the live database had done it as himself rather than as a separate dba user. He left the company and his user id was re-allocated to somebody in HR. The guy in HR subsequently left as well.
One day I decided to tidy up the system and remove this user. I did this via sam and selected the option to delete all the user's files, thinking that nobody in HR could possibly own any important files.
Unfortunately I was somewhat mistaken. Of course the guy in HR now owned all the database files. The first thing I knew was when the users started to complain that the database was no longer available. I got the db back from restore, but everybody had lost half a day's work.
Needless to say, I now do not delete old users' files but re-allocate them to a special 'leavers' user and check them all before deleting anything.
A good HP blunder.
HP were moving the live server - a K420 - between sites, and the removal men managed to drop it down a flight of stairs. It landed on one of them, who then had to be taken to hospital. Fortunately he was only bruised, while the machine had a huge dent in it. Anyway, it got moved to the other site and booted up straight away with no problems. That is what I call resilient hardware. As a precaution the disks etc. were changed, but it is still running quite happily today.
Cheers
Keely
Michael Steele
When I was first starting out I worked for a Telecom as an 'Application Administrator' and I sat in a small room with a half a dozen other admins and together we took calls from users as their calls escalated up from tier I support. We were tier II in a three tier organization.
A month earlier someone from tier I confused a production server with a test server and rebooted it in the middle of the day. These servers were remotely connected over a large distance so it can be confusing. Care is needed before rebooting.
The tier I culprit took a great deal of abuse for this mistake and soon became a victim of several jokes. An outage had been caused in a high availability environment which meant management, interviews, reports; It went on and on and was pretty brutal.
And I was just as brutal as anyone.
Their entire organization soon became victimized by everyone in ours. The abuse traveled right up the management tree and all participated.
It was hilarious, for us.
Until I did the same thing a month later.
There is nothing more humbling than 2000 people all knowing who you are for the wrong reason, and I have never longed for anonymity more.
Now I always do a 'uname' or 'hostname' before a reboot, even when I'm right in front of the machine.
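Michael's pre-reboot check can be wrapped in a tiny guard function - a sketch only, with a made-up function name, but the pattern of comparing the actual hostname to the one you *think* you are on is the whole idea:

```shell
# Hypothetical guard in the spirit of "hostname before reboot": refuse to
# proceed unless the host you are on matches the host you intended.
confirm_host() {
    if [ "$(hostname)" = "$1" ]; then
        echo "proceed"
    else
        echo "STOP: this is $(hostname), not $1"
    fi
}
# Demo: asking about the machine we are actually on
result=$(confirm_host "$(hostname)")
echo "$result"
```

Calling confirm_host prodbox01 from the wrong SSH session prints the STOP line instead, which is exactly the moment you want to be interrupted.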
Geoff Wild
Problem Exists Between Keyboard And Chair:
Just did this yesterday:
tar cvf - /sapmnt/XXX | tar xvf -
Meant to do:
tar cvf - /sapmnt/XXX | (cd /sapmnttest/XXX ;tar xvf -)
Needless to say, I corrupted most of the files in /sapmnt/XXX
Rgds....Geoff
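An alternative to the subshell form Geoff shows is tar's -C flag, which forces the extracting tar into the destination directory, so a forgotten cd can't unpack on top of the source. A sketch with scratch paths standing in for /sapmnt/XXX and /sapmnttest/XXX:

```shell
# Sketch: copy a tree with a tar pipe, using -C on the extract side so the
# output always lands in the destination. Paths are illustrative stand-ins.
mkdir -p /tmp/sapsrc/sub /tmp/sapdst
echo data > /tmp/sapsrc/sub/file
(cd /tmp && tar cf - sapsrc) | tar xf - -C /tmp/sapdst
cat /tmp/sapdst/sapsrc/sub/file
```

If /tmp/sapdst is misspelled, GNU tar fails with an error instead of silently extracting over the source tree.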
Suhas
1. Imagine what happened when, on a Solaris box, while taking a backup of ld.so.1, "mv" was done instead of "cp"!!! As most of you will be aware, ld.so.1 is the runtime linker, needed by every dynamically linked program. The next hour was sheer chaos .. the worst hour I have ever experienced!!!!
Lesson Learnt: "Look before you leap !!!"
2. Was responsible for changing the date on the backup master server by nearly a year. That was one of the most horrifying nights of my life.
Lesson Learnt: "A typo can cost you anything between $0 and infinity."
Keep forumming !!!!
Suhas
[Jun 12, 2010] Sysadmin Blunder (3rd Response) - Toolbox for IT Groups
chrisz:
Did one also. I was in a directory off of root and performed the following command:
chown -R someuser.somegroup .*
I didn't think much of the command; I just wanted to change the owner and group for all files with a dot in front of them, plus their subdirectories. It went well for the files in the current directory until it reached the ".." entry (the parent directory). All the files and subdirectories off of root changed to the owner and group specified. I was wondering why the command was taking so long to complete. BTW, it changed the owner and group for all NFS files too! That's when the real fun started. Some days you're the windshield, other days you're the bug!
Dan Wright:
It didn't really cause any significant damage, but about 10 years ago I had recently become an admin of a network of mostly NeXT machines, which were new to me, and the default shell was c-shell, which I also wasn't very familiar with. I had dialed in from home one night to play around and become more familiar with how things worked on NeXTStep. In an attempt to kill a job, I typed "kill 1" instead of "kill %1" - and it probably was actually a "kill -9 1" - and of course I was root. And of course, 1 was the init process. I immediately lost the connection and had to do a hard reboot on that machine the next day before that user got in (for some reason, the machine with the modem wasn't in my office, it was in someone else's). Fortunately, that wasn't a critical machine outside of normal business hours. No harm, no foul, eh? If you like this kind of story, there are a bunch here: http://www2.hunter.com/~skh/humor/admin-horror.html
User123731:
I have in the past touched a file called "-i" in important directories. This will cause rm to see the "-i" and go interactive before it acts on other files/dirs if you do not specify a particular directory.
User451715:
Ha! That's an easy one. My first position as a Junior Admin in HP-UX, working in first-line support about eight years ago. I was working on a server, moving some files around, and mistakenly moved all of the files in the /etc directory to a lower-level directory (about 10 sub-dirs down). I sat there at the console wide-eyed, my heart dropped, and I turned and looked out the window and saw my job sailing out of it, since this was a server being prepared for deployment and a month's worth of work would have been wasted. Luckily, a Senior Admin who later became my greatest mentor (Phil Gifford) took pity on my situation, and we sat there and recovered the /etc directory before anyone knew what had happened. The key here was, he walked me through the necessary steps to recover files from an ignite tape, and voila! Needless to say, I learned all about why seasoned UNIX admins protect root privilege as if it were the 'Family Jewels'. <chuckle>
Mike E.
Bryan Irvine:
My biggest blunder wasn't on an AIX box but applies to the thread. I once made an access list for a Cisco box and forgot that there is an implicit "deny all" rule at the end. So I made my nifty access list and enabled it, tested the traffic to see if it was blocked, and lo and behold it seemed to be working. Great! I went on with my life and figured I'd go read some news or something. Uhhhh, that didn't work. Tried email... that didn't work. Tried traceroutes; they all died at the router I had just been working on... then the phones started ringing. *click* A lightbulb went on in my head and I ran as fast as I could to the router to reboot it (lucky I hadn't written the nvram). The phone didn't stop ringing for 45 minutes even though the problem only existed for about 4 minutes. But then, what do you expect when you kill internet traffic at 5 locations across 2 states? The guys on the cisco list said that if you haven't done similar you are lying about the 5 years' experience on your resume ;-)
--Bryan
jxtmills:
I needed to clear a directory full of ".directories" and I issued:
rm -r .*
It seemed to find ".." awfully fast. That was a bad day, but we restored most of the files in about an hour.
John T. Mills
bryanwun:
I thought I was in an application dir but instead was in /usr and did chown -R to a low-level user. On top of that I did not have a mksysb backup, and the machine was in production. It continued to function for the users OK, but most shell commands returned nothing. I had to find another machine with the same OS and maintenance level, write a script to gather ownership permissions, then write another script to apply the permissions to the damaged machine. This returned most functionality, but I still can't install new software with installp: it goes through the motions, then nothing is changed.
alain:
Hi everyone. For me, I remember 2:
1 - 'rm *;old' in the / directory - note the ';' instead of '.'
2 - killed the pid of the informix process (oninit) and deleted it (I dreamed)
jturner:
Variation on a theme... the 'rm -r' theme. As a junior admin on AIX 3.2.5, I had been left to my own devices to create some housekeeping scripts - all my draft scripts being created and tested in my home directory with a '.jt' suffix. After completing the scripts I found that I had inadvertently placed some copies in / with a .jt suffix. Easy job then to issue a 'rm *.jt' in / and all would be well. Well, it would have been if I hadn't put a space between the * and the .jt. And the worst thing of all: not being a touch typist and looking at the keys, I glanced at the screen before hitting the enter key and realised with horror what was going to happen, and STILL my little finger continued to proceed toward the enter key. Talk about 'Gone in 60 seconds' - my life was at an end - over - finished - perhaps I could park cars or pump gas for a living. Like other correspondents, a helpful senior admin was on hand to smile kindly and show me how to restore from mksysb - fortunately taken daily on these production systems. (Thanks Pete :))) ) To this day, rm -i is my first choice, with multiple rm's just as a test!!!!!! Happy rm-ing :)
daguenet:
I know that one. Does anybody remember when the rm man page had a warning not to do rm -rf / as root? How many systems were rebuilt due to that blunder. Not that I have ever done something like that, nor will ever admit to it :).
Aaron
cal.staples:
That is a no-brainer! First a little background. I cooked up a script called "killme" which would ask for a search string, then parse the process table and return a list of all matches. If the list contained the processes you wanted to kill, then you would answer "Yes" - not once, but twice, just to be sure you thought about it. This was very handy at times, so I put it out on all of our servers. Some time had passed and I had not used it for a while when I had a need to kill a group of processes. So I typed the command, not realizing that I had forgotten the script's name. Of course I was on our biggest production system at the time, and everything stopped in its tracks! Unknown to me was that there is an AIX command called "killall", which is what I typed. From the man page: "The killall command cancels all processes that you started, except those producing the killall process. This command provides a convenient means of canceling all processes created by the shell that you control. When started by a root user, the killall command cancels all cancellable processes except those processes that started it." And it doesn't ask for confirmation or anything! Fortunately the database didn't get corrupted and we were able to bring everything back on line fairly quickly. Needless to say, we changed the name of this command so it couldn't be run so easily. "killall" is a great command for a programmer developing an application that goes wild, who needs to kill the processes and retain control of the system, but it is very dangerous in the real world!
Jeff Scott:
The silliest mistake? That had to be a permissions change on /bin. I got a call from an Oracle DBA that $ORACLE_HOME/bin no longer belonged to
oracle:dba. We never found out how that happened. I logged in to change the
permissions. I accidentally typed cd /oracle.... /bin (note the space
before /bin), then cheerfully entered the following command:
#chown -R oracle:dba ./*
The command did not climb up to root fortunately, but it really made a mess.
We ended up restoring /bin from a backup taken the previous evening.
Jeff Scott
Darwin Partners/EMC
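Several stories above (chrisz's recursive chown on .* and jxtmills' rm -r .*) hinge on the same trap: in most Bourne-style shells the glob .* also matches the parent entry "..". A read-only demonstration in a scratch directory, assuming default glob settings (bash without dotglob/GLOBIGNORE):

```shell
# Sketch only: show what the glob .* actually expands to. Nothing is
# changed or deleted here; /tmp/globdemo is a throwaway directory.
mkdir -p /tmp/globdemo/inner
cd /tmp/globdemo/inner
touch .hidden
matches=$(echo .*)
echo "$matches"
```

The expansion includes "." and "..", which is how a recursive command pointed at .* climbs out of the directory you thought you were in. Modern rm refuses to operate on "." and ".." outright, but chown and chmod have no such guard.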
tzhou:
crontab -r when I wanted to do crontab -e. The letters e and r are side by
side on the keyboard. I had 2 pages of crontab and had no backup on the
machine !
Jeff Scott:
I've seen the rm -fr effect before. There were no entries in any crontab.
Before switching to sudo, the company used a homegrown utility to grant
things root access. The server accepted ETLs from other databases, acting as
a data warehouse. This utility logged locally, instead of logging via
syslogd with local and remote logging. So, when the system was erased, there
really was no way to determine the actual cause. Most of the interactive
users shared a set of group accounts, instead of enforcing individual
accounts and using su - or sudo. The outage cost the company $8 million USD due to lost labor that had to be repeated. Clearly it was caused by a cleanup script, but it is anyone's guess which one. Technically this was not a sysadmin blunder, but it underscores the need for individual accounts and for remote logging of ALL uses of group accounts, including those performed by scripts. It also underscores the absolute requirement that all scripts have error-trapping mechanisms. In this case, the rm -fr was likely preceded by a cd to a nonexistent directory. Because the failed cd did not stop the script, the rm ran in the script's working directory - for a root cron job, that is /. The rm -fr then removed everything. The other possibility is that it applied itself to another directory, but again, root privileges allowed it to climb up the directory tree to root.
Aneesh Mohan:
Hi Siv,
The greatest blunder that happened from my side was creating an lvol named /dev/vg00/lvol11 and then doing newfs on /dev/vg00/rlvol1 :)
The second greatest blunder from my side was corrupting the root filesystem with the two steps below :)
#lvchange -C n /dev/vg00/lvol03
#lvextend -L 100 /dev/vg00/lvol03
Cheers,
Aneesh
[Jun 09, 2010] Halloween - IT Admin Horror Stories Zimbra Forums
Well ... I was working for a large multi-national running HP-UX systems and Oracle/SAP, and one day the clock struck twelve and the OS just started to disappear. Down went SAP and Oracle like a sack of spuds! Mayhem broke out, with the IT manager standing over my shoulder wanting to know what had happened ... I did not have a clue, and I could not even get onto the system as it was completely hosed! So the task of restoring the server began, and after 30 minutes I had everything back up and running again. Phewww. Until 1pm! The system disappearing again. What the hell is going on? Panic set in, but this time I managed to keep a couple of sessions open to allow me to check the system. And then it clicked .... I wonder ....
Yep indeed, somebody had set up a cronjob AS ROOT that attempted to 'cd' to a directory and then proceeded with an 'rm -rf *'. The ******* other admin did not verify that the directory existed before performing the remove! Well, once we had restored the system again, the cronjob was removed and we were all running fine again. Moral of the story: always protect root access and ensure you have adequate backups!!!
[Jun 06, 2010] NFS-export as a poor man's backdoor
You can't log in to the box if /etc/passwd or /etc/shadow are gone...
Ric Werme: Oct 10, 2007 18:05:52 -0700
Bill McGonigle once learned:
> rm lets you remove libc too. DAMHINT.
I managed to salvage one system because I had NFS-exported / and could gain write access from another system. After that I often did the export before replacing humorless files like libc.so, and sometimes did the update over NFS. It was a struggle to remember to type the /mnt before the /etc/passwd, so I tried to cd to the target directory and copy files in.
-Ric Werme
[Jun 06, 2010] Security zeal ;-)
Good judgment comes from experience; experience comes from poor judgment. So do new jobs... sometimes even entirely new careers!
> On 10/9/07, John Abreau <[EMAIL PROTECTED]> wrote:
>> ... I looked in /bin for suspicious files, and that was the
>> first time I ever noticed the file [ . It looked suspicious, so
>> of course I deleted it. :-/
[Jun 05, 2010] Directory formerly known as /etc ;-)
Tom Buskey Thu, 11 Oct 2007 06:18:27 -0700
On 10/10/07, Bill McGonigle <[EMAIL PROTECTED]> wrote:
> On Oct 9, 2007, at 17:31, Ben Scott wrote:
> > Did you know 'rpm' will let you remove every package from the system?
> rm lets you remove libc too. DAMHINT.
I had a user call about a user-supported system that was having issues. We explicitly do not support it, and the users only use the root account. He gave me the root account to log in and I couldn't. I went to his system and looked around. /etc was empty. I told him he was fsked and that he should ftp any files he wanted to elsewhere, and that he wouldn't be able to log in again or reboot. In any event, we were not supporting it. Sure enough, a help desk ticket came in for another admin, claiming that the system got corrupted during bootup. Why do users lie so often? All it does is obscure the problem...
Bruce replied (quoting Tom's message above):
Hmmm. Did you check lost+found? I've had similar symptoms, only to discover that there was indeed a bad sector that remapped all of /etc and some of /var and /usr. fsck didn't help much until I moved the drive to another system and ran fsck there. But you're right - if it's not supported, then they'll have to go elsewhere to get this done. BTW, my point is: the user may not have lied, but was just calling the shots as s/he saw them.
--Bruce
[May 26, 2010] Never ever play loose with the /boot partition.
Here is a recent story connected with the upgrade of an OS (in this case SuSE 10) to a new service pack (SP3). After the upgrade, the sysadmin discovered that the /boot partition was no longer mounted; instead there was a /boot directory on the root filesystem, populated by the update. This is the so-called "split kernel" situation, when one (older) version of the kernel boots and then finds different (more recent) modules in /lib/modules and complains. The reason for this strange behavior of the SuSE update was convoluted and connected with the LVM upgrade it contained, after which LVM blocked the mounting of the /boot partition.
Easy, he thought. Let's boot from DVD, mount the boot partition as, say, /boot2, and copy all files from the /boot directory onto the boot partition. And he did exactly that. To make things "clean" he first wiped the "old" boot partition and then copied the directory. After rebooting the server he saw the GRUB prompt; it never got to the menu. This was a production server, and the time slot for the upgrade was 30 minutes.
The investigation, which now involved other sysadmins and took three hours (the server needed to be rebooted, backups retrieved to another server from tape, etc.), revealed that the /boot directory did not contain a couple of critical files, such as /boot/message and /boot/grub/menu.lst. Remember, the /boot partition had been wiped clean. BTW, /boot/message is an executable, and grub stops processing menu.lst when it hits the instruction gfxmenu (hd0,1)/message and the file is missing.
Here is the actual /boot/grub/menu.lst:
# Modified by YaST2. Last modification on Thu May 13 13:43:35 EDT 2010
default 0
timeout 8
gfxmenu (hd0,1)/message
##YaST - activate
###Don't change this comment - YaST2 identifier: Original name: linux###
title SUSE Linux Enterprise Server 10 SP3
    root (hd0,1)
    kernel /vmlinuz-2.6.16.60-0.54.5-smp root=/dev/vg01/root vga=0x317 splash=silent showopts
    initrd /initrd-2.6.16.60-0.54.5-smp
###Don't change this comment - YaST2 identifier: Original name: failsafe###
title Failsafe -- SUSE Linux Enterprise Server 10 SP3
    root (hd0,1)
    kernel /vmlinuz-2.6.16.60-0.54.5-smp root=/dev/vg01/root vga=0x317 showopts ide=nodma apm=off acpi=off noresume edd=off 3
    initrd /initrd-2.6.16.60-0.54.5-smp
Luckily there was a backup done before this "fix". Four hours later the server was bootable again.
Sysadmin Stories: Moral of these stories
October 19, 2009 | UnixNewbie.org
From: jarocki@dvorak.amd.com (John Jarocki)
Organization: Advanced Micro Devices, Inc.; Austin, Texas
- Never hand out directions on "how to" do some sysadmin task until the directions have been tested thoroughly.
- Corollary: Just because it works on one flavor of *nix says nothing about the others. '-}
- Corollary: This goes for changes to rc.local (and other such "vital" scripties).
2 From: ericw@hobbes.amd.com (Eric Wedaa)
Organization: Advanced Micro Devices, Inc.
- NEVER use plain 'rm'; use 'rm -i' instead.
- Do backups more often than you go to church.
- Read the backup media at least as often as you go to church.
- Set up your prompt to do a pwd every time you cd.
- Always do a cd . before doing anything.
- DOCUMENT all your changes to the system (we use a text file called /Changes).
- Don't nuke stuff you are not sure about.
- Do major changes to the system on Saturday morning so you will have all weekend to fix it.
- Have a shadow watching you when you do anything major.
- Don't do systems work on a Friday afternoon (or any other time when you are tired and not paying attention).
3 From: rca@Ingres.COM (Bob Arnold)
Organization: Ask Computer Systems Inc., Ingres Division, Alameda CA 94501
1) The "man" pages don't tell you everything you need to know.
2) Don't do backups to floppies.
3) Test your backups to make sure they are readable.
4) Handle the format program (and anything else that writes directly to disk devices) like nitroglycerine.
5) Strenuously avoid systems with inadequate backup and restore programs wherever possible (thank goodness for "restore" with an "e"!).
6) If you've never done sysadmin work before, take a formal training class.
7) You get what you pay for. There's no substitute for experience.
9) It's a lot less painful to learn from someone else's experience than your own (that's what this thread is about, I guess).
4 From: jimh@pacdata.uucp (Jim Harkins)
Organization: Pacific Data Products
If you appoint someone to admin your machine, you'd better be willing to train them. If they've never had a hard disk crash on them, you might want to ensure they understand hardware does stuff like that.
5 From: dvsc-a@minster.york.ac.uk
Organization: Department of Computer Science, University of York, England
Beware anything recursive when logged in as root!
6 From: matthews@oberon.umd.edu (Mike Matthews)
Organization: /etc/organization
*NEVER* move something important. Copy, VERIFY, and THEN delete.
7 From: almquist@chopin.udel.edu (Squish)
Organization: Human Interface Technology Lab (on vacation)
When you are typing some BIG command, reread what you've typed about 100 times to make sure it's sunk in (:
8 From: Nick Sayer
If / is full, du /dev.
9 From: TRIEMER@EAGLE.WESLEYAN.EDU
Organization: Wesleyan College
Never ever assume that some prepackaged script that you are running does anything right.
Admin Stories UnixNewbie.org
This is a modified list from "The Unofficial Unix Administration Horror Story Summary" by Anatoly Ivasyuk.
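Mike Matthews' rule - copy, VERIFY, and THEN delete - is easy to make mechanical: chain the rm behind a byte-for-byte comparison so the original survives unless the copy is proven identical. A sketch with scratch paths:

```shell
# Sketch of "Copy, VERIFY, and THEN delete": rm only fires if cmp proves
# the copy is byte-identical. /tmp/mvdemo is a throwaway directory.
mkdir -p /tmp/mvdemo
echo important > /tmp/mvdemo/orig
cp /tmp/mvdemo/orig /tmp/mvdemo/copy
# cmp -s is silent and returns success only on an exact match
cmp -s /tmp/mvdemo/orig /tmp/mvdemo/copy && rm /tmp/mvdemo/orig
ls /tmp/mvdemo
```

If the cp had failed or truncated the file, cmp would return nonzero and the original would still be there.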
My 10 UNIX Command Line Mistakes
by Vivek Gite, with 90 comments
Anyone who has never made a mistake has never tried anything new. -- Albert Einstein
Here are a few mistakes that I made while working at the UNIX prompt. Some mistakes caused me a good amount of downtime. Most of these mistakes are from my early days as a UNIX admin.
userdel Command
The file /etc/deluser.conf was configured to remove the home directory (it was done by the previous sysadmin and it was my first day at work) and mail spool of the user to be removed. I just wanted to remove the user account, and I ended up deleting everything (note that -r was activated via deluser.conf):
userdel foo
Rebooted Solaris Box
On Linux, the killall command kills processes by name (killall httpd). On Solaris it kills all active processes. As root I killed all processes; this was our main Oracle db box:
killall process-name
Destroyed named.conf
I wanted to append a new zone to the /var/named/chroot/etc/named.conf file, but ended up running:
./mkzone example.com > /var/named/chroot/etc/named.conf
Destroyed Working Backups with Tar and Rsync (personal backups)
I had only one backup copy of my QT project and I just wanted to get a directory called functions. I ended up deleting the entire backup (note the -c switch instead of -x):
cd /mnt/bacupusbharddisk
tar -zcvf project.tar.gz functions
I had no backup. Similarly, I ended up running an rsync command and deleted all new files by overwriting them with files from the backup set (now I've switched to rsnapshot):
rsync -av -delete /dest /src
Again, I had no backup.
Deleted Apache DocumentRoot
I had symlinks for my web server docroot (/home/httpd/http was symlinked to /www). I forgot about the symlink issue. To save disk space, I ran rm -rf on the http directory. Luckily, I had a full working backup set.
Accidentally Changed Hostname and Triggered False Alarm
Accidentally changed the current hostname (I wanted to see the current hostname settings) for one of our cluster nodes. Within minutes I received an alert message on both mobile and email.
hostname foo.example.com
Public Network Interface Shutdown
I wanted to shut down VPN interface eth0, but ended up shutting down eth1 while I was logged in via SSH:
ifconfig eth1 down
Firewall Lockdown
I made changes to sshd_config and changed the ssh port number from 22 to 1022, but failed to update the firewall rules. After a quick kernel upgrade, I rebooted the box. I had to call the remote data center tech to reset the firewall settings. (Now I use a firewall reset script to avoid lockdowns.)
Typing UNIX Commands on Wrong Box
I wanted to shut down my local Fedora desktop system, but I issued halt on a remote server (I was logged into the remote box via SSH):
halt
service httpd stop
Wrong CNAME DNS Entry
Created a wrong DNS CNAME entry in the example.com zone file. The end result - a few visitors went to /dev/null:
echo 'foo 86400 IN CNAME lb0.example.com' >> example.com && rndc reload
Conclusion
All men make mistakes, but only wise men learn from their mistakes. -- Winston Churchill
From all those mistakes I've learnt that:
1. Backup = ( Full + Removable tapes (or media) + Offline + Offsite + Tested )
2. The clear choice for preserving all data of UNIX file systems is dump, which is the only tool that guarantees recovery under all conditions. (See the Torture-testing Backup and Archive Programs paper.)
3. Never use rsync with a single backup directory. Create snapshots using rsync or rsnapshot.
4. Use CVS to store configuration files.
5. Wait and read the command line again before hitting the damn [Enter] key.
6. Use your well-tested perl / shell scripts and open source configuration management software such as Puppet, Cfengine or Chef to configure all servers. This also applies to day-to-day jobs such as creating users and so on.
Mistakes are inevitable, so did you make any mistakes that caused some sort of downtime? Please add them in the comments below.
Jon 06.21.09 at 2:42 am
My all-time favorite mistake was a simple extra space:
cd /usr/lib
ls /tmp/foo/bar
I typed rm -rf /tmp/foo/bar/ * instead of rm -rf /tmp/foo/bar/*
The system doesn't run very well without all of its libraries......
georgesdev 06.21.09 at 9:15 am
Never type anything such as:
rm -rf /usr/tmp/whatever
Maybe you are going to hit enter by mistake before the end of the line. You would then, for example, erase your whole disk starting at /. If you want to use the -rf option, add it at the end of the line:
rm /usr/tmp/whatever -rf
And even this way, read your line twice before adding the -rf.
3ToKoJ 06.21.09 at 9:26 am
Public network interface shutdown ... done. Typing unix command on wrong box ... done. Delete apache DocumentRoot ... done. Firewall lockdown ... done, though with a NAT rule redirecting the configuration interface of the firewall to another box, a serial connection saved me. I can add: being trapped by aptitude keeping track of previously planned - but not executed - actions, like "remove slapd from the master directory server".
UnixEagle 06.21.09 at 11:03 am
Rebooted the wrong box. While adding an alias to the main network interface, I ended up changing the main IP address; the system froze right away and I had to call for a reboot. Instead of appending text to the Apache config file, I overwrote its contents. Firewall lockdown while changing the ssh port. Wrongfully ran a script containing recursive chmod and chown as root on /; it caused me about 12 hours of downtime and a complete re-install.
Some mistakes are really silly, and when they happen you can't believe you did that, but every mistake, regardless of its silliness, should be a learned lesson. If you made a trivial mistake, you should not just overlook it; you have to think about the reasons that made you do it, like: you didn't have much sleep, or your mind was preoccupied with personal life, etc.
I like Einstein's quote; you really have to make mistakes to learn.
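Jon's extra-space rm can be reproduced harmlessly inside a scratch directory; the point is that the stray "*" after the space expands in the *current* directory and takes the innocent files with it. A sketch (deliberately confined to /tmp/spacedemo - do not run the pattern anywhere real):

```shell
# Sketch: the extra-space rm, confined to a throwaway directory.
mkdir -p /tmp/spacedemo/target
cd /tmp/spacedemo
touch innocent1 innocent2
# The dangerous form: the lone * also matches innocent1, innocent2
rm -rf target/ *
ls /tmp/spacedemo | wc -l
```

After the command the directory is empty: the glob swept up everything alongside the intended target, exactly as it did in /usr/lib.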
Selected Comments
7 smaramba 06.21.09 at 11:31 am
Typing unix commands on the wrong box and firewall lockdown are all-time classics: been there, done that. But for me the absolute worst, on Linux, was checking a mounted filesystem on a production server...
fsck /dev/sda2
The root filesystem was rendered unreadable. System down. Dead. Users really pissed off. Fortunately there was a full backup and the machine rebooted within an hour.
8 od 06.21.09 at 12:50 pm
"Typing UNIX Commands on Wrong Box" - yea, I did that one too. Wanted to shut down my own vm, but I issued init 0 on a remote server which I accessed via ssh. And oh yes, it was the production server.
10 sims 06.22.09 at 2:23 am
Funny thing, I don't remember typing in the wrong console. I think that's because I usually have the hostname right there. Fortunately, I don't do the same things over and over again very much, which means I don't remember command syntax for all but the most-used commands. Locking myself out while configuring the firewall - done - more than once. It wasn't really a CLI mistake though, just being a n00b. georgesdev, good one. I usually:
ls -a /path/to/files
to double-check the contents, then up-arrow, home, hit del a few times and type rm. I always get nervous with rm sitting at the prompt. I'll have to remember that -rf at the end of the line. I always make mistakes making links; I can never remember the syntax. :/ Here's to fewer CLI mistakes... (beer)
Grant D. Vallance 06.22.09 at 7:56 am
A couple of days ago I typed and executed (as root): rm -rf /* on my home development server. Not good. Thankfully, the server at the time had nothing important on it, which is why I had no backups... I am still not sure *why* I did it when I have read all the warnings about this command. (A dyslexic moment with the syntax?) Ah well, a good lesson learned. At least it was not the disaster it could have been. I shall be *very* paranoid about this command in the future.
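The preview habit sims describes generalizes nicely: expand the pattern with ls first, eyeball the list, then reuse the identical pattern for the rm. A sketch with scratch paths:

```shell
# Sketch of preview-then-delete: same glob, first shown, then removed.
# /tmp/rmpreview is a throwaway directory.
mkdir -p /tmp/rmpreview
touch /tmp/rmpreview/a.log /tmp/rmpreview/b.log /tmp/rmpreview/keep.txt
ls /tmp/rmpreview/*.log    # step 1: see exactly what the glob matches
rm /tmp/rmpreview/*.log    # step 2: the same pattern, now for real
ls /tmp/rmpreview
```

Because the shell expands both commands identically, whatever ls showed is exactly what rm will take - no more, no less.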
Joren 06.22.09 at 9:30 am
I wanted to remove the subfolder etc from the /usr/local/matlab/ directory. I accidentally added the '/' symbol out of a force of habit from going to the /etc folder, and I typed, from the /usr/local/matlab directory:
sudo rm /etc
instead of
sudo rm etc
Without the entire /etc folder the computer didn't work anymore (which was to be expected, of course) and I ended up reinstalling my computer.
Robsteranium 06.22.09 at 11:05 am
Aza Raskin explains how habituation can lead to stupid errors - confirming "yes I'm sure / overwrite file etc." automatically without realising it. Perhaps rm and the > operator need an undo / built-in backup...
Ramaswamy 06.22.09 at 10:47 am
Deleted the files. I used to place some files in /tmp/rama and some conf files at /home//httpd/conf, and I used to swap between these two directories with "cd -". Executed the command:
rm -fr ./*
It was supposed to remove the files at /tmp/rama/*, but ended up removing the files at /home//httpd/conf/*, without any backup.
Yonitg 06.23.09 at 8:06 am
Great post! I did my share of system mishaps, killing servers in production, etc. The most embarrassing one was sending 70K users the wrong message. Or better yet, telling the CEO we have a major crisis, gathering up many people to solve it, and finding that it is nothing at all while all the management is standing in my cube.
Solaris 06.23.09 at 8:37 pm
Firewall lockout: done. Command on wrong server: done. And the worst: update and upgrade while some important applications were running, of course on a production server. As someone mentioned, the system doesn't run very well without all of its original libraries :)
Peko 06.30.09 at 8:46 am
I invented a new one today. Just assuming that a [-v] option stands for --verbose. Yep, most of the time. But not on a [pkill] command. [pkill -v myprocess] will kill _any_ process you can kill - except those whose name contains "myprocess". Ooooops. :-!
(I just wanted pkill to display "verbose" information when killing processes.) Yes, I know. Pretty dumb thing. Lesson learned? I would suggest adding another critical rule to your list: "Read The Fantastic Manual – First" ;-)

Jai Prakash 07.03.09 at 1:43 pm
Mistake 1: My friend tried to see the last reboot time and mistakenly executed the command "last | reboot" instead of "last | grep reboot". It caused an outage on a production DB server.
Mistake 2: Another guy wanted to see the FQDN on a Solaris box and executed "hostname -f". It changed the hostname to "-f" and clients faced a lot of connectivity issues due to this mistake. [hostname -f is used in Linux to see the FQDN, but on Solaris its usage is different]

32 Name 07.04.09 at 5:20 pm
Worst thing I've done so far: I accidentally dropped a MySQL database containing 13k accounts for a gameserver :D Luckily I had backups, but it took a little while to restore.

33 Vince Stevenson 07.06.09 at 6:23 pm
I was dragged into a meeting one day and forgot to secure my Solaris session. A colleague and former friend did this:
alias ll='/usr/sbin/shutdown -g5 -i5 "Bye bye Vince"'
He must have thought that I was logged into my personal host machine, not the company's cashcow server. What happens when it all goes wrong. Secure your session… Rgds Vince

Bjarne Rasmussen 07.07.09 at 7:56 pm
Well, tried many times, the crontab fast-typing failure…
crontab -r instead of -e
e for edit, r for remove.. Now I always use -l for list before editing…

35 Ian 07.08.09 at 4:15 am
Made a script that automatically removes all files from a directory. Now, rather than making it logically (this was early on) I did it stupidly:
cd /tmp/files
rm ./*
Of course, eventually someone removed /tmp/files..

36 shlomi 07.12.09 at 9:21 am
Hi. On my RHEL 5 server I created a /tmp mount point on my storage, and the tmpwatch script that runs under cron.daily removes files which have not been accessed in 12 hours !!!
M.S. Babaei 08.01.09 at 3:39 am
Once upon a time, mkfs killed me on an ext3 partition. I wanted:
mkfs.ext3 /dev/sda1
but I did this:
mkfs.ext3 /dev/sdb1
I'll never forget what I lost.
Simon B 08.07.09 at 2:47 pm
Whilst a colleague was away from their keyboard I entered:
rm -rf *
… but did not press enter on the last line (as a joke). I expected them to come back, see it as a joke and rofl… backspace… The unthinkable happened: the screen went to sleep and they banged the enter key to wake it up a couple of times. We lost 3 days' worth of business and some new clients. Estimated cost $50,000+.
ginzero 08.17.09 at 5:10 am
tar cvf /dev/sda1 blah blah…
47 Kevin 08.25.09 at 10:50 am
tar cvf my_dir/* dir.tar
and you write your archive over the first file of the directory …
48 ST 09.17.09 at 10:14 am
I've done the wrong server thing. SSH'd into the mailserver to archive some old messages and clear up space.
Mistake #1: I didn't logoff when I was done, but simply minimized the terminal and kept working
Mistake#2: At the end of the day I opened what I thought was a local terminal and typed:
/sbin/shutdown -h now
thinking I was bringing down my laptop. The angry phone calls started less than a minute later. Thankfully, I just had to run to the server room and press power.
I never thought about using CVS to backup config files. After doing some really dumb things to files in /etc (deleting, stupid edits, etc), I started creating a directory to hold original config files, and renaming those files things like httpd.conf.orig or httpd.conf.091709
As always, the best way to learn this operating system is to break it…however unintentionally.
49 Wolf Halton 09.21.09 at 3:16 pm
Attempting to update a Fedora box over the wire from Fedora8 to Fedora9
I updated the repositories to the Fedora9 repos, and ran
I have now tested this on a couple of boxes and without exception the upgrades failed with many loose older-version packages and dozens of missing dependencies, as well as some fun circular dependencies which cannot be resolved. By the time it is done, eth0 is disabled and a reboot will not get to the kernel-choice stage.
Oddly, this kind of update works great in Ubuntu.
50 Ruben 09.24.09 at 8:23 pm
while cleaning the backup hdd late the night, a '/' can change everything…
"rm -fr /home" instead of "rm -fr home/"
It was a sleepless night, but thanks to Carlo Wood and his ext3grep I rescued about 95% of data ;-)
51 foo 09.25.09 at 9:36 pm
Added 5 extra files that were not to be committed, so I decided to undo the change, delete the files and add them to svn again…..
# svn rm foo --force
and it deleted the foo directory from disk :( … lost all my code just before the deadline :(
52 foo 09.25.09 at 9:41 pm
wanted to kill all the instances of a service on HP-UX (pkill like util not available)…
# ps -aef | grep -v foo | awk {print'$2′} | xargs kill -9 Typed "grep -v" instead of "grep -i" and u can guess what happened :( 53 LinAdmin 09.29.09 at 2:38 pm Typing rm -Rf /var/* in the wrong box. Recovered in few minutes by doing scp root@healty_box:/var . – the ssh session on the just broken box was still open . This saved my life :-P 54 Deltaray 10.03.09 at 4:37 am Like Peko above, I too once ran pkill with the -v option and ended up killing everything else. This was on a very important enterprise production machine and I reminded myself the hard lesson of making sure you check man pages before trying some new option. I understand where pkill gets its -v functionality from (pgrep and thus from grep), but honestly I don't see what use of -v would be for pkill. When do you really need to say something like kill all processes except this one? Seems reckless. Maybe 1 in a million times you'd use it properly, but probably most of the time people just get burned by it. I wrote to the author of pkill about this but never heard anything back. Oh well. 55 Guntram 10.05.09 at 7:51 pm This is why i never use pkill; always use something like "ps ….| grep …" and, when it's ok, type a " | awk '{print$2}' | xargs kill" behind it. But, as a normal user, something like "pkill -v bash" might make perfect sense if you're sitting at the console (so you can't just switch to a different window or something) and have a background program rapidly filling your screen.
Worst thing that ever happened to me:
Our oracle database runs some rdbms jobs at midnight to clean out very old rows from various tables, along the line of "delete from XXXX where last_access < sysdate-3650". One sunday i installed ntp to all machines, made a start script that does an ntpdate first, then runs ntpd. Tested it:
$ date 010100002030; /etc/init.d/ntpd start; date
Worked great, current time was ok.
$ date 010100002030; reboot
After the machine was back up i noticed i had forgotten the /etc/rc*.d symlinks. But i never thought of the database until a lot of people were very angry monday morning. Fortunately, there's an automated backup every saturday.
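The ps/grep/awk/xargs pattern foo and Guntram describe can be sketched harmlessly: here printf stands in for ps and echo stands in for kill, so nothing is actually signalled (the process names and PIDs are invented for illustration):

```shell
# Simulated `ps -aef` output: owner, PID, command.
# printf stands in for ps, so this pipeline is safe to run anywhere.
printf '%s\n' \
  'root  101  /usr/sbin/sshd' \
  'app   202  /opt/foo/bin/foo' \
  'app   203  /opt/foo/bin/foo' |
  grep foo |           # keep only the target service (NOT grep -v!)
  awk '{print $2}' |   # extract the PID column
  xargs echo kill -9   # echo instead of kill: prints the command it would run
# prints: kill -9 202 203
```

Swap `grep foo` for `grep -v foo` and the PID list inverts to everything *except* the service, which is exactly the HP-UX accident above.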
56 sqn 10.07.09 at 6:05 pm
As a beginner wanting to impress myself, I tried to lock out a folder by removing its permissions (chmod 000) and did:
# cd /folder
# chmod 000 .. -R
I used two dots instead of one, and of course the system applied the change to the parent folder, which is /.
I ended up locked out of my home and had to go to the server to reset the permissions back to normal. I got lucky because I had just done a dd to move the system from one HDD to another and hadn't deleted the old one yet :)
And of course the classical configuring the wrong box, firewall lockout :)
57 dev 10.15.09 at 10:15 am
while I was working in many ssh windows:
rm -rf *
I intended to remove all files under a site, after changing the current working directory, then replacing it with the stable one.
Wrong window, wrong server, and I did it on the production server xx((
I only became aware of the mistake 1.5 seconds after typing [ENTER].
No backup. Maybe luckily, the site kept running smoothly..
It seems the deleted files were just images or media contents.
A 1-2 sec accidental removal on a heavy machine cost me approx. 20 MB.
58 LMatt 10.17.09 at 3:36 pm
In a hurry to get a db back up for a user, I had to parse through a nearly several-terabyte .tar.gz for the correct SQL dumpfile. So, being the good sysadmin, I located it within an hour, and in my hurry to get the db up for the client, who was on the phone the entire time:
mysql > dbdump.sql
Fortunately I didn't sit and wait all that long before checking to make sure that the database size was increasing, and the client was on hold when I realized my error.
mysql > dbdump.sql - SHOULD be -
mysql < dbdump.sql
I had just sent stdout of the mysql CLI interface to a file named dbdump.sql. I had to re-retrieve the damn sqldump file and start over!
BAH! FOILED AGAIN!
59 Mr Z 10.18.09 at 5:13 am
After 10+ years I've made a lot of mistakes. Early on I got myself in the habit of testing commands before using them. For instance:
ls ~usr/tar/foo/bar then rm -f ~usr/tar/foo/bar – make sure you know what you will delete
When working with SSH, always make sure what system you are on. Modifying system prompts generally eliminates all confusion there.
It's all just creating a habit of doing things safely… at least for me.
60 chris 10.22.09 at 11:15 pm
cd /var/opt/sysadmin/etc
rm -f /etc
note the /etc. It was supposed to be rm -rf etc
61 Jonix 10.23.09 at 11:18 am
The deadline was coming too close for comfort; I'd worked too-long hours for months.
We were developing a website, and I was in charge of developing the CGI scripts, which generated a lot of temporary files. So on pure routine I worked in "/var/www/web/" and entered "rm temp/*", which at some point I misspelled as "rm tmp/ *". I kind of wondered, in my overtired brain, why the delete took so long to finish; it should only have been 20 small files.
The very next morning the paying client was to fly in, pay us a visit, and get a demonstration of the project.
P.S. Thanks to Subversion and open files in Emacs buffers I managed to get almost all files back, and I had rewritten the missing files before the morning.
62 Cougar 10.29.09 at 3:00 pm
rm * in one of my project directories (no backup). I planned to do rm *~ to delete backup files, but I was using an international keyboard where a space is required after ~ (it's a dead key for letters like õ)..
63 BattleHardened 10.30.09 at 1:33 am
Some of my more choice moments:
postsuper -d ALL (instead of -r ALL, adjacent keys – 80k spooled mails gone). No recovery possible – ramfs :/
Had a .pl script to delete mails in .Spam directories older than X days, didn't put in enough error checking, some helpdesk guy provisioned a domain with a leading space in it and script rm'd (rm -rf /mailstore/ domain.com/.Spam/*) the whole mailstore. (250k users – 500GB used) – Hooray for 1 day old backup
chown -R named:named /var/named when there was a proc filesystem under /var/named/proc. Every running process on system got chown.. /bin/bash, /usr/sbin/sshd and so on. Took hours of manual find's to fix.
.. and pretty much all the ones everyone else listed :)
You break it, you fix it.
64 PowerPeeCee 11.02.09 at 1:01 am
As an Ubuntu user for a while, Y'all are giving me nightmares, I will make extra discs and keep them handy. Eek! I am sure that I will break it somehow rather spectacularly at some point.
65 mahelious 11.02.09 at 10:44 pm
second day on the job i rebooted apache on the live web server, forgetting to first check the cert password. i was finally able to find it in an obscure doc file after about 30 minutes. the resulting firestorm of angry clients would have made Nero proud. I was very, very surprised to find out I still had a job after that debacle.
66 Shantanu Oak 11.03.09 at 11:20 am
scp overwrites an existing file if it exists on the destination server. I just used the following command and soon realised that it had replaced the "somefile" on that server!!
scp somefile root@192.168.0.1:/root/
67 thatguy 11.04.09 at 3:37 pm
Hmm, most of these mistakes I have done – but my personal favourite.
# cd /usr/local/bin
# ls -l -> that displayed some binaries that I didn't need / want.
# cd ..
# rm -Rf /bin
– Yeah, you guessed it – smoked the bin folder ! The system wasn't happy after that. This is what happens when you are root and do something without reading the command before hitting [enter] late at night. First and last time …
68 Gurudatt 11.06.09 at 12:05 am
chmod 777 /
never try this, if u do so even root will not be able to login
69 richard 11.09.09 at 6:59 pm
So, in recovering a binary backup of a large MySQL database, produced by copying and tarballing '/var/lib/mysql', I untarred it in /tmp and did the recovery without incident (at 2am in the morning, when it went down). Feeling rather pleased with myself for such a quick and successful recovery, I went to delete the 'var' directory in '/tmp'. I wanted to type:
rm -rf var/
instead I typed:
rm -rf /var
Unfortunately I didn't spot it for a while, and not until after did I realize that my on-site backups were stored in /var/backups …
It was a truly miserable few days that followed while I pieced together the box from SVN and various other sources …
70 Henry 11.10.09 at 6:00 pm
Nice post and familiar with the classic mistakes.
My all time classic:
- rm -rf /foo/bar/ * [space between / and *]
Be careful with clamscan's:
--detect-pua=yes --detect-structured=yes --remove=no --move=DIRECTORY
I chose to scan / instead of /home/user and I ended up with a screwed apt, libs, and missing files all over the place :D I luckily had --log=/home/user/scan.log and not console output, so I could restore the moved files one by one.
these 2 happened at home, while working I've learned a long time ago (SCO Unix times) to backup files before rm :D
71 Derek 11.12.09 at 10:26 pm
Heh,
These were great.
I have many above.. my first was
reboot
….Connection reset by peer. Unfortunately, I thought I was rebooting my desktop. Luckily, the performance test server I was on hadn't been running tests(normally they can take 24-72 hours to run)..
symlinks… ack! I was cleaning up space and thought weird.. I don't remember having a bunch of databases in this location.. rm -f * unfortunately, it was a symlink to my /db slice, that DID have my databases, friday afternoon fun.
I did a similar with being in the wrong directory… deleted all my mysql binaries.
This was also after we had acquired a company and the same thing had happened on one of their servers months before.. We never realized that, and the server had an issue one day… so we rebooted. MySQL had been running in memory for months, and upon reboot there was no more mysql. Took us a while to figure that out because no one had thought that the mysql binaries were GONE! Luckily I wasn't the one who had deleted the binaries, just got to witness the aftermath.
72 Ahmad Abubakr 11.13.09 at 2:23 pm
My favourite :)
sudo chmod 777 /
73 jason 11.18.09 at 4:19 pm
The best ones are when you f*ck up and take down the production server and are then asked to investigate what happened and report on it to management….
74 Mr Z 11.19.09 at 3:02 pm
@jason
That sort of situation leads to this tee-shirt: http://www.rfcafe.com/business/images/Engineer%27s%20Troubleshooting%20Flow%20Chart.jpg
75 John 11.20.09 at 2:29 am
Clearing up space used by no-longer-needed archive files:
# du -sh /home/myuser/oldserver/var
32G /home/myuser/oldserver/var
# cd /home/myuser/oldserver
# rm -rf /var
The box ran for 6 months after doing this, by the way, until I had to shut it down to upgrade the RAM…although of course all the mail, Web content, and cron jobs were gone. *sigh*
76 Erick Mendes 11.24.09 at 7:55 pm
Yesterday I locked myself out of a switch I was setting up. lol
I was setting up a VLAN on it and my PC was directly connected to it through one of the ports I messed up.
Had to get in through serial to undo the VLAN config.
Oh, the funny thing is that some hours later my boss made the same mistake lol
77 John Kennedy 11.25.09 at 2:09 pm
Remotely logged into a (Solaris) box at 3am. Made some changes that required a reboot. Being too lazy to even try and remember the difference between Solaris and Linux shutdown commands I decided to use init. I typed init 0…No one at work to hit the power switch for me so I had to make the 30 minute drive into work.
This one I chalked up to being a noob…I was on an XTerminal which was connected to a Solaris machine. I wanted to reboot the terminal due to display problems…Instead of just powering off the terminal I typed reboot on the commandline. I was logged in as root…
78 bram 11.27.09 at 8:45 pm
on a remote freebsd box:
[root@localhost ~]# pkg_delete bash
(since my default shell in /etc/passwd was still pointing to a non-existent /usr/local/bin/bash, i would never be able to log in)
79 Li Tai Fang 11.29.09 at 8:02 am
On a number of occasions, I typed "rm" when I wanted to type "mv," i.e., I wanted to rename a file, but instead I deleted it.
80 vmware 11.30.09 at 4:59 am
last | reboot
last | grep reboot
81 ColtonCat 12.02.09 at 4:21 am
I have a habit of renaming config files I work on to the same file with a "~" at the end as a backup, so that I can roll back if I make a mistake; once all is well I just do rm *~. Trouble happened when I accidentally typed rm * ~ – and as Murphy would have it, on a production Asterisk telephony server.
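The danger of that stray space is easy to demonstrate safely in a scratch directory (the file names below are invented):

```shell
# Work in a throwaway directory so nothing real is at risk.
dir=$(mktemp -d)
cd "$dir"
touch extensions.conf extensions.conf~ sip.conf sip.conf~

rm *~   # the glob *~ matches ONLY the two backup files
ls      # the real configs survive

# With a stray space, "rm * ~" instead expands * to EVERY file
# (then complains about ~) and the real configs are gone too.
```

The shell expands globs before rm ever runs, so rm has no way to tell "*~" apart from "* ~"; by the time it executes, it just sees a list of file names.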
82 bye bye box 12.02.09 at 7:54 pm
Slicked the wrong box in a production data center at my old job.
In all fairness it was labeled wrong on the box and kvm ID.
Now I've learned to check hostname before decom'ing anything.
83 Murphy's Red 12.02.09 at 9:11 pm
Running out of diskspace while updating a kernel on FreeBSD.
Not fully inserting a memory module on my home machine which shortcircuited my motherboard.
On several occasions I had to use an rdesktop session to a Windows machine and use PuTTY from there to connect to a box (yep.. I know it sounds weird ;-) ). Anyway.. text copied in Windows is stored differently than text copied in the shell. While changing a root passwd on a box (password copied using PuTTY), I just control-V-ed it and logged off. I had to go to the datacenter and boot into single user mode to access the box again.
Using the same crappy setup, I copied some text in Windows and accidentally hit control-V in the PuTTY screen of the box I was logged into as root; the first word was halt, the last character an enter.
Configuring nat on the wrong interface while connected through ssh
Adding a new interface on a machine, filled in the details of a home network in kudzu which changed the default gateway to 192.168.1.1 on the main interface. Only checking the output of ifconfig but not the traffic or gateway and dns settings.
fsck -y on filesystem without unmounting it
84 ehrichweiss 12.03.09 at 6:55 pm
I've definitely rebooted the wrong box, locked myself out with firewall rules, rm -rf'ed a huge portion of my system. I had my infant son bang on the keyboard for my SGI Indigo2 and somehow hit the right key combo to undo a couple of symlinks I had created for /usr(I had to delete them a couple of times in the process of creating them) AND cleared the terminal/history so I had no idea what was going on when I started getting errors. I had created the symlink a week prior so it took me a while to figure out what I had to do to get the system operational again.
My best and most recent FUBAR was when I was backing up my system (I have horrible, HORRIBLE luck with backups, to the point I don't bother doing them any more for the most part). I was using mondorescue, backing the files up to an NTFS partition I had mounted under /mondo, and had done a backup that wouldn't restore anything because of an apostrophe or single quote in one of the file names it was backing up. So I had to remove the files causing the problem, which wasn't really a biggie, and did the backup, then formatted the drive as I had been planning… only to discover that I hadn't remounted the NTFS partition under /mondo as I had thought, and all 30+ GB of data was gone. I attempted recovery several times but it was just gone.
85 fly 12.04.09 at 3:55 pm
My personal favorite: a script somehow created a few dozen files in the /etc dir … all named ??somestring, so I promptly did rm -rf ??* … (at the point when I hit [enter] I remembered that ? is a wildcard … Too late :)) Luckily that was my home box … but a reinstall was imminent :)
86 bips 12.06.09 at 9:56 am
I once happened to do:
crontab -r
instead of:
crontab -e
which had the effect of emptying the crontab list…
87 bips 12.06.09 at 9:59 am
Also I've done
shutdown -n
(I thought -n meant "now")
which had the consequence of rebooting the server without networking…
88 Deltaray 12.06.09 at 4:51 pm
bips: What does shutdown -n do? It's not in the shutdown man page.
89 miss 12.14.09 at 8:42 am
crontab -e vs crontab -r is the best :)
90 marty 12.18.09 at 12:21 am
The extra space before a * is one I've done before, only the root cause was tab completion.
#rm /some/directory/FilesToBeDele[TAB]*
Thinking there were multiple files that began with FilesToBeDele. Instead, there was only one, and pressing tab put in the extra space. Luckily I was in my home dir, and there was a file with write-only permission so rm paused to ask if I was sure. I hit ^C and wiped my brow. Of course the [TAB] is totally unnecessary in this instance, but my pinky is faster than my brain.
Copy Your Linux Install to a Different Partition or Drive
Jul 9, 2009
If you need to move your Linux installation to a different hard drive or partition (and keep it working) and your distro uses grub this tech tip is what you need.
To start, get a live CD and boot into it. I prefer Ubuntu for things like this. It has Gparted. Now follow the steps outlined below.
Copying
• Mount both your source and destination partitions.
• Run this command from a terminal:
$ sudo cp -afv /path/to/source/* /path/to/destination
Don't forget the asterisk after the source path.
• After the command finishes copying, shut down, remove the source drive, and boot the live CD again.
Configuration
• Mount your destination drive (or partition).
• Run the command "gksu gedit" (or use nano or vi).
• Edit the file /etc/fstab. Change the UUID or device entry with the mount point / (the root partition) to your new drive. You can find your new drive's (or partition's) UUID with this command:
$ ls -l /dev/disk/by-uuid/
• Edit the file /boot/grub/menu.lst. Change the UUID of the appropriate entries at the bottom of the file to the new one.
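The fstab step can also be scripted once both UUIDs are known. This is a minimal sketch that works on a throwaway copy (both UUIDs are invented); on a real move you would edit /etc/fstab on the destination partition instead:

```shell
# Safe sketch: edits a temporary sample fstab, never the real /etc/fstab.
# Both UUIDs below are made up for illustration.
old=11111111-aaaa-4000-8000-000000000001
new=22222222-bbbb-4000-8000-000000000002

f=$(mktemp)
cat > "$f" <<EOF
UUID=$old /    ext3 errors=remount-ro 0 1
UUID=$old none swap sw                0 0
EOF

# Repoint the entries at the new partition's UUID (GNU sed in-place edit).
sed -i "s/UUID=$old/UUID=$new/" "$f"
grep "UUID=$new" "$f"   # both lines should now carry the new UUID
```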
Install Grub
• Run sudo grub.
• At the Grub prompt, type:
find /boot/grub/menu.lst
This will tell you what your new drive and partition's number is. (Something like (hd0,0))
• Type:
root (hd0,0)
but replace "(hd0,0)" with your partition's number from above.
• Type:
setup (hd0)
but replace "(hd0)" with your drive's number from above. (Omit the comma and the number after it.)
That's it! You should now have a bootable working copy of your source drive on your destination drive! You can use this to move to a different drive, partition, or filesystem.
Hosing Your Root Account By S. Lee Henry
If you manage your own Unix system, you might be interested in hearing how easy it is to make your root account completely inaccessible -- and then how to fix the problem. I have landed in this situation twice in my career and, each time, ended up having to boot my Solaris box off a CD-ROM in order to gain control of it.
The first time I ran into this problem, someone else had made a typing mistake in the root user's shell in the /etc/passwd file. Instead of saying "/bin/sh", the field was made to say "/bin/sch", suggesting to me that the intent had been to switch to /bin/csh. Due to the typing mistake, however, not only could root not log in but no one could su to the root account. Instead, we got error messages like these:
login: root
boson% su -
su: cannot run /bin/sch: No such file or directory
The second time, I rdist'ed a new set of /etc files to a new Solaris box I was setting up without realizing that the root shell on the source system had been set to /bin/tcsh. Because this offspring of the C shell is not available on most Unix boxes (and certainly isn't delivered with Solaris), I found myself facing the same situation that I had run into many years before.
I couldn't log in as root. I couldn't su to the root account. I couldn't use rcp (even from a trusted host) -- because it checks the shell. I could ftp a copy of tcsh, but could not make it executable. I couldn't boot the system in single-user mode (it also looked for a valid shell). The only option at my disposal was to boot the system from a CD-ROM. Once I had done this, I had two choices: 1) I could mount my root partition on /a, cd to /a/etc, replace the shell in the /etc/passwd file, unmount /a, and then reboot. 2) I could mount my root partition on /a, cd to /a/bin, chmod 755 the copy of tcsh that I had previously ftp'ed there, unmount /a, and then reboot.
I fixed root's entry in the /etc/passwd file and made my new tcsh file executable to prevent any possible recurrence of the problem. To avoid these problems, I usually don't allow the root shell to be set to anything other than /bin/sh (or /bin/csh if I'm pressured into it). The Bourne shell (or bash) is generally the best shell for root because it's on every system and the system start/stop scripts (in the /etc/rc?.d or /etc/rc.d/rc?.d directories) are almost exclusively written in sh syntax. Hence, should one of these files fail to include the #!/bin/sh designator, they will still run properly.
Surprised by how easily and completely I had made my system unusable, I was left running around the office looking for the secret stash of Solaris CD-ROMs to repair the damage. By the way, changing the file on the rdist source host and rdist'ing the files a second time would not have worked, because even rdist requires that the root account on the target system be working properly. The rdist tool is based on rcp.
# Math Help - How did this happen?
(Source: http://mathhelpforum.com/pre-calculus/17108-how-did-happen.html)
1. ## How did this happen?
(x - h)^2 + (y - k)^2 = r^2
(x - 5)^2 + (y - 2)^2 = 4^2
If there's a 5 and a 2 to work with, how did 4 come up? I don't understand how to solve these radius distance equations. Can I please get some help on this?
2. Originally Posted by BlueStar
(x - h)^2 + (y - k)^2 = r^2
(x - 5)^2 + (y - 2)^2 = 4^2
If there's a 5 and a 2 to work with, how did 4 come up? I don't understand how to solve these radius distance equations. Can I please get some help on this?
you need to tell us the question for us to help you
3. A swimmer jumps 2 feet north and 5 feet east of the corner of the pool. The ripple effect traveled four feet from the center. Model an equation of a circle for the set of points that could be the center of the cannon ball. The corner is the origin at (0,0), and the center is at (5,2) with a radius of 4 feet. I need to find the standard equation using the distance formula.
4. Originally Posted by BlueStar
A swimmer jumps 2 feet north and 5 feet east of the corner of the pool. The ripple effect traveled four feet from the center. Model an equation of a circle for the set of points that could be the center of the cannon ball. The corner is the origin at (0,0), and the center is at (5,2) with a radius of 4 feet. I need to find the standard equation using the distance formula.
as far as i can see, it seems they just want the equation of the circle that forms the cannon ball. in which case it would be an equation of the form (x - h)^2 + (y - k)^2 = r^2 with center (h,k) and radius = 4, which is the standard form of a circle (or do you want to actually derive this equation using the distance formula?)
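To see concretely where the 4 comes from: the 5 and 2 locate the center, while the 4 is the ripple radius, squared on the right-hand side. A quick numerical check of the standard form against the distance formula (plain Python, not from the thread):

```python
import math

h, k, r = 5, 2, 4  # center (5, 2) and radius 4, from the problem

def on_circle(x, y):
    # Distance formula: a point (x, y) is on the circle when its distance
    # from the center equals the radius; squaring both sides gives the
    # standard form (x - h)^2 + (y - k)^2 = r^2.
    return math.isclose((x - h) ** 2 + (y - k) ** 2, r ** 2)

print(on_circle(9, 2))   # True: 4 ft due east of the center
print(on_circle(5, 6))   # True: 4 ft due north of the center
print(on_circle(0, 0))   # False: the pool corner is sqrt(29) ft from the center
```

So the equation (x - 5)^2 + (y - 2)^2 = 4^2 describes exactly the points 4 feet from the center (5, 2).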
# Determination of Porosity in Anisotropic Fractal Systems by Neutron Scattering
(Source: https://www.nist.gov/publications/determination-porosity-anisotropic-fractal-systems-neutron-scattering)
Published: February 01, 2018
### Author(s)
Xin Gu, David F. R. Mildner
### Abstract
Small-angle scattering from two-phase isotropic systems requires the scattering invariant to determine the relative fractions of each phase in the material. For anisotropic systems the measurement yields a result that depends on the projection of the phases onto the scattering plane normal to the incident radiation. When the scattering system has a unique axis such that there is no preferred direction in the plane normal to that axis, the scattering gives elliptical contours on the two-dimensional detector. Two different measurements of projected phases, one with the incident beam direction coincident with the unique axis and the other normal to the axis, can be combined to give a three-dimensional description of the system and therefore lead to a determination of the total porosity of the system.
Citation: Journal of Applied Crystallography
Volume: 51
Pub Type: Journals
# American Institute of Mathematical Sciences
(Source: http://www.aimsciences.org/search/author?author=Zhi-Ming%20Ma)
## Journals
DCDS
Discrete & Continuous Dynamical Systems - A 2014, 34(12): 5061-5084 doi: 10.3934/dcds.2014.34.5061
In this paper we provide a verifiable necessary and sufficient condition for a regular q-process to be again a q-process under a transformation of state space. The result, as well as some other results on continuous-state Markov jump processes, is employed to investigate jump processes arising from the study of modeling genetic coalescent with recombination.
# Tiddly Beer Beer Beer
(Source: https://www.physicsforums.com/threads/tiddly-beer-beer-beer.233324/)
• #1
TheStatutoryApe
260
4
Lord Bless Charlie Mops.
So what are your favourite types and brands of beer?
I prefer darker beers myself, Guinness being my favourite. New Castle, Negra Modelo, Sapporo, and Flying Dog(Denver microbrew) are the only other beers I have ever tried and enjoyed.
Otherwise I have also enjoyed several ciders, Hornsby's Amber being my favourite. HardCore Black was one of my favourites as well but I think the HardCore Cider Co is gone now. Woodchuck and Wyders are decent.
• #2
Staff Emeritus
Gold Member
2,035
623
Red Hook Extra Special Bitter. A cold beer in the evening of a hot day...aahhhh!
• #3
Staff Emeritus
Gold Member
4,846
6
I like all sorts. Sam Smith's stout is better than Guinness, but I like that as well.
• #4
B. Elliott
252
10
Guinness. Hands down.
• #5
undrcvrbro
132
0
I wore a PBR shirt today, haha. At least that brand is cheap enough for us young bucks.
Last edited:
• #6
Homework Helper
Gold Member
2,371
3
Edit: Beer needs to be properly spelled.
Last edited:
• #7
Staff Emeritus
Gold Member
4,846
6
Oh yeah hoegaarden. Ich liebe weissbier.
• #8
Gold Member
9,756
253
I drink Keith's India Pale Ale in bars. At home, I stick with Lucky because it's the cheapest stuff that you can get around here. (I was astounded to find out that it's considered a 'top-line' beer in the US.)
If neither is available, I go with Pil.
(Every once in a while I'll have a half of Guinness because it looks so nice, but it tastes like it has been filtered through a moose.)
• #9
Homework Helper
Gold Member
2,371
3
I drink Keith's India Pale Ale in bars. At home, I stick with Lucky because it's the cheapest stuff that you can get around here. (I was astounded to find out that it's considered a 'top-line' beer in the US.)
If neither is available, I go with Pil.
(Every once in a while I'll have a half of Guinness because it looks so nice, but it tastes like it has been filtered through a moose.)
You can buy cases of Coors Light, Budweiser, Labatt's and so on in Quebec for $22.50! Also, Hoegaarden is available in Quebec. It's hard to find here in Quebec. My buddy bought 30 of them in Quebec and since they only come in 6-packs it cost him $75!
• #10
Staff Emeritus
Gold Member
11,828
53
Lately, Guinness has been tasting watered down to me. I've been enjoying some of the beers by the Great Lakes Brewing Co...Edmund Fitzgerald Porter, and Burning River Ale. I've also discovered one called Dead Guy Ale that I like (the bar I went to the last day I taught gross anatomy this year just happened to have it and it seemed a fitting beer to celebrate that class being done, so I tried it just for the name...really tasty); that's made by Rogue Brewing Co in Oregon. There's also Mountaineer Brewing Co., which is a WV brewery that has a nice Nut Brown that I like, but only bottled. I went to a bar that had it on tap, and it didn't taste very good on tap. Odd...usually beers are better on tap than bottled.
• #11
Staff Emeritus
Gold Member
2,035
623
...a cold, cloudy hefeweizen on a hot, sunny day is wonderful. Pyramid brewery makes a good one.
• #12
Staff Emeritus
Gold Member
4,846
6
Wychwood brewery is one of my favourites.
• #13
Staff Emeritus
Gold Member
8,010
1,010
I always enjoyed a little dark Heiney.
• #14
Staff Emeritus
Gold Member
11,828
53
I always enjoyed a little dark Heiney.
Has Tsu already left for the week? :uhh:
• #15
gravenewworld
1,127
25
Best American macro brew= Yuengling lager (oldest brewery in the US too)
Other favs (all microbrews): Dogfish head 90 min IPA, Victory Hop Devil, Victory Storm King, Iron Hill Brewery's Pig Iron Porter, 3 Floyds Dark Lord Stout, anything from Yards.
I don't understand why hefeweizens have all of a sudden become all the rage. They are nasty. Any beer that needs fruit put in it=:yuck:
Last edited:
• #16
Staff Emeritus
Gold Member
8,010
1,010
Has Tsu already left for the week? :uhh:
EnjoyED
I stopped drinking long before the micro-brew craze, but I always did like the finer dark beers. Heiney and Sapporo were both real good.
• #17
Staff Emeritus
Gold Member
11,828
53
I don't understand why hefewiezens have all of the sudden become all the rage. They are nasty. Any beer that needs fruit put in it=:yuck:
I don't put fruit in them. They are pretty low on my list of beers though, but they are considered a nice, light summer beer, so that's why they all come out this time of year.
• #18
Staff Emeritus
Gold Member
2,035
623
I always enjoyed a little dark Heiney.
...too much information...
• #19
Staff Emeritus
20,868
4,843
I like all sorts. Sam Smith's stout is better than Guinness, but I like that as well.
I like the Oatmeal and Imperial Stouts and Taddy Porter. Their Nut Brown and Pale Ales are also very good.
Chimay Red and Blue are fine Trappiste Ales.
And I certainly enjoy Guinness Stout.
• #20
Homework Helper
1,742
0
These are all some fancy beers - as for myself, American beers e.g. Budweiser make me a bit sick; they all seem to have some kind of chemical aspect to them. It turns out that this chemical aspect may be related to hops which were chemically modified to treat the skunky property of beer. I've been drinking more and more Heineken these days.
• #21
rewebster
843
2
busch and miller
---seriously---I guess some things don't have to be exotic
Last edited:
• #22
Staff Emeritus
Gold Member
4,846
6
What I find hilarious in britain is going to a real ale bar where people order fosters and carling.
• #23
Gold Member
Dearly Missed
4,397
559
...too much information...
LOL.
• #24
Gold Member
2,570
723
Guinnesss by far whenever I can get it then the New Belgiums' 1554, followed by the Fat Tire. In the summer I am partial to the Skinny Dip Ale.
• #25
Staff Emeritus
Homework Helper
12,145
166
I like all sorts. Sam Smith's stout is better than Guinness, but I like that as well.
Sam Smith's Oatmeal Stout is my favorite.
• #26
Red Rum
21
0
Brand Up from Brand brewery down in Wijlre in Limburg in the south of Holland about 2 km from the German border. Originally called Brand Up, relaunched a few years ago as Brand Urtyp and now reverting to its old name. A good German style hoppy, estery beer. And the region is nice too. Unlike the rest of the Netherlands, Limburg has hills! It's 26 deg C outside and I'm looking forward to one after work now!
• #27
Andre
4,509
74
Brand Up from Brand brewery down in Wijlre in Limburg in the south of Holland about 2 km from the German border. Originally called Brand Up, relaunched a few years ago as Brand Urtyp and now reverting to its old name. A good German style hoppy, estery beer. And the region is nice too. Unlike the rest of the Netherlands, Limburg has hills! It's 26 deg C outside and I'm looking forward to one after work now!
Whilst I admit that Brand is on the short list for the Dutch brands, one should certainly not pass by Amstel 1870 too lighthearthy. But for the Twenthenaren there is only Grolsch, especially "Het Kanon", which certainly lives up to the expectations the name generates.
But I'd prefer a Pinot gris when it's 26 degrees.
Last edited:
• #28
scorpa
361
1
I never drink beer, I don't like it very much. The only kinds I have tried that I have not minded are Guinness and MGD. The only other ones I've tried are the usual Coors, Molson, Pilsner...etc and I hate them.
• #29
Barfolumu
68
0
But I drink lots of locals. Boulevard Brewing Company beers, and my favorite is the stout by Granite City Brewery.
• #30
Gold Member
3,228
55
The best beers and ales available locally come from a micro-brewery about 15 miles from here. Oak Pond Brewery makes some excellent brews. For the commercial brews, I'll pick up Guinness, Heineken, or Becks if I'm feeling flush and Molson Golden if I'm not.
• #31
Insanity
58
0
Lord Bless Charlie Mops.
He invented a wonderful drink, and he made it out of hops.
• #32
nucleargirl
122
2
I like Becks! and Radler :)
• #33
Gold Member
768
4
I've been enjoying beer commercials lately-
Last edited by a moderator:
• #34
Staff Emeritus
Gold Member
4,846
6
Strange this came up. I went to a beer festival yesterday. They have 52 beers on. I managed to try 9 of them.
• #35
Staff Emeritus
Gold Member
7,176
21
I've been enjoying beer commercials lately-
Heh. You can watch a video of a commercial on Youtube, but first have to watch an actual commercial before the video starts!
I like this one best among the Heinekens:
Last edited by a moderator:
# Distributed Memory Parallelism
(Source: http://padraic.xyz/notes/programming/distributed-parallelism/)
Distributed memory parallelism can be understood as parallelisation across 'machines'; each process has its own independent memory, and processes can send messages between each other.
This is achieved using a protocol called the Message Passing Interface, MPI. In practice, MPI is implemented as a C library with bindings to Fortran, Python (with boost or mpi4py), R and C++ (with boost). MPI programmes are launched with a command line programme, mpiexec, and vendors provide wrappers around compilers to ease compilation.
The MPI library is C, and so when passing custom types, for example, we need to use void pointers together with an MPI_Datatype argument describing our data, e.g. int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm). The final argument to this function is the communicator, the object which handles message passing.
To compile MPI programmes, we use a command named mpic++, potentially also referred to as mpicc or mpicxx. This programme handles proper linking to MPI. We then run the resulting binary with mpiexec, and can specify the number of processes using the flag -n <number>. The hello_world example is shown below.
```cpp
#include <mpi.h>
// Next line tells CATCH we will use our own main function
#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

TEST_CASE("Just test I exist") {
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  CHECK(size > 0);
  CHECK(rank >= 0);
}

int main(int argc, char* argv[]) {
  MPI_Init(&argc, &argv);
  int result = Catch::Session().run(argc, argv);
  MPI_Finalize();
  return result;
}
```
Any MPI calls have to come between MPI_Init and MPI_Finalize. The communicator MPI_Comm handles message passing between a given group of processes. The communicator knows both the size of the group and the rank of a process (order). By convention, process 0 in a group is special and is called the root.
## Point to Point Communication
This refers to message passing between two processes, for example to process data or report success. Common examples include:
1. Blocking Synchronous Send: Process A drops off a message and waits until it receives a received receipt from process B. Name is MPI_Ssend.
2. Blocking Send: Process A drops off the message and then carries on, while process B waits for the data. Name is MPI_Send.
3. Non-blocking Send: Here, process A stores the data in the safebox and doesn't need to wait for the transit to begin. The data is transmitted to B, and a receipt is left in the safebox to be accessed later by process A. Name is MPI_Isend.
The questions involved are: how long do I have to wait, and when can I start modifying data used for the message? This will depend entirely on the nature of your calculation.
As a shorthand, if there is no method name prefix this is a blocking call, if the call looks like MPI_S<name> it is synchronous, and if it is MPI_I<name> it is asynchronous. For each send call we also need a matching receive on process B, which includes a preallocated buffer and a pointer to a status variable.
Again, it is important to be aware of accidentally creating 'deadlock' situations, where two processes are stuck waiting for each other to proceed. This is especially difficult to debug as the 'speed' of each process will vary run to run.
## Collective Communications
In this case, we are doing more than 1-to-1 communication between our processes. There are multiple different schemes of which we will review only a few. Here are a few examples:
1. Broadcast: Used e.g. in setting up a calculation, root sends data to the other processes.
2. Gather: Other processes send their data to root, usually used for results.
Synchronisation is achieved using the MPI_Barrier method to hold all processes to a common point. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22632662951946259, "perplexity": 3809.7561378942783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00410.warc.gz"} |
# Aerosol can
1. May 12, 2004
### alexbib
I was told that when you release spray from an aerosol can, the can cools down. Is this true, and if so, why?
Does the gas in the can require outside energy to expand and escape the can?
Thanks,
Alex
2. May 12, 2004
### Allday
I think we can approach this problem with the good old ideal gas law. It states that P*V = n*R*T,
where P is pressure, V is volume, n represents the amount of gas, R is a constant, and T is temperature.
The gas in an aerosol can is under pressure. It wants to get out of the can, and when you press the nozzle you provide the means for it to do so. Now we're going to have to make some assumptions about what's going on when the nozzle is pressed and some of the gas inside is sprayed out. I've only used this equation for gases where n doesn't change. I think we can use a constant n as an approximation if we are considering a short burst. In this approximation the volume remains constant as well (the can isn't changing shape) and R is defined as a constant. During the spray the pressure inside the can goes down, which, for the above equation to remain an equality, means the temperature has to go down.
Gabriel
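The constant-n, constant-V argument above can be put into numbers (a quick sketch with illustrative, made-up pressures, not measurements):

```python
# At constant n and V the ideal gas law gives P/T = const, so
# T2 = T1 * (P2 / P1). The values below are assumed for illustration.
T1 = 293.0  # K, room temperature
P1 = 4.0    # atm, can pressure before a short burst (assumed)
P2 = 3.6    # atm, can pressure after the burst (assumed)

T2 = T1 * (P2 / P1)
print(f"T after burst: {T2:.1f} K (a drop of {T1 - T2:.1f} K)")
```

A 10% pressure drop gives a 10% drop in absolute temperature under these assumptions, which is why even a short spray feels noticeably cold.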
3. May 12, 2004
### MiGUi
That effect is known as the Joule-Kelvin (Joule-Thomson) effect, and it only happens with real gases, so an ideal gas doesn't show it!
The Joule-Kelvin effect says: if a real gas expands through a constriction (a throttle, where the cross-section of the tube is smaller than the section the gas was crossing before), without any exchange of heat, then its temperature changes.
When you press the aerosol, the gas has to pass through the little hole, so the temperature of the container goes down.
MiGUi.
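To put a rough number on the throttling argument: the Joule-Thomson coefficient mu_JT = (dT/dP) at constant enthalpy gives dT ≈ mu_JT * dP for a small pressure drop. The coefficient below is an assumed order-of-magnitude value for a CO2-like propellant, not a measured one:

```python
# First-order Joule-Thomson estimate across the nozzle.
# mu_JT ~ 1.1 K/atm is an assumed, rough value for a CO2-like
# propellant near room temperature; real propellants differ.
mu_jt = 1.1      # K per atm (assumed)
delta_p = -3.0   # atm, drop from can pressure to ambient (assumed)

delta_t = mu_jt * delta_p
print(f"estimated cooling at the nozzle: {delta_t:.1f} K")
```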
4. May 12, 2004
### alexbib
alright, I'll look it up. thanks!
5. May 16, 2004
### Pierre
this is exactly how a refrigerator works!
6. May 16, 2004
### alexbib
hey, does anybody know what happens to the entropy of the can? Does it increase or decrease? How could you evaluate the change in entropy, since pV=nRT is not true?
7. May 17, 2004
### alexbib
it is obvious that the overall entropy change is positive (compressed gas in a can is more ordered than when the pressure reaches an equilibrium), but what about the entropy of the can (and its contents) alone?
# Convergence in probability implies mean squared convergence
Let $$(\Omega, \mathcal{F}, \mathbb{P})$$ be a probability space. Let $$(X_n)_{n \in \mathbb{N}}$$ be a sequence of $$\mathcal{F}$$-measurable random variables. Let $$X$$ be another $$\mathcal{F}$$-measurable random variable. I have $$X_n \rightarrow X$$ in probability. Additionally, $$\mathbb{P}(|X_n| \le L) = 1$$, where $$L$$ is a constant independent of $$n$$. I have to show that $$X_n \rightarrow X$$ in the mean squared sense, i.e. as $$n \rightarrow \infty$$, $$\mathbb{E}(X_n - X)^2 \rightarrow 0$$. How do I go about this? Thanks.
• @OliverDiaz I'm sorry I guess the sequence in the answer does converge to zero in mean squared sense Jul 1, 2020 at 17:28
Convergence in probability: For any $$\delta>0$$, $$\lim_{n\to\infty}\mathbb{P}(|X_n-X|>\delta)=0$$.
Also, since $$\mathbb{P}(|X_n| \le L) = 1$$, we have that $$|X_n| \le L$$ almost surely for all $$n$$. Since convergence in probability implies almost-sure convergence along a subsequence, we also have $$\mathbb{P}(|X| \le L) = 1$$, i.e. $$|X| \le L$$ almost surely. Now let $$\delta>0$$. We have $$\mathbb{E}[X_n-X]^2=\int|X_n-X|^2=\int_{\{|X_n-X|>\delta\}}|X_n-X|^2+\int_{\{|X_n-X|\leq\delta\}}|X_n-X|^2\leq$$ $$\leq\int_{\{|X_n-X|>\delta\}}|X_n-X|^2+\delta^2\leq\mathbb{P}(|X_n-X|>\delta)\cdot (4L^2)+\delta^2\to\delta^2$$
Since the above gives $$\limsup_{n\to\infty}\mathbb{E}|X_n-X|^2\leq\delta^2$$ and $$\delta>0$$ was arbitrary, we conclude that $$\mathbb{E}|X_n-X|^2\to0$$.
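A numerical sanity check of the result (not part of the proof): take $$X_n$$ equal to $$X$$ except on a rare event of probability $$1/n$$, with everything bounded by $$L$$, and watch the empirical mean-square error fall:

```python
import random

random.seed(0)
L = 1.0
N = 100_000  # sample points standing in for omega

# X is uniform on [-L, L]; X_n differs from X only on an event of
# probability 1/n, and is itself bounded by L, matching the
# hypotheses of the problem.
omega = [random.uniform(-L, L) for _ in range(N)]

def mse(n):
    total = 0.0
    for x in omega:
        if random.random() < 1.0 / n:   # rare deviation, still bounded
            xn = random.uniform(-L, L)
        else:                           # otherwise X_n = X exactly
            xn = x
        total += (xn - x) ** 2
    return total / N

errors = [mse(n) for n in (1, 10, 100, 1000)]
print(errors)  # empirical E|X_n - X|^2 shrinking towards 0
```

The errors scale like $$1/n$$, consistent with the bound $$\mathbb{P}(|X_n-X|>\delta)\cdot 4L^2+\delta^2$$.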
# Switzerland
Swiss Confederation

Five official names:
- Schweizerische Eidgenossenschaft (German)
- Confédération suisse (French)
- Confederazione Svizzera (Italian)
- Confederaziun svizra (Romansh)
- Confoederatio Helvetica (Latin)

Motto (unofficial): "Unus pro omnibus, omnes pro uno" ("One for all, all for one")
Anthem: "Swiss Psalm"
Capital: Bern (46°57′N 7°27′E)
Largest city: Zürich
Government: federal assembly-independent directorial republic with elements of a direct democracy
Federal Chancellor: Walter Thurnherr
Legislature: Federal Assembly (upper house: Council of States; lower house: National Council)
History: Federal Charter, 1 August 1291; sovereignty recognised at the Peace of Westphalia, 24 October 1648; independence restored, 7 August 1815; federal state, 12 September 1848
Area: 41,285 km² (15,940 sq mi) (132nd); water 4.34% (2015)
Population: 8,636,896 (2020 estimate; 99th); 8,327,126 (2015 census); density 207/km² (536.1/sq mi) (48th)
GDP (PPP, 2022 estimate): total $739.49 billion (35th); per capita $84,658 (5th)
GDP (nominal, 2022 estimate): total $841.69 billion (20th); per capita $92,434 (7th)
Gini (2018): 29.7 (low)
HDI (2021): 0.962 (very high; 1st)
Currency: Swiss franc (CHF)
Time zone: UTC+1 (CET); summer (DST): UTC+2 (CEST)
Driving side: right
Calling code: +41
ISO 3166 code: CH
Internet TLD: .ch, .swiss
Switzerland, officially the Swiss Confederation, is a landlocked country located at the confluence of Western, Central and Southern Europe. It is bordered by Italy to the south, France to the west, Germany to the north and Austria and Liechtenstein to the east.
Switzerland is geographically divided among the Swiss Plateau, the Alps and the Jura; the Alps occupy the greater part of the territory, whereas the Swiss population of approximately 8.7 million is concentrated mostly on the plateau, where the largest cities and economic centres are located, including Zürich, Geneva and Basel.
Switzerland originates from the Old Swiss Confederacy established in the Late Middle Ages following a series of military successes against Austria and Burgundy. The Federal Charter of 1291 is considered the country's founding document. Since the Reformation of the 16th century, Switzerland has maintained a policy of armed neutrality. Swiss independence from the Holy Roman Empire was formally recognised in the Peace of Westphalia in 1648. Switzerland has not fought an international war since 1815. It joined the United Nations only in 2002, though it pursues an active foreign policy, including participation in frequent peace-building processes worldwide. Switzerland is the birthplace of the Red Cross, one of the world's oldest and best-known humanitarian organisations, and hosts the headquarters or offices of most major international institutions, including the WTO, the WHO, the ILO, FIFA, and the United Nations. It is a founding member of the European Free Trade Association (EFTA), but not part of the European Union (EU), the European Economic Area, or the Eurozone; however, it participates in the European single market and the Schengen Area through bilateral treaties.
Switzerland is a federal republic composed of 26 cantons, with federal authorities based in Bern. It has four main linguistic and cultural regions: German, French, Italian and Romansh. Although most Swiss are German-speaking, national identity is rooted in a common historical background, shared values such as federalism and direct democracy, and Alpine symbolism. This identity transcends language, ethnicity, and religion, leading to Switzerland being described as a Willensnation ("nation of volition") rather than a nation state.
Due to its linguistic diversity, Switzerland is known by multiple native names: Schweiz [ˈʃvaɪts] (German); Suisse [sɥis(ə)] (French); Svizzera [ˈzvittsera] (Italian); and Svizra [ˈʒviːtsrɐ, ˈʒviːtsʁɐ] (Romansh). On coins and stamps, the Latin name, Confoederatio Helvetica — frequently shortened to "Helvetia" — is used instead of the spoken languages.
Switzerland is one of the world's most developed countries. It has the highest nominal wealth per adult of any country and the eighth-highest gross domestic product per capita. Switzerland has ranked first in the Human Development Index since 2021 and also performs highly on several other international metrics, including economic competitiveness and democratic governance. Cities such as Zürich, Geneva and Basel rank among the highest in terms of quality of life, albeit with some of the highest costs of living.
## Etymology
The English name Switzerland is a portmanteau of Switzer, an obsolete term for a Swiss person which was in use during the 16th to 19th centuries, and land. The English adjective Swiss is a loanword from French Suisse, also in use since the 16th century. The name Switzer is from the Alemannic Schwiizer, in origin an inhabitant of Schwyz and its associated territory, one of the Waldstätte cantons which formed the nucleus of the Old Swiss Confederacy. The Swiss began to adopt the name for themselves after the Swabian War of 1499, used alongside the term for "Confederates", Eidgenossen (literally: comrades by oath), used since the 14th century. The data code for Switzerland, CH, is derived from Latin Confoederatio Helvetica (English: Helvetic Confederation).
The toponym Schwyz itself was first attested in 972, as Old High German Suittes, perhaps related to swedan ‘to burn’ (cf. Old Norse svíða ‘to singe, burn’), referring to the area of forest that was burned and cleared to build. The name was extended to the area dominated by the canton, and after the Swabian War of 1499 gradually came to be used for the entire Confederation. The Swiss German name of the country, Schwiiz, is homophonous to that of the canton and the settlement, but distinguished by the use of the definite article (d'Schwiiz for the Confederation, but simply Schwyz for the canton and the town). The long [iː] of Swiss German is historically and still often today spelled ⟨y⟩ rather than ⟨ii⟩, preserving the original identity of the two names even in writing.
The Latin name Confoederatio Helvetica was neologised and introduced gradually after the formation of the federal state in 1848, harking back to the Napoleonic Helvetic Republic. It appeared on coins from 1879, inscribed on the Federal Palace in 1902 and after 1948 used in the official seal (e.g., the ISO banking code "CHF" for the Swiss franc, and the country top-level domain ".ch", are both taken from the state's Latin name). Helvetica is derived from the Helvetii, a Gaulish tribe living on the Swiss Plateau before the Roman era.
Helvetia appeared as a national personification of the Swiss confederacy in the 17th century in a 1672 play by Johann Caspar Weissenbach.
## History
The state of Switzerland took its present form with the adoption of the Swiss Federal Constitution in 1848. Switzerland's precursors established a defensive alliance in 1291, forming a loose confederation that persisted for centuries.
### Beginnings
The oldest traces of hominid existence in Switzerland date to about 150,000 years ago. The oldest known farming settlements in Switzerland, which were found at Gächlingen, date to around 5300 BC.
Founded in 44 BC by Lucius Munatius Plancus, Augusta Raurica (near Basel) was the first Roman settlement on the Rhine and is now among the most important archaeological sites in Switzerland.
The earliest known tribes formed the Hallstatt and La Tène cultures, named after the archaeological site of La Tène on the north side of Lake Neuchâtel. La Tène culture developed and flourished during the late Iron Age from around 450 BC, possibly influenced by Greek and Etruscan civilisations. One of the most important tribal groups was the Helvetii. Steadily harassed by Germanic tribes, in 58 BC, the Helvetii decided to abandon the Swiss Plateau and migrate to western Gallia. Julius Caesar's armies pursued and defeated them at the Battle of Bibracte, in today's eastern France, forcing the tribe to move back to its homeland. In 15 BC, Tiberius (later the second Roman emperor) and his brother Drusus conquered the Alps, integrating them into the Roman Empire. The area occupied by the Helvetii first became part of Rome's Gallia Belgica province and then of its Germania Superior province. The eastern portion of modern Switzerland was integrated into the Roman province of Raetia. Sometime around the start of the Common Era, the Romans maintained a large camp called Vindonissa, now a ruin at the confluence of the Aare and Reuss rivers, near the town of Windisch.
The first and second centuries AD were an age of prosperity on the Swiss Plateau. Towns such as Aventicum, Iulia Equestris and Augusta Raurica reached a remarkable size, while hundreds of agricultural estates (villae rusticae) were established in the countryside.
Around 260 AD, the fall of the Agri Decumates territory north of the Rhine transformed today's Switzerland into a frontier land of the Empire. Repeated raids by the Alamanni tribes provoked the ruin of the Roman towns and economy, forcing the population to shelter near Roman fortresses, like the Castrum Rauracense near Augusta Raurica. The Empire built another line of defence at the north border (the so-called Donau-Iller-Rhine-Limes). At the end of the fourth century, the increased Germanic pressure forced the Romans to abandon the linear defence concept. The Swiss Plateau was finally open to Germanic tribes.
In the Early Middle Ages, from the end of the fourth century, the western extent of modern-day Switzerland was part of the territory of the Kings of the Burgundians. The Alemanni settled the Swiss Plateau in the fifth century and the valleys of the Alps in the eighth century, forming Alemannia. Modern-day Switzerland was then divided between the kingdoms of Alemannia and Burgundy. The entire region became part of the expanding Frankish Empire in the sixth century, following Clovis I's victory over the Alemanni at Tolbiac in 504 AD, and later Frankish domination of the Burgundians.
Throughout the rest of the sixth, seventh and eighth centuries, Swiss regions continued under Frankish hegemony (Merovingian and Carolingian dynasties) but after its extension under Charlemagne, the Frankish Empire was divided by the Treaty of Verdun in 843. The territories of present-day Switzerland became divided into Middle Francia and East Francia until they were reunified under the Holy Roman Empire around 1000 AD.
By 1200, the Swiss Plateau comprised the dominions of the houses of Savoy, Zähringer, Habsburg, and Kyburg. Some regions (Uri, Schwyz, Unterwalden, later known as Waldstätten) were accorded the Imperial immediacy to grant the empire direct control over the mountain passes. With the extinction of its male line in 1263, the Kyburg dynasty fell in AD 1264. The Habsburgs under King Rudolph I (Holy Roman Emperor in 1273) laid claim to the Kyburg lands and annexed them, extending their territory to the eastern Swiss Plateau.
### Old Swiss Confederacy
The Old Swiss Confederacy from 1291 (dark green) to the sixteenth century (light green) and its associates (blue). In the other colours shown are the subject territories.
The 1291 Bundesbrief (federal charter)
The Old Swiss Confederacy was an alliance among the valley communities of the central Alps. The Confederacy was governed by nobles and patricians of various cantons who facilitated management of common interests and ensured peace on mountain trade routes. The Federal Charter of 1291 is considered the confederacy's founding document, even though similar alliances likely existed decades earlier. The document was agreed among the rural communes of Uri, Schwyz, and Unterwalden.
By 1353, the three original cantons had joined with the cantons of Glarus and Zug and the Lucerne, Zürich and Bern city-states to form the "Old Confederacy" of eight states that persisted through the end of the 15th century. The expansion led to increased power and wealth for the confederation. By 1460, the confederates controlled most of the territory south and west of the Rhine to the Alps and the Jura mountains, and the University of Basel was founded (with a faculty of medicine), establishing a tradition of chemical and medical research. This increased after victories against the Habsburgs (Battle of Sempach, Battle of Näfels), over Charles the Bold of Burgundy during the 1470s, and the success of the Swiss mercenaries. The Swiss victory in the Swabian War against the Swabian League of Emperor Maximilian I in 1499 amounted to de facto independence within the Holy Roman Empire. In 1501, Basel and Schaffhausen joined the Old Swiss Confederacy.
The Confederacy acquired a reputation of invincibility during these earlier wars, but expansion of the confederation suffered a setback in 1515 with the Swiss defeat in the Battle of Marignano. This ended the so-called "heroic" epoch of Swiss history. The success of Zwingli's Reformation in some cantons led to inter-cantonal religious conflicts in 1529 and 1531 (Wars of Kappel). It was not until more than one hundred years after these internal wars that, in 1648, under the Peace of Westphalia, European countries recognised Switzerland's independence from the Holy Roman Empire and its neutrality.
During the Early Modern period of Swiss history, the growing authoritarianism of the patriciate families combined with a financial crisis in the wake of the Thirty Years' War led to the Swiss peasant war of 1653. In the background to this struggle, the conflict between Catholic and Protestant cantons persisted, erupting in further violence at the First War of Villmergen, in 1656, and the Toggenburg War (or Second War of Villmergen), in 1712.
### Napoleonic era
The Act of Mediation was Napoleon's attempt at a compromise between the Ancien Régime and a Republic.
In 1798, the revolutionary French government invaded Switzerland and imposed a new unified constitution. This centralised the government of the country, effectively abolishing the cantons: moreover, Mülhausen left Switzerland and the Valtellina valley became part of the Cisalpine Republic. The new regime, known as the Helvetic Republic, was highly unpopular. An invading foreign army had imposed and destroyed centuries of tradition, making Switzerland nothing more than a French satellite state. The fierce French suppression of the Nidwalden Revolt in September 1798 was an example of the oppressive presence of the French Army and the local population's resistance to the occupation.
When war broke out between France and its rivals, Russian and Austrian forces invaded Switzerland. The Swiss refused to fight alongside the French in the name of the Helvetic Republic. In 1803 Napoleon organised a meeting of the leading Swiss politicians from both sides in Paris. The Act of Mediation was the result, which largely restored Swiss autonomy and introduced a Confederation of 19 cantons. Henceforth, much of Swiss politics would concern balancing the cantons' tradition of self-rule with the need for a central government.
In 1815 the Congress of Vienna fully re-established Swiss independence, and the European powers recognised permanent Swiss neutrality. Swiss troops served foreign governments until 1860 when they fought in the siege of Gaeta. The treaty allowed Switzerland to increase its territory, with the admission of the cantons of Valais, Neuchâtel and Geneva. Switzerland's borders saw only minor adjustments thereafter.
### Federal state
The first Federal Palace in Bern (1857). One of the three cantons presiding over the Tagsatzung (former legislative and executive council), Bern was chosen as the permanent seat of federal legislative and executive institutions in 1848, in part because of its closeness to the French-speaking area.
The restoration of power to the patriciate was only temporary. After a period of unrest with repeated violent clashes, such as the Züriputsch of 1839, civil war (the Sonderbundskrieg) broke out in 1847 when some Catholic cantons tried to set up a separate alliance (the Sonderbund). The war lasted less than a month, causing fewer than 100 casualties, most of which were through friendly fire. The Sonderbundskrieg had a significant impact on the psychology and society of Switzerland.
The war convinced most Swiss of the need for unity and strength. Swiss from all strata of society, whether Catholic or Protestant, from the liberal or conservative current, realised that the cantons would profit more from merging their economic and religious interests.
Thus, while the rest of Europe saw revolutionary uprisings, the Swiss drew up a constitution that provided for a federal layout, much of it inspired by the American example. This constitution provided central authority while leaving the cantons the right to self-government on local issues. Giving credit to those who favoured the power of the cantons (the Sonderbund Kantone), the national assembly was divided between an upper house (the Council of States, two representatives per canton) and a lower house (the National Council, with representatives elected from across the country). Referendums were made mandatory for any amendments. This new constitution ended the legal power of nobility in Switzerland.
Inauguration in 1882 of the Gotthard rail tunnel connecting the southern canton of Ticino, the longest in the world at the time
A single system of weights and measures was introduced, and in 1850 the Swiss franc became the Swiss single currency, complemented by the WIR franc in 1934. Article 11 of the constitution forbade sending troops to serve abroad, marking the end of foreign service. An exception was made for serving the Holy See, and the Swiss were still obliged to serve Francis II of the Two Sicilies, with Swiss Guards present at the siege of Gaeta in 1860.
An important clause of the constitution was that it could be entirely rewritten if necessary, thus enabling it to evolve as a whole rather than being modified one amendment at a time.
This need soon proved itself when the rise in population and the Industrial Revolution that followed led to calls to modify the constitution accordingly. The population rejected an early draft in 1872, but modifications led to its acceptance in 1874. It introduced the facultative referendum for laws at the federal level. It also established federal responsibility for defence, trade, and legal matters.
In 1891, the constitution was revised with unusually strong elements of direct democracy, which remain unique today.
### Modern history
General Ulrich Wille, appointed commander-in-chief of the Swiss Army for the duration of World War I
Switzerland was not invaded during either of the world wars. During World War I, Switzerland was home to the revolutionary and founder of the Soviet Union Vladimir Ilyich Ulyanov (Vladimir Lenin), who remained there until 1917. Swiss neutrality was seriously questioned by the short-lived Grimm–Hoffmann affair in 1917. In 1920, Switzerland joined the League of Nations, which was based in Geneva, after it was exempted from military requirements.
During World War II, detailed invasion plans were drawn up by the Germans, but Switzerland was never attacked. Switzerland was able to remain independent through a combination of military deterrence, concessions to Germany, and good fortune, as larger events during the war intervened. General Henri Guisan, appointed commander-in-chief for the duration of the war, ordered a general mobilisation of the armed forces. The Swiss military strategy changed from static defence at the borders to organised long-term attrition and withdrawal to strong, well-stockpiled positions high in the Alps, known as the Reduit. Switzerland was an important base for espionage by both sides and often mediated communications between the Axis and Allied powers.
Switzerland's trade was blockaded by both the Allies and the Axis. Economic cooperation and extension of credit to Nazi Germany varied according to the perceived likelihood of invasion and the availability of other trading partners. Concessions reached a peak after a crucial rail link through Vichy France was severed in 1942, leaving Switzerland (together with Liechtenstein) entirely isolated from the wider world by Axis-controlled territory. Over the course of the war, Switzerland interned over 300,000 refugees, aided by the International Red Cross, based in Geneva. Strict immigration and asylum policies, and the financial relationships with Nazi Germany, caused controversy only at the end of the 20th century.
During the war, the Swiss Air Force engaged aircraft of both sides, shooting down 11 intruding Luftwaffe planes in May and June 1940, then forcing down other intruders after a change of policy following threats from Germany. Over 100 Allied bombers and their crews were interned. Between 1940 and 1945, Switzerland was bombed by the Allies, causing fatalities and property damage. Among the cities and towns bombed were Basel, Brusio, Chiasso, Cornol, Geneva, Koblenz, Niederweningen, Rafz, Renens, Samedan, Schaffhausen, Stein am Rhein, Tägerwilen, Thayngen, Vals, and Zürich. Allied forces maintained that the bombings, which violated the 96th Article of War, resulted from navigation errors, equipment failure, weather conditions, and pilot errors. The Swiss expressed fear and concern that the bombings were intended to put pressure on Switzerland to end economic cooperation and neutrality with Nazi Germany. Court-martial proceedings took place in England. The U.S. paid SFR 62,176,433.06 in reparations.
Switzerland's attitude towards refugees was complicated and controversial; over the course of the war, it admitted as many as 300,000 refugees while refusing tens of thousands more, including Jews persecuted by the Nazis.
After the war, the Swiss government exported credits through the charitable fund known as the Schweizerspende and donated to the Marshall Plan to help Europe's recovery, efforts that ultimately benefited the Swiss economy.
During the Cold War, Swiss authorities considered the construction of a Swiss nuclear bomb. Leading nuclear physicists at the Federal Institute of Technology Zürich, such as Paul Scherrer, made this a realistic possibility. In 1988, the Paul Scherrer Institute was founded in his name to explore the therapeutic uses of neutron scattering technologies. Financial problems with the defence budget and ethical considerations prevented substantial funds from being allocated, and the Nuclear Non-Proliferation Treaty of 1968 was seen as a valid alternative. Plans for building nuclear weapons were dropped by 1988. Switzerland joined the Council of Europe in 1963.
In 2003, by granting the Swiss People's Party a second seat in the governing cabinet, the Parliament altered the coalition that had dominated Swiss politics since 1959.
Switzerland was the last Western republic (the Principality of Liechtenstein followed in 1984) to grant women the right to vote. Some Swiss cantons approved this in 1959, while at the federal level, it was achieved in 1971 and, after resistance, in the last canton Appenzell Innerrhoden (one of only two remaining Landsgemeinde, along with Glarus) in 1990. After obtaining suffrage at the federal level, women quickly rose in political significance. The first woman on the seven-member Federal Council executive was Elisabeth Kopp, who served from 1984 to 1989, and the first female president was Ruth Dreifuss in 1999.
In 1979 areas from the canton of Bern attained independence from the Bernese, forming the new canton of Jura. On 18 April 1999, the Swiss population and the cantons voted in favour of a completely revised federal constitution.
In 2002 Switzerland became a full member of the United Nations, leaving Vatican City as the last widely recognised state without full UN membership. Switzerland is a founding member of the EFTA but not the European Economic Area (EEA). An application for membership in the European Union was sent in May 1992, but did not advance after Swiss voters rejected the EEA in a December 1992 referendum. Several referendums on the EU issue ensued; due to opposition from the citizens, the membership application was withdrawn. Nonetheless, Swiss law is gradually changing to conform with that of the EU, and the government has signed bilateral agreements with the European Union. Switzerland, together with Liechtenstein, has been completely surrounded by the EU since Austria's entry in 1995. On 5 June 2005, Swiss voters agreed by a 55% majority to join the Schengen treaty, a result that EU commentators regarded as a sign of support. In September 2020, a referendum to end the pact allowing free movement of people from the European Union was introduced by the Swiss People's Party (SVP). However, voters rejected the attempt to retake control of immigration, defeating the motion by a roughly 63%–37% margin.
On 9 February 2014, 50.3% of Swiss voters approved a ballot initiative launched by the Swiss People's Party (SVP/UDC) to restrict immigration. This initiative was mostly backed by rural (57.6% approval) and suburban groups (51.2% approval), and isolated towns (51.3% approval) as well as by a strong majority (69.2% approval) in Ticino, while metropolitan centres (58.5% rejection) and the French-speaking part (58.5% rejection) rejected it. In December 2016, a political compromise with the EU was attained that eliminated quotas on EU citizens, but still allowed favourable treatment of Swiss-based job applicants. On 27 September 2020, 62% of Swiss voters rejected the anti-free movement referendum by SVP.
## Geography
Physical map of Switzerland (in German)
Extending across the north and south side of the Alps in west-central Europe, Switzerland encompasses diverse landscapes and climates across its 41,285 square kilometres (15,940 sq mi).
Switzerland lies between latitudes 45° and 48° N, and longitudes 5° and 11° E. It contains three basic topographical areas: the Swiss Alps to the south, the Swiss Plateau or Central Plateau, and the Jura mountains on the west. The Alps are a mountain range running across the centre and south of the country, constituting about 60% of its area. The majority of the population lives on the Swiss Plateau. The Swiss Alps host many glaciers, covering 1,063 square kilometres (410 sq mi). From these originate the headwaters of several major rivers, such as the Rhine, Inn, Ticino and Rhône, which flow in the four cardinal directions across Europe. The hydrographic network includes several of the largest bodies of fresh water in Central and Western Europe, among them Lake Geneva (Lac Léman in French), Lake Constance (Bodensee in German) and Lake Maggiore. Switzerland has more than 1,500 lakes and contains 6% of Europe's freshwater stock; lakes and glaciers cover about 6% of the national territory. Lake Geneva, the largest lake, is shared with France, and the Rhône is both its main source and its outflow. Lake Constance, the second largest, is likewise an intermediate step of the Rhine, at the border with Austria and Germany. While the Rhône flows into the Mediterranean Sea in the French Camargue region and the Rhine flows into the North Sea at Rotterdam, about 1,000 kilometres (620 miles) apart, their sources in the Swiss Alps lie only about 22 kilometres (14 miles) from each other.
Contrasted landscapes between the regions of the Matterhorn and Lake Lucerne
Forty-eight of Switzerland's mountains are 4,000 metres (13,000 ft) or higher. At 4,634 m (15,203 ft), Monte Rosa is the highest, although the Matterhorn (4,478 m or 14,692 ft) is the best known. Both are located within the Pennine Alps in the canton of Valais, on the border with Italy. The section of the Bernese Alps above the deep glacial Lauterbrunnen valley, containing 72 waterfalls, is well known for the Jungfrau (4,158 m or 13,642 ft), Eiger and Mönch peaks, and for its many picturesque valleys. In the southeast, the long Engadin Valley, encompassing St. Moritz, is also well known; the highest peak in the neighbouring Bernina Alps is Piz Bernina (4,049 m or 13,284 ft).
The Swiss Plateau has more open, hilly landscapes, partly forested and partly open pastures, usually with grazing herds or fields of vegetables and fruit. The large lakes and the biggest Swiss cities are found there.
Switzerland contains two small enclaves: Büsingen belongs to Germany, while Campione d'Italia belongs to Italy. Switzerland has no exclaves.
### Climate
The Swiss climate is generally temperate but varies greatly between localities, from glacial conditions on the mountaintops to the near-Mediterranean climate at Switzerland's southern tip. Some valley areas in the southern part of the country host cold-hardy palm trees. Summers tend to be warm and humid at times, with periodic rainfall that is ideal for pastures and grazing. The less humid winters in the mountains may see weeks-long intervals of stable conditions, while the lower lands tend to suffer from temperature inversion during such periods, hiding the sun.
A weather phenomenon known as the föhn (identical in effect to the chinook wind) can occur at any time of year and is characterised by an unexpectedly warm wind that brings air of low relative humidity to the north of the Alps during rainfall periods on the south-facing slopes. This works in both directions across the Alps but is more pronounced when blowing from the south, owing to the steeper step facing the oncoming wind; valleys running south to north produce the strongest effect. The driest conditions persist in the inner alpine valleys, which receive less rain because arriving clouds lose much of their moisture while crossing the mountains before reaching these areas. Large alpine areas such as Graubünden remain drier than pre-alpine areas, and, as in the main valley of the Valais, wine grapes are grown there.
The wettest conditions persist in the high Alps and in the canton of Ticino, which has much sunshine yet heavy bursts of rain from time to time. Precipitation tends to be spread moderately throughout the year, with a peak in summer. Autumn is the driest season and winter receives less precipitation than summer, yet weather patterns in Switzerland do not form a stable system and can vary from year to year with no strict and predictable periods.
### Environment
Switzerland contains two terrestrial ecoregions: Western European broadleaf forests and Alps conifer and mixed forests.
Switzerland's many small valleys separated by high mountains often host unique ecologies. The mountainous regions themselves offer a rich range of plants not found at other altitudes. The climatic, geological and topographical conditions of the alpine region make for a fragile ecosystem that is particularly sensitive to climate change. According to the 2014 Environmental Performance Index, Switzerland ranks first among 132 nations in safeguarding the environment, due to its high scores on environmental public health, its heavy reliance on renewable sources of energy (hydropower and geothermal energy), and its level of greenhouse gas emissions. In 2020 it was ranked third out of 180 countries. The country pledged to cut GHG emissions by 50% by 2030 compared to the level of 1990 and plans to reach zero emissions by 2050.
However, access to biocapacity in Switzerland is far lower than the world average. In 2016, Switzerland had 1.0 hectare of biocapacity per person within its territory, about 40 percent less than the world average of 1.6. In contrast, Swiss consumption in 2016 required 4.6 hectares of biocapacity per person – the country's ecological footprint – 4.6 times what Swiss territory can support. The remainder comes from other countries and from shared resources (such as the atmosphere, affected by greenhouse gas emissions). Switzerland had a 2019 Forest Landscape Integrity Index mean score of 3.53/10, ranking it 150th globally out of 172 countries.
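The ecological-deficit figures above follow from a simple calculation; this minimal sketch only restates the 2016 values quoted in this section (the variable names are illustrative, not from any official dataset):

```python
# 2016 figures quoted in the text (global hectares per person)
biocapacity = 1.0      # available within Swiss territory
world_average = 1.6    # world-average biocapacity per person
footprint = 4.6        # Swiss consumption (ecological footprint)

deficit_ratio = footprint / biocapacity        # 4.6: consumption vs. domestic capacity
below_world = 1 - biocapacity / world_average  # 0.375, ~40% less, as rounded in the text

print(f"Swiss consumption needs {deficit_ratio:.1f}x the domestic biocapacity")
print(f"Domestic biocapacity is about {below_world:.0%} below the world average")
```

The ratio of 4.6 is why the text says the remainder of the footprint must come from other countries and shared global resources.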
### Urbanisation
Urbanisation in the Rhone Valley (outskirts of Sion)
Between two-thirds and three-quarters of the population live in urban areas. Switzerland went from a largely rural country to an urban one between 1930 and 2000. After 1935, urban development claimed as much of the Swiss landscape as it had during the previous 2,000 years. Urban sprawl affects the plateau, the Jura and the Alpine foothills, raising concerns about land use. In the 21st century, population growth in urban areas has been higher than in the countryside.
Switzerland has a dense network of complementary large, medium and small towns. The plateau is densely populated, with about 450 people per km2, and the landscape shows uninterrupted signs of human presence. The weight of the largest metropolitan areas – Zürich, Geneva–Lausanne, Basel and Bern – tends to increase. The importance of these urban areas is greater than their population suggests, and they are recognised for their high quality of life.
The average population density in 2019 was 215.2 inhabitants per square kilometre (557/sq mi). In Graubünden, the largest canton by area and lying entirely in the Alps, density falls to 28.0 inhabitants per square kilometre (73/sq mi), while in the canton of Zürich, with its large urban capital, it reaches 926.8 per square kilometre (2,400/sq mi).
## Government and politics
The Federal Constitution adopted in 1848 is the legal foundation of Switzerland's federal state. A new Swiss Constitution was adopted in 1999 that did not introduce notable changes to the federal structure. It outlines rights of individuals and citizen participation in public affairs, divides the powers between the Confederation and the cantons and defines federal jurisdiction and authority. Three main bodies govern on the federal level: the bicameral parliament (legislative), the Federal Council (executive) and the Federal Court (judicial).
### Parliament
The Swiss Parliament consists of two houses: the Council of States which has 46 representatives (two from each canton and one from each half-canton) who are elected under a system determined by each canton, and the National Council, which consists of 200 members who are elected under a system of proportional representation, reflecting each canton's population. Members serve part-time for 4 years (a Milizsystem or citizen legislature). When both houses are in joint session, they are known collectively as the Federal Assembly. Through referendums, citizens may challenge any law passed by parliament and, through initiatives, introduce amendments to the federal constitution, thus making Switzerland a direct democracy.
### Federal Council
The Swiss Federal Council in 2022, with President Ignazio Cassis at the bottom; the members stand on a stylised railway map, positioned at their respective political origins.
The Federal Council directs the federal government, the federal administration, and serves as a collective Head of State. It is a collegial body of seven members, elected for a four-year term by the Federal Assembly, which also oversees the council. The President of the Confederation is elected by the Assembly from among the seven members, traditionally in rotation and for a one-year term; the President chairs the government and executes representative functions. The president is a primus inter pares with no additional powers and remains the head of a department within the administration.
The government has been a coalition of the four major political parties since 1959, each party holding a number of seats that roughly reflects its share of the electorate and its representation in the federal parliament. The classic distribution of 2 CVP/PDC, 2 SPS/PSS, 2 FDP/PRD and 1 SVP/UDC, as it stood from 1959 to 2003, was known as the "magic formula". Following the 2015 Federal Council elections, the seven seats were distributed as 2 SVP/UDC, 2 SPS/PSS, 2 FDP/PRD and 1 CVP/PDC.
### Supreme Court
The function of the Federal Supreme Court is to hear appeals against rulings of cantonal or federal courts. The judges are elected by the Federal Assembly for six-year terms.
### Direct democracy
The Landsgemeinde is an old form of direct democracy, still in practice in two cantons.
Direct democracy and federalism are hallmarks of the Swiss political system. Swiss citizens are subject to three legal jurisdictions: the municipality, canton and federal levels. The 1848 and 1999 Swiss Constitutions define a system of direct democracy (sometimes called half-direct or representative direct democracy because it includes institutions of a representative democracy). The instruments of this system at the federal level, known as popular rights (German: Volksrechte, French: droits populaires, Italian: diritti popolari), include the right to submit a federal initiative and a referendum, both of which may overturn parliamentary decisions.
By calling a federal referendum, a group of citizens may challenge a law passed by parliament by gathering 50,000 signatures against it within 100 days. If they succeed, a national vote is scheduled in which voters decide by simple majority whether to accept or reject the law. Any eight cantons together can also call a referendum on a federal law.
Similarly, the federal constitutional initiative allows citizens to put a constitutional amendment to a national vote if 100,000 voters sign the proposed amendment within 18 months. The Federal Council and the Federal Assembly may supplement the proposed amendment with a counterproposal; voters then indicate a preference on the ballot in case both proposals are accepted. Constitutional amendments, whether introduced by initiative or in parliament, must be accepted by a double majority of the national popular vote and of the cantonal popular votes.
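The double-majority rule can be sketched as a small check. This is an illustrative model, not official counting software; the vote shares and tallies below are hypothetical, and half-cantons are weighted at half a cantonal vote, as the Cantons section notes:

```python
def double_majority(popular_yes_share, canton_votes):
    """Check the double-majority requirement for a constitutional amendment.

    popular_yes_share: fraction of the national popular vote in favour (0..1).
    canton_votes: list of (accepted, weight) pairs, one per canton, where
    weight is 1.0 for a full canton and 0.5 for a half-canton.
    """
    popular_majority = popular_yes_share > 0.5
    yes_weight = sum(w for accepted, w in canton_votes if accepted)
    total_weight = sum(w for _, w in canton_votes)
    cantonal_majority = yes_weight > total_weight / 2
    return popular_majority and cantonal_majority

# Hypothetical tallies: 20 full cantons and 6 half-cantons (23 cantonal votes in all)
votes = ([(True, 1.0)] * 12 + [(False, 1.0)] * 8
         + [(True, 0.5)] * 2 + [(False, 0.5)] * 4)
print(double_majority(0.53, votes))  # True: popular yes, and 13.0 of 23.0 cantonal votes
```

An amendment that wins the popular vote but carries too few cantons (or vice versa) fails, which is the point of the rule: it protects the smaller cantons against the populous plateau.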
### Cantons
The Swiss Confederation consists of 26 cantons:
| Canton | ID | Capital | Canton | ID | Capital |
|---|---|---|---|---|---|
| Aargau | 19 | Aarau | *Nidwalden | 7 | Stans |
| *Appenzell Ausserrhoden | 15 | Herisau | *Obwalden | 6 | Sarnen |
| *Appenzell Innerrhoden | 16 | Appenzell | Schaffhausen | 14 | Schaffhausen |
| *Basel-Landschaft | 13 | Liestal | Schwyz | 5 | Schwyz |
| *Basel-Stadt | 12 | Basel | Solothurn | 11 | Solothurn |
| Bern | 2 | Bern | St. Gallen | 17 | St. Gallen |
| Fribourg | 10 | Fribourg | Thurgau | 20 | Frauenfeld |
| Geneva | 25 | Geneva | Ticino | 21 | Bellinzona |
| Glarus | 8 | Glarus | Uri | 4 | Altdorf |
| Grisons | 18 | Chur | Valais | 23 | Sion |
| Jura | 26 | Delémont | Vaud | 22 | Lausanne |
| Lucerne | 3 | Lucerne | Zug | 9 | Zug |
| Neuchâtel | 24 | Neuchâtel | Zürich | 1 | Zürich |

*These cantons are known as half-cantons.
The cantons are federated states. They have a permanent constitutional status and, in comparison with other countries, a high degree of independence. Under the Federal Constitution, all 26 cantons are equal in status, except that six (often referred to as the half-cantons) are represented by one councillor instead of two in the Council of States and carry only half a cantonal vote with respect to the required cantonal majority in referendums on constitutional amendments. Each canton has its own constitution and its own parliament, government, police and courts. However, the cantons differ considerably, particularly in population and geographical area: their populations vary between 16,003 (Appenzell Innerrhoden) and 1,487,969 (Zürich), and their areas between 37 km2 (14 sq mi) (Basel-Stadt) and 7,105 km2 (2,743 sq mi) (Grisons).
#### Municipalities
As of 2018 the cantons comprised 2,222 municipalities.
### Federal City
Until 1848, the loosely coupled Confederation did not have a central political organisation. Issues thought to affect the whole Confederation were the subject of periodic meetings in various locations.
In 1848, the federal constitution provided that details concerning federal institutions, such as their locations, should be addressed by the Federal Assembly (BV 1848 Art. 108). Thus on 28 November 1848, the Federal Assembly voted in the majority to locate the seat of government in Bern and, as a prototypical federal compromise, to assign other federal institutions, such as the Federal Polytechnical School (1854, the later ETH) to Zürich, and other institutions to Lucerne, such as the later SUVA (1912) and the Federal Insurance Court (1917). Other federal institutions were subsequently attributed to Lausanne (Federal Supreme Court in 1872, and EPFL in 1969), Bellinzona (Federal Criminal Court, 2004), and St. Gallen (Federal Administrative Court and Federal Patent Court, 2012).
The 1999 Constitution does not mention a Federal City and the Federal Council has yet to address the matter. Thus as of 2022, no city in Switzerland has the official status either of capital or of Federal City. Nevertheless, Bern is commonly referred to as "Federal City" (German: Bundesstadt, French: ville fédérale, Italian: città federale).
### Foreign relations and international institutions
The Palace of Nations, the European headquarters of the United Nations in Geneva
Traditionally, Switzerland avoids alliances that might entail military, political or direct economic action, and it has been neutral since the end of its expansion in 1515. Its policy of neutrality was internationally recognised at the Congress of Vienna in 1815, though Swiss neutrality has been questioned at times. In 2002 Switzerland became a full member of the United Nations, the first state to join by referendum. Switzerland maintains diplomatic relations with almost all countries and has historically served as an intermediary between other states. Switzerland is not a member of the European Union; the Swiss people have consistently rejected membership since the early 1990s. However, Switzerland does participate in the Schengen Area.
The colour-reversed Swiss flag became the symbol of the Red Cross Movement, founded in 1863 by Henry Dunant.
Many international institutions have their headquarters in Switzerland, in part because of its policy of neutrality. Geneva is the birthplace of the Red Cross and Red Crescent Movement and of the Geneva Conventions, and since 2006 it has hosted the United Nations Human Rights Council. Even though Switzerland is one of the most recent countries to have joined the United Nations, the Palace of Nations in Geneva is the second-biggest centre for the United Nations after New York. Switzerland was a founding member of, and hosted, the League of Nations.
Apart from the United Nations headquarters, the Swiss Confederation is host to many UN agencies, including the World Health Organization (WHO), the International Labour Organization (ILO), the International Telecommunication Union (ITU), the United Nations High Commissioner for Refugees (UNHCR) and about 200 other international organisations, including the World Trade Organization and the World Intellectual Property Organization. The annual meetings of the World Economic Forum in Davos bring together business and political leaders from Switzerland and foreign countries to discuss important issues. The headquarters of the Bank for International Settlements (BIS) moved to Basel in 1930.
Many sports federations and organisations are located in the country, including the International Handball Federation in Basel, the International Basketball Federation in Geneva, the Union of European Football Associations (UEFA) in Nyon, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation both in Zürich, the International Cycling Union in Aigle, and the International Olympic Committee in Lausanne.
Switzerland is scheduled to become a member of the United Nations Security Council for the 2023–2024 period.
#### Switzerland and the European Union
Although not a member, Switzerland maintains relationships with the EU and European countries through bilateral agreements. The Swiss have brought their economic practices largely into conformity with those of the EU, in an effort to compete internationally. EU membership faces considerable negative popular sentiment. It is opposed by the conservative SVP party, the largest party in the National Council, and not advocated by several other political parties. The membership application was formally withdrawn in 2016. The western French-speaking areas and the urban regions of the rest of the country tend to be more pro-EU, but do not form a significant share of the population.
Members of the European Free Trade Association (green) participate in the European single market and are part of the Schengen Area.
An Integration Office operates under the Department of Foreign Affairs and the Department of Economic Affairs. Seven bilateral agreements liberalised trade ties, taking effect in 2001. This first series of bilateral agreements included the free movement of persons. A second series of agreements covering nine areas was signed in 2004, including the Schengen Treaty and the Dublin Convention.
In 2006, a referendum approved 1 billion francs of supportive investment in Southern and Central European countries, in support of positive ties to the EU as a whole. A further referendum will be needed to approve 300 million francs to support Romania and Bulgaria following their recent admission.
The Swiss have faced EU and international pressure to reduce banking secrecy and to raise tax rates to parity with the EU. Preparatory discussions cover four areas: the electricity market, participation in the Galileo project, cooperation with the European Centre for Disease Prevention and Control, and certificates of origin for food products.
Switzerland is a member of the Schengen passport-free zone. Land border checkpoints apply only to the movement of goods, not to people.
### Military
A Swiss Air Force F/A-18 Hornet at Axalp Air Show
The Swiss Armed Forces, including the Land Forces and the Air Force, are composed mostly of conscripts: male citizens aged from 20 to 34 (in exceptional cases up to 50). Being landlocked, Switzerland has no navy; however, armed boats patrol the lakes bordering neighbouring countries. Swiss citizens are prohibited from serving in foreign armies, except for the Swiss Guards of the Vatican, or if they are dual citizens of a foreign country and reside there.
The Swiss militia system stipulates that soldiers keep their army-issued equipment, including personal weapons, at home. Some organisations and political parties find this practice controversial. Women can serve voluntarily. Men usually receive military conscription orders for training at the age of 18. About two-thirds of young Swiss are found suitable for service; for the others, various forms of alternative service are available. Annually, approximately 20,000 persons are trained in recruit centres for 18 to 21 weeks. The reform "Army XXI" was adopted by popular vote in 2003, replacing "Army 95", reducing the rolls from 400,000 to about 200,000. Of those, 120,000 are active in periodic Army training, and 80,000 are non-training reserves.
The newest reform of the military, WEA/DEVA/USEs, started in 2019 and was expected to reduce the number of army personnel to 100,000 by the end of 2022.
Swiss-built Mowag Eagles of the Land Forces
Overall, three general mobilisations have been declared to ensure the integrity and neutrality of Switzerland. The first was in response to the Franco-Prussian War of 1870–71, the second to the outbreak of the First World War in August 1914, and the third, in September 1939, to the German attack on Poland.
Because of its neutrality policy, the Swiss army does not take part in armed conflicts in other countries, but joins some peacekeeping missions. Since 2000 the armed force department has maintained the Onyx intelligence gathering system to monitor satellite communications.
Gun politics in Switzerland are unique in Europe in that 2–3.5 million guns are in the hands of civilians, an estimated 28–41 guns per 100 people. According to the Small Arms Survey, only 324,484 guns are owned by the military, of which 143,372 are in the hands of soldiers. However, ammunition is no longer issued.
## Economy and labour law
A proportional representation of Switzerland's exports, 2019
The city of Basel (Roche Tower) is the capital of the country's pharmaceutical industry, which accounts for around 38% of Swiss exports worldwide.
The Greater Zürich area, home to 1.5 million inhabitants and 150,000 companies, is one of the most important economic centres in the world.
Origin of the capital at the 30 biggest Swiss corporations, 2018:
Switzerland (39%)
North America (33%)
Europe (24%)
Rest of the world (4%)
Switzerland has a stable, prosperous and high-tech economy. It is the world's wealthiest country per capita in multiple rankings. The country ranks as one of the least corrupt countries in the world, while its banking sector is rated as "one of the most corrupt in the world". It has the world's twentieth largest economy by nominal GDP and the thirty-eighth largest by purchasing power parity. It is the seventeenth largest exporter. Zürich and Geneva are regarded as global cities, ranked as Alpha and Beta respectively. Basel is the capital of Switzerland's pharmaceutical industry, hosting Novartis, Roche, and many other players. It is one of the world's most important centres for the life sciences industry.
Switzerland had the highest European rating in the Index of Economic Freedom 2010, while also providing significant public services. On a per capita basis, nominal GDP is higher than those of the larger Western and Central European economies and Japan, while adjusted for purchasing power, Switzerland ranked 11th in 2017, fifth in 2018 and ninth in 2020.
The 2016 World Economic Forum Global Competitiveness Report ranked Switzerland's economy as the world's most competitive; as of 2019, it ranks fifth globally. The European Union has labelled it Europe's most innovative country, and it ranked first in the Global Innovation Index in 2022, as it had in 2021, 2020 and 2019. It ranked 20th of 189 countries in the Ease of Doing Business Index. Switzerland's slow growth in the 1990s and early 2000s increased support for economic reforms and harmonisation with the European Union. In 2020, IMD placed Switzerland first in attracting skilled workers.
For much of the 20th century, Switzerland was the wealthiest country in Europe by a considerable margin in per capita GDP. Switzerland has one of the world's largest current account balances as a percentage of GDP. In 2018, the canton of Basel-City had the highest GDP per capita, ahead of Zug and Geneva. According to Credit Suisse, only about 37% of residents own their homes, one of the lowest rates of home ownership in Europe. Housing and food price levels were 171% and 145% of the EU-25 index in 2007, compared with 113% and 104% in Germany.
Switzerland is home to several large multinational corporations. The largest by revenue are Glencore, Gunvor, Nestlé, Mediterranean Shipping Company, Novartis, Hoffmann-La Roche, ABB, Mercuria Energy Group and Adecco. Also, notable are UBS AG, Zurich Financial Services, Richemont, Credit Suisse, Barry Callebaut, Swiss Re, Rolex, Tetra Pak, The Swatch Group and Swiss International Air Lines.
Switzerland's most important economic sector is manufacturing. Manufactured products include specialty chemicals, health and pharmaceutical goods, scientific and precision measuring instruments and musical instruments. The largest exported goods are chemicals (34% of exported goods), machines/electronics (20.9%), and precision instruments/watches (16.9%). The service sector – especially banking and insurance, commodities trading, tourism, and international organisations – is another important industry for Switzerland. Exported services amount to a third of exports.
Agricultural protectionism—a rare exception to Switzerland's free trade policies—contributes to high food prices. Product market liberalisation is lagging behind many EU countries according to the OECD. Apart from agriculture, economic and trade barriers between the European Union and Switzerland are minimal, and Switzerland has free trade agreements with many countries. Switzerland is a member of the European Free Trade Association (EFTA).
### Taxation and government spending
Switzerland is a tax haven. The private-sector economy dominates and tax rates are low; the ratio of tax revenue to GDP is one of the smallest among developed countries. The Swiss federal budget reached 62.8 billion Swiss francs in 2010, 11.35% of GDP; however, canton and municipality budgets are not counted as part of the federal budget, and total government spending is closer to 33.8% of GDP. The main sources of federal income are the value-added tax (33% of tax revenue) and the direct federal tax (29%), and the main areas of expenditure are social welfare and finance/taxes. Expenditures of the Swiss Confederation grew from 7% of GDP in 1960 to 9.7% in 1990 and 10.7% in 2010. While spending on social welfare and on finance and taxes grew from 35% of the budget in 1990 to 48.2% in 2010, expenditure on agriculture and national defence fell significantly, from 26.5% to 12.4% (estimate for 2015).
### Labour force
Slightly more than 5 million people work in Switzerland; about 25% of employees belonged to a trade union in 2004. Switzerland has a more flexible labour market than its neighbours, and the unemployment rate is consistently low. The rate rose from 1.7% in June 2000 to 4.4% in December 2009, then decreased to 3.2% in 2014 and held steady for several years before dropping further to 2.5% in 2018 and 2.3% in 2019. Population growth from net immigration reached 0.52% of the population in 2004, increased in the following years, and stood at 0.54% in 2017. The foreign-citizen population was 28.9% in 2015, about the same as in Australia.
In 2016, the median monthly gross income in Switzerland was 6,502 francs (equivalent to US$6,597). After rent, taxes and pension contributions, plus spending on goods and services, the average household has about 15% of its gross income left for savings. Though 61% of the population earned less than the mean income, income inequality is relatively low, with a Gini coefficient of 29.7, placing Switzerland among the top 20 countries. In 2015, the richest 1% owned 35% of the wealth, and wealth inequality increased through 2019.
About 8.2% of the population live below the national poverty line, defined in Switzerland as earning less than CHF 3,990 per month for a household of two adults and two children, and a further 15% are at risk of poverty. Single-parent families, people with no post-compulsory education and those out of work are among the most likely to live below the poverty line. Although work is considered a way out of poverty, some 4.3% of workers are considered working poor. One in ten jobs in Switzerland is considered low-paid, and roughly 12% of Swiss workers hold such jobs, many of them women and foreigners.
## Education and science
The University of Basel is Switzerland's oldest university (1460).
Some Swiss scientists who played a key role in their discipline (clockwise):
Leonhard Euler (mathematics)
Louis Agassiz (glaciology)
Auguste Piccard (aeronautics)
Albert Einstein (physics)
Education in Switzerland is diverse because the Swiss constitution delegates the operation of the school system to the cantons. Public and private schools are available, including many private international schools.
### Primary education
The minimum age for primary school is about six years, but most cantons provide a free "children's school" starting at age four or five. Primary school continues until grade four, five or six, depending on the canton. Traditionally, the first foreign language taught in school was one of the other national languages, although in 2000 English was introduced first in a few cantons. At the end of primary school, or at the beginning of secondary school, pupils are assigned according to their abilities to one of several sections (often three). The fastest learners take advanced classes to prepare for further studies and the matura, while other students receive an education adapted to their needs.
### Tertiary education
Switzerland hosts 12 universities, ten of which are maintained at cantonal level and usually offer non-technical subjects. It ranked 87th on the 2019 Academic Ranking of World Universities. The largest is the University of Zurich, with nearly 25,000 students. The Swiss Federal Institute of Technology Zurich (ETHZ) and the University of Zurich were listed 20th and 54th respectively on the 2015 Academic Ranking of World Universities.
The federal government sponsors two institutes: the Swiss Federal Institute of Technology Zurich (ETHZ) in Zürich, founded in 1855 and the École Polytechnique Fédérale de Lausanne (EPFL) in Lausanne, founded in 1969, formerly associated with the University of Lausanne.
Eight of the world's ten best hotel schools are located in Switzerland. Various universities of applied sciences are also available. In business and management studies, the University of St. Gallen (HSG) is ranked 329th in the world by the QS World University Rankings, while the International Institute for Management Development (IMD) was ranked first in open programmes worldwide. Switzerland has the second-highest rate of foreign students in tertiary education (almost 18% in 2003), after Australia (slightly over 18%).
The Graduate Institute of International and Development Studies, located in Geneva, is continental Europe's oldest graduate school of international and development studies and is widely held to be among the most prestigious.
### Science
Switzerland has produced many Nobel Prize laureates. They include Albert Einstein, who developed his theory of special relativity in Bern. Later, Vladimir Prelog, Heinrich Rohrer, Richard Ernst, Edmond Fischer, Rolf Zinkernagel, Kurt Wüthrich and Jacques Dubochet received Nobel science prizes. In total, 114 laureates across all fields have a connection to Switzerland. The Nobel Peace Prize has been awarded nine times to organisations headquartered in Switzerland.
The LHC tunnel. CERN is the world's largest laboratory and also the birthplace of the World Wide Web.
Geneva and the nearby French department of Ain co-host the world's largest laboratory, CERN, dedicated to particle physics research. Another important research centre is the Paul Scherrer Institute.
Notable inventions include lysergic acid diethylamide (LSD), diazepam (Valium), the scanning tunnelling microscope (Nobel prize) and Velcro. Some technologies enabled the exploration of new worlds, such as the pressurised balloon of Auguste Piccard and the bathyscaphe that permitted Jacques Piccard to reach the deepest point of the world's oceans.
The Swiss Space Office has been involved in various space technologies and programmes. Switzerland was one of the ten founders of the European Space Agency in 1975 and is the seventh largest contributor to the ESA budget. In the private sector, several companies participate in the space industry, such as Oerlikon Space and Maxon Motors.
### Energy
Switzerland has the tallest dams in Europe, among them the Mauvoisin Dam in the Alps. Hydroelectric power is the most important domestic source of energy in the country.
Electricity generated in Switzerland is 56% from hydroelectricity and 39% from nuclear power, resulting in negligible CO2 emissions from generation. On 18 May 2003, two anti-nuclear referendums were defeated: Moratorium Plus, which aimed to forbid the building of new nuclear power plants (41.6% support), and Electricity Without Nuclear (33.7% support), after an earlier moratorium expired in 2000. After the Fukushima nuclear disaster, the government announced in 2011 plans to end the use of nuclear energy over the following two to three decades. In November 2016, Swiss voters rejected a Green Party referendum to accelerate the phaseout of nuclear power (45.8% support). The Swiss Federal Office of Energy (SFOE), part of the Federal Department of Environment, Transport, Energy and Communications (DETEC), is responsible for energy supply and energy use. The agency supports the 2000-watt society initiative to cut the nation's energy use by more than half by 2050.
### Transport
Entrance of the new Lötschberg Base Tunnel, the third-longest railway tunnel in the world, under the old Lötschberg railway line. It was the first completed tunnel of the greater project NRLA.
The densest rail network in Europe spans 5,250 kilometres (3,260 mi) and carried over 596 million passengers annually as of 2015. In 2015, each Swiss resident travelled on average 2,550 kilometres (1,580 mi) by rail, more than in any other European country. Virtually 100% of the network is electrified, and 60% of it is operated by the Swiss Federal Railways (SBB CFF FFS). Besides the second largest standard gauge railway company, BLS AG, two railway companies operate narrow gauge networks: the Rhaetian Railway (RhB) in Graubünden, which includes some World Heritage lines, and the Matterhorn Gotthard Bahn (MGB), which together with the RhB operates the Glacier Express between Zermatt and St. Moritz/Davos. Switzerland operates the world's longest and deepest railway tunnel and the first flat, low-level route through the Alps, the 57.1-kilometre-long (35.5 mi) Gotthard Base Tunnel, the largest part of the New Railway Link through the Alps (NRLA) project.
Switzerland has a publicly managed, toll-free road network financed by highway permits as well as vehicle and gasoline taxes. Use of the Swiss autobahn/autoroute system requires vehicles, including passenger cars and trucks, to carry an annual vignette (toll sticker), which costs 40 Swiss francs. The network stretches for 1,638 km (1,018 mi) and has one of the highest motorway densities in the world.
Zurich Airport is Switzerland's largest international flight gateway; it handled 22.8 million passengers in 2012. The other international airports are Geneva Airport (13.9 million passengers in 2012), EuroAirport Basel Mulhouse Freiburg (located in France), Bern Airport, Lugano Airport, St. Gallen-Altenrhein Airport and Sion Airport. Swiss International Air Lines is the flag carrier. Its main hub is Zürich, but it is legally domiciled in Basel.
### Environment
Switzerland has one of the best environmental records among developed nations. It is a signatory to the Kyoto Protocol. With Mexico and South Korea it forms the Environmental Integrity Group (EIG).
The country is active in recycling and anti-littering programs and is one of the world's top recyclers, recovering 66% to 96% of recyclable materials, varying across the country. The 2014 Global Green Economy Index placed Switzerland among the top 10 green economies.
Switzerland's garbage disposal system relies on economic incentives and is based mostly on recycling and energy-producing incinerators. As in other European countries, the illegal disposal of garbage is heavily fined. In almost all Swiss municipalities, mandatory stickers or dedicated garbage bags allow the identification of disposable garbage.
## Demographics
Population density in Switzerland (2019)
Percentage of foreigners in Switzerland (2019)
Resident population (age 15+) by migration status (2012/2021)
| Migration status | 2012 | 2021 | Change |
| --- | --- | --- | --- |
| Without migration background | 65% | 59% | -6% |
| Immigrants: first generation | 28% | 31% | +3% |
| Immigrants: second generation | 7% | 8% | +1% |
| Migration status unknown | 0% | 1% | +1% |
In common with other developed countries, the Swiss population increased rapidly during the industrial era, quadrupling between 1800 and 1990, and it has continued to grow.
The population is about 8.7 million (2020 estimate). Population growth is projected to continue into 2035, driven mostly by immigration. Like most of Europe, Switzerland faces an ageing population, with a fertility rate close to replacement level. Switzerland has one of the world's oldest populations, with an average age of 42.5 years.
Fourteen percent of men and 6.5% of women between 20 and 24 reported consuming cannabis in the past 30 days, and five Swiss cities were listed among the top ten European cities for cocaine use as measured in wastewater.
### Immigration
As of 2020, resident foreigners made up 25.7% of the population. Most of these (83%) were from European countries. Italy provided the largest single group of foreigners, at 14.7% of the total foreign population, followed closely by Germany (14.0%), Portugal (11.7%), France (6.6%), Kosovo (5.1%), Spain (3.9%), Turkey (3.1%), North Macedonia (3.1%), Serbia (2.8%), Austria (2.0%), the United Kingdom (1.9%), Bosnia and Herzegovina (1.3%) and Croatia (1.3%). Immigrants from Sri Lanka (1.3%), most of them former Tamil refugees, were the largest group of Asian origin (7.9%).
2021 figures show that 39.5% (compared to 34.7% in 2012) of the permanent resident population aged 15 or over (around 2.89 million people) had an immigrant background. 38% of those with an immigrant background (1.1 million) held Swiss citizenship.
In the 2000s, domestic and international institutions expressed concern about what was perceived as an increase in xenophobia. In reply to one critical report, the Federal Council noted that "racism unfortunately is present in Switzerland", but stated that the high proportion of foreign citizens in the country, as well as the generally successful integration of foreigners, underlined Switzerland's openness. A follow-up study conducted in 2018 reported that 59% considered racism a serious problem in Switzerland. The proportion of the population that claimed to have been targeted by racial discrimination increased from 10% in 2014 to almost 17% in 2018, according to the Federal Statistical Office.
### Largest cities
Largest towns in Switzerland
Swiss Federal Statistical Office (FSO), Neuchâtel, 2020
| Rank | Name | Canton | Pop. | Rank | Name | Canton | Pop. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Zürich | Zürich | 421,878 | 11 | Thun | Bern | 43,476 |
| 2 | Geneva | Geneva | 203,856 | 12 | Bellinzona | Ticino | 43,360 |
| 3 | Basel | Basel-Stadt | 178,120 | 13 | Köniz | Bern | 42,388 |
| 4 | Lausanne | Vaud | 140,202 | 14 | La Chaux-de-Fonds | Neuchâtel | 36,915 |
| 5 | Bern | Bern | 134,794 | 15 | Fribourg | Fribourg | 38,039 |
| 6 | Winterthur | Zürich | 114,220 | 16 | Schaffhausen | Schaffhausen | 36,952 |
| 7 | Lucerne | Luzern | 82,620 | 17 | Vernier | Geneva | 34,898 |
| 8 | St. Gallen | St. Gallen | 76,213 | 18 | Chur | Graubünden | 36,336 |
| 9 | Lugano | Ticino | 62,315 | 19 | Sion | Valais | 34,978 |
| 10 | Biel/Bienne | Bern | 55,206 | 20 | Uster | Zürich | 35,337 |
### Languages
National languages in Switzerland (2016):
German (62.8%)
French (22.9%)
Italian (8.2%)
Romansh (0.5%)
Switzerland has four national languages: mainly German (spoken natively by 62.8% of the population in 2016); French (22.9%) in the west; and Italian (8.2%) in the south. The fourth national language, Romansh (0.5%), is a Romance language spoken locally in the southeastern trilingual canton of Grisons. It is designated by Article 4 of the Federal Constitution as a national language along with German, French and Italian, and Article 70 makes it an official language when the authorities communicate with Romansh speakers. However, federal laws and other official acts need not be decreed in Romansh.
In 2016, the languages most spoken at home among permanent residents aged 15 and older were Swiss German (59.4%), French (23.5%), Standard German (10.6%), and Italian (8.5%). Other languages spoken at home included English (5.0%), Portuguese (3.8%), Albanian (3.0%), Spanish (2.6%) and Serbian and Croatian (2.5%). 6.9% reported speaking another language at home. In 2014 almost two-thirds (64.4%) of the permanent resident population indicated speaking more than one language regularly.
The federal government is obliged to communicate in the official languages, and in the federal parliament simultaneous translation is provided from and into German, French and Italian.
Aside from the official forms of their respective languages, the four linguistic regions of Switzerland also have local dialectal forms. The role played by dialects in each linguistic region varies dramatically: in German-speaking regions, Swiss German dialects have become more prevalent since the second half of the 20th century, especially in the media, and are used as an everyday language for many, while the Swiss variety of Standard German is almost always used instead of dialect for written communication (cf. diglossic usage of a language). Conversely, in the French-speaking regions, local Franco-Provençal dialects have almost disappeared (only 6.3% of the population of Valais, 3.9% of Fribourg, and 3.1% of Jura still spoke dialects at the end of the 20th century), while in the Italian-speaking regions, the use of Lombard dialects is mostly limited to family settings and casual conversation.
The principal official languages have terms not used outside of Switzerland, known as Helvetisms. German Helvetisms are, roughly speaking, a large group of words typical of Swiss Standard German that appear neither in Standard German nor in other German dialects. These include terms borrowed from Switzerland's surrounding language cultures (such as the German Billett, from French) and terms whose meaning has shifted under the influence of a similar word in another language (the Italian azione, used to mean not only act but also discount, after the German Aktion). Swiss French, while generally close to the French of France, also contains some Helvetisms. The most frequent Helvetisms concern vocabulary, phrases and pronunciation, although some are also distinctive in syntax and orthography. Duden, the comprehensive German dictionary, contains about 3,000 Helvetisms. Current French dictionaries, such as the Petit Larousse, include several hundred Helvetisms; notably, Swiss French uses different terms than those used in France for the numbers 70 (septante) and 90 (nonante), and often 80 (huitante) as well.
Learning one of the other national languages is compulsory for all Swiss pupils, so many Swiss are expected to be at least bilingual, especially those belonging to linguistic minority groups. Because the largest part of Switzerland is German-speaking, many French, Italian and Romansh speakers who migrate to the rest of Switzerland, as well as their children born there, speak German. While learning one of the other national languages at school is important, most Swiss learn English to communicate with Swiss who speak other languages, as it is perceived as a neutral means of communication. English often functions as a lingua franca.
## Health
Swiss residents are required to buy health insurance from private insurance companies, which in turn are required to accept every applicant. While the cost of the system is among the highest in the world, its health outcomes compare well with those of other European countries, and patients are reported to be, in general, highly satisfied with it. In 2012, life expectancy at birth was 80.4 years for men and 84.7 years for women, the world's highest. Spending on health, at 11.4% of GDP (2010), is on par with Germany and France (11.6%) and other European countries, but notably less than in the US (17.6%). Costs have risen steadily since 1990.
It is estimated that one out of six Swiss persons suffers from mental illness.
## Culture
Alphorn concert in Vals
Swiss culture is characterised by diversity, reflected in a wide range of traditional customs. A region may be in some ways culturally connected to the neighbouring country that shares its language, while all regions are rooted in western European culture. The linguistically isolated Romansh culture in Graubünden in eastern Switzerland constitutes an exception. It survives only in the upper valleys of the Rhine and the Inn and strives to maintain its rare linguistic tradition.
Switzerland is home to notable contributors to literature, art, architecture, music and the sciences. In addition, the country has attracted creative people during times of unrest or war. Some 1,000 museums are found in the country, more than triple the number in 1950.
Among the most important cultural performances held annually are the Paléo Festival, Lucerne Festival, the Montreux Jazz Festival, the Locarno International Film Festival and Art Basel.
Alpine symbolism played an essential role in shaping Swiss history and the Swiss national identity. Many alpine areas and ski resorts attract visitors for winter sports as well as hiking and mountain biking in summer. The quieter seasons are spring and autumn. A traditional pastoral culture predominates in many areas, and small farms are omnipresent in rural areas. Folk art is nurtured by organisations across the country; it appears most directly in music, dance, poetry, wood carving and embroidery. The alphorn, a trumpet-like musical instrument made of wood, has joined yodelling and the accordion as epitomes of traditional Swiss music.
### Religion
Religion in Switzerland (age 15+, 2018–2020):
Old Catholics (0.1%)
Other Christians (0.3%)
Unaffiliated (29.4%)
Islam (5.4%)
Hinduism (0.6%)
Buddhism (0.5%)
Judaism (0.2%)
Other religions (0.3%)
Undetermined (1.1%)
Christianity is the predominant religion according to national surveys by the Swiss Federal Statistical Office (professed by about 67% of the resident population in 2016–2018 and 75% of Swiss citizens), divided between the Catholic Church (35.8% of the population), the Swiss Reformed Church (23.8%), further Protestant churches (2.2%), Eastern Orthodoxy (2.5%), and other Christian denominations (2.2%).
Switzerland has no official state religion, though most of the cantons (except Geneva and Neuchâtel) recognise official churches, either the Catholic Church or the Swiss Reformed Church. These churches, and in some cantons the Old Catholic Church and Jewish congregations, are financed by official taxation of members. In 2020, the Roman Catholic Church had 3,048,475 registered and church tax paying members (corresponding to 35.2% of the total population), while the Swiss Reformed Church had 2,015,816 members (23.3% of the total population).
26.3% of Swiss permanent residents are not affiliated with a religious community.
As of 2020, according to a national survey conducted by the Swiss Federal Statistical Office, Christian minority communities included Neo-Pietism (0.5%), Pentecostalism (0.4%, mostly incorporated in the Schweizer Pfingstmission), Apostolic communities (0.3%), other Protestant denominations (1.1%, including Methodism), the Old Catholic Church (0.1%), other Christian denominations (0.3%). Non-Christian religions are Islam (5.3%), Hinduism (0.6%), Buddhism (0.5%), Judaism (0.25%) and others (0.4%).
Historically, the country was about evenly balanced between Catholic and Protestant, in a complex patchwork. During the Reformation Switzerland became home to many reformers. Geneva converted to Protestantism in 1536, just before John Calvin arrived. In 1541, he founded the Republic of Geneva on his own ideals. It became known internationally as the Protestant Rome and housed such reformers as Theodore Beza, William Farel or Pierre Viret. Zürich became another reform stronghold around the same time, with Huldrych Zwingli and Heinrich Bullinger taking the lead. Anabaptists Felix Manz and Conrad Grebel also operated there. They were later joined by the fleeing Peter Martyr Vermigli and Hans Denck. Other centres included Basel (Andreas Karlstadt and Johannes Oecolampadius), Berne (Berchtold Haller and Niklaus Manuel), and St. Gallen (Joachim Vadian). One canton, Appenzell, was officially divided into Catholic and Protestant sections in 1597. The larger cities and their cantons (Bern, Geneva, Lausanne, Zürich and Basel) used to be predominantly Protestant. Central Switzerland, the Valais, the Ticino, Appenzell Innerrhodes, the Jura, and Fribourg are traditionally Catholic.
The Swiss Constitution of 1848, drafted under the fresh impression of the clashes between Catholic and Protestant cantons that culminated in the Sonderbundskrieg, consciously defines a consociational state, allowing the peaceful co-existence of Catholics and Protestants.[citation needed] A 1980 initiative calling for the complete separation of church and state was rejected by 78.9% of voters. Some traditionally Protestant cantons and cities nowadays have a slight Catholic majority, because since about 1970 a steadily growing minority has been unaffiliated with any religious body (21.4% in Switzerland in 2012), especially in traditionally Protestant regions such as Basel-City (42%), the canton of Neuchâtel (38%), the canton of Geneva (35%), the canton of Vaud (26%), and the city of Zürich (city: >25%; canton: 23%).
### Literature
Jean-Jacques Rousseau was not only a writer but also an influential philosopher of the eighteenth century.
The earliest forms of literature were in German, reflecting the language's early predominance. In the 18th century, French became fashionable in Bern and elsewhere, while the influence of the French-speaking allies and subject lands increased.
Among the classic authors of Swiss literature are Jeremias Gotthelf (1797–1854) and Gottfried Keller (1819–1890). The undisputed giants of 20th-century Swiss literature are Max Frisch (1911–91) and Friedrich Dürrenmatt (1921–90), whose repertoire includes Die Physiker (The Physicists) and Das Versprechen (The Pledge), the latter released in 2001 as a Hollywood film.
Famous French-speaking writers included Jean-Jacques Rousseau (1712–1778) and Germaine de Staël (1766–1817). More recent authors include Charles Ferdinand Ramuz (1878–1947), whose novels describe the lives of peasants and mountain dwellers in a harsh environment, and Blaise Cendrars (born Frédéric Sauser, 1887–1961). Italian and Romansh-speaking authors also contributed to the Swiss literary landscape, generally in proportion to their number.
Probably the most famous Swiss literary creation, Heidi, the story of an orphan girl who lives with her grandfather in the Alps, is one of the most popular children's books and has come to be a symbol of Switzerland. Her creator, Johanna Spyri (1827–1901), wrote a number of books on similar themes.
### Media
Freedom of the press and the right to free expression are guaranteed in the constitution. The Swiss News Agency (SNA) broadcasts information in three of the four national languages, covering politics, economics, society and culture. The SNA supplies almost all Swiss media and foreign media with its reporting.
Switzerland has historically boasted the world's greatest number of newspaper titles relative to its population and size. The most influential newspapers are the German-language Tages-Anzeiger and Neue Zürcher Zeitung NZZ, and the French-language Le Temps, but almost every city has at least one local newspaper, in the most common local language.
The government exerts greater control over broadcast media than print media, especially due to financing and licensing. The Swiss Broadcasting Corporation, whose name was recently changed to SRG SSR, is charged with the production and distribution of radio and television content. SRG SSR studios are distributed across the various language regions. Radio content is produced in six central and four regional studios while video media are produced in Geneva, Zürich, Basel, and Lugano. An extensive cable network allows most Swiss to access content from neighbouring countries.
### Sports
Ski area over the glaciers of Saas-Fee
Skiing, snowboarding and mountaineering are among the most popular sports, reflecting the nature of the country. Winter sports are practised by natives and visitors alike. The bobsleigh was invented in St. Moritz. The first world ski championships were held in Mürren (1931) and St. Moritz (1934). The latter town hosted the second Winter Olympic Games in 1928 and the fifth edition in 1948. Among its most successful skiers and world champions are Pirmin Zurbriggen and Didier Cuche.
The most prominently watched sports in Switzerland are football, ice hockey, Alpine skiing, "Schwingen", and tennis.
The headquarters of football's and ice hockey's international governing bodies, the International Federation of Association Football (FIFA) and the International Ice Hockey Federation (IIHF), are located in Zürich. Many other international sports federations are headquartered in Switzerland; for example, the International Olympic Committee (IOC), the IOC's Olympic Museum and the Court of Arbitration for Sport (CAS) are located in Lausanne.
Switzerland hosted the 1954 FIFA World Cup and was the joint host, with Austria, of the UEFA Euro 2008 tournament. The Swiss Super League is the nation's professional football club league. Europe's highest football pitch, at 2,000 metres (6,600 ft) above sea level, is located in Switzerland, the Ottmar Hitzfeld Stadium.
Many Swiss follow ice hockey and support one of the 12 teams of the National League, the most attended league in Europe. In 2009, Switzerland hosted the IIHF World Championship for the tenth time; the national team finished as world runner-up in 2013 and 2018. Its numerous lakes make Switzerland an attractive sailing destination. The largest, Lake Geneva, is home to the sailing team Alinghi, the first European team to win the America's Cup (2003), which successfully defended the title in 2007.
Roger Federer has won 20 Grand Slam singles titles, making him among the most successful men's tennis players ever.
Swiss tennis player Roger Federer is widely regarded as among the sport's greatest players. He won 20 Grand Slam tournaments, including a record eight Wimbledon titles, and a record six ATP Finals. He was ranked no. 1 in the ATP rankings for a record 237 consecutive weeks, and ended 2004, 2005, 2006, 2007 and 2009 ranked no. 1. Fellow Swiss players Martina Hingis and Stan Wawrinka also hold multiple Grand Slam titles. Switzerland won the Davis Cup in 2014.
Motorsport racecourses and events were banned in Switzerland following the 1955 Le Mans disaster, with exceptions for events such as hillclimbing. The country nonetheless continued to produce successful racing drivers such as Clay Regazzoni, Sébastien Buemi, Jo Siffert and Dominique Aegerter, successful World Touring Car Championship driver Alain Menu, 2014 24 Hours of Le Mans winner Marcel Fässler and 2015 24 Hours Nürburgring winner Nico Müller. Switzerland also won the A1GP World Cup of Motorsport in 2007–08 with driver Neel Jani. Swiss motorcycle racer Thomas Lüthi won the 2005 MotoGP World Championship in the 125cc category. In June 2007 the Swiss National Council, one house of the Federal Assembly of Switzerland, voted to overturn the ban; however, the other house, the Council of States, rejected the change, and the ban remains in place.
Traditional sports include Swiss wrestling or "Schwingen", a tradition from the rural central cantons and considered the national sport by some. Hornussen is another indigenous Swiss sport, resembling a cross between baseball and golf. Steinstossen is the Swiss variant of the stone put, a competition in throwing a heavy stone. Practised only among the alpine population since prehistoric times, it is recorded to have taken place in Basel in the 13th century. It is central to the Unspunnenfest, first held in 1805, whose symbol is the 83.5 kg stone named Unspunnenstein.
### Cuisine
Fondue is melted cheese, into which bread is dipped.
The cuisine is multifaceted. While dishes such as fondue, raclette and rösti are omnipresent, each region developed its own gastronomy according to differences in climate and language. Traditional Swiss cuisine uses ingredients similar to those in other European countries, as well as unique dairy products and cheeses such as Gruyère or Emmental, produced in the valleys of Gruyères and Emmental. The number of fine-dining establishments is high, particularly in western Switzerland.
Chocolate has been made in Switzerland since the 18th century. Its reputation grew at the end of the 19th century with the invention of modern techniques such as conching and tempering, which enabled higher quality. Another breakthrough was the invention of solid milk chocolate in 1875 by Daniel Peter. The Swiss are the world's largest chocolate consumers.
Due to the popularisation of processed foods at the end of the 19th century, Swiss health food pioneer Maximilian Bircher-Benner created the first nutrition-based therapy in the form of the well-known rolled oats cereal dish, called Birchermüesli.[citation needed]
The most popular alcoholic drink is wine. Switzerland is notable for the variety of grapes grown, reflecting the large variations in terroirs. Swiss wine is produced mainly in Valais, Vaud (Lavaux), Geneva and Ticino, with a small majority of white wines. Vineyards have been cultivated in Switzerland since the Roman era, and traces of a more ancient origin can be found. The most widespread varieties are Chasselas (called Fendant in Valais) and Pinot Noir. Merlot is the main variety produced in Ticino.
https://worldwidescience.org/topicpages/m-band+adaptive+optics.html

#### Sample records for m-band adaptive optics
Science.gov (United States)
Saleh, M
2016-04-01
Adaptive optics is a technology enhancing the visual performance of an optical system by correcting its optical aberrations. Adaptive optics have already enabled several breakthroughs in the field of visual sciences, such as improvement of visual acuity in normal and diseased eyes beyond physiologic limits, and the correction of presbyopia. Adaptive optics technology also provides high-resolution, in vivo imaging of the retina that may eventually help to detect the onset of retinal conditions at an early stage and provide better assessment of treatment efficacy.
Directory of Open Access Journals (Sweden)
Thomas R. Rimmele
2011-06-01
Adaptive optics (AO) has become an indispensable tool at ground-based solar telescopes. AO enables the ground-based observer to overcome the adverse effects of atmospheric seeing and obtain diffraction-limited observations. Over the last decade adaptive optics systems have been deployed at major ground-based solar telescopes and revitalized ground-based solar astronomy. The relatively small aperture of solar telescopes and the bright source make solar AO possible for visible wavelengths, where the majority of solar observations are still performed. Solar AO systems enable diffraction-limited observations of the Sun for a significant fraction of the available observing time at ground-based solar telescopes, which often have a larger aperture than equivalent space-based observatories such as HINODE. New groundbreaking scientific results have been achieved with solar adaptive optics and this trend continues. New large-aperture telescopes are currently being deployed or are under construction. With the aid of solar AO these telescopes will obtain observations of the highly structured and dynamic solar atmosphere with unprecedented resolution. This paper reviews solar adaptive optics techniques and summarizes the recent progress in the field of solar adaptive optics. An outlook on future solar AO developments, including a discussion of Multi-Conjugate AO (MCAO) and Ground-Layer AO (GLAO), will be given.
Science.gov (United States)
Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Dou, Jiangpei; Zhao, Gang
2014-03-10
Conventional solar adaptive optics uses one deformable mirror (DM) and one guide star for wave-front sensing, which seriously limits high-resolution imaging over a large field of view (FOV). Recent progress toward multiconjugate adaptive optics indicates that atmosphere-turbulence-induced wave-front distortion at different altitudes can be reconstructed by using multiple guide stars. To maximize the performance over a large FOV, we propose a solar tomography adaptive optics (TAO) system that uses tomographic wave-front information with a single DM. We show that by fully taking advantage of the knowledge of the three-dimensional wave-front distribution, a classical solar adaptive optics system with one DM can provide an extra performance gain for high-resolution imaging over a large FOV in the near infrared. The TAO will allow existing one-deformable-mirror solar adaptive optics to deliver better performance over a large FOV for high-resolution magnetic field investigation, where solar activities occur in a two-dimensional field up to 60'', and where the near infrared is superior to the visible in terms of magnetic field sensitivity.
CERN Document Server
Brandner, Wolfgang; ESO Workshop
2005-01-01
The field of Adaptive Optics (AO) for astronomy has matured in recent years, and diffraction-limited image resolution in the near-infrared is now routinely achieved by ground-based 8 to 10m class telescopes. This book presents the proceedings of the ESO Workshop on Science with Adaptive Optics held in the fall of 2003. The book provides an overview on AO instrumentation, data acquisition and reduction strategies, and covers observations of the sun, solar system objects, circumstellar disks, substellar companions, HII regions, starburst environments, late-type stars, the galactic center, active galaxies, and quasars. The contributions present a vivid picture of the multitude of science topics being addressed by AO in observational astronomy.
Science.gov (United States)
Roorda, Austin; Duncan, Jacque L
2015-11-01
This review starts with a brief history and description of adaptive optics (AO) technology, followed by a showcase of the latest capabilities of AO systems for imaging the human retina and an extensive review of the literature on where AO is being used clinically. The review concludes with a discussion on future directions and guidance on usage and interpretation of images from AO systems for the eye.
OpenAIRE
Roorda, Austin; Duncan, Jacque L.
2015-01-01
This review starts with a brief history and description of adaptive optics (AO) technology, followed by a showcase of the latest capabilities of AO systems for imaging the human retina and an extensive review of the literature on where AO is being used clinically. The review concludes with a discussion on future directions and guidance on usage and interpretation of images from AO systems for the eye.
Science.gov (United States)
Tsang, P. W. M.; Poon, Ting-Chung; Liu, J.-P.
2016-01-01
Optical Scanning Holography (OSH) is a powerful technique that employs a single-pixel sensor and a row-by-row scanning mechanism to capture the hologram of a wide-view, three-dimensional object. However, the time required to acquire a hologram with OSH is rather lengthy. In this paper, we propose an enhanced framework, which is referred to as Adaptive OSH (AOSH), to shorten the holographic recording process. We have demonstrated that the AOSH method is capable of decreasing the acquisition time by up to an order of magnitude, while preserving the content of the hologram favorably. PMID:26916866
9. Center for Adaptive Optics | Software
Science.gov (United States)
Optics Software. The Center for Adaptive Optics acts as a clearinghouse for distributing software to institutes, giving specialists in adaptive optics a place to distribute their software. All software is shared on an "as-is" basis, and users should consult the software authors with any questions.
10. Accuracies Of Optical Processors For Adaptive Optics
Science.gov (United States)
Downie, John D.; Goodman, Joseph W.
1992-01-01
This paper presents an analysis of the accuracies of, and accuracy requirements for, optical linear-algebra processors (OLAPs) in adaptive-optics imaging systems. OLAPs are much faster than digital electronic processors and eliminate some residual distortion, but the question is whether the errors introduced by the analog processing of an OLAP outweigh the advantage of its greater speed. The paper addresses this issue by estimating the accuracy required of a general OLAP to yield a smaller average residual wavefront aberration than a digital electronic processor computing at a given speed.
11. Maritime adaptive optics beam control
OpenAIRE
Corley, Melissa S.
2010-01-01
The Navy is interested in developing systems for horizontal, near ocean surface, high-energy laser propagation through the atmosphere. Laser propagation in the maritime environment requires adaptive optics control of aberrations caused by atmospheric distortion. In this research, a multichannel transverse adaptive filter is formulated in Matlab's Simulink environment and compared to a complex lattice filter that has previously been implemented in large system simulations. The adaptive fil...
12. Intelligent Optical Systems Using Adaptive Optics
Science.gov (United States)
Clark, Natalie
2012-01-01
Until recently, the phrase adaptive optics generally conjured images of large deformable mirrors being integrated into telescopes to compensate for atmospheric turbulence. However, the development of smaller, cheaper devices has sparked interest for other aerospace and commercial applications. Variable focal length lenses, liquid crystal spatial light modulators, tunable filters, phase compensators, polarization compensation, and deformable mirrors are becoming increasingly useful for other imaging applications including guidance navigation and control (GNC), coronagraphs, foveated imaging, situational awareness, autonomous rendezvous and docking, non-mechanical zoom, phase diversity, and enhanced multi-spectral imaging. The active components presented here allow flexibility in the optical design, increasing performance. In addition, the intelligent optical systems presented offer advantages in size and weight and radiation tolerance.
13. M-BAND IMAGING OF THE HR 8799 PLANETARY SYSTEM USING AN INNOVATIVE LOCI-BASED BACKGROUND SUBTRACTION TECHNIQUE
International Nuclear Information System (INIS)
Galicher, Raphael; Marois, Christian; Macintosh, Bruce; Konopacky, Quinn; Barman, Travis
2011-01-01
Multi-wavelength observations/spectroscopy of exoplanetary atmospheres are the basis of the emerging exciting field of comparative exoplanetology. The HR 8799 planetary system is an ideal laboratory to study our current knowledge gap between massive field brown dwarfs and the cold 5 Gyr old solar system planets. The HR 8799 planets have so far been imaged at J- to L-band, with only upper limits available at M-band. We present here deep high-contrast Keck II adaptive optics M-band observations that show the imaging detection of three of the four currently known HR 8799 planets. Such detections were made possible due to the development of an innovative LOCI-based background subtraction scheme that is three times more efficient than a classical median background subtraction for Keck II AO data, representing a gain in telescope time of up to a factor of nine. These M-band detections extend the broadband photometric coverage out to ∼5 μm and provide access to the strong CO fundamental absorption band at 4.5 μm. The new M-band photometry shows that the HR 8799 planets are located near the L/T-type dwarf transition, similar to what was found by other studies. We also confirm that the best atmospheric fits are consistent with low surface gravity, dusty, and non-equilibrium CO/CH₄ chemistry models.
Science.gov (United States)
Ammons, M.; Poyneer, L.; GPI Team
2014-09-01
A long-standing challenge has been to directly image faint extrasolar planets adjacent to their host suns, which may be ~1-10 million times brighter than the planet. Several extreme AO systems designed for high-contrast observations have been tested at this point, including SPHERE, Magellan AO, PALM-3000, Project 1640, NICI, and the Gemini Planet Imager (GPI, Macintosh et al. 2014). The GPI is the world's most advanced high-contrast adaptive optics system on an 8-meter telescope for detecting and characterizing planets outside of our solar system. GPI will detect a previously unstudied population of young analogs to the giant planets of our solar system and help determine how planetary systems form. GPI employs a 44x44 woofer-tweeter adaptive optics system with a Shack-Hartmann wavefront sensor operating at 1 kHz. The controller uses Fourier-based reconstruction and modal gains optimized from system telemetry (Poyneer et al. 2005, 2007). GPI has an apodized Lyot coronagraph to suppress diffraction and a near-infrared integral field spectrograph for obtaining planetary spectra. This paper discusses current performance limitations and presents the necessary instrumental modifications and sensitivity calculations for scenarios related to high-contrast observations of non-sidereal targets.
15. Wavefront measurement using computational adaptive optics.
Science.gov (United States)
South, Fredrick A; Liu, Yuan-Zhi; Bower, Andrew J; Xu, Yang; Carney, P Scott; Boppart, Stephen A
2018-03-01
In many optical imaging applications, it is necessary to correct for aberrations to obtain high quality images. Optical coherence tomography (OCT) provides access to the amplitude and phase of the backscattered optical field for three-dimensional (3D) imaging samples. Computational adaptive optics (CAO) modifies the phase of the OCT data in the spatial frequency domain to correct optical aberrations without using a deformable mirror, as is commonly done in hardware-based adaptive optics (AO). This provides improvement of image quality throughout the 3D volume, enabling imaging across greater depth ranges and in highly aberrated samples. However, the CAO aberration correction has a complicated relation to the imaging pupil and is not a direct measurement of the pupil aberrations. Here we present new methods for recovering the wavefront aberrations directly from the OCT data without the use of hardware adaptive optics. This enables both computational measurement and correction of optical aberrations.
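The frequency-domain phase correction that CAO performs can be sketched in a few lines. The Zernike-style phase model, the function name, and the coefficient names below are illustrative assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def cao_correct(field, coeffs):
    """Computational aberration correction of a complex en-face OCT field.

    Multiplies the 2D spatial-frequency spectrum of the field by the
    conjugate of a low-order aberration phase, then transforms back.
    `coeffs` maps aberration names to phase amplitudes in radians at the
    edge of the computed pupil (names here are illustrative).
    """
    n = field.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    rho = np.hypot(fx, fy) / 0.5          # radial frequency, ~1 at band edge
    theta = np.arctan2(fy, fx)

    # Zernike-style low-order phase model of the aberration
    phase = (coeffs.get("defocus", 0.0) * (2.0 * rho**2 - 1.0)
             + coeffs.get("astig", 0.0) * rho**2 * np.cos(2.0 * theta))

    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * np.exp(-1j * phase))
```

Because the phase is linear in the coefficients, correcting with the negated coefficients that aberrated a field recovers it exactly, which is a convenient self-check for such a sketch.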
16. Adaptive optics imaging of the retina
Directory of Open Access Journals (Sweden)
Rajani Battu
2014-01-01
Adaptive optics is a relatively new tool that is available to ophthalmologists for study of cellular level details. In addition to the axial resolution provided by the spectral-domain optical coherence tomography, adaptive optics provides an excellent lateral resolution, enabling visualization of the photoreceptors, blood vessels and details of the optic nerve head. We attempt a mini review of the current role of adaptive optics in retinal imaging. PubMed search was performed with key words Adaptive optics OR Retina OR Retinal imaging. Conference abstracts were searched from the Association for Research in Vision and Ophthalmology (ARVO) and American Academy of Ophthalmology (AAO) meetings. In total, 261 relevant publications and 389 conference abstracts were identified.
17. Atmospheric free-space coherent optical communications with adaptive optics
Science.gov (United States)
Ting, Chueh; Zhang, Chengyu; Yang, Zikai
2017-02-01
Free-space coherent optical communications have potential application as a solution to the last-mile bottleneck in future local area networks (LANs) because of their information-carrying capacity, information security, and license-free status. Coherent optical communication systems using orthogonal frequency division multiplexing (OFDM) digital modulation have been successfully demonstrated over long-haul optical fiber at tens of gigabits per second, but they are not yet available in free space due to atmospheric turbulence-induced channel fading. Adaptive optics is recognized as a promising technology to mitigate the effects of atmospheric turbulence in free-space optics. In this paper, a free-space coherent optical communication system using an OFDM digital modulation scheme and adaptive optics (FSO OFDM AO) is proposed, a Gamma-Gamma distribution statistical channel fading model for the FSO OFDM AO system is examined, and FSO OFDM AO system performance is evaluated in terms of bit error rate (BER) versus various propagation distances.
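For reference, the Gamma-Gamma irradiance distribution underlying this channel model has a standard closed form. A minimal sketch follows; the α, β values are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function, 2nd kind

def gamma_gamma_pdf(irradiance, alpha, beta):
    """Gamma-Gamma PDF of normalized irradiance I (mean 1).

    alpha and beta are the effective numbers of large- and small-scale
    turbulent eddies; smaller values mean stronger scintillation.
    """
    order = alpha - beta
    coeff = (2.0 * (alpha * beta) ** ((alpha + beta) / 2.0)
             / (gamma(alpha) * gamma(beta)))
    return (coeff * irradiance ** ((alpha + beta) / 2.0 - 1.0)
            * kv(order, 2.0 * np.sqrt(alpha * beta * irradiance)))

# Sanity check: the PDF should integrate to ~1 (illustrative parameters)
I = np.linspace(1e-4, 30.0, 300001)
pdf = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
total = float(np.sum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(I)))
```

BER curves like those in the paper are then obtained by averaging a modulation-dependent conditional error rate over this irradiance density.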
18. A Miniaturized Adaptive Optic Device for Optical Telecommunications, Phase I
Data.gov (United States)
National Aeronautics and Space Administration — To advance the state-of-the-art uplink laser communication technology, new adaptive optic beam compensation techniques are needed for removing various time-varying...
19. InAs/GaAs quantum dots on GaAs-on-V-grooved-Si substrate with high optical quality in the 1.3 μm band
International Nuclear Information System (INIS)
Wan, Yating; Li, Qiang; Geng, Yu; Shi, Bei; Lau, Kei May
2015-01-01
We report self-assembled InAs/GaAs quantum dots (QDs) grown on a specially engineered GaAs-on-V-grooved-Si substrate by metal-organic vapor phase epitaxy. Recessed pockets formed on V-groove patterned Si (001) substrates were used to prevent most of the hetero-interfacial stacking faults from extending into the upper QD active region. 1.3 μm room temperature emission from high-density (5.6 × 10¹⁰ cm⁻²) QDs has been obtained, with a narrow full-width-at-half-maximum of 29 meV. Optical quality of the QDs was found to be better than those grown on conventional planar offcut Si templates, as indicated by temperature-dependent photoluminescence analysis. Results suggest great potential to integrate QD lasers on a Si complementary-metal-oxide-semiconductor compatible platform using such GaAs on Si templates.
20. Adaptive Optics, LLLFT Interferometry, Astronomy
National Research Council Canada - National Science Library
2002-01-01
We propose to build a three telescope Michelson optical interferometer equipped with wavefront compensation technology as a demonstration and test bed for high resolution Deep Space Surveillance (DSS) and Astronomy...
1. Adaptive Optics for Industry and Medicine
Science.gov (United States)
Dainty, Christopher
2008-01-01
pt. 1. Wavefront correctors and control. Liquid crystal lenses for correction of presbyopia (Invited Paper) / Guoqiang Li and Nasser Peyghambarian. Converging and diverging liquid crystal lenses (oral paper) / Andrew X. Kirby, Philip J. W. Hands, and Gordon D. Love. Liquid lens technology for miniature imaging systems: status of the technology, performance of existing products and future trends (invited paper) / Bruno Berge. Carbon fiber reinforced polymer deformable mirrors for high energy laser applications (oral paper) / S. R. Restaino ... [et al.]. Tiny multilayer deformable mirrors (oral paper) / Tatiana Cherezova ... [et al.]. Performance analysis of piezoelectric deformable mirrors (oral paper) / Oleg Soloviev, Mikhail Loktev and Gleb Vdovin. Deformable membrane mirror with high actuator density and distributed control (oral paper) / Roger Hamelinck ... [et al.]. Characterization and closed-loop demonstration of a novel electrostatic membrane mirror using COTS membranes (oral paper) / David Dayton ... [et al.]. Electrostatic micro-deformable mirror based on polymer materials (oral paper) / Frederic Zamkotsian ... [et al.]. Recent progress in CMOS integrated MEMS A0 mirror development (oral paper) / A. Gehner ... [et al.]. Compact large-stroke piston-tip-tilt actuator and mirror (oral paper) / W. Noell ... [et al.]. MEMS deformable mirrors for high performance AO applications (oral paper) / Paul Bierden, Thomas Bifano and Steven Cornelissen. A versatile interferometric test-rig for the investigation and evaluation of ophthalmic AO systems (poster paper) / Steve Gruppetta, Jiang Jian Zhong and Luis Diaz-Santana. Woofer-tweeter adaptive optics (poster paper) / Thomas Farrell and Chris Dainty. Deformable mirrors based on transversal piezoeffect (poster paper) / Gleb Vdovin, Mikhail Loktev and Oleg Soloviev. Low-cost spatial light modulators for ophthalmic applications (poster paper) / Vincente Durán ... [et al.]. Latest MEMS DM developments and the path ahead
Science.gov (United States)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
3. Terahertz adaptive optics with a deformable mirror.
Science.gov (United States)
Brossard, Mathilde; Sauvage, Jean-François; Perrin, Mathias; Abraham, Emmanuel
2018-04-01
We report on the wavefront correction of a terahertz (THz) beam using adaptive optics, which requires both a wavefront sensor able to sense the optical aberrations and a wavefront corrector. The wavefront sensor relies on a direct 2D electro-optic imaging system composed of a ZnTe crystal and a CMOS camera. By measuring the phase variation of the THz electric field in the crystal, we were able to minimize the geometrical aberrations of the beam, thanks to the action of a deformable mirror. This phase control will open the route to THz adaptive optics in order to optimize the THz beam quality for both practical and fundamental applications.
4. Micromirror Arrays for Adaptive Optics; TOPICAL
International Nuclear Information System (INIS)
Carr, E.J.
2000-01-01
The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (> 95%), large stroke (5-10 μm), and pixel size ≈ 200 μm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed.
5. The TMT Adaptive Optics Program
Science.gov (United States)
Ellerbroek, Brent
2011-09-01
We provide an overview of the Thirty Meter Telescope (TMT) AO program, with an emphasis upon the progress made since the first AO4ELT conference held in 2009. The first light facility AO system for TMT is the Narrow Field Infra-Red AO System (NFIRAOS), which will provide diffraction-limited performance in the J, H, and K bands over 18-30 arc sec diameter fields with 50% sky coverage at the galactic pole. This is accomplished with order 60x60 wavefront sensing and correction, two deformable mirrors conjugate to ranges of 0 and 11.2 km, 6 sodium laser guide stars in an asterism with a diameter of 70 arc sec, and three low order (tip/tilt or tip/tilt focus), infra-red natural guide star (NGS) wavefront sensors deployable within a 2 arc minute diameter patrol field. The first light LGS asterism is generated by the Laser Guide Star Facility (LGSF), which initially incorporates six 20-25 W class laser systems mounted to the telescope elevation journal, a mirror-based beam transfer optics system, and a 0.4 m diameter laser launch telescope located behind the TMT secondary mirror. Future plans for additional AO capabilities include a mid infra-red AO (MIRAO) system to support science instruments in the 4-20 micron range, a ground-layer AO (GLAO) system for wide-field spectroscopy, a multi-object AO (MOAO) system for multi-object integral field unit spectroscopy, and extreme AO (ExAO) for high contrast imaging. Significant progress has been made in developing the first-light AO architecture since 2009. This includes the adoption of a new NFIRAOS opto-mechanical design consisting of two off-axis parabola (OAP) relays in series, which eliminates field distortion and also significantly simplifies the designs of the LGS wavefront sensors, optical source simulators, and turbulence generator subsystem. The design of the LGSF has also been iterated, and has been simplified by the relocation of the (smaller, gravity-invariant) laser systems to the telescope elevation journal.
6. Adaption of optical Fresnel transform to optical Wigner transform
International Nuclear Information System (INIS)
Lv Cuihong; Fan Hongyi
2010-01-01
Inspired by the algorithmic isomorphism between the rotation of the Wigner distribution function (WDF) and the αth fractional Fourier transform, we show that the optical Fresnel transform performed on the input through an ABCD system makes the output naturally adapt to the associated Wigner transform, i.e. there exists an algorithmic isomorphism between the ABCD transformation of the WDF and the optical Fresnel transform. We prove this adaption in the context of operator language. Both the single-mode and the two-mode Fresnel operators, as the image of the classical Fresnel transform, are introduced in our discussions, while the two-mode Wigner operator in the entangled state representation is introduced for fitting the two-mode Fresnel operator.
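In standard paraxial-optics notation (not taken from this paper's operator formalism), the isomorphism can be stated concretely: for an ABCD system with AD − BC = 1, the Fresnel (Collins) diffraction integral acts on the field, while the Wigner distribution undergoes a mere substitution of ray coordinates:

```latex
E_{\mathrm{out}}(x_2)
  = \frac{1}{\sqrt{i\lambda B}}\int E_{\mathrm{in}}(x_1)\,
    \exp\!\Big[\frac{i\pi}{\lambda B}\big(A x_1^2 - 2x_1x_2 + D x_2^2\big)\Big]\,dx_1,
\qquad
W_{\mathrm{out}}(x,\theta) = W_{\mathrm{in}}(Dx - B\theta,\; -Cx + A\theta).
```

The second relation is the "ABCD transformation of the WDF" to which the abstract refers: the integral transform on amplitudes corresponds to a linear symplectic map on phase-space coordinates.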
Science.gov (United States)
Dubra, Alfredo; Sulai, Yusufu
2011-01-01
A broadband adaptive optics scanning ophthalmoscope (BAOSO) consisting of four afocal telescopes, formed by pairs of off-axis spherical mirrors in a non-planar arrangement, is presented. The non-planar folding of the telescopes is used to simultaneously reduce pupil and image plane astigmatism. The former improves the adaptive optics performance by reducing the root-mean-square (RMS) of the wavefront and the beam wandering due to optical scanning. The latter provides diffraction limited performance over a 3 diopter (D) vergence range. This vergence range allows for the use of any broadband light source(s) in the 450-850 nm wavelength range to simultaneously image any combination of retinal layers. Imaging modalities that could benefit from such a large vergence range are optical coherence tomography (OCT), multi- and hyper-spectral imaging, single- and multi-photon fluorescence. The benefits of the non-planar telescopes in the BAOSO are illustrated by resolving the human foveal photoreceptor mosaic in reflectance using two different superluminescent diodes with 680 and 796 nm peak wavelengths, reaching the eye with a vergence of 0.76 D relative to each other. PMID:21698035
8. Multifocal multiphoton microscopy with adaptive optical correction
Science.gov (United States)
Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon
2013-02-01
Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics performed by means of Zernike polynomials is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
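A minimal sketch of the basic 2D Gerchberg-Saxton loop used for this kind of beam parallelization follows; the 3D weighted variant in the paper adds per-spot axial control and homogeneity weighting, which this illustration omits:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Phase-only SLM hologram via the Gerchberg-Saxton loop.

    Iterates between the SLM plane (unit amplitude, free phase) and
    the focal plane (target amplitude, free phase), linked by a 2D FFT.
    Returns the SLM phase pattern in radians.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        focal = np.fft.fft2(np.exp(1j * phase))             # SLM -> focal plane
        focal = target_amp * np.exp(1j * np.angle(focal))   # impose target amplitude
        slm = np.fft.ifft2(focal)                           # focal plane -> SLM
        phase = np.angle(slm)                               # keep phase only
    return phase

# Example: steer light into two focal spots (beamlets)
target = np.zeros((32, 32))
target[8, 8] = 1.0
target[20, 12] = 1.0
slm_phase = gerchberg_saxton(target)
```

For sparse spot patterns like this, most of the diffracted energy ends up in the commanded beamlets, which is why GS-style holograms scale the TCSPC count rate with beamlet number.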
9. The Durham Adaptive Optics Simulation Platform (DASP): Current status
OpenAIRE
Basden, Alastair; Bharmal, Nazim; Jenkins, David; Morris, Timothy; Osborn, James; Jia, Peng; Staykov, Lazar
2018-01-01
The Durham Adaptive Optics Simulation Platform (DASP) is a Monte-Carlo modelling tool used for the simulation of astronomical and solar adaptive optics systems. In recent years, this tool has been used to predict the expected performance of the forthcoming extremely large telescope adaptive optics systems, and has seen the addition of several modules with new features, including Fresnel optics propagation and extended object wavefront sensing. Here, we provide an overview of the features of D...
10. Adaptive optics system application for solar telescope
Science.gov (United States)
Lukin, V. P.; Grigor'ev, V. M.; Antoshkin, L. V.; Botugina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Kovadlo, P. G.; Krivolutskiy, N. P.; Lavrionova, L. N.; Skomorovski, V. I.
2008-07-01
The possibility of applying adaptive correction to ground-based solar astronomy is considered. Several experimental systems for image stabilization are described along with the results of their tests. Drawing on several years of our own work and on world experience in solar adaptive optics (AO), we expect to obtain first light by the end of 2008 for the first Russian low-order ANGARA solar AO system on the Big Solar Vacuum Telescope (BSVT), with a 37-subaperture Shack-Hartmann wavefront sensor based on our modified correlation-tracker algorithm, a DALSTAR video camera, a 37-element deformable bimorph mirror, and a home-made fast tip-tilt mirror with a separate correlation tracker. Daytime turbulence at the BSVT site is very strong, and we are therefore planning to obtain a partial correction for part of the Sun surface image.
11. Lithographic manufacturing of adaptive optics components
Science.gov (United States)
Scott, R. Phillip; Jean, Madison; Johnson, Lee; Gatlin, Ridley; Bronson, Ryan; Milster, Tom; Hart, Michael
2017-09-01
Adaptive optics systems and their laboratory test environments call for a number of unusual optical components. Examples include lenslet arrays, pyramids, and Kolmogorov phase screens. Because of their specialized application, the availability of these parts is generally limited, with high cost and long lead time, which can also significantly drive optical system design. These concerns can be alleviated by a fast and inexpensive method of optical fabrication. To that end, we are exploring direct-write lithographic techniques to manufacture three different custom elements. We report results from a number of prototype devices including 1, 2, and 3 wave Multiple Order Diffractive (MOD) lenslet arrays with 0.75 mm pitch and phase screens with near Kolmogorov structure functions with a Fried length r0 around 1 mm. We also discuss plans to expand our research to include a diffractive pyramid that is smaller, lighter, and more easily manufactured than glass versions presently used in pyramid wavefront sensors. We describe how these components can be produced within the limited dynamic range of the lithographic process, and with a rapid prototyping and manufacturing cycle. We discuss exploratory manufacturing methods, including replication, and potential observing techniques enabled by the ready availability of custom components.
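For a sense of how such screens are specified, a common FFT-based simulation sketch of a Kolmogorov phase screen with a given Fried parameter r0 follows. This is a standard numerical method, not the authors' lithographic process, and the parameter values are illustrative:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=None):
    """FFT-based Kolmogorov phase screen, in radians.

    n  : grid size in pixels
    dx : pixel size in metres
    r0 : Fried parameter in metres

    Note: the FFT method under-samples the lowest spatial frequencies,
    so large-scale power is somewhat under-represented.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    f = np.hypot(fxx, fyy)
    f[0, 0] = np.inf  # suppress the undefined piston (DC) term

    # Kolmogorov phase power spectral density: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)

    df = fx[1] - fx[0]
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    # Unnormalized inverse DFT of sqrt(PSD)-filtered white noise
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * n**2 * df
    return screen.real

# Example: a 12.8 mm screen sampled at 0.1 mm with r0 = 1 mm
screen = kolmogorov_phase_screen(128, dx=1e-4, r0=1e-3, seed=0)
```

The quality target the authors quote can be checked against the Kolmogorov structure function D(r) = 6.88 (r/r0)^(5/3), which simulated screens should roughly reproduce at small separations.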
12. Scanning laser ophthalmoscope design with adaptive optics
OpenAIRE
Laut, SP; Jones, SM; Olivier, SS; Werner, JS
2005-01-01
A design for a high-resolution scanning instrument is presented for in vivo imaging of the human eye at the cellular scale. This system combines adaptive optics technology with a scanning laser ophthalmoscope (SLO) to image structures with high lateral (∼2 μm) resolution. In this system, the ocular wavefront aberrations that reduce the resolution of conventional SLOs are detected by a Hartmann-Shack wavefront sensor, and compensated with two deformable mirrors in a closed-loop for dynamic cor...
Science.gov (United States)
Barrett, Harrison H.; Myers, Kyle J.; Devaney, Nicholas; Dainty, J. C.; Caucci, Luca
2006-06-01
In objective or task-based assessment of image quality, figures of merit are defined by the performance of some specific observer on some task of scientific interest. This methodology is well established in medical imaging but is just beginning to be applied in astronomy. In this paper we survey the theory needed to understand the performance of ideal or ideal-linear (Hotelling) observers on detection tasks with adaptive-optical data. The theory is illustrated by discussing its application to detection of exoplanets from a sequence of short-exposure images.
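The Hotelling-observer figure of merit surveyed here has a compact sample-based estimate: the template is w = K⁻¹Δs and the detectability obeys SNR² = Δsᵀ K⁻¹ Δs. A hedged sketch (function and variable names are illustrative):

```python
import numpy as np

def hotelling_detectability(present, absent):
    """Hotelling (ideal linear) observer from sample images.

    present, absent : arrays of shape (n_images, n_pixels), samples of
                      the two hypotheses (signal present / signal absent).
    Returns the Hotelling template w = K^-1 (s1 - s0) and the
    detectability SNR, where SNR^2 = (s1 - s0)^T K^-1 (s1 - s0).
    """
    mean_diff = present.mean(axis=0) - absent.mean(axis=0)
    # Pooled class covariance K
    k = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
    w = np.linalg.solve(k, mean_diff)
    snr2 = float(mean_diff @ w)
    return w, np.sqrt(snr2)
```

For exoplanet detection from short-exposure sequences, `present`/`absent` would be image stacks with and without an injected planet signal; for white Gaussian noise the SNR reduces to the signal's Euclidean norm.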
14. Adaptive optics and laser guide stars at Lick observatory
Energy Technology Data Exchange (ETDEWEB)
Brase, J.M. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
For the past several years LLNL has been developing adaptive optics systems for correction of both atmospheric turbulence effects and thermal distortions in optics for high-power lasers. Our early work focused on adaptive optics for beam control in laser isotope separation and ground-based free electron lasers. We are currently developing innovative adaptive optics and laser systems for sodium laser guide star applications at the University of California's Lick and Keck Observatories. This talk will describe our adaptive optics technology and some of its applications in high-resolution imaging and beam control.
15. Optical design of the adaptive optics laser guide star system
Energy Technology Data Exchange (ETDEWEB)
Bissinger, H. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
The design of an adaptive optics package for the 3 meter Lick telescope is presented. This instrument package includes a 69-actuator deformable mirror and a Hartmann-type wavefront sensor operating in the visible wavelength; a quadrant detector for the tip-tilt sensor and a tip-tilt mirror to stabilize atmospheric first-order tip-tilt errors. A high speed computer drives the deformable mirror to achieve near diffraction limited imagery. The different optical components and their individual design constraints are described. Motorized stages and diagnostic tools are used to operate and maintain alignment throughout observation time from a remote control room. The expected performance is summarized and actual results on astronomical sources are presented.
16. Object-oriented Matlab adaptive optics toolbox
Science.gov (United States)
Conan, R.; Correia, C.
2014-08-01
Object-Oriented Matlab Adaptive Optics (OOMAO) is a Matlab toolbox dedicated to Adaptive Optics (AO) systems. OOMAO is based on a small set of classes representing the source, atmosphere, telescope, wavefront sensor, Deformable Mirror (DM) and an imager of an AO system. This simple set of classes allows simulating Natural Guide Star (NGS) and Laser Guide Star (LGS) Single Conjugate AO (SCAO) and tomography AO systems on telescopes up to the size of the Extremely Large Telescopes (ELT). The discrete phase screens that make up the atmosphere model can be of infinite size, useful for modeling system performance on large time scales. OOMAO comes with its own parametric influence function model to emulate different types of DMs. The cone effect, altitude thickness and intensity profile of LGSs are also reproduced. Both modal and zonal modeling approaches are implemented. OOMAO also has an extensive library of theoretical expressions to evaluate the statistical properties of turbulent wavefronts. The main design characteristics of the OOMAO toolbox are object-oriented modularity, vectorized code and transparent parallel computing. OOMAO has been used to simulate and to design the Multi-Object AO prototype Raven at the Subaru telescope and the Laser Tomography AO system of the Giant Magellan Telescope. In this paper, a Laser Tomography AO system on an ELT is simulated with OOMAO. In the first part, we set up the class parameters and link the instantiated objects to create the source optical path. Then we build the tomographic reconstructor and write the script for the pseudo-open-loop controller.
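The chained-optical-path idea (in OOMAO, objects are linked with Matlab's `*` operator) can be mimicked in a few lines of Python. The classes, attributes, and the scalar-phase model below are an illustrative analogue, not OOMAO's actual API:

```python
class Source:
    """Light source whose accumulated phase is modified by each element."""
    def __init__(self):
        self.phase = 0.0
    def __mul__(self, element):      # enables src * atm * dm * wfs chaining
        element.propagate(self)
        return self

class Atmosphere:
    def __init__(self, opd):
        self.opd = opd               # phase added by turbulence (radians)
    def propagate(self, src):
        src.phase += self.opd

class DeformableMirror:
    def __init__(self):
        self.command = 0.0
    def propagate(self, src):
        src.phase -= self.command    # mirror removes the commanded phase

class WavefrontSensor:
    def __init__(self):
        self.measurement = 0.0
    def propagate(self, src):
        self.measurement = src.phase  # ideal sensor: reads residual phase

# Closed-loop sketch: a pure integrator drives the residual toward zero
src, atm, dm, wfs = Source(), Atmosphere(opd=1.0), DeformableMirror(), WavefrontSensor()
for _ in range(5):
    src.phase = 0.0                       # fresh wavefront each frame
    src * atm * dm * wfs                  # propagate through the optical path
    dm.command += 0.5 * wfs.measurement   # integrator, loop gain 0.5
```

With a static disturbance and gain 0.5, the residual halves each frame (1, 0.5, 0.25, ...), the textbook behavior of an integrator-controlled AO loop.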
17. Through-focus scanning optical microscopy (TSOM) with adaptive optics
Science.gov (United States)
Lee, Jun Ho; Park, Gyunam; Jeong, Junhee; Park, Chris
2018-03-01
Through-focus scanning optical microscopy (TSOM), with nanometer-scale lateral and vertical sensitivity levels matching those of scanning electron microscopy, has been demonstrated to be useful both for 3D inspections and metrology assessments. In 2014, funded by two private companies (Nextin/Samsung Electronics) and the Korea Evaluation Institute of Industrial Technology (KEIT), a research team from four universities in South Korea set out to investigate core technologies for developing in-line TSOM inspection and metrology tools, with the respective teams focusing on optics implementation, defect inspection, computer simulation and high-speed metrology matching. We initially confirmed the reported validity of the TSOM operation through a computer simulation, after which we implemented the TSOM operation by through-focus scanning of existing UV (355 nm) and IR (800 nm) inspection tools. These tools have an identical sampling distance of 150 nm but different resolving distances (310 and 810 nm, respectively). We initially observed some improvement in defect inspection sensitivity over TSV (through-silicon via) samples with 6.6 μm diameters. However, during the experiment, we noted sensitivity and instability issues when attempting to acquire TSOM images. As TSOM 3D information is indirectly extracted by differentiating a target TSOM image from reference TSOM images, any instability or mismatch in imaging conditions can result in measurement errors. As a remedy to such a situation, we proposed the application of adaptive optics to the TSOM operation and developed a closed-loop system with a tip/tilt mirror and a Shack-Hartmann sensor on an optical bench. We were able to keep the plane position to within 0.4 pixel RMS by actively compensating for any position instability which arose during the TSOM scanning process along the optical axis. Currently, we are also developing another TSOM tool with a deformable mirror instead of a tip/tilt mirror, in which case we
18. The Coming of Age of Adaptive Optics
Science.gov (United States)
1995-10-01
How Ground-Based Astronomers Beat the Atmosphere
Adaptive Optics (AO) is the new "wonder-weapon" in ground-based astronomy. By means of advanced electro-optical devices at their telescopes, astronomers are now able to "neutralize" the image-smearing turbulence of the terrestrial atmosphere (seen by the unaided eye as the twinkling of stars) so that much sharper images can be obtained than before. In practice, this is done with computer-controlled, flexible mirrors which refocus the blurred images up to 100 times per second, i.e. at a rate that is faster than the changes in the atmospheric turbulence. This means that finer details in astronomical objects can be studied and also - because of the improved concentration of light in the telescope's focal plane - that fainter objects can be observed. At the moment, Adaptive Optics works best in the infrared part of the spectrum, but at some later time it may also significantly improve observations at the shorter wavelengths of visible light. The many-sided aspects of this new technology and its impact on astronomical instrumentation were the subject of a recent AO conference [1] with over 150 participants from about 30 countries, presenting a total of more than 100 papers.
The Introduction of AO Techniques into Astronomy
The scope of this meeting was the design, fabrication and testing of AO systems, characterisation of the sources of atmospheric disturbance, modelling of compensation systems, individual components, astronomical AO results, non-astronomical applications, laser guide star systems, non-linear optical phase conjugation, performance evaluation, and other areas of this wide and complex field, in which front-line science and high technology come together in a new and powerful symbiosis. One of the specific goals of the meeting was to develop contacts between AO scientists and engineers in the western world and their colleagues in Russia and Asia. For the first time at a conference of this type, nine Russian
19. Optically sensitive Medipix2 detector for adaptive optics wavefront sensing
CERN Document Server
Vallerga, John; Tremsina, Anton; Siegmund, Oswald; Mikulec, Bettina; Clark, Allan G; CERN. Geneva
2005-01-01
A new hybrid optical detector is described that has many of the attributes desired for the next generation adaptive optics (AO) wavefront sensors. The detector consists of a proximity focused microchannel plate (MCP) read out by multi-pixel application specific integrated circuit (ASIC) chips developed at CERN ("Medipix2") with individual pixels that amplify, discriminate and count input events. The detector has 256 x 256 pixels, zero readout noise (photon counting), can be read out at 1 kHz frame rates and is abutable on 3 sides. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 ns. When used in a Shack-Hartmann style wavefront sensor, a detector with 4 Medipix chips should be able to centroid approximately 5000 spots using 7 x 7 pixel sub-apertures resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest.
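Centroiding roughly 5000 spots on 7x7-pixel sub-apertures reduces to a per-sub-aperture center-of-mass computation. A minimal sketch on synthetic photon-counting data (not the actual Medipix2 readout path) illustrates the step:

```python
import numpy as np

def centroid(subap):
    """Center-of-mass spot position (x, y) within one sub-aperture, in pixels."""
    total = subap.sum()
    ys, xs = np.mgrid[0:subap.shape[0], 0:subap.shape[1]]
    return (xs * subap).sum() / total, (ys * subap).sum() / total

# Synthetic 7x7 photon-counting sub-aperture with all 30 counts at (x=4, y=2);
# zero read noise, as on the Medipix2, means no background term is needed.
sub = np.zeros((7, 7))
sub[2, 4] = 30

cx, cy = centroid(sub)
print(cx, cy)  # → 4.0 2.0
```

With real spots spread over several pixels the same formula interpolates to sub-pixel precision, which is where the "very linear, off-null error correction" behavior comes from.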
20. Optically sensitive Medipix2 detector for adaptive optics wavefront sensing
International Nuclear Information System (INIS)
Vallerga, John; McPhate, Jason; Tremsin, Anton; Siegmund, Oswald; Mikulec, Bettina; Clark, Allan
2005-01-01
A new hybrid optical detector is described that has many of the attributes desired for the next generation adaptive optics (AO) wavefront sensors. The detector consists of a proximity focused microchannel plate (MCP) read out by multi-pixel application specific integrated circuit (ASIC) chips developed at CERN ('Medipix2') with individual pixels that amplify, discriminate and count input events. The detector has 256x256 pixels, zero readout noise (photon counting), can be read out at 1 kHz frame rates and is abutable on 3 sides. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 ns. When used in a Shack-Hartmann style wavefront sensor, a detector with 4 Medipix chips should be able to centroid approximately 5000 spots using 7x7 pixel sub-apertures resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest.
1. Practical guidelines for implementing adaptive optics in fluorescence microscopy
Science.gov (United States)
Wilding, Dean; Pozzi, Paolo; Soloviev, Oleg; Vdovin, Gleb; Verhaegen, Michel
2018-02-01
In life sciences, interest in the microscopic imaging of increasingly complex three-dimensional samples, such as cell spheroids, zebrafish embryos, and in vivo applications in small animals, is growing quickly. Due to the increasing complexity of samples, more and more life scientists are considering the implementation of adaptive optics in their experimental setups. While several approaches to adaptive optics in microscopy have been reported, it is often difficult and confusing for the microscopist to choose from the array of techniques and equipment. In this poster presentation we offer a small guide to adaptive optics, providing general guidelines for its successful implementation.
2. Receding-horizon adaptive control of aero-optical wavefronts
NARCIS (Netherlands)
Tesch, J.; Gibson, S.; Verhaegen, M.
2013-01-01
A new method for adaptive prediction and correction of wavefront errors in adaptive optics (AO) is introduced. The new method is based on receding-horizon control design and an adaptive lattice filter. Experimental results presented illustrate the capability of the new adaptive controller to predict
3. Adaptive optics imaging of inherited retinal diseases.
Science.gov (United States)
Georgiou, Michalis; Kalitzeos, Angelos; Patterson, Emily J; Dubra, Alfredo; Carroll, Joseph; Michaelides, Michel
2017-11-15
Adaptive optics (AO) ophthalmoscopy allows for non-invasive retinal phenotyping on a microscopic scale, thereby helping to improve our understanding of retinal diseases. An increasing number of natural history studies and ongoing/planned interventional clinical trials exploit AO ophthalmoscopy both for participant selection, stratification and monitoring treatment safety and efficacy. In this review, we briefly discuss the evolution of AO ophthalmoscopy, recent developments and its application to a broad range of inherited retinal diseases, including Stargardt disease, retinitis pigmentosa and achromatopsia. Finally, we describe the impact of this in vivo microscopic imaging on our understanding of disease pathogenesis, clinical trial design and outcome metrics, while recognising the limitation of the small cohorts reported to date. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
4. Robust adaptive optics systems for vision science
Science.gov (United States)
Burns, S. A.; de Castro, A.; Sawides, L.; Luo, T.; Sapoznik, K.
2018-02-01
Adaptive Optics (AO) is of growing importance for understanding the impact of retinal and systemic diseases on the retina. While AO retinal imaging in healthy eyes is now routine, AO imaging in older eyes and eyes with optical changes to the anterior eye can be difficult and requires a control and imaging system that is resilient when there is scattering and occlusion from the cornea and lens, as well as in the presence of irregular and small pupils. Our AO retinal imaging system combines evaluation of local image quality across the pupil with spatially programmable detection. The wavefront control system uses a woofer-tweeter approach, combining an electromagnetic mirror, a MEMS mirror, and a single Shack-Hartmann sensor. The SH sensor samples an 8 mm exit pupil and the subject is aligned to a region within this larger system pupil using a chin and forehead rest. A spot quality metric is calculated in real time for each lenslet. Individual lenslets that do not meet the quality metric are eliminated from the processing. Mirror shapes are smoothed outside the region of wavefront control when pupils are small. The system allows imaging even with smaller, irregular pupils; however, because the depth of field increases under these conditions, sectioning performance decreases. A retinal conjugate micromirror array selectively directs mid-range scatter to additional detectors. This improves detection of retinal capillaries even when the confocal image has poorer image quality that includes both photoreceptors and blood vessels.
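The per-lenslet screening described above can be sketched as follows; the peak-to-total quality metric and the 0.5 threshold are assumptions for illustration, since the abstract does not specify the exact metric used:

```python
import numpy as np

# Per-lenslet spot-quality screening (sketch): lenslets whose spot quality
# falls below a threshold are dropped from the wavefront processing, as
# described for imaging through scattering or occluded pupils.
def good_lenslets(spots, thresh=0.5):
    # spots: (n_lenslets, h, w) stack of sub-aperture images.
    quality = spots.max(axis=(1, 2)) / spots.sum(axis=(1, 2))
    return quality >= thresh

spots = np.zeros((3, 5, 5))
spots[0, 2, 2] = 10.0                     # sharp spot: quality 1.0
spots[1] = 1.0                            # washed out: quality 1/25
spots[2, 2, 2] = 10.0
spots[2] += 0.1                           # sharp spot over weak background

print(good_lenslets(spots).tolist())  # → [True, False, True]
```

Dropping low-quality lenslets before reconstruction keeps one occluded or scattered sub-aperture from corrupting the whole wavefront estimate.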
5. Adaptive optics without altering visual perception.
Science.gov (United States)
Koenig, D E; Hart, N W; Hofer, H J
2014-04-01
Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer wavelength (980nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.
6. Linear zonal atmospheric prediction for adaptive optics
Science.gov (United States)
McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael
2000-07-01
We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5 millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks are quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
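The Recursive Least Squares training that the study found fastest can be sketched on a toy identification problem; the 2-tap predictor, white-noise regressors and noise level below are illustrative assumptions, not the paper's simulated 16-subaperture slope data:

```python
import numpy as np

# One Recursive Least Squares (RLS) update with exponential forgetting lam:
# w is the linear predictor, P the inverse-correlation matrix estimate.
def rls_step(w, P, x, d, lam=0.99):
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a-priori prediction error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation matrix update
    return w, P, e

rng = np.random.default_rng(1)
true_w = np.array([0.6, 0.3])        # "true" predictor being identified
w, P = np.zeros(2), np.eye(2) * 100.0

for _ in range(500):
    x = rng.standard_normal(2)                     # past slope samples (toy)
    d = true_w @ x + 0.01 * rng.standard_normal()  # next slope + sensor noise
    w, P, _ = rls_step(w, P, x, d)

print(np.allclose(w, true_w, atol=0.02))  # → True
```

Unlike Delta-Rule gradient descent, each RLS step effectively whitens the input correlations through P, which is the same reason the paper's ANGL preconditioning achieves a comparable speed-up.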
7. TESTING THE APODIZED PUPIL LYOT CORONAGRAPH ON THE LABORATORY FOR ADAPTIVE OPTICS EXTREME ADAPTIVE OPTICS TESTBED
International Nuclear Information System (INIS)
Thomas, Sandrine J.; Dillon, Daren; Gavel, Donald; Soummer, Remi; Macintosh, Bruce; Sivaramakrishnan, Anand
2011-01-01
We present testbed results of the Apodized Pupil Lyot Coronagraph (APLC) at the Laboratory for Adaptive Optics (LAO). These results are part of the validation and tests of the coronagraph and of the Extreme Adaptive Optics (ExAO) for the Gemini Planet Imager (GPI). The apodizer component is manufactured with a halftone technique using black chrome microdots on glass. Testing this APLC (like any other coronagraph) requires extremely good wavefront correction, which is obtained to the 1 nm rms level using microelectromechanical systems (MEMS) technology, on the ExAO visible testbed of the LAO at the University of California, Santa Cruz. We used an APLC coronagraph without central obstruction, both with a reference super-polished flat mirror and with the MEMS, to obtain one of the first images of a dark zone in a coronagraphic image with classical adaptive optics using a MEMS deformable mirror (without involving dark hole algorithms). This was done as a complementary test to the GPI coronagraph testbed at the American Museum of Natural History, which studied the coronagraph itself without wavefront correction. Because we needed a full aperture, the coronagraph design is very different from the GPI design. We also tested a coronagraph with a central obstruction similar to that of GPI. We investigated the performance of the APLC coronagraph and more particularly the effect of the apodizer profile accuracy on the contrast. Finally, we compared the resulting contrast to predictions made with a wavefront propagation model of the testbed to understand the effects of phase and amplitude errors on the final contrast.
8. Custom CCD for adaptive optics applications
Science.gov (United States)
Downing, Mark; Arsenault, Robin; Baade, Dietrich; Balard, Philippe; Bell, Ray; Burt, David; Denney, Sandy; Feautrier, Philippe; Fusco, Thierry; Gach, Jean-Luc; Diaz Garcia, José Javier; Guillaume, Christian; Hubin, Norbert; Jorden, Paul; Kasper, Markus; Meyer, Manfred; Pool, Peter; Reyes, Javier; Skegg, Michael; Stadler, Eric; Suske, Wolfgang; Wheeler, Patrick
2006-06-01
ESO and JRA2 OPTICON have funded e2v technologies to develop a compact-packaged, Peltier-cooled, 24 μm square pixel, 240x240-pixel, split frame-transfer, 8-output, back-illuminated L3Vision CCD for Adaptive Optics Wave Front Sensor (AO WFS) applications. The device is designed to achieve sub-electron read noise at frame rates from 25 Hz to 1,500 Hz and dark current lower than 0.01 e-/pixel/frame. The development has many unique features. To obtain high frame rates, multi-output EMCCD gain registers and metal buttressing of row clock lines are used. The baseline device is built in standard silicon. In addition, a split wafer run has enabled two speculative variants to be built: deep-depletion silicon devices to improve red response, and devices with an electronic shutter to extend use to Rayleigh and Pulsed Laser Guide Star applications. These are all firsts for L3Vision CCDs. The designs of the CCD and Peltier package have passed their reviews and fabrication has begun. This paper will describe the progress to date, the requirements and the design of the CCD and compact Peltier package, technology trade-offs, schedule and proposed test plan. High readout speed, low noise and compactness (a requirement to fit in confined spaces) provide special challenges to ESO's AO variant of its New General detector Controller (NGC), which will drive this CCD. This paper will also describe progress made on the design of the controller to meet these special needs.
9. Optical implementations of associative networks with versatile adaptive learning capabilities.
Science.gov (United States)
Fisher, A D; Lippincott, W L; Lee, J N
1987-12-01
Optical associative, parallel-processing architectures are being developed using a multimodule approach, where a number of smaller, adaptive, associative modules are nonlinearly interconnected and cascaded under the guidance of a variety of organizational principles to structure larger architectures for solving specific problems. A number of novel optical implementations with versatile adaptive learning capabilities are presented for the individual associative modules, including holographic configurations and five specific electrooptic configurations. The practical issues involved in real optical architectures are analyzed, and actual laboratory optical implementations of associative modules based on Hebbian and Widrow-Hoff learning rules are discussed, including successful experimental demonstrations of their operation.
10. Research on the adaptive optical control technology based on DSP
Science.gov (United States)
Zhang, Xiaolu; Xue, Qiao; Zeng, Fa; Zhao, Junpu; Zheng, Kuixing; Su, Jingqin; Dai, Wanjun
2018-02-01
Adaptive optics is a real-time compensation technique, using a high-speed support system, for wavefront errors caused by atmospheric turbulence. However, the randomness and instantaneity of atmospheric change introduce great difficulties into the design of adaptive optical systems. A large number of complex real-time operations lead to large delays, which is an insurmountable problem. To solve this problem, a hardware-operation and parallel-processing strategy is proposed, and a high-speed adaptive optical control system based on DSP is developed. A hardware counter is used to check the system timing. The results show that the system can complete one closed-loop control cycle in 7.1 ms, improving the control bandwidth of the adaptive optical system. Using this system, wavefront measurement and closed-loop experiments were carried out, with good results.
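The 7.1 ms cycle time directly bounds how fast the loop can reject disturbances. As a rough illustration (an assumed pure-integrator controller with one full cycle of latency, a generic model rather than the actual DSP system's transfer function), the error-rejection magnitude can be sketched:

```python
import numpy as np

# |S(f)| = |1/(1+G(f))|: disturbance rejection of a discrete integrator
# controller with gain g and one full cycle (T seconds) of loop latency.
# Assumed generic model, not the DSP system described in the paper.
def rejection(f, T, g=0.5):
    z = np.exp(2j * np.pi * f * T)       # per-cycle phasor
    G = g * z**-1 / (1 - z**-1)          # integrator plus one-cycle delay
    return np.abs(1.0 / (1.0 + G))

T = 7.1e-3                                # 7.1 ms closed-loop cycle time
f = np.array([1.0, 5.0, 20.0])            # disturbance frequencies [Hz]
print(np.round(rejection(f, T), 3))       # rejection worsens as f grows
```

Halving the cycle time shifts the whole rejection curve toward higher frequencies, which is why reducing the loop delay raises the usable control bandwidth.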
11. Piezoelectric deformable mirror for intra-cavity laser adaptive optics.
CSIR Research Space (South Africa)
Long, CS
2008-03-01
This paper describes the development of a deformable mirror to be used in conjunction with diffractive optical elements inside a laser cavity. A prototype piezoelectric unimorph adaptive mirror was developed to correct for time dependent phase...
12. Solar adaptive optics: specificities, lessons learned, and open alternatives
Science.gov (United States)
Montilla, I.; Marino, J.; Asensio Ramos, A.; Collados, M.; Montoya, L.; Tallon, M.
2016-07-01
The first on-sky adaptive optics experiments were performed at the Dunn Solar Telescope in 1979, with a shearing interferometer and limited success. Those early solar adaptive optics efforts forced researchers to custom-develop many components, such as Deformable Mirrors and WaveFront Sensors, which were not available at that time. Later on, the development of the correlation Shack-Hartmann sensor marked a breakthrough in solar adaptive optics. Since then, successful Single Conjugate Adaptive Optics instruments have been developed for many solar telescopes, i.e. the National Solar Observatory, the Vacuum Tower Telescope and the Swedish Solar Telescope. Success with the Multi Conjugate Adaptive Optics systems for GREGOR and the New Solar Telescope has proved to be more difficult to attain. Such systems have a complexity not only related to the number of degrees of freedom, but also related to the specificities of the Sun, used as reference, and the sensing method. The wavefront sensing is performed using correlations on images with a field of view of 10", averaging wavefront information from different sky directions, which affects the sensing and sampling of high-altitude turbulence. Also, due to the low elevation at which solar observations are performed, we have to include the generalized fitting error and anisoplanatism, as described by Ragazzoni and Rigaut, as non-negligible error sources in the Multi Conjugate Adaptive Optics error budget. For the development of the next generation Multi Conjugate Adaptive Optics systems for the Daniel K. Inouye Solar Telescope and the European Solar Telescope we still need to study and understand these issues, to predict realistically the quality of the achievable reconstruction. To improve their designs other open issues have to be assessed, i.e. possible alternative sensing methods to avoid the intrinsic anisoplanatism of the wide-field correlation Shack-Hartmann, new parameters to estimate the performance of an adaptive optics solar system, alternatives to
13. Adaptive Forward Error Correction for Energy Efficient Optical Transport Networks
DEFF Research Database (Denmark)
Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert
2013-01-01
In this paper we propose a novel scheme for on-the-fly code-rate adjustment for forward error correcting (FEC) codes on optical links. The proposed scheme makes it possible to adjust the code rate independently for each optical frame. This allows for seamless rate adaptation based on the link state...
14. Problems of Aero-optics and Adaptive Optical Systems: Analytical Review
Directory of Open Access Journals (Sweden)
Yu. I. Shanin
2017-01-01
15. Meaning of visualizing retinal cone mosaic on adaptive optics images.
Science.gov (United States)
Jacob, Julie; Paques, Michel; Krivosic, Valérie; Dupas, Bénédicte; Couturier, Aude; Kulcsar, Caroline; Tadayoni, Ramin; Massin, Pascale; Gaudric, Alain
2015-01-01
To explore the anatomic correlation of the retinal cone mosaic on adaptive optics images. Retrospective nonconsecutive observational case series. A retrospective review of the multimodal imaging charts of 6 patients with focal alteration of the cone mosaic on adaptive optics was performed. Retinal diseases included acute posterior multifocal placoid pigment epitheliopathy (n = 1), hydroxychloroquine retinopathy (n = 1), and macular telangiectasia type 2 (n = 4). High-resolution retinal images were obtained using a flood-illumination adaptive optics camera. Images were recorded using standard imaging modalities: color and red-free fundus camera photography; infrared reflectance scanning laser ophthalmoscopy, fluorescein angiography, indocyanine green angiography, and spectral-domain optical coherence tomography (OCT) images. On OCT, in the marginal zone of the lesions, a disappearance of the interdigitation zone was observed, while the ellipsoid zone was preserved. Image recording demonstrated that such attenuation of the interdigitation zone co-localized with the disappearance of the cone mosaic on adaptive optics images. In 1 case, the restoration of the interdigitation zone paralleled that of the cone mosaic after a 2-month follow-up. Our results suggest that the interdigitation zone could contribute substantially to the reflectance of the cone photoreceptor mosaic. The absence of cones on adaptive optics images does not necessarily mean photoreceptor cell death. Copyright © 2015 Elsevier Inc. All rights reserved.
16. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Science.gov (United States)
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
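The stochastic parallel gradient descent loop the paper uses can be sketched as follows; the quadratic stand-in for the mean-image-intensity metric and the mode count are toy assumptions, not the AOSLO's actual optics:

```python
import numpy as np

# Stochastic parallel gradient descent (SPGD): every mirror mode is perturbed
# in parallel and the change of an image-quality metric drives the update.
rng = np.random.default_rng(0)
true_ab = rng.uniform(-1, 1, 12)            # hypothetical aberration modes

def metric(c):
    # Toy quality metric, maximal when c exactly cancels the aberration;
    # in the instrument this role is played by mean retinal-image intensity.
    return -np.sum((c - true_ab) ** 2)

c = np.zeros(12)                            # DM mode coefficients
gain, amp = 0.5, 0.05
for _ in range(2000):
    delta = amp * rng.choice([-1.0, 1.0], 12)    # parallel +/- perturbation
    dJ = metric(c + delta) - metric(c - delta)   # two-sided metric probe
    c += gain * dJ * delta                       # SPGD update

print(np.allclose(c, true_ab, atol=0.02))  # → True
```

Because only a scalar metric is probed, no wavefront sensor (and no sensing beacon) is needed, at the cost of many metric evaluations per correction.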
17. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Science.gov (United States)
Downie, John D.
1990-01-01
A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance, since the atmosphere continues to change during that time. Because optical processors can perform the required linear-algebra operations with very low latency, an optical processor may be well suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.
18. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update
OpenAIRE
Zhang, Bing; Li, Ni; Kang, Jie; He, Yi; Chen, Xiao-Ming
2017-01-01
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review firstly gives a brief history of adaptive optics (AO) and AO-SLO. Then it compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherenc...
19. Wavelet methods in multi-conjugate adaptive optics
International Nuclear Information System (INIS)
Helin, T; Yudytskiy, M
2013-01-01
The next generation of ground-based telescopes relies heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
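The MAP-with-conjugate-gradients step described above can be sketched on a toy dense problem; the forward operator, problem sizes and the weak identity prior below are illustrative assumptions, not the paper's wavelet-based reconstructor:

```python
import numpy as np

# MAP estimate x = argmin ||Ax - y||^2 + r*||x||^2, obtained by running
# plain conjugate gradients on the normal equations (A'A + rI)x = A'y.
def cg(M, b, iters=100, tol=1e-12):
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    rr = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rr / (p @ Mp)            # step length along direction p
        x += alpha * p
        r -= alpha * Mp
        rr_new = r @ r
        if rr_new < tol:                 # residual small enough: stop
            break
        p = r + (rr_new / rr) * p        # next conjugate direction
        rr = rr_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))        # toy forward (tomography) operator
x_true = rng.standard_normal(20)
y = A @ x_true                           # noiseless measurements
M = A.T @ A + 1e-6 * np.eye(20)          # weak prior keeps M well-posed
x = cg(M, A.T @ y)
print(np.allclose(x, x_true, atol=1e-3))  # → True
```

CG only needs matrix-vector products with M, which is exactly what makes sparse, compactly supported wavelet representations attractive at ELT problem sizes.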
20. Generation of optical vortices with an adaptive helical mirror.
Science.gov (United States)
Ghai, Devinder Pal
2011-04-01
Generation of optical vortices using a new design of adaptive helical mirror (AHM) is reported. The new AHM is a reflective device that can generate an optical vortex of any desired topological charge, both positive and negative, within its breakdown limits. The most fascinating feature of the AHM is that the topological charge of the optical vortex generated with it can be changed in real time by varying the excitation voltage. Generation of optical vortices up to topological charge 4 has been demonstrated. The presence of a vortex in the optical field generated with the AHM is confirmed by producing both fork and spiral fringes in an interferometric setup. Various design improvements to further enhance the performance of the reported AHM are discussed. Some of the important applications of AHM are also listed. © 2011 Optical Society of America
1. Adaptive Optics Technology for High-Resolution Retinal Imaging
Science.gov (United States)
Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe
2013-01-01
Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600
2. Holographic fluorescence microscopy with incoherent digital holographic adaptive optics.
Science.gov (United States)
Jang, Changwon; Kim, Jonghyun; Clark, David C; Lee, Seungjae; Lee, Byoungho; Kim, Myung K
2015-01-01
Introduction of adaptive optics technology into astronomy and ophthalmology has made great contributions in these fields, allowing one to recover images blurred by atmospheric turbulence or aberrations of the eye. Similar adaptive optics improvement in microscopic imaging is also of interest to researchers using various techniques. Current technology of adaptive optics typically contains three key elements: a wavefront sensor, wavefront corrector, and controller. These hardware elements tend to be bulky, expensive, and limited in resolution, involving, for example, lenslet arrays for sensing or multiactuator deformable mirrors for correcting. We have previously introduced an alternate approach based on unique capabilities of digital holography, namely direct access to the phase profile of an optical field and the ability to numerically manipulate the phase profile. We have also demonstrated that direct access and compensation of the phase profile are possible not only with conventional coherent digital holography, but also with a new type of digital holography using incoherent light: self-interference incoherent digital holography (SIDH). The SIDH generates a complex—i.e., amplitude plus phase—hologram from one or several interferograms acquired with incoherent light, such as LEDs, lamps, sunlight, or fluorescence. The complex point spread function can be measured using guide star illumination and it allows deterministic deconvolution of the full-field image. We present experimental demonstration of aberration compensation in holographic fluorescence microscopy using SIDH. Adaptive optics by SIDH provides new tools for improved cellular fluorescence microscopy through intact tissue layers or other types of aberrant media.
3. Estimating Coastal Turbidity using MODIS 250 m Band Observations
Science.gov (United States)
Davies, James E.; Moeller, Christopher C.; Gunshor, Mathew M.; Menzel, W. Paul; Walker, Nan D.
2004-01-01
Terra MODIS 250 m observations are being applied to a Suspended Sediment Concentration (SSC) algorithm that is under development for coastal case 2 waters, where reflectance is dominated by sediment entrained in major fluvial outflows. An atmospheric correction based on MODIS observations in the 500 m resolution 1.6 and 2.1 micron bands is used to isolate the remote sensing reflectance in the MODIS 250 m resolution 650 and 865 nanometer bands. SSC estimates from remote sensing reflectance are based on accepted inherent optical properties of sediment types known to be prevalent in the U.S. Gulf of Mexico coastal zone. We present our findings for the Atchafalaya Bay region of the Louisiana coast, in the form of processed imagery over the annual cycle. We also apply our algorithm to selected sites worldwide, with the goal of extending the utility of our approach to the global direct broadcast community.
4. Adaptive optical microscope for brain imaging in vivo
Science.gov (United States)
Wang, Kai
2017-04-01
The optical heterogeneity of biological tissue imposes a major limitation on acquiring detailed structural and functional information deep in biological specimens using conventional microscopes. To restore optimal imaging performance, we developed an adaptive optical microscope based on a direct wavefront sensing technique. This microscope can reliably measure and correct sample-induced aberrations. We demonstrated its performance and application in structural and functional brain imaging in various animal models, including fruit fly, zebrafish and mouse.
5. Surface Plasmon Wave Adapter Designed with Transformation Optics
DEFF Research Database (Denmark)
Zhang, Jingjing; Xiao, Sanshui; Wubs, Martijn
2011-01-01
On the basis of transformation optics, we propose the design of a surface plasmon wave adapter which confines surface plasmon waves on non-uniform metal surfaces and enables adiabatic mode transformation of surface plasmon polaritons with very short tapers. This adapter can be simply achieved ... with homogeneous anisotropic naturally occurring materials or subwavelength grating-structured dielectric materials. Full wave simulations based on a finite-element method have been performed to validate our proposal.
DEFF Research Database (Denmark)
Buss, Thomas
This Ph.D. thesis presents methods for enhancing the optical functionality of transparent glass panes by introduction of invisible nanoscale surface structures, such as gratings and planar photonic crystals. In this way the primary functionality of the glass, transparency, may be enhanced ... been designed, fabricated and analyzed. First a solar harvesting method is discussed, based on nanoscale gratings imprinted in a thin film deposited on the window pane. Free-space light incident onto a window is coupled to guided modes in the thin film or the substrate ...
7. Optical components of adaptive systems for improving laser beam quality
Science.gov (United States)
Malakhov, Yuri I.; Atuchin, Victor V.; Kudryashov, Aleksis V.; Starikov, Fedor A.
2008-10-01
A short overview is given of the optical equipment developed within the ISTC activity for adaptive systems of a new generation, allowing correction of high-power laser beams carrying optical vortices on the phase surface. These include kinoform many-level optical elements of a new generation, namely special spiral phase plates and ordered rasters of microlenses (i.e., lenslet arrays), as well as wide-aperture Hartmann-Shack sensors and bimorph deformable piezoceramics-based mirrors with various grids of control elements.
8. A Status Report on the Thirty Meter Telescope Adaptive Optics
2016-01-27
We provide an update on the recent development of the adaptive optics (AO) systems for the Thirty Meter Telescope (TMT) since mid-2011. The first light AO facility for TMT consists of the Narrow Field Infra-Red AO System (NFIRAOS) and the associated Laser Guide Star Facility (LGSF). This order 60 × 60 ...
9. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
Science.gov (United States)
Downie, John D.; Goodman, Joseph W.
1989-10-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
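The trade-off described above, between processor accuracy and residual aberration, can be made concrete with a toy numerical sketch (the linear mirror model, matrix sizes, and quantization scheme below are assumptions for illustration, not the paper's error analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniformly quantize values in [-1, 1] to the given bit depth,
    mimicking the limited accuracy of an analog optical processor."""
    half_levels = 2 ** bits / 2
    return np.round(np.clip(x, -1, 1) * half_levels) / half_levels

# Toy adaptive-mirror model: actuator commands c should solve J c = -e
# for a measured wavefront error e (J is a synthetic influence matrix).
J = rng.normal(size=(40, 20)) / np.sqrt(40)
e = rng.normal(size=40) * 0.1
c_exact = np.linalg.lstsq(J, -e, rcond=None)[0]

def residual_rms(bits):
    c = quantize(c_exact, bits)          # processor outputs quantized commands
    return np.sqrt(np.mean((e + J @ c) ** 2))

r_low, r_high = residual_rms(2), residual_rms(12)
```

Sweeping `bits` reproduces the qualitative point of the study: below some accuracy the residual aberration is dominated by processor error, above it by the optics themselves.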
10. Modeling for deformable mirrors and the adaptive optics optimization program
International Nuclear Information System (INIS)
Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.
1997-01-01
We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language
11. Simulated annealing in adaptive optics for imaging the eye retina
International Nuclear Information System (INIS)
Zommer, S.; Adler, J.; Lipson, S. G.; Ribak, E.
2004-01-01
Adaptive optics is a method designed to correct deformed images in real time. Once the distorted wavefront is known, a deformable mirror is used to compensate the aberrations and return the wavefront to a plane wave. This study concentrates on methods that omit wavefront sensing from the reconstruction process. Such methods use stochastic algorithms to find the extremum of a certain sharpness function, thereby correcting the image without any information on the wavefront. Theoretical work [1] has shown that the optical problem can be mapped onto a model for crystal roughening. The main algorithm applied is simulated annealing. We present a first hardware realization of this algorithm in an adaptive optics system designed to image the retina of the human eye.
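The sharpness-driven annealing loop can be sketched in a few lines (a minimal sketch: the quadratic sharpness model, the eight-actuator mirror, and all annealing parameters are illustrative assumptions, not the experimental metric or hardware):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy forward model: sharpness degrades quadratically with the
# residual between mirror commands and an unknown aberration.
true_aberration = rng.normal(size=8)

def sharpness(commands):
    return -np.sum((commands - true_aberration) ** 2)

def anneal(n_steps=5000, t0=1.0, cooling=0.999, step=0.2):
    """Simulated annealing on mirror commands, guided only by image
    sharpness: no wavefront sensing anywhere in the loop."""
    c = np.zeros(8)
    s = sharpness(c)
    t = t0
    for _ in range(n_steps):
        trial = c + rng.normal(scale=step, size=8)
        s_trial = sharpness(trial)
        # Accept improvements always; accept degradations with Boltzmann probability.
        if s_trial > s or rng.random() < np.exp((s_trial - s) / t):
            c, s = trial, s_trial
        t *= cooling
    return c, s

c_final, s_final = anneal()
```

The slowly decreasing temperature lets the search escape local maxima of the sharpness function early on, then settle greedily, which is why annealing suits the rough "crystal-like" optimization landscape mentioned above.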
12. Analysis and Design of Adaptive OCDMA Passive Optical Networks
Science.gov (United States)
2017-07-01
OCDMA systems can support multiple classes of service by differentiating code parameters, power level and diversity order. In this paper, we analyze BER performance of a multi-class 1D/2D OCDMA system and propose a new approximation method that can be used to generate accurate estimation of system BER using a simple mathematical form. The proposed approximation provides insight into proper system level analysis, system level design and sensitivity of system performance to the factors such as code parameters, power level and diversity order. Considering code design, code cardinality and system performance constraints, two design problems are defined and their optimal solutions are provided. We then propose an adaptive OCDMA-PON that adaptively shares unused resources of inactive users among active ones to improve upstream system performance. Using the approximated BER expression and defined design problems, two adaptive code allocation algorithms for the adaptive OCDMA-PON are presented and their performances are evaluated by simulation. Simulation results show that the adaptive code allocation algorithms can increase average transmission rate or decrease average optical power consumption of ONUs for dynamic traffic patterns. According to the simulation results, for an adaptive OCDMA-PON with BER value of 1e-7 and user activity probability of 0.5, transmission rate (optical power consumption) can be increased (decreased) by a factor of 2.25 (0.27) compared to fixed code assignment.
International Nuclear Information System (INIS)
Buis, E.J.; Berkhout, G.C.G.; Love, G.D.; Kirby, A.K.; Taylor, J.M.; Hannemann, S.; Collon, M.J.
2012-01-01
To assess its radiation hardness, a liquid crystal based adaptive optical element has been irradiated using a 60 MeV proton beam. The device, with the functionality of an optical beam steerer, was characterised before, during and after the irradiation. A systematic set of measurements of the transmission and beam deflection angles was carried out. The measurements showed that the transmission decreased only marginally and that the optical performance degraded only after a very high proton fluence (10¹⁰ p/cm²). The device showed complete annealing of its functionality as a beam steerer, which leads to the conclusion that liquid crystal technology for optical devices is not vulnerable to the proton irradiation expected in space.
Energy Technology Data Exchange (ETDEWEB)
Buis, E.J., E-mail: ernst-jan.buis@tno.nl [cosine Science and Computing BV, Niels Bohrweg 11, 2333 CA Leiden (Netherlands); Berkhout, G.C.G. [cosine Science and Computing BV, Niels Bohrweg 11, 2333 CA Leiden (Netherlands); Huygens Laboratory, Leiden University, P.O. Box 9504, 2300 RA Leiden (Netherlands); Love, G.D.; Kirby, A.K.; Taylor, J.M. [Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Hannemann, S.; Collon, M.J. [cosine Research BV, Niels Bohrweg 11, 2333 CA Leiden (Netherlands)
2012-01-01
To assess its radiation hardness, a liquid crystal based adaptive optical element has been irradiated using a 60 MeV proton beam. The device, with the functionality of an optical beam steerer, was characterised before, during and after the irradiation. A systematic set of measurements of the transmission and beam deflection angles was carried out. The measurements showed that the transmission decreased only marginally and that the optical performance degraded only after a very high proton fluence (10¹⁰ p/cm²). The device showed complete annealing of its functionality as a beam steerer, which leads to the conclusion that liquid crystal technology for optical devices is not vulnerable to the proton irradiation expected in space.
15. Brillouin micro-spectroscopy through aberrations via sensorless adaptive optics
Science.gov (United States)
Edrei, Eitan; Scarcelli, Giuliano
2018-04-01
Brillouin spectroscopy is a powerful optical technique for non-contact viscoelastic characterizations which has recently found applications in three-dimensional mapping of biological samples. Brillouin spectroscopy performances are rapidly degraded by optical aberrations and have therefore been limited to homogenous transparent samples. In this work, we developed an adaptive optics (AO) configuration designed for Brillouin scattering spectroscopy to engineer the incident wavefront and correct for aberrations. Our configuration does not require direct wavefront sensing and the injection of a "guide-star"; hence, it can be implemented without the need for sample pre-treatment. We used our AO-Brillouin spectrometer in aberrated phantoms and biological samples and obtained improved precision and resolution of Brillouin spectral analysis; we demonstrated 2.5-fold enhancement in Brillouin signal strength and 1.4-fold improvement in axial resolution because of the correction of optical aberrations.
16. Implementation of Texture Based Image Retrieval Using M-band Wavelet Transform
Institute of Scientific and Technical Information of China (English)
Liao Ya-li; Yang Yan; Cao Yang
2003-01-01
Wavelet transform has attracted attention because it is a very useful tool for signal analysis. As a fundamental characteristic of an image, texture traits play an important role in the human vision system for recognition and interpretation of images. The paper presents an approach to implement texture-based image retrieval using the M-band wavelet transform. First the traditional 2-band wavelet is extended to the M-band wavelet transform. Then the wavelet moments are computed from the M-band wavelet coefficients in the wavelet domain. The set of wavelet moments forms the feature vector related to the texture distribution of each wavelet image. The distances between the feature vectors describe the similarities of different images. The experimental results show that the M-band wavelet moment features of the images are effective for image indexing. The retrieval method has lower computational complexity, yet it is capable of giving better retrieval performance for a given medical image database.
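The feature-extraction pipeline (transform, per-subband moments, distance between feature vectors) can be sketched as follows, using a one-level 2-band Haar transform as a stand-in for the M-band filter bank; the moment definitions here (mean and standard deviation of absolute coefficients per subband) are a common simplification, an assumption rather than the paper's exact moments:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-band (Haar) 2-D wavelet transform; the paper's
    M-band filter banks generalize this to M*M subbands per level."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def texture_features(img, levels=2):
    """Wavelet moments: mean and std of |coefficients| in each detail subband."""
    feats = []
    approx = img.astype(float)
    for _ in range(levels):
        approx, lh, hl, hh = haar2d(approx)
        for band in (lh, hl, hh):
            feats += [np.abs(band).mean(), np.abs(band).std()]
    return np.array(feats)

def distance(f1, f2):
    return np.linalg.norm(f1 - f2)  # similarity between texture signatures

rng = np.random.default_rng(2)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth texture
noisy = rng.normal(size=(64, 64))                                 # rough texture
```

Retrieval then amounts to ranking database images by `distance` to the query's feature vector; textures with different roughness separate cleanly even in this 2-band toy.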
OpenAIRE
O’Connor, Kathryn W.; Loughlin, Patrick J.; Redfern, Mark S.; Sparto, Patrick J.
2008-01-01
The purpose of this study is to understand the processes of adaptation (changes in within-trial postural responses) and habituation (reductions in between-trial postural responses) to visual cues in older and young adults. Of particular interest were responses to sudden increases in optic flow magnitude. The postural sway of 25 healthy young adults and 24 healthy older adults was measured while subjects viewed anterior-posterior 0.4 Hz sinusoidal optic flow for 45 s. Three trials for each of ...
18. Segmented bimorph mirrors for adaptive optics: morphing strategy.
Science.gov (United States)
Bastaits, Renaud; Alaluf, David; Belloni, Edoardo; Rodrigues, Gonçalo; Preumont, André
2014-08-01
This paper discusses the concept of a lightweight segmented bimorph mirror for adaptive optics. It focuses on the morphing strategy and addresses the ill-conditioning of the Jacobian of the segments, which are partly outside the optical pupil. Two options are discussed, one based on truncating the singular values and one called damped least squares, which minimizes a combined measure of the sensor error and the voltage vector. A comparison of various configurations of segmented mirrors was conducted; it is shown that segmentation sharply increases the natural frequency of the system with limited deterioration of the image quality.
Science.gov (United States)
Carroll, C. W.; Vijaya Kumar, B. V. K.
1988-01-01
The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing will be summarized. Subjects that are covered include: (1) new iterative Fourier transform based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.
20. ADAPTIVE OPTICS IMAGING OF VY CANIS MAJORIS AT 2-5 μm WITH LBT/LMIRCam
Energy Technology Data Exchange (ETDEWEB)
Shenoy, Dinesh P.; Jones, Terry J.; Humphreys, Roberta M. [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Marengo, Massimo [Department of Physics, Iowa State University, Ames, IA 50011 (United States); Leisenring, Jarron M. [Institute for Astronomy, ETH, Wolfgang-Pauli-Strasse 27, 8093 Zurich (Switzerland); Nelson, Matthew J.; Wilson, John C.; Skrutskie, Michael F. [Department of Astronomy, University of Virginia, 530 McCormick Road, Charlottesville, VA 22904 (United States); Hinz, Philip M.; Hoffmann, William F.; Bailey, Vanessa; Skemer, Andrew; Rodigas, Timothy; Vaitheeswaran, Vidhya, E-mail: shenoy@astro.umn.edu [Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States)
2013-10-01
We present adaptive optics images of the extreme red supergiant VY Canis Majoris in the K_s, L', and M bands (2.15-4.8 μm) made with LMIRCam on the Large Binocular Telescope. The peculiar "Southwest Clump" previously imaged from 1 to 2.2 μm appears prominently in all three filters. We find its brightness is due almost entirely to scattering, with the contribution of thermal emission limited to at most 25%. We model its brightness as optically thick scattering from silicate dust grains using typical size distributions. We find a lower limit mass for this single feature of 5 × 10⁻³ M☉ to 2.5 × 10⁻² M☉ depending on the assumed gas-to-dust ratio. The presence of the Clump as a distinct feature with no apparent counterpart on the other side of the star is suggestive of an ejection event from a localized region of the star and is consistent with VY CMa's history of asymmetric high-mass-loss events.
1. ADAPTIVE OPTICS IMAGING OF VY CANIS MAJORIS AT 2-5 μm WITH LBT/LMIRCam
International Nuclear Information System (INIS)
Shenoy, Dinesh P.; Jones, Terry J.; Humphreys, Roberta M.; Marengo, Massimo; Leisenring, Jarron M.; Nelson, Matthew J.; Wilson, John C.; Skrutskie, Michael F.; Hinz, Philip M.; Hoffmann, William F.; Bailey, Vanessa; Skemer, Andrew; Rodigas, Timothy; Vaitheeswaran, Vidhya
2013-01-01
We present adaptive optics images of the extreme red supergiant VY Canis Majoris in the K_s, L', and M bands (2.15-4.8 μm) made with LMIRCam on the Large Binocular Telescope. The peculiar "Southwest Clump" previously imaged from 1 to 2.2 μm appears prominently in all three filters. We find its brightness is due almost entirely to scattering, with the contribution of thermal emission limited to at most 25%. We model its brightness as optically thick scattering from silicate dust grains using typical size distributions. We find a lower limit mass for this single feature of 5 × 10⁻³ M☉ to 2.5 × 10⁻² M☉ depending on the assumed gas-to-dust ratio. The presence of the Clump as a distinct feature with no apparent counterpart on the other side of the star is suggestive of an ejection event from a localized region of the star and is consistent with VY CMa's history of asymmetric high-mass-loss events.
2. Adaptive Optics Imaging of VY Canis Majoris at 2-5 μm with LBT/LMIRCam
Science.gov (United States)
Shenoy, Dinesh P.; Jones, Terry J.; Humphreys, Roberta M.; Marengo, Massimo; Leisenring, Jarron M.; Nelson, Matthew J.; Wilson, John C.; Skrutskie, Michael F.; Hinz, Philip M.; Hoffmann, William F.; Bailey, Vanessa; Skemer, Andrew; Rodigas, Timothy; Vaitheeswaran, Vidhya
2013-10-01
We present adaptive optics images of the extreme red supergiant VY Canis Majoris in the K_s, L', and M bands (2.15-4.8 μm) made with LMIRCam on the Large Binocular Telescope. The peculiar "Southwest Clump" previously imaged from 1 to 2.2 μm appears prominently in all three filters. We find its brightness is due almost entirely to scattering, with the contribution of thermal emission limited to at most 25%. We model its brightness as optically thick scattering from silicate dust grains using typical size distributions. We find a lower limit mass for this single feature of 5 × 10⁻³ M☉ to 2.5 × 10⁻² M☉ depending on the assumed gas-to-dust ratio. The presence of the Clump as a distinct feature with no apparent counterpart on the other side of the star is suggestive of an ejection event from a localized region of the star and is consistent with VY CMa's history of asymmetric high-mass-loss events. The LBT is an international collaboration among institutions in the United States, Italy, and Germany. LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University; and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota, and University of Virginia.
3. Contrast-based sensorless adaptive optics for retinal imaging.
Science.gov (United States)
Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T O; He, Zheng; Metha, Andrew
2015-09-01
Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these, wavefront sensorless adaptive optics (or non-wavefront sensing AO; NS-AO) imaging has recently been developed and has been applied to point-scanning based retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We suggest a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes.
4. Optical properties of photoreceptor and retinal pigment epithelium cells investigated with adaptive optics optical coherence tomography
Science.gov (United States)
Liu, Zhuolin
Human vision starts when photoreceptors collect and respond to light. Photoreceptors do not function in isolation though, but share close interdependence with neighboring photoreceptors and underlying retinal pigment epithelium (RPE) cells. These cellular interactions are essential for normal function of the photoreceptor-RPE complex, but methods to assess these in the living human eye are limited. One approach that has gained increased promise is high-resolution retinal imaging, which has undergone tremendous technological advances over the last two decades to probe the living retina at the cellular level. Pivotal in these advances have been adaptive optics (AO) and optical coherence tomography (OCT), which together allow unprecedented spatial resolution of retinal structures in all three dimensions. Using these high-resolution systems, cone photoreceptors are now routinely imaged in healthy and diseased retina, enabling fundamental structural properties of cones, such as cell spacing, packing arrangement, and alignment, to be studied. Other important cell properties, however, have remained elusive to investigation as even better imaging performance is required, and thus our understanding of how cells in the photoreceptor-RPE complex interact with light is incomplete. To address this technical bottleneck, we expanded the imaging capability of AO-OCT to detect and quantify more accurately and completely the optical properties of cone photoreceptor and RPE cells at the cellular level in the living human retina. The first objective of this thesis was development of a new AO-OCT method that is more precise and sensitive, thus enabling a more detailed view of the 3D optical signature of the photoreceptor-RPE complex than was previously possible (Chapter 2). Using this new system, the second objective was quantifying the waveguide properties of individual cone photoreceptor inner and outer segments across the macula (Chapter 3). The third objective extended the AO
5. Optical power allocation for adaptive transmissions in wavelength-division multiplexing free space optical networks
Directory of Open Access Journals (Sweden)
Hui Zhou
2015-08-01
Attracting increasing attention in recent years, Free Space Optics (FSO) technology has been recognized as a cost-effective wireless access technology for multi-Gigabit rate wireless networks. Radio on Free Space Optics (RoFSO) provides a new approach to support various bandwidth-intensive wireless services in an optical wireless link. In an RoFSO system using wavelength-division multiplexing (WDM), it is possible to concurrently transmit multiple data streams consisting of various wireless services at very high rates. In this paper, we investigate the problem of optical power allocation under power budget and eye safety constraints for adaptive WDM transmission in RoFSO networks. We develop power allocation schemes for adaptive WDM transmissions to combat the effect of weather turbulence on RoFSO links. Simulation results show that WDM RoFSO can support high data rates even over long distances or under bad weather conditions with an adequate system design.
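A toy allocation under a transmit-power budget and a per-channel eye-safety cap might look like the sketch below (the proportional-to-loss heuristic and all numbers are assumptions for illustration, not the paper's optimization scheme):

```python
import numpy as np

def allocate_power(losses_db, total_budget_mw, eye_safety_cap_mw):
    """Illustrative allocation: give each wavelength power proportional to its
    linear channel loss, so channels arrive with similar received power,
    subject to a per-channel eye-safety cap and the total power budget."""
    loss_lin = 10 ** (np.asarray(losses_db, dtype=float) / 10)
    p = total_budget_mw * loss_lin / loss_lin.sum()   # proportional split
    p = np.minimum(p, eye_safety_cap_mw)              # enforce eye safety
    # Redistribute headroom left by capped channels to the uncapped ones.
    for _ in range(len(p)):
        spare = total_budget_mw - p.sum()
        free = p < eye_safety_cap_mw
        if spare <= 1e-12 or not free.any():
            break
        p[free] += spare * loss_lin[free] / loss_lin[free].sum()
        p = np.minimum(p, eye_safety_cap_mw)
    return p

# Hypothetical 4-wavelength link with per-channel atmospheric losses in dB.
p = allocate_power([3.0, 6.0, 10.0, 13.0], total_budget_mw=40.0, eye_safety_cap_mw=15.0)
```

Lossier wavelengths get more transmit power until the safety cap binds, at which point the remainder flows back to the less attenuated channels.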
6. Laser guide star adaptive optics at Lick Observatory
OpenAIRE
Gavel, Donald; Dillon, Daren; Kupke, Renate; Rudy, Alex
2015-01-01
We present an overview of the adaptive optics system at the Shane telescope (ShaneAO) along with research and development efforts on the technology and algorithms that will advance AO into wider application for astronomy. Diffraction-limited imaging and spectroscopy from ground-based large-aperture telescopes will open up the opportunity for unprecedented science advancement. The AO challenges we are targeting are correction down to visible science wavelengths, which demands high-order wa...
7. Adaptive Optical System for Retina Imaging Approaches Clinic Applications
Science.gov (United States)
Ling, N.; Zhang, Y.; Rao, X.; Wang, C.; Hu, Y.; Jiang, W.; Jiang, C.
We presented "A small adaptive optical system on table for human retinal imaging" at the 3rd Workshop on Adaptive Optics for Industry and Medicine. In that system, a 19-element small deformable mirror was used as the wavefront correction element, and high-resolution images of photoreceptors and capillaries of the human retina were obtained. In the past two years, a new adaptive optical system for human retinal imaging has been developed on the basis of that system. The wavefront correction element is a newly developed 37-element deformable mirror, and some modifications have been adopted for easy operation. Experiments for different imaging wavelengths and axial positions were conducted, and mosaic pictures of photoreceptors and capillaries were obtained. 100 normal and abnormal eyes of different ages have been inspected, yielding the first report of detailed capillary distribution images covering a ±3° by ±3° field around the fovea. Preliminary very-early-diagnosis experiments have been tried in the laboratory, and the system is planned to move to the hospital for clinical experiments.
8. Adaptive optics with pupil tracking for high resolution retinal imaging.
Science.gov (United States)
Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris
2012-02-01
Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.
9. Pupil-segmentation-based adaptive optics for microscopy
Science.gov (United States)
Ji, Na; Milkie, Daniel E.; Betzig, Eric
2011-03-01
Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations needs adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples due to their strong scattering of light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, local tilt in the wavefront is measured from image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.
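The image-shift measurement at the heart of the pupil-segmentation approach can be sketched with FFT cross-correlation (a generic implementation assuming circular shifts; the conversion from measured shift to local wavefront tilt depends on system geometry and is omitted):

```python
import numpy as np

def image_shift(ref, img):
    """Estimate the (dy, dx) translation between two subimages from the peak
    of their FFT-based cross-correlation; in pupil-segmentation AO this shift
    is proportional to the local wavefront tilt over the illuminated segment."""
    xc = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Wrap peak indices to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xc.shape))

# Synthetic sub-pupil images: the second is the first shifted by (3, -5) pixels.
rng = np.random.default_rng(4)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))
dy, dx = image_shift(ref, shifted)
```

Repeating this for each pupil segment gives a set of local tilts from which the aberrated wavefront can be reconstructed, much as in a Hartmann-Shack sensor but using the sample image itself.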
10. High-resolution retinal imaging using adaptive optics and Fourier-domain optical coherence tomography
Science.gov (United States)
Olivier, Scot S.; Werner, John S.; Zawadzki, Robert J.; Laut, Sophie P.; Jones, Steven M.
2010-09-07
This invention permits retinal images to be acquired at high speed and with unprecedented resolution in three dimensions (4 × 4 × 6 μm). The instrument achieves high lateral resolution by using adaptive optics to correct optical aberrations of the human eye in real time. High axial resolution and high speed are made possible by the use of Fourier-domain optical coherence tomography. Using this system, we have demonstrated the ability to image microscopic blood vessels and the cone photoreceptor mosaic.
11. Adaptive optics improves multiphoton super-resolution imaging
Science.gov (United States)
Zheng, Wei; Wu, Yicong; Winter, Peter; Shroff, Hari
2018-02-01
Three-dimensional (3D) fluorescence microscopy has been essential for biological studies. It allows interrogation of structure and function at spatial scales spanning the macromolecular, cellular, and tissue levels. Critical factors to consider in 3D microscopy include spatial resolution, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and temporal resolution. Maintaining high quality imaging becomes progressively more difficult at increasing depth (where optical aberrations, induced by inhomogeneities of refractive index in the sample, degrade resolution and SNR), and in thick or densely labeled samples (where out-of-focus background can swamp the valuable in-focus signal from each plane). In this report, we introduce our new instrumentation to address these problems. A multiphoton structured illumination microscope was modified to integrate an adaptive optics system for optical aberration correction. First, the optical aberrations are determined using direct wavefront sensing with a nonlinear guide star and subsequently corrected using a deformable mirror, restoring super-resolution information. We demonstrate the flexibility of our adaptive optics approach on a variety of semi-transparent samples, including bead phantoms, cultured cells in collagen gels and biological tissues. The performance of our super-resolution microscope is improved in all of these samples, as peak intensity is increased (up to 40-fold) and resolution recovered (up to 176 ± 10 nm laterally and 729 ± 39 nm axially) at depths up to 250 μm from the coverslip surface.
12. Sub-Airy Confocal Adaptive Optics Scanning Ophthalmoscopy.
Science.gov (United States)
Sredar, Nripun; Fagbemi, Oladipo E; Dubra, Alfredo
2018-04-01
To demonstrate the viability of improving transverse image resolution in reflectance scanning adaptive optics ophthalmoscopy using sub-Airy disk confocal detection. The foveal cone mosaic was imaged in five human subjects free of known eye disease using two custom adaptive optics scanning light ophthalmoscopes (AOSLOs) in reflectance with 7.75 and 4.30 mm pupil diameters. Confocal pinholes of 0.5, 0.6, 0.8, and 1.0 Airy disk diameters (ADDs) were used in a retinal conjugate plane before the light detector. Average cone photoreceptor intensity profile width and power spectrum were calculated for the resulting images. Detected energy using a model eye was recorded for each pinhole size. The cone photoreceptor mosaic is better resolved with decreasing confocal pinhole size, with the high spatial frequency content of the images enhanced in both the large- and small-pupil AOSLOs. The average cone intensity profile width was reduced by ∼15% with the use of a 0.5 ADD pinhole when compared to a 1.0 ADD, with an accompanying signal reduction of more than a factor of four. The use of sub-Airy disk confocal pinhole detection without increasing retinal light exposure results in a substantial improvement in image resolution at the cost of a larger-than-predicted signal reduction. Improvement in transverse resolution using sub-Airy disk confocal detection is a practical and low-cost approach that is applicable to all point- and line-scanning ophthalmoscopes, including optical coherence tomographers.
13. Night myopia studied with an adaptive optics visual analyzer.
Directory of Open Access Journals (Sweden)
Pablo Artal
PURPOSE: Eyes with distant objects in focus in daylight are thought to become myopic in dim light. This phenomenon, often called "night myopia," has been studied extensively for several decades. However, despite its general acceptance, its magnitude and causes are still controversial. A series of experiments were performed to understand night myopia in greater detail. METHODS: We used an adaptive optics instrument operating in invisible infrared light to elucidate the actual magnitude of night myopia and its main causes. The experimental setup allowed the manipulation of the eye's aberrations (particularly spherical aberration) as well as the use of monochromatic and polychromatic stimuli. Eight subjects with normal vision monocularly determined their best focus position subjectively for a Maltese cross stimulus at different levels of luminance, from the baseline condition of 20 cd/m² to the lowest luminance of 22 × 10⁻⁶ cd/m². While subjects performed the focusing tasks, their eye's defocus and aberrations were continuously measured with the 1050-nm Hartmann-Shack sensor incorporated in the adaptive optics instrument. The experiment was repeated for a variety of controlled conditions incorporating specific aberrations of the eye and chromatic content of the stimuli. RESULTS: We found large inter-subject variability and an average myopic shift of -0.8 D for low light conditions. The main cause of night myopia was the accommodation shift occurring at low light levels. Other factors traditionally suggested to explain night myopia, such as chromatic and spherical aberrations, have a much smaller effect in this mechanism. CONCLUSIONS: An adaptive optics visual analyzer was applied to study the phenomenon of night myopia. We found that the defocus shift occurring in dim light is mainly due to accommodation errors.
14. An adaptive optics imaging system designed for clinical use
Science.gov (United States)
Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.
2015-01-01
Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously, optically and electronically integrating a wide-FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes (arcmin), 2) ~0.5–0.8 arcmin, and 3) ~0.05–0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin, and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing. PMID:26114033
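The per-stage contribution of the cascade can be checked with simple arithmetic on the midpoints of the residual-motion ranges quoted above (the overall ~400x figure is relative to uncorrected eye motion, which is not quoted here):

```python
# Stage-by-stage residual eye motion, using midpoints of the ranges quoted
# in the abstract (arcmin), and the reduction factor each later stage adds.
normal = [2.5, 0.65, 0.06]    # stages 1-3, normal eyes
poor   = [4.0, 0.90, 0.105]   # stages 1-3, eyes with poor fixation

def stage_factors(residuals):
    """Reduction factor contributed by each stage after the first."""
    return [residuals[i] / residuals[i + 1] for i in range(len(residuals) - 1)]

print(stage_factors(normal))  # fine optical, then digital registration
print(stage_factors(poor))
```

For normal eyes this gives roughly a 4x reduction from the fine optical stage and a further ~11x from digital registration.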
15. Aberrations and adaptive optics in super-resolution microscopy
Science.gov (United States)
Booth, Martin; Andrade, Débora; Burke, Daniel; Patton, Brian; Zurauskas, Mantas
2015-01-01
As one of the most powerful tools in the biological investigation of cellular structures and dynamic processes, fluorescence microscopy has undergone extraordinary developments in the past decades. The advent of super-resolution techniques has enabled fluorescence microscopy – or rather nanoscopy – to achieve nanoscale resolution in living specimens and unravelled the interior of cells with unprecedented detail. The methods employed in this expanding field of microscopy, however, are especially prone to the detrimental effects of optical aberrations. In this review, we discuss how super-resolution microscopy techniques based upon single-molecule switching, stimulated emission depletion and structured illumination each suffer from aberrations in different ways that are dependent upon intrinsic technical aspects. We discuss the use of adaptive optics as an effective means to overcome this problem. PMID:26124194
16. Fourier transform digital holographic adaptive optics imaging system
Science.gov (United States)
Liu, Changgeng; Yu, Xiao; Kim, Myung K.
2013-01-01
A Fourier transform digital holographic adaptive optics imaging system and its basic principles are proposed. The CCD is placed at the exact Fourier transform plane of the pupil of the eye lens, so the spherical curvature introduced by all optics except the eye lens itself is eliminated. The CCD is also at the image plane of the target. The point-spread function of the system is directly recorded, making it easier to determine the correct guide-star hologram. The light signal is also stronger at the CCD, especially for phase-aberration sensing. Numerical propagation is avoided. The sensor aperture does not limit the resolution, and the possibility of using low-coherence or incoherent illumination is opened. The system becomes more efficient and flexible. Although it is intended for ophthalmic use, it also shows potential application in microscopy. The robustness and feasibility of this compact system are demonstrated by simulations and experiments using scattering objects. PMID:23262541
Energy Technology Data Exchange (ETDEWEB)
Romashko, R V; Bezruk, M N; Kamshilin, A A; Kulchin, Yurii N
2012-06-30
We have proposed and analysed a scheme for the multiplexing of orthogonal dynamic holograms in photorefractive crystals which ensures almost zero cross talk between the holographic channels upon phase demodulation. A six-channel adaptive fibre-optic interferometer was built, and the detection limit for small phase fluctuations in the channels of the interferometer was determined to be 2.1 × 10⁻⁸ rad·W^(1/2)·Hz^(-1/2). The channel multiplexing capacity of the interferometer was estimated. The formation of 70 channels such that their optical fields completely overlap in the crystal reduces the relative detection limit in the working channel by just 10%. We found conditions under which the maximum cross talk between the channels was within the intrinsic noise level in the channels (-47 dB).
18. Adaptive phase measurements in linear optical quantum computation
International Nuclear Information System (INIS)
Ralph, T C; Lund, A P; Wiseman, H M
2005-01-01
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode, so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.
19. Conjugate adaptive optics with remote focusing in multiphoton microscopy
Science.gov (United States)
Tao, Xiaodong; Lam, Tuwin; Zhu, Bingzhao; Li, Qinggele; Reinig, Marc R.; Kubby, Joel
2018-02-01
The small correction volume for conventional wavefront shaping methods limits their application in biological imaging through scattering media. In this paper, we take advantage of conjugate adaptive optics (CAO) and remote focusing (CAORF) to achieve three-dimensional (3D) scanning through a scattering layer with a single correction. Our results show that the proposed system can provide 10 times wider axial field of view compared with a conventional conjugate AO system when 16,384 segments are used on a spatial light modulator. We demonstrate two-photon imaging with CAORF through mouse skull. The fluorescent microspheres embedded under the scattering layers can be clearly observed after applying the correction.
20. Adaptive optics system for the IRSOL solar observatory
Science.gov (United States)
Ramelli, Renzo; Bucher, Roberto; Rossini, Leopoldo; Bianda, Michele; Balemi, Silvano
2010-07-01
We present a low-cost adaptive optics system developed for the solar observatory at Istituto Ricerche Solari Locarno (IRSOL), Switzerland. The Shack-Hartmann wavefront sensor is based on a Dalsa CCD camera with 256 pixels × 256 pixels working at 1 kHz. The wavefront compensation is obtained by a deformable mirror with 37 actuators and a tip-tilt mirror. A real-time control software has been developed on a RTAI-Linux PC. Scicos/Scilab based software has been realized for an online analysis of the system behavior. The software is completely open source.
1. Optimal model-based sensorless adaptive optics for epifluorescence microscopy.
Science.gov (United States)
Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel
2018-01-01
We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.
2. High resolution observations using adaptive optics: Achievements and future needs
Science.gov (United States)
Sankarasubramanian, K.; Rimmele, T.
2008-06-01
Over the last few years, several interesting observations were obtained with the help of solar Adaptive Optics (AO). In this paper, a few observations made using solar AO are highlighted and briefly discussed. A list of disadvantages of current AO systems is presented. With telescopes larger than 1.5 m expected during the next decade, there is a need to extend existing AO technologies to large-aperture telescopes. Some aspects of this development are highlighted. Finally, recent AO developments in India are also presented.
Science.gov (United States)
Arimoto, Yoshinori; Hayano, Yutaka; Klaus, Werner
1997-05-01
We propose a satellite laser communication system between a ground station and a geostationary satellite, named the high-speed optical feeder link system. It is based on the application of (a) high-speed optical devices developed for ground-based high-speed fiber-optic communications, and (b) adaptive optics, which compensates wavefront distortions due to atmospheric turbulence using real-time feedback control. A link budget study shows that a system with a 10-Gbps bit rate is feasible assuming state-of-the-art performance of the Er-doped fiber amplifier. We further discuss preliminary measurements of atmospheric turbulence at the telescope site in Tokyo, and present the current study on the design of the key components for the feeder-link laser transceiver.
4. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.
Science.gov (United States)
Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V
2015-02-01
Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann-Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper were essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
5. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.
Science.gov (United States)
2012-05-08
Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach does not offer the flexibility of simultaneously correcting aberrations for all imaging depths, nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.
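The core idea of correcting a virtual pupil can be sketched in a few lines: transform the detected field to the pupil plane, multiply by the conjugate of a Zernike phase, and transform back. The example below uses a single known defocus term and a point scatterer; real computational AO must instead estimate the Zernike coefficients from the tomogram itself.

```python
import numpy as np

# Toy computational aberration correction: a Zernike defocus phase applied in
# a virtual pupil is divided out in the Fourier domain, restoring the point
# response. Amplitudes and grid size are illustrative assumptions.
N = 128
x = np.fft.fftfreq(N) * 2.0                  # pupil coordinate in [-1, 1)
X, Y = np.meshgrid(x, x, indexing="ij")
rho2 = X**2 + Y**2
pupil = (rho2 <= 1.0).astype(float)          # circular pupil mask

defocus = 6.0 * (2.0 * rho2 - 1.0) * pupil   # Zernike defocus, 6 rad amplitude

point = np.zeros((N, N))
point[0, 0] = 1.0                            # point scatterer (FFT-centred)
spectrum = np.fft.fft2(point)
aberrated = np.fft.ifft2(spectrum * pupil * np.exp(1j * defocus))
corrected = np.fft.ifft2(np.fft.fft2(aberrated) * np.exp(-1j * defocus))

print(np.abs(aberrated).max(), "->", np.abs(corrected).max())
```

After correction the field is exactly the diffraction-limited response of the pupil, so the peak amplitude recovers its unaberrated value.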
6. Extended depth of focus adaptive optics spectral domain optical coherence tomography
Science.gov (United States)
Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki
2012-01-01
We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA. PMID:23082278
7. Overview of deformable mirror technologies for adaptive optics and astronomy
Science.gov (United States)
2012-07-01
From the ardent bucklers used during the Syracuse battle to set fire to Romans’ ships to more contemporary piezoelectric deformable mirrors widely used in astronomy, from very large voice-coil deformable mirrors considered for future Extremely Large Telescopes to very small and compact ones embedded in Multi-Object Adaptive Optics systems, this paper aims at giving an overview of deformable mirror technology for adaptive optics and astronomy. First, the main drivers for the design of deformable mirrors are recalled, related not only to atmospheric aberration compensation but also to environmental conditions and mechanical constraints. Then the different technologies available today for the manufacturing of deformable mirrors are described, with pros and cons analyzed. A review of the companies and institutes with capabilities in delivering deformable mirrors to astronomers is presented, as well as lessons learned from the past 25 years of technological development and operation on sky. In conclusion, perspectives are tentatively drawn regarding the future of deformable mirror technology for astronomy.
8. Photometric Calibration of the Gemini South Adaptive Optics Imager
Science.gov (United States)
Stevenson, Sarah Anne; Rodrigo Carrasco Damele, Eleazar; Thomas-Osip, Joanna
2017-01-01
The Gemini South Adaptive Optics Imager (GSAOI) is an instrument available on the Gemini South telescope at Cerro Pachon, Chile, utilizing the Gemini Multi-Conjugate Adaptive Optics System (GeMS). In order to allow users to easily perform photometry with this instrument and to monitor any changes in the instrument in the future, we seek to set up a process for performing photometric calibration with standard star observations taken across the time of the instrument’s operation. We construct a Python-based pipeline that includes IRAF wrappers for reduction and combines the AstroPy photutils package and original Python scripts with the IRAF apphot and photcal packages to carry out photometry and linear regression fitting. Using the pipeline, we examine standard star observations made with GSAOI on 68 nights between 2013 and 2015 in order to determine the nightly photometric zero points in the J, H, Kshort, and K bands. This work is based on observations obtained at the Gemini Observatory, processed using the Gemini IRAF and gemini_python packages, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).
9. A New, Adaptable, Optical High-Resolution 3-Axis Sensor
Directory of Open Access Journals (Sweden)
Niels Buchhold
2017-01-01
This article presents a new optical, multi-functional, high-resolution 3-axis sensor which serves to navigate and can, for example, replace standard joysticks in medical devices such as electric wheelchairs, surgical robots or medical diagnosis devices. A light source, e.g., a laser diode, is affixed to a movable axis and projects a random geometric shape on an image sensor (CMOS or CCD). The downstream microcontroller’s software identifies the geometric shape’s center, distortion and size, and then calculates x, y, and z coordinates, which can be processed in attached devices. Depending on the image sensor in use (e.g., 6.41 megapixels), the 3-axis sensor features a resolution of 1544 digits from right to left and 1038 digits up and down. Through interpolation, these values rise by a factor of 100. A unique feature is the exact reproducibility (deflection to coordinates) and its precise ability to return to its neutral position. Moreover, optical signal processing provides a high level of protection against electromagnetic and radio frequency interference. The sensor is adaptive and adjustable to fit a user’s range of motion (stroke and force). This approach aims to optimize sensor systems such as joysticks in medical devices in terms of safety, ease of use, and adaptability.
10. Solar multi-conjugate adaptive optics performance improvement
Science.gov (United States)
Zhang, Zhicheng; Zhang, Xiaofang; Song, Jie
2015-08-01
In order to overcome the effect of atmospheric anisoplanatism, Multi-Conjugate Adaptive Optics (MCAO) has been widely used to widen the field of view (FOV) of solar telescopes. MCAO corrects turbulence by means of several deformable mirrors (DMs) conjugated to different altitudes, overcoming the small corrected FOV achievable with conventional AO. With the assistance of the Multi-threaded Adaptive Optics Simulator (MAOS), we can make a 3D reconstruction of the distorted wavefront; the correction is applied by one or more DMs. This technique benefits from information about atmospheric turbulence at different layers, which can be used to reconstruct the wavefront extremely well. In MAOS, the sensors are simulated either as idealized wavefront gradient sensors, as tip-tilt sensors based on the best Zernike fit, or as a WFS using physical optics and incorporating user-specified pixel characteristics and a matched-filter pixel processing algorithm. Considering only atmospheric anisoplanatism, we focus on how the performance of a solar MCAO system is related to the number of DMs and their conjugate heights. We theoretically quantify the performance of the tomographic solar MCAO system. The results indicate that the tomographic AO system can improve the average Strehl ratio of a solar telescope by employing only one or two DMs conjugated to the optimum altitude, and that the Strehl ratio increases significantly when more deformable mirrors are used. Furthermore, we discuss the effects of DM conjugate altitude on the correction achievable by the MCAO system, and present the optimum DM conjugate altitudes.
11. Control algorithms and applications of the wavefront sensorless adaptive optics
Science.gov (United States)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with a conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system need not measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms treats the performance metric as a function of the control parameters and uses a search algorithm to improve it. The model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control algorithms are generalized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free-space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects.
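The archetypal model-free control law in this taxonomy is stochastic parallel gradient descent (SPGD): perturb all actuators simultaneously, measure the metric change, and step in the direction that improved it. A minimal sketch with a stand-in quadratic metric (a real system would use image sharpness or coupled power; gains and sizes are illustrative):

```python
import numpy as np

# Minimal SPGD loop: two-sided perturbation, metric difference, update.
rng = np.random.default_rng(2)
target = rng.normal(0, 1, size=12)        # unknown optimal actuator vector

def J(u):
    """Stand-in performance metric to maximize (peaks at u == target)."""
    return -np.sum((u - target)**2)

u = np.zeros(12)
gain, delta = 0.3, 0.05
for _ in range(2000):
    du = delta * rng.choice([-1.0, 1.0], size=12)  # Bernoulli perturbation
    dJ = J(u + du) - J(u - du)                     # two-sided metric difference
    u = u + gain * dJ * du                         # parallel gradient step
print(J(np.zeros(12)), "->", J(u))
```

Because every actuator is perturbed at once, the cost per iteration is two metric evaluations regardless of the number of actuators, which is what makes SPGD attractive for WFSless systems.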
12. Statistical learning methods for aero-optic wavefront prediction and adaptive-optic latency compensation
Science.gov (United States)
Burns, W. Robert
Since the early 1970s, research in airborne laser systems has been the subject of continued interest. Airborne laser applications depend on being able to propagate a near diffraction-limited laser beam from an airborne platform. Turbulent air flowing over the aircraft produces density fluctuations through which the beam must propagate. Because the index of refraction of the air is directly related to the density, the turbulent flow imposes aberrations on the beam passing through it. This problem is referred to as aero-optics. Aero-optics is recognized as a major technical issue that needs to be solved before airborne optical systems can become routinely fielded. This dissertation research specifically addresses an approach to mitigating the deleterious effects imposed on an airborne optical system by aero-optics. A promising technology is adaptive optics: a feedback control method that measures optical aberrations and imprints the conjugate aberrations onto an outgoing beam. The challenge is that it is a computationally difficult problem, since aero-optic disturbances are on the order of kilohertz for practical applications. High control loop frequencies and high disturbance frequencies mean that adaptive-optic systems are sensitive to latency in sensors, mirrors, amplifiers, and computation. These latencies build up to result in a dramatic reduction in the system's effective bandwidth. This work presents two variations of an algorithm that uses model reduction and data-driven predictors to estimate the evolution of measured wavefronts over a short temporal horizon and thus compensate for feedback latency. The efficacy of the two methods is compared in this research, and evaluated against similar algorithms that have been previously developed. The best version achieved over 75% disturbance rejection in simulation in the most optically active flow region in the wake of a turret, considerably outperforming conventional approaches. The algorithm is shown to be
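The predictor idea can be sketched with a linear autoregressive model fit by least squares: predict the next wavefront-coefficient sample from a few past samples, instead of holding the latest (latency-delayed) measurement. Here a narrowband aero-optic mode is emulated by a noisy sinusoid; the dissertation's methods use reduced-order (POD) modes of measured wavefronts, which this toy signal only stands in for.

```python
import numpy as np

# Fit a 4-tap linear predictor on past samples, then compare its one-step
# prediction error against the naive "use the last sample" latency baseline.
rng = np.random.default_rng(3)
t = np.arange(1200)
signal = np.sin(2 * np.pi * 0.02 * t) + 0.01 * rng.normal(size=t.size)

TAPS = 4
# Row n of X holds signal[n-1] ... signal[n-TAPS]; y[n] is signal[n].
X = np.column_stack([signal[TAPS - k - 1 : -k - 1] for k in range(TAPS)])
y = signal[TAPS:]
w, *_ = np.linalg.lstsq(X[:800], y[:800], rcond=None)  # train on the past

pred = X[800:] @ w            # predicted samples on held-out data
hold = X[800:, 0]             # baseline: latest sample, one step stale
err_pred = np.sqrt(np.mean((pred - y[800:])**2))
err_hold = np.sqrt(np.mean((hold - y[800:])**2))
print(err_pred, err_hold)     # predictor beats the stale-sample baseline
```

For a near-periodic disturbance the linear predictor tracks the signal down to the noise floor, while the stale-sample baseline carries the full phase-lag error.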
13. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization.
Science.gov (United States)
Tehrani, Kayvan F; Zhang, Yiwen; Shen, Ping; Kner, Peter
2017-11-01
Stochastic optical reconstruction microscopy (STORM) can achieve resolutions of better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm.
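A bare-bones PSO over modal coefficients illustrates the optimizer's structure. The metric below is a stand-in quadratic; in AO-STORM it would be the Fourier-domain sharpness measure computed from raw STORM frames, and the swarm size, bounds, and coefficients here are illustrative assumptions.

```python
import numpy as np

# Minimal particle swarm optimization of a wavefront-quality metric over
# four hypothetical Zernike coefficients.
rng = np.random.default_rng(4)
true_ab = rng.normal(0, 0.5, size=4)           # unknown aberration

def metric(c):
    """Stand-in quality metric to maximize (peaks when c cancels true_ab)."""
    return -np.sum((c + true_ab)**2)

n_particles, n_iter = 20, 120
pos = rng.uniform(-2, 2, size=(n_particles, 4))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([metric(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([metric(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print(metric(gbest))   # near 0: the swarm's best cancels the aberration
```

PSO needs only metric evaluations, no gradients, which suits metrics computed from stochastic single-molecule frames.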
14. Binary stars observed with adaptive optics at the starfire optical range
Energy Technology Data Exchange (ETDEWEB)
Drummond, Jack D. [Air Force Research Laboratory, Directed Energy Directorate, RDSAM, 3550 Aberdeen Avenue SE, Kirtland AFB, NM 87117-5776 (United States)
2014-03-01
In reviewing observations taken of binary stars used as calibration objects for non-astronomical purposes with adaptive optics on the 3.5 m Starfire Optical Range telescope over the past 2 years, one-fifth of them were found to be off-orbit. In order to understand such a high number of discrepant position angles and separations, all previous observations in the Washington Double Star Catalog for these rogue binaries were obtained from the Naval Observatory. Adding our observations to these yields new orbits for all, resolving the discrepancies. We have detected both components of γ Gem for the first time, and we have shown that 7 Cam is an optical pair, not physically bound.
15. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method
OpenAIRE
Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong
2014-01-01
To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, we make a mathematical model for the degraded multiframe adaptive optics images. The function model is deduced for the point spread with time based on phase error. The AO images are denoised using the image power spectral density and support constrain...
Science.gov (United States)
O’Connor, Kathryn W.; Loughlin, Patrick J.; Redfern, Mark S.; Sparto, Patrick J.
2008-01-01
17. The adaptation of methods in multilayer optics for the calculation of specular neutron reflection
International Nuclear Information System (INIS)
Penfold, J.
1988-10-01
The adaptation of standard methods in multilayer optics to the calculation of specular neutron reflection is described. Their application is illustrated with examples which include a glass optical flat and a deuterated Langmuir-Blodgett film. (author)
18. Design of an optimized adaptive optics system with a photo-controlled deformable mirror
Czech Academy of Sciences Publication Activity Database
Pilař, Jan; Bonora, Stefano; Lucianetti, Antonio; Jelínková, H.; Mocek, Tomáš
2016-01-01
Vol. 28, No. 13 (2016), pp. 1422-1425. ISSN 1041-1135. Institutional support: RVO:68378271. Keywords: adaptive optics; closed loop systems; deformable mirror. Subject RIV: BH - Optics, Masers, Lasers. Impact factor: 2.375, year: 2016
19. Artificial guide stars for adaptive optics using unmanned aerial vehicles
Science.gov (United States)
Basden, A. G.; Brown, Anthony M.; Chadwick, P. M.; Clark, P.; Massey, R.
2018-06-01
Astronomical adaptive optics (AO) systems are used to increase effective telescope resolution. However, they cannot be used to observe the whole sky, since one or more natural guide stars of sufficient brightness must be found within the telescope field of view for the AO system to work. Even when laser guide stars are used, natural guide stars are still required to provide a constant position reference. Here, we introduce a technique to overcome this problem by using rotary unmanned aerial vehicles (UAVs) as a platform from which to produce artificial guide stars. We describe the concept, which relies on the UAV being able to measure its precise relative position. We investigate the AO performance improvements that can be achieved, which in the cases presented here can improve the Strehl ratio by a factor of at least 2 for an 8 m class telescope. We also discuss improvements to this technique, which is relevant to both astronomical and solar AO systems.
20. Adaptive fiber optics collimator based on flexible hinges.
Science.gov (United States)
Zhi, Dong; Ma, Yanxing; Ma, Pengfei; Si, Lei; Wang, Xiaolin; Zhou, Pu
2014-08-20
In this manuscript, we present a new design for an adaptive fiber optics collimator (AFOC) based on flexible hinges, using piezoelectric stack actuators for X-Y displacement. Unlike a traditional AFOC, the new structure uses flexible hinges to drive the fiber end cap instead of bare fiber. We fabricated an AFOC based on flexible hinges and measured the end cap's deviation and the device's resonance frequency. Experimental results show that this new AFOC can provide fast control of the tip-tilt deviation of the laser beam emitted from the end cap. As a result, the fiber end cap can support much higher power than bare fiber, which makes the new structure ideal for tip-tilt control in a high-power fiber laser system.
1. Performance of the Keck Observatory adaptive-optics system.
Science.gov (United States)
van Dam, Marcos A; Le Mignant, David; Macintosh, Bruce A
2004-10-10
The adaptive-optics (AO) system at the W. M. Keck Observatory is characterized. We calculate the error budget of the Keck AO system operating in natural guide star mode with a near-infrared imaging camera. The measurement noise and bandwidth errors are obtained by modeling the control loops and recording residual centroids. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of as much as 0.37 at 1.58 μm when a bright guide star is used, and of 0.19 for a magnitude 12 star. The images are consistent with the predicted wave-front error based on our error budget estimates.
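The error-budget approach described in this abstract combines independent wavefront error terms in quadrature and converts the total into a Strehl ratio via the extended Maréchal approximation, a standard AO relation. A minimal sketch with illustrative error terms (the values below are assumptions, not Keck's actual budget):

```python
import math

def strehl_from_budget(errors_nm, wavelength_nm):
    """Combine independent wavefront error terms (nm RMS) in quadrature and
    estimate the Strehl ratio with the extended Marechal approximation."""
    total_nm = math.sqrt(sum(e**2 for e in errors_nm))
    sigma_rad = 2 * math.pi * total_nm / wavelength_nm  # RMS phase error, radians
    return math.exp(-sigma_rad**2), total_nm

# illustrative terms: fitting, measurement noise, bandwidth, calibration (nm RMS)
strehl, total = strehl_from_budget([120, 80, 60, 40], wavelength_nm=1580)
print(f"total = {total:.0f} nm RMS, Strehl = {strehl:.2f}")
```

Note how the longer wavelength helps: the same residual wavefront error yields a much lower Strehl ratio in the visible than at 1.58 μm.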
2. Adaptive optics retinal imaging in the living mouse eye
Science.gov (United States)
Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.
2012-01-01
Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells, were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width at half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy have allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260
3. 4th International Workshop on Adaptive Optics for Industry and Medicine
CERN Document Server
Wittrock, Ulrich
2005-01-01
This book treats the development and application of adaptive optics for industry and medicine. The contributions describe recently developed components for adaptive-optics systems such as deformable mirrors, wavefront sensors, and mirror drivers as well as complete adaptive optical systems and their applications in industry and medicine. Applications range from laser-beam forming and adaptive aberration correction for high-power lasers to retinal imaging in ophthalmology. The contributions are based on presentations made at the 4th International Workshop on Adaptive Optics in Industry and Medicine which took place in Münster, Germany, in October 2003. This highly successful series of workshops on adaptive optics started in 1997 and continues with the 5th workshop in Beijing in 2005.
4. Adaptive optics scanning laser ophthalmoscope imaging: technology update
Directory of Open Access Journals (Sweden)
Merino D
2016-04-01
David Merino, Pablo Loza-Alvarez, The Institute of Photonic Sciences (ICFO), The Barcelona Institute of Science and Technology, Castelldefels, Barcelona, Spain. Abstract: Adaptive optics (AO) retinal imaging has become very popular in the past few years, especially within the ophthalmic research community. Several retinal imaging techniques, such as fundus cameras or optical coherence tomography systems, have been coupled with AO in order to produce impressive images showing individual cell mosaics over different layers of the in vivo human retina. The combination of AO with scanning laser ophthalmoscopy has been extensively used to generate images of the human retina with unprecedented resolution, showing individual photoreceptor cells, retinal pigment epithelium cells, microscopic capillary vessels, and the nerve fiber layer. Over the past few years, the technique has evolved into several different applications, not only in the clinic but also in different animal models, thanks to technological developments in the field. These developments have specific applications to different fields of investigation, which are not limited to the study of retinal diseases but extend to the understanding of retinal function and vision science. This review attempts to summarize these developments briefly and understandably, to guide the reader through the possibilities that AO scanning laser ophthalmoscopy offers, as well as its limitations, which should be taken into account when planning to use it. Keywords: high-resolution, in vivo retinal imaging, AOSLO
5. Acute Solar Retinopathy Imaged With Adaptive Optics, Optical Coherence Tomography Angiography, and En Face Optical Coherence Tomography.
Science.gov (United States)
Wu, Chris Y; Jansen, Michael E; Andrade, Jorge; Chui, Toco Y P; Do, Anna T; Rosen, Richard B; Deobhakta, Avnish
2018-01-01
Solar retinopathy is a rare form of retinal injury that occurs after direct sungazing. To enhance understanding of the structural changes that occur in solar retinopathy by obtaining high-resolution in vivo en face images. Case report of a young adult woman who presented to the New York Eye and Ear Infirmary with symptoms of acute solar retinopathy after viewing the solar eclipse on August 21, 2017. Results of comprehensive ophthalmic examination and images obtained by fundus photography, microperimetry, spectral-domain optical coherence tomography (OCT), adaptive optics scanning light ophthalmoscopy, OCT angiography, and en face OCT. The patient was examined after viewing the solar eclipse. Visual acuity was 20/20 OD and 20/25 OS. The patient was left-eye dominant. Spectral-domain OCT images were consistent with mild and severe acute solar retinopathy in the right and left eye, respectively. Microperimetry was normal in the right eye but showed paracentral decreased retinal sensitivity in the left eye with a central absolute scotoma. Adaptive optics images of the right eye showed a small region of nonwaveguiding photoreceptors, while images of the left eye showed a large area of abnormal and nonwaveguiding photoreceptors. Optical coherence tomography angiography images were normal in both eyes. En face OCT images of the right eye showed a small circular hyperreflective area, with central hyporeflectivity in the outer retina of the right eye. The left eye showed a hyperreflective lesion that intensified in area from inner to middle retina and became mostly hyporeflective in the outer retina. The shape of the lesion on adaptive optics and en face OCT images of the left eye corresponded to the shape of the scotoma drawn by the patient on Amsler grid. Acute solar retinopathy can present with foveal cone photoreceptor mosaic disturbances on adaptive optics scanning light ophthalmoscopy imaging. Corresponding reflectivity changes can be seen on en face OCT, especially
6. SILDENAFIL CITRATE INDUCED RETINAL TOXICITY-ELECTRORETINOGRAM, OPTICAL COHERENCE TOMOGRAPHY, AND ADAPTIVE OPTICS FINDINGS.
Science.gov (United States)
Yanoga, Fatoumata; Gentile, Ronald C; Chui, Toco Y P; Freund, K Bailey; Fell, Millie; Dolz-Marco, Rosa; Rosen, Richard B
2018-02-27
To report a case of persistent retinal toxicity associated with a high dose of sildenafil citrate intake. Single retrospective case report. A 31-year-old white man with no medical history presented with complaints of bilateral multicolored photopsias and erythropsia (red-tinted vision) shortly after taking sildenafil citrate purchased through the internet. The patient was found to have cone photoreceptor damage, demonstrated using electroretinogram, optical coherence tomography, and adaptive optics imaging. The patient's symptoms and the photoreceptor structural changes persisted for several months. Sildenafil citrate is a widely used erectile dysfunction medication that is typically associated with transient visual symptoms at normal dosage. At high dosage, sildenafil citrate can lead to persistent retinal toxicity in certain individuals.
7. Optical design considerations when imaging the fundus with an adaptive optics correction
Science.gov (United States)
Wang, Weiwei; Campbell, Melanie C. W.; Kisilak, Marsha L.; Boyd, Shelley R.
2008-06-01
Adaptive optics (AO) technology has been used in confocal scanning laser ophthalmoscopes (CSLO), which are analogous to confocal scanning laser microscopes (CSLM), with the advantages of real-time imaging, increased image contrast, resistance to image degradation by scattered light, and improved optical sectioning. With AO, the instrument-eye system can have low enough aberrations for the optical quality to be limited primarily by diffraction. Diffraction-limited, high-resolution imaging would be beneficial in the understanding and early detection of eye diseases such as diabetic retinopathy. However, to maintain diffraction-limited imaging, sufficient pixel sampling over the field of view is required, resulting in the need for increased data acquisition rates for larger fields. Imaging over smaller fields may be a disadvantage with clinical subjects because of fixation instability and the need to examine larger areas of the retina. Reduction in field size also reduces the amount of light sampled per pixel, increasing photon noise. For these reasons, we considered an instrument design with a larger field of view. When choosing scanners to be used in an AOCSLO, the ideal frame rate should be above the flicker fusion rate of the human observer and should also allow user control of targets projected onto the retina. In our AOCSLO design, we have studied the tradeoffs between field size, frame rate and factors affecting resolution. We outline optical approaches to overcome some of these tradeoffs while still allowing detection of the earliest fundus changes in diabetic retinopathy.
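The field-size/data-rate tradeoff this abstract discusses follows from Nyquist sampling of the diffraction-limited spot: the required pixel rate grows with the square of the field size at a fixed frame rate. A sketch with illustrative numbers (the 8 mm dilated pupil, 840 nm imaging wavelength, and 30 Hz frame rate are assumptions, not the authors' design values):

```python
import math

def pixel_rate(field_deg, wavelength_m, pupil_m, frame_hz):
    """Pixels/s needed to Nyquist-sample diffraction-limited imaging
    over a square field of view at the given frame rate."""
    res_rad = 1.22 * wavelength_m / pupil_m        # angular resolution element
    field_rad = math.radians(field_deg)
    n_pix = 2.0 * field_rad / res_rad              # Nyquist: 2 px per element
    return n_pix**2 * frame_hz

# hypothetical eye-imaging numbers: 8 mm pupil, 840 nm light, 30 Hz frames
small = pixel_rate(1.0, 840e-9, 8e-3, 30.0)
large = pixel_rate(3.0, 840e-9, 8e-3, 30.0)
print(f"{small:.2e} px/s vs {large:.2e} px/s")  # 3x the field -> 9x the rate
```

This quadratic growth is why enlarging the field while keeping diffraction-limited sampling forces either faster scanners or a lower frame rate.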
8. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update
Directory of Open Access Journals (Sweden)
Bing Zhang
2017-11-01
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has become a promising technique in fundus imaging, with growing popularity. This review first gives a brief history of adaptive optics (AO) and AO-SLO. It then compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and with other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, it updates the current state of AO-SLO research by fundus structure: photoreceptors (cones and rods), fundus vessels, the retinal pigment epithelium layer, the retinal nerve fiber layer, the ganglion cell layer and the lamina cribrosa. Finally, the review indicates possible future research directions for AO-SLO.
9. Modeling and Control of Magnetic Fluid Deformable Mirrors for Adaptive Optics Systems
CERN Document Server
Wu, Zhizheng; Ben Amara, Foued
2013-01-01
Modeling and Control of Magnetic Fluid Deformable Mirrors for Adaptive Optics Systems presents a novel design of wavefront correctors based on magnetic fluid deformable mirrors (MFDM) as well as corresponding control algorithms. The presented wavefront correctors are characterized by their linear, dynamic response. Various mirror surface shape control algorithms are presented along with experimental evaluations of the performance of the resulting adaptive optics systems. Adaptive optics (AO) systems are used in various fields of application to enhance the performance of optical systems, such as imaging, laser, free space optical communication systems, etc. This book is intended for undergraduate and graduate students, professors, engineers, scientists and researchers working on the design of adaptive optics systems and their various emerging fields of application. Zhizheng Wu is an associate professor at Shanghai University, China. Azhar Iqbal is a research associate at the University of Toronto, Canada. Foue...
10. Noninvasive optical imaging of resistance training adaptations in human muscle
Science.gov (United States)
Warren, Robert V.; Cotter, Joshua; Ganesan, Goutham; Le, Lisa; Agustin, Janelle P.; Duarte, Bridgette; Cutler, Kyle; O'Sullivan, Thomas; Tromberg, Bruce J.
2017-12-01
A quantitative and dynamic analysis of skeletal muscle structure and function can guide training protocols and optimize interventions for rehabilitation and disease. While technologies exist to measure body composition, techniques are still needed for quantitative, long-term functional imaging of muscle at the bedside. We evaluate whether diffuse optical spectroscopic imaging (DOSI) can be used for long-term assessment of resistance training (RT). DOSI measures of tissue composition were obtained from 12 adults before and after 5 weeks of training and compared to lean mass fraction (LMF) from dual-energy X-ray absorptiometry (DXA). Significant correlations were detected between DXA LMF and DOSI-measured oxy-hemo/myoglobin, deoxy-hemo/myoglobin, total-hemo/myoglobin, water, and lipid. RT-induced increases of ~6% in oxy-hemo/myoglobin (3.4±1.0 μM, p=0.00314) and total-hemo/myoglobin (4.9±1.1 μM, p=0.00024) from the medial gastrocnemius were detected with DOSI and accompanied by ~2% increases in lean soft tissue mass (36.4±12.4 g, p=0.01641) and ~60% increases in 1-rep-max strength (41.5±6.2 kg, p=1.9E-05). DOSI measures of vascular and/or muscle changes combined with correlations between DOSI and DXA suggest that quantitative diffuse optical methods can be used to evaluate body composition, provide feedback on long-term interventions, and generate new insight into training-induced muscle adaptations.
11. Digital adaptive optics for achieving space-invariant lateral resolution in optical coherence tomography
International Nuclear Information System (INIS)
Kumar, A.
2015-01-01
Optical coherence tomography (OCT) is a non-invasive optical interferometric imaging technique that provides reflectivity profiles of sample structures with high axial resolution. The high axial resolution is due to the use of a low-coherence (broad-band) light source. However, the lateral resolution in OCT depends on the numerical aperture (NA) of the focusing/imaging optics and is affected by defocus and other higher-order optical aberrations induced by imperfect optics or by the sample itself. Hardware-based adaptive optics (AO) has been successfully combined with OCT to achieve high lateral resolution in combination with the high axial resolution provided by OCT. AO, which conventionally uses a Shack-Hartmann wavefront sensor (SH WFS) for wavefront sensing and a deformable mirror for correction, can compensate for optical aberration and enable diffraction-limited resolution in OCT. Visualization of cone photoreceptors in 3-D has been successfully demonstrated using AO-OCT. However, OCT, being an interferometric imaging technique, provides access to phase information. This phase information can be exploited by digital adaptive optics (DAO) techniques to correct optical aberration in a post-processing step and obtain diffraction-limited, space-invariant lateral resolution throughout the image volume. Thus, the need for hardware-based AO can be eliminated, which in turn reduces system complexity and cost. In the first paper of this thesis, a novel DAO method based on sub-aperture correlation is presented, which is the digital equivalent of the SH WFS. The advantage of this method is that it is non-iterative and does not require a priori knowledge of any system parameters such as wavelength, focal length, NA or detector pixel size. For experimental proof, a FF SS OCT system was used, and the sample consisted of a resolution test target and a plastic plate that introduced random optical aberration. Experimental results show that
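The sub-aperture correlation method is, at its core, a digital Shack-Hartmann: the reconstructed pupil field is split into sub-apertures, and the shift of each sub-aperture image relative to a reference gives the local wavefront slope. A toy sketch of the shift-estimation step only (FFT cross-correlation between two sub-aperture images; this is not the thesis implementation):

```python
import numpy as np

def subap_shift(ref, img):
    """Estimate the (row, col) shift of img relative to ref by locating the
    peak of their FFT-based cross-correlation (integer-pixel accuracy)."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img))
    peak = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
    # wrap shifts into the range [-N/2, N/2)
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, xc.shape))

# toy sub-aperture image: a Gaussian spot, then the same spot moved by (3, -2)
n = 32
y, x = np.mgrid[:n, :n]
ref = np.exp(-((x - 16)**2 + (y - 16)**2) / 8.0)
img = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(subap_shift(ref, img))  # → (3, -2); local tilt is proportional to this
```

In the DAO context each measured shift becomes a local slope sample, and the slopes over all sub-apertures are integrated into a wavefront estimate, exactly as with a hardware SH WFS.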
12. Large-field-of-view imaging by multi-pupil adaptive optics.
Science.gov (United States)
Park, Jung-Hoon; Kong, Lingjie; Zhou, Yifeng; Cui, Meng
2017-06-01
Adaptive optics can correct for optical aberrations. We developed multi-pupil adaptive optics (MPAO), which enables simultaneous wavefront correction over a field of view of 450 × 450 μm², expanding the correction area to nine times that of conventional methods. MPAO's ability to perform spatially independent wavefront control further enables 3D nonplanar imaging. We applied MPAO to in vivo structural and functional imaging in the mouse brain.
13. Novel adaptive fiber-optics collimator for coherent beam combination.
Science.gov (United States)
Zhi, Dong; Ma, Pengfei; Ma, Yanxing; Wang, Xiaolin; Zhou, Pu; Si, Lei
2014-12-15
In this manuscript, we experimentally validate a novel design of adaptive fiber-optics collimator (AFOC), which utilizes two levers to enlarge the movable range of the fiber end cap. The enlarged range makes it possible for the new AFOC to compensate the tilt aberration of the end cap in a fiber laser beam combining system. The new AFOC, based on flexible hinges and levers, was fabricated, and its performance was tested carefully, including its control range, frequency response and control accuracy. Coherent beam combination (CBC) of an array of two 5 W fiber amplifiers, with simultaneous end-cap tilt control and phase-locking control, was implemented successfully with the novel AFOC. Experimental results show that the average normalized power in the bucket (PIB) value increases from 0.311 to 0.934 with active phasing and tilt aberration compensation applied simultaneously, and that the fringe contrast improves from 0% with both controls off to more than 82% with both controls on. This work presents a promising structure for tilt aberration control in high-power CBC systems.
14. Nanomechanical characterization of adaptive optics components in microprojectors
International Nuclear Information System (INIS)
Palacio, Manuel; Bhushan, Bharat
2010-01-01
Compact microprojectors are being developed for information display in mobile electronic devices. A key component of the microprojector is the green laser package, which consists of an adaptive optics component with a drive mechanism. A crucial concern is the mechanical wear of key drive mechanism components, such as the carbon fiber reinforced polymer (CFRP) driving rod, the Zn alloy body and the stainless steel friction plate, after prolonged operation. Since friction and wear depend on mechanical properties, nanoindentation experiments were conducted on these drive mechanism components using a depth-sensing nanoindenter at room and elevated temperatures up to 100 °C. The hardness and elastic modulus of all the materials studied decrease with increasing test temperature. From plasticity index analysis, a correlation between the tendency for plastic deformation and the mechanical properties was obtained. Nanoscratch studies were also conducted to simulate wear and to examine the scratch resistance and deformation modes of these materials; the CFRP rod exhibited the highest scratch resistance. The CFRP rod undergoes mostly brittle deformation, while the Zn alloy body and friction plate undergo plastic deformation.
15. Control code for laboratory adaptive optics teaching system
Science.gov (United States)
Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael
2017-09-01
By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available on the market. One such system is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects, since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite that interfaces with the ThorLabs AO kit using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as for supporting student engagement in an academic environment. We plan to make the code freely available to the community.
16. Sensorless adaptive optics for isoSTED nanoscopy
Science.gov (United States)
Antonello, Jacopo; Hao, Xiang; Allgeyer, Edward S.; Bewersdorf, Joerg; Rittscher, Jens; Booth, Martin J.
2018-02-01
The presence of aberrations is a major concern when using fluorescence microscopy to image deep inside tissue. Aberrations due to refractive index mismatch and heterogeneity of the specimen under investigation cause severe reduction in the amount of fluorescence emission that is collected by the microscope. Furthermore, aberrations adversely affect the resolution, leading to loss of fine detail in the acquired images. These phenomena are particularly troublesome for super-resolution microscopy techniques such as isotropic stimulated-emission-depletion microscopy (isoSTED), which relies on accurate control of the shape and co-alignment of multiple excitation and depletion foci to operate as expected and to achieve the super-resolution effect. Aberrations can be suppressed by implementing sensorless adaptive optics techniques, whereby aberration correction is achieved by maximising a certain image quality metric. In confocal microscopy for example, one can employ the total image brightness as an image quality metric. Aberration correction is subsequently achieved by iteratively changing the settings of a wavefront corrector device until the metric is maximised. This simplistic approach has limited applicability to isoSTED microscopy where, due to the complex interplay between the excitation and depletion foci, maximising the total image brightness can lead to introducing aberrations in the depletion foci. In this work we first consider the effects that different aberration modes have on isoSTED microscopes. We then propose an iterative, wavelet-based aberration correction algorithm and evaluate its benefits.
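The metric-maximisation loop described in this abstract is typically run mode by mode: apply a few known bias amplitudes of one aberration mode, measure the image-quality metric, fit a parabola, and move to its vertex. A minimal sketch of that single step (the quadratic-fit search is a standard sensorless-AO technique, not this paper's specific wavelet-based algorithm; the toy metric below is an assumption):

```python
import numpy as np

def correct_mode(apply_mode, metric, biases=(-1.0, 0.0, 1.0)):
    """One sensorless-AO step for a single aberration mode: apply each bias
    amplitude, measure the metric, fit a parabola, return the vertex."""
    m = [metric(apply_mode(b)) for b in biases]
    a, b, c = np.polyfit(biases, m, 2)          # m(x) ~ a*x^2 + b*x + c
    # fall back to the best sampled bias if the fit is not concave
    return -b / (2 * a) if a < 0 else biases[int(np.argmax(m))]

# toy system: the metric peaks when the applied amplitude matches x0
x0 = 0.4
metric = lambda x: np.exp(-(x - x0) ** 2)
best = correct_mode(apply_mode=lambda b: b, metric=metric)
print(round(float(best), 2))  # close to x0
```

Iterating this over a set of modes (and repeating) is the simplistic scheme the authors contrast with their isoSTED-specific algorithm, where a brightness metric alone can be misleading.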
17. THE INNER KILOPARSEC OF Mrk 273 WITH KECK ADAPTIVE OPTICS
Energy Technology Data Exchange (ETDEWEB)
U, Vivian; Sanders, David; Kewley, Lisa [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Dr., Honolulu, HI 96822 (United States); Medling, Anne; Max, Claire [Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Armus, Lee [Spitzer Science Center, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Iwasawa, Kazushi [ICREA and Institut del Ciències del Cosmos, Universitat de Barcelona (IEEC-UB), Martí i Franquès, 1, E-08028 Barcelona (Spain); Evans, Aaron [Department of Astronomy, University of Virginia, 530 McCormick Road, Charlottesville, VA 22904 (United States); Fazio, Giovanni, E-mail: vivianu@ucr.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States)
2013-10-01
There is X-ray, optical, and mid-infrared imaging and spectroscopic evidence that the late-stage ultraluminous infrared galaxy merger Mrk 273 hosts a powerful active galactic nucleus (AGN). However, the exact location of the AGN and the nature of the nucleus have been difficult to determine due to dust obscuration and the limited wavelength coverage of available high-resolution data. Here we present near-infrared integral-field spectra and images of the nuclear region of Mrk 273 taken with OSIRIS and NIRC2 on the Keck II Telescope with laser guide star adaptive optics. We observe three spatially resolved components, and analyze the nuclear molecular and ionized gas emission lines and their kinematics. We confirm the presence of the hard X-ray AGN in the southwest nucleus. In the north nucleus, we find a strongly rotating gas disk whose kinematics indicate a central black hole of mass 1.04 ± 0.1 × 10⁹ M⊙. The H₂ emission line shows an increase in velocity dispersion along the minor axis in both directions, and an increased flux with negative velocities in the southeast direction; this provides direct evidence for a collimated molecular outflow along the axis of rotation of the disk. The third spatially distinct component appears to the southeast, 640 and 750 pc from the north and southwest nuclei, respectively. This component is faint in continuum emission but shows several strong emission line features, including [Si VI] 1.964 μm, which traces an extended coronal-line region. The geometry of the [Si VI] emission combined with shock models and energy arguments suggests that [Si VI] in the southeast component must be at least partly ionized by the SW AGN or a putative AGN in the northern disk, either through photoionization or through shock-heating from strong AGN- and circumnuclear-starburst-driven outflows. This lends support to a scenario in which Mrk 273 may be a dual AGN system.
18. Electrostatic polymer-based microdeformable mirror for adaptive optics
Science.gov (United States)
Zamkotsian, Frederic; Conedera, Veronique; Granier, Hugues; Liotard, Arnaud; Lanzoni, Patrick; Salvagnac, Ludovic; Fabre, Norbert; Camon, Henri
2007-02-01
Future adaptive optics (AO) systems require deformable mirrors with very challenging parameters, up to 250,000 actuators and inter-actuator spacing around 500 μm. MOEMS-based devices are promising for the development of a complete generation of new deformable mirrors. Our micro-deformable mirror (MDM) is based on an array of electrostatic actuators with attachments to a continuous mirror on top. The originality of our approach lies in the elaboration of layers made of polymer materials. Mirror layers and active actuators have been demonstrated. Based on the design of this actuator and our polymer process, a complete polymer MDM has been realized using two process flows: the first involves exclusively polymer materials, while the second uses SU8 polymer for structural layers and SiO II and sol-gel for sacrificial layers. The latter shows a better capability to produce completely released structures. The electrostatic force provides non-linear actuation, while AO systems are based on linear matrix operations. We have therefore developed dedicated 14-bit electronics to "linearize" the actuation, using a calibration and a sixth-order polynomial fitting strategy. The response is nearly perfect over our 3×3 MDM prototype, with a standard deviation of 3.5 nm; the influence function of the central actuator has been measured. A first evaluation of the cross non-linearities has also been made on an OKO mirror, and a simple look-up table is sufficient for determining the location of each actuator, whatever the locations of the neighboring actuators. Electrostatic MDMs are particularly well suited for open-loop AO applications.
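The calibration-based linearisation can be sketched as follows: fit the measured deflection-versus-voltage curve with a sixth-order polynomial (as the authors do), then invert it with a look-up table so the controller commands displacement directly. The actuator response below is synthetic (deflection roughly quadratic in voltage), standing in for real calibration data:

```python
import numpy as np

# Synthetic calibration: electrostatic deflection grows ~quadratically with
# drive voltage, plus a small higher-order term (toy model, in micrometres).
volts = np.linspace(0.0, 100.0, 51)
deflection_um = 2e-4 * volts**2 + 1e-8 * volts**4

# Sixth-order polynomial fit, on normalised voltage for good conditioning.
coeffs = np.polyfit(volts / 100.0, deflection_um, 6)

def volts_for(target_um):
    """Look-up-table inversion: return the drive voltage whose fitted
    deflection is closest to the requested displacement."""
    table_v = np.linspace(0.0, 100.0, 4001)
    table_d = np.polyval(coeffs, table_v / 100.0)
    return float(table_v[np.argmin(np.abs(table_d - target_um))])

print(volts_for(1.0))  # drive voltage commanded for a 1.0 um stroke
```

After this inversion the controller sees an (approximately) linear displacement command, which is what the linear matrix operations of an AO reconstructor assume.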
19. Adaptive Optics Observations of Exoplanets, Brown Dwarfs, and Binary Stars
Science.gov (United States)
Hinkley, Sasha
2012-04-01
The current direct observations of brown dwarfs and exoplanets have been obtained using instruments not specifically designed for overcoming the large contrast ratio between the host star and any wide-separation faint companions. However, we are about to witness the birth of several new dedicated observing platforms specifically geared towards high-contrast imaging of these objects. The Gemini Planet Imager, VLT-SPHERE, Subaru HiCIAO, and Project 1640 at the Palomar 5m telescope will return images of numerous exoplanets and brown dwarfs over hundreds of observing nights in the next five years. Along with diffraction-limited coronagraphs and high-order adaptive optics, these instruments will also return spectral and polarimetric information on any discovered targets, giving clues to their atmospheric compositions and characteristics. Such spectral characterization will be key to forming a detailed theory of comparative exoplanetary science, widely applicable to both exoplanets and brown dwarfs. Further, the prevalence of aperture masking interferometry in the field of high-contrast imaging is allowing observers to sense massive, young planets at solar system scales (~3-30 AU), separations out of reach of conventional direct imaging techniques. Such observations can provide snapshots of the earliest phases of planet formation, information essential for constraining formation mechanisms as well as evolutionary models of planetary-mass companions. As a demonstration of the power of this technique, I briefly review recent aperture masking observations of the HR 8799 system. Moreover, all of the aforementioned techniques are already extremely adept at detecting low-mass stellar companions to their target stars, and I present some recent highlights.
20. Adaptive optics plug-and-play setup for high-resolution microscopes with multi-actuator adaptive lens
Science.gov (United States)
Quintavalla, M.; Pozzi, P.; Verhaegen, Michelle; Bijlsma, Hielke; Verstraete, Hans; Bonora, S.
2018-02-01
Adaptive optics (AO) has emerged as a very promising technique for high-resolution microscopy, where the presence of optical aberrations can easily compromise image quality. Typical AO systems, however, are almost impossible to implement on commercial microscopes. We propose a simple approach using a Multi-actuator Adaptive Lens (MAL) that can be inserted right after the objective and works in conjunction with image-optimization software, allowing for wavefront-sensorless correction. We present the results obtained on several commercial microscopes, including a confocal microscope, a fluorescence microscope, a light-sheet microscope and a multiphoton microscope.
Science.gov (United States)
Wang, Yukun; Xu, Huanyu; Li, Dayu; Wang, Rui; Jin, Chengbin; Yin, Xianghui; Gao, Shijie; Mu, Quanquan; Xuan, Li; Cao, Zhaoliang
2018-01-18
The performance of free-space optics communication (FSOC) is greatly degraded by atmospheric turbulence. Adaptive optics (AO) is an effective method for attenuating the influence. In this paper, the influence of the spatial and temporal characteristics of turbulence on the performance of AO in a FSOC system is investigated. Based on the Greenwood frequency (GF) and the ratio of receiver aperture diameter to atmospheric coherence length (D/r0), the relationship between FSOC performance (CE) and AO parameters (number of corrected Zernike modes and bandwidth) is derived for the first time. Then, simulations and experiments are conducted to analyze the influence of AO parameters on FSOC performance under different GF and D/r0. The simulation and experimental results show that, for common turbulence conditions, the number of corrected Zernike modes can be fixed at 35 and the bandwidth of the AO system should be larger than the GF. Measurements of the bit error rate (BER) for moderate turbulence conditions (D/r0 = 10, f_G = 60 Hz) show that when the bandwidth is twice the GF, the average BER is decreased by two orders of magnitude compared with f_G/f_3dB = 1. These results and conclusions can provide important guidance in the design of an AO system for FSOC.
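The scaling behind these conclusions is standard AO theory: Noll's residual fitting error after correcting N Zernike modes, plus a servo-lag term set by the ratio of Greenwood frequency to closed-loop bandwidth. A sketch using these textbook formulas (this is the general AO scaling, not the paper's exact derivation of CE or BER):

```python
import math

def residual_variance(n_modes, D_over_r0, f_G, f_3dB):
    """Residual phase variance (rad^2): Noll (1976) fitting-error scaling
    after correcting n_modes Zernike modes, plus the Greenwood servo-lag
    term for a loop with -3 dB bandwidth f_3dB."""
    fitting = 0.2944 * n_modes ** (-math.sqrt(3) / 2) * D_over_r0 ** (5 / 3)
    servo = (f_G / f_3dB) ** (5 / 3)
    return fitting + servo

# moderate turbulence from the abstract: D/r0 = 10, f_G = 60 Hz, 35 modes
var_slow = residual_variance(35, 10.0, 60.0, 60.0)    # bandwidth equal to f_G
var_fast = residual_variance(35, 10.0, 60.0, 120.0)   # bandwidth twice f_G
print(round(var_slow, 2), round(var_fast, 2))
```

The drop in residual variance when the bandwidth is raised from f_G to 2 f_G is what drives the large BER improvement the authors measure.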
2. Adaptive optics fundus images of cone photoreceptors in the macula of patients with retinitis pigmentosa
Directory of Open Access Journals (Sweden)
Tojo N
2013-01-01
Naoki Tojo, Tomoko Nakamura, Chiharu Fuchizawa, Toshihiko Oiwake, Atsushi Hayashi; Department of Ophthalmology, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan. Background: The purpose of this study was to examine cone photoreceptors in the macula of patients with retinitis pigmentosa using an adaptive optics fundus camera and to investigate any correlations between cone photoreceptor density and findings on optical coherence tomography and fundus autofluorescence. Methods: We examined two patients with typical retinitis pigmentosa who underwent ophthalmological examination, including measurement of visual acuity, and gathering of electroretinographic, optical coherence tomographic, fundus autofluorescent, and adaptive optics fundus images. The cone photoreceptors in the adaptive optics images of the two patients with retinitis pigmentosa and five healthy subjects were analyzed. Results: An abnormal parafoveal ring of high-density fundus autofluorescence was observed in the macula in both patients. The border of the ring corresponded to the border of the external limiting membrane and the inner segment and outer segment line in the optical coherence tomographic images. Cone photoreceptors at the abnormal parafoveal ring were blurred and decreased in the adaptive optics images. The blurred area corresponded to the abnormal parafoveal ring in the fundus autofluorescence images. Cone densities were low at the blurred areas and at the nasal and temporal retina along a line from the fovea compared with those of healthy controls. The results for cone spacing and Voronoi domains in the macula corresponded with those for the cone densities. Conclusion: Cone densities were heavily decreased in the macula, especially at the parafoveal ring on high-density fundus autofluorescence, in both patients with retinitis pigmentosa. Adaptive optics images enabled us to observe in vivo changes in the cone photoreceptors of patients with retinitis pigmentosa.
3. Phase Diversity Wavefront Sensing for Control of Space Based Adaptive Optics Systems
National Research Council Canada - National Science Library
Schgallis, Richard J
2007-01-01
Phase Diversity Wavefront Sensing (PD WFS) is a wavefront reconstruction technique used in adaptive optics, which takes advantage of the curvature conjugating analog physical properties of a deformable mirror (MMDM or Bi-morph...
4. Adaptive Optics Simulation for the World's Largest Telescope on Multicore Architectures with Multiple GPUs
KAUST Repository
Ltaief, Hatem; Gratadour, Damien; Charara, Ali; Gendron, Eric
2016-01-01
We present a high performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used
5. IMAGING WITH MULTIMODAL ADAPTIVE-OPTICS OPTICAL COHERENCE TOMOGRAPHY IN MULTIPLE EVANESCENT WHITE DOT SYNDROME: THE STRUCTURE AND FUNCTIONAL RELATIONSHIP.
Science.gov (United States)
Labriola, Leanne T; Legarreta, Andrew D; Legarreta, John E; Nadler, Zach; Gallagher, Denise; Hammer, Daniel X; Ferguson, R Daniel; Iftimia, Nicusor; Wollstein, Gadi; Schuman, Joel S
2016-01-01
To elucidate the location of pathological changes in multiple evanescent white dot syndrome (MEWDS) with the use of multimodal adaptive optics (AO) imaging. A 5-year observational case study of a 24-year-old female with recurrent MEWDS. Full examination included history, Snellen chart visual acuity, pupil assessment, intraocular pressures, slit lamp evaluation, dilated fundoscopic exam, imaging with Fourier-domain optical coherence tomography (FD-OCT), blue-light fundus autofluorescence (FAF), fundus photography, fluorescein angiography, and adaptive-optics optical coherence tomography. Three distinct acute episodes of MEWDS occurred during the period of follow-up. Fourier-domain optical coherence tomography and adaptive-optics imaging showed disturbance in the photoreceptor outer segments (PR OS) in the posterior pole with each flare. The degree of disturbance at the photoreceptor level corresponded to size and extent of the visual field changes. All findings were transient with delineation of the photoreceptor recovery from the outer edges of the lesion inward. Hyperautofluorescence was seen during acute flares. Increase in choroidal thickness did occur with each active flare but resolved. Although changes in the choroid and RPE can be observed in MEWDS, Fourier-domain optical coherence tomography, and multimodal adaptive optics imaging localized the visually significant changes seen in this disease at the level of the photoreceptors. These transient retinal changes specifically occur at the level of the inner segment ellipsoid and OS/RPE line. En face optical coherence tomography imaging provides a detailed, yet noninvasive method for following the convalescence of MEWDS and provides insight into the structural and functional relationship of this transient inflammatory retinal disease.
DEFF Research Database (Denmark)
Borkowski, Robert; Zhang, Xu; Zibar, Darko
2011-01-01
We report on a successful experimental demonstration of digital optical performance monitoring (OPM) yielding satisfactory estimation accuracy along with adaptive impairment equalization. No observable penalty is measured when the equalizer is driven by the monitoring module.
7. Probing Hypergiant Mass Loss with Adaptive Optics Imaging and Polarimetry in the Infrared: MMT-Pol and LMIRCam Observations of IRC +10420 and VY Canis Majoris
Science.gov (United States)
Shenoy, Dinesh P.; Jones, Terry J.; Packham, Chris; Lopez-Rodriguez, Enrique
2015-07-01
We present 2-5 μm adaptive optics (AO) imaging and polarimetry of the famous hypergiant stars IRC +10420 and VY Canis Majoris. The imaging polarimetry of IRC +10420 with MMT-Pol at 2.2 μm resolves nebular emission with intrinsic polarization of 30%, with a high surface brightness indicating optically thick scattering. The relatively uniform distribution of this polarized emission both radially and azimuthally around the star confirms previous studies that place the scattering dust largely in the plane of the sky. Using constraints on scattered light consistent with the polarimetry at 2.2 μm, extrapolation to wavelengths in the 3-5 μm band predicts a scattered light component significantly below the nebular flux that is observed in our Large Binocular Telescope/LMIRCam 3-5 μm AO imaging. Under the assumption this excess emission is thermal, we find a color temperature of ~500 K is required, well in excess of the emissivity-modified equilibrium temperature for typical astrophysical dust. The nebular features of VY CMa are found to be highly polarized (up to 60%) at 1.3 μm, again with optically thick scattering required to reproduce the observed surface brightness. This star’s peculiar nebular feature dubbed the “Southwest Clump” is clearly detected in the 3.1 μm polarimetry as well, which, unlike IRC +10420, is consistent with scattered light alone. The high intrinsic polarizations of both hypergiants’ nebulae are compatible with optically thick scattering for typical dust around evolved dusty stars, where the depolarizing effect of multiple scatters is mitigated by the grains’ low albedos. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.
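A color temperature like the ~500 K quoted above is the kind of number one gets by inverting a blackbody flux ratio between two bands. A sketch with a hypothetical measured 3 μm / 5 μm flux ratio (the value 0.28 is illustrative, not from the paper):

```python
import math

C2 = 1.43877e-2  # second radiation constant hc/k_B, in m*K

def planck_ratio(t, lam1, lam2):
    """Blackbody spectral radiance ratio B(lam1, T) / B(lam2, T);
    increases monotonically with T for lam1 < lam2."""
    return (lam2 / lam1) ** 5 * math.expm1(C2 / (lam2 * t)) / math.expm1(C2 / (lam1 * t))

def color_temperature(ratio, lam1, lam2, lo=100.0, hi=3000.0):
    """Invert the observed flux ratio for T by bisection."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if planck_ratio(mid, lam1, lam2) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = color_temperature(0.28, 3e-6, 5e-6)  # hypothetical F(3 um)/F(5 um) ratio
print(f"color temperature: {t:.0f} K")
```

A ratio near 0.28 between 3 and 5 μm corresponds to roughly 500 K; real fits would integrate over the filter bandpasses rather than use monochromatic radiances.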
8. P2 Asymmetry of Au's M-band Flux and its smoothing effect due to high-Z ablator dopants
Science.gov (United States)
Li, Yongsheng; Zhai, Chuanlei; Ren, Guoli; Gu, Jianfa; Huo, Wenyi; Meng, Xujun; Ye, Wenhua; Lan, Ke; Zhang, Weiyan
2017-10-01
X-ray drive asymmetry is one of the main seeds of low-mode implosion asymmetry that blocks further improvement of the nuclear performance of "high-foot" experiments on the National Ignition Facility. In particular, the P2 asymmetry of Au's M-band flux can also severely influence the implosion performance. Here we study the smoothing effect of mid- and/or high-Z dopants in the ablator on M-band flux asymmetries, by modeling and comparing the implosion processes of a Ge-doped and a Si-doped ignition capsule driven by x-ray sources with asymmetric M-band flux. As a result: (1) mid- or high-Z dopants absorb M-band flux and re-emit isotropically, helping to smooth the M-band flux arriving at the ablation front and therefore reducing the P2 asymmetries of the imploding shell and hot spot; (2) the smoothing effect of the Ge dopant is more pronounced than that of the Si dopant because of its higher opacity in Au's M-band; and (3) placing the doped layer at a larger radius in the ablator is more efficient. Applying this effect may not be a main measure to reduce the low-mode implosion asymmetry, but might be of significance in some critical situations such as inertial confinement fusion (ICF) experiments very near the performance cliffs of asymmetric x-ray drives.
9. Adaptive optics parallel spectral domain optical coherence tomography for imaging the living retina
Science.gov (United States)
Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.
2005-06-01
Although optical coherence tomography (OCT) can axially resolve and detect reflections from individual cells, there are no reports of imaging cells in the living human retina using OCT. To supplement the axial resolution and sensitivity of OCT with the necessary lateral resolution and speed, we developed a novel spectral domain OCT (SD-OCT) camera based on a free-space parallel illumination architecture and equipped with adaptive optics (AO). Conventional flood illumination, also with AO, was integrated into the camera and provided confirmation of the focus position in the retina with an accuracy of ±10.3 μm. Short bursts of narrow B-scans (100×560 μm) of the living retina were subsequently acquired at 500 Hz during dynamic compensation (up to 14 Hz) that successfully corrected the most significant ocular aberrations across a dilated 6 mm pupil. Camera sensitivity (up to 94 dB) was sufficient for observing reflections from essentially all neural layers of the retina. Signal-to-noise of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus, with changes of 11.4 and 13.1 dB (single pass) observed when the ocular aberrations (astigmatism, 3rd order and higher) were corrected and when the focus was shifted by 200 μm (0.54 diopters) in the retina, respectively. The 3D resolution of the B-scans (3.0×3.0×5.7 μm) is the highest reported to date in the living human eye and was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both lateral and axial dimensions. However, high-contrast speckle, which is intrinsic to OCT, was present throughout the AO parallel SD-OCT B-scans and obstructed correlating retinal reflections to cell-sized retinal structures.
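The quoted equivalence of a 200 μm retinal focus shift to 0.54 diopters follows from the longitudinal magnification of a reduced-eye model; the eye parameters below are standard textbook values, not taken from the paper:

```python
# Assumed reduced-eye parameters (textbook values, not from the paper):
EYE_POWER_D = 60.0    # total refractive power of the eye, in diopters
N_VITREOUS = 1.336    # refractive index of the vitreous

def axial_shift_to_diopters(dz_m):
    """Dioptric defocus corresponding to an axial focus shift dz (m)
    at the retina, via the longitudinal magnification of a reduced eye:
    dD ~ dz * F^2 / n."""
    return dz_m * EYE_POWER_D ** 2 / N_VITREOUS

print(f"200 um shift ~ {axial_shift_to_diopters(200e-6):.2f} D")
```

With these values a 200 μm shift comes out at about 0.54 D, matching the figure in the abstract.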
10. Adaptive optics scanning laser ophthalmoscopy in combination with en-face optical coherence tomography
International Nuclear Information System (INIS)
Felberer, F.
2014-01-01
The human retina plays a fundamental role in vision. Diseases of the eye affect normal retinal function and, if untreated, may lead to vision loss or ultimately to blindness. Thus, in vivo diagnostic tools that provide detailed information on the retinal status are required in order to improve diagnosis and treatment. In recent years, several new optical imaging methods of the human retina have been developed and now represent a key part of the standard ophthalmic examination process. One of these technologies is optical coherence tomography (OCT), which images the retina noninvasively and with high axial resolution. However, imperfections of the eye's optics cause aberrations of the wavefront of the imaging light, thus limiting the transverse resolution of such systems. Improvements in the resolution of retinal images are necessary to resolve individual cells (e.g. photoreceptors), which may provide new opportunities in retinal diagnostics and therapy control. Adaptive optics (AO), a technology known from astronomy, may be used to increase image resolution: aberrations of the imaging light are measured and corrected, resulting in an increase of lateral resolution up to the diffraction limit. Within this thesis, AO was combined with a scanning laser ophthalmoscope (SLO) that enables high-resolution imaging of the retina. Measurements on healthy subjects demonstrated the ability of the system to resolve foveal cones (the smallest cone photoreceptors within the retina) and even rod photoreceptors. However, the depth resolution of the system remained limited compared to OCT instruments. Thus, in a second step, the instrument was extended to a combined AO-SLO/OCT system. The OCT system is based on transversal scanning (TS-)OCT, which records en-face images of the retina and incorporates a high-speed axial eye tracking device. Together with transverse motion correction based on the AO-SLO images, the system
11. High-Resolution Adaptive Optics Test-Bed for Vision Science
International Nuclear Information System (INIS)
Wilks, S.C.; Thompson, C.A.; Olivier, S.S.; Bauman, B.J.; Barnes, T.; Werner, J.S.
2001-01-01
We discuss the design and implementation of a low-cost, high-resolution adaptive optics test-bed for vision research. It is well known that high-order aberrations in the human eye reduce optical resolution and limit visual acuity. However, the effects of aberration-free eyesight on vision are only now beginning to be studied using adaptive optics to sense and correct the aberrations in the eye. We are developing a high-resolution adaptive optics system for this purpose using a Hamamatsu Parallel Aligned Nematic Liquid Crystal Spatial Light Modulator. Phase-wrapping is used to extend the effective stroke of the device, and the wavefront sensing and wavefront correction are done at different wavelengths. Issues associated with these techniques will be discussed.
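Phase-wrapping, used above to extend the effective stroke of the liquid crystal SLM, simply displays the commanded phase modulo one wave; a minimal sketch:

```python
import numpy as np

def wrap_phase(phi):
    """Wrap a phase command (radians) into (-pi, pi] so a device with
    roughly one wave of stroke can display it; the optical effect is
    unchanged modulo 2*pi (at the design wavelength)."""
    return np.angle(np.exp(1j * phi))

ramp = np.linspace(0.0, 6 * np.pi, 601)   # three waves of tilt, beyond the stroke
wrapped = wrap_phase(ramp)
print(f"command peak: {ramp.max():.2f} rad -> wrapped peak: {wrapped.max():.2f} rad")
```

The wrapped command is optically equivalent only at the wavelength for which one stroke equals one wave, which is why chromatic effects matter when sensing and correcting at different wavelengths, as the abstract notes.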
12. Investigation on adaptive optics performance from propagation channel characterization with the small optical transponder
Science.gov (United States)
Petit, Cyril; Védrenne, Nicolas; Velluet, Marie Therese; Michau, Vincent; Artaud, Geraldine; Samain, Etienne; Toyoshima, Morio
2016-11-01
In order to address the high throughput requested for both downlink and uplink satellite-to-ground laser links, adaptive optics (AO) has become a key technology. While maturing, application of this technology to satellite-to-ground telecommunication faces difficulties, such as higher bandwidth requirements and optimal operation for a wide variety of atmospheric conditions (daytime and nighttime) with potentially low elevations that might severely affect wavefront sensing because of scintillation. To address these specificities, an accurate understanding of the origin of the perturbations is required, as well as operational validation of AO on real laser links. We report here on a low Earth orbiting (LEO) microsatellite-to-ground downlink with AO correction. We discuss propagation channel characterization based on Shack-Hartmann wavefront sensor (WFS) measurements. Fine modeling of the propagation channel is proposed, based on a multi-Gaussian model of the turbulence profile. This model is then used to estimate the AO performance and validate the experimental results. Although AO performance is limited by the experimental set-up, it complies with the expected performance, and further useful information on the propagation channel is extracted. These results should help in dimensioning and operating AO systems for LEO-to-ground downlinks.
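A multi-Gaussian Cn² profile like the one used for the channel model determines the Fried parameter r0 through the standard turbulence-moment integral. The layer parameters below are hypothetical, for illustration only:

```python
import math

def cn2_multi_gaussian(h, layers):
    """Cn^2(h) as a sum of Gaussian layers, each given as
    (amplitude m^-2/3, centre height m, width m)."""
    return sum(a * math.exp(-0.5 * ((h - h0) / w) ** 2) for a, h0, w in layers)

def fried_parameter(layers, wavelength, zenith_deg=0.0, h_max=20e3, n=2000):
    """r0 = [0.423 k^2 sec(zeta) * integral of Cn^2 dh]^(-3/5),
    with a trapezoidal integral over altitude."""
    k = 2 * math.pi / wavelength
    dh = h_max / n
    vals = [cn2_multi_gaussian(i * dh, layers) for i in range(n + 1)]
    integral = dh * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
    sec_zeta = 1.0 / math.cos(math.radians(zenith_deg))
    return (0.423 * k ** 2 * sec_zeta * integral) ** (-3 / 5)

# Hypothetical profile: strong ground layer plus a 10 km free-atmosphere layer
layers = [(2e-15, 0.0, 100.0), (2e-16, 10e3, 1000.0)]
r0 = fried_parameter(layers, 1.55e-6)
print(f"r0 at 1.55 um: {r0:.3f} m")
```

The sec(zeta) factor captures the low-elevation difficulty the abstract mentions: larger air mass shrinks r0 and worsens wavefront sensing.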
13. Adaptive optics fundus images of cone photoreceptors in the macula of patients with retinitis pigmentosa.
Science.gov (United States)
Tojo, Naoki; Nakamura, Tomoko; Fuchizawa, Chiharu; Oiwake, Toshihiko; Hayashi, Atsushi
2013-01-01
The purpose of this study was to examine cone photoreceptors in the macula of patients with retinitis pigmentosa using an adaptive optics fundus camera and to investigate any correlations between cone photoreceptor density and findings on optical coherence tomography and fundus autofluorescence. We examined two patients with typical retinitis pigmentosa who underwent ophthalmological examination, including measurement of visual acuity, and gathering of electroretinographic, optical coherence tomographic, fundus autofluorescent, and adaptive optics fundus images. The cone photoreceptors in the adaptive optics images of the two patients with retinitis pigmentosa and five healthy subjects were analyzed. An abnormal parafoveal ring of high-density fundus autofluorescence was observed in the macula in both patients. The border of the ring corresponded to the border of the external limiting membrane and the inner segment and outer segment line in the optical coherence tomographic images. Cone photoreceptors at the abnormal parafoveal ring were blurred and decreased in the adaptive optics images. The blurred area corresponded to the abnormal parafoveal ring in the fundus autofluorescence images. Cone densities were low at the blurred areas and at the nasal and temporal retina along a line from the fovea compared with those of healthy controls. The results for cone spacing and Voronoi domains in the macula corresponded with those for the cone densities. Cone densities were heavily decreased in the macula, especially at the parafoveal ring on high-density fundus autofluorescence in both patients with retinitis pigmentosa. Adaptive optics images enabled us to observe in vivo changes in the cone photoreceptors of patients with retinitis pigmentosa, which corresponded to changes in the optical coherence tomographic and fundus autofluorescence images.
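Cone density and cone spacing, compared in the study above, are linked: for an ideal hexagonal mosaic the two are interconvertible. A sketch, where the 5 μm spacing is an illustrative healthy-parafovea value, not a number from the paper:

```python
import math

def hex_density_per_mm2(spacing_um):
    """Cone density (cones/mm^2) for an ideal hexagonal mosaic with
    centre-to-centre spacing s (um): density = 2 / (sqrt(3) * s^2)."""
    return 2.0 / (math.sqrt(3) * spacing_um ** 2) * 1e6

def hex_spacing_um(density_per_mm2):
    """Inverse: spacing (um) implied by a measured density."""
    return math.sqrt(2.0 / (math.sqrt(3) * density_per_mm2 * 1e-6))

d = hex_density_per_mm2(5.0)   # illustrative 5 um parafoveal spacing
print(f"{d:.0f} cones/mm^2")
```

Real mosaics are not perfectly hexagonal, which is why studies like this one also report Voronoi-domain statistics rather than relying on spacing alone.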
14. Optic flow improves adaptability of spatiotemporal characteristics during split-belt locomotor adaptation with tactile stimulation
OpenAIRE
Anthony Eikema, Diderik Jan A.; Chien, Jung Hung; Stergiou, Nicholas; Myers, Sara A.; Scott-Pandorf, Melissa M.; Bloomberg, Jacob J.; Mukherjee, Mukul
2015-01-01
Human locomotor adaptation requires feedback and feed-forward control processes to maintain an appropriate walking pattern. Adaptation may require the use of visual and proprioceptive input to decode altered movement dynamics and generate an appropriate response. After a person transfers from an extreme sensory environment and back, as astronauts do when they return from spaceflight, the prolonged period required for re-adaptation can pose a significant burden. In our previous paper, we showe...
15. Analysis technique for controlling system wavefront error with active/adaptive optics
Science.gov (United States)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
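The core of such a linear-optics-model correction is a least-squares fit of actuator influence functions to the disturbance, with the residual serving as the error estimate the abstract mentions. A minimal sketch with random matrices standing in for a real influence-function model (sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pts, n_act = 200, 12                 # wavefront samples and actuators (hypothetical)
H = rng.normal(size=(n_pts, n_act))    # actuator influence matrix (linear optics model)
w = H @ rng.normal(size=n_act) + 0.01 * rng.normal(size=n_pts)  # surface disturbance

def rms(v):
    return float(np.sqrt(np.mean(v ** 2)))

a, *_ = np.linalg.lstsq(H, -w, rcond=None)  # commands that best cancel the disturbance
residual = w + H @ a                        # the part the actuators cannot fit
print(f"input RMS WFE   : {rms(w):.4f}")
print(f"residual RMS WFE: {rms(residual):.4f}")   # the fitting-error estimate
```

The residual RMS quantifies the fitting error for both the surface-disturbance fit and the actuator-influence fit; the state-space extension amounts to stacking such linear maps into the system matrices.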
16. Imaging retinal nerve fiber bundles using optical coherence tomography with adaptive optics.
Science.gov (United States)
Kocaoglu, Omer P; Cense, Barry; Jonnal, Ravi S; Wang, Qiang; Lee, Sangyeol; Gao, Weihua; Miller, Donald T
2011-08-15
Early detection of axonal tissue loss in the retinal nerve fiber layer (RNFL) is critical for effective treatment and management of diseases such as glaucoma. This study aims to evaluate the capability of ultrahigh-resolution optical coherence tomography with adaptive optics (UHR-AO-OCT) for imaging the RNFL axonal bundles (RNFBs) with 3 × 3 × 3 μm³ resolution in the eye. We used a research-grade UHR-AO-OCT system to acquire 3° × 3° volumes in four normal subjects and one subject with an arcuate retinal nerve fiber layer defect (n = 5; 29-62 years). Cross-sectional (B-scan) and en face (C-scan) slices extracted from the volumes were used to assess visibility and size distribution of individual RNFBs. In one subject, we reimaged the same RNFBs twice over a 7-month interval and compared bundle width and thickness between the two imaging sessions. Lastly, we compared images of an arcuate RNFL defect acquired with UHR-AO-OCT and commercial OCT (Heidelberg Spectralis). Individual RNFBs were distinguishable in all subjects at 3° retinal eccentricity in both cross-sectional and en face views (width: 30-50 μm, thickness: 10-15 μm). At 6° retinal eccentricity, RNFBs were distinguishable in three of the five subjects in both views (width: 30-45 μm, thickness: 20-40 μm). Width and thickness RNFB measurements taken 7 months apart were strongly correlated (p < 0.0005). Mean difference and standard deviation of the differences between the two measurement sessions were -0.1 ± 4.0 μm (width) and 0.3 ± 1.5 μm (thickness). UHR-AO-OCT outperformed commercial OCT in terms of clarity of microscopic retinal structure. To our knowledge, these are the first measurements of RNFB cross section reported in the living human eye. Copyright © 2011 Elsevier Ltd. All rights reserved.
17. Computational adaptive optics for broadband interferometric tomography of tissues and cells
Science.gov (United States)
Adie, Steven G.; Mulligan, Jeffrey A.
2016-03-01
Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT) -- a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide-stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO vs. CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.
18. P2 asymmetry of Au's M-band flux and its smoothing effect due to high-Z ablator dopants
Directory of Open Access Journals (Sweden)
Yongsheng Li
2017-03-01
X-ray drive asymmetry is one of the main seeds of low-mode implosion asymmetry that blocks further improvement of the nuclear performance of "high-foot" experiments on the National Ignition Facility [Miller et al., Nucl. Fusion 44, S228 (2004)]. In particular, the P2 asymmetry of Au's M-band flux can also severely influence the implosion performance of ignition capsules [Li et al., Phys. Plasmas 23, 072705 (2016)]. Here we study the smoothing effect of mid- and/or high-Z dopants in the ablator on Au's M-band flux asymmetries, by modeling and comparing the implosion processes of a Ge-doped ignition capsule and a Si-doped one driven by X-ray sources with P2 M-band flux asymmetry. As a result: (1) mid- or high-Z dopants absorb hard X-rays (M-band flux) and re-emit isotropically, which helps to smooth the asymmetric M-band flux arriving at the ablation front, therefore reducing the P2 asymmetries of the imploding shell and hot spot; (2) the smoothing effect of the Ge dopant is more remarkable than that of the Si dopant because its opacity in Au's M-band is higher; and (3) placing the doped layer at a larger radius in the ablator is more efficient. Applying this effect may not be a main measure to reduce the low-mode implosion asymmetry, but might be of significance in some critical situations such as inertial confinement fusion (ICF) experiments very near the performance cliffs of asymmetric X-ray drives.
19. Adaptive digital back-propagation for optical communication systems
NARCIS (Netherlands)
Lin, C.-Y.; Napoli, A.; Spinnler, B.; Sleiffer, V.A.J.M.; Rafique, D.; Kuschnerov, M.; Bohn, M.; Schmauss, B.
2014-01-01
We propose an adaptive digital back-propagation method (A-DBP) to self-determine the unknown fiber nonlinear coefficient γ. Performance is experimentally verified with 10 × 224-Gb/s POLMUX-16QAM transmission over 656 km. Optimal DBP performance, without knowledge of γ, is obtained by A-DBP.
20. Radio over fiber link with adaptive order n‐QAM optical phase modulated OFDM and digital coherent detection
DEFF Research Database (Denmark)
Arlunno, Valeria; Borkowski, Robert; Guerrero Gonzalez, Neil
2011-01-01
Successful digital coherent demodulation of asynchronous optical phase-modulated adaptive-order QAM (4, 16, and 64) orthogonal frequency division multiplexing signals is achieved by a single reconfigurable digital receiver after transmission over 78 km of deployed optical fiber.
1. Sky coverage modeling for the whole sky for laser guide star multiconjugate adaptive optics.
Science.gov (United States)
Wang, Lianqi; Andersen, David; Ellerbroek, Brent
2012-06-01
The scientific productivity of laser guide star adaptive optics systems strongly depends on the sky coverage, which describes the probability of finding natural guide stars for the tip/tilt wavefront sensor(s) to achieve a certain performance. Knowledge of the sky coverage is also important for astronomers planning their observations. In this paper, we present an efficient method to compute the sky coverage for the laser guide star multiconjugate adaptive optics system, the Narrow Field Infrared Adaptive Optics System (NFIRAOS), being designed for the Thirty Meter Telescope project. We show that NFIRAOS can achieve more than 70% sky coverage over most of the accessible sky with a total rms wavefront error requirement of 191 nm.
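At its simplest, sky coverage of the kind computed above reduces to the probability of finding at least one bright-enough natural guide star in the patrol field, under Poisson statistics for star counts. The stellar density below is a hypothetical value, not from the NFIRAOS study, which uses a far more detailed performance model:

```python
import math

def sky_coverage(stars_per_arcmin2, patrol_radius_arcmin):
    """Probability of at least one usable tip/tilt star in the patrol
    field, assuming Poisson-distributed star counts."""
    area = math.pi * patrol_radius_arcmin ** 2
    return 1.0 - math.exp(-stars_per_arcmin2 * area)

# Hypothetical guide-star density near the galactic pole: 0.35 stars/arcmin^2
p = sky_coverage(0.35, 1.0)
print(f"sky coverage: {p:.2f}")
```

Real sky-coverage models also weight each candidate star by the tip/tilt residual it delivers, rather than counting any star as "usable".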
2. An adaptive optics system for solid-state laser systems used in inertial confinement fusion
International Nuclear Information System (INIS)
Salmon, J.T.; Bliss, E.S.; Byrd, J.L.; Feldman, M.; Kartz, M.A.; Toeppen, J.S.; Wonterghem, B. Van; Winters, S.E.
1995-01-01
Using adaptive optics, the authors obtained nearly diffraction-limited 5 kJ, 3 ns output pulses at 1.053 μm from the Beamlet demonstration system for the National Ignition Facility (NIF). The peak Strehl ratio was improved from 0.009 to 0.50, as estimated from measured wavefront errors. They also measured the relaxation of the thermally induced aberrations in the main beam line over a period of 4.5 hours. Peak-to-valley aberrations range from 6.8 waves at 1.053 μm within 30 minutes after a full system shot to 3.9 waves after 4.5 hours. The adaptive optics system must have enough range to correct accumulated thermal aberrations from several shots in addition to the immediate shot-induced error. Accumulated wavefront errors in the beam line will affect both the design of the adaptive optics system for NIF and the performance of that system.
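Estimating Strehl ratio from measured wavefront errors, as done above, typically uses the extended Maréchal approximation; it is accurate for small residuals, so the 0.009 figure stretches its validity:

```python
import math

def strehl_from_rms(rms_waves):
    """Extended Marechal approximation S = exp(-(2*pi*sigma)^2),
    with the RMS wavefront error sigma in waves."""
    return math.exp(-(2 * math.pi * rms_waves) ** 2)

# RMS errors (in waves at 1.053 um) implied by the quoted Strehl ratios
for s in (0.009, 0.50):
    sigma = math.sqrt(-math.log(s)) / (2 * math.pi)
    print(f"Strehl {s}: sigma ~ {sigma:.3f} waves")
```

A Strehl of 0.50 corresponds to roughly 0.13 waves RMS, i.e. the corrected beam is close to the lambda/14 Maréchal criterion for diffraction-limited performance.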
3. Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.
Science.gov (United States)
González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier
2017-06-02
Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
4. Atmospheric Turbulence Measurements in Support of Adaptive Optics Technology
Science.gov (United States)
1989-03-01
A summary of microthermal Cn² measurements is also included. In the near future we anticipate completion of the in-depth study of the radar Cn² applications in the form... The temperature fluctuations necessary to use (2) are measured using standard microthermal temperature-resistance sensors and very sensitive... panel is optical Cn² computed from microthermal measurements of CT², assuming a negligible water vapor contribution. The middle panel depicts the
5. Design and realization of adaptive optical principle system without wavefront sensing
Science.gov (United States)
Wang, Xiaobin; Niu, Chaojun; Guo, Yaxing; Han, Xiang'e.
2018-02-01
In this paper, we focus on performance improvement of free-space optical communication systems and carry out research on wavefront-sensorless adaptive optics. We use a phase-only liquid crystal spatial light modulator (SLM) as the wavefront corrector. The optical intensity distribution of the distorted wavefront is detected by a CCD. We developed a wavefront controller based on ARM and software based on the Linux operating system; the wavefront controller controls the CCD camera and the wavefront corrector. Two SLMs are used in the experimental system: one simulates atmospheric turbulence and the other compensates the wavefront distortion. The experimental results show that the performance quality metric (the total gray value of 25 pixels) increases from 3037 to 4863 after 200 iterations. It is also demonstrated that our wavefront-sensorless adaptive optics system based on the SPGD algorithm performs well in compensating wavefront distortion.
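The SPGD (stochastic parallel gradient descent) algorithm mentioned above perturbs all control channels simultaneously with random ±δ and updates along the measured metric difference. A minimal simulation sketch, where a quadratic metric stands in for the camera's total gray value and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 25                        # control channels, e.g. SLM zones (hypothetical)
target = rng.normal(size=n)   # phase pattern that would cancel the turbulence

def metric(u):
    """Quadratic stand-in for the measured image-quality metric
    (e.g. total gray value of the focal spot), maximal at u = target."""
    return -float(np.sum((u - target) ** 2))

u = np.zeros(n)
gain, delta = 1.0, 0.1
for _ in range(200):                               # 200 iterations, as in the abstract
    d = delta * rng.choice([-1.0, 1.0], size=n)    # simultaneous random perturbation
    dj = metric(u + d) - metric(u - d)             # two-sided metric difference
    u += gain * dj * d                             # SPGD update
print(f"metric before: {metric(np.zeros(n)):.2f}  after: {metric(u):.4f}")
```

SPGD needs only scalar metric readings, never a wavefront measurement, which is what makes it attractive for sensorless AO; gain and perturbation amplitude must be kept small enough for the update to remain stable.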
6. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model
Science.gov (United States)
Hansen, Anja; Krueger, Alexander; Ripken, Tammo
2013-03-01
In ophthalmic microsurgery, tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications, the irradiance distribution in the focal volume is distorted by the anterior components of the eye, causing a raised threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of the wavefront's influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens modeling the eye's refractive power, a water chamber modeling the tissue properties, and a PTFE sample modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack sensor. The influence of adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements to determine the breakdown threshold and video imaging of the focal region to study the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. An increase in irradiance at constant pulse energy was also shown for the aberration-corrected case. The reduced pulse energy lowers the potential risk of collateral damage, which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.
7. Benefit of adaptive FEC in shared backup path protected elastic optical network.
Science.gov (United States)
Guo, Hong; Dai, Hua; Wang, Chao; Li, Yongcheng; Bose, Sanjay K; Shen, Gangxiang
2015-07-27
We apply an adaptive forward error correction (FEC) allocation strategy to an Elastic Optical Network (EON) operated with shared backup path protection (SBPP). To maximize the protected network capacity that can be carried, an Integer Linear Programming (ILP) model and a spectrum window plane (SWP)-based heuristic algorithm are developed. Simulation results show that the FEC coding overhead required by the adaptive FEC scheme is significantly lower than that needed by a fixed FEC allocation strategy, resulting in higher network capacity for the adaptive strategy. The adaptive FEC allocation strategy also significantly outperforms the fixed FEC allocation strategy both in terms of spare capacity redundancy and in the average FEC coding overhead needed per optical channel. The proposed heuristic algorithm is efficient: it not only performs close to the ILP model but also does much better than the shortest-path algorithm.
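The fixed-versus-adaptive contrast in the abstract above can be illustrated with a toy model: the fixed scheme must provision the worst-case FEC overhead on every lightpath, while the adaptive scheme picks the smallest overhead each path actually needs. The overhead steps and the linear noise-vs-distance rule below are hypothetical, not the paper's ILP formulation.

```python
import math

STEPS = (0.07, 0.14, 0.21, 0.28)  # selectable FEC overheads (fractions)

def adaptive_overhead(path_km, base=0.07, per_km=0.00005):
    """Pick the smallest available overhead covering an assumed linear
    noise-accumulation requirement for this lightpath (toy rule)."""
    need = base + per_km * path_km
    for step in STEPS:
        if step >= need:
            return step
    return STEPS[-1]          # clamp to the strongest available FEC

paths = [300, 800, 1500, 2500]          # lightpath lengths in km (made up)
adaptive = [adaptive_overhead(d) for d in paths]
fixed = [STEPS[-1]] * len(paths)        # fixed scheme: worst-case overhead
```

Summing the two lists shows the spectrum saved by adapting per path, which is the effect the paper quantifies with its ILP and SWP heuristic.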
8. NAOMI: a low-order adaptive optics system for the VLT interferometer
Science.gov (United States)
Gonté, Frédéric Yves J.; Alonso, Jaime; Aller-Carpentier, Emmanuel; Andolfato, Luigi; Berger, Jean-Philippe; Cortes, Angela; Delplancke-Strobele, Françoise; Donaldson, Rob; Dorn, Reinhold J.; Dupuy, Christophe; Egner, Sebastian E.; Huber, Stefan; Hubin, Norbert; Kirchbauer, Jean-Paul; Le Louarn, Miska; Lilley, Paul; Jolley, Paul; Martis, Alessandro; Paufique, Jérôme; Pasquini, Luca; Quentin, Jutta; Ridings, Robert; Reyes, Javier; Shchkaturov, Pavel; Suarez, Marcos; Phan Duc, Thanh; Valdes, Guillermo; Woillez, Julien; Le Bouquin, Jean-Baptiste; Beuzit, Jean-Luc; Rochat, Sylvain; Vérinaud, Christophe; Moulin, Thibaut; Delboulbé, Alain; Michaud, Laurence; Correia, Jean-Jacques; Roux, Alain; Maurel, Didier; Stadler, Eric; Magnard, Yves
2016-08-01
The New Adaptive Optics Module for Interferometry (NAOMI) will be developed for and installed at the 1.8-metre Auxiliary Telescopes (ATs) at ESO Paranal. The goal of the project is to equip all four ATs with a low-order Shack-Hartmann adaptive optics system operating in the visible. By improving the wavefront quality delivered by the ATs for guide stars brighter than R = 13 mag, NAOMI will make the existing interferometer performance less dependent on the seeing conditions. Fed with a higher and more stable Strehl ratio, the fringe tracker(s) will achieve the fringe stability necessary to reach the full performance of the second-generation instruments GRAVITY and MATISSE.
9. Modeling update for the Thirty Meter Telescope laser guide star dual-conjugate adaptive optics system
Science.gov (United States)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2010-07-01
This paper describes the modeling efforts undertaken in the past couple of years to derive wavefront error (WFE) performance estimates for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility laser guide star (LGS) dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The estimates describe the expected performance of NFIRAOS as a function of seeing on Mauna Kea, zenith angle, and galactic latitude (GL). They have been developed through a combination of integrated AO simulations, side analyses, allocations, lab and lidar experiments.
10. The fundus photo has met its match: optical coherence tomography and adaptive optics ophthalmoscopy are here to stay.
Science.gov (United States)
Morgan, Jessica I W
2016-05-01
Over the past 25 years, optical coherence tomography (OCT) and adaptive optics (AO) ophthalmoscopy have revolutionised our ability to non-invasively observe the living retina. The purpose of this review is to highlight the techniques and human clinical applications of recent advances in OCT and adaptive optics scanning laser/light ophthalmoscopy (AOSLO) ophthalmic imaging. Optical coherence tomography retinal and optic nerve head (ONH) imaging technology allows high resolution in the axial direction, resulting in cross-sectional visualisation of retinal and ONH lamination. Complementary AO ophthalmoscopy gives high resolution in the transverse direction, resulting in en face visualisation of retinal cell mosaics. Innovative detection schemes applied to OCT and AOSLO technologies (such as spectral domain OCT, OCT angiography, confocal and non-confocal AOSLO, fluorescence, and AO-OCT) have enabled high contrast between retinal and ONH structures in three dimensions and have allowed in vivo retinal imaging to approach histological quality. In addition, both OCT and AOSLO have shown the capability to detect retinal reflectance changes in response to visual stimuli, paving the way for future studies to investigate objective biomarkers of visual function at the cellular level. Increasingly, these imaging techniques are being applied to clinical studies of the normal and diseased visual system. Optical coherence tomography and AOSLO technologies are capable of elucidating the structure and function of the retina and ONH non-invasively with unprecedented resolution and contrast. The techniques have proven their worth in both basic science and clinical applications, and each will continue to be utilised in future studies for many years to come.
11. Neptune’s zonal winds from near-IR Keck adaptive optics imaging in August 2001
NARCIS (Netherlands)
Martin, S.C.; De Pater, I.; Marcus, P.
2011-01-01
We present H-band (1.4–1.8 µm) images of Neptune with a spatial resolution of ~0.06″, taken with the W.M. Keck II telescope using the slit-viewing camera (SCAM) of the NIRSPEC instrument backed with Adaptive Optics. Images with 60-second integration times span 4 hours each on UT 20 and 21 August,
12. Development of a scalable generic platform for adaptive optics real time control
Science.gov (United States)
Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar
2015-06-01
The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the Deformable Mirror (DM) control voltages are usually generated from the Wavefront Sensor (WFS) measurements by multiplying the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as adaptive optics control. The target of the current project is to generate real-time wavefront-correction signals from the Wavefront Sensor measurements, with a system flexible enough to accommodate all current wavefront sensing techniques as well as the different methods used for wavefront compensation. The system should also support different data transmission protocols (Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for adaptive optics control. Preliminary simulation results for the formulation of the platform and a design of a fully scalable slope computer are presented.
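The core real-time operation described above — mapping WFS slopes to DM voltages through a precomputed reconstructor matrix — is a single matrix-vector product, which is exactly what makes it amenable to FPGA parallelism: every output row is an independent multiply-accumulate chain. The tiny matrix and slope vector below are made-up numbers for illustration.

```python
def reconstruct(R, slopes):
    """v = R @ s: each DM voltage is an independent dot product of one
    reconstructor row with the slope vector, so on an FPGA every row can
    run in its own hard-multiplier chain in parallel."""
    return [sum(r_ij * s_j for r_ij, s_j in zip(row, slopes)) for row in R]

# 2 actuators driven by 3 slope measurements (illustrative numbers).
R = [[0.8, 0.1, 0.0],
     [0.0, 0.1, 0.8]]
voltages = reconstruct(R, [1.0, -2.0, 0.5])
```

Because R is fixed between reconstructor updates, the FPGA can hard-wire the data flow; only the slope vector streams through at the loop rate.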
13. Joint optimization of phase diversity and adaptive optics : Demonstration of potential
NARCIS (Netherlands)
Korkiakoski, V.; Keller, C.U.; Doelman, N.; Fraanje, P.R.; Verhaegen, M.H.G.
2011-01-01
We study different possibilities to use adaptive optics (AO) and phase diversity (PD) together in a jointly optimized system. The potential of the joint system is demonstrated through numerical simulations. We find that the most significant benefits are obtained from the improved deconvolution of
14. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system
International Nuclear Information System (INIS)
Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing
2015-01-01
15. Addition of Adapted Optics towards obtaining a quantitative detection of diabetic retinopathy
Science.gov (United States)
Yust, Brian; Obregon, Isidro; Tsin, Andrew; Sardar, Dhiraj
2009-04-01
An adaptive optics system was assembled for correcting the aberrated wavefront of light reflected from the retina. The adaptive optics setup includes a superluminescent diode light source, a Hartmann-Shack wavefront sensor, a deformable mirror, and an imaging CCD camera. Aberrations found in the reflected wavefront are caused by changes in the index of refraction along the light path as the beam travels through the cornea, lens, and vitreous humour. The Hartmann-Shack sensor allows for detection of aberrations in the wavefront, which may then be corrected with the deformable mirror. It has been shown that certain diseases, such as diabetic retinopathy, change the polarization of light reflected from neovascularizations in the retina. The adaptive optics system was assembled towards the goal of obtaining a quantitative measure of the onset and progression of this disease, as no such measure currently exists. The study was done to show that the addition of adaptive optics results in a more accurate detection of neovascularization in the retina by measuring the expected changes in polarization of the corrected wavefront of reflected light.
16. A pilot study on slit lamp-adapted optical coherence tomography imaging of trabeculectomy filtering blebs.
NARCIS (Netherlands)
Theelen, T.; Wesseling, P.; Keunen, J.E.E.; Klevering, B.J.
2007-01-01
BACKGROUND: Our study aims to identify anatomical characteristics of glaucoma filtering blebs by means of slit lamp-adapted optical coherence tomography (SL-OCT) and to identify new parameters for the functional prognosis of the filter in the early post-operative period. METHODS: Patients with
17. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.
Science.gov (United States)
Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J
2015-09-01
Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup (especially refractive-index mismatch) and by the biological sample itself distort the point spread function and further reduce the number of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples.
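The depth-indexed correction described above can be caricatured as a precomputed schedule: fit the dominant aberration amplitude once as a function of depth, then look it up for every slice of the z-stack instead of re-measuring (and bleaching). The linear model and its coefficients below are purely illustrative assumptions, not the authors' calibration.

```python
def correction_schedule(depths_um, offset=0.02, slope_per_um=0.003):
    """Hypothetical linear depth model for the dominant aberration mode
    (e.g. spherical aberration from refractive-index mismatch): the
    mirror command amplitude grows with imaging depth."""
    return [offset + slope_per_um * z for z in depths_um]

# Commands for a 0-50 um z-stack in 10 um steps; no extra illumination
# of the sample is needed to obtain them.
schedule = correction_schedule(range(0, 60, 10))
```

A real calibration would fit `offset` and `slope_per_um` (or a higher-order curve) from a one-time measurement on a reference sample.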
18. Retinal and optical adaptations for nocturnal vision in the halictid bee Megalopta genalis.
Science.gov (United States)
Greiner, Birgit; Ribi, Willi A; Warrant, Eric J
2004-06-01
The apposition compound eye of a nocturnal bee, the halictid Megalopta genalis, is described for the first time. Compared to the compound eye of the worker honeybee Apis mellifera and the diurnal halictid bee Lasioglossum leucozonium, the eye of M. genalis shows specific retinal and optical adaptations for vision in dim light. The major anatomical adaptations within the eye of the nocturnal bee are (1) nearly twofold larger ommatidial facets and (2) a 4-5 times wider rhabdom diameter than found in the diurnal bees studied. Optically, the apposition eye of M. genalis is 27 times more sensitive to light than the eyes of the diurnal bees. This increased optical sensitivity represents a clear optical adaptation to low light intensities. Although this unique nocturnal apposition eye has a greatly improved ability to catch light, a 27-fold increase in sensitivity alone cannot account for nocturnal vision at light intensities that are 8 log units dimmer than during daytime. New evidence suggests that additional neuronal spatial summation within the first optic ganglion, the lamina, is involved.
19. Cone and Rod Loss in Stargardt Disease Revealed by Adaptive Optics Scanning Light Ophthalmoscopy
Science.gov (United States)
Song, Hongxin; Rossi, Ethan A.; Latchney, Lisa; Bessette, Angela; Stone, Edwin; Hunter, Jennifer J.; Williams, David R.; Chung, Mina
2015-01-01
Importance: Stargardt disease (STGD1) is characterized by macular atrophy and flecks in the retinal pigment epithelium. The causative ABCA4 gene encodes a protein localizing to photoreceptor outer segments. The pathologic steps by which ABCA4 mutations lead to clinically detectable retinal pigment epithelium changes remain unclear. We investigated early STGD1 using adaptive optics scanning light ophthalmoscopy. Observations: Adaptive optics scanning light ophthalmoscopy imaging of 2 brothers with early STGD1 and their unaffected parents was compared with conventional imaging. Cone and rod spacing were increased in both patients. Adaptive optics scanning light ophthalmoscopy thus reveals increased cone and rod spacing in areas that appear normal in conventional images, suggesting that photoreceptor loss precedes clinically detectable retinal pigment epithelial disease in STGD1. PMID:26247787
20. Extended use of two crossed Babinet compensators for wavefront sensing in adaptive optics
Science.gov (United States)
Paul, Lancelot; Kumar Saxena, Ajay
2010-12-01
An extended use of two crossed Babinet compensators as a wavefront sensor for adaptive optics applications is proposed. This method is based on the lateral shearing interferometry technique in two directions. A single record of the fringes in a pupil plane provides the information about the wavefront. Theoretical simulations based on this approach for various atmospheric conditions and other errors of optical surfaces are provided for better understanding of this method. Derivation of the results from a laboratory experiment using simulated atmospheric conditions demonstrates the steps involved in data analysis and wavefront evaluation. It is shown that this method offers greater freedom in the choice of subapertures and detectors, and can be readily adopted for real-time wavefront sensing in adaptive optics.
1. Adaptive oriented PDEs filtering methods based on new controlling speed function for discontinuous optical fringe patterns
Science.gov (United States)
Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei
2018-01-01
The filtering of discontinuous optical fringe patterns is a challenging problem in this area. This paper is concerned with oriented partial differential equations (OPDEs)-based image filtering methods for discontinuous optical fringe patterns. We redefine a new controlling speed function to depend on the orientation coherence. The orientation coherence can be used to distinguish continuous regions from discontinuous regions, and can be calculated from the fringe orientation. We introduce the new controlling speed function into the previous OPDEs and propose adaptive OPDE filtering models. With our proposed adaptive OPDE filtering models, filtering in the continuous and discontinuous regions can be carried out selectively. We demonstrate the performance of the proposed adaptive OPDEs via application to simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.
2. Underwater wireless optical MIMO system with spatial modulation and adaptive power allocation
Science.gov (United States)
Huang, Aiping; Tao, Linwei; Niu, Yilong
2018-04-01
In this paper, we investigate the performance of an underwater wireless optical multiple-input multiple-output communication system combining spatial modulation (SM-UOMIMO) with flag dual amplitude pulse position modulation (FDAPPM). Channel impulse responses for coastal and harbor ocean water links are obtained by Monte Carlo (MC) simulation. Moreover, we obtain closed-form and upper-bound average bit error rate (BER) expressions for receiver diversity, including optical combining, equal gain combining, and selection combining. A novel adaptive power allocation algorithm (PAA) is also proposed to minimize the average BER of the SM-UOMIMO system. Our numerical results indicate an excellent match between the analytical results and numerical simulations, which confirms the accuracy of the derived expressions. Furthermore, the results show that adaptive PAA clearly outperforms conventional fixed-factor PAA and equal PAA. A multiple-input single-output system with adaptive PAA achieves even better BER performance than the MIMO one, while effectively reducing receiver complexity.
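A minimal version of the adaptive power allocation idea: with a toy exponential BER-versus-power model (an assumption for illustration, not the paper's UOMIMO channel), grid-search the power split between two transmitters that minimises the average BER and compare it with the fixed equal split.

```python
import math

def avg_ber(gains, powers):
    # Toy model: per-branch BER decays exponentially with allocated power.
    return sum(0.5 * math.exp(-g * p) for g, p in zip(gains, powers)) / len(gains)

def allocate(gains, total=2.0, grid=200):
    """Adaptive PAA sketch: search the 1-D power split between two
    transmitters for the lowest average BER."""
    candidates = (total * k / grid for k in range(grid + 1))
    return min(candidates, key=lambda p: avg_ber(gains, (p, total - p)))

gains = (1.0, 3.0)                      # branch 2 has the better channel
p1 = allocate(gains)
equal = avg_ber(gains, (1.0, 1.0))      # fixed equal-split baseline
adaptive = avg_ber(gains, (p1, 2.0 - p1))
```

As expected from the paper's conclusion, the adaptive split beats the equal split, and the weaker branch receives the larger share of the power budget.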
3. Coherent optical adaptive technique improves the spatial resolution of STED microscopy in thick samples
Science.gov (United States)
Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong
2018-01-01
Stimulated emission depletion (STED) microscopy is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted due to aberrations of optical systems and inhomogeneity of specimens' optical properties, resulting in a compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect no matter how much depletion power is applied to the specimen. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
4. High-speed adaptive optics line scan confocal retinal imaging for human eye.
Science.gov (United States)
Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. A high speed line camera was employed to acquire retinal images, and a custom adaptive optics system was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.
5. Adaptive Optics Facility: control strategy and first on-sky results of the acquisition sequence
Science.gov (United States)
Madec, P.-Y.; Kolb, J.; Oberti, S.; Paufique, J.; La Penna, P.; Hackenberg, W.; Kuntschner, H.; Argomedo, J.; Kiekebusch, M.; Donaldson, R.; Suarez, M.; Arsenault, R.
2016-07-01
The Adaptive Optics Facility is an ESO project aiming at converting Yepun, one of the four 8-m telescopes in Paranal, into an adaptive telescope. This is done by replacing the current conventional secondary mirror of Yepun with a Deformable Secondary Mirror (DSM) and attaching four Laser Guide Star (LGS) Units to its centerpiece. In the meantime, two Adaptive Optics (AO) modules have been developed, each incorporating four LGS WaveFront Sensors (WFS) and one tip-tilt sensor used to control the DSM at a 1 kHz frame rate. The four LGS Units and one AO module (GRAAL) have already been assembled on Yepun. Besides the technological challenge itself, one critical area of the AOF is the AO control strategy and its link with the telescope control, including the Active Optics used to shape M1. Another challenge is the requirement to minimize the overhead due to the AOF during the acquisition phase of an observation. This paper presents the control strategy of the AOF. The current control of the telescope is first recalled, and then the way the AO control links with the Active Optics is detailed. Lab results are used to illustrate the expected performance. Finally, the overall AOF acquisition sequence is presented, as well as first results obtained on sky with GRAAL.
6. The M-band transmission flux of the plastic foil with a coated layer of silicon or germanium
International Nuclear Information System (INIS)
Li, Liling; Zhang, Lu; Jiang, Shaoen; Guo, Liang; Qing, Bo; Li, Zhichao; Zhang, Jiyan; Yang, Jiamin; Ding, Yongkun
2014-01-01
Silicon (Si) and germanium (Ge) can be used as dopants in the ablator material to reduce preheating in indirect-drive inertial confinement fusion. Their performances in reducing preheating are quite different. A method to evaluate the difference between these two dopants is presented in this letter. At the Shenguang-II high power laser facility, the M-band (1.6–4.4 keV) transmission flux of Si-coated plastic (CH) and Ge-coated plastic (CH) was measured using an M-band x-ray diode. In the experiment, we find that the Si-coated CH absorbs more M-band x-rays and thus reduces the preheating of the fuel under our experimental conditions. Simulations with the radiation hydrodynamic code MULTI-1D agree well with the experiment. A comparison of the opacities (T_e = 60–100 eV and ρ = 0.1–0.5 g/cm³) also shows that the opacity of Si is higher than that of Ge over almost the whole 1.6–4.4 keV range.
7. Modeling a space-based quantum link that includes an adaptive optics system
Science.gov (United States)
Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.
2017-10-01
Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses are comprised of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, then this method of key generation and encryption is resistant to future advances in quantum computing which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments, and finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth orbit to ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system as well as the future work necessary to show this impact is described.
8. Simulating the performance of adaptive optics techniques on FSO communications through the atmosphere
Science.gov (United States)
Martínez, Noelia; Rodríguez Ramos, Luis Fernando; Sodnik, Zoran
2017-08-01
The Optical Ground Station (OGS), installed at the Teide Observatory since 1995, was built as part of ESA's efforts in the field of satellite optical communications to test laser telecommunication terminals on board satellites in Low Earth Orbit and Geostationary Orbit. Since one side of the link is on the ground, the laser beam (either on the uplink or on the downlink) must contend with atmospheric turbulence. Within the framework of designing an Adaptive Optics system to improve the performance of Free-Space Optical Communications at the OGS, turbulence conditions for the uplink and downlink have been simulated with the OOMAO (Object-Oriented Matlab Adaptive Optics) Toolbox, as has the possible use of a Laser Guide Star to measure the wavefront in this context. Simulations have been carried out by reducing available atmospheric profiles from both night-time and day-time measurements and by taking into account possible seasonal changes. An AO proposal to reduce atmospheric aberrations and thereby improve FSO link performance is presented and analysed in this paper.
9. Fuzzy-Based Adaptive Hybrid Burst Assembly Technique for Optical Burst Switched Networks
Directory of Open Access Journals (Sweden)
2014-01-01
The optical burst switching (OBS) paradigm is perceived as an intermediate switching technology for future all-optical networks. Burst assembly, the first process in OBS, is the focus of this paper. An intelligent hybrid burst assembly algorithm based on fuzzy logic is proposed. The new algorithm is evaluated against the traditional hybrid burst assembly algorithm and the fuzzy adaptive threshold (FAT) burst assembly algorithm via simulation. Simulation results show that the proposed algorithm outperforms the hybrid and FAT algorithms in terms of burst end-to-end delay, packet end-to-end delay, and packet loss ratio.
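The baseline the fuzzy scheme is compared against — hybrid burst assembly — closes a burst when either a size threshold or a timer fires. A discrete sketch is below; the thresholds are illustrative, and timer expiry is only evaluated at packet arrivals in this simplified event model.

```python
def assemble(packets, max_bytes=12000, max_wait=2.0):
    """Hybrid burst assembly. packets = [(arrival_time, size_bytes), ...],
    sorted by time. A burst is dispatched when its accumulated size reaches
    max_bytes OR its oldest packet has waited max_wait time units."""
    bursts, cur, size, t0 = [], [], 0, None
    for t, s in packets:
        if cur and t - t0 >= max_wait:        # timer threshold: dispatch
            bursts.append(cur)
            cur, size, t0 = [], 0, None
        if t0 is None:
            t0 = t                            # first packet of a new burst
        cur.append((t, s))
        size += s
        if size >= max_bytes:                 # size threshold: dispatch
            bursts.append(cur)
            cur, size, t0 = [], 0, None
    if cur:
        bursts.append(cur)                    # flush the remainder
    return bursts

bursts = assemble([(0.0, 5000), (0.5, 5000), (0.9, 5000), (4.0, 1000), (4.1, 1000)])
```

The fuzzy variant in the paper replaces the two fixed thresholds with values inferred from traffic conditions; the dispatch structure stays the same.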
10. The CHARA array adaptive optics I: common-path optical and mechanical design, and preliminary on-sky results
Science.gov (United States)
Che, Xiao; Sturmann, Laszlo; Monnier, John D.; ten Brummelaar, Theo A.; Sturmann, Judit; Ridgway, Stephen T.; Ireland, Michael J.; Turner, Nils H.; McAlister, Harold A.
2014-07-01
The CHARA array is an optical interferometer with six 1-meter diameter telescopes, providing baselines from 33 to 331 meters. With sub-milliarcsecond angular resolution, its versatile visible and near-infrared combiners offer a unique means of studying nearby stellar systems by spatially resolving their detailed structures. To improve the sensitivity and scientific throughput, the CHARA array was funded by NSF-ATI in 2011 to install adaptive optics (AO) systems on all six telescopes. The initial grant covers Phase I of the AO systems, which includes on-telescope Wavefront Sensors (WFS) and non-common-path (NCP) error correction. Meanwhile, we are seeking funding for Phase II, which will add large Deformable Mirrors to the telescopes to close the full AO loop. The corrections of NCP error and static aberrations in the optical system beyond the WFS are described in the second paper of this series. This paper describes the design of the common-path optical system and the on-telescope WFS, and shows the on-sky commissioning results.
11. Adaptive Electronic Dispersion Compensator for Chromatic and Polarization-Mode Dispersions in Optical Communication Systems
OpenAIRE
Koc Ut-Va
2005-01-01
The widely-used LMS algorithm for coefficient updates in adaptive (feedforward/decision-feedback) equalizers is found to be suboptimal for ASE-dominant systems, but various coefficient-dithering approaches suffer from slow adaptation rates without guarantee of convergence. In view of the non-Gaussian nature of optical noise after the square-law optoelectronic conversion, we propose to apply higher-order least-mean Nth-order (LMN) algorithms, resulting in an OSNR penalty which is 1.5–2 d...
12. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.
Science.gov (United States)
Sulai, Yusufu N; Dubra, Alfredo
2014-09-01
The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channels in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root mean square (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth.
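The sharpness-maximisation idea above can be sketched as a mode-by-mode search over wavefront-corrector bias amplitudes. The metric choice, the three trial amplitudes, and the toy one-pixel "camera" below are illustrative assumptions, not the authors' procedure.

```python
def sharpness(img):
    """Sum of squared pixel values: aberrations spread light over more
    pixels and lower this metric, so maximising it sharpens the PSF."""
    return sum(p * p for row in img for p in row)

def correct_ncpa(render, n_modes, trials=(-0.5, 0.0, 0.5)):
    """One sequential pass: for each corrector mode, keep the trial bias
    amplitude that yields the sharpest image (real systems iterate)."""
    coeffs = [0.0] * n_modes
    for i in range(n_modes):
        coeffs[i] = max(trials, key=lambda a: sharpness(
            render(coeffs[:i] + [a] + coeffs[i + 1:])))
    return coeffs

# Toy "camera": the image peak falls off with the residual aberration.
truth = [0.5, -0.5]
render = lambda c: [[1.0 / (1.0 + sum((ci - ti) ** 2 for ci, ti in zip(c, truth)))]]
found = correct_ncpa(render, len(truth))
```

In the instrument, `render` would be an actual camera frame taken with the detection aperture removed, and the resulting biases would then be fed into the closed-loop WFS references.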
13. On distributed wavefront reconstruction for large-scale adaptive optics systems.
Science.gov (United States)
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
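The partition-then-stitch idea behind D-SABRE can be shown in one dimension: integrate slopes locally inside each partition, then carry the endpoint phase of one block in as the piston offset of the next so the blocks line up. D-SABRE itself solves 2-D multivariate-spline problems per partition; this 1-D cumulative-sum version is only a cartoon of the decomposition.

```python
def reconstruct_1d(slopes, n_parts, dx=1.0):
    """Split the slope array into n_parts blocks, integrate each block
    independently, and stitch blocks by piston hand-off."""
    assert len(slopes) % n_parts == 0
    step = len(slopes) // n_parts
    phases, offset = [], 0.0
    for p in range(n_parts):
        acc = offset
        for s in slopes[p * step:(p + 1) * step]:
            acc += s * dx                 # local cumulative integration
            phases.append(acc)
        offset = acc                      # piston hand-off to next block
    return phases

phase = reconstruct_1d([1.0, 1.0, -1.0, -1.0], n_parts=2)
```

In this noiseless toy the partitioned result matches the global (single-partition) reconstruction exactly; the point of the decomposition is that each block can be solved on its own processor.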
14. Numerical analysis of modal tomography for solar multi-conjugate adaptive optics
International Nuclear Information System (INIS)
Dong Bing; Ren Deqing; Zhang Xi
2012-01-01
Multi-conjugate adaptive optics (MCAO) can considerably extend the corrected field of view with respect to classical adaptive optics, which will benefit solar observation in many aspects. In solar MCAO, the Sun structure is utilized to provide multiple guide stars and a modal tomography approach is adopted to implement three-dimensional wavefront restorations. The principle of modal tomography is briefly reviewed and a numerical simulation model is built with three equivalent turbulent layers and a different number of guide stars. Our simulation results show that at least six guide stars are required for an accurate wavefront reconstruction in the case of three layers, and only three guide stars are needed in the two layer case. Finally, eigenmode analysis results are given to reveal the singular modes that cannot be precisely retrieved in the tomography process.
15. Differential Polarization Nonlinear Optical Microscopy with Adaptive Optics Controlled Multiplexed Beams
Directory of Open Access Journals (Sweden)
Virginijus Barzda
2013-09-01
Differential polarization nonlinear optical microscopy has the potential to become an indispensable tool for structural investigations of ordered biological assemblies and microcrystalline aggregates. Their microscopic organization can be probed through fast and sensitive measurements of nonlinear optical signal anisotropy, which can be achieved with microscopic spatial resolution by using time-multiplexed pulsed laser beams with perpendicular polarization orientations and photon-counting detection electronics for signal demultiplexing. In addition, deformable membrane mirrors can be used to correct for optical aberrations in the microscope and simultaneously optimize beam overlap using a genetic algorithm. The beam overlap can be achieved with accuracy better than the diffraction-limited point-spread function, which makes it possible to perform polarization-resolved measurements on a pixel-by-pixel basis. We describe a newly developed differential polarization microscope and present applications of the technique for structural studies of collagen and cellulose. Both second harmonic generation and fluorescence-detected nonlinear absorption anisotropy are used in these investigations. It is shown that the orientation and structural properties of the fibers in biological tissue can be deduced, and that the orientation of fluorescent molecules (Congo Red) that label the fibers can be determined. Differential polarization microscopy sidesteps common issues such as photobleaching and sample movement. Because the polarization of the excitation pulses alternates at tens of megahertz, fast data acquisition can be conveniently applied to measure changes in nonlinear signal anisotropy in dynamically changing in vivo structures.
16. ADAPTIVE OPTICS IMAGING OF FOVEAL SPARING IN GEOGRAPHIC ATROPHY SECONDARY TO AGE-RELATED MACULAR DEGENERATION.
Science.gov (United States)
Querques, Giuseppe; Kamami-Levy, Cynthia; Georges, Anouk; Pedinielli, Alexandre; Capuano, Vittorio; Blanco-Garavito, Rocio; Poulon, Fanny; Souied, Eric H
2016-02-01
To describe adaptive optics (AO) imaging of foveal sparing in geographic atrophy (GA) secondary to age-related macular degeneration. Flood-illumination AO infrared (IR) fundus images were obtained in four consecutive patients with GA using an AO retinal camera (rtx1; Imagine Eyes). Adaptive optics IR images were overlaid with confocal scanning laser ophthalmoscope near-IR autofluorescence images to allow direct correlation of en face AO features with areas of foveal sparing. Adaptive optics appearance of GA and foveal sparing, preservation of functional photoreceptors, and cone densities in areas of foveal sparing were investigated. In 5 eyes of 4 patients (all female; mean age 74.2 ± 11.9 years), a total of 5 images, sized 4° × 4°, of foveal sparing visualized on confocal scanning laser ophthalmoscope near-IR autofluorescence were investigated by AO imaging. En face AO images revealed GA as regions of inhomogeneous hyperreflectivity with irregularly dispersed hyporeflective clumps. By direct comparison with adjacent regions of GA, foveal sparing appeared as well-demarcated areas of reduced reflectivity with fewer hyporeflective clumps (mean 14.2 vs. 3.2; P = 0.03). Of note, in these areas, en face AO IR images revealed cone photoreceptors as hyperreflective dots over the background reflectivity (mean cone density 3,271 ± 1,109 cones per square millimeter). Microperimetry demonstrated residual function in areas of foveal sparing detected by confocal scanning laser ophthalmoscope near-IR autofluorescence. Adaptive optics allows the appreciation of differences in reflectivity between regions of GA and foveal sparing. Preservation of functional cone photoreceptors was demonstrated on en face AO IR images in areas of foveal sparing detected by confocal scanning laser ophthalmoscope near-IR autofluorescence.
17. Integration of adaptive optics into highEnergy laser modeling and simulation
Science.gov (United States)
2017-06-01
Naval Postgraduate School, Monterey, California: thesis by Donald Puent. Modern deformable mirrors contain hundreds of actuators with high control bandwidths and low hysteresis, all of which are ideal parameters for accurate reconstruction of higher-order aberrations.
18. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system
Science.gov (United States)
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
19. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method
Directory of Open Access Journals (Sweden)
Lijuan Zhang
2014-01-01
To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, based on expectation-maximization theory, that jointly deconvolves multiple frames of adaptive optics images. Firstly, a mathematical model is built for the degraded multiframe adaptive optics images: the time-varying point-spread function is derived from the phase error, and the AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique; a cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated images and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our algorithm reduces the number of iterations by 14.3% and improves estimation accuracy. The model identifies the PSF of the AO images and recovers the observed target images clearly.
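The joint multiframe EM idea can be sketched with a 1-D Richardson-Lucy-style iteration (a stand-in for the paper's improved EM scheme, not its actual implementation): each iteration multiplies the current estimate by a correction ratio averaged over all frames, each frame having its own point-spread function (PSF).

```python
# Multiframe EM/Richardson-Lucy deconvolution on a circular 1-D signal
# (toy sketch; PSFs are assumed known and normalized to unit sum).

def conv(x, k):
    """Circular 1-D convolution of signal x with kernel k."""
    n = len(x)
    return [sum(x[(i - j) % n] * k[j] for j in range(len(k))) for i in range(n)]

def corr(x, k):
    """Circular correlation (the adjoint of conv), used for backprojection."""
    n = len(x)
    return [sum(x[(i + j) % n] * k[j] for j in range(len(k))) for i in range(n)]

def multiframe_rl(frames, psfs, iters=300):
    """EM update: multiply estimate by the frame-averaged correction."""
    n = len(frames[0])
    est = [sum(f[i] for f in frames) / len(frames) for i in range(n)]
    for _ in range(iters):
        corr_sum = [0.0] * n
        for f, k in zip(frames, psfs):
            blur = conv(est, k)
            ratio = [fi / max(bi, 1e-12) for fi, bi in zip(f, blur)]
            back = corr(ratio, k)
            corr_sum = [c + b for c, b in zip(corr_sum, back)]
        est = [e * c / len(frames) for e, c in zip(est, corr_sum)]
    return est

# Noiseless toy data: a point source blurred by two different PSFs.
truth = [0.0, 0.0, 4.0, 0.0, 0.0, 0.0]
psfs = [[0.5, 0.25, 0.25], [0.25, 0.5, 0.25]]
frames = [conv(truth, k) for k in psfs]
estimate = multiframe_rl(frames, psfs)
```

The multiplicative update preserves total flux and converges toward the point source; the paper's contribution lies in the regularization and PSF modeling layered on top of this basic iteration.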
20. Multiconjugate adaptive optics applied to an anatomically accurate human eye model.
Science.gov (United States)
Bedggood, P A; Ashman, R; Smith, G; Metha, A B
2006-09-04
Aberrations of both astronomical telescopes and the human eye can be successfully corrected with conventional adaptive optics. This produces diffraction-limited imagery over a limited field of view called the isoplanatic patch. A new technique, known as multiconjugate adaptive optics, has been developed recently in astronomy to increase the size of this patch. The key is to model atmospheric turbulence as several flat, discrete layers. A human eye, however, has several curved, aspheric surfaces and a gradient index lens, complicating the task of correcting aberrations over a wide field of view. Here we utilize a computer model to determine the degree to which this technology may be applied to generate high resolution, wide-field retinal images, and discuss the considerations necessary for optimal use with the eye. The Liou and Brennan schematic eye simulates the aspheric surfaces and gradient index lens of real human eyes. We show that the size of the isoplanatic patch of the human eye is significantly increased through multiconjugate adaptive optics.
2. Do kinematic metrics of walking balance adapt to perturbed optical flow?
Science.gov (United States)
Thompson, Jessica D; Franz, Jason R
2017-08-01
3. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
International Nuclear Information System (INIS)
Liu Rui-Xue; Zheng Xian-Liang; Li Da-Yu; Hu Li-Fa; Cao Zhao-Liang; Mu Quan-Quan; Xuan Li; Xia Ming-Liang
2014-01-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce patient discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel fixation system was used for stimulating accommodation and fixating the imaging area. The illumination sources and imaging camera were moved in tandem for focusing and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the fixation system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light could be projected on a given retinal layer precisely. Since the defocus had been compensated by the fixation system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity to fully compensate high-order aberrations. The Strehl ratio of a subject with −8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels, and the nerve fiber layer were clearly imaged.
4. Ship detection for high resolution optical imagery with adaptive target filter
Science.gov (United States)
Ju, Hongbin
2015-10-01
Ship detection is important for both civil and military use. In this paper, we propose a novel ship detection method, the Adaptive Target Filter (ATF), for high-resolution optical imagery. The proposed framework has two stages. In the first stage, a test image is densely divided into detection windows and each window is transformed to a feature vector in its feature space; the Histogram of Oriented Gradients (HOG) is used as the basic feature descriptor. In the second stage, the proposed ATF highlights all ship regions and suppresses the undesired backgrounds adaptively. Each detection window is assigned a score representing the degree to which the window belongs to a ship category. The ATF is obtained adaptively by weighted Logistic Regression (WLR) according to the distribution of backgrounds and targets in the input image. The main innovation of our method is that we only need to collect positive training samples to build the filter, while the negative training samples are generated adaptively from the input image. This differs from other classification methods, such as the Support Vector Machine (SVM) and Logistic Regression (LR), which need both positive and negative training samples. Experiments on 1-m high-resolution optical images show that the proposed method achieves ship detection with higher quality and robustness than other methods, e.g., SVM and LR.
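The scoring stage can be sketched as a weighted logistic regression that assigns each window a score in [0, 1]; the feature extraction (HOG) is replaced here by ready-made 2-D toy features, and the per-sample weights are our own hypothetical illustration of how background samples could be down-weighted, not the paper's exact scheme.

```python
# Weighted logistic regression scorer for detection windows (toy sketch).

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_wlr(X, y, w, lr=0.5, epochs=500):
    """Weighted logistic regression via batch gradient descent;
    w holds per-sample weights."""
    beta = [0.0] * (len(X[0]) + 1)          # bias + coefficients
    for _ in range(epochs):
        grad = [0.0] * len(beta)
        for xi, yi, wi in zip(X, y, w):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            err = wi * (sigmoid(z) - yi)    # weighted residual
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b - lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

def score(beta, x):
    """Window score in [0, 1]: degree of belonging to the ship class."""
    return sigmoid(beta[0] + sum(b * v for b, v in zip(beta[1:], x)))

# Toy training set: ship windows have large gradient-energy features.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
weights = [1.0, 1.0, 1.0, 1.0]
beta = train_wlr(X, y, weights)
```

A new window is then scored with `score(beta, features)`; windows above a threshold are kept as ship candidates.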
5. Frequency Adaptive Control Technique for Periodic Runout and Wobble Cancellation in Optical Disk Drives
Directory of Open Access Journals (Sweden)
Yee-Pien Yang
2006-10-01
Periodic disturbances occur in various applications involving the control of rotational mechanical systems. In optical disk drives, the spirally shaped tracks are usually not perfectly circular, and the assembly of the disk and spindle motor is unavoidably eccentric. The resulting periodic disturbance is therefore synchronous with the disk rotation and becomes particularly noticeable for the track-following and focusing servo systems. This paper applies a novel adaptive controller, the Frequency Adaptive Control Technique (FACT), to reject periodic runout and wobble effects in an optical disk drive with dual actuators. The control objective is to adaptively attenuate specific frequency contents of periodic disturbances without amplifying the remaining harmonics. FACT is implemented in a plug-in manner and provides a suitable framework for periodic disturbance rejection in cases where the fundamental frequencies of the disturbance are alterable. It is shown that the convergence of the parameters in the proposed adaptive algorithm is exponentially stable. The method is applicable to both constant linear velocity (CLV) and constant angular velocity (CAV) spindle modes at various operating speeds. Experiments showed that the proposed FACT successfully improves the tracking and focusing performance of a CD-ROM drive and extends to various compact disk drives.
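The basic mechanism of adaptive rejection of a rotation-synchronous disturbance can be sketched with an LMS update of the sine/cosine coefficients at a known frequency (a minimal illustration of the idea, not the paper's FACT controller):

```python
# Adaptive cancellation of a periodic runout component at a known frequency:
# the sin/cos coefficients of the compensation signal are adapted by LMS
# so that the residual error is driven toward zero.

import math

def adaptive_cancel(freq, disturbance, dt=0.001, mu=0.1):
    a, b = 0.0, 0.0                      # adapted sin/cos coefficients
    errors = []
    for n, d in enumerate(disturbance):
        t = n * dt
        s = math.sin(2 * math.pi * freq * t)
        c = math.cos(2 * math.pi * freq * t)
        comp = a * s + b * c             # compensation signal
        e = d - comp                     # residual error
        a += mu * e * s                  # LMS coefficient updates
        b += mu * e * c
        errors.append(e)
    return errors

freq = 10.0                              # disturbance frequency, Hz (made up)
dist = [2.0 * math.sin(2 * math.pi * freq * n * 0.001 + 0.7)
        for n in range(4000)]
errors = adaptive_cancel(freq, dist)
```

The residual starts at the full disturbance amplitude and decays as the coefficients converge; FACT generalizes this to multiple, speed-dependent harmonics inside a plug-in loop.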
6. Pipelining Computational Stages of the Tomographic Reconstructor for Multi-Object Adaptive Optics on a Multi-GPU System
KAUST Repository
Charara, Ali; Ltaief, Hatem; Gratadour, Damien; Keyes, David E.; Sevin, Arnaud; Abdelfattah, Ahmad; Gendron, Eric; Morel, Carine; Vidal, Fabrice
2014-01-01
An instrument called MOSAIC has been proposed to perform multi-object spectroscopy using the Multi-Object Adaptive Optics (MOAO) technique. The core of the simulation lies in the intensive computation of a tomographic reconstructor (TR).
7. mBAND analysis for high- and low-LET radiation-induced chromosome aberrations: A review
Energy Technology Data Exchange (ETDEWEB)
Hada, Megumi, E-mail: megumi.hada-1@nasa.gov [NASA Johnson Space Center, Houston, TX 77058 (United States); Universities Space Research Association, Houston, TX 77058 (United States); Wu Honglu; Cucinotta, Francis A. [NASA Johnson Space Center, Houston, TX 77058 (United States)
2011-06-03
During long-term space travel or cancer therapy, humans are exposed to high linear energy transfer (LET) energetic heavy ions. High-LET radiation is much more effective than low-LET radiation in causing various biological effects, including cell inactivation, genetic mutations, cataracts and cancer induction. Most of these biological endpoints are closely related to chromosomal damage, and cytogenetic damage can be utilized as a biomarker for radiation insults. Epidemiological data, mainly from survivors of the atomic bomb detonations in Japan, have enabled risk estimation from low-LET radiation exposures. The identification of a cytogenetic signature that distinguishes high- from low-LET exposure remains a long-term goal in radiobiology. Recently developed fluorescence in situ hybridization (FISH) painting methodologies have revealed unique endpoints related to radiation quality. Heavy ions induce a high fraction of complex-type exchanges and possibly unique chromosome rearrangements. This review concentrates on recent data obtained with multicolor banding in situ hybridization (mBAND) methods in mammalian cells exposed to low- and high-LET radiations. Chromosome analysis with the mBAND technique allows detection of both inter- and intrachromosomal exchanges, as well as the distribution of the breakpoints of aberrations.
8. Improved fixation quality provided by a Bessel beacon in an adaptive optics system.
Science.gov (United States)
Lambert, Andrew J; Daly, Elizabeth M; Dainty, Christopher J
2013-07-01
We investigate whether a structured probe beam that creates the beacon for a retinal-imaging adaptive optics system can provide useful side effects. In particular, we investigate whether a Bessel beam, seen by the subject as a set of concentric rings, has a dampening effect on the fixation variations of the subject under observation. This calming effect would allow longer periods of observation, particularly for patients with abnormal fixation. An experimental adaptive optics system developed for retinal imaging is used to monitor the fluctuations in aberrations for artificial and human subjects. The probe beam is alternated between a traditional beacon and one provided by a Bessel beam created by a spatial light modulator (SLM). Time-frequency analysis is used to indicate the differences in power and time variation during fixation depending on whether the Bessel beam or the traditional beacon is employed. Comparison is made with the response for an artificial eye to discount systemic variations. Significant evidence is accrued to indicate reduced fluctuations in fixation when the Bessel beam is employed to create the beacon. © 2013 The Authors. Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
9. Modulation transfer function estimation of optical lens system by adaptive neuro-fuzzy methodology
Science.gov (United States)
Petković, Dalibor; Shamshirband, Shahaboddin; Pavlović, Nenad T.; Anuar, Nor Badrul; Kiah, Miss Laiha Mat
2014-07-01
The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of spatial frequency and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to estimate the MTF value of an actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy inference system, and the back-propagation learning algorithm is used to train this network. This intelligent estimator is implemented using Matlab/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
10. Adaptive optics correction into single mode fiber for a low Earth orbiting space to ground optical communication link using the OPALS downlink.
Science.gov (United States)
Wright, Malcolm W; Morris, Jeffery F; Kovalik, Joseph M; Andrews, Kenneth S; Abrahamson, Matthew J; Biswas, Abhijit
2015-12-28
An adaptive optics (AO) testbed was integrated to the Optical PAyload for Lasercomm Science (OPALS) ground station telescope at the Optical Communications Telescope Laboratory (OCTL) as part of the free space laser communications experiment with the flight system on board the International Space Station (ISS). Atmospheric turbulence induced aberrations on the optical downlink were adaptively corrected during an overflight of the ISS so that the transmitted laser signal could be efficiently coupled into a single mode fiber continuously. A stable output Strehl ratio of around 0.6 was demonstrated along with the recovery of a 50 Mbps encoded high definition (HD) video transmission from the ISS at the output of the single mode fiber. This proof of concept demonstration validates multi-Gbps optical downlinks from fast slewing low-Earth orbiting (LEO) spacecraft to ground assets in a manner that potentially allows seamless space to ground connectivity for future high data-rates network.
11. 1.7 μm band narrow-linewidth tunable Raman fiber lasers pumped by spectrum-sliced amplified spontaneous emission.
Science.gov (United States)
Zhang, Peng; Wu, Di; Du, Quanli; Li, Xiaoyan; Han, Kexuan; Zhang, Lizhong; Wang, Tianshu; Jiang, Huilin
2017-12-10
A 1.7 μm band tunable narrow-linewidth Raman fiber laser based on spectrally sliced amplified spontaneous emission (SS-ASE) and multiple filter structures is proposed and experimentally demonstrated. In this scheme, an SS-ASE source is employed as a pump source in order to avoid stimulated Brillouin scattering. The ring configuration includes a 500 m long high nonlinear optical fiber and a 10 km long dispersion shifted fiber as the gain medium. A segment of un-pumped polarization-maintaining erbium-doped fiber is used to modify the shape of the spectrum. Furthermore, a nonlinear polarization rotation scheme is applied as the wavelength selector to generate lasers. A high-finesse ring filter and a ring filter are used to narrow the linewidth of the laser, respectively. We demonstrate tuning capabilities of a single laser over 28 nm between 1652 nm and 1680 nm by adjusting the polarization controller (PC) and tunable filter. The tunable laser has a 0.023 nm effective linewidth with the high-finesse ring filter. The stable multi-wavelength laser operation of up to four wavelengths can be obtained by adjusting the PC carefully when the pump power increases.
12. Adaptive Sensor Optimization and Cognitive Image Processing Using Autonomous Optical Neuroprocessors
International Nuclear Information System (INIS)
CAMERON, STEWART M.
2001-01-01
Measurement and signal intelligence demands have created new requirements for information management and interoperability as they affect surveillance and situational awareness. Integration of on-board autonomous learning and adaptive control structures within a remote sensing platform architecture would substantially improve the utility of intelligence collection by facilitating real-time optimization of measurement parameters for variable field conditions. A problem faced by conventional digital implementations of intelligent systems is the conflict between a distributed parallel structure and a sequential serial interface, which functionally degrades bandwidth and response time. In contrast, optically designed networks exhibit the massive parallelism and interconnect density needed to perform complex cognitive functions within a dynamic asynchronous environment. Recently, all-optical self-organizing neural networks exhibiting emergent collective behavior which mimics perception, recognition, association, and contemplative learning have been realized using photorefractive holography in combination with sensory systems for feature maps, threshold decomposition, image enhancement, and nonlinear matched filters. Such hybrid information processors depart from the classical computational paradigm based on analytic rules-based algorithms and instead utilize unsupervised generalization and perceptron-like exploratory or improvisational behaviors to evolve toward optimized solutions. These systems are robust to instrumental systematics or corrupting noise and can enrich knowledge structures by allowing competition between multiple hypotheses. This property enables them to rapidly adapt or self-compensate for dynamic or imprecise conditions which would be unstable under conventional linear control models. By incorporating an intelligent optical neuroprocessor in the back plane of an imaging sensor, a broad class of high-level cognitive image analysis problems including geometric
13. Optical imaging of metabolic adaptability in metastatic and non-metastatic breast cancer
Science.gov (United States)
Rebello, Lisa; Rajaram, Narasimhan
2018-02-01
Accurate methods for determining metastatic risk from the primary tumor are crucial for patient survival. Cell metabolism could potentially be used as a marker of metastatic risk. Optical imaging of the endogenous fluorescent molecules nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD) provides a non-destructive and label-free method for determining cell metabolism. The optical redox ratio (FAD/(FAD+NADH)) is sensitive to the balance between glycolysis and oxidative phosphorylation (OXPHOS). We have previously established that hypoxia-reoxygenation stress leads to metastatic potential-dependent changes in optical redox ratio. The objective of this study was to monitor the changes in optical redox ratio in breast cancer cells in response to different periods of hypoxic stress as well as various levels of hypoxia to establish an optimal protocol. We measured the optical redox ratio of highly metastatic 4T1 murine breast cancer cells under normoxic conditions and after exposure to 30, 60, and 120 minutes of 0.5% O2. This was followed by an hour of reoxygenation. We found an increase in the optical redox ratio following reoxygenation from hypoxia for all durations. Statistically significant differences were observed at 60 and 120 minutes (p < 0.01) compared with normoxia, implying an ability to adapt to OXPHOS after reoxygenation. The switch to OXPHOS has been shown to be a key promoter of cell invasion. We will present our results from these investigations in human breast cancer cells as well as non-metastatic breast cancer cells exposed to various levels of hypoxia.
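The redox ratio itself is a one-line computation, FAD / (FAD + NADH), with higher values indicating a shift toward oxidative phosphorylation. The intensity values below are made-up illustrative numbers, not data from the study:

```python
# Optical redox ratio from fluorescence intensities (illustrative values).

def redox_ratio(fad, nadh):
    """FAD / (FAD + NADH); rises as metabolism shifts toward OXPHOS."""
    return fad / (fad + nadh)

normoxia = redox_ratio(fad=30.0, nadh=70.0)      # hypothetical baseline
reoxygenated = redox_ratio(fad=45.0, nadh=55.0)  # hypothetical post-stress
```

In the study the ratio is computed per pixel from the two fluorescence channels and then averaged over cells; the comparison of interest is the ratio before hypoxia versus after reoxygenation.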
14. Super-resolution pupil filtering for visual performance enhancement using adaptive optics
Science.gov (United States)
Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun
2018-05-01
Ocular aberration correction can significantly improve the visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity was further assessed to verify the SR effect on visual function, and an F-test was conducted for nested models to statistically compare the different CSFs. The results indicated that CSFs with the proposed SR filter were significantly higher than with diffraction-limited correction, supporting super-resolved optical correction of the human eye.
15. Images of photoreceptors in living primate eyes using adaptive optics two-photon ophthalmoscopy
Science.gov (United States)
Hunter, Jennifer J.; Masella, Benjamin; Dubra, Alfredo; Sharma, Robin; Yin, Lu; Merigan, William H.; Palczewska, Grazyna; Palczewski, Krzysztof; Williams, David R.
2011-01-01
In vivo two-photon imaging through the pupil of the primate eye has the potential to become a useful tool for functional imaging of the retina. Two-photon excited fluorescence images of the macaque cone mosaic were obtained using a fluorescence adaptive optics scanning laser ophthalmoscope, overcoming the challenges of a low numerical aperture, imperfect optics of the eye, high required light levels, and eye motion. Although the specific fluorophores are as yet unknown, strong in vivo intrinsic fluorescence allowed images of the cone mosaic. Imaging intact ex vivo retina revealed that the strongest two-photon excited fluorescence signal comes from the cone inner segments. The fluorescence response increased following light stimulation, which could provide a functional measure of the effects of light on photoreceptors. PMID:21326644
16. Immature visual neural system in children reflected by contrast sensitivity with adaptive optics correction
Science.gov (United States)
Liu, Rong; Zhou, Jiawei; Zhao, Haoxin; Dai, Yun; Zhang, Yudong; Tang, Yong; Zhou, Yifeng
2014-01-01
This study aimed to explore the neural development status of the visual system of children (around 8 years old) using contrast sensitivity. We achieved this by eliminating the influence of higher order aberrations (HOAs) with adaptive optics correction. We measured HOAs, modulation transfer functions (MTFs) and contrast sensitivity functions (CSFs) of six children and five adults with both corrected and uncorrected HOAs. We found that when HOAs were corrected, children and adults both showed improvements in MTF and CSF. However, the CSF of children was still lower than the adult level, indicating the difference in contrast sensitivity between groups cannot be explained by differences in optical factors. Further study showed that the difference between the groups also could not be explained by differences in non-visual factors. With these results we concluded that the neural systems underlying vision in children of around 8 years old are still immature in contrast sensitivity. PMID:24732728
17. Adaptive Optics System with Deformable Composite Mirror and High Speed, Ultra-Compact Electronics
Science.gov (United States)
Chen, Peter C.; Knowles, G. J.; Shea, B. G.
2006-06-01
We report development of a novel adaptive optics system for optical astronomy. Key components are very thin deformable mirrors (DMs) made of fiber-reinforced polymer resins, subminiature PMN-PT actuators, and a low-power, high-bandwidth electronics drive system with compact packaging and minimal wiring. By using specific formulations of fibers, resins, and laminate construction, we are able to fabricate mirror face sheets that are very thin, with actuation bandwidths above 2 kHz. By utilizing QorTek's proprietary synthetic impedance power supply technology, all the power, control, and signal extraction for many hundreds to thousands of actuators and sensors can be implemented on a single matrix-controller printed circuit board co-mounted with the DM. The matrix controller in turn requires only a single serial bus interface, thereby obviating the need for massive wiring harnesses. The technology can be scaled up to multi-meter-aperture DMs with >100K actuators.
18. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph
International Nuclear Information System (INIS)
Dong Bing; Ren Deqing; Zhang Xi
2011-01-01
An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly, and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated with an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast shows improvement from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
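The SPGD update rule is simple enough to sketch directly (with a toy quadratic objective standing in for the real image-sharpness metric): all actuator commands are perturbed simultaneously by a random ±δ pattern, and the measured change in the metric steers a parallel update of every command.

```python
# Stochastic parallel gradient descent (SPGD) on a toy metric.
# The quadratic bowl below is a hypothetical stand-in for a real
# wavefront-quality metric measured from the camera.

import random

def spgd_minimize(metric, n, iters=2000, delta=0.05, gain=0.3, seed=1):
    random.seed(seed)
    u = [0.0] * n                                    # actuator commands
    for _ in range(iters):
        # Random +/-delta perturbation applied to all channels at once.
        p = [delta if random.random() < 0.5 else -delta for _ in range(n)]
        dj = (metric([ui + pi for ui, pi in zip(u, p)])
              - metric([ui - pi for ui, pi in zip(u, p)]))
        # Parallel update: each channel moves against the metric change.
        u = [ui - gain * dj * pi for ui, pi in zip(u, p)]
    return u

# Toy metric: quadratic bowl with minimum at the target commands.
target = [0.8, -0.3, 0.5, 0.1]
metric = lambda u: sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u_opt = spgd_minimize(metric, len(target))
```

Because only the scalar metric is needed, SPGD requires no wavefront sensor model, which is why it suits speckle suppression in a coronagraph.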
19. Holographic line field en-face OCT with digital adaptive optics in the retina in vivo.
Science.gov (United States)
Ginner, Laurin; Schmoll, Tilman; Kumar, Abhishek; Salas, Matthias; Pricoupenko, Nastassia; Wurster, Lara M; Leitgeb, Rainer A
2018-02-01
We demonstrate a high-resolution line field en-face time domain optical coherence tomography (OCT) system using an off-axis holography configuration. Line field en-face OCT produces high speed en-face images at rates of up to 100 Hz. The high frame rate favors good phase stability across the lateral field-of-view which is indispensable for digital adaptive optics (DAO). Human retinal structures are acquired in-vivo with a broadband light source at 840 nm, and line rates of 10 kHz to 100 kHz. Structures of different retinal layers, such as photoreceptors, capillaries, and nerve fibers are visualized with high resolution of 2.8 µm and 5.5 µm in lateral directions. Subaperture based DAO is successfully applied to increase the visibility of cone-photoreceptors and nerve fibers. Furthermore, en-face Doppler OCT maps are generated based on calculating the differential phase shifts between recorded lines.
20. Adaptive Electronic Dispersion Compensator for Chromatic and Polarization-Mode Dispersions in Optical Communication Systems
Directory of Open Access Journals (Sweden)
Koc Ut-Va
2005-01-01
Full Text Available The widely used LMS algorithm for coefficient updates in adaptive (feedforward/decision-feedback) equalizers is found to be suboptimal for ASE-dominant systems, while various coefficient-dithering approaches suffer from slow adaptation without a guarantee of convergence. In view of the non-Gaussian nature of optical noise after the square-law optoelectronic conversion, we propose to apply higher-order least-mean 2Nth-order (LMN) algorithms, resulting in an OSNR penalty 1.5–2 dB smaller than that of LMS. Furthermore, combined with adjustable slicer threshold control, the proposed equalizer structures are demonstrated through extensive Monte Carlo simulations to achieve better performance.
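The difference between LMS and a higher-order least-mean update is a one-line change to the tap-update rule. The sketch below is a toy 3-tap feedforward equalizer on a hypothetical dispersive channel with Gaussian noise, so it only illustrates the two update rules; it does not reproduce the paper's OSNR comparison, which hinges on non-Gaussian optical noise.

```python
import random

def run_equalizer(order_n, mu=0.01, seed=7, n_sym=4000):
    """Adapt a 3-tap feedforward equalizer with the least-mean 2N-th order
    update: N = 1 is ordinary LMS (gradient ~ e*x); N = 2 uses e**3 and
    emphasizes large errors. Channel h = [0.25, 1, 0.25] is made up."""
    rng = random.Random(seed)
    d = [rng.choice((-1.0, 1.0)) for _ in range(n_sym)]
    r = [d[m]
         + 0.25 * (d[m - 1] if m > 0 else 0.0)
         + 0.25 * (d[m + 1] if m + 1 < n_sym else 0.0)
         + 0.05 * rng.gauss(0.0, 1.0) for m in range(n_sym)]
    taps = [0.0, 1.0, 0.0]                 # centre-spike initialization
    mse, cnt = 0.0, 0
    for k in range(1, n_sym - 1):
        x = (r[k + 1], r[k], r[k - 1])
        e = d[k] - sum(t * xi for t, xi in zip(taps, x))
        g = e ** (2 * order_n - 1)         # the only LMS-vs-LMN difference
        taps = [t + mu * g * xi for t, xi in zip(taps, x)]
        if k > n_sym // 2:                 # measure MSE after convergence
            mse += e * e
            cnt += 1
    return mse / cnt

mse_lms = run_equalizer(order_n=1)
mse_lmn = run_equalizer(order_n=2)
```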
Science.gov (United States)
Wang, Ruyan; Liang, Alei; Wu, Dapeng; Wu, Dalei
2017-07-01
Wireless-Optical Broadband Access Networks (WOBANs) are high-capacity, reliable, flexible, and ubiquitous, as they take full advantage of the merits of both optical and wireless communication technologies. As in other access networks, high energy consumption poses a great challenge for building WOBANs. To address this problem, lightly loaded Optical Network Units (ONUs) can be put to sleep to reduce energy consumption. Such operation, however, increases packet delay. Jointly considering energy consumption and transmission delay, we propose a delay-aware adaptive sleep mechanism. Specifically, we develop a new analytical method to evaluate the transmission delay and queuing delay over the optical part, instead of adopting the M/M/1 queuing model. Meanwhile, we also analyze the access delay and queuing delay of the wireless part. Based on these delay models, we mathematically derive the ONU's optimal sleep time. In addition, we provide extensive simulation results to show the effectiveness of the proposed mechanism.
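The energy-delay tradeoff behind an ONU sleep policy can be illustrated with a deliberately simplified model. This is a toy sketch, not the paper's analysis (which explicitly avoids the M/M/1 assumption); the cost function and its parameters are invented for illustration.

```python
import math

def mm1_sojourn(lam, mu):
    """Baseline M/M/1 mean sojourn time, W = 1/(mu - lam), requires lam < mu."""
    assert lam < mu
    return 1.0 / (mu - lam)

def optimal_sleep(alpha, beta, e_wake):
    """Toy cost C(T) = alpha*T/2 (mean extra delay for packets arriving
    during a sleep period of length T) + beta*e_wake/T (wake-up energy
    amortized per unit time).  Setting dC/dT = 0 gives
    T* = sqrt(2*beta*e_wake/alpha)."""
    return math.sqrt(2.0 * beta * e_wake / alpha)

t_star = optimal_sleep(alpha=1.0, beta=1.0, e_wake=2.0)
```

The closed form shows the qualitative behavior one expects: a costlier wake-up pushes toward longer sleep periods, while a higher delay penalty pushes toward shorter ones.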
2. High-resolution adaptive optics scanning laser ophthalmoscope with multiple deformable mirrors
Science.gov (United States)
Chen, Diana C.; Olivier, Scot S.; Jones, Steven M.
2010-02-23
An adaptive optics scanning laser ophthalmoscope is introduced to produce non-invasive views of the human retina. The use of dual deformable mirrors improved the dynamic range for correction of wavefront aberrations compared with use of the MEMS mirror alone, and improved the quality of the wavefront correction compared with use of the bimorph mirror alone. The large-stroke bimorph deformable mirror improved the capability for axial sectioning with the confocal imaging system by providing an easier way to move the focus axially through different layers of the retina.
3. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.
Science.gov (United States)
Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang
2015-10-23
An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signals under both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at each time step to ensure the lowest output noise level, although the inertia of the KF response increases under dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual-mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising FOG signals.
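A heavily simplified scalar analogue of the two adaptation mechanisms (innovation-based measurement-noise estimation, plus a discontinuity detector that inflates process noise) can be sketched as follows. Variable names, thresholds, and the demo signal are illustrative, not those of AMA-RWE-DFAKF.

```python
import random

def adaptive_kf(z, q=1e-4, window=30, jump_thresh=4.0):
    """Scalar random-walk Kalman filter with two crude adaptations:
    R is re-estimated from a moving average of squared innovations
    (RWE-like), and Q is inflated when an innovation jump suggests a
    discontinuity (AMA-like)."""
    x, p = z[0], 1.0
    out, innov = [x], []
    for k in range(1, len(z)):
        nu = z[k] - x                          # innovation
        innov.append(nu * nu)
        if len(innov) > window:
            innov.pop(0)
        # RWE-like step: estimate R from the recent innovation sequence
        r = max(sum(innov) / len(innov) - (p + q), 1e-6)
        # AMA-like step: inflate Q when the innovation jumps
        q_eff = q * 100.0 if nu * nu > (jump_thresh ** 2) * (p + q + r) else q
        p_pred = p + q_eff
        gain = p_pred / (p_pred + r)
        x += gain * nu
        p = (1.0 - gain) * p_pred
        out.append(x)
    return out

# Demo: denoise a noisy constant (true value 0.5, noise sigma = 0.1)
rng = random.Random(0)
z = [0.5 + rng.gauss(0.0, 0.1) for _ in range(600)]
xs = adaptive_kf(z)
half = len(z) // 2
mse_in = sum((v - 0.5) ** 2 for v in z[half:]) / (len(z) - half)
mse_out = sum((v - 0.5) ** 2 for v in xs[half:]) / (len(z) - half)
```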
4. Multi-conjugate adaptive optics observations of the Orion Trapezium Cluster
International Nuclear Information System (INIS)
Petr-Gotzens, M G; Kolb, J; Marchetti, E; Sterzik, M F; Ivanov, V D; Nuernberger, D; Koehler, R; Bouy, H; MartIn, E L; Huelamo, N; Navascues, D Barrado y
2008-01-01
We obtained very deep and high spatial resolution near-infrared images of the Orion Trapezium Cluster using the Multi-Conjugate Adaptive Optics Demonstrator (MAD) instrument at the VLT. The goal of these observations has been to search for objects at the very low-mass end of the IMF, down to the planetary-mass regime. Three fields in the innermost dense part of the Trapezium Cluster, with a total area of 3.5 sq. arcmin, have been surveyed at 1.65 μm and 2.2 μm. Several new candidate planetary-mass objects have been detected based on their photometry and on their location in the colour-magnitude diagram. The performance of the multi-conjugate adaptive optics correction is excellent over a large field of view of ∼1'. A definitive classification of the candidate planetary-mass objects, however, must await future confirmation by spectroscopic and/or photometric observations.
5. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics
Directory of Open Access Journals (Sweden)
Dongming Li
2017-04-01
Full Text Available An adaptive optics (AO system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
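The Poisson maximum-likelihood iteration underlying this class of restoration algorithms is the classic Richardson-Lucy multiplicative update. Below is a minimal 1-D sketch, illustrative only; the paper adds frame selection, regularization, and PSF estimation on top of this basic scheme.

```python
def convolve(x, psf):
    """1-D 'same'-size convolution with zero-padded edges (odd-length psf)."""
    half = len(psf) // 2
    out = [0.0] * len(x)
    for i in range(len(x)):
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(x):
                out[i] += p * x[k]
    return out

def richardson_lucy(obs, psf, iters=200):
    """Multiplicative EM updates for the Poisson likelihood:
    x <- x * correlate(psf, obs / convolve(psf, x))."""
    x = [sum(obs) / len(obs)] * len(obs)      # flat, positive initialization
    psf_flip = psf[::-1]                      # correlation = flipped convolution
    for _ in range(iters):
        est = convolve(x, psf)
        ratio = [o / max(e, 1e-12) for o, e in zip(obs, est)]
        x = [xi * ci for xi, ci in zip(x, convolve(ratio, psf_flip))]
    return x

# Demo: blur a point source with a known PSF, then deconvolve it back
truth = [0.0] * 9
truth[4] = 8.0
psf = [0.25, 0.5, 0.25]
obs = convolve(truth, psf)
restored = richardson_lucy(obs, psf)
```

The multiplicative form keeps the estimate non-negative at every iteration, a property that matches the photon-counting (Poisson) noise model assumed by the paper.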
Energy Technology Data Exchange (ETDEWEB)
Liuzzo, E. [Osservatorio di Radioastronomia, INAF, via Gobetti 101, I-40129 Bologna (Italy); Falomo, R.; Paiano, S.; Baruffolo, A.; Farinato, J.; Moretti, A.; Ragazzoni, R. [Osservatorio Astronomico di Padova, INAF, vicolo dell’Osservatorio 5, I-35122 Padova (Italy); Treves, A. [Università dell’Insubria (Como) (Italy); Uslenghi, M. [INAF-IASF, via E. Bassini 15, I-20133 Milano (Italy); Arcidiacono, C.; Diolaiti, E.; Lombini, M. [Osservatorio Astronomico di Bologna, INAF, Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Brast, R. [Dipartimento di Fisica e Astronomia, Università di Bologna, Via Irnerio, 46, I-40126, Bologna (Italy); Donaldson, R.; Kolb, J.; Marchetti, E.; Tordo, S., E-mail: liuzzo@ira.inaf.it [European Southern Observatory, Karl-Schwarschild-Strasse 2, D-85748 Garching bei München (Germany)
2016-08-01
We present near-IR images of five luminous quasars at z ∼ 2 and one at z ∼ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were obtained under poor seeing conditions in all but two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ∼ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits to the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2–3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K-magnitude spanning from 15 to 20 (corresponding to absolute magnitude −31 to −26) and host galaxies that are 4 mag fainter than their nuclei.
7. Ground-based adaptive optics coronagraphic performance under closed-loop predictive control
Science.gov (United States)
Males, Jared R.; Guyon, Olivier
2018-01-01
The discovery of the exoplanet Proxima b highlights the potential for the coming generation of giant segmented mirror telescopes (GSMTs) to characterize terrestrial, potentially habitable planets orbiting nearby stars with direct imaging. This will require continued development and implementation of optimized adaptive optics systems feeding coronagraphs on the GSMTs. Such development should proceed with an understanding of the fundamental limits imposed by atmospheric turbulence. Here, we seek to address this question with a semianalytic framework for calculating the postcoronagraph contrast in a closed-loop adaptive optics system. We do this starting with the temporal power spectra of the Fourier basis calculated assuming frozen-flow turbulence, and then apply closed-loop transfer functions. We include the benefits of a simple predictive controller, which we show could provide over a factor of 1400 gain in raw point spread function contrast at 1 λ/D on bright stars, and more than a factor of 30 gain on an I=7.5 mag star such as Proxima. More sophisticated predictive control can be expected to improve this even further. Assuming a photon-noise limited observing technique such as high-dispersion coronagraphy, these gains in raw contrast will decrease integration times by the same large factors. Predictive control of atmospheric turbulence should therefore be seen as one of the key technologies that will enable ground-based telescopes to characterize terrestrial planets.
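The benefit of prediction can be seen even in a toy single-mode simulation: a sinusoidal frozen-flow phase controlled by a one-frame-delay integrator versus a simple two-point linear extrapolator. This is an illustrative model with made-up frequencies, not the paper's Fourier-domain transfer-function framework.

```python
import math

def residual_rms(predictive, f_turb=20.0, f_loop=1000.0, gain=0.5, n=5000):
    """Residual for one sinusoidal turbulence mode under a one-frame-delay
    loop: classic integrator vs. a two-point linear extrapolator."""
    dt = 1.0 / f_loop
    turb = [math.sin(2.0 * math.pi * f_turb * k * dt) for k in range(n)]
    c = 0.0            # correction applied during the current frame
    pol_prev = 0.0     # previous pseudo open-loop phase estimate
    acc = 0.0
    for k in range(n):
        resid = turb[k] - c
        acc += resid * resid
        pol = resid + c                 # pseudo open-loop phase this frame
        if predictive:
            c = 2.0 * pol - pol_prev    # extrapolate one frame ahead
        else:
            c += gain * resid           # integrator, applied next frame
        pol_prev = pol
    return math.sqrt(acc / n)

r_integrator = residual_rms(predictive=False)
r_predictor = residual_rms(predictive=True)
```

For this 20 Hz mode at a 1 kHz frame rate, the extrapolator's residual scales with the second difference of the phase rather than the first, so it beats the lagged integrator by a large factor; squaring the residual gain translates such factors into the raw-contrast gains discussed above.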
8. Noninvasive imaging of the human rod photoreceptor mosaic using a confocal adaptive optics scanning ophthalmoscope
Science.gov (United States)
Dubra, Alfredo; Sulai, Yusufu; Norris, Jennifer L.; Cooper, Robert F.; Dubis, Adam M.; Williams, David R.; Carroll, Joseph
2011-01-01
The rod photoreceptors are implicated in a number of devastating retinal diseases. However, routine imaging of these cells has remained elusive, even with the advent of adaptive optics imaging. Here, we present the first in vivo images of the contiguous rod photoreceptor mosaic in nine healthy human subjects. The images were collected with three different confocal adaptive optics scanning ophthalmoscopes at two different institutions, using 680 and 775 nm superluminescent diodes for illumination. Estimates of photoreceptor density and rod:cone ratios in the 5°–15° retinal eccentricity range are consistent with histological findings, confirming our ability to resolve the rod mosaic by averaging multiple registered images, without the need for additional image processing. In one subject, we were able to identify the emergence of the first rods at approximately 190 μm from the foveal center, in agreement with previous histological studies. The rod and cone photoreceptor mosaics appear in focus at different retinal depths, with the rod mosaic best focus (i.e., brightest and sharpest) being at least 10 μm shallower than the cones at retinal eccentricities larger than 8°. This study represents an important step in bringing high-resolution imaging to bear on the study of rod disorders. PMID:21750765
9. Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.
Science.gov (United States)
Piatrou, Piotr; Gilles, Luc
2005-02-20
Robustness of the recently proposed pseudo open-loop control (POLC) algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model representing the adaptive-optics system under POLC has been modified to account for misalignments, noise, and calibration errors in deformable mirrors and wave-front sensors. A comparison with the conventional least-squares control model has been made. We show, with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations, that POLC remains remarkably stable and robust against very large levels of system error and outperforms least-squares control in this respect. Approximate stability margins as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arcmin field of view have been computed for different types and levels of system errors to quantify the expected performance degradation.
10. Multi-GPU Development of a Neural Networks Based Reconstructor for Adaptive Optics
Directory of Open Access Journals (Sweden)
Carlos González-Gutiérrez
2018-01-01
Full Text Available Aberrations introduced by atmospheric turbulence in large telescopes are compensated using adaptive optics systems, where the use of deformable mirrors and multiple sensors relies on complex control systems. Recently, the development of larger telescopes such as the E-ELT or TMT has created a computational challenge due to the increasing complexity of the new adaptive optics systems. The Complex Atmospheric Reconstructor based on Machine Learning (CARMEN) is an algorithm based on artificial neural networks, designed to compensate for atmospheric turbulence. In recent years, GPUs have proven to be an effective way to speed up the training of neural networks, and different frameworks have been created to ease their development. The implementation of CARMEN in different multi-GPU frameworks is presented in this paper, along with its development in a language originally designed for GPUs, CUDA. The latter implementation offers the best response in all the presented cases, although its advantage of using more than one GPU appears only for large networks.
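The core idea, learning a reconstructor from data rather than deriving it from a system matrix, can be shown with a CPU-bound toy: fit a linear slopes-to-commands map by stochastic gradient descent on synthetic data. This is a stand-in illustration with an invented 2x2 mapping, not the CARMEN network or its multi-GPU implementation.

```python
import random

random.seed(3)
TRUE_MAP = [[0.8, -0.2], [0.1, 0.5]]   # made-up slopes-to-commands relation

def forward(w, s):
    """Apply a 2x2 linear layer to a slope vector s."""
    return [sum(w[i][j] * s[j] for j in range(2)) for i in range(2)]

# Synthetic training pairs: (WFS slope vector, desired mirror command)
train = []
for _ in range(200):
    s = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    train.append((s, forward(TRUE_MAP, s)))

# Single-layer "network" trained by SGD on squared error
w = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.1
for _epoch in range(300):
    for s, target in train:
        out = forward(w, s)
        for i in range(2):
            err = out[i] - target[i]
            for j in range(2):
                w[i][j] -= lr * err * s[j]   # gradient of 0.5*err**2
```

The per-sample multiply-accumulate structure of this loop is exactly what maps well onto GPUs once the layers grow to thousands of units, which is the regime the paper targets.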
12. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication
Science.gov (United States)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out for an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate (BER) of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduces the number of decoders by 72% and realizes 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER = 1E-3.
13. Wavefront error budget development for the Thirty Meter Telescope laser guide star adaptive optics system
Science.gov (United States)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
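The top-level terms quoted above combine in quadrature, and reproducing that arithmetic recovers the stated contingency against the 187 nm requirement:

```python
import math

# NFIRAOS top-level error-budget terms quoted above (nm RMS wavefront error)
terms = {
    "first-order turbulence compensation": 120.0,
    "opto-mechanical implementation": 84.0,
    "AO components and higher-order effects": 74.0,
    "tip/tilt at 50% sky coverage": 61.0,
}
rss_total = math.sqrt(sum(v ** 2 for v in terms.values()))   # ~175 nm
requirement = 187.0                                          # ORD total, nm
contingency = math.sqrt(requirement ** 2 - rss_total ** 2)   # ~66 nm
```

Root-sum-square combination is the standard convention for statistically independent wavefront-error terms, and the ~66 nm result matches the contingency stated in the abstract.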
14. Tree-based solvers for adaptive mesh refinement code FLASH - I: gravity and optical depths
Science.gov (United States)
Wünsch, R.; Walch, S.; Dinnbier, F.; Whitworth, A.
2018-04-01
We describe an OctTree algorithm for the MPI parallel, adaptive mesh refinement code FLASH, which can be used to calculate the gas self-gravity, and also the angle-averaged local optical depth, for treating ambient diffuse radiation. The algorithm communicates to the different processors only those parts of the tree that are needed to perform the tree-walk locally. The advantage of this approach is a relatively low memory requirement, important in particular for the optical depth calculation, which needs to process information from many different directions. This feature also enables a general tree-based radiation transport algorithm that will be described in a subsequent paper, and delivers excellent scaling up to at least 1500 cores. Boundary conditions for gravity can be either isolated or periodic, and they can be specified in each direction independently, using a newly developed generalization of the Ewald method. The gravity calculation can be accelerated with the adaptive block update technique by partially re-using the solution from the previous time-step. Comparison with the FLASH internal multigrid gravity solver shows that tree-based methods provide a competitive alternative, particularly for problems with isolated or mixed boundary conditions. We evaluate several multipole acceptance criteria (MACs) and identify a relatively simple approximate partial error MAC which provides high accuracy at low computational cost. The optical depth estimates are found to agree very well with those of the RADMC-3D radiation transport code, with the tree-solver being much faster. Our algorithm is available in the standard release of the FLASH code in version 4.0 and later.
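A geometric multipole acceptance criterion of the kind evaluated in such tree solvers can be sketched in a few lines: a node is accepted as a monopole when it subtends a small enough angle, and for a distant clump the monopole force is already accurate to well under a percent. The particle positions and opening angle below are illustrative, not taken from the paper.

```python
import math

def mac_accept(node_size, distance, theta=0.5):
    """Geometric multipole acceptance criterion: use a tree node as a
    single monopole when it subtends an angle below the opening angle."""
    return node_size / distance < theta

# A distant clump of unit-mass particles and a target point at the origin
particles = [(10.0, 0.2), (10.3, -0.1), (9.8, 0.0), (10.1, 0.3)]
target = (0.0, 0.0)

def accel_direct(px, py):
    """Direct-sum acceleration at (px, py), with G = m = 1 units."""
    ax = ay = 0.0
    for x, y in particles:
        dx, dy = x - px, y - py
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += dx / r3
        ay += dy / r3
    return ax, ay

# Monopole approximation: total mass at the centre of mass
cx = sum(x for x, _ in particles) / len(particles)
cy = sum(y for _, y in particles) / len(particles)
dx, dy = cx - target[0], cy - target[1]
dist = math.hypot(dx, dy)
ax_mono = len(particles) * dx / dist ** 3
ay_mono = len(particles) * dy / dist ** 3
ax_dir, ay_dir = accel_direct(*target)

node_size = max(x for x, _ in particles) - min(x for x, _ in particles)
```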
15. Phenotypic diversity in autosomal-dominant cone-rod dystrophy elucidated by adaptive optics retinal imaging.
Science.gov (United States)
Song, Hongxin; Rossi, Ethan A; Stone, Edwin; Latchney, Lisa; Williams, David; Dubra, Alfredo; Chung, Mina
2018-01-01
16. Multimodal adaptive optics for depth-enhanced high-resolution ophthalmic imaging
Science.gov (United States)
Hammer, Daniel X.; Mujat, Mircea; Iftimia, Nicusor V.; Lue, Niyom; Ferguson, R. Daniel
2010-02-01
We developed a multimodal adaptive optics (AO) retinal imager for diagnosis of retinal diseases, including glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinitis pigmentosa (RP). The development represents the first ever high performance AO system constructed that combines AO-corrected scanning laser ophthalmoscopy (SLO) and swept source Fourier domain optical coherence tomography (SSOCT) imaging modes in a single compact clinical prototype platform. The SSOCT channel operates at a wavelength of 1 μm for increased penetration and visualization of the choriocapillaris and choroid, sites of major disease activity for DR and wet AMD. The system is designed to operate on a broad clinical population with a dual deformable mirror (DM) configuration that allows simultaneous low- and high-order aberration correction. The system also includes a wide field line scanning ophthalmoscope (LSO) for initial screening, target identification, and global orientation; an integrated retinal tracker (RT) to stabilize the SLO, OCT, and LSO imaging fields in the presence of rotational eye motion; and a high-resolution LCD-based fixation target for presentation to the subject of stimuli and other visual cues. The system was tested in a limited number of human subjects without retinal disease for performance optimization and validation. The system was able to resolve and quantify cone photoreceptors across the macula to within ~0.5 deg (~100-150 μm) of the fovea, image and delineate ten retinal layers, and penetrate to resolve targets deep into the choroid. In addition to instrument hardware development, analysis algorithms were developed for efficient information extraction from clinical imaging sessions, with functionality including automated image registration, photoreceptor counting, strip and montage stitching, and segmentation. The system provides clinicians and researchers with high-resolution, high performance adaptive optics imaging to help
17. Multipoint dynamically reconfigurable adaptive distributed fiber optic acoustic emission sensor (FAESense) system for condition-based maintenance
Science.gov (United States)
Mendoza, Edgar; Prohaska, John; Kempen, Connie; Esterkin, Yan; Sun, Sunjian; Krishnaswamy, Sridhar
2010-09-01
This paper describes preliminary results obtained under a Navy SBIR contract by Redondo Optics Inc. (ROI), in collaboration with Northwestern University, towards the development and demonstration of a next generation, stand-alone and fully integrated, dynamically reconfigurable, adaptive fiber optic acoustic emission sensor (FAESense™) system for the in-situ unattended detection and localization of shock events, impact damage, cracks, voids, and delaminations in new and aging critical infrastructures found in ships, submarines, aircraft, and in next generation weapon systems. ROI's FAESense™ system is based on the integration of proven state-of-the-art technologies: 1) distributed arrays of in-line fiber Bragg grating (FBG) sensors sensitive to strain, vibration, and acoustic emissions, 2) adaptive spectral demodulation of FBG sensor dynamic signals using two-wave mixing (TWM) interferometry on photorefractive semiconductors, and 3) integration of all the sensor system passive and active optoelectronic components within a 0.5-cm x 1-cm photonic integrated circuit microchip. The adaptive TWM demodulation methodology allows the measurement of dynamic, high-frequency acoustic emission events while compensating for passive quasi-static strain and temperature drifts. It features a compact, low power, environmentally robust 1-inch x 1-inch x 4-inch small form factor (SFF) package with no moving parts. The FAESense™ interrogation system is microprocessor-controlled, using high data rate signal processing electronics for FBG sensor calibration, temperature compensation, and the detection and analysis of acoustic emission signals. Its miniaturized package, low-power operation, state-of-the-art data communications, and low cost make it a very attractive solution for a large number of applications in the naval and maritime industries, aerospace, civil structures, the oil and chemical industry, and homeland security.
18. Computational hydrodynamics and optical performance of inductively-coupled plasma adaptive lenses
Energy Technology Data Exchange (ETDEWEB)
Mortazavi, M.; Urzay, J., E-mail: jurzay@stanford.edu; Mani, A. [Center for Turbulence Research, Stanford University, Stanford, California 94305-3024 (United States)
2015-06-15
This study addresses the optical performance of a plasma adaptive lens for aero-optical applications by using both axisymmetric and three-dimensional numerical simulations. Plasma adaptive lenses are based on the effects of free electrons on the phase velocity of incident light, which, in theory, can be used as a phase-conjugation mechanism. A closed cylindrical chamber filled with Argon plasma is used as a model lens into which a beam of light is launched. The plasma is sustained by applying a radio-frequency electric current through a coil that envelops the chamber. Four different operating conditions, ranging from low to high powers and induction frequencies, are employed in the simulations. The numerical simulations reveal complex hydrodynamic phenomena related to buoyant and electromagnetic laminar transport, which generate, respectively, large recirculating cells and wall-normal compression stresses in the form of local stagnation-point flows. In the axisymmetric simulations, the plasma motion is coupled with near-wall axial striations in the electron-density field, some of which propagate in the form of low-frequency traveling disturbances adjacent to vortical quadrupoles that are reminiscent of Taylor-Görtler flow structures in centrifugally unstable flows. Although the refractive-index fields obtained from axisymmetric simulations lead to smooth beam wavefronts, they are found to be unstable to azimuthal disturbances in three of the four three-dimensional cases considered. The azimuthal striations are optically detrimental, since they produce high-order angular aberrations that account for most of the beam wavefront error. A fourth case is computed at high input power and high induction frequency, which displays the best optical properties among all the three-dimensional simulations considered. In particular, the increase in induction frequency prevents local thermalization and leads to an axisymmetric distribution of electrons even after introduction of
19. ASSOCIATIONS BETWEEN MACULAR EDEMA AND CIRCULATORY STATUS IN EYES WITH RETINAL VEIN OCCLUSION: An Adaptive Optics Scanning Laser Ophthalmoscopy Study.
Science.gov (United States)
Iida, Yuto; Muraoka, Yuki; Uji, Akihito; Ooto, Sotaro; Murakami, Tomoaki; Suzuma, Kiyoshi; Tsujikawa, Akitaka; Arichika, Shigeta; Takahashi, Ayako; Miwa, Yuko; Yoshimura, Nagahisa
2017-10-01
To investigate associations between parafoveal microcirculatory status and foveal pathomorphology in eyes with macular edema (ME) secondary to retinal vein occlusion (RVO). Ten consecutive patients (10 eyes) with acute retinal vein occlusion were enrolled, 9 of which received intravitreal ranibizumab (IVR) injections. Foveal morphologic changes were examined via optical coherence tomography (OCT), and parafoveal circulatory status was assessed via adaptive optics scanning laser ophthalmoscopy (AO-SLO). The mean parafoveal aggregated erythrocyte velocity (AEV) measured by adaptive optics scanning laser ophthalmoscopy in eyes with retinal vein occlusion was 0.99 ± 0.43 mm/second at baseline, which was significantly lower than that of age-matched healthy subjects (1.41 ± 0.28 mm/second, P = 0.042). The longitudinal adaptive optics scanning laser ophthalmoscopy examinations of each patient showed that parafoveal AEV was strongly inversely correlated with optical coherence tomography-measured central foveal thickness (CFT) over the entire observation period. Using parafoveal AEV and central foveal thickness measurements obtained at the first and second examinations, we investigated associations between differences in parafoveal AEV and central foveal thickness, which were significantly and highly correlated (r = -0.84, P = 0.002). Using adaptive optics scanning laser ophthalmoscopy in eyes with retinal vein occlusion macular edema, we could quantitatively evaluate the parafoveal AEV. A reduction or an increase in parafoveal AEV may be a clinical marker for the resolution or development/progression of macular edema, respectively.
20. Experimental demonstration of single-mode fiber coupling over relatively strong turbulence with adaptive optics.
Science.gov (United States)
Chen, Mo; Liu, Chao; Xian, Hao
2015-10-10
High-speed free-space optical communication systems using fiber-optic components can greatly improve the stability of the system and simplify its structure. However, propagation through atmospheric turbulence degrades the spatial coherence of the signal beam and limits the single-mode fiber (SMF) coupling efficiency. In this paper, we analyze the influence of atmospheric turbulence on the SMF coupling efficiency over various turbulence strengths. The results show that the SMF coupling efficiency drops from 81% without phase distortion to 10% when the phase root mean square value equals 0.3λ. Simulations of SMF coupling with adaptive optics (AO) indicate that compensating the high-order aberrations is indispensable for SMF coupling over relatively strong turbulence. SMF coupling efficiency experiments, using an AO system with a 137-element deformable mirror and a Hartmann-Shack wavefront sensor, show the average coupling efficiency increasing from 1.3% in open loop to 46.1% in closed loop under relatively strong turbulence, D/r0 = 15.1.
1. Optical solar energy adaptations and radiative temperature control of green leaves and tree barks
Energy Technology Data Exchange (ETDEWEB)
Henrion, Wolfgang; Tributsch, Helmut [Department of Si-Photovoltaik and Solare Energetik, Hahn-Meitner-Institut Berlin, 14109 Berlin (Germany)
2009-01-15
Trees have adapted to keep leaves and barks cool in sunshine and can serve as interesting bionic model systems for radiative cooling. Silicon solar cells, on the other hand, lose up to one third of their energy efficiency due to heating in intensive sunshine. It is shown that green leaves minimize absorption of useful radiation and allow efficient infrared thermal emission. Since elevated temperatures are detrimental to tensile water flow in the xylem tissue below the bark, the optical properties of barks should also have evolved so as to avoid excessive heating. This was tested by performing optical studies on tree bark samples from representative trees. It was found that tree barks have optimized their reflection of incoming sunlight between 0.7 and 2 μm. This is approximately the optical window in which solar light is transmitted and reflected by green vegetation. Simultaneously, the tree bark is highly absorbing, and thus radiation emitting, between 6 and 10 μm. These two properties, mainly provided by tannins, create optimal conditions for radiative temperature control. In addition, tannins seem to have adopted a function as mediators of excitation energy towards photo-antioxidative activity for control of radiation damage. The results obtained are used to discuss challenges for future solar cell optimization. (author)
2. Implantable collamer lens and femtosecond laser for myopia: comparison using an adaptive optics visual simulator
Directory of Open Access Journals (Sweden)
Cari Pérez-Vives
2014-04-01
Full Text Available Purpose: To compare optical and visual quality of implantable collamer lens (ICL) implantation and femtosecond laser in situ keratomileusis (F-LASIK) for myopia. Methods: The CRX1 adaptive optics visual simulator (Imagine Eyes, Orsay, France) was used to simulate the wavefront aberration pattern after the two surgical procedures for -3-diopter (D) and -6-D myopia. Visual acuity at different contrasts and contrast sensitivities at 10, 20, and 25 cycles/degree (cpd) were measured for 3-mm and 5-mm pupils. The modulation transfer function (MTF) and point spread function (PSF) were calculated for 5-mm pupils. Results: F-LASIK MTF was worse than ICL MTF, which was close to the diffraction-limited MTF. ICL cases showed less spreading of the PSF than F-LASIK cases. ICL cases showed better visual acuity values than F-LASIK cases for all pupils, contrasts, and myopic treatments (p<0.05). For -3-D myopia, differences in contrast sensitivities did not reach statistical significance (p>0.05). For -6-D myopia, however, statistically significant differences in contrast sensitivities were found for both pupils at all evaluated spatial frequencies (p<0.05). Contrast sensitivities were better after ICL implantation than after F-LASIK. Conclusions: ICL implantation and F-LASIK provide good optical and visual quality, although the former provides better outcomes in terms of MTF, PSF, visual acuity, and contrast sensitivity, especially for cases with large refractive errors and pupil sizes. These outcomes are related to F-LASIK producing larger high-order aberrations.
3. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Directory of Open Access Journals (Sweden)
Elena Calzolari
2017-11-01
Full Text Available Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets while wearing optical prisms that displace the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se. These
4. An Adaptive Damping Network Designed for Strapdown Fiber Optic Gyrocompass System for Ships
Directory of Open Access Journals (Sweden)
Jin Sun
2017-03-01
Full Text Available The strapdown fiber optic gyrocompass (strapdown FOGC) system for ships primarily works in external horizontal damping and undamping statuses. When there are large changes in sea conditions, the system switches frequently between the external horizontal damping status and the undamping status. This means that the system is always in an adjustment status, which degrades its dynamic accuracy. Aiming at the limitations of the conventional damping method, a new design idea is proposed, in which an adaptive control method is used to design the horizontal damping network of the strapdown FOGC system. According to the magnitude of acceleration, the parameters of the damping network are changed to minimize the system error caused by the ship's maneuvering. Furthermore, the jump in the damping coefficient was transformed into a gradual change to make the system status switch smoothly. The adaptive damping network was applied to the strapdown FOGC under static and dynamic conditions, and its performance was compared with the conventional damping and undamping methods. Experimental results showed that the adaptive damping network was effective in improving the dynamic performance of the strapdown FOGC.
5. Robo-AO KP: A new era in robotic adaptive optics
Science.gov (United States)
Riddle, Reed L.; Baranec, Christoph; Law, Nicholas M.; Kulkarni, Shrinivas R.; Duev, Dmitry; Ziegler, Carl; Jensen-Clem, Rebecca M.; Atkinson, Dani Eleanor; Tanner, Angelle M.; Zhang, Celia; Ray, Amy
2016-01-01
Robo-AO is the first and only fully automated laser-guide-star adaptive optics (AO) instrument. It was developed as an instrument for 1-3 m robotic telescopes, in order to take advantage of their availability to pursue large survey programs and target-of-opportunity observations that are not possible with other AO systems. Robo-AO is currently the most efficient AO system in existence, achieving an observation rate of 20+ science targets per hour. In more than three years of operations at Palomar Observatory, it has been quite successful, producing technology that is being adopted by other AO systems and robotic telescope projects, as well as several high-impact scientific publications. Now, Robo-AO has been selected to take over operation of the Kitt Peak National Observatory 2.1-m telescope. This will give Robo-AO KP the opportunity to pursue multiple science programs consisting of several thousand targets each during the three years it will be on the telescope. One-sixth of the observing time will be allocated to the US community through the NOAO TAC process. This presentation will discuss the process of adapting Robo-AO to the KPNO 2.1-m telescope, the plans for integration and initial operations, and the science operations and programs to be pursued.
6. Comparison of the marginal adaptation of direct and indirect composite inlay restorations with optical coherence tomography.
Science.gov (United States)
Türk, Ayşe Gözde; Sabuncu, Metin; Ünal, Sena; Önal, Banu; Ulusoy, Mübin
2016-01-01
The purpose of the study was to use the photonic imaging modality of optical coherence tomography (OCT) to compare the marginal adaptation of composite inlays fabricated by direct and indirect techniques. Class II cavities were prepared on 34 extracted human molar teeth. The cavities were randomly divided into two groups according to the inlay fabrication technique. The first group was restored directly on the cavities with a composite (Esthet X HD, Dentsply, Germany) after isolation. The second group was restored indirectly with the same composite material. Marginal adaptations were scanned before cementation with the invisible infrared light beam of an OCT system (Thorlabs), allowing measurement at 200 µm intervals. Restorations were cemented with a self-adhesive resin cement (SmartCem2, Dentsply), and then marginal adaptations were again measured with OCT. Mean values were statistically compared by using the independent-samples t-test and the paired-samples t-test (p<0.05). Direct inlays showed lower marginal discrepancy values than indirect inlays, before (p=0.00001442) and after (p=0.00001466) cementation. Marginal discrepancy values increased for all restorations after cementation (p=0.00008839 and p=0.000000952 for direct and indirect inlays, respectively). The mean marginal discrepancy value of the direct group increased from 56.88±20.04 µm to 91.88±31.7 µm, whereas that of the indirect group increased from 107.54±35.63 µm to 170.29±54.83 µm. Different techniques are available to detect the marginal adaptation of restorations, but the OCT system can give quantitative information about resin cement thickness and its interaction between tooth and restoration in a nondestructive manner. Direct inlays presented smaller marginal discrepancy than indirect inlays. The marginal discrepancy values, which refer to cement thickness, increased for all restorations after cementation.
7. Adaptive Optics Imaging of Pluto-Charon and the Discovery of a Moon around the Asteroid 45 Eugenia: The Potential of Adaptive Optics in Planetary Astronomy
Science.gov (United States)
Close, L. M.; Merline, W. J.; Tholen, D.; Owen, T.; Roddier, F.; Dumas, C.
1999-12-01
We outline two separate projects which highlight the power of adaptive optics (AO) to aid planetary research. The first project utilized AO to resolve the Pluto-Charon system by producing 0.15" FWHM images. We used the University of Hawaii AO system (Roddier et al. PASP 103, 131, 1991) at CFHT to obtain deep (20 min) narrow-band images in and out of the molecular bands of water and methane ices. Our images confirm that the variation of Pluto's albedo is mainly governed by the presence of methane ice over its surface, resulting in a lower albedo at 2.26 um than at 2.02 um. Our observations also confirm that Charon is mostly covered with water ice (Buie et al. NATURE 329, 522, 1987). See Tholen et al. (ICARUS submitted) for more details on these AO results. In another application of AO, we discovered a moon around asteroid 45 Eugenia by use of the PUEO AO facility at CFHT (Rigaut et al. PASP 110, 152, 1998). With PUEO we performed a search for asteroidal satellites among two dozen asteroids, achieving moderate Strehl ratios (35%) and FWHM of about 0.12" at H band. During this survey, we detected a faint close companion to 45 Eugenia. The satellite was 6.14 magnitudes (at 1.65 um) fainter and located at most 0.75" from Eugenia. Without the ability of AO to sharpen the contrast and increase the resolution to 0.1", the detection of this companion would have been impossible with ground-based telescopes. The companion was found to be in a 1200 km circular orbit with a period of 4.7 days. A more detailed discussion of this new satellite is given by Merline et al. in this volume. Adaptive optics is entering a powerful new age as all the major ground-based large telescopes are developing facility AO systems. Planetary astronomy is particularly well poised to take advantage of the diffraction-limited, near-IR images (0.050" FWHM) that will become commonplace at all 8 m facilities in the near future (it is already occurring on the KECK and GEMINI-North telescopes). In particular, we
8. Recent results and future plans for a 45 actuator adaptive x-ray optics experiment at the advanced light source
Energy Technology Data Exchange (ETDEWEB)
Brejnholt, Nicolai F., E-mail: brejnholt1@llnl.gov; Poyneer, Lisa A.; Hill, Randal M.; Pardini, Tommaso; Hagler, Lisle; Jackson, Jessie; Jeon, Jae; McCarville, Thomas J.; Palmer, David W. [Lawrence Livermore National Laboratory, Livermore, California (United States); Celestre, Richard [Advanced Light Source - Lawrence Berkeley National Laboratory, Berkeley, California (United States); Brooks, Audrey D. [Northrop Grumman - AOA Xinetics Inc., Cambridge, Massachusetts (United States)
2016-07-27
We report on the current status of the Adaptive X-ray Optics project run by Lawrence Livermore National Laboratory (LLNL). LLNL is collaborating with the Advanced Light Source (ALS) to demonstrate a near real-time adaptive X-ray optic. To this end, a custom-built 45 cm long deformable mirror has been installed at ALS beamline 5.3.1 (end station 2) for a two-year period that started in September 2014. We will outline general aspects of the instrument, present results from a recent experimental campaign and touch on future plans for the project.
9. DYNAMISM OF DOT SUBRETINAL DRUSENOID DEPOSITS IN AGE-RELATED MACULAR DEGENERATION DEMONSTRATED WITH ADAPTIVE OPTICS IMAGING.
Science.gov (United States)
Zhang, Yuhua; Wang, Xiaolin; Godara, Pooja; Zhang, Tianjiao; Clark, Mark E; Witherspoon, C Douglas; Spaide, Richard F; Owsley, Cynthia; Curcio, Christine A
2018-01-01
To investigate the natural history of dot subretinal drusenoid deposits (SDD) in age-related macular degeneration, using high-resolution adaptive optics scanning laser ophthalmoscopy. Six eyes of four patients with intermediate age-related macular degeneration were studied at baseline and 1 year later. Individual dot SDD within the central 30° retina were examined with adaptive optics scanning laser ophthalmoscopy and optical coherence tomography. A total of 269 solitary SDD were identified at baseline. Over 12.25 ± 1.18 months, all 35 Stage 1 SDD progressed to advanced stages. Eighteen (60%) Stage 2 lesions progressed to Stage 3 and 12 (40%) remained at Stage 2. Of 204 Stage 3 SDD, 12 (6.4%) disappeared and the rest remained. Twelve new SDD were identified, including 6 (50%) at Stage 1, 2 (16.7%) at Stage 2, and 4 (33.3%) at Stage 3. The mean percentage of the retina affected by dot SDD, measured by the adaptive optics scanning laser ophthalmoscopy, increased in 5/6 eyes (from 2.31% to 5.08% in the most changed eye) and decreased slightly in 1/6 eye (from 10.67% to 10.54%). Dynamism, the absolute value of the areas affected by new and regressed lesions, ranged from 0.7% to 9.3%. Adaptive optics scanning laser ophthalmoscopy reveals that dot SDD, like drusen, are dynamic.
10. Functional photoreceptor loss revealed with adaptive optics: an alternate cause of color blindness.
Science.gov (United States)
Carroll, Joseph; Neitz, Maureen; Hofer, Heidi; Neitz, Jay; Williams, David R
2004-06-01
There is enormous variation in the X-linked L/M (long/middle wavelength sensitive) gene array underlying "normal" color vision in humans. This variability has been shown to underlie individual variation in color matching behavior. Recently, red-green color blindness has also been shown to be associated with distinctly different genotypes. This has opened the possibility that there may be important phenotypic differences within classically defined groups of color blind individuals. Here, adaptive optics retinal imaging has revealed a mechanism for producing dichromatic color vision in which the expression of a mutant cone photopigment gene leads to the loss of the entire corresponding class of cone photoreceptor cells. Previously, the theory that common forms of inherited color blindness could be caused by the loss of photoreceptor cells had been discounted. We confirm that remarkably, this loss of one-third of the cones does not impair any aspect of vision other than color.
11. An automated algorithm for photoreceptors counting in adaptive optics retinal images
Science.gov (United States)
Liu, Xu; Zhang, Yudong; Yun, Dai
2012-10-01
Eyes are important organs that detect light and form spatial and color vision. Knowing the exact number of cones in a retinal image is of great importance in helping us understand the mechanism of the eye's function and the pathology of some eye diseases. In order to analyze data in real time and process large-scale data, an automated algorithm is designed to label cone photoreceptors in adaptive optics (AO) retinal images. Images acquired by a flood-illuminated AO system are used to test the efficiency of this algorithm. We labeled these images both automatically and manually, and compared the results of the two methods. A 94.1% to 96.5% agreement rate between the two methods is achieved in this experiment, which demonstrates the reliability and efficiency of the algorithm.
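An agreement rate between automated and manual cone labeling, as reported above, can be estimated by matching each manually marked cone to its nearest automated detection within a small tolerance. A rough sketch of such a comparison (the greedy matching and 2-pixel tolerance are assumptions for illustration, not the paper's protocol):

```python
import numpy as np

def agreement_rate(auto_pts, manual_pts, tol=2.0):
    """Fraction of manually labeled cones that have an automated
    detection within `tol` pixels (greedy nearest-neighbour matching)."""
    auto = [np.asarray(a, dtype=float) for a in auto_pts]
    matched = 0
    for m in (np.asarray(p, dtype=float) for p in manual_pts):
        d = [np.linalg.norm(m - a) for a in auto]
        if d and min(d) <= tol:
            matched += 1
            auto.pop(int(np.argmin(d)))  # each detection is used only once
    return matched / len(manual_pts)

manual = [(10, 10), (20, 5), (30, 30), (40, 12)]
auto   = [(10, 11), (21, 5), (30, 29)]            # one cone missed
print(agreement_rate(auto, manual))               # -> 0.75
```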
12. Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics
Science.gov (United States)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-02-01
Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
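The preconditioned conjugate gradient iteration at the heart of this approach can be sketched in a few lines. Here a simple Jacobi (diagonal) preconditioner stands in for the paper's multigrid/block Gauss-Seidel preconditioner, and the toy tridiagonal system is purely illustrative:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A; M_inv applies the (approximate) inverse preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system standing in for the sparse wavefront reconstruction matrix.
n = 50
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)    # Jacobi preconditioner
print(np.allclose(A @ x, b))      # -> True
```

The key property exploited by the paper is that every step above needs only matrix-vector products, so a sparse A never has to be inverted or even stored densely.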
13. Compensation for the orbital angular momentum of a vortex beam in turbulent atmosphere by adaptive optics
Science.gov (United States)
Li, Nan; Chu, Xiuxiang; Zhang, Pengfei; Feng, Xiaoxing; Fan, ChengYu; Qiao, Chunhong
2018-01-01
A method which can be used to compensate simultaneously for the distorted orbital angular momentum and wavefront of a beam in atmospheric turbulence has been proposed. To confirm the validity of the method, an experimental setup for up-link propagation of a vortex beam in a turbulent atmosphere has been simulated. Simulation results show that both the distorted orbital angular momentum and the distorted wavefront of a beam due to turbulence can be compensated by an adaptive optics system with the help of a cooperative beacon at the satellite. However, when the number of lenslets of the wavefront sensor (WFS) and of actuators of the deformable mirror (DM) is small, satisfactory results cannot be obtained.
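The OAM content of a received vortex beam, whose compensation the paper studies, can be quantified by an azimuthal mode decomposition of the field on a ring of constant radius. A minimal sketch (the pure-vortex test field and the sampling are illustrative, not the paper's simulation):

```python
import numpy as np

def oam_spectrum(ring, phi, l_values):
    """Relative power carried by each OAM mode l, from the azimuthal
    Fourier decomposition of the field sampled on a ring."""
    dphi = phi[1] - phi[0]
    powers = np.array([
        np.abs(np.sum(ring * np.exp(-1j * l * phi)) * dphi / (2 * np.pi))**2
        for l in l_values
    ])
    return powers / powers.sum()

phi = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
vortex = np.exp(1j * 3 * phi)      # undistorted l = +3 vortex ring
p = oam_spectrum(vortex, phi, range(7))
print(int(np.argmax(p)))           # -> 3
```

Turbulence spreads this spectrum across neighbouring l values; successful compensation re-concentrates the power in the launched mode.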
14. Fast-adaptive fiber-optic sensor for ultra-small vibration and deformation measurement
International Nuclear Information System (INIS)
Romashko, R V; Girolamo, S Di; Kulchin, Y N; Launay, J C; Kamshilin, A A
2007-01-01
An adaptive fiber-optic interferometric measuring system based on a dynamic hologram recorded in a photorefractive CdTe crystal without applying an external electric field is developed. Vectorial mixing of two waves with different polarizations in the anisotropic diffraction geometry allows for the realization of a linear regime of phase demodulation at the diffusion hologram. High sensitivity of the interferometer is achieved by recording the hologram in reflection geometry at high spatial frequencies in a crystal with a sufficient concentration of photorefractive centers. The sensitivity obtained makes possible broadband detection of ultra-small vibrations with amplitudes of less than 0.1 nm. The high cut-off frequency of the interferometer, achieved with low-power light sources thanks to the fast response of the CdTe crystal, allows one to eliminate temperature fluctuations and other industrial noises.
15. Adaptive matching of the iota ring linear optics for space charge compensation
Energy Technology Data Exchange (ETDEWEB)
2016-10-09
Many present and future accelerators must operate with high-intensity beams, where distortions induced by space charge forces are among the major limiting factors. A betatron tune depression above approximately 0.1 per cell leads to significant distortions of the linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space charge effects. We implement an adaptive algorithm for linear lattice re-matching with full account of space charge in the linear approximation for the case of Fermilab's IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions, and phase advances at and between points of interest. An iterative technique based on singular value decomposition is used to search for the optimum by varying a wide array of model parameters.
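The SVD-based search step in such re-matching schemes can be illustrated with a minimum-norm least-squares solve in which small singular values are truncated for numerical robustness. The toy response matrix below is a hypothetical stand-in for the actual lattice model:

```python
import numpy as np

def svd_solve(J, residual, cutoff=1e-6):
    """Minimum-norm least-squares parameter step via truncated SVD:
    singular values below cutoff * s_max are discarded."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.where(s > cutoff * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ residual))

# Toy response matrix J mapping parameter changes to goal-function errors.
rng = np.random.default_rng(1)
J = rng.normal(size=(6, 4))        # 6 goals, 4 free parameters
target_step = rng.normal(size=4)
residual = J @ target_step         # error produced by the target step
step = svd_solve(J, residual)
print(np.allclose(step, target_step))  # -> True
```

In an iterative matcher this solve would be repeated, re-evaluating the residual against the goals after each parameter update.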
16. Fluorescence imaging as a diagnostic of M-band x-ray drive condition in hohlraum with fluorescent Si targets
International Nuclear Information System (INIS)
Li, Qi; Hu, Zhimin; Yao, Li; Huang, Chengwu; Yuan, Zheng; Zhao, Yang; Xiong, Gang; Qing, Bo; Lv, Min; Zhu, Tuo; Deng, Bo; Li, Jin; Wei, Minxi; Zhan, Xiayu; Li, Jun; Yang, Yimeng; Su, Chunxiao; Yang, Guohong; Zhang, Jiyan; Li, Sanwei
2017-01-01
Fluorescence imaging of surrogate Si-doped CH targets has been used to provide a measurement of the high-energy (i.e. M-band) x-ray drive symmetry upon the capsule in a hohlraum on the Shenguang-II laser facility. A series of experiments dedicated to the study of photo-pumping and fluorescence effects in Si plasma are presented. To investigate the feasibility of fluorescence imaging in Si plasma, a silicon plasma in a Si-foil target is pre-formed in the ground state by the soft x rays from a half-hohlraum, and is then photo-pumped by the K-shell lines from a spatially distinct laser-produced Si plasma. The resonant Si photon pump is used to improve the fluorescence signal and produce a visible image in the Si foil. Preliminary fluorescence imaging of a Si-ball target is performed in both Si-doped and pure Au hohlraums. The usual capsule at the center of the hohlraum is replaced with a solid Si-doped CH ball (Si-ball). Since the fluorescence is proportional to the photon pump upon the Si plasma, the high-energy x-ray drive symmetry maps directly onto the fluorescence distribution of the Si-ball. (paper)
Science.gov (United States)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-09-01
Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
18. Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems
Science.gov (United States)
Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.
2018-05-01
Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimize the performance of such systems. Previous studies already proved that the Gemini Multi-Conjugate AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provides excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can be considered, therefore, as an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to any present or new-generation facility supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.
19. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.
Directory of Open Access Journals (Sweden)
Marco Lombardo
Full Text Available PURPOSE: To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. METHODS: Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degrees temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and the foveal center, and the manual checking of the cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. RESULTS: The cone density declined with decreasing sampling area, and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL, and between data referred to the PRL or the foveal center, was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, the presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. CONCLUSIONS: The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi
Science.gov (United States)
Gilles, Luc; Ellerbroek, Brent L; Vogel, Curtis R
2003-09-10
Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
1. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting.
Science.gov (United States)
Liu, Jianfei; Jung, HaeWon; Dubra, Alfredo; Tam, Johnny
2017-09-01
Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
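The recall, precision, and F1 score reported above follow from standard detection counts. A small sketch (the true-positive, false-positive, and false-negative counts below are hypothetical values chosen only to reproduce the reported percentages, not the study's actual counts):

```python
def detection_scores(tp, fp, fn):
    """Recall, precision, and F1 from matched (tp), spurious (fp),
    and missed (fn) cone identifications."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1

rec, prec, f1 = detection_scores(tp=929, fp=94, fn=71)
print(round(rec, 3), round(prec, 3), round(f1, 3))  # -> 0.929 0.908 0.918
```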
2. Develop techniques for ion implantation of PLZT [lead-lanthanum-zirconate-titanate] for adaptive optics
International Nuclear Information System (INIS)
Batishko, C.R.; Brimhall, J.L.; Pawlewicz, W.T.; Stahl, K.A.; Toburen, L.H.
1987-09-01
Research was conducted at Pacific Northwest Laboratory to develop high-photosensitivity adaptive optical elements utilizing ion-implanted lanthanum-doped lead-zirconate-titanate (PLZT). One-centimeter-square samples were prepared by implanting ferroelectric and anti-ferroelectric PLZT with a variety of species or combinations of species. These included Ne, O, Ni, Ne/Cr, Ne/Al, Ne/Ni, Ne/O, and Ni/O, at a variety of energies and fluences. An indium-tin oxide (ITO) electrode coating was designed to give a balance of high conductivity and optical transmission at near-UV to near-IR wavelengths. Samples were characterized for photosensitivity; implanted layer thickness, index of refraction, and density; electrode (ITO) conductivity; and in some cases, residual stress curvature. Thin-film anti-ferroelectric PLZT was deposited in a preliminary experiment. The structure was amorphous, with x-ray diffraction showing the beginnings of a structure at substrate temperatures of approximately 550 °C. This report summarizes the research and provides a sampling of the data taken during the report period.
3. Effective distance adaptation traffic dispatching in software defined IP over optical network
Science.gov (United States)
Duan, Zhiwei; Li, Hui; Liu, Yuze; Ji, Yuefeng; Li, Hongfa; Lin, Yi
2017-10-01
The rapid growth of IP traffic has driven the wide deployment of optical devices (ROADMs/OXCs, etc.). Meanwhile, with the emergence of high-performance network services such as ultra-high-definition video transmission, users are increasingly demanding about network quality of service (QoS). However, the pass-band shape of the WSSs used in ROADMs/OXCs is not ideal, causing spectral narrowing, and spectral narrowing can lead to signal impairment. Therefore, guard-bands need to be inserted between adjacent paths. In order to minimize the bandwidth wasted on guard-bands, we propose an effective distance-adaptation traffic dispatching algorithm for IP over optical networks based on an SDON architecture. We use virtualization technology to set up virtual direct resource links by extracting part of the resources on paths that meet certain specific constraints, and we assign different bandwidths to each IP request based on path length. No guard-bands are needed between adjacent paths on a virtual link, which effectively reduces the number of guard-bands and saves spectrum.
4. Demonstration of a vectorial optical field generator with adaptive close loop control.
Science.gov (United States)
Chen, Jian; Kong, Lingjiang; Zhan, Qiwen
2017-12-01
We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with adaptive close-loop control. The close-loop control capability is illustrated with the calibration of the polarization modulation of the system. To calibrate the polarization-ratio modulation, we generate a 45° linearly polarized beam and make it propagate through a linear analyzer whose transmission axis is orthogonal to the incident polarization. For the retardation calibration, a circularly polarized beam is employed, and a circular polarization analyzer with the opposite chirality is placed in front of the CCD detector. In both cases, the close-loop control automatically varies the corresponding calibration parameters over the pre-set ranges, generates the phase patterns applied to the spatial light modulators, and records the intensity distribution of the output beam with the CCD camera. The optimized calibration parameters are those that minimize the total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and full Stokes-parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. The comparisons among these results clearly show that the obtained calibration parameters remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beams with large ellipticity, indicating the significance of the presented close-loop control in enhancing the performance of the VOF-Gen.
5. Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary
Science.gov (United States)
Lv, Hongli; Fu, Shujun; Zhang, Caiming; Zhai, Lin
2018-05-01
As a high-resolution biomedical imaging modality, optical coherence tomography (OCT) is widely used in medical sciences. However, OCT images often suffer from speckle noise, which can mask some important image information, and thus reduce the accuracy of clinical diagnosis. Taking full advantage of nonlocal self-similarity and adaptive 2D-dictionary-based sparse representation, in this work, a speckle noise reduction algorithm is proposed for despeckling OCT images. To reduce speckle noise while preserving local image features, similar nonlocal patches are first extracted from the noisy image and put into groups using a gamma-distribution-based block matching method. An adaptive 2D dictionary is then learned for each patch group. Unlike traditional vector-based sparse coding, we express each image patch by the linear combination of a few matrices. This image-to-matrix method can exploit the local correlation between pixels. Since each image patch might belong to several groups, the despeckled OCT image is finally obtained by aggregating all filtered image patches. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of objective metrics and visual inspection.
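The group-then-model structure of such methods can be sketched in a few lines. This is only an illustrative stand-in: it uses plain L2 block matching (not the paper's gamma-distribution-based criterion) and a truncated SVD per group instead of a learned adaptive 2D dictionary; all names and parameters here are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
img = 1.0 + 0.2 * rng.standard_normal((32, 32))  # flat region + noise

def block_match(img, ref_yx, patch=8, search=8, k=16):
    """Collect the k patches most similar (L2) to the reference patch."""
    ry, rx = ref_yx
    ref = img[ry:ry + patch, rx:rx + patch]
    cands = []
    for y in range(max(0, ry - search), min(img.shape[0] - patch, ry + search)):
        for x in range(max(0, rx - search), min(img.shape[1] - patch, rx + search)):
            p = img[y:y + patch, x:x + patch]
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda t: t[0])
    return np.stack([p for _, p in cands[:k]])

def denoise_group(group, rank=2):
    """Truncated-SVD approximation of the patch stack, a simple
    stand-in for sparse coding over a learned 2D dictionary."""
    n, h, w = group.shape
    U, s, Vt = np.linalg.svd(group.reshape(n, -1), full_matrices=False)
    s[rank:] = 0.0
    return ((U * s) @ Vt).reshape(n, h, w)

group = block_match(img, (10, 10))
den = denoise_group(group)
print(group.shape, den.shape)  # both (16, 8, 8)
```

The full method would additionally weight overlapping patches when aggregating the filtered groups back into the image.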
6. Cone structure imaged with adaptive optics scanning laser ophthalmoscopy in eyes with nonneovascular age-related macular degeneration.
Science.gov (United States)
Zayit-Soudry, Shiri; Duncan, Jacque L; Syed, Reema; Menghini, Moreno; Roorda, Austin J
2013-11-15
To evaluate cone spacing using adaptive optics scanning laser ophthalmoscopy (AOSLO) in eyes with nonneovascular AMD, and to correlate progression of AOSLO-derived cone measures with standard measures of macular structure. Adaptive optics scanning laser ophthalmoscopy images were obtained over 12 to 21 months from seven patients with AMD including four eyes with geographic atrophy (GA) and four eyes with drusen. Adaptive optics scanning laser ophthalmoscopy images were overlaid with color, infrared, and autofluorescence fundus photographs and spectral domain optical coherence tomography (SD-OCT) images to allow direct correlation of cone parameters with macular structure. Cone spacing was measured for each visit in selected regions including areas over drusen (n = 29), at GA margins (n = 14), and regions without drusen or GA (n = 13) and compared with normal, age-similar values. Adaptive optics scanning laser ophthalmoscopy imaging revealed continuous cone mosaics up to the GA edge and overlying drusen, although reduced cone reflectivity often resulted in hyporeflective AOSLO signals at these locations. Baseline cone spacing measures were normal in 13/13 unaffected regions, 26/28 drusen regions, and 12/14 GA margin regions. Although standard clinical measures showed progression of GA in all study eyes, cone spacing remained within normal ranges in most drusen regions and all GA margin regions. Adaptive optics scanning laser ophthalmoscopy provides adequate resolution for quantitative measurement of cone spacing at the margin of GA and over drusen in eyes with AMD. Although cone spacing was often normal at baseline and remained normal over time, these regions showed focal areas of decreased cone reflectivity. These findings may provide insight into the pathophysiology of AMD progression. (ClinicalTrials.gov number, NCT00254605).
7. Optical Communication System for Remote Monitoring and Adaptive Control of Distributed Ground Sensors Exhibiting Collective Intelligence
Energy Technology Data Exchange (ETDEWEB)
Cameron, S.M.; Stantz, K.M.; Trahan, M.W.; Wagner, J.S.
1998-11-01
Comprehensive management of the battle-space has created new requirements in information management, communication, and interoperability as they affect surveillance and situational awareness. The objective of this proposal is to expand intelligent-controls theory to produce a uniquely powerful implementation of distributed ground-based measurement incorporating both local collective behavior and interoperative global optimization for sensor fusion and mission oversight. By using a layered hierarchical control architecture to orchestrate adaptive reconfiguration of autonomous robotic agents, we can improve overall robustness and functionality in dynamic tactical environments without information bottlenecks. In this concept, each sensor is equipped with a miniaturized optical reflectance modulator which is interactively monitored as a remote transponder using a covert laser communication protocol from a remote mothership or operative. Robot data-sharing at the ground level can be leveraged with global evaluation criteria, including terrain overlays and remote imaging data. Information sharing and distributed intelligence open up a new class of remote-sensing applications in which small single-function autonomous observers at the local level can collectively optimize and measure large-scale ground-level signals. As the need for coverage and the number of agents grow to improve spatial resolution, cooperative behavior orchestrated by a global situational-awareness umbrella will be an essential ingredient to offset increasing bandwidth requirements within the net. A system of the type described in this proposal will be capable of sensitively detecting, tracking, and mapping spatial distributions of measurement signatures which are non-stationary or obscured by clutter and interfering obstacles by virtue of adaptive reconfiguration. This methodology could be used, for example, to field an adaptive ground-penetrating radar for detection of underground structures in
8. Errors in the estimation method for the rejection of vibrations in adaptive optics systems
Science.gov (United States)
Kania, Dariusz
2017-06-01
In recent years the problem of mechanical vibrations in adaptive optics (AO) systems has received renewed attention. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting them is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimate the perturbation parameters, estimate the frequency response of the plant, and update the reference signal to reject or minimize the vibration. In the first step the choice of estimation method is very important. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in the measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR, or H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic error for given values of H, CiR and N before the start of the estimation process.
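A much simplified version of FFT-based frequency estimation with spectrum interpolation can be sketched as follows. Note this toy uses a Hann window and parabolic interpolation on the log-magnitude spectrum rather than the MSD windows and exact interpolation formulas of the cited method, so it only illustrates the general idea.

```python
import numpy as np

def estimate_freq(signal, fs):
    """Coarse FFT peak, refined by parabolic interpolation on the
    log-magnitude spectrum (Hann window; the cited method instead
    uses dedicated MSD time windows)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1          # skip DC and Nyquist
    a, b, c = np.log(spec[k - 1:k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)     # parabola vertex offset
    return (k + delta) * fs / n

fs = 1000.0
t = np.arange(2048) / fs
x = np.exp(-5 * t) * np.sin(2 * np.pi * 123.4 * t)  # damped sinusoid
print(estimate_freq(x, fs))   # close to 123.4 Hz
```

As the abstract notes, accuracy degrades as the damping ratio grows and as the window length (CiR, N) shrinks; this sketch shows only the interpolation step, not the error analysis.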
9. Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method
Science.gov (United States)
Kania, Dariusz
2017-06-01
The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge because of shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce the signals, but they are not very effective. Among software solutions, adaptive methods are very popular; an AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters - frequency, amplitude and phase - which are essential to produce and adjust a proper signal to reduce or eliminate the vibration. This paper presents a fast (below 10 ms) and accurate method for estimating the frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method's accuracy depends on several parameters: CiR - the number of signal periods in the measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, THD, b - the number of A/D converter bits in a real-time system, γ - the damping ratio of the tested signal, and φ - the phase of the tested signal. Systematic errors increase when N, CiR, or H decrease and when γ increases. The systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effect of the signal phase and of γ on the results.
Science.gov (United States)
Broom, Donald M
2006-01-01
11. Pre-processing, registration and selection of adaptive optics corrected retinal images.
Science.gov (United States)
Ramaswamy, Gomathy; Devaney, Nicholas
2013-07-01
In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood-illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best-quality images, and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average filtered image, homomorphic filtering, and a wavelet-based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches, parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th-level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translations measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better-quality frames (e.g. best 75% of images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased
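The two-stage translation estimate (coarse cross-correlation peak, then parabolic interpolation for the sub-pixel part) can be sketched as follows; this is an illustrative reimplementation under our own assumptions, not the authors' code, and it fits the parabola on the log of the correlation values.

```python
import numpy as np

def subpixel_shift(a, b):
    """Integer peak of the circular cross-correlation, refined by
    parabolic interpolation on the peak and its neighbours per axis.
    Returns the (dy, dx) by which b is translated relative to a."""
    n0, n1 = a.shape
    c = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    py, px = np.unravel_index(np.argmax(c), c.shape)

    def vertex(m1, m0, p1):
        m1, m0, p1 = np.log([m1, m0, p1])
        return 0.5 * (m1 - p1) / (m1 - 2.0 * m0 + p1)

    dy = py + vertex(c[py - 1, px], c[py, px], c[(py + 1) % n0, px])
    dx = px + vertex(c[py, px - 1], c[py, px], c[py, (px + 1) % n1])
    # map circular indices to signed shifts
    return ((dy + n0 / 2) % n0 - n0 / 2, (dx + n1 / 2) % n1 - n1 / 2)

# synthetic check: Gaussian spot moved by (2.5, -0.75) pixels
y, x = np.mgrid[:64, :64]
spot = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 8.0)
print(subpixel_shift(spot(16, 16), spot(18.5, 15.25)))  # ≈ (2.5, -0.75)
```

Real fundus frames would first be illumination-corrected and windowed; the log-parabola is exact for Gaussian peaks, which is why the synthetic check recovers the shift almost perfectly.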
12. Refined adaptive optics simulation with wide field of view for the E-ELT
International Nuclear Information System (INIS)
Chebbo, Manal
2012-01-01
Refined simulation tools for wide-field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. The increase in the number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes unusable, owing to the huge number of operations to be performed at each step of the adaptive optics (AO) loop. This computational burden requires new approaches to computing the DM voltages from WFS data. Classical matrix inversion and matrix-vector multiplication have to be replaced by a cleverer iterative resolution of the Least-Squares or Minimum Mean Square Error criterion (based on sparse-matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves become more complex: data fusion from multiple laser and natural guide stars (LGS/NGS) has to be optimized; mirrors covering the full field of view have to be coupled, through split or integrated tomography schemes, with dedicated mirrors inside the scientific instrument itself; differential pupil and/or field rotations have to be considered; etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For these reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS/NGS) to effectively manage modes such as tip, tilt and defocus in the entire tomographic reconstruction process. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) that have to manage a combination of an adaptive telescope and a post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph
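The replacement of explicit matrix inversion by an iterative sparse solver can be illustrated on a toy least-squares reconstruction. The interaction matrix, problem sizes and regularization below are arbitrary stand-ins, not values from the thesis; the point is that the regularized normal equations are solved by conjugate gradients without ever forming a dense inverse.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n_act, n_meas = 500, 800
# sparse "interaction matrix" D: each WFS slope sees a few actuators
D = sparse.random(n_meas, n_act, density=0.01, random_state=0, format="csr")
v_true = rng.standard_normal(n_act)
s = D @ v_true                       # simulated WFS measurements

# regularized normal equations (D^T D + eps I) v = D^T s,
# solved iteratively -- no dense inversion is ever formed
A = (D.T @ D + 1e-3 * sparse.identity(n_act)).tocsr()
v, info = cg(A, D.T @ s)
print(info, float(np.linalg.norm(A @ v - D.T @ s)))
```

On an ELT-scale system D would come from the WFS geometry and A would include turbulence statistics (the MMSE prior), but the sparse-iterative structure is the same.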
13. Characterization of highly stacked InAs quantum dot layers on InP substrate for a planar saturable absorber at 1.5 μm band
International Nuclear Information System (INIS)
Inoue, Jun; Akahane, Kouichi; Yamamoto, Naokatsu; Isu, Toshiro; Tsuchiya, Masahiro
2006-01-01
We examined the absorption saturation properties in the 1.5 μm band of novel highly stacked InAs quantum dot layers. The transmission change at vertical incidence based on the saturable absorption of the quantum dots was more than 1%. This value is as large as the reflection changes of previously reported 1-μm-band quantum dot saturable absorber with interference enhancement. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
International Development Research Centre (IDRC) Digital Library (Canada)
building skills, knowledge or networks on adaptation, ... the African partners leading the AfricaAdapt network, together with the UK-based Institute of Development Studies; and ... UNCCD Secretariat, Regional Coordination Unit for Africa, Tunis, Tunisia .... 26 Rural–urban Cooperation on Water Management in the Context of.
15. Pipelining Computational Stages of the Tomographic Reconstructor for Multi-Object Adaptive Optics on a Multi-GPU System
KAUST Repository
Charara, Ali; Ltaief, Hatem; Gratadour, Damien; Keyes, David E.; Sevin, Arnaud; Abdelfattah, Ahmad; Gendron, Eric; Morel, Carine; Vidal, Fabrice
2014-01-01
The European Extremely Large Telescope (E-ELT) is a high-priority project in ground-based astronomy that aims at constructing the largest telescope ever built. MOSAIC is an instrument proposed for the E-ELT using the Multi-Object Adaptive Optics (MOAO) technique, which compensates for the effects of atmospheric turbulence on image quality and operates on patches across a large FoV.
17. H2-optimal control of an adaptive optics system: Part I, data-driven modeling of the wavefront disturbance
NARCIS (Netherlands)
Hinnen, K.; Verhaegen, M.; Doelman, N.
2005-01-01
Even though the wavefront distortion introduced by atmospheric turbulence is a dynamic process, its temporal evolution is usually neglected in the adaptive optics (AO) control design. Most AO control systems consider only the spatial correlation in a separate wavefront reconstruction step. By
18. Excitation of the 4.3-μm bands of CO2 by low-energy electrons
International Nuclear Information System (INIS)
Bulos, R.R.; Phelps, A.V.
1976-01-01
Rate coefficients for the excitation of the 4.3-μm bands of CO2 by low-energy electrons in CO2 have been measured using a drift-tube technique. The CO2 density [(1.5 to 7) x 10^17 molecules/cm^3] was chosen to maximize the radiation reaching the detector. Line-by-line transmission calculations were used to take into account the absorption of 4.3-μm radiation. A small fraction of the approximately 10^-8 W of 4.3-μm radiation produced by the approximately 10^-7 A electron current was incident on an InSb photovoltaic detector. The detector calibration and absorption calculations were checked by measuring the readily calculated excitation coefficients for vibrational excitation of N2 containing a small concentration of CO2. For pure CO2 the number of molecules capable of emitting 4.3-μm radiation produced per cm of electron drift and per CO2 molecule varied from 10^-17 cm^2 at E/N = 6 x 10^-17 V cm^2 to 5.4 x 10^-16 cm^2 at E/N = 4 x 10^-16 V cm^2. Here E is the electric field and N is the total gas density. The excitation coefficients at lower E/N are much larger than estimated previously. A set of vibrational excitation cross sections is obtained for CO2 which is consistent with the excitation-coefficient data and with most of the published electron-beam data.
19. GLAS: engineering a common-user Rayleigh laser guide star for adaptive optics on the William Herschel Telescope
Science.gov (United States)
Talbot, Gordon; Abrams, Don Carlos; Apostolakos, Nikolaos; Bassom, Richard; Blackburn, Colin; Blanken, Maarten; Cano Infantes, Diego; Chopping, Alan; Dee, Kevin; Dipper, Nigel; Elswijk, Eddy; Enthoven, Bernard; Gregory, Thomas; ter Horst, Rik; Humphreys, Ron; Idserda, Jan; Jolley, Paul; Kuindersma, Sjouke; McDermid, Richard; Morris, Tim; Myers, Richard; Pico, Sergio; Pragt, Johan; Rees, Simon; Rey, Jürg; Reyes, Marcos; Rutten, René; Schoenmaker, Ton; Skvarc, Jure; Tromp, Niels; Tulloch, Simon; Veninga, Auke
2006-06-01
The GLAS (Ground-layer Laser Adaptive-optics System) project will construct a common-user Rayleigh laser beacon that works in conjunction with the existing NAOMI adaptive optics system, instruments (near-IR imager INGRID, optical integral-field spectrograph OASIS, coronagraph OSCA) and infrastructure at the 4.2-m William Herschel Telescope (WHT) on La Palma. The laser guide star system will increase the sky coverage available to high-order adaptive optics from ~1% to approaching 100% and will be optimized for scientific exploitation of the OASIS integral-field spectrograph at optical wavelengths. Additionally, GLAS will be used in on-sky experiments on the application of laser beacons to ELTs. This paper describes the full range of engineering in the project, covering the laser launch system, wavefront sensors, computer control, mechanisms, diagnostics, CCD detectors and the safety system. GLAS is a fully funded project, with the final design completed and all equipment ordered, including the laser. Integration has started on the WHT and first light is expected in summer 2006.
20. Application of fluidic lens technology to an adaptive holographic optical element see-through autophoropter
Science.gov (United States)
Chancy, Carl H.
A device for performing an objective eye exam has been developed to automatically determine ophthalmic prescriptions. The closed-loop fluidic auto-phoropter has been designed, modeled, fabricated and tested for the automatic measurement and correction of a patient's prescription. The adaptive phoropter is designed by combining a spherical-power fluidic lens and two cylindrical fluidic lenses oriented 45° relative to each other. In addition, the system incorporates Shack-Hartmann wavefront sensing technology to identify the eye's wavefront error and corresponding prescription. Using the wavefront error information, the fluidic auto-phoropter nulls the eye's lower-order wavefront error by applying the appropriate volumes to the fluidic lenses. The combination of the Shack-Hartmann wavefront sensor and the fluidic auto-phoropter allows for the identification and control of spherical refractive error, as well as cylinder error and axis, thus creating a truly automated refractometer and corrective system. The fluidic auto-phoropter is capable of correcting defocus error ranging from -20 D to 20 D and astigmatism from -10 D to 10 D. The transmissive see-through design allows for the observation of natural scenes through the system at varying object planes with no additional imaging optics in the patient's line of sight. In this research, two generations of the fluidic auto-phoropter are designed and tested; the first generation uses traditional glass optics for the measurement channel. The second generation takes advantage of progress in the development of holographic optical elements (HOEs) to replace all the traditional glass optics. The addition of the HOEs has enabled a more compact, inexpensive and easily reproducible system without compromising performance. Additionally, the fluidic lenses were tested during a National Aeronautics and Space Administration (NASA) parabolic flight campaign, to
1. Adapting an optical nanoantenna for high E-field probing applications to a waveguided optical waveguide (WOW)
DEFF Research Database (Denmark)
2013-01-01
In the current work we intend to use the optical nano-antenna to include various functionalities for the recently demonstrated waveguided optical waveguide (WOW) by Palima et al. (Optics Express 2012). Specifically, we intend to study a WOW with an optical nano-antenna which can block the guiding light wavelength while admitting other wavelengths of light which address certain functionalities, e.g. drug release, in the WOW. In particular, we study a bow-tie optical nano-antenna to circular dielectric waveguides in aqueous environments. It is shown with finite element computer simulations ... -stop characteristic. We give geometrical parameters necessary for realizing functioning nanoantennas. © (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
2. The main postulates of adaptive correction of distortions of the wave front in large-size optical systems
Directory of Open Access Journals (Sweden)
V. V. Sychev
2014-01-01
In the development of optical telescopes, the main trend has always been to increase the penetrating power of the telescope. A practical way to achieve this is to raise the quality of the image (reducing its angular size under the real conditions of distorting factors) and to increase the diameter of the main mirror. This is counteracted by the various distorting factors, or interference, occurring in real-time use of telescopes, as well as by the complicated manufacturing processes of large mirrors. It is shown that the most effective approach to dealing with the influence of distorting factors on image quality in a telescope is first to minimize them (through selecting the site of the telescope, choosing a rational optical scheme, creating materials and new technologies, improving the design, unloading the mirrors, choice of mounting, etc.) and then to adaptively compensate the remaining distortions. It should be noted that the domestic concept for designing large-sized telescopes allows, in our opinion, the most efficient way to do this: abandoning the creation of an "absolutely rigid and well-ordered" design that provides a passively aligned state of the telescope optics under operating conditions. The design must merely keep residual deformations at a level whose effect can be efficiently compensated by the adaptive system, using the segmented elements of the primary mirror and the secondary mirror as correctors. It has been found that in transmission optical systems delivering laser power to a remote object, it is necessary not only to overcome the distorting factors inherent in optical information systems but also to overcome a number of new difficulties. The main ones have been identified to be as follows: the influence of laser radiation on the structure components and the propagation medium and, as a consequence, the opposite effect of the structure components and the propagation
3. The Laser Guide Star System for Adaptive Optics at Subaru Telescope
Science.gov (United States)
Hayano, Y.; Saito, Y.; Ito, M.; Saito, N.; Akagawa, K.; Takazawa, A.; Ito, M.; Wada, S.; Takami, H.; Iye, M.
We report on the current status of development of the new laser guide star (LGS) system for the Subaru adaptive optics (AO) system. There are three major subsystems: the laser unit, the relay optical fiber and the laser launching telescope. A 4-W-class all-solid-state 589-nm laser has been developed as the light source for the sodium laser guide star. We use two mode-locked Nd:YAG lasers operating at wavelengths of 1064 nm and 1319 nm to generate 589 nm by sum-frequency conversion. A side-LD-pumped configuration is used for the mode-locked Nd:YAG lasers. We have carefully considered the thermal lens effect in the cavity to achieve a high beam quality with TEM00 (M² = 1.06). The mode-locked frequency is 143 MHz. We obtained output powers of 16.5 W and 5.0 W at 1064 nm and 1319 nm. The sum frequency generated by mixing the two synchronized mode-locked pulsed Nd:YAG beams is precisely tuned to the sodium D2 line by thermal control of the etalon in the 1064-nm Nd:YAG laser, observing the maximum fluorescence intensity of a heated sodium vapor cell. The maximum output power at 589.159 nm reaches 4.6 W, using a PPMgOSLT crystal as the nonlinear optical crystal, and the output power can be maintained within a stability of +/- 1.2% for more than 3 days without optical damage. We developed a single-mode photonic crystal fiber (PCF) to relay the laser beam from the laser clean room, in which the laser unit is located on the Nasmyth platform, to the laser launching telescope mounted behind the secondary mirror of the Subaru Telescope. The photonic crystal fiber has a solid pure-silica core with a mode-field diameter of 14 microns, relatively larger than that of a conventional step-index single-mode fiber. The length of the PCF is 35 m, and the transmission loss due to the pure silica is 10 dB/km at 589 nm, which means the PCF transmits 92% of the laser beam. We have preliminarily achieved 75% throughput in total. The small mode-locked pulse width in time allows us to transmit the high
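Two of the quoted numbers can be checked with one-line calculations: the sum-frequency wavelength of the 1064 nm and 1319 nm lasers, and the 92% fiber transmission implied by 10 dB/km over 35 m.

```python
# Sum-frequency generation: 1/lambda_SFG = 1/lambda_1 + 1/lambda_2
l1, l2 = 1064.0, 1319.0                   # pump wavelengths in nm
l_sfg = 1.0 / (1.0 / l1 + 1.0 / l2)
print(round(l_sfg, 1))                    # -> 588.9, near the sodium D2 line

# 35 m of fiber at 10 dB/km attenuation
loss_db = 10.0 * 35.0 / 1000.0            # 0.35 dB
print(round(10 ** (-loss_db / 10.0), 2))  # -> 0.92, i.e. "transmits 92%"
```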
4. GMTIFS: the adaptive optics beam steering mirror for the GMT integral-field spectrograph
Science.gov (United States)
Davies, J.; Bloxham, G.; Boz, R.; Bundy, D.; Espeland, B.; Fordham, B.; Hart, J.; Herrald, N.; Nielsen, J.; Sharp, R.; Vaccarella, A.; Vest, C.; Young, P. J.
2016-07-01
To achieve the high adaptive-optics sky coverage necessary to allow the GMT Integral-Field Spectrograph (GMTIFS) to access key scientific targets, the on-instrument adaptive-optics wavefront-sensing (OIWFS) system must patrol the full 180-arcsecond-diameter guide field passed to the instrument. The OIWFS uses a diffraction-limited guide star as the fundamental pointing reference for the instrument. During an observation the offset between the science target and the guide star will change due to sources such as flexure, differential refraction and non-sidereal tracking rates. GMTIFS uses a beam steering mirror to set the initial offset between science target and guide star and also to correct for changes in offset. To reduce image motion from beam-steering errors to a level comparable to that of the AO system in the most stringent case, the beam steering mirror is given a requirement of less than 1 milliarcsecond RMS. This corresponds to a dynamic range for both actuators and sensors of better than 1/180,000. The GMTIFS beam steering mirror uses piezo-walk actuators and a combination of eddy-current sensors and interferometric sensors to achieve this dynamic range and control. While the sensors are rated for cryogenic operation, the actuators are not. We report on the results of prototype testing of single actuators, with the sensors, on the bench and in a cryogenic environment. Specific failures of the system are explained, along with the suspected reasons for them. A modified test jig is used to investigate the option of heating the actuator, and we report the improved results. In addition to individual component testing, we built and tested a complete beam steering mirror assembly. Testing was conducted with a point-source microscope; however, controlling environmental conditions to less than 1 micron was challenging. The assembly testing investigated acquisition accuracy and whether there was any un-sensed hysteresis in the system. Finally we present the revised beam steering mirror
5. The Last Gasps of VY Canis Majoris: Aperture Synthesis and Adaptive Optics Imagery
Science.gov (United States)
Monnier, J. D.; Tuthill, P. G.; Lopez, B.; Cruzalebes, P.; Danchi, W. C.; Haniff, C. A.
1999-02-01
We present new observations of the red supergiant VY CMa at 1.25, 1.65, 2.26, 3.08, and 4.8 μm. Two complementary observational techniques were utilized: nonredundant aperture masking on the 10 m Keck I telescope, yielding images of the innermost regions at unprecedented resolution, and adaptive optics imaging on the ESO 3.6 m telescope at La Silla, attaining an extremely high (~10^5) peak-to-noise dynamic range over a wide field. For the first time the inner dust shell has been resolved in the near-infrared to reveal a one-sided extension of circumstellar emission within 0.1" (~15 R*) of the star. The line-of-sight optical depths of the circumstellar dust shell at 1.65, 2.26, and 3.08 μm have been estimated to be 1.86+/-0.42, 0.85+/-0.20, and 0.44+/-0.11, respectively. These new results allow the bolometric luminosity of VY CMa to be estimated independent of the dust shell geometry, yielding L* ~ 2×10^5 L_solar. A variety of dust condensations, including a large scattering plume and a bow-shaped dust feature, were observed in the faint, extended nebula up to 4" from the central source. While the origin of the nebulous plume remains uncertain, a geometrical model is developed assuming the plume is produced by radially driven dust grains forming at a rotating flow insertion point with a rotational period between 1200 and 4200 yr, which is perhaps the stellar rotational period or the orbital period of an unseen companion.
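The quoted line-of-sight optical depths can be read as direct-transmission factors through T = e^(-τ); a minimal sketch using the τ values from the abstract:

```python
import math

# Line-of-sight dust optical depths quoted in the abstract (wavelength [um] -> tau)
tau = {1.65: 1.86, 2.26: 0.85, 3.08: 0.44}

# Fraction of starlight transmitted directly through the shell: T = exp(-tau)
transmission = {wl: math.exp(-t) for wl, t in tau.items()}
for wl, T in transmission.items():
    print(f"{wl} um: T = {T:.3f}")
```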
6. Dual-conjugate adaptive optics for wide-field high-resolution retinal imaging.
Science.gov (United States)
Thaung, Jörgen; Knutsson, Per; Popovic, Zoran; Owner-Petersen, Mette
2009-03-16
We present analysis and preliminary laboratory testing of a real-time dual-conjugate adaptive optics (DCAO) instrument for ophthalmology that will enable wide-field high resolution imaging of the retina in vivo. The setup comprises five retinal guide stars (GS) and two deformable mirrors (DM), one conjugate to the pupil and one conjugate to a plane close to the retina. The DCAO instrument has a closed-loop wavefront sensing wavelength of 834 nm and an imaging wavelength of 575 nm. It incorporates an array of collimator lenses to spatially filter the light from all guide stars using one adjustable iris, and images the Hartmann patterns of multiple reference sources on a single detector. Zemax simulations were performed at 834 nm and 575 nm with the Navarro 99 and the Liou-Brennan eye models. Two correction alternatives were evaluated: conventional single conjugate AO (SCAO, using one GS and a pupil DM) and DCAO (using multiple GS and two DM). Zemax simulations at 575 nm based on the Navarro 99 eye model show that the diameter of the corrected field of view for diffraction-limited imaging (Strehl ≥ 0.8) increases from 1.5 deg with SCAO to 6.5 deg using DCAO. The increase for the less stringent condition of a wavefront error of 1 rad or less (Strehl ≥ 0.37) is from 3 deg with SCAO to approximately 7.4 deg using DCAO. Corresponding results for the Liou-Brennan eye model are 3.1 deg (SCAO) and 8.2 deg (DCAO) for Strehl ≥ 0.8, and 4.8 deg (SCAO) and 9.6 deg (DCAO) for Strehl ≥ 0.37. Potential gain in corrected field of view with DCAO is confirmed both by laboratory experiments on a model eye and by preliminary in vivo imaging of a human eye. (c) 2009 Optical Society of America
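The two image-quality criteria quoted above are linked by the extended Maréchal approximation, S ≈ exp(-σ²) for an RMS wavefront error σ in radians: σ = 1 rad gives S ≈ 0.37, exactly the relaxed criterion the abstract uses. A minimal sketch of that relation (the approximation only, not the authors' Zemax model):

```python
import math

def strehl(sigma_rad):
    """Extended Marechal approximation: Strehl ratio for RMS wavefront error sigma [rad]."""
    return math.exp(-sigma_rad ** 2)

print(strehl(1.0))                    # ~0.37: the relaxed criterion
sigma_08 = math.sqrt(-math.log(0.8))  # RMS error corresponding to Strehl = 0.8
print(sigma_08)                       # ~0.47 rad, i.e. roughly lambda/13
```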
7. HoYbBIG epitaxial thick films used for Faraday rotator in the 1.55μm band
International Nuclear Information System (INIS)
Zhong, Z.W.; Xu, X.W.; Chong, T.C.; Yuan, S.N.; Li, M.H.; Zhang, G.Y.; Freeman, B.
2005-01-01
Ho_{3-x-y}Yb_{y}Bi_{x}Fe_{5}O_{12} (HoYbBIG) garnet thick films with Bi content of x = 0.9-1.5 were prepared by the liquid phase epitaxy (LPE) method. Optical properties and magneto-optical properties were characterized. The LPE-grown HoYbBIG thick films exhibited large Faraday rotation coefficients up to 1540°/cm at 1.55 μm, and good wavelength and temperature stability.
8. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.
Science.gov (United States)
Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo
2013-06-01
In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for the breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at 5 kilohertz repetition rate show a more transverse direction in the corrected case compared to the more spherical distribution without correction. Our results demonstrate how both applied and transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.
9. Adapting an optical nanoantenna for high E-field probing applications to a waveguided optical waveguide (WOW)
Science.gov (United States)
2013-03-01
In the current work we intend to use the optical nano-antenna to add various functionalities to the recently demonstrated waveguided optical waveguide (WOW) by Palima et al. (Optics Express 2012). Specifically, we intend to study a WOW with an optical nano-antenna which can block the guiding light wavelength while admitting other wavelengths of light that address certain functionalities, e.g. drug release, in the WOW. In particular, we study a bow-tie optical nano-antenna coupled to circular dielectric waveguides in aqueous environments. It is shown with finite element computer simulations that the nano-antenna can be made to operate in a band-stop mode around its resonant wavelength, where there is a very strong evanescent electric probing field close to the antennas; additionally, the fluorescence or Raman excitations will be unpolluted by stray light from the WOW due to the band-stop characteristic. We give the geometrical parameters necessary for realizing functioning nano-antennas.
11. eXtragalactic astronomy: the X-games of adaptive optics
Science.gov (United States)
Lai, Olivier
2000-07-01
Observing active nuclei, Ultra-Luminous Infrared Galaxies, starburst and merging galaxies is both a challenge and a requirement for adaptive optics. It is a requirement because models needed to explain the high infrared flux and the physics of these monsters need constraints that come, in part, from the fine details gleaned from high angular resolution images, and it is a challenge because, being distant, these objects are usually faint in apparent visual magnitude, meaning that the wavefront sensors have to operate in a photon-starved regime. Many observations have been controversial in the past, and it is always difficult to tell an artifact such as astigmatism from an inner bar. The importance of observing the point spread function is therefore even more crucial than on bright objects, as PSF reconstruction methods 'à la Véran' break down when the photon noise dominates the statistics of the wavefront, or when locking the loop on extended objects. Yet, while some cases have been controversial, some very clear and profound results have been obtained in the extragalactic domain, such as the detection of the host galaxies of quasars and studies of star formation. It turns out that the fundamental prerequisite to such success stories is a stable, well understood and well calibrated PSF.
12. REFERENCE-LESS DETECTION, ASTROMETRY, AND PHOTOMETRY OF FAINT COMPANIONS WITH ADAPTIVE OPTICS
International Nuclear Information System (INIS)
2009-01-01
We propose a complete framework for the detection, astrometry, and photometry of faint companions from a sequence of adaptive optics (AO) corrected short exposures. The algorithms exploit the difference in statistics between the on-axis and off-axis intensity of the AO point-spread function (PSF) to differentiate real sources from speckles. We validate the new approach and illustrate its performance using moderate Strehl ratio data obtained with the natural guide star AO system on the Lick Observatory's 3 m Shane Telescope. We obtain almost a 2 mag gain in achievable contrast by using our detection method compared to 5σ detectability in long exposures. We also present a first guide to expected accuracy of differential photometry and astrometry with the new techniques. Our approach performs better than PSF-fitting in general and especially so for close companions, which are located within the uncompensated seeing (speckle) halo. All three proposed algorithms are self-calibrating, i.e., they do not require observation of a calibration star. One of the advantages of this approach is improved observing efficiency.
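The statistical distinction the method exploits can be illustrated with a toy model: at the same mean intensity, a boiling speckle (fully developed, exponential intensity statistics) fluctuates with a standard deviation comparable to its mean, while a real companion delivers a steady flux. This is a purely illustrative simulation, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
n_frames = 20000  # short exposures (illustrative count)

# Speckle: complex field with random phase -> exponentially distributed intensity
field = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
speckle_I = np.abs(field) ** 2 / 2.0              # normalized to mean ~1

# Companion: steady source at the same mean intensity, small measurement noise (assumed 5%)
companion_I = 1.0 + 0.05 * rng.standard_normal(n_frames)

# Exponential statistics give std/mean ~ 1; a real source is far more stable
speckle_ratio = speckle_I.std() / speckle_I.mean()
companion_ratio = companion_I.std() / companion_I.mean()
print(speckle_ratio, companion_ratio)
```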
13. Stroke saturation on a MEMS deformable mirror for woofer-tweeter adaptive optics.
Science.gov (United States)
Morzinski, Katie; Macintosh, Bruce; Gavel, Donald; Dillon, Daren
2009-03-30
High-contrast imaging of extrasolar planet candidates around a main-sequence star has recently been realized from the ground using current adaptive optics (AO) systems. Advancing such observations will be a task for the Gemini Planet Imager, an upcoming "extreme" AO instrument. High-order "tweeter" and low-order "woofer" deformable mirrors (DMs) will supply a >90%-Strehl correction, a specialized coronagraph will suppress the stellar flux, and any planets can then be imaged in the "dark hole" region. Residual wavefront error scatters light into the DM-controlled dark hole, making planets difficult to image above the noise. It is crucial in this regard that the high-density tweeter, a micro-electrical mechanical systems (MEMS) DM, have sufficient stroke to deform to the shapes required by atmospheric turbulence. Laboratory experiments were conducted to determine the rate and circumstance of saturation, i.e. stroke insufficiency. A 1024-actuator 1.5-μm-stroke MEMS device was empirically tested with software Kolmogorov-turbulence screens of r0 = 10-15 cm. The MEMS when solitary suffered saturation approximately 4% of the time. Simulating a woofer DM with approximately 5-10 actuators across a 5-m primary mitigated MEMS saturation occurrence to a fraction of a percent. While no adjacent actuators were saturated at opposing positions, mid-to-high-spatial-frequency stroke did saturate more frequently than expected, implying that correlations through the influence functions are important. Analytical models underpredict the stroke requirements, so empirical studies are important.
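The woofer's role can be sketched numerically: generate a Kolmogorov-like phase screen and remove its lowest spatial frequencies, which carry most of the turbulent power, leaving far less stroke for the tweeter to supply. Grid size, cutoff, and normalization below are arbitrary assumptions, not the paper's test parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
fx = np.fft.fftfreq(n)
f = np.hypot(fx[None, :], fx[:, None])   # radial spatial frequency [cycles/sample]
f[0, 0] = 1.0                            # avoid division by zero at the piston mode

# Kolmogorov phase: power spectrum ~ f^(-11/3), so Fourier amplitude ~ f^(-11/6)
spec = f ** (-11.0 / 6.0) * (rng.standard_normal((n, n))
                             + 1j * rng.standard_normal((n, n)))
spec[0, 0] = 0.0                         # no piston
screen = np.fft.ifft2(spec).real

# "Woofer" correction: zero out all modes below an assumed cutoff frequency
ft = np.fft.fft2(screen)
ft[f < 5.0 / n] = 0.0                    # woofer controls ~5 cycles across the aperture
residual = np.fft.ifft2(ft).real         # what the tweeter must still correct

print(screen.std(), residual.std())      # residual stroke demand is much smaller
```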
14. Can we use adaptive optics for UHR spectroscopy with PEPSI at the LBT?
Science.gov (United States)
Sacco, Germano G.; Pallavicini, Roberto; Spano, Paolo; Andersen, Michael; Woche, Manfred F.; Strassmeier, Klaus G.
2004-10-01
We investigate the potential of using adaptive optics (AO) in the V, R, and I bands to reach ultra-high resolution (UHR, R >= 200,000) in echelle spectrographs at 8-10m telescopes. In particular, we investigate the possibility of implementing an UHR mode for the fiber-fed spectrograph PEPSI (Potsdam Echelle Polarimetric and Spectrographic Instrument) being developed for the Large Binocular Telescope (LBT). By simulating the performances of the advanced AO system that will be available at first light at the LBT, and by using first-order estimates of the spectrograph performances, we calculate the total efficiency and signal to noise ratio (SNR) of PEPSI in the AO mode for stars of different magnitudes, different fiber core sizes, and different fractions of incident light diverted to the wavefront sensor. We conclude that AO can provide a significant advantage, of up to a factor ~2 in the V, R and I bands, for stars brighter than mR ~ 12 - 13. However, if these stars are observed at UHR in non-AO mode, slit losses caused by the need to use a very narrow slit can be compensated more effectively by the use of image slicers.
15. Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope.
Science.gov (United States)
He, Yi; Deng, Guohua; Wei, Ling; Li, Xiqi; Yang, Jinsheng; Shi, Guohua; Zhang, Yudong
2016-01-01
We have designed, constructed and tested an adaptive optics scanning laser ophthalmoscope (AOSLO) using a bimorph mirror. The simulated AOSLO system achieves the diffraction-limited criterion through all the raster scanning fields (6.4 mm pupil, 3° × 3° on the pupil). The bimorph mirror-based AOSLO corrected ocular aberrations in model eyes to less than 0.1 μm RMS wavefront error with a closed-loop bandwidth of a few Hz. Equipped with a 35-element bimorph mirror with a stroke of ±15 μm and an aperture of 20 mm, the new AOSLO system has a size only half that of the first-generation AOSLO system. The significant increase in stroke allows for large ocular aberrations such as defocus in the range of ±600° and astigmatism in the range of ±200°, thereby fully exploiting the AO correcting capabilities for diseased human eyes in the future.
16. Use of focus measure operators for characterization of flood illumination adaptive optics ophthalmoscopy image quality.
Science.gov (United States)
Alonso-Caneiro, David; Sampson, Danuta M; Chew, Avenell L; Collins, Michael J; Chen, Fred K
2018-02-01
Adaptive optics flood illumination ophthalmoscopy (AO-FIO) allows imaging of the cone photoreceptor in the living human retina. However, clinical interpretation of the AO-FIO image remains challenging due to suboptimal quality arising from residual uncorrected wavefront aberrations and rapid eye motion. An objective method of assessing image quality is necessary to determine whether an AO-FIO image is suitable for grading and diagnostic purpose. In this work, we explore the use of focus measure operators as a surrogate measure of AO-FIO image quality. A set of operators are tested on data sets acquired at different focal depths and different retinal locations from healthy volunteers. Our results demonstrate differences in focus measure operator performance in quantifying AO-FIO image quality. Further, we discuss the potential application of the selected focus operators in (i) selection of the best quality AO-FIO image from a series of images collected at the same retinal location and (ii) assessment of longitudinal changes in the diseased retina. Focus function could be incorporated into real-time AO-FIO image processing and provide an initial automated quality assessment during image acquisition or reading center grading.
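Focus measure operators of the kind surveyed here are simple image statistics; for instance, the variance of the image Laplacian drops as defocus suppresses high spatial frequencies. A minimal numpy sketch on synthetic data (a generic operator, not the authors' specific operator set):

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))         # stand-in for an in-focus frame

# Crude stand-in for defocus: a 5x5 box blur built from shifted copies
blurred = sum(
    np.roll(np.roll(sharp, i, axis=0), j, axis=1)
    for i in range(-2, 3) for j in range(-2, 3)
) / 25.0

def laplacian_variance(img):
    """Variance-of-Laplacian focus measure (higher = sharper)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return lap.var()

print(laplacian_variance(sharp), laplacian_variance(blurred))
```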
17. Reducing adaptive optics latency using Xeon Phi many-core processors
Science.gov (United States)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah
2015-11-01
The next generation of Extremely Large Telescopes (ELTs) for astronomy will rely heavily on the performance of their adaptive optics (AO) systems. Real-time control is at the heart of the critical technologies that will enable telescopes to deliver the best possible science and will require a very significant extrapolation from current AO hardware existing for 4-10 m telescopes. Investigating novel real-time computing architectures and testing their eligibility against anticipated challenges is one of the main priorities of technology development for the ELTs. This paper investigates the suitability of the Intel Xeon Phi, which is a commercial off-the-shelf hardware accelerator. We focus on wavefront reconstruction performance, implementing a straightforward matrix-vector multiplication (MVM) algorithm. We present benchmarking results of the Xeon Phi on a real-time Linux platform, both as a standalone processor and integrated into an existing real-time controller (RTC). Performance of single and multiple Xeon Phis are investigated. We show that this technology has the potential of greatly reducing the mean latency and variations in execution time (jitter) of large AO systems. We present both a detailed performance analysis of the Xeon Phi for a typical E-ELT first-light instrument along with a more general approach that enables us to extend to any AO system size. We show that systematic and detailed performance analysis is an essential part of testing novel real-time control hardware to guarantee optimal science results.
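At its core, the benchmarked reconstruction step is a single dense matrix-vector product mapping measured wavefront-sensor slopes to actuator commands. A minimal timing sketch (the problem size below is an arbitrary illustration, not the E-ELT figures):

```python
import time
import numpy as np

n_slopes, n_act = 4000, 2000    # illustrative problem size (assumed)
rng = np.random.default_rng(0)
R = rng.standard_normal((n_act, n_slopes)).astype(np.float32)  # control matrix
s = rng.standard_normal(n_slopes).astype(np.float32)           # WFS slope vector

t0 = time.perf_counter()
commands = R @ s                # the MVM at the heart of the RTC loop
dt = time.perf_counter() - t0
print(f"{n_act} actuator commands in {dt * 1e3:.2f} ms")
```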
18. Flexible Riser Monitoring Using Hybrid Magnetic/Optical Strain Gage Techniques through RLS Adaptive Filtering
Directory of Open Access Journals (Sweden)
Daniel Pipa
2010-01-01
Flexible risers are a class of flexible pipes used to connect subsea pipelines to floating offshore installations, such as FPSOs (floating production/storage/off-loading units) and SS (semisubmersible) platforms, in oil and gas production. Flexible risers are multilayered pipes typically comprising an inner flexible metal carcass surrounded by polymer layers and spiral-wound steel ligaments, also referred to as armor wires. Since these armor wires are made of steel, their magnetic properties are sensitive to the stress they are subjected to. By measuring their magnetic properties in a nonintrusive manner, it is possible to compare the stress in the armor wires, thus allowing the identification of damaged ones. However, one encounters several sources of noise when measuring electromagnetic properties contactlessly, such as movement between specimen and probe, and magnetic noise. This paper describes the development of a new technique for automatic monitoring of the armor layers of flexible risers. The proposed approach aims to minimize these uncertainties by combining electromagnetic measurements with optical strain gage data through a recursive least squares (RLS) adaptive filter.
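The RLS update at the heart of such a fusion scheme fits in a few lines; here it identifies a known 4-tap FIR relationship between two synthetic signals, a toy stand-in for the magnetic/strain-gage channels. The forgetting factor and initialization are conventional defaults, not the paper's values:

```python
import numpy as np

def rls(x, d, order=4, lam=0.99, delta=100.0):
    """Recursive least squares: adapt weights w so that w . [x[n]..x[n-order+1]] tracks d[n]."""
    w = np.zeros(order)
    P = np.eye(order) * delta                 # inverse-correlation-matrix estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)         # gain vector
        w = w + k * (d[n] - w @ u)            # update on the a priori error
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h_true = np.array([0.5, -0.3, 0.2, 0.1])      # hypothetical coupling to identify
d = np.convolve(x, h_true)[:len(x)]           # noiseless target signal
w = rls(x, d)
print(w)                                      # converges toward h_true
```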
19. The Chandra Deep Field South as a test case for Global Multi Conjugate Adaptive Optics
Science.gov (United States)
Portaluri, E.; Viotto, V.; Ragazzoni, R.; Gullieuszik, M.; Bergomi, M.; Greggio, D.; Biondi, F.; Dima, M.; Magrin, D.; Farinato, J.
2017-04-01
The era of the next generation of giant telescopes requires not only the advent of new technologies but also the development of novel methods, in order to fully exploit the extraordinary potential they are built for. Global Multi Conjugate Adaptive Optics (GMCAO) pursues this approach, with the goal of achieving good performance over a field of view of a few arcmin and an increase in sky coverage. In this article, we show the gain offered by this technique for an astrophysical application, taking the photometric survey strategy applied to the Chandra Deep Field South as a case study. We simulated a close-to-real observation of a 500 × 500 arcsec2 extragalactic deep field with a 40-m class telescope that implements GMCAO. We analysed mock K-band images of 6000 high-redshift (up to z = 2.75) galaxies therein as if they were real to recover the initial input parameters. We attained 94.5 per cent completeness for source detection with SExtractor. We also measured the morphological parameters of all the sources with the two-dimensional fitting tool GALFIT. The agreement we found between recovered and intrinsic parameters demonstrates that GMCAO is a reliable approach to assist extremely large telescope (ELT) observations of extragalactic interest.
20. PALM-3000: EXOPLANET ADAPTIVE OPTICS FOR THE 5 m HALE TELESCOPE
Energy Technology Data Exchange (ETDEWEB)
Dekany, Richard; Bouchez, Antonin; Baranec, Christoph; Hale, David; Zolkower, Jeffry; Henning, John; Croner, Ernest; McKenna, Dan; Hildebrandt, Sergi; Milburn, Jennifer [Caltech Optical Observatories, California Institute of Technology, 1200 East California Boulevard, MC 11-17, Pasadena, CA 91125 (United States); Roberts, Jennifer; Burruss, Rick; Truong, Tuan; Guiwits, Stephen; Angione, John; Trinh, Thang; Shelton, J. Christopher; Palmer, Dean; Troy, Mitchell; Tesch, Jonathan, E-mail: rgd@astro.caltech.edu [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Boulevard, Pasadena, CA 91109 (United States)
2013-10-20
We describe and report first results from PALM-3000, the second-generation astronomical adaptive optics (AO) facility for the 5.1 m Hale telescope at Palomar Observatory. PALM-3000 has been engineered for high-contrast imaging and emission spectroscopy of brown dwarfs and large planetary mass bodies at near-infrared wavelengths around bright stars, but also supports general natural guide star use to V ≈ 17. Using its unique 66 × 66 actuator deformable mirror, PALM-3000 has thus far demonstrated residual wavefront errors of 141 nm rms under ~1'' seeing conditions. PALM-3000 can provide phase conjugation correction over a 6.4'' × 6.4'' working region at λ = 2.2 μm, or full electric field (amplitude and phase) correction over approximately one-half of this field. With optimized back-end instrumentation, PALM-3000 is designed to enable 10^-7 contrast at 1'' angular separation, including post-observation speckle suppression processing. While continued optimization of the AO system is ongoing, we have already successfully commissioned five back-end instruments and begun a major exoplanet characterization survey, Project 1640.
1. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data
Science.gov (United States)
Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.
2017-10-01
We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1-5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.
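The PSD bookkeeping the method relies on can be sketched for one term of such a budget: a temporal phase PSD is multiplied by a controller rejection transfer function and integrated to a residual variance. The power-law exponent, bandwidth, and normalization below are generic assumptions, not the paper's system parameters:

```python
import numpy as np

nu = np.linspace(0.1, 500.0, 50000)       # temporal frequency grid [Hz]
dnu = nu[1] - nu[0]
psd = nu ** (-8.0 / 3.0)                  # generic turbulence phase PSD tail (arbitrary units)

f3db = 30.0                               # assumed closed-loop bandwidth [Hz]
# Simple high-pass rejection: the loop removes power well below its bandwidth
rejection = (nu / f3db) ** 2 / (1.0 + (nu / f3db) ** 2)

var_open = (psd * dnu).sum()              # uncorrected (open-loop) variance
var_res = (psd * rejection * dnu).sum()   # residual servo-lag-like variance
print(var_open, var_res)
```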
2. Axial length and cone density as assessed with adaptive optics in myopia
Directory of Open Access Journals (Sweden)
Supriya Dabir
2015-01-01
Aim: To assess the variations in the cone mosaic in myopia and their correlation with axial length (AL). Subjects and Methods: Twenty-five healthy myopic volunteers underwent assessment of photoreceptors using an adaptive optics retinal camera at 2° and 3° from the foveal center in four quadrants: superior, inferior, temporal and nasal. Data were analyzed using SPSS version 17 (IBM). Multivariable regression analysis was conducted to study the relation between cone density and AL, quadrant around the fovea, and eccentricity from the fovea. Results: The mean cone density was significantly lower as the eccentricity increased from 2° to 3° from the fovea (18,560 ± 5455/mm² versus 16,404 ± 4494/mm², respectively). There was also a statistically significant difference between the four quadrants around the fovea. The correlation of cone density and spacing with AL showed a significant inverse relation of AL with cone density. Conclusion: In myopic patients with good visual acuity, cone density around the fovea depends on the quadrant, the distance from the fovea, and the AL. The strength of the relation of AL with cone density depends on the quadrant and distance.
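The reported inverse relation is the kind of fit a regression analysis returns; a toy version with entirely synthetic data (the slope, noise level, and AL range are fabricated for illustration and are not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
axial_length = rng.uniform(22.0, 28.0, 25)   # synthetic AL values [mm], 25 subjects

# Synthetic cone densities with an assumed inverse dependence on AL plus noise
cone_density = 40000.0 - 900.0 * axial_length + rng.normal(0.0, 500.0, 25)

slope, intercept = np.polyfit(axial_length, cone_density, 1)
print(slope)    # negative: density falls as axial length grows
```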
3. Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.
Science.gov (United States)
Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing
2018-05-01
It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to maintain the stability of an AO system, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control multiple DMs. To examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated that the presented algorithm can selectively make the woofer and tweeter correct aberrations of different spatial frequencies while suppressing the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the simulation, demonstrating in practice that this algorithm is practical for an AO system with dual DMs.
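One way to picture the decoupling problem: fit the woofer's few low-order modes to the measured slopes first, then hand only the residual to the tweeter, so neither DM fights the other. A toy least-squares sketch (random matrices stand in for the real slope-response matrices; this is not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes = 200
W = rng.standard_normal((n_slopes, 5))    # woofer slope-response (5 low-order modes, assumed)
T = rng.standard_normal((n_slopes, 60))   # tweeter slope-response (60 high-order modes, assumed)
s = rng.standard_normal(n_slopes)         # measured slope vector

c_w, *_ = np.linalg.lstsq(W, s, rcond=None)             # woofer takes the low-order part
c_t, *_ = np.linalg.lstsq(T, s - W @ c_w, rcond=None)   # tweeter corrects only the remainder

residual = s - W @ c_w - T @ c_t
print(np.linalg.norm(residual) / np.linalg.norm(s))     # fraction of slopes left uncorrected
```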
4. The PALM-3000 high-order adaptive optics system for Palomar Observatory
Science.gov (United States)
Bouchez, Antonin H.; Dekany, Richard G.; Angione, John R.; Baranec, Christoph; Britton, Matthew C.; Bui, Khanh; Burruss, Rick S.; Cromer, John L.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; McKenna, Daniel L.; Moore, Anna M.; Roberts, Jennifer E.; Trinh, Thang Q.; Troy, Mitchell; Truong, Tuan N.; Velur, Viswa
2008-07-01
Deployed as a multi-user shared facility on the 5.1 meter Hale Telescope at Palomar Observatory, the PALM-3000 high-order upgrade to the successful Palomar Adaptive Optics System will deliver extreme AO correction in the near-infrared, and diffraction-limited images down to visible wavelengths, using both natural and sodium laser guide stars. Wavefront control will be provided by two deformable mirrors, a 3368 active actuator woofer and 349 active actuator tweeter, controlled at up to 3 kHz using an innovative wavefront processor based on a cluster of 17 graphics processing units. A Shack-Hartmann wavefront sensor with selectable pupil sampling will provide high-order wavefront sensing, while an infrared tip/tilt sensor and visible truth wavefront sensor will provide low-order LGS control. Four back-end instruments are planned at first light: the PHARO near-infrared camera/spectrograph, the SWIFT visible light integral field spectrograph, Project 1640, a near-infrared coronagraphic integral field spectrograph, and 888Cam, a high-resolution visible light imager.
5. PENETRATING THE HOMUNCULUS-NEAR-INFRARED ADAPTIVE OPTICS IMAGES OF ETA CARINAE
International Nuclear Information System (INIS)
Artigau, Etienne; Martin, John C.; Humphreys, Roberta M.; Davidson, Kris; Chesneau, Olivier; Smith, Nathan
2011-01-01
Near-infrared adaptive optics imaging with the Near-Infrared Coronagraphic Imager (NICI) and NACO reveals what appears to be a three-winged or lobed pattern, the 'butterfly nebula', outlined by bright Brγ and H2 emission and light scattered by dust. In contrast, the [Fe II] emission does not follow the outline of the wings, but shows an extended bipolar distribution which is tracing the Little Homunculus ejected in η Car's second or lesser eruption in the 1890s. Proper motions measured from the combined NICI and NACO images together with radial velocities show that the knots and filaments that define the bright rims of the butterfly were ejected at two different epochs corresponding approximately to the great eruption and the second eruption. Most of the material is spatially distributed 10°-20° above and below the equatorial plane, apparently behind the Little Homunculus and the larger SE lobe. The equatorial debris either has a wide opening angle or the clumps were ejected at different latitudes relative to the plane. The butterfly is not a coherent physical structure or equatorial torus but spatially separate clumps and filaments ejected at different times, and now 2000-4000 AU from the star.
6. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
Directory of Open Access Journals (Sweden)
Kevin B. Wynne
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
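The inference/inquiry loop can be illustrated in one dimension: keep a grid posterior over an unknown beam-edge position and always measure where the outcome probability is nearest 1/2, which maximizes the expected information gain and collapses the posterior in logarithmically many queries. A toy, noiseless version (not the authors' DMD model; the grid and edge position are arbitrary):

```python
import numpy as np

grid = np.arange(100)                 # candidate beam-edge positions (toy model)
posterior = np.full(100, 0.01)        # uniform prior
true_edge = 63                        # hidden ground truth (illustrative)

def measure(x):
    """Noiseless detector: light is seen iff the mirror position x lies before the edge."""
    return x < true_edge

for _ in range(7):
    # P(light | measure at x) under the current posterior
    p_light = np.array([posterior[grid > x].sum() for x in grid])
    x = int(np.argmin(np.abs(p_light - 0.5)))   # most informative query: P closest to 1/2
    if measure(x):
        posterior[grid <= x] = 0.0              # edge must lie beyond x
    else:
        posterior[grid > x] = 0.0               # edge must lie at or before x
    posterior /= posterior.sum()

print(grid[posterior.argmax()])       # the posterior collapses onto the true edge
```

With a noiseless binary measurement this strategy reduces to a bisection search, finding one position out of 100 in seven queries; the Bayesian formulation generalizes it to noisy intensities and multi-parameter beam models.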
7. A High-resolution Multi-wavelength Simultaneous Imaging System with Solar Adaptive Optics
Energy Technology Data Exchange (ETDEWEB)
Rao, Changhui; Zhu, Lei; Gu, Naiting; Rao, Xuejun; Zhang, Lanqiang; Bao, Hua; Kong, Lin; Guo, Youming; Zhong, Libo; Ma, Xue’an; Li, Mei; Wang, Cheng; Zhang, Xiaojun; Fan, Xinlong; Chen, Donghong; Feng, Zhongyi; Wang, Xiaoyun; Wang, Zhiyong, E-mail: gunaiting@ioe.ac.cn [The Key Laboratory on Adaptive Optics, Chinese Academy of Sciences, P.O. Box 350, Shuangliu, Chengdu 610209, Sichuan (China)
2017-10-01
A high-resolution multi-wavelength simultaneous imaging system from visible to near-infrared bands with a solar adaptive optics system, in which seven imaging channels, including the G band (430.5 nm), the Na I line (589 nm), the Hα line (656.3 nm), the TiO band (705.7 nm), the Ca II IR line (854.2 nm), the He I line (1083 nm), and the Fe I line (1565.3 nm), are chosen, has been developed to image the solar atmosphere from the photosphere to the chromosphere. To our knowledge, this is the solar high-resolution imaging system with the widest spectral coverage. The system was demonstrated at the 1 m New Vacuum Solar Telescope and on-sky high-resolution observational results were acquired. In this paper, we illustrate the design and performance of the imaging system. The calibration and the data reduction of the system are also presented.
8. Adaptive Optics Simulation for the World's Largest Telescope on Multicore Architectures with Multiple GPUs
KAUST Repository
Ltaief, Hatem
2016-06-02
We present a high performance comprehensive implementation of a multi-object adaptive optics (MOAO) simulation on multicore architectures with hardware accelerators in the context of computational astronomy. This implementation will be used as an operational testbed for simulating the design of new instruments for the European Extremely Large Telescope project (E-ELT), the world's biggest eye and one of Europe's highest priorities in ground-based astronomy. The simulation corresponds to a multi-step, multi-stage procedure, which is fed, in near real-time, by system and turbulence data coming from the telescope environment. Based on the PLASMA library powered by the OmpSs dynamic runtime system, our implementation relies on a task-based programming model to permit asynchronous out-of-order execution. Using modern multicore architectures associated with the enormous computing power of GPUs, the resulting data-driven compute-intensive simulation of the entire MOAO application, composed of the tomographic reconstructor and the observing sequence, is capable of coping with the aforementioned real-time challenge and stands as a reference implementation for the computational astronomy community.
9. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
Science.gov (United States)
Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but accomplishes the task in far less time than a brute force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
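The inquiry-engine loop described above (probe a mirror, update the posterior, pick the next probe by expected information gain) can be sketched in miniature. The following is a toy one-dimensional analogue, not the authors' code: `locate_beam` is a hypothetical routine that takes binary bright/dark readings per mirror, maintains a gridded posterior over the beam centre, and greedily maximizes the expected entropy reduction.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def locate_beam(true_center, radius=3.0, eps=0.05, n=101, steps=14):
    """Toy 1-D Bayesian adaptive search for a beam centre on a mirror array."""
    grid = np.arange(n, dtype=float)        # candidate beam centres
    post = np.full(n, 1.0 / n)              # uniform prior
    # p_bright[i, j]: prob. mirror i reads "bright" if the centre is grid[j]
    inside = np.abs(grid[None, :] - grid[:, None]) < radius
    p_bright = np.where(inside, 1.0 - eps, eps)
    for _ in range(steps):
        p_y1 = p_bright @ post              # predictive prob. of "bright" per probe
        gains = np.empty(n)
        for i in range(n):
            h = 0.0
            for py, lik in ((p_y1[i], p_bright[i]),
                            (1.0 - p_y1[i], 1.0 - p_bright[i])):
                if py > 1e-12:              # expected posterior entropy for outcome y
                    h += py * entropy(lik * post / py)
            gains[i] = entropy(post) - h    # expected information gain
        probe = int(np.argmax(gains))       # engage the most informative mirror
        bright = abs(grid[probe] - true_center) < radius  # idealized measurement
        post = (p_bright[probe] if bright else 1.0 - p_bright[probe]) * post
        post /= post.sum()
    return float(grid[np.argmax(post)])
```

The greedy probe selection localizes the centre in far fewer measurements than scanning every mirror, which is the efficiency claim of the abstract in caricature.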
10. ABISM: an interactive image quality assessment tool for adaptive optics instruments
Science.gov (United States)
Girard, Julien H.; Tourneboeuf, Martin
2016-07-01
ABISM (Automatic Background Interactive Strehl Meter) is an interactive tool to evaluate the image quality of astronomical images. It works on seeing-limited point spread functions (PSF) but was developed in particular for the diffraction-limited PSF produced by adaptive optics (AO) systems. In the VLT service mode (SM) operations framework, ABISM is designed to help support astronomers or telescope and instrument operators (TIOs) quickly measure the Strehl ratio (SR) during or right after an observing block (OB), to evaluate whether it meets the requirements/predictions or whether it has to be repeated and will remain in the SM queue. It is a Python-based tool with a graphical user interface (GUI) that can be used with little AO knowledge. The night astronomer (NA) or Telescope and Instrument Operator (TIO) can launch ABISM in one click, and the program is able to read keywords from the FITS header to avoid mistakes. Significant effort was also put into making ABISM robust (and forgiving), with a high rate of repeatability. In fact, ABISM is able to automatically correct for bad pixels, eliminate stellar neighbours, properly estimate and fit the background, etc.
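A Strehl ratio of the kind ABISM reports is, at heart, the peak of the flux-normalised measured PSF divided by the peak of the ideal diffraction-limited PSF. This minimal NumPy sketch (illustrative only, not ABISM's code; function names are made up) builds the ideal PSF from a circular pupil and compares peaks:

```python
import numpy as np

def diffraction_psf(npix=256, pupil_frac=0.25):
    """PSF of an ideal circular aperture via FFT of the pupil function."""
    y, x = np.indices((npix, npix)) - npix / 2
    pupil = (np.hypot(x, y) <= pupil_frac * npix / 2).astype(float)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()                 # normalise to unit total flux

def strehl_ratio(measured_psf, perfect_psf):
    """Peak of the flux-normalised measured PSF over the diffraction-limited peak."""
    m = measured_psf / measured_psf.sum()
    return float(m.max() / perfect_psf.max())
```

In practice the hard part (which ABISM automates) is robust background subtraction and bad-pixel handling before the peak is measured.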
11. Extreme Computing for Extreme Adaptive Optics: the Key to Finding Life Outside our Solar System
KAUST Repository
Ltaief, Hatem; Sukkari, Dalal; Guyon, Olivier; Keyes, David E.
2018-01-01
The real-time correction of telescopic images in the search for exoplanets is highly sensitive to atmospheric aberrations. The pseudo-inverse algorithm is an efficient mathematical method to filter out this turbulence. We introduce a new partial singular value decomposition (SVD) algorithm based on the QR-based Diagonally Weighted Halley (QDWH) iteration for the pseudo-inverse method of adaptive optics. The QDWH partial SVD algorithm selectively calculates the most significant singular values and their corresponding singular vectors. We develop a high performance implementation and demonstrate the numerical robustness of the QDWH-based partial SVD method. We also perform a benchmarking campaign on various generations of GPU hardware accelerators and compare against the state-of-the-art SVD implementation SGESDD from the MAGMA library. Numerical accuracy and performance results are reported using synthetic and real observational datasets from the Subaru telescope. Our implementation outperforms SGESDD by up to fivefold and fourfold performance speedups on ill-conditioned synthetic matrices and real observational datasets, respectively. The pseudo-inverse simulation code will be deployed on-sky for the Subaru telescope during observation nights scheduled for early 2018.
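The core of the pseudo-inverse approach is keeping only the dominant singular triplets. The sketch below uses NumPy's dense SVD as a stand-in for the QDWH-based partial SVD (which computes only the leading triplets far more cheaply on GPUs); `partial_pinv` is an illustrative name, not the paper's API:

```python
import numpy as np

def partial_pinv(A, k):
    """Pseudo-inverse built from the k dominant singular triplets only.

    NumPy stand-in: a partial SVD method would compute just these k
    triplets instead of the full factorization used here."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s sorted descending
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
```

Truncating at k filters out the small singular values that would otherwise amplify noise, which is exactly why the method is attractive for ill-conditioned tomographic reconstructors.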
12. Simulated human eye retina adaptive optics imaging system based on a liquid crystal on silicon device
International Nuclear Information System (INIS)
Jiang Baoguang; Cao Zhaoliang; Mu Quanquan; Hu Lifa; Li Chao; Xuan Li
2008-01-01
In order to obtain a clear image of the retina of a model eye, an adaptive optics system used to correct the wave-front error is introduced in this paper. The spatial light modulator that we use here is a liquid-crystal-on-silicon device instead of a conventional deformable mirror. Paper with carbon granules is used to simulate the retina of the human eye. The pupil size of the model eye is adjustable (3-7 mm). A Shack–Hartmann wave-front sensor is used to detect the wave-front aberration. With this construction, a peak-to-valley value of 0.086 λ is achieved, where λ is the wavelength. The modulation transfer functions before and after correction are compared, and the resolution of this system after correction (691p/m) is very close to the diffraction-limited resolution. The carbon granules on the white paper, which have a size of 4.7 μm, are seen clearly. The size of a retinal cell is between 4 and 10 μm, so this system has the ability to image the human eye's retina. (classical areas of phenomenology)
13. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system
Science.gov (United States)
Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen
2018-02-01
In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), to perform real-time wavefront measurement. Further, a Camera Link frame grabber embedded with the FPGA is adopted to enhance the speed at which the sensor reacts to variation, given its high data transmission bandwidth. Instead of waiting for a whole frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented as parallel processing blocks, letting the image data transmission synchronize with the wavefront reconstruction. In addition, we design a mechanism to control the deformable mirror in the same FPGA and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design can achieve a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices free for additional flexible design.
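The per-subaperture computation a SHWS performs is essentially a centre-of-mass over each lenslet's spot image; the spot's displacement from the tile centre is proportional to the local wavefront slope. A minimal reference sketch (the FPGA pipeline streams this row by row rather than on whole frames; `subaperture_slopes` is an illustrative name):

```python
import numpy as np

def subaperture_slopes(frame, sub=16):
    """Centre-of-mass spot offsets for each sub x sub subaperture tile.

    The (dy, dx) offset from the tile centre is proportional to the
    local wavefront slope seen by that lenslet."""
    ny, nx = frame.shape
    slopes = []
    for y0 in range(0, ny, sub):
        for x0 in range(0, nx, sub):
            tile = frame[y0:y0 + sub, x0:x0 + sub]
            tot = tile.sum()
            if tot == 0:                    # dark tile: no measurable slope
                slopes.append((0.0, 0.0))
                continue
            yy, xx = np.indices(tile.shape)
            cy = (yy * tile).sum() / tot - (sub - 1) / 2
            cx = (xx * tile).sum() / tot - (sub - 1) / 2
            slopes.append((cy, cx))
    return np.array(slopes)
```

On the FPGA the same accumulations (Σ I, Σ x·I, Σ y·I per tile) are updated as pixels arrive, which is what lets reconstruction overlap with frame transfer.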
14. Simpler Adaptive Optics using a Single Device for Processing and Control
Science.gov (United States)
Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.
As satellite and debris densities climb, the management of low Earth orbit is becoming more urgent in order to avoid a Kessler syndrome. A key part of this management is to precisely measure the orbits of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University has been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature very complex and high-cost systems, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSP) and field programmable gate arrays (FPGA), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz whilst using a [...] 1 kHz) with low latency ([...] general AO applications, such as on 1-3 m telescopes for space surveillance, or even for amateur astronomy.
15. Implementation and on-sky results of an optimal wavefront controller for the MMT NGS adaptive optics system
Science.gov (United States)
Powell, Keith B.; Vaitheeswaran, Vidhya
2010-07-01
The MMT observatory has recently implemented and tested an optimal wavefront controller for the NGS adaptive optics system. Open-loop atmospheric data collected at the telescope are used as the input to a MATLAB-based analytical model. The model uses nonlinear constrained minimization to determine controller gains and optimize the system performance. The real-time controller performing the adaptive optics closed-loop operation is implemented on a dedicated high performance PC-based quad-core server. The controller algorithm is written in C and uses the GNU Scientific Library for linear algebra. Tests at the MMT confirmed that the optimal controller significantly reduced the residual RMS wavefront compared with the previous controller. Significant reductions in image FWHM and increased peak intensities were obtained in J, H and K bands. The optimal PID controller is now operating as the baseline wavefront controller for the MMT NGS-AO system.
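The gain-optimisation idea, feeding recorded open-loop disturbance data into a model and minimising the residual RMS over controller gains, can be illustrated with a one-parameter integrator and SciPy's bounded scalar minimiser. This is a toy stand-in for the paper's constrained MATLAB optimisation; the disturbance here is synthetic and the controller is deliberately minimal:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def residual_rms(gain, disturbance):
    """Closed-loop residual RMS for an integrator with one-frame delay."""
    c = 0.0                                 # current correction
    res = np.empty_like(disturbance)
    for t, d in enumerate(disturbance):
        res[t] = d - c                      # residual seen this frame
        c = c + gain * res[t]               # integrator update, applied next frame
    return float(np.sqrt(np.mean(res ** 2)))

# Stand-in for recorded open-loop "atmospheric" data: a slow drift plus ripple
t = np.arange(2000)
dist = np.sin(2 * np.pi * t / 400) + 0.1 * np.sin(2 * np.pi * t / 23)

# Pick the gain that minimises the modelled residual RMS
best = minimize_scalar(lambda g: residual_rms(g, dist),
                       bounds=(0.0, 1.0), method="bounded")
```

The real controller optimises several gains jointly under constraints, but the structure is the same: simulate the loop on telemetry, then minimise the residual metric.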
16. Fast optimal wavefront reconstruction for multi-conjugate adaptive optics using the Fourier domain preconditioned conjugate gradient algorithm.
Science.gov (United States)
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
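FD-PCG is ordinary preconditioned conjugate gradients with a preconditioner that is cheap to apply in the Fourier domain. The generic PCG iteration looks like this; a diagonal preconditioner stands in for the FFT-based one, and the routine is an illustrative sketch, not the authors' solver:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxiter=200):
    """Preconditioned conjugate gradients for a symmetric positive definite A.

    M_inv applies the preconditioner; in FD-PCG this application is an
    FFT, a diagonal scaling in frequency space, and an inverse FFT."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The payoff is that a good preconditioner makes the iteration count nearly independent of the problem size, which is what makes tomography for MCAO tractable in real time.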
17. Gender equity issues in astronomy: facts, fiction, and what the adaptive optics community can do to close the gap
Science.gov (United States)
2014-07-01
Gender equality in modern societies is a topic that never fails to raise passion and controversy, in spite of the large body of research material and studies currently available to inform the general public and scientists alike. This paper brings the gender equity and equality discussion to the Adaptive Optics community's doorstep. Its aim is threefold: (1) Raising awareness about the gender gap in science and astronomy in general, and in Adaptive Optics in particular; (2) Providing a snapshot of real and/or perceived causes for the gender gap existing in science and engineering; and (3) Presenting a range of practical solutions which have been or are being implemented at various institutions in order to bridge this gap and increase female participation at all levels of the scientific enterprise. Actual data will be presented to support aim (1), including existing gender data in science, engineering and astronomy, as well as original data specific to the Adaptive Optics community to be gathered in time for presentation at this conference. (2) will explore the often complex causes converging to explain gender equity issues that are deeply rooted in our male-dominated culture, including: conscious and unconscious gender biases in perceptions and attitudes, work-life balance, the n-body problem, the smaller number of female leaders and role models, etc. Finally, (3) will offer examples of conscious and pro-active gender equity measures which are helping to bring the female to male ratio closer to its desirable 50/50 target in science and astronomy.
18. Increasing the field of view of adaptive optics scanning laser ophthalmoscopy.
Science.gov (United States)
Laslandes, Marie; Salas, Matthias; Hitzenberger, Christoph K; Pircher, Michael
2017-11-01
An adaptive optics scanning laser ophthalmoscope (AO-SLO) set-up with two deformable mirrors (DM) is presented. It allows high resolution imaging of the retina on a 4°×4° field of view (FoV), considering a 7 mm pupil diameter at the entrance of the eye. Imaging on such a FoV, which is larger than in classical AO-SLO instruments, is enabled by the use of the two DMs. The first DM is located in a plane that is conjugated to the pupil of the eye and corrects for aberrations that are constant over the FoV. The second DM is conjugated to a plane that is located ∼0.7 mm anterior to the retina. This DM corrects for anisoplanatism effects within the FoV. The control of the DMs is performed by combining the classical AO technique, using a Shack-Hartmann wave-front sensor, and sensorless AO, which uses a criterion characterizing the image quality. The retinas of four healthy volunteers were imaged in vivo with the developed instrument. In order to assess the performance of the set-up and to demonstrate the benefits of the two-DM configuration, the acquired images were compared with images taken in conventional conditions, on a smaller FoV and with only one DM. Moreover, an image of a larger patch of the retina was obtained by stitching nine images acquired with a 4°×4° FoV, resulting in a total FoV of 10°×10°. Finally, different retinal layers were imaged by shifting the focal plane.
19. Adaptive grazing incidence optics for the next generation of x-ray observatories
Science.gov (United States)
Lillie, C.; Pearson, D.; Plinta, A.; Metro, B.; Lintz, E.; Shropshire, D.; Danner, R.
2010-09-01
Advances in X-ray astronomy require high spatial resolution and large collecting area. Unfortunately, X-ray telescopes with grazing incidence mirrors require hundreds of concentric mirror pairs to obtain the necessary collecting area, and these mirrors must be thin shells packed tightly together. They must also be light enough to be placed in orbit with existing launch vehicles, and able to be fabricated by the thousands for an affordable cost. The current state of the art in X-ray observatories is represented by NASA's Chandra X-ray observatory with 0.5 arc-second resolution, but only 400 cm2 of collecting area, and by ESA's XMM-Newton observatory with 4,300 cm2 of collecting area but only 15 arc-second resolution. The joint NASA/ESA/JAXA International X-ray Observatory (IXO), with ~15,000 cm2 of collecting area and 5 arc-second resolution, which is currently in the early study phase, is pushing the limits of passive mirror technology. The Generation-X mission is one of the Advanced Strategic Mission Concepts that NASA is considering for development in the post-2020 period. As currently conceived, Gen-X would be a follow-on to IXO with a collecting area >= 50 m2, a 60-m focal length and 0.1 arc-second spatial resolution. Gen-X would be launched in ~2030 with a heavy-lift launch vehicle to an L2 orbit. Active figure control will be necessary to meet the challenging requirements of the Gen-X optics. In this paper we present our adaptive grazing incidence mirror design and the results from laboratory tests of a prototype mirror.
20. Nonlinear adaptive optics: aberration correction in three photon fluorescence microscopy for mouse brain imaging
Science.gov (United States)
Sinefeld, David; Paudel, Hari P.; Wang, Tianyu; Wang, Mengran; Ouzounov, Dimitre G.; Bifano, Thomas G.; Xu, Chris
2017-02-01
Multiphoton fluorescence microscopy is a well-established technique for deep-tissue imaging with subcellular resolution. Three-photon microscopy (3PM) when combined with long wavelength excitation was shown to allow deeper imaging than two-photon microscopy (2PM) in biological tissues, such as mouse brain, because out-of-focus background light can be further reduced due to the higher order nonlinear excitation. As was demonstrated in 2PM systems, imaging depth and resolution can be improved by aberration correction using adaptive optics (AO) techniques which are based on shaping the scanning beam using a spatial light modulator (SLM). In this way, it is possible to compensate for tissue low order aberration and to some extent, to compensate for tissue scattering. Here, we present a 3PM AO microscopy system for brain imaging. Soliton self-frequency shift is used to create a femtosecond source at 1675 nm and a microelectromechanical (MEMS) SLM serves as the wavefront shaping device. We perturb the 1020 segment SLM using a modified nonlinear version of three-point phase shifting interferometry. The nonlinearity of the fluorescence signal used for feedback ensures that the signal is increasing when the spot size decreases, allowing compensation of phase errors in an iterative optimization process without direct phase measurement. We compare the performance for different orders of nonlinear feedback, showing an exponential growth in signal improvement as the nonlinear order increases. We demonstrate the impact of the method by applying the 3PM AO system for in-vivo mouse brain imaging, showing improvement in signal at 1-mm depth inside the brain.
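The feedback-only correction idea, increasing a nonlinear fluorescence signal without ever measuring phase directly, can be caricatured as coordinate ascent over segment phases. In this toy model (`optimize_segments` is a hypothetical name; the real system perturbs a 1020-segment MEMS SLM using modified three-point phase-shifting interferometry), an n-photon-like signal |Σ exp(iφ)|^(2n) is maximised segment by segment:

```python
import numpy as np

def optimize_segments(aberration, n_photon=3,
                      candidates=np.linspace(0, 2 * np.pi, 8, endpoint=False),
                      sweeps=3):
    """Coordinate-ascent sketch of signal-feedback AO.

    Each segment's phase is set to the candidate value that maximises the
    nonlinear focus signal, with all other segments held fixed."""
    phi = np.zeros_like(aberration)

    def signal(ph):
        # sharper focus -> larger coherent sum -> much larger n-photon signal
        return np.abs(np.exp(1j * (ph + aberration)).sum()) ** (2 * n_photon)

    for _ in range(sweeps):
        for k in range(len(phi)):
            vals = [signal(np.where(np.arange(len(phi)) == k, c, phi))
                    for c in candidates]
            phi[k] = candidates[int(np.argmax(vals))]
    return phi, signal(phi)
```

The higher the nonlinear order, the more steeply the signal rewards a tighter focus, which is the abstract's point about convergence improving with the order of the feedback.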
1. The ARGOS laser system: green light for ground layer adaptive optics at the LBT
Science.gov (United States)
Raab, Walfried; Rabien, Sebastian; Gässler, Wolfgang; Esposito, Simone; Barl, Lothar; Borelli, Jose; Daysenroth, Matthias; Gemperlein, Hans; Kulas, Martin; Ziegleder, Julian
2014-07-01
We report on the development of the laser system of ARGOS, the multiple laser guide star adaptive optics system for the Large Binocular Telescope (LBT). The system uses a total of six high powered, pulsed Nd:YAG lasers frequency-doubled to a wavelength of 532 nm to generate a set of three guide stars above each of the LBT telescopes. The position of each of the LGS constellations on sky as well as the relative position of the individual laser guide stars within this constellation is controlled by a set of steerable mirrors and a fast tip-tilt mirror within the laser system. The entire opto-mechanical system is housed in two hermetically sealed and thermally controlled enclosures on the SX and DX side of the LBT telescope. The laser beams are propagated through two refractive launch telescopes which focus the beams at an altitude of 12 km, creating a constellation of laser guide stars around a 4 arcminute diameter circle by means of Rayleigh scattering. In addition to the GLAO Rayleigh beacon system, ARGOS has also been designed for a possible future upgrade with a hybrid sodium laser - Rayleigh beacon combination, enabling diffraction limited operation. The ARGOS laser system was successfully installed at the LBT in April 2013. Extensive functional tests have been carried out and have verified the operation of the systems according to specifications. The alignment of the laser system with respect to the launch telescope was carried out during two more runs in June and October 2013, followed by the first propagation of laser light on sky in November 2013.
2. On the power and offset allocation for rate adaptation of spatial multiplexing in optical wireless MIMO channels
KAUST Repository
Park, Kihong
2013-04-01
In this paper, we consider a resource allocation method for visible light communication. It is challenging to achieve high data rates due to the limited bandwidth of the optical sources. In order to increase the spectral efficiency, we design a suitable multiple-input multiple-output (MIMO) system utilizing spatial multiplexing based on singular value decomposition and adaptive modulation. More specifically, after explaining why the conventional allocation method for radio frequency MIMO channels cannot be applied directly to optical intensity channels, we theoretically derive a power allocation method for an arbitrary number of transmit and receive antennas for optical wireless MIMO systems. Based on three key constraints: the nonnegativity of the intensity-modulated signal, the aggregate optical power budget, and the bit error rate requirement, we propose a novel method to allocate the optical power, the offset value, and the modulation size. Based on selected simulation results, we show that our proposed allocation method gives a better spectral efficiency at the expense of an increased computational complexity in comparison to a simple method that allocates the optical power equally among all the data streams. © 2013 IEEE.
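The SVD-based spatial multiplexing underlying the allocation problem diagonalises the channel: precoding with V and combining with Uᵀ turns H into independent per-stream gains. A minimal sketch (the matrix values are made up; real optical intensity channels additionally need the nonnegativity offset the abstract discusses):

```python
import numpy as np

# Decompose a 2x2 MIMO optical channel H into parallel subchannels via SVD:
# with precoder V and combiner U^T, the end-to-end map becomes diag(s).
H = np.array([[0.9, 0.3],
              [0.2, 0.7]])
U, s, Vt = np.linalg.svd(H)

x = np.array([1.0, -0.5])                 # per-stream symbols (toy values)
y = U.T @ (H @ (Vt.T @ x))                # transmit precoding + receive combining
# y equals s * x: each stream sees only its own singular-value gain,
# so power/offset/modulation can be allocated per stream independently.
```

The paper's contribution is how to split the optical power budget and DC offsets across these decoupled streams under the intensity-channel constraints.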
3. On the power and offset allocation for rate adaptation of spatial multiplexing in optical wireless MIMO channels
KAUST Repository
Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim
2013-01-01
In this paper, we consider a resource allocation method for visible light communication. It is challenging to achieve high data rates due to the limited bandwidth of the optical sources. In order to increase the spectral efficiency, we design a suitable multiple-input multiple-output (MIMO) system utilizing spatial multiplexing based on singular value decomposition and adaptive modulation. More specifically, after explaining why the conventional allocation method for radio frequency MIMO channels cannot be applied directly to optical intensity channels, we theoretically derive a power allocation method for an arbitrary number of transmit and receive antennas for optical wireless MIMO systems. Based on three key constraints: the nonnegativity of the intensity-modulated signal, the aggregate optical power budget, and the bit error rate requirement, we propose a novel method to allocate the optical power, the offset value, and the modulation size. Based on selected simulation results, we show that our proposed allocation method gives a better spectral efficiency at the expense of an increased computational complexity in comparison to a simple method that allocates the optical power equally among all the data streams. © 2013 IEEE.
4. Advancing adaptive optics technology: Laboratory turbulence simulation and optimization of laser guide stars
Science.gov (United States)
Rampy, Rachel A.
Since Galileo's first telescope some 400 years ago, astronomers have been building ever-larger instruments. Yet only within the last two decades has it become possible to realize the potential angular resolutions of large ground-based telescopes, by using adaptive optics (AO) technology to counter the blurring effects of Earth's atmosphere. And only within the past decade has the development of laser guide stars (LGS) extended AO capabilities to observe science targets nearly anywhere in the sky. Improving turbulence simulation strategies and LGS are the two main topics of my research. In the first part of this thesis, I report on the development of a technique for manufacturing phase plates for simulating atmospheric turbulence in the laboratory. The process involves strategic application of clear acrylic paint onto a transparent substrate. Results of interferometric characterization of the plates are described and compared to Kolmogorov statistics. The range of r0 (Fried's parameter) achieved thus far is 0.2--1.2 mm at 650 nm measurement wavelength, with a Kolmogorov power law. These plates proved valuable at the Laboratory for Adaptive Optics at the University of California, Santa Cruz, where they have been used in the Multi-Conjugate Adaptive Optics testbed, during integration and testing of the Gemini Planet Imager, and as part of the calibration system of the on-sky AO testbed named ViLLaGEs (Visible Light Laser Guidestar Experiments). I present a comparison of measurements taken by ViLLaGEs of the power spectrum of a plate and the real sky turbulence. The plate is demonstrated to follow Kolmogorov theory well, while the sky power spectrum does so in a third of the data. This method of fabricating phase plates has been established as an effective and low-cost means of creating simulated turbulence. Due to the demand for such devices, they are now being distributed to other members of the AO community. The second topic of this thesis pertains to understanding and
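A standard way to generate Kolmogorov-statistics phase screens numerically, the digital counterpart of the acrylic plates above, is to filter complex white noise with the square root of the k^(-11/3) phase power spectrum. A rough sketch; the overall amplitude scaling is approximate and would need calibration against the target r0:

```python
import numpy as np

def kolmogorov_screen(n=128, r0=0.1, dx=0.01, seed=1):
    """FFT-method Kolmogorov phase screen (illustrative; scaling approximate).

    White noise is shaped by sqrt of the phase PSD ~ 0.023 r0^(-5/3) k^(-11/3),
    then transformed back to the spatial domain."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(fx, fx)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                                  # avoid dividing by zero
    psd = 0.023 * r0 ** (-5 / 3) * k ** (-11 / 3)
    psd[0, 0] = 0.0                                # remove piston (zero-frequency) power
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * n / dx
    return screen.real
```

This is the same statistics the interferometric characterization checks on the physical plates: the structure function of such a screen should follow the Kolmogorov 5/3 power law over the sampled scales.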
5. Opto-mechanical design of ShaneAO: the adaptive optics system for the 3-meter Shane Telescope
Science.gov (United States)
Ratliff, C.; Cabak, J.; Gavel, D.; Kupke, R.; Dillon, D.; Gates, E.; Deich, W.; Ward, J.; Cowley, D.; Pfister, T.; Saylor, M.
2014-07-01
A Cassegrain-mounted adaptive optics instrument presents unique challenges for opto-mechanical design. The flexure and temperature tolerances for stability are tighter than those of seeing-limited instruments. These criteria require particular attention to material properties and mounting techniques. This paper addresses the mechanical designs developed to meet the optical functional requirements. One of the key considerations was to have gravitational deformations, which vary with telescope orientation, stay within the optical error budget, or to ensure that we can compensate with a steering mirror by maintaining predictable elastic behavior. Here we look at several cases where deformation is predicted with finite element analysis and Hertzian deformation analysis, and also tested. Techniques used to address thermal deformation compensation without the use of low-CTE materials will also be discussed.
6. High-resolution imaging of the retinal nerve fiber layer in normal eyes using adaptive optics scanning laser ophthalmoscopy.
Science.gov (United States)
Takayama, Kohei; Ooto, Sotaro; Hangai, Masanori; Arakawa, Naoko; Oshima, Susumu; Shibata, Naohisa; Hanebuchi, Masaaki; Inoue, Takashi; Yoshimura, Nagahisa
2012-01-01
To conduct high-resolution imaging of the retinal nerve fiber layer (RNFL) in normal eyes using adaptive optics scanning laser ophthalmoscopy (AO-SLO). AO-SLO images were obtained in 20 normal eyes at multiple locations in the posterior polar area and a circular path with a 3-4-mm diameter around the optic disc. For each eye, images focused on the RNFL were recorded and a montage of AO-SLO images was created. AO-SLO images for all eyes showed many hyperreflective bundles in the RNFL. Hyperreflective bundles above or below the fovea were seen in an arch from the temporal periphery on either side of a horizontal dividing line to the optic disc. The dark lines among the hyperreflective bundles were narrower around the optic disc compared with those in the temporal raphe. The hyperreflective bundles corresponded with the direction of the striations on SLO red-free images. The resolution and contrast of the bundles were much higher in AO-SLO images than in red-free fundus photography or SLO red-free images. The mean hyperreflective bundle width around the optic disc had a double-humped shape; the bundles at the temporal and nasal sides of the optic disc were narrower than those above and below the optic disc (P [...]). [...] optical coherence tomography correlated with the hyperreflective bundle widths on AO-SLO (P [...]) [...] fiber bundles and Müller cell septa. The widths of the nerve fiber bundles appear to be proportional to the RNFL thickness at equivalent distances from the optic disc.
7. Binding of Myomesin to Obscurin-Like-1 at the Muscle M-Band Provides a Strategy for Isoform-Specific Mechanical Protection.
Science.gov (United States)
Pernigo, Stefano; Fukuzawa, Atsushi; Beedle, Amy E M; Holt, Mark; Round, Adam; Pandini, Alessandro; Garcia-Manyes, Sergi; Gautel, Mathias; Steiner, Roberto A
2017-01-03
The sarcomeric cytoskeleton is a network of modular proteins that integrate mechanical and signaling roles. Obscurin, or its homolog obscurin-like-1, bridges the giant ruler titin and the myosin crosslinker myomesin at the M-band. Yet, the molecular mechanisms underlying the physical obscurin(-like-1):myomesin connection, important for mechanical integrity of the M-band, remained elusive. Here, using a combination of structural, cellular, and single-molecule force spectroscopy techniques, we decode the architectural and functional determinants defining the obscurin(-like-1):myomesin complex. The crystal structure reveals a trans-complementation mechanism whereby an incomplete immunoglobulin-like domain assimilates an isoform-specific myomesin interdomain sequence. Crucially, this unconventional architecture provides mechanical stability up to forces of ∼135 pN. A cellular competition assay in neonatal rat cardiomyocytes validates the complex and provides the rationale for the isoform specificity of the interaction. Altogether, our results reveal a novel binding strategy in sarcomere assembly, which might have implications for muscle nanomechanics and overall M-band organization. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
8. Adaptive Scanning Optical Microscope (ASOM): A multidisciplinary optical microscope design for large field of view and high resolution imaging
NARCIS (Netherlands)
Potsaid, B.; Bellouard, Y.J.; Wen, J.T.
2005-01-01
From micro-assembly to biological observation, the optical microscope remains one of the most important tools for observing below the threshold of the naked human eye. However, in its conventional form, it suffers from a trade-off between resolution and field of view. This paper presents a new
9. Tm3+/Yb3+ co-doped tellurite glass with silver nanoparticles for 1.85 μm band laser material
Science.gov (United States)
Huang, Bo; Zhou, Yaxun; Cheng, Pan; Zhou, Zizhong; Li, Jun; Jin, Wei
2016-10-01
Tm3+/Yb3+ co-doped tellurite glasses with different silver nanoparticle (Ag NP) concentrations were prepared using the conventional melt-quenching technique and characterized by UV/Vis/NIR absorption spectra, 1.85 μm band fluorescence emission spectra, transmission electron microscopy (TEM) images, differential scanning calorimetry (DSC) curves and X-ray diffraction (XRD) patterns to investigate the effects of Ag NPs on the 1.85 μm band spectroscopic properties of Tm3+ ions, the thermal stability and the structural nature of the glass hosts. Under excitation by a 980 nm laser diode (LD), the 1.85 μm band fluorescence emission of Tm3+ ions is enhanced significantly in the presence of Ag NPs with an average diameter of ∼8 nm and a localized surface plasmon resonance (LSPR) band of ∼590 nm, which is mainly attributed to the increased local electric field induced by Ag NPs at the proximity of the doped rare-earth ions on the basis of energy transfer from Yb3+ to Tm3+ ions. An improvement of about 110% in fluorescence intensity is observed in the Tm3+/Yb3+ co-doped tellurite glass containing a 0.5 mol% amount of AgNO3, while the prepared glass samples possess good thermal stability and an amorphous structural nature. Meanwhile, the Judd-Ofelt intensity parameters Ωt (t = 2,4,6), spontaneous radiative transition probabilities, fluorescence branching ratios and radiative lifetimes of the relevant excited levels of Tm3+ ions were determined based on the Judd-Ofelt theory to reveal the enhancing effects of Ag NPs on the 1.85 μm band spectroscopic properties, and the energy transfer micro-parameters and phonon contribution ratios were calculated based on non-resonant energy transfer theory to elucidate the energy transfer mechanism between Yb3+ and Tm3+ ions. The present results indicate that the prepared Tm3+/Yb3+ co-doped tellurite glass with an appropriate amount of Ag NPs is a promising laser gain medium for 1.85 μm band solid-state lasers and amplifiers.
10. Adaptive optics imaging of healthy and abnormal regions of retinal nerve fiber bundles of patients with glaucoma.
Science.gov (United States)
Chen, Monica F; Chui, Toco Y P; Alhadeff, Paula; Rosen, Richard B; Ritch, Robert; Dubra, Alfredo; Hood, Donald C
2015-01-08
To better understand the nature of glaucomatous damage of the macula, especially the structural changes seen between relatively healthy and clearly abnormal (AB) retinal regions, using an adaptive optics scanning light ophthalmoscope (AO-SLO). Adaptive optics SLO images and optical coherence tomography (OCT) vertical line scans were obtained on one eye of seven glaucoma patients, with relatively deep local arcuate defects on the 10-2 visual field test in one (six eyes) or both hemifields (one eye). Based on the OCT images, the retinal nerve fiber (RNF) layer was divided into two regions: (1) within normal limits (WNL), relative RNF layer thickness within mean control values ±2 SD; and (2) AB, relative thickness less than -2 SD value. As seen on AO-SLO, the pattern of AB RNF bundles near the border of the WNL and AB regions differed across eyes. There were normal-appearing bundles in the WNL region of all eyes and AB-appearing bundles near the border with the AB region. This region with AB bundles ranged in extent from a few bundles to the entire AB region in the case of one eye. All other eyes had a large AB region without bundles. However, in two of these eyes, a few bundles were seen within this region of otherwise missing bundles. The AO-SLO images revealed details of glaucomatous damage that are difficult, if not impossible, to see with current OCT technology. Adaptive optics SLO may prove useful in following progression in clinical trials, or in disease management, if AO-SLO becomes widely available and easy to use. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
11. The Dimensions and Pole of Asteroid (21) Lutetia from Adaptive Optics Images
Science.gov (United States)
Drummond, Jack D.; Conrad, A.; Merline, W.; Carry, B.
2009-09-01
In a campaign to study the Rosetta mission target, asteroid (21) Lutetia, we obtained 81 images on December 2, 2008, at 2.12 microns with adaptive optics (AO) on the Keck-II 10 m telescope. From these nearly consecutive images obtained over a quarter of a rotation, we have determined the asteroid's triaxial ellipsoid diameters to be 132x101x76 km, with formal uncertainties of 1 km for the equatorial dimensions and 31 km for the shortest axis. This latter uncertainty occurs because the observations were made at the relatively high sub-Earth latitude of -69 degrees. From these observations we determine that Lutetia's pole lies at 2000.0 coordinates of RA=48, Dec=+9, or Ecliptic coordinates of [49;-8], with a formal uncertainty radius of 3 deg. (The other possible pole is eliminated by considering its lightcurve history.) The rotational pole derived for the lightcurve inversion model (available at http://astro.troja.mff.cuni.cz/projects/asteroids3D/web.php) is only 5 deg from ours, but comparing our images to the lightcurve inversion model we find that Lutetia is more pointed than the model. Our technique of deriving the dimensions of asteroids from AO images has been calibrated against Pluto and 4 satellites of Saturn with precise diameters, and we find that any systematic errors can be no more than 1-3%. We acknowledge the assistance of other team members Christophe Dumas (ESO), Peter Tamblyn (SwRI), and Clark Chapman (SwRI). We also thank Hal Weaver (JHU/APL) as the lead for our collaboration with the Rosetta mission. We are grateful for telescope time made available to us by S. Kulkarni and M. Busch (Cal Tech) for a portion of our overall Lutetia effort. We also thank our collaborators on Team Keck, the Keck science staff, for making possible some of the Lutetia observations and for their participation. Additional Lutetia observations were acquired at Gemini North under NOAO time allocation.
12. Laser Guidestar Satellite for Ground-based Adaptive Optics Imaging of Geosynchronous Satellites and Astronomical Targets
Science.gov (United States)
Marlow, W. A.; Cahoy, K.; Males, J.; Carlton, A.; Yoon, H.
2015-12-01
Real-time observation and monitoring of geostationary (GEO) satellites with ground-based imaging systems would be an attractive alternative to fielding high cost, long lead, space-based imagers, but ground-based observations are inherently limited by atmospheric turbulence. Adaptive optics (AO) systems are used to help ground telescopes achieve diffraction-limited seeing. AO systems have historically relied on the use of bright natural guide stars or laser guide stars projected on a layer of the upper atmosphere by ground laser systems. There are several challenges with this approach, such as the sidereal motion of GEO objects relative to natural guide stars and limitations of ground-based laser guide stars: they cannot be used to correct tip-tilt, they are not point sources, and they have finite angular sizes when detected at the receiver. There is a difference between the wavefront error measured using the guide star and that of the target due to the cone effect, which also makes it difficult to use a distributed aperture system with a larger baseline to improve resolution. Inspired by previous concepts proposed by A.H. Greenaway, we propose using a space-based laser guide star projected from a satellite orbiting the Earth. We show that a nanosatellite-based guide star system meets the needs for imaging GEO objects using a low power laser even from 36,000 km altitude. Satellite guide star (SGS) systems would be well above atmospheric turbulence and could provide a small angular size reference source. CubeSats offer inexpensive, frequent access to space at a fraction of the cost of traditional systems, and are now being deployed to geostationary orbits and on interplanetary trajectories. The fundamental CubeSat bus unit of 10 cm cubed can be combined in multiple units and offers a common form factor allowing for easy integration as secondary payloads on traditional launches and rapid testing of new technologies on-orbit. We describe a 6U CubeSat SGS measuring 10 cm x 20 cm x
13. A convergent blind deconvolution method for post-adaptive-optics astronomical imaging
International Nuclear Information System (INIS)
Prato, M; Camera, A La; Bertero, M; Bonettini, S
2013-01-01
In this paper, we propose a blind deconvolution method which applies to data perturbed by Poisson noise. The objective function is a generalized Kullback–Leibler (KL) divergence, depending on both the unknown object and unknown point spread function (PSF), without the addition of regularization terms; constrained minimization, with suitable convex constraints on both unknowns, is considered. The problem is non-convex and we propose to solve it by means of an inexact alternating minimization method, whose global convergence to stationary points of the objective function has been recently proved in a general setting. The method is iterative and each iteration, also called outer iteration, consists of alternating an update of the object and the PSF by means of a fixed number of iterations, also called inner iterations, of the scaled gradient projection (SGP) method. Therefore, the method is similar to other proposed methods based on the Richardson–Lucy (RL) algorithm, with SGP replacing RL. The use of SGP has two advantages: first, it allows one to prove global convergence of the blind method; secondly, it allows the introduction of different constraints on the object and the PSF. The specific constraint on the PSF, besides non-negativity and normalization, is an upper bound derived from the so-called Strehl ratio (SR), which is the ratio between the peak value of an aberrated versus a perfect wavefront. Therefore, a typical application, but not a unique one, is to the imaging of modern telescopes equipped with adaptive optics systems for the partial correction of the aberrations due to atmospheric turbulence. In the paper, we describe in detail the algorithm and we recall the results leading to its convergence. Moreover, we illustrate its effectiveness by means of numerical experiments whose results indicate that the method, pushed to convergence, is very promising in the reconstruction of non-dense stellar clusters. The case of more complex astronomical targets
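The alternating scheme described above can be sketched in a few lines. Note that the paper's inner solver is scaled gradient projection (SGP) with a Strehl-ratio bound on the PSF; this sketch substitutes classical Richardson-Lucy multiplicative updates, which target the same Poisson KL objective but enforce only non-negativity and PSF normalization. All function names and parameter values are illustrative.

```python
import numpy as np

def conv2_fft(a, b):
    # Circular 2-D convolution via FFT (adequate for this sketch).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def corr2_fft(a, b):
    # Circular 2-D correlation via FFT (convolution with the flipped kernel).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

def blind_deconvolve(data, n_outer=10, n_inner=5, eps=1e-12):
    """Alternating multiplicative (Richardson-Lucy-type) minimization of the
    Poisson KL divergence over object and PSF; a stand-in for the paper's
    SGP inner iterations, without the Strehl-ratio constraint."""
    obj = np.full_like(data, data.mean())     # flat non-negative initial object
    psf = np.ones_like(data) / data.size      # flat normalized initial PSF
    for _ in range(n_outer):                  # outer iterations
        for _ in range(n_inner):              # inner iterations: update object
            ratio = data / np.maximum(conv2_fft(obj, psf), eps)
            obj = obj * corr2_fft(ratio, psf)
        for _ in range(n_inner):              # inner iterations: update PSF
            ratio = data / np.maximum(conv2_fft(obj, psf), eps)
            psf = np.maximum(psf * corr2_fft(ratio, obj), 0)
            psf /= psf.sum()                  # keep PSF normalized to unit sum
    return obj, psf
```

Each multiplicative update leaves its variable non-negative, so the convex constraints are maintained without an explicit projection step.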
14. ADAPTIVE OPTICS IMAGING OF A MASSIVE GALAXY ASSOCIATED WITH A METAL-RICH ABSORBER
International Nuclear Information System (INIS)
Chun, Mark R.; Kulkarni, Varsha P.; Gharanfoli, Soheila; Takamiya, Marianne
2010-01-01
The damped and sub-damped Lyα absorption (DLA and sub-DLA) line systems in quasar spectra are believed to be produced by intervening galaxies. However, the connection of quasar absorbers to galaxies is not well understood, since attempts to image the absorbing galaxies have often failed. While most DLAs appear to be metal poor, a population of metal-rich absorbers, mostly sub-DLAs, has been discovered in recent studies. Here we report high-resolution K-band imaging with the Keck laser guide star adaptive optics (LGSAO) system of the field of quasar SDSSJ1323-0021 in search of the galaxy producing the z = 0.72 sub-DLA absorber. With a metallicity of 2-4 times the solar level, this absorber is one of the most metal-rich systems found to date. Our data show a large bright galaxy with an angular separation of only 1.''25 from the quasar, well resolved from the quasar at the high resolution of our data. The galaxy has a magnitude of K = 17.6-17.9, which corresponds to a luminosity of ∼3-6 L*. Morphologically, the galaxy is fitted with a model with an effective radius, enclosing half of the total light, of R_e = 4 kpc and a bulge-to-total ratio of 0.4-1.0, indicating a substantial bulge stellar population. Based on the mass-metallicity relation of nearby galaxies, the absorber galaxy appears to have a stellar mass of ≳10^11 M_sun. Given the small impact parameter (9.0 kpc at the absorber redshift), this massive galaxy appears to be responsible for the metal-rich sub-DLA. The absorber galaxy is consistent with the metallicity-luminosity relation observed for nearby galaxies, but is near the upper end of metallicity. Our study marks the first application of LGSAO for the study of the structure of galaxies producing distant quasar absorbers. Finally, this study offers the first example of a massive galaxy with a substantial bulge producing a metal-rich absorber.
15. An adaptive optics multiplicity census of young stars in Upper Scorpius
Energy Technology Data Exchange (ETDEWEB)
Lafrenière, David [Département de Physique, Université de Montréal, C.P. 6128 Succ. Centre-Ville, Montréal, QC H3C 3J7 (Canada); Jayawardhana, Ray; Van Kerkwijk, Marten H. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Brandeker, Alexis [Department of Astronomy, Stockholm University, SE-106 91 Stockholm (Sweden); Janson, Markus, E-mail: david@astro.umontreal.ca [Astrophysics Research Center, Queen's University Belfast, BT7 1NN Belfast (United Kingdom)
2014-04-10
We present the results of a multiplicity survey of 91 stars spanning masses of ∼0.2-10 M_☉ in the Upper Scorpius star-forming region, based on adaptive optics imaging with the Gemini North telescope. Our observations identified 29 binaries, 5 triples, and no higher-order multiples. The corresponding raw multiplicity frequency is 0.37 ± 0.05. In the regime where our observations are complete—companion separations of 0.''1-5'' (∼15-800 AU) with magnitude limits ranging from K < 9.3 at 0.''1 to K < 15.8 at 5''—the multiplicity frequency is 0.27 (+0.05/−0.04). For similar separations, the multiplicity frequency in Upper Scorpius is comparable to that in other dispersed star-forming regions, but is a factor of two to three higher than in denser star-forming regions or in the field. Our sample displays a constant multiplicity frequency as a function of stellar mass. Among our sample of binaries, we find that both wider (>100 AU) and higher-mass systems tend to have companions with lower companion-to-primary mass ratios. Three of the companions identified in our survey are unambiguously substellar and have estimated masses below 0.04 M_☉ (two of them are new discoveries from this survey—1RXS J160929.1–210524b and HIP 78530B—although we have reported them separately in earlier papers). These three companions have projected orbital separations of 300-900 AU. Based on a statistical analysis factoring in sensitivity limits, we calculate an occurrence rate of ∼4.0% for 5-40 M_Jup companions at orbital separations of 250-1000 AU, compared to <1.8% at smaller separations, suggesting that such companions are more frequent on wider orbits.
16. New neighbours. III. 21 new companions to nearby dwarfs, discovered with adaptive optics
Science.gov (United States)
Beuzit, J.-L.; Ségransan, D.; Forveille, T.; Udry, S.; Delfosse, X.; Mayor, M.; Perrier, C.; Hainaut, M.-C.; Roddier, C.; Roddier, F.; Martín, E. L.
2004-10-01
We present some results of a CFHT adaptive optics search for companions to nearby dwarfs. We identify 21 new components in solar neighbourhood systems, of which 13 were found while surveying a volume-limited sample of M dwarfs within 12 pc. We are obtaining complete observations for this subsample, to derive unbiased multiplicity statistics for the very-low-mass disk population. Additionally, we resolve for the first time 6 known spectroscopic or astrometric binaries, for a total of 27 newly resolved companions. A significant fraction of the new binaries has favourable parameters for accurate mass determinations. The newly resolved companion of Gl 120.1C was thought to have a spectroscopic minimum mass in the brown-dwarf range (Duquennoy & Mayor 1991), and it contributed to the statistical evidence that a few percent of solar-type stars might have close-in brown-dwarf companions. We find that Gl 120.1C actually is an unrecognised double-lined spectroscopic pair. Its radial-velocity amplitude had therefore been strongly underestimated by Duquennoy & Mayor (1991), and it does not truly belong to their sample of single-lined systems with minimum spectroscopic mass below the substellar limit. We also present the first direct detection of Gl 494B, an astrometric brown-dwarf candidate. Its luminosity straddles the substellar limit, and it is a brown dwarf if its age is less than ∼300 Myr. A few more years of observations will ascertain its mass and status from first principles. Based on observations made at Canada-France-Hawaii Telescope, operated by the National Research Council of Canada, the Centre National de la Recherche Scientifique de France and the University of Hawaii. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The
17. Two fiber optics communication adapters apply to the control system of HIRFL-CSR
International Nuclear Information System (INIS)
Wang Dan; Zhang Shuocheng; Jing Lan; Zhang Wei; Ma Yunhai
2006-01-01
The authors introduce two kinds of fiber optics communication adapters developed for the HIRFL-CSR project, covering the design of the two adapters, their operational principle, hardware construction, and fields of application, as well as how equipment with a standard RS232 or RS485 interface can be controlled at long distance through the two adapters. Replacing the RS485 bus with fiber and the 485-Fiber Adapter solved the problem of communication interference. The control requirements of the national major science project HIRFL-CSR are fulfilled. (authors)
18. Neural mechanisms underlying spatial realignment during adaptation to optical wedge prisms.
Science.gov (United States)
Chapman, Heidi L; Eramudugolla, Ranmalee; Gavrilescu, Maria; Strudwick, Mark W; Loftus, Andrea; Cunnington, Ross; Mattingley, Jason B
2010-07-01
19. 15 Gbit/s indoor optical wireless systems employing fast adaptation and imaging reception in a realistic environment
Science.gov (United States)
2016-03-01
Optical wireless systems are promising candidates for next-generation indoor communication networks. Optical wireless technology offers freedom from spectrum regulations and, compared to current radio-frequency networks, higher data rates and increased security. This paper presents a fast adaptation method for multibeam angle and delay adaptation systems and a new spot-diffusing geometry, and also considers restrictions needed for complying with eye safety regulations. The fast adaptation algorithm reduces the computational load required to reconfigure the transmitter in the case of transmitter and/or receiver mobility. The beam clustering approach enables the transmitter to assign power to spots within the pixel's field of view (FOV) and increases the number of such spots. Thus, if the power per spot is restricted to comply with eye safety standards, the new approach, in which more spots are visible within the FOV of the pixel, leads to enhanced signal-to-noise ratio (SNR). Simulation results demonstrate that the techniques proposed in this paper lead to SNR improvements that enable reliable operation at data rates as high as 15 Gbit/s. These results are based on simulation and not on actual measurements or experiments.
20. Optically beamformed beam-switched adaptive antennas for fixed and mobile broadband wireless access networks
NARCIS (Netherlands)
Piqueras, M.A.; Grosskopf, G.; Vidal, B.; Herrera Llorente, J.; Martinez, J.M.; Sanchis, P.; Polo, V.; Corral, J.L.; Marceaux, A.; Galière, J.; Lopez, J.; Enard, A.; Valard, J.-L.; Parillaud, O.; Estèbe, E.; Vodjdani, N.; Choi, M.-S.; Besten, den J.H.; Soares, F.M.; Smit, M.K.; Marti, J.
2006-01-01
In this paper, a 3-bit optical beamforming architecture based on 2×2 optical switches and dispersive media is proposed and demonstrated. The performance of this photonic beamformer is experimentally demonstrated at 42.7 GHz in both transmission and reception modes. The progress achieved for
1. Analysis of retinal capillaries in patients with type 1 diabetes and nonproliferative diabetic retinopathy using adaptive optics imaging.
Science.gov (United States)
Lombardo, Marco; Parravano, Mariacristina; Serrao, Sebastiano; Ducoli, Pietro; Stirpe, Mario; Lombardo, Giuseppe
2013-09-01
To illustrate a noninvasive method to analyze the retinal capillary lumen caliber in patients with Type 1 diabetes. Adaptive optics images of the retinal capillaries were acquired in two parafoveal regions of interest in eyes with nonproliferative diabetic retinopathy and unaffected controls. Measures of the retinal capillary lumen caliber were quantified using an algorithm written in Matlab by an independent observer in a masked manner. Comparison of the adaptive optics images with red-free and color wide fundus retinography images was also performed. Eight eyes with nonproliferative diabetic retinopathy (eight patients, study group), no macular edema, and preserved visual acuity and eight control eyes (eight healthy volunteers; control group) were analyzed. The repeatability of capillary lumen caliber measurements was 0.22 μm (3.5%) with the 95% confidence interval between 0.12 and 0.31 μm in the study group. It was 0.30 μm (4.1%) with the 95% confidence interval between 0.16 and 0.43 μm in the control group. The average capillary lumen caliber was significantly narrower in eyes with nonproliferative diabetic retinopathy (6.27 ± 1.63 μm) than in the control eyes (7.31 ± 1.59 μm, P = 0.002). The authors demonstrated a noninvasive method to analyze, at a micrometric scale of resolution, the lumen of retinal capillaries. The parafoveal capillaries were narrower in patients with Type 1 diabetes and nonproliferative diabetic retinopathy than in healthy subjects, showing the potential capability of adaptive optics imaging to detect pathologic variations of the retinal microvascular structures in vaso-occlusive diseases.
2. Statistical properties of single-mode fiber coupling of satellite-to-ground laser links partially corrected by adaptive optics.
Science.gov (United States)
Canuet, Lucien; Védrenne, Nicolas; Conan, Jean-Marc; Petit, Cyril; Artaud, Geraldine; Rissons, Angelique; Lacan, Jerome
2018-01-01
In the framework of satellite-to-ground laser downlinks, an analytical model describing the variations of the instantaneous flux coupled into a single-mode fiber after correction of the incoming wavefront by partial adaptive optics (AO) is presented. Expressions for the probability density function and the cumulative distribution function, as well as for the average fading duration and the fading duration distribution of the corrected coupled flux, are given. These results are of prime interest for the computation of metrics related to coded transmissions over correlated channels, and they are compared against end-to-end wave-optics simulations in the case of a geosynchronous satellite (GEO)-to-ground and a low Earth orbit satellite (LEO)-to-ground scenario. Finally, the impact of different AO performances on the aforementioned fading duration distribution is analytically investigated for both scenarios.
3. Compact akinetic swept source optical coherence tomography angiography at 1060 nm supporting a wide field of view and adaptive optics imaging modes of the posterior eye.
Science.gov (United States)
Salas, Matthias; Augustin, Marco; Felberer, Franz; Wartak, Andreas; Laslandes, Marie; Ginner, Laurin; Niederleithner, Michael; Ensher, Jason; Minneman, Michael P; Leitgeb, Rainer A; Drexler, Wolfgang; Levecq, Xavier; Schmidt-Erfurth, Ursula; Pircher, Michael
2018-04-01
Imaging of the human retina with high resolution is an essential step towards improved diagnosis and treatment control. In this paper, we introduce a compact, clinically user-friendly instrument based on swept source optical coherence tomography (SS-OCT). A key feature of the system is the realization of two different operation modes. The first operation mode is similar to conventional OCT imaging and provides large field of view (FoV) images (up to 45° × 30°) of the human retina and choroid with standard resolution. The second operation mode enables optical zooming into regions of interest with high transverse resolution using adaptive optics (AO). The FoV of this second operation mode (AO-OCT mode) is 3.0° × 2.8° and enables the visualization of individual retinal cells such as cone photoreceptors or the choriocapillaris. The OCT engine is based on an akinetic swept source at 1060 nm and provides an A-scan rate of 200 kHz. Structural as well as angiographic information can be retrieved from the retina and choroid in both operation modes. The capabilities of the prototype are demonstrated in healthy and diseased eyes.
4. Volumetric imaging of rod and cone photoreceptor structure with a combined adaptive optics-optical coherence tomography-scanning laser ophthalmoscope
Science.gov (United States)
Wells-Gray, Elaine M.; Choi, Stacey S.; Zawadzki, Robert J.; Finn, Susanna C.; Greiner, Cherry; Werner, John S.; Doble, Nathan
2018-03-01
We have designed and implemented a dual-mode adaptive optics (AO) imaging system that combines spectral domain optical coherence tomography (OCT) and scanning laser ophthalmoscopy (SLO) for in vivo imaging of the human retina. The system simultaneously acquires SLO frames and OCT B-scans at 60 Hz with an OCT volume acquisition time of 4.2 s. Transverse eye motion measured from the SLO is used to register the OCT B-scans to generate three-dimensional (3-D) volumes. Key optical design considerations include: minimizing system aberrations through the use of off-axis relay telescopes, conjugate pupil plane requirements, and the use of dichroic beam splitters to separate and recombine the OCT and SLO beams around the nonshared horizontal scanning mirrors. To demonstrate system performance, AO-OCT-SLO images and measurements are taken from three normal human subjects ranging in retinal eccentricity from the fovea out to 15-deg temporal and 20-deg superior. Also presented are en face OCT projections generated from the registered 3-D volumes. The ability to acquire high-resolution 3-D images of the human retina in the midperiphery and beyond has clinical importance in diseases, such as retinitis pigmentosa and cone-rod dystrophy.
5. Results from a portable Adaptive Optics system on the 1 meter telescope at the Naval Observatory Flagstaff Station
Science.gov (United States)
Restaino, Sergio R.; Gilbreath, G. Charmaine; Payne, Don M.; Baker, Jeffrey T.; Martinez, Ty; DiVittorio, Michael; Mozurkewich, David; Friedman, Jeffrey
2003-02-01
In this paper we present results using a compact, portable adaptive optics system. The system was developed as a joint venture between the Naval Research Laboratory, the Air Force Research Laboratory, and two small New Mexico-based businesses. The system has a footprint of 18x24x18 inches and weighs less than 100 lbs. Key hardware design characteristics enable portability, easy mounting, and stable alignment. The system also enables quick calibration procedures, stable performance, and automatic adaptability to various pupil configurations. The system was tested during an engineering run in late July 2002 at the Naval Observatory Flagstaff Station one-meter telescope. Weather prevented extensive testing and the seeing during the run was marginal, but a sufficient opportunity was provided for proof of concept, initial characterization of closed-loop performance, and a start at addressing some of the most pressing engineering and scientific issues.
6. NEAR-INFRARED ADAPTIVE OPTICS IMAGING OF INFRARED LUMINOUS GALAXIES: THE BRIGHTEST CLUSTER MAGNITUDE-STAR FORMATION RATE RELATION
International Nuclear Information System (INIS)
Randriamanakoto, Z.; Väisänen, P.; Escala, A.; Kankare, E.; Kotilainen, J.; Mattila, S.; Ryder, S.
2013-01-01
We have established, for the first time in the near-infrared (NIR), a relation between the brightest super star cluster (SSC) magnitude in a galaxy and the host star formation rate (SFR). The data come from a statistical sample of ∼40 luminous IR galaxies (LIRGs) and starbursts utilizing K-band adaptive optics imaging. While expanding the observed relation to longer wavelengths, less affected by extinction effects, it also pushes to higher SFRs. The relation we find, M_K ∼ −2.6 log SFR, is similar to that derived previously in the optical and at lower SFRs. It does not, however, fit the optical relation with a single optical-to-NIR color conversion, suggesting systematic extinction and/or age effects. While the relation is broadly consistent with a size-of-sample explanation, we argue that physical reasons for the relation are likely as well. In particular, the scatter in the relation is smaller than expected from pure random sampling, strongly suggesting physical constraints. We also derive a quantifiable relation tying together cluster-internal effects and host SFR properties to possibly explain the observed brightest SSC magnitude versus SFR dependency.
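Since the abstract quotes only the slope of the brightest-cluster relation, the relation fixes magnitude differences between hosts, not absolute magnitudes. A minimal sketch under that reading (the function name is ours):

```python
import math

# Slope of the brightest-cluster relation quoted in the abstract: M_K ~ -2.6 log SFR.
SLOPE = -2.6

def delta_brightest_mag(sfr_a, sfr_b):
    """Difference M_K(b) - M_K(a) implied by the relation for two host SFRs.

    Only the slope is used; no zero point is quoted, so absolute
    magnitudes cannot be predicted from the abstract alone."""
    return SLOPE * (math.log10(sfr_b) - math.log10(sfr_a))

# A host forming stars 10x faster should host a brightest cluster ~2.6 mag brighter
# (more negative M_K), all else being equal.
```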
7. Validation of S-NPP VIIRS Day-Night Band and M Bands Performance Using Ground Reference Targets of Libya 4 and Dome C
Science.gov (United States)
Chen, Xuexia; Wu, Aisheng; Xiong, Xiaoxiong; Lei, Ning; Wang, Zhipeng; Chiang, Kwofu
2015-01-01
This paper provides methodologies developed and implemented by the NASA VIIRS Calibration Support Team (VCST) to validate the S-NPP VIIRS Day-Night Band (DNB) and M bands calibration performance. The Sensor Data Records produced by the Interface Data Processing Segment (IDPS) and the NASA Land Product Evaluation and Algorithm Testing Element (PEATE) are acquired from nearly nadir overpasses of the Libya 4 desert and Dome C snow surfaces. In the past 3.5 years, the modulated relative spectral responses (RSR) changed with time and led to a 3.8% increase in the DNB sensed solar irradiance and increases of 0.1% or less in the M4-M7 bands. After excluding data before April 5th, 2013, IDPS DNB radiance and reflectance data are consistent with Land PEATE data, with 0.6% or less difference for the Libya 4 site and 2% or less difference for the Dome C site. These differences are caused by inconsistent LUTs and algorithms used in calibration. For the Libya 4 site, the SCIAMACHY spectral and modulated RSR derived top-of-atmosphere (TOA) reflectances are compared with the Land PEATE TOA reflectance and indicate decreases of 1.2% and 1.3%, respectively. The radiance of the Land PEATE DNB is compared with the simulated radiance from aggregated M bands (M4, M5, and M7). These data trends match well, with 2% or less difference for the Libya 4 site and 4% or less difference for Dome C. This study demonstrates the consistent quality of DNB and M bands calibration for Land PEATE products during the operational period and for IDPS products after April 5th, 2013.
8. woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement
Czech Academy of Sciences Publication Activity Database
Assmann, E.; Wissgott, P.; Kuneš, Jan; Toschi, A.; Blaha, P.; Held, K.
2016-01-01
Vol. 202, May (2016), pp. 1-11. ISSN 0010-4655. Institutional support: RVO:68378271. Keywords: optical spectra * Wannier orbital. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 3.936, year: 2016
9. NASA Laser Communications with Adaptive Optics and Linear Mode Photon Counting, Phase I
Data.gov (United States)
National Aeronautics and Space Administration — In this effort, the Optical Sciences Company (tOSC) and Raytheon Vision Systems (RVS) will team to provide NASA with a long range laser communications system for...
10. The Exciton-Polariton Dispersion Law under the Action of Strong Pumping in the Region of the M-Band of Luminescence
Science.gov (United States)
2018-04-01
The double-pulse interaction with excitons and biexcitons in semiconductors is studied theoretically. It is shown that the dispersion law of the carrier wave has three branches under the action of strong pumping in the region of the M-band of luminescence. Values of the parameters at which the dispersion law branches can intersect, due to the degeneration of the exciton level energy, have been found. The effect of a significant change in the coupling strength between the exciton and the photon of a weak pulse with a change in the pumping intensity is predicted.
11. Potential energy surface, dipole moment surface and the intensity calculations for the 10 μm, 5 μm and 3 μm bands of ozone
Science.gov (United States)
Polyansky, Oleg L.; Zobov, Nikolai F.; Mizus, Irina I.; Kyuberis, Aleksandra A.; Lodi, Lorenzo; Tennyson, Jonathan
2018-05-01
Monitoring ozone concentrations in the Earth's atmosphere using spectroscopic methods is a major activity undertaken both from the ground and from space. However, there are long-running issues of consistency between measurements made at infrared (IR) and ultraviolet (UV) wavelengths. In addition, the key O3 IR bands at 10 μm, 5 μm and 3 μm also yield results which differ by a few percent when used for retrievals. These problems stem from the underlying laboratory measurements of the line intensities. Here we use quantum chemical techniques, first-principles electronic structure and variational nuclear-motion calculations, to address this problem. A new high-accuracy ab initio dipole moment surface (DMS) is computed. Several spectroscopically determined potential energy surfaces (PESs) are constructed by fitting to empirical energy levels in the region below 7000 cm-1, starting from an ab initio PES. Nuclear motion calculations using these new surfaces allow the unambiguous determination of the intensities of 10 μm band transitions, and the computation of the intensities of the 10 μm and 5 μm bands within their experimental error. A decrease in intensities within the 3 μm band is predicted which appears consistent with atmospheric retrievals. The PES and DMS form a suitable starting point both for the computation of comprehensive ozone line lists and for future calculations of electronic transition intensities.
Science.gov (United States)
Zhao, Jian; Chen, Lian-Kuan
2017-04-17
We investigate the constellation design and symbol error rate (SER) of set-partitioned (SP) quadrature amplitude modulation (QAM) formats. Based on the SER analysis, we derive the adaptive bit and power loading algorithm for SP QAM based intensity-modulation direct-detection (IM/DD) orthogonal frequency division multiplexing (OFDM). We experimentally show that the proposed system significantly outperforms the conventional adaptively-loaded IM/DD OFDM and can increase the data rate from 36 Gbit/s to 42 Gbit/s in the presence of severe dispersion-induced spectral nulls after 40-km single-mode fiber. It is also shown that the adaptive algorithm greatly enhances the tolerance to fiber nonlinearity and allows for more power budget.
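The adaptive bit- and power-loading idea described above can be illustrated with a toy sketch: each OFDM subcarrier is assigned as many bits as its SNR supports at a fixed gap to capacity, so subcarriers inside a dispersion-induced spectral null carry few or no bits. The SNR profile, the 6 dB gap, and the 6-bit cap here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def bit_loading(snr_db, gap_db=6.0, max_bits=6):
    """Assign floor(log2(1 + SNR/gap)) bits per subcarrier, capped."""
    snr = 10.0 ** ((np.asarray(snr_db) - gap_db) / 10.0)
    bits = np.floor(np.log2(1.0 + snr)).astype(int)
    return np.clip(bits, 0, max_bits)

subcarriers = np.arange(64)
# illustrative SNR profile with a dispersion-induced null near subcarrier 40
snr_profile = 28.0 - 25.0 * np.exp(-((subcarriers - 40) ** 2) / 20.0)
bits = bit_loading(snr_profile)
```

Subcarriers near the null are assigned fewer bits than those in the flat part of the band, which is the mechanism that lets the adaptively loaded system survive the spectral nulls reported after 40-km transmission.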
13. Adaptive elimination of optical fiber transmission noise in fiber ocean bottom seismic system
Science.gov (United States)
Zhong, Qiuwen; Hu, Zhengliang; Cao, Chunyan; Dong, Hongsheng
2017-10-01
In this paper, a pressure- and acceleration-insensitive reference interferometer (RI) is used to capture the laser and common-mode noise introduced by the transmission fiber and the laser. Using direct subtraction and adaptive filtering, this paper attempts to estimate and eliminate the transmission noise of the sensing interferometer (SI). Four methods are compared: direct subtraction (DS), least-mean-square adaptive cancellation (LMS), normalized least-mean-square adaptive cancellation (NLMS), and recursive-least-squares (RLS) adaptive filtering. The experimental results show that the noise reduction of RLS and NLMS is almost the same, better than LMS and DS, reaching 8 dB (@100 Hz). However, given its computational load, RLS is not well suited to a real-time operating system; for the same performance, NLMS is more practical than RLS. The noise reduction of LMS is slightly worse than that of RLS and NLMS, about 6 dB (@100 Hz), but its computational complexity is small, which benefits real-time implementation. The DS method has the least computational complexity, but its noise suppression is worse than that of the adaptive filters because of the difference in noise amplitude between the RI and the SI; only 4 dB (@100 Hz) is reached. The adaptive filter essentially eliminates the influence of the transmission noise, and the simulated sensor signal is kept intact.
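The NLMS cancellation scheme compared above can be sketched in a few lines: an FIR filter adapts so that the reference channel predicts the noise in the primary channel, and the residual is the recovered sensor signal. The synthetic signals, filter order, and step size below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def nlms_cancel(reference, primary, order=8, mu=0.5, eps=1e-8):
    """Normalized LMS noise canceller: the residual e is the signal estimate."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order - 1, len(primary)):
        x = reference[n - order + 1 : n + 1][::-1]  # most recent sample first
        y = w @ x                                   # noise estimate
        e = primary[n] - y                          # residual = signal estimate
        w += mu * e * x / (x @ x + eps)             # normalized update
        out[n] = e
    return out

rng = np.random.default_rng(0)
t = np.arange(20000) / 5000.0
signal = 0.1 * np.sin(2 * np.pi * 7 * t)    # weak sensor signal
noise = rng.standard_normal(t.size)         # common-mode noise in the reference
coupled = 0.8 * noise                       # noise reaches the primary channel
coupled[1:] += -0.3 * noise[:-1]            # through a causal FIR coupling
coupled[2:] += 0.1 * noise[:-2]
primary = signal + coupled
recovered = nlms_cancel(noise, primary)
```

After convergence the residual noise power is far below the original coupled-noise power, the same effect the abstract quantifies in dB at 100 Hz.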
14. Fiber optic adaptation of the interference filter photometer SPECTRAN for in-line measurements in PUREX process control
International Nuclear Information System (INIS)
Buerck, J.; Kraemer, K.; Koenig, W.
1990-02-01
The multicomponent version of the interference filter photometer SPECTRAN was adapted, via radiation-resistant quartz-glass optical fibers, to in-line flow cells in the aqueous and organic bypass streams of a uranium laboratory extraction column. A combined photometric/electrolytical-conductivity measurement allows this modified process instrument to be used as a uranium/plutonium in-line monitor in radioactive process streams. By applying a high-performance 100 W quartz-halogen lamp and suitable light-focussing optics, the light intensity, attenuated by coupling losses, could be increased to the desired level even when 1000 μm single-strand fibers (2x18 m) were used to transmit the light. In a series of calibration experiments, the U(VI) and U(IV) extinction coefficients were determined as a function of nitric acid molarity (for U(VI), also in TBP/kerosene). Furthermore, the validity of the Lambert-Beer law was examined for both oxidation states at different optical path lengths, and nitric acid/electrolytical conductivity calibration functions between 0-100 g/l U(VI) and 0-4 mol/l HNO3 were set up. (orig./EF)
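The Lambert-Beer inversion behind such a calibration is simple: from E = ε·c·d, the concentration follows as c = E/(ε·d). The helper below is a minimal sketch; the coefficient value in the usage example is a made-up placeholder, not the paper's calibration.

```python
def uranium_conc_g_per_l(extinction, eps_l_per_g_cm, path_cm):
    """Invert Lambert-Beer: c = E / (eps * d).

    extinction      -- measured extinction E (dimensionless)
    eps_l_per_g_cm  -- extinction coefficient in l/(g*cm) (placeholder units)
    path_cm         -- optical path length d in cm
    """
    return extinction / (eps_l_per_g_cm * path_cm)
```

Doubling the optical path halves the inferred concentration for the same measured extinction, which is why the abstract checks the law at several path lengths.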
15. Modeling satellite-Earth quantum channel downlinks with adaptive-optics coupling to single-mode fibers
Science.gov (United States)
Gruneisen, Mark T.; Flanagan, Michael B.; Sickmiller, Brett A.
2017-12-01
The efficient coupling of photons from a free-space quantum channel into a single-mode optical fiber (SMF) has important implications for quantum network concepts involving SMF interfaces to quantum detectors, atomic systems, integrated photonics, and direct coupling to a fiber network. Propagation through atmospheric turbulence, however, leads to wavefront errors that degrade mode matching with SMFs. In a free-space quantum channel, this leads to photon losses in proportion to the severity of the aberration. This is particularly problematic for satellite-Earth quantum channels, where atmospheric turbulence can lead to significant wavefront errors. This report considers propagation from low-Earth orbit to a terrestrial ground station and evaluates the efficiency with which photons couple either through a circular field stop or into an SMF situated in the focal plane of the optical receiver. The effects of atmospheric turbulence on the quantum channel are calculated numerically and quantified through the quantum bit error rate and secure key generation rates in a decoy-state BB84 protocol. Numerical simulations include the statistical nature of Kolmogorov turbulence, sky radiance, and an adaptive-optics system under closed-loop control.
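The link between channel-induced errors and the secure key rate can be sketched with the standard asymptotic BB84 bound r = 1 - 2·h2(e), where h2 is the binary entropy (error correction and privacy amplification both taken at the Shannon limit). The decoy-state and finite-size corrections used in the study above are deliberately omitted from this sketch.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bb84_key_fraction(qber):
    """Asymptotic BB84 secure-key fraction r = 1 - 2*h2(e), clipped at zero."""
    return max(0.0, 1.0 - 2.0 * h2(qber))
```

The fraction falls monotonically with QBER and hits zero near the familiar ~11% threshold, which is why turbulence-induced losses and background radiance, both of which raise the QBER, translate directly into lost key rate.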
16. HIGH-REDSHIFT DUST OBSCURED GALAXIES: A MORPHOLOGY-SPECTRAL ENERGY DISTRIBUTION CONNECTION REVEALED BY KECK ADAPTIVE OPTICS
International Nuclear Information System (INIS)
Melbourne, J.; Matthews, K.; Soifer, B. T.
2009-01-01
A simple optical to mid-IR color selection, R - [24] > 14, i.e., f_ν(24 μm)/f_ν(R) ≳ 1000, identifies highly dust obscured galaxies (DOGs) with typical redshifts of z ∼ 2 ± 0.5. Extreme mid-IR luminosities (L_IR > 10^(12-14)) suggest that DOGs are powered by a combination of active galactic nuclei (AGNs) and star formation, possibly driven by mergers. In an effort to compare their photometric properties with their rest-frame optical morphologies, we obtained high-spatial-resolution (0.05″-0.1″) Keck Adaptive Optics K'-band images of 15 DOGs. The images reveal a wide range of morphologies, including small exponential disks (eight of 15), small ellipticals (four of 15), and unresolved sources (two of 15). One particularly diffuse source could not be classified because of low signal-to-noise ratio. We find a statistically significant correlation between galaxy concentration and mid-IR luminosity, with the most luminous DOGs exhibiting higher concentration and smaller physical size. DOGs with high concentration also tend to have spectral energy distributions (SEDs) suggestive of AGN activity. Thus, central AGN light may be biasing the morphologies of the more luminous DOGs to higher concentration. Conversely, more diffuse DOGs tend to show an SED shape suggestive of star formation. Two of 15 in the sample show multiple resolved components with separations of ∼1 kpc, circumstantial evidence for ongoing mergers.
17. Diffractive generalized phase contrast for adaptive phase imaging and optical security
DEFF Research Database (Denmark)
2012-01-01
We analyze the properties of Generalized Phase Contrast (GPC) when the input phase modulation is implemented using diffractive gratings. In GPC applications for patterned illumination, the use of a dynamic diffractive optical element for encoding the GPC input phase allows for on-the-fly optimiza… security applications and can be used to create phase-based information channels for enhanced information security.
18. SOFTWARE FOR SIMULATION OF TECHNOLOGICAL ADAPTATION OF THE OPTICAL INSTRUMENTS SYSTEMS
Directory of Open Access Journals (Sweden)
N. K. Artioukhina
2012-01-01
Programs for the calculation and analysis of optical systems of any class are provided. The most effective approach was to combine the programs into a complex with a general system of mathematical models. A characteristic feature is the unified exchange of information between these programs and the software systems Opal and Zemax.
19. Impact of design-parameters on the optical performance of a highpower adaptive mirror
NARCIS (Netherlands)
Koek, W.D.; Nijkerk, M.D.; Smeltink, J.A.; Dool, T.C. van den; Zwet, E.J. van; Baars, G.E. van
2017-01-01
TNO is developing a High Power Adaptive Mirror (HPAM) to be used in the CO2 laser beam path of an Extreme Ultra-Violet (EUV) light source for next-generation lithography. In this paper we report on a developed methodology, and the necessary simulation tools, to assess the performance and associated
20. Motion adaptation leads to parsimonious encoding of natural optic flow by blowfly motion vision system
NARCIS (Netherlands)
Heitwerth, J.; Kern, R.; Hateren, J.H. van; Egelhaaf, M.
Neurons sensitive to visual motion change their response properties during prolonged motion stimulation. These changes have been interpreted as adaptive and were concluded, for instance, to adjust the sensitivity of the visual motion pathway to velocity changes or to increase the reliability of
1. Optics
CERN Document Server
Mathieu, Jean Paul
1975-01-01
Optics, Parts 1 and 2 covers electromagnetic optics and quantum optics. The first part of the book examines the various important properties common to all electromagnetic radiation. This part also studies electromagnetic waves; electromagnetic optics of transparent isotropic and anisotropic media; diffraction; and two-wave and multi-wave interference. The polarization states of light, the velocity of light, and the special theory of relativity are also examined in this part. The second part is devoted to quantum optics, specifically discussing the classical molecular theory of optical p
Science.gov (United States)
Parthasarathy, S.; Giggenbach, D.; Kirstädter, A.
2014-10-01
Free-space optical (FSO) communication systems have seen significant developments in recent years due to the growing need for very high data rates and tap-proof communication. The operation of an FSO link is suited to a diverse variety of applications such as satellites, High Altitude Platforms (HAPs), Unmanned Aerial Vehicles (UAVs), aircraft, ground stations and other areas involving both civil and military situations. FSO communication systems face challenges due to different effects of the atmospheric channel. The FSO channel primarily suffers from scintillation effects due to Index of Refraction Turbulence (IRT). In addition, acquisition and pointing become more difficult because of the high directivity of the transmitted beam: mispointing of the transmitted beam and tracking errors at the receiver generate additional fading of the optical signal. HAPs are quasi-stationary vehicles operating in the stratosphere. The slowly varying but precisely determined time-of-flight of the inter-HAP channel adds to its characteristics. To propose a suitable ARQ scheme, proper theoretical understanding of optical atmospheric propagation and modeling of a specific-scenario FSO channel is required. In this paper, a bi-directional symmetrical inter-HAP link has been selected and modeled. The inter-HAP channel model is then investigated via simulations in terms of optical scintillation induced by IRT and in the presence of pointing errors. The performance characteristic of the model is then quantified in terms of fading statistics, from which the Packet Error Probability (PEP) is calculated. Based on the PEP characteristics, we propose suitable ARQ schemes.
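The step from fading statistics to a packet error probability can be sketched with a Monte Carlo over a lognormal scintillation model: a packet is lost whenever the instantaneous channel power drops below the link margin. The scintillation index, margin values, and outage-threshold model are illustrative assumptions, not the paper's channel model.

```python
import numpy as np

def packet_error_probability(scint_index=0.2, margin_db=3.0,
                             n_trials=200_000, seed=1):
    """Monte Carlo PEP over lognormal power fading with unit mean."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + scint_index)      # lognormal shape from scint. index
    h = rng.lognormal(mean=-sigma2 / 2.0, sigma=np.sqrt(sigma2), size=n_trials)
    thresh = 10.0 ** (-margin_db / 10.0)    # packet lost below the link margin
    return float(np.mean(h < thresh))

pep_3db = packet_error_probability(margin_db=3.0)
pep_6db = packet_error_probability(margin_db=6.0)
```

A few dB of extra margin cuts the PEP by orders of magnitude, which is the trade an ARQ design works against: retransmissions cover the residual outages that margin alone does not.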
3. Adaptive restoration of a partially coherent blurred image using an all-optical feedback interferometer with a liquid-crystal device.
Science.gov (United States)
Shirai, Tomohiro; Barnes, Thomas H
2002-02-01
A liquid-crystal adaptive optics system using all-optical feedback interferometry is applied to partially coherent imaging through a phase disturbance. A theoretical analysis based on the propagation of the cross-spectral density shows that the blurred image due to the phase disturbance can be restored, in principle, irrespective of the state of coherence of the light illuminating the object. Experimental verification of the theory has been performed for two cases when the object to be imaged is illuminated by spatially coherent light originating from a He-Ne laser and by spatially incoherent white light from a halogen lamp. We observed in both cases that images blurred by the phase disturbance were successfully restored, in agreement with the theory, immediately after the adaptive optics system was activated. The origin of the deviation of the experimental results from the theory, together with the effect of the feedback misalignment inherent in our optical arrangement, is also discussed.
4. SPATIALLY RESOLVED M-BAND EMISSION FROM IO’S LOKI PATERA–FIZEAU IMAGING AT THE 22.8 m LBT
Energy Technology Data Exchange (ETDEWEB)
Conrad, Albert; Veillet, Christian [LBT Observatory, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Kleer, Katherine de; Pater, Imke de [University of California at Berkeley, Berkeley, CA 94720 (United States); Leisenring, Jarron; Defrère, Denis; Hinz, Philip; Skemer, Andy [University of Arizona, 1428 E. University Blvd., Tucson, AZ 85721 (United States); Camera, Andrea La; Bertero, Mario; Boccacci, Patrizia [DIBRIS, University of Genoa, Via Dodecaneso 35, I-16146 Genova (Italy); Arcidiacono, Carmelo [INAF-Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Hofmann, Karl-Heinz; Schertl, Dieter; Weigelt, Gerd [Max Planck Institute for Radio Astronomy, Auf dem Hügel 69, D-53121 Bonn (Germany); Kürster, Martin [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Rathbun, Julie [Planetary Science Institute, 1700 E. Fort Lowell, Tucson, AZ 85719 (United States); Skrutskie, Michael [University of Virginia, 530 McCormick Road, Charlottesville, VA 22904 (United States); Spencer, John [Southwest Research Institute, 1050 Walnut Ste. Suite 300, Boulder, CO 80302 (United States); Woodward, Charles E., E-mail: aconrad@lbto.org [Minnesota Institute for Astrophysics, 116 Church St., Minneapolis, MN 55455 (United States)
2015-05-15
The Large Binocular Telescope Interferometer mid-infrared camera, LMIRcam, imaged Io on the night of 2013 December 24 UT and detected strong M-band (4.8 μm) thermal emission arising from Loki Patera. The 22.8 m baseline of the Large Binocular Telescope provides an angular resolution of ∼32 mas (∼100 km at Io) resolving the Loki Patera emission into two distinct maxima originating from different regions within Loki’s horseshoe lava lake. This observation is consistent with the presence of a high-temperature source observed in previous studies combined with an independent peak arising from cooling crust from recent resurfacing. The deconvolved images also reveal 15 other emission sites on the visible hemisphere of Io including two previously unidentified hot spots.
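The quoted physical scale can be checked with a one-line conversion from angle to size at the target: 32 mas subtends roughly 100 km at Io. The Earth-Io distance of ~4.2 AU used below is an assumption for illustration, not a value taken from the paper.

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)   # milliarcseconds -> radians
AU_KM = 1.495978707e8                              # astronomical unit in km

def angular_to_km(theta_mas, distance_au):
    """Physical size subtended by an angle theta_mas at a given distance."""
    return theta_mas * MAS_TO_RAD * distance_au * AU_KM

scale_km = angular_to_km(32.0, 4.2)   # ~100 km at Io, matching the abstract
```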
5. Effects of the P2 M-band flux asymmetry of laser-driven gold Hohlraums on the implosion of ICF ignition capsule
Energy Technology Data Exchange (ETDEWEB)
Li, Yongsheng [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); Graduate School, China Academy of Engineering Physics, Beijing 100088 (China); Gu, Jianfa; Wu, Changshu; Song, Peng; Dai, Zhensheng; Li, Shuanggui; Li, Xin; Kang, Dongguo; Gu, Peijun; Zheng, Wudi; Zou, Shiyang [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Center for Applied Physics and Technology, Peking University, Beijing 100871 (China); Lan, Ke; Ye, Wenhua, E-mail: ye-wenhua@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, Beijing 100094 (China); Center for Applied Physics and Technology, Peking University, Beijing 100871 (China); Zhang, Weiyan [China Academy of Engineering Physics, Mianyang 621900 (China)
2016-07-15
Low-mode asymmetries in the laser-indirect-drive inertial confinement fusion implosion experiments conducted on the National Ignition Facility [G. H. Miller et al., Nucl. Fusion 44, S228 (2004)] are deemed the main obstacles hindering further improvement of the nuclear performance of deuterium-tritium-layered capsules. The dominant seeds of these asymmetries include the P2 and P4 asymmetries of x-ray drives and the P2 asymmetry introduced by the supporting "tent." Here, we explore the effects of another possible seed that can lead to low-mode asymmetric implosions, i.e., the M-band flux asymmetry (MFA) in laser-driven cylindrical gold Hohlraums. It is shown that the M-band flux facilitates the ablation and acceleration of the shell, and that positive P2 MFAs can result in negative P2 asymmetries of hot spots and positive P2 asymmetries of the shell's ρR. An oblate or toroidal hot spot, depending on the P2 amplitude of MFA, forms at stagnation. The energy loss of such a hot spot via electron thermal conduction is seriously aggravated not only by the enlarged hot-spot surface but also by the vortices that develop and help transfer thermal energy from the hotter center to the colder margin of the hot spot. The cliffs of nuclear performance for the two methodologies of applying MFA (i.e., symmetric flux in the presence of MFA, and MFA added to a symmetric soft x-ray flux) are found to lie at 9.5% and 5.0% P2/P0 amplitude, respectively.
8. Optics
CERN Document Server
Fincham, W H A
2013-01-01
Optics: Ninth Edition Optics: Ninth Edition covers the work necessary for the specialization in such subjects as ophthalmic optics, optical instruments and lens design. The text includes topics such as the propagation and behavior of light; reflection and refraction - their laws and how different media affect them; lenses - thick and thin, cylindrical and subcylindrical; photometry; dispersion and color; interference; and polarization. Also included are topics such as diffraction and holography; the limitation of beams in optical systems and its effects; and lens systems. The book is recommen
9. Large deflection angle, high-power adaptive fiber optics collimator with preserved near-diffraction-limited beam quality.
Science.gov (United States)
Zhi, Dong; Ma, Yanxing; Chen, Zilun; Wang, Xiaolin; Zhou, Pu; Si, Lei
2016-05-15
We report on the development of a monolithic adaptive fiber-optics collimator with a large deflection angle and preserved near-diffraction-limited beam quality, tested at a maximal output power of 300 W. Additionally, a new method for measuring beam quality (M² factor) is developed. Experimental results show that the deflection angle of the collimated beam is in the range 0-0.27 mrad in the X direction and 0-0.19 mrad in the Y direction. The effective working frequency of the device is about 710 Hz. Using the new M²-measurement method, we calculate a beam quality of Mx² = 1.35 and My² = 1.24, which agrees with the result from the beam propagation analyzer and is well preserved with increasing output power.
10. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source
International Nuclear Information System (INIS)
Poynee, L A
2003-01-01
Shack-Hartmann-based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain depends on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot-halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived, and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
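The two estimators being compared can be sketched on a synthetic sub-aperture spot: center-of-mass centroiding versus cross-correlation with an ideal reference spot followed by parabolic sub-pixel peak interpolation. The Gaussian spot model and its parameters are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

def make_spot(shift, size=16, sigma=1.5):
    """Gaussian sub-aperture spot displaced by `shift` pixels in x."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    return np.exp(-((x - c - shift) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))

def centroid_x(img):
    """Center-of-mass x-position relative to the sub-aperture center."""
    x = np.arange(img.shape[1])
    return (img.sum(axis=0) @ x) / img.sum() - (img.shape[1] - 1) / 2.0

def correlation_x(img, ref):
    """Cross-correlate 1-D profiles; parabolic interpolation of the peak."""
    p, r = img.sum(axis=0), ref.sum(axis=0)
    cc = np.correlate(p, r, mode="full")
    k = int(np.argmax(cc))
    a, b, c = cc[k - 1], cc[k], cc[k + 1]
    frac = 0.5 * (a - c) / (a - 2.0 * b + c)   # sub-pixel peak offset
    return k + frac - (len(p) - 1)             # shift of img relative to ref

ref = make_spot(0.0)
img = make_spot(0.3)   # true sub-pixel shift of 0.3 px
```

Both estimators recover the 0.3-pixel shift on this clean spot; the paper's argument is that the correlation estimator keeps this accuracy under read noise, background light, and halo aberrations, where centroiding degrades.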
11. Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture
Science.gov (United States)
Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-08-01
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
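The HOP/WCC split described above is, at its core, a partitioned matrix-vector multiply: each HOP server reconstructs a partial corrector-error vector from its slice of the wavefront measurements, and the WCC sums the partials. The toy sizes and random reconstructor below are illustrative assumptions; the real RTC uses dedicated hardware, pipelining, and far larger dimensions.

```python
import numpy as np

def hop_partial(R_slice, s_slice):
    """One HOP server's partial reconstruction from its slope slice."""
    return R_slice @ s_slice

rng = np.random.default_rng(0)
n_act, n_slopes, n_servers = 64, 240, 6
R = rng.standard_normal((n_act, n_slopes))   # reconstructor matrix (toy)
s = rng.standard_normal(n_slopes)            # wavefront slope measurements

# split slope indices across the (up to six) HOP servers
chunks = np.array_split(np.arange(n_slopes), n_servers)
partials = [hop_partial(R[:, idx], s[idx]) for idx in chunks]
commands = np.sum(partials, axis=0)          # WCC aggregation step
```

Because the column-partitioned partial products sum exactly to the full product R @ s, the aggregation is lossless; the engineering problem is latency and synchronization, not numerics.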
12. Social Science at the Center for Adaptive Optics: Synergistic Systems of Program Evaluation, Applied Research, Educational Assessment, and Pedagogy
Science.gov (United States)
Goza, B. K.; Hunter, L.; Shaw, J. M.; Metevier, A. J.; Raschke, L.; Espinoza, E.; Geaney, E. R.; Reyes, G.; Rothman, D. L.
2010-12-01
This paper describes the interaction of four elements of social science as they have evolved in concert with the Center for Adaptive Optics Professional Development Program (CfAO PDP). We hope these examples persuade early-career scientists and engineers to include social science activities as they develop grant proposals and carry out their research. To frame our discussion we use a metaphor from astronomy. At the University of California Santa Cruz (UCSC), the CfAO PDP and the Educational Partnership Center (EPC) are two young stars in the process of forming a solar system. Together, they are surrounded by a disk of gas and dust made up of program evaluation, applied research, educational assessment, and pedagogy. An idea from the 2001 PDP intensive workshops program evaluation developed into the Assessing Scientific Inquiry and Leadership Skills (AScILS) applied research project. In iterative cycles, AScILS researchers participated in subsequent PDP intensive workshops, teaching social science while piloting AScILS measurement strategies. Subsequent "orbits" of the PDP program evaluation gathered ideas from the applied research and pedagogy. The denser regions of this disk of social science are in the process of forming new protoplanets as tools for research and teaching are developed. These tools include problem-solving exercises or simulations of adaptive optics explanations and scientific reasoning; rubrics to evaluate the scientific reasoning simulation responses, knowledge regarding inclusive science education, and student explanations of science/engineering inquiry investigations; and a scientific reasoning curriculum. Another applied research project is forming with the design of a study regarding how to assess engineering explanations. To illustrate the mutual shaping of the cross-disciplinary, intergenerational group of educational researchers and their projects, the paper ends with a description of the professional trajectories of some of the
13. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems
KAUST Repository
Abdelfattah, Ahmad; Gendron, Éric; Gratadour, Damien; Keyes, David E.; Ltaief, Hatem; Sevin, Arnaud; Vidal, Fabrice
2014-01-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study informs the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate the tomographic problem in detail. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide-and-conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, like Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be, to our knowledge, the largest-scale tomographic AO matrix solver computed to date, and it opens new research directions for extreme-scale AO simulations. © 2014 Springer International Publishing Switzerland.
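The core linear-algebra step, pseudo-inverting a dense symmetric matrix via eigenvalue decomposition, can be sketched at toy scale with NumPy (the paper does the same factorization via divide-and-conquer on multi-GPU systems at 40,000 unknowns; the matrix and threshold here are illustrative).

```python
import numpy as np

def sym_pinv(A, rcond=1e-12):
    """Pseudo-inverse of a symmetric matrix via eigendecomposition:
    A = V diag(w) V^T  =>  A+ = V diag(1/w, zeroing tiny w) V^T."""
    w, V = np.linalg.eigh(A)
    keep = np.abs(w) > rcond * np.abs(w).max()   # drop numerically-zero modes
    w_inv = np.zeros_like(w)
    w_inv[keep] = 1.0 / w[keep]
    return (V * w_inv) @ V.T                     # V diag(w_inv) V^T

rng = np.random.default_rng(0)
G = rng.standard_normal((50, 30))
A = G @ G.T            # dense symmetric, rank-deficient (rank 30 of 50)
A_pinv = sym_pinv(A)
```

The eigendecomposition route exploits symmetry (one factorization instead of a full SVD), which is exactly what makes the SYMV-heavy multi-GPU implementation profitable at scale.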
14. Ion-optical studies for a range adaptation method in ion beam therapy using a static wedge degrader combined with magnetic beam deflection
International Nuclear Information System (INIS)
Chaudhri, Naved; Saito, Nami; Bert, Christoph; Franczak, Bernhard; Steidl, Peter; Durante, Marco; Schardt, Dieter; Rietzel, Eike
2010-01-01
15. Noiseless imaging detector for adaptive optics with kHz frame rates
CERN Document Server
Vallerga, J V; Mikulec, Bettina; Tremsin, A; Clark, Allan G; Siegmund, O H W; CERN. Geneva
2004-01-01
A new hybrid optical detector is described that has many of the attributes desired for the next generation AO wavefront sensors. The detector consists of a proximity-focused MCP read out by four multi-pixel application-specific integrated circuit (ASIC) chips developed at CERN ("Medipix2") with individual pixels that amplify, discriminate and count input events. The detector has 512 x 512 pixels, zero readout noise (photon counting) and can be read out at 1 kHz frame rates. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 nanoseconds. When used in a Shack-Hartmann style wavefront sensor, it should be able to centroid approximately 5000 spots using 7 x 7 pixel sub-apertures resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest. A three year development effort for this detector technology has just been funded as part of the...
16. Adaptation and focusing of optode configurations for fluorescence optical tomography by experimental design methods.
Science.gov (United States)
Freiberger, Manuel; Clason, Christian; Scharfetter, Hermann
2010-01-01
Fluorescence tomography excites a fluorophore inside a sample by light sources on the surface. From boundary measurements of the fluorescent light, the distribution of the fluorophore is reconstructed. The optode placement determines the quality of the reconstructions in terms of, e.g., resolution and contrast-to-noise ratio. We address the adaptation of the measurement setup. The redundancy of the measurements is chosen as a quality criterion for the optodes and is computed from the Jacobian of the mathematical formulation of light propagation. The algorithm finds a subset with minimum redundancy in the measurements from a feasible pool of optodes. This allows biasing the design in order to favor reconstruction results inside a given region. Two different variations of the algorithm, based on geometric and arithmetic averaging, are compared. Both deliver similar optode configurations. The arithmetic averaging is slightly more stable, whereas the geometric averaging approach shows a better conditioning of the sensitivity matrix and mathematically corresponds more closely with entropy optimization. Adapted illumination and detector patterns are presented for an initial set of 96 optodes placed on a cylinder with focusing on different regions. Examples for the attenuation of fluorophore signals from regions outside the focus are given.
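The selection idea in this abstract, choosing a minimum-redundancy subset of optodes from the Jacobian of the light-propagation model, can be illustrated with a simple greedy sketch. The correlation-based redundancy measure and the random Jacobian below are illustrative assumptions, not the paper's exact criterion or averaging schemes.

```python
import numpy as np

def select_optodes(J, k):
    """Greedy minimum-redundancy selection: pick k rows of the Jacobian J
    whose sensitivity patterns are least correlated with those already
    chosen.  Illustrative stand-in for the paper's redundancy criterion."""
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)     # unit-norm rows
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # strongest first
    while len(chosen) < k:
        corr = np.abs(Jn @ Jn[chosen].T)                  # |cos| similarity
        redundancy = corr.max(axis=1)                     # worst-case overlap
        redundancy[chosen] = np.inf                       # exclude selected
        chosen.append(int(np.argmin(redundancy)))
    return chosen

rng = np.random.default_rng(1)
J = rng.standard_normal((12, 50))      # 12 candidate optodes, 50 voxels
chosen4 = select_optodes(J, 4)
print(chosen4)
```

Biasing the design toward a region of interest would amount to weighting the voxel columns of `J` before the selection loop.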
17. The wire optical test: a thorough analytical study in and out of caustic surface, and advantages of a dynamical adaptation
Science.gov (United States)
Alejandro Juárez-Reyes, Salvador; Sosa-Sánchez, Citlalli Teresa; Silva-Ortigoza, Gilberto; de Jesús Cabrera-Rosas, Omar; Espíndola-Ramos, Ernesto; Ortega-Vidals, Paula
2018-03-01
Among the best known non-interferometric optical tests are the wire test, the Foucault test and the Ronchi test with a low-frequency grating. Since the wire test is the seed for understanding the other ones, the aim of the present work is a thorough study of this test for a lens with symmetry of revolution, for any configuration of the object and detection planes where the two planes could intersect two, one or no branches of the caustic region (including the marginal and paraxial foci). To this end, we calculated the vectorial representation of the caustic region, and we found the analytical expression for the pattern; we report that the analytical pattern explicitly depends on the magnitude of a branch of the caustic. With the analytical pattern we computed a set of simulations of a dynamical adaptation of the optical wire test. From those simulations, we carried out a thorough analysis of the topological structure of the pattern, explaining how the multiple-image formation process and the image collapse process take place for each configuration, in particular when both the wire and the detection plane are placed inside the caustic region, which has not been studied before. For the first time, we remark that not only the intersections of the object and detection planes with the caustic are important in the change of pattern topology, but also the projection of the intersection between the caustic and the object plane mapped onto the detection plane, and the virtual projection of the intersection between the caustic and the detection plane mapped onto the object plane. We show that, for the new configurations of the optical system, the wire image consists of curves of the Tschirnhausen cubic, piriform and deformed eight-curve types.
18. High-resolution imaging of retinal nerve fiber bundles in glaucoma using adaptive optics scanning laser ophthalmoscopy.
Science.gov (United States)
Takayama, Kohei; Ooto, Sotaro; Hangai, Masanori; Ueda-Arakawa, Naoko; Yoshida, Sachiko; Akagi, Tadamichi; Ikeda, Hanako Ohashi; Nonaka, Atsushi; Hanebuchi, Masaaki; Inoue, Takashi; Yoshimura, Nagahisa
2013-05-01
To detect pathologic changes in retinal nerve fiber bundles in glaucomatous eyes seen on images obtained by adaptive optics (AO) scanning laser ophthalmoscopy (AO SLO). Prospective cross-sectional study. Twenty-eight eyes of 28 patients with open-angle glaucoma and 21 normal eyes of 21 volunteer subjects underwent a full ophthalmologic examination, visual field testing using a Humphrey Field Analyzer, fundus photography, red-free SLO imaging, spectral-domain optical coherence tomography, and imaging with an original prototype AO SLO system. The AO SLO images showed many hyperreflective bundles suggesting nerve fiber bundles. In glaucomatous eyes, the nerve fiber bundles were narrower than in normal eyes, and the nerve fiber layer thickness was significantly correlated with the nerve fiber bundle widths on AO SLO. In areas with a nerve fiber layer defect on fundus photography, the nerve fiber bundles on AO SLO were narrower compared with those in normal eyes. Around the optic disc, the nerve fiber bundle width was significantly lower, even in areas without a nerve fiber layer defect, in glaucomatous eyes compared with normal eyes (P = .026). The mean deviations of each cluster in visual field testing were correlated with the corresponding nerve fiber bundle widths (P = .017). AO SLO images showed reduced nerve fiber bundle widths both in clinically normal and abnormal areas of glaucomatous eyes, and these abnormalities were associated with visual field defects, suggesting that AO SLO may be useful for detecting early nerve fiber bundle abnormalities associated with loss of visual function. Copyright © 2013 Elsevier Inc. All rights reserved.
19. Adaptive enhancement of optical fringe patterns by selective reconstruction using FABEMD algorithm and Hilbert spiral transform.
Science.gov (United States)
Trusiak, Maciej; Patorski, Krzysztof; Wielgus, Maciej
2012-10-08
Presented method for fringe pattern enhancement has been designed for processing and analyzing low quality fringe patterns. It uses a modified fast and adaptive bidimensional empirical mode decomposition (FABEMD) for the extraction of bidimensional intrinsic mode functions (BIMFs) from an interferogram. Fringe pattern is then selectively reconstructed (SR) taking the regions of selected BIMFs with high modulation values only. Amplitude demodulation and normalization of the reconstructed image is conducted using the spiral phase Hilbert transform (HS). It has been tested using computer generated interferograms and real data. The performance of the presented SR-FABEMD-HS method is compared with other normalization techniques. Its superiority, potential and robustness to high fringe density variations and the presence of noise, modulation and background illumination defects in analyzed fringe patterns has been corroborated.
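The spiral phase Hilbert transform step (HS) used for amplitude demodulation and normalization can be sketched as a vortex filter applied in the Fourier domain. This is a minimal illustration on a synthetic, background-free fringe pattern, not the full SR-FABEMD-HS pipeline (no FABEMD decomposition or selective reconstruction is performed):

```python
import numpy as np

def spiral_phase_normalize(fringes):
    """Amplitude demodulation of a background-free fringe pattern via the
    spiral phase (vortex) Hilbert transform, then normalization to [-1, 1]."""
    N, M = fringes.shape
    u = np.fft.fftfreq(M)[None, :]
    v = np.fft.fftfreq(N)[:, None]
    spf = np.exp(1j * np.arctan2(v, u))       # spiral phase function
    spf[0, 0] = 0.0                           # phase undefined at DC
    s_h = np.fft.ifft2(spf * np.fft.fft2(fringes))
    amplitude = np.sqrt(fringes**2 + np.abs(s_h)**2)   # local modulation
    return fringes / np.maximum(amplitude, 1e-12)

# Synthetic fringes with a slowly varying amplitude envelope
y, x = np.mgrid[0:256, 0:256]
env = 0.5 + 0.4 * np.exp(-((x - 128)**2 + (y - 128)**2) / 5000.0)
pattern = env * np.cos(2 * np.pi * x / 16)
norm = spiral_phase_normalize(pattern)
```

After normalization the fringe contrast is uniform, which is what makes subsequent phase demodulation robust to the modulation defects the abstract mentions.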
20. Optics
CERN Document Server
Fincham, W H A
2013-01-01
Optics: Eighth Edition covers the work necessary for specialization in such subjects as ophthalmic optics, optical instruments and lens design. The text includes topics such as the propagation and behavior of light; reflection and refraction, their laws and how different media affect them; lenses, thick and thin, cylindrical and subcylindrical; photometry; dispersion and color; interference; and polarization. Also included are topics such as diffraction and holography; the limitation of beams in optical systems and its effects; and lens systems. The book is recommended for engineering students.
1. Embedded Adaptive Optics for Ubiquitous Lab-on-a-Chip Readout on Intact Cell Phones
Directory of Open Access Journals (Sweden)
Pakorn Preechaburana
2012-06-01
The evaluation of disposable lab-on-a-chip (LOC) devices on cell phones is an attractive alternative for migrating the analytical strength of LOC solutions to decentralized sensing applications. Imaging the micrometric detection areas of LOCs in contact with intact phone cameras is central to providing such capability. This work demonstrates a disposable and morphing liquid lens concept that can be integrated into LOC devices and refocuses micrometric features in the range necessary for LOC evaluation using diverse cell phone cameras. During natural evaporation, the lens focus varies, adapting to different types of cameras. Standard software in the phone commands a time-lapse acquisition for best focal selection that is sufficient to capture and resolve, under ambient illumination, 50 μm features in regions larger than 500 × 500 μm². In this way, the present concept introduces a generic solution compatible with the use of diverse and unmodified cell phone cameras to evaluate disposable LOC devices.
2. RETRACTED: Adaptive neuro-fuzzy prediction of modulation transfer function of optical lens system
Science.gov (United States)
Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Md Nasir, Mohd Hairul Nizam; Pavlović, Nenad T.; Akib, Shatirah
2014-07-01
3. Three-State Locally Adaptive Texture Preserving Filter for Radar and Optical Image Processing
Directory of Open Access Journals (Sweden)
Jaakko T. Astola
2005-05-01
Textural features are one of the most important types of useful information contained in images. In practice, these features are commonly masked by noise. Relatively little attention has been paid to the texture-preserving properties of noise attenuation methods. This motivates two tasks: (1) to analyze the texture preservation properties of various filters; and (2) to design image processing methods capable of preserving texture features well while effectively reducing noise. This paper examines the texture feature preserving properties of different filters. The study is performed for a set of texture samples and different noise variances. Locally adaptive three-state schemes are proposed for which texture is considered as a particular class. For "detection" of texture regions, several classifiers are proposed and analyzed. As shown, an appropriate trade-off of the designed filter properties is provided. This is demonstrated quantitatively for artificial test images and is confirmed visually for real-life images.
4. Optic Nerve Stimulation System with Adaptive Wireless Powering and Data Telemetry
Directory of Open Access Journals (Sweden)
Xing Li
2017-12-01
To treat retinal degenerative diseases, a transcorneal electrical stimulation-based system is proposed, which consists of an eye implant and an external component. The eye implant is wirelessly powered and controlled by the external component to generate the required bipolar current pattern for transcorneal stimulation with an amplitude range of 5 μA to 320 μA, a frequency range of 10 Hz to 160 Hz and a duty ratio range of 2.5% to 20%. Power delivery control includes power boosting in preparation for stimulation, and normal power regulation that adapts to both coupling and load variations. Only one pair of coils is used for both the power link and the bi-directional data link. Except for the secondary coil, the eye implant is fully integrated on chip and is fabricated using a UMC (United Microelectronics Corporation, Hsinchu, Taiwan) 0.13 μm complementary metal-oxide-semiconductor (CMOS) process with a size of 1.5 mm × 1.5 mm. The secondary coil is fabricated on a printed circuit board (PCB) with a diameter of only 4.4 mm. After coating with biocompatible silicone, the whole implant has dimensions of 6 mm in diameter with a thickness of less than 1 mm. The whole device can be put onto the sclera and beneath the eye’s conjunctiva. System functionality and electrical performance are demonstrated with measurement results.
5. Tip-tilt compensation: Resolution limits for ground-based telescopes using laser guide star adaptive optics
International Nuclear Information System (INIS)
Olivier, S.S.; Max, C.E.; Gavel, D.T.; Brase, J.M.
1992-01-01
The angular resolution of long-exposure images from ground-based telescopes equipped with laser guide star adaptive optics systems is fundamentally limited by the accuracy with which the tip-tilt aberrations introduced by the atmosphere can be corrected. Assuming that a natural star is used as the tilt reference, the residual error due to tilt anisoplanatism can significantly degrade the long-exposure resolution even if the tilt reference star is separated from the object being imaged by a small angle. Given the observed distribution of stars in the sky, the need to find a tilt reference star quite close to the object restricts the fraction of the sky over which long-exposure images with diffraction limited resolution can be obtained. In this paper, the authors present a comprehensive performance analysis of tip-tilt compensation systems that use a natural star as a tilt reference, taking into account properties of the atmosphere and of the Galactic stellar populations, and optimizing over the system operating parameters to determine the fundamental limits to the long-exposure resolution. Their results show that for a ten meter telescope on Mauna Kea, if the image of the tilt reference star is uncorrected, about half the sky can be imaged in the V band with long-exposure resolution less than 60 milli-arc-seconds (mas), while if the image of the tilt reference star is fully corrected, about half the sky can be imaged in the V band with long-exposure resolution less than 16 mas. Furthermore, V band images with long-exposure resolution of less than 16 mas may be obtained with a ten meter telescope on Mauna Kea for unresolved objects brighter than magnitude 22 that are fully corrected by a laser guide star adaptive optics system. This level of resolution represents about 70% of the diffraction limit of a ten meter telescope in the V band and is more than a factor of 45 better than the median seeing in the V band on Mauna Kea.
6. Confocal Adaptive Optics Imaging of Peripapillary Nerve Fiber Bundles: Implications for Glaucomatous Damage Seen on Circumpapillary OCT Scans.
Science.gov (United States)
Hood, Donald C; Chen, Monica F; Lee, Dongwon; Epstein, Benjamin; Alhadeff, Paula; Rosen, Richard B; Ritch, Robert; Dubra, Alfredo; Chui, Toco Y P
2015-04-01
To improve our understanding of glaucomatous damage as seen on circumpapillary disc scans obtained with frequency-domain optical coherence tomography (fdOCT), fdOCT scans were compared to images of the peripapillary retinal nerve fiber (RNF) bundles obtained with an adaptive optics-scanning light ophthalmoscope (AO-SLO). The AO-SLO images and fdOCT scans were obtained on 6 eyes of 6 patients with deep arcuate defects (5 points ≤-15 db) on 10-2 visual fields. The AO-SLO images were montaged and aligned with the fdOCT images to compare the RNF bundles seen with AO-SLO to the RNF layer thickness measured with fdOCT. All 6 eyes had an abnormally thin (1% confidence limit) RNF layer (RNFL) on fdOCT and abnormal (hyporeflective) regions of RNF bundles on AO-SLO in corresponding regions. However, regions of abnormal, but equal, RNFL thickness on fdOCT scans varied in appearance on AO-SLO images. These regions could be largely devoid of RNF bundles (5 eyes), have abnormal-appearing bundles of lower contrast (6 eyes), or have isolated areas with a few relatively normal-appearing bundles (2 eyes). There also were local variations in reflectivity of the fdOCT RNFL that corresponded to the variations in AO-SLO RNF bundle appearance. Relatively similar 10-2 defects with similar fdOCT RNFL thickness profiles can have very different degrees of RNF bundle damage as seen on fdOCT and AO-SLO. While the results point to limitations of fdOCT RNFL thickness as typically analyzed, they also illustrate the potential for improving fdOCT by attending to variations in local intensity.
7. Improved laser-based triangulation sensor with enhanced range and resolution through adaptive optics-based active beam control.
Science.gov (United States)
Reza, Syed Azer; Khwaja, Tariq Shamim; Mazhar, Mohsin Ali; Niazi, Haris Khan; Nawab, Rahma
2017-07-20
Various existing target ranging techniques are limited in terms of the dynamic range of operation and measurement resolution. These limitations arise as a result of a particular measurement methodology, the finite processing capability of the hardware components deployed within the sensor module, and the medium through which the target is viewed. Generally, improving the sensor range adversely affects its resolution and vice versa. Often, a distance sensor is designed for an optimal range/resolution setting depending on its intended application. Optical triangulation is broadly classified as a spatial-signal-processing-based ranging technique and measures target distance from the location of the reflected spot on a position sensitive detector (PSD). In most triangulation sensors that use lasers as a light source, beam divergence, which severely affects sensor measurement range, is often ignored in calculations. In this paper, we first discuss in detail the limitations to ranging imposed by beam divergence, which, in effect, sets the sensor dynamic range. Next, we show how the resolution of laser-based triangulation sensors is limited by the interpixel pitch of a finite-sized PSD. Through the use of tunable focus lenses (TFLs), we propose a novel design of a triangulation-based optical rangefinder that improves both the sensor resolution and its dynamic range through adaptive electronic control of beam propagation parameters. We present the theory and operation of the proposed sensor and clearly demonstrate a range and resolution improvement with the use of TFLs. Experimental results in support of our claims are shown to be in strong agreement with theory.
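The basic triangulation geometry the abstract builds on, with range inferred from the reflected spot position on the PSD, can be sketched as follows. The lens, baseline and pixel-pitch values are hypothetical, and the sketch ignores beam divergence, which is precisely the effect the paper addresses:

```python
def triangulation_range(spot_pos, baseline, focal_len):
    """Ideal thin-lens laser triangulation: a target at distance z returns
    a spot at position x = f*b/z on the PSD, so z = f*b/x.  Illustrative
    only; real sensors calibrate the geometry."""
    return focal_len * baseline / spot_pos

# A one-pixel quantization step on the PSD limits resolution, and the
# resolution worsens roughly quadratically with range: dz ~ z^2/(f*b) * dx
f, b, pitch = 0.05, 0.1, 10e-6    # 50 mm lens, 10 cm baseline, 10 um pixels
z = 2.0                           # true range in metres
x = f * b / z                     # spot position on the PSD
dz = triangulation_range(x - pitch, b, f) - z
print(f"range resolution at {z} m: {dz * 1000:.1f} mm")
```

The quadratic growth of `dz` with `z` is why the interpixel pitch of a finite-sized PSD caps the usable range, and why a tunable-focus element that reshapes the beam can trade resolution against dynamic range.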
8. A Simplified Method to Measure Choroidal Thickness Using Adaptive Compensation in Enhanced Depth Imaging Optical Coherence Tomography
Science.gov (United States)
Gupta, Preeti; Sidhartha, Elizabeth; Girard, Michael J. A.; Mari, Jean Martial; Wong, Tien-Yin; Cheng, Ching-Yu
2014-01-01
9. A simplified method to measure choroidal thickness using adaptive compensation in enhanced depth imaging optical coherence tomography.
Directory of Open Access Journals (Sweden)
Preeti Gupta
10. Pipelining Computational Stages of the Tomographic Reconstructor for Multi-Object Adaptive Optics on a Multi-GPU System
KAUST Repository
Charara, Ali
2014-11-01
The European Extremely Large Telescope project (E-ELT) is one of Europe's highest priorities in ground-based astronomy. ELTs are built on top of a variety of highly sensitive and critical astronomical instruments. In particular, a new instrument called MOSAIC has been proposed to perform multi-object spectroscopy using the Multi-Object Adaptive Optics (MOAO) technique. The core implementation of the simulation lies in the intensive computation of a tomographic reconstructor (TR), which is used to drive the deformable mirror in real time from the measurements. A new numerical algorithm is proposed (1) to capture the actual experimental noise and (2) to substantially speed up previous implementations by exposing more concurrency, while reducing the number of floating-point operations. Based on the Matrices Over Runtime System at Exascale numerical library (MORSE), a dynamic scheduler drives all computational stages of the tomographic reconstructor simulation and allows tasks to be pipelined and run out of order across different stages on heterogeneous systems, while ensuring data coherency and dependencies. The proposed TR simulation asymptotically outperforms previous state-of-the-art implementations, with up to a 13-fold speedup. At more than 50,000 unknowns, this appears to be the largest-scale AO problem submitted to computation to date, and it opens new research directions for extreme-scale AO simulations. © 2014 IEEE.
11. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture
Science.gov (United States)
Jenkins, David R.; Basden, Alastair; Myers, Richard M.
2018-05-01
We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter, and so the next generation of ELTs requires orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low-power x86 CPU cores and high-bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
12. Noninvasive near infrared autofluorescence imaging of retinal pigment epithelial cells in the human retina using adaptive optics.
Science.gov (United States)
Liu, Tao; Jung, HaeWon; Liu, Jianfei; Droettboom, Michael; Tam, Johnny
2017-10-01
The retinal pigment epithelial (RPE) cells contain intrinsic fluorophores that can be visualized using infrared autofluorescence (IRAF). Although IRAF is routinely utilized in the clinic for visualizing retinal health and disease, currently it is not possible to discern cellular details using IRAF due to limits in resolution. We demonstrate that the combination of adaptive optics (AO) with IRAF (AO-IRAF) enables higher-resolution imaging of the IRAF signal, revealing the RPE mosaic in the living human eye. Quantitative analysis of visualized RPE cells in 10 healthy subjects across various eccentricities demonstrates the possibility of in vivo density measurements of RPE cells, which range from 6505 to 5388 cells/mm² for the areas measured (peaking at the fovea). We also identified cone photoreceptors in relation to underlying RPE cells, and found that RPE cells support on average up to 18.74 cone photoreceptors in the fovea, down to an average of 1.03 cone photoreceptors per RPE cell at an eccentricity of 6 mm. Clinical application of AO-IRAF to a patient with retinitis pigmentosa illustrates the potential for AO-IRAF imaging to become a valuable complementary approach to the current landscape of high resolution imaging modalities.
13. Robo-AO Kepler Asteroseismic Survey. I. Adaptive Optics Imaging of 99 Asteroseismic Kepler Dwarfs and Subgiants
Energy Technology Data Exchange (ETDEWEB)
Schonhut-Stasik, Jessica S.; Baranec, Christoph; Huber, Daniel; Atkinson, Dani; Hagelberg, Janis; Marel, Nienke van der; Hodapp, Klaus W. [Institute for Astronomy, University of Hawai‘i at Mānoa, Hilo, HI 96720-2700 (United States); Ziegler, Carl; Law, Nicholas M. [Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255 (United States); Gaidos, Eric [Department of Geology and Geophysics, University of Hawai‘i at Mānoa, Honolulu, HI 96822 (United States); Riddle, Reed, E-mail: jstasik@hawaii.edu [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States)
2017-10-01
We used the Robo-AO laser adaptive optics (AO) system to image 99 main sequence and subgiant stars that have Kepler-detected asteroseismic signals. Robo-AO allows us to resolve blended secondary sources at separations as close as ∼0.″15 that may contribute to the measured Kepler light curves and affect asteroseismic analysis and interpretation. We report eight new secondary sources within 4.″0 of these Kepler asteroseismic stars. We used Subaru and Keck AO to measure differential infrared photometry for these candidate companion systems. Two of the secondary sources are likely foreground objects, while the remaining six are background sources; however, we cannot exclude the possibility that three of the objects may be physically associated. We measured a range of i′-band amplitude dilutions for the candidate companion systems from 0.43% to 15.4%. We find that the measured amplitude dilutions are insufficient to explain the previously identified excess scatter in the relationship between asteroseismic oscillation amplitude and the frequency of maximum power.
14. SEARCHING FOR BINARY Y DWARFS WITH THE GEMINI MULTI-CONJUGATE ADAPTIVE OPTICS SYSTEM (GeMS)
International Nuclear Information System (INIS)
Opitz, Daniela; Tinney, C. G.; Faherty, Jacqueline K.; Sweet, Sarah; Gelino, Christopher R.; Kirkpatrick, J. Davy
2016-01-01
The NASA Wide-field Infrared Survey Explorer (WISE) has discovered almost all the known members of the new class of Y-type brown dwarfs. Most of these Y dwarfs have been identified as isolated objects in the field. It is known that binaries with L- and T-type brown dwarf primaries are less prevalent than those with either M-dwarf or solar-type primaries; they tend to have smaller separations and are more frequently detected in near-equal mass configurations. The binary statistics for Y-type brown dwarfs, however, are sparse, and so it is unclear if the same trends that hold for L- and T-type brown dwarfs also hold for Y-type ones. In addition, the detection of binary companions to very cool Y dwarfs may well be the best means available for discovering even colder objects. We present results for binary properties of a sample of five WISE Y dwarfs with the Gemini Multi-Conjugate Adaptive Optics System. We find no evidence for binary companions in these data, which suggests these systems are not equal-luminosity (or equal-mass) binaries with separations larger than ∼0.5–1.9 AU. For equal-mass binaries at an age of 5 Gyr, we find that the binary binding energies ruled out by our observations (i.e., 10⁴² erg) are consistent with those observed in previous studies of hotter ultra-cool dwarfs.
15. Real-time wavefront processors for the next generation of adaptive optics systems: a design and analysis
Science.gov (United States)
Truong, Tuan; Brack, Gary L.; Troy, Mitchell; Trinh, Thang; Shi, Fang; Dekany, Richard G.
2003-02-01
Adaptive optics (AO) systems currently under investigation will require at least a two-orders-of-magnitude increase in the number of actuators, which in turn translates to effectively a 10⁴ increase in compute latency. Since the performance of an AO system invariably improves as the compute latency decreases, it is important to study how today's computer systems will scale to address this expected increase in actuator utilization. This paper answers this question by characterizing the performance of a single deformable mirror (DM) Shack-Hartmann natural guide star AO system implemented on the present-generation digital signal processor (DSP) TMS320C6701 from Texas Instruments. We derive the compute latency of such a system in terms of a few basic parameters, such as the number of DM actuators, the number of data channels used to read out the camera pixels, the number of DSPs, the available memory bandwidth, as well as the inter-processor communication (IPC) bandwidth and the pixel transfer rate. We show how the results would scale for future systems that utilize multiple DMs and guide stars. We demonstrate that the principal performance bottleneck of such a system is the available memory bandwidth of the processors and, to a lesser extent, the IPC bandwidth. This paper concludes with suggestions for mitigating this bottleneck.
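The memory-bandwidth bottleneck identified above can be illustrated with a first-order latency model for the reconstruction matrix-vector multiply: the reconstruction matrix must stream from memory once per frame, so latency is dominated by matrix size divided by aggregate memory bandwidth. All numbers below are hypothetical, not the TMS320C6701 figures from the paper:

```python
def mvm_latency_us(n_slopes, n_actuators, mem_bw_gbs, n_procs, bytes_per=4):
    """First-order latency (microseconds) of the reconstruction
    matrix-vector multiply, assuming the matrix streams from memory once
    per frame and bandwidth splits evenly across processors."""
    matrix_bytes = n_slopes * n_actuators * bytes_per
    return matrix_bytes / (mem_bw_gbs * 1e9 * n_procs) * 1e6

# Hypothetical system: 3000 slope measurements x 1500 actuators,
# 8 processors with 2 GB/s of usable memory bandwidth each
print(f"{mvm_latency_us(3000, 1500, 2.0, 8):.0f} us")
```

Because the matrix grows with both the slope count and the actuator count, the model also shows the roughly fourth-power scaling with telescope diameter mentioned elsewhere in this listing.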
16. Adaptive optics scanning laser ophthalmoscope using liquid crystal on silicon spatial light modulator: Performance study with involuntary eye movement
Science.gov (United States)
Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi
2017-09-01
The performance of an adaptive optics scanning laser ophthalmoscope (AO-SLO) using a liquid crystal on silicon spatial light modulator and Shack-Hartmann wavefront sensor was investigated. The system achieved high-resolution and high-contrast images of human retinas by dynamic compensation for the aberrations in the eyes. Retinal structures such as photoreceptor cells, blood vessels, and nerve fiber bundles, as well as blood flow, could be observed in vivo. We also investigated involuntary eye movements and ascertained microsaccades and drifts using both the retinal images and the aberrations recorded simultaneously. Furthermore, we measured the interframe displacement of retinal images and found that during eye drift, the displacement has a linear relationship with the residual low-order aberration. The estimated duration and cumulative displacement of the drift were within the ranges estimated by a video tracking technique. The AO-SLO would not only be used for the early detection of eye diseases, but would also offer a new approach for involuntary eye movement research.
17. Fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave and free-space-optics architecture with an adaptive diversity combining technique.
Science.gov (United States)
Zhang, Junwen; Wang, Jing; Xu, Yuming; Xu, Mu; Lu, Feng; Cheng, Lin; Yu, Jianjun; Chang, Gee-Kung
2016-05-01
We propose and experimentally demonstrate a novel fiber-wireless integrated mobile backhaul network based on a hybrid millimeter-wave (MMW) and free-space-optics (FSO) architecture using an adaptive combining technique. Both 60 GHz MMW and FSO links are demonstrated and fully integrated with optical fibers in a scalable and cost-effective backhaul system setup. Joint signal processing with an adaptive diversity combining technique (ADCT) is utilized at the receiver side based on a maximum ratio combining algorithm. Mobile backhaul transport of 4-Gb/s 16-quadrature-amplitude-modulation orthogonal frequency-division multiplexing (16-QAM-OFDM) data is experimentally demonstrated and tested under various weather conditions synthesized in the lab. Performance improvements in terms of reduced error vector magnitude (EVM) and enhanced link reliability are validated under fog, rain, and turbulence conditions.
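The maximum ratio combining step at the heart of the ADCT can be sketched for a two-branch receiver. The BPSK symbols, channel gains and noise level below are illustrative stand-ins for the 16-QAM-OFDM signals and measured MMW/FSO channels:

```python
import numpy as np

def mrc_combine(received, channel_gains, noise_var):
    """Maximum ratio combining across diversity branches (here, the MMW
    and FSO links): weight each branch by conj(h)/noise_var so that the
    post-combining SNR equals the sum of the branch SNRs."""
    w = np.conj(channel_gains) / noise_var        # per-branch MRC weights
    return (w * received).sum(axis=0) / (w * channel_gains).sum(axis=0)

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=1000)      # BPSK stand-in for QAM
h = np.array([0.9, 0.3])[:, None]                 # strong MMW, faded FSO
noise = 0.05 * rng.standard_normal((2, 1000))
r = h * symbols + noise                           # per-branch observations
est = mrc_combine(r, h, noise_var=0.05**2)
errors = int((np.sign(est) != symbols).sum())
print("symbol errors:", errors)
```

Even with one branch heavily faded, the combiner weights it down rather than discarding it, which is why MRC improves EVM relative to selecting a single link.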
18. Estimation and control of large-scale systems with an application to adaptive optics for EUV lithography
NARCIS (Netherlands)
Haber, A.
2014-01-01
Extreme UltraViolet (EUV) lithography is a new technology for production of integrated circuits. In EUV lithographic machines, optical elements are heated by absorption of exposure energy. Heating induces thermoelastic deformations of optical elements and consequently, it creates wavefront
19. On the power and offset allocation for rate adaptation of spatial multiplexing in optical wireless MIMO channels
KAUST Repository
Park, Kihong; Ko, Youngchai; Alouini, Mohamed-Slim
2011-01-01
Visible light communication (VLC) using optical sources which can be simultaneously utilized for illumination and communication is currently an attractive option for wireless personal area network. Improving the data rate in optical wireless
20. THE PALOMAR/KECK ADAPTIVE OPTICS SURVEY OF YOUNG SOLAR ANALOGS: EVIDENCE FOR A UNIVERSAL COMPANION MASS FUNCTION
International Nuclear Information System (INIS)
Metchev, Stanimir A.; Hillenbrand, Lynne A.
2009-01-01
We present results from an adaptive optics survey for substellar and stellar companions to Sun-like stars. The survey targeted 266 F5-K5 stars in the 3 Myr-3 Gyr age range with distances of 10-190 pc. Results from the survey include the discovery of two brown dwarf companions (HD 49197B and HD 203030B), 24 new stellar binaries, and a triple system. We infer that the frequency of 0.012–0.072 M☉ brown dwarfs in 28–1590 AU orbits around young solar analogs is 3.2 (+3.1, −2.7)% (2σ limits). The result demonstrates that the deficiency of substellar companions at wide orbital separations from Sun-like stars is less pronounced than in the radial velocity 'brown dwarf desert'. We infer that the mass distribution of companions in 28–1590 AU orbits around solar-mass stars follows a continuous dN/dM₂ ∝ M₂^−0.4 relation over the 0.01–1.0 M☉ secondary mass range. While this functional form is similar to that for isolated objects less than 0.1 M☉, over the entire 0.01–1.0 M☉ range the mass functions of companions and of isolated objects differ significantly. Based on this conclusion and on similar results from other direct imaging and radial velocity companion surveys in the literature, we argue that the companion mass function follows the same universal form over the entire range between 0 and 1590 AU in orbital semimajor axis and ∼0.01–20 M☉ in companion mass. In this context, the relative dearth of substellar versus stellar secondaries at all orbital separations arises naturally from the inferred form of the companion mass function.
1. A 5 × 10{sup 9}M{sub ⊙} BLACK HOLE IN NGC 1277 FROM ADAPTIVE OPTICS SPECTROSCOPY
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jonelle L. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Van den Bosch, Remco C. E.; Yıldırım, Akın [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Gebhardt, Karl [Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Richstone, Douglas O.; Gültekin, Kayhan [Department of Astronomy, University of Michigan, 1085 S. University Ave., Ann Arbor, MI 48109 (United States); Husemann, Bernd, E-mail: walsh@physics.tamu.edu [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching (Germany)
2016-01-20
The nearby lenticular galaxy NGC 1277 is thought to host one of the largest black holes known; however, the black hole mass measurement is based on low spatial resolution spectroscopy. In this paper, we present Gemini Near-infrared Integral Field Spectrometer observations assisted by adaptive optics. We map out the galaxy's stellar kinematics within ∼440 pc of the nucleus with an angular resolution that allows us to probe well within the region where the potential from the black hole dominates. We find that the stellar velocity dispersion rises dramatically, reaching ∼550 km s{sup −1} at the center. Through orbit-based, stellar-dynamical models we obtain a black hole mass of (4.9 ± 1.6) × 10{sup 9} M{sub ⊙} (1σ uncertainties). Although the black hole mass measurement is smaller by a factor of ∼3 compared to previous claims based on large-scale kinematics, NGC 1277 does indeed contain one of the most massive black holes detected to date, and the black hole mass is an order of magnitude larger than expectations from the empirical relation between black hole mass and galaxy luminosity. Given the galaxy's similarities to the higher redshift (z ∼ 2) massive quiescent galaxies, NGC 1277 could be a relic, passively evolving since that period. A population of local analogs to the higher redshift quiescent galaxies that also contain over-massive black holes may suggest that black hole growth precedes that of the host galaxy.
2. Assessment of Different Sampling Methods for Measuring and Representing Macular Cone Density Using Flood-Illuminated Adaptive Optics.
Science.gov (United States)
Feng, Shu; Gale, Michael J; Fay, Jonathan D; Faridi, Ambar; Titus, Hope E; Garg, Anupam K; Michaels, Keith V; Erker, Laura R; Peters, Dawn; Smith, Travis B; Pennesi, Mark E
2015-09-01
To describe a standardized flood-illuminated adaptive optics (AO) imaging protocol suitable for the clinical setting and to assess sampling methods for measuring cone density. Cone density was calculated following three measurement protocols: 50 × 50-μm sampling window values every 0.5° along the horizontal and vertical meridians (fixed-interval method), the mean density of expanding 0.5°-wide arcuate areas in the nasal, temporal, superior, and inferior quadrants (arcuate mean method), and the peak cone density of a 50 × 50-μm sampling window within expanding arcuate areas near the meridian (peak density method). Repeated imaging was performed in nine subjects to determine intersession repeatability of cone density. Cone density montages could be created for 67 of the 74 subjects. Image quality was determined to be adequate for automated cone counting for 35 (52%) of the 67 subjects. We found that cone density varied with different sampling methods and regions tested. In the nasal and temporal quadrants, peak density most closely resembled histological data, whereas the arcuate mean and fixed-interval methods tended to underestimate the density compared with histological data. However, in the inferior and superior quadrants, arcuate mean and fixed-interval methods most closely matched histological data, whereas the peak density method overestimated cone density compared with histological data. Intersession repeatability testing showed that repeatability was greatest when sampling by arcuate mean and lowest when sampling by fixed interval. We show that different methods of sampling can significantly affect cone density measurements. Therefore, care must be taken when interpreting cone density results, even in a normal population.
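The fixed-window density computation described above can be sketched as follows (a minimal illustration, assuming cone centers have already been identified as (x, y) coordinates in micrometers; the function name and the demo grid are hypothetical, not the study's software):

```python
import numpy as np

# Count cone centers inside a 50 x 50 um sampling window and convert to
# cones/mm^2. Coordinates and the demo mosaic are hypothetical.
def cone_density(coords_um, center_um, window_um=50.0):
    """Cones per mm^2 inside a square window centered at center_um."""
    coords = np.asarray(coords_um, dtype=float)
    inside = np.all(np.abs(coords - np.asarray(center_um)) <= window_um / 2.0,
                    axis=1)
    area_mm2 = (window_um / 1000.0) ** 2     # 50 um sides -> 0.0025 mm^2
    return int(inside.sum()) / area_mm2

# A uniform demo mosaic with 5-um spacing puts 11 x 11 = 121 cones in the
# window: 121 / 0.0025 mm^2 ~ 48400 cones/mm^2.
xs = np.arange(-25.0, 26.0, 5.0)
grid = np.array([(gx, gy) for gx in xs for gy in xs])
print(round(cone_density(grid, (0.0, 0.0))))   # 48400
```

The peak-density variant in the protocol would slide this window within an arcuate region and keep the maximum value rather than a single fixed-position estimate.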
3. Energy-efficient orthogonal frequency division multiplexing-based passive optical network based on adaptive sleep-mode control and dynamic bandwidth allocation
Science.gov (United States)
Zhang, Chongfu; Xiao, Nengwu; Chen, Chen; Yuan, Weicheng; Qiu, Kun
2016-02-01
We propose an energy-efficient orthogonal frequency division multiplexing-based passive optical network (OFDM-PON) using adaptive sleep-mode control and dynamic bandwidth allocation. In this scheme, a bidirectional-centralized algorithm named the receiver and transmitter accurate sleep control and dynamic bandwidth allocation (RTASC-DBA), which has an overall bandwidth scheduling policy, is employed to enhance the energy efficiency of the OFDM-PON. The RTASC-DBA algorithm is used in an optical line terminal (OLT) to control the sleep mode of an optical network unit (ONU) and guarantee the quality of service of different services of the OFDM-PON. The obtained results show that, by using the proposed scheme, the average power consumption of the ONU is reduced by ˜40% when the normalized ONU load is less than 80%, compared with the average power consumption without using the proposed scheme.
4. Intelligent Optics Laboratory
Data.gov (United States)
Federal Laboratory Consortium — The Intelligent Optics Laboratory supports sophisticated investigations on adaptive and nonlinear optics; advanced imaging and image processing; ground-to-ground and...
5. High-Resolution Imaging of Parafoveal Cones in Different Stages of Diabetic Retinopathy Using Adaptive Optics Fundus Camera.
Directory of Open Access Journals (Sweden)
Mohamed Kamel Soliman
To assess cone density as a marker of early signs of retinopathy in patients with type II diabetes mellitus. An adaptive optics (AO) retinal camera (rtx1™; Imagine Eyes, Orsay, France) was used to acquire images of parafoveal cones from patients with type II diabetes mellitus with or without retinopathy and from healthy controls with no known systemic or ocular disease. Cone mosaic was captured at 0° and 2° eccentricities along the horizontal and vertical meridians. The density of the parafoveal cones was calculated within 100 × 100-μm squares located at 500 μm from the foveal center along the orthogonal meridians. Manual corrections of the automated counting were then performed by 2 masked graders. Cone density measurements were evaluated with an ANOVA that consisted of one between-subjects factor (stage of retinopathy) and within-subject factors. The ANOVA model included a complex covariance structure to account for correlations between the levels of the within-subject factors. Ten healthy participants (20 eyes) and 25 patients (29 eyes) with type II diabetes mellitus were recruited in the study. The mean (± standard deviation [SD]) age of the healthy participants (Control group), patients with diabetes without retinopathy (No DR group), and patients with diabetic retinopathy (DR group) was 55 ± 8, 53 ± 8, and 52 ± 9 years, respectively. The cone density was significantly lower in the moderate nonproliferative diabetic retinopathy (NPDR) and severe NPDR/proliferative DR groups compared to the Control, No DR, and mild NPDR groups (P < 0.05). No correlation was found between cone density and the level of hemoglobin A1c (HbA1c) or the duration of diabetes. The extent of photoreceptor loss on AO imaging may correlate positively with severity of DR in patients with type II diabetes mellitus. Photoreceptor loss may be more pronounced among patients with advanced stages of DR due to higher risk of macular edema and its sequelae.
6. Adaptation of AMO-FBMC-OQAM in optical access network for accommodating asynchronous multiple access in OFDM-based uplink transmission
Science.gov (United States)
Jung, Sun-Young; Jung, Sang-Min; Han, Sang-Kook
2015-01-01
Rapidly expanding applications, together with the proliferation of mobile devices, make mobile traffic grow explosively every year. A future access network therefore requires a bandwidth-efficient transmission technique that converges asynchronous signals in the optical network, to meet the huge bandwidth demand while integrating various services and supporting multiple access within the available network resources. Orthogonal frequency division multiplexing (OFDM) is a highly bandwidth-efficient parallel transmission technique based on orthogonal subcarriers. OFDM has been widely studied in wired/wireless communication and was adopted in the Long Term Evolution (LTE) standard. Consequently, OFDM has also been actively researched in optical networks. However, OFDM is inherently vulnerable to frequency and phase offsets because of its sinc-shaped side lobes, so tight synchronization is necessary to maintain orthogonality. Moreover, a redundant cyclic prefix (CP) is required in a dispersive channel, and the side lobes act as interference among users in multiple access. These issues practically hinder OFDM-based optical transmission from supporting the integration of various services and multiple access. In this paper, an adaptively modulated optical filter bank multicarrier system with offset QAM (AMO-FBMC-OQAM) is introduced and experimentally investigated in uplink optical transmission to relax multiple access interference (MAI) while improving bandwidth efficiency. Side lobes are effectively suppressed by using FBMC, so the system becomes robust to path differences and imbalance among optical network units (ONUs), which increases bandwidth efficiency by reducing redundancy. In comparison with OFDM, signal performance and the efficiency of frequency utilization are improved under the same experimental conditions. This enables an optical network to effectively support heterogeneous services and multiple access.
7. On the power and offset allocation for rate adaptation of spatial multiplexing in optical wireless MIMO channels
KAUST Repository
Park, Kihong
2011-07-01
Visible light communication (VLC) using optical sources which can be simultaneously utilized for illumination and communication is currently an attractive option for wireless personal area networks. Improving the data rate in an optical wireless communication system is challenging due to the limited bandwidth of the optical sources. In this paper, we design a singular value decomposition (SVD)-based multiplexing multiple-input multiple-output (MIMO) system to support two data streams in optical wireless channels. Noting that the conventional allocation method in radio frequency (RF) MIMO channels cannot be applied directly to optical intensity channels, we propose a novel method to allocate the optical power, the offset value, and the modulation size for maximum sum rate under the constraints of the nonnegativity of the modulated signals, the aggregate optical power, and the bit error rate (BER) requirement. The simulation results show that the proposed allocation method performs better than allocating the optical power equally to each data stream. © 2011 IEEE.
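The SVD-multiplexing idea underlying the scheme can be illustrated with a toy numeric sketch (the channel matrix and stream values are hypothetical; the paper's joint power/offset/modulation optimization is not reproduced here):

```python
import numpy as np

# SVD turns a 2x2 MIMO channel into two parallel scalar subchannels:
# precode with V, combine with U^T, and the effective gains are the
# singular values. The channel matrix here is an assumed example.
H = np.array([[1.0, 0.3],
              [0.4, 0.9]])            # hypothetical 2x2 optical MIMO gains
U, s, Vt = np.linalg.svd(H)

x = np.array([0.7, 0.2])              # two data streams (after DC offset)
y = U.T @ (H @ (Vt.T @ x))            # effective channel is diag(s)
print(np.allclose(y, s * x))          # True: two parallel subchannels
```

In an intensity channel the transmitted waveform must stay nonnegative, which is why the paper allocates a DC offset jointly with power and modulation size instead of reusing RF-style allocation directly.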
8. MEASUREMENTS OF THE MEAN DIFFUSE GALACTIC LIGHT SPECTRUM IN THE 0.95–1.65 μm BAND FROM CIBER
Energy Technology Data Exchange (ETDEWEB)
Arai, T.; Matsuura, S.; Sano, K.; Matsumoto, T.; Nakagawa, T.; Onishi, Y. [Department of Space Astronomy and Astrophysics, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Bock, J.; Lanz, A.; Korngut, P.; Zemcov, M. [Department of Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Cooray, A.; Smidt, J. [Center for Cosmology, University of California, Irvine, Irvine, CA 92697 (United States); Kim, M. G.; Lee, H. M. [Department of Physics and Astronomy, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, D. H. [Korea Astronomy and Space Science Institute (KASI), Daejeon 305-348 (Korea, Republic of); Shirahata, M. [National Institutes of Natural Science, National Astronomical Observatory of Japan (NAOJ), Tokyo 181-8588 (Japan); Tsumura, K. [Frontier Research Institute for Interdisciplinary Science, Tohoku University, Sendai 980-8578 (Japan)
2015-06-10
We report measurements of the diffuse galactic light (DGL) spectrum in the near-infrared, spanning the wavelength range 0.95–1.65 μm by the Cosmic Infrared Background ExpeRiment. Using the low-resolution spectrometer calibrated for absolute spectro-photometry, we acquired long-slit spectral images of the total diffuse sky brightness toward six high-latitude fields spread over four sounding rocket flights. To separate the DGL spectrum from the total sky brightness, we correlated the spectral images with a 100 μm intensity map, which traces the dust column density in optically thin regions. The measured DGL spectrum shows no resolved features and is consistent with other DGL measurements in the optical and at near-infrared wavelengths longer than 1.8 μm. Our result implies that the continuum is consistently reproduced by models of scattered starlight in the Rayleigh scattering regime with a few large grains.
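The decorrelation technique, regressing total sky brightness against a 100 μm dust map so that the best-fit slope isolates the dust-correlated DGL component, can be sketched on synthetic data (all numbers below are hypothetical, not CIBER values):

```python
import numpy as np

# Regress total sky brightness against a 100 um dust-intensity tracer;
# the best-fit slope is the dust-correlated (DGL) component per unit
# dust, and the intercept is the dust-uncorrelated background.
rng = np.random.default_rng(1)
dust = rng.uniform(0.5, 3.0, 500)             # 100 um map (arb. units)
sky = 2.0 + 0.7 * dust + 0.05 * rng.standard_normal(500)

slope, intercept = np.polyfit(dust, sky, 1)   # slope ~ 0.7, intercept ~ 2.0
print(round(slope, 2), round(intercept, 2))
```

Repeating this fit per spectral element yields a slope spectrum, which is the sense in which the paper extracts a DGL spectrum from long-slit images.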
9. Design and optimization of an adaptive optics system for a high-average-power multi-slab laser (HiLASE)
Czech Academy of Sciences Publication Activity Database
Pilař, Jan; Slezák, Jiří; Sikocinski, Pawel; Divoký, Martin; Sawicka, Magdalena; Bonora, Stefano; Lucianetti, Antonio; Mocek, Tomáš; Jelínková, H.
2014-01-01
Roč. 53, č. 15 (2014), 3255-3261 ISSN 1559-128X R&D Projects: GA MŠk ED2.1.00/01.0027; GA MŠk EE2.3.20.0143; GA MŠk EE2.3.30.0057 Grant - others:HILASE(XE) CZ.1.05/2.1.00/01.0027; OP VK 6(XE) CZ.1.07/2.3.00/20.0143; OP VK 4 POSTDOK(XE) CZ.1.07/2.3.00/30.0057 Institutional support: RVO:68378271 Keywords : adaptive optics * multislab * amplifier * wavefront Subject RIV: BH - Optics, Masers, Lasers Impact factor: 1.784, year: 2014
10. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system
International Nuclear Information System (INIS)
Kiyko, V V; Kislov, V I; Ofitserov, E N
2015-01-01
In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, when a Hartmann sensor is used as the wavefront sensor the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)
11. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system
Energy Technology Data Exchange (ETDEWEB)
Kiyko, V V; Kislov, V I; Ofitserov, E N [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)
2015-08-31
In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, when a Hartmann sensor is used as the wavefront sensor the required number of actuators in the absence of the noise component in the sensor error turns out to be 1.5-2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)
12. A novel grooming algorithm with the adaptive weight and load balancing for dynamic holding-time-aware traffic in optical networks
Science.gov (United States)
Xu, Zhanqi; Huang, Jiangjiang; Zhou, Zhiqiang; Ding, Zhe; Ma, Tao; Wang, Junping
2013-10-01
To maximize the resource utilization of optical networks, the dynamic traffic grooming, which could efficiently multiplex many low-speed services arriving dynamically onto high-capacity optical channels, has been studied extensively and used widely. However, the link weights in the existing research works can be improved since they do not adapt to the network status and load well. By exploiting the information on the holding times of the preexisting and new lightpaths, and the requested bandwidth of a user service, this paper proposes a grooming algorithm using Adaptively Weighted Links for Holding-Time-Aware (HTA) (abbreviated as AWL-HTA) traffic, especially in the setup process of new lightpath(s). Therefore, the proposed algorithm can not only establish a lightpath that uses network resource efficiently, but also achieve load balancing. In this paper, the key issues on the link weight assignment and procedure within the AWL-HTA are addressed in detail. Comprehensive simulation and experimental results show that the proposed algorithm has a much lower blocking ratio and latency than other existing algorithms.
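Load-adaptive link weighting of the kind the grooming algorithm builds on can be sketched generically (this is an illustrative stand-in, not the paper's AWL-HTA weight function, and the network below is hypothetical):

```python
import heapq

# Illustrative load-adaptive link weighting for traffic grooming: the
# weight of a link grows with its current utilization, steering new
# lightpaths onto lightly loaded links for load balancing.
def adaptive_weight(base_cost, used, capacity):
    utilization = used / capacity
    return base_cost / max(1e-9, 1.0 - utilization)  # -> inf as link fills

def shortest_path(graph, src, dst):
    """Dijkstra over links given as graph[u] = [(v, weight), ...]."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Hypothetical 3-node network: A-B is short but 90% loaded, A-C-B is
# longer but idle, so the adaptive weights route around the hot link.
g = {
    "A": [("B", adaptive_weight(1.0, 90, 100)),
          ("C", adaptive_weight(2.0, 0, 100))],
    "C": [("B", adaptive_weight(2.0, 0, 100))],
}
print(shortest_path(g, "A", "B"))  # ['A', 'C', 'B']
```

The holding-time-aware part of the paper's algorithm additionally folds the remaining lifetimes of preexisting lightpaths into these weights, which this sketch omits.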
13. Enhancement of Optical Adaptive Sensing by Using a Dual-Stage Seesaw-Swivel Actuator with a Tunable Vibration Absorber
Directory of Open Access Journals (Sweden)
Po-Chien Chou
2011-05-01
Technological obstacles to the use of rotary-type swing arm actuators to actuate optical pickup modules in small-form-factor (SFF) disk drives stem from a hinge's skewed actuation, which induces off-axis aberrations and deteriorates optical quality. This work describes a dual-stage seesaw-swivel actuator for optical pickup actuation. A triple-layered bimorph bender made of piezoelectric materials (PZTs) is connected to the suspension of the pickup head, while a tunable vibration absorber (TVA) unit is mounted on the seesaw swing arm to provide a balancing force that reduces vibrations in the focusing direction. Both the PZT and the TVA are designed to satisfy the operational requirements of stable focusing and to compensate for the tilt angle or deformation of a disc. Finally, simulation results verify the performance of the dual-stage seesaw-swivel actuator, with experimental procedures and parametric design optimization confirming the effectiveness of the proposed system.
14. A star-forming shock front in radio galaxy 4C+41.17 resolved with laser-assisted adaptive optics spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Steinbring, Eric, E-mail: Eric.Steinbring@nrc-cnrc.gc.ca [National Research Council Canada, Victoria, BC V9E 2E7 (Canada)
2014-07-01
Near-infrared integral-field spectroscopy of redshifted [O III], Hβ, and optical continuum emission from the z = 3.8 radio galaxy 4C+41.17 is presented, obtained with the laser-guide-star adaptive optics facility on the Gemini North telescope. Employing a specialized dithering technique, a spatial resolution of 0.''10, or 0.7 kpc, is achieved in each spectral element, with a velocity resolution of ∼70 km s{sup –1}. Spectra similar to local starbursts are found for bright knots coincident in archival Hubble Space Telescope ( HST) rest-frame ultraviolet images, which also allows a key line diagnostic to be mapped together with new kinematic information. There emerges a clearer picture of the nebular emission associated with the jet in 8.3 GHz and 15 GHz Very Large Array maps, closely tied to a Lyα-bright shell-shaped structure seen with HST. This supports a previous interpretation of that arc tracing a bow shock, inducing ∼10{sup 10–11} M {sub ☉} star formation regions that comprise the clumpy broadband optical/ultraviolet morphology near the core.
15. Fast, accurate, and robust frequency offset estimation based on modified adaptive Kalman filter in coherent optical communication system
Science.gov (United States)
Yang, Yanfu; Xiang, Qian; Zhang, Qun; Zhou, Zhongqing; Jiang, Wen; He, Qianwen; Yao, Yong
2017-09-01
We propose a joint estimation scheme for fast, accurate, and robust frequency offset (FO) estimation along with phase estimation based on a modified adaptive Kalman filter (MAKF). The scheme consists of three key modules: an extended Kalman filter (EKF), a lock detector, and FO cycle slip recovery. The EKF module estimates the time-varying phase induced by both FO and laser phase noise. The lock detector module decides between acquisition mode and tracking mode and consequently sets the EKF tuning parameter in an adaptive manner. The third module can detect a possible cycle slip in the case of large FO and make the proper correction. Based on the simulation and experimental results, the proposed MAKF shows excellent estimation performance featuring high accuracy, fast convergence, and the capability of cycle slip recovery.
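As a minimal stand-in for the Kalman-filtering idea, a scalar constant-velocity Kalman filter can track the phase ramp that a constant FO produces (all tuning values below are illustrative assumptions, not the paper's MAKF):

```python
import numpy as np

# Toy stand-in for Kalman-filter-based FO estimation: a constant
# frequency offset shows up as a steady per-symbol phase ramp, which a
# constant-velocity Kalman filter tracks as the slope of the phase.
rng = np.random.default_rng(0)
n, fo_ramp = 2000, 0.02                    # 0.02 rad/symbol offset
true_phase = fo_ramp * np.arange(n)
rx = np.exp(1j * (true_phase + 0.05 * rng.standard_normal(n)))

x = np.zeros(2)                            # state: [phase, slope]
P = np.eye(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])     # phase += slope each symbol
Q = np.diag([1e-6, 1e-8])                  # process noise (assumed tuning)
R = 0.05 ** 2                              # phase measurement variance
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q          # predict
    innov = np.angle(rx[k] * np.exp(-1j * x[0]))  # wrapped phase residual
    K = P[:, 0] / (P[0, 0] + R)            # gain for measuring phase only
    x = x + K * innov                      # update state
    P = P - np.outer(K, P[0, :])           # update covariance

print(round(x[1], 3))                      # slope estimate ~ 0.02 rad/symbol
```

The wrapped residual is what makes cycle slips possible in practice: once the true phase jumps by more than π between updates, the filter can lock onto the wrong branch, which is why the paper pairs the EKF with a lock detector and a slip-recovery module.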
16. Evaluation of white-to-white distance and anterior chamber depth measurements using the IOL Master, slit-lamp adapted optical coherence tomography and digital photographs in phakic eyes.
Science.gov (United States)
Wilczyński, Michał; Pośpiech-Zabierek, Aleksandra
2015-01-01
The accurate measurement of the anterior chamber internal diameter and depth is important in ophthalmic diagnosis and before some eye surgery procedures. The purpose of the study was to compare the white-to-white distance measurements performed using the IOL-Master and photography with the internal anterior chamber diameter determined using slit-lamp adapted optical coherence tomography in healthy eyes, and to compare anterior chamber depth measurements by the IOL-Master and slit-lamp adapted optical coherence tomography. The data were gathered prospectively from a non-randomized consecutive series of patients. The examined group consisted of 46 eyes of 39 patients. White-to-white was measured using the IOL-Master, and photographs of the eye were taken with a digital camera. Internal anterior chamber diameter was measured with slit-lamp adapted optical coherence tomography. Anterior chamber depth was measured using the IOL-Master and slit-lamp adapted optical coherence tomography. Statistical analysis was performed using parametric tests. A Bland-Altman plot was drawn. White-to-white distance by the IOL-Master was 11.8 ± 0.40 mm, on photographs it was 11.29 ± 0.58 mm, and internal anterior chamber diameter by slit-lamp adapted optical coherence tomography was 11.34 ± 0.54 mm. A significant difference was found between the IOL-Master and slit-lamp adapted optical coherence tomography and between the IOL-Master and photographs (p < 0.05), but not between slit-lamp adapted optical coherence tomography and photographs (p > 0.05). All measurements were significantly correlated (Spearman, p < 0.05). In order to obtain accurate measurements of the internal anterior chamber diameter and anterior chamber depth, a method involving direct visualization of intraocular structures should be used.
17. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.
Science.gov (United States)
Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M
2010-02-01
In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol-level instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.
18. Variable Delay With Directly-Modulated R-SOA and Optical Filters for Adaptive Antenna Radio-Fiber Access
DEFF Research Database (Denmark)
Prince, Kamau; Presi, Marco; Chiuchiarelli, Andrea
2009-01-01
The system is based on a directly-modulated reflective semiconductor optical amplifier (R-SOA) and exploits the interplay between transmission-line dispersion and tunable optical filtering to achieve flexible true time delay, with 2π beam steering at the different antennas. The system was characterized, then successfully tested with two types of signals defined in the IEEE 802.16 (WiMAX) standard for wireless networks: a 90 Mbps single-carrier signal (64-QAM at 2.4 GHz) and a 78 Mbps multitone orthogonal frequency-division multiple access (OFDMA) signal. The power budget of this configuration supports a 4-element antenna array.
19. Optical UWB pulse generator using an N tap microwave photonic filter and phase inversion adaptable to different pulse modulation formats.
Science.gov (United States)
Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José
2009-03-30
We propose theoretically and demonstrate experimentally an optical architecture for flexible Ultra-Wideband pulse generation. It is based on an N-tap reconfigurable microwave photonic filter fed by a laser array, using phase inversion in a Mach-Zehnder modulator. Since a large number of positive and negative coefficients can be easily implemented, UWB pulses fitted to the FCC mask requirements can be generated. As an example, a four-tap pulse generator that complies with the FCC regulation is experimentally demonstrated. The proposed pulse generator allows different pulse modulation formats, since the amplitude, polarity, and time delay of the generated pulse are controlled.
20. A survey of the signal stability and radiation dose response of sulfates in the context of adapting optical dating for Mars
International Nuclear Information System (INIS)
O'Connor, V.A.; Lepper, K.; Morken, T.O.; Thorstad, D.J.; Podoll, A.; Giles, M.J.
2011-01-01
The Martian landscape is currently dominated by eolian processes, and eolian dunes are a direct geomorphic expression of the dynamic interaction between the atmosphere and the lithosphere of planets. The timing, frequency, and spatial extent of dune mobility directly reflect changing climatic conditions; therefore, sedimentary depositional ages are important for understanding the paleoclimatic and geomorphologic history of features and processes present on the surface of the Earth or Mars. Optical dating is an established terrestrial dosimetric dating technique that is being developed for this task on Mars. Gypsum and anhydrite are two of the most stable and abundant sulfate species found on the Earth, and they have been discovered in Martian sediments along with various magnesium sulfates and jarosite. In this study, the optical dating properties of various Ca-, Mg-, and Fe-bearing sulfates were documented to help evaluate the influence they may have on in-situ optical dating in eolian environments on Mars. Single-aliquot regenerative-dose (SAR) experimental procedures have been adapted to characterize the radiation dose response and signal stability of the Martian sulfate analogs. Jarosite was dosimetrically inert in our experiments. The radiation dose response of the Ca- and Mg-sulfates was monotonically increasing in all cases, with characteristic doses ranging from ∼100 to ∼1000 Gy. Short-term signal fading also varied considerably in the Ca- and Mg-sulfates, ranging from ∼0% to ∼40% per decade for these materials. These results suggest that the OSL properties of Ca- and Mg-sulfates will need to be considered when developing protocols for in-situ optical dating on Mars, but more enticingly, our results foreshadow the potential for gypsum to be developed as a geochronometer for Mars or the Earth. - Highlights: → The radiation dose response and OSL signal stability of Ca- and Mg-sulfates was highly variable. → OSL properties of Ca- and Mg
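The "characteristic dose" quoted above is the D0 parameter of the single-saturating-exponential dose-response form standard in OSL dating, I(D) = I_max (1 − exp(−D/D0)). A sketch of recovering D0 from a growth curve, on synthetic data only (not the paper's measurements):

```python
import numpy as np

# Single-saturating-exponential OSL dose response:
# I(D) = I_max * (1 - exp(-D / D0)), D0 = characteristic dose.
def dose_response(D, I_max, D0):
    return I_max * (1.0 - np.exp(-D / D0))

doses = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])  # Gy
signal = dose_response(doses, 1.0, 300.0)      # simulate D0 = 300 Gy

def fit_D0(doses, signal):
    """Grid search over D0; I_max follows by linear least squares."""
    best = None
    for D0 in np.linspace(50.0, 1000.0, 1901):  # 0.5 Gy steps
        basis = 1.0 - np.exp(-doses / D0)
        I_max = signal @ basis / (basis @ basis)
        err = float(np.sum((signal - I_max * basis) ** 2))
        if best is None or err < best[0]:
            best = (err, float(D0), float(I_max))
    return best[1], best[2]

D0_fit, I_fit = fit_D0(doses, signal)
print(D0_fit, I_fit)   # recovers D0 ~ 300 Gy, I_max ~ 1
```

A small D0 means the signal saturates at low doses and limits the datable age range, which is why the ∼100-1000 Gy spread reported for the Ca- and Mg-sulfates matters for protocol design.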
1. Self-referencing Mach-Zehnder interferometer as a laser system diagnostic: Active and adaptive optical systems
International Nuclear Information System (INIS)
Feldman, M.; Mockler, D.J.; English, R.E. Jr.; Byrd, J.L.; Salmon, J.T.
1991-01-01
We are incorporating a novel self-referencing Mach-Zehnder interferometer into a large-scale laser system as a real-time, interactive diagnostic tool for wavefront measurement. The instrument is capable of absolute wavefront measurements accurate to better than λ/10 p-v over a wavelength range > 300 nm without readjustment of the optical components. This performance is achieved through the design of both refractive optics and a catadioptric collimator to achromatize the Mach-Zehnder reference arm. Other features include polarization insensitivity through the use of low angles of incidence on all beamsplitters, as well as an equal-path-length configuration that allows measurement of either broad-band or closely spaced laser-line sources. Instrument accuracy is periodically monitored in place by means of a thermally and mechanically stable wavefront reference source that is calibrated off-line with a phase conjugate interferometer. Video interferograms are analyzed using Fourier transform techniques on a computer that includes a dedicated array processor. Computer and video networks maintain distributed interferometers under the control of a single analysis computer with multiple user access. 7 refs., 11 figs
2. Detailed Morphological Changes of Foveoschisis in Patient with X-Linked Retinoschisis Detected by SD-OCT and Adaptive Optics Fundus Camera
Directory of Open Access Journals (Sweden)
Keiichiro Akeo
2015-01-01
Purpose. To report the morphological and functional changes associated with a regression of foveoschisis in a patient with X-linked retinoschisis (XLRS). Methods. A 42-year-old man with XLRS underwent genetic analysis and detailed ophthalmic examinations. Functional assessments included best-corrected visual acuity (BCVA), full-field electroretinograms (ERGs), and multifocal ERGs (mfERGs). Morphological assessments included fundus photography, spectral-domain optical coherence tomography (SD-OCT), and adaptive optics (AO) fundus imaging. After the baseline clinical data were obtained, topical dorzolamide was applied to the patient. The patient was followed for 24 months. Results. A reported RS1 gene mutation (P203L) was found in the patient. At the baseline, his decimal BCVA was 0.15 in the right and 0.3 in the left eye. Fundus photographs showed bilateral spoke wheel-appearing maculopathy. SD-OCT confirmed the foveoschisis in the left eye. The AO images of the left eye showed spoke wheel retinal folds, and the folds were thinner than those in the fundus photographs. During the follow-up period, the foveal thickness in the SD-OCT images and the number of retinal folds in the AO images were reduced. Conclusions. We have presented the detailed morphological changes of foveoschisis in a patient with XLRS detected by SD-OCT and an AO fundus camera. However, the findings do not indicate whether the changes were influenced by topical dorzolamide or the natural history.
3. Evaluation of optical data gained by ARAMIS-measurement of abdominal wall movements for an anisotropic pattern design of stress-adapted hernia meshes produced by embroidery technology
Science.gov (United States)
Breier, A.; Bittrich, L.; Hahn, J.; Spickenheuer, A.
2017-10-01
For the sustainable repair of abdominal wall hernia, the application of hernia meshes is required. One reason for the relapse of hernia after surgery is seen in an inadequate adaptation of the mechanical properties of the mesh to the movements of the abdominal wall. Differences in the stiffness of the mesh and the abdominal tissue cause tension, friction and stress, resulting in a deficient tissue response and subsequently in a recurrence of the hernia, preferentially in the marginal area of the mesh. Embroidery technology enables a targeted influence on the mechanical properties of the generated textile structure by directed thread deposition. Textile parameters like stitch density, alignment and angle can be changed easily and locally in the embroidery pattern to generate a space-resolved mesh with mechanical properties adapted to the requirements of the surrounding tissue. To determine those requirements, the movements of the abdominal wall and the resulting distortions need to be known. This study was conducted to gain optical data on the abdominal wall movements by non-invasive ARAMIS measurement on 39 test persons to estimate the direction and value of the major strains.
4. Morphology and Topography of Retinal Pericytes in the Living Mouse Retina Using In Vivo Adaptive Optics Imaging and Ex Vivo Characterization
Science.gov (United States)
Schallek, Jesse; Geng, Ying; Nguyen, HoanVu; Williams, David R.
2013-01-01
Purpose. To noninvasively image retinal pericytes in the living eye and characterize NG2-positive cell topography and morphology in the adult mouse retina. Methods. Transgenic mice expressing fluorescent pericytes (NG2, DsRed) were imaged using a two-channel adaptive optics scanning laser ophthalmoscope (AOSLO). One channel imaged vascular perfusion with near-infrared light. A second channel simultaneously imaged fluorescent retinal pericytes. Mice were also imaged using wide-field ophthalmoscopy. To confirm the in vivo imaging, five eyes were enucleated and imaged in flat mount with conventional fluorescence microscopy. Cell topography was quantified relative to the optic disc. Results. We observed strong DsRed fluorescence from NG2-positive cells. AOSLO revealed fluorescent vascular mural cells enveloping all vessels in the living retina. Cells were stellate on larger venules and showed banded morphology on arterioles. NG2-positive cells indicative of pericytes were found on the smallest capillaries of the retinal circulation. Wide-field SLO enabled quick assessment of the NG2-positive cell distribution, but provided insufficient resolution for cell counts. Ex vivo microscopy showed relatively even topography of NG2-positive capillary pericytes at eccentricities more than 0.3 mm from the optic disc (515 ± 94 cells/mm2 of retinal area). Conclusions. We provide the first high-resolution images of retinal pericytes in the living animal. Subcellular resolution enabled morphological identification of NG2-positive cells on capillaries showing classic features and topography of retinal pericytes. This report provides a foundational basis for future studies that will track and quantify pericyte topography, morphology, and function in the living retina over time, especially in the progression of microvascular disease. PMID:24150762
5. Adaptable Optical Fiber Displacement-Curvature Sensor Based on a Modal Michelson Interferometer with a Tapered Single Mode Fiber.
Science.gov (United States)
Salceda-Delgado, G; Martinez-Rios, A; Selvas-Aguilar, R; Álvarez-Tamayo, R I; Castillo-Guzman, A; Ibarra-Escamilla, B; Durán-Ramírez, V M; Enriquez-Gomez, L F
2017-06-02
A compact, highly sensitive optical fiber displacement and curvature radius sensor is presented. The device consists of an adiabatic bi-conical fused fiber taper spliced to a single-mode fiber (SMF) segment with a flat face end. The bi-conical taper structure acts as a modal coupling device between core and cladding modes for the SMF segment. When the bi-conical taper is bent by an axial displacement, the symmetrical bi-conical shape of the tapered structure is stressed, causing a change in the refractive index profile, which becomes asymmetric. As a result, the taper adiabaticity is lost, and interference between modes appears. As the bending increases, a small change in the fringe visibility and a wavelength shift in the periodic reflection spectrum of the in-fiber interferometer are produced. The displacement sensitivity and the spectral periodicity of the device can be adjusted by proper selection of the SMF length. Sensitivities from around 1.93 to 3.4 nm/mm were obtained for SMF lengths between 7.5 and 12.5 cm. Both sensor interrogation methods, wavelength shift and visibility contrast, can be used to measure displacement and curvature radius.
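As a toy illustration of how a sensor with the quoted sensitivities (a wavelength shift of a few nm per mm of displacement) would be used, a linear calibration can be fitted and then inverted to read displacement from a measured peak shift. The calibration numbers below are invented for the example, not taken from the paper.

```python
# Hypothetical linear calibration of a wavelength-shift displacement sensor.
import numpy as np

# Assumed calibration data: applied displacement (mm) vs. spectral peak shift (nm),
# with a 2.5 nm/mm "true" sensitivity and a little measurement noise.
displacement_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
shift_nm = 2.5 * displacement_mm + 0.02 * np.random.default_rng(0).standard_normal(5)

# Least-squares line: polyfit returns (slope, intercept) for degree 1.
sensitivity, offset = np.polyfit(displacement_mm, shift_nm, 1)

def displacement_from_shift(d_lambda_nm):
    """Invert the linear calibration: displacement in mm from a shift in nm."""
    return (d_lambda_nm - offset) / sensitivity
```

In practice the same fit would be repeated per SMF length, since the abstract notes the sensitivity itself depends on the chosen segment length.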
6. A software reconfigurable optical multiband UWB system utilizing a bit-loading combined with adaptive LDPC code rate scheme
Science.gov (United States)
He, Jing; Dai, Min; Chen, Qinghui; Deng, Rui; Xiang, Changqing; Chen, Lin
2017-07-01
In this paper, an effective bit-loading combined with adaptive LDPC code rate (ALCR) algorithm is proposed and investigated in a software reconfigurable multiband UWB over fiber system. To compensate for the power fading and chromatic dispersion affecting high-frequency multiband OFDM UWB signal transmission over standard single-mode fiber (SSMF), a Mach-Zehnder modulator (MZM) with negative chirp parameter is utilized. In addition, a negative power penalty of -1 dB for the 128-QAM multiband OFDM UWB signal is measured at the hard-decision forward error correction (HD-FEC) limit of 3.8 × 10^-3 after 50 km SSMF transmission. The experimental results show that, compared to the fixed coding scheme with a code rate of 75%, the signal-to-noise ratio (SNR) is improved by 2.79 dB for the 128-QAM multiband OFDM UWB system after 100 km SSMF transmission using the ALCR algorithm. Moreover, by employing bit-loading combined with the ALCR algorithm, the bit error rate (BER) performance of the system can be further improved. The simulation results show that, at the HD-FEC limit, the Q factor is improved by 3.93 dB at an SNR of 19.5 dB over 100 km SSMF transmission, compared to fixed modulation with an uncoded scheme at the same spectral efficiency (SE).
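The core idea of SNR-adaptive bit loading can be sketched with the standard gap approximation, b = floor(log2(1 + SNR/Γ)): strong subcarriers carry dense QAM, weak ones carry less. This is a generic textbook sketch under assumed thresholds, not the authors' ALCR algorithm; the SNR profile and gap value are illustrative.

```python
# Generic gap-approximation bit loading over an OFDM subcarrier SNR profile.
import numpy as np

def bit_load(snr_db, gap_db=6.0, max_bits=7):
    """Bits per subcarrier: b = floor(log2(1 + SNR/Gamma)), capped at 128-QAM."""
    snr = 10 ** (np.asarray(snr_db) / 10.0)
    gamma = 10 ** (gap_db / 10.0)  # SNR gap for the target BER (assumed 6 dB)
    bits = np.floor(np.log2(1.0 + snr / gamma)).astype(int)
    return np.clip(bits, 0, max_bits)

# A toy low-pass fiber channel: SNR rolls off toward high-frequency subcarriers.
snr_profile_db = np.linspace(28, 8, 16)
bits = bit_load(snr_profile_db)
```

In a combined scheme like the one described above, the LDPC code rate would then be chosen per band on top of this per-subcarrier constellation assignment.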
7. Multileaf collimator leaf position verification and analysis for adaptive radiation therapy using a video-optical method
Science.gov (United States)
Sethna, Sohrab B.
External beam radiation therapy is commonly used to eliminate and control cancerous tumors. High-energy beams are shaped to match the patient's specific tumor volume, thereby maximizing the radiation dose to malignant cells and limiting the dose to normal tissue. A multileaf collimator (MLC) consisting of multiple pairs of tungsten leaves is used to conform the radiation beam to the desired treatment field. Advanced treatment methods utilize dynamic MLC settings to conform to multiple treatment fields and provide intensity modulated radiation therapy (IMRT). Future methods would further increase conformity by actively tracking tumor motion caused by patient cardiac and respiratory motion. Leaf position quality assurance for a dynamic MLC is critical, as variation between the planned and actual leaf positions could induce significant errors in radiation dose. The goal of this research project is to prototype a video-optical quality assurance system for MLC leaf positions. The system captures light-field images of MLC leaf sequences during dynamic therapy. Image acquisition and analysis software was developed to determine leaf edge positions. The mean absolute difference between the QA prototype's predicted and caliper-measured leaf positions was found to be 0.6 mm, with an uncertainty of ±0.3 mm. Maximum errors in predicted positions were below 1.0 mm for static fields. The prototype serves as a proof of concept for quality assurance of future tumor tracking methods. Specifically, a lung tumor phantom was created to mimic a lung tumor's motion from respiration. The lung tumor video images were superimposed on MLC field video images for visualization and analysis. The toolbox is capable of displaying leaf position, leaf velocity, and tumor position, and of determining errors between planned and actual treatment fields for dynamic radiation therapy.
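A minimal version of video-optical leaf-edge finding is a half-maximum crossing with subpixel linear interpolation along an image row. The sketch below applies that generic idea to a synthetic sigmoid penumbra; the profile shape, width, and edge location are invented for the demo and are not the thesis's actual algorithm.

```python
# Subpixel leaf-edge localization on a single intensity profile (image row).
import numpy as np

def leaf_edge_position(profile):
    """Return the subpixel index where intensity falls through half its range."""
    half = 0.5 * (profile.max() + profile.min())
    above = profile >= half
    i = np.argmax(~above)  # first pixel past the bright (open-field) region
    # Linear interpolation between pixel i-1 (bright) and pixel i (dark).
    p0, p1 = profile[i - 1], profile[i]
    return (i - 1) + (p0 - half) / (p0 - p1)

# Synthetic open field ending at pixel 40.7 with a smooth penumbra.
x = np.arange(80, dtype=float)
profile = 1.0 / (1.0 + np.exp((x - 40.7) / 1.5))  # sigmoid falloff
edge = leaf_edge_position(profile)
```

Repeating this per leaf pair, frame by frame, yields the leaf position and velocity traces the abstract describes comparing against the planned sequence.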
8. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.
Science.gov (United States)
Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N
2015-06-01
Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed.
9. ADAPTIVE OPTICS OBSERVATIONS OF 3 μm WATER ICE IN SILHOUETTE DISKS IN THE ORION NEBULA CLUSTER AND M43
Energy Technology Data Exchange (ETDEWEB)
Terada, Hiroshi; Pyo, Tae-Soo; Minowa, Yosuke; Hayano, Yutaka; Oya, Shin; Hattori, Masayuki; Takami, Hideki [Subaru Telescope, National Astronomical Observatory of Japan, 650 North A' ohoku Place, Hilo, HI 96720 (United States); Tokunaga, Alan T. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Watanabe, Makoto [Department of Cosmosciences, Hokkaido University, Kita 10, Nishi 8, Kita-ku, Sapporo, Hokkaido 060-0810 (Japan); Saito, Yoshihiko [Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro, Tokyo 152-8551 (Japan); Ito, Meguru [Department of Mechanical Engineering, University of Victoria, 3800 Finnerty Road, Victoria, BC, V8P 5C2 (Canada); Iye, Masanori, E-mail: terada@subaru.naoj.org [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan)
2012-12-01
We present the near-infrared images and spectra of four silhouette disks in the Orion Nebula Cluster (M42) and M43 using the Subaru Adaptive Optics system. While d053-717 and d141-1952 show no water ice feature at 3.1 μm, a moderately deep (τ_ice ≈ 0.7) water ice absorption is detected toward d132-1832 and d216-0939. Taking into account the water ice so far detected in the silhouette disks, the critical inclination angle to produce a water ice absorption feature is confirmed to be 65°-75°. As for d216-0939, the crystallized water ice profile is exactly the same as in the previous observations taken 3.63 years ago. If the water ice material is located at 30 AU, then the observations suggest it is uniform on a scale of about 3.5 AU.
10. The Performance of the Robo-AO Laser Guide Star Adaptive Optics System at the Kitt Peak 2.1 m Telescope
Science.gov (United States)
Jensen-Clem, Rebecca; Duev, Dmitry A.; Riddle, Reed; Salama, Maïssa; Baranec, Christoph; Law, Nicholas M.; Kulkarni, S. R.; Ramprakash, A. N.
2018-01-01
Robo-AO is an autonomous laser guide star adaptive optics (AO) system recently commissioned at the Kitt Peak 2.1 m telescope. With the ability to observe every clear night, Robo-AO at the 2.1 m telescope is the first dedicated AO observatory. This paper presents the imaging performance of the AO system in its first 18 months of operations. For a median seeing value of 1.″44, the average Strehl ratio is 4% in the i′ band. After post-processing, the contrast ratio under sub-arcsecond seeing for a 2 ≤ i′ ≤ 16 primary star is five and seven magnitudes at radial offsets of 0.″5 and 1.″0, respectively. The data processing and archiving pipelines run automatically at the end of each night. The first stage of the processing pipeline shifts and adds the rapid frame rate data using techniques optimized for different signal-to-noise ratios. The second, "high-contrast" stage of the pipeline is, as its name suggests, well suited to finding faint stellar companions. Currently, a range of scientific programs, including the synthetic tracking of near-Earth asteroids, the binarity of stars in young clusters, and weather on solar system planets, are being undertaken with Robo-AO.
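The Strehl ratio quoted above can be estimated, in the generic Fourier-optics sense, as the ratio of the aberrated PSF peak to the diffraction-limited peak; for small aberrations it approaches the Maréchal approximation S ≈ exp(-σ²). The sketch below checks this with an assumed circular pupil and random residual phase (grid size, pupil radius, and aberration level are all illustrative, not Robo-AO parameters).

```python
# Strehl ratio from Fourier optics: peak(aberrated PSF) / peak(perfect PSF).
import numpy as np

def strehl(phase, pupil):
    """Ratio of aberrated to diffraction-limited PSF peak for a given pupil."""
    psf = lambda field: np.abs(np.fft.fft2(field))**2
    return psf(pupil * np.exp(1j * phase)).max() / psf(pupil).max()

n = 256
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
pupil = ((x**2 + y**2) / (n//4)**2 <= 1.0).astype(float)

# Random residual phase of 0.3 rad rms inside the pupil.
rng = np.random.default_rng(1)
sigma = 0.3
phase = np.zeros((n, n))
phase[pupil > 0] = sigma * rng.standard_normal(int(pupil.sum()))
S = strehl(phase, pupil)  # should be close to exp(-sigma**2) ~ 0.91
```

A 4% Strehl, as reported for Robo-AO in median seeing, corresponds in this approximation to a much larger residual wavefront error than the 0.3 rad used in the demo.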
11. Adaptive optics retinal imaging with automatic detection of the pupil and its boundary in real time using Shack-Hartmann images.
Science.gov (United States)
de Castro, Alberto; Sawides, Lucie; Qi, Xiaofeng; Burns, Stephen A
2017-08-20
Retinal imaging with an adaptive optics (AO) system usually requires that the eye be centered and stable relative to the exit pupil of the system. Aberrations are then typically corrected inside a fixed circular pupil. This approach can be restrictive when imaging some subjects, since the pupil may not be round and maintaining a stable head position can be difficult. In this paper, we present an automatic algorithm that relaxes these constraints. An image quality metric is computed for each spot of the Shack-Hartmann image to detect the pupil and its boundary, and the control algorithm is applied only to regions within the subject's pupil. Images on a model eye as well as for five subjects were obtained to show that a system exit pupil larger than the subject's eye pupil could be used for AO retinal imaging without a reduction in image quality. This algorithm automates the task of selecting pupil size. It also may relax constraints on centering the subject's pupil and on the shape of the pupil.
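The spot-by-spot metric idea in the preceding abstract can be illustrated generically: tile the Shack-Hartmann image into lenslet cells, score each cell (here by total spot energy, a simple stand-in for the paper's image-quality metric), and keep only cells above a threshold to approximate the subject's pupil. The cell size, threshold, and synthetic image below are assumptions for the demo, not the authors' algorithm.

```python
# Pupil-boundary detection from a Shack-Hartmann image via per-cell spot energy.
import numpy as np

def pupil_mask(sh_image, cell, rel_threshold=0.3):
    """Boolean mask of lenslet cells whose spot energy clears the threshold."""
    ny, nx = sh_image.shape[0] // cell, sh_image.shape[1] // cell
    cells = sh_image[:ny*cell, :nx*cell].reshape(ny, cell, nx, cell)
    energy = cells.sum(axis=(1, 3))          # one score per lenslet cell
    return energy >= rel_threshold * energy.max()

# Synthetic SH image: one bright spot per lenslet, only inside a round pupil.
cell, ngrid = 16, 20
img = np.zeros((cell * ngrid, cell * ngrid))
yy, xx = np.mgrid[0:ngrid, 0:ngrid]
inside = (xx - 9.5)**2 + (yy - 9.5)**2 <= 8.0**2
for j, i in zip(*np.nonzero(inside)):
    img[j*cell + cell//2, i*cell + cell//2] = 1.0

mask = pupil_mask(img, cell)  # recovers the lit-lenslet region
```

Restricting the AO control loop to `mask`, as the paper describes, means subapertures outside the subject's (possibly non-round) pupil never contribute to the reconstruction.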
12. A DETAILED GRAVITATIONAL LENS MODEL BASED ON SUBMILLIMETER ARRAY AND KECK ADAPTIVE OPTICS IMAGING OF A HERSCHEL-ATLAS SUBMILLIMETER GALAXY AT z = 4.243
Energy Technology Data Exchange (ETDEWEB)
Bussmann, R. S.; Gurwell, M. A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Fu Hai; Cooray, A. [Department of Physics and Astronomy, University of California, Irvine, CA 92697 (United States); Smith, D. J. B.; Bonfield, D.; Dunne, L. [Centre for Astrophysics, Science and Technology Research Institute, University of Hertfordshire, Hatfield, Herts AL10 9AB (United Kingdom); Dye, S.; Eales, S. [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom); Auld, R. [Cardiff University, School of Physics and Astronomy, Queens Buildings, The Parade, Cardiff CF24 3AA (United Kingdom); Baes, M.; Fritz, J. [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent (Belgium); Baker, A. J. [Department of Physics and Astronomy, Rutgers, the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854-8019 (United States); Cava, A. [Departamento de Astrofisica, Facultad de CC. Fisicas, Universidad Complutense de Madrid, E-28040 Madrid (Spain); Clements, D. L.; Dariush, A. [Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom); Coppin, K. [Department of Physics, McGill University, Ernest Rutherford Building, 3600 Rue University, Montreal, Quebec, H3A 2T8 (Canada); Dannerbauer, H. [Universitaet Wien, Institut fuer Astronomie, Tuerkenschanzstrasse 17, 1180 Wien, Oesterreich (Austria); De Zotti, G. [Universita di Padova, Dipto di Astronomia, Vicolo dell' Osservatorio 2, IT 35122, Padova (Italy); Hopwood, R., E-mail: rbussmann@cfa.harvard.edu [Department of Physics and Astronomy, Open University, Walton Hall, Milton Keynes, MK7 6AA (United Kingdom); and others
2012-09-10
We present high-spatial-resolution imaging obtained with the Submillimeter Array (SMA) at 880 μm and the Keck adaptive optics (AO) system in the K_S band of a gravitationally lensed submillimeter galaxy (SMG) at z = 4.243 discovered in the Herschel Astrophysical Terahertz Large Area Survey. The SMA data (angular resolution ≈ 0.″6) resolve the dust emission into multiple lensed images, while the Keck AO K_S-band data (angular resolution ≈ 0.″1) resolve the lens into a pair of galaxies separated by 0.″3. We present an optical spectrum of the foreground lens obtained with the Gemini-South telescope that provides a lens redshift of z_lens = 0.595 ± 0.005. We develop and apply a new lens modeling technique in the visibility plane that shows that the SMG is magnified by a factor of μ = 4.1 ± 0.2 and has an intrinsic infrared (IR) luminosity of L_IR = (2.1 ± 0.2) × 10^13 L_⊙. We measure a half-light radius of the background source of r_s = 4.4 ± 0.5 kpc, which implies an IR luminosity surface density of Σ_IR = (3.4 ± 0.9) × 10^11 L_⊙ kpc^-2, a value that is typical of z > 2 SMGs but significantly lower than that of IR-luminous galaxies at z ≈ 0. The two lens galaxies are compact (r_lens ≈ 0.9 kpc) early-types with Einstein radii of θ_E1 = 0.″57 ± 0.″01 and θ_E2 = 0.″40 ± 0.″01 that imply masses of M_lens1 = (7.4 ± 0.5) × 10^10 M_⊙ and M_lens2 = (3.7 ± 0.3) × 10^10 M_⊙. The two lensing galaxies are likely about to undergo a dissipationless merger, and the mass and size of the resultant system should be similar to those of other early-type galaxies at z ≈ 0.6. This work highlights the importance of high-spatial-resolution imaging in developing models of strongly lensed galaxies.
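The quoted numbers can be sanity-checked with two lines of arithmetic: the apparent (lensed) luminosity is μ·L_IR, and the surface density follows from Σ_IR = L_IR/(π r_s²). That surface-density convention is an assumption made here because it reproduces the reported (3.4 ± 0.9) × 10^11 L_⊙ kpc⁻²; some authors instead put L_IR/2 inside the half-light radius.

```python
# Consistency check of the lensing and luminosity figures quoted above.
import math

mu = 4.1        # lensing magnification
L_ir = 2.1e13   # intrinsic IR luminosity, L_sun
r_s = 4.4       # half-light radius, kpc

L_apparent = mu * L_ir                    # luminosity before delensing
sigma_ir = L_ir / (math.pi * r_s**2)      # L_sun / kpc^2, assumed convention
```

The computed Σ_IR (~3.45 × 10^11 L_⊙ kpc⁻²) agrees with the published central value to within ~2%, comfortably inside the ±0.9 uncertainty.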
13. Misregistration in Adaptive Optics Systems
Science.gov (United States)
2009-03-01
...introduces a new factor called the influence function, the amount of slope that is introduced in neighboring subapertures by pushing one actuator... The actuator influence function A_kl is the phase caused by poking an individual actuator; it is assumed that A_kl = 1 at the actuator... Square brackets indicate actuator indices, and round brackets indicate subaperture indices... the influence function is given by A(x, y)...
14. Center for Adaptive Optics
Science.gov (United States)
UCSC's CfAO and ISEE, together with Maui Community College, run education and internship programs.
15. Adaptive Optics, LLLFT Interferometry, Astronomy
National Research Council Canada - National Science Library
2002-01-01
.... We will combine the wavefronts from the three telescopes using a conventional beam recombination system and acquire and track the fringes formed with a Low Light Level Fringe Tracking system (LLLFT...
16. High-contrast imaging of the close environment of HD 142527. VLT/NaCo adaptive optics thermal and angular differential imaging
Science.gov (United States)
Rameau, J.; Chauvin, G.; Lagrange, A.-M.; Thébault, P.; Milli, J.; Girard, J. H.; Bonnefoy, M.
2012-10-01
Context. It has long been suggested that circumstellar disks surrounding young stars may be the signposts of planets, and even more so since the recent discoveries of embedded substellar companions. According to models, the planet-disk interaction may create large structures, gaps, rings, or spirals in the disk. In that sense, the Herbig star HD 142527 is particularly compelling, as its massive disk displays intriguing asymmetries that suggest the existence of a dynamical perturber of unknown nature. Aims: Our goal was to obtain deep thermal images of the close circumstellar environment of HD 142527 to re-image the reported close-in structures (cavity, spiral arms) of the disk and to search for stellar and substellar companions that could be connected to their presence. Methods: We obtained high-contrast images with the NaCo adaptive optics system at the Very Large Telescope in the L'-band. We applied different analysis strategies using both classical PSF subtraction and angular differential imaging to probe for any extended structures or point-like sources. Results: The circumstellar environment of HD 142527 is revealed at an unprecedented spatial resolution, down to the subarcsecond level for the first time at 3.8 μm. Our images reveal important radial and azimuthal asymmetries that invalidate an elliptical shape for the disk. They instead suggest a bright inhomogeneous spiral arm plus various fainter spiral arms. We also confirm an inner cavity down to 30 AU and two important dips at position angles of 0 and 135 deg. The detection performance in angular differential imaging enables exploration of the planetary mass regime for projected physical separations as close as 40 AU. Use of our detection map together with Monte Carlo simulations sets stringent constraints on the presence of planetary-mass, brown dwarf, or stellar companions as a function of the semi-major axis. They severely limit any presence of massive giant planets with semi-major axis beyond 50 AU, i
17. 50 years of nonlinear optics
International Nuclear Information System (INIS)
Shen Yuanrang
2011-01-01
This article presents a brief introduction to the birth and early investigations of nonlinear optics, such as second harmonic generation, sum and difference frequency generation, stimulated Raman scattering, and self-action of light, etc. Several important research achievements and applications of nonlinear optics are presented as well, including nonlinear optical spectroscopy, phase conjugation and adaptive optics, coherent nonlinear optics, and high-order harmonic generation. In the end, current and future research topics in nonlinear optics are summarized. (authors)
18. Optics/Optical Diagnostics Laboratory
Data.gov (United States)
Federal Laboratory Consortium — The Optics/Optical Diagnostics Laboratory supports graduate instruction in optics, optical and laser diagnostics and electro-optics. The optics laboratory provides...
19. Infrared Spectra of the 10-μm Bands of 1,2-Difluoroethane and 1,1,2-Trifluoroethane: Vibrationally Mediated Torsional Tunneling in 1,1,2-Trifluoroethane
Science.gov (United States)
Stone, Stephen C.; Miller, C. Cameron; Philips, Laura A.; Andrews, A. M.; Fraser, G. T.; Pate, B. H.; Xu, Li-Hong
1995-12-01
The 3-MHz-resolution infrared spectra of the 10-μm bands of the gauche conformer of 1,2-difluoroethane (HFC152) and the C1-symmetry conformer of 1,1,2-trifluoroethane (HFC143) have been measured using a molecular-beam electric-resonance optothermal spectrometer with a tunable microwave-sideband CO2 laser source. For 1,2-difluoroethane, two bands have been studied, the ν17 B-symmetry C-F stretch at 1077.3 cm-1 and the ν13 B-symmetry CH2 rock at 896.6 cm-1. Both bands are well fit to an asymmetric-rotor Hamiltonian to better than 0.5 MHz. The ν13 band is effectively unperturbed, while the ν17 band is weakly perturbed, as shown by the large change in centrifugal distortion constants from the ground-state values. Two bands have also been studied for 1,1,2-trifluoroethane, the ν11 symmetric CF2 stretch at 1077.2 cm-1 and the ν13 C-C stretch at 905.1 cm-1. One of the two bands, ν11, is unperturbed and fit to near the experimental precision. The ν13 vibration, on the other hand, is weakly perturbed by an interaction with a nearby state. This perturbation leads to a doubling or splitting of the lines, due to a perturbation-induced lifting of the degeneracy of the symmetric and antisymmetric tunneling states associated with tunneling between the two equivalent C1 forms. For the J, Ka states studied, the splittings are as large as 37 MHz. Combining this observation with published low-resolution far-infrared measurements of torsional sequence-band and hot-band frequencies and calculations from an empirical torsional potential allows us to identify the perturbing state as ν17 + 6ν18. Here, ν17 is the CF2 twist and ν18 is the torsion. The matrix element responsible for this interaction exchanges eight vibrational quanta!
20. Adaptation of the closed cycle refrigeration system Spectrim™ to radiation cryochemistry: γ-irradiation, ESR and optical absorption spectroscopy, ITL and RTL of frozen matrices at temperatures down to 14 K
International Nuclear Information System (INIS)
Mayer, J.; Plonka, A.; Ratajski, A.; Suwalski, J.P.; Wypych, M.
1978-01-01
The adaptation of the commercially available closed cycle refrigeration system Spectrim™ for radiation cryochemistry experiments with frozen matrices down to 14 K is described. The cold head of the Spectrim™, equipped with vacuum shroud extensions and sample holders appropriate for the given type of experiment, was contained in lead shields provided with special entrances for irradiation of samples with 60Co γ-rays. The shroud extensions used for ESR and optical absorption measurements and the sample holders for isothermal luminescence and radiothermoluminescence measurements are described. (U.K.)
1. Discovery of a Highly Unequal-mass Binary T Dwarf with Keck Laser Guide Star Adaptive Optics: A Coevality Test of Substellar Theoretical Models and Effective Temperatures
Science.gov (United States)
Liu, Michael C.; Dupuy, Trent J.; Leggett, S. K.
2010-10-01
Highly unequal-mass-ratio binaries are rare among field brown dwarfs, with the mass ratio distribution of the known census described by f(q) ∝ q^(4.9±0.7). However, such systems enable a unique test of the joint accuracy of evolutionary and atmospheric models, under the constraint of coevality for the individual components (the "isochrone test"). We carry out this test using two of the most extreme field substellar binaries currently known, the T1 + T6 epsilon Ind Bab binary and a newly discovered 0.″14 T2.0 + T7.5 binary, 2MASS J12095613-1004008AB, identified with Keck laser guide star adaptive optics. The latter is the most extreme tight binary resolved to date (q ≈ 0.5). Based on the locations of the binary components on the Hertzsprung-Russell (H-R) diagram, current models successfully indicate that these two systems are coeval, with internal age differences of Δlog(age) = -0.8 ± 1.3 (-1.0 +1.2/-1.3) dex and 0.5 +0.4/-0.3 (0.3 +0.3/-0.4) dex for 2MASS J1209-1004AB and epsilon Ind Bab, respectively, as inferred from the Lyon (Tucson) models. However, the total mass of epsilon Ind Bab derived from the H-R diagram (≈ 80 M_Jup using the Lyon models) is strongly discrepant with the reported dynamical mass. This problem, which is independent of the assumed age of the epsilon Ind Bab system, can be explained by a ≈ 50-100 K systematic error in the model atmosphere fitting, indicating slightly warmer temperatures for both components; bringing the mass determinations from the H-R diagram and the visual orbit into consistency leads to an inferred age of ≈ 6 Gyr for epsilon Ind Bab, older than previously assumed. Overall, the two T dwarf binaries studied here, along with recent results from T dwarfs in age and mass benchmark systems, yield evidence for small (≈ 100 K) errors in the evolutionary models and/or model atmospheres, but not significantly larger. Future parallax, resolved spectroscopy, and dynamical mass measurements for 2MASS J1209-1004AB will enable a more
2. Optical materials
International Nuclear Information System (INIS)
Poker, D.B.; Ortiz, C.
1989-01-01
This book reports on: Diamond films, Synthesis of optical materials, Structure related optical properties, Radiation effects in optical materials, Characterization of optical materials, Deposition of optical thin films, and Optical fibers and waveguides
3. Adaptation of a radiofrequency glow discharge optical emission spectrometer (RF-GD-OES) to the analysis of light elements (carbon, nitrogen, oxygen and hydrogen) in solids: glove box integration for the analysis of nuclear samples
International Nuclear Information System (INIS)
Hubinois, J.-C.
2001-01-01
The purpose of this work is to use radiofrequency glow discharge optical emission spectrometry to quantitatively determine carbon, nitrogen, oxygen and hydrogen at low concentration (in the ppm range) in nuclear materials. In this study, and before the definitive contamination of the system, work was carried out on non-radioactive materials (steel, pure iron, copper and titanium). As the initial apparatus could not deliver an RF power inducing a reproducible discharge and was not adapted to the analysis of light elements: 1- the radiofrequency system had to be changed; 2- the systems controlling gaseous atmospheres had to be improved in order to obtain analytical signals stemming strictly from the sample; 3- three discharge lamps had to be tested and compared in terms of performance; 4- the system for collecting light had to be optimized. The modifications brought to the initial system improved the intensities and stabilities of the signals, which allowed lower detection limits (1000 times lower for carbon). The latter are in the ppm range for carbon and about a few tens of ppm for nitrogen and oxygen in pure iron. Calibration curves were plotted for materials presenting very different sputtering rates in order to check the existence of a 'function of analytical transfer', with the purpose of compensating for the lack of reference materials certified in light elements at low concentration. Transposition of this type of function to other matrices remains to be checked. Concerning hydrogen, since no reference material usable with our technique is available, materials certified in deuterium (chosen as a surrogate for hydrogen) were studied in order to demonstrate the feasibility of hydrogen analysis. Parallel to this work, results obtained by modeling an RF discharge show that the performance of the lamp can be improved and that the optical system must be strictly adapted to the glow discharge. (author)
4. MEMS optical sensor
DEFF Research Database (Denmark)
2013-01-01
The present invention relates to an all-optical sensor utilizing effective index modulation of a waveguide and detection of a wavelength shift of reflected light, and a force sensing system accommodating said optical sensor. One embodiment of the invention relates to a sensor system comprising at least one multimode light source, one or more optical sensors comprising a multimode sensor optical waveguide accommodating a distributed Bragg reflector, at least one transmitting optical waveguide for guiding light from said at least one light source to said one or more multimode sensor optical waveguides, a detector for measuring light reflected from said Bragg reflector in said one or more multimode sensor optical waveguides, and a data processor adapted for analyzing variations in the Bragg wavelength of at least one higher order mode of the reflected light.
Science.gov (United States)
Anderson, Lorin W.
1979-01-01
Schools have devised several ways to adapt instruction to a wide variety of student abilities and needs. Judged by criteria for what adaptive education should be, most learning for mastery programs look good. (Author/JM)
6. Higher performance and lower cost optical DPSK receiver
Data.gov (United States)
National Aeronautics and Space Administration — To demonstrate (benchtop experiment) a DPSK receiver with a free-space interferometer, showing that fiber-optic coupling, associated adaptive optics, and optical...
7. Optic neuritis
Science.gov (United States)
Retro-bulbar neuritis; Multiple sclerosis - optic neuritis; Optic nerve - optic neuritis ... The exact cause of optic neuritis is unknown. The optic nerve carries visual information from your eye to the brain. The nerve can swell when ...
Science.gov (United States)
Martin, Maurice
The topics are presented in viewgraph form and include the following: adaptive structures flight experiments; enhanced resolution using active vibration suppression; Advanced Controls Technology Experiment (ACTEX); ACTEX program status; ACTEX-2; ACTEX-2 program status; modular control patch; STRV-1b Cryocooler Vibration Suppression Experiment; STRV-1b program status; Precision Optical Bench Experiment (PROBE); Clementine Spacecraft Configuration; TECHSAT all-composite spacecraft; Inexpensive Structures and Materials Flight Experiment (INFLEX); and INFLEX program status.
9. Experimental and theoretical study of Bragg-Fresnel focalizing optical systems engraved on multi layers interferential mirrors adapted to X and X-UV fields
International Nuclear Information System (INIS)
Idir, M.
1995-02-01
This work concerns the study of a particular type of X-ray focusing optics known as Bragg-Fresnel lenses, formed through ion-etching of multilayered structures. Using the Super-ACO (LURE/Orsay) synchrotron storage ring, we tested several Bragg-Fresnel lenses having either linear or elliptical geometries (producing a line or a point focus, respectively). Diffraction profiles were first obtained for the linear lenses ion-etched on W/Si multilayers of nanometric period. The experimental results were compared with our theoretical predictions. We next proposed and tested a solution to the problem of the superposition of the different diffraction orders in the focal plane: fabricating Bragg-Fresnel lenses with an off-axis configuration, first for the linear and then the elliptical geometry. In an experimental application, an off-axis elliptical lens produced a focused X-ray spot of 5 × 10 μm² with the Super-ACO synchrotron source. The same lens also produced a 1/3-size X-ray image of a grid-like object at 1750 eV using the first and third diffraction orders. (author)
DEFF Research Database (Denmark)
Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper
2015-01-01
Adaptive Lighting. Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people’s needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people’s situations and design intentions. This book discusses...... differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases...
Data.gov (United States)
12. Optical Neural Network Classifier Architectures
National Research Council Canada - National Science Library
1998-01-01
We present an adaptive opto-electronic neural network hardware architecture capable of exploiting parallel optics to realize real-time processing and classification of high-dimensional data for Air...
13. RAVEN AND THE CENTER OF MAFFEI 1: MULTI-OBJECT ADAPTIVE OPTICS OBSERVATIONS OF THE CENTER OF A NEARBY ELLIPTICAL GALAXY AND THE DETECTION OF AN INTERMEDIATE AGE POPULATION
Energy Technology Data Exchange (ETDEWEB)
Davidge, T. J.; Andersen, D. R. [Dominion Astrophysical Observatory, National Research Council of Canada, 5071 West Saanich Road, Victoria, BC V9E 2E7 (Canada); Lardière, O.; Bradley, C.; Blain, C. [Department of Mechanical Engineering, University of Victoria, Victoria, BC V8W 3P2 (Canada); Oya, S. [Subaru Telescope, National Optical Observatory of Japan Hilo, HI 96720 (United States); Akiyama, M.; Ono, Y. H., E-mail: tim.davidge@nrc.ca, E-mail: david.andersen@nrc.ca, E-mail: lardiere@uvic.ca, E-mail: cbr@uvic.ca, E-mail: celia.blain@gmail.com, E-mail: oya@subaru.naoj.org, E-mail: akiyama@astr.tohoku.ac.jp, E-mail: yo-2007@astr.tohoku.ac.jp [Astronomical Institute, Tohoku University 6–3 Aramaki, Aoba-ku, Sedai, 980-8578 Japan (Japan)
2015-10-01
Near-infrared (NIR) spectra that have an angular resolution of ∼0.15 arcsec are used to examine the stellar content of the central regions of the nearby elliptical galaxy Maffei 1. The spectra were recorded at the Subaru Telescope, with wavefront distortions corrected by the RAVEN Multi-object Adaptive Optics science demonstrator. The Ballik–Ramsay C{sub 2} absorption bandhead near 1.76 μm is detected, and models in which ∼10%–20% of the light near 1.8 μm originates from stars of spectral type C5 reproduce the depth of this feature. Archival NIR and mid-infrared images are also used to probe the structural and photometric properties of the galaxy. Comparisons with models suggest that an intermediate age population dominates the spectral energy distribution between 1 and 5 μm near the galaxy center. This is consistent not only with the presence of C stars, but also with the large Hβ index that has been measured previously for Maffei 1. The J − K color is more or less constant within 15 arcsec of the galaxy center, suggesting that the brightest red stars are well-mixed in this area.
14. Smart X-ray optics
International Nuclear Information System (INIS)
Michette, A G; Pfauntsch, S J; Sahraei, S; Shand, M; Morrison, G R; Hart, D; Vojnovic, B; Stevenson, T; Parkes, W; Dunare, C; Willingale, R; Feldman, C; Button, T; Zhang, D; Rodriguez-Sanmartin, D; Wang, H
2009-01-01
This paper describes reflective adaptive/active optics for applications including studies of biological radiation damage. The optics work on the polycapillary principle, but use arrays of channels in thin silicon. For optimum performance the x-rays should reflect once off a channel wall in each of two successive arrays. This reduces aberrations since then the Abbe sine condition is approximately satisfied. Adaptivity is achieved by flexing the arrays via piezo actuation, providing further aberration reduction and controllable focal length.
DEFF Research Database (Denmark)
Møller Larsen, Marcus; Lyngsie, Jacob
2017-01-01
We investigate the connection between contract duration, relational mechanisms, and premature relationship termination. Based on an analysis of a large sample of exchange relationships in the global service-provider industry, we argue that investments in either longer contract duration or more in...... ambiguous reference points for adaption and thus increase the likelihood of premature termination by restricting the parties' set of adaptive actions....
Science.gov (United States)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Science.gov (United States)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
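The adaptive-embedding idea described above can be sketched with a toy LSB embedder that writes message bits only where local image contrast is high, so flat regions, where LSB changes are easiest to detect, are skipped. The selection rule and threshold below are illustrative assumptions, not the paper's scheme (which also adapts to a known steganalysis technique):

```python
def embed_adaptive_lsb(pixels, bits, threshold=8):
    """Toy adaptive LSB embedder: write message bits only into pixels
    whose local contrast |p[i] - p[i-1]| exceeds a threshold.
    `pixels` is a flat list of 0..255 grey values; returns the stego
    pixels and the positions used. A real adaptive scheme needs a
    selection rule that survives embedding; here the positions are
    simply returned as a side channel for clarity."""
    stego, used, j = list(pixels), [], 0
    for i in range(1, len(stego)):
        if j == len(bits):
            break
        if abs(stego[i] - stego[i - 1]) >= threshold:  # "busy" pixel
            stego[i] = (stego[i] & ~1) | bits[j]       # overwrite LSB
            used.append(i)
            j += 1
    return stego, used

def extract_adaptive_lsb(stego, used):
    """Read the message back from the recorded embedding positions."""
    return [stego[i] & 1 for i in used]
```

With `pixels = [10, 30, 31, 32, 90, 91, 200, 10]` the flat run `31, 32` is never touched, which is exactly the adaptivity the abstract argues raises undetected capacity.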
DEFF Research Database (Denmark)
Petersen, Kjell Yngve; Søndergaard, Karin; Kongshaug, Jesper
2015-01-01
Adaptive Lighting. Adaptive lighting is based on a partial automation of the possibilities to adjust the colour tone and brightness levels of light in order to adapt to people’s needs and desires. IT support is key to the technical developments that afford adaptive control systems. The possibilities...... the investigations of lighting scenarios carried out in two test installations: White Cube and White Box. The test installations are discussed as large-scale experiential instruments. In these test installations we examine what could potentially occur when light using LED technology is integrated and distributed differently into an architectural body. We also examine what might occur when light is dynamic and able to change colour, intensity and direction, and when it is adaptive and can be brought into interaction with its surroundings. In short, what happens to an architectural space when artificial lighting ceases...
19. Building nonredundant adaptive wavelets by update lifting
NARCIS (Netherlands)
H.J.A.M. Heijmans (Henk); B. Pesquet-Popescu; G. Piella (Gema)
2002-01-01
textabstractAdaptive wavelet decompositions appear useful in various applications in image and video processing, such as image analysis, compression, feature extraction, denoising and deconvolution, or optic flow estimation. For such tasks it may be important that the multiresolution representations
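The lifting construction referenced in this abstract builds nonredundant wavelets from predict and update steps; the paper makes the update step adaptive, but the mechanics are easiest to see in a plain integer Haar lifting sketch (this fixed-update version is for illustration only, not the authors' adaptive scheme):

```python
def haar_lift(x):
    """One level of the integer Haar wavelet via lifting:
    split -> predict (detail) -> update (coarse). Nonredundant:
    the output has exactly as many coefficients as the input."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]           # predict step
    s = [e + (dd >> 1) for e, dd in zip(even, d)]    # update step
    return s, d

def haar_unlift(s, d):
    """Invert the lifting steps in reverse order; each step is
    trivially invertible, giving perfect reconstruction."""
    even = [ss - (dd >> 1) for ss, dd in zip(s, d)]
    odd = [dd + e for dd, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

In the adaptive variant, the update weights are chosen per sample from a decision function of the local signal, and the decoder re-derives the same decision to stay invertible.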
20. Adaptive Intelligent Ventilation Noise Control, Phase II
Data.gov (United States)
National Aeronautics and Space Administration — To address the NASA need for quiet on-orbit crew quarters (CQ), Physical Optics Corporation (POC) proposes to develop a new Adaptive Intelligent Ventilation Noise...
1. Adaptive Intelligent Ventilation Noise Control, Phase I
Data.gov (United States)
National Aeronautics and Space Administration — To address NASA needs for quiet crew volumes in a space habitat, Physical Optics Corporation (POC) proposes to develop a new Adaptive Intelligent Ventilation Noise...
2. The 14 mu m band of carbon stars
NARCIS (Netherlands)
Yamamura, [No Value; de Jong, T; Waters, LBFM; Cami, J; Justtanont, K; LeBertre, T; Lebre, A; Waelkens, C
1999-01-01
We have studied the absorption bands around 14 mum in the spectra of 11 carbon stars with mass-loss rates ranging from 10(-8) to 10(-4) M-circle dot yr(-1), based on data obtained with the Short Wavelength Spectrometer (SWS) on board the Infrared Space Observatory (ISO). All stars clearly show a
International Development Research Centre (IDRC) Digital Library (Canada)
Addressing Climate Change Adaptation in Africa through Participatory Action Research. A Regional Observatory ... while the average annual rainfall recorded between. 1968 and 1999 was .... the region of Thies. For sustainability reasons, the.
International Development Research Centre (IDRC) Digital Library (Canada)
By Reg'
adaptation to climate change from various regions of the Sahel. Their .... This simple system, whose cost and maintenance were financially sustainable, brought ... method that enables him to learn from experience and save time, which he ...
5. Optical sensor for measuring humidity, strain and temperature
DEFF Research Database (Denmark)
2015-01-01
The present invention relates to an optical sensor (100) adapted to measure at least three physical parameters, said optical sensor comprising a polymer-based optical waveguide structure comprising a first Bragg grating structure (101) being adapted to provide information about a first, a second...
Energy Technology Data Exchange (ETDEWEB)
Romashko, R V; Kulchin, Yu N; Bezruk, M N; Ermolaev, S A [Institute of Automation and Control Processes, Far Eastern Branch of the Russian Academy of Sciences, Vladivostok (Russian Federation)
2016-03-31
A new type of a laser hydrophone based on dynamic holograms, formed in a photorefractive crystal, is proposed and studied. It is shown that the use of dynamic holograms makes it unnecessary to use complex optical schemes and systems for electronic stabilisation of the interferometer operating point. This essentially simplifies the scheme of the laser hydrophone preserving its high sensitivity, which offers the possibility to use it under a strong variation of the environment parameters. The laser adaptive holographic hydrophone implemented at present possesses the sensitivity at a level of 3.3 mV Pa{sup -1} in the frequency range from 1 to 30 kHz. (laser hydrophones)
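With the stated sensitivity of 3.3 mV Pa{sup -1}, converting the hydrophone's output voltage to acoustic pressure is a single division; a small sketch, assuming the sensitivity is flat across the quoted 1-30 kHz band:

```python
SENSITIVITY_V_PER_PA = 3.3e-3  # 3.3 mV/Pa, as stated in the abstract

def acoustic_pressure_pa(signal_volts):
    """Convert hydrophone output voltage to acoustic pressure,
    assuming the stated sensitivity holds over 1-30 kHz."""
    return signal_volts / SENSITIVITY_V_PER_PA

# e.g. a 16.5 mV reading corresponds to an acoustic pressure of 5 Pa
```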
Directory of Open Access Journals (Sweden)
Thais Flores Nogueira Diniz
2008-04-01
Full Text Available The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of “the same story told in a different medium” to a comprehensible definition such as “the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts”. To illustrate this new concept, the article discusses Spike Jonze’s film Adaptation. according to James Naremore’s proposal, which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.
8. High Resolution Observations using Adaptive Optics: Achievements ...
ground-based telescope (aperture >= 50 cm) designs have an integrated AO system. The realisation of the .... netic field measurements are started to produce quantitative information about ... A 10 × 10 sub-aperture for sampling the wavefront ...
9. Extragalactic Fields Optimized for Adaptive Optics
Science.gov (United States)
2011-03-01
Gemini Observatory, Southern Operations Center, c/o AURA, Casilla 603, La Serena, Chile. Observatories of the Carnegie Institution of Washington... unsuitable anyway. Any such fields would be inaccessible from Chile and be at quite high air mass most of the time for major northern hemisphere... drawback of such a star is not the vertical blooming, which affects a small fraction of the imaging area, but the halos due to internal reflections
Science.gov (United States)
1983-12-01
[Table residue from the scanned report: acousto-optic material parameters for dense flint glass, LiNbO3, PbMoO4, and slow-shear TeO2.] 3.1.3 Delay Line. The delay line used for the initial experiment is an Isomet Type 1201 AO modulator. This is a glass unit operating at ...
11. Compact adaptive optic-optical coherence tomography system
Science.gov (United States)
Olivier, Scot S [Livermore, CA; Chen, Diana C [Fremont, CA; Jones, Steven M [Danville, CA; McNary, Sean M [Stockton, CA
2011-05-17
A Badal optometer and rotating cylinders are inserted in the AO-OCT to correct large spectacle aberrations such as myopia, hyperopia and astigmatism, for ease of clinical use. Spherical mirrors in the sets of the telescope are rotated orthogonally to reduce aberrations and beam displacement caused by the scanners. This greatly reduces AO registration errors and improves AO performance, enabling high-order aberration correction in patients' eyes.
12. Acousto-Optic Applications for Multichannel Adaptive Optical Processor
Science.gov (United States)
1992-06-01
AO cell and the two-channel line-scan camera system described in Subsection 4.1. The AO material for this IntraAction AOD-70 device was flint glass (n... [Table residue: refractive indices and apertures for the single-channel AO cell (flint glass, n = 1.68), the multichannel AO cell (TeO2, n = 2.26), and the beam splitter (glass, n = 1.515).] Multichannel correlation was... [Figure residue: two-tone intermodulation dynamic ranges of longitudinal TeO2 Bragg cells for several acoustic power densities. SOURCE: Reference 21, TR-92]
13. Alternative Optical Architectures for Multichannel Adaptive Optical Processing
Science.gov (United States)
1993-04-01
DEFF Research Database (Denmark)
Andersen, Torben Juul
2015-01-01
This article provides an overview of theoretical contributions that have influenced the discourse around strategic adaptation including contingency perspectives, strategic fit reasoning, decision structure, information processing, corporate entrepreneurship, and strategy process. The related concepts of strategic renewal, dynamic managerial capabilities, dynamic capabilities, and strategic response capabilities are discussed and contextualized against strategic responsiveness. The insights derived from this article are used to outline the contours of a dynamic process of strategic adaptation. This model incorporates elements of central strategizing, autonomous entrepreneurial behavior, interactive information processing, and open communication systems that enhance the organization's ability to observe exogenous changes and respond effectively to them.
DEFF Research Database (Denmark)
Petersen, Kjell Yngve; Kongshaug, Jesper; Søndergaard, Karin
2015-01-01
offered by adaptive lighting control are created by the ways that the system components, the network and data flow can be coordinated through software so that the dynamic variations are controlled in ways that meaningfully adapt according to people’s situations and design intentions. This book discusses...... to be static, and no longer acts as a kind of spatial constancy maintaining stability and order? Moreover, what new potentials open in lighting design? This book is one of four books that is published in connection with the research project entitled LED Lighting; Interdisciplinary LED Lighting Research...
DEFF Research Database (Denmark)
Kjeldsen, Lars Peter; Eriksen, Mette Rose
2010-01-01
The article is an evaluation of the adaptive tests that were introduced in the Danish primary and lower-secondary school (folkeskolen). It focuses in particular on assessment in the folkeskole, and contributes guidance on evaluation, evaluation tools, and subject-specific assessment materials.
17. Nonlinear optics
International Nuclear Information System (INIS)
Boyd, R.W.
1992-01-01
Nonlinear optics is the study of the interaction of intense laser light with matter. This book is a textbook on nonlinear optics at the level of a beginning graduate student. The intent of the book is to provide an introduction to the field of nonlinear optics that stresses fundamental concepts and that enables the student to go on to perform independent research in this field. This book covers the areas of nonlinear optics, quantum optics, quantum electronics, laser physics, electrooptics, and modern optics
18. Physical optics
International Nuclear Information System (INIS)
Kim Il Gon; Lee, Seong Su; Jang, Gi Wan
2012-07-01
This book covers physical optics: the properties and transmission of light; mathematical descriptions of waves, such as harmonic and cylindrical waves; electromagnetic theory and light; the transmission of light, including Fermat's principle and the Fresnel equations; geometrical optics I and II; optical instruments such as stops, glasses and cameras; polarized light, including double refraction; interference, including interference by multiple reflections; diffraction; solid optics; crystal optics, including Faraday rotation and the Kerr effect; and the measurement of light. Each chapter has an exercise.
19. Physical optics
Energy Technology Data Exchange (ETDEWEB)
Kim Il Gon; Lee, Seong Su; Jang, Gi Wan
2012-07-15
This book covers physical optics: the properties and transmission of light; mathematical descriptions of waves, such as harmonic and cylindrical waves; electromagnetic theory and light; the transmission of light, including Fermat's principle and the Fresnel equations; geometrical optics I and II; optical instruments such as stops, glasses and cameras; polarized light, including double refraction; interference, including interference by multiple reflections; diffraction; solid optics; crystal optics, including Faraday rotation and the Kerr effect; and the measurement of light. Each chapter has an exercise.
20. Quantum optics
National Research Council Canada - National Science Library
Agarwal, G. S
2013-01-01
.... Focusing on applications of quantum optics, the textbook covers recent developments such as engineering of quantum states, quantum optics on a chip, nano-mechanical mirrors, quantum entanglement...
Directory of Open Access Journals (Sweden)
Thais Flores Nogueira Diniz
2006-04-01
Full Text Available The article begins by historicizing film adaptation from the arrival of cinema, pointing out the many theoretical approaches under which the process has been seen: from the concept of “the same story told in a different medium” to a comprehensible definition such as “the process through which works can be transformed, forming an intersection of textual surfaces, quotations, conflations and inversions of other texts”. To illustrate this new concept, the article discusses Spike Jonze’s film Adaptation. according to James Naremore’s proposal, which considers the study of adaptation as part of a general theory of repetition, joined with the study of recycling, remaking, and every form of retelling. The film deals with the attempt by the scriptwriter Charles Kaufman, played by Nicolas Cage, to adapt/translate a non-fictional book to the cinema, but ends up with a kind of film which is by no means what it intended to be: a film of action in the model of Hollywood productions. During the process of creation, Charles and his twin brother, Donald, undergo a series of adventures involving some real persons from the world of film, the author and the protagonist of the book, all of them turning into fictional characters in the film. In the film, adaptation then signifies something different from its traditional meaning.
2. Optical traps with geometric aberrations
International Nuclear Information System (INIS)
Roichman, Yael; Waldron, Alex; Gardel, Emily; Grier, David G.
2006-01-01
We assess the influence of geometric aberrations on the in-plane performance of optical traps by studying the dynamics of trapped colloidal spheres in deliberately distorted holographic optical tweezers. The lateral stiffness of the traps turns out to be insensitive to moderate amounts of coma, astigmatism, and spherical aberration. Moreover holographic aberration correction enables us to compensate inherent shortcomings in the optical train, thereby adaptively improving its performance. We also demonstrate the effects of geometric aberrations on the intensity profiles of optical vortices, whose readily measured deformations suggest a method for rapidly estimating and correcting geometric aberrations in holographic trapping systems
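The abstract infers lateral trap stiffness from the dynamics of the trapped colloidal spheres. A common way to do this (the paper's exact analysis pipeline is not given in the abstract) is the equipartition estimate k = k_B·T / ⟨x²⟩ from the bead's position fluctuations:

```python
import statistics

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition estimate of lateral trap stiffness (N/m) from
    the position fluctuations of a trapped bead: k = k_B * T / var(x).
    A standard tweezer calibration, sketched here as an illustration."""
    return K_B * temperature_k / statistics.pvariance(positions_m)
```

Repeating the estimate while deliberately adding coma or astigmatism to the hologram is then a direct way to test the insensitivity the abstract reports.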
International Development Research Centre (IDRC) Digital Library (Canada)
IDRC
vital sector is under threat. While it is far from the only development challenge facing local farmers, extreme variations in the climate of West Africa in the past several decades have dealt the region a bad hand. Drought and flood now follow each other in succession. Adaptation is... “The floods spoiled our harvests and we.
DEFF Research Database (Denmark)
Møller Larsen, Marcus; Lyngsie, Jacob
and reciprocal adaptation of informal governance structure create ambiguity in situations of contingencies, which, subsequently, increases the likelihood of premature relationship termination. Using a large sample of exchange relationships in the global service provider industry, we find support for a hypothesis...
5. Statistical behaviour of optical vortex fields
CSIR Research Space (South Africa)
Roux, FS
2009-09-01
Full Text Available) Density limitation → effective profile for point vortex (remove evanescent field). Scintillated optical beams: an optical beam in a turbulent atmosphere → index variations cause random phase modulations → leads to distortion of the optical beam. Weak scintillation → continuous phase distortions that can be corrected by an adaptive optical system: wavefront sensor, beam splitter, deformable mirror, and a control signal turning the scintillated beam into a corrected beam. Strong scintillation...
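The weak-scintillation case above, phase-only distortion that an adaptive optical system can remove, can be sketched as a single phase-conjugation step with a stroke-limited deformable mirror (phase values and the stroke limit are illustrative assumptions):

```python
def correct_wavefront(sensed_phase, dm_stroke=3.0):
    """One closed-loop correction step (sketch): the deformable mirror
    applies the negative of the phase sensed by the wavefront sensor,
    limited by its mechanical stroke. Phases are in radians."""
    return [p - max(-dm_stroke, min(dm_stroke, p)) for p in sensed_phase]

# Weak scintillation: continuous phase errors within the stroke are
# removed completely, leaving zero residual.
residual = correct_wavefront([0.5, -0.9, 2.9])
# Strong scintillation also produces amplitude fluctuations and phase
# vortices, which a phase-only corrector like this cannot remove.
```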
Directory of Open Access Journals (Sweden)
Paul Rozin
2008-02-01
Full Text Available People live in a world in which they are surrounded by potential disgust elicitors such as "used" chairs, air, silverware, and money, as well as excretory activities. People function in this world by ignoring most of these, by active avoidance, reframing, or adaptation. The issue is particularly striking for professions, such as morticians, surgeons, or sanitation workers, in which there is frequent contact with major disgust elicitors. In this study, we examine the "adaptation" process to dead bodies as disgust elicitors, by measuring specific types of disgust sensitivity in medical students before and after they have spent a few months dissecting a cadaver. Using the Disgust Scale, we find a significant reduction in disgust responses to death and body-envelope violation elicitors, but no significant change in any other specific type of disgust. There is a clear reduction in discomfort at touching a cold dead body, but not in touching a human body which is still warm after death.
Energy Technology Data Exchange (ETDEWEB)
Huq, Saleemul
2011-11-15
Efforts to help the world's poor will face crises in coming decades as climate change radically alters conditions. Action Research for Community Adaptation in Bangladesh (ARCAB) is an action-research programme on responding to climate change impacts through community-based adaptation. Set in Bangladesh at 20 sites that are vulnerable to floods, droughts, cyclones and sea level rise, ARCAB will follow impacts and adaptation as they evolve over half a century or more. National and international 'research partners', collaborating with ten NGO 'action partners' with global reach, seek knowledge and solutions applicable worldwide. After a year setting up ARCAB, we share lessons on the programme's design and move into our first research cycle.
8. Advanced optical manufacturing digital integrated system
Science.gov (United States)
Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong
2012-10-01
The development of advanced optical manufacturing technology must keep pace with modern science and technology. To address the problems of low efficiency, low yield, and poor repeatability and consistency in the manufacture of large, high-precision optical components, this paper applies a business-driven approach and the Rational Unified Process to study the advanced optical manufacturing process flow and the requirements of an Advanced Optical Manufacturing Integrated System, and puts forward its architecture and key technologies. The optical-component core and the manufacturing-process-driven design of the Advanced Optical Manufacturing Digital Integrated System are presented. The results show the system works effectively, realizing dynamic planning of the manufacturing process and information integration, and improving the yield of the production manufactory.
International Nuclear Information System (INIS)
1993-01-01
This paper describes the circuits and programs, in assembly language, developed to control the two DC motors that give mobility to a mechanical arm with two degrees of freedom. As a whole, the system is based on an adaptive regulator designed around an 8-bit microprocessor that, starting from a regulation mode based on the successive approximation method, evolves to another mode in which a single approximation is sufficient to reach the correct position of each motor. (Author) 22 fig. 6 ref
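The successive-approximation regulation mode described in this record can be sketched as a binary search over an 8-bit motor command, one comparison per bit from the MSB down. The plant model in the example is a hypothetical ideal actuator, not the paper's hardware:

```python
def position_by_successive_approximation(read_position, target, bits=8):
    """Sketch of the first regulation mode described above: settle an
    8-bit motor command by successive approximation, testing one bit
    per step from the MSB down. `read_position` stands in for the
    feedback path from the arm's position sensor."""
    command = 0
    for bit in range(bits - 1, -1, -1):
        trial = command | (1 << bit)
        if read_position(trial) <= target:  # not past the target yet
            command = trial
    return command

# With an ideal actuator whose position equals its command,
# 8 comparisons suffice for any target in 0..255.
```

The paper's second mode replaces this loop with a single corrective step once the regulator has adapted to the motor's response.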
11. Optical measuring system with an interrogator and a polymer-based single-mode fibre optic sensor system
DEFF Research Database (Denmark)
2017-01-01
The present invention relates to an optical measuring system comprising a polymer-based single-mode fibre-optic sensor system (102), an optical interrogator (101), and an optical arrangement (103) interconnecting the optical interrogator (101) and the polymer-based single-mode fibre-optic sensor system (102). The invention further relates to an optical interrogator adapted to be connected to a polymer-based single-mode fibre-optic sensor system via an optical arrangement. The interrogator comprises a broadband light source arrangement (104) and a spectrum analysing arrangement which receives...
DEFF Research Database (Denmark)
Berth, Mette
2005-01-01
This paper focuses on the use of an adaptive ethnography when studying such phenomena as young people's use of mobile media in a learning perspective. Mobile media such as PDAs and mobile phones have a number of affordances which make them potential tools for learning. However, before we begin to design and develop educational materials for mobile media platforms we must first understand everyday use and behaviour with a medium such as a mobile phone. The paper outlines the research design for a PhD project on mobile learning which focuses on mobile phones as a way to bridge the gap between formal and informal learning contexts. The paper also proposes several adaptive methodological techniques for studying young people's interaction with mobiles.
13. Optical Computing
OpenAIRE
Woods, Damien; Naughton, Thomas J.
2008-01-01
We consider optical computers that encode data using images and compute by transforming such images. We give an overview of a number of such optical computing architectures, including descriptions of the type of hardware commonly used in optical computing, as well as some of the computational efficiencies of optical devices. We go on to discuss optical computing from the point of view of computational complexity theory, with the aim of putting some old, and some very recent, re...
14. Advances in Retinal Optical Imaging
Directory of Open Access Journals (Sweden)
Yanxiu Li
2018-04-01
Full Text Available Retinal imaging has undergone a revolution in the past 50 years to allow for better understanding of the eye in health and disease. Significant improvements have occurred both in hardware such as lasers and optics in addition to software image analysis. Optical imaging modalities include optical coherence tomography (OCT, OCT angiography (OCTA, photoacoustic microscopy (PAM, scanning laser ophthalmoscopy (SLO, adaptive optics (AO, fundus autofluorescence (FAF, and molecular imaging (MI. These imaging modalities have enabled improved visualization of retinal pathophysiology and have had a substantial impact on basic and translational medical research. These improvements in technology have translated into early disease detection, more accurate diagnosis, and improved management of numerous chorioretinal diseases. This article summarizes recent advances and applications of retinal optical imaging techniques, discusses current clinical challenges, and predicts future directions in retinal optical imaging.
15. Engineering Optics
CERN Document Server
Iizuka, Keigo
2008-01-01
Engineering Optics is a book for students who want to apply their knowledge of optics to engineering problems, as well as for engineering students who want to acquire the basic principles of optics. It covers such important topics as optical signal processing, holography, tomography, holographic radars, fiber optical communication, electro- and acousto-optic devices, and integrated optics (including optical bistability). As a basis for understanding these topics, the first few chapters give easy-to-follow explanations of diffraction theory, Fourier transforms, and geometrical optics. Practical examples, such as the video disk, the Fresnel zone plate, and many more, appear throughout the text, together with numerous solved exercises. There is an entirely new section in this updated edition on 3-D imaging.
16. Electron optics
CERN Document Server
Grivet, Pierre; Bertein, F; Castaing, R; Gauzit, M; Septier, Albert L
1972-01-01
Electron Optics, Second English Edition, Part I: Optics is a 10-chapter book that begins by elucidating the fundamental features and basic techniques of electron optics, as well as the distribution of potential and field in electrostatic lenses. This book then explains the field distribution in magnetic lenses; the optical properties of electrostatic and magnetic lenses; and the similarities and differences between glass optics and electron optics. Subsequent chapters focus on lens defects; some electrostatic lenses and triode guns; and magnetic lens models. The strong focusing lenses and pris
17. Proceedings of the thirty fifth international conference on contemporary trends in optics and optoelectronics: conference digest - extended abstracts
International Nuclear Information System (INIS)
2011-01-01
Optics and optoelectronics are indispensable in all spheres of human activity, ranging from day to day needs to advanced scientific and technological pursuits and their applications for the benefit of the society. This conference covers the following topics: adaptive optics, biomedical optics and imaging, classical and quantum optics, fibre optics, optics for space applications, optical metrology and NDT, optical information processing, optical and optoelectronic materials. Papers relevant to INIS are indexed separately
Science.gov (United States)
Gatenby, Robert A; Silva, Ariosto S; Gillies, Robert J; Frieden, B Roy
2009-06-01
A number of successful systemic therapies are available for treatment of disseminated cancers. However, tumor response is often transient, and therapy frequently fails due to emergence of resistant populations. The latter reflects the temporal and spatial heterogeneity of the tumor microenvironment as well as the evolutionary capacity of cancer phenotypes to adapt to therapeutic perturbations. Although cancers are highly dynamic systems, cancer therapy is typically administered according to a fixed, linear protocol. Here we examine an adaptive therapeutic approach that evolves in response to the temporal and spatial variability of tumor microenvironment and cellular phenotype as well as therapy-induced perturbations. Initial mathematical models find that when resistant phenotypes arise in the untreated tumor, they are typically present in small numbers because they are less fit than the sensitive population. This reflects the "cost" of phenotypic resistance such as additional substrate and energy used to up-regulate xenobiotic metabolism, and therefore not available for proliferation, or the growth inhibitory nature of environments (i.e., ischemia or hypoxia) that confer resistance on phenotypically sensitive cells. Thus, in the Darwinian environment of a cancer, the fitter chemosensitive cells will ordinarily proliferate at the expense of the less fit chemoresistant cells. The models show that, if resistant populations are present before administration of therapy, treatments designed to kill maximum numbers of cancer cells remove this inhibitory effect and actually promote more rapid growth of the resistant populations. We present an alternative approach in which treatment is continuously modulated to achieve a fixed tumor population. The goal of adaptive therapy is to enforce a stable tumor burden by permitting a significant population of chemosensitive cells to survive so that they, in turn, suppress proliferation of the less fit but chemoresistant
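The competition dynamic described above — a resistant clone that is less fit in the untreated tumor but is released from competition when maximum-dose therapy removes the sensitive population — can be sketched with a toy two-population model. Everything here (growth rates, the kill term, the dosing schedule) is a hypothetical illustration, not the authors' actual model:

```python
def step(S, R, dose, dt=0.01, r_s=0.5, r_r=0.3, K=1.0, kill=2.0):
    """One Euler step for sensitive (S) and resistant (R) tumor burdens.
    The resistant clone pays a fitness cost (r_r < r_s); the drug kills
    only sensitive cells, at a rate proportional to the dose."""
    total = S + R
    dS = r_s * S * (1.0 - total / K) - kill * dose * S
    dR = r_r * R * (1.0 - total / K)
    return S + dt * dS, R + dt * dR

# Continuous maximum-tolerated dosing: the sensitive population collapses
# and the resistant clone, freed from competition, takes over the tumor.
S, R = 0.5, 0.05
for _ in range(2000):
    S, R = step(S, R, dose=1.0)
```

An adaptive schedule in this toy setting would instead modulate `dose` to hold `S + R` near a fixed target, deliberately leaving sensitive cells alive to suppress the resistant clone.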
19. Quantum Transduction with Adaptive Control
Science.gov (United States)
Zhang, Mengzhen; Zou, Chang-Ling; Jiang, Liang
2018-01-01
Quantum transducers play a crucial role in hybrid quantum networks. A good quantum transducer can faithfully convert quantum signals from one mode to another with minimum decoherence. Most investigations of quantum transduction are based on the protocol of direct mode conversion. However, the direct protocol requires the matching condition, which in practice is not always feasible. Here we propose an adaptive protocol for quantum transducers, which can convert quantum signals without requiring the matching condition. The adaptive protocol only consists of Gaussian operations, feasible in various physical platforms. Moreover, we show that the adaptive protocol can be robust against imperfections associated with finite squeezing, thermal noise, and homodyne detection, and it can be implemented to realize quantum state transfer between microwave and optical modes.
20. Quantum Transduction with Adaptive Control.
Science.gov (United States)
Zhang, Mengzhen; Zou, Chang-Ling; Jiang, Liang
2018-01-12
Quantum transducers play a crucial role in hybrid quantum networks. A good quantum transducer can faithfully convert quantum signals from one mode to another with minimum decoherence. Most investigations of quantum transduction are based on the protocol of direct mode conversion. However, the direct protocol requires the matching condition, which in practice is not always feasible. Here we propose an adaptive protocol for quantum transducers, which can convert quantum signals without requiring the matching condition. The adaptive protocol only consists of Gaussian operations, feasible in various physical platforms. Moreover, we show that the adaptive protocol can be robust against imperfections associated with finite squeezing, thermal noise, and homodyne detection, and it can be implemented to realize quantum state transfer between microwave and optical modes. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.590862512588501, "perplexity": 4966.924629362406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212039.16/warc/CC-MAIN-20200923175652-20200923205652-00034.warc.gz"} |
https://koreascience.or.kr/search.page?keywords=Large+Eddy+simulation | • Title, Summary, Keyword: Large Eddy simulation
### DETACHED EDDY SIMULATION OF BASE FLOW IN SUPERSONIC MAINSTREAM (초음속 유동장에서 기저 유동의 Detached Eddy Simulation)
• Shin, J.R.;Won, S.H.;Choi, J.Y.
• 한국전산유체공학회:학술대회논문집
• /
• /
• pp.104-110
• /
• 2008
• Detached Eddy Simulation (DES) is applied to an axisymmetric base flow in a supersonic mainstream. DES is a hybrid approach to modeling turbulence that combines the best features of the Reynolds-averaged Navier-Stokes (RANS) and large-eddy simulation (LES) approaches. In the Reynolds-averaged mode, the model is based on the Spalart-Allmaras (S-A) turbulence model; in the large eddy simulation mode, it is based on the Smagorinsky subgrid-scale model. Accurate predictions of the base flowfield and base pressure are achieved using the DES methodology at less computational cost than pure LES and monotone integrated large-eddy simulation (MILES) approaches. The DES accurately resolves the physics of unsteady turbulent motions, such as shear layer rollup, large-eddy motions in the downstream region, and small-eddy motions inside the recirculating region. Comparison of the results shows that the approaching boundary layers and the free shear-layer velocity profiles from the base edge must be resolved correctly for accurate prediction of base flows. The analysis of the empirical constant CDES for compressible flow suggests that its optimal value may be larger in flows with strong compressibility than in incompressible flows.
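The Smagorinsky subgrid-scale model named in this abstract supplies the unresolved stresses through an eddy viscosity nu_t = (Cs * Delta)**2 * |S| built from the resolved strain rate. A minimal sketch of that formula — the constant Cs = 0.17 and the pure-shear test gradient below are illustrative assumptions, not values from the paper:

```python
import math

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """Eddy viscosity nu_t = (cs * delta)**2 * |S|, where
    S = 0.5 * (grad_u + grad_u^T) is the resolved strain-rate tensor
    and |S| = sqrt(2 * sum_ij S_ij**2)."""
    n = len(grad_u)
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(n)]
         for i in range(n)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(n) for j in range(n)))
    return (cs * delta) ** 2 * s_mag

# Pure shear du/dy = 1 on a filter width delta = 1 gives |S| = 1,
# so nu_t = 0.17**2 = 0.0289.
nu_t = smagorinsky_nu_t([[0.0, 1.0], [0.0, 0.0]], delta=1.0)
```

In an actual LES code the filter width Delta is tied to the local grid spacing, which is why the grid-resolution studies discussed in the abstracts below matter.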
### Analysis of Compound Open Channel Flow Using Large Eddy Simulation (LES) (Large Eddy Simulation (LES)을 이용한 복단면 개수로 흐름 분석)
• Lee, Du Han
• Ecology and Resilient Infrastructure
• /
• v.4 no.1
• /
• pp.54-62
• /
• 2017
• This study investigated compound open channel flow using OpenFOAM Large Eddy Simulation (LES). Large eddy simulations were carried out by solving the filtered continuity and momentum equations numerically. One equation LES and non-uniform grid were applied to capture the anisotropic turbulence and secondary flow near the wall. The results of large eddy simulations of turbulent flow in a compound open channel with deep and shallow flood plain depths are presented. These LESs are validated with experimental data, resulting in a good agreement between measured and calculated data. The role of anisotropic turbulence in generating secondary currents is illustrated.
### COARSE GRID LARGE-EDDY SIMULATION OF FLOW OVER A HEAVY VEHICLE (화물차 주위 유동의 성긴 격자 큰에디모사)
• Lee, S.;Kim, M.;You, D.;Kim, J.J.;Lee, S.J.
• Journal of computational fluids engineering
• /
• v.21 no.1
• /
• pp.30-35
• /
• 2016
• In order to investigate the effects of grid resolution on large-eddy simulation of flow over a heavy vehicle, large-eddy simulations over the vehicle are conducted with both a coarse grid and a fine grid. In addition, the drag coefficients are compared with experimental data obtained from a wind tunnel experiment. The drag coefficients from both the coarse-grid and fine-grid large-eddy simulations show good agreement with the experimental data. Flow fields obtained from the coarse-grid and fine-grid simulations are compared in the vehicle frontal-face region, the rear-wheel region, and the base region. The coarse-grid simulation agrees well with the fine-grid simulation in the frontal-face and rear-wheel regions, since the flow over the present vehicle is dominated by flow separation, which is geometrically pre-determined, rather than by skin friction, which is known to be sensitive to grid resolution.
### Flow Analysis in the Tip Clearance of Axial Flow Rotor Using Finite-Element Large-Eddy Simulation Method (유한요소 LES법에 의한 축류 회전차 팁 틈새의 유동해석)
• Lee, Myeong-Ho
• Journal of Advanced Marine Engineering and Technology
• /
• v.33 no.5
• /
• pp.686-695
• /
• 2009
• Flow characteristics in a linear axial cascade have been studied using large eddy simulation (LES) based on the finite element method (FEM) to investigate details of the leakage flow in the tip clearance of an axial flow rotor. STAR-CD (FVM) and PAT-Flow (FEM) have been adopted to solve the Navier-Stokes equations for the simulation of the unsteady turbulent flow. Numerical results from the present study have been compared with existing experimental results to investigate the tip clearance effect on the velocity profile and the static pressure distribution on the blade surface at various spanwise positions. Both simulation results agree well with the experimental data. However, the finite-element large-eddy simulation agrees better with the experimental data than the $k$-$\varepsilon$ turbulence model based on the finite volume method regarding the tip vortex geometry and the static pressure distribution at the center of the tip vortex core. As a result of this study, it is shown that the finite-element large-eddy simulation method can predict the tip leakage vortex flow and the downstream flow field more accurately.
### Visualization of Unsteady Fluid Flows by Using Large Eddy Simulation
• Journal of Mechanical Science and Technology
• /
• v.15 no.12
• /
• pp.1750-1756
• /
• 2001
• Three-dimensional and unsteady flow analysis is a practical target of high performance computation. With recent advances in computers, numerical predictions by large eddy simulation (LES) have been introduced and evaluated for various engineering problems. Advanced LES methods for complex turbulent flows are discussed through several examples applied to aerodynamic design, the analysis of fluid flow mechanisms, and their interaction with complex phenomena. The resulting time-dependent and three-dimensional phenomena are visualized by interactive graphics and animations.
### Large Eddy Simulation of Swirling Turbulent Flows in a Annular Combustor (환형연소기의 스월난류유동장에 대한 Large Eddy Simulation)
• Kim, Jong-Chan;Sung, Hong-Gye;Cha, Bong-Jun;Yang, Gye-Byeung
• Proceedings of the Korean Society of Propulsion Engineers Conference
• /
• /
• pp.67-70
• /
• 2008
• The production and dissipation of turbulent structures in a swirl-stabilized combustor were investigated using a three-dimensional Large Eddy Simulation analysis. The combustor of concern is the LM6000, a lean premixed dry low-NOx annular combustor developed by GEAE. The inlet condition was based on experimental data. Strong vortex breakdown in the main stream, vortex rings proceeding downstream, and periodically oscillating turbulent structures have been observed. Reasonable agreement was obtained by comparing the results with experiments and previous LES studies.
### On the Spectral Eddy Viscosity in Isotropic Turbulence
• Park Noma;Yoo Jung Yu;Choi Haecheon
• 한국전산유체공학회:학술대회논문집
• /
• /
• pp.105-106
• /
• 2003
• The spectral eddy viscosity model is investigated through large eddy simulation of decaying and forced isotropic turbulence. It is shown that the widely accepted 'plateau and cusp' model overpredicts the resolved kinetic energy due to the amplification of energy at intermediate wavenumbers, whereas the simple plateau model reproduces the correct energy spectrum. This result overshadows a priori tests based on filtered DNS or experimental data. An alternative method for the validation of subgrid-scale models is discussed.
### Dynamic Large Eddy Simulation of the Vortex Breakdown of Swirling Flow using MPI Parallel Technique (Dynamic Large Eddy Simulation과 MPI병렬 계산 기법을 이용한 스월 유동에서의 Vortex Breakdown에 관한 연구)
• Sung Hong Gye
• Journal of computational fluids engineering
• /
• v.6 no.1
• /
• pp.31-39
• /
• 2001
• The vortex breakdown mechanism of swirling flow injected into a combustion chamber was studied. A three-dimensional finite volume method with Runge-Kutta time integration was applied, and dynamic large eddy simulation (DLES) was adopted as the turbulence model. A message passing interface (MPI) parallel computation technique was applied for computational efficiency and effective use of memory. The vortex breakdown behaviour in the swirling turbulent flow was captured visually, an important result supporting the experimental evidence for the increase in turbulent stresses, turbulence production/dissipation rates, and mixing rates due to swirl. The computed mean velocity and turbulent kinetic energy were also compared with experimental results.
### Large Eddy Simulation of a Lifted Methane/Air Flame using FGM-based Multi-Environment PDF Approach (FGM기반 Multi-Environment PDF 모델을 이용한 메탄/공기 부상화염장의 Large Eddy Simulation)
• Kim, Namsu;Kim, Jaehyun;Kim, Yongmo
• 한국연소학회:학술대회논문집
• /
• /
• pp.265-266
• /
• 2015
• The multi-environment PDF model coupled with flamelet generated manifolds (FGM) has been developed for large eddy simulation of a turbulent partially premixed lifted flame. This approach can realistically account for the transport and evolution of the probability density function of mixture fraction and progress variable with a manageable computational burden. Using the tabulated chemistry, it is possible to track radical distributions, which is important for predicting the autoignition process in the vitiated coflow environment. Numerical results indicate that the present approach yields good agreement with experimental data in terms of mixture fraction, temperature, and species mass fractions. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837611973285675, "perplexity": 6590.316358114325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.96/warc/CC-MAIN-20210507025943-20210507055943-00582.warc.gz"} |
http://troelschristensen.dk/tandfonline-com-interpretations-of-boxplots-helping-middle-school-students-to-think-outside-the-box/ | # tandfonline.com – Interpretations of Boxplots: Helping Middle School Students to Think Outside the Box
tandfonline.com has published a report under the search "Teacher Education Mathematics":
ABSTRACT
Boxplots are statistical representations for organizing and displaying data that are relatively easy to create with a five-number summary. However, boxplots are not as easy to understand, interpret, or connect with other statistical representations of the same data. We worked at two different schools with 259 middle school students who constructed and interpreted boxplots. We observed that even students who were able to create boxplots had difficulty interpreting data represented in a boxplot. After sharing specific difficulties that we observed students having, we discuss ways to help students to make sense of data presented in boxplots. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612304329872131, "perplexity": 2378.5765252678425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00678.warc.gz"} |
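The five-number summary the abstract mentions (minimum, lower quartile, median, upper quartile, maximum) can be computed directly. A small sketch — the data values are made up, and note that quartile conventions differ between textbooks and software:

```python
import statistics

def five_number_summary(data):
    """Return (min, Q1, median, Q3, max); quartiles use the
    'exclusive' convention of statistics.quantiles."""
    xs = sorted(data)
    q1, med, q3 = statistics.quantiles(xs, n=4)  # default method='exclusive'
    return min(xs), q1, med, q3, max(xs)

# Hypothetical data set; other quartile conventions give slightly
# different Q1/Q3 values.
summary = five_number_summary([7, 15, 36, 39, 40, 41])
# summary == (7, 13.0, 37.5, 40.25, 41)
```

The box spans Q1 to Q3 with a line at the median; the whiskers (in the simplest convention) reach the minimum and maximum.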
https://dantopology.wordpress.com/2017/06/11/looking-for-spaces-in-which-every-compact-subspace-is-metrizable/ | # Looking for spaces in which every compact subspace is metrizable
Once it is known that a topological space is not metrizable, it is natural to ask, from a metrizability standpoint, which subspaces are metrizable, e.g. whether every compact subspace is metrizable. This post discusses several classes of spaces in which every compact subspace is metrizable. Though the goal here is not to find a complete characterization of such spaces, this post discusses several classes of spaces and various examples that have this property. The effort brings together many interesting basic and well known facts. Thus the notion “every compact subspace is metrizable” is an excellent learning opportunity.
Several Classes of Spaces
The notion “every compact subspace is metrizable” is a very broad class of spaces. It includes well known spaces such as Sorgenfrey line, Michael line and the first uncountable ordinal $\omega_1$ (with the order topology) as well as Moore spaces. Certain function spaces are in the class “every compact subspace is metrizable”. The following diagram is a good organizing framework.
\displaystyle \begin{aligned} &1. \ \text{Metrizable} \\&\ \ \ \ \ \ \ \ \ \Downarrow \\&2. \ \text{Submetrizable} \Longleftarrow 5. \ \exists \ \text{countable network} \\&\ \ \ \ \ \ \ \ \ \Downarrow \\&3. \ \exists \ G_\delta \text{ diagonal} \\&\ \ \ \ \ \ \ \ \ \Downarrow \\&4. \ \text{Every compact subspace is metrizable} \end{aligned}
Let $(X, \tau)$ be a space. It is submetrizable if there is a topology $\tau_1$ on the set $X$ such that $\tau_1 \subset \tau$ and $(X, \tau_1)$ is a metrizable space. The topology $\tau_1$ is said to be weaker (coarser) than $\tau$. Thus a space $X$ is submetrizable if it has a weaker metrizable topology.
Let $\mathcal{N}$ be a set of subsets of the space $X$. $\mathcal{N}$ is said to be a network for $X$ if for every open subset $O$ of $X$ and for each $x \in O$, there exists $N \in \mathcal{N}$ such that $x \in N \subset O$. Having a network that is countable in size is a strong property (see here for a discussion on spaces with a countable network).
The diagonal of the space $X$ is the subset $\Delta=\left\{(x,x): x \in X \right\}$ of the square $X \times X$. The space $X$ has a $G_\delta$-diagonal if $\Delta$ is a $G_\delta$-subset of $X \times X$, i.e. $\Delta$ is the intersection of countably many open subsets of $X \times X$.
The implication $1 \Longrightarrow 2$ is clear. For $5 \Longrightarrow 2$, see Lemma 1 in this previous post on countable network. The implication $2 \Longrightarrow 3$ is left as an exercise. To see $3 \Longrightarrow 4$, let $K$ be a compact subset of $X$. The property of having a $G_\delta$-diagonal is hereditary. Thus $K$ has a $G_\delta$-diagonal. According to a well known result, any compact space with a $G_\delta$-diagonal is metrizable (see here).
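For readers who want the exercise $2 \Longrightarrow 3$ spelled out, one standard argument runs as follows (a sketch, using only the definitions above):

```latex
% Sketch of 2 => 3: submetrizable implies G_delta diagonal.
\begin{proof}[Sketch]
Let $\tau_1 \subset \tau$ with $(X,\tau_1)$ metrizable, say by a metric $d$.
For each $n \ge 1$, let
\[ U_n = \{ (x,y) \in X \times X : d(x,y) < 1/n \}. \]
Each $U_n$ is open in $(X,\tau_1) \times (X,\tau_1)$, hence open in
$(X,\tau) \times (X,\tau)$. Since $d$ is a metric,
$\Delta = \bigcap_{n \ge 1} U_n$, so $\Delta$ is a $G_\delta$-subset of
$X \times X$.
\end{proof}
```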
None of the implications in the diagram is reversible. The first uncountable ordinal $\omega_1$ is an example for $4 \not \Longrightarrow 3$. This follows from the well known result that any countably compact space with a $G_\delta$-diagonal is metrizable (see here). The Mrowka space is an example for $3 \not \Longrightarrow 2$ (see here). The Sorgenfrey line is an example for both $2 \not \Longrightarrow 5$ and $2 \not \Longrightarrow 1$.
To see where the examples mentioned earlier are placed, note that Sorgenfrey line and Michael line are submetrizable, both are submetrizable by the usual Euclidean topology on the real line. Each compact subspace of the space $\omega_1$ is countable and is thus contained in some initial segment $[0,\alpha]$ which is metrizable. Any Moore space has a $G_\delta$-diagonal. Thus compact subspaces of a Moore space are metrizable.
Function Spaces
We now look at some function spaces that are in the class “every compact subspace is metrizable.” For any Tychonoff space (completely regular space) $X$, $C_p(X)$ is the space of all continuous functions from $X$ into $\mathbb{R}$ with the pointwise convergence topology (see here for basic information on pointwise convergence topology).
Theorem 1
Suppose that $X$ is a separable space. Then every compact subspace of $C_p(X)$ is metrizable.
Proof
The proof here actually shows more than is stated in the theorem. We show that $C_p(X)$ is submetrizable by a separable metric topology. Let $Y$ be a countable dense subspace of $X$. Then $C_p(Y)$ is metrizable and separable since it is a subspace of the separable metric space $\mathbb{R}^{\omega}$. Thus $C_p(Y)$ has a countable base. Let $\mathcal{E}$ be a countable base for $C_p(Y)$.
Let $\pi:C_p(X) \longrightarrow C_p(Y)$ be the restriction map, i.e. for each $f \in C_p(X)$, $\pi(f)=f \upharpoonright Y$. Being a projection map, $\pi$ is continuous. It is also one-to-one: two continuous functions that agree on the dense set $Y$ must agree on all of $X$. Thus $\pi$ is a continuous injection from $C_p(X)$ into $C_p(Y)$. Let $\mathcal{B}=\left\{\pi^{-1}(E): E \in \mathcal{E} \right\}$.
We claim that $\mathcal{B}$ is a base for a topology on $C_p(X)$. Once this is established, the proof of the theorem is completed. Note that $\mathcal{B}$ is countable and elements of $\mathcal{B}$ are open subsets of $C_p(X)$. Thus the topology generated by $\mathcal{B}$ is coarser than the original topology of $C_p(X)$.
For $\mathcal{B}$ to be a base, two conditions must be satisfied – $\mathcal{B}$ is a cover of $C_p(X)$ and for $B_1,B_2 \in \mathcal{B}$, and for $f \in B_1 \cap B_2$, there exists $B_3 \in \mathcal{B}$ such that $f \in B_3 \subset B_1 \cap B_2$. Since $\mathcal{E}$ is a base for $C_p(Y)$ and since elements of $\mathcal{B}$ are preimages of elements of $\mathcal{E}$ under the map $\pi$, it is straightforward to verify these two points. $\square$
Theorem 1 is actually a special case of a duality result in $C_p$ function space theory. More about this point later. First, consider a corollary of Theorem 1.
Corollary 2
Let $X=\prod_{\alpha<c} X_\alpha$ where $c$ is the cardinality of the continuum and each $X_\alpha$ is a separable space. Then every compact subspace of $C_p(X)$ is metrizable.
The key fact for Corollary 2 is that the product of continuum many separable spaces is separable (this fact is discussed here). Theorem 1 is actually a special case of a deep result.
Theorem 3
Suppose that $X=\prod_{\alpha<\kappa} X_\alpha$ is a product of separable spaces where $\kappa$ is any infinite cardinal. Then every compact subspace of $C_p(X)$ is metrizable.
Theorem 3 is a much more general result. The product of any arbitrary number of separable spaces is not separable if the number of factors is greater than continuum. So the proof for Theorem 1 will not work in the general case. This result is Problem 307 in [2].
A Duality Result
Theorem 1 is stated in a way that gives the right information for the purpose at hand. A more correct statement of Theorem 1 is: $X$ is separable if and only if $C_p(X)$ is submetrizable by a separable metric topology. Of course, the result in the literature is based on density and weak weight.
The cardinal function of density is the least cardinality of a dense subspace. For any space $Y$, the weight of $Y$, denoted by $w(Y)$, is the least cardinality of a base of $Y$. The weak weight of a space $X$ is the least $w(Y)$ over all spaces $Y$ for which there is a continuous bijection from $X$ onto $Y$. Thus if the weak weight of $X$ is $\omega$, then there is a continuous bijection from $X$ onto some separable metric space; hence $X$ has a weaker separable metric topology.
There is a duality result between density and weak weight for $X$ and $C_p(X)$. The duality result:
The density of $X$ coincides with the weak weight of $C_p(X)$ and the weak weight of $X$ coincides with the density of $C_p(X)$. These are elementary results in $C_p$-theory. See Theorem I.1.4 and Theorem I.1.5 in [1].
References
1. Arkhangelskii, A. V., Topological Function Spaces, Mathematics and Its Applications Series, Kluwer Academic Publishers, Dordrecht, 1992.
2. Tkachuk, V. V., A $C_p$-Theory Problem Book, Topological and Function Spaces, Springer, New York, 2011.
$\copyright$ 2017 – Dan Ma
https://www.wisdomjobs.com/e-university/yii-tutorial-1288/yii-query-builder-17200.html | # Yii Query Builder - Yii
## What is Yii Query Builder?
The query builder facilitates the creation of SQL queries in a programmatic way and helps in writing more readable SQL-related code.
## How to use Yii Query Builder?
The Yii query builder is used in two steps:
• Build a yii\db\Query object.
• Execute a query method.
To build a yii\db\Query object, different query builder methods are called to define different parts of an SQL query.
Step 1 − To show a typical usage of the query builder, the actionTestDb method is modified by the code:
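The original code listing is not reproduced here; a minimal sketch of what such an actionTestDb method might look like (the user table and its columns are assumptions):

```php
public function actionTestDb()
{
    // Build a query object and define its parts step by step
    $rows = (new \yii\db\Query())
        ->select(['id', 'name', 'email'])
        ->from('user')
        ->all();  // execute the query and fetch all rows
    var_dump($rows);
}
```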
Step 2 − Visit http://localhost:8080/index.php?r=site/test-db to see the query results.
### where() Function
The WHERE fragment of the query is defined by the where() function. The three different formats used to specify a WHERE condition are:
• string format − 'name = User10'
• hash format − ['name' => 'User10', 'email' => 'user10@gmail.com']
• operator format − ['like', 'name', 'User']
Example of string format:
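A sketch of a string-format condition (the table name and value are assumptions, since the original listing and output screenshots are not reproduced here):

```php
$rows = (new \yii\db\Query())
    ->from('user')
    ->where("name = 'User10'")  // raw SQL fragment passed as a string
    ->all();
```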
Example of hash format:
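A sketch of a hash-format condition (table and column names are assumptions):

```php
$rows = (new \yii\db\Query())
    ->from('user')
    // column => value pairs are joined with AND and safely parameterized
    ->where(['name' => 'User10', 'email' => 'user10@gmail.com'])
    ->all();
```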
In the operator format, a condition is defined as [operator, operand1, operand2, ...].
The operator can be one of the following:
• and − ['and', 'id = 1', 'id = 2'] will generate id = 1 AND id = 2
• or − similar to the and operator, but the operands are joined with OR
• between − ['between', 'id', 1, 15] will generate id BETWEEN 1 AND 15
• not between − similar to the between operator, but BETWEEN is replaced with NOT BETWEEN
• in − ['in', 'id', [5,10,15]] will generate id IN (5,10,15)
• not in − similar to the in operator, but IN is replaced with NOT IN
• like − ['like', 'name', 'user'] will generate name LIKE '%user%'
• or like − similar to the like operator, but OR is used to split the LIKE predicates
• not like − similar to the like operator, but LIKE is replaced with NOT LIKE
• or not like − similar to the not like operator, but OR is used to concatenate the NOT LIKE predicates
• exists − requires one operand which must be an instance of the yii\db\Query class
• not exists − similar to the exists operator, but builds a NOT EXISTS (subquery) expression
• <, <=, >, >=, or any other DB operator − ['<', 'id', 10] will generate id < 10
Example of operator format:
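A sketch of an operator-format condition (table name is an assumption):

```php
$rows = (new \yii\db\Query())
    ->from('user')
    ->where(['between', 'id', 1, 15])  // generates id BETWEEN 1 AND 15
    ->all();
```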
### orderBy() Function
The ORDER fragment is defined by the orderBy() function.
For instance:
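A sketch of orderBy() (table and column names are assumptions):

```php
$rows = (new \yii\db\Query())
    ->from('user')
    ->orderBy(['name' => SORT_ASC, 'id' => SORT_DESC])  // ORDER BY name ASC, id DESC
    ->all();
```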
### groupBy() Function
The GROUP BY fragment is defined by the groupBy() function and the HAVING fragment by the having() method.
For instance:
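A sketch combining groupBy() and having() (table and column names are assumptions):

```php
$rows = (new \yii\db\Query())
    ->select(['name', 'COUNT(*) AS cnt'])
    ->from('user')
    ->groupBy('name')          // GROUP BY name
    ->having(['>', 'cnt', 1])  // HAVING cnt > 1
    ->all();
```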
### limit() and offset() Functions
LIMIT and OFFSET fragments are defined by the limit() and offset() methods.
For instance:
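A sketch of limit() and offset() (table name is an assumption):

```php
$rows = (new \yii\db\Query())
    ->from('user')
    ->limit(10)   // LIMIT 10
    ->offset(20)  // OFFSET 20
    ->all();
```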
## What are the different methods provided by Yii Query?
The yii\db\Query class provides a set of methods for different purposes:
• all() − Returns an array of rows of name-value pairs.
• one() − Returns the first row.
• column() − Returns the first column.
• scalar() − Returns a scalar value from the first row and first column of the result.
• exists() − Returns a value indicating whether the query contains any result.
• count() − Returns the result of a COUNT query.
• other aggregation query methods − Includes sum($q), average($q), max($q), min($q). The $q parameter can be either a column name or a DB expression.
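A sketch of how a few of these methods might be used (table and column names are assumptions):

```php
$query = (new \yii\db\Query())->from('user');

$count = $query->count();                   // number of matching rows
$first = $query->one();                     // first row as an associative array
$names = $query->select('name')->column();  // values from the name column
$maxId = $query->max('id');                 // largest value in the id column
```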
https://matthewmumpower.com/publications/paper/2012/mumpower/the-rare-earth-peak-an-overlooked-r-process-diagnostic | ## The rare earth peak: an overlooked $r$-process diagnostic
### M. Mumpower, G. C. McLaughlin, R. Surman
Published ApJ 752, 117 (2012)
The astrophysical site or sites responsible for the $r$-process of nucleosynthesis still remains an enigma. Since the rare earth region is formed in the latter stages of the $r$-process it provides a unique probe of the astrophysical conditions during which the $r$-process takes place. We use features of a successful rare earth region in the context of a high entropy $r$-process ($S\gtrsim100k_B$) and discuss the types of astrophysical conditions that produce abundance patterns that best match meteoritic and observational data. Despite uncertainties in nuclear physics input, this method effectively constrains astrophysical conditions.
## Mail
Matthew Mumpower
Los Alamos National Lab
MS B283
TA-3 Bldg 123
Los Alamos, NM 87545
https://stultus.in/post/malayalam-page-numbers-using-xelatex/ | # Malayalam Page Numbers in XeLatex
Recently I had an opportunity to be part of an effort to enable SCERT (the government organization responsible for the content, curriculum and textbooks used in the schools of the state) to use Unicode technologies for textbook publishing.
As part of this effort, we tried to typeset Malayalam textbooks for 5th, 7th and 11th standards using xelatex.
I wrote the following macro to automatically generate page numbers using Malayalam numerals:
```latex
%%%-----------Malayalam Page Number---------------%%%%
\makeatletter
\def\@malnumber#1{\expandafter\@@malnumber\number#1\@nil}
\def\@@malnumber#1{%
  \ifx#1\@nil
  \else
    \char\numexpr#1+"0D66\relax
    \expandafter\@@malnumber\fi}
\def\malcounter#1{\expandafter\@malnumber\csname c@#1\endcsname}
\def\malnumeral#1{\@@malnumber#1\@nil}
\makeatother
\def\MalpageNum{\malcounter{page}}
```
The \MalpageNum command will provide the current page number using the Malayalam numerals.
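As a usage sketch, the command can be wired into the page footer; the fancyhdr package here is my assumption, since the original post does not show how the command was hooked into the page style:

```latex
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}          % clear the default header and footer fields
\cfoot{\MalpageNum} % centred footer: page number in Malayalam numerals
```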
Happy Hacking!!
https://gis.stackexchange.com/questions/274358/missing-qgis2-folder | # Missing .qgis2 folder
I uninstalled QGIS 2.18.16 (OSGeo4W), deleting all related folders. When I installed QGIS 3.0.0 (OSGeo4W), some folders were missing. For example, one of the crucial missing folders was .qgis2 (or perhaps it is now supposed to be .qgis3). I tried reinstalling several times with the same result. Does somebody know how to fix the installation?
You can find the active settings directory with `QgsApplication.qgisSettingsDirPath()`.
• For Windows 7, it seems to be `C:\Users\<username>\AppData\Roaming\QGIS\QGIS3\profiles\default`. Mar 11 '18 at 17:55
`Settings > User Profiles > Open active profile folder`
https://easychair.org/publications/paper/L4Kd | Download PDFOpen PDF in browser
Utilization and Validation of Hydraulic Formula to Optimize Pipeline Diameter in Waterworks: Downsizing of Water Facilities to Prepare for Decrease in Water Demand due to Population Decline
7 pages. Published: September 20, 2018
Abstract
In order to optimize and downsize pipeline diameter to prepare for water demand decrease in the future, we conducted validation to apply the Hazen-Williams formula to existing pipeline. We focused on the flow velocity coefficient (hereafter referred to as, “C”) and validated it through a pipeline network simulation and field experiments. As a result, the present value for C that is uniformly adopted in Japan should be modified for existing pipeline. Furthermore, variance in C due to the differences between the inner linings of pipeline was verified. We evaluated the effectiveness of downsizing of pipeline diameter with the result of this study, and we confirmed that this study contributes to optimizing and downsizing pipeline diameter.
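For context, the Hazen-Williams relation the study builds on can be written as a head-loss formula in SI units; a small sketch (the pipe dimensions below are illustrative, not taken from the paper):

```python
def hazen_williams_head_loss(L, Q, C, d):
    """Head loss (m) over a pipe of length L (m), flow Q (m^3/s),
    flow velocity coefficient C, and inner diameter d (m), SI form."""
    return 10.67 * L * Q**1.852 / (C**1.852 * d**4.8704)

# A higher C (smoother inner lining) reduces head loss, which is why the
# value adopted for C is critical when downsizing pipeline diameters.
loss_rough = hazen_williams_head_loss(L=1000, Q=0.05, C=100, d=0.2)
loss_smooth = hazen_williams_head_loss(L=1000, Q=0.05, C=130, d=0.2)
print(loss_rough > loss_smooth)  # smoother pipe loses less head
```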
Keyphrases: Downsizing, Flow velocity coefficient, Hazen-Williams formula, Head loss
In: Goffredo La Loggia, Gabriele Freni, Valeria Puleo and Mauro De Marchis (editors). HIC 2018. 13th International Conference on Hydroinformatics, vol 3, pages 1955--1961
http://mathoverflow.net/users/27196/user27196?tab=activity | # user27196
reputation 2 · member for 2 years, 6 months · last seen Jun 24 '13 at 6:26 · profile views 15
# 6 Actions
• Oct 25: awarded Supporter
• Oct 14: comment on "maximal order of elements in GL(n,p)": "I have meanwhile found the paper: Ivan Niven, Fermat theorem for matrices, Duke Math. J. 15 (1948), 823-826, which gives an elementary and explicit description of the possible orders of elements in GL(n,q), where q is a prime power. Thanks again."
• Oct 13: awarded Student
• Oct 13: comment on "maximal order of elements in GL(n,p)": "Thank you very much again - this was very helpful."
• Oct 13: comment on "maximal order of elements in GL(n,p)": "Thank you very much for the elegant answer! A related question: what is the maximal p-power which is the order of an element of GL(n,p)?"
• Oct 12: asked "maximal order of elements in GL(n,p)"
https://science4performance.com/tag/cycling/ | ## Milan Sanremo in a Random Forest
Last time I tried to predict a race, I trained up a neural network on past race results, ahead of the World Championships in Harrogate. The model backed Sam Bennett, but it did not take account of the weather conditions, which turned out to be terrible. Fortunately the forecast looks good for tomorrow’s Milan Sanremo.
This time I have tried using a Random Forest, based on the results of the UCI races that took place in 2020 and so far in 2021. The model took account of each rider’s past results, team, height and weight, together with key statistics about each race, including date, distance, average speed and type of parcours.
One of the nice things about this type of model is that it is possible to see how the factors contribute to the overall predictions. The following waterfall chart explains why the model uncontroversially has Wout van Aert as the favourite.
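The waterfall decomposition works because each prediction can be written as a baseline plus a sum of per-feature contributions (as in the treeinterpreter approach); a minimal sketch with illustrative numbers, not the model's actual values:

```python
# Illustrative waterfall: baseline prediction plus per-feature contributions.
baseline = 0.05  # assumed average predicted score across all riders
contributions = {
    "rider (past results)": 0.20,
    "team":                 0.06,
    "height/weight":        0.03,
    "race distance":        0.02,
    "parcours type":        0.01,
}
# The contributions stack up, bar by bar, to the final prediction
prediction = baseline + sum(contributions.values())
print(round(prediction, 2))  # prints 0.37
```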
The largest positive contribution comes from being Wout van Aert. This is because he has a lot of good results. His height and weight favour Milan Sanremo. He also has a strong positive contribution coming from his team. The distance and race type make further positive contributions.
We can contrast this with the model’s prediction for Mathieu van der Poel, who is ranked 9th.
We see a positive personal contribution from being van der Poel, but having raced fewer UCI events, he has a less strong set of results than van Aert. According to the model, the Alpecin Fenix team contribution is not as strong as Jumbo Visma's, but the long distance of the race works in favour of the Dutchman. The day of year gives a small negative contribution, suggesting that his road results have been stronger later in the year, though this could be due to last year's unusual timing of races.
Each of the other riders in the model’s top 10 is in with a shout.
It’s taken me all afternoon to set up this model, so this is just a short post.
## Post race comment
### Where was Jasper Stuyven?
Like Mads Pedersen in Harrogate back in 2019, Jasper Stuyven was this year’s surprise winner in Sanremo. So what had the model expected for him? Scrolling down the list of predictions, Stuyven was ranked 39th.
His individual rider prediction was negative, perhaps because he has not had many good results so far this year, though he did win Omloop Het Nieuwsblad last year and had several top 10 finishes. The model assessed that his greatest advantage came from the length of the race, suggesting that he tends to do well over greater distances.
The nice thing about this approach is that it identifies factors that are relevant to particular riders, in a quantitative fashion. This helps to overcome personal biases and the human tendency to overweight and project forward what has happened most recently.
## Pro cycling team networks
The COVID-19 pandemic has further exposed the weakness of the professional cycling business model. The competition between the teams for funding from a limited number of sponsors undermines the stability of the profession. With marketing budgets under strain, more teams are likely to face difficulties, in spite of the great advertising and publicity that the sport provides. Douglas Ryder is fighting an uphill struggle trying to keep his team alive after the withdrawal of NTT as a lead sponsor. One aspect of stability is financial, but another measure is the level of transfers between teams.
The composition of some teams is more stable than others. This is illustrated by analysing the history of riders’ careers, which is available on ProCyclingStats. The following chart is a network of the transfers between teams in the last year, where the yellow nodes are 2020 teams and the purple ones are 2019. The width of the edges indicates how many riders transferred between the teams, with the thick green lines representing the bulk of the riders who stuck with the same team. The blue labels give the initials of the official name of each team, such as M-S (Mitchelton-Scott), MT (Movistar Team), T-S (Trek-Segafredo) and TS (Team Sunweb). Riders who switched teams are labelled in red.
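The transfer network can be built by counting, for each rider, the (old team, new team) pair; a minimal sketch with made-up rider histories (the real analysis scraped ProCyclingStats):

```python
from collections import Counter

# rider -> (2019 team, 2020 team); entries are illustrative, not real data
moves = {
    "Rider A": ("Team X 2019", "Team X 2020"),
    "Rider B": ("Team X 2019", "Team Y 2020"),
    "Rider C": ("Team X 2019", "Team X 2020"),
}

# Edge weights = number of riders along each old->new link; heavy
# "same team" edges correspond to the thick green lines in the chart.
edges = Counter(moves.values())
print(edges[("Team X 2019", "Team X 2020")])  # prints 2
```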
Although there is a Dutch/German grouping on the lower right, the main structure is from the outside towards the centre of the network.
The spikes around the edge of the chart show riders like Geoffrey Soupe or Rubén Fernández, who stepped down to smaller non World Tour teams like Team Total Direct Energie (TTDE), Nippo Delko One Provence (NNDP), Euskaltel-Euskadi (E-E), Androni Giocattoli-Sidermec (AG-S ) or U-XPCT (Uno-X Pro Cycling Team).
The two World Tour outliers were Mitchelton-Scott (M-S) and Groupama FDJ (GF), who retained virtually all their riders from 2019. Moving closer in, a group of teams lies around the edge of the central mass, where a few transfers occurred. Moving anti-clockwise we see CCC Team (CT), Astana Pro Team (APT), Trek-Segafredo (T-S), AG2R Le Mondial (ALM), Circus-Wanty Gobert (C-WG), Team Jumbo Visma (TJV), Bora-Hansgrohe (B-H) and EF Pro Cycling (EPC).
Deeper in the mêlée, Ineos (TI_19/IG_20), Deceuninck – Quick Step (D-QS), UAE-Team Emirates (U-TE), Lotto Soudal (LS), Bahrain – McLaren (B-H) and Movistar Team(MT) exchanged a number of riders.
Right in the centre, Israel Start-Up Nation (IS-UN) grabbed a whole lot of riders, including 7 from Team Arkéa Samsic (TAS). Meanwhile, the likes of Victor Campenaerts and Domenico Pozzovivo are probably regretting joining NTT Pro Cycling (TDD_19/NPC_20).
## Looking forward
A few of the top riders have contracts for next year showing up on ProCyclingStats. So far 2020/2021 looks like the network below. Many riders are renewing with their existing teams, indicated by the broad green lines. But some big names are changing teams, including Chris Froome, Richie Porte, Laurens De Plus, Sam Oomen, Romain Bardet and Wilco Keldeman, Bob Jungels and Lilian Calmejane.
## What about networks of riders?
My original thought when starting this analysis was that over their careers, certain riders must have been team mates with most of the riders in today’s peloton, so who is the most connected? Unfortunately this turned out to be ridiculously complicated, as shown in the image below, where nodes are riders with links if they were ever teammates and the colours represent the current teams. The highest ranked rider in each team is shown in red.
It is hard to make much sense of this, other than to note that those with shorter careers in the same team are near the edge and that Philippe Gilbert is close to the centre. Out of interest, the rider around 9 o’clock linking Bora and Jumbo Visma is Christoph Pfingsten, who moved this year. At least we can conclude that professional cyclists are well-connected.
## Time to be aerodynamic
The Covid-19 epidemic provided a huge boost to the Zwift streaming service. Confined by a global lockdown, cyclists freed themselves from the boredom of pedalling on a static turbo trainer by logging into one of a broadening range of online virtual worlds. Zwift racing has become particularly popular. While it is relatively straightforward to simulate variations in gradient and even the effects of drafting, it is not possible for riders to demonstrate superior bike handling skills. Nor can racers benefit from adopting a superior aerodynamic position on the bike, in fact this may prove to be a disadvantage.
Setting aside e-doping suspicions, such as riders understating their weights, in the artificial world of a Zwift race, the outcome largely comes down to the ability to sustain a high level of power (watts per kilo). The engagingly competitive nature of simulated races encourages everyone to push their limits. However, since Zwift offers no penalty against maintaining a non-aerodynamic body position on your trainer, it is quite possible that regular Zwifters might become habituated to riding in a position that is far from optimal for the road.
## Fresh aerodynamics
Once out in the fresh air again, many riders may have noticed improvements in the levels of power they are able to sustain, thanks to the high levels of exertion required to compete on Zwift. But in the real world, when it comes to beating other riders in a race or a time trial, the principal force a rider has to overcome is aerodynamic drag, not electromagnetic resistance.
Maximum speed is attained by adopting a riding position that provides the optimal tradeoff between the ability to generate power and a low level of aerodynamic drag. Drag depends on a rider’s CdA, which represents the drag coefficient multiplied by frontal area. Since power rises with the cube of velocity, there comes a point where it is better to compromise on power in order to reduce frontal area. This is the key to time trialing and successful breakaways.
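Since drag power scales with CdA and the cube of speed, a small sketch shows why a lower CdA buys real watts (the CdA values and sea-level air density below are typical assumptions, not measured figures):

```python
RHO = 1.225  # air density at sea level, kg/m^3

def drag_power(cda, v):
    """Aerodynamic power (W) needed to overcome drag at speed v (m/s)."""
    return 0.5 * RHO * cda * v**3

# At 40 km/h, dropping CdA from 0.32 to 0.25 saves a meaningful
# number of watts at the same speed
v = 40 / 3.6  # m/s
saving = drag_power(0.32, v) - drag_power(0.25, v)
print(round(saving))  # watts saved at the same speed
```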
When the race season begins, skilful and more aerodynamic racers will be able to benefit from drafting in the huge wind shadow created by Zwift diesels, while offering back much less assistance when they pull through. So after prolonged training on Zwift, racers and time trialists really need to focus on improving their aerodynamics.
There are various ways to reduce drag, starting withs some basics as described in an earlier blog. Post ride analysis can be performed using Golden Cheetah, BestBikeSplit or MyWindSock. There is also a range of devices that claim to offer real time measurement of CdA. These have been primarily targeted at the TT/triathlon market, but there’s no doubt that these could be incredibly useful for both training or even, perhaps, a race breakaway. Cycling Weekly recently reviewed the Notio device, but, while useful, these tools remain expensive and a bit clunky.
Whatever you choose to do, stay safe and stay aero.
## No drafting
In a fascinating white paper, Bert Blocken, Professor of Civil Engineering at Eindhoven University of Technology, comments on social distancing when applied to walking, running or cycling. His point is that the government recommendations to maintain a distance of 1.5 or 2 metres assume people are standing still indoors or outdoors in calm weather. However, when a person is moving, the majority of particulate droplets are swept along in a trailing slipstream.
Cyclists typically prefer to ride closely behind each other, in order to benefit from the aerodynamic drafting effect. Cycling is currently a permitted form of exercise in the UK, though only if riding alone or with members of your household. Nevertheless, there may be times when you find yourself catching up with a cyclist ahead. In this situation, you should avoid the habitual tendency to move up into the slipstream of the rider in front.
Professor Blocken’s team has performed computational fluid dynamics (CFD) simulations showing the likely spread of micro-droplets behind people moving at different speeds. As the cloud of particles, produced when someone coughs or sneezes, is swept into the slipstream, the heavier droplets, shown in red in the diagram above, fall faster. These are generally thought to be more considerably more contagious. You can see that they can land on the hands and body of the following athlete.
Based on the results, Blocken advises keeping a distance of at least four to five metres behind the leading person while walking in the slipstream, ten metres when running or cycling slowly, and at least twenty metres when cycling fast.
Social Distancing v2.0
The recommendation, for overtaking other cyclists, is to start moving into a staggered position some twenty metres behind the rider in front, consistently avoiding the slipstream as you pass.
The results will be reported in a forthcoming peer-reviewed publication. But given the importance of the topic, I recommend that you take a look at the highly accessible three page white paper available here.
## Bike Identification as a web app
One of the first skills acquired in the latest version of the fast.ai course on deep learning is how to create a production version of an image classifier that runs as a web application. I decided to test this out on a set of images of road bikes, TT bikes and mountain bikes. To try it out, click on the image above or go to this website https://bike-identifier.onrender.com/ and select an image from your device. If you are using a phone, you can try taking photos of different bikes, then click on Analyse to see if they are correctly identified. Side-on images work best.
### How does it work?
The first task was to collect some sample images for the three classes of bicycles I had chosen: road, TT and MTB. It turns out that there is a neat way to obtain the list of URLs for a Google image search, by running some JavaScript in the console. I downloaded 200 images for each type of bike and removed any that could not be opened. This relatively small data set allowed me to do all the machine learning using the CPU on my MacBook Pro in less than an hour.
The fast.ai library provides a range of convenient ways to access images for the purpose of training a neural network. In this instance, I used the default option of applying transfer learning to a pre-trained ResNet34 model, scaling the images to 224 pixel squares, with data augmentation. After doing some initial training, it was useful to look at the images that had been misclassified, as many of these were incorrect images of motorbikes or cartoons or bike frames without wheels or TT bars. Taking advantage of a useful fast.ai widget, I removed unhelpful training images and trained the model further.
The confusion matrix showed that final version of my model was running at about 90% accuracy on the validation set, which was hardly world-beating, but not too bad. The main problem was a tendency to mistake certain road bikes for TT bikes. This was understandable, given the tendency for road bikes to become more aero, though it was disappointing when drop handlebars were clearly visible.
The next step was to make my trained network available as a web application. First I exported the models parameter settings to Dropbox. Then I forked a fast.ai repository into my GitHub account and edited the files to link to my Dropbox, switching the documentation appropriately for bicycle identification. In the final step, I set up a free account on Render to host a web service linked to my GitHub repository. This automatically updates for any changes pushed to the repository.
Amazingly, it all works!
### References
fast.ai lesson 2
My GitHub repository, include Jupyter notebook
## Strava – Tour de Richmond Park Clockwise
Following my recent update on the Tour de Richmond Park leaderboard, a friend asked about the ideal weather conditions for a reverse lap, clockwise around the park. This is a less popular direction, because it involves turning right at each mini-roundabout, including Cancellara corner, where the great Swiss rouleur crashed in the 2012 London Olympics, costing him a chance of a medal.
An earlier analysis suggested that apart from choosing a warm day and avoiding traffic, the optimal wind direction for a conventional anticlockwise lap was a moderate easterly, offering a tailwind up Sawyers Hill. It does not immediately follow that a westerly wind would be best for a clockwise lap, because trees, buildings and the profile of the course affect the extent to which the wind helps or hinders a rider.
Currently there are over 280,000 clockwise laps recorded by nearly 35,000 riders, compared with more than a million anticlockwise laps by almost 55,000 riders. As before, I downloaded the top 1,000 entries from the leaderboard and then looked up the wind conditions when each time was set on a clockwise lap.
In the previous analysis, I took account of the prevailing wind direction in London. If wind had no impact, we would expect the distribution of wind directions for leaderboard entries to match the average distribution of winds over the year. I defined the wind direction advantage to be the difference between these two distributions and checked if it was statistically significant. These are the results for the clockwise lap.
The wind direction advantage was significant (at p=1.3%). Two directions stand out. A westerly provides a tailwind on the more exposed section of the park between Richmond Gate and Roehampton, which seems to be a help, even though it is largely downhill. A wind blowing from the NNW would be beneficial between Roehampton and Robin Hood Gate, but apparently does not provide much hindrance on the drag from Kingston Gate up to Richmond, perhaps because this section of the park is more sheltered. The prevailing southwesterly wind was generally unfavourable to riders setting PBs on a clockwise lap.
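The advantage measure can be sketched in a few lines of Python. The counts below are illustrative, not the actual leaderboard or weather data, and the eight-point compass is a simplification of the analysis in the notebook.

```python
from collections import Counter

directions = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

# Assumed annual baseline: fraction of the year the wind blows from each direction
annual = {"N": 0.08, "NE": 0.10, "E": 0.12, "SE": 0.10,
          "S": 0.12, "SW": 0.22, "W": 0.16, "NW": 0.10}

# Wind direction recorded for each top-1000 leaderboard entry (illustrative counts)
entries = (["W"] * 240 + ["NW"] * 150 + ["SW"] * 110 + ["E"] * 100 +
           ["N"] * 100 + ["NE"] * 100 + ["SE"] * 100 + ["S"] * 100)

counts = Counter(entries)
n = len(entries)

# Wind direction advantage: observed share minus annual baseline share
advantage = {d: counts[d] / n - annual[d] for d in directions}

# Chi-square statistic against the annual distribution, for a significance test
chi2 = sum((counts[d] - n * annual[d]) ** 2 / (n * annual[d]) for d in directions)
```

With these made-up counts, the westerly shows a positive advantage and the prevailing southwesterly a negative one, mirroring the pattern described above.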
The excellent mywindsock website provides detailed analysis for avid wind dopers. This confirms that the wind was blowing predominantly from the west for the top ten riders on the leaderboard, including the KOM, though the wind strength was generally light.
The interesting thing about this exercise is that it demonstrates a convergence between our online and our offline lives, as increasing volumes of data are uploaded from mobile sensors. A detailed analysis of each section of the million laps riders have recorded for Richmond Park could reveal many subtleties about how the wind flows across the terrain, depending on strength and direction. This could be extended across the country or globally, potentially identifying local areas where funnelling effects might make a wind turbine economically viable.
### References
Jupyter notebook for calculations
## Can self-driving cars detect cyclists?
Self-driving cars employ sophisticated software to interpret the world around them. How do these systems work? And how good are they at detecting cyclists? Can cyclists feel safe sharing roads with an increasing number of vehicles that make use of these systems?
## How hard is it to spot a cyclist?
Vehicles can use a range of detection systems, including cameras, radar and lidar. Deep learning techniques have become very good at identifying objects in photographic images. So one important question is how hard is it to spot a cyclist in a photo taken from a moving vehicle?
Researchers at Tsinghua University, working in collaboration with Daimler, created a publicly available collection of dashboard camera photos, where humans have painstakingly drawn boxes around other road users. The data set is used by academics to benchmark the performance of their image recognition algorithms. The images are rather grey and murky, reflecting the cloudy and polluted atmosphere of the Chinese city location. It is striking that, in the majority of cases, the cyclists are very small, representing around 900 pixels out of the 2048 x 1024 images, i.e. less than 0.05% of the total area. For example, the cyclist in the middle of the image above is pretty hard to make out, even for a human.
Object-detecting neural networks are typically trained to identify the subject of a photo, which normally takes up a significant portion of the image. Finding a tall, thin segment containing a cyclist is significantly more difficult.
If you think about it, the cyclist taking up the largest percentage of a dash cam image will be riding across the direction of travel, directly in front of the vehicle, at which point it may be too late to take action. So a crucial aspect of any successful algorithm is to find more distant cyclists, before they are too close.
## Setting up the problem
Taking advantage of skills acquired on the fast.ai course on deep learning, I decided to have a go at training a neural network to detect cyclists. Many of the images in the Tsinghua Daimler data set include multiple cyclists. In order to make the problem more manageable, I set out to find the single largest cyclist in each image.
If you are not interested in the technical bit, just scroll down to the results.
## The technical bit
In order to save space on my drive, I downloaded about a third of the training set. The 3209 images were split 80:20 to create training and validation sets. I also downloaded 641 unseen images that were excluded from training and used only for testing the final model.
I used transfer learning to fine-tune a neural network using a pre-trained ResNet34 backbone, with a customised head designed to generate four numbers representing the coordinates of a bounding box around the largest object in each image. All images were scaled down to 224 pixel squares, without cropping. Data augmentation added variation to the training images, including small rotations, horizontal flips and adjustments to lighting.
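One practical detail of this setup is that when the images are scaled down to 224 pixel squares, the bounding-box labels have to be rescaled in the same way (fast.ai can apply such transforms to labels automatically). A minimal sketch, with illustrative coordinates rather than the actual preprocessing code:

```python
def scale_box(box, src_size, dst_size=(224, 224)):
    """Rescale a bounding box (x1, y1, x2, y2) when its image is resized."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x1, y1, x2, y2 = box
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# A cyclist annotated in an original 2048 x 1024 frame, mapped to the
# 224-pixel square fed to the network (aspect ratio changes, as no crop is used)
scaled = scale_box((1024, 512, 1124, 712), (2048, 1024))
```

Note that the horizontal and vertical scale factors differ, which is exactly the aspect-ratio distortion introduced by squashing a widescreen dash cam frame into a square without cropping.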
It took a couple of hours to train the network on my MacBook Pro, without needing to resort to a cloud-based GPU, to produce bounding boxes with an average error of just 12 pixels on each coordinate. The network had learned to do a pretty good job at detecting cyclists in the training set.
## Results
The key step was to test my neural network on the set of 641 unseen images. The results were impressive: the average error on the bounding box coordinates was just 14 pixels. The network was surprisingly good at detecting cyclists.
The 16 photos above were taken at random from the test set. The cyan box shows the predicted position of the largest cyclist in the image, while the white box shows the human annotation. There is a high degree of overlap for eleven cyclists: 2, 3, 4, 5, 6, 8, 11, 12, 14, 15 and 16. Box 9 was close, falling between two similarly sized riders, but 7 was a miss. The algorithm failed on the very distant cyclists in 1, 10 and 13. Ranking the photos by the size of the cyclist shows that the network had a high success rate for all but the smallest of cyclists.
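The degree of overlap between a predicted box and the human annotation is conventionally measured as intersection-over-union (IoU); the boxes below are illustrative, not taken from the test set.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# Predicted box vs human annotation, a few pixels apart on each edge
overlap = iou((100, 50, 160, 170), (104, 48, 158, 172))
```

An IoU close to 1 corresponds to the tightly nested cyan and white boxes in the successful examples; disjoint boxes score 0.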
In conclusion, as long as the cyclists were not too far away, it was surprisingly easy to detect riders pretty reliably, using a neural network trained over an afternoon. With all the resources available to Google, Uber and the big car manufacturers, we can be sure that much more sophisticated systems have been developed. I did not consider, for example, using a sequence of images to detect motion or combining them with data about the motion of the camera vehicle. Nor did I attempt to distinguish cyclists from other road users, such as pedestrians or motorbikes.
After completing this project, I feel reassured that cyclists of the future will be spotted by self-driving cars. The riders in the data set generally did not wear reflective clothing and did not have rear lights. These basic safety measures make cyclists, particularly commuters, more obvious to all road users, whether human or AI.
Car manufacturers could potentially develop significant goodwill and credibility in their commitment to road safety by offering cyclists lightweight and efficient beacons that would make them more obvious to automated driving systems.
## References
“A new benchmark for vision-based cyclist detection”, X. Li, F. Flohr, Y. Yang, H. Xiong, M. Braun, S. Pan, K. Li and D. M. Gavrila, in proceedings of IEEE Intelligent Vehicles Symposium (IV), pages 1028-1033, June 2016
## Don’t ride your bike like an astronaut
Astronauts return from the International Space Station with weak bones, due to the lack of gravitational forces. It is surprising to learn that competitive cyclists can experience similar losses in bone density over the period of a race season.
The problem is called Relative Energy Deficiency in Sport (RED-S). This occurs when lean athletes reach a tipping point where the benefits of losing weight become overwhelmed by negative impacts on health. When deprived of sufficient energy intake to match training load, certain metabolic systems become impaired or shut down.
Colleagues from Durham University and I recently published a study investigating what cyclists at risk of RED-S can do to improve their health and performance. It is freely available and written in an accessible way, without the requirement for specialist expertise.
## Race performance
Race performance was measured by the number of British Cycling points accumulated over the season. This was correlated with power (FTP and FTP/kg) and training load. However, changes in energy availability proved to be an important factor. After adjusting for FTP, cyclists who improved their fuelling (green triangles) gained, on average, 95 points more than those who made no change. In contrast, those who restricted their nutrition (red crosses) accumulated 95 fewer points and reported fatigue, illness and injury.
The nutritional advice included recommendations on adequate fuelling before, during and after rides. Also see my previous article on fuelling for the work required.
## Bone health
Competitive road cyclists can fall into an energy deficit due to the long hours of training they complete. Although an initial loss of excess body weight can lead to performance improvements, athletes need to maintain a healthy body mass. The lumbar spine is particularly sensitive to deficiencies of energy availability.
In cyclists, the lower back also fails to benefit from the gravitational stresses of weight-bearing sports. This is why, in addition to nutritional advice, study participants were recommended some basic skeletal loading exercises (yes, that is me in the pictures).
The cyclists fell into three general groups: those who made positive changes to nutrition and skeletal loading, those who made negative changes and the remainder. The resulting changes in bone mineral density over a six month period were striking, with highly statistically significant differences observed between the groups.
Those making positive changes (green triangles) saw significant gains in bone mineral density, while those making negative changes (red crosses) saw equally significant negative losses in bone density. Any individual observation outside the band of the least significant change (LSC) is indicative of a material change in bone health.
## Conclusions
The study provided strong evidence of the benefits of positive changes and the costs of negative changes in nutrition and skeletal loading exercises. It was noted that certain cyclists found it hard to overcome psychological barriers preventing them from deviating from their current routines. It is hoped that such strong statistical results will help these vulnerable athletes make beneficial behavioural changes.
## References
Clinical evaluation of education relating to nutrition and skeletal loading in competitive male road cyclists at risk of relative energy deficiency in sports (RED-S): 6-month randomised controlled trial, Nicola Keay, Gavin Francis, Ian Entwistle, Karen Hind. BMJ Open Sport and Exercise Medicine Journal, Volume 5, Issue 1. http://dx.doi.org/10.1136/bmjsem-2019-000523
## Learning the language of the Giro
Computers are becoming ever better at natural language processing (NLP). I set up a neural network as a language model and loaded a set of pre-trained weights on my MacBook Pro, to see what it could do. After priming it with some text about the Giro d’Italia, this is what the model produced as a continuation.
Priming text: The Giro d’ Italia (Italian pronunciation: [ˈdʒiːro diˈtaːlja]; English: Tour of Italy; also known as the Giro) is an annual multiple-stage bicycle race primarily held in Italy, while also occasionally passing through nearby countries. The first race was organized in 1909 to increase sales of the newspaper La Gazzetta dello Sport; however it is currently run by RCS Sport. The race has been held annually since its first edition in 1909, except when it was stopped for the two world wars. As the Giro gained prominence and popularity the race was lengthened, and the peloton expanded from primarily Italian participation to riders from all over the world.
Computer generated continuation: the race was won by the italian rider , giovanni di u_n , who won the race in the first leg of the race . the race was won by italian rider giovanni u_n , who won the race by a margin of two lengths .
= = = world tour = = =
the tour de france was the first of the tour de france . the tour de france was won by the reigning world champion , the reigning world champion , who had won the tour de france in the previous year ‘s race …
The output may not make a lot of sense, but the point is that it looks like English (in lower case). The grammar is reasonable, with commas, fullstops and a header inserted in a logical way. Furthermore, the model has demonstrated some understanding of the context by suggesting that the Giro could be won by an Italian rider called Giovanni. The word “u_n” stands for unknown, which is consistent with the idea that an Italian surname may not be a familiar English word. It turns out that a certain Giovanni Di Santi raced against Fausto Coppi (pictured above) in the 1940 Giro, though he did not win the first stage. In addition to this, the model somehow knew that the Giro, in common with the Tour de France, is a World Tour event that could be won by the reigning world champion.
I found this totally amazing. And it was not a one off: further examples on random topics are included below. This neural network is just an architecture, defining a collection of matrix multiplications and transformations, along with a set of connection weights. Admittedly there are a lot of connection weights: 115.6 million of them, but they are just numbers. It was not explicitly provided with any rules about English grammar or any domain knowledge.
## How could this possibly work?
In machine learning, language models are assessed on a simple metric: accuracy in predicting the next word of a sentence. The neural network approach has proved to be remarkably successful. Given enough data and a suitable architecture, deep learning now far outstrips traditional methods that relied on linguistic expertise to parse sentences and apply grammatical rules that differ across languages.
I was experimenting with an AWD-LSTM model originally created by Stephen Merity. This is a recurrent neural network (RNN) with three LSTM layers that include dropout. The pre-trained weights for the wt103 model were generated by Jeremy Howard of fast.ai, using a large corpus of text from Wikipedia.
Jeremy Howard converted the Wikipedia text into tokens. A tokeniser, such as spaCy, breaks text into words and punctuation, resulting in a vocabulary of tokens that are indexed as integers. This allows blocks of text to be fed into the neural network as lists of numbers. The outputs are numbers that can be converted back into the predicted words.
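The tokenisation round trip can be sketched in a few lines of plain Python; spaCy does this far more carefully, and the text and vocabulary here are illustrative.

```python
# A fragment of (lower-cased) text of the kind the model works with
text = "the race was won by the italian rider ."
tokens = text.split()   # spaCy would also split punctuation, handle contractions, etc.

# Index each distinct token with an integer
vocab = {}
for tok in tokens:
    vocab.setdefault(tok, len(vocab))

ids = [vocab[tok] for tok in tokens]           # what the network is actually fed
inverse = {i: t for t, i in vocab.items()}
decoded = " ".join(inverse[i] for i in ids)    # and back again
```

Note that both occurrences of "the" map to the same integer, which is what lets the network learn a single representation per word.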
The wt103 model includes a linear encoder that creates embeddings of word tokens. These are passed through three LSTM layers whose states are able to retain a memory of previous words or context. The result is passed through a decoder, employing the same weights as the encoder, to produce a softmax output that can be treated as a set of probabilities, across the vocabulary, to predict the next word token. Special forms of dropout were employed, as described in the paper, to limit overfitting and make the model more robust.
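The tied encoder/decoder step can be sketched with a toy vocabulary of 10 tokens and 4-dimensional embeddings (the real model is vastly larger, and the LSTM state here is a random stand-in).

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, emb_dim = 10, 4

E = rng.normal(size=(vocab_size, emb_dim))  # encoder embedding matrix
hidden = rng.normal(size=emb_dim)           # stand-in for the final LSTM output

# Weight tying: the decoder reuses the encoder's weights
logits = E @ hidden

# Softmax turns logits into a probability for every token in the vocabulary
z = np.exp(logits - logits.max())
probs = z / z.sum()

next_token = int(probs.argmax())            # predicted next word token
```

Because the same matrix `E` both embeds inputs and scores outputs, the model has far fewer parameters than with separate encoder and decoder weights.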
The network was trained by minimising cross-entropy loss using stochastic gradient descent. A technique called back propagation through time (BPTT) introduced some random variation into the length of the blocks of text employed in each training batch. The connection weights were iteratively improved towards optimal values.
## OK, but so what?
Impressive as it may be for a neural network to spout highly plausible but nonsensical English, what possible use could this have? Well, suppose you want to automate the separation of texts into different categories: for example, distinguish different authors, determine the sentiment of tweets or movie reviews, or predict the reaction of the stock market to news items. Fine-tuning a neural network that already has a good knowledge of English provides a huge advantage over starting from scratch. In lesson 10 of the fast.ai course, Jeremy Howard demonstrates how this approach can be used to achieve state of the art results on the widely used IMDB movie review data set.
## Further examples
In the first example, the network finishes the sentence about a hurricane in perfect English. Next, it comes up with the name of a critic (actually a rock music critic rather than a book reviewer, but heh!) and it makes use of its LSTM to open and close quotation marks. In the bean example, it amazingly starts describing flowers and correctly converts their dimensions from centimetres to inches!
My code can be found here.
Priming text: Hurricane Ivan was a large long Cape Verde hurricane that caused widespread damage in the Caribbean and United States The cyclone was
Computer generated continuation: the first hurricane to strike the united states since hurricane katrina in 2005 .
Priming text: The book was widely acclaimed by the critics.
Computer generated continuation: the book was published in the united states by the american library association on june 1 , 2006 .
= = = critical response = = =
the book received mixed reviews from critics . the new york times reviewer robert christgau called it ” a great book ” and ” a great book ” . he praised the book ‘s ” strong , strong , and well – written ” prose , and the ” excellent ” prose .
Priming text: Most varieties of bean grow either as an erect bush or as a climbing plant, but a few important kinds are of intermediate form. Dwarf and semiclimbers are grown extensively. When the climbing type is grown for its immature pods, artificial supports are necessary to facilitate harvesting. Varieties differ greatly in size, shape, colour, and fibrousness or tenderness of the immature pods. In general, varieties grown for dry mature seeds produce pods that are too fibrous to be eaten at any state of development. Most edible-podded beans produce relatively low yields of mature seeds, or seeds that are of low eating quality. Seed colours range from white through green, yellow, tan, pink, red, brown, and purple to black in solid colours and countless contrasting patterns. Seed shapes range from nearly spherical to flattened, elongated, and kidney-shaped. Pods are of various shades of green, yellow, red, and purple and splashed with red or purple; pod shapes range from flat to round, smooth to irregular, and straight to sharply curved; length ranges from 75 to 200 millimetres (3 to 8 inches) or more.
Computer generated continuation: the most common form of the flower is the flower spike , which is a characteristic of the flower spike . the flower is made up of a series of small , cylindrical , cylindrical , cylindrical , cylindrical stems that are up to 10 cm ( 3.9 in ) long and 2.5 cm ( 0.98 in ) wide . the flower spikes are arranged in a series of three or four pairs of flowers
## Creating artistic images from Strava rides
When you upload a ride, Strava draws a map using the longitude and latitude coordinates recorded by your GPS device. This article explores ways in which these numbers, along with other metrics, can be used to create interesting images that might have some artistic merit.
The idea was motivated by the huge advances made in the field of Deep Learning, particularly applications for image recognition. However, since datasets come in all shapes and forms, researchers have explored ways of converting different types of data into images. In a paper published in 2015, the authors achieved success in identifying standard time series by converting them into images.
GPS bike computers typically record snapshots of information every second. What kind of images could these time series generate? It turns out that there are several ways to convert a time series into an image.
### Spectrogram
Creating a spectrogram is a standard approach from signal processing that is particularly useful for analysing acoustic files. The spectrogram is a heat map that shows how the underlying frequencies contributing to the signal change over time. Technically, it is derived by calculating the discrete Fourier transform of a window that slides across the time series. I applied this to my regular Saturday morning club ride of four laps around Richmond Park. The image changes a bit once the ride gets going after about 1200 seconds (20 minutes), but, frankly, the result was not particularly illuminating. There is no obvious reason to consider cycling power data as a superposition of frequencies.
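A minimal sliding-window DFT of the kind described above can be written directly with numpy (libraries such as scipy provide more polished spectrogram functions; the power series here is simulated, not my actual ride data).

```python
import numpy as np

def spectrogram(signal, window=64, step=32):
    """Magnitude spectrogram: DFT of a window sliding across the series."""
    frames = [signal[i:i + window]
              for i in range(0, len(signal) - window + 1, step)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# Illustrative power-like series sampled once a second: a slow oscillation
t = np.arange(1024)
power = 200 + 30 * np.sin(2 * np.pi * t / 128)
spec = spectrogram(power)   # one row per window, one column per frequency bin
```

Each row of `spec` is the frequency content of one window, which is what the heat map displays over time.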
### Ah! Now we are getting somewhere
The authors of the referenced paper took a different approach to produce things called the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF), and Markov Transition Field (MTF). Read the paper if you want to know the details. I created these and something called a Recurrence Plot. All of these methods generate a matrix, by combining every element in the time series with every other element. The underlying observations occurring at times $t_{1}$ and $t_{2}$ determine the colour of the pixel at position ($t_{1}$, $t_{2}$). Images are symmetric along the lower-left to upper-right diagonal, apart from GADF, which is antisymmetric.
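The two Gramian fields follow directly from the paper's definitions: each observation is rescaled to [-1, 1] and mapped to an angle, and the matrix entries are trigonometric combinations of angle pairs. A sketch with an illustrative series:

```python
import numpy as np

def angular_fields(x):
    """Gramian Angular Summation and Difference Fields of a 1-D series."""
    # Rescale to [-1, 1] so that arccos is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(x)
    gasf = np.cos(phi[:, None] + phi[None, :])   # symmetric
    gadf = np.sin(phi[:, None] - phi[None, :])   # antisymmetric
    return gasf, gadf

# Illustrative stand-in for one of the ride's time series
series = np.sin(np.linspace(0, 4 * np.pi, 50))
gasf, gadf = angular_fields(series)
```

The symmetry of GASF and antisymmetry of GADF about the diagonal, mentioned above, fall straight out of the cosine-of-sum and sine-of-difference forms.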
Let’s see how they look for four laps of Richmond Park. We have six time series, with corresponding sets of images below. The segmentation of the images is due to the periodicity of the data. This is particularly clear in the geographic data (longitude, latitude and altitude). The higher intensity of the main part of the ride is most obvious in the heart rate data. The MTF plots are quite interesting. Scroll down through the images to the next section.
### From cycle ride to art
It is one thing to create an image of each item, but how can we combine these to summarise a ride in a single image? I considered two methods of combining time series into a single image: a) create a new image where the vertical and horizontal axes represent different series and b) create a new image by simply adding the corresponding values from two underlying images.
One problem is that some cyclists don’t have gadgets like heart rate monitors and power meters, so I initially restricted myself to just the longitude, latitude and altitude data. Nevertheless, as noted in an earlier blog, it is possible to work out speed, because the time interval between each reading is one second. Furthermore, one can estimate power from the speed and changes in elevation.
Another problem is that rides differ in length. For this I split the ride into, say, 128 intervals and took the last observation in each interval. So for a 3 hour ride, I’d be sampling about once every 84 seconds.
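The interval-sampling step can be sketched as follows; the ride data is a stand-in for a real stream of one-second readings.

```python
def downsample(series, n=128):
    """Take the last observation in each of n equal intervals."""
    step = len(series) / n
    return [series[int((i + 1) * step) - 1] for i in range(n)]

# A 3-hour ride recorded once a second: 10800 readings
ride = list(range(10800))
sampled = downsample(ride)   # roughly one sample every 84 seconds
```

Every ride, whatever its length, ends up as a series of exactly 128 values, so the resulting images are always the same size.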
The chart at the top of this blog was created by first normalising each series to a standard range (-1, +1). Method a) was used to create two images: longitude was added to latitude and altitude was multiplied by speed. These were added using method b). Using these measures will produce pretty much the same chart each time the ride is done. In contrast, an image that is totally unique to the ride can be produced using data relating to the individual rider. The image below uses the same recipe to combine speed, heart rate, power and cadence. If this had been a particularly special ride, the image would be a nice personal memento.
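A sketch of the recipe, under my assumptions: `normalise` implements the standard (-1, +1) rescaling, `pair_image` is one reading of method a) as an outer product of two normalised series, and method b) is a straight pixel-wise sum. The series themselves are synthetic stand-ins for ride data.

```python
import numpy as np

def normalise(x):
    """Rescale a series to the standard range (-1, +1)."""
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

def pair_image(a, b):
    """Method a): vertical and horizontal axes represent different series."""
    return np.outer(normalise(a), normalise(b))

# Illustrative stand-ins for the downsampled ride series (128 points each)
t = np.linspace(0, 1, 128)
lon, lat = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
alt, spd = np.cos(4 * np.pi * t), 8 + np.sin(6 * np.pi * t)

# Method b): add the corresponding pixel values of the two images
combined = pair_image(lon, lat) + pair_image(alt, spd)
```

Swapping in heart rate, power and cadence for the geographic series gives an image unique to the rider's effort on the day, as described above.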
For anyone interested in the underlying code, I have posted a Jupyter notebook here.
### References
Encoding Time Series as Images for Visual Inspection and Classification Using Tiled Convolutional Neural Networks, Z. Wang and T. Oates, AAAI Workshop, 2015. https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10179/10251
https://worldwidescience.org/topicpages/v/vertical+ampliado+em.html | Sample records for vertical ampliado em
1. Acesso transeptal vertical ampliado em reoperações valvares mitrais com átrio esquerdo pequeno Extended vertical transseptal approach in mitral valve reoperation with a small left atrium
Directory of Open Access Journals (Sweden)
Walter Vosgrau Fagundes
2004-03-01
Full Text Available OBJETIVO: Avaliar a abordagem transeptal vertical ampliada em reoperações da valva mitral com átrio esquerdo pequeno. MÉTODO: De janeiro de 2001 a dezembro de 2002, 15 pacientes portadores de doença valvar mitral com indicação de reintervenção cirúrgica, átrio esquerdo pequeno (menor ou igual a 4,0 cm e fibrilação atrial crônica, foram submetidos à abordagem transeptal vertical ampliada da valva mitral. Nove pacientes (pt eram do sexo feminino. A idade variou de 22 a 48 anos. As indicações cirúrgicas foram: disfunção de prótese mitral (seis pt; insuficiência mitral (cinco pt e dupla lesão mitral (quatro pt. Três pacientes apresentavam insuficiência aórtica associada e um pt, insuficiência tricúspide. Nove (60% pacientes encontravam-se em ICC CF III da NYHA e seis (40%, em CF IV. RESULTADOS: A exposição do aparelho valvar mitral foi excelente. O tempo de circulação extracorpórea variou de 65 a 150 min (média = 95min. Foram implantadas próteses em todos os pacientes (15 mitrais, três aórticas e um tricúspide. A mortalidade hospitalar foi de 6,7%, com um óbito devido a baixo débito cardíaco e falência de múltiplos órgãos. Um (6,7% paciente apresentou broncopneumonia na fase hospitalar. Dez pacientes permaneceram com fibrilação atrial, três pt reverteram para ritmo sinusal e um evoluiu com ritmo juncional. A permanência hospitalar média foi de 8,2 dias. Doze (85,7% pacientes encontram-se em CF I e dois (14,3% em CF II. A curva atuarial de sobrevida é de 92,5 % em 22 meses de seguimento. CONCLUSÃO: A técnica cirúrgica empregada proporciona excelente visibilização do aparelho valvar mitral, com baixo índice de complicações.OBJECTIVE: To evaluate the efficacy of the extended vertical transseptal approach in mitral valve reoperation with a small left atrium. METHOD: From January 2001 to December 2002, 15 patients with previous mitral operations, small left atrium and atrial fibrillation
2. O Fenótipo Ampliado do Autismo em genitores de crianças com Transtorno do Espectro Autista - TEA
Directory of Open Access Journals (Sweden)
Renata Giuliani Endres
Full Text Available RESUMOPesquisadores têm identificado expressões mais leves de traços do Transtorno do Espectro do Autismo - TEA em pais e irmãos destes indivíduos, que são definidas como Fenótipo Ampliado do Autismo (FAA. Este estudo investigou o perfil de personalidade de 20 genitores de crianças com o diagnóstico de TEA, utilizando a Bateria Fatorial de Personalidade e o Broad Autism Phenotype Questionnaire. Os resultados apontam para a presença de alguns traços de personalidade (ex: tendência à rigidez e ao retraimento social que podem, em alguma medida, corresponder às áreas de comprometimento presentes no TEA. Estes achados refletem um campo promissor de estudos no Brasil, sobretudo porque se utilizou um instrumento brasileiro, ainda não empregado em investigações na área do autismo.
3. Extended-spectrum beta-lactamases in Klebsiella spp and Escherichia coli obtained in a Brazilian teaching hospital: detection, prevalence and molecular typing beta-lactamases de espectro ampliado em Klebsiella spp e em Escherichia coli obtidas em um hospital escola brasileiro: detecção, prevalência e tipagem molecular
Directory of Open Access Journals (Sweden)
Ana Lúcia Peixoto de Freitas
2003-12-01
Full Text Available His study was performed to compare the methods of detection and to estimate the prevalence of extended-spectrum beta-lactamases (ESBL among Klebsiella spp and E.coli in a university hospital in southern Brazil. We also used a molecular typing method to evaluate the genetic correlation between isolates of ESBL K.pneumoniae. Production of ESBL was investigated in 95 clinical isolates of Klebsiella spp and Escherichia coli from Hospital de Clínicas de Porto Alegre, using Kirby-Bauer zone diameter (KB, double-disk diffusion (DD, breakpoint for ceftazidime (MIC CAZ, increased zone diameter with clavulanate (CAZ/CAC and ratio of ceftazidime MIC/ceftazidime-clavulanate MIC (MIC CAZ/CAC. Molecular typing was performed by DNA macrorestriction analysis followed by pulsed-field gel electrophoresis. The KB method displayed the highest rates of ESBL (up to 70% of Klebsiella and 59% of E.coli, contrasting with all the other methods (p Este estudo foi desenvolvido para comparar métodos de detecção e para estimar a prevalência de Klebsiella spp e E.coli produtoras de beta-lactamases de espetro ampliado (ESBL em um Hospital Universitário no sul do Brasil. A correlação genética, determinada através de método molecular de tipagem, entre as amostras de K. pneumoniae também foi determinada. A produção de ESBL foi investigada em 95 amostras de Klebsiella spp e E.coli obtidas de pacientes no Hospital de Clínicas de Porto Alegre usando-se: medida do diâmetro a zona de inibição (KB, dupla-difusão de disco (DD, valores de concentração inibitória mínima da ceftazidima (MIC CAZ, aumento do diâmetro da zona de inibição com adição de clavulanato (CAZ/CAC e a relação entre o MIC da ceftazidima/MIC ceftazidima com clavulanato (MIC CAZ/CAC. A tipagem molecular foi realizada utilizando-se o método de macrorestrição de DNA e eletroforese em campo pulsado (PFGE. O método KB apresentou as maiores taxas de produção de ESBL (> 70% para Klebsiella e
4. Mortalidade por insuficiência cardíaca: análise ampliada e tendência temporal em três estados do Brasil Mortalidad por insuficiencia cardiaca: análisis ampliado y tendencia temporal en tres estados de Brasil Mortality due to heart failure: extended analysis and temporal trend in three states of Brazil
Directory of Open Access Journals (Sweden)
Eduardo Nagib Gaui
2010-01-01
5. Cadastro ampliado em saúde da família como instrumento gerencial para diagnóstico de condições de vida e saúde The expanded enrolment form in the Brazilian Family Health Program as a management tool for diagnosis of living and health conditions
Directory of Open Access Journals (Sweden)
Arnaldo Sala
2004-12-01
6. Modelagem e controle de cargas em movimento vertical
OpenAIRE
Terceiro, Georges Jean Bruel
2004-01-01
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico. Programa de Pós-graduação em Engenharia Elétrica Vários são os aspectos envolvidos no processo de controle de movimento de veículos para deslocamento vertical que tornam o seu estudo interessante, sobretudo quando se trata de elevadores para transporte de pessoas, em que pesa sobremaneira o fato da existência de "cargas" vivas e conscientes. O controle do movimento de equipamentos elevadores é realizado a ...
7. Vertical Mulching e manejo da água em semeadura direta Vertical Mulching and water management in no tillage system
Directory of Open Access Journals (Sweden)
Sandra Maria Garcia
2008-04-01
8. AIDS em gestantes: possibilidade de reduzir a transmissão vertical
Directory of Open Access Journals (Sweden)
Fernanda Scherer Wiethäuper
2003-06-01
Full Text Available Neste estudo, buscamos investigar o conhecimento que gestantes possuem sobre a transmissão vertical, o comprometimento do feto e o significado do resultado soropositivo que a identifica como infectada pelo HIV. A pesquisa exploratória, de natureza qualitativa, foi desenvolvida em Unidades Sanitárias de São Leopoldo/RS. A análise permitiu captar a percepção de 63 gestantes entre 16 e 40 anos sobre os motivos e os significados para realização do teste, os conhecimentos e vivências do cotidiano e as perspectivas e cuidados com o bebê. Os resultados trazem um alerta aos profissionais que atuam no pré-natal, visto que necessitam atender uma complexidade de situações que emergem quando se vincula gestação e AIDS.
9. Subjective visual vertical evaluation in normal Brazilian subjects
Directory of Open Access Journals (Sweden)
Aline M. Kozoroski Kanashiro
2007-06-01
Full Text Available Otolith function can be evaluated by the subjective visual vertical (SVV), which determines a subject's ability to judge whether objects are in the vertical position in the absence of other visual references. The aim of this study was to evaluate the SVV in a sample of normal Brazilian subjects using a portable device. SVV measurements were performed in 160 normal subjects (aged 16 to 85). The mean SVV value was obtained after ten adjustments. SVV mean values ranged from -2.0º to +2.4º (mean = 0.18º, SD = 0.77º). There was no difference in mean SVV values among age groups (Kruskal-Wallis test; p = 0.40), but the older groups had a greater variance (Levene test; p = 0.016). The SVV values observed in this study are comparable to those described in previous studies. Although there was no difference in mean SVV inclination according to age, a greater variance was found among older subjects.
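The age-group variance comparison reported in this record (Levene's test) can be sketched as follows. The two SVV tilt samples below are synthetic illustrations, not the study's measurements, and the statistic shown is the median-based (Brown-Forsythe) variant of Levene's test:

```python
import numpy as np

# Median-based Levene (Brown-Forsythe) statistic for equality of
# variances, as used to compare SVV spread across age groups.
# The two samples are synthetic tilts (degrees), not study data.

rng = np.random.default_rng(1)
young = rng.normal(0.18, 0.5, 80)   # toy: narrower spread
older = rng.normal(0.18, 1.0, 80)   # toy: wider spread

def levene_statistic(groups):
    # Absolute deviations from each group's median.
    z = [np.abs(g - np.median(g)) for g in groups]
    n = np.array([len(zi) for zi in z])
    N, kgroups = n.sum(), len(z)
    zbar = np.array([zi.mean() for zi in z])
    grand = np.concatenate(z).mean()
    num = (N - kgroups) * np.sum(n * (zbar - grand) ** 2)
    den = (kgroups - 1) * sum(np.sum((zi - zb) ** 2) for zi, zb in zip(z, zbar))
    return num / den

W = levene_statistic([young, older])
print(W)
```

A value of W well above the F(k-1, N-k) critical value (about 3.9 at the 5% level for two groups of 80) indicates unequal variances between the groups.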
10. Proposal of measurement of vertical larynx position at rest
Directory of Open Access Journals (Sweden)
Osiris de Oliveira Camponês do Brasil
2005-06-01
Full Text Available AIM: The purpose of this research is to propose a procedure for measuring the vertical larynx position (VLP) in the neck, at rest, in young adults without vocal complaints. STUDY DESIGN: Cross-sectional cohort study. MATERIAL AND METHOD: 68 subjects took part, aged 18 to 44 years, 33 female and 35 male. The anatomical landmarks used were the right and left jaw angles (RJA and LJA), the centre of the cricoid arch cartilage (CC) and the centre of the sternal notch (SF). To obtain the measures, the subjects were asked to sit still with the head in maximal hyperextension. The devices used were a drawing compass and a 20-centimetre ruler. RESULTS: The measurements proved easy to obtain and caused no discomfort to the participants. There was a statistically significant difference between female and male subjects in vertical larynx position, with women presenting a higher larynx than men. The vertical larynx position was easy to obtain and appears to be a very useful parameter for intra-subject clinical follow-up.
11. Characterization of mortars for use in external vertical sealing systems (SVVE)
Directory of Open Access Journals (Sweden)
Vagner Arruda de Castro
2011-03-01
Full Text Available In May 2010 the Brazilian Association of Technical Standards (ABNT) published a new performance standard, NBR 15.575, covering buildings of up to five storeys. This standard will have major impacts on material manufacturers, designers, builders and service providers, so civil-construction laboratories such as the LCC at IFRN must follow its implications, both to generate knowledge about the aspects it involves and to be able to evaluate the construction materials and building systems used in the buildings it addresses. In this context, this article characterizes rendering mortars for use in external vertical sealing systems (SVVE) composed of ceramic blocks coated with mortars of different compositions, aiming to propose technological innovations for the SVVE production process and to improve the performance, quality and cost of SVVE. KEYWORDS: rendering mortar, external vertical sealing systems, fresh-state properties.
12. Diel and vertical dynamics of limnological characteristics in a fish-rearing net-cage environment
Directory of Open Access Journals (Sweden)
Odair Diemer
2010-04-01
Full Text Available
This study aimed to characterize the diel and vertical dynamics of limnological variables in a native-fish net-cage rearing environment at the Itaipu Binacional reservoir. The parameters evaluated were water temperature, dissolved oxygen, electrical conductivity, pH, total phosphorus, nitrite and ammonia. Diel variation was found for all parameters except ammonia and total phosphorus. All variables remained within the limits recommended for aquaculture, with the exception of dissolved oxygen, which reached critical levels at night. In the vertical distribution, the concentrations of the physical and chemical parameters of the water did not exceed the limits established by CONAMA Resolution 357/05 for fish rearing; there was, however, vertical variation for nitrite and phosphorus.
KEY WORDS: Aquaculture, intensive culture, limnology, water quality.
13. Batch-mode pilot unit with a system of anaerobic reactors + microalgae + vertical-flow constructed wetlands
Directory of Open Access Journals (Sweden)
Matheus Wink
2016-10-01
Full Text Available This work investigated three batch-mode configurations of integrated systems that evolved into Anaerobic Reactors + Microalgae + Vertical-Flow Constructed Wetlands (RA + MA + WCFV). The configurations had the following characteristics: a microalgae tank with a working volume of 90 L, fitted with internal recirculation through an acrylic cone and external recirculation through a 20 L acrylic tank, integrated with a WCFV with a hydraulic retention time (HRT) of 3 days planted with the macrophyte Hymenachne grumosa. The RA + MA + WCFV system has been in operation for the last 8 months, showing total removal of N-NH4+ (initial concentration 68 mg L-1), together with 50% COD reduction and 70% total-phosphorus removal. Improvements in the control of residual-algae removal are needed for the WCFV, especially regarding the volumetric load, which should be limited to 20 cm day-1. For further development, the following aspects should be investigated: the impossibility of operating the system in continuous flow; the fact that the sludge drainage system does not allow complete sludge removal from the anaerobic reactors (RAs); and the impossibility of recharging the systems simultaneously.
14. Vertical and horizontal distribution of air temperature in a plastic greenhouse
Directory of Open Access Journals (Sweden)
Raquel A. Furlan
2002-04-01
Full Text Available This work was conducted in the experimental area of the Department of Rural Engineering of the "Escola Superior de Agricultura Luiz de Queiroz", University of São Paulo, Piracicaba, São Paulo, Brazil. Two greenhouses were installed in the east-west direction, 6.4 m wide, 17.5 m long and 3.0 m high, with a total area of 112 m², covered by 150-micron plastic treated against ultraviolet rays. To characterize the spatial distribution of air temperature in the greenhouse, copper-constantan thermocouples were installed in grids with 3.0 m horizontal spacing, at heights of 0.5, 1.0, 2.0, 3.0 and 4.0 m above the soil. Data were stored every 15 min by automatic data-acquisition systems. The fogging system consisted of two lines with 70 nozzles in total, installed at a height of 3.0 m and operated at a working pressure of 200 kPa. Fogging did not affect the vertical temperature gradient, which kept its tendency of increasing temperature with height above the soil, and the temperature reduction produced by the fogging system was effective only while fogging was in progress. Isothermal surfaces were constructed from the results to represent the spatial distribution of air temperature at the different heights. Fogging had its greatest homogenizing effect on the temperature distribution at the height of 2.0 m above the soil.
15. Application of EM holographic methods to borehole vertical electric source data to map a fuel oil spill
International Nuclear Information System (INIS)
Bartel, L.C.
1993-01-01
The multifrequency, multisource holographic method used in the analysis of seismic data is extended to electromagnetic (EM) data within the audio frequency range. The method is applied to the secondary magnetic fields produced by a borehole vertical electric source (VES). The holographic method is a numerical reconstruction procedure based on the double focusing principle for both the source array and the receiver array. The approach used here is to Fourier transform the constructed image from frequency space to time space and set time equal to zero. The image is formed when the in-phase (real) part is a maximum or the out-of-phase (imaginary) part is a minimum; i.e., the EM wave is phase coherent at its origination. In the application here the secondary magnetic fields are treated as scattered fields. In the numerical reconstruction, the seismic analog of the wave vector is used; i.e., the imaginary part of the actual wave vector is ignored. The multifrequency, multisource holographic method is applied to calculated model data and to actual field data acquired to map a diesel fuel oil spill.
17. A system dynamics model for the extended S&OP process
Directory of Open Access Journals (Sweden)
Jean Carlos Domingos
2015-01-01
18. The Agency's Safeguards System (1965, as provisionally extended in 1966 and 1968)
Energy Technology Data Exchange (ETDEWEB)
NONE
1968-09-24
The Agency's safeguards system, as approved by the Board of Governors in 1965 and provisionally extended in 1966 and 1968, is set forth in this document for the information of all Members.
19. Motherhood and life projects in young women infected with HIV through vertical transmission
Directory of Open Access Journals (Sweden)
Ana Paula Eid
2015-01-01
1. HIV prevalence in pregnant women and vertical transmission by socioeconomic profile, Vitória, ES
Directory of Open Access Journals (Sweden)
Anne Caroline Barbosa Cerqueira Vieira
2011-08-01
2. Mercosur agri-food strategies
Directory of Open Access Journals (Sweden)
Marcos Sawaya Jank
1999-12-01
3. Vertical transmission of HIV in the population treated at a reference center
Directory of Open Access Journals (Sweden)
Sueli Teresinha Cruz Rodrigues
2013-01-01
Full Text Available OBJECTIVE: To identify the rate of vertical transmission of HIV and assess the associated maternal and fetal factors. METHODS: Cross-sectional study conducted in the Specialized Care Service. We reviewed 102 clinical records of HIV-positive women who had given birth to live newborns. The primary variable was the occurrence of vertical transmission of HIV; the secondary variables were the factors associated with it. RESULTS: The prevalence of vertical transmission was 6.6%. Among the infected children, 40.0% of the mothers had had no prenatal care, 75% had received no antiretroviral prophylaxis during the prenatal period, and 50.0% had received no oral AZT prophylaxis and had been breast-fed. Among the uninfected children, 91.5% started oral AZT prophylaxis at birth and 84.1% of the mothers received antiretrovirals. CONCLUSION: The occurrence of vertical transmission of HIV at the reference service was 6.6%, indicating a high prevalence.
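A headline prevalence such as the 6.6% above can be given a sampling uncertainty with a standard Wilson score interval. The event count used below (7 of 102, about 6.9%) is a rounded reconstruction for illustration, not a figure taken from the paper:

```python
import math

# Wilson 95% score interval for a proportion. The count 7/102 is a
# rounded reconstruction of the reported ~6.6% prevalence, used only
# to illustrate the sampling uncertainty of such a rate.

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(7, 102)
print(round(lo, 3), round(hi, 3))
```

For small samples with few events the Wilson interval is preferred over the simple normal approximation because it never extends below zero and is better centred.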
4. Vertical-position births at a University Hospital: a time-series study, 1996 to 2005
Directory of Open Access Journals (Sweden)
Odaléa Maria Brüggemann
2009-06-01
Full Text Available OBJECTIVES: to describe the evolution of the number of horizontal and vertical births in the maternity ward of the University Hospital of the Federal University of Santa Catarina, Brazil, and to evaluate their association with the rates of caesarean section, of newborn admission to intensive and semi-intensive care units, and of maternal blood transfusion. METHODS: a descriptive time-series study. All births, all admissions of newborns to the Intensive Care Unit, and all maternal blood transfusions between 1996 and 2005 were included. Trends were tested with the Prais-Winsten generalized linear regression method. RESULTS: in 1996 the percentage of vertical births was 5.4%; in 2005 it was 52.3%. The average annual variation was +20.8% for vertical births (p=0.007) and -15.2% for horizontal births (p<0.001). Caesarean births showed a tendency to stabilize (p=0.243). There was a decrease in the number of newborns admitted to the neonatal intensive care unit (6.1% per year; p=0.001) and in the need for maternal blood transfusions (5.2%; p<0.01). CONCLUSIONS: the
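The Prais-Winsten method named in this record fits a linear trend by quasi-differencing the series with an AR(1) residual coefficient while keeping the first observation. A minimal single-pass sketch on synthetic data (not the hospital's birth series):

```python
import numpy as np

# Minimal single-pass Prais-Winsten trend fit: OLS, AR(1) coefficient
# from the residuals, quasi-difference keeping the first observation.
# The birth-percentage series below is synthetic, not the hospital's.

rng = np.random.default_rng(0)
years = np.arange(1996, 2006, dtype=float)
t = years - years.mean()
y = 5.4 + 4.7 * (years - 1996) + rng.normal(0.0, 1.0, years.size)

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

X = np.column_stack([np.ones_like(t), t])
beta_ols = ols(X, y)
resid = y - X @ beta_ols

# AR(1) coefficient of the residuals (clipped for stability).
rho = float(np.clip(np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2),
                    -0.95, 0.95))

# Prais-Winsten transform: first row scaled by sqrt(1 - rho^2),
# remaining rows quasi-differenced, then refit by OLS.
w = np.sqrt(1.0 - rho ** 2)
Xs = np.vstack([w * X[0], X[1:] - rho * X[:-1]])
ys = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
beta_pw = ols(Xs, ys)
print(beta_pw[1])   # estimated annual trend (percentage points per year)
```

Keeping the rescaled first observation, rather than dropping it as Cochrane-Orcutt does, matters for short series like a ten-year record; production analyses would typically iterate the rho estimate to convergence.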
5. Sintering study in a vertical fixed bed reactor for synthetic aggregate production
Energy Technology Data Exchange (ETDEWEB)
Quaresma, D.S.; Neves, A.S.S.; Melo, A.O.; Pereira, L.F.S.; Bezerra, P.T.S.; Macedo, E.N.; Souza, J.A.S., E-mail: danysq@gmail.com [Universidade Federal do Para (UFPA), Belem, PA (Brazil). Faculdade de Engenharia Quimica
2017-04-15
Synthetic aggregates are being employed in civil construction to reduce mineral-extraction activities. In this context, the recycling of industrial waste underlies most processes for reducing the exploitation of mineral resources. In this work, sintering in a vertical fixed bed reactor for synthetic aggregate production using 20% pellets and 80% charcoal was studied. The pellets were prepared from a mixture containing clay, charcoal and fly ash. Two experiments varying the air-suction speed were carried out. The material produced was analyzed by X-ray diffraction, scanning electron microscopy, measurement of its ceramic properties, and particle-size analysis. The results showed that the solid-state reactions during the sintering process were efficient and the material produced was classified as coarse lightweight aggregate. The process is interesting for the sintering of aggregates and can be controlled by composition, particle size, temperature gradient and gas flow. (author)
6. Evaluation of the predictive capacity of vertical segmental tetrapolar bioimpedance for detecting excess weight in adolescents
Directory of Open Access Journals (Sweden)
Felipe Silva Neves
2015-12-01
7. Vertical and seasonal distribution of Anopheles (Kerteszia) in Ilha Comprida, Southeastern Brazil
Directory of Open Access Journals (Sweden)
Helene Mariko Ueno
2007-04-01
8. Wood light-frame floor diaphragms, made with OSB panels and I-joists, subjected to vertical loads
Directory of Open Access Journals (Sweden)
Altevir Castro dos Santos
2007-07-01
Full Text Available This work addresses lightweight wood-frame construction and presents a finite-element analysis of floor diaphragms made with OSB panels and wood I-joists subjected to four-point bending. The main goal is to evaluate the strength and stiffness of wood light-frame floor diaphragms when subjected to monotonic vertical forces acting in the plane of the floor. The analyses were performed with the SAP2000 computer program for different constructive arrangements, examining the influence of the following variables: the spacing between the wood I-joists that form the framing of the horizontal diaphragm, and the spacing between the nails fixing the subfloor, composed of OSB (Oriented Strand Board) panels, around their perimeter. Finally, the numerical and analytical results are compared and some conclusions are presented.
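Alongside a finite-element model, the four-point bending setup mentioned in this record has a closed-form elastic check for a simply supported member. The span, loads and section stiffness below are hypothetical values, not figures from the tests:

```python
# Closed-form elastic check for symmetric four-point bending of a
# simply supported beam: two loads P, each a distance a from a support,
# give midspan deflection delta = P*a*(3*L**2 - 4*a**2) / (24*E*I).
# Span, loads and section stiffness are hypothetical, not test values.

def midspan_deflection(P, a, L, E, I):
    """Midspan deflection (m) under symmetric four-point bending."""
    return P * a * (3 * L**2 - 4 * a**2) / (24 * E * I)

# Toy I-joist: 4 m span, 5 kN loads 1.2 m from each support,
# E = 10 GPa, I = 8e-5 m^4.
delta = midspan_deflection(5e3, 1.2, 4.0, 10e9, 8e-5)
print(delta * 1000, "mm")
```

A hand check like this gives an order-of-magnitude sanity bound on the numerical model's load-deflection curve before the nail-spacing and joist-spacing parameters are varied.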
9. Vertical mulching as a soil conservation practice to manage runoff in no-tillage systems
Directory of Open Access Journals (Sweden)
José Eloir Denardin
2008-12-01
10. Major rural accident: the pesticide "rain" case in Lucas do Rio Verde city, MT
Directory of Open Access Journals (Sweden)
Wanderlei Antonio Pignati
2007-03-01
Full Text Available The article reports the environmental accident caused by aerial pesticide-spraying drift that reached the urban area of Lucas do Rio Verde, MT, in March 2006. It was characterized as a "major rural accident" with occupational and environmental dimensions, whose seriousness and extent went beyond the boundaries of the rural production unit, causing health, social and environmental impacts. This case study aimed to understand the socio-technical scenario of the accident and the health-environment surveillance process, in a research-action dynamic. Information was collected through interviews, documents and daily observation records. The analysis also drew on interdisciplinary and participatory accident analysis, with the involvement of local Health, Agriculture and Environment institutions, union and political leaders, smallholders and farmers, the Public Prosecutor's Office, journalists and the University. The study shows that surveillance of the "use and abuse" of pesticides grew into a "movement for the sustainable development of the region", supported by participatory surveillance and linked to the struggle for democracy and social justice, in the pursuit of sustainable agriculture and environment.
11. Numerical analysis of hydrodynamic forces acting on vertical lift gates
Energy Technology Data Exchange (ETDEWEB)
Andrade, Jell Lima de [Mecanica Pesada S.A., Taubate, SP (Brazil); Amorim, Jose Carlos Cesar [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil)]. E-mail: jcamorim@ime.eb.br
1997-07-01
A numerical analysis has been developed for calculating the viscous flow controlled by a vertical lift gate and the hydrodynamic forces acting on it. The numerical solution is obtained from the incompressible Navier-Stokes equations. The numerical technique is based on a finite element method. A Poisson equation is derived from the pressure-weighted substitution of the full momentum equations into the continuity equation. Turbulence effects are simulated by a k-ε turbulence model. The procedure developed here is applied to a vertical lift gate operating in a CESP installation, and the results are compared with available experimental data at various opening positions. Good agreement is obtained for the velocity and pressure distributions. (author)
12. Effect of the use of the Active Ankle System® stabilizer on vertical jump height in volleyball players
Directory of Open Access Journals (Sweden)
Marco Túlio Saldanha dos Anjos
2009-10-01
13. Vertical distribution of phytoplankton functional groups in a tropical shallow lake: driving forces on a diel scale
Directory of Open Access Journals (Sweden)
Luciana Gomes Barbosa
2011-03-01
Full Text Available AIM: This study analyzed the vertical distribution of phytoplankton functional groups over two diel cycles in a warm monomictic shallow tropical lake. METHODS: The abiotic variables and the phytoplankton and zooplankton communities were sampled at 3-hour intervals over 24 hours in vertical profiles, in the stratification (February) and circulation (July) periods. RESULTS: The high thermal stability and the partial atelomixis favored the coexistence of functional groups sensitive to destratification, NA and F, composed of desmids and coccoid Chlorophyceae, and of groups S2 and Lo, which persisted during circulation and were composed of non-N2-fixing filamentous cyanobacteria and dinoflagellates, respectively. The discontinuity in the vertical distribution of the functional groups, with dominance of NA and F in the epilimnion and of R and Lo in the metalimnion and hypolimnion, was characteristic of the stratification period, and differences between the daytime and nighttime periods were not significant. CONCLUSIONS: The 80% reduction in the biomass of the NA group during the mixing period indicates the influence of thermal stability and partial atelomixis as determinant factors in the compartmentalization of functional groups, restricting diel vertical migration (DVM) and loss by sedimentation during the stratification period.
14. Prevention of HIV vertical transmission: obstetricians' attitude in Salvador, Brazil
Directory of Open Access Journals (Sweden)
João Paulo Queiroz Farias
2008-03-01
15. Vertical transmission of HIV: the situation found in a maternity hospital in Teresina
Directory of Open Access Journals (Sweden)
Liliam Mendes de Araújo
2007-08-01
16. Climatology of the vertical structure of the atmosphere over Belém, PA, in November
Directory of Open Access Journals (Sweden)
Daniela dos Santos Ananias
2010-06-01
17. Neuromuscular responses of the lower-limb muscles during an intermittent vertical jump protocol in volleyball players
Directory of Open Access Journals (Sweden)
Caroline Tosini Felicissimo
2012-03-01
Full Text Available The purpose of this study was to analyze performance and the electromyographic responses of the Rectus Femoris, Biceps Femoris and Gastrocnemius Medialis muscles during a vertical jump protocol. Thirteen female volleyball players (15.6 ± 0.9 years) took part. A maximum-power protocol (three maximal jumps) was performed first, followed by a jump-endurance protocol (cycles of three maximal jumps in about 10 seconds, one jump every three seconds, with 15 seconds of recovery), lasting 20 minutes. The countermovement jump technique without arm swing was used, on a contact mat. For analysis, the jumps were divided into four periods of 12 cycles each. The results showed a drop in jump height of approximately 1.3 cm between periods 1 and 4, the decrease being more significant in the 3rd and 4th periods compared with the 1st and 2nd. However, the RMS and median-frequency variables showed no change in the electromyographic responses across muscles and periods. It was concluded that fatigue may depend on psychophysiological variables at the CNS level that also influence performance.
18. Infraclavicular vertical brachial plexus blockade in a patient with chronic obstructive pulmonary disease: case report
Directory of Open Access Journals (Sweden)
Diogo Brüggemann da Conceição
2006-10-01
Full Text Available BACKGROUND AND OBJECTIVES: Patients with chronic obstructive pulmonary disease (COPD) have an increased risk of postoperative complications, especially when undergoing general anesthesia. Brachial plexus block is an alternative for these patients in upper-limb surgery. The objective of this report was to present a case of vertical infraclavicular brachial plexus block in a COPD patient with an elbow fracture. CASE REPORT: Female patient, 67 years old, 52 kg, physical status ASA III, scheduled for elbow osteosynthesis, with bronchiectasis since the age of nine following pneumonia. She habitually had a productive cough and had been cleared for the procedure by her pulmonologist. After monitoring was established (non-invasive blood pressure, ECG and pulse oximetry), a vertical infraclavicular brachial plexus block was performed with 30 mL of 0.5% ropivacaine. Surgery was uneventful, and the patient was discharged the day after the procedure. CONCLUSIONS: Vertical infraclavicular brachial plexus block is an alternative technique for patients with COPD and elbow fracture, owing to its lower morbidity compared with general anesthesia and with the supraclavicular approach.
19. HIV prevalence in pregnant women and vertical transmission according to socioeconomic status, Vitória, Southeastern Brazil
Directory of Open Access Journals (Sweden)
Anne Caroline Barbosa Cerqueira Vieira
2011-08-01
20. A simulation model of nitrate displacement in vertical columns in a non-saturated soil
Directory of Open Access Journals (Sweden)
Jarbas H. de Miranda
2002-01-01
Full Text Available Intensive agriculture always aims at increased productivity, while little attention is paid to possible environmental impacts. Understanding solute transport processes in the soil therefore helps reduce leaching into subsurface layers. The objective of the present work was to develop and evaluate a computational model for simulating solute dynamics in the soil by means of numerical solutions of the differential equations that describe this transport. The model showed good agreement between simulated and measured nitrate concentrations and moisture profiles in a vertical column of unsaturated soil under laboratory conditions.
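Solute transport models of this kind typically solve the one-dimensional advection-dispersion equation, dC/dt = D d²C/dx² − v dC/dx. A minimal explicit finite-difference sketch is below; the scheme (FTCS dispersion with upwind advection) and all parameter values are illustrative assumptions, not the paper's model:

```python
import numpy as np

def advect_disperse(c0, D=1e-6, v=1e-5, dx=0.01, dt=10.0, steps=100):
    """Explicit scheme for dC/dt = D*d2C/dx2 - v*dC/dx on a soil column.
    D: dispersion coeff [m^2/s], v: pore-water velocity [m/s] (assumed)."""
    c = np.asarray(c0, dtype=float).copy()
    r = D * dt / dx**2            # dispersion number (stability: r <= 0.5)
    cr = v * dt / dx              # Courant number (stability: cr <= 1)
    assert r <= 0.5 and cr <= 1.0, "explicit scheme would be unstable"
    for _ in range(steps):
        c[1:-1] = (c[1:-1]
                   + r * (c[2:] - 2.0 * c[1:-1] + c[:-2])   # dispersion
                   - cr * (c[1:-1] - c[:-2]))               # upwind advection
        c[0] = 1.0                # constant-concentration inlet (top of column)
        c[-1] = c[-2]             # zero-gradient outlet (bottom)
    return c

# Nitrate fed at the top of a 0.5 m column discretized into 51 nodes:
profile = advect_disperse(np.zeros(51))
```

Implicit or finite-element schemes relax the stability limits noted in the comments, at the cost of solving a linear system per time step.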
1. Moisture profile measurements of concrete samples in vertical flow by the gamma ray attenuation method
Energy Technology Data Exchange (ETDEWEB)
Appoloni, C R; Nardocci, A C; Obuti, M M [Universidade Estadual de Londrina, PR (Brazil). Dept. de Fisica
1988-04-01
This work deals with the study of water diffusion in concrete by the gamma ray attenuation method. The moisture profiles, θ(z,t), of the vertical water flow were determined in concrete samples of different mixes and porosities. The data were taken with a vertical and horizontal measurement table, a ⁶⁰Co gamma ray source, a NaI(Tl) scintillation detector and standard gamma ray spectrometry electronics. The θ(z,t) data analysis is presented using a phenomenological model of the temporal evolution of the moisture profile in heterogeneous materials. Two other models, Cell and Sandwich, were also applied to determine the attenuation coefficient of a non-homogeneous medium from the attenuation coefficients of its components, taking particle-size effects into account. (author).
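The gamma attenuation method rests on the Beer-Lambert law: water in the sample adds attenuation on top of the dry matrix, so the volumetric moisture follows from the ratio of dry to wet count rates. A minimal sketch, with an illustrative attenuation coefficient and sample thickness (not values from the paper):

```python
import math

# Beer-Lambert: I_wet = I_dry * exp(-mu_w * rho_w * theta * x)
#  => theta = ln(I_dry / I_wet) / (mu_w * rho_w * x)
# mu_w: mass attenuation coefficient of water [cm^2/g] for ~1.25 MeV
# 60Co gammas (assumed value), x: sample thickness [cm] (assumed).

def volumetric_moisture(i_dry, i_wet, mu_w=0.0637, rho_w=1.0, x=10.0):
    """Volumetric water content theta [cm^3/cm^3] from count rates."""
    return math.log(i_dry / i_wet) / (mu_w * rho_w * x)

# Hypothetical count rates through the same point, dry and during infiltration:
theta = volumetric_moisture(i_dry=5000.0, i_wet=4100.0)
```

Scanning the beam along the column height z and repeating over time yields the θ(z,t) profiles the abstract refers to.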
2. Clinical and laboratory profile of children living with vertically transmitted HIV/AIDS in a city in northeastern Brazil
Directory of Open Access Journals (Sweden)
Margareth Jamil Maluf e Silva
2010-02-01
Full Text Available INTRODUCTION: Vertical transmission is the main route of HIV-1 (human immunodeficiency virus) infection in children. This study aimed to investigate the clinical and laboratory evolution of children living with vertically transmitted HIV/AIDS. METHODS: This was a retrospective, descriptive study based on data gathered from the medical records of all children seen at a specialized care unit between January 1998 and June 2006. RESULTS: Eighty children who met the inclusion criteria were evaluated. For 56 (70%) of the children, the mother's HIV infection was diagnosed after delivery, and delivery was vaginal in 44 (55%) of the cases. Breastfeeding was documented for 56 (70%) of the children, for periods ranging from one month to more than 12 months. Failure to use the ACTG 076 protocol, or incomplete use of it, was documented in 63 (78.5%) of the cases. CONCLUSIONS: These findings are highly concerning and reveal failures in maternal and child care, especially regarding prevention of transmission.
3. Vertical integration
International Nuclear Information System (INIS)
Antill, N.
1999-01-01
This paper focuses on the trend in international energy companies towards vertical integration in the gas chain from wellhead to power generation, horizontal integration in refining and marketing businesses, and the search for larger projects with lower upstream costs. The shape of the petroleum industry in the next millennium, the creation of super-major oil companies, and the relationship between size and risk are discussed. The dynamics of vertical integration, present events and future developments are considered. (UK)
4. Determining the amount of anhydrous alcohol evaporated in vertical cylindrical tanks
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Elcio Cruz de [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)
2008-07-01
In order to assess the amount of anhydrous alcohol evaporated in vertical cylindrical tanks, a calculation methodology was developed based on the product's mass transfer rate, the Reynolds number and the mass transfer coefficient. An Excel spreadsheet was prepared with input data for the tank and for the physical and chemical properties of the product (temperature and density). At a temperature of 50 °C, the evaporated volume reaches 0.8% per day. (author)
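An order-of-magnitude version of such an estimate can be sketched as follows: a convective mass-transfer coefficient from a Reynolds-number correlation, then a flux driven by the vapor concentration at the liquid surface. The correlation chosen (turbulent flat plate, Sh = 0.037 Re⁰·⁸ Sc^(1/3)) and every numeric property value are assumptions for illustration, not the paper's method; it is an open-surface upper bound that ignores the tank roof:

```python
import math

R_GAS = 8.314  # J/(mol K)

def evaporation_rate_kg_s(diam_m, wind_m_s, p_vap_pa, temp_k,
                          molar_kg=0.046,    # ethanol molar mass [kg/mol]
                          d_ab=1.2e-5,       # vapor diffusivity in air [m^2/s] (assumed)
                          rho_air=1.09, mu_air=1.95e-5):  # air at ~50 C (assumed)
    re = rho_air * wind_m_s * diam_m / mu_air        # Reynolds number
    sc = mu_air / (rho_air * d_ab)                   # Schmidt number
    sh = 0.037 * re**0.8 * sc**(1.0 / 3.0)           # Sherwood correlation
    k_c = sh * d_ab / diam_m                         # mass-transfer coeff [m/s]
    c_s = p_vap_pa * molar_kg / (R_GAS * temp_k)     # surface vapor conc [kg/m^3]
    area = math.pi * diam_m**2 / 4.0                 # liquid surface area [m^2]
    return k_c * area * c_s                          # kg/s (far-field conc ~ 0)

# Hypothetical 20 m tank, 2 m/s wind, ethanol vapor pressure ~29 kPa at 50 C:
rate = evaporation_rate_kg_s(diam_m=20.0, wind_m_s=2.0,
                             p_vap_pa=29000.0, temp_k=323.15)
```

Dividing the resulting mass rate by the product density and tank volume gives the daily percentage loss the abstract quotes.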
5. Thermal environment mapping of open laying-hen houses with a vertical rearing system
Directory of Open Access Journals (Sweden)
Diogo J. de R. Coelho
2015-10-01
6. Sand fly fauna (Diptera: Psychodidae) in forest fragments around housing complexes in the municipality of Manaus, state of Amazonas, Brazil. I. Vertical stratification
Directory of Open Access Journals (Sweden)
Marlisson Augusto Costa Feitosa
2006-12-01
7. Spectrum and vertical distribution of diaspore dispersal modes in a seasonal forest in Southern Brazil
Directory of Open Access Journals (Sweden)
Eduardo Luís Hettwer Giehl
2007-03-01
8. Composition, community structure and vertical distribution of epiphytic ferns on Alsophila setosa Kaulf. (Cyatheaceae) in a semideciduous seasonal forest, Morro Reuter, RS, Brazil
Directory of Open Access Journals (Sweden)
Paulo Henrique Schneider
2011-09-01
Full Text Available In tropical forests, tree ferns constitute an important phorophyte for the establishment and occurrence of epiphytic species. The composition, structure and vertical distribution of epiphytic ferns were studied on Alsophila setosa Kaulf. in a semideciduous seasonal forest fragment in the municipality of Morro Reuter (29º32'07"S, 51º05'26"W), state of Rio Grande do Sul, Brazil. The sample consisted of 60 caudices at least 4 m high, divided into 1 m intervals from the ground. The specific importance value was estimated through the coverage value and the caudex frequency in the intervals. A total of 14 species was recorded, belonging to 10 genera and five families. The highest specific richness occurred in Polypodiaceae. The rarefaction curve for the total sample did not reach an asymptote, with an estimated 14.98 to 16.95 species, showing that a few more species could still be recorded. The species with the highest importance value and vertical amplitude was Blechnum binervatum (Poir.) C.V. Morton & Lellinger, with a decreasing frequency from the bottom to the top of the caudex. Given the predominance of habitual holoepiphytes, removal of Alsophila setosa caudices compromises microhabitat availability for epiphytes in the forest understory.
9. Treatment effects in Class II division 1 high-angle patients treated according to Bioprogressive therapy (cervical headgear and lower utility arch), with emphasis on vertical control
Directory of Open Access Journals (Sweden)
Viviane Santini Tamburús
2011-06-01
10. Application of extended criteria for the evaluation of kidneys from deceased donors
Directory of Open Access Journals (Sweden)
David Orret Cruz
2006-12-01
11. Elements for the construction of a public bilingualism policy in Valle del Cauca: a descriptive analysis based on the 2005 extended census
Directory of Open Access Journals (Sweden)
Julio César Alonso
2012-01-01
Full Text Available Competitiveness indices reflect poor performance by the department of Valle del Cauca, especially with regard to human capital. A public bilingualism policy allows human capital to be accumulated through English-language education and provides access to new markets and better information, enabling the development of other factors necessary for a region's competitiveness. This paper presents a first diagnosis of bilingualism in Valle del Cauca, with a view to providing arguments for the creation of an effective public bilingualism policy. The data processed come from the 2005 extended census, and the results obtained from it are not encouraging, as they show the need to make bilingualism a priority on the public agenda.
12. Downward surface flux computations in a vertically inhomogeneous grey planetary atmosphere
Directory of Open Access Journals (Sweden)
Marcos Pimenta de Abreu
2008-03-01
13. Uncertainty determination in a custody transfer operation from vertical cylindrical storage tanks
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Elcio C.; Ferreira, Ana Luisa A.S. [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil); Orlando, Alcir F.; Val, Luiz G. do [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)
2004-07-01
The INMETRO/ANP 1 regulation (2000) presents rules to be followed for measuring and calibrating vertical cylindrical oil storage tanks in Brazil, according to the ISO 7507-1 (1993) standard. A methodology for estimating the uncertainty (95.45% confidence level) of the volume in a custody transfer process was developed, based on the ISO GUM (1998) standard. The strapping method was selected for this study because it has been used as a standard procedure by INMETRO, and the same uncertainty values suggested by the standard were used to estimate the uncertainty of the liquid volume in the tank. The study showed that the uncertainty of the liquid volume transferred from the tank varies from 0.2% to 0.4%, being smaller for larger volumes, which is thus the recommended application. The uncertainty of the ring height measurement is the largest contribution to the volume measurement uncertainty and must therefore be measured accurately; the uncertainty of the tank internal diameter is a small contribution. The paper calculates the uncertainty of the liquid volume transferred from the tank by three methods (this paper's, ISO 7507-1's and INMETRO's) and shows that the most important contribution to the measurement uncertainty is the density measurement uncertainty, which must be measured to within ± 0.0005 if the volume uncertainty is to remain in the 0.5% to 1% range. (author)
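A GUM-style uncertainty budget combines input uncertainties through the sensitivity coefficients (partial derivatives) of the measurement model. A minimal sketch for the volume of one course (ring) of a vertical cylindrical tank, V = πD²h/4; the input values and uncertainties are illustrative, not those of INMETRO or ISO 7507-1:

```python
import math

def ring_volume_uncertainty(d, u_d, h, u_h):
    """Volume of a tank course and its GUM combined standard uncertainty.
    d: internal diameter [m], h: ring height [m]; u_d, u_h: standard
    uncertainties of each (illustrative inputs)."""
    v = math.pi * d**2 * h / 4.0
    # Sensitivity coefficients: dV/dD = pi*D*h/2, dV/dh = pi*D^2/4.
    u_v = math.sqrt((math.pi * d * h / 2.0 * u_d) ** 2 +
                    (math.pi * d**2 / 4.0 * u_h) ** 2)
    U = 2.0 * u_v  # expanded uncertainty, coverage factor k=2 (~95.45 %)
    return v, u_v, U

# Hypothetical 30 m diameter tank, 2 m ring, 3 mm and 2 mm uncertainties:
v, u_v, U = ring_volume_uncertainty(d=30.0, u_d=0.003, h=2.0, u_h=0.002)
```

Summing the course volumes (and adding the density and level terms the abstract discusses) yields the 0.2% to 0.4% overall figure reported for the transferred volume.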
14. Horizontal and vertical distribution of phosphorus in exclusive soybean cropping and integrated crop-livestock-forest systems
Directory of Open Access Journals (Sweden)
Debora Diel
2014-08-01
15. The increase in chemical accidents: a challenge for public health
Directory of Open Access Journals (Sweden)
Carlos M. de Freitas
1995-12-01
16. Hydroponic strawberry production in a vertical column system under protected cultivation
Directory of Open Access Journals (Sweden)
Eunice Oliveira Calvete
2007-01-01
Full Text Available Since strawberry cultivation in a soilless system does not need disinfection products, it decreases fruit and environmental contamination, besides providing better use of the area and easier crop management. The objective of this work was to evaluate two irrigation systems (external drip stakes and internal self-compensating drippers) and two substrates (Horta 2 and Tabaco 1), with or without drainage, on the strawberry cultivar Oso Grande. The experiment was carried out under protected cultivation in vertical columns. The experimental design was randomized blocks with three replications, with the treatments arranged in a split-split-plot. Based on the yields obtained in the upper, middle and lower thirds of the columns, the most suitable irrigation system is external drip stakes with drainage at the lower end of the column. The substrates did not differ in yield, but Horta 2 increased the anthocyanin content of the fruits.
17. Treatment of scoliosis in children with cerebral palsy using the vertical expandable prosthetic titanium rib (VEPTR)
Directory of Open Access Journals (Sweden)
Kiyomori de Quental Tyba
2011-01-01
18. Assessment of explosive strength endurance in volleyball players through vertical jump tests
Directory of Open Access Journals (Sweden)
Jefferson Eduardo Hespanhol
2007-06-01
Full Text Available The aim of this study was to verify the differences between a continuous 60-second jump test (CJ60s) and an intermittent jump test of 4 sets of 15 seconds (IJ4x15s). The sample was composed of 10 male volleyball players (19.01 ± 1.36 years; 191.5 ± 5.36 cm; 81.74 ± 7.45 kg body mass), who participated as volunteers. The variables studied were the estimated peak power (PP), mean power (MP) and fatigue index (FI). Performance was measured through a vertical jump test lasting 60 seconds and through 4 sets of 15 seconds with 10 seconds of recovery between sets. The data were analyzed through descriptive statistics and the Wilcoxon test, with a significance level of p < 0.05. The continuous and intermittent jump tests presented significant differences in MP (p < 0.05), FI (p < 0.01), the number of vertical jumps in 60 seconds (p < 0.01) and the jump height in the 60-second exercise (p < 0.05). The MP found in IJ4x15s was significantly higher than in CJ60s in volleyball players.
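Repeated-jump tests of this kind are commonly scored with the Bosco formula, which estimates mean mechanical power per unit body mass from total flight time, test duration and jump count; single-jump height follows from flight time as h = gt²/8. The formulas are standard for contact-mat tests, but the sample values below are hypothetical, not data from this study:

```python
G = 9.81  # gravitational acceleration [m/s^2]

def bosco_mean_power(total_flight_s, test_s, n_jumps):
    """Bosco et al. repeated-jump mean mechanical power [W/kg]:
    P = g^2 * Tf * Tt / (4 * n * (Tt - Tf))."""
    return (G**2 * total_flight_s * test_s) / (
        4.0 * n_jumps * (test_s - total_flight_s))

def jump_height(flight_s):
    """Height of a single jump [m] from its flight time: h = g*t^2/8."""
    return G * flight_s**2 / 8.0

# Hypothetical 60 s test: 55 jumps with 28 s of cumulative flight time.
p_mean = bosco_mean_power(total_flight_s=28.0, test_s=60.0, n_jumps=55)
h = jump_height(0.5)   # a single jump with 0.5 s flight time
```

The fatigue index (FI) reported in such studies is then typically the relative decline of this power measure from the first to the last segment of the test.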
19. Relationship between caregivers' perception of the quality of vaccination care services and children's adherence to the Expanded Program on Immunization
Directory of Open Access Journals (Sweden)
William de Jesús Atehortua-Puerta
2015-06-01
20. Analysis of the occurrence of streams and low-level jets in the vertical wind profile of the lower atmosphere at Manaus (AM)
Directory of Open Access Journals (Sweden)
Cleber Souza Corrêa
2008-09-01
Full Text Available This study presents an analysis of the vertical structure of the low-level tropical atmosphere in the northern region of Brazil, using radiosonde data from Manaus. A dynamical model is described involving flows/low-level jets (LLJ) between the 950 hPa and 926 hPa levels, between the 860 hPa and 880 hPa levels (corresponding approximately to the 850 hPa intermediate level) and in a third, higher layer between the 800 hPa and 700 hPa levels (middle levels). These streams and jets characterize a dynamical process of intense energy and mass transport, creating a turbulent stratified structure that is very efficient at generating convection in the tropical region, demonstrating the influence of the Tropical Planetary Boundary Layer (TPBL) on mesoscale convection generation.
1. Use of the vertical expandable prosthetic titanium rib for treating congenital kyphosis in thoracic myelomeningocele patients
Directory of Open Access Journals (Sweden)
Guilherme Rebechi Zuiani
2009-09-01
2. Human and animal salmonellosis in Araraquara, São Paulo: prevalence of Shigella in human cases
Directory of Open Access Journals (Sweden)
Deise Pasetto Falcão
1975-10-01
Full Text Available After earlier observations indicating a low prevalence of Salmonella infections in cases of acute diarrhea in the city of Araraquara, São Paulo, an extended schedule of enrichment and isolation media was applied to investigate the presence of Salmonella organisms in 47 fecal specimens from cases of infantile gastroenteritis, 51 stool specimens from animals and 50 food samples of animal origin (meat and viscera). Salmonella was isolated in 6.3% of the gastroenteritis cases, 5.8% of the animal stool cultures and 8.0% of the food samples; among the foods, only pork liver gave positive results, with no isolations from beef products. The serotypes isolated were, in descending order of frequency, S. anatum, S. derby and S. daytona. Among the infantile gastroenteritis cases, Shigella organisms (12.7%) were twice as frequent as Salmonella (6.3%).
3. Global Vertical Reference Frame
Czech Academy of Sciences Publication Activity Database
Burša, Milan; Kenyon, S.; Kouba, J.; Šíma, Zdislav; Vatrt, V.; Vojtíšková, M.
2004-01-01
Vol. 33 (2004), pp. 404-407. ISSN 1436-3445. Institutional research plan: CEZ:AV0Z1003909. Keywords: geopotential W0; vertical systems; global vertical frame. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
4. Antitheater-documentary: truth and fiction in Conversas com meu pai, by Janaína Leite and Alexandre Dal Farra
Directory of Open Access Journals (Sweden)
Artur Kon
2017-12-01
Full Text Available This essay investigates how classifications such as "documentary theater" or "theaters of the real," much in vogue in recent theoretical discussions of the contemporary stage, are called into question, deconstructed and expanded by the 2014 São Paulo production Conversas com meu pai, a partnership between the actress Janaína Leite and the playwright Alexandre Dal Farra. To this end, the author draws on reflections by Jacques Rancière, Georges Didi-Huberman and Hans-Thies Lehmann on the encounters and separations between art and reality.
5. Vertical axis wind turbines
Science.gov (United States)
Krivcov, Vladimir [Miass, RU; Krivospitski, Vladimir [Miass, RU; Maksimov, Vasili [Miass, RU; Halstead, Richard [Rohnert Park, CA; Grahov, Jurij [Miass, RU
2011-03-08
A vertical axis wind turbine is described. The wind turbine can include a top ring, a middle ring and a lower ring, wherein a plurality of vertical airfoils are disposed between the rings. For example, three vertical airfoils can be attached between the upper ring and the middle ring. In addition, three more vertical airfoils can be attached between the lower ring and the middle ring. When wind contacts the vertically arranged airfoils the rings begin to spin. By connecting the rings to a center pole which spins an alternator, electricity can be generated from wind.
6. Bovine neosporosis: assessment of vertical transmission and the attributable fraction of abortion in a cattle population in the state of Rio Grande do Sul
Directory of Open Access Journals (Sweden)
Héber E. Hein
2012-05-01
7. Brazilian school handwriting: vertical writing
Directory of Open Access Journals (Sweden)
Carlos André Xavier Villela
2014-12-01
Full Text Available Handwriting examination essentially aims to determine whether or not two writings came from the same hand. In order to obtain empirical data capable of better supporting an assessment of rarity, several researchers have devoted themselves to the study of writing systems. This paper reviews Brazilian and foreign bibliographies that describe the dawn of vertical writing and its adoption in Brazilian schools. It was from the last decades of the 19th century onward that vertical cursive allographs began to be used for teaching handwriting. The new systems, under the name "vertical writing," had a strong presence throughout the Western world and remain the predominant systems in several countries to this day.
8. Vertical pump assembly
International Nuclear Information System (INIS)
Dohnal, M.; Rosel, J.; Skarka, V.
1988-01-01
The mounting of the drive assembly of a vertical pump for nuclear power plants in areas with seismic risk is described. The assembly is attached to the building floor using flexible and damping elements. The design allows the production of seismically resistant pumps without major design changes to existing types of vertical pumps. (E.S.). 1 fig
9. Potassium displacement simulation in vertical columns of unsaturated soil
Directory of Open Access Journals (Sweden)
Jarbas H. Miranda
2005-12-01
Full Text Available Water and solute transport studies in unsaturated soil are important from both economic and environmental points of view, particularly given the increasing agricultural use of urban and industrial residues and the need to save water resources and fertilizers. Computational modeling is valuable in this context because it allows precise and fast monitoring of solute displacement, which is necessary for preventing environmental impacts. The main objective of the present work was to simulate the displacement of the potassium ion in unsaturated soil columns using the MIDI model, and to present the determination of the potassium transport parameters in a sandy-phase Red-Yellow Latosol. The results showed that the model was able to adequately simulate the moisture profile and the potassium ion displacement.
10. Daily behavior throughout the feeding period of the first stage of a French vertical wetland system, in terms of organic matter and ammonia removal
Directory of Open Access Journals (Sweden)
Camila Maria Trein
2018-01-01
11. Coordination in vertical jumping
NARCIS (Netherlands)
Bobbert, Maarten F.; van Ingen Schenau, Gerrit Jan
1988-01-01
The present study was designed to investigate for vertical jumping the relationships between muscle actions, movement pattern and jumping achievement. Ten skilled jumpers performed jumps with preparatory countermovement. Ground reaction forces and cinematographic data were recorded. In addition,
12. Horizontal wooden floor diaphragms built with OSB panels and I-joists under vertical loading - DOI: 10.4025/actascitechnol.v29i2.579
Directory of Open Access Journals (Sweden)
Altevir Castro dos Santos
2008-02-01
This work addresses timber construction from the standpoint of light-frame systems; it presents a computational finite-element analysis of floor diaphragms and I-joists subjected to four-point bending tests. The overall objective is to evaluate the strength and stiffness of horizontal diaphragms built with light wood-frame systems when subjected to vertical loads. The analyses were performed with the SAP2000 software and evaluated the influence of the following parameters: the spacing between the joists that form the framing of the horizontal diaphragm, and the spacing between the nails fixing the subfloor, which is composed of OSB (Oriented Strand Board) panels. Finally, the results obtained from the numerical and theoretical analyses are compared, and some conclusions are presented.
13. The decline of collegiality in court decisions and the expanded powers of the rapporteur in civil appeals: an analysis in light of art. 557 of the CPC
OpenAIRE
Rosalina Freitas Martins de Sousa
2010-01-01
This work analyzes the decision-making powers of the rapporteur in civil appeals, in light of art. 557 of the CPC. To reduce the workload of the courts, from which would follow, at least a priori, faster processing of appeals in general and, consequently, a way to combat the slowness of justice, the rapporteur was given powers to decide appeals within the courts, without any need to submit the case to the collegiate body. According to the legal...
14. Hybrid vertical cavity laser
DEFF Research Database (Denmark)
Chung, Il-Sug; Mørk, Jesper
2010-01-01
A new hybrid vertical cavity laser structure for silicon photonics is suggested and numerically investigated. It incorporates a silicon subwavelength grating as a mirror and a lateral output coupler to a silicon ridge waveguide.
15. Within-plant distribution of Anthonomus grandis (Coleoptera: Curculionidae) feeding and oviposition damage in cotton cultivars
Directory of Open Access Journals (Sweden)
José Fernando Jurca Grigolli
2013-02-01
Knowledge of the feeding and oviposition behavior of the boll weevil in new cotton cultivars is essential for adequate management. The objective of this study was to evaluate the vertical distribution of squares punctured for feeding and oviposition of the pest in the cultivars NuOPAL, DeltaOPAL, FMT-701, FMX-910 and FMX-993, and to record the most and least preferred times of feeding and oviposition. The experiment was conducted in Jaboticabal, SP, Brazil, during the 2010/2011 season. The number of squares used for boll weevil feeding and oviposition was evaluated weekly in three parts of the plant canopy. Regardless of the cultivar, A. grandis preferred to lay eggs in squares located in the upper part of the canopy and to feed on squares in the middle and upper parts. The boll weevil preferred to feed on cultivar FMT-701 at the beginning of cotton flowering and fruiting, and on the cultivars NuOPAL, DeltaOPAL, FMX-910 and FMX-993 throughout the whole flowering and fruiting period. A. grandis preferred to lay eggs on cultivars NuOPAL, FMT-701 and FMX-993 at the beginning and end of flowering and fruiting, while the cultivars DeltaOPAL and FMX-910 were used for oviposition throughout that period.
16. WHAT IS A SECULAR ENVIRONMENT? (INTER)RELIGIOUS SPACES IN PUBLIC INSTITUTIONS
Directory of Open Access Journals (Sweden)
Emerson Giumbelli
2014-03-01
The text results from a study of religious spaces in public institutions in the city of Porto Alegre, Brazil. It follows the controversy raised by a proposal from the management of a public hospital to convert a Catholic chapel into a "spirituality space". References, within this controversy, to other experiences of creating ecumenical or interreligious spaces in the same city provide a broader panorama, covering two other hospitals, a shopping mall, and the city's airport. In seeking to understand the arguments and proposals at stake, the analysis discusses the notion of secularism, demonstrating the multiplicity of understandings and configurations that are manifested around it.
17. Vertical and horizontal subsidiarity
Directory of Open Access Journals (Sweden)
Ivan V. Daniluk
2016-02-01
This article attempts to analyze the principle of subsidiarity in its two main manifestations, vertical and horizontal; to outline the principles of relations between the state and the regions under vertical subsidiarity; and to describe the collaboration between government and civil society under horizontal subsidiarity. Scholars identify two types, or two levels, of the subsidiarity principle. Vertical (or territorial) subsidiarity concerns relations between the state and subnational levels of government, such as regions and local authorities; horizontal (or functional) subsidiarity concerns the relationship between the state and the citizen (and civil society). Vertical subsidiarity calls for distributing administrative responsibilities to the lowest appropriate level of the state structure, i.e., giving more powers to local government; higher bodies intervene in a subsidiary fashion only when a lower authority cannot cope on its own, i.e., when its powers are insufficient to achieve the goals. Horizontal subsidiarity operates within the relationship between power and freedom, and rests on the assumption that community members (as individuals and citizens' associations) are able to address the common good and the needs of the community, the role of government being limited, under horizontal subsidiarity, to assistance, programming, coordination, and possibly control.
18. Experimental analysis of ultrasonic signals in vertical upward air-water flow for void fraction measurement using neural networks
Energy Technology Data Exchange (ETDEWEB)
Nishida, Milton Y.; Massignan, Joao P.D.; Daciuk, Rafael J.; Neves Junior, Flavio; Arruda, Lucia V.R. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)
2008-07-01
Rheology of emulsion mixtures and void fraction measurement in multiphase flows require proper instrumentation. Sometimes it is not possible to install this instrumentation inside the pipe or to view the flow. Ultrasound technology has characteristics compatible with the requirements of the oil industry and can assist the production of heavy oil. This study provides important information for analyzing the feasibility of developing non-intrusive equipment; such probes can be used to measure multiphase void fraction and to detect the flow pattern using ultrasound. Experiments using simulated upward air-water vertical two-phase flow show that there is a correlation between acoustic attenuation and the concentration of the gas phase. Experimental data were obtained with a prototype developed for ultrasonic data acquisition; this information was processed and used as input parameters for a neural network classifier. Void fractions (α) between 0% and 16% were analyzed, in increments of 1%. The maximum error of the neural network in classifying the flow pattern was 6%.
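The mapping from acoustic attenuation to a void-fraction class can be caricatured with a single logistic neuron. This is a hedged sketch on synthetic data, not the authors' network, dataset, or feature set (every number below is invented):

```python
# Toy classifier: "low" vs "high" void fraction from a single
# synthetic attenuation feature, using one logistic neuron trained
# by stochastic gradient descent. Purely illustrative.
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training set: attenuation (arbitrary units) grows roughly
# linearly with void fraction 0%..16%; label 1 = "high void fraction".
data = [(0.5 + 0.3 * a + random.gauss(0.0, 0.05), 1 if a > 8 else 0)
        for a in range(17)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):            # SGD on the logistic loss
    for x, y in data:
        p = sigmoid(w * x + b)
        w += lr * (y - p) * x
        b += lr * (y - p)

def classify(attenuation):
    """Predict the class: 0 = low, 1 = high void fraction."""
    return 1 if sigmoid(w * attenuation + b) > 0.5 else 0

accuracy = sum(classify(x) == y for x, y in data) / len(data)
```

The data are linearly separable by construction, so the neuron learns a threshold near the class boundary; the real problem in the paper involves multi-channel features and more void-fraction bins.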
19. [Duane vertical surgical treatment].
Science.gov (United States)
Merino, M L; Gómez de Liaño, P; Merino, P; Franco, G
2014-04-01
We report 3 cases with a vertical incomitance in upgaze, narrowing of palpebral fissure, and pseudo-overaction of both inferior oblique muscles. Surgery consisted of an elevation of both lateral rectus muscles with an asymmetrical weakening. A satisfactory result was achieved in 2 cases, whereas a Lambda syndrome appeared in the other case. The surgical technique of upper-insertion with a recession of both lateral rectus muscles improved vertical incomitance in 2 of the 3 patients; however, a residual deviation remains in the majority of cases. Copyright © 2011 Sociedad Española de Oftalmología. Published by Elsevier Espana. All rights reserved.
20. Vertical market participation
DEFF Research Database (Denmark)
1998-01-01
Firms that operate at both levels of vertically related Cournot oligopolies will purchase some input supplies from independent rivals, even though they can produce the good at a lower cost, driving up input price for nonintegrated firms at the final good level. Foreclosure, which avoids this stra...
1. Vertical Protocol Composition
DEFF Research Database (Denmark)
Groß, Thomas; Mödersheim, Sebastian Alexander
2011-01-01
The security of key exchange and secure channel protocols, such as TLS, has been studied intensively. However, only a few works have considered what happens when the established keys are actually used, to run some protocol securely over the established "channel". We call this a vertical protocol ... i.e., that the combination cannot introduce attacks that the individual protocols in isolation do not have. In this work, we prove a composability result in the symbolic model that allows for arbitrary vertical composition (including self-composition). It holds for protocols from any suite of channel and application...
2. Isoline curves obtained from vertical aerophotos
OpenAIRE
Barros, Zacarias Xavier de; Campos, Sérgio; Cardoso, Lincoln Gehring; Pollo, Ronaldo Alberto
2000-01-01
The aim of this work is to obtain contour lines from vertical aerial photographs, using a linear correction chart, in areas with different slope classes. The statistical analysis of the data was carried out by multiple regressions of the variables horizontal error and vertical error as functions of the independent variables: altitude; and altitude and slope. The mean horizontal and vertical errors depend little on altitude, or on altitude and slope, ...
3. Cephalometric evaluation of anteroposterior and vertical changes in skeletal Class II patients treated with cervical or combined traction
Directory of Open Access Journals (Sweden)
Márlio Vinícius de Oliveira
2007-04-01
4. Vertical Search Engines
OpenAIRE
Curran, Kevin; Mc Glinchey, Jude
2017-01-01
This paper outlines the growth in popularity of vertical search engines, their origins, the differences between them and well-known broad based search engines such as Google and Yahoo. We also discuss their use in business-to-business, their marketing and advertising costs, what the revenue streams are and who uses them.
5. Vertical cavity laser
DEFF Research Database (Denmark)
2016-01-01
The present invention provides a vertical cavity laser comprising a grating layer comprising an in-plane grating, the grating layer having a first side and having a second side opposite the first side and comprising a contiguous core grating region having a grating structure, wherein an index...
6. Global Vertical Reference Frame
Czech Academy of Sciences Publication Activity Database
Burša, Milan; Kenyon, S.; Kouba, J.; Šíma, Zdislav; Vatrt, V.; Vojtíšková, M.
-, č. 5 (2009), s. 53-63 ISSN 1801-8483 R&D Projects: GA ČR GA205/08/0328 Institutional research plan: CEZ:AV0Z10030501 Keywords : sea surface topography * satellite altimetry * vertical frames Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
7. Effects of a neuromuscular training program on the maximal oxygen consumption and vertical jump in beginning volleyball players
Directory of Open Access Journals (Sweden)
Alexandre Altini Neto
2006-02-01
8. Organ and tissue donation: the relationship with the body in our society
OpenAIRE
Roza,Bartira De Aguiar; Garcia,Valter Duro; Barbosa,Sayonara de Fátima Faria; Mendes,Karina Dal Sasso; Schirmer,Janine
2010-01-01
This literature review aimed to offer theoretical considerations on organ and tissue donation and its relationship with the body in our society. Increasing the donation rate depends on a view that goes beyond the technical questions of the organ and tissue donation process. Several countries with long experience, which work systematically on this process, have incorporated a social approach and an ethical perspective based on the voluntarism of families and on respect for the right...
9. Vertical steam generator
International Nuclear Information System (INIS)
Cuda, F.; Kondr, M.; Kresta, M.; Kusak, V.; Manek, O.; Turon, S.
1982-01-01
A vertical steam generator for nuclear power plants and dual-purpose power plants consists of a cylindrical vessel containing heating tubes in the form of an inverted U. The heating tubes lead to the jacket of a cylindrical collector placed in the lower part of the steam generator, perpendicular to its vertical axis. The cylindrical collector is divided by a longitudinal partition into inlet and outlet primary-water sections for the heating tubes: one end of each heating tube opens into the section that feeds primary water, and the other end into the section that takes primary water away from the heating tubes.
10. Vertical organic transistors.
Science.gov (United States)
Lüssem, Björn; Günther, Alrun; Fischer, Axel; Kasemann, Daniel; Leo, Karl
2015-11-11
Organic switching devices such as field effect transistors (OFETs) are a key element of future flexible electronic devices. So far, however, a commercial breakthrough has not been achieved because these devices usually lack in switching speed (e.g. for logic applications) and current density (e.g. for display pixel driving). The limited performance is caused by a combination of comparatively low charge carrier mobilities and the large channel length caused by the need for low-cost structuring. Vertical Organic Transistors are a novel technology that has the potential to overcome these limitations of OFETs. Vertical Organic Transistors allow to scale the channel length of organic transistors into the 100 nm regime without cost intensive structuring techniques. Several different approaches have been proposed in literature, which show high output currents, low operation voltages, and comparatively high speed even without sub-μm structuring technologies. In this review, these different approaches are compared and recent progress is highlighted.
11. Case report: vertical dengue infection
Directory of Open Access Journals (Sweden)
Samara L. C. Maroun
2008-12-01
12. Vertical axis wind turbine
International Nuclear Information System (INIS)
Obretenov, V.; Tsalov, T.; Chakarov, T.
2012-01-01
In recent years, interest in vertical-axis wind turbines has noticeably increased. They have some important advantages: low cost, relatively simple structure, reliable packaging of the wind-turbine assembly, long maintenance-free periods, low noise, and independence of wind direction. Their relatively low efficiency, however, makes them suitable mainly for small facilities. This work presents a methodology and software for the approximate aerodynamic design of wind turbines of this type, and analyzes the possibility of improving the efficiency of their workflow.
13. Vertical vector face lift.
Science.gov (United States)
Somoano, Brian; Chan, Joanna; Morganroth, Greg
2011-01-01
Facial rejuvenation using local anesthesia has evolved in the past decade as a safer option for patients seeking fewer complications and minimal downtime. Mini- and short-scar face lifts using more conservative incision lengths and extent of undermining can be effective in the younger patient with lower face laxity and minimal loose, elastotic neck skin. By incorporating both an anterior and posterior approach and using an incision length between the mini and more traditional face lift, the Vertical Vector Face Lift can achieve longer-lasting and natural results with lesser cost and risk. Submentoplasty and liposuction of the neck and jawline, fundamental components of the vertical vector face lift, act synergistically with superficial musculoaponeurotic system plication to reestablish a more youthful, sculpted cervicomental angle, even in patients with prominent jowls. Dramatic results can be achieved in the right patient by combining with other procedures such as injectable fillers, chin implants, laser resurfacing, or upper and lower blepharoplasties. © 2011 Wiley Periodicals, Inc.
14. Vertical organic transistors
International Nuclear Information System (INIS)
Lüssem, Björn; Günther, Alrun; Fischer, Axel; Kasemann, Daniel; Leo, Karl
2015-01-01
Organic switching devices such as field effect transistors (OFETs) are a key element of future flexible electronic devices. So far, however, a commercial breakthrough has not been achieved because these devices usually lack in switching speed (e.g. for logic applications) and current density (e.g. for display pixel driving). The limited performance is caused by a combination of comparatively low charge carrier mobilities and the large channel length caused by the need for low-cost structuring. Vertical Organic Transistors are a novel technology that has the potential to overcome these limitations of OFETs. Vertical Organic Transistors allow to scale the channel length of organic transistors into the 100 nm regime without cost intensive structuring techniques. Several different approaches have been proposed in literature, which show high output currents, low operation voltages, and comparatively high speed even without sub-μm structuring technologies. In this review, these different approaches are compared and recent progress is highlighted. (topical review)
15. Rise and fall of the populist pact in Cuba, 1934-1959
Directory of Open Access Journals (Sweden)
Gillian McGillivray
2012-01-01
The regime that put an end to the "100 days of reform" in Cuba is frequently labeled a "counterrevolution" when, in truth, the more appropriate term would be "authoritarian populism". The new regime did not reverse the Revolution of 1933; on the contrary, its leaders combined violence with revolutionary reforms as a way to compulsorily incorporate an ever larger number of people into a new and expanded state system of leadership. Fulgencio Batista received the support of part of the working class throughout the democratic period of the Second World War, but Cold War anticommunism destabilized his regime, emptying Cuban populism of much of its substance.
16. Profile of occupational accidents at an oil refinery
Directory of Open Access Journals (Sweden)
Carlos Augusto Vaz de Souza
2002-10-01
17. Performance of a vertical wetland applied to the treatment of effluent from an anaerobic filter in a light greywater treatment plant, aiming at non-potable reuse in residential buildings
OpenAIRE
Sarnaglia, Solange Aparecida Alho
2014-01-01
In greywater treatment aiming at reuse in buildings, wetlands have proved to be a viable option because of their good pollutant removal, low implementation and operating costs, and low environmental impact compared with other systems. The present study aimed to characterize, physico-chemically and microbiologically, the light greywater generated in a university building, and to evaluate the influence of the hydraulic and organic loads on the removal of organic matter, turbidity and ...
18. Phytosociology in agroforestry systems of different ages in the town of Medicilândia, Pará, Brazil
Directory of Open Access Journals (Sweden)
Fábio Miranda Leão
2017-03-01
...for the structural analysis, a forest census was carried out in the three agroforestry systems, inventorying all tree individuals with diameter at breast height > 10 cm. For the analysis of the horizontal structure, the absolute and relative phytosociological parameters of density and dominance were considered. The absolute and relative parameters of sociological position and natural regeneration were calculated for the vertical structural analysis of the systems. For the Expanded Importance Value Index (IVIA), all relative vertical and horizontal parameters were summed. The agroforestry systems showed a reverse-J diameter distribution. Being key species in the agroforestry planting, Swietenia macrophylla and Tabebuia impetiginosa were the most important species in all systems. Managing the natural regeneration favored the establishment of commercially valuable species that were not part of the initial arrangement of the systems, such as Bagassa guianensis, Tabebuia serratifolia, Schizolobium amazonicum and Dipteryx odorata, indicating economic and ecological sustainability in these systems.
19. The online distribution strategy of luxury products and their vertical extensions
OpenAIRE
Marliot, Sylvain Jean-Claude
2014-01-01
An important trend in the luxury market is brand extension into a new market segment through so-called vertical extension, which can be upward or downward. In other words, the organization starts operating in a new segment within the same product category, but with a different target audience from that of its original brand. In this process, the company begins activity in a new segment with a different level of luxury. Distribution is a fundamental aspect of the...
20. Vertical guidance of shearers
International Nuclear Information System (INIS)
Pocock, J.
1985-01-01
Mining Engineers have always been aware of the basic need to avoid contamination of the mined product, by controlling the cutting horizon at the coal face. The ability to maintain the optimum cutting horizon results in more effective roof control and ensures a safer and more efficient working environment, for men and machinery. The cost of treatment in the surface coal preparation plant is reduced. Transportation through the total mine system of material finally destined for the spoil heap is minimised. A reduction in product contamination is achieved and makes more effective use of the mine capacity. These benefits make possible significant improvements in productivity and financial returns. Exploitation of micro computer based systems has enabled the successful development of equipment which employs sensors to detect the very low natural gamma radiation from roof strata; to determine and allow control of the position of the cut relative to the roof and floor. This paper reviews the experience gained by the National Coal Board, particularly in South Yorkshire Area, with the vertical steering of ranging drum shearers. It outlines the benefits and considers the future for this technology and its contribution to total coal face automation
1. WASTEWATER TREATMENT BY ANAEROBIC REACTORS OF VERTICAL PARALLEL PLATES IN ACRYLIC
Directory of Open Access Journals (Sweden)
Guillermo Chaux F
2011-12-01
Some anaerobic filters with stone beds built in the department of Cauca (Colombia) are presenting clogging problems. If the stone is replaced by vertical parallel plates, the clogging problem is eliminated. This paper presents the development and results of a laboratory-scale study that evaluated the potential of anaerobic reactors with vertical parallel acrylic plates to remove contaminants (organic matter and suspended solids). The acrylic parallel-plate anaerobic reactor served as secondary treatment; it was fed with wastewater from the effluent of an Imhoff tank, with mean concentrations of 156 ± 14 mg/L BOD5, 438 ± 32 mg/L COD and 98 ± 22 mg/L total suspended solids. COD and BOD5 removals in the reactor exceeded 50%, and suspended-solids removal exceeded 60%, for detention times of 24 hours. The ease of operating the reactor makes it viable as an anaerobic biological treatment for previously settled wastewater.
2. Trade Liberalisation and Vertical Integration
DEFF Research Database (Denmark)
Bache, Peter Arendorf; Laugesen, Anders
We build a three-country model of international trade in final goods and intermediate inputs and study the relation between different types of trade liberalisation and vertical integration. Firms are heterogeneous with respect to both productivity and factor intensity, as observed in data. Final-good producers face decisions on exporting, vertical integration of intermediate-input production, and whether the intermediate-input production should be offshored to a low-wage country. We find that the fractions of final-good producers that pursue either vertical integration, offshoring, or exporting are all increasing when intermediate-input or final-goods trade is liberalised and when the fixed cost of vertical integration is reduced. At the same time, one observes firms that shift away from either vertical integration, offshoring, or exporting. Further, we provide guidance for testing the open...
3. Inservice testing of vertical pumps
International Nuclear Information System (INIS)
Cornman, R.E. Jr.; Schumann, K.E.
1994-01-01
This paper focuses on the problems that may occur with vertical pumps while inservice tests are conducted in accordance with existing American Society of Mechanical Engineers Code, Section XI, standards. The vertical pump types discussed include single stage, multistage, free surface, and canned mixed flow pumps. Primary emphasis is placed on the hydraulic performance of the pump and the internal and external factors to the pump that impact hydraulic performance. In addition, the paper considers the mechanical design features that can affect the mechanical performance of vertical pumps. The conclusion shows how two recommended changes in the Code standards may increase the quality of the pump's operational readiness assessment during its service life
4. Vertical axis wind turbine airfoil
Science.gov (United States)
2012-12-18
A vertical axis wind turbine airfoil is described. The wind turbine airfoil can include a leading edge, a trailing edge, an upper curved surface, a lower curved surface, and a centerline running between the upper surface and the lower surface and from the leading edge to the trailing edge. The airfoil can be configured so that the distance between the centerline and the upper surface is the same as the distance between the centerline and the lower surface at all points along the length of the airfoil. A plurality of such airfoils can be included in a vertical axis wind turbine. These airfoils can be vertically disposed and can rotate about a vertical axis.
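The geometric property claimed above (upper and lower surfaces equidistant from the centerline at every station) is simply a symmetric airfoil. As an illustration, not the patented profile, here is the standard NACA four-digit symmetric thickness distribution together with a check of that property:

```python
# Symmetric airfoil sketch using the standard NACA 00xx half-thickness
# distribution; illustrates (but is not) the patent's symmetric profile.
import math

def naca00xx(t, n=50):
    """Upper and lower surfaces of a symmetric NACA 00xx airfoil.

    t is the maximum thickness as a fraction of chord (0.12 for NACA 0012).
    Returns (x, y_upper, y_lower); the chord runs from x = 0 to x = 1.
    """
    xs = [i / (n - 1) for i in range(n)]
    half = [5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                       - 0.3516 * x ** 2 + 0.2843 * x ** 3
                       - 0.1015 * x ** 4) for x in xs]
    return xs, half, [-h for h in half]

xs, y_up, y_lo = naca00xx(0.12)

# The property from the abstract: distance from centerline to the upper
# surface equals the distance to the lower surface at every station.
symmetric = all(abs(u + l) < 1e-12 for u, l in zip(y_up, y_lo))
```

For NACA 0012 the maximum half-thickness is about 0.06 of chord near 30% chord, and the trailing edge is slightly open, both standard features of this thickness equation.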
5. Trade Liberalisation and Vertical Integration
DEFF Research Database (Denmark)
Bache, Peter Arendorf; Laugesen, Anders Rosenstand
We build a three-country model of international trade in final goods and intermediate inputs and study the relation between four different types of trade liberalisation and vertical integration. Firms are heterogeneous with respect to both productivity and factor (headquarter) intensity. Final-good producers face decisions on exporting, vertical integration of intermediate-input production, and whether the intermediate-input production should be offshored to a low-wage country. We find that the fractions of final-good producers that pursue either vertical integration, offshoring, or exporting are all increasing when intermediate-input trade or final-goods trade is liberalised. Finally, we provide guidance for testing the open-economy property rights theory of the firm using firm-level data and surprisingly show that the relationship between factor (headquarter) intensity and the likelihood of vertical...
6. The TEXT upgrade vertical interferometer
International Nuclear Information System (INIS)
Hallock, G.A.; Gartman, M.L.; Li, W.; Chiang, K.; Shin, S.; Castles, R.L.; Chatterjee, R.; Rahman, A.S.
1992-01-01
A far-infrared interferometer has been installed on TEXT upgrade to obtain electron density profiles. The primary system views the plasma vertically through a set of large (60-cm radial × 7.62-cm toroidal) diagnostic ports. A 1-cm channel spacing (59 channels total) and fast electronic time response are used to provide high resolution for radial profiles and perturbation experiments. Initial operation of the vertical system was obtained late in 1991, with six operating channels.
7. A Produção Científica de Custos: Análise das Publicações em Periódicos Nacionais de Contabilidade sob a perspectiva das Redes Sociais e da Bibliometria
Directory of Open Access Journals (Sweden)
2012-12-01
8. O Papel da Psicoterapia de Grupo na Formação do Residente em Psiquiatria
Directory of Open Access Journals (Sweden)
Cláudia de Paula Juliano Souza
Full Text Available ABSTRACT The aim of this study was to analyze the role of group psychotherapy in the training of psychiatry residents in the Medical Residency Program in Psychiatry of the Universidade Federal de Goiás. This is an exploratory descriptive study with a qualitative approach in medical education. Data were collected through descriptive reports and semi-structured interviews, which were submitted to thematic-categorical content analysis. Two categories emerged from the data analysis: the educational actions of teaching group psychotherapy and the social actions of teaching group psychotherapy. In the analysis of the first category, we obtained five subcategories: physician-patient relationship, cognitive learning, affective learning, interdisciplinary dialogue, and personal development. In the second category, we obtained two subcategories: socialization and encounter. We conclude that the teaching of group psychotherapy has an educational role, since it contributes to the innovation of practice settings, enabling changes in the resident physician-patient relationship and consolidating the broadened concept of health from the perspective of comprehensive care. It also reveals a social role, since it contributes to a socio-interactive rapprochement between preceptor, resident and group.
9. Evaluation of HER2 Gene Amplification in Breast Cancer Using Nuclei Microarray in Situ Hybridization
Directory of Open Access Journals (Sweden)
Xuefeng Zhang
2012-05-01
Full Text Available Fluorescence in situ hybridization (FISH) assay is considered the “gold standard” in evaluating HER2/neu (HER2) gene status. However, FISH detection is costly and time consuming. Thus, we established a nuclei microarray with extracted intact nuclei from paraffin-embedded breast cancer tissues for FISH detection. The nuclei microarray FISH (NMFISH) technology serves as a useful platform for analyzing the HER2 gene/chromosome 17 centromere ratio. We examined HER2 gene status in 152 cases of surgically resected invasive ductal carcinomas of the breast with FISH and NMFISH. HER2 gene amplification status was classified according to the guidelines of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP). Comparison of the cut-off values for the HER2/chromosome 17 centromere copy number ratio obtained by NMFISH and FISH showed that there was almost perfect agreement between the two methods (κ coefficient 0.920). The results of the two methods were almost consistent for the evaluation of HER2 gene counts. The present study proved that NMFISH is comparable with FISH for evaluating HER2 gene status. The use of nuclei microarray technology is highly efficient, time and reagent conserving, and inexpensive.
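The "almost perfect agreement (κ coefficient 0.920)" quoted above is Cohen's kappa, which corrects raw agreement between two methods for chance agreement. A sketch of the computation on hypothetical per-case calls (the paper's 152-case data are not reproduced here); the 2.0 ratio cutoff is the commonly cited ASCO/CAP value and is used only for illustration:

```python
from collections import Counter

def her2_status(ratio, cutoff=2.0):
    """Classify a HER2/CEP17 copy-number ratio. The 2.0 cutoff is the
    commonly cited ASCO/CAP value, used illustratively here."""
    return "amplified" if ratio >= cutoff else "not amplified"

def cohens_kappa(calls_a, calls_b):
    """Chance-corrected agreement between two sets of categorical calls."""
    n = len(calls_a)
    observed = sum(a == b for a, b in zip(calls_a, calls_b)) / n
    ca, cb = Counter(calls_a), Counter(calls_b)
    expected = sum(ca[c] / n * cb[c] / n for c in set(ca) | set(cb))
    return (observed - expected) / (1.0 - expected)

# Hypothetical FISH vs. NMFISH ratios for six cases.
fish = [1.2, 3.5, 1.8, 4.1, 1.1, 2.6]
nmfish = [1.3, 3.4, 2.1, 4.0, 1.0, 2.7]
kappa = cohens_kappa([her2_status(r) for r in fish],
                     [her2_status(r) for r in nmfish])
# kappa ~ 0.667 for these illustrative calls (5/6 raw agreement)
```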
10. Análise de um acidente por contaminação fúngica em uma biblioteca pública no município do Rio de Janeiro
Directory of Open Access Journals (Sweden)
Maria Cristina Strausz
11. O desempenho terminológico dos descritores em Ciência da Informação do Vocabulário Controlado do SIBi/USP nos processos de indexação manual, automática e semi-automática
Directory of Open Access Journals (Sweden)
Vania Mara Alves Lima
Full Text Available We evaluated the terminological performance, in the manual, automatic and semi-automatic indexing processes, of the descriptors of the SIBi/USP Controlled Vocabulary that represent the domain of Information Science. We conclude that, in order to adequately represent the content of the indexed corpus, the current Information Science descriptors of the SIBi/USP Controlled Vocabulary must be expanded and contextualized through terminological definitions, so as to meet the information needs of their users.
12. A New Natural Lactone from Dimocarpus longan Lour. Seeds
Directory of Open Access Journals (Sweden)
Zhongjun Li
2012-08-01
Full Text Available A new natural product named longanlactone was isolated from Dimocarpus longan Lour. seeds. Its structure was determined as 3-(2-acetyl-1H-pyrrol-1-yl)-5-(prop-2-yn-1-yl)dihydrofuran-2(3H)-one by spectroscopic methods and HRESIMS.
13. Reference Gene Selection in the Desert Plant Eremosparton songoricum
Directory of Open Access Journals (Sweden)
Dao-Yuan Zhang
2012-06-01
Full Text Available Eremosparton songoricum (Litv.) Vass. (E. songoricum) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water-deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various E. songoricum plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. EsEF and Esα-TUB were sufficient for various stress conditions, EsEF and EsACT were suitable for samples of differing germination stages, and EsGAPDH and EsUBQ were most stable across multiple adult tissue samples. The Es18S gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress-related transcription factor EsDREB2 verified the utility of E. songoricum reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in E. songoricum, and these data will facilitate future work on gene expression in this species.
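GeNorm ranks candidate reference genes by a stability measure M: for each gene, the average standard deviation of the log-transformed expression ratios against every other candidate, with lower M meaning more stable. A small sketch of that measure on toy expression values (not data from the paper):

```python
import math
from statistics import stdev

def genorm_m(expr):
    """geNorm stability M for each gene: the mean, over all other genes,
    of the standard deviation of the pairwise log2 expression ratios
    across samples. Lower M = more stable reference gene."""
    m = {}
    for g, vg in expr.items():
        sds = [stdev([math.log2(a / b) for a, b in zip(vg, vh)])
               for h, vh in expr.items() if h != g]
        m[g] = sum(sds) / len(sds)
    return m

# Toy data: gene_a and gene_b co-vary perfectly; gene_c does not.
expr = {"gene_a": [1.0, 2.0, 4.0],
        "gene_b": [2.0, 4.0, 8.0],
        "gene_c": [1.0, 1.0, 1.0]}
m = genorm_m(expr)
# gene_c gets the largest M, i.e. it is the least stable candidate.
```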
14. Vertical distribution of pelagic photosynthesis
DEFF Research Database (Denmark)
Lyngsgaard, Maren Moltke
chlorophyll maxima (DCM) to be a general feature in the ocean. Today, it is generally accepted that DCMs occur in most of our oceans. Still, despite this empirical knowledge, subsurface primary production is largely ignored in marine science. The work included in this PhD examines the vertical...... each of the three regions combined with 15 years of survey data for the Baltic Sea transition zone. Overall, the results of this PhD work show that the vertical distribution of phytoplankton and their activity is important for the understanding, dynamics and functioning of pelagic ecosystems. It, thus......, emphasizes that future research and modelling exercises aimed at improving understanding of pelagic ecosystems and their role in the global ocean should include a consideration of the vertical heterogeneity in phytoplankton distributions and activity....
15. Neglected locked vertical patellar dislocation
Science.gov (United States)
Gupta, Rakesh Kumar; Gupta, Vinay; Sangwan, Sukhbir Singh; Kamboj, Pradeep
2012-01-01
Patellar dislocations occurring about the vertical and horizontal axis are rare and irreducible. The neglected patellar dislocation is still rarer. We describe the clinical presentation and management of a case of neglected vertical patellar dislocation in a 6-year-old boy who sustained an external rotational strain with a laterally directed force to his knee. Initially the diagnosis was missed and 2 months later open reduction was done. The increased tension generated by the rotation of the lateral extensor retinaculum kept the patella locked in the lateral gutter even with the knee in full extension. Traumatic patellar dislocation with rotation around a vertical axis has been described earlier, but no such neglected case has been reported to the best of our knowledge. PMID:23162154
16. Neglected locked vertical patellar dislocation
Directory of Open Access Journals (Sweden)
Rakesh Kumar Gupta
2012-01-01
Full Text Available Patellar dislocations occurring about the vertical and horizontal axis are rare and irreducible. The neglected patellar dislocation is still rarer. We describe the clinical presentation and management of a case of neglected vertical patellar dislocation in a 6-year-old boy who sustained an external rotational strain with a laterally directed force to his knee. Initially the diagnosis was missed and 2 months later open reduction was done. The increased tension generated by the rotation of the lateral extensor retinaculum kept the patella locked in the lateral gutter even with the knee in full extension. Traumatic patellar dislocation with rotation around a vertical axis has been described earlier, but no such neglected case has been reported to the best of our knowledge.
17. Building a progressive vertical integration
International Nuclear Information System (INIS)
Charette, D.
2008-01-01
AAER Inc. is a Quebec-based company that manufactures turbines using proven European designs. This presentation discussed the company's business model. The company places an emphasis on identifying strategic and key components currently available for its turbines. Market analyses are performed in order to determine ideal suppliers and define business strategies and needs. The company invests in long-term relationships with its suppliers. Business partners for AAER are of a similar size and have a mutual understanding and respect for the company's business practices. Long-term agreements with suppliers are signed in order to ensure reliability and control over costs. Progressive vertical integration has been achieved by progressively manufacturing key components and integrating a North American supply chain. The company's secure supply chain and progressive vertical integration has significantly reduced financial costs and provided better quality control. It was concluded that vertical integration has also allowed AAER to provide better customer service and reduce transportation costs. tabs., figs
18. Hybrid Vertical-Cavity Laser
DEFF Research Database (Denmark)
2010-01-01
The present invention provides a light source (2) for light circuits on a silicon platform (3). A vertical laser cavity is formed by a gain region (101) arranged between a top mirror (4) and a bottom grating-mirror (12) in a grating region (11) in a silicon layer (10) on a substrate. A waveguide...... (18, 19) for receiving light from the grating region (11) is formed within or to be connected to the grating region, and functions as an output coupler for the VCL. Thereby, vertical lasing modes (16) are coupled to lateral in-plane modes (17, 20) of the in-plane waveguide formed in the silicon...
19. SECADOR VERTICAL SOLAR PARA AMÊNDOAS DE CACAU
Directory of Open Access Journals (Sweden)
Everton Costa Santos
2014-12-01
Full Text Available This article presents the simulation of a vertical solar dryer and its efficiency relative to the traditional method. Using a computational program, the geometry and the thermal and mechanical effects are obtained. A simulation is then performed for heat transfer via conduction, convection and radiation. As a reliability test, we compare our results with the data simulated for the traditional drying barges.
20. Vertical reactor coolant pump instabilities
International Nuclear Information System (INIS)
Jones, R.M.
1985-01-01
The investigation conducted at the Tennessee Valley Authority's Sequoyah Nuclear Power Plant to determine and correct increasing vibrations in the vertical reactor coolant pumps is described. Diagnostic procedures to determine the vibration causes and evaluate the corrective measures taken are also described.
1. Zonação de comunidade bêntica do entremarés em molhes sob diferente hidrodinamismo na costa norte do estado do Rio de Janeiro, Brasil Zonation of intertidal benthic communities on breakwaters of different hydrodynamics in the north coast of the state of Rio de Janeiro, Brazil
Directory of Open Access Journals (Sweden)
Bruno P. Masi
2008-12-01
2. Synthesis, Crystal Structure and Luminescent Property of Cd(II) Complex with N-Benzenesulphonyl-L-leucine
Directory of Open Access Journals (Sweden)
Xishi Tai
2012-09-01
Full Text Available A new trinuclear Cd(II) complex [Cd3(L)6(2,2-bipyridine)3] [L = N-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single-crystal diffraction analysis. The results show that the complex belongs to the orthorhombic system, space group P212121, with a = 16.877(3) Å, b = 22.875(5) Å, c = 29.495(6) Å, α = β = γ = 90°, V = 11387(4) Å3, Z = 4, Dc = 1.416 Mg·m−3, μ = 0.737 mm−1, F(000) = 4992, and final R1 = 0.0390, ωR2 = 0.0989. The complex comprises two seven-coordinate Cd(II) atoms, with a distorted N2O5 pentagonal-bipyramidal coordination environment, and one six-coordinate Cd(II) atom, with a distorted N2O4 octahedral coordination environment. The molecules form a one-dimensional chain structure through the interaction of bridging carboxylato groups, hydrogen bonds and π-π interactions of 2,2-bipyridine. The luminescent properties of the Cd(II) complex and N-benzenesulphonyl-L-leucine in the solid state and in CH3OH solution have also been investigated.
3. Vivências de familiares de crianças internadas em um Serviço de Pronto-Socorro
Directory of Open Access Journals (Sweden)
Ana Maria Ribeiro dos Santos
2011-04-01
Full Text Available Childhood is a stage that demands considerable attention from the family and from the health service, since children, besides depending on family members, are vulnerable to their environment. The objectives were to describe the experiences of family members of children hospitalized in an emergency department, to discuss how these experiences influence the family's daily life, and to report the aspects that interfere with nursing care. This is a descriptive study with a qualitative approach, carried out in a private emergency hospital. Data were produced through interviews with ten family members and submitted to thematic analysis, from which three categories were elaborated: the family member's experiences, changes in the family's daily life, and faith and family closeness acting as facilitating agents. It was concluded that the accompanying family member undergoes adaptations when experiencing the hospitalization, with changes in the family routine. Given the conflicts experienced by the family member, nursing should understand him or her as a subject of expanded care.
5. Análise de propostas de gestão de riscos em ambientes com atividades envolvendo nanomateriais
Directory of Open Access Journals (Sweden)
2013-11-01
Full Text Available The handling of nanomaterials poses enormous challenges for risk management in research and in the production of new materials, and the data on the impacts of these new materials on human health and the environment still need to be expanded. Several efforts have been made to mitigate the adverse effects and to offer guidelines for managing the risks associated with nanomaterials. This article aims to provide a broad, comparative view of the main proposals in the literature. The methodology used was a systematic analysis covering 17 proposals for risk management of nanomaterials. The results indicate that, although there is no consensus on the metrics used to characterize the risks of nanomaterials, the adoption of the Precautionary Principle, of the control-banding approach, and of stakeholder participation stands out among the documents analyzed.
6. Coexistence of Strategic Vertical Separation and Integration
DEFF Research Database (Denmark)
Jansen, Jos
2003-01-01
This paper gives conditions under which vertical separation is chosen by some upstream firms, while vertical integration is chosen by others in the equilibrium of a symmetric model. A vertically separating firm trades off fixed contracting costs against the strategic benefit of writing a (two......-part tariff, exclusive dealing) contract with its retailer. Coexistence emerges when more than two vertical Cournot oligopolists supply close substitutes. When vertical integration and separation coexist, welfare could be improved by reducing the number of vertically separating firms. The scope...
7. Vertical and lateral heterogeneous integration
Science.gov (United States)
Geske, Jon; Okuno, Yae L.; Bowers, John E.; Jayaraman, Vijay
2001-09-01
A technique for achieving large-scale monolithic integration of lattice-mismatched materials in the vertical direction and the lateral integration of dissimilar lattice-matched structures has been developed. The technique uses a single nonplanar direct-wafer-bond step to transform vertically integrated epitaxial structures into lateral epitaxial variation across the surface of a wafer. Nonplanar wafer bonding is demonstrated by integrating four different unstrained multi-quantum-well active regions lattice matched to InP on a GaAs wafer surface. Microscopy is used to verify the quality of the bonded interface, and photoluminescence is used to verify that the bonding process does not degrade the optical quality of the laterally integrated wells. The authors propose this technique as a means to achieve greater levels of wafer-scale integration in optical, electrical, and micromechanical devices.
8. Kinematic Fitting of Detached Vertices
Energy Technology Data Exchange (ETDEWEB)
Mattione, Paul [Rice Univ., Houston, TX (United States)
2007-05-01
The eg3 experiment at the Jefferson Lab CLAS detector aims to determine the existence of the $\Xi_{5}$ pentaquarks and investigate the excited $\Xi$ states. Specifically, the exotic $\Xi_{5}^{--}$ pentaquark will be sought by first reconstructing the $\Xi^{-}$ particle through its weak decays, $\Xi^{-}\to\pi^{-}\Lambda$ and $\Lambda\to p\pi^{-}$. A kinematic fitting routine was developed to reconstruct the detached vertices of these decays, where confidence-level cuts on the fits are used to remove background events. Prior to fitting these decays, the exclusive reaction $\gamma D\to pp\pi^{-}$ was studied in order to correct the track measurements and covariance matrices of the charged particles. The $\Lambda\to p\pi^{-}$ and $\Xi^{-}\to\pi^{-}\Lambda$ decays were then investigated to demonstrate that the kinematic fitting routine reconstructs the decaying particles and their detached vertices correctly.
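The confidence-level cut mentioned above converts each fit's χ² into a fit probability and discards candidates whose probability is too low. A sketch under stated assumptions: an even number of degrees of freedom (where the χ² survival function has a simple closed form) and a hypothetical 1% cut; the actual ndf and cut value of the eg3 fits are not given in the abstract.

```python
import math

def chi2_prob_even_ndf(x, ndf):
    """P(chi2 >= x) for an even number of degrees of freedom ndf = 2k,
    using the closed form exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    assert ndf > 0 and ndf % 2 == 0
    term, total = 1.0, 0.0
    for i in range(ndf // 2):
        if i:
            term *= (x / 2.0) / i  # builds (x/2)^i / i! incrementally
        total += term
    return math.exp(-x / 2.0) * total

def keep_vertex(chi2, ndf, cl_cut=0.01):
    """Keep a fitted detached vertex only if its fit probability exceeds
    the confidence-level cut (cut value hypothetical)."""
    return chi2_prob_even_ndf(chi2, ndf) > cl_cut

print(keep_vertex(2.0, 2))   # a good fit survives the cut
print(keep_vertex(20.0, 2))  # a poor fit is rejected as background
```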
9. Interference Lithography for Vertical Photovoltaics
Science.gov (United States)
Balls, Amy; Pei, Lei; Kvavle, Joshua; Sieler, Andrew; Schultz, Stephen; Linford, Matthew; Vanfleet, Richard; Davis, Robert
2009-10-01
We are exploring low cost approaches for fabricating three dimensional nanoscale structures. These vertical structures could significantly improve the efficiency of devices made from low cost photovoltaic materials. The nanoscale vertical structure provides a way to increase optical absorption in thin photovoltaic films without increasing the electronic carrier separation distance. The target structure is a high temperature transparent template with a dense array of holes on a 400 - 600 nm pitch fabricated by a combination of interference lithography and nanoembossing. First a master was fabricated using ultraviolet light interference lithography and the pattern was transferred into a silicon wafer master by silicon reactive ion etching. Embossing studies were performed with the master on several high temperature polymers.
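For two-beam interference lithography, the fringe pitch follows p = λ / (2 sin θ), with θ the half-angle between the interfering beams. A quick sketch showing that a 355 nm UV line (an assumed wavelength; the abstract does not name the laser) reaches the 400-600 nm pitch range at modest angles:

```python
import math

def fringe_pitch_nm(wavelength_nm, half_angle_deg):
    """Two-beam interference fringe pitch: p = lambda / (2 sin(theta))."""
    return wavelength_nm / (2.0 * math.sin(math.radians(half_angle_deg)))

def half_angle_for_pitch_deg(wavelength_nm, pitch_nm):
    """Inverse relation: the half-angle needed for a target pitch."""
    return math.degrees(math.asin(wavelength_nm / (2.0 * pitch_nm)))

print(round(fringe_pitch_nm(355.0, 20.0)))  # pitch at a 20-degree half-angle
print(round(half_angle_for_pitch_deg(355.0, 500.0), 1))  # angle for 500 nm
```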
10. Vertically Integrated Circuits at Fermilab
International Nuclear Information System (INIS)
Deptuch, Grzegorz; Demarteau, Marcel; Hoff, James; Lipton, Ronald; Shenai, Alpana; Trimpl, Marcel; Yarema, Raymond; Zimmerman, Tom
2009-01-01
The exploration of vertically integrated circuits, also commonly known as 3D-IC technology, for applications in radiation detection started at Fermilab in 2006. This paper examines the opportunities that vertical integration offers by looking at various 3D designs that have been completed by Fermilab. The emphasis is on opportunities that are presented by through-silicon vias (TSV), wafer and circuit thinning, and finally fusion bonding techniques to replace conventional bump bonding. Early work by Fermilab has led to an international consortium for the development of 3D-IC circuits for High Energy Physics. The consortium has submitted over 25 different designs for the first Fermilab-organized MPW run.
11. Vertical Launch System Loadout Planner
Science.gov (United States)
2015-03-01
United States Navy; USS, United States' Ship; VBA, Visual Basic for Applications; VLP, VLS Loadout Planner; VLS, Vertical Launch System. ... With 32 gigabytes of random access memory and eight processors, General Algebraic Modeling System (GAMS) CPLEX version 24 (GAMS, 2015) solves this...problem in ten minutes to an integer tolerance of 10%. The GAMS interpreter and CPLEX solver require 75 megabytes of random access memory for this
12. NASA-Ames vertical gun
Science.gov (United States)
Schultz, P. H.
1984-01-01
A national facility, the NASA-Ames vertical gun range (AVGR) has an excellent reputation for revealing fundamental aspects of impact cratering that provide important constraints for planetary processes. The current logistics in accessing the AVGR, some of the past and ongoing experimental programs and their relevance, and the future role of this facility in planetary studies are reviewed. Publications resulting from experiments with the gun (1979 to 1984) are listed as well as the researchers and subjects studied.
13. Strategic Inventories in Vertical Contracts
OpenAIRE
Krishnan Anand; Ravi Anupindi; Yehuda Bassok
2008-01-01
Classical reasons for carrying inventory include fixed (nonlinear) production or procurement costs, lead times, nonstationary or uncertain supply/demand, and capacity constraints. The last decade has seen active research in supply chain coordination focusing on the role of incentive contracts to achieve first-best levels of inventory. An extensive literature in industrial organization that studies incentives for vertical controls largely ignores the effect of inventories. Does the ability to ...
14. [Vertical fractures: apropos of 2 clinical cases].
Science.gov (United States)
Félix Mañes Ferrer, J; Micò Muñoz, P; Sánchez Cortés, J L; Paricio Martín, J J; Miñana Laliga, R
1991-01-01
The aim of the study is to present a clinical review of vertical root fractures. Two clinical cases are presented to demonstrate the criteria for obtaining a correct diagnosis of vertical root fractures.
15. Vertical melting of a stack of membranes
Science.gov (United States)
Borelli, M. E. S.; Kleinert, H.; Schakel, A. M. J.
2001-02-01
A stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is studied. At low temperatures, the system forms a lamellar phase. At a critical temperature, the stack disorders vertically in a melting-like transition.
16. VERTICAL ACTIVITY ESTIMATION USING 2D RADAR
African Journals Online (AJOL)
hennie
estimates on aircraft vertical behaviour from a single 2D radar track. ... Fortunately, the problem of detecting relative vertical motion using a single 2D ..... awareness tools in scenarios where aerial activity sensing is typically limited to 2D.
17. Vertical structures in vibrated wormlike micellar solutions
Science.gov (United States)
Epstein, Tamir; Deegan, Robert
2008-11-01
Vertically vibrated shear-thickening particulate suspensions can support free-standing interfaces oriented parallel to gravity. We find that shear-thickening wormlike micellar solutions also support such vertical interfaces. Above a threshold in acceleration, the solution spontaneously accumulates into a labyrinthine pattern characterized by a well-defined vertical edge. The formation of vertical structures is of interest because they are unique to shear-thickening fluids, and they indicate the existence of an unknown stress-bearing mechanism.
18. Vertical sounding balloons for stratospheric photochemistry
Science.gov (United States)
Pommereau, J. P.
The use of vertical sounding balloons for stratospheric photochemistry studies is illustrated by the use of a vertical piloted gas balloon for the search of NO2 diurnal variations. It is shown that the use of montgolfieres (hot air balloons) can enhance the vertical sounding technique. Particular attention is given to a sun-heated montgolfiere and to the more sophisticated infrared montgolfiere that is able to perform three to four vertical excursions per day and to remain aloft for weeks or months.
19. Determinations of |V_cb| and |V_ub| from baryonic Λ_b decays
Energy Technology Data Exchange (ETDEWEB)
Hsiao, Y.K. [Shanxi Normal University, School of Physics and Information Engineering, Linfen (China); National Tsing Hua University, Department of Physics, Hsinchu (China); Geng, C.Q. [Shanxi Normal University, School of Physics and Information Engineering, Linfen (China); National Tsing Hua University, Department of Physics, Hsinchu (China); Hunan Normal University, Synergetic Innovation Center for Quantum Effects and Applications (SICQEA), Changsha (China)
2017-10-15
We present the first attempt to extract |V_cb| from the Λ_b → Λ_c^+ l anti-ν_l decay without relying on |V_ub| inputs from the B meson decays. Meanwhile, the hadronic Λ_b → Λ_c M_(c) decays with M = (π^-, K^-) and M_c = (D^-, D_s^-), measured with high precision, are involved in the extraction. Explicitly, we find that |V_cb| = (44.6 ± 3.2) × 10^-3, agreeing with the value of (42.11 ± 0.74) × 10^-3 from the inclusive B → X_c l anti-ν_l decays. Furthermore, based on the most recent ratio of |V_ub|/|V_cb| from the exclusive modes, we obtain |V_ub| = (4.3 ± 0.4) × 10^-3, which is close to the value of (4.49 ± 0.24) × 10^-3 from the inclusive B → X_u l anti-ν_l decays. We conclude that our determinations of |V_cb| and |V_ub| favor the corresponding inclusive extractions in the B decays. (orig.)
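The second result quoted above comes from multiplying the exclusive ratio |V_ub|/|V_cb| by the extracted |V_cb|; for uncorrelated uncertainties the relative errors add in quadrature. A sketch of that propagation; the ratio value used below (0.096 ± 0.007) is illustrative only, chosen to roughly reproduce the quoted central value, and is not taken from the paper:

```python
import math

def product_with_error(a, da, b, db):
    """c = a*b with uncorrelated errors: (dc/c)^2 = (da/a)^2 + (db/b)^2."""
    c = a * b
    return c, abs(c) * math.hypot(da / a, db / b)

v_cb, dv_cb = 44.6e-3, 3.2e-3  # from the abstract
ratio, dratio = 0.096, 0.007   # illustrative |V_ub|/|V_cb|, not from the paper
v_ub, dv_ub = product_with_error(ratio, dratio, v_cb, dv_cb)
# v_ub ~ 4.3e-3 with dv_ub ~ 0.4e-3, of the order of the quoted result
```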
20. Metal Oxide Vertical Graphene Hybrid Supercapacitors
Science.gov (United States)
Meyyappan, Meyya (Inventor)
2018-01-01
A metal oxide vertical graphene hybrid supercapacitor is provided. The supercapacitor includes a pair of collectors facing each other, and vertical graphene electrode material grown directly on each of the pair of collectors without catalyst or binders. A separator may separate the vertical graphene electrode materials.
1. Influência da alteração da dimensão vertical de oclusão na postura da cabeça e da coluna cervical, em voluntários edêntulos portadores de disfunção temporomandibular, tratados com aparelhos oclusais planos
OpenAIRE
João Paulo dos Santos Fernandes
2012-01-01
Abstract: The objective of this study was to analyze the influence of the vertical dimension of occlusion on the posture of the cervical spine and head by means of measurements of craniocervical angles. Seventeen fully edentulous volunteers were selected, with clinical signs of reduced vertical dimension of occlusion, presenting signs and symptoms of temporomandibular disorder and wearing complete dentures, enrolled in the patient registry of CETASE (Centro de Estudos e Tratamento das Alteraç...
2. Crystalline beams: The vertical zigzag
International Nuclear Information System (INIS)
Haffmans, A.F.; Maletic, D.; Ruggiero, A.G.
1994-01-01
This note is the continuation of our comprehensive investigation of Crystalline Beams. After having determined the equations of motion and the conditions for the formation of the simplest configuration, i.e. the string, we study the possibility of storing an intense beam of charged particles in a storage ring where they form a vertical zigzag. We define the equilibrium configuration, and examine the confinement conditions. Subsequently, we derive the transfer matrix for motion through various elements of the storage ring. Finally we investigate the stability conditions for such a beam
3. Vertices in the abelized picture
International Nuclear Information System (INIS)
Embacher, F.
1990-01-01
Covariant vertices of open bosonic string theory are transformed to the abelized picture. The way the pure transverse (light-cone gauge) vertex is contained therein is exhibited explicitly. The formalism shows in a quite transparent way that all further content of a covariant vertex is of gauge type. By applying the transverse projection operator in the abelized picture, an algebraic condition whether a set of Neumann coefficients define a vertex for string theory is obtained. A speculation concerning field redefinitions in string field theory is added. (Author) 33 refs
4. Determination of the quark coupling strength |V_ub| using baryonic decays
NARCIS (Netherlands)
Aaij, R.; Adeva, B.; Adinolfi, M.; Older, A. A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Cartelle, P. Alvarez; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Gutierrez, O. Aquines; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Belyaev, I.; Ben-Haim, E.; Bencivenni, G.; Onderwater, C. J. G.; Pellegrino, A.; Tolk, S.
In the Standard Model of particle physics, the strengths of the couplings of the b quark to the u and c quarks, |V_ub| and |V_cb|, are governed by the coupling of the quarks to the Higgs boson. Using data from the LHCb experiment at the Large Hadron
5. Neonatal Phosphate Nutrition Alters in Vivo and in Vitro Satellite Cell Activity in Pigs
Directory of Open Access Journals (Sweden)
2012-05-01
Full Text Available Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity, however the role of neonatal PO4 nutrition in satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4 adequate (0.9% total P), supra-adequate (1.2% total P) in PO4 requirement, or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples collected every 6 days. At day 12, pigs were orally dosed with BrdU and 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) sera PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency as compared to the PO4 adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4-deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.
6. Vertically stacked nanocellulose tactile sensor.
Science.gov (United States)
Jung, Minhyun; Kim, Kyungkwan; Kim, Bumjin; Lee, Kwang-Jae; Kang, Jae-Wook; Jeon, Sanghun
2017-11-16
Paper-based electronic devices are attracting considerable attention, because the paper platform has unique attributes such as flexibility and eco-friendliness. Here we report what is claimed to be the first fully integrated, vertically stacked nanocellulose-based tactile sensor, which is capable of simultaneously sensing temperature and pressure. The pressure and temperature sensors are operated using different principles and are stacked vertically, thereby minimizing the interference effect. For the pressure sensor, which utilizes the piezoresistance principle under pressure, the conducting electrode was inkjet-printed on the TEMPO-oxidized nanocellulose patterned with micro-sized pyramids, and the counter electrode was placed on the nanocellulose film. The pressure sensor has a high sensitivity over a wide range (500 Pa-3 kPa) and a high durability of 10^4 loading/unloading cycles. The temperature sensor combines various materials such as poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), silver nanoparticles (AgNPs) and carbon nanotubes (CNTs) to form a thermocouple on the upper nanocellulose layer. The thermoelectric-based temperature sensors generate a thermoelectric voltage output of 1.7 mV for a temperature difference of 125 K. Our 5 × 5 tactile sensor arrays show a fast response, negligible interference, and durable sensing performance.
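As a back-of-envelope check on the reported thermoelectric output, the effective Seebeck coefficient of the printed thermocouple follows from S = V/ΔT. A minimal sketch (the helper name is our own; the 1.7 mV and 125 K figures are those quoted in the abstract above):

```python
def effective_seebeck_coefficient(voltage_v: float, delta_t_k: float) -> float:
    """Effective Seebeck coefficient S = V / dT, in V/K."""
    return voltage_v / delta_t_k

# Values reported in the abstract: 1.7 mV output for a 125 K difference.
s = effective_seebeck_coefficient(1.7e-3, 125.0)
print(f"{s * 1e6:.1f} uV/K")  # -> 13.6 uV/K
```

This puts the composite thermocouple in the tens-of-microvolts-per-kelvin range typical of printed organic/nanoparticle thermoelectrics.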
7. Assessment of wind potential in the built environment for energy exploitation
OpenAIRE
Magalhães, Nuno Filipe da Costa
2011-01-01
The objective of this project is to analyse the wind-energy potential of the urban built environment, considering the use of vertical-axis wind turbines for power generation in that context. This document aims to show that, although studies of vertical-axis turbines are still scarce compared with those of horizontal-axis turbines, this does not mean that they lack characteristics that, in certain scenarios, are superior to those of horizontal-axis turbines ...
8. Constituents from Vigna vexillata and Their Anti-Inflammatory Activity
Directory of Open Access Journals (Sweden)
Guo-Feng Chen
2012-08-01
Full Text Available The seeds of the Vigna genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of V. vexillata demonstrated significant anti-inflammatory bioactivity. Therefore, the present research aimed to purify and identify the anti-inflammatory principles of V. vexillata. One new sterol (1) and two new isoflavones (2, 3) were reported from natural sources for the first time, and their chemical structures were determined by spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.
9. Comparison between the effect of increasing the vertical dimension of occlusion and that of mandibular advancement on sleep quality in elderly patients wearing bimaxillary complete dentures
OpenAIRE
Thiago Carôso Fróes
2011-01-01
The elderly population has a high prevalence of edentulism and, consequently, is affected by its associated problems. Loss of the vertical dimension of occlusion (VDO) is one of these problems, and it compromises, among other factors, the performance of the stomatognathic system. Hence, diseases related to the collapse of the upper-airway musculature, such as obstructive sleep apnea syndrome (OSAS), become relevant conditions for patients in this age group. Thus, measures ...
10. Production relations in creative industries: work, cultural consumption and identity sustenance in children's and young-adult publishing houses
Directory of Open Access Journals (Sweden)
Isabel de Sá Affonso da Costa
11. Capillary holdup between vertical spheres
Directory of Open Access Journals (Sweden)
S. Zeinali Heris
2009-12-01
Full Text Available The maximum volume of liquid bridge left between two vertically mounted spherical particles has been theoretically determined and experimentally measured. As the gravitational effect has not been neglected in the theoretical model, the liquid interface profile is nonsymmetrical around the X-axis. Symmetry in the interface profile only occurs when either the particle size ratio or the gravitational force becomes zero. In this paper, some equations are derived as a function of the spheres' sizes, gap width, liquid density, surface tension and body force (gravity/centrifugal) to estimate the maximum amount of liquid that can be held between the two solid spheres. Then a comparison is made between the results based on these equations and several experimental results.
12. Local reference teams and specialized matrix support: an essay about reorganizing work in health services
Directory of Open Access Journals (Sweden)
Gastão Wagner de Sousa Campos
1999-01-01
Full Text Available This article proposes a new organizational arrangement for work in health services. The concept of the local reference team, proposed and tested by the author since 1989, is developed and expanded, and the concept of matrix organization of work is re-elaborated, inverting, relative to the original scheme, what is permanent and what is transitory in the matrix design of health services. Theoretical considerations that support and justify the construction of this new proposal are also presented.
13. Torques in brakes and clutches
OpenAIRE
Mimoso, Rui Miguel Pereira
2011-01-01
Dissertation for the Master's degree in the Integrated Master's programme in Mechanical Engineering. This dissertation gathers the calculation models used to determine the torques in brakes and clutches. The work considers brakes and clutches with dry friction and with viscous friction. For viscous-friction brakes, cases are considered in which the characteristics of the fluids are not induced, and others in which modifications to those same characteristics are induced. ...
14. Teaching nursing care in mental health in undergraduate nursing
Directory of Open Access Journals (Sweden)
Jeferson Rodrigues
2012-01-01
15. Literate orality and communicative competence: implications for the construction of writing in the classroom
Directory of Open Access Journals (Sweden)
Angela B. Kleiman
2002-10-01
Full Text Available This paper proposes that the literate oral practices of literacy teachers be analysed according to the feasibility concept of Hymes's (1966) model of communicative competence. The use of the communicative-competence model, with the modifications proposed by Gumperz (1982), is complemented with Bakhtin's (1953) notion of genre, so as to build a matrix that integrates the socio-cognitive aspects of Hymes's model, the socio-interactional aspects of Gumperz's notion of knowledge constructed in interaction, and socio-historical aspects. The concept of feasibility in the model thus extended reveals the shared characteristics and the communicative competence of two literacy teachers in two literacy events that aim to introduce children to practices of using writing which are, on the surface, extremely different.
16. Dermatoses in chronic renal patients on dialysis therapy
Directory of Open Access Journals (Sweden)
Luis Alberto Batista Peres
2014-03-01
17. Environmental health surveillance and its implementation in the Sistema Único de Saúde (Brazilian Unified Health System)
Directory of Open Access Journals (Sweden)
Barcellos Christovam
2006-01-01
18. Environmental health surveillance and its implementation in the Sistema Único de Saúde (Brazilian Unified Health System)
Directory of Open Access Journals (Sweden)
Christovam Barcellos
2006-02-01
19. Vertical grid of retrieved atmospheric profiles
International Nuclear Information System (INIS)
Ceccherini, Simone; Carli, Bruno; Raspollini, Piera
2016-01-01
The choice of the vertical grid of atmospheric profiles retrieved from remote sensing observations is discussed considering the two cases of profiles used to represent the results of individual measurements and of profiles used for subsequent data fusion applications. An ozone measurement of the MIPAS instrument is used to assess, for different vertical grids, the quality of the retrieved profiles in terms of profile values, retrieval errors, vertical resolutions and number of degrees of freedom. In the case of individual retrievals no evident advantage is obtained with the use of a grid finer than the one with a reduced number of grid points, which are optimized according to the information content of the observations. Nevertheless, this instrument dependent vertical grid, which seems to extract all the available information, provides very poor results when used for data fusion applications. A loss of about a quarter of the degrees of freedom is observed when the data fusion is made using the instrument dependent vertical grid relative to the data fusion made using a vertical grid optimized for the data fusion product. This result is explained by the analysis of the eigenvalues of the Fisher information matrix and leads to the conclusion that different vertical grids must be adopted when data fusion is the expected application. - Highlights: • Data fusion application is taken into account for the choice of the vertical grid. • The study is performed using ozone profiles retrieved from MIPAS measurements. • A very fine vertical grid is not needed for the analysis of a single instrument. • The instrument dependent vertical grid is not the best choice for data fusion. • A data fusion dependent vertical grid must be used for profiles that will be fused.
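The grid trade-off described above can be made concrete with standard optimal-estimation bookkeeping: the degrees of freedom for signal on a given vertical grid is the trace of the averaging kernel built from the Fisher information matrix. A minimal numerical sketch, where the Jacobian, noise covariance and regularization are toy values, not MIPAS quantities:

```python
import numpy as np

def degrees_of_freedom(K, S_noise, R):
    """DOF for signal = trace of the averaging kernel
    A = (K^T S^-1 K + R)^-1 K^T S^-1 K."""
    F = K.T @ np.linalg.inv(S_noise) @ K   # Fisher information matrix
    A = np.linalg.solve(F + R, F)          # averaging kernel
    return float(np.trace(A))

# Toy retrieval: 10 channels, a 5-point vertical grid, weak Tikhonov regularization.
rng = np.random.default_rng(0)
K = rng.standard_normal((10, 5))           # toy Jacobian
dof = degrees_of_freedom(K, np.eye(10), 0.1 * np.eye(5))
print(f"DOF on 5-level grid: {dof:.2f}")   # slightly below 5
```

Comparing this trace for different grid choices is one way to quantify the loss of degrees of freedom reported when an instrument-dependent grid is reused for data fusion.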
20. Vertical and horizontal access configurations
International Nuclear Information System (INIS)
Spampinato, P.T.
1987-01-01
A number of configuration features and maintenance operations are influenced by the choice of whether a design is based on vertical or horizontal access for replacing reactor components. The features which are impacted most include the first wall/blanket segmentation, the poloidal field coil locations, the toroidal field coil number and size, access port size for in-vessel components, and facilities. Since either configuration can be made to work, the choice between the two is not clear cut because both have certain advantages. It is apparent that there are large cost benefits in the poloidal field coil system for ideal coil locations for high elongation plasmas and marginal savings for the INTOR case. If we assume that a new tokamak design will require a higher plasma elongation, the recommendation is to arrange the poloidal field coils in a cost-effective manner while providing reasonable midplane access for heating interfaces and test modules. If a new design study is not based on a high elongation plasma, it still appears prudent to consider this approach so that in-vessel maintenance can be accomplished without moving very massive structures such as the bulk shield. 10 refs., 29 figs., 3 tabs
1. The Ames Vertical Gun Range
Science.gov (United States)
Karcz, J. S.; Bowling, D.; Cornelison, C.; Parrish, A.; Perez, A.; Raiche, G.; Wiens, J.-P.
2016-01-01
The Ames Vertical Gun Range (AVGR) is a national facility for conducting laboratory-scale investigations of high-speed impact processes. It provides a set of light-gas, powder, and compressed gas guns capable of accelerating projectiles to speeds up to 7 km s⁻¹. The AVGR has a unique capability to vary the angle between the projectile-launch and gravity vectors between 0 and 90 deg. The target resides in a large chamber (diameter approximately 2.5 m) that can be held at vacuum or filled with an experiment-specific atmosphere. The chamber provides a number of viewing ports and feed-throughs for data, power, and fluids. Impacts are observed via high-speed digital cameras along with investigation-specific instrumentation, such as spectrometers. Use of the range is available via grant proposals through any Planetary Science Research Program element of the NASA Research Opportunities in Space and Earth Sciences (ROSES) calls. Exploratory experiments (one to two days) are additionally possible in order to develop a new proposal.
2. Flooding Mechanism in Vertical Flow
International Nuclear Information System (INIS)
Ronny-Dwi Agussulistyo; Indarto
2000-01-01
This research was carried out to investigate the mechanism of flooding in a vertical liquid-gas counter-current flow, along a two-metre length of tube. Both circular and square tubes were used; the cross-section of the square tube was made the same as that of a circular tube with a one-inch diameter. The liquid enters the tube, passes through a porous-wall inlet or a groove inlet in a distributor, and flows downwards through a liquid outlet in a collector. The gas is introduced at the bottom of the tube and flows upwards through a nozzle in the collector. The results showed that flooding occurs earlier in the circular tube than in the square tube, with either a porous-wall inlet or a groove inlet. In the square tube, the onset of flooding occurs at the top of the tube, in front of the liquid injection; it is related to the formation of a film wave just below the liquid feed. In the circular tube, by contrast, the onset of flooding occurs from the bottom of the tube, at the liquid outlet; it is related to the spread of the film wave. However, in the circular tube with the groove inlet, at higher liquid flow rates, the onset of flooding is from the top, as in the square tube. (author)
3. Wind tower with vertical rotors
Energy Technology Data Exchange (ETDEWEB)
Dietz, A
1978-08-03
The invention concerns a wind tower with vertical rotors. A characteristic is that the useful output of the rotors is increased by the wind pressure, which is guided to the rotors at the central opening and over the whole height of the structure by duct slots in the inner cells. These duct slots start behind the front nose of the inner cell and lead via the transverse axis of the pillar at an angle into the space between the inner cells and the cell body. This measure appreciably increases the useful output of the rotors, as the rotors do not have to provide any displacement work from their output, but receive additional thrust. The wind pressure pressing from inside the rotor and accelerating from the outside produces a better outflow of the wind from the power plant pillar with only small tendency to turbulence, which appreciably improves the effect of the adjustable turbulence smoothers, which are situated below the rotors over the whole height.
4. Vertical mixing by Langmuir circulations
International Nuclear Information System (INIS)
McWilliams, James C.; Sullivan, Peter P.
2001-01-01
Wind and surface waves frequently induce Langmuir circulations (LC) in the upper ocean, and the LC contribute to mixing materials down from the surface. In this paper we analyze large-eddy simulation (LES) cases based on surface-wave-averaged dynamical equations and show that the effect of the LC is a great increase in the vertical mixing efficiency for both material properties and momentum. We provide new confirmation that the previously proposed K-profile parameterization (KPP) model accurately characterizes the turbulent transport in a weakly convective, wind-driven boundary layer with stable interior stratification. We also propose a modest generalization of KPP for the regime of weakly convective Langmuir turbulence. This makes the KPP turbulent flux profiles match those in the LES case with LC present fairly well, especially so for material properties being transported downwards from the ocean surface. However, some open issues remain about how well the present LES and KPP formulations represent Langmuir turbulence, in part because wave-breaking effects are not yet included. (Author)
5. Influence of the containment ring on the postoperative course of patients undergoing vertical Roux-en-Y gastroplasty for the treatment of morbid obesity
OpenAIRE
Silvia Zenobio Nascimento
2012-01-01
Introduction: Morbid obesity (MO) and the diseases associated with it have become a serious public-health problem. Surgical treatment of MO is considered the most effective method of sustained loss of excess weight. Roux-en-Y gastric diversion (GVYR), or gastric bypass, is one of the most commonly performed procedures in Brazil and worldwide. The use of a containment ring with this procedure is intended to slow gastric emptying, maintaining the sensation of satiety for ...
6. Future priests of the Lord: vocational decision among seminarians in Santa Catarina.
Directory of Open Access Journals (Sweden)
Marcos Alfonso Spiess
2016-06-01
7. The green building envelope : Vertical greening
NARCIS (Netherlands)
Ottelé, M.
2011-01-01
Planting on roofs and façades is one of the most innovative and fastest developing fields of green technologies with respect to the built environment and horticulture. This thesis is focused on vertical greening of structures and to the multi-scale benefits of vegetation. Vertical green can improve
8. Safety Aspects for Vertical Wall Breakwaters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Burcharth, H. F.; Christiani, E.
1996-01-01
In this appendix some safety aspects in relation to vertical wall breakwaters are discussed. Breakwater structures such as vertical wall breakwaters are used under quite different conditions. The expected lifetime can be from 5 years (interim structure) to 100 years (permanent structure) and the ...
9. Updated Vertical Extent of Collision Damage
DEFF Research Database (Denmark)
Tagg, R.; Bartzis, P.; Papanikolaou, P.
2002-01-01
The probabilistic distribution of the vertical extent of collision damage is an important and somewhat controversial component of the proposed IMO harmonized damage stability regulations for cargo and passenger ships. The only pre-existing vertical distribution, currently used in the international...
10. Plasmon Modes of Vertically Aligned Superlattices
DEFF Research Database (Denmark)
Filonenko, Konstantin; Duggen, Lars; Willatzen, Morten
2017-01-01
By using the Finite Element Method we visualize the modes of vertically aligned superlattice composed of gold and dielectric nanocylinders and investigate the emitter-plasmon interaction in approximation of weak coupling. We find that truncated vertically aligned superlattice can function...
11. The strategic value of partial vertical integration
OpenAIRE
Fiocco, Raffaele
2014-01-01
We investigate the strategic incentives for partial vertical integration, namely, partial ownership agreements between manufacturers and retailers, when retailers privately know their costs and engage in differentiated-good price competition. The partial misalignment between the profit objectives within a partially integrated manufacturer-retailer hierarchy entails a higher retail price than under full integration. This 'information vertical effect' translates into an opposite ...
12. Vertical integration increases opportunities for patient flow.
Science.gov (United States)
Radoccia, R A; Benvenuto, J A; Blancett, L
1991-08-01
New sources of patients will become more and more important in the next decade as hospitals continue to feel the squeeze of a competitive marketplace. Vertical integration, a distribution tool used in other industries, will be a significant tool for health care administrators. In the following article, the authors explain the vertical integration model that shows promise for other institutions.
13. Vertical integration from the large Hilbert space
Science.gov (United States)
Erler, Theodore; Konopka, Sebastian
2017-12-01
We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.
14. Vertical Integration, Monopoly, and the First Amendment.
Science.gov (United States)
Brennan, Timothy J.
This paper addresses the relationship between the First Amendment, monopoly of transmission media, and vertical integration of transmission and content provision. A survey of some of the incentives a profit-maximizing transmission monopolist may have with respect to content is followed by a discussion of how vertical integration affects those…
15. Moving vertices to make drawings plane
NARCIS (Netherlands)
Goaoc, X.; Kratochvil, J.; Okamoto, Y.; Shin, C.S.; Wolff, A.; Hong, S.K.; Nishizeki, T.; Quan, W.
2008-01-01
In John Tantalo’s on-line game Planarity the player is given a non-plane straight-line drawing of a planar graph. The aim is to make the drawing plane as quickly as possible by moving vertices. In this paper we investigate the related problem MinMovedVertices which asks for the minimum number of
16. Perception of the health-disease process: meanings and values of health education
Directory of Open Access Journals (Sweden)
Ana Maria Chagas Sette Câmara
17. Structure of a tropical forest ten years after logging in Moju, Pará
Directory of Open Access Journals (Sweden)
Fernando Cristóvam da Silva Jardim
18. EM International. Volume 1
Energy Technology Data Exchange (ETDEWEB)
1993-07-01
It is the intent of EM International to describe the Office of Environmental Restoration and Waste Management's (EM's) various roles and responsibilities within the international community. Cooperative agreements and programs, descriptions of projects and technologies, and synopses of visits to international sites are all highlighted in this semiannual journal. The focus on EM programs in this issue is international collaboration in vitrification projects. Technology highlights cover: in situ sealing for contaminated sites; and remote sensors for toxic pollutants. The section on country profiles includes: Arctic contamination by the former Soviet Union, and EM activities with Germany (cooperative arrangements).
19. Effects of target location and uncertainty on reaching movements in standing position
Directory of Open Access Journals (Sweden)
Luiz de França Bahia Loureiro Junior
2012-09-01
The effects of target location, and of uncertainty about target position, on reaching movements were investigated. Ten adults stood in front of a touch-sensitive monitor. They were instructed to press a switch with the right index finger and, once the target presented on the monitor lit up, to touch its centre, moving the upper limb as quickly as possible. The target was shown ipsilaterally or contralaterally, and participants either were or were not certain about the target position. Reaction time (RT), movement time (MT) and radial error (RE) were measured. The results revealed shorter RT (≈35 ms) and smaller RE (≈0.19 cm) in the certainty condition, and longer RT (≈8 ms) and MT (≈18 ms) for movements to the contralateral target. In conclusion, these findings show that the effects of uncertainty about target location and of final target position apply to reaching movements in the standing position.
20. COMPUTING VERTICES OF INTEGER PARTITION POLYTOPES
Directory of Open Access Journals (Sweden)
A. S. Vroublevski
2015-01-01
Full Text Available The paper describes a method of generating vertices of the polytopes of integer partitions that was used by the authors to calculate all vertices and support vertices of the partition polytopes for all n ≤ 105 and all knapsack partitions of n ≤ 165. The method avoids generating all partitions of n. The vertices are determined with the help of sufficient and necessary conditions; in the hard cases, the well-known program Polymake is used. Some computational aspects are presented in more detail. These are the algorithm for checking the criterion that characterizes partitions that are convex combinations of two other partitions; the way of using two combinatorial operations that transform known vertices into new ones; and the employment of Polymake to recognize a limited number (for small n) of partitions that need three or more other partitions to be expressed as a convex combination. We discuss the computational results on the numbers of vertices and support vertices of the partition polytopes and some appealing problems these results give rise to.
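For orientation, the partition polytope of n is the convex hull of the partitions of n viewed as vectors; the method above deliberately avoids enumerating all of them, but a plain recursive generator shows the objects involved (this helper is our illustration, not the authors' algorithm):

```python
def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# p(5) = 7 partitions, whose convex hull is the partition polytope of 5.
print(sum(1 for _ in partitions(5)))  # -> 7
```

Since the number of partitions grows sub-exponentially but very quickly (p(105) is already in the hundreds of millions), avoiding full enumeration is exactly the point of the vertex-generation method.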
1. Microsatellite Loci in the Gypsophyte Lepidium subulatum (Brassicaceae), and Transferability to Other Lepidieae
Directory of Open Access Journals (Sweden)
José Gabriel Segarra-Moragues
2012-09-01
Full Text Available Polymorphic microsatellite markers were developed for the Ibero-North African strict gypsophyte Lepidium subulatum to unravel the effects of habitat fragmentation on levels of genetic diversity, genetic structure and gene flow among its populations. Using 454 pyrosequencing, 12 microsatellite loci including di- and tri-nucleotide repeats were characterized in L. subulatum. They amplified a total of 80 alleles (2–12 alleles per locus) in a sample of 35 individuals of L. subulatum, showing relatively high levels of genetic diversity, HO = 0.645, HE = 0.627. Cross-species transferability of all 12 loci was successful for the Iberian endemics Lepidium cardamines and Lepidium stylatum, the widespread Lepidium graminifolium, and one species each of two related genera, Cardaria draba and Coronopus didymus. These microsatellite primers will be useful to investigate genetic diversity and population structure and to address conservation genetics in species of Lepidium.
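The diversity statistics quoted above (HO, HE) are standard heterozygosity measures; expected heterozygosity at a locus is H_E = 1 − Σ p_i², with p_i the allele frequencies. A minimal sketch (the allele data below are invented for illustration, not from the study):

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """H_E = 1 - sum(p_i^2), from a list of sampled alleles at one locus."""
    n = len(alleles)
    freqs = [count / n for count in Counter(alleles).values()]
    return 1.0 - sum(p * p for p in freqs)

# Toy locus with four equally frequent alleles -> H_E = 0.75
print(expected_heterozygosity(["A", "B", "C", "D"] * 10))  # -> 0.75
```

Averaging this quantity over the 12 loci is what yields a multilocus HE such as the 0.627 reported here.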
2. Vertical Motions of Oceanic Volcanoes
Science.gov (United States)
Clague, D. A.; Moore, J. G.
2006-12-01
lasting a few hundred thousand years as the island migrates over a broad flexural arch related to isostatic compensation of a nearby active volcano. The arch is located about 190±30 km away from the center of volcanic activity and is also related to the rejuvenated volcanic stage on the islands. Reefs on Oahu that are uplifted several tens of m above sea level are the primary evidence for uplift as the islands over-ride the flexural arch. At the other end of the movement spectrum, both in terms of magnitude and length of response, are the rapid uplift and subsidence that occurs as magma is accumulated within or erupted from active submarine volcanoes. These changes are measured in days to years and are of cm to m variation; they are measured using leveling surveys, tiltmeters, EDM and GPS above sea level and pressure gauges and tiltmeters below sea level. Other acoustic techniques to measure such vertical movement are under development. Elsewhere, evidence for subsidence of volcanoes is also widespread, ranging from shallow water carbonates on drowned Cretaceous guyots, to mapped shoreline features, to the presence of subaerially-erupted (degassed) lavas on now submerged volcanoes. Evidence for uplift is more limited, but includes makatea islands with uplifted coral reefs surrounding low volcanic islands. These are formed due to flexural uplift associated with isostatic loading of nearby islands or seamounts. In sum, oceanic volcanoes display a long history of subsidence, rapid at first and then slow, sometimes punctuated by brief periods of uplift due to lithospheric loading by subsequently formed nearby volcanoes.
3. Measuring of |V_ub| in the forthcoming decade
International Nuclear Information System (INIS)
Kim, C.S.
1997-01-01
I first introduce the importance of measuring V_ub precisely. Then, from a theoretician's point of view, I review (a) past history, (b) present trials, and (c) possible future alternatives on measuring |V_ub| and/or |V_ub/V_cb|. As my main topic, I introduce a model-independent method, which predicts Γ(B→X_u lν)/Γ(B→X_c lν) ≡ (γ_u/γ_c) × |V_ub/V_cb|² ≅ (1.83±0.28) × |V_ub/V_cb|² and |V_ub/V_cb| ≡ (γ_c/γ_u)^(1/2) × [B(B→X_u lν)/B(B→X_c lν)]^(1/2) ≅ (0.74±0.06) × [B(B→X_u lν)/B(B→X_c lν)]^(1/2), based on the heavy quark effective theory. I also explore the possible experimental options to separate B→X_u lν from the dominant B→X_c lν: the measurement of inclusive hadronic invariant mass distributions, and the 'D-π' (and 'K-π') separation conditions. I also clarify the relevant experimental backgrounds. (orig.)
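The second relation above turns a measured ratio of semileptonic branching fractions directly into |V_ub/V_cb|. A minimal sketch using the quoted coefficient 0.74 ± 0.06; the branching fractions in the example are hypothetical placeholders, not measurements:

```python
def vub_over_vcb(br_u, br_c, coeff=0.74, coeff_err=0.06):
    """|V_ub/V_cb| = (gamma_c/gamma_u)^(1/2) * sqrt(B(B->X_u l nu) / B(B->X_c l nu)),
    with (gamma_c/gamma_u)^(1/2) = 0.74 +/- 0.06 as quoted in the abstract."""
    ratio = (br_u / br_c) ** 0.5
    return coeff * ratio, coeff_err * ratio

# Hypothetical branching fractions, for illustration only:
val, err = vub_over_vcb(br_u=2.0e-3, br_c=0.105)
print(f"|V_ub/V_cb| ~ {val:.3f} +/- {err:.3f}")
```

Only the theory-side uncertainty on the coefficient is propagated here; a real extraction would also fold in the experimental errors on the two branching fractions.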
4. IMPLICATIONS OF SLEEP-DISORDERED BREATHING IN STUDENTS WITH INTELLECTUAL DISABILITY: A SYSTEMATIC REVIEW
Directory of Open Access Journals (Sweden)
2017-06-01
5. Determination of verticality of reservoir engineering structure
African Journals Online (AJOL)
user
applications is 3D survey and management of oil and gas facilities and other engineering structures. This recent .... also affect ground water contamination. 2. VERTICALITY ...... The soil, water and concrete in a Reservoir at the foundation bed ...
6. Vertical activity estimation using 2D radar
CSIR Research Space (South Africa)
Hakl, H
2008-12-01
Full Text Available Understanding airspace activity is essential for airspace control. Being able to detect vertical activity in aircraft allows prediction of aircraft intent, thereby allowing more accurate situation awareness and correspondingly more appropriate...
7. HL-LHC vertical cryostat during construction
CERN Multimedia
Lanaro, Andrea
2016-01-01
The 7 m high "Cluster D" vertical test cryostat during construction at the contractor's premises, Alca Technology Srl, in Schio, Italy. The inner helium vessel with its heat exchanger is visible. To be installed in the D pit in SMA18.
8. Prefabricated vertical drains, vol. I : engineering guidelines.
Science.gov (United States)
1986-09-01
This volume presents procedures and guidelines applicable to the design and installation of prefabricated vertical drains to accelerate consolidation of soils. The contents represent the Consultant's interpretation of the state-of-the-art as of ...
9. Electrically Pumped Vertical-Cavity Amplifiers
DEFF Research Database (Denmark)
Greibe, Tine
2007-01-01
In this work, the design of electrically pumped vertical-cavity semiconductor optical amplifiers (eVCAs) for use in a mode-locked external-cavity laser has been developed, investigated and analysed. Four different eVCAs, one top-emitting and three bottom-emitting structures, have been designed ... and discussed. The thesis concludes with recommendations for further work towards the realisation of compact electrically pumped mode-locked vertical external-cavity surface-emitting lasers...
10. Vertical Josephson Interferometer for Tunable Flux Qubit
Energy Technology Data Exchange (ETDEWEB)
Granata, C; Vettoliere, A; Lisitskiy, M; Rombetto, S; Russo, M; Ruggiero, B [Istituto di Cibernetica 'E. Caianiello' del Consiglio Nazionale delle Ricerche, I-80078 Pozzuoli (Italy)]; Corato, V; Russo, R; Silvestrini, P [Dipartimento di Ingegneria dell'Informazione, Seconda Università di Napoli, I-81031 Aversa (Italy) and Istituto di Cibernetica 'E. Caianiello' del CNR, I-80078 Pozzuoli (Italy)]
2006-06-01
We present a niobium-based Josephson device as a prototype for quantum computation with flux qubits. The most interesting feature of this device is the use of a Josephson vertical interferometer to tune the flux qubit, allowing control of the off-diagonal Hamiltonian terms of the system. In the vertical interferometer, the Josephson current is precisely modulated from a maximum to zero with fine control by a small transverse magnetic field parallel to the rf superconducting loop plane.
11. Vertical Scan-Conversion for Filling Purposes
OpenAIRE
Hersch, R. D.
1988-01-01
Conventional scan-conversion algorithms were developed independently of filling algorithms and cause many problems when used for filling purposes. However, today's raster printers and plotters require extended use of filling, especially for the generation of typographic characters and graphic line art. A new scan-conversion algorithm, called vertical scan-conversion, has been specifically designed to meet the requirements of parity scan-line fill algorithms. Vertical scan-conversion ensures…
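As context for the parity scan-line fill algorithms the abstract refers to, a minimal even-odd scan-line fill can be sketched as follows. This is a generic textbook version in Python, not Hersch's vertical scan-conversion algorithm itself; the function name and the half-open edge rule are illustrative choices:

```python
def scanline_fill(polygon, height):
    """Even-odd (parity) scan-line fill: for each scan line y, collect the
    x-intersections with polygon edges, sort them, and fill between pairs."""
    spans = []
    n = len(polygon)
    for y in range(height):
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            if y0 == y1:
                continue  # horizontal edges contribute no parity crossing
            # half-open rule [min, max) avoids double-counting shared vertices
            if min(y0, y1) <= y < max(y0, y1):
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            spans.append((y, left, right))
    return spans
```

The parity rule is exactly what makes consistent vertex and horizontal-edge handling delicate, which is the class of problem the vertical scan-conversion algorithm addresses.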
12. Marketing em moda
OpenAIRE
Leães, Sabrina Durgante
2008-01-01
Master's dissertation in Design and Marketing. The current state of fashion marketing is one of the complex questions still debated in global society. Fashion marketing questions span some fundamental aspects, such as the constant mutations of the surrounding environment and the way the identity of fashion brands is perceived and communicated, in search of the best way to segment the market and define positioning, as well as the final consumer's reaction to the fashion product. …
13. Analysis of the direction of plasma vertical movement during major disruptions in ITER
International Nuclear Information System (INIS)
Lukash, Victor; Sugihara, Masayoshi; Gribov, Yuri; Fujieda, Hirobumi
2005-01-01
The plasma movement in the upward direction (away from the X-point) after the thermal quench (TQ) of major disruptions in ITER is favourable for the machine design, since downward movement causes a larger electromagnetic (EM) load due to the induced eddy and halo currents. Vertical directions of plasma movement after the TQ in ITER are investigated using the predictive mode of the DINA code. Three dominant parameters in determining the direction of plasma movement are identified: (i) the rate of plasma current quench (plasma temperature after the TQ), (ii) the width of the plasma current mixing area just after the TQ (change of the internal plasma inductance l_i), and (iii) the initial vertical position of the plasma column before the TQ. It is shown that the reference ITER plasma moves upwards after the TQ if the electron temperature after the TQ is less than 10 eV and the drop of l_i does not exceed 0.2 for the present reference initial vertical position (55.5 cm above the centre of the machine). It is also shown that the operational domain leading to upward movement is considerably large for disruptions with fast current quench, which could generate quite severe EM loads due to the induced eddy current combined with the induced halo current if the movement is downwards.
14. Sistemas de informação em bibliotecas: o comportamento dos usuários e bibliotecários frente às novas tecnologias de informação
Directory of Open Access Journals (Sweden)
Patrícia Maria Silva
2008-02-01
Full Text Available Information technology influences intellectual and research work in the various areas of knowledge. In libraries, technology is used through information systems to store, manipulate, filter and generate information quickly and effectively. The present work aims to contribute to a deeper understanding of some fundamental questions in the use of information systems in libraries. It seeks to better understand and identify the determinants of, and barriers to, usability that lead to a lack of user/system interaction. The study was conducted from a bibliographic survey, comparing the concepts of researchers in the area in a critical approach. As a result, we highlight that the library can have its space expanded by training librarians and users in the handling of information systems. The dynamism of the professional librarian, whether as a guide in the use of the information system or as the executor of the search, also minimises usability barriers.
15. Emergency Medical Service (EMS) Stations
Data.gov (United States)
Kansas Data Access and Support Center — EMS Locations in Kansas: the EMS stations dataset consists of any location where emergency medical services (EMS) personnel are stationed or based out of, or where…
16. Vertical integration in the nuclear fuel cycle
International Nuclear Information System (INIS)
Mommsen, J.T.
1977-01-01
Vertical integration in the nuclear fuel cycle and its contribution to the market power of integrated fuel suppliers were studied. The industry subdivision analyzed is the uranium raw materials sector. The hypotheses demonstrated are that (1) this sector of the industry is trending toward vertical integration between the production of uranium raw materials and the manufacture of nuclear fuel elements, and (2) this vertical integration confers upon integrated firms a significant market advantage over non-integrated fuel manufacturers. Under microeconomic concepts the rationale for vertical integration is the pursuit of efficiency, and it is beneficial because it increases physical output and decreases price. The Market Advantage Model developed is an arithmetical statement of the relative market power (in terms of price) between non-integrated nuclear fuel manufacturers and integrated raw material/fuel suppliers, based on the concept of the 'squeeze'. In operation, the model compares net profit and return on sales of nuclear fuel elements between the competitors under different price and cost circumstances. The model shows that, if integrated and non-integrated competitors sell their final product at identical prices, the non-integrated manufacturer returns a net profit of only 17% of that of the integrated firm. Also, the integrated supplier can price his product 35% below the non-integrated producer's price and still return the same net profit. Vertical integration confers a definite market advantage on the integrated supplier, and the basic source of that advantage is the cost-price differential of the raw material, uranium.
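The "squeeze" arithmetic behind such a market advantage model can be illustrated with a toy computation. All numbers below are invented for illustration (chosen so the ratio lands near the abstract's 17% figure); they are not the dissertation's actual data:

```python
def net_profit(sale_price, raw_material_cost, fabrication_cost):
    """Net profit per fuel element (all other costs folded into fabrication)."""
    return sale_price - raw_material_cost - fabrication_cost

# Illustrative (invented) numbers: the integrated supplier obtains uranium at
# an internal transfer cost of 60, while the non-integrated fabricator must
# pay the market price of 90. Both sell the finished element at 100.
price, fabrication = 100.0, 4.0
integrated = net_profit(price, 60.0, fabrication)      # 36.0
non_integrated = net_profit(price, 90.0, fabrication)  # 6.0
ratio = non_integrated / integrated                    # ~0.17: the "squeeze"
print(integrated, non_integrated, round(ratio, 2))
```

The squeeze arises purely from the raw-material cost-price differential: both firms face identical fabrication costs and sale prices, yet the integrated firm captures the margin between internal cost and market price.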
17. Climatology of tropospheric vertical velocity spectra
Science.gov (United States)
Ecklund, W. L.; Gage, K. S.; Balsley, B. B.; Carter, D. A.
1986-01-01
Vertical velocity power spectra obtained from Poker Flat, Alaska; Platteville, Colorado; Rhone Delta, France; and Ponape, East Caroline Islands using 50-MHz clear-air radars with vertical beams are given. The spectra were obtained by analyzing the quietest periods from the one-minute-resolution time series for each site. The lengths of available vertical records ranged from as long as 6 months at Poker Flat to about 1 month at Platteville. The quiet-time vertical velocity spectra are shown. Spectral period ranging from 2 minutes to 4 hours is shown on the abscissa and power spectral density is given on the ordinate. The Brunt-Vaisala (B-V) periods (determined from nearby sounding balloons) are indicated. All spectra (except the one from Platteville) exhibit a peak at periods slightly longer than the B-V period, are flat at longer periods, and fall rapidly at periods less than the B-V period. This behavior is expected for a spectrum of internal waves and is very similar to what is observed in the ocean (Eriksen, 1978). The spectral amplitudes vary by only a factor of 2 or 3 about the mean, and show that under quiet conditions vertical velocity spectra from the troposphere are very similar at widely different locations.
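The Brunt-Vaisala period that organizes these spectra follows from the standard buoyancy-frequency formula N² = (g/θ)(dθ/dz). A quick sketch, with typical tropospheric values assumed for illustration (not the paper's sounding data):

```python
import math

def brunt_vaisala_period(theta, dtheta_dz, g=9.81):
    """Buoyancy period 2*pi/N, where N^2 = (g/theta) * dtheta/dz.
    Real N (stable stratification) requires dtheta/dz > 0."""
    n_squared = (g / theta) * dtheta_dz
    if n_squared <= 0:
        raise ValueError("unstable or neutral stratification: N is not real")
    return 2 * math.pi / math.sqrt(n_squared)

# Typical tropospheric values: theta = 300 K, dtheta/dz = 4 K/km
period_s = brunt_vaisala_period(300.0, 4e-3)
print(round(period_s / 60, 1))  # period in minutes, on the order of ~9 min
```

Spectral power concentrating at periods slightly longer than this value, and falling off at shorter periods, is the internal-wave signature the abstract describes.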
18. A Physician's Perspective On Vertical Integration.
Science.gov (United States)
Berenson, Robert A
2017-09-01
Vertical integration has been a central feature of health care delivery system change for more than two decades. Recent studies have demonstrated that vertically integrated health care systems raise prices and costs without observable improvements in quality, despite many theoretical reasons why cost control and improved quality might occur. Less well studied is how physicians view their newfound partnerships with hospitals. In this article I review literature findings and other observations on five aspects of vertical integration that affect physicians in their professional and personal lives: patients' access to physicians, physician compensation, autonomy versus system support, medical professionalism and culture, and lifestyle. I conclude that the movement toward physicians' alignment with and employment in vertically integrated systems seems inexorable but that policy should not promote such integration either intentionally or inadvertently. Instead, policy should address the flaws in current payment approaches that reward high prices and excessive service use, outcomes that vertical integration currently produces. Project HOPE—The People-to-People Health Foundation, Inc.
19. Angiostrongylus vasorum in red foxes (Vulpes vulpes) and badgers (Meles meles) from Central and Northern Italy
Directory of Open Access Journals (Sweden)
Marta Magi
2010-06-01
Full Text Available Abstract During 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) were collected in different areas of Central-Northern Italy (Piedmont, Liguria and Tuscany) and examined for Angiostrongylus vasorum infection. The prevalence of the infection was significantly different in the areas considered, with the highest values in the district of Imperia (80%, Liguria) and in Montezemolo (70%, southern Piedmont); the prevalence in Tuscany was 7%. One badger collected in the area of Imperia turned out to be infected, representing the first report of the parasite in this species in Italy. Further studies are needed to evaluate the role played by fox populations as reservoirs of infection and the probability of its spreading to domestic dogs.
Summary: Angiostrongylus vasorum in the fox (Vulpes vulpes) and the badger (Meles meles) in Central-Northern Italy. In 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) from different areas of Northern and Central Italy (Piedmont, Liguria, Tuscany) were examined for Angiostrongylus vasorum. The prevalence of the nematode differed significantly among the areas, with high values in the districts of Imperia (80%) and Montezemolo (70%, province of Cuneo); the prevalence in Tuscany was 7%. One badger from the Imperia area was positive for A. vasorum; this is the first report of the parasite in this species in Italy. Further studies are needed to evaluate the potential of the fox as a reservoir and the possibility of the parasitosis spreading to domestic dogs.
doi:10.4404/hystrix-20.2-4442
20. Measurement of |Vcb| at the Z energy from B meson exclusive decays
International Nuclear Information System (INIS)
Marinelli, N.
1998-01-01
Recent ALEPH, DELPHI and OPAL measurements of the form factors in the exclusive decay modes anti-B⁰ → D*⁺ l⁻ anti-ν_l and anti-B⁰ → D⁺ l⁻ anti-ν_l are reviewed here. The values obtained allow an almost model-independent determination of |Vcb| in the HQET framework. (orig.)
1. Measurement of |Vub| using b hadron semileptonic decays
International Nuclear Information System (INIS)
Abbiendi, G.; Aakesson, P.F.
2001-01-01
The magnitude of the CKM matrix element |Vub| is determined by measuring the inclusive charmless semileptonic branching fraction of beauty hadrons at OPAL, based on b → X_u lν event topology and kinematics. This analysis uses OPAL data collected between 1991 and 1995, which correspond to about four million hadronic Z decays. We measure Br(b → X_u lν) to be (1.63 ± 0.53 (stat) +0.55/−0.62 (sys)) × 10⁻³. From this analysis, |Vub| is determined to be |Vub| = (4.00 ± 0.65 (stat) +0.67/−0.76 (sys) ± 0.19 (HQE)) × 10⁻³. The last error represents the theoretical uncertainties related to the extraction of |Vub| from Br(b → X_u lν) using the Heavy Quark Expansion. (orig.)
2. A global vertical reference frame based on four regional vertical datums
Czech Academy of Sciences Publication Activity Database
Burša, Milan; Kenyon, S.; Kouba, J.; Šíma, Zdislav; Vatrt, V.; Vojtíšková, M.
2004-01-01
Vol. 48, No. 3 (2004), pp. 493-502. ISSN 0039-3169. Institutional research plan: CEZ:AV0Z1003909. Keywords: geopotential; local vertical datums; global vertical reference frame. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 0.447, year: 2004
3. On the vertical structure of wind gusts
DEFF Research Database (Denmark)
Suomi, I.; Gryning, Sven-Erik; Floors, Rogier Ralph
2015-01-01
The increasing size of wind turbines, their height and the area swept by their blades have revived the need for understanding the vertical structure of wind gusts; information is needed for the whole profile. In this study, we analyzed turbulence measurements from a 100 m high meteorological mast… and the turbulence intensity, of which the turbulence intensity was found to dominate over the peak factor in determining the effects of stability and height above the surface on the gust factor. The peak factor explained only 15% or less of the vertical decrease of the gust factor, but determined the effect of gust duration on the gust factor. The statistical method to estimate the peak factor did not reproduce the observed vertical decrease in near-neutral and stable conditions and the near-constant behaviour in unstable conditions. Despite this inconsistency, the theoretical method provides estimates for the peak…
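The decomposition the abstract relies on is commonly written G = 1 + g_p·I_u: the gust factor equals one plus the peak factor times the turbulence intensity. A minimal sketch with illustrative values (not the paper's measurements; the profile numbers are invented):

```python
def gust_factor(peak_factor, turbulence_intensity):
    """Gust factor G = 1 + g_p * I_u: the gust excess over the mean wind
    scales with the wind-speed standard deviation (sigma_u = I_u * U)
    through the peak factor g_p."""
    return 1.0 + peak_factor * turbulence_intensity

# Illustrative profile (invented values): g_p roughly constant with height
# while I_u decreases, so G decreases with height mainly through I_u.
g_p = 2.9
for z_m, i_u in [(10, 0.17), (50, 0.13), (100, 0.11)]:
    print(z_m, round(gust_factor(g_p, i_u), 2))
```

With g_p nearly constant, the vertical decrease of G is carried almost entirely by the decrease of I_u, which is the dominance result the study reports.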
4. Vertical Footbridge Vibrations: The Response Spectrum Methodology
DEFF Research Database (Denmark)
Georgakis, Christos; Ingólfsson, Einar Thór
2008-01-01
In this paper, a novel, accurate and readily codifiable methodology for the prediction of vertical footbridge response is presented. The methodology is based on the well-established response spectrum approach used in the majority of the world's current seismic design codes of practice. The concept of a universally applicable reference response spectrum is introduced, from which the pedestrian-induced vertical response of any footbridge may be determined, based on a defined "event" and the probability of occurrence of that event. A series of Monte Carlo simulations are undertaken for the development… period is introduced and its implication for the calculation of footbridge response is discussed. Finally, a brief comparison is made between the theoretically predicted pedestrian-induced vertical response of an 80 m long RC footbridge (as an example) and actual field measurements. The comparison shows…
5. Certified standards and vertical coordination in aquaculture
DEFF Research Database (Denmark)
Trifkovic, Neda
2014-01-01
This paper explores the interaction between food standards and vertical coordination in the Vietnamese pangasius sector. For farmers and processors alike, the adoption of standards is motivated by a desire to improve market access by ensuring high-quality supply. Instead of encouraging the application of standards and contract farming, processing companies prefer to vertically integrate primary production, largely due to concerns over the stable supply of pangasius with satisfactory quality and safety attributes. These tendencies increase the market dominance of industrial farming and worsen…
6. Vertical vibration analysis for elevator compensating sheave
International Nuclear Information System (INIS)
Watanabe, Seiji; Nakazawa, Daisuke; Fukui, Daiki; Okawa, Takeya
2013-01-01
Most elevators applied to tall buildings include compensating ropes to satisfy the balanced rope tension between the car and the counterweight. The compensating ropes receive tension from the compensating sheave, which is installed in the bottom space of the elevator shaft. The compensating sheave is suspended only by the compensating ropes; therefore, the sheave can move vertically while the car is traveling. This paper presents an elevator dynamic model to evaluate the vertical motion of the compensating sheave. In particular, behavior in emergency cases, such as brake activation and buffer strike, was investigated to evaluate the maximum upward motion of the sheave. The simulation results were validated by experiments, and the most influential factor for the sheave vertical motion was clarified.
7. Plasmonic Properties of Vertically Aligned Nanowire Arrays
Directory of Open Access Journals (Sweden)
Hua Qi
2012-01-01
Full Text Available Nanowire (NW)/Ag sheath composites were produced to investigate plasmonic coupling between vertically aligned NWs for surface-enhanced Raman scattering (SERS) applications. In this investigation, two types of vertical NW arrays were studied: ZnO NWs grown on a nanosphere-lithography-patterned sapphire substrate via the vapor-liquid-solid (VLS) mechanism, and Si NW arrays produced by wet chemical etching. Both types of vertical NW arrays were coated with a thin layer of silver by electroless silver plating for SERS enhancement studies. The experimental results show extremely strong SERS signals due to plasmonic coupling between the NWs, which was verified by COMSOL electric field simulations. We also compared the SERS enhancement intensity of aligned and random ZnO NWs, finding that the aligned NWs show a much stronger and more repeatable SERS signal than those grown in nonaligned geometries.
8. Vertical gradients of sunspot magnetic fields
Science.gov (United States)
Hagyard, M. J.; Teuber, D.; West, E. A.; Tandberg-Hanssen, E.; Henze, W., Jr.; Beckers, J. M.; Bruner, M.; Hyder, C. L.; Woodgate, B. E.
1983-01-01
The results of a Solar Maximum Mission (SMM) guest investigation to determine the vertical gradients of sunspot magnetic fields for the first time from coordinated observations of photospheric and transition-region fields are described. Descriptions are given of both the photospheric vector field of a sunspot, derived from observations using the NASA Marshall Space Flight Center vector magnetograph, and of the line-of-sight component in the transition region, obtained from the SMM Ultraviolet Spectrometer and Polarimeter instrument. On the basis of these data, vertical gradients of the line-of-sight magnetic field component are calculated using three methods. It is found that the vertical gradient of Bz is lower than values from previous studies and that the transition-region field occurs at a height of approximately 4000-6000 km above the photosphere.
10. Study on characteristics of vertical strong motions
International Nuclear Information System (INIS)
Akao, Y.; Katukura, H.; Fukushima, S.; Mizutani, M.
1993-01-01
Statistical properties of vertical strong ground motions from near-field earthquakes are discussed in comparison with those of horizontal motions. A feature of this analysis is that the time history of each observed record is divided into direct P- and S-wave segments from a seismological viewpoint. The following results are obtained. Vertical motion energy excited by direct S-waves is about 0.6 times that of horizontal motions deep underground, and it approaches 1.0 at shallow depths. Horizontal motion energy excited by direct P-waves becomes 0.2 times (at depth) or more (at shallow depths) of the vertical one. These results can be used in the modeling of input motions for aseismic design. (author)
11. Vertical stability, high elongation, and the consequences of loss of vertical control on DIII-D
International Nuclear Information System (INIS)
Kellman, A.G.; Ferron, J.R.; Jensen, T.H.; Lao, L.L.; Luxon, J.L.; Skinner, D.G.; Strait, E.J.; Reis, E.; Taylor, T.S.; Turnbull, A.D.; Lazarus, E.A.; Lister, J.B.
1990-09-01
Recent modifications to the vertical control system for DIII-D have enabled operation of discharges with vertical elongation κ up to 2.5. When vertical stability is lost, a disruption follows and a large vertical force on the vacuum vessel is observed. The loss of plasma energy begins when the edge safety factor q is 2, but the current decay does not begin until q ∼ 1.3. Current flow on the open field lines in the plasma scrape-off layer has been measured, and the magnitude and distribution of these currents can explain the observed force on the vessel. Equilibrium calculations and a simulation of this vertical displacement episode are presented. 7 refs., 4 figs
12. The Revolutionary Vertical Lift Technology (RVLT) Project
Science.gov (United States)
Yamauchi, Gloria K.
2018-01-01
The Revolutionary Vertical Lift Technology (RVLT) Project is one of six projects in the Advanced Air Vehicles Program (AAVP) of the NASA Aeronautics Research Mission Directorate. The overarching goal of the RVLT Project is to develop and validate tools, technologies, and concepts to overcome key barriers for vertical lift vehicles. The project vision is to enable the next generation of vertical lift vehicles with aggressive goals for efficiency, noise, and emissions, to expand current capabilities and develop new commercial markets. The RVLT Project invests in technologies that support conventional, non-conventional, and emerging vertical-lift aircraft in the very light to heavy vehicle classes. Research areas include acoustic, aeromechanics, drive systems, engines, icing, hybrid-electric systems, impact dynamics, experimental techniques, computational methods, and conceptual design. The project research is executed at NASA Ames, Glenn, and Langley Research Centers; the research extensively leverages partnerships with the US Army, the Federal Aviation Administration, industry, and academia. The primary facilities used by the project for testing of vertical-lift technologies include the 14- by 22-Ft Wind Tunnel, Icing Research Tunnel, National Full-Scale Aerodynamics Complex, 7- by 10-Ft Wind Tunnel, Rotor Test Cell, Landing and Impact Research facility, Compressor Test Facility, Drive System Test Facilities, Transonic Turbine Blade Cascade Facility, Vertical Motion Simulator, Mobile Acoustic Facility, Exterior Effects Synthesis and Simulation Lab, and the NASA Advanced Supercomputing Complex. To learn more about the RVLT Project, please stop by booth #1004 or visit their website at https://www.nasa.gov/aeroresearch/programs/aavp/rvlt.
13. Vertically Integrated Multinationals and Productivity Spillovers
DEFF Research Database (Denmark)
Clementi, Federico; Bergmann, Friedrich
… are not automatic. In this paper, we study how these externalities are affected by the strategy of vertical integration of foreign multinationals. Our analysis, based on firm-level data of European manufacturing companies, shows that local firms perceive weaker backward spillovers if client foreign affiliates are vertically integrated in their industry. The spillovers that arise from the activity of companies that do not invest in the domestic firms' industry are 2.6 to 5 times stronger than the ones that come from affiliates of multinationals that invest in the industry of local firms…
14. Thermal Stratification in Vertical Mantle Tanks
DEFF Research Database (Denmark)
Knudsen, Søren; Furbo, Simon
2001-01-01
It is well known that it is important to have a high degree of thermal stratification in the hot water storage tank to achieve a high thermal performance of SDHW systems. This study concentrates on thermal stratification in vertical mantle tanks. Experiments based on typical operation conditions are carried out to investigate how the thermal stratification is affected by different placements of the mantle inlet. The heat transfer between the solar collector fluid in the mantle and the domestic water in the inner tank is analysed by CFD simulations. Furthermore, the flow pattern in the vertical mantle…
15. Breakwaters with Vertical and Inclined Concrete Walls
DEFF Research Database (Denmark)
Burcharth, Hans Falk
Following the PIANC PTC II working group on Analyses of Rubble Mound Breakwaters, it was decided in 1991 to form Working Group (WG) n° 28 on "Breakwaters with vertical and inclined concrete walls". The scope of the work was to achieve a better understanding of the overall safety aspects…
16. Preserving the Modernist Vertical Urban Factory
Directory of Open Access Journals (Sweden)
Nina Rappaport
2016-07-01
Full Text Available This essay is adapted in part from the "Modern Factory Architecture" case studies in Nina Rappaport's book Vertical Urban Factory, published by Actar this spring. Vertical Urban Factory began as an architecture studio, and then an exhibition, which opened in New York in 2011 and traveled to Detroit and Toronto in 2012. Last year the show was displayed at Archizoom at EPFL in Lausanne; Industry City, Brooklyn; and the Charles Moore School of Architecture at Kean University, in New Jersey. The project continues as a think tank evaluating factory futures and urban industrial potential.
17. Geophysical aspects of vertical streamer seismic data
Energy Technology Data Exchange (ETDEWEB)
Sognnes, Walter
1999-12-31
Vertical cable acquisition is performed by deploying a number of vertical hydrophone arrays in the water column and subsequently shooting source points on top of them. The advantage of this particular geometry is that it gives a data set with all azimuths included, so a more complete 3-D velocity model can be derived. This paper presents some results from the Fuji survey in the Gulf of Mexico. Based on these results, improved geometries and recommendations for future surveys are discussed. 7 figs.
19. Measurement of the CKM matrix element |Vts|²
International Nuclear Information System (INIS)
Unverdorben, Christopher Gerhard
2015-03-01
This is the first direct measurement of the CKM matrix element |Vts|, using data collected by the ATLAS detector in 2012 at √s = 8 TeV pp collisions with a total integrated luminosity of 20.3 fb⁻¹. The analysis is based on 112 171 reconstructed tt̄ candidate events in the lepton+jets channel, having a purity of 90.0%. 183 tt̄ → W⁺W⁻bs̄ decays are expected (charge conjugation implied), which are available for the extraction of the CKM matrix element |Vts|². To identify these rare decays, several observables are examined, such as the properties of jets, tracks and b-quark identification algorithms. Furthermore, the s-quark hadrons K⁰_S are considered, reconstructed by a kinematic fit. The best observables are combined in a multivariate analysis, called 'boosted decision trees'. The responses from Monte Carlo simulations are used as templates for a fit to data events, yielding a significance value of 0.7σ for t → s + W decays. An upper limit of |Vts|² < 1.74% at 95% confidence level is set, including all systematic and statistical uncertainties. Thus this analysis, using a direct measurement of the CKM matrix element |Vts|², provides the best direct limit on |Vts|² up to now.
20. Development of dynamic 3-D surface profilometry using stroboscopic interferometric measurement and vertical scanning techniques
Energy Technology Data Exchange (ETDEWEB)
Fan, K-C [Department of Mechanical Engineering, National Taiwan University, 1, Sec. 4 Roosevelt Rd, Taipei, Taiwan (China); Chen, L-C [Graduate Institute of Automation Technology, National Taipei University of Technology, 1 Sec. 3 Chung-Hsiao East Rd, Taipei, 106, Taiwan (China); Lin, C-D [Department of Mechanical Engineering, National Taiwan University, 1, Sec. 4 Roosevelt Rd, Taipei, Taiwan (China); Chang, Calvin C [Industrial Technology Research Institute, Centre for Measurement Standards, 321 Sec. 2, Kuang Fu Rd, Hsinchu, Taiwan, 300 (China); Kuo, C-F [Industrial Technology Research Institute, Centre for Measurement Standards, 321 Sec. 2, Kuang Fu Rd, Hsinchu, Taiwan, 300 (China); Chou, J-T [Industrial Technology Research Institute, Centre for Measurement Standards, 321 Sec. 2, Kuang Fu Rd, Hsinchu, Taiwan, 300 (China)
2005-01-01
The main objective of this technical advance is to provide a single optical interferometric framework and methodology capable of delivering both nano-scale static and dynamic surface profilometry. Microscopic interferometry is a powerful technique for static and dynamic characterization of micro-(opto)electromechanical systems (M(O)EMS). In view of this need, a microscopic prototype based on white-light stroboscopic interferometry and the white-light vertical scanning principle was developed to achieve dynamic full-field profilometry and characterization of MEMS devices. The system primarily consists of an optical microscope on which a Mirau interferometric objective embedded with a piezoelectric vertical translator, a high-power LED light module with dual operation modes, and a light-synchronizing electronics unit are integrated. A micro cantilever beam used in AFM was measured to verify the system's capability for accurate characterization of the dynamic behaviour of the device. The full-field second-mode vibration at a vibratory frequency of 68.60 kHz can be fully characterized, and 3-5 nm of vertical measurement resolution as well as tens of micrometers of vertical measurement range can be easily achieved.
1. 33 CFR 118.85 - Lights on vertical lift bridges.
Science.gov (United States)
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Lights on vertical lift bridges... BRIDGES BRIDGE LIGHTING AND OTHER SIGNALS § 118.85 Lights on vertical lift bridges. (a) Lift span lights. The vertical lift span of every vertical lift bridge shall be lighted so that the center of the...
2. Salmonella sp. bacteriology monitoring in laying hens at different growing and laying periods from poultry farms in the Metropolitan Region of Fortaleza
Directory of Open Access Journals (Sweden)
Emanuella Evangelista da Silva
2008-07-01
This work aimed to verify Salmonella occurrence in laying hen flocks from eight poultry farms in the Metropolitan Region of Fortaleza. Swab collections were performed in the transport boxes of day-old chicks, totaling 40 feces samples (5 samples/flock), none of which presented Salmonella contamination. Bacterial analyses of pooled feces were performed in the same flocks at 10, 20, 30 and 40 weeks of age. A Salmonella enterica rough strain and Salmonella Newport were found in two flocks, at 20 and 40 weeks of age, respectively. These results suggest that the birds were infected with Salmonella after their arrival at the poultry farms. It was verified that 25% of the poultry farms presented feces samples positive for Salmonella contamination, indicating the need for a more efficacious preventive program in poultry farms for egg production. This work suggests that the day-old birds were free of Salmonella contamination, which indicates no vertical Salmonella transmission; the rearing phase, however, presented failures regarding bacterial control.
KEY WORDS: Bacteriology, chickens, eggs, feces, Salmonella.
3. Vortex capturing vertical axis wind turbine
International Nuclear Information System (INIS)
Zannetti, L; Gallizio, F; Ottino, G
2007-01-01
An analytical-numerical study is presented for an innovative lift vertical axis turbine whose blades are designed with vortex trapping cavities that act as passive flow control devices. The unsteady flow field past one-bladed and two-bladed turbines is described by a combined analytical and numerical method based on conformal mapping and on a blob vortex method
4. Digital Microfluidic System with Vertical Functionality
Directory of Open Access Journals (Sweden)
Brian F. Bender
2015-11-01
Digital (droplet) microfluidics (DµF) is a powerful platform for automated lab-on-a-chip procedures, ranging from quantitative bioassays such as RT-qPCR to complete mammalian cell culturing. The simple MEMS processing protocols typically employed to fabricate DµF devices limit their functionality to two dimensions, and hence constrain the applications for which these devices can be used. This paper describes the integration of vertical functionality into a DµF platform by stacking two planar digital microfluidic devices, altering the electrode fabrication process, and incorporating channels for reversibly translating droplets between layers. Vertical droplet movement was modeled to advance the device design, and three applications that were previously unachievable using a conventional format are demonstrated: (1) solutions of calcium dichloride and sodium alginate were vertically mixed to produce a hydrogel with a radially symmetric gradient in crosslink density; (2) a calcium alginate hydrogel was formed within the through-well to create a particle sieve for filtering suspensions passed from one layer to the next; and (3) a cell spheroid formed using an on-chip hanging drop was retrieved for use in downstream processing. The general capability of vertically delivering droplets between multiple stacked levels represents a processing innovation that increases DµF functionality and has many potential applications.
5. Vertical retorts for distilling, carbonizing, roasting, etc
Energy Technology Data Exchange (ETDEWEB)
Walker, H R.L.; Bates, W R
1917-11-17
In a continuously operated vertical retort for destructive distillation or roasting, the combination of an annular, internally and externally heated construction with an annular plunger adapted to compress and assist the travel of the charge and to aid in discharging the material is described.
6. The Design Philosophy for a Vertical Breakwater
DEFF Research Database (Denmark)
Vrijling, J. K.; Burcharth, H. F.; Voortman, H. G.
2000-01-01
A consistent risk-based design philosophy for vertical breakwaters is proposed. The design philosophy consists of a two-step approach. The first step is the definition of the main function of the breakwater, which leads to a definition of failure. The second step is the choice of the acceptable...
7. Determinants Of Vertical And Horizontal Export Diversification ...
African Journals Online (AJOL)
The study also reveals domestic investment plays an important role to enhance vertical as well as horizontal export diversification for East Asia, while it only ... resource-based industries and gradually shift production and exports from customary products to more dynamic ones by developing competitive advantage in the ...
8. A Comparison of Methods of Vertical Equating.
Science.gov (United States)
Loyd, Brenda H.; Hoover, H. D.
Rasch model vertical equating procedures were applied to three mathematics computation tests for grades six, seven, and eight. Each level of the test was composed of 45 items in three sets of 15 items, arranged in such a way that tests for adjacent grades had two sets (30 items) in common, and the sixth and eighth grades had 15 items in common. In…
9. Vertical reflector for bifacial PV-panels
DEFF Research Database (Denmark)
Jakobsen, Michael Linde; Thorsteinsson, Sune; Poulsen, Peter Behrensdorff
2016-01-01
Bifacial solar modules offer an interesting price/performance ratio, and much work has been focused on directing the ground albedo to the back of the solar cells. In this work we design and develop a reflector for a vertical bifacial panel, with the objective to optimize the energy harvest...
10. On production costs in vertical differentiation models
OpenAIRE
Dorothée Brécard
2009-01-01
In this paper, we analyse the effects of the introduction of a unit production cost beside a fixed cost of quality improvement in a duopoly model of vertical product differentiation. Thanks to an original methodology, we show that a low unit cost tends to reduce product differentiation and thus prices, whereas a high unit cost leads to widen product differentiation and to increase prices
11. MHD stability of vertically asymmetric tokamak equilibria
International Nuclear Information System (INIS)
Dalhed, H.E.; Grimm, R.C.; Johnson, J.L.
1981-03-01
The ideal MHD stability properties of a special class of vertically asymmetric tokamak equilibria are examined. The calculations confirm that no major new physical effects are introduced and the modifications can be understood by conventional arguments. The results indicate that significant departures from up-down symmetry can be tolerated before the reduction in β becomes important for reactor operation
12. Optical anisotropy in vertically coupled quantum dots
DEFF Research Database (Denmark)
Yu, Ping; Langbein, Wolfgang Werner; Leosson, Kristjan
1999-01-01
We have studied the polarization of surface and edge-emitted photoluminescence (PL) from structures with vertically coupled In0.5Ga0.5As/GaAs quantum dots (QD's) grown by molecular beam epitaxy. The PL polarization is found to be strongly dependent on the number of stacked layers. While single...... number due to increasing dot size....
13. Transient well flow in vertically heterogeneous aquifers.
NARCIS (Netherlands)
Hemker, C.J.
1999-01-01
A solution for the general problem of computing well flow in vertically heterogeneous aquifers is found by an integration of both analytical and numerical techniques. The radial component of flow is treated analytically; the drawdown is a continuous function of the distance to the well. The
14. Proverbs : Probabilistic design tools for vertical breakwaters
NARCIS (Netherlands)
Oumeraci, H.; Allsop, N.W.H.; De Groot, M.B.; Crouch, R.S.; Vrijling, J.K.
1999-01-01
Final report and appendices of the European project Proverbs on tools for the design of vertical breakwaters (caisson type breakwaters) and similar hydraulic structures in the coastal zone. It includes the loads (waves) as well as the strength of the structure (geotechnial aspects, structural
15. The capillary interaction between two vertical cylinders
KAUST Repository
Cooray, Himantha; Cicuta, Pietro; Vella, Dominic
2012-01-01
surface clusters. Here we present a numerical method for determining the three-dimensional meniscus around a pair of vertical circular cylinders. This involves the numerical solution of the fully nonlinear Laplace-Young equation using a mesh-free finite
16. Manufacturing: the new case for vertical integration
NARCIS (Netherlands)
Kumpe, Ted; Bolwijn, Piet
1988-01-01
The article argues that the solid corporation will continue to view vertical integration as a critical part of manufacturing reform. Manufacturing reform and backward integration are related in insidious ways to the three stages of production over which the big manufacturers preside. Without
17. Vertical integration as organizational strategy formation
NARCIS (Netherlands)
Romme, A.G.L.
1990-01-01
This paper contributes to research into the strategy—environment relationship, especially looking at the issue of vertical integration. It aims at a synthesis of process and content approaches to strategic change on the level of the organization’s dominant group. The key factor is uncertainty, which
18. Vertical Integration: Teachers' Knowledge and Teachers' Voice.
Science.gov (United States)
Corrie, L.
1995-01-01
Traces the theoretical basis for vertical integration in early school years. Contrasts transmission-based pedagogy with a higher level of teacher control, and acquirer-based pedagogy with a higher level of student control. Suggests that early childhood pedagogy will be maintained when teachers are able to articulate their pedagogical knowledge and…
19. Vertical integration of HRD policy within companies
NARCIS (Netherlands)
Wognum, Ida
2001-01-01
This study concerns HRD policy making in companies. More specifically, it explores whether so-called vertical integration of HRD policy at different organizational levels occurs within companies. The study involved forty-four large companies in the industrial and the financial and commercial
20. A note on partial vertical integration
NARCIS (Netherlands)
G.W.J. Hendrikse (George); H.J.M. Peters (Hans)
1989-01-01
textabstractA simple model is constructed to show how partial vertical integration may emerge as an equilibrium market structure in a world characterized by rationing, differences in the reservation prices of buyers, and in the risk attitudes of buyers and sellers. The buyers with the high
1. Vertical Integration Spurs American Health Care Revolution.
Science.gov (United States)
Phillips, Richard C.
1986-01-01
Under new "managed health care systems," the classical functional separation of risk taker, claims payor, and provider are vertically integrated into a common entity. This evolution should produce a competitive environment with medical care rendered to all Americans on a more cost-effective basis. (CJH)
2. Oblique patterned etching of vertical silicon sidewalls
Science.gov (United States)
Bruce Burckel, D.; Finnegan, Patrick S.; David Henry, M.; Resnick, Paul J.; Jarecki, Robert L.
2016-04-01
A method for patterning on vertical silicon surfaces in high aspect ratio silicon topography is presented. A Faraday cage is used to direct energetic reactive ions obliquely through a patterned suspended membrane positioned over the topography. The technique is capable of forming high-fidelity pattern (100 nm) features, adding an additional fabrication capability to standard top-down fabrication approaches.
3. Vertical Dynamic Stiffness of Offshore Foundations
DEFF Research Database (Denmark)
Latini, Chiara; Cisternino, Michele; Zania, Varvara
2016-01-01
Nowadays, pile and suction caisson foundations are widely used to support offshore structures which are subjected to vertical dynamic loads. The dynamic soil-structure interaction of floating foundations (foundations embedded in a soil layer whose height is greater than the foundation length) is ...
4. Vertical pump with free floating check valve
International Nuclear Information System (INIS)
Lindsay, M.
1980-01-01
A vertical pump is described which has a bottom discharge with a free floating check valve disposed in the outlet plenum thereof. The free floating check valve comprises a spherical member with a hemispherical cage-like member attached thereto which is capable of allowing forward or reverse flow under appropriate conditions while preventing reverse flow under inappropriate conditions
5. Vertical field and equilibrium calculation in ETE
International Nuclear Information System (INIS)
Montes, Antonio; Shibata, Carlos Shinya.
1996-01-01
The free-boundary MHD equilibrium code HEQ is used to study the plasma behaviour in the tokamak ETE, with optimized compensation coils and vertical field coils. The changes in the equilibrium parameters for different plasma current values are also investigated. (author). 5 refs., 4 figs., 2 tabs
6. International EMS Systems
DEFF Research Database (Denmark)
Langhelle, Audun; Lossius, Hans Morten; Silfvast, Tom
2004-01-01
Emergency medicine service (EMS) systems in the five Nordic countries have more similarities than differences. One similarity is the involvement of anaesthesiologists as pre-hospital physicians and their strong participation for all critically ill and injured patients in-hospital. Discrepancies do exist, however, especially within the ground and air ambulance service, and the EMS systems face several challenges. Main problems and challenges emphasized by the authors are: (1) Denmark: the dispatch centres are presently not under medical control and are without a national criteria-based system. Access to on-line medical advice of a physician is not available; (2) Finland: the autonomy of the individual municipalities and their responsibility to cover for primary and specialised health care, as well as the EMS, and the lack of supporting or demanding legislation regarding the EMS; (3) Iceland...
7. Subsurface imaging by electrical and EM methods
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-12-01
This report consists of 3 subjects. 1) Three dimensional inversion of resistivity data with topography : In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate the irregular topography. In solving the inverse problem, the iterative least-squares approach comprising the smoothness-constraints was taken along with the reciprocity approach in the calculation of Jacobian. Furthermore the Active Constraint Balancing, which has been recently developed by ourselves to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole : The effects of borehole fluid and casing on the borehole EM responses should thoroughly be analyzed since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for primary vertical magnetic field along the borehole axis was first formulated and the fast Fourier transform is to be applied to get the EM fields at any location in whole space. 3) High frequency electromagnetic impedance survey : At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance
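The iterative least-squares inversion with smoothness constraints described in the first subject can be sketched in a few lines: each iteration solves a damped normal-equation system in which a roughness operator penalizes non-smooth model updates. A minimal sketch on a toy linear problem, where the matrices, sizes, and damping value are illustrative assumptions, not the report's actual implementation:

```python
import numpy as np

def smoothness_constrained_step(J, r, m, lam, L):
    """One smoothness-constrained least-squares model update.

    Solves (J^T J + lam * L^T L) dm = J^T r - lam * L^T L m,
    i.e. minimizes ||r - J dm||^2 + lam * ||L (m + dm)||^2,
    where L is a roughness (first-difference) operator.
    """
    LtL = L.T @ L
    A = J.T @ J + lam * LtL
    b = J.T @ r - lam * LtL @ m
    return np.linalg.solve(A, b)

# Toy example: recover a smooth 1-D "resistivity" profile from linear data.
n = 20
L = np.diff(np.eye(n), axis=0)                  # first-difference roughness operator
J = np.random.default_rng(0).normal(size=(30, n))  # stand-in Jacobian
m_true = np.sin(np.linspace(0, np.pi, n))
r = J @ m_true                                   # residual for starting model m = 0
dm = smoothness_constrained_step(J, r, np.zeros(n), lam=0.1, L=L)
```

In the report's actual scheme the Jacobian comes from finite-element forward modelling with topography, computed via reciprocity, and the damping is adjusted per parameter by Active Constraint Balancing; the sketch only shows the smoothness-constrained normal equations themselves.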
8. Poliomyelitis in Sergipe (A poliomielite em Sergipe)
Directory of Open Access Journals (Sweden)
Hélio A. Oliveira
1994-06-01
Energy Technology Data Exchange (ETDEWEB)
1981-01-01
The standard contains technical specifications and conditions of production, testing, packing, transport and storage of EM-type planar calibration standards containing the radionuclides ¹⁴C, ⁶⁰Co, ⁹⁰Sr, ¹³⁷Cs, ¹⁴⁷Pm, ²⁰⁴Tl, ²³⁹Pu, ²⁴¹Am and natural U. The terminology is explained, the related Czechoslovak standards and legal prescriptions are given, and amendments to these prescriptions are presented.
10. Modeling tides and vertical tidal mixing: A reality check
International Nuclear Information System (INIS)
Robertson, Robin
2010-01-01
Recently, there has been a great interest in the tidal contribution to vertical mixing in the ocean. In models, vertical mixing is estimated using parameterization of the sub-grid scale processes. Estimates of the vertical mixing varied widely depending on which vertical mixing parameterization was used. This study investigated the performance of ten different vertical mixing parameterizations in a terrain-following ocean model when simulating internal tides. The vertical mixing parameterization was found to have minor effects on the velocity fields at the tidal frequencies, but large effects on the estimates of vertical diffusivity of temperature. Although there was no definitive best performer for the vertical mixing parameterization, several parameterizations were eliminated based on comparison of the vertical diffusivity estimates with observations. The best performers were the new generic coefficients for the generic length scale schemes and Mellor-Yamada's 2.5 level closure scheme.
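The role a vertical mixing parameterization plays can be illustrated with the simplest possible stand-in: whatever closure is used (KPP, Mellor-Yamada, generic length scale, ...), it ultimately supplies a diffusivity profile that drives a 1-D vertical diffusion of temperature. A minimal conservative explicit step, with two constant diffusivities standing in for two parameterizations (all numerical values are assumptions for the sketch, not from the study):

```python
import numpy as np

def diffuse_temperature(T, kappa, dz, dt):
    """Explicit conservative 1-D vertical diffusion step: dT/dt = d/dz(kappa dT/dz).

    T: temperature at cell centres; kappa: diffusivity at the interior faces.
    Insulating (no-flux) boundaries at top and bottom, so heat is conserved.
    """
    flux = kappa * np.diff(T) / dz      # kappa * dT/dz at interior faces
    dTdt = np.zeros_like(T)
    dTdt[:-1] += flux / dz              # divergence of the face fluxes
    dTdt[1:] -= flux / dz
    return T + dt * dTdt

# Two "parameterizations" (weak vs. strong mixing) acting on the same profile:
z = np.linspace(0, 100, 21)             # depth [m]
T0 = 20 - 0.1 * z                       # linearly stratified initial profile
T_weak = diffuse_temperature(T0, np.full(20, 1e-4), dz=5.0, dt=100.0)
T_strong = diffuse_temperature(T0, np.full(20, 1e-2), dz=5.0, dt=100.0)
```

Both steps conserve the vertically integrated temperature, but the stronger diffusivity erodes the stratification faster, which is exactly the kind of difference the study quantifies when comparing diffusivity estimates against observations.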
International Nuclear Information System (INIS)
Gong, Hong-Yu; Gu, Wei-Min
2017-01-01
In the classic picture of standard thin accretion disks, viscous heating is balanced by radiative cooling through the diffusion process, and the radiation-pressure-dominated inner disk suffers convective instability. However, recent simulations have shown that, owing to the magnetic buoyancy, the vertical advection process can significantly contribute to energy transport. In addition, in comparing the simulation results with the local convective stability criterion, no convective instability has been found. In this work, following on from simulations, we revisit the vertical structure of radiation-pressure-dominated thin disks and include the vertical advection process. Our study indicates a link between the additional energy transport and the convectively stable property. Thus, the vertical advection not only significantly contributes to the energy transport, but it also plays an important role in making the disk convectively stable. Our analyses may help to explain the discrepancy between classic theory and simulations on standard thin disks.
12. |V_ub| from exclusive semileptonic B→π decays
International Nuclear Information System (INIS)
Flynn, Jonathan M.; Nieves, Juan
2007-01-01
We use Omnès representations of the form factors f_+ and f_0 for exclusive semileptonic B→π decays, paying special attention to the treatment of the B* pole and its effect on f_+. We apply them to combine experimental partial branching fraction information with theoretical calculations of both form factors to extract |V_ub|. The precision we achieve is competitive with the inclusive determination, and we do not find a significant discrepancy between our result, |V_ub| = (3.90±0.32±0.18)×10⁻³, and the inclusive world average value, (4.45±0.20±0.26)×10⁻³ [Heavy Flavor Averaging Group (HFAG), hep-ex/0603003].
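The quoted numbers let a reader check the "no significant discrepancy" claim directly: adding statistical and systematic uncertainties in quadrature, the exclusive and inclusive values differ by only about one combined standard deviation. A quick check using standard error propagation (our arithmetic, not the paper's):

```python
import math

def combine(err_stat, err_syst):
    """Total uncertainty: statistical and systematic added in quadrature."""
    return math.hypot(err_stat, err_syst)

# Values quoted in the abstract, in units of 1e-3:
excl, excl_err = 3.90, combine(0.32, 0.18)   # exclusive B -> pi determination
incl, incl_err = 4.45, combine(0.20, 0.26)   # inclusive world average (HFAG)

# Discrepancy in units of the combined uncertainty ("sigma"):
sigma = abs(incl - excl) / math.hypot(excl_err, incl_err)
```

With these inputs the separation comes out close to one combined standard deviation, consistent with the abstract's conclusion.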
13. Vertical distribution of paracalanus crassirostris (copepoda, calanoidea: analysis by the general linear model
Directory of Open Access Journals (Sweden)
Ana Milstein
1979-01-01
14. The Vertical Farm: A Review of Developments and Implications for the Vertical City
Directory of Open Access Journals (Sweden)
Kheir Al-Kodmany
2018-02-01
This paper discusses the emerging need for vertical farms by examining issues related to food security, urban population growth, farmland shortages, "food miles", and associated greenhouse gas (GHG) emissions. Urban planners and agricultural leaders have argued that cities will need to produce food internally to respond to demand from an increasing population and to avoid paralyzing congestion, harmful pollution, and unaffordable food prices. The paper examines urban agriculture as a solution to these problems by merging food production and consumption in one place, with the vertical farm being suitable for urban areas where available land is limited and expensive. Luckily, recent advances in greenhouse technologies such as hydroponics, aeroponics, and aquaponics have provided a promising future for the vertical farm concept. These high-tech systems represent a paradigm shift in farming and food production and offer suitable and efficient methods for city farming by minimizing maintenance and maximizing yield. Upon reviewing these technologies and examining project prototypes, we find that these efforts may plant the seeds for the realization of the vertical farm. The paper, however, closes by speculating about the consequences, advantages, and disadvantages of the vertical farm's implementation. Economic feasibility, codes, regulations, and a lack of expertise remain major obstacles in the path to implementing the vertical farm.
15. Adaptation of the vertical vestibulo-ocular reflex in cats during low-frequency vertical rotation.
Science.gov (United States)
Fushiki, Hiroaki; Maruyama, Motoyoshi; Shojaku, Hideo
2018-04-01
16. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)
Science.gov (United States)
Halliwell, George R.
Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.
17. Microstructure, vertical strain control and tunable functionalities in self-assembled, vertically aligned nanocomposite thin films
International Nuclear Information System (INIS)
Chen, Aiping; Bi, Zhenxing; Jia, Quanxi; MacManus-Driscoll, Judith L.; Wang, Haiyan
2013-01-01
Vertically aligned nanocomposite (VAN) oxide thin films have recently stimulated a significant amount of research interest owing to their novel architecture, vertical interfacial strain control and tunable material functionalities. In this work, the growth mechanisms of VAN thin films have been investigated by varying the composite material system, the ratio of the two constituent phases, and the thin film growth conditions including deposition temperature and oxygen pressure as well as growth rate. It has been shown that thermodynamic parameters, elastic and interfacial energies and the multiple phase ratio play dominant roles in the resulting microstructure. In addition, vertical interfacial strain has been observed in BiFeO₃ (BFO)- and La₀.₇Sr₀.₃MnO₃ (LSMO)-based VAN thin film systems; the vertical strain could be tuned by the growth parameters and selection of a suitable secondary phase. The tunability of physical properties such as dielectric loss in BFO:Sm₂O₃ VAN and low-field magnetoresistance in LSMO-based VAN systems has been demonstrated. The enhancement and tunability of those physical properties have been attributed to the unique VAN architecture and vertical strain control. These results suggest that VAN architecture with novel microstructure and unique vertical strain tuning could provide a general route for tailoring and manipulating the functionalities of oxide thin films.
18. Knowledge of obstetricians about the vertical transmission of hepatitis B virus
Directory of Open Access Journals (Sweden)
Joseni Santos da Conceição
2009-03-01
19. Ultimately short ballistic vertical graphene Josephson junctions
Science.gov (United States)
Lee, Gil-Ho; Kim, Sol; Jhi, Seung-Hoon; Lee, Hu-Jong
2015-01-01
Much effort has been made toward the realization of hybrid Josephson junctions incorporating various materials, both for fundamental studies of exotic physical phenomena and for applications to superconducting quantum devices. Nonetheless, these efforts have been hindered by the diffusive nature of the conducting channels and interfaces. To overcome these obstacles, we vertically sandwiched a cleaved graphene monoatomic layer as the normal-conducting spacer between superconducting electrodes. The atomically thin single-crystalline graphene layer serves as an ultimately short conducting channel, with highly transparent interfaces with the superconductors. In particular, we show strong Josephson coupling reaching the theoretical limit, a convex-shaped temperature dependence of the Josephson critical current, and an exceptionally skewed phase dependence of the Josephson current; all demonstrate the bona fide short and ballistic Josephson nature. This vertical stacking scheme for extremely thin transparent spacers would open a new pathway for exploring exotic coherence phenomena occurring on an atomic scale. PMID:25635386
20. New Urban Vertical Axis Wind Turbine Design
Directory of Open Access Journals (Sweden)
Alexandru-Mihai CISMILIANU
2015-12-01
This paper develops a different approach for enhancing the performance of Vertical Axis Wind Turbines for use in urban or rural environments and remote isolated residential areas. Recently, vertical axis wind turbines (VAWT) have become more attractive due to the major advantages of this type of turbine in comparison to horizontal axis wind turbines. We aim to enhance the overall performance of the VAWT by adding a second set of blades (3 × 2 = 6 blades), following the rules of biplane airplanes. The model has been made to operate at maximum power in the TSR range of 2 to 2.5. The performance of the VAWT was investigated numerically and experimentally and justifies the new proposed design.
1. Alignment analysis of a vertical sodium pump
International Nuclear Information System (INIS)
Gupta, V.K.; Fair, C.E.
1981-01-01
With the objective of identifying important alignment features of pumps such as FFTF, HALLAM, EBR II, PNC, PHENIX, and CRBR, alignment of the vertical sodium pump for the Clinch River Breeder Reactor Plant (CRBRP) is investigated. The CRBRP pump includes a flexibly coupled pump shaft and motor shaft, two oil-film tilting-pad hydrodynamic radial bearings in the motor plus a vertical thrust bearing, and two sodium hydrostatic bearings straddling the double-suction centrifugal impeller in the pump. The assembled CRBRP prototype pump shows the smooth, predictable vibration behavior experienced during water testing. An earlier swing check of the pump shaft about the motor shaft hub demonstrated that the pump is relatively insensitive to manufacturing and assembly tolerances, a consequence of close dimensional control and unique alignment features. (orig./GL)
2. Asymmetric SOL Current in Vertically Displaced Plasma
Science.gov (United States)
Cabrera, J. D.; Navratil, G. A.; Hanson, J. M.
2017-10-01
Experiments at the DIII-D tokamak demonstrate a non-monotonic relationship between measured scrape-off layer (SOL) currents and vertical displacement event (VDE) rates with SOL currents becoming largely n=1 dominant as plasma is displaced by the plasma control system (PCS) at faster rates. The DIII-D PCS is used to displace the magnetic axis 10x slower than the intrinsic growth time of similar instabilities in lower single-null plasmas. Low order (n VDE instabilities observed when vertical control is disabled. Previous inquiry shows VDE asymmetry characterized by SOL current fraction and geometric parameters of tokamak plasmas. We note that, of plasmas displaced by the PCS, short displacement time scales near the limit of the PCS temporal control appear to result in larger n=1/n=2 asymmetries. Work supported under USDOE Cooperative Agreement DE-FC02-04ER54698 and DE-FG02-04ER54761.
3. Round beams generated by vertical dispersion
International Nuclear Information System (INIS)
Bagley, P.
1990-01-01
Simulations suggest that in e⁺e⁻ storage rings collisions of round beams (equal emittances and equal β*) can produce very large tune shifts and luminosities. We understand how to make equal β*s, but generating equal emittances is more difficult. We describe an equal emittance scheme that uses several skew quads to couple horizontal dispersion into vertical dispersion. These skew quads also produce a coupling bump. At the interaction point and at other points outside the coupling bump, the motion is not coupled, so that the 'A' normal mode corresponds to horizontal motion and the 'B' normal mode corresponds to vertical motion. We present a round beam lattice for CESR that incorporates this scheme
4. Equilibrium vertical field in the TBR Tokamak
International Nuclear Information System (INIS)
Ueta, A.Y.
1985-01-01
An experimental study of the influence of the vertical magnetic field of the TBR tokamak on the stability and equilibrium of the plasma column was performed. Magnetic pick-up coils were built to measure plasma current and position, together with the active networks necessary for the electronic processing of the signals. Measurements were made of the spatial configuration of the vertical field and of the influence of the toroidal vessel. From the data obtained it was possible to discuss the influence of the currents induced on the vessel surface on plasma equilibrium. Theoretical and experimental results for the vertical field as a function of plasma current were compared, allowing an evaluation of the plasma kinetic pressure and temperature. (Author) [pt]
5. High-Performance Vertical Organic Electrochemical Transistors.
Science.gov (United States)
Donahue, Mary J; Williamson, Adam; Strakosas, Xenofon; Friedlein, Jacob T; McLeod, Robert R; Gleskova, Helena; Malliaras, George G
2018-02-01
Organic electrochemical transistors (OECTs) are promising transducers for biointerfacing due to their high transconductance, biocompatibility, and availability in a variety of form factors. Most OECTs reported to date, however, utilize rather large channels, limiting the transistor performance and resulting in a low transistor density. This is typically a consequence of limitations associated with traditional fabrication methods and with 2D substrates. Here, the fabrication and characterization of OECTs with vertically stacked contacts, which overcome these limitations, is reported. The resulting vertical transistors exhibit a reduced footprint, increased intrinsic transconductance of up to 57 mS, and a geometry-normalized transconductance of 814 S m⁻¹. The fabrication process is straightforward and compatible with sensitive organic materials, and allows exceptional control over the transistor channel length. This novel 3D fabrication method is particularly suited for applications where high density is needed, such as in implantable devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
6. Soft soils reinforced by rigid vertical inclusions
Directory of Open Access Journals (Sweden)
Iulia-Victoria NEAGOE
2013-12-01
Full Text Available Reinforcement of soft soils by rigid vertical inclusions is a technique that has become increasingly used over the last few years. The system consists of rigid or semi-rigid vertical inclusions and a granular platform that transfers loads from the structure to the inclusions. This technique aims to reduce differential settlements both at ground level and below the structure. Reinforcement by rigid inclusions is mainly used in foundation works for large commercial and industrial platforms, storage tanks, wastewater treatment plants, wind farms, bridges, roads, and railway embankments. The subject is of interest as it reflects recent international concerns in research and design; however, most studies deal with the static behavior and less with the dynamic one.
7. Interaction vertices in reduced string field theories
International Nuclear Information System (INIS)
Embacher, F.
1989-01-01
In contrast to previous expectations, covariant overlap vertices are not always suitable for gauge-covariant formulations of bosonic string field theory with a reduced supplementary field content. This is demonstrated for the version of the theory suggested by Neveu, Schwarz and West. The method to construct the interaction, as formulated by Neveu and West, fails at one level higher than these authors have considered. The condition for a general vertex to describe formally a local gauge-invariant interaction is derived. The solution for the action functional and the gauge transformation law is exhibited for all fields at once, to the first order in the coupling constant. However, all these vertices seem to be unphysical. 21 refs. (Author)
8. Purification, Characterization and Antioxidant Activities in Vitro and in Vivo of the Polysaccharides from Boletus edulis Bull
Directory of Open Access Journals (Sweden)
Yijun Fan
2012-07-01
Full Text Available A water-soluble polysaccharide (BEBP) was extracted from Boletus edulis Bull using hot water extraction followed by ethanol precipitation. The polysaccharide BEBP was further purified by chromatography on a DEAE-cellulose column, giving three major polysaccharide fractions termed BEBP-1, BEBP-2 and BEBP-3. In the next experiment, the average molecular weight (Mw), IR spectra and monosaccharide composition of the three polysaccharide fractions were determined. The evaluation of antioxidant activities both in vitro and in vivo suggested that BEBP-3 had good potential antioxidant activity, and should be explored as a novel potential antioxidant.
9. Sulla presenza di Sorex antinorii, Neomys anomalus (Insectivora, Soricidae) e Talpa caeca (Insectivora, Talpidae) in Umbria [On the presence of Sorex antinorii, Neomys anomalus (Insectivora, Soricidae) and Talpa caeca (Insectivora, Talpidae) in Umbria]
Directory of Open Access Journals (Sweden)
A.M. Paci
2003-10-01
Full Text Available The aim of this contribution is to provide an update on the presence of the Valais shrew Sorex antinorii, Miller's water shrew Neomys anomalus and the blind mole Talpa caeca in Umbria, where these species have been confirmed for some years. To this end, the collected specimens and the known literature were re-examined. Valais shrew: recently raised to species rank by Brünner et al. (2002), otherwise considered a subspecies of the common shrew (S. araneus antinorii). One of three incomplete skulls (mandibles and upper incisors missing) is preserved; these are at present prudently referred to Sorex cfr. antinorii, come from the northern Umbrian-Marchean Apennines (surroundings of Scalocchio - PG, 590 m a.s.l.) and were identified on the basis of the red pigmentation of the hypocones of M1 and M2. Miller's water shrew: three skulls (Breda in Paci and Romano op. cit.) and one whole specimen (Paci, unpublished) were found a few kilometres apart between the municipalities of Assisi and Valfabbrica, in mid-hill environments bordering the Monte Subasio Regional Park (Perugia). In the province of Terni the species is reported by Isotti (op. cit.) for the surroundings of Orvieto. Blind mole: a female and a male are known, collected in the municipality of Pietralunga (PG), respectively in a Pinus nigra conifer plantation (630 m a.s.l.) and near a mixed hill wood dominated by Quercus cerris (640 m a.s.l.). Recently a third individual was found in the municipality of Sigillo (PG), within the Monte Cucco Regional Park, on the edge of a beech wood at 1100 m a.s.l. In both cases the species' range proved to be parapatric with that of Talpa europaea.
10. Electrically floating, near vertical incidence, skywave antenna
Science.gov (United States)
Anderson, Allen A.; Kaser, Timothy G.; Tremblay, Paul A.; Mays, Belva L.
2014-07-08
An Electrically Floating, Near Vertical Incidence, Skywave (NVIS) Antenna comprising an antenna element, a floating ground element, and a grounding element. At least part of said floating ground element is positioned between said antenna element and said grounding element. The antenna is separated from the floating ground element and the grounding element by one or more electrical insulators. The floating ground element is separated from said antenna and said grounding element by one or more electrical insulators.
11. RADIALLY MAGNETIZED PROTOPLANETARY DISK: VERTICAL PROFILE
International Nuclear Information System (INIS)
Russo, Matthew; Thompson, Christopher
2015-01-01
This paper studies the response of a thin accretion disk to an external radial magnetic field. Our focus is on protoplanetary disks (PPDs), which are exposed during their later evolution to an intense, magnetized wind from the central star. A radial magnetic field is mixed into a thin surface layer, wound up by the disk shear, and pushed downward by a combination of turbulent mixing and ambipolar and ohmic drift. The toroidal field reaches much greater strengths than the seed vertical field that is usually invoked in PPD models, even becoming superthermal. Linear stability analysis indicates that the disk experiences the magnetorotational instability (MRI) at a higher magnetization than a vertically magnetized disk when both the effects of ambipolar and Hall drift are taken into account. Steady vertical profiles of density and magnetic field are obtained at several radii between 0.06 and 1 AU in response to a wind magnetic field B r ∼ (10 −4 –10 −2 )(r/ AU) −2 G. Careful attention is given to the radial and vertical ionization structure resulting from irradiation by stellar X-rays. The disk is more strongly magnetized closer to the star, where it can support a higher rate of mass transfer. As a result, the inner ∼1 AU of a PPD is found to evolve toward lower surface density. Mass transfer rates around 10 −8 M ⊙ yr −1 are obtained under conservative assumptions about the MRI-generated stress. The evolution of the disk and the implications for planet migration are investigated in the accompanying paper
12. RADIALLY MAGNETIZED PROTOPLANETARY DISK: VERTICAL PROFILE
Energy Technology Data Exchange (ETDEWEB)
Russo, Matthew [Department of Physics, University of Toronto, 60 St. George St., Toronto, ON M5S 1A7 (Canada); Thompson, Christopher [Canadian Institute for Theoretical Astrophysics, 60 St. George St., Toronto, ON M5S 3H8 (Canada)
2015-11-10
This paper studies the response of a thin accretion disk to an external radial magnetic field. Our focus is on protoplanetary disks (PPDs), which are exposed during their later evolution to an intense, magnetized wind from the central star. A radial magnetic field is mixed into a thin surface layer, wound up by the disk shear, and pushed downward by a combination of turbulent mixing and ambipolar and ohmic drift. The toroidal field reaches much greater strengths than the seed vertical field that is usually invoked in PPD models, even becoming superthermal. Linear stability analysis indicates that the disk experiences the magnetorotational instability (MRI) at a higher magnetization than a vertically magnetized disk when both the effects of ambipolar and Hall drift are taken into account. Steady vertical profiles of density and magnetic field are obtained at several radii between 0.06 and 1 AU in response to a wind magnetic field B{sub r} ∼ (10{sup −4}–10{sup −2})(r/ AU){sup −2} G. Careful attention is given to the radial and vertical ionization structure resulting from irradiation by stellar X-rays. The disk is more strongly magnetized closer to the star, where it can support a higher rate of mass transfer. As a result, the inner ∼1 AU of a PPD is found to evolve toward lower surface density. Mass transfer rates around 10{sup −8} M{sub ⊙} yr{sup −1} are obtained under conservative assumptions about the MRI-generated stress. The evolution of the disk and the implications for planet migration are investigated in the accompanying paper.
13. Assessing verticalization effects on urban safety perception
OpenAIRE
Lourenço, Ricardo Barros
2017-01-01
We describe an experiment modeling the effects of urban verticalization on perceived safety scores, obtained with computer vision on Google Streetview data for New York City. Preliminary results suggest that for smaller buildings (between one and seven floors), perceived safety increases with building height, but that for high-rise buildings, perceived safety decreases with increased height. We also determined that while height contributes to this relation, other zonal aspects also ...
14. Vertical Integration Versus Outsourcing in Industry Equilibrium
OpenAIRE
Bin Wang
2006-01-01
We study the determinants of the extent of in-house vertical integration and of outsourcing in foreign countries. Potential suppliers must make a relationship-specific investment in order to serve each prospective customer. Such investments are governed by imperfect contracts. A final-good producer can manufacture components itself, but the per-unit cost is higher than for specialized suppliers. We consider how the size of the cost differential, the trade costs of components, the relative cos...
15. Vertical Integration in the Taiwan Aquaculture Industry
OpenAIRE
Tzong-Ru Lee (Jiun-Shen); Yi-Hsu; Cheng-Jen Lin; Kongkiti Phusavat; Nirote Sinnarong
2011-01-01
The study aims to improve the distribution channels in the Taiwan aquaculture industry through better vertical integration. This study is derived from a need to improve the distribution performance of agricultural-based industries in response to increasing food demands in Asia and elsewhere. Based on a four-by-eight matrix derived from both a value chain and a service profit chain, thirty different strategies are developed. This development is based on key success factors and strategies for...
16. Vertical integration technologies for vertex detectors
International Nuclear Information System (INIS)
Ratti, L.
2011-01-01
This work is focused on the use of vertical integration (3D) technologies in the design of hybrid or monolithic pixel detectors in view of applications to silicon vertex trackers (SVTs) at the future high luminosity colliders. After a short introduction on the specifications of next-generation SVTs, the paper will discuss the general features of 3D microelectronic processes and the benefits they can provide to the design of pixel detectors for high energy physics experiments.
17. Vertical hydraulic transport of particulate solids
International Nuclear Information System (INIS)
Restini, C.V.; Massarani, G.
1977-01-01
The problem of vertical transport of particulate solids is formulated using the conservation equations of continuum mechanics. It is shown that the constitutive equation for the solid-fluid interaction term in the equations of motion may be determined by rather simple experiments on homogeneous fluidization. The predicted fluid pressure drop and solid concentration are in satisfactory agreement with past experiments and with data obtained in this work. (Author) [pt]
18. Data driven modelling of vertical atmospheric radiation
International Nuclear Information System (INIS)
Antoch, Jaromir; Hlubinka, Daniel
2011-01-01
At the Czech Hydrometeorological Institute (CHMI) there exists a unique set of meteorological measurements consisting of vertical atmospheric profiles of beta and gamma radiation levels. In this paper a stochastic data-driven model based on nonlinear regression and on a nonhomogeneous Poisson process is suggested. In the first part of the paper, growth curves are used to establish an appropriate nonlinear regression model. For comparison we consider a nonhomogeneous Poisson process with its intensity based on growth curves. In the second part both approaches are applied to the real data and compared. Computational aspects are briefly discussed as well. The primary goal of this paper is to present an improved understanding of the distribution of environmental radiation as obtained from measurements of vertical radioactivity profiles by the radioactivity sonde system. - Highlights: → We model vertical atmospheric levels of beta and gamma radiation. → We suggest an appropriate nonlinear regression model based on growth curves. → We compare nonlinear regression modelling with Poisson-process-based modelling. → We apply both models to the real data.
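The growth-curve/Poisson combination described in this record can be illustrated with the standard thinning (Lewis-Shedler) algorithm for simulating a nonhomogeneous Poisson process. The logistic intensity below is a hypothetical stand-in for the paper's fitted growth curves, not the CHMI model itself:

```python
import math
import random

def logistic_intensity(t, lam_max=5.0, k=1.0, t0=5.0):
    """Hypothetical growth-curve intensity (logistic form), events per unit time."""
    return lam_max / (1.0 + math.exp(-k * (t - t0)))

def simulate_nhpp(intensity, t_end, lam_bound, rng=None):
    """Simulate a nonhomogeneous Poisson process on [0, t_end] by thinning
    a homogeneous process of rate lam_bound >= max intensity."""
    rng = rng or random.Random(42)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_bound)          # candidate homogeneous arrival
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_bound:  # accept with prob λ(t)/λ̄
            events.append(t)

events = simulate_nhpp(logistic_intensity, t_end=10.0, lam_bound=5.0)
print(len(events), "events")
```

Fitting then amounts to choosing the growth-curve parameters that maximize the likelihood of observed event times, which is how the regression and Poisson views of the same data can be compared.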
19. Rotation of vertically oriented objects during earthquakes
Science.gov (United States)
Hinzen, Klaus-G.
2012-10-01
Vertically oriented objects, such as tombstones, monuments, columns, and stone lanterns, are often observed to shift and rotate during earthquake ground motion. Such observations are usually limited to the mesoseismal zone. Whether near-field rotational ground motion components are necessary in addition to pure translational movements to explain the observed rotations is an open question. We summarize rotation data from seven earthquakes between 1925 and 2009 and perform analog and numeric rotation testing with vertically oriented objects. The free-rocking motion of a marble block on a sliding table is disturbed by a pulse in the direction orthogonal to the rocking motion. When the impulse is sufficiently strong and occurs at the 'right' moment, it induces significant rotation of the block. Numeric experiments of a free-rocking block show that the initiation of vertical block rotation by a cycloidal acceleration pulse applied orthogonal to the rocking axis depends on the amplitude of the pulse and its phase relation to the rocking cycle. Rotation occurs when the pulse acceleration exceeds the threshold necessary to provoke rocking of a resting block, and the rocking block approaches its equilibrium position. Experiments with blocks subjected to full 3D strong motion signals measured during the 2009 L'Aquila earthquake confirm the observations from the tests with analytic ground motions. Significant differences in the rotational behavior of a monolithic block and two stacked blocks exist.
20. Human sensitivity to vertical self-motion.
Science.gov (United States)
Nesti, Alessandro; Barnett-Cowan, Michael; Macneilage, Paul R; Bülthoff, Heinrich H
2014-01-01
Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity to 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s². Overall, vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws' exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber's Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.
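The reported deviation from Weber's Law can be made concrete with a short sketch: Weber's Law would give ΔI = k·I (exponent 1), while the abstract's fitted exponents are 0.60 (upward) and 0.42 (downward), so thresholds grow sub-proportionally. The gain k below is a hypothetical placeholder, since the fitted constants are not quoted in the abstract:

```python
def differential_threshold(intensity, direction):
    """Power-law differential threshold ΔI = k * I**e, using the exponents
    reported in the abstract (0.60 upward, 0.42 downward).
    k is a hypothetical gain (m/s^2), not the paper's fitted value."""
    k = 0.1
    e = 0.60 if direction == "up" else 0.42
    return k * intensity ** e

# Doubling the stimulus raises the upward threshold by 2**0.6 ≈ 1.52x,
# not the 2x that Weber's Law would predict.
for a in (0.5, 1.0, 2.0):
    print(a, differential_threshold(a, "up"), differential_threshold(a, "down"))
```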
1. Measurement of vertical stability metrics in KSTAR
Science.gov (United States)
Hahn, Sang-Hee; Humphreys, D. A.; Mueller, D.; Bak, J. G.; Eidietis, N. W.; Kim, H.-S.; Ko, J. S.; Walker, M. L.; Kstar Team
2017-10-01
The paper summarizes results of multi-year ITPA experiments measuring the vertical stabilization capability of KSTAR discharges, including the most recent measurements at the highest achievable elongation (κ ≈ 2.0-2.1). The measurements of the open-loop VDE growth rate (γz) and the maximum controllable vertical displacement (ΔZmax) are made by the release-and-catch method. The dynamics of the vertical movement of the plasma is verified both by relevant magnetic reconstructions and by non-magnetic diagnostics. The measurements of γz and ΔZmax were made for different plasma currents, βp, internal inductances, elongations, and different configurations of the vessel conductors that surround the plasma as the first wall. Effects of control design choice and diagnostic noise are discussed, and comparison with the axisymmetric plasma response model is given to partially account for the measured control capability. This work was supported by the Ministry of Science, ICT, and Future Planning under the KSTAR project.
2. Feedback control of vertical instability in TNS
International Nuclear Information System (INIS)
Frantz, E.R.
1978-05-01
Due to the unfavorable curvature of the vertical vacuum magnetic field, elongated plasmas are vertically unstable when the elongation, ε, becomes too large. The TNS (The Next Step) tokamak, as evolved in the Westinghouse-ORNL studies, has an inside-D configuration (ε = 1.6, A = 5/1.25 = 4) characterized by an average decay index n ≈ −0.75 at the plasma flux surface near the magnetic axis and is vertically unstable with a growth rate γ₀ ≈ 10⁵ s⁻¹. Eddy currents produced in the vacuum vessel wall will slow this instability to growth rates γ₀ ≈ 10² s⁻¹ provided there are no transverse insulating gaps in the vessel wall. A matrix equation has been developed for calculating the eddy currents induced in the EF coils and their stabilizing effect. Control theory for feedback systems with and without delay time is presented and possible plasma position detectors are discussed. For a plasma current of 6.1 MA, the controller peak power requirements using separate controller circuits are approximately 1 MW, depending upon EF coil configuration and time delay. This feedback system is designed to stabilize a maximum plasma excursion of 10 cm from the midplane with delay times up to 2 sec.
3. TFTR vertically viewing electron cyclotron emission diagnostic
International Nuclear Information System (INIS)
Taylor, G.
1990-01-01
The Tokamak Fusion Test Reactor (TFTR) Michelson interferometer has a spectral coverage of 75-540 GHz, allowing measurement of the first four electron cyclotron harmonics. Until recently the instrument had been configured to view the TFTR plasma on the horizontal midplane, primarily in order to measure the electron temperature profile. Electron cyclotron emission (ECE) extraordinary-mode spectra from TFTR Supershot plasmas exhibit a pronounced, spectrally narrow feature below the second harmonic. A similar feature is seen with the ECE radiometer diagnostic below the electron cyclotron fundamental frequency in the ordinary mode. Analysis of the ECE spectra indicates the possibility of a non-Maxwellian 40-80 keV tail on the electron distribution in or near the core. During 1990 three vertical views with silicon carbide viewing targets will be installed to provide a direct measurement of the electron energy distribution at major radii of 2.54, 2.78, and 3.09 m with an energy resolution of approximately 20% at 100 keV. To provide maximum flexibility, the optical components for the vertical views will be remotely controlled to allow the Michelson interferometer to be reconfigured to either the midplane horizontal view or one of the three vertical views between plasma shots.
4. Vertical and horizontal seismometric observations of tides
Science.gov (United States)
Lambotte, S.; Rivera, L.; Hinderer, J.
2006-01-01
Tidal signals have been largely studied with gravimeters, strainmeters and tiltmeters, but can also be retrieved from digital records of the output of long-period seismometers, such as the STS-1, particularly if they are properly isolated. Horizontal components are often noisier than vertical ones, owing to their sensitivity to tilt at long periods. Hence, horizontal components are often disturbed by local effects such as topography, geology and cavity effects, which imply a strain-tilt coupling. We use data series (longer than 1 month) from several permanent broadband seismological stations to examine these disturbances. We seek a minimal set of observable signals (tilts, horizontal and vertical displacements, strains, gravity) necessary to reconstruct the seismological record. Such analysis gives a set of coefficients (per component for each studied station) which are stable over years and can then be used systematically to correct data for these disturbances without heavy numerical computation. Special attention is devoted to ocean loading for stations close to oceans (e.g. Matsushiro station in Japan (MAJO)), and to pressure correction when barometric data are available. Interesting observations are made for vertical seismometric components; in particular, we found a pressure admittance between pressure and data 10 times larger than for gravimeters for periods larger than 1 day, while this admittance reaches the usual value of −3.5 nm/s²/mbar for periods below 3 h. This observation may be due to instrumental noise, but the exact mechanism is not yet understood.
5. Algebraic motion of vertically displacing plasmas
Science.gov (United States)
Pfefferlé, D.; Bhattacharjee, A.
2018-02-01
The vertical motion of a tokamak plasma is analytically modelled during its non-linear phase by a free-moving current-carrying rod inductively coupled to a set of fixed conducting wires or to a cylindrical conducting shell. The solutions capture the leading term in a Taylor expansion of the Green's function for the interaction between the plasma column and the surrounding vacuum vessel. The plasma shape and profiles are assumed not to vary during the vertical drifting phase, such that the plasma column behaves as a rigid body. In the limit of perfectly conducting structures, the plasma is prevented from coming into contact with the wall by steep effective potential barriers created by the induced eddy currents. Resistivity in the wall allows the equilibrium point to drift towards the vessel on the slow timescale of flux penetration. The initial exponential motion of the plasma, understood as a resistive vertical instability, is succeeded by a non-linear "sinking" behaviour shown to be algebraic and decelerating. The acceleration of the plasma column often observed in experiments is thus concluded to originate from an early sharing of toroidal current between the core, the halo plasma, and the wall, or from the thermal quench dynamics precipitating loss of plasma current.
6. Dynamic stiffness of suction caissons - vertical vibrations
Energy Technology Data Exchange (ETDEWEB)
Ibsen, Lars Bo; Liingaard, M.; Andersen, Lars
2006-12-15
The dynamic response of offshore wind turbines are affected by the properties of the foundation and the subsoil. The purpose of this report is to evaluate the dynamic soil-structure interaction of suction caissons for offshore wind turbines. The investigation is limited to a determination of the vertical dynamic stiffness of suction caissons. The soil surrounding the foundation is homogenous with linear viscoelastic properties. The dynamic stiffness of the suction caisson is expressed by dimensionless frequency-dependent dynamic stiffness coefficients corresponding to the vertical degree of freedom. The dynamic stiffness coefficients for the foundations are evaluated by means of a dynamic three-dimensional coupled Boundary Element/Finite Element model. Comparisons are made with known analytical and numerical solutions in order to evaluate the static and dynamic behaviour of the Boundary Element/Finite Element model. The vertical frequency dependent stiffness has been determined for different combinations of the skirt length, Poisson's ratio and the ratio between soil stiffness and skirt stiffness. Finally the dynamic behaviour at high frequencies is investigated. (au)
7. Methyl 2-Benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate
Directory of Open Access Journals (Sweden)
Alami Anouar
2012-09-01
Full Text Available The heterocyclic carboxylic α-aminoester methyl 2-benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate is obtained by O-alkylation of N-benzoylated methyl α-azidoglycinate with 1H-benzimidazol-1-ylmethanol.
8. A specimen of Sorex cfr. samniticus in Barn Owl's pellets from the Murge plateau (Apulia, Italy) / Su di un Sorex cfr. samniticus (Insectivora, Soricidae) rinvenuto in borre di Tyto alba delle Murge (Puglia, Italia)
Directory of Open Access Journals (Sweden)
Giovanni Ferrara
1992-07-01
Full Text Available Abstract In a lot of Barn Owl's pellets from the Murge plateau a specimen of Sorex sp. was detected. Thanks to some morphological and morphometrical features, the cranial bones can be tentatively attributed to Sorex samniticus Altobello, 1926. The genus Sorex had not previously been included in Apulia's fauna south of the Gargano district; the origin and significance of the above record is briefly discussed, the actual presence of a natural population of Sorex in the Murge being not yet proved. Riassunto: The finding of a specimen of Sorex cfr. samniticus in Tyto alba pellets from the Murge is reported. Since the genus had not yet been recorded in Apulia south of the Gargano, the faunistic significance of the record is discussed.
9. Glycosylation of Vanillin and 8-Nordihydrocapsaicin by Cultured Eucalyptus perriniana Cells
Directory of Open Access Journals (Sweden)
Naoji Kubota
2012-05-01
Full Text Available Glycosylation of vanilloids such as vanillin and 8-nordihydrocapsaicin by cultured plant cells of Eucalyptus perriniana was studied. Vanillin was converted into vanillin 4-O-β-D-glucopyranoside, vanillyl alcohol, and 4-O-β-D-glucopyranosylvanillyl alcohol by E. perriniana cells. Incubation of cultured E. perriniana cells with 8-nordihydrocapsaicin gave 8-nordihydrocapsaicin 4-O-β-D-glucopyranoside and 8-nordihydrocapsaicin 4-O-β-D-gentiobioside.
10. Retratos em movimento [Portraits in motion].
Directory of Open Access Journals (Sweden)
Luiz Carlos Oliveira Junior
Full Text Available The article addresses aspects of the relationship between cinema and the art of portraiture. We first seek an aesthetic definition of what a cinematographic portrait would be, always in tension with the formal criteria and stylistic standards that historically constituted the pictorial portrait. We then relate this question to the importance given to the representation of the facial close-up in the first decades of cinema, when films were assigned an unprecedented role in the study of physiognomy and facial expression. Finally, we present examples of self-portraits in painting and in cinema to show how self-representation puts into crisis the notions of subjectivity and identity on which the classical definition of the portrait rested.
11. Retrieving Vertical Air Motion and Raindrop Size Distributions from Vertically Pointing Doppler Radars
Science.gov (United States)
Williams, C. R.; Chandra, C. V.
2017-12-01
The vertical evolution of falling raindrops is a result of evaporation, breakup, and coalescence acting upon those raindrops. Computing these processes using vertically pointing radar observations is a two-step process. First, the raindrop size distribution (DSD) and vertical air motion need to be estimated throughout the rain shaft. Then, the changes in DSD properties need to be quantified as a function of height. The change in liquid water content is a measure of evaporation, and the change in raindrop number concentration and size are indicators of net breakup or coalescence in the vertical column. The DSD and air motion can be retrieved using observations from two vertically pointing radars operating side-by-side and at two different wavelengths. While both radars are observing the same raindrop distribution, they measure different reflectivity and radial velocities due to Rayleigh and Mie scattering properties. As long as raindrops with diameters greater than approximately 2 mm are in the radar pulse volumes, the Rayleigh and Mie scattering signatures are unique enough to estimate DSD parameters using radars operating at 3- and 35-GHz (Williams et al. 2016). Vertical decomposition diagrams (Williams 2016) are used to explore the processes acting on the raindrops. Specifically, changes in liquid water content with height quantify evaporation or accretion. When the raindrops are not evaporating, net raindrop breakup and coalescence are identified by changes in the total number of raindrops and changes in the DSD effective shape as the raindrops fall. This presentation will focus on describing the DSD and air motion retrieval method using vertical profiling radar observations from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) central facility in Northern Oklahoma.
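The second step above, comparing liquid water content (LWC) between heights, follows directly from the binned DSD: LWC is proportional to the third moment of the drop diameters. The sketch below uses hypothetical DSDs at two heights (not ARM SGP data) to illustrate the evaporation signature:

```python
import math

RHO_W = 1000.0  # density of liquid water, kg/m^3

def liquid_water_content(diams_mm, counts_per_m3):
    """Liquid water content (g/m^3) of a binned DSD:
    LWC = (pi/6) * rho_w * sum(N_i * D_i^3), D in metres."""
    lwc_kg = 0.0
    for d_mm, n in zip(diams_mm, counts_per_m3):
        d_m = d_mm * 1e-3
        lwc_kg += (math.pi / 6.0) * RHO_W * n * d_m ** 3
    return lwc_kg * 1e3  # convert kg/m^3 to g/m^3

# Hypothetical bin centers (mm) and concentrations (m^-3) at two heights
diams = [0.5, 1.0, 2.0, 3.0]
upper = [800.0, 300.0, 40.0, 5.0]   # aloft
lower = [700.0, 260.0, 35.0, 4.0]   # near the surface

# LWC decreasing toward the ground (absent accretion) indicates evaporation
print(liquid_water_content(diams, upper) > liquid_water_content(diams, lower))  # True
```

Breakup versus coalescence is then diagnosed from the height change in total drop count and in the shape of the distribution, rather than from LWC alone.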
12. Postglacial Rebound from VLBI Geodesy: On Establishing Vertical Reference
Science.gov (United States)
Argus, Donald F.
1996-01-01
I propose that a useful reference frame for vertical motions is that found by minimizing differences between vertical motions observed with VLBI [Ma and Ryan, 1995] and predictions from postglacial rebound models [Peltier, 1995].
13. Vertically aligned carbon nanotube field-effect transistors
KAUST Repository
Li, Jingqi; Zhao, Chao; Wang, Qingxiao; Zhang, Qiang; Wang, Zhihong; Zhang, Xixiang; Abutaha, Anas I.; Alshareef, Husam N.
2012-01-01
Vertically aligned carbon nanotube field-effect transistors (CNTFETs) have been developed using pure semiconducting carbon nanotubes. The source and drain were vertically stacked, separated by a dielectric, and the carbon nanotubes were placed
14. Going Up? The Pros and Cons of Vertical Expansion.
Science.gov (United States)
Myler, Patricia A.; Boggs, Richard C.
2002-01-01
Describes the advantages and disadvantages of the vertical expansion of school buildings. Considers such factors as fire protection, compliance with the Americans with Disabilities Act, and cost. Discusses alternatives to vertical expansion. (PKP)
15. Horizontalidade e verticalidade: os modelos de improvisação de Pixinguinha e K-Ximbinho no choro brasileiro Horizontal and vertical structures: Pixinguinha and K-Ximbinho's models of improvisation in the Brazilian Music
Directory of Open Access Journals (Sweden)
Paula Veneziano Valente
2011-06-01
Full Text Available Analysis of the improvisation procedures used by the Brazilian instrumentalists Pixinguinha in 1 x 0 (One to zero; 1947) and K-Ximbinho in Velhos Companheiros (Old pals; 1981). A comparison of the differences and similarities in their approaches reveals a preference for vertical or horizontal stylistic models.
16. In Vivo Histamine Optical Nanosensors
Directory of Open Access Journals (Sweden)
Heather A. Clark
2012-08-01
Full Text Available In this communication we discuss the development of ionophore-based nanosensors for the detection and monitoring of histamine levels in vivo. This approach is based on the use of an amine-reactive, broad-spectrum ionophore which is capable of recognizing and binding to histamine. We pair this ionophore with our already established nanosensor platform, and demonstrate in vitro and in vivo monitoring of histamine levels. This approach enables capturing the rapid kinetics of histamine after injection, which are more difficult to measure with standard approaches such as blood sampling, especially in small research models. The coupling of in vivo nanosensors with ionophores such as nonactin provides a way to generate nanosensors for novel targets without the difficult process of designing and synthesizing novel ionophores.
17. Tolerances for the vertical emittance in damping rings
International Nuclear Information System (INIS)
Raubenheimer, T.O.
1991-11-01
Future damping rings for linear colliders will need to have very small vertical emittances. In the limit of low beam current, the vertical emittance is primarily determined by the vertical dispersion and the betatron coupling. In this paper, the contributions to these effects from random misalignments are calculated and tolerances are derived to limit the vertical emittance with a 95% confidence level. 10 refs., 5 figs
18. The correction of occlusal vertical dimension on tooth wear
Directory of Open Access Journals (Sweden)
Rostiny Rostiny
2007-12-01
Full Text Available The loss of occlusal vertical dimension caused by tooth wear needs to be treated to regain the vertical dimension. Corrective therapy should be performed as early as possible. In this case, a simple and relatively low-cost therapy was performed. Where the loss of occlusal vertical dimension is not severe, a partial removable denture can be used, together with lengthening of the anterior teeth with composite resin, to regain the occlusal vertical dimension.
19. Vertical specialization and industrial upgrading: a preliminary note
OpenAIRE
Xiao Jiang; William Milberg
2012-01-01
Abstract Vertical specialization is a measure of the import content of exports. Given the widely recognized importance of trade in tasks and global production networks, vertical specialization has recently gained the attention of international trade researchers and policy makers. In this note, we use measured changes in the within-country pattern of vertical specialization to gauge the relevance of task trade for industrial upgrading and economic development. We first calculate vertical speci...
20. Study of the in Vitro Antiplasmodial, Antileishmanial and Antitrypanosomal Activities of Medicinal Plants from Saudi Arabia
Directory of Open Access Journals (Sweden)
Nawal M. Al-Musayeib
2012-09-01
Full Text Available The present study investigated the in vitro antiprotozoal activity of sixteen selected medicinal plants. Plant materials were extracted with methanol and screened in vitro against erythrocytic schizonts of Plasmodium falciparum, intracellular amastigotes of Leishmania infantum and Trypanosoma cruzi, and free trypomastigotes of T. brucei. Cytotoxic activity was determined against MRC-5 cells to assess selectivity. The criterion for activity was an IC50 < 10 µg/mL. Antiplasmodial activity was found in the extracts of Prosopis juliflora and Punica granatum. Antileishmanial activity against L. infantum was demonstrated in Caralluma sinaica and Periploca aphylla. Amastigotes of T. cruzi were affected by the methanol extracts of Albizia lebbeck pericarp, Caralluma sinaica, Periploca aphylla and Prosopis juliflora. Activity against T. brucei was obtained in Prosopis juliflora. Cytotoxicity (MRC-5 IC50 < 10 µg/mL and hence non-specific activity was observed for Conocarpus lancifolius.
1. Analysis of vertical stability limits and vertical displacement event behavior on NSTX-U
Science.gov (United States)
Boyer, Mark; Battaglia, Devon; Gerhardt, Stefan; Menard, Jonathan; Mueller, Dennis; Myers, Clayton; Sabbagh, Steven; Smith, David
2017-10-01
The National Spherical Torus Experiment Upgrade (NSTX-U) completed its first run campaign in 2016, including commissioning a larger center-stack and three new tangentially aimed neutral beam sources. NSTX-U operates at increased aspect ratio due to the larger center-stack, making vertical stabilization more challenging. Since ST performance is improved at high elongation, improvements to the vertical control system were made, including use of multiple up-down-symmetric flux loop pairs for real-time estimation, and filtering to remove noise. Similar operating limits to those on NSTX (in terms of elongation and internal inductance) were achieved, now at higher aspect ratio. To better understand the observed limits and project to future operating points, a database of vertical displacement events and vertical oscillations observed during the plasma current ramp-up on NSTX/NSTX-U has been generated. Shots were clustered based on the characteristics of the VDEs/oscillations, and the plasma parameter regimes associated with the classes of behavior were studied. Results provide guidance for scenario development during ramp-up to avoid large oscillations at the time of diverting, and provide the means to assess stability of target scenarios for the next campaign. Results will also guide plans for improvements to the vertical control system. Work supported by U.S. D.O.E. Contract No. DE-AC02-09CH11466.
2. 2D Vertical Heterostructures for Novel Tunneling Device Applications
Science.gov (United States)
2017-03-01
Philip M. Campbell, Christopher J. Perini, W. Jud Ready, and Eric M. Vogel; School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA, USA 30332. Abstract: Vertical heterostructures ... digital logic, signal processing, analog-to-digital conversion, and high-frequency communications, vertical heterostructure tunneling devices have
3. Vertical vs. Horizontal Integration: Pre-emptive Merging.
OpenAIRE
Colangelo, Giuseppe
1995-01-01
Preemption plays a crucial role in firms' merger decisions. The author studies whether and under which circumstances preemptive merging occurs in vertically related industries. He finds that vertical mergers often preempt horizontal mergers and are dominant outcomes. Preempting the threat of a detrimental horizontal integration may be the main reason for vertically integrating. Copyright 1995 by Blackwell Publishing Ltd.
4. Mechanical design of NASA Ames Research Center vertical motion simulator
Science.gov (United States)
Engelbert, D. F.; Bakke, A. P.; Chargin, M. K.; Vallotton, W. C.
1976-01-01
NASA has designed and is constructing a new flight simulator with large vertical travel. Several aspects of the mechanical design of this Vertical Motion Simulator (VMS) are discussed, including the multiple rack and pinion vertical drive, a pneumatic equilibration system, and the friction-damped rigid link catenaries used as cable supports.
5. Vertical selection in the information domain of children
NARCIS (Netherlands)
Duarte Torres, Sergio; Hiemstra, Djoerd; Huibers, Theo W.C.
In this paper we explore the vertical selection methods in aggregated search in the specific domain of topics for children between 7 and 12 years old. A test collection consisting of 25 verticals, 3.8K queries and relevant assessments for a large sample of these queries mapping relevant verticals to
6. Effect of vertical integration on the utilization of hardwood resources
Science.gov (United States)
Jan Wiedenbeck
2002-01-01
The effectiveness of vertical integration in promoting the efficient utilization of the hardwood resource in the eastern United States was assessed during a series of interviews with vertically integrated hardwood manufacturers in the Appalachian region. Data from 19 companies that responded to the 1996 phone survey indicate that: 1) vertically integrated hardwood...
7. Simple suggestions for including vertical physics in oil spill models
International Nuclear Information System (INIS)
D'Asaro, Eric; University of Washington, Seattle, WA
2001-01-01
Current models of oil spills include no vertical physics. They neglect the effect of vertical water motions on the transport and concentration of floating oil. Some simple ways to introduce vertical physics are suggested here. The major suggestion is to routinely measure the density stratification of the upper ocean during oil spills in order to develop a database on the effect of stratification. (Author)
8. ELECTRICAL MUSCLE STIMULATION (EMS IMPLEMENTATION IN EXPLOSIVE STRENGTH DEVELOPMENT
Directory of Open Access Journals (Sweden)
Zoran Đokić
2013-07-01
Full Text Available Electrical muscle stimulation (EMS), also known as neuromuscular electrical stimulation (NMES), may be used for therapeutic purposes and for training. EMS causes muscle contractions via electrical impulses. The survey was conducted as a case study of three male subjects of different ages. The study lasted 4 weeks, during which the subjects did not engage in any type of training or activity that would affect the development of explosive strength of the lower extremities. Electrical stimulation was performed in the evening, every other day, with a COMPEX mi-sport apparatus (Medical SA - All rights reserved - 07/06 - Art. 885,616 - V.2 model). Over the 4-week period, a total of 13 treatments were applied to the selected muscle groups, the quadriceps femoris and gastrocnemius, using the apparatus's plyometric program (28 min per treatment) for each muscle group. The main objective of this study was to quantify and compare explosive leg strength, using different vertical jump protocols, before and after the EMS program. The initial and final testing was conducted in the laboratory of the Faculty of Sport and Tourism in Novi Sad, on the contact plate AXON JUMP (Bioingeniería Deportiva, VACUMED, 4538 Westinghouse Street, Ventura, CA 93003) under identical conditions. All three subjects showed an increase in vertical jump in all applied protocols.
OpenAIRE
Wendt, Guilherme Welter
2012-01-01
Cyberbullying is understood as a form of aggressive behavior that occurs through electronic means of interaction (computers, mobile phones, social networking sites), carried out intentionally by a person or group against someone in an unequal position of power who, moreover, has difficulty defending himself or herself. The studies available to date highlight that cyberbullying is a risk factor for the development of symptoms of anxiety, depression, suicidal ideation...
10. Nietzsche em voga
OpenAIRE
Borromeu, Carlos
2015-01-01
Abstract: Text published in 1941 in the Catholic orientation magazine A Ordem, in Rio de Janeiro. The author considers that Nietzsche denied traditional morality, conceiving in its place another that is immoral and brutal. He finally accuses the philosopher of being responsible for the war then under way in Europe.
11. Transient well flow in vertically heterogeneous aquifers
Science.gov (United States)
Hemker, C. J.
1999-11-01
A solution for the general problem of computing well flow in vertically heterogeneous aquifers is found by an integration of both analytical and numerical techniques. The radial component of flow is treated analytically; the drawdown is a continuous function of the distance to the well. The finite-difference technique is used for the vertical flow component only. The aquifer is discretized in the vertical dimension and the heterogeneous aquifer is considered to be a layered (stratified) formation with a finite number of homogeneous sublayers, where each sublayer may have different properties. The transient part of the differential equation is solved with Stehfest's algorithm, a numerical inversion technique of the Laplace transform. The well is of constant discharge and penetrates one or more of the sublayers. The effect of wellbore storage on early drawdown data is taken into account. In this way drawdowns are found for a finite number of sublayers as a continuous function of radial distance to the well and of time since the pumping started. The model is verified by comparing results with published analytical and numerical solutions for well flow in homogeneous and heterogeneous, confined and unconfined aquifers. Instantaneous and delayed drainage of water from above the water table are considered, combined with the effects of partially penetrating and finite-diameter wells. The model is applied to demonstrate that the transient effects of wellbore storage in unconfined aquifers are less pronounced than previous numerical experiments suggest. Other applications of the presented solution technique are given for partially penetrating wells in heterogeneous formations, including a demonstration of the effect of decreasing specific storage values with depth in an otherwise homogeneous aquifer. The presented solution can be a powerful tool for the analysis of drawdown from pumping tests, because hydraulic properties of layered heterogeneous aquifer systems with
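The abstract names Stehfest's algorithm as the numerical Laplace-transform inversion used to recover transient drawdowns. A minimal sketch of that algorithm is below; the test transform pair F(s) = 1/(s+1), f(t) = exp(-t) is an illustrative check, not the well-flow transform from the paper.

```python
import math

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s) using the
    Stehfest (1970) algorithm. n must be even; n = 10..14 is typical
    for smooth f(t), since larger n amplifies round-off error."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, n + 1):
        # Stehfest weight V_k (alternating-sign combinatorial sum)
        v = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            v += (j ** (n // 2) * math.factorial(2 * j)
                  / (math.factorial(n // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (k + n // 2)
        # Sample the transform on the real axis at s = k*ln(2)/t
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# Known pair for verification: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

Because the method only samples F(s) on the positive real axis, it works well for the smooth, monotone drawdown curves typical of well hydraulics, which is why it is a standard choice in this literature.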
12. Vertical Transport by Coastal Mesoscale Convective Systems
Science.gov (United States)
2016-12-01
This work is part of an ongoing investigation of coastal mesoscale convective systems (MCSs), including changes in vertical transport of boundary layer air by storms moving from inland to offshore. The density of a storm's cold pool versus that of the offshore marine atmospheric boundary layer (MABL), in part, determines the ability of the storm to successfully cross the coast, the mechanism driving storm propagation, and the ability of the storm to lift air from the boundary layer aloft. The ability of an MCS to overturn boundary layer air can be especially important over the eastern US seaboard, where warm season coastal MCSs are relatively common and where large coastal population centers generate concentrated regions of pollution. Recent work numerically simulating idealized MCSs in a coastal environment has provided some insight into the physical mechanisms governing MCS coastal crossing success and the impact on vertical transport of boundary layer air. Storms are simulated using a cloud resolving model initialized with atmospheric conditions representative of a Mid-Atlantic environment. Simulations are run in 2-D at 250 m horizontal resolution with a vertical resolution stretched from 100 m in the boundary layer to 250 m aloft. The left half of the 800 km domain is configured to represent land, while the right half is assigned as water. Sensitivity experiments are conducted to quantify the influence of varying MABL structure on MCS coastal crossing success and air transport, with MABL values representative of those observed over the western Mid-Atlantic during warm season. Preliminary results indicate that when the density of the cold pool is much greater than the MABL, the storm successfully crosses the coastline, with lifting of surface parcels, which ascend through the troposphere. When the density of the cold pool is similar to that of the MABL, parcels within the MABL remain at low levels, though parcels above the MABL ascend through the troposphere.
13. Vertical deformation at western part of Sumatra
Energy Technology Data Exchange (ETDEWEB)
Febriyani, Caroline, E-mail: caroline.fanuel@students.itb.ac.id; Prijatna, Kosasih, E-mail: prijatna@gd.itb.ac.id; Meilano, Irwan, E-mail: irwan.meilano@gd.itb.ac.id
2015-04-24
This research seeks to advance GPS signal processing to estimate the interseismic vertical deformation field in the western part of Sumatra Island. The data were derived from Continuous Global Positioning System (CGPS) stations of Badan Informasi Geospasial (BIG) between 2010 and 2012. The GPS Analysis at Massachusetts Institute of Technology (GAMIT) software and the Global Kalman Filter (GLOBK) software are used to process the GPS signals and estimate the vertical velocities of the CGPS stations. In order to minimize noise due to atmospheric delay, the Vienna Mapping Function 1 (VMF1) is used as the atmospheric parameter model, together with daily IONEX files provided by the Center for Orbit Determination in Europe (CODE). This improves the GAMIT daily position accuracy by up to 0.8 mm. In the second step of processing, GLOBK is used to estimate site positions and velocities in the ITRF08 reference frame. The result shows that the uncertainties of the estimated displacement velocities at all CGPS stations are smaller than 1.5 mm/yr. Subsidence patterns are seen in the northern and southern parts of west Sumatra. The vertical deformation in the northern part of west Sumatra indicates a postseismic phase associated with the 2010 and 2012 Northern Sumatra earthquakes, as well as long-term postseismic deformation associated with the 2004 and 2005 Northern Sumatra earthquakes. Uplift patterns are seen from Bukit Tinggi to Seblat, indicating a long-term interseismic phase after the 2007 Bengkulu earthquake and the 2010 Mentawai earthquake. The GANO station shows subsidence at a rate of 12.25 mm/yr, consistent with drag on the overriding plate by the subducting Indo-Australia Plate.
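The velocity-estimation step above uses GLOBK's Kalman filter; as a much simpler stand-in, the sketch below fits a straight-line rate to a daily vertical position series by least squares. The synthetic series, its noise level, and the white-noise error model are all illustrative assumptions (real GPS series have colored noise, so formal errors like this one are optimistic).

```python
import numpy as np

def vertical_rate(t_yr, up_mm):
    """Least-squares vertical rate [mm/yr] and its formal 1-sigma error
    from a GPS 'up' time series, under a white-noise assumption."""
    rate, offset = np.polyfit(t_yr, up_mm, 1)   # slope first, then intercept
    resid = up_mm - (offset + rate * t_yr)
    sigma2 = resid @ resid / (len(t_yr) - 2)    # residual variance
    sxx = np.sum((t_yr - t_yr.mean()) ** 2)
    return rate, np.sqrt(sigma2 / sxx)

# Synthetic daily series: 2 years at -12.25 mm/yr (the GANO rate quoted in
# the abstract, reused here only as an illustrative slope) + 3 mm noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 1.0 / 365.25)
up = -12.25 * t + rng.normal(0.0, 3.0, t.size)
rate, sigma = vertical_rate(t, up)
```

With ~730 daily points and 3 mm scatter, the recovered rate lands within a few tenths of a mm/yr of the input, illustrating why the abstract can report station uncertainties below 1.5 mm/yr from a two-year series.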
14. Natural Products from Antarctic Colonial Ascidians of the Genera Aplidium and Synoicum: Variability and Defensive Role
Directory of Open Access Journals (Sweden)
Conxita Avila
2012-08-01
Full Text Available Ascidians have developed multiple defensive strategies, mostly related to physical, nutritional or chemical properties of the tunic. One such strategy is chemical defense based on secondary metabolites. We analyzed a series of colonial Antarctic ascidians from deep-water collections belonging to the genera Aplidium and Synoicum to evaluate the incidence of organic deterrents and their variability. The ether fractions from 15 samples, including specimens of the species A. falklandicum, A. fuegiense, A. meridianum, A. millari and S. adareanum, were subjected to feeding assays towards two relevant sympatric predators: the starfish Odontaster validus and the amphipod Cheirimedon femoratus. All samples revealed repellency. Nonetheless, some colonies concentrated defensive chemicals in internal body regions rather than in the tunic. Four ascidian-derived meroterpenoids, rossinone B and the three derivatives 2,3-epoxy-rossinone B, 3-epi-rossinone B and 5,6-epoxy-rossinone B, and the indole alkaloids meridianins A–G, along with other minority meridianin compounds, were isolated from several samples. Some purified metabolites were tested in feeding assays, exhibiting potent unpalatability and thus revealing their role in predation avoidance. Ascidian extracts and purified compound fractions were further assessed in antibacterial tests against a marine Antarctic bacterium. Only the meridianins showed inhibition activity, demonstrating a multifunctional defensive role. According to their occurrence in nature and within our colonial specimens, the possible origin of both types of metabolites is discussed.
15. Opportunity's Surroundings on Sol 1818 (Vertical)
Science.gov (United States)
2009-01-01
NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). This view is presented as a vertical projection with geometric seam correction; north is at the top. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.
16. On triangle meshes with valence dominant vertices
KAUST Repository
Morvan, Jean-Marie
2018-01-01
We study triangulations $\cal T$ defined on a closed disc $X$ satisfying the following condition: In the interior of $X$, the valence of all vertices of $\cal T$ except one of them (the irregular vertex) is $6$. By using a flat singular Riemannian metric adapted to $\cal T$, we prove a uniqueness theorem when the valence of the irregular vertex is not a multiple of $6$. Moreover, for a given integer $k >1$, we exhibit non isomorphic triangulations on $X$ with the same boundary, and with a unique irregular vertex whose valence is $6k$.
17. Surface vertical deposition for gold nanoparticle film
International Nuclear Information System (INIS)
Diao, J J; Qiu, F S; Chen, G D; Reeves, M E
2003-01-01
In this rapid communication, we present the surface vertical deposition (SVD) method to synthesize gold nanoparticle films. Under conditions where the surface of the gold nanoparticle suspension descends slowly by evaporation, the gold nanoparticles at the solid-liquid-gas junction of the suspension aggregate together on the substrate through solid-liquid interfacial forces. When the surface properties of the substrate and the colloidal nanoparticle suspension are suitable for SVD, the density of gold nanoparticles in the thin film made by SVD depends only on the descending velocity of the suspension surface and on the concentration of the gold nanoparticle suspension. (rapid communication)
18. Biomineralization of superhydrophilic vertically aligned carbon nanotubes.
Science.gov (United States)
Marsi, Teresa Cristina O; Santos, Tiago G; Pacheco-Soares, Cristina; Corat, Evaldo J; Marciano, Fernanda R; Lobo, Anderson O
2012-03-06
Vertically aligned carbon nanotubes (VACNT) hold great promise for the study of tissue regeneration. In this paper, we introduce a new biomimetic mineralization route employing superhydrophilic VACNT films as highly stable template materials. The biomineralization was obtained after soaking the VACNT in simulated body fluid solution. Detailed structural analysis reveals that polycrystalline biological apatites formed due to the -COOH terminations attached to the VACNT tips after oxygen plasma etching. Our approach not only provides a novel route for nanostructured materials, but also suggests that COOH termination sites can play a significant role in biomimetic mineralization. These new nanocomposites are very promising as nanobiomaterials due to their excellent human osteoblast adhesion.
19. On triangle meshes with valence dominant vertices
KAUST Repository
Morvan, Jean-Marie
2018-02-16
We study triangulations $\cal T$ defined on a closed disc $X$ satisfying the following condition: In the interior of $X$, the valence of all vertices of $\cal T$ except one of them (the irregular vertex) is $6$. By using a flat singular Riemannian metric adapted to $\cal T$, we prove a uniqueness theorem when the valence of the irregular vertex is not a multiple of $6$. Moreover, for a given integer $k >1$, we exhibit non isomorphic triangulations on $X$ with the same boundary, and with a unique irregular vertex whose valence is $6k$.
20. Vertically Polarized Omnidirectional Printed Slot Loop Antenna
DEFF Research Database (Denmark)
Kammersgaard, Nikolaj Peter Iversen; Kvist, Søren H.; Thaysen, Jesper
2015-01-01
A novel vertically polarized omnidirectional printed slot loop antenna has been designed, simulated, fabricated and measured. The slot loop works as a magnetic loop. The loop is loaded with inductors to ensure uniform and in-phase fields in the slot in order to obtain an omnidirectional radiation pattern. The antenna is designed for the 2.45 GHz Industrial, Scientific and Medical band. The antenna has many applications. One is for on-body applications, since it is ideal for launching creeping waves due to its polarization.
1. Vertical-Screw-Auger Conveyer Feeder
Science.gov (United States)
Walton, Otis (Inventor); Vollmer, Hubert J. (Inventor)
2016-01-01
A conical feeder is attached to a vertically conveying screw auger. The feeder is equipped with scoops and rotated from the surface to force-feed regolith to the auger. Additional scoops are possible by adding a cylindrical section above the conical funnel section. This allows the unit to collect material from swaths larger in diameter than the enclosing casing pipe of the screw auger. A third element is a flexible screw auger. All three can be used in combination in microgravity and zero-atmosphere environments to drill and recover a wide area of subsurface regolith and entrained volatiles through a single access point on the surface.
2. Vertical Cable Seismic Survey for Hydrothermal Deposit
Science.gov (United States)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Ishikawa, K.; Tsukahara, H.; Shimura, T.
2012-04-01
The vertical cable seismic is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean-bottom sources. By analyzing the reflections from the sub-seabed, we can look into the subsurface structure. This type of survey is generally called VCS (Vertical Cable Seismic). Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. Our first VCS survey was carried out in Lake Biwa, Japan in November 2009 as a feasibility study. Prestack depth migration was applied to the 3D VCS data to obtain a high-quality 3D depth volume. Based on the results of the feasibility study, we developed two autonomous recording VCS systems. We then carried out a trial experiment in the open ocean at a water depth of about 400 m, followed by a second VCS survey at Iheya Knoll with a deep-towed source. In this survey, we established the procedures for deployment and recovery of the system and examined the locations and fluctuations of the vertical cables at a water depth of around 1000 m. The acquired VCS data clearly show reflections from the sub-seafloor. Through the experiment, we confirmed that our VCS system works well even in the severe conditions around seafloor hydrothermal deposits. We have, however, also confirmed that uncertainty in the locations of the source and hydrophones can lower the quality of the subsurface image. It is, therefore, strongly necessary to develop a total survey system that assures accurate positioning and deployment techniques
3. Propulsion systems for vertical flight aircraft
Energy Technology Data Exchange (ETDEWEB)
Brooks, A.
1990-01-01
The present evaluation of VTOL airframe/powerplant integration configurations combining high forward flight speed with safe and efficient vertical flight identifies six configurations that can be matched with one of three powerplant types: turboshafts, convertible-driveshaft lift fans, and gas-drive lift fans. The airframe configurations are (1) tilt-rotor, (2) folded tilt-rotor, (3) tilt-wing, (4) rotor wing/disk wing, (5) lift fan, and (6) variable-diameter rotor. Attention is given to the lift-fan VTOL configuration. The evaluation of these configurations has been conducted by both a joint NASA/DARPA program and the NASA High Speed Rotorcraft program. 7 refs.
4. Dissociation of Vertical Semiconductor Diatomic Artificial Molecules
International Nuclear Information System (INIS)
Pi, M.; Emperador, A.; Barranco, M.; Garcias, F.; Muraki, K.; Tarucha, S.; Austing, D. G.
2001-01-01
We investigate the dissociation of few-electron circular vertical semiconductor double quantum dot artificial molecules at 0T as a function of interdot distance. A slight mismatch introduced in the fabrication of the artificial molecules from nominally identical constituent quantum wells induces localization by offsetting the energy levels in the quantum dots by up to 2meV, and this plays a crucial role in the appearance of the addition energy spectra as a function of coupling strength particularly in the weak coupling limit
5. Carbon export by vertically migrating zooplankton
DEFF Research Database (Denmark)
Hansen, Agnethe Nøhr; Visser, André W.
2016-01-01
Through diel vertical migration (DVM), zooplankton add an active transport to the otherwise passive sinking of detrital material that constitutes the biological pump. This active transport has proven difficult to quantify. We present a model that estimates both the temporal and depth characteristics ... is transported than at either equatorial or boreal latitudes. We estimate that the amount of carbon transported below the mixed layer by migrating zooplankton in the North Atlantic Ocean constitutes 27% (16–30%) of the total export flux associated with the biological pump in that region...
6. Diel vertical migration and distribution of zooplankton in a tropical Brazilian reservoirlian
Directory of Open Access Journals (Sweden)
Ana M. A. da Silva
2009-08-01
... sampled at four depths (subsurface, 50% Io, 1% Io, and bottom) at a sampling station five meters deep, at four-hour intervals over 24 hours. Two species of Cladocera (Moina minuta and Diaphanosoma spinulosum) and one species of Copepoda (Notodiaptomus cearensis) showed relatively similar patterns of nocturnal migration, remaining at the bottom during the day and moving toward the surface in the late afternoon and throughout the night. Brachionus falcatus and Hexarthra mira (Rotifera) showed no migratory patterns, and their vertical distributions were relatively homogeneous. The environmental variables and species distributions were weakly correlated, suggesting that other mechanisms may be responsible for inducing vertical migration.
7. Vertical designs and agriculture joined for food production in the modules for urban vertical gardens.
Directory of Open Access Journals (Sweden)
Fritz Hammerling Navas Navarro
2012-10-01
Modules for Vertical Urban Gardens (MHUG) are a hybrid of vertical gardens and urban agriculture. Vertical gardens have been recognized for the past 2500 years, most famously in the form of the Hanging Gardens of Babylon, while urban agriculture is practiced today by more than 700 million people worldwide. The benefits that MHUGs offer are multiple, but perhaps the most significant is the consumption of foods free of chemicals and GMOs, irrigated with potable water, and 100% organic. A “culinary and medicinal module” is presented that can be implemented in the kitchen area, or on roofs, terraces, balconies or patios, where species such as thyme, mint, peppermint, parsley, lemon balm and rosemary can be at hand when preparing dishes. The module consists of three plastic baskets that are recyclable and resistant to decay. Each basket has four rows with space for fourteen seedlings. The baskets are first lined on the interior with a black geotextile, and then covered with a mesh (polisombra) which helps support the substrate and seedlings. Each basket rests on a structure made of recycled wood (from pallets or crates) that both holds the basket vertically and serves as a rain cover. The cages measure 0.33 m by 0.55 m by 0.14 m. Each module comes with hosing and connectors for a drip irrigation system, and an instruction manual. The modules demonstrate the benefits of urban agriculture combined with the beauty and modality of vertical gardens, leading to useful applications for food production and decoration in the spaces where vertical urban gardens are possible.
8. Role of the vertical pressure gradient in wave boundary layers
DEFF Research Database (Denmark)
Jensen, Karsten Lindegård; Sumer, B. Mutlu; Vittori, Giovanna
2014-01-01
By direct numerical simulation (DNS) of the flow in an oscillatory boundary layer, it is possible to obtain the pressure field. From the latter, the vertical pressure gradient is determined. Turbulent spots are detected by a criterion involving the vertical pressure gradient. The vertical pressure gradient is also treated like any other turbulence quantity, such as velocity fluctuations, and statistical properties of the vertical pressure gradient are calculated from the DNS data. The presence of a vertical pressure gradient in the near-bed region has significant implications for sediment transport.
9. Expected sliding distance of vertical slit caisson breakwater
Science.gov (United States)
Kim, Dong Hyawn
2017-06-01
A method for evaluating the expected sliding distance of a vertical slit caisson breakwater is proposed. A time history for the wave load on a vertical slit caisson is constructed; it consists of two impulsive wave pressures followed by a smooth sinusoidal pressure. In the numerical analysis, the sliding distance for the attack of a single wave is shown, and the expected sliding distance over 50 years is also presented. These results were compared with those for a vertical-front caisson breakwater without slits. It was concluded that the sliding distance of a vertical slit caisson may be over-estimated if the wave pressure on the caisson is evaluated without considering the vertical slits.
10. Processing vertical size disparities in distinct depth planes.
Science.gov (United States)
Duke, Philip A; Howard, Ian P
2012-08-17
A textured surface appears slanted about a vertical axis when the image in one eye is horizontally enlarged relative to the image in the other eye. The surface appears slanted in the opposite direction when the same image is vertically enlarged. Two superimposed textured surfaces with different horizontal size disparities appear as two surfaces that differ in slant. Superimposed textured surfaces with equal and opposite vertical size disparities appear as a single frontal surface. The vertical disparities are averaged. We investigated whether vertical size disparities are averaged across two superimposed textured surfaces in different depth planes or whether they induce distinct slants in the two depth planes. In Experiment 1, two superimposed textured surfaces with different vertical size disparities were presented in two depth planes defined by horizontal disparity. The surfaces induced distinct slants when the horizontal disparity was more than ±5 arcmin. Thus, vertical size disparities are not averaged over surfaces with different horizontal disparities. In Experiment 2 we confirmed that vertical size disparities are processed in surfaces away from the horopter, so the results of Experiment 1 cannot be explained by the processing of vertical size disparities in a fixated surface only. Together, these results show that vertical size disparities are processed separately in distinct depth planes. The results also suggest that vertical size disparities are not used to register slant globally by their effect on the registration of binocular direction of gaze.
11. Vertical Slot Convection: A linear study
International Nuclear Information System (INIS)
McAllister, A.; Steinolfson, R.; Tajima, T.
1992-11-01
The linear stability properties of fluid convection in a vertical slot were studied. A Fourier-Chebyshev decomposition was used to set up the linear eigenvalue problems for the vertical slot convection (VSC) and Bénard problems. The eigenvalues, neutral stability curves, and critical values of the Grashof number, G, and the wavenumber, α, were determined. Plots of the real and imaginary parts of the eigenvalues as functions of G and α are given for a wide range of the Prandtl number, Pr, and special note is made of the complex mode that becomes linearly unstable above Pr ∼ 12.5. A discussion comparing different special cases facilitates the physical understanding of the VSC equations, especially the interaction of the shear-flow and buoyancy-induced physics. Making use of the real and imaginary eigenvalues and the phase properties of the eigenmodes, the eigenmodes were characterized. The mode structure becomes progressively simpler with increasing Pr, with the greatest complexity in the mid-range where the terms in the heat equation are of roughly the same size.
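The Chebyshev part of a discretization like the one described above can be illustrated with the standard Chebyshev collocation differentiation matrix. This is a generic sketch under stated assumptions, not the authors' code: the function name `cheb` is illustrative, NumPy is assumed available, and a real stability code would assemble such matrices into a generalized eigenvalue problem A v = λ B v and scan the eigenvalues over G, α, and Pr.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto nodes and differentiation matrix on [-1, 1].

    Generic textbook construction (a sketch, not the solver from the paper).
    Returns (D, x) with D of shape (N+1, N+1) and x the N+1 collocation nodes.
    """
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # collocation nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]                        # node differences
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal: negative row sums
    return D, x

# Spectral differentiation is exact for polynomials up to the grid degree:
D, x = cheb(8)
# D @ x**2 approximates d/dx (x^2) = 2x at the collocation nodes
```

With such a matrix, a one-dimensional eigenvalue problem in the slot coordinate reduces to a dense matrix problem solvable with a standard QZ routine.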
12. Effective solidity in vertical axis wind turbines
Science.gov (United States)
Parker, Colin M.; Leftwich, Megan C.
2016-11-01
The flow surrounding vertical axis wind turbines (VAWTs) is investigated using particle imaging velocimetry (PIV). This is done in a low-speed wind tunnel with a scale model that closely matches the geometric and dynamic properties (tip-speed ratio and Reynolds number) of a full-size turbine. Previous results have shown a strong dependence of the wake structure on the tip-speed ratio of the spinning turbine. However, it is not clear whether this is a speed or a solidity effect. To determine this, we have measured the wakes of three turbines with different chord-to-diameter ratios, and of a solid cylinder. The flow is visualized at the horizontal mid-plane as well as the vertical mid-plane behind the turbine. The results are both ensemble-averaged and phase-averaged by syncing the PIV system with the rotation of the turbine. By keeping the Reynolds number constant while varying both chord and diameter, we can determine how each affects the wake structure. As these parameters are varied there are distinct changes in the mean flow of the wake. Additionally, by looking at the vorticity in the phase-averaged profiles we can see structural changes in the overall wake pattern.
13. The capillary interaction between two vertical cylinders
KAUST Repository
Cooray, Himantha
2012-06-27
Particles floating at the surface of a liquid generally deform the liquid surface. Minimizing the energetic cost of these deformations results in an inter-particle force which is usually attractive and causes floating particles to aggregate and form surface clusters. Here we present a numerical method for determining the three-dimensional meniscus around a pair of vertical circular cylinders. This involves the numerical solution of the fully nonlinear Laplace-Young equation using a mesh-free finite difference method. Inter-particle force-separation curves for pairs of vertical cylinders are then calculated for different radii and contact angles. These results are compared with previously published asymptotic and experimental results. For large inter-particle separations and conditions such that the meniscus slope remains small everywhere, good agreement is found between all three approaches (numerical, asymptotic and experimental). This is as expected since the asymptotic results were derived using the linearized Laplace-Young equation. For steeper menisci and smaller inter-particle separations, however, the numerical simulation resolves discrepancies between existing asymptotic and experimental results, demonstrating that this discrepancy was due to the nonlinearity of the Laplace-Young equation. © 2012 IOP Publishing Ltd.
14. Control of the vertical instability in tokamaks
International Nuclear Information System (INIS)
Lazarus, E.A.; Lister, J.B.; Neilson, G.H.
1989-05-01
The problem of control of the vertical instability is formulated for a massless filamentary plasma. The massless approximation is justified by an examination of the role of inertia in the control problem. The system is solved using Laplace transform techniques. The linear system is studied to determine the stability boundaries. It is found that the system can be stabilized up to a critical decay index, which is predominantly a function of the geometry of the passive stabilizing shell. A second, smaller critical index, which is a function of the geometry of the control coils, determines the limit of stability in the absence of derivative gain in the control circuit. The system is also studied numerically in order to incorporate the non-linear effects of power supply dynamics. The power supply bandwidth requirement is determined by the open-loop growth rate of the instability. The system is studied for a number of control coil options which are available on the DIII-D tokamak. It is found that many of the coils will not provide adequate stabilization and that the use of inboard coils is advantageous in stabilizing the system up to the critical index. Experiments carried out on DIII-D confirm the appropriateness of the model. Using the results of the model study, we have stabilized DIII-D plasmas with decay indices up to 98% of the critical index. Measurement of the plasma vertical position is also discussed. (author) 27 figs., 6 refs
15. Algebraic motion of vertically displacing plasmas
Science.gov (United States)
Bhattacharjee, Amitava; Pfefferle, David; Hirvijoki, Eero
2017-10-01
The vertical displacement of tokamak plasmas is modelled during the non-linear phase by a free-moving current-carrying rod coupled to a set of fixed conducting wires and a cylindrical conducting shell. The models capture the leading term in a Taylor expansion of the Green's function for the interaction between the plasma column and the vacuum vessel. The plasma is assumed not to vary during the vertical displacement event (VDE), such that it behaves as a rigid body. In the limit of perfectly conducting structures, the plasma is prevented from coming into contact with the wall by steep effective potential barriers due to the eddy currents, and will hence oscillate at Alfvénic frequencies about a given force-free position. In addition to damping the oscillations, resistivity allows the column to drift towards the vessel on slow flux-penetration timescales. The initial exponential motion of the plasma, i.e. the resistive vertical instability, is succeeded by a non-linear sinking behaviour, which is shown analytically to be algebraic and decelerative. The acceleration of the plasma column often observed in experiments is thus conjectured to originate from an early sharing of toroidal current between the core, the halo plasma and the wall, or from the thermal quench dynamics precipitating loss of plasma current.
16. Elastic kirchhoff migration for vertical seismic profiles
International Nuclear Information System (INIS)
Keho, T.H.; Wu, R.S.
1987-01-01
Elastic Kirchhoff migration is implemented for the VSP recording geometry. The resulting migration formula requires measurement of the stress as well as the displacement. Since stress is not measured in a VSP, and in many cases the horizontal component of displacement is not measured, approximate migration formulas are given for these cases. The elastic migration formula for the case where only the vertical components are available is the same as the acoustic migration formula, with the pressure data replaced by the magnitudes of the elastic data as reconstructed from the vertical components, and the acoustic Green's functions replaced with either the P- or S-wave elastic Green's functions. Two expressions for the migration of two-component displacement data are presented. In the first, the terms involving traction data are simply ignored. In the second, an improved backpropagation operator for the displacement field is obtained by replacing the traction data in the Kirchhoff integral with displacement data using Hooke's law. The migration expressions for the cases where two-component data are available produce images that are less contaminated by artifacts than the migration images of one-component data.
17. Vertical profile of 137Cs in soil.
Science.gov (United States)
Krstić, D; Nikezić, D; Stevanović, N; Jelić, M
2004-12-01
In this paper, a vertical distribution of 137Cs in undisturbed soil was investigated experimentally and theoretically. Soil samples were taken from the surroundings of the city of Kragujevac in central Serbia during spring-summer of 2001. The sampling locations were chosen in such a way that the influence of soil characteristics on depth distribution of 137Cs in soil could be investigated. Activity of 137Cs in soil samples was measured using a HpGe detector and multi-channel analyzer. Based on vertical distribution of 137Cs in soil which was measured for each of 10 locations, the diffusion coefficient of 137Cs in soil was determined. In the next half-century, 137Cs will remain as the source of the exposure. Fifteen years after the Chernobyl accident, and more than 30 years after nuclear probes, the largest activity of 137Cs is still within 10 cm of the upper layer of the soil. This result confirms that the penetration of 137Cs in soil is a very slow process. Experimental results were compared with two different Green functions and no major differences were found between them. While both functions fit experimental data well in the upper layer of soil, the fitting is not so good in deeper layers. Although the curves obtained by these two functions are very close to each other, there are some differences in the values of parameters acquired by them.
18. High-performance vertical organic transistors.
Science.gov (United States)
Kleemann, Hans; Günther, Alrun A; Leo, Karl; Lüssem, Björn
2013-11-11
Vertical organic thin-film transistors (VOTFTs) are promising devices for overcoming the transconductance and cut-off frequency restrictions of horizontal organic thin-film transistors. The basic physical mechanisms of VOTFT operation, however, are not well understood, and VOTFTs often require complex patterning techniques using self-assembly processes, which impedes future large-area production. In this contribution, high-performance vertical organic transistors comprising pentacene for p-type operation and C60 for n-type operation are presented. The static current-voltage behavior as well as the fundamental scaling laws of such transistors are studied, disclosing a remarkable transistor operation with a behavior limited by the injection of charge carriers. The transistors are manufactured by photolithography, in contrast to other VOTFT concepts using self-assembled source electrodes. Fluorinated photoresist and solvent compounds allow for photolithographic patterning directly onto the organic materials, simplifying the fabrication protocol and making VOTFTs a prospective candidate for future high-performance applications of organic transistors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
19. Vertical transmission of macular telangiectasia type 2.
Science.gov (United States)
Delaere, Lien; Spielberg, Leigh; Leys, Anita M
2012-01-01
The purpose of this study was to report vertical transmission of macular telangiectasia type 2 and type 2 diabetes mellitus in 3 families. In this retrospective interventional case series, the charts of patients with inherited macular telangiectasia type 2 were reviewed. A large spectrum of presentations of macular telangiectasia type 2 was observed and was studied with different techniques, including best-corrected visual acuity, microperimetry, confocal blue reflectance fundus autofluorescence, fluorescein angiography, and time-domain and spectral-domain optical coherence tomography. Vertical transmission of macular telangiectasia type 2 and associated type 2 diabetes mellitus is described in 3 families. Symptomatic as well as asymptomatic eyes with macular telangiectasia type 2 were identified. In 2 families, a mother and son experienced visual loss and were diagnosed with macular telangiectasia type 2. All 4 patients had type 2 diabetes. Diabetic retinopathy was observed in one mother and her son. In the third family, the index patient was diagnosed with macular telangiectasia type 2 after complaints of metamorphopsia. She and her family members had type 2 diabetes mellitus, and further screening of her family revealed familial macular telangiectasia type 2. None of the patients were treated for macular telangiectasia type 2. Macular telangiectasia type 2 may be more common than previously assumed, as vision can remain preserved and patients may go undiagnosed. Screening of family members is indicated, and detection of mild anomalies is possible using fundus autofluorescence and spectral-domain optical coherence tomography.
20. Fusion reactor horizontal versus vertical maintenance approach
International Nuclear Information System (INIS)
Charruyer, Ph.; Djerassi, H.; Leger, D.; Maupou, M.; Rouillard, J.; Salpietro, E.; Holloway, C.; Suppan, A.
1987-01-01
This paper compares the horizontal and vertical maintenance options for internal components (blanket and segment) of the fusion reactors NET (Next European Torus) and the INTOR design. The described mechanical options are chosen to ensure the handling of internals with the required precision, taking into account the problems raised by safety and confinement requirements. Handling is necessarily performed remotely. The options are compared according to the criteria of feasibility, building size, duration of maintenance operations, safety, flexibility, availability and cost. The first conclusions indicate that the vertical handling option offers advantages as regards ease of handling and confinement possibilities. From the building-size point of view, the two solutions are almost equivalent, while the other criteria do not provide a basis for choice. It is emphasized that the confinement option, C.T.U. (Containment Transfer Unit) or T.I.C. (Tight Intermediate Confinement), should be the major factor in determining the best option. In addition, a comparative cost analysis identifies the best cost/benefit ratio among the different options studied.
1. Determinants of Arbovirus Vertical Transmission in Mosquitoes.
Directory of Open Access Journals (Sweden)
Sebastian Lequime
2016-05-01
Vertical transmission (VT) and horizontal transmission (HT) of pathogens refer to parental and non-parental chains of host-to-host transmission. Combining HT with VT considerably enlarges the range of ecological conditions in which a pathogen can persist, but the factors governing the relative frequency of each transmission mode are poorly understood for pathogens with mixed-mode transmission. Elucidating these factors is particularly important for understanding the epidemiology of arthropod-borne viruses (arboviruses) of public health significance. Arboviruses are primarily maintained in nature by HT between arthropod vectors and vertebrate hosts, but are occasionally transmitted vertically in the vector population from an infected female to her offspring, which is a proposed maintenance mechanism during conditions adverse for HT. Here, we review over a century of published primary literature on natural and experimental VT, which we previously assembled into large databases, to identify biological factors associated with the efficiency of arbovirus VT in mosquito vectors. Using a robust statistical framework, we highlight a suite of environmental, taxonomic, and physiological predictors of arbovirus VT. These novel insights help refine our understanding of the strategies employed by arboviruses to persist in the environment and cause substantial public health concern. They also provide hypotheses on the biological processes underlying the relative VT frequency for pathogens with mixed-mode transmission, hypotheses that can be tested empirically.
2. Clinical Relevance of CDH1 and CDH13 DNA-Methylation in Serum of Cervical Cancer Patients
Directory of Open Access Journals (Sweden)
Günther K. Bonn
2012-07-01
This study was designed to investigate the DNA-methylation status of E-cadherin (CDH1) and H-cadherin (CDH13) in serum samples of cervical cancer patients and of control patients with no malignant disease, and to evaluate the clinical utility of these markers. The DNA-methylation status of CDH1 and CDH13 was analyzed by means of MethyLight technology in serum samples from 49 cervical cancer patients and 40 patients with diseases other than cancer. To compare this methylation analysis with another technique, we analyzed the samples with a denaturing high-performance liquid chromatography (DHPLC) PCR method. The specificity and sensitivity of CDH1 DNA-methylation measured by MethyLight were 75% and 55%, and for CDH13 DNA-methylation 95% and 10%. We identified a specificity of 92.5% and a sensitivity of only 27% for the CDH1 DHPLC-PCR analysis. Multivariate analysis showed that serum CDH1 methylation-positive patients had a 7.8-fold risk for death (95% CI: 2.2-27.7; p = 0.001) and a 92.8-fold risk for relapse (95% CI: 3.9-2207.1; p = 0.005). We concluded that serological detection of CDH1 and CDH13 DNA-hypermethylation is not an ideal diagnostic tool due to low diagnostic specificity and sensitivity. However, CDH1 methylation analysis in serum samples may be of potential use as a prognostic marker for cervical cancer patients.
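The specificity and sensitivity figures reported above follow the standard definitions from a 2x2 confusion table. A minimal sketch (the counts below are hypothetical, chosen only so that the result reproduces the reported 55%/75% CDH1 MethyLight figures; they are not the study's actual patient counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Standard diagnostic-test metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)   # fraction of true cases detected
    specificity = tn / (tn + fp)   # fraction of controls correctly negative
    return sensitivity, specificity

# Hypothetical counts consistent with the reported CDH1 MethyLight values:
sens, spec = sensitivity_specificity(tp=55, fn=45, tn=75, fp=25)
# sens = 0.55, spec = 0.75
```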
3. Fumigant Antifungal Activity of Myrtaceae Essential Oils and Constituents from Leptospermum petersonii against Three Aspergillus Species
Directory of Open Access Journals (Sweden)
Il-Kwon Park
2012-09-01
Commercial plant essential oils obtained from 11 Myrtaceae plant species were tested for their fumigant antifungal activity against Aspergillus ochraceus, A. flavus, and A. niger. Essential oils extracted from Leptospermum petersonii at air concentrations of 56 × 10−3 mg/mL and 28 × 10−3 mg/mL completely inhibited the growth of the three Aspergillus species. However, at an air concentration of 14 × 10−3 mg/mL, the inhibition rates of L. petersonii essential oils were reduced to 20.2% and 18.8% in the case of A. flavus and A. niger, respectively. The other Myrtaceae essential oils (56 × 10−3 mg/mL) only weakly inhibited the fungi or had no detectable effect. Gas chromatography-mass spectrometry analysis identified 16 compounds in L. petersonii essential oil. The antifungal activity of the identified compounds was tested individually using standard or synthesized compounds. Of these, neral and geranial inhibited growth by 100% at an air concentration of 56 × 10−3 mg/mL, whereas the activity of citronellol was somewhat lower (80%). The other compounds exhibited only moderate or weak antifungal activity. The antifungal activities of blends of the constituents identified in L. petersonii oil indicated that neral and geranial were the major contributors to the fumigant antifungal activity.
4. Occupational accidents in an oil refinery in Brazil
Directory of Open Access Journals (Sweden)
Carlos Augusto Vaz de Souza
2002-10-01
5. Predicting vertical jump height from bar velocity.
Science.gov (United States)
García-Ramos, Amador; Štirn, Igor; Padial, Paulino; Argüelles-Cienfuegos, Javier; De la Fuente, Blanca; Strojnik, Vojko; Feriche, Belén
2015-06-01
The objective of the study was to assess the use of maximum (Vmax) and final propulsive phase (FPV) bar velocity to predict jump height in the weighted jump squat. FPV was defined as the velocity reached just before bar acceleration fell below gravity (-9.81 m·s(-2)). Vertical jump height was calculated from the take-off velocity (Vtake-off) provided by a force platform. Thirty swimmers belonging to the Slovenian national swimming team performed a jump squat incremental loading test, lifting 25%, 50%, 75% and 100% of body weight in a Smith machine. Jump performance was simultaneously monitored using an AMTI portable force platform and a linear velocity transducer attached to the barbell. Simple linear regression was used to estimate jump height from the Vmax and FPV recorded by the linear velocity transducer. Vmax (y = 16.577x - 16.384) explained 93% of jump height variance with a standard error of the estimate of 1.47 cm. FPV (y = 12.828x - 6.504) explained 91% of jump height variance with a standard error of the estimate of 1.66 cm. Although both variables proved to be good predictors, heteroscedasticity was observed in the differences between FPV and Vtake-off (r(2) = 0.307), while the differences between Vmax and Vtake-off were homogeneously distributed (r(2) = 0.071). These results suggest that Vmax is a valid tool for estimating vertical jump height in a loaded jump squat test performed in a Smith machine. Key points: (1) Vertical jump height in the loaded jump squat can be estimated with acceptable precision from the maximum bar velocity recorded by a linear velocity transducer. (2) The relationship between the point at which bar acceleration is less than -9.81 m·s(-2) and the real take-off is affected by the velocity of movement. (3) Mean propulsive velocity recorded by a linear velocity transducer does not appear to be optimal for monitoring ballistic exercise performance.
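The two regression equations reported above can be applied directly. A minimal sketch, using the coefficients exactly as given in the abstract (the function names are illustrative only, and the equations are only claimed valid for a loaded jump squat in a Smith machine as described):

```python
# Predicted vertical jump height (cm) from bar velocity (m/s), using the
# regression coefficients reported in the abstract.

def jump_height_from_vmax(vmax_ms):
    """y = 16.577x - 16.384, with x the maximum bar velocity (m/s)."""
    return 16.577 * vmax_ms - 16.384

def jump_height_from_fpv(fpv_ms):
    """y = 12.828x - 6.504, with x the final propulsive phase velocity (m/s)."""
    return 12.828 * fpv_ms - 6.504

if __name__ == "__main__":
    # e.g. a maximum bar velocity of 2.0 m/s predicts about 16.77 cm
    print(round(jump_height_from_vmax(2.0), 2))
```

Note that each estimate carries the reported standard error (1.47 cm for Vmax, 1.66 cm for FPV).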
6. Assessment of Genetic Fidelity in Rauvolfia serpentina Plantlets Grown from Synthetic (Encapsulated) Seeds Following in Vitro Storage at 4 °C
Directory of Open Access Journals (Sweden)
2012-05-01
An efficient method was developed for plant regeneration and establishment from alginate-encapsulated synthetic seeds of Rauvolfia serpentina. Synthetic seeds were produced using in vitro proliferated microshoots upon complexation of 3% sodium alginate prepared in Lloyd and McCown woody plant medium (WPM) and 100 mM calcium chloride. The re-growth ability of encapsulated nodal segments was evaluated after storage at 4 °C for 0, 1, 2, 4, 6 and 8 weeks and compared with non-encapsulated buds. The effects of different media, viz. Murashige and Skoog medium, Lloyd and McCown woody plant medium, Gamborg's B5 medium and Schenk and Hildebrandt medium, were also investigated for conversion into plantlets. The maximum frequency of conversion into plantlets from encapsulated nodal segments stored at 4 °C for 4 weeks was achieved on woody plant medium supplemented with 5.0 μM BA and 1.0 μM NAA. Rooting in plantlets was achieved in half-strength Murashige and Skoog liquid medium containing 0.5 μM indole-3-acetic acid (IAA) on filter paper bridges. Plantlets obtained from stored synseeds were hardened, established successfully ex vitro, and were morphologically similar to each other as well as to their mother plant. The genetic fidelity of Rauvolfia clones raised from synthetic seeds following four weeks of storage at 4 °C was assessed using random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) markers. All the RAPD and ISSR profiles from the generated plantlets were monomorphic and comparable to the mother plant, which confirms the genetic stability among the clones. This synseed protocol could be useful for establishing a particular system for conservation, short-term storage and production of genetically identical and stable plants before release for commercial purposes.
7. The EM Earthquake Precursor
Science.gov (United States)
Jones, K. B., II; Saxton, P. T.
2013-12-01
Many attempts have been made to find a sound method of forecasting earthquakes and thereby warn the public. At present, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a different design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides with what exactly is forecastable and with the direction of EM investigation. After the 1989 Loma Prieta earthquake, American earthquake investigators predetermined magnetometer use and a minimum earthquake magnitude necessary for EM detection. This action was set in motion due to the extensive damage incurred and public outrage concerning earthquake forecasting; however, the magnetometers employed, grounded or buried, are completely subject to static and electric fields and have yet to correlate to an identifiable precursor. Secondly, there is neither a networked array for finding epicentral locations, nor have there been any attempts to build one. This methodology needs dismissal, because it is overly complicated, subject to continuous change, and provides no response time. As for the minimum magnitude threshold, which was set at M5, this is simply higher than what modern technological advances allow. Detection can now be achieved at approximately M1, which greatly improves forecasting chances. A propagating precursor has now been detected in both the field and the laboratory. Field antenna testing conducted outside the NE Texas town of Timpson in February 2013 detected three strong EM sources along with numerous weaker signals. The antenna had mobility, and observations were noted for recurrence, duration, and frequency response. Next, two
8. Streets and recent vertical occupation: walled labyrinths
Directory of Open Access Journals (Sweden)
Lígia Beatriz Carreri Mauá
Abstract: In Brazil, the process of verticalization in cities is increasingly intense, and consequently there is a large number of new residential developments in high towers. Based on an examination of the siting of recent residential buildings, the insufficient relationship between these private spaces and the streets on which they stand is highlighted. These buildings occupy large lots, have extensive enclosing walls, and show no care in their placement within urban space. This study addresses street quality in a context of contemporary vertical occupation. The case study covers a street in the Gleba Palhano neighborhood in Londrina, PR, which has a concentration of vertical buildings and is in the process of consolidation. The theoretical framework enabled the extraction of analytical attributes of public spaces and the examination of the case study. The results point to harms of this context in citizens' daily lives, since public space is not used as a place of interaction and social exchange. The article concludes that rectifying this form of production is indispensable, aiming at the conception of higher-quality environments.
9. Social class: concepts and operational schemes in health research
Directory of Open Access Journals (Sweden)
2013-08-01
10. Effect of plant extracts of Polygonum hydropiperoides, Solanum nigrum and Calliandra pittieri on the fall armyworm (Spodoptera frugiperda)
Directory of Open Access Journals (Sweden)
Lizarazo H. Karol
2008-12-01
Full Text Available
The fall armyworm Spodoptera frugiperda is one of the pests that most affect crops in the Sumapaz region (Cundinamarca, Colombia). It is currently controlled mainly by applying synthetic chemical products, but the application of plant extracts is emerging as a lower-impact alternative for the environment. This form of control is used because plants contain secondary metabolites that can inhibit insect development. This study therefore evaluated the insecticidal and antifeedant effects of plant extracts of barbasco Polygonum hydropiperoides (Polygonaceae), carbonero Calliandra pittieri (Mimosaceae) and black nightshade Solanum nigrum (Solanaceae) on larvae of the maize biotype of S. frugiperda. A mass rearing of the insect was established in the laboratory on a natural diet of maize leaves. Plant extracts were then obtained with solvents of high polarity (water and ethanol) and medium polarity (dichloromethane) and applied to second-instar larvae. The most notable results were obtained with the dichloromethane extracts of P. hydropiperoides at their various doses, which produced 100% mortality 12 days after application and an antifeedant effect reflected in maize foliage consumption below 4%, effects similar to those of the commercial control (chlorpyrifos).
11. Estimating tropical vertical motion profile shapes from satellite observations
Science.gov (United States)
Back, L. E.; Handlos, Z.
2013-12-01
The vertical structure of tropical deep convection strongly influences interactions with larger-scale circulations and climate. This research focuses on investigating this vertical structure and its relationship with mesoscale tropical weather states. We test the hypothesis that vertical motion shape varies in association with weather state type. We estimate mean-state vertical motion profile shapes for six tropical weather states defined using cloud-top pressure and optical depth properties from the International Satellite Cloud Climatology Project. The relationship between vertical motion and the dry static energy budget is utilized to set up a regression analysis that empirically determines two modes of variability in vertical motion from reanalysis data. We use these empirically determined modes, this relationship, and surface convergence to estimate vertical motion profile shape from satellite retrievals of rainfall and surface convergence. We find that vertical motion profile shapes vary systematically between different tropical weather states. The "isolated systems" regime exhibits a more "bottom-heavy" profile shape than the convective/thick cirrus and vigorous deep convective regimes, with maximum upward vertical motion occurring in the lower troposphere rather than the middle to upper troposphere. The variability we observe with our method does not coincide with that expected based on conventional ideas about how stratiform rain fraction and vertical motion are related.
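The two-mode regression idea described above can be illustrated with a minimal sketch: project an observed vertical motion profile onto two empirically determined basis shapes by least squares. The pressure grid, mode shapes, and amplitudes below are invented for illustration and are not the authors' actual modes.

```python
import numpy as np

# Pressure levels (hPa), from upper troposphere to near surface
p = np.linspace(150, 1000, 18)

# Two assumed basis shapes for vertical motion variability:
# a deep mode peaking in the mid-troposphere, and a second mode
# that shifts the maximum toward the surface (bottom-heavy).
mode_deep = np.sin(np.pi * (p - 150.0) / 850.0)
mode_shallow = -np.sin(2.0 * np.pi * (p - 150.0) / 850.0)
G = np.column_stack([mode_deep, mode_shallow])

# Synthetic "observed" profile: mostly deep plus a bottom-heavy part,
# with a little noise standing in for retrieval error
rng = np.random.default_rng(0)
omega_obs = G @ np.array([1.0, 0.4]) + 0.05 * rng.standard_normal(p.size)

# Least-squares projection onto the two modes recovers the amplitudes
amps, *_ = np.linalg.lstsq(G, omega_obs, rcond=None)
omega_fit = G @ amps
print(np.round(amps, 2))
```

The recovered amplitudes characterize the profile shape: a larger second amplitude relative to the first indicates a more bottom-heavy profile.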
12. Opportunity's Surroundings on Sol 1798 (Vertical)
Science.gov (United States)
2009-01-01
NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view is presented as a vertical projection with geometric seam correction. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock.
13. Opportunity's Surroundings on Sol 1687 (Vertical)
Science.gov (United States)
2009-01-01
NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this 360-degree view of the rover's surroundings on the 1,687th Martian day, or sol, of its surface mission (Oct. 22, 2008). Opportunity had driven 133 meters (436 feet) that sol, crossing sand ripples up to about 10 centimeters (4 inches) tall. The tracks visible in the foreground are in the east-northeast direction. Opportunity's position on Sol 1687 was about 300 meters southwest of Victoria Crater. The rover was beginning a long trek toward a much larger crater, Endeavour, about 12 kilometers (7 miles) to the southeast. This view is presented as a vertical projection with geometric seam correction.
14. Opportunity's Surroundings After Sol 1820 Drive (Vertical)
Science.gov (United States)
2009-01-01
NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view is presented as a vertical projection with geometric seam correction. North is at the top. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock.
15. Lepton flavor violation with displaced vertices
Directory of Open Access Journals (Sweden)
Julian Heeck
2018-01-01
Full Text Available If light new physics with lepton-flavor-violating couplings exists, the prime discovery channel might not be ℓ→ℓ′γ but rather ℓ→ℓ′X, where the new boson X could be an axion, majoron, familon or Z′ gauge boson. The most conservative bound then comes from ℓ→ℓ′+inv, but if the on-shell X can decay back into leptons or photons, displaced-vertex searches could give much better limits. We show that only a narrow region in parameter space allows for displaced vertices in muon decays, μ→eX,X→γγ,ee, whereas tauon decays can have much more interesting signatures.
16. Aerodynamic drag reduction by vertical splitter plates
Science.gov (United States)
Gilliéron, Patrick; Kourta, Azeddine
2010-01-01
The capacity of vertical splitter plates placed at the front or the rear of a simplified car geometry to reduce drag, with and without skew angle, is investigated for Reynolds numbers between 1.0 × 10⁶ and 1.6 × 10⁶. The geometry is simplified to represent estate-type vehicles, for the rear section, and MPV-type vehicles. Drag reductions of nearly 28% were obtained for a zero skew angle with splitter plates placed at the front of models of MPV or utility vehicles. The results demonstrate the advantage of adapting the position and orientation of the splitter plates in the presence of a lateral wind. All these results confirm the advantage of this type of solution, and suggest that this expertise should be used in the automotive field to reduce consumption and improve the dynamic stability of road vehicles.
18. Vertically aligned BCN nanotubes with high capacitance.
Science.gov (United States)
Iyyamperumal, Eswaramoorthi; Wang, Shuangyin; Dai, Liming
2012-06-26
Using a chemical vapor deposition method, we have synthesized vertically aligned BCN nanotubes (VA-BCNs) on a Ni-Fe-coated SiO2/Si substrate from a melamine diborate precursor. The effects of pyrolysis conditions on the morphology and thermal properties of the grown nanotubes, as well as the nanostructure and composition of an individual BCN nanotube, were systematically studied. Nitrogen atoms were found to be bonded to carbon in both graphitic and pyridinic forms, and the resultant VA-BCNs grown at 1000 °C show the highest specific capacitance (321.0 F/g), with excellent rate capability and high durability, compared with nonaligned BCN (167.3 F/g) and undoped multiwalled carbon nanotubes (117.3 F/g), due to synergistic effects arising from the combined co-doping of B and N in CNTs and the well-aligned nanotube structure.
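Specific capacitance figures like those quoted above are commonly extracted from galvanostatic charge-discharge measurements via C = I·Δt/(m·ΔV). A minimal sketch of that standard calculation follows; the current, discharge time, mass, and voltage window are illustrative values, not data from this study.

```python
def specific_capacitance(I, dt, m, dV):
    """Specific capacitance (F/g) from a galvanostatic discharge curve.

    I:  discharge current (A)
    dt: discharge time (s)
    m:  active material mass (g)
    dV: voltage window swept during discharge (V)
    """
    return I * dt / (m * dV)

# Example: 1 mA discharge over 321 s for 1 mg of material across 1 V
print(round(specific_capacitance(0.001, 321.0, 0.001, 1.0), 1))
```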
19. Functionalization of vertically aligned carbon nanotubes.
Science.gov (United States)
Van Hooijdonk, Eloise; Bittencourt, Carla; Snyders, Rony; Colomer, Jean-François
2013-01-01
This review focuses on and summarizes recent studies of the functionalization of carbon nanotubes oriented perpendicularly to their substrate, so-called vertically aligned carbon nanotubes (VA-CNTs). The intrinsic properties of individual nanotubes make VA-CNTs ideal candidates for integration in a wide range of devices, and many potential applications have been envisaged. These applications can benefit from the unidirectional alignment of the nanotubes, the large surface area, the high carbon purity, the outstanding electrical conductivity, and the uniformly long length. However, practical uses of VA-CNTs are limited by their surface characteristics, which must often be modified to meet the specificity of each particular application. The proposed approaches are based on chemical modification of the surface by functionalization (grafting of functional chemical groups, decoration with metal particles, or wrapping with polymers) to bring new properties or to improve the interactions between the VA-CNTs and their environment while maintaining the alignment of the CNTs.
20. The ergonomics of vertical turret lathe operation.
Science.gov (United States)
Pratt, F M; Corlett, E N
1970-12-01
A study of the work load of 14 vertical turret lathe operators engaged on different work tasks in two factories is reported. For eight of these workers continuous heart rate recordings were made throughout the day. It was shown that in four cases improved technology was unlikely to lead to higher output and certain aspects of posture and equipment manipulation were major contributors to the limitations on increased output. The role of the work-rest schedule in increasing work loads was also demonstrated. Improvements in technology and methods to reduce the extent of certain work loads to enable heavy work to be done in shorter periods followed by light work or rest periods are given as means to modify and improve the output of these machines. Finally, the direction for the development of a predictive model for man-machine matching is introduced.
1. The Vertical Profile of Ocean Mixing
Science.gov (United States)
Ferrari, R. M.; Nikurashin, M.; McDougall, T. J.; Mashayek, A.
2014-12-01
The upwelling of bottom waters through density surfaces in the deep ocean is not possible unless the sloping nature of the sea floor is taken into account. The bottom-intensified mixing arising from the interaction of internal tides and geostrophic motions with bottom topography implies that mixing is a decreasing function of height in the deep ocean. This would further imply that the diapycnal motion in the deep ocean is downward, not upward as is required by continuity. This conundrum regarding ocean mixing and upwelling in the deep ocean will be resolved by appealing to the fact that the ocean does not have vertical side walls. Implications of the conundrum for the representation of ocean mixing in climate models will be discussed.
2. Solar heat gain through vertical cylindrical glass
Energy Technology Data Exchange (ETDEWEB)
Kassem, M.A.; Kaseb, S.; El-Refaie, M.F. [Cairo Univ., Mechanical Power Engineering Dept., Cairo (Egypt)
1999-10-01
Spaces with nonplanar glazed envelopes are frequently encountered in contemporary buildings. Such spaces present a problem when calculating the solar heat gain in the course of estimating the cooling or heating load and, hence, sizing the cooling or heating systems. The calculation, using the information currently available in the literature, is tedious and/or approximate. In the present work, a computational procedure for evaluating the solar heat gain to a space with a vertical cylindrical glass envelope is established, and a computer program is coded to carry out the necessary computations and yield the results in a detailed, usable form. The program is versatile and allows for the arbitrary variation of all pertinent parameters. (Author)
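As a rough illustration of the kind of computation such a program performs, the sketch below estimates instantaneous solar gain through a vertical glazed cylinder from simple geometry: the beam component uses the cylinder's projected area D·H (independent of solar azimuth), and the diffuse component assumes an isotropic sky seen by half the curved surface. The irradiances, dimensions, and mean transmittance are invented, and the model ignores angular transmittance, ground reflection, and shading, which a full calculation would treat in detail.

```python
import math

def cylinder_solar_gain(I_beam, I_diff, D, H, solar_alt_deg, tau=0.85):
    """Rough solar heat gain (W) for a vertical cylindrical glass envelope.

    I_beam: direct normal irradiance (W/m^2)
    I_diff: diffuse horizontal irradiance (W/m^2)
    D, H:   cylinder diameter and height (m)
    solar_alt_deg: solar altitude angle (degrees)
    tau:    assumed mean glass transmittance (-)
    """
    # Beam component: the cylinder presents a projected area D*H to the
    # horizontal component of the beam, whatever the solar azimuth.
    beam = I_beam * math.cos(math.radians(solar_alt_deg)) * D * H
    # Diffuse component: crude isotropic-sky assumption, with half the
    # curved surface (pi*D*H/2) exposed to the sky dome.
    diff = I_diff * 0.5 * math.pi * D * H
    return tau * (beam + diff)

# Example: 800 W/m^2 beam, 100 W/m^2 diffuse, a 4 m x 3 m cylinder, sun at 35 deg
print(round(cylinder_solar_gain(800, 100, 4.0, 3.0, 35), 1))
```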
5. Vertical pellet injection in FTU discharges
International Nuclear Information System (INIS)
Giovannozzi, E.; Annibaldi, S.V.; Buratti, P.
2005-01-01
Central fuelling and pellet-enhanced performance modes have been obtained with pellets injected vertically from the high-field side on the FTU tokamak. Four phases have been recognized: ablation of the pellets, drifting plasmoids, MHD modes which take the density to the centre of the discharge, and finally an anomalous drift which further increases the density peaking. Pellet ablation data have been compared with values from a pellet ablation and deposition code. A comparison between 0.8 and 1.1 MA discharges at a high magnetic field (B_T = 7 T) has been carried out: higher performance was obtained with the latter, due to the higher target density and the larger inversion radius, which would increase the effects of m = 1 modes in taking the density to the plasma centre.
6. Vertical integration and optimal reimbursement policy.
Science.gov (United States)
Afendulis, Christopher C; Kessler, Daniel P
2011-09-01
Health care providers may vertically integrate not only to facilitate coordination of care, but also for strategic reasons that may not be in patients' best interests. Optimal Medicare reimbursement policy depends upon the extent to which each of these explanations is correct. To investigate, we compare the consequences of the 1997 adoption of prospective payment for skilled nursing facilities (SNF PPS) in geographic areas with high versus low levels of hospital/SNF integration. We find that SNF PPS decreased spending more in high integration areas, with no measurable consequences for patient health outcomes. Our findings suggest that integrated providers should face higher-powered reimbursement incentives, i.e., less cost-sharing. More generally, we conclude that purchasers of health services (and other services subject to agency problems) should consider the organizational form of their suppliers when choosing a reimbursement mechanism.
7. Vertical distribution of radionuclides in soil
International Nuclear Information System (INIS)
Bikit, I.; Slivka, J.; Krmar, M.; Chonkic, Lj.; Veskovic, M.; Hadzhic, V.
1990-01-01
Pedological profiles were opened at selected representative locations on different geomorphic types and in certain soil layers down to 3 m depth. The mechanical composition and the hydrophysical and chemical features were studied. The vertical distribution of naturally occurring radionuclides and 137Cs was analyzed during 1988 and 1989. The parameter α of the exponential dependence of activity concentration on depth was calculated for the three soil types, as well as the activity concentration of 137Cs at depths of 1 and 3 m. The extent of 137Cs migration was evaluated at these depths, and it is shown that the coefficient α is proportional to the reciprocal of the time elapsed since the surface contamination. (author)
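The exponential depth law used above, A(z) = A0·exp(-αz), makes α easy to estimate by log-linear least squares. A minimal sketch with synthetic activity data follows; the depths and activities are invented, not measured values from this study.

```python
import numpy as np

# Synthetic 137Cs activity concentrations (Bq/kg) at depths z (m),
# generated exactly from the exponential law A(z) = A0 * exp(-alpha * z)
z = np.array([0.05, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
A0_true, alpha_true = 120.0, 2.0
A = A0_true * np.exp(-alpha_true * z)

# Log-linear least squares: ln A = ln A0 - alpha * z
slope, intercept = np.polyfit(z, np.log(A), 1)
alpha, A0 = -slope, np.exp(intercept)
print(round(alpha, 2), round(A0, 1))
```

With real measurements the fit would carry scatter, and comparing α across sampling years is what exposes its inverse dependence on elapsed time.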
8. GAUGING THE VERTICAL SPECIALIZATION IN EU TRADE
Directory of Open Access Journals (Sweden)
IULIA MONICA OEHLER-SINCAI
2014-11-01
Full Text Available The purpose of this paper is threefold. First, we review the mechanisms and determinants of vertical specialization (VS), as this has gradually become the dominant characteristic of international trade. Second, we underline that there is a rich literature on VS in EU trade, at both aggregate and individual levels, and that research is advancing together with the instruments used to measure trade in value added. Third, our investigation brings to the forefront a classification of EU countries according to their GVC participation index, taking into consideration both upstream and downstream links. In conclusion, VS analyses help us better understand the interconnectedness among countries and industries through foreign direct investment, trade, labour migration and technology transfer.
9. Development of Vertical Cable Seismic System (2)
Science.gov (United States)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Ishikawa, K.
2012-12-01
The vertical cable seismic is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean-bottom sources. By analyzing the reflections from the sub-seabed, we can look into the subsurface structure. This type of survey is generally called VCS (Vertical Cable Seismic). Because VCS is an efficient high-resolution 3D seismic survey method for a spatially bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. Our first VCS survey was carried out in Lake Biwa, Japan, in November 2009 as a feasibility study. Prestack depth migration was applied to the 3D VCS data to obtain a high-quality 3D depth volume. Based on the results of the feasibility study, we developed two autonomous recording VCS systems. We then carried out a trial experiment in the open ocean at a water depth of about 400 m, followed by a second VCS survey at Iheya Knoll with a deep-towed source. In this survey, we established the procedures for deployment and recovery of the system and examined the locations and fluctuations of the vertical cables at a water depth of around 1000 m. The acquired VCS data clearly show reflections from the sub-seafloor. Through the experiment, we confirmed that our VCS system works well even in the severe conditions around seafloor hydrothermal deposits. We carried out two further field surveys in 2011: a 3D survey with a boomer as a high-resolution surface source, and an actual field survey in the Izena Cauldron, an active hydrothermal area in the Okinawa Trough. Through these surveys, we have confirmed that the
10. Vertical profiles of BC direct radiative effect over Italy: high vertical resolution data and atmospheric feedbacks
Science.gov (United States)
Močnik, Griša; Ferrero, Luca; Castelli, Mariapina; Ferrini, Barbara S.; Moscatelli, Marco; Grazia Perrone, Maria; Sangiorgi, Giorgia; Rovelli, Grazia; D'Angelo, Luca; Moroni, Beatrice; Scardazza, Francesco; Bolzacchini, Ezio; Petitta, Marcello; Cappelletti, David
2016-04-01
Black carbon (BC) and its vertical distribution affect the climate. Global measurements of BC vertical profiles are lacking to support climate change research. To fill this gap, a campaign was conducted over three Italian basin valleys, the Terni Valley (Apennines), the Po Valley and the Passiria Valley (Alps), to characterize the impact of BC on the radiative budget under similar orographic conditions. 120 vertical profiles were measured in winter 2010. The BC vertical profiles, together with aerosol size distribution, aerosol chemistry and meteorological parameters, were determined using a tethered-balloon-based platform equipped with a micro-Aethalometer AE51 (Magee Scientific), a Grimm 1.107 OPC (0.25-32 μm, 31 size classes), a Sioutas cascade impactor (SKC), and a meteorological station (LSI-Lastem). The aerosol chemical composition was determined from collected PM2.5 samples. The aerosol absorption along the vertical profiles was measured, and optical properties were calculated using Mie theory applied to the aerosol size distribution. The aerosol optical properties were validated against AERONET data and then used as inputs to the radiative transfer model libRadtran. Vertical profiles of the aerosol direct radiative effect, the related atmospheric absorption and the heating rate were calculated. Vertical profile measurements revealed some common behaviors over the studied basin valleys. From below the mixing height (MH) to above it, a marked concentration drop was found for both BC (from -48.4±5.3% up to -69.1±5.5%) and aerosol number concentration (from -23.9±4.3% up to -46.5±7.3%). These features were reflected in the optical properties of the aerosol: absorption and scattering coefficients decreased from below the MH to above it (b_abs from -47.6±2.5% up to -71.3±3.0% and b_sca from -23.5±0.8% up to -61.2±3.1%, respectively). Consequently, the single scattering albedo increased above the MH (from +4.9±2.2% to +7.4±1.0%). The highest aerosol absorption was
11. Poetry in Review: Oroboro
Directory of Open Access Journals (Sweden)
Helena Alves Gouveia
2008-10-01
Full Text Available http://dx.doi.org/10.5007/1984-784x.2008v8n12p38 The serpent that swallows itself is a curious figure symbolizing a process of continuous transformation, an unceasing circular movement toward infinity, with no trace of beginning or end. Oroboro is a name of Greek origin referring to this serpent, which bites and penetrates itself by swallowing its own tail. It is also the name of the culture magazine edited in Curitiba by the artist-editors Ricardo Corona and Eliana Borges.
12. Hypervitaminosis D in animals
Directory of Open Access Journals (Sweden)
Paulo V. Peixoto
2012-07-01
Full Text Available Through a literature review, data are presented on vitamin D metabolism and on the main toxicological, clinical, biochemical, macroscopic, microscopic, ultrastructural, immunohistochemical and radiographic aspects of animals naturally and experimentally intoxicated with this vitamin, in different species. This study aims to demonstrate the many gaps in knowledge about physiological and pathological mineralization, especially its hormonal mediation, and to warn of the risks of this intoxication occurring.
13. Democracy in Cuba
OpenAIRE
Zaldívar, Julio César Guanche
2011-01-01
The revolutionary triumph of 1959 established in Cuba a new concept of democracy, intended to guarantee access to active political life for large sectors of the population previously excluded. To this end, a policy of social inclusion of universal character was developed. Popular political practice placed the country's wealth in the hands of the needy population and generated great social mobility, a fact that was central to the increase in popular participation. The context of imperialist aggression and the ...
14. Childhood tuberculosis in Portugal
OpenAIRE
Carapau, João
2014-01-01
From the figures recently published by the Direcção Geral da Saúde / Núcleo de Tuberculose e Doenças Respiratórias for 1992 and 1993, and by the Instituto Nacional de Estatística for 1994, it can be concluded that notified tuberculosis (TB) cases have decreased little over the last 15 years: a mean annual decline of 6.3% for cases overall and 14% for those under 15 years of age; the overall incidence rate recorded in 1994 rose again, to 51.1 (52.4 on the mainland). For the author, the ...
OpenAIRE
Katuta, Ângela Massumi; UEL/CCE/Departamento de Geociências
2010-01-01
The University, from its origins in the 12th century, has always been tied to hegemonic institutions and sectors of society. According to Trindade (2000), its "invention" took place in the Middle Ages in Europe, under the protection of the Roman Church, the Universities of Bologna (1108) and Paris (1211) being the first to be created
16. Strawberry fruit and runner production under different cultivation systems in a protected environment
Directory of Open Access Journals (Sweden)
Fernandes-Júnior Flavio
2002-01-01
Full Text Available This study compared fruit and runner production of strawberry (Fragaria x ananassa Duch. cv. Campinas IAC-2712) under three growing systems in a protected environment: soil, hydroponic NFT, and hydroponics in carbonized rice husk in vertical columns. The experiment was carried out from June 2000 to February 2001 at the Estação Experimental de Agronomia de Jundiaí (latitude 23°06'S, longitude 46°55'W, mean altitude 715 m, Cwa climate) of the Instituto Agronômico, following a split-plot design with three replications, in a semi-arch greenhouse with an upper zenithal opening. In the two hydroponic systems, two nutrient solution compositions were used, for the vegetative growth phase and for fruit production, respectively. The results showed that in the vertical system, although fruit and runner production per plant was lower than in the other systems studied, the interior of the protected environment can be better utilized, with positive effects on yield per area and easier crop management, including transplanting, plant cleaning, fruit harvesting and runner removal. These advantages also apply to the NFT hydroponic system, even though it showed no production differences relative to conventional cultivation.
17. Vertical and horizontal distribution of pollination systems in cerrado fragments of central Brazil
Directory of Open Access Journals (Sweden)
Fernanda Quintas Martins
2007-05-01
Full Text Available The main selective pressures on pollination strategies originate chiefly from the environment in which the plants occur, such as the understory, canopy, edge, or interior of a fragment. Different environmental conditions increase the differences among ecological niches and may imply differences in the proportions of pollination systems. In fragments of the cerrado, we determined the frequency of pollination systems and analyzed their spatial distribution. We placed 38 transects at random, sampling 2,280 individuals and 121 species. As expected in Neotropical regions, bee pollination was the most frequent pollination system. We found a decrease in the frequency of plants pollinated by beetles towards the fragment interior. Similarly, we found significant variation in relation to height only for bats: the frequency of bat-pollinated plants increased towards greater heights. In general, we found no horizontal or vertical variation in the pollination systems, probably as a consequence of the more open physiognomy of the cerrado vegetation.
18. Decoration of vertical graphene with aerosol nanoparticles for gas sensing
International Nuclear Information System (INIS)
Cui, Shumao; Guo, Xiaoru; Ren, Ren; Zhou, Guihua; Chen, Junhong
2015-01-01
A facile method was demonstrated to decorate aerosol Ag nanoparticles onto vertical graphene surfaces using a mini-arc plasma reactor. The vertical graphene was directly grown on a sensor electrode using a plasma-enhanced chemical vapor deposition (PECVD) method. The aerosol Ag nanoparticles were synthesized by a simple vapor condensation process using a mini-arc plasma source. Then, the nanoparticles were assembled on the surface of vertical graphene through the assistance of an electric field. Based on our observation, nonagglomerated Ag nanoparticles formed in the gas phase and were assembled onto vertical graphene sheets. Nanohybrids of Ag nanoparticle-decorated vertical graphene were characterized for ammonia gas detection at room temperature. The vertical graphene served as the conductance channel, and the conductance change upon exposure to ammonia was used as the sensing signal. The sensing results show that Ag nanoparticles significantly improve the sensitivity, response time, and recovery time of the sensor. (paper)
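The sensing signal described above, the conductance change on exposure, is conventionally reported as a relative response. A minimal sketch follows; the baseline and exposure conductances are made-up values, and the sign convention assumes a p-type channel whose conductance drops under an electron-donating gas such as ammonia.

```python
def sensor_response(G0, G_gas):
    """Relative conductance change upon gas exposure, in percent.

    G0:    baseline conductance in air (siemens)
    G_gas: conductance during gas exposure (siemens)
    """
    return 100.0 * (G_gas - G0) / G0

# Example: conductance falls from 100 uS to 80 uS during ammonia exposure
print(round(sensor_response(1.0e-4, 0.8e-4), 1))
```

Response and recovery times, the other figures of merit mentioned above, are then read off as the times for this signal to reach and relax from a stated fraction (often 90%) of its steady value.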
19. Rapid Development of Microsatellite Markers with 454 Pyrosequencing in a Vulnerable Fish, the Mottled Skate, Raja pulchra
Directory of Open Access Journals (Sweden)
Jung-Ha Kang
2012-06-01
Full Text Available The mottled skate, Raja pulchra, is an economically valuable fish. However, due to a severe population decline, it is listed as a vulnerable species by the International Union for Conservation of Nature. To analyze its genetic structure and diversity, microsatellite markers were developed using 454 pyrosequencing. A total of 17,033 reads containing dinucleotide microsatellite repeat units (mean, 487 base pairs) were identified from 453,549 reads. Among 32 loci containing more than nine repeat units, 20 primer sets (62%) produced strong PCR products, of which 14 were polymorphic. In an analysis of 60 individuals from two R. pulchra populations, the number of alleles per locus ranged from 1–10, and the mean allelic richness was 4.7. No linkage disequilibrium was found between any pair of loci, indicating that the markers were independent. The Hardy–Weinberg equilibrium test showed significant deviation in two of the 28 single loci after sequential Bonferroni correction. Using 11 primer sets, cross-species amplification was demonstrated in nine related species from four families within two classes. Among the 11 loci amplified from three other Rajidae species, three loci were polymorphic. A monomorphic locus was amplified in all three Rajidae species and the Dasyatidae family. Two Rajidae polymorphic loci amplified monomorphic target DNAs in four Carcharhiniformes species, and another was polymorphic in two Carcharhiniformes species.
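The sequential Bonferroni correction mentioned above is commonly implemented as the Holm step-down procedure. A minimal sketch (not the authors' code; the example p-values are invented):

```python
def holm_bonferroni(pvals):
    """Holm step-down (sequential Bonferroni) adjustment.

    Returns adjusted p-values in the original order; a test is
    significant when its adjusted p-value falls below alpha.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k), then enforce
        # monotonicity so adjusted values never decrease with rank.
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

# Example: four raw p-values from independent single-locus tests.
print([round(p, 10) for p in holm_bonferroni([0.01, 0.04, 0.03, 0.005])])
# [0.03, 0.06, 0.06, 0.02]
```

Unlike the plain Bonferroni correction, the step-down form only multiplies the smallest p-value by the full number of tests, so it rejects at least as many hypotheses while controlling the same family-wise error rate.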
20. The micropolitics of living work in the act, ergology and popular education: a proposition of a device to train health workers
Directory of Open Access Journals (Sweden)
Suze Rosa Sant'Anna
2011-01-01
Full Text Available This article aims to discuss work in the health field and to present a device for training workers under the expanded concept of health, based on three main theoretical frameworks: Yves Schwartz's ergological démarche and its dynamic three-pole device; Emerson Elias Merhy's cartography of the micropolitics of living work in the act; and popular health education, inspired by Paulo Freire. It is hoped that this study will contribute to reflection on, and the construction of, a training strategy that intensifies the insertion of students in practice settings which emphasize the shared construction of knowledge and, especially, favor the production and realization of knowledge and of the relational aspects that make up the technological core of health care.
1. Acarofauna on ornamental plants
Directory of Open Access Journals (Sweden)
Jania Claudia Camilo dos Santos
2014-10-01
2. Analysis and design of a vertical axis wind turbine
OpenAIRE
Goyena Iriso, Joseba
2011-01-01
The main objective of this project is to design a new vertical axis wind turbine, specifically a Giromill wind turbine. The project first requires a review of the vertical axis wind turbines currently in development, which must be completed before the turbine design begins. Another very important aim is the development of a new vertical axis wind turbine. The subsequent analyses that will result in the final design of the wind turbine will b...
3. Self-starting aerodynamics analysis of vertical axis wind turbine
OpenAIRE
Jianyang Zhu; Hailin Huang; Hao Shen
2015-01-01
The vertical axis wind turbine is a special type of wind-driven electric generator capable of working in complicated wind environments. Self-starting aerodynamics is one of the most important considerations for this kind of turbine. This article aims at providing a systematic synthesis of the self-starting aerodynamic characteristics of vertical axis wind turbines based on a numerical analysis approach. First, the physical model of the vertical axis wind turbine and its parameter defi...
4. Comparison of VP broadband tiltmeter and VS vertical pendulum tiltmeter
OpenAIRE
Ma, Wugang; Wu, Yanxia; Zhao, Huiqin
2015-01-01
The vertical pendulum (VP) tiltmeter is a kind of earthquake-precursor observation equipment used to record ground tilt associated with astronomical tides. Currently, VP broadband tiltmeters and vertical sensor (VS) vertical pendulum tiltmeters are primarily used. In this paper, we compare the two instruments in four aspects—mechanical structure, circuitry, zeroing, and bandwidth—based on their working principles and applications. We conclude that VP...
5. Outsourcing versus Vertical Integration: A Dynamic Model of Industry Equilibrium
OpenAIRE
Roman Fossati
2012-01-01
Why do supply relations vary across industries and across firms within industries? Recent evidence by Hortaçsu and Syverson (2009) shows that vertically integrated producers are more productive, that their size distribution dominates (in the first-order stochastic dominance sense) that of non-vertically-integrated manufacturers, and that there is assortative matching of upstream and downstream plants by productivity and size. Besides vertical integration (VI) and procurement of inputs from ...
6. Robotic platform for traveling on vertical piping network
Science.gov (United States)
Nance, Thomas A; Vrettos, Nick J; Krementz, Daniel; Marzolf, Athneal D
2015-02-03
This invention relates generally to robotic systems and specifically to a robotic system that can navigate vertical pipes within a waste tank or similar environment. The robotic system enables sampling, cleaning, inspecting and removing waste around vertical pipes by supplying a robotic platform that uses the vertical pipes to support and navigate the platform above the waste material contained in the tank.
7. Structural reasons for vertical integration in the international oil industry
International Nuclear Information System (INIS)
Luciani, G.
1991-01-01
Once upon a time, the international oil industry was vertically integrated. A small group of companies controlled a very substantial share of international oil flows, extending their operations from the oil well to the gas pump, and relying on intracorporate transfers for most in-between transactions. The historical reasons for vertical disintegration, the market role, and structural reasons for vertical reintegration are examined. (author)
8. Trajectory optimization for lunar rover performing vertical takeoff vertical landing maneuvers in the presence of terrain
Science.gov (United States)
Ma, Lin; Wang, Kexin; Xu, Zuhua; Shao, Zhijiang; Song, Zhengyu; Biegler, Lorenz T.
2018-05-01
This study presents a trajectory optimization framework for a lunar rover performing vertical takeoff vertical landing (VTVL) maneuvers in the presence of terrain using variable-thrust propulsion. First, a VTVL trajectory optimization problem with a three-dimensional kinematics and dynamics model, boundary conditions, and path constraints is formulated. Then, a finite-element approach transcribes the formulated trajectory optimization problem into a nonlinear programming (NLP) problem solved by a highly efficient NLP solver. A homotopy-based backtracking strategy is applied to enhance convergence in solving the formulated VTVL trajectory optimization problem. The optimal thrust solution typically has a "bang-bang" profile, considering that bounds are imposed on the magnitude of engine thrust. An adaptive mesh refinement strategy based on a constant Hamiltonian profile is designed to address the difficulty of locating the breakpoints in the thrust profile. Four scenarios are simulated. Simulation results indicate that the proposed trajectory optimization framework has sufficient adaptability to handle VTVL missions efficiently.
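As a toy illustration of the transcription step described above (not the paper's formulation), a one-dimensional vertical landing with linear dynamics, explicit-Euler collocation, and bounded thrust reduces to a linear program. All numbers (g = 1, thrust bound 3, horizon T = 12) are invented for the sketch:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 1-D vertical landing transcribed to a finite-dimensional program.
# States h_k (altitude), v_k (velocity), controls u_k (thrust acceleration).
# Explicit Euler:  h_{k+1} = h_k + dt*v_k,   v_{k+1} = v_k + dt*(u_k - g).
# With linear dynamics and bounds, the transcribed problem is an LP.
N, T, g = 24, 12.0, 1.0          # illustrative values, not from the paper
dt = T / N
nh = nv = N + 1                  # layout: [h_0..h_N, v_0..v_N, u_0..u_{N-1}]
nvar = nh + nv + N

A_eq, b_eq = [], []
for k in range(N):
    r = np.zeros(nvar)           # altitude update: h_{k+1} - h_k - dt*v_k = 0
    r[k + 1], r[k], r[nh + k] = 1.0, -1.0, -dt
    A_eq.append(r); b_eq.append(0.0)
    r = np.zeros(nvar)           # velocity update: v_{k+1} - v_k - dt*u_k = -dt*g
    r[nh + k + 1], r[nh + k], r[nh + nv + k] = 1.0, -1.0, -dt
    A_eq.append(r); b_eq.append(-dt * g)
for idx, val in [(0, 100.0), (N, 0.0), (nh, -10.0), (nh + N, 0.0)]:
    r = np.zeros(nvar)           # boundary conditions: h_0, h_N, v_0, v_N
    r[idx] = 1.0
    A_eq.append(r); b_eq.append(val)

bounds = [(0, None)] * nh + [(None, None)] * nv + [(0.0, 3.0)] * N
c = np.zeros(nvar); c[nh + nv:] = dt   # minimize total impulse (fixed here by the BCs)
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print(res.status, res.fun)       # status 0 means a feasible optimal trajectory was found
```

In this linear toy the total impulse is pinned to g·T plus the required velocity change, so the solver's real job is finding an altitude profile that respects h ≥ 0; the genuinely nonlinear features of the paper (terrain, variable mass, free final time) are what force the full NLP machinery.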
9. VDE/disruption EM analysis for ITER in-vessel components
International Nuclear Information System (INIS)
Miki, N.; Ioki, K.; Ilio, F.; Kodama, T.; Chiocchio, S.; Williamson, D.; Roccella, M.; Barabaschi, P.; Sayer, R.S.
1998-01-01
This paper summarises the results of EM analyses for ITER in-vessel components, such as blanket modules, the backplate and divertor modules. In the ITER design the following two disruption scenarios are taken into account: centered or radial disruption, and vertical displacement event (VDE). Eddy currents and forces due to plasma disruption were calculated using the 3D shell element code EDDYCUFF and the 3D solid element code EMAS. The plasma motion and current decay used in the EM analysis were supplied by 2-D axisymmetric plasma equilibrium codes, TSC and MAXFEA. (authors)
10. Comparison of VP broadband tiltmeter and VS vertical pendulum tiltmeter
Directory of Open Access Journals (Sweden)
Wugang Ma
2015-05-01
Full Text Available The vertical pendulum (VP) tiltmeter is a kind of earthquake-precursor observation equipment used to record ground tilt associated with astronomical tides. Currently, VP broadband tiltmeters and vertical sensor (VS) vertical pendulum tiltmeters are primarily used. In this paper, we compare the two instruments in four aspects—mechanical structure, circuitry, zeroing, and bandwidth—based on their working principles and applications. We conclude that the VP broadband tiltmeter is superior to the VS vertical pendulum tiltmeter because of its higher bandwidth and greater degree of automation.
11. Fatores protetores e de risco envolvidos na transmissão vertical do HIV-1 Protective and risk factors related to vertical transmission of the HIV-1
Directory of Open Access Journals (Sweden)
Rosângela P. Gianvecchio
2005-04-01
Full Text Available This study evaluated the maternal and fetal factors involved in the vertical transmission of HIV-1 in 47 mother-child pairs. Behavioral, demographic, and obstetric data were obtained through interviews; data related to delivery and the newborns were collected from the maternity hospitals' records. During the third trimester of pregnancy, maternal viral load and CD4+ T lymphocytes were measured. The mean age of the mothers was 25 years, and 23.4% of the pregnant women were primigravidae; the most prevalent behavioral factor was lack of condom use. Among the pregnant women, 48.9% had CD4+ counts greater than 500 cells/mm³ and 93.6% belonged to clinical category A; 95.7% received zidovudine prophylaxis during pregnancy or childbirth, and the medication was administered to all the newborns; 50.0% underwent elective cesareans. Despite exposure to several risk and protective factors, no child became infected. Vertical transmission results from an imbalance between these factors, with risk factors predominating over protective ones.
12. Development of Vertical Cable Seismic System (3)
Science.gov (United States)
Asakawa, E.; Murakami, F.; Tsukahara, H.; Mizohata, S.; Ishikawa, K.
2013-12-01
The VCS (Vertical Cable Seismic) is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by surface, deep-towed or ocean-bottom sources. Analyzing the reflections from the sub-seabed, we can look into the subsurface structure. Because VCS is an efficient high-resolution 3D seismic survey method for a spatially-bounded area, we proposed the method for the hydrothermal deposit survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We are now developing a VCS system, including not only data acquisition hardware but also data processing and analysis techniques. We carried out several VCS surveys combining surface-towed, deep-towed and ocean-bottom sources. The water depths of the surveys range from 100 m to 2100 m. The targets of the surveys include not only hydrothermal deposits but also oil and gas exploration. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water. The uncertainty in the positions of the source and of the hydrophones in water degrades the quality of the subsurface image. GPS navigation is available on the sea surface, but for a deep-towed or ocean-bottom source, the accuracy of the shot position from SSBL/USBL is not sufficient for very high-resolution imaging. We have developed another approach to determine the positions in water using the travel-time data from the source to the VCS hydrophones. In the data acquisition stage, we estimate the VCS position with a slant-ranging method from the sea surface. The position of the deep-towed or ocean-bottom source is estimated by SSBL/USBL. The water velocity profile is measured by XCTD. After the data acquisition, we pick the first-break times of the VCS recorded data. The estimated positions of
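The travel-time positioning idea described above can be sketched as a small least-squares inversion. Everything here is invented for illustration: a constant sound speed stands in for the measured XCTD profile, the cable coordinates and source are synthetic, and the rough starting point plays the role of the SSBL/USBL estimate:

```python
import numpy as np
from scipy.optimize import least_squares

# Locate a source from first-break travel times to hydrophones hanging
# on vertical cables at known positions (sound speed assumed constant).
c = 1500.0                                    # assumed sound speed, m/s
cables = [(500.0, 400.0), (700.0, 520.0), (450.0, 650.0)]   # cable (x, y), m
depths = np.arange(100.0, 400.0, 25.0)        # hydrophone depths on each cable, m
hydrophones = np.array([(x, y, z) for x, y in cables for z in depths])

true_src = np.array([650.0, 300.0, 450.0])    # synthetic "unknown" source
t_obs = np.linalg.norm(hydrophones - true_src, axis=1) / c   # "picked" times

def residuals(p):
    # Misfit between modeled and observed travel times for position p.
    return np.linalg.norm(hydrophones - p, axis=1) / c - t_obs

# Start from a rough prior (as SSBL/USBL would provide) and refine.
sol = least_squares(residuals, x0=np.array([600.0, 350.0, 400.0]))
print(np.round(sol.x, 2))                     # recovers true_src
```

Note that hydrophones on a single vertical cable are collinear, which leaves the source azimuth undetermined; using several cables (or extra constraints) removes that ambiguity, which is why the sketch uses three.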
13. Development of Vertical Cable Seismic System
Science.gov (United States)
Asakawa, E.; Murakami, F.; Sekino, Y.; Okamoto, T.; Ishikawa, K.; Tsukahara, H.; Shimura, T.
2011-12-01
In 2009, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started a survey system development program for hydrothermal deposits. We proposed the Vertical Cable Seismic (VCS) method, a reflection seismic survey with vertical cables above the sea bottom. VCS has the following advantages for hydrothermal deposit surveys. (1) VCS is an efficient high-resolution 3D seismic survey for a limited area. (2) It achieves high-resolution images because the sensors are located close to the target. (3) It avoids the coupling problems between sensor and sea bottom that cause serious damage to seismic data quality. (4) Because of the autonomous recording system on the sea floor, various types of marine source are applicable with VCS, such as sea-surface sources (GI gun etc.) and deep-towed or ocean-bottom sources. Our first experiment of 2D/3D VCS surveys was carried out in Lake Biwa, Japan, in November 2009. The 2D VCS data processing follows the walk-away VSP, including wave-field separation and depth migration. A seismic interferometry technique is also applied. The results give a much clearer image than the conventional surface seismic survey. Prestack depth migration is applied to the 3D data to obtain a good-quality 3D depth volume. Seismic interferometry is applied to obtain a high-resolution image of the very shallow zone. Based on the feasibility study, we developed the autonomous recording VCS system and carried out a trial experiment in the open ocean at a water depth of about 400 m to establish the procedures of deployment/recovery and to examine the VC position and its fluctuation at the sea bottom. The result shows that the VC position is estimated with sufficient accuracy and very little fluctuation is observed. The Institute of Industrial Science, the University of Tokyo took the research cruise NT11-02 on JAMSTEC R/V Natsushima in February 2011. In the cruise NT11-02, JGI carried out the second VCS survey using the autonomous VCS recording system with the deep-towed source provided by
14. Vertical Cable Seismic Survey for SMS exploration
Science.gov (United States)
Asakawa, Eiichi; Murakami, Fumitoshi; Tsukahara, Hotoshi; Mizohata, Shigeharu
2014-05-01
The Vertical Cable Seismic (VCS) survey is one of the reflection seismic methods. It uses hydrophone arrays vertically moored from the seafloor to record acoustic waves generated by sea-surface, deep-towed or ocean-bottom sources. Analyzing the reflections from the sub-seabed, we can look into the subsurface structure. Because the VCS is an efficient high-resolution 3D seismic survey method for a spatially-bounded area, we proposed it for the SMS survey tool development program that the Ministry of Education, Culture, Sports, Science and Technology (MEXT) started in 2009. We have been developing the VCS survey system, including not only data acquisition hardware but also data processing and analysis techniques. We carried out several VCS surveys combining surface-towed, deep-towed and ocean-bottom sources. The water depths of these surveys range from 100 m to 2100 m. Through these experiments, our VCS data acquisition system has been completed, but the data processing techniques are still under development. One of the most critical issues is positioning in the water. The uncertainty in the positions of the source and of the hydrophones in water degrades the quality of the subsurface image. GPS navigation is available on the sea surface, but for a deep-towed or ocean-bottom source, the accuracy of the shot position from SSBL/USBL is not sufficient for very high-resolution imaging. We have developed a new approach to determine the positions in water using the travel-time data from the source to the VCS hydrophones. In 2013, we carried out the second VCS survey using a surface-towed high-voltage sparker and an ocean-bottom source in the Izena Cauldron, which is one of the most promising SMS areas around Japan. The positions of the ocean-bottom source estimated by this method are consistent with the VCS field records. The VCS data with the sparker have been processed with 3D PSTM. It gives a very high resolution 3D volume deeper than two
15. How Varroa Parasitism Affects the Immunological and Nutritional Status of the Honey Bee, Apis mellifera
Directory of Open Access Journals (Sweden)
Katherine A. Aronstein
2012-06-01
Full Text Available We investigated the effect of the parasitic mite Varroa destructor on the immunological and nutritional condition of honey bees, Apis mellifera, from the perspective of the individual bee and the colony. Pupae, newly-emerged adults and foraging adults were sampled from honey bee colonies at one site in S. Texas, USA. Varroa-infested bees displayed an elevated titer of Deformed Wing Virus (DWV), suggestive of a depressed capacity to limit viral replication. Expression of genes coding for three anti-microbial peptides (defensin1, abaecin, hymenoptaecin) was either not significantly different between Varroa-infested and uninfested bees or was significantly elevated in Varroa-infested bees, varying with sampling date and bee developmental age. The effect of Varroa on nutritional indices of the bees was complex, with protein, triglyceride, glycogen and sugar levels strongly influenced by life-stage of the bee and individual colony. Protein content was depressed and free amino acid content elevated in Varroa-infested pupae, suggesting that protein synthesis, and consequently growth, may be limited in these insects. No simple relationship between the values of nutritional and immune-related indices was observed, and colony-scale effects were indicated by the reduced weight of pupae in colonies with high Varroa abundance, irrespective of whether the individual pupa bore Varroa.
16. Control of Cerconota anonella (Sepp.) (Lep.: Oecophoridae) and Bephratelloides pomorum (Fab.) (Hym.: Eurytomidae) in sugar apple (Annona squamosa L.) fruits
Directory of Open Access Journals (Sweden)
Letice Souza da Silva
2014-01-01
17. Directory of Open Access Journals (Sweden)
2012-05-01
Full Text Available A group of benzimidazole analogs of sildenafil, 3-benzimidazolyl-4-methoxy-phenylsulfonylpiperazines 2–4 and 3-benzimidazolyl-4-methoxy-N,N-dimethylbenzenesulfonamide (5), were efficiently synthesized. Compounds 2–5 were characterized by NMR and MS, and, contrary to the reported mass spectra of sildenafil, the spectra of the piperazine-containing compounds 2–4 showed a novel fragmentation pattern leading to a fragment at m/z = 316. A mechanism for the formation of this fragment was proposed.
18. Measurement of |Vub| in semi-inclusive charmless B → πX decays
International Nuclear Information System (INIS)
Kim, C.S.; Lee, Jake; Oha, Sechul
2002-01-01
We study semi-inclusive charmless decays B → πX, where X does not contain a charm (anti)quark. The mode B̄⁰ → π⁻X turns out to be particularly useful for determination of the CKM matrix element |Vub|. We present the branching ratio (BR) of B̄⁰ → π⁻X as a function of |Vub|, with an estimation of the possible uncertainty. The BR is expected to be of order 10⁻⁴.
19. Free convective condensation in a vertical enclosure
Energy Technology Data Exchange (ETDEWEB)
Fox, R.J.; Peterson, P.F. [Univ. of California, Berkeley, CA (United States); Corradini, M.L.; Pernsteiner, A.P. [Univ. of Wisconsin, Madison, WI (United States)
1995-09-01
Free convective condensation in a vertical enclosure was studied numerically, and the results were compared with experiments. In both the numerical and experimental investigations, mist formation was observed to occur near the cooling wall, with significant droplet concentrations in the bulk. Large recirculation cells near the end of the condensing section were generated as the heavy noncondensing gas collecting near the cooling wall was accelerated downward. Near the top of the enclosure the recirculation cells became weaker and smaller than those below, ultimately disappearing near the top of the condenser. In the experiment the mist density was seen to be highest near the wall and at the bottom of the condensing section, whereas the numerical model predicted a much more uniform distribution. The model used to describe the formation of mist was based on a Modified Critical Saturation Model (MCSM), which allows mist to be generated once the vapor pressure exceeds a critical value. Equilibrium, nonequilibrium, and MCSM calculations were performed, showing the experimental results to lie somewhere in between the equilibrium and nonequilibrium predictions of the numerical model. A single adjustable constant (indicating the degree to which equilibrium is achieved) is used in the model in order to match the experimental results.
20. Vertical landscraping, a big regionalism for Dubai.
Science.gov (United States)
Wilson, Matthew
2010-01-01
Dubai's ecologic and economic complications are exacerbated by six years of accelerated expansion, a fixed top-down approach to urbanism and the construction of iconic single-phase mega-projects. With recent construction delays, project cancellations and growing landscape issues, Dubai's tower typologies have been unresponsive to changing environmental, socio-cultural and economic patterns (BBC, 2009; Gillet, 2009; Lewis, 2009). In this essay, a theory of "Big Regionalism" guides an argument for an economically and ecologically linked tower typology called the Condenser. This phased "box-to-tower" typology is part of a greater Landscape Urbanist strategy called Vertical Landscraping. Within this strategy, the Condenser's role is to densify the city, facilitating the creation of ecologic voids that order the urban region. Delineating "Big Regional" principles, the Condenser provides a time-based, global-local urban growth approach that weaves Bigness into a series of urban-regional, economic and ecological relationships, builds upon the environmental performance of the city's regional architecture and planning, promotes a continuity of Dubai's urban history, and responds to its landscape issues while condensing development. These speculations permit consideration of the overlooked opportunities embedded within Dubai's mega-projects and their long-term impact on the urban morphology.
1. The vertical oscillations of coupled magnets
International Nuclear Information System (INIS)
Li Kewei; Lin Jiahuang; Kang Zi Yang; Liang, Samuel Yee Wei; Juan, Jeremias Wong Say
2011-01-01
The International Young Physicists' Tournament (IYPT) is a worldwide, annual competition for high school students. This paper is adapted from the winning solution to Problem 14, Magnetic Spring, as presented in the final round of the 23rd IYPT in Vienna, Austria. Two magnets were arranged on top of each other on a common axis. One was fixed, while the other could move vertically. Various parameters of interest were investigated, including the effective gravitational acceleration, the strength, size, mass and geometry of the magnets, and damping of the oscillations. Despite its simplicity, this setup yielded a number of interesting and unexpected relations. The first stage of the investigation was concerned only with the undamped oscillations of small amplitudes, and the period of small amplitude oscillations was found to be dependent only on the eighth root of important magnet properties such as its strength and mass. The second stage sought to investigate more general oscillations. A numerical model which took into account magnet size, magnet geometry and damping effects was developed to model the general oscillations. Air resistance and friction were found to be significant sources of damping, while eddy currents were negligible.
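The eighth-root scaling quoted above is consistent with a point-dipole model of the magnet interaction. A sketch of that derivation, assuming a repulsive force F(x) = k/x⁴ between the coaxial magnets (an idealization, not stated in the abstract):

```latex
% Equilibrium separation of the floating magnet under gravity:
mg = \frac{k}{x_0^{4}}
\quad\Longrightarrow\quad
x_0 = \left(\frac{k}{mg}\right)^{1/4}

% Linearized stiffness about x_0 and the small-amplitude period:
\kappa = \left|\frac{dF}{dx}\right|_{x_0} = \frac{4k}{x_0^{5}},
\qquad
T = 2\pi\sqrt{\frac{m}{\kappa}}
  = 2\pi\sqrt{\frac{m\,x_0^{5}}{4k}}
  = \pi\left(\frac{k}{g^{5}m}\right)^{1/8}
```

So the period varies only as the eighth root of the magnet strength k and the mass m, which is why large changes in the magnets produce only modest changes in the measured period.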
2. Vertical barriers with increased sorption capacities
International Nuclear Information System (INIS)
1997-01-01
Vertical barriers are commonly used for the containment of contaminated areas. Due to the very small permeability of the barrier material, which is usually of the order of magnitude of 10⁻¹⁰ m/s or less, the advective contaminant transport can be more or less neglected. Nevertheless, there will always be a diffusive contaminant transport through the barrier, which is caused by the concentration gradient. Investigations have been made to increase the sorption capacity of the barrier material by adding substances such as organoclays, zeolites, inorganic oxides and fly ashes. The contaminants taken into account were heavy metals (Pb) and, as organic contaminants, toluene and phenanthrene. The paper presents results of model calculations and experiments. As a result, barrier materials can be designed 'tailor-made' depending on the individual contaminant range of each site (e.g. landfills, gasworks etc.). The parameters relevant for construction, such as rheological properties, compressive strength and permeability, are not affected by the addition of the sorbents.
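The benefit of added sorbents on purely diffusive transport is commonly expressed through a retardation factor in the classical time-lag estimate for a slab. A minimal sketch of that arithmetic, with all parameter values invented for illustration (not the paper's data):

```python
# Diffusive breakthrough through a cutoff wall with linear equilibrium
# sorption: sorption does not change the steady-state flux, but it delays
# breakthrough roughly in proportion to the retardation factor R.
D_p = 5e-10                  # pore-water diffusion coefficient, m^2/s (assumed)
n = 0.45                     # porosity (assumed)
rho_b = 1.4e3                # dry bulk density, kg/m^3 (assumed)
K_d = 2e-3                   # partition coefficient, m^3/kg, raised by additives
L = 0.8                      # wall thickness, m (assumed)
YEAR = 365.25 * 24 * 3600.0

R = 1.0 + rho_b * K_d / n                     # retardation factor
t_lag_plain = L**2 / (6.0 * D_p) / YEAR       # time-lag estimate, no sorption
t_lag_sorb = L**2 * R / (6.0 * D_p) / YEAR    # with sorption: R times longer
print(round(R, 2), round(t_lag_plain, 1), round(t_lag_sorb, 1))
# 7.22 6.8 48.8
```

With these illustrative numbers the added sorbent stretches the diffusive breakthrough lag from a few years to several decades, which is the design rationale for 'tailor-made' barrier mixes.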
3. Functionalization of vertically aligned carbon nanotubes
Directory of Open Access Journals (Sweden)
Eloise Van Hooijdonk
2013-02-01
Full Text Available This review focuses on and summarizes recent studies of the functionalization of carbon nanotubes oriented perpendicularly to their substrate, so-called vertically aligned carbon nanotubes (VA-CNTs). The intrinsic properties of individual nanotubes make the VA-CNTs ideal candidates for integration in a wide range of devices, and many potential applications have been envisaged. These applications can benefit from the unidirectional alignment of the nanotubes, the large surface area, the high carbon purity, the outstanding electrical conductivity, and the uniformly long length. However, practical uses of VA-CNTs are limited by their surface characteristics, which must often be modified in order to meet the specificity of each particular application. The proposed approaches are based on chemical modifications of the surface by functionalization (grafting of functional chemical groups, decoration with metal particles or wrapping of polymers) to bring new properties or to improve the interactions between the VA-CNTs and their environment while maintaining the alignment of the CNTs.
4. Separation surgery for total vertical craniopagus twins.
Science.gov (United States)
Goh, Keith Y C
2004-08-01
A pair of conjoined twins aged 11 months underwent investigations, followed by surgical separation in Singapore General Hospital in April 2001. They were joined at the skull vertex and facing in opposite directions. Radiological investigations including cerebral angiography, magnetic resonance imaging and computerized tomographic scans were performed, leading to the diagnosis of total vertical craniopagus. There were two separate brains, with separate arterial circulations, but with a common superior sagittal sinus. Tissue expanders were inserted in the subgaleal space for 6 months of scalp expansion prior to surgery. Pre-operative planning involved the use of virtual reality equipment and life-sized polymer models of the conjoined skulls and brains. Surgical separation of the twins was achieved after approximately 100 h of operating time, using intraoperative image guidance, microsurgical techniques and intraoperative neurophysiologic monitoring. Reconstruction of the dura, calvarium and scalp was performed with artificial dura, absorbable plates and split skin grafts. Postoperative complications included focal cortical infarction, meningitis, and hydrocephalus. Despite these complications, the twins recovered satisfactorily and were discharged to their home country within 6 months. The 3-month outcome was minor disability in one twin and severe developmental delays in the other. Separation surgery is possible for complex cranially-conjoined twins but requires detailed planning and extensive surgical management.
5. A demographic analysis of vertical root fractures.
Science.gov (United States)
Cohen, Stephen; Berman, Louis H; Blanco, Lucia; Bakland, Leif; Kim, Jay S
2006-12-01
Teeth with vertical root fractures (VRFs) have complete or incomplete fractures that extend through the enamel, dentin and pulp, down the long axis of the tooth. Several different variables were investigated and statistically evaluated as to their correlation with the presence of VRFs. Specifically analyzed were gender, tooth location, age, radiographic and clinical findings, bruxism, and pulpal status. The data were collected from three endodontists in three different geographic locations, comprising a total of 227 teeth. Although VRFs may occur in conjunction with any of the parameters investigated, only certain factors were found to occur in a significant number of cases. The results indicate that VRFs are statistically more prevalent in mandibular molars and maxillary premolars. They are associated with periradicular bone loss, pain to percussion, and extensive restorations, and seem to occur more often in females and older patients. However, VRFs are not necessarily related to periapical bone loss, a widening of the periodontal ligament space, associated periodontal pockets, a sinus tract, a particular pulpal status, or bruxism.
6. Engineering design of vertical test stand cryostat
International Nuclear Information System (INIS)
Suhane, S.K.; Sharma, N.K.; Raghavendra, S.; Joshi, S.C.; Das, S.; Kush, P.K.; Sahni, V.C.; Gupta, P.D.; Sylvester, C.; Rabehl, R.; Ozelis, J.
2011-01-01
Under the Indian Institutions and Fermilab collaboration, the Raja Ramanna Centre for Advanced Technology and Fermi National Accelerator Laboratory are jointly developing 2K Vertical Test Stand (VTS) cryostats for testing SCRF cavities at 2K. The VTS cryostat has been designed with a large testing aperture of 86.36 cm for testing of 325 MHz spoke resonators and 650 MHz and 1.3 GHz multi-cell SCRF cavities for Fermilab's Project-X. Units will be installed at Fermilab and RRCAT and used to test cavities for Project-X. A VTS cryostat comprises a liquid helium (LHe) vessel with an internal magnetic shield, a top insert plate equipped with a cavity support stand and radiation shield, a liquid nitrogen (LN2) shield, and a vacuum vessel with an external magnetic shield. The engineering design and analysis of the VTS cryostat have been carried out using the ASME Boiler and Pressure Vessel Code and finite element analysis. The internal and external magnetic shields were designed to limit the magnetic field inside the LHe vessel at the cavity surface. Thermal analysis of the LN2 shield has been performed to check the effectiveness of LN2 cooling and for compliance with ASME piping code allowable stresses.
7. Learning styles in vertically integrated teaching.
Science.gov (United States)
Brumpton, Kay; Kitchener, Scott; Sweet, Linda
2013-10-01
With vertical integration, registrars and medical students attend the same educational workshops. It is not known whether these learners have similar or different learning styles related to their level of education within the medical training schema. This study aims to collect information about learning styles with a view to changing teaching strategies. If a significant difference is demonstrated, this will affect the required approaches to teaching. The VARK learning inventory questionnaire was administered to 36 general practice registrars and 20 medical students. The learning styles were compared as individuals and then related to their level of education within the medical training schema. Students had a greater preference for multimodal learning compared with registrars (62.5 per cent versus 33.3 per cent, respectively). More than half of the registrars preferred uni- or bimodal learning modalities, compared with one third of the medical students. The present workshop format based on visual and aural material will not match the learning needs of most learners. This small study has shown that the majority of medical students and registrars could have their learning preferences better met by the addition of written material to the workshop series. Surprisingly, a significantly larger number of medical students than registrars appeared to be broadly multimodal in their learning style, and this warrants further research. © 2013 John Wiley & Sons Ltd.
8. ATLAS LTCS Vertically Challenged System Lessons Learned
Science.gov (United States)
Patel, Deepak; Garrison, Matt; Ku, Jentung
2014-01-01
Re-planning of LTCS TVAC testing and supporting preparation of the RTA (Receiver Telescope Assembly) Test Plan and Procedure document. The Laser Thermal Control System (LTCS) is designed to maintain the lasers onboard the Advanced Topographic Laser Altimeter System (ATLAS) at their operational temperatures. In order to verify the functionality of the LTCS, a thermal balance test of the thermal hardware was performed. During the first cold start of the LTCS, the Loop Heat Pipe (LHP) was unable to control the laser mass simulator temperatures. The control heaters were fully on, yet the loop temperature remained well below the desired setpoint. Thermal analysis of the loop did not predict these results. This unexpected behavior of the LTCS was brought to a panel of LHP experts. Based on the testing and a review of all the data, multiple diagnostics were performed in order to narrow down the cause. The prevailing theory is that gravity caused oscillating flow within the loop, which artificially increased the control power needs. This resulted in a re-plan of the LTCS test flow and the addition of a GSE heater to allow vertical operation.
9. Radon 222 and tropospheric vertical transport
International Nuclear Information System (INIS)
Liu, S.C.; McAfee, J.R.; Cicerone, R.J.
1984-01-01
Radon 222 is an inert gas whose loss is due only to radioactive decay, with a half-life of 3.83 days (5.51-day "exponential" lifetime). It is a very useful tracer of continental air because only ground-level continental sources are significant. Thus it is similar in several ways to many air pollutants, e.g., NOx (NO+NO2), SO2, and certain hydrocarbons. Previously published measured 222Rn profiles are analyzed here by averaging for the summer, winter, and spring-fall seasons. The analysis shows that in summer about 55% of the 222Rn is transported above the planetary boundary layer, considerably more than during the other seasons. Similarly, in summer about 20% rises above 5.5 km (500 mbar). The average profiles have been used to derive vertical eddy diffusion coefficients with maximum values of 5-7 × 10^5 cm^2 s^-1 in the midtroposphere and 8 × 10^3 to 5 × 10^4 cm^2 s^-1 near the surface.
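The half-life and "exponential" lifetime quoted in the abstract are related by a factor of ln 2. A minimal sketch of the conversion (the function name is illustrative, not from the paper):

```python
import math

def mean_lifetime(half_life: float) -> float:
    """Convert a radioactive half-life to the exponential (e-folding) lifetime.

    Decay follows N(t) = N0 * exp(-t / tau), so tau = t_half / ln(2).
    """
    return half_life / math.log(2)

# Radon-222: a 3.83-day half-life corresponds to a ~5.5-day exponential
# lifetime, consistent with the 5.51-day figure in the abstract to rounding.
tau_rn222 = mean_lifetime(3.83)
```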
10. Vertically Integrated Edgeless Photon Imaging Camera
Energy Technology Data Exchange (ETDEWEB)
Fahim, Farah [Fermilab; Deptuch, Grzegorz [Fermilab; Shenai, Alpana [Fermilab; Maj, Piotr [AGH-UST, Cracow; Kmon, Piotr [AGH-UST, Cracow; Grybos, Pawel [AGH-UST, Cracow; Szczygiel, Robert [AGH-UST, Cracow; Siddons, D. Peter [Brookhaven; Rumaiz, Abdul [Brookhaven; Kuczewski, Anthony [Brookhaven; Mead, Joseph [Brookhaven; Bradford, Rebecca [Argonne; Weizeorick, John [Argonne
2017-01-01
The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large-area, small-pixel (65 μm), 3D-integrated, photon-counting ASIC with zero-suppressed or full-frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip with a full-frame readout speed of 56 kframes/s in imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery. It contains about 120M transistors. A 1.3M-pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-Ls bonded to a large-area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGAs, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.
11. Vertical Distribution of Water at Phoenix
Science.gov (United States)
Tamppari, L. K.; Lemmon, M. T.
2011-01-01
Phoenix results, combined with coordinated observations from the Mars Reconnaissance Orbiter of the Phoenix lander site, indicate that the water vapor is nonuniform (i.e., not well mixed) up to a calculated cloud condensation level. It is important to understand the mixing profile of water vapor because (a) the assumption of a well-mixed atmosphere up to a cloud condensation level is common in retrievals of column water abundances which are in turn used to understand the seasonal and interannual behavior of water, (b) there is a long history of observations and modeling that conclude both that water vapor is and is not well-mixed, and some studies indicate that the water vapor vertical mixing profile may, in fact, change with season and location, (c) the water vapor in the lowest part of the atmosphere is the reservoir that can exchange with the regolith and higher amounts may have an impact on the surface chemistry, and (d) greater water vapor abundances close to the surface may enhance surface exchange thereby reducing regional transport, which in turn has implications to the net transport of water vapor over seasonal and annual timescales.
12. Mudflow rheology in a vertically rotating flume
Science.gov (United States)
Holmes, Robert R.; Westphal, Jerome A.; Jobson, Harvey E.; ,
1990-01-01
Joint research by the U.S. Geological Survey and the University of Missouri-Rolla currently (1990) is being conducted using a vertically rotating flume, 3.05 meters in diameter, to simulate mudflows under steady-state conditions. Observed mudflow simulations indicate that flow patterns in the flume are similar to those occurring in natural mudflows. Variables such as mean and surface velocity, depth, and average boundary shear stress can be measured in this flume more easily than in the field or in a traditional tilting flume. Sensitive variables such as sediment concentration, grain-size distribution, and Atterberg limits also can be precisely and easily controlled. A known Newtonian fluid, SAE 30 motor oil, was tested in the flume, and the computed viscosity was within 12.5 percent of the stated viscosity, supporting the use of flume data to determine the rheological properties of fluids such as mud. Measurements on mud slurries indicate that flows with sediment concentrations ranging from 81 to 87 percent sediment by weight can be approximated as Bingham plastic for strain rates greater than 1 per second. In this approximation, the yield stress and Bingham viscosity were extremely sensitive to sediment concentration. Generally, the magnitude of the yield stress was large relative to the change in shear stress with increasing mudflow velocity.
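The Bingham-plastic approximation mentioned above relates shear stress linearly to strain rate above a yield stress. A minimal sketch, with hypothetical parameter values (the paper's measured yield stresses and viscosities are not reproduced here):

```python
def bingham_stress(strain_rate: float, yield_stress: float, plastic_viscosity: float) -> float:
    """Shear stress of a flowing Bingham plastic: tau = tau_y + mu_p * (du/dy).

    The linear relation applies only above the yield point; per the study,
    the approximation holds for strain rates greater than about 1 s^-1.
    """
    return yield_stress + plastic_viscosity * strain_rate

# Hypothetical values: tau_y = 100 Pa, mu_p = 5 Pa*s, strain rate 2 s^-1.
tau = bingham_stress(strain_rate=2.0, yield_stress=100.0, plastic_viscosity=5.0)  # 110.0 Pa
```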
13. Plug cementing: Horizontal to vertical conditions
Energy Technology Data Exchange (ETDEWEB)
Calvert, D.G.; Heathman, J.F.; Griffith, J.E.
1995-12-31
This paper presents an in-depth study of cement plug placement that was conducted with large-scale models for the improvement of plug cementing practices and plug integrity. Common hole and workstring geometries were examined with various rheology and density ratios between the drilling fluid and cement. The critical conditions dictating the difference between success and failure for various wellbore angles and conditions were explored, and the mechanisms controlling slurry movement before and after placement are now better understood. An understanding of these mechanisms allows the engineer to better tailor a design to specific hole conditions. Controversial concepts regarding plug-setting practices have been examined and resolved. The cumulative effects of density, rheology, and hole angle are major factors affecting plug success. While the Boycott effect and an extrusion effect were observed to be predominant in inclined wellbores, a spiraling or "roping" effect controls slurry movement in vertical wellbores. Ultimate success of a cement plug can be obtained if allowances are made for these effects in the job design, provided all other previously published recommended placement practices are followed. Results of this work can be applied to many sidetracking and plug-to-abandon operations. Additionally, the understanding of the fluid movement (creep) mechanisms holds potential for use in primary and remedial cementing work, and in controlling the placement of noncementitious fluids in the wellbore.
14. Hydrodynamics of vertical jumping in Archer fish
Science.gov (United States)
Techet, Alexandra H.; Mendelson, Leah
2017-11-01
Vertical jumping for aerial prey from an aquatic environment requires both propulsive power and precise aim to succeed. Rapid acceleration to a ballistic velocity sufficient for reaching the prey height occurs before the fish leaves the water completely and experiences a thousandfold drop in force-producing ability. In addition to speed, accuracy and stability are crucial for successful feeding by jumping. This talk examines the physics of jumping using the archer fish as a model. Better known for their spitting abilities, archer fish will jump multiple body lengths out of the water for prey capture, from a stationary position just below the free surface. Modulation of oscillatory body kinematics and use of multiple fins for force production are identified as methods through which the fish can meet requirements for both acceleration and stabilization in limited space. Quantitative 3D PIV wake measurements reveal how variations in tail kinematics relate to thrust production throughout the course of a jumping maneuver and over a range of jump heights. By performing measurements in 3D, the timing, interactions, and relative contributions to thrust and lateral forces from each fin can be evaluated, elucidating the complex hydrodynamics that enable archer fish water exit.
15. Diploic anesthesia in endodontics
OpenAIRE
Macedo, Ricardo Ribeiro Veiga de
2013-01-01
Final fifth-year dissertation submitted to the Faculdade de Medicina da Universidade de Coimbra for the award of the master's degree within the Integrated Master's programme in Dental Medicine. Objectives: To compare the efficacy of a conventional anesthetic technique, periapical infiltration anesthesia, with diploic anesthesia. Methodology: Thirty-two healthy volunteers were selected, to whom both anesthetic techniques were administered on tooth 1.4. In a first phase the...
OpenAIRE
Silva, Juliana Marisa Fernandes
2016-01-01
Dissertation presented to the Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Psychology, branch of Clinical and Health Psychology. This study concerns Burnout Syndrome; its general objective was to understand whether burnout is present among the formal caregivers of the Santa Casa da Misericórdia de Castelo de Paiva and which socio-professional variables may influence its onset. The aim was to assess the caregivers' burnout thr...
17. The local and the global in the international environmental politics structure: the social construction of the Bhopal major chemical accident and the ILO Convention 174
Directory of Open Access Journals (Sweden)
2006-06-01
This article adopts the constructivist approach to International Relations (IR) to analyze the international normative impact of the Bhopal chemical accident, emphasizing the constitutive role of human action in International Environmental Politics (IEP). An articulation of constructivist concepts is employed, useful for visualizing the structure in which the local event is embedded, as well as the process of social construction both of the event and of the international norm it precipitated. The premise is that co-constitution prevails between structures and agents, which are responsible for the social construction of the event, and that these elements, and the links that unite them, therefore cannot be dispensed with. The aim is, on the one hand, to understand the way in which the local event is socially constructed, with reference to the structure of ideas and norms concerning environmental protection and sustainable development; and, on the other, how the local occurrence generates political, social and normative impacts at the global level. Elements of globality belonging to the local event thus become evident, above all in a process of ideational and normative maturation whose political landmarks are the 1972 Stockholm Conference on the Human Environment and the United Nations Conference on Environment and Development, held in Rio de Janeiro in 1992. The cultural and institutional context of the environmental scene as a whole is highlighted, revealing the guiding thread of the process of social construction in focus: the local/global relationship in the environmental area. The article also examines the role of the International Labour Organization (ILO) as the lead agency in the international discussion of chemical safety, in order to indicate why the normative construction in focus took place in the forum of that International Organization (IO).
18. Antioxidant Profile of Trifolium pratense L.
Directory of Open Access Journals (Sweden)
Heidy Schwartsova
2012-09-01
In order to examine the antioxidant properties of five different extracts of Trifolium pratense L. (Leguminosae) leaves, various assays which measure free radical scavenging ability were carried out: 1,1-diphenyl-2-picrylhydrazyl, hydroxyl, superoxide anion and nitric oxide radical scavenger capacity tests and a lipid peroxidation assay. In all of the tests, only the H2O and (to some extent) the EtOAc extracts showed a potent antioxidant effect compared with BHT and BHA, well-known synthetic antioxidants. In addition, in vivo experiments were conducted with antioxidant systems (activities of GSHPx, GSHR, Px, CAT, XOD, GSH content and intensity of LPx) in liver homogenate and blood of mice after their treatment with extracts of T. pratense leaves, alone or in combination with CCl4. In the extracts examined, the total phenolic and flavonoid amounts were also determined, together with the presence of selected flavonoids: quercetin, luteolin, apigenin, naringenin and kaempferol, which were studied using an HPLC-DAD technique. HPLC-DAD analysis showed a noticeable content of natural products, according to which the examined Trifolium pratense species could well be regarded as a promising new source of bioactive natural compounds, which can be used both as a food supplement and a remedy.
19. The Naval Ocean Vertical Aerosol Model : Progress Report
NARCIS (Netherlands)
Leeuw, G. de; Gathman, S.G.; Davidson, K.L.; Jensen, D.R.
1990-01-01
The Naval Oceanic Vertical Aerosol Model (NOVAM) has been formulated to estimate the vertical structure of the optical and infrared extinction coefficients in the marine atmospheric boundary layer (MABL). NOVAM was designed to predict the non-uniform and non-logarithmic extinction profiles which are
20. Verification of the Naval Oceanic Vertical Aerosol Model During Fire
NARCIS (Netherlands)
Davidson, K.L.; Leeuw, G. de; Gathman, S.G.; Jensen, D.R.
1990-01-01
The Naval Oceanic Vertical Aerosol Model (NOVAM) has been formulated to estimate the vertical structure of the optical and infrared extinction coefficients in the marine atmospheric boundary layer (MABL), for wavelengths between 0.2 and 40 μm. NOVAM was designed to predict, utilizing a set of
1. Private incentives to vertical disintegration among firms with heterogeneous objectives
OpenAIRE
Rossini, Gianpaolo
2003-01-01
A vertically integrated monopoly is compared to a decentralized market arrangement where production is segmented. A Labor-Managed firm produces an input used by a profit-maximizing manufacturer of a final good. Unlike what usually occurs between homogeneous firms, we find circumstances in which the decentralized vertical arrangement is privately superior to the integrated one.
2. CREATING EFFECTIVE MODELS OF VERTICAL INTEGRATED STRUCTURES IN UKRAINE
Directory of Open Access Journals (Sweden)
D. V. Koliesnikov
2011-01-01
The results of scientific research aimed at developing methodological and theoretical mechanisms for building effective models of vertically integrated structures are presented. The presence of vertically integrated structures in natural-monopoly markets in the private and governmental sectors of the economy, and priority directions for integration, are described.
3. Buried injector logic, a vertical IIL using deep ion implantation
NARCIS (Netherlands)
Mouthaan, A.J.
1987-01-01
A vertically integrated alternative for integrated injection logic has been realized, named buried injector logic (BIL). 1 MeV ion implantations are used to create buried layers. The vertical pnp and npn transistors have thin base regions and exhibit a limited charge accumulation if a gate is
4. Finding people, papers, and posts: Vertical search algorithms and evaluation
NARCIS (Netherlands)
Berendsen, R.W.
2015-01-01
There is a growing diversity of information access applications. While general web search has been dominant in the past few decades, a wide variety of so-called vertical search tasks and applications have come to the fore. Vertical search is an often used term for search that targets specific
5. Does Contract Complexity Limit Opportunities? Vertical Organization and Flexibility
NARCIS (Netherlands)
H.P.G. Pennings (Enrico)
2010-01-01
textabstractThe vertical organization of production entails a range of make-or-buy decisions of intermediate goods that are influenced by the difficulty of writing contracts with a potential supplier. When contracting causes high transaction costs, a firm can decide to vertically integrate the
6. Vertical dispersion produced by random closed orbit distortions and sextupoles
International Nuclear Information System (INIS)
Suzuki, Toshio.
1977-01-01
Vertical dispersion appears even in a machine designed with plane symmetry because of vertical closed orbit distortions, linear coupling and coupling due to sextupoles. This gives rise to several undesirable effects in an electron-positron storage ring such as PEP. Vertical dispersion at the interaction point will increase beam height and reduce luminosity. Vertical dispersion around the ring will modify vertical emittance and partition numbers for synchrotron radiation damping. It will also induce betatron-synchrotron resonance and affect chromaticity correction. Vertical dispersion due to random closed orbit distortions and sextupoles has been studied by Piwinski, and he has indicated that correction of chromaticity and chromatic change of β-function is important. However, he has assumed one error element and evaluated the dispersion at the position of the element. We generalize his argument to a more realistic case and derive more precise criteria for the correction of vertical dispersion. Horizontal dispersion due to perturbations is also studied. Vertical dispersion due to linear coupling is neglected in this note, since it has been studied by other authors. 7 refs
7. Diel vertical migration of zooplankton in the Tanzanian waters of ...
African Journals Online (AJOL)
The diel vertical migration of zooplankton was studied in the Southern part of Lake Victoria in January and July 2002. A van dorn water sampler was used to collect zooplankton. In January 2002, zooplankton showed a pronounced diel vertical migration whereby zooplankton were moving upward at around sunset and ...
8. Prevention of vertical transmission of HIV in Denmark
DEFF Research Database (Denmark)
Rasmussen, M.B.; Rasmussen, J.B.; Nielsen, V.R.
2008-01-01
INTRODUCTION: Human immunodeficiency virus (HIV) is an RNA virus that can be transmitted parenterally, sexually or vertically. An effective prevention strategy has been implemented in industrialised countries, thereby reducing vertical transmission from 15-25% to < 1%. The aim of this study was to...
9. Second vertical derivative of potential fields using an adaptation of ...
African Journals Online (AJOL)
The second vertical derivative of magnetic fields is commonly used for resolution of anomalies in gravity and magnetic fields. It is also commonly used as an aid to geologic mapping i.e. for the delineation of geological discontinuities in the subsurface. Frequency domain methods for calculating second vertical derivatives ...
10. Horizontal and vertical seismic isolation of a nuclear power plant
International Nuclear Information System (INIS)
Ikonomou, A.S.
1983-01-01
This paper presents a study for the horizontal and vertical seismic isolation of a nuclear power plant with a base isolation system, developed by the author, called the Alexisismon. This system -- which comprises different schemes for horizontal, vertical, or combined horizontal and vertical isolation -- is a linear system based on the principle of separation of functions. That is, horizontal and vertical isolation are realized through different components and act independently of each other. As far as horizontal isolation is concerned, the role of transmitting vertical loads is uncoupled from the role of inducing horizontal restoring forces, so that both functions can be performed without instability. It is possible either to provide both horizontal and vertical isolation to the whole nuclear plant, or to isolate the whole plant horizontally and provide vertical isolation to sensitive and costly equipment only. When the fundamental period of the plant or equipment is 2 seconds and the vertical displacements are of the order of ±20 inches, the structure or equipment is protected against earthquakes up to 1.10 and 1.30 g for actual and 0.60 and 1.50 g for artificial accelerograms. In both cases all the isolation elements, as well as the superstructure and equipment, behave elastically up to these acceleration limits.
11. Vertical integration and diversification of acute care hospitals: conceptual definitions.
Science.gov (United States)
Clement, J P
1988-01-01
The terms vertical integration and diversification, although used quite frequently, are ill-defined for use in the health care field. In this article, the concepts are defined--specifically for nonuniversity acute care hospitals. The resulting definitions are more useful than previous ones for predicting the effects of vertical integration and diversification.
12. SOMPROF: A vertically explicit soil organic matter model
NARCIS (Netherlands)
Braakhekke, M.C.; Beer, M.; Hoosbeek, M.R.; Kruijt, B.; Kabat, P.
2011-01-01
Most current soil organic matter (SOM) models represent the soil as a bulk without specification of the vertical distribution of SOM in the soil profile. However, the vertical SOM profile may be of great importance for soil carbon cycling, both on short (hours to years) time scale, due to
13. Imaging of the vertical particle tracks without any depth scanning
International Nuclear Information System (INIS)
Soroko, L.M.
2001-01-01
The principle of a new optical microscope which enables imaging of a vertical particle track without any depth scanning is described. This new optical microscope contains a spatial transformer, consisting of mirror lamellar elements, which produces a secondary in-focus image of the vertical particle track. Properties of such a system are presented, and the longitudinal resolution is estimated.
14. Proximate Composition, Nutritional Attributes and Mineral Composition of Peperomia pellucida L. (Ketumpangan Air) Grown in Malaysia
Directory of Open Access Journals (Sweden)
Maznah Ismail
2012-09-01
This study presents the proximate and mineral composition of Peperomia pellucida L., an underexploited weed plant in Malaysia. Proximate analysis was performed using standard AOAC methods and mineral contents were determined using atomic absorption spectrometry. The results indicated Peperomia pellucida to be rich in crude protein, carbohydrate and total ash contents. The high amount of total ash (31.22%) suggests a high-value mineral composition comprising potassium, calcium and iron as the main elements. The present study inferred that Peperomia pellucida would serve as a good source of protein and energy as well as micronutrients in the form of a leafy vegetable for human consumption.
15. Doppler Lidar Vertical Velocity Statistics Value-Added Product
Energy Technology Data Exchange (ETDEWEB)
Newsom, R. K. [DOE ARM Climate Research Facility, Washington, DC (United States); Sivaraman, C. [DOE ARM Climate Research Facility, Washington, DC (United States); Shippert, T. R. [DOE ARM Climate Research Facility, Washington, DC (United States); Riihimaki, L. D. [DOE ARM Climate Research Facility, Washington, DC (United States)
2015-07-01
Accurate height-resolved measurements of higher-order statistical moments of vertical velocity fluctuations are crucial for improved understanding of turbulent mixing and diffusion, convective initiation, and cloud life cycles. The Atmospheric Radiation Measurement (ARM) Climate Research Facility operates coherent Doppler lidar systems at several sites around the globe. These instruments provide measurements of clear-air vertical velocity profiles in the lower troposphere with a nominal temporal resolution of 1 sec and height resolution of 30 m. The purpose of the Doppler lidar vertical velocity statistics (DLWSTATS) value-added product (VAP) is to produce height- and time-resolved estimates of vertical velocity variance, skewness, and kurtosis from these raw measurements. The VAP also produces estimates of cloud properties, including cloud-base height (CBH), cloud frequency, cloud-base vertical velocity, and cloud-base updraft fraction.
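The variance, skewness, and kurtosis the VAP derives are standard central-moment statistics. A minimal sketch of computing them for one velocity series (illustrative only; this is not the DLWSTATS implementation, which also handles height bins, time averaging, and noise correction):

```python
def velocity_moments(w):
    """Variance, skewness, and kurtosis of a vertical-velocity time series,
    computed from the second, third, and fourth central moments."""
    n = len(w)
    mean = sum(w) / n
    fluct = [x - mean for x in w]          # fluctuations about the mean
    m2 = sum(f ** 2 for f in fluct) / n    # variance
    m3 = sum(f ** 3 for f in fluct) / n
    m4 = sum(f ** 4 for f in fluct) / n
    return {
        "variance": m2,
        "skewness": m3 / m2 ** 1.5,  # 0 for a symmetric distribution
        "kurtosis": m4 / m2 ** 2,    # 3 for a Gaussian
    }

stats = velocity_moments([0.4, -0.2, 0.1, -0.3, 0.0, 0.2])
```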
16. Progress and Prospects in Developing Marine Vertical Datum
Directory of Open Access Journals (Sweden)
ZHOU Xinghua
2017-10-01
Marine vertical datum system construction is fundamental work for marine surveying. Between 2009 and 2012, China preliminarily constructed a model for the transformation and unification of different height/depth datums covering the China Sea out to 80 nautical miles. In recent years this model has been extended to the South China Sea, the western Pacific and the eastern Indian Ocean; a seamless vertical datum model extending from the south and north polar areas to the whole globe will gradually be constructed in the near future. This is the foundation supporting digital-ocean construction in China. This paper discusses the research status of marine vertical datum construction in major coastal countries and regions, analyzes the main work, approaches and key technologies involved in building a marine vertical datum system, and then describes the achievements and remaining problems in the practice of marine vertical datum building in China.
17. Toroidal inhomogeneity of the vertical field in a tokamak apparatus
International Nuclear Information System (INIS)
Sometani, Taro; Takashima, Hidekazu
1977-01-01
An experiment with a model device has been made on the toroidal inhomogeneity of the vertical field in a Tokamak with an iron core. The D.C. vertical field is increased near the yokes of the iron core, while the gross plasma image field (consisting of the components due to the plasma current, the primary current, and its image) is reduced there. These two vertical fields, when superposed, exert force on the plasma as a less inhomogeneous external vertical field. The vertical field can be homogenized satisfactorily by using a compensation winding wound at a proper position on the iron core, even if the shielding plates mounted on some Tokamaks are dispensed with.
18. Theoretical basis of Edge Localized Mode triggering by vertical displacements
Energy Technology Data Exchange (ETDEWEB)
Wang, Z. T. [Southwestern Institute of Physics, Chengdu 610041 (China); College of Physics Science and Technology, Sichuan University, Chengdu 610065 (China); He, Z. X.; Wang, Z. H. [Southwestern Institute of Physics, Chengdu 610041 (China); Wu, N.; Tang, C. J. [College of Physics Science and Technology, Sichuan University, Chengdu 610065 (China)
2015-05-15
Vertical instability is studied with R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, contrary to Rebhan's result. The equilibrium parallel current density, j//, at the plasma boundary drives the vertical instability, similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Localized Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides some theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.
19. Insight in psychiatry
Directory of Open Access Journals (Sweden)
Ana Margarida P. Cardoso
2008-12-01
The sign that something is happening helps the patient recognize that something strange is going on with him. This recognition allows the subject to play an active role and become a collaborating element in his own recovery process. Each illness, however, presents different symptoms, since each psychiatric illness consists of different disturbances with diverse effects on mental functioning. Thus, the phenomenon of insight recorded in each illness is different and is expressed in different forms, owing not only to the clinical manifestations of the illness but also to the individual characteristics of the subject.
20. Antibioticos profilaticos em neurocirurgia
Directory of Open Access Journals (Sweden)
Reynaldo A. Brandt
1979-03-01
The rate of postoperative infections in neurosurgical patients who received prophylactic antibiotics during this period was compared with that of patients who did not receive antibiotics. Infections occurred in significantly greater proportions in the patients who received antibiotics, particularly in those with severe intracranial conditions; these infections were severe and fatal in most cases. The administration of prophylactic antibiotics to these patients not only failed to prevent postoperative infections, but apparently favored their development. This was probably due to disruption of the microbial balance in the organism, favoring the growth of pathogenic organisms resistant to the usual antibiotics.
1. Spirit Near 'Stapledon' on Sol 1802 (Vertical)
Science.gov (United States)
2009-01-01
NASA Mars Exploration Rover Spirit used its navigation camera for the images assembled into this full-circle view of the rover's surroundings during the 1,802nd Martian day, or sol, (January 26, 2009) of Spirit's mission on the surface of Mars. North is at the top. This view is presented as a vertical projection with geometric seam correction. Spirit had driven down off the low plateau called 'Home Plate' on Sol 1782 (January 6, 2009) after spending 12 months on a north-facing slope on the northern edge of Home Plate. The position on the slope (at about the 9-o'clock position in this view) tilted Spirit's solar panels toward the sun, enabling the rover to generate enough electricity to survive its third Martian winter. Tracks at about the 11-o'clock position of this panorama can be seen leading back to that 'Winter Haven 3' site from the Sol 1802 position about 10 meters (33 feet) away. For scale, the distance between the parallel wheel tracks is about one meter (40 inches). Where the receding tracks bend to the left, a circular pattern resulted from Spirit turning in place at a soil target informally named 'Stapledon' after William Olaf Stapledon, a British philosopher and science-fiction author who lived from 1886 to 1950. Scientists on the rover team suspected that the soil in that area might have a high concentration of silica, resembling a high-silica soil patch discovered east of Home Plate in 2007. Bright material visible in the track furthest to the right was examined with Spirit's alpha particle X-ray spectrometer and found, indeed, to be rich in silica. The team laid plans to drive Spirit from this Sol 1802 location back up onto Home Plate, then southward for the rover's summer field season.
2. A democracia em Cuba
Directory of Open Access Journals (Sweden)
Julio César Guanche Zaldívar
2011-01-01
The revolutionary triumph of 1959 established in Cuba a new concept of democracy, intended to guarantee access to active political life for large sectors of the population previously excluded from it. To that end, a policy of universal social inclusion was developed. Popular political practice placed the country's wealth in the hands of the dispossessed and generated great social mobility, a fact that was central to the growth of popular participation. The context of imperialist aggression and the development of the process itself consolidated notions that limited popular participation: the rise of bureaucracy, the understanding of unity as unanimity, and the adherence, to some extent, to currents of Soviet Marxism. The current challenges for deepening democracy in Cuba lie on three planes: socializing power, promoting socio-diversity, and developing revolutionary ideology.
3. Identified EM Earthquake Precursors
Science.gov (United States)
Jones, Kenneth, II; Saxton, Patrick
2014-05-01
Many attempts have been made to determine a sound forecasting method regarding earthquakes and warn the public in turn. Presently, the animal kingdom leads the precursor list, alluding to a transmission-related source. By applying the animal-based model to an electromagnetic (EM) wave model, various hypotheses were formed, but the most interesting one required the use of a magnetometer with a differing design and geometry. To date, numerous high-end magnetometers have been in use in close proximity to fault zones for potential earthquake forecasting; however, something is still amiss. The problem still resides in what exactly is forecastable and in the direction in which EM should be investigated. After a number of custom rock experiments, two hypotheses were formed that could answer the EM wave model. The first hypothesis concerned a sufficient and continuous electron movement, either by surface or penetrative flow, and the second regarded a novel approach to radio transmission. Electron flow along fracture surfaces was determined to be inadequate in creating strong EM fields, because rock has a very high electrical resistance, making it a high-quality insulator. Penetrative flow could not be corroborated either, because it was discovered that rock was absorbing and confining electrons to a very thin skin depth. Radio wave transmission and detection worked with every single test administered. This hypothesis was reviewed for propagating, long-wave generation with sufficient amplitude and the capability of penetrating solid rock. Additionally, fracture spaces, either air- or ion-filled, can facilitate this concept from great depths and allow for surficial detection. A few propagating precursor signals have been detected in the field, occurring with associated phases, using custom-built loop antennae. Field testing was conducted in Southern California from 2006-2011, and outside the NE Texas town of Timpson in February 2013. The antennae have mobility and observations were noted for
4. Characterization of vertical mixing in oscillatory vegetated flows
Science.gov (United States)
Abdolahpour, M.; Ghisalberti, M.; Lavery, P.; McMahon, K.
2016-02-01
Seagrass meadows are primary producers that provide important ecosystem services, such as improved water quality, sediment stabilisation, and trapping and recycling of nutrients. Most of these ecological services are strongly influenced by the vertical exchange of water across the canopy-water interface. That is, vertical mixing is the main hydrodynamic process governing the large-scale ecological and environmental impact of seagrass meadows. The majority of studies of mixing in vegetated flows have focused on steady flow environments, whereas many coastal canopies are subjected to oscillatory flows driven by surface waves. It is known that the rate of mass transfer varies greatly between unidirectional and oscillatory flows, necessitating a specific investigation of mixing in oscillatory canopy flows. In this study, we conducted an extensive laboratory investigation to characterise the rate of vertical mixing through a vertical turbulent diffusivity (Dt,z). This was done by gauging the evolution of vertical profiles of concentration (C) of a dye sheet injected into a wave-canopy flow. Instantaneous measurement of the variance of the vertical concentration distribution allowed the estimation of a vertical turbulent diffusivity (Dt,z). Two types of model canopies, rigid and flexible, with identical heights and frontal areas, were subjected to a wide and realistic range of wave heights and periods. The results showed two important mechanisms that dominate vertical mixing under different conditions: a shear layer that forms at the top of the canopy and wake turbulence generated by the stems. By allowing a coupled contribution of wake and shear-layer mixing, we present a relationship that can be used to predict the rate of vertical mixing in coastal canopies. The results further showed that the rate of vertical mixing within flexible vegetation was always lower than in the corresponding rigid canopy, confirming the impact of plant flexibility on canopy mixing.
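The diffusivity estimate described in this abstract can be sketched as a worked example. For Fickian mixing, the variance of a dye patch grows linearly in time, so Dt,z ≈ ½ d(σ²)/dt. The minimal Python sketch below illustrates that estimator on synthetic data; the function name and the numbers are assumptions for illustration, not values from the study.

```python
import numpy as np

def vertical_diffusivity(t, sigma2):
    """Estimate D_tz = 0.5 * d(sigma_z^2)/dt from a least-squares
    slope of the variance time series (units: m^2/s)."""
    slope = np.polyfit(t, sigma2, 1)[0]  # degree-1 fit; [0] is the slope
    return 0.5 * slope

# Synthetic check: variance growing as sigma^2 = sigma0^2 + 2*D*t
D_true = 1e-4                         # assumed diffusivity, m^2/s
t = np.linspace(0.0, 60.0, 121)       # time, s
sigma2 = 1e-3 + 2.0 * D_true * t      # variance, m^2
print(vertical_diffusivity(t, sigma2))  # recovers ~1e-4
```

The same slope-of-variance approach applies whether σ² comes from a model or, as in the experiment, from measured vertical concentration profiles.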
5. Osteoporose em caprinos
Directory of Open Access Journals (Sweden)
Fábio B. Rosa
2013-04-01
6. Effect of hang cleans or squats paired with countermovement vertical jumps on vertical displacement.
Science.gov (United States)
Andrews, Tedi R; Mackey, Theresa; Inkrott, Thomas A; Murray, Steven R; Clark, Ida E; Pettitt, Robert W
2011-09-01
Complex training is characterized by pairing resistance exercise with plyometric exercise to exploit the postactivation potentiation (PAP) phenomenon, thereby promising a better training effect. Studies on PAP as measured by human power performances are equivocal. One issue may be the lack of analyses across multiple sets of paired exercises, a common practice used by athletes. We evaluated countermovement vertical jump (CMJ) performance in 19 female collegiate athletes in 3 trials: (a) CMJs only, where 1 set of CMJs served as a conditioning exercise, (b) heavy-load back squats paired with CMJs, and (c) hang cleans paired with CMJs. The CMJ vertical displacement (3-attempt average), as measured with digital video, served as the dependent variable of CMJ performance. Across 3 sets of paired-exercise regimens, CMJ-only depreciated 1.6 cm and CMJ paired with back squats depreciated 2.0 cm (main effect, p squats or CMJs in and of themselves. Future research on exercise modes of complex training that best help athletes preserve and train with the highest power possible, in a given training session, is warranted.
7. Directory of Open Access Journals (Sweden)
Xuemei Liu
2012-09-01
Cellulose synthase (CESA), which is an essential catalyst for the generation of plant cell wall biomass, is mainly encoded by the CesA gene family, which contains ten or more members. In this study, four full-length cDNAs encoding CESA were isolated from Betula platyphylla Suk., an important timber species, using RT-PCR combined with the RACE method, and were named BplCesA3, -4, -7 and -8. These deduced CESAs contained the same typical domains and regions as their Arabidopsis homologs. The cDNA lengths differed among these four genes, as did the locations of the various protein domains inferred from the deduced amino acid sequences, which shared amino acid sequence identities ranging from only 63.8% to 70.5%. Real-time RT-PCR showed that all four BplCesAs were expressed at different levels in diverse tissues. The results indicated that BplCESA8 might be involved in secondary cell wall biosynthesis and floral development. BplCESA3 appeared in a unique expression pattern and was possibly involved in primary cell wall biosynthesis and seed development; it might also be related to homogalacturonan synthesis. BplCESA7 and BplCESA4 may be related to the formation of a cellulose synthase complex and participate mainly in secondary cell wall biosynthesis. The extremely low expression abundance of the four BplCESAs in mature pollen suggests very little involvement in mature pollen formation in Betula. The distinct expression patterns of the four BplCesAs suggest that they participate in the development of various tissues and are possibly controlled by distinct mechanisms in Betula.
8. Subjective visual vertical after treatment of benign paroxysmal positional vertigo
Directory of Open Access Journals (Sweden)
Maristela Mian Ferreira
Introduction: Otolith function can be studied by testing the subjective visual vertical, because a tilt of the vertical line beyond the normal range is a sign of vestibular dysfunction. Benign paroxysmal positional vertigo is a disorder of one or more labyrinthine semicircular canals caused by fractions of otoliths derived from the utricular macula. Objective: To compare the subjective visual vertical with the bucket test before and immediately after the particle repositioning maneuver in patients with benign paroxysmal positional vertigo. Methods: We evaluated 20 patients. The estimated position at which a fluorescent line within a bucket reached the vertical position was measured before and immediately after the particle repositioning maneuver. Data were tabulated and statistically analyzed. Results: Before the repositioning maneuver, 9 patients (45.0%) had absolute values of the subjective visual vertical above the reference standard, versus 2 (10.0%) after the maneuver; the mean of the absolute values of the vertical deviation was significantly lower after the intervention (p < 0.001). Conclusion: There is a reduction of the deviations of the subjective visual vertical, evaluated by the bucket test, immediately after the particle repositioning maneuver in patients with benign paroxysmal positional vertigo.
9. Response of ramus following vertical lengthening with distraction osteogenesis.
Science.gov (United States)
Tuzuner-Oncul, Aysegul Mine; Kisnisci, Reha S
2011-09-01
Vertical lengthening of the mandibular ramus is considered to be one of the least stable surgical procedures in the management of musculoskeletal maxillofacial deformities. The aim of this study was to evaluate the response of the mandibular ramus following vertical lengthening by means of distraction osteogenesis. This study included eight non-syndromic adult patients with temporomandibular joint ankylosis. The vertical height deficiency of the mandibular ramus and the ramus/condyle unit on the affected side were simultaneously reconstructed by transportation of a bone segment using distraction osteogenesis following gap arthroplasty. Lateral and posteroanterior (PA) cephalograms taken postoperatively before active distraction, at the completion of distraction, and 6, 12 and 24 months after distraction were compared to evaluate the changes in ramus height. In all cases the vertical ramus and ramus/condyle unit height loss were successfully reconstructed by distraction osteogenesis. There was no relapse in the amount of height gained by distraction osteogenesis at the 24-month follow-up review (p>0.05). Acute one-stage vertical lengthening of the mandibular ramus is considered to be one of the least stable musculoskeletal procedures, with relapse being a significant adverse outcome. In this clinical study, gradual vertical lengthening of the ramus through ramus/condyle unit distraction osteogenesis maintained the initial vertical ramus height gained for 24 months. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
10. In vitro force delivery of nickel-titanium superelastic archwires in vertical displacement
Directory of Open Access Journals (Sweden)
Aisha de Souza Gomes Stumpf
2012-12-01
11. N-Substituted 5-Chloro-6-phenylpyridazin-3(2H)-ones: Synthesis, Insecticidal Activity Against Plutella xylostella (L.) and SAR Study
Directory of Open Access Journals (Sweden)
Song Yang
2012-08-01
A series of N-substituted 5-chloro-6-phenylpyridazin-3(2H)-one derivatives were synthesized based on our previous work; all compounds were characterized by spectral data and tested for in vitro insecticidal activity against Plutella xylostella. The results showed that the synthesized pyridazin-3(2H)-one compounds possessed good insecticidal activities; in particular, compounds 4b, 4d, and 4h showed >90% activity at 100 mg/L. The structure-activity relationships (SAR) for these compounds are also discussed.
12. Vibrações e choques mecânicos em pintos de um dia transportados em diferentes estradas
Directory of Open Access Journals (Sweden)
Aérica C. Nazareno
2015-07-01
OpenAIRE
Costa, Samuel Alves Barbi; Côrtes, Larissa Silveira; Coelho Netto, Taiana; Freitas Junior, Moacyr Moreira de
2016-01-01
This article analyzes the evolution of sanitation service providers in the state of Minas Gerais between 2005 and 2010, based on indicators from the National Sanitation Information System (SNIS). Technical parameters were defined for analyzing the indicators, and the results were classified as satisfactory (green) or unsatisfactory (red). This categorization follows the concept of Sunshine Regulation, bringing to light the monitoring of the progress of actions in the sec...
14. Em favor da talassografia
Directory of Open Access Journals (Sweden)
Jean-Louis Boudou
2001-01-01
Thalassography ("description of the sea") is concerned with the physical, biological, ecological and cultural impacts of the violent anthropization of coastal environments (oceanic and continental), which are characterized by their exiguity, vulnerability, fragility and plasticity. Since Brazil is a "maritime country," Brazilian geographers (thalassographers) are invited to intensify their research in coastal areas and to create new structures for disseminating it (journals, meetings, associations, graduate programs...), all in favor of thalassography.
15. Infância e educação em Platão Childhood and education in Plato
Directory of Open Access Journals (Sweden)
Walter Omar Kohan
2003-06-01
16. Vertical-horizontal wells for depletion and sweep
Energy Technology Data Exchange (ETDEWEB)
Muraikhi, A. J.; Pham, T. R.; Liu, J. S.; Khatib, M. R.; Muhaish, A. S. [Saudi Aramco (Saudi Arabia)
1998-12-31
A well completion scheme currently in use in a thick, large, elongated carbonate anticline Middle East oil reservoir is described. This method of well completion calls for a combination of an open-hole horizontal section penetrating the top 10 feet of the reservoir and a cased or undisturbed vertical segment through the thick formation. The horizontal section is used for production and the vertical segment for monitoring. Field experience, supported by reservoir simulation exercises, has shown that the horizontal application is superior to conventional vertical completion from both the economic and the sweep points of view. 4 refs., 12 figs.
17. Analytical Model of Steam Chamber Evolution from Vertical Well
Science.gov (United States)
Shevchenko, D. V.; Usmanov, S. A.; Shangaraeva, A. I.; Murtaizin, T. A.
2018-05-01
This paper examines the possibility of applying Steam Assisted Gravity Drainage in vertical wells. The question is a vital one because most natural bitumen reservoirs occur above oil fields already being developed, so a well system is already available at the field-management stage. In most cases the existing vertical wells are hard to use for horizontal sidetracking, as the bitumen reservoir occurs at a shallow depth; the idea is therefore to use the existing wells as vertical ones. At the same time, it is possible to drill an additional sidetrack as a producer or an injector.
18. A proposed orbit and vertical dispersion correction system for PEP
International Nuclear Information System (INIS)
Close, E.; Cornacchia, M.; King, A.S.; Lee, M.J.
1978-07-01
The proposed arrangement of position monitors and dipole magnets for the closed orbit correction system in PEP is described. The computer code ALIGN, which simulates and corrects closed orbit displacements, has been used to study the most effective layout of monitors and correctors. The vertical dispersion function has been computed before and after closed orbit correction. The results indicate that the residual vertical dispersion after the orbit is corrected could exceed the tolerable values. A correction procedure for the vertical dispersion has been studied with the computer code CO-OP, and this scheme of correction has been verified experimentally in SPEAR. 9 refs., 8 figs., 2 tabs
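The corrector computation that orbit-correction codes of this kind perform can be illustrated, under simplifying assumptions, as a least-squares problem: given a response matrix R (orbit shift at each monitor per unit corrector kick), choose kicks θ = −pinv(R)·u that best cancel the measured orbit u. The matrix and orbit below are synthetic stand-ins, not PEP data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mon, n_cor = 12, 6                      # monitors, correctors (assumed)
R = rng.normal(size=(n_mon, n_cor))       # response matrix: mm per mrad kick
u = rng.normal(size=n_mon)                # measured vertical orbit, mm

theta = -np.linalg.pinv(R) @ u            # least-squares corrector kicks
residual = u + R @ theta                  # orbit remaining after correction
print(np.linalg.norm(residual), np.linalg.norm(u))  # residual norm shrinks
```

With more correctors and well-placed monitors the residual shrinks further; the same framework extends to correcting vertical dispersion by augmenting u and R with dispersion measurements.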
19. Beyond vertical integration--Community based medical education.
Science.gov (United States)
Kennedy, Emma Margaret
2006-11-01
The term 'vertical integration' is used broadly in medical education, sometimes when discussing community based medical education (CBME). This article examines the relevance of the term 'vertical integration' and provides an alternative perspective on the complexities of facilitating the CBME process. The principles of learner centredness, patient centredness and flexibility are fundamental to learning in the diverse contexts of 'community'. Vertical integration as a structural concept is helpful for academic organisations but has less application to education in the community setting; a different approach illuminates the strengths and challenges of CBME that need consideration by these organisations.
20. ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS
Energy Technology Data Exchange (ETDEWEB)
Sun, Yipeng
2017-06-25
In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for the online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.
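A genetic algorithm of the kind described can be sketched in a few lines: individuals are vectors of skew-quadrupole currents, and the objective stands in for the measured vertical beam size. The quadratic surrogate objective, the 8-knob dimension, and all parameter values below are illustrative assumptions; the real optimizer used 59 skew-quad families and evaluated the machine itself.

```python
import random

random.seed(1)
N_KNOBS = 8  # stand-in for the 59 skew-quadrupole families

def beam_size(currents):
    # Surrogate objective: minimized when every current reaches 0.5
    return sum((c - 0.5) ** 2 for c in currents)

def evolve(pop_size=30, generations=200, sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(N_KNOBS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=beam_size)
        parents = pop[:pop_size // 2]            # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_KNOBS)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N_KNOBS)        # Gaussian mutation of one gene
            child[i] += random.gauss(0, sigma)
            children.append(child)
        pop = parents + children
    return min(pop, key=beam_size)

best = evolve()
print(beam_size(best))  # small residual "beam size"
```

Because the elite parents survive each generation, the best objective value never worsens, which mirrors the safety requirement of an online machine optimization.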
1. Coupling and Vertical Dispersion Correction in the SPS
CERN Document Server
Aiba, M; Franchi, A; Tomas, R; Vanbavinckhove, G
2010-01-01
Consolidation of the coupling correction scheme in the LHC is challenged by a missing skew quadrupole family in Sector 3-4 at the start-up in 2009-2010. Simultaneous coupling and vertical dispersion correction, using vertical orbit bumps at the sextupoles, was studied by analyzing turn-by-turn data. This scheme was tested in the CERN SPS, where the optical structure of the arc cells is quite similar to the LHC. In the SPS, horizontal and vertical beam positions are measured separately with single-plane BPMs, so a technique to construct a "pseudo double-plane BPM" is also discussed.
2. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms
International Nuclear Information System (INIS)
Zeng, G.L.; Gullberg, G.T.
1990-01-01
Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel, and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice crosstalk are not found with parallel beam and fan beam geometries
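The iterative EM update at the heart of this comparison can be shown on a toy problem. The rows of the system matrix A play the role of rays (entries are ray-voxel intersection weights), and each ML-EM iteration forward-projects the estimate, compares with the measured counts, and backprojects the ratios as a multiplicative correction. The 4-voxel geometry below is a made-up assumption; real cone-beam systems are vastly larger.

```python
import numpy as np

# Toy system matrix: 4 rays through a 4-voxel image (assumed geometry)
A = np.array([[1.0, 1.0, 0.0, 0.0],    # ray through voxels 0, 1
              [0.0, 0.0, 1.0, 1.0],    # ray through voxels 2, 3
              [1.0, 0.0, 1.0, 0.0],    # ray through voxels 0, 2
              [0.0, 1.0, 0.0, 1.0]])   # ray through voxels 1, 3
x_true = np.array([4.0, 1.0, 2.0, 3.0])
y = A @ x_true                          # noiseless measured counts

x = np.ones(4)                          # uniform positive initial estimate
sens = A.sum(axis=0)                    # sensitivity: backprojection of ones
for _ in range(500):
    ratio = y / (A @ x)                 # forward project, compare with data
    x *= (A.T @ ratio) / sens           # backproject ratios, multiply
print(A @ x)                            # reprojection approaches y = [5, 5, 6, 4]
```

Because this A is rank-deficient, the estimate converges to a data-consistent image rather than to x_true exactly, which is one reason the choice of weighting scheme and geometry matters in practice.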
3. INFLUÊNCIA ESTOICA NA CONCEPÇÃO DE STATUS E DICTUM COMO QUASI RES (ὡσανεì τινά) EM ABELARDO / STOIC INFLUENCE IN ABELARD'S CONCEPTION OF STATUS AND DICTUM AS QUASI RES (ὡσανεì τινά)
Directory of Open Access Journals (Sweden)
Guy Hamelin
2011-09-01
In his work, Peter Abelard (1079-1142) highlights two metaphysical notions that ground his logical theory: the status and the dictum propositionis, which cause, respectively, the imposition (impositio) of universal terms and the truth-value of propositions. Both expressions refer to peculiar ontological natures, insofar as they are not considered things (res), even though they constitute causes. Nevertheless, neither are they nothing: Abelard calls them 'quasi-things' (quasi res). In this article we first explain these two essential notions of Abelardian logic and then try to find the source of this particular metaphysics. Contrary to some important commentators on Abelard's logic, who hold that there is a strong Platonic influence on this specific conception, we argue instead, with the support of significant texts and in accordance with Abelardian nominalism, that the greatest influence on our author's metaphysics is that of Stoicism, above all ancient Stoicism.
4. Ploidia de DNA em astrocitomas: estudo em 66 pacientes brasileiros
Directory of Open Access Journals (Sweden)
KRUTMAN-ZVEIBIL DEBORAH
1999-01-01
Nuclear DNA content (S-phase fraction and DNA ploidy) was determined by image analysis in 66 astrocytomas, using formalin-fixed material sectioned in 5-micrometer slices stained by the Feulgen technique. Our results showed a strong relationship between patient age, histological grade and survival on the one hand, and DNA ploidy and the percentage of cells in the synthesis phase on the other. In our view, analysis of the proliferative activity of intracranial astrocytomas is very useful for understanding the biological behavior and prognosis of these lesions and for planning their treatment.
5. Busca de estruturas em grandes escalas em altos redshifts
Science.gov (United States)
Boris, N. V.; Sodré, L., Jr.; Cypriano, E.
2003-08-01
The search for large-scale structures (clusters of galaxies, for example) is an active research topic today, since the detection of a single cluster at high redshift can place strong constraints on cosmological models. In this project we are searching for distant structures in fields containing pairs of quasars close to each other at z ≥ 0.9. The quasar pairs were drawn from the catalogue of Véron-Cetty & Véron (2001) and are being observed with the following telescopes: the 2.2 m of the University of Hawaii (UH), the 2.5 m of Las Campanas Observatory, and GEMINI. We present here the preliminary analysis of a quasar pair observed in the i' (7800 Å) and z' (9500 Å) filters with GEMINI. The (i'-z') colour proved useful for detecting early-type objects at redshifts below 1.1. In the study of the pair 131046+0006/J131055+0008, at redshift ~ 0.9, this method allowed the detection of seven candidate early-type galaxies. In a map of the projected distribution of the objects for 22 ... scale. A further argument in favour of this hypothesis is that they obey a Kormendy-type relation (equivalent radius vs. surface brightness within that radius), like that displayed by elliptical galaxies at z = 0.
6. A test of vertical economies for non-vertically integrated firms: The case of rural electric cooperatives
International Nuclear Information System (INIS)
Greer, Monica L.
2008-01-01
This paper seeks to evaluate unrealized economies of vertical integration for rural electric cooperatives. Given the well-established network economies inherent in the generation, transmission, and distribution of electricity, the coops' long-standing choice of market structure is questionable (especially if their strategy is welfare maximization). Because the coops are organized as either generation-and-transmission or distribution-only entities, the traditional measures of vertical economies will not work. Thus, I have devised an alternative method by which to measure such economies and find that, on average, cost savings in excess of 39% could have been realized had the coops adopted a vertically integrated structure. (author)
7. Vertical motion and ''scarred'' eigenfunctions in the stadium billiard
International Nuclear Information System (INIS)
Christoffel, K.M.; Brumer, P.
1985-01-01
A subset of pseudoregular eigenfunctions of the classically chaotic stadium billiard is shown to participate strongly in vertically directed motion, supporting the conjectures of McDonald and of Heller regarding periodic orbits and pseudoregular eigenfunctions
8. CAMEX-3 AIRBORNE VERTICAL ATMOSPHERE PROFILING SYSTEM (AVAPS) V1
Data.gov (United States)
National Aeronautics and Space Administration — The CAMEX-3 DC-8 Airborne Vertical Atmosphere Profiling System (AVAPS) uses dropwindsonde and Global Positioning System (GPS) receivers to measure the atmospheric...
9. A Method for Modeling of Floating Vertical Axis Wind Turbine
DEFF Research Database (Denmark)
Wang, Kai; Hansen, Martin Otto Laver; Moan, Torgeir
2013-01-01
It is of interest to investigate the potential advantages of floating vertical axis wind turbines (FVAWTs) due to their economical installation and maintenance. A novel 5 MW vertical axis wind turbine concept with a Darrieus rotor mounted on a semi-submersible support structure is proposed in this paper. In order to assess the technical and economic feasibility of this novel concept, a comprehensive simulation tool for modeling the floating vertical axis wind turbine is needed. This work presents the development of a coupled method for modeling the dynamics of a floating vertical axis wind turbine. This integrated dynamic model takes into account the wind inflow, aerodynamics, hydrodynamics, structural dynamics (wind turbine, floating platform and the mooring lines) and a generator control. This approach calculates dynamic equilibrium at each time step and takes account of the interaction between the rotor...
10. Vertical Wave Impacts on Offshore Wind Turbine Inspection Platforms
DEFF Research Database (Denmark)
Bredmose, Henrik; Jacobsen, Niels Gjøl
2011-01-01
Breaking wave impacts on a monopile at 20 m depth are computed with a VOF (Volume Of Fluid) method. The impacting waves are generated by the second-order focused wave group technique, to obtain waves that break at the position of the monopile. The subsequent impact from the vertical run-up flow...... on a horizontal inspection platform is computed for five different platform levels. The computational results show details of monopile impact such as slamming pressures from the overturning wave front and the formation of run-up flow. The results show that vertical platform impacts can occur at 20 m water depth....... The dependence of the vertical platform load on the platform level is discussed. Attention is given to the significant downward force that occurs after the upward force associated with the vertical impact. The effect of the numerical resolution on the results is assessed. The position of wave overturning is found...
11. Performance of horizontal versus vertical vapor extraction wells
International Nuclear Information System (INIS)
Birdsell, K.H.; Roseberg, N.D.; Edlund, K.M.
1994-06-01
Vapor extraction wells used for site remediation of volatile organic chemicals in the vadose zone are typically vertical wells. Over the past few years, there has been an increased interest in horizontal wells for environmental remediation. Despite the interest and potential benefits of horizontal wells, there has been little study of the relative performance of horizontal and vertical vapor extraction wells. This study uses numerical simulations to investigate the relative performance of horizontal versus vertical vapor extraction wells under a variety of conditions. The most significant conclusion that can be drawn from this study is that in a homogeneous medium, a single, horizontal vapor extraction well outperforms a single, vertical vapor extraction well (with surface capping) only for long, linear plumes. Guidelines are presented regarding the use of horizontal wells
12. Horizontal Multinational Firms, Vertical Multinational Firms and Domestic Investment
NARCIS (Netherlands)
J. Emami Namini (Julian); H.P.G. Pennings (Enrico)
2009-01-01
textabstractWe build a dynamic general equilibrium model with 2 countries, horizontal and vertical multinational activity and endogenous domestic and foreign investment. It is found that horizontal multinational activity always leads to a complementary relationship between domestic and foreign
13. Vertical Lift by Series Hybrid Power, Phase II
Data.gov (United States)
National Aeronautics and Space Administration — A major market for vertical lift aircraft is in urban operations, primarily for police and electronic news gathering (typically a Bell 206 or a Eurocopter AS350)....
14. Vertical distribution of ectomycorrhizal fungal taxa in a podzol profile
NARCIS (Netherlands)
Rosling, A.; Landeweert, R.; Lindahl, B.D.; Larsson, K.H.; Kuyper, T.W.; Taylor, A.F.S.; Finlay, R.F.
2003-01-01
Studies of ectomycorrhizal fungal communities in forest soils are usually restricted to the uppermost organic horizons. Boreal forest podzols are highly stratified and little is known about the vertical distribution of ectomycorrhizal communities in the underlying mineral horizons. Ectomycorrhizal
15. GLOBEC NEP Vertical Plankton Tow (VPT) Data, 1997-2001
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — GLOBEC (GLOBal Ocean ECosystems Dynamics) NEP (Northeast Pacific) California Current Program Vertical Plankton Tow (VPT) Data For more information, see...
16. Development of an autonomous vertical profiler for oceanographic studies
Digital Repository Service at National Institute of Oceanography (India)
Dabholkar, N.; Desa, E.; Afzulpurkar, S.; Madhan, R.; Mascarenhas, A.A.M.Q.; Navelkar, G.; Maurya, P.K.; Prabhudesai, S.; Nagvekar, S.; Martins, H.; Sawkar, G.; Fernandes, P.; Manoj, K.K.
groups. This paper is based on a concept patent on a thruster driven Autonomous Vertical profiler [AVP], and describes the developmental steps being taken on the integration of sensors, control electronics, communications and a Graphical User interface...
17. Vertical structure of orographic precipitating clouds observed over ...
The present study explores the vertical structure of precipitating clouds associated with orographic features in South .... The PR, by design, detects PLW and not CLW. Dryness of ...... Organization of Asian Monsoon Convection*; J. Clim. 19(14) ...
18. Shaping the distribution of vertical velocities of antihydrogen in GBAR
Energy Technology Data Exchange (ETDEWEB)
Dufour, G.; Lambrecht, A.; Reynaud, S. [CNRS, ENS, UPMC, Laboratoire Kastler-Brossel, Paris (France); Debu, P. [CEA-Saclay, Institut de Recherche sur les lois Fondamentales de l' Univers, Gif-sur-Yvette (France); Nesvizhevsky, V.V. [Institut Max von Laue-Paul Langevin, Grenoble (France); Voronin, A.Yu. [P.N. Lebedev Physical Institute, Moscow (Russian Federation)
2014-01-15
GBAR is a project aiming at measuring the free fall acceleration of gravity for antimatter, namely antihydrogen atoms (H). The precision of this timing experiment depends crucially on the dispersion of initial vertical velocities of the atoms as well as on the reliable control of their distribution. We propose to use a new method for shaping the distribution of the vertical velocities of H, which improves these factors simultaneously. The method is based on quantum reflection of elastically and specularly bouncing H with small initial vertical velocity on a bottom mirror disk, and absorption of atoms with large initial vertical velocities on a top rough disk. We estimate statistical and systematic uncertainties, and we show that the accuracy for measuring the free fall acceleration g of H could be pushed below 10^{-3} under realistic experimental conditions. (orig.)
19. AWWA E102-17 submersible vertical turbine pumps
CERN Document Server
2017-01-01
This standard describes minimum requirements for submersible vertical turbine pumps utilizing a discharge column pipe assembly, 5 hp or larger, used in water service, including materials, design, manufacture, inspection, and testing.
20. Vertical partitioning of relational OLTP databases using integer programming
DEFF Research Database (Denmark)
Amossen, Rasmus Resen
2010-01-01
A way to optimize performance of relational row store databases is to reduce the row widths by vertically partitioning tables into table fractions in order to minimize the number of irrelevant columns/attributes read by each transaction. This paper considers vertical partitioning algorithms...... for relational row-store OLTP databases with an H-store-like architecture, meaning that we would like to maximize the number of single-sited transactions. We present a model for the vertical partitioning problem that, given a schema together with a vertical partitioning and a workload, estimates the costs...... applied to the TPC-C benchmark and the heuristic is shown to obtain solutions with costs close to the ones found using the quadratic program....
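The abstract's objective, reading as few irrelevant columns as possible, can be illustrated with a toy cost count. The schema, workload, and partitionings below are invented for illustration, not taken from the paper, and a real cost model would also weigh transaction frequencies and multi-site overhead:

```python
# Toy cost model for vertical partitioning: a transaction reads every
# partition containing at least one attribute it needs, and the cost is the
# number of irrelevant columns fetched along the way.

def irrelevant_reads(partitions, workload):
    """partitions: list of attribute sets; workload: one attribute set
    per transaction. Returns total irrelevant columns read."""
    cost = 0
    for txn in workload:
        for part in partitions:
            if part & txn:                  # this partition is touched
                cost += len(part - txn)     # fetched but not needed
    return cost

workload = [{"name"}, {"balance"}, {"balance"}]     # mostly narrow reads
wide = [{"id", "name", "balance"}]                  # single wide table
split = [{"id", "name"}, {"id", "balance"}]         # vertical partitioning
print(irrelevant_reads(wide, workload),
      irrelevant_reads(split, workload))  # → 6 3
```

For this workload the split halves the irrelevant reads; an integer-programming formulation like the paper's searches over all such partitionings for the cheapest one.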
1. Measurement of vertical track deflection from a moving rail car.
Science.gov (United States)
2013-02-01
The University of Nebraska has been conducting research sponsored by the Federal Railroad Administrations Office of Research and Development to develop a system that measures vertical track deflection/modulus from a moving rail car. Previous work ...
2. Ultra-Low Noise Vertical Takeoff and Landing (VTOL)
Data.gov (United States)
National Aeronautics and Space Administration — A unique type of vertical lift propulsor is being designed/analyzed/ developed to push blade passage frequency harmonics above the human audible range, while also...
3. Glow phenomenon surrounding the vertical stabilizer and OMS pods
Science.gov (United States)
1994-01-01
This 35mm frame, photographed as the Space Shuttle Columbia was orbiting Earth during a 'night' pass, documents the glow phenomenon surrounding the vertical stabilizer and the Orbital Maneuvering System (OMS) pods of the spacecraft.
4. Reliability Analysis and Optimal Design of Monolithic Vertical Wall Breakwaters
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Burcharth, Hans F.; Christiani, E.
1994-01-01
Reliability analysis and reliability-based design of monolithic vertical wall breakwaters are considered. Probabilistic models of the most important failure modes, sliding failure, failure of the foundation and overturning failure are described. Relevant design variables are identified...
5. TREE STEM RECONSTRUCTION USING VERTICAL FISHEYE IMAGES: A PRELIMINARY STUDY
Directory of Open Access Journals (Sweden)
A. Berveglieri
2016-06-01
A preliminary study was conducted to assess a tree stem reconstruction technique with panoramic images taken with fisheye lenses. The concept is similar to the Structure from Motion (SfM) technique, but the acquisition and data preparation rely on fisheye cameras to generate a vertical image sequence with height variations of the camera station. Each vertical image is rectified to four vertical planes, producing horizontal lateral views. The stems in the lateral view are rectified to the same scale in the image sequence to facilitate image matching. Using bundle adjustment, the stems are reconstructed, enabling later measurement and extraction of several attributes. The 3D reconstruction was performed with the proposed technique and compared with SfM. The preliminary results showed that the stems were correctly reconstructed by using the lateral virtual images generated from the vertical fisheye images, with the advantage of using fewer images taken from a single station.
6. Inverse vertical migration and feeding in glacier lanternfish (Benthosema glaciale)
KAUST Repository
Dypvik, Eivind; Klevjer, Thor A.; Kaartvedt, Stein
2011-01-01
Glacier lanternfish (Benthosema glaciale) were mainly distributed below ~200 m and displayed three different diel behavioral strategies: normal diel vertical migration (NDVM), inverse DVM (IDVM) and no DVM (NoDVM). The IDVM group was the focus of this study
7. Shaping the distribution of vertical velocities of antihydrogen in GBAR
CERN Document Server
Dufour, G.; Lambrecht, A.; Nesvizhevsky, V.V.; Reynaud, S.; Voronin, A.Yu.
2014-01-30
GBAR is a project aiming at measuring the free fall acceleration of gravity for antimatter, namely antihydrogen atoms ($\\overline{\\mathrm{H}}$). Precision of this timing experiment depends crucially on the dispersion of initial vertical velocities of the atoms as well as on the reliable control of their distribution. We propose to use a new method for shaping the distribution of vertical velocities of $\\overline{\\mathrm{H}}$, which improves these factors simultaneously. The method is based on quantum reflection of elastically and specularly bouncing $\\overline{\\mathrm{H}}$ with small initial vertical velocity on a bottom mirror disk, and absorption of atoms with large initial vertical velocities on a top rough disk. We estimate statistical and systematic uncertainties, and show that the accuracy for measuring the free fall acceleration $\\overline{g}$ of $\\overline{\\mathrm{H}}$ could be pushed below $10^{-3}$ under realistic experimental conditions.
8. A method for calculating active feedback system to provide vertical
The active feedback system is applied to control slow motions of plasma. The objective of the ... The other problem is connected with the control of plasma vertical position with active feedback system. Calculation of ...
9. simulation of vertical water flow through vadose zone
African Journals Online (AJOL)
HOD
Simulation of vertical water flow representing the release of water from the vadose zone to the aquifer of surroundings ... ground water pollution from agricultural, industrial and municipal .... Peak Flow Characteristics of Wyoming. Streams: US ...
10. Vertical structure of atmosphere in pre-monsoon season over ...
(CIN), precipitable water content (PWC) and dynamical parameter vertical wind shear difference (VWS) are studied. ... These results are found to be significant at 99% confidence. It is found ... thunderstorms are maximum in terms of number.
11. A Location-Aware Vertical Handoff Algorithm for Hybrid Networks
KAUST Repository
Mehbodniya, Abolfazl; Aissa, Sonia; Chitizadeh, Jalil
2010-01-01
Horizontal handoff, or generally speaking handoff, is a process which maintains a mobile user's active connection as it moves within a wireless network, whereas vertical handoff (VHO) refers to handover between different types of networks or different network
12. Vertical foramina in the lumbosacral region: CT appearance
International Nuclear Information System (INIS)
Beers, G.J.; Carter, A.P.; McNary, W.F.
1984-01-01
Several computed tomographic (CT) examples of vertically oriented foramina in the neural arches of the lumbosacral vertebrae are presented. The literature is reviewed briefly, and the possible clinical and embryologic significance of these foramina is discussed
13. Unsteady MHD free convective flow past a vertical porous plate ...
African Journals Online (AJOL)
user
International Journal of Engineering, Science and Technology .... dimensional MHD boundary layer on the body with time varying temperature. ... flow of an electrically conducting fluid past an infinite vertical porous flat plate coinciding with.
14. Extraction of Dihydroquercetin from Larix gmelinii with Ultrasound-Assisted and Microwave-Assisted Alternant Digestion
Directory of Open Access Journals (Sweden)
Yuangang Zu
2012-07-01
An ultrasound and microwave assisted alternant extraction method (UMAE) was applied for extracting dihydroquercetin (DHQ) from Larix gmelinii wood. This investigation was conducted using 60% ethanol as solvent, 1:12 solid to liquid ratio, and 3 h soaking time. The optimum treatment time was ultrasound 40 min, microwave 20 min, respectively, and the extraction was performed once. Under the optimized conditions, satisfactory extraction yield of the target analyte was obtained. Relative to the ultrasound-assisted or microwave-assisted method, the proposed approach provides higher extraction yield. The effect of DHQ of different concentrations and synthetic antioxidants on oxidative stability in soy bean oil stored for 20 days at different temperatures (25 °C and 60 °C) was compared. DHQ was more effective in restraining soy bean oil oxidation, and a dose-response relationship was observed. The antioxidant activity of DHQ was a little stronger than that of BHA and BHT. Soy bean oil supplemented with 0.08 mg/g DHQ exhibited favorable antioxidant effects and is preferable for effectively avoiding oxidation. The L. gmelinii wood samples before and after extraction were characterized by scanning electron microscopy. The results showed that the UMAE method is a simple and efficient technique for sample preparation.
Directory of Open Access Journals (Sweden)
Muñoz Karen
2009-04-01
16. Improving performance through vertical disintegration: Evidence from UK manufacturing firms
OpenAIRE
Desyllas, Panos
2009-01-01
Unlike previous work on the vertical integration-performance relationship, we investigate the performance consequences of vertical disintegration. We offer a theoretical justification for the disintegration decision and we condition the disintegration effect on performance on the initial degree of firm integration, the timing and the direction of disintegration. Using a sample of UK manufacturing firms and controlling for disintegration endogeneity, we find that disintegration eventually resu...
17. Refining geoid and vertical gradient of gravity anomaly
Directory of Open Access Journals (Sweden)
Zhang Chijun
2011-11-01
We have derived and tested several relations between geoid (N) and quasi-geoid (ζ) with model validation. The elevation correction consists of the first term (Bouguer anomaly) and second term (vertical gradient of gravity anomaly). The vertical gradient was obtained from direct measurement and terrain calculation. The test results demonstrated that the precision of the geoid can reach centimeter level in mountains less than 5000 meters high.
18. Activity Based Startup Plan for Prototype Vertical Denitration Calciner
International Nuclear Information System (INIS)
SUTTER, C.S.
1999-01-01
Testing activities on the Prototype Vertical Denitration Calciner at PFP were suspended in January 1997 due to the hold on fissile material handling in the facility. The restart of testing activities will require a review through an activity-based startup process based upon Integrated Safety Management (ISM) principles to verify readiness. The Activity Based Startup Plan for the Prototype Vertical Denitration Calciner has been developed for this process
19. Alignment and operability analysis of a vertical sodium pump
International Nuclear Information System (INIS)
Gupta, V.K.; Fair, C.E.
1981-01-01
With the objective of identifying important alignment features of pumps such as FFTF, HALLAM, EBR II, PNC, PHENIX, and CRBR, alignment of the vertical sodium pump for the Clinch River Breeder Reactor Plant (CRBRP) is investigated. The CRBRP pump includes a flexibly coupled pump shaft and motor shaft, two oil-film tilting-pad hydrodynamic radial bearings in the motor plus a vertical thrust bearing, and two sodium hydrostatic bearings straddling the double-suction centrifugal impeller in the pump
20. A note on unionized firms' incentive to integrate vertically
OpenAIRE
Grandner, Thomas
2000-01-01
In this paper I analyze a vertically structured monopolized market with unionized firms. I compare two types of contracts: vertical integration and franchising. With franchising and wage bargaining at the firm level the union in the downstream firm is either very powerful or has no bargaining power at all, depending on the specific time structure of the model. These arguments could make integration preferable for the profit owners even if integration is accompanied by small transaction costs....
https://www.physicsforums.com/threads/thermodynamics-pressure-and-temp.216415/

# Thermodynamics - pressure and temp.
1. Feb 19, 2008
### Niles
1. The problem statement, all variables and given/known data
Ok, I'm a little confused about the connection between pressure and temperature. Let's take two scenarios:
1) I have a balloon filled with helium at 30 degrees, and then I put it in the freezer. Then the volume changes, but the pressure stays constant, right?
2) I have the following setup:
The two buckets have different temperature - so the gas inside the hose has different temperatures at the sides 1 and 2. But why isn't the volume in part 1 bigger than the volume in part 2? Is that because the pressure is not constant?
I can't quite figure these things out.
sincerely Niles.
2. Feb 19, 2008
### blochwave
Well why does the volume decrease?
More fundamentally, why is the balloon "inflated"? Because the air inside exerts a pressure that causes the fabric to expand and voila, inflated balloon. If you shove in too much air the pressure is too great and the balloon ruptures. Reduce the pressure and the balloon shrinks a bit. Normally you reduce the pressure by letting air out. Cooling it however will slow the molecules in the air, reducing their average kinetic energy, so they're not gonna spread out and cover as much space, and the pressure is reduced, which is why the volume decreases
For part B, it's as simple as the two buckets are the same size. You always assume the gas expands to fill its container, so there you have it.
EDIT: So you can infer everything you'd need to know from the ideal gas law, PV = nRT; even if it's not an ideal gas, the basic relationships are the same
If you increase pressure while holding volume constant, temperature has to increase(so that the equality holds, you made the left side bigger, n and R are constants, gotta make the right side bigger) and similarly for all the relationships. Basically remember that in a gas the temperature is a measurement of average kinetic energy. If there's a high temperature the molecules are bouncing a lot harder and are gonna spread out and hit walls harder, meaning increased pressure, unless you allow the walls to expand, then increase volume. If you have gas with a set temperature and you shrink the volume, you have all those molecules with whatever kinetic energy now confined to a smaller space. If you're bouncing off the walls already, and you make the walls closer, you're gonna be bouncing harder, and so on
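Blochwave's constant-pressure case can be sanity-checked with Charles's law (V/T constant at fixed pressure and amount of gas). A minimal sketch; the freezer temperature of −5 °C is an assumption, not a value from the thread:

```python
# Charles's law: at constant pressure, V1/T1 = V2/T2 with T in kelvin.

def volume_after_cooling(v_initial, t_initial_c, t_final_c):
    """New volume of an ideal gas cooled at constant pressure.
    Temperatures are in degrees Celsius; volume units are arbitrary."""
    t_i = t_initial_c + 273.15   # convert to kelvin
    t_f = t_final_c + 273.15
    return v_initial * t_f / t_i

# Balloon from scenario 1: helium at 30 °C moved into a -5 °C freezer.
v_new = volume_after_cooling(1.0, 30.0, -5.0)   # starting from 1.0 L
print(round(v_new, 3))  # → 0.885, i.e. the balloon shrinks by about 12%
```

This is why the balloon visibly shrinks even though its pressure stays nearly constant at a little above atmospheric.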
Last edited: Feb 19, 2008
http://theinfolist.com/html/ALL/s/cladogram.html

TheInfoList
A cladogram (from Greek ''clados'' "branch" and ''gramma'' "character") is a diagram used in cladistics to show relations among organisms. A cladogram is not, however, an evolutionary tree because it does not show how ancestors are related to descendants, nor does it show how much they have changed, so many differing evolutionary trees can be consistent with the same cladogram. A cladogram uses lines that branch off in different directions ending at a clade, a group of organisms with a last common ancestor. There are many shapes of cladograms but they all have lines that branch off from other lines. The lines can be traced back to where they branch off. These branching-off points represent a hypothetical ancestor (not an actual entity) which can be inferred to exhibit the traits shared among the terminal taxa above it. This hypothetical ancestor might then provide clues about the order of evolution of various features, adaptation, and other evolutionary narratives about ancestors. Although traditionally such cladograms were generated largely on the basis of morphological characters, DNA and RNA sequencing data and computational phylogenetics are now very commonly used in the generation of cladograms, either on their own or in combination with morphology.
## Molecular versus morphological data
The characteristics used to create a cladogram can be roughly categorized as either morphological (synapsid skull, warm blooded, notochord, unicellular, etc.) or molecular (DNA, RNA, or other genetic information). Prior to the advent of DNA sequencing, cladistic analysis primarily used morphological data. Behavioral data (for animals) may also be used. As DNA sequencing has become cheaper and easier, molecular systematics has become a more and more popular way to infer phylogenetic hypotheses. Using a parsimony criterion is only one of several methods to infer a phylogeny from molecular data. Approaches such as maximum likelihood, which incorporate explicit models of sequence evolution, are non-Hennigian ways to evaluate sequence data. Another powerful method of reconstructing phylogenies is the use of genomic retrotransposon markers, which are thought to be less prone to the problem of reversion that plagues sequence data. They are also generally assumed to have a low incidence of homoplasies because it was once thought that their integration into the genome was entirely random; this seems at least sometimes not to be the case, however.
## Plesiomorphies and synapomorphies
Researchers must decide which character states are "ancestral" (''plesiomorphies'') and which are derived (''synapomorphies''), because only synapomorphic character states provide evidence of grouping. This determination is usually done by comparison to the character states of one or more ''outgroups''. States shared between the outgroup and some members of the in-group are symplesiomorphies; states that are present only in a subset of the in-group are synapomorphies. Note that character states unique to a single terminal (autapomorphies) do not provide evidence of grouping. The choice of an outgroup is a crucial step in cladistic analysis because different outgroups can produce trees with profoundly different topologies.
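The outgroup comparison can be expressed as a tiny routine. This is a sketch with a hypothetical 0/1 character matrix (lamprey as outgroup), not data from the article:

```python
# Polarity by outgroup comparison: a state shared with the outgroup is
# treated as ancestral (plesiomorphic); any other state as derived.

def polarize(matrix, outgroup):
    """matrix: {taxon: {character: state}}. For each character, return the
    set of in-group taxa whose state is derived relative to the outgroup."""
    derived = {}
    for character, ancestral in matrix[outgroup].items():
        derived[character] = {
            taxon for taxon, chars in matrix.items()
            if taxon != outgroup and chars[character] != ancestral
        }
    return derived

matrix = {
    "lamprey": {"jaws": 0, "limbs": 0},   # outgroup: jawless, limbless
    "shark":   {"jaws": 1, "limbs": 1},
    "frog":    {"jaws": 1, "limbs": 1},
    "mouse":   {"jaws": 1, "limbs": 1},
}
matrix["shark"]["limbs"] = 0              # sharks lack limbs
derived = polarize(matrix, "lamprey")
# "limbs" is derived only in frog and mouse, so it is a candidate
# synapomorphy grouping them; "jaws" unites all three in-group taxa.
print(derived["limbs"])
```

States unique to one taxon (autapomorphies) would show up as singleton sets here, which is why they carry no grouping information.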
## Homoplasies
A homoplasy is a character state that is shared by two or more taxa due to some cause ''other'' than common ancestry. The two main types of homoplasy are convergence (evolution of the "same" character in at least two distinct lineages) and reversion (the return to an ancestral character state). Characters that are obviously homoplastic, such as white fur in different lineages of Arctic mammals, should not be included as a character in a phylogenetic analysis as they do not contribute anything to our understanding of relationships. However, homoplasy is often not evident from inspection of the character itself (as in DNA sequence, for example), and is then detected by its incongruence (unparsimonious distribution) on a most-parsimonious cladogram. Note that characters that are homoplastic may still contain phylogenetic signal.

A well-known example of homoplasy due to convergent evolution would be the character, "presence of wings". Although the wings of birds, bats, and insects serve the same function, each evolved independently, as can be seen by their anatomy. If a bird, bat, and a winged insect were scored for the character, "presence of wings", a homoplasy would be introduced into the dataset, and this could potentially confound the analysis, possibly resulting in a false hypothesis of relationships. Of course, the only reason a homoplasy is recognizable in the first place is because there are other characters that imply a pattern of relationships that reveal its homoplastic distribution.
## What is not a cladogram
A cladogram is the diagrammatic result of an analysis that groups taxa on the basis of synapomorphies alone. There are many other phylogenetic algorithms that treat data somewhat differently and result in phylogenetic trees that look like cladograms but are not cladograms. For example, phenetic algorithms, such as UPGMA and neighbor-joining, group by overall similarity and treat both synapomorphies and symplesiomorphies as evidence of grouping. The resulting diagrams are phenograms, not cladograms. Similarly, the results of model-based methods (maximum likelihood or Bayesian approaches) that take into account both branching order and "branch length" count both synapomorphies and autapomorphies as evidence for or against grouping. The diagrams resulting from those sorts of analyses are not cladograms, either.
There are several algorithms available to identify the "best" cladogram. Most algorithms use a metric to measure how consistent a candidate cladogram is with the data. Most cladogram algorithms use the mathematical techniques of optimization and minimization. In general, cladogram generation algorithms must be implemented as computer programs, although some algorithms can be performed manually when the data sets are modest (for example, just a few species and a couple of characteristics). Some algorithms are useful only when the characteristic data are molecular (DNA, RNA); other algorithms are useful only when the characteristic data are morphological. Other algorithms can be used when the characteristic data include both molecular and morphological data. Algorithms for cladograms or other types of phylogenetic trees include least squares, neighbor-joining, parsimony, maximum likelihood, and Bayesian inference. Biologists sometimes use the term "parsimony" for a specific kind of cladogram generation algorithm and sometimes as an umbrella term for all phylogenetic algorithms. Algorithms that perform optimization tasks (such as building cladograms) can be sensitive to the order in which the input data (the list of species and their characteristics) are presented. Inputting the data in various orders can cause the same algorithm to produce different "best" cladograms. In these situations, the user should input the data in various orders and compare the results. Using different algorithms on a single data set can sometimes yield different "best" cladograms, because each algorithm may have a unique definition of what is "best". Because of the astronomical number of possible cladograms, algorithms cannot guarantee that the solution is the overall best solution. A nonoptimal cladogram will be selected if the program settles on a local minimum rather than the desired global minimum. To help solve this problem, many cladogram algorithms use a simulated annealing approach to increase the likelihood that the selected cladogram is the optimal one. The basal position is the direction of the base (or root) of a rooted phylogenetic tree or cladogram. A basal clade is the earliest clade (of a given taxonomic rank) to branch within a larger clade.
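The escape from local minima can be illustrated with a minimal simulated-annealing sketch. The "tree length" here is a toy stand-in (a count of adjacent out-of-order taxa in a leaf ordering), not a real parsimony score, and all names are hypothetical:

```python
import math
import random

def simulated_annealing(score, state, neighbor, t_start=10.0, t_end=0.01,
                        steps=5000, seed=0):
    """Minimize `score` by accepting worse neighbors with probability
    exp(-delta/T), so the search can climb out of local minima."""
    rng = random.Random(seed)
    best = current = state
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        cand = neighbor(current, rng)
        delta = score(cand) - score(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
        if score(current) < score(best):
            best = current
    return best

# Toy stand-in for "tree length": number of adjacent out-of-order taxa
# in a leaf ordering (a real analysis would count character-state changes).
def toy_length(order):
    return sum(1 for a, b in zip(order, order[1:]) if a > b)

def swap_neighbor(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return tuple(new)

start = (4, 2, 0, 3, 1)
best = simulated_annealing(toy_length, start, swap_neighbor)
```

The occasional acceptance of worse candidates at high temperature is what distinguishes this from a pure greedy search, which would stall at the first local minimum.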
# Statistics
## Incongruence length difference test (or partition homogeneity test)
The incongruence length difference (ILD) test measures how the combination of different datasets (e.g., morphological and molecular, or plastid and nuclear genes) contributes to a longer tree. The total tree length of each partition is first calculated and the lengths summed. Replicates are then made by randomly reassembling the characters into partitions of the original sizes and summing the lengths of the resulting trees. A p-value of 0.01 is obtained for 100 replicates if 99 replicates have longer combined tree lengths.
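The replicate comparison can be sketched as follows; the tree lengths are made-up numbers, since a real test would obtain them from parsimony searches on each random partition:

```python
def ild_p_value(observed_sum, replicate_sums):
    """Fraction of random-partition replicates whose combined tree length
    is not longer than the observed combined length of the original
    partitions (smaller p = stronger incongruence between partitions)."""
    n = len(replicate_sums)
    return sum(1 for r in replicate_sums if r <= observed_sum) / n

# 100 hypothetical replicates: 99 are longer than the observed sum of
# 120 steps, one is not, giving p = 0.01 as in the text.
replicates = [121 + i for i in range(99)] + [119]
p = ild_p_value(120, replicates)
```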
## Measuring homoplasy
Some measures attempt to quantify the amount of homoplasy in a dataset with reference to a tree, though it is not necessarily clear precisely what property these measures aim to quantify.
### Consistency index
The consistency index (CI) measures the consistency of a tree to a set of data – a measure of the minimum amount of homoplasy implied by the tree. It is calculated by counting the minimum number of changes in a dataset and dividing it by the actual number of changes needed for the cladogram. A consistency index can also be calculated for an individual character ''i'', denoted ci. Besides reflecting the amount of homoplasy, the metric also reflects the number of taxa in the dataset, (to a lesser extent) the number of characters in a dataset, the degree to which each character carries phylogenetic information, and the fashion in which additive characters are coded, rendering it unfit for purpose. ci occupies a range from 1 to $1/\lfloor n.taxa/2\rfloor$ in binary characters with an even state distribution; its minimum value is larger when states are not evenly spread. In general, for a binary or non-binary character with $n.states$, ci occupies a range from 1 to $(n.states-1)/(n.taxa-\lceil n.taxa/n.states\rceil)$.
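As a sketch, the per-character and ensemble calculations look like this; the step counts are invented, as if reported by a parsimony program for three characters on one cladogram:

```python
def consistency_index(min_steps, obs_steps):
    """CI = minimum conceivable changes / changes actually required on
    the tree. Equals 1 when a character fits the tree without homoplasy."""
    return min_steps / obs_steps

min_steps = [1, 1, 2]   # states - 1 for each character (hypothetical)
obs_steps = [1, 2, 2]   # steps actually needed on the tree (hypothetical)
ci_per_char = [consistency_index(m, s) for m, s in zip(min_steps, obs_steps)]

# The ensemble CI for a whole matrix sums the steps before dividing.
ensemble_ci = sum(min_steps) / sum(obs_steps)
```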
### Retention index
The retention index (RI) was proposed as an improvement of the CI "for certain applications". This metric also purports to measure the amount of homoplasy, but it additionally measures how well synapomorphies explain the tree. It is calculated by taking the (maximum number of changes on a tree minus the number of changes on the tree) and dividing by the (maximum number of changes on the tree minus the minimum number of changes in the dataset). The rescaled consistency index (RC) is obtained by multiplying the CI by the RI; in effect this stretches the range of the CI such that its minimum theoretically attainable value is rescaled to 0, with its maximum remaining at 1. The homoplasy index (HI) is simply 1 − CI.
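The relationships among RI, RC, and HI can be sketched directly from the definitions above; the whole-matrix step counts here are hypothetical:

```python
def retention_index(max_steps, obs_steps, min_steps):
    """RI = (g - s) / (g - m): the fraction of potential synapomorphy
    actually retained as synapomorphy on the tree."""
    return (max_steps - obs_steps) / (max_steps - min_steps)

# Hypothetical whole-matrix step counts:
g, s, m = 10, 6, 4          # maximum on any tree, observed, minimum possible
ci = m / s                  # ensemble consistency index
ri = retention_index(g, s, m)
rc = ci * ri                # rescaled consistency index
hi = 1 - ci                 # homoplasy index
```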
### Homoplasy Excess Ratio
The homoplasy excess ratio (HER) measures the amount of homoplasy observed on a tree relative to the maximum amount of homoplasy that could theoretically be present: HER = 1 − (observed homoplasy excess) / (maximum homoplasy excess). A value of 1 indicates no homoplasy; 0 represents as much homoplasy as there would be in a fully random dataset, and negative values indicate more homoplasy still (and tend to occur only in contrived examples). The HER is presented as the best measure of homoplasy currently available.
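In code, the ratio is a one-liner; in practice the excess values would come from comparing tree lengths against randomized datasets, so the numbers below are invented:

```python
def homoplasy_excess_ratio(obs_excess, max_excess):
    """HER = 1 - observed homoplasy excess / maximum homoplasy excess.
    1 = no homoplasy; 0 = as much homoplasy as a fully random dataset;
    negative values = even more (contrived cases)."""
    return 1 - obs_excess / max_excess

her_clean = homoplasy_excess_ratio(0, 12)    # no homoplasy at all
her_random = homoplasy_excess_ratio(12, 12)  # as homoplastic as random data
```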
* Phylogenetics
* Dendrogram
* Basal (phylogenetics)
https://rupress.org/jgp/article/146/2/161/43457/Sphingomyelinase-D-inhibits-store-operated-Ca2

Infections caused by certain bacteria including Mycobacterium tuberculosis and Corynebacterium pseudotuberculosis provoke inflammatory responses characterized by the formation of granulomas with necrotic foci—so-called caseous necrosis. The granulomas of infected animals show prominent infiltration by T lymphocytes, and T cell depletion increases host mortality. The notorious zoonotic pathogen C. pseudotuberculosis secretes sphingomyelinase (SMase) D, a phospholipase that cleaves off the choline moiety of sphingomyelin, a phospholipid found primarily in the outer leaflet of host cell plasma membranes. Experimental C. pseudotuberculosis strains that lack SMase D are markedly less infectious and unable to spread in hosts, indicating that this enzyme is a crucial virulence factor for sustaining the caseous lymphadenitis infections caused by this microbe. However, the molecular mechanism by which SMase D helps bacteria evade the host’s immune response remains unknown. Here, we find that SMase D inhibits store-operated Ca2+ entry (SOCE) in human T cells and lowers the production of the SOCE-dependent cytokines interleukin-2, which is critical for T cell growth, proliferation, and differentiation, and tumor necrosis factor α, which is crucial for the formation and maintenance of granulomas in microbial infections. SMase D inhibits SOCE through a previously unknown mechanism, namely, suppression of Orai1 current, rather than through altering gating of voltage-gated K+ channels. This finding suggests that, whereas certain genetic mutations abolish Orai1 activity causing severe combined immunodeficiency (SCID), bacteria have the ability to suppress Orai1 activity with SMase D to create an acquired, chronic SCID-like condition that allows persistent infection.
Thus, in an example of how virulence factors can disrupt key membrane protein function by targeting phospholipids in host cell membranes, our study has uncovered a novel molecular mechanism that bacteria can use to thwart host immunity.
INTRODUCTION
Infections by zoonotic Corynebacterium pseudotuberculosis and Mycobacterium tuberculosis are characterized by caseous necrosis. C. pseudotuberculosis secretes sphingomyelinase (SMase) D, a virulence factor secreted by some other human bacterial pathogens and also the active component of certain spider venoms (McNamara et al., 1995; Isbister and Fan, 2011). SMase D cleaves the choline moiety from sphingomyelin (Fig. 1 A) (Souček et al., 1971), a phospholipid found predominantly in the plasma membrane’s outer leaflet, leaving behind ceramide-1-phosphate (C1P). C. pseudotuberculosis is perhaps the most studied model of SMase D in bacterial virulence. Lymph nodes infected with it show prominent infiltration of T lymphocytes (Ellis, 1988; Pépin et al., 1994). These cells play a critical role in the host’s resistance to these microbes, as antibody-mediated depletion of host T cells or of T cell cytokines promotes the spread of infection and increases host mortality (Lan et al., 1999). Intriguingly, experimental C. pseudotuberculosis strains that lack SMase D struggle to establish infections and fail to disseminate throughout infected hosts (McNamara et al., 1994). However, the molecular mechanism by which SMase D helps the bacterium evade host immunity has remained unknown.
T lymphocyte function depends on Ca2+ signaling (Hogan et al., 2010). Antigen recognition by the T cell receptor (TCR) engenders the production of intracellular inositol 1,4,5 trisphosphate (IP3), which by activating ER IP3 receptor channels causes Ca2+ to leave the ER (Imboden and Stobo, 1985). The resulting ER Ca2+ store depletion mobilizes the Ca2+-sensing molecule Stim1 to activate the store-operated Ca2+ entry (SOCE) channel Orai1 in the plasma membrane (Liou et al., 2005; Roos et al., 2005; Feske et al., 2006; Vig et al., 2006; Zhang et al., 2006). This Orai1-mediated extracellular Ca2+ entry then dramatically amplifies the IP3 receptor–mediated Ca2+ signal and ultimately triggers T cell proliferation, differentiation, cytokine production, and cytotoxic granule release (Hogan et al., 2010). Genetic defects in Orai1 can produce a severe combined immunodeficiency (SCID), underscoring the critical role this channel plays in human immunity (Partiseti et al., 1994; Feske et al., 2006).
T lymphocyte SOCE is supported by endogenously expressed KV1.3 channels (DeCoursey et al., 1984; Matteson and Deutsch, 1984). These channels play a major role in setting the negative resting membrane potential, typically near −50 mV, which drives the entry of Ca2+ ions across cell plasma membranes. Inhibition of KV1.3 channels has been shown to suppress T cell Ca2+ signaling and the critical immune functions it triggers (Cahalan and Chandy, 2009). Our group previously found that, at the −50-mV membrane potential, SMase D treatment can boost the mean fraction of active KV1.3 channels from near 0 to ∼20% (Combs et al., 2013; see also Ramu et al., 2006; Xu et al., 2008; Milescu et al., 2009), which, absent other effects, would be expected to further hyperpolarize the membrane potential and enhance T lymphocyte function. Contrary to this expectation, the effect of SMase D in C. pseudotuberculosis infection is immune suppression. To resolve this apparent conundrum, we investigated the effects of SMase D on human T lymphocytes and found that SMase D actually lowers Ca2+ entry into T lymphocytes rather than boosting it.
MATERIALS AND METHODS
Cell cultures, molecular biology, biochemistry, and reagents
Chinese hamster ovary (CHO) or Jurkat (Clone E6-1) and human T cells were cultured in F12 Kaighn’s or RPMI 1640 media (Invitrogen) supplemented with 10% FBS (Hyclone) at 37°C with 5% CO2. Before recording, CHO cells were trypsinized and resuspended in recording solutions. These cell lines were obtained from ATCC, whereas human peripheral blood T cells were from healthy volunteers through the Penn Immunology Core (IRB protocol 705906). hStim1 and hOrai1 cDNAs (provided by M. Cahalan, University of California, Irvine, Irvine, CA) were cloned into pIRES2-AcGFP vectors. For expression of these constructs, CHO cells were transfected with the Fugene6 (Promega) method 24–48 h before study, and visualization of fluorescence signals was used to identify successful transfection.
All organic chemicals were purchased from Sigma-Aldrich. Thapsigargin (Tg), ionomycin, and valinomycin stock solutions were prepared at 1 mg/ml in DMSO; PMA was prepared at 0.1 mg/ml in DMSO. 5-(4-phenoxybutoxy)psoralen (PAP-1) stock solutions were made at 200 µM in DMSO. Mouse anti–human CD3ε, mouse anti–human CD28, and goat anti–mouse IgG antibodies (all from R&D Systems) were prepared at 1 mg/ml in PBS. N-palmitoyl-ceramide-l-phosphate (Avanti Polar Lipids, Inc.) was prepared as 500-nmol aliquots dried under argon; these aliquots were resuspended in 200 µl of 100% ethanol, which was added drop-wise to an equimolar amount of defatted BSA freshly prepared in calcium-containing solutions (see recipes below) to yield a final lipid concentration of 0.5 mg/ml (Lipsky and Pagano, 1985). Mouse anti–human CD4 antibody conjugated to PerCP-Cy5.5, mouse anti–human CD8 antibody conjugated to APC-Cy7, and CompBeads for compensation controls were purchased from BD. Recombinant bacterial SMase D and an inactive form of this enzyme containing H11A and H47A mutations, as well as recombinant SMase C, were generated as described previously (Ramu et al., 2007) (the SMase D cDNA was provided by S. Billington, University of Arizona, Tucson, AZ). In all experiments, SMase D and SMase C were applied at 0.3 µg/ml, and the inactive form of SMase D was applied at 3 µg/ml.
Ca2+ imaging
For Ca2+-imaging experiments, human T cells suspended at 106 cells/ml in complete RPMI media were exposed for 30 min at 20°C in the dark, with gentle shaking, to 1 µg/ml of the acetoxymethyl ester (AM) form of Fura-2. Fura-2-AM was diluted from 1-mg/ml stock solutions prepared in DMSO, and these stocks were prepared from lyophilized aliquotted powders (Molecular Probes), kept frozen, and used within 1–2 wk. After Fura-2-AM loading, cells were collected by spinning for 4 min at 1,500 g, and then resuspended at 106 cells/ml in complete RPMI media. Cells were incubated for 30 more minutes at 20°C in the dark with gentle shaking to allow for complete de-esterification of Fura-2-AM.
Fura-2 imaging of cells was performed on an inverted microscope (Eclipse-Ti; Nikon) using a 40× oil objective. Fura-2 fluorescence was excited at 340 or 380 nm with a xenon light source housed in a high-speed wavelength switcher (Lamba-DG4; Sutter Instrument) and detected at 510 nm. Images were captured at 0.2 Hz with a Retiga-SRV camera (Q-Imaging) and analyzed on a PC with Elements-AR software (Nikon). The Fura-2 ratio (intensity of the emitted light excited at 340 nm relative to that at 380 nm) was averaged for pixels corresponding to all the cells (19–84 per experiment) in the field of view. Background correction was performed by subtraction of signal from cell-free areas. The 340- and 380-nm signals were corrected for cell autofluorescence by exchanging the bath with solutions containing 10 mM MnCl2 and 1 µg/ml ionomycin to quench the Fura-2 dye at the end of each experiment, and subtracting any remaining fluorescence from the respective signals.
During the above experiments, cells were allowed to settle onto poly-l-lysine–coated glass coverslips mounted on a chamber (RC-24N; Warner Instruments), and bathing solutions were changed by means of a gravity-driven perfusion system. All experiments were performed at room temperature. The 2-mM Ca2+-containing salt solution (2 Ca) contained (mM): 145 NaCl, 5 KCl, 2 CaCl2, 1 MgCl2, 10 glucose, and 10 HEPES, with pH adjusted to 7.30 with NaOH. The nominally Ca-free solution (0 Ca; ∼500 nM Ca2+) contained (mM): 145 NaCl, 5 KCl, 0.8 CaCl2, 1 EGTA, 1 MgCl2, 10 glucose, and 10 HEPES, with pH adjusted to 7.30 with NaOH. All perfused reagents were freshly diluted into 500 µl of the desired bathing solution before perfusion.
Data from the Ca2+-imaging experiments were normalized to facilitate comparison between and within the figures. The initial (time = 0) value of the Fura-2 signal for a particular control experiment was used to normalize that tracing and the remaining control tracings, as well as the SMase D–treated tracings. This procedure was followed separately for each stimulus type (anti-CD3ε antibody, Tg, and ionomycin).
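The normalization described above amounts to dividing every tracing for a given stimulus type by a single reference value. A minimal sketch with invented numbers (rows are tracings, columns are time points):

```python
# Invented Fura-2 ratio tracings; columns are successive time points.
control = [[2.0, 2.4, 3.6, 3.0],
           [2.1, 2.5, 3.5, 2.9]]
treated = [[2.0, 2.2, 2.6, 2.4]]

# The time = 0 value of one designated control tracing normalizes every
# tracing for that stimulus type, control and treated alike.
reference = control[0][0]

def normalize(trace):
    return [v / reference for v in trace]

control_norm = [normalize(t) for t in control]
treated_norm = [normalize(t) for t in treated]
```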
Flow cytometry
For flow cytometry experiments, human T cells were suspended at 106 cells/ml in complete RPMI media and exposed for 30 min at 20°C in the dark with gentle shaking to 1 µg/ml of the AM form of Indo-1. Indo-1-AM was diluted from 1-mg/ml stock solutions prepared in DMSO, and these stocks were prepared from lyophilized aliquotted powders (Molecular Probes), kept frozen, and used within 1–2 wk. After Indo-1-AM loading, cells were collected by spinning for 4 min at 300 g, and then resuspended in cold RPMI at 107 cells/ml. Cells were incubated on ice in the presence of CD4 and CD8 antibodies for 20 min, diluted with 9 vol of cold flow salt solution (FSS; recipe follows), collected by spinning for 4 min at 300 g, and then resuspended at 106 cells/ml for analysis. Unstained cell controls and single-stained antibody compensation control beads, the latter prepared according to the manufacturer’s instructions, were prepared in parallel with the multi-stained cells.
Flow cytometry experiments were performed on a flow cytometer running FACSDiva software (LSR II; BD) and housed and maintained by the University of Pennsylvania Flow Cytometry Core Facility. PerCP-Cy5.5 was excited by a 488-nm laser; emission was captured by a 670-nm low-pass filter set behind a 635-nm low-pass dichroic mirror. APC-Cy7 was excited by a 633-nm laser; emission was captured by a bandpass filter centered at 780 nm set behind a 735-nm low-pass dichroic mirror. Indo-1 was excited by a 355-nm laser; calcium-bound emission was captured by a bandpass filter centered at 405 nm, and calcium-free emission was captured at 530 nm behind a 450-nm low-pass dichroic mirror. Unstained cells were used to set gates for human T cells and define background fluorescence levels for all channels. Indo-1–loaded cells were used to set photomultiplier tube (PMT) voltages for Indo-1 emissions, and single-stained compensation control beads were used to set PMT voltages for antibody-bound fluorophores. Cells were analyzed at speeds of ∼300/s, and post-collection data analysis was performed on FlowJo software (Tree Star). The mean Indo-1 ratio (intensity of the emitted light at 405 nm relative to that at 530 nm) was determined for cells binned over 3-s windows. The 405- and 530-nm signals were corrected for cell autofluorescence by the addition of 2 µl of 10 mM MnCl2 and 0.5 µg ionomycin to quench the Indo-1 dye at the end of each experiment, and by subtracting any remaining fluorescence from the respective signals.
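The 3-s binning of per-cell events into a mean ratio time course can be sketched as follows; the event times and ratio values are invented:

```python
def bin_mean_ratio(times, ratios, width=3.0):
    """Group flow-cytometry events into consecutive time windows of
    `width` seconds and return the mean ratio within each window."""
    bins = {}
    for t, r in zip(times, ratios):
        bins.setdefault(int(t // width), []).append(r)
    return [sum(v) / len(v) for _, v in sorted(bins.items())]

# Invented events: acquisition time in seconds and Indo-1 405/530 ratio.
times  = [0.5, 1.2, 2.9, 3.1, 4.0, 5.9]
ratios = [1.0, 1.2, 1.1, 2.0, 2.2, 2.4]
means = bin_mean_ratio(times, ratios)  # one mean for 0-3 s, one for 3-6 s
```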
All experiments were performed at room temperature in a nominally Ca-free (0 Ca; ∼500 nM Ca2+) FSS containing (mM) 145 NaCl, 5 KCl, 0.8 CaCl2, 1 EGTA, 1 MgCl2, 10 glucose, and 10 HEPES, with pH adjusted to 7.30 with NaOH. Data from these flow cytometry experiments were normalized to facilitate comparison between and within the figures. The initial (time = 0) value of the Indo-1 signal for a particular control experiment was used to normalize that trace and the remaining control traces, as well as the SMase D–treated traces.
Electrophysiology
Channel currents from cells were recorded in the whole-cell configuration with a patch-clamp amplifier (Axopatch 200B; Molecular Devices), filtered at 5 kHz (KV1.3), 2 kHz (CHO Orai1), or 1 kHz (Jurkat Orai1), and sampled at 50 kHz using a Digidata 1322 (Molecular Devices) interfaced to a PC. pClamp 10 software (Molecular Devices) was used for amplifier control and data acquisition. Electrodes were fire polished (2–4 MΩ) and coated with beeswax. Capacitance and series resistance were electronically compensated.
For all KV1.3 studies, membrane potential was held at −100 mV and recordings were started 5 min after the whole-cell configuration was established. KV1.3 steady-state inactivation curves were obtained using a double-pulse protocol where, every 30 s, a 2,550-ms pulse to between −90 and −10 mV at 10-mV intervals was followed by a second 50-ms test pulse to 0 mV. Steady-state inactivation curves were constructed from the peak currents during the second pulse and fit to the following Boltzmann function:
$I_{\mathrm{normalized}} = \frac{1-c}{1+e^{ZF(V-V_{1/2})/RT}} + c,$
where V1/2 is the midpoint; Z is the slope; c is the fraction of noninactivated channels; and F, R, and T have their usual meaning. G-V relationships (G-V curves) for KV1.3 channels were constructed from isochronic tail currents and fit to the following Boltzmann function:
$\frac{G}{G_{\mathrm{max}}} = \frac{1}{1+e^{-ZF(V-V_{1/2})/RT}},$
with symbol meanings as above. For KV channel recordings, the bath solution contained (mM): 145 NaCl, 5 KCl, 0.3 CaCl2, 1 MgCl2, and 10 HEPES, with pH adjusted to 7.30 with NaOH. The electrode solution contained (mM): 140 KCl, 10 EGTA, 1 CaCl2, 1 MgCl2, and 10 HEPES, with pH adjusted to 7.30 with KOH.
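The two Boltzmann functions above translate directly into code. The parameter values below are illustrative only (a midpoint near −50 mV and a small noninactivating fraction), not fitted values from these recordings:

```python
import math

F, R, T = 96485.0, 8.314, 295.0  # C/mol, J/(mol*K), K (room temperature)

def boltzmann_inactivation(v, v_half, z, c):
    """Fraction of noninactivated current at holding potential v (volts):
    I_norm = (1 - c) / (1 + exp(zF(V - V1/2)/RT)) + c."""
    return (1 - c) / (1 + math.exp(z * F * (v - v_half) / (R * T))) + c

def boltzmann_gv(v, v_half, z):
    """Normalized conductance: G/Gmax = 1 / (1 + exp(-zF(V - V1/2)/RT))."""
    return 1 / (1 + math.exp(-z * F * (v - v_half) / (R * T)))

# At V = V1/2 the exponential term is 1, so the curves sit halfway
# between their limiting values.
i_mid = boltzmann_inactivation(-0.050, -0.050, 4.0, 0.05)
g_mid = boltzmann_gv(-0.030, -0.030, 4.0)
```

In an actual analysis these functions would be fit to the measured peak or tail currents by nonlinear least squares to extract V1/2, Z, and c.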
For all studies of Orai1 currents expressed in CHO cells, transfected CHO cells in suspension were allowed to settle onto glass coverslips in a bath containing (mM): 150 NaCl, 5 KCl, 0.1 CaCl2, 2 MgCl2, 1 EGTA, and 10 HEPES, with pH adjusted to 7.30 with NaOH, and 1 µM Tg. This solution has a free Ca2+ concentration of ∼20 nM, and recordings in this solution were used to obtain nominally calcium-independent leak traces by stepping from the 0-mV holding potential to test potentials between −120 and 60 mV in 10-mV intervals every 2 s. After collection of these leak traces, Orai1 currents were revealed after replacing the bath solution with a solution containing (mM): 150 NaCl, 5 KCl, 20 CaCl2, 2 MgCl2, and 10 HEPES, with pH adjusted to 7.30 with NaOH, and 1 µM Tg. Calcium-dependent currents were then obtained by stepping from the 0-mV holding potential to test potentials between −120 and 60 mV in 10-mV intervals every 2 s. During tests of the effect of SMase D or the inactive enzyme, cell membrane potential was stepped repeatedly from 0 to −120 mV every 10 s. After enzyme treatment, calcium-dependent and calcium-independent leak traces were recollected as before. Calcium-independent traces were subtracted from calcium-dependent ones to isolate the Orai1 currents. For all Orai1 recordings, the electrode solution contained (mM): 140 CsMeSO3, 8 MgCl2, 1 CaCl2, 10 EGTA, and 10 HEPES, with pH adjusted to 7.30 with MeSO3H.
The approach and solutions used to record expressed Orai1 currents in CHO cells were also used to record native Orai1 currents from Jurkat cells, but with two modifications. First, calcium-dependent and calcium-independent traces were obtained by stepping from the 0-mV holding potential to test potentials between −100 and 60 mV in 20-mV steps every 2 s. Second, Orai1 currents were isolated by subtracting the mean steady-state calcium-independent current at each voltage from the corresponding calcium-dependent current.
ELISA assays of interleukin-2 (IL-2) and tumor necrosis factor α (TNF)
Human T lymphocytes at 2 × 106 cells/ml were exposed to inactive enzyme or SMase D for 5 min in complete RPMI culture medium. Media were then refreshed by centrifuging the cells for 4 min at 1,500 g and resuspending them in fresh media. The cells were then transferred to a 24-well tissue culture plate containing polystyrene beads coated with both anti-CD3 and anti-CD28 antibodies (Life Technologies) to achieve a 1:1 cell/bead ratio and concentrations of 106 cells/ml in 500-µl volumes. Stimulation with the antibody-labeled beads was allowed to proceed for 24 h, after which supernatants were collected by centrifugation. The IL-2 or TNF concentrations of supernatant samples diluted at 1:100 in fresh RPMI 10 were determined using an ELISA kit (R&D Systems) with a 96-well plate format. Optical densities of each well at 450 nm were analyzed on an EMax plate reader (Molecular Devices) with wavelength correction set to 570 nm.
Statistics
Statistically significant comparisons are indicated by asterisks in the figure panels. Nearly all figures compare experiments using SMase D with independent control experiments, where a catalytically inactive form of the enzyme was used instead. Generally, these comparisons are made between more than one experimental feature (e.g., the peak height and declining phase of Ca2+-imaging experiments) that are themselves non-independent. In such cases, p-values obtained from t tests were used to evaluate statistical significance, with a P < 0.05 denoted in figure panels with an asterisk. However, some experiments compare three independent conditions: SMase D treatment, C1P treatment, and control. In these cases, one-way ANOVA was first used to evaluate for significance, and asterisks denote post-comparison Bonferroni tests with P < 0.05.
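The Bonferroni post-comparison step described above amounts to multiplying each raw pairwise p-value by the number of comparisons before testing against the significance threshold. A sketch with invented p-values (as if from three pairwise t tests among SMase D, C1P, and control conditions):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons (capped at 1) and test the adjusted value against alpha."""
    k = len(p_values)
    adjusted = [min(1.0, p * k) for p in p_values]
    return adjusted, [p < alpha for p in adjusted]

# Invented raw p-values for the three pairwise comparisons:
adjusted, significant = bonferroni([0.004, 0.20, 0.03])
```

Note that a raw p of 0.03 fails here once adjusted (0.09), which is why the uncorrected threshold cannot be used for the three-way comparisons.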
RESULTS
SMase D treatment suppresses the intracellular Ca2+ signal of T lymphocytes
To stimulate human T lymphocyte Ca2+ signaling, we used antibodies against CD3ε (Ab1; a component of the TCR) to mimic antigen recognition by TCRs, and cross-linked these antibodies in situ with an anti-IgG antibody (Ab2) to boost the stimulus strength. This commonly used protocol produced the expected rise of intracellular Ca2+ concentration (Fig. 1 B), as monitored on a fluorescence imaging system with the ratiometric Ca2+ indicator Fura-2 (Grynkiewicz et al., 1985). In the presence of 2 mM of extracellular Ca2+, the Fura-2 Ca2+ signal characteristically peaked and then decayed toward a plateau. Several factors cause the Ca2+ signal to decline and reach a temporary steady state, including deactivation of SOCE channels, refilling of ER, extrusion of Ca2+ by the plasma membrane Ca2+ ATPase, reduction of the driving force for Ca2+ entry after the Ca2+ entry–caused membrane depolarization, and buffering of Ca2+ by other organelles and proteins (Hogan et al., 2010).
To test the effect of SMase D on the Ca2+ signal, we treated T lymphocytes with SMase D before antibody stimulation (Fig. 1 B). Contrary to expectations based on its KV1.3-stimulating effect, SMase D actually lowered the Ca2+ signal. SMase D treatment lowered both the peak and, more modestly, the declining phase of the Fura-2 Ca2+ signal as compared with control cells treated with a catalytically inactive mutant of SMase D (Fig. 1, B and C).
SMase D treatment suppresses SOCE
SMase D could diminish the size of the Fura-2 signal either by directly reducing SOCE across the plasma membrane or by reducing Ca2+ release from the ER stores, which would in turn result in reduced SOCE. To distinguish between these possibilities, we used Tg, an inhibitor of the sarco-/endoplasmic Ca2+ ATPase, to deplete ER Ca2+ stores by blocking their refill, an experimental manipulation known to selectively activate SOCE (Hogan et al., 2010). As expected, when cells treated with inactive SMase D were exposed to 1 µg Tg in the presence of a nominally Ca2+ free solution, the Fura-2 signal rose transiently, reflecting the slow release of Ca2+ from ER stores (Fig. 1 D). Subsequent reperfusion of these cells with the 2-mM Ca2+-containing solution revealed the SOCE component. SMase D treatment markedly suppressed the SOCE component of the Tg-triggered Fura-2 signal (Fig. 1, D and E). This result suggests that SMase D suppresses the Ca2+ signal not primarily by reducing Ca2+ release from ER, as the enzyme still suppresses SOCE when the ER is maximally depleted.
Indeed, SMase D had little effect on releasable Ca2+ of the ER, as cells treated with either active or inactive SMase D showed nearly identical Tg-triggered ER Ca2+ release (Fig. 2, A and B). Similar results were seen with an alternative known method to release intracellular Ca2+, namely, application of a very low concentration of the Ca2+ ionophore ionomycin (Dolmetsch and Lewis, 1994), further supporting the notion that SMase D has little or no direct effect on intracellular Ca2+ store content (Fig. 2, C and D). SMase D evidently suppresses the Fura-2 Ca2+ signal by lowering SOCE across the plasma membrane.
SMase D treatment suppresses SOCE in CD4-positive or CD8-positive T lymphocytes
C. pseudotuberculosis infections elicit responses from both CD4-positive and CD8-positive T lymphocytes (Ellis, 1988; Pépin et al., 1994). To evaluate the SMase D sensitivity of these T cell subtypes, T lymphocytes were labeled with fluorophore-conjugated antibodies to CD4 and CD8, and changes in intracellular Ca2+ levels were monitored with the ratiometric indicator Indo-1 during flow cytometry (Grynkiewicz et al., 1985). SMase D strongly suppressed the SOCE component of the Tg-triggered Indo-1 signal, in both the total T lymphocyte population and the CD4-positive and CD8-positive subsets (Fig. 3). Furthermore, these flow cytometry data, covering hundreds of thousands of cells per experiment, document an SMase D effect comparable to that initially observed with Fura-2 Ca2+ imaging, indicating that these previous observations were not made on a rare or peculiar cell subtype.
Effects of C1P and SMase C on SOCE
In principle, SMase D could suppress SOCE through an intracellular signaling cascade by generating the second messenger C1P (Fig. 1 A). We therefore tested, by means of Ca2+ imaging, whether C1P reproduces the inhibitory effect of SMase D on T lymphocyte SOCE. C1P treatment augmented the Tg-triggered ER store release (Fig. 4, B and D), but it did not suppress SOCE (Fig. 4, A and C). On the other hand, SMase D did not alter the Tg-triggered ER store release but suppressed SOCE (Figs. 1 and 2). Alternatively, SMase D may suppress SOCE by modifying critical interactions between sphingomyelin head groups and membrane proteins. In this case, SMase C may produce a similar effect even though it hydrolyzes sphingomyelin in a different way (generating ceramide instead of C1P; Fig. 1 A; Glenny and Stevens, 1935; Doery et al., 1963). Indeed, we found that SMase C also inhibited Tg-triggered SOCE in T lymphocytes (Fig. 4 E).
Effects of SMase D on Kv1.3 channels
We next turned our attention to certain channels known to support SOCE in T lymphocytes. SMase D could lower Ca2+ influx across the plasma membrane either by inhibiting the SOCE pathway or by reducing the activity of K+ channels such as KV1.3, thereby lowering the electric driving force for Ca2+ entry. KV1.3 undergoes two types of voltage-dependent gating processes: activation and (C-type) inactivation (Cahalan et al., 1985). Although not tested yet, SMase D may lower the fraction of open KV1.3 channels at steady state by affecting their inactivation, thus depolarizing the cell membrane. Orai1 current is sensitive to membrane depolarization because Orai1 channels conduct Ca2+ in an inwardly rectifying manner (Lewis and Cahalan, 1989). We therefore examined how SMase D might affect the steady-state inactivation of KV1.3 channels.
Fig. 5 (A and B) shows KV1.3 currents elicited with double-pulse protocols to examine the extent of steady-state inactivation at various voltages before and after SMase D treatment. Concomitant with its known effect on the G-V curve, SMase D treatment caused a −15-mV shift in the steady-state inactivation curve of KV1.3 in T cells (Fig. 5 C). For a given condition, the channels are expected to exhibit meaningful steady-state activity within the “triangular” window beneath each set of activation and inactivation curves. SMase D caused a hyperpolarizing shift of this window. Judging from this shift alone, SMase D would likely hyperpolarize the membrane potential and thus favor, not impair, Ca2+ influx. If so, agents that hyperpolarize the membrane potential by increasing overall K+ conductance would be expected to have a relatively smaller boosting effect on the Ca2+ signal in cells already treated with SMase D compared with controls.
We tested that prediction with the K+ ionophore valinomycin, insertion of which into the plasma membrane is expected to cause hyperpolarization. In T cells treated with inactive SMase D, as the antibody-triggered Ca2+ signal approached the plateau, the addition of valinomycin induced a sizable Ca2+ transient (Fig. 6 A). This transient was much smaller after SMase D treatment (Fig. 6, A and B). It was similarly decreased in Tg-stimulated cells (Fig. 6, C and D). These results are consistent with the K+ conductance of SMase D–treated cells being already greater than that of untreated cells. If SMase D’s action on KV1.3 channels indeed produces hyperpolarization, the effect is expected to persist even when KV1.3 channels are blocked, and the cause of the enzyme’s inhibitory effect on Ca2+ influx must be sought elsewhere. To demonstrate this, we blocked KV1.3 channels with PAP-1 (Schmitz et al., 2005). PAP-1 diminished the SOCE component of the Tg-triggered Fura-2 signal in control cells, reflecting the lower driving force for Ca2+ influx when KV1.3 is blocked (Fig. 6, E and F). As predicted, even during this block, SMase D treatment still suppressed the SOCE component of the Tg-triggered Fura-2 signal. Thus, the suppression of SOCE by SMase D cannot be explained by the enzyme’s effects on KV1.3 activity.
SMase D treatment suppresses Orai1 current
The above findings suggest that SMase D treatment reduces Ca2+ current through the SOCE pathway, within which Orai1 and Stim1 are essential components (Lewis and Cahalan, 1989; Liou et al., 2005; Roos et al., 2005; Feske et al., 2006; Vig et al., 2006; Zhang et al., 2006). In native T cells, the whole-cell Orai1 current is generally very small, 2–3 pA given a driving force of about −240 mV (Partiseti et al., 1994), too small for us to examine its SMase D sensitivity. We therefore examined Orai1 currents heterologously expressed in CHO cells. Currents were recorded in the presence of 1 µM Tg (to deplete the ER) and 20 mM [Ca2+]ext (to boost current). Fig. 7 A shows Orai1 currents elicited by repeatedly stepping membrane potential from the 0-mV holding potential to −100 mV. Each current trace represents the difference between a matching pair of currents recorded in 20 or 0.1 mM of extracellular Ca2+. The addition of SMase D to the extracellular solution suppressed the current (Fig. 7, A and B). We collected peak and steady-state currents at a range of test potentials before and after SMase D treatment (Fig. 7, C and D) and plotted them against voltage in Fig. 7 E. Judging from the I-V curves, SMase D suppresses current at all voltages. Moreover, we found that SMase D also suppressed native Orai1 currents in Jurkat cells, a leukemic T cell line that expresses larger native Orai1 current than normal T cells (Fig. 7 F). These results show that the SMase D treatment strongly suppresses Orai1 current.
SMase D treatment suppresses production of IL-2 and TNF
Many T cell functions, including the production of key cytokines, depend on Orai1 currents. Two cytokines are important in the present context: IL-2, a signaling molecule important for the growth, proliferation, and differentiation of T lymphocytes (Waldmann, 2006); and TNF, a cytokine that plays a crucial role in the formation and maintenance of granulomas in microbial infections (Aggarwal et al., 2012). Genetic defects in Orai1 impair the production of IL-2 and TNF in stimulated T cells (Feske et al., 2000). In light of these properties and the fact that SMase D inhibits Orai1 current, we tested whether SMase D also decreases production of these key cytokines. Using antibody-coated beads, we stimulated human T lymphocytes in culture to produce IL-2 and TNF. We assayed cell culture supernatants by ELISA for the presence of these cytokines and found that SMase D indeed suppresses IL-2 and TNF production (Fig. 8).
DISCUSSION
Host immune systems struggle to clear certain bacterial pathogens. The histology of infected tissues may reveal granulomas—aggregates of multinucleated macrophages, fibroblasts, and lymphocytes—with necrotic foci. These necrotic areas impart a cheese-like texture to gross organ specimens, and this caseous necrosis is characteristic of certain bacterial infections. Although granuloma formation limits dissemination of bacteria, the failure of host immune cells to clear out invading bacteria can ultimately result in latent or chronic infection. Studies of the caseous lymphadenitis of C. pseudotuberculosis show that experimental mutant strains lacking SMase D exhibit dramatically reduced infectiousness and cannot effectively disseminate in host organs. These observations strongly suggest that SMase D helps bacteria to evade host immunity. However, the molecular mechanism by which SMase D produces this effect remains unknown.
In a previous set of biophysical studies, the activation gating of many KV channels was shown to be sensitive to certain SMase enzymes, including bacterial SMase D (Ramu et al., 2006; Xu et al., 2008; Milescu et al., 2009; Combs et al., 2013). SMase D stimulates KV channels by removing the positively charged choline group from sphingomyelin molecules in the outer leaflet of the membrane, making it energetically easier for the positively charged voltage sensor to move to the extracellular side and the channel to become activated (Ramu et al., 2006; Xu et al., 2008; Milescu et al., 2009). In doing so, the enzyme shifts the Q-V and G-V curves in the hyperpolarized direction. SMase D can activate KV1.3 channels, including endogenous KV1.3 channels in human T lymphocytes (Combs et al., 2013). Physiologically, KV1.3 is best characterized in the T lymphocyte system where the channel helps maintain negative resting membrane potentials, ensuring a driving force for Ca2+ entry adequate to trigger various critical immune responses (Cahalan and Chandy, 2009). In fact, KV1.3 channel inhibitors are being developed as effective immunosuppressants to treat autoimmune diseases (Chandy et al., 2004). SMase D markedly increases KV1.3 activity of T lymphocytes near typical resting membrane potentials (−50 mV). Therefore, SMase D would be expected to promote Ca2+ entry and thus lymphocyte activation. However, we found here that SMase D markedly lowers Ca2+ entry in both CD4-positive and CD8-positive lymphocytes by suppressing SOCE. This finding cannot be readily explained by the enzyme’s previously documented stimulating effect on T lymphocyte KV1.3 channels. Given that SMase D treatment shifts the Q-V curve in the hyperpolarized direction, the enzyme could, in principle, lower SOCE by promoting steady-state inactivation, thus causing depolarization and thereby reducing the driving force for Ca2+ entry. However, we find no evidence supporting such a scenario. 
In fact, even after KV1.3 channels are blocked, SMase D still effectively lowers the Ca2+ signal. These findings imply that SMase D treatment negatively impacts Orai1 activity. Indeed, we find that SMase D strongly suppresses Ca2+ current through both native and heterologously expressed Orai1 channels.
Two types of lipid–channel interaction mechanisms are frequently invoked to explain the effects of lipids on ion channel function: lipid molecules acting indirectly as second messengers, and direct lipid–channel interactions. The best known example of indirect action is PIP2-mediated regulation (Huang et al., 1998). As for direct lipid–channel interactions, native lipid molecules remain bound to KcsA channels, even after the channels are solubilized in detergent-containing solutions and eventually crystallized (Zhou et al., 2001; Valiyaveetil et al., 2002). Furthermore, although common lipids can generally support the folding of KcsA, an appropriate type of lipid head group is critical for KcsA function (Valiyaveetil et al., 2002). In the present case, therefore, one possible scenario is that SMase-generated lipids act as second messengers, triggering an intracellular signaling cascade that leads to Orai1 inhibition. Some prior studies of SMase C have favored this mechanism. For example, SMase C, ceramide (the lipid product of SMase C), and sphingosine have been reported to suppress SOCE in Jurkat T cells (Breittmayer et al., 1994; Lepple-Wienhues et al., 1999; Church et al., 2005). However, other investigations report instead that ceramide and sphingosine boost SOCE in Jurkat T cells by triggering Ca2+ release from intracellular stores (Sakano et al., 1996; Colina et al., 2005b). In this study, we find that exogenous C1P (the lipid product of SMase D) fails to suppress SOCE and instead augments the release of store Ca2+ by Tg. A prior study of Jurkat T cells has documented Ca2+ store release by exogenous C1P (Colina et al., 2005a). Thus, whereas exogenous C1P affects store Ca2+ release, it fails to mimic SMase D’s suppression of SOCE. 
It is noteworthy that the C1P generated by exogenous SMase D does not appear to break down into other sphingolipids, as SMase D does not increase cell ceramide levels (Feldhaus et al., 2002), and the enzyme generates stoichiometric quantities of C1P from sphingomyelin (Subbaiah et al., 2003). Additionally, sphingomyelin levels recover 5 h after SMase C treatment but show little or no recovery even 20 h after SMase D treatment (Subbaiah et al., 2003).
Hydrolysis of sphingomyelin by SMase D has been shown to suppress the CFTR Cl− channel and to lower the activation voltages of voltage-gated ion channels (Ramu et al., 2006, 2007; Milescu et al., 2009; Combs et al., 2013). The SMase D effect on Kv channels persists even 24 h after treatment (Xu et al., 2008). Exposure to exogenous sphingomyelin or C1P fails to reproduce these functional effects in either channel type (Ramu et al., 2006, 2007). These findings suggest that the channel-impacting lipid molecules remain tightly bound to channel proteins, as they demonstrably do in the KcsA channel. By implication, SMase D affects channel function by modifying channel–sphingomyelin interactions in situ. This scenario can also explain why SMase D, which generates C1P, and SMase C, which generates a different lipid product (ceramide), both suppress Orai1 activity. Although C1P may, in principle, be broken down to ceramide or other sphingolipids, SMase D does not appear to raise the level of these lipid species, as mentioned above. In any case, further studies are needed to firmly establish the mechanism by which SMases suppress Orai1 activity.
Genetic defects in Orai1 have been identified as a cause of SCID (Partiseti et al., 1994; Feske et al., 2006). The Orai1 mutations in these SCID T cells result in dramatic functional impairments. For example, when stimulated, T cells from these SCID patients show decreased production of key cytokines like IL-2 and TNF (Feske et al., 2000). The former cytokine is crucial for T cell growth, proliferation, and differentiation (Waldmann, 2006), whereas the latter supports the formation and maintenance of granulomas in microbial infections (Aggarwal et al., 2012). In fact, the use of TNF inhibitors in the treatment of human inflammatory diseases is complicated by a risk of reactivation tuberculosis (Wallis, 2008). In the present study, we find that SMase D suppresses Orai1 current and also decreases production of IL-2 and TNF. Thus, C. pseudotuberculosis could, by means of inhibiting Orai1 with SMase D, create an acquired SCID-like condition, allowing the bacteria to avoid clearance by host T lymphocytes. Experimental worsening of this situation, for example, by antibody depletion of host T cells or their cytokines, turns a chronic disease into an acute and highly lethal one (Lan et al., 1998, 1999). A strikingly similar picture is seen in acquired immunodeficiency syndrome, where viral depletion of T cells enables the aggressive spread and high lethality of another organism known for caseous granulomas—M. tuberculosis, which has a C-type SMase activity (Vargas-Villarreal et al., 2003). Although suppression of Orai1 current can account for the mechanism by which SMase D helps to prevent bacterial clearance by the host immune system, SMase D may also affect other signaling processes in lymphocytes. Given that Orai1 channels are present in many other cell types (Hogan et al., 2010), SMase D may impact their functions as well.
The failure of human or animal immune systems either to adequately contain C. pseudotuberculosis within granulomas or to eradicate it from the body has a profoundly negative impact on host health. For example, in some regions of the world the prevalence of caseous lymphadenitis in livestock may be as high as 20%, causing substantial economic losses (Baird and Fontaine, 2007). As a zoonotic organism, C. pseudotuberculosis can be transmitted to humans. On the other hand, M. tuberculosis continues to pose a major threat to human health (Zumla et al., 2013). Worldwide, about two billion people are estimated to be infected with M. tuberculosis, and an estimated 1 in 14 new infections occurs in individuals infected with HIV (Lönnroth and Raviglione, 2008). Treatment of caseous lymphadenitis remains a challenge today, as both antibiotics and vaccines suffer from limited efficacy (Baird and Fontaine, 2007). Antibiotics were originally more successful in treating tuberculosis infections, but their widespread use has led to the emergence of highly resistant strains (Zumla et al., 2013). Thus, our discovery suggests neutralization of bacterial phospholipases as a new additional strategy to combat these recalcitrant infections, and highlights the importance of appropriate protein–lipid interactions in maintaining normal function of ion channels.
Acknowledgments
We thank S. Billington for SMase D cDNAs; M. Cahalan for Orai1 and Stim1 cDNAs; the Penn Flow Cytometry and Cell Sorting Resource Laboratory and the Penn Human Immunology Core for their services and expertise; and P. De Weer for review of the manuscript.
This study was supported by the National Institutes of Health: a research grant to Z. Lu from the National Institute of General Medical Science (RO1 GM55560) and a fellowship grant to D.J. Combs from the National Institute of Neurological Disorders and Strokes (F31 NS73070). Z. Lu is an investigator of the Howard Hughes Medical Institute.
The authors declare no competing financial interests.
Kenton J. Swartz served as editor.
References
Aggarwal, B.B., S.C. Gupta, and J.H. Kim. 2012. Historical perspectives on tumor necrosis factor and its superfamily: 25 years later, a golden journey. Blood. 119:651–665.
Baird, G.J., and M.C. Fontaine. 2007. Corynebacterium pseudotuberculosis and its role in ovine caseous lymphadenitis. J. Comp. Pathol. 137:179–210.
Breittmayer, J.P., A. Bernard, and C. Aussel. 1994. Regulation by sphingomyelinase and sphingosine of Ca2+ signals elicited by CD3 monoclonal antibody, thapsigargin, or ionomycin in the Jurkat T cell line. J. Biol. Chem. 269:5054–5058.
Cahalan, M.D., and K.G. Chandy. 2009. The functional network of ion channels in T lymphocytes. Immunol. Rev. 231:59–87.
Cahalan, M.D., K.G. Chandy, T.E. DeCoursey, and S. Gupta. 1985. A voltage-gated potassium channel in human T lymphocytes. J. Physiol. 358:197–237.
Chandy, K.G., H. Wulff, C. Beeton, M. Pennington, G.A. Gutman, and M.D. Cahalan. 2004. K+ channels as targets for specific immunomodulation. Trends Pharmacol. Sci. 25:280–289.
Church, L.D., G. Hessler, J.E. Goodall, D.A. Rider, C.J. Workman, D.A. Vignali, P.A. Bacon, E. Gulbins, and S.P. Young. 2005. TNFR1-induced sphingomyelinase activation modulates TCR signaling by impairing store-operated Ca2+ influx. J. Leukoc. Biol. 78:266–278.
Colina, C., A. Flores, C. Castillo, M.R. Garrido, A. Israel, R. DiPolo, and G. Benaim. 2005a. Ceramide-1-P induces Ca2+ mobilization in Jurkat T-cells by elevation of Ins(1,4,5)-P3 and activation of a store-operated calcium channel. Biochem. Biophys. Res. Commun. 336:54–60.
Colina, C., A. Flores, H. Rojas, A. Acosta, C. Castillo, M.R. Garrido, A. Israel, R. DiPolo, and G. Benaim. 2005b. Ceramide increase cytoplasmic Ca2+ concentration in Jurkat T cells by liberation of calcium from intracellular stores and activation of a store-operated calcium channel. Arch. Biochem. Biophys. 436:333–345.
Combs, D.J., H.G. Shin, Y. Xu, Y. Ramu, and Z. Lu. 2013. Tuning voltage-gated channel activity and cellular excitability with a sphingomyelinase. J. Gen. Physiol. 142:367–380.
DeCoursey, T.E., K.G. Chandy, S. Gupta, and M.D. Cahalan. 1984. Voltage-gated K+ channels in human T lymphocytes: a role in mitogenesis? Nature. 307:465–468.
Doery, H.M., B.J. Magnusson, I.M. Cheyne, and J. Sulasekharam. 1963. A phospholipase in staphylococcal toxin which hydrolyses sphingomyelin. Nature. 198:1091–1092.
Dolmetsch, R.E., and R.S. Lewis. 1994. Signaling between intracellular Ca2+ stores and depletion-activated Ca2+ channels generates [Ca2+]i oscillations in T lymphocytes. J. Gen. Physiol. 103:365–388.
Ellis, J.A. 1988. Immunophenotype of pulmonary cellular infiltrates in sheep with visceral caseous lymphadenitis. Vet. Pathol. 25:362–368.
Feldhaus, M.J., A.S. Weyrich, G.A. Zimmerman, and T.M. McIntyre. 2002. Ceramide generation in situ alters leukocyte cytoskeletal organization and β2-integrin function and causes complete degranulation. J. Biol. Chem. 277:4285–4293.
Feske, S., R. Draeger, H.H. Peter, K. Eichmann, and A. Rao. 2000. The duration of nuclear residence of NFAT determines the pattern of cytokine expression in human SCID T cells. J. Immunol. 165:297–305.
Feske, S., Y. Gwack, M. Prakriya, S. Srikanth, S.H. Puppel, B. Tanasa, P.G. Hogan, R.S. Lewis, M. Daly, and A. Rao. 2006. A mutation in Orai1 causes immune deficiency by abrogating CRAC channel function. Nature. 441:179–185.
Glenny, A.T., and N.F. Stevens. 1935. Staphylococcal toxins and antitoxins. J. Pathol. Bacteriol. 40:201–210.
Grynkiewicz, G., M. Poenie, and R.Y. Tsien. 1985. A new generation of Ca2+ indicators with greatly improved fluorescence properties. J. Biol. Chem. 260:3440–3450.
Hogan, P.G., R.S. Lewis, and A. Rao. 2010. Molecular basis of calcium signaling in lymphocytes: STIM and ORAI. Annu. Rev. Immunol. 28:491–533.
Huang, C.L., S. Feng, and D.W. Hilgemann. 1998. Direct activation of inward rectifier potassium channels by PIP2 and its stabilization by Gβγ. Nature. 391:803–806.
Imboden, J.B., and J.D. Stobo. 1985. Transmembrane signalling by the T cell antigen receptor. Perturbation of the T3–antigen receptor complex generates inositol phosphates and releases calcium ions from intracellular stores. J. Exp. Med. 161:446–456.
Isbister, G.K., and H.W. Fan. 2011. Spider bite. Lancet. 378:2039–2047.
Lan, D.T., S. Taniguchi, S. Makino, T. Shirahata, and A. Nakane. 1998. Role of endogenous tumor necrosis factor alpha and gamma interferon in resistance to Corynebacterium pseudotuberculosis infection in mice. Microbiol. Immunol. 42:863–870.
Lan, D.T., S. Makino, T. Shirahata, M., and A. Nakane. 1999. Tumor necrosis factor alpha and gamma interferon are required for the development of protective immunity to secondary Corynebacterium pseudotuberculosis infection in mice. J. Vet. Med. Sci. 61:1203–1208.
Lepple-Wienhues, A., C. Belka, T. Laun, A. Jekle, B. Walter, U. Wieland, M. Welz, L. Heil, J. Kun, G. Busch, et al. 1999. Stimulation of CD95 (Fas) blocks T lymphocyte calcium channels through sphingomyelinase and sphingolipids. Proc. Natl. Acad. Sci. USA. 96:13795–13800.
Lewis, R.S., and M.D. Cahalan. 1989. Mitogen-induced oscillations of cytosolic Ca2+ and transmembrane Ca2+ current in human leukemic T cells. Cell Regul. 1:99–112.
Liou, J., M.L. Kim, W.D. Heo, J.T. Jones, J.W. Myers, J.E. Ferrell Jr., and T. Meyer. 2005. STIM is a Ca2+ sensor essential for Ca2+-store-depletion-triggered Ca2+ influx. Curr. Biol. 15:1235–1241.
Lipsky, N.G., and R.E. Pagano. 1985. A vital stain for the Golgi apparatus. Science. 228:745–747.
Lönnroth, K., and M. Raviglione. 2008. Global epidemiology of tuberculosis: Prospects for control. Semin. Respir. Crit. Care Med. 29:481–491.
Matteson, D.R., and C. Deutsch. 1984. K channels in T lymphocytes: a patch clamp study using monoclonal antibody adhesion. Nature. 307:468–471.
McNamara, P.J., G.A., and J.G. Songer. 1994. Targeted mutagenesis of the phospholipase D gene results in decreased virulence of Corynebacterium pseudotuberculosis. Mol. Microbiol. 12:921–930.
McNamara, P.J., W.A. Cuevas, and J.G. Songer. 1995. Toxic phospholipases D of Corynebacterium pseudotuberculosis, C. ulcerans and Arcanobacterium haemolyticum: cloning and sequence homology. Gene. 156:113–118.
Milescu, M., F. Bosmans, S. Lee, A.A. Alabi, J.I. Kim, and K.J. Swartz. 2009. Interactions between lipids and voltage sensor paddles detected with tarantula toxins. Nat. Struct. Mol. Biol. 16:1080–1085.
Partiseti, M., F. Le Deist, C. Hivroz, A. Fischer, H. Korn, and D. Choquet. 1994. The calcium current activated by T cell receptor and store depletion in human lymphocytes is absent in a primary immunodeficiency. J. Biol. Chem. 269:32327–32335.
Pépin, M., J.C. Pittet, M. Olivier, and I. Gohin. 1994. Cellular composition of Corynebacterium pseudotuberculosis pyogranulomas in sheep. J. Leukoc. Biol. 56:666–670.
Ramu, Y., Y. Xu, and Z. Lu. 2006. Enzymatic activation of voltage-gated potassium channels. Nature. 442:696–699.
Ramu, Y., Y. Xu, and Z. Lu. 2007. Inhibition of CFTR Cl− channel function caused by enzymatic hydrolysis of sphingomyelin. Proc. Natl. Acad. Sci. USA. 104:6448–6453.
Roos, J., P.J. DiGregorio, A.V. Yeromin, K. Ohlsen, M. Lioudyno, S. Zhang, O. Safrina, J.A. Kozak, S.L. Wagner, M.D. Cahalan, et al. 2005. STIM1, an essential and conserved component of store-operated Ca2+ channel function. J. Cell Biol. 169:435–445.
Sakano, S., H. Takemura, K., K. Imoto, M. Kaneko, and H. Ohshika. 1996. Ca2+ mobilizing action of sphingosine in Jurkat human leukemia T cells. Evidence that sphingosine releases Ca2+ from inositol trisphosphate- and phosphatidic acid-sensitive intracellular stores through a mechanism independent of inositol trisphosphate. J. Biol. Chem. 271:11148–11155.
Schmitz, A., A. Sankaranarayanan, P. Azam, K. Schmidt-Lassen, D. Homerick, W. Hänsel, and H. Wulff. 2005. Design of PAP-1, a selective small molecule Kv1.3 blocker, for the suppression of effector memory T cells in autoimmune diseases. Mol. Pharmacol. 68:1254–1270.
Souček, A., C. Michalec, and A. Soucková. 1971. Identification and characterization of a new enzyme of the group “phospholipase D” isolated from Corynebacterium ovis. Biochim. Biophys. Acta. 227:116–128.
Subbaiah, P.V., S.J. Billington, B.H. Jost, J.G. Songer, and Y. Lange. 2003. Sphingomyelinase D, a novel probe for cellular sphingomyelin: effects on cholesterol homeostasis in human skin fibroblasts. J. Lipid Res. 44:1574–1580.
Valiyaveetil, F.I., Y. Zhou, and R. MacKinnon. 2002. Lipids in the structure, folding, and function of the KcsA K+ channel. Biochemistry. 41:10771–10777.
Vargas-Villarreal, J., B.D. Mata-Cárdenas, M. Deslauriers, F.D. Quinn, J. Castro-Garza, H.G. Martínez-Rodríguez, and S. Said-Fernández. 2003. Identification of acidic, alkaline, and neutral sphingomyelinase activities in Mycobacterium tuberculosis. Med. Sci. Monit. 9:BR225–BR230.
Vig, M., A. Beck, J.M. Billingsley, A. Lis, S. Parvez, C. Peinelt, D.L. Koomoa, J. Soboloff, D.L. Gill, A. Fleig, et al. 2006. CRACM1 multimers form the ion-selective pore of the CRAC channel. Curr. Biol. 16:2073–2079.
Waldmann, T.A. 2006. The biology of interleukin-2 and interleukin-15: implications for cancer therapy and vaccine design. Nat. Rev. Immunol. 6:595–601.
Wallis, R.S. 2008. Tumour necrosis factor antagonists: structure, function, and tuberculosis risks. Lancet Infect. Dis. 8:601–611.
Xu, Y., Y. Ramu, and Z. Lu. 2008. Removal of phospho-head groups of membrane lipids immobilizes voltage sensors of K+ channels. Nature. 451:826–829.
Zhang, S.L., A.V. Yeromin, X.H. Zhang, Y. Yu, O. Safrina, A. Penna, J. Roos, K.A. Stauderman, and M.D. Cahalan. 2006. Genome-wide RNAi screen of Ca2+ influx identifies genes that regulate Ca2+ release-activated Ca2+ channel activity. Proc. Natl. Acad. Sci. USA. 103:9357–9362.
Zhou, Y., J.H. Morais-Cabral, A. Kaufman, and R. MacKinnon. 2001. Chemistry of ion coordination and hydration revealed by a K+ channel–Fab complex at 2.0 Å resolution. Nature. 414:43–48.
Abbreviations used in this paper:
• C1P, ceramide-1-phosphate
• CHO, Chinese hamster ovary
• IL-2, interleukin-2
• IP3, inositol 1,4,5-trisphosphate
• PAP-1, 5-(4-phenoxybutoxy)psoralen
• SCID, severe combined immunodeficiency
• SMase, sphingomyelinase
• SOCE, store-operated Ca2+ entry
• TCR, T cell receptor
• Tg, thapsigargin
• TNF, tumor necrosis factor α
https://brilliant.org/problems/from-the-earth-to-the-moon/

# From the earth to the moon
Discrete Mathematics
Let $$A$$ be a set of $$300$$ distinct points given in the plane. Let $$B$$ be the set of midpoints of all segments with two distinct endpoints in $$A$$. What is the smallest possible size of $$B$$?
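A natural configuration to test is $$n$$ equally spaced collinear points, for which distinct midpoints correspond one-to-one to distinct sums $$i + j$$ with $$i < j$$. The brute-force Python check below (an illustrative sketch, not part of the original problem page) counts them for $$n = 300$$:

```python
# For collinear points at integer positions 0..n-1, the midpoint of
# points i and j is (i + j) / 2, so distinct midpoints correspond
# one-to-one to distinct sums i + j with i < j.
n = 300
midpoint_sums = {i + j for i in range(n) for j in range(i + 1, n)}
print(len(midpoint_sums))  # sums run from 1 to 2n - 3, so this prints 597
```

Equally spaced collinear points thus achieve $$2n - 3 = 597$$ midpoints; whether any configuration can do better is the crux of the problem.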
https://gamedev.stackexchange.com/questions/48086/vector3-vs-vector2-performance-usage/48087

# Vector3 vs. Vector2 - performance, usage?
I'm currently playing around with XNA, and creating a simple 2D platformer. I was thinking of adding multiple layers to make it a bit more of a challenge.
Instead of having a Vector2 for my positions, I now use a Vector3, solely to use its Z as layer depth. However, since you can't use operators between Vector2 and Vector3 for some unknown reason [1], I ended up changing all other Vector2s in my game, such as acceleration, speed and offset, so I can do things like position += offset without errors.
I also changed my rotation variable from float to Vector3, and I use the Z value to rotate my textures. I'm planning to use the X and Y to scale-flip my textures, so you get the Super Paper Mario effect.
However, after changing all these Vector2s into Vector3s, I felt a little bad about it. How does this affect the performance of games? I know I shouldn't have to worry about performance in my little platformer game, but I'm just curious about it.
Is there any notable performance difference between Vector2s and Vector3s, for example when adding or multiplying them, or when calling Normalize, Transform, or Distance?
[1] Just a side question, why are there no operators for calculations between Vector3 and Vector2?
Is there any notable performance difference between Vector2s and Vector3s, for example when adding or multiplying them, or when calling Normalize, Transform, or Distance?
Yes, you have one more coordinate so you will use more CPU cycles.
But it is very unlikely that it will ever give you any trouble. XNA 4 uses SIMD extensions for vector math (EDIT: on Windows Phone only), so the implementation is well optimized (on that platform). Unless you're doing very heavy computation, it is very unlikely to ever cause you trouble. You do need Vector3s for your positions because you're now doing 3D (or 2.5D...), so please don't do any premature optimization. This is 97% evil¹.
Just a side question, why are there no operators for calculations between Vector3 and Vector2?
Because it makes no sense, mathematically. What would you expect to come out from such calculations? For instance what should happen if you try to add a Vector3 and a Vector2:
[x1, y1, z1] + [x2, y2] = [x1 + x2, y1 + y2, z1] or [x1, y1 + x2, z1 + y2] ?
In this case, you'll typically need to determine by yourself what you want as a third coordinate for the Vector2, and where you wish to add it. For instance this solves the ambiguity:
[x1, y1, z1] + [x2, y2, 0] = [x1 + x2, y1 + y2, z1]
Now it's possible that some parts of your gameplay work only in 2D. If there are cases where you only need 2D coordinates, and if the computing does get really heavy (e.g. 2D physics), you can stick to Vector2s in that specific part of the code to save some precious cycles. You can then easily switch between 2D and 3D coordinates when you need to (e.g. get a scene position from a 2D physics position, or the other way around):
E.g. from Vector2 to Vector3 using this constructor:
Vector2 v2;
Vector3 v3 = new Vector3(v2, someDepthValue);
Or from Vector3 to Vector2 using this constructor:
Vector3 v3;
Vector2 v2 = new Vector2(v3.X, v3.Y);
¹ Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
• Like I said, I know I don't have to worry about performance so soon, but it was just my curiosity that made me post this question. I guess you're right about the ambiguity. Thanks for the clear answer :) – Ruud Lenders Jan 25 '13 at 12:27
• @RuudLenders I've added details on how you could proceed if you ever face performance issues. – Laurent Couvidou Jan 25 '13 at 12:43
• I've edited your answer to include a very important point: SIMD is only used by XNA on Windows Phone! The .NET runtime on PC - and therefore XNA itself - does not support SIMD. The equivalent is also not supported on Xbox 360. – Andrew Russell Jan 26 '13 at 9:58
• (On PC and Xbox 360, Understanding XNA Framework Performance is still the applicable guide for optimising your vector operations.) – Andrew Russell Jan 26 '13 at 10:25
• @AndrewRussell Thanks for pointing that out! – Laurent Couvidou Jan 26 '13 at 11:27
You're trying to optimize prematurely. Most of the operations you mentioned (Normalize, Transform, Distance) are pretty much identical to what Vector2 does; if you look at their code you will notice that they are practically the same. The only difference is that Vector3 has a third axis. Performance-wise, the cost should be trivial compared to Vector2.
As for your side question:
Because you can't multiply matrices/row-vectors/column-vectors that have different sizes.
One of the biggest performance effects of using Vector3 unnecessarily, instead of Vector2, is the 50% increase in size and the effect that has on cache.
That unnecessary extra data needs to be loaded into the CPU cache from main memory. This is sloooow.
In addition, by loading in this unnecessary data, you increase the chance that you are pushing out useful data that then immediately has to be loaded back into cache.
In a modestly tight loop, the cache effects will overwhelm any CPU effects of doing extra operations.
Also, it's faster to add the elements directly (due to various quirks of .NET), so if you're micro-optimising you won't be using the vector operations anyway. For example, if you only need to add the first two elements of a vector, you could do this:
v1.X += v2.X; v1.Y += v2.Y;
But these kinds of performance considerations are only really applicable to things like particle engines, physics engines, and so on. So don't worry too much!
• So is manual inlining still the fastest way to go even with the added support of SIMD? – Mikael Högström Jan 29 '13 at 19:00
• @MikaelHögström SIMD is almost certainly going to be faster - but it is only available on the Windows Phone platform (see the edit and comments I made to Laurent Couvidou's answer). – Andrew Russell Jan 30 '13 at 1:52
• Ahh right thanks! – Mikael Högström Jan 30 '13 at 8:18
https://www.arxiv-vanity.com/papers/hep-th/9307038/ |
IASSNS-HEP-93/40, PUPT-1397
Computing The Complete Massless Spectrum Of A Landau-Ginzburg Orbifold
Shamit Kachru Research supported in part by an NSF Graduate Fellowship
Joseph Henry Laboratories
Princeton University
Princeton, NJ 08544
Edward Witten Research supported in part by NSF Grant PHY92-45317
School of Natural Sciences
Olden Lane
Princeton, NJ 08540
We develop techniques to compute the complete massless spectrum in heterotic string compactification on N=2 supersymmetric Landau-Ginzburg orbifolds. This includes not just the familiar charged fields, but also the gauge singlets. The number of gauge singlets can vary in the moduli space of a given compactification and can differ from what it would be in the large radius limit of the corresponding Calabi-Yau. Comparison with exactly soluble Gepner models provides a confirmation of our results at Gepner points. Our methods carry over straightforwardly to Landau-Ginzburg models.
July 1993
1. Introduction
Landau-Ginzburg models have long been used as mean field models of critical phenomena. More recently it was realized that in two dimensions, much sharper results can be extracted from them. For instance, minimal conformal field theories can be described as Landau-Ginzburg models, as shown for bosonic theories by Zamolodchikov [1]; this was extended for $N=1$ supersymmetry in [2] and for $N=2$ in [3][4][5].
The $N=2$ case has many special simplifications related in part to the non-renormalization theorems for the superpotential. For instance, for $N=2$ it is possible to calculate the minimal model characters directly from the Landau-Ginzburg model [6]. Also, for $N=2$, certain orbifolds of Landau-Ginzburg models have a beautiful and unexpected relation to Calabi-Yau sigma models [4][7][8][9][10]. The Landau-Ginzburg model describes a certain “point,” or really a certain submanifold, in the Calabi-Yau moduli space.
The $N=2$ models also have particularly interesting physical applications. $N=2$ theories with the appropriate central charge can be used to construct compactifications of the heterotic string, and thereby to build models of particle physics, with unbroken space-time supersymmetry. Landau-Ginzburg models can in particular be used to build such compactifications – giving specializations of Calabi-Yau models [11][12].
These specializations are technically natural, in the usual sense of particle physics, because of enhanced symmetries (involving twist fields; see [10], §3.4, for an explicit explanation). They are interesting because of calculable stringy effects (such as the enhanced symmetries or a deviation of the number of massless particles from what it would be in the field theory limit).
Also, $(0,2)$ Landau-Ginzburg models are special cases of $(0,2)$ Calabi-Yau models in which instanton corrections are turned off (see [10], §3.4). As the instanton corrections are the usual obstruction to forming $(0,2)$ deformations of sigma models [13], it would appear likely that $(0,2)$ Landau-Ginzburg models (which are easily constructed [10], §6) have conformally invariant infrared fixed points. This is then an interesting case in which $(0,2)$ conformally invariant models should be accessible for fairly detailed study. $(0,2)$ models are of course of considerable interest because of their use in constructing models of particle physics with effective four dimensional gauge groups more realistic than $E_6$.
Except for Gepner models, which are more or less fully constructed algebraically, most studies of these models have focussed on the chiral primary states. Those states enter in many beautiful constructions and among other things determine the spectrum of massless charged particles. However, the massless gauge singlets are not (all) determined by the chiral primary states, and the notion of chiral primaries does not carry over to $(0,2)$ models. (The two facts are related: the massless gauge singlets that do not come from chiral primaries are represented by vertex operators that break $(2,2)$ supersymmetry down to $(0,2)$.) Our intention in this paper is to develop methods for computing the complete massless spectrum of Landau-Ginzburg models, both $(2,2)$ and $(0,2)$ models, and including all of the gauge singlets.
In §2 we describe the necessary facts and methods. In §3 we study in detail a familiar model – the quintic. One virtue of this model is that (at a special point in the parameter space) the results can be compared to known results about the corresponding Gepner model. It should be clear, however, that our methods carry over without essential change to arbitrary Landau-Ginzburg models, including $(0,2)$ models.
For Calabi-Yau manifolds, one can identify the particles which are massless in the field theory limit by computing suitable cohomology groups; but difficult questions then arise, in general, of whether instanton corrections might give non-vanishing (but exponentially small in the field theory limit) masses to some of these states. For Landau-Ginzburg models, however, one can argue – as we will do in §2.1 – that our results are actually exact. Intuitively, this is in keeping with the fact that the Landau-Ginzburg models have no instantons.
2. Background And Methods
We will work in $(2,2)$ superspace with coordinates $x^\pm$, $\theta^\pm$, $\overline\theta^\pm$ (our conventions follow those of [10]). In an $N=2$ superconformal theory, there are four supersymmetry charges $Q_\pm$ and $\overline Q_\pm$, where the subscripts $-$ and $+$ specify left- and right-movers on the worldsheet. We will use the terms left-moving and right-moving somewhat loosely to describe modes that in the conformally invariant limit are left-moving or right-moving. The right moving supersymmetries satisfy
\[ Q_+^2 = \overline Q_+^2 = 0, \qquad \{Q_+,\, \overline Q_+\} = 2L_0^+ \]
where $L_0^+$ is the coefficient of the zero mode in the Laurent expansion of the right moving stress-energy tensor.
The worldsheet “matter” that we are interested in will be chiral superfields $\Phi_i$. Such fields satisfy
\[ [\overline D_+, \Phi] = [\overline D_-, \Phi] = 0 \]
where $\overline D_+$ and $\overline D_-$ are known as superspace covariant derivatives; the complex conjugates $\overline\Phi_i$ of the $\Phi_i$'s are anti-chiral fields that satisfy equation (2.2) with $\overline D_\pm$ replaced by $D_\pm$. The chiral superfields have an expansion in terms of component fields
\[ \Phi(x,\theta) = \phi(y) + \sqrt{2}\,\theta^\alpha \psi_\alpha(y) + \theta^\alpha\theta_\alpha F(y). \]
Recall that the most general renormalizable Lagrangian for an $N=2$ supersymmetric theory with chiral superfields $\Phi_i$ and their anti-chiral conjugates has the form
\[ L_1 = \int d^2x\, d^4\theta\; K(\Phi, \overline\Phi) - \int d\theta^+\, d\theta^-\; W(\Phi) - \int d\overline\theta^+\, d\overline\theta^-\; \overline W(\overline\Phi) \]
where $K$ is called the Kahler potential (its derivatives determine the metric on target space; the target spaces of models constructed from chiral superfields are always Kahler manifolds) and $W$ is a holomorphic function of the fields, called the superpotential; we will choose $K$ to have the form $K = \sum_i \overline\Phi_i \Phi_i$, corresponding to a flat metric. After performing the $\theta$ integrals and integrating out the auxiliary fields, the Lagrangian becomes
\[ L_1 = \int d^2x\, \Big[ \sum_i \Big( -\partial^\alpha\overline\phi_i\,\partial_\alpha\phi_i + i\,\overline\psi_{-,i}(\partial_0+\partial_1)\psi_{-,i} + i\,\overline\psi_{+,i}(\partial_0-\partial_1)\psi_{+,i} - \Big|\frac{\partial W}{\partial\phi_i}\Big|^2 \Big) - \sum_{i,j}\Big( \frac{\partial^2 W}{\partial\phi_i\,\partial\phi_j}\,\psi_{-,i}\psi_{+,j} + \frac{\partial^2 \overline W}{\partial\overline\phi_i\,\partial\overline\phi_j}\,\overline\psi_{+,j}\overline\psi_{-,i} \Big) \Big]. \]
The superpotential is said to be quasi-homogeneous if for some integers $n_i$ and $d$ one has $W(\lambda^{n_i}\Phi_i) = \lambda^d W(\Phi_i)$. Such quasi-homogeneity ensures the existence of left- and right-moving $R$-symmetries that play an important role. The models that are believed to be related to Calabi-Yau models are actually not Landau-Ginzburg models as introduced above but orbifolds in which one projects onto states with integral charges. For future use, it is convenient to set
\[ \alpha_i = \frac{n_i}{d}. \]
The theory described by (2.4) is believed to flow in the infrared to a conformal field theory with central charge
\[ \hat c = \sum_i (1 - 2\alpha_i). \]
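As a concrete check of this formula, consider the quintic example studied in §3. The numerical setup below is an assumption matching the standard quintic superpotential $W = \sum_i \Phi_i^5$: five fields, each with $\alpha_i = n_i/d = 1/5$.

```python
# Sketch: evaluate c_hat = sum_i (1 - 2*alpha_i) for an assumed quintic-type
# model with five chiral superfields, each carrying alpha_i = 1/5.
alphas = [1.0 / 5.0] * 5
c_hat = sum(1.0 - 2.0 * a for a in alphas)
print(c_hat)  # 3.0 -- the value appropriate to a Calabi-Yau threefold
```

Each field contributes $1 - 2/5 = 3/5$, so five fields give $\hat c = 3$.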
In applications in string theory, it is necessary to consider the model formulated in four sectors – (R,R), (NS,R), (R,NS), and (NS,NS), where R and NS refer to Ramond and Neveu-Schwarz boundary conditions; the two entries give the boundary conditions for left-movers and for right-movers. In applications to Type II superstrings, one would have (in models of this particular type) space-time supersymmetries coming from both left- and right-movers. These supersymmetries determine the spectrum in all four sectors in terms of the spectrum in, say, the (R,R) sector. In practice, this means that to identify massless particles in space-time, it suffices to find the (R,R) ground states. These have very special properties which have been much exploited in the literature on Landau-Ginzburg models and their applications. Their (NS,NS) cousins are represented by vertex operators that preserve (2,2) world-sheet supersymmetry.
We are actually interested in using the same models to describe compactifications of the heterotic string. In this case, we supplement (2.4) by ten left-moving free fermions
\[ L_2 = \int d^2x \sum_{I=1}^{10} i\,\lambda_I\,(\partial_0+\partial_1)\,\lambda_I \]
and extra degrees of freedom representing an additional $E_8$ current algebra. The $\lambda_I$ are given the same NS or R boundary conditions as the left-moving part of (2.4). The combined Lagrangian is expected (as in Calabi-Yau compactification) to give an unbroken $E_6\times E_8$ gauge group in space-time.
Space-time supersymmetries are now derived from right-movers only. Therefore, there are two sectors that must be studied – (R,R) and (NS,R). The study of the (NS,R) model is one of the main novelties in this paper. We are no longer interested only in states with a simple relation to (R,R) ground states, so new methods must be developed.
In fact, in the (NS,R) sector, there are massless gauge singlet states that are represented by vertex operators that (even if one suppresses the $\lambda$'s) break $(2,2)$ world-sheet supersymmetry down to $(0,2)$ supersymmetry. These are the modes that, in compactification on a Calabi-Yau manifold $M$, arise from $H^1(M, \mathrm{End}\,T)$. For some computations of this cohomology group in Calabi-Yau models see [14][15][16][17]. Understanding these modes in the context of Landau-Ginzburg models is one of our main goals in this paper. In the process of doing this, we will automatically develop the techniques needed to compute the complete massless spectrum in more general (0,2) Landau-Ginzburg models.
An $SO(10)$ symmetry acting on the ten $\lambda_I$'s is manifest in the above Lagrangian. $SO(10)$ is not a maximal subgroup of $E_6$, which instead contains an $SO(10)\times U(1)$ factor. The $U(1)$ generator is simply the left-moving $R$-current – call it $J_L$ – of the Landau-Ginzburg theory with Lagrangian $L_1$. The rest of $E_6$ is harder to see explicitly; the additional currents are twist fields coming from states in the left-moving Ramond sector.
2.1. The Born-Oppenheimer Approximation
Because we are looking for massless states in space-time, we can set the space-time momentum to zero and look for worldsheet wavefunctions which have only polynomial dependence on the lowest oscillator modes. In sectors with negative vacuum energy, we have to keep the lowest excited modes of the various fields. This truncation of the theory to a small finite number of modes, a worldsheet “Born-Oppenheimer” approximation, has been applied before in a string theory context in [18] and [19]. However, the focus there was on sigma models. In the Landau-Ginzburg context, it is easy to be more explicit.
What is the degree of validity of the Born-Oppenheimer approximation? We will argue that for identifying the massless modes it is exact.
We will denote the right- and left-moving world-sheet Hamiltonians as $L_0^+$ and $L_0^-$. In the (R,R) and (NS,R) sectors that we will study, physical states have $L_0^+ = 0$; for massless particles on-shell, the “space-time” part of the string does not contribute to $L_0^+$, so we can consider $L_0^+$ to be the right-moving Hamiltonian of the “internal” theory only. In a right-moving Ramond sector, there are two right-moving global supersymmetries, say $Q_+$ and $\overline Q_+$, with
\[ \{Q_+,\, \overline Q_+\} = 2L_0^+, \qquad Q_+^2 = \overline Q_+^2 = 0. \]
As in Hodge theory, it follows that the kernel of $L_0^+$ is the same as the cohomology of $\overline Q_+$.
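The Hodge-theoretic step can be made explicit (a standard argument, under the assumption of unitarity, so that $Q_+ = \overline Q_+^\dagger$): for any state $|\psi\rangle$ with $L_0^+|\psi\rangle = 0$,

```latex
0 = \langle\psi|\,2L_0^+\,|\psi\rangle
  = \langle\psi|\,\{Q_+,\overline{Q}_+\}\,|\psi\rangle
  = \big\|\,\overline{Q}_+|\psi\rangle\,\big\|^2
  + \big\|\,Q_+|\psi\rangle\,\big\|^2 ,
```

so a zero-energy state is annihilated by both supercharges, and conversely each $\overline Q_+$ cohomology class contains such a harmonic representative.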
This simple fact is the starting point for all our computations: we identify the massless states with the cohomology of $\overline Q_+$ (or actually the subspace of that cohomology consisting of states with the correct eigenvalue of $L_0^-$). This is a great advantage because – due to the simple properties of triangular matrices – cohomology is usually highly computable.
In the particular case at hand, the simplification comes mostly because the cohomology is naturally invariant under a rescaling of the superpotential $W \to \epsilon W$. To be more precise, under $W \to \epsilon W$, the cohomology group of given right-moving charge is multiplied by a power of $\epsilon$, because of the scaling introduced momentarily. The reason for this is that, up to a rescaling of the fields by
\[ \Phi_i \to \epsilon^{-\alpha_i}\,\Phi_i, \]
$W \to \epsilon W$ is equivalent to a certain modification of the kinetic energy. The whole kinetic energy is of the form $\{\overline Q_+, \cdot\}$, so the modification of the kinetic energy induced by the transformation (2.10) does not affect the cohomology. This means that in computing the cohomology, we can set $W$ to zero except when it is needed to lift degeneracies that are otherwise present. That fact is the basis for all of our calculations.
It is straightforward to write down the $\overline Q_+$ operator of the Landau-Ginzburg model:
\[ \overline Q_+ = i\sqrt{2}\int dx^1\, \Big( i\,\overline\psi_{+,i}(\partial_0+\partial_1)\phi_i + \frac{\partial W}{\partial\phi_i}\,\psi_{-,i} \Big) \]
An additional simplification arises (as in [6]) because of the principle stated in the last paragraph. Taking $W \to \epsilon W$ and trying to compute the cohomology perturbatively in $\epsilon$, the first step is to compute the cohomology of the part of $\overline Q_+$ that is independent of $W$:
\[ \overline Q_{+,R} = i\sqrt{2}\int dx^1\, \big( i\,\overline\psi_{+,i}(\partial_0+\partial_1)\phi_i \big)\,. \]
The cohomology of this operator is the subspace of the full Hilbert space consisting of states in which the right-moving oscillators are all in their ground states and which depend holomorphically on the zero modes of the $\phi_i$; moreover the zero modes of $\psi_+$ and $\overline\psi_+$ can be omitted. This leaves a smaller Hilbert space, consisting of left-moving oscillators, zero modes of $\psi_-$ and $\overline\psi_-$, and holomorphic functions of the boson zero modes. Let us call this the left-moving Hilbert space $\mathcal H_L$.
The next step, analogous to degenerate perturbation theory in quantum mechanics, is to compute the cohomology of the “perturbation”
\[ \overline Q_{+,L} = i\sqrt{2}\int dx^1\, \frac{\partial W}{\partial\phi_i}\,\psi_{-,i} \]
in $\mathcal H_L$. In quantum mechanics this would usually be only the beginning of a systematic expansion; but in the present situation we are actually at this stage finished (at least to all finite orders), because of the triangular nature of cohomology and the simplicity of the cohomology of the $\overline Q_{+,R}$ operator. The requisite argument is a standard “zig-zag” argument, as in [20], p. 95, using the following facts. Let $F$ be the operator that assigns the value $1$ to $\overline\psi_+$, $-1$ to $\psi_+$, and 0 to other fields. Then $\overline Q_{+,R}$ raises $F$ by one unit, $\overline Q_{+,L}$ commutes with $F$, and the cohomology of $\overline Q_{+,R}$ is zero except at one value of $F$.
Let us use these facts to prove that the $\overline Q_+$ cohomology is naturally isomorphic to the cohomology of $\overline Q_{+,L}$ acting in the $\overline Q_{+,R}$ cohomology (which is isomorphic to $\mathcal H_L$). So to begin with we have a state $|\alpha_0\rangle$ that is annihilated by $\overline Q_{+,R}$ and annihilated by $\overline Q_{+,L}$ modulo the image of $\overline Q_{+,R}$. We can assume that $|\alpha_0\rangle$ has $F = 0$ since the $\overline Q_{+,R}$ cohomology vanishes for other values of $F$. The fact that $|\alpha_0\rangle$ is annihilated by $\overline Q_{+,L}$ modulo the image of $\overline Q_{+,R}$ means that there is some $|\alpha_{-1}\rangle$, necessarily of $F = -1$, such that
\[ \overline Q_{+,L}|\alpha_0\rangle = -\,\overline Q_{+,R}|\alpha_{-1}\rangle\,. \]
Then $\overline Q_+\big(|\alpha_0\rangle + |\alpha_{-1}\rangle\big) = \overline Q_{+,L}|\alpha_{-1}\rangle$. Moreover
\[ \overline Q_{+,R}\big(\overline Q_{+,L}|\alpha_{-1}\rangle\big) = -\,\overline Q_{+,L}\,\overline Q_{+,R}|\alpha_{-1}\rangle = \overline Q_{+,L}\,\overline Q_{+,L}|\alpha_0\rangle = 0 \]
where the first step uses $\{\overline Q_{+,R}, \overline Q_{+,L}\} = 0$, the second step uses (2.14), and the last step uses $\overline Q_{+,L}^2 = 0$. $\overline Q_{+,L}|\alpha_{-1}\rangle$ therefore represents a state in the cohomology of $\overline Q_{+,R}$ at $F = -1$; since the cohomology vanishes except at $F = 0$, this state is cohomologically trivial and there is a state $|\alpha_{-2}\rangle$ of $F = -2$ such that $\overline Q_{+,R}|\alpha_{-2}\rangle = -\overline Q_{+,L}|\alpha_{-1}\rangle$. Continuing in this way, one inductively solves the equations
\[ \overline Q_{+,R}|\alpha_{-n-1}\rangle = -\,\overline Q_{+,L}|\alpha_{-n}\rangle\,. \]
The sum $|\alpha\rangle = \sum_{n\geq 0}|\alpha_{-n}\rangle$ is then the desired state annihilated by $\overline Q_+$. In defining $|\alpha_{-n}\rangle$ and obeying the equations up to the first $n$ terms we have shown that the state which has zero energy in the Born-Oppenheimer approximation has zero energy up to order $n$ in perturbation theory in the superpotential $W$.
The question of whether the series converges is more subtle, but intuitively this should follow from the super-renormalizability of the Landau-Ginzburg model. The state $|\alpha_{-n}\rangle$ has $F = -n$, and as $F$ is carried only by fermions, $|\alpha_{-n}\rangle$ is a state with very high energy, roughly at least the energy of a degenerate fermi gas with fermi energy $n$. For such high energy states, $\overline Q_{+,R}$ dominates over $\overline Q_{+,L}$ because of being constructed from a current of higher dimension (containing an extra derivative), and in the relation (2.16), it should be possible to choose $|\alpha_{-n-1}\rangle$ to be much smaller than $|\alpha_{-n}\rangle$ in norm, ensuring convergence of the series. A rigorous proof of this assertion would be interesting.
The cohomology can be decomposed according to the action of certain operators that commute with $\overline Q_+$ or have simple commutation relations with it. In fact, $\overline Q_+$ commutes with the left-moving charge but raises the right-moving charge by one unit. (The normalization of the right-moving charge is convention-dependent; our conventions for charges are given in §2.2.) $\overline Q_+$ also obviously commutes with the $\lambda_I$'s, so states can be labeled by the number of $\lambda$ oscillators.
Somewhat less obviously [6], in the Landau-Ginzburg theory (2.4), one can find an $N=2$ superconformal algebra of left-moving fields that commute with $\overline Q_+$. In components, one has
In (2.17), $\phi_i$, $\psi_{\pm,i}$, etc., are components in the expansion (2.3) of the superfields $\Phi_i$. Hopefully, these operators converge in the infrared to the left-moving $N=2$ algebra of the expected conformally invariant fixed point theory. The central charge of the algebra (2.17) is given by (2.7). With a fairly obvious renaming of the fields, this realization of the $N=2$ algebra was first given in [[21]], where the operator also appeared, with a somewhat different rationale.
There are several reasons that it is convenient to have these operators. First of all, physical states, in addition to being annihilated by $\overline Q_+$, must have the appropriate eigenvalue of $L_0^-$. So among other things, we need to be able to compute the $L_0^-$ quantum number of the Fock ground state in each sector of Hilbert space.
Furthermore, to know which states are $SO(10)$ singlets, which belong to $\mathbf{16}$'s of $SO(10)$, and which to $\overline{\mathbf{16}}$'s, we need to work out the $U(1)$ quantum numbers, so in particular we need to compute the charge of the Fock ground state. We will return to these matters later.
A subtler reason for needing (2.17) is as follows. In compactification on a Calabi-Yau manifold $M$, massless gauge singlets of the (NS,R) sector are of three kinds: states that come from $H^1(M,T)$, states that come from $H^1(M,T^*)$, and states that come from $H^1(M,\mathrm{End}\,T)$. We would like the analogous decomposition in the case of Landau-Ginzburg models. This can be done as follows. In the Calabi-Yau case, the three kinds of states can be described as states that are annihilated by $Q_-$, states that are annihilated by $\overline Q_-$, and states that are annihilated by neither. Since from (2.17) we can get an explicit and practical construction of $Q_-$ and $\overline Q_-$, we can make the decomposition into $H^1(M,T)$, $H^1(M,T^*)$, and $H^1(M,\mathrm{End}\,T)$ also in the Landau-Ginzburg case.
In addition to being of intrinsic interest, this decomposition can be of practical use in the following sense. The singlets coming from $H^1(M,T)$ and $H^1(M,T^*)$ are in one to one correspondence with $\mathbf{16}$'s and $\overline{\mathbf{16}}$'s of SO(10) which arise in the same twisted sectors. The concrete form of the correspondence is as follows. Consider a singlet which is created by a left chiral field. Then the corresponding $\mathbf{16}$ of $SO(10)$ is obtained from the same left-moving state. A similar construction applies to left anti-chiral singlets, with the roles of chiral and anti-chiral reversed. We will illustrate this explicitly in the example of §3.
2.2. Symmetries And Quantum Numbers
Consider an $N=2$ Landau-Ginzburg theory with chiral superfields $\Phi_i$ and quasi-homogeneous superpotential $W$ such that
\[ W(\lambda^{n_i}\Phi_i) = \lambda^d\, W(\Phi_i) \]
and again set $\alpha_i = n_i/d$. The superpotential will then have left- and right-moving charges $(1,1)$ – as befits a marginal operator – if the superfields $\Phi_i$ have charges $(\alpha_i, \alpha_i)$. In fact, the signs of both charges are mere conventions. Flipping the convention for one leads to an exchange of $\mathbf{27}$'s and $\overline{\mathbf{27}}$'s; this simple observation motivated the discovery of mirror symmetry. In components the charges are therefore as in Table 1.
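The quasi-homogeneity condition (2.18) is easy to verify numerically. The sketch below uses the assumed quintic superpotential $W = \sum_i \Phi_i^5$ (so $n_i = 1$, $d = 5$); the sample field values are arbitrary test data.

```python
# Sketch: numerical check of quasi-homogeneity W(lam^{n_i} phi_i) = lam^d W(phi)
# for the assumed quintic superpotential W = sum_i phi_i^5 (n_i = 1, d = 5).
def W(phi):
    return sum(p ** 5 for p in phi)

phi = [0.3, -1.2, 0.7, 2.0, -0.5]     # arbitrary sample point in field space
lam = 1.7
lhs = W([lam ** 1 * p for p in phi])  # rescale each field by lam^{n_i}
rhs = lam ** 5 * W(phi)               # compare with lam^d W(phi)
print(abs(lhs - rhs) < 1e-6)          # True
```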
Table 1

  Field                   q_-             q_+
  $\phi_i$                $\alpha_i$      $\alpha_i$
  $\overline\phi_i$       $-\alpha_i$     $-\alpha_i$
  $\psi_{-,i}$            $\alpha_i - 1$  $\alpha_i$
  $\psi_{+,i}$            $\alpha_i$      $\alpha_i - 1$
  $\overline\psi_{-,i}$   $1 - \alpha_i$  $-\alpha_i$
  $\overline\psi_{+,i}$   $-\alpha_i$     $1 - \alpha_i$
At this point, the attentive reader might worry about the following point. The operator that transforms the fields according to the charges given in the table is
\[ J_L = \sum_i \int dx^1\, \Big( (\alpha_i - 1)\,\psi_{-,i}\overline\psi_{-,i} + \alpha_i\,\psi_{+,i}\overline\psi_{+,i} + i\alpha_i\,\phi_i(\partial_0-\partial_1)\overline\phi_i - i\alpha_i\,\overline\phi_i(\partial_0+\partial_1)\phi_i \Big). \]
The density that is being integrated in (2.19) does not commute with $\overline Q_+$, but the integrated expression does. On the other hand, in equation (2.17) we have written down a left-moving charge density that does commute with $\overline Q_+$. Using this density, we have a second candidate for the left-moving charge, namely
\[ J'_L = \sum_i \int dx^1\, \Big( (\alpha_i - 1)\,\psi_{-,i}\overline\psi_{-,i} + i\alpha_i\,\phi_i(\partial_0-\partial_1)\overline\phi_i \Big). \]
Using the commutation relations
\[ \{\overline Q_+, \psi_{+,i}\} = -\sqrt{2}\,(\partial_0+\partial_1)\phi_i\,, \qquad [\overline Q_+, \overline\phi_i] = i\sqrt{2}\;\overline\psi_{+,i}\,, \qquad \{\overline Q_+, \overline\psi_{-,i}\} = i\sqrt{2}\;\frac{\partial W}{\partial\phi_i} \]
(with other components vanishing), one finds that
\[ J_L = J'_L + \Big\{\overline Q_+,\; \frac{i}{\sqrt{2}}\int dx^1\, \Big(\sum_i \alpha_i\,\overline\phi_i\,\psi_{+,i}\Big) \Big\}. \]
This shows that as regards the action on the cohomology, it does not matter whether we use $J_L$ or $J'_L$. $J'_L$ arises naturally in the simplest description of the $N=2$ algebra that acts on the cohomology, while $J_L$ is distinguished because it generates a symmetry even before taking the cohomology.
A similar question, which we might as well dispose of now, arises for the left-moving energy operator. The Landau-Ginzburg theory (2.4), even away from criticality, has a conserved Hamiltonian $H$ and momentum $P$. The conventional operator would be $(H-P)/2$, written concretely as (2.23).
The operator that we would form from the stress tensor in (2.17) is instead (2.24).
In fact, the two agree modulo $\{\overline Q_+, \cdot\}$, though a slightly lengthy calculation is needed to show this. For instance, to reduce (2.24) to a more recognizable form, one first rewrites its quadratic terms and then evaluates them via the equations of motion; the remaining terms can be treated similarly. Discarding a total derivative, the relevant term in (2.24) can be replaced by
\[ (\partial_0+\partial_1)\Big( i\alpha_i\,\psi_{-,i}\overline\psi_{-,i} - \phi_i(\partial_0-\partial_1)\overline\phi_i \Big). \]
Using the fact that the integral of a total spatial derivative vanishes and that a conserved charge is time-independent, it follows that if $j$ is the density of a conserved charge, then $\int dx^1\,(\partial_0+\partial_1)\,j$ vanishes. Applying this principle with $j$ being the current in the first line in (2.17), we find that up to terms of the form $\{\overline Q_+, \cdot\}$, (2.25) can be eliminated. The remainder can be evaluated using the equations of motion. After adding one last correction term
\[ \Big\{\overline Q_+,\; -\frac{i}{\sqrt{2}}\int dx^1\, \frac{\partial \overline W}{\partial\overline\phi_i}\;\overline\psi_{-,i} \Big\} \]
to (2.24) one obtains the desired result that the two energy operators agree modulo $\{\overline Q_+, \cdot\}$.
The equivalence of the two charge operators and of the two energy operators means that the ground state quantum numbers are independent of the superpotential $W$ (which does not appear in $J'_L$ or in the stress tensor of (2.17)) and can be computed using the standard formulas associated with normal-ordering of free fields.
2.3. Construction Of The Orbifold
Calabi-Yau sigma models are related not quite to Landau-Ginzburg models but to certain Landau-Ginzburg orbifolds. These are orbifolds in which one projects on integral values of $q_-$; then $q_+$ automatically also becomes integral. The projection is made by dividing by the group generated by
\[ e^{-2\pi i \oint J_L(z)} = e^{-2\pi i J_L} \]
with a due modification, which we will now explain, when certain fermion zero modes are present.
In physical applications of the Landau-Ginzburg orbifold, one wishes to sum over left-moving Ramond and Neveu-Schwarz sectors. (This is the GSO-like projection that enters in constructing the space-time gauge current algebra.) In simpler models, the GSO projection [22] can be interpreted as a projection onto states for which the left-moving fermion number is even. We are not quite dealing here with such a model but with a model containing also the left-moving free fermions $\lambda_I$. Hence, in the left-moving NS sectors, the GSO projection that we want is the one that projects onto states in which $J_L$ plus the number of $\lambda$ excitations is even. So we project onto states with $g = 1$, where
\[ g = \exp(-i\pi J_L)\cdot(-1)^{F_\lambda}. \]
The necessary statement in R sectors is more subtle because of fermion zero modes. Let $q_-$ and $q_+$ be the left-moving and right-moving charges of the “internal” Landau-Ginzburg theory. Then in left-moving Ramond sectors, the GSO projection (on states that are in the ground state of the $\lambda$ sector) can be summarized by saying that the value of $q_-$ determines whether states transform in the $\mathbf{16}$ or the $\overline{\mathbf{16}}$ of SO(10). One (standard) way to understand this in more detail is to organize the ten fermions of (2.8) into five complex fermions
\[ \eta_I = \frac{1}{\sqrt{2}}\,\big(\lambda_{2I-1} + i\lambda_{2I}\big) \]
where $I = 1, \dots, 5$. The complex fermi fields have zero modes $\eta_{0,I}$ and $\eta^*_{0,I}$ which satisfy the standard anti-commutation relations
\[ \{\eta_{0,I}, \eta_{0,J}\} = \{\eta^*_{0,I}, \eta^*_{0,J}\} = 0\,, \qquad \{\eta_{0,I}, \eta^*_{0,J}\} = \delta_{IJ}\,. \]
Then acting on the Fock vacuum $|0\rangle$, which satisfies $\eta_{0,I}|0\rangle = 0$, a 32 dimensional representation of this Clifford algebra is furnished by the 32 states
\[ \eta^*_{0,j_1}\cdots\eta^*_{0,j_k}\,|0\rangle\,. \]
It is well known that this is a reducible representation of $SO(10)$ which decomposes into two 16 dimensional irreducible representations, the $\mathbf{16}$ and the $\overline{\mathbf{16}}$; the $\mathbf{16}$ is composed of the states in (2.31) with $k$ even, while the $\overline{\mathbf{16}}$ is given by the states in (2.31) with $k$ odd. Notice from (2.28) that the gauge fermions should be thought of as carrying an extra charge of 1, for the purposes of the projection onto even left-moving charge. Then the states in (2.31) with a given value of $k$ carry a left charge of $k - \tfrac{5}{2}$ (the $-\tfrac{5}{2}$ being the charge of the Fock vacuum $|0\rangle$; see §2.4). The conclusion is that states with one value of $q_-$ mod 2 are projected onto $\mathbf{16}$'s of $SO(10)$, while states with the other value are associated with $\overline{\mathbf{16}}$'s of $SO(10)$.
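The counting behind the $\mathbf{32} = \mathbf{16} \oplus \overline{\mathbf{16}}$ split is just binomial: the states (2.31) with $k$ even and with $k$ odd each number sixteen. A one-line check (illustrative only):

```python
from math import comb

# Count the states eta*_{0,j1}...eta*_{0,jk}|0> built from 5 fermionic
# creation operators, split by the parity of k.
even = sum(comb(5, k) for k in range(0, 6, 2))  # k = 0, 2, 4
odd = sum(comb(5, k) for k in range(1, 6, 2))   # k = 1, 3, 5
print(even, odd, even + odd)  # 16 16 32
```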
Physical applications also involve a right-moving GSO projection, onto states with the appropriate mod 2 right-moving fermion number. We will be interested in massless states, which are always right-moving ground states; for such states the GSO projection in right-moving Ramond sectors means the following. States with $q_+$ even give left-handed spin one-half massless fermions in space-time; states with $q_+$ odd give right-handed ones. The detailed explanation involves exactly the same sort of reasoning that we have just carried out for left-movers. (The description of the right-moving GSO projection in right moving NS sectors is standard but we need not give it here as we only consider right moving R sectors in this paper.)
Since in constructing the spectrum, we project onto states with a particular eigenvalue of the operator of equation (2.28), modular invariance forces us to add twisted sectors constructed with twists by arbitrary powers of that operator. The operator is a version of the operator that counts fermions modulo two. So, starting with the completely untwisted (R,R) sector, a twist by an even power makes a left-moving Ramond sector; a twist by an odd power makes a left-moving Neveu-Schwarz sector. With $N$ the least common denominator of the charges of the $\Phi_i$, there are $2N$ sectors, twisted by the $k$-th power of the operator for $k = 0, \ldots, 2N - 1$.
2.4. Ground State Quantum Numbers
As is well known in analogous computations, one of the main steps in determining the spectrum of one of these models is to determine the quantum numbers of the ground state in each twisted sector. To be precise, in the sector twisted by the $k$-th power of the twist, we wish to determine the left- and right-moving charges (i.e., the $q_-$ and $q_+$ eigenvalues), and the left-moving energy ($L_{0,-}$ eigenvalue) of the ground state. We will always consider right-moving Ramond sectors, so the $L_{0,+}$ eigenvalue of the ground state will always be zero.
First, we determine the charges. Our viewpoint is that of [23]: the reason the twisted sectors have fractional charges is that when the fermions satisfy twisted boundary conditions, the vacuum has a fractional fermion number. Formally, the charge carried by a filled fermi sea with fermions of charge $e$ is
Q = e \int_{-\infty}^{0} dE\, \rho(E)
where $\rho(E)$ is the density of states. This is of course divergent, and must be regulated. Since we are really interested in the change in $Q$ as a function of the twisted boundary conditions on the fermions, we can subtract an (infinite) constant without doing any harm; we also introduce a convergence factor:
Q = -\frac{1}{2} \lim_{s\to 0} \int_{-\infty}^{\infty} dE\, \mathrm{sign}(E)\, \rho(E)\, e^{-s|E|} .
For our case of interest, which is left moving fermions on a circle of circumference $2\pi$ (and coordinate $\sigma$), the integral in (2.33) is easily evaluated for an arbitrary choice of boundary conditions. In particular, for fermions with boundary conditions
\psi(\sigma + 2\pi) = e^{-i\theta}\, \psi(\sigma)
with $0 < \theta < 2\pi$, one finds
Q = \frac{e\theta}{2\pi} - \frac{e}{2}
(so the vacuum has a fractional fermion number of $e\left(\frac{\theta}{2\pi} - \frac{1}{2}\right)$). The above formula is valid for $0 < \theta < 2\pi$. It becomes valid for all $\theta$ after the obvious modification to
Q = e\left(\frac{\theta}{2\pi} - \left[\frac{\theta}{2\pi}\right] - \frac{1}{2}\right)
where $[x]$ denotes the greatest integer less than or equal to $x$. There is an important subtlety here. The expression has a discontinuity when $\theta$ is an integral multiple of $2\pi$. At such values of $\theta$, both values of $Q$ should be kept. The reason for this is that precisely when $\theta = 2\pi n$, with integer $n$, there are fermion zero modes; upon quantizing them, one finds (for a single complex fermion) a pair of ground states. One of these is the limit of the ground state as $\theta$ approaches $2\pi n$ from above; the other is the limit as $\theta$ approaches $2\pi n$ from below. So the charges of the two ground states are the two limiting values of (2.36).
The analogous formula for right-moving fermions is easily derived, with the result that for the same boundary conditions (2.34) the right-moving fermion would contribute . Since the right-moving worldsheet fermions do carry non-vanishing left charge, it is important to take into account their contribution when computing the left charges of the twisted vacua.
We know the charges of the fermions from Table 1, and in the sector twisted by they pick up phases when going around the circle. So without further ado, we can write the general formula for the left charges of the vacua:
q_{k,-} = \sum_i \left\{ (\alpha_i - 1)\left( \frac{k(\alpha_i - 1)}{2} + \left[\frac{k(1-\alpha_i)}{2}\right] + \frac{1}{2} \right) + \alpha_i \left( -\frac{k\alpha_i}{2} + \left[\frac{k\alpha_i}{2}\right] + \frac{1}{2} \right) \right\}
The analogous formula for the right-moving charges is simply
q_{k,+} = \sum_i \left\{ \alpha_i \left( \frac{k(\alpha_i - 1)}{2} + \left[\frac{k(1-\alpha_i)}{2}\right] + \frac{1}{2} \right) + (\alpha_i - 1)\left( -\frac{k\alpha_i}{2} + \left[\frac{k\alpha_i}{2}\right] + \frac{1}{2} \right) \right\}
We also need to determine the ground state eigenvalues of $L_{0,-}$ ($L_{0,+}$ always vanishes in the ground state by right-moving supersymmetry). In the (R,R) sectors, the vacuum eigenvalue of $L_{0,-}$ vanishes. Indeed, the contribution of the fields in the “internal” Landau-Ginzburg theory vanishes by supersymmetry, since the bosons and fermions satisfy the same boundary conditions in (R,R) sectors. The contribution of the 16 fermions (in their ground state, which is in the NS sector, that is with antiperiodic boundary conditions) is $16 \times (-\frac{1}{48}) = -\frac{1}{3}$, while the contribution of the 10 fermions is $10 \times \frac{1}{24} = \frac{5}{12}$, and the contribution of the remaining 2 spacetime bosons (in light-cone gauge) is $2 \times (-\frac{1}{24}) = -\frac{1}{12}$. Simply doing the arithmetic, this sums to 0.
The (NS,R) sectors, on the other hand, can have negative vacuum energies. The 10 fermions, 16 fermions, and 2 spacetime bosons contribute $10 \times (-\frac{1}{48}) + 16 \times (-\frac{1}{48}) + 2 \times (-\frac{1}{24}) = -\frac{5}{8}$ to the vacuum energy. The contribution of the internal Landau-Ginzburg theory can be determined by using the standard formulae for the energy of a twisted boson or fermion. The contribution to the ground state energy (normal ordering constant of $L_{0,-}$) for a complex fermion twisted by $\theta$ (with $-\pi \leq \theta \leq \pi$) with respect to being antiperiodic
\psi \to e^{i(\pi + \theta)}\, \psi
is given by
E_\theta = -\frac{1}{24} + \frac{1}{8}\left(\frac{\theta}{\pi}\right)^2 .
A boson with the same boundary conditions would contribute the negative of (2.40) to the vacuum energy.
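As a quick arithmetic check of (2.40) (a sketch, with $\theta/\pi$ kept as an exact fraction): at $\theta = 0$ the complex fermion is antiperiodic and should reproduce twice the standard real antiperiodic value $-\frac{1}{48}$, while at $\theta = \pi$ it is periodic and should reproduce twice the real periodic value $+\frac{1}{24}$.

```python
from fractions import Fraction

def E(t):
    # vacuum energy of a complex fermion twisted by theta, with t = theta/pi
    return Fraction(-1, 24) + Fraction(1, 8) * t * t

print(E(Fraction(0)))  # -1/24, i.e. 2 * (-1/48): antiperiodic
print(E(Fraction(1)))  # 1/12, i.e. 2 * (1/24): periodic
```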
We are interested in bose-fermi pairs with left charges $\alpha_i$ and $\alpha_i - 1$. Therefore, if in some (NS,R) sector the fermion has boundary condition (2.39) with $\theta$ between $-\pi$ and $\pi$, then the boson is $\pi - |\theta|$ away from being antiperiodic. Simply using the formula (2.40) we see that the fermion-boson pair then contributes
E_\theta = \frac{1}{4}\frac{|\theta|}{\pi} - \frac{1}{8}
to the vacuum energy.
Using these formulae and the fermion and boson charges from table 1, we find that the vacuum energy of the sector twisted by the $k$-th power of the twist, with $k$ odd, is given in general by
E_k = -\frac{5}{8} + \sum_i \left( \frac{1}{4}|\beta_i| - \frac{1}{8} \right) .
Here $\beta_i$ is $k\alpha_i$, reduced mod 2 to lie between $-1$ and 1.
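For concreteness, here is this vacuum energy evaluated for the quintic of §3, where all five fields carry charge $\alpha_i = \frac{1}{5}$ (a worked example, with $\beta_i = k\alpha_i$ reduced mod 2 into $(-1, 1]$); the $k = 1$ value reproduces the familiar left-moving NS ground-state energy $-1$, and the values respect the $k \leftrightarrow 10 - k$ symmetry.

```python
from fractions import Fraction

def vacuum_energy(k, alphas):
    """E_k = -5/8 + sum_i (|beta_i|/4 - 1/8), where beta_i is
    k*alpha_i reduced mod 2 to lie in (-1, 1]."""
    E = Fraction(-5, 8)
    for a in alphas:
        b = (k * a) % 2       # a Fraction in [0, 2)
        if b > 1:
            b -= 2            # shift into (-1, 1]
        E += abs(b) / 4 - Fraction(1, 8)
    return E

quintic = [Fraction(1, 5)] * 5  # five fields of charge 1/5
print([str(vacuum_energy(k, quintic)) for k in (1, 3, 5, 7, 9)])
# ['-1', '-1/2', '0', '-1/2', '-1']
```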
Now that we know the quantum numbers of the twisted vacua, we must determine the spectrum of physical states in each twisted sector. In the next section, we will do this in detail in a familiar example: the Landau-Ginzburg model that corresponds to a quintic hypersurface in $\mathbb{P}^4$.
2.5. $E_6$ And Supersymmetry Multiplets And Charges
Certain symmetries of these systems – $E_6$ symmetry and space-time supersymmetry – are not manifest in the formalism. The proper assembly of states into $E_6$ multiplets and supermultiplets can be carried out using the $q_-$ and $q_+$ charges.
Let us consider first the construction of $E_6$ multiplets. The 27 and $\overline{27}$ of $E_6$ decompose under $SO(10) \times U(1)$ as $27 = 16 \oplus 10 \oplus 1$ and $\overline{27} = \overline{16} \oplus 10 \oplus 1$. Therefore, singlets of $SO(10)$ with the appropriate $U(1)$ charges are parts of 27s and $\overline{27}$s of $E_6$, while singlets of $SO(10)$ with other $U(1)$ charges are also singlets of $E_6$. The decomposition of the adjoint representation of $E_6$ as $78 = 45 \oplus 16 \oplus \overline{16} \oplus 1$ is also helpful in studying gluinos.
The right-moving charge $q_+$ plays a similar role in identifying supermultiplets [24]. For right-moving NS states, one can understand the values of $q_+$ by considering unitarity constraints. For example, if we consider a state of right conformal weight $h_+$ and right-moving charge $q_+$, denoted by $|h_+, q_+\rangle$, then using
\{G_{1/2,+}, \bar{G}_{-1/2,+}\} = 2L_{0,+} + J_{0,R}, \qquad \{G_{-1/2,+}, \bar{G}_{1/2,+}\} = 2L_{0,+} - J_{0,R}
and requiring that the states $G_{-1/2,+}|h_+, q_+\rangle$ and $\bar{G}_{-1/2,+}|h_+, q_+\rangle$ have non-negative norm we find that
h_+ \geq \frac{1}{2}|q_+| .
This is useful because we know that massless right NS states must have $h_+ = \frac{1}{2}$. Then also requiring locality means that $q_+ = \pm 1$: if $q_+ = 1$, the state is right chiral, and if $q_+ = -1$ the state is right antichiral.
Consider a spin zero physical state. It is represented by the spin zero part of a chiral superfield with component expansion
S(x, \theta) = s(x) + \theta\, \eta(x) + \theta\theta\, F(x)
Likewise a scalar of is represented by a supermultiplet
\bar{S}(x, \bar{\theta}) = \bar{s}(x) + \bar{\theta}\, \bar{\eta}(x) + \bar{\theta}\bar{\theta}\, \bar{F}(x) .
We are most interested in the worldsheet quantum numbers of the vertex operators for $s$ and $\bar{s}$, since we are going to be finding the spectrum of spacetime fermions. The fermions are obtained by acting with the spacetime supersymmetries on (2.45) and (2.46). In particular, with the information derived above and a knowledge of the charges of the spacetime supersymmetry generators, we can infer the expected values of $q_+$ for the fermions which are part of chiral or antichiral multiplets. Recall that the explicit form of the spacetime supersymmetries is
Q_\alpha = \oint dz\, e^{-\rho/2}\, S_\alpha\, \Sigma(z), \qquad Q_{\dot{\alpha}} = \oint dz\, e^{-\rho/2}\, S_{\dot{\alpha}}\, \Sigma^\dagger(z)
where $e^{-\rho/2}$ is a spin field for the superconformal ghosts, $S_\alpha$ and $S_{\dot{\alpha}}$ are spin fields for the world sheet “spacetime” fermions $\psi^\mu$, and $\Sigma$ and $\Sigma^\dagger$ are Ramond sector fields which essentially implement right spectral flow by $\pm\frac{1}{2}$. Therefore, we see that $Q_\alpha$ and $Q_{\dot{\alpha}}$ leave the value of $q_-$ unchanged, while they change $q_+$ by $\pm\frac{3}{2}$.
Now using the fact that $s$ is constrained to have $q_+ = 1$ by the representation theory of the right moving N=2 algebra, we see that $\eta$ must have $q_+ = -\frac{1}{2}$, while the vertex operator for the auxiliary field $F$ must have $q_+ = -2$. Similarly, $\bar{\eta}$ must have $q_+ = \frac{1}{2}$, while $\bar{F}$ must have $q_+ = 2$.
The same argument can be applied to find the quantum numbers of the gauginos. We know that generically in heterotic string theory the spacetime gauge symmetry must be generated by (NS,NS) vector bosons, which correspond to states of the form
J_{-1,L}\, \psi^\mu_{-1/2,+}\, |0\rangle
where $J_{-1,L}$ is a left-moving symmetry generator and $\psi^\mu_{-1/2,+}$ is one of the right-moving “spacetime” fermions. In particular, the state (2.48) always has $q_+ = 0$. The gauginos arise by applying the supersymmetries (2.47) to the vector superfields, which have the same quantum numbers as (2.48). Therefore, in particular gauginos always have $q_+ = \pm\frac{3}{2}$. For the gaugino partners of the $U(1)$ symmetries of Gepner models, which are also neutral under the spacetime gauge symmetry, $q_+ = \pm\frac{3}{2}$ as well.
So in summary: We expect to find fermions with $q_+ = \pm\frac{1}{2}$ which are parts of spacetime antichiral and chiral supermultiplets, and fermions with $q_+ = \pm\frac{3}{2}$ which are part of spacetime vector supermultiplets. The latter are in correspondence with generators of spacetime gauge symmetries.
3. The Quintic
Let us now use the technology developed in §2 to study the massless spectrum of string theory compactified on a quintic hypersurface in $\mathbb{P}^4$, in the Landau-Ginzburg orbifold formulation. We consider a quintic defined by the zeroes of a generic quintic polynomial
W = \frac{1}{5} \sum_{i_1 \ldots i_5} w_{i_1 \ldots i_5}\, \Phi_{i_1} \cdots \Phi_{i_5} .
In practice, that means that we consider a Landau-Ginzburg orbifold with $W$ as superpotential. The general results involve a reduction to a description involving finite matrices. When we want to make the results completely explicit, we will consider the example of the Fermat quintic, with
W = \frac{1}{5} \sum_{i=1}^{5} \Phi_i^5
which has enhanced symmetry and corresponds to a soluble Gepner point [25]. We will carry out the discussion for a model with superpotential , but no essential modification is required for the case, as we will explain in §3.9.
We must obtain the spectrum in 10 sectors, which arise, starting with the untwisted (R,R) sector, by twisting by the $k$-th power of the twist operator, with $k = 0, \ldots, 9$. In practice, it suffices to consider $k \leq 5$, as CPT exchanges $k$ with $10 - k$.
The (R,R) sector is the sum of the twisted sectors of even $k$, and the (NS,R) sector is the sum of the twisted sectors of odd $k$. Happily, the (R,NS) and (NS,NS) sectors need not be studied explicitly, as they are related to (R,R) and (NS,R) by space-time supersymmetry.
As a preliminary, let us review the fields and their quantum numbers here. In addition to the bosons and , there are left moving fermions
https://blender.stackexchange.com/questions/692/what-does-the-cycles-fresnel-node-do

# What does the Cycles Fresnel Node do?
What does the Fresnel node (in Cycles) do? I know that it kind of does rim selection (similar to rim lighting), but I'm not exactly sure what it's doing. I've seen it used in many tutorials, but I can't find any clear explanations.
What's the explanation, and which situations make sense to use it?
• 2 minutes of googling would give you some quite nice answers – zeffii Jun 6 '13 at 20:59
• @zeffii I can't find any good explanations on Google. – CharlesL Jun 6 '13 at 21:46
• an image search for fresnel shader can probably say more than trying to read about it. – zeffii Jun 6 '13 at 22:06
• @zeffii that's a really good idea, thanks! – CharlesL Jun 6 '13 at 22:11
The Fresnel node outputs which percentage of light would be reflected off a glossy layer, with the rest being refracted through the layer. The most common use is to mix between two BSDFs using it as a blending factor in a mix shader node.
For a simple glass material you would mix between a glossy refraction and glossy reflection. At grazing angles more light will be reflected than refracted as happens in reality.
For a two layered material with a diffuse base and a glossy coating, you can use the same setup, mixing between a diffuse and glossy BSDF. By using the fresnel as blending factor you're specifying that any light which is refracted through the glossy coating layer would hit the diffuse base and be reflected off that.
For the Cycles node it is assumed that this glossy layer is a simple dielectric material with an index of refraction. Different and more advanced Fresnel equations exist but are not currently implemented. More advanced layering models exist but fresnel mixing is a commonly used approximation.
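To make "percentage of light reflected" concrete, here is a small standalone sketch (an illustration, not Cycles source code) of the unpolarized Fresnel reflectance for a simple dielectric with a given index of refraction, which is essentially what the node outputs given the angle between the surface normal and the view direction:

```python
import math

def fresnel_dielectric(cos_i, ior):
    """Fraction of unpolarized light reflected off a dielectric
    boundary (light arriving from air onto a medium with the
    given index of refraction)."""
    cos_i = abs(cos_i)
    # Snell's law: sin(theta_t) = sin(theta_i) / ior
    sin_t = math.sqrt(max(0.0, 1.0 - cos_i * cos_i)) / ior
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    # Fresnel equations for the two polarizations, then average
    r_par = (ior * cos_i - cos_t) / (ior * cos_i + cos_t)
    r_perp = (cos_i - ior * cos_t) / (cos_i + ior * cos_t)
    return 0.5 * (r_par * r_par + r_perp * r_perp)

# glass-like IOR: reflectance is small head-on and rises to 1 at grazing angles
for cos_i in (1.0, 0.7, 0.3, 0.0):
    print(cos_i, round(fresnel_dielectric(cos_i, 1.45), 3))
```

For IOR 1.5 at normal incidence this gives the familiar $\left(\frac{n-1}{n+1}\right)^2 = 4\%$ reflectance, rising to 100% at grazing angles, which is why the value works as the blend factor between the diffuse/refraction and glossy shaders.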
• note that there are many different implementations of Fresnel equations. you should use it carefully. – eriawan Jun 7 '13 at 4:07
• But what is the logic behind using it as a blend factor? What if the glossy input is in the first shader slot and the diffuse in the second: would it go the other way around (as opposed to what is stated above), i.e. "any light which is refracted through the diffuse coating layer would hit the glossy base and be reflected off that"? – bzal Jan 18 '16 at 13:46
What it does is that it distinguishes between those areas that would be totally reflective with the given index of refraction and those that wouldn't, letting you tweak your shader accordingly. I assume you know what index of refraction is, if not you should read up on it on wikipedia.
In most general cases you'd rather want to use the glass BSDF directly, but there are non-standard uses in which the fresnel value can be used for other purposes than to simulate translucent materials of differing density.
I seem to recall someone using it to tweak a car paint shader once, but having read about that months ago I'm afraid I don't have the link at hand. Also months ago, also without a link to it, I recall someone creating an ice shader utilizing fresnel input. Both SHOULD be in some thread over at Blender Artists if you're curious enough to go looking.
Short version: for advanced materials it's sometimes desirable to access the fresnel value outside of the default glass shader.
https://mittheory.wordpress.com/2016/10/03/estimating-transitive-closure/

# Estimating Transitive Closure via Sampling
In this post, I describe an algorithm of Edith Cohen, which estimates the size of the transitive closure of a given directed graph in near-linear time. This simple, but extremely clever algorithm uses ideas somewhat similar to the algorithm of Flajolet–Martin for estimating the number of distinct elements in a stream, and to the MinHash sketch of Broder1.
Suppose we have a large directed graph with $n$ vertices and $m$ directed edges. For a vertex $v$, let us denote $R_v$ the set of vertices that are reachable from $v$. There are two known ways to compute sets $R_v$ (all at the same time):
• Perform Depth-First Search (DFS) from each vertex. This takes time $O(nm)$, which is the best known bound for sparse graphs;
• Use fast matrix multiplication, which takes time $O(n^{2.37\ldots})$. This algorithm is better for dense graphs.
Can we do better? Turns out we can, if we are OK with merely approximating the size of every $R_v$. Namely, the following theorem was proved back in 1994:
Theorem 1. There exists a randomized algorithm for computing $(1 + \varepsilon)$-multiplicative approximation for every $|R_v|$ with running time $\varepsilon^{-2}\cdot m \cdot \mathrm{poly}(\log n)$.
Instead of spelling out the full proof, I will present it as a sequence of problems: each of them will likely be doable for a mathematically mature reader. Going through the problems should be fun, and besides, it will save me some typing.
Problem 1. Let $f \colon V \to [0, 1]$ be a function that assigns random independent and uniform reals between 0 and 1 to every vertex. Let us define $g(v) = \min_{w \in R_v} f(w)$. Show how to compute values of $g(v)$ for all vertices $v$ at once in time $m \cdot \mathrm{poly}(\log n)$.
Problem 2. For a positive integer $k$, denote $U_k$ the distribution of the minimum of $k$ independent and uniform reals between 0 and 1. Suppose we receive several independent samples from $U_k$ with an unknown value of $k$. Show that we can obtain a $(1 + \varepsilon)$-multiplicative approximation of $k$ with probability $1 - \delta$ using as few as $O(\log(1 / \delta) / \varepsilon^2)$ samples.
Problem 3. Combine the solutions for two previous problems and prove Theorem 1.
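To tie the problems together, here is a small Python sketch of the overall estimator (my own illustration: it propagates min-labels to a fixed point rather than using the near-linear routine of Problem 1, and it estimates $k$ from $\mathbb{E}[\min \text{ of } k \text{ uniforms}] = \frac{1}{k+1}$ rather than the tighter estimator of Problem 2):

```python
import random

def estimate_reachability(n, edges, samples=200, seed=0):
    """Estimate |R_v| (counting v itself as reachable) for every vertex:
    repeatedly draw uniform labels f(v) and compute
    g(v) = min over w in R_v of f(w)."""
    rng = random.Random(seed)
    sums = [0.0] * n
    for _ in range(samples):
        g = [rng.random() for _ in range(n)]
        # propagate minima backwards along edges to a fixed point
        # (a simple stand-in for the clever traversal of Problem 1)
        changed = True
        while changed:
            changed = False
            for u, v in edges:
                if g[v] < g[u]:
                    g[u] = g[v]
                    changed = True
        for v in range(n):
            sums[v] += g[v]
    # E[min of k uniforms] = 1/(k+1), so k is about samples/sum - 1
    return [samples / sums[v] - 1.0 for v in range(n)]

# directed path 0 -> 1 -> 2 -> 3: |R_0| = 4, |R_3| = 1
print(estimate_reachability(4, [(0, 1), (1, 2), (2, 3)], samples=400))
```

On the directed path the estimates come out close to $|R_0| = 4$ and $|R_3| = 1$, with accuracy improving as `samples` grows, in line with Theorem 1.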
### Footnotes
1. These similarities explain my extreme enthusiasm towards the algorithm. Sketching-based techniques are useful for a problem covered in 6.006, yay!
Ilya
Why do we require $f$ to be special (in terms of randomness) in Problem 1? Seems like it will work for any reasonable $f$.
I once asked on CSTheory how to do this. One comment pointed me to the paper you mention. There’s also an answer pointing to some more recent work on how to compute not just cardinalities but also the sets $R_v$: http://cstheory.stackexchange.com/q/553/236
https://gmatclub.com/forum/what-is-the-value-of-integer-k-247691.html
# What is the value of integer k?
What is the value of integer k?
(1) k + 3 > 0
(2) $$k^4 \leq 0$$
Bunuel wrote:
What is the value of integer k?
(1) k + 3 > 0
(2) $$k^4 \leq 0$$
B. k^4 can never be less than 0, so k = 0
KS15 wrote:
B. k^4 can never be less than 0, so k = 0
Answer has to be E. From B we get k = 0, and 0 is not an integer.
Statement 1
- $$k + 3 > 0$$, so $$k > -3$$
- Not sufficient.
Statement 2
- $$k^4 <= 0$$
- If k is positive, no positive number satisfies this inequality.
- If k is negative, no negative number satisfies this inequality.
- In fact, $$k^4$$ cannot be negative for any number, so only $$k=0$$ will satisfy it.
- Hence, it is sufficient.
B.
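A brute-force check over a window of integers confirms the reasoning (a quick illustration):

```python
# only k = 0 satisfies k^4 <= 0 among the integers tested
solutions = [k for k in range(-100, 101) if k ** 4 <= 0]
print(solutions)  # [0]
```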
sandeep211986 wrote:
Answer has to be E. From B we get k = 0, and 0 is not an integer.
0 is neither positive nor negative even integer.
sandeep211986 wrote:
Answer has to be E. From B we get k = 0, and 0 is not an integer.
I think 0 is considered an integer, sandeep211986.
sandeep211986 wrote:
Answer has to be E. From B we get k = 0, and 0 is not an integer.
Buddy, 0 is an integer.
Bunuel wrote:
0 is neither positive nor negative even integer.
Bunuel, your explanation could convey an incorrect meaning here. I know you mean to say that 0 is neither positive nor negative, but that it is an integer.
That is not what your sentence says.
1) k can take any positive value, or 0, or a negative value such that -3 < k < 0.
Clearly insufficient
2) k^4 is ALWAYS a non-negative value; it can't be negative. The only value here is k = 0.
Sufficient
https://gmatclub.com/forum/if-x-and-y-are-positive-integers-which-of-the-following-74924.html
# If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x-y)
C. 20x
D. 20y
E. 35x
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
How about the other choices, can they be GCD of $$35x$$ and $$20y$$?
A. $$5$$ --> if $$x=y=1$$ --> $$35x=35$$ and $$20y=20$$ --> $$GCD(35,20)=5$$. Answer is YES, $$5$$ can be GCD of $$35x=35$$ and $$20y$$;
B. $$5(x-y)$$ --> if $$x=3$$ and $$y=2$$ --> $$35x=105$$ and $$20y=40$$ --> $$GCD(105,40)=5=5(x-y)$$. Answer is YES, $$5(x-y)$$ can be GCD of $$35x$$ and $$20y$$;
D. $$20y$$ --> if $$x=4$$ and $$y=1$$ --> $$35x=140$$ and $$20y=20$$ --> $$GCD(140,20)=20=20y$$. Answer is YES, $$20y$$ can be GCD of $$35x$$ and $$20y$$;
E. $$35x$$ --> if $$x=1$$ and $$y=7$$ --> $$35x=35$$ and $$20y=140$$ --> $$GCD(35,140)=35=35x$$. Answer is YES, $$35x$$ can be GCD of $$35x$$ and $$20y$$.
Hope it's clear.
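The witnesses above can be checked mechanically (a quick illustration, not part of the original post):

```python
from math import gcd

# the witnesses used above for choices A, B, D and E
assert gcd(35 * 1, 20 * 1) == 5              # A: x = y = 1
assert gcd(35 * 3, 20 * 2) == 5 * (3 - 2)    # B: x = 3, y = 2
assert gcd(35 * 4, 20 * 1) == 20 * 1         # D: x = 4, y = 1
assert gcd(35 * 1, 20 * 7) == 35 * 1         # E: x = 1, y = 7
print("all witnesses verified")              # C fails: 35x/(20x) = 7/4
```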
We are looking for a choice that CANNOT be the greatest common divisor of 35x and 20y, which means that when 35x and 20y are divided by the answer choice, the quotient should not be an integer.
Let's check:
a. 5: 35x/5 = 7x and 20y/5 = 4y are both integers, so eliminate.
b. 5(x−y): when x = 2 and y = 1 this equals 5, which could be the greatest common divisor, so eliminate.
c. 20x: when x = 1 it is 20, and 20 cannot be the greatest common divisor of 35x and 20y;
or, 35x/20x = 7/4, which is not an integer.
##### General Discussion
Would plugging in numbers be a better strategy for such problems?
fozzzy wrote:
Would plugging in numbers be a better strategy for such problems?
Better strategy is the one that suits you best.
Bunuel wrote:
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
In this question, for the division, does it mean that both 35x and 20y must be divisible by the answer choice, or does the solution work if only one of them is divisible?
In option D,
$$\frac{35x}{20y}$$ doesn't work according to that strategy?
Intern
Joined: 06 May 2008
Posts: 12
Concentration: Strategy, General Management
Re: If x and y are positive integers, which of the following
12 Jul 2013, 08:32
1
I proceeded like this:
35x can have the following prime factors: 5, 7, x [well, x can have more than one prime factor too; if x = 6, then 2 and 3 are added to the list]
Similarly, 20y has the following prime factors: 2, 2, 5, y [the same holds for y]
The GCF has to contain one 5 for sure. [If an answer choice were not a multiple of 5, it could be omitted right away.]
A. 5 => We already covered that the GCF contains 5. Eliminate.
B. 5(x − y) => If x and y were 3 and 2 respectively, this reduces to 5 (same as answer choice A, with GCD(105, 40) = 5). Eliminate.
C. 20x has prime factors 2, 2, 5 and x. For the 2's to be part of the GCF they must also divide 35x, and since 35 has no 2's they would have to come from x —
but even then 20x cannot divide 35x, because 35x/20x = 7/4 is never an integer. So 20x can never be the GCF.
D. 20y = 2 × 2 × 5 × y ... If x were 4 (and y = 1), this is very possible.
E. 35x = 5 × 7 × x; if y = 7 and x = 4, this is also possible (GCD(140, 140) = 140 = 35x).
There are simpler reasons already stated for why C is the answer, but for those who use prime factor trees to attack such problems, this is how I would explain it.
Director
Joined: 14 Dec 2012
Posts: 767
Location: India
Concentration: General Management, Operations
GMAT 1: 700 Q50 V34
GPA: 3.6
Updated on: 12 Jul 2013, 08:39
2
fozzzy wrote:
Bunuel wrote:
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x – y)
C. 20x
D. 20y
E. 35x
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
In this question, does the division test require that both 35x and 20y be divisible by the answer choice, or does the approach work if just one of them is divisible?
In option D
$$\frac{35X}{20Y}$$ doesn't work according to that strategy?
hi fozzy,
I would say the best way is to understand the definitions of GCF and LCM.
The GCF of two numbers is the biggest number that is a factor of both of them.
Now here 35x ==> prime factors 5, 7 ... plus others we don't know about, coming from x.
And 20y ==> prime factors 2, 2, 5 ... plus others we don't know about, coming from y.
Now let's take option C:
Say 20x is the GCF. Then it must be a factor of both numbers, so 35x/20x must be an integer (according to the definition of a factor). But simplifying gives 7/4, a fraction, so we are 100 percent sure 20x cannot be a factor of both — hence it cannot be the GCF.
In all the other options the unknown variables do not cancel, so we cannot be sure they fail.
Hope it helps.
_________________
When you want to succeed as badly as you want to breathe... then you will be successful....
GIVE VALUE TO OFFICIAL QUESTIONS...
learn AWA writing techniques while watching video : http://www.gmatprepnow.com/module/gmat-analytical-writing-assessment
Originally posted by blueseas on 12 Jul 2013, 08:34.
Last edited by blueseas on 12 Jul 2013, 08:39, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 50615
12 Jul 2013, 08:37
fozzzy wrote:
Bunuel wrote:
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x – y)
C. 20x
D. 20y
E. 35x
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
In this question, does the division test require that both 35x and 20y be divisible by the answer choice, or does the approach work if just one of them is divisible?
In option D
$$\frac{35X}{20Y}$$ doesn't work according to that strategy?
Not sure I understand your question...
But notice that $$\frac{35x}{20y}=\frac{7x}{4y}$$ could be an integer, for example if x=4 and y=1.
_________________
Senior Manager
Joined: 07 Apr 2012
Posts: 370
23 Nov 2013, 01:37
Bunuel wrote:
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x – y)
C. 20x
D. 20y
E. 35x
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
How about the other choices, can they be GCD of $$35x$$ and $$20y$$?
A. $$5$$ --> if $$x=y=1$$ --> $$35x=35$$ and $$20y=20$$ --> $$GCD(35,20)=5$$. Answer is YES, $$5$$ can be GCD of $$35x=35$$ and $$20y$$;
B. $$5(x-y)$$ --> if $$x=3$$ and $$y=2$$ --> $$35x=105$$ and $$20y=40$$ --> $$GCD(105,40)=5=5(x-y)$$. Answer is YES, $$5(x-y)$$ can be GCD of $$35x$$ and $$20y$$;
D. $$20y$$ --> if $$x=4$$ and $$y=1$$ --> $$35x=140$$ and $$20y=20$$ --> $$GCD(140,20)=20=20y$$. Answer is YES, $$20y$$ can be GCD of $$35x$$ and $$20y$$;
E. $$35x$$ --> if $$x=1$$ and $$y=7$$ --> $$35x=35$$ and $$20y=140$$ --> $$GCD(35,140)=35=35x$$. Answer is YES, $$35x$$ can be GCD of $$35x$$ and $$20y$$.
Hope it's clear.
Hi Bunuel,
Is there a way to do this using prime factorization of 35 and 20?
That's the first thing that comes to mind, but I can't see how to proceed from there.
Thanks,
Senior Manager
Joined: 15 Aug 2013
Posts: 251
26 May 2014, 11:46
Bunuel wrote:
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x – y)
C. 20x
D. 20y
E. 35x
Greatest common divisor (GCD) of $$35x$$ and $$20y$$ obviously must be a divisor of both $$35x$$ and $$20y$$, which means that $$\frac{35x}{GCD}$$ and $$\frac{20y}{GCD}$$ must be an integer.
If $$GCD=20x$$ (option C), then $$\frac{35x}{20x}=\frac{7}{4}\neq{integer}$$, which means that $$20x$$ cannot be GCD of $$35x$$ and $$20y$$ as it is not a divisor of $$35x$$.
How about the other choices, can they be GCD of $$35x$$ and $$20y$$?
A. $$5$$ --> if $$x=y=1$$ --> $$35x=35$$ and $$20y=20$$ --> $$GCD(35,20)=5$$. Answer is YES, $$5$$ can be GCD of $$35x=35$$ and $$20y$$;
B. $$5(x-y)$$ --> if $$x=3$$ and $$y=2$$ --> $$35x=105$$ and $$20y=40$$ --> $$GCD(105,40)=5=5(x-y)$$. Answer is YES, $$5(x-y)$$ can be GCD of $$35x$$ and $$20y$$;
D. $$20y$$ --> if $$x=4$$ and $$y=1$$ --> $$35x=140$$ and $$20y=20$$ --> $$GCD(140,20)=20=20y$$. Answer is YES, $$20y$$ can be GCD of $$35x$$ and $$20y$$;
E. $$35x$$ --> if $$x=1$$ and $$y=7$$ --> $$35x=35$$ and $$20y=140$$ --> $$GCD(35,140)=35=35x$$. Answer is YES, $$35x$$ can be GCD of $$35x$$ and $$20y$$.
Hope it's clear.
Hi Bunuel,
The steps here are easy to follow, but one thing that bugs me is the number selection. It's almost as if you had to KNOW the answer to select the numbers that prove each statement's worth. On the GMAT, that might be a little challenging.
Is there a way to do this algebraically by using prime boxes? Meaning, 35 has 7 and 5 as its PF and 20 has xxx?
Intern
Joined: 03 Jan 2014
Posts: 2
WE: Information Technology (Computer Software)
Re: If x and y are positive integers, which of the following
25 Jun 2014, 23:14
How I did this (using prime factors/prime boxes):
35x has the following prime factors (pf): 5, 7, x (x could be anything, but we leave that for now)
20y has the following prime factors (pf): 2, 2, 5, y (again, y could be anything, but we leave that for now)
So the GCF is at least 5, and possibly larger depending on x and y.
A. 5 => Eliminate, as the GCF can be 5.
B. 5(x − y) => Leave this option for now, or pick numbers to check. I left it for later (there was no need to come back, as I got C as the answer.)
C. 20x = 2 * 2 * 5 * x. For 20x to be the GCF it must divide 35x; 35 has no 2's, so the two 2's would have to come from x — but even then 35x/20x = 7/4 is never an integer. This can never be the GCF, hence it is the answer.
D. 20y = 2 * 2 * 5 * y; if x = 4 and y = 1 (numbers we pick to show this option can occur, so it is not the answer), the GCF is 20y.
E. 35x = 5 * 7 * x; if y = 7 and x = 1 (picked likewise), the GCF is 35x.
Current Student
Joined: 12 Aug 2015
Posts: 2633
Schools: Boston U '20 (M)
GRE 1: Q169 V154
Re: If x and y are positive integers, which of the following
14 Mar 2016, 01:24
I was able to arrive at C: for all the other choices I could produce values giving that GCD,
but C never came out true,
hence I chose C.
Then I realized why: 20x cannot be the GCD, because the GCD must divide 35x, and 35x/20x = 7/4 is never an integer.
_________________
MBA Financing:- INDIAN PUBLIC BANKS vs PRODIGY FINANCE!
Getting into HOLLYWOOD with an MBA!
The MOST AFFORDABLE MBA programs!
STONECOLD's BRUTAL Mock Tests for GMAT-Quant(700+)
AVERAGE GRE Scores At The Top Business Schools!
EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 12863
Location: United States (CA)
GMAT 1: 800 Q51 V49
GRE 1: Q170 V170
Re: If x and y are positive integers, which of the following
15 Feb 2018, 12:21
Hi All,
This question can be solved with math "theory" or by TESTing VALUES. Here's how to eliminate the 4 wrong answers by TESTing VALUES...
We're told that X and Y are POSITIVE INTEGERS. We're asked which of the following CANNOT be the greatest common divisor of 35x and 20y.
IF...X = 1, Y = 1...
35 and 20 have a GCD of 5.
IF...X = 3, Y = 2...
105 and 40 have a GCD of 5. 5(3-2) = 5
IF...X = 4, Y = 1...
140 and 20 have a GCD of 20. 20(1) = 20
IF...X = 2, Y = 7...
70 and 140 have a GCD of 70. 35(2) = 70
GMAT assassins aren't born, they're made,
Rich
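The four TESTed pairs above can be confirmed with Python's math.gcd (a quick verification I added; each pair hits one of the four wrong answer choices):

```python
from math import gcd

# (x, y) pairs from the post, with the answer-choice value each produces.
cases = [
    ((1, 1), 5),   # A: 5
    ((3, 2), 5),   # B: 5(x - y) = 5(3 - 2)
    ((4, 1), 20),  # D: 20y = 20(1)
    ((2, 7), 70),  # E: 35x = 35(2)
]
for (x, y), expected in cases:
    g = gcd(35 * x, 20 * y)
    print(f"x={x}, y={y}: gcd({35 * x}, {20 * y}) = {g}")
    assert g == expected
```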
_________________
760+: Learn What GMAT Assassins Do to Score at the Highest Levels
Contact Rich at: Rich.C@empowergmat.com
# Rich Cohen
Co-Founder & GMAT Assassin
Special Offer: Save \$75 + GMAT Club Tests Free
Official GMAT Exam Packs + 70 Pt. Improvement Guarantee
www.empowergmat.com/
*****Select EMPOWERgmat Courses now include ALL 6 Official GMAC CATs!*****
CEO
Status: GMATINSIGHT Tutor
Joined: 08 Jul 2010
Posts: 2700
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: If x and y are positive integers, which of the following
20 Oct 2018, 22:53
cul3s wrote:
If x and y are positive integers, which of the following CANNOT be the greatest common divisor of 35x and 20y?
A. 5
B. 5(x-y)
C. 20x
D. 20y
E. 35x
Please find the video solution of the question attached here. Subscribe to our YouTube channel if you want more such concepts and videos.
Attachments
File comment: www.GMATinsight.com
Screenshot 2018-10-21 at 12.17.36 PM.png
_________________
Prosper!!!
GMATinsight
Bhoopendra Singh and Dr.Sushma Jha
e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772
Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi
http://www.GMATinsight.com/testimonials.html
ACCESS FREE GMAT TESTS HERE:22 ONLINE FREE (FULL LENGTH) GMAT CAT (PRACTICE TESTS) LINK COLLECTION
Display posts from previous: Sort by | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7769597768783569, "perplexity": 1619.4026016161715}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742981.53/warc/CC-MAIN-20181116070420-20181116092420-00200.warc.gz"} |
https://brilliant.org/practice/turning-points/ | Calculus
# Turning Points
Given the function $$f(x) = \frac{ x^2-8x + 12}{x^2+6x - 16}$$, what is $$\displaystyle{\lim_{ x \rightarrow 2 } f(x)}$$?
How many integers $$k$$ are there such that the function $f(x)=x^3+kx^2+3x+2$ has no turning points?
Let $$f(x)=x^3-6x^2+14x+9.$$ What is the sum of the $$x$$-coordinates of turning points such that $$f(x)$$ switches from a decreasing function to an increasing function?
A polynomial of degree $$25$$ has $$m$$ real roots and $$n$$ turning points. What is the maximum value of $$m+n$$?
What is the sum of all the $$x$$-coordinates of the turning points in the graph of $f(x)=-2x^3+18x^2-30x+9?$
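For the first problem, both quadratics vanish at x = 2, so the limit can be found by cancelling the common factor (x − 2); a quick numerical check (my own sketch, not part of the problem set):

```python
# f(x) = (x^2 - 8x + 12) / (x^2 + 6x - 16)
#      = (x - 2)(x - 6) / ((x - 2)(x + 8))  ->  (2 - 6)/(2 + 8) = -2/5 as x -> 2
def f(x):
    return (x * x - 8 * x + 12) / (x * x + 6 * x - 16)

for h in (1e-2, 1e-4, 1e-6):
    print(f(2 + h), f(2 - h))  # both approach -0.4
```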
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6273666024208069, "perplexity": 113.43262590107915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00066.warc.gz"} |
https://ysharifi.wordpress.com/tag/semisimple/ | ## Posts Tagged ‘semisimple’
Throughout $k$ is a field, $K$ is the algebraic closure of $k$ and $A$ is a finite dimensional central simple $k$-algebra.
Lemma. $A \otimes_k K \cong M_n(K),$ for some integer $n.$
Proof. Let $S:=A \otimes_k K.$ By the first part of the corollary in this post we know that $S$ is simple. We also have
$Z(S) = Z(A) \otimes_k K = k \otimes_k K \cong K.$
It is easy to see that if $\{a_i \}$ is a $k$-basis for $A,$ then $\{a_i \otimes_k 1 \}$ is a $K$-basis for $S.$ Thus $\dim_K S = \dim_k A.$ So $S$ is a finite dimensional central simple $K$-algebra and hence, since $K$ is algebraically closed, $S \cong M_n(K),$ for some $n,$ by Remark 2 in this post. $\Box$
Theorem. If $A$ is a finite dimensional central simple $k$-algebra, then $\dim_k A$ is a perfect square.
Proof. By the lemma, there exists an integer $n$ such that $A \otimes_k K \cong M_n(K).$ Thus
$\dim_k A = \dim_K A \otimes_k K = \dim_K M_n(K) = n^2. \ \Box$
Definition. The degree of $A$ is defined by $\deg A = \sqrt{\dim_k A}.$
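Two standard examples (added here for illustration) show the degree in action:

```latex
% Matrix algebras: \dim_k M_n(k) = n^2, so
\deg M_n(k) = \sqrt{n^2} = n.
% Hamilton's quaternions \mathbb{H} form a central simple \mathbb{R}-algebra
% with basis 1, i, j, k, so
\deg \mathbb{H} = \sqrt{\dim_{\mathbb{R}} \mathbb{H}} = \sqrt{4} = 2.
```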
Remark. Let $R$ be a finite dimensional $k$-algebra. Then $R$ is reduced if and only if $R$ is a finite direct product of finite dimensional division $k$-algebras. In this case, $\dim_k R=n^2\dim_k Z(R)$ for some integer $n \geq 1.$
Proof. Obviously $R$ is (left) Artinian because $\dim_k R < \infty$ and so $J(R)$ is nilpotent. Thus $J(R)=(0)$ because $R$ is reduced and so $R$ is semisimple. The result now follows from the Artin-Wedderburn theorem. The converse is trivial. Finally, the fact that, by the above theorem, $\dim_{Z(D)}D$ is a perfect square for any finite dimensional division algebra $D,$ proves the last part of the remark. $\Box$
## von Neumann Regular rings (1)
Posted: October 22, 2010 in Noncommutative Ring Theory Notes, von Neumann Regular rings
Definition. A ring $R$ is called von Neumann regular, or just regular, if for every $a \in R$ there exists $x \in R$ such that $a=axa.$
Remark 1. Regular rings are semiprimitive. To see this, let $R$ be a regular ring. Let $a \in J(R),$ the Jacobson radical of $R,$ and choose $x \in R$ such that $a=axa.$ Then $a(1-xa)=0$ and, since $1-xa$ is invertible because $a$ is in the Jacobson radical of $R,$ we get $a=0.$
Example 1. Every division ring is obviously regular because if $a = 0,$ then $a=axa$ for all $x$ and if $a \neq 0,$ then $a=axa$ for $x = a^{-1}.$
Example 2. Every direct product of regular rings is clearly a regular ring.
Example 3. If $V$ is a vector space over a division ring $D,$ then ${\rm End}_D V$ is regular.
Proof. Let $R={\rm End}_D V$ and $f \in R.$ There exist vector subspaces $V_1, V_2$ of $V$ such that $\ker f \oplus V_1 = {\rm im}(f) \oplus V_2 = V.$ So if $u \in V,$ then $u=u_1+u_2$ for some unique elements $u_1 \in {\rm im}(f)$ and $u_2 \in V_2.$ We also have $u_1 = v_1 + v$ for some unique elements $v_1 \in \ker f$ and $v \in V_1.$ Now define $g: V \longrightarrow V$ by $g(u)=v.$ It is obvious that $g$ is well-defined and easy to see that $g \in R$ and $fgf=f. \ \Box$
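A concrete instance of Example 3 (my own illustration, with $D=\mathbb{Q}$ and $V=\mathbb{Q}^2$): the rank-one endomorphism $f$ below is not invertible, yet a $g$ with $fgf=f$ exists — here $g=\frac{1}{2}\,\mathrm{id}$ works, because $f^2=2f$.

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

f = [[F(1), F(1)], [F(1), F(1)]]          # rank 1, so not invertible
g = [[F(1, 2), F(0)], [F(0), F(1, 2)]]    # (1/2) * identity

print(matmul(matmul(f, g), f) == f)        # True: f g f = f
```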
Example 4. Every semisimple ring is regular.
Proof. For a division ring $D$ the ring $M_n(D) \cong End_D D^n$ is regular by Example 3. Now apply Example 2 and the Wedderburn-Artin theorem.
Theorem. A ring $R$ is regular if and only if every finitely generated left ideal of $R$ is generated by an idempotent.
Proof. Suppose first that every finitely generated left ideal of $R$ can be generated by an idempotent. Let $x \in R.$ Then $I=Rx = Re$ for some idempotent $e.$ That is $x = re$ and $e=sx$ for some $r,s \in R.$ But then $xsx=xe=re^2=re=x.$ Conversely, suppose that $R$ is regular. We first show that every cyclic left ideal $I=Rx$ can be generated by an idempotent. This is quite easy to see: let $y \in R$ be such that $xyx=x$ and let $yx=e.$ Clearly $e$ is an idempotent and $xe=x.$ Thus $x \in Re$ and so $I \subseteq Re.$ Also $e=yx \in I$ and hence $Re \subseteq I.$ So $I=Re$ and we’re done for this part. To complete the proof of the theorem we only need to show that if $J=Rx_1 + Rx_2,$ then there exists some idempotent $e \in R$ such that $J=Re.$ To see this, choose an idempotent $e_1$ such that $Rx_1=Re_1.$ Thus $J=Re_1 + Rx_2(1-e_1).$ Now choose an idempotent $e_2$ such that $Rx_2(1-e_1)=Re_2$ and put $e_3=(1-e_1)e_2.$ See that $e_3$ is an idempotent, $e_1e_3=e_3e_1=0$ and $Re_2=Re_3.$ Thus $J=Re_1 + Re_3.$ Let $e=e_1+e_3.$ Then $e$ is an idempotent and $J=Re. \Box$
Corollary. If the number of idempotents of a regular ring $R$ is finite, then $R$ is semisimple.
Proof. By the theorem, $R$ has only a finite number of left principal ideals. Since every left ideal is a sum of left principal ideals, it follows that $R$ has only a finite number of left ideals and hence it is left Artinian. Thus $R$ is semisimple because $R$ is semiprimitive by Remark 1. $\Box$
Remark 2. The theorem is also true for finitely generated right ideals. The proof is similar.
Remark 3. Since, by the Wedderburn-Artin theorem, a commutative ring is semisimple if and only if it is a finite direct product of fields, it follows from the Corollary that if the number of idempotents of a commutative von Neumann regular ring $R$ is finite, then $R$ is a finite direct product of fields.
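As a sanity check on Remark 3 (my own addition): $\mathbb{Z}/n\mathbb{Z}$ is commutative with finitely many idempotents, and it is von Neumann regular exactly when $n$ is squarefree — e.g. $\mathbb{Z}/6 \cong \mathbb{F}_2 \times \mathbb{F}_3$ is regular, while $\mathbb{Z}/4,$ which has the nilpotent element $2,$ is not.

```python
def is_regular_mod(n):
    """Is Z/nZ von Neumann regular, i.e. does every a have x with a*x*a = a?"""
    return all(any(a * x * a % n == a for x in range(n)) for a in range(n))

print(is_regular_mod(6), is_regular_mod(4))  # True False

# Idempotents of Z/6; per the theorem above, they generate its ideals:
print([e for e in range(6) if e * e % 6 == e])  # [0, 1, 3, 4]
```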
## Ring of endomorphisms (3)
Posted: June 9, 2010 in Noncommutative Ring Theory Notes, Ring of Endomorphisms
Schur’s lemma states that if $A$ is a simple $R$ module, then $\text{End}_R(A)$ is a division ring. A similar easy argument shows that:
Example 6. For simple $R$-modules $A \ncong B$ we have $\text{Hom}_R(A,B)=\{0\}.$
Let’s generalize Schur’s lemma: let $M$ be a finite direct product of simple $R$-submodules. So $M \cong \bigoplus_{i=1}^k M_i^{n_i},$ where each $M_i$ is a simple $R$-module and $M_i \ncong M_j$ for all $i \neq j.$ Therefore, by Example 6 and Theorem 1, $\text{End}_R(M) \cong \bigoplus_{i=1}^k \mathbb{M}_{n_i}(D_i),$ where $D_i = \text{End}_R(M_i)$ is a division ring by Schur’s lemma. An important special case is when $R$ is a semisimple ring. (Note that simple submodules of a ring are exactly minimal left ideals of that ring.)
Theorem 2. (Artin-Wedderburn) Let $R$ be a semisimple ring. There exist a positive integer $k$ and division rings $D_i, \ 1 \leq i \leq k,$ such that $R \cong \bigoplus_{i=1}^k \mathbb{M}_{n_i}(D_i)$.
Proof. Obvious, by Example 1 and the above discussion. $\Box$
Some applications of Theorem 2.
1. A commutative semisimple ring is a finite direct product of fields.
2. A reduced semisimple ring is a finite direct product of division rings.
3. A finite reduced ring is a finite direct product of finite fields.
## Quotient rings; some facts (2)
Posted: January 2, 2010 in Noncommutative Ring Theory Notes, Quotient Rings
For the first part see here.
6) If $R$ is simple, then $Z(R)=Z(Q).$
Proof. Let $x=s^{-1}a \in Z(Q).$ Then from $xs=sx$ we get $sa=as$ and thus $s^{-1}a=as^{-1}.$ Hence for every $b \in R$ we’ll have $s^{-1}ab=bs^{-1}a=bas^{-1},$ which gives us $abs=sba.$ Also, since $R$ is simple, $RsR=R,$ which means $\sum_{i=1}^n b_isc_i = 1,$ for some $b_i, \ c_i \in R.$ Thus $\sum_{i=1}^n sb_iac_i = \sum_{i=1}^n ab_isc_i = a=sx$ and so $x=\sum_{i=1}^n b_iac_i \in R.$ Therefore $x \in Z(R)$ which proves $Z(Q) \subseteq Z(R).$ Conversely, let $b \in Z(R)$ and $x=s^{-1}a \in Q$. Since $bs=sb,$ we have $s^{-1}b=bs^{-1}$ and thus $bx=bs^{-1}a=s^{-1}ba=s^{-1}ab=xb$ and so $b \in Z(Q). \Box$
7) The left uniform dimension of $R$ and $Q$ are equal.
Proof. We saw in the previous section that the left ideals of $Q$ are exactly in the form $QI,$ where $I$ is a left ideal of $R.$ Clearly $\sum QI_i$ is direct iff $\sum I_i$ is direct.
8) Let $N$ be a nilpotent ideal of $R$ and let $I$ be the right annihilator of $N$ in $R.$ Then $I$ is an essential left ideal of $R$ and hence $QI$ is an essential left ideal of $Q.$
Proof. Let $I$ be the right annihilator of $N$ in $R.$ For an essential left ideal $J$ of $R$ the left ideal $QJ$ of $Q$ is essential in $Q$ because for every non-zero left ideal $K$ of $R : (0) \neq \ Q(J \cap K) \subseteq QJ \cap QK.$ So we only need to prove the first part of the claim. Let $J$ be any non-zero left ideal of $R$ and put $n=\max \{k \geq 0 : \ N^k J \neq (0) \}.$ Then $(0) \neq N^n J \subseteq I \cap J.$
9) If $Q$ is semisimple, then $R$ is semiprime.
Proof. So we need to prove that $R$ has no non-zero nilpotent ideal. Suppose that $N$ is a nilpotent ideal of $R$ and let $I$ be the right annihilator of $N$ in $R.$ Since $Q$ is semisimple, $QI \oplus A = Q,$ for some left ideal $A$ of $Q.$ But, from the previous fact, we know that $QI$ is essential in $Q$ and thus $A=(0),$ i.e. $QI=Q.$ Thus $s^{-1}a=1,$ for some $a \in I=\text{r.ann}_R N.$ So $s=a$ and $Ns=Na=(0).$ Thus $N=Nss^{-1}=(0).$
We proved, in the previous section, that if $R$ is prime, then $Q$ is prime too.
10) If $Q$ is simple, then $R$ is prime.
Proof. Let $I,J$ be two non-zero ideals of $R.$ We need to show that $IJ \neq (0).$ We have $QIQ=Q,$ because $I \neq (0)$ and $Q$ is simple. Therefore $1=\sum_{i=1}^n x_ia_iy_i,$ for some $x_i,y_i \in Q$ and $a_i \in I.$ We can write $x_i = s^{-1}b_i,$ for some $b_i \in R.$ Then $s=\sum_{i=1}^n b_ia_iy_i \in IQ.$ So $IQ$ is a right ideal of $Q$ which contains a unit. Thus $IQ=Q.$ Similarly $JQ=Q$ and hence $IJQ=Q.$ As a result, $IJ \neq (0). \Box$
## Semiprimitivity of C[G]
Posted: December 2, 2009 in Group Algebras, Noncommutative Ring Theory Notes
Notation. For a ring $R$ let $J(R)$ be the Jacobson radical of $R.$
Definition. Recall that if $k$ is a field and $G$ is a group, then the group algebra $k[G]$ has two structures. Firstly, as a vector space over $k,$ it has $G$ as a basis, i.e. every element of $k[G]$ is uniquely written as $\sum_{g \in G} x_g g,$ where $x_g \in k$ and only finitely many $x_g$ are non-zero. In particular, $\dim_k k[G]=|G|,$ as cardinal numbers. Secondly, multiplication is also defined in $k[G].$ If $x = \sum_{g \in G} x_g g$ and $y = \sum_{g \in G} y_g g$ are two elements of $k[G],$ then we just multiply $xy$ in the ordinary fashion using the distributive law. To be more precise, we define $xy = \sum_{g \in G} z_g g,$ where $z_g = \sum_{rs=g} x_r y_s.$
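The product formula $z_g = \sum_{rs=g} x_r y_s$ is just a convolution over the group; here is a minimal sketch (my own, with $G=\mathbb{Z}/3$ written additively and coefficients in $\mathbb{Q}$):

```python
from fractions import Fraction as F

def multiply(x, y, n=3):
    """Product in k[Z/n]: elements are dicts {group element: coefficient}."""
    z = {g: F(0) for g in range(n)}
    for r, xr in x.items():
        for s, ys in y.items():
            z[(r + s) % n] += xr * ys   # z_g = sum over r+s=g of x_r * y_s
    return z

x = {0: F(1), 1: F(2)}   # 1 + 2g
y = {1: F(1), 2: F(3)}   # g + 3g^2
print(multiply(x, y))    # (1 + 2g)(g + 3g^2) = 6 + g + 5g^2, using g^3 = 1
```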
We are going to prove that $J(\mathbb{C}[G])=0,$ for every group $G.$
Lemma. $J(\mathbb{C}[G])$ is nil, i.e. every element of $J(\mathbb{C}[G])$ is nilpotent.
Proof. If $G$ is countable, we are done by this theorem. For the general case, let $\alpha \in J(\mathbb{C}[G]).$ So $\alpha =\sum_{i=1}^n c_ig_i,$ for some $c_i \in \mathbb{C}, \ g_i \in G.$ Let $H=\langle g_1,g_2, \cdots , g_n \rangle.$ Clearly $\alpha \in \mathbb{C}[H]$ and $H$ is countable. So to complete the proof, we only need to show that $\alpha \in J(\mathbb{C}[H]).$ Write $G = \bigcup_i x_iH,$ where $x_iH$ are the distinct cosets of $H$ in $G.$ Then $\mathbb{C}[G]=\bigoplus_i x_i \mathbb{C}[H],$ which means $\mathbb{C}[G]=\mathbb{C}[H] \oplus K,$ for some right $\mathbb{C}[H]$-module $K.$ Now let $\beta \in \mathbb{C}[H].$ Since $\alpha \in J(\mathbb{C}[G]),$ there exists $\gamma \in \mathbb{C}[G]$ such that $\gamma (1 - \beta \alpha ) = 1.$ We also have $\gamma = \gamma_1 + \gamma_2,$ for some $\gamma_1 \in \mathbb{C}[H], \ \gamma_2 \in K.$ That now gives us $\gamma_1(1 - \beta \alpha)=1. \ \Box$
Theorem. $J(\mathbb{C}[G])=0,$ for any group $G.$
Proof. For any $x =\sum_{i=1}^n c_i g_i\in \mathbb{C}[G]$ define
$x^* = \sum_{i=1}^n \overline{c_i} g_i^{-1}.$
It’s easy to see that $xx^*=0$ if and only if $x=0$ and for all $x,y \in \mathbb{C}[G]: \ (xy)^*=y^*x^*.$ Now suppose that $J(\mathbb{C}[G]) \neq 0$ and let $0 \neq \alpha \in J(\mathbb{C}[G]).$ Put $\beta = \alpha \alpha^* \in J(\mathbb{C}[G]).$ By what I just mentioned $\beta \neq 0$ and $(\beta^m)^* = (\beta^*)^m=\beta^m,$ for all positive integers $m.$ By the lemma, there exists $k \geq 2$ such that $\beta^k = 0$ and $\beta^{k-1} \neq 0.$ Thus $\beta^{k-1} (\beta^{k-1})^* = \beta^{2k-2} = 0,$ which implies that $\beta^{k-1} = 0.$ Contradiction! $\Box$
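For the smallest nontrivial case (an illustration I added; it works over $\mathbb{Q}$ just as over $\mathbb{C}$): in $k[\mathbb{Z}/2]$ the orthogonal idempotents $e=\frac{1+g}{2}$ and $f=\frac{1-g}{2}$ split the group algebra as $ke \oplus kf \cong k \times k,$ a product of fields, so its Jacobson radical is zero.

```python
from fractions import Fraction as F

# Elements of k[Z/2] as pairs (a, b) = a*1 + b*g, with g^2 = 1.
def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c + b * d, a * d + b * c)

one = (F(1), F(0))
e = (F(1, 2), F(1, 2))     # (1 + g)/2
f = (F(1, 2), F(-1, 2))    # (1 - g)/2

assert mul(e, e) == e and mul(f, f) == f    # both idempotent
assert mul(e, f) == (F(0), F(0))            # orthogonal: e*f = 0
assert (e[0] + f[0], e[1] + f[1]) == one    # e + f = 1
print("k[Z/2] = k*e (+) k*f, a product of two copies of k")
```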
Corollary. If $G$ is finite, then $\mathbb{C}[G]$ is semisimple.
Proof. We just proved that $J(\mathbb{C}[G])=(0).$ So we just need to show that $\mathbb{C}[G]$ is Artinian. Let
$I_1 \supset I_2 \supset \cdots \ \ \ \ \ \ \ \ \ \ \ \ \ (*)$
be a descending chain of left ideals of $\mathbb{C}[G].$ Obviously each $I_j$ is a $\mathbb{C}$-subspace of $\mathbb{C}[G].$ Thus each $I_j$ is finite dimensional because $\dim_{\mathbb{C}} \mathbb{C}[G]=|G| < \infty.$ Hence $(*)$ will stablize at some point because $\dim_{\mathbb{C}} I_1 < \infty$ and $\dim_{\mathbb{C}}I_1 > \dim_{\mathbb{C}} I_2 > \cdots .$ Thus $\mathbb{C}[G]$ is (left) Artinian and the proof is complete because we know a ring is semisimple if and only if it is (left) Artinian and its Jacobson radical is zero. $\Box$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 335, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9933694005012512, "perplexity": 101.53921312785442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00398-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://publications.lib.chalmers.se/publication/4763-eleven-dimensional-supergravity-in-light-cone-superspace | CPL - Chalmers Publication Library
# ELEVEN-DIMENSIONAL SUPERGRAVITY IN LIGHT-CONE SUPERSPACE
Lars Brink (Institutionen för teoretisk fysik och mekanik, Elementarpartikelfysik)
JHEP, in print (2005).
We show that Supergravity in eleven dimensions can be described in terms of a constrained superfield on the light-cone, without the use of auxiliary fields. We build its action to first order in the gravitational coupling constant \kappa, by "oxidizing" (N=8,d=4) Supergravity. This is simply achieved, as for N=4 Yang-Mills, by extending the transverse derivatives into superspace. The eleven-dimensional SuperPoincare algebra is constructed and a fourth order interaction is conjectured. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9513615965843201, "perplexity": 4489.89065733067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279657.18/warc/CC-MAIN-20170116095119-00085-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://bitbucket.org/prologic/devpi | # devpi: PyPI server and packaging/testing/release tool
devpi is a meta package installing two other packages:
• devpi-server: for serving a pypi.python.org consistent caching index as well as local github-style overlay indexes.
• devpi-client: command line tool with sub commands for creating users, using indexes, uploading to and installing from indexes, as well as a "test" command for invoking tox.
For getting started see http://doc.devpi.net/
Holger Krekel, October 2013 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8033643364906311, "perplexity": 29848.608347621037}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00002.warc.gz"} |
http://mathhelpforum.com/calculus/138898-convergence-integral-print.html

# Convergence of an integral
• April 13th 2010, 05:03 AM
Gok2
Convergence of an integral
Hey people.
Anyone have any idea why the integral $\int_2^{\infty} \frac{\ln x}{(x-1)\sqrt{x+1}}\,dx$ converges?
I tried to use all the tricks I know, Dirichlet's test for convergence of integrals, the comparison test , but nothing worked for me....
Any clue how to show that this integral converges?
Thanks!
• April 13th 2010, 05:38 AM
Laurent
Quote:
Originally Posted by Gok2
Hey people.
Anyone have any idea why the integral $\int_2^{\infty} \frac{\ln x}{(x-1)\sqrt{x+1}}\,dx$ converges?
I tried to use all the tricks I know, Dirichlet's test for convergence of integrals, the comparison test , but nothing worked for me....
Any clue how to show that this integral converges?
Thanks!
Choose $0<\epsilon<\frac{1}{2}$. You can note that $\ln x\leq x^{\epsilon}$ when $x$ is large enough (because the ratio goes to 0), hence for such large $x$, the integrand is less that $\frac{x^\epsilon}{(x-1)\sqrt{x+1}}\sim \frac{1}{x^{1+\frac{1}{2}-\epsilon}}$, and the exponent is greater than $1$ because of the choice of epsilon small enough. Hence the convergence.
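Laurent's comparison argument can be sanity-checked numerically: the partial integrals $\int_2^T$ should level off as $T$ grows, with successive increments shrinking. A throwaway sketch using only the Python standard library (the truncation points and step counts are arbitrary choices, not part of the argument):

```python
import math

def integrand(x):
    return math.log(x) / ((x - 1) * math.sqrt(x + 1))

def integral_to(T, steps=50_000):
    # plain trapezoidal rule on [2, T]
    h = (T - 2.0) / steps
    total = 0.5 * (integrand(2.0) + integrand(T))
    for i in range(1, steps):
        total += integrand(2.0 + i * h)
    return total * h

# increments between successive truncation points should shrink,
# matching the x^(-(3/2 - epsilon)) tail bound from the argument above
for T in (10, 100, 1000, 10_000):
    print(T, round(integral_to(T), 4))
```

The increments shrink roughly like $T^{-1/2}\ln T$, which is the tail of the comparison integral.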
• April 13th 2010, 06:06 AM
Gok2
Hmm I see, think I got it. thanks a lot!
https://www.effortlessmath.com/math-topics/ssat-middle-level-math-practice-test-questions/

# SSAT Middle Level Math Practice Test Questions
Preparing your student for the SSAT Middle Level Math test? Try these free SSAT Middle Level Math Practice questions. Reviewing practice questions is the best way to brush up your student’s Math skills. Here, we walk you through solving 10 common SSAT Middle Level Math practice problems covering the most important math concepts on the SSAT Middle Level Math test.
These SSAT Middle Level Math practice questions are designed to be similar to those found on the real SSAT Middle Level Math test. They will assess your student’s level of preparation and will give you a better idea of what your student needs to study on his/her exam.
## 10 Sample SSAT Middle Level Math Practice Questions
1- In a group of $$5$$ books, the average number of pages is $$24$$. Mary adds a book with $$36$$ pages to the group. What is the new average number of pages per book?
☐A. 20
☐B. 22
☐C. 24
☐D. 26
☐E. 30
2- A football team won exactly $$70\%$$ of the games it played last season. Which of the following could be the total number of games the team played last season?
☐A. 49
☐B. 40
☐C. 32
☐D. 12
☐E. 9
3- If a gas tank can hold $$35$$ gallons, how many gallons does it contain when it is $$\frac{2}{5}$$ full?
☐A. 50
☐B. 125
☐C. 62.5
☐D. 14
☐E. 8
4- What is the value of $$𝑥$$ in the following figure? (Figure is not drawn to scale)
☐A. 150
☐B. 145
☐C. 125
☐D. 105
☐E. 85
5- The capacity of a red box is $$20\%$$ bigger than the capacity of a blue box. If $$36$$ books can be put in the red box, how many books can be put in the blue box?
☐A. 15
☐B. 20
☐C. 24
☐D. 30
☐E. 32
6- A taxi driver earns $$\$8$$ per hour of work. If he works $$10$$ hours a day and uses $$2$$ liters of petrol per hour at $$\$1$$ per liter, how much money does he earn in one day?
☐A. $90
☐B. $88
☐C. $70
☐D. $60
☐E. $56
7- Which of the following is less than $$\frac{1}{5}$$?
☐A. $$\frac{1}{4}$$
☐B. 0.5
☐C. $$\frac{1}{7}$$
☐D. 0.28
☐E. 0.31
8- Amy and John work in the same company. Last month, both of them received a raise of $$20$$ percent. If Amy earns $$\$30.00$$ per hour now and John earns $$\$28.80$$, Amy earned how much more per hour than John before their raises?
☐A. $8.25
☐B. $4.25
☐C. $3.00
☐D. $2.25
☐E. $1.00
9- Three people can paint $$3$$ houses in $$12$$ days. How many people are needed to paint $$6$$ houses in $$6$$ days?
☐A. 6
☐B. 8
☐C. 12
☐D. 16
☐E. 20
10- If $$N×(6-3)=12$$ then $$N=?$$
☐A. 4
☐B. 12
☐C. 13
☐D. 14
☐E. 18
## Answers
1- D
In a group of $$5$$ books, the average number of pages is $$24$$. Therefore, the sum of pages in all $$5$$ books is $$(5×24=120)$$. Mary adds a book with $$36$$ pages to the group. Then, the sum of pages in all 6 books is $$(5×24+36=156)$$. The new average number of pages per book is:$$\frac{156}{6}=26$$
2- B
Choices A, C, D, and E are incorrect because $$70\%$$ of each of the numbers is a non-whole number.
A. $$49, 70\%$$ of $$49 = 0.70×49=34.3$$
B. $$40, 70\%$$ of $$40=0.70×40=28$$
C. $$32, 70\%$$ of $$32=0.70×32=22.4$$
D. $$12, 70\%$$ of $$12=0.70×12=8.4$$
E. $$9, 70\%$$ of $$9=0.70×9=6.3$$
3- D
$$\frac{2}{5}×35=\frac{70}{5}=14$$
4- A
$$x=25+125=150$$
5- D
The red box is $$20\%$$ bigger than the blue box. Let $$x$$ be the capacity of the blue box. Then:
$$x+20\%$$ of $$x=36→1.2x=36→x=\frac{36}{1.2}=30$$
6- D
$$8×10=80$$, Petrol use: $$10×2=20$$ liters, Petrol cost: $$20×1=20$$
Money earned: $$80-20=60$$
7- C
From the choices provided, only C $$(\frac{1}{7})$$ is less than $$\frac{1}{5}$$.
8- E
Amy earns $$30.00$$ per hour now. $$30.00$$ per hour is $$20$$ percent more than her previous rate. Let $$x$$ be her rate before her raise. Then: $$x+0.20x=30→1.2x=30→x=\frac{30}{1.2}=25$$
John earns $$\$28.80$$ per hour now. $$28.80$$ per hour is $$20$$ percent more than his previous rate. Let $$x$$ be John’s rate before his raise. Then: $$x+0.20x=28.80→1.2x=28.80→x=\frac{28.80}{1.2}=24$$, Amy earned $$1.00$$ more per hour than John before their raises.
9- C
Three people can paint $$3$$ houses in $$12$$ days. It means that for painting $$6$$ houses in $$12$$ days we need $$6$$ people. To paint $$6$$ houses in $$6$$ days, $$12$$ people are needed.
10- A
$$N×(6-3)=12→N×3=12→N=4$$
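As a cross-check, several of the worked answers above can be verified mechanically (a throwaway sketch; every number comes straight from the solutions):

```python
# Q1: new average after adding a 36-page book to five books averaging 24 pages
assert (5 * 24 + 36) / 6 == 26

# Q5: blue box capacity x, where x + 20% of x = 36
assert round(36 / 1.2, 10) == 30

# Q8: hourly rates before a 20% raise
assert round(30.00 / 1.2, 2) == 25.0
assert round(28.80 / 1.2, 2) == 24.0

# Q10: N * (6 - 3) = 12
assert 12 / (6 - 3) == 4

print("all checks pass")
```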
https://stats.stackexchange.com/questions/418728/metropolis-sampling-sample-order

# Metropolis Sampling sample order
I am new to Metropolis sampling, here is a question that confuses me. Assume that there are two sets of variables $$a$$ and $$b$$ we want to sample. Let $$X$$ denote the observations and $$p(X|a,b)$$ denote the likelihood. $$T$$ is the max iterations. Which of the followings are correct?
Option 1:
for t in 1...T:
new_a = ... //propose new variables a
p = min(1,p(X|new_a,b)/p(X|a,b))
thresh = ... //a random number generated from a uniform [0,1]
if thresh < p:
a = new_a //accept new_a
new_b = ... //propose new variables b
p = min(1,p(X|a,new_b)/p(X|a,b))
thresh = ... //a random number generated from a uniform [0,1]
if thresh < p:
b = new_b //accept new_b
Option 2:
for t in 1...T:
new_a = ... //propose new variables a
new_b = ... //propose new variables b
p = min(1,p(X|new_a,new_b)/p(X|a,b))
thresh = ... //a random number generated from a uniform [0,1]
if thresh < p:
a = new_a, b = new_b //accept both a and b
Essentially, option 1 samples $$a$$ and $$b$$ separately. In each iteration, it samples $$a$$ first, then it uses the value of $$a$$ (either accepted new value or the old value) to sample $$b$$. Option 2 samples $$a$$ and $$b$$ together, they are either both accepted or rejected.
Simulating both components $$a$$ and $$b$$ from the prior at the same time and accepting with probability $$1 \wedge \dfrac{p(X\mid a^\text{new},b^\text{new})}{p(X\mid a,b)}$$ is a regular (and valid) format of the Metropolis-Hastings algorithm.
Simulating each component $$a$$ and $$b$$ sequentially from the conditional priors $$\pi(a|b)$$ and $$\pi(b|a)$$ and accepting with probabilities $$1 \wedge \dfrac{p(X\mid a^\text{new},b)}{p(X\mid a,b)} \quad\text{and}\quad 1 \wedge \dfrac{p(X\mid a,b^\text{new})}{p(X\mid a,b)}$$ is a (valid) form of the Metropolis-Hastings-within-Gibbs algorithm.
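To make the two options concrete, here is a toy sketch. This is a hypothetical standalone example: an unnormalized bivariate standard normal stands in for $p(X\mid a,b)$, and the step sizes and iteration counts are arbitrary choices:

```python
import math
import random

def density(a, b):
    # unnormalized stand-in for p(X | a, b): independent standard normals
    return math.exp(-0.5 * (a * a + b * b))

def mh_joint(T=20_000, step=0.8, seed=1):
    """Option 2: propose (a, b) together, accept or reject as a pair."""
    rng = random.Random(seed)
    a = b = 0.0
    samples = []
    for _ in range(T):
        na = a + rng.uniform(-step, step)
        nb = b + rng.uniform(-step, step)
        if rng.random() < min(1.0, density(na, nb) / density(a, b)):
            a, b = na, nb
        samples.append((a, b))
    return samples

def mh_within_gibbs(T=20_000, step=0.8, seed=1):
    """Option 1: update a, then b, each with its own accept step."""
    rng = random.Random(seed)
    a = b = 0.0
    samples = []
    for _ in range(T):
        na = a + rng.uniform(-step, step)
        if rng.random() < min(1.0, density(na, b) / density(a, b)):
            a = na
        nb = b + rng.uniform(-step, step)
        if rng.random() < min(1.0, density(a, nb) / density(a, b)):
            b = nb
        samples.append((a, b))
    return samples
```

Both chains target the same distribution; with this toy density the sample means of $a$ and $b$ drift toward 0 under either scheme.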
https://forum.zettelkasten.de/discussion/2334/share-your-zk-plans-for-17-july-23-july-2022

# Share your ZK plans for 17 July - 23 July 2022
edited July 2022
What are your plans for zettelkasting this week?
My plans for this coming week include continuing my deep dive into The Tao of Travel by Paul Theroux. Theroux's book is an interesting turn on the philosophy of travel. I'm connecting ideas about the philosophy of travel with relativity and space/time.
I'm starting the book The Book of Form and Emptiness by Ruth Ozeki. I don't know what to expect at this point. The many layers of story in her first book moved me a lot: A Tale for the Time Being by Ruth Ozeki. I just ordered The Book of Form and Emptiness via InterLibrary Loan, and it took four months to get it (the anticipation built up my expectations!). It came across the country from the Yuma Public Library. Thanks, University of Idaho Library, for hunting it down and the community of Yuma for lending it!
Here are a few of the titles of zettel I've been working on last week. The top two in the list are still in my "#proofing" oven.
Intimate Travel Within 202207160757
The Block Universe 202207161523
B-Slipstream Time Hacking 202207161553
B-The Tao of Travel 202207091440
A-How Animals See Themselves 202207141208
Population, Cultural, and Seed Dispersion 202207130857
Umwelt Compassion 202207130858
Syndyasticon Zōon 202207140715
A-PD Pathology (NINDS) 202207130901
PD Clinical Trials 202207131645
A-Learning the Truth By Thinking 202207140716
The Dark Side of Flow 202207130855
Post edited by Will on
Will Simpson
The quality of our thinking is directly proportional to the quality of our reading. To think better, we must read better. - Rohan
kestrelcreek.com
• @Will said:
What are your plans for zettelkasting this week?
My plans for this coming week include continuing my deep dive into The Tao of Travel by Paul Theroux. Theroux's book is an interesting turn on the philosophy of travel. I'm connecting ideas about the philosophy of travel with relativity and space/time.
You may have read the book "The Time Traveler's Wife"? More grist for your mill
• This week begins with recovering around one hundred media files that were truncated to zero length in my Dropbox, probably due to an errant iOS or Android editing program. I may need a NAS in addition to cloud storage. Cross-OS compatibility isn't quite here.
Had the thought of proceeding through several references simultaneously, following the inverse of the Cantor pairing function $(\langle x, y\rangle:\mathbb{N}\times\mathbb{N}\rightarrow\mathbb{N})$. Each row is a reference, and the columns are section or chapter numbers. Just a fantasy.
Still plodding through a math project. I might say what it is here, instead of uploading a preprint, though both are possible...
Erdős #2. ZK software components. “If you’re thinking without writing, you only think you’re thinking.” -- Leslie Lamport. Replies sometimes delayed since life is short.
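For the curious, the Cantor pairing function and its inverse are short to implement; the reading-plan loop below is purely hypothetical (the reference names are made up):

```python
import math

def cantor_pair(x, y):
    """Bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    """Inverse bijection N -> N x N."""
    w = (math.isqrt(8 * z + 1) - 1) // 2   # largest w with w*(w+1)/2 <= z
    y = z - w * (w + 1) // 2
    return w - y, y

# Hypothetical interleaved reading plan: row = reference, column = section
refs = ["Reference A", "Reference B", "Reference C"]   # made-up names
for z in range(10):
    row, col = cantor_unpair(z)
    if row < len(refs):
        print(f"step {z}: read {refs[row]}, section {col}")
```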
• ## Previous week
• We attended a wedding; that's the 3rd this year, and there's one more to come in September. That'll be 4 weddings out of a total of 6 I will have attended to my whole life. Feels quite busy
• We harvested our first batch of potatoes. Was 5 plants worth of "Solist" which apparently requires 130kg of nitrogen per hectare. Go figure. Anyway, ~13kg of potatoes harvested! And that was maybe 2 square meters of about 30 we planted potato crops on. That's gonna be a yuuuuuge harvest later this year. (Fork for size reference)
I still have no script to quickly export a list of notes I modified, so my weekly retrospective is still limited to newly created notes:
• 202207151417 Software team leadership should favor autonomy and alignment
• 202207140952 Colorize CLI output with RGB hex codes in Ruby
• 202207140924 Jira ticket links in org buffers with bug-reference-mode
• 202207131809 Customize color of disclosure button in NSOutlineView
• 202207131615 Search git log for a string that was removed or added
• 202207131129 ME-Improved FM-Score. Old concept of @Sascha to track belly fat; I wrote a JavaScript tracking tool in 2013 but never extracted the gist into a note, I noticed.
• 202207130859 Insert enumerating counter in Emacs macros
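A minimal sketch of the missing "notes I modified recently" export, assuming plain-text zettel live in a single folder (the folder name, suffix, and cutoff are placeholders, not the actual setup):

```python
import os
import time
from pathlib import Path

def notes_modified_since(folder, days=7, suffix=".md"):
    """Return note paths modified in the last `days` days, newest first."""
    folder = Path(folder)
    if not folder.is_dir():
        return []
    cutoff = time.time() - days * 86400
    hits = [p for p in folder.glob(f"*{suffix}")
            if p.stat().st_mtime >= cutoff]
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

if __name__ == "__main__":
    # "zettelkasten" is a placeholder folder name
    for note in notes_modified_since("zettelkasten"):
        print(note.name)
```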
## This week
• More work-work from ~8h--16h. Am a couple weeks into practicing to time-box the freelance work more, so I have larger uninterrupted chunks of time in the evening.
• Get ahead of the ZK blog post editing curve. Finished 1 larger post last week, looking to get 2 more through by the end of this week.
@ZettelDistraction From personal experience I can absolutely recommend Unraid (https://unraid.net/) to reclaim any old computer as a NAS. You can throw any old disks into an Unraid box and set them up in an "array" with parity drives, so you could lose as many drives as you have parity drives in the disk array before you begin to lose data. (Unlike RAID arrays, the disks don't need to be of the same size and it's basically "plug and play".) -- I'm a network and system admin noob, so this was rather nice to set up network drives.
I went a step further and installed additional software (Nextcloud) to replace cloud storage. Works well for me so far, even for project collaborations. But exposing your NAS to the internet as a cloud storage server is a lot more involved. I do have notes on the topic in case you're interested and want to spend a weekend fiddling with everything
Author at Zettelkasten.de • https://christiantietze.de/
• Forgot one thing for this week -- The Archive's theme support (or lack thereof) for table styling gets on my nerves with variable-width fonts. Pondering to reshuffle priorities there
Author at Zettelkasten.de • https://christiantietze.de/
• edited July 2022
@ctietze Thanks for mentioning UNRAID. I vaguely recall encountering UNRAID over a decade ago, not quite as advanced if I am not mistaken. Before you mentioned UNRAID, I was considering either a Synology or QNAP NAS--now I am considering all three. There is no question that I have to do something. I like the idea of parity drives. Google's three-way copying is also supposed to be more reliable than RAID (I didn't check if UNRAID supports this).
I don't want to expose a NAS to the Internet. That would probably mean supporting a VPN, etc. I had OpenVPN going in my wasted youth.
UPDATE: typos and omissions corrected (and more introduced probably).
Post edited by ZettelDistraction on
Erdős #2. ZK software components. “If you’re thinking without writing, you only think you’re thinking.” -- Leslie Lamport. Replies sometimes delayed since life is short.
• @ZettelDistraction If you have the budget, a Synology would be nice. It's supposedly cheaper than QNAP, especially if one doesn't need the hardware to perform media transcoding or some such. The Synology DS718+ was recommended to me; the "+" line in general, actually. Rui Carmo uses a DS1019+ and is quite happy with it.
Author at Zettelkasten.de • https://christiantietze.de/
• @ctietze The higher-end Synology NAS configurations look like the way to go. I don't have a need to transcode media, however, running Linux/Ubuntu VMs for proof assistants and other software would be useful to me now.
Erdős #2. ZK software components. “If you’re thinking without writing, you only think you’re thinking.” -- Leslie Lamport. Replies sometimes delayed since life is short.
https://www.physicsforums.com/threads/a-work-problem.165868/

# A Work problem
1. Apr 15, 2007
### rootX
This question asks what is the work done by the tension in the cable. And, the book answered that it is equal to the work done by the gravity.
But shouldn't it be more than the work done by the gravity? (because there is also a horizontal displacement)
see the attached image
#### Attached Files:
• ###### lastscan.jpg
File size:
41.2 KB
Views:
30
2. Apr 15, 2007
### Andrew Mason
I can't see your attachment yet, what is the direction of the force? So what is $\vec{F} \cdot \vec{d}$ ?
AM
https://arxiv.org/abs/1304.8069 (cs.NA)
# Title: Fast Approximate Polynomial Multipoint Evaluation and Applications
Abstract: It is well known that, using fast algorithms for polynomial multiplication and division, evaluation of a polynomial $F \in \mathbb{C}[x]$ of degree $n$ at $n$ complex-valued points can be done with $\tilde{O}(n)$ exact field operations in $\mathbb{C},$ where $\tilde{O}(\cdot)$ means that we omit polylogarithmic factors. We complement this result by an analysis of approximate multipoint evaluation of $F$ to a precision of $L$ bits after the binary point and prove a bit complexity of $\tilde{O}(n(L + \tau + n\Gamma)),$ where $2^\tau$ and $2^\Gamma,$ with $\tau, \Gamma \in \mathbb{N}_{\ge 1},$ are bounds on the magnitude of the coefficients of $F$ and the evaluation points, respectively. In particular, in the important case where the precision demand dominates the other input parameters, the complexity is soft-linear in $n$ and $L$.
Our result on approximate multipoint evaluation has some interesting consequences on the bit complexity of further approximation algorithms which all use polynomial evaluation as a key subroutine. Of these applications, we discuss in detail an algorithm for polynomial interpolation and for computing a Taylor shift of a polynomial. Furthermore, our result can be used to derive improved complexity bounds for algorithms to refine isolating intervals for the real roots of a polynomial. For all of the latter algorithms, we derive near-optimal running times.
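For contrast with the fast algorithm, the naive baseline costs $O(n^2)$ operations: Horner evaluation at each of the $n$ points separately. A plain-Python sketch of that baseline (exact over the integers; the remainder-tree method the paper builds on replaces this with a soft-linear algorithm):

```python
def horner(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i), coefficients in increasing degree."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def multipoint_eval(coeffs, points):
    # O(n^2): one O(n) Horner pass per point; the fast method instead
    # uses a subproduct/remainder tree to reach soft-linear complexity
    return [horner(coeffs, x) for x in points]

# F(x) = 1 + 2x + 3x^2 at x = 0, 1, 2
print(multipoint_eval([1, 2, 3], [0, 1, 2]))  # → [1, 6, 17]
```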
Comments: minor editorial changes over the first version: revised references and mentioned related work
Subjects: Numerical Analysis (cs.NA); Symbolic Computation (cs.SC); Numerical Analysis (math.NA)
MSC classes: 65Y20
ACM classes: F.2.1; G.1.0
Cite as: arXiv:1304.8069 [cs.NA] (or arXiv:1304.8069v2 [cs.NA] for this version)
## Submission history
From: Alexander Kobel
[v1] Tue, 30 Apr 2013 17:01:11 UTC (21 KB)
[v2] Fri, 27 May 2016 09:11:12 UTC (287 KB)
https://www.nearly42.org/category/cstheory/page/2/

# Rolling a cube can be tricky
An amateur proof that the rolling cube puzzle is NP-complete.
Abstract
We settle two open problems related to the rolling cube puzzle: Hamiltonian cycles are not unique even in fully labeled boards and rolling cube puzzle is NP-complete in labeled boards without free cells and with blocked cells.
NOTE: another example of two distinct Hamiltonian cycles in a fully labeled board has also been found by Pálvölgyi Dömötör (see this post on mathoverflow).
# Have fun with Boulder Dash
An amateur proof that the popular game is NP-hard.
Abstract
Boulder Dash is a videogame created by Peter Liepa and Chris Gray in 1983 and released for many personal computers and console systems under license from First Star Software. Its concept is simple: the main character must dig through caves, collect diamonds, avoid falling stones and other nasties, and finally reach the exit within a time limit. In this report we show that the decision problem “Is an $N\times N$ Boulder Dash level solvable?” is NP-hard. The constructive proof is based on a simple gadget that allows us to transform the Hamiltonian cycle problem on a 3-connected cubic planar graph to a Boulder Dash level in polynomial time.
NOTE: the same result has been proved by G. Viglietta in the paper: Gaming Is a Hard Job, But Someone Has to Do It! ; his proof, which is embedded in a more general and powerful framework that can be used to prove complexity of games, doesn’t require the Dirt element.
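Both proofs reduce from the Hamiltonian cycle problem, which is easy to state in code even though no polynomial-time algorithm is known. A brute-force checker for small graphs (exponential time, for illustration only):

```python
from itertools import permutations

def has_hamiltonian_cycle(n, edges):
    """n vertices 0..n-1; edges is a collection of undirected pairs."""
    adj = {frozenset(e) for e in edges}
    # fix vertex 0 as the start to avoid re-checking rotations
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm + (0,)
        if all(frozenset((a, b)) in adj
               for a, b in zip(cycle, cycle[1:])):
            return True
    return False

square = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(has_hamiltonian_cycle(4, square))             # → True
print(has_hamiltonian_cycle(4, square - {(3, 0)}))  # → False
```

An NP-hardness proof like the ones above turns each instance of this problem into a puzzle level in polynomial time, so a fast level-solver would yield a fast Hamiltonian cycle algorithm.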
https://www.physicsforums.com/threads/what-angle-does-it-scatter-at-if-the-yellow-ball-is-scattered.358559/

# What angle does it scatter at if the yellow ball is scattered
1. Nov 28, 2009
### drkidd22
1. The problem statement, all variables and given/known data
The white ball (2kg) in the figure has a speed of 1.74 m/s and the yellow ball (1kg) is at rest prior to an elastic glancing collision. After the collision the white ball has a speed of 1.37 m/s. what angle does it scatter at if the yellow ball is scattered at 280 degrees?
2. Relevant equations
mva=mvacos(@)+mvbcos(@)
3. The attempt at a solution
2(1.37)Cos(@)a+0.58Cos280
2.74Cos@+.10
=92 degrees
I think I'm close, but not quite.
Last edited: Nov 28, 2009
2. Nov 28, 2009
### drkidd22
Re: momemtum
3. Nov 28, 2009
### Staff: Mentor
Re: momemtum
That's momentum conservation in one direction. What about the other? And what about the fact that the collision is elastic?
I don't understand what you're doing here. I don't see the full equation being used. Where did you get '1.37' and '0.58'? Show all your steps.
4. Nov 28, 2009
### drkidd22
Re: momemtum
1.37 is given as the speed of ball a after the collision.
0.58 is what I had found the speed of ball b to be after collision, but I think it's not correct as I'm not sure how to really do this problem. I can't really understand what the author of the book is trying to say on a similar problem.
5. Nov 28, 2009
### Staff: Mentor
Re: momemtum
OK.
How did you get this? (Hint: That's where the fact that the collision is elastic will come in handy.)
6. Nov 28, 2009
### drkidd22
Re: momemtum
ok, so I think .58 was incorrect.
mava+mbvb = mava'+mbvb'
2(1.74)+0=2(1.37)+vb'
.74m/s = vb'
Right?
7. Nov 28, 2009
### drkidd22
Re: momemtum
When I put this in I still don't get the right answer.
2(1.37)Cos(@)+.74Cos280
2.74Cos@+.13
=92.72 degrees
8. Nov 28, 2009
### Staff: Mentor
Re: momemtum
No. That equation isn't valid. (Momentum is a vector--direction matters.)
Instead, make use of the fact that the collision is elastic. What does that mean?
9. Nov 28, 2009
### drkidd22
Re: momemtum
KE is also conserved
10. Nov 28, 2009
### Staff: Mentor
Re: momemtum
Right! Use that to determine the speed of the yellow ball after the collision.
11. Nov 28, 2009
### drkidd22
Re: momemtum
3.0276 = 1.8769 + .5(v^2)
(3.0276 - 1.8769)/(.5) = v^2
Vb' = 1.51 m/s
12. Nov 28, 2009
### Staff: Mentor
Re: momemtum
Looks good. (I get 1.52, when I round off.)
13. Nov 28, 2009
### drkidd22
Re: momemtum
0 = 2(1.37)Sin@+1.52Sin280
0 = 2.74Sin@ - 1.50
= 33 degree
I think that's right.
Thanks a million. Took me while to understand it, but I got it. Thanks.
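The whole solution can be replayed numerically (a sketch; masses, speeds, and the 280° yellow-ball angle are taken from the problem statement):

```python
import math

m_w, m_y = 2.0, 1.0      # white and yellow ball masses (kg)
v0, v_w = 1.74, 1.37     # white ball speed before / after (m/s)

# elastic collision: kinetic energy is conserved
v_y = math.sqrt((m_w * v0**2 - m_w * v_w**2) / m_y)

# momentum perpendicular to the incoming direction must cancel:
# 0 = m_w*v_w*sin(theta) + m_y*v_y*sin(280 deg)
theta = math.degrees(math.asin(-m_y * v_y * math.sin(math.radians(280))
                               / (m_w * v_w)))
print(round(v_y, 2), round(theta, 1))  # → 1.52 33.0
```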
http://devmaster.net/forums/topic/17512-timebased-sleep-0-fps-issue/page__p__88665

# timebased sleep, 0 FPS issue.
24 replies to this topic
### #1fdmfdm
Member
• Members
• 25 posts
Posted 29 December 2012 - 08:00 PM
hi there,
i have been trying to get my game at roughly 60 FPS, a bit more or less does not matter, but it was running at 150 FPS, which worked fine for using the frames as time, but i wanted to get a more steady speed, by putting in a sleep function at the end of the frame with a sleep time of 1000/60 minus the elapsed time.
for this i used :
long long milliseconds_now() {
    static LARGE_INTEGER s_frequency;
    static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
    if (s_use_qpc) {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        // convert performance-counter ticks to milliseconds;
        // this return statement was missing from the pasted code
        return (1000LL * now.QuadPart) / s_frequency.QuadPart;
    } else {
        return GetTickCount();
    }
}
long long start = milliseconds_now();
// ... one frame of game work happens here ...
long long elapsed = milliseconds_now() - start;
printf("time: %lld\n", elapsed); // %f is undefined behavior for a long long; use %lld
// Sleep() takes an unsigned DWORD: if elapsed exceeds 1000/60, the
// difference goes negative and wraps to a huge value, stalling the game
if ((1000 / 60) - elapsed > 0)
    Sleep((DWORD)((1000 / 60) - elapsed));
i found this code on the internet, as i could not find a good place to get the code for getting the elapsed time, and i honestly dont know if there is something wrong with this code or not, but as soon as i put this in i had an issue now and then of the game running at 0 FPS
i got no errors, and it started, but just did nothing.
can anyone tell me what i did wrong, or point me at a place where i can learn this better?, as i want my game to work with frames as the time, and therefor i need a roughly steady FPS
thanks a lot for helping
Member
• Members
• 27 posts
• LocationVictoria, Australia
Posted 30 December 2012 - 07:11 AM
fdmfdm, on 29 December 2012 - 08:00 PM, said:
point me at a place where i can learn this better
### #3fdmfdm
Member
• Members
• 25 posts
Posted 30 December 2012 - 12:08 PM
I tried that one, but the final code they give is:
double t = 0.0;
const double dt = 0.01;

double currentTime = hires_time_in_seconds();
double accumulator = 0.0;

State previous;
State current;

while ( !quit )
{
    double newTime = time();
    double frameTime = newTime - currentTime;
    if ( frameTime > 0.25 )
        frameTime = 0.25;  // note: max frame time to avoid spiral of death
    currentTime = newTime;

    accumulator += frameTime;

    while ( accumulator >= dt )
    {
        previousState = currentState;
        integrate( currentState, t, dt );
        t += dt;
        accumulator -= dt;
    }

    const double alpha = accumulator / dt;
    State state = currentState * alpha + previousState * ( 1.0 - alpha );

    render( state );
}
but they never explain what is what.
For example, it uses hires_time_in_seconds, which Microsoft Visual Studio 2010 does not know; the same goes for State, time(), currentTime, accumulator and dt.
Now, the problem here is that I need to know how long the program took for the last frame, so that I can sleep for the rest of the 1/60th of a second.
I have read that entire tutorial, and although the text itself is quite easy to follow, it does not explain much about the variables used in it.
I might have to add (although I think it's quite clear by now) that I'm quite new to C++, and although I have followed multiple tutorials, this is the first time I'm making an entire game that needs to control time.
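For reference, the pseudocode above can be turned into compilable C++. This is a minimal sketch, not the article's actual code: integrate() and render() are replaced by stand-ins, the function name runFixedTimestep is my own, and frame times are supplied as a list instead of being measured, so the accumulator logic can be followed without a real clock.

```cpp
#include <vector>

// Stand-in for whatever game state the article's State represents.
struct State { double x = 0.0; };

// Runs the fixed-timestep accumulator loop over the given frame times
// and returns how many fixed dt steps were integrated in total.
int runFixedTimestep(const std::vector<double>& frameTimes, double dt) {
    double t = 0.0;
    double accumulator = 0.0;
    State previous, current;
    int steps = 0;
    for (double frameTime : frameTimes) {
        if (frameTime > 0.25)
            frameTime = 0.25;          // clamp: avoid the "spiral of death"
        accumulator += frameTime;
        while (accumulator >= dt) {
            previous = current;
            current.x += dt;           // stand-in for integrate(current, t, dt)
            t += dt;
            accumulator -= dt;
            ++steps;
        }
        double alpha = accumulator / dt;
        State state{ current.x * alpha + previous.x * (1.0 - alpha) };
        (void)state;                   // stand-in for render(state)
    }
    return steps;
}
```

With dt = 0.01, a 0.055-second frame integrates five fixed steps and carries the 0.005-second remainder into the interpolation factor alpha.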
### #4fireside
Senior Member
• Members
• 1587 posts
Posted 30 December 2012 - 05:52 PM
It looks overly complicated to me. I did something like this in Java: I got the time right before render and kept it in a temp variable, currentTime, but before that I kept another variable for the last loop's time and put the old value in there, then subtracted and slept if the difference was less than 1/60th of a second. So it's something like:
lastTime = currentTime;
currentTime = getTime();
sleepTime = 1/60 - (currentTime - lastTime);
if(sleepTime > 0) Sleep(sleepTime);
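The sketch above can be written out in C++ with explicit units (the function name computeSleepMs is mine, not from any library). One pitfall worth flagging: in C and C++ the literal 1/60 is integer division and equals 0, so the frame budget has to be expressed in whole milliseconds (or as 1.0/60 seconds) first.

```cpp
#include <algorithm>

// Sleep-time calculation for a ~60 FPS frame budget, in milliseconds.
// Beware: the literal 1/60 is integer division and equals 0 in C/C++,
// so the budget is computed as 1000/60 milliseconds instead.
long computeSleepMs(long currentTimeMs, long lastTimeMs) {
    const long frameBudgetMs = 1000 / 60;                 // ~16 ms per frame
    long sleepMs = frameBudgetMs - (currentTimeMs - lastTimeMs);
    return std::max(sleepMs, 0L);                         // never ask for a negative sleep
}
```

Clamping to zero matters: if a frame overruns its budget, subtracting gives a negative number, and passing that to an unsigned sleep API would request an enormous wait.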
Currently using Blender and Unity.
### #5Reedbeta
DevMaster Staff
• 5307 posts
• LocationBellevue, WA
Posted 30 December 2012 - 05:56 PM
fdmfdm, on 30 December 2012 - 12:08 PM, said:
but never explain what is what, like it uses hires_time_in_seconds , which microsoft visual studio 2010 does not know
This is pseudocode; it's not supposed to be copy/pasted. The author is assuming you're smart enough to adapt the idea of this code to your own project. For instance, hires_time_in_seconds() is just a placeholder for however you measure time in your language/API/OS. In your case, you'd write a function that would use QueryPerformanceCounter and convert the result to seconds as a double. (BTW, most advanced articles on programming are like this: you usually cannot just copy/paste others' code into your own project and expect it to work, because they've made different design decisions or called things different names from you. You must adapt their code to your project.)
Anyway, trying to maintain a stable framerate by calculating the remaining time and sleeping is actually not a good approach. The problem is that "sleep" is not a precise operation. The OS will wake up your app at some point later, but you can't rely on it being at the time you requested - it will be whenever the OS decides to give you another time-slice.
A better approach is to use your graphics API to wait for the next screen vsync. The details depend on what API you're using, but hopefully there is a way to do this. It will use some internal graphics HW / OS magic to do a better job of syncing your program to 60 Hz (or whatever the screen refresh rate is, but 60 Hz is the most common) than you could likely do by yourself. Then you'd use QueryPerformanceCounter or whatever to measure the time at the start of each frame, subtract from the previous frame's result to get the time elapsed, and run your game by that amount of time, either using a fixed timestep or not.
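The per-frame time measurement described above can be sketched in portable C++. The thread targets Win32's QueryPerformanceCounter; std::chrono::steady_clock wraps the same kind of monotonic high-resolution counter in standard C++, and the FrameTimer class here is my own illustration, not an API from the thread.

```cpp
#include <chrono>

// Portable per-frame delta-time measurement.
// steady_clock is monotonic, so the difference between two readings is
// a reliable elapsed time even if the wall clock is adjusted meanwhile.
class FrameTimer {
public:
    FrameTimer() : last_(std::chrono::steady_clock::now()) {}

    // Returns seconds elapsed since the previous tick() (or construction)
    // and resets the reference point for the next frame.
    double tick() {
        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double> dt = now - last_;
        last_ = now;
        return dt.count();
    }

private:
    std::chrono::steady_clock::time_point last_;
};
```

Each frame you would call tick() once and advance the simulation by the returned number of seconds, either directly or by feeding it into a fixed-timestep accumulator.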
reedbeta.com - developer blog, OpenGL demos, and other projects
### #6fdmfdm
Member
• Members
• 25 posts
Posted 30 December 2012 - 08:02 PM
Reedbeta, on 30 December 2012 - 05:56 PM, said:
This is pseudocode; it's not supposed to be copy/pasted.
I had figured out that much, but as I don't know the actual calls to use, I did not get much further.
But more on topic: I did read something about vsync somewhere, but they said it was a bad idea because it could be turned off?
And when you say it depends on what API I use, do you mean there is no standard time source in C++?
I know many people use libraries made by others (as I do with the library that comes with the template of the tutorials I'm following here, made by IGAD), but I expected there to be some standard timer, not perfect, but part of the normal libraries.
If not, is there a way to find out what does what?
The system I use now, which solved the 0 FPS issue but gives a very rough 60 FPS:
long long milliseconds_now() {
    static LARGE_INTEGER s_frequency;
    static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
    if (s_use_qpc) {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        // convert the tick count to milliseconds
        return (1000LL * now.QuadPart) / s_frequency.QuadPart;
    } else {
        return GetTickCount();
    }
}
main()
{
    long long start = milliseconds_now();
    // ... a frame's worth of work happens here ...
    long long elapsed = milliseconds_now() - start;
    if ((elapsed > 0) && (elapsed < 1000 / 60))
    {
        Sleep((1000 / 60) - elapsed);
    }
}
This usually gives about 57, but sometimes goes down under 40, and sometimes up to 62, which does prove your point that it is not sleeping for the time I told it to.
One last thing:
I don't know QueryPerformanceCounter, so I googled it and found:
http://msdn.microsof...s644904(v=vs.85).aspx
Here they gave this piece of code:
BOOL WINAPI QueryPerformanceCounter(
_Out_ LARGE_INTEGER *lpPerformanceCount
);
If I understand it right, lpPerformanceCount is the variable where they put the total count. What I was wondering is whether that is in seconds (or rather milliseconds?), because if it is not, it is of no use to me.
And the idea here is to use this time to make a delta time, so the speed is right no matter what the frame time is?
My question is: is it still necessary to sync with vsync if I can make the speed dependent on the number of frames per second?
A quick summary to make sure I am not mistaken:
Don't use sleep to get a fixed framerate, as it is not precise enough.
Use vsync to get the framerate (59 in my case; small question: any idea why my default is 59? always wondered that).
Then use QueryPerformanceCounter to make sure my game runs at the right speed.
### #7Reedbeta
DevMaster Staff
• 5307 posts
• LocationBellevue, WA
Posted 30 December 2012 - 09:16 PM
BTW, you can use [ code ] ... [ /code ] tags around your code to preserve the formatting on the forum.
Yes, a lot of graphics drivers these days have the option to force vsync on or off for an application, regardless of what the application actually asks for. That's a bit unfortunate, but it's not such a big deal because your game has to deal with different refresh rates anyway. (If your game is locked to 60 Hz and someone runs it on a monitor that refreshes at 75 Hz, it's going to look terrible.) Anyway, the way to ask for vsync depends on what graphics API you're using (Direct3D, OpenGL, etc). There is no standard way to do it in C++, because this doesn't have anything to do with the language - it has to do with the graphics API.
As for QueryPerformanceCounter, you already have the code to use it; it's in your milliseconds_now() function (didn't you notice that?). Obviously if you have the time in milliseconds then you can easily convert to seconds if you need to.
reedbeta.com - developer blog, OpenGL demos, and other projects
### #8fireside
Senior Member
• Members
• 1587 posts
Posted 30 December 2012 - 09:40 PM
I think it's usually better to time animation sequences and let the game run as fast as it wants. I don't notice as much stutter that way. The only time I've stopped frames was with a browser app because it needed a pause for other things in the browser to be working. Since I had to have a pause, I used it also as a buffer for when the computer was working harder on a particular loop.
Currently using Blender and Unity.
### #9fdmfdm
Member
• Members
• 25 posts
Posted 30 December 2012 - 09:48 PM
Hehe, nope, I did not notice that; it was yesterday when I tried to figure it out, and I forgot it overnight.
But I should be using Direct3D if I'm not mistaken, so I'll check the internet for that.
And fireside, the issue with running it as fast as it wants is that everything will run faster, unless you adjust the speed according to the framerate.
And thanks for all the help; this timing issue really was the biggest problem I have faced so far, because of my lack of knowledge on the subject.
I'm working on the background and the collision detection with the map at the moment, but I'll check this timer tomorrow (or this evening/night if I can) to see if I catch any problems.
Thanks again.
PS: I'll do that code-tag thing next time.
### #10fireside
Senior Member
• Members
• 1587 posts
Posted 30 December 2012 - 10:09 PM
Quote
and fireside, the issue with running it as fast as it wants, is that everything will run faster, unless you adjust the speed according to the framerate.
No, because you are adjusting animations and distances using a timer instead of using the framerate as a timer. It sounds complicated, but it really isn't. Unity uses this method. I think a lot of engines do. You are just taking the time from the last frame and factoring it into animations. It has the added advantage of making animations more asynchronous because you aren't suddenly stopping everything and then dropping all the frames at that loop, and you aren't basically punishing people who have fast computers.
Currently using Blender and Unity.
### #11Stainless
Member
• Members
• 581 posts
• LocationSouthampton
Posted 30 December 2012 - 10:21 PM
In general, using sleep for timing is very rare these days.
Most platforms now work on a timed-event system.
The model is:
Set up a timer at the required framerate
Release back to the host
The event fires and control passes back to the game
XNA uses a separate update/draw cycle: the Update method is going to be called at the required frame rate; the Draw method might not be.
Most other systems I work on use callbacks: you register a timer callback and set the timer off.
You cannot be sure of having a system clock with the required accuracy; the granularity on some systems is 100 ms, so when you query the system timer your maximum framerate is 10 fps.
I would look for other techniques, otherwise your code is not portable.
### #12fdmfdm
Member
• Members
• 25 posts
Posted 31 December 2012 - 12:04 AM
fireside, on 30 December 2012 - 10:09 PM, said:
No, because you are adjusting animations and distances using a timer instead of using the framerate as a timer. It sounds complicated, but it really isn't. Unity uses this method. I think a lot of engines do. You are just taking the time from the last frame and factoring it into animations. It has the added advantage of making animations more asynchronous because you aren't suddenly stopping everything and then dropping all the frames at that loop, and you aren't basically punishing people who have fast computers.
That is actually what I meant by adjusting the speed according to the framerate.
By the way, a small question that is off topic, but there is no need to make another topic if someone here can explain it to me:
I have read in the tutorial I was following that a warning about an int converting to float, or the other way around, is bad.
Now, my physics are working fine as I have them, so should I worry about these warnings, or just silence them by telling the compiler I know it's happening?
### #13fireside
Senior Member
• Members
• 1587 posts
Posted 31 December 2012 - 02:09 AM
If you go from float to int, the decimal portion will be dropped, which can be inaccurate: for instance, 1.999 would end up being 1 as an int. I prefer to round in that situation; a simple way is to add 0.5 to the value before the conversion. Use a typecast to eliminate the warning. Leaving warnings around is bad form, and it pays to eliminate them; if you have more than one, write a function. Take warnings seriously, especially when you are new. It's a good time to do a little more study and learn a bit more. There may be a rounding function in the standard library, also. I haven't used C++ in a while.
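The truncation-versus-rounding point above, as a small C++ sketch. The helper names are mine, and std::lround from <cmath> is the standard-library rounding function alluded to.

```cpp
#include <cmath>

// A bare cast truncates toward zero, dropping the decimal portion;
// adding 0.5 before the cast rounds to nearest, for non-negative values.
int truncateToInt(float x) { return (int)x; }           // 1.999f -> 1
int roundToInt(float x)    { return (int)(x + 0.5f); }  // 1.999f -> 2
```

Note that the add-0.5 trick is only correct for non-negative values; std::lround(x) handles negatives correctly as well.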
Currently using Blender and Unity.
### #14Stainless
Member
• Members
• 581 posts
• LocationSouthampton
Posted 31 December 2012 - 10:40 AM
Going from float to int is sometimes a requirement.
If you are working on OpenGL ES, there is a known issue with aliasing: if you draw a sprite at pixel x = 10.1 it will look different than if you draw it at pixel 10.
Use casts to get rid of the warning, and think about what you are doing each time.
### #15fdmfdm
Member
• Members
• 25 posts
Posted 31 December 2012 - 12:00 PM
I'm working with Direct3D and I have seen no problems with aliasing, so at least that's not an issue.
But are you saying it's best to recheck all the warnings to see whether each conversion is really needed, and if it is, to use a cast (I'm guessing a cast is like putting (int) before the variable)?
As you can see below, there are quite a few warnings.
Now, the & warnings will be gone in seconds, but for the others: is this amount an issue?
(Don't worry about the error; I just put that in to get the warnings, as I did not know how to see them otherwise.)
warnings:
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(254): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(283): warning C4244: '+=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(285): warning C4554: '&' : check operator precedence for possible error; use parentheses to clarify precedence
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(302): warning C4554: '&' : check operator precedence for possible error; use parentheses to clarify precedence
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(303): warning C4244: 'argument' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(308): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(309): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(312): warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(313): warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(314): warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(315): warning C4244: '=' : conversion from 'double' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(335): warning C4554: '&' : check operator precedence for possible error; use parentheses to clarify precedence
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(376): warning C4244: '=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(377): warning C4244: '=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(378): warning C4244: '=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(379): warning C4244: '=' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(457): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(462): warning C4244: 'argument' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(462): warning C4244: 'argument' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(463): warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(463): warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(473): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(474): warning C4244: '=' : conversion from 'int' to 'float', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(487): warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(487): warning C4244: 'argument' : conversion from 'float' to 'int', possible loss of data
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(508): error C2065: 'a' : undeclared identifier
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(508): error C2146: syntax error : missing ';' before identifier 'i'
1>c:\myprojects\template\devmaster_intro-to-c-tmpl83.00c_oct14\game.cpp(518): warning C4244: '=' : conversion from 'float' to 'int', possible loss of data
Below is my code up to this point; I still need to clean some things up, as I was converting from max/min x/y values to tile collision, so that's not finished yet.
// Template, major revision 3
// IGAD/NHTV - Jacco Bikker - 2006-2009
#include "string.h"
#include "surface.h"
#include "stdlib.h"
#include "template.h"
#include "game.h"
using namespace Tmpl8;
// sprites
Sprite poppetje( new Surface("assets/poppetje.BMP"), 1);
Sprite bullet( new Surface("assets/bullet.BMP"), 1);
Sprite bubble1( new Surface("assets/bubble1.BMP"), 1);
//background
Surface* tileSet[9];
int landTile[11][40] = {{2, 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,1},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{6, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6},
{3, 5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,4}};
int floory = 640;
// player variables
float xplayer = 320;
float yplayer = 200;
float dyplayer;
int xspeed = 3;
int playerheigth = 30;
int playertilex1;
int playertilex2;
int playertiley1;
int playertiley2;
bool left;
bool right;
bool up;
bool down;
// bullet variables
float xbullet[6];
float ybullet[6];
float x1bullet[6];
float y1bullet[6];
float x2bullet[6];
float y2bullet[6];
float dxbullet[6];
float dybullet[6];
float speed = 10;
float steps[6];
int bullet1[6]; // 1 is false 2 is true
int i=0;
bool shot = false;
int shotbreak = 0;
int shoottime = 10;
// bubble variables
int bubble1x[6]; //= 100;
int bubble1y[6]; //= 50;
int drawbubble[6]; //= true;
float bubble1dy[6]; //= 1;
float bubble1ddy[6]; //= 0.1;
float bubble1ddx[6]; //= 0.1;
float bubble1dx[6]; //= 1;
float maxyspeed[6]; //= 12;
float maxxspeed[6];
int bubblecount;
float bubbledist;
int bubbledistcount;
float bubbledistfix;
float bubblefraction;
float bubblexdist;
float bubbleydist;
int bubbledoubler[6];
// collision variables
int distance;
//level variables
int bubblestokill;
int lvl;
// timer variables
long long milliseconds_now() {
    static LARGE_INTEGER s_frequency;
    static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
    if (s_use_qpc) {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        // convert the tick count to milliseconds
        return (1000LL * now.QuadPart) / s_frequency.QuadPart;
    } else {
        return GetTickCount();
    }
}
void Game::Init()
{
// background
tileSet[0] = new Surface( "assets/background1.BMP" );
tileSet[1] = new Surface( "assets/background2.BMP" );
tileSet[2] = new Surface( "assets/background3.BMP" );
tileSet[3] = new Surface( "assets/background4.BMP" );
tileSet[4] = new Surface( "assets/background5.BMP" );
tileSet[5] = new Surface( "assets/background6.BMP" );
tileSet[6] = new Surface( "assets/background7.BMP" );
tileSet[7] = new Surface( "assets/background8.BMP" );
// bullet inits
bullet1[0]=1;
bullet1[1]=1;
bullet1[2]=1;
bullet1[3]=1;
bullet1[4]=1;
bullet1[5]=1;
// bubbles inits
bubble1x[0]=100;
bubble1x[1]=200;
bubble1x[2]=300;
bubble1y[0]=50;
bubble1y[1]=50;
bubble1y[2]=50;
drawbubble[0]=2;
drawbubble[1]=2;
drawbubble[2]=2;
bubble1dy[0]=1;
bubble1dy[1]=1;
bubble1dy[2]=1;
bubble1ddy[0]=0.1;
bubble1ddy[1]=0.1;
bubble1ddy[2]=0.1;
bubble1ddy[3]=0.1;
bubble1ddy[4]=0.1;
bubble1ddy[5]=0.1;
bubble1ddx[0]=0.00;
bubble1ddx[1]=0.00;
bubble1ddx[2]=0.00;
bubble1ddx[3]=0.00;
bubble1ddx[4]=0.00;
bubble1ddx[5]=0.00;
bubble1dx[0]=1;
bubble1dx[1]=1;
bubble1dx[2]=1;
maxyspeed[0]=12;
maxyspeed[1]=12;
maxyspeed[2]=12;
maxyspeed[3]=12;
maxyspeed[4]=12;
maxyspeed[5]=12;
bubblestokill = 3;
maxxspeed[0] = 5;
maxxspeed[1] = 5;
maxxspeed[2] = 5;
maxxspeed[3] = 5;
maxxspeed[4] = 5;
maxxspeed[5] = 5;
bubbledoubler[0] = 0;
bubbledoubler[1] = 0;
bubbledoubler[2] = 0;
bubbledoubler[3] = 0;
bubbledoubler[4] = 0;
bubbledoubler[5] = 0;
// lvl inits
lvl = 1;
}
int mousex, mousey;
void Game::MouseMove( unsigned int x, unsigned int y )
{
mousex = x;
mousey = y;
}
void Game::Tick( float a_DT )
{
long long start = milliseconds_now();
//background
m_Screen->Clear( 0 );
for (int indexX = 0; indexX <= 39; indexX++)
{
for (int indexY = 0; indexY <= 10; indexY++)
{
int tile = landTile[indexY][indexX];
tileSet[tile]->CopyTo( m_Screen, indexX * 20, indexY * 60 );
}
}
// bubblecalculator
while (bubblecount<bubblestokill)
{
if(drawbubble[bubblecount]==2)
{
/*
if ((bubble1dy[bubblecount] < 0.01) & (bubble1dy[bubblecount] > -0.01))
{
bubble1dy[bubblecount] = 0;
}
*/
bubble1y[bubblecount] += bubble1dy[bubblecount];
bubble1dy[bubblecount] += bubble1ddy[bubblecount];
// to prevent alls from jumping straight up and down
if ((bubble1dx[bubblecount] < 1) & (bubble1dx[bubblecount] > 0))
{
bubble1dx[bubblecount] += bubble1ddx[bubblecount];
}
if ((bubble1dx[bubblecount] < 0) & (bubble1dx[bubblecount] > -1))
{
bubble1dx[bubblecount] -= bubble1ddx[bubblecount];
}
if ((bubble1dy[bubblecount] >maxyspeed[bubblecount]))
{
bubble1dy[bubblecount] = maxyspeed[bubblecount];
}
if ((bubble1dy[bubblecount] <-maxyspeed[bubblecount]))
{
bubble1dy[bubblecount] = -maxyspeed[bubblecount];
}
if (bubble1y[bubblecount] < 20)
{
bubble1y[bubblecount] = 40 - bubble1y[bubblecount];
bubble1dy[bubblecount]= -bubble1dy[bubblecount];
}
bubble1x[bubblecount] += bubble1dx[bubblecount];
if(((bubble1x[bubblecount]> (780 - 2 * bubbleradius[bubblecount]) )& (bubble1dx[bubblecount] > 0))|| ((bubble1x[bubblecount] <20) & bubble1dx[bubblecount] <0))
{
bubble1dx[bubblecount] = -bubble1dx[bubblecount];
}
{
bubble1dy[bubblecount] = -bubble1dy[bubblecount];
}
bubble1.Draw(bubble1x[bubblecount],bubble1y[bubblecount], m_Screen);
}
while(bubbledistcount<bubblestokill)
{
if (bubbledistcount!=bubblecount)
{
if(drawbubble[bubblecount] == 2 & drawbubble[bubblecount] == 2)
{
bubblefraction = bubbledistfix / bubbledist;
bubble1dx[bubbledistcount] = bubble1dx[bubbledistcount] + 2.5 * (bubblefraction * bubblexdist * 0.5);
bubble1dy[bubbledistcount] = bubble1dy[bubbledistcount] + 2.5 * (bubblefraction * bubbleydist * 0.5);
bubble1dx[bubblecount] = -bubble1dx[bubblecount] - (bubblefraction * bubblexdist * -0.5);
bubble1dy[bubblecount] = -bubble1dy[bubblecount] - (bubblefraction * bubbleydist * -0.5);
bubbledoubler[bubblecount]++;
if ((bubble1dy[bubblecount] >maxyspeed[bubblecount]))
{
bubble1dy[bubblecount] = maxyspeed[bubblecount];
}
if ((bubble1dy[bubblecount] <-maxyspeed[bubblecount]))
{
bubble1dy[bubblecount] = -maxyspeed[bubblecount];
}
if ((bubble1dy[bubbledistcount] >maxyspeed[bubblecount]))
{
bubble1dy[bubbledistcount] = maxyspeed[bubblecount];
}
if ((bubble1dy[bubbledistcount] <-maxyspeed[bubblecount]))
{
bubble1dy[bubbledistcount] = -maxyspeed[bubblecount];
}
if(bubbledoubler[bubbledistcount] == 10 & bubblestokill<6)
{
bubble1dx[bubblestokill] = - bubble1dx[bubbledistcount];
bubble1dy[bubblestokill] = bubble1dy[bubbledistcount];
bubble1ddy[bubblestokill] = bubble1ddy[bubbledistcount];
drawbubble[bubblestokill] = 2;
bubblestokill++;
bubbledoubler[bubbledistcount]=0;
}
if(bubble1dx[bubbledistcount]>maxxspeed[bubbledistcount])
{
bubble1dx[bubbledistcount] = maxxspeed[bubbledistcount];
}
if(bubble1dx[bubblecount]>maxxspeed[bubblecount])
{
bubble1dx[bubblecount] = maxxspeed[bubblecount];
}
if(bubble1dx[bubbledistcount]<-maxxspeed[bubbledistcount])
{
bubble1dx[bubbledistcount] = -maxxspeed[bubbledistcount];
}
if(bubble1dx[bubblecount]<-maxxspeed[bubblecount])
{
bubble1dx[bubblecount] = -maxxspeed[bubblecount];
}
}
}
bubbledistcount++;
}
bubbledistcount = 0;
bubblecount++;
}
bubblecount = 0;
// landTile[13][66]
playertilex1 = (xplayer-1)/20;
playertilex2 = (xplayer+20)/20;
playertiley1 = (yplayer/60);
playertiley2 = ((yplayer + 20)/60);
if ((
landTile[playertiley1 ][playertilex1] == 6 ||
landTile[playertiley1 ][playertilex1] == 3) &
(landTile[playertiley2 ][playertilex1] == 6 ||
landTile[playertiley2 ][playertilex1] == 3)
)
{
left = false;
}
else
{
left = true;
}
if ((
landTile[playertiley1 ][playertilex2] == 6 ||
landTile[playertiley1 ][playertilex2] == 4 )&
(landTile[playertiley2 ][playertilex2] == 6 ||
landTile[playertiley2 ][playertilex2] == 4)
)
{
right = false;
}
else
{
right = true;
}
if ((
landTile[playertiley2 ][playertilex1] == 5 ||
landTile[playertiley2 ][playertilex1] == 4 ||
landTile[playertiley2 ][playertilex1] == 3) &
(landTile[playertiley2 ][playertilex2] == 5 ||
landTile[playertiley2 ][playertilex2] == 4 ||
landTile[playertiley2 ][playertilex2] == 3)
)
{
down = false;
}
else
{
down = true;
}
if(GetAsyncKeyState( 0x44 )) // D
{
if (right == true)
{
xplayer=xplayer + xspeed;
}
}
if(GetAsyncKeyState( 0x41 )) // A
{
if (left == true)
{
xplayer = xplayer - xspeed;
}
}
if((GetAsyncKeyState( VK_SPACE ))&(down == false))
{
dyplayer = 10;
}
if(dyplayer>0)
{
yplayer -= dyplayer;
dyplayer -= 0.1;
}
if (down == true)
{
yplayer += 5;
}
if(yplayer > (floory - playerheigth))
{
yplayer = (floory - playerheigth);
}
m_Screen->Line( xplayer, yplayer, mousex, mousey, 0xff0000 );
poppetje.Draw(xplayer,yplayer, m_Screen);
// bullet calculator
while(i<6)
{
if((GetAsyncKeyState( MK_LBUTTON )||(GetAsyncKeyState( WM_LBUTTONDOWN))) & (bullet1[i]==1) & (shotbreak > shoottime))
{
bullet1[i] = 2;
x2bullet[i] = mousex;
y2bullet[i] = mousey;
x1bullet[i] = xplayer;
y1bullet[i] = yplayer;
steps[i] = ((sqrtf(((x2bullet[i]-x1bullet[i])*(x2bullet[i]-x1bullet[i]))+((y2bullet[i]-y1bullet[i])*(y2bullet[i]-y1bullet[i]))))/speed);
dxbullet[i]=(x2bullet[i]-x1bullet[i])/steps[i];
dybullet[i]=(y2bullet[i]-y1bullet[i])/steps[i];
xbullet[i] = x1bullet[i];
ybullet[i] = y1bullet[i];
shot = true;
shotbreak=0;
}
if(bullet1[i] == 2)
{
bullet.Draw(xbullet[i],ybullet[i], m_Screen);
xbullet[i] += dxbullet[i];
ybullet[i] += dybullet[i];
if (xbullet[i]<0||xbullet[i]>1300||ybullet[i]<0||ybullet[i]>900)
{
bullet1[i]=1;
xbullet[i]=0;
ybullet[i]=0;
}
}
if (shot == false)
{
i++;
}
else
{
i = 7;
}
}
a
i=0;
shot = false;
// making sure player wont shoot all bullets at once (+ giving possibilities to different kind of weapons)
shotbreak++;
// hitcalculator
while(i<6)
{
while (bubblecount < bubblestokill)
{
if(distance<27)
{
drawbubble[bubblecount] = 1;
xbullet[i] = 0;
ybullet[i] = 0;
bubble1x[bubblecount] = 0;
bubble1y[bubblecount] = 0;
}
bubblecount++;
}
i++;
bubblecount=0;
}
i=0;
{
if(lvl == 1)
{
lvl2 ();
}
else if(lvl == 2)
{
lvl3 ();
}
}
{
lvl3 ();
}
long long elapsed = milliseconds_now() - start;
// printf ("time: %f\n", elapsed );
if((elapsed>0)&(elapsed<1000/60))
{
// Sleep ((1000/60)-elapsed);
}
}
### #16fireside
Senior Member
• Members
• 1587 posts
Posted 31 December 2012 - 12:34 PM
I guess it's up to you. Personally, I don't like them. If nothing else, you'll probably miss something more crucial in all that. If they don't bother you, they are harmless as long as it works the way you want. The way I look at it is that, I haven't written clear code if I get a warning.
Currently using Blender and Unity.
### #17fdmfdm
Member
• Members
• 25 posts
Posted 31 December 2012 - 12:44 PM
fireside, on 31 December 2012 - 12:34 PM, said:
I guess it's up to you. Personally, I don't like them. If nothing else, you'll probably miss something more crucial in all that.
Hehe, getting rid of them was never the question;
it was how: with a cast, or by actually changing the code.
But you are right, they can drown out something serious.
### #18Stainless
Member
• Members
• 581 posts
• LocationSouthampton
Posted 31 December 2012 - 02:31 PM
Well, you should know that 2.4 is a double, so if you do something like float x = 2.4; you will get a warning.
It's good practice to add an 'f' suffix to float literals: float x = 2.4f; will not generate a warning.
Then if you do something like int ix = x; that will generate a warning, while int ix = (int)x; will not.
Compilers are stupid; it's up to you to prove you are more intelligent than they are by telling them "I wrote the code, I know what I'm doing!"
This is boring I know, but it is important. Some things that come up as warnings really are bugs.
For example
extern void dosomething(int x, int y, int* result);
void me()
{
int x=10;
int y=15;
int z=0;
dosomething(x,y,z);
}
This will generate a warning in C (in C++ it is an error), something like "warning: converting an int to a pointer without a cast".
If it does compile, running the code will cause a crash.
### #19fdmfdm
Member
• Members
• 25 posts
Posted 31 December 2012 - 03:41 PM
Thanks Stainless, that was quite a good explanation.
I'll check out all those warnings after I finish my tile-collision detector (which is being a pain, as I'm trying to make the bubbles bounce off corners realistically...).
### #20Kenneth Gorking
Senior Member
• Members
• 939 posts
Posted 31 December 2012 - 04:47 PM
Keep in mind that casting a float to int, internally generates a call to ftol, which is quite slow. As for the VSync issue, I watched a Google I/O video yesterday that describes it pretty well, and why it helps:
"Stupid bug! You go squish now!!" - Homer Simpson
Welcome to European Tribune. It's gone a bit quiet around here these days, but it's still going.
## Introducing the Berwick Brown Bear
by ChrisCook Sun Nov 20th, 2016 at 06:39:01 PM EST
This Diary introduces the Berwick Brown Bear pub project currently gathering pace in historic Berwick-upon-Tweed with its unique Corporation and Guild of Freemen
The aim is for Berwick's emblematic Brown Bear to illustrate and prove the concept of a new breed of community pub - the People's Pub Partnership - first conceived a few years ago by Mark Dodds.
## Party and Policy in a Time of Monsters
by ChrisCook Sun Jul 24th, 2016 at 12:36:47 PM EST
The old world is dying and the new world struggles to be born. Now is the time of monsters. Gramsci
This Diary grew out of a response to AR Geezer's LQD: Labour's Civil War Is Due To A Paradigm Shift.
As I have been saying on European Tribune since I first turned up here (which is longer ago than I care to remember) I think we are seeing the emergence of Society (Paradigm) 3.0.
Society 1.0 (which still exists everywhere but most evidently in the developing world) is decentralised/local but disconnected with physical market presence and interaction based on personal trust/credit.
Society 2.0 is centralised but connected, but with presence in the market and in decision making via trusted intermediaries/middlemen, being corporates and nation states respectively.
I see the emerging Society 3.0 as being decentralised but connected, with network presence replacing both physical presence and presence through intermediaries.
The institutions and instruments necessary for such a Society 3.0 have intrigued me and been the subject of my work for well over fifteen years.
Frontpaged - Frank Schnittger
## Iran's Oil Strategy: Interview in Iranian Magazine "Ayandenegar"
by ChrisCook Mon Nov 2nd, 2015 at 06:05:23 AM EST
From time to time the thoughts of Chairman Cook are published in all sorts of Iranian publications, and this interview on Page 57 of the Iranian energy/finance magazine "Ayandenegar" is my latest.
Since my Farsi is poor, I'm not sure how much of the interview was printed, and to what extent it was edited but the original English text follows.
With your permission I would like firstly to outline my view of the dynamics of modern commodity markets. The market in fossil fuels has exhibited the same cyclical 'Boom and Bust' behaviour historically as any other commodity market. In my analysis there are essentially two price levels or boundaries in commodity markets, and where the commodity is limited in supply both of these boundary levels trend upwards over time.
Firstly, there is a lower boundary level or 'buyer's market' at which supply exceeds demand. The cheapness of the commodity attracts new buyers while producers with high costs shut down production when losses become too great to bear. Meanwhile banks and investors are reluctant to finance new development.
Over time, demand for consumption begins to exceed supply - a 'seller's market' - until the market price eventually reaches the upper boundary level. At this point the combination of new higher cost supply, and demand destruction through substitution or efficiency measures leads to an excess of supply over demand and the price falls to the lower boundary again.
It is self evident that this $11 increase in the Brent/WTI spread in six weeks had precisely nothing to do with a physical oil market, where supply and demand change relatively slowly and where, if anything, oversupply has increased to the extent that the US - which is flooded with oil - is increasingly likely to lift its decades-long embargo on oil exports. So what on earth is going on?
## Oil Market: A Picture Tells A Thousand Words
by ChrisCook Thu Jan 1st, 2015 at 07:56:52 PM EST
I recently posted the second of two stories in 'Tehran Times' on the subject of the recent collapse in oil market prices, which I have been publicly forecasting consistently for over three years, initially at a major conference in Tehran in late 2011, and most recently on November 2nd when the oil price was still over $80/barrel.
Since it seems to have disappeared, I thought I might republish it at European Tribune.
## My Call is Scotland to Vote Yes By A Good Margin
by ChrisCook Sun Sep 14th, 2014 at 01:41:41 AM EST
Here from my Eagle's Nest in Linlithgow, in Scotland's Central Belt, I thought it would be rude not to chip in my thoughts as to next Thursday's referendum vote.
My first data points are historic election turnout figures in Scotland covering both UK & Scottish Parliament Elections.
Then there's the 2011 Scottish Parliament Election outright win for the SNP which the voting system had pretty much been gerrymandered to prevent. I assume that very few of those voting SNP in 2011 will either abstain or vote No.
frontpaged by afew
## Welcome to the Pornocracy (Part One): the UK Economic Miracle
by ChrisCook Fri Jun 20th, 2014 at 11:07:19 AM EST
I am coming to the irresistible conclusion that the Coalition government is a modern day version of the late Byzantine Pornocracy - government by harlots. This facebook quote was the final straw.
You might not know that job centres and work programme providers are encouraging claimants to take up self-employed status on the basis they can pretend they're working, claim Working Tax Credit and get the same money as they would on Job Seekers Allowance but with no hassle or fear of sanction.
That's where all the new 'jobs' are coming from Cameron crows about, that's why there's a boom in so-called self-employment, and that's why productivity's so low per capita, these people aren't working at all. I assume this is why Ian Duncan-Smith still has a job despite the ongoing absolute chaos at the Dept of Work & Pensions as he's set this all up.
So as automation and austerity do for the UK middle class what Thatcher did for the working class we see an exponentially growing class of intellectual value flowing to a shrinking number of skilled workers (who are next in line for automation) and the holders of the relevant intellectual property.
This charade is of course fine to carry the Pornocracy through to the next election. After which, another crackdown on workshy shirkers while Serco, G4S or whoever is Blame-Taker of the Week.
## Three Dimensional Accounting - Chiralkine Redux
by ChrisCook Sun Dec 29th, 2013 at 01:50:44 PM EST
I think that the previous Chiralkine Accounting thread which I kicked off back in April must have generated more comments, heat and acrimony than almost any in ET's history.
But I thought then, and continue to think, that Martin Hay is on to something important, and I promised at some point to post my own thinking on the subject.
I've been pondering the discussion and assimilating Martin's work ever since within my own analysis and world view, and I understand from him that he and his collaborators gained a great deal from the ET experience.
My instinct was that the problem was that Martin's foundational assumptions were mistaken as to the basis of our modern financial system, and this instinct was reinforced by the discussion that took place.
A seasonal update from Martin today gave rise to an 'Aha!' moment when the penny dropped and I was finally able to pull my thoughts together, as follows. This is the relevant extract from Martin's e-mail.
I have realised that if you "resolve" account balances into right and left components, then you can in effect keep a complete history of all currency created, exchanged and redeemed by a person in just a left and a right balance.
## Obama's Conversion
by ChrisCook Sun Oct 13th, 2013 at 06:10:32 AM EST
Joseph Firestone is publishing a series of articles at New Economic Perspectives by way of a response to President Obama's cavalier treatment of suggested options for resolving the US debt ceiling crisis.
Apparently a commenter called Beowulf came up with the idea of Consols, so I thought I would weigh in with a comment, as this is a subject I aired on ET as long ago as August 2009 in the context of Iceland
I'm being a bit more radical by proposing a debt/equity swap on a US scale: Obama's Conversion.
## Energy Standard Redux
by ChrisCook Thu Sep 12th, 2013 at 02:13:02 PM EST
A hat tip here to my friend John Rogers, who has spent a great many years in the trenches working on community currencies.
HG Wells wrote A Modern Utopia in 1905 in which he imagined an economic system that preserved gold as the medium of exchange but NOT as the standard of value, which instead would be based on energy units. Entertaining and interesting!
## Credit, Currency and Other Animals
by ChrisCook Wed Sep 4th, 2013 at 05:09:26 AM EST
Brett Scott, of the excellent Suitpossum blog recently wrote a long and thoughtful article on currency. So you want to invent your own currency?
Since Brett frequents a facebook group in which I participate, I thought I would respond, and the response grew quite a bit to the extent that I thought I would post it here as a Diary.
## Bullshit Markets or Markets in Bullshit?
by ChrisCook Sat Aug 24th, 2013 at 08:02:56 AM EST
David Graeber has just come up with a great meme on the subject of bullshit jobs.
I've been saying for eight years that emissions trading (invented by Enron) is a (bullshit) market invented by middlemen for middlemen.
One of the analogies I have found useful in critiquing this by reference to system inputs and outputs, has been a joking reference to animal inputs and outputs. To wit, if you wish to keep a bull healthy, you don't regulate what comes out of it, you regulate what goes in.
## Introducing Chiralkine Contracts
by ChrisCook Fri Aug 16th, 2013 at 04:06:55 AM EST
I suspect some ET'ers will be interested in the work that Martin Hay is doing on what he describes as Chiralkine contracts, the Resolution of Zero and Chiralkine Logic.
My instinct is (I am still attempting to entirely grok it, because I don't do abstract) that this is very important work.
It very much seems to tie in with my analysis (Reality-based Economics and the Last Big Thing) in relation to pervasive confusion and misrepresentation at the heart of modern economics - thanks to two-dimensional double-entry book-keeping - between the credit/equity (ownership) and debt relationships.
The single entry tally stick is 1-Dimensional: current double-entry accounting is 2-Dimensional: and Martin Hay posits 3-Dimensional (triple entry) accounting.
Ian Grigg - who is pre-eminent in the field of e-payments, crypto and so on - was writing 8 years ago about Triple Entry Book-Keeping. Todd Boyle's netledger was in the same space five years before that, and Satoshi's Bitcoin architecture is very much in this zone.
But I digress.
Martin is essentially pointing out that there are in fact four economic states:
Mine: Not Yours (10);
Yours: Not Mine (01);
Both Mine And Yours (11);
Neither Mine Nor Yours (00).
It's really quite deep: by 'resolving zero' we see that there are two aspects of the Zero state, not a million miles away from two different directions to particle 'spin'.
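The four states can be sketched as a two-bit encoding (a toy of my own devising, not Martin Hay's actual chiralkine notation), which makes the point about two distinct zero states concrete:

```c
/* Toy two-bit encoding of the four ownership states: high bit =
 * "mine", low bit = "yours".  Illustrative only - not the chiralkine
 * formalism itself. */
enum ownership {
    NEITHER    = 0,  /* 00: neither mine nor yours */
    YOURS_ONLY = 1,  /* 01: yours, not mine        */
    MINE_ONLY  = 2,  /* 10: mine, not yours        */
    BOTH       = 3   /* 11: both mine and yours    */
};

/* Conventional two-dimensional accounting only sees the net position,
 * so it cannot tell BOTH (11) from NEITHER (00): both net to zero. */
int net_position(enum ownership s)
{
    int mine  = (s >> 1) & 1;
    int yours = s & 1;
    return mine - yours;   /* +1, -1 or 0 */
}
```

Both BOTH and NEITHER net to zero here, which is one way of seeing why keeping the components, rather than just the net, adds a dimension of information.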
Anyway, I thought I would introduce Chiralkine concepts to ET'ers for consideration and discussion since I think that 3-D accounting is an important concept. Particularly when we consider the architecture for an emergent next-generation, subject-oriented and people-centric Web 3.0 which may have security and personal control hard-wired into it.
I will invite Martin - whom I recently met at Cumbria University in Lancaster at a conference on community currency - to respond, expand upon and hopefully discuss his ideas.
## Post-Modern Fiscal Theory Again
by ChrisCook Sun Jul 21st, 2013 at 05:41:59 AM EST
There was an interesting facebook post in a group I frequent by a thinker on Economics - Kimball Corson.
Thinking Is About as I Expected on the Problem of Monopoly Rents
My post on several economic related websites challenging the sufficiency of a single tax on land, in addition to getting me kicked off the LVT (Land Value Tax) website, has brought to the fore a very serious point: there is little coherency and agreement on how to attack the monopoly rents problem in the American economic system. There is even disagreement on what and where they are. If such groups' members were given equal power to correct matters tomorrow, they very clearly would not know how to proceed. That is how diffuse, confused and dispersed their thinking is on these matters which we all agree are serious. Why this result?
My view is that economic literacy among members is not what it could be. There are myriad disparate views on too much. Economic misunderstandings abound. While that is not surprising given the level of economic training and thinking among most members, it is nonetheless distressing. The pinnacle of irony is for one well trained in economics to be told by one who is not that he doesn't understand much economics. That is bizarre, but so it goes. The economics of too many is simply "home brew" with too little reading and study to back it up. This deficiency does not however, mitigate the strong economic views held or how erstwhiledly they are vocalized.
The reason I posted the article contending that a single tax on land is not sufficient to attack the monopoly rent problem was to flush out the disparate views in the area. I knew everyone would run to the fore with their own thoughts and we would see how disparate and confused many are. I was proved correct and, of course, attacked along the way. I expected as much, although not to be booted off the LVT site. I think that is more than a bit extreme, but so is some of the thinking there. I have no interest in remaining an LVT member for multiple reasons, so nothing is lost to me.
It has been an interesting exercise but it shows what the chances are Americans will rise up united and force a solution to the problem. They are approaching zero. This has to delight the oligarchs because the economic thinking, poor as it is on these FB sites, is clearly much better on average than among the American populace at large.
My response follows.
## Personal Operating Systems and a Subject-oriented Web
by ChrisCook Fri Mar 29th, 2013 at 11:55:11 AM EST
Saving data donkeys in quicksand is an interesting BBC article in relation to the phenomenon of data tagging.
A friend of mine worked out the importance of generic subjective tagging for messaging (of all types) about 10 years ago, but could never engage with anyone to develop the resulting applications he also developed. Among other things, these would essentially finish off what's left of the existing business model of advertisers, and in turn mean that the likes of Google and Facebook would have to move on from their existing business models.
But because of the voracious and ruthless nature of the corporate players involved and the pernicious regime of IP rights and law, the concepts were not implemented, although the elements which he had far-sightedly analysed are now beginning to emerge.
From that perspective and experience, I think that where tagging will lead is to a simple 'personal operating system' resident on personal devices, and which will connect - with the minimum of complex code - directly to decentralised/distributed data-bases.
The only central assets would be servers which resolve:
(a) basic personal identity to a market/enterprise/group identity ; and
(b) machine identity to market identity ie what I have for years termed a 'Dot Market' model with a market-specific domain such as Dot Oil, or Dot Gas.
This raises the Big Brother issue of who can be trusted with such servers, and maybe the former might be domiciled in (say) Iceland, and the latter in (say) Switzerland.
Individuals only have one basic personal ID, but they may potentially have thousands of market-specific, enterprise-specific and group-specific IDs.
Their basic personal ID is only physically located in one place at one time, which adds in the possibility of generic geo-authentication of transactions - ie mapping mobile device locations to static machine locations. This brings another Big Brother issue of who can be trusted with that data.
Attempting to create a Semantic Web on an object-oriented machine-centric basis has always seemed to me to be a dead end, where increasingly sophisticated algorithms have attempted to derive meaning from data objects by reference to experience. But tagging is different and subjective or subject-oriented, because only you know what you mean, and will tag using language as you see fit.
I believe that we are in a transition from a complex, centralised, fragile, machine-centric Web 2.0 to a simple, decentralised, resilient, people-centric Web 3.0.
## The Case for Cypriot National Equity
by ChrisCook Wed Mar 27th, 2013 at 06:30:22 PM EST
Yesterday I had a guest post on the FT Alphaville Blog
Cyprus - the Case for Cypriot National Equity
The second attempt to resolve the unsustainable debt burden of Cyprus's over-leveraged banks spreads the pain differently to the disastrous initial attempt, but looks likely to leave Cyprus as an economic wasteland for generations. Frances Coppola outlined brilliantly yesterday the sort of financial disaster zone which Cypriots can expect.
Cyprus, in common with many other countries, but far more urgently, requires resolution and transition: Resolution of existing debt; and transition to a sustainable and low carbon economy. Surely there must be a better way of achieving this?
Well, my research leads me to conclude that there was; there is; and there will be again; if Cyprus ceases to attempt to resolve 21st century problems with 20th century solutions and instead uses an updated version of a financial instrument which pre-dates modern debt and equity finance capital.
In this post I will suggest how the Cyprus National Debt may be resolved into a Cyprus National Equity... but not equity as we know it.
## Value, Utility and an Energy Economy
by ChrisCook Sat Feb 16th, 2013 at 10:34:23 AM EST
The following quote on a thread piqued my interest and the response which follows below the line.
Value is always subjective, utility objective. Utility acquires value only if by becoming scarce, someone will pay for its provision.
In my analysis, Value - like Beauty, Quality and many other concepts - is an aspect of Reality and may be defined only in relative terms by reference to a standard unit of measure of Value or 'unit of account'.
NB: one can no more have a scarcity of units of account than one can have a scarcity of kilogrammes (standard unit of measure for weight) or metres (standard unit of measure for length).
A scarcity of currency, on the other hand is necessary - we are told by most economists - for that currency to maintain its value.
My somewhat sprawling response builds upon my metaphysical assumptions that the three sources of Value, through their utility over time, are:
(a) Location - immaterial 3D Space;
(b) Energy - material (static) and dynamic forms;
(c) Intellect - (subjective - which dies with us) and objective (data patterns).
## Clash of the Titans
by ChrisCook Wed Jan 30th, 2013 at 03:09:12 AM EST
There is an interesting BBC article today [26-01-2013] on the subject of Bank of England £1m ('Giant') and £100m ('Titan') notes.
front-paged by afew
## LQD: Radical Abundance - Cold Fusion Time
by ChrisCook Sat Jan 12th, 2013 at 05:38:36 AM EST
I am posting this Youtube clip of a lecture by one Dr Iwamura with the following comment from someone with a nom de plume of 'Dlight Sky'.
Talk about Radical Abundance! Thanks for finding this, it's the best talk by Iwamura I've seen so far. It's obvious that this is very mature technology. It's cool to see that they are now able to create platinum from tungsten (almost like creating gold from lead). Since tungsten costs about $50 per kilo and platinum about$3000 per kilo there is potential to make money with this technology, if significant quantities could be produced.
Interestingly, Mitsubishi Heavy Industries is primarily interested in the technology for transmutation of radioactive waste from conventional nuclear reactors into non-radioactive elements.
Because of this focus they haven't done much work on turning this into an energy-producing technology which it clearly has the potential to be. This is a clean fusion reaction which produces very little radiation.
This looks like an ordinary talk, but it's describing a massive paradigm shift showing a technology that has the potential to solve the world's energy problems. It has clearly proven that nuclear fusion can take place inside of a metal lattice at very low energy states. Most of his experiments don't require any input power at all.
Unfortunately his experiments have been associated with "cold fusion" (which it is) and are relatively unknown outside of a small circle. Also if the military grabs on to this, which they probably have, they likely keep any successes to themselves.
However one can see from the talk that this is quite mature technology, and they have used many sophisticated setups with an array of different sensors to verify the results.
Once commercialized, when we buy a new car it will come pre-loaded with a bit of cesium and heavy water and we will be able to run the car for its whole life without ever needing to re-fuel.
This mature technology is already here. No pollution, no mess, no fuss. It should have spawned a gigantic wave of research, but for some reason hasn't yet. There is apparently a deep obstacle operating here, whether it's conceptual, spiritual or emotional--mankind simply isn't ready to receive this incredible gift yet.
I'd be interested in what our resident physicists and cynics have to say.
## What is the Point of a Bitcoin?
by ChrisCook Wed Jan 9th, 2013 at 06:01:23 AM EST
I've never been able to understand why anyone would regard a Bitcoin as having any value, since it is evidence of past (useless) work and energy expenditure with no value other than the creation of a Bitcoin.
Mind you, it is generally accepted that a Bitcoin is made valuable purely by its acceptability to Bitcoin participants as currency. ie it is completely 'faith-based'.
Do Not Throw Stones At This Notice comes to mind in terms of pointless circularity.
In respect of faith-based value - rather than value which derives from use value over time - a Bitcoin as a value token is not dissimilar to gold, of course, but at least gold has amenity value, being nice to look at for a few million years, and possessing some specialised uses.
There's an interesting fork of Bitcoin as well - Freicoin - which introduces Gesell's concept of 'money that rusts' (demurrage) in order to discourage hoarding and encourage spending.
Bitcoin's P2P architecture on the other hand? Now that is valuable: and I haven't even mentioned anonymity and Big Government.
For me, the challenge is to create a unit of account, platform, framework/protocol and generally acceptable instruments (currency) which combine credit, utility and trust.
I think that to do so is both completely necessary and achievable, and moreover represents what is now an implementable Adjacent Possible.
frontpaged by afew
The CDF Collaboration has recently produced a new analysis of proton-antiproton collisions at the now second-world-best collision energy of 1.96 TeV. They searched for very rare decays of B mesons, particles composed of - would you guess - a b-quark and a lighter partner orbiting around each other.
The B meson is a very fancy particle: its featured b-quark cannot be found in ordinary matter and is only produced in energetic particle collisions. However, the eight years' worth of data collected by the Tevatron experiments contain billions of them by now. The CDF and DZERO experiments are thus capable of studying these particles in close detail, with analyses that might be divided into two classes: systematics-limited precision studies of macroscopic characteristics of B mesons, exploiting the detailed features of their most common decays to put the underlying electroweak theory to the test; and statistics-limited investigations of the phenomenology of their rarest decay modes. Both strategies provide stringent tests of the standard model, and both could in principle reveal telling deviations if some new physics process modified the phenomenology.
The accuracy of precision studies, which rely on large-statistics samples, is limited by systematic effects that are usually very hard, or even impossible, to beat down; the sensitivity of a statistics-limited search, by contrast, is determined simply by how long one is willing to wait as the detector collects data. In that sense the latter is a much "easier" endeavour!
Granted, one cannot say that precision measurements are hard while searches for rare processes are easy - since every data analysis is hard if done outstandingly, or easy if done in a sloppy manner - but there is indeed a distinction to make: new physics is expected to stick out more clearly in the phenomenology of rare decays, because what makes these decays rare might just be a standard model rule, one to which new physics is immune. This, in essence, is why we are driven to search for the rarest phenomena: a one-part-per-ten-million effect may increase the rate of a one-in-a-million process by 10%, or the rate of a one-in-ten process by a millionth. Clearly, it is easier to detect the former!
If we look in the PDG -the particle physicist's bible, which collects every measurement of subnuclear particles produced in the last seventy years or so- we find pages and pages and pages of measurements. Behind each line in the PDG there is the dedicated effort of dozens of scientists... Their work is not in vain.
It turns out that some of the best quark transitions to study - those which might in principle be the place where new physics first gives an observable effect - involve the transmutation of a b-quark into an s-quark. Since the s-quark has the same electric charge as the b-quark, the resulting decays proceed through neutral currents. The quark line somehow manages to magically change flavour, and nothing else: the "current" emitted in this transition may then materialize a pair of leptons, allowing us to spot the rare occurrence.
CDF can study the $b \to s$ transition both in $B^\pm$ or $B^0$ mesons and in $B_s$ mesons, even though only the latter contains an s-quark, through reactions such as the following:
$B^+ \to \mu^+ \mu^- K^+$
$B^0 \to \mu^+ \mu^- K^*$
$B_s \to \mu^+ \mu^- \phi$
These decays can all be pictured in a similar way (see above, but bear in mind that there are in fact many such possible diagrams to consider!), since they are quite similar to one another: in each, the b-quark turns into an s-quark, remaining bound to the "spectator" - the light quark which made up the original B meson, drawn as the unnamed horizontal line on top of the diagram. The released energy is channelled into a "neutral current" which ends up materializing the pair of muons. What makes these decays rare is that they are "second-order" electroweak processes: they proceed through the exchange of two, and not just one, electroweak bosons. In fact, two weak vertices are needed to turn the b-quark first into an up-type quark (a u-quark, a c-quark, or a t-quark) and then into an s-quark; and two more vertices are needed to create, and then destroy, the boson which ends up producing the dimuon pair.
Of course the annoying constraints mentioned above only affect the standard model processes. New physics might be capable of turning the b-quark directly into a s-quark without an intermediate state; or it might provide an extra boson which couples more readily to these fermionic fields; or it might allow extra particles in the virtual loops that change the quark flavour. All these possibilities would "speed up" the reaction, making it more likely to occur. They might also manifest themselves in modified angular distributions of the decay products, because a different interaction will "kick out" the final state particles in a different way from that expected from standard model interactions.
There is further attractiveness in the above reactions: the final states contain only charged particles - most notably, there are no neutrinos annoyingly carrying away part of the energy balance of the decay, and no neutral pions either. Neutral pions decay immediately after creation into two photons, and these can only be seen by the electromagnetic calorimeter, where they are often lost in the background from other hadrons produced in their proximity. In summary, the absence of neutrinos or other neutral particles, which would make the full reconstruction of the mass of the decaying particle much harder, is a very welcome characteristic of these rare processes. Note that the K*(892) meson - a neutral particle! - is an excited version of the neutral kaon: it immediately decays into a charged kaon and a charged pion - again, charged particles!
CDF had searched for the three decays above in about one inverse femtobarn of collisions, in 2007, and extracted significant peaks for each B meson - significant, but not yet "observation-level" signals. Time would tell, as I mentioned above. And in fact, by analyzing a dataset almost five times as large, and by refining the analysis strategy, the three signals have finally made it well past the five-sigma mark. Two of them had already been studied at the B factories; but the one involving the B_s meson is new ground!
Below you can see what the three resonances look like. The $B^+ \to \mu^+ \mu^- K^+$ signal, which totals 120 +/- 16 events with a significance of 9.7 standard deviations,
... the $B^o \to \mu^+ \mu^- K^*$ signal, yielding 101 +/- 12 events with a significance of 8.5 standard deviations,
... and the $B_s \to \mu^+ \mu^- \phi$ signal, totaling 27 +/- 6 events for a 6.3-sigma significance.
In all figures, the black points represent the experimental data distribution in the reconstructed mass of the decaying meson; the red hatched line is the background model, the blue hatched line is the signal model, and the black line is the sum of signal and background returned by the fit.
From the three fits above, taking into account the detection efficiency of each signal, and the relative systematic uncertainties, CDF measures the branching fractions shown on the right, where the first error is statistical, and the second is systematic. So these branching fractions are indeed all in the 10^-6 - 10^-7 range, which is quite an achievement!
Unfortunately, the measured rates are all in excellent agreement with standard model predictions. So there appear to be no exotic mechanisms that enhance the frequency of such decays. However, it must be said that none of the most en vogue models of new physics are ruled out by these measurements: to really cut into the flesh of supersymmetric theories, for instance, we will need to measure branching fractions of the order of 10^-9, such as in the direct decays $B^0 \to \mu^+ \mu^-$ and $B_s \to \mu^+ \mu^-$. These, too, are being sought, and CDF and DZERO are placing more and more stringent limits on their rate. I wrote about these searches here, if you are interested.
Now, the interesting thing about the new measurements is that despite the rarity of the decays seen, the samples are large enough to allow a first study of some angular distributions, which might be able to discriminate between standard model production and new physics models. This might sound silly: if the rate agrees with the standard model, how can the kinematics disagree? Well, the fact that the rate is okay does not by itself mean that there is no margin for a component of decays due to exotic processes in the data; the statistical error is large enough to prevent very stringent statements. The event shape is complementary, additional information which may, in some cases, provide the smoking gun for a new discovery even when there are no rate disagreements.
As an example, have a look at the figure shown below. In it, you can see the forward-backward asymmetry of one of the studied decays, that of the neutral B meson, as a function of the square of the energy released in the particle's disintegration (labeled "q^2" on the horizontal axis).
[If you are very curious about the physics: the forward-backward asymmetry displayed is a quantity one may extract from the distribution of the muons emitted in the decays. It depends on a few so-called "Wilson coefficients", which may be used to describe the kinematics of flavor-changing neutral current processes. In particular, the factor named $C_7$ is the one relevant for the decays studied by CDF.]
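As a toy illustration of what this quantity is (the code and counts below are mine, invented for the example, and are not CDF's analysis code): the asymmetry is just the normalized difference between the numbers of events in which the relevant muon goes "forward" and "backward".

```python
# Toy sketch of a forward-backward asymmetry (illustrative only;
# the event counts below are made up).

def forward_backward_asymmetry(n_forward, n_backward):
    """A_FB = (N_F - N_B) / (N_F + N_B), ranging from -1 to +1."""
    return (n_forward - n_backward) / (n_forward + n_backward)

# A symmetric sample has A_FB = 0; a mild forward excess gives a
# small positive value.
print(forward_backward_asymmetry(50, 50))  # 0.0
print(forward_backward_asymmetry(60, 40))  # 0.2
```

Measuring this in bins of q^2 and comparing with predictions is exactly what the figure below displays.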
The black squares show the asymmetry in five independent bins of q^2, the red curve shows the standard model prediction for the asymmetry, and in blue is shown the distribution one would observe in a new physics model that inverts the factor $C_7$ with respect to its standard value. As you can see, there is not enough data to tell the two hypotheses apart, but it is nevertheless already extraordinary to observe that we are starting to use these rare decays for something more than just event counting!
(One detail some of you may be wondering about is the meaning of the green bars, and why there is no data point in those regions of q^2. The answer is that in those regions the production is dominated by the resonant modes involving the exchange of vector mesons, the J/Psi and the Psi(2S). The processes are then different, and not nearly as rare; they are in fact 1000 times more frequent. The q^2 values encompassed by the green bands are the squares of those two particle masses.)
I look forward to more studies of these rare decays by CDF and DZERO. Bear in mind that these two experiments are expected to produce results with more than twice the statistics used in this analysis. Twice the data will not change the situation much, admittedly, but it might be enough to spot some interesting deviation from theory. In any case, these rare decays are bound to become business for the Large Hadron Collider, which is expected to collect not twice, but a hundred times more data, at a center-of-mass energy five to seven times higher.
https://forum.micropython.org/viewtopic.php?f=17&t=7346&p=42252
Questions and discussion about running MicroPython on a micro:bit board.
Target audience: MicroPython users with a micro:bit.
simonc8
Posts: 7
Joined: Sat Nov 30, 2019 9:56 am
I am trying to programme an HC-SR04 ultrasonic sensor mounted on a 4tronix bitbot robot. The following MicroPython code is from the 4tronix website:
Code:

from microbit import *
from utime import ticks_us, sleep_us

SONAR = pin15

def sonar():
    SONAR.write_digital(0)  # Clear trigger
    sleep_us(2)
    SONAR.write_digital(1)  # Send 10us Ping pulse
    sleep_us(10)
    SONAR.write_digital(0)
    # set pin15 to read, with no voltage applied
    SONAR.set_pull(SONAR.NO_PULL)
    while SONAR.read_digital() == 0:  # ensure Ping pulse has cleared
        pass
    start = ticks_us()  # define starting time
    while SONAR.read_digital() == 1:  # wait for Echo pulse to return
        pass
    end = ticks_us()  # define ending time
    echo = end - start
    distance = int(0.01715 * echo)  # Calculate cm distance
    return distance

while True:
    display.scroll(sonar())
    sleep(1000)
but this doesn't work, because it gets stuck at the line
Code:

while SONAR.read_digital() == 0:
If I programme the microbit using PXT the sensor works properly. PXT uses the C++ function pulseIn() to read the digital pin but I can't find a way to get MicroPython to mimic the same behaviour.
Grateful for assistance.
lujo
Posts: 14
Joined: Sat May 11, 2019 2:30 pm
Hi,
Did you really use the same pin for triggering and reading back the echo?
Code:

from microbit import *
from utime import ticks_us, sleep_us, ticks_diff

def sonar():
    pin0.write_digital(0)
    sleep_us(10)
    pin0.write_digital(1)
    sleep_us(10)
    pin0.write_digital(0)
    pin1.set_pull(pin1.PULL_DOWN)
    while pin1.read_digital() == 0:  # wait for the echo pulse to start
        pass
    start = ticks_us()
    while pin1.read_digital() == 1:  # wait for the echo pulse to end
        pass
    end = ticks_us()
    cm = ticks_diff(end, start) // 58
    return cm

while True:
    display.scroll(sonar())
    sleep(1000)
lujo
simonc8
Posts: 7
Joined: Sat Nov 30, 2019 9:56 am
I have no control over the pins, as that's the way the bitbot is wired up.
I know it works using the same pin for both, because this code in JavaScript PXT returns the distance (albeit not very accurately):
Code:

basic.forever(function () {
    basic.showNumber(Math.round(bitbot.sonar(BBPingUnit.Centimeters)))
    basic.pause(1000)
})
the bitbot PXT library function sonar I assume comes from the github project at
https://github.com/srs/pxt-bitbot
and the code in main.ts includes:
Code:

export function sonar(unit: BBPingUnit): number {
    // send pulse
    let trig = DigitalPin.P15;
    let echo = DigitalPin.P15;
    let maxCmDistance = 500;
    pins.setPull(trig, PinPullMode.PullNone);
    pins.digitalWritePin(trig, 0);
    control.waitMicros(2);
    pins.digitalWritePin(trig, 1);
    control.waitMicros(10);
    pins.digitalWritePin(trig, 0);
    let d = pins.pulseIn(echo, PulseValue.High, maxCmDistance * 58);
    switch (unit) {
        case BBPingUnit.Centimeters: return d / 58;
        case BBPingUnit.Inches: return d / 148;
        default: return d;
    }
}
This uses the function pulseIn() for the sensing, but I can't find the source for this function to see exactly what it does, to try and duplicate it in MicroPython.
lujo
Posts: 14
Joined: Sat May 11, 2019 2:30 pm
Hi,
There is a pulse-in function. Source is here: https://github.com/bbcmicrobit/micropyt ... ne_pulse.c
Code:

from microbit import *
from utime import sleep_us
from machine import time_pulse_us

def sonar():
    pin0.write_digital(0)
    sleep_us(10)
    pin0.write_digital(1)
    sleep_us(10)
    pin0.write_digital(0)
    pin1.set_pull(pin1.PULL_DOWN)
    t = time_pulse_us(pin1, 1, 10**5)  # pin, level, time-out
    return t // 58

while True:
    display.scroll(sonar())
    sleep(1000)
lujo
jimmo
Posts: 1204
Joined: Tue Aug 08, 2017 1:57 am
Location: Sydney, Australia
Can confirm that lujo's code works on the bit:bot -- here's code that I've seen working (which is basically identical to lujo's snippet above). https://github.com/jimmo/microbit-demos ... -finder.py
simonc8
Posts: 7
Joined: Sat Nov 30, 2019 9:56 am
Thanks for the replies. I loaded lujo's code onto the bitbot and it just returns -1 all the time, which means it's timing out while waiting for the pulse to finish. Not quite sure what this means.
I would suspect the batteries except for the fact that the same bitbot will return ultrasonic distances ok when programmed with PXT blocks in microsoft.makecode.org.
Mysterious.
jimmo
Posts: 1204
Joined: Tue Aug 08, 2017 1:57 am
Location: Sydney, Australia
I went looking through my notes from a summer school I used to teach at -- https://github.com/jimmo/ncss-embedded/ ... opixels.md This is specifically about using the bit:bot with the micro:bit and MicroPython.
There's a code snippet at the end:
Code:

from microbit import *
import machine

def distance_cm():
    # Send a pulse on pin 15
    pin15.write_digital(1)
    pin15.write_digital(0)
    # Read from the pin to turn it back to an input.
    # Turn on the internal pull-up resistor
    pin15.set_pull(pin15.PULL_UP)
    pulse_time = machine.time_pulse_us(pin15, 1)
    if pulse_time < 0:
        return 0
    return pulse_time * 0.034 / 2

while True:
    print(distance_cm())
    sleep(500)
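For reference, the constant in the last line of that snippet comes from the speed of sound: roughly 343 m/s at room temperature, i.e. about 0.034 cm per microsecond, halved because the echo covers the distance twice. A desktop-Python sketch of the same conversion (not micro:bit code):

```python
# Convert an ultrasonic echo time (microseconds) to a distance (cm).
# Sound travels ~0.034 cm/us at ~20 C; the pulse goes out and back,
# so the one-way distance is half the round trip.

SPEED_OF_SOUND_CM_PER_US = 0.034

def echo_us_to_cm(pulse_time_us):
    return pulse_time_us * SPEED_OF_SOUND_CM_PER_US / 2

# An object ~10 cm away returns an echo after roughly 588 us.
print(echo_us_to_cm(588))  # ~10.0
```

The `// 58` used in the other snippets is the same conversion: about 58 microseconds of round trip per centimetre of distance.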
simonc8
Posts: 7
Joined: Sat Nov 30, 2019 9:56 am
jimmo - brilliant - that code works!
Many thanks for your help. Your notes on the bit:bot are extremely useful as well.
On your github pages for ncss-embedded is there something I need to have installed so the equations show properly? In my browser they just show as the text eg
$$V_{tot} = \frac{V}{2}$$
jimmo
Posts: 1204
Joined: Tue Aug 08, 2017 1:57 am
Location: Sydney, Australia
simonc8 wrote:
Wed Dec 11, 2019 9:57 am
On your github pages for ncss-embedded is there something I need to have installed so the equations show properly? In my browser they just show as the text eg
$$V_{tot} = \frac{V}{2}$$
I originally wrote all the formulae using Math URL (see the note in the main readme file) but a friend wanted to make it into a PDF so he set it up to use pandoc and LaTeX etc, so that's why the equations look like that. Here's a copy of the generated PDF https://drive.google.com/file/d/0B-O86M ... sp=sharing
simonc8
Posts: 7
Joined: Sat Nov 30, 2019 9:56 am
http://www.thespectrumofriemannium.com/2015/07/04/log171-from-bohrlogy-to-dualities/
# LOG#171. From Bohrlogy to dualities.
Old (old-fashioned!) Quantum Mechanics is understood as the quantum theory before its final formulation around 1927-1931… It includes the Bohr model, the Wilson-Bohr-Sommerfeld quantization and some other tricks, like the one to take into account the finite nuclear size. For a finite (not infinite) nuclear mass, considering it as fixed, the hypothesis implies that the reduced mass will be (M is the nuclear mass and m is the mass of the electron around the nucleus; we take the hydrogenic single-electron atom for simplicity):
(1)

$$\frac{1}{\mu}=\frac{1}{m}+\frac{1}{M}$$

or

(2)

$$\mu=\frac{mM}{m+M}$$
Compare this equation with the association of two electrical resistances (or inductances) in parallel (with exactly the same formal expression) or two capacitors in series! A simple and neat/clear analogy!
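A quick numerical check (using CODATA values for the electron and proton masses, added here as an illustration) shows how small the correction is for hydrogen, and makes the resistors-in-parallel analogy explicit:

```python
# Reduced mass of the electron-proton system, mu = m*M/(m+M),
# which is formally the same algebra as two resistors in parallel.
m_e = 9.1093837015e-31   # electron mass (kg)
m_p = 1.67262192369e-27  # proton mass (kg)

def parallel(a, b):
    # Resistors/inductors in parallel, capacitors in series, reduced mass.
    return a * b / (a + b)

mu = parallel(m_e, m_p)
print(mu / m_e)  # ~0.99946: a ~0.05% shift of the hydrogen levels
```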
Following the Bohr model of the hydrogen atom and its success (today we know it was only partial) in explaining the hydrogen spectrum, people began to study whether it could be generalized to atoms with higher atomic number. Even Bohr himself tried to solve the issue… Bohr's formula and model had worked well to give the previously known Rydberg formula for the hydrogen atom, but it was not known then if it would also give the spectra for other elements with higher Z numbers, or even precisely what the Z numbers (in terms of charge) for heavier elements were.
It was known that the ordering of atoms in the periodic table did tend to be according to atomic weights or mass, but there were a few famous "reversed" cases where the periodic table demanded that an element with a higher atomic weight (such as cobalt at weight 58.9) nevertheless be placed at a lower position (Z=27), before an element like nickel (with a lower atomic weight of 58.7), which the table demanded take the higher position at Z=28. Moseley inquired if Bohr thought that the electromagnetic emission spectra of cobalt and nickel would follow their ordering by weight, or by their periodic table position (atomic number, Z), and Bohr said it would certainly be by Z. Moseley's reply was "We shall see!"
Since the spectral emissions for high Z elements would be in the soft X-ray range (easily absorbed in air), Moseley was required to use vacuum tube techniques to measure them. Using X-ray diffraction techniques in 1913-1914, Moseley found that the most intense short-wavelength line in the X-ray spectrum of a particular element was indeed related to the element's periodic table atomic number, Z.
This line was known as the so-called K-alpha line. Following Bohr’s lead, Moseley found that this relationship could be expressed by a simple formula, later called Moseley’s Law. Mathematically:
(3)

$$\sqrt{\nu}=k_1\,(Z-k_2)$$

or equivalently

(4)

$$\nu=k_1^2\,(Z-k_2)^2$$
Moseley's two given formulae for the K-alpha and L-alpha lines, in his original semi-Rydberg style notation, were derived to be:

(5)

$$\nu_{K\alpha}=\frac{3}{4}\,\nu_R\,(Z-1)^2$$

(6)

$$\nu_{L\alpha}=\frac{5}{36}\,\nu_R\,(Z-7.4)^2$$

where $\nu_R$ denotes the Rydberg frequency.
The energy of photons that a hydrogen atom can emit in the Bohr deduction of the Rydberg formula is given by the difference of any two hydrogen energy levels:

$$E=E_i-E_f=\frac{m_e q_e^4}{8h^2\varepsilon_0^2}\left(\frac{1}{n_f^2}-\frac{1}{n_i^2}\right)$$

For the hydrogen atom, the quantity of charge reads

$$q^4=(Zq_e^2)\,(q_e^2)=q_e^4$$

because Z (the nuclear positive charge, in fundamental units of the electron charge is essentially Q=Ne) is equal to 1. That is, the hydrogen nucleus contains a single charge. However, for the hydrogenic atoms (those in which the electron acts as though it circles a single structure with effective charge Z), Bohr realized from his derivation that an extra quantity would need to be added to the conventional charge to the fourth power, in order to account for the extra pull on the electron, and thus the extra energy between levels, as a result of the increased nuclear charge. In 1914 it was realized that Moseley's formula could be adapted from Bohr's, if two assumptions were made:
1st. The electron responsible for the brightest spectral line (K-alpha) which Moseley was investigating from each element, results from a transition by a single electron between the K and L shells of the atom (i.e., from the nearest to the nucleus and the one next farthest out), with energy quantum numbers corresponding to 1 and 2.
2nd. The Z in Bohr's formula, though still squared, required a reduction by 1 to calculate K-alpha. This effect arises because the initial and final states of the atom have different amounts of electron-electron repulsion. A widespread oversimplification is the idea that the effective charge of the nucleus decreases by 1 when it is being screened by an unpaired electron. In any case, Bohr's formula for Moseley's K-alpha X-ray transitions became:

$$E=\frac{m_e q_e^4}{8h^2\varepsilon_0^2}\,(Z-1)^2\left(\frac{1}{1^2}-\frac{1}{2^2}\right)$$
In the case of the transition with initial n=2 and final n=1, dividing both sides by h to convert energy to frequency, we get:

$$\nu=\frac{E}{h}=\left(2.47\times10^{15}\ \mathrm{Hz}\right)(Z-1)^2$$
The final value of the theoretical frequency,

$$2.47\times10^{15}\ \mathrm{Hz},$$
is in good agreement with Moseley’s empirically-derived value. This fundamental frequency is the same as that of the hydrogen Lyman-alpha line, because the 1s to 2p transition in hydrogen is responsible for both Lyman-alpha lines in the hydrogen atom, and also the K-alpha lines in X-ray spectroscopy for elements beyond hydrogen, which are described by Moseley’s law. Moseley was indeed fully aware that his fundamental frequency was Lyman-alpha, the fundamental Rydberg frequency resulting from two fundamental atomic energies, and for this reason differing by the Rydberg-Bohr factor of exactly 3/4, and he explicitly showed it clearly in his original papers.
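These numbers are easy to check with a short numerical sketch (CODATA constants; the copper example is an added illustration, not from the discussion above):

```python
# Numerical check of Moseley's fundamental frequency: the Rydberg
# frequency is c*R_inf = m_e*e**4 / (8*eps0**2*h**3), and the 1<->2
# transition (Lyman-alpha / K-alpha) carries 3/4 of it.
m_e = 9.1093837015e-31   # electron mass (kg)
e = 1.602176634e-19      # elementary charge (C)
eps0 = 8.8541878128e-12  # vacuum permittivity (F/m)
h = 6.62607015e-34       # Planck constant (J s)

rydberg_freq = m_e * e**4 / (8 * eps0**2 * h**3)  # ~3.29e15 Hz
lyman_alpha = 0.75 * rydberg_freq                 # ~2.47e15 Hz

# Moseley's K-alpha prediction for copper (Z = 29), an illustrative case:
f_k_alpha_cu = lyman_alpha * (29 - 1) ** 2        # ~1.9e18 Hz (X-rays)
print(lyman_alpha, f_k_alpha_cu)
```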
As regards Moseley's L-alpha transitions in relation to modern Quantum Mechanics, the modern view associates electron shells with principal quantum numbers n, with each shell containing $2n^2$ electrons, giving the n=1 shell of atoms 2 electrons, and the n=2 shell 8 electrons. The empirical value of 7.4 for Moseley's second screening constant is thus associated with the n=2 to n=3, then called L-alpha, transitions (not to be confused with Lyman-alpha transitions), occurring from the "M to L" shells in Bohr's later notation. This value of 7.4 is now known to represent an electron screening effect for a fraction (specifically 0.74) of the total of 10 electrons contained in what we now know to be the n=1 and n=2 (or K and L) "shells".
What else? The theory of special relativity!!!!! Considerations from the special theory of relativity implied that the circular orbits of the electron in the atom should be considered an approximation, just as it happens with the elliptical orbits in the solar system. Circles are a particular type of ellipse. That is how Wilson, Bohr and Sommerfeld, with the aid of some ideas of classical mechanics and the special theory of relativity, arrived at a more general quantization rule. Mathematically speaking, it reads
(7)

$$\oint_C p\,dq=nh$$
for some "periodic" curve C. Classical Bohr quantization can be read off from this rule. Take the periodic orbit to be a circle with angular momentum L. The Wilson-Bohr-Sommerfeld rule above is a quantization of the action-angle variables (p,q):

$$\oint p\,dq=\int_0^{2\pi}L\,d\theta=2\pi L$$

so

$$2\pi L=nh$$

and thus the angular momentum is quantized, as Bohr's model hypotheses/rules argued:

$$L=n\hbar$$
However, the power of the Wilson-Bohr-Sommerfeld ideas is that we can go beyond Bohr's model. For simple harmonic motion, we obtain, from classical mechanics,

$$E=\frac{p^2}{2m}+\frac{1}{2}m\omega^2q^2$$

and thus the orbit in phase space is an ellipse,

$$\frac{p^2}{2mE}+\frac{q^2}{2E/(m\omega^2)}=1$$

We know that its semi-axes are

$$a=\sqrt{2mE}$$

and

$$b=\sqrt{\frac{2E}{m\omega^2}}$$

Therefore,

$$\oint p\,dq=\pi ab$$

with

$$\pi ab=\frac{2\pi E}{\omega}$$

The Wilson-Bohr-Sommerfeld rule provides

$$\oint p\,dq=nh$$

or

$$\frac{2\pi E}{\omega}=nh$$

but

$$\omega=2\pi f$$

and

$$h=2\pi\hbar$$

yields

$$\frac{E}{f}=nh$$

Thus, we have arrived at

$$E=nhf=n\hbar\omega$$

but this is the Einstein and Planck quantization rule for harmonic oscillators/quanta of "light" (or more generally bosons).
Finally, our third example of the Wilson-Bohr-Sommerfeld quantization rule: a free particle in a box. Firstly we consider the one-dimensional (D=1) box with length equal to L. The action-angle variable is

$$\oint p\,dq=2pL=nh$$

Thus, we have that

$$p_n=\frac{nh}{2L}$$
Now, going further, we have three cases:
1st. Massive non-relativistic particle. We have

$$E_n=\frac{p_n^2}{2m}=\frac{n^2h^2}{8mL^2}$$

Energy is quantized, so linear momentum is also quantized:

$$p_n=\frac{nh}{2L}$$
Remarkably, this quantized momentum also appears in Kaluza-Klein theories when you use dimensional reduction of a periodic extra space-like dimension of size pi times L!
2nd. Massless (m=0) relativistic particle (ultrarelativistic particle). In this case

$$E_n=p_nc=\frac{nhc}{2L}$$

The momentum is also quantized:

$$p_n=\frac{nh}{2L}$$

Remark: in natural units (c=1), up to a numerical factor of 2 (or 1/2), this is the same momentum as in the previous case! The energy spectrum differs through the presence of the mass, a numerical factor, AND the powers of the Planck constant and n. Of course, it coincides with the KK quantization for a periodic space-like extra dimension!
3rd. Relativistic massive particle. Now

$$E_n=\sqrt{p_n^2c^2+m^2c^4}=\sqrt{\frac{n^2h^2c^2}{4L^2}+m^2c^4}$$

From this, we get a rest-mass-shifted quantized relativistic momentum

$$p_nc=\sqrt{E_n^2-m^2c^4}=\frac{nhc}{2L}$$
These formulae can be easily generalized to a particle in a multidimensional box of size

$$L_1\times L_2\times\cdots\times L_D$$

or more generally

$$\vec{L}=(L_1,L_2,\ldots,L_D)$$

You only have to take D different action-angle variables and mimic the procedure. For a non-relativistic massive particle you will have

$$E_{\vec{n}}=\frac{h^2}{8m}\sum_{i=1}^{D}\frac{n_i^2}{L_i^2}$$

where

$$\vec{n}=(n_1,\ldots,n_D)\in\mathbb{N}^D$$

For the massless relativistic D-dimensional case you have

$$E_{\vec{n}}=\frac{hc}{2}\sqrt{\sum_{i=1}^{D}\frac{n_i^2}{L_i^2}}$$

and for the massive relativistic D-dimensional case

$$E_{\vec{n}}=\sqrt{m^2c^4+\frac{h^2c^2}{4}\sum_{i=1}^{D}\frac{n_i^2}{L_i^2}}$$
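To put numbers on the non-relativistic spectrum, here is a short sketch for an electron confined in a one-dimensional box of width 1 nm (the concrete values are an added illustration):

```python
# E_n = n**2 * h**2 / (8 * m * L**2) for an electron in a 1-D box, in eV.
h = 6.62607015e-34      # Planck constant (J s)
m_e = 9.1093837015e-31  # electron mass (kg)
L = 1e-9                # box width: 1 nm
eV = 1.602176634e-19    # joules per electronvolt

def box_energy_eV(n):
    return n**2 * h**2 / (8 * m_e * L**2) / eV

print(box_energy_eV(1))  # ~0.376 eV
print(box_energy_eV(3))  # nine times the ground state, as n**2 dictates
```

Shrinking L by a factor of 10 raises every level by a factor of 100, which is the "small dimensions probe high energies" statement made quantitative.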
Tired of boring point particle spectra? Go to superstring theory!!!!!! We have already seen that a single particle in a box (Am I a mad man with a box????? LoL) of size L has the spectrum

$$E_n=\frac{\pi n}{L}$$
where I am using natural units right now. Note that when L goes to zero the energy diverges! Thus, the only finite-energy state is n=0, and that does NOT depend on the box size L at all! Going into a D-dimensional ambient spacetime, if you reduce the theory dimensionally and compactify k extra space-like dimensions, you can only get D-k non-compact dimensions AND possibly zero modes. That is, fields can only, at least in principle in this framework, propagate along the D-k non-compact dimensions (today, modern theories can do this better, and this feature depends on the particular model building of your theory, i.e., you can have theories with fields propagating not only in the non-compact space, the brane, but also into the compact space (which can be LARGE), or in the bulk; these are commonly referred to as brane-worlds). In summary, small dimensions probe energies greater than 1/L, not lesser. Moreover, closed strings (loops!) have an interesting spectrum:

$$E^2=\frac{n^2}{L^2}+\frac{w^2L^2}{\alpha'^2}+\frac{2}{\alpha'}\left(N+\tilde{N}-2\right)$$
where the last two terms are purely stringy, coming from the winding modes and the harmonic oscillator excitations of the string (they can possibly include zero point contributions). On the other hand, open strings have the spectrum

$$E^2=\frac{n^2}{L^2}+\frac{1}{\alpha'}\left(N-1\right)$$
There are no winding modes in the open string spectrum. Indeed, a quite mysterious remark is that even though open strings DO contain closed strings as excitations, in a dual picture there is no empty space. It was discovered that a hidden extended object, with p=D-k-1 spatial dimensions, is living at the edges of the open string. These objects are D-branes (from Dirichlet boundary conditions), or Dp-branes (a D-k-1=p membrane, or p-brane)! The role of these new objects in the theory formerly known as superstring theory is very striking and surprising. Indeed, there are cool mathematical objects describing their dualities. We have seen above that T-duality exchanges the roles of winding modes and KK modes and, much more interestingly, relates a compactified space of size L with another of size 1/L in suitable units. So, somehow, large and small quantities are not different from the viewpoint of T-duality and the related theories! That is a very strange result, and indeed it has nothing to do with common experience. But it is true, and a solid result in superstring theory, now completely established. Very small and very big things are related by T-duality. What else? S-duality!!!!! But before talking about S-duality, we will go back in time to the early times of quantum physics, and we will meet the thoughts of a guy called Paul A. M. Dirac…
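T-duality can be displayed directly on the closed-string mass formula for a compact dimension of size L, with momentum number n and winding number w (a sketch in standard conventions; normalizations vary between references): the spectrum is invariant under inverting the size of the circle while exchanging momentum and winding modes,

```latex
M^2 \;=\; \frac{n^2}{L^2} \;+\; \frac{w^2 L^2}{\alpha'^2}
\;+\; \frac{2}{\alpha'}\left(N+\tilde{N}-2\right),
\qquad
T:\quad L \longmapsto \frac{\alpha'}{L}, \qquad n \longleftrightarrow w ,
```

so that, at the level of the spectrum, a string on a very large circle cannot be distinguished from one on a very small circle.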
See you in my next blog post!
https://www.queryhome.com/puzzle/537/identify-the-two-numbers-from-the-following-conversation
# Identify the two numbers from the following conversation?
There is a person who has two numbers; he tells their sum to person A and their product to person B.
Now there is this conversation between A and B.
A: I don't know what the numbers are.
B: I don't know what the numbers are either.
A: Now I know what the numbers are.
B: Now I know what the numbers are too.
Assuming A and B to be very wise and good at mathematics, what are those two numbers?
posted Apr 29, 2014
The numbers are 2 and 2.
A would have got 4, which could mean 1+3 or 2+2; that is why he was not sure of the numbers.
B would have also got 4, with the possibilities 1*4 and 2*2.
Now, had the numbers been 1 and 3, B would have got 3 and he would have been sure of the numbers. Since that was not the case, A became sure that the numbers are 2 and 2.
Now, B knows that the numbers can't be 1 and 4, because then the sum would be 5, with the two possibilities 1+4 and 2+3; in both of these cases A could not have deduced the numbers from B's earlier answer, as both products, 4 and 6, have more than one possibility.
Thus B also knows that the numbers are 2 and 2.
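The reasoning above can be checked by brute force. The sketch below (an illustrative script; the search bound on the sums is an assumption, since the puzzle does not state a range) replays the four statements over unordered pairs of positive integers:

```python
# Replay the conversation: A holds the sum, B holds the product.
def sum_pairs(s):
    # All unordered pairs of positive integers with the given sum.
    return [(a, s - a) for a in range(1, s // 2 + 1)]

def product_pairs(q):
    # All unordered pairs of positive integers with the given product.
    return [(a, q // a) for a in range(1, int(q**0.5) + 1) if q % a == 0]

def a_knows_after_b(p):
    # After B admits ignorance, A keeps only the sum-mates whose product
    # is ambiguous to B; A knows iff exactly the true pair survives.
    survivors = [r for r in sum_pairs(p[0] + p[1])
                 if len(product_pairs(r[0] * r[1])) > 1]
    return survivors == [p]

solutions = []
for s in range(2, 40):  # assumed search bound on the sum
    for p in sum_pairs(s):
        if (len(sum_pairs(s)) > 1                        # A doesn't know
                and len(product_pairs(p[0] * p[1])) > 1  # B doesn't know
                and a_knows_after_b(p)):                 # now A knows
            # Now B knows: only one product-mate survives all the above.
            mates = [r for r in product_pairs(p[0] * p[1])
                     if len(sum_pairs(r[0] + r[1])) > 1
                     and len(product_pairs(r[0] * r[1])) > 1
                     and a_knows_after_b(r)]
            if mates == [p]:
                solutions.append(p)

print(solutions)  # [(2, 2)]
```

Only the pair (2, 2) survives all four statements, matching the argument above.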
https://link.springer.com/article/10.1007%2Fs40818-017-0043-7
Annals of PDE, 4:7
# A Two-Soliton with Transient Turbulent Regime for the Cubic Half-Wave Equation on the Real Line
• Patrick Gérard
• Enno Lenzmann
• Oana Pocovnicu
• Pierre Raphaël
## Abstract
We consider the focusing cubic half-wave equation on the real line
\begin{aligned} i \partial _t u + |D| u = |u|^2 u, \ \ \widehat{|D|u}(\xi )=|\xi |\hat{u}(\xi ), \ \ (t,x)\in {\mathbb {R}}_+\times {\mathbb {R}}. \end{aligned}
We construct an asymptotic global-in-time compact two-soliton solution with arbitrarily small $$L^2$$-norm which exhibits the following two regimes: (i) a transient turbulent regime characterized by a dramatic and explicit growth of its $$H^1$$-norm on a finite time interval, followed by (ii) a saturation regime in which the $$H^1$$-norm remains stationary large forever in time.
## Keywords
Multi-soliton modulation theory Wave turbulence Growth of Sobolev norms Half-wave equation Cubic Szegő equation
## Mathematics Subject Classification
35B40 35L05 35Q41 35Q51 37K40
## Notes
### Acknowledgements
P.G. is supported by Grant ANAE of French ANR, and partially supported by the ERC-2014-CoG 646650 SingWave. E.L. is supported by the Swiss National Science Foundation (SNF) through Grant No. 200021-149233. O.P. was supported by the NSF grant under Agreement No. DMS-1128155 during the year 2013-2014 that she spent at the Institute for Advanced Study. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. P.R is supported by the ERC-2014-CoG 646650 SingWave and is a junior member of the Institut Universitaire de France. Part of this work was done while P.R was visiting the Mathematics Department at MIT, Boston, which he would like to thank for its kind hospitality. Another part was done while P.G., O.P., and P.R. were in residence at MSRI in Berkeley, California, during the Fall 2015 semester, and were supported by the NSF under Grant No. DMS-1440140.
## References
1. Bourgain, J.: Aspects of long time behaviour of solutions of nonlinear Hamiltonian evolution equations. Geom. Funct. Anal. 5(2), 105–140 (1995)
2. Bourgain, J.: On the growth in time of higher Sobolev norms of smooth solutions of Hamiltonian PDE. Int. Math. Res. Not. 1996, 277–304 (1996)
3. Bourgain, J.: On growth in time of Sobolev norms of smooth solutions of nonlinear Schrödinger equations in $${\mathbb{R}}^D$$. J. Anal. Math. 72, 299–310 (1997)
4. Bourgain, J.: Problems in Hamiltonian PDEs. Geom. Funct. Anal., Special Volume, Part I, 32–56 (2000)
5. Bourgain, J.: Remarks on stability and diffusion in high-dimensional Hamiltonian systems and partial differential equations. Ergod. Theory Dyn. Syst. 24(5), 1331–1357 (2004)
6. Cai, D., Majda, A., McLaughlin, D., Tabak, E.: Dispersive wave turbulence in one dimension. Physica D 152–153, 551–572 (2001)
7. Colliander, J., Keel, M., Staffilani, G., Takaoka, H., Tao, T.: Transfer of energy to high frequencies in the cubic defocusing nonlinear Schrödinger equation. Invent. Math. 181, 39–113 (2010)
8. Colliander, J., Kwon, S., Oh, T.: A remark on normal forms and the “upside-down” I-method for periodic NLS: growth of higher Sobolev norms. J. Anal. Math. 118(1), 55–82 (2012)
9. Dodson, B.: Global well-posedness and scattering for the mass critical nonlinear Schrödinger equation with mass below the mass of the ground state. Adv. Math. 285, 1589–1618 (2015)
10. Elgart, A., Schlein, B.: Mean field dynamics of boson stars. Commun. Pure Appl. Math. 60(4), 500–545 (2007)
11. Frank, R., Lenzmann, E.: Uniqueness of non-linear ground states for fractional Laplacians in $${\mathbb{R}}$$. Acta Math. 210, 261–318 (2013)
12. Fröhlich, J., Lenzmann, E.: Blowup for nonlinear wave equations describing boson stars. Commun. Pure Appl. Math. 60(11), 1691–1705 (2007)
13. Gérard, P., Grellier, S.: The cubic Szegő equation. Ann. Sci. Éc. Norm. Supér. (4) 43(5), 761–810 (2010)
14. Gérard, P., Grellier, S.: Invariant tori for the cubic Szegő equation. Invent. Math. 187, 707–754 (2012)
15. Gérard, P., Grellier, S.: Effective integrable dynamics for a certain nonlinear wave equation. Anal. PDE 5, 1139–1155 (2012)
16. Gérard, P., Grellier, S.: An explicit formula for the cubic Szegő equation. Trans. Am. Math. Soc. 367, 2979–2995 (2015)
17. Gérard, P., Grellier, S.: The cubic Szegő equation and Hankel operators. Astérisque 389 (2017)
18. Guardia, M.: Growth of Sobolev norms in the cubic nonlinear Schrödinger equation with a convolution potential. Commun. Math. Phys. 329(1), 405–434 (2014)
19. Guardia, M., Kaloshin, V.: Growth of Sobolev norms in the cubic defocusing nonlinear Schrödinger equation. J. Eur. Math. Soc. 17, 71–149 (2015)
20. Guardia, M., Haus, E., Procesi, M.: Growth of Sobolev norms for the analytic NLS on $${\mathbb{T}}^2$$. Adv. Math. 301, 615–692 (2016)
21. Hani, Z.: Long-time strong instability and unbounded orbits for some periodic nonlinear Schrödinger equations. Arch. Ration. Mech. Anal. 211, 929–964 (2014)
22. Hani, Z., Pausader, B., Tzvetkov, N., Visciglia, N.: Modified scattering for the cubic Schrödinger equations on product spaces and applications. Forum Math. Pi 3 (2015)
23. Haus, E., Procesi, M.: Growth of Sobolev norms for the quintic NLS on $${\mathbb{T}}^2$$. Anal. PDE 8, 883–922 (2015)
24. Kenig, C.E., Martel, Y., Robbiano, L.: Local well-posedness and blow-up in the energy space for a class of $$L^2$$ critical dispersion generalized Benjamin-Ono equations. Ann. Inst. H. Poincaré Anal. Non Linéaire 28(6), 853–887 (2011)
25. Kirkpatrick, K., Lenzmann, E., Staffilani, G.: On the continuum limit for discrete NLS with long-range interactions. Commun. Math. Phys. 317, 563–591 (2013)
26. Krieger, J., Lenzmann, E., Raphaël, P.: Nondispersive solutions to the $$L^2$$-critical half-wave equation. Arch. Ration. Mech. Anal. 209(1), 61–129 (2013)
27. Krieger, J., Martel, Y., Raphaël, P.: Two solitons solution to the gravitational Hartree equation. Commun. Pure Appl. Math. 62(11), 1501–1550 (2009)
28. Lindblad, H., Tao, T.: Asymptotic decay for a one-dimensional nonlinear wave equation. Anal. PDE 5(2), 411–422 (2012)
29. Kuksin, S.B.: Oscillations in space-periodic nonlinear Schrödinger equations. Geom. Funct. Anal. 7(2), 338–363 (1997)
30. Majda, A., McLaughlin, D., Tabak, E.: A one-dimensional model for dispersive wave turbulence. J. Nonlinear Sci. 7(1), 9–44 (1997)
31. Martel, Y.: Asymptotic $$N$$-soliton-like solutions of the subcritical and critical generalized Korteweg-de Vries equations. Am. J. Math. 127(5), 1103–1140 (2005)
32. Martel, Y., Merle, F.: Multi solitary waves for nonlinear Schrödinger equations. Ann. Inst. H. Poincaré Anal. Non Linéaire 23(6), 849–864 (2006)
33. Martel, Y., Merle, F.: Stability of blow-up profile and lower bounds for blow-up rate for the critical generalized KdV equation. Ann. Math. (2) 155(1), 235–280 (2002)
34. Martel, Y., Merle, F.: Description of two soliton collision for the quartic gKdV equation. Ann. Math. (2) 174(2), 757–857 (2011)
35. Martel, Y., Merle, F.: Inelastic interaction of nearly equal solitons for the quartic gKdV equation. Invent. Math. 183(3), 563–648 (2011)
36. Martel, Y., Merle, F.: Construction of multi-solitons for the energy-critical wave equation in dimension 5. Arch. Ration. Mech. Anal. 222(3), 1113–1160 (2016)
37. Martel, Y., Merle, F., Raphaël, P.: Blow up for the critical gKdV equation II: minimal mass blow up. J. Eur. Math. Soc. 17(8), 1855–1925 (2015)
38. Martel, Y., Raphaël, P.: Strongly interacting blow up bubbles for the mass critical NLS. Ann. Sci. Éc. Norm. Supér. (2015). arXiv:1512.00900
39. Merle, F.: Construction of solutions with exactly k blow-up points for the Schrödinger equation with critical nonlinearity. Commun. Math. Phys. 129(2), 223–240 (1990)
40. Merle, F., Raphaël, P., Szeftel, J.: On collapsing ring blow-up solutions to the mass supercritical nonlinear Schrödinger equation. Duke Math. J. 163(2), 369–431 (2014)
41. Mizumachi, T.: Weak interaction between solitary waves of the generalized KdV equations. SIAM J. Math. Anal. 35(4), 1042–1080 (2003)
42. Pocovnicu, O.: Traveling waves for the cubic Szegő equation on the real line. Anal. PDE 4(3), 379–404 (2011)
43. Pocovnicu, O.: Explicit formula for the solution of the Szegő equation on the real line and applications. Discrete Contin. Dyn. Syst. A 31(3), 607–649 (2011)
44. Pocovnicu, O.: First and second order approximations for a nonlinear wave equation. J. Dyn. Differ. Equ. 25(2), 305–333 (2013)
45. Pocovnicu, O.: Soliton interaction with small Toeplitz potentials for the Szegő equation on the real line. Dyn. Partial Differ. Equ. 9(1), 1–27 (2012). Erratum: Dyn. Partial Differ. Equ. 9(2), 173–174 (2012)
46. Raphaël, P., Szeftel, J.: Existence and uniqueness of minimal blow-up solutions to an inhomogeneous mass critical NLS. J. Am. Math. Soc. 24(2), 471–546 (2011)
47. Sohinger, V.: Bounds on the growth of high Sobolev norms of solutions to Nonlinear Schrödinger Equations on $$S^1$$. Differ. Integral Equ. 24(7–8), 653–718 (2011)
48. Sohinger, V.: Bounds on the growth of high Sobolev norms of solutions to Nonlinear Schrödinger Equations on $${\mathbb{R}}$$. Indiana Univ. Math. J. 60(5), 1487–1516 (2011)
49. Staffilani, G.: On the growth of high Sobolev norms of solutions for KdV and Schrödinger equations. Duke Math. J. 86, 109–142 (1997)
50. Thirouin, J.: On the growth of Sobolev norms of solutions of the fractional defocusing NLS on the circle. Ann. Inst. H. Poincaré Anal. Non Linéaire 34, 509–531 (2017)
51. Xu, H.: Large time blow up for a perturbation of the cubic Szegő equation. Anal. PDE 7, 717–731 (2014)
52. Xu, H.: Unbounded Sobolev trajectories and modified scattering theory for a wave guide nonlinear Schrödinger equation. Math. Z. 286(1–2), 443–489 (2017)
53. Zakharov, V., Guyenne, P., Pushkarev, A., Dias, F.: Wave turbulence in one-dimensional models. Physica D 152–153, 573–619 (2001)
© Springer International Publishing AG, part of Springer Nature 2018
## Authors and Affiliations
• Patrick Gérard (1)
• Enno Lenzmann (2)
• Oana Pocovnicu (3)
• Pierre Raphaël (4)
1. Laboratoire de Mathématiques d’Orsay, Univ. Paris-Sud, CNRS, Université Paris-Saclay, Orsay, France
2. Departement für Mathematik und Informatik, Universität Basel, Basel, Switzerland
3. Department of Mathematics, Heriot-Watt University and The Maxwell Institute for the Mathematical Sciences, Edinburgh, UK
4. Laboratoire Jean-Alexandre Dieudonné, Université Nice Sophia Antipolis, Nice, France
https://verification.asmedigitalcollection.asme.org/GT/proceedings-abstract/GT2016/49743/V003T06A020/239453 | A modern energy system based on renewable sources like wind and solar power inevitably needs a storage system to provide energy on demand. Hydrogen is a promising candidate for this task. For the re-conversion of hydrogen to electricity, a power plant of the highest possible efficiency is needed.
In this work the Graz Cycle, a zero-emission power plant based on oxy-fuel technology, is proposed for this role. The Graz Cycle originally burns fossil fuels with pure oxygen and offers efficiencies of up to 65% thanks to the recompression of about half of the working fluid. The Graz Cycle is now adapted for hydrogen combustion with pure oxygen, so that the working fluid is nearly pure steam. The changes in the thermodynamic layout are presented and discussed. The results show that the cycle is able to reach a net cycle efficiency of 68.5% (based on LHV) if the oxygen is supplied "freely" from hydrogen generation by electrolysis.
An additional parameter study shows the cycle's potential for further improvement. The high efficiency of the Graz Cycle is also achieved through close interaction of its components, which makes part-load operation more difficult. So, in the second part of the paper, strategies for part-load operation are presented and investigated. The thermodynamic analysis predicts part load down to 30% of the base load at remarkably high efficiencies.
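To make the "net cycle efficiency based on LHV" figure concrete, here is a back-of-the-envelope sketch; the 100 MW plant size is an assumption chosen for illustration, not a number from the paper.

```python
# Net LHV efficiency = net electric power / (hydrogen mass flow * LHV).
# Rearranged here to estimate the hydrogen flow a plant of assumed size would burn.
LHV_H2 = 120.0e6   # lower heating value of hydrogen, J/kg (~120 MJ/kg)
eta_net = 0.685    # the paper's headline net cycle efficiency (LHV basis)
P_net = 100.0e6    # assumed net electric output, W (hypothetical plant size)

heat_input = P_net / eta_net       # chemical power that must be supplied, W
m_dot_h2 = heat_input / LHV_H2     # required hydrogen mass flow, kg/s
print(round(m_dot_h2, 3))          # about 1.217 kg/s
```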
https://mathoverflow.net/questions/221102/sum-of-inverse-of-multinomial-coefficients | # Sum of inverse of multinomial coefficients
Find an asymptotically tight estimate for the sum $$A_n^{k}(\lambda)= \sum_{\substack{a_i\geq \lambda_i \\ a_1+a_2+\dots+a_k=n}} \prod_{i=1}^k a_i!$$
Is the leading term going to be $$|\textrm{Number of Maximal Lambda}|(j-\lambda_1-\lambda_2-\lambda_3-\lambda_4+\lambda_{\max})!\frac{1}{\lambda_{\max}!}\prod_{i=1}^4 \lambda_i!$$
Edit: As of right now there is some discrepancy as to whether this conjecture is correct or not.
This question has been asked before in the binomial situation here.
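For small parameters the sum can be evaluated directly by brute force, which is handy for testing conjectured asymptotics; the sketch below is my own, not part of the question.

```python
from math import factorial
from itertools import product

def A(n, lam):
    """Brute-force A_n^k(lambda): sum of prod a_i! over a_i >= lam_i with
    a_1 + ... + a_k = n.  Only feasible for small n and k."""
    total = 0
    # enumerate the first k-1 coordinates; the last one is then forced
    for head in product(*(range(l, n + 1) for l in lam[:-1])):
        last = n - sum(head)
        if last >= lam[-1]:
            term = factorial(last)
            for a in head:
                term *= factorial(a)
            total += term
    return total

# hand check for lam = (0, 0), n = 3: 0!3! + 1!2! + 2!1! + 3!0! = 16
print(A(3, (0, 0)))  # 16
```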
I add two hopefully useful remarks.
I consider the general situation. In the sequel $$k\geq 1$$ and $$\lambda=(\lambda_1,\ldots,\lambda_{k+1})$$ are fixed, $$s:=\sum_{i=1}^{k+1}\lambda_i$$ and $$n\geq s$$.
Let $$A_n^{(k+1)}(\lambda):=\sum\limits_{\stackrel{a_i\geq \lambda_i, i=1,\ldots,k+1}{a_1+\ldots +a_{k+1}=n}} \prod_{i=1}^{k+1} a_i!$$ and let $$\mathcal{S}_{k}:=\{ (x_1,\ldots,x_{k+1})\;:\;x_i\geq 0, \sum_{i=1}^{k+1} x_i=1\}$$ denote the $$k$$-dimensional standard simplex.
(i) $$A_n^{(k+1)}(\mathbf{\lambda})$$ has a nice geometric representation:
$$A_n^{(k+1)}(\lambda)=\frac{(n-s+k)!\,(n+k)!}{(n-s)!} \int_{\mathcal{S}_{k}} \int_{\mathcal{S}_{k}} \left(x_1y_1+\ldots+x_{k+1}y_{k+1}\right)^{n-s}\,\prod_{i=1}^{k+1} x_i^{\lambda_i}d\mathbf{x}\,\,d\mathbf{y}$$
Proof: recall Dirichlet's integral: $$((\sum_{i=1}^{k+1}a_i)+k)!\, \int_{\mathcal{S}_{k}} \prod_{i=1}^{k+1} x_i^{a_i}\,d\mathbf{x}=\prod_{i=1}^{k+1} a_i!$$ Now expand the integrand using the multinomial theorem and use Dirichlet's integral repeatedly.
EDIT: I have replaced the previous (unnecessarily complicated) derivation and again corrected the factor (apologies). (The conclusions remain unchanged).
(ii) Thus the large $$n$$ asymptotic of $$A_n^{(k+1)}(\lambda)$$ can be investigated along the lines of the Laplace method (explained e.g. in chapter 4 of de Bruijn's book).
Remark: I have done that, and (unfortunately?) the result confirmed your original conjecture. Thus either my analysis or the accepted answer (or both) are incorrect (no offence).
EDIT:
The Laplace analysis follows the usual scheme: (1) identify maxima (2) cutoff tails, approximate integrand (3) complete tails
I sketch the main steps. Let $$I$$ denote the integral above.
Basic intuition:
(1) Consider the scalar product $$\langle \mathbf{x},\mathbf{y}\rangle=\sum_{i=1}^{k+1}x_iy_i\leq 1$$. On the domain of integration $$\frac{1}{k+1}\leq \langle\mathbf{x},\mathbf{y}\rangle\leq 1$$, and $$\langle\mathbf{x},\mathbf{y}\rangle$$ can be 1 only in a "corner" $$x_iy_i=1$$ for a certain $$i$$.
(2) For any $$0<\epsilon<1$$ the part of $$I$$ where $$\langle\mathbf{x},\mathbf{y}\rangle<1-\epsilon$$ is asymptotically negligible (compared to its complement). Asymptotically thus only the parts around the corners (i.e. $$x_iy_i\approx 1$$) need to be considered.
With this in mind: (3) choose $$\epsilon$$ so small that the corners $$C_i:=\{ (\mathbf{x},\mathbf{y})\in \mathcal{S}_k\times\mathcal{S}_k\;:\; x_i\geq 1-\epsilon,\ y_i\geq 1-\epsilon\}$$ are disjoint, and discuss their influence individually.
E.g. for corner $$k+1$$ choose $$x_1,\ldots,x_k$$ resp. $$y_1,\ldots,y_k$$ as independent coordinates. Then $$x_{k+1}=1-\sum_{i=1}^k x_i=:1-u$$, $$y_{k+1}=1-\sum_{i=1}^k y_i=:1-t$$.
We have $$\langle\mathbf{x},\mathbf{y}\rangle=(1-u)(1-t)+\sum_{i=1}^{k}x_iy_i\leq (1-u)(1-t)+ut$$, thus $$(\langle\mathbf{x},\mathbf{y}\rangle)^2\leq (1-2u(1-u))(1-2t(1-t))$$. The parts of the integral where $$u\geq 1/n^{2/3}$$ or $$t\geq 1/n^{2/3}$$ are negligible because $$\langle\mathbf{x},\mathbf{y}\rangle^n=\mathcal{O}(\exp(-n^{1/3}))$$ there. On the remaining part $$\langle\mathbf{x},\mathbf{y}\rangle^n =\exp(-n(u+t))(1+o(1))$$ uniformly. Therefore introduce new coordinates $$x_1,\ldots,x_{k-1},u$$, $$y_1,\ldots,y_{k-1},t$$, then \begin{align*} &\int_{{C}_{k+1}}\int \left(x_1y_1+\ldots+x_{k+1}y_{k+1}\right)^n\,\prod_{i=1}^{k+1} x_i^{\lambda_i}d\mathbf{x}\,\,d\mathbf{y} \\ &=\int\limits_{{x_i\geq 0, x_1+\ldots +x_k=u}\atop{{y_i\geq 0, y_1+\ldots +y_k=t}\atop {0\leq u,t\leq n^{-2/3}}}} e^{-n(u+t)}\prod_{i=1}^k x_i^{\lambda_i}(1-u)^{\lambda_{k+1}} dx_1\ldots dx_k\, dy_1\ldots dy_k\,du\,dt\big(1+o(1)\big)\\ &= \int_0^{n^{-2/3}}\int_0^{n^{-2/3}} e^{-n(u+t)} u^{s-\lambda_{k+1}+k-1} \frac{\prod_{i=1}^k\lambda_i!}{(s-\lambda_{k+1}+k-1)!} (1-u)^{\lambda_{k+1}} t^{k-1}\frac{1}{(k-1)!}\,du\,dt\big(1+o(1)\big)\\ &=\frac{\prod_{i=1}^k \lambda_i!}{n^{2k+s-\lambda_{k+1}}}\left(1+o(1)\right) \end{align*} where the last line follows after substituting $$z=nu,w=nt$$ and completing the tails. Summing over all corners one now gets your formula (after applying Stirling's formula to it, and up to the additional factors).
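A quick numerical experiment (my own sketch, not part of the answer) is consistent with this corner analysis: for $k+1=2$ and $\lambda=(1,2)$, the ratio of $A_n$ to the sum of the two corner terms $(n-s+\lambda_j)!\prod_{i\neq j}\lambda_i!$ decreases toward 1 as $n$ grows.

```python
from math import factorial

def A(n, lam):
    # brute force for two parts: a1! * a2! over a1 >= lam[0], a2 = n - a1 >= lam[1]
    return sum(factorial(a) * factorial(n - a)
               for a in range(lam[0], n - lam[1] + 1))

lam = (1, 2)
s = sum(lam)
for n in (20, 40, 80):
    # corner j: every coordinate pinned at lambda_i except the j-th
    corner_sum = sum(factorial(n - s + lam[j]) * factorial(lam[1 - j])
                     for j in range(2))
    print(n, A(n, lam) / corner_sum)
# each printed ratio is smaller than the previous one and approaches 1
```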
• Could you flesh out the Laplace's method calculation? – Daniel Parry Oct 27 '15 at 15:35
• The term "generalized geometric sum" isn't googleable. Could you point to a reference to it? Also, if everything works out I would like to cite this work. Could you possibly make it a bit less sketchy so I can use it. I would also suggest you use your actual name on MO so you can get credit for awesome work like this! – Daniel Parry Oct 27 '15 at 19:49
• (1) I have corrected some inaccuracies in the first part (apologies for being confusing), expanded it and hope it is sufficiently clear now. (2) In your post above the index $b$ should be $n$. – esg Oct 28 '15 at 17:24
Your sum is a fancy way of writing $$A_j=A_j^{(4)}=\sum_{\substack{a_i\ge\lambda_i, i=1,\dots,4\\a_1+\dots+a_4=j}}a_1!a_2!a_3!a_4!$$ (which, of course, can be treated as $A_j^{(k)}$ for any $k\ge2$). If I understand you correctly, your expectation is that $A_j\sim N\times\{\text{the maximum term of the sum}\}$ as $j\to\infty$, where $N$ is the number of hits of this maximum term. This is certainly wrong, as the terms in the neighbourhoods of those maximal entries contribute quite substantially to the sum. Some plausible asymptotics to consider here are $A_{j+1}/A_j$ or $A_j^{1/j}$ as $j\to\infty$, as in these cases one can indeed show that the leading term completely determines the growth. The related reference to study is the book "Asymptotic methods in analysis" by de Bruijn (1961), more specifically, Chapter 3 there. (I would recommend doing $k=2$ and $k=3$ first.) I really recommend this particular book, as the most accessible (and elementary enough) reference to the asymptotics of binomial sums.
• What is $n$ here? – Daniel Parry Oct 24 '15 at 21:54
• The reference you mention. Are you just giving out the name of a standard text or is it specifically in that booK? – Daniel Parry Oct 24 '15 at 22:30
Note that if $k_i\geq \lambda_i$ for all $i\in \{1,\dots,l\}$ the term $$\binom{n}{k_1,\dots ,k_l}^{-1}$$ can only be maximal if for any $i,j\in \{1,\dots,l\}$ with $i\neq j$ we have $k_i=\lambda_i$ or $k_j=\lambda_j$. Hence to get the maximal value we need to put $k_i=\lambda_i$ for all but one $i\in \{1,\dots,l\}$. Hence we have for $n\geq \sum_{i=1}^l \lambda_i$ $$\binom{n}{k_1,\dots ,k_l}^{-1}\leq \max_{j\in \{1,\dots,l\}} \frac{(n+\lambda_{j}-\sum_{i=1}^l \lambda_i)!\prod_{i=1}^l \lambda_i!}{\lambda_{j}!n!}\leq \frac{(n+\lambda_{max}-\sum_{i=1}^l \lambda_i)!\prod_{i=1}^l \lambda_i!}{\lambda_{max}!n!}$$ A double counting argument yields $$\sum_{\substack{k_1+\dots+k_l=n\\k_i\geq \lambda_i}} 1=\binom{n-\sum_{i=1}^l\lambda_i+l-1}{l-1}$$ Hence we have $$\sum_{\substack{k_1+\dots+k_l=n\\k_i\geq \lambda_i}} \binom{n}{k_1,\dots, k_l}^{-1}\leq \binom{n-\sum_{i=1}^l\lambda_i+l-1}{l-1}\frac{(n+\lambda_{max}-\sum_{i=1}^l \lambda_i)!\prod_{i=1}^l \lambda_i!}{\lambda_{max}!n!}\leq \frac{\prod_{i=1}^l \lambda_i!}{(l-1)!\lambda_{max}! }(n+l-1)^{(l-1)-\sum_{i=1}^l \lambda_i+\lambda_{max}}.$$ For example for $\lambda_1=\dots=\lambda_l=2$ we get $$\sum_{\substack{k_1+\dots+k_l=n\\k_i\geq \lambda_i}} \binom{n}{k_1,\dots, k_l}^{-1}\leq \frac{2^{l-1}}{(l-1)!} (n+l-1)^{-(l-1)}.$$
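As a sanity check of the final displayed bound (my own sketch, not part of the answer), one can compare the inverse-multinomial sum against $\frac{2^{l-1}}{(l-1)!}(n+l-1)^{-(l-1)}$ for a few small cases with $\lambda_1=\dots=\lambda_l=2$:

```python
from math import factorial
from itertools import product

def inv_multinomial_sum(n, l, lam=2):
    """Sum of 1 / multinomial(n; k_1..k_l) over k_i >= lam with k_1+...+k_l = n."""
    total = 0.0
    for head in product(range(lam, n + 1), repeat=l - 1):
        last = n - sum(head)
        if last >= lam:
            num = factorial(last)
            for k in head:
                num *= factorial(k)
            total += num / factorial(n)
    return total

for n, l in [(10, 3), (14, 3), (12, 4)]:
    bound = 2 ** (l - 1) / factorial(l - 1) * (n + l - 1) ** (-(l - 1))
    print(n, l, inv_multinomial_sum(n, l), bound)
    assert inv_multinomial_sum(n, l) <= bound
```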
• How does this show the leading term is asymptotic to the maximal term in the sequence? This bound grows as $n$ grows large. – Daniel Parry Oct 21 '15 at 14:56
• I don't understand what you mean by leading term and what it means to be asymptotic to the maximal term. I simply estimated $A_j$. I thought this was your question. If not, please clarify your question. – user35593 Oct 21 '15 at 15:01
• Thanks so much for the help! I mean that $A_n\thicksim B_n$ or that the ratio tends toward one. I don't need the asymptotic per se, just a bound that is sharp with respect to it. – Daniel Parry Oct 21 '15 at 15:04
https://stackabuse.com/integrating-elasticsearch-with-ms-sql-logstash-and-kibana/ | Integrating Elasticsearch with MS SQL, Logstash, and Kibana
Introduction
MS SQL Server holds data in relational form, or even multi-dimensional form (through SSAS), and offers several out-of-the-box search features through Full-Text Search (FTS).
However, the search functionality of modern applications has many complexities. Search requirements are often hybrid, and queries demand full-scale searching over massive data sets. A better solution is required to perform such an advanced level of search, and that is where Elasticsearch grabs the attention of technology experts.
Elasticsearch is a scalable REST HTTP service that can handle operations at thousands of queries per second. Its features, such as facets and the aggregation framework, also assist in resolving many data-analysis problems. Hence, integrating Elasticsearch with a relational database can prove to be a powerful value addition to an application.
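Because Elasticsearch is accessed over REST, queries are plain JSON documents. The sketch below builds a minimal search body in Python; the index and field names are invented for illustration (over HTTP this body would be sent as `POST /employee/_search`).

```python
import json

# A minimal full-text "match" query; "first_name" and the index are hypothetical.
query = {
    "query": {
        "match": {"first_name": "alice"}
    },
    "size": 10,   # return at most 10 hits
}

body = json.dumps(query)
print(body)
```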
How can we Integrate Elasticsearch with MS SQL?
The entire process of integrating MS SQL with Elasticsearch, along with Logstash (a data-collection and log-parsing engine) and Kibana (an analytics and visualization platform), is described here in five simple steps.
Step 1: Environment Setup
Please find the directions to set up the integration environment, with their purposes (where applicable):
• Download and install Java from https://java.com/en/download. Set the Java path in the Path environment variable and set JAVA_HOME to "C:\Program Files\Java\jre1.8.0_151"
Purpose: Elasticsearch provides a Java API, which executes all operations asynchronously through a client object; the client object can also execute operations cumulatively in bulk. The Java API is used to execute all Elasticsearch APIs.
Purpose: Since Elasticsearch is developed in Java, we need to install the JDBC driver in order to connect to SQL Server
• Extract the driver to "C:\Program Files".
• Copy sqljdbc_auth.dll from "C:\Program Files\sqljdbc_6.0\enu\auth\x64" and paste to the location "C:\Program Files\Java\jre1.8.0_151\bin".
Purpose: This will authorize Java to access the JDBC driver
• Add "C:\Program Files\sqljdbc_6.0\enu\auth\x64" to the Environment variable Path.
Purpose: Environment variables are set to enable processes such as:
• Enabling other tools to interact with SDKs more easily
Step 2: Elasticsearch Setup
Please find instructions to perform the setup for Elasticsearch:
Purpose: To install services; in our case, we need Logstash to be installed as a service.
Step 3: Logstash Setup
Please find instructions to perform the setup for Logstash:
Purpose: Logstash enables the application to collect data from different systems and normalizes their differing schemas, so the data gathered from various systems is kept in a common format. As a result:
• You can interact with data collected from different systems simultaneously; additionally, you can compare data sets or even see how they influence one another
• Visualization tools such as Kibana and analytics engines such as Elasticsearch can make the best out of complex data
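As a toy illustration of the schema normalization described above (the kind of thing a Logstash filter does; the field names here are invented), two differently-shaped records can be mapped onto one common schema:

```python
# Map source-specific field names onto a single common schema,
# analogous to a rename/mutate filter in a Logstash pipeline.
ALIASES = {"emp_name": "name", "full_name": "name",
           "dept": "department", "department_name": "department"}

def normalize(record):
    return {ALIASES.get(key, key): value for key, value in record.items()}

a = normalize({"emp_name": "alice", "dept": "sales"})         # source system A
b = normalize({"full_name": "bob", "department_name": "it"})  # source system B
print(a, b)  # both now share the keys "name" and "department"
```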
• Create a file named logstash.conf and add it under the "logstash/bin" folder.
• Open a command prompt with administrator rights, navigate to the "nssm\win64" folder and run nssm install Logstash
• Navigate to the Logstash folder and provide the arguments as below:
• Open logstash.conf using Notepad (or any other text editor) and add the following configuration:
input {
jdbc {
# SqlServer jdbc connection string to our database, employeedb
jdbc_connection_string => "jdbc:sqlserver://localhost\SQLExpress;database=employeedb;user=sa;[email protected]"
# The user we want to execute our statement as
jdbc_user => nil
jdbc_driver_library => "C:/Program Files/sqljdbc_6.0/enu/jre8/sqljdbc42.jar"
# The name of the driver class for SqlServer
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
# Query for testing purpose
statement => "SELECT * from employee"
}
}
output {
stdout { codec => json_lines }
}
• Navigate to the Logstash bin folder from the command prompt and run "logstash -f logstash.conf". It should return the query result:
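The stdout output above is only for verifying the pipeline. To actually ship the rows into Elasticsearch, the output section would typically use the elasticsearch output plugin instead; the host, index name, and the employee_id column below are assumptions, not part of the original setup:

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "employee"
    # use a unique key column as the document id so that re-running the
    # pipeline updates documents instead of duplicating them
    document_id => "%{employee_id}"
  }
}
```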
Step 4: Kibana Setup
Please find instructions to create setup for Kibana:
• Go to the Kibana config folder "C:\kibana-5.6.4-windows-x86\config" and remove the '#' characters in kibana.yml to uncomment the Kibana properties
• Install Kibana using NSSM
• Navigate to the Kibana folder as mentioned below:
To check whether Kibana is installed, browse to the localhost URL: http://localhost:5601/app/kibana#/management/kibana/index?_g=()
Step 5: Connecting Elasticsearch with the Application
Install the NEST package to utilize Elasticsearch in Visual Studio. To enable database operations through Elasticsearch, it is also required to add the ElasticsearchCRUD package from https://www.nuget.org/packages/ElasticsearchCRUD/. Download the DLL files and add a reference to them.
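Under the hood, client libraries such as NEST talk to the same REST endpoints. For instance, Elasticsearch's `_bulk` endpoint accepts newline-delimited JSON: one action line followed by one source line per document. The Python sketch below only builds such a payload for illustration (the index name and documents are invented; 5.x-era clusters also expected a `_type` field in the action line).

```python
import json

def bulk_payload(index, docs):
    """Build an NDJSON body for Elasticsearch's /_bulk endpoint."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))  # action line
        lines.append(json.dumps(doc))                           # source line
    return "\n".join(lines) + "\n"   # a _bulk body must end with a newline

payload = bulk_payload("employee", [
    {"name": "alice", "department": "sales"},
    {"name": "bob", "department": "it"},
])
print(payload)
```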
Conclusion
In this article, I have described the systematic process of integrating the Elastic Stack (Elasticsearch, Logstash, and Kibana) with an MS SQL database to make the best out of data sets. I have also tried to share the purpose of each action wherever applicable.
These steps are not necessarily limited to MS SQL database, however. As long as you download the relevant drivers, you can integrate Elasticsearch with any other database by following the same procedure.
Due to its extensive data-management capabilities, I believe Elasticsearch has the potential to deal with the data-explosion challenges of the modern era. No wonder high-end clients, including Netflix, Uber, Dell, BBC, LinkedIn, and eBay, are leveraging it. However, it is still in a young phase and much better is yet to come.
http://crypto.stackexchange.com/tags/hash/hot?filter=month | Tag Info
Hot answers tagged hash
10
Expanding then shrinking in SHA-1 refers to the process, performed for each round (each 512-bit block of padded message), of message expansion from 512 bits to 2560 bits; keeping only 160 bits of state for the next round. The later directly follows from the construction of SHA-1 as a Merkle-Damgård hash of 160 bit. The former occurs because SHA-1's ...
5
There are a few pitfalls: File name integrity: signing files one at a time signs the contents of the files. It (typically) does not protect the file names from tampering. This could be disastrous in some situations (e.g. an attacker could change blacklist.txt to whitelist.txt). Set membership integrity: signing individual files does not prevent adding or ...
5
It depends of course on the hash function you're dealing with. Assuming it is a cryptographically secure hash function, you're still looking at brute forcing the output: in other words, trying every possible input, computing the hash and then comparing with the output. Finding two inputs with the same output is a hash collision. Consider the MD5 hash ...
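As a toy illustration of what "brute forcing the output" means (my own sketch, scaled down to a 16-bit truncation so it finishes instantly; a real preimage search against a full-size output would take on the order of 2^(output bits) tries):

```python
import hashlib

# Preimage search against a hash truncated to its first 2 bytes (16 bits):
# expected work is ~2^16 tries; for the full 256-bit output it would be ~2^256.
target = hashlib.sha256(b"hello").digest()[:2]

for i in range(1 << 20):
    if hashlib.sha256(str(i).encode()).digest()[:2] == target:
        break

print(i)  # an input whose truncated digest matches the 16-bit target prefix
```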
4
SHA-1 (and, in general, any modern cryptographical method) will generate an arbitrary bitstring. The bit string 0x22 (the ASCII code for double quote) is as probable as any other bitstring. You could attempt to detect (and escape) such byte values; another possibility is convert the bitstring into hex or base-64; those have a deterministic size, and may be ...
3
I know SHAKE128 and 256 are part of the SHA-3 standard but is the SHA-3 standard officially released yet? I can only find a draft of the publication; does this mean it's not official and therefore not proven to be secure? No, SHA-3 has not been formally approved. On the other hand, what do you mean "not proven to be secure"? Do you really think that ...
3
The entropy for the output of SHA-256 truncated to its first $128$ bits when fed a random $128$-bit input is about $127.173$ bit, down from very close to $128$ bit before truncation (likely $128-k\cdot2^{-127}$ bit for $k\in\{0,1,2\}$). The truncation does not halve the entropy, because the halves are not independent. The right line of thought is that ...
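The ≈0.827-bit entropy deficit (128 − 127.173) can be checked empirically at a smaller scale (my own sketch, not part of the answer): model the truncated hash as a uniformly random function on a 2^16-element set and measure the Shannon entropy of its output distribution, since the per-output deficit is essentially independent of the set size.

```python
import random
from collections import Counter
from math import log2

random.seed(0)
N = 1 << 16   # stand-in for 2^128; the entropy deficit barely depends on N

# image counts of a uniformly random function [N] -> [N]
counts = Counter(random.randrange(N) for _ in range(N))

# Shannon entropy of the output distribution when the input is uniform
H = -sum((c / N) * log2(c / N) for c in counts.values())
print(16 - H)  # deficit, close to 0.827 bits
```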
3
If we use $H_1(X) = H_0(X) \oplus firstnbits(X)$, this would seem to be trivial. EDIT: As Cédric Van Rompay pointed out, this is only a counterexample if $H_1$ winds up being preimage-resistant. This may not be a necessary consequence of $H_0$ being preimage-resistant, but I really only need one case where it is.
2
Cryptographically secure hashes usually work on bitstrings of arbitrary length and output a fixed length bitstring. The secure part is being collision resistant and preimage resistant, so that you have a practical oneway function, and those are the properties you want for "scrambling". As fgrieu posted in the comments, one easy way to do this is to utilize ...
2
The hash output is a random string of a length specified by the function (that is 160 bits for SHA-1). It may contain any special characters, including e.g. white space. It is more common to encode the value in hex than to quote special characters, as special characters are very common in hash output (a minority of characters would be ASCII if no ...
2
I think what you're missing here is that a cryptographic hash by itself is not actually sufficient to verify the integrity of a message. Consider this: I want to send a message over the Internet (on an insecure connection, e.g. UDP), but have it be protected from tampering. I take the message and attach at the end a cryptographic hash of the message (e.g. ...
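One standard fix for the tampering problem described here is a keyed MAC such as HMAC, where computing a valid tag requires a shared secret key; a minimal sketch with Python's standard library (the key and message are invented):

```python
import hmac
import hashlib

key = b"shared-secret-key"          # known only to sender and receiver
msg = b"transfer 10 coins to bob"

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares it in constant time.
assert hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).hexdigest())

# An attacker without the key cannot produce a valid tag for a tampered message.
forged = hmac.new(b"wrong-key", b"transfer 10 coins to eve", hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, forged))  # False
```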
2
I'll consider only a non-adversarial model for the requirement of a low collision probability; that is, we are considering naturally-occurring strings only (which implies they are of bounded size; I'll limit it to $2^{64}-1$ bits, over 2305 Petabyte). However I'll consider that we need to reliably detect strings that differ only in a small consecutive ...
1
It seems to me that you don't need a cryptographic hash function, that is, a function that provides preimage resistance, collision resistance, etc. or at least to the degree that cryptographic applications require. Anyway, it seems that you could use a hash function that follows the Merkle-Damgard construction, but without doing the length padding at the ...
1
It depends. If you have full control over the whole system, all components and can use whatever algorithm you want to deploy, you can stick to the one giving you the best efficiency which fulfills your security requirements. In this case, it would be Tiger. However, Tiger has a 192 bit output. If that is not enough for you, go for SHA256. However, if the ...
1
Yes, there's an issue: you're adding needless complexity, which gives you absolutely no benefit. The whole point of a PBKDF is to be slow; passwords are low-entropy, and the only way to mitigate brute-force is to make it take time to compute hashes. It can't take too long to log in, so you have to balance "fast for a user" and "slow for an attacker." ...
If it is for completely random data you could still make a program that uses the random-looking input to make different choices. For instance, you could sign two .jar files in Java, using the SHA-256 hash over the file in the META-INF folder. Then you can use the difference between the files as a property to make one choice or the other. Basically you're replacing one of ...
Signing files individually will create independent signatures for the contents (not filename) of each file. A potential downfall here is that the items could be removed or renamed without detection. Let's say, for example, the files to be signed are alice-invoice, bob-invoice, and chris-invoice. If each file is individually signed, and bob-invoice is ...
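One standard mitigation, sketched here as my own illustration (with a plain hash standing in for the signature, and hypothetical file names), is to sign a single manifest that binds each filename to its content digest, so removing or renaming any file invalidates the one signature:

```python
import hashlib

def manifest_digest(files):
    """Hash a sorted name->content manifest so that renaming,
    removing or swapping any entry changes the digest."""
    h = hashlib.sha256()
    for name in sorted(files):
        h.update(name.encode() + b"\x00")            # bind the filename
        h.update(hashlib.sha256(files[name]).digest())  # bind the content
    return h.hexdigest()

files = {"alice-invoice": b"alice owes 10",
         "bob-invoice": b"bob owes 20"}
tag = manifest_digest(files)  # this is what would be signed

# Removing bob-invoice (or renaming it) yields a different digest.
assert manifest_digest({"alice-invoice": b"alice owes 10"}) != tag
renamed = {"alice-invoice": b"alice owes 10", "carol-invoice": b"bob owes 20"}
assert manifest_digest(renamed) != tag
```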
TL;DR, you don't. At this point, we have algorithms we believe are unbroken by current adversaries. For hashing, this includes the SHA-2 family of hashes, SHA-3, BLAKE2b and others. For authentication, we have the HMAC family of functions, the UMAC family of functions, Poly1305, and others. For symmetric encryption, we have AES, ChaCha20, and others. For ...
https://xn--llions-yua.jutge.org/upc-python-cookbook/control-flow.html

# Basic control flow
Without control flow a program is just a list of statements that is sequentially executed. In this section we cover how to conditionally or repeatedly execute code blocks by means of the if, while and for statements.
## The if statement
For conditional execution we use the if statement. In order to get familiar with its syntax let’s explore the following function that tells whether a given year is a leap year or not.
```python
# Leap years are those...
# Multiples of 4 that do not end with two zeros.
# And also, the years ending with two zeros such that,
# after removing these two zeros, are divisible by four.
def leap_year(year):
    if year % 4 == 0 and year % 100 != 0:
        return True
    elif year % 100 == 0 and (year // 100) % 4 == 0:
        return True
    else:
        return False
```
For instance,
```python
>>> leap_year(1800)
False
>>> leap_year(2020)
True
```
In general, if statements are compound statements made up of zero or more elif clauses and an optional else clause. Note that the keyword elif is short for else if.
The body of each clause needs to be indented since this is Python’s way of grouping statements. The Style Guide for Python Code suggests using 4 spaces per indentation level.
## The while statement
It is used for repeated execution provided that a condition is true. Our first hands-on experience with looping in Python will be computing the greatest common divisor of two natural numbers $a$ and $b$ by means of the Euclidean algorithm.
```python
def gcd(a, b):
    while b:  # while b != 0: would be equivalent
        a, b = b, a % b
    return a
```
Note that we are taking advantage of multiple assignment in Python. In the statement a, b = b, a%b, the right-hand side is always evaluated fully before actually setting the values to the left-hand side, which can be quite useful.
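A small demonstration of this right-hand-side-first evaluation:

```python
a, b = 3, 5

# The tuple (b, a) is built from the old values before either name
# on the left is rebound, so this swaps without a temporary variable.
a, b = b, a
assert (a, b) == (5, 3)

# The same mechanism drives one Euclidean step: b gets a % b
# computed with the *old* values of a and b.
a, b = 12, 8
a, b = b, a % b
assert (a, b) == (8, 4)
```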
## The for statement
The for statement in Python differs from what you might expect if you come from C-based programming languages.
Rather than giving the programmer control over the iteration step and halting condition, for loops in Python iterate over the elements of any iterable, in the order in which they appear. An iterable may be a string, a list, a dictionary, etc.
The syntax is readable: it names the loop variable and the sequence to loop over, linked by the in keyword, just as we described in the Lists section:
```python
>>> for i in [1, 1, 2, 5, 14, 42, 132]:
...     print(i, end=' ')
1 1 2 5 14 42 132
```
When looping over integer subsequences, the range() type is useful. Let’s explore it through different examples:
| Syntax | Example | Generates |
|---|---|---|
| range(stop) | range(4) | [0, 1, 2, 3] |
| range(start, stop) | range(2, 4) | [2, 3] |
| range(start, stop, step) | range(11, 2, -3) | [11, 8, 5] |
Note that the stopping point is never part of the generated sequence. As a sidenote, with range() we do not obtain a list per se but another iterable object. Its advantage is that it always takes the same small amount of memory.
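The table above can be checked directly by materializing each range as a list:

```python
import sys

# range() yields a lazy iterable; list() materializes it.
assert list(range(4)) == [0, 1, 2, 3]
assert list(range(2, 4)) == [2, 3]
assert list(range(11, 2, -3)) == [11, 8, 5]

# The range object itself stays small regardless of the span it
# describes (in CPython it only stores start, stop and step).
assert sys.getsizeof(range(10)) == sys.getsizeof(range(10**9))
```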
#### Example: Fibonacci numbers
Each number in the Fibonacci sequence $F_n$ is the sum of the two numbers that precede it, starting from $F_0 = 0$ and $F_1 = 1$. That is, $F_n = F_{n-1} + F_{n-2}$ for $n > 1$, giving rise to the well-known beginning ${0, 1, 1, 2, 3, 5, 8, 13, \dots }$.
Let’s write a piece of code such that, given $n$, it returns the $n$-th Fibonacci number:
```python
def fibonacci(n):
    a, b = 0, 1
    for i in range(n):
        a, b = b, a + b
    return a
```
To illustrate some Python particularities, let’s consider the following C++ code
```cpp
#include <iostream>
using namespace std;

int main() {
    for (int i = 0; i < 5; ++i) {
        cout << i << endl;
        i = 5; // In the first loop we force the halting condition
    }
    // cout << i << endl;
    // Would entail an error since 'i' was not declared in this scope
}
```
whose output is
0
and translate it into Python:
```python
for i in range(5):
    print(i)
    i = 5
print('We can access i:', i)
```
It is apparently equivalent until we actually execute it and dig into its intricacies:
```
0
1
2
3
4
We can access i: 5
```
In Python, any change we make to the variable i inside the suite of the for loop is overwritten at the start of the next iteration. Moreover, we can still access i once the loop is finished. It is illustrative to note that the first line of the code is equivalent to
```python
for i in [0, 1, 2, 3, 4]:
```
### The break and continue statements
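In short, break leaves the innermost loop immediately, while continue skips the rest of the body and moves on to the next iteration:

```python
# break: stop searching as soon as a match is found.
first_even = None
for n in [3, 7, 8, 5, 10]:
    if n % 2 == 0:
        first_even = n
        break  # leaves the loop; 10 is never inspected
assert first_even == 8

# continue: skip the rest of the body for unwanted items.
odds = []
for n in range(10):
    if n % 2 == 0:
        continue  # jump straight to the next n
    odds.append(n)
assert odds == [1, 3, 5, 7, 9]
```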
Lliçons.jutge.org | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41350045800209045, "perplexity": 1244.686922892841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00306.warc.gz"} |
https://www.quantumcalculus.org/cohomology-six-lines/

# Cohomology in six lines
Here is the code to compute a basis of the cohomology groups of an arbitrary simplicial complex. It takes 6 lines in Mathematica without any outside libraries.
The input is a simplicial complex; the output is a basis for $H^0, H^1, H^2$, etc. The length of the code compares in complexity with basic planimetric computations in a triangle (Example [mathematica notebook] for Math E320). We just compute the Dirac operator $D$, then split up the blocks $H_k$ of $D^2$ and compute their kernels. These vector spaces are equivalent to the cohomology groups by the Hodge theorem. The genius move of Hodge is that rather than talking about equivalence classes of cocycles (which requires some mathematical training to appreciate), one can look at the kernels of concrete matrices (which we do after three weeks in an intro course on linear algebra). In the following self-contained code, the first 4 lines generate a random simplicial complex. Then, in the next 6 lines, the Dirac and Hodge operators are computed. Finally, the bases of the null spaces of the Laplacians are spit out. Cohomology in the discrete has been reinvented again and again, but it is definitely due to Betti or Poincaré, the key idea being the notion of the incidence matrix $d$, which implements “div, grad, curl etc”.
The earliest reference for discrete Hodge theory I could find is the survey lecture “The Euler characteristic – a few highlights in its long history” by Benno Eckmann. As a graduate student, I attended one of these survey lectures, and it was the one about the Euler characteristic. The talk was brilliant and the lecture hall at the nearby university was packed. I never took a course from Eckmann, as he had retired, but Eckmann was still seen a lot at the department when I was a student there. He was the person who told me that I had won the fellowship to spend a year in Israel (1988-1989). The following code also shows that the topic of cohomology could be introduced early on in a linear algebra course, as it is just the process of computing the kernel of a specific matrix. We had just covered that in our linear algebra course Math 21b.
```mathematica
Generate[A_]:=Delete[Union[Sort[Flatten[Map[Subsets,A],1]]],1]
R[n_,m_]:=Module[{A={},X=Range[n],k},Do[k:=1+Random[Integer,n-1];
A=Append[A,Union[RandomChoice[X,k]]],{m}];Generate[A]];
G=R[10,16];n=Length[G]; Dim=Map[Length,G]-1;f=Delete[BinCounts[Dim],1];
Orient[a_,b_]:=Module[{z,c,k=Length[a],l=Length[b]}, If[SubsetQ[a,b] &&
(k==l+1),z=Complement[a,b][[1]];c=Prepend[b,z];Signature[a]*Signature[c],0]];
dext=Table[0,{n},{n}]; dext=Table[Orient[G[[i]],G[[j]]],{i,n},{j,n}];
Dirac=dext+Transpose[dext]; H=Dirac.Dirac; f=Prepend[f,0]; m=Length[f]-1;
U=Table[v=f[[k+1]];Table[u=Sum[f[[l]],{l,k}];H[[u+i,u+j]],{i,v},{j,v}],{k,m}];
cohomology=Map[NullSpace,U]; betti=Map[Length,cohomology]
```
You can see why there is a lot of theory about computing cohomology more effectively. A computer does not mind finding the kernel of large matrices, but when dealing with simplicial complexes with thousands of elements, the computer has to work hard too.
By the way, various Dirac operators have been considered in the discrete. It appears that the Dirac operator as a concrete matrix in the discrete had long been overlooked. The Dirac operator in the continuum is a silly beast, as one has to use a Clifford algebra in order to factor the Laplacian. In the discrete, no such gymnastics is necessary. But it is nice. McKean-Singer, for example, is quite simple in the discrete, when following the approach of the Cycon-Froese-Kirsch-Simon book (the latter book had been one of the key books for me in graduate school).
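For readers without Mathematica, the same pipeline can be sketched in plain Python (my own illustration, not part of the original post), run here on the hollow triangle, a discrete circle with $b_0 = b_1 = 1$:

```python
from fractions import Fraction

def boundary_matrix(k_simplices, km1_simplices):
    """Signed incidence matrix of the boundary map: rows index the
    (k-1)-simplices, columns the k-simplices; entries are signs."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    M = [[Fraction(0)] * len(k_simplices) for _ in range(len(km1_simplices))]
    for j, s in enumerate(k_simplices):
        for pos in range(len(s)):
            face = s[:pos] + s[pos + 1:]       # drop one vertex
            M[index[face]][j] = Fraction((-1) ** pos)
    return M

def rank(M):
    """Rank over the rationals by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# Hollow triangle: three vertices, three edges, no filled face.
verts = [(1,), (2,), (3,)]
edges = [(1, 2), (1, 3), (2, 3)]
d1 = boundary_matrix(edges, verts)

# Betti numbers: b_k = dim C_k - rank(boundary_k) - rank(boundary_{k+1});
# here the 2-dimensional boundary map is empty.
b0 = len(verts) - rank(d1)
b1 = len(edges) - rank(d1) - 0
print(b0, b1)  # → 1 1 : one component, one loop
```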
https://www.physicsforums.com/threads/finding-a-basis-for-a-set-of-polynomials-and-functions.637810/

# Finding a basis for a set of polynomials and functions
• #1 trap101
Find a basis for and the dimension of the subspaces defined for each of the following sets of conditions:
{ $p \in P_3(\mathbb{R})$ | $p(2) = p(-1) = 0$ }
{ $f \in \text{span}(e^x, e^{2x}, e^{3x})$ | $f(0) = f'(0) = 0$ }
Attempt: Having trouble getting started...
So I think my issue is interpreting what those sets are and setting it up. So I think the sets are: i) the set of all polynomials s.t P(2) = p(-1) = 0 and ii) the set of exp functions where at 0 equal 0.
So how do I put these each into a matrix form to find the basis and dimension?
• #2 Homework Helper
Find a basis for and the dimension of the subspaces defined for each of the following sets of conditions:
{ $p \in P_3(\mathbb{R})$ | $p(2) = p(-1) = 0$ }
{ $f \in \text{span}(e^x, e^{2x}, e^{3x})$ | $f(0) = f'(0) = 0$ }
Attempt: Having trouble getting started...
So I think my issue is interpreting what those sets are and setting it up. So I think the sets are: i) the set of all polynomials s.t P(2) = p(-1) = 0
I doubt this. It is probably the set of all polynomials with degree <= 3, such that p(2) = p(-1) = 0.
and ii) the set of exp functions where at 0 equal 0.
Not just any exp functions. They have to be in $\text{span}(e^x, e^{2x}, e^{3x})$.
I suggest that you start by finding the dimension of these two spaces: $P_3(\mathbb{R})$ and $\text{span}(e^x, e^{2x}, e^{3x})$. Also, what is the form of a general element for each of these two spaces?
• #3 Homework Helper
Personally, I wouldn't use a matrix, I would use the basic definition. First, I am going to assume that $P_3$ is the vector space of polynomials of degree 3 or less, which has dimension 4 (some texts use it to mean the space of polynomials of degree 2 or less, which has dimension 3; the same ideas apply). Any such polynomial can be written $p(x) = ax^3 + bx^2 + cx + d$. The condition $p(2) = 0$ means that $8a + 4b + 2c + d = 0$. The condition $p(-1) = 0$ means that $-a + b - c + d = 0$. Subtracting the second equation from the first gives $9a + 3b + 3c = 0$, that is, $c = -3a - b$. Then $d = a - b + c = -2a - 2b$. Substituting these back, $ax^3 + bx^2 + cx + d = a(x^3 - 3x - 2) + b(x^2 - x - 2)$.
Now, do you see what a basis is and what the dimension is?
(You could have made a quick "guess" at what the dimension is by the fact that the basic space has dimension 4 and there are 2 conditions put on it.)
For the second one, any $f$ in the span of $e^x$, $e^{2x}$, and $e^{3x}$ can be written as $f(x) = ae^x + be^{2x} + ce^{3x}$, and $f'(x) = ae^x + 2be^{2x} + 3ce^{3x}$.
The condition $f(0) = 0$ gives $a + b + c = 0$ and $f'(0) = 0$ gives $a + 2b + 3c = 0$. We can subtract the first equation from the second to get $b + 2c = 0$, or $b = -2c$. Putting that into the first equation, $a - 2c + c = a - c = 0$, so $a = c$. That is, we can write $ae^x + be^{2x} + ce^{3x} = ce^x - 2ce^{2x} + ce^{3x} = c(e^x - 2e^{2x} + e^{3x})$. Now, what is the dimension and what is a basis?
(Here, the basic space has dimension three and there are two conditions.)
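As a quick numerical sanity check (my own addition, not part of the thread), one can verify that candidate basis elements for both subspaces satisfy the constraints:

```python
import math

# Candidate basis for {p in P3 : p(2) = p(-1) = 0}; each polynomial
# vanishes at both points, and they are independent (degrees differ).
def p1(x): return x**3 - 3*x - 2
def p2(x): return x**2 - x - 2

for p in (p1, p2):
    assert p(2) == 0 and p(-1) == 0

# Candidate basis vector for {f in span(e^x, e^2x, e^3x) : f(0) = f'(0) = 0}.
def f(x):  return math.exp(x) - 2*math.exp(2*x) + math.exp(3*x)
def fp(x): return math.exp(x) - 4*math.exp(2*x) + 3*math.exp(3*x)

assert abs(f(0)) < 1e-12 and abs(fp(0)) < 1e-12
```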
• #4 trap101
I doubt this. It is probably the set of all polynomials with degree <= 3, such that p(2) = p(-1) = 0.
Not just any exp functions. They have to be in $\text{span}(e^x, e^{2x}, e^{3x})$.
I suggest that you start by finding the dimension of these two spaces: $P_3(\mathbb{R})$ and $\text{span}(e^x, e^{2x}, e^{3x})$. Also, what is the form of a general element for each of these two spaces?
That's what I intend on doing, but my issue is setting it up in order to find those dimensions. So here's how I'm trying to piece it together:
I know the general form for $P_3(\mathbb{R})$ is $ax^3 + bx^2 + cx + d$; now the condition is that $p(2) = p(-1) = 0$. So I have to somehow write out a set of vectors that satisfies those conditions.
As for ii), a function would be $f(x) = e^x - 2e^{2x} + e^{3x}$, but I'm utterly clueless as to how this is linearly independent and how I could even find this vector if I set up a matrix.
https://mollermara.com/blog/2016/

# ace-mc - Add multiple cursors using ace-jump
Happy leap day!
To celebrate this leap day, I just released my first Emacs package called ace-mc which allows you to quickly and easily add as well as remove multiple-cursors mode cursors using ace-jump-mode.
It's available on MELPA now! So installing it is as easy as M-x package-install RET ace-mc
Documentation is available on the GitHub page, but here are a couple screencasts:
The main reason I made this package is because adding cursors with mc/mark-next-like-this or mc/mark-all-like-this-dwim doesn't work super well if you have a lot of potential matches. For example, if I'm trying to rename a variable "i", there's often a bunch of i's in other words that I don't want to touch. While multiple-cursors does make it possible to add multiple cursors using the mouse, this often is a bit of a hassle for me.
It seems like some people are already using it. In the time it's been out, one person already reported a bug (which is now fixed) and a couple people have already asked if I'll be adding avy support.
I eventually plan to add avy support. I used ace-jump-mode first because it's the package I currently use. I understand, though, that many people have switched to avy. I've already messed around with adding an "add cursor" action to avy-dispatch-alist. The tricky thing, though, is how multiple-cursors deals with read prompts. And since avy provides many jumping styles in separate commands, I'm not sure how best to add ace-mc support for all of them. But as I keep playing around with avy, I plan to finally switch to it.
Anyways, give ace-mc a try, and if you have any improvements or suggestions, let me know! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3796825706958771, "perplexity": 1983.0069879745213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00105.warc.gz"} |
https://www.azdictionary.com/definition/accidence

• Definition for "accidence"
• The part of morphology that handles the inflections…
• Sentence for "accidence"
• Roman subjects is like a language…
• Hypernym for "accidence"
• morphology
• Etymologically Related for "accidence"
• accident
• Same Context for "accidence"
• morphosyntax
# accidence definition
• noun:
• The part of morphology that handles the inflections of words.
• The accidents, or inflections, of words; the rudiments of grammar. - John Milton
• A book containing the first principles of grammar, and hence the rudiments of any subject or art.
• The rudiments of any subject. - James Russell Lowell
• The accidents, or inflections, of words; the rudiments of grammar.
• The rudiments of any subject.
• That section of grammar which treats of the accidents or inflections of words; a small manual containing the rudiments of grammar.
• Hence: The rudiments of any subject.
• A fortuitous situation; a major accident.
• The part of grammar that deals with the inflections of words.
https://www.iacr.org/news/index.php?p=detail&id=3197

International Association for Cryptologic Research
# IACR News Central
You can also access the full news archive.
Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).
2013-12-29
13:17 [Pub][ePrint]
A widespread security claim of the Bitcoin system, presented in the original Bitcoin whitepaper, states that the security of the system is guaranteed as long as no attacker possesses half or more of the total computational power used to maintain the system. This claim, however, is proved under theoretically flawed assumptions.
In the paper we analyze two kinds of attacks based on two theoretical flaws: the Block Discarding Attack and the Difficulty Raising Attack. We argue that the current theoretical limit on the attacker's fraction of total computational power essential for the security of the system is in a sense not $\frac{1}{2}$ but a bit less than $\frac{1}{4}$, and outline proposals for protocol changes that can raise this limit to be as close to $\frac{1}{2}$ as we want.
The basic idea of the Block Discarding Attack has been noted as early as 2010, and lately was independently thought of and analyzed by both the author of this paper and the authors of a recently published pre-print. We thus focus on the major differences of our analysis, and try to explain the unfortunate surprising coincidence. To the best of our knowledge, the second attack is presented here for the first time.
13:17 [Pub][ePrint]
Consider a joint distribution $(X,A)$ on a set ${\cal X}\times\{0,1\}^\ell$. We show that for any family ${\cal F}$ of distinguishers $f \colon {\cal X} \times \{0,1\}^\ell \rightarrow \{0,1\}$, there exists a simulator $h \colon {\cal X} \rightarrow \{0,1\}^\ell$ such that

1. no function in ${\cal F}$ can distinguish $(X,A)$ from $(X,h(X))$ with advantage $\epsilon$,
2. $h$ is only $O(2^{3\ell}\epsilon^{-2})$ times less efficient than the functions in ${\cal F}$.

For the most interesting settings of the parameters (in particular, the cryptographic case where $X$ has superlogarithmic min-entropy, $\epsilon > 0$ is negligible and ${\cal F}$ consists of circuits of polynomial size), we can make the simulator $h$ \emph{deterministic}.
As an illustrative application of this theorem, we give a new security proof for the leakage-resilient stream-cipher from Eurocrypt'09. Our proof is simpler and quantitatively much better than the original proof using the dense model theorem, giving meaningful security guarantees if instantiated with a standard blockcipher like AES.
Subsequent to this work, Chung, Lui and Pass gave an interactive variant of our main theorem, and used it to investigate weak notions of Zero-Knowledge. Vadhan and Zheng give a more constructive version of our theorem using their new uniform min-max theorem.
13:17 [Pub][ePrint]
This paper is devoted to the characterization of hyper-bent functions. Several classes of hyper-bent functions have been studied, such as Charpin and Gong's $\sum_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)})$ and Mesnager's $\sum_{r\in R}\mathrm{Tr}_{1}^{n}(a_{r}x^{r(2^m-1)}) + \mathrm{Tr}_{1}^{2}(bx^{\frac{2^n-1}{3}})$, where $R$ is a set of representatives of the cyclotomic cosets modulo $2^m+1$ of full size $n$ and $a_{r}\in \mathbb{F}_{2^m}$.
In this paper, we generalize their results and consider a class of Boolean functions of the form $\sum_{r\in R}\sum_{i=0}^{2}\mathrm{Tr}^n_1(a_{r,i}x^{r(2^m-1)+\frac{2^n-1}{3}i}) + \mathrm{Tr}^2_1(bx^{\frac{2^n-1}{3}})$, where $n=2m$, $m$ is odd, $b\in\mathbb{F}_4$, and $a_{r,i}\in \mathbb{F}_{2^n}$.
With the restriction $a_{r,i}\in \mathbb{F}_{2^m}$, we characterize the hyper-bentness of these functions with character sums. Further, we reformulate this characterization in terms of the number of points on hyper-elliptic curves. For some special cases, with the help of Kloosterman sums and cubic sums, we determine the characterization for some hyper-bent functions including functions with four, six and ten trace terms. Evaluations of Kloosterman sums at three general points are used in the characterization. Actually, our results can be generalized to the general case $a_{r,i}\in \mathbb{F}_{2^n}$, and we explain this for characterizing binomial, trinomial and quadrinomial hyper-bent functions.
13:17 [Pub][ePrint]
The most widely accepted models in the security proofs of Authenticated Key Exchange protocols are the Canetti-Krawczyk model and the extended Canetti-Krawczyk model. They are shown to be incomparable because they admit different adversarial queries, and the definitions of the queries are not specific and strict enough to allow a rigorous comparison to be made. Concerning the security of one-round implicitly authenticated Diffie-Hellman key exchange protocols, we present a stronger security model that characterizes specific adversarial capabilities and encompasses the Ephemeral Key Reveal and the Session-State Reveal queries simultaneously. To demonstrate the usability of our model, a new protocol based on the OAKE protocol is proposed, which satisfies the presented stronger security notion and at the same time attains the high efficiency of the OAKE protocol. The protocol is proven secure in the random oracle model under the gap Diffie-Hellman assumption.
13:17 [Pub][ePrint]
In Eurocrypt'98, Blaze et al. introduced the concept of proxy re-encryption (PRE). It allows a semi-trusted proxy to convert a ciphertext originally intended for Alice into one which can be decrypted by Bob, without the proxy learning the corresponding plaintext. PRE has found many applications, such as encrypted e-mail forwarding[8], distributed secure file systems[1,2], multicast[10], cloud computation, etc. However, all PRE schemes until now require the delegator (or the delegator and the delegatee cooperatively) to generate the re-encryption keys. We observe that this is not the only way to generate the re-encryption keys: the encrypter also has the ability to generate them. Based on this observation, we introduce a new primitive, PRE^{+}, which is almost the same as traditional PRE except that the re-encryption keys are generated by the encrypter. Interestingly, PRE^{+} can be viewed as the dual of traditional PRE. Compared with PRE, PRE^{+} can easily achieve the non-transferable property and message-level fine-grained delegation, two properties that are very desirable in practical applications. We first categorize PRE^{+} into single-hop and multi-hop variants and discuss its potential applications, then we give the definition and security model for the single-hop PRE^{+}, construct a concrete scheme and prove its security. Finally we conclude our paper with many interesting open problems.
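For intuition about how a proxy transforms ciphertexts, here is my own toy sketch of the classic BBS98-style ElGamal re-encryption (not the PRE^{+} scheme of this paper; the group parameters are tiny and utterly insecure, chosen only so the arithmetic is visible):

```python
import random

# Toy parameters: p = 2q+1 with q prime; g generates the order-q subgroup.
p, q, g = 23, 11, 4
rng = random.Random(0)

a = rng.randrange(1, q)               # Alice's secret key
b = rng.randrange(1, q)               # Bob's secret key
pk_a = pow(g, a, p)                   # Alice's public key g^a

def encrypt(pk, m):
    r = rng.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m*g^r, pk^r)

def decrypt(sk, c):
    c1, c2 = c
    gr = pow(c2, pow(sk, -1, q), p)   # (g^{sk*r})^{1/sk} = g^r
    return c1 * pow(gr, -1, p) % p

def reencrypt(rk, c):                 # proxy step: (g^{ar})^{b/a} = g^{br}
    c1, c2 = c
    return (c1, pow(c2, rk, p))

m = 9                                 # a message in Z_p^*
c_alice = encrypt(pk_a, m)
assert decrypt(a, c_alice) == m       # Alice can read it

rk = b * pow(a, -1, q) % q            # re-key b/a mod q (delegator-side here)
c_bob = reencrypt(rk, c_alice)
assert decrypt(b, c_bob) == m         # after re-encryption, Bob can too
```

In this classic scheme the re-key is derived from both secret keys; the point of the abstract above is precisely that PRE^{+} moves re-key generation to the encrypter instead.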
13:17 [Pub][ePrint]
We show how to extract an arbitrary polynomial number of simultaneously hardcore bits from any one-way function. Our construction is based on differing-input obfuscation.
13:17 [Pub][ePrint]
We provide a general construction of a rational secret-sharing protocol in which the secret can be reconstructed in an expected three rounds. Our construction converts any rational secret-sharing protocol to a protocol with an expected three-round reconstruction in a black-box manner. Our construction works in synchronous but non-simultaneous channels, and preserves a strict Nash equilibrium of the original protocol. Combining with an existing protocol, we obtain a rational secret-sharing protocol that achieves a strict Nash equilibrium with the optimal coalition resilience of $\lceil\frac{n}{2}\rceil-1$ for expected constant-round protocols, where $n$ is the number of players.
Although the coalition resilience of $\lceil\frac{n}{2}\rceil-1$ is shown to be optimal as long as we consider constant-round protocols, we circumvent this limitation by considering players who do not prefer to reconstruct \emph{fake} secrets. By assuming such players, we construct an expected constant-round protocol that achieves a strict Nash equilibrium with coalition resilience of $n-1$. We also extend our construction to a protocol that preserves \emph{immunity} to unexpectedly behaving (or malicious) players. Then we obtain a protocol that achieves a Nash equilibrium with coalition resilience of $\lceil\frac{n}{2}\rceil-t-1$ in the presence of $t$ unexpectedly behaving players for any constant $t \geq 1$. The same protocol also achieves a strict Nash equilibrium in the absence of malicious players.
2013-12-27
13:37 [Job][New]
Coding and Cryptography Group at the University of Tartu, Estonia, is looking for a research fellow for a project on design and decoding of LDPC codes. The ideal candidate will have strength in one or more of the following areas:
• LDPC codes and iterative decoding algorithms
• Optimization methods applied to error correction
• Mathematical foundations of coding theory
• Any area related to coding theory
The project is a collaboration with the University of Bergen, Norway, and the University of Valladolid, Spain. Salary is at least 2000 euro per month before taxes plus social benefits, depending on qualification and experience. Some travel money will also be provided. Cost of living in Estonia is quite low, see e.g. http://www.expatistan.com/cost-of-living. Employment contract is for two years.
A successful candidate should:
• Hold a Ph.D. degree
• Have a strong background in coding theory or a related field
• Have an international publication record at outstanding venues
To apply, please submit the following documents (by email):
• Application letter
• Research statement
• Curriculum vitae
• Publication list
• Two letters of reference (make sure they reach us by the application deadline)
Deadline for applications: 1 February 2014
2013-12-20
16:48 [Job][New]
The objective of this thesis is the forensic reconstruction of partially erased data of various types. The problem that we will tackle is formalized as follows: we consider a data object instance as the result of a function F(t,r), where t encodes the object type and r is a random number. The OS can create objects, erase them, or update them. Erasure is done by forgetting the object’s reference and hence implicitly recycling the space on which it was written. The problem consists of algorithmically reconstructing erased data objects of various types and modeling the conditions under which various assortments of types, subject to a given number of rewriting cycles, can still be recovered. The methods that will be developed will subsequently be applied to iOS and Android.
The candidate should have solid programming and algorithmic skills. Prior knowledge of reverse-engineering tools such as IDA Pro is a plus. The candidate will interact with zero-day exploit hunters and physical reverse-engineering experts and will have access to very advanced computing and forensic facilities. This proposal is reserved for French nationals only and is fully funded.
Interested candidates should contact directly david.naccache (at) ens.fr
16:17 [Pub][ePrint]
Ecash is a concept of electronic cash that allows users to carry money in the form of digital coins. Transactions can be done both offline and online in the absence of a third party/financial institution. This paper proposes an offline model which supports multiple uses of a transferable ecoin. The protocol is based on RSA, digital signatures, and a two-step encryption process. In this two-step encryption, the user account details are encrypted in the coin using unique numbers in each step. The first encryption takes place during the successful receipt of the coin, where a receive-end number is used for encryption, which is unique for every receipt. The second step of encryption takes place during successful spending of the coin, where a spending-end receive number is used for encryption, which is unique for every spending of the coin. These two unique numbers comprise the major part of encryption in this model, prevent double spending, and preserve user anonymity.
16:17 [Pub][ePrint]
Many RFID authentication protocols have been proposed to provide the desired security and privacy level for RFID systems. Almost all of these protocols are based on symmetric cryptography because of the limited resources of RFID tags. Recently, Cheng et al. proposed an RFID security protocol based on chaotic maps. In this paper, we analyze the security of this protocol and discover its vulnerabilities. We first present a de-synchronization attack in which a passive adversary makes the shared secrets out-of-synchronization by eavesdropping on just one protocol session. We then present a secret disclosure attack in which a passive adversary extracts the secrets of a tag by eavesdropping on just one protocol session. An adversary having the secrets of the tag can launch some other attacks.
https://stats.stackexchange.com/questions/77951/how-to-estimate-the-probability-of-a-rare-event-about-which-observations-can-onl | # How to estimate the probability of a rare event about which observations can only be made in quantized time?
I have reduced a real estimation problem (a technical failure that may occur, the fact of which is checked at regular time intervals) to the following problem:
we have a non-fair coin which gives a head on almost every throw. It gives a tail extremely rarely. Let's say we throw the coin every second. After an hour, a tail comes up. After two minutes, another. After 2 hours, another. How can we estimate the probability of a tail occurring, and how can we have a good guess at the reliability of the estimate after a given number of (e.g. a few dozen) occurrences of a tail?
My problem is that the event is rare enough that it's very hard to get a reliable measurement for some of the longest time periods when it doesn't occur (it just takes a lot of time).
• What about a simple exponential distribution for the time between failures? Nov 28 '13 at 13:15
• Have you looked into survival analysis? Nov 28 '13 at 15:27
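For what it's worth, a minimal sketch (not from the thread) of the point estimate the comments are circling around: treat each second as a Bernoulli trial, so the maximum-likelihood estimate of the tail probability is (number of tails) / (total throws). The numbers below are the gaps from the question (one hour, two minutes, two hours, at one throw per second); the crude normal-approximation interval is only illustrative — with a handful of events an exact Poisson/binomial interval, or survival analysis, is more appropriate.

```python
import numpy as np

# Seconds (= throws) elapsed before each observed tail, from the question.
gaps = np.array([3600, 120, 7200])

throws = gaps.sum()   # total Bernoulli trials
tails = len(gaps)     # observed rare events

# Maximum-likelihood estimate of the per-throw tail probability.
p_hat = tails / throws

# Crude ~95% interval via the normal approximation to the Poisson count.
# With k this small the lower bound clips to 0, which is exactly why
# exact intervals are worth the trouble for rare events.
k = tails
lo = max(k - 1.96 * np.sqrt(k), 0.0) / throws
hi = (k + 1.96 * np.sqrt(k)) / throws
```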
http://www.ams.org/joursearch/servlet/DoSearch?f1=msc&v1=47H11 | # American Mathematical Society
AMS eContent Search Results
Matches for: msc=(47H11) AND publication=(all) Sort order: Date Format: Standard display
Results: 1 to 24 of 24 found Go to page: 1
[1] Alberto Boscaggin, Guglielmo Feltrin and Fabio Zanolin. Positive solutions for super-sublinear indefinite problems: High multiplicity results via coincidence degree. Trans. Amer. Math. Soc.
[2] Jean Mawhin and Katarzyna Szymańska-Dȩbowska. Convex sets and second order systems with nonlocal boundary conditions at resonance. Proc. Amer. Math. Soc. 145 (2017) 2023-2032.
[3] Rene Dager, Mihaela Negreanu and J. Ignacio Tello. An inverse problem for the compressible Reynolds equation. Quart. Appl. Math. 73 (2015) 607-614.
[4] Stefano Almi and Marco Degiovanni. On degree theory for quasilinear elliptic equations with natural growth conditions. Contemporary Mathematics 595 (2013) 1-20.
[5] Ionel Ciuperca and J. Ignacio Tello. Lack of contact in a lubricated system. Quart. Appl. Math. 69 (2011) 357-378. MR 2729893.
[6] O. Yu. Makarenkov. Poincaré index and periodic solutions of perturbed autonomous systems. Trans. Moscow Math. Soc. 70 (2009) 1-30. MR 2573636.
[7] J. Berkovits and M. Miettunen. On the uniqueness of the Browder degree. Proc. Amer. Math. Soc. 136 (2008) 3467-3476. MR 2415030.
[8] Sergiu Aizicovici, Nikolaos S. Papageorgiou and Vasile Staicu. Degree theory for operators of monotone type and nonlinear elliptic equations with inequality constraints. Memoirs of the AMS 196 (2008) MR 2459421.
[9] Piotr Hajłasz, Tadeusz Iwaniec, Jan Malý and Jani Onninen. Weakly differentiable mappings between manifolds. Memoirs of the AMS 192 (2008) MR 2357085.
[10] J. Berkovits. A reduction theorem for the topological degree for mappings of class $(S+)$. Proc. Amer. Math. Soc. 135 (2007) 2059-2064. MR 2299481.
[11] Athanassios G. Kartsatos and Igor V. Skrypnik. On the eigenvalue problem for perturbed nonlinear maximal monotone operators in reflexive Banach spaces. Trans. Amer. Math. Soc. 358 (2006) 3851-3881. MR 2219002.
[12] A. V. Pokrovskii and O. A. Rasskazov. On the use of the topological degree theory in broken orbits analysis. Proc. Amer. Math. Soc. 132 (2004) 567-577. MR 2022383.
[13] Athanassios G. Kartsatos and Igor V. Skrypnik. The index of a critical point for densely defined operators of type $(S_+)_L$ in Banach spaces. Trans. Amer. Math. Soc. 354 (2002) 1601-1630. MR 1873020.
[14] Zhonghai Ding. On nonlinear oscillations in a suspension bridge system. Trans. Amer. Math. Soc. 354 (2002) 265-274. MR 1859275.
[15] Kunquan Lan and Jeffrey Webb. A fixed point index for generalized inward mappings of condensing type. Trans. Amer. Math. Soc. 349 (1997) 2175-2186. MR 1422903.
[16] Chung-Cheng Kuo. On the solvability of a nonlinear second-order elliptic equation at resonance. Proc. Amer. Math. Soc. 124 (1996) 83-87. MR 1301035.
[17] Zhengyuan Guan and Athanassios G. Kartsatos. Ranges of perturbed maximal monotone and $m$-accretive operators in Banach spaces. Trans. Amer. Math. Soc. 347 (1995) 2403-2435. MR 1297527.
[18] Shou Chuan Hu and Nikolaos S. Papageorgiou. Generalizations of Browder's degree theory. Trans. Amer. Math. Soc. 347 (1995) 233-259. MR 1284911.
[19] Zhengyuan Guan. Solvability of semilinear equations with compact perturbations of operators of monotone type. Proc. Amer. Math. Soc. 121 (1994) 93-102. MR 1174492.
[20] Athanassios G. Kartsatos. On compact perturbations and compact resolvents of nonlinear $m$-accretive operators in Banach spaces. Proc. Amer. Math. Soc. 119 (1993) 1189-1199. MR 1216817.
[21] L. H. Erbe, W. Krawcewicz and J. H. Wu. A composite coincidence degree with applications to boundary value problems of neutral equations. Trans. Amer. Math. Soc. 335 (1993) 459-478. MR 1169080.
[22] Patrick Fitzpatrick and Jacobo Pejsachowicz. Orientation and the Leray-Schauder theory for fully nonlinear elliptic boundary value problems. Memoirs of the AMS 101 (1993) MR 1126177.
[23] Jorge Ize, Ivar Massabò and Alfonso Vignoli. Degree theory for equivariant maps, the general $S^1$-action. Memoirs of the AMS 100 (1992) MR 1126179.
[24] W. V. Petryshyn and P. M. Fitzpatrick. A degree theory, fixed point theorems, and mapping theorems for multivalued noncompact mappings. Trans. Amer. Math. Soc. 194 (1974) 1-25. MR 2478129.
https://projecteuclid.org/euclid.die/1367341061 | Differential and Integral Equations
A remark on the continuous dependence on $\phi$ of solutions to $U_T-\Delta\phi(U)=0$
David J. Diller
Article information
Source
Differential Integral Equations, Volume 11, Number 3 (1998), 425-438.
Dates
First available in Project Euclid: 30 April 2013
Diller, David J. A remark on the continuous dependence on $\phi$ of solutions to $U_T-\Delta\phi(U)=0$. Differential Integral Equations 11 (1998), no. 3, 425--438. https://projecteuclid.org/euclid.die/1367341061
http://www.textbook.ds100.org/ch/a05/bias_modeling.html | Model Bias and Variance
import warnings
# Ignore numpy dtype warnings. These warnings are caused by an interaction
# between numpy and Cython and can be safely ignored.
# Reference: https://stackoverflow.com/a/40846742
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
import ipywidgets as widgets
from ipywidgets import interact, interactive, fixed, interact_manual
import nbinteract as nbi
sns.set()
sns.set_context('talk')
np.set_printoptions(threshold=20, precision=2, suppress=True)
pd.options.display.max_rows = 7
pd.options.display.max_columns = 8
pd.set_option('precision', 2)
# This option stops scientific notation for pandas
# pd.set_option('display.float_format', '{:.2f}'.format)
def df_interact(df, nrows=7, ncols=7):
    '''
    Outputs sliders that show rows and columns of df
    '''
    def peek(row=0, col=0):
        return df.iloc[row:row + nrows, col:col + ncols]

    if len(df.columns) <= ncols:
        interact(peek, row=(0, len(df) - nrows, nrows), col=fixed(0))
    else:
        interact(peek,
                 row=(0, len(df) - nrows, nrows),
                 col=(0, len(df.columns) - ncols))

    print('({} rows, {} columns) total'.format(df.shape[0], df.shape[1]))
Model Bias and Variance
We have previously seen that our choice of model has two basic sources of error.
Our model may be too simple—a linear model is not able to properly fit data generated from a quadratic process, for example. This type of error arises from model bias.
Our model may also fit the random noise present in the data—even if we fit a quadratic process using a quadratic model, the model may predict different outcomes than the true process produces. This type of error arises from model variance.
The Bias-Variance Decomposition
We can make the statements above more precise by decomposing our formula for model risk. Recall that the risk for a model $$f_\hat{\theta}$$ is the expected loss over all possible sets of training data $$X$$, $$y$$ and all input-output points $$z$$, $$\gamma$$ in the population:
\begin{aligned} R(f_\hat{\theta}) = \mathbb{E}[ \ell(\gamma, f_\hat{\theta} (z)) ] \end{aligned}
We denote the process that generates the true population data as $$f_\theta(x)$$. The output point $$\gamma$$ is generated by our population process plus some random noise in data collection: $$\gamma_i = f_\theta(z_i) + \epsilon$$. The random noise $$\epsilon$$ is a random variable with a mean of zero: $$\mathbb{E}[\epsilon] = 0$$.
If we use the squared error as our loss function, the above expression becomes:
\begin{aligned} R(f_\hat{\theta}) = \mathbb{E}[ (\gamma - f_\hat{\theta} (z))^2 ] \end{aligned}
With some algebraic manipulation, we can show that the above expression is equivalent to:
\begin{aligned} R(f_\hat{\theta}) = (\mathbb{E}[f_\hat{\theta}(z)] - f_\theta(z))^2 + \text{Var}(f_\hat{\theta}(z)) + \text{Var}(\epsilon) \end{aligned}
The first term in this expression, $$(\mathbb{E}[f_\hat{\theta}(z)] - f_\theta(z))^2$$, is a mathematical expression for the bias of the model. (Technically, this term represents the bias squared, $$\text{bias}^2$$.) The bias is equal to zero if in the long run our choice of model $$f_\hat{\theta}(z)$$ predicts the same outcomes produced by the population process $$f_\theta(z)$$. The bias is high if our choice of model makes poor predictions of the population process even when we have the entire population as our dataset.
The second term in this expression, $$\text{Var}(f_\hat{\theta}(z))$$, represents the model variance. The variance is low when the model’s predictions don’t change much when the model is trained on different datasets from the population. The variance is high when the model’s predictions change greatly when the model is trained on different datasets from the population.
The third and final term in this expression, $$\text{Var}(\epsilon)$$, represents the irreducible error or the noise in the data generation and collection process. This term is small when the data generation and collection process is precise or has low variation. This term is large when the data contain large amounts of noise.
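A quick simulation makes the decomposition concrete. The sketch below is our own illustration (a hypothetical quadratic population fit by a linear model, with arbitrary constants): it estimates bias², model variance, and noise separately, and checks that their sum matches a direct Monte Carlo estimate of the risk at a query point.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_true(x):                    # the population process f_theta
    return x ** 2

sigma = 0.5                       # noise std, so Var(eps) = sigma ** 2
z = 1.5                           # a fixed query point

# Fit a linear model on many independently drawn training sets
# and record each trained model's prediction at z.
preds = []
for _ in range(2000):
    x = rng.uniform(-2, 2, size=30)
    y = f_true(x) + rng.normal(0, sigma, size=30)
    slope, intercept = np.polyfit(x, y, 1)
    preds.append(slope * z + intercept)
preds = np.array(preds)

bias_sq = (preds.mean() - f_true(z)) ** 2
variance = preds.var()
decomposed = bias_sq + variance + sigma ** 2

# Direct estimate of the risk: pair each trained model with a fresh
# observation gamma = f_theta(z) + eps and average the squared error.
gamma = f_true(z) + rng.normal(0, sigma, size=len(preds))
direct = np.mean((gamma - preds) ** 2)
```

The two estimates agree up to Monte Carlo error, and the bias term dominates here — no amount of extra data fixes a linear model on a quadratic process.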
Derivation of Bias-Variance Decomposition
We begin with the model risk at a point, under squared-error loss:

$\mathbb{E}[(\gamma - f_{\hat{\theta}}(z))^2]$

We expand the square and apply linearity of expectation:
$=\mathbb{E}[\gamma^2 - 2\gamma f_{\hat{\theta}}(z) + f_{\hat{\theta}}(z)^2]$
$= \mathbb{E}[\gamma^2] - \mathbb{E}[2\gamma f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2]$
Because $$\gamma$$ and $$f_{\hat{\theta}}(z)$$ are independent (the model outputs and population observations don’t depend on each other), we can say that $$\mathbb{E}[2\gamma f_{\hat{\theta}}(z)] = \mathbb{E}[2\gamma] \mathbb{E}[f_{\hat{\theta}}(z)]$$. We then substitute $$f_\theta(z) + \epsilon$$ for $$\gamma$$:
$=\mathbb{E}[(f_\theta(z) + \epsilon)^2] - \mathbb{E}[2(f_\theta(z) + \epsilon)] \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2]$
Simplifying some more: (Note that $$\mathbb{E}[f_\theta(z)] = f_\theta(z)$$ because $$f_\theta(z)$$ is a deterministic function, given a particular query point $$z$$.)
$=\mathbb{E}[f_\theta(z)^2 + 2f_\theta(z) \epsilon + \epsilon^2] - (2f_\theta(z) + \mathbb{E}[2\epsilon]) \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2]$
Applying linearity of expectation again:
$= f_\theta(z)^2 + 2f_\theta(z)\mathbb{E}[\epsilon] + \mathbb{E}[\epsilon^2] - (2f_\theta(z) + 2\mathbb{E}[\epsilon]) \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2]$
Noting that $$\big( \mathbb{E}[\epsilon] = 0 \big) \implies \big( \mathbb{E}[\epsilon^2] = \text{Var}(\epsilon) \big)$$ because $$\text{Var}(\epsilon) = \mathbb{E}[\epsilon^2]-\mathbb{E}[\epsilon]^2$$:
$= f_\theta(z)^2 + \text{Var}(\epsilon) - 2f_\theta(z) \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2]$
We can then rewrite the equation as:
$= f_\theta(z)^2 + \text{Var}(\epsilon) - 2f_\theta(z) \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)^2] - \mathbb{E}[f_{\hat{\theta}}(z)]^2 + \mathbb{E}[f_{\hat{\theta}}(z)]^2$
Because $$\mathbb{E}[f_{\hat{\theta}}(z)^2] - \mathbb{E}[f_{\hat{\theta}}(z)]^2 = \text{Var}(f_{\hat{\theta}}(z))$$:

$= f_\theta(z)^2 - 2f_\theta(z) \mathbb{E}[f_{\hat{\theta}}(z)] + \mathbb{E}[f_{\hat{\theta}}(z)]^2 + \text{Var}(f_{\hat{\theta}}(z)) + \text{Var}(\epsilon)$

$= (f_\theta(z) - \mathbb{E}[f_{\hat{\theta}}(z)])^2 + \text{Var}(f_{\hat{\theta}}(z)) + \text{Var}(\epsilon)$
$= \text{bias}^2 + \text{model variance} + \text{noise}$
To pick a model that performs well, we seek to minimize the risk. To minimize the risk, we attempt to minimize the bias, variance, and noise terms of the bias-variance decomposition. Decreasing the noise term typically requires improvements to the data collection process—purchasing more precise sensors, for example. To decrease bias and variance, however, we must tune the complexity of our models. Models that are too simple have high bias; models that are too complex have high variance. This is the essence of the bias-variance tradeoff, a fundamental issue that we face in choosing models for prediction.
Example: Linear Regression and Sine Waves
Suppose we are modeling data generated from the oscillating function shown below.
from collections import namedtuple
from sklearn.linear_model import LinearRegression
np.random.seed(42)
Line = namedtuple('Line', ['x_start', 'x_end', 'y_start', 'y_end'])
def f(x): return np.sin(x) + 0.3 * x

def noise(n):
    return np.random.normal(scale=0.1, size=n)

def draw(n):
    points = np.random.choice(np.arange(0, 20, 0.2), size=n)
    return points, f(points) + noise(n)

def fit_line(x, y, x_start=0, x_end=20):
    clf = LinearRegression().fit(x.reshape(-1, 1), y)
    y_start, y_end = clf.predict([[x_start], [x_end]])  # predict expects a 2-D array
    return Line(x_start, x_end, y_start, y_end)
population_x = np.arange(0, 20, 0.2)
population_y = f(population_x)
avg_line = fit_line(population_x, population_y)
datasets = [draw(100) for _ in range(20)]
random_lines = [fit_line(x, y) for x, y in datasets]
plt.plot(population_x, population_y)
plt.title('True underlying data generation process');
If we randomly draw a dataset from the population, we may end up with the following:
xs, ys = draw(100)
plt.scatter(xs, ys, s=10)
plt.title('One set of observed data');
Suppose we draw many sets of data from the population and fit a simple linear model to each one. Below, we plot the population data generation scheme in blue and the model predictions in green.
plt.figure(figsize=(8, 5))
plt.plot(population_x, population_y)
for x_start, x_end, y_start, y_end in random_lines:
    plt.plot([x_start, x_end], [y_start, y_end], linewidth=1, c='g')
plt.title('Population vs. linear model predictions');
The plot above clearly shows that a linear model will make prediction errors for this population. We may decompose the prediction errors into bias, variance, and irreducible noise. We illustrate bias of our model by showing that the long-run average linear model will predict different outcomes than the population process:
plt.figure(figsize=(8, 5))
xs = np.arange(0, 20, 0.2)
plt.plot(population_x, population_y, label='Population')
plt.plot([avg_line.x_start, avg_line.x_end],
[avg_line.y_start, avg_line.y_end],
linewidth=2, c='r',
label='Long-run average linear model')
plt.title('Bias of linear model')
plt.legend();
The variance of our model is the variation of the model predictions around the long-run average model:
plt.figure(figsize=(8, 5))
for x_start, x_end, y_start, y_end in random_lines:
    plt.plot([x_start, x_end], [y_start, y_end], linewidth=1, c='g', alpha=0.8)
plt.plot([avg_line.x_start, avg_line.x_end],
[avg_line.y_start, avg_line.y_end],
linewidth=4, c='r')
plt.title('Variance of linear model');
Finally, we illustrate the irreducible error by showing the deviations of the observed points from the underlying population process.
plt.plot(population_x, population_y)
xs, ys = draw(100)
plt.scatter(xs, ys, s=10)
plt.title('Irreducible error');
Bias-Variance In Practice
In an ideal world, we would minimize the expected prediction error for our model over all input-output points in the population. However, in practice, we do not know the population data generation process and thus are unable to precisely determine a model’s bias, variance, or irreducible error. Instead, we use our observed dataset as an approximation to the population.
As we have seen, however, achieving a low training error does not necessarily mean that our model will have a low test error as well. It is easy to obtain a model with extremely low bias and therefore low training error by fitting a curve that passes through every training observation. However, this model will have high variance which typically leads to high test error. Conversely, a model that predicts a constant has low variance but high bias. Fundamentally, this occurs because training error reflects the bias of our model but not the variance; the test error reflects both. In order to minimize test error, our model needs to simultaneously achieve low bias and low variance. To account for this, we need a way to simulate test error without using the test set. This is generally done using cross validation.
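As a minimal illustration of that last point, the hand-rolled k-fold helper below (our own sketch, reusing the chapter's sine process rather than any real dataset) estimates test error without touching a held-out test set, and correctly prefers a linear model over a constant one:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 20, size=200)
y = np.sin(x) + 0.3 * x + rng.normal(0, 0.1, size=200)

def cv_mse(degree, k=5):
    """k-fold cross-validated MSE of a degree-`degree` polynomial fit."""
    folds = np.array_split(rng.permutation(len(x)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)
        errors.append(np.mean((np.polyval(coefs, x[test]) - y[test]) ** 2))
    return np.mean(errors)
```

Here `cv_mse(0)` (a constant model, high bias) comes out far larger than `cv_mse(1)`, mirroring what a held-out test set would have told us.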
Takeaways
The bias-variance tradeoff allows us to more precisely describe the modeling phenomena that we have seen thus far.
Underfitting is typically caused by too much bias; overfitting is typically caused by too much model variance.
Collecting more data reduces variance. For example, the model variance of linear regression goes down by a factor of $$1 /n$$, where $$n$$ is the number of data points. Thus, doubling the dataset size halves the model variance, and collecting many data points will cause the variance to approach 0. One recent trend is to select a model with low bias and high intrinsic variance (e.g. a neural network) and collect many data points so that the model variance is low enough to make accurate predictions. While effective in practice, collecting enough data for these models tends to require large amounts of time and money.
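The $$1/n$$ behavior is easy to check empirically. The sketch below (our own, with arbitrary constants) measures the variance of a fitted slope across many simulated datasets of size n and 2n; the ratio comes out close to 2.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_variance(n, trials=4000):
    """Empirical variance of the OLS slope over `trials` datasets of size n."""
    slopes = np.empty(trials)
    for i in range(trials):
        x = rng.uniform(0, 1, size=n)
        y = 2 * x + rng.normal(0, 0.3, size=n)
        slopes[i] = np.polyfit(x, y, 1)[0]
    return slopes.var()

ratio = slope_variance(100) / slope_variance(200)  # roughly 2
```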
Collecting more data reduces bias if the model can fit the population process exactly. If the model is inherently incapable of modeling the population (as in the example above), even infinite data cannot get rid of model bias.
Adding a useful feature to the data, such as a quadratic feature when the underlying process is quadratic, reduces bias. Adding a useless feature rarely increases bias.
Adding a feature, whether useful or not, typically increases model variance since each new feature adds a parameter to the model. Generally speaking, models with many parameters have many possible combinations of parameters and therefore have higher variance than models with few parameters. In order to increase a model’s prediction accuracy, a new feature should decrease bias more than it increases variance.
Removing features will typically increase bias and can cause underfitting. For example, a simple linear model has higher model bias than the same model with a quadratic feature added to it. If the data were generated from a quadratic phenomenon, the simple linear model underfits the data.
In the plot below, the X-axis measures model complexity and the Y-axis measures magnitude. Notice how as model complexity increases, model bias strictly decreases and model variance strictly increases. As we choose more complex models, the test error first decreases then increases as the increased model variance outweighs the decreased model bias.
As the plot shows, a model with high complexity can achieve low training error but can fail to generalize to the test set because of its high model variance. On the other hand, a model with low complexity will have low model variance but can also fail to generalize because of its high model bias. To select a useful model, we must strike a balance between model bias and variance.
As we add more data, we shift the curves on our plot to the right and down, reducing bias and variance:
Summary
The bias-variance tradeoff reveals a fundamental problem in modeling. In order to minimize model risk, we use a combination of feature engineering, model selection, and cross-validation to balance bias and variance.