What works in one field may not be as useful in another. ok is TRUE’ rule, and returns the first element. Families are the cornerstone of agriculture in Alabama where 97% of farms counted in the recent 2017 Census of Agriculture are family owned. Need help? Post your question and get tips & solutions from a community of 451,187 IT Pros & Developers. Regarding units, the kim_init command behaves in different ways depending on whether or not unit conversion mode is activated as indicated by the optional unitarg argument. c is sometimes used for its side effect of removing attributes except names, for example to turn an array into a vector. Because they are genetically and physiologically similar to humans, rhesus monkeys are the most widely used nonhuman primate in basic and applied biomedical research. All Software. The other is domestic sheep. View MATLAB Command. To do this, right-click a map or scene in the Contents pane and choose Properties from the context menu. If you're thinking call_user_func_array has changed the array of multiple parameters to a string, remember that it does not pass the array through to the called function as a single argument (array), but creates one argument for each element in the array. E) One gene can specify a single enzyme if that enzyme contains a single type of polypetide chain. For the first time, mainland extinctions eclipsed island extinctions, primarily due to rampant deforestation in South America, especially in Brazil, to make way for large-scale agriculture and. Expand this section. The goal of NMDS is to collapse information from multiple. Succ‐Ala‐Ala‐Pro‐Xaa‐AMC (where Xaa were Tyr, Phe, Trp, Lys, Arg, Leu, Met and AMC is the fluorescent leaving group) tetrapeptide substrates were used in a 2–200 µ m concentration range. 2 Colony-PCR screening of clones grown after transformation with reco. Share via Twitter Share via FacebookShare via PinterestShare via Email While New York has the Hampton and Paris has the Côte DAzur, Istanbul has the Turkish coast, a coastline that connects the Aegean Sea and the Mediterranean Sea in southwestern Turkey. The pulse of what's trending on YouTube. Avian Pathol 33:492-505). Directed by Andrew Adamson. E) none of the above. 3400 stdev = 29. Named parameters can be given (when invoking a routine) in any order, but must be grouped together after (to the right of) any non-named parameters. Closest Match with VLOOKUP (TRUE) Setting the last argument to TRUE tells VLOOKUP to find the closest match to the text or number you are looking for. DCL50-CPP-EX2: As stated in the normative text, C-style variadic functions that are declared but never defined are permitted. Not sure the right track, no. And overall use of the program increased eight-fold between 2006 and 2010, from 1. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. They oppose keeping animals in zoos because imprisoning the animals for our entertainment violates their right to live free of human exploitation. Census Bureau data, only about 3 percent of. National Archives and Records. Fetch requests are controlled by the connect-src directive of Content Security Policy rather than the directive of the resources it's retrieving. Avian Pathol 33:492-505). findall method. SmartCrawl's SEO Tools for WordPress Include: One-Click Setup Wizard - Activate settings to boost your reach - no more guesswork! 
SEO Checkup & Reports - Run a Checkup and get recommendations for improving SEO. arg: Since default argument matching will set arg to choices, this is allowed as an exception to the ‘length one unless several. Nociceptive behaviour was assessed by von-Frey and Hotplate tests, which evaluate mechanical and thermal sensitivity, respectively. Published by. For example, the. b)A single-base substitution has occurred in the Arg codon resulting in the formation of a stop codon. Item Description. 8 is the last minor version of Drupal 8 before the earliest targeted release date for Drupal 9. how to make a php/ajax script to show text depending on function of php script; actionscript 3 - Instance variable data return to default when overriding the function. The model can be two- or three-dimensional and should include at least one functional part that models a membrane-transport process. DNA cleavage followed by homology-directed repair (HDR) using an exogenous template has previously been used to correct COL7A1 mutations. One way in which GET and POST requests differ is that POST requests often have "side-effects": they change the state of the system in some way (for example by placing an order. Office of the Federal Register. In earlier versions, it was taken as number of significant digits (one less). restricta, P. Parameters. Closest Match with VLOOKUP (TRUE) Setting the last argument to TRUE tells VLOOKUP to find the closest match to the text or number you are looking for. Based on a phylogenetic analysis of DNA-binding domains, we define two conserved groups of orthologous NR2E genes: the NR2E1 subclass, which includes C. Important update information. The females can live around 20 to 30 years while the males only live for about 10 years. While large amounts of work in cultural evolution have focused on the human species, there is also a growing body of work assessing the implications of learning for adaptation and speciation in many other species including chimpanzees (Whiten et al. He reported finding a weakly acidic substance of unknown function in the nuclei of human white blood cells, and named this material "nuclein". If only argnum is specified, returns the nth argument string, or the null string if the argument does not exist. (USCG, 1999) from CAMEO Chemicals. Species richness, the number of species recorded per transect survey, increases in response to decreasing sand particle size, flattening beach face slope, and increasing tide range (Figure 7. Named parameters can be given (when invoking a routine) in any order, but must be grouped together after (to the right of) any non-named parameters. Presentations should take no longer than 5. For example, all bats in the genus Lasiurus were once also known by the generic name Nycteris. ok is TRUE ’ rule, and returns the first element. Animal rights advocates oppose keeping animals in zoos, but support sanctuaries. All Things Secured 768,802 views. In other words, botanists would make the case that all wild briar roses were supposed to look like replicas of one another because a wild briar rose was meant to be built in a precise, definite way or it. Volume 98 Issue 6. The dusk command accepts any argument that is normally accepted by the PHPUnit test runner, allowing you to only run the tests for a given group, etc: php artisan dusk --group=foo. Item Description. This chapter is the longest in the book as it deals with both general principles and practical aspects of sequence and, to a lesser degree, structure analysis. 
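A recurring fragment above describes R's `match.arg` and its exception to the "length one unless `several.ok` is TRUE" rule. A minimal sketch of that default behaviour; the function and its choices are invented for illustration:

```r
# When the caller supplies nothing, the full choices vector is passed
# through and match.arg returns its first element.
center <- function(x, type = c("mean", "median", "trimmed")) {
  type <- match.arg(type)   # partial matching via pmatch, so "med" also works
  switch(type,
         mean    = mean(x),
         median  = median(x),
         trimmed = mean(x, trim = 0.1))
}

center(c(1, 2, 10))          # no type given -> "mean"
center(c(1, 2, 10), "med")   # abbreviation  -> "median"
```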
Example of searching for short, nearly exact matches What should I do in “APRIL” Ala-Pro-Arg-Ile-Leu 100 hits No significant similarity found Query: 1 APRIL 5 APRIL Sbjct: 21 APRIL 25 Protein Domain Search using NCBI and other web sources Reverse other Specific BLAST: sorting outPosition proteins the in library which have Reverse of PSI. We posit that cloverleaf tRNA is the molecular archetype around which translation systems and the genetic code evolved. I recommend plotting abundances of each species in each site, and seeing if what the NMDS is reporting aligns with the actual distribution of individuals and species among sites. For the GFF3- SOLiD format, when species is human, the chromosome 23, 24 and 25 will be converted to X, Y and M, respectively. These URLs and landing pages policies will help you with acceptable URLs and the kind of behavior users should expect when they trigger your ad. The extinct tuco-tuco was almost identical to the living one and even if the giant sloths and armadillos were gone, still today South America hosts various similar, even if smaller, species. Save up to 75% on best quality daffodils, tulips, iris, daylilies, roses and more!. , carcinogenesis, pooling tumors at several sites), one may generate hormetic curves in the absence of real hormesis; for example, if a modest decline in the first endpoint at low dose (e. If the AdobeRGB preset or other presets do not meet your needs you can open up the application completely and change color settings within that application. Starting R users often experience problems with this particular data structure and it doesn't always seem to be straightforward. Use MathJax to format equations. FIO47-C - CWE-686 = Invalid format strings that still match their arguments in type; CWE-685 and FIO47-C. Aragonese language, ISO 639 alpha-3 code arg. And as they're quite docile, they typically are easy to handle. arg was called. 15,16 To. In this case, match. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. ), but everything else does. It has been estimated that the cost of describing all animal species will exceed US270 billion and require centuries ,. Avian Pathol 33:492-505). List of Amc - Free ebook download as Word Doc (. Map units are read-only, and you can only change them by changing the coordinate system of the map or local scene. The chiral volume, V~c~, for chiral centres that involve a chiral atom bonded to three non-hydrogen atoms and one hydrogen atom. To display time for date fields, you need to configure the date field in the pop-up to use one of the short date formats and check the box to show time. b)A single-base substitution has occurred in the Arg codon resulting in the formation of a stop codon. This argument overwrites the previously specified output file. DVI connections are usually color-coded with white plastic and labels. Case also taught at the University of Illinois Department of Animal Sciences and College of Veterinary Medicine for 20 years. Rooms have free Wi-Fi, air conditioning, and a flat-screen satellite TV. Tech support scams are an industry-wide issue where scammers trick you into paying for unnecessary technical support services. And as they're quite docile, they typically are easy to handle. All comments/explanations start with the standard comment sign ' # ' to prevent them from being interpreted by R as commands. 
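The advice above, to plot the abundances of each species in each site and compare them with what the NMDS reports, takes only a few lines in R. A sketch using the `dune` data set bundled with vegan, assuming the vegan package is installed:

```r
library(vegan)

data(dune)                                   # dune meadow community matrix
mod <- metaMDS(dune, k = 2, trace = FALSE)   # two-dimensional NMDS

# Eyeball the ordination, then cross-check against the raw abundances:
plot(mod, type = "t")                                      # sites and species as text
head(dune[order(scores(mod, display = "sites")[, 1]), ])   # sites ordered by NMDS axis 1
```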
scrs <-scores (mod, display = c ("sites", "species"), scaling = scl) Each component is a matrix with two columns containing the scores on the first and second principal components respectively. The same workflow is followed where a model frame is used with the terms object and model. Expand this section. OneTouch ® Ultra ® test strips have the lowest copay on the most health plans. A Drools runtime is a collection of JARs on your file system that represent one specific release of the Drools project JARs. These are literal ids and not values from the form. The game should enforce the rules. Free Sites A quick Web search turns up dozens of sites filled with free term papers. - Suckup, 11:38, 1 April 2004 (UTC) It is important to keep in mind that the AfD process is designed to solicit discussion, not votes. FOr this,I suggest you plan everything out in meticulous detail before posting the picture to r/ARG. arg was called. These events are usually. Often in ecological research, we are interested not only in comparing univariate descriptors of communities, like diversity (such as in my previous post), but also in how the constituent species — or the composition — changes from one community to the next. For example, B64-Hex-Viginere-Rjindael. You have to know that arg(z1/z2), for instance, doesn't equal Arg (z1/z2). Jan Vertonghen headed an injury-time. View image. By default, Dusk will automatically attempt to start ChromeDriver. 9 At present, a multitude of scaffolds made of various material 10–19 in combination with bioactive substances or. output' = '&' and the Google reCAPTCHA library used by the module uses PHP http_build_query that use these setting to build URL query strings. arg: Since default argument matching will set arg to choices, this is allowed as an exception to the ‘length one unless several. Some ask you to donate one of your own papers in exchange, but most don't. A growing number of cellular regulatory mechanisms are being linked to protein modification by the polypeptide ubiquitin. Requires sorting the lookup array in ascending order. C) One gene can specify parts of two enzymes if the enzymes have one type of polypeptide chain in common. Analogical reasoning is any type of thinking that relies upon an analogy. it's also unsolved. This excerpt is the first chapter of Dog Smart, a new book by Linda Case, MS, founder and head trainer at AutumnGold Dog Training Center in Mahomet, Illinois, and the author of a number of books on training and animal nutrition. If None, re-uses the last stream if one was defined, otherwise uses sys. These are literal ids and not values from the form. View entire discussion ( 6 comments) More posts from the ARG community. For higher coverage reads this threshold should be set higher to avoid indicating fuzzy match when exact match was more likely. HDR rates can be modest, and the double-strand DNA breaks that initiate HDR commonly result in. Benzophenone is the simplest member of the class of benzophenones, being formaldehyde in which both hydrogens are replaced by phenyl groups. For example, f (x) = x 2 is a function that returns squared value of x. R Base Graphics: An Idiot's Guide. Mdl = fitcecoc(___,Name,Value) returns an ECOC model with additional options specified by one or more Name,Value pair arguments, using any of the previous syntaxes. js --url https://www. Cellular processes such as metabolism, decision making in development and differentiation, signalling, etc. 
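Continuing the `scrs <- scores(...)` fragment above: a hedged sketch of plotting the extracted site scores and labelling species with abbreviated names. Here `mod` is assumed to be a vegan ordination object and `scl` a previously chosen scaling, both from earlier, unshown steps:

```r
# Sketch only: 'mod' and 'scl' are assumed to exist already.
scrs <- scores(mod, display = c("sites", "species"), scaling = scl)

plot(scrs$sites, pch = 19, asp = 1,
     xlab = "Axis 1", ylab = "Axis 2")
text(scrs$species, labels = abbreviate(rownames(scrs$species)),
     col = "red", cex = 0.8)
```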
BI (which incorporates measures of sand particle size, slope, and tide range) therefore shows a very close. I recommend plotting abundances of each species in each site, and seeing if what the NMDS is reporting aligns with the actual distribution of individuals and species among sites. The default output from lsmeans is on the latent-variable scale -- a bit hard to explain but one way to think of it is that the common model involves a linear predictor for the logit of the cumulative probabilities, and the latent value is the average of that linear prediction of each grid value across cut points. Sudoku is a one-rule puzzle game that can be either satisfyingly simple or deceptively difficult. In the one-argument form match. It’s much more in keeping with native patterns. findall () module is used when you want to iterate over the lines of the file, it will return a list of all the matches in a single step. - Suckup, 11:38, 1 April 2004 (UTC) It is important to keep in mind that the AfD process is designed to solicit discussion, not votes. - Trustfull, 04:04, 4 April 2004 (UTC); Keep as per User: IvanIdea 's statement. how to make a php/ajax script to show text depending on function of php script; actionscript 3 - Instance variable data return to default when overriding the function. Plots can be replicated, modified and even publishable with just a handful of commands. Note that methods other than the default are not required to do this (and they will almost certainly preserve a class attribute). Comments adding nothing but a statement of support to a. month_archive(request, year=2005, month=3). 1 Answers 1 ---Accepted---Accepted---Accepted---There is an extra div() element in the second tabItem in tabItems in ui. However there are some cases where these libraries fail to get installed properly. They create a three-box-by-three-box grid. This of course emphasizes that we are all related, as all humans are descendants of the first man, Adam ( 1 Corinthians 15:45 ), 15 who was created in the image of God ( Genesis 1:26-27 ). com Books homepage helps you explore Earth's Biggest Bookstore without ever leaving the comfort of your couch. Map units are read-only, and you can only change them by changing the coordinate system of the map or local scene. Read the FAQ here. H omo naledi is a whole different story. Now with massively expanded multiplayer. I wonder if it is possible to get the goodness for two dataframe in R? e <- c(1,1,1,1,3,3). They may be gambling that nobody who really knows will say so for attribution, and they can just deny anything from vague "sources" while they hunt for the leaker(s) who told a reporter something highly classified. ET Extra Time HT Half Time. only is FALSE (default) or TRUE. If you're thinking call_user_func_array has changed the array of multiple parameters to a string, remember that it does not pass the array through to the called function as a single argument (array), but creates one argument for each element in the array. Note that the version of ggplot that we will be using is Version 2. Nociceptive behaviour was assessed by von-Frey and Hotplate tests, which evaluate mechanical and thermal sensitivity, respectively. ", does not seem common practice. He is also Chief of Infectious Diseases at the Veterans Affairs Palo Alto Health Care System in Palo Alto, California. Delmont , 2 Sébastien Raguideau , 1 Johannes Alneberg , 3 Aaron E. values: The possible values that arg can take. 
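The latent-variable discussion of lsmeans output above is easier to follow with a concrete ordinal model. A sketch using the `wine` data bundled with the ordinal package and emmeans (the successor to lsmeans); both packages are assumed installed:

```r
library(ordinal)   # clm() fits cumulative-link (ordinal) models
library(emmeans)   # successor package to lsmeans

fit <- clm(rating ~ temp + contact, data = wine)  # 'wine' ships with ordinal

emmeans(fit, ~ temp, mode = "latent")      # default: latent-variable scale
emmeans(fit, ~ temp, mode = "mean.class")  # average predicted class instead
```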
Certificate errors occur when there's a problem with a certificate or a web server's use of the certificate. The Pacific Northwest tree octopus (Octopus paxarbolis) can be found in the temperate rainforests of the Olympic Peninsula on the west coast of North America. From match. choices Axes shown. ** See if you qualify for a free meter and upgrade to the OneTouch. These solitary cephalopods reach an average size (measured from arm-tip to mantle-tip,) of 30-33 cm. These events are usually. He also presents arguments to refute certain criticisms made on his first book, The Selfish Gene. ; lookup_array - a range of cells being searched. The three most common measures of central tendency are: Average which is the arithmetic mean, and is calculated by adding a group of numbers and then dividing by the count of those numbers. • Single-dose pen which contains 2 mg of exenatide white to off-white powder, diluent, and includes one needle. [UPDATE (Mar. Drupal 7 sites that plan to use Drush should have a settings. 0 (X11; U; Linux x86_64; en-US; rv:1. Always multi-layer them. mochan) on Instagram: “Whats better than this? Just guys bein dudes. Example of searching for short, nearly exact matches What should I do in “APRIL” Ala-Pro-Arg-Ile-Leu 100 hits No significant similarity found Query: 1 APRIL 5 APRIL Sbjct: 21 APRIL 25 Protein Domain Search using NCBI and other web sources Reverse other Specific BLAST: sorting outPosition proteins the in library which have Reverse of PSI. Methods Use Instance Variables: How Objects Behave State affects behavior, behavior affects state. 3400 stdev = 29. Some ask you to donate one of your. With this invalid setting every webservice request to the Google server is broken as urls query parameters are separated by & and not just &. This excerpt is the first chapter of Dog Smart, a new book by Linda Case, MS, founder and head trainer at AutumnGold Dog Training Center in Mahomet, Illinois, and the author of a number of books on training and animal nutrition. Introduction. Instead, create a ticket type for each event option (date, location, and time combination). only is FALSE (default) or TRUE. Calls a procedure with an argument list. If x = 2, then f (2) = 4 If x = 3, f (3) = 9 and so on. Sites are comma separated. Genome editing represents a promising strategy for the therapeutic correction of COL7A1 mutations that cause recessive dystrophic epidermolysis bullosa (RDEB). The dusk command accepts any argument that is normally accepted by the PHPUnit test runner, allowing you to only run the tests for a given group, etc: php artisan dusk --group=foo. choices Axes shown. Receive accurate blood glucose results quickly and easily. If the bonds of just one protein is not correct, the giant combined molecule will serve no purpose. Species richness, the number of species recorded per transect survey, increases in response to decreasing sand particle size, flattening beach face slope, and increasing tide range (Figure 7. -z,--fuzzy = Threshold for reporting a fuzzy match (Default=300). 1 Answers 1 ---Accepted---Accepted---Accepted---There is an extra div() element in the second tabItem in tabItems in ui. It has a role as a bacterial xenobiotic metabolite. In mathematics, you might have studied about functions. In recent years, the role of gut microbiota as a reservoir of antibiotic resistance genes (ARGs) in humans and animals has been increasingly investigated. It will find all the e-mail addresses from the list. They are both pretty standard approaches. 
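The function walkthrough above (f(x) = x² returning 4 at x = 2 and 9 at x = 3) translates directly into R:

```r
f <- function(x) x^2   # returns the squared value of x
f(2)   # 4
f(3)   # 9
```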
Maximum Number of Argument Values Examples; sites (aliases: site, location) A list of site numbers. They create a three-box-by-three-box grid. I’d declare a Param() block and lay out your parameters, optional and mandatory. If the charge is already close to an integer, then the difference is caused by rounding errors and not a major problem. 0 Array but. A fast and simple way to see the effect of food on your blood sugar results. For example, f (x) = x 2 is a function that returns squared value of x. set SONAR_SCANNER_OPTS=-Xmx512m Unsupported major. The purified enzyme was shown to contain 0. Otherwise arg has to be length 1. He also presents arguments to refute certain criticisms made on his first book, The Selfish Gene. It needs an estimate of bootstrap variance. The default output from lsmeans is on the latent-variable scale -- a bit hard to explain but one way to think of it is that the common model involves a linear predictor for the logit of the cumulative probabilities, and the latent value is the average of that linear prediction of each grid value across cut points. A rule of thumb is that if the ratio of the larger to smaller standard deviation is greater than two, then the unequal variance test should be used. B M B 400, Part Three. Directed by Andrew Adamson. Brown algal phlorotannins are structural analogs of condensed tannins in terrestrial plants and, like plant phenols, they have numerous biological functions. may seem tricky. Leave a short message explaining what needs to be done as well as some information establishing the legitimacy of the name, like links to websites or books that use it. In mathematics, you might have studied about functions. Furthermore, the very basis of this argument could be undermined easily if it could be demonstrated (1) that species specific cytochrome c proteins were functional exclusively in their respective organisms, or (2) that no other cytochrome c sequence could function in an organism other than its own native cytochrome c, or (3) that an observed. Volume 98 Issue 6. Return value A list of instances of the class Proteins. ARG ARG([argnum [,option]]) If argnum and option are not specified, returns the number of arguments passed to the program or internal routine. arg(arg), the choices are obtained from a default setting for the formal argument arg of the function from which match. Computing and visualizing PCA in R. The model can be two- or three-dimensional and should include at least one functional part that models a membrane-transport process. variables which corresponds to physical measures of flowers and a categorical variable describing the flowers' species. Sorely needed, our users struggle with the lack of group layers daily in one of our Master Utility Maps that contains about 60 layers. Yes, this is more complex than Index, but it should be right up the alley of VLOOKUP pros. The remainder of the arguments are the scaling for the scores (so they match the base plot) and arguments to style the plotted points. rjaywilson Jan 22, 2015 2:39 PM. library and require load and attach add-on packages. ; match_type - specifies whether to return an exact match or the nearest match:. In recent years, the role of gut microbiota as a reservoir of antibiotic resistance genes (ARGs) in humans and animals has been increasingly investigated. Wikipedia is a free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation. 
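The ARG() description above (argument count, the nth argument string, the null string when the argument does not exist) has a rough base-R analogue built from `...length()` and `...elt()`. This is an analogy for illustration, not REXX:

```r
# Count the arguments passed in, fetch the nth one,
# and fall back to "" when it does not exist (R >= 3.5).
arg_n <- function(n, ...) {
  if (n <= ...length()) ...elt(n) else ""
}

arg_n(2, "alpha", "beta", "gamma")   # "beta"
arg_n(5, "alpha", "beta")            # "" (argument does not exist)
```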
5 zinc atoms per subunit, and sequence analysis was used to predict the zinc binding site (PubMed:11508704). If you'd like to see the 10 top commands you use, you can run something like the following. Its brain was less than half the size of a modern human's brain. Introduction. View image. It has been estimated that the cost of describing all animal species will exceed US270 billion and require centuries ,. ggplot2: Use #install. The 'iris' data comprises of 150 observations with 5 variables. C) Half of the F1 progeny had the same phenotype as one of the parental (P) plants, and the other half had the same phenotype as the other parent. UCSF Chimera is a highly extensible program for interactive visualization and analysis of molecular structures and related data, including density maps, supramolecular assemblies, sequence alignments, docking results, trajectories, and conformational ensembles. For example, the. A key feature of the scientific argument for "fixity" was the notion that the structure of each species was based on a model, ideal form. display Scores shown. Software Sites Tucows Software Library Shareware CD-ROMs Software Capsules Compilation CD-ROM Images ZX Spectrum DOOM Level CD Featured image All images latest This Just In Flickr Commons Occupy Wall Street Flickr Cover Art USGS Maps. Scientists can use, with minimal computing expertise, the wealth of new genome information for developing new insights into insect evolution. Common Names: Green-cheeked conure, green-cheeked parakeet, yellow-sided conure, green-cheeked parrot Scientific Name: Pyrrhura molinae with six subspecies with slight varieties: P. Looking for online definition of ARG or what ARG stands for? ARG is listed in the World's largest and most authoritative dictionary database of abbreviations and acronyms The Free Dictionary. may seem tricky. Solutions Flatten the PDF : If you do not intend to convert the PDF fields to DocuSign fields, try solution 2 from Issue #1 to "flatten" the file and data first. Note that low-level builtins (those defined using AutoAsm() in psym. and Joan M. Disaster Resources. Learn more about Drupal 8. admin" message "&6PvP Ratings" message "&aCreated by xTarheel" message "&e/redit help. Dismiss Join GitHub today. arg: A symbol referring to an argument accepting strings. However, the precise phase with which individual neurons are synchronized to the gamma-band rhythm might have interesting consequences for their impact on further processing and for spike timing-dependent plasticity. Computing and visualizing PCA in R. Whether you’re in search of a modern art deco piece, earthy folk art, or landscape photography, eBays wide selection ensures youll be able to find a piece that fits your personal style. 8,9 Research in this area has significantly advanced from using allografts, 10,11 xenografts, 12−14 and decellularized tissue for neural repair to the development of bespoke tissue-engineered products that match in vivo conditions more closely. Often in ecological research, we are interested not only in comparing univariate descriptors of communities, like diversity (such as in my previous post), but also in how the constituent species — or the composition — changes from one community to the next. The Amazon. 128 PROGRESSIVE GROCER’S RETAIL BAKERY REVIEW. One way in which GET and POST requests differ is that POST requests often have "side-effects": they change the state of the system in some way (for example by placing an order. 
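The "Computing and visualizing PCA in R" fragment above can be made concrete with base R's `prcomp()` on the bundled iris data:

```r
data(iris)
pca <- prcomp(iris[, 1:4], scale. = TRUE)  # PCA on the four numeric measures
summary(pca)                               # variance explained per component
biplot(pca)                                # observations plus variable loadings
```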
1 Answers 1 ---Accepted---Accepted---Accepted---There is an extra div() element in the second tabItem in tabItems in ui. 56 STANAG magazines to the magazine wells Tweaked: AKSU magazine proxy replacement timing. 1 Insertion of BamHI and EcoRIrestriction sites into XhLEA1-4S1 and 89 Figure 4. E) none of the above. molinae, P. When using an inline function inside a formula, this transformation will be applied to the current data, as well as any future data points (say, via predict. These days we’re used to ports on weaker systems featuring a few trimmed edges, maybe some muddier textures, some longer loading times. From: Date: Thu, 27 Oct 2011 14:03:56 -0400. 0 Array but. Rhino Mocks - Use Arg ONLY within a mock method call while recording Tags rhino mocks , unit testing I just got a System. 6 Daltons; its predicted isoelectric. m that computes the value of the integrand at and computes the area under the curve from 0 to. define( 'WP_DEBUG_DISPLAY', false ); Note: for WP_DEBUG_DISPLAY to do anything, WP_DEBUG must be enabled (true). x, GNU/Linux 2. arg matches an input string to a pre-defined list. Families are the cornerstone of agriculture in Alabama where 97% of farms counted in the recent 2017 Census of Agriculture are family owned. A rule of thumb is that if the ratio of the larger to smaller standard deviation is greater than two, then the unequal variance test should be used. You can use this service to retrieve information about the millions of hydrologic sites with data served by the USGS. One of the most powerful functions of R is it's ability to produce a wide range of graphics to quickly and easily visualise data. This ataxia is an interesting one. /articles/2003/ would match the first pattern in the list, not the second one, because the patterns are tested in order, and the first one is the first test to pass. Use AutoSum. [UPDATE (Mar. js --url https://www. While large amounts of work in cultural evolution have focused on the human species, there is also a growing body of work assessing the implications of learning for adaptation and speciation in many other species including chimpanzees (Whiten et al. Researchers have found an amazing diversity of plant species represented in the individual beds. If one of the parameters above is specified with a one-digit number, JavaScript adds one or two leading zeros in the result. Fix problems connecting to websites after updating Firefox - if you experience connection problems after updating Firefox. it could be using a contour level much below what you are looking at. Argument matching in Mockito September 5, 2013 May 9, 2015 skymerdev Leave a comment It’s possible to verify the arguments given to a method using either your own custom ArgumentMatcher or using a ArgumentCaptor. the position on the search list at which to attach the loaded namespace. choices Axes shown. One or more space-separated event types and optional namespaces, such as "click" or "keydown. Rooms have free Wi-Fi, air conditioning, and a flat-screen satellite TV. The Map Properties dialog box appears. Tables should be labeled with a number preceding the table title; tables and figures are labeled independently of one another. D) One enzyme may be determined by two genes if the enzyme has two different types of polypeptide chains. ci provides 5 types of bootstrap CIs. Here we go again. In order to understand the functioning of these systems, there is a strong need for general model reduction techniques allowing to simplify models without loosing their main properties. 
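The repeated "extra div() element in the second tabItem" answer above concerns shinydashboard layout, where `tabItems()` expects `tabItem()` calls as its direct children. A minimal sketch of the intended structure; the tab names are invented:

```r
library(shiny)
library(shinydashboard)

# Each tabItem must be a direct child of tabItems(); wrapping one in an
# extra div() breaks the tab-switching markup.
body <- dashboardBody(
  tabItems(
    tabItem(tabName = "first",  h2("First tab content")),
    tabItem(tabName = "second", h2("Second tab content"))
  )
)
```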
It has been estimated that the cost of describing all animal species will exceed US\$270 billion and require centuries ,. The cross-validation results determine how well the. ), but everything else does. x, LinuxPPC, LinuxAlpha, LinuxARM, LinuxSparc64, LinuxAMD64, SGI IRIX 5. define( 'WP_DEBUG_DISPLAY', false ); Note: for WP_DEBUG_DISPLAY to do anything, WP_DEBUG must be enabled (true). This tells vlookup to find an exact match for the text or number you are looking for. Each example builds on the previous one. We introduce GeneMapper, a program for transferring annotations from a well annotated genome to other genomes. 0 Array but. If unit conversion mode is not active, then user_units must either match the required units of the IM or the IM must be able to adjust its units to match. rjaywilson Jan 22, 2015 2:39 PM. To paraphrase Mark Twain, "rumors of the death of brick and mortar retail are greatly exaggerated. This means if you were to pass a command line argument in the terminal like so: node lh. The format of arg is documented in the Colors section. ci provides 5 types of bootstrap CIs. In total, we manually curated 890 domain–protein interactions from the literature, involving 24 SH3 domains and 361 proteins, encoding a total of 749 verified SH3 binding sites, each of which was shown to bind to one/multiple SH3 domains through two or more independent methods (henceforth, “known SH3 binding sites”) (Datasets S1 and S2). Get answers to your questions and learn more about USDA topics. TIP: View a sample multi-date/location event. In earlier versions, it was taken as number of significant digits (one less). c is sometimes used for its side effect of removing attributes except names, for example to turn an array into a vector. One problem could be that the symmetry and cell of the map does not match the symmetry and cell of the related protein. Closest Match with VLOOKUP (TRUE) Setting the last argument to TRUE tells VLOOKUP to find the closest match to the text or number you are looking for. Each example builds on the previous one. Most animal species await description and many named taxa actually represent a species complex. Re: Can "Arg" files be used like "Reg" files? Up until 2004, you could change an. x instead, and numerous significant deprecations and other changes preparing the codebase for Drupal 9 have been added in. 2 Names and Identifiers. Please let me know if any of the style points seem awkward to you. I also merged with official bioconductor devel and release branches and pushed upward, so as far as I know, these fixes should appear soon in both. arg was called. When evaluating the ruleset, every AND condition will need to true while only one condition in each OR condition set will need to be true. Starting R users often experience problems with this particular data structure and it doesn't always seem to be straightforward. All Things Secured 768,802 views. 1 Answers 1 ---Accepted---Accepted---Accepted---There is an extra div() element in the second tabItem in tabItems in ui. Some other guests had seen monkeys early in the morning. values: The possible values that arg can take. In earlier versions, it was taken as number of significant digits (one less). So Lasiurus borealis would have also been known as Nycteris borealis. • Single-dose pen which contains 2 mg of exenatide white to off-white powder, diluent, and includes one needle. 
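The warning quoted above, "bootstrap variances needed for studentized intervals", appears when the statistic function returns no variance estimate. A sketch with the boot package that supplies one, so all five interval types of `boot.ci` can be requested; the data are simulated for illustration:

```r
library(boot)

set.seed(1)
x <- rnorm(50)

# Return the statistic AND an estimate of its variance; boot.ci's
# studentized ("stud") intervals need that second value.
mean_fun <- function(d, i) {
  c(mean(d[i]), var(d[i]) / length(i))
}

b <- boot(x, mean_fun, R = 999)
boot.ci(b, type = c("norm", "basic", "stud", "perc", "bca"))
```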
/articles/2003/ would match the first pattern in the list, not the second one, because the patterns are tested in order, and the first one is the first test to pass. We've developed a suite of premium Outlook features for people with advanced email and calendar needs. Re: Can "Arg" files be used like "Reg" files? Up until 2004, you could change an. The precision specifier stands for the number of digits after the decimal point since PHP 5. arg was called. The Glennon DNA Surname Project has two participants with an exact 12-marker match with DJN6U, Coffee. Let's edit our script to pass the command line URL argument to the function's url parameter. On the one hand, it is a replication slippage mutation like prion repeat disorders. The handler is executed at most once per element per event type. In recent years, the role of gut microbiota as a reservoir of antibiotic resistance genes (ARGs) in humans and animals has been increasingly investigated. The usage match. The ASA implementation of NSEL generates periodic NSEL events, called flow-update events, to provide periodic byte counters over the duration of the flow. Note that low-level builtins (those defined using AutoAsm() in psym. Sites may be prefixed with an optional agency code followed by a colon. We didn't provide it, so R prints a warning: bootstrap variances needed for studentized intervals. phoenicura, P. rjaywilson Jan 22, 2015 2:39 PM. You can specify up to 100 sites. Other PCs can be chosen through the argument choices of the function. C) One gene can specify parts of two enzymes if the enzymes have one type of polypeptide chain in common. By holding concentrations of one species constant, for instance X, there is essentially no net change in X vs time. In the subtropical forest, just a 15-minute drive from. There it is - the MATCH() function tells the Index function which row to look in - you are done. Once you have created a settings. values: The possible values that arg can take. Looking for online definition of ARG or what ARG stands for? ARG is listed in the World's largest and most authoritative dictionary database of abbreviations and acronyms The Free Dictionary. Families are the cornerstone of agriculture in Alabama where 97% of farms counted in the recent 2017 Census of Agriculture are family owned. 25 metres long, made by some of the largest creatures ever to walk the Earth. r - mclust error: 'arg' must be NULL or a character vector 2020腾讯云共同战"疫",助力复工(优惠前所未有! 4核8G,5M带宽 1684元/3年),. These solitary cephalopods reach an average size (measured from arm-tip to mantle-tip,) of 30-33 cm. The nuclear receptors of the NR2E class play important roles in pattern formation and nervous system development. The same variable may not appear multiple times as an OUT or INOUT argument in the procedure. Matching is done using pmatch , so arg may be abbreviated. Nociceptive behaviour was assessed by von-Frey and Hotplate tests, which evaluate mechanical and thermal sensitivity, respectively. 50 Common Java Errors and How to Avoid Them (Part 1) This big book of compiler errors starts off a two-part series on common Java errors and exceptions, how they're formed, and how to fix them. These must include some of the alternatives species or sp for species scores, sites or wa for site scores, lc for linear constraints or LC scores'', or bp for biplot arrows or cn for centroids of factor constraints instead of an arrow. Arginine, an α-amino acid. 
It can analyze thousands of gene families in dozens of genomes simultaneously, and was presented in an article in Genome Research. When the research has been completed, each team should select two members: One to present the supporting argument and one to present the other side. When using an inline function inside a formula, this transformation will be applied to the current data, as well as any future data points (say, via predict. Use MathJax to format equations. The phenotypic effect is a nonsense mutation and a shortened protein. If the logical se. arg: A symbol referring to an argument accepting strings. Merigan Professor in Medicine, and Professor of Microbiology & Immunology, and Senior Fellow at the Freeman Spogli Institute for International Studies at Stanford University. Excel returns the count of the numeric values in the range in a cell adjacent to the range you selected. Plots can be replicated, modified and even publishable with just a handful of commands. For example, B64-Hex-Viginere-Rjindael. 3400 stdev = 29. An Alternative to Regular Expressions: apg-exp. The same argument prevails here as for the previous question about Normality. Wolverhampton Wanderers vs Tottenham Hotspur. User-Agent: Mozilla/5. This means if you were to pass a command line argument in the terminal like so: node lh. Important update information. It will find all the e-mail addresses from the list. Learn how to prepare, recover, and help build long-term resilience. It can analyze thousands of gene families in dozens of genomes simultaneously, and was presented in an article in Genome Research. (The latter is. arg was called. Tables should be labeled with a number preceding the table title; tables and figures are labeled independently of one another. Tip: The Universal Coordinated Time (UTC) is the time set by the World Time Standard. We posit that cloverleaf tRNA is the molecular archetype around which translation systems and the genetic code evolved. This ataxia is an interesting one. Alternate Reality Gaming is, according to CNET, "an obsession-inspiring genre that blends real-life treasure hunting, interactive storytelling, video games and online community "These games are an intensely complicated series of puzzles involving coded Web sites, real-world clues like the newspaper advertisements, phone calls in the. Volume 98 Issue 6. Whether you’re in search of a modern art deco piece, earthy folk art, or landscape photography, eBays wide selection ensures youll be able to find a piece that fits your personal style. For this reason, any new deprecations added to Drupal 8. and Joan M. Compressing the explanation of where to link the expanded forms in a single paragraph; arguably, that information does not need its own subsection. the position on the search list at which to attach the loaded namespace. Case also taught at the University of Illinois Department of Animal Sciences and College of Veterinary Medicine for 20 years. We have 3 species of flowers: Setosa, Versicolor and Virginica and for each of them the sepal length and width and petal length and width are provided. 1, 2004 CODE OF FEDERAL REGULATIONS 50 Parts 200 to 599 Revised as of October 1, 2004 Wildlife and Fisheries Containing a codification of documents of general applicability and future effect As of October 1, 2004 With Ancillaries. Then go the the registration settings on the receiving site and check the box to 'Allow logged-in users to create new users with this form. 
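The INDEX/MATCH point above (MATCH finds the row, INDEX retrieves the value) has a one-line R counterpart; the product and price vectors here are invented for illustration:

```r
products <- c("tall", "grande", "venti")
prices   <- c(2.95, 3.65, 4.15)        # hypothetical prices

prices[match("grande", products)]      # match() finds the row, [ ] retrieves it: 3.65
```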
189), the ether-a-gogo (EAG) and human eag-related gene (HERG) family of voltage-activated K + channels (for review, see Ref. values: The possible values that arg can take. Please study the introduction of this essay on making solid arguments in deletion discussions. arg: Since default argument matching will set arg to choices, this is allowed as an exception to the ‘length one unless several. Project Management Content Management System (CMS) Task Management Project Portfolio Management Time Tracking PDF. 5 zinc atoms per subunit, and sequence analysis was used to predict the zinc binding site (PubMed:11508704). Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Internet Explorer helps keep your information more secure by warning about certificate errors. The invention pertains to a single polypeptide chain binding molecule which has binding specificity and affinity substantially similar to the binding specificity and affinity of the light and heavy chain aggregate variable region of an antibody, to genetic sequences coding therefor, and to recombinant DNA methods of producing such molecule and uses for such molecule. flag ar Argentina flag au Australia Activity reservations cannot be changed however you can cancel your existing reservation and book a new one. Hi, I am a doctoral student working on a microbiomes project and did an principal coordinates analysis in Phyloseq. 1 Insertion of BamHI and EcoRIrestriction sites into XhLEA1-4S1 and 89 Figure 4. Brown algal phlorotannins are structural analogs of condensed tannins in terrestrial plants and, like plant phenols, they have numerous biological functions. Project Management. Checkout my article on VLOOKUP Explained at Starbucks for more info. There exist C++ and C versions of Arg_parser. Each example builds on the previous one. ArcGIS Online is a collaborative web GIS that allows you to use, create, and share maps, scenes, apps, layers, analytics, and data. In the one-argument form match. Computing and visualizing PCA in R. EzMOL • A wizard for protein display and image production • EzMOL • EzMol is a wizard for protein display and image production, allowing to upload a coordinate file, specify chain style, color background and structures, color or hide cartoons or stick side chains, color surface patches and label residues, as well as render and download. DNA methylation at nine (16%) CpG sites was associated with whole blood gene expression in cis (P 8. D) One enzyme may be determined by two genes if the enzyme has two different types of polypeptide chains. x or earlier, read the Drupal 8. Solutions Flatten the PDF : If you do not intend to convert the PDF fields to DocuSign fields, try solution 2 from Issue #1 to "flatten" the file and data first. 3: For your ciphers,search for more obscure ones. getSampleData(videoElement|audioElement ); This feature has been mentioned a few. In this tutorial, you'll build a Slack bot using Cloudflare Workers. Wowpedia maintains a list of functions below; albeit it is incomplete and maintained by volunteer contributions. The females can live around 20 to 30 years while the males only live for about 10 years. Then on the Formulas tab, click AutoSum > Count Numbers. 
As R is an interpreted environment, one often uses assertions to check both the internal consistency of the code (the "things that should always be true") and how the code is used (if the arguments you give to a function are not those expected, the function should not return anything, and the computations should be halted until the problem is. arg(arg), the choices are obtained from a default setting for the formal argument arg of the function from which match. Analogical reasoning is any type of thinking that relies upon an analogy. c is sometimes used for its side effect of removing attributes except names, for example to turn an array into a vector. Subscribe to RSS Feed. ** See if you qualify for a free meter and upgrade to the OneTouch. Specify a name of skin in the command line. The default is 'true' which shows errors and warnings as they are generated. choices Axes shown. One could expect that similar species were cursed in a similar way, out of fairness. arg uses this default vector for choices, and if the default was used in the call, returns the first value. Its brain was less than half the size of a modern human's brain. So Lasiurus borealis would have also been known as Nycteris borealis. The Bible does not even use the word race in reference to people, 14 but it does describe all human beings as being of "one blood" ( Acts 17:26 ). This way the content in the code boxes can be pasted with their comment text into the R console to evaluate their. Authorization Scopes Requires one of the following OAuth scopes:. arg the default function argument is used to match the input, but here your default is not a simple character vector. A 5- to 10-gallon tank is suitable for these tarantulas. Transposable elements are discrete mobile DNA segments that can insert into nonhomologous target sites. For instance, the function may be a definition used in a C library API that is implemented in C++. In the previous example, the log transformation is applied to one of the columns. Although these methods are not, in themselves, part of genomics, no reasonable genome analysis and annotation would be possible without understanding how these methods work and having some practical experience with their use. Each example builds on the previous one. The other is domestic sheep. You can use any of these display rules when implementing the follow-up examples to boost engagement (follow-ups #1-6 below). 25 Other overalkylated sites also include serine and the N-terminal amino group. Map units are read-only, and you can only change them by changing the coordinate system of the map or local scene. Next I add the species scores, but this time I want to label them with (abbreviated) species names. The researcher, John McLean, did all the work on his own, so it is a way to get compensated for all the time and effort put into it. High-quality images and animations can be generated. R Base Graphics: An Idiot's Guide. flavoptera, P. But at worst, linters just force you to look over your coding decisions. Use AutoSum by selecting a range of cells that contains at least one numeric value. INDEX and MATCH Functions Together. arg was called. For example, the. Note that the version of ggplot that we will be using is Version 2. , 1999, Peterson and Shaw, 2003, Peterson et al. Published by. Always multi-layer them. The Glennon DNA Surname Project has two participants with an exact 12-marker match with DJN6U, Coffee. 
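The remark above about assertions in R, checking both internal consistency ("things that should always be true") and how the code is used, looks like this in practice with `stopifnot()`; the function is invented for illustration:

```r
scale01 <- function(x) {
  # How the code is used: reject bad inputs up front
  stopifnot(is.numeric(x), length(x) > 1, !anyNA(x))
  out <- (x - min(x)) / diff(range(x))
  # Things that should always be true: internal consistency check
  stopifnot(all(out >= 0 & out <= 1))
  out
}

scale01(c(2, 4, 6))   # 0.0 0.5 1.0
```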
Even if scientists were to admit Neanderthals within the group of species that practiced mortuary rituals, then, we could still maintain a uniqueness argument that only large-brained hominins performed this symbolic activity. The isotope-averaged molecular weight of the 32-kDa phosphoprotein (without glycosylations) is 11,657. Species richness, the number of species recorded per transect survey, increases in response to decreasing sand particle size, flattening beach face slope, and increasing tide range (Figure 7. From: Date: Thu, 27 Oct 2011 14:03:56 -0400. Based on a phylogenetic analysis of DNA-binding domains, we define two conserved groups of orthologous NR2E genes: the NR2E1 subclass, which includes C. Define a script in a file named integrationScript. The Arg-C digest after propionylation generated two peptides, K 9 STGGKAPR 17 and K 18 QLATKAAR 26, which display acetylation at each of the four lysine residues (K9, K14, K18 and K23). Only two species have been observed showing a same-sex preference for life, even when partners of the opposite sex are available. Susperregui ARG, Viñals F, Ho PWM, Gillespie MT, Martin TJ, et al. Otherwise arg has to be length 1. command /redit [] []: usage: &eUse &a/redit help trigger: if arg 1 is "clear": player has permission "redit. The dynamics of invasive species may depend on their abilities to compete for resources and exploit disturbances relative to the abilities of native species. Still, with a total of about 7. 12 of the DADA2 pipeline on a small multi-sample dataset. An additive weighted score of replicated. Cellular processes such as metabolism, decision making in development and differentiation, signalling, etc. Before we dive into the follow-up examples, let's first go over 6 OptinMonster display rules that you can use to create follow up campaigns based on behavior. 18-04-2019 EXE rev. it could be using a contour level much below what you are looking at. Parameters. ## Auxiliary functions (not to be exported, to be used only by the main function: the user should not access these functions) square <- function(a,b){a^2+b} cube <- function(a,b){a^3+b} This is how I am currently solving this problem, it works fine, however I have the feeling there must be a better way or some sort of "best practice" for such. Either provide an argument you implied to or remove that div() element. Here you'll find current best sellers in books, new releases in books, deals in books, Kindle eBooks, Audible audiobooks, and so much more. 6 What other documentation is available for vegan?. for file upload from HTML forms - see HTML Specification, Form Submission for more details). 1, 2004 CODE OF FEDERAL REGULATIONS 50 Parts 200 to 599 Revised as of October 1, 2004 Wildlife and Fisheries Containing a codification of documents of general applicability and future effect As of October 1, 2004 With Ancillaries. dots = TRUE is appropriate. Expand this section. In this tutorial, you'll build a Slack bot using Cloudflare Workers. All comments/explanations start with the standard comment sign ' # ' to prevent them from being interpreted by R as commands. Find resources for farmers, ranchers, private. into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules. The taxonomic category will be translated from the "Organism species" field and you can select whether you search for a single protein or a binary mixture one the second page of CombSearch form. If choices and args are the same in match. 
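One common answer to the "auxiliary functions not to be exported" question above: in a package you simply omit the helpers from the NAMESPACE file, and in a plain script you can hide them inside `local()`. A sketch reusing the fragment's own helpers:

```r
# square() and cube() stay private to the closure; only power_sum() is visible.
power_sum <- local({
  square <- function(a, b) a^2 + b
  cube   <- function(a, b) a^3 + b
  function(a, b) square(a, b) + cube(a, b)
})

power_sum(2, 1)   # (4 + 1) + (8 + 1) = 14
```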
It's one of those essential memes that we take for granted. If set to REQUESTED, past site visitors or app users who match the list definition will be included in the list (works on the Display Network only). procedure_argument may be a variable or an expression. define( 'WP_DEBUG_DISPLAY', false ); Note: for WP_DEBUG_DISPLAY to do anything, WP_DEBUG must be enabled (true). You can help protect yourself from scammers by verifying that the contact is a Microsoft Agent or Microsoft Employee and that the phone number is an official Microsoft global customer service number. arg the default function argument is used to match the input, but here your default is not a simple character vector. Four kids travel through a wardrobe to the land of Narnia and learn of their destiny to free it with the guidance of a mystical lion. Learn how to prepare, recover, and help build long-term resilience. Some species have come to be known by multiple scientific names. Solutions Flatten the PDF : If you do not intend to convert the PDF fields to DocuSign fields, try solution 2 from Issue #1 to "flatten" the file and data first. ), but everything else does. – baptiste Dec 29 '12 at 9:47. Datasets In this article, we will use three datasets - 'iris' , 'mpg' and 'mtcars' datasets available in R. On Windows environments, avoid the double-quotes, since they get misinterpreted and combine the two parameters into a single one. Length: 5 inches. Add your property to Expedia Explore More. Tables should be labeled with a number preceding the table title; tables and figures are labeled independently of one another. Species Overview. Once you have created a settings. You can help protect yourself from scammers by verifying that the contact is a Microsoft Agent or Microsoft Employee and that the phone number is an official Microsoft global customer service number. In response to climate change, we should expect both an influx of new species to geographic locations and a concomitant loss of species that have historically thrived within those locations. m that computes the value of the integrand at and computes the area under the curve from 0 to. The test for equality of variances is dependent on the sample size. Description: Attach a handler to an event for the elements. 4-nitroaniline is a nitroaniline carrying a nitro group at position 4. x or earlier, read the Drupal 8. Think of jigsaw puzzles as an example of this essential compatibility. In this case, match. Many species are responding to this changing climate by shifting their geographic ranges. Most of the time, when you write a script and test it in different environments (such as running it on a different machine, using the noprofile switch, or having your friend test it on his laptop), it is very likely that you will see errors. One week later, one group of animals (OIH) was euthanized while in a second group (Post-OIH), mini-pumps were removed and the animals were euthanized 2 weeks later. 0 (X11; U; Linux x86_64; en-US; rv:1. pdf), Text File (. However, the structure and function of the gut bacterial community, as well as the ARGs they carry in migratory birds remain unknown. choices Axes shown. arg(what) works when the function calling match. Lucrezia Marinella was born in Venice in 1571, and lived there until her death in 1653. In most, but not all, of these examples, ubiquitination of a protein leads to its degradation by the 26S proteasome. variables which corresponds to physical measures of flowers and a categorical variable describing the flowers' species. 
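The rule of thumb above, use the unequal-variance (Welch) test when the larger standard deviation is more than twice the smaller, is easy to apply in R; the data are simulated for illustration:

```r
set.seed(42)
a <- rnorm(30, sd = 1.0)
b <- rnorm(30, sd = 2.5)

# Ratio of larger to smaller SD above 2 -> unequal-variance (Welch) test
ratio <- max(sd(a), sd(b)) / min(sd(a), sd(b))
t.test(a, b, var.equal = ratio <= 2)   # var.equal = FALSE gives Welch's test
```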
No Man’s Sky is a game about exploration and survival in an infinite procedurally generated galaxy, available on PS4, PC and Xbox One. the set of variants is not bounded); and of course, one needn't necessarily use pattern matching at all, but can instead add more functions to the. The creationist argument: According to Biblical fundamentalists, all of the animals that ever existed must have all lived within the past few thousand years. procedure_argument may be a variable or an expression. With this invalid setting every webservice request to the Google server is broken as urls query parameters are separated by & and not just &. Check out the latest music videos, trailers, comedy clips, and everything else that people are watching right now. FIO47-C - CWE-686 = Invalid format strings that still match their arguments in type; CWE-685 and FIO47-C. (Both books espouse the gene-centric view of evolution. Wikipedia is a free online encyclopedia, created and edited by volunteers around the world and hosted by the Wikimedia Foundation. ", does not seem common practice. 1 Answers 1 ---Accepted---Accepted---Accepted---There is an extra div() element in the second tabItem in tabItems in ui. Use AutoSum by selecting a range of cells that contains at least one numeric value. The Glennon DNA Surname Project has two participants with an exact 12-marker match with DJN6U, Coffee. In earlier versions, it was taken as number of significant digits (one less).
# GATE2016-3-51

In a single point turning operation with cemented carbide tool and steel work piece, it is found that the Taylor's exponent is $0.25$. If the cutting speed is reduced by $50\%$ then the tool life changes by ______ times. (A worked solution is given after the list of related questions below.)

## Related questions

- A straight turning operation is carried out using a single point cutting tool on an AISI $1020$ steel rod. The feed is $0.2$ $mm/rev$ and the depth of cut is $0.5$ $mm$. The tool has a side cutting edge angle of $60^{\circ}$. The uncut chip thickness (in $mm$) is _______
- Taylor's tool life equation is given by $VT^n=C$, where $V$ is in $m/min$ and $T$ is in $min$. In a turning operation, two tools $X$ and $Y$ are used. For tool $X$, $n=0.3$ and $C=60$ and for tool $Y$, $n=0.6$ and $C=90$. Both the tools will have the same tool life for the cutting speed (in $m/min$, round off to one decimal place) of ____________
- Under certain cutting conditions, doubling the cutting speed reduces the tool life to $\left(\dfrac{1}{16}\right)^{th}$ of the original. Taylor's tool life index ($n$) for this tool-workpiece combination will be _______
- Two separate slab milling operations, $1$ and $2$, are performed with identical milling cutters. The depth of cut in operation $2$ is twice that in operation $1$. The other cutting parameters are identical. The ratio of maximum uncut chip thicknesses in operations $1$ and $2$ is _______
- For an orthogonal cutting operation, tool material is HSS, rake angle is $22^\circ$, chip thickness is $0.8\ mm$, speed is $48\ m/min$ and feed is $0.4\ mm/rev$. The shear plane angle (in degrees) is $19.24$ / $29.70$ / $56.00$ / $68.75$
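A worked solution to the headline question; this derivation is mine (the page leaves the blank unfilled), using Taylor's tool life equation $VT^n=C$ with $n=0.25$ and a halved speed $V_2 = 0.5\,V_1$:

$$V_1T_1^{\,n} = V_2T_2^{\,n} \;\Rightarrow\; \frac{T_2}{T_1}=\left(\frac{V_1}{V_2}\right)^{1/n}=2^{1/0.25}=2^4=16,$$

so the tool life changes by $16$ times.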
# Eigenvalues of product/sum of two matrices

Find an example of matrices $A$ and $B$ with $AB=BA$ and for which $\lambda$ is an eigenvalue of $A$ and $\mu$ an eigenvalue of $B$, but $\lambda+\mu$ is not an eigenvalue of $A+B$, and $\lambda\mu$ is not an eigenvalue of $AB$. Can anyone please provide an example of two such matrices? The matrices should not be triangular or diagonal.

- Pick for A and B two matrices that are really easy to calculate with that satisfy the conditions. Which ones did you pick? Do they work? If not, why not? – I like Serena Mar 22 at 13:53
- "low"?? Not even a single accepted Question/Answer. I am not sure if anybody would wish to help you if you do not react properly to users who spared their time to help you... – Praphulla Koushik Mar 22 at 14:23

## 4 Answers

Ok trying again. Take $$A = \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 0\end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 0\\ 1 & 0 & 1\end{bmatrix}\,.$$ These matrices commute, neither is diagonal, and neither is triangular. Eigenvalues of $A$: $-1, 1, 0$. Eigenvalues of $B$: $2, 2, 0$. Eigenvalues of $A+B$: $3, 2, -1$. Eigenvalues of $AB$: $2, 0, 0$. So take $\lambda = -1$ and $\mu = 2$.

$A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ has eigenvalue $1$, and $B=\begin{pmatrix}0&2\\2&0\end{pmatrix}$ has eigenvalue $-2$. But $A+B=\begin{pmatrix}0&3\\3&0\end{pmatrix}$ does not have eigenvalue $1-2=-1$, and $AB=\begin{pmatrix}2&0\\0&2\end{pmatrix}$ does not have eigenvalue $1\cdot(-2)=-2$.

- A better answer than mine, obviously! – Jason Zimba Mar 22 at 15:37
- Nice one. – Git Gud Mar 22 at 15:53

I don't understand how this question makes sense the way it is posed here. If each square matrix has dimension $n$, then you have $n^2$ possible products/sums of the individual eigenvalues, whereas the matrix product/sum can only have $n$ eigenvalues. So some of these eigenvalue products/sums have to be left out by construction (unless you get exactly $n$ unique numbers out of the possible $n^2$ combinations). Wouldn't it make more sense to ask whether the eigenvalues of the matrix product/sum are always a subset of the possible eigenvalue products/sums?

@hafsah, your sentence "matrices should not be triangular" shows that you did not understand one word about this problem. Indeed, if $A,B$ are complex matrices s.t. $AB=BA$, then $A,B$ are simultaneously triangularizable. Thus there are orderings $(\lambda_i)_i,(\mu_i)_i$ of the eigenvalues of $A,B$ s.t. the eigenvalues of $A+B$ are $(\lambda_i+\mu_i)_i$ and the eigenvalues of $AB$ are $(\lambda_i\mu_i)_i$.
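A quick numerical check of the first answer's $3\times 3$ example; a minimal sketch assuming NumPy is available (this code is mine, not from the original thread):

```python
import numpy as np

A = np.array([[0., 0., 1.],
              [0., 0., 0.],
              [1., 0., 0.]])
B = np.array([[1., 0., 1.],
              [0., 2., 0.],
              [1., 0., 1.]])

assert np.allclose(A @ B, B @ A)       # the matrices commute
print(np.linalg.eigvals(A))            # -1, 1, 0   (up to ordering)
print(np.linalg.eigvals(B))            #  2, 2, 0
print(np.linalg.eigvals(A + B))        #  3, 2, -1  -> 1 = -1 + 2 is absent
print(np.linalg.eigvals(A @ B))        #  2, 0, 0   -> -2 = -1 * 2 is absent
```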
# Tag Info

**0** You are confusing the "Total Return" with the "Price Return". You didn't put up the ticker for the related index, but I assume you are looking at the price return. The future's market value will increase as the dividends are paid out. The index you are using is most likely the price return index, which doesn't include the effect of the ...

**1** MSCI World futures are traded nearly 24 hours, while the index constituents only update their prices when their local country stock markets are open - typically this means 1/3 to 2/3 of the index constituent prices are actively updating, the rest are frozen at their last close price. The futures price can be thought of as the market's guess at the true index ...

**1** Yes, your table is correct... the proverbial "catch" is in your assumptions of small gains with nil volatility. Because volatility is itself the catch with levered strategies in general (and levered ETFs very specifically). Replicate these 1% returns with a 14.14% standard normal deviation, for a thousand, million, billion runs. Your 1% compound ...

**0** For a leveraged ETF with a leverage of $L$, the value of the ETF is:
$$\mathrm{ETF}_{t_n} = \mathrm{ETF}_{t_0} \cdot \prod_{i=1}^{n} \left[ 1+L\left(\frac{S_{t_i}}{S_{t_{i-1}}}-1\right) - f \cdot \mathrm{DCF}(t_{i-1}, t_i)\right]$$
where $t_i$ are the dates on which the ETF rebalances to restore the leverage, $f$ is the ETF management fee, and ...

**1** Assume a portfolio value (e.g. 100,000) and find the value invested in each specific stock (if the weight of company X is 20%, then we invest 20,000 in that stock); based on the price on that day, you find the number of shares bought (assume a price of 5, then we bought 20,000/5 = 4,000 shares). Once you have the exact number of shares you invested in the portfolio for each ...

**5** Hate to disappoint, but you're going to need to pay to get delisted securities. Even basic equity price data of any quality comes with a cost. There are a number of non-commercial vendors that include this sort of data with one of their packages, though. For instance, a vendor like Quandl (one of the cheapest, but still OK quality) offers packages for US ...

**0** If I go to barchart.com, I can get the volume-weighted average price (aka VWAP, under "+Study") added to any graph. The VWAP is just the notional (trade price times trade volume) divided by the total volume. Furthermore, the VWAP is more useful than the total notional traded.

**3** There is more than one way to approach this. Given your comment that this is a small strategy in a larger account, I assume that you are testing it and, if it bears enough fruit, you may want to scale it up. You should assume some starting value. I'm going to assume a number that's equal to your initial nominal value (as you requested in your comment). ...
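A small simulation sketch of the daily-rebalancing formula above, ignoring fees and dividends; this code is mine (the parameters are illustrative), assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 3.0                         # leverage factor
r = rng.normal(0.0, 0.01, 252)  # one year of daily index returns, 1% daily vol

index_total = np.prod(1 + r)          # buy-and-hold index growth
etf_total = np.prod(1 + L * r)        # ETF rebalanced to leverage L each day
naive = 1 + L * (index_total - 1)     # "L times the index move" intuition

print(f"index:    {index_total:.4f}")
print(f"{L:.0f}x ETF:   {etf_total:.4f}")
print(f"naive {L:.0f}x: {naive:.4f}")
# With nonzero volatility, the compounded ETF typically lags the naive
# L-times figure: the volatility drag discussed in the answers above.
```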
# Taylor and Laurent series

## Introduction

We originally defined an analytic function as one where the derivative, defined as a limit of ratios, existed. We went on to prove Cauchy's theorem and Cauchy's integral formula, which give the existence of derivatives of all orders. Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities. These notes provide several working examples where the Laurent series of a function is determined and then used to calculate the integral of the function along any closed curve around the singularities of the function; a brief description of the Frobenius method for solving ordinary differential equations is also provided.

Review of complex numbers: a complex number is any expression of the form $x+iy$ where $x$ and $y$ are real numbers; $x$ is called the real part and $y$ the imaginary part of $x+iy$, and $x-iy$ is said to be the complex conjugate of $x+iy$. An infinite sequence of complex numbers, denoted by $\{z_n\}$, can be considered as a function defined on the positive integers into the unextended complex plane; for example, $z_n = \frac{n+i}{2^n}$ gives the sequence $\frac{1+i}{2}, \frac{2+i}{2^2}, \dots$

## Taylor series

If a function $f(x)$ has continuous derivatives up to $(n+1)$th order, then it can be expanded in the following way:
$$f(x) = \sum_{n=0}^{\infty} f^{(n)}(a)\,\frac{(x-a)^n}{n!}.$$
If $f$ is analytic at $z_0$, then it may be written as a power series $f(z) = a_0 + a_1(z-z_0) + a_2(z-z_0)^2 + \cdots$ which converges in an open disk around $z_0$. This power series is simply the Taylor series of $f$ at $z_0$, and its coefficients are given by
$$a_n = \frac{1}{n!}\, f^{(n)}(z_0).$$
In some cases, however, it may not be possible to write a function in this form.

The first and most important example is the geometric progression formula
$$\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n = 1 + z + z^2 + z^3 + \cdots$$
We know this converges to $\frac{1}{1-z}$ for $|z|<1$: the limit of the absolute ratios of consecutive terms is $\lim_{n\to\infty} |z^{n+1}|/|z^n| = |z|$, so the ratio test agrees that the geometric series converges when $|z|<1$. (FIGURE 7.1: the geometric series does not converge uniformly on $(-1,1)$.) This can be differentiated any number of times:
$$\frac{1}{(1-z)^2} = \sum_{n=0}^{\infty}(n+1)z^n = 1 + 2z + 3z^2 + 4z^3 + \cdots$$
$$\frac{1}{(1-z)^3} = \frac{1}{2}\sum_{n=0}^{\infty}(n+1)(n+2)z^n = 1 + 3z + 6z^2 + 10z^3 + \cdots$$
and so on. Sometimes you would like a series which is valid outside instead of inside the circle $|z|=1$; then substitute $w=1/z$ in the geometric series:
$$\frac{1}{1-\frac{1}{w}} = \sum_{n=0}^{\infty}\left(\frac{1}{w}\right)^n \quad \text{for } |w|>1.$$
This is great, since this series is valid outside of the circle $|z|=1$.

## Laurent series

Whereas power series with non-negative exponents can be used to represent analytic functions in disks, Laurent series (which can have negative exponents) serve a similar purpose in annuli. They may be used to express complex functions in cases where a Taylor series expansion cannot be applied, and they are a powerful tool to understand analytic functions near their singularities. To make this precise we first need to define a special type of domain called an annulus. Most often, one is looking at Laurent series which are valid in a punctured neighborhood centered at a point $z_0$, so they converge in a domain $0 < |z-z_0| < R$ for some $R>0$. Frequently occurring instances of Laurent expansions are for functions that are analytic everywhere except for a single singularity at a point $z = z_0$.

The Laurent series was named after, and first published by, Pierre Alphonse Laurent (1813-1854) in 1843. Karl Weierstrass may have discovered it first in a paper written in 1841, but it was not published until after his death. The application of Laurent series is based mainly on Laurent's theorem (1843): any single-valued analytic function $f(z)$ in an annulus $D = \{z : 0 \le r < |z-a| < R \le +\infty\}$ can be represented in $D$ by a convergent Laurent series; in particular, this applies in a punctured neighbourhood $D = \{z : 0 < |z-a| < R\}$ of an isolated singular point $a$. Concretely, let there be two circular contours $C_2$ and $C_1$, with the radius of $C_1$ larger than that of $C_2$. If $f$ is analytic throughout the annular region between, and on, the concentric circles $C_1$ and $C_2$ centered at $c$, then there exists a unique series expansion in terms of positive and negative powers of $z-c$,
$$f(z) = \sum_{n=-\infty}^{\infty} a_n (z-c)^n, \qquad a_n = \frac{1}{2\pi i}\oint \frac{f(\zeta)\,d\zeta}{(\zeta-c)^{n+1}}$$
(Korn and Korn 1968, pp. 197-198). Written out, Laurent's theorem states that if $f(z)$ is analytic between two concentric circles centered at $z_0$, it can be expanded in a series of the general form
$$f(z) = \cdots + a_{-3}(z-z_0)^{-3} + a_{-2}(z-z_0)^{-2} + a_{-1}(z-z_0)^{-1} + a_0 + a_1(z-z_0) + a_2(z-z_0)^2 + a_3(z-z_0)^3 + \cdots$$
For the proof one considers two nested contours $C_1$ and $C_2$ and points $z$ contained in the annular region, with the point $z=a$ contained within the inner contour. In the punctured-plane case, the region of convergence is bounded by an infinitesimal circle about $z_0$ and a circle of infinite radius.

The two-sided expansion of $f$ is unique, for if also $f(z) = \sum_{n=-\infty}^{\infty} b_n (z-c)^n$, then for any $m \in \mathbb{Z}$,
$$2\pi i\, b_m = \sum_{n=-\infty}^{\infty} b_n \oint \frac{d\zeta}{(\zeta-c)^{m-n+1}} = \oint \frac{\sum_{n} b_n (\zeta-c)^n}{(\zeta-c)^{m+1}}\,d\zeta = \oint \frac{f(\zeta)\,d\zeta}{(\zeta-c)^{m+1}} = 2\pi i\, a_m.$$

The first part of a Laurent series is a regular power series and hence has an associated radius of convergence $R_2 \ge 0$; we have uniform convergence on every closed disk $\overline{D}(z_0; r_2) \subset B(z_0; R_2)$. The second part, called the singular part, can be thought of as a "power series in $\frac{1}{z-z_0}$". There is a useful procedure known as the Weierstrass M-test, which can help determine whether an infinite series is uniformly convergent.

Usually, the Laurent series of a function, i.e. the coefficients $c_n$, is not determined by using the integral formula above, but directly from known series. Often it is sufficient to know the value of $c_{-1}$, the residue, which is used to compute integrals: see the Cauchy residue theorem, the powerful theorem that this lecture is all about. We will prove the requisite theorem (the Residue Theorem) and also lay the abstract groundwork.

## Singularities

Those terms with negative exponents are called the principal part of the Laurent series, and the singularity is classified into one of three types depending on how many terms there are in the principal part:

- The principal part is zero, i.e. $a_n = 0$ for all $n<0$. In this case the two-sided series is a power series, and so $f$ extends analytically to $f(c) = a_0$: the singularity of $f$ at $c$ is removable.
- The principal part has finitely many nonzero terms: a pole.
- The principal part has infinitely many nonzero terms: an essential singularity.

In general, a singularity is a point at which a given mathematical object is not defined, or a point of an exceptional set where it fails to be well-behaved in some particular way, such as differentiability.

## Examples

Example 1. Find the Laurent series for $f(z) = \frac{z+1}{z}$ around $z_0 = 0$. The answer is simply $f(z) = 1 + \frac{1}{z}$, a Laurent series valid on the infinite region $0 < |z| < \infty$.

Example 2. Consider the series $f(z) = \sum_{n=0}^{\infty} \frac{z^n}{n!}$, the Taylor series of $e^z$. Dividing by $z^2$ gives
$$\frac{e^z}{z^2} = \frac{1}{z^2} + \frac{1}{z} + \frac{1}{2!} + \frac{z}{3!} + \cdots,$$
valid for all $|z| > 0$.

Example 3. Consider the function
$$\frac{1}{z^2-3z+2} = \frac{1}{z-2} - \frac{1}{z-1}.$$
It has two singularities, at $z=1$ and $z=2$, which are clearly poles. We can expand the function as a Laurent series centered at either of the poles, or in each of the regions $|z|<1$, $1<|z|<2$ and $|z|>2$ determined by them: an ordinary Taylor expansion in the disk, and Laurent expansions in the annulus and in the complement of a disk. For the middle annulus, an expansion for $\frac{1}{z-2}$ valid in $|z|<2$ is immediate (a geometric series in $z/2$), so we only need to get an expansion for $-\frac{1}{z-1}$ valid in $1<|z|<2$. Note that you cannot write the function in a unique way for both regions: although the function is the same, the object we are dealing with is the Laurent development of the function, not the function itself. The Laurent development is a representation of the function, "a way to see the function", and it is natural to expect this representation to change when the point of view (i.e. the annulus) changes.

Example 4. Determine the Laurent series for $f(z) = \frac{1}{z+5}$ that are valid in the regions (i) $\{z : |z| < 5\}$ and (ii) $\{z : |z| > 5\}$. The region (i) is an open disk inside a circle of radius 5, centred on $z=0$, and the region (ii) is an open annulus outside that circle.

Example 5. Find all Laurent series of $\frac{1}{z^3 - z^4}$ with center 0. Give the region where each is valid.

Example 6. Expand $f(z) = \frac{1}{z(z^2-1)}$ in the ring $1 < |z| < \infty$:
$$\frac{1}{z(z^2-1)} = z^{-3}\,\frac{1}{1-1/z^2} = z^{-3}\sum_{n=0}^{\infty} z^{-2n}.$$
Notice that we always take out of the parentheses in the denominator the term of the bigger absolute value, so that the resulting geometric series converges.

Example 7. Combining the three terms of a partial-fraction expansion gives, for the example treated in class, the Laurent expansion valid in Region I:
$$f(z) = \frac{1}{2}z^{-1} + \sum_{k=0}^{\infty}\left(1 - 2^{-k-4}\right)z^k.$$

Example 8 (residues). The series, together with the first term from the Laurent series expansion of $\frac{1}{z^2+1}$ near $-i$, identifies the $a_{-1}$ term; therefore the residue of $f$ at $-i$ is $-\frac{1}{2i}$, which is one-half $i$.

Edit: I found a problem but could not understand the solution; the function is $e^{\frac{c}{2}\left(z-\frac{1}{z}\right)}$. The solution in the book says $b_n = (-1)^n a_n$.

## Software

Obtaining Laurent series and residues using Mathematica, for the Laurent series example discussed in Boas and in class:

```
In[343]:= Clear[ff]
In[344]:= ff[z_] = 12/(z (2 - z) (1 + z))
Out[344]= 12/((2 - z) z (1 + z))
In[345]:= Series[ff[z], {z, 0, 3}]
```

The Mathematica command Series[] automatically gives Laurent series; {z,0,3} means: expand in z, about z=0, giving up to the $z^3$ term. Also the regions for the series can be alternated by changing …

An Octave sketch of the Laurent series coefficient sequences:

```
% Laurent series and sequences
function plotseq1(m=1, p1=2, p2=2.1)
  t1p = 0:m;
  t1n = -m:-1;
  t1 = [t1n, t1p];
  f1 = [zeros(1,m), ((1/p2).^(t1p+1) - (1/p1).^(t1p+1))];
end
```

IMPLEMENTATION: Laurent series in Sage are represented internally as a power of the variable times the unit part (which need not be a unit: it's a polynomial with nonzero constant term). The zero Laurent series has unit part 0. (David Joyner, 2006-01-22: added examples.)

## Exercises

Find the Taylor series of $f(z)$ expanded about the given point, and give the region where the series converges:

(a) $f(z) = 1/(z+2)$ expanded about $z = 0$.
(b) $f(z) = 1/(z+2)$ expanded about $z = 3i$.
(c) $f(z) = z^5/(z^3-4)$ expanded about $z = 0$.
(d) $f(z) = z\sin z$ expanded about $z = \pi/2$.
(e) $f(z) = \operatorname{Log} z$ expanded about $z = 3$.

Find all Taylor and Laurent series of $f$, giving the region where each is valid, and expand the same function $f$ as in Example 1 into a Laurent series in the ring $1 < |z| < \infty$.
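A quick symbolic check of the $e^z/z^2$ expansion in Example 2; a minimal sketch assuming SymPy is available (this code is mine, not part of the original notes):

```python
import sympy as sp

z = sp.symbols('z')
# Laurent expansion of exp(z)/z**2 about z = 0, up to the z**3 term
print(sp.series(sp.exp(z) / z**2, z, 0, 4))
# terms: z**(-2) + 1/z + 1/2 + z/6 + z**2/24 + z**3/120 + O(z**4)
# (printed order may vary)
```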
# What is Free Energy: Hinton, Helmholtz, and Legendre

Hinton introduced Free Energies in his 1994 paper. This paper, along with his wake-sleep algorithm, set the foundations for modern variational learning. They appear in his RBMs and, more recently, in Variational AutoEncoders (VAEs).

Of course, Free Energies come from Chemical Physics. And this is not surprising, since Hinton's graduate advisor was a famous theoretical chemist. They are so important that Karl Friston has proposed The Free Energy Principle: A Unified Brain Theory?

What are Free Energies, and why do we use them in Deep Learning?

#### The Free Energy is a Temperature Weighted Average Energy

In (Unsupervised) Deep Learning, Energies are quadratic forms over the weights. In an RBM, one has

$E(\mathbf{h},\mathbf{v})=\mathbf{v}^{T}\mathbf{a}+\mathbf{b}^{T}\mathbf{h}+\mathbf{v}^{T}\mathbf{Wh}$

This is the T=0 configurational Energy, where each configuration is some $(\mathbf{h},\mathbf{v})$ pair. In chemical physics, these Energies resemble an Ising model.

The Free Energy $F$ is a weighted average over all the global and local minima $E_{i}$:

$e^{-\beta F}=\sum\limits_{i}e^{-\beta E_{i}}$

##### Zero Temperature Limit

Note: as $T\rightarrow 0$, the Free Energy becomes the T=0 global Energy minimum $E_{0}$. In the limit of zero Temperature, $\beta\rightarrow\infty$, every term in the sum is suppressed,

$e^{-\beta E_{i}}\rightarrow 0,$

and only the largest term, the one with the largest negative Energy, survives:

$F(T\rightarrow 0)\rightarrow E_{0}$

##### Other Notation

We may also see F written in terms of the partition function Z:

$-\beta F=\langle\;ln\;Z\;\rangle$

$Z=\sum\limits_{i}e^{-\beta E_{i}}$

where the brackets $\langle\cdots\rangle$ denote an equilibrium average, an expected value $\mathbb{E_P}[\cdots]$ over some equilibrium probability distribution $\mathbb{P}$ (we don't normalize with 1/N here; in principle, the sum could be infinite).

Of course, in deep learning, we may be trying to determine the distribution $\mathbb{P}$, and/or we may approximate it with some simpler distribution $\mathbb{Q}\sim\mathbb{P}$ during inference. (From now on, I just write P and Q for convenience.)

But there is more to Free Energy learning than just approximating a distribution.

#### The Free Energy is an average solution to a non-convex optimization problem

In a chemical system, the Free Energy averages over all global and local minima below the Temperature T, with barriers below T as well. It is the Energy available to do work.

##### Being Scale Free: T=1

For convenience, Hinton explicitly set T=1. Of course, he was doing inference, and did not know the scale of the weights W. Since we don't specify the Energy scale, we learn the scale implicitly when we learn W. We call this being scale-free.

So in the T=1, scale-free case, the Free Energy implicitly averages over all Energy minima where $E_{i}<1$, as we learn the weights W. Free Energies solve the problem of Neural Nets being non-convex by averaging over the global minima and nearby local minima.

##### Highly degenerate non-convex problems

Because Free Energies provide an average solution, they can even provide solutions to highly degenerate non-convex optimization problems.

##### When do Free Energy solutions fail?

They will fail, however, when the barriers between Energy basins are larger than the Temperature. This can happen if the effective Temperature drops close to zero during inference.
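A tiny numeric illustration of the temperature-weighted average above; this is my own sketch in Python (not from the original post), with made-up energy levels:

```python
import numpy as np

def free_energy(energies, T):
    # F = -T * log( sum_i exp(-E_i / T) ), computed with a log-sum-exp
    # shift by min(E) for numerical stability at small T.
    m = np.min(energies)
    return m - T * np.log(np.sum(np.exp(-(energies - m) / T)))

E = np.array([-3.0, -2.9, -1.0, 0.5])   # a global minimum plus local minima

for T in [2.0, 1.0, 0.1, 0.01]:
    print(f"T = {T:5.2f}   F = {free_energy(E, T):8.4f}")
# As T -> 0, F converges to min(E) = -3.0: only the deepest basin survives.
# At larger T, F is an average that also feels the nearby local minima.
```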
Since T=1 implicitly during inference, this happens when the weights W are exploding. See: Normalization in Deep Learning.

Systems may also get trapped if the Energy barriers grow very large, as, say, in the glassy phase of a mean field spin glass. Or in a supercooled liquid, the so-called Adam-Gibbs phenomenon. I will discuss this in a future post.

In either case, if the system, or solver, gets trapped in a single Energy basin, it may appear to be convex, and/or flat (the Hessian has lots of zeros). But this is probably not the optimal solution to learning when using a Free Energy method.

#### Free Energies produce Ruggedly Convex Landscapes

It is sometimes argued that Deep Learning is a non-convex optimization problem. And yet it has been known for over 20 years that networks like CNNs don't suffer from the problems of local minima. How can this be?

At least for unsupervised methods, it has been clear since 1987 that:

An important property of the effective [Free] Energy function E(V,0,T) is that it has a smoother landscape than E(S) [T=0] … Hence, the probability of getting stuck in a local minima decreases

Although this is not specifically how Hinton argued for the Helmholtz Free Energy a decade later.

Why do we use Free Energy methods? Hinton used the bits-back argument:

Imagine we are encoding some training data and sending it to someone for decoding. That is, we are building an Auto-Encoder. If we have only 1 possible encoding, we can use any vanilla encoding method and the receiver knows what to do. But what if we have 2 or more equally valid codes? Can we save 1 bit by being a little vague?

##### Stochastic Complexity

Suppose we have N possible encodings $[h_{1},h_{2},\cdots]$, each with Energy $E_{i}$. We say the data has stochastic complexity. Pick a coding with probability $p_{i}$ and send it to the receiver. The expected cost of encoding is

$\langle cost\rangle_{encode}=\sum\limits_{i}p_{i}E_{i}$

Now the receiver must guess which encoding $h_{i}$ we used. The decoding cost of the receiver is

$\langle cost\rangle_{decode}=\sum\limits_{i}p_{i}E_{i}-H$

where H is the Shannon Entropy of the random encoding

$H=-\sum\limits_{i}p_{i}\ln(p_{i})$

The decoding cost looks just like a Helmholtz Free Energy. Moreover, we can use a sub-optimal encoding, and they suggest using a Factorized (i.e. mean field) Feed Forward Net to do this.
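A small check of the bits-back bookkeeping, with made-up energies: the decoding cost $\sum_i p_i E_i - H$ is minimized, and equals the Helmholtz Free Energy $-\ln Z$ (at T=1), exactly when the codes are picked with Boltzmann probabilities. A Python sketch, mine rather than Hinton's:

```python
import numpy as np

E = np.array([1.0, 1.5, 3.0])           # energies of the candidate encodings
Z = np.sum(np.exp(-E))                  # partition function at T = 1

def decoding_cost(p):
    # expected energy minus the Shannon entropy of the code choice
    H = -np.sum(p * np.log(p))
    return np.sum(p * E) - H

p_uniform = np.full(len(E), 1.0 / len(E))
p_boltzmann = np.exp(-E) / Z

print(decoding_cost(p_uniform))         # larger than -ln Z: sub-optimal code
print(decoding_cost(p_boltzmann))       # equals -ln Z, the Free Energy
print(-np.log(Z))
```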
To understand this better, we need to relate thermodynamics and inference.

#### Thermodynamics and Inference

In 1957, Jaynes formulated the MaxEnt principle, which considers equilibrium thermodynamics and statistical mechanics as inference processes. In 1995, Hinton formulated the Helmholtz Machine and showed us how to define a quasi-Free Energy.

In Thermodynamics, the Helmholtz Free Energy F(T,V,N) is an Energy that depends on Temperature instead of Entropy. We need

$E(S,V,N)\rightarrow F(T,V,N)$

and F is defined as

$F(T,V,N) = E(S,V,N) - TS(V,N)$

In ML, we set T=1. Really, the Temperature equals how much the Energy changes with a change in Entropy (at fixed V and N):

$T=\left(\dfrac{\partial E}{\partial S}\right)_{N,V}$

Variables like E and S depend on the system size N. That is, as $N\rightarrow 2N$,

$E(2N)=2E(N),\;\;S(2N)=2S(N),\;\;T(2N)=T(N)=T$

We say S and T are conjugate pairs; S is extensive, T is intensive (see more on this in the Appendix).

##### Legendre Transform

The conjugate pairs are used to define Free Energies via the Legendre Transform:

Helmholtz Free Energy: F(T) = E(S) – TS

We switch the Energy from depending on S to depending on T, where $T=\left(\dfrac{\partial E}{\partial S}\right)$. Why? In a physical system, we may know the Energy function E, but we can't directly measure or vary the Entropy S. However, we are free to change and measure the Temperature, the derivative of E with respect to S:

$T=\left(\dfrac{\partial E}{\partial S}\right)_{N,V}$

This is a powerful and general mathematical concept. Say we have a convex function f(x,y,z), but we can't actually vary x. But we do know the slope, w, everywhere along x:

$w=\left(\dfrac{\partial f}{\partial x}\right)_{y,z}$

Then we can form the Legendre Transform, which gives g(w,y,z) as the "Tangent Envelope" of f() along x:

$f(x,y,z)\rightarrow g(w,y,z)$,

$g(w,y,z)=f(x,y,z)-x\left(\dfrac{\partial f}{\partial x}\right)_{y,z}$

or, simply,

$g(w)=f(x)-wx$.

Note: we have converted a convex function into a concave one. The Legendre transform is concave in the intensive variables and convex in the extensive variables. Of course, the true Free Energy F is convex; this is central to Thermodynamics (see Appendix). But that is because, while it is concave in T, we evaluate it at constant T.

But what if the Energy function is not convex in the Entropy? Or suppose we extract a pseudo-Entropy from sampling some data, and we want to define a free energy potential (as in protein folding). These postulates also fail in systems like the spin chains discussed in an earlier blog post. How can we always form a convex Free Energy?

##### Legendre Fenchel Transform

When a convex Free Energy cannot readily be defined as above, we can use the generalized Legendre Fenchel Transform, which provides a convex relaxation via the Tangent Envelope:

$g(w)=\max\limits_{x}\left(f(x)-wx\right)$

The Legendre-Fenchel Transform can provide a Free Energy, convexified along the direction of the internal (configurational) Entropy, allowing the Temperature to control how many local Energy minima are sampled.
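A numeric sketch of the Legendre-Fenchel convexification just described; the function and grids are my own illustrative choices, and I use the standard conjugate $f^*(w)=\max_x(wx-f(x))$, which is the post's $g$ up to the sign of $w$:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
f = 0.25 * x**4 - x**2                  # a double well: not convex in x

# w range chosen wide enough to cover all slopes of f on this x grid
w = np.linspace(-25.0, 25.0, 2001)
f_star = np.array([np.max(wi * x - f) for wi in w])   # convex in w

# The double transform f** is the convex envelope of f: the barrier
# between the two wells is bridged by a straight (tangent) segment.
f_env = np.array([np.max(w * xi - f_star) for xi in x])

print(np.max(f - f_env))   # about 1.0: the barrier height at x = 0
print(np.min(f - f_env))   # about 0.0: the envelope touches f from below
```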
#### Practical Applications

Variational Inference is a growing area with lots of open source codes.

Thanks again for reading; feedback is welcome. Happy Fourth of July.

#### Appendix

Extra stuff I just wanted to write down…

##### Convexity in Thermodynamics and Statistical Physics

I summarize the discussion in Israel and the introduction by Wightman. Gibbs formulated Thermodynamics in 1873, without the guidance of Statistical principles. Using just the second law of thermodynamics, he reasoned that the stability of Equilibrium states implies:

1. S and V (or U and V) are the coordinates for the manifold of Equilibrium states
2. The Energy U is a convex function of S and V, and the Entropy S is concave
3. The Temperature is $T=\left(\dfrac{\partial E}{\partial S}\right)_{N,V}$

Convexity has always been fundamental to thermodynamics and equilibrium stability. Gibbs reasoned this from the properties of convex bodies. And 20th Century statistical physics relied heavily on formal convex constructs like tangent potentials.

##### Extensivity and Weight Constraints

If we assume T=1 at all times, and we assume our Deep Learning Energies are extensive, as they would be in an actual thermodynamic system, then the weight norm constraints act to enforce the size-extensivity: as $n\rightarrow Mn$, if $E(Mn)\rightarrow ME(n)$ and $E(n)\sim\Vert\mathbf{W}_{n}\Vert$, then W should remain bounded to prevent the Energy E(n) from growing faster than Mn. And, of course, most Deep Learning algorithms do bound W in some form.

##### Back to Stat Mech

Jaynes equates the Gibbs and Shannon Entropies. This is controversial. There is another, more direct way to get the Entropy from statistical mechanics. If we define the configurational Entropy $S_{c}(E)$ as the log of the number of Energy configurations, or density of states, $\Omega(E)$:

$S_{c}(E)=k\,\ln\Omega(E)$ (and let k = 1)

then the Temperature-dependent canonical partition function is the Laplace transform over the density of states:

$Z(\beta)=\int_{0}^{\infty}dE\,\Omega(E)e^{-\beta E}$

If we know the Free Energy as a function of T in terms of the partition function (and not just by Legendre Transform; see part II of this blog),

$\beta F(\beta)=-\ln Z(\beta)$

then we can reconstruct the configurational Entropy (in principle, numerically) by taking an inverse Laplace Transform:

$\Omega(E)=\int_{C}d\beta\, e^{-\beta[E-F(\beta)]}$

where C denotes a contour integral. This is important in the theory of glasses, part III of this post (2 holidays away).

1. Free Energy is a particular case of the Massieu Characteristic Function, discovered by François Massieu and developed by Pierre Duhem. But more fundamentally, Thermodynamics and Free Energy can be linked to geometrical notions. The history of the "Characteristic Function of Massieu" can be found in the presentation http://forum.cs-dc.org/topic/582/fr%C3%A9d%C3%A9ric-barbaresco-symplectic-and-poly-symplectic-model-of-souriau-lie-groups-thermodynamics-souriau-fisher-metric-geometric-structure-of-information and in the video of the CIRM seminar TGSI'17, or the video of GSI'15: http://forum.cs-dc.org/topic/291/symplectic-structure-of-information-geometry-fisher-metric-and-euler-poincar%C3%A9-equation-of-souriau-lie-group-thermodynamics-fr%C3%A9d%C3%A9ric-barbaresco The history of Massieu is also explained in Roger Balian's paper, available on the website of the French Academy of Sciences: "François Massieu et les potentiels thermodynamiques". These structures (Legendre transform, entropy, …) are closely related to the Hessian geometry developed by Jean-Louis Koszul. The extension of the Massieu Characteristic Function by Jean-Marie Souriau, called Lie Group Thermodynamics, allows extending Free Energy to homogeneous manifolds, and so also machine learning on these more abstract spaces. The Souriau model of Lie Group Thermodynamics is developed in the first chapter of the MDPI book Differential Geometrical Theory of Statistics, in the papers of Marle, Barbaresco and de Saxcé: http://www.mdpi.com/books/pdfview/book/313 This topic will be addressed at the GSI'17 conference at Ecole des Mines de Paris in November 2017: https://www.see.asso.fr/gsi2017 ; see the GSI'17 program of the session "Geometrical Structures of Thermodynamics": https://www.see.asso.fr/wiki/19413_program

   1. Charles H Martin, PhD says: I don't know what the dynamical group is for something like a variational auto encoder (VAE) or a deep learning model in general. In the old days, we just assumed that neural networks like MLPs proceeded by Langevin dynamics, and we did not pay much attention to the specific structure of SGD updates, the difference between dynamics and BackProp, etc. But I think there is a lot more attention being applied to the details of the update equations and what the actual dynamics are.
That said, it is becoming clear, at least in VAEs, that the density has to satisfy some kind of conservation law of probability, as with normalizing flows and related ideas.

1. Jean-Marie Souriau discovered, in chapter IV on "Statistical Mechanics" of his book "Structure of Dynamical Systems", that the classical Gibbs density on a homogeneous (symplectic) manifold for Geometrical Mechanics is not covariant with respect to the dynamical groups of Physics (the Galileo group in classical Mechanics and the Poincaré group in Relativity). Souriau then defined a new thermodynamics, called "Lie Group Thermodynamics", where the (Planck) temperature is proved to be an element of the Lie algebra of the dynamical groups acting on the systems. Souriau also geometrized the Noether theorem by inventing the "moment map" (an element of the dual Lie algebra), which is the new fundamental preserved geometrical structure. Souriau used this cocycle to preserve the equivariance of the action of the group on the dual Lie algebra, and especially on the moment map. To convince yourself, you can read in Marle's paper the development of Souriau Lie Group Thermodynamics for the simplest case, "a centrifuge": [XX] Marle, C.-M.: From tools in symplectic and poisson geometry to J.-M. Souriau's theories of statistical mechanics and thermodynamics. Entropy 18, 370 (2016) http://www.mdpi.com/1099-4300/18/10/370/pdf

I have discovered that Souriau also gave a generalized definition of the Fisher metric (the Hessian of the logarithm of Massieu's characteristic function) by introducing a cocycle linked with the cohomology of the group. Souriau identified the Fisher metric with a "geometric" calorific capacity. Souriau also gave the good definition of Entropy, as the Legendre transform of the logarithm of Massieu's characteristic function (related to Free Energy in the new parameterization). Free Energy is not classically written in the good parameterization:

- Classical Free Energy: F = E - T.S (with S: Entropy), where F is parameterized by T.
- Free Energy should be written S = (1/T).E - F, or F = (1/T).E - S, where F is parameterized by (1/T).
- The good parameter is 1/T and not T; then Entropy is the Legendre transform of Free Energy in this parameterization.
- F should be parameterized by the (Planck) temperature 1/T.

Souriau generalized this relation by replacing (1/T) with the geometric temperature (an element of the Lie algebra), to preserve the Legendre Transform structure and the invariance of the Entropy given by this definition with respect to the action of the group. Obviously, if you consider only time translation, we recover classical thermodynamics. But it is easy to prove that classical thermodynamics is not the correct theory for a case as simple as the "thermodynamics of a centrifuge", where the sub-group of the Galileo group (rotation of the system along one axis) breaks the symmetry and where the classical Gibbs density is no longer covariant with respect to the action of this subgroup. Souriau geometrized Thermodynamics and gave a "Geometric Theory of Heat". (In 2018, in France, we will officially organize many events for the 250th birthday of Joseph Fourier and his "heat equation". I will present the geometric heat equation of Souriau at the MDPI conference in Barcelona in 2018, "From Physics to Information Sciences and Geometry": https://sciforum.net/conference/Entropy2018-1 ; I invite you to submit a paper.)
To apply this theory to Neural Networks, you have to forget the dynamical groups of geometric mechanics, but Souriau's equations of "Lie Group Thermodynamics" are universal, and you can apply them to do statistics of data on "homogeneous manifolds" or on "Lie groups" (you can forget the symplectic manifold, because the new equations only take into account the group and its cocycle). In particular, Neural Networks on Lie Group data, or time series on "Lie Group" data, are more and more popular, for pose recognition for instance:

[YY] Huang, Z., Wan, C., Probst, T., & Van Gool, L. (2016). Deep Learning on Lie Groups for Skeleton-based Action Recognition. arXiv preprint arXiv:1612.05877.
[ZZ] Learning on distributions, functions, graphs and groups, Workshop at NIPS 2017, 8th December 2017; https://sites.google.com/site/nips2017learningon/

With Souriau's definition of the Fisher metric, you can extend the classical "Natural Gradient" (from Information Geometry) to abstract spaces, for learning from data on homogeneous manifolds and Lie Groups. Then the invariance by reparametrization of the "Natural Gradient" is replaced by invariance under all symmetries (the Gibbs density is made covariant with respect to the group acting transitively on the homogeneous manifolds, and the Fisher metric of the backpropagation "Natural Gradient" is invariant with respect to the group). For more details on the geometric approach to the Natural Gradient (Riemannian Neural Networks), see Yann Ollivier's papers (http://www.yann-ollivier.org/rech/index_chr), written at Paris-Saclay University, or his lecture at the Collège de France:

[AA] Yann Ollivier, Riemannian metrics for neural networks I: Feedforward networks, Information and Inference 4 (2015), n°2, 108–153; http://www.yann-ollivier.org/rech/publs/gradnn.pdf
[BB] Yann Ollivier, Gaétan Marceau-Caron, Practical Riemannian neural networks, Preprint, arxiv.org/abs/1602.08007 ; http://www.yann-ollivier.org/rech/publs/riemaNN.pdf

About Langevin Dynamics, based on the Paul Langevin equation: we can mix the natural gradient and Langevin Dynamics to define a "Natural Langevin Dynamics", as published by Yann Ollivier at GSI'17:

[CC] Yann Ollivier and Gaétan Marceau Caron. Natural Langevin Dynamics for Neural Networks, GSI'17 Geometric Science of Information, Ecole des Mines ParisTech, Paris, 7th-9th November 2017. https://www.see.asso.fr/wiki/19413_program

Mixing Langevin Dynamics with the Souriau-Fisher metric will provide a new backpropagation based on "symplectic integrators" that has all the good properties of invariance in the framework of the calculus of variations and Hamiltonian formalisms. I have recently observed that, with the Souriau formalism, the Gaussian density is a maximum entropy density of 1st order (see the tensorial notation and parameterization of the multivariate Gaussian by Souriau) and not of 2nd order. We can then extend the Souriau model with polysymplectic geometry to define a 2nd-order maximum entropy Gibbs density in Lie Group Geometry, useful for "small data analytics". I will present this paper at GSI'17:

[DD] F. Barbaresco, Poly-Symplectic Model of Higher Order Souriau Lie Groups Thermodynamics for Small Data Analytics, GSI'17 Geometric Science of Information, Ecole des Mines ParisTech, Paris, 7th-9th November 2017. https://www.see.asso.fr/wiki/19413_program

To conclude, Free Energy is a fundamental structure, but just a particular case of Massieu's characteristic function. We need to geometrize Thermodynamics, in Geometric Mechanics but also for Machine Learning with neural networks on data belonging to homogeneous manifolds or Lie Groups.
For the generalization, Souriau's Lie Group Theory is the right model; we can prove that there are no others. In Information Geometry, in the case of exponential families, the fundamental group is the "general affine group", and the geometry is in this case related to the co-adjoint orbits of this group. Using the Souriau-Kostant-Kirillov 2-form, we can then rediscover a symplectic geometry associated to these co-adjoint orbits. These concepts are now very classical in Europe, and are developed at the GSI (Geometric Science of Information) and TGSI (Topological & Geometrical Structures of Information) conferences. We can no longer ignore them. With European actors of Geometric Mechanics, we have just submitted a project to the European Commission to use these new geometric structures to make recommendations for designing a new generation of HPC (High Performance Computing) machines that could benefit from symmetry preservation. We will replace the "Pascaline" machine (invented by Blaise Pascal under the influence of Descartes) and its more recent avatars (up to the GOOGLE TPU), which are coordinate-dependent, with a new generation of "Lie Group" machines based on Blaise Pascal's "Aleae Geometria" (the geometrization of probability), which will be coordinate-free and intrinsic, without privileged coordinate systems.

Frederic Barbaresco
GSI'17 Co-chairman
http://www.gsi2017.org
# Solve the quadratic equation $x^2-8x-1008=0$

## Step-by-step Solution

Problem to solve: $x^2-8x-1008=0$

To find the roots of a polynomial of the form $ax^2+bx+c$ we use the quadratic formula, where in this case $a=1$, $b=-8$ and $c=-1008$. Substitute the values of the coefficients of the equation into the quadratic formula:

$\displaystyle x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$

The discriminant is $b^2-4ac = (-8)^2 - 4(1)(-1008) = 64 + 4032 = 4096$, and $\sqrt{4096}=64$, so

$x=\frac{8\pm 64}{2}$

To obtain the two solutions, split the equation in two: one where $\pm$ is positive ($+$), and another where $\pm$ is negative ($-$). Adding 8 and 64 gives $x=\frac{72}{2}=36$; subtracting 64 from 8 gives $x=\frac{-56}{2}=-28$.

$x=36,\:x=-28$
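A quick check of the two roots; a minimal sketch assuming NumPy is available (this code is mine, not part of the original solver page):

```python
import numpy as np

# roots of x^2 - 8x - 1008 = 0, from the coefficient list [a, b, c]
print(np.roots([1, -8, -1008]))   # [ 36. -28.]  (order may vary)
```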
# FiveThirtyEight

## Politics

Like many of you, I'm watching bits and snatches of these town halls broadcast on C-SPAN and elsewhere. I see a lot of elderly people shouting into microphones about "socialism," their fear of big government, and the importance of fiscal responsibility. Elsewhere, other seniors are complaining because in 2010 they aren't going to receive their familiar cost-of-living adjustments for Social Security, while still others are fretting aloud that the federal government, which happens to run Medicare, shouldn't, um, be meddling with Medicare. A recent Pew poll revealed that 53 percent of seniors are worried that government is becoming "too involved in health care," a much higher share than those under 30 saying the same.

I'm 42. So maybe my mind will change on this as I age, but can those of us who are still working and paying premiums into OASDI and Medicare get a break here?

For starters, it's true that Social Security benefits will not increase for 2010, but neither will they decrease. And, given that the year-over-year consumer price index dropped, maintaining the same benefit levels actually has the net effect of a raise in real terms. And yet, we hear complaints that there won't be an increase.

Now, one might counter that, for some things, prices are increasing for seniors, "things" like health care. So we need to make sure seniors have their Medicare, right? I'm fine with that, but let's keep in mind that current retiree-recipients will receive Medicare benefits that, on average, exceed what they ever put in, even when you adjust for inflation. American Enterprise Institute scholar Andrew Biggs makes a powerful point when he calculates that "a typical person who was born in 1944, began work at age 21 in 1965, and in 2009 retired at age 65 and enrolled in Medicare," and who then draws the typical benefit until death at age 83, will have paid roughly $64,971 in Medicare payroll taxes during his/her lifetime but received around $173,886, for a net of "$108,915 more in benefits than he paid in taxes over his lifetime." Hey, that sounds like socialist-style redistribution to me!

Which brings me to my next point: although the redistributive effects of Social Security and Medicare are to some degree intragenerational, a lot of the redistribution works intergenerationally. Because it's intergenerational (specifically, from younger Americans/workers who have paid into these programs at higher tax rates than their parents and grandparents), and because each new American generation is less white than the previous one, such redistribution also has a racial effect. Moreover, as a result of different life expectancies, even the intragenerational redistributive effect has a racial element: Cato's Michael Tanner explains in his 2004 book, Social Security and its Discontents, that because of different life expectancies, the typical black man will receive on average about $70,000 less than a white man, and even if the white man and black man both reach age 65, the disparity still remains about $25,000.

So you'll pardon me if I'm unmoved by the complaints of some (not all: some) of these seniors standing up in town halls to warn ominously about big-government socialism at the same time they are benefitting from, well, big-government socialism.
That some of the white seniors (again, some) are also echoing falsehoods about illegal immigrants (translation: non-whites) receiving benefits, in a country where the younger, less-white generations are redistributing income from their hard labors to pay for the retirements of their whiter elders, even as many of those immigrants, legal or not, pay into Social Security and Medicare and may never receive their contributions back or have a chance to later obtain benefits, is especially galling.
# Multicolumn Table with Merged Cells

I am having great trouble producing the table that I have attached to this post. The difficulty I am having is, firstly, fitting the table on the page, as it is very wide, and also getting the merging of the columns done correctly. I would like the columns to expand to fit the text. Even when I try to build the table piece by piece I am falling down on the basics. Any help would be greatly appreciated. I also don't mind if the text is shrunk so that the table can fit, or if it is displayed in landscape mode. Thanks in advance.

\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage{rotating}
\title{Foo}
\begin{document}
\maketitle
\section{Analysis}
\begin{sidewaystable}[h]
\begin{tabular}{ |c|c|c|c|c|c| }
Event & No Cond & Top & Middle & Last & After Last & Top\\
\hline
New & Insert & Insert At Top & Split, Insert, Push down & Split, insert, push down & insert & Insert at top, push down \\
Delete & - & Delete at top & Split, delete, push up & Split, insert, push up & - & delete top, push down \\
Update & - & Update values & Update values & Update values & - & update values \\
\hline
\end{tabular}
\end{sidewaystable}
\section{Implementation}
\end{document}

• All you really need is another "c|". The resulting tabular is 8.05 inches wide. Aug 15 '15 at 18:55
• @JohnKormylo, thanks. Is it possible to put this in normal mode (i.e. not landscape mode) but have the table shrunk so that it fits the width of the page? If I take out the sideways option, it overflows. – user81633 Aug 15 '15 at 19:34

I propose two solutions based on tabularx, using a smaller font and a smaller value for \tabcolsep. One is with vertical rules, the other only with horizontal rules and the booktabs package. Also loading geometry provides more sensible margins:

\documentclass[a4paper]{article}
\usepackage[english]{babel}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage[colorinlistoftodos]{todonotes}
\usepackage{rotating, tabularx, booktabs}
\renewcommand{\tabularxcolumn}[1]{>{\raggedright\arraybackslash}m{#1}}
\usepackage{geometry}
\usepackage{showframe}
\title{Foo}
\begin{document}
\maketitle
\section{Analysis}
\begin{table}[!ht]
\setlength\tabcolsep{3pt}
\begin{tabularx}{\linewidth}{|l|c|X|X|X|c|X| }
\hline
Event & No Cond & Top & Middle & Last & After Last & Top \\
\hline
New & Insert & Insert At Top & Split, insert, push down & Split, insert, push down & insert & Insert at top, push down \\[1ex]
Delete & -- & Delete at top & Split, delete, push up & Split, insert, push up & -- & Delete top, push down \\[1.5ex]
Update & -- & Update values & Update values & Update values & -- & Update values \\[1.5ex]
\hline
\end{tabularx}
\end{table}

\begin{table}[!ht]
\setlength\tabcolsep{4pt}
\begin{tabularx}{\linewidth}{@{}lcXXXcX @{}}
\toprule
Event & No Cond & Top & Middle & Last & After Last & Top \\
\midrule
New & Insert & Insert At Top & Split, insert, push down & Split, insert, push down & insert & Insert at top, push down \\[1ex]
Delete & -- & Delete at top & Split, delete, push up & Split, insert, push up & -- & Delete top, push down \\[1.5ex]
Update & -- & Update values & Update values & Update values & -- & Update values \\
\bottomrule
\end{tabularx}
\end{table}
\end{document}
Maplets[Elements] - Maple Programming Help

# Maplets[Elements] TableRow

specify a row in a table

Calling Sequence: TableRow(element_content)

Parameters: element_content - TableItem elements

Description

• The TableRow element specifies a row in a Maplet application table. The contents of each column in the row are defined by using the TableItem element.
• A TableRow element can contain TableItem elements. Note: Each TableRow must have the same number of TableItem elements. The number of TableItem elements in the TableHeader, if specified, must equal the number of TableItem elements in each TableRow.
• A TableRow element can be contained in a Table element.

Examples

> with(Maplets[Elements]):
> maplet := Maplet([BoxCell(Table(TableHeader(TableItem(A), TableItem(B)), TableRow(TableItem('caption' = 1), TableItem('caption' = 2)), TableRow(TableItem('caption' = 3), TableItem('caption' = 4))), 'as_needed'), Button("OK", Shutdown())]):
> Maplets[Display](maplet)

This Maplet application can be rewritten as:

> with(Maplets[Elements]):
> maplet := Maplet([BoxCell(Table([A, B], [[1, 2], [3, 4]]), 'as_needed'), Button("OK", Shutdown())]):
> Maplets[Display](maplet)
Step 1: Draw a special right triangle with a 60° angle.

NOTE: This is the special right triangle $30^\circ$-$60^\circ$-$90^\circ$. Its sides are $1$, $2$ and $\sqrt{3}$.

Step 2: Recall the definition of cosine and calculate $\cos 60^\circ$.

DEFINITION: The cosine (cos) of an angle in a right triangle is a ratio: the length of the adjacent leg (adj) divided by the length of the hypotenuse (hyp).

NOTE: For the $60^\circ$ angle, the length of the adjacent side is $1$ and the length of the hypotenuse is $2$, so $\cos 60^\circ = \frac{1}{2}$.

Step 3: Calculate $\cos 30^\circ$.

For the $30^\circ$ angle, the length of the adjacent side is $\sqrt{3}$ and the length of the hypotenuse is $2$, so $\cos 30^\circ = \frac{\sqrt{3}}{2}$.
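The side lengths themselves can be checked with the Pythagorean theorem, from which both values follow (a worked check added for completeness):

$$1^2 + (\sqrt{3})^2 = 1 + 3 = 4 = 2^2, \qquad \cos 60^\circ = \frac{1}{2}, \qquad \cos 30^\circ = \frac{\sqrt{3}}{2}.$$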
# Second thermodynamics

## Homework Statement

A sample of 3 mol of a diatomic perfect gas at 200 K is compressed reversibly and adiabatically until its temperature reaches 250 K. Given that $C_{v,m} = 27.5\ \mathrm{J\,K^{-1}\,mol^{-1}}$, calculate q, w, ΔU, ΔH and ΔS.

## Homework Equations

$dS = dq/T$
$\Delta U = n C_{v,m} \Delta T$
$\Delta H = n C_{p,m} \Delta T$

## The Attempt at a Solution

(skipping the part for q and ΔS)

$\Delta U = q + w = w$
$w = -\int P\,dV = -P\Delta V = -nR\Delta T = -1247.2\ \mathrm{J}$
$\Delta U = -1247.2\ \mathrm{J}$
$\Delta H = \Delta U + \Delta(PV) = -2494.4\ \mathrm{J}$

The model answer, however, gives:

$w = \Delta U = n C_{v,m} \Delta T = +4.1\ \mathrm{kJ}$
$\Delta H = n C_{p,m} \Delta T = +5.4\ \mathrm{kJ}$

Here are my questions: should I not assume pressure is constant? If the problem is that pressure does vary in the process, why is ΔH still calculated with $C_{p,m}$ (I know ΔH = q in an isobaric process)? Why can't I use ΔH = ΔU − w in this case? Is ΔU always equal to $C_v\Delta T$ for isobaric and isochoric processes, and equal to 0 for an isothermal process? Thank you! I have read my physical chemistry book but I am still confused!

Well, you'll first have to understand that an adiabatic process is a process where all 3 of the basic thermodynamic variables change. Temperature, pressure and volume are all subject to change, yet since the process is adiabatic there is no exchange of heat Q between the gas and the rest of the system. You should also understand that Cp and Cv are constants that you can use when the gas in your problem is ideal/perfect and monatomic: Cp = 5R/2 and Cv = 3R/2. When your gas isn't that simple, they take other values. ΔU is given by $nC_v\Delta T$ for every reversible process, and you can easily see that in an isothermal process (ΔT = 0) ΔU = 0. I am not sure about entropy or enthalpy since my memories of those are quite blurry (plus we were not taught them in school). I hope I have helped you understand some things; if others find mistakes in my explanations feel free to correct me.

Thank you karkas. Actually you remind me of another question: for an isothermal process, ΔT = 0 so ΔU = 0, and ΔH = ΔU + Δ(PV) = ΔU + Δ(nRT) = 0 + 0 = 0. So for an isothermal process heat is not necessarily zero but ΔH is always zero, while in an adiabatic process heat must be zero while ΔH can be nonzero. Am I right? Anyway, I am really confused by the equations...

Really, the First Law of Thermodynamics explains it all fairly easily, no need to be confused :)
1) Q = ΔU + W (in an isothermal process ΔU = 0), and so Q = W = nRT ln(V₂/V₁). <-- Isothermal
2) Q = ΔU + W (Q = 0) => ΔU = −W = Δ(PV)/(γ−1). <-- Adiabatic
So indeed in the isothermal case ΔT = 0 and Q isn't zero, and in the adiabatic case ΔT isn't zero and Q = 0. Anything else? :) (I hope I am not making mistakes, someone confirm!)

Why Q = ΔU + W? I know ΔU = q + w, so it should be q = ΔU − w. http://en.wikipedia.org/wiki/First_law_of_thermodynamics @_@

Notice that a lot of textbooks (e.g., Greiner, Neise, Stocker) formulate the first law as $dU=\delta Q+\delta W$. The only difference here is that δW is the work done on the system. So, when the system (e.g. a gas) expands, the work done on the system is −PdV, whereas in the other formulation of the first law, the work done by the gas while expanding is PdV. In any case, both give the same result when written explicitly as $dU=\delta Q-P\,dV$.

I see! Thanks a lot. And could anyone answer my first question posted above... thanks. Hey, could anyone help?

Mapes (Homework Helper): Your first question being "what's wrong with my answer?"? It's calculated assuming an isobaric process, which this isn't.

Why would the model use ΔH = CpΔT if it is not an isobaric process?
I thought q = CvΔT and ΔH = q = CpΔT in an isobaric process!

Mapes: $\Delta U=C_V\Delta T$ and $\Delta H=C_P\Delta T$ always hold for an ideal gas. But $\Delta H=Q$ only for reversible isobaric processes (because $dH=T\,dS+V\,dP=q+V\,dP$).

Thanks! Do you mean that ΔH = CpΔT even in an isochoric process? So: for an isothermal process, ΔH = CpΔT = 0; for an isochoric process, ΔH = CpΔT; for an isobaric process, ΔH = Q = ΔU + Δ(PV) = CpΔT (in a reversible isobaric process). Am I correct? Oh my god, so confusing.
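Collecting the accepted numbers in one place, here is a minimal sketch (variable names are mine) that reproduces the model answer for this problem:

```python
# Reversible adiabatic compression of an ideal gas: q = 0, dS = 0,
# w = dU = n*Cv,m*dT, and dH = n*Cp,m*dT with Cp,m = Cv,m + R.
R = 8.314            # J K^-1 mol^-1
n = 3.0              # mol
Cv_m = 27.5          # J K^-1 mol^-1 (given)
Cp_m = Cv_m + R      # ideal-gas relation
dT = 250.0 - 200.0   # K

q = 0.0              # adiabatic
dU = n * Cv_m * dT   # = 4125 J, i.e. about +4.1 kJ
w = dU - q           # first law with dU = q + w
dH = n * Cp_m * dT   # = 5372 J, i.e. about +5.4 kJ
dS = 0.0             # reversible and adiabatic

print(q, w, dU, dH, dS)
```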
# area.owin

##### Area of a Window

Computes the area of a window.

Keywords: spatial, math

##### Usage

area.owin(w)

## S3 method for class 'owin':
volume(x)

##### Arguments

w: A window, whose area will be computed. This should be an object of class owin, or can be given in any format acceptable to as.owin().

x: Object of class owin.

##### Details

If the window w is of type "rectangle" or "polygonal", the area of the window is computed by analytic geometry. If w is of type "mask", the area of the discrete raster approximation of the window is computed by summing the binary image values and adjusting for pixel size.

The function volume.owin is identical to area.owin except for the argument name. It is a method for the generic function volume.

##### Value

A numerical value giving the area of the window.

##### See Also

perimeter, diameter.owin, owin.object, as.owin

##### Aliases

• area.owin
• volume.owin

##### Examples

w <- unit.square()
area.owin(w)  # returns 1.00000

k <- 6
theta <- 2 * pi * (0:(k-1))/k
co <- cos(theta)
si <- sin(theta)
mas <- owin(c(-1,1), c(-1,1), poly=list(x=co, y=si))
area.owin(mas)  # area of the inscribed k-gon, (k/2)*sin(2*pi/k);
                # about 2.598 for k = 6, approaching pi (3.14) as k grows
### Problem 2 (Easy Difficulty)

In Conceptual Example 8.1 (Section 8.1), show that the iceboat with mass $2m$ has $\sqrt{2}$ times as much momentum at the finish line as does the iceboat with mass $m$.

### Answer

$\sqrt{2}\, p_{A}$

#### Topics

Momentum, Impulse, and Collisions

### Video Transcript (from a similar problem covering the same topics)

Our question asks us to show that the kinetic energy $K$ of a particle of mass $m$ is related to the momentum $p$ by $p = \sqrt{K^2 + 2Kmc^2}/c$. We use equations 36.11 and 36.13: the total energy is $E = K + mc^2$ (kinetic energy plus rest-mass energy), and $(pc)^2 = E^2 - (mc^2)^2$. Substituting the first expression into the second gives $(pc)^2 = (K + mc^2)^2 - (mc^2)^2$. Carrying out the square, $(K + mc^2)^2 = K^2 + 2Kmc^2 + (mc^2)^2$, so the two $(mc^2)^2$ terms cancel, leaving $(pc)^2 = K^2 + 2Kmc^2$. Taking the square root of both sides and dividing by $c$ gives $p = \frac{\sqrt{K^2 + 2Kmc^2}}{c}$, which is the expression we were asked to show.
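For the iceboat problem itself, the key step (assuming, as in the textbook's conceptual example, that both boats start from rest and are driven by the same force over the same distance, so they cross the finish line with the same kinetic energy $K$) is:

$$p = \sqrt{2mK} \quad\Longrightarrow\quad \frac{p_{2m}}{p_{m}} = \sqrt{\frac{2(2m)K}{2mK}} = \sqrt{2}, \qquad \text{i.e. } p_{2m} = \sqrt{2}\, p_{A}.$$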
# Question #b773e

Apr 11, 2017

$\Delta H_{\text{rxn}} = -121.2\ \text{kJ/mol}$

#### Explanation:

$2CO + O_2 = 2CO_2$

$\Delta H_{\text{rxn}} = \Delta H_f^\circ \text{ of products} - \Delta H_f^\circ \text{ of reactants}$

The reactants in this reaction are $CO$ and $O_2$; the product is $CO_2$.

The given $\Delta H_f^\circ$ of $CO$ is $-253.7$ kJ/mol. The $\Delta H_f^\circ$ of $O_2$ is not listed, but the $\Delta H_f^\circ$ of all elements in their standard states is 0 kJ/mol, so the $\Delta H_f^\circ$ of $O_2$ is 0 kJ/mol.

The $\Delta H_f^\circ$ values of the reactants, each multiplied by its coefficient, combine to:

$-253.7\ \text{kJ/mol} \times 2 + 0\ \text{kJ/mol} \times 1 = -507.4\ \text{kJ/mol}$

The given $\Delta H_f^\circ$ of the product, $CO_2$, is $-314.3$ kJ/mol. Remember to also multiply by its coefficient:

$-314.3\ \text{kJ/mol} \times 2 = -628.6\ \text{kJ/mol}$

Plug the values into the equation:

$\Delta H_{\text{rxn}} = -628.6\ \text{kJ/mol} - (-507.4\ \text{kJ/mol}) = -121.2\ \text{kJ/mol}$

(These $\Delta H_f^\circ$ values are the ones supplied with the question; they differ from the usual tabulated values.)
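A short arithmetic check (a sketch in Python, using the values given in this question):

```python
# Hess's law for 2 CO + O2 -> 2 CO2, with the question's given dHf values.
dHf_CO, dHf_O2, dHf_CO2 = -253.7, 0.0, -314.3      # kJ/mol (as given)
dH_rxn = 2 * dHf_CO2 - (2 * dHf_CO + 1 * dHf_O2)   # products minus reactants
print(round(dH_rxn, 1))                            # -121.2 (kJ/mol)
```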
# Definition:Domain (Set Theory)/Binary Operation

Let $\circ: S \times S \to T$ be a binary operation.

The domain of $\circ$ is the set $S$, and can be denoted $\operatorname{Dom}(\circ)$.

Note that this convention differs from that for the domain of a mapping: regarded simply as a mapping, $\circ$ would have domain $S \times S$.
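As a loose programming analogy (my illustration, not part of the definition): for a binary operation typed $S \times S \to T$, the convention above names $S$, not $S \times S$, as the domain.

```python
# A binary operation circ : S x S -> T, here with S = int and T = float.
def circ(a: int, b: int) -> float:
    return (a + b) / 2.0

# Viewed as a mapping, its domain is the set of pairs int x int;
# in the convention above, one says its domain is int itself.
print(circ(1, 2))  # 1.5
```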
•  Brent Nelson, MSU
•  Free transport, II
•  11/14/2022
•  4:00 PM - 5:30 PM
•  C517 Wells Hall
•  Brent Nelson (banelson@msu.edu)

In operator algebras, specifically free probability, free transport is a technique for producing state-preserving isomorphisms between C* and von Neumann algebras that was developed by Guionnet and Shlyakhtenko in their 2014 Inventiones paper. The inspiration for their work comes from the field of optimal transport, specifically work of Brenier from 1991, who showed that under very mild assumptions one can push forward a probability measure on $\mathbb{R}^n$ to the Gaussian measure. In the non-commutative case, Guionnet and Shlyakhtenko showed that if $x_1,\ldots, x_n$ are self-adjoint operators in a tracial von Neumann algebra $(M,\tau)$ whose distribution satisfies an "integration-by-parts" formula up to a small perturbation, then these operators generate a copy of the free group factor $L(\mathbb{F}_n)$. In this series of talks, I will give an overview of their proof, discuss some applications of their result, and survey the current state of free transport theory.
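In one dimension, the classical transport map alluded to above can be written explicitly as the increasing rearrangement $T = \Phi^{-1} \circ F$, with $F$ the CDF of the source measure and $\Phi$ the standard normal CDF. A minimal numerical sketch (my illustration, not part of the announcement):

```python
# Push a uniform[0, 1] sample forward to the standard Gaussian via the
# monotone transport map T = Phi^{-1} o F; for uniform[0, 1], F(x) = x.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)   # sample of the source measure
y = norm.ppf(x)                 # pushed-forward sample

print(y.mean(), y.std())        # close to 0 and 1, as for N(0, 1)
```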
# Why is the order parameter in N=2 Seiberg-Witten theory $\langle \text{tr} \phi^2 \rangle$? (And discussion of gauge-variant order parameters in general)

+ 4 like - 0 dislike

In this paper, Seiberg and Witten use the gauge-invariant order parameter $\langle \text{tr} \phi^2 \rangle$ to parametrize the breaking of gauge symmetry (I'm using the standard abuse of terminology here; of course gauge symmetry cannot be broken, what's broken is the corresponding global symmetry). But since this is gauge invariant, how can it tell us at all whether the gauge symmetry is broken? Just as in QCD, the flavor-chiral order parameter $\langle \bar{\psi} \psi \rangle$ is QCD-gauge invariant, but it would be absurd to conclude that QCD gauge symmetry is broken just because $\langle \bar{\psi} \psi \rangle\neq 0$. (In fact it puzzles me why some people take gauge invariance of the order parameter as a virtue instead of a deficiency in this case.)

A late edit: much of the discussion (which is enlightening to me) has centered on "does $\langle \phi \rangle$ make sense at strong coupling?", but this is actually somewhat of a diversion from what I wanted to say. The thing is, I'm not even convinced that $\langle \phi^2 \rangle\approx\langle\phi\rangle^2$ in the weak-coupling region. For example, a weakly coupled Ising model is in its disordered phase; if we use $\phi$ to denote a lattice spin, then $\langle\phi\rangle=0$, while $\phi^2=\text{Id}$, whose expectation is nonzero in any possible phase. Another point, raised by 40227, is Elitzur's theorem; however, the theorem only applies in a gauge-invariant quantization scheme, such as lattice gauge theory, while unfortunately Seiberg and Witten never explicitly specify what kind of quantization scheme they have in mind.

edited Jul 3, 2016

If $A$ is a non-gauge-invariant field in a gauge theory, what is the meaning of $\langle A\rangle$?

@40227, it doesn't have to have a direct meaning, although at the very least it tells us if the vacuum condensate is "charged". A nonzero vev of a gauge-variant field gives non-trivial phenomenology, like the Higgs mechanism. In fact a more physical example would be the field-theoretic treatment of superconductivity. In this case you want to know if the vacuum is electrically charged, so you have to look at the vev of a charged field, which must be gauge variant.

Superconductivity is nonrelativistic, which makes a big difference in QFT.

@ArnoldNeumaier, still, there's no reason to rule out relativistic superconductivity, in which case we still have to know if the vacuum is charged.

Can you give me a reference to relativistic superconductivity, so that I can investigate the matter? It seems to me that having a solid around already breaks the Poincaré symmetry down to a 3D lattice symmetry $\times$ time translations, which would completely change the situation compared to what has been discussed in algebraic QFT. Therefore, at present I have no intuition for what might happen for such a symmetry group.

@ArnoldNeumaier, I think color superconductivity would count. Although the formalism treating it is often not manifestly covariant (which is not surprising since e.g. it's often dealt with in a thermal QFT context), it's still a QCD phenomenon after all.

It would be good if you asked a separate question about color superconductivity, as discussing this here would change the nature of the thread.
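To make the late edit's point concrete, here is a minimal single-site numerical sketch (an illustration of mine, not from the paper; the double-well weight is just a stand-in for a disordered phase): the mean of $\phi$ vanishes by symmetry while the mean of $\phi^2$ stays of order one, so a nonzero $\langle\phi^2\rangle$ by itself cannot certify $\langle\phi\rangle \neq 0$.

```python
# Single-site toy: p(phi) ~ exp(-(phi^2 - 1)^2 / g). By the phi -> -phi
# symmetry <phi> = 0 exactly, yet <phi^2> remains of order one.
import numpy as np

g = 0.5
phi = np.linspace(-5.0, 5.0, 20001)
w = np.exp(-(phi**2 - 1.0)**2 / g)      # unnormalized weight
Z = np.trapz(w, phi)

print(np.trapz(phi * w, phi) / Z)       # ~0 (symmetry)
print(np.trapz(phi**2 * w, phi) / Z)    # ~1 (order one)
```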
+ 4 like - 0 dislike

To understand what is going on, one has to make a distinction between a full/quantum/non-perturbative quantum field theory and a Lagrangian and/or semiclassical/perturbative description of a theory.

In a full QFT, one has an algebra $\mathcal{A}$ of (physical) fields of operators. A (global) symmetry group $G$ is a group of automorphisms of this algebra. A choice of vacua is a choice of realization of $\mathcal{A}$ on a Hilbert space $H$, the space of (physical) states (the states in $H$ are obtained from a particular vector in $H$, the vacuum, by the action of elements of $\mathcal{A}$). We can have different choices of vacua corresponding to different (inequivalent) representations of $\mathcal{A}$ on a Hilbert space. For a given choice of vacua, not all the symmetries of $\mathcal{A}$ are necessarily realizable by unitary transformations of the Hilbert space: the realizable symmetries form a subgroup $H$ of $G$, and if $H$ is strictly smaller than $G$, one has spontaneous symmetry breaking from $G$ to $H$. The spectrum of the theory in the given vacua contains one massless scalar (Goldstone boson) for each continuous direction in $G/H$.

The notion of gauge symmetry depends on a specific Lagrangian description of the theory. In such a description, one starts with a classical field theory with a gauge symmetry and defines a QFT by quantization, let's say by the path integral approach. In this picture, only gauge-invariant classical fields define corresponding fields of operators in the quantum theory. Indeed, to define correlation functions in the quantum theory one has to take the path integral over gauge equivalence classes of fields, and so only gauge-invariant quantities can be included in the integrand. One could try to define correlation functions of gauge-variant fields by fixing a gauge, and it is indeed possible perturbatively, but the results depend on the gauge choice, and fixing a gauge is anyway in general impossible at the non-perturbative level (Gribov ambiguity).

So, very concretely, in the Seiberg-Witten example, $\phi$, which is a gauge-variant field in the classical starting point of the Lagrangian description, does not define a well-defined field of operators in the full QFT, and in particular it does not make sense to talk about an expectation value $\langle\phi\rangle$. In the classical theory, it makes sense to say that the field $\phi$ has a non-zero value at infinity; the usual description of the Higgs mechanism applies, and this story extends to the perturbative level.

To understand the relation with the full non-perturbative theory, it is useful to think in terms of path integrals. A Lagrangian for a gauge theory defines a full QFT by a path integral over gauge equivalence classes of classical fields. In particular, one has a choice of boundary conditions at infinity for the classical fields being integrated over, and this choice is mapped to the choice of vacuum of the full quantum QFT. But this mapping can be quite non-trivial. In the Seiberg-Witten story, the boundary condition on the field $\phi$ is specified by a complex number $a$, well-defined up to a sign. Classically, the moduli space of classical vacua is parametrized by $a$. For $a \neq 0$, the gauge symmetry is spontaneously broken from $SU(2)$ to $U(1)$, and for $a=0$ the $SU(2)$ gauge symmetry is unbroken.
For big $a$, the classical theory is weakly coupled at the symmetry-breaking scale, and so one expects that for every such $a$ the path integral with boundary conditions prescribed by $a$ defines a vacuum of the full quantum theory, with an infrared behaviour looking like the classical one: a $U(1)$ gauge theory with massive W bosons. But for small $a$, the classical theory is strongly coupled and it is unlikely that the quantum theory looks like the classical one. In fact the path integral has infrared divergences, making the correspondence between $a$ and quantum vacua doubtful. The conclusion is that $a$, the would-be candidate for $\langle\phi\rangle$, is not a good well-defined coordinate on the moduli space of vacua. This is not very surprising, precisely because $\phi$ is not an allowed observable in the full theory.

Breaking of gauge symmetry is not breaking of a corresponding global symmetry, simply because in general there is no global symmetry associated to a gauge symmetry. More precisely, the conserved current associated to a global gauge transformation is in general gauge variant and so cannot define a well-defined charge on the Hilbert space of (physical) states. (A well-known exception to this statement is QED, where the current associated to the global $U(1)$ is gauge invariant and there is a well-defined electric charge; but to have spontaneous symmetry breaking one needs a charged scalar, and then the current associated to the global $U(1)$ is not gauge invariant, because of the term $A^\mu A^\nu \phi \phi^\dagger$ in the Lagrangian.) If there were really a breaking of a global symmetry, then one should see a Goldstone boson.

The conclusion is that the notion of spontaneous breaking of a gauge symmetry only makes sense given a Lagrangian/classical/perturbative description of the theory. This is not surprising, as gauge symmetry is simply a redundancy in a given description of the theory (digression: physical consequences of a gauge symmetry description exist at the level of asymptotic symmetries, but these are much subtler objects than a global symmetry acting on the Hilbert space). So asking the question "is a gauge symmetry spontaneously broken in a given vacuum of a full non-perturbative QFT?" does not really make sense. A question which does make sense is: are there some massless spin-1 particles? If yes, then there is a natural gauge theory description; if not, there isn't.

So the meaningful questions that Seiberg and Witten are trying to answer are: what is the space of vacua of the theory, and what is the infrared physics in each of these vacua? They start with the classical story, with a moduli space parametrized by $a$, a $U(1)$ unbroken gauge symmetry at $a\neq 0$, and an $SU(2)$ unbroken gauge symmetry at $a=0$. They argue that this picture is qualitatively correct at the quantum level for large $a$. To study the general case, one needs a good coordinate on the space of vacua. Natural functions on the space of vacua are vevs of fields of operators. $\text{tr}\, \phi^2$ is a well-defined field of operators of the theory, because it comes from a gauge-invariant function in the path integral definition of the QFT, and so it makes sense to consider $\langle \text{tr}\, \phi^2\rangle$. That it is a good choice is not obvious a priori; it could be a constant function on the space of vacua, for example. But it is a good choice because it is clearly a good choice in the region where the classical approximation is good: for large $a$, $\langle \text{tr}\, \phi^2\rangle \sim a^2$.
In other words, $\langle \text{tr}\, \phi^2\rangle$ is the simplest way to extend to the full quantum theory the variable $a$, natural from the classical point of view. All the work is then to determine the quantum corrections to the classical picture, and in particular to compute exactly $\langle \text{tr}\, \phi^2\rangle$ as a function of $a$ in the region of the space of vacua where $a$ is still a good coordinate.

answered Jan 1, 2016 by (5,120 points)

very nice! Thank you and happy new year!

Thanks for the long reply, but I think we have some fundamental disagreements. Let me ask the most important one first: how do you know there are spontaneously broken vacua even if $\langle \text{tr}\, \phi^2\rangle\neq 0$? For example, look also at the chiral order parameter mentioned in my main post. (Also, for example, in the Ising model: if you use $s^2$ instead of $s$ as the order parameter, and use the same flawed logic, you'll reach the absurd conclusion that the spin-flip symmetry is broken at all temperatures.)

@Jia Yiyang: the point of my answer is that the notion of "spontaneously broken vacua" does not make sense away from the perturbative regime. In the perturbative regime, the classical story is a correct approximation, and in the classical story $\text{tr}\, \phi^2 \neq 0$ implies $\phi \neq 0$, hence spontaneously broken gauge symmetry. In QCD, the chiral condensate is already a non-trivial dynamical non-perturbative effect, and at this level I don't know what gauge symmetry breaking means. But certainly I agree with you that, in a classical story, a non-trivial expectation value of a gauge-invariant field does not in general imply spontaneous gauge symmetry breaking. Sometimes, as for $\text{tr}\, \phi^2$, a non-trivial value of a gauge-invariant field is related to a non-trivial value of a gauge-variant field and so is a signal of gauge symmetry breaking; sometimes it is not.

"the point of my answer is that the notion of 'spontaneously broken vacua' does not make sense away from the perturbative regime." This is another point of disagreement. For spontaneous symmetry breaking to make sense, you only need vacuum degeneracy and that the vacuum is not annihilated by some global symmetry charge (and a gauge-variant charge is by no means "ill-defined"); both of these conditions make sense non-perturbatively. For the Higgs mechanism to work, one doesn't need a genuine SSB of gauge symmetry (which is impossible anyway); all you need is "global symmetry breaking in a gauge theory". In the usual proof of the Goldstone theorem, the assumptions of positivity of the Hilbert space and locality (i.e. that the charge is an integration of local products of fields) are crucial; in a gauge theory, either you quantize your theory covariantly but break positivity, or you keep positivity but break manifest locality (so you see a gauge-variant charge is actually a necessary feature of the Higgs mechanism), and thus the usual proof of the Goldstone theorem can be invalidated. There's a quite thorough discussion of the non-perturbative meaning of the Higgs mechanism in the lecture notes by Strocchi.

@Jia Yiyang: A gauge-variant charge is in general ill-defined non-perturbatively: I don't know how to define its action on the vacuum, for example.

"A gauge-variant charge is in general ill-defined non-perturbatively: I don't know how to define its action on the vacuum, for example." I fail to see the source of this confusion.
By any standard, when you say a field theory is quantized (perturbatively or non-perturbatively), at the very least you have to know how each field operator (in your field algebra) acts on the Hilbert space, and a charge is just a combination of fields, so of course you know how it acts on the Hilbert space, modulo perhaps renormalization/operator-ordering issues. I understand gauge variance is often ugly, but it doesn't mean it's ill-defined. In fact, in general the Hamiltonian of a gauge theory also depends on the gauge choice; would you conclude the Hamiltonian is also ill-defined?

"when you say a field theory is quantized (perturbatively or non-perturbatively), at the very least you have to know how each field operator (in your field algebra) acts on the Hilbert space, and a charge is just a combination of fields" That's precisely the difficulty. Charged operators don't act on a Hilbert space; they map between different Hilbert spaces (superselection sectors). In algebraic quantum field theory, this phenomenon goes under the name of the DHR theorem (DHR = Doplicher-Haag-Roberts). In particular, the Wightman axioms (which give a Hilbert space for QFT) apply only to the uncharged part of the algebra of operators. The superselection sectors are different representations of this uncharged algebra. The bounded version of the uncharged algebra is a $C^*$-algebra, strictly smaller than the field algebra (which contains also the charged operators and is only a $*$-algebra, without the C). On the other hand, in perturbation theory one cannot see any difference between superselection sectors: all representation spaces look perturbatively like the free Fock space. (This is an intrinsic reason why the perturbation series cannot converge.) All this is part of the poorly understood infrared problems in QFT.

"in general the Hamiltonian of a gauge theory also depends on the gauge choice; would you conclude the Hamiltonian is also ill-defined?" This doesn't follow. One gets many Hamiltonians in different but isomorphic gauge-dependent representations, all unitarily equivalent. (Edit: Actually, after renormalization, they may well be nonisomorphic, representing unitarily inequivalent superselection sectors.)

@ArnoldNeumaier, "That's precisely the difficulty. Charged operators don't act on a Hilbert space; they map between different Hilbert spaces (superselection sectors)." This can't be right: with SSB, a charge operator creates a Goldstone boson from the vacuum, in the same sector. I think you meant to say the exponential of a charge operator. "This doesn't follow. One gets many Hamiltonians in different but isomorphic gauge-dependent representations, all unitarily equivalent." My example was directed more at 40227; if I understand him/her correctly (so correct me if I'm wrong), he/she seemed to hold the opinion that gauge dependence of a conserved charge is somehow a deadly sin, regardless of whether there's SSB or not.

@JiaYiyang: Note the difference between a charge operator and a charged operator. A charge operator may itself be uncharged, but a gauge-variant operator is always charged. Without using this terminology, my statement can be phrased in the case of interest as follows: gauge-variant operators map out of the Hilbert space of any particular superselection sector into another superselection sector of the theory. This implies that the expectation of a gauge-variant operator (independent of whether or not it represents a charge) can be taken in none of the superselection sectors, and hence is always ill-defined.
This is unlike the Hamiltonian in a particular gauge, which is gauge-dependent because the representation in which the Hamiltonian is written depends on the gauge used to write it down. But the Hamiltonian is of course the generator of the time translations in the Poincaré group, and as such well-determined in any fixed representation.

"There's a quite thorough discussion of the non-perturbative meaning of the Higgs mechanism in the lecture notes by Strocchi." But this discussion doesn't affect the statements made by 40227. The Hilbert space in which there are massless gauge bosons is necessarily inequivalent to the Hilbert space in which, due to the Higgs mechanism, the gauge symmetry is broken and the vector bosons are massive. This is the case because the content of unitary irreducible representations of the Poincaré group is different, and no unitary transform between the Hilbert spaces can change this content. Perturbation theory simply glosses over such differences and pays for this violation of mathematical consistency with the resulting infrared divergences. Strocchi, in his nonperturbative arguments in Chapter 19, pays instead by using an indefinite (Gupta-Bleuler type) Krein space in place of the Hilbert space. The gauge-variant operators act on Krein space. But there are no states on Krein space, and hence no vacuum expectation values, since these require a positive definite inner product (cf. p.198). For these one has to go to a Hilbert space inside the Krein space defining the vacuum sector of the theory, and the vacuum sector is preserved only by the observable algebra $F_{obs}$ of gauge-invariant operators.

@ArnoldNeumaier, "Gauge-variant operators map out of the Hilbert space of any particular superselection sector into another superselection sector of the theory." I don't understand what this means: you can have a gauge theory (hence gauge-dependent operators) without SSB, so no superselection sectors. "For these one has to go to a Hilbert space inside the Krein space defining the vacuum sector of the theory, and the vacuum sector is preserved only by the observable algebra $F_{obs}$ of gauge-invariant operators." If we are reading the same book: on page 197 Strocchi explicitly used the vev of the gauge-dependent order parameter in stating Theorem 19.1. I'm really not sure you and 40227 are coming from the same perspective. So let me ask you: do you think that in the Higgs mechanism there are degenerate inequivalent vacua?

@JiaYiyang: Superselection sectors are not specific to broken symmetry. QED has them, and even a massless scalar field in 2D has them. They are related to charges and/or topological issues in the boundary conditions of the fields. Strocchi: I have the 2nd edition; Theorem 19.1 is indeed on p.197. The vacuum state considered there is not a physical state but a nonpositive state in Krein space. You can see this since in a Hilbert space there are no operators with the properties required of $\mathcal{L}$ in the two lines after (19.4). See also his comments in the context of footnotes 170/171 on p.182 (second page of Chapter 17). Probably this difference in what are considered acceptable assumptions is the reason why you come to a different interpretation than 40227 and I do. We assume a physical vacuum state; you seem to be content with the formal but unphysical manipulations. Who was talking about degenerate inequivalent vacua?
"when you say a field theory is quantized (perturbatively or non-perturbatively), at the very least you have to know how each field operator (in your field algebra) acts on the Hilbert space, and a charge is just a combination of fields" Under quantization, "fields" of the classical theory are mapped to fields of operators in the quantum theory. "Fields" of a classical gauge theory are gauge-invariant fields. Only for gauge-invariant fields does one expect a corresponding field of operators. I have indicated in the second paragraph of my answer how, at least formally, I know how to define the expectation value of a gauge-invariant field by a path integral, and why I don't know how to do the same for a gauge-variant field.

"In fact, in general the Hamiltonian of a gauge theory also depends on the gauge choice; would you conclude the Hamiltonian is also ill-defined?" I don't understand this remark. The Hamiltonian of a gauge theory is gauge invariant: how could a measurable quantity like the energy depend on an unphysical choice of gauge? (I agree that this statement is only true up to a subtlety related to asymptotic symmetries.)

@ArnoldNeumaier, "Superselection sectors are not specific to broken symmetry. QED has them, and even a massless scalar field in 2D has them. ... Who was talking about degenerate inequivalent vacua?" 40227 and I were, I believe; this is the reason I assumed you were talking about the same thing. But I'd still like to ask the same question: do you think there are degenerate vacua? My disagreement with 40227 derived from this statement of 40227's: "the point of my answer is that the notion of 'spontaneously broken vacua' does not make sense away from the perturbative regime." And I simply fail to see how SSB can fail to make sense non-perturbatively, since all you need is vacuum degeneracy and the fact that the symmetry charge doesn't annihilate the vacuum. I understand you were trying to say the second condition might be ill-defined, but this could be just a semantic difference, because in any case it still doesn't annihilate the vacuum; it's just that in your way of phrasing it, it doesn't annihilate because either it can't act (in Coulomb gauge) or it can act but creates unphysical states (in local gauges).

"Strocchi: I have the 2nd edition; Theorem 19.1 is indeed on p.197. The vacuum state considered there is not a physical state but a nonpositive state in Krein space. You can see this since in a Hilbert space there are no operators with the properties required of $\mathcal{L}$ in the two lines after (19.4). See also his comments in the context of footnotes 170/171 on p.182 (second page of Chapter 17)." I believe in his writing the vacuum is a physical one; it's just that the excitation mode the field or charge operator creates is not physical, which is precisely the point of the Higgs mechanism. In any case, the vev of the charged order-parameter field is mathematically well-defined (to further minimize the effect of our semantic difference, see Theorem 19.3 of Strocchi, where a non-covariant quantization is used and only physical states survive, but Strocchi still freely uses the vev of the charged field), and 40227 is disagreeing with this (see his/her comments). I'm really arguing with two different perspectives from you and 40227 here, so let's try to avoid further conflation.

@40227, ""Fields" of a classical gauge theory are gauge-invariant fields. Only for gauge-invariant fields does one expect a corresponding field of operators."
I don't see why this is necessarily the case. "I have indicated in the second paragraph of my answer how, at least formally, I know how to define the expectation value of a gauge-invariant field by a path integral, and why I don't know how to do the same for a gauge-variant field." What's the difficulty? Just use
$$\langle \phi \rangle= \lim_{J\to 0}\int \mathcal{D}U\, \mathcal{D}\phi\; e^{-S[U, \phi, J]}\,\phi,$$
where $J$ is a source and $U$ is a gauge link. "I don't understand this remark. The Hamiltonian of a gauge theory is gauge invariant: how could a measurable quantity like the energy depend on an unphysical choice of gauge?" I thought you were saying any symmetry charge that is formally gauge-choice dependent cannot exist, no? My point is that the explicit form of a gauge theory Hamiltonian is typically different in different gauge choices, but this is not a big problem, since in different gauges the quantization procedures will also differ, and all the resulting theories should be unitarily equivalent. (Maybe we should also distinguish the terms "gauge-choice dependent", which is about gauge fixings, and "gauge variant", which is about gauge transformations.)

"I'm really arguing with two different perspectives from you and 40227" Well, this is because infrared physics (and this includes symmetry breaking) is intrinsically nonperturbative. There is no uniform understanding of it in the literature since it is poorly understood. Thus how one understands things depends on one's background, and one gets different partial pictures from the different basic views (path integral, canonical quantization, functional Schroedinger approach, axiomatic quantum field theory). Sometimes these pictures are partially in conflict, but in general the more views one understands, the better the total informal picture. My view is a mix. I had no difficulties understanding 40227, so I had assumed we have similar views. But now I see that he thinks more in terms of path integrals than I do. Nevertheless (and I find this reassuring for my point of view), what he says is consistent with the axiomatic approach based on physical states.

"do you think there are degenerate vacua?" Yes, but unlike what you had asked before, they are all equivalent through the action of the remaining symmetry group. That's why I was irritated.

"the vev of the charged order-parameter field is mathematically well-defined" Possibly, possibly not. We don't yet have a consistent axiomatic framework for gauge theory, not even in lower dimensions. Strocchi just makes assumptions that look plausible in a canonical (hence, for gauge theories, necessarily indefinite) quantization framework; but as Weinberg's treatise shows, canonical quantization has other difficulties in the gauge case. In any case, it is clear now that you mean the vev defined with the indefinite inner product.

"see Theorem 19.3 of Strocchi, where a non-covariant quantization is used and only physical states survive, but Strocchi still freely uses the vev of the charged field" Strocchi says explicitly on p.205 (end of 2nd par.) that the observable subalgebra (which acts on the physical Hilbert space, i.e., on each sector separately) contains the relevant order parameter.

"What's the difficulty? Just use $\langle \phi \rangle= \lim_{J\to 0}\int \mathcal{D}U\, \mathcal{D}\phi\; e^{-S[U, \phi, J]}\,\phi$, where $J$ is a source and $U$ is a gauge link." In this formula, what is the range of integration for the variables $U$ and $\phi$?

@40227, I'd say over all their ranges.
Or may I answer you this way: over the same range that makes you think $\langle \text{tr}\, \phi^2 \rangle$ is well defined?

@ArnoldNeumaier, "Yes, but unlike what you had asked before, they are all equivalent through the action of the remaining symmetry group. That's why I was irritated." I said those vacua are inequivalent because there are no implementable unitary maps linking them, i.e. the statement was precisely about the broken symmetries, not the remaining ones. In any case, we are on the same side about this particular point, but 40227 might not share the same opinion. "Strocchi says explicitly on p.205 (end of 2nd par.) that the observable subalgebra (which acts on the physical Hilbert space, i.e., on each sector separately) contains the relevant order parameter." I think that's because on page 205 he's talking about the global U(1)-flavor symmetry, so a suitable order parameter can be gauge invariant (since the gauge group is a completely independent group); this is not surprising. But we are debating the global symmetry that has a local counterpart (gauge symmetry), so Theorem 19.3 is the relevant one here, in which he explicitly states the order parameter is in the Coulomb algebra, not the observable algebra. "Possibly, possibly not. We don't yet have a consistent axiomatic framework for gauge theory, not even in lower dimensions." Well, I would rather trust the results that are physically robust. It's profoundly weird if gauge dependence can eliminate the legitimacy of an order parameter. For example, if it were true, in QCD $\bar{\psi} \psi$ would be a perfectly OK order parameter (to study flavor-chiral SSB), but the moment we decide to also consider the electroweak interaction, this order parameter becomes gauge-dependent, and suddenly it becomes unspeakable? Does this mean it is fundamentally flawed to study QCD chiral symmetry breaking without electroweak considerations? This is too absurd. If there's a genuine mathematical difficulty as you said (which I'm not convinced of yet), then probably the mathematics has to change, not the physics.

@JiaYiyang: "But we are debating the global symmetry that has a local counterpart (gauge symmetry), so Theorem 19.3 is the relevant one here, in which he explicitly states the order parameter is in the Coulomb algebra, not the observable algebra." I looked in more detail into the matter. From the algebraic point of view one has the Coulomb field algebra and the observable algebra. What I hadn't seen before, but became slowly apparent through our discussion, is that the observable algebra is different depending on whether or not a gauge group is broken. It contains a Lie algebra $L$ of charges, but which of these are realized as gauge charges depends on the representation. In a field algebra in an unbroken representation, all charges in $L$ are gauge charges, the charged vector fields are represented in a massless representation, the observable algebra is the centralizer of $L$, and expectations make sense only for this small observable algebra. In a field algebra in a broken representation, only the elements of a Lie subalgebra $L_0$ of $L$ are realized as gauge charges, and the vector fields corresponding to this subalgebra are represented in a massless representation.
The remaining ones are represented in a massive representation with longitudinal modes, and are therefore no longer gauge fields. Thus the observable algebra is the centralizer of the smaller $L_0$, and expectations make sense for this bigger observable algebra. This explains the validity of Theorem 19.3, since the assumption is made that the symmetry is broken. But in this case $L_0$ is trivial, so that the observable algebra coincides with the field algebra.

"in QCD, $\bar\psi\psi$ would be a perfectly OK order parameter (to study flavor-chiral SSB), but the moment we decide to also consider the electroweak interaction, this order parameter becomes gauge-dependent, and suddenly it becomes unspeakable?" With the new insight from the dependence of the observable algebra on the brokenness, this problem disappears, as the electroweak interaction is not a true gauge symmetry but a broken one. It is unbroken at high temperatures, but at high temperature there is no vacuum state in the sense of canonical QFT: the ground state at positive temperature is not Poincaré invariant.

"I'd say over all their ranges. Or may I answer you this way: over the same range that makes you think $\langle \text{tr}\, \phi^2 \rangle$ is well defined?" If you take all the range, then you probably obtain an infinite result, because along the gauge orbits the action is constant and so there is no exponential suppression of the integrand. To give a path integral definition of a gauge-invariant quantity like $\langle \text{tr}\, \phi^2\rangle$, one has to integrate over the quotient space by the gauge transformations, i.e. over the gauge equivalence classes of fields. This makes sense because a gauge-invariant quantity naturally defines a function on this space, whereas this is not the case for a gauge-variant quantity. Let me give an elementary finite-dimensional analogue: take the real line $\mathbb{R}$ as the analogue of the space of fields, and the additive group $\mathbb{Z}$ as the analogue of the group of gauge transformations, acting on $\mathbb{R}$ by translation. The quotient space $\mathbb{R}/\mathbb{Z}$ is a circle $S^1$ and is the analogue of the space of gauge equivalence classes of fields. The analogue of a gauge-invariant quantity is a function on $\mathbb{R}$ invariant under integral translations, i.e. a 1-periodic function. The analogue of a general gauge-variant quantity is a general function on $\mathbb{R}$. It is clear what the mean value of a 1-periodic function is: it is the integral over any interval of length $1$. It is unclear what the mean value of a general function is: the integral over the real line will in general diverge.

@40227, "This makes sense because a gauge-invariant quantity naturally defines a function on this space, whereas this is not the case for a gauge-variant quantity." But the action is already not gauge invariant, due to the source term. And also I don't see what goes wrong, let's say, if I calculate $\langle \phi \rangle$ using integration over the quotient manifold. "It is unclear what the mean value of a general function is: the integral over the real line will in general diverge." In the general case, even the quotient manifold is non-compact, and we don't expect such an integral to diverge just from non-compactness (for one lattice point), because the integrand is controlled by a Gaussian. But indeed gauge-equivalent copies may give a divergence; isn't that effect eliminated by $Z^{-1}[J]$ (which I forgot to write in my formula)?

"But the action is already not gauge invariant, due to the source term." Indeed, I should have been more careful.
As suggested in my answer, I am not thinking in terms of a source term: I specify a vacuum by specifying a boundary condition on the fields $\phi$ over which we do the path integral. "And also I don't see what goes wrong, let's say, if I calculate $\langle \phi \rangle$ using integration over the quotient manifold." Simply because $\phi$ is not a well-defined function on the quotient manifold. In the toy example, if $x$ is a coordinate on $\mathbb{R}$, then $x$ is not a well-defined function on $\mathbb{R}/\mathbb{Z}$, because it is not 1-periodic. "In the general case, even the quotient manifold is non-compact, and we don't expect such an integral to diverge just from non-compactness (for one lattice point), because the integrand is controlled by a Gaussian. But indeed gauge-equivalent copies may give a divergence; isn't that effect eliminated by $Z^{-1}[J]$?" I agree that non-compactness is not the main issue. The quotient manifold is non-compact, but one expects convergence thanks to the exponential suppression of the configurations with big action. If we don't take the quotient, then the problem is non-compactness along the gauge orbits: there is no exponential suppression in these directions, precisely because the action is gauge invariant. I don't think that adding the normalization helps: it formally gives the right answer if we start by integrating a gauge-invariant quantity, but in general I don't think it gives more than $\infty/\infty$ (a formula like $\langle x^2\rangle := \int_\mathbb{R} x^2\, dx \,\big/ \int_\mathbb{R} dx$ does not sound very promising...).

"Indeed, I should have been more careful. As suggested in my answer, I am not thinking in terms of a source term: I specify a vacuum by specifying a boundary condition on the fields $\phi$ over which we do the path integral." Putting how to define the path integral aside for the moment: if you say you already know how to specify vacua by boundary conditions, doesn't this mean you already know whether there's SSB? For example, if in your list of vacua some have nonvanishing boundary conditions, this already means such vacua break the global gauge symmetry. How come you still think the notion of SSB doesn't make sense? "Simply because $\phi$ is not a well-defined function on the quotient manifold. In the toy example, if $x$ is a coordinate on $\mathbb{R}$, then $x$ is not a well-defined function on $\mathbb{R}/\mathbb{Z}$, because it is not 1-periodic." Ok, I see your point now, thanks for the clarification. "I agree that non-compactness is not the main issue. ... a formula like $\langle x^2\rangle := \int_\mathbb{R} x^2\, dx \,\big/ \int_\mathbb{R} dx$ does not sound very promising..." I see what you are saying now, but I think you are wrong here. The noncompact part of the field is taken care of by Gaussian suppression, but the gauge orbit only gives a finite Haar-volume contribution, so long as we are talking about compact gauge groups.
So your $x^2$ example is flawed; a correct toy example would be the following: consider the non-propagating action $$S=\int_x \phi(x)^*\phi(x).$$ Clearly this model has a local U(1) symmetry, and it's an easy exercise to show that $$\langle \phi(x_1)^2\rangle = \frac{1}{Z}\int \mathcal{D}\phi\; e^{-S}\, \phi(x_1)^2$$ is completely well defined (note $\phi(x_1)^2$ is gauge-variant); in particular, on a finite lattice both the numerator and the denominator are finite, but the $Z$ in the denominator helps a lot if you want to take the infinite-volume limit (and that was what I meant to say). Formally this carries over to the continuum theory, as far as the matter-field integration is concerned. What really makes the difference is the gauge-field measure: in the continuum theory the gauge orbit of the gauge field is non-compact, so the integral diverges even for a single space-time point, and this is why we need gauge fixing, ghosts and all that. In lattice gauge theory, by contrast, we integrate over gauge link variables, so even for the gauge field the gauge orbit becomes compact, and this is the standard argument for why lattice gauge theory doesn't necessarily need gauge fixing.

"I agree that non-compactness is not the main issue." This can be seen by taking, in the toy example, a plane and a cylinder in place of the real line and the circle. The cylinder is noncompact, but any continuous function fast-decaying along the unbounded axis is integrable, while the periodic extension of the same function to the plane is never integrable.

"The gauge orbit only gives a finite Haar-volume contribution, so long as we are talking about compact gauge groups." This would be the case if you just integrated over the gauge group, which is a finite-dimensional manifold. But you integrate over maps from space-time to the gauge group, which form an infinite-dimensional manifold. Thus compactness is lost.

Edit @JiaYiyang: If you first go to the lattice, you need to work with a finite 4D space-time in order to have a compact integration domain. But this means that dynamics is lost. Moreover, since 4D lattice calculations are in periodic Euclidean time, the results correspond to a finite temperature, whereas so far I thought we were discussing vacuum expectations. (At least Strocchi did.) To preserve the vacuum (approximately), you can only discretize space and must work in real time; but then the gauge integrations are over maps from (finite space) $\times$ (real time) to the gauge group, and this is again an infinite-dimensional, noncompact manifold. Thus the problems persist.
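(A numerical caricature of the $\mathbb{R}$ vs $\mathbb{R}/\mathbb{Z}$ toy example above, added as an illustration of mine: the average of a 1-periodic "gauge-invariant" function is already determined on one period, while the naive average of a generic "gauge-variant" function over $[-N, N]$ grows without bound.)

```python
# Toy version of the R vs R/Z discussion: averaging a 1-periodic
# ("gauge invariant") function vs a generic ("gauge variant") one.
import numpy as np

x1 = np.linspace(0.0, 1.0, 100001)            # one period of R/Z
print(np.trapz(np.sin(2*np.pi*x1)**2, x1))    # 0.5, interval-independent

for N in (1.0, 10.0, 100.0):
    xN = np.linspace(-N, N, 200001)
    print(N, np.trapz(xN**2, xN) / (2*N))     # N^2/3: diverges as N grows
```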
Again, this is a limit problem: to get to zero temperature we need to take the infinite-volume limit in the time direction. If one wants to talk about a continuum formal set-up from the beginning, the best we have is gauge fixing, ghosts, BRST and all that. Gauge symmetry is explicitly broken by gauge fixing, which would have made much of the discussion between 40227 and me less motivated.

> Putting aside for the moment how to define the path integral: if you say you already know how to specify vacua by boundary conditions, doesn't this mean you already know whether there is SSB? For example, if in your list of vacua some have non-vanishing boundary conditions, this already means such vacua break the global gauge symmetry. How come you still think the notion of SSB doesn't make sense?

It means that I already know what the vacua are and how to compute physical quantities in each of them. It is because we know the classical vacua and, due to the large amount of supersymmetry ($N=2$), they cannot be lifted by a potential. I agree that in more complicated cases the determination of the space of vacua is much more involved. Functions on the space of vacua are given by expectation values of operators. To compute them, one introduces a source (as in your suggested definition of $\langle\phi\rangle$), computes the path integral as a function of the source, and performs a Legendre transform to obtain an effective potential whose critical points are the different possible expectation values of the operator. Again, in order for this story to make sense, I am talking about gauge-invariant operators. These expectation values of gauge-invariant operators parametrize the possible vacua of the theory. In each of them there is a physics to study. In some of them there is a valid semiclassical description, and in this description it makes sense to say whether there is SSB of gauge symmetry or not. For the others, I don't know what it means to talk about SSB of gauge symmetry.

> Of course, this is the difficult infinite-volume/continuum problem, but 40227 and I were debating a much simpler problem.

I think that we are talking about the same problem, and on this point, as on most of the above ones, I agree with Arnold Neumaier. The path integral I was referring to was really the infinite-volume one (the BRST formalism is just one way to express the quotient measure on the quotient space, and this does not require gauge fixing; one of the points of my answer was that there is no possible gauge fixing beyond perturbation theory, due to the Gribov ambiguity, i.e. the non-triviality of the space of fields seen as a bundle over the quotient space of gauge-equivalence classes of fields, with fiber the group of gauge transformations). The whole question of the existence of different vacua arises only in the infinite-volume limit, due to non-trivial dynamics towards the infrared. I agree that on a lattice in a box, with everything discrete and finite, integrating over all gauge fields and then dividing by the volume of the group of gauge transformations makes sense for any fields, gauge invariant or not (just as $\int_{-N}^N x^2\,dx \,/ \int_{-N}^{N} dx$ makes sense for finite $N$). But the non-trivial issues happen in the infinite-volume limit.
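For reference, the source-and-Legendre-transform construction just described, written out in one common Euclidean convention (signs vary between references; $\mathcal{O}$ denotes the gauge-invariant operator in question):

$$ e^{-W[J]} = \int \mathcal{D}\phi\; e^{-S[\phi] + \int J\,\mathcal{O}(\phi)}, \qquad v = -\frac{\delta W}{\delta J}, \qquad \Gamma[v] = W[J] + \int J\,v, \qquad \frac{\delta \Gamma}{\delta v} = J, $$

so the possible vacuum expectation values of $\mathcal{O}$ are precisely the critical points of the effective potential $\Gamma$ at $J = 0$.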
> Again, in order for this story to make sense, I am talking about gauge-invariant operators.

> I think that we are talking about the same problem, and on this point, as on most of the above ones, I agree with Arnold Neumaier. The path integral I was referring to was really the infinite-volume one.

So far I still don't see a reason for the insistence on gauge invariance. The lattice is pretty much the only useful non-perturbative framework so far, and I've argued that on a finite lattice $\langle \phi \rangle$ is well defined; the rest of the question is whether the infinite-volume/continuum limit exists. So are you suggesting that a gauge-variant expectation value necessarily fails to have a good limit? I don't see why that should be a priori true, and it certainly isn't true for the simple gauged U(1) toy example I used. A complete calculation goes like this: $$\langle \phi(x_k) \rangle =\langle \phi_1 + i \phi_2 \rangle\\ =\lim_{J\to 0}\lim_{N\to \infty} Z^{-1}[J]\int \Big[ \prod^{N}_{n=1} \mathrm{d}\phi_1(x_n)\, \mathrm{d}\phi_2 (x_n) \Big]\, \phi(x_k)\,\exp\Big[\sum_{n=1}^{N}\big(-\phi^*(x_n)\phi(x_n)+J^*\phi(x_n)+J\phi^*(x_n)\big)\Big]\\ =\lim_{J\to 0} J =0.$$ So this is perfectly well defined (the continuum limit is also very trivial).

> So are you suggesting a gauge-variant expectation value necessarily fails to have a good limit?

I am not saying that it necessarily fails, but that it can fail. The calculation that you have done, starting on a lattice with a field in a representation of the gauge group containing no trivial subrepresentation, always gives 0. See for example Itzykson, Drouffe, Statistical Field Theory, Vol. 1, section 6.1.3, "Order parameter and Elitzur's theorem". Related comments by Seiberg can be found in this video talk: https://video.ias.edu/PiTP-Seiberg-2of3 , starting from 53:30 for comments on gauge symmetry and from 1:03:00 on gauge symmetry breaking.

Ok, I dug into Elitzur's theorem a bit, and it indeed seems to be a genuine problem (the original Elitzur theorem and Itzykson's exposition apply only to bounded observables like $\langle \cos \phi \rangle$, but it was generalized to $\langle \phi \rangle$ for the abelian Higgs lattice model by this paper, using some very non-trivial estimates). So in light of the EDDG (Elitzur-De Angelis-de Falco-Guerra) theorem, the problem is not that $\langle \phi \rangle$ is ill-defined; in fact what EDDG shows is that it is almost always well-defined and equal to 0 (so instead we should really say $\langle \phi \rangle$ is too well-defined, to the point of becoming trivial...), and the latter property is a serious problem. The immediate question is: if the continuum theory is in any sense a limit of a lattice theory, how can a non-zero VEV ever emerge during this "taking the continuum limit" process? As a consequence, in the case of Seiberg-Witten's $\langle \mathrm{tr}\,\phi^2\rangle$, you still can't make the claim $\langle \mathrm{tr}\,\phi^2 \rangle \sim a^2$ for large $a$, since EDDG tells us $\langle \phi\rangle$ is always 0 quantum mechanically (in other words, the classical limit is never reached smoothly). Indeed, one can retreat and say "let's just treat $\langle \mathrm{tr}\,\phi^2 \rangle$ as some moduli parameter of our theory, nothing else." But then the problem becomes: why does $\langle \mathrm{tr}\,\phi^2 \rangle$ have anything to do with Higgs phenomenology at all? After all, Higgs phenomenology is based on a non-zero $\langle \phi \rangle$, and EDDG tells us there is absolutely no relation between $\langle \mathrm{tr}\,\phi^2 \rangle$ and $\langle \phi \rangle$ (at least in a gauge-invariant lattice formalism).
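As an aside (our illustration, not a claim made by either participant): for the non-propagating toy action above, each lattice site decouples into an independent complex Gaussian with weight $e^{-|\phi|^2}$, so one can check numerically that the gauge-variant $\langle\phi\rangle$ averages to zero while the gauge-invariant $\langle|\phi|^2\rangle$ stays finite:

```r
# Toy model S = sum_n |phi(x_n)|^2: sites decouple into independent
# complex Gaussians with density proportional to exp(-|phi|^2),
# i.e. Re(phi), Im(phi) ~ Normal(0, sd = 1/sqrt(2)).
set.seed(1)
n <- 1e6
phi <- complex(real      = rnorm(n, sd = 1 / sqrt(2)),
               imaginary = rnorm(n, sd = 1 / sqrt(2)))
mean(phi)         # ~ 0+0i : the gauge-variant expectation vanishes
mean(Mod(phi)^2)  # ~ 1    : the gauge-invariant expectation is finite
```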
> if the continuum theory is in any sense a limit of a lattice theory, how can a non-zero VEV ever emerge during this "taking the continuum limit" process?

Probably by inventing suitable projectors $P$ to subspaces of the lattice Hilbert spaces where the limiting states are well-defined and give the expectation in appropriate superselection sectors, $\langle f\rangle=\lim \langle PfP\rangle$.

@ArnoldNeumaier, but at what stage of the limit-taking does one introduce such a projection? It's a bit hard even to imagine, since "introducing $P$ or not" is a binary choice while taking a limit is a continuous process. After scratching my head really hard, I think I roughly understand what the deal is with the EDDG theorem. In a gauge-invariant formalism one can locally change the field without energy cost; then in thermal equilibrium, even if we start with a configuration with the fields at all lattice sites aligned to a fixed direction (on the gauge orbit), the field at each single lattice site will quickly equilibrate into a uniform superposition of all configurations on the gauge orbit, so any gauge-variant operator will have 0 VEV. This is different from the global-symmetry situation, where, if we start with all fields aligned, the phase will be "locked" in that direction, because there would be an extensive energy cost to "rotate" the fields locally from one point to another. However, there is one thing about continuum gauge theory that EDDG doesn't seem to capture, even putting the gauge-fixing issue aside. In continuum gauge theory a gauge transformation is required to preserve the boundary conditions at infinity, i.e. $\lim_{x\to \infty} G(x)=I$, and because of this one cannot say that a global symmetry transformation is just a special case of gauged ones. And clearly with this it is definitely meaningful to ask whether there is global symmetry breaking in a given gauge theory, because global symmetry breaking gives different boundary conditions at infinity, which are stable under gauge transformations. So if one imagines somehow having an infinite-volume lattice gauge theory, to capture the same requirement one must somehow suppress gauge fluctuations at large distance; note that this suppression doesn't have to be a gauge fixing, although gauge fixing can achieve it (for the moment I don't know what else could, and I've been thinking about it, quite painfully...). @40227, maybe I should ask a question that is more relevant to the title: how critical is the use of $\langle \mathrm{tr}\,\phi^2\rangle$ in the Seiberg-Witten paper? Had they used $\langle \phi \rangle$, would it go terribly wrong?

> at what stage of the limit-taking does one introduce such a projection?

> maybe I should ask a question that is more relevant to the title: how critical is the use of $\langle \mathrm{tr}\,\phi^2\rangle$ in the Seiberg-Witten paper? Had they used $\langle \phi \rangle$, would it go terribly wrong?

Again, the point is that we don't know what $\langle\phi\rangle$ means. In the Seiberg-Witten context, one could naively think that there is a way to define $\langle\phi\rangle$. Indeed, classically $\langle\phi\rangle$ makes sense, is determined up to gauge transformations by its eigenvalues $(a,-a)$, and, according to the classical Higgs mechanism, $|a|$ is (maybe up to some numerical constants) the mass of the $W$ bosons. This suggests a definition of $|a|$ in the full quantum theory: define it as the mass of the $W$ bosons.
This works in the semiclassical region of the moduli space of vacua, where there exist $W$ bosons, but it fails in the strong-coupling region: it is unclear a priori what the spectrum of the theory is and whether there is anything in the spectrum one could call $W$ bosons (and in fact it is part of the conclusion of the Seiberg-Witten analysis that the $W$ bosons existing at weak coupling disappear at strong coupling). One can try to do better: classically, $a$ is the electric component of the central charge. The central charge is part of the supersymmetry algebra and so makes sense in the full quantum theory (we are looking at supersymmetric vacua). This suggests a definition of $a$ in the full quantum theory: define it as the electric part of the central charge. But this fails because the "electric part" is not well-defined, owing to the electromagnetic duality of 4d abelian gauge theory. More precisely, it is well-defined in the weak-coupling region and one can "analytically continue" it into the strong-coupling region, but due to singularities in the moduli space of vacua, one can come back to the weak-coupling region with a non-trivial monodromy. So $a$ is a multivalued function on the moduli space of vacua, and so is not enough to describe the moduli space. Conversely, $u=\langle \mathrm{Tr}\, \phi^2\rangle$ is a good coordinate on the moduli space, and the subject of the Seiberg-Witten paper is to determine the relation between $u$ and $a$: for a given vacuum $u$, what is the central charge, what is the spectrum...

@JiaYiyang The gauge-invariant operator of the broken $SU(2)$ SYM is $\mathrm{tr}\,\phi^2$.

@conformal_gk, yes, that's what the paper uses, but my disagreement is that such an order parameter is not a good one; namely, it is insufficient to use a non-zero $\mathrm{tr}\,\phi^2$ to invoke the Higgs mechanism.

@JiaYiyang you're right, the vev is non-zero. Thus it is sufficient.

@conformal_gk, no it's not; look at the first example I used in my original post: the chiral condensate in QCD. It's gauge invariant, it's non-zero, and still there is no Higgs mechanism.
# Testing simultaneous and lagged effects in longitudinal mixed models with time-varying covariates

I was recently told that it is not possible to incorporate time-varying covariates in longitudinal mixed models without introducing a time lag for these covariates. Can you confirm or deny this? Do you have any references on this situation?

I propose a simple situation to clarify. Suppose that I have repeated measures (say over 30 occasions) of quantitative variables (y, x1, x2, x3) in 40 subjects. Each variable is measured 30 times in each subject by a questionnaire. Here the final data would be 4,800 observations (4 variables × 30 occasions × 40 subjects) nested in 40 subjects. I would like to test separately (not for model comparison):

• simultaneous (synchronous) effects: the influence of x1, x2, and x3 at time t on y at time t.
• lagged effects: the influence of x1, x2, and x3 at time t-1 on y at time t.

I hope everything is clear (I'm not a native English speaker!). For instance, in R's lmer{lme4}, the formula with lagged effects is: lmer(y ~ lag1.x1 + lag1.x2 + lag1.x3 + (1|subject)) where y is the dependent variable at time t, lag1.x1 is the lagged independent variable x1 at the individual level, etc. For simultaneous effects, the formula is: lmer(y ~ x1 + x2 + x3 + (1|subject)) Everything runs well and gives me interesting results. But is it correct to specify an lmer model with synchronous time-varying covariates, or have I missed something? Edit: Moreover, is it possible to test both simultaneous and lagged effects at the same time? For instance: lmer(y ~ x1 + x2 + x3 + lag1.x1 + lag1.x2 + lag1.x3 + (1|subject)) Theoretically, it makes sense to test concurrent vs. lagged effects against each other. But is it possible with lmer{lme4} in R, for example?
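Not an answer to the statistical question, but for concreteness, here is a minimal R sketch of the bookkeeping involved (the data frame `dat` with columns `subject`, `time`, `y`, `x1`-`x3` is hypothetical). The within-subject lag is built with `dplyr::lag`, which leaves an `NA` at each subject's first occasion, so the lagged and combined models are fitted on one fewer occasion per subject:

```r
library(lme4)
library(dplyr)

# Build within-subject lags in a long-format data set
dat <- dat %>%
  arrange(subject, time) %>%
  group_by(subject) %>%
  mutate(lag1.x1 = lag(x1),   # NA at each subject's first occasion
         lag1.x2 = lag(x2),
         lag1.x3 = lag(x3)) %>%
  ungroup()

# Simultaneous, lagged, and combined specifications
m_sync <- lmer(y ~ x1 + x2 + x3 + (1 | subject), data = dat)
m_lag  <- lmer(y ~ lag1.x1 + lag1.x2 + lag1.x3 + (1 | subject), data = dat)
m_both <- lmer(y ~ x1 + x2 + x3 + lag1.x1 + lag1.x2 + lag1.x3 + (1 | subject),
               data = dat)
```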
# How do you write the partial fraction decomposition of the rational expression x^3/[x(x^2+2x+1)]?

Since the degree of the numerator equals that of the denominator, first divide:

$$\frac{x^3}{x^3 + 2x^2 + x} = 1 + \frac{-2x^2 - x}{x(x+1)^2}$$

Then set

$$\frac{-2x^2 - x}{x(x+1)^2} = \frac{A}{x} + \frac{B}{x+1} + \frac{C}{(x+1)^2},$$

so that $-2x^2 - x = A(x+1)^2 + Bx(x+1) + Cx = (A+B)x^2 + (2A+B+C)x + A$. Matching coefficients gives $A + B = -2$, $2A + B + C = -1$ and $A = 0$, hence $B = -2$ and $C = 1$. Therefore

$$\frac{x^3}{x^3 + 2x^2 + x} = 1 - \frac{2}{x+1} + \frac{1}{(x+1)^2}.$$
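As a quick numerical spot-check of the decomposition (an R one-off with arbitrary sample points away from the poles at $x = 0$ and $x = -1$):

```r
f <- function(x) x^3 / (x * (x^2 + 2*x + 1))   # original expression
g <- function(x) 1 - 2/(x + 1) + 1/(x + 1)^2   # decomposition
x <- c(0.5, 2, -3)
all.equal(f(x), g(x))   # TRUE
```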
# Is the total upwards vertical force (lift included) greater than the weight in a steady climb?

Related to questions on forces in a climb: if an aircraft is climbing with a constant velocity, is the total upwards vertical force (lift included) greater than the weight?

• We already have a question that specifically asks "My question therefore is also about the sum of all vertical forces: in a steady climb, is the total upwards vertical force from all sources (wing, tail, engines, fuselage) larger than, or equal to the weight of the aircraft." See Does lift equal weight in a climb? Jun 11 at 4:57
• The body of the question asks about the total upwards force, which might be construed to mean either the net vertical force excluding weight, or the sum of the upward force components. Since at least one aerodynamic force has a downward component, these are not the same thing. The title asks about the total vertical force, presumably meaning the net vertical (aerodynamic?) force, i.e. the sum of all vertical forces other than weight. The title and body could use some editing to be better harmonized and clearer about which of these things you are asking. Jun 11 at 5:43
• (So consider changing the title to read "Is the sum of the upwards forces higher than weight in a steady climb?", or "Is the net vertical aerodynamic force greater than weight in a steady climb?", depending on which you really want to know.) Jun 11 at 5:45
• Is Lift Greater Than Weight In A Climb was asked 4 years ago; editing it now may make many of the answers a mismatch. It was active again when this answer was added, initially stating that it was. It received quite a few downvotes. It is important to realise in the debate that the upwards vertical force is higher than weight in a climb, but lift isn't necessarily, depending on the tilt of the aircraft velocity vector. Jun 11 at 14:40
• It's interesting how vague simple words can end up being (I guess that's why a force vector diagram is worth a thousand words). I'm still not completely clear whether you are asking about the net vertical aerodynamic force, or the net vertical force, or whether you are interested in adding up all the upward aerodynamic force components (while ignoring the downward aerodynamic force components) and comparing that value to the weight. My answer aviation.stackexchange.com/a/56476/34686 to the other related question is intended to address all those different cases. Jun 11 at 14:45

This question is purely a definition issue, and the answer is 'yes' or 'no' based only on which definitions you use. In Newtonian physics, a lot of complex interactions are modelled as single, lumped vectors which we call "forces". These forces share the nice properties of vectors: notably, that we can decompose one vector into multiple vectors, or sum multiple vectors into a single vector. One of the main reasons to do so is to decompose a vector into components parallel to some (typically orthogonal) coordinate system. An important observation is that there is no 'true' way of representing the forces acting on the airframe. While some decompositions are more popular than others, all are equally valid (if done correctly). I will take two examples, one of which arrives at the conclusion 'yes', the other at 'no'.

Example 1. Decompose the aerodynamic forces on the airplane parallel and orthogonal to the flight path. Call one 'lift', call the other one 'drag'. Let's assume 'thrust' is also parallel to the flight path.
Weight is represented as a single vector, orthogonal to the Earth, and is not decomposed along the flight path. Now take all the forces that we decomposed along the flight path, and again decompose them, now orthogonal to the Earth. Then look only at the forces pointing 'up', which in a climb (but not in a descent) removes the vertical component of the 'drag' vector, and compare the result to the weight vector. With this elaborate procedure, we can conclude the answer is 'yes'.

Example 2. Combine all aerodynamic forces on the plane into a single vector instead of decomposing them into lift and drag, called the 'net aerodynamic force'. Leave the thrust and weight vectors unchanged. Again, we decompose all vectors along the Earth reference frame. Now we find that the sum of all the upward components is exactly the same as weight. We can conclude the answer is 'no'.

Note: the net aerodynamic force is shown in the left diagram for illustration only, to show that it is the sum of lift and drag; it is not actually part of the force balance for example 1.

• Would this case be example 2 as well? Sep 7 at 19:15
• @Koyovis In that case there is no lift, only drag, so there's no difference between the net aerodynamic force and the decomposed lift and drag vectors. If your climb is steep enough, example 2 changes from 'no' to 'yes'. (1/2) Sep 7 at 19:18
• Either way, this highlights why I wonder what the practical value of your question is at all, because the answer changes according to definitions and flight conditions. What exactly inspired this question? (2/2) Sep 7 at 19:26
• The downwards-pointing drag is there as well in shallow climbs; if small enough, as with GA planes, it can be neglected. When working with the aerodynamics of helicopters and military jets, it is very apparent that this force needs to be considered in questions about lift and climb. Sep 8 at 0:36
• @ymb1 No idea either. I lumped parasitic drag with the rest of the aerodynamic forces, because the point here is to show that the definitions you use to separate the forces acting on the airframe determine the final answer, not to elaborate a complete description of all forces acting on the aircraft. Sep 8 at 6:26

Yes. Vertical forces are forces in the Earth-axes reference frame. So, remaining in this reference frame and assuming a steady climb at flight-path (climb) angle $\gamma$ with thrust acting along the flight path, the vertical forces satisfy:

$$W + D \sin\gamma - L \cos\gamma - T \sin\gamma = 0$$

• Pointing downwards (+): $W + D \sin\gamma$
• Pointing upwards (−): $L \cos\gamma + T \sin\gamma$

So in order to continue climbing, the total upwards vertical force, consisting of a combination of thrust and lift, must be larger than the weight by the amount $D \sin\gamma$.

Notes:
1. The upwards aerodynamic force is often called lift. But lift is defined in the airframe aerodynamic axes and tilts with the direction of the airspeed. So for a fixed-wing aeroplane in a steady climb, the total vertical force is higher than weight, but lift is smaller than weight.
2. If the upwards force changes, the climb speed changes accordingly. There is an acceleration, causing a change in aerodynamic drag, which stops when the forces are in equilibrium again.
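To put numbers to the balance above, here is a small sketch in R (the figures are made up; the only physics used is the steady-climb equilibrium with thrust assumed to act along the flight path). It reproduces note 1: lift alone is below weight, while the total upward component exceeds weight by exactly $D\sin\gamma$:

```r
# Steady climb with thrust along the flight path; all numbers hypothetical.
W     <- 10000              # weight [N]
gamma <- 10 * pi / 180      # climb angle [rad]
D     <- 800                # drag [N]

T_thrust <- D + W * sin(gamma)   # along-path equilibrium: T = D + W sin(gamma)
L        <- W * cos(gamma)       # perpendicular equilibrium: L = W cos(gamma)
upward   <- L * cos(gamma) + T_thrust * sin(gamma)  # total upward component

c(lift = L, thrust = T_thrust, upward = upward, weight = W)
# lift ~ 9848 N (< W); upward ~ 10139 N (> W); the excess equals D * sin(gamma)
```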
• Only in a constant-density atmosphere. In reality the airplane loses climb speed with increasing altitude, so there is a vertical deceleration term which reduces the vertical force ever so slightly. Jun 11 at 4:31
• Lift is NOT the total upwards aerodynamic force! It is the component of all aerodynamic forces perpendicular to the plane formed by the wings and the flight path of the aircraft. If in a climb, then the flight path is inclined upwards and not vertical, so lift (if it is to have any sane meaning) is also not vertical. You CAN define it any way you want, I suppose, but defining it as the vertical component when the aircraft is in a steep climb or descent (or, for that matter, a steep bank) serves only one purpose: to confuse. Jun 12 at 0:11
• @PeterKämpf The plane is also burning fuel, reducing the weight over time as well. The plane in question is climbing at a steady rate. Jun 12 at 0:15
• @CharlesBretana Thank you for pointing out the value of asking this question twice, once about lift and once about net upwards force. Mentioned in note 1 in the question. Jun 12 at 0:17
• And what is the net upwards force on an F-16 in a pure vertical climb? Do we count the force of the dynamic air pressure on the front surfaces of the aircraft as negative upwards force? What about the force of air pressure on the rear surfaces of the airframe? They are of course pushing the aircraft upwards. Are they "lift"? Breaking up aerodynamic forces into defined components is done to create understanding, simplify, and enable calculations. Applying these arbitrary definitions in scenarios where they do not help to do that only confuses. Jun 12 at 0:21

Yes, always, unless aerodynamic drag does not exist. For winged aircraft the latter is impossible, therefore the answer is yes, always. It is important to realize that excess thrust is required to climb. Excess thrust closes the weight/lift triangle but does not account for aerodynamic drag, which equals the amount of additional thrust required to maintain airspeed. Add this "handle" (in the direction of flight) to the closed vector diagram for vertical lift and voila! There is your vertical aerodynamic drag component (decomposed from the aerodynamic drag vector). Always there, in any climb, for any aircraft. An aircraft that has a thrust-to-weight ratio of less than one simply must use a ramp while maintaining airspeed against drag. The combination of excess thrust and lift supports the weight, enabling steady-state flight with zero acceleration (from gravity), while the remaining thrust at a given velocity opposes aerodynamic drag (part of which is being used to create lift). Like this. In level flight only around 300 lbs of thrust is needed, as the far more efficient wing can now bear all the weight.

If the net forces are zero, the movement will be steady, as per Newton's second law. If the upwards vertical forces equal weight, we will have net zero vertical force and no vertical movement (hover). If the total upward forces are greater than weight, we will have a vertical acceleration until drag brings the velocity to a steady state. If all vertical forces = weight, the aircraft may be rising, hovering, or descending with zero acceleration.

• Thanks Robert for the edits! Could you check again to see if there is a possible conflict between paragraphs two and four? If so, I think it is paragraph four that should be kept. Sep 9 at 12:00
• In paragraph 2 the "downward" force from drag = 0, so 2 and 4 do not conflict. Please excuse my passion (in editing). Paragraph 4 covers all zero-vertical-acceleration cases; indeed, vertical forces essentially make the plane "weightless", leaving the drag from velocity. (Now we can think about airships too.) Sep 9 at 12:04
• Ah, I see, drag has a downward component in a steady climb. And we presume it is a climb.
And we exclude downward components from the upwards vertical forces. Then it's the slowest climb = hover. Couldn't it still give the reader the impression that zero net force always has to give a standstill, and not, as in Newton's second law, a steady movement? Sep 9 at 12:16
• An object in an environment with drag and zero net force will slow to zero velocity (unless the drag force is included!). Since we are comparing all upward force to weight, if there is upward velocity there is a downward drag component, therefore... Sep 9 at 13:36
Reproductive System ICSE Class-10 Concise Selina Biology Solutions Chapter-13. We provide step-by-step answers to the Progress Check, MCQs, Very Short Answer Type, Short Answer Type, Long Answer Type Questions and Structured/Applications/Skill Type Questions of Exercise-13, Reproductive System, ICSE Class-10. Visit the official CISCE website for detailed information about the ICSE Board Class-10.

Board: ICSE
Publications: Selina Publishers PVT LTD
Subject: Concise Biology
Class: 10th
Writer: HS Vishnoi
Chapter-13: Reproductive System
Topics: Solutions of MCQs, Very Short, Descriptive and Structural/Skill Questions, Think and Connect and Progress Check
Edition: 2021-2022

Reproductive System Selina Biology for ICSE Class 10 Chapter-13

A. MULTIPLE CHOICE TYPE

Page 181

Question 1
Which one of the following is the correct route that a sperm follows when it leaves the testis of a mammal?
(a) Vas deferens → epididymis → urethra
(b) Urethra → epididymis → vas deferens
(c) Epididymis → urethra → vas deferens
(d) Epididymis → vas deferens → urethra
(d) Epididymis → vas deferens → urethra

Question 2
When pregnancy does not occur, the life of the corpus luteum is about:
(a) 4 days (b) 10 days (c) 14 days (d) 28 days
(d) 28 days

Question 3
In a female, how much time after fertilization does the fertilized egg get implanted in the uterine wall?
(a) Few months (b) One month (c) Three weeks (d) One week
(d) One week

Question 4
The middle piece of the sperm provides:
(a) energy (b) food (c) gene (d) chromosomes
(a) energy

Question 5
The normal gestation period in humans is:
(a) 270 days (b) 290 days (c) 280 days (d) 295 days
(c) 280 days

Chapter-13, Reproductive System ICSE Class 10

Page 181-182

Question 1
Name the following:
(a) The membrane which protects the foetus and encloses a fluid.
(b) The canal through which the testes descend into the scrotum just before birth in a human male child.
(c) The uterine wall that is shed during menstruation.
(d) The minute finger-like projections of the placenta.
(a) Amnion (b) Inguinal canal (c) Endometrium (d) Villi

Question 2
Rewrite the terms in the correct order so as to be in a logical sequence.
(a) Implantation, ovulation, child birth, gestation, fertilization.
(b) Coitus, ovum, sperm, sperm duct, urethra, vagina.
(c) Sperm duct, penis, testes, sperms, semen.
(d) Puberty, menopause, menstruation, menarche, reproductive age.
(e) Graafian follicle, Ostium, Uterus, Fallopian tube.
(a) Ovulation → fertilization → implantation → gestation → child birth
(b) Sperm → sperm duct → urethra → coitus → vagina → ovum
(c) Testes → sperms → sperm duct → semen → penis
(d) Menarche → puberty → reproductive age → menstruation → menopause
(e) Graafian follicle → Ostium → Fallopian tube → Uterus

Question 3
Give appropriate terms for each of the following:
(a) The onset of the reproductive phase in a female.
(b) Rupture of the follicle and release of the ovum from the ovary.
(c) Monthly discharge of blood and disintegrated tissues in the human female.
(d) The process of fusion of the ovum and the sperm.
(e) Fixing of the developing zygote (blastocyst) on the uterine wall.
(a) Menarche (b) Ovulation (c) Menstruation (d) Fertilization (e) Implantation

Question 4
Match the items in Column I with those in Column II and write down the matching pairs (some may not match).
Column I: (a) Acrosome, (b) Gestation, (c) Menopause, (d) Foetus, (e) Oogenesis, (f) Ovulation
Column II: (i) An embryo which looks like a human baby, (ii) Luteinizing hormone, (iii) Ovum-producing cells, (iv) Semen, (v) Spermatozoa, (vi) Complete stoppage of the menstrual cycle, (vii) Time taken by a fertilized egg till the delivery of the baby

(a) Acrosome – (v) Spermatozoa
(b) Gestation – (vii) Time taken by a fertilized egg till the delivery of the baby
(c) Menopause – (vi) Complete stoppage of the menstrual cycle
(d) Foetus – (i) An embryo which looks like a human baby
(e) Oogenesis – (iii) Ovum-producing cells
(f) Ovulation – (ii) Luteinizing hormone

Question 5
Name the following:
(a) The body part in which the testes are present in a human male.
(b) The part where the sperms are produced in the testes.
(c) The fully developed part of the ovary containing a mature egg.
(d) The accessory gland in human males whose secretion activates the sperms.
(e) The tubular knot fitting like a cap on the upper side of the testis.
(a) Scrotum (b) Seminiferous tubules (c) Graafian follicle (d) Seminal vesicle (e) Epididymis

Question 6
Choose the odd one in each of the following:
(a) Oestrogen; progesterone; testosterone; prolactin.
(b) Ovary; fallopian tube; ureter; uterus.
(c) Seminiferous tubule; ovum; epididymis; sperm duct; urethra.
(d) Sperm; implantation; fertilization; ovum; after birth.
(e) Relaxin; cervix dilates; amniotic sac ruptures; child birth; follicle.
(a) Testosterone (b) Ureter (c) Ovum (d) After birth (e) Follicle

Selina Biology Solution of Chapter-13, Reproductive System for ICSE Class 10

Page 182

Question 1
(a) State whether the following statements are TRUE (T) or FALSE (F):
(i) Fertilization occurs in the vagina. (T/F)
(ii) The uterus is also known as the birth canal. (T/F)
(iii) Nutrition and oxygen diffuse from the mother's blood into the foetus's blood through the amnion. (T/F)
(b) Rewrite any two of the wrong statements by correcting only one word either at the beginning or at the end of the sentence.

(a) (i) False (ii) False (iii) False
(b) (i) Fertilization occurs in the fallopian tube. (ii) The vagina is also known as the birth canal. (iii) Nutrition and oxygen diffuse from the mother's blood into the foetus's blood through the placenta.

Question 2
Complete the following table by writing the name of the structure or the function of the given structure:
1. Corpus luteum – ?
2. ? – produces male gametes in mass
3. Leydig cells – ?
4. ? – increases the force of uterine contractions
5. Umbilical cord – ?
6. Fallopian tube – ?

1. Corpus luteum – secretes progesterone and other hormones to prepare the uterine wall for the reception of the embryo.
2. Testes – produce male gametes in mass.
3. Leydig cells – produce the androgen testosterone, under the pulsatile control of pituitary luteinizing hormone (LH).
4. Oxytocin – increases the force of uterine contractions during child birth.
5. Umbilical cord – connects the placenta with the foetus.
6. Fallopian tube – the site of fertilization for the sperm and ovum.

Question 3
Given below are the names of certain stages/substances related to reproduction and found in the human body.
(a) Foetus
• Where is it contained?
• How does it differ from an embryo?
(b) Hyaluronidase
• Is it an enzyme or simply a protein?
• What is its function?
(c) Morula
• What is this stage?
• Name the stage which comes next to it.
(d) Amniotic fluid
• Where is it found?
• What are its functions?
(e) Placenta
• What are the two sources that form the placenta?
• Name any two main substances which pass from the foetus to the mother through the placenta.
• Name any two hormones it produces.
(f) Implantation
• The developmental stage that undergoes this process.
• The approximate time after fertilization when it occurs.

(a) Foetus:
• It is contained in the uterus.
• In a foetus, the limbs have appeared and it resembles a human, unlike the embryo, which is a growing or dividing zygote.
(b) Hyaluronidase:
• Enzyme.
• It is an enzyme secreted by the sperm that allows the sperm to penetrate the egg.
(c) Morula:
• It is the stage in the development of the human embryo which consists of a spherical mass of cells.
• Blastocyst.
(d) Amniotic fluid:
• Between the amnion and the embryo.
• It protects the embryo from physical damage, keeps the pressure uniform all around the embryo and prevents the foetus from sticking to the amnion.
(e) Placenta:
• The placenta is formed by two sets of minute finger-like processes called villi. One set of villi is from the uterine wall and the other set is from the allantois.
• Carbon dioxide and urea (nitrogenous waste).
• Progesterone and oestrogen.
(f) Implantation:
• Blastocyst.
• It occurs about 5-7 days after ovulation.

Question 4
Describe the functions of the following:
(a) Inguinal canal (b) Testis (c) Ovary (d) Oviduct

(a) Inguinal canal: It is the canal through which the testes, along with their ducts, blood vessels and nerves, descend from the abdomen into the scrotum.
(b) Testis: The testis is a male reproductive organ. A pair of testes is present in the scrotal sac, descended outside the body cavity. The testes produce sperms, which are the male gametes.
(c) Ovary: The ovary is a female reproductive organ. It produces ova, i.e. the female gametes.
(d) Oviduct: A pair of oviducts is present, one on either side of the uterus. The oviduct carries the released ovum from the ovary to the uterus.

Question 5
Differentiate between:
(a) Semen and sperm (b) Hymen and clitoris (c) Uterus and vagina (d) Efferent duct and sperm duct (e) Follicle and corpus luteum (f) Amnion and allantois (g) Prostate gland and Cowper's gland

(a) Semen is a milky white fluid which contains sperms and the secretions of the seminal vesicles. Sperms are the human male gametes, which are produced in the testes.
(b) The hymen is a thin membrane that partially covers the opening of the vagina in young females. The clitoris is a small erectile structure located in the uppermost angle of the vulva, in front of the urethral opening.
(c) The uterus is a hollow, pear-shaped muscular organ located in the pelvic cavity; it is the site of implantation of the embryo after fertilization. The vagina is the muscular tube extending from the cervix to the outside; it receives the male penis and provides entry for the sperms at the time of sexual intercourse.
(d) The efferent ducts join to form the epididymis. The epididymis continues by the side of the testis to give rise to the sperm duct, or vas deferens.
(e) A maturing egg contained in a cellular sac is called a follicle. The remnant of the ruptured follicle persists and gets converted into a yellow mass called the corpus luteum.
(f) The amnion is a sac which develops around the embryo before the formation of the allantois. The allantois is an extension from the embryo which forms the villi of the placenta.
(g) The prostate gland surrounds the urethra in males; its alkaline secretion neutralizes the acid in the female's vagina. Cowper's gland opens into the urethra in human males; its secretion serves as a lubricant.

Concise Biology Solution of Chapter-13, Reproductive System for ICSE Class 10th

Page 182-183

Question 1
Define the following terms:
(a) Reproduction (b) Hernia (c) Ovulation (d) Puberty (e) Fertilisation (f) Hymen

(a) Reproduction: Reproduction is the process of formation of new individuals by sexual or asexual means, which can repeat the process in their own turn.
(b) Hernia: A hernia is an abnormal condition caused when, due to pressure in the abdomen, the intestine bulges into the scrotum through the inguinal canal.
(c) Ovulation: Ovulation is the release of the mature ovum by the rupture of the Graafian follicle.
(d) Puberty: Puberty is the period during which the immature reproductive system in boys and girls matures and becomes capable of reproduction.
(e) Fertilisation: The fusion of the male gamete (sperm) and the female gamete (ovum) to form a zygote is called fertilisation.
(f) Hymen: The hymen is a thin membrane which partially covers the opening of the vagina in young females.

Question 2
Distinguish between the following pairs:
(a) Spermatogenesis and oogenesis (b) Implantation and gestation (c) Pregnancy and parturition (d) Placenta and umbilical cord (e) Identical and fraternal twins (f) Menarche and menopause

(a) Spermatogenesis is the process of production of sperms in the seminiferous tubules of the testes. Oogenesis is the growth process in which an ovum becomes a mature egg.
(b) Implantation is the fixing of the blastocyst to the endometrial lining of the uterus, i.e. the wall of the endometrium. Gestation is the time period of development of the embryo in the uterus.
(c) Pregnancy is the state of carrying a developing embryo or foetus within the female body. Parturition is the process of giving birth to the young one at the end of the gestation period.
(d) The placenta is a disc-like structure attached to the uterine wall. The umbilical cord is a cord containing blood vessels which connects the placenta with the foetus.
(e) Identical twins are produced from one ovum, i.e. one developing zygote splits and grows into two foetuses. Fraternal twins are produced when two ova get fertilized at the same time.
(f) Menarche is the onset of menstruation in a young female, at about the age of 13 years. Menopause is the permanent stoppage of menstruation in females, at about the age of 45 years.

Question 3
What is the significance of the testes being located in the scrotal sacs outside the abdomen? Can there be any abnormal situation regarding their location? If so, what is it, and what harm is caused by it?

The testes are responsible for the production of the male gametes, i.e. sperms. The normal body temperature does not allow the maturation of the sperms.
Being suspended outside the body cavity, the temperature in the scrotal sac is 2 to 3 °C lower than the body temperature, which is the temperature suitable for the maturation of the sperms. When it is too hot, the skin of the scrotum loosens so that the testes hang down away from the body. When it is too cold, the skin contracts in a folded manner and draws the testes closer to the body for warmth. In an abnormal condition, the testes fail to descend into the scrotum during the embryonic stage. This can lead to sterility, i.e. the incapability to produce sperms.

Question 4
Suppose a normal woman has never borne a child. How many mature eggs would she have produced in her lifetime? Your calculation should be based on two clues:
(a) Eggs are produced at the rate of 1 egg every 28 days (one menstrual cycle).
(b) A woman's total reproductive period is 13-45 years.

Total reproductive period = 45 − 13 = 32 years. Taking roughly 12 cycles a year, total eggs produced = 32 × 12 = 384 eggs, approximately.

Question 5
What are the secondary sexual characters in the human male and female respectively?

Secondary sexual characters in males: (i) beard and moustache, (ii) stronger muscular build, (iii) deeper voice.
Secondary sexual characters in females: (i) breasts, (ii) large hips, (iii) high-pitched voice.

Question 6
What are the accessory reproductive organs?

The accessory reproductive organs include all those structures which help in the transfer and meeting of the two kinds of sex cells, leading to fertilization, and in the growth and development of the egg up to the birth of the baby. For example: the uterus in females, the penis in males.

Question 7
Differentiate between the primary and accessory reproductive organs.

The primary reproductive organs produce the sex cells; the accessory reproductive organs help in the transfer and meeting of the two kinds of sex cells, leading to fertilization. The primary reproductive organs do not help in the development of the baby; the accessory organs help in the growth and development of the egg up to the birth of the baby. Examples: primary - testes in males and ovaries in females; accessory - penis in males, uterus and vagina in females.

Question 8
Name and describe very briefly the stages in the development of the human embryo.

(a) After fertilization, the zygote is formed inside the fallopian tube.
(b) The zygote then divides repeatedly to form a spherical mass of cells known as the morula.
(c) The morula then develops into a hollow sphere of cells with a surrounding cellular layer and an inner cell mass projecting from it centrally. This stage is known as the blastocyst. It implants itself into the uterine wall.
(d) From the blastocyst arises an embryo, which at around 3 weeks old is a tiny organism that hardly resembles a human being.
(e) By the end of 5 weeks, the embryo has a developed heart and blood vessels.
(f) By the end of 8 weeks, the limbs are developed. This stage is known as the foetus.
(g) At the end of nearly 40 weeks, i.e. the end of the gestation period, the infant is born.

Question 9
Is it correct to say that the testes produce testosterone? Discuss.

Testosterone is the male reproductive hormone produced by the interstitial cells, or Leydig cells. These cells are located in the testes, where they serve as a packing tissue between the coils of the seminiferous tubules. Therefore, it can be said that the testes produce the male hormone testosterone.

Chapter-13 Reproductive System Selina Biology solution for ICSE Class 10th
E. STRUCTURED/APPLICATION/SKILL TYPE

Page 183-184

Question 1
Given below is a diagram of two systems together in the human body.
(a) Name the systems.
(b) Name the parts numbered 1-10.
(c) Describe the functions of the parts 3, 4, 5 and 6.
(d) What will happen if part 3 on both sides gets blocked?

(a) The excretory system and the female reproductive system.
(b) 1 – Kidney, 2 – Ureter, 3 – Fallopian tube, 4 – Infundibulum, 5 – Ovary, 6 – Uterus, 7 – Urinary bladder, 8 – Cervix, 9 – Vagina, 10 – Vulva
(c) (Part 3) Fallopian tube: the fallopian tubes carry the ovum released from the ovary to the uterus.
(Part 4) Infundibulum: the infundibulum is the funnel-shaped distal end of the fallopian tube which picks up the released ovum and pushes it onwards into the fallopian tube.
(Part 5) Ovary: the ovary produces the female gametes, i.e. ova.
(Part 6) Uterus: the uterus allows the growth and development of the embryo.
(d) If the fallopian tube (part 3) on both sides gets blocked, the ovum released by the ovary will not be pushed into the oviduct and hence there will be no possibility of fertilisation.

Question 2
The following diagram represents the vertical sectional view of the human female reproductive system.
(a) Label the parts indicated by the guidelines 1 to 8.
(b) How does the uterus prepare for the reception of the zygote?
(c) What happens to the uterus if fertilization fails to take place?

(a) 1 – Fallopian tube, 2 – Infundibulum, 3 – Ureter, 4 – Vagina, 5 – Ovary, 6 – Uterus, 7 – Urinary bladder, 8 – Urethra
(b) The corpus luteum secretes progesterone and oestrogen. These hormones stimulate the thickening of the endometrial wall of the uterus. The uterine wall becomes thickened and is supplied with a lot of blood to receive the fertilized egg.
(c) If fertilization fails to take place, the endometrial lining of the uterus starts shedding on the 28th day of the menstrual cycle. Finally it is discharged out, along with the unfertilised ovum, as the menstrual flow.

Question 3
Given below is the schematic diagram of the sectional view of the human male reproductive system.
(a) Name the parts numbered 1-11.
(b) State the functions of the parts numbered 1, 2, 3, 5, 8 and 11.

(a) 1 – Seminal vesicle, 2 – Prostate gland, 3 – Bulbo-urethral gland, 4 – Epididymis, 5 – Testis, 6 – Scrotum, 8 – Vas deferens, 9 – Erectile tissue, 10 – Penis, 11 – Urethra
(b) Seminal vesicles: they produce the fluid which serves as the transporting medium for the sperms.
Prostate gland: it produces an alkaline secretion which mixes with the semen and helps neutralise the vaginal acids.
Bulbo-urethral gland: it produces a secretion which serves as a lubricant for the semen to pass through the urethra.
Testis: it produces the male gamete, the sperm, and the male sex hormone, testosterone.
Vas deferens: it carries the sperms from the epididymis to the urethra.
Urethra: it serves as an outlet for delivering the sperms into the vagina.

Question 4
The diagram below represents two reproductive cells A and B. Study the same and then answer the questions that follow:
(a) Identify the reproductive cells A and B.
(b) Name the specific part of the reproductive system where the above cells are produced.
(c) Where in the female reproductive system do these cells unite?
(d) Name the main hormone secreted by (1) the ovary and (2) the testes.
(e) Name an accessory gland found in the male reproductive system and state its secretion.

(a) A – ovum; B – sperm
(b) Sperms are produced in the testis. The ovum is produced in the ovary.
(c) The reproductive cells unite in the fallopian tubes of the female reproductive system.
(d) Ovary – oestrogen and progesterone; testis – testosterone.
(e) Accessory glands:
• Seminal vesicle – seminal fluid
• Prostate gland – alkaline secretion
• Bulbo-urethral gland – lubricant

Question 5
The diagram given below is that of a developing human foetus in the womb. Study the same and answer the questions that follow:
(a) Name the parts '1' to '5' indicated by guidelines.
(b) What term is given to the period of development of the foetus in the womb?
(c) How many days does the foetus take to be fully developed?
(d) Mention two functions of the part labelled '2' other than its endocrine functions.
(e) Name (any one) hormone produced by the part labelled '2'.

(a) 1 – umbilical cord, 2 – placenta, 3 – amnion, 4 – mouth of the uterus, 5 – muscular wall of the uterus
(b) Gestation
(c) 280 days
(d) The placenta provides the foetus with oxygen and nutrients. In addition, the placenta also removes carbon dioxide and the waste products of the foetus.
(e) Progesterone

Question 6
Given below is a portion of the diagram to show the diagrammatic, highly magnified view of a single human sperm. Complete the diagram to show its internal structure.

Question 7
Given below is the outline of the male reproductive system. Name the parts labelled 1 to 8 and state their functions. Also name the corresponding structure of part (4) in the female reproductive system.

1. Urinary bladder: stores urine.
2. Ureter: carries urine from the kidney to the urinary bladder.
3. Bulbo-urethral glands: their secretion serves as a lubricant.
4. Sperm duct (vas deferens): allows the transit of sperms from the testes towards the urethra and allows the maturation of sperm cells.
5. Urethra: carries urine from the bladder to the outside of the body; also carries semen during ejaculation when the male reaches orgasm.
6. Testis: production of sperms.
7. Scrotum: protects the testes and acts as a climate-control system for the testes.
8. Epididymis: stores sperms and allows their maturation before release.

The fallopian tubes (oviducts) in females are the structures corresponding to the sperm ducts in males: sperm ducts carry sperms to the urethra, while fallopian tubes carry ova to the uterus.
The series $\sum_{n=1}^\infty {2n\brace n}^{-{2n\brace n}}$ and $\sum_{n=1}^\infty (2n)_{n}^{-(2n)_{n}}$ in the context of normal numbers

On this occasion we consider the following series, which involve the Stirling numbers of the second kind $${n\brace k}$$ and the Pochhammer symbols $$(n)_k$$. I know, informally, that the literature has explored examples related to the definition of irrational absolutely abnormal numbers (see for example [1]). This is the Wikipedia article dedicated to Normal number. I wondered, as a curiosity, whether in the context of these definitions and notions concerning normal numbers it is possible to propose some statement or conjecture about the series $$\sum_{n=1}^\infty\frac{1}{{2n\brace n}^{{2n\brace n}}} \tag{1}$$ or $$\sum_{n=1}^\infty \frac{1}{(2n)_{n}^{(2n)_{n}}}.\tag{2}$$

Question. Provide heuristics or reasoning, or state a proposition or conjecture, concerning the series $$(1)$$ or $$(2)$$ in the context of normal numbers. Many thanks.

I hope that my series and question have good mathematical content and make good sense in the context of the theory of number numbers.

References:
[1] Glyn Harman, One Hundred Years of Normal Numbers, proceedings from Number Theory for the Millennium II, A K Peters (2002).

• "in the context of number numbers"??? Sep 6, 2019 at 13:03
• That was a typo, many thanks. I hope that my question has good mathematical content and the series can be studied @GerryMyerson Sep 6, 2019 at 13:54
• It is well known that (Lebesgue-)almost all real numbers are absolutely normal. So if there is no reason speaking against it, then probably your numbers are absolutely normal. Can you give any reason why you consider these particular numbers and ask about their normality? Sep 9, 2019 at 6:33
• I know the first claim in your comment. I believe that the only reason to study numbers that aren't absolutely normal is the purpose of exhibiting a specific example. My intention was to edit my post, since I created the two series after learning of an example studied in the last paragraph of reference [1]. My belief is that one can deduce some statement about the reals in my post in the context of normal numbers. Many thanks for your attention @KurisutoAsutora Sep 9, 2019 at 6:56
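Not an answer, but to make the shape of series (1) concrete: its terms collapse at a doubly exponential rate, which is exactly the kind of gap structure that makes digit statistics hard to control. A small R sketch using the standard recurrence ${n\brace k} = k{n-1\brace k} + {n-1\brace k-1}$:

```r
# S(n,k), Stirling number of the second kind, via the standard recurrence
# S(n,k) = k*S(n-1,k) + S(n-1,k-1), with S(0,0) = 1.
stirling2 <- function(n, k) {
  S <- matrix(0, n + 1, k + 1)   # S[i+1, j+1] holds S(i, j)
  S[1, 1] <- 1
  for (i in 1:n)
    for (j in 1:min(i, k))
      S[i + 1, j + 1] <- j * S[i, j + 1] + S[i, j]
  S[n + 1, k + 1]
}

term_vals <- sapply(1:4, function(n) { s <- stirling2(2 * n, n); s^(-s) })
term_vals          # 1, 7^-7 ~ 1.2e-6, 90^-90 ~ 1.2e-176, then underflow to 0
cumsum(term_vals)  # partial sums of series (1)
```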
While it is very likely both numbers are absolutely normal, simply by appealing to the idea that there is no obvious reason why they should be abnormal, current proof techniques are very far from being able to prove the normality of such numbers in a given base, let alone in all bases simultaneously. The closest thing I can think of to your example are the variants of the Korobov-Stoneham construction, which generally look like $$\sum_{i=1}^\infty \frac{1}{q_i b^{n_i}},$$ for positive integers $$q_i,n_i$$ with $$q_i$$ and $$n_i$$ increasing. Under the right conditions, such numbers are known to be normal in base $$b$$. For example, $$\sum_{i=1}^\infty \frac{1}{3^i 2^{5^i}}$$ is normal in base $$2$$. One generally chooses the $$q_i$$ in such a way that the periods of the rational numbers $$r_K=\sum_{i=1}^K \frac{1}{q_i b^{n_i}}$$ see most short strings with almost the correct frequency (in other words, the period is $$(\epsilon,k)$$-normal for some small $$\epsilon$$ and large $$k$$, in the terminology of Besicovitch). This is usually achieved by demanding that the prime factors of the $$q_i$$ belong to a finite set.

However, in order to show that the full series results in a normal number, the $$n_i$$ must be chosen to grow so swiftly that the behavior of the period of $$r_K$$ can become the dominant contribution to the full series before the next term in the series begins to alter the digits, and for some time afterwards too. In particular, $$n_i$$ usually has to be at least of the order of magnitude of $$q_i$$. As such, I don't see a way to nicely fit the numbers you have shown into such a framework.

• Many thanks, I'm going to study your answer; of course I accept your words and those of the other user in the previous comment. If in the next few weeks there are no more answers, I will accept your post as the definitive answer. Sep 11, 2019 at 9:22
• I'm going to accept your great answer, convinced that it isn't currently possible to do better. Sep 19, 2019 at 12:41
I started working on this question after it was posted to MathOverflow and found bounds similar to those found by Justin Gilmer: upper asymptotic density of the happy numbers 0.1962 or greater, lower asymptotic density no more than 0.1217. However, I was also able to prove that the upper asymptotic density of the happy numbers was no more than 0.38; Gilmer mentioned in his paper that the question of whether the upper asymptotic density was less than 1 was still open. A writeup of the result is at http://djm.cc/dmoews/happy.zip.

The method used to find an upper bound on the upper asymptotic density was to start with a random number with decimal expansion $??\dots{}??\hbox{#}\hbox{#}\dots{}\hbox{#}\hbox{#}$, where the digits # are independent and uniformly distributed, and the digits ? are arbitrarily distributed and may depend on each other, but are independent of the #s. Then if there are $n$ #s, asymptotic normality implies that after applying $s$, we get a mixture of translates of a distribution which is approximately normal, with mean $28.5n$ and standard deviation proportional to $\sqrt{n}$. If $10^{n'}/\sqrt{n}$ is sufficiently small, each translate of this normal distribution will have its last $n'$ digits approximately uniformly distributed, so we get a random number which can be approximated by the same form of decimal expansion we started with, $??\dots{}??\hbox{#}\hbox{#}\dots{}\hbox{#}\hbox{#}$, where now there are $n'$ digits #. Repeating this eventually brings us to numbers small enough to fit on a computer.

The method used to find the bounds similar to Gilmer's was to start with a random number of the form $dd\dots{}dd??\dots{}??\hbox{#}\hbox{#}\dots{}\hbox{#}\hbox{#}$, where the ?s and #s are as before, the $d$s are fixed digits, and there are approximately the same number of $d$s and #s, but very few ?s. Then if the parameters are appropriately chosen, we can show that after applying $s$, we again get a random number which can be approximated by the same form of decimal expansion, $dd\dots{}dd??\dots{}??\hbox{#}\hbox{#}\dots{}\hbox{#}\hbox{#}$, and repeat this step until the number is small.
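As a quick sanity check on the mean $28.5n$ (a worked step added for the reader, using only the definitions above): $s$ squares and sums the decimal digits, so if each of the $n$ digits # is uniform on $\{0,1,\dots,9\}$, a single such digit $D$ satisfies

$$\mathbb{E}[D^2] = \frac{1}{10}\sum_{d=0}^{9} d^2 = \frac{285}{10} = 28.5,$$

and the $n$ independent digits therefore contribute a mean of $28.5n$ and a standard deviation of order $\sqrt{n}$ to $s$.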
{}
## Essential University Physics: Volume 1 (3rd Edition) We first find the moment of inertia of the dog, treating it as a point mass of $m = 17\ \mathrm{kg}$ at radius $R = 1.81\ \mathrm{m}$: $I_{dog}=mR^2 = 17 \times (1.81^2)=55.7\ \mathrm{kg\,m^2}$ Using this and calling $p$ the percentage, we find: $p = (1+ \frac{55.7}{95})^{-1}\times 100=\fbox{63 percent}$
{}
# The converse of the axiom of extensionality

I read that the converse of $\forall a \forall b (\forall c (c \in a \leftrightarrow c \in b) \rightarrow a = b)$ follows from the substitution property of equality. Therefore I did the following, but I am quite sure this is not right. I would greatly appreciate it if someone could point me to how to apply the substitution property. What I tried to do was:

$\phi = \forall c (c \in a \leftrightarrow c \in a)$

Now substituting some occurrences of the unbound $a$ with the unbound $b$:

$\phi' = \forall c (c \in a \leftrightarrow c \in b)$

Using the substitution property:

$\forall a \forall b (a = b \rightarrow (\phi \rightarrow \phi') )$

$\forall a \forall b (a = b \rightarrow (\forall c (c \in a \leftrightarrow c \in a) \rightarrow \forall c (c \in a \leftrightarrow c \in b)) )$

$\forall a \forall b (a = b \rightarrow (\forall c \top \rightarrow \forall c (c \in a \leftrightarrow c \in b)) )$

$\forall a \forall b (a = b \rightarrow (\top \rightarrow \forall c (c \in a \leftrightarrow c \in b)) )$

$\forall a \forall b (a = b \rightarrow \forall c (c \in a \leftrightarrow c \in b))$

This is the expected converse. But does the procedure make any sense?
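For what it's worth, the substitution step is exactly what rewriting along an equality does in a proof assistant. A minimal Lean 4 sketch (my own illustration, modelling sets as membership predicates so that $c \in a$ becomes `a c`):

```lean
-- "Sets" as predicates: `a c` plays the role of `c ∈ a`.
-- The converse of extensionality is pure substitution: using
-- h : a = b turns the goal `a c ↔ b c` into the trivial `a c ↔ a c`.
example {α : Type} (a b : α → Prop) (h : a = b) :
    ∀ c, a c ↔ b c := by
  intro c
  subst h        -- the substitution property of equality
  exact Iff.rfl  -- c ∈ a ↔ c ∈ a
```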
{}
# How to define a UDL Delimiter with spaces • Hi All, I believe this is similar to the topic at https://notepad-plus-plus.org/community/topic/12991/language-definition-space-in-comment-definition, although perhaps sufficiently different to require a separate post. I have been using the UDL for my own nefarious purposes: not for syntax highlighting, but rather to highlight event entries in large log files. This, by and large, has worked OK. I am using a Delimiter Open event such as `Level="ERROR"`, which could appear in the middle of a log entry, and a Close of `((EOL))`. However, I now wish to highlight a single entry that has spaces within it, such that the Open delimiter would be `Message="ICE failed event"`. Of course, this fails as UDL is taking the spaces to mean separate delimiters. I have tried encasing the term in single quotes at various points, but can't seem to get this to work. The term will also actually appear in a line that has had a style applied due to a `Level="INFO"` delimiter, so I will need to nest one inside the other. An example log line would be:

```
2016-12-28T04:20:39.039+00:00 node1 2016-12-28 04:20:39,039 Level="INFO" Name="support.ice" Message="ICE failed event" Media-type="audio" Stream-id="0" Component-id="RTP" Call-id="237a5bea59f9479f828330183a34a3b4"
```

Regards, • I take it there is no way to achieve this as yet in Notepad++? • @swinster what about using operators 2 with something like `"ICE failed event"`? It doesn't color `Message=` though. Cheers, Claudia • operators & delimiters, delimiter 1 style: open: `Message="` close: `"`
{}
## Weyl Group Symmetric Functions and the Representation Theory of Lie Algebras

Last update: 11 September 2013

## Weyl group symmetric functions

Each of the simple root systems is determined by a Cartan matrix $C$. A list of the Cartan matrices for simple root systems can be found in [Bou1981] p. 250-258. We shall denote the $(i,j)$ entry of the Cartan matrix by $\langle \alpha_i, \alpha_j \rangle$ so that

$$C = \big( \langle \alpha_i, \alpha_j \rangle \big).$$

Let $n$ be the dimension of the Cartan matrix. Let $\omega_1, \omega_2, \dots, \omega_n$ be basis vectors in a vector space. Define

$$\mathfrak{h}^* = \sum_i \mathbb{C}\,\omega_i, \qquad P = \sum_i \mathbb{Z}\,\omega_i, \qquad P^+ = \sum_i \mathbb{N}\,\omega_i,$$

where $\mathbb{N}$ denotes the nonnegative integers. The elements of $\mathfrak{h}^*$, $P$, and $P^+$ are called the weights, the integral weights, and the dominant integral weights, respectively. The $\omega_i$ are called the fundamental weights. We have the following sequence of inclusions:

$$P^+ \subseteq P \subseteq \mathfrak{h}^*. \qquad (2.1)$$

Let $\gamma = \sum_i \gamma_i \omega_i$ be an element of $P$. We shall use the notation $\langle \gamma, \alpha_i \rangle$ for the integer $\gamma_i$. The simple roots $\alpha_i$ are given in terms of the entries of the Cartan matrix,

$$\alpha_i = \sum_j \langle \alpha_i, \alpha_j \rangle\, \omega_j.$$

There is a partial ordering on the weight lattice given by

$$\gamma \ge \kappa \quad \text{if} \quad \kappa = \gamma - \sum_i k_i \alpha_i, \qquad (2.2)$$

for nonnegative integers $k_i$. We say that $\gamma \ge \kappa$ in dominance. Define linear operators $s_i \colon P \to P$ by

$$s_i \gamma = \gamma - \langle \gamma, \alpha_i \rangle\, \alpha_i.$$

The Weyl group is the group generated by the $s_i$: $W = \langle s_1, s_2, \dots, s_n \rangle$. The sign of an element $w \in W$ is $\varepsilon(w) = (-1)^p$, where $p$ is the smallest nonnegative integer such that there exists an expression $s_{i_1} s_{i_2} \cdots s_{i_p} = w$.

We will need the following proposition (2.3); see [Bou1981] Ch. 6, §1 Thm. 2.

(a) Every Weyl group orbit $W\gamma$, $\gamma \in P^+$, contains a unique element in $P^+$.

(b) If $\lambda, \mu \in P^+$ and $\rho = \sum_i \omega_i$ then, for $v, w \in W$,

$$w(\lambda + \rho) = v(\mu + \rho) \iff v = w.$$

$\alpha \in P$ is a root if $\alpha = w\alpha_i$ for some $w \in W$ and simple root $\alpha_i$. Let $\Phi$ be the set of roots and let $\Phi^+ = \{\alpha \in \Phi \mid \alpha > 0\}$ and $\Phi^- = \{\alpha \in \Phi \mid \alpha < 0\}$, where the ordering is as in (2.2). It is true that $\Phi = \Phi^+ \cup \Phi^-$. The elements of $\Phi^+$ and $\Phi^-$ are called positive and negative roots respectively. The raising operator $R_\alpha$ associated to a positive root $\alpha$ is the operator which acts on elements of $P$ by

$$R_\alpha \gamma = \gamma + \alpha. \qquad (2.4)$$

Corresponding to each $\lambda \in P$ we write, formally, $e^\lambda$ so that

$$e^\lambda e^\mu = e^{\lambda + \mu}.$$

In particular if $\lambda = \sum_i \lambda_i \omega_i$ then

$$e^\lambda = e^{\lambda_1 \omega_1} e^{\lambda_2 \omega_2} \cdots e^{\lambda_n \omega_n} = (e^{\omega_1})^{\lambda_1} (e^{\omega_2})^{\lambda_2} \cdots (e^{\omega_n})^{\lambda_n}. \qquad (2.5)$$

(If one finds this "exponential" notation unsettling one can substitute $z_i$ for $e^{\omega_i}$ and write $z^\lambda = z_1^{\lambda_1} z_2^{\lambda_2} \cdots z_n^{\lambda_n}$ instead of $e^\lambda$.)

Define an action of the Weyl group by

$$w e^\lambda = e^{w\lambda}, \qquad (2.6)$$

for each $w \in W$ and $\lambda \in P$. Define

$$A^W = \mathbb{Z}\big[ e^{\omega_1}, e^{-\omega_1}, \dots, e^{\omega_n}, e^{-\omega_n} \big]^W.$$

### Bases of $A^W$

For each $\lambda \in P^+$ define the orbit sum, or monomial symmetric function, by

$$m_\lambda = \sum_{\nu \in W\lambda} e^\nu. \qquad (2.7)$$

For each $\lambda \in P^+$ define the Weyl character by

$$\chi^\lambda = \frac{\displaystyle\sum_{w \in W} \varepsilon(w)\, e^{w(\lambda + \rho)}}{\displaystyle\sum_{w \in W} \varepsilon(w)\, e^{w\rho}}, \qquad (2.8)$$

where $\rho = \sum_i \omega_i$. The elementary, or fundamental, symmetric functions are given by defining

$$e_0 = 1, \qquad e_r = \chi^{\omega_r},$$

for each positive integer $r$, and

$$e_\lambda = e_1^{\lambda_1} e_2^{\lambda_2} \cdots e_n^{\lambda_n}, \qquad (2.9)$$

for all elements $\lambda = \sum_i \lambda_i \omega_i$ in $P^+$. Define integers $K_{\lambda\mu}$ by the identity

$$\chi^\lambda = \sum_{\mu \in P^+} K_{\lambda\mu}\, m_\mu. \qquad (2.10)$$

It is true that

(a) the $K_{\lambda\mu}$ are nonnegative integers;
(b) $K_{\lambda\lambda} = 1$ for all $\lambda \in P^+$;
(c) $K_{\lambda\mu} = 0$ if $\mu \nleq \lambda$.

All of these facts follow from representation theory, see §3 (3.5). I know of no easy way to prove these results without using representation theory.

Each of the sets

$$\{m_\lambda\}_{\lambda \in P^+}, \qquad \{\chi^\lambda\}_{\lambda \in P^+}, \qquad \{e_\lambda\}_{\lambda \in P^+},$$

forms a $\mathbb{Z}$-basis of $A^W$. The fact that the $m_\lambda$ form a $\mathbb{Z}$-basis of $A^W$ follows immediately from (2.3). One can show by elementary techniques and without using representation theory, see [Bou1981] Ch. 6, §3, that the $\chi^\lambda$, $\lambda \in P^+$, form a $\mathbb{Z}$-basis of $A^W$. This fact also follows from the facts about the numbers $K_{\lambda\mu}$ above. Assuming that the $\chi^\lambda$ form a $\mathbb{Z}$-basis of $A^W$, it follows that the $e_\lambda$, $\lambda \in P^+$, form a $\mathbb{Z}$-basis of $A^W$ simply by expanding $e_\lambda$ in terms of $e^\mu$, $\mu \in P^+$. One can also obtain this result in a different fashion by using representation theory.

### Inner product

Let

$$d = \sum_{w \in W} \varepsilon(w)\, e^{w\rho},$$

where $\rho = \sum_i \omega_i$. If $f = \sum_{\nu \in P} f_\nu e^\nu$ then define $\overline{f} = \sum_\nu f_\nu e^{-\nu}$. Let $[f]_1$ denote taking the coefficient of the identity, $e^0$, in $f$. Then define

$$\langle f, g \rangle = \frac{1}{|W|} \big[\, f d\, \overline{g}\, \overline{d}\, \big]_1,$$

where $|W|$ is the order of the Weyl group.

([Mac1991]) The inner product defined above satisfies

$$\langle \chi^\lambda, \chi^\mu \rangle = \delta_{\lambda\mu}.$$

Proof. Since $\chi^\lambda = d^{-1} \sum_{w \in W} \varepsilon(w) e^{w(\lambda+\rho)}$,

$$\langle \chi^\lambda, \chi^\mu \rangle = \frac{1}{|W|} \sum_{v,w \in W} \varepsilon(v)\varepsilon(w) \big[ e^{v(\lambda+\rho)} e^{-w(\mu+\rho)} \big]_1.$$

This is zero if $\lambda \ne \mu$ because, by (2.3), the orbits $W(\lambda+\rho)$ and $W(\mu+\rho)$ do not intersect. If $\lambda = \mu$, then $v(\lambda+\rho) = w(\lambda+\rho) \iff v = w$. Thus $\langle \chi^\lambda, \chi^\lambda \rangle = \frac{1}{|W|} \sum_{w \in W} 1 = 1$. $\square$

If one prefers, one may simply define the inner product by making the Weyl characters orthonormal.

### Homogeneous symmetric functions

Let $\kappa \in P^+$ and define

$$\Gamma_\kappa = \{ \mu \in P^+ \mid \mu \le \kappa \}, \qquad \text{and} \qquad \Lambda_\kappa = \operatorname{span}\{ \chi^\mu \mid \mu \in \Gamma_\kappa \}.$$

Since $\Gamma_\kappa$ is always finite, $\Lambda_\kappa$ is always finite dimensional. Define an inner product on $\Lambda_\kappa$ by defining

$$\langle \chi^\lambda, \chi^\mu \rangle = \delta_{\lambda\mu}$$

for all $\lambda, \mu \in \Gamma_\kappa$. Then define the homogeneous symmetric functions $h_\lambda$, $\lambda \in \Gamma_\kappa$, to be the dual basis to the monomial symmetric functions,

$$\langle h_\lambda, m_\mu \rangle = \delta_{\lambda\mu}. \qquad (2.12)$$

Using the integers $K_{\lambda\mu}$ defined in (2.10), the homogeneous symmetric functions are given in terms of the Weyl characters by

$$h_\mu = \sum_{\lambda \in \Gamma_\kappa} \chi^\lambda K_{\lambda\mu}, \qquad (2.13)$$

for all $\mu \in \Gamma_\kappa$. The $h_\mu$, $\mu \in \Gamma_\kappa$, form a basis of $\Lambda_\kappa$.

Each of the sets

$$\{m_\lambda\}_{\lambda \in \Gamma_\kappa}, \qquad \{\chi^\lambda\}_{\lambda \in \Gamma_\kappa}, \qquad \{e_\lambda\}_{\lambda \in \Gamma_\kappa}, \qquad \{h_\lambda\}_{\lambda \in \Gamma_\kappa},$$

forms a $\mathbb{Z}$-basis of $\Lambda_\kappa$. To see this, choose some total ordering of the elements of $\Gamma_\kappa$ which is a refinement of the dominance partial order. Then, by (2.10a-c), the matrix with rows and columns indexed by elements of $\Gamma_\kappa$, having $K_{\lambda\nu}$ as the $(\lambda,\nu)$ entry, is upper unitriangular with nonnegative integer entries. This implies that it is invertible as a matrix with integer entries. The fact that the $\chi^\mu$ are a basis of $\Lambda_\kappa$ is by definition. The other two statements now follow from (2.10) and (2.13).

### "Jacobi-Trudi" formulas

Fix $\kappa \in P^+$. Define $h_{w\lambda} = h_\lambda$ for all $\lambda \in \Gamma_\kappa$ and all $w \in W$, so that $h_\lambda$ is defined for all $\lambda \in W\Gamma_\kappa$. One has the following "Jacobi-Trudi" type identity for the Weyl characters in terms of the $h_\lambda$.

Let $\rho = \sum_i \omega_i$. Then for each $\lambda \in \Gamma_\kappa$,

$$\chi^\lambda = \sum_{w \in W} \varepsilon(w)\, h_{\lambda + \rho - w\rho}.$$

Proof. We show that the elements $\sum_{w \in W} \varepsilon(w) h_{\lambda+\rho-w\rho}$ are the dual basis to the basis $\chi^\mu$, $\mu \in \Gamma_\kappa$. We have

$$\chi^\lambda = \sum_{\mu \in W\Gamma_\kappa} \langle \chi^\lambda, h_\mu \rangle\, e^\mu.$$

Expanding $\chi^\lambda$ by (2.8) and clearing denominators we have that

$$\sum_{w \in W} \varepsilon(w)\, e^{w(\lambda+\rho)-\rho} = \left( \sum_{w \in W} \varepsilon(w)\, e^{w\rho - \rho} \right) \left( \sum_{\mu \in W\Gamma_\kappa} \langle \chi^\lambda, h_\mu \rangle\, e^\mu \right) = \sum_{\mu \in W\Gamma_\kappa} \sum_{w \in W} \varepsilon(w)\, \langle \chi^\lambda, h_\mu \rangle\, e^{\mu + w\rho - \rho}.$$

Substitute $\gamma = \mu + w\rho - \rho$ to get

$$\sum_{w \in W} \varepsilon(w)\, e^{w(\lambda+\rho)-\rho} = \sum_{\gamma} \left\langle \chi^\lambda, \sum_{w \in W} \varepsilon(w)\, h_{\gamma + \rho - w\rho} \right\rangle e^\gamma.$$

Compare coefficients of $e^\gamma$ for $\gamma \in P^+$ on each side of this equation. Since $\lambda \in P^+$, we know by (2.3) that $w(\lambda+\rho)-\rho$ is not an element of $P^+$ for any $w \in W$ except the identity. Thus we know that if $\mu \in P^+$ then

$$\left\langle \chi^\lambda, \sum_{w \in W} \varepsilon(w)\, h_{\mu + \rho - w\rho} \right\rangle = \delta_{\lambda\mu}. \qquad \square$$

Recall that raising operators act on elements of $P$. We allow the raising operators to act upon the $h_\lambda$ by defining

$$R(h_\lambda) = h_{R(\lambda)},$$

for each sequence $R = R_{\beta_1} R_{\beta_2} \cdots R_{\beta_k}$. We use the convention that $h_\lambda = 0$ if $\lambda \notin W\Gamma_\kappa$. (Note: it is important to keep in mind that raising operators act on elements of $P$ and not on symmetric functions.)

For all $\lambda \in \Gamma_\kappa$,

$$\chi^\lambda = \prod_{\alpha > 0} (1 - R_\alpha)\, h_\lambda.$$

Proof. A sketch of the proof is as follows. Evaluating the right hand side of the above we get

$$\prod_{\alpha > 0} (1 - R_\alpha)\, h_{\lambda + \rho - \rho} = \sum_{E \subseteq \Phi^+} (-1)^{|E|}\, h_{\lambda + \rho + (-\rho + \sigma_E)},$$

where $\sigma_E = \sum_{\alpha \in E} \alpha$. An element $\gamma = \sum_i \gamma_i \omega_i$ in $P^+$ is called regular if $\gamma_i > 0$ for all $i$. The sets

$$\{ -\rho + \sigma_E \mid E \subseteq \Phi^+,\ -\rho + \sigma_E \ \text{regular} \} \qquad \text{and} \qquad \{ -w\rho \mid w \in W \}$$

are equal. This is proved by expressing $\rho$ in the form $\rho = \tfrac{1}{2}\sum_{\alpha \in \Phi^+} \alpha$ and using [Bou1981] Ch. 6, §1 Cor. 2. Under this bijection $(-1)^{|E|} = \varepsilon(w)$. The terms arising from the subsets $E$ for which $-\rho + \sigma_E$ is not regular cancel with each other. This can be shown by showing that $\prod_{\alpha>0}(1-R_\alpha)(-\rho)$ is skew-symmetric with respect to $W$ and that if $\gamma \in P^+$ is not regular then $\sum_{w \in W} \varepsilon(w)\, w\gamma = 0$. These arguments show that

$$\prod_{\alpha > 0} (1 - R_\alpha)\, h_\lambda = \sum_{w \in W} \varepsilon(w)\, h_{\lambda + \rho - w\rho}. \qquad \square$$

The proof of Cor. (2.15) was motivated by the proof of the Weyl denominator formula given in [Mac1991].

### Direct limits

The above definition defines an analogue of homogeneous symmetric functions for the spaces $\Lambda_\kappa$. One would like to say that in some sense the $h_\lambda$ are well defined on all of $A^W$. With this in mind we introduce the following. For each pair $\beta, \kappa \in P^+$ such that $\beta \le \kappa$, define a linear map $f_{\beta\kappa} \colon \Lambda_\kappa \to \Lambda_\beta$ by

$$\chi^\lambda \longmapsto \begin{cases} \chi^\lambda, & \text{if } \lambda \le \beta; \\ 0, & \text{if } \lambda \nleq \beta. \end{cases}$$

It is clear that

1) if $\beta \le \gamma \le \kappa$ then $f_{\beta\kappa} = f_{\beta\gamma} \circ f_{\gamma\kappa}$;
2) for each $\beta \in P^+$, $f_{\beta\beta}$ is the identity on $\Lambda_\beta$.

Thus $(\Lambda_\beta, f_{\beta\gamma})$ form an inverse system of vector spaces, see Bourbaki Theory of Sets I §7, and Bourbaki Algebra I §10. Define

$$\Lambda = \varprojlim\, (\Lambda_\beta, f_{\beta\gamma}).$$

Then the homogeneous symmetric function $h_\lambda$ is a well defined element of $\Lambda$ for all $\lambda \in P^+$ and is equal to

$$h_\mu = \sum_{\lambda \in P^+} \chi^\lambda K_{\lambda\mu}.$$

An alternate option is to view the homogeneous symmetric function as an element in the direct product of vector spaces

$$\prod_{\lambda \in P^+} \mathbb{Z}\chi^\lambda.$$

Depending on what one would like to compute, this can create problems with infinite sums. The direct limit approach allows one to control these problems by fixing an ordering on infinite sums.

## Notes and references

This is an excerpt of a paper entitled Weyl Group Symmetric Functions and the Representation Theory of Lie Algebras, written by Arun Ram.
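As a concrete check of the character formula (2.8) in the smallest case (an illustration appended to this excerpt, not part of the original): for the rank-one root system $A_1$, the Cartan matrix is $(2)$, so $\alpha_1 = 2\omega_1$, $W = \{1, s_1\}$, and $\rho = \omega_1$. For $\lambda = m\omega_1$, (2.8) reads

$$\chi^{m\omega_1} = \frac{e^{(m+1)\omega_1} - e^{-(m+1)\omega_1}}{e^{\omega_1} - e^{-\omega_1}} = e^{m\omega_1} + e^{(m-2)\omega_1} + \cdots + e^{-m\omega_1},$$

the character of the $(m+1)$-dimensional irreducible $\mathfrak{sl}_2$-module. Expanding in orbit sums gives $K_{\lambda\mu} = 1$ for every $\mu = (m-2k)\omega_1$ with $0 \le m-2k$, consistent with (2.10a-c).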
{}
# Repeated measures - random effects for logistic regression in R?

## Study design

504 individuals were all sampled 2 times: once before and once after a celebration. The goal is to investigate if this event (Celebration) as well as working with animals (SheepDog) have an influence on the probability that an individual gets infected by a parasite. (Out of 1008 observations only 22 are found to be infected.)

Variables

• dependent variable = "T_hydat" (infected or not) (most predictor variables are categorical)
• "Celebration" (yes/no)
• "sex" (m/f)
• "RelAge" (5 levels)
• "SheepDog" (yes/no)
• "Area" (geographical area = 4 levels)
• "InfectionPeriodT_hydat" (continuous --> number of days after deworming)
• "Urbanisation" (3 levels)

## Question 1:

1) Should I include the individual ID ("ID") as a random effect, as I sampled each individual 2 times? (Pseudoreplication?)

```
mod_fail <- glmer(
  T_hydat ~ Celebration + Sex + RelAge + SheepDog +
    InfectionPeriodT_hydat + Urbanisation + (1|ID),
  data = dat, family = binomial)
```

```
Warning messages:
1: In (function (fn, par, lower = rep.int(-Inf, n), upper = rep.int(Inf, :
  failure to converge in 10000 evaluations
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  Model failed to converge with max|grad| = 1.10808 (tol = 0.001, component 10)
3: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
  Model is nearly unidentifiable: large eigenvalue ratio - Rescale variables?
```

--> unfortunately this model fails to converge (is it a problem that ID = 504 levels with only 2 observations per level?) Convergence is achieved with glmmPQL(), but after dropping some insignificant predictor variables the model fails to converge again. What is the problem here? Could geeglm() be a solution?

In another attempt I ran the model only with "Area" (4 levels) as random effect (my expectation is that individuals in the same geographical area are suffering from the same parasite pressure etc.) and received the following p-values.

## My model in R:

```
mod_converges <- glmer(
  T_hydat ~ Celebration + Sex + RelAge + SheepDog +
    InfectionPeriodT_hydat + Urbanisation + (1|Area),
  data = dat, family = binomial)
```

## mod_converges output:

```
summary(mod_converges)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: binomial  ( logit )
Formula: T_hydat ~ Celebration + sex + SheepDog + RelAge + Urbanisation + InfectionPeriodT_hydat + (1 | Area)
   Data: dat

   AIC    BIC  logLik deviance df.resid
 203.0  262.0   -89.5    179.0      996

Scaled residuals:
   Min     1Q Median     3Q    Max
-0.461 -0.146 -0.088 -0.060 31.174

Random effects:
 Groups Name        Variance Std.Dev.
 Area   (Intercept) 0.314    0.561
Number of obs: 1008, groups:  Area, 4

Fixed effects:
                       Estimate Std. Error z value Pr(>|z|)
(Intercept)            -6.81086    1.96027   -3.47  0.00051 ***
Celebration1            1.36304    0.57049    2.39  0.01688 *
sexm                   -0.18064    0.49073   -0.37  0.71279
SheepDog1               2.02983    0.51232    3.96  7.4e-05 ***
RelAge2                 0.34815    1.18557    0.29  0.76902
RelAge3                 0.86344    1.05729    0.82  0.41412
RelAge4                -0.54501    1.43815   -0.38  0.70471
RelAge5                 0.85741    1.25895    0.68  0.49584
UrbanisationU           0.17939    0.78669    0.23  0.81962
UrbanisationV           0.01237    0.59374    0.02  0.98338
InfectionPeriodT_hydat  0.00324    0.01159    0.28  0.77985
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

This model converges with "Sample_ID" as a random effect; however, as stated by usεr11852, the variance of the random effect is quite high: 4.095497^2 = 16.8. And the std. error of Area5 is far too high (complete separation). Can I just remove data points from Area5 to overcome this problem?
# T_hydat

```
# Area    0   1
#   1   226   4
#   2   203   3
#   4   389  15
#   5   168   0   ## here is the problematic cell
```

```
Linear mixed-effects model fit by maximum likelihood
 Data: dat
  AIC BIC logLik
   NA  NA     NA

Random effects:
 Formula: ~1 | Sample_ID
        (Intercept)  Residual
StdDev:    4.095497 0.1588054

Variance function:
 Structure: fixed weights
 Formula: ~invwt
Fixed effects: T_hydat ~ Celebration + sex + SheepDog + YoungOld + Urbanisation + InfectionPeriodT_hydat + Area
                            Value  Std.Error  DF    t-value p-value
(Intercept)            -20.271630      1.888 502 -10.735869  0.0000
Celebration1             5.245428      0.285 502  18.381586  0.0000
sexm                    -0.102451      0.877 495  -0.116865  0.9070
SheepDog1                3.356856      0.879 495   3.817931  0.0002
YoungOldyoung            0.694322      1.050 495   0.661017  0.5089
UrbanisationU            0.660842      1.374 495   0.480990  0.6307
UrbanisationV            0.494653      1.050 495   0.470915  0.6379
InfectionPeriodT_hydat   0.059830      0.007 502   8.587736  0.0000
Area2                   -1.187005      1.273 495  -0.932576  0.3515
Area4                   -0.700612      0.973 495  -0.720133  0.4718
Area5                  -23.436977  28791.059 495  -0.000814  0.9994
 Correlation:
                       (Intr) Clbrt1 sexm   ShpDg1 YngOld UrbnsU UrbnsV InfPT_ Area2  Area4
Celebration1           -0.467
sexm                   -0.355  0.018
SheepDog1              -0.427  0.079  0.066
YoungOldyoung          -0.483  0.017  0.134  0.045
UrbanisationU          -0.273  0.005 -0.058  0.317 -0.035
UrbanisationV          -0.393  0.001 -0.138  0.417 -0.087  0.586
InfectionPeriodT_hydat -0.517  0.804  0.022  0.088  0.016  0.007  0.003
Area2                  -0.044 -0.035 -0.044 -0.268 -0.070 -0.315 -0.232 -0.042
Area4                  -0.213 -0.116 -0.049 -0.186 -0.023 -0.119  0.031 -0.148  0.561
Area5                   0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000  0.000

Standardized Within-Group Residuals:
          Min            Q1           Med            Q3           Max
-14.208465914  -0.093224405  -0.022551663  -0.004948562  14.733133744

Number of Observations: 1008
Number of Groups: 504
```

Output from logistf (Firth's penalized-likelihood logistic regression):

```
logistf(formula = T_hydat ~ Celebration + sex + SheepDog + YoungOld +
    Urbanisation + InfectionPeriodT_hydat + Area, data = dat,
    family = binomial)

Model fitted by Penalized ML
Confidence intervals and p-values by Profile Likelihood

                               coef   se(coef)  lower 0.95  upper 0.95       Chisq            p
(Intercept)            -5.252164846 1.52982941 -8.75175093 -2.24379091 12.84909207 3.376430e-04
Celebration1            1.136833737 0.49697927  0.14999782  2.27716500  5.17197661 2.295408e-02
sexm                   -0.200450540 0.44458464 -1.09803320  0.77892986  0.17662930 6.742861e-01
SheepDog1               2.059166246 0.47197694  1.10933774  3.12225212 18.92002321 1.363144e-05
YoungOldyoung           0.412641416 0.56705186 -0.66182554  1.77541644  0.50507269 4.772797e-01
UrbanisationU           0.565030324 0.70697218 -0.98974390  1.97489240  0.56236485 4.533090e-01
UrbanisationV           0.265401035 0.50810444 -0.75429596  1.33772658  0.25619218 6.127483e-01
InfectionPeriodT_hydat -0.003590666 0.01071497 -0.02530179  0.02075254  0.09198425 7.616696e-01
Area2                  -0.634761551 0.74958750 -2.27274031  0.90086554  0.66405078 4.151335e-01
Area4                   0.359032194 0.57158464 -0.76903324  1.63297249  0.37094569 5.424892e-01
Area5                  -2.456953373 1.44578029 -7.36654837 -0.13140806  4.37267766 3.651956e-02

Likelihood ratio test=36.56853 on 10 df, p=6.718946e-05, n=1008
Wald test = 32.34071 on 10 df, p = 0.0003512978
```

**glmer Model (Edited 28th Jan 2016)**

Output from glmer2var: Mixed effect model with the 2 most "important" variables ("Celebration" = the factor I am
interested in and "SheepDog" which was found to have a significant influence on infection when data before and after the celebration were analysed separately.) The few number of positives make it impossible to fit a model with more than two explanatory variables (see commet EdM). There seems to be a strong effect of "Celebration" that probably cancels out the effect of "SheepDog" found in previous analysis. Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod'] Family: binomial ( logit ) Formula: T_hydat ~ Celebration + SheepDog + (1 | Sample_ID) Data: dat AIC BIC logLik deviance df.resid 113.0 132.6 -52.5 105.0 1004 Scaled residuals: Min 1Q Median 3Q Max -4.5709 -0.0022 -0.0001 0.0000 10.3491 Random effects: Groups Name Variance Std.Dev. Sample_ID (Intercept) 377.1 19.42 Number of obs: 1008, groups: Sample_ID, 504 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) -19.896 4.525 -4.397 1.1e-05 *** Celebration1 7.626 2.932 2.601 0.00929 ** SheepDog1 1.885 2.099 0.898 0.36919 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Clbrt1 Celebratin1 -0.908 SheepDog1 -0.297 -0.023 ## Question 2: 2) Can I use drop1() to get the final model and use the p-Values from summary(mod_converges) for interpretation? Does my output tell me if it makes sense to include the random effect ("Area") ? Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod'] Family: binomial ( logit ) Formula: T_hydat ~ Celebration + SheepDog + (1 | Area) Data: dat AIC BIC logLik deviance df.resid 190.8 210.4 -91.4 182.8 1004 Scaled residuals: Min 1Q Median 3Q Max -0.369 -0.135 -0.096 -0.071 17.438 Random effects: Groups Name Variance Std.Dev. Area (Intercept) 0.359 0.599 Number of obs: 1008, groups: Area, 4 Fixed effects: Estimate Std. Error z value Pr(>|z|) (Intercept) -5.912 0.698 -8.47 < 2e-16 *** Celebration1 1.287 0.512 2.51 0.012 * SheepDog1 2.014 0.484 4.16 3.2e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Correlation of Fixed Effects: (Intr) Clbrt1 Celebratin1 -0.580 SheepDog1 -0.504 0.027 I know there are quite a few questions but I would really appreciate some advice from experienced people. Thanks! • With only 22 observations showing an infection, you shouldn't be trying to fit more than 1 or 2 predictor variables (degrees of freedom). See this page, for example. – EdM Jan 26 '16 at 18:31 • Can you include the output from your original model? Specification-wise, it makes the most sense to me, though it may be over-parametrized if you only have 22 positive observations. You might try fitting it with the package blme, which adds some regularization to the fixed and/or random effects, and can help with issues of parameters being forced to the boundaries of the parameter space (both linear separation and zero-variance random effects). – Andrew M Jan 26 '16 at 18:44 • @Edm, Thanks for this advice. So I seem to be restricted in analyzing this data and can only make the best out of it. If I run the model with the 2 variable "Celebration" and "SheepDog" (the variables I suspect to be the most important) the model converges and the output seems to reasonable to me (output added (glmer2var)). Would you agree with that procedure? – organutan Jan 28 '16 at 13:22 • @AndrewM, I don't exactely understand what you mean with "original model". Are you talking about the mixed model with ID as random effect which didn't converge? 
– organutan Jan 28 '16 at 13:23
• You don't want to lose the information about the paired comparisons. Also, it seems that you want to focus on the effects of the celebration. How many individuals were infected before the celebration? Did any of those lose the infection after the celebration? – EdM Jan 28 '16 at 14:26

I think that your original model with 504 levels, each level having two readings, is problematic because it potentially suffers from complete separation, especially given the small number of positives in your sample. By complete separation I mean that for a given combination of covariates all responses are the same (usually 0 or 1). You might want to try a different optimizer (i.e. something along the lines of glmerControl(optimizer = 'bobyqa'), 'Nelder_Mead', etc.; see the sketch after the comments below), but I would not be very confident that this would work either. In general, having some levels with one or two observations is not a problem, but when all of them are so low, things become computationally odd because you start having identifiability issues (e.g. you definitely cannot evaluate any slopes, as a random slope plus a random intercept for every individual would give you one random effect for every observation). You really lose a lot of degrees of freedom any way you count them. You do not show the glmmPQL output, but I suspect a very high variance of the random effect, which would strongly suggest that there is complete separation. (EDIT: You now show that output, and you can clearly see that the ratio is indeed very high.)

You might want to consider using the function logistf from the package with the same name. logistf will fit a penalized logistic regression model that will probably alleviate the issue you experience; it will not use any random effects.

The rule of thumb for the lowest number of levels from which a random effect can be estimated reasonably is "5 or 6"; below that, your estimate for the standard deviation of that effect would really suffer. With this in mind, no; using Area, which has just four (4) levels, is too aggressive. Probably it makes more sense to use it as a fixed effect. In general, if I do not get at least 10-11 levels I am a bit worried about the validity of the random effects assumption; we are estimating a Gaussian after all.

Yes, you could use drop1, but really be careful not to start data-dredging (which is a bad thing). Take any variable selection procedure with a grain of salt. This issue is extensively discussed on Cross Validated; e.g. see the following two great threads for starters: here and here. Maybe it is more reasonable to include certain "insignificant" variables in a model so one can control for them, and then comment on why they came out insignificant, rather than just break a model down to the absolute bare bones where everything is uber-significant. In any case I would strongly suggest using the bootstrap to get confidence intervals for estimated parameters.

• I am glad I could help. Just to clarify, logistf will not include a random effect. All effects will be fixed in that sense. (I will add this clarification to the original answer.) – usεr11852 Jan 26 '16 at 18:18
• @AndrewM: I see your point and it is not ungrounded, but in the current case I think that consistently two measurements per subject do not warrant a full mixed model. You need to regularize something; using logistf does this, and your comment about using blme is just another way of doing this regularization, especially given the small # of positives. Look at the ratio of variances in the glmmPQL output: 600+?
In that sense maybe using rare event logistic regression relogit from Zelig is relevant too. (That's why the complete separation comment.) – usεr11852 Jan 26 '16 at 19:09
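A minimal R sketch of the remedies discussed in this thread (an illustration, not code from the original posts; it assumes the question's data frame dat and variable names, and the formulas are examples rather than recommendations):

```r
library(lme4)
library(logistf)

# 1) Retry the ID-level mixed model with a different optimizer and a
#    larger iteration budget, as suggested in the answer above.
mod_retry <- glmer(
  T_hydat ~ Celebration + SheepDog + (1 | ID),
  data    = dat,
  family  = binomial,
  control = glmerControl(optimizer = "bobyqa",
                         optCtrl = list(maxfun = 2e5))
)

# 2) Firth's penalized logistic regression: no random effects, but the
#    penalty guards against the complete separation that only 22
#    positive observations make likely.
fit_firth <- logistf(
  T_hydat ~ Celebration + sex + SheepDog + YoungOld +
    Urbanisation + InfectionPeriodT_hydat + Area,
  data = dat
)
summary(fit_firth)

# 3) The single-term deletion check from Question 2, with
#    likelihood-ratio tests instead of Wald p-values.
drop1(mod_retry, test = "Chisq")
```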
{}
Khat {dbmss} R Documentation

## Estimation of the K function

### Description

Estimates the K function.

### Usage

```
Khat(X, r = NULL, ReferenceType = "", NeighborType = ReferenceType,
     CheckArguments = TRUE)
```

### Arguments

X: A weighted, marked, planar point pattern (wmppp.object).
r: A vector of distances. If NULL, a sensible default value is chosen (512 intervals, from 0 to half the diameter of the window) following spatstat.
ReferenceType: One of the point types. Default is all point types.
NeighborType: One of the point types. By default, the same as the reference type.
CheckArguments: Logical; if TRUE, the function arguments are verified. Should be set to FALSE to save time in simulations, for example, when the arguments have been checked elsewhere.

### Details

K is a cumulative, topographic measure of a point pattern structure.

### Value

An object of class fv, see fv.object, which can be plotted directly using plot.fv.

### Note

The computation of Khat relies on the spatstat functions Kest and Kcross.

### References

Ripley, B. D. (1976). The Foundations of Stochastic Geometry. Annals of Probability 4(6): 995-998.
Ripley, B. D. (1977). Modelling Spatial Patterns. Journal of the Royal Statistical Society B 39(2): 172-212.

### See Also

Lhat, KEnvelope, Ktest

### Examples

```
data(paracou16)
autoplot(paracou16)

# Calculate K
r <- 0:30
(Paracou <- Khat(paracou16, r))

# Plot (after normalization by pi.r^2)
autoplot(Paracou, ./(pi*r^2) ~ r)
```

[Package dbmss version 2.7-8 Index]
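A small usage sketch of the CheckArguments switch (my addition, not part of the manual page), reusing the paracou16 data from the example; the loop is only illustrative:

```r
library(dbmss)
data(paracou16)
r <- 0:30

# Validate the arguments once...
K_first <- Khat(paracou16, r)

# ...then skip the checks inside a simulation-style loop to save time.
for (i in seq_len(100)) {
  K <- Khat(paracou16, r, CheckArguments = FALSE)
}
```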
{}
# Stonehenge

##### Stage: 5 Challenge Level:

Here's a good clear explanation from Jack of Madras College. Consider the movement of the block relative to the logs: When the log makes one revolution it travels ${\pi}d$ metres. As the block is in contact with the logs, it moves ${\pi}d$ metres along the horizontal plane. Therefore, the block moves ${\pi}d$ metres relative to the logs. Now consider the movement of the logs relative to the ground: When the log makes one revolution it travels ${\pi}d$ metres. As it is in contact with the ground it moves ${\pi}d$ metres along the horizontal plane. Therefore, the log moves ${\pi}d$ metres relative to the ground. This means the log moves ${\pi}d$ metres relative to the ground but the block moves ${\pi}d$ metres relative to the logs. Therefore, the block moves $2{\pi}d$ metres relative to the ground, which is twice as much as the logs. Thus: the block moves twice as fast as the logs.
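Another way to see the factor of two (a supplementary argument, not part of Jack's solution) is through instantaneous velocities. For a log of diameter $d$ rolling without slipping at angular speed $\omega$, the contact point with the ground is momentarily at rest, so

$$v_{\text{contact}} = 0, \qquad v_{\text{centre}} = \frac{\omega d}{2}, \qquad v_{\text{top}} = \omega d = 2\,v_{\text{centre}},$$

and the block, carried on top of the logs, moves at twice the speed of the logs' centres.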
{}
Solved

# PowerShell and XML editing

Posted on 2013-06-21 • 114 Views

I have the following code that allows me to change a line in web.config, but if there are 2 connection strings it fails. How can I change it so it works?

```
$New_Pass = 'Welcome'
$Web = 'C:\inetpub\wwwroot\Contoso\web.config'
$CONNECTION_STRING = "Data Source=JOHN.WORLD;User Id=peter;password=" + $New_Pass
$doc = new-object System.Xml.XmlDocument
$doc.Load($Web)
$root = $doc.get_DocumentElement();

# Change password
# If there are 2 entries it will fail
$root.connectionStrings.add.connectionString = $CONNECTION_STRING
$doc.Save($Web)
```

The code does exactly what I need, but if there are 2 or more entries in that tag it will fail.

0

Question by:hwalch • 11 • 6 • 2 • 24 Comments

LVL 69 Expert Comment ID: 39266187 Would you mind posting an example web.config?

0

Author Comment ID: 39267201

```
<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="ConStringPSupply" connectionString="Data Source=.;Initial Catalog=pSupply;User ID=sa;Password=1234"/>
    <add name="ConString1" connectionString="Data Source=.;Initial Catalog=NEW1;Integrated Security=True" providerName="System.Data.SqlClient"/>
  </connectionStrings>
  <system.web>
    <!-- Set compilation debug="true" to insert debugging symbols into the compiled page.
         Because this affects performance, set this value to true only during development. -->
    <httpHandlers>
      <remove verb="*" path="*.asmx"/>
      <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" validate="false"/>
      <add verb="GET" path="CrystalImageHandler.aspx" type="CrystalDecisions.Web.CrystalImageHandler, CrystalDecisions.Web, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
    </httpHandlers>
    <httpModules>
      <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
    </httpModules>
    <compilation debug="true">
      <assemblies>
        <add assembly="System.Web.Extensions, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Extensions.Design, Version=1.0.61025.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.RegularExpressions, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
        <add assembly="System.Drawing.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
        <add assembly="CrystalDecisions.CrystalReports.Engine, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.ReportSource, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.Shared, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.Web, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.ReportAppServer.ClientDoc, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.Enterprise.Framework, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="CrystalDecisions.Enterprise.InfoStore, Version=10.2.3600.0, Culture=neutral, PublicKeyToken=692fbea5521e1304"/>
        <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/>
        <!--<add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>-->
      </assemblies>
    </compilation>
    <!-- The <authentication> section enables configuration of the security authentication mode
         used by ASP.NET to identify an incoming user. -->
    <authentication mode="Windows"/>
    <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error
         occurs during the execution of a request. Specifically, it enables developers to configure
         html error pages to be displayed in place of a error stack trace.
    <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm">
      <error statusCode="403" redirect="NoAccess.htm" />
      <error statusCode="404" redirect="FileNotFound.htm" />
    </customErrors>
    -->
  </system.web>
</configuration>
```

0

LVL 80 Expert Comment ID: 39268657

```
<#
<connectionStrings>
  <add name="ConStringPSupply" connectionString="Data Source=.;Initial Catalog=pSupply;User ID=sa;Password=1234"/>
  <add name="ConString1" connectionString="Data Source=.;Initial Catalog=NEW1;Integrated Security=True" providerName="System.Data.SqlClient"/>
</connectionStrings>
#>
$path = "D:\Documents\WindowsPowerShell\Scripts\"
$Web = $path + "ee22jun13.xml"
#$output = "connectionstring2.xml"
$New_Pass = "Welcome"
$Connection_String1 = "Data Source=JOHN.WORLD;User Id=peter;password=" + $New_Pass
$Connection_String2 = "Data Source=WAYNE.WORLD;User ID=peter;password=" + $New_Pass
$xml = New-Object XML
$xml.Load($web)

# Change password
# If there are 2 entries it will fail
$xml.configuration.connectionStrings.Add[0] = $Connection_String1
$xml.configuration.connectionStrings.Add[1] = $Connection_String2
$xml.Save($Web)

cls
$Connection_String1
Get-Content $web
```

It changes it in memory, but for the life of me I can't get it to save.

0

Author Comment ID: 39268863 Interesting, it looks very simple. I will test it on Monday. Now, a web.config could have 2, 3 or more connection strings; how do I find out how many are there? Thanks

0

LVL 80 Expert Comment ID: 39268874

```
$xml.configuration.connectionStrings.add.Count
```

will return the number of connectionStrings objects. Remember to start your counter when assigning from 0 and not 1, i.e. with two objects the first is object[0].

0

Author Comment ID: 39272084 You were right. I am able to change the value perfectly. I was able to see the count also; it returns two. But when I save the changes, nothing happens. I tried to do one save, and then do the second and then save, but it didn't change the outcome. Beautiful !!! :-)

0

LVL 69 Expert Comment ID: 39272191 Wrong. Neither in memory nor on disk will you see any change - because the change is wrong ;-). When changing the connection string, we need to provide the attribute "connectionString", as was in the original code.
$path = "c:\temp\ee\tst\"$Web = $path + "ee22jun13.xml"$New_Pass = "Welcome" $Connection_String1 ="Data Source=JOHN.WORLD;User Id=peter;password=" +$New_Pass $Connection_String2 = "Data Source=WAYNE.WORLD;User ID=peter;password=" +$New_Pass $xml = New-Object XML$xml.Load($web) # Change password$xml.configuration.connectionStrings.Add[0].connectionString = $Connection_String1$xml.configuration.connectionStrings.Add[1].connectionString = $Connection_String2$xml.Save($Web) cls Get-Content$web Of course that only works well if you have exactly the same amount and content of connectionstrings everywhere. 0 LVL 69 Expert Comment ID: 39272255 This seems to be much better: $path = "c:\temp\ee\tst\"$Web = $path + "ee22jun13.xml"$New_Pass = "Welcome" $xml = New-Object XML$xml.Load($web) # Change password$xml.configuration.connectionStrings.add | ? { $_.ConnectionString -like '*Password=*' } | % {$_.ConnectionString = $_.ConnectionString -replace 'Password=(\w*)', "Password=$New_Pass" } $xml.Save($Web) cls Get-Content $web 0 Author Comment ID: 40115287 Sorry for going away for so long and thank you for your input. That last entry almost works. It's the only one I have managed to make work. However this is assuming that all paswords are the same. That is not the case. If you have 3 connections strings, one will be forWindows SQL, one for Oracle, and one for Sybase or just 3 different servers. So you need to be able to replace only one for each add name 0 LVL 69 Expert Comment ID: 40115357 In that case we just need to check for the fitting Name attribute: $path = "c:\temp\ee\tst\" $Web =$path + "web.config" $New_Pass = "Welcome"$xml = New-Object XML $xml.Load($web) $xml.configuration.connectionStrings.add | ? {$_.Name -eq 'ConString2' } | ? { $_.ConnectionString -like '*Password=*' } | % {$_.ConnectionString = $_.ConnectionString -replace 'Password=(\w*)', "Password=$New_Pass" } $xml.Save($Web) cls Get-Content $web Line 11 isn't really necessary, as we are very specific in which connection string we want to change, and the replace won't do any harm if there is no password tag (as with trusted security). You can remove it or leave it as-is hence. 0 Author Comment ID: 40406878 NOOOOOOOOOOOOOOOOOOOOOOOOOOOO I am actively using it 0 Author Comment ID: 40406886 That is totally incorrect. The posted script works only for one line. That was not the question. Do I need to repost my question? 0 Author Comment ID: 40406888 I think if you are going to assign points for me, 500 is incorrect if the question is only partially answered. 500 would be for a perfect answer. 0 Author Comment ID: 40406893 So is there any one else who want to help? There is a lot activity about to change Web.config in google weird that no-one has the knowledge. 0 Author Comment ID: 40406933 I did 0 Author Comment ID: 40406935 Do do I need to start all over again and repost? 0 LVL 69 Expert Comment ID: 40406946 You haven't been verbose and precise about your requirements. From what you have posted now, I derive you need to change all db connect strings, and have to do so with different passwords? How should that happen? Check for how many connection strings are in there, then ask a new password for each? Is the sequence the same all the time, or might they get mixed up, or whatsoever? We either need a static sequence of strings, with. e.g. ConString2 always being the MSSQL connections, or something else clearly identifying each connection string and the new password to associate. 
We can also do a simple old => new password replacement, if that is more simple.

0

Author Comment ID: 40407400 The logic is correct, but what I need is the code to navigate from connection string to connection string and replace the password (e.g. via XPath).

0

LVL 69 Accepted Solution Qlemo earned 500 total points ID: 40408551

```
cls
$path = "c:\temp\ee\"
$Web = $path + "web.config"
$xml = New-Object XML
$xml.Load($web)

# Change password
$xml.configuration.connectionStrings.add | % {
    $pwd = switch ($_.Name)
    {
        'ConString1'       { 'NewPass1' }
        'ConstringPSupply' { 'NewPass2' }
    }
    $_.ConnectionString = $_.ConnectionString -replace 'Password=(\w*)', "Password=$pwd"
}
$xml.Save($Web)

Get-Content $web
```

0
{}
# Contour Plots in ggplot2

How to make Contour Plots in ggplot2 with Plotly.

New to Plotly? Plotly is a free and open-source graphing library for R. We recommend you read our Getting Started guide for the latest installation or upgrade instructions, then move on to our Plotly Fundamentals tutorials or dive straight in to some Basic Charts tutorials.

### Basic geom_contour plot

geom_contour produces a similar output to geom_density_2d, except it uses a third variable for the values rather than frequency. The volcano dataset comes pre-loaded on R.

```
library(plotly)
library(reshape2)

df <- melt(volcano)

p <- ggplot(df, aes(Var1, Var2, z = value)) +
  geom_contour() +
  scale_fill_distiller(palette = "Spectral", direction = -1)

ggplotly(p)
```

### Coloured Plot

See here for a list of colour palettes that come with the brewer (discrete) and distiller (continuous) packages.

```
library(plotly)
library(reshape2)

df <- melt(volcano)

p <- ggplot(df, aes(Var1, Var2, z = value, colour = stat(level))) +
  geom_contour() +
  scale_colour_distiller(palette = "YlGn", direction = 1)

ggplotly(p)
```

### Filled Plot

It's possible to colour in each of the layers by changing geom_contour to stat_contour as below. As the edges of the graph indicate, filled contour plots only work when each layer is an enclosed shape rather than an open line; a geom more suited to this functionality would be geom_tile or geom_raster (see the sketch at the end of this page).

```
library(plotly)
library(reshape2)

df <- melt(volcano)

p <- ggplot(df, aes(Var1, Var2, z = value)) +
  stat_contour(geom = "polygon", aes(fill = stat(level))) +
  scale_fill_distiller(palette = "Spectral", direction = -1)

ggplotly(p)
```

Dash for R is an open-source framework for building analytical applications, with no Javascript required, and it is tightly integrated with the Plotly graphing library. Learn about how to install Dash for R at https://dashr.plot.ly/installation.

Everywhere in this page that you see fig, you can display the same figure in a Dash for R application by passing it to the figure argument of the Graph component from the built-in dashCoreComponents package like this:

```
library(plotly)

fig <- plot_ly()
# fig <- fig %>% add_trace( ... )
# fig <- fig %>% layout( ... )

library(dash)
library(dashCoreComponents)
library(dashHtmlComponents)

app <- Dash$new()
app$layout(
  htmlDiv(
    list(
      dccGraph(figure = fig)
    )
  )
)
```
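A minimal sketch of the geom_raster alternative mentioned under "Filled Plot" (my own illustration on the same volcano data, not part of the Plotly tutorial):

```r
library(plotly)
library(reshape2)

df <- melt(volcano)

# Every (Var1, Var2) grid cell is filled with its own value, so the
# edges are covered even where contour bands would be open lines.
p <- ggplot(df, aes(Var1, Var2, fill = value)) +
  geom_raster() +
  scale_fill_distiller(palette = "Spectral", direction = -1)

ggplotly(p)
```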
{}
# Can we restrict what kinds of identification questions are on topic? Recently there was a significant discussion about whether a question on the main site should be on topic. I very strongly believe that it is not a good question for the site and should not be on topic here, but the community thought otherwise. That got me thinking about what defines our site's scope with respect to identification questions. I'm making this post to share my reasoning for why I think identification questions like that one should not be on topic here and see if I can convince the community to agree to make this policy. I think these questions are bad for the site because • They're basically trivia questions which may or may not have anything to do with physics, so e.g. a random person is just as likely to know the answer as a random physicist or astronomer. That doesn't make the question bad, but I think it should mean that this site is not the place for it. I'd like to think that questions which are well-received here are those which are more useful to, and more answerable by the audience described in our help center (active researchers, academics and students of physics and astronomy) than to the average person. • Also, these questions don't prompt us to give an answer that shares any science knowledge. And if a question can be fully answered without sharing any science knowledge, it raises the question of what it's doing here, since neither the question nor its likely answers will be related to physics or astronomy. (Often one can post an answer that shares some physics/astronomy knowledge by going beyond what the question is asking for, but that's true of a wide variety of questions which I think we would all agree have no business here.) • A lot of these questions don't get much traction on the site, but the ones that do tend to be quite popular and often hit the HNQ, because visitors who aren't necessarily all that familiar with science can understand them. Tying into my previous point, that means a decent fraction of the questions that do the most work to represent our site are marginally or not at all related to physics or astronomy. With all that in mind, what I'm proposing is that we make a policy that puts the burden on the asker of an identification question to show why their question belongs here, specifically. I'm proposing that identification questions, which ask "what is [thing]", must meet one of these criteria: • Explain why the question has something to do with physics or astronomy specifically (this would cover identification questions for devices used in physics experiments as well as astronomical objects) • Ask for some kind of physics-related explanation of the thing described, e.g. why it occurs (if it's a phenomenon) or possibly how it formed Under this proposal, for example, all four of the following questions that Yly linked in the other meta post are fine because they ask for explanations: There are a few questions where the answer would have to do with physics or astronomy but the asker doesn't know that or doesn't have any reason to believe that. Those would be rendered off-topic under this policy. I think that's reasonable because the asker can easily be guided to modify their question to make it clear that it does have to do with physics or astronomy. • -1 for proposing to place even more burden on question askers than there already is. Physics SE may be too big and a better way to restrict questions might be to split it up rather than discourage questions. 
– uhoh Aug 24 '20 at 5:19 • @uhoh So you think the site policies are perfect and all the questions we get (which aren't already closed under existing rules) are good? Aug 24 '20 at 5:20 • I don't respond to "So you think x?" where x is not a reasonable interpretation of what I've said. I've simply down voted and extended the courtesy of explaining it. – uhoh Aug 24 '20 at 5:22 • I'm guessing you mean to imply that my interpretation was not reasonable, but I attest that it is. You disapprove of putting more burden on question askers than there already is; it seems like a clear logical consequence that you do not want any more restrictions on what questions can be asked here. Given that, it seems quite reasonable to think you believe that all the questions we get which are not disallowed by an existing restriction are good for the site, otherwise you would favor a restriction against them. I'm asking you to explain where that argument diverges from your actual opinion. Aug 24 '20 at 5:29 • No I feel that the specific burden described in your post is not a good idea "...what I'm proposing is that we make a policy that puts the burden on the asker..." I'm not yet ready to post a full answer, but I wanted to down vote and I usually try to indicate in a comment what the down vote is for. Thus my comment begins with -1 and then continues with a short explanation of it. Let's wait and see how the answer posts evolve before going further. – uhoh Aug 24 '20 at 5:33 • Ah, I see. I think I misunderstood the nature of your objection to the proposal based on your first comment. Aug 24 '20 at 5:43 • Thanks for making this post. I think there was some conflation in the previous meta post between the site policy and whether or not users thought that specific question should be open / closed. As for this proposed policy, it seems like there is not much difference between on topic and off topic. i.e. it seems like I could go to any off topic identification question (according to this post) and just tack on at the end "Why does this happen?" and then now it is on topic since it is asking for an explanation. Aug 24 '20 at 13:19 • I also find that a lot of questions like these are missing a lot of detail, and thus lead to a lot of speculative answers. You will often see the comments flooded with further ideas / "experiments" for the user to do and report back on, and the answers have different explanations of "If this is the case, then it could be this. Or it could also be this. Or..." Would this policy need to be updated to handle cases like this, or would this just fall under the usual "Needs more detail / clarity"? Aug 24 '20 at 13:21 • @BioPhysicist (2 comments up) Yeah, that's true, although I think it'll often be clear whether a "why does this happen" was just tacked on at the end to try to meet a policy or it was incorporated because the person actually wants to know what's going on. Only in the latter case will it actually blend into the original question. And if someone tacks on "why does this happen" but works it into their question well enough that it sounds natural, then problem solved. (1 comment up) Good question and I'm not sure, but applying the usual "needs more details" probably covers many of those cases. Aug 24 '20 at 22:08 • Also, the question doesn't ask to explain anything...doesn't ask why the thing happens, they just want a name. Or if they're asking about an object, they don't (seem to) care about what it could be used for. 
Is your point that if OP had asked “what’s causing these rays” instead of “what are these rays”, would it then be on topic? Aug 25 '20 at 11:07

• @SuperfastJellyfish Yes, if the question had asked "what's causing these rays" I would consider it squarely on-topic. (I mean, I suppose there's still room for a trivial answer like "the sun", but I figure it's quite unlikely that people would actually answer that way, or that such an answer would be well-received.) Aug 28 '20 at 23:40

• Ah okay. Then it seems like an issue easily fixed by editing, in case the author’s intent was to know the underlying physics to begin with. Aug 29 '20 at 10:47

I sympathize with the intent of this post, but I think it would be detrimental to implement the proposed criteria in practice.

1. People observing a phenomenon and then asking what it is don't know what it is. How are they supposed to know whether e.g. some optical phenomenon they see is an optical illusion better explained by biology and neuroscience (think e.g. about still images that look like they're moving) or a "real" thing that is explained by physics? Instead of requiring an explanation of why they think it's "physics", I think we should just stringently use the needs details or clarity reason to close questions where not enough information is given to determine uniquely the phenomenon happening. If enough details are given, it's either close-able because the answer is not based in physics or uniquely answerable (yes, this is a rare case where I do think closing a question based on its potential answers is justified). The "stringently" is important there - we should not be willing to play a guessing game about physics that might be involved.

2. The idea that they should ask for a physical explanation instead of just "what it is" is at first glance reasonable to me, but what's stopping us from simply assuming that someone who posts "What is this?" on a physics site is interested in a physics explanation of the "this" in question? Why do we need to close the question and force the asker to essentially always just replace "What is this?" by "What is the physical explanation for this?"? This strikes me as policing the formulation of the question more than policing its content. I think it's safe to assume that the overwhelming majority of people asking such questions on physics.SE will be interested in the physical explanation, and even if they're not - do we have to care? If the fear here is that if we don't do this then we get answers that just say "This is called X" without any explanation at all, then I would like to believe that we'd just not upvote such answers (or even - gasp! - downvote them). If this belief isn't enough, I'd prefer just making it a policy that such answers are considered non-answers (and hence deletable via low quality review) rather than requiring askers to jump through this specific hoop in how they're phrasing their question.

• I was about to post the equivalent of point 2, but it is said here so perfectly that I'll just upvote instead Aug 24 '20 at 18:40

• "If enough details are given, it's...close-able because the answer is not...uniquely answerable (yes, this is a rare case where I do think closing a question based on its potential answers is justified)." I feel like if unique answers are not possible then it is an issue with the question, not the answers. The close reason would be "the question does not allow for unique answers".
So I still think we can say here that questions are not being closed based on their answers necessarily. This is opposed to, say, keeping a homework question open because of an amazing answer. Aug 24 '20 at 20:46 • @BioPhysicist You omitted the wrong parts of the sentence ;) I meant the parenthesis to apply to the other arm of the alternative there, i.e. the case where the answer is not based in physics. – ACuriousMind Mod Aug 24 '20 at 20:56 • I knew I shouldn't have applied my virtual scissors liberally ;) Aug 24 '20 at 21:05 • Thanks for answering ACM. On point 1, I guess what I'm hoping is that, when people don't know what a thing is, they do a small amount of research elsewhere just to have some reason to think that something related to physics is involved. Otherwise, I worry that we'll get overly trivial identification questions like, say, Q: "What's the dark shape in this picture?" A: "It's the shadow of your head." That'd be a very clear, uniquely determined phenomenon, but I hope we can agree it's not a good question for this site. Aug 24 '20 at 22:17 • On point 2... I guess, in that same example, if the question asked "Why do shadows form?" that would be fine (rather low-level, but on topic), but I just find it very unnatural to read "What's this dark shape?" and interpret it as "Why do shadows form?" I think it's very plausible that the asker in that scenario really just wants to know what that weird thing in their picture is and doesn't care why it's there, in the physical sense. Therefore, if they do want to know why it exists, I prefer to put the impetus on them to say so. Aug 24 '20 at 22:20
Two balls A and B are thrown with speeds u and u/2 respectively. Both the balls cover the same horizontal distance before returning to the plane of projection. If the angle of projection of ball B is 15° with the horizontal, then the angle of projection of A is:

(A) $\frac{1}{2}\sin^{-1}\frac{1}{8}$ (B) $\frac{1}{4}\sin^{-1}\frac{1}{8}$ (C) $\frac{1}{3}\sin^{-1}\frac{1}{8}$ (D) $\sin^{-1}\frac{1}{8}$

The correct option is (A).

Solution: For A, $R = \frac{u^2 \sin 2\theta}{g}$. For B, $R = \frac{(u/2)^2 \sin 30°}{g} = \frac{u^2}{8g}$. Comparing, $\sin 2\theta = \frac{1}{8}$, so $\theta = \frac{1}{2} \sin^{-1}\frac{1}{8}$.
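A quick numerical cross-check of this result (my own addition; the launch speed and $g$ below are arbitrary choices, since they cancel in the comparison):

```python
# Both projectiles should cover the same horizontal range when
# theta_A = (1/2)*asin(1/8) and theta_B = 15 degrees, with u_B = u_A/2.
import math

u, g = 10.0, 9.8                 # arbitrary launch speed and gravity
theta_A = 0.5 * math.asin(1/8)   # the answer from option (A), in radians
theta_B = math.radians(15)

R_A = u**2 * math.sin(2*theta_A) / g
R_B = (u/2)**2 * math.sin(2*theta_B) / g
print(R_A, R_B)  # both ranges come out equal (~1.2755 m with these values)
```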
## Shadow of Convex Polygons and their Perimeter

Hi, this is another question from the article "The Mathematics of Doodling" in the American Mathematical Monthly of February 2011. (See http://math.stanford.edu/~vakil/files/monthly116-129-vakil.pdf)

On p. 126, the article mentions a remarkable fact:

Theorem 3. The average length of the shadow of a convex region of the plane, multiplied by $\pi$, is the perimeter.

Followed by:

Theorem 4. Consider the average area of the shadow of a convex region of three-space, and multiply by 4. The result is the surface area.

Are these well-known facts? I haven't heard of these facts before! Any ideas on how one could prove these two results?
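Theorem 3 is the classical Cauchy projection formula, and it is easy to sanity-check numerically. Here is a minimal Monte Carlo sketch (my own addition) for the unit square, whose perimeter is 4:

```python
# Mean shadow (width) of the unit square over uniformly random directions,
# multiplied by pi, should equal its perimeter, 4.
import math
import random

def width(theta):
    # Width of [0,1]^2 when projected onto a line at angle theta.
    return abs(math.cos(theta)) + abs(math.sin(theta))

n = 1_000_000
mean_width = sum(width(random.uniform(0, 2*math.pi)) for _ in range(n)) / n
print(math.pi * mean_width)  # ~4.0
```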
## nominal GDP

If nominal GDP is $4,000 billion and the amount of money demanded for transactions purposes is $800 billion, it can generally be concluded that:

• The asset demand for money will be $3,200 billion
• The total demand for money will be $4,800 billion
• On average, each dollar will be spent five times a year
• The supply of money needs to be increased to meet the demand
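For the third option, the arithmetic (a gloss I'm adding, using the standard equation of exchange $MV = PQ$ with transactions balances in the role of $M$) is

$$V = \frac{\text{nominal GDP}}{\text{transactions demand}} = \frac{\$4{,}000\text{ billion}}{\$800\text{ billion}} = 5,$$

i.e. on average each dollar is spent five times a year.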
## Does solving the Dirac equation give the right energy spectrum in S4?

We want to consider the potential $A_\mu = ( Ze/r, 0,0,0)$ and solve on $S^4(1/h)$ the Dirac equation:

$(D - i\frac{eA}{c\hbar} - m_0c^2) \psi = 0$

Using the calculations of Camporesi and Higuchi, we know that in geodesic polar coordinates $(\theta, \Omega)$ the eigenspinors have the factor

$\phi_{nl}(\theta) = \cos(\theta/2)^{l+1}\sin(\theta/2)^{l} P(\cos(\theta))$

where $n$ is the eigenvalue number and $l \le n$. Let's approximate $1/r$ by $1/\sin(\theta)$ because we will consider the large radius $1/h$. Now we are integrating over $S^4$, for which the volume form in polar coordinates will have a factor $\sin^3(\theta) d\theta$. If we were doing the calculation in $S^3$, then the factor of interest in the volume form would be $\sin^2(\theta) d\theta$, which multiplied by $1/\sin(\theta)$ produces an odd function; for $l=0$ we have even functions in $\phi_{nl}$ and we get a zero integral. However this vanishing does not hold for $S^4$. These integrals would correspond to the inner product $\langle F, \Phi_n\rangle$ for eigenspinors $\Phi_n$ where $F = Ze/\sin(\theta)$.

Now let us take a slightly more abstract approach. We want to calculate $\beta_{nl} = \langle F, \Phi_n \rangle$ in order to solve the Dirac equation in eigenspinor expansions. We can essentially bring in the Dirac-squared on the right side by dividing by $\lambda_n^2$, and then use self-adjointness and the Lichnerowicz formula:

$\beta_{nl} \lambda_n^2 = \langle \Delta(F) + \frac{1}{4}R F, \Phi_n \rangle$

where $R$ is the scalar curvature. Since the scalar curvature is constant, we have:

(*) $\beta_{nl} ( \lambda_n^2 - \frac{1}{4} R ) = \langle \Delta(F), \Phi_n \rangle$

For the right side we can use the formula for the Laplacian in coordinates:

$\Delta_{S^4} f(t,\xi) = \sin^{-3}(t) \partial_t ( \sin^3(t) \partial_t f ) + \sin^{-2}(t)\Delta_{\xi}f$

A small calculation gives $\Delta_{S^4} (1/\sin(t)) = \sin^{-3}(t) [ 1 + 2 \sin^2(t)]$, which we can then plug in with $\Phi$ in the integrand, using the fact that the volume form contains a $\sin^3(t)$. The right hand side of (*) is then the sum of $\int_0^{2\pi} \Phi(t) dt$ and $\int_0^{2\pi} \sin^2(t) \Phi(t) dt$. Let's call the right side $C_{nl}$; all of these will be finite.

Then we have the expression

$\beta_{nl} = \frac{C_{nl}}{\lambda_n^2 - \frac{1}{4}R}$

We can get some asymptotic approximation for $C_{nl}$ by using the formula

$P_n^{(\alpha,\beta)}(\cos(t)) = n^{-1/2} k(t) \cos( Nt + \gamma)$

and we focus on the $k(t)$ term

$k(t) = \pi^{-1/2} \sin^{-\alpha-1/2}(t/2)\cos^{-\beta-1/2}(t/2)$

Once we absorb the cosine and sine terms in the eigenspinor formula, we need only worry about $\alpha=1$ and $\beta=2$ in our case. Then use the double angle formula $\sin^2(t) = 4 \sin^2(t/2)\cos^2(t/2)$ to get approximations for integrals of $\Phi_n(t)$ and $\sin^2(t)\Phi_n(t)$.

Now let us return to the Dirac equation and its inner product with eigenspinor $\Phi_n$ to examine the linear equation in eigenspaces. We have either $\lambda_n - \beta_{nl} = m_0 c^2$ or the coefficient of eigenspinor $\Phi_n$ in the solution expansion is zero. Since the $\beta_{nl}$ coefficients can be calculated before attempting to solve the Dirac operator (these are the effect of the multiplication by $F$ on the eigenspaces), in principle we have a complete solution of the Dirac equation: for each $n,l$, check whether $\lambda_n - \beta_{nl}$ is equal to $m_0c^2$.
If so, then all the eigenfunctions in that eigenspace produce solutions to the Dirac equation; if not, then none of the eigenfunctions in the eigenspace enter into the solution to the Dirac equation.

EXPLICIT FORMULA

Gradshteyn and Ryzhik's formulae in 7.391 allow us to get exact expressions for both $\int_{-1}^1 \Phi_n(t) dt$ and $\int_{-1}^1 \sin^2(t)\Phi_n(t) dt$. For both, use the change of variables $x = \cos t$, in the latter using $\sin(t) = (1+x)^{1/2}(1-x)^{1/2}$. The general formula is (see GradshteynTableIntegrals):

$\int_{-1}^1 (1-x)^{\rho}(1+x)^{\sigma} P_n^{(\alpha,\beta)}(x) dx = A B$

where

$A = \frac{2^{\rho+\sigma+1} \Gamma(\rho+1)\Gamma(\sigma+1)\Gamma(n+\alpha+1)}{n! \, \Gamma(\rho+\sigma+2)\Gamma(\alpha+1)}$

and

$B = {}_3F_2(-n, \alpha+\beta+n+1, \rho+1; \alpha + 1, \rho+\sigma+2; 1)$

We use $\alpha=1$ and $\beta=2$ for the integrals of both $\Phi(t)$ and $\sin^2(t)\Phi(t)$. For the former, set $\rho = \sigma = 0$, and for the latter $\rho=\sigma= 1/2$. So we have computable expressions for the right hand side of (*).
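Because Gradshteyn–Ryzhik entries are easy to mistranscribe, a quick numerical check of the closed form quoted above is worthwhile. This is a minimal sketch (my own addition, assuming the $\Gamma(\alpha+1)$ denominator as written above); all calls are standard mpmath functions:

```python
# Check: integral of (1-x)^rho (1+x)^sigma P_n^(a,b)(x) over [-1,1]
# against the closed form A*B quoted above, for a=1, b=2 (the values
# used in the post). rho=sigma=0 and rho=sigma=1/2 cover both integrals.
from mpmath import mp, mpf, gamma, factorial, hyp3f2, jacobi, quad

mp.dps = 30  # working precision

def closed_form(n, a, b, rho, sigma):
    A = (2**(rho + sigma + 1) * gamma(rho + 1) * gamma(sigma + 1)
         * gamma(n + a + 1)) / (factorial(n) * gamma(rho + sigma + 2) * gamma(a + 1))
    B = hyp3f2(-n, a + b + n + 1, rho + 1, a + 1, rho + sigma + 2, 1)
    return A * B

def direct(n, a, b, rho, sigma):
    return quad(lambda x: (1 - x)**rho * (1 + x)**sigma * jacobi(n, a, b, x), [-1, 1])

for n in range(5):
    for rho in (mpf(0), mpf(1)/2):
        print(n, rho, direct(n, 1, 2, rho, rho), closed_form(n, 1, 2, rho, rho))
```

The two columns should agree to working precision (a variant with $\Gamma(n+\alpha)$ in the denominator starts to disagree once $n \ge 2$).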
## Beginner’s comparison of Computer Algebra Systems (Mathematica / Maxima / Maple)

August 11, 2014

I’ve never been very good at doing manual computations, and whenever I need to do a tedious computation for an assignment, I like to automate it by writing a computer program. Usually I implemented an ad-hoc solution using Haskell, either using a simple library or rolling my own implementation if the library didn’t have it. But I found this solution to be unsatisfactory: my Haskell programs worked with integers and floating numbers and I couldn’t easily generalize them to work with symbolic expressions. So I set out to learn a CAS (computer algebra system), so that in the future I won’t have to hack together buggy code for common math operations.

I have no experience with symbolic computing, so it wasn’t clear to me where to begin. To start off, there are many different competing computer algebra systems, all incompatible with each other, and it’s far from clear which one is best for my needs. I began to experiment with several systems, but after a few days I still couldn’t decide which one was the winner. I narrowed it down to 3 platforms. Here’s my setup (all running on Windows 7):

• Mathematica 8.0
• Maxima 5.32 with wxMaxima 13.04
• Maple 18.00

So I came up with a trial — I had a short (but nontrivial) problem representative of the type of problem I’d be looking at, and I would try to solve it in all 3 languages, to determine which one was easiest to work with.

### The Problem

This problem came up as a part of a recent linear algebra assignment. Let the field be $\mathbb{Z}_5$ (so all operations are taken modulo 5). Find all 2×2 matrices $P$ such that

$P^T \left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array} \right) P = I$

We can break this problem into several steps:

• Enumerate all lists of length 4 of values from 0 to 4, that is, [[0,0,0,0],[0,0,0,1],…,[4,4,4,4]]. We will probably do this with a Cartesian product or list comprehension.
• Figure out how to convert a list into a 2×2 matrix form that the system can perform matrix operations on. For example, [1,2,3,4] might become matrix([1,2],[3,4])
• Figure out how to do control flow, either by looping over a list (procedural) or with a map and filter (functional)
• Finally, multiply the matrices modulo 5 and check if the result equals the identity matrix, and output.

This problem encompasses a lot of the challenges I have with CAS software, that is, using mathematical functions (in this case, we only use matrix multiplication and transpose) while at the same time expressing a nontrivial control flow. There are 5^4=625 matrices to check, so performance is not a concern; I am focusing on ease of use. For reference, there are exactly 8 matrices that satisfy the desired property; the original post displays them.

I have no prior experience in programming in any of the 3 languages, and I will try to solve this problem in the most straightforward way possible with each of the languages. I realize that my solutions will probably be redundant and inefficient because of my inexperience, but it will balance out in the end because I’m equally inexperienced in all of the languages.

### Mathematica

I started with Mathematica, a proprietary system by Wolfram Research and the engine behind Wolfram Alpha. Mathematica is probably the most powerful out of the three, with capabilities for working with data well beyond what I’d expect from a CAS. What I found most jarring about Mathematica is its syntax.
I’ve worked with multiple procedural and functional languages before, and there are certain things that Mathematica simply does differently from everybody else. Here are a few I ran across:

• To use a pure function (equivalent of a lambda expression), you refer to the argument as #, and the function must end with the & character
• The preferred shorthand for Map is /@ (although you can write the longhand Map)
• To create a Cartesian product of a list with itself n times, the function is called Tuples, which I found pretty counterintuitive

Initially I wanted to convert my flat list into a nested list by pattern matching Haskell style, i.e. f [a,b,c,d] = [[a,b],[c,d]], but I wasn’t sure how to do that, or if the language supports pattern matching on lists. However I ran across Partition[xs,2] which does the job, so I went with that.

Despite the language oddities, the functions are very well documented, so I was able to complete the task fairly quickly. The UI is fairly streamlined and intuitive, so I’m happy with that. I still can’t wrap my head around the syntax — I would like it more if it behaved more like traditional languages — but I suppose I’ll get the hang of it after a while. Here’s the program I came up with:

SearchSpaceLists := Tuples[Range[0, 4], 4]
SearchSpaceMatrices := Map[Function[xs, Partition[xs, 2]], SearchSpaceLists]
Middle := {{2, 0}, {0, 3}}
FilteredMatrices := Select[SearchSpaceMatrices, Mod[Transpose[#].Middle.#, 5] == IdentityMatrix[2] &]
MatrixForm[#] & /@ FilteredMatrices

### Maxima

Maxima is a lightweight, open source alternative to Mathematica; I’ve had friends recommend it as being small and easy to use. The syntax for Maxima is more natural, with things like lists and loops and lambda functions working more or less the way I expect. However, whenever I tried to do something with a function that isn’t the most common use case, I found the documentation lacking and often ended up combing through old forum posts.

Initially I tried to generate a list with a Cartesian product like my Mathematica version, but I couldn’t figure out how to do that; eventually I gave up and used 4 nested for loops because that was better documented. Another thing I had difficulty with was transforming a nested list into a matrix using the matrix command. Normally you would create a matrix with matrix([1,2],[3,4]), so by passing in two parameters. The function doesn’t handle passing in matrix([[1,2],[3,4]]), so to get around that you need to invoke a macro: funmake('matrix,[[1,2],[3,4]]).

Overall I found that the lack of documentation made the system frustrating to work with. I would however use it for simpler computations that fall under the common use cases — these are usually intuitive in Maxima. Here’s the program I came up with:

Middle:matrix([2,0],[0,3]);
Ident:identfor(Middle);
for a:0 thru 4 do
  for b:0 thru 4 do
    for c:0 thru 4 do
      for d:0 thru 4 do
        (P:funmake('matrix,[[a,b],[c,d]]),
         P2:transpose(P).Middle.P,
         if matrixmap(lambda([x],mod(x,5)),P2) = Ident then print(P));

Shortly after writing this I realized I didn’t actually need the funmake macro, since there’s no need to generate a nested list in the first place; I could simply do matrix([a,b],[c,d]). Oh well, the point still stands.

### Maple

Maple is a proprietary system developed by Maplesoft, a company based in Waterloo. Being a Waterloo student, I’ve had some contact with Maple: professors used it for demonstrations, some classes used it for grading. Hence I felt compelled to give Maple a shot.
At first I was pleasantly surprised that matrix multiplication in a finite field was easy — the code to calculate A*B in $\mathbb{Z}_5$ is simply A.B mod 5. But everything went downhill after that. The UI for Maple feels very clunky. Some problems I encountered:

• It’s not clear how to halt a computation that’s in an infinite loop. It doesn’t seem to be possible within the UI, and the documentation suggests it’s not possible in all cases (it recommends manually terminating the process). Of course, this loses all unsaved work, so I quickly learned to save before every computation.
• I can’t figure out how to delete a cell without googling it. It turns out you have to select your cell and a portion of the previous cell, then hit Del.
• Copy and pasting doesn’t work as expected. When I tried to copy code written inside Maple to a text file, all the internal formatting and syntax highlighting information came with it.
• Not a UI issue, but error reporting is poor. For example, the = operator works for integers, but when applied to matrices, it silently returns false. You have to use Equal(a,b) to compare matrices (this is kind of like Java).

In the end, I managed to complete the task but the poor UI made the whole process fairly unpleasant. I don’t really see myself using Maple in the future; if I had to, I would try the command line. Here’s the program I came up with:

with(LinearAlgebra):
with(combinat, cartprod):
L := [seq(0..4)]:
T := cartprod([L, L, L, L]):
Middle := <2,0;0,3>:
while not T[finished] do
  pre_matrix := T[nextvalue]();
  matr := Matrix(2,2,pre_matrix);
  if Equal(Transpose(matr).Middle.matr mod 5, IdentityMatrix(2)) then
    print(matr);
  end if
end do:

### Conclusion

After the brief trial, there is still no clear winner, but I have enough data to form some personal opinions:

• Mathematica is powerful and complete, but has a quirky syntax. It has the most potential — definitely the one I would go with if I were to invest more time into learning a CAS.
• Maxima is lightweight and fairly straightforward, but because of the lack of documentation, it might not be the best tool to do complicated things with. I would keep it for simpler calculations though.
• Maple may or may not be powerful compared to the other two; I don’t know enough to compare it. But its UI is clearly worse and it would take a lot to compensate for that.

## Project Euler 280

March 6, 2010

Project Euler 280 is an interesting problem involving probability and combinatorics: There is a 5×5 grid. An ant starts in the center square of the grid and walks randomly. Each step, the ant moves to an adjacent square (but not off the grid). In each of the five bottom-most squares, there is a seed. When the ant reaches such a square, he picks up the seed (if he isn’t already carrying one). When the ant reaches a square in the top-most row while carrying a seed, he drops off the seed (if there isn’t already a seed there). This ‘game’ ends when all five seeds are in the five squares of the top-most row.

The problem asks for the expected (average) number of turns it would take to get from the initial to the ending position. It requires six digits of precision.

### The Monte Carlo Approach

Perhaps the easiest way to tackle this problem is with a Monte Carlo simulation. This uses a computer to actually run the game many times, using random number generators.
This is my straightforward Monte Carlo implementation in Java:

import java.util.*;

public class Main{
    public static void main(String[] args){
        Random r = new Random();
        int stepsum = 0;
        int tries = 0;
        while(true){
            stepsum += new Simulation(r).simulate();
            tries++;
            System.out.println((double) stepsum / (double) tries);
        }
    }
}

class Simulation{
    static final int[] init = {
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0,
        1,1,1,1,1 };
    State state;
    Random rand;
    int steps;

    Simulation(Random rand){
        state = new State(init, 12, false);
        this.rand = rand;
        steps = 0;
    }

    int simulate(){
        while(!done()) step();
        return steps;
    }

    boolean done(){
        int[] b = state.board;
        return b[0]==1 && b[1]==1 && b[2]==1 && b[3]==1 && b[4]==1;
    }

    boolean step(){
        steps++;
        int antX = state.ant % 5;
        int antY = state.ant / 5;
        int dir;
        while(true){
            // 0:N 1:S 2:E 3:W
            dir = rand.nextInt(4);
            if(antY==0 && dir==0) continue;
            if(antY==4 && dir==1) continue;
            if(antX==4 && dir==2) continue;
            if(antX==0 && dir==3) continue;
            break;
        }
        switch(dir){
            case 0: antY--; break;
            case 1: antY++; break;
            case 2: antX++; break;
            case 3: antX--; break;
        }
        int oldAnt = state.ant;
        state.ant = 5*antY + antX;
        if(state.carrying){
            state.board[oldAnt]--;
            state.board[state.ant]++;
        }
        if(antY == 0 && state.board[state.ant] == 1 && state.carrying){
            // drop off
            state.carrying = false;
        }
        if(antY == 4 && state.board[state.ant] == 1 && !state.carrying){
            // pick up
            state.carrying = true;
        }
        return true;
    }
}

class State{
    // 25 board.
    int[] board;
    int ant;
    boolean carrying;

    State(int[] board, int ant, boolean carrying){
        this.board = board.clone();
        this.ant = ant;
        this.carrying = carrying;
    }

    State(State s){
        this(s.board, s.ant, s.carrying);
    }

    public boolean equals(Object o){
        State s = (State) o;
        return Arrays.equals(s.board, board) && s.ant == ant;
    }

    public int hashCode(){
        return Arrays.hashCode(board) + ant;
    }
}

Running this program will produce something close to the final result. But this will be nowhere near accurate enough to actually solve the problem: running this for a few hours would only produce two or three digits of precision, while six are required. It's not exactly impossible, as on the forums Rodinio claims to have run a similar Monte Carlo simulation on a computer cluster, giving enough precision to start making guesses for the last digits. To get so many digits out of a Monte Carlo simulation requires an astronomical number of tries. Obviously this is not the most efficient approach.

### Introducing the Markov Chain

A more efficient solution can be implemented using Markov chains. A Markov chain models a discrete random process. What is a Markov chain, and how is it used?

Let's say we have a finite set of states, $S = \{ s_1, s_2, \cdots , s_n \}$. Make a matrix $M$, of size $n \times n$. On each step (or move) of the process, the probability of going from state $s_i$ to $s_j$ is $M_{ij}$. The process starts on an initial state. From each state there is a certain probability of going to every other state. Notice that in a Markov chain, what happens next does not depend on what has happened before. The only data is the current state you're on.

I'll explain all this with a very simple example: On a 1×3 game board, Bob (the happy face) starts at square 1. Each step, he randomly moves to an adjacent square with equal probability. The game ends when he reaches square 3. Of course the only time he has a 'choice' in which square to move to is when he's on square 2 (half-half chance of moving to 1 and 3).
If he’s already on square 1, he has a 100% chance of moving to square 2, and if he’s on square 3, well, the game is over. We can represent this game as a transition diagram, or equivalently as a matrix:

$\left[ \begin{array}{ccc} 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 0 & 1 \end{array} \right]$

This type of matrix is called a stochastic matrix. What's special about this matrix is that each row of the matrix adds up to 1. A state that can only return to itself is called an absorbing state. If $M_{ii} = 1$ for some value of $i$, then state $i$ is an absorbing state. Here, state 3 is an absorbing state.

The next step is very important: determining the time until absorption. From our matrix, take out the absorbing rows and their corresponding columns to make it a square matrix. It seems that by doing this we are losing information. But we're actually not. We are treating each absorbing state as the same; and since each row originally added up to 1, the probability of going to the absorbing state is simply 1 minus the sum of the rest of the row. This form is considered the canonical form of the stochastic matrix. Our matrix in canonical form looks like this:

$\left[ \begin{array}{cc} 0 & 1 \\ \frac{1}{2} & 0 \end{array} \right]$

Next we subtract it from the identity matrix.

$\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] - \left[ \begin{array}{cc} 0 & 1 \\ \frac{1}{2} & 0 \end{array} \right] = \left[ \begin{array}{cc} 1 & -1 \\ -\frac{1}{2} & 1 \end{array} \right]$

Next we invert this matrix and get:

$\left[ \begin{array}{cc} 1 & -1 \\ -\frac{1}{2} & 1 \end{array} \right]^{-1} = \left[ \begin{array}{cc} 2 & 2 \\ 1 & 2 \end{array} \right]$

The expected absorption time from state $s_n$ is simply the sum of row $n$ in the above matrix. So because we started on state 1, the expected length of the game is 2+2 or 4. All of this seems like magic, because I haven't given any proof for this. If you really want to prove that this method works, you can get a textbook about Markov chains. "Markov Chains" by J. R. Norris (Cambridge 1997) is a decent textbook if you want to know more about this.

### An attempt using Markov Chains

Now that we know some basics of Markov chains, we need to apply them to solve the problem. If you attach a Set or something to the Monte Carlo simulation program, you would find that there are 10270 distinct states for this problem. If we count the five final states as the same state, then we have 10266 states. This is exactly the same as what we've just done, except instead of a 3×3 matrix, we're now working with matrices with tens of thousands of rows and columns. Woo hoo.

You can probably see here that this is going to be a problem. A 10265×10265 (final state not counted) matrix contains over 100 million elements. Most of these elements will be zero. If we store all of this data in double types, that's 800 MB of memory used, just for the matrix. It's easy to underestimate the size of this problem. But there are a few optimizations we can make. We could save some time and space by subtracting from the identity matrix at the same time as filling up the matrix.

Inverting a 10265×10265 matrix is no small task either. Instead there's a simple optimization to avoid inverting such a huge matrix: Remember that we want to find the inverse of the matrix, and get the sum of all the elements in the first row.
Let $M$ be our matrix and $M'$ be the inverse. What's important here is that finding the sum of each row of a matrix is the same as multiplying by a column of ones. This produces a one-column matrix, each cell representing the sum of the corresponding row. If $C$ is the column of ones, and $S$ is the sum matrix (what we want), then $M' * C = S$. We can also write that as $M * S = C$. See, this way we don't have to compute the inverse. We just have to solve the above equation for $S$. To avoid implementing the standard matrix solve function, I'll be using the JAMA library for matrix manipulation. Finally, the source code for my implementation of what I just discussed:

import java.util.*;
import Jama.*;

public class Main{
    // For a state, maps to all possible states from it.
    // All next-states have the same probability.
    static Map<State,List<State>> states = new LinkedHashMap<State,List<State>>();

    public static void main(String[] args){
        // Keep the time.
        long startTime = System.currentTimeMillis();

        // Initialize the state map.
        addStates(State.INIT_STATE);

        // To construct the matrix, we need to map all the states to a position
        // in the matrix (an integer). Because our map is already ordered, we
        // use its position in the map as the position in the matrix. In order
        // to avoid doing a linear search through the keyset to find its position,
        // we cache its position in a map.
        Set<State> keySet = states.keySet();
        Map<State,Integer> positionMap = new HashMap<State,Integer>();

        // Set up position map.
        Iterator<State> iterator = keySet.iterator();
        int pos = 0;
        while(iterator.hasNext()){
            positionMap.put(iterator.next(), pos);
            pos++;
        }

        // Finally we can begin constructing our matrix.
        // This takes up about 900 MB of memory.
        double[][] matrix = new double[states.size()][states.size()];

        // Fill up the matrix.
        Collection<List<State>> values = states.values();
        Iterator<List<State>> rowIterator = values.iterator();
        for(int row=0; row<matrix.length; row++){
            // Fill up one row.
            List<State> thisRow = rowIterator.next();
            // What to fill with?
            double fill = 1.0 / thisRow.size();
            // An optimization: why not do the subtraction step here
            // instead of subtracting from identity matrix?
            matrix[row][row] = 1;
            for(State st : thisRow){
                // The final state doesn't count.
                if(st.equals(State.FINAL_STATE)) continue;
                int col = positionMap.get(st);
                matrix[row][col] = -fill;
            }
        }

        // Finishing up.
        Matrix bigMatrix = new Matrix(matrix);
        Matrix onesColumn = new Matrix(states.size(), 1, 1);
        Matrix sums = bigMatrix.solve(onesColumn);
        System.out.println(sums.get(0,0));
        System.out.println(System.currentTimeMillis() - startTime + "ms");
    }

    // Returns a list of possible continuation states from the current one.
    static List<State> nextStates(State state){
        // Current and changing position of the ant
        int antX = state.ant % 5;
        int antY = state.ant / 5;

        // Whether it can go into each of the four directions (N,S,E,W respectively).
        boolean[] possibleDirs = new boolean[4];
        Arrays.fill(possibleDirs, true);

        // Take out some directions if it's in the corner.
        if(antY == 0) possibleDirs[0] = false; // Can't go north
        if(antY == 4) possibleDirs[1] = false; // Can't go south
        if(antX == 4) possibleDirs[2] = false; // Can't go east
        if(antX == 0) possibleDirs[3] = false; // Can't go west

        // Construct a list of continuations.
        List<State> nextStates = new ArrayList<State>();

        // Loop through the four directions.
        for(int i=0; i<4; i++){
            // Cannot go this direction.
            if( !(possibleDirs[i])) continue;
            int newAntX = antX;
            int newAntY = antY;
            // Modify direction.
            switch(i){
                case 0: newAntY--; break;
                case 1: newAntY++; break;
                case 2: newAntX++; break;
                case 3: newAntX--; break;
            }

            // Start constructing new state object.
            int oldAnt = state.ant; // old ant position
            int newAnt = 5*newAntY + newAntX;
            int[] board = state.board.clone();
            boolean carrying = state.carrying;

            // Carrying a seed. Notice that a square can contain
            // two seeds at once (but not more); seeds are indistinguishable
            // so we just need to keep track of the number of seeds
            // on each square.
            if(carrying){
                board[oldAnt]--;
                board[newAnt]++;
            }

            // Drop off the seed.
            if(newAntY == 0 && board[newAnt] == 1 && carrying) carrying = false;

            // Pick up a new seed.
            if(newAntY == 4 && board[newAnt] == 1 && !carrying) carrying = true;

            // Treat the five done positions the same.
            if(donePosition(board))
                nextStates.add(State.FINAL_STATE);
            else
                nextStates.add(new State(board, newAnt, carrying));
        }
        return nextStates;
    }

    // Recursively add all continuation states.
    // (Helper name assumed; the original method header was lost.)
    static void addStates(State state){
        if(states.containsKey(state)) return;
        List<State> nexts = nextStates(state);
        states.put(state, nexts);
        // Recurse (but not if we've reached the final state).
        for(State next : nexts)
            if( !(next.equals(State.FINAL_STATE)))
                addStates(next);
    }

    // Is the board in the done position?
    static boolean donePosition(int[] b){
        return b[0]==1 && b[1]==1 && b[2]==1 && b[3]==1 && b[4]==1;
    }
}

class State{
    static final State INIT_STATE = new State( new int[]{
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0,
        1,1,1,1,1 }, 12, false);

    // Consider all final states the same; there is
    // no ant position.
    static final State FINAL_STATE = new State( new int[]{
        1,1,1,1,1,
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0,
        0,0,0,0,0 }, -1, true);

    // 25 board.
    int[] board;
    int ant;
    boolean carrying;

    State(int[] board, int ant, boolean carrying){
        this.board = board;
        this.ant = ant;
        this.carrying = carrying;
    }

    State(State s){
        this(s.board, s.ant, s.carrying);
    }

    public boolean equals(Object o){
        State s = (State) o;
        return Arrays.equals(s.board, board) && s.ant == ant && s.carrying == carrying;
    }

    public int hashCode(){
        return Arrays.hashCode(board) + ant;
    }

    // For debugging mostly.
    public String toString(){
        StringBuilder ret = new StringBuilder("\n");
        for(int i=0; i<5; i++){
            for(int j=0; j<5; j++){
                int pos = 5*i + j;
                if(ant == pos) ret.append("#");
                else ret.append(board[pos] >= 1 ? "+" : "-");
            }
            ret.append("\n");
        }
        return ret.toString();
    }
}

Running it, my program takes about 19 minutes to run, using up over 2 GB of RAM. The construction of the matrix only takes about 2 seconds, and the rest of the time is used on solving the huge matrix. JAMA is probably not the fastest matrix library; the running time might be cut down a bit if we use libraries designed for sparse matrices. But to get this to run in under a minute, a completely different approach would be needed. For now, though, I'm pretty happy with getting it in 19 minutes.

## Solving systems of linear equations in Haskell

February 21, 2010

Haskell isn't normally used for things like this, but it's quite possible to solve systems of linear equations with Haskell. There are already several libraries for doing this, and other more advanced matrix manipulations. But here, I'm going to start simple. In mathematics, systems of linear equations are usually represented by an augmented matrix. A system of n linear equations would be represented by an augmented matrix with n rows and n+1 columns.
For example, we have this system of equations:

$\begin{array}{rrrcl} x &+2y &-z &=& -4 \\ 2x &+3y &-z &=& -11 \\ -2x &&-3z &=& 22 \end{array}$

This would be represented as an augmented matrix:

$\left[ \begin{array}{ccc|c} 1 & 2 & -1 & -4 \\ 2 & 3 & -1 & -11 \\ -2 & 0 & -3 & 22 \end{array} \right]$

In Haskell we represent this as a list of lists, like this:

[ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]

Here I'll store each entry not as an integer, but as a floating point. You could also use Rational in Data.Ratio, but both are fine for now. The advantage of using Rational over Float is that sometimes you will end up with fractions that don't work very well with floating point numbers. However, I've found that printing a list of lists of Rational types makes it difficult to read, unless you implement a custom show function for it. So this is how we define our matrix types in Haskell:

type Row = [Float]
type Matrix = [Row]

The approach to solving this problem is rather simple. First we reduce whatever matrix we have to REF, or Row Echelon Form, and then get the actual roots with some back substitution. The algorithm used to transform a matrix to its Row Echelon Form is known as Gaussian Elimination. Here's what a matrix should look like after Gaussian Elimination (a $*$ represents any value):

$\left[ \begin{array}{ccc|c} 1 & * & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 1 & * \end{array} \right]$

Our matrix should look like this after Gaussian Elimination:

$\left[ \begin{array}{ccc|c} 1 & 2 & -1 & -4 \\ 0 & 1 & -1 & 3 \\ 0 & 0 & 1 & -2 \end{array} \right]$

The REF form is not unique, so that is only one of the possible valid outputs for the Gaussian Elimination. Why do we want to have the matrix in REF form? A matrix in this form can easily be solved using back substitution. Consider this matrix as a series of linear equations, as we did before:

$\begin{array}{rrrcl} x &+2y &-z &=& -4 \\ &+y &-z &=& 3 \\ &&z &=& -2 \end{array}$

Now it would be very clear how to solve for the three variables.

## The Gaussian Elimination Algorithm

The algorithm proceeds one pivot at a time: on each iteration, one diagonal element is considered the pivot element, while the elements below it are the ones we intend to remove (zero out). Removing those elements is actually quite simple. Consider how you would eliminate $x$ in equation $B$ here:

$\begin{array}{lrrcl}(A) & x & +2y & = & 4 \\(B) & 2x & +y & = & 5 \end{array}$

Probably you would multiply equation $A$ by 2, giving $2x + 4y = 8$, then subtract $B$ from it, giving $3y=3$, eliminating $x$. We can also write that as $B = 2A - B$. Basically to eliminate a variable, just multiply a row so it matches up, and subtract. This is middle school algebra.

To make things easier for us, we divide the row we are on so that the pivot is always 1. We do this now because we need them to be 1 anyways, and this avoids an unnecessary division in the next step. We could, of course, not have the pivot always be 1, but we would have to do the divisions later when substituting to get the solutions. So to eliminate the entry under the pivot, multiply the pivot row by that entry and subtract. We simply repeat this for all elements under the pivot.

### An edge case

This is where it gets a little bit tricky. What if the pivot is 0? We have no way of making it 1 by any kind of multiplication. Further, we cannot eliminate any elements below the pivot. What do we do now? Simple.
We swap the current row with any other row so that the pivot is not zero. Any row will do, so we'll just pick the first one that fits. If there is not a single element below the pivot that is not zero, the matrix is either under-determined or singular; in either case it is unsolvable. Here is my Haskell code on what I just covered:

gaussianReduce :: Matrix -> Matrix
gaussianReduce matrix = fixlastrow $ foldl reduceRow matrix [0..length matrix-1] where
  --swaps element at position a with element at position b.
  swap xs a b
    | a > b = swap xs b a
    | a == b = xs
    | a < b = let (p1,p2) = splitAt a xs
                  (p3,p4) = splitAt (b-a-1) (tail p2)
              in p1 ++ [xs!!b] ++ p3 ++ [xs!!a] ++ (tail p4)

  reduceRow matrix1 r = let
    --first non-zero element on or below (r,r).
    firstnonzero = head $ filter (\x -> matrix1 !! x !! r /= 0) [r..length matrix1-1]
    --matrix with row swapped (if needed)
    matrix2 = swap matrix1 r firstnonzero
    --row we're working with
    row = matrix2 !! r
    --make it have 1 as the leading coefficient
    row1 = map (\x -> x / (row !! r)) row
    --subtract nr from row1 while multiplying
    subrow nr = let k = nr!!r in zipWith (\a b -> k*a - b) row1 nr
    --apply subrow to all rows below
    nextrows = map subrow $ drop (r+1) matrix2
    --concat the lists and repeat
    in take r matrix2 ++ [row1] ++ nextrows

  fixlastrow matrix' = let
    a = init matrix'; row = last matrix'; z = last row; nz = last (init row)
    in a ++ [init (init row) ++ [1, z / nz]]

Edit: There was a bug in the above code, found by Alan Zimmerman. I think it's been fixed. This may be a bit difficult to read because there is no syntax highlighting and the code is cut off. I'll provide a link to the full source code at the end.

Admittedly Haskell may not have been the best language to implement this algorithm this particular way, because there is so much state changing. Any language that allows mutable state would probably perform better than this code. Notice that at the end, the last row does not get divided. The fixlastrow function corrects this problem. Let's test this code:

*Main> gaussianReduce [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[[1.0,2.0,-1.0,-4.0],[0.0,1.0,-1.0,3.0],[-0.0,0.0,1.0,-2.0]]

Excellent.

## Finishing up

The next step of the algorithm is to solve the variables by back substitution. This is pretty easy, I think. My code keeps a list of already-found solutions. Folding from the right, each step it substitutes in the corresponding solution and multiplies & subtracts to get the next solution, adding that to the solution list.

--Solve a matrix (must already be in REF form) by back substitution.
substitute :: Matrix -> Row
substitute matrix = foldr next [last (last matrix)] (init matrix) where
  next row found = let
    subpart = init $ drop (length matrix - length found) row
    solution = last row - sum (zipWith (*) found subpart)
    in solution : found

To get a list of solutions from a matrix, we chain the substitute and gaussianReduce functions:

solve :: Matrix -> Row
solve = substitute . gaussianReduce

*Main> solve [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[-8.0,1.0,-2.0]

This means the solutions are $(x,y,z) = (-8,1,-2)$. That seems correct, so we're done! The code is far from practical, though. Although it works, I haven't really tested its performance (probably not very good), and it doesn't handle all edge cases.
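As a quick cross-check outside Haskell (my own addition), NumPy's solver gives the same roots for this system:

```python
# Solve the same augmented system with numpy.linalg.solve.
import numpy as np

A = np.array([[1, 2, -1],
              [2, 3, -1],
              [-2, 0, -3]], dtype=float)
b = np.array([-4, -11, 22], dtype=float)
print(np.linalg.solve(A, b))  # [-8.  1. -2.]
```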
# General Solution Of Partial Differential Equation

How to check a solution of a partial differential equation? So the solution is a function that takes parameters ${a,b,c,d}$, and the function is constructed. Such equations have two independent solutions, and a general solution is just a superposition of the two solutions. 9 The Dirichlet Principle. I am going to examine only one corner of it, and will develop only one tool to handle it: Separation of Variables. 336 course at MIT in Spring 2006, where the syllabus, lecture materials, problem sets, and other miscellanea are posted. The introduction contains all the possible efforts to facilitate the understanding of Fourier transform methods for which a qualitative theory is available, and some illustrative examples are also given. Partial Differential Equations Version 11 adds extensive support for symbolic solutions of boundary value problems related to classical and modern PDEs. Find more Mathematics widgets in Wolfram|Alpha. The differential equation cannot be integrated directly because of the term on the right hand side. Two systems of index-one and index-three are solved to show that PSM can provide analytical solutions of PDAEs in convergent series form. Now, recall that we arrived at the characteristic equation by assuming that all solutions to the differential equation will be of the form $y(t) = e^{rt}$. Plugging our two roots into the general form of the solution gives the following solutions to the differential equation. In contrast to the first two equations, the solution of this differential equation is a function φ that will satisfy it. 303 Linear Partial Differential Equations Matthew J. Numerical analysis of partial differential equations is vital to understanding and modeling these complex problems. I understand that it works in the sense that the solutions it finds are consistent with the differential equations, but how do we know that the solutions couldn't be. Summary: It is usually not easy to determine the type of a system. Chasnov Hong Kong June 2019 iii. Solutions of Partial Differential Equations with a Movable Pole. Evidently, the solution curves are the level curves of $\varphi(x,t) = xe^{t^2/2}$, and since the PDE reduces to the ODE $u_s = 0$ along level curves of $\varphi$, the solution $u$ of the partial differential equation is constant along these curves (see the sketch below). 5 Well-Posed Problems 25. And what we'll see in this video is that the solution to a differential equation isn't a value or a set of values. Performing the Painlevé Test and Truncated Expansions for Studying Some Nonlinear Equations 36. Definition 4: A solution of a partial differential equation is any function that, when substituted for the unknown function in the equation, reduces the equation to an identity. In a partial differential equation (PDE), the function being solved for depends on several variables, and the differential equation can include partial derivatives taken with respect to each of the variables. Multiplying by $\mu(t)$ gives the following equation. The result of the first two examples compared with (MSV) and (VIM) tells us that these methods can be. analysis of the solutions of the equations. The solution is unique. Answers Partial Differential Equations: In general, when is the function of a harmonic function.
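The level-curve remark above can be made concrete. Assuming the garbled expression was $\varphi(x,t) = xe^{t^2/2}$, the matching first-order PDE would be $u_t - xt\,u_x = 0$ (my reconstruction; the equation itself is not stated in the text), and sympy confirms that any $u = f(\varphi)$ solves it:

```python
# Check that u(x,t) = f(x*exp(t**2/2)) satisfies u_t - x*t*u_x = 0,
# i.e. u is constant along the level curves of phi(x,t) = x*exp(t**2/2).
import sympy as sp

x, t = sp.symbols('x t')
f = sp.Function('f')
u = f(x * sp.exp(t**2 / 2))
print(sp.simplify(sp.diff(u, t) - x*t*sp.diff(u, x)))  # -> 0
```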
Homogeneous Linear Equations with constant coefficients: Write down the characteristic equation $ar^2 + br + c = 0$. (1) If $r_1$ and $r_2$ are distinct real numbers (this happens if $b^2 - 4ac > 0$), then the general solution is $y = c_1 e^{r_1 x} + c_2 e^{r_2 x}$. (2) If $r_1 = r_2$ (which happens if $b^2 - 4ac = 0$), then the general solution is $y = c_1 e^{r_1 x} + c_2 x e^{r_1 x}$. (3) If the roots are complex, $r = \alpha \pm i\beta$ (which happens if $b^2 - 4ac < 0$), then the general solution is $y = e^{\alpha x}(c_1 \cos \beta x + c_2 \sin \beta x)$, where each $c_i$ is an arbitrary constant. $\frac{\partial ^2 f}{\partial x \partial y}=e^{x+2y}$ I know these are relatively easy to solve; I haven't done them in a while and have forgotten how to go about solving them, and I haven't yet found a good internet source that explains them straightforwardly. Therefore, a general framework of the NIM is presented for analytical treatment of fractional partial differential equations in fluid mechanics. 1 What is a PDE? A partial differential equation (PDE) is an equation involving partial derivatives. This textbook gives an introduction to Partial Differential Equations (PDEs), for any reader wishing to learn and understand the basic concepts, theory, and solution techniques of elementary PDEs. Partial differential equations are often used to construct models of the most basic theories underlying physics and engineering. 8 Relationships between Different Partial Differential Equations. A general discussion of partial differential equations is both difficult and lengthy. Using this in. Here z will be taken as the dependent variable and x and y the independent. Often, systems described by differential equations are so complex, or the systems that they describe are so large,.

• First-order linear ODE:
• A first order linear differential equation has the following form: $\frac{dy}{dx} + P(x)y = Q(x)$
• The general solution is given by $y = \frac{\int \mu(x)Q(x)\,dx + C}{\mu(x)}$
• where $\mu(x) = e^{\int P(x)\,dx}$ is called the integrating factor (see the sketch below)

In this class, we will develop skills to solve linear second order partial differential equations (in particular, Laplace, wave and diffusion equations) using the methods of characteristics, separation of variables and integral transforms. for both equations. Evans Department of Mathematics, UC Berkeley. Inspiring Quotations: A good many times I have been present at gatherings of people who, by the standards of traditional culture, are thought highly educated and who have with considerable gusto. Partial Differential Equations generally have many different solutions: $\frac{\partial^2 u}{\partial x^2} = a$ and $\frac{\partial^2 u}{\partial y^2} = -a$. Evidently, the sum of these two is zero, and so the function u(x,y) is a solution of the partial differential equation $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ (Laplace's Equation). Recall the function we used in our reminder. Volume 4, Issue 2, August 2014 64 Abstract— Using a finite Lie group of scaling transformations, the similarity solution is derived for a partial differential equation of fractional order α. This is also true for a linear equation of order one, with non-constant coefficients. Well, now we can take the partial derivative of the pseudo-solution with respect to y. Basics of wave equation (time permitting). • General Form, • For Example, $\frac{dy}{dx} = 2x + 3$. In the early 19th century there was no known method of proving that a given second- or higher-order partial differential equation had a solution, and there was not even a…. They can be.
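The integrating-factor recipe in the bullet list above can be verified mechanically. Here is a small sympy sketch (my own addition; $P = -4/x$ echoes the example mentioned elsewhere in this text, while $Q = x^5$ is an arbitrary choice):

```python
# Integrating-factor solution of y' + P(x)*y = Q(x), cross-checked
# against sympy's own ODE solver.
import sympy as sp

x = sp.symbols('x', positive=True)
C = sp.Symbol('C')
y = sp.Function('y')

P, Q = -4/x, x**5
mu = sp.exp(sp.integrate(P, x))                    # integrating factor mu(x)
general = (sp.integrate(mu*Q, x) + C) / mu         # formula from the bullet list
print(sp.simplify(general))                        # x**6/2 + C*x**4
print(sp.dsolve(sp.Eq(y(x).diff(x) + P*y(x), Q)))  # same family of solutions
```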
The theory, which applies to scalar fully nonlinear PDEs of the form $$F(x, u, Du, D^2u)=0$$, has yielded very general existence and uniqueness theorems. This equation is then combined with a model of exit and entry, for instance taking the form of a variational inequality of the obstacle type, derived from an optimal stopping time problem. Access Partial Differential Equations 2nd Edition Chapter 4. N-th order differential equation. To do this sometimes to be a replacement. This manual strongly recommends that you read Shepley L. Ross, Differential Equations. Find the particular solution given that y(0)=3. A solution (or particular solution) of a differential equation. Basic definitions and examples. To start with: partial differential equations, just like ordinary differential or integral equations, are functional equations. In this lecture, Michael Crandall provides an excellent expository introduction to the theory of viscosity solutions of partial differential equations. Here, we shall learn a method for solving partial differential equations that complements the technique of separation of variables. For example, the system of partial differential equations known as Maxwell's equations can be written on the back of a postcard, yet from these equations one can derive the entire theory of electricity and magnetism, including light. The first three worksheets practise methods for solving first order differential equations which are taught in MATH108. Evans, Graduate Texts in Mathematics vol. Both equations are linear equations in standard form, with P(x) = -4/x. Aims: The aim of this course is to introduce students to general questions of existence, uniqueness and properties of solutions to partial differential equations. FIRST ORDER DIFFERENTIAL EQUATIONS 1. Bayesian inference with partial differential equations using Stan. Author Yi Zhang1, William R. The objective in the following examples is to show some of the substitutions which may be used in the solution of the types of equation which occur in scientific and engineering applications. Finally, we will learn about systems of linear differential equations, including the very important normal modes problem, and how to solve a partial differential equation using separation of variables. One such class is partial differential equations (PDEs). This equation can be used to model air pollution, dye dispersion, or even traffic flow, with u representing the density of the pollutant (or dye or traffic) at position x and time t. PDE = differential equation in which all dependent variables are a function of several independent variables, as in the second example. It can be referred to as an ordinary differential equation (ODE) or a partial differential equation (PDE) depending on whether or not partial derivatives are involved. Comments on Course Content: Here is an outline of the topics to be covered: these lay a foundation for a.
Mathew has written: 'Domain decomposition methods for the numerical solution of partial differential equations' -- subject(s): Decomposition method, Differential equations, Partial. But in general, differential equations have lots of solutions. - Joseph Fourier (1768-1830) 1. An excellent account of the available approximate methods of solutions for random differential equations is presented by Lax (1980). Find the general solution for the differential equation dy + 7x dx = 0 b. We will largely follow the textbook by Richard Haberman. So the solution here, so the solution to a differential equation is a function, or a set of functions, or a class of functions. We shall elaborate on these equations below. Generally, the goal of the method of separation of variables is to transform the partial differential equation into a system of ordinary differential equations, each of which depends on only one of the functions in the product form of the solution. We will do so by developing and solving the differential equations of flow. 336 Spring 2006 Numerical Methods for Partial Differential Equations Prof. I could not develop any one subject in a really thorough manner; rather, my aim was to present the essential. Partial differential equation. First Order Partial Differential Equation - Solution of Lagrange Form. $u_{xx} + (x^2 + y)u_{yy} = 0$. Although the equation is not a difficult one, the ease of solution is noteworthy and within the capability of the many to whom partial differential equations are a closed field. In this dissertation, a closed-form particular solution for more general partial differential operators with constant coefficients has been derived for polynomial basis functions. Recall that a differential equation is an equation (has an equal sign) that involves derivatives. In addition to computing the coefficients $a_n, b_n$, it will also compute the partial sums (as a string), plot the partial sums (as a function of x over (-L,L), for comparison with the plot of f(x) itself), compute the value of the FS at a point, and similar computations for the cosine series (if f(x) is even) and the sine series (if f(x) is odd). The Laplace Equation as the Prototype of an Elliptic Partial Differential Equation of Second Order. Its focus is primarily upon finding solutions to particular equations rather than general theory. 5 The One Dimensional Heat Equation 69 3. The Applied Mathematics and Differential Equations group within the Department of Mathematics has a great diversity of research interests, but a unifying theme in each respective research program is its connection and relevance to problems or phenomena which occur in the engineering and physical sciences. Lump solutions to nonlinear partial differential equations via Hirota bilinear forms, Wen Xiu Ma | Yuan Zhou. Initial–boundary value problems for the general coupled nonlinear Schrödinger equation on the interval via the Fokas method. A partial differential equation (PDE) is a relation between a function of several variables and its derivatives. Johnson, Dept. 1 Preview of Problems and Methods 231 5.
If y1(t) and y2(t) are two solutions to a linear, second order homogeneous differential equation and they are “nice enough”, then the general solution to the linear, second order homogeneous differential equation is given by (3). On Jan 1, 2012, Andrei D. Before doing so, we need to define a few terms. Partial Differential Equations. Homogeneous PDE: If all the terms of a PDE contain the dependent variable or its partial derivatives, then such a PDE is called a homogeneous partial differential equation, and non-homogeneous otherwise. course, will be in the nontrivial solutions. 4 D'Alembert's Method 60 3. 3 Solution of the One Dimensional Wave Equation: The Method of Separation of Variables 31 3. The singular solution usually corresponds to the envelope of the family of integral curves of the general solution of the differential equation. Possible initial and boundary conditions and their impact on the solutions will be investigated. …theory of differential equations concerns partial differential equations, those for which the unknown function is a function of several variables. 2 Dirichlet Problems with Symmetry 233 5. Partial Differential Equations Reading: P1-P20 of Durran, Chapter 1 of Lapidus and Pinder (Numerical solution of Partial Differential Equations in Science and Engineering). Before even looking at numerical methods, it is important to understand the types of equations we will be dealing with. The general solution of the differential equation is the relation between the variables x and y which is obtained after removing the derivatives (i.e., integrating), where the relation contains an arbitrary constant. Example 1 - Separation of Variables form. That's a much better approach considering, after I looked into it a bit, that an arbitrary solution of the wave equation. Second, because the problem in general has to be analyzed approximately, a partial differential equation need not be a good starting point. Another is that for the class of partial differential equation represented by Equation Y(6)−coor, the boundary conditions in the. The classical approach that dominated. Mathematicians have studied the nature of these equations for hundreds of years and there are many well-developed solution techniques. This principle is used extensively in solving linear partial differential equations by the method of separation of variables.
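That superposition principle is also easy to check symbolically. A minimal sketch (my own illustration) with two harmonic functions and Laplace's equation:

```python
# If u1 and u2 solve Laplace's equation, so does any c1*u1 + c2*u2.
import sympy as sp

x, y, c1, c2 = sp.symbols('x y c1 c2')
u1 = x**2 - y**2           # harmonic
u2 = sp.exp(x)*sp.sin(y)   # harmonic
u = c1*u1 + c2*u2
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(laplacian))  # -> 0
```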
Representation Formula for the Solution of the Dirichlet Problem on the Ball (Existence Techniques 0). Pinsky: Partial Differential Equations and Boundary-Value Problems with Applications, supplementary problems with answers. Fans and Rarefaction Waves. Topics on partial differential equations: Reinhard Farwig, Department of Mathematics, Darmstadt University of Technology, 64283 Darmstadt, Germany; Hideo Kozono, Mathematical Institute, Tohoku University, Sendai 980-8578, Japan; Hermann Sohr, Faculty of Electrical Engineering, Informatics and Mathematics, University of Paderborn, 33098 Paderborn, Germany. Analytic Solutions of Partial Differential Equations, MATH3414, School of Mathematics, University of Leeds, 15 credits, taught Semester 1, year running 2003/04. On the previous page on the Fourier Transform applied to differential equations, we looked at the solution to ordinary differential equations. After introducing each class of differential equations we consider finite difference methods for the numerical solution of equations in the class. Gillespie, Minjie Zhu (Metrum Research Group; School of Civil and Construction Engineering, Oregon State University). Therefore a partial differential equation contains one dependent variable and more than one independent variable. (n starts at 0 and ends at 10) into a numerical solver like Mathematica or Maple. 1 What is a PDE? A partial differential equation (PDE) is an equation involving partial derivatives. See Example 4. A classification of linear second-order partial differential equations: elliptic, hyperbolic and parabolic. In fact, this is the general solution of the above differential equation. Linear First-order Equations. For a discussion of the more general transport equation and its solutions, see [1]. Evidently, the solution curves are the level curves of $\varphi(x,t) = x e^{t^2/2}$, and since the PDE reduces to the ODE $u_s = 0$ along level curves of $\varphi$, the solution $u$ of the partial differential equation is constant along these curves. The most general such solution has the form $u(x,t) = f\!\left(x e^{t^2/2}\right)$ for an arbitrary smooth function of one variable $f$. (6) is not a good way to look at the general problem for several reasons. ... is a second-order quasilinear partial differential equation. For senior undergraduates of mathematics, the course on Partial Differential Equations will soon be uploaded to www. This research area includes analysis of differential equations, especially those which occur in applications in the natural sciences, such as fluid dynamics, materials science, or mathematical physics. A hard copy is also on reserve. 155, where $$h$$ is a continuous function and the associated solution $$u$$ of the boundary value problem has no finite Dirichlet integral. The solution of a stochastic partial differential equation (SPDE) of evolutionary type is, with respect to a reasonable state space, in general not a semimartingale anymore and does therefore in general not satisfy an Itô formula like the solution of a finite dimensional stochastic ordinary differential equation. Consider again the IVP (). We will only talk about explicit differential equations. This is very essential in all scientific investigation. All of these studies were based on deriving formal power series which were believed to approximate periodic solutions of the partial differential equations.
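The passage never states the transport equation explicitly, but from the characteristic curves $x e^{t^2/2} = \text{const}$ it must be $u_t - x t\,u_x = 0$ (an inference from the level curves, not a formula quoted from the source); the claimed general solution then checks directly:

$$u_t = f'\!\left(x e^{t^{2}/2}\right) x t\, e^{t^{2}/2},\qquad u_x = f'\!\left(x e^{t^{2}/2}\right) e^{t^{2}/2},$$

$$u_t - x t\, u_x = x t\, f' e^{t^{2}/2} - x t\, f' e^{t^{2}/2} = 0.$$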
AN INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS: a complete introduction to partial differential equations, this textbook provides a rigorous yet accessible guide to students in mathematics, physics and engineering. Hancock, Fall 2006, 1 The 1-D Heat Equation. We encounter partial differential equations routinely in transport phenomena. for both equations. The present paper deals with a general introduction and classification of partial differential equations and the numerical methods available in the literature for the solution of partial differential equations. Our aim is to generalize the wavelet collocation method to fractional partial differential equations using cubic B-spline wavelets. MOL allows standard, general-purpose methods and software, developed for the numerical integration of ordinary differential equations (ODEs) and differential algebraic equations (DAEs), to be used. ABSTRACT: When the domain is a polygon of , the solution of a partial differential equation is written as a sum of a regular part and a linear combination of singular functions. Solutions of Partial Differential Equations with a Movable Pole. We show that for a large class of evolutionary nonlinear and nonlocal partial differential equations, symmetry of solutions implies very restrictive properties of the solutions and symmetry axes. 1 Linear partial integro-differential equations: the general form of the. These revision exercises will help you practise the procedures involved in solving differential equations. Finite difference methods for solving partial differential equations are mostly classical low-order formulas, easy to program but not ideal for problems with poorly behaved solutions. It's a function or a set of functions. The first three worksheets practise methods for solving first order differential equations which are taught in MATH108. In this post, we will talk about separable equations. The reader is referred to other textbooks on partial differential equations for alternate approaches. Why not have a try first and, if you want to check, go to Damped Oscillations and Forced Oscillations, where we discuss the physics, show examples and solve the equations. Evans, Graduate Texts in Mathematics vol. The solutions of a homogeneous linear differential equation form a vector space. The Gaussian heat kernel, diffusion equations. Partial Differential Equation Toolbox™ provides functions for solving structural mechanics, heat transfer, and general partial differential equations (PDEs) using finite element analysis. One of the stages of solutions of differential equations is integration of functions. Both equations are linear equations in standard form, with $P(x) = -4/x$. Prove the following theorem: suppose $u$ is a $C^2$ solution of (), and suppose that for some and some $t_0 > 0$, $g$ and $h$ are both identically zero on the set. The equation is, in general, supplemented by additional conditions such as initial conditions (as we have often seen in the theory of ordinary differential equations (ODEs)) or boundary conditions. Partial Differential Equations Solution Manual 5.
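Since finite-difference methods for the 1-D heat equation come up repeatedly here, a minimal explicit (forward-time, centred-space) sketch may be useful; all grid parameters below are illustrative assumptions, and the scheme is only stable when $k\,\Delta t/\Delta x^2 \le 1/2$:

```python
# Explicit FTCS scheme for u_t = k u_xx on [0, 1], u(0,t) = u(1,t) = 0.
# Parameter values are illustrative; stability requires k*dt/dx**2 <= 0.5.
import numpy as np

k, nx, nt = 1.0, 51, 500
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / k               # chosen inside the stability bound

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)              # initial temperature profile

for _ in range(nt):
    u[1:-1] += k * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0             # Dirichlet boundary conditions

# For this initial profile the exact solution is sin(pi x) exp(-k pi^2 t),
# so the computed peak can be compared against the analytic decay.
print(u.max(), np.exp(-k * np.pi**2 * nt * dt))
```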
Locating the text by Sneddon that matches your requirements is sometimes challenging. Notice that if $u_h$ is a solution to the homogeneous equation. PARTIAL DIFFERENTIAL EQUATIONS, JAMES BROOMFIELD, Abstract. Solving Partial Differential Equations. CHAPTER 1 PARTIAL DIFFERENTIAL EQUATIONS: a partial differential equation is an equation involving a function of two or more variables and some of its partial derivatives. For instance, $\left(\frac{dx}{dt}\right)^2 + x^2 + t^2 = -1$ has none. For generality, let us consider the partial differential equation of the form [Sneddon, 1957] in a two-dimensional domain. The Cauchy Problem for First-order Quasi-linear Equations. A first order linear differential equation is of the form $y' + P(x)\,y = Q(x)$; the general solution is given by $y = \frac{1}{\mu(x)}\left(\int \mu(x)\,Q(x)\,dx + C\right)$, where $\mu(x) = e^{\int P(x)\,dx}$ is called the integrating factor. In a partial differential equation (PDE), the function being solved for depends on several variables, and the differential equation can include partial derivatives taken with respect to each of the variables. From the documentation: "DSolve can find general solutions for linear and weakly nonlinear partial differential equations." Partial Differential Equations Solution Manual. These equations are very useful when detailed information on a flow system is required, such as the velocity, temperature and concentration profiles. Self-Similar Analytic Solution of the Two-Dimensional Navier-Stokes Equation with a Non-Newtonian Type of Viscosity. One such class is partial differential equations (PDEs). Partial Differential Equations: Exact Solutions Subject to Boundary Conditions: this document gives examples of Fourier series and integral transform (Laplace and Fourier) solutions to problems involving a PDE and boundary and/or initial conditions. Partial Differential Equations Exam 1 Review Exercises, Spring 2012, Exercise 1. One important requirement for separation of variables to work is that the governing partial differential equation and initial and boundary conditions be linear. Either form -- the closed-form solution or an n-term approximation -- is immediately verifiable. So let me write that down. The Numerical Solution of Ordinary and Partial Differential Equations (3rd edition). We previously defined an initial-value problem for a general nth-order differential equation. It's important to contrast this with a traditional equation. $y' = y - x^2 + 2x$ on $J = \mathbb{R}$. Also, $y(x) = x^3 + c\,x^{-3}$ is a general solution of $xy' + 3y = 6x^3$ (B), and the function $y(x) = x^3$ is a particular solution of equation (B), obtained by taking the particular value $c = 0$ in the general solution of (B). A Particular Solution of a differential equation is a solution obtained from the General Solution by assigning specific values to the arbitrary constants. Performing the Painlevé Test and Truncated Expansions for Studying Some Nonlinear Equations. This concerns a chapter of numerical analysis, the numerical solution of partial differential equations, as it developed in Italy during the crucial incubation period immediately preceding the diffusion of electronic computers. Substitute this known value of k in the pseudo-solution to get. This work presents the application of the power series method (PSM) to find solutions of partial differential-algebraic equations (PDAEs).
Students will understand the basic methods for solving the Laplace, heat, and wave equations. - [Instructor] So let's write down a differential equation: the derivative of y with respect to x is equal to four y over x. Partial differential equations, a nonlinear heat equation, played a central role in the recent proof of the Poincaré conjecture, which concerns characterizing the sphere, $S^3$, topologically. In the case of partial differential equations, most of the equations have no general solution. Clearly, this initial point does not have to be on the y axis. Suppose that the frog population P(t) of a small lake satisfies the differential equation dP. Such a solution is called a general solution of the differential equation. The ideas can be used to solve many kinds of first order partial differential equations. Lectures on Elliptic and Parabolic Equations in Sobolev Spaces. See Differential equation, partial, complex-variable methods. Then we derive the well-known one-dimensional diffusion equation, which is a partial differential equation for the time-evolution of the concentration of a dye over one spatial dimension. Solutions of fractional differential equations are needed. The problem with that approach is that only certain kinds of partial differential equations can be solved by it, whereas others. We will learn about the Laplace transform and series solution methods. Find the particular solution given that y(0) = 3. Let's start with some simple examples of the general solutions of PDEs without invoking boundary conditions. A solution (or particular solution) of a differential equa-. 3.1 Partial Differential Equations in Physics and Engineering. Dept. of Mathematics Overview. Just like with ordinary differential equations, partial differential equations can be characterized by their order. In mathematics, a partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. The conditions for calculating the values of the arbitrary constants can be provided to us in the form of an Initial-Value Problem, or Boundary Conditions, depending on the problem. We discuss some general qualitative behavior of solutions to expect, since the form of the solutions is more complicated than solutions for ordinary differential equations. Just as in ordinary differential equations, in partial differential equations some boundary conditions will be needed to solve the equations. Comment: Unlike first order equations we have seen previously, the general solution of a second order equation has two arbitrary coefficients.
In a metal rod with non-uniform temperature, heat (thermal energy) is transferred. ...method, differential transformation method and so on [1]-[12]. In addition to these methods, several iterative methods for the solution of initial and boundary value problems in ordinary and partial differential equations were presented. Introduction: What Are Partial Differential Equations? A prototypical example is the `heat equation', governing the evolution of temperature in a conductor. Introduction to Partial Differential Equations - II. A solution or integral of a partial differential equation is a relation connecting the dependent and the independent variables which satisfies the given differential equation. $u_x u_y + u_z = u_{xyz}$. If a dependent variable is a function of two or more independent variables, an equation involving partial differential coefficients is called a partial differential equation. This is not so informative, so let's break it down a bit. $u_{x_i}$ denotes the partial derivative $\partial u/\partial x_i$. These iterative procedures provide the solution or. Substitute into the differential equation and then try to modify it, or choose appropriate values of its parameters. In general, the order of a differential equation is the order of the highest derivative of the unknown function. Although much work has been done elsewhere, the solution of partial differential equations is a relatively new field for the Caltech Computer. The section also places the scope of studies in APM346 within the vast universe of mathematics. Differential equations: a second order differential equation is a mathematical relation that relates the independent variable, the unknown function, and its first and second derivatives. ...the heat equation, the wave equation, and Poisson's equation. In this two-part treatise, we present our developments in the context of solving two main classes of problems: data-driven solution and data-driven discovery of partial differential equations. Chapter 5: Partial Differential Equations in Spherical Coordinates. This course provides an introduction to finite difference and finite element methods for the numerical solution of elliptic, parabolic, and hyperbolic partial differential equations. The purpose of this seminar paper is to introduce the Fourier transform methods for partial differential equations. The results of the first two examples, compared with (MSV) and (VIM), tell us that these methods can be. Fully-nonlinear First-order Equations. Subharmonic Functions. Hilbert triples.
The general solution of an nth-order ordinary differential equation contains n arbitrary constants, resulting from integrating n times. A Differential Equation is an equation with a function and one or more of its derivatives; for example, an equation with the function y and its derivative dy/dx. Here we will look at solving a special class of Differential Equations called First Order Linear Differential Equations. Differential equations arise as common models in the physical, mathematical, biological and engineering sciences. (11) is called an inhomogeneous linear equation. The study of nonlinear partial differential equations (PDEs) is a vast area. For a discussion of the physical model, see [2]. The one notable exception is the one-dimensional wave equation $\frac{\partial^2 u}{\partial t^2} - c^2 \frac{\partial^2 u}{\partial x^2} = 0$. All the solutions are given by the implicit equation. Second Order Differential Equations. Example 2. Partial Differential Equations Solution Manual. In general, Laplace's equation is the canonical form of a second-order linear elliptic partial differential equation: $\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} = 0$. So the general solution to the differential equation can be written as $y(x) = c_1 e^{(2+3i)x} + c_2 e^{(2-3i)x}$ or as $y(x) = C_1 e^{2x}\cos(3x) + C_2 e^{2x}\sin(3x)$, with the latter formula usually being preferred. Solutions of nonlinear partial differential equations can have enormous complexity, with nontrivial structure over a large range of length- and timescales. This course is an introduction to the theory of partial differential equations, with an emphasis on solving techniques and applications. If all the terms of a PDE contain the dependent variable or its partial derivatives, then such a PDE is called a homogeneous partial differential equation, and non-homogeneous otherwise. This extends the well-known path integral solution of the Schrödinger/diffusion equation in unbounded space. In this paper, the Sumudu decomposition method is developed to solve the general form of a fractional partial differential equation. Objectives: the book extensively introduces classical and variational partial differential equations (PDEs) to graduate and post-graduate students in Mathematics. These ideas led us to search for functions Kind(2) and , which give the complete solution of second order linear partial differential equations with variable coefficients of the form , and this solution depends on the forms of the functions , , , . Geometric Partial Differential Equations Methods in Geometric Design and Modeling. Reporter: Qin Zhang; Collaborator: Guoliang Xu, C. Corollary 1: The general solution to equation (2.1) is defined by a single relation between two arbitrary constants occurring in the general solution of the system of ordinary differential equations $\frac{dx/ds}{a} = \frac{dy/ds}{b} = \frac{du/ds}{c}$, or, in other words, by any arbitrary function of one independent variable.
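The complex-roots example above ($r = 2 \pm 3i$) can be checked mechanically; a short SymPy sketch:

```python
# The characteristic equation r**2 - 4r + 13 = 0 has roots 2 +/- 3i, so the
# real-form general solution should be e^{2x}(C1 cos 3x + C2 sin 3x).
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

ode = sp.Eq(y(x).diff(x, 2) - 4 * y(x).diff(x) + 13 * y(x), 0)
print(sp.dsolve(ode, y(x)))
# Eq(y(x), (C1*sin(3*x) + C2*cos(3*x))*exp(2*x))
```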
General Solution of a Differential Equation: having a general solution of a differential equation means that the function you have found as the solution is able to satisfy the equation regardless of the constant chosen. 5.4 The Helmholtz Equation with Applications to the Poisson, Heat, and Wave Equations; Supplement on Legendre Functions. UNIT I PARTIAL DIFFERENTIAL EQUATIONS: Formation of partial differential equations - Singular integrals - Solutions of standard types of first order partial differential equations - Lagrange's linear equation - Linear partial differential equations of second and higher order with constant coefficients of both homogeneous and non-homogeneous types. Solution manual: Introduction to the Finite Element Method: Theory, Programming and Partial Differential Equations (6th Ed. In this sense, there is a similarity between ODEs and PDEs, since this principle relies only on the. First-Order Partial Differential Equations: the case of the first-order ODE discussed above. The equation has a regular singularity at 0 and an irregular singularity at. General Assignment 1 for Normal with solutions and the grading scheme. Example 2: Solve the second order differential equation given by. (2009) using the homotopy analysis method. Form a differential equation from $y = a\sin bx$. And what we'll see in this video is that the solution to a differential equation isn't a value or a set of values. 1 Physical derivation. Reference: Guenther & Lee §1. Definition (Partial Differential Equation): a partial differential equation (PDE) is an equation which (1) has an unknown function depending on at least two variables, and (2) contains some partial derivatives of the unknown function. 7 General Solutions of Partial Differential Equations. You can perform linear static analysis to compute deformation, stress, and strain. The projection of the minimal along this direction is a scalar viscosity solution of a certain HJB equation, in both the deterministic and stochastic cases, by using PDE theory. Thus, the wave, heat and Laplace's equations serve as canonical models for all second order constant coefficient PDEs (Wanjala et al. [1]). Multiplying through by $\mu = x^{-4}$ yields.
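The final sentence breaks off mid-computation; for the stated $P(x) = -4/x$ the integrating-factor calculation it points to is standard and, with the right-hand side $Q(x)$ left generic because the fragment does not give it, runs:

$$\mu(x) = e^{\int P(x)\,dx} = e^{-4\int dx/x} = e^{-4\ln x} = x^{-4},$$

$$\left(x^{-4} y\right)' = x^{-4} Q(x) \;\Longrightarrow\; y = x^{4}\left(\int x^{-4} Q(x)\,dx + C\right).$$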
# Steps on the way to Lightcone cosmological calculator

1. Jan 25, 2017
### Jorrie
Eventually, there is a beta-test version available with some additions on density, density parameters and temperature. It is not the 'Forum official' version yet, but it has other interesting changes, e.g. the inputs are now more standard. I have done away with Hubble times as input parameters, because they are not the ones used in the literature. The prime inputs are now the Hubble constant in conventional units, the total density parameter and the radiation-matter equality redshift. The matter density parameter is then still a derived value. The range of calculations is also requested as the more conventional redshift (z) in lieu of the simpler, but less well known, "S" parameter. Lastly, the output scaling option for "Zeit" has been left out, since it is a potentially non-standard distraction. I hope the updates will enhance the use of the calculator in the educational field. The latest beta-test version is available as: LightCone7-2017-01-26. Edit: we found an error in the Omega-calculations of this version. See the thread https://www.physicsforums.com/threads/evolution-of-the-energy-density-parameters.901681/ The corrected version is: LightCone 7, Cosmo-Calculator (2017-1). Comments/suggestions welcome. I will start a new thread to discuss some of the more subtle aspects of the density parameter calculations.
Last edited: Jan 28, 2017

2. Jan 25, 2017
### Mordred
The usage parameters such as the ones found in intro-level textbooks are probably the most familiar approach and the one that will probably gain the most usage, the parameters you mentioned being the key ones. People are more familiar with redshift than stretch, for example. I agree the best approach should be literature-based. I should have time to help update the user manuals when the testing is done, if you'd like my help again on that. I still remember how to edit text on wikidot.
Last edited: Jan 26, 2017

3. Feb 2, 2017
### Jorrie
After some more comments and further testing, it seems like the updated calculator has stabilized on this version: LightCone7-2017-01-30. I suggest that we leave it for another week in 'testing mode' and then I will 'release' it into the same url as the previous release, so that no links/sigs need to be updated.

4. Feb 2, 2017
### Mordred
I've been running various tests as time allows. I haven't found any issues that I can see thus far.

5. Feb 3, 2017
### Jorrie
Thanks for your effort, Mordred. I have used a specific set of columns as default to highlight the new features, but it may now be time to choose a more general set. It should still be limited so as to not be frightening to newcomers. Any suggestions?

6. Feb 9, 2017
### Jorrie
I have now changed the link in my Sig below to the latest version that we have tested, with a very basic set of columns as the default, i.e.
$${\small\begin{array}{|r|r|r|r|r|} \hline z&T (Gy)&R (Gly)&D_{now} (Gly)&Temp(K) \\ \hline 1.09e+3&3.72e-4&6.27e-4&4.53e+1&2.97e+3\\ \hline 3.39e+2&2.49e-3&3.95e-3&4.42e+1&9.27e+2\\ \hline 1.05e+2&1.53e-2&2.34e-2&4.20e+1&2.89e+2\\ \hline 3.20e+1&9.01e-2&1.36e-1&3.81e+1&9.00e+1\\ \hline 9.29e+0&5.22e-1&7.84e-1&3.09e+1&2.81e+1\\ \hline 2.21e+0&2.98e+0&4.37e+0&1.83e+1&8.74e+0\\ \hline 0.00e+0&1.38e+1&1.44e+1&0.00e+0&2.73e+0\\ \hline -6.88e-1&3.30e+1&1.73e+1&1.12e+1&8.49e-1\\ \hline -8.68e-1&4.79e+1&1.74e+1&1.43e+1&3.59e-1\\ \hline -9.44e-1&6.28e+1&1.74e+1&1.56e+1&1.52e-1\\ \hline -9.76e-1&7.77e+1&1.74e+1&1.61e+1&6.44e-2\\ \hline -9.90e-1&9.27e+1&1.74e+1&1.64e+1&2.73e-2\\ \hline \end{array}}$$

There are now a total of 18 selectable columns, including the actual density against redshift and also the various density parameters (the Omegas). It is very easy to change the default columns in the program, so please let me know if you want to see other columns as default. The idea of a small selection is to not overwhelm newcomers with too much data.
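For readers who want to reproduce the "derived" matter density mentioned earlier in the thread, one plausible sketch follows. The equality relation Ω_m = Ω_r(1 + z_eq) and the radiation value Ω_r h² ≈ 4.15e-5 (photons plus three massless neutrino species) are standard textbook inputs, not necessarily the exact internals of LightCone7:

```python
# Hedged sketch: derive the matter density parameter from the calculator's
# prime inputs (H0, total density parameter, radiation-matter equality z).
# Omega_r*h**2 ~ 4.15e-5 is a textbook approximation, and the sample input
# values below are illustrative, not LightCone7's defaults.
H0 = 67.74          # Hubble constant, km/s/Mpc (conventional units)
Omega_tot = 1.0     # total density parameter
z_eq = 3370.0       # radiation-matter equality redshift

h = H0 / 100.0
Omega_r = 4.15e-5 / h**2            # radiation density parameter today
Omega_m = Omega_r * (1.0 + z_eq)    # rho_m = rho_r at z_eq implies this
Omega_L = Omega_tot - Omega_m - Omega_r

print(f"Omega_m ~ {Omega_m:.4f}, Omega_Lambda ~ {Omega_L:.4f}")
```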
# Physiological demands of a swimming-based video game: Influence of gender, swimming background, and exergame experience

Scientific Reports, volume 7, Article number: 5247 (2017)

## Abstract

Active video games (exergames) may provide a short-term increase in energy expenditure. We explored the effects of gender and prior experience on aerobic and anaerobic energy system contributions, and the activity profiles of 40 participants playing a swimming exergame. We recorded oxygen consumption and assessed blood lactate after each swimming technique. We also filmed participants’ gameplays, divided them into different phases and tagged them as active or inactive. The anaerobic pathway accounted for 8.9 ± 5.6% of total energy expenditure, and although experienced players were less active compared to their novice counterparts (η² < 0.15, p < 0.05), physiological measures were not different between performing groups. However, players with real-swimming experience had higher heart rate during the first technique (partial-η² = 0.09, p < 0.05). Our results suggest that a short-term increase in physiological measures might happen at the beginning of gameplay because of unfamiliarity with the game mechanics. Despite low levels of activity compared to the real sport, both aerobic and anaerobic energy systems should be considered in the evaluation of exergames. Game mechanics (involving the whole body) and strategies to minimize pragmatic play might be used for an effective and meaningful game experience.

## Introduction

Higher screen times (e.g. playing videogames) are associated with physical inactivity1, and interventions to discourage their use are usually unsuccessful because players value these activities. Besides predicting parameters for increasing physical activity (PA) levels, playing sport videogames is associated with real sports participation among adolescents2. Newer generations of videogames (exergames) also provide opportunities for low to moderate (and sometimes large) energy expenditure (EE)3. Exergames are enjoyable and have group-play modes that make them potential tools in combatting common barriers to exercise. Mixed with traditional means of performing PA, exergames have also been shown to increase exercise satisfaction in obese children and offer alternatives for unmotivated participants to exercise regularly4, while having similar physiological effects5. Depending on the videogame type and difficulty, exertion levels may vary, and higher PA intensities were observed when the whole body is involved during exergame play6. There are also mixed results regarding the effects of experience and gender on physiological parameters, with evidence suggesting that prior gaming experience does not affect mean heart rate (HR), but session rate of perceived exertion (RPE) and peak HR are higher among novice players7. Additionally, while it was shown that gaming experience may result in higher EE and oxygen uptake ($$\dot{V}O_{2}$$)8, others mentioned that prior experience and resting HR do not affect EE during sport exergame play6. Similarly, gender was shown not to affect EE during exergaming among adults9, but others suggested that male players burn more energy10 and have higher $$\dot{V}O_{2}$$ and lower RPE compared to their female counterparts11. It should be noted that although playing time and number of playing bouts may not differ, boys play exergames more actively than girls12.
When playing exergames at moderate exercise intensity, and according to American College of Sports Medicine (ACSM) guidelines for health and fitness, the aerobic energy pathway is believed to be the primary energy source. However, measuring only $$\dot{V}O_{2}$$ may neglect the role of glycolysis in total EE measurements13. Despite previous blood lactate (BLa) reports of 1.8 ± 0.8 mmol.l−1 for an upper-body exergame (boxing) and 2.4 ± 1.5 mmol.l−1 for a lower-body computer game14, BLa was never considered in the assessment of EE. This consideration is important as sports exergames are meant to replicate real sports and their physiological demands, which, if considered during the design phase, might ensure a more meaningful experience. Although HR, RPE, movement monitoring and $$\dot{V}O_{2}$$ are among the popular intensity measurements15, newer methodologies have tried to estimate metabolic energy cost using algorithmic models16. As many exergame platforms provide feedback on EE estimation based on specific formulas, considering the anaerobic energy pathway might also be useful in improving their accuracy. Researchers may also use time-motion analysis as an indirect method for estimating physiological stress, particularly by dividing the game into sub-activities. This objective assessment of exergames could be used when normal physiological measurements are intrusive17. As performing a short-effort activity requires a different metabolic pathway compared to longer activities18, time-motion analysis may also provide information on which energy system to include. Swimming is a well-practiced and appreciated PA, and a swimming simulation game might be an alternative for those who do not have access to a swimming pool. Competing against the virtual multi-medallist Michael Phelps might be a motivating and challenging once-in-a-lifetime experience. Since no research had been conducted to measure the relative contribution of the anaerobic energy system to total EE in exergame playing, the purpose of this study was to characterise the total energy demands (aerobic and anaerobic) and activity profiles in a swimming exergame. In addition, we compared the physiological demands of groups with different experience and gender. We hypothesised that experienced players, non-real swimmers and female players would have lower physiological characteristics and lower activity time during the gameplay.

## Results

For all subjects, we observed $$\dot{V}O_{2\mathrm{rest}}$$ of 4.9 ± 1.1 and $$\dot{V}O_{2\mathrm{peak}}$$ of 25.7 ± 6.0 ml.kg−1.min−1, [La]rest of 1.4 ± 0.6 mmol.l−1, HRrest of 67.9 ± 17.0 beats per minute (bpm), and $$\dot{V}O_{2\mathrm{peak}}$$ of 21.4 ± 6.4 during the front crawl, 20.72 ± 5.43 during backstroke, 18.21 ± 5.33 during breaststroke and 18.66 ± 6.44 ml.kg−1.min−1 during butterfly. Figure 1 presents an example of $$\dot{V}O_{2}$$ kinetics during a typical game session for a subject. The values of the physiological measurements are reported in Table 1. Mean BLa during the activity was 2.6 ± 1.1 mmol.l−1 and was not different between performing groups (p > 0.05, partial-η² < 0.05). Peak BLa was 2.9 ± 1.3 mmol.l−1 and occurred 3 min after the end of the gameplay. Mean EE during the activity was 104.2 ± 32.5 kJ (94.2 ± 27.6 kJ aerobic plus 9.9 ± 8.6 kJ anaerobic) and was not different between performing groups (p > 0.05, partial-η² < 0.09).
The lactic pathway accounted for 8.9 ± 5.6% of total EE. Figure 1 illustrates a typical HR change throughout the gameplay. Mean HR during the gameplay was 101.0 ± 14.8 bpm, corresponding to 49.9 ± 21.6% above the resting HR and 51.5 ± 7.4% of maximum HR. Only participants with real-swimming experience had higher values compared to non-swimmers during the front crawl event (F(1, 38) = 3.78, partial-η² = 0.09, p = 0.04). Mean RPE during the activity was 3.0 ± 1.2 and was not different between performing groups (p > 0.05, partial-η² < 0.01). There were also no interactions between swimming experience, game experience and gender on BLa, EE, HR and RPE changes (p > 0.05). While we measured a high intra-observer reliability of 0.96 for the time-motion analysis, a second reliability check was also performed using the TEM (with 95% confidence interval - CI) for each variable as follows: mean activity time of 441 s (95% CI = 421–450 s) and rest time of 287 s (95% CI = 267–296 s). The relative TEM of 3.5% was within the acceptable range19. Players were active 56.9 ± 8.1% (range 42.7–85.1%) of the total time, rested 44.2 ± 6.8% (range 27.7–64.7%) and had an E:R of 1.3 ± 0.3 during the gameplay. No differences were found between the performing groups (η² > 0.01, p > 0.05). Figure 2 highlights the mean duration of TPT, RT and EPT within the different performing groups. Previous exergame experience (experienced vs. novice) resulted in lower TPT (743 vs. 844 s; F(1, 38) = 6.86, partial-η² = 0.15, p = 0.01), EPT (413 vs. 495 s; F(1, 38) = 8.70, η² = 0.18, p = 0.01) and E:R (mean rank 17.7 vs. 24.6; χ2(1) = 3.422, p = 0.05). No interaction was observed between performing groups and TPT, RT and EPT (p > 0.05).

## Discussion

The aims of this study were to estimate the contributions of the different energy systems, to provide an activity profile of gameplay, and to compare the results across different performing groups. The anaerobic pathway accounted for 8.9% of total energy production, and players were active 57% of total gameplay. Performing groups did not have different BLa and RPE, and mean HR was only higher in participants with real-swimming experience during the crawl event. Experienced players also had lower TPT, EPT and E:R ratio compared to the novice players. Our obtained $$\dot{V}O_{2\mathrm{peak}}$$ values were similar to previous reports3. However, these values may be affected by game mechanics, game duration and participants' performing levels. Higher $$\dot{V}O_{2\mathrm{peak}}$$ values during front crawl might have occurred because it was the first technique and participants were trying to swim close to the real swimming technique. Moreover, $$\dot{V}O_{2\mathrm{peak}}$$ during front crawl was also lower than in real swimming with the full body and upper body20. Alternatively, the lower $$\dot{V}O_{2}$$ during breaststroke could be explained by the lower range of motion (activity) of the upper limbs. Additionally, as there were almost no forces applied on the body compared to in-water hydrostatic pressure, different responses were to be expected. Mean BLa in our study was higher than the findings of Jordan et al.14, probably due to different game design, recruited muscles, type of platform, intensity and duration of the gameplay. BLa values of the performing groups were similar, and only a low percentage of their variability was accounted for by the different performing categories.
While we reject our hypothesis that real swimmers, experienced players and female players would have lower BLa during the gameplay, we should note that this might have happened because the gaming platform detects players' different movement patterns similarly, and players may switch to pragmatic gameplay even after a short exposure to the game. As participants had to use both upper limbs during the gameplay, EE levels were higher than in sports exergames using only one upper limb (e.g. tennis, bowling)21 but lower than in games incorporating both upper and lower limbs14 and in real swimming20. Possible explanations lie within the different design (incorporating different muscle groups), different EE measurement methodologies, different demands of the gaming platform and efficient interaction with the gaming platform. EE was also similar between groups, with only a low percentage of its variability accounted for by the different performing groups. Contrary to previous research, higher EE in novice players might have also occurred because of longer gameplay, as they spent more time completing the events. Moreover, players with real-swimming experience might have put more effort into swimming correctly (according to the real-world techniques) at the beginning of their gameplay or during the first technique. Contrary to previous research10, in our study male and female players did not have different EE and HR. We also obtained higher values of HR compared to a previous study on Wii muscle conditioning and brisk walking22. We also reject our hypothesis that real swimmers, experienced players and female players would have lower HR compared to their counterparts. Additionally, RPE was not different within any performing groups, and the values were also lower compared to previous research on full-body and upper-body exergames23. Our results also suggest that the type of gaming platform (Xbox, Wii, etc.) does not lower the psychological perception of exertion5, and although novice players played the game for a longer time, RPE was not different from experienced players. This was consistent with previous research suggesting that immersive exergames may alter players' perception of game intensity, resulting in longer gameplay24. Therefore, we reject our hypothesis that real swimmers, experienced players and female players would have lower RPE than their peers. The average effort to rest ratio in the current study was 1.3 ± 0.3, showing that although players dedicated more time to playing than resting, the difference was not statistically significant. Our active play values were also lower than a previous study, which reported 65–88% active play3. Possible reasons are lengthy waiting times between each bout and low activity times during each technique. While novice and experienced players did not differ in RT, experienced players spent less time playing the game. Experienced players' shorter playing time might be due to faster navigation through the menus and following game strategies. Therefore, we reject our hypothesis that experienced players, real swimmers and female players would have higher RT compared to their peers. As it may not be possible to reduce video game playing completely, proper exergame design might still increase PA levels. Identification of work and rest intervals could provide relevant data on how to encourage players to expend more energy in a more realistic manner.
As fast gameplay might be used as a strategy to encourage players to be more active and stimulate excitement, measurements of anaerobic pathway behaviour might be used to balance the activities so as to avoid boredom and premature fatigue. Moreover, if the obtained effort to rest ratio is compared with other games, it can potentially be used as a fitness index for exergames. The results of this study are useful for user experience researchers, game designers, and physical educators who want to apply exergames in their practice. Scientific and descriptive information on movement patterns and the physiological characterization of exergames are necessary for designing an effective fitness experience and for game design. Software loading and menu selection have great effects on increasing workout times3, and by using auditory commands, bigger icons and default presets, such timings could be shortened, leading to an increase in effective gameplay. Future studies might use larger sample sizes for each performing group (e.g. gender) to ensure statistically significant differences in physiological variables between performing groups.

## Conclusions

We have quantified several physical and technical variables to explore the physical demands of exergame playing in more detail, which provides foundations for developing specific exergames. We showed that the short-term increase in physiological measures might have happened because of unfamiliarity with exergames, and that as players understand the game mechanics, they might exert themselves less while playing. Despite low levels of activity compared to the real sport, both energy systems should be considered in EE measurements of exergames. The various performing groups did not respond to the game differently, because players' movements were detected similarly by the gaming platform. Moreover, the current investigation suggests using time-motion analysis during game design to increase the exercise to rest ratio.

## Methods

Forty participants (9 females, age 23.8 ± 4.4 years, height 174.0 ± 7.1 cm, body mass 71.9 ± 11.2 kg) participated in the study, which was approved by the local ethics committee (CEFADE 01/2013) and performed according to the Declaration of Helsinki. Participants signed informed consents and were asked to avoid strenuous activity and smoking 24 h before the testing, to drink water liberally, and to refrain from consuming alcohol, caffeine and food at least 2 h before their participation. We considered participants who had played this game before as experienced (6 females), and those who knew at least two conventional swimming techniques were considered swimmers (4 females). The exercise task was a swimming exergame designed for Microsoft Xbox360 and Kinect, offering four swimming techniques (Michael Phelps: Push the Limit, 505 Games, Italy). Each participant had to stand in front of the Kinect sensor and move their upper body according to the front crawl, backstroke, breaststroke and butterfly swimming techniques to move the avatar inside the game, competing against the computer opponent. No instruction was provided on how to play the game. However, as part of the game and before participation, each player watched an in-game trial video on how to play the game. There was no familiarisation with the game itself, but players were given the chance to navigate between the menus of the game and explore its features.
Each 100 m event was controlled by on-screen visual feedback, preventing players from swimming too fast or too slow, and in the middle of the second 50 m lap there was a possibility of swimming as fast as possible (Push the Limit – PTL). Oxygen uptake at rest ($$\dot{V}O_{2\mathrm{rest}}$$) and heart rate at rest (HRrest) values were obtained, and to avoid varying work rate increments, the order of events was equal for all participants. Breath-by-breath $$\dot{V}O_{2}$$ was measured using a portable analyser (K4b2, Cosmed, Italy), and BLa samples (25 µl) were obtained from the earlobe (Lactate Pro, Arkay Inc, Japan) at rest (BLarest), immediately after completion of each swimming technique, and 3, 5 and 7 min following the gameplay or until the maximum value was obtained. The difference in lactate accumulation after and before activity (BLanet) was measured as the difference between BLa at the end of the last event and BLarest, allowing estimation of the partial contribution of the anaerobic energy pathway. RPE was administered using OMNI (0–10) immediately after each technique25. We verified the $$\dot{V}O_{2}$$ data and deleted irregular values from the analysis (considering only values within mean ± 4 SD). We smoothed the $$\dot{V}O_{2}$$ recordings using a 3-breath moving average and time-averaged at 5 s intervals13. Following that, we recorded peak oxygen uptake ($$\dot{V}O_{2\mathrm{peak}}$$) during the exercise and calculated the aerobic energy contribution from the time integral of the net $$\dot{V}O_{2}$$ versus time relationship13. We calculated the anaerobic lactic contribution (AnL) using Equation 1:

$$\mathrm{AnL}=\beta \times \mathrm{BLa}_{\mathrm{net}}\times M \quad (1)$$

where BLanet is the difference in lactate accumulation after and before activity, β is the energy equivalent of BLa accumulation (2.7 ml O2.mM−1.kg−1)26, and M is the mass of the participant. To express the EE in kJ for the aerobic and anaerobic lactic energy contributions, an energy equivalent of 20.9 kJ.l O2−1 was assumed27. We also filmed players' gameplays, divided the video recordings, and tagged them as active and rest (inactive) to an accuracy of 1 s, based on Table 2, using video editing software (Movie Edit Pro, Magix AG, Germany). We marked the beginning and end of each movement, and the duration of each action was measured, to calculate total playing time (TPT), effective playing time (EPT), resting time (RT) and effort to rest ratio (E:R). We reported descriptive statistics for all variables and checked normality using Shapiro-Wilk. We used a one-way analysis of variance (ANOVA) to compare physiological and temporal parameters during each event and within performing groups. In the case of violation of homogeneity of variance, we utilised the alternative non-parametric Kruskal-Wallis H statistic. We also used a three-way ANOVA to determine the effects of the three performing-group factors and their interactions on BLa, EE, HR and RPE. We utilised SPSS 23 (Chicago, IL) and set the significance level to p < 0.05. To assess the practical significance of the findings, we computed an effect size for each analysis using the eta-squared statistic (η²). We also established the reproducibility of the time-motion analysis using Lin's Concordance Coefficient28.
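As a worked illustration of Equation 1 and the energy bookkeeping described above, a short sketch follows; the constants come from the cited sources, but all sample inputs are invented placeholders, not study data:

```python
# Sketch of the aerobic/anaerobic energy-expenditure bookkeeping.
# beta = 2.7 ml O2 per mM per kg and 20.9 kJ per litre O2 are from the
# cited references; the example numbers below are placeholders.
KJ_PER_L_O2 = 20.9
BETA = 2.7  # ml O2 . mM^-1 . kg^-1

def anaerobic_kj(bla_net_mmol_l: float, mass_kg: float) -> float:
    """Equation 1: AnL = beta * BLa_net * M, converted from ml O2 to kJ."""
    o2_ml = BETA * bla_net_mmol_l * mass_kg
    return o2_ml / 1000.0 * KJ_PER_L_O2

def aerobic_kj(net_vo2_l_min, dt_min: float) -> float:
    """Time integral of net VO2 (litres/min, sampled every dt_min minutes)."""
    return sum(net_vo2_l_min) * dt_min * KJ_PER_L_O2

# Placeholder example: BLa_net = 1.2 mmol/l for a 72 kg player, plus a
# 10-minute net VO2 trace sampled every 5 s (= 1/12 min).
print(anaerobic_kj(1.2, 72.0))              # ~4.9 kJ
print(aerobic_kj([1.1] * 120, 1.0 / 12.0))  # ~230 kJ
```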
Two participants were randomly chosen and analysed twice by the same researcher, and the technical error of measurement (TEM) for intra-evaluator test-retest was measured for the performance variables (rest and activity)29. To avoid retention of knowledge of the content, the retest analysis was conducted one month after the initial testing. TEM accuracy estimations are shown as 95% confidence limits using Equation 2 (ref. 30):

$$\mathrm{Absolute\ TEM}=\sqrt{\frac{\sum d^{2}}{2n}} \quad (2)$$

where d is the deviation between the two measurements and n is the number of deviations. We then transformed the absolute TEM into relative TEM, to express the error in percentages, using Equation 3 (ref. 31), where VAV is the variable average value (expressed as the sum of the two measurements divided by two):

$$\mathrm{Relative\ TEM}=\frac{\mathrm{TEM}}{\mathrm{VAV}}\times 100 \quad (3)$$

### Practical implications

• Experienced exergame players are less active than novice players.
• Both aerobic and anaerobic energy systems should be used in energy expenditure measurement of exergaming.
• A short-term increase in physiological measures in exergames might happen because of players' unfamiliarity with the game.

## Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Christofaro, D. G. D. et al. Higher screen time is associated with overweight, poor dietary habits and physical inactivity in Brazilian adolescents, mainly among girls. Eur J Sport Sci. 16, 498–506, doi:10.1080/17461391.2015.1068868 (2016).
2. Adachi, P. J. C. & Willoughby, T. Does playing sports video games predict increased involvement in real-life sports over several years among older adolescents and emerging adults? J Youth Adolescence. 45, 391–401, doi:10.1007/s10964-015-0312-2 (2016).
3. Bronner, S., Pinsker, R. & Noah, J. A. Energy cost and game flow of 5 exer-games in trained players. Am J Health Behav. 37, 369–380, doi:10.5993/AJHB.37.3.10 (2013).
4. Finco, M. D. et al. Exergaming as an alternative for students unmotivated to participate in regular physical education classes. International Journal of Game-Based Learning. 5, 1–10, doi:10.4018/IJGBL.2015070101 (2015).
5. Lisón, J. F. et al. Competitive active video games: Physiological and psychological responses in children and adolescents. Paediatr Child Healt. 20, 373–376 (2015).
6. Wu, P. T., Wu, W. L. & Chu, I. H. Energy Expenditure and Intensity in Healthy Young Adults during Exergaming. Am J Health Behav. 39, 556–561, doi:10.5993/AJHB.39.4.12 (2015).
7. Kraft, J. A. et al. Influence of experience level on physical activity during interactive video gaming. J Phys Act Health. 12, 794–800, doi:10.1123/jpah.2014-0089 (2015).
8. Bonetti, A. J. et al. Comparison of acute exercise responses between conventional video gaming and isometric resistance exergaming. J Strength Cond Res. 24, 1799–1803, doi:10.1519/JSC.0b013e3181bab4a8 (2010).
9. Miyachi, M. et al. METs in adults while playing active video games: a metabolic chamber study. Med Sci Sports Exerc. 42, 1149–1153, doi:10.1249/MSS.0b013e3181c51c78 (2010).
10. Sit, C. H., Lam, J. W. & McKenzie, T. L. Direct observation of children’s preferences and activity levels during interactive and online electronic games. J Phys Act Health. 7, 484–489, doi:10.1123/jpah.7.4.484 (2010).
11. Graf, D. L. et al. Playing active video games increases energy expenditure in children. Pediatrics. 124, 534–540, doi:10.1542/peds.2008-2851 (2009).
12. Lam, J. W. K., Sit, C. H. P. & McManus, A. M. Play pattern of seated video game and active “exergame” alternatives. J Exerc Sci Fit. 9, 24–30, doi:10.1016/S1728-869X(11)60003-8 (2011).
13. Sousa, A. C., Vilas-Boas, J. P. & Fernandes, R. J. Kinetics and metabolic contributions whilst swimming at 95, 100, and 105% of the velocity at VO2max. BioMed Res. Int. 2014, Article ID 675363, 9 pages, doi:10.1155/2014/675363 (2014).
14. Jordan, M., Donne, B. & Fletcher, D. Only lower limb controlled interactive computer gaming enables an effective increase in energy expenditure. Eur J Appl Physiol. 111, 1465–1472, doi:10.1007/s00421-010-1773-3 (2011).
15. Gao, Z. et al. A meta-analysis of active video games on health outcomes among children and adolescents. Obes Rev 16, 783–794, doi:10.1111/obr.12287 (2015).
16. Nathan, D. et al. Estimating physical activity energy expenditure with the Kinect sensor in an exergaming environment. PloS one. 10, e0127113, doi:10.1371/journal.pone.0127113 (2015).
17. Hughes, M. & Franks, I. M. (Eds.). Notational analysis of sport: Systems for better coaching and performance in sport. (Psychology Press, 2004).
18. Gastin, P. B. Energy system interaction and relative contribution during maximal exercise. Sports Med. 31, 725–741, doi:10.2165/00007256-200131100-00003 (2001).
19. Duthie, G., Pyne, D. & Hooper, S. The reliability of video based time motion analysis. J. Hum. Mov. Stud. 44, 259–272 (2003).
20. Ribeiro, J. et al. VO2 kinetics and metabolic contributions during full and upper body extreme swimming intensity. Eur J Appl Physiol. 115, 1117–1124, doi:10.1007/s00421-014-3093-5 (2015).
21. Graves, L. E. F., Ridgers, N. D. & Stratton, G. The contribution of upper limb and total body movement to adolescents’ energy expenditure whilst playing Nintendo Wii. Eur J Appl Physiol. 104, 617–623, doi:10.1007/s00421-008-0813-8 (2008).
22. Graves, L. E. F. et al. The physiological cost and enjoyment of Wii fit in adolescents, young adults, and older adults. J Phys Act Health. 7, 393–401, doi:10.1123/jpah.7.3.393 (2010).
23. Whitehead, A. et al. Exergame effectiveness: what the numbers can tell us. Paper presented at the Proceedings of the 5th ACM SIGGRAPH Symposium on Video Games. (Los Angeles, California, 2010).
24. Lau, P. et al. Evaluating physical and perceptual responses to exergames in chinese children. Int J Env Res Pub He. 12, 4018, doi:10.3390/ijerph120404018 (2015).
25. Irving, B. A. et al. Comparison of Borg- and OMNI-RPE as markers of the blood lactate response to exercise. Med Sci Sports Exerc. 38, 1348, doi:10.1249/01.mss.0000227322.61964.d2 (2006).
26. di Prampero, P. et al. Blood lactic acid concentrations in high velocity swimming, in Swimming Medicine IV (Eds. Eriksson, B. & Furberg, B.) 249–261 (University Park Press, 1978).
27. Figueiredo, P. et al. An energy balance of the 200 m front crawl race. Eur J Appl Physiol. 111, 767–777, doi:10.1007/s00421-010-1696-z (2011).
28. Lawrence, I. & Lin, K. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 45, 255–268, doi:10.2307/2532051 (1989).
29. Hopkins, W. G. Measures of reliability in sports medicine and science. Sports Med. 30, 1–15, doi:10.2165/00007256-200030010-00001 (2000).
30. Tanner, R. & Gore, C. Physiological tests for elite athletes (Human Kinetics, 2013).
31. Perini, T. A. et al. Technical error of measurement in anthropometry. Revista Brasileira de Medicina do Esporte. 11, 81–85, doi:10.1590/S1517-86922005000100009 (2005).
## Author information

### Affiliations

1. Centre of Research, Education, Innovation and Intervention in Sport (CIFI2D), Faculty of Sport, University of Porto, Rua Dr Plácido Costa, 91, 4200-450, Porto, Portugal: Pooya Soltani, João Ribeiro, Ricardo J. Fernandes & João Paulo Vilas-Boas
2. Department of Physical Education and Sport Sciences, School of Education and Psychology, Shiraz University, Pardis-e-Eram, Eram Square, 71946-84759, Shiraz, Iran: Pooya Soltani
3. Porto Biomechanics Laboratory (LABIOMEP), University of Porto, Rua Dr Plácido Costa, 91, 4200-450, Porto, Portugal: Pooya Soltani, Ricardo J. Fernandes & João Paulo Vilas-Boas
4. Department of Kinesiology, University of Maryland, College Park, MD, USA: Pedro Figueiredo

### Contributions

This study was designed by P.S. and J.P.V.B.; data were collected by P.S.; data interpretation was undertaken by P.S., J.R. and P.F.; the manuscript was written by P.S. and was proofread by P.S., P.F., J.R., R.J.F. and J.P.V.B. All authors have approved the final version of the paper.

### Competing Interests

The authors declare that they have no competing interests.

### Corresponding author

Correspondence to Pooya Soltani.

## About this article

### DOI

https://doi.org/10.1038/s41598-017-05583-8
### AuthorTopic: University of Melbourne - Subject Reviews & Ratings

#### Shenz0r • Victorian • Part of the furniture • Posts: 1875 • Respect: +406

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #360 on: October 24, 2014, 11:19:59 pm » +10

Subject Code/Name: MIIM20002: Microbes, Infections and Responses

Workload: 3x1hr lectures, 5x3hr practicals per fortnight, 1x CAL (done at home)

Assessment: Written practical reports throughout semester (15%); a 45-minute multiple choice test mid-semester (20%); online (pre-practical class) quizzes throughout semester (5%); a 2-hour written exam in the end-of-semester examination period (60%).

Lectopia Enabled: Yes, with screen capture.

Past exams available: No past exams, but the staff put up some sample questions during the two review lectures.

Textbook Recommendation: I don't even know which textbook you'd buy, so you definitely don't need it.

Lecturer(s): L. Brown, H. Cain, O. Wijburg, K. Waller, T. Stinear, C. Simmons, R. R. Browne, D. Purcell

Year & Semester of completion: 2014, Semester 2

Rating: 5/5

Wow. What a damn good subject. The Department of Microbiology and Immunology definitely knows how to run its subjects. This was my favourite subject of the semester - it was very interesting, it was EXTREMELY well taught, and the staff take great care in running it. They were very approachable, always happy to answer questions, and they kept us informed and updated consistently throughout the semester. They also gave us individualised feedback on our MST (which pretty much just said "you should revise this and that"). The coordination was perfect. They even set up a Facebook study group for all of us!

The practicals were the most enjoyable of any subject I've done. Each practical revolves around several case studies - you're presented with background information on a patient's history and their symptoms, and it's up to you to test the samples they give you and diagnose them. This actually ties in very nicely with the lecture material, since you're going to be learning about infections and how to diagnose and treat them after all. You'll have to test stuff like live flu samples and faecal samples! And be warned, you will definitely be pipetting a lot, especially in the haemagglutination-inhibition test! Each practical goes for 3 hours but it's not stressful - you go through everything as a group, with your demonstrator explaining the case study and guiding you along. There is no in-prac assessment either.

You do have to complete a pre-prac quiz before each practical, but these are fairly easy. For two of the case studies you need to write a practical report. The staff give you a proforma which tells you how to set up your report and which questions you should consider answering in it. There was a word limit of 500 words on the discussion, though. These are harder to score well in - a lot of it is in the hands of your demonstrator, and some of them can take marks off for the smallest things. They also give you back your reports with a lot of feedback. I never got over 9/10 for my reports, so if you do, well done. Two of the case studies were assessed by post-practical quiz, and one of those case studies was a CAL.

I guess what would deter prospective students is the workload. If you've done MCB, you'll probably be used to the amount of information they try to cram into one lecture.
Having said that, a lot of the time the lecturers only talk about what's on the slides (apart from Roy), so there's no need to "write down EVERYTHING the lecturer says" either. Lecture notes across the board were of a very high standard - very clear and concise.

The first week is spent revising your basic microbiology and immunology - the course of an infection, bacterial and viral pathogenesis, and the immune response. Pretty relaxing here. In the second week, you begin to learn about different types of infection. The first topic is GIT infections: you go in depth into laboratory diagnosis and pathogenesis for invasive bacteria, non-invasive bacteria and parasites, and then you also learn a little bit about epidemiology. There is quite a bit of content in these lectures and lots of details to memorise - making tables is definitely helpful here. There's a lot of bacteria they talk about and you need to know the features of each one. And you definitely should remember everything, literally everything, on the pathogenesis and laboratory diagnosis slides.

You then move into vaccine responses, mucosal immunity, and the human microbiome. Not much to say here; these were easier to study for since you could just focus on understanding immunity. You then look at respiratory infections involving S. pneumoniae, M. tuberculosis and influenza, learning about pathogenesis, treatment and epidemiology. There was less to remember than in the GIT infections. You also get a lecture on emerging viral diseases, which links up with very recent events such as the Ebola outbreak in West Africa as well as highly pathogenic bird flu.

You then move onto STIs, covering herpes simplex virus, HIV, HPV and epidemiology. We had guest lecturers from the Virology department come in to lecture here, which was really good since these guys are leading researchers as well. This was probably the easiest part of the whole "infections" block. For the last part of the course, you move to healthcare-acquired infections. These seem to rely much more on common sense than on brute memorisation, but don't neglect them, because you definitely should know how the chain of infection can be broken, etc. The last two lectures covered Legionella and dengue fever. A bit more random, since they didn't really fit into any of the other themes, but this was pretty much the same as learning about all the other bacteria we had before.

The MST covers Weeks 1-6 and consists of 40 MCQs. Know your stuff well, because it's testing your recall and, to a smaller extent, your application. There were no ambiguous questions though, which was a good thing. The average mark was 30/40. The exam consists of 50 MCQs (the vast majority on the latter part of the course), 3 fill-in-the-blanks questions, and 2 short answer questions. It's very fair, and if you've studied everything thoroughly it should be pretty straightforward.

Overall, this subject is amazing for anybody who has any sort of interest in Microbiology and Immunology. While there is quite a bit of material to swallow, it never feels like a drag, simply because it's so well taught and intriguing. The subject material isn't even that hard - it's quite simple, but there's just a lot to know. If you don't make the effort to learn everything then you will find it difficult. According to previous years' data, the proportion of people who get a H1 tends to hover in the 35-40% range, so don't feel deterred from taking this subject! You won't regret it.
« Last Edit: November 28, 2014, 11:47:35 am by Shenz0r »

2012 ATAR: 99.20
2013-2015: Bachelor of Biomedicine (Microbiology/Immunology: Infections and Immunity) at The University of Melbourne
2016-2019: Doctor of Medicine (MD4) at The University of Melbourne

#### mahler004 • Victorian • Forum Obsessive • Posts: 492 • Respect: +64

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #361 on: October 25, 2014, 12:12:05 am » +7

Subject Code/Name: PHRM20001 Pharmacology: How Drugs Work

Workload:
• Three lectures a week
• Five 'Special Topics'
• Three tutorials
• Two practicals

Assessment:
• 20% assignments and prac reports
• 20% mid-semester test (40 minutes)
• 60% two-hour final exam

Lectopia Enabled: Yes, with screen capture

Past exams available: Several available from the library website and the LMS.

Textbook Recommendation: A couple of recommended textbooks are given; like most biology subjects, they're only really useful as references.

Lecturers: Too many to list - see other reviews. Most lecturers only take one or two lectures.

Year & Semester of completion: 2014 Semester 2

Rating: 4/5

Pretty good subject. Like all the other reviewers, I took this subject to fill in a gap in my study plan (needed a random second/third-year subject). It's very good - better than I expected, and there's a good diversity of topics.

The semester starts with some fairly basic pharmacology. There's a bit on receptors (trivial if you've done biochemistry), fairly basic pharmacological terms (e.g. EC50s), and pharmacokinetics. There are three lectures on autonomic nervous system pharmacology (basically just physiology), which is very well taught. The subject then moves onto a series of five or so lectures which wouldn't be out of place in a law subject - lectures on drug regulation, drug discovery and a history of pharmacology. There's a good deal of diversity in the subject; it's not all science, and in the earlier weeks they teach you a lot about the non-scientific aspects of pharmacology. I've heard from friends that this is developed a lot more in third year.

After the first few weeks, the subject moves into something more typical of a 'science' subject - each lecture (or two) is devoted to therapeutic strategies to treat certain diseases. For example, there are lectures on:
• Drugs to treat hypertension/cardiovascular disease
• Drugs to treat asthma
• Drugs involving the immune system
• Contraceptives
• Drugs for depression
• Drugs for pain
• Drugs for obesity
• Drugs of dependence and addiction
• Drugs in sport

Of these topics, the lectures on cardiovascular pharmacology and analgesics were a (surprising) highlight for me. Yes, you do have to memorise a lot of drugs - it's really not as bad as it sounds at the start of the subject. For some drugs you just need to know that they act on a specific receptor (e.g. propranolol acts as an antagonist at the β adrenoceptors); for some you have to know more detail, side effects, etc. (e.g. propranolol can cause nightmares and tiredness). I strongly recommend investing in a flash card program for your smartphone/laptop - it makes studying a lot easier (there's a toy sketch of the idea just below).

The final two weeks deal with toxicology. There's a couple of lectures on basic toxicology, then a lecture on toxins, and the final lectures deal with drugs used to treat bacterial infections, viral infections and cancer. Generally, the lectures are fairly good - although there is some variability in quality with 15-odd lecturers. Most topics that take multiple lectures are quite good.
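(For what it's worth, you don't even need a fancy app - the core of a flashcard quiz is a few lines of Python. This is just a hypothetical sketch, and the drug/class pairs below are only the examples mentioned in this thread, so check them against your own lecture notes:)

```python
# Toy flashcard quiz - a hypothetical sketch, not any real app.
# The drug facts are just the examples mentioned in these reviews;
# verify them against your own notes before relying on them.
import random

cards = {
    "propranolol": "non-selective beta blocker (side effects: nightmares, tiredness)",
    "salbutamol":  "beta-2 agonist (asthma)",
    "captopril":   "ACE inhibitor (note the -pril suffix)",
    "losartan":    "angiotensin II (ANGII) receptor antagonist",
    "prazosin":    "alpha-1 antagonist (note the -zosin suffix)",
}

def quiz(n=5):
    """Show n random drug names; reveal the class/notes after each one."""
    for drug in random.sample(list(cards), min(n, len(cards))):
        input(f"What class is {drug}? (Enter to reveal) ")
        print("  ->", cards[drug])

if __name__ == "__main__":
    quiz()
```

Run it in a terminal and it prompts you with a random drug name, then reveals the class when you press Enter; swap in your own dictionary of drugs as the semester goes on.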
Like others have said, I'd strongly recommend attending the tutorials; they only give the answers to the short-answer questions in the tutorials, and they give a good deal of information about the exam and what they expect.

There's two pracs. They're really easy and kind of boring. In the first prac, you generate antagonist/agonist response curves (i.e. add differing concentrations of a drug and look at the tissue response); in the second prac you use a few drugs to try and figure out the receptors present in a tissue. The pracs are assessed using a prac report form (for the first prac) and an online quiz (for the second prac). They're pretty straightforward.

There's also some short assignments ('self-directed' learning tasks, or SDLs). These are apparently fair game on the exam and MST, although there's usually only one or two MCQs on them. They shouldn't take too long to do (only a couple of hours) and are pretty easy marks. Like any science assignment you do, just make sure you're sufficiently anal with units, captions, etc. The assignments are introduced in a 'special topic', which is, again, accessible. They pretty much tell you what to do, so it's an easy place to pick up marks before the exam. It'll make your SWOTVAC just that much less stressful.

There's also a 20%, 40-minute MST. It's pretty straightforward, and it's similar in style and difficulty to the final exam. Keep in mind that most of the more difficult content in pharmacology comes in the second half of the semester. It's worth 20%, so yeah, study for it. The SDLs and pracs are supposedly assessable, but there's usually only one MCQ on them. Unlike the reviewer below me, I felt that the MST covered the content across the first half of semester fairly and equally (it's just that we had only had two therapeutics lectures by that point). The MST wasn't too demanding - about 30% of the class got a H1 (but about 40% either passed or failed as well).

This will be the first exposure most people have to pharmacology (it was mine). It actually involves very little chemistry - the subject is about the effects of drugs, not the organic chemistry of making drugs (so it's not Breaking Bad). It involves a fair bit of physiology and anatomy, and a little bit of biochemistry (mainly drug/receptor interactions). Most of the physiology is taught at a simple level. There's a bit of chemistry, but it rarely goes beyond high school level (literally just acids and bases). In terms of difficulty and workload, it's easier than the other second-year subjects I did (biochemistry and chemistry). It's supposedly less hardcore than anatomy and physiology. There's only a two-hour exam, although unlike most other subjects there's a substantial MST (20%).

On the whole, pharmacology is an interesting, fairly well-run subject. You learn a lot of interesting stuff and it's directly relevant (I've had my parents start to quiz me about the drugs they're taking!). The only downsides are the kind of useless pracs (either have six pracs, or none at all) and the fact that some topics are touched on superficially (although that does allow a good breadth of topics to be covered). If you've got a free spot in your study plan in second or third year, I strongly recommend taking it.

Edit: The exam was very fair - everything that was assessed was in the lectures/pracs/workshops. As expected, the questions overwhelmingly came from the lectures - only a few multiple choice questions came from the workshops and the pracs (so they're definitely still worth revising).
They generally didn't test minutiae, and although you had to remember drug names, simply recalling them wasn't a major focus of the exam. Many multiple choice questions gave drug classes (e.g. 'an ACE inhibitor') rather than drug names (e.g. 'captopril') as answers. On the whole, if you'd studied, the exam wasn't too challenging. The more social-sciencey aspects of the course weren't really emphasised in the exam. Do revise them, but focus on the science. 50 of the 110 marks were for 'mixed response' questions, which are more like VCE Biology exam questions than the longer essay-like questions seen in other biology subjects. You chose five out of six to do.

« Last Edit: November 06, 2014, 03:32:59 pm by mahler004 »

BSc (Hons) 2015 Melbourne
PhD 2016-??? Melbourne
I want to be an architect.

#### Shenz0r • Victorian • Part of the furniture • Posts: 1875 • Respect: +406

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #362 on: October 25, 2014, 01:19:15 am » +8

Subject Code/Name: BIOM20002: Human Structure & Function

Workload: 6x1hr lectures per week, 4x2hr anatomy pracs per fortnight, 1x2hr physiology practical per semester

Assessment: Written laboratory report (1000 words, 10%); two tests during semester (20% total, 10% each); and two 2-hour end-of-semester exams (70% total, 35% each)

Lectopia Enabled: Yes, with screen capture. I think I only went to a handful of lectures throughout the semester.

Past exams available: Practice exams for both papers from 2009-2010 are available on the LMS. The 2011 exams were on the UniMelb library page, as well as Paper 2 for 2012. Jenny also put up some sample anatomy "label-the-diagram" pictures throughout the semester. There were initially no practice materials for Physiology; practice material from past exams was uploaded during SWOTVAC, and you can find more PHYS20008 questions in the UniMelb library or get them from people who took that subject last semester. There were no practice materials for Pharmacology at first either, but similarly you can get questions from PHRM20001 students; some Pharmacology practice questions did go up as well.

Textbook Recommendation: General Anatomy by Chris Biggs is a handy book to get you through the "Principles" lectures in anatomy (so maybe 3-4 weeks). A lot of the lecture slides have diagrams that come out of this book, and the slides tend to follow the book as well. It is also useful for your ADSLs. Anatomedia is useful for the ADSLs, but a lot of it just contains text from General Anatomy. The lecturers take many diagrams from Gray's Anatomy, but I don't think you really need to buy it - you can just Google Image things. In addition, I don't think anatomy is a subject you can really study just by reading text out of a book. I never used Netter's Clinical Anatomy apart from for a few ADSLs on the upper and lower limbs. Human Physiology by Silverthorn is set as pre-reading before the physiology lectures. It's a decent book with nice-looking diagrams, and the explanations are clear enough to follow. For most of the semester I never did much pre-reading, but before the physiology exam I read through the textbook seriously, using Charles' lectures to help me go through it. Not much pre-reading was assessed in the MST, but some parts of it were on the exam. I don't think the Pharmacology department ever even mentioned their textbook. Not useful anyway.

Lecturer(s):

Anatomy:
P. Kitchener [Neuroanatomy] - you definitely need to write down the stuff that's not on the slides.
C. Anderson [Embryology]
V.
Pilbrow [Bone, Articular System, Vascular System, Skin] - Varsha talks through examples in her lectures and it's important that you get all of this down. It may be gibberish to you at that stage of the course, because you haven't actually learnt what she's talking about yet, and she is a bit difficult to understand.
S. Murray [Musculoskeletal System]
J. Xiao [Gastrointestinal, Cardio, Lower Respiratory, Renal, Urinary] - always tended to finish in around 40 mins
J. Ivanusic [Upper Respiratory, Reproductive]
Note these are the exact same lecturers as in the Science subjects, with the exact same slides. The anatomy department was fantastic. All of their slides were very clear.

Physiology:
D. Williams [Neurophysiology, Cardiovascular, Respiratory]
J. Bornstein [Digestive]
S. Harrap [Renal]
M. Wlodek [Reproductive]
Plus a few guest doctors who lectured on applied physiology.

Pharmacology:
A. Stewart [Drugs and Receptors]
G. Mackay [Autonomic Pharmacology]
M. Lew [Pharmacokinetics]

Year & Semester of completion: 2014, Semester 2

Rating: Anatomy: 1.75/2; Physiology: 0.5/2; Pharmacology: 0.5/1; Overall: 2.75/5

By department:

Anatomy

In line with previous reviews, I think Anatomy was the most well-taught part of the course. It may seem boring in the beginning. Neuroanatomy is taught well, and embryology can be a bitch to understand since you have to visualise folding in 3D. Varsha's lectures on bones etc. are a bit dull but necessary, although she did teach bone ossification wrongly in HSF and was much clearer in ANAT20006, so try to watch the ANAT20006 lectures if you need clarification. After this, anatomy became a lot more enjoyable, as you move onto identifying important structures in the body. You also learn some clinical material, especially in the musculoskeletal lectures (often about fractures, tears, compartment syndrome, endangered structures, etc.).

Anatomy is very much a visual subject and you should definitely take this into account when you study. I didn't write any summary notes for anatomy; I just printed the lecture slides with labels and many annotations. This was quite effective and efficient. I don't believe writing and reading are really going to help you improve your anatomy - it's all about identifying structures and then commenting a little bit on their significance. Writing and reading, in my opinion, would just be excessive. Definitely pay attention to the diagrams in the lecture slides, even to the small details.

You also get ADSL worksheets which complement each lecture series. These are helpful, but you definitely don't need to review them to do well. That being said, the anatomy department apparently likes to use images from the ADSLs in the exam, so try to go through them if you can. There is no quiz, so the ADSLs are not assessed in any way. I wish they were, though, because they are actually good practice. None of the "extension" material in the ADSLs comes up in assessment either, so you can skip the more obscure parts if you want.

The anatomy practicals are pretty cool, but you should really review the material beforehand. If you don't know what the hell is going on and can't name a lot of things (which was me for like 3/4 of the practicals), then you're not going to get much out of them, since you're just too confused to know what the demonstrator is talking about. They're pretty much just there to help your learning and are not assessed. You pretty much just rotate around 5 stations, and at each station you're looking at some specimens with a demonstrator.
Some demonstrators will actually explain a lot of stuff to you; others will just sit back, tell you to identify structures, and do nothing.

Assessment was also very fair, always covering the material, and, to be honest, very much on the easy side. Compared to the ANAT20006 MSTs, they are exceptionally easy to do well in. Anatomy questions are apparently similar to past exam questions, so use the past exams. Also try to find any student who is willing to give you questions from their ADSL quizzes. The best way to learn anatomy is to get involved in identifying things and having quiz-offs. And you also have your own body. Use it. This is very helpful for understanding locomotion and the types of joints involved in each movement.

Physiology

Lol. If you don't know by now, physiology is the bane of this subject. The teaching quality is not great, and I don't think I've seen a cohort this frustrated with something since Physics. To be honest, I never paid any attention to the neuro, cardio and respiratory lectures in HSF. I just grabbed ALL of Charles' lectures and studied off them, and then I listened to David at 2x speed. They're the exact same lecture slides with the exact same material, but for some reason David falls behind very easily and spends a lot of time digressing. I mean, one time he was stuck on the same slide for like 20 mins. He actually didn't even lecture on smooth muscle, since he fell behind, and just told us to do the pre-reading for it. Sometimes even the PHRM20001 lectures explained the physiology better.

Later in the semester, we had 4 guest lecturers come in to lecture us on applied physiology. Most of these lectures seemed important and worth studying for the exam. This was about relating cardio and respiratory physiology to clinical practice, so the lecturers came in to talk about how some diseases arise and what they can lead to (e.g. aortic stenosis). Two of them were decent and actually explained their material quite well. One was a bit nervous and sort of mumbled into the microphone, but if you took the time to listen back to it, the material was OK. The last lecture was when our cohort just did not give a shit any more. I don't even know what the hell went on in that lecture, even after listening back to it, but it was advanced respiratory physiology that we had never been exposed to and did not get references for. The lecture slides were also totally different, and the lecturer spoke at 1000000x words per minute.

Digestive physiology was only explored in 2 lectures, and I felt that the lecture slides were badly written and incoherent. In addition, I don't think it really gave the full picture of digestion either - it seemed more like I just got a fragment of it. I had to listen to both Charles and Joel and combine the two to make sure I got the whole picture. Joel likes to test the pre-reading too (a lot of which he does not dwell on), so make sure you listen to Charles, because he actually goes through it. Stephen and Mary were decent physiology lecturers and actually explained the content well. No problems here!

You also get "concept checks" for each system. Basically, it's just a short quiz on the LMS that's not assessed and is designed to give you feedback. However, we were never notified when the concept checks were added to the LMS, and the brilliant thing was that they disappeared after some time without warning (this was purposely done). So even if you did the quiz, you couldn't check over it again.
And these concept checks were never put back on the LMS for the whole semester either, so people would often print-screen their responses. So make sure you save them and look over them at exam time.

Now, there's one style of question that all students hate: the infamous "increase, decrease, no change, or not enough information" questions. These are annoying. I would rather have short-answer questions than these. I felt that these questions did not let you demonstrate your critical reasoning and detailed knowledge. Sometimes you have to assume something, sometimes you don't. Not many practice questions are put up in HSF, so it is imperative to grab practice questions off PHYS20008 students, as many of the questions that actually come up are related. Some of our MST questions were just ripped straight from ones seen in PHYS20008.

The physiology practical takes place around Week 11 and you have until the end of Week 12 to submit the report. Again, it's about the cardiovascular response to dynamic and static exercise, which was not addressed in our lectures. Luckily, Charles had an entire lecture on exercise, so that was immensely helpful. In addition, they do direct you to a relevant textbook, so it's not too bad. The report consists of 11 questions. The last question was quite random and, in my opinion, was chucked in just to justify our lecture on "Scholarly Literacy": we had to identify appropriate articles that would help us answer a particular research question. Overall, the practical report is not too hard. Some of your data might not make sense, though - if this is the case, email Charles, who runs the practical. He allowed me to use somebody else's data, since mine was behaving in completely the opposite way to what was expected. Alternatively, he also said that I could discuss the expected results and the sort of experimental errors I could have encountered, but with the 1000-1200 word limit I opted for the former.

So, to sum it up, focus on PHYS20008 rather than the physiology component of HSF.

Pharmacology

I was a student of PHRM20001, so my opinion is a little biased here, but I felt that Pharmacology was not necessary in HSF. It only skims the basics of Pharmacology and is definitely not integrated enough with physiology to justify it being there. You spend 3 lectures talking about how drugs bind to receptors, 1 on autonomic pharmacology (which is acceptable), 3 or 4 on pharmacokinetics, and then 1 on drug development. Seriously, drug development. They couldn't have at least lectured on something more physiological, could they?

While Graham and Michael are great lecturers, I thought Alastair took too much time when lecturing on how drugs work. There is not much to know in this part, and he does love to digress. It took a whole lecture just to get through affinity. I think this was taught much better in PHRM20001. That being said, although these lectures are bludgy if you're doing PHRM20001, don't neglect them; just add in the stuff that's not in PHRM20001, especially Michael's lectures on adverse drug effects.

I gave this component a low score because I felt it just clogged up space in HSF that could otherwise have been used for the important physiology lectures that were ripped out of the course. It didn't seem to relate very much to anything else we learnt. It would have been a lot better if they had talked a little about therapeutics, such as treating asthma or hypertension, but this was not elaborated on.
The Pharmacology questions on the MST are fair, though, and sometimes they're assessed by "fill-in-the-blanks". There's also a nifty short answer where they give you the features of made-up drugs and ask you which drug would have the smallest Vd, which would be eliminated the fastest, etc.

To me, this subject feels unnecessary. It is pretty much ANAT20006 and PHYS20008 mushed into one subject, with bits taken out to make room for the Pharmacology component and other parts (particularly in Physiology) taught quite badly. I would rather have done ANAT20006 and PHYS20008 separately than HSF. You get the exact same lecture slides for Anatomy and Physiology, so you're pretty much being tested on the same material. However, you get fewer resources. A lot fewer. ANAT20006 students get ADSL quizzes for each topic; you just get the ADSL worksheet and no quiz. We were given some Physiology practice material, though (although most of it was just pulled off the past exams in the UniMelb library). Therefore, you should really contact other students for their resources. Grab anything you can from PHRM20001, ANAT20006 and PHYS20008, because you're not getting much from HSF.

HSF is structured so that you'll have a lecture on the anatomy of one system, followed by its physiology (or the other way around). I didn't mind this; it felt natural. However, the lectures aren't integrated. I feel this may have been done on purpose, because the staff have said that it's up to the students to integrate the material themselves. This actually stirred up quite a lot of controversy in our cohort, and the coordinator ended up asking whether we would like integrated questions in the exam, but the cohort turned it down, probably because we didn't feel at all ready to answer integrated questions. So the anatomy and the physiology/pharmacology remained separate. If the lectures had at least guided us on integration, I think the cohort would have been more receptive to the idea of integrated exams. But they didn't.

The MSTs weren't too difficult, but it's very hard to tell where you went wrong. The first MST tested Neurophysiology, Neuroanatomy, Embryology, and Varsha's lectures. It consisted of around 30 questions plus two "label-the-diagram"/"fill-in-the-blanks" questions. The second MST followed a similar format but tested Musculoskeletal, Gastrointestinal, and Pharmacology. It was meant to test Digestive Physiology as well, but for some reason that didn't come up at all.

Anatomy is the first exam and it consists of 3 sections:
- Section A has 25 MCQs and is mostly weighted towards what wasn't in the MSTs, so Cardio, Respiratory, Renal, Urinary and Repro.
- Section B has "label-the-diagram" and "fill-in-the-blanks" questions covering the whole course.
- Section C requires you to respond to four long-essay questions covering the whole course.

With the exam, there was a definite emphasis on the latter half of the Anatomy content. Neuroanatomy, embryology and anatomical principles were not featured at all. You could have studied just from Simon's lectures onwards and still done well. A lot of things weren't covered in the assessment, which definitely annoyed me, seeing as I had spent so much time learning the intricate details of everything and memorising as much as I could. The exam turned out to be much easier than expected.

The second exam assesses physiology and pharmacology and it is ALL MCQ, so don't waste your time doing the short answer and long essay questions from past exams. Unlike Anatomy, this exam is more difficult.
It demands a thorough understanding (not just rote-learning), and there are a few traps that are easy to fall into if you aren't perceptive of small details. Many of the questions are of the "increase/decrease/no change/not enough information" variety, along with some more traditional "pick the correct answer" questions. As I said, the increase/decrease questions can be a pain in the ass, as you're left doubting yourself so much. With those questions it's best to scribble a flowchart of the likely response. Pay attention to the wording too. One of the harder questions on the exam involved the baroreceptor reflex integrated with your neurophysiology, which I felt was a pretty nice question that really tested how you think, as you needed to be aware of the responses involving both systems. Pharmacokinetics was also assessed in the "increase/decrease" format, where you pretty much have to use the features of 3 made-up drugs to answer the question. A lot of the drugs mentioned in HSF weren't assessed at all (in fact, I don't think any drug was).

The pre-reading is examinable, and the physiology department loves to test the small details on the slides, so always pay attention to any graphs they give you. To do well here, it's imperative that you go beyond the set lectures and read the textbook. Or just watch the lectures from PHYS20008, since they will actually go through the material that HSF doesn't have time for - and yes, they have sometimes assessed those things in the past. To prepare for this exam, definitely focus on the MCQ portions of past exams, read the textbook, and have a sharp eye for detail.

So really, this subject is essentially just a poor mish-mash of ANAT20006 and PHYS20008 with basic Pharmacology thrown in. I didn't actually find it hard, since I just hoarded resources off Science students, but I was frustrated with how the subject is constructed and with the quality of the physiology section.

« Last Edit: November 28, 2014, 05:24:45 pm by Shenz0r »

2012 ATAR: 99.20
2013-2015: Bachelor of Biomedicine (Microbiology/Immunology: Infections and Immunity) at The University of Melbourne
2016-2019: Doctor of Medicine (MD4) at The University of Melbourne

#### Shenz0r • Victorian • Part of the furniture • Posts: 1875 • Respect: +406

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #363 on: October 25, 2014, 02:10:09 am » +9

Subject Code/Name: PHRM20001: Pharmacology: How Drugs Work

Workload: 3x1hr lectures per week; 2x3hr practicals per semester; 3x1hr tutorials per semester; 3x1hr workshops per semester

Assessment: Continuing assessment of practical and computer-aided learning work during the semester (20%); mid-semester assessment (20%); a 2-hour written examination in the examination period (60%). This subject has a practical component: completion of 80% of the practicals and practical-related exercises is a hurdle requirement.

Lectopia Enabled: Yes, with screen capture.

Past exams available: Yes, there are a few in the UniMelb library. The ones on the LMS only give you one section of the exams. Sample questions in the review lectures and the tutorials may be used for practice.

Textbook Recommendation: Don't even know the textbook, so don't buy it.

Lecturer(s): J. Bourke, G. Mackay, D. Newgreen, T. Hughes, C. Wright, A. Stewart, P. Crack, M. Hansen, J. Ziogas, J. Fitzgerald, K. Winkel, M. Lew

Year & Semester of completion: 2014, Semester 2

Rating: 4/5

This was a well-taught subject that was run smoothly and was interesting.
The workload is light compared to anatomy and physiology. There are three themes. The first few weeks were spent talking about how drugs work: affinity, efficacy, potency, agonists/antagonists, pharmacokinetics, and the autonomic nervous system. Make sure you know your autonomic nervous system damn well, because it is very important for understanding therapeutics. There is not much to know in the other parts of this theme.

Then we move onto the second theme, therapeutics, which was of course my favourite part of the subject, since it integrated physiology and was quite clinical too. Other people have already said which topics were covered, so I won't repeat them, but the teaching here was excellent. There is not much anatomy, though; I think the most I ever heard about anatomy was for about 5 mins in the autonomic nervous system. All you really need to know is the normal physiology, which you then manipulate with drugs to counteract the disease you're trying to treat (e.g. blood pressure depends on cardiac output and TPR, so in a hypertensive patient we can use ANGII antagonists and ACE inhibitors, which prevent ANGII from constricting the vessels and thereby decrease TPR).

The last theme, toxicity, I did not really enjoy, mainly because it was pretty dry compared to therapeutics. You learn about environmental contaminants, venoms, and selective toxicity. You also get a few random lectures, such as Pharmacogenomics, Drug Development, Drug Regulation and Sociological Drug Use (which was pretty much just a guy telling a story the entire time). These don't tend to be heavily emphasised in the MST or exams, so just appreciate them, I guess.

I have no magic advice for memorising all the drugs they talk about. You just have to memorise: use flash cards or your traditional summaries and tables. Some drug classes have telltale suffixes (-zosin = alpha-1 antagonist, -olol = beta blocker, -pril = ACE inhibitor, etc.), so that makes life easier too. You should remember what class each drug belongs to, its action, its side effects and its selectivity (i.e. whether it binds to alpha-1 or beta-2 adrenoceptors, or whether it is selective for the CNS or the NMJ). Make sure you actually know the names accurately, because many names can be very similar (Naloxone, Nabilone) and very long to remember, and when you're revising it's easy to mix drugs up. Here's a brief list of some drugs that you're likely to hear about in the course - and yes, the names can be a bitch.

You have three SDLs to do throughout the semester. You have to download a program which simulates an experiment and work through a worksheet. These aren't very hard to complete and shouldn't take too long, but the assessors like to take off marks for tiny things, like making your horizontal axis longer than the data set. The last SDL is assessed by an online quiz.

The first prac is pretty damn boring: you spend a lot of time just waiting while you obtain a concentration-response curve for various agonists and antagonists. The second prac is better: you're given some pig ileum and you have to work out which receptors are on it by adding different agonists and antagonists. The first prac is assessed by a worksheet you hand in, the second by an online test.

There are only 3 tutorials throughout the semester, and they're presented either by a lecturer or by a PhD student. A tutorial worksheet is uploaded to the LMS, and answers are uploaded after the tutorial. You just go through the whole worksheet in one hour.
If you've revised, the worksheets are relatively straightforward and most of what the tutor says should be stuff you already know, so I didn't always find them very helpful. The workshops I didn't really pay attention to; they introduced some of the SDLs and also talked about careers in pharmacology.

The MST was meant to test Weeks 1-6 and was supposed to cover the beginning of therapeutics, but disappointingly, for some reason, it didn't. It seemed to be weighted more towards the first few lectures on how drugs work. Anyway, it's not a hard test. There were two pages of short answer questions (worth 10 marks) on pharmacokinetics, and 30 MCQs. Apart from your traditional A, B, C, D style MCQs, the Pharm department looooves to give questions where you have to match the options to a particular drug. Something like:

A) Salbutamol
B) Losartan
C) Captopril
D) Benzodiazepine
E) Phentolamine

1. Is an allosteric modulator of the GABA receptor
2. Is an alpha blocker
3. Is an ACE inhibitor
4. Is a beta-2 agonist
5. Is an ANGII antagonist

That is often seen in past exam questions too. I felt the exam was very fair; it covered almost all of the lectures, and if you have made the effort to learn everything then you will probably find it straightforward. A little warning: it's not just the lectures that are assessed, but also the SDLs, practicals and workshops. Yes, the workshops. The MST had a question that came from one of the workshops (which extended cholinergic pharmacology), and you're expected to know drugs that were mentioned in the SDLs even if they weren't in the lectures.

Overall, a great subject which relates to applied physiology extremely well. It was not too demanding, and learning about how drugs can treat disease was definitely the highlight.

« Last Edit: November 28, 2014, 02:54:10 pm by Shenz0r »

2012 ATAR: 99.20
2013-2015: Bachelor of Biomedicine (Microbiology/Immunology: Infections and Immunity) at The University of Melbourne
2016-2019: Doctor of Medicine (MD4) at The University of Melbourne

#### mahler004 • Victorian • Forum Obsessive • Posts: 492 • Respect: +64

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #364 on: October 28, 2014, 12:12:52 am » +8

Subject Code/Name: POLS20008 Public Policy Making

Workload:
• One two-hour lecture a week for 10 weeks
• One one-hour tutorial a week for 10 weeks
Must attend 7/10 tutorials.

Assessment:
• A 40% "Policy Brief", 1500 words, due mid-semester
• A 60% "Policy Research Paper", 2500 words, due in the exam period

Lectopia Enabled: Yes, but the lecturer makes extensive use of videos, which aren't recorded for copyright reasons.

Past exams available: No exam. The lecturer put up good (H1-quality) past assessments.

Textbook Recommendation: Althaus, C., Bridgman, P. & Davis, G. (2012/13). The Australian Policy Handbook (fifth edition). Pretty much all the readings come from the textbook, so you do need to buy it (although it's also available at the library on overnight loan). Fortunately, it's not too expensive ($50). And yep, you'll be lining our Vice-Chancellor's pockets even more.

Lecturer(s): Dr Scott Brenton

Year & Semester of completion: 2014 Semester 2

Rating: 4.5/5

Comments: Oddly enough, this is the first review of a politics subject on ATAR Notes. I guess it shows the audience here. This is apparently one of the more popular politics subjects; it's taken both by arts students doing the politics major and as a pretty common breadth subject (I took it as a science student).
Scott, who takes the lectures, is a good, innovative lecturer. Lectures are interactive and Scott makes extensive use of technology (especially videos, news reports, etc.). It's not just him talking for two hours. As a science student, attending arts lectures is unusual - in science, lectures mainly present content which must be thoroughly learned and recalled in exams; in arts, lectures are more about giving context for assessment tasks and tutorials. The lectures do this extremely well. You also get to watch an episode of The Hollowmen, which is almost worth taking the subject for in itself.

The lectures cover basic politics (only for about half a lecture), theories of policy making, policy implementation and the role of various groups in the policy-making process (the government, students, lobbyists, the public service, etc.). They are reinforced by the readings, which basically involve working through a public policy textbook.

Of special note are the guest lecturers. Nicholas Reece (who also teaches in the first-year subject Australian Politics) gives a lecture on communication and the role of political staffers. Reece was a senior staffer in Gillard's office and has a lot of stories to tell - his slide on the "day in the life of a political advisor" almost completely turned me off the role. John Brumby, the former Premier of Victoria, also gives a lecture about the role of political leaders. A former Liberal senator, Prof. Russell Trood, gives a lecture on foreign policy. Again, his stories are worth attending the lecture for.

The tutorials involve discussion of the textbook readings and the lecture content. Scott also uploads some items to act as discussion pieces (often a recent-ish news article). I won't comment too much here, as your experience will largely depend on your tutor (other than saying my tutor was great). They're fairly typical Arts tutorials.

Finally, the assessments. The first assessment, the "Policy Brief", involved writing a 1500-word paper on an issue in the Victorian election. You were expected to compare and critique the policies of both major parties, and provide a statement on how important the issue would be in the election. This year it was challenging - the essay was due in mid-September (so well before the election campaign had gotten started), so finding media and resources was a challenge. Last year (2013) the assignment was the same but with the Federal election. I'm not sure how they'll do it next year with no Federal or state election. The writing style here was similar to a newspaper op-ed, but it could be formatted as an argumentative essay.

The second assessment, the "Policy Research Paper", involves writing a 2500-word paper on a policy of your choice. You had to prepare a (federal) Cabinet submission, a media release and a literature review. This year you could choose the policy you wrote on; in previous years the topics were restricted. There are, again, sample assignments on the LMS which should help with formatting the Cabinet submission and media release. The literature review is similar to an essay, but not really: you have to use the academic literature to provide evidence for your policy.

Both assessments are innovative and relevant - it's much more fun and relevant writing a cabinet submission or an op-ed compared to writing yet another essay... Like the review below this one, I'd like to address the question, present with any breadth subject, of the value of the subject.
A good breadth subject will teach you something new and useful, and the reason I took BA breadths throughout my degree was to maintain and improve my writing skills, something which I really don't think is emphasised enough in the BSc. Plus, it allowed me to build on my already present interest in politics. I've considered undertaking a career in the public service after Honours (probably through one of the absurdly competitive grad programs), so this was also a way to get a feel for what that would be like.

The handbook suggests "Politics at Level 1" to take the subject. This isn't entirely necessary if you're doing the subject as a breadth student. If you know your House of Representatives from your Senate, your PM&C from your DFAT, your states from your federal government, and your public service from your ministerial staffers, you'll be fine. I've done the first-year subject Australian Politics, and the third-year subject American Politics. American Politics was a fantastic subject, but a very different one. Public Policy Making is a great choice if you're looking to go into the public service or into policy analysis, or if you just want to learn more about the way policy is made in Australia. Highly recommended.

« Last Edit: November 05, 2014, 11:16:38 pm by mahler004 »

BSc (Hons) 2015 Melbourne
PhD 2016-??? Melbourne
I want to be an architect.

#### chysim • Victorian • Trendsetter • Posts: 105 • Respect: +58 • School Grad Year: 2011

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #365 on: October 28, 2014, 03:56:01 am » +8

Subject Code/Name: SCRN30005 The Digital Screenscape

Workload: 1x1.5 hour lecture, 1x up-to-3-hour screening, 1x1 hour tutorial (must attend 80% to pass)

Assessment: 40% - 1500-word essay presented through a blogging platform of the student's choice (due ~mid-semester); 60% - 2500-word research essay (due first day of exams)

Lectopia Enabled: Yes, with screen capture

Past exams available: N/A (no exam)

Textbook Recommendation: Just a book of readings. They're all posted online, and as the price of the reader magically jumped from ~$15 to >$40, I'd go the online/DIY route.

Lecturer(s): Dan Golding (and he's the only staff member for this subject)

Year & Semester of completion: Sem 2 2014

Rating: 4.9 out of 5 (yeah, I'm picky)

Your Mark/Grade: H1

TL;DR: A fascinating subject that will appeal to anyone interested in film and media. As a bonus, it is really well taught and constructed.

Comments: So if you've come across my reviews of Film Genres or Hollywood, you'll know I kind of have a predisposition to loving these subjects. While equally fantastic, DigScreen marks a bit of a departure in content. Rather than strictly cinema studies, the subject focuses on the study and criticism of digital media - pretty comprehensively too, I might add. The subject basically consists of three main units: Unit 1 is on digital cinema; Unit 2 is on videogames and play; Unit 3 is on "digital selves". It's early, but I'll briefly run through what each of these units entails...

Unit 1, as I said, is on digital cinema. The films screened here include Super 8, District 9, Pirates of the Caribbean 2, and Gravity. The main themes through this unit are the death of film (i.e. physical celluloid), remediation, digital aesthetics, and the neo-baroque.

Unit 2 is on videogames. The only film screened in this unit is Indie Game: The Movie. Screening times in other weeks are used for what Dan calls Lab Sessions.
Here, the collective class plays (or watches others play, as the case may be) games ranging from mega-studio games such as The Last of Us and Portal to mobile games including Duet and Space Team. These sessions were quite fun and really interesting. In the final week of this unit, the screening time was used for a lab session with the Oculus Rift VR set. This was (obviously) mega-cool. Studying games critically and thinking of them as art forms is initially a process that is quite alien and, to some, may seem quite odd, or even pointless. Dan really hammers home the value in this though, and this unit was probably the most interesting of the three.

Unit 3 is on what the subject calls "digital selves". This included things like our interactions with social networks, our relationship with technology, the critical implications of AI, etc. Films screened here included RoboCop, Her, V for Vendetta, and the documentary Catfish, as well as two episodes of the (disturbing but quite interesting) TV show Black Mirror. Though all the lectures in this section were very good, this unit was probably the hardest to get through, as some of the readings were quite full-on.

Dan is a really good lecturer. Why? Here's why:
• He's well prepared. He knows what slide is coming next and obviously has a solid list of notes on what he's going to say. This definitely shows, but it never gets to the point where he is simply reading from a script for anything other than the odd quote.
• He knows the content like the back of his hand. In some subjects you just see a lecturer going through the motions and reading off the slides as they are presented. Not this one.
• His lectures are clear and easy to follow in structure and presentation.
• His lecture slides are really slick and well constructed, especially when integrating video or audio (which you often see lecturers fumble around with).
• His lectures are always entertaining (or at least never boring or monotonous).
• The subject obviously stems from his personal interests, but very seldom is the content overly esoteric or alienating (I kind of feel like a depression-era housewife after writing that sentence).
• And, probably the best thing about Dan's lectures is that he carries a really interesting idea or theme through the entire thing.

For many of the same reasons, he's a really good tutor too. I guess being the lone staff member of the subject kind of makes this Dan's baby. In the hands of someone else I could definitely see this format going haywire, or ending up as some madman's experiment. Dan, however, is dedicated enough to make it work, and the subject really benefits from his presence as its (benevolent) overlord.

Additionally, the subject is really well constructed. With the split into three inter-related but individually discernible units, it helps to (a) appeal to a greater range of students and their interests, (b) provide a more holistic view of the "screenscape", which is obviously one of the subject's main objectives, and (c) [I forgot what I was going to write for (c)].

The blog essay is a really interesting idea too. Giving students the freedom to present something uniquely is quite brave for a subject, but I found it quite liberating and the format very conducive to my style of writing. I wish I could write more essays like that. Here's what I came up with, if anyone's interested. More generally, however, the essays are quite difficult to write.
They ask you to engage with some pretty complicated ideas, and you can end up somewhat bogged down in the theory. This was probably the biggest step-up I noticed between this as a level 3 subject and the previous level 2 SCRN subjects I've done. The approach I took, however – and it wasn't always easy – was to come up with an idea. Essays I've written in the past are generally good (IMHO), but when I look at them critically, they really just respond to a set question, and that's about it. I'd like to think the couple of essays I've come up with this semester are a bit more interesting, mainly because I've come up with somewhat of an original idea and concept to build upon. Dan is really good with that too, in that he provides a lot of different topics for each essay, but also allows students to either alter these topics or come up with one of their own (in conjunction with him, of course).

The 0.1 point that I've docked off the rating is because of the readings. Though most are quite interesting, some of the readings are very lengthy, very artsy, very esoteric, and therefore hard to get through, especially during the busy times in semester. This is to be expected, however, and doesn't take much away from the subject as a whole.

Again, we get to the question of how valuable a subject like this actually is to someone like me, who is an engineering major. I've kind of come up with a job-interview-friendly justification of this. These subjects invite you – and, indeed, teach you – to look at seemingly simple concepts in a high degree of depth. They aim for engagement rather than simple recognition, understanding rather than formulaic application; a method of tackling intricate and complex issues critically and with attention to detail. They also require you to write, and write well. Writing that flows, writing that is concise, writing that is nuanced, writing that allows you to convey a defined and codified message. This is invaluable to communicating ideas, and communicating in general. Further, it's something I enjoy, and it's something that provides a distinct disjuncture from my other studies. These are all good things!

Unfortunately, it seems as if the future of this subject is uncertain. It is definitely taking a year off next year (I've speculated that this is for budgetary reasons), and Dan has all but completed his PhD, so who knows where he'll be. Regardless, this is a really great subject, so if you're interested in digital media and up for a challenge, don't hesitate in taking it.

« Last Edit: May 24, 2015, 01:55:35 am by chysim »

UoM | Bachelor of Environments (Civil Systems): 2012-2014 | Master of Engineering (Civil): 2015-2016 | Feel free to shoot me a PM pertaining to getting to the M.Eng through the Environments course, or the Envs/Eng courses in general.

#### chysim • Victorian • Trendsetter • Posts: 105 • Respect: +58 • School Grad Year: 2011

##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #366 on: November 04, 2014, 10:15:50 pm » +8

Subject Code/Name: CVEN30010 Systems Modelling and Design

Workload: I won't even try to describe the labyrinthine structure of this subject here. See the comments.

Assessment:
5% – Geotechnical Lab Report
5% – Hydraulics Lab Report
20% – Geotechnical Design Assignment
20% – Hydraulics Design Assignment
50% – 2-hour exam

Lectopia Enabled: Yes, kind of... (again, see the comments).
Past exams available: On the library website

Textbook Recommendation: None

Lecturer(s): Geotechnical: Stuart Colls; Hydraulics: Roger Hughes

Year & Semester of completion: Sem 2 2014

Rating: Overall: 1.75/5; Geotechnical: 3.5/5; Hydraulics: 0.5/5

Your Mark/Grade: H1

TL;DR: Half of this subject is horribly taught and a constant burden. The other half is mostly okay. Overall it's quite poorly conceived and in need of review.

Comments:

Structure

This subject is a mess. It feels like the structure of this subject has been designed with the aim of making it as confusing as possible. It is basically two subjects amalgamated into one – Geotechnical Engineering and Hydrological Engineering. They are kind of just slapped together though – these aren't just topics, they're completely different subjects. There is basically no cross-over, as the two components are taught completely independently, and the order of the lectures is jumbled around. Oh, and in the middle you throw in some pracs and labs somewhere. I'll try to describe the structure of the subject below:

In weeks 1-8 there are two lectures per week that run for two hours each:
• In weeks 1 & 2, one lecture is dedicated to geotechnical and the other to hydraulics
• In weeks 3 & 4, both lectures are on geotechnical
• In weeks 5 & 6, both lectures are on hydraulics
• In weeks 7 & 8, we're back to having one lecture dedicated to geotechnical and the other to hydraulics
After week 8, the lectures finish. There are no more lectures. Still with me?

In weeks 3-5, lab classes run. Each student has one two-hour geotechnical lab (on soil seepage) and another for hydraulics (on the hydraulic jump). A lab report for these is due week 6 IIRC (maybe 7, actually). In weeks 6-9, geotechnical computer lab classes run to work on the geotechnical design assignment. In weeks 9-12, hydraulics computer lab classes run to work on the hydraulics design assignment.

This structure might be okay if the staff had bothered to explain it to students prior to timetabling. When you register for classes you are smacked in the face by the millions of classes you have to register for, just for this single subject.

Geotechnical

Okay, so the geotechnical component is okay. The lecturer, Stuart Colls, was very good, even if he was working off someone else's slides (those of the recently retired Prof. Ian Johnston). Ian, as far as I can tell, was Stuart's PhD supervisor, so using his slides really doesn't cause any issues. The geotechnical tutor (and there is only one, for a cohort of >250 students) was also quite diligent in answering questions on the discussion board and a good tutor overall, although her marking of the final assignment was remorseless (a friend said he got marks taken off for a typo in what is essentially a quantitative assignment).

Hydraulics

Okay, this section was the most poorly executed component of any subject I've ever done at UoM. Roger Hughes, the lecturer for the hydraulics component, was an outright bad lecturer. Rather than slides, he insisted on presenting everything through the document camera, thereby getting students to print out and fill in a large Word document, including many diagrams that must be filled out quickly. This requirement for rapid sketching and jotting inhibits any actual absorption or understanding of the content. That said, his explanations of the material weren't that bad, but he had a really weird, disjointed way of speaking which made listening to him a chore.
Hughes also seemed to have complete disregard for students' revision needs and for people who could not make the lecture (which could be expected to be a pretty high percentage given that the lectures ran from 5:30pm to 7:30pm). Several times he did not wear a microphone (making the recording impossible to listen to) and the wrong document camera was recorded (making it even harder to follow). Just complete laziness, lack of regard for students, and embarrassing behaviour for a UoM lecturer. I know the University makes it pretty clear that lecture attendance is compulsory, but all of this made the lecture recordings practically useless for going back over material in revision. A completely filled-in version of the notes was eventually provided during SWOTVAC, after many students had complained.

I managed to get access to last year's lecture recordings, when the subject had a different lecturer for the hydraulics component. These were so, so much better. The lecturer actually bothered to construct coherent lecture slides. He explained things much more clearly and provided examples for pretty much everything introduced, which helps you understand what will be on the exam and how you are to go about answering questions. This may be the most monumental downgrade in a subject's quality of teaching ever (although this may be surpassed whenever someone takes over for Charles in Human Phys). The only problem was that the content was slightly different, but most key concepts were the same as this year.

But if the lecturer was bad, the tutor for hydraulics was even worse. Answers to discussion board questions asking for clarification of the (poorly written) assignment briefs were of the form “see the handout…” or “this question has already been answered” – not acceptable. If questions are being re-asked, it's because they are either unclear in the handout or the previous answers have been inadequate (or both). The people doing this course are not stupid (well, for the most part). She couldn't seem to grasp this.

Assignments
The assignments for this weren't too bad. The labs were quite good and well run (although the hydro tutor spoke pretty much inaudibly, which didn't help) and the reports weren't too complicated. The geotechnical design assignment was pretty good too. This involved investigating a slope and designing some method to stabilise it. Though, as I mentioned earlier, it was marked quite harshly. The hydraulics design assignment was a little bit worse. The assignment sheet was poorly written and (again, as I mentioned earlier) the tutor wasn't particularly useful in clarification. Once you got your head around it though, it wasn't too hard either. Both of these assignments were completed individually rather than as a group.

Other
The only available consultation time with tutors for this subject clashed with the 2-hour lecture for Structural Theory and Design, a subject that most if not all students who do this subject would also be enrolled in. I – and I'm sure a few others – informed the staff of this, and nothing was done to address it. Also, it was not until week 12 that we actually got any marks back for an assignment. Even then, feedback consisted solely of ✓s and ✗s. Very insightful. Also, the exam was today, and we still haven't received our marks for the two design assignments. These are worth a total of 40% – students are going into an exam with 40% of coursework hanging in limbo!
(Eventually these were received on the 25th – a casual 6 weeks to mark a relatively straightforward geotech assignment.)

Summary
This subject isn't too difficult. Having only 8 weeks of lectures (one of which is introducing assignments and another of which is revision) means it doesn't really have too much content. I guess I can see why whoever designed the subject thought that they could get away with rolling these two components into one subject. Overall, the geotechnical component was the subject's saving grace. This section was properly lectured and worked examples were provided for the exam. The hydraulics component, meanwhile, is an omnishambles. The lecturer is bad, the tutor is useless, and no worked solutions were provided for the exam revision questions. The subject requires a major overhaul in structure, quality of teaching, and resource allocation (i.e. more (competent) staff!).

« Last Edit: December 05, 2015, 12:53:45 am by chysim »

UoM | Bachelor of Environments (Civil Systems): 2012-2014 | Master of Engineering (Civil): 2015-2016 | Feel free to shoot me a PM pertaining to getting to M.Eng through the Environments course, or the Envs/Eng courses in general.

#### vox nihili
• National Moderator • Great Wonder of ATAR Notes • Posts: 5289 • Respect: +1367

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #367 on: November 07, 2014, 07:01:46 pm » +8

Subject Code/Name: BCMB20005 Techniques in Molecular Science
Workload: 1 x 1 hour lecture weekly, 1 x 1 hour tutorial weekly, 1 x 3 hour prac weekly
Assessment: 7.5% MST, 7.5% practical exam, 35% theory exam, 10% class performance, 5% assignment, 35% reports
Lectopia Enabled: Yes, with screen capture
Past exams available: Yes, four available (more than enough)
Textbook Recommendation: Techniques in Molecular Science Lab Manual (must buy), don't bother with the recommended text
Lecturer(s): Amber Willems-Jones
Year & Semester of completion: Semester 2 2014
Rating: 4.5 out of 5
Your Mark/Grade: H1 (80)

Comments: Comments are under each spoiler!

The Pracs
Each semester there are 9 pracs, with the ninth prac taking three weeks to complete. The pracs are:
Prac 1: Use of pipettes and spectrophotometers
Prac 2: Isolation and analysis of plasmid DNA
Prac 3: Restriction enzymes and restriction mapping
Prac 4: Polymerase chain reaction (PCR) and primer design
Prac 5: Introduction to cell biology/preparing buffers
Prac 6: Kinetic properties of enzymes
Prac 7: Estimation of protein
Prac 8: Exploiting size and charge to separate proteins
Prac 9: Purification of lysozyme

Each of the pracs is scheduled to last three hours, though most will run short. The only one that really put everyone under the pump was prac 6. It wasn’t a particularly difficult prac, it was just long. Overall, the pracs were really enjoyable. You get the opportunity to use a lot of equipment and learn how to use it properly. We used UV spectrophotometers, PCR machines, micropipettes, and some thingamabob called Floid. You’re introduced to a lot of techniques that are relevant to a variety of biological sciences. Personally, I think the pracs are a perfect introduction to practical work. There’s an enormous breadth of techniques to learn and there’s plenty of support to learn them. With that said, the demonstrators do encourage you to work independently and to use your brain. You’re not molly-coddled, but at the same time you still don’t feel like you’ve been dropped in the deep end.
For each of the pracs, you have to answer a few questions (more about those in assessment), except for pracs 3, 7 and 9, for which you’ll have to write a report. I’m not one for prac work typically. I hated it in physics, biology and chemistry. This, however, I loved. I was excited to show up to the pracs and genuinely interested in what we were doing.

Assessment
There’s a lot of little pieces of assessment in this subject, and it’d be a lie and a half to say that they’re not time consuming. After each prac, barring those with reports, you have to complete questions about the prac. Normally these involve some calculations and presentation of data. These can be extremely time consuming, though the good news is that you are granted a pass or fail mark for them and, as my demonstrator said to us, “it’s pretty fucking hard to fail”. During each prac, the demonstrators will assess your prac performance. These marks are only worth very little and it is relatively difficult to do poorly. If you do make a mistake, you will be penalised, but it’s not worth worrying about.

There’s also a practical exam. This involves doing an experiment (with plenty of time to do it, I might add) and then answering some questions about that experiment. It changes each time, though this semester it was based on experiment six, so we had to do an activity assay for glucose-6-phosphate dehydrogenase. All pretty simple stuff. Indeed, the class average was 78% with more than 100 (out of about 150) students getting an H1. I completely screwed up my results so I wasn’t one of them unfortunately :p

There’s an assignment about pH. This is essentially year 11 chemistry with a little bit extra. It’s not difficult. Everyone in my group got more than 20/25, so it was all fairly straightforward. The extra stuff you’re taught is self-explanatory and none of it is particularly taxing. There’s also a mid-semester test, which for the first time was completely based on calculations. The average was again pretty high and I certainly felt that the test was straightforward. A little bit frustrating that it was all MCQ though.

The most important pieces of assessment are the exam and the reports. On the reports: these can be a little nerve-wracking. They do require a lot of time, effort and attention to detail. There is a hell of a lot of support though and there are always resources available to check what you’re doing. At no point did I feel as though we’d been left high and dry. The prac book has really specific instructions about how to write a report and there is a lecture/tutorial given early on in the piece that explains how to do this. The biggest bonus of the reports, though, is that they’re done at home. The other bonus is that the first is worth less than the second, which is worth less than the third. It is really important to pay attention to the detail. A cursory look at the rubric for the reports will reveal that the lion’s share of the marks come from the way you present data and not your discussion. Make sure you do these properly. I felt like I’d produced some really good reports only to be smashed on the marking because I’d made careless error after careless error.

On the exam: it’s tough. There’s no time for a toilet break or to day-dream about that really cute girl sitting in front of you. Amber packs a heap of info into it and you really are expected to remember all the details. That said, I came in relatively underprepared and felt ok with the exam.
That there are only 12 lectures in the subject makes it a lot less complicated, so that’s a bonus. Just don’t be like me and underestimate the difficulty of the exam – it is tough and it will take you the full two hours.

Lectures and Tutorials
The lectures deal with the basics of molecular science and the purification of proteins. The content is genuinely quite interesting, so that’s a huge bonus of the lectures. In some cases, the lectures relate quite well to what you’re doing in the pracs, so that’s even better. For the most part, however, they go above and beyond where the pracs go, so they do feel a little bit isolated. Personally, I found it very easy to forget that we had lectures and didn’t really buckle down until the day before the exam, at which point it was far too late to do so. So the top tip there is make sure that you stay on top of the lectures. It’s not a particularly big ask, though with the work you do for the pracs it can feel like a bit of a pain. That said, some of the stuff you learn is pretty cool so it should be ok!

The tutorials were probably not all that helpful. A lot of students didn’t go. Indeed, the attendance was so bad at one of them that Amber just decided to cancel it and wish everyone a happy holidays. Personally, I viewed them as a bit of an optional extra and would only encourage those who are particularly struggling with a topic to go. For that purpose, they are great; otherwise they’re a waste of time.

Coordination
This subject was brilliantly co-ordinated. The semester went off without a hitch, which is quite a tough ask for a subject that has so much assessment and so many pracs. All of the pracs felt well organised, and there were rarely issues at all. The lab was always set up properly, the demonstrators knew what they were doing. Everything in the labs ran like clockwork. Amber was extremely helpful when approached. When I had a couple of technical issues this semester, Amber went above and beyond to help me out with them. She even went as far as to put my graphs in for me on my questions one week. Another highlight of Amber’s coordination was how she handled the occasions when the lecture capture didn’t work. Rather than merely supplying the same lecture from last semester, Amber would actually record the lecture de novo in her office and post the video for it on the LMS. I cannot stress enough how well this subject is coordinated. Against the backdrop of HSF—which is a shocking subject—this subject was a godsend. Everything ran as it should have. You never felt as though there was no place to go to find your answer and Amber was perhaps the most receptive and helpful coordinator I’ve had.

The Gist
This subject is difficult, but it never feels impossible. You know that you’re expected to work hard, but it really doesn’t feel like hard work. Most importantly, I feel like it’s left me confident in the lab, which was an enormous change from how I felt in Chemistry, for example. It’s well run, it’s interesting and everyone leaves with a wealth of knowledge that they know will be useful should they find themselves in a lab again. Even better is the fact that the techniques that one learns in this subject are applicable in a number of areas. Personally, I would recommend this subject above other second year prac subjects. I honestly feel as though one could walk into any third year prac and still feel a cut above the rest because they’ve done this subject. Highly recommend!

« Last Edit: December 03, 2014, 04:58:30 am by Mr. T-Rav »
MED INTERVIEW TUTORING – PM to secure your place early, as they fill up quickly!
Join ATARNotes Footy Tipping
2013-15: BBioMed (Biochemistry and Molecular Biology), UniMelb
2016-20: MD, UniMelb
2019: MPH, UniMelb
Year I: BIOL10002 BIOL10003 CHEM10006 MAST10011 MAST10016 PHYC10007 SPAN10001 SPAN10002
Year II: BCMB20005 BIOM20001 BIOM20002 CLAS10022 GENE20001 SPAN20020 SPAN30014
Year III: BCBM30001 BCMB30002 BCMB30010 BIOM30001 BIOM30002 PHRM30008

#### literally lauren
• Administrator • Part of the furniture • Posts: 1623 • Resident English/Lit Nerd • Respect: +1274

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #368 on: November 07, 2014, 10:35:24 pm » +10

Subject Code/Name: THTR20021 - Shakespeare in Performance
Workload: 1x1.5 hour lecture (usually closer to 70 mins) and 1x1 hour tute
Assessment: 1x1000 word Short Essay/Passage Analysis; 1x10 minute performance in tutes with 1000 word write-up; 1x2000 word Research Essay
Lectopia Enabled: Yes, though occasionally some of the videos were under copyright, so weren't included in the lecture capture.
Past exams available: No exam.
Textbook Recommendation: No textbook; the subject reader is a must. The handbook specifies Oxford editions for all the texts but this isn't necessary. Texts studied are (in order):
-The Taming of the Shrew
-Titus Andronicus
-Midsummer Night's Dream
-Hamlet
-Macbeth
-The Tempest
Lecturer(s): Dr. David McInnis
Year & Semester of completion: 2014, Semester 2
Rating: 6 out of 5... yes I can do that... shut up, this is English, not maths.
Your Mark/Grade: H1

Comments: I'll try and go through this systematically so this isn't just an extended gushing rant about how wonderful this subject is:

Texts/Readings: The workload was more than manageable. Chances are you've studied Macbeth or Hamlet before anyway, and if you haven't, the LMS for this subject gives you access to all the BBC performances, alongside a program that scrolls through the text while you're watching (as well as basically every other known Shakespeare adaptation in existence!) Even if you're familiar with the plays, this is probably recommended since this isn't a standard 'here's a book, write what you think' kind of subject. How things are performed is really central to a lot of the lectures and tutorial discussions, so knowing the standard BBC version gives you a good starting point for the plethora of other adaptations.

Films: ^And I do mean plethora. For each text we would have discussed at least three different adaptations, ranging from the bizarre (Macbeth set in gangland Melbourne) to the sardonic (Taming of the Shrew in the London political sphere) to the grotesque (literally any version of Titus Andronicus). You can get by without watching all of them, of course, though whichever texts you're planning on using for assessment, the more alternate views and performative choices you can discuss, the better.

Assessment: This was what almost put me off this subject at the start of the year; I'm fine with essays, but I am so not a 'theatre-kid.' I have no performance background at all, and I really don't enjoy acting. The tutors were quick to allay fears during the first week, telling us we weren't actually required to perform if we didn't want to; you could join a group and just be a 'backstage' light/sound operator, or the brains behind the operation. Furthermore, you aren't at all judged on your acting abilities; the task is simply an exercise in performance decisions.
There's a lot of freedom here: you can choose any scene in any text, and you're even free to modify, modernise and mutilate the text as you see fit. The actual performance environment is pretty casual, just an open studio room where tutes are held, and everyone was always really supportive. Being a Theatre Studies subject, it does of course attract some skilled thespians who put many of us to shame, but this subject is more about the thought that goes into the performance than the way it is performed. (Hell, I got a H1 on that section and I was far from the most talented actor in the room.) After the performance you're asked to explain some of the choices you made, for instance, dialogue, positioning, costuming, sound, modifications to the original text, etc., and then by next week you have to do a 1000 word write-up of this process. It's not a formal essay, and following the set formula of subtitles and prompts is pretty easy. The other assessment is fairly straightforward; just regular English essays with a bigger emphasis on how a text might be performed.

Tutorials: Again, I was a bit worried this would consist of a bunch of drama exercises involving finding your spirit animal or passing energy around a circle, but there was none of that. We had the occasional performative or reading exercise, but you could usually opt out or just let others have their time in the spotlight. Even then, I found myself enjoying a lot of the tasks anyway since it was more about the intent than the delivery. Tutors are very open to ideas as well, so if you're in a group that would prefer some more performative sessions or more discussions of the readings then they're always willing to work these into the lesson plan where possible. Both tutors (from what I heard, but I can definitely confirm this for mine) were approachable when it came to content/assessment-related questions, and frequently opened up additional office hours when assessment was due.

Lectures/Co-ordination: I saved the best for last. Most people who've taken a first year English subject will know David McInnis. He's widely regarded as one of the best lecturers in the department, and you can tell this is his pet subject. These lectures were the highlight of my week and I often wished they were more frequent. I feel like David probably knows more about Shakespeare than Shakespeare himself did. Although each lecture centred on a certain text, the breadth of concepts and criticism was incredible, and there was just the right balance between information on slides and additional verbal stuff. He's also the subject co-ordinator and the whole thing was run just as well as his lectures. Everything was clearly set out, the LMS page wasn't nearly as messy as my other English subjects, and the sheer amount of resources and help available was staggeringly good.

Overall I'd say this is an incredibly fun subject, definitely geared at the English-inclined, but don't be put off by the theatre-studies elements. Now here's a bunch of amusing images from the lectures to win you over:

« Last Edit: November 27, 2014, 10:07:31 pm by literally lauren »

#### kandinsky
• Guest

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #369 on: November 08, 2014, 12:01:22 pm » +6

Subject Code/Name: MUSI10208 - 19th Century Music and Ideas
Workload: 2x 1 hour lecture and 1x 1 hour tutorial
Lectopia: Yes, but without images. Go to the lectures – it’s really important to see the slides/listen to the musical examples/read through the scores with the lecturer.
Past Exams available: none
Textbooks: Norton’s Anthology of Western Music. However, this textbook is REALLY expensive ($100 even when second hand!). Since I was doing this subject for breadth, I decided to use the ones in the music library, even though they’re older editions. There is no problem with doing this. I really suggest you don’t buy the textbook, unless of course you’re doing the Bachelor of Music.
Lecturers: Professor Kerry Murphy (gives most lectures), Rachel Landgren (PhD student who gave a few lectures), Dr Suzanne Cole (gave one lecture)
Year & Semester of completion: Semester 2, 2014
Rating: 5/5
Assessments: 60% - 2000 word essay; 20% - 2x500 word assignments analysing a piece of music; 20% - 1 hour listening exam

Lectures / Coordination
It really was such a pleasure to do this subject. The lectures, with their mixture of historical/social discussion and musical examples/analysis/videos of performances, are so much fun. Professor Kerry Murphy, who took the majority of the lectures, organised the subject really well, and her enthusiasm for the subject was rather infectious. The other lecturers were equally wonderful. I noticed that there was a lot of humour in the lectures – quite often the whole lecture room would erupt in laughter. Now, it is important to note that this subject is a core requirement for the Bachelor of Music degree, so most of the people doing it are music students; I only met a handful of people doing it for breadth. I feel that you should only do this subject if you have a real interest in Classical/Romantic music and have some knowledge of musical theory – you also need to be able to read music for the assignments, otherwise it would be a bit tricky to do well. The subject covers exactly what its name says: it surveys the 19th Century from its very beginning (the influence of Mozart/Haydn and Beethoven), through the middle periods (Liszt, Wagner, et al.), to its very end (Puccini, Mahler).

Tutorials
In tutorials, we discussed the readings and listened to/analysed some musical examples in greater depth. It’s really important to go to the tutorials, because in them you examine all the pieces that could be on the exam and focus on a particular issue (such as women and Lieder in Germany, or the development of the symphony through the 19th Century). I was somewhat surprised at how few people turned up each week. In my first tutorial, there were only two other students plus myself. There were never more than six people in any tutorial, even though there were more people on the list! I soon discovered that music students are rather more lax about attendance than even arts students… An amusing moment at the exam was when there weren’t enough seats in the lecture room for all the people who turned up – all through the semester it had only ever been half full!

The readings are quite straightforward. You need to know the important points/facts in the readings. Your tutor will discuss the most important things in the tutorials. You also need to listen to the specified pieces before the tutorials. This is so enjoyable. It hardly feels like studying. For instance, you might listen to a Brahms symphony or an aria by Puccini.

Assignments
2x500 word assignments (20%): you write 500 words on a question about a particular piece of music. Usually, you have to analyse the music and discuss the harmonic/social/historical issues. But some questions also ask things like ‘What is Romanticism?’ – so you can write more broadly in those cases.
Essay (60%): The essay is really important (as you can see). It is also really difficult. Your points should all be substantiated by musical examples (e.g. discussing orchestration/harmony/tonal issues/melodies/chord progressions). So you have to be able to read scores (so I suggest not doing this subject if you can’t). I spent a long time on my essay, because it is difficult to find some more obscure scores (you have to rummage around in the music library/in databases on the internet) and then you have to make sure your arguments are based on a thoughtful analysis of them. I kind of wish they made the essay 3000 words, especially since it is worth so great a proportion of the mark. That’s the only criticism I have of this subject. Another thing to note about the essays/assignments is that the tutors/lecturers are really harsh markers. If you make the smallest mistake in your referencing style, they will take marks off. I’m not sure why music makes such a big deal about referencing, but just be aware that you need to make all your references perfect. Be aware that you need to work hard on each assignment to ensure you get a H1.

Exam (20%): You listen to four excerpts of music, and have to know what work they are from and then discuss all the aspects of that work, including context (historical/social issues), genre (e.g. chamber music, symphonic music), musical style (realist, Romantic?), and any other important things discussed in tutorials/lectures. You listen to all the pieces in lectures/tutorials. The best way to prepare is to make a set of notes on each piece and constantly listen to all the pieces. In the last week of the semester, they put up a list of the sixteen-odd pieces that could be on the exam. This helps to focus your study a bit. I have to say, I was surprised by how underprepared many of the music students were for the exam… some of them were saying afterwards that they only recognised 1 of the 4 pieces. The exam is really quite simple if you prepare adequately for it and bother to revise the pieces beforehand…

In sum, this subject was great! I think I will do other music breadths in the future because they are well run and an absolute pleasure to take part in ☺

#### yearningforsimplicity
• Victorian • Posts: 540 • Former ATARNotes HHD & Psych Lecturer & Author • Respect: +131

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #370 on: November 10, 2014, 05:29:34 pm » +6

For any of the psych majors here... *crickets*

Subject Code/Name: PSYC30021 - Psychological Science: Theory & Practice
Note about subject: This subject is the Capstone subject undertaken by all Psychology majors. It basically extends upon the topics that have been introduced in 1st and 2nd year Psych and also introduces a few new topics. That said, 1st/2nd year psych is not a strict prerequisite but I believe the subject is a lot easier if you’ve at least got some psych background (e.g. in lab report writing and statistics).
Workload: 1x 2 hour lecture each week and 6x 2 hour research seminars across the semester (conducted on alternating weeks, depending on which research topic you choose – you are given a list of research topics and timings before the semester starts and you can choose which one interests you; e.g. my research seminars ran in weeks 2, 3, 4, 6, 8 and 10).
Assessment:
Group poster worked on and completed within the research seminar classes – 10%
Individual lab report based on the poster due late in the semester (1500 words) – 50%
2 hour end-of-semester exam (4 compulsory essay style Qs – you don’t get to choose!) – 40%
Lectopia Enabled: Yes!
Past exams available: No! But each lecturer provided 1 practice exam question and a brief guide of answering tips, e.g. structure, how to discuss studies etc.
Textbook Recommendation: N/A
Lecturers:
Lecture 1: Intro to subject. Scientist-Practitioner model & ethical principles – Judi Humberstone & Bob Reeve
Lectures 2, 3, 4: Social Psychology lecture series – Yoshi Kashima
->Lecture 2: "How does my social environment influence me?" From the thinking man to talking nets & beyond
->Lecture 3: "How can we change social behaviour?" - the role of mass media and public campaigns
->Lecture 4: "Does our culture influence us? Can we influence our culture?" The case of climate change
Lectures 5 & 6: Cognitive neuropsychology lecture series – Sarah Wilson
->Lecture 5: "What is cognitive control?" The role of the prefrontal cortex in regulating complex human behaviours
->Lecture 6: "Should I let them operate?" Applying knowledge of the prefrontal cortex in clinical neuropsychology
Lectures 7, 8, 9: Moral, social and political psychology lecture series – Jeremy Ginges
->Lecture 7: Cooperation, markets and morals
->Lecture 8: Devoted actors and intergroup conflict
->Lecture 9: Intergroup perceptions and intergroup conflict
Lectures 10 & 11: Psychology of Addiction (Gambling, Alcohol, Drugs) – Rob Hester
->Lecture 10: "Can people control their addictive behaviour?" - the role of cognitive neuroscience & public policy in addressing addictive gambling and drugs
->Lecture 11: "Are people in control of their behaviour while intoxicated?" - prevailing issues in alcohol and drug intoxication
Lecture 12: Exam briefing & future pathways discussion – Katherine Johnson
Note: Only lectures 2-11 are examinable (1 essay question per lecturer).
Year & Semester of completion: Semester 2, 2014.
Rating: 4/5

Comments: This was the psychology major's capstone subject so, like Research Methods for Human Inquiry, I was required to take this subject. I did enjoy this subject though there was sooooo much content! However, I found that even though I did have to cram a lot for this subject (fell behind during semester), it wasn’t actually that bad! Maybe that’s coz I’m so used to doing psych subjects, but the way this subject integrated content and the way the lecturers presented their content was really good and made everything seem a little more manageable and interesting.

So basically at the start of semester, you are put into a research seminar group (based on your choice and which study most interests you). There are a range of research topics offered, and you’re sure to find something that interests you! Because each seminar represents a different topic, it’s not like previous years where you can just timetable yourself into *any* tutorial – you must make sure you choose the correct tutorial number corresponding to your chosen/desired research study.

Anyways, now that I’ve mentioned the whole research seminar side of things, let me talk about the exam. Basically you get 4 essay Qs and you are expected to write about 4-5 pages for each one, incorporating empirical research (in-text citations were not compulsory but would probably impress your assessor hahah).
In terms of timing, I felt that 2 hours had me pressed for time, but that’s probably coz I spent 15 mins extra on one of the questions. Writing ~15-20 pages in 2 hours is no easy feat, so use reading time wisely to try and plan your answers in your head or identify which empirical studies/research you could use in each essay Q. The questions they give are fair but can be vague if you haven’t studied the content enough. So as long as you do listen to all the lectures and understand the fundamental point that each lecturer is making, you should be fine for the exam. Oh, and don’t underestimate the power of cramming during swotvac! (I’m a bad influence T___T hahah). All the best!

« Last Edit: November 27, 2014, 11:42:19 pm by yearningforsimplicity »

2011: English | Methods | Psychology | Health & Human Development | Legal Studies | Texts & Traditions
2012: B.A. (Psychology) @ UniMelb
2015-16: Master of Teaching (Secondary: Psychology/Health) @ UniMelb
2017- Teaching Psych & HHD
Happy to help out with: Health & HD(48), Psych(48), Qs about UniMelb Psych or MTeach courses
*Doing Health & Human Development in 2018?* yearningforsimplicity's HHD 3&4 EXAM REVISION PACKS :)

#### cameronp
• Victorian • Forum Regular • Posts: 94 • grumpy old man • Respect: +28

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #371 on: November 12, 2014, 04:44:26 pm » +6

Subject Code/Name: COMP90051 Statistical and Evolutionary Learning (from 2015 onwards: Statistical and Machine Learning)
Workload: 2x one-hour lectures, 1x one-hour computer lab
Assessment: 50% final exam (3 hours), 10% mid-semester test, 2x 20% projects. The exam and total project marks are hurdles. If your mid-semester mark is lower than your average project mark, the mid-sem mark is dropped.
Lectopia Enabled: Yes, with screen capture.
Past exams available: No, but a practice exam was made available. The content of this subject has changed quite a bit in the last few years and is likely to be different again next year.
Textbook Recommendation: There are a couple of recommended texts. "The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman covers most of the course and can be downloaded from the authors' web site.
Lecturer(s): Dr Ben Rubinstein, Dr Justin Bedo, Dr Vinh Nguyen.
Year & Semester of completion: 2014, Semester 2.
Rating: 4/5

This subject covers a wide variety of techniques used in (choose your preferred buzzword) machine learning, data mining, Big Data, etc. – basically all of the methods of analysing data that don't fit under the traditional banner of "statistics". Because it covers so much, you don't go into very much depth in any of the topics in the lectures, but you do get a good idea of what methods are out there and what circumstances you might want to use them in. This subject is very much a case of "you get out what you put in", and sometimes feels a bit muddled when trying to explain mathematical ideas without using any actual maths, which is why I've only rated it 4/5.

The assessment during semester is in the form of open-ended projects which allow you to explore the methods in more detail and actually apply them to a practical task. The first one was about analysing social network data, trying to predict where users lived based on their friends and the time of day they were active. The second was handwriting recognition. Both projects were fun but challenging - expect to put in a lot of time if you want to do well.
The first project had a competition website with a live leaderboard so you could see how well you were doing compared to the rest of the class. The second project was apparently supposed to have one too, but the course coordinator didn't have time to set it up.

There are three lecturers for this course. Ben Rubinstein took the first half of the course in a "topic of the week" format, covering a lot of methods with little depth. Justin Bedo taught neural networks (3 weeks) and Vinh Nguyen taught evolutionary algorithms (3 weeks). Both Ben and Justin have experience working in industry, Ben at Google and Justin at IBM. Of the three, Ben was my favourite lecturer, although I may be biased because I already knew him before taking the course... I least enjoyed the evolutionary algorithms part of the course, which could be summed up as "hey cool, this trick works in nature and it works when you implement it on a computer too". Other people might love it, though.

There is a prerequisite subject listed, a computer science subject on "Knowledge Technologies". In practice, the most important knowledge to have is programming experience (ideally in a high-level language suited for data analysis, e.g. Matlab, Python or R) and some probability and calculus. The lectures try to avoid going too deep into the maths, and there's an "intro to probability" document handed out at the start of the subject, but to get the most out of the course, you'll need a little bit of maths.

The specific topics covered apparently vary a bit from year to year. This year we looked at:
- linear and logistic regression
- ensemble methods: bagging and boosting
- regularisation, model complexity and overfitting
- Support Vector Machines and kernel methods
- Probabilistic Graphical Models and Hidden Markov Models
- neural networks and "large scale learning" (methods for parallel computing etc)
- evolutionary/genetic algorithms for optimisation

Rambling aside: From the perspective of a mathematician, "machine learning" looks a whole lot like "statistics", but the focus is different. In statistics, the data you're dealing with usually has a nicely structured interpretation, and you want to answer specific questions within that framework. It's as much about understanding the real-world process that generated the data as it is about answering questions about the data itself. In machine learning, the data is usually big, messy and unstructured, and all you care about is being able to make accurate predictions about future observations. Different approaches for different situations!

« Last Edit: November 27, 2014, 10:11:50 pm by cameronp »

BSc (Pure Mathematics) @ UWA, '04-'09
Postgraduate Diploma in Science (Mathematics and Statistics) @ UniMelb, '14
Master of Science (Statistics and Stochastic Processes) @ UniMelb, '15-'16

#### ReganM
• Victorian • Forum Obsessive • Posts: 227 • What is being active? • Respect: +8

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #372 on: November 12, 2014, 07:52:17 pm » +6

Subject Code/Name: HIST20010 The First Centuries of Islam
Contact Hours: This subject is taught intensively between 13 – 24 July 2015 with a daily 2-hour lecture and a 1-hour tutorial. Total Time Commitment: 170 hours. It's a WINTER INTENSIVE. WINTER. INTENSIVE.
Assessment: A document exercise, 1500 words, 30% (due Monday after end of the teaching period) and a 2500 word project, 70% (due 1 month after the end of the teaching period).
Hurdle requirement: students must attend a minimum of 75% of tutorials in order to pass this subject. Assessment submitted late without an approved extension will be penalised at 10% per day; after five working days, no late assessment will be marked. In-class tasks missed without approval will not be marked. All pieces of written work must be submitted to pass this subject.
Lectopia Enabled: Yes. With screen capture.
Past exams available: Yes. They were helpful in telling you what kind of questions would be given, but beyond that they weren't super helpful.
Textbook Recommendation: No need to buy a textbook, lol. It's only for 2 weeks. They do tell you to read tutorial readings, which were quite long etc. I ended up going to Officeworks and getting them printed and bound into a book. I then wrote notes in that book.
Lecturer(s): Richard Pennell, Abdullah Saeed (ugh, sorry if I get this wrong)
Year & Semester of completion: Winter, 2014. Did this in the last 2 weeks of my winter holiday.
Rating: 3 of 5. Personally I enjoyed my experiences with this subject but I don't know whether I'd recommend it to others.
Note: I did this subject as a breadth subject; I am a B-Sci student.

Overall, I did this subject because I didn't want to do 4 subjects during my first semester of the year. I would recommend people take this subject if they do 3 subjects in the second semester. The subject takes up 2 weeks of your winter holiday and it's a little depressing how little time you get for your holidays. I also did this subject because I was interested in learning more about Islam, because I have Muslim friends. This subject did teach me a little about Islam, but not as much as I was expecting. One of the two assessments focused on architecture, so we were taught about many Islamic buildings in the lectures (although I didn't find this information particularly relevant to the building I ended up doing).

Lectures: 2hr lecture in the morning, 1hr tute in the arvo. Doesn't sound too bad, right? However there were also extra mini lectures (that sounded like they were recorded in Richard's lounge room) that Richard recommended people watch before the actual lectures (some days only). Looking back, I wouldn't go to the lectures – we weren't exactly examined on anything in them. It was a bit strange. The lectures really only provided background knowledge to what we were to be taught in tutes. I would just watch the lectures at home at super speed if I were you. (NOTE: Lectures may change, who knows.) The lecturers themselves, Richard and Abdullah, were amazing to listen to. They were great speakers and the content was interesting; however, it didn't help that while I was stuck getting to uni at like 10am to listen to this lecture, my friends were out having fun. ):

Tutorials: On the other hand, tutes were amazing. I attribute that to my amazing tutor (shout out to Eddie). He was 10/10 one of the best tutors I've had at uni. He went through all the content in the tutorials really well, and gave us great background information. I would recommend you go through the readings before you go to the tute, but if you don't have time/don't understand the readings it's fine, your tutor should go over it with you. I would definitely go to the tutes, because not only are they interesting, but you will also meet other students and be able to complain about the subject with others. By the end of the two weeks I think everyone in my tute kind of knew each other, it was pretty good.
The tutorials will go over the readings, and the take home exam has two questions which relate to two of the readings, so I would definitely recommend going, because without going to the tutorials it would be stupidly hard to answer them. However, if you get a crappy tutor you might be a little out of luck. My friend had Richard as his tutor and apparently he tended to ramble a bit. Best get in early if you want a good tute time slot... you don't want to have to wait around at uni for 4 hours after your exam for a bad tute time (assuming you also go to the lectures).

Assessment 1: "A document exercise 1500 words, 30%" (due Monday after end of the teaching period) <-- From handbook
Assuming they don't change this subject around, this would refer to the take home exam we were given. You were given a choice of answering questions from maybe 3 or 4 of the tutorial readings (which is good, because you can choose the reading you feel the most comfortable with). Each topic had 2 questions you could answer, so a maximum of 750 words per answer. Luckily I wrote nearly everything my tutor said in the tutes, because I found my notes so useful when I was writing the answers. While it's a take home exam, the readings are often so obscure that the internet won't help you answer the questions. SO GO TO THE TUTES PEOPLE.

Assessment 2: "2500 word project, 70% (due 1 month after the end of the teaching period)" <-- From Handbook
This assessment was a 2500 word report on an Islamic building. You are supposed to integrate everything you've learnt into this report. You are given a list of projects and you sign up for one of your choosing. Having little to no knowledge of Islamic buildings, I YOLO chose a building. My building was super obscure; there wasn't a Wikipedia page on it, but there was one on the person it was built by. I really had to learn how to do research using books and it was a great learning experience. However... you should really start on this project ASAP. By the time the due date rolls by (1 month into the semester), you have assessments from your other subjects due and it gets a little crazy. It was kind of hard finding information on your building, and my advice is to live at the library and just utilise all the books they provide.

Conclusion: So the subject really wasn't the bludgey winter subject I was looking forward to; it was kind of the opposite. I already had an interest in Islam, but if you don't, and you hate buildings and hate writing essays, I would definitely do another subject. However I did put in the hard effort to do well in the assessment and it paid off. I had a great tutor and it definitely helped. If you get a bad tutor the subject might be harder for you... as for my friend who got the bad tutor, he still did well in the subject. I guess the subject might be hit or miss for many people. I do have a feeling they might change things around, because Richard was really open to hearing our feedback.

« Last Edit: November 12, 2014, 08:02:03 pm by ReganM »

Bachelor of Science at Melbourne. Biological Science subjects.

#### Stick
• Victorian • ATAR Notes Legend • Posts: 3777 • Sticky. :P • Respect: +461

##### Re: University of Melbourne - Subject Reviews & Ratings
« Reply #373 on: November 12, 2014, 10:30:15 pm » +8

Subject Code/Name: BIOL10003: Genes and Environment
Contact Hours: 3 x one hour lectures per week, 1 hour per week of tutorials or workshops.
2 hours of practical work per fortnight and 3 hours per week of e-learning including independent learning tasks, pre- and post-laboratory activities. Total Time Commitment: Estimated total time commitment of 120 hours.
Assessment: A 45 minute, multiple choice test held mid-semester (10%); a combination of assessment of practical skills within the practical class, completion of up to 5 on-line pre-practical tests, written work within the practical not exceeding 500 words and up to 5 short multiple choice tests (25%); an assignment based on the practical content and not exceeding 1000 words (10%); completion of 5 Independent Learning Tasks throughout the semester (5%); a 3 hour examination on theory and practical work in the examination period (50%). Satisfactory completion of practical work is necessary to pass the subject (i.e. an 80% attendance at the practical classes together with a result for the assessed practical work of at least 50%).
Lectopia Enabled: Yes, with screen capture.
Past exams available: One extended sample exam (it has more questions than the real exam) given out at the end of the semester, with solutions.
Textbook Recommendation: D Sadava, D M Hillis, H C Heller, M R Berenbaum, Life, 10th Ed., Sinauer/Freeman, 2013. The textbook wasn't as useful as it was in semester 1. You'll find that fewer references are provided by the lecturers, and the few references that are provided are often very short and contain superfluous information. Since you've probably got it from semester 1, it's still worth holding on to, and it definitely contains some very interesting and insightful information. It might come in handy if you need a bit of extra help as well. As in semester 1, you'll also have to buy a practical and tutorial/workshop workbook, containing the practical tasks and the tutorial/workshop worksheets, as well as some additional worksheets to supplement the independent learning tasks.
Lecturer(s):
Dr Alex Idnurm (Botany): Lectures 1-6 - Classification and Parasite Taxa
Assoc Prof Rob Day (Zoology): Lectures 7-14 - Disease and Transmission, Evolution of Resistance, Hominin Evolution
Assoc Prof Dawn Gleeson (Genetics): Lectures 15-36 - Genetics
Year & Semester of completion: Semester 2 2014
Rating: 4/5

In terms of structure and co-ordination, this subject is very similar to its semester 1 counterpart. However, as you'd expect, the content is vastly different, so it's a change from what you've previously been studying. Most people I spoke to seemed to prefer Genes and Environment to Biomolecules and Cells; personally I preferred semester 1 just by a little bit, but this subject was definitely run to a pretty high standard compared to the other subjects I have taken so far. It has quite a different focus in terms of learning skills as well, which suited some people but challenged others. Overall, I thought it was a worthwhile and useful subject to take.

Unfortunately the three 8am starts carry over from semester 1, and it's clear that attendance really starts to drop off over the course of the semester (I think Dawn was a bit shocked to see how many vacant seats there were in the lecture theatre). There was an incident this year where one lecture didn't get captured due to a widespread outage of the lecture capture system, so if you're inclined to stay at home and watch the lecture later, it's important to realise that technology fails at times and that you might be left without anything as a consequence.
Oddly enough, when Dawn tried to substitute the lecture that didn't get recorded this year with its equivalent from last year, she remembered that that lecture had been interrupted by a fire alarm - in other words, students who missed the lecture had no way to make it up! This was probably just bad luck, but it's worth keeping in mind nonetheless.

The first two weeks are taken by Dr Alex Idnurm from the Botany department, who is new to the university. He covers classification and parasite taxa (i.e. viruses, bacteria and archaea, fungi and protists, and their relevance to human disease). I know it sounds awfully similar to the animal taxa component covered in semester 1, but you'll be relieved to know that this unit actually takes a completely different course. A lot of specific details will be thrown at you, and it will seem daunting, but Alex emphasises that he wants you to develop a broader understanding and appreciation of the concepts presented, and worry about all the examples later. The main purpose of all the examples he provides is to help illustrate some of the concepts he is bringing up, or highlight a particular exceptional case. To help us along, he provided a FAQ sheet that explained which examples were quite important and which were largely unnecessary to learn (in this case it was more that we just had to be aware that it existed). His questions on the mid-semester test and exam tended to steer away from memorising specific details, but it is necessary to be aware of at least a few of them so that you can refer to them if need be. That being said, since Alex was new, Dawn didn't allocate many questions to him for the mid-semester test or exam, so perhaps in the future you'll need to know about his lectures a little bit more. As long as you're able to integrate all the concepts and examples and highlight the special cases, you should be fine.

The next eight lectures are taken by Assoc Prof Rob Day from the Zoology department. He covers disease and transmission, evolution of resistance and hominin evolution. As the lecturer said himself, these topics are vastly different to most of the biology you've been exposed to before - it has a very strong ecology flavour to it. I'm going to be honest here and say that I found this unit particularly dry and thus didn't engage in the lectures terribly well, and it probably didn't help me when I was trying to learn the content later. You're going to be exposed to a lot of specifics (whether they be species, specific details about them, their life cycles, particular characteristics about various circumstances etc.) and you're expected to learn it all. There are a lot of things that seem completely irrelevant, but they do get assessed, so err on the safe side and learn anything that appears on a lecture slide (even if it means that you feel like you're learning about history when you look at the agricultural and industrial revolutions). I'm not a fan of mindless rote-learning, but a lot of this part of the course demands it, so if you're like me you're going to just have to accept it and try your best. Most of us drew a line at some point though, and were almost willing to forsake some marks because it was that painful. Make sure you attend the lectures as some slides and images are not put up on the LMS (and I discovered that one of these slides was assessed - only after the exam though, when I happened to find that I had taken a photo of the relevant slide on my phone).
These eight lectures are really odd, because it feels like you're going slowly and yet so much information is being thrown at you it's not even funny. The first 14 lectures weren't really my cup of tea, but this all changed once we started the genetics component with Assoc Prof Dawn Gleeson. Not only did I find genetics fascinating (well, more fascinating than when I was studying it in Unit 4 Biology) but Dawn was an amazing lecturer (if you refer to my semester 1 review, you'll see that my favourite lecturer was Dr Mary Familari - people have noticed that I seem to have this thing for elderly women - don't judge me!). She used to be the chief examiner for VCE Biology and is the current co-ordinator of first year studies in Biology, so she's very mindful of the transition to university and does her best to make it as smooth as possible (which is evident through the BioBytes made available to you on the LMS, which you can refer to before lectures if you think it might help). She's not only passionate about genetics, but education as a whole, and she is very friendly, funny and helpful. Seriously, I don't know how she manages to respond to emails so quickly. <_<

Her lecture slides are a bit of a dog's breakfast, but I found that this really forced me to work on how I collated my lecture notes. They're generally needlessly long and most of the time she won't get through them all, but it's OK since she doesn't have defined boundaries for her lecture slides like other lecturers do (this is probably because her part of the course is so extended). It might seem like you're falling behind, but most of the time she just tacks extra slides onto the end just in case she moves through more quickly than usual (this rarely happens though). In the past I've heard that the cohort has fallen behind, but we managed to finish all the content on time.

Genetics requires a different mode of learning and a different set of skills compared to most of the other areas of Biology that you have been exposed to so far, in that there is an increased focus on problem solving and understanding processes. This was warmly welcomed by most students, particularly after the content-heavy lectures prior to genetics. However, it does get challenging at times, especially since problem solving is difficult to establish and teach in a lecture setting. In addition to the workshops and tutorials, Dawn will provide plenty of questions in her lecture slides and on the LMS and I recommend doing them all. Problem solving in genetics is something you need to sit down with and consolidate in your own time. To guide you along, she will briefly go through some examples in lectures but relying on these is probably insufficient. In particular, note the setting out in the solutions. I think a lot of people dismissed it as unimportant at first and didn't realise the significance until it was too late. For those that did VCE Biology, you'll know that precision and accuracy are absolutely vital to success. Just like last semester though, don't become complacent - yes, a lot of the content will seem very familiar to you, but I can assure you that it gets extended upon a lot this semester, so relying on previous knowledge won't be anywhere near enough. This is particularly relevant to the problem solving component of the genetics unit.

Just like last semester, there are five practicals that make up 25% of your grade. These link in with the lecture content quite well, so they're particularly useful for the hands-on learner.
I'm not sure if it was just because we were more used to the expectations by now, but I noticed both in my own results and the results of the cohort that the marks for in-practical assessment (in fact, for the practicals in general) were significantly higher. For some reason though, I kept making silly mistakes in the post-practical tests. Anyway, as per usual, good preparation is key in order to get through the practicals without any undue stress. If you can, have a go at answering some of the questions at home before the practical to save some time while you're in there. Dawn actually took my group's genetics practicals, which I found quite handy since she was able to integrate her lectures and the practicals incredibly closely.

The workshops and tutorials are essentially the same as last semester. Most people still found the workshops to be completely pointless, so the staff tried to pick up attendance by providing hints for the upcoming practical. The few minutes they would go over these hints were actually pretty helpful, but a lot of the time we were just sitting there listening to the tutor, answering questions or filling out a worksheet. That being said, I found the problem solving classes quite helpful since the environment was far more conducive to learning than the lecture theatre. Even if you decide not to attend the workshops, I still highly recommend going through the tutorial questions and worksheets. They generally make for good revision and are an extra source of questions to practice your genetics skills with. I personally felt the tutorials were far better due to the smaller class size, although workshop attendance was often so low that it felt like I was in a tutorial anyway. For some reason, a lot of people stopped turning up to the tutorials prior to the practical as well, although I don't recommend this because this is when you get feedback for your mid-semester test and assignment and is the best time to ask for help or address questions. I was very fortunate to end up with Lyn O'Neill as my tutor again this semester - she explains the concepts very thoroughly (this is particularly helpful in practicals where everything often feels so chaotic and rushed) and marks quite fairly (I've heard some horror stories from people with other tutors from the Biology laboratory).

The mid-semester test is run exactly like last semester and is worth 10% of your grade. It covers the content in lectures 1-14 and generally speaking the marks were noticeably lower this semester compared to semester 1. As I've said, this is largely due to the large amount of specific details you'll need to commit to memory, which is only made more difficult by the dry nature of the content. If you start revising for this early, you'll probably find that you'll cope a lot better. I personally found the test fair, but there were quite a few questions that seemed really ambiguous to me. Occasionally a question would be removed from the test because Dawn felt the ambiguity was beyond reasonable, but generally it was really important to read the stem of the question and each option really carefully. The only positive is that this test really does force you to consolidate lectures 1-14 before moving onto genetics, and additionally there is a reduced weighting of these lectures on the exam due to them being covered by the mid-semester test. A practice test will be released for your reference, and you'll find the actual test questions similar to the types of questions that can be asked on the final exam.
There are also 5 assessed ILTs that make up 5% of your grade this semester. Additionally, you are asked to complete some revision ILTs, and although these aren't assessed, they do have a due date, so do complete them. They complement the lectures far better than they did last semester, and the assessment itself should not pose any difficulties. This is all I can think of for now, so I guess I'll leave it at that. In some respects, this subject is more difficult than Biomolecules and Cells, particularly if excessive rote learning (lectures 1-14) or problem solving (genetics) isn't your strength. However, I wouldn't say that Genes and Environment was completely impossible either, and there were definitely a lot of parts that were very interesting to learn about. If you'd like any extra information or have any questions, please feel free to ask. Good luck! I thought I'd leave you with the following image. Rob Day will show it to you enough times during the semester that it will become fixed in your brain anyway. Surely some early exposure won't hurt. « Last Edit: November 25, 2014, 04:56:32 pm by Stick » 2017-2020: Doctor of Medicine - The University of Melbourne 2014-2016: Bachelor of Biomedicine - The University of Melbourne #### abcdqdxD • Part of the furniture • Posts: 1304 • Respect: +57 ##### Re: University of Melbourne - Subject Reviews & Ratings « Reply #374 on: November 13, 2014, 12:57:11 pm » +5 Subject Code/Name: INTS10001: International Politics Workload: 2x1 hour lectures, 1 hour tute Assessment: 25% 1000 word essay, 50% 2000 word essay, 25% 1000 word take home exam. Lectopia Enabled: Yes, with screen capture. Past exams available: One past exam. Textbook Recommendation: Personally, I've only used the textbook once. Biggest waste of money, so steer clear of it. Lecturer(s): Avery Poole is the subject coordinator; however, there are a few other guest lecturers from other parts of the faculty. Rating: 0.5/5
{}
## Stream: new members

### Topic: dump expr to context

#### Kenny Lau (Nov 24 2018 at 16:02):

Is there a way to dump an expr to the context?

#### Rob Lewis (Nov 24 2018 at 16:22):

What do you mean?

#### Rob Lewis (Nov 24 2018 at 16:23):

`tactic.pose`?

#### Kenny Lau (Nov 24 2018 at 18:25):

@Rob Lewis that dumps the evaluated expression to the context; I want the unevaluated expression as an expr itself

#### Kevin Buzzard (Nov 24 2018 at 18:27):

You should learn to ask better questions Kenny.

#### Rob Lewis (Nov 24 2018 at 20:46):

You want a tactic that adds a local hypothesis of type expr, defined to be some particular expr that you have? Could you explain what you're trying to do? I kind of doubt this is the way to do it.

#### Chris Hughes (Nov 26 2018 at 10:41):

Can't you just use `pose` and give it `(e)`, where `e : expr`?
{}
# 2.2 The gamma and chi-square distributions

This course is a short series of lectures on Introductory Statistics. Topics covered are listed in the Table of Contents. The notes were prepared by Ewa Paszek and Marek Kimmel. The development of this course has been supported by NSF grant 0203396.

## Gamma and chi-square distributions

In the (approximate) Poisson process with mean $\lambda$, we have seen that the waiting time until the first change has an exponential distribution. Let now $W$ denote the waiting time until the $\alpha$th change occurs, and let us find the distribution of $W$. The distribution function of $W$, when $w\ge 0$, is given by
$$F(w)=P(W\le w)=1-P(W>w)=1-P(\text{fewer than } \alpha \text{ changes occur in } [0,w])=1-\sum_{k=0}^{\alpha-1}\frac{(\lambda w)^{k}e^{-\lambda w}}{k!},$$
since the number of changes in the interval $[0,w]$ has a Poisson distribution with mean $\lambda w$. Because $W$ is a continuous-type random variable, $F'(w)$ is equal to the p.d.f. of $W$ whenever this derivative exists. We have, provided $w>0$, that
$$F'(w)=\lambda e^{-\lambda w}-e^{-\lambda w}\sum_{k=1}^{\alpha-1}\left[\frac{k(\lambda w)^{k-1}\lambda}{k!}-\frac{(\lambda w)^{k}\lambda}{k!}\right]=\lambda e^{-\lambda w}-e^{-\lambda w}\left[\lambda-\frac{\lambda(\lambda w)^{\alpha-1}}{(\alpha-1)!}\right]=\frac{\lambda(\lambda w)^{\alpha-1}}{(\alpha-1)!}e^{-\lambda w}.$$

## Gamma distribution

The gamma function is defined by
$$\Gamma(t)=\int_{0}^{\infty}y^{t-1}e^{-y}\,dy,\qquad 0<t.$$
This integral is positive for $0<t$, because the integrand is positive. Values of it are often given in a table of integrals. If $t>1$, integration of the gamma function of $t$ by parts yields
$$\Gamma(t)=\left[-y^{t-1}e^{-y}\right]_{0}^{\infty}+\int_{0}^{\infty}(t-1)y^{t-2}e^{-y}\,dy=(t-1)\int_{0}^{\infty}y^{t-2}e^{-y}\,dy=(t-1)\Gamma(t-1).$$
For example, $\Gamma(6)=5\Gamma(5)$ and $\Gamma(3)=2\Gamma(2)=(2)(1)\Gamma(1)$. Whenever $t=n$, a positive integer, we have, by repeated application of $\Gamma(t)=(t-1)\Gamma(t-1)$, that
$$\Gamma(n)=(n-1)\Gamma(n-1)=(n-1)(n-2)\cdots(2)(1)\Gamma(1).$$
However,
$$\Gamma(1)=\int_{0}^{\infty}e^{-y}\,dy=1.$$
Thus, when $n$ is a positive integer, we have that $\Gamma(n)=(n-1)!$; and, for this reason, the gamma function is called the generalized factorial. Incidentally, $\Gamma(1)$ corresponds to $0!$, and we have noted that $\Gamma(1)=1$, which is consistent with earlier discussions.

## Summarizing

The random variable $X$ has a gamma distribution if its p.d.f. is defined by
$$f(x)=\frac{1}{\Gamma(\alpha)\theta^{\alpha}}x^{\alpha-1}e^{-x/\theta},\qquad 0\le x<\infty.$$
Hence $W$, the waiting time until the $\alpha$th change in a Poisson process, has a gamma distribution with parameters $\alpha$ and $\theta=1/\lambda$. 
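As a quick numeric check of the factorial identity above (my own addition, not part of the original notes, using only Python's standard library):

```python
import math

# Gamma(n) = (n-1)! for positive integers n, e.g. Gamma(6) = 5! = 120.
for n in range(1, 7):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))
print(math.gamma(6), math.factorial(5))  # 120.0 120
```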
Function $f(x)$ actually has the properties of a p.d.f., because $f(x)\ge 0$ and
$$\int_{-\infty}^{\infty}f(x)\,dx=\int_{0}^{\infty}\frac{x^{\alpha-1}e^{-x/\theta}}{\Gamma(\alpha)\theta^{\alpha}}\,dx,$$
which, by the change of variables $y=x/\theta$, equals
$$\int_{0}^{\infty}\frac{(\theta y)^{\alpha-1}e^{-y}}{\Gamma(\alpha)\theta^{\alpha}}\theta\,dy=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}y^{\alpha-1}e^{-y}\,dy=\frac{\Gamma(\alpha)}{\Gamma(\alpha)}=1.$$
The mean and variance are $\mu=\alpha\theta$ and $\sigma^{2}=\alpha\theta^{2}$.

Suppose that an average of 30 customers per hour arrive at a shop in accordance with a Poisson process. That is, if a minute is our unit, then $\lambda=1/2$. What is the probability that the shopkeeper will wait more than 5 minutes before both of the first two customers arrive? If $X$ denotes the waiting time in minutes until the second customer arrives, then $X$ has a gamma distribution with $\alpha=2$, $\theta=1/\lambda=2$. Hence,
$$P(X>5)=\int_{5}^{\infty}\frac{x^{2-1}e^{-x/2}}{\Gamma(2)2^{2}}\,dx=\int_{5}^{\infty}\frac{xe^{-x/2}}{4}\,dx=\frac{1}{4}\left[(-2)xe^{-x/2}-4e^{-x/2}\right]_{5}^{\infty}=\frac{7}{2}e^{-5/2}=0.287.$$
We could also have used the earlier equation with $\lambda=1/\theta$, because $\alpha$ is an integer:
$$P(X>x)=\sum_{k=0}^{\alpha-1}\frac{(x/\theta)^{k}e^{-x/\theta}}{k!}.$$
Thus, with $x=5$, $\alpha=2$, and $\theta=2$, this is equal to
$$P(X>5)=\sum_{k=0}^{2-1}\frac{(5/2)^{k}e^{-5/2}}{k!}=e^{-5/2}\left(1+\frac{5}{2}\right)=\left(\frac{7}{2}\right)e^{-5/2}.$$

## Chi-square distribution

Let us now consider a special case of the gamma distribution that plays an important role in statistics. Let $X$ have a gamma distribution with $\theta=2$ and $\alpha=r/2$, where $r$ is a positive integer. The p.d.f. of $X$ is then
$$f(x)=\frac{1}{\Gamma(r/2)2^{r/2}}x^{r/2-1}e^{-x/2},\qquad 0\le x<\infty.$$
We say that $X$ has a chi-square distribution with $r$ degrees of freedom, which we abbreviate by saying $X$ is $\chi^{2}(r)$. The mean and the variance of this chi-square distribution are
$$\mu=\alpha\theta=\left(\frac{r}{2}\right)2=r \qquad\text{and}\qquad \sigma^{2}=\alpha\theta^{2}=\left(\frac{r}{2}\right)2^{2}=2r.$$
That is, the mean equals the number of degrees of freedom and the variance equals twice the number of degrees of freedom. [Figure 2, not reproduced here, shows the graphs of the chi-square p.d.f. for $r=2,3,5$, and $8$; note the relationship between the mean $\mu=r$ and the point at which the p.d.f. attains its maximum.] Because the chi-square distribution is so important in applications, tables have been prepared giving the values of the distribution function for selected values of $r$ and $x$,
$$F(x)=\int_{0}^{x}\frac{1}{\Gamma(r/2)2^{r/2}}w^{r/2-1}e^{-w/2}\,dw.$$
Let $X$ have a chi-square distribution with $r=5$ degrees of freedom. Then, using tabulated values,
$$P(1.145\le X\le 12.83)=F(12.83)-F(1.145)=0.975-0.050=0.925$$
and
$$P(X>15.09)=1-F(15.09)=1-0.99=0.01.$$
If $X$ is $\chi^{2}(7)$, two constants $a$ and $b$ such that $P(a<X<b)=0.95$ are $a=1.690$ and $b=16.01$. 
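Both worked examples can be verified numerically (my own sketch, not part of the original notes, assuming SciPy is available):

```python
from scipy.stats import gamma, chi2

# Waiting time until the 2nd customer: gamma with alpha = 2, theta = 2.
print(round(gamma.sf(5, a=2, scale=2), 3))   # 0.287, i.e. (7/2) * exp(-5/2)

# Chi-square with r = 5 degrees of freedom, as in the tabulated example.
print(round(chi2.cdf(12.83, df=5) - chi2.cdf(1.145, df=5), 3))  # 0.925
print(round(chi2.sf(15.09, df=5), 2))                           # 0.01
```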
Other constants $a$ and $b$ can be found; the choices above are restricted only by the limited table. Probabilities like that in Example 4 are so important in statistical applications that one uses special symbols for $a$ and $b$. Let $\alpha$ be a positive probability (usually less than 0.5) and let $X$ have a chi-square distribution with $r$ degrees of freedom. Then $\chi_{\alpha}^{2}(r)$ is a number such that
$$P\left[X\ge \chi_{\alpha}^{2}(r)\right]=\alpha.$$
That is, $\chi_{\alpha}^{2}(r)$ is the $100(1-\alpha)$ percentile (or upper $100\alpha$ percent point) of the chi-square distribution with $r$ degrees of freedom. Then the $100\alpha$ percentile is the number $\chi_{1-\alpha}^{2}(r)$ such that $P\left[X\le \chi_{1-\alpha}^{2}(r)\right]=\alpha$. That is, the probability to the right of $\chi_{1-\alpha}^{2}(r)$ is $1-\alpha$. See Figure 3 (not reproduced here). Let $X$ have a chi-square distribution with seven degrees of freedom. Then, using tabulated values, $\chi_{0.05}^{2}(7)=14.07$ and $\chi_{0.95}^{2}(7)=2.167$. These are the points indicated on Figure 3.
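For readers without the printed tables, the same percentiles can be checked numerically (again my own sketch, assuming SciPy):

```python
from scipy.stats import chi2

# chi^2_alpha(r) is the upper 100*alpha percent point, i.e. the inverse
# survival function; chi^2_{1-alpha}(r) is the ordinary alpha-quantile.
print(round(chi2.isf(0.05, df=7), 2))   # 14.07  = chi^2_{0.05}(7)
print(round(chi2.ppf(0.05, df=7), 3))   # 2.167 = chi^2_{0.95}(7)
```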
{}
NAG Toolbox: nag_nonpar_gofstat_anddar_unif (g08cj)

Purpose: nag_nonpar_gofstat_anddar_unif (g08cj) calculates the Anderson–Darling goodness-of-fit test statistic and its probability for the case of standard uniformly distributed data.

Syntax:
[y, a2, p, ifail] = g08cj(issort, y, 'n', n)
[y, a2, p, ifail] = nag_nonpar_gofstat_anddar_unif(issort, y, 'n', n)

Description: Calculates the Anderson–Darling test statistic $A^2$ (see nag_nonpar_gofstat_anddar (g08ch)) and its upper tail probability by using the approximation method of Marsaglia and Marsaglia (2004) for the case of uniformly distributed data.

References:
Anderson T W and Darling D A (1952) Asymptotic theory of certain 'goodness-of-fit' criteria based on stochastic processes. Annals of Mathematical Statistics 23 193–212.
Marsaglia G and Marsaglia J (2004) Evaluating the Anderson–Darling distribution. J. Statist. Software 9(2).

Compulsory Input Parameters:
1: issort – logical scalar. Set issort = true if the observations are sorted in ascending order; otherwise the function will sort the observations.
2: y(n) – double array. n, the dimension of the array, must satisfy the constraint n > 1. The entries y_i, for i = 1, 2, ..., n, are the n observations. Constraint: if issort = true, the values must be sorted in ascending order. Each y_i must lie in the interval (0, 1).

Optional Input Parameters:
1: n – int64/int32/nag_int scalar. Default: the dimension of the array y. n is the number of observations. Constraint: n > 1.

Output Parameters:
1: y(n) – double array. If issort = false, the data sorted in ascending order; otherwise the array is unchanged.
2: a2 – double scalar. $A^2$, the Anderson–Darling test statistic.
3: p – double scalar. p, the upper tail probability for $A^2$.
4: ifail – int64/int32/nag_int scalar. ifail = 0 unless the function detects an error (see Error Indicators and Warnings).

Error Indicators and Warnings:
ifail = 1: Constraint n > 1 violated.
ifail = 3: issort = true and the data in y is not sorted in ascending order.
ifail = 9: The data in y must lie in the interval (0, 1).

Accuracy: Probabilities greater than approximately 0.09 are accurate to five decimal places; lower probabilities are accurate to six decimal places.

Example:
```
function nag_nonpar_gofstat_anddar_unif_example
x = [0.4782745, 1.2858962, 1.1163891, 2.0410619, 2.2648109, 0.0833660, ...
     1.2527554, 0.4031288, 0.7808981, 0.1977674, 3.2539440, 1.8113504, ...
     1.2279834, 3.9178773, 1.4494309, 0.1358438, 1.8061778, 6.0441929, ...
     0.9671624, 3.2035042, 0.8067364, 0.4179364, 3.5351774, 0.3975414, ...
     0.6120960, 0.1332589];
mu = 1.65;
% PIT
y = 1 - exp(-x/mu);
% Let nag_nonpar_gofstat_anddar_unif sort the uniform variates
issort = false;
% Calculate a-squared and probability
[y, a2, p, ifail] = nag_nonpar_gofstat_anddar_unif(issort, y);
% Results
fprintf('\nH0: data from exponential distribution with mean %10.4e\n', mu);
fprintf('Test statistic, A-squared: %8.4f\n', a2);
fprintf('Upper tail probability: %8.4f\n', p);
```
```
H0: data from exponential distribution with mean 1.6500e+00
Test statistic, A-squared:   0.1830
Upper tail probability:   0.9945
```
The same example can be run with the short-name form by replacing the call with [y, a2, p, ifail] = g08cj(issort, y).
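For a quick cross-check outside the NAG Toolbox, here is a small NumPy sketch of the $A^2$ statistic itself (my own illustration using the standard Anderson–Darling formula for standard-uniform data; it does not reproduce the Marsaglia and Marsaglia upper-tail probability approximation that g08cj implements):

```python
import numpy as np

def anderson_darling_uniform(u):
    """A^2 for standard-uniform observations u, each strictly in (0, 1)."""
    u = np.sort(np.asarray(u))
    n = len(u)
    i = np.arange(1, n + 1)
    # A^2 = -n - (1/n) * sum_i (2i - 1) * [ln u_(i) + ln(1 - u_(n+1-i))]
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log1p(-u[::-1])))

# e.g. after the probability integral transform above:
rng = np.random.default_rng(0)
x = rng.exponential(scale=1.65, size=26)
print(anderson_darling_uniform(1 - np.exp(-x / 1.65)))  # small A^2 expected
```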
{}
# Tag Info

Accepted

### Showing that $\mathbf{X}^{2} + \mathbf{X} = \mathbf{A}$ has a solution

Your approach is not correct, for the simple reason that you haven't shown $\Phi$ is a Banach contraction. You have provided upper bounds for both $\|X - Y\|_\infty$ and $\|\Phi(X) - \Phi(Y)\|_\infty$...
• 44.6k

Accepted

### Using Banach's Fixed Point Theorem on an Integral Equation

Let $A\colon \Bbb R\to \Bbb R$, $A(x)=\frac x{\sqrt{h^2+x^2}}$ and $$F\colon C([0,1])\to C([0,1]),\quad F(f)(x)=\int_0^1 K(x,y)A(f(y))dy.$$ Observe that $A$ is a Lipschitz function satisfying with a ...
• 4,430

Accepted

### Is there a constructive proof of Brouwer's fixed-point theorem that does not rely on triangulation?

There is no constructive proof of Brouwer's fixed point theorem at all. In particular, the following is not provable constructively: Intermediate Value Theorem: Let $f : [-1, 1] \to \mathbb{R}$ be a ...
• 22.5k
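The contraction-mapping idea behind these answers is easy to see numerically. Here is a minimal sketch of fixed-point iteration (my own illustration, not taken from the excerpts above):

```python
import math

def fixed_point(F, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = F(x_n); for a contraction F this converges."""
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# F(x) = cos(x) is a contraction near its fixed point (|F'(x)| < 1 there).
print(fixed_point(math.cos, 1.0))  # ~0.7390851332, the unique fixed point
```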
{}
# Intersection Theory of the Tropical Moduli Spaces of Curves (Diploma Thesis)
{}
# $\sum a_n b_n$ when $\sum a_n$ convergent and $\{b_n\}$ nonnegative

Let $\sum_{n=0}^\infty a_n$ be a conditionally convergent series, and let $\{b_n\}$ be a nonnegative and convergent sequence of real or complex numbers. Does $\sum_{n=0}^\infty a_n b_n$ converge? Do we actually need convergence of $\{b_n\}$ for convergence of $\sum_{n=0}^\infty a_n b_n$, or is it sufficient that $\{b_n\}$ is nonnegative and bounded?

Consider $a_n = \frac{(-1)^n}{\sqrt{n}}$ and $b_n = 2018+\frac{(-1)^n}{\sqrt{n}}$. Then all the conditions are met, although we have $$\sum_{n=1}^{\infty} a_n b_n = \sum_{n=1}^{\infty} \left( 2018 \frac{(-1)^n}{\sqrt{n}} + \frac{1}{n}\right),$$ which diverges.

• Looks good, thanks! – Solicitous Wookiee Dec 14 '18 at 16:54

Bounded and non-negative is not sufficient. Consider $a_n=\frac{(-1)^n}n$ and $b_n=1+(-1)^n$.

• $b_n$ is not convergent – gimusi Dec 14 '18 at 16:52
• That is the point... – SmileyCraft Dec 14 '18 at 16:53
• @SmileyCraft The question assumes that $b_n$ is a convergent sequence. – BigbearZzz Dec 14 '18 at 16:56
• The OP literally asks "is it sufficient that $\{b_n\}$ is nonnegative and bounded?" and my example answers this question. – SmileyCraft Dec 14 '18 at 16:59

Assume $a_n = \frac{(-1)^n}{\sqrt n}$ and
$$b_n =\begin{cases}0 & n\ \text{odd}\\ \frac{1}{\sqrt n} & n\ \text{even}\end{cases}$$
and therefore $$\sum_{n=1}^{2N} a_n b_n=\sum_{n=1}^{N} \frac1{2n} \to\infty.$$

• Your sequence $\;b_n\;$ isn't convergent... – DonAntonio Dec 14 '18 at 16:44
• Oops... thanks, I'll fix it – gimusi Dec 14 '18 at 16:44
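As a quick numeric illustration (my own sketch, not part of the original thread), the partial sums in the second answer's example visibly track the harmonic series:

```python
# a_n = (-1)^n / n is conditionally convergent; b_n = 1 + (-1)^n is
# nonnegative and bounded. Then a_n * b_n = 2/n for even n and 0 for odd n,
# so the partial sums grow like the harmonic series, without bound.
def partial_sum(N):
    return sum(((-1) ** n / n) * (1 + (-1) ** n) for n in range(1, N + 1))

for N in (10, 10**3, 10**5):
    print(N, round(partial_sum(N), 4))  # keeps growing as N increases
```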
{}
# Belle

2019-01-28 11:50 [PUBDB-2019-00770] Report/Journal Article et al
Measurement of the Decays $\Lambda_c\to \Sigma\pi\pi$ at Belle [BELLE Preprint 2017-26; KEK Preprint 2017-39; arXiv:1802.03421] Physical review / D 98(11), 112006 (2018) [10.1103/PhysRevD.98.112006]
We report measurements of the branching fractions of the decays $\Lambda^+_c\to\Sigma^+\pi^-\pi^+$, $\Lambda^+_c\to\Sigma^0\pi^+\pi^0$ and $\Lambda^+_c\to\Sigma^+\pi^0\pi^0$ relative to the reference channel $\Lambda^+_c\to pK^-\pi^+$. The analysis is based on the full data sample collected at and close to the $\Upsilon(4S)$ resonance by the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider, corresponding to an integrated luminosity of 711 fb$^{-1}$. [...] OpenAccess: PDF PDF (PDFA);

2019-01-11 08:34 [PUBDB-2019-00380] Report/Journal Article et al
Production cross sections of hyperons and charmed baryons from $e^+e^-$ annihilation near $\sqrt{s} = 10.52$~GeV [BELLE-PREPRINT-2017-14; KEK-PREPRINT-2017-15; arXiv:1706.06791] Physical review / D 97(7), 072005 (2018) [10.1103/PhysRevD.97.072005]
We measure the inclusive production cross sections of hyperons and charmed baryons from $e^+e^-$ annihilation using an 800 fb$^{-1}$ data sample taken near the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. The feed-down contributions from heavy particles are subtracted using our data, and the direct production cross sections are presented for the first time. [...] OpenAccess: PDF PDF (PDFA);

2019-01-10 16:55 [PUBDB-2019-00343] Report/Journal Article et al
Thermal mock-up studies of the Belle II vertex detector [arXiv:1607.00663]
The ongoing upgrade of the asymmetric electron–positron collider SuperKEKB at the KEK laboratory, Japan aims at a 40-fold increase of the peak luminosity to $8 \times 10^{35}\ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}$. At the same time the complex Belle II detector is being significantly upgraded to be able to cope with the higher background level and trigger rates and to improve overall performance. [...] Published on 2018-04-21. Available in OpenAccess from 2019-04-21: PDF PDF (PDFA); Restricted: PDF PDF (PDFA); External link: Fulltext

2019-01-10 08:57 [PUBDB-2019-00251] Report/Journal Article et al
Observation of $\mathrm{\Upsilon(2S)\to\gamma \eta_{b}(1S)}$ decay [BELLE-PREPRINT-2018-14; KEK-PREPRINT-2018-20; PNNL-SA-135879; arXiv:1807.01201] Physical review letters 121, 232001 (2018) [10.1103/PhysRevLett.121.232001]
We report the observation of $\mathrm{\Upsilon(2S)\to\gamma \eta_{b}(1S)}$ decay based on an analysis of the inclusive photon spectrum of 24.7 fb$^{-1}$ of $e^+e^-$ collisions at the $\Upsilon(2S)$ center-of-mass energy collected with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. We measure a branching fraction of $\mathcal{B}[\Upsilon(2S) \to\gamma \eta_{b}(1S)]=(6.1_{-0.7-0.6}^{+0.6+0.9}) \times 10^{-4}$ and derive an $\eta_{b}(1S)$ mass of $9394.8_{-3.1-2.7}^{+2.7+4.5}\ \mathrm{MeV}/c^2$, where the uncertainties are statistical and systematic, respectively. [...] OpenAccess: PDF PDF (PDFA);

2018-12-21 14:40 [PUBDB-2018-05864] Contribution to a conference proceedings
Levonian, Serguei V. Recent Results on Diffraction at HERA. 18th Lomonosov Conference on Elementary Particle Physics, LOMCON17, Moscow, Russia, 24 Aug 2017 - 30 Aug 2017. 6 pp. (2018) [10.3204/PUBDB-2018-05864]
Four new measurements are presented from the area of diffractive and exclusive production at HERA. 
Isolated photons are studied in diffractive photo-production, while the open charm cross section and the cross-section ratio $\sigma_{\psi(2S)}/\sigma_{J/\psi(1S)}$ are measured in the diffractive deep-inelastic scattering (DIS) regime. [...] OpenAccess: PDF PDF (PDFA);

2018-12-19 15:57 [PUBDB-2018-05754] Conference Presentation (Invited)
Guo, A. XYZ Physics prospects at the Belle II Experiment. 2nd International Workshop on High Intensity Electron-Positron Accelerator, HIEPA 2018, Beijing, China, 18 Mar 2018 - 21 Mar 2018. OpenAccess: PDF PDF (PDFA);

2018-12-18 15:26 [PUBDB-2018-05694] Poster
Belle II PXD Collaboration. The Belle II Pixel Detector. The 27th International Workshop on Vertex Detectors, Vertex 2018, Chennai, India, 21 Oct 2018 - 26 Oct 2018. OpenAccess: PDF PDF (PDFA);

2018-12-13 16:52 [PUBDB-2018-05459] Talk (non-conference) (Invited)
Cunliffe, S. T. Dark sector physics with photons at Belle II. Seminar, Karlsruhe Institute of Technology, Karlsruhe, Germany, 24 Sep 2018 - 26 Sep 2018. OpenAccess: PDF PDF (PDFA); External link: Fulltext

2018-12-13 16:17 [PUBDB-2018-05458] Lecture (Invited)
Wehle, S. Testing the Standard Model in rare decays of $B$ mesons at the Belle experiment. Lecture at CEA Saclay (Paris-Saclay, France), 5 Mar 2018 - 5 Mar 2018.
Rare decays of B mesons are an ideal probe to search for phenomena beyond the Standard Model of particle physics, since contributions from new particles can affect the decays on the same level as Standard Model predictions. The rare decay $B \to K^{*}\ell\ell$ offers the quark transition $b \to s\ell\ell$, a flavour-changing neutral current which is forbidden at tree level in the Standard Model. [...] OpenAccess: PDF PDF (PDFA);

2018-12-13 15:50 [PUBDB-2018-05455] Conference Presentation (Invited)
Wehle, S. Motivation for analysing $b\to s\ell\ell$ decays at CEPC. The 2018 International Workshop on the High Energy Circular Electron Positron Collider, CEPC 2018, Beijing, China, 12 Nov 2018 - 14 Nov 2018. OpenAccess: PDF; External link: Fulltext
{}
# How do you solve 4x^2=28?

Apr 19, 2017

$x^2 = 7$, so $x = \pm\sqrt{7}$

#### Explanation:

Divide both sides by 4 (first step):
${x}^{2} = \frac{28}{4}$
${x}^{2} = 7$
Take the square root (of both sides), remembering both signs:
$x = \pm\sqrt{7}$
This is your answer: $x = \pm\sqrt{7}$

Apr 19, 2017

$x = \pm \sqrt{7}$

#### Explanation:

$\textcolor{blue}{\text{Isolate } {x}^{2} \text{ by dividing both sides by 4}}$
$\frac{\cancel{4} {x}^{2}}{\cancel{4}} = \frac{28}{4}$
$\Rightarrow {x}^{2} = 7$
$\textcolor{blue}{\text{Take the square root of both sides}}$, remembering that the square root of a number can have a positive/negative value.
$\sqrt{{x}^{2}} = \textcolor{red}{\pm} \sqrt{7}$
$\Rightarrow x = \textcolor{red}{\pm} \sqrt{7}$
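As a quick check (my own addition, assuming SymPy is available), a computer algebra system returns both roots at once:

```python
from sympy import solve, symbols

x = symbols('x')
# Solve 4x^2 = 28, i.e. 4x^2 - 28 = 0.
print(solve(4 * x**2 - 28, x))  # [-sqrt(7), sqrt(7)]
```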
{}
# 0.8 Ionization constants of weak acids

Ionization Constants of Weak Acids (the Lewis structures shown in the original table are omitted; successive ionization constants are listed for polyprotic acids)

| Acid | Formula | $K_a$ at 25 °C |
| --- | --- | --- |
| acetic | CH₃CO₂H | 1.8 × 10⁻⁵ |
| arsenic | H₃AsO₄ | 5.5 × 10⁻³; 1.7 × 10⁻⁷; 5.1 × 10⁻¹² |
| arsenous | H₃AsO₃ | 5.1 × 10⁻¹⁰ |
| boric | H₃BO₃ | 5.4 × 10⁻¹⁰ |
| carbonic | H₂CO₃ | 4.3 × 10⁻⁷; 5.6 × 10⁻¹¹ |
| cyanic | HCNO | 2 × 10⁻⁴ |
| formic | HCO₂H | 1.8 × 10⁻⁴ |
| hydrazoic | HN₃ | 2.5 × 10⁻⁵ |
| hydrocyanic | HCN | 4.9 × 10⁻¹⁰ |
| hydrofluoric | HF | 3.5 × 10⁻⁴ |
| hydrogen peroxide | H₂O₂ | 2.4 × 10⁻¹² |
| hydrogen selenide | H₂Se | 1.29 × 10⁻⁴ (H₂Se); 1 × 10⁻¹² (HSe⁻) |
| hydrogen sulfate ion | HSO₄⁻ | 1.2 × 10⁻² |
| hydrogen sulfide | H₂S | 8.9 × 10⁻⁸ (H₂S); 1.0 × 10⁻¹⁹ (HS⁻) |
| hydrogen telluride | H₂Te | 2.3 × 10⁻³ (H₂Te); 1.6 × 10⁻¹¹ (HTe⁻) |
| hypobromous | HBrO | 2.8 × 10⁻⁹ |
| hypochlorous | HClO | 2.9 × 10⁻⁸ |
| nitrous | HNO₂ | 4.6 × 10⁻⁴ |
| oxalic | H₂C₂O₄ | 6.0 × 10⁻²; 6.1 × 10⁻⁵ |
| phosphoric | H₃PO₄ | 7.5 × 10⁻³; 6.2 × 10⁻⁸; 4.2 × 10⁻¹³ |
| phosphorous | H₃PO₃ | 5 × 10⁻²; 2.0 × 10⁻⁷ |
| sulfurous | H₂SO₃ | 1.6 × 10⁻²; 6.4 × 10⁻⁸ |
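As a small illustration of how the tabulated constants are used (my own sketch, not from the original page; the 0.10 M concentration is made up for the example), here is the standard weak-acid calculation for acetic acid:

```python
import math

# Weak-acid equilibrium for 0.10 M acetic acid, with Ka from the table.
# Solve x^2 / (c - x) = Ka exactly via the quadratic formula, x = [H3O+].
Ka, c = 1.8e-5, 0.10
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * c)) / 2
print(f"[H3O+] = {x:.2e} M, pH = {-math.log10(x):.2f}")  # ~1.33e-03 M, pH 2.88
```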
{}
location: Publications → journals

Search results: Results 1 - 9 of 9

1. CMB 2011 (vol 56 pp. 39) Ben Amara, Jamel
Comparison Theorem for Conjugate Points of a Fourth-order Linear Differential Equation
In 1961, J. Barrett showed that if the first conjugate point $\eta_1(a)$ exists for the differential equation $(r(x)y'')''= p(x)y$, where $r(x)\gt 0$ and $p(x)\gt 0$, then so does the first systems-conjugate point $\widehat\eta_1(a)$. The aim of this note is to extend this result to the general equation with middle term $(q(x)y')'$ without further restriction on $q(x)$, other than continuity.
Keywords: fourth-order linear differential equation, conjugate points, system-conjugate points, subwronskians. Categories: 47E05, 34B05, 34C10

2. CMB 2011 (vol 56 pp. 366) Kyritsi, Sophia Th.; Papageorgiou, Nikolaos S.
Multiple Solutions for Nonlinear Periodic Problems
We consider a nonlinear periodic problem driven by a nonlinear nonhomogeneous differential operator and a Carathéodory reaction term $f(t,x)$ that exhibits a $(p-1)$-superlinear growth in $x \in \mathbb{R}$ near $\pm\infty$ and near zero. A special case of the differential operator is the scalar $p$-Laplacian. Using a combination of variational methods based on critical point theory with Morse theory (critical groups), we show that the problem has three nontrivial solutions, two of which have constant sign (one positive, the other negative).
Keywords: $C$-condition, mountain pass theorem, critical groups, strong deformation retract, contractible space, homotopy invariance. Categories: 34B15, 34B18, 34C25, 58E05

3. CMB 2011 (vol 55 pp. 3) Agarwal, Ravi P.; Mustafa, Octavian G.
On a Local Theory of Asymptotic Integration for Nonlinear Differential Equations
We improve several recent results in the asymptotic integration theory of nonlinear ordinary differential equations via a variant of the method devised by J. K. Hale and N. Onuchic. The results are used for investigating the existence of positive solutions to certain reaction-diffusion equations.
Keywords: asymptotic integration, Emden-Fowler differential equation, reaction-diffusion equation. Categories: 34E10, 34C10, 35Q35

4. CMB 2009 (vol 53 pp. 193) Agarwal, Ravi P.; Avramescu, Cezar; Mustafa, Octavian G.
On the Oscillation of a Second Order Strictly Sublinear Differential Equation
We establish a flexible oscillation criterion based on an averaging technique that improves upon a result due to C. G. Philos.
Keywords: oscillation theory, averaging method. Categories: 34C10, 34C15, 34C29

5. CMB 2009 (vol 52 pp. 315) Yi, Taishan; Zou, Xingfu
Generic Quasi-Convergence for Essentially Strongly Order-Preserving Semiflows
By employing the limit set dichotomy for essentially strongly order-preserving semiflows and the assumption that limit sets have infima and suprema in the state space, we prove a generic quasi-convergence principle implying the existence of an open and dense set of stable quasi-convergent points. We also apply this generic quasi-convergence principle to a model for biochemical feedback in protein synthesis and obtain some results about the model which are of theoretical and realistic significance.
Keywords: essentially strongly order-preserving semiflow, compactness, quasi-convergence. Categories: 34C12, 34K25

6. CMB 2007 (vol 50 pp. 377) Gutierrez, C.; Jarque, X.; Llibre, J.; Teixeira, M. A. 
Global Injectivity of $C^1$ Maps of the Real Plane, Inseparable Leaves and the Palais–Smale Condition
We study two sufficient conditions that imply global injectivity for a $C^1$ map $X\colon \R^2\to \R^2$ such that its Jacobian at any point of $\R^2$ is not zero. One is based on the notion of half-Reeb component and the other on the Palais–Smale condition. We improve the first condition using the notion of inseparable leaves. We provide a new proof of the sufficiency of the second condition. We prove that the two conditions are not equivalent; more precisely, we show that the Palais–Smale condition implies the nonexistence of inseparable leaves, but the converse is not true. Finally, we show that the Palais–Smale condition is not a necessary condition for the global injectivity of the map $X$.
Categories: 34C35, 34H05

7. CMB 2001 (vol 44 pp. 323) Schuman, Bertrand
Une classe d'hamiltoniens polynomiaux isochrones (A class of isochronous polynomial Hamiltonians)
Let $H_0 = \frac{x^2+y^2}{2}$ be an isochronous Hamiltonian of the plane $\Rset^2$. We exhibit a class of isochronous Hamiltonians which are polynomial perturbations of $H_0$, and obtain a necessary condition for isochronism together with a selection criterion for isochronous Hamiltonians. We can think of this result as a generalization of the isochronous behaviour of the homogeneous polynomial perturbations of the Hamiltonian $H_0$ considered in [L], [P], [S].
Keywords: Hamiltonian system, normal forms, resonance, linearization. Categories: 34C20, 58F05, 58F22, 58F30

8. CMB 1997 (vol 40 pp. 448) Kaczynski, Tomasz; Mrozek, Marian
Stable index pairs for discrete dynamical systems
A new shorter proof of the existence of index pairs for discrete dynamical systems is given. Moreover, the index pairs defined in that proof are stable with respect to small perturbations of the generating map. The existence of stable index pairs was previously known in the case of diffeomorphisms and flows generated by smooth vector fields, but it was an open question in the general discrete case.
Categories: 54H20, 54C60, 34C35

9. CMB 1997 (vol 40 pp. 276) Chouikha, Raouf
Fonctions elliptiques et équations différentielles ordinaires (Elliptic functions and ordinary differential equations)
In this paper, we detail some results of a previous note concerning a trigonometric expansion of the Weierstrass elliptic function $\{\wp(z);\, 2\omega, 2\omega'\}$. In particular, this implies its classical Fourier expansion. We use a direct integration method for the ODE
$$(E)\quad \begin{cases} \dfrac{d^2u}{dt^2} = P(u, \lambda) \\ u(0) = \sigma \\ \dfrac{du}{dt}(0) = \tau \end{cases}$$
where $P(u)$ is a polynomial of degree $n = 2$ or $3$. In this case, the bifurcations of $(E)$ depend on one parameter only. Moreover, this global method seems not to apply to the cases $n > 3$.
Categories: 33E05, 34A05, 33E20, 33E30, 34A20, 34C23
{}
## anonymous one year ago Nancy left a bin outside in her garden to collect rainwater. She notices that 1 over 2 gallon of water fills 2 over 3 of the bin. Write and solve an equation to find the amount of water that will fill the entire bin. Show your work. Explain your answer in words. 1. anonymous @phi @mathmath333 2. anonymous @imqwerty @dan915 @Elsa213 @JayJayTheWereWolfGal1 3. phi does that say 1/2 gallons fills 2/3 of the barrel? 4. anonymous Yes it does 5. phi you could use ratios: 1/2 is to 2/3 as x is to 1 which says 1/2 gallon is to 2/3 of a bin as an unknown number of gallons (call it x) is to 1 (full) bin $\frac{ \frac{1}{2}}{\frac{2}{3}} = \frac{x}{1}$ 6. phi or you could say 2/3 of a full bin is 1/2 gallon 2/3 x = 1/2 $\frac{2}{3} x= \frac{1}{2}$ 7. anonymous OK the second one makes more sense 8. phi to solve the second one, multiply both sides by 3/2 and simplify 9. anonymous Multiply 12? 10. anonymous @phi 11. phi you should get $x= \frac{3}{2}\cdot \frac{1}{2} = \frac{3}{4}$ 3/4 of a gallon will fill the bin.
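For a quick exact check of phi's second equation (my own addition, not part of the original thread), Python's fractions module gives the same result:

```python
from fractions import Fraction

# phi's second equation: (2/3) * x = 1/2, solved exactly with fractions.
x = Fraction(1, 2) / Fraction(2, 3)
print(x)  # 3/4 -> three quarters of a gallon fills the whole bin
```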
{}
## Electronic Journal of Probability ### Rigorous results for a population model with selection II: genealogy of the population Jason Schweinsberg #### Abstract We consider a model of a population of fixed size $N$ undergoing selection. Each individual acquires beneficial mutations at rate $\mu _N$, and each beneficial mutation increases the individual’s fitness by $s_N$. Each individual dies at rate one, and when a death occurs, an individual is chosen with probability proportional to the individual’s fitness to give birth. Under certain conditions on the parameters $\mu _N$ and $s_N$, we show that the genealogy of the population can be described by the Bolthausen-Sznitman coalescent. This result confirms predictions of Desai, Walczak, and Fisher (2013), and Neher and Hallatschek (2013). #### Article information Source Electron. J. Probab., Volume 22 (2017), paper no. 38, 54 pp. Dates Received: 28 January 2017 Accepted: 18 April 2017 First available in Project Euclid: 27 April 2017 Permanent link to this document https://projecteuclid.org/euclid.ejp/1493258437 Digital Object Identifier doi:10.1214/17-EJP58 Mathematical Reviews number (MathSciNet) MR3646064 Zentralblatt MATH identifier 1362.92066 #### Citation Schweinsberg, Jason. Rigorous results for a population model with selection II: genealogy of the population. Electron. J. Probab. 22 (2017), paper no. 38, 54 pp. doi:10.1214/17-EJP58. https://projecteuclid.org/euclid.ejp/1493258437 #### References • [1] Athreya, K. B. and Ney, P. E.: Branching Processes. Springer-Verlag, Berlin, 1972. xi+287 pp. • [2] Beerenwinkel, N., Antal, T., Dingli, D., Traulsen, A., Kinzler, K. W., Velculescu, V. E., Vogelstein, B., and Nowak, M. A.: Genetic progression and the waiting time to cancer. PLoS Comput. Biol. 3, (2007), 2239–2246. • [3] Bérard, J. and Gouéré, J.-B.: Brunet-Derrida behavior of branching-selection particle systems on the line. Comm. Math. Phys. 298, (2010), 323–342. • [4] Berestycki, J., Berestycki, N., and Schweinsberg, J.: The genealogy of branching Brownian motion with absorption. Ann. Probab. 41, (2013), 527–618. • [5] Berestycki, J., Berestycki, N., and Schweinsberg, J.: Critical branching Brownian motion with absorption: particle configurations. Ann. Inst. H. Poincaré Probab. Statist. 51, (2015), 1215–1250. • [6] Bolthausen, E. and Sznitman, A.-S.: On Ruelle’s probability cascades and an abstract cavity method. Comm. Math. Phys. 197, (1998), 247–276. • [7] Brunet, É. and Derrida, B.: Shift in the velocity of a front due to a cutoff. Phys. Rev. E 56, (1997), 2597–2604. • [8] Brunet, É., Derrida, B., Mueller, A. H., and Munier, S.: Noisy traveling waves: effect of selection on genealogies. Europhys. Lett. 76, (2006), 1–7. • [9] Brunet, É., Derrida, B., Mueller, A. H., and Munier, S.: Effect of selection on ancestry: an exactly soluble case and its phenomenological generalization. Phys. Rev. E 76, (2007), 041104. • [10] Brunet, É., Rouzine, I. M., and Wilke, C. O.: The stochastic edge in adaptive evolution. Genetics 179, (2008), 603–620. • [11] Desai, M. M. and Fisher, D. S.: Beneficial mutation-selection balance and the effect of linkage on positive selection. Genetics 176, (2007), 1759–1798. • [12] Desai, M. M., Walczak, A. M., and Fisher, D. S.: Genetic diversity and the structure of genealogies in rapidly adapting populations. Genetics 193, (2013), 565–585. • [13] Durrett, R., Foo, J., Leder, K., Mayberry, J., and Michor, F.: Intratumor heterogeneity in evolutionary models of tumor progression. Genetics 188, (2011), 461–477. 
• [14] Durrett, R. and Mayberry, J.: Traveling waves of selective sweeps. Ann. Appl. Probab. 21, (2011), 699–744. • [15] Durrett, R. and Moseley, S.: Evolution of resistance and progression to disease during clonal expansion of cancer. Theo. Pop. Biol. 77, (2010), 42–48. • [16] Durrett, R. and Schweinsberg, J.: A coalescent model for the effect of advantageous mutations on the genealogy of a population. Stochastic Process. Appl. 115, (2005), 1628–1657. • [17] Kingman, J. F. C.: The coalescent. Stochastic Process. Appl. 13, (1982), 235–248. • [18] Leviyang, S.: The coalescence of intrahost HIV lineages under symmetric CTL attack. Bull. Math. Biol. 74, (2012), 1818–1856. • [19] Maillard, P.: Speed and fluctuations of $N$-particle branching Brownian motion with spatial selection. Probab. Theory Relat. Fields 166, (2016), 1061–1173. • [20] Moran, P. A. P.: Random processes in genetics. Proc. Cambridge Philos. Soc. 54, (1958), 60–71. • [21] Mueller, C., Mytnik, L., and Quastel, J.: Effect of noise on front propagation in reaction-diffusion equations of KPP type. Invent. Math. 184, (2011), 405–453. • [22] Neher, R. A. and Hallatschek, O.: Genealogies of rapidly adapting populations. Proc. Natl. Acad. Sci. 110, (2013), 437–442. • [23] Pitman, J.: Coalescents with multiple collisions. Ann. Probab. 27, (1999), 1870–1902. • [24] Pitman, J. and Yor, M.: The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab. 25, (1997), 855–900. • [25] Rouzine, I. M., Brunet, É., and Wilke, C. O.: The traveling-wave approach to asexual evolution: Muller’s ratchet and speed of adaptation. Theor. Pop. Biol 73, (2008), 24–46. • [26] Rouzine, I. M., Wakeley, J., and Coffin, J. M.: The solitary wave of asexual evolution. Proc. Natl. Acad. Sci. 100, (2003), 587–592. • [27] Sagitov, S.: The general coalescent with asynchronous mergers of ancestral lines. J. Appl. Probab. 36, (1999), 1116–1125. • [28] Schweinsberg, J.: Rigorous results for a population model with selection I: evolution of the fitness distribution. Electron. J. Probab. 22, (2017), 1–94. • [29] Yu, F., Etheridge, A., and Cuthbertson, C.: Asymptotic behavior of the rate of adaptation. Ann. Appl. Probab. 20, (2010), 978–1004.
{}
Uspekhi Mat. Nauk, 2019, Volume 74, Issue 2(446), Pages 81–148 (Mi umn9877)

Real-normalized differentials: limits on stable curves
S. Grushevsky (a), I. M. Krichever (b,c,d,e,f), Ch. Norton (g,h)

a Stony Brook University, Stony Brook, NY, USA
b Columbia University, New York, USA
c Skolkovo Institute of Science and Technology
d National Research University Higher School of Economics
e Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute)
f L. D. Landau Institute for Theoretical Physics RAS
g Concordia University, Montreal, QC, Canada
h Centre de Recherches Mathématiques (CRM), Université de Montréal, Montreal, QC, Canada

Abstract: We study the behaviour of real-normalized (RN) meromorphic differentials on Riemann surfaces under degeneration. We describe all possible limits of RN differentials on any stable curve. In particular we prove that the residues at the nodes are solutions of a suitable Kirchhoff problem on the dual graph of the curve. We further show that the limits of zeros of RN differentials are the divisor of zeros of a twisted differential, an explicitly constructed collection of RN differentials on the irreducible components of the stable curve, with higher order poles at some nodes. Our main tool is a new method for constructing differentials (in this paper, RN differentials, but the method is more general) on smooth Riemann surfaces, in a plumbing neighbourhood of a given stable curve. To accomplish this, we think of a smooth Riemann surface as the complement of a neighbourhood of the nodes in a stable curve, with boundary circles identified pairwise. Constructing a differential on a smooth surface with prescribed singularities is then reduced to a construction of a suitable normalized holomorphic differential with prescribed 'jumps' (mismatches) along the identified circles (seams). We solve this additive analogue of the multiplicative Riemann–Hilbert problem in a new way, by using iteratively the Cauchy integration kernels on the irreducible components of the stable curve, instead of using the Cauchy kernel on the plumbed smooth surface. As the stable curve is fixed, this provides explicit estimates for the differential constructed, and allows a precise degeneration analysis. Bibliography: 22 titles.

Keywords: Riemann surfaces, Abelian differentials, boundary value problem, degenerations.

Funding: National Science Foundation, grant DMS-15-01265; Simons Foundation, grant 341858. The research of the first author was supported in part by the National Science Foundation under grant DMS-15-01265, and by a Simons Fellowship in Mathematics (Simons Foundation grant #341858 to Samuel Grushevsky).

DOI: https://doi.org/10.4213/rm9877
Full text: PDF file (973 kB); First page: PDF file; References: PDF file, HTML file
English version: Russian Mathematical Surveys, 2019, 74:2, 265–324
UDC: 517.948+514.7
MSC: Primary 14H10, 14H15, 30F30; Secondary 32G15

Citation: S. Grushevsky, I. M. Krichever, Ch. Norton, "Real-normalized differentials: limits on stable curves", Uspekhi Mat. Nauk, 74:2(446) (2019), 81–148; Russian Math. 
Surveys, 74:2 (2019), 265–324 Citation in format AMSBIB \Bibitem{GruKriNor19} \by S.~Grushevsky, I.~M.~Krichever, Ch.~Norton \paper Real-normalized differentials: limits on stable curves \jour Uspekhi Mat. Nauk \yr 2019 \vol 74 \issue 2(446) \pages 81--148 \mathnet{http://mi.mathnet.ru/umn9877} \crossref{https://doi.org/10.4213/rm9877} \adsnasa{http://adsabs.harvard.edu/cgi-bin/bib_query?2019RuMaS..74..265G} \elib{http://elibrary.ru/item.asp?id=37180592} \transl \jour Russian Math. Surveys \yr 2019 \vol 74 \issue 2 \pages 265--324 \crossref{https://doi.org/10.1070/RM9877} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000474710200003} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85072732867}
{}
Journal article

### Contributions to the width difference in the neutral D system from hadronic decays

Abstract: Recent studies of several multi-body D0 meson decays have revealed that the final states are dominantly CP-even. However, the small value of the width difference between the two physical eigenstates of the D0-D0 system indicates that the total widths of decays to CP-even and CP-odd final states should be the same to within about a percent. The known contributions to the width difference from hadronic D0 decays are discussed, and it is shown that an apparent excess of quasi-CP-even modes is ba...

Publication status: Published
Peer review status: Peer reviewed

### Access Document

Files: Version of record (pdf, 355.4 KB)
Publisher copy: 10.1016/j.physletb.2015.08.063

### Authors

Institution: University of Oxford; Oxford college: Christ Church; Role: Author

Publisher: Elsevier
Journal: Physics Letters B
Volume: 750
Pages: 338-343
Publication date: 2015-11-01
Acceptance date: 2015-08-29
ISSN: 0370-2693
Source identifiers: 570648
Pubs id: pubs:570648
UUID: uuid:0420830c-1e6d-43f8-8f17-8850887eba57
Local pid: pubs:570648
Deposit date: 2016-12-27
{}
# How to solve 2 ÷ 2 ÷ 2?

$$2 ÷ 2 ÷ 2 = (2 ÷ 2) ÷ 2 \ \ \text{OR}\ \ 2 ÷ (2 ÷ 2)?$$

Is there any universally accepted standard rule for evaluating this type of expression? If I process the expression from left to right then I get $\dfrac12$. But if I process it from right to left then I get $\dfrac21$, that is $2$. It might be that it is an invalid expression. But these types of questions are usually asked in India's exams. E.g. the 82nd question of the SBI Clerk Exam (held on 06-07-2008) was:

$$82.\ \text{Q:}\qquad 14400÷64÷9=?$$

The answer given was $25$. They appear to assume the order of execution is from left to right. So is the standard rule to execute the operations from left to right?

• If such a question is given, I would solve it from left to right. That said, such questions are obviously not written by mathematicians. – Carl May 31 '14 at 8:03
• I agree with Carl, but people who write things like that are probably sadistic but surely not mathematicians! – Claude Leibovici May 31 '14 at 8:06
• Most calculators and computer languages will execute multiplication and division left-to-right. Similarly for addition and subtraction. But for other expressions such as 1+2*3 and 2^3^2, answers vary between implementations: so either 9 or 7 and either 64 or 512, and most mathematicians would choose $1+2\times 3= 7$ and $2^{3^2}=512$, in the former case doing multiplication before addition, and in the latter case operating right-to-left for exponentiation. – Henry May 31 '14 at 8:16
• I see it as $\frac{a}{b/c}=\frac{ac}{b}$, +1 for letting others know what "type" (insert your fav. word here) of questions are asked in India, and I totally agree with Carl and Claude – Vikram May 31 '14 at 8:19
• @Henry: Could you point out an "implementation" that calculates 1+2*3 = 9? I would be most curious to find out which computer language / compiler (interpreter) combo you had in mind when making your point!! – gnometorule Jun 2 '14 at 19:28

$$2 ÷ 2 ÷ 2 = 2 \cdot \frac{1}{2} \cdot \frac{1}{2}$$

$$f \circ g |_{x} = f(g(x))$$ so you first apply $g$ to $x$ and then you apply $f$ to the result of $g(x)$. When you have exponentiation: $$a^b = a \uparrow b$$ $$a^{b^c} = a \uparrow b \uparrow c$$ In the last case you also go from right to left. So: $$\left(a^b\right)^c\neq a^{(b^c)}$$

• How $2 ÷ 2 ÷ 2 = \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2}$? – user103816 May 31 '14 at 8:24
• @Vikram I understand what "o5strom26" is trying to say. In many cases the order is from right to left as he is saying. But $2 ÷ 2 ÷ 2 = 2 \cdot \frac{1}{2} \cdot \frac{1}{2}$ is still wrong :-/ . Nonetheless '+1'. – user103816 May 31 '14 at 8:40
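As the comments note, programming languages make division left-associative; a quick check (my own addition, not part of the original thread):

```python
# Division is left-associative in most programming languages, matching the
# left-to-right convention the exam assumes.
print(2 / 2 / 2)        # 0.5, parsed as (2 / 2) / 2
print(2 / (2 / 2))      # 2.0, only with explicit parentheses
print(14400 / 64 / 9)   # 25.0, the SBI Clerk answer
```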
{}
Order-of-Magnitude Bounds for Expectations Involving Quadratic Forms
Victor H. de la Peña and Michael J. Klass
The Annals of Probability, Vol. 22, No. 2 (Apr., 1994), pp. 1044-1077
Stable URL: http://www.jstor.org/stable/2244904
Page Count: 34

Abstract: Let $X_1, X_2,\dots,X_n$ be independent mean-zero random variables and let $a_{ij}$, $1 \le i, j \le n$, be an array of constants with $a_{ii} \equiv 0$. We present a method of obtaining the order of magnitude of $E\Phi\bigl(\sum_{1\le i,j\le n}a_{ij}X_iX_j\bigr)$ for any such $\{X_i\}$ and $\{a_{ij}\}$ and any nonnegative symmetric (convex) function $\Phi$ with $\Phi(0) = 0$ such that, for some integer $k \ge 0$, $\Phi(x^{2^{-k}})$ is convex and simultaneously $\Phi(x^{2^{-k-1}})$ is concave on $[0, \infty)$. The approximation is based on decoupling inequalities valid for all such mean-zero $\{X_i\}$ and reals $\{a_{ij}\}$ and a certain further "independentization" procedure.
{}
## Saturday, August 28, 2021

### Howard Cohl (NIST) Thursday, Sept 2, 2021, 6:30 PM IST

Dear all, The next talk is by Howard Cohl of NIST. Please note the special time. Howard is zooming in from California, and we are grateful to him for being able to speak at a time suitable to us.

Talk Announcement:
Title: The utility of integral representations for the Askey-Wilson polynomials and their symmetric sub-families
Speaker: Howard Cohl (NIST)
When: Thursday, September 2, 2021 - 6:30 PM - 7:30 PM (IST) (6 am Pacific Daylight Time (PDT))
Abstract: The Askey-Wilson polynomials are a class of orthogonal polynomials which are symmetric in four free parameters and which lie at the very top of the q-Askey scheme of basic hypergeometric orthogonal polynomials. These polynomials, and the polynomials in their subfamilies, are usually defined in terms of their finite series representations, which are given in terms of terminating basic hypergeometric series. However, they also have nonterminating, q-integral, and integral representations. In this talk, we will explore some of what is known about the symmetry of these representations and how they have been used to compute their important properties such as generating functions. This study led to an extension of interesting contour integral representations of sums of nonterminating basic hypergeometric functions initially studied by Bailey, Slater, Askey, Roy, Gasper and Rahman. We will also discuss how these contour integrals are deeply connected to the properties of the symmetric basic hypergeometric orthogonal polynomials.

Gaurav Bhatnagar (Ashoka), Atul Dixit (IIT, Gandhinagar) and Krishnan Rajkumar (JNU) sfandnt@gmail.com

## Thursday, August 19, 2021

### Rajat Gupta (IIT, Gandhinagar) - Thursday, Aug 19, 2021 - 3:55 PM - 5:00 PM (IST)

The next talk is by Rajat Gupta -- or shall we say Dr. Rajat Gupta! Here is the announcement.

Talk Announcement:
Title: Koshliakov zeta functions and modular relations
Speaker: Rajat Gupta (IIT, Gandhinagar)
When: Thursday, Aug 19, 2021 - 3:55 PM - 5:00 PM (IST)
Abstract: Nikolai Sergeevich Koshliakov was an outstanding Russian mathematician who made phenomenal contributions to number theory and differential equations. In the aftermath of World War II, he was one among the many scientists who were arrested on fabricated charges and incarcerated. Under extreme hardships while still in prison, Koshliakov (under the different name 'N. S. Sergeev') wrote two manuscripts, of which one was lost. Fortunately the second one was published in 1949, although, to the best of our knowledge, no one studied it until last year when Prof. Atul Dixit and I started examining it in detail. This manuscript contains a complete theory of two interesting generalizations of the Riemann zeta function having their genesis in heat conduction and is truly a masterpiece! In this talk, we will discuss some of the contents of this manuscript and then proceed to give some new results (modular relations) that we have obtained in this theory. This is joint work with Prof. Atul Dixit.

## Thursday, August 5, 2021

### Peter A. Clarkson (University of Kent, UK) - Thursday, Aug 5, 2021 - 3:55 PM - 5:00 PM (IST)

The next speaker in our seminar is Professor Peter Clarkson of the University of Kent, Canterbury, UK. Here is the announcement.

Talk Announcement:
Title: Special polynomials associated with the Painlevé equations
Speaker: Peter A. 
Clarkson (University of Kent, UK) When: Thursday, Aug 5, 2021 - 3:55 PM - 5:00 PM (IST) Around the middle of the 20th century, as science and engineering continued to expand in new directions, a new class of functions, the Painlevé functions, started to appear in applications. The list of problems now known to be described by the Painlevé equations is large, varied and expanding rapidly. The list includes, at one end, the scattering of neutrons off heavy nuclei, and at the other, the distribution of the zeros of the Riemann-zeta function on the critical line $\mbox{Re}(z) =\tfrac12$. Amongst many others, there is random matrix theory, the asymptotic theory of orthogonal polynomials, self-similar solutions of integrable equations, combinatorial problems such as the longest increasing subsequence problem, tiling problems, multivariate statistics in the important asymptotic regime where the number of variables and the number of samples are comparable and large, and also random growth problems.
## Autoscaling

Autoscaling consists of two steps. The first step is centering (or, more precisely, mean centering), in which the center of the data cloud in variable space is moved to the origin. Mathematically, this is done by subtracting the mean from the data values separately for every column/variable. The second step is scaling or standardization, in which the data values are divided by the standard deviation so that the variables have unit variance. This autoscaling procedure (both steps) is known in statistics simply as standardization.

You can also use arbitrary values to center or/and scale the data; in this case a sequence or vector with these values should be provided as the argument for center or scale. R has a built-in function for centering and scaling, scale(). The method prep.autoscale() is actually a wrapper for this function, which is mostly needed to set all user-defined attributes on the result (all preprocessing methods keep the attributes). Here are some examples of how to use it:

```r
library(mdatools)
data(people)

# mean centering only
data1 = prep.autoscale(people, center = TRUE, scale = FALSE)

# scaling/standardization only
data2 = prep.autoscale(people, center = FALSE, scale = TRUE)

# autoscaling (mean centering and standardization)
data3 = prep.autoscale(people, center = TRUE, scale = TRUE)

# centering with median values and standardization
data4 = prep.autoscale(people, center = apply(people, 2, median), scale = TRUE)

par(mfrow = c(2, 2))
boxplot(data1, main = "Mean centered")
boxplot(data2, main = "Standardized")
boxplot(data3, main = "Mean centered and standardized")
boxplot(data4, main = "Median centered and standardized")
```

The method also has an additional parameter, max.cov, which makes it possible to avoid scaling variables with zero or very low variation. The parameter defines a limit for the coefficient of variation in percent, sd(x) / mean(x) * 100, and the method will not scale variables whose coefficient of variation is below this limit. The default value of the parameter is 0, which prevents scaling of constant variables (scaling them would lead to Inf values).
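As a short illustration of max.cov, here is a sketch; the extra constant column and the 0.1 threshold are made up for demonstration:

```r
library(mdatools)
data(people)

# add a constant column to demonstrate the effect (hypothetical example)
people2 = cbind(people, Const = rep(1, nrow(people)))

# default max.cov = 0: the constant column is centered but not scaled,
# which avoids dividing by a zero standard deviation (Inf values)
data5 = prep.autoscale(people2, center = TRUE, scale = TRUE)

# max.cov = 0.1: variables with a coefficient of variation below 0.1%
# are excluded from scaling as well
data6 = prep.autoscale(people2, center = TRUE, scale = TRUE, max.cov = 0.1)
```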
# Info, tips & tricks

There is the usual trade-off between speed, memory, and accuracy. Very generally speaking, the DLF is faster than QWE, but QWE is much easier on memory usage, and QWE allows you to control the accuracy. A standard quadrature in the form of QUAD is also provided. QUAD is generally orders of magnitude slower, and more fragile depending on the input arguments. However, it can provide accurate results where DLF and QWE fail.

## Memory

By default empymod will try to carry out the computation in one go, without looping. If your model has many offsets and many frequencies this can be heavy on memory usage, even more so if you are computing time-domain responses for many times. If you are running out of memory, you should use either loop='off' or loop='freq' to loop over offsets or frequencies, respectively. Use verb=3 to see how many offsets and how many frequencies are computed internally.

## Speed

Please be aware that the high-level routines empymod.model.bipole and empymod.model.loop are convenience functions, making it easy for the user to compute arbitrarily rotated sources and receivers. However, the convenience comes at a price: these are certainly not the fastest implementations for a given scenario. There are simply too many different use cases, each with its particular layout of sources, receivers, geometrical factors, required fields, and so on. These convenience functions simply loop internally over different source and receiver depths, source and receiver integration points, and required fields. If you are going to model millions and millions of responses, it is worth thinking about this carefully. Often it will be much faster to collect the same source and receiver depths and call these functions individually (loop yourself), or to write your own wrapper around empymod.model.dipole, or even around empymod.model.fem and empymod.model.tem. As such, the provided modelling routines can serve as templates to create your own, problem-specific modelling routine!

## Depths, Rotation, and Bipole

Depths: Computation of many source and receiver positions is fastest if they remain at the same depth, as they can then be computed in one kernel call. If depths do change, one has to loop over them. Note: Sources or receivers placed on a layer interface are considered to be in the upper layer.

Rotation: Sources and receivers aligned along the principal axes x, y, and z can be computed in one kernel call. For arbitrarily oriented di- or bipoles, 3 kernel calls are required. If both source and receiver are arbitrarily oriented, 9 (3x3) kernel calls are required.

Bipole: Bipoles increase the computation time in proportion to the number of integration points used. For a source and a receiver bipole with 5 integration points each, you need 25 (5x5) kernel calls. You can compute it in 1 kernel call if you set both integration points to 1, thereby computing the bipoles as if they were dipoles at their centres.

Example: For 1 source and 10 receivers, all at the same depth, 1 kernel call is required. If all receivers are at different depths, 10 kernel calls are required. If you make source and receivers bipoles with 5 integration points, 250 kernel calls are required. If you additionally rotate the source arbitrarily in the horizontal plane, 500 kernel calls are required. If you rotate the receivers too, in the horizontal plane, 1,000 kernel calls are required. If you rotate the receivers also vertically, 1,500 kernel calls are required. If you rotate the source vertically too, 2,250 kernel calls are required. So your computation will take 2,250 times longer! No matter how fast the kernel is, this will take a long time. Therefore carefully plan how precisely you want to define your source and receiver bipoles.

Example as a table for comparison (1 source, 10 receivers, one or many frequencies):

| kernel calls | src intpts | src azimuth | src dip | rec intpts | rec azimuth | rec dip | diff. z |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 0/90 | 0/90 | 1 | 0/90 | 0/90 | 1 |
| 10 | 1 | 0/90 | 0/90 | 1 | 0/90 | 0/90 | 10 |
| 250 | 5 | 0/90 | 0/90 | 5 | 0/90 | 0/90 | 10 |
| 500 | 5 | arb. | 0/90 | 5 | 0/90 | 0/90 | 10 |
| 1000 | 5 | arb. | 0/90 | 5 | arb. | 0/90 | 10 |
| 1500 | 5 | arb. | 0/90 | 5 | arb. | arb. | 10 |
| 2250 | 5 | arb. | arb. | 5 | arb. | arb. | 10 |

## Lagged Convolution and Splined Transforms

Both the Hankel and the Fourier DLF have three options, which can be controlled via the htarg['pts_per_dec'] and ftarg['pts_per_dec'] parameters:

- pts_per_dec=0: Standard DLF;
- pts_per_dec<0: Lagged Convolution DLF: spacing defined by the filter base, interpolation is carried out in the input domain;
- pts_per_dec>0: Splined DLF: spacing defined by pts_per_dec, interpolation is carried out in the output domain.

Similarly, interpolation can be used for QWE by setting pts_per_dec to a value bigger than 0.

The Lagged Convolution and Splined options should be used with caution, as they use interpolation and are therefore less precise than the standard version. However, they can significantly speed up QWE, and massively speed up DLF. Additionally, the interpolated versions minimize memory requirements a lot. Speed-up is greater if all source-receiver angles are identical. Note that setting pts_per_dec to something other than 0 to compute only one offset (Hankel) or only one time (Fourier) will be slower than using the standard version.

QWE: Good speed-up is also achieved for QWE by setting maxint as low as possible. Also, the higher nquad is, the higher the speed-up will be.

DLF: Big improvements are achieved for long DLF filters and for many offsets/frequencies (thousands).

Warning: Keep in mind that setting pts_per_dec to something other than 0 uses interpolation, and is therefore not as accurate as the standard version. Use with caution and always compare with the standard version to verify whether you can apply interpolation to your problem at hand!

Be aware that QUAD (Hankel transform) always uses the splined version and always loops over offsets. The Fourier transforms FFTlog, QWE, and FFT always use interpolation too, either in the frequency or in the time domain. The splined versions of QWE check whether the ratio of any two adjacent intervals is above a certain threshold (steep end of the wavenumber or frequency spectrum). If it is, QUAD is carried out for this interval instead of QWE. The threshold is stored in diff_quad, which can be changed within the parameters htarg and ftarg.

For a graphical explanation of the differences between standard DLF, lagged convolution DLF, and splined DLF for the Hankel and the Fourier transforms, see the example Digital Linear Filters.

## Looping

By default, you can compute many offsets and many frequencies all in one go, vectorized (for the DLF). The loop parameter gives you the possibility to force looping over frequencies or offsets. This parameter can have severe effects on both runtime and memory usage. Play around with this factor to find the fastest version for your problem at hand. It ALWAYS loops over frequencies if ht = 'QWE'/'QUAD' or if ht = 'DLF' and pts_per_dec != 0 (Lagged Convolution or Splined Hankel DLF). The all-vectorized computation is very fast if there are few offsets or few frequencies.
If there are many offsets and many frequencies, looping over the smaller of the two will be faster. Choosing the right looping can have a significant influence.

## Vertical components and xdirect

Computing the direct field in the wavenumber-frequency domain (xdirect=False; the default) is generally faster than computing it in the frequency-space domain (xdirect=True). However, using xdirect=True can improve the result (if source and receiver are in the same layer) when computing:

- the vertical electric field due to a vertical electric source,
- configurations that involve vertical magnetic components (source or receiver),
- all configurations where source and receiver depth are exactly the same.

The Hankel transform methods sometimes have difficulties transforming these functions.

## Time-domain land CSEM

The derivation, as it stands, has a near-singular behaviour in the wavenumber-frequency domain when $$\kappa^2 = \omega^2\epsilon\mu$$. This can be a problem for land CSEM computations if source and receiver are located at the surface between air and subsurface, because most transforms do not sample the wavenumber-frequency domain sufficiently to catch this near-singular behaviour (hence not smooth), which then creates noise at early times, where the signal should be zero. To avoid the issue, simply set the relative electric permittivity (epermH, epermV) of the air to zero. This trick obviously uses the diffusive approximation for the air layer; it therefore will not work for very high frequencies (e.g., GPR computations). An example is given in Improve land CSEM computation.

This trick works fine for all horizontal components, but not so well for the vertical component. But then it is not feasible to have a vertical source or receiver exactly at the surface either. A few tips for these cases: The receiver can be put pretty close to the surface (a few millimetres), but the source has to be put down a metre or two; more for the case of a vertical source AND receiver, less for a vertical source OR receiver. The results are generally better if the source is put deeper than the receiver. In either case, it is best to first test the survey layout against the analytical result for a half-space (using empymod.analytical with solution='dhs'), and subsequently model more complex cases. A common alternative to this trick is to apply a lowpass filter to filter out the unstable high frequencies.

## Hook for user-defined computation of $$\eta$$ and $$\zeta$$

In principle, it is always best to write your own modelling routine if you want to adjust something. Just copy empymod.dipole or empymod.bipole as a template, and modify it to your needs. Since empymod v1.7.4, however, there is a hook which allows you to modify $$\eta_h, \eta_v, \zeta_h$$, and $$\zeta_v$$ quite easily. The trick is to provide a dictionary (we name it inp here) instead of the resistivity vector in res. This dictionary, inp, has two mandatory entries plus optional ones:

- res: the resistivity vector you would have provided normally (mandatory).
- A function name, which has to be either or both of (mandatory):
  - func_eta: to adjust etaH and etaV, or
  - func_zeta: to adjust zetaH and zetaV.
- In addition, you have to provide all parameters you use in func_eta/func_zeta that are not already provided to empymod. All additional parameters must have #layers elements.
The functions func_eta and func_zeta must have the following characteristics:

- The signature is func(inp, p_dict), where
  - inp is the dictionary you provide, and
  - p_dict is a dictionary that contains all parameters so far computed in empymod [locals()].
- It must return etaH, etaV if func_eta, or zetaH, zetaV if func_zeta.

Dummy example:

```python
def my_new_eta(inp, p_dict):
    # Your computations, using the parameters you provided in `inp`
    # and the parameters from empymod in `p_dict`;
    # for instance, they could use inp['tau'].
    ...
    return etaH, etaV  # skeleton only; etaH/etaV must be computed above
```

And then you call empymod with res={'res': res-array, 'tau': tau, 'func_eta': my_new_eta} (a self-contained sketch is given at the end of this page). Have a look at the corresponding example in the Gallery, where this hook is exploited in the low-frequency range to use the Cole-Cole model for IP computation. It could also be used in the high-frequency range to model dielectricity.

## Zero horizontal offset

By default, empymod enforces a minimum horizontal offset of 1 mm. The reason for this lies in the Hankel transform. The digital linear filter method computes the required wavenumbers via

$$\lambda = b_n/r \qquad (1)$$

where $$b_n$$ are the base values of the filter and $$r$$ is the horizontal offset. It can be seen from Equation (1) that this breaks down for a zero horizontal offset (something similar applies to the QWE Hankel transform method). However, the quadrature method for the Hankel transform as well as the analytical solutions do not have this limitation, and both can be used to compute actual zero-horizontal-offset responses. One can set the minimum (horizontal) offset to zero (or any other value) by running

empymod.set_minimum(min_off=0)

So if you have to compute actual zero-horizontal-offset data, you have to use the quadrature method (ht='quad'). However, be aware that this method is usually significantly slower than the DLF method, and needs careful adjustment of the htarg parameters depending on the model and the survey layout. There probably exist clever workarounds for this limitation of the DLF. However, depending on the source-receiver configuration, a minimum offset of one to ten millimetres is generally enough to give a sufficiently precise approximation of the actual zero-offset response, at least for practical purposes.

Here is a script that computes the responses for all possible source-receiver configurations for a fullspace, comparing the analytical space-frequency domain solution with the solutions using the quadrature and the DLF for the Hankel transform. The analytical solution and the quadrature transform compute the zero offset explicitly; the DLF transform has a minimum offset of 1 mm. You can adjust it to your model and survey layout.
```python
import empymod
import numpy as np
import matplotlib.pyplot as plt

xy = np.arange(1001.)/500 - 1          # x = y coordinates
off = np.sign(xy)*np.sqrt(2*xy**2)     # Offset

res = 1    # Fullspace resistivity
zoff = 1   # Vertical distance
freq = 1   # Frequency

# Collect input
inp = {'src': [0, 0, 0], 'rec': [xy, xy, zoff], 'depth': [],
       'res': res, 'freqtime': freq, 'verb': 2}

pab = [11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 25, 26,
       31, 32, 33, 34, 35, 36, 41, 42, 43, 44, 45, 46,
       51, 52, 53, 54, 55, 56, 61, 62, 63, 64, 65, 66]

# Loop over all source-receiver combinations
for ab in pab:

    # Enforce minimum offset for the DLF
    empymod.set_minimum(min_off=1e-3)

    print(' --- DLF ---')
    num = empymod.dipole(ab=ab, xdirect=False, htarg={'pts_per_dec': 0}, **inp)

    # Remove minimum offset and compute with the quadrature method
    empymod.set_minimum(min_off=0)
    qua = empymod.dipole(
        ab=ab, xdirect=False, ht='quad',
        htarg={'a': 1e-3, 'b': 5e1, 'rtol': 1e-4, 'pts_per_dec': 100}, **inp)

    print(' --- Analytical ---')
    ana = empymod.dipole(ab=ab, xdirect=True, **inp)

    # Plot the result
    plt.figure(num=ab)
    plt.suptitle(f"ab = {ab}")

    ax1 = plt.subplot(221)
    plt.title('Real')
    plt.ylabel('E-field (V/m)')
    plt.plot(off, ana.real, 'k-')
    plt.plot(off, qua.real, 'C0--')
    plt.plot(off, num.real, 'C1-.')
    plt.xticks([-1, -0.5, 0, 0.5, 1], ())

    ax3 = plt.subplot(223)
    plt.xlabel('Offset (m)')
    plt.ylabel('Rel. Error (%)')
    plt.plot(off, 100*abs((qua.real-ana.real)/ana.real), 'C0--')
    plt.plot(off, 100*abs((num.real-ana.real)/ana.real), 'C1-.')
    plt.yscale('log')

    ax2 = plt.subplot(222, sharey=ax1)
    plt.title('Imag')
    plt.plot(off, ana.imag, 'k-', label='analytical')
    plt.plot(off, qua.imag, 'C0--', label='QUAD')
    plt.plot(off, num.imag, 'C1-.', label='DLF')
    ax2.yaxis.set_label_position("right")
    ax2.yaxis.tick_right()
    plt.xticks([-1, -0.5, 0, 0.5, 1], ())
    plt.legend()

    ax4 = plt.subplot(224, sharey=ax3)
    plt.xlabel('Offset (m)')
    plt.plot(off, 100*abs((qua.imag-ana.imag)/ana.imag), 'C0--')
    plt.plot(off, 100*abs((num.imag-ana.imag)/ana.imag), 'C1-.')
    ax4.yaxis.set_label_position("right")
    ax4.yaxis.tick_right()
    plt.yscale('log')

    plt.tight_layout()
    plt.show()
```

The result for an x-directed source and receiver (ab=11) is shown in the following figure:

Figure: Comparison for zero-offset computation. The DLF has a minimum horizontal offset of 1 mm in this example; the other two methods have an actual zero horizontal offset.
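Returning to the $$\eta$$/$$\zeta$$ hook described above, here is a minimal, self-contained sketch of the calling convention. It is illustrative only: the three-layer model, the extra parameter tau, and my_new_eta are all made up, and it assumes that p_dict exposes freq, in line with the [locals()] note above.

```python
import numpy as np
import empymod


def my_new_eta(inp, p_dict):
    """Purely illustrative eta-hook (not a physical model).

    Returns eta as plain conductivity scaled by the made-up per-layer
    parameter `tau`; a realistic hook (e.g. Cole-Cole for IP) would add
    frequency dependence and the displacement-current term.
    """
    freq = np.atleast_1d(p_dict['freq'])        # assumed to be in p_dict
    cond = (1 + inp['tau'])/inp['res']          # (nlayers,)
    etaH = np.outer(np.ones(freq.size), cond)   # (nfreq, nlayers)
    return etaH, etaH                           # isotropic: etaV == etaH


# Made-up three-layer model: air, background, resistive target.
res = np.array([2e14, 1.0, 100.0])   # resistivities (Ohm m)
tau = np.array([0.0, 0.5, 0.5])      # hypothetical extra parameter, one per layer

resp = empymod.dipole(
    src=[0, 0, 1], rec=[1000, 0, 2], depth=[0, 500],
    res={'res': res, 'tau': tau, 'func_eta': my_new_eta},
    freqtime=1.0, verb=1,
)
```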
# Effects of different social experiences on emotional state in mice

## Abstract

A comprehensive understanding of animals’ emotions can be achieved by combining cognitive, behavioural, and physiological measures. Applying such a multi-method approach, we here examined the emotional state of mice after they had made one of three different social experiences: either a mildly “adverse”, a “beneficial”, or a “neutral” experience. Using a recently established touchscreen paradigm, cognitive judgement bias was assessed twice, once before and once after the respective experience. Anxiety-like behaviour was examined using a standardised battery of behavioural tests, and faecal corticosterone metabolite concentrations were measured. Surprisingly, only minor effects of the social experiences on the animals’ cognitive judgement bias and no effects on anxiety-like behaviour and corticosterone metabolite levels were found. It might be speculated that the experiences provided were not strong enough to exert the expected impact on the animals’ emotional state. Alternatively, the intensive training procedure necessary for cognitive judgement bias testing might have had a cognitive enrichment effect, potentially countering external influences. While further investigations are required to ascertain the specific causes underlying our findings, the present study adds essential empirical data to the so far scarce body of studies combining cognitive, behavioural, and physiological measures of emotional state in mice.

## Introduction

The assessment of emotional states in non-human animals (hereafter: animals) is of major importance for multiple research fields, including for example animal welfare science, psychopharmacology and behavioural neuroscience1,2. However, finding valid and objective measures of animals’ emotions can be challenging3. In practice, scientists traditionally rely on physiological as well as behavioural indicators of affective state, often used alongside each other. Physiological indicators commonly include parameters related to the study of stress, for instance heart rate or stress hormone concentrations4. While these reliably reflect arousal states, they are considered to be unsuitable for discriminating between states of differential emotional valence4,5. The additional assessment of behavioural parameters facilitates a more comprehensive understanding of animals’ emotional states. For instance, facial and vocal expressions as well as spontaneous behaviours (e.g. approach and avoidance or play behaviour) can be assessed, allowing for the interpretation of emotional valence4. Adding to this, standardised behavioural test batteries are commonly applied to assess fear and anxiety-like behaviour, especially in disciplines like neuroscience or psychopharmacology4,6.

Over the last fifteen years, a novel approach has gained increasing importance, targeting the cognitive component of emotion via so-called cognitive biases4,6,7,8. The cognitive bias concept derives from human psychology and is based on the phenomenon that emotions can influence cognitive processes9.
For example, individuals in a positive affective state tend to interpret ambiguous stimuli in a more “optimistic” way compared to individuals in a negative state9,10. This so-called cognitive judgement bias can serve as a proxy measure of the valence of affective states, also in animals4,6,8. In a seminal study, Harding and colleagues introduced an experimental paradigm to systematically assess cognitive judgement bias in rats7. Inspired by their work, judgement bias tests have been developed for a multitude of different species11,12,13,14,15. The majority of studies across species reports mood-congruent judgement biases. Thus, animals in a negative (e.g. anxiety-like) state generally display “pessimistic” judgement biases, animals in a positive affective state (e.g. induced via environmental enrichment) “optimistic” ones16,17,18,19 (but see also20,21,22).

Although mice are the predominantly used mammalian animal model23,24, only little is known about factors associated with variations in judgement bias in this species. So far, stereotypic behaviour, considered to reflect a negative affective state, has been linked to differences in judgement bias25,26. Furthermore, studies investigating different strains of mice indicate the potential involvement of a genetic component14,25,27 (but see also28). Most interestingly for the framework of this study, stressful experiences have been discussed as potential modulators of judgement bias in mice, yet again, evidence remains unclear26. Thus, the modulation of judgement biases in mice is far from being understood. Moreover, the focus has been put on the investigation of negative affective states, while effects of putatively positive experiences remain understudied (but see also29).

In the present study, we therefore aimed to investigate the influence of both a positive and a negative affect manipulation on the cognitive judgement bias of male laboratory mice. In contrast to previous studies that have used rather artificial treatments, we aimed to provide treatments of high ecological relevance. For this purpose, social experiences of sexual as well as agonistic nature were chosen. As a mildly “adverse” experience, one group of animals was repeatedly confronted with a dominant male opponent. Losing such an aggressive confrontation has been shown to increase anxiety-like behaviour in rodents30, and a study in rats even revealed an influence of social defeat on judgement bias17. As a putatively “beneficial” experience, we presented another group of mice with freshly collected female urine. The presentation of female urinary pheromones can induce positive affect in male mice, as it reduces anxiety-like behaviour31 and aggression32. Sniffing female urine has further been shown to trigger male ultrasonic courtship vocalisations33, which are suggested to reflect positive affect34,35.

We assessed cognitive judgement bias twice, once before and once after mice had made the respective social experience, using a recently implemented touchscreen paradigm36. We expected a mood-congruent shift in judgement bias after the experience phase. To cover not only cognitive, but also physiological and behavioural measures of emotional states, we additionally assessed faecal corticosterone metabolite concentrations, reflecting hypothalamic-pituitary-adrenal axis activity, and anxiety-like behaviour in a battery of standardised tests.
With this multi-method approach we intended to gain a comprehensive picture of the impact of different social experiences on the emotional state of mice.

## Animals and methods

### Animals and housing conditions

The present study was conducted with 24 male C57BL/6J mice, purchased from a professional breeder (Charles River Laboratories, Research Models and Services, Germany GmbH, Sulzfeld, Germany) at the age of five weeks. Upon arrival, mice were housed in same-sex groups of 3 individuals per cage (Makrolon cages type III, 38 × 23 × 15 cm³), since in sub-adult male mice the occurrence of escalated aggression is very unlikely. However, with the males becoming adult, the probability of escalated agonistic encounters increases. Therefore, at the age of nine weeks, mice were transferred to single housing conditions to avoid any escalated aggressive interactions. Please note that the question whether to house male laboratory mice singly or in groups is under ongoing discussion and there is still no “gold standard” regarding its solution. For current discussions about recommendations for male mouse housing see37,38.

Cages were equipped with wood chips as bedding material (TierWohl Super, J. Rettenmaier & Söhne GmbH + Co. KG, Rosenberg, Germany), a wooden stick, a semi-transparent red plastic house (11.1 × 11.1 × 5.5 cm³, Tecniplast Deutschland GmbH, Hohenpeißenberg, Germany), and a paper tissue. Housing rooms were maintained at a reversed 12 h dark/light cycle with lights off at 8 a.m., a temperature of approximately 23 °C, and a relative humidity of about 50%. The animals had ad libitum access to water and food (Altromin 1324, Altromin Spezialfutter GmbH & Co. KG, Lage, Germany) until the beginning of the touchscreen training phase. From then on, they were mildly food restricted to 90–95% of their ad libitum feeding weights in order to enhance their motivation to work for food rewards. As neither distinct negative effects of such a restricted feeding protocol39, nor an interference with judgement bias assessment17,18 could be detected in previous studies, we considered this method not to affect the emotional state of the mice itself. Weights were monitored on a daily basis using a digital scale (weighing capacity: 150 g, resolution: 0.1 g; CM 150-1 N, Kern, Balingen, Germany).

In addition to the experimental animals, 16 group-housed adult female C57BL/6J mice and 5 single-housed adult male NMRI mice, purchased from Charles River Laboratories, were used to provide the test animals with social experiences.

### Ethics statement

All procedures complied with the regulations covering animal experimentation within Germany (Animal Welfare Act), the EU (European Communities Council DIRECTIVE 2010/63/EU), and the fundamental principles of the Basel Declaration, and were approved by the local (Gesundheits- und Veterinäramt Münster, Nordrhein-Westfalen) and federal authorities (Landesamt für Natur, Umwelt und Verbraucherschutz Nordrhein-Westfalen “LANUV NRW”, reference number 84-02.04.2015.A441).

### Experimental design

In this study, the effects of different social experiences on important correlates of animal emotions, comprising cognitive (judgement bias), behavioural (anxiety-like and exploratory behaviour) as well as physiological (stress hormone levels) measures, were investigated. The experiment comprised six phases: a handling phase, a training phase, a first cognitive judgement bias (CJB) test phase, an experience phase, a second CJB test phase, and a behavioural test phase (Fig. 1).
During the handling phase starting at PND 69, mice were first habituated to cup handling for 5 days and thereafter underwent daily training sessions to learn the discrimination task required for CJB testing, starting at PND 76. Afterwards, the animals’ initial CJB was assessed (start of test phase 1: PND 223 ± 77; for details on CJB training and testing see the following section). During a subsequent experience phase starting at PND 230 ± 77, mice were exposed to one of three different experiences, each comprising three group-specific encounters, classified as either mildly “adverse”, “beneficial”, or “neutral”. Encounters took place under red light between 2:45 p.m. and 4:35 p.m. on 3 different days, always separated by a gap day.

The mildly “adverse experience” group (AE group, n = 8) repeatedly encountered a dominant opponent of the aggressive NMRI strain40, with each confrontation lasting a maximum of 10 minutes30,41. Confrontations were terminated in cases of high aggression. The “beneficial experience” group (BE group, n = 8) was repeatedly presented with freshly collected urine of an unfamiliar C57BL/6J female for 10 minutes31. To provide all subjects with comparable experiences, we controlled for the females’ oestrus state. Since the time of oestrus in mice is relatively short42, urine from non-oestrous females was used in order to keep the total number of involved females low. The “neutral experience” group (NE group, n = 8) served as a control group and was repeatedly placed into a novel cage containing clean bedding material for 10 min.

Following the experience phase, CJB was assessed again to investigate the influence of the respective experience on the animals’ judgement bias (start of test phase 2: PND 237 ± 77). In this second test phase, a so-called reminder was presented immediately before each test session. These reminders were introduced to acutely re-evoke the affective state the mice had experienced during the encounters of the treatment phase. For this purpose, mice were placed into a cage (Makrolon type II cage; 22 × 16 × 14 cm³) filled with bedding for 3 min. For AE mice, an additional 25 ml of soiled bedding from the home cage of the last NMRI male encountered were added. For BE mice, the same was done with soiled bedding from the home cage of the last female whose urine had been presented.

On the last day of each CJB test phase, faeces samples were obtained to assess faecal corticosterone metabolite (FCM) concentrations. Finally, animals underwent a battery of standard behavioural tests for anxiety-like behaviour and exploratory locomotion (elevated plus maze test (EPM), dark-light test (DL), and open field test (OF); start: PND 245 ± 77). Before each test session, a reminder was presented again.

The allocation of mice to the treatment groups was pseudo-randomised, so that balanced numbers of mice with different learning speeds were present in each group. The testing order of mice was randomised once before the first CJB test and subsequently maintained for the following CJB and behavioural test sessions. As reminders were provided immediately before CJB testing as well as before the subsequent behavioural tests, blinding of the experimenter was not possible.

### The touchscreen-based cognitive judgement bias test

#### Procedure

The same apparatus as described previously was used28,36 (Bussey-Saksida Mouse Touch Screen Chambers, Model 80614, Campden Instruments Ltd., Loughborough, Leics., UK).
Mice underwent daily touchscreen sessions at intervals of approximately 24 h on up to 6 consecutive days. Before each session, each mouse was taken out of its home cage and weighed. In a red semi-transparent box (21 × 21 × 15 cm³) the animal was then transported to a separate room, where it was placed into the touchscreen chamber. A session ended after a maximum number of trials had been performed or after a training-step-specific duration had elapsed. All touchscreen sessions were conducted during the dark phase, between 8:15 a.m. and 1 p.m.

The paradigm applied here was the same as described previously, with minor modifications36. Briefly, mice were trained to distinguish between a positive and a negative condition (Fig. 2). The positive condition was signalled by a bar at the bottom of the cue-presentation field (5 cm below the upper edge), the negative condition by a bar at the top (1 cm below the upper edge). Mice had to touch either the left or the right touch field in response to the cues. A correct touch in the positive condition led to the delivery of a large reward (12 μl of sweet condensed milk, diluted 1:4 in tap water, in the following “SCM”). An incorrect touch resulted in the delivery of a small reward (4 μl of SCM). In the negative condition, correct touches led to the delivery of a small reward (4 μl of SCM), while incorrect touches resulted in a mild “punishment” (5 s time-out and houselight on). Mice had to learn to touch the high-rewarded side in the positive condition and the small-rewarded side in the negative condition. The small-rewarded touch field was the same in both conditions. The association between condition and correct touch side was the same for each individual but counterbalanced between mice. For a detailed description of the training procedure please see the supplementary material.

After successful training, animals underwent CJB testing. The two cognitive bias test phases took place on five consecutive days each. During each CJB test session, three types of ambiguous cues, interspersed with the learned reference cues, were presented. These were bars at three intermediate positions: near positive (NP, 4 cm below the upper edge), middle (M, 3 cm below the upper edge) and near negative (NN, 2 cm below the upper edge). Touches in response to these ambiguous cues resulted in a neutral outcome (neither a reward nor a “punishment”). The animals’ judgements made in response to these cues indicated whether they interpreted them according to the positive (“optimistic” response) or negative (“pessimistic” response) reference cue, serving as a measure of CJB. Each test session comprised 54 trials. Per session, each type of ambiguous cue was presented twice, interspersed with 48 training trials. Per test phase, each mouse was presented with each ambiguous cue ten times and each trained cue 120 times.

#### Behavioural measures

Responses to ambiguous cues served as a measure of the animals’ CJB. Touches according to the positive condition were defined as “optimistic” choices, touches according to the negative condition as “pessimistic” choices. Out of all responses per condition, a “choice score” was calculated as previously28,36, according to the following formula:

$$\text{Choice Score} = \frac{N_{\text{choices}}(\text{“optimistic”}) - N_{\text{choices}}(\text{“pessimistic”})}{N_{\text{choices}}(\text{“optimistic”}) + N_{\text{choices}}(\text{“pessimistic”})}$$

The choice score could range from −1 to +1; for example, seven “optimistic” and three “pessimistic” responses to a given cue yield a choice score of (7 − 3)/(7 + 3) = 0.4.
Higher scores indicated a higher proportion of “optimistic” choices and consequently a relatively more positive CJB compared to lower scores. Please note that choice scores are not an absolute, but a relative measure of CJB, and that the term was chosen for the sake of intuitiveness.

### Anxiety-like behaviour and exploratory locomotion

Mice were tested in three tests of anxiety-like behaviour and exploratory locomotion in the following order: the elevated plus-maze test (EPM), the dark-light test (DL) and the open field test (OF). The sequence of tests followed recommendations to schedule tests that are more sensitive to previous experience at the beginning of such a battery, and to conduct potentially more stressful tests towards the end43,44. Tests were carried out at intervals of at least 48 h and were performed in a room different from the housing room, between 12:45 p.m. and 3:35 p.m. Test equipment was cleaned with 70% ethanol between subjects. Behaviour was recorded with a webcam (Logitech Webcam Pro 9000), and the animals’ movements during the EPM and OF were automatically analysed by the video tracking system ANY-maze (ANY-maze version 4.99, Stoelting Co., Wood Dale, IL, USA). Videos of the DL were analysed manually by an experienced observer (Sophie Siestrup). For apparatus descriptions and details about the testing procedures see the supplementary material.

### Faecal corticosterone metabolites

The basal levels of adrenocortical activity of the subjects were monitored non-invasively by measuring faecal corticosterone metabolites (FCMs)45,46,47. Faeces samples of each individual were collected on the last day of the first CJB test week (= before the experience phase) and on the last day of the second CJB test week (= after the experience phase). During the dark phase, a peak of FCMs can be found in the faeces 4–6 h after exposure to a stressor45. For this reason, faeces samples were collected 5.5–8.5 h after an individual finished CJB testing, to ensure that faeces collection could be finished within the dark phase. For sample collection, mice were placed in Makrolon cages type III with a thin layer of bedding material and clean enrichment items as present in the home cage. Water was available ad libitum. After the sampling period of 3 h, mice were transferred to novel clean cages together with the enrichment items. All faeces produced during this time were collected and frozen at −20 °C. Faecal samples were dried and homogenised, and aliquots of 0.05 g were extracted with 1 ml of 80% methanol. Samples were then analysed using a 5α-pregnane-3β,11β,21-triol-20-one enzyme immunoassay (for details see45,46). Intra- and inter-assay coefficients of variation were < 10% and < 12%, respectively.

### Data analysis

To check for the assumptions of parametric analysis, residuals of all data were analysed for heteroscedasticity and normal distribution graphically and using the Shapiro-Wilk normality test. If the assumptions were not met, data were transformed whenever possible (DL: latency to enter the light compartment, logarithmic transformation). As CJB test data did not meet the assumptions of parametric analysis even after transformation, untransformed data were analysed using non-parametric tests. Data from behavioural tests were analysed using a linear mixed-effects model (LMM) with “experience” as fixed factor and “age” as random factor, followed by Holm-Bonferroni post hoc comparisons.
Faecal corticosterone metabolite data were analysed using an LMM with “experience” and “time” as fixed factors, and “age” and “individual” as random factors. In order to examine whether mice interpreted the conditions of the CJB test differently, data were pooled across animals for each condition and each test phase and analysed using the Friedman test. Post hoc comparisons between conditions were conducted using the Holm-Bonferroni-corrected Wilcoxon signed-rank test. The Wilcoxon signed-rank test was also used for within-group comparisons of choice scores before and after the experience phase. The Kruskal-Wallis test was used for between-group comparisons of choice scores. Subsequent post hoc comparisons were carried out using the Holm-Bonferroni-corrected Wilcoxon rank-sum test (unpaired). Differences were considered significant at p ≤ 0.05. Whenever LMMs were used, effect sizes were calculated in addition to p-values as partial eta squared (η²p). Statistical analyses were performed using the software R48 (www.r-project.org, open source). Graphs were created using the software SigmaPlot for Windows (Version 12.5, Build 12.5.0.38, Systat Software, Inc. 2011).

## Results

### Cognitive judgement bias

During both cognitive judgement bias test phases, mice interpreted the five conditions significantly differently, as revealed by the analysis of choice scores pooled across groups (Friedman test, before experience phase: χ²(4) = 80.1, p < 0.001, after experience phase: χ²(4) = 6.88, p < 0.001; for post hoc comparisons see supplementary Fig. 1 and supplementary Table 2). Descriptively, choice scores of each group of mice resulted in response curves with the highest scores in the positive and near positive conditions, the lowest in the near negative and negative conditions, and intermediate scores in the middle condition (Fig. 3).

In order to detect potential shifts in the animals’ choice scores in response to the experiences, scores before and after the experience phase were compared within each group of mice using the Wilcoxon signed-rank test (for statistical parameters of all within-group comparisons see Table 1). In both the AE and the NE group, no differences between choice scores before and after the experience phase could be detected in any of the five conditions. Only BE mice displayed significantly lower choice scores in the middle as well as in the negative condition after the treatment phase (Wilcoxon signed-rank test, middle condition: V = 33, p = 0.04, negative condition: V = 32, p = 0.05).

To detect potential differences between the three treatment groups, choice scores in response to each condition were compared between AE, BE and NE mice using the Kruskal-Wallis test (for statistical parameters of all between-group comparisons see Table 1). Before the experience phase, there was a trend towards a difference in choice scores between the three groups in the positive condition (Kruskal-Wallis test, χ²(2) = 5.55, p = 0.06). Descriptively, NE mice displayed lower scores compared to AE and BE mice; however, no statistically significant pairwise differences could be detected based on post hoc comparisons (Wilcoxon rank-sum test, AE vs. BE: W = 36.5, p = 0.66; AE vs. NE: W = 46.5, p = 0.14; BE vs. NE: W = 54, p = 0.02; please note that using the Holm-Bonferroni correction for three pairwise comparisons, the smallest of the 3 p-values has to be ≤ 0.017 for an effect to be significant at the 0.05 level).
Regarding all remaining conditions, we did not detect any significant differences between the three groups before the experience phase. After the experience phase, a significant difference could be detected within the near positive condition (Kruskal-Wallis test, χ²(2) = 6.88, p = 0.03). Descriptively, NE mice displayed lower choice scores compared to both other groups; however, no significant pairwise differences could be detected based on post hoc comparisons (Wilcoxon rank-sum test, AE vs. BE: W = 32, p = 0.84, AE vs. NE: W = 15, p = 0.06, BE vs. NE: W = 11, p = 0.02; please note that using the Holm-Bonferroni correction for three pairwise comparisons, the smallest of the 3 p-values has to be ≤ 0.017 for an effect to be significant at the 0.05 level). Regarding all remaining conditions, again, no significant differences between the three groups were detected.

### Anxiety-like and exploratory behaviour

Anxiety-like and exploratory behaviour were assessed using the elevated plus-maze test (EPM), dark-light test (DL) and open field test (OF). Table 2 gives an overview of the statistical parameters of the analysis. We did not detect significant main effects of experience on the parameters reflecting anxiety-like behaviour in the EPM, DL, and OF (for statistical details see Table 2, Fig. 4). Similarly, no significant main effects of experience on the parameters reflecting exploratory locomotion could be detected in the EPM and OF (for statistical details see Table 2, Fig. 4). However, in the DL, there was a significant main effect of experience on the number of entries the mice made into the light compartment of the apparatus (F(2,17.81) = 3.73, p = 0.04, η²p = 0.23; Fig. 4D). Descriptively, NE mice entered the light compartment more often than AE and BE mice, but pairwise differences were not statistically significant (Holm-Bonferroni post hoc comparison, NE vs. AE: p = 0.09; NE vs. BE: p = 0.08; AE vs. BE: p = 0.84).

### Faecal corticosterone metabolite concentrations

We detected neither a significant main effect of experience (F(2,18.78) = 0.4, p = 0.72, η²p = 0.04) nor of time point (F(1,20.17) = 0.06, p = 0.81, η²p < 0.01) on corticosterone metabolite concentrations. Likewise, no significant experience × time interaction could be found (LMM, F(2,20.15) = 1.33, p = 0.29, η²p = 0.12; Fig. 5).

## Discussion

Combining physiological, behavioural and cognitive correlates of emotional states is currently considered to be the most promising way to comprehensively assess emotional states of animals2,4. Applying such a multi-method approach, we here examined the effects of a putatively mildly “adverse” and a putatively “beneficial” experience on the emotional state of mice. Overall, only minor effects of the experiences on the animals’ choice scores and no effects on their anxiety-like behaviour and faecal corticosterone metabolite concentrations were found.

In the cognitive bias test, choice scores in response to the five conditions resulted in a curve that is typical for judgement bias tests across species, e.g.14. This result is consistent with previous studies and confirms the general applicability of the touchscreen-based cognitive judgement bias paradigm14,28,36,49. Furthermore, no significant between-group differences were found. Likewise, no significant differences between choice scores before and after the experience phase could be detected in AE and NE mice.
Yet, in BE mice, choice scores towards the middle condition significantly decreased after the experience phase, hinting at a pessimistic-like shift in judgement bias. However, we also found a significant decrease in the choice scores of this group in the negative condition, revealing a general negative shift of the animals’ response curve. This suggests that the animals’ choices in response to the ambiguous conditions do not solely reflect their judgement bias, but may additionally be influenced by other factors, such as learning accuracy or perceived reward value26,50. Consequently, the difference in choice scores towards the middle condition found in BE mice should be interpreted with caution. Thus, in summary, we did not detect clear effects of the different social experiences on the animals’ cognitive judgement bias in the present study.

While equivocal findings are not an exception in the field of cognitive bias research in mice26,29, the present results still deviate from our expectations based on previous studies reporting effects of the same experiences as provided here on the emotional state of mice30,31,41. Interestingly, however, we also did not detect effects of the social experiences on anxiety-like behaviour, exploratory locomotion and corticosterone metabolite levels. Thus, not only cognitive, but also behavioural and endocrine proxy measures of emotional state obtained in this study point in a similar direction.

In search of a reasonable explanation for these findings, the social experiences provided require closer consideration. Regarding the efficacy of the putatively “beneficial” social experience, i.e. the repeated presentation of female urine, the oestrus state of the females might have influenced the results. Here, we provided urine of non-oestrous females. However, urine from females in oestrus, or even direct contact with an oestrous female, might have enhanced the efficacy of the treatment due to a higher ecological relevance for the subjects. Concerning the mildly “adverse” experience, we provided three confrontations with a dominant male opponent. This experience was chosen since it has been shown in previous studies in mice to lead to increased levels of anxiety-like behaviour and lower levels of exploratory locomotion30, as well as an elevation of faecal corticosterone metabolite concentrations41. Yet, the procedure applied here differs from that of a study in rats: Papciak and colleagues17 applied chronic social defeat in the form of daily confrontations over the course of three weeks, which caused a negative shift in judgement bias. In comparison to such a chronic stress paradigm, the “adverse” experience applied here was considerably milder, and therefore potentially less effective at inducing a negative emotional state. Thus, it would be interesting to investigate potentially more effective emotion-manipulating treatments in future studies.

Beyond a possibly reduced efficacy of the experiences, there could also be an alternative explanation for the findings of this study: a potential influence of the intensive touchscreen training phase, which is required as a prerequisite for the cognitive judgement bias test. Indeed, the use of touchscreen paradigms for rodents, as well as discrimination training alone, has been proposed to act as cognitive enrichment51,52. This assumption finds recent support from a study conducted in our lab.
Heterozygous serotonin transporter knockout mice showed a decrease in anxiety-like behaviour after cognitive bias testing using the touchscreen method, suggesting a beneficial influence of this procedure28. Moreover, it has been argued that enrichment-like properties of training procedures can potentially mask the influence of other, especially negative, experiences8,51,53. Therefore, touchscreen training in the present study might have had a positive influence on the animals’ emotional state, and thus might have buffered the impact of the social experiences, particularly the mildly “adverse” social confrontations. Arguing in favour of this hypothesis, it could incidentally be observed by the experimenter that AE mice showed offensive aggressive behaviours during confrontations with an opponent, something that has rarely been observed previously during social defeat paradigms. Yet, this novel hypothesis remains to be thoroughly investigated in the future, especially considering the use of appropriate control groups. In summary, the present study adds essential empirical data to the so far scarce amount of studies investigating the effects of ecologically relevant emotion manipulating treatments on a set of cognitive, behavioural, and physiological measures of emotional state in mice. Since no clear effects of the treatments could be detected here, further research in this field is required to elucidate the specific effects of the applied experiences, as well as the applicability of the cognitive judgement bias paradigm. Furthermore, the present findings led to a novel hypothesis: touchscreen training might exert a pronounced and presumably positive effect on the animals’ emotional state. This assumption deserves closer attention in future studies and is currently under systematic investigation in our lab. ## References 1. 1. Boissy, A. et al. Assessment of positive emotions in animals to improve their welfare. Physiol. Behav. 92(3), 375–397. https://doi.org/10.1016/j.physbeh.2007.02.003 (2007). 2. 2. Mendl, M., Burman, O. H. P. & Paul, E. S. An integrative and functional framework for the study of animal emotion and mood. Proc. R. Soc. B Biol. Sci. 277(1696), 2895–2904. https://doi.org/10.1098/rspb.2010.0303 (2010). 3. 3. De Waal, F. B. M. What is an animal emotion?. Ann. NY Acad. Sci. 1224, 191–206. https://doi.org/10.1111/j.1749-6632.2010.05912.x (2011). 4. 4. Paul, E. S., Harding, E. J. & Mendl, M. Measuring emotional processes in animals: the utility of a cognitive approach. Neurosci. Biobehav. Rev. 29(3), 469–491. https://doi.org/10.1016/j.neubiorev.2005.01.002 (2005). 5. 5. Koolhaas, J. M. et al. Stress revisited: a critical evaluation of the stress concept. Neurosci. Biobehav. Rev. 35(5), 1291–1301. https://doi.org/10.1016/j.neubiorev.2011.02.003 (2011). 6. 6. Mendl, M., Burman, O. H., Parker, R. M. & Paul, E. S. Cognitive bias as an indicator of animal emotion and welfare. Emerging evidence and underlying mechanisms. Appl. Anim. Behav. Sci. 118(3–4), 161–181. https://doi.org/10.1016/j.applanim.2009.02.023 (2009). 7. 7. Harding, E. J., Paul, E. S. & Mendl, M. Cognitive bias and affective state. Nature 427, 6972. https://doi.org/10.1038/427312a (2004). 8. 8. Roelofs, S., Boleij, H., Nordquist, R. E. & van der Staay, F. J. Making decisions under ambiguity: judgment bias tasks for assessing emotional state in animals. Front. Behav. Neurosci. 10, 119. https://doi.org/10.3389/fnbeh.2016.00119 (2016). 9. 9. Mathews, A. & MacLeod, C. Cognitive approaches to emotion and emotional disorders. 
Annu. Rev. Psychol. 45(1), 25–50 (1994). 10. 10. Mathews, A. & MacLeod, C. Cognitive vulnerability to emotional disorders. Annu. Rev. Clin. Psychol. 1, 167–195. https://doi.org/10.1146/annurev.clinpsy.1.102803.143916 (2005). 11. 11. Matheson, S. M., Asher, L. & Bateson, M. Larger, enriched cages are associated with ‘optimistic’ response biases in captive European starlings (Sturnus vulgaris). Appl. Anim. Behav. Sci. 109(2–4), 374–383. https://doi.org/10.1016/j.applanim.2007.03.007 (2008). 12. 12. Enkel, T. et al. Ambiguous-cue interpretation is biased under stress- and depression-like states in rats. Neuropsychopharmacology 35(4), 1008–1015. https://doi.org/10.1038/npp.2009.204 (2010). 13. 13. Jones, S. et al. Assessing animal affect: an automated and self-initiated judgement bias task based on natural investigative behaviour. Sci. Rep. 8(1), 12400 (2018). 14. 14. Hintze, S. et al. A cross-species judgement bias task: integrating active trial initiation into a spatial Go/No-go task. Sci. Rep. 8(1), 5104. https://doi.org/10.1038/s41598-018-23459-3 (2018). 15. 15. Bethell, E. J. A “how-to” guide for designing judgment bias studies to assess captive animal welfare. Appl. Anim. Welf. Sci. 18(sup1), 18–42. https://doi.org/10.1080/10888705.2015.1075833 (2015). 16. 16. Brydges, N. M., Leach, M., Nicol, K., Wright, R. & Bateson, M. Environmental enrichment induces optimistic cognitive bias in rats. Anim. Behav. 81(1), 169–175. https://doi.org/10.1016/j.anbehav.2010.09.030 (2011). 17. 17. Papciak, J., Popik, P., Fuchs, E. & Rygula, R. Chronic psychosocial stress makes rats more “pessimistic” in the ambiguous-cue interpretation paradigm. Behav. Brain Res. 256, 305–310. https://doi.org/10.1016/j.bbr.2013.08.036 (2013). 18. 18. Richter, S. H. et al. A glass full of optimism: enrichment effects on cognitive bias in a rat model of depression. CABN 12(3), 527–542. https://doi.org/10.3758/s13415-012-0101-2 (2012). 19. 19. Salmeto, A. L. et al. Cognitive bias in the chick anxiety-depression model. Brain Res. 1373, 124–130. https://doi.org/10.1016/j.brainres.2010.12.007 (2011). 20. 20. Bethell, E. J. & Koyama, N. F. Happy hamsters? Enrichment induces positive judgement bias for mildly (but not truly) ambiguous cues to reward and punishment in Mesocricetus auratus. R. Soc. Open Sci. 2(7), 140399. https://doi.org/10.1098/rsos.140399 (2015). 21. 21. Brydges, N. M., Hall, L., Nicolson, R., Holmes, M. C. & Hall, J. The effects of juvenile stress on anxiety, cognitive bias and decision making in adulthood: a rat model. PLoS ONE 7(10), e48143. https://doi.org/10.1371/journal.pone.0048143 (2012). 22. 22. Destrez, A., Deiss, V., Leterrier, C., Calandreau, L. & Boissy, A. Repeated exposure to positive events induces optimistic-like judgment and enhances fearfulness in chronically stressed sheep. Appl. Anim. Behav. Sci. 154, 30–38. https://doi.org/10.1016/j.applanim.2014.01.005 (2014). 23. 23. Malakoff, D. The rise of the mouse, biomedicine’s model mammal. Science 288(5464), 248–253. https://doi.org/10.1126/science.288.5464.248 (2000). 24. 24. Rosenthal, N. & Brown, S. The mouse ascending: perspectives for human-disease models. Nat. Cell Biol. 9(9), 993. https://doi.org/10.1038/ncb437 (2007). 25. 25. Novak, J., Bailoo, J. D., Melotti, L. & Würbel, H. Effect of cage-induced stereotypies on measures of affective state and recurrent perseveration in CD-1 and C57BL/6 mice. PLoS ONE 11, 5. https://doi.org/10.1371/journal.pone.0153203 (2016). 26. 26. Novak, J. et al. 
Effects of stereotypic behaviour and chronic mild stress on judgement bias in laboratory mice. Appl. Anim. Behav. Sci. 174, 162–172. https://doi.org/10.1016/j.applanim.2015.10.004 (2016). 27. 27. Kloke, V. et al. Hope for the best or prepare for the worst? Towards a spatial cognitive bias test for mice. PLoS ONE 9(8), e105431. https://doi.org/10.1371/journal.pone.0105431 (2014). 28. 28. Krakenberg, V., von Kortzfleisch, V. T., Kaiser, S., Sachser, N. & Richter, S. H. Differential effects of serotonin transporter genotype on anxiety-like behavior and cognitive judgment bias in mice. Front. Behav. Neurosci. 13, 263. https://doi.org/10.3389/fnbeh.2019.00263 (2019). 29. 29. Bailoo, J. D. et al. Effects of cage enrichment on behavior, welfare, and outcome variability in female mice. Front. Behav. Neurosci. 12, 232. https://doi.org/10.3389/fnbeh.2018.00232 (2018). 30. 30. Jansen, F. et al. Modulation of behavioural profile and stress response by 5-HTT genotype and social experience in adulthood. Behav. Brain Res. 207(1), 21–29. https://doi.org/10.1016/j.bbr.2009.09.033 (2010). 31. 31. Aikey, J. L., Nyby, J. G., Anmuth, D. M. & James, P. J. Testosterone rapidly reduces anxiety in male house mice (Mus musculus). Horm. Behav. 42(4), 448–460. https://doi.org/10.1006/hbeh.2002.1838 (2002). 32. 32. Mugford, R. A. & Nowell, N. W. Pheromones and their effect on aggression in mice. Nature 226(5249), 967 (1970). 33. 33. Holy, T. E. & Guo, Z. Ultrasonic songs of male mice. PLoS Biol. 3(12), e386. https://doi.org/10.1371/journal.pbio.0030386 (2005). 34. 34. Lahvis, G. P., Alleva, E. & Scattoni, M. L. Translating mouse vocalizations: prosody and frequency modulation. Genes Brain Behav. 10(1), 4–16. https://doi.org/10.1111/j.1601-183X.2010.00603.x (2011). 35. 35. Wang, H., Liang, S., Burgdorf, J., Wess, J. & Yeomans, J. Ultrasonic vocalizations induced by sex and amphetamine in M2, M4, M5 muscarinic and D2 dopamine receptor knockout mice. PLoS ONE 3(4), e1893. https://doi.org/10.1371/journal.pone.0001893 (2008). 36. 36. Krakenberg, V. et al. Technology or ecology? New tools to assess cognitive judgement bias in mice. Behav. Brain Res. 362, 279–287. https://doi.org/10.1016/j.bbr.2019.01.021 (2019). 37. 37. Kappel, S., Hawkins, P. & Mendl, M. T. To group or not to group? Good practice for housing male laboratory mice. Animals 7, 12. https://doi.org/10.3390/ani7120088 (2017). 38. 38. Melotti, L. et al. Can live with ‘em, can live without ‘em. Pair housed male C57BL/6J mice show low aggression and increasing sociopositive interactions with age, but can adapt to single housing if separated. Appl. Anim. Behav. Sci. 214, 79–88. https://doi.org/10.1016/j.applanim.2019.03.010 (2019). 39. 39. Feige-Diller, J. et al. The effects of different feeding routines on welfare in laboratory mice. Front. Vet. Sci. 6, 479 (2020). 40. 40. Navarro, J. F. & Francisco, J. An ethoexperimental analysis of the agonistic interactions in isolated male mice: comparison between OF.1 and NMRI strains. Psicothema 9(2), 333–336 (1997). 41. 41. Kloke, V. et al. The winner and loser effect, serotonin transporter genotype, and the display of offensive aggression. Physiol. Behav. 103(5), 565–574. https://doi.org/10.1016/j.physbeh.2011.04.021 (2011). 42. 42. Byers, S. L., Wiles, M. V., Dunn, S. L. & Taft, R. A. Mouse estrous cycle identification tool and images. PLoS ONE 7(4), e35538 (2012). 43. 43. McIlwain, K. L., Merriweather, M. Y., Yuva-Paylor, L. A. & Paylor, R. The use of behavioral test batteries: effects of training history. Physiol. 
## Acknowledgements

The authors thank Vanessa von Kortzfleisch and Binia Stieger for contributing their statistical expertise, as well as Edith Klobetz-Rassam for excellent technical assistance.

## Funding

This work was supported by a grant from the German Research Foundation (DFG) to S.H.R. and to N.S. (44541416/SFB-TRR58, Project A01). Open Access funding provided by Projekt DEAL.

## Author information

### Contributions

H.R., N.S. and S.K. conceived the study. H.R., N.S., S.K., S.S. and V.K. designed the experiments. H.R. and N.S. supervised the project. V.K. co-supervised and trained S.S. in conducting the experiments. S.S. carried out the experiments. R.P. determined and analysed the hormonal data. V.K. and S.S. conducted the statistical analysis of the data. V.K. and S.S. wrote the initial draft of the manuscript and all other authors (H.R., N.S., S.K., R.P.) revised it critically for important intellectual content.

### Corresponding author

Correspondence to Viktoria Krakenberg.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

### Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Reprints and Permissions

Krakenberg, V., Siestrup, S., Palme, R. et al. Effects of different social experiences on emotional state in mice. Sci Rep 10, 15255 (2020).
https://doi.org/10.1038/s41598-020-71994-9
# If sides A and B of a triangle have lengths of 3 and 5 respectively, and the angle between them is (pi)/4, then what is the area of the triangle?

Area $= \frac{15}{2\sqrt{2}} = \frac{15\sqrt{2}}{4} \approx 5.30$

$$\text{Area} = \frac{1}{2}ab\sin C = \frac{1}{2}(3)(5)\sin\frac{\pi}{4} = \frac{15}{2}\cdot\frac{1}{\sqrt{2}} = \frac{15\sqrt{2}}{4}$$
### Research on Object Detection Algorithm Based on Improved YOLOv5

QIU Tianheng, WANG Ling, WANG Peng, BAI Yan'e

College of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China

Online: 2022-07-01 Published: 2022-07-01

Abstract: YOLOv5 is currently one of the better-performing single-stage object detection algorithms, but the accuracy of its bounding-box regression is limited, which makes it hard to apply in scenarios that demand a high intersection-over-union (IoU) between predicted and ground-truth boxes. Building on YOLOv5, this paper proposes a new model, YOLO-G, with low hardware requirements, fast model convergence and more accurate target boxes. Firstly, the feature pyramid network (FPN) is improved: more features are integrated through cross-level connections, which to a certain extent prevents the loss of shallow semantic information, and the pyramid is deepened with a correspondingly added detection layer, so that the spacing of the various anchor boxes becomes more reasonable. Secondly, an attention mechanism in parallel mode is integrated into the network structure: the spatial and channel attention modules are given equal priority, and the attention information is extracted by weighted fusion, so that the network can fuse mixed-domain attention according to the degree of spatial versus channel attention. Finally, to prevent the increase in model complexity from costing real-time performance, the network is made lightweight, reducing its parameter count and computation. The PASCAL VOC 2007 and 2012 datasets are used to verify the effectiveness of the algorithm. Compared with YOLOv5s, YOLO-G reduces the number of parameters by 4.7% and the amount of computation by 47.9%, while mAP@0.5 and mAP@0.5:0.95 increase by 3.1 and 5.6 percentage points, respectively.
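The parallel attention arrangement described in the abstract can be sketched roughly as follows. This is a hypothetical PyTorch-style sketch: the module layout, the pooling choices and the learned fusion weights are assumptions for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Channel and spatial attention computed in parallel, then combined
    by learned weights (illustrative sketch of the YOLO-G idea)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel branch: squeeze spatial dims, excite channels
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial branch: 7x7 conv over pooled per-pixel channel statistics
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Learned fusion weights; both branches start with equal priority
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        ca = x * self.channel(x)                       # channel-refined features
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True)[0]], dim=1)
        sa = x * self.spatial(pooled)                  # spatially refined features
        return self.alpha * ca + self.beta * sa        # weighted fusion
```

Because both branches see the same input rather than being chained, neither attention type is subordinated to the other; the fusion weights decide their relative contribution.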
Press Release | Argonne National Laboratory

# U.S. Department of Energy awards $200 million for next-generation supercomputer at its Argonne National Laboratory

Under Secretary for Science and Energy Orr Announces Next Steps in Pursuit of Exascale Supercomputing to Accelerate Major Scientific Discoveries and Engineering Breakthroughs

Argonne, Ill. – Today, U.S. Department of Energy Under Secretary for Science and Energy Lynn Orr announced two new High Performance Computing (HPC) awards that continue to advance U.S. leadership in developing exascale computing. The announcement was made alongside leaders from Argonne National Laboratory and industry partners at Chicago's tech start-up hub, 1871.

Under the joint Collaboration of Oak Ridge, Argonne, and Lawrence Livermore (CORAL) initiative, the U.S. Department of Energy (DOE) announced a $200 million investment to deliver a next-generation supercomputer, known as Aurora, to the Argonne Leadership Computing Facility (ALCF). When commissioned in 2018, this supercomputer will be open to all scientific users – drawing America's top researchers to Argonne National Laboratory. Additionally, Under Secretary Orr announced $10 million for a high-performance computing R&D program, DesignForward, led by DOE's Office of Science and National Nuclear Security Administration (NNSA).

"Argonne National Laboratory's announcement of the Aurora supercomputer will advance low-carbon energy technologies and our fundamental understanding of the universe, while maintaining the United States' global leadership in high performance computing," said Under Secretary Orr. "This machine – part of the Department of Energy's CORAL initiative – will put the United States one step closer to exascale computing."

Today's $200 million award is the third, and final, supercomputer investment funded as part of the CORAL initiative, a $525 million project announced by Department of Energy Secretary Moniz in November 2014. CORAL was established to leverage supercomputers that will be five to seven times more powerful than today's top supercomputers and help the nation accelerate to next-generation exascale computing. DOE earlier announced a $325 million investment to build state-of-the-art supercomputers at its Oak Ridge and Lawrence Livermore laboratories.

"Few national investments have the potential to demonstrate dramatic progress and capability across many scientific disciplines and domains with real-world benefits," said Peter Littlewood, Director, Argonne National Laboratory. "Advanced computing is a lever that drives transformational change in science and technology, accelerating discovery and shortening the time for technology to reach market."

Key research goals for the Aurora system, expected to be commissioned in 2018 and to which the entire scientific community will have access, include:

• Materials science: Designing new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
• Biological science: Gaining the ability to understand the capabilities and vulnerabilities of organisms that can result in improved biofuels and more effective disease control.
• Transportation efficiency: Collaborating with industry to improve transportation systems with enhanced aerodynamics features, as well as enable production of better, more highly-efficient and quieter engines.
• Renewable energy: Engineering wind turbine design and placement to greatly improve efficiency and reduce noise.

The new system, Aurora, will use Intel's HPC scalable system framework to provide a peak performance of 180 PetaFLOP/s. The system will help ensure continued U.S. leadership in high-end computing for scientific research while also cementing the nation's position as global leader in the development of next-generation exascale computing systems. Aurora, in effect a "pre-exascale" system, will be delivered in 2018. Argonne and Intel will also provide an interim system, called Theta, to be delivered in 2016, which will help ALCF users transition their applications to the new technology.

"The future of high performance computing will require significant innovations on multiple fronts, and Argonne's Aurora and Theta supercomputers represent successive generations of the transformation required in future HPC system architectures," said Raj Hazra, Vice President, Data Center Group and General Manager, Technical Computing Group, Intel Corporation. "Working together with Cray, these systems provide a highly flexible and adaptable industry design based on Intel's HPC scalable system framework that will deliver breakthrough performance, power efficiency and application compatibility through an integrated and balanced system architecture – paving the way for new scientific discoveries and far-reaching benefits on a global scale. Intel is honored to have been awarded the Aurora contract as part of the CORAL program."

Intel will work with Cray Inc. as the system integrator, sub-contracted to provide its industry-leading scalable system expertise together with its proven supercomputing technology and HPC software stack. Aurora will be based on a next-generation Cray supercomputer, code-named "Shasta," a follow-on to the Cray® XC™ series.

"Cray is honored to partner with Argonne and Intel as we develop our next-generation Shasta system to build one of the fastest supercomputers on the planet for the Department of Energy," said Peter Ungaro, president and CEO of Cray. "Shasta will be a powerful combination of Intel's new technologies and Cray's advanced supercomputing expertise, creating a single, flexible system that will enable huge advances in computing and analytics. Aurora will be the first system in our Shasta family and we couldn't be more excited."

In addition to procuring systems like Aurora, the Office of Science and the National Nuclear Security Administration are making longer-term investments in exascale computing under the DesignForward high-performance computing R&D program, designed to accelerate the development of next-generation supercomputers. The program recently awarded $10 million in contracts to AMD, Cray, IBM and Intel Federal, complementing the $25.4 million already invested in the first round of DesignForward. Under this public-private partnership, the four technology firms will work with DOE researchers to study and develop software and hardware technologies aimed at maintaining our nation's lead in scientific computing.
### High Performance Computing: Download and prepare data in a batch mode

Over time, I need to manipulate a lot of data on a Linux cluster. Some of these manipulations actually read/write data, whereas some are essentially file system operations, such as downloading files. Here I present a list of similar operations suitable for HPC using the PBS job approach whenever possible. I do not attempt to include all possible methods, only the ones that I find useful and easy to prepare in seconds.

wget -r --no-parent -R "index.html*" --retr-symlinks -A "*.nc" ftp-url
wget -r --no-parent -R "index.html*" -A "MOD17A2.A2000*.hdf" -A "MOD17A2.A2000*.xml" http-url
wget -r --no-parent -R "index.html*" -A "MOD17*.hdf" -A "MOD17*.xml" http-url

You can basically set up filters for file type, year and granule id. A live example:

///==========================================================
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -l naccesspolicy=singleuser
#PBS -l walltime=40:00:00
#PBS -m ae
#PBS -q standby
cd $PBS_O_WORKDIR
wget -r --no-parent -R "index.html*" --retr-symlinks -A "*.tar" ftp://somwhere
///==========================================================

Compress and extract

Examples:

///==========================================================
#!/bin/bash
# use this script to extract tar files under each sub directory
for dir in $(find . -mindepth 1 -maxdepth 1 -type d)
do
  cd "$dir"
  echo "$dir"
  for f in *.tar; do tar xf "$f"; done
  cd ..
done
///==========================================================
#!/bin/bash
# unpack every gzipped file in the current directory
for file in *.gz
do
  gunzip -d "$file"
done
///==========================================================

Search

grep -rnw '/path/to/somewhere/' -e "pattern"
find . -maxdepth 1 -name "*string*" -print

Compile

make &> results.txt

Count

find . -name '*.cpp' | xargs wc -l

Debug

qsub -I -lnodes=1:ppn=20 -lwalltime=04:00:00 -q boss -X

Simply organize the bash scripts above and swap in your own commands; most file system related tasks can be resolved this way. I will add more related scripts later.

### Spatial datasets operations: mask raster using region of interest

Climate change related studies usually involve extracting spatial datasets from a larger domain. In this article, I will briefly discuss some potential issues and solutions. In the most common scenario, we need to extract a raster file using a polygon-based shapefile, and I will focus on that as an example. In a typical desktop application such as ArcMap or ENVI, this is usually done with a tool called clip, or extract using mask or ROI. Before any analysis can be done, it is best practice to project all datasets into the same projection. If you are lucky, you may find that the polygon you will use actually matches up with the raster grid perfectly. But that rarely happens unless you created the shapefile using "fishnet" or similar approaches. What if luck is not with you? The algorithm within these tools will usually make the best estimate of the value based on the location. Nearest-neighbor resampling, but not limited to it, will be used to calculate the value. But what about the outp…

### Numerical simulation: ode/pde solver and spin-up

For Earth Science model development, I inevitably have to deal with ODE and PDE equations.
I have also come across some discussion related to this topic, e.g., https://www.researchgate.net/post/What_does_one_mean_by_Model_Spin_Up_Time

In an attempt to answer this question, as well as to redefine the problem I am dealing with, I decided to organize some materials to illustrate our current state on this topic. Models are essentially equations. In Earth Science, these equations are usually ODEs or PDEs, so I want to discuss this from a mathematical perspective. Ideally, we want to solve these ODE/PDE with initial conditions (IC) and boundary conditions (BC) using various numerical methods.

https://en.wikipedia.org/wiki/Initial_value_problem
https://en.wikipedia.org/wiki/Boundary_value_problem

Because of the nature of geology, everything is similar to its neighbors. So we can construct a system of equations which may have multiple equations for each single grid cell. Now we have an array of equation…

### Lessons I have learnt during E3SM development

I have been involved with E3SM development since I joined PNNL as a postdoc. Over the course of time, I have learnt a lot from the E3SM model. I have also found many issues within the model, which reflect struggles common across the lifespan of software engineering. Here I list a few major ones that we all dislike, but which show up in almost every project we have worked on.

Excessive usage of an existing framework even when it is not meant for the task

Working on a large project means that you should NOT re-invent wheels if they are already there. But more often, developers tend to use existing data types and functions even when they were not designed for the purpose. The reason is simple: it is easier to use existing ones than to create new ones. For example, in E3SM, there was no data type to transfer data between river and land. Instead, developers used the data type designed for atmosphere and land to do the job. While it is ok to do so, it added unnecessary confusion for future development a…
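To make the spin-up idea from the numerical-simulation post above concrete, here is a minimal sketch: integrate a toy ODE dx/dt = -k(x - forcing) until the state stops changing, at which point the model has "forgotten" its initial condition. The forcing value, rate constant and convergence tolerance are illustrative assumptions only, not values from any real model.

```python
import numpy as np

def spin_up(x0, forcing, k=0.1, dt=1.0, tol=1e-8, max_steps=100_000):
    """Forward-Euler integration of dx/dt = -k * (x - forcing) until the
    state change per step falls below tol (a quasi-equilibrium)."""
    x = x0
    for step in range(max_steps):
        x_new = x + dt * (-k * (x - forcing))
        if abs(x_new - x) < tol:
            return x_new, step   # spun-up state and steps needed
        x = x_new
    return x, max_steps

state, steps = spin_up(x0=0.0, forcing=5.0)
print(f"equilibrium ~ {state:.4f} after {steps} steps")
```

The same pattern applies to real land or ocean models: run the model under repeating forcing until a chosen diagnostic stabilizes, and only then start the production simulation from that state.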
As an exercise (scroll to the first set of exercises) for learning Haskell, I have to implement foldl1. I believe I have implemented it successfully, and while there is an answer available, it would be great to have the eye of an expert and, more importantly, the thought process behind why certain decisions were made. Below is my code.

foldl1' :: (a -> a -> a) -> [a] -> a
foldl1' f [a] = a
foldl1' f xs = foldl1' f ((f (xs !! 0) (xs !! 1)):(drop 2 xs))

First, I like the explicit statement of the type signature. That's a good habit to get into, and it makes it easier to capitalise on perhaps the greatest strength of using Haskell, which is all the compile-time checking. The provided signature is as general as it can be for lists. Second, the single-element base case is written cleanly and correctly.

The recursive case has room for a bit of picking apart. It is conventional to use pattern-matching syntax x:xs (or x1:x2:xs) for recursive functions on lists. As well as being cleaner to read, the behaviour is slightly different in that it can work out the first, second, and remainder of the list in a single pass, without having to separately call !! twice and drop once.

foldl1' f (x1:x2:xs) = foldl1' f ((f x1 x2):xs)

One other improvement that I would suggest, taken directly from the prelude function of the same name, is explicitly handling the failure case when provided with an empty list. For comparison, the inbuilt function produces an exception:

foldl1 (+) []
*** Exception: Prelude.foldl1: empty list

• foldl1' f (a:b:t) = foldl1' f (f a b : t) is even better. :) Then consider making an internal function which takes an accumulator argument and the rest of the list, without passing f around, and without the extraneous :s. – Nov 15 '18 at 7:37
# LipidXplorer MFQL

## Introduction

MFQL is the first query language developed for the identification of molecules in complex shotgun spectra datasets. It formalizes the available or assumed knowledge of lipid fragmentation pathways into queries that are used for probing a MasterScan database.

### Structural complexity of lipid species and sum composition constraints

The figure shows the basic lipid structure and some characteristics specific to lipids, using the example of a PC species. Let us consider PC as a representative example: PC molecules consist of a phosphorylcholine head group attached to the glycerol backbone at the sn-3 position, while fatty acid moieties occupy the sn-1 and sn-2 positions (alternatively, a fatty alcohol moiety could be attached at the sn-1 position). Fatty acid moieties differ by the number of carbon atoms and double bonds, but also by their relative location at the glycerol backbone, so that isomeric structures having exactly the same fatty acid moieties are possible. Note that isomeric structures are always isobaric, whereas isobaric molecules are not necessarily isomeric.

Most generic constraints ("All lipids of PC class" or "All PC esters") encompass sum compositions of species with common naturally occurring fatty acids. However, because of the fatty acid variability, some species of other lipid classes (such as PE) might meet the same constraint. Therefore, for most common glycerophospholipid classes, the characterization of individual molecular species cannot rely solely on their intact masses, irrespective of how accurately they were measured. MS/MS experiments that produce structure-specific ions contribute more specific constraints, such as the number of carbons and double bonds in individual moieties, a characteristic head group fragment, or a characteristic loss of a fatty acid moiety, among others. Within an MFQL query, these constraints can be bundled by Boolean operations.

## A short tutorial

Below we present an example of composing an MFQL query for identifying PC lipids in a typical shotgun dataset. In MS/MS experiments (see #MFQL_identification_of_phosphatidylcholines_.28PC.29), molecular cations of PC species produce specific phosphorylcholine fragments of their head group having the sum composition 'C5 H15 O4 N1 P1' and m/z 184.07. The identification of PC species starts with the identification of probable precursors in the MS spectrum using the accurately determined masses, and proceeds with identifying the phosphorylcholine head group fragment in the MS/MS spectra.

A query for a phosphatidylcholine lipid (PC) could be:

• Find all precursor masses which fit into the following set of sum compositions: "C[30..48] H[30..200] O[8] P[1] N[1]", and
• look if there is the "C5 H15 O4 P1 N1" fragment (or m/z 184.07) in its MS/MS spectrum.
• If those two conditions hold, we have identified a phosphatidylcholine and can report the lipid species.

### MFQL identification of phosphatidylcholines (PC)

Figure: Identification of a PC lipid. Upon their collisional fragmentation, molecular cations of PC produce a specific head group fragment with m/z 184.07 and sum composition 'C5 H15 O4 P1 N1'. A: MS spectrum acquired by direct infusion of a total lipid extract into a QSTAR mass spectrometer (inset). All detectable peaks were subjected to MS/MS. The spectrum acquired from the precursor m/z 788.5 (designated by the arrow) is presented in the lower panel.
The precursor ion was isolated within a 1 Da mass range, and therefore several isobaric lipid precursors were co-isolated for MS/MS and produced abundant fragment ions unrelated to PC. These ions were disregarded by this MFQL query and did not affect PC identification. B: MFQL query identifying PC species; details are provided in the text. C: screenshot of the output spreadsheet file; column annotation and content are determined by the REPORT section of the above MFQL, see also the text for details.

To better illustrate the structure of MFQL and the meaning of the different command lines, we explain in the following the example script for identification of a PC lipid species.

First, let us assign a name to the query:

QUERYNAME = Phosphatidylcholine;

Next, we define the variables used for identifying the species. Our query should identify the singly charged PC head group fragment and therefore:

DEFINE headPC = 'C5 H15 O4 N1 P1' WITH CHG = +1;

The keyword CHG states the charge of the ion. In a shotgun experiment not all fragmented peaks will originate from PCs. For higher search specificity we next define precursors (prPC), which are expected to produce the headPC fragment in MS/MS spectra. We impose the sc-constraint on precursor masses: besides sum composition requirements, it requests that precursors are singly charged and that their unsaturation (expressed as a double bond equivalent with the keyword DBR) is within a certain (here from 1.5 to 7.5) range:

DEFINE prPC = 'C[30..48] H[30..200] N[1] O[8] P[1]' WITH CHG = +1, DBR = (1.5, 7.5);

Next, the IDENTIFY section specifies that prPC precursors should be identified in MS spectra and headPC fragments in MS/MS spectra, both acquired in positive mode. The logical operation AND requests that headPC should only be searched in MS/MS spectra of prPC:

IDENTIFY
prPC IN MS1+ AND
headPC IN MS2+

We further limit the search space by applying optional project-specific compositional constraints, formulated in the next SUCHTHAT section. For example, it is generally assumed that mammals do not produce fatty acids having an odd number of carbon atoms. Therefore, it is likely that if a recognized lipid comprises an odd-numbered fatty acid moiety, this identification is false.

SUCHTHAT
isEven(prPC.chemsc[C]);

In this case the operator isEven requests that candidate PC precursors contain an even number of carbon atoms. Since the head group of PC and the glycerol backbone contain 5 and 3 carbon atoms, respectively, this implies that a lipid cannot comprise fatty acid moieties with odd and even numbers of carbon atoms at the same time.

By executing the DEFINE, IDENTIFY and SUCHTHAT sections LipidXplorer will recognize spectra pertinent to PC species. The last section, REPORT, defines how these findings will be reported. This includes annotation of the recognized lipid species, reporting the abundances of characteristic ions for subsequent quantification, and reporting all additional information pertinent to the analysis, such as masses, mass differences (errors), etc. LipidXplorer outputs the findings as a *.csv file in which identified species are in rows, while the column content is user-defined. In this example we define 5 columns: NAME - to report the species name; along with four peak attributes: MASS - species mass-to-charge ratio; CHEMSC - chemical sum composition; ERROR - the mass measurement error (the difference between the theoretical and the measured mass); INTENS - intensities of the specified ions reported for each individual acquisition.
REPORT
MASS = prPC.mass;
NAME = "PC [%d:%d]" % "((prPC.chemsc - headPC.chemsc)[C] - 3, prPC.chemsc[db] - 1.5)";
CHEMSC = prPC.chemsc;
ERROR = "%.2fppm" % "(prPC.errppm)";
INTENS = prPC.intensity;

It is also possible to define mathematical terms or use certain functions, such as text formatting, on these attributes. The text format implies two strings separated by %, where the first string contains placeholders and the second string their content. This formatting is used in the NAME string, so that the actual annotation convention remains at the user's discretion. In this example the two placeholders %d of the lipid class name PC [%d:%d] are filled with the number of carbon atoms and double bonds in the fatty acid moieties. The number of carbon atoms is calculated by subtracting the headPC carbon atoms and the 3 carbons of the glycerol backbone from the total carbons of the precursor prPC (Figures 5 and 6).

## General rules in MFQL queries

1. Everything written after # is ignored by the interpreter. This is used for writing comments in the code.
2. Every line has to end with ;
3. Every query has to end with an extra ;

## The structure of an MFQL query

An MFQL query consists of 3-4 sections:

1. DEFINE: defines sum compositions, sc-constraints (see also #sc-constraints), masses or groups of masses, and associates them with user-defined names.
2. IDENTIFY: determines where and how the DEFINE content is applied. It usually encompasses searches for specific precursors in MS and/or fragment ions and/or neutral losses in MS/MS spectra.
3. SUCHTHAT: is optional. It defines constraints that are formulated as mathematical expressions and inequalities, numerical values, peak attributes (see Supporting Information S-4), sum compositions and functions. Several individual constraints can be bundled by logical operations and applied together.
4. REPORT: establishes the content and format of the output.

After REPORT there is a list of variables (MASS, NAME, ...) which represent columns in the output file. Each column's content is defined after the =. More on REPORT can be found in the REPORT chapter.

## SC-constraints

For dealing with sets of chemical formulas LipidXplorer uses a special format called a sum composition constraint (sc-constraint). With sc-constraints it is possible to specify sets of chemical formulas of a lipid class. Here is an example:

'C[38..54] H[30..130] O[10] N[1] P[1]' WITH DBR=(2.5,9.5), CHG = -1;

• 'C[38..54] .... P[1]' is the sc-constraint defining a set of chemical formulas.
• DBR means 'Double Bond Range' and narrows the number of possible double bonds and rings to the given range.
• CHG states the charge. If the charge is set to zero, the sc-constraint will be treated as a collection of neutral losses.

## The 4 sections of an MFQL query

### Part 1: Definition of sum compositions, sc-constraints and masses

The first statement of any query is QUERYNAME = <name of the query>, which gives the query a unique name. Next, variables are defined. The syntax is:

DEFINE <variable name> = (<chemical sum composition> | <sc-constraint> | <mass>) (WITH (<option> = <value>)+)?

After the keyword DEFINE comes the name of the variable, followed by an equals sign and its content. This can be either a chemical sum composition, an sc-constraint or a list of sum compositions. Sum compositions and sc-constraints are written in single quotes. Then there can be a WITH followed by certain options. The options can be:

1. DBR is the double bond range of an sc-constraint.
It is a 2-tuple stating the minimum and maximum number of double bonds and rings allowed for a sum composition of this sc-constraint.

2. CHG states the charge.

If the fragment should be a neutral loss, this can be stated by setting the charge to zero with CHG = 0 or by writing AS NEUTRALLOSS after the sum composition or sc-constraint. NOTE: The neutral loss is always calculated between the precursor mass and the fragment, never between two fragments.

#### Examples

Define the PC-O sc-constraint and PC-O's head group, which is connected to the precursor mass:

DEFINE PR = 'C[30..48] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE pcHead = 'C5 H15 O4 P1 N1' WITH CHG = 1;

Define the PE sc-constraint and PE's head group, which is connected to the precursor mass:

DEFINE PR = 'C[30..46] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE peHead = 'C2 H8 O4 N1 P1' AS NEUTRALLOSS;

Define sc-constraints and fragments for PE-Plasmalogen:

DEFINE PR = 'C[30..46] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE FRAG1 = 'C[14..26] H[20..80] O[3]' WITH DBR = (1.5,9), CHG = 1;
DEFINE FRAG2 = 'C[14..26] H[20..80] N[1] O[4] P[1]' WITH DBR = (1.5,9), CHG = 1;

An arbitrary number of variables can be defined, but they are only valid for the current query, i.e. they are not valid in other queries of the same Run.

### Part 2: The IDENTIFY section

The previously defined variables are queried against the experiment database. The syntax is:

IDENTIFY <identification 1> AND <identification 2> AND ... <identification n>

The headline 'IDENTIFY' is followed by identifications which are connected by 'AND'. The result of an identification can be a singleton or a set, i.e. for some variables more than one mass is identified. This holds especially for sc-constraints. This section is the first filtering step. The section returns True if the Boolean expression is true, and the expression is true if the individual identifications are true. An identification looks like this:

<variable name> IN (MS1+ | MS1- | MS2+ | MS2-)

Here LipidXplorer checks the existence of certain masses/fragment masses. The scope (level of MS) is stated after 'IN': the 'MS1+', 'MS1-', 'MS2+' and 'MS2-' tags point to the MS level where to look for the sum composition ('MS1+' means positive MS, while 'MS2-' means negative MS/MS).

## Emulating (Multiple) Precursor Ion Scan / Neutral Loss Scan with MFQL

In the IDENTIFY section, precursor ion scans (PIS) and neutral loss scans (NLS) can be specified. If the variable is an sc-constraint, this emulates multiple PIS/NLS. Switching from PIS to NLS is done in the definition part: when a variable gets charge zero (CHG = 0) or the keyword AS NEUTRALLOSS is given, it is treated as a neutral loss; otherwise it is treated as a (fragment) mass. (Comment: this feature should not be mistaken for the LipidXplorer functionality to import PIS and NLS mass spectrometric acquisitions.)
Some examples:

# Phosphatidylcholine ether species
DEFINE PR = 'C[30..48] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE pcHead = 'C5 H15 O4 P1 N1' WITH CHG = 1;

IDENTIFY
# the MS mass should fit 'PR' and it should have a MS/MS fragment mass fitting 'pcHead'
PR IN MS1+ AND
pcHead IN MS2+

################################################################################

# Phosphatidylethanolamine
DEFINE PR = 'C[30..46] H[30..200] N[1] O[8] P[1]' WITH DBR = (2.5,9), CHG = 1;
DEFINE peHead = 'C2 H8 O4 N1 P1' WITH CHG = 0;

IDENTIFY
# marking
PR IN MS1+ AND
peHead IN MS2+

################################################################################

# PE Plasmalogen
DEFINE PR = 'C[30..46] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE FRAG1 = 'C[14..26] H[20..80] O[3]' WITH DBR = (1.5,9), CHG = 1;
DEFINE FRAG2 = 'C[14..26] H[20..80] N[1] O[4] P[1]' WITH DBR = (1.5,9), CHG = 1;

IDENTIFY
# marking
PR IN MS1+ AND
FRAG1 IN MS2+ AND
FRAG2 IN MS2+

### Part 3: The SUCHTHAT section

After the collection of specific masses, it is possible to add more constraints to the query. For example: the identification of PE Plasmalogen requires the marking of 'FRAG1' and 'FRAG2', which both contain several possibilities since they are sc-constraints (see the example above), and a test whether those two fragments in sum match the precursor mass, i.e. is "FRAG1 + FRAG2 == PR"? Such a constraint is formulated in the optional 'SUCHTHAT' section as Boolean-connected equations, inequalities and functions. The syntax is:

SUCHTHAT (((NOT)? (<equation> | <inequality> | <function>)) | ((NOT)? (<equation> | <inequality> | <function>) (AND | OR))+) (WITH (<option> = <value>)+)?

Terms can be built up with the basic mathematical operations +, -, *, /. Parentheses can also be used. Terms are connected as equations by '==' and as inequalities by '<', '>', '<=', '>=' and '!=' for not equal. The values in the terms can be marked masses (given by their variable name), floating point numbers or chemical sum compositions.

Certain attributes of marked masses can also be addressed. This is done by writing the attribute after the variable name, connected with a dot. The intensity of the peak 'PR', for example, is addressed as PR.intensity. A list of peak attributes can be found here: #List_of_peak_attributes

#### Functions

In addition to attributes, SUCHTHAT supports the use of functions. The list of all functions can be found here: #List_of_functions

### Part 4: The REPORT section

All successful identifications are piped to the REPORT section, where the format of the output is specified. In general REPORT consists of a list of variables, where each represents a column. The content of the variable is the content of the column. The following code generates a column with the name MASS and the m/z values of PR's identified species as content:

REPORT
MASS = PR.mass

The next example reports the sum of the intensities of two fragments:

REPORT
INTENS = frag1.intensity + frag2.intensity

Often those fragments can be the same (for example, in two fatty acid scans); therefore LipidXplorer has a special function which does not double-count intensities of identical fragments:

REPORT
INTENS = sumIntensity(frag1.intensity, frag2.intensity)

The syntax of REPORT is:

REPORT (<variable name> = <variable> | <equation>)+

The content of the variable can be any attribute and/or term as in the SUCHTHAT section. The REPORT section has an additional feature with which it is possible to generate lipid names or other formatted strings.
The syntax for this function is:

REPORT (<variable name> = "<format string>" % (<list of variables for the format string>))+

The string format works as follows: there are two strings separated by a %. The first string contains the output format, i.e. a string with placeholders. A placeholder can be: %d for decimal values, %.nf for floating point values with n decimals, and %s for string values. The second string contains a list with the content of the placeholders according to their order. For example:

REPORT
LIPIDNAME = "PC [%d:%d]" % (fa1PC.chemsc[C] + fa2PC.chemsc[C], fa1PC.chemsc[db] + fa2PC.chemsc[db])

The variable LIPIDNAME contains the string "PC [... : ...]". The first decimal value is filled with the sum of the carbon atoms of both fatty acids (fa1PC, fa2PC), and the second decimal value with the sum of the double bonds. The output could be, for example, "PC [36:2]". The format string variant is a Python gimmick, where MFQL uses standard Python commands; i.e. the format string is a Python function (see here for more information).

### Notes

• If a lipid was not found in a particular sample, its intensity is set to zero.
• If the isotopic correction corrects an intensity to zero or less than zero, it is set to '-1'.

## List of peak attributes

#### error

The difference between the theoretical mass (according to the sum composition) and the tagged mass from the spectrum. The error can be given in 3 forms:

1. errppm -> error in ppm
2. errda -> error in dalton
3. errres -> error as a resolution value

#### mass

The m/z value of the peak.

#### chemsc

The chemical sum composition. For addressing certain elements of the sum composition, the element is written in brackets after .chemsc. For example, to get the number of C atoms from a formula: PR.chemsc[C]

1. frsc -> the chemical sum composition of the fragment. If the peak is defined as a (charged) fragment, it is the same as chemsc; if it is defined as a neutral loss, it returns the sum composition of the fragment.
2. nlsc -> the chemical sum composition of the neutral loss. If the peak is defined as an (uncharged) neutral loss, it is the same as chemsc; if it is defined as a fragment, it returns the sum composition of the neutral loss of the precursor.

#### intensity

All the intensities of a mass from all the samples in which it occurred. Note that intensity is usually not a single value but a list of intensities, with one list entry for every sample the peak was found in. If used in an equation or inequality, the whole list is considered, i.e. PR.intensity > 10000 is true if and only if all intensities are greater than 10000.

It is possible to address only a subset of the samples. This is done by writing the name of the sample group as a string with wildcards (* and/or ?). E.g., PR.intensity["*blanck*"] returns just the samples with the string blanck in their name; this could be all blank samples. This feature allows sample groups to be generated by naming the samples according to their group. In this way, many different constraints can be stated which increase the accuracy of the interpretation, or even interpret the result directly. E.g.

avg(PR.intensity["*blanck*"]) < avg(PR.intensity["*exp*"]) / 100

This statement asserts that the average intensity found in the blank samples must be less than one percent of the average intensity of all experimental samples ("*exp*"). This simply throws out every "lipid" which is obviously noise.

#### binsize

The size of the bin of the peak coming from the averaging algorithm. The value is given in Dalton.
#### occ

The occupation of the peak: occupation = number of occurrences in the samples / number of samples.

## List of functions

#### isEven(n)

where n is an integer value. The function returns True if n is even. E.g.: isEven(PR.chemsc[C]).

#### isOdd(n)

where n is an integer value. The function returns True if n is odd.

#### avg(v.intensity)

where v is a variable. The function returns the average of the intensities of v. E.g.: avg(PR.intensity)

#### isStandard(v, scope)

where v is a variable and scope is "MS1+", "MS1-", "MS2+" or "MS2-". This function is special since it does not return anything. It enables the automatic calculation of standardized intensities according to the given standard in v, i.e. every intensity is calculated relative to v.

#### sumIntensity(f1.intensity, f2.intensity, ...)

The function sumIntensity() is used for summing up intensities of different MS2 entries where multiple peaks are required for identification and quantification. For fragments with isotopically corrected placeholders (see above), the following rules were implemented. If all MasterScan entries in the MS2 for a particular molecule are placeholders (i.e. all are set to '-1'), those values are simply added, resulting in $n_i \times -1$, where $n_i$ is the number of attributes. If there is at least one entry whose intensity is greater than zero, all -1 placeholders are treated as zero and not added to the overall sum. In the presented example we assume that two entries in the MS2 were used for the sumIntensity() function, F1 + F2 -> sumIntensity(F1.intensity, F2.intensity):

-1 + -1 = -2
0 + -1 = -1
1 + -1 = 1
2 + -1 = 2
2 + 0 = 2

This has the following consequences when such results have to be interpreted:

A) intensity = 0: in this specific sample none of the required fragments was present.
B) intensity < 0: in this sample some of the required fragments were found in the initial MasterScan but were set to '-1'; no fragment above the threshold (1) was present.
C) intensity = $-n_i$: all fragments were below the threshold (1) after isotopic correction.
D) intensity > 0: in this case at least one of the required fragments was above the threshold (1) after isotopic correction.

### Some examples

SUCHTHAT
# the number of 'C' atoms in 'PR's chemical sum composition should be odd
isOdd(PR.chemsc[C])

SUCHTHAT
# the sum of both fragments ('FRAG1', 'FRAG2') minus one 'H' should be equal to
# the precursor mass ('PR') and
# the intensity of 'FRAG2' should be bigger than 3/10th of
# the intensity of 'FRAG1'
FRAG1 + FRAG2 - 'H1' == PR AND
FRAG1.intensity * 3 < FRAG2.intensity * 10

## How LipidXplorer runs multiple MFQL queries

The principle of a LipidXplorer Run is the following: all queries run successively on the given MasterScan. For every query, LipidXplorer iterates through the list of MS masses of the MasterScan from smallest to greatest and checks the conditions given in the definition, IDENTIFY, SUCHTHAT and REPORT sections. I.e.

• it loads an MS mass,
• it checks if it fits a given sum composition or sc-constraint (definition and IDENTIFY section),
• it looks into its MS/MS spectrum (if provided) and does the same (definition and IDENTIFY section),
• the Boolean constraints are checked (SUCHTHAT section) and, if the result is positive, the MS mass is accepted and sent to the REPORT section.

## Examples

### Screen (without MS/MS experiments) for Phosphatidylcholine species

A "screen" is a fast identification based on MS information only.
To do screening properly, the masses should be highly accurate, because otherwise the identification error is too high. The name of the query here is Phosphatidylcholine. Giving a name to a query is obligatory and has to be done for every query. We define the sc-constraint prPC (short for "precursor of PC") and state that it should be found in the positive MS spectra. Names for variables are arbitrary; the user should try to give meaningful names in order to understand the query better. The IDENTIFY section instructs LipidXplorer to look for the precursor mass in the MS spectrum. In SUCHTHAT we use a function to restrict the result to lipids having an overall even number of carbon atoms. This means that the lipid's fatty acids have to be either both even-numbered or both odd-numbered. In this way, we can exclude lipids which we know should not be in the organism we examine. The REPORT section uses the following variables:

• 'MASS' returns the m/z value of the MS mass.
• 'NAME' returns the lipid species' name, which consists of the number of carbon atoms and double bonds of the fatty acids. Those numbers we get by taking the number of carbons/double bonds from the sum composition (prPC.chemsc[C]/prPC.chemsc[db]) and reducing it by the carbons/double bonds belonging to the PC's head group and glycerol backbone.
• 'CHEMSC' returns the chemical sum composition.
• 'INTENS' returns the abundance of the identified lipid species for all samples.
• 'ERROR' returns the error of the finding in ppm.

##########################################################
# Identify PC with checking the precursor mass           #
##########################################################

QUERYNAME = Phosphatidylcholine;
DEFINE prPC = 'C[30..48] H[30..200] N[1] O[8] P[1]' WITH DBR = (2.5,9), CHG = 1;

IDENTIFY
# marking
prPC IN MS1+

SUCHTHAT
isEven(prPC.chemsc[C])

REPORT
MASS = prPC.mass;
NAME = "PC [%d:%d]" % (prPC.chemsc[C] - 8, prPC.chemsc[db] - 5);
CHEMSC = prPC.chemsc;
INTENS = prPC.intensity;
ERROR = "%2.2fppm" % (prPC.errppm);
;

################ end script ##################

The output of the query is the following: a screenshot of spreadsheet software holding the resulting data from the query. At the top are the variable names, followed by the name of the query, then the content. Note that for 'INTENS' the name of the file from which the sample data was taken is also shown. Every entry in the result fulfills the constraints given in the query. If an expected value is not found, then the query or the import settings should be refined.

### Analysis of Phosphatidylcholine lipid species emulating PIS 184

In addition to the former query we have a variable 'headPC' which contains the sum composition of the specific head group for PC, which is found in the fragment spectra after MS/MS of a PC species. This variable is added as a constraint in IDENTIFY. Thus a lipid is only identified if it fits the constraints of prPC AND has a headPC fragment in its MS/MS spectrum. Again, we test for an even number of carbons in SUCHTHAT, which ensures we do not find borderline masses which actually cannot be in the sample. In the output we additionally have the abundance of the head group fragment with FRAGINTENS.
##########################################################
# Identify PCs with checking the precursor mass          #
# AND check for PIS 184 in MS2                           #
##########################################################

QUERYNAME = Phosphatidylcholine;
DEFINE prPC = 'C[30..48] H[30..200] N[1] O[8] P[1]' WITH DBR = (1.5,7.5), CHG = 1;
DEFINE headPC = 'C5 H15 O4 P1 N1' WITH CHG = 1;

IDENTIFY
# marking
prPC IN MS1+ AND
headPC IN MS2+

SUCHTHAT
isEven(prPC.chemsc[C])

REPORT
MASS = prPC.mass;
NAME = "PC [%d:%d]" % ((prPC.chemsc - headPC.chemsc)[C] - 3, prPC.chemsc[db] - 1.5);
CHEMSC = prPC.chemsc;
ERROR = "%2.2fppm" % (prPC.errppm);
INTENS = prPC.intensity;
FRAGINTENS = headPC.intensity;
;

################ end script ##################

### Application of Boolean operation "AND" for identification of PE-plasmalogen

An example of a whole script:

###########################################################
##### find PE-plasmalogens with MS2 in positive mode ######
###########################################################

# define sc-constraints and fragments for PE-Plasmalogen
DEFINE PR = 'C[30..46] H[30..200] N[1] O[7] P[1]' WITH DBR = (1.5,8), CHG = 1;
DEFINE FRAG1 = 'C[14..26] H[20..80] O[3]' WITH DBR = (1.5,9), CHG = 1;
DEFINE FRAG2 = 'C[14..26] H[20..80] N[1] O[4] P[1]' WITH DBR = (1.5,9), CHG = 1;

IDENTIFY
# marking
PR IN MS1+ AND
FRAG1 IN MS2+ AND
FRAG2 IN MS2+

SUCHTHAT
# the sum of both fragments ('FRAG1', 'FRAG2') minus one 'H' should be equal to
# the precursor mass ('PR') and
# the intensity of 'FRAG2' should be bigger than 3/10th of
# the intensity of 'FRAG1'
FRAG1 + FRAG2 - 'H1' == PR AND
FRAG1.intensity * 3 < FRAG2.intensity * 10

REPORT
# first column is the precursor mass
MASS = PR.mass,
# second is the lipid name, generated with Python's string formatting function
NAME = "PE-O [%d:%dp / %d:%d]" % (FRAG1.frsc[C], FRAG1.frsc[db] - 2, FRAG2.frsc[C], FRAG2.frsc[db] - 2),
# third is the precursor's chemical sum composition
CHEMSC = PR.chemsc,
# fourth the intensity
INTENS = PR.intensity,
# fifth the sum of the errors of both fragments in ppm
ERROR = FRAG1.errppm + FRAG2.errppm;;

## More Examples

More examples can be found in the MFQL collection provided in the LipidXplorer wiki.
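As an aside, the placeholder arithmetic described above for sumIntensity() can be captured in a few lines of Python. This is a hypothetical illustration of the stated rules, not LipidXplorer's actual code; the function name and signature are invented for the sketch.

```python
def sum_intensity(*values, placeholder=-1):
    """Placeholder-aware intensity sum, following the rules from the
    sumIntensity() section: -1 placeholders add up as long as no real
    intensity (> 0) is present; once one is, placeholders count as 0."""
    if any(v > 0 for v in values):
        return sum(0 if v == placeholder else v for v in values)
    return sum(values)

# Reproduces the example table:
assert sum_intensity(-1, -1) == -2
assert sum_intensity(0, -1) == -1
assert sum_intensity(1, -1) == 1
assert sum_intensity(2, -1) == 2
assert sum_intensity(2, 0) == 2
```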
# If y≠x, then [x^3 + (x^2 + x)(1-y) - y] / (x-y) = ?

Manager (commdiver):

This is problem #34 on page 238 of Manhattan GMAT's Advanced GMAT Quant book.

If y≠x, then [x^3 + (x^2 + x)(1-y) - y] / (x-y) = ?

(A) [(x-1)^2]y
(B) (x+1)^2
(C) (x^2 + x + 1)
(D) (x^2 + x + 1)y
(E) (x^2 + x + 1)(x-y)

Please see the spoiler below for my question:

I picked 3 for x and 2 for y, but both C and E are correct for that. I redid the problem and picked 4 for x and 2 for y. That makes C the only correct answer. How can I avoid this problem in the future? If this were the real GMAT, I would have wasted 30 seconds to a minute on this problem because I would have had to do it twice.

Math Expert:

ALGEBRAIC APPROACH:

$$\frac{x^3 + (x^2 + x)(1-y) - y}{x-y}=\frac{x^3 +x^2-x^2y+x-xy-y}{x-y}=\frac{(x^3-x^2y)+(x^2-xy)+(x-y)}{x-y}=$$ $$\frac{x^2(x-y)+x(x-y)+(x-y)}{x-y}=\frac{(x-y)(x^2+x+1)}{x-y}=x^2+x+1$$.

Veritas Prep GMAT Instructor (Karishma):
When you pick numbers, it is normal to get 2 or even 3 options that work out. The reason for this is that we tend to pick really easy numbers so that the calculation does not get cumbersome. I do not suggest you pick harder numbers, of course; I suggest you pick even easier numbers so that the iterations don't take time. I picked numbers to solve this too, but I took numbers with which it took me only a few seconds to get to the correct option. I said to myself, 'nothing says one of them can't be 0. Let x = 1 and y = 0.'

Then [x^3 + (x^2 + x)(1-y) - y] / (x-y) = 3

(A) and (D) are outright out since they equal 0, because y is a factor in them. (B) gives 4, so it is out. (C) and (E) are the only possible options since they both give 3. Now I notice that (C) and (E) differ in the factor (x - y). So I want the difference between them not to be 1 (I think you noticed this too). So now I pick x = 2 and y = 0 (why give up a good thing? y = 0 makes life easy).

[x^3 + (x^2 + x)(1-y) - y] / (x-y) = 14/2 = 7

Option (C) gives 7, while option (E) gives 14.

Intern:

This can be solved quickly if we look for cancellations in the numerator on the basis of the options. The options suggest that y gets eliminated, which means a rearrangement of the numerator will help us eliminate y. When we open up the brackets we find that there are factors of (x-y):

x^3 + x^2 + x - x^2*y - xy - y (numerator)

Let's combine into factors of (x-y), since it is in the denominator:

x^2(x-y) + x(x-y) + 1(x-y)

So we get (x^2 + x + 1) as the final answer, because (x-y) cancels out in the numerator and denominator.

Senior Manager:

Let x=2 and y=3

= $$\frac{(2)^3 + ((2)^2 + (2))(1-(3)) - (3)}{2-3}$$
= $$\frac{8 + (6)(-2) - 3}{-1}$$
= $$\frac{-7}{-1}$$
= $$7$$

(A) $$[(2-1)^2]3=3$$
(B) $$(2+1)^2=9$$
(C) $$(2^2 + 2 +1)=7$$
(D) $$(2^2 + 2 +1)(3)=21$$
(E) $$(2^2 + 2 +1)(2-3)=-7$$
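For a quick machine check of the algebra above, the simplification can be verified symbolically. This is a sketch using sympy and is not part of the original thread:

```python
from sympy import symbols, cancel

x, y = symbols('x y')
expr = (x**3 + (x**2 + x)*(1 - y) - y) / (x - y)
print(cancel(expr))  # -> x**2 + x + 1, confirming answer (C)
```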
# Finding determinant of a generic matrix minus the identity matrix

Find $\det(A - nI_n)$, where $A$ is an $n \times n$ matrix whose entries are all 1, and $I_n$ is the $n \times n$ identity matrix. I have no clue how to approach this. If $A$ is an $n \times n$ matrix whose entries are all $1$, then the determinant is $0$? What does $nI_n$ mean? The identity matrix multiplied by the number of rows/columns? (I realize that they are equal because it is a square matrix.)

I assume you mean $$\det(A - n{\rm I}_n) = ?,$$ where ${\rm I}_n$ is an identity matrix, so $n{\rm I}_n = \operatorname{diag}(n,n,\dots,n)$. In this case, note that $n$ is an eigenvalue of $A$, with the associated eigenvector $e = \begin{bmatrix} 1 & 1 & \dots & 1 \end{bmatrix}^T$. So, zero is an eigenvalue of $A - n{\rm I}_n$, which means that $$\det(A - n{\rm I}_n) = 0.$$

• My professor has not taught us eigenvalues and eigenvectors. Is there any other way of solving this problem? Thanks a lot for your reply! Really appreciate it! :D – antotony Dec 1 '13 at 2:34
• @antotony You don't really need eigenvalues and eigenvectors. Notice that $(A-n{\rm I}_n)e = 0$. Since $e \ne 0$, this means that $A-n{\rm I}_n$ is singular, and all singular matrices have zero determinant. – Vedran Šego Dec 1 '13 at 11:30
• Excellent! Thank you for your help! Appreciate it a lot! :D – antotony Dec 1 '13 at 19:40
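Spelling out the comment's observation as a short worked step: every row of $A$ sums to $n$, so for the all-ones vector $e$,

$$Ae = ne \quad\Longrightarrow\quad (A - nI_n)e = ne - ne = 0.$$

Since $e \neq 0$, the matrix $A - nI_n$ has a nontrivial kernel, hence is singular, and every singular matrix has determinant $0$; no eigenvalue language is needed.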
## Fake Congruence Subgroups and the Hurwitz Monodromy Group J. Math. Sci. Univ. Tokyo Vol. 6 (1999), No. 3, Page 559--574. Berger, Gabriel Fake Congruence Subgroups and the Hurwitz Monodromy Group Suppose $G$ is a finite group, embedded as a transitive subgroup of $S_n$ for some $n$. Suppose in addition that $(\mathcal{C}_1, \dots ,\mathcal{C}_4)$ is a quadruple of conjugacy classes of $G$. In earlier papers ([F], [D-F], [B-F]), it was shown that to these data one can canonically associate a finite index subgroup of $PSL_2(\mathbb{Z})$. For example, when $N$ is an odd integer, $G$ is the dihedral group $D_N$ and the conjugacy classes all consist of involutions, the associated subgroup is $Γ_0(N).$ In this paper we investigate the case in which $G$ is the semidirect product of the abelian group $\mathbb{Z}[ζ_d]/\mathcal{N}$ (where $ζ_d$ is a primitive $d$'th root of unity and $\mathcal{N}$ is an ideal of $\mathbb{Z}[ζ_d]$ relatively prime to $d$) and the cyclic group $\langle ζ_d \rangle$. We relate the corresponding subgroup of $PSL_2(\mathbb{Z})$ to the "fake congruence subgroups" described in \cite{B2}. Specifically, if we let $\mathcal{C}$ denote the conjugacy class of $ζ_d$ in the multiplicative subgroup $\langle ζ_d \rangle$ and choose our conjugacy classes to be $(\mathcal{C}, \mathcal{C}, \mathcal{C}, \mathcal{C}^{-3}) ,$ then the subgroup is in fact $Γ_0(\mathcal{N})$ (defined originally in \cite{B2}; see section 2).
# What is the product of the electron transport chain of photosynthesis?

Aug 22, 2016

ATP, the energy carrier for all cellular processes.

#### Explanation:

To put it simply: in the electron transport chain, the movement of electrons is used to pump hydrogen ions (${H}^{+}$) to one side of the thylakoid membrane (inside the chloroplasts of plants). At the end of the transport chain, the ${H}^{+}$ ions flow from high concentration to low concentration, which fuels the enzyme ATP synthase. This is how ATP is made, the energy carrier used in all cellular processes.

[Diagram: the electron transport chain starts on the left; electrons are passed from one protein complex to another, creating the hydrogen gradient. On the right, the synthesis of ATP, the end product of this process, is shown.]
2013 CMS Winter Meeting University of Ottawa, December 6 - 9, 2013 Connections Between Noncommutative Algebra and Geometry Org: Jason Bell (University of Waterloo) and Colin Ingalls (University of New Brunswick) [PDF] CHRIS BRAV, Institute for Advanced Study Hamiltonian local models for symplectic derived stacks  [PDF] We show that a derived stack with symplectic form of negative degree can be locally described in terms of generalised Darboux coordinates and a Hamiltonian cohomological vector field. As a consequence we see that the classical moduli stack of vector bundles on a Calabi-Yau threefold admits an atlas consisting of critical loci of regular functions on smooth varieties. If time permits, we discuss applications to the categorification of Donaldson-Thomas theory. This is joint work with subsets of Ben-Bassat, Bussi, Dupont, Joyce, and Szendroi. RAGNAR BUCHWEITZ, University of Toronto Scarborough Representation--infinite Algebras from Geometry  [PDF] This is a report on joint work with Lutz Hille on the recent notion of higher preprojective algebras as introduced by Iyama and his collaborators. We show that a tilting object on a smooth projective variety $X$ of dimension $d$ has an endomorphism ring that is representation-infinite if, and only if, it pulls back to a tilting object on the affine canonical bundle over $X$ if, and only if, that endomorphism algebra has minimal global dimension, equal to $d$, and no extensions in negative degrees against twists with negative powers of the canonical bundle. The endomorphism ring of the pullback then yields the corresponding higher preprojective algebra. This proves, for example, that any foundation of a helix on a Fano variety gives rise to such a pair of a $d$-representation-infinite algebra and its accompanying higher $(d+1)$-preprojective algebra. KENNETH CHAN, University of Washington Noncommutative quadrics and $\mathbb{Z}^2$-graded algebras  [PDF] In pursuit of new examples of Artin-Schelter (AS) regular algebras, Zhang-Zhang classified certain $\mathbb{Z}^2$-graded algebras which are double Ore extensions of AS regular algebras of dimension $2$ into $26$ families. Following Artin-Tate-Van den Bergh, we compute the point schemes of these algebras and re-interpret the Zhang-Zhang classification using geometric data. We also show that the associated noncommutative projective schemes are noncommutative quadric surfaces in the sense of Van den Bergh. This is joint work with Daniel Chan and Paul Smith. HAILONG DAO, University of Kansas, Lawrence On noncommutative crepant resolution of non-Gorenstein singularities  [PDF] Let $R$ be a normal domain. Recall that a noncommutative crepant resolution (NCCR) of $R$ is the endomorphism ring $A$ of a reflexive $R$-module $M$ such that $A$ is Cohen-Macaulay over $R$ with global dimension equal to the Krull dimension of $R$. In this talk we discuss a necessary and sufficient condition for existence of NCCRs when $R$ is Cohen-Macaulay containing an algebraically closed field of characteristic $0$. The result allows us to transfer the problem of finding NCCRs to the canonical cover of $R$. This is joint work with Osamu Iyama and Ryo Takahashi. ELEONORE FABER, University of Toronto Non-commutative resolutions of non-normal rings  [PDF] In this talk we consider non-commutative analogs of resolutions of singularities for not-necessarily normal commutative rings. These non-commutative resolutions of commutative rings R are endomorphism rings of certain R-modules of finite global dimension. 
We will in particular consider Van den Bergh's non-commutative crepant resolutions (NCCRs) and non-commutative resolutions (NCRs) as recently defined by Dao, Iyama, Takahashi and Vial. We give some conditions and obstructions for existence of NC(C)Rs over certain non-normal rings. This is joint work with H. Dao and C. Ingalls.

ELLEN KIRKMAN, Wake Forest University Finiteness conditions on the Ext algebra of a monomial algebra  [PDF] Let $k$ be a field and let $A$ be a monomial $k$-algebra, $A= T(V)/I$, where $T(V)$ is a finitely generated tensor $k$-algebra and $I$ is an ideal generated by monomials in $T(V)$. We associate a finite graph $\Gamma(A)$ to $A$, and use $\Gamma(A)$ to characterize finiteness properties of Ext$_A(k,k)$, the Yoneda Ext algebra of $A$, including finite Gelfand-Kirillov dimension, the noetherian property, and finite generation of Ext$_A(k,k)$. (Joint work with Andrew Conner, James Kuzmanovich, and W. Frank Moore)

DANIEL KRASHEN, University of Georgia Derived categories of torsors for Abelian varieties  [PDF] For curves $C_1, C_2$ of genus not equal to $1$ over arbitrary fields, it is known that the bounded derived categories of $C_1$ and $C_2$ are equivalent if and only if the curves are isomorphic. The case of genus $1$ is much richer, and in this talk I'll describe some recent joint work with Ben Antieau and Matthew Ward on derived equivalence for genus $1$ curves over arbitrary fields as well as generalizations to torsors for Abelian varieties.

TOM LENAGAN, University of Edinburgh Totally nonnegative matrices  [PDF] A real matrix is {\em totally nonnegative} if each of its minors is nonnegative, and is {\em totally positive} if each minor is greater than zero. We will outline connections between the theory of total nonnegativity and the prime spectrum of the algebra of quantum matrices, and will discuss some new and old results about total nonnegativity which may be obtained using methods derived from quantum matrix methods. Most of the material is joint work with Stéphane Launois and Ken Goodearl.

GRAHAM LEUSCHKE, Syracuse University Pieri maps and the bound Young quiver  [PDF] The irreducible polynomial representations $L^\alpha V$ of $\mathrm{GL}(V)$ are well-known to be indexed by partitions $\alpha$ with at most $\mathrm{dim}(V)$ parts. The Pieri rules for decomposing the tensor products $V \otimes L^\alpha V$ and $V^* \otimes L^\alpha V$ into irreducibles define, up to some choices of scalars, a system of split inclusions between those representations related by adding or removing a single box from the partitions. The scalars cannot be chosen with complete freedom; in particular there are some unavoidable non-commutativity relations among the Pieri maps. We build a quiver out of the data of partitions, maps, and relations, and show that the path algebra of this bound quiver is a non-commutative desingularization of a generic determinantal ring.

BRENT PYM, McGill University Quantum deformations of projective three-space  [PDF] The classification of noncommutative versions of projective three-space (in the form of four-dimensional Artin--Schelter regular algebras) is an important open problem in noncommutative projective geometry. I will discuss some recent progress on this question, in the form of an explicit description of the possible Calabi--Yau deformations of the polynomial ring.
The approach uses results of Dolgushev and Kontsevich on deformation quantization, together with some Poisson geometry, to reduce the problem to Cerveau and Lins Neto's classification of degree-two foliations of projective space.

DAVID SALTMAN, CCR-Princeton Persistent Fields  [PDF] In 1976 Murray Schacher and Burt Fein showed the following. Suppose $D/F$ is a division algebra, $L/F$ a Galois extension of fields, and every maximal subfield of $D$ contains an isomorphic copy of $L/F$. Then $L = F(\sqrt{-1})$ and $D$ contains $(-1,-1)_F$, the Hamilton quaternions. Louis Rowen and I revisited this sort of question, responding to some questions of Andrei Rapinchuk concerning linear algebraic groups. I will explain this connection and give our strengthening of the Fein–Schacher result. Because of symplectic and orthogonal groups, we also prove parallel results for division algebras with involution.

CHELSEA WALTON, Massachusetts Institute of Technology PBW deformations of smash product algebras from Hopf actions on Koszul algebras  [PDF] This talk concerns the actions of Hopf algebras $H$ on Koszul algebras $B$. The aim of this work is to provide necessary and sufficient conditions for a certain filtered algebra to be a Poincaré-Birkhoff-Witt (PBW) deformation of the smash product algebra $B\#H$. Many ring-theoretic properties are preserved under PBW deformation, and the representation theory of examples of such deformations has been an active area of research (e.g. symplectic reflection algebras, rational Cherednik algebras). Our theorem encompasses known results on PBW deformations in the literature and we provide many interesting examples, both old and new, illustrating our result. This is joint work with Sarah Witherspoon.

MILEN YAKIMOV, Louisiana State University Cluster algebra structures on quantum double Bruhat cells  [PDF] Cluster algebras were defined by Fomin and Zelevinsky for the purposes of the axiomatic study of canonical bases and total positivity. An important open problem for these applications was the Berenstein-Zelevinsky conjecture that the quantized coordinate rings of all double Bruhat cells in complex simple Lie groups admit upper quantum cluster algebra structures. We will give a proof of this conjecture, which also shows that each of the upper quantum cluster algebras coincides with the corresponding quantum cluster algebra. This is joint work with Ken Goodearl, UC Santa Barbara.
## Isobaric

isochoric/isometric: $\Delta V = 0$
isothermal: $\Delta T = 0$
isobaric: $\Delta P = 0$

Tamera Scott 1G: I understand that isobaric means that pressure is constant and ΔP = 0, but what equation does it play a role in?

### Re: Isobaric

Andrea Zheng 1H: For a constant-pressure system, you can use the equation w = −PΔV: since P is constant, it can be multiplied by the change in volume to get the work done.

### Re: Isobaric

Annalyn Diaz 1J: Also, in relation to the change in internal energy, heat under constant pressure is ΔH, which helps with the manipulation of ΔU = q + w.

### Re: Isobaric

Tarika Gujral 1K: Remember the Cv values for ideal gases (monatomic: 3/2·R; linear: 5/2·R; nonlinear: 3R), but note that at constant pressure you use Cp, where Cp = Cv + R for an ideal gas.

### Re: Isobaric

LaurenJuul_1B: It means that you can use the equation w = −PΔV.

### Re: Isobaric

904914909: Isobaric means the pressure is constant, but the volume can still change, so you can still calculate work.

### Re: Isobaric

Jasmine Chow 1F: For an isobaric process, to find work you will want to use w = −PΔV.

### Re: Isobaric

Niveda_B_3I: Isobaric basically means there is no change in pressure, which means you can still have work and heat changing your internal energy.

### Re: Isobaric

Cody Do 2F: Isobaric means there's no change in pressure. As pressure is constant, use the formula w = −PΔV to find the work. Other terms include isochoric, which means no change in volume (so no expansion work), and isothermal, which means no change in temperature (ΔU = 0 for an ideal gas).

### Re: Isobaric

Vicky Lu 1L: If there is no change in pressure (ΔP = 0), then there is a value of pressure to plug into the equation w = −PΔV. If the value of pressure or the change in volume is not given, you can use w = −Δn·R·T to find work. At constant pressure, qp equals ΔH. You can also find q with q = nCpΔT.

### Re: Isobaric

Mhun-Jeong Isaac Lee 1B: To complement the above responses: w = −PΔV is used to calculate work done by expansion against a constant external pressure. Since isobaric means constant pressure, this is the equation to use.

### Re: Isobaric

LeannaPhan14BDis1D: What would work best here is w = −PΔV.
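To make w = −PΔV concrete, here is a quick worked example with made-up numbers (not from the thread): a gas expands from 1.0 L to 3.0 L against a constant external pressure of 1.00 atm, so

$$w = -P\Delta V = -(1.00\ \text{atm})(3.0\ \text{L} - 1.0\ \text{L}) = -2.0\ \text{L·atm} \times \frac{101.325\ \text{J}}{1\ \text{L·atm}} \approx -2.0\times10^{2}\ \text{J}.$$

The sign is negative because the system does work on the surroundings as it expands.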
# Van de Graaff generator: construction, principle, working and uses

# VAN DE GRAAFF GENERATOR

In this article we are going to discuss what a Van de Graaff generator is, and its construction, principle, working and uses, in detail, so stay tuned till the end.

## WHAT IS A VAN DE GRAAFF GENERATOR?

A Van de Graaff generator is an electrostatic generator: a device which uses a moving belt to accumulate electric charge on a hollow spherical metallic body mounted on top of an insulating column. It creates a very high electric potential, producing very high voltage direct current (DC) electricity at low current levels. The generator was invented by the American physicist Robert J. Van de Graaff in 1929. A modern Van de Graaff generator can produce a potential of as much as 5 megavolts, and a tabletop version can produce a potential on the order of 100,000 volts and can store enough energy to produce visible sparks.

## CONSTRUCTION

The basic design of the Van de Graaff generator consists of a large hollow metallic sphere (S) mounted on two insulating supporting columns C1 and C2. A long narrow belt of insulating material is wound around two pulleys P1 and P2. The pulley P2 is located at the centre of the metallic sphere, and the pulley P1 is near ground level. The pulley P1 is connected to a rotating device such as a motor. Two sharp combs are fixed near the pulleys P1 and P2, positioned so that each just touches the belt. The comb B1 is called the spray comb and the comb B2 is called the collecting comb. A discharge tube D, in which the acceleration of ions takes place, is mounted with the point where the ions originate at the head end near the metallic sphere, while its other end is earthed. The whole apparatus is placed in a steel compartment filled with nitrogen and methane at high pressure.

## PRINCIPLE

It is based on the principle that charge given to a hollow spherical conductor is transferred to the outer surface and redistributed uniformly over it. It consists of a large spherical conducting shell S supported on insulating columns. A long narrow belt of insulating material is wound around the two pulleys P1 and P2, and B1 and B2 are two sharply pointed metal combs or brushes.

## WORKING

The spray comb (B1) is given a positive potential by a high-tension source, and positive charge gets sprayed onto the belt. As the belt moves and reaches the sphere, a negative charge is induced on the sharp points of the collecting comb (B2), and a corresponding positive charge shifts immediately to the outer surface of S. Due to the discharging action of the sharp points of B2, the positive charge on the belt is neutralized. The neutralized belt returns down and collects positive charge from B1 again, which is in turn collected by B2, and this action repeats. Thus the positive charge on S goes on accumulating, and in this way potential differences of as much as 6 to 8 million volts (with respect to the ground) can be built up.

## ACTION OF A SHARP POINT

When a spherical conductor of radius r carries a charge q, the surface charge density is given by

$\sigma=\frac{\text{charge}}{\text{area}}=\frac{q}{4\pi r^{2}}$

For a pointed end the radius is very small, therefore $\sigma$ is very large. Particles of air that strike the pointed end become similarly charged and are repelled. In this way an electric wind is set up which carries the electric charge away continuously.
This process of spraying charge is called corona discharge. That is why the conductor used for storing charge is always a sphere of large radius.

## USES

There are many applications of the Van de Graaff generator; some of the important ones are listed below:

1). The Van de Graaff generator was developed as a particle accelerator for physics research. Its high potential is used to accelerate subatomic particles to great speeds in an evacuated tube.

2). The Van de Graaff generator is often used in tandem accelerators.

3). It is used as a physics demonstration to teach electrostatics, so it is often kept in science museums.
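As a rough, illustrative check on why a large sphere helps (the numbers here are standard textbook values, assumed rather than taken from this article): air breaks down in fields of roughly $E_{max} \approx 3\times10^{6}$ V/m, and at the surface of a charged sphere $E = V/r$. So the maximum potential before sparking is about

$$V_{max} \approx E_{max}\, r \approx \left(3\times10^{6}\ \tfrac{\text{V}}{\text{m}}\right)(1\ \text{m}) \approx 3\ \text{MV}$$

for a dome of 1 m radius, which is why megavolt-class machines need large spheres (and why the pressurized gas enclosure described above is used to raise the breakdown field further).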
1. ## Series estimation

Let $A:=\left\{ n\in\mathbb{N}:\exists a,b\in\mathbb{N},\, a,b\geq2:\, n=a^{b}\right\}$. Prove that $$\underset{n\in A}{\sum}\frac{1}{n}\leq\frac{8}{9}.$$

2. ## Re: Series estimation

\begin{align*}\sum_{n \in A} \dfrac{1}{n} & = \sum_{a\ge 2}\left(\sum_{b\ge 2} \dfrac{1}{a^b}\right) \\ & = \sum_{a\ge 2}\left(\sum_{b\ge 0} \left(\dfrac{1}{a}\right)^b - 1 - \dfrac{1}{a}\right) \\ & = \sum_{a\ge 2}\left(\dfrac{1}{1-\tfrac{1}{a}} - 1 - \dfrac{1}{a}\right) \\ & = \sum_{a\ge 2} \left(\dfrac{1}{a-1} - \dfrac{1}{a}\right) \\ & = \left(1 - \dfrac{1}{2}\right) + \left(\dfrac{1}{2} - \dfrac{1}{3}\right) + \cdots\end{align*}

The $n$-th partial sum is $S_n = \sum_{a=2}^{n+1} \left(\dfrac{1}{a-1} - \dfrac{1}{a}\right) = 1 - \dfrac{1}{n+1}$, and $\lim_{n \to \infty} S_n = 1$. So, I am pretty sure you cannot prove that sum is less than or equal to $\dfrac{8}{9}$.

3. ## Re: Series estimation

Originally Posted by SlipEternal: [the derivation above]

Maybe I'm wrong, but I think $$\underset{n\in A}{\sum}\frac{1}{n}<\underset{a,b\geq2}{\sum}a^{-b},$$ because in the second sum a number like $16$ is added twice, as $2^{4}$ and as $4^{2}$.

4. ## Re: Series estimation

I see what you are saying... Let's try to see when we add a term more than once: for each $a\ge 2$, $a^{c\cdot d} = (a^c)^d$. Some examples: $2^4 = 4^2 = 16$, $2^6 = 4^3 = 8^2$, $2^8 = 16^2$, $2^{10} = 4^5 = 32^2$. So, $\sum_{n \in A} \dfrac{1}{n} \le \sum_{a\ge 2}\sum_{b\ge 2} a^{-b} - \sum_{a\ge 2}\sum_{b\ge 2} a^{-2b}$, and $\sum_{a\ge 2}\sum_{b\ge 2} a^{-b} - \sum_{a\ge 2}\sum_{b\ge 2} a^{-2b} = \sum_{a\ge 2}\left(\dfrac{1}{a-1} - \dfrac{1}{a} - \dfrac{1}{a^2-1} + \dfrac{1}{a^2}\right)$. Now the $n$-th partial sum is $S_n = 1 - \dfrac{1}{3} - \dfrac{1}{n+1} + \dfrac{1}{(n+1)^2}$ and $\lim_{n \to \infty} S_n = \dfrac{2}{3}$. This still isn't quite right, but it might give you some ideas. Now I am not adding $1/4^4$ at all.

5. ## Re: Series estimation

$\sum_{n \in A} \dfrac{1}{n} \stackrel{?}{=} \sum_{p\text{ is prime}}\sum_{b\ge 2} p^{-b} = \sum_{p\text{ is prime}}\left(\dfrac{1}{p-1} - \dfrac{1}{p}\right) = \left(1-\dfrac{1}{2}\right) + \left(\dfrac{1}{2} - \dfrac{1}{3}\right) + \left(\dfrac{1}{4} - \dfrac{1}{5}\right) + \cdots$ Then this is less than or equal to $1-\dfrac{1}{3} + \dfrac{1}{4}-\dfrac{1}{5}+\dfrac{1}{6}-\dfrac{1}{7}+\dfrac{1}{10} < \dfrac{8}{9}$

6. ## Re: Series estimation

No, this still misses some. For instance, powers of 6. Let $P = \{n\in \Bbb{N} \mid n\ge 2, \forall b\in \Bbb{N}, b\ge 2, n^{1/b} \notin \Bbb{N} \}$. Then $\sum_{n \in A} \dfrac{1}{n} = \sum_{n \in P}\sum_{b\ge 2} n^{-b}$. I think that is true.
Then $\sum_{n \in P}\sum_{b\ge 2}n^{-b} = \sum_{n\in P}\left(\dfrac{1}{n-1} - \dfrac{1}{n}\right) = \left(1-\dfrac{1}{2}\right) + \left(\dfrac{1}{2} - \dfrac{1}{3}\right) + \left(\dfrac{1}{4} - \dfrac{1}{5}\right) + \left(\dfrac{1}{5} - \dfrac{1}{6}\right) + \left(\dfrac{1}{6} - \dfrac{1}{7}\right) + \left(\dfrac{1}{9} - \dfrac{1}{10}\right) + \cdots$ That is no bigger than $1-\dfrac{1}{3}+\dfrac{1}{4}-\dfrac{1}{7}+\dfrac{1}{9} < \dfrac{8}{9}$

7. ## Re: Series estimation

Originally Posted by SlipEternal: [the argument above]

I don't understand how I can prove that $$\underset{n\in P}{\sum}\left(\frac{1}{n-1}-\frac{1}{n}\right)\leq\frac{8}{9};$$ you have added only a few terms.

8. ## Re: Series estimation

Look at the estimate of the error: $\sum_{n\in P}\left(\dfrac{1}{n-1} - \dfrac{1}{n}\right) = \left(1-\dfrac{1}{3}+\dfrac{1}{4}-\dfrac{1}{7}\right)+\sum_{n\in P\setminus\{2,3,5,6,7\} }\left(\dfrac{1}{n-1} - \dfrac{1}{n}\right)$. Since $\sum_{n\in P\setminus\{2,3,5,6,7\} }\left(\dfrac{1}{n-1} - \dfrac{1}{n}\right)$ is an alternating sum (similar to a telescoping sum), it is no bigger than its first term, which is $\dfrac{1}{9}$. This part should not be difficult to prove.
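As a quick numerical sanity check of the bound (my own sketch, not from the thread): enumerate the distinct perfect powers up to a cutoff and sum their reciprocals.

```python
# Sum 1/n over distinct perfect powers n = a^b (a, b >= 2) up to LIMIT.
# The tail beyond LIMIT is tiny, so this approximates the full sum well.
LIMIT = 10**7
powers = set()
a = 2
while a * a <= LIMIT:
    v = a * a
    while v <= LIMIT:
        powers.add(v)      # a set, so 16 = 2^4 = 4^2 is counted only once
        v *= a
    a += 1
print(sum(1.0 / n for n in powers))   # ~0.8745, comfortably below 8/9 ~ 0.8889
```

The value is about $0.8745$, consistent with the telescoping bound $1 - \tfrac{1}{3} + \tfrac{1}{4} - \tfrac{1}{7} + \tfrac{1}{9} \approx 0.885 < \tfrac{8}{9}$.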
# How to do speech recognition on a single word

I provide a sound signal of about 2–3 seconds to my neural network. I have trained my network on a single word: if I speak "Hello", the network should tell whether "Hello" was spoken, and if some other word like "World" is spoken, it should say "Hello" was not spoken. I just want a classification of the sound: is it a specific command/word or not? What is the best way to do this? I am not that advanced in DNNs; I only know about NNs and CNNs. I want to know if there is a research paper or tutorial, or I need some explanation of how this works.

If you have fixed-length speech data, you can detect the content using only a CNN. You can treat the problem as binary classification (1 if the spoken word is the target, 0 otherwise).
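To illustrate the answer's suggestion, here is a minimal, untuned sketch of such a binary keyword classifier. Everything here is my own assumption rather than part of the question: clips are padded/trimmed to 2 s at 16 kHz, `load_clips` is a hypothetical data loader returning raw audio arrays and 0/1 labels, and the MFCC frame count of 63 follows from librosa's default hop length of 512.

```python
import numpy as np
import librosa
import tensorflow as tf

def to_mfcc(clip, sr=16000, n_mfcc=40):
    """Turn a raw 2 s clip into a (40, 63, 1) MFCC 'image' for the CNN."""
    m = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc)
    return m[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(40, 63, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(word == "Hello")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# clips, labels = load_clips()                  # hypothetical data loader
# X = np.stack([to_mfcc(c) for c in clips])
# model.fit(X, np.array(labels), epochs=20, validation_split=0.2)
```

The important part is the framing: a single sigmoid output with binary cross-entropy loss, where negative examples ("World", silence, background noise) matter as much as positives.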
By Michael Nielsen, November 21, 2019

Note: Rough and incomplete working notes, me thinking out loud. I’m not an expert on this, so the notes are tentative, certainly contain minor errors, and probably contain major errors too, at no extra charge! Thoughtful, well-informed further ideas and corrections welcome.

In these notes I explore one set of ideas for helping address climate change: direct air capture (DAC) of carbon dioxide – basically, using clever chemical reactions to pull CO2 out of the atmosphere, so it can be stored or re-used. It’s tempting (and fun) to begin by diving into all the many possible approaches to DAC. But before getting into any such details, it’s helpful to think about the scale of the problem to be confronted. How much will DAC need to cost if it’s to significantly reduce climate change? Let’s look quickly at two scenarios for the cost of DAC, just as baselines to keep in mind. I’ll discuss how realistic (or unrealistic) they are below.

As of 2014, the United States emits about 6 billion tonnes of CO2 each year. Suppose it cost about 100 dollars per tonne of CO2 to do direct air capture. To capture the entire annual CO2 production from the US would cost about 600 billion dollars. Source: US EPA

That’s a lot of money! As of 2019, the US military budget was about 700 billion dollars, so at 100 dollars per tonne the cost of DAC would be a little less than the military budget. And it would be a little over half of total energy spending in the US (about 1.1 trillion dollars in 2017).

Suppose instead that direct air capture cost 10 dollars per tonne. In this scenario the cost to capture all the US’s CO2 emissions would be about 60 billion dollars per year. That’s still a lot of money, but it’s starting to look like the cost of a lot of things humans already do, in government, in commerce, and even in philanthropy.

A particularly striking cost comparison is to the amount we already spend on cleaning up or preventing air pollution. In 2011 the US Environmental Protection Agency estimated that compliance with the Clean Air Act cost about 65.5 billion dollars in 2010. (The choice of year may sound a little odd and dated – why did I go all the way back to 2010? It’s not a cherrypicked year – rather, the EPA only very rarely reports on the costs of the Clean Air Act, and it happens that 2010 is the most recent year for which an estimate is available. It is, by the way, in line with the EPA’s estimates for earlier years, and it seems reasonable to assume it’s in line with the cost in more recent years.) So if DAC cost 10 dollars per tonne of CO2, the cost to make the US carbon neutral would be comparable to the existing cost of compliance with the Clean Air Act and associated regulations.

To make the comparison more concrete, let me mention the sort of regulations (and benefits) the Clean Air Act involves. One example is the imposition of emissions standards on vehicles, and the requirement that they use catalytic converters to reduce pollution. Catalytic converters typically run to a few hundred dollars, and nearly 20 million cars and trucks are sold annually. Presto: many billions of dollars each year in compliance costs! Of course, what we get in exchange for this money is far cleaner skies over our cities, and a much improved quality of life. I don’t just mean that it’s pleasant to enjoy smog-free days; I also mean that this makes a particularly large difference in the quality of life for asthmatics and people with respiratory diseases, and certainly saves many, many lives.
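As a quick sanity check, here is the arithmetic behind the two scenarios above in a few lines (a sketch; the figures come from the text):

```python
US_EMISSIONS_TONNES = 6e9     # ~6 billion tonnes CO2/year (2014, per the EPA figure above)
for cost_per_tonne in (100, 10):
    total = US_EMISSIONS_TONNES * cost_per_tonne
    print(f"${cost_per_tonne}/tonne -> ${total / 1e9:.0f} billion per year")
# $100/tonne -> $600 billion per year (roughly the military budget)
# $10/tonne  -> $60 billion per year  (roughly Clean Air Act compliance costs)
```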
Overall, the Clean Air Act looks like a very good exchange, in my opinion, though I know people who disagree.

Returning to direct air capture, it’s worth keeping these two numbers in mind as reference points: at 100 dollars per tonne for DAC, the cost of DAC is comparable to the US military budget; and at 10 dollars per tonne for DAC, the cost is comparable to the cost of compliance with the Clean Air Act and related regulations. None of this tells us at what cost point it’s possible to do DAC. It doesn’t tell us how to set up a carbon economy to fund this, at any price point, or how to get the political will for any necessary changes (as was required for the Clean Air Act). Nor does it tell us what to do about other greenhouse gases, or other countries. Still, it’s helpful to have a ballpark figure to aim for. If DAC is scalable at $100 per tonne, it starts to get very interesting. And at $10 per tonne, the costs start to resemble things we’ve done before for environmental concerns. As we’ll see in a moment, the $100 cost estimate is at least plausible with near-future technology. $10 per tonne is more speculative, but worth thinking about.

What I like and find striking about this frame is that many people are extremely pessimistic about climate change. They can’t imagine any solution – often, they become mesmerized by what appears to be an insoluble collective action problem – and fall into fatalistic despair. This direct air capture frame provides a way of thinking that is at least plausibly feasible. In particular, the $10 per tonne price point is striking. The Clean Air Act was contentious and required a lot of political will. But the US did it, and many other countries have implemented similar legislation. It’s a specific, concrete goal worth thinking hard about.

Incidentally, in most analyses like this it’s conventional to engage in a lot of cross-comparison between approaches. Analyses which don’t do such cross-comparisons tend to get criticised: “but why didn’t you consider [other approach] which [works better because]”. Doing such comparisons makes good sense if your goal is to figure out where to invest resources, or what outcomes are likely. But those aren’t the point of this analysis. The point here is to more clearly understand the bounds on the overall complexity of the problem. If some approach can work at a reasonable price point, then better solutions are certainly possible. So let me say: I think we can likely do much better than direct air capture. But I think this analysis is useful for bounding the difficulty of the problem.

I’ve been talking at an abstract level, in terms of government programs and so on. It’s also worth putting these numbers in individual terms. On average, US citizens produce about 20 tonnes of CO2 emissions each year. At $100 per tonne for DAC, that’s $2,000 each year. At $10 per tonne, it’s $200 each year. Again, we can see that the $10 per tonne price point looks very feasible – $200 is quite a bit of money for most people, but it’s about what they routinely spend on many important things in their lives. And while $2,000 really is a lot of money for most people, it’s also much less than what the median US citizen routinely spends on many important aspects of their lives. There’s a lot of variation in other countries, but among large, wealthy countries the US is on the high end of per-capita emissions. In countries like France and Sweden, which have worked hard on reducing emissions, the numbers tend to be more like 5 tonnes of CO2 emissions per year.
And so $100 DAC comes out to $500 per person per year, and $10 DAC to $50 per person per year.

I guess it’s not currently popular to memorize numbers and simple models of climate change. Still, I wish people discussing climate change knew not just these numbers (or some equivalently informative set), but also many more. I’ve sat in meetings about climate change where many attendees appeared to have almost no quantitative awareness of the scale of the problem. Without such an awareness of, and facility with, quantitative models, their only chance of making substantive progress is by accident, in my opinion.

# How much will direct air capture cost, in the near future?

So, how much does direct air capture actually cost? And what are the prospects for driving the costs down? Unfortunately, it’s not very clear. Although technologies for direct air capture have been used since the 1930s, it’s usually been done on a small scale, for reasons unrelated to climate. Doing it at the giant scales – ultimately, billions of tonnes! – required to impact the climate is quite another matter. If you read around about direct air capture, you discover a few things: there are many approaches, with widely-varying cost estimates; and those estimates are often back-of-the-envelope theory, not even based on a pilot, much less an operating large-scale plant. There’s nothing quite as inexpensive as an industrial plant that exists only on paper. Or, as I once overheard someone say, half cynically, half optimistically: “my favourite form of science fiction is the pitch deck.”

One of the most detailed proposals comes from the company Carbon Engineering, which has been working on direct air capture since 2009. In 2018 they published a paper estimating the costs associated to direct air capture. Their basic proposal is to build cooling towers, filled with a liquid that absorbs CO2, and run big fans to blow air from the atmosphere over that liquid. They then run the resulting material through a second process that produces nearly pure CO2 as output. That CO2 then needs to either be stored or else somehow re-used, perhaps as raw material for manufacturing fuel or something similar. Obviously, this is a very simplified account of what they’re doing, one that leaves many details out!

Unlike many proposals, Carbon Engineering isn’t just working on paper. They’ve built a small pilot plant in the town of Squamish, British Columbia, an hour north of Vancouver. It runs at a rate of hundreds of tonnes of CO2 captured per year. They’ve attempted to do detailed costings of all components necessary to make a large-scale plant, one with a capacity, if run at full utilization (they estimate it’ll be run at about 90% utilization), of removing a million tonnes of CO2 from the atmosphere each year. They estimate that it’ll cost from $94 to $232 per tonne of carbon removed. The exact amount depends on details of the configuration the plant is run in, and also reflects things like possible variations in interest rates on debt, and so on.

It’s tempting to be skeptical of this proposal. For one thing, in the short term Carbon Engineering has a vested interest in making their direct air capture scheme look attractive and inexpensive. And there’s also just natural human entrepreneurial optimism, and the fact that, by definition, you can’t anticipate the details of unexpected problems. So caution is called for. I also lack the expertise to seriously evaluate the technical details of their proposal.
While to my eye it looks as though Carbon Engineering has been careful, maybe they’ve missed some important factor, and their estimates are way off. On the other hand, there are at least quite a few eyes on it – although the paper was published just a year ago, in 2018, it’s already been cited 132 times, and it’s clear it’s seen as something of a gold standard.

There are some interesting critiques of direct air capture in the scientific literature. For instance, this 2011 paper by House et al claims a minimal cost of $1,000 per tonne, based on a relatively general argument, whose main input appears to be the cost of electricity. The analysis is quite complicated, and I don’t understand many of the details (working on it, but it’s a real research project to track everything down!) The essential gist seems to be: when you separate the CO2 from the atmosphere, you’re ordering the system, and so necessarily lowering its entropy. The second law of thermodynamics tells us there will be an intrinsic energy cost associated to doing this, even if done with maximal efficiency; that, in turn, puts some constraints on the costs. In any case, they conclude that “many estimates in the literature appear to overestimate air capture’s potential”. The Carbon Engineering paper mentions this paper and similar critiques, and rebuts it with an argument that amounts to “well, we actually went and built a plant which works, and we did detailed costings of how to scale it up”. This is a good start on a rebuttal, but obviously as an outsider it’d be good to go back and dig into both pro and con details much more than I have. That may be a project I do in the future. For the sake of argument, and the remainder of these notes, let’s stick with Carbon Engineering’s numbers, but keep in mind that they should be taken with a grain of salt, until examined much more closely.

I must admit, part of the reason I’m inclined to be sympathetic toward Carbon Engineering’s estimate is that I read lead author (and Carbon Engineering cofounder) David Keith’s book about a different topic, solar geoengineering. Keith seemed to me to be very honest in the book, carefully describing many of his own uncertainties, the complexities of the problem, and giving charitable explanations of the position of his critics. None of that makes him correct, but I’m inclined to believe he’s careful, serious, and worth paying attention to.

An influential prior study of DAC came in 2011 from the American Physical Society (APS). The costs estimated were much higher, more in the ballpark of $600 per tonne of CO2. What accounts for the difference – likely a factor of 3 or more? In the words of Carbon Engineering’s paper:

The cost discrepancy is primarily driven by divergent design choices rather than by differences in methods for estimating performance and cost of a given design. Our own estimates of energy and capital cost for the APS design roughly match the APS values.

This is then followed by a relatively detailed (and, to my eye, plausible) account of the differences in design choices, and how Carbon Engineering improved on the prior design decisions. I’ll say a bit more about that below.

On its face, the numbers in the Carbon Engineering paper don’t seem so encouraging. Let’s call it $200 per tonne. At that level, making the US carbon neutral would cost more than the US currently spends on energy in total. What about other approaches?
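Before moving on, the second-law “floor” from the House et al discussion above can be estimated directly. This is the standard idealized minimum-work calculation for separating a dilute gas; the numbers below are my own back-of-envelope, not taken from the paper:

```python
import math

# Idealized (2nd-law) minimum work to extract pure CO2 from ~400 ppm air.
R, T = 8.314, 298.0                  # gas constant J/(mol*K), room temperature K
x = 400e-6                           # CO2 mole fraction in the atmosphere
w_per_mol = R * T * math.log(1 / x)  # J per mol of CO2, dilute-limit approximation
kwh_per_tonne = w_per_mol * (1e6 / 44.0) / 3.6e6   # 44 g/mol; 3.6 MJ per kWh
print(f"{w_per_mol / 1000:.1f} kJ/mol, ~{kwh_per_tonne:.0f} kWh per tonne")
# ~19.4 kJ/mol, ~122 kWh per tonne: at 5 cents/kWh that's only ~$6/tonne of
# irreducible energy cost, so real systems are limited by inefficiency and
# capital costs, not by the thermodynamic floor itself.
```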
Let’s broaden the field, and consider negative emissions technologies in general, especially those pulling CO2 directly out of the atmosphere in some way. (In contrast to technologies which capture carbon at the source of production – often a less costly, but also less general, more bespoke approach.) Earlier this year, the US National Academies of Sciences, Engineering, and Medicine released an informative report surveying negative emissions technologies. In the report, they attempt to estimate both cost ranges and the scalability of many different technologies. If you’re interested, there’s a good summary on pages 354-356 of the report. I won’t summarize all their results here. But there is much (cautiously) encouraging news.

There are a lot of possible negative emissions technologies. One approach is coastal blue carbon – storing carbon in mangroves, marshes, and sea grasses, the kind of ecosystems one sees along the coastline. This perhaps doesn’t sound terribly promising. But the big advantage is that the carbon tends to be stored underground, in the soil, and can be stored there for decades or centuries. The NAS survey reports a cost estimate of $10 per tonne. That price point is much more encouraging than Carbon Engineering’s. Unfortunately, the report also projects a “potential [global] capacity with current technology and understanding” of 8-65 billion tonnes. That’s not enough for even two years of global CO2 production. So at most, this can simply help out.

Another approach is based on storing carbon in forests. The National Academies report’s estimated price is somewhat higher – from $15-50 per tonne of CO2. (I don’t know if that includes proper burial – when trees die most of their CO2 is typically returned to the atmosphere.) But the approach is also much more scalable, with an estimated global capacity of 570 to 1,125 billion tonnes, using “current technology and understanding”. Per year, the NAS estimates a capacity of 2.5 to 9 billion tonnes, again using current technology and understanding. That’s global, so it’s not enough to make the world carbon neutral (global CO2 emissions are almost 40 billion tonnes per year). But it’s starting to put a sizeable dint in the problem.

(A caveat to the discussion in this section: I haven’t been careful about which of these numbers include the cost of storing or utilizing carbon. That’s a genuine cost. My impression is that it’s likely to cost less than $20 per tonne, maybe much less, or even turn a profit. This is based in part on the cost of storing CO2 in the Utsira formation – a giant undersea aquifer off Scandinavia – where several million tonnes of CO2 have been stored at a Wikipedia-reported price of 17 dollars per tonne. If this impression is correct, then the cost of capturing CO2 is likely to either dominate or, in the worst case, be comparable to the cost of storage and utilization. Still, a more detailed analysis would be careful about this costing.)

# How much can the costs drop?

These numbers are tantalizing. Apart from the (probably not scalable) coastal blue carbon, they’re about an order of magnitude away from where they need to be for climate to be a problem of similar order to air pollution. But the numbers are also based on “current technology and understanding”. How much can these costs drop with improvements in technology? And are there other ways of dropping the effective costs?
The most famous technology cost curves are those associated to Moore’s Law – the exponential increase in transistor density in semiconductors, and associated things like computer speed, memory, energy efficiency, and so on. This is, in fact, a common (though not universal) pattern across technologies. It seems to have first been pointed out in a 1936 paper by the aeronautical engineer Theodore Wright. Wright observed that the cost of producing airplanes dropped along an exponential curve as more were produced. Very roughly speaking, for each doubling in production, costs dropped by about 15 percent. Essentially, as they made more airplanes, the manufacturers learned more, and that helped them lower their costs.

This pattern of exponential improvement is seen for many technologies, not just in semiconductors and airplane manufacture. It’s been common in energy too. For instance, the cost of solar energy has dropped by roughly a factor of 100 over the past four decades. That cost reduction was driven in part by technological improvement, and in part by economies of scale. One wonders: will the cost of direct air capture or some other negative emission technology follow something like Wright’s Law? If so, one might hope that it would drive the cost of carbon capture in some form down below 10 dollars per tonne. Indeed, it’s even possible to start to think about whether there are ways it could be made net profitable.

Unfortunately, while Wright’s Law is interesting, it’s far from a compelling argument. Indeed, it’s a little silly to call it a Law: it’s an observed historical regularity, an observation about the past for certain technologies. If you’re Intel, planning for 5 to 10 or more years from now, you need to set targets. You may perhaps be able to project reliably a few years on the basis of in-train improvements. But longer-term improvements may be more speculative, and require new ideas, ideas that by definition you can’t directly incorporate into your current models. Studying history is an alternative approach to help set plausible targets. But eventually such historical regularities break down. Indeed, we see this in recent years where many aspects of Moore’s Law have started to break down. And so the fundamental problem here is that we don’t know how much the costs of DAC will go down. At best, we can make guesses. That’s a nervous position to be in – the usual situation for challenging problems!

To make this more concrete, let’s come back to Carbon Engineering’s proposal for DAC. Here, in more detail, is how they cut the cost by a factor of 3 or so from the APS study. The details won’t make much sense unless you’ve read the paper (or similar work); what’s important is to read for the general gist:

The cost discrepancy is primarily driven by divergent design choices… The most important design choices involved the contactor including (1) use of vertically oriented counterflow packed towers, (2) use of Na+ rather than K+ as the cation which reduces mass transfer rates by about one-third, and (3) use of steel packings which have larger pressure drop per unit surface area than the packing we chose and which cost 1,700 $/m³, whereas the PVC tower packings we use cost less than 250 $/m³. … In rough summary, the APS contactor packed tower design yielded a roughly 4-fold higher capital cost per unit inlet area, and also used packing with 6-fold higher cost, and 2-fold larger pressure drop.
The paper continues with a discussion of why the APS made those different design choices, and also with a discussion of some differences in the way input energy was used in Carbon Engineering’s design versus the APS design. I’m not an industrial chemist, but to me those changes sound like low-hanging fruit. But they’re also not the kind of low-hanging fruit that the APS could have planned for in 2011. If they could have planned for it, they would have come up with a different cost estimate.

Of course, low-hanging fruit is what you’d expect. Carbon Engineering has been, until recently, a tiny company, with a small handful of staff. They were founded in 2009, and appear to have subsisted on relatively small grants and seed funding until 2019, when they raised 68 million dollars. It’s interesting to think about what they’ll achieve with that funding. Hopefully, they’ll be able to pick some higher-hanging fruit. Assuming their initial cost estimates bear out for this design, will it be possible for them (or someone else working on direct air capture) to achieve another factor of 3 reduction in cost?

I’ve been focusing on cost reductions due to better design and technology. In fact, part of the job will be done in a very different way. The carbon intensity of a country is the CO2 emissions per dollar of GDP. Carbon intensities in the US dropped more than 18% per decade from 1990 to 2014, the latest year for which the World Bank reports numbers. This isn’t surprising: all other things equal, most people and companies try to keep doing things in more energy-efficient ways, since energy costs them money. If this drop in carbon intensity continues, it means that, considered as a fraction of the total economy, the cost of DAC will go down. Effectively, it’s as though we’re automatically making progress toward $10 DAC, at a rate of about 18 percent per decade. On its own that won’t make DAC economically feasible. But over two or three decades, it’ll help a lot.

It’s also interesting to think about cost reductions due to plausible emissions reductions. As noted earlier, in countries such as France and Sweden, average emissions per capita are something like 4 times lower than in the US. This is often attributed causally to their extensive use of nuclear power; nuclear certainly plays a large role, but as far as I can see it can only be part of the story (since electricity production is only responsible for a moderate fraction of total emissions). Rather, it’s that they’ve also been more serious than the US in other ways about reducing emissions; their use of nuclear is, in part, a symptom of this seriousness, not the cause. In any case, such examples illustrate that nuclear plus other moderate efforts can lead to large emissions reductions. (I should point out: of course, drops in carbon intensity and emissions reductions are intertwined, not independent! I’ve mentioned them separately because they’re very different kinds of goals, with, for example, different kinds of expression in policy.)

Of course, neither changes in carbon intensity nor emissions reductions are literally the same as a drop in the price of direct air capture. But considered as a fraction of the economy they may as well be; it’s a kind of drop in the effective cost of DAC.
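To make those two levers concrete, here’s a tiny back-of-envelope sketch using the 15%-per-doubling and 18%-per-decade figures quoted above (my own illustrative arithmetic; treat it as a sketch, not a forecast):

```python
import math

# Wright's Law: ~15% cost drop per doubling of cumulative capacity.
# How many doublings would take DAC from $200/tonne to $10/tonne?
doublings = math.log(10 / 200) / math.log(0.85)
print(f"{doublings:.1f} doublings, i.e. ~{2**doublings:,.0f}x cumulative scale-up")

# Carbon intensity: an 18% drop per decade compounds over three decades.
print(f"effective cost factor after 30 years: {(1 - 0.18)**3:.2f}")
# -> ~18.4 doublings (~350,000x), and ~0.55 of today's effective cost
```

Read this way, Wright’s Law alone is a heavy lift; it’s the combination of learning-curve gains, falling carbon intensity, and emissions reductions that makes a large overall drop plausible.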
And so I think a factor of 10 or more reduction in the effective cost of DAC is plausibly possible, in part through technological improvements, in part through emissions reductions as already implemented in countries with similar standards of living, and in part through reduced carbon intensity. Put another way: it’s plausible that doing DAC to make the US carbon neutral ends up costing an amount comparable to or less than the current cost of the Clean Air Act, as a fraction of the total economy. That seems encouraging.

I’ve focused a lot on direct air capture, and it sounds like I’m bullish about this approach. Actually, I’m too ignorant to have a really strong opinion. From my point of view, a big part of concentrating here was simply that (a) there was what seemed a particularly juicy paper to dig into, and (b) as I said at the start, this could be treated as a boundary case, setting a kind of worst-case scenario. It’s entirely possible – indeed, likely – that other approaches to dealing with climate are considerably better. But this already looks promising. My tentative conclusions are that direct air capture offers a promising but far from certain approach to making major progress on climate change. And, more broadly: negative emissions technologies offer a promising approach to making major progress on climate change.

I got interested in direct air capture in part after reading Matt Nisbet’s survey of US climate and energy foundation funding (summary here, with a link to the full survey). Here’s his summary chart. Note that it covers funding from 19 major funders of climate and energy work, and the years from 2011 to 2015:

You see enormous sums of money going into renewable energy, sustainable agriculture, and into opposing fossil fuels. But just a tiny fraction of the spending – 1.9%, or just over 10 million dollars – went to other low carbon energy technologies. And of that, just $1.3 million went to evaluate carbon capture and storage. Now, admittedly, these numbers focus on just a tiny slice of the total funding pie (US foundation funding), and are somewhat outdated. In particular, the last few years have seen substantial progress on investment in negative emissions technologies (as witness the $68 million invested in Carbon Engineering). Still, my impression is that the qualitative picture from Nisbet’s research holds more broadly. Humanity’s collective priorities are research and development focused on renewable energy sources, especially solar and wind; and anti-fossil fuel messaging and lobbying. By contrast, negative emissions technologies like DAC are receiving relatively little funding.

As a non-expert, I’m reluctant to hold too firm opinions here. But, frankly albeit tentatively, I think this makes no sense! Of course, renewables (say) should receive a lot of funding. But if you genuinely believe climate change is a huge threat, then we should collectively and determinedly pursue lots of different strategies. Direct air capture (and, more broadly, negative emissions) look very underfunded and underexplored. Yes, it requires considerable improvement. But compared to other historic technologies, it’s within striking distance of being able to have a huge impact, especially considering the relatively minor effort so far put into it.

# Conclusion

This is a tiny slice through a tiny slice (direct air capture) of the climate problem. Climate is intimidating in part because the scale of understanding required is so immense.
You can spend a lifetime studying the relevant parts of just one of: the climate itself, the energy industry, solar, wind, nuclear, politics, economics, social norms. It’s extremely difficult to get an overall picture; it’s easy to miss very big things. I wrote these notes mostly because the only way I know to get a handle on big problems is to start by doing detailed investigations of very tiny corners. So consider this one very tiny corner. To finish, I can’t resist reporting an uncommon opinion: overall, and over the long term, I’m optimistic about climate. I’ve focused on direct air capture, but it seems to me there are many other promising approaches. I believe humans will figure out how to address climate change. There will be a lot of suffering along the way, much of it falling to the world’s poorest people. That’s a terrible tragedy, and something we’re too late to entirely avert; indeed, it’s very likely already happening. But over the long term work on this problem will also lead us to strengthen existing institutions, and to invent new institutions, institutions which will make life far better for billions of people. It’s a huge challenge, but I think we’ll rise to the challenge, and make human civilization much better off for it. Acknowledgments: Thanks to Andy Matuschak for conversations about climate.
Published on 02/14/2023

It is known that neural networks have the problem of being over-confident when directly using the output label distribution to generate uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose the uncertainty quantification capability so that the learned model can achieve desired performance in accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given, and we focus on the uncertainty quantification task at the second stage of training. We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities, which is effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and easily adapt to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate our proposed meta-model approach’s flexibility and superior empirical performance on these applications over multiple representative image classification benchmarks.

This work was published in AAAI 2023. Please cite our work using the BibTeX below.

@misc{shen2022posthoc,
  title={Post-hoc Uncertainty Learning using a Dirichlet Meta-Model},
  author={Maohao Shen and Yuheng Bu and Prasanna Sattigeri and Soumya Ghosh and Subhro Das and Gregory Wornell},
  year={2022},
  eprint={2212.07359},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
# Tag Info

51 Congratulations on your first large project. I'm not sure whether this review has gone a little bit overboard, as it is now both a review as well as a mini tutorial. Either way: What the char? Charmander-char Char cha Charmander Char. Char? Charmander! Is it confusing in general? Char! I mean, yes. Mostly due to the names of your functions. As ...

47 This is a pretty reasonable start on a simple interpreter. Edward's suggestions are all good; a few additional suggestions: interpret("+++++++++++++[->.... Please break up that long line. C allows you to break up literal strings "like " "this." void goToLoopEnd(char** ip) { ... void goToLoopStart(char** ip) { ... If you wrote these instead as char *...

45 The comment ; *argv should be ; argv, since you are not yet dereferencing the pointer. After a cmp instruction, you should prefer je over jz, since it is nicer to the human reader. Oh, the old times, where you had to tell the assembler to jmp short because it couldn't figure it out on its own. :) In the "run the BF program" section, I would have changed esi ...

27 Here are some things that may help you improve your program. In all, it seems to be nice, straightforward code that does what it needs to do. Good start! Use only required #includes The code has #include <stdbool.h> but doesn't use booleans. It also appears that nothing from <stdlib.h> is used either. Only include files that are actually ...

18 I was asked by @Timtech to join + post an improvement. Using arrays is a lot easier to understand, and there were so many optimizations that I decided to completely re-write the code. I'm sure it could be improved by other advanced programmers like me, as I'm using several long expressions here... note that L1 represents list #1 (2nd + 1 on the calculator) ...

17 I second the recommendation to invent a grammar and use a real parser. A recursive descent parser is very easy to write, and a great place to start. You might also have a look at a PEG (Parsing Expression Grammar). It's almost as easy to use a PEG as it is to write a recursive descent compiler, and a PEG can parse more grammars than can a recursive ...

16 Good job on getting it to work! I've used a string reversal program to check your interpreter and it works well. However, it also uses ~36MB of memory, which is too much. A tape goes both directions equally fast Forward, rewind. The basic operations for a tape. Whether it's VHS, a cassette, or an LTO-8, they all work the same: accessing the next and ...

15 Move the memory into its own class. The concept of the tape on which the BF program operates can be cleanly made its own class with a limited interface. Make private or eliminate interface cruft: perform, getMemoryIndex, reset, addCommands, setCommands, step. For a BF interpreter, it really only makes sense to set a particular program and then run it. ...

15 Beyond the bad names that you are already aware of, I see a few things that could be improved. str1 and str2 are bad variable names as well. They should be str and chr respectively. Those names would properly represent the data and make this code much more understandable. It would be instantly clear to anyone looking at the code that you're looping through ...

14 Obviously, if you have any questions just ask. This is my first "big" project in OCaml, but I'd rather you didn't sugarcoat criticism. It looks like a very fun project, congratulations! :) I have a few comments, which all concern the style of writing.
This is an important topic, because good style eases maintainability. The name of constructors: all ...

13 Loki Astari already covered a number of good points, which I will not repeat. Algorithms and data structures: the standard library contains a number of ready-made algorithms and data structures that can make your code easier to read and understand. For example, you have these lines: opp_count = 0; // line 393, new lines removed for brevity for (i = 0; i &...

12 Common beginner mistakes. Stop doing this: using namespace std; See "Why is 'using namespace std;' considered bad practice?" Namespaces: all your functions seem to have the prefix rdo_ bool rdo_ws(char c) char rdo_expr_item_type(char c) string rdo_opp_to_string(char opp) bool rdo_is_num(string is_num) void rdo_count_opp(bool to_count_or_not) string ...

12 This looks really good overall. I have pretty much no knowledge of brainfuck, but it was still easy to understand the code, and nothing jumps out at me as glaringly wrong. There are, however, a few (mostly) minor issues. run flag and compiler optimizations: volatile is a widely misused variable modifier, but you actually have one of the textbook examples ...

12 Looks great! Just a few (small) suggestions. Style: it really is quite readable and straightforward, but breaking out a few functions certainly wouldn't hurt. Even though you're not using using namespace std; (woo!), the variable name stack still makes me a bit uncomfortable (then again, I can't think of a name for it that wouldn't end up being gross). If ...

11 Why do you allow each cell of the tape to hold numbers from -1 to 128? That seems like an odd range. In move_backward(), why do you allow the tape to reach position -1? In move_forward(), why do you allow the tape's position to be beyond the end of the tape? In general you should be using exclusive comparisons (without the =), as you'll make fewer mistakes.

10 Use case instead of == and guards everywhere: prevBracketIndex :: Int -> Int -> Array Int Char -> Int prevBracketIndex i depth cs = case cs ! i of '[' -> if (depth - 1) == 0 then i else prevBracketIndex (i - 1) (depth - 1) cs ']' -> prevBracketIndex (i - 1) (depth + 1) cs _ -> prevBracketIndex (i - 1) depth cs Use State and ...

10 It is a beginning, but currently your code is just a thin wrapper around Python function calls. And of course, there is a security problem with "eval", because someone could format your hard disk with the right line if you execute scripts from untrusted sources. Maybe you should invent a nice syntax for your language. An easy method to write a parser for it, ...

9 As the classic "Stop Writing Classes" talk puts it: the signature of "this shouldn't be a class" is that it has two methods, one of which is __init__. Virtually all of your classes fall foul of this; just because you can use OOP doesn't mean you always should. Looking at the use of the classes in the code, this was a big red flag: code_input = GetCodeInput(...

9 Profile. You can only improve what you can measure, so first of all let us run callgrind to check where we spend most of our time:

$ rustc -C opt-level=3 -g brainfuck.rs
$ valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes --simulate-cache=yes ./brainfuck ahpla.bf
$ callgrind_annotate callgrind.out.*

We will end up with something similar to the ...

9 Deficiencies? I don't really see any. Improvements? Maybe :) You don't handle at all the return values of putchar, getchar and fflush.
The Wikipedia article has some hints about how different implementations handle an EOF from the user input. Yours works as well, but is this really what you want? In build_jump_table, the switch used to check whether *c is a ...

9 Use NULL for null pointers. NULL conveys your intention to use a pointer better than 0: if (file == NULL) { ... } As a bonus, in C the macro NULL typically expands to something like (void*)0; if you have warnings for potentially unsafe type conversions enabled (-Wconversion for GCC), the compiler will notify you if you try to assign 0 to a pointer or NULL ...

9 std::array<unsigned char, 30'000> cells; int current_index = 0; case '<': if(current_index == 0) { current_index = cells.size() - 1; } else { current_index--; } break; Try using a truly infinite tape instead. (Well, up to the limits of your memory allocator, anyway.) Here's what part of that would look like. Can ...

8 This being Python, it should be relatively easy to present the illusion of an infinite tape, at least in the positive direction. I don't see a reason that cell_amount has to be specified, and the user shouldn't have to worry about such details. Your input and output routines are wrong: the . instruction should print one character, interpreting the cell ...

8 bcode = [] stack = [] regs = [] sp = 0 bcd = [] Global variables like this are frowned upon. To be pythonic you should really put them in a class or something. ''' Instructions ''' OP_EOP = 0 OP_EOI = 1 OP_PUSH = 2 OP_PRINT = 3 def load_program(f2o): f2o? What in the world is that? f = open(f2o, "r") f2 = f.read() My recollection of RPython ...

8 Usability: how the f.[< do I use this interpreter when it doesn't come with a main() function? Here's the simplest implementation I came up with, using java.nio.file.*: public static void main(String[] args) throws IOException { String code = new String(Files.readAllBytes(Paths.get(args[0]))); // TODO: Implement InputStreamToByteIteratorAdaptor ...

8 This line here: return EXIT_SUCCESS is equivalent to this: return 0; which is automatically inserted by the compiler if it isn't found. In short, return EXIT_SUCCESS can be removed. In addition, a few of your error messages don't include a newline at the end, as seen in this line here, and two other places: std::cerr << "compilation terminated."; ...

8 if ((OptimizationLevel & OptimizationLevel.Level1) >= OptimizationLevel.Level1) { if (lastSymbol != symbol && lastSymbol != TokenSymbol.None && (lastSymbol == TokenSymbol.Decrement || lastSymbol == TokenSymbol.Increment || lastSymbol == TokenSymbol.MoveLeft || lastSymbol == TokenSymbol.MoveRight)) { Lines.AddRange(...

8 I really don't like that you detect which version of _GetchX to use via an ImportError - that isn't obvious to me at all. I also don't like that you keep importing things locally. I think you can solve this like so: import platform system = platform.system() import sys if system == "Windows": import msvcrt class _Getch: """Gets a ...

8 Bugs: your program doesn't work with nested loops. If you have + jump into the loop [ this is the first loop [ this is the second loop (1) - decrease current value to get out of loop ] + we increment our current value to get back to the start ] whoops, we go to (1) You need to remember the position of the ...

8 Expectations: Lisp is well-known for its REPL, so I'd expect to see at least the following 3 functions in any Lisp interpreter: read, evaluate and print. read takes a string and returns a form (a number, string, symbol, list, and so on).
For example: read("(+ 4 5)") should return a list that contains the symbol + and the numbers 4 and 5. evaluate takes a ...
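Several of the answers above circle around the same two implementation points: precompute a bracket jump table, and pair brackets with a stack so that nested loops resolve correctly. As a neutral illustration (not taken from any of the reviewed submissions), a minimal Python interpreter along those lines might look like this:

```python
import sys

def run_bf(code: str, tape_len: int = 30_000) -> None:
    """Minimal brainfuck interpreter with a precomputed jump table.

    Matching brackets are paired with a stack, so nested loops are
    handled correctly (the bug one of the answers above warns about).
    """
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()            # raises on unbalanced brackets
            jump[i], jump[j] = j, i

    tape = bytearray(tape_len)
    ip = dp = 0                        # instruction and data pointers
    while ip < len(code):
        c = code[ip]
        if c == '>':
            dp += 1
        elif c == '<':
            dp -= 1
        elif c == '+':
            tape[dp] = (tape[dp] + 1) % 256
        elif c == '-':
            tape[dp] = (tape[dp] - 1) % 256
        elif c == '.':
            sys.stdout.write(chr(tape[dp]))
        elif c == ',':
            ch = sys.stdin.read(1)
            tape[dp] = ord(ch) if ch else 0  # one EOF convention of several
        elif c == '[' and tape[dp] == 0:
            ip = jump[ip]              # skip the whole loop, nesting included
        elif c == ']' and tape[dp] != 0:
            ip = jump[ip]              # jump back to the matching '['
        ip += 1

run_bf('++++++++[>++++++++<-]>+.')     # prints 'A'
```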
# Appell's equation of motion

In classical mechanics, Appell's equation of motion (also known as the Gibbs–Appell equation of motion) is an alternative general formulation of classical mechanics, described by Josiah Willard Gibbs in 1879[1] and Paul Émile Appell in 1900.[2]
# Rare Earths in 2017, tables-only release

## Detailed Description

Advance data tables (XLSX format) for the rare earths chapter of the Minerals Yearbook 2017. A version with an embedded text document and also a PDF of text and tables will follow.
## Introduction

The manipulation of spin-waves represents a promising alternative to conventional electronics for the development of energy-efficient computing platforms. In the last few years, many concepts of spin-wave based devices have been proposed1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19. However, the experimental realization of a nanoscale spin-wave circuitry for guiding, manipulating, and controlling the interference of magnons, which is the basis for realizing nanomagnonic devices, is still missing. A major challenge is the efficient channeling and steering of spin-waves, which so far has been achieved in micron-sized elements using external fields20,21,22,23,24 or arrays of nanomagnets25. On the route toward nanomagnonics, the use of nanoscale spin-textures for controlling the propagation of spin-waves is highly appealing. Recently, the concept of spin-wave channeling within domain walls has been theoretically proposed26,27,28, and experimental evidence for spin-wave confinement at a domain wall has been provided on a straight wall stabilized via shape anisotropy29. However, so far, the difficulty of engineering the spin-texture at the nanoscale with conventional techniques has hindered the realization and investigation of magnonic circuits based on domain walls. In particular, the steering of spin-waves by means of curved domain walls and the use of complex spin-textures for controlling the interference of multiple modes propagating within a nanoscale spin-wave circuitry remain elusive. Furthermore, the direct observation and a detailed investigation of these confined spin-wave modes are still missing.

In this work, we demonstrate the fundamental building blocks of spin-wave circuitry, i.e., arbitrarily shaped magnonic nanowaveguides and a prototypic spin-wave circuit allowing for the tunable superposition of signals propagating in two converging waveguides, by patterning the spin-texture of a ferromagnetic thin film via thermally assisted magnetic scanning probe lithography (tam-SPL)30,31. The absence of physical patterning and the reversibility of tam-SPL allow the realization of fully reconfigurable nanomagnonic structures based on spin-textures with engineered functionality. Through space- and time-resolved scanning transmission X-ray microscopy (STXM), we provide direct evidence for the channeling and steering of localized spin-wave modes propagating along straight and curved domain wall-based waveguides, with no need for an applied bias magnetic field. Furthermore, we demonstrate the tunable spatial superposition and interference of confined spin-wave modes propagating within two converging nanoscale waveguides. The experimental realization of reconfigurable nanomagnonic circuits based on domain walls paves the way to the use of engineered spin-textures as building blocks of spin-wave based computing devices.

## Results

### Experimental protocol and sample structure

In Fig. 1, we report a sketch of the experiments. Different spin-textures were patterned in an exchange-biased ferromagnet/antiferromagnet bilayer (Fig. 1a) by sweeping a heated scanning probe in an external magnetic field, which sets the unidirectional magnetic anisotropy strength and direction in the ferromagnetic film. This allows for the nanopatterning of engineered spin-configurations, as in the case of the curved 180° Néel domain wall of Fig. 1a, which is stabilized by patterning two magnetic domains with antiparallel remanent magnetization.
Straight and curved domain walls, as well as complex spin-textures comprising two converging domain walls, were obtained by controlling the geometry of the area scanned by the tip. Spin-waves were excited by injecting a radio frequency (RF) current in a microstrip antenna. Static and time-resolved images with magnetic contrast were acquired via STXM by measuring the transmitted X-ray intensity, so that the X-ray magnetic circular dichroism (XMCD) provides contrast to the in-plane component of the magnetization Mx (Fig. 1b) (Methods section). The configuration of the spin-wave modes is shown in Fig. 1c. Spin-waves are confined in the transverse direction of the wall by the reduced local effective field arising from the inhomogeneous magnetization profile, and propagate freely along the wall26,29. Such modes, called Winter magnons26,32,33, are characterized by an elliptical precession of the spins along the wall, with the major axis lying in the film plane, associated with a propagating flexural motion of the wall profile, analogous to transverse elastic waves on a string.

Figure 2a, b shows the sample structure, consisting of an exchange-biased Co40Fe40B20 20 nm/Ir22Mn78 10 nm/Ru 2 nm multilayer34, and the optical image of the sample. The white dashed line indicates the orientation of a patterned domain wall with respect to the antenna. In Fig. 2c–e, the static STXM images of the spin-textures patterned via tam-SPL are reported, where the dark (bright) contrast corresponds to Mx > 0 (<0). The black region at the bottom of the figure shows the boundary of the microstrip. For the straight and parabolic domain walls of Fig. 2c, d, respectively, the images acquired at zero external field display sharp 180° Néel walls. The corresponding micromagnetic simulations (Fig. 2f, g) show a 180° spin rotation within the sample plane, with the central spins (white regions) defining the domain wall profile lying along the y-axis30,35. Figure 2e shows the static image of a more complex spin-texture comprising two 180° Néel walls tilted by a 30° angle from each other, sharing a common apex. In this case, a static 1.5 mT magnetic field was applied in the x-direction in order to precisely control the distance between the two domain walls and the position of the apex (see discussion below). The image of the same structure acquired at remanence is shown in Supplementary Figure 1. The corresponding micromagnetic simulation (Fig. 2h) shows that the two domain walls merge at the intersection, where the magnetization orientation is determined by the spin configuration of the two walls. After the intersection, a narrow "transition" region is formed, where the magnetization rotates continuously until a uniform magnetization orientation is reached within the domain.

### Spin-wave propagation along patterned domain walls

Spin-waves were imaged stroboscopically via time-resolved STXM (Methods). In Fig. 3, the results for straight and curved domain walls are reported. Figure 3a shows snapshots of the normalized Mx contrast for the straight wall, calculated as the magnetic deviation ΔMx(t) from the time-averaged state ⟨Mx(t)⟩, acquired at different times within a single period7,36. Gaussian filtering was used to enhance the contrast (see Supplementary Note 2, Supplementary Table 1 and Supplementary Movies 1, 2 for the video and raw-data video). The excitation frequency was 1.28 GHz and no static external magnetic field was applied.
The normalized Mx contrast shows spin-waves confined at the domain wall and propagating away from the antenna located at the bottom of the panels. The spatial map of the amplitude of the spin-wave excitation is reported in Fig. 3b. The map is obtained by fitting the time-trace of each acquired pixel (raw data) with a sinusoidal function, and plotting for each pixel the amplitude of the corresponding fit (see Supplementary Note 3 and Supplementary Figure 2). Both Fig. 3b and the horizontal profile extracted from it at x = 1.1 μm from the stripline (Fig. 3c) show that the mode is confined at the domain wall, with a lateral extension (FWHM) of 120 nm. In order to demonstrate the propagating character of the spin-waves, the time-traces (sinusoidal fits) of pixels located within the domain wall at different distances from the antenna are plotted as a function of time (Fig. 3d). The time delay observed when moving away from the antenna corresponds to a linear phase shift with distance (Fig. 3e), clearly confirming the propagating character of the mode and allowing us to estimate its wave vector.

Spin-waves propagating along a curved path, at remanence, are shown in the snapshots of Fig. 3f, extracted from Supplementary Movie 3 (see Supplementary Movie 4 for the raw data and Supplementary Table 3, Supplementary Movies 13, 14 for the Mx(t)). The excitation frequency was 1.11 GHz. Gaussian filtering was used to enhance the contrast. The mode is confined at the wall and follows its profile, showing a lateral extension of 115 nm (Fig. 3g, h). Both the sinusoidal fits (of raw data) as a function of time (Fig. 3i) and the phase analysis of Fig. 3j confirm the propagating nature of the mode. Figure 3 shows spin-waves confined at the patterned walls detected up to 2 μm away from the antenna. Longer propagation distances are reported in Supplementary Movies 5, 6, where we show spin-waves propagating along a curved domain wall, clearly detectable up to 3.5 μm from the microstrip.

Micromagnetic simulations of the confined spin-wave modes are presented in Fig. 4 (see Methods and Supplementary Figure 3 for details). Figure 4a, b shows micromagnetic simulations for the straight (curved) wall, carried out at remanence, driven by a line source of sinusoidal magnetic field at 1.28 GHz (1.11 GHz), as in the experimental data of Fig. 3. The Mx and Mz components are reported in the left and right panels, respectively, while the magnetic deviation ΔMx(t) from the time-averaged value ⟨Mx(t)⟩, which corresponds to the STXM signal reported in Fig. 3, is shown in the central panel. The flexural motion of the wall, associated with the spin precession within the wall, can be observed for both straight and curved geometries. In good agreement with the experimental results, these findings indicate that wall-bound Winter-like modes, propagating along the wall, can be excited in both the straight and curved geometries. Figure 4c reports the dispersion relation simulated for spin-waves confined at a straight 180° Néel domain wall, together with the experimental one. The simulations were performed with the same geometry as in Fig. 4a, but using a sinc-shaped field pulse as excitation (Methods). The experimental values of the wave vector were extracted from linear fits of the phase shift vs. distance curves obtained from experiments performed at different excitation frequencies (see the discussion of Fig. 3 and Supplementary Note 3).
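As an illustration of the analysis pipeline described above, the per-pixel sinusoidal fit and the extraction of the wave vector from the phase-versus-distance slope could be implemented along the following lines. This is a sketch under assumed array shapes, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

f_exc = 1.28e9  # excitation frequency in Hz (straight-wall measurement)

def sine(t, A, phi, off):
    """Fixed-frequency sinusoid fitted to each pixel's time-trace."""
    return A * np.sin(2 * np.pi * f_exc * t + phi) + off

def fit_pixels(traces, t):
    """traces: (n_pixels, n_frames) STXM deviation signal; t: frame times (s)."""
    amps, phases = [], []
    for y in traces:
        popt, _ = curve_fit(sine, t, y, p0=(np.ptp(y) / 2, 0.0, y.mean()))
        A, phi, _ = popt
        amps.append(abs(A))
        phases.append(phi)
    # Unwrap so the phase can grow linearly with distance from the antenna.
    return np.array(amps), np.unwrap(np.array(phases))

def wave_vector(phases, dist):
    """Slope of phase vs. distance along the wall gives k in rad/m."""
    k, _ = np.polyfit(dist, phases, 1)
    return k
```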
In good agreement with the experimental results, the simulations confirm the propagating character of the excitations, showing a positive dispersion and the presence of a small bandgap below 0.3 GHz, which can be ascribed to the residual effective field within the domain wall due to the exchange bias and uniaxial anisotropies26. The simulated (experimental) spin-wave group velocity vg = ∂ω/∂k, extracted at k → 0, is vgSim = 2.30 ± 0.34 km s−1 (vgExp = 2.77 ± 1.4 km s−1). Notably, the demonstration of the propagating character and of the positive dispersion confirms that such guided modes can be used for transporting information within integrated nanomagnonic circuits.

Waveguides for efficiently controlling and manipulating confined spin-wave modes constitute fundamental building blocks for the realization of nanomagnonic devices. In the following, we demonstrate a nanomagnonic circuit allowing for the tunable spatial superposition and interference of guided spin-wave modes propagating in two converging waveguides.

### Tunable spatial superposition of confined modes

Figure 5a shows the STXM images of a spin-texture comprising two domain walls (see also Fig. 2e). By applying a small static magnetic field in the −x-direction, ranging from 2 mT down to 1.68 mT, the distance between the two domain walls (dashed white lines) can be controlled. In the top panel, the two domain walls are spatially separate. By decreasing the field, the two walls are brought closer (central panel) and finally converge at the common apex (lower panel). Figure 5b shows STXM snapshots of the three different configurations for a 1.28 GHz excitation frequency. The images were smoothed with a Gaussian filter to increase contrast (see Supplementary Note 2, Supplementary Table 2 and Supplementary Movies 7–12 for the videos and raw data). The normalized Mx contrast shows, in all three cases, two guided spin-wave modes propagating from the antenna with different relative phases. These two modes, which are spatially separate close to the antenna, approach each other as the domain walls converge, and partially overlap for low applied fields. In order to better visualize the progressive overlapping of the two modes, each pixel of the data of Fig. 5b was fitted with a sinusoidal function (see Supplementary Note 3 and the discussion of Fig. 3). Figure 5c shows the amplitude of the sinusoidal fit along the horizontal profiles of Fig. 5a (green dashed lines), in the three configurations. The two peaks, with full width at half maximum (FWHM) of around 200 nm, correspond to the two guided modes. For an applied magnetic field of 2 mT, the two modes are separated by 810 nm and do not overlap. By decreasing the field down to 1.68 mT, the two modes are brought closer and partially overlap, with a peak-to-peak distance of 340 nm. In the top panels of Fig. 5d the sinusoidal fits along the horizontal profiles of Fig. 5a (green dashed lines) are plotted as a function of time for the different applied fields. In the bottom panels, single sinusoidal profiles are extracted from the positions marked by the color-coded stars in the top panels. Blue and yellow curves show the magnetization dynamics at the maximum amplitude of the two guided modes. As expected, their phase difference depends on the applied field, because of the modulation of the waveguide geometry and spin configuration. The red curves show the dynamics in the region where the two modes overlap.
For 2.00 mT (left panel), the two modes are spatially separate, therefore at y = 0 no excitation is measured (red dashed line). For lower fields (central and right panels), we observe a sizeable modulation of the excitation amplitude and phase in the overlap region, which arises from the tuning of the spatial superposition of the two guided modes. We anticipate that the control of the superposition, phase difference, and amplitude of the guided modes via external stimuli such as fields or currents allows us to envision the implementation of logic functions in spin-texture based devices, such as Mach–Zehnder-type spin-wave interferometers9.

## Discussion

In this work, we experimentally realized the fundamental building blocks of a reconfigurable spin-wave circuitry based on patterned spin-textures, i.e., arbitrarily shaped magnonic nanowaveguides. We directly imaged and studied via space- and time-resolved STXM the channeling and steering of spin-waves propagating within nanoscale straight and curved paths, without the need for external applied fields. Furthermore, we realized a prototypical nanomagnonic circuit allowing for the tunable spatial superposition of signals propagating in two converging waveguides. The experimental realization of a reconfigurable nanoscale circuitry allowing for the steering, manipulation, and controlled interference of spin-waves has been a long-standing challenge. This work clearly demonstrates that engineered spin-textures represent a powerful, versatile tool for realizing such a circuitry, marking a fundamental step toward the development of integrated nanomagnonic computing devices.

## Methods

### Sample fabrication

Co40Fe40B20 20 nm/Ir22Mn78 10 nm/Ru 2 nm stacks were deposited on 200 nm thick Si3N4 membranes by DC magnetron sputtering using an AJA Orion8 system with a base pressure below 1 × 10−8 Torr. During the deposition, a 30 mT magnetic field was applied in the sample plane for setting the magnetocrystalline uniaxial anisotropy direction in the CoFeB layer and the exchange bias direction in the as-grown sample. Then, the samples underwent annealing in vacuum at 250 °C for 5 min, in a 400 mT magnetic field oriented in the same direction as the field applied during growth. The resulting exchange bias field was 2.5 mT. Thermally assisted magnetic scanning probe lithography (tam-SPL) was performed with a NanoFrazor Explore (SwissLitho AG). Spin-textures were patterned by sweeping the scanning probe, heated above the blocking temperature of the exchange bias system TB ≈ 300 °C, in a raster-scan fashion in the presence of an external magnetic field. Two rotatable permanent magnets were employed for generating a uniform external magnetic field applied in the sample plane during patterning. 2 μm × 30 μm microstrip antennas were then fabricated via optical lithography using a Heidelberg MLA100 Maskless Aligner and lift-off, after depositing a 50 nm thick SiO2 insulating layer via magnetron sputtering. A Cr 5 nm/Cu 200 nm bilayer was deposited by means of thermal evaporation.

### Scanning transmission X-ray microscopy

The time-dependent magnetic configuration of the samples was investigated with time-resolved scanning transmission X-ray microscopy at the PolLux (X07DA) endstation of the Swiss Light Source37.
In this technique, monochromatic X-rays, tuned to the Co L3 absorption edge (photon energy of about 781 eV), are focused using an Au Fresnel zone plate with an outermost zone width of 25 nm onto a spot on the sample, and the transmitted photons are recorded using an avalanche photodiode as detector. To form an image, the sample is scanned using a piezoelectric stage, and the transmitted X-ray intensity is recorded for each pixel in the image. The typical images we employed for the investigation of spin-wave propagation in our samples were acquired with a point resolution between 40 nm and 75 nm. Magnetic contrast in the images is achieved through the X-ray magnetic circular dichroism (XMCD) effect, by illuminating the sample with circularly polarized X-rays. As the XMCD effect probes the component of the magnetization parallel to the wave vector of the circularly polarized X-rays, the samples were mounted with a 30° orientation of the surface with respect to the X-ray beam, allowing us to probe the in-plane component of the magnetization. The time-resolved images were acquired in a pump-probe scheme, using an RF magnetic field of amplitude around 1 mT, generated by injecting an RF current in a microstrip antenna, as the pumping signal, and the X-ray flashes generated by the synchrotron light source as the probing signal. The pumping signal was synchronized to the 500 MHz master clock of the synchrotron light source (i.e., to the X-ray flashes generated by the light source) through a field programmable gate array (FPGA) setup. Due to the specific requirements of the FPGA-based pump-probe setup installed at the PolLux endstation, only RF frequencies of fexc = 500 × M/N MHz, where N is a prime number and M a positive integer, were accessible. For the measurements presented in this work, N was typically selected to be equal to 23, giving a phase resolution of about 15° in the time-resolved images. Depending on the RF frequency, the temporal resolution of the time-resolved images is given by 2/M ns, with its lower limit given by the width of the X-ray pulses generated by the light source (i.e., about 70 ps FWHM).

### Micromagnetic simulations

Micromagnetic simulations of the magnetization dynamics were carried out by solving the Landau–Lifshitz–Gilbert equation of motion, using the open-source, GPU-accelerated software MuMax3. The total simulated volume had dimensions of 20,480 × 2560 × 20 nm3 and 10,240 × 5120 × 20 nm3 for the straight and the curved wall, respectively, and was discretized into cells of 5 × 5 × 20 nm3. Periodic boundary conditions in the x-direction were used to reproduce an infinite domain wall. The following parameters were used for the CoFeB: saturation magnetization Ms = 800 kA m−1, in-plane uniaxial anisotropy constant Ku = 10³ J m−3 with the easy direction parallel to the x-axis (see Fig. 4 in the main text), and exchange constant Aex = 2 × 10−11 J m−1. The Gilbert damping parameter was set to α = 0.02. The exchange bias field was modeled as an external magnetic field of 2.5 mT, applied along the x-axis in opposite directions inside and outside the patterned area. In order to simulate the transition between two domains with opposite exchange bias, a 250 nm wide transition region with zero exchange bias field was placed at the domain wall.
To simulate the spatial profile of the spin-wave modes, for both the straight and the curved wall the magnetization dynamics was excited by applying a time-varying sinusoidal magnetic field to one line of cells at the center of the rectangular region, parallel to the y-axis. The field amplitude was 30 mT. In the simulation of the dispersion relation of the straight wall, in order to excite spin-waves we used a sinc-shaped field pulse

$$b(t) = b_0 \, \frac{\sin\left(2\pi f_0 (t - t_0)\right)}{2\pi f_0 (t - t_0)}$$

directed along the x-axis, with amplitude b0 = 30 mT and frequency f0 = 5 GHz. The dispersion relation was calculated by performing a Fourier transform of the x-component of the magnetization, both in space and time, over the whole simulated area.
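The space-and-time Fourier transform mentioned above is straightforward to reproduce. A minimal sketch follows (array shapes are assumed; this is not the authors' MuMax3 post-processing script): the 2D FFT of m_x sampled along the wall turns the sinc-excited dynamics into a power map over (k, f) whose bright ridge is the spin-wave dispersion.

```python
import numpy as np

def dispersion_map(mx, dx, dt):
    """mx: array of shape (n_t, n_x), the x-magnetization sampled along the
    wall at each time step; dx: cell size (m); dt: sampling interval (s).
    Returns wave vectors (rad/m), frequencies (Hz) and the FFT power map."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(mx)) ** 2)
    f = np.fft.fftshift(np.fft.fftfreq(mx.shape[0], d=dt))            # Hz
    k = np.fft.fftshift(np.fft.fftfreq(mx.shape[1], d=dx)) * 2 * np.pi  # rad/m
    return k, f, spec
```

Plotting `spec` over the (k, f) grid and tracing its maximum at each k gives the dispersion curve; the group velocity then follows from the slope near k = 0.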
# A invests Rs. 25000 for 2 years at 11% per annum on simple interest and B invests the same money, for the same time, at 10% per annum on compound interest. Find the difference between the two interests.

1. Rs. 250
2. Rs. 27
3. Rs. 30
4. Rs. 50

Option 1 : Rs. 250

## Detailed Solution

Given:
A's investment = Rs. 25000
B's investment = Rs. 25000
Rate of interest for A and B = 11% and 10% respectively
Time = 2 years

Formulas used:
Simple Interest = Principal × Time × Rate/100
Amount = Principal × (1 + R/100)^t

Calculation:
Interest earned by A = 25000 × 2 × 11/100 = Rs. 5500
Amount of B after 2 years = 25000 × (1 + 10/100)^2 = 25000 × 11/10 × 11/10 = Rs. 30250
Interest earned by B = 30250 − 25000 = Rs. 5250
Difference in interest earned by A and B = 5500 − 5250 = Rs. 250

∴ The required answer is Rs. 250.
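The arithmetic checks out. As a quick sanity check, the same computation in Python (integer arithmetic avoids floating-point noise):

```python
P, t = 25_000, 2
si = P * t * 11 // 100          # simple interest at 11% p.a. -> 5500
amount = P * 110**t // 100**t   # compound amount at 10% p.a. -> 30250
ci = amount - P                 # compound interest -> 5250
print(si - ci)                  # -> 250
```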
# Redox potential of a lead–acid battery

In the German Wikipedia there are two reactions at the poles of the battery, shown with the following potentials:

\begin{align} \ce{Pb + SO4^2- &-> PbSO4 + 2 e-} &|\pu{-0.36 V}\\ \ce{PbO2 + SO4^2- + 4 H+ + 2 e- &-> PbSO4 + 2 H2O} &|\pu{+1.68 V} \end{align}

$$E_\mathrm{Ges}^0 = \pu{1.68 V} - (\pu{-0.36 V}) = \pu{2.04 V}$$

I understand the potential of $\pu{1.68 V}$ for the second reaction, since its underlying redox pair is $\ce{Pb^4+ + 2 e- -> Pb^2+}$, whose electrochemical standard potential is $\pu{1.69 V}$. But for the first reaction I think that the underlying redox pair has to be $\ce{Pb^2+ + 2 e- -> Pb}$, which has a standard potential of $\pu{-0.1263 V}$. This results in a voltage of $\approx\pu{1.55 V}$. But Wikipedia and a book of mine say that the voltage of this battery type is $\pu{2.04 V}$. What is the reason for the $\pu{-0.36 V}$?

Source: This is from the German Wikipedia article on lead–acid batteries. Unfortunately the English version doesn't contain the calculation of the voltage. I took the standard potentials from the book Elektrochemie by Hamann.

The potentials depend on the form of the compounds. It is true that in solution $\ce{Pb^{2+} (aq) + 2e^- -> Pb (s)}$ is −0.126 V. But in the case of a battery we have $\ce{PbSO4 (s) + 2e^- -> Pb (s) + SO4^{2-} (aq)}$, and in this case the $\ce{Pb^{2+}}$ is in solid form and the potential is −0.356 V. In a battery the sulfate is insoluble, and it is required that it stick to the electrode, since otherwise the reverse reaction cannot occur. A table of potentials can be found here.

The underlying redox pair is, as you say, $\ce{Pb^2+ + 2e- -> Pb}$, with a standard potential of −0.1263 V. But the standard potential is for a 1 M concentration. If you look at the "underlying" reaction, you must correct for the reduced concentration of $\ce{Pb^2+}$ due to its insolubility. The correction is made by using the Nernst equation and the solubility (or solubility product) of $\ce{PbSO4}$; the half-reaction potential becomes more negative because the solubility of $\ce{Pb^2+}$ is low.
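As an illustration of the Nernst correction described in the second answer, here is the calculation in Python. The solubility product used here is an assumed textbook value (tabulated Ksp for PbSO4 is commonly quoted around 1.6 × 10⁻⁸ at 25 °C), so the result is indicative only:

```python
import math

E0  = -0.126   # V, standard potential of Pb2+ + 2e- -> Pb at 1 M
Ksp = 1.6e-8   # assumed solubility product of PbSO4 (standard tables)
n   = 2        # electrons transferred
pb  = Ksp / 1.0  # [Pb2+] fixed by Ksp with [SO4^2-] = 1 M

# Nernst equation at 25 C: E = E0 + (0.0592/n) * log10([Pb2+])
E = E0 + (0.0592 / n) * math.log10(pb)
print(f"{E:.3f} V")   # ~ -0.357 V, close to the tabulated -0.356 V
```

The low Pb²⁺ activity imposed by the insoluble sulfate shifts the electrode potential from −0.126 V to roughly −0.36 V, which is exactly the value in the Wikipedia calculation and recovers the 2.04 V cell voltage.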
## I. Introduction
The measures required to inhibit disease transmission can be very costly in economic and social terms, including depression and other 'diseases of despair' among the millions who lose their jobs. These costs must be weighed against the medical benefits of intervention. The decision when to intervene and on what scale is a classic optimal control problem. This paper explores the choices facing the government using a simple mathematical model that is inspired by optimal control theory.1 For clarity we omit details of the full optimal control model, which are to be found in Rowthorn (2020). The paper complements the theoretical analysis with some illustrative simulations. These simulations should not be taken literally, but they indicate some of the issues and orders of magnitude involved.

The economic literature on the optimal control of disease is sparse and its models mostly deal with individual behaviour and the externalities of individual decision-making with regard to treatment, vaccination, or social distancing.2 These are not our concern here. Our interest is in the cost–benefit analysis of large-scale interventions such as lockdowns. This involves an approach that is unusual in the existing optimal control literature on disease. Costs and benefits in existing optimal control models are typically functions of the health status of individuals, computed by assigning values or weights to individuals according to their health status. This is a procedure followed here. However, unlike these models, we also make an explicit allowance for the more general costs of comprehensive interventions such as lockdowns. Such costs depend on the scale and type of intervention, but they are not linked in a direct way to the health status of individuals. These costs are given a central role in this paper.

Since the outbreak of the epidemic there has been a spate of thought-provoking articles on economic aspects of COVID-19. Two, in particular, deserve special mention. Acemoglu et al. (2020) examine targeted lockdowns in a multi-group SIR model where infection, hospitalization, and fatality rates vary between groups—in particular between the 'young', the 'middle-aged', and the 'old'. They also allow for the fact that lockdown damages the economy and reduces the productivity of non-infected members of the workforce. Their paper, incidentally, contains a good review of the recent literature. Giordano et al. (2020) draw on the experience of the Italian epidemic. Their model distinguishes between detected and undetected infection cases, and between cases with different severity of illness. They argue that social-distancing measures are necessary and effective, and should be promptly enforced at the earliest stage. They also argue that lockdown measures can only be relieved safely when an effective system of testing and contact tracing is in place. These are both excellent articles, and nothing in the present article contradicts their findings.
A system of testing and tracing is most effective when the number of people to be tested or contacted is relatively small. It may be feasible to test small subgroups of the population on a frequent basis and trace their contacts if they test positive (Cleevely et al., 2020). Care home workers are an example. However, a policy of targeted testing is of limited use as a means of infection control if the disease is widespread, since most of the infected population will not be in the groups selected for testing. The alternative is universal and frequent random testing, but this is likely to be prohibitively expensive, as Cleevely et al. point out. If the scale of infection is too large for the system of testing and tracing to handle unaided, and if there is currently no treatment or vaccine, some form of social distancing will be required. This is the case in the present article. Indeed, our basic model goes further. It assumes that a perfect vaccine will become available on a known date in the future and that prior to this date there exists no testing and tracing regime at all. There is also no currently available treatment for the disease. Hence social distancing is the only feasible means of disease control. However, in one simulation we consider a scenario in which a test and trace regime is established in advance of vaccination.

The analysis assumes that the scale of social distancing is determined by government fiat alone. In reality, as the disease spreads and people become aware of the risks involved, there will be a degree of voluntary social distancing. As a result, the more apocalyptic predictions of what would happen without draconian intervention may be wide of the mark. The implications of endogenous behaviour are not explored here, but are the subject of another paper (Ormerod et al., 2020).

The theoretical section of this paper was written the day after Prime Minister Johnson announced a full-scale lockdown. The first batch of simulations was completed shortly thereafter with the aim of influencing the ongoing policy debate. The paper including simulations was published in mid-April in the CEPR real-time online journal Covid Economics (Rowthorn, 2020). These simulations were comprehensively revised in May and June for this issue of the Oxford Review of Economic Policy. By the time the journal appears, the die will have been cast and the actual policy choices of the government will be there for all to see. However, we hope that this paper will continue to provide a useful framework for thinking about the cost–benefit analysis of disease control. Our study is based on a standard but simple epidemiological model, and should therefore be regarded as presenting a methodological framework rather than giving policy prescriptions.

## II. The model

The analysis in this paper uses a modified version of the standard SIR model of disease propagation. Ignoring births and deaths from non-COVID-19 causes, the initial population will divide in the future into three groups of people: susceptible, infected, and removed—denoted, respectively, by S(t), I(t), and R(t). The removed group includes people who have died from the disease. They are denoted by D(t). The population at the start of the epidemic is normalized to 1, so these various quantities can be interpreted as shares. Individuals who are infected remain infectious until they recover or die. Infected individuals who recover acquire complete immunity, so the journey from S(t) via I(t) to R(t) is in one direction only.
The model equations are:

$$\frac{dS(t)}{dt} = -\beta(t)\,S(t)\,I(t) \qquad (1)$$
$$\frac{dI(t)}{dt} = \beta(t)\,S(t)\,I(t) - \gamma I(t) \qquad (2)$$
$$\frac{dR(t)}{dt} = \gamma I(t) \qquad (3)$$
$$\frac{dD(t)}{dt} = \delta\gamma I(t) \qquad (4)$$
$$S(0) = S_0 \ge 0 \qquad (5)$$
$$I(0) = I_0 \ge 0 \qquad (6)$$
$$R(0) = R_0 \ge 0 \qquad (7)$$
$$D(0) = D_0 \ge 0 \qquad (8)$$
$$S(t) + I(t) + R(t) = 1 \qquad (9)$$

where $\gamma$ and $\delta$ are constant. These constants indicate, respectively, the rate at which infected individuals cease to be infectious, and the probability that an infected individual will die. Note that there are only two genuinely independent state variables in this model. For example, if the trajectories of I(t) and R(t) are known, the trajectories of S(t) and D(t) are uniquely determined by equations (1) and (4).

Equation (1) indicates how the pool of susceptible individuals is depleted by the outflow of newly infected individuals. Assuming that social encounters are random, the probability that a susceptible individual will be infected in a given unit of time is proportional to the prevalence of infection in the population. Equation (2) indicates how the pool of currently infected individuals is augmented by the inflow of newly infected individuals and depleted by the outflow of infected individuals who recover or die. The rate of outflow is $\gamma I(t)$, of whom a fraction $\delta$ are dead. Equation (3) indicates how the removed category is augmented by the inflow of newly recovered or dead individuals.

The coefficient $\beta(t)$ in equation (1) is a variable which depends on the current intensity of social interaction. The intensity of social interaction depends, in turn, on the measures that the government puts in place to inhibit the spread of the disease. Specifically, it is assumed that:

$$\beta(t) = (1 - q(t))\,\beta_0 \qquad (10)$$

where $q(t)$ is an index of policy severity. The effective reproduction rate of the disease is

$$r(t) = (1 - q(t))\,S(t)\,r_0 \qquad (11)$$

where $r_0 = \beta_0/\gamma$. The number $r_0$ indicates how many people the average infected person would infect in a situation where everyone was susceptible to the disease and there was no government intervention to control its spread. The number $r(t)$ indicates how many people are infected if there is government intervention and some people are immune. If $r(t) < 1$, the prevalence of the disease will be diminishing through time.

Government intervention comes at a cost $C(q(t))$ in the form of damage to the economy. This cost is independent of the number of people currently infected and is the result of society-wide measures to control the disease. It is in addition to the various costs arising directly from infection. The function $C(\cdot)$ is assumed to be twice differentiable and such that

$$C(0) = 0, \quad C'(q) > 0, \quad C''(q) > 0 \quad \text{for } 0 \le q \le q_{max} \qquad (12)$$

where $q_{max} < 1$ is an upper limit beyond which it is not feasible to increase $q$. Thus, $C(q)$ is strictly convex over the relevant range. Examples are shown in Figure 1, which plots the function $C(q) = C_{max}(q/q_{max})^{1+\phi}$ for various values of $\phi > 0$. When $q$ is close to zero, the marginal cost of intervention is low, but it rises steeply at higher values of $q$. These are realistic assumptions. Think of hand-washing at one end of the scale and the closure of shops, pubs, cafés, and restaurants at the other.

Figure 1: Weekly cost C(q): £'000 per capita

The government is assumed to have perfect foresight. Thus, the entire control trajectory is decided at the very outset. The system is therefore open loop, whereas in a closed loop system the control is modified in the light of new information.
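To make the dynamics concrete, here is a minimal numerical sketch of equations (1)-(4) with the control entering through equation (10). It uses simple Euler stepping and the baseline parameter values quoted later in the paper; it is an illustration only, not the nonlinear-programming solver behind the reported results.

```python
import numpy as np

def simulate(q, S0, I0, R0, D0=0.0, beta0=4.8, gamma=1.6, delta=0.007,
             weeks=52, steps_per_week=7):
    """Integrate the controlled SIR model. q: array of weekly severity
    indices in [0, 1); time unit is one week, as in the paper."""
    h = 1.0 / steps_per_week
    S, I, R, D = S0, I0, R0, D0
    path = [(S, I, R, D)]
    for w in range(weeks):
        beta = (1.0 - q[w]) * beta0        # equation (10)
        for _ in range(steps_per_week):
            new_inf = beta * S * I         # inflow of newly infected
            out = gamma * I                # outflow: recovery or death
            S -= h * new_inf               # equation (1)
            I += h * (new_inf - out)       # equation (2)
            R += h * out                   # equation (3)
            D += h * delta * out           # equation (4)
        path.append((S, I, R, D))
    return np.array(path)

# Example: the 'do nothing' policy for a year, from the paper's
# initial shares (S + I + R = 1).
path = simulate(q=np.zeros(52), S0=0.949, I0=0.030, R0=0.021)
```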
We assume that an effective vaccine will become available at time T at negligible cost.3 For simplicity we also assume that a cure will become available at the same time as the vaccine at zero cost. The government chooses the trajectory q(t) so as to minimize the following quantity subject to the foregoing equations:

$$J = \int_0^T \left[\pi_A I(t) + C(q(t))\right] dt + \pi_D\left[D(T) - D(0)\right] \qquad (13)$$

where $\pi_A$ is the monetary value that planners assign to each person who is currently alive and infected, and $\pi_D$ the additional value they assign to those who die. The economic cost is

$$E = \int_0^T \left[\pi_A I(t) + C(q(t))\right] dt \qquad (14)$$

Thus,

$$J = E + \pi_D\left[D(T) - D(0)\right] \qquad (15)$$

The monetary allowance for death $\pi_D$ is not included in economic cost since most of the people who die from the disease are not economically active, so their death does not have a significant effect on output. Their cost of treatment prior to death is included in the $\pi_A$ term, which is an average for all infected individuals, including those who die and those who are asymptomatic or require no treatment.

## III. Simulation

The optimization problem defined above has no explicit solution. In the absence of such a solution, the obvious procedure is to explore the properties of the system by means of numerical simulation. We solved the optimization problem by posing it as a nonlinear programming problem.4

### Assumptions

The cost function is

$$C(q) = C_{max}(q/q_{max})^{1+\phi} \qquad (16)$$

where $C_{max}$ is the cost of the maximum feasible lockdown and $\phi > 0$. The larger the value of $\phi$, the lower is the cost of the other interventions relative to lockdown and the greater is the economic benefit of moving to less draconian forms of intervention (see Figure 1).

Our simulations use parameter values that we hope are realistic, although given the paucity of reliable data, a fair amount of guesswork is involved. The simulations take 1 April 2020 as their notional starting point for optimization, although the epidemic is assumed to have started some weeks earlier. The lockdown was officially announced on 23 March, but it was not until 1 April that it had a clear effect on the number of people infected (King's College, 2020). The unit of time is a week and the time horizon is T = 52. The monetary unit of account is thousands of UK pounds. There are initially 2m people currently infected and therefore infectious. In addition, a further 1.4m have had the disease and recovered or died.
The initial conditions are thus I0 = 0.030, R0 = 0.021. The death rate is $\delta = 0.7$ per cent. The UK population is assumed to be 66.8m. The parameters in the baseline scenario have the following values: $\beta_0 = 4.8$, $\gamma = 1.6$, $C_{max} = 0.20$, $\pi_A = 2$, $\pi_D = 2{,}000$. Infected individuals cease to be infectious at an exponential rate of −1.6 per week, which implies that after 2 weeks 96 per cent are no longer infectious: they have either recovered or died. In the absence of intervention the net reproduction rate is $r_0 = 3$. The per capita weekly cost of full lockdown is £200, which is approximately 35 per cent of per capita GDP at factor cost, in line with the Office for Budget Responsibility's prediction of what the lockdown might do to the UK economy (OBR, 2020). The values $\pi_A = 2$ and $\pi_D = 2{,}000$ assume that planners assign a monetary value of £2,000 per week to the average currently infected person, plus a further £2m to each fatality. The latter figure is what the UK Treasury assumes in project evaluation as the value of a prevented fatality (Dolan and Jenkins, 2020).

To derive the path before 1 April, we assume that 4.7 weeks previously the state of the system was

$$S_{-4.7} = 1 - 10^{-8}, \quad I_{-4.7} = 10^{-8}, \quad R_{-4.7} = 0, \quad D_{-4.7} = 0.$$

From this starting point the system is assumed to grow freely with parameters $\beta = 4.8$, $\gamma = 1.6$, $\delta = 0.007$ until 1 April, when government intervention in our simulations begins. We ignore the limited interventions of the government before 1 April.

## IV. Results

Tables 1 and 2 provide information about the optimum path under various scenarios. The numbers for deaths and total economic cost in these tables have been adjusted to include the pre-intervention weeks. This is a small adjustment which does not materially affect the results. It makes it easier to compare scenarios with different starting dates for intervention.
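Given a simulated path, the planner's objective (13) is easy to evaluate. The sketch below does so for the baseline cost and valuation parameters listed above; note that $q_{max}$ is an assumed value here, since the paper only requires $q_{max} < 1$.

```python
import numpy as np

def objective(q, path, Cmax=0.20, qmax=0.95, phi=2.0, piA=2.0, piD=2000.0):
    """Evaluate J = E + piD * (D(T) - D(0)) on a weekly path.

    q: weekly severity indices; path: output of simulate() above.
    qmax = 0.95 is an illustrative assumption, not a value from the paper.
    """
    I = path[:-1, 1]                       # infected share in each week
    C = Cmax * (q / qmax) ** (1 + phi)     # weekly intervention cost, eq. (16)
    E = np.sum(piA * I + C)                # economic cost, eq. (14), dt = 1 week
    deaths = path[-1, 3] - path[0, 3]      # cumulative deaths over the horizon
    return E + piD * deaths                # eq. (15)

J = objective(np.zeros(52), path)          # cost of the 'do nothing' policy
```

The paper's optimal paths come from minimizing this objective over the whole trajectory q, posed as a nonlinear program; the function above only scores a given candidate policy.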
Table 1: Optimal paths compared

| Scenario | $\phi$ | Value of life (£m) | Duration of lockdown (weeks) | Peak infection (m) | Total deaths (thousands) | Total economic cost (£ per capita) |
|---|---|---|---|---|---|---|
| Do nothing | | 2.0 | – | 20.1 | 439.8 | 14,342 |
| Baseline | | 2.0 | 5.3 | 2.0 | 59.9 | 6,589 |
| Low relative cost | | 2.0 | 1.8 | 2.0 | 67.1 | 4,811 |
| High relative cost | | 2.0 | 7.9 | 2.0 | 57.6 | 7,660 |
| Long time horizon: unconstrained | | 2.0 | – | 7.0 | 270.5 | 1,916 |
| Long time horizon: constrained | | 2.0 | – | 3.3 | 268.1 | 2,093 |
| Early start | | 2.0 | 0.9 | 0.3 | 8.3 | 7,360 |
| Test & trace | | 2.0 | 6.0 | 2.0 | 60.1 | 3,551 |

Table 2: Optimal paths for different values of life

| Scenario | $\phi$ | Value of life (£m) | Duration of lockdown (weeks) | Peak infection (m) | Total deaths (thousands) | Total economic cost (£ per capita) |
|---|---|---|---|---|---|---|
| Baseline | | 2.0 | 5.3 | 2.0 | 59.9 | 6,589 |
| High value of life | | 5.0 | 8.4 | 2.0 | 55.9 | 6,768 |
| Low value of life: unconstrained | | 1.0 | – | 7.1 | 275.3 | 1,582 |
| Low value of life: constrained | | 1.0 | – | 3.3 | 269.5 | 1,776 |
| Nil value of life: unconstrained | | – | – | 8.8 | 333.0 | 1,139 |
| Nil value of life: constrained | | – | – | 3.3 | 335.1 | 1,327 |

Figure 2 shows what happens if the government does nothing to control the disease and restricts itself to the medical treatment of those infected.
Within a few weeks, 90 per cent of the population has been infected and the cumulative death toll by the end of the year is 440,000 (Table 1). At the peak of the epidemic 20m people are currently infected and hence infectious.

Figure 2: The course of the epidemic with no government intervention

Under the Baseline scenario, the optimum response of the government is to impose a tight lockdown at the very beginning of the planning year. The lockdown lasts 5.3 weeks and brings the disease under control quite soon, although not before millions of people have been infected and many thousands have died (Figure 3). The eventual death toll is 60,000. The death toll is so high because the lockdown is not complete. Lockdown reduces the transmission of the disease but does not entirely prevent it. As a result there is inertia in the system. If the level of infection is already high when the lockdown is imposed, this will continue to be the case for some time thereafter. This is a good reason for acting swiftly before the disease has really taken hold. Once the lockdown is relaxed there is a prolonged period when it is optimal to maintain restrictions close to the minimal level required to contain the disease (Figure 4). During this period the effective reproduction rate r, although rising, is close to 1 (Figure 5). As the vaccination date draws near, restrictions are lifted at an accelerating pace until eventually they are largely abandoned. The result is a brief resurgence of infection which is halted by vaccination or treatment.

Figure 3: The course of infection under the baseline scenario

Figure 4: Optimal path for q: baseline scenario

Figure 5: Optimal path for r: baseline scenario

Figure 6 compares the course of infection under various scenarios. Under the Early Start scenario, the lockdown is imposed a week earlier, with the result that infection and deaths are much lower. The eventual death toll is around 8,000 as compared to 60,000 under the Baseline scenario. The lockdown is also much shorter: 0.9 weeks as compared to 5.3 weeks. This comparison illustrates clearly the harm that may arise from even a short delay.

Figure 6: The course of infection: scenarios compared

In Rowthorn (2020), it was argued that extending the planning horizon does not greatly affect the results. This conclusion is not supported by the more sophisticated simulations reported here. Suppose the vaccine comes on stream after 2 years instead of one. The effect on the optimal path is dramatic. There is no lockdown and the final death toll is 271,000. Peak infection is 7m and eventually over 40 per cent of the population catches the disease. Infection on the peak scale would impose an intolerable burden on the health system. To avoid such an eventuality, we repeat the simulation with a ceiling of 3.3m on the permitted level of infection. This is just over 50 per cent more than the initial level of infection (2m). The existence of this constraint has little impact on the eventual death toll, although it does reduce the peak load on the health system.

### Relative costs

The parameter $\phi$ conveys information about the relative cost of various interventions. When $\phi$ is small, the economic benefit from a partial relaxation of the lockdown is also small. This creates an incentive to extend the duration of lockdown. Why relax an effective policy for so little economic gain?
Conversely, if $ϕ$ is large, the economic gain from a partial relaxation is large. The duration of lockdown is therefore short. Under the Baseline scenario $ϕ=2$ and the lockdown lasts for 5.3 weeks. If $ϕ=1$, the lockdown lasts for 7.9 weeks. If $ϕ=4$, it lasts for 1.8 weeks.

### Test and trace

A test and trace system is designed to isolate infectious individuals and their contacts, so that they cannot infect the general population. Within the framework of the present model it is equivalent to either a reduction in the transmission coefficient $β_0$ or else an increase in $γ$, the rate at which infected people cease to be infectious. To explore the implications of the system introduced by the UK government, we assume that it becomes fully operational in week 20. This is later than the government's initial target, but we allow for teething problems. The system has a capacity of 200,000 tests per day. We assume it has negligible cost.

The effectiveness of a test and trace system depends on the following factors: (i) the number of tests carried out, (ii) the share of infected individuals in the tested population, (iii) the fraction of infected individuals who are available for testing, and (iv) the number of infected contacts who self-isolate following a positive diagnosis. The roles of these various factors are discussed in the Appendix. The parameters we use for our simulation are somewhat arbitrary, but the results illustrate clearly the impact of test and trace on the optimum path.

Figure 7 plots the optimum paths with and without test and trace. The effect of test and trace is to lower the trajectory of the control variable q. The reason for this is as follows. The existence of a test and trace system reduces the impact of present interventions on the future course of infection. Planners therefore have less need to be concerned about the future. They can afford to relax, since test and trace will help deal with the outcome. This is true both before and after test and trace comes into operation. The test and trace system in our simulation is not perfect, so some degree of social distancing is still required after this system becomes operational.

Figure 7: Optimal paths for q: scenarios compared

### The value of life

Any cost–benefit analysis of optimal policy towards COVID-19 requires some assumption about the value of human life (Social Value UK, 2016; Dolan and Jenkins, 2020). This assumption may be explicit or it may be implicit. Governments may reject the whole idea of valuing life in the context of disease control, but to the extent that their actions are consistent, they imply some tacit valuation of life. In other policy areas, such as transport and drug evaluation, it is normal for government agencies to put a value on life.

In our simulations a reduction in the value of life implies a shorter lockdown, or maybe no lockdown at all (Table 2). This is true even if we impose a ceiling on the permitted scale of infection. Under the Baseline scenario, the value of life is £2m and the optimal lockdown lasts for 5.3 weeks. Holding other parameters constant, it becomes optimal to dispense with the lockdown altogether once the value of life drops below £1.68m. If we impose the condition that peak infection must not exceed what the health service can handle, it is optimal to dispense with lockdown when the value of life is below £1.56m.
At the other end of the spectrum, the optimal duration of lockdown becomes rather insensitive to further increases in the value of life. The optimal lockdown is not much different if the value of life is £10m or £20m (Figure 8).

Figure 8: Optimal lockdown and the value of life

Figure 9 plots the relationship between total deaths and total economic cost. Through its impact on optimal policy, the value of life affects both the economic cost of the disease and the number of people who die from it. Each point on the curve corresponds to a certain value of life, and the variables shown are calculated on the assumption that the government behaves optimally given this value of life.5

Figure 9: Total deaths versus total economic cost

A striking feature of Figure 9 is the discontinuity indicated by the broken line. This was unexpected, but appears genuine. We checked it using two different programs. This break in the curve marks the transition between two radically different types of policy. To the right of the break, the optimal policy is lockdown with a low death rate. To the left, the optimal policy is no lockdown and a high death rate. This transition occurs abruptly when the value of life is around £1.68m. It is clearly visible in Figure 8.

What light does this discussion throw on the actual policy of the UK government? The period of maximum lockdown lasted approximately 10 weeks. With the baseline cost structure ($ϕ=2, π_A=2$), a lockdown of this length is only optimal when the value of life exceeds £10m. If $ϕ=1$, $π_A=3$, the figure is £4m. These numbers are much larger than the value of life implied by the official guidelines for drug evaluation (£200,000 to £300,000).6 To the extent that the government is behaving optimally, these comparisons imply that it values the lives of potential COVID-19 victims a lot more highly than those of other types of victim.

## V. Concluding remarks

Soon after the implications of lockdown became evident, people began to ask the obvious question: 'Is the cure worse than the disease?' (Miles et al., 2020). Governments began to seek cost-effective policies that would enable them to exit the lockdown without setting off a renewed surge of infection. Although they are speculative in nature and limited in their methodology, the simulations presented here and their underlying theory may throw some light on government policy.
In his Covid Economics paper, Rowthorn (2020) argued that, if a relatively inexpensive way can be found to maintain an r value close to 1, this is the policy to aim for in the medium term. A lockdown may (or may not) be necessary to halt the explosive spread of the disease, but once this aim has been achieved it would be a costly mistake to stick with expensive social distancing policies that aim to keep r well below 1. This conclusion is reinforced by our example of test and trace. If there is an effective test and trace system in the offing, it may even be optimal to let r exceed 1 during the weeks before this system becomes operational. This will cause infection to increase somewhat, but the potential explosion will be prevented when test and trace comes on stream. The same is true during the run-up to mass vaccination.

### Appendix: Test and trace

Throughout this appendix the symbol $I$ refers to infected individuals who are not isolated and can therefore infect the susceptible population. Isolated individuals who are infectious are classified as removed.
Suppose that a fraction $a$ of the infected population $I$ is currently available for testing. The rest are either asymptomatic or unwilling to undergo testing. For those available for testing, the probability of not having tested positive within a period of length $s$ is equal to $e^{-ps}$, where $p$ is constant. The probability that an infected individual will cease to be infectious in the small time interval $\Delta s$ is $\left[-\frac{d}{ds}\left(e^{-\gamma s}\right)\right]\Delta s=\gamma e^{-\gamma s}\Delta s$. Thus, the probability of recovering or dying without testing positive is:

$$\int_0^{\infty}\gamma e^{-(p+\gamma)s}\,ds=\frac{\gamma}{p+\gamma}$$ (A1)

and the complementary probability of testing positive at some point while infected is:

$$1-\frac{\gamma}{p+\gamma}=\frac{p}{p+\gamma}$$ (A2)

The average length of time that an individual remains infected is $\frac{1}{\gamma}$. Thus, the probability that he or she will test positive during a small time interval of length $\Delta t$ is equal to:

$$\frac{p}{p+\gamma}\left(\frac{\Delta t}{1/\gamma}\right)=\left(\frac{\gamma p}{p+\gamma}\right)\Delta t$$ (A3)

The number of infected individuals who are available for testing is $aI$. The number of such individuals who test positive in the time interval $\Delta t$ is $aI\left(\frac{\gamma p}{p+\gamma}\right)\Delta t$, so infected persons test positive at the rate $\left(\frac{\gamma p}{p+\gamma}\right)aI$.

Suppose there is no constraint on testing. Then $p=\infty$ and the rate of testing infected persons is:

$$A=\gamma aI$$ (A4)

In the constrained case assume that $M$ is the maximum number of tests per week. Assume also that a constant fraction $b$ of these tests is directed at infected persons. Then access to testing will be capacity constrained if $bM<\gamma aI$. In this case:

$$A=\left(\frac{\gamma p}{p+\gamma}\right)aI=bM$$ (A5)

Thus,

$$p=\frac{\gamma bM}{\gamma aI-bM}$$ (A6)

Assume that for each person who tests positive the number of infected persons who self-isolate (including the tested person) is $c$. Then infected persons are isolated at the rate $cA$. They are classified as removed. Suppose the test and trace system comes on stream at time $T^*$. Define the following function:

$$Q(t,I)=\begin{cases}0 & t<T^*\\ c\min\left(\gamma aI,\,bM\right) & t\geq T^*\end{cases}$$ (A7)

The dynamics of infection then become:

$$\frac{dI}{dt}=(1-q)\beta_0 SI-\gamma I-Q(t,I),\qquad \frac{dR}{dt}=\gamma I+Q(t,I)$$ (A8)

Our simulation assumes a daily capacity of 200,000 for test and trace. This amounts to 1,400,000 per week, which is equal to a fraction 0.021 of the population. Thus, $M=0.021$. It is also assumed that the test and trace system becomes fully operational in week 20 and that $a=0.5$, $b=0.5$, $c=1.6$.
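To make the appendix mechanics concrete, the following is a minimal sketch of how equations (A7) and (A8) can be integrated numerically. It is not the paper's actual code: the transmission rate $\beta_0$, recovery rate $\gamma$, the constant control path $q$, and the initial state are illustrative assumptions; only $a$, $b$, $c$, $M$, and the week-20 start are taken from the text above.

```python
import numpy as np

# Euler integration of the appendix dynamics (A8), as a sketch only.
beta0, gamma = 3.0, 1.0              # weekly rates (assumed for illustration)
a, b, c, M = 0.5, 0.5, 1.6, 0.021    # test-and-trace parameters from the appendix
T_star = 20                          # week in which test and trace comes on stream
q = 0.3                              # constant control path (assumed)

dt = 0.01                            # time step in weeks
S, I, R = 0.97, 0.03, 0.0            # population shares (assumed initial state)
for step in range(int(52 / dt)):
    t = step * dt
    # Isolation rate Q(t, I) from (A7): zero until test and trace starts
    Q = c * min(gamma * a * I, b * M) if t >= T_star else 0.0
    dS = -(1 - q) * beta0 * S * I
    dI = (1 - q) * beta0 * S * I - gamma * I - Q   # (A8)
    dR = gamma * I + Q
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt

print(f"End-of-year shares: S={S:.3f}, I={I:.4f}, R={R:.3f}")
```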
## References

Acemoglu, D., Chernozhukov, V., Werning, I., and Whinston, M. D. (2020), 'Optimal Targeted Lockdowns in a Multi-Group SIR Model', NBER Working Paper No. 27102.

Chen, F. (2012), 'A Mathematical Analysis of Public Avoidance Behavior During Epidemics Using Game Theory', Journal of Theoretical Biology, 302, 18–28.

— Jiang, M., Rabidoux, S., and Robinson, S. (2011), 'Public Avoidance and Epidemics: Insights from an Economic Model', Journal of Theoretical Biology, 278, 107–19.

Cleevely, M., Susskind, D., Vines, D., Vines, L., and Wills, S. (2020), 'A Workable Strategy for Covid-19 Testing: Stratified Periodic Testing rather than Universal Random Testing', Oxford Review of Economic Policy, 36 (Supplement), S14–S37.

Fenichel, E. P. (2013), 'Economic Considerations for Social Distancing and Behavioral Based Policies During an Epidemic', Journal of Health Economics, 32(2), 440–51.

Gersovitz, M. (2010), 'Disinhibition and Immizeration in a Model of Susceptible-Infected-Susceptible (SIS) Diseases', mimeo.

Giordano, G., Blanchini, F., Bruno, R., et al. (2020), 'Modelling the COVID-19 Epidemic and Implementation of Population-wide Interventions in Italy', Nature Medicine.

King's College (2020), https://covid.joinzoe.com/, King's College, London.

Miles, D., Stedman, S., and Heald, A. (2020), 'Living with COVID-19: Balancing Costs Against Benefits in the Face of the Virus', mimeo.

OBR (2020), 'Coronavirus: Reference Scenario', Office for Budget Responsibility.

Ormerod, P., Rowthorn, R., and Nyman, R. (2020), 'Why Epidemiological Models Exaggerate the Risks of a Second Wave of Covid-19', mimeo.

Reluga, T. C. (2010), 'Game Theory of Social Distancing in Response to an Epidemic', PLoS Computational Biology, 6(5), e1000793.

Rowthorn, R. (2020), 'A Cost–Benefit Analysis of the Covid-19 Disease', Covid Economics, 9, 24 April, London, CEPR.

— Toxvaerd, F. (2015), 'The Optimal Control of Infectious Diseases via Prevention and Treatment', mimeo.

Sethi, S. P. (1978), 'Optimal Quarantine Programmes for Controlling an Epidemic Spread', Journal of the Operational Research Society, 29(3), 265–8.

Toxvaerd, F. (2019), 'Rational Disinhibition and Externalities in Prevention', International Economic Review, 60(4), 1737–55.

— (2020), 'Equilibrium Social Distancing', Cambridge–INET Working Paper Series No. 2020/08.

— Rowthorn, R. (2020), 'On the Management of Population Immunity', mimeo.

## Footnotes

1

3 In a game theoretic paper on social distancing, Reluga (2010) also assumes that vaccination will occur on a fixed date in the future. In their recent paper, Acemoglu et al. (2020) assume that a vaccine and a cure become simultaneously available.

4

5 Technically speaking, the curve is parameterized by $\pi_D$.

6
# Math Help - Point where growth rate of one function overtakes growth rate of another

1. ## Point where growth rate of one function overtakes growth rate of another

This is the problem: Two algorithms take $n^2$ days and $2^n$ seconds respectively to solve an instance of size $n$. What is the size of the smallest instance on which the former algorithm outperforms the latter algorithm?

I was able to get the answer by evaluating both functions $f(n)=86400n^2$ and $g(n)=2^n$ (86400 since there are that many seconds in a day) for values of $n$ starting with 1 and incrementing. At $n=26$, algorithm 1 becomes more efficient than algorithm 2. I can also get at the same answer by graphing both functions. But I'm sure there has to be another way. I've tried Googling and found information on comparing the growth rates of two functions using limits, but none of those techniques allow me to calculate the minimum $n$ where the slower-growing algorithm becomes more efficient than the faster-growing one. Thanks for any help anyone can give me!

2. ## Re: Point where growth rate of one function overtakes growth rate of another

That's basically the solution: find the minimum $n$ such that $86400n^2 \le 2^n$, i.e. $86400n^2 - 2^n \le 0$. There's not really any other solution besides graphing both functions or some guess/check.

3. ## Re: Point where growth rate of one function overtakes growth rate of another

Ahh, thanks. I was hoping there was a more sophisticated solution, like plugging both functions into a formula to get the answer, but if that's the only way then it's the only way.

4. ## Re: Point where growth rate of one function overtakes growth rate of another

The exact real solution to the equation $86400n^2=2^n$ is not expressible using the usual functions. However, it is possible to shorten the search interval by making some estimates. The equation $86400n^2 = 2^n$ is equivalent to $\log(86400) + 2\log(n) = n$, where the logarithm is to base 2. Next, $\log(86400) \approx 16.4$, so $n > 16$. Further, $\log(16) = 4$, so $n \ge 17 + 2 \cdot 4 = 25$. As for the upper bound, $\log(32) = 5$, and $17 + 2 \cdot 5 = 27 < 32$, so $n \le 32$. Therefore, it makes sense to search for the breaking point between 25 and 32. As you found, it turns out that the lower bound is almost sharp.

5. ## Re: Point where growth rate of one function overtakes growth rate of another

emakarov, I'm very intrigued by the method you showed to limit the range of $n$, but I'm afraid quite a bit of it went over my head. I'm going to chew on it for a while and hopefully I will understand it enough to at least be able to ask you a clarifying question or two. Thank you both for your help!
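For completeness, the guess-and-check described in the thread can be automated in a few lines; this sketch reproduces the crossover at $n = 26$:

```python
# Smallest n at which algorithm 1 (n^2 days = 86400*n^2 seconds)
# beats algorithm 2 (2^n seconds).
n = 1
while 86400 * n**2 >= 2**n:
    n += 1
print(n)  # 26
```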
# Example: Hyperbolic Trajectory

A geocentric trajectory has a perigee altitude of 300 km and a perigee velocity of 15 km/s. Calculate the time to fly from perigee to a true anomaly of $$\nu =$$ 100°, and the position at that time. Then, calculate the true anomaly and speed 3 hr later.

## Given True Anomaly, Find Time Since Perigee

As for the elliptical case, the solution has three steps:

1. Find the hyperbolic eccentric anomaly, $$F$$, from the true anomaly, $$\nu$$
2. Find the hyperbolic mean anomaly, $$M_h$$, from the eccentric anomaly
3. Find the time since perigee, $$t$$, from the mean anomaly

Eq. (228) gives the eccentric anomaly in terms of the true anomaly. The only unknown parameter is the eccentricity of the hyperbola, which we need to find from the given orbital elements. Since we have $$v_p$$ and $$r_p$$ in the problem statement, we can calculate the angular momentum followed by the eccentricity.

```python
import numpy as np
from scipy.optimize import newton

mu = 3.986004418E5  # km**3/s**2
r_p = 300 + 6378.1  # km
v_p = 15  # km/s

h = r_p * v_p  # km**2/s
e = h**2 / (r_p * mu) - 1
```

The eccentricity is $$e =$$ 2.7696. Since $$e > 1$$, this trajectory is a hyperbola. We should find the true anomaly of the asymptote from Eq. (150), to ensure that our desired true anomaly is valid.

```python
nu_infty = np.arccos(-1 / e)
```

The true anomaly of the asymptote is $$\nu_{\infty} =$$ 111.17°. Therefore, our desired true anomaly is valid. Now we can calculate the eccentric anomaly, $$F$$.

```python
nu_1 = np.radians(100)  # desired true anomaly, converted to radians
F_1 = 2 * np.arctanh(np.sqrt((e - 1)/(e + 1)) * np.tan(nu_1 / 2))
```

Then, the mean anomaly is found from Kepler's equation, Eq. (229):

```python
M_h1 = e * np.sinh(F_1) - F_1
```

Finally, calculating the time from the mean anomaly is done from the definition of the mean anomaly, Eq. (220):

```python
t_1 = h**3 / mu**2 * 1 / (e**2 - 1)**(3/2) * M_h1
```

The total time is $$t_1 =$$ 1.15 hr.

## Given Time Since Perigee, Find True Anomaly

Now, let's calculate the true anomaly 3 hours later, after about 4 total hours since perigee have elapsed. Again, there are three steps:

1. Given time since perigee, $$t$$, calculate the hyperbolic mean anomaly, $$M_h$$
2. Calculate the hyperbolic eccentric anomaly, $$F$$, from the hyperbolic mean anomaly
3. Calculate the true anomaly, $$\nu$$, from the hyperbolic eccentric anomaly

Since we already have the orbital eccentricity and specific angular momentum, we can start by finding the mean anomaly at the time.

```python
t_2 = 3 * 3600 + t_1  # sec
M_h2 = mu**2 / h**3 * (e**2 - 1)**(3/2) * t_2
```

Now, we need to solve Kepler's equation to find the eccentric anomaly, $$F$$. Since the equation is transcendental in $$F$$, we need to use the Newton solver in SciPy. Since we know the derivative, we will define two Python functions:

1. Kepler's equation, $$f(F) = 0$$
2. The derivative of Kepler's equation with respect to $$F$$, $$f'(F)$$

```python
def kepler(F, M_h, e):
    """Kepler's equation, to be used in a Newton solver."""
    return e * np.sinh(F) - F - M_h

def d_kepler_d_F(F, M_h, e):
    """The derivative of Kepler's equation, to be used in a Newton solver.

    Note that the argument M_h is unused, but must be present so the
    function arguments are consistent with the kepler function.
    """
    return e * np.cosh(F) - 1

F_2 = newton(func=kepler, fprime=d_kepler_d_F, x0=np.pi, args=(M_h2, e))
```

With this value for $$F$$, we can calculate the value for $$\nu$$. To avoid quadrant ambiguity problems, we will use Eq. (228).
```python
sqrt_e = np.sqrt((e + 1) / (e - 1))
nu_2 = (2 * np.arctan(sqrt_e * np.tanh(F_2 / 2))) % (2 * np.pi)
```

Like for the ellipse, to convert $$\nu$$ to the range $$[0, 2\pi)$$, we take the modulus with $$2\pi$$. In most programming languages, Python and MATLAB included, the arctan function returns a value between $$-\pi/2$$ and $$\pi/2$$. When the result is multiplied by 2, it gives the range from $$-\pi$$ to $$\pi$$. We need to transform this angle to be in the range of $$0$$ to $$2\pi$$. To do so, we can take the modulus of the angle with $$2\pi$$. The modulus is the remainder after division. In Python, the modulus operator is %, while in MATLAB, we have to use the function mod(numerator, denominator). This works for both positive and negative numbers, and ensures that we get the correct angle for the appropriate quadrant.

The true anomaly after 4.15 hr is $$\nu_2 =$$ 107.78°.

## Calculate the Speed of the Spacecraft

To find the speed, we will calculate the velocity components. The radius at $$\nu_2 =$$ 107.78° can be found from the orbit equation, Eq. (113).

```python
r_2 = h**2 / mu / (1 + e * np.cos(nu_2))
```

The velocity components can be found from Eqs. (114) and (115).

```python
v_perp = h / r_2
v_r = mu / h * e * np.sin(nu_2)
v_2 = np.sqrt(v_r**2 + v_perp**2)
```

The radius is $$r_2 =$$ 1.6318E+05 km and the speed is $$v_2 =$$ 10.51 km/s.

## MATLAB Solution

In MATLAB, the following code will give the same result:

```matlab
function kepler
    mu = 3.986e5;  % km^3/s^2
    r_p = 300 + 6378;  % km
    v_p = 15;  % km/s
    nu_1 = deg2rad(100);  % desired true anomaly, converted to radians

    h = r_p * v_p;
    e = h^2 / (mu * r_p) - 1;
    F_1 = 2 * atanh(sqrt((e - 1) / (e + 1)) * tan(nu_1 / 2));
    M_h1 = e * sinh(F_1) - F_1;
    t_1 = h^3 / mu^2 * 1 / (e^2 - 1)^(3 / 2) * M_h1;

    t_2 = t_1 + 3 * 3600;
    M_h2 = mu^2 / h^3 * (e^2 - 1)^(3 / 2) * t_2;

    function x = fun(F, M_h, e)
        x = e * sinh(F) - F - M_h;
    end

    F_2 = fzero(@(x) fun(x, M_h2, e), [3, 4]);
    nu_2 = mod(2 * atan(sqrt((e + 1) / (e - 1)) * tanh(F_2 / 2)), 2 * pi);
end
```

We are using fzero() again to solve Kepler's equation. I'm not sure how sensitive fzero() will be to the initial guess.
# In the picture, quadrilateral ABCD is a parallelogram and

Question: enigma123 (07 Feb 2012, 16:32)

Attachment: Untitled.png

In the picture, quadrilateral ABCD is a parallelogram and quadrilateral DEFG is a rectangle. What is the area of parallelogram ABCD (figure not drawn to scale)?

(1) The area of rectangle DEFG is 8√5.

(2) Line AH, the altitude of parallelogram ABCD, is 5.

Reply: mikemcgarry, Magoosh GMAT Instructor (07 Feb 2012, 22:54)

Hi, there. I'm happy to help with this. As a geometry geek myself, I found this a very cool geometry problem, but I will say --- it is WAY harder than anything you would be expected to figure out for yourself on the real GMAT.

Statement #1: The area of rectangle DEFG is 8√5. Well, to cut to the chase, this statement is sufficient because the rectangle and the parallelogram must have equal area. Why do the rectangle and parallelogram have equal area? You will see the complete geometric argument in the pdf attachment to this post. Leaving those details aside for the moment, Statement #1 is sufficient.

Statement #2: Line AH, the altitude of parallelogram ABCD, is 5. Area of a parallelogram = (base)*(altitude). If we know the altitude and not the base, that's not enough. Therefore, Statement #2 is insufficient.

Does all this (including everything in the pdf) make sense? Here's another geometry DS, a little closer to the actual level of difficulty of the GMAT itself: http://gmat.magoosh.com/questions/1023 Please let me know if you have any questions on what I've said here. Mike

Attachment: rectangle & parallelogram with equal area.pdf

Reply: omerrauf (10 Feb 2012, 23:58), quoting mikemcgarry's answer above:
Dear Mike, what is the likelihood of such a question on the GMAT? The more I see Kaplan questions, the more I feel the questions can be extremely hard, whereas the questions on GMATPREP seem to be much simpler than this. No?

Reply (11 Feb 2012, 02:20)

I personally think it would be on the GMAT, but as a 700 or 800 level question. The calculation is straightforward. The only thing you need to recognize is that they both share the same triangle, and if a triangle has the same base and height as a parallelogram that is not a trapezoid, then the triangle will always be 1/2 the area of the parallelogram. This follows from the simple formulas used to calculate both areas. Just my personal opinion.

Reply: mikemcgarry (12 Feb 2012, 13:44)

Dear omerrauf, I would say a question like this ---- a question that hinges on a relatively obscure geometry theorem that one probably would have to prove from scratch to answer the question ---- is something far harder than what they would put on the GMAT. Any GMAT math question, no matter how challenging, is something that someone facile with math would be able to solve in under a minute. If you've never seen this theorem, there's virtually no way that you will derive the full geometry proof in under a minute, unless you operate at Isaac Newton level. The GMAT doesn't expect that, even on 800 level questions. You don't have to be Isaac Newton to answer the hardest questions. That's my take on it. Since I am not as familiar with Kaplan questions overall, I am not qualified to make a statement about them. I know that Magoosh has a few hundred math questions, all of appropriate difficulty for the GMAT, and each followed by its own video solution. The link above will give you a sample. Mike

Reply: Catennacio (07 Dec 2012, 02:19)

Hi, mikemcgarry's answer is good, but it uses similar triangles for the proof. I think it doesn't need to be that complicated. I use the same diagram that mikemcgarry provided.
First, we all agree that by considering DC as base and EQ as height, Area(DEC) = 1/2 · EQ · DC (1). It also equals 1/2 · Area(ABCD), since the area of a parallelogram is base × height. This is just the normal formula, no problem. The tricky part is how to link it with the rectangle DEFG.

Now, from C, draw a line CP perpendicular to DE, with P on DE. For triangle DEC, considering ED as base and CP as height, we have Area(DEC) = 1/2 · CP · DE (2).

From (1) and (2), the two areas are the same, so EQ · DC = CP · DE (3). But in rectangle DEFG, CP = EF (since DEFG is a rectangle and CP is perpendicular to DE, CP must equal EF). So (3) can be rewritten as EQ · DC = EF · DE. The LHS is the area of ABCD; the RHS is the area of DEFG. So (1) is sufficient. (2) is obviously not sufficient. So A is correct.

I must admit I couldn't get this right, but after reading mikemcgarry's explanation, I think this way is simpler, as you don't have to think about and prove similar triangles. You just need to substitute side for side.

Reply: mikemcgarry (07 Dec 2012, 11:25)

Dear Catennacio, that was a brilliant approach. Thank you for sharing that. Mike

Reply (09 Nov 2014, 17:37)

Just providing my 2 cents on the problem...

Theorem: Triangles between two parallel lines with the same base have equal areas. Even if you're not familiar with the above theorem, it's pretty intuitive from the area formula of the triangle.

In the figure, join EC. Then from the above theorem, it's clear that area(tri DEC) = area(tri DBC) = $$\frac{1}{2}$$ · area(ABCD) ---- (*)

Similarly, area(tri DEC) = area(tri DEF) = $$\frac{1}{2}$$ · area(DEFG) ---- (**)

So, from (*) and (**), area(ABCD) = area(DEFG). Clearly, (1) is sufficient and (2) is not, so the answer is A. This approach takes less than 2 minutes. I think it is quite possible that similar questions will be seen on the GMAT at the 700 level.

Reply (25 Mar 2015, 23:08)

Got it intuitively, plus elimination, in 2 mins. Statement 2 is clearly insufficient, so eliminate B and D. Statement 1: the rectangle is part of the parallelogram, or may even be equal to it in area. A. (Sorry for my absence of discipline.)

Reply (26 Mar 2015, 06:06)

Hi guys, here is another solution to this hard problem based on a different approach. I hope you find the solution interesting and easy. The diagram in the pdf is self-explanatory. Once we understand the diagram, the solution looks easy. It took me 15 minutes to figure out the approach.
Attachment: Samichange.pdf

Reply: GMATinsight (20 Jul 2016, 10:12)

Please find the attached file for the explanation.

Attachment: Untitled1.jpg

Reply (21 Sep 2016, 12:39)

Awesome explanation, GMATinsight! Thank you so much.
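As a quick numerical sanity check of the equal-area claim, here is a small sketch using coordinates chosen to match the construction in Catennacio's post: E lies on line AB, and the rectangle is built on side DE with width equal to the distance from C to line DE. The specific coordinates are illustrative assumptions, not from the original figure.

```python
# Numeric check that parallelogram ABCD and rectangle DEFG have equal area.
import math

def cross2(u, v):
    """z-component of the 2-D cross product."""
    return u[0] * v[1] - u[1] * v[0]

D, C = (0.0, 0.0), (4.0, 0.0)
A, B = (1.0, 3.0), (5.0, 3.0)
E = (2.0, 3.0)  # any point on line AB gives the same result

DA = (A[0] - D[0], A[1] - D[1])
DC = (C[0] - D[0], C[1] - D[1])
area_parallelogram = abs(cross2(DC, DA))          # base x height

DE = (E[0] - D[0], E[1] - D[1])
dist_C_to_DE = abs(cross2(DE, DC)) / math.hypot(*DE)
area_rectangle = math.hypot(*DE) * dist_C_to_DE   # |DE| x width

print(area_parallelogram, area_rectangle)  # both print 12.0
```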
# Showing that a Gamma distribution converges to a Normal distribution

Consider $$G \sim \operatorname{Gamma}(p)$$. As $$p$$ goes to $$\infty$$, the Gamma becomes more and more bell-shaped. How do I show that $$\frac{G - p}{\sqrt{p}} \to Z \sim N(0,1)$$ as $$p \to \infty$$? I started with the CDF of the Gamma and began taking the limit, but it got very messy.

• Have you considered using the MGF? (or the CF). It's often a convenient strategy. Perhaps consider a Taylor-type expansion. – Glen_b Oct 25 '18 at 5:23
• I have not. My instructor suggested this as a fun practice problem using only the CDF and PDF. – purpleostrich Oct 25 '18 at 5:44
• @StubbornAtom it doesn't help that Z is used to represent two distinct things in the question. It would be necessary to fix that first – Glen_b Oct 25 '18 at 6:00
• Alternative to using the MGF, you can write $G_p$ as being equal in distribution to the sum of $p$ i.i.d. $\exp(1)$ random variables. The result is then immediate by the CLT. – Xiaomi Oct 25 '18 at 10:46
• The brute-force analysis isn't that difficult if you plan it out. Expand the log of the (unnormalized) PDF of $Z$ in a Maclaurin series. It will equal $$\log f_Z(z) = -\frac{1}{\sqrt p} + \left(\frac{1}{2p} - \frac{1}{2} \right)z^2 + O(p^{-1/2})O(z^3).$$ Thus its exponential is $e^{-z^2/2}$ times an expression that is very close to $1.$ Justify taking the limit under the integral sign and you're done. – whuber Oct 25 '18 at 14:24

This answer is part of a previous answer, linked here. That portion of the previous answer is copied over here so that one can see that the question above has been answered; however, since it formed only part of an answer to a different question, it might not have been noticed in the different context of the question above. The text is as follows.

A more direct relationship between the gamma distribution (GD) and the normal distribution (ND) with mean zero follows. Simply put, the GD becomes normal in shape as its shape parameter is allowed to increase. Proving that that is the case is more difficult. For the GD,

$$\text{GD}(z;a,b)=\begin{cases} \dfrac{b^{-a} z^{a-1} e^{-z/b}}{\Gamma (a)} & z>0 \\ 0 & \text{otherwise} \end{cases}$$

As the GD shape parameter $a\rightarrow \infty$, the GD shape becomes more symmetric and normal; however, as the mean increases with increasing $a$, we have to left-shift the GD by $(a-1) \sqrt{\dfrac{1}{a}}\, k$ to hold it stationary, and finally, if we wish to maintain the same standard deviation for our shifted GD, we have to decrease the scale parameter ($b$) in proportion to $\sqrt{\dfrac{1}{a}}$.

To wit, to transform a GD to a limiting-case ND, we set the standard deviation to be a constant ($k$) by letting $b=\sqrt{\dfrac{1}{a}}\, k$ and shift the GD to the left to have a mode of zero by substituting $z=(a-1) \sqrt{\dfrac{1}{a}}\, k+x$. Then

$$\text{GD}\left((a-1) \sqrt{\frac{1}{a}} k+x;\ a,\ \sqrt{\frac{1}{a}} k\right)=\begin{cases} \dfrac{\left(\frac{k}{\sqrt{a}}\right)^{-a} e^{-\frac{\sqrt{a} x}{k}-a+1} \left(\frac{(a-1) k}{\sqrt{a}}+x\right)^{a-1}}{\Gamma (a)} & x>\dfrac{k(1-a)}{\sqrt{a}} \\ 0 & \text{otherwise} \end{cases}$$

Note that in the limit as $a\rightarrow\infty$ the most negative value of $x$ for which this GD is nonzero $\rightarrow -\infty$. That is, the semi-infinite GD support becomes infinite.
Taking the limit as $a\rightarrow \infty$ of the reparameterized GD, we find

$$\lim_{a\to \infty } \frac{\left(\frac{k}{\sqrt{a}}\right)^{-a} e^{-\frac{\sqrt{a} x}{k}-a+1} \left(\frac{(a-1) k}{\sqrt{a}}+x\right)^{a-1}}{\Gamma (a)}=\dfrac{e^{-\frac{x^2}{2 k^2}}}{\sqrt{2 \pi } k}=\text{ND}\left(x;0,k^2\right)$$

Graphically, for $k=2$ and $a=1,2,4,8,16,32,64$, the GD is in blue and the limiting $\text{ND}\left(x;0,\ 2^2\right)$ is in orange, below.
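As a follow-up to Glen_b's comment, the MGF route fits in a few lines. This is a sketch assuming the unit-scale Gamma MGF, $M_G(t)=(1-t)^{-p}$ for $t<1$:

$$M_{(G-p)/\sqrt{p}}(t) = e^{-\sqrt{p}\,t}\, M_G\!\left(\tfrac{t}{\sqrt{p}}\right) = e^{-\sqrt{p}\,t}\left(1-\tfrac{t}{\sqrt{p}}\right)^{-p},$$

so, for fixed $t$ and $p > t^2$,

$$\log M_{(G-p)/\sqrt{p}}(t) = -\sqrt{p}\,t - p \log\!\left(1-\tfrac{t}{\sqrt{p}}\right) = -\sqrt{p}\,t + p\left(\tfrac{t}{\sqrt{p}} + \tfrac{t^2}{2p} + O\!\left(p^{-3/2}\right)\right) = \tfrac{t^2}{2} + O\!\left(p^{-1/2}\right).$$

The MGF therefore converges pointwise to $e^{t^2/2}$, the MGF of $N(0,1)$, which gives convergence in distribution.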
# Resources for viscous behavior in simple FEM

I am working on a simple explicit-integration lumped-mass elastic FEM code which implements CST+DKT triangles (plate+shell) and constant-strain tetrahedra (http://woodem.eu/doc/theory/membrane-element.html, http://woodem.eu/doc/theory/tet4-element.html). The code focuses on contact dynamics, so FEM is there only to model flexible boundaries. I would like to add some kind of viscous damping to the model, and I am looking for a resource which is not overly complicated. I independently thought I could use the elastic stiffness matrix $\mathbf{K}$ (as in $f=\mathbf{K}u$), scaled by some viscosity factor $\eta'$ (that would be computed from the material's $E$ and $\eta$), to compute the viscous resisting force as $f_v=-\eta'\mathbf{K}\dot u$. Is this formulation something known in the literature? Or is it plain wrong? Thanks for pointers.

• Did you search the literature for viscoelastic models? What did you find? – Wolfgang Bangerth Feb 23 '15 at 14:25
• @WolfgangBangerth: yes I did; the problem is not about viscoelastic models, but about how to plug those into explicit FEM. I searched scholar.google.com, but found references which did not really treat what I need. Perhaps I am just missing the right keywords. – eudoxos Feb 23 '15 at 14:41
• By "explicit", I assume you mean explicit as opposed to implicit time-stepping? When you incorporate viscosity, explicit time-stepping tends to fare quite poorly, no matter what space discretization you're using (FEM, FDM, ...) because the computational costs necessary to guarantee stability of the numerical scheme are very steep. – Daniel Shapero Feb 23 '15 at 17:52
• What you're suggesting seems to be a particular case of Rayleigh damping, with only a stiffness-matrix-proportional component. It wouldn't be explicit in any case. You can try a Rayleigh damping matrix with only a mass-matrix-proportional component, which you would lump similarly to the mass matrix. However, as mentioned above, the stability may deteriorate. – DanielRch Feb 23 '15 at 18:07
• @DanielShapero: yes, explicit time-stepping. The critical timestep is low, but the focus is on contact problems (like youtube.com/watch?v=cOLMNqtCy1c) which have their own constraints on the timestep. – eudoxos Feb 23 '15 at 19:03
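To illustrate the proposal (which, as DanielRch notes, is stiffness-proportional Rayleigh damping), here is a minimal sketch of how $f_v=-\eta'\mathbf{K}\dot u$ slots into an explicit lumped-mass update. The mesh (a 1-D chain of linear springs), material values, and time step are illustrative assumptions only; as the comments warn, the damping term can tighten the stable time step.

```python
import numpy as np

# Stiffness-proportional (Rayleigh) damping in an explicit lumped-mass update.
n, k, m, eta = 10, 1.0e3, 1.0, 1.0e-3  # nodes, spring stiffness, nodal mass, eta'

# Assemble the tridiagonal stiffness matrix K of the chain
K = np.zeros((n, n))
for i in range(n - 1):
    K[i, i] += k; K[i + 1, i + 1] += k
    K[i, i + 1] -= k; K[i + 1, i] -= k

u = np.zeros(n); v = np.zeros(n)
u[-1] = 0.01                      # initial stretch at the free end
dt = 1.0e-4                       # well below the undamped critical step

for _ in range(10000):
    f = -K @ u - eta * (K @ v)    # elastic force plus f_v = -eta' K u_dot
    f[0] = 0.0                    # node 0 clamped
    v += dt * f / m               # explicit (semi-implicit Euler) update
    u += dt * v
    u[0] = 0.0

print(u[-1])                      # the oscillation decays toward zero
```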
Problem

Which of the following pairs of rational numbers are equivalent?

a. $\dfrac {6} {14}$ and $\dfrac {10} {35}$

b. $\dfrac {6} {22}$ and $\dfrac {-22} {6}$

c. $\dfrac {-7} {35}$ and $\dfrac {2} {-10}$

d. $\dfrac {-8} {10}$ and $\dfrac {-20} {25}$
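A quick way to check each pair is cross-multiplication: $\dfrac{a}{b} = \dfrac{c}{d}$ exactly when $ad = bc$. The following sketch verifies all four options (Python's Fraction type reduces each rational to lowest terms before comparing):

```python
from fractions import Fraction

# Two rationals a/b and c/d are equivalent exactly when a*d == b*c,
# which is what Fraction's normalized comparison checks for us.
pairs = {
    "a": (Fraction(6, 14), Fraction(10, 35)),
    "b": (Fraction(6, 22), Fraction(-22, 6)),
    "c": (Fraction(-7, 35), Fraction(2, -10)),
    "d": (Fraction(-8, 10), Fraction(-20, 25)),
}
for label, (x, y) in pairs.items():
    print(label, x == y)   # c and d print True; a and b print False
```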
All requests for technical support from the VASP group must be addressed to: vasp.materialphysik@univie.ac.at

# LKPROJ

LKPROJ = .TRUE. | .FALSE.

Default: LKPROJ = .FALSE.

Description: switches on the k-point projection scheme.

For LKPROJ=.TRUE., VASP will project the orbitals onto the reciprocal space of an alternative unit cell. This unit cell has to be supplied in the file POSCAR.prim, in the usual POSCAR format. As a first step, the k-projection scheme determines the set {k′} of k-points in the irreducible part of the first Brillouin zone of the structure given in POSCAR.prim, for which

$$\langle \mathbf{k}'+\mathbf{G}'|\mathbf{k}+\mathbf{G}\rangle \neq 0$$

where G and G′ are reciprocal space vectors in the reciprocal spaces of the structures specified in POSCAR and POSCAR.prim, respectively. As usual, the set of points {k} is specified in the KPOINTS file. The set {k′} is written to the OUTCAR file; look at the part of the OUTCAR following NKPTS_PRIM.

Once the set {k′} has been determined, VASP will compute the following:

$$\mathrm{K}_{n\mathbf{k}\sigma \mathbf{k}'}=\sum_{\mathbf{G}\mathbf{G}'}\left|\langle \mathbf{k}'+\mathbf{G}'|\mathbf{k}+\mathbf{G}\rangle \langle \mathbf{k}+\mathbf{G}|\psi_{n\mathbf{k}\sigma}\rangle\right|^{2}$$

and writes this information to the PRJCAR and vasprun.xml files. $\mathrm{K}_{n\mathbf{k}\sigma\mathbf{k}'}$ provides a measure of how strongly the orbital $\psi_{n\mathbf{k}\sigma}$ contributes at the point k′ in the reciprocal space of the structure in POSCAR.prim. One may, for instance, use this scheme to project the orbitals of a supercell onto the reciprocal space of a generating primitive cell.

N.B. I: at the moment the k-point projection scheme only works with NPAR=1.

N.B. II: this feature is still evolving (especially with respect to its user-friendliness). The ability to write the PRJCAR file is not yet included in the distribution version of VASP, but will be available soon. In the meantime, if you are desperate to use this feature immediately, please contact the VASP group (vasp.materialphysik@univie.ac.at).
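A minimal INCAR fragment for such a run might look like the following sketch. Only LKPROJ and the NPAR restriction come from the description above; all other settings of the calculation are left to the usual defaults and are the user's choice:

```
! INCAR (sketch): project supercell orbitals onto the cell in POSCAR.prim
LKPROJ = .TRUE.    ! switch on the k-point projection scheme
NPAR   = 1         ! currently required for k-point projection
```

The generating primitive cell goes in POSCAR.prim (usual POSCAR format), alongside the supercell's POSCAR and KPOINTS files.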
# The state-transition matrix of a physical system

Here's a simple but potentially research-level problem that I am learning about. Let's say I am studying a physical system that is governed by $N$ objects. At each time, each object is either "active" and given the number "1", or inactive and given the number "0". So, the system has $2^N$ possible states. Now, if I am able to compute and find the state-transition matrix of this system, it should be a $2^N \times 2^N$ matrix. (Computability of the matrix is a different question.)

But what I am confused about is this: if I am understanding this problem correctly, the state-transition matrix is a generalization of the transition matrix of, say, an ergodic Markov chain - a matrix that most students learn about in basic probability theory (the entries are nonnegative and represent probabilities). Applying the transition matrix to a probability vector "updates" the probability vector that describes the Markov chain. But is the same thing true of the more general state-transition matrix of the physical system -- a system that is not necessarily a Markov chain? I.e., if I can compute this state-transition matrix, would applying this matrix to some vector then "update" the vector that describes the current state of the system?

If the answer to the above is "yes", then I see one technical issue: the matrix multiplication wouldn't make sense, as I would be applying a $2^N \times 2^N$ matrix to an $N$-tuple vector, with each component taking on the values 1 or 0. The vector wouldn't be of length $2^N$. Where am I going wrong here?

• the Markov chain transition matrix operates on the vector containing all possible states of the system, so it operates on a vector of length $2^N$, not on a vector of length $N$; perhaps your system is such that the dynamics of the $N$ objects are independent; say each is flipped with some probability independent of its neighbors; then your transition matrix is $2N\times 2N$, but more generally it is $2^N\times 2^N$. – Carlo Beenakker Apr 11 '16 at 12:17

I am not sure whether you want to describe a deterministic or a stochastic system. In either case, your definition of a "state" is not the usual concept. The states of a dynamical system are those variables (or rather a set of variables) such that, if you know their values at some time $t$, then you have complete knowledge of the system (in the context of the model, of course). So in your example there are $N$ states $x_i$, where $x_i$ describes the activity of node $i$. These states can attain the values $0$ or $1$. Your sentence "the system has $2^N$ different states" I would prefer to express as "the state space of the system is finite; it has $2^N$ elements". Elements of the state space are sometimes called states, especially when speaking of "the state of the system at time $t$", but this is really a slight abuse (that you hardly notice any more once you get used to it).

The state transitions of your system, presumably, are then governed by some set of equations of the form

$$x_i(t+1) = f_i(t,x_1(t),\ldots,x_N(t)),\quad i=1,\ldots,N.$$

This would be some sort of discrete dynamical system. This particular class of systems has received quite a bit of attention recently under the name of Boolean networks, precisely because the variables can only take the values $0$ or $1$.
You can of course describe the dynamics of the system as you propose, by writing down a $2^N \times 2^N$ matrix, but this matrix would be pretty boring, with one nonzero entry in every column (assuming you want to multiply the matrix from the left). If you do this then your state space is also a funny one: it is the set of standard unit vectors of length $2^N$, because such a vector has to encode the current activity profile of your nodes, which is done by enumerating the profiles.

Your analogy with the Markov chain was in fact a good one, just that you did not follow it through. What you describe covers what is known as a "finite-state Markov chain". The system has $N$ states and the state at time $t$ is described by the probability vector which gives the probabilities of being in a particular state. So the state space is the standard simplex, i.e., an uncountable space. The transition matrix describes the evolution on that state space. Still, the system only has finitely many states, i.e., variables whose value I need to know in order to know what is going on.

To answer your final question: in the study of discrete-time systems

$$x(t+1) = A x(t)$$

it is not uncommon to call $A$ the transition matrix. In this context the entries of $A$ can be any element of any field that you wish, as long as everything is well-defined.
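To make the "one nonzero entry per column" point concrete, here is a small sketch for $N = 2$ objects. The update rule (each object's next value is the XOR of the two current values) is an arbitrary assumption chosen only for illustration:

```python
import numpy as np

# N = 2 objects, so the state space has 2**2 = 4 elements: 00, 01, 10, 11.
def step(x):
    s = x[0] ^ x[1]     # assumed deterministic update rule, for illustration
    return (s, s)

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
index = {s: i for i, s in enumerate(states)}

# Build the 4x4 transition matrix: column j holds a single 1, in the row
# of the successor of state j (acting on one-hot vectors from the left).
T = np.zeros((4, 4), dtype=int)
for j, s in enumerate(states):
    T[index[step(s)], j] = 1

v = np.zeros(4, dtype=int)
v[index[(0, 1)]] = 1    # one-hot encoding of the state (0, 1)
print(T @ v)            # one-hot encoding of the successor state (1, 1)
```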
Wednesday, October 25, 2017

Seminar

25 October 2017, 11:00 to 12:00, Madhava Lecture Hall, ICTS Campus, Bangalore

Examining the stability of solutions of nonlinear PDEs continues to be an active area of research. Very few instances lend themselves to explicit results for even spectral and linear stability…

25 October 2017, 11:30 to 12:30, Nambu Discussion Room (Right), ICTS Campus, Bangalore

Supersymmetric localization is a powerful technique to evaluate a class of functional integrals in supersymmetric field theories. It reduces the functional integral over field space to ordinary…
My Math Forum - Differential equation problem!

Differential Equations: Ordinary and Partial Differential Equations

July 7th, 2009, 06:35 AM #1 (Newbie)

How do I work this differential equation out so that it is in the form "v(t) = ..."? I'm afraid that I am a bit rusty at DEs.

$$\displaystyle \rho A C_D (V_w - v(t))^2 = m \frac {dv} {dt}$$

July 7th, 2009, 06:48 AM #2 (Newbie)

Re: Differential equation problem!

Here is the equation in a more viewable format: P*A*C*[V - v(t)]^2 = m(dv/dt)

July 7th, 2009, 02:57 PM #3 (Global Moderator)

Did you mean PAC(V - v(t))² = m(dv/dt), where V and PAC/m are constants? If so, separate the variables (i.e., divide the equation by m(V - v(t))²) and integrate, then rearrange the result.

July 8th, 2009, 07:50 AM #4 (Newbie)

Re: Differential equation problem!

OK, I'll try that.
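Carrying the moderator's suggestion through (a sketch, writing $k=\rho A C_D$ and treating $V_w$, $k$, and $m$ as constants, with initial velocity $v_0 = v(0)$):

$$\frac{m\,dv}{(V_w - v)^2} = k\,dt \quad\Longrightarrow\quad \frac{m}{V_w - v} = kt + C \quad\Longrightarrow\quad v(t) = V_w - \frac{m}{kt + C},$$

where the constant of integration $C = \dfrac{m}{V_w - v_0}$ is fixed by the initial condition, so that $v(t) \to V_w$ as $t \to \infty$.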
# Trinket's question at Yahoo! Answers regarding the modeling of weight loss

#### MarkFL

Staff member

Here is the question:

A person's weight depends on both the amount of calories consumed and the energy used. Moreover, the amount of energy used depends on the person's weight - the average amount of energy used by a person is 17.5 calories per pound a day. Thus, the more weight the person loses, the less energy the person uses (assuming that the person maintains a constant level of activity). An equation that can be used to model weight loss is dw/dt = (C/3500) - (17.5w/3500), where w is the person's weight in pounds, t is the time in days and C is the constant daily calorie consumption. (A) Find the general solution of the differential equation. (B) Consider a person who weighs 180lb and begins a diet of 2500 calories per day. How long will it take the person to lose 10lb? 35? (C) What is the limiting weight of the person? (D) Repeat (B) for a person who weighs 200lb when the diet started.

I have posted a link there to this topic so the OP can see my work.

#### MarkFL

Staff member

Hello Trinket,

We are told to model a person's weight with the IVP:

$$\displaystyle \frac{dw}{dt}=\frac{C}{3500}-\frac{17.5}{3500}w$$ where $$\displaystyle w(0)=w_0$$

I would choose to write the ODE in standard linear form:

$$\displaystyle \frac{dw}{dt}+\frac{1}{200}w=\frac{C}{3500}$$

Next, we may compute the integrating factor:

$$\displaystyle \mu(t)=e^{\frac{1}{200}\int\,dt}=e^{\frac{1}{200}t}$$

Multiplying the ODE by the integrating factor, we obtain:

$$\displaystyle e^{\frac{1}{200}t}\frac{dw}{dt}+\frac{1}{200}e^{ \frac{1}{200}t}w=\frac{C}{3500}e^{ \frac{1}{200}t}$$

Now, this allows us to express the left side of the ODE as the derivative of a product:

$$\displaystyle \frac{d}{dt}\left(e^{\frac{1}{200}t}w \right)=\frac{C}{3500}e^{\frac{1}{200}t}$$

Integrate with respect to $t$:

$$\displaystyle \int \frac{d}{dt}\left(e^{\frac{1}{200}t}w \right)\,dt=\frac{C}{3500}\int e^{\frac{1}{200}t}\,dt$$

$$\displaystyle e^{\frac{1}{200}t}w=\frac{2C}{35}e^{\frac{1}{200}t}+c_1$$

Solve for $w(t)$:

$$\displaystyle w(t)=\frac{2C}{35}+c_1e^{-\frac{1}{200}t}$$

We may determine the parameter $c_1$ by using the initial value:

$$\displaystyle w(0)=\frac{2C}{35}+c_1=w_0\,\therefore\,c_1=\frac{35w_0-2C}{35}$$

And thus, the solution to the IVP is:

(1) $$\displaystyle w(t)=\frac{2C}{35}+\frac{35w_0-2C}{35}e^{-\frac{1}{200}t}$$

The amount of weight lost $L$ is the initial weight minus the current weight, and so we may write:

$$\displaystyle L(t)=w_0-\frac{2C}{35}-\frac{35w_0-2C}{35}e^{-\frac{1}{200}t}$$

$$\displaystyle L(t)=\frac{35w_0-2C}{35}\left(1-e^{-\frac{1}{200}t} \right)$$

Solving this for $t$, we find:

$$\displaystyle \frac{35L(t)}{35w_0-2C}=1-e^{-\frac{1}{200}t}$$

$$\displaystyle e^{-\frac{1}{200}t}=\frac{35\left(w_0-L(t) \right)-2C}{35w_0-2C}$$

(2) $$\displaystyle t=200\ln\left(\frac{35w_0-2C}{35\left(w_0-L(t) \right)-2C} \right)$$

The limiting weight $w_L$ of the person is:

(3) $$\displaystyle w_L=\lim_{t\to\infty}\left(\frac{2C}{35}+\frac{35w_0-2C}{35}e^{-\frac{1}{200}t} \right)=\frac{2C}{35}$$

Now we have formulas to answer the questions.

(A) Find the general solution of the differential equation.

$$\displaystyle w(t)=\frac{2C}{35}+\frac{35w_0-2C}{35}e^{-\frac{1}{200}t}$$

(B) Consider a person who weighs 180lb and begins a diet of 2500 calories per day. How long will it take the person to lose 10lb? 35?
Using the given data: $$\displaystyle w_0=180,\,C=2500$$

The formula in (2) becomes:

$$\displaystyle t=200\ln\left(\frac{35\cdot180-2\cdot2500}{35\left(180-L(t) \right)-2\cdot2500} \right)=200\ln\left(\frac{1300}{1300-35L(t)} \right)$$

Now, to compute the time to lose 10 lb, we use $L(t)=10$ to get:

$$\displaystyle t=200\ln\left(\frac{1300}{1300-35\cdot10} \right)=200\ln\left(\frac{1300}{950} \right)=200\ln\left(\frac{26}{19} \right)\approx62.73\text{ days}$$

And to compute the time to lose 35 lb, we use $L(t)=35$ to get:

$$\displaystyle t=200\ln\left(\frac{1300}{1300-35\cdot35} \right)=200\ln\left(\frac{1300}{75} \right)=200\ln\left(\frac{52}{3} \right)\approx570.53\text{ days}$$

(C) What is the limiting weight of the person?

$$\displaystyle w_L=\frac{2\cdot2500}{35}=\frac{1000}{7}=142.\overline{857142}\text{ lb}$$

(D) Repeat (B) for a person who weighs 200 lb when the diet starts.

Using the given data: $$\displaystyle w_0=200,\,C=2500$$

The formula in (2) becomes:

$$\displaystyle t=200\ln\left(\frac{35\cdot200-2\cdot2500}{35\left(200-L(t) \right)-2\cdot2500} \right)=200\ln\left(\frac{2000}{2000-35L(t)} \right)$$

Now, to compute the time to lose 10 lb, we use $L(t)=10$ to get:

$$\displaystyle t=200\ln\left(\frac{2000}{2000-35\cdot10} \right)=200\ln\left(\frac{2000}{1650} \right)=200\ln\left(\frac{40}{33} \right)\approx38.47\text{ days}$$

And to compute the time to lose 35 lb, we use $L(t)=35$ to get:

$$\displaystyle t=200\ln\left(\frac{2000}{2000-35\cdot35} \right)=200\ln\left(\frac{2000}{775} \right)=200\ln\left(\frac{80}{31} \right)\approx189.61\text{ days}$$
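As a sanity check on formulas (1) and (2), here is a short Python script (an addition for this writeup, not part of the original answer, assuming NumPy and SciPy are available) that integrates the ODE numerically and reproduces the closed-form answers above:

```python
import numpy as np
from scipy.integrate import solve_ivp

C, w0 = 2500.0, 180.0  # daily calories and initial weight (lb) from part (B)

def dwdt(t, w):
    # dw/dt = C/3500 - (17.5/3500) w, the ODE from the problem
    return C / 3500.0 - 17.5 * w / 3500.0

# Closed-form solution (1): w(t) = 2C/35 + ((35 w0 - 2C)/35) e^(-t/200)
t = np.linspace(0.0, 800.0, 801)
w_exact = 2.0 * C / 35.0 + (35.0 * w0 - 2.0 * C) / 35.0 * np.exp(-t / 200.0)

# The numerical solution should agree with (1) to within solver tolerances
sol = solve_ivp(dwdt, (t[0], t[-1]), [w0], t_eval=t, rtol=1e-10, atol=1e-10)
print(np.max(np.abs(sol.y[0] - w_exact)))  # tiny, on the order of 1e-8

# Formula (2): time (in days) to lose L pounds
def time_to_lose(L, w0=w0, C=C):
    return 200.0 * np.log((35.0 * w0 - 2.0 * C) / (35.0 * (w0 - L) - 2.0 * C))

print(time_to_lose(10.0))         # ~62.73  (part B)
print(time_to_lose(35.0))         # ~570.53 (part B)
print(2.0 * C / 35.0)             # ~142.86, limiting weight (part C)
print(time_to_lose(10.0, 200.0))  # ~38.47  (part D)
print(time_to_lose(35.0, 200.0))  # ~189.61 (part D)
```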
2010-11-29

I got back to college yesterday night. While I flew home, I came back on a Peter Pan bus. It was pretty nice because there weren't any inane weight restrictions, and there certainly weren't any security checks of any kind; I just got on the bus and went on my way. Plus, the tickets were pretty reasonable (considering that it was pretty nice inside the bus): $25 per leg. Until the TSA lets up on this ridiculous security theater (there, I said it), this is how I'll probably travel from now on for these distances (if someone doesn't drive me). (Of course, for even longer distances, road travel stops being such a good option.)

Oh, and the quotes in the title? That's the sentiment of opinion writers who support the TSA rules; they think that we are all just acting childishly in our opposition to the new rules. Is being violated childish? Plus, do you really expect that government officials won't misuse the scanners' images in some way soon?

But then again, the seats aren't especially comfortable on buses for long travel times (though they aren't any better on planes — the flight times themselves are just shorter). I think for distances close to that between my home and my college, the ideal solution would be high-speed rail. It would be cheaper, quicker, and more comfortable than taking a bus, and it would probably have fewer hassles than flying. So when can we get that again?

Featured Comments: Week of 2010 November 21

I want to apologize for not having posted this yesterday as usual. I was traveling back to college (more on that in an upcoming post) and didn't have computer access all day. There were quite a few comments on posts this week, so I won't repost all of them.

Adafruit Bears Fruit for Microsoft

In response to my question about why Microsoft seemed so defensive, an anonymous reader had this to say: "Because Microsoft stir hackers' defiance whenever they say they have protected their products. For Microsoft, it was just saying "I challenge you to hack my ultra-securized device", and some hackers successfully took in the challenge. the Microsoft PR guys are just brilliant, they just took advantage of the company's reputation and the situation actually did beget creativity, the sort of creativity that will eventually benefit the Redmond-based behemoth."

Another anonymous commenter counters this: "Why do you think the developer at Microsoft who claimed that it would be easy to hack really telling the truth? I think it is just a post-construction when they realized that it was impossible to stop. And then everything is back to normal again, MS is and will remain evil :-)"

Another anonymous reader thinks it's because of sheer ignorance on Microsoft's part: "The data format was not "ultrasecurized" at all. They didn't know what was going on they just heard "kinect hacking" and gave a generic response which applied to a physical type hacking, with soldering and all. This was not a physical hack but a reverse engineering of the data format."

Ubuntu to Become a Rolling Release Distribution

In response to the update about the news being not-quite-true, reader T_Beermonster wrote, "I think that's a shame that they back-pedalled. Since I went rolling release with aptosid I don't think I'd be willing to go back to a step-change release model. I can see why it may be easier to sell support contracts for a step-release model but I don't think it actually offers any benefit to a desktop user."
Linux Mint: Good for Low-Requirement and Paranoid Users

Reader Arjun Krishna had this to say about it: "Windows is one of the worst OSes I have ever had. Linux Mint 10 "Julia" is definitely one of the most stable and user-friendly Operating systems in the world! Open SUSE is also a good alternative to Linux Mint, in case the system is older, and has less RAM. In any case, any Unix based OS would be much better to work with than a Windows based OS."

Also, commenter herbalfroot wrote, "Everyone for whom I have installed *buntu and mint have nothing but praise for the desktop they now have. These include non-technical users. I roll my eyes to the sky whenever I hear 'Linux is too difficult for the average user'."

Thanks to everyone who commented this week. Unfortunately, for the next two and a half weeks, I'll be quite busy, so don't expect to see a whole lot of new posts. In any case, as always, if you like what I write, please subscribe!

2010-11-26

Linux Mint: Good for Low-Requirement and Paranoid Users

Two days ago, I helped a friend (whose identity I will not reveal here) perform a Linux Mint installation on her computer. That computer had Microsoft Windows 7 on it, which was becoming extremely slow and unreliable by her own account. Because of this, she was willing to try something new. She doesn't really do much aside from web browsing and document creation; hence, I figured that something like Linux Mint would be perfect for her. I let her try out what she would use most before installing, and she seemed happy with it; even during the installation process, I only helped if she had a question for me, which is a testament to how easy Linux Mint (version 9 LTS "Isadora") is to install. I showed her around the Software Manager, which allowed her to install things like Skype and Chromium. All in all, the installation and configuration process took about half an hour, and she seems quite happy with it so far. (In fact, I'm envious of her, because her laptop can suspend and hibernate well in Linux Mint, whereas mine can't.)

Yesterday, I talked to another friend of mine in the area, whose parents have set up parental controls in tandem with an antivirus program on his Microsoft Windows XP computer. It slows his computer down, and he sometimes can't visit sites like YouTube and Google because they are occasionally listed as "inappropriate". (Proxies don't help because most of the big proxies have been blocked by that program as well; also, he's well past the age where parental controls would be necessary.) I told him about the concept of a Linux live CD and how he can either install it to bypass Microsoft Windows XP and its programs or just work from the live CD and not leave a trace. He seems really interested in that now (though we'll see how it goes).

My point in all this is: can we please dispel the myth that Linux is "too hard to use for a new user"? If the user is like the first person mentioned and doesn't do much more than browse the Internet and create documents, the user will probably never see the command line — ever. If the user is like the second person, there really isn't a substitute for a Linux live CD (because the obvious solution (removing the program) isn't an option for obvious reasons).

2010-11-25

Happy Thanksgiving! (and My TSA Experience)

Happy Thanksgiving to everyone! I hope you all are able to spend it with family, friends, and other loved ones; I've come home from college for the weekend.
On a related note, I did have to go through the new security procedures, and I got the grope. That said, though I am still a bit wary of the whole thing, I'm happy to report that the security person was extremely polite, professional, and hygienic (changing gloves before examining me), and didn't actually go directly over my nether regions, so not once did I feel truly uncomfortable in the process. Once again, happy Thanksgiving to everyone!

2010-11-24

Ubuntu to Become a Rolling Release Distribution

This just in, folks: Ubuntu is about to become (Susan Linton, OStatic) a rolling release distribution! SWEET! Of course, this means no more weird numbering system and no more silly "[adjective]-[animal]" names...or does it? Mark Shuttleworth does say that, like any other rolling release distribution, Ubuntu will release ISOs periodically for people who are installing for the first time, as well as for people who need to reinstall Ubuntu for whatever reason. I'm not too happy about the move to Unity, and I'm cautiously optimistic about the move to Wayland, but I can say for sure that I'm ecstatic about this news. I really do agree that in an Internet-oriented world (reflecting Ubuntu's new/revised goals as well), rolling release is the way to go.

Of course, this leads me to the question: what about Linux Mint? One of the reasons Linux Mint made a straight-up Debian-based edition was to take advantage of the rolling release model in the "Testing" distribution. Now that Ubuntu does that too, does this mean that Linux Mint will follow suit whenever that happens and drop the "Debian" edition altogether? I'm excited to see what's in store for the future!

(UPDATE: As it turns out, Ubuntu isn't actually going the rolling-release route. All it's doing is essentially integrating the PPA functionality into the main system to allow people to get the latest versions of third-party software like Mozilla Firefox. I remember some Ubuntu developers mentioning this before (specifically regarding Mozilla Firefox), so this doesn't come as a huge surprise. That said, I'm a little disappointed that it's not what I thought it was.)

2010-11-23

Adafruit Bears Fruit for Microsoft

Several days ago, open-source hardware company Adafruit offered a "bounty" of $3000 for the first person to hack Microsoft's Kinect (formerly Project Natal) device. For those of you who don't know, Kinect was originally just an add-on hardware accessory for the Microsoft XBOX 360 allowing for motion sensing of one's full body (as opposed to using an external device, like the Wiimote in Nintendo's Wii). However, companies like Adafruit saw the additional value in a product like this, and Adafruit offered a cash prize for whoever could first release an open-source driver (not necessarily for Linux per se) for the Kinect. (Someone did win and receive the cash prize already.)

Since then, dozens of new and interesting uses for the Kinect have come up, including being able to manipulate pictures and videos using just your arms (sci-fi style) and being able to make a movie of yourself using a lightsaber in real time by having the Kinect track the motion of you swinging around a long stick. The possibilities are virtually endless.

More interesting, however, is Microsoft's response to all this. First, they angrily condemned the cash prize offer, saying they don't condone such modifications; furthermore, they seemed to vaguely threaten legal action against Adafruit and/or the skilled hacker.
Later, once the prize had been claimed, however, Microsoft backed down from the legal threats, probably because even they knew they wouldn't stand a chance in court. Now, after all this, a Microsoft engineer has admitted that the Kinect was designed to be easy to hack for exactly these sorts of purposes. So my question is, why wasn't Microsoft open and up-front about this from the start? Unlike Bart Simpson and Nelson Muntz, they don't have a "bad-boy" reputation to protect. If they had been open about this from the start, people who were cowed into submission and inaction by Microsoft's threats would instead have tried their hands at the Kinect, leading to more competition and possibly even higher-quality drivers (and even more possibilities). It looks like Microsoft is admitting that it needs to look like a bully even if it really isn't at times; why?

2010-11-21

I remember seeing a couple of comments spread out over reviews I've done in the past asking why I don't do my reviews through actual live media. Well, the reason was that with my new laptop, for the longest time I thought that USB booting was a lost cause; furthermore, I didn't want to waste the few blank CDs and DVDs I had (and still have) on random distributions. Well, I'm happy to report that I can in fact boot from USB on my laptop (in fact, I'm writing this from a Linux Mint "Debian" 201009 GNOME live USB), and for this I need to apologize to those commenters who sincerely asked why I wasn't more sincere in my own distribution testing. I'm truly sorry that my laziness (in terms of actually taking the time to look for an answer) misled all of us.

That said, testing with VirtualBox has been fun in its own way (and I may still do that with "light" distributions to see things like how little RAM they really need), but now that I know I can use live USBs on my laptop, I'll certainly be doing that, as I can now test things like USB support, 3D compositing support, and webcam support that I couldn't before. Oh, and for the record, the issue was that I was manipulating the wrong BIOS submenu to give the USB device boot priority over the hard drive. Now I know...

Featured Comments: Week of 2010 November 14

This past week, only one post garnered comments.

Review: GNU/Linux Utopia 20101211 (Idea by Manuel)

Manuel had this to say about it: "Thanks for review i agree in a lot of things, i think is coming a newer version soon, anyway it's slackware, whats in minds no dependencies control, no language selector, no user selector, if normally i use Debian/Ubuntu with apt-get and similars slackware looks strange Tip: We working in a tutorial and screencasts. Thanks fro review, nice job! thanks!"

On the other hand, an anonymous commenter had this question: "Why on *Earth* would you think you have even the slightest ability to produce a decent review when you don't even speak the language the entire distribution is designed in?" I have already responded to that, so I won't repost it here.

Thanks to Manuel and the anonymous reader for commenting on that post. Please note that I probably won't have that many posts this week, but in any case, if you like the material, please do subscribe!

2010-11-20

LG Cell Phone City ID Gripes (and 0x100 Posts!)

Das U-Blog now has 0x100 (the hexadecimal number 100, equal to 256 in the standard decimal system) posts! Yay!

That aside, I've been having some issues with my cell phone. I'm not talking about call, build, sound, or photography quality; I'm talking about a feature called "City ID".
When I first got the phone, whenever I made or received calls, I could see not only the name and number of the person in question but also that person's location (and I believe this is based on the location where the phone is first activated, not the real-time location). It still seems like a pretty cool feature, but unfortunately, the trial version of this feature expired a few weeks after I got the phone. Since then, my phone has been bugging me far too often about whether I want to upgrade to the paid subscription for the program now or later. (How about never?) These messages first started appearing once every few days, but they seem to have increased in frequency since then, and now it seems like they appear every other time I press a button on my phone when it's powered on.

Recently, it's gotten even worse. A few weeks ago, instead of this message, I finally got the option to completely remove the program from my phone. Without any hesitation, I did so immediately, and it was gone for a few days. You can probably tell by that statement that it came back after that, and that is what happened. A few days ago, I got a message asking if I wanted to renew the free trial, so I said yes (rather foolishly). Instead of getting that, I got this weird screen full of news that would only belong in the National Enquirer. I quickly got out of that page and have noticed nothing relating to it since then. Of course, the messages asking me to renew "City ID" have only gotten more frequent.

I'm getting the feeling that this is some sort of malware (not deliberately malicious, but just extremely annoying) and that I need to remove it somehow. I've searched a little bit on the Internet for help in this regard and have found nothing so far. Does anyone have any idea how I can get this cursed program off my phone for good?

(UPDATE: A couple of minutes after finishing and saving this post, I did just one more search, but with more general search terms, and I found the results I needed on the first page itself. Wow! Hopefully this really does mean that "City ID" is gone for good from my phone.)

2010-11-19

Movie Review: Harry Potter and the Deathly Hallows Part 1

Yesterday I got to see an advance screening of this movie with many other MIT students. It was a lot of fun, though there were a couple of mishaps regarding getting there (for some reason, handicap-accessible taxis can't be counted on to arrive at a specific time, according to one company), but that's all fine now. The movie? It was great! The only thing I will say is that the director overdid the relationship between Harry and Hermione (because in the book that was solely a figment of Ron's imagination).

2010-11-16

Chickening Out on the Chicken Tax

I was reading an article in the New York Times about the proposed overhaul of the New York City taxi fleet; all of the finalists in the selection process are minivans targeted at small business owners (the Ford Transit Connect, the Nissan NV200, and Turkish company Karsan's entry). Just for fun, I searched for all three on Wikipedia (and got no results for the last one). While reading the article about the first, I saw that it goes through a rather ridiculous shipping/manufacturing process just to avoid the "chicken tax". I then clicked that article. Apparently, this tax was put into place in the 1960s in response to France and West Germany's tariffs on goods like chicken. Since then, all the terms of the tax have been lifted except for the tax on light trucks.
What this means is that automakers must build light trucks and minivans like these in the US to avoid this rather excessive (and needless) tax. This doesn't just apply to foreign automakers; as you can see, it applies to Ford as well with its Transit Connect. To get around it (because Ford's US plants aren't capable of building the Transit Connect (yet)), Ford imports these vehicles with windows and rear seats (thus qualifying them as passenger vehicles and making them exempt from the tax) and then rips out the seats and seatbelts and replaces the windows with metal panels once in the US. Isn't that ridiculous and ridiculously wasteful (both of materials and money, which goes to show that quite a few taxes create real waste)? (Granted, the seats and windows are recycled, but it would still probably be less wasteful to just not use the materials at all as opposed to processing them at a recycling center after the fact.) Also, isn't it ironic that domestic companies that are supposed to be helped by these tariffs are actually being directly hurt by them?

The Cato Institute, a libertarian think tank, calls this tax a "policy looking for a rationale". It may have made a little sense 50 years ago, but now, I wholeheartedly agree with them. Will common sense please stand up?

2010-11-14

Review: GNU/Linux Utopia 12112010 (Idea by Manuel)

GNU/Linux Utopia Main Screen

Reader Manuel kindly asked me to write a review of a distribution he has created called GNU/Linux Utopia, and I am doing that right now. Available on SourceForge, it is a feature-packed Slackware (64-bit)-based distribution tailored for Spanish-language users. As I do not know Spanish, it was interesting for me to see just how well I could navigate a (literally) foreign environment using only what I already know about Linux DEs. Plus, this is my first experience testing a distribution based on Slackware, the oldest surviving Linux distribution today. I wasn't really sure how this modified or built upon Slackware, so it also gave me an opportunity to see what it's like to use Slackware. Follow the jump to read about the rest of this experience and to see if it really is a GNU/Linux "utopia".

Featured Comments: Week of 2010 November 7

There weren't too many comments this week, and they were spread out over different posts, so I'll repost most of them.

Ease: An Elementary Presentation Application

In response to Ease not working at all, an anonymous commenter said, "You should be at "Ease" to put it in the trash where it belongs...".

Airport Traveling Gripes

An anonymous reader had this to say: "This new full-body scan/procedure was really started by the failed Christmas attack of last year, not the cargo plane attempt. As you know, the attacker hid the explosives in his underwear, something that the new full-body scanner would have detected. The failed cargo plane attack sort of sped things up. I'm not saying I agree with the new full-body scans, but I just wanted to comment on your ": why should a plot to sneak explosives onto a cargo plane..." statement".

Thanks to all those who commented on this week's posts, and please do continue to do so. Again, if you like this material, please do subscribe!

2010-11-13

Airport Traveling Gripes

In a week and a half, I will be heading back home by airplane for the Thanksgiving holidays. Thus, I will have to deal with all the truly ridiculous "security" measures at the airport that are being talked about today.
(Side note: there's a really nice xkcd comic about this as well, discussing how inconsistent it is to confiscate small liquid containers yet allow laptop batteries through.)

Anyway, there seems to be a real backlash (Derek Kravitz, Washington Post) against the new super-restrictive rules regarding full-body frisks and scanners; while before, when new restrictions were put in place, people would grudgingly accept them and move on, now most people think these particular rules cross the line of decent and sane security measures into the realm of indecency and violation of rights.

There are a couple of things I don't get about this (the new frisking measures, not the backlash). It seems like this was prompted by a plot to blow up a cargo plane. Does anyone else see anything wrong with this? OK, I'll say it: why should a plot to sneak explosives onto a cargo plane and detonate them remotely lead to restrictions allowing security officials to pat you down fully on passenger planes? There seems to be no cause-and-effect connection at all here; it just seems totally arbitrary.

Furthermore, the numerous quotes from passengers describing these new rules as the TSA treating passengers like criminals aren't hyperbole by any means; an analyst at a security consultancy in Oregon has described the new procedures as "the same frisking that police use with probable cause". This is more serious than "reasonable suspicion"; it means that the TSA has a strong feeling that every single traveler is probably a terrorist. Hence, I will also say this to the TSA: stop treating us like criminals! Whatever happened to the presumption of innocence?

Finally, why is it OK for the government to be violating people like this? I remember learning in a set of videos required by my college over the summer that "unless there's consent, it's assault". Does that mean they're technically sexually assaulting us all? Or are they going to pull the excuse of "by flying, you are automatically consenting to all of our procedures"?

2010-11-12

Preview: Debian 6 "Squeeze" (Part 4: Standard)

There are a couple of things I want to say before beginning with the real content of this post. First of all, I want to apologize for not having written a post for a few days. That said, I did warn at the beginning of this semester that my work might keep me busy enough to be unable to write a post, and that's exactly what happened in these few days. Furthermore, it will likely happen again soon, as I anticipate being fairly busy this weekend and next week.

Second, this is not a Debian version that I wanted to test for the sake of testing it; my ultimate goal is to install the Trinity DE and thus make an Oxidized Trinity variant based on Debian. There will be no screenshots because most of the action occurs at the terminal; the finished Oxidized Trinity screenshots will be included in a separate article (because I haven't yet finished). Debian needs no further introduction, so follow the jump to see the rest. I followed these tutorials to do this: this one on doing a net installation of "Squeeze", and this one on doing a minimal net installation of Debian 4 "Etch" with the X Window System.

2010-11-08

Ease: An Elementary Presentation Application

GNOME Office has always had a pretty good word processor (AbiWord) and a great spreadsheet program (Gnumeric).
AbiWord is fine for most things, though it can't fully support exporting documents in Microsoft formats (though it is said that older versions of Microsoft Office Word had the same problem) and it doesn't support all macros. Gnumeric is great for statistical analysis and speed, and it has nearly every feature present in Microsoft Excel. What GNOME Office has always lacked, though, is a presentation program. Sure, Evince could always display presentations, but there was no tool to create them. Now that's changed, as there's a new kid on the block: Ease (UPDATE: here's the link to the site). Ease is supposed to be the tool that completes GNOME Office, and it is clearly trying to make it into the Elementary project, as its website shows that project's influence. Its aim is to make the creation of presentations a lot simpler. It's still a work in progress, as it can't export to formats other than PDF, HTML, or PostScript, among other issues.

Naturally, I was curious to see how good it really is, so I fired up my Linux Mint 10 "Julia" GNOME RC virtual live system and installed Ease. Well, unfortunately, a work in progress it most certainly is. Ease just refused to start. I'm not entirely sure what's going on, as all the dependencies were properly installed within the live session. There could be a number of possible contributing factors: it could be because of the live session, the non-final status of Linux Mint, or the non-final status of Ease. I'm going to go with the third option. I had high hopes, and I still do, but I hope that Ease gets over these stability issues soon. When it does work, I hope to include it in Fresh OS along with AbiWord and Gnumeric.

2010-11-07

Featured Comments: Week of 2010 October 31

There were a few posts this past week that got comments, so I'll go through most of them.

Seriously? Vegan Chicken Wings?

Reader T_Beermonster had this, among other things, to say: "I suspect that a large part of the pseudo-meat boom is down to the fact that for most non-vegetarians cooking for the lone vegetarian (aka awkward person) is an annoyance and an afterthought. I know that most of my family when cooking for my wife will just fall into the lazy practice of cooking the same thing but with faux-meat. Obviously it tastes revolting but that doesn't matter because: a) the cook isn't going to be eating it. b) if they cared what the food tasted like the awkward one would be eating meat like everyone else."

How-To: Remaster Debian 6 "Squeeze"

An anonymous commenter (who later posted a few more times to clarify some points) said, "Hi, thanks for the post. I've bookmarked it for my reference once I have time to try remastersys. Please inform what files or folder did you copied to /etc/skel. Btw, do you mind to share the theme of this blog, I really like it :)"

Why Safe Browsing Habits Don't Guarantee Anything

Reader T_Beermonster had this to say, among other things: "A computer doesn't even need to be networked to get infected. I'm currently restoring my nieces ex-laptop (dead dvd drive, broken hinges, slow as treacle running uphill) for one of her younger siblings (as yet undecided). It has had the modem removed and the network interface disabled (I say disabled, I suspect broken would be a more correct description) it has not been online anytime in the last 3 years. Naturally while I had it I thought I'd better run some antivirus software on and download all the service packs and hotfixes (achieved via my own linux box and a USB stick).
Naturally the laptop was riddled with malware. Now that malware got on the computer via USB, Floppy or CD (before the drive broke). Some fairly simple precautions may have helped (disabling autorun being the most obvious) and I'm putting them in place, but I'm pretty sure that when I next see that laptop it will have more for me to remove."

Thanks to everyone who commented this week, and again, if you enjoy the material, please do continue commenting and subscribing! Also, Fresh OS is now out on the project's SourceForge page (and the wiki is more complete than before), so please do check it out, download it, and tell me what you think (and if you really like it, show your friends as well)!

2010-11-06

FOLLOW-UP: General Disillusionment with Ubuntu

Last week, I commented on how many Linux users are turned off by Canonical's seemingly unilateral decisions with regard to the development of Ubuntu, the latest (at that time) example of which was the decision to ship the Unity DE as the default even in the desktop edition, even though it's clear that the standard netbook version of Unity still needs a lot of work. Well, a lot of news outlets have reported that Canonical is going even further with this and that it wants to completely ditch the X11 Window System. Wow. That's a pretty bold move. Then again, it really does explain the decision to ship Unity, as Canonical probably wants to use that as a testbed for a totally new desktop environment built on the relatively new Wayland system.

So what do I think about this? Well, now I don't oppose the move to Unity as much, because now I know it's just part of a bigger plan. That said, I'm not an expert by any means on windowing systems or X11, but I'm inclined to believe the numerous statements that the reason for this switch is that X11, dating from the 1980s, isn't getting any more streamlined, and it's just getting more bloated with newer versions of desktop environments. Given that, I totally understand and do agree with the switch to Wayland, especially if it is going to be a long-term shift with support for legacy X11 applications for a while as well. At the same time, I hope that Canonical really means it when they say that the transition to Wayland will be a much longer-term process.

Given all this, I wonder what will happen to Linux Mint and other derivatives of Ubuntu after this. In fact, now that we know that Canonical's future plan is to ship Unity or an evolution of it based on Wayland as the default environment in Ubuntu, what will happen to the official derivatives, like Kubuntu and Xubuntu? Will Canonical actually put effort into helping migrate KDE and Xfce onto Wayland from X11, or will they just be left out to rot? I'm anxious to see what comes of all this in the coming years.

(UPDATE: The lead developer of Linux Mint has said that Linux Mint will adopt neither Unity nor Wayland in the foreseeable future, though it will remain compatible with Ubuntu. That said, it is also not likely to adopt GNOME Shell; therefore, it will remain essentially in its current state.)

2010-11-05

This Blog's Template

An anonymous reader had asked for the template used in this blog. First, I'm going to list out the basics from the Template Designer. (All colors are given using their 6-digit hexadecimal codes.) Follow the jump to see the full template.

The base template used is the "Simple" Blogger template (provided by Blogger). There is no background image.
The body layout has a main area and a sidebar, which splits into two smaller sidebars a bit down the page. The blog is 1000 pixels wide, and the right sidebar is 320 pixels wide. The font used throughout the blog is Droid Sans, which can be added to a blog through Google's Font API. The size is 14 point, and the color is 333333. The outer background color is 222222, while the main background is FFF5E5. All link colors (link, visited, and outer) are 66B5FF, as are the blog title and description colors.

The blog title uses the Droid Sans font at 55 point. The tabs also use the Droid Sans font at 14 point, and the selected and unselected text colors are both 333333. The selected tab background color is EEEEEE, while the unselected tab background color is FFF5E5. The post title size is 25 point. (The font is still Droid Sans.) The date header color is 999999, while its background is transparent. The post footer has text color 333333, while its background and shadow colors are both FFF5E5. The gadget font is Droid Sans at 15 point. The title color is 333333, while the alternate color is 999999. The image background and border colors are both FFF5E5, while the caption text color is 333333. The separator line and tabs border colors are both FFF5E5. I have not added any custom CSS to override the style settings.

2010-11-04

The Destruction of the Parody

For the record, I'm not saying that parodies themselves are declining in quality — far from it. If anything, they've just been getting better and better. No, what I mean is that advertising agencies and record labels are trying to put an end to parodies by claiming that obvious parodies (like the parody of a Lady Gaga song and the parody of a lobbying group's political ad, both covered on TechDirt here and here) don't qualify as parodies because they use the original soundtrack/video footage, meaning that they violate the restrictions on derivative works.

I think it's ridiculous that these companies are claiming that these parodies aren't actually parodies out of a misplaced fear that the original won't get views/sales. I suppose that fear makes some sense for the ad company, considering that a parody video with the exact opposite message probably won't push people towards seeing the original ad, but in the case of songs, the opposite is exactly what happens. Just look at Weird Al: often, his parody of a somewhat lesser-known artist propels that artist to stardom. Plus, artists parodied by Weird Al consider it a badge of honor; for example, rapper Chamillionaire once said that his favorite song (as listed on his MySpace page), above his own song "Ridin'", was Weird Al's parody of it ("White and Nerdy").

I understand how poorly done parodies can turn some people off from hearing the original version of a song, but as far as I know, the person who did the parody of the Lady Gaga song (among others) did these parodies quite well, so I can only imagine that many viewers who wouldn't have considered purchasing Lady Gaga's music started to do so after watching the parody. So, media industries, why are you shooting yourselves in the foot by trying to stop parodies? The art of the parody is older than the music industry itself, so it's not even like these industries are resisting some sort of "scary new change".
2010-11-03

Why Safe Browsing Habits Don't Guarantee Anything

I see articles on sites that promote Linux, like MakeTechEasier, Dedoimedo, and others, that say that Linux shouldn't necessarily be promoted for any inherent security advantage over Microsoft Windows, because browsing safely can prevent any problems from appearing. By this logic, there's also no need for antivirus software on Microsoft Windows, because safe browsing habits alone will prevent viruses and other malware from appearing. I have two issues with this.

For one, on Linux, while it's common sense to exercise safe browsing habits anyway (i.e., not going to sites that scream "I WILL INFECT YOUR SOFTWARE"), it's not strictly necessary to do so, because malware written for Microsoft Windows won't work on Linux, and in any case, the malware won't have administrative privileges to run (unless the user expressly allows such privileges, which can happen especially if it isn't immediately clear that the malware is malware (so the user thinks it's a harmless program)). Of course, there is a new bug out there that can automatically obtain superuser privileges in many Linux distributions, but that's a different story entirely.

The other problem I have with this is that it happened to me yesterday. I was in the library on a networked Microsoft Windows XP computer, checking my email and reading the news, when I suddenly saw a program called "ThinkPoint" hijack my desktop session, telling me that my computer had viruses that I needed to remove (but to remove them, I would supposedly need to pay a monthly fee). Obviously, "ThinkPoint" itself is a piece of malware. These news sites work perfectly fine on Linux and had worked well on Microsoft Windows (until now). I had to call our school's tech support, and (shockingly) they were very helpful, pleasant, and quick to respond to my issue. In fact, I am typing this post from the same computer now. I want to thank IS&T for being so great about this, but I also want to say that practicing safe browsing doesn't guarantee full safety from malware — antimalware software is still necessary on Microsoft Windows. So please, Dedoimedo (and other sites): even if you've never had an issue and you've always practiced safe browsing, that may not work out for everyone else, so stop acting like it will.

2010-11-02

How-To: Remaster Debian 6 "Squeeze"

There are a couple of qualifications to "Debian". In fact, this isn't really a general guide for Debian itself; it's more for Linux Mint "Debian". In any case, because Linux Mint "Debian" is pointed towards the Testing repositories by default, the procedure for standard Debian will still be similar.

I wanted to take this opportunity to let you know that the latest versions of Fresh OS are up on my SourceForge site. Yay! These are the download links (for Traditional, Elementary, and Light), and I am also going to link to the project wiki as well. I'm still working on the wiki, so please be patient. In any case, I strongly recommend that you try it out (and if you're especially bold, install it (though be warned that the installer is the Remastersys installer, which isn't very consistent)), and please let me know what you think, either in this blog's comments or in a review on the project's SourceForge page. Thanks!