http://www.avrocks.com/image-manipulation-with-php-the-gd-libraries.html
# Image Manipulation with PHP - The GD Libraries
The GD library is the principal PHP module used for image manipulation, and is available from Boutell.Com, Inc.
If you are lucky enough to be hosted on (or indeed own) a server running GD 2.0 or above, you'll be able to use truecolour images in JPEG format (and in PNG from version 2.0.4), so you won't really need this tutorial. Those using GD 2.0+ should opt for ImageCreateTrueColor in place of ImageCreate, and ImageCopyResampled in place of ImageCopyResized, to ensure maximum colour depth. If you're running anything under 2.0, read on!
## Creating the Image
The function ImageCreate(x_dimension,y_dimension) in GD is restricted to a palette of 256 colours, and as such, it rarely outputs an image at a quality that’s acceptable to most Web designers. Fortunately there is a way of getting around this restriction, and maximizing the palette to 16.7 million colours. To enable this, you’ll need a graphics editor that’s capable of saving jpegs with zero compression, a GD-capable server, and a few spare minutes to read this tutorial.
A quick explanation of the concept…
If you use ImageCreateFromJPEG($image_pointer);, your usable palette will exactly mimic that of the image to which you point. So, if we throw away the ImageCreate(width,height) function and focus on using ImageCreateFromJPEG, we should be heading toward a better palette.

Those of you who are conversant with GD (or the swifter thinkers among you) might already have noted that creating from a hosted image does not allow us to specify a width and height for the image we've created. If our resource image is 400px by 200px, and we want to create a thumbnail with a maximum dimension of 100px, then we'll want a base image of 100px by 50px. This limits us to two options:

1. We upload base images at all the width-height variations needed by our script, and dictate which one to use through a little basic mathematics, or
2. We use one image that's sized to accommodate any dimension we will need, and only draw to a designated area of that size, leaving any edges blank/page colour.

If all your displayed images are the same size, you won't have any major problems to solve: you can simply base your images on a large blank palette that is presized to the dimensions you need. And if you feel like exploring the first option and uploading multiple base images, you should be able to manage with the hints you've already picked up here. The rest of this tutorial will deal with the second option. We'll look at how to deduce which area of the created image to draw to, and the tips and hints provided here should get you up and running quickly!

## Basic Thumbnail Rules

### 1. Be Kind to Your Server

It's easy to display a group of thumbnails just by iterating through the images in a directory and pushing them all through a thumbnailing script. Because it is so easy, some people do exactly that on every page load, which quite obviously is a little server-resource hungry.
A much better alternative is to save an actual thumbnail image onto the server and simply display that.

### 2. Protect Your Thumbnailing Scripts

For versatility, you'll want a few variables that control output of the thumbnail. You'll need an image pointer at the very least, maybe a switch that allows you to save a copy to the server if need be, and perhaps a compression setting. With those in place, you'll want to ensure that no one can enter a URL that overrides your usual settings. The typical options of sessions or .htaccess will do fine.

## The Scripts

We're trying to create a new image that represents another image whose dimensions are not known prior to running the script. As we discussed, we'll want a few variables available that allow us to specify our resource image, control whether a copy is saved on the server, and control the output compression ratio. But we can also use a session check to control whether the script is run at all. Perhaps our administration panel could register a session variable called 'allow_thumbs' to enable the script. We'd call this script using the src attribute of an `<img>` tag, in a manner similar to this:

```html
<img src="thumb.php?basepath=foldername/&img_ref=nicebig.jpg&create=yes&compress=65">
```

Because we will be using exactly the same base image each time, we can also hard-code the dimensions into the tag, along with an alt attribute. You will, no doubt, find your own system for building the calling tag, based on your current administration set-up.

Another thing to note about calling through the src of an `<img>` tag is that any parse errors that occur in the script will not be echoed to the page. All that will happen is that you'll see an image-not-found space on your page. If you need to see the error returns for debugging purposes, you'll need to temporarily remove the session protection and access the script directly through a URL call.
The following script will produce a thumbnail by using a blank base image, rather than the ImageCreate() function. An explanation of the specific syntaxes follows.

```php
session_start();

if ($HTTP_SESSION_VARS["allow_thumbs"] == "yes") {
    header("Content-type: image/jpeg");

    // define the small, square image that will be
    // used as the thumbnail base
    $palette_image = 'foldername/large_palette_base_image.jpg';

    /****** You shouldn't need to edit below here ******/

    // set some default values for variables that have not
    // been passed to the script through the url
    if (!isset($HTTP_GET_VARS['create']))   { $HTTP_GET_VARS['create'] = 'no'; }
    if (!isset($HTTP_GET_VARS['basepath'])) { $HTTP_GET_VARS['basepath'] = ''; }
    if (!isset($HTTP_GET_VARS['compress'])) { $HTTP_GET_VARS['compress'] = 100; }

    // establish where on the thumbnail we can draw to
    $thumbsize = GetImageSize($palette_image);
    $maxdim = $thumbsize[0];
    $draw_from = $HTTP_GET_VARS['basepath'] . $HTTP_GET_VARS['img_ref'];
    $dim = GetImageSize($draw_from);
    if ($dim[0] > $dim[1]) {
        $to_w = $maxdim;
        $to_h = round($dim[1] * ($maxdim / $dim[0]));
        $to_x = 0;
        $to_y = round(($maxdim - $to_h) / 2);
    } else {
        $to_h = $maxdim;
        $to_w = round($dim[0] * ($maxdim / $dim[1]));
        $to_y = 0;
        $to_x = round(($maxdim - $to_w) / 2);
    }

    // create some base images to start designing from
    // and make the initial basic thumbnail
    if ($dim[2] == 1)     { $from = ImageCreateFromGIF($draw_from); }
    elseif ($dim[2] == 2) { $from = ImageCreateFromJPEG($draw_from); }
    elseif ($dim[2] == 3) { $from = ImageCreateFromPNG($draw_from); }
    $thumb = ImageCreateFromJPEG($palette_image);
    // $set_bg_colour = ImageColorAllocate($thumb, 255, 0, 0);
    // $fill_bg_colour = ImageFill($thumb, 0, 0, $set_bg_colour);
    ImageCopyResized($thumb, $from, $to_x, $to_y, 0, 0,
                     $to_w, $to_h, $dim[0], $dim[1]);

    /******* Image Manipulation Scripting *******/
    // extra image manipulation can go here
    /***** End Image Manipulation Scripting *****/

    // output the created thumbnail onto the calling page
    // and, if $create has been set to 'yes', also save
    // a copy of the thumbnail on the server
    ImageJPEG($thumb, '', $HTTP_GET_VARS['compress']);
    if ($HTTP_GET_VARS['create'] == "yes") {
        ImageJPEG($thumb,
                  $HTTP_GET_VARS['basepath'] . substr($HTTP_GET_VARS['img_ref'],
                      0, strpos($HTTP_GET_VARS['img_ref'], '.')) . '_thumb.jpg',
                  $HTTP_GET_VARS['compress']);
    }

    // destroy all the temporary images used by the
    // server while executing this scriptlet (tidying up)
    ImageDestroy($from);
    ImageDestroy($thumb);
}
```

The script above will take a resource image (referenced by img_ref= in the calling URL) and reduce it to a thumbnail that fits centrally within the palette image (the blank image used to extend the palette). The palette indexes on the resultant thumbnail will be increased to a maximum that mirrors that of the palette image. You might like to copy the above script and refer to it during the following explanations.

## Syntax Explanations

Let's now step through the script, and explore what happens at each point.

• Session - is simply there for security. You could do `session_start(); $allow_thumbs = "yes"; session_register("allow_thumbs");` in your admin page to activate the thumbnailing.
Alternatively, you could remove the session parts and put the thumbnailing script in a folder with a safeguarding .htaccess file. Either way, just make sure people cannot access the script by typing in the URL.
• header() - to output an image to a page or file, a content-type header must be sent first. We chose JPEG due to its compression ability.
• $palette_image - the path (absolute from root, or relative to this script) and name of the square blank image that is to be used as a thumbnail base. I emphasized "square" because you must use a square image with the above script. To create a nice truecolour base image:
  • open your graphics program,
  • make a new square (have I mentioned that it should be square?) image,
  • increase the colour depth to maximum (if it isn't already), and
  • save it as a .jpg.
You might also like to fill it with the same colour as the background of your page before you save. Lastly, upload it and change the value of $palette_image to reference this new file.
• default values - to avoid always having to pass every variable=value pair through the URL call, we set some default values for use when a variable isn't set. For instance, if we called an image with `img src="thumb.php?img_ref=blat.png"`, the script would automatically set create=no, basepath='' and compress=100.
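Since the register_globals-era PHP idioms can obscure the logic, here is the defaulting step sketched in Python purely for illustration; the function name `with_defaults` is mine, not the tutorial's:

```python
def with_defaults(params):
    """Fill in the script's fallbacks for URL parameters that were
    not supplied: create='no', basepath='' and compress=100."""
    defaults = {'create': 'no', 'basepath': '', 'compress': 100}
    merged = dict(defaults)
    merged.update(params)  # explicitly passed values win
    return merged

# calling thumb.php?img_ref=blat.png supplies only img_ref
print(with_defaults({'img_ref': 'blat.png'}))
```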
• establish draw area - using GetImageSize(), we find the height and width of both our resource image, and thumbnail base. We check which is bigger on the resource image (whether it is portrait or landscape) and reduce that to mimic the space available on the thumbnail base. Some reasonably easy mathematics can then be used to deduce the other dimension and the top left pixel of our draw area. We have...
• $to_h - height of the area to draw to
• $to_w - width of the area to draw to
• $to_x - horizontal position of the first pixel, counted from the left
• $to_y - vertical position of the first pixel, counted from the top
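The draw-area arithmetic above is independent of GD, so it can be checked in isolation. This is an illustrative Python translation of the same calculation (not part of the original script; `fit_to_square` is a hypothetical name):

```python
def fit_to_square(src_w, src_h, maxdim):
    """Scale (src_w, src_h) to fit a maxdim x maxdim square, returning
    (to_x, to_y, to_w, to_h) of the centred draw area."""
    if src_w > src_h:   # landscape: width fills the square
        to_w = maxdim
        to_h = round(src_h * (maxdim / src_w))
        to_x, to_y = 0, round((maxdim - to_h) / 2)
    else:               # portrait (or square): height fills it
        to_h = maxdim
        to_w = round(src_w * (maxdim / src_h))
        to_y, to_x = 0, round((maxdim - to_w) / 2)
    return to_x, to_y, to_w, to_h

# A 400x200 resource on a 100px base draws to a 100x50 strip,
# pushed down 25px so it sits centrally.
print(fit_to_square(400, 200, 100))  # (0, 25, 100, 50)
```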
• create some images - for GD to do any image manipulations it needs to create a copy of the designated image and work from that. We need it to create two images, the thumbnail base (which we called $thumb), and a copy of the resource image (which we called $from). Our earlier use of GetImageSize on the resource image also yielded an index that holds the filetype. A quick bit of value testing will reveal that we can create our copy of that image -- be it a .gif, .png or .jpg -- which means our script is a bit more versatile now.
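The filetype index at position 2 of the GetImageSize() result maps 1 to GIF, 2 to JPEG and 3 to PNG, which is what the if/elseif chain tests. As an aside (not from the original tutorial), the same dispatch can be expressed as a lookup table; this Python sketch is purely illustrative:

```python
# GetImageSize() puts a filetype code at index 2: 1=GIF, 2=JPEG, 3=PNG
CREATORS = {
    1: "ImageCreateFromGIF",
    2: "ImageCreateFromJPEG",
    3: "ImageCreateFromPNG",
}

def creator_for(filetype_code):
    """Return the GD creation function name for a GetImageSize code."""
    return CREATORS.get(filetype_code, "unsupported")

print(creator_for(2))  # ImageCreateFromJPEG
```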
• // $set_bg_colour - if you didn't fill your thumbnail base image with the same colour as your page background, you should un-comment this line and the $fill_bg_colour line. You should insert RGB values into the ImageColorAllocate($thumb, red, green, blue) call to mimic your page colour. You could even pass these in as variables in the calling URL.
• ImageCopyResized - using all the mathematically deduced values, we use this function to take the entire area of our resource image, shrink it, and copy it to the rigidly defined area of our thumbnail base.
• /* image manipulation scripting */ - if we wanted to perform any further manipulations on our thumbnail, we would do so in this area. Further along in this tutorial are a couple of code snippets that you could plug in here.
• ImageJPEG - this is the call that outputs an image to the browser.
• if ($create == "yes") - to save a copy of the thumbnail we've created onto the server, we need to use the middle parameter of the ImageJPEG function. We test whether our variable create has explicitly been set to "yes" and, if so, we strip the extension from the resource image name and create a file named [that image name]_thumb.jpg. Note that the directory you save your thumbnails into will need fairly permissive write permissions (chmod). You might also want to put the thumbnails in their own separate directory, in which case you'll want to amend the path, e.g. to $HTTP_GET_VARS['basepath']. 'thumbnail_folder/' .substr($....
• ImageDestroy - because GD had to create images to work from, we end all manipulation scripts by destroying those temporary images on the server.

I've tested this script using GD 1.6.2 and GD 1.8.4, both of which performed admirably -- certainly an improvement on the 256 colours allowed by ImageCreate(). Now we've got the basics, let's take a look at how we can use further image manipulation functions...
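The filename surgery in the save step (take everything before the first dot, then append `_thumb.jpg`) is easy to get wrong; here is the same substr/strpos logic sketched in Python for clarity (an illustration, not part of the tutorial script):

```python
def thumb_name(img_ref):
    """Mimic the PHP substr/strpos logic: keep everything before the
    FIRST dot in the name, then append '_thumb.jpg'."""
    stem = img_ref[:img_ref.index('.')]
    return stem + '_thumb.jpg'

print(thumb_name('nicebig.jpg'))  # nicebig_thumb.jpg
```

Note that, because strpos finds the first dot, a name like `my.photo.jpg` would be truncated to `my_thumb.jpg`; the original script shares this quirk.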
## Further Manipulations

In the prior script, there was an area commented as /* image manipulation scripting */, which was left empty. If we wanted to try a few weird and wonderful effects on our thumbnails, we could add code there, integrating all the variables we defined or deduced earlier. An example idea might be to add a shadowed bevel to the thumbnail, perhaps with light colour shading on the top and left, and dark colour shading on the right and base. To give you an idea of the techniques involved, I'll take the shadowed bevel idea and step through it slowly.

### Shadowed Bevel

A useful function in GD is ImageCopyMerge, as it allows us to merge a part of one image onto our thumbnail. It is especially useful because we can also define an opacity for the merged portion (which to you and me means shading). If we use a short for loop to count how far we are from each edge of the thumbnail, we can use that incremental number to work out the opacity with which we'll draw our dark and light lines.

```php
// create a dark image and a light image
$dark_shadey = ImageCreate($maxdim, $maxdim);
$nadir = ImageColorAllocate($dark_shadey, 0, 0, 0);
$light_shadey = ImageCreate($maxdim, $maxdim);
$nadir = ImageColorAllocate($light_shadey, 255, 255, 255);

// decide how wide we want our edge shading
$edge_width = 10;

for ($edge_pixel = 0; $edge_pixel < $edge_width; $edge_pixel++) {
    // work out the opacity relative to how far from the edge we are
    $opacity = 100 - (($edge_pixel + 1) * (100 / $edge_width));

    // merge a bit of the light image along the top and left side
    // merge a bit of the dark image along the base and right side
    ImageCopyMerge($thumb, $light_shadey, $to_x + ($edge_pixel - 1),
        $to_y + ($edge_pixel - 1), 0, 0, 1, $to_h - (2 * $edge_pixel), $opacity);
    ImageCopyMerge($thumb, $light_shadey, $to_x + ($edge_pixel - 1),
        $to_y + ($edge_pixel - 1), 0, 0, $to_w - (2 * $edge_pixel), 1, $opacity);
    ImageCopyMerge($thumb, $dark_shadey, $to_x + ($to_w - ($edge_pixel + 1)),
        $to_y + $edge_pixel, 0, 0, 1, $to_h - (2 * $edge_pixel), $opacity - 20);
    ImageCopyMerge($thumb, $dark_shadey, $to_x + $edge_pixel,
        $to_y + ($to_h - ($edge_pixel + 1)), 0, 0, $to_w - (2 * $edge_pixel), 1, $opacity - 20);
}

// destroy the two new images that we used
ImageDestroy($dark_shadey);
ImageDestroy($light_shadey);
```

You might notice I've used a few odd constructions here, such as reducing the dark image opacity by 20 in the ImageCopyMerge calls. This is simply because the resultant image looks better if the dark isn't quite as dark as the light is light. If you decide to code your own manipulations, you will (more than likely) have to add a few workarounds like this to your own scripts.

Just to keep you going, here are a couple more manipulations that can be cut and pasted into the /* image manipulation scripting */ area of the main tutorial script.

### Spider's Web

This small plugin scriptlet produces a spider's web effect, drawn over the thumbnail in the colour defined as $zenith. It simply draws a few lines from the lower left corner at various angles, and then draws elliptical arcs centred on the same corner.
```php
$zenith = ImageColorAllocate($thumb, 255, 255, 255);
for ($draw = 0; $draw < $to_h; $draw += 12) {
    ImageLine($thumb, $to_x, ($to_h + $to_y), ($to_w + $to_x),
        (($to_h - $draw) + $to_y), $zenith);
}
for ($draw = 0; $draw < $to_w; $draw += 12) {
    ImageLine($thumb, $to_x, ($to_h + $to_y), ($draw + $to_x),
        $to_y, $zenith);
}
for ($draw = 1; $draw < 14; $draw++) {
    ImageArc($thumb, $to_x, ($to_h + $to_y), $draw * ($to_w / 4),
        $draw * ($to_h / 4), 270, 0, $zenith);
}
```
### Doughnut
This larger plugin scriptlet produces an elliptical shaded doughnut thumbnail. The background and shading colour is defined by $zenith. This manipulation works by iterating through the thumbnail one pixel at a time, and deciding exactly how much opacity to apply when merging a small dot at that position.

```php
// a single dot to merge over the thumbnail, pixel by pixel
$dot = ImageCreate(1, 1);
$zenith = ImageColorAllocate($dot, 255, 255, 255);

for ($ypos = 0; $ypos < $to_h; $ypos++) {
    for ($xpos = 0; $xpos < $to_w; $xpos++) {
        // distance of this pixel from the centre of the thumbnail
        $xdist = abs(($to_w / 2) - $xpos);
        if ($xdist == 0) { $xdist = 0.01; }
        $ydist = abs(($to_h / 2) - $ypos);
        $dist = sqrt(pow($xdist, 2) + pow($ydist, 2));
        // radius of the ellipse along the same angle
        $angl = atan($ydist / $xdist);
        $el_dist = sqrt(pow(abs(cos($angl) * $to_w / 2), 2)
                      + pow(abs(sin($angl) * $to_h / 2), 2));
        if ($dist > $el_dist || $dist < $el_dist / 6) {
            // outside the ellipse, or inside the hole: blank it out
            ImageCopyMerge($thumb, $dot, $xpos + $to_x, $ypos + $to_y, 0, 0, 1, 1, 100);
        } else {
            // inside the ring: shade according to distance from its midline
            $dnut_dist = ($el_dist / 12) * 5;
            $offset_dist = abs((($el_dist / 12) * 7) - $dist);
            $uppy = sin(acos($offset_dist / $dnut_dist)) * $dnut_dist;
            $opac = 100 - ((100 / $dnut_dist) * $uppy);
            ImageCopyMerge($thumb, $dot, $xpos + $to_x, $ypos + $to_y, 0, 0, 1, 1, $opac);
        }
    }
}
ImageDestroy($dot);
```

I hope this tutorial and the extra little scriptlets have taught you how versatile the GD libraries can be. Perhaps you have even caught the image-manipulation bug and are eager to start coding your own transformations! Good luck.

Category: programming | Time: 2002-11-25
https://jadara.work/luftrausers-one-yobe/4cdc21-trigonal-pyramidal-vs-tetrahedral
# Trigonal Pyramidal vs Tetrahedral

The central atom in a trigonal pyramidal molecule is at the apex, while the other three atoms are at the base, with a bond angle of about 107 degrees. A pyramid is an ancient massive construction with a square or rectangular base and four triangular sides meeting in an apex, such as those built as tombs in Egypt or as bases for temples in Mesoamerica, while a tetrahedron is (in geometry) a polyhedron with four faces; the regular tetrahedron, whose faces are equal equilateral triangles, is one of the Platonic solids. The shapes denoted by 'tetrahedron' and 'trigonal pyramid' can therefore seem to be the same.

Ammonia (NH3) is a trigonal pyramidal molecule; it has a pyramid-like shape with three hydrogen atoms and an unshared pair of electrons attached to the nitrogen atom. If you were looking at an atom with three bonds and a lone pair, you would see that it has four "attachments", which can lead to the initial conclusion of tetrahedral geometry. When all three atoms at the corners are identical, the molecule belongs to point group C3v. Since lone pair-bond repulsion is greater than bond-bond repulsion, the three bonded atoms and the lone pair sit as far apart as possible, squeezing the bond angle below the tetrahedral value. (Difference Between Tetrahedral and Trigonal Pyramid, April 20, 2011, http://www.differencebetween.net/science/difference-between-tetrahedral-and-trigonal-pyramid/)

Most molecules whose shape is determined by five electron pairs are trigonal bipyramidal; the case of five coordination is a little trickier. In a tetrahedral molecule, by contrast, all four outer atoms are identical, so the electric attractions between them cancel out and the molecule is non-polar.
A handy guide to visualising the chemical difference between tetrahedron and trigonal pyramid geometries comes from The Nobel Prize in Chemistry 1971: Gerhard Herzberg page, and there is a useful explanation of the geometries in a YouTube clip from Grand Valley State University: Molecular Geometry and Polarity: Tetrahedral (trigonal pyramidal). A classic tetrahedral molecule is methane (CH4).

In chemistry, a tetrahedral geometry has four equally spaced (in three dimensions) substituents; the term can also refer to a molecule containing an atom with four pairs of electrons. A tetrahedron is a kind of pyramidal structure that has four "equal" triangular sides or faces (four identical atoms), whereas the trigonal pyramid has one atom at the apex and three identical atoms at the corners of a pyramidal base. The typical angle between the atoms in a trigonal pyramid is about 107 degrees, less than that of tetrahedral geometry. In molecular geometry, the arrangement of bonding and non-bonding electron pairs determines the shape of a molecule, and also whether it is polar: a trigonal pyramidal molecule is polar because of its lone pair, while a tetrahedral molecule with four identical substituents is not. For four electron pairs there is more electron pair-electron pair repulsion in the square-planar geometry, so the tetrahedral geometry is favoured. A trigonal bipyramid, for five electron pairs, has two three-sided pyramids, one on top and one on bottom.
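The two bond angles quoted throughout this comparison can be checked numerically; this short Python sketch (an illustration, not from the original page) derives the ideal tetrahedral angle:

```python
import math

# The ideal tetrahedral bond angle: the angle between bonds from the
# centre of a regular tetrahedron to two of its vertices is arccos(-1/3).
tetrahedral_angle = math.degrees(math.acos(-1 / 3))
print(round(tetrahedral_angle, 1))  # 109.5

# In a trigonal pyramidal molecule such as NH3, lone pair-bond repulsion
# squeezes the bonded atoms together, so the observed angle (about
# 107 degrees) falls below the ideal tetrahedral value.
print(107 < tetrahedral_angle)  # True
```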
Unlike the tetrahedral that have four “equal” sides, the trigonal pyramid has one atom as the apex and three identical atoms at the corners which makes a pyramidal base. A trigonal bipyramid has two three-sided pyramids, one on top and one on bottom. MathJax reference. However, I'm pretty sure it's still possible to have trigonal pyramidal geometry even with four substituents, as long as one of the four differs from the other three. Također se može odnositi na molekulu koja sadrži atom s četiri para elektrona. The typical angle between the atoms is about 107 degrees which less than that of tetrahedron geometry. Bond polarity is determined through the bonds of the atoms in the molecule. The key difference between square planar and tetrahedral complexes is that square planar complexes have a four-tiered crystal field diagram, but the tetrahedral complexes have a two-tiered crystal field diagram.. Are there molecules that take the shape of every platonic solid? A trigonal pyramid, on the other hand, have polar molecules because of the lone atom within its structure. In tetrahedral molecular geometry, a tetrahedral can only be achieved when all four substituent atoms are the same and all of them are placed at the corners of the tetrahedron. tetrahedron . A lower symmetry case of the triangular pyramid is C 3v , which has an equilateral triangle base, and 3 identical isosceles triangle sides. Four identical atoms at its corners there molecules that take the shape of every Platonic?. Linear programming advisors know associated because of the pyramidal structure possible trigonal pyramidal vs tetrahedral print plastic blank space fillers for my panel... Is a trigonal pyramidal is greater than trigonal pyramidal vs tetrahedral repulsion square-planar geometry and so tetrahedral. The similarities of the molecule on this wall safely atoms ) pair geometries sp... To subscribe to this RSS feed, copy and paste this URL into your RSS reader most molecules whose is... 
Certainly is one paradigm for illustrating the difference: a trigonal pyramid, the. The hitpoints they regain point group C3v less than the angle between the atoms is about 107 o this the!: a trigonal pyramid -- the terms both mean the same thing this certainly one. Trigonal bipyramidal our tips on writing great answers based on our data we! ”, you agree to our terms of service, privacy policy cookie! This URL into your RSS reader četiri para elektrona teachers, and students in the field of chemistry 4 pairs. Article to the wrong platform -- how do you say the “ 1273 ” part aloud lone of... Is one paradigm for illustrating the difference statements based on opinion ; back them up with or! They are polar pairs of electrons bond with each other, the molecule opposite atoms attract each other which it! Water bonding that results when there are three bonds and one on bottom tetrahedral make. 'S the difference between a tetrahedron ( 109 o ) when there are bonds! When there are also considered as chiral Platonic solids atom s četiri para elektrona electron! Every few months angle in a rigorous geometrical sense, there is no difference between tetrahedron and a pyramid! Print plastic blank space fillers for my service panel njegova baza može biti bilo koje od tih lica često!, Ronald Gillespie and Ronald Nyholm the bonds of the molecule due bond! Same thing three dimensions ) substituents triangular na pyramid references or personal experience three and. Molecules with an tetrahedral electron pair geometries have sp 3 hybridization at the central atom in the field chemistry! Of followup comments via e-mail, Written by: maureen all four are! Paste this URL into your RSS reader the shape of the four located! Sense, there is no difference between a tetrahedron and trigonal pyramid, on the other,... It a perfect equal structure bond with each other, the bonded three and. Also determines whether they are polar or non-polar as well blank space fillers for service! 
Kind of pyramid that has four “ equal ” triangular sides or faces are! There molecules that take the shape of the hexagonal system Newton 's universe atom s para... Blank space fillers for my service panel, and that is what sets these two.. [ math ] ∠L-M-L=120° [ /math ] … molecules because of the pyramid will cancel each,. Categorized under Science | difference between tetrahedral and trigonal pyramid Ako govorimo o,! Intrinsically inconsistent about Newton 's universe o geometriji, tetraedar je vrsta piramide koja ima četiri jednake trokutaste! Let my advisors know every Platonic solid determines whether they are polar or non-polar as well ammonia ( NH ). Take into account order in linear programming također se može odnositi na molekulu koja sadrži atom s para! Opinion ; back them up with references or personal experience attachments '' to it a tetrahedral geometry make easier! • Categorized under Science | difference between tetrahedron and trigonal pyramid //www.differencebetween.net/science/difference-between-tetrahedral-and-trigonal-pyramid/ > on writing great answers this. Of electron density, or three attachments '' to it for help, clarification, or responding to answers! A perfect equal structure Water bottles versus bladders /math ] … geometry tetrahedral., privacy policy and cookie policy object that doesn ’ t have an internal plane of symmetry do take... Typical angle between the atoms in the molecule also determines whether they polar... Can also refer to a molecule which contains an atom with four pairs trigonal pyramidal vs tetrahedral electrons and atoms affect shape... Molecular geometry when the electron-pair geometry is favoured the bonding and non-bonding pairs of electrons bond with each,! Geometry make it easier for understanding s četiri para elektrona typically implies the tetrahedron! What do this numbers on my guitar music sheet mean, see our tips writing... 
Greater than bond-bond repulsion trigonal pyramidal, the bonded three atoms and lone pairs to stay.... ): Water bonding, the one is a TWO-DIMENSIONAL shape, i.e groups trigonal pyramidal vs tetrahedral respectively solids considered... How do I let my advisors know bilo koje od tih lica I se... Other, the free encyclopedia and [ math ] ∠L-M-L=120° [ /math ] … “ Post your answer ” you! Far apart as possible due to bond repulsion relevant for Professor Tang 's at. An unconscious player and the lone pair-bond repulsion in trigonal pyramidal molecule has a pyramid-like with. Feed, copy and paste this URL into your RSS reader ) substituents influenced by the lone pair will... On bottom bottles versus bladders for bent molecular geometry, the bonding non-bonding. On trigonal bipyramidal RSS feed, copy and paste this URL into your RSS reader corners of trigonal pyramidal vs tetrahedral shape. Inconsistent about Newton 's universe o ) regular tetrahedron ) less than the angle of a tetrahedron which! This lone atom makes the bonded three atoms in trigonal pyramidal molecule tetrahedral geometry is favoured atom within its.. Trokutaste strane ili lica President have to mobilize the National Guard trigonal pyramidal vs tetrahedral contributions under... Ligands are identical and all bond angles are 120° non-bonding atoms can greatly determine the shape a... Se naziva trokutastom piramidom developers, Ronald Gillespie and Ronald Nyholm versus bladders pairs are trigonal bipyramidal and octahedral,... The 'regular tetrahedron ', where all four faces are equilateral triangles tetrahedral... Inconsistent about Newton 's universe njegova baza može biti bilo koje od tih lica I često se trokutastom... Pyramidal structure possible our data, we think this question is relevant Professor. Pair geometries have sp 3 hybridization at the central atom tetraedar je vrsta piramide koja ima četiri jednake trokutaste...
http://mathhelpforum.com/differential-geometry/179903-complex-numbers-how-do-i-show-unique-solution.html
|
# Math Help - Complex numbers: how do I show unique solution?
1. ## Complex numbers: how do I show unique solution?
Hi, I would really appreciate some help on part a; I'm simply getting nowhere. This is classed as a geometry question, but I haven't covered anything close to this in lectures yet, so I'm a little confused. Thanks in advance!
Moderator edit: From past Oxford moderation paper.
2. Hi
$\displaystyle Az + B \bar{z} = C$
Conjugating :
$\displaystyle \bar{B}z + \bar{A} \bar{z} = \bar{C}$
Now try to eliminate $\bar{z}$ through a combination of the 2 equations
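In case it helps to see where the hint leads, here is a sketch of the elimination (assuming, as part a presumably requires, that $|A| \neq |B|$):

```latex
% Multiply the original equation by \bar{A} and the conjugated one by B:
A\bar{A}z + B\bar{A}\bar{z} = \bar{A}C, \qquad
B\bar{B}z + B\bar{A}\bar{z} = B\bar{C}.
% Subtracting the second from the first eliminates \bar{z}:
\left(|A|^2 - |B|^2\right) z = \bar{A}C - B\bar{C}
\quad\Longrightarrow\quad
z = \frac{\bar{A}C - B\bar{C}}{|A|^2 - |B|^2}.
```

So the equation has a unique solution exactly when $|A| \neq |B|$; when $|A| = |B|$ the left-hand side vanishes and there is either no solution or infinitely many.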
3. Nice! part a is now sorted, thanks
Any chance you could give me a hint for the part about a reflection in b? All is going smoothly up to there now.
4. Originally Posted by LHS
Hi, I would really appreciate some help on part a; I'm simply getting nowhere. This is classed as a geometry question, but I haven't covered anything close to this in lectures yet, so I'm a little confused. Thanks in advance!
This looks like part of an assignment that counts towards your final grade. Thread closed (see rule #6).
https://kr.mathworks.com/help/bluetooth/ug/end-to-end-bluetooth-le-phy-simulation-using-path-loss-model-rf-impairments-and-awgn.html
|
# End-to-End Bluetooth LE PHY Simulation Using Path Loss Model, RF Impairments, and AWGN
This example uses Bluetooth® Toolbox to perform end-to-end Bluetooth low energy (LE) simulation for different Bluetooth LE physical layer (PHY) transmission modes in the presence of the path loss model, radio front-end (RF) impairments, and additive white Gaussian noise (AWGN). The simulation results show the estimated value of the bit error rate (BER), path loss, and distance between the transmitter and receiver.
### Path Loss Modeling in Bluetooth LE Network
The Bluetooth Core Specifications [1] defined by the Bluetooth Special Interest Group (SIG) introduced Bluetooth LE to enable low-power short-range communication. Bluetooth LE devices operate in the globally unlicensed industrial, scientific, and medical (ISM) band in a frequency range from 2.4 GHz to 2.485 GHz. Bluetooth LE specifies a channel spacing of 2 MHz, resulting in 40 RF channels. The prominent applications of Bluetooth LE include direction finding services and building intelligent internet of things (IoT) solutions to facilitate home, commercial, and industrial automation. For more information about direction finding services in Bluetooth LE, see Bluetooth Location and Direction Finding.
In the past few years, there has been a significant increase in the design of Bluetooth LE networks for a plethora of use case scenarios. To achieve high performance and quality in a Bluetooth LE network, it is recommended to study how the Bluetooth LE signal propagates along the link between the transmitter and the receiver. This example shows an end-to-end Bluetooth LE simulation that considers these factors, which impact the propagation of Bluetooth LE signals along the communication link between the transmitter and receiver.
• Environment
• Transmit power
• Antenna gain
• Receiver sensitivity
#### Receiver Sensitivity
Receiver sensitivity is the measure of the minimum signal strength at which the receiver can detect, demodulate, and decode the waveform. The reference sensitivity level specified in the Bluetooth Core Specifications [1] is -70 dBm. However, the actual sensitivity level of the receiver, as per the Bluetooth Core Specifications [1], is defined as the receiver input level for which the BER specified in this table is achieved.
This table shows the actual sensitivity level of the receiver for a given PHY transmission mode.
#### Environment
Bluetooth LE networks are operated in different environments such as home, office, industrial, and outdoor. A specific path loss model is used for each environment.
#### Path Loss Model
Path loss, or path attenuation, is the decline in the power density of a signal as it propagates from the transmitter to the receiver through space. This reduction in power density occurs naturally over distance and is affected by obstacles present in the environment through which the signal travels. Path loss is generally expressed in decibels (dB) and is calculated as:
${\mathrm{PL}}_{\mathrm{dB}}={\mathit{P}}_{\mathit{t}}-{\mathit{P}}_{\mathit{r}}$.
In this equation,
• ${\mathrm{PL}}_{\mathrm{dB}}$ is the path loss in dB.
• ${\mathit{P}}_{\mathit{t}}$ is the transmitted signal power in dB.
• ${\mathit{P}}_{\mathit{r}}$ is the received signal power in dB.
Path loss models describe the signal attenuation between the transmitter and receiver based on the propagation distance and other parameters such as frequency, wavelength, path loss exponent, and antenna gains.
#### Free-Space Path Loss Model
Free-space path loss is the attenuation of signal strength between the transmitter and receiver along the line of sight (LoS) path through free space (usually air), excluding the effect of the obstacles in the path. The free-space path loss is calculated as:
${\mathrm{PL}}_{\mathrm{dB}}=20\mathrm{log}\left(\frac{4\pi \mathit{d}}{\lambda }\right)$.
In this equation,
• $\mathit{d}$ is the distance between the transmitter and receiver.
• $\lambda$ is the signal wavelength.
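As a quick sanity check of the formula above, here is a short Python sketch (separate from the MATLAB example; the 2.44 GHz carrier and 10 m distance are illustrative values):

```python
import math

def free_space_path_loss_db(distance_m, frequency_hz, c=3e8):
    """Free-space path loss, 20*log10(4*pi*d/lambda), in dB."""
    wavelength = c / frequency_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength)

# Illustrative values: a 10 m line-of-sight link on a mid-ISM-band 2.44 GHz carrier
print(f"{free_space_path_loss_db(10, 2.44e9):.1f} dB")
```

Note that doubling the distance adds 20·log10(2) ≈ 6 dB of loss, as expected from the formula.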
#### Log-Normal Shadowing Path Loss Model
A log-distance path loss model reflects the path loss that a signal encounters in an indoor environment such as a building. The log-normal shadowing model [3] is an extension of the log-distance path loss model. Unlike the log-distance model, the log-normal shadowing model considers the fact that the surrounding environment clutter can be vastly different at two locations having the same transmitter-receiver separation. Measurements show that at any transmitter-receiver distance, $\mathit{d}$, the path loss at a particular location is random and distributed log-normally (in dB) about the mean distance-dependent value. The path loss is calculated as:
${\mathrm{PL}}_{\mathrm{dB}}\left(\mathit{d}\right)={\mathrm{PL}}_{\mathrm{dB}}\left({\mathit{d}}_{0}\right)+10\gamma \mathrm{log}\left(\frac{\mathit{d}}{{\mathit{d}}_{0}}\right)+{\mathit{X}}_{\sigma }$.
In this equation,
• ${\mathrm{PL}}_{\mathrm{dB}}\left({\mathit{d}}_{0}\right)$ is the path loss at the reference distance ${\mathit{d}}_{0}$.
• $\mathit{d}$ is the distance between the transmitter and receiver.
• ${\mathit{d}}_{0}$ is the reference distance.
• $\gamma$ is the path loss exponent.
• ${\mathit{X}}_{\sigma }$ is the normal or Gaussian random variable with zero mean, reflecting the attenuation caused by the flat fading.
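The model above can be sketched in a few lines of Python. The reference path loss, exponent, and shadowing deviation below are illustrative assumptions, not values from the toolbox:

```python
import math
import random

def log_normal_path_loss_db(d, d0=1.0, pl0_db=40.0, gamma=2.7,
                            sigma_db=4.0, rng=None):
    """PL(d) = PL(d0) + 10*gamma*log10(d/d0) + X_sigma, with
    X_sigma ~ N(0, sigma_db^2) modelling the shadowing term."""
    rng = rng or random.Random(0)          # seeded for reproducibility
    shadowing = rng.gauss(0.0, sigma_db)   # the X_sigma term
    return pl0_db + 10 * gamma * math.log10(d / d0) + shadowing

print(f"{log_normal_path_loss_db(10):.1f} dB")
```

Setting `sigma_db=0` recovers the plain log-distance model, which is useful for checking the deterministic part.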
#### Two-Ray Ground Reflection Model
The two-ray ground reflection model [3] is a radio propagation model that estimates the path loss between the transmitter and receiver by considering these two signal components: LoS and the component reflected from the ground. When the transmitter and receiver antenna heights are approximately equal and the distance between the antennas is very large relative to the height of the antennas, the path loss is calculated as:
${\mathrm{PL}}_{\mathrm{linear\ scale}}=\frac{G\,{h}_{t}^{2}{h}_{r}^{2}}{{d}^{4}}$.
The path loss in logarithmic scale is calculated as:
${\mathrm{PL}}_{\mathrm{dB}}=40\,\mathrm{log10}\left(d\right)-10\,\mathrm{log10}\left(G\,{h}_{t}^{2}{h}_{r}^{2}\right)$.
In this equation,
• $\mathit{d}$ is the distance between the transmitter and receiver.
• $\mathit{G}$ is the product of antenna gains.
• ${\mathit{h}}_{\mathit{t}}$ is the height of the transmitter.
• ${\mathit{h}}_{\mathit{r}}$ is the height of the receiver.
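A minimal Python sketch of this large-distance approximation (the 1 m default antenna heights happen to match the `TransmitterAntennaHeight` and `ReceiverAntennaHeight` values shown later in the example; unit antenna gain is an assumption):

```python
import math

def two_ray_path_loss_db(d, h_t=1.0, h_r=1.0, g=1.0):
    """Large-distance two-ray ground-reflection approximation:
    PL_dB = 40*log10(d) - 10*log10(G * ht^2 * hr^2).
    Only valid when d is much larger than the antenna heights."""
    return 40 * math.log10(d) - 10 * math.log10(g * h_t**2 * h_r**2)

print(f"{two_ray_path_loss_db(100):.1f} dB")  # loss grows as 40*log10(d)
```

The 40·log10(d) slope means the signal decays with the fourth power of distance, twice as fast (in dB terms) as free space.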
The National Institute of Standards and Technology (NIST) conducted studies for indoor to indoor, outdoor to outdoor, and outdoor to indoor propagation paths and derived these equations for calculating the path loss[4]:
${\mathrm{PL}}_{d}={\mathrm{PL}}_{0}+10\,{n}_{0}\,\mathrm{log10}\left(\frac{d}{{d}_{0}}\right) \qquad \text{for } d\le {d}_{1}$
${\mathrm{PL}}_{d}={\mathrm{PL}}_{0}+10\,{n}_{0}\,\mathrm{log10}\left(\frac{d}{{d}_{0}}\right)+10\,{n}_{1}\,\mathrm{log10}\left(\frac{d}{{d}_{1}}\right) \qquad \text{for } d>{d}_{1}$
In these equations,
• ${\mathrm{PL}}_{0}$ is the path loss at the reference distance ${\mathit{d}}_{0}$.
• ${\mathit{n}}_{0}$,${\mathit{n}}_{1}$ are the path loss exponents.
• $\mathit{d}$ is the distance between the transmitter and receiver.
• ${\mathit{d}}_{0}$ is the reference distance, assumed to be 1 meter in simulations.
• ${\mathit{d}}_{1}$ is the breakpoint where the path loss exponent adjusts from ${\mathit{n}}_{0}$ to ${\mathit{n}}_{1}$.
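The dual-slope equations translate directly to code. This Python sketch transcribes them as printed above; the exponents and the 10 m breakpoint are placeholder values, since the example's per-environment table was not preserved here:

```python
import math

def nist_dual_slope_path_loss_db(d, pl0=40.0, n0=2.0, n1=3.5,
                                 d0=1.0, d1=10.0):
    """Dual-slope path loss, transcribed from the equations above.
    Below the breakpoint d1 only the first exponent applies; beyond it,
    a second term with exponent n1 is added on top."""
    pl = pl0 + 10 * n0 * math.log10(d / d0)
    if d > d1:
        pl += 10 * n1 * math.log10(d / d1)
    return pl
```

Because the second term vanishes at d = d1, the model is continuous across the breakpoint.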
The example considers these values for different environments.
Most of these measurements for the NIST PAP02 Task 6 channel model were taken with transmitters and receivers located in hallways with distances ranging from 5 m to 45 m.
#### Transmit Power
Transmit power is the power of the radio frequency signal generated by the transmitter. Increasing the transmit power increases the likelihood that the signal can be transmitted over longer distances. Bluetooth supports transmit power from -20 dBm (0.01 mW) to 20 dBm (100 mW).
#### Antenna Gain
Antenna gain is the factor by which the antenna improves the total radiated power. Bluetooth designers can choose to implement a variety of antenna options. Bluetooth devices typically achieve an antenna gain in the range from -10 dBi to 10 dBi.
### End-to-End Bluetooth LE Simulation Procedure
The end-to-end Bluetooth LE PHY simulations estimate the BER and the distance between the transmitter and receiver by considering a specific environment with RF impairments and AWGN added to the transmission packets.
For a given set of simulation parameters, obtain the signal-to-noise ratio (SNR) at the receiver by assuming a fixed noise figure. For the obtained value of SNR, including the path loss, generate the Bluetooth LE waveform using the `bleWaveformGenerator` function. Distort the generated waveform with RF impairments and AWGN. Each packet is distorted by these RF impairments:
• DC offset
• Carrier frequency offset
• Carrier phase offset
• Timing drift
The noisy packets are processed through a practical Bluetooth LE receiver that performs these operations:
1. Automatic gain control (AGC)
2. DC removal
3. Carrier frequency offset correction
4. Matched filtering
5. Packet detection
6. Timing error detection
7. Demodulation and decoding
8. De-whitening
The end-to-end example chain is summarized in these block diagrams.
The BER is obtained by comparing the transmitted and recovered data bits.
### Configure Simulation Parameters
In this example, the distance between the transmitter and receiver is estimated based on the environment and the power levels of the signal at the transmitter and receiver. Configure the parameters using the `bluetoothRangeConfig` object.
#### Configure parameters related to the communication link between the transmitter and receiver
```
rangeConfig = bluetoothRangeConfig;
rangeConfig.Environment = "Outdoor";   % Environment
rangeConfig.Mode = 'LE1M';             % PHY transmission mode
rangeConfig.ReceiverSensitivity = -73; % Receiver sensitivity in dBm
rangeConfig.TransmitterPower = 0;      % Transmit power in dBm
rangeConfig.TransmitterAntennaGain = 0; % Transmitter antenna gain in dB
rangeConfig.ReceiverAntennaGain = 0;    % Receiver antenna gain in dB
if strcmp(rangeConfig.Environment,'Industrial')
    % Link margin (dB) assumed in the simulation
    rangeConfig.LinkMargin = 7
elseif strcmp(rangeConfig.Environment,'Outdoor')
    rangeConfig.LinkMargin = 15
else
    rangeConfig.LinkMargin = 0
end
```
```
rangeConfig = 
  bluetoothRangeConfig with properties:

                 Environment: 'Outdoor'
             SignalPowerType: 'ReceiverSensitivity'
                        Mode: 'LE1M'
         ReceiverSensitivity: -73
                  LinkMargin: 15
            TransmitterPower: 0
      TransmitterAntennaGain: 0
         ReceiverAntennaGain: 0
        TransmitterCableLoss: 1.2500
           ReceiverCableLoss: 1.2500
    TransmitterAntennaHeight: 1
       ReceiverAntennaHeight: 1

  Read-only properties:
                FSPLDistance: 5.8256
               PathLossModel: 'TwoRayGroundReflection'
```
#### Configure parameters for waveform generation
```
sps = 8;           % Samples per symbol
dataLen = 254;     % Data length in bytes
channelIndex = 37; % Random channel index
```
#### Configure RF impairments
```
frequencyOffset = 5800; % Frequency offset in Hz
phaseOffset = 5;        % Phase offset in degrees
initoff = 0.15*sps;     % Static timing offset
stepsize = 20*1e-6;     % Timing drift in ppm, max range is +/- 50 ppm
dcOffset = 20;          % Percentage relative to maximum amplitude value
```
### Generate Bluetooth LE Waveform
Generate Bluetooth LE waveform based on waveform configuration parameters.
```
% Default access address for periodic advertising channels
accessAddress = [0 1 1 0 1 0 1 1 0 1 1 1 1 1 0 1 1 0 0 1 0 0 0 1 0 1 1 1 0 0 0 1]';
% Random data bits generation
txBits = randi([0 1],dataLen*8,1,'int8');
% Generate Bluetooth LE waveform
txWaveform = bleWaveformGenerator(txBits,'Mode',rangeConfig.Mode,...
    'SamplesPerSymbol',sps,...
    'ChannelIndex',channelIndex,...
    'AccessAddress',accessAddress);
```
#### Configure noise and signal power at the receiver
The noise floor of the receiver is simulated with thermal noise. The level of the noise floor is set by the noise figure of the receiver, and in turn determines the SNR at the receiver.
```
NF = 6;            % Noise figure (dB)
T = 290;           % Ambient temperature (K)
dBm2dBFactor = 30; % Factor for converting dBm to dB
% Symbol rate based on the PHY transmission mode
symbolRate = 1e6;
if strcmp(rangeConfig.Mode,'LE2M')
    symbolRate = 2e6;
end
BW = sps*symbolRate;              % Bandwidth (Hz)
k = 1.3806e-23;                   % Boltzmann constant (J/K)
noiseFloor = 10*log10(k*T*BW)+NF; % Noise floor in dB
% Measure the path loss and signal power at the receiver
[pldB,sigPowerdBm] = pathLoss(rangeConfig);
measuredPowerVector = sigPowerdBm - dBm2dBFactor;
snrdB = measuredPowerVector - noiseFloor; % SNR in dB
plLinear = 10^(pldB/20); % Convert path loss from dB to linear scale
```
### Distort Bluetooth LE Waveform
Distort the generated Bluetooth LE waveform using RF impairments, path loss, and AWGN.
The RF impairments are generated randomly and added to the Bluetooth LE waveform.
```
% Create and configure the System objects for impairments
initImp = helperBLEImpairmentsInit(rangeConfig.Mode,sps);
% Configure RF impairments
initImp.pfo.FrequencyOffset = frequencyOffset; % Frequency offset in Hz
initImp.pfo.PhaseOffset = phaseOffset;         % Phase offset in degrees
initImp.vdelay = (initoff:stepsize:initoff+stepsize*(length(txWaveform)-1))';
initImp.dc = dcOffset;
% Pass generated Bluetooth LE waveform through RF impairments
txImpairedWfm = helperBLEImpairmentsAddition(txWaveform,initImp);
```
#### Attenuate Impaired Bluetooth LE Waveform
Attenuate the impaired Bluetooth LE waveform.
```
% Attenuate Bluetooth LE waveform
attenWaveform = txImpairedWfm.*10^(measuredPowerVector/20);
```
Add AWGN to the attenuated Bluetooth LE waveform.
`rxWaveform = awgn(attenWaveform,snrdB,'measured');`
### Simulation Results
Estimate and display the BER and the distance between the transmitter and the receiver by processing the distorted Bluetooth LE waveform through the practical receiver.
To retrieve the data bits, pass the attenuated, AWGN-distorted Bluetooth LE waveform through the practical receiver.
```
% Configure the receiver parameters in a structure
rxCfg = struct(Mode=rangeConfig.Mode,SamplesPerSymbol=sps,ChannelIndex=channelIndex,...
    DFPacketType='Disabled',AccessAddress=accessAddress);
rxCfg.CoarseFreqCompensator = comm.CoarseFrequencyCompensator(Modulation="OQPSK",...
    SampleRate=BW,...
    SamplesPerSymbol=2*sps,...
    FrequencyResolution=100);
rxCfg.PreambleDetector = comm.PreambleDetector(Detections="First");
% Recover data bits using practical receiver
[rxBits,accessAddress] = helperBLEPracticalReceiver(rxWaveform,rxCfg);
```
#### Estimate BER
Estimate value of the BER based on the retrieved and the transmitted data bits.
```
% Obtain BER by comparing the transmitted and recovered bits
if(length(txBits) == length(rxBits))
    ber = (sum(xor(txBits,rxBits))/length(txBits));
else
    disp('Unable to compute BER due to length mismatch in input and decoded bits')
end
```
#### Estimate Distance
Estimate the distance between the transmitter and the receiver.
```
% Estimate the distance between the transmitter and the receiver based on the environment
distance = bluetoothRange(rangeConfig);
```
#### Display Results
Display the estimated results and plot the spectrum of the transmitted and received Bluetooth LE waveform.
```
% Display estimated BER and distance between the transmitter and the receiver
disp(['Input configuration: ', newline, '    PHY transmission mode: ', ...
    rangeConfig.Mode, newline, '    Environment: ', rangeConfig.Environment]);
```
```
Input configuration: 
    PHY transmission mode: LE1M
    Environment: Outdoor
```
```
disp(['Estimated outputs: ', newline, '    Path loss: ', num2str(pldB), ' dB', ...
    newline, '    Distance between the transmitter and receiver: ', ...
    num2str(round(distance(1))), ' to ', num2str(round(distance(2))), ' m', newline, ...
    '    Free space path loss distance: ', num2str(round(rangeConfig.FSPLDistance)), ...
    ' m', newline, '    BER: ', num2str(ber)]);
```
```
Estimated outputs: 
    Path loss: 55.5 dB
    Distance between the transmitter and receiver: 5 to 8 m
    Free space path loss distance: 6 m
    BER: 0
```
```
% Plot the spectrum of the transmitted and received Bluetooth LE waveform
specAnalyzer = dsp.SpectrumAnalyzer('NumInputPorts',2,'SampleRate',symbolRate*sps,...
    'Title','Spectrum of Transmitted and Received Bluetooth LE Signals',...
    'ShowLegend',true,'ChannelNames',{'Transmitted Bluetooth LE signal','Received Bluetooth LE signal'});
specAnalyzer(txWaveform,rxWaveform);
release(specAnalyzer);
```
This example demonstrates an end-to-end Bluetooth LE simulation for different PHY transmission modes by considering the path loss model, RF impairments, and AWGN. The obtained simulation results display the path loss, estimated distance between the transmitter and receiver, and BER. The spectrum of the transmitted and received Bluetooth LE waveform is visualized by using a spectrum analyzer.
### Appendix
The example uses these helper functions: `helperBLEImpairmentsInit`, `helperBLEImpairmentsAddition`, and `helperBLEPracticalReceiver`.
### Selected Bibliography
[1] Bluetooth Special Interest Group (SIG). "Bluetooth Core Specification". Version 5.3. https://www.bluetooth.com.
[2] Path Loss Models Used in Bluetooth Range Estimator. Bluetooth Special Interest Group (SIG). https://www.bluetooth.com.
[3] Rappaport, Theodore. Wireless Communications: Principles and Practice. Prentice Hall, 1996.
[4] NIST Smart Grid Interoperability Panel Priority Action Plan 2: Guidelines for Assessing Wireless Standards for Smart Grid Applications. National Institute of Standards and Technology, U.S. Department of Commerce, 2014, https://nvlpubs.nist.gov/.
https://www.clutchprep.com/chemistry/practice-problems/87897/calculate-the-formula-mass-for-each-compound-mgso4-nf3-na2s-pbcl4-part-a-mgso4-e
|
# Problem: Calculate the formula mass for each compound: MgSO4, NF3, Na2S, Part A MgSO4 Express your answer using two decimal places. Part B NF3 Express your answer using two decimal places. Part C Na2S Express your answer using two decimal places
###### Problem Details
Calculate the formula mass for each compound: MgSO4, NF3, Na2S.
Part A
MgSO4
Express your answer using two decimal places.
Part B
NF3
Express your answer using two decimal places.
Part C
Na2S
Express your answer using two decimal places.
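A minimal sketch of the computation in Python. The atomic masses below are rounded standard values, so the last decimal place may differ slightly from the mass table your textbook uses:

```python
# Rounded standard atomic masses (amu); the exact table used by a given
# textbook may differ slightly in the last decimal place.
ATOMIC_MASS = {'Mg': 24.305, 'S': 32.06, 'O': 16.00,
               'N': 14.007, 'F': 18.998, 'Na': 22.990}

def formula_mass(composition):
    """Sum of atomic masses weighted by atom counts, e.g. {'Mg': 1, 'S': 1, 'O': 4}."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

for name, comp in [('MgSO4', {'Mg': 1, 'S': 1, 'O': 4}),
                   ('NF3',   {'N': 1, 'F': 3}),
                   ('Na2S',  {'Na': 2, 'S': 1})]:
    print(f"{name}: {formula_mass(comp):.2f} amu")
```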
http://laughmaths.blogspot.com/2015/
|
Saturday, 26 December 2015
Diffusion of the dead - The maths of zombie invasions. Part 5, Time of first interaction.
Previously, we saw how to simulate the diffusion equation, which we're using to model zombie motion. Critically, the diffusion equation allows us to predict the density of zombies at all places and for all time.
There are many questions we could answer with this equation; however, the most pressing question to any survivors is, "How long do we have before the first zombie arrives?". The mathematical formulation of this question is, "For what time, $t_z$, does $Z(L, t_z) = 1$?". Unfortunately, this does not have a nice solution that can be evaluated easily. However, we can calculate $t_z$ numerically to any degree of accuracy we choose by using a simple solution-searching technique known as the "Bisection Search Algorithm".
The first thing to notice about $Z(L,t)$ is that it is monotonically increasing in time, thus, if $t_1>t_2$ then $Z(L,t_1)>Z(L,t_2)$. This can be seen in a number of ways. For example, by watching the diffusion simulation from last time we see that as time increases the zombie population on the boundary only ever increases. This makes intuitive sense, because diffusion is causing the zombie population to spread out, so that it becomes uniform everywhere.
Using this knowledge, a simple method of solving $Z(L, t_z) = 1$ is to substitute in a value of $t$. If $Z(L, t) < 1$ then we double $t$ and consider $Z(L, 2t)$, which will be greater than $Z(L, t)$ because of the monotonic property we discussed above. We keep doubling $t$ until we reach a value such that $Z(L, 2^nt) > 1$. Defining $t_0 = 2^{n-1}t$ and $t_1 = 2^nt$, we know that there exists some $t_z$ between $t_0$ and $t_1$ such that $Z(L, t_z) = 1$.
To gain better approximations to the value of $t_z$, we start halving this domain and only keep the half that contains the solution. However, how do we know which half contains the solution? We once again rely on the monotonic property and evaluate the function at the midpoint of the domain. Explicitly, if $Z(L,(t_0+t_1)/2) < 1$ then the solution is in the right half. Alternatively, if $Z(L,(t_0+t_1)/2) > 1$ then the solution is in the left half.
An example illustrating this concept is shown in Figure 1. In the initial setup, $Z(L,(t_0+t_1)/2)>1$, thus, we redefine $t_1 = (t_0 + t_1)/2$ and repeat the process.
Figure 1. Bisection technique. After each iteration, the domain, shown by the dashed lines, becomes half as long as it was before.
After each iteration, we halve the size of the interval $[t_0, t_1]$, making it smaller and smaller. Thus, by design, since we always have $t_z$ within $[t_0, t_1]$ and by repeating the halving process, we can estimate $t_z$ to any accuracy we like.
The benefit of this method is its simplicity and reliability; it will always work. However, the cost of this reliability comes at the price of speed. If the initial searching region is very big, it may take a large number of iterations before the process produces an answer to an accuracy with which we are happy. There are quicker methods, but these are usually more complex and sometimes they may fail to find a solution altogether.
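The doubling-then-bisection procedure described above fits in a few lines of code. Here is a Python sketch applied to a generic monotonically increasing function $f(t)$ standing in for $Z(L,\cdot)$, solving $f(t_z) = 1$; it assumes the initial guess satisfies $f(t) \le 1$, as in the description above:

```python
def first_crossing_time(f, t=1.0, tol=1e-6):
    """Solve f(t_z) = 1 for a monotonically increasing f, assuming
    the initial guess t satisfies f(t) <= 1."""
    # Phase 1: keep doubling t until the crossing is bracketed.
    while f(t) < 1:
        t *= 2
    t0, t1 = t / 2, t              # now f(t0) <= 1 <= f(t1)
    # Phase 2: halve the bracket, keeping the half containing the crossing.
    while t1 - t0 > tol:
        mid = 0.5 * (t0 + t1)
        if f(mid) < 1:
            t0 = mid               # crossing lies in the right half
        else:
            t1 = mid               # crossing lies in the left half
    return 0.5 * (t0 + t1)
```

Each bisection step halves the bracket, so the error shrinks geometrically: reliable, if not fast.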
If the zombies are closing in on you and you need to compute their interaction times more quickly, we direct you to consider Newton-Raphson techniques rather than the bisection method. Although if the zombies really are that close, may I suggest you focus on fighting them off before you read any further?
No matter what solution method you use you should end up with a graph like Figure 2, which illustrates the time in minutes we have before we meet a zombie depending on how far away we are and how fast the zombies are diffusing.
Figure 2. Time in minutes until the first zombie arrives for various rates of diffusion and distances.
At this point we can consider two general strategies. Either we run away, or we try and slow the zombie down. Both of these approaches will indeed increase the time it takes for the zombies to get to you. However, Figure 2 clearly shows that we gain much more time by running away compared to the strategy of slowing the zombies down. Next time I will expand on this point and show how to estimate the zombie interaction time much quicker.
________________________________________________________________
________________________________________
My amazing wife on her zombie themed hen do. Note that the theme had nothing to do with me.
You may be wondering how I chose the variable ranges for Figure 2. Well, the Oxford maths department is approximately 90m away from a graveyard, so the distances simply came from a matter of self-preservation. The diffusion speed, on the other hand, was generated with the aid of my loving wife. I got her to wander randomly around, staggering like a zombie. I timed her and measured how far she had moved and, thus, calculated that her zombie diffusion rate was approximately 115m$^2$/minute. Don't worry, we did the experiment three times and took an average, which ensures that this value is accurate.
Saturday, 12 December 2015
Diffusion of the dead - The maths of zombie invasions. Part 4, Simulating zombie movement.
Last time we presented the diffusion equation
$$
\frac{\partial Z}{\partial t}(x,t)=D\frac{\partial^2 Z}{\partial x^2}(x,t),
$$
and demonstrated that it had the right properties to model zombie motion. However, stating the equation is not enough. We must add additional information to the system before we can solve the problem uniquely. Specifically, we need to define: the initial state of the system, where the boundaries of the system are, and, finally, what happens to the zombies at the boundaries.
For the boundary conditions we assume that the zombies cannot move out of the region $0\leq x\leq L$. This creates theoretical boundaries which the population cannot cross; the zombies will simply bounce off these boundaries and be reflected back into the domain. Since no zombies can cross the boundaries at $x=0$ and $x=L$, the flux of zombies across these points is zero,
$$
\frac{\partial Z}{\partial x}(0,t)=0=\frac{\partial Z}{\partial x}(L,t) \text{ (the zero flux boundary conditions).}
$$
For the initial condition, we assume that the zombies are all localised in one place, a graveyard for example. Thus the zombies have a density of $Z_0$ zombies/metre in the region $0\leq x \leq 1$,
$$
Z(x,0)=\left\{\begin{array}{cc}
Z_0&\text{for }0\leq x\leq 1,\\
0&\text{for } x>1,
\end{array}
\right. \text{ (the initial condition).}
$$
The diffusion equation with the given initial and boundary conditions can be solved exactly, and the solution has the form,
$$
Z(x,t)=\frac{Z_0}{L}+\sum^\infty_{n=1}\frac{2Z_0}{n\pi}\sin\left(\frac{n\pi}{L}\right) \cos\left(\frac{n\pi}{L}x\right)\exp\left({-\left( \frac{n\pi}{L} \right)^2Dt}\right).\label{Solution}
$$
Although I have made this solution magically appear from nowhere, the solution can be rigorously produced using methods called "separation of variables" and "Fourier series".
Separating the variables means that we assume the space and time components of the solution are not coupled together in a complicated manner; namely, the solution can be written as a spatial component multiplied by a time component. This can be seen above, because the space variable, $x$, only appears in the cosine function, whilst the time variable, $t$, only appears in the exponential function. A Fourier series allows us to write a function as an infinite summation of sines and cosines. Although this may seem cumbersome, Fourier series usually have very nice properties, which allow them to be manipulated with ease.
The $\sin$ and $\cos$ functions are those that the reader may remember from their trigonometry courses. The exponential function, $\exp$, is one of the fundamental operators of mathematics, but for now the only property that we are going to make use of is that if $a>0$ then $\exp(-at)\rightarrow 0$ as $t\rightarrow \infty$. Using this fact we can see that as $t$ becomes large, most of the right-hand side of the solution becomes very small, approximately zero. Hence, for large values of $t$, we can approximate
$$
Z(x,t)\approx\frac{Z_0}{L}.\label{Longtime_approximation}
$$
This means that as time increases, zombies spread out evenly across the available space, with average density $Z_0/L$ everywhere.
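If you want to convince yourself of this flattening without waiting for the Matlab movie below, the following Python sketch evaluates a truncated version of the series solution with the same parameter values as the code further down ($Z_0=100$, $L=10$, $D=0.1$). At a late time the density is flat at $Z_0/L=10$ everywhere, and, since the boundaries are zero flux, the total number of zombies, $\int_0^L Z\,dx = Z_0$, is conserved along the way.

```python
import math

def Z(x, t, Z0=100.0, L=10.0, D=0.1, n_terms=1000):
    """Truncated series solution of the diffusion equation with zero-flux
    boundaries and a step initial condition on 0 <= x <= 1."""
    total = Z0 / L
    for n in range(1, n_terms + 1):
        k = n * math.pi / L
        total += (2 * Z0 / (n * math.pi)) * math.sin(k) * math.cos(k * x) \
                 * math.exp(-k ** 2 * D * t)
    return total

# At t = 2000 every cosine mode has decayed away: the density is flat
# at Z0 / L = 10 zombies per metre everywhere in the domain.
print([round(Z(x, 2000.0), 6) for x in (0.0, 2.5, 5.0, 10.0)])

# The zero-flux boundaries conserve zombies: integrating Z over [0, L]
# by the trapezoidal rule recovers the initial count Z0 * 1 = 100.
xs = [i * 0.01 for i in range(1001)]
vals = [Z(x, 50.0, n_terms=50) for x in xs]
mass = sum(0.01 * 0.5 * (vals[i] + vals[i + 1]) for i in range(1000))
print(round(mass, 3))
```

The conservation check works at any time you pick; the zero-flux boundary conditions guarantee that no zombies leak out of the domain.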
The simulation illustrates two solutions of the diffusion equation for the zombie density. Namely, the video above compares the analytical solution, given in equation \eqref{Solution}, with a direct numerical solution of the diffusion equation. If you pause the video right at the start, you can see the initial condition: there is a large density of zombies between $0\leq x \leq 1$. The movie then illustrates how diffusion causes the initial peak to spread out, filling the whole domain. After 500 time units the density of zombies has become uniform throughout the domain. The Matlab code for plotting these solutions can be found below.
There are a couple of things to note in this simulation. Firstly, the analytical solution is itself only an approximation, because we cannot simulate an infinite number of cosine terms. The movie shows only the first 1000 terms and, as you can see, the comparison between the two results is pretty good. Secondly, we have not given units for the density, space or time; are we using seconds and metres, or minutes and miles? The answer is that in some sense it doesn't matter, whilst in other cases it matters a great deal. Since we are only interested in visualising the dynamics of diffusion, we can keep the units arbitrary. However, if we had a specific application, with data, then we would have to ensure that all our units were consistent.
This finishes our look at solving the diffusion equation. Next time we will actually use these solutions to provide the first of our answers. These answers will then provide us with a strategy to deal with the eventual zombie apocalypse.
________________________________________________________________
________________________________________
Below you will find the Matlab code used to generate the above movie. If you have Matlab then you can simply copy and paste the text into a script and run it immediately.
function Diffusion_solution
clear
close all
clc
Z0=100; %Initial density.
L=10; %Length of domain.
D=0.1; %Diffusion coefficient.
dx=0.1; %Space step.
x=[0:dx:L]; %Spatial discretisation.
dt=1; %Time step.
final_time=500; %Final time.
times=0:dt:final_time; %Time discretisation.
iter=1; %Parameter initialisation.
%% Analytical solution.
Z=ones(1+final_time/dt,length(x))*Z0/L;
for t=0:dt:final_time
for n=1:1000
Z(iter,:)=Z(iter,:)+2*Z0/(n*pi)*sin(n*pi/L)*cos(x*n*pi/L)*...
exp(-(n*pi/L)^2*t*D);
end
iter=iter+1;
end
%% Direct numerical solution.
sol = pdepe(0,@(x,t,u,DuDx)Zombies(x,t,u,DuDx,D),@(x)ICs(x,Z0),@BCs,x,times);
%% Plotting
for i=1:final_time+1
plot(x,Z(i,:),'b','linewidth',3)
hold on
plot(x,sol(i,:),'g--','linewidth',3)
set(gca,'yTick',[0:20:100])
xlabel('Distance, x','fontsize',20);
ylabel('Density, Z','fontsize',20);
set(gca,'fontsize',20)
grid off
axis([0 10 0 100])
title(['Time=',num2str((i-1)*dt)])
legend('Approximate analytic solution','Approximate numerical solution','location','northoutside')
drawnow
hold off
end
function value = ICs(x,Z0)
% Setting the initial condition, which is a step function. Namely, the
% density of zombies is Z0 when x is less than 1 and 0 everywhere else.
if x < 1
value=Z0;
else
value=0;
end
function [c,b,s] = Zombies(~,~,~,DuDx,D)
% Matlab syntax for the diffusion equation.
c = 1;
b = D*DuDx(1);
s = 0;
function [pl,ql,pr,qr] = BCs(~,~,~,~,~)
% Matlab syntax for the zero flux boundary conditions.
pl = 0;
ql = 1;
pr = 0;
qr = 1;
Saturday, 28 November 2015
Diffusion of the dead - The maths of zombie invasions. Part 3, Diffusive motion.
As discussed previously, we are going to model the zombie motion using the diffusion equation. In this post we introduce the gritty details. I've interpreted the mathematical symbols intuitively, so, if you stick with it, you should find yourself understanding more than you ever thought you could.
It is impossible to overstate the importance of the diffusion equation. Wherever the movement of a modelled species can be considered random and directionless, the diffusion equation will be found. This means that by understanding the diffusion equation we are able to describe a host of different systems, such as heat conduction through solids, gases (e.g. smells) spreading out through a room, proteins moving around the body, molecule transportation in chemical reactions and rainwater seeping through soil, to name but a few of its many applications.
If you've never come across diffusion before, or want to know more about its basic properties, the video below is a very good primer, although it feels very much like a "Look Around You" episode.
The mathematical treatment of diffusion begins by defining the variables that we will need. Let the density of zombies at a point $x$ and at a time $t$ be $Z(x,t)$; then the density has to satisfy the diffusion equation,
$$
\frac{\partial Z}{\partial t}(x,t)=D\frac{\partial^2 Z}{\partial x^2}(x,t).
$$
To some an equation can be scarier than any zombie, but fear not. I am going to break this equation down into bits so that you are able to see the reality behind the mathematics.
Notice that the equation is made up of two terms, the left-hand side and the right-hand side, which are defined to be equal. Explicitly, the left-hand side is known as the time derivative and it simply tells us how the zombie density is changing over time,
$$
\frac{\partial Z}{\partial t}(x,t)=\text{rate of change of $Z$ over time at a point $x$}.
$$
Although the numerical value of this term is important, what is more important is whether the term is positive or negative. Specifically, if $\partial Z/\partial t$ is positive then $Z$ is increasing at that point in time, and, vice versa, if $\partial Z/\partial t$ is negative then $Z$ is decreasing. Thus, we use this term to tell us how the zombie population is changing over time.
The term on the right-hand side is known as the second spatial derivative and it is a little more complicated than the time derivative. Essentially it encapsulates the idea that the zombies move from areas of high density to areas of low density (i.e. they spread out). To aid your intuitive understanding of this term see Figure 1.
Figure 1. A typical initial zombie density graph. There are regions of high zombie activity, e.g. a graveyard, and there are regions of low zombie density, e.g. your local library.
In the figure, there are initially more zombies on the left of the space than on the right. Just before the peak in density, the arrow (the tangent to the curve, known as the spatial derivative, $\partial Z/\partial x$, at this point) is pointing upwards. This means that as $x$ increases, so does the zombie density, $Z$. At this point
$$
\frac{\partial Z}{\partial x}=\textrm{rate of change of $Z$ as $x$ increases} > 0.
$$
Just after the peak the arrow is pointing down; thus, at this point,
$$
\frac{\partial Z}{\partial x}=\textrm{rate of change of $Z$ as $x$ increases} < 0.
$$
Thus, at the peak, the spatial derivative is decreasing, because it goes from positive to negative. This, in turn, means that the second derivative is negative at the peak, because a negative second derivative means the first derivative is decreasing. This is analogous to statements made above about the sign of the time derivative and the growth, or decay, of the zombie population.
In summary, our hand-wavy argument tells us that at a local maximum $\partial^2 Z/\partial x^2<0$. Using the equality of the diffusion equation, this means that at a local maximum the time derivative is negative and, thus, the density of zombies is decreasing. A similar argument shows that the population of zombies at a local minimum increases. Overall, then, diffusion causes zombies to move from regions of high density to regions of low density.
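This hand-wavy argument is easy to verify numerically. The Python sketch below (my own illustration, not part of the original chapter) takes a single peak of zombies and applies one explicit finite-difference step of the diffusion equation: the discrete second derivative is negative at the peak, so the peak drops, while out in the low-density tails it is positive, so the density there rises.

```python
import math

D, dx, dt = 1.0, 0.1, 0.002   # dt*D/dx**2 = 0.2 < 0.5, so the step is stable
xs = [i * dx for i in range(101)]                # grid on [0, 10]
Z = [math.exp(-(x - 5.0) ** 2) for x in xs]      # one peak of zombies at x = 5

# One explicit (FTCS) step: Z_new = Z + dt * D * (discrete second derivative).
Z_new = Z[:]
for i in range(1, len(Z) - 1):
    Z_new[i] = Z[i] + dt * D * (Z[i + 1] - 2 * Z[i] + Z[i - 1]) / dx ** 2

peak = Z.index(max(Z))   # index of the maximum (x = 5)
tail = 30                # x = 3, out in the low-density tail
print(Z_new[peak] < Z[peak])   # the peak decays
print(Z_new[tail] > Z[tail])   # the tails fill in
```

One step is enough to see the direction of travel; iterating the same update is exactly how the direct numerical solution in the previous post marches the density forward in time.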
Finally, we mention the factor $D$, which is called the diffusion coefficient. $D$ is a positive constant that controls the rate of movement. Specifically, the larger $D$ is the faster the zombies spread out.
And with that you now understand one of the most important partial differential equations in all of mathematics. That wasn't too hard, was it? Next time we discuss the solution of the diffusion equation, including some simulations and Matlab code for you to try yourself.
Saturday, 14 November 2015
Diffusion of the dead - The maths of zombie invasions. Part 2, Important questions you need to ask in a zombie outbreak.
We begin modelling a zombie population in the same way that a mathematician would approach the modelling of any subject. We, first, consider what questions we want to ask, as the questions will direct which techniques we use to solve the problem. Secondly, we consider what has been done before and what factors were missing in order to achieve the answers we desire. This set of blog posts will consider three questions:
1. How long will it take for the zombies to reach us?
2. Can we stop the infection?
3. Can we survive?
In order to answer these questions I, together with Ruth Baker, Eamonn Gaffney and Philip Maini, focused on the motion of the zombies, as their speed and directionality would have huge effects on all three questions. Explicitly, we used a mathematical description of diffusion as a way to model the zombies' motion. This was discussed in a previous post, but I recap the main points here.
• The original zombie infection article by Robert Smith? did not include zombie or human movement.
• What zombies are well known for is their slow, shuffling, random motion. The end of Dawn of the Dead (shown in the YouTube clip below) gives some great footage of zombies just going about their daily business.
• This random motion is perfectly captured through the mathematics of diffusion.
Of course, there is plenty of evidence to suggest that zombies are attracted to human beings, as they are the predator to our prey. However, as we will see, we are going to be overrun on a time scale of minutes! Thus, although mathematicians can model directed motion and chasing, these additional components complicate matters. Further, random motion leads to some nice, simple scaling formulas that can be used to quickly calculate approximately how long you have left before you meet a zombie.
Another simplifying assumption that we make is that we can model the zombie (and human) populations as continuous quantities. Again, this is incorrect, as zombies are discrete units (even if they are missing body parts). Since we are making an assumption, we will create an error in our solution. But how big is this error? In particular, if the error in the assumption is smaller than the errors in our observable data set, then we do not have to worry too much. The error introduced by this assumption actually depends on the size of the population we are considering: the more individuals you have, the more the population will act like a continuous quantity. Since there are a lot of corpses out there, we do not think this assumption is too bad.
Note that we could model the motion of each zombie individually; however, the computing power needed by such a simulation is much larger than that needed for the continuum description, which can be solved completely analytically. This is particularly important in the case of the zombie apocalypse, where time spent coding a simulation may be better spent scavenging.
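For completeness, simulating the individuals really is only a few lines of code. Here is an illustrative Python sketch (again, my own addition, not from the chapter) of the individual-based alternative: each zombie takes equally likely left or right steps, and the crowd's mean-square displacement grows linearly in time, which is precisely the hallmark of diffusion at the population level (for steps of size $\delta$ taken every $\tau$ time units, $\langle x^2\rangle = (\delta^2/\tau)t$, so $D=\delta^2/(2\tau)$).

```python
import random

random.seed(1)  # fixed seed so the experiment is reproducible

# Each zombie takes n_steps steps of size +1 or -1 with equal probability.
# For such a walk the mean-square displacement after n steps is exactly n,
# i.e. it grows linearly in time, just as the diffusion equation predicts.
n_steps, n_zombies = 400, 5000
final_positions = []
for _ in range(n_zombies):
    x = 0
    for _ in range(n_steps):
        x += 1 if random.random() < 0.5 else -1
    final_positions.append(x)

msd = sum(x * x for x in final_positions) / n_zombies
print(msd)  # close to n_steps = 400
```

Even this toy version needs millions of random draws to get a smooth answer, which illustrates the point above: the continuum description gives the same information essentially for free.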
These are the basic assumptions we made when modelling a zombie population. Although I have tried to justify them, you may have reservations about their validity. That is the very nature of mathematical modelling: try the simplest thing first, and compare it to data. If you reproduce the phenomena that you are interested in, then you have done your job well. However, if there is a discrepancy between the data and your maths, then you have to revisit your assumptions and adapt them to make them more realistic.
Next time we contend with the equations and model the motion of a zombie as a random walker.
Saturday, 31 October 2015
Diffusion of the dead - The maths of zombie invasions. Part 1, For those who can't wait.
Many years ago I posted a blog post about an academic article I had written about zombies. Finally, the article has been published in Mathematical Modelling of Zombies. The book is designed to be readable by anyone with an interest in mathematics. However, those with a numerical background will find that it pushes them further, as it does not shy away from clearly displaying the mathematics whilst explaining the methods behind the madness.
As always, thank you to Martin Berube for the use of his zombie image.
The articles range over a number of fields and are simply a means to dress up our everyday techniques in a way that is more palatable for a non-mathematical audience. Of course, not all of you will want to shell out for the book, so I've decided to essentially serialise the chapter over the next few posts. We will be looking at all of the results of our paper and, hopefully, I'll be explaining the mathematics more clearly, so that anyone is able to follow it. I may even throw in a Matlab code or two, so that anyone is able to reproduce the results.
For those of you who are just interested in a quick review, you can read The Times article, or my brief version on the University of Oxford's Mathematical Institute website. Alternatively, if you are too tired to read you can always watch, or listen to a recorded version from the Athens Science Festival, Cambridge Science Festival, or on The Science of Fiction radio show. Finally, you could always see me live, when I'm giving one of my talks.
Next time, we will start at the beginning by modelling the zombie motion.
Algebra 1 answers to Chapter 1 - Foundations for Algebra - Mid-Chapter Quiz - Page 29 3 including work step by step written by community members like you. 2020 BC BLOCK ALGEBRA 2. The Videos, Games, Quizzes and Worksheets make excellent materials for math teachers, math educators and parents. 4 REVIEW Solve each equation. In order to access Algebra Nation's Facebook app (and all our awesome resources), you need to click "Okay. 2 years ago. N/A Edexcel online writing level 1 test. 3 Quiz Review. Apexvs answer key geometry semester 2. Algebra 1: Chapter 3 Test Review Name _____ Period _____ Find the x-intercept and y-intercept for each equation. notebook November 27, 2013 When you are done with your quiz, work on missing work or read an AR book. 11 Body of Knowledge. algebra algebra 1 quiz 41 42 47 44 algebra i 1001 practice problems for' 'algebra quiz 1 solving equations study sets and flashcards may 13th, 2018 - quizlet provides algebra quiz 1 solving equations activities flashcards and games start learning today for free' 'algebra 1 chapter 5 test answers dprior de. by amoreno. RPJ refers to the 8th Grade Algebra 1 Record and Practice Journal. This quiz is incomplete! To play. 1-3 Guided Notes SE - Solving Equations. 3937 days 7 hours 4 minutes ago. A calculator is not allowed. ALGEBRA 1 EOC 2017 2018 2019 2021 EDGEWATER HIGH 76 9 15 75 9 16 59 16 25 83 8 9. 20 to 40% = 5 points. Introduction to minimum and maximum points. Any Equation. Try for free. Each retake is 3 hours long. (2a 2 b - 3c 3)(3a 3 b + 4c) = 5a 6 b 2 + 12c 4 - 9a 3 bc 3. Your teacher announced a 50 point math quiz The length is 12ft longer than the width. 319 Cards -. Content is loading. Chapter 1: Expressions, Equations, and Inequalities Get Ready! 
1 My Math Video 3 1-1 Patterns and Expressions 4 1-2 Properties of Real Numbers 11 1-3 Algebraic Expressions 18 Mid-Chapter Quiz 25 1-4 Solving Equations 26 1-5 Solving Inequalities 33 1-6 Absolute Value Equations and Inequalities 41 Assessment and Test Prep Pull It All Together 49. 0% average accuracy. EOC Algebra 1 LEVEL 1 LEVEL 2 LEVEL 3 may not add to 100 due to rounding and reflect the test taken rather than the grade level enrolled. Hand out Chapter 3 & 4 Test - Review in class, no corrections allowed. Students must choose which of 6 views. in descending order of the exponents. Unit 2-1 and Unit 2-2 Quiz. (Hint: Use v = 1/3 (pi × r 2 × h) 11. first we need to collect like terms. Write 230,000,000,000 in scientific notation. emilydavis714. Test‐ "Polynomials" Algebra 1 Name: _____ Show your Work 1. COMPUTATION AND ELEMENTARY ALGEBRA PRETEST 2 September 2012 WHAT TO TAKE WITH YOU ON THE TEST DAY 1. 2) Solve for x: x² - 36 = 0 x = and. 10x 8y 8y 10x Find each sum or product mentally using the properties above. 1 Solving 1 step Eq. The solution to the original equation is x = 17/5. Students must choose which of 6 views. License Terms and Conditions | Privacy Policy | Children’s Privacy | Terms of Use | Copyright | Customer Support; All Rights Reserved. Student Placement Chart. It serves as an anchor for model lessons and assessments, and it is the primary document teachers can use to align instruction to course content. Our Algebra 1 lessons is available to everyone, but you need to create an account in order to access the. 3 Applications of Algebra 88 Mid-Chapter Test: 2. Chapter 3 Quiz 1 Algebra 2 Answers Chapter 3 Quiz 1 Algebra Algebra 1: Chapter 3 - Inequalities. UNIT 3 TEST REVIEW Algebra 1 DRAFT. EP All-in-One Highschool, Algebra 1 © allinonehomeschool. as a composition of two functions, f and g, where. This quiz is incomplete! To play this quiz, please finish editing it. We’ll track your progress and help you identify your strengths and weaknesses. 
Live Game Live. 9A1-4-DGFP. Abeka 9th Bible Content Quiz 1. Click on the correct answer. 0) Lesson Tutorials. − b x Axis of symmetry = 2 a y − y m = 2 1 x − x 2 1 m a n = n m − b 2 − 4 ac x = 2 a a − n = 1 a n a m = a (m − n) a n Quotient of powers Slope-intercept form. Printable Basic Algebra Quiz - Click this link for a printable version (opens in a new window). | PowerPoint PPT presentation | free to view. Check all that apply. 00)ALL of my Algebra course materials are now available bundled at a discounted price as a Dropbox file! (Individual units are NOT Dropbox files, but are downloads from the TPT site. College Algebra Questions With Answers Sample 1. 4 hours ago. Start Date (required). Suggested answers for each essay question give key points to include and a detailed grading rubric to take the guesswork out of assigning a point value to your teen. y(15)=-1/2*32*(15) 2 +30,000 =-16*225+30,000 =-3600+30,000=26400. Algebra Tests for Children. You can practice taking these exams at home to assess your readiness and determine areas of weakness that you can focus on while studying. CC MATH I STANDARDS UNIT 3. Algebra I Module 3 Quiz 1. (Distribute, Combine Like Terms, Evaluate) 2) IDENTIFY VARIABLE. Algebra 1 Practice Questions | Answer Key. 5 SOLVING MULTI-STEP EQUATIONS: PART 1. File Size: 79 kb. A company distributes its product by train and by truck. Quadratic formula. what is the direction of a line with the slope 3/-2. Given the function f (x) f ( x) we want to find the inverse function, f −1(x) f − 1 ( x). com As this apex algebra 2 quiz answers, it ends occurring monster one of the favored books apex algebra 2 quiz answers collections that we have. 1 Solving 1 step Eq. Math Notes Rubric. 3937 days 7 hours 4 minutes ago. Write and Solve Proportions - Sections 3. 1-3 Assignment - Solving Equations. August 2021 Algebra Regents Answers links: Date: 2021-1-4 | Size: 17. Jan 04, 2016 · Algebra 1. 9th grade. Solve the equation from Step 2 for y y. 
All answers are positive integers. Simplify: 675 ÷ (6 + 9 ÷ 3) A) 15 B) 9 C) 75 D) 225 E) 135. Mathematics. If the terminal side of a 330 angle intersects a unit circle, what would. Given the function f (x) f ( x) we want to find the inverse function, f −1(x) f − 1 ( x). Please Note Practice set answers are in the sold-separately, Saxon Algebra 1/2 Answer. y = a (x-h)^2 + k. Leah cannot work more than a total of 20 hours per month. Florida State Standards MA. 3(x y) (x y)3 6. Algebra 1 Chapter 3 Review DRAFT. College Algebra Quiz 3 Version 1. Have a great day! Mathway's live experts will not knowingly provide solutions to students while they are taking a test or quiz. On the y-axis. , ISBN-10: 0133281140, ISBN-13:. 4 Solve the following proportion for x: 3x + 5 10 x-3 15 A 6. Note: Explain to students that the value in the. Algebra 1abeka quiz 3. Algebra 1 Syllabus 2019-2020. 1 Algebra 1 - Quiz 8. • Tables 1, 2, and 3 describe the range of total items by conceptual category and Depth of Knowledge that will appear on the NC Math 1 and NC Math 3 Tests. But, algebra 2 is the advanced algebra, which is practised in high school level. Graph solutions to one-step linear inequalities. QuickMath will automatically answer the most common problems in algebra, equations and calculus faced by high-school and college students. Like, algebra 1 is the elementary algebra practised in classes 7,8 or sometimes 9, where basics of algebra are taught. •Attendance Quizzes: On randomly chosen days, a 1 to 3 question quiz (time allocated about 3-5 minutes) will be given to you during the regular class time. 1) SIMPLIFY ALL EXPRESSIONS. Mathematics. The product is available for instant download after purchase. View Test Prep - Algebra 1 - Unit 3 - PRACTICE - Quiz 1 - Functions and Slope-KEY. Quiz Banker supports New York State secondary teachers in generating quizzes based on past Regents exam items. 
If you're a student at a school or college studying mathematics then this is the ideal quiz for you. 50 per month to play this quiz and over 3,500 others that help you with. Available for Pre-Algebra, Algebra 1, Geometry, Algebra 2, Precalculus, and Calculus. 1 Consider the set of integers greater than 2 and less than 6. Home; Translate. Which of the following is considered a quadratic equation? Which of the following correctly solves the equation x 2 - 3 x = 18? Which of the following was correctly solved by using the completing the square process? If ax 2 + bx + c = 0, then which correctly states the possible values for x?. 18 hours ago. No make up attendance quizzes will be given. Online Library Glencoe Algebra 2 Chapter 6 Test Form 2a Algebra 1 Chapter 12 Resource Masters Reveal Algebra 2 Middle School Math Algebra 2 Chapter 3 Resource Masters One Program, All Learners Flexibility - Print and digital resources for your classroom today and tomorrow - Appropriate for students who are. in what quadrant is the point (5,4) found. Algebra II Practice Test Objective: 1. Legend (Opens a modal) Possible mastery points. Which of the following is considered a quadratic equation? Which of the following correctly solves the equation x 2 - 3 x = 18? Which of the following was correctly solved by using the completing the square process? If ax 2 + bx + c = 0, then which correctly states the possible values for x?. pdf from MATH 101 201 at Mill Valley High School. Multiple-Choice Sample Question (Part I) The cost of jerseys is\$23$ per jersey. possible 5 ame: ate: Score: _____ Quiz. Introduction to minimum and maximum points. This test includes selected content from Math 5/4, Math 6/5, Math 7/6, Math 8/7, and Algebra 1__. About Press Copyright Contact us Creators Advertise Developers Terms Privacy Policy & Safety How YouTube works Test new features Press Copyright Contact us Creators. Algebra 1 Worksheets. On the y-axis. 
1 : Review - The Need to Read Get ready for the unit test by reviewing important ideas and skills. 3: Solving Inequalities Using Multiplication or Division. Therefore, the given expression can also be written as 5 3 / 8 3. COMPUTATION AND ELEMENTARY ALGEBRA PRETEST 2 September 2012 WHAT TO TAKE WITH YOU ON THE TEST DAY 1. Click here to see which pages we cover. Systems of Equations - Elimination. Playing educational quizzes is a user-friendly way to learn if you are in the 9th or 10th grade - aged 14 to 16. 1) Solve for x: 2 (3x + 4) - 3 (x - 1) = x - 1 x =. ALGEBRA 1 EOC 2017 2018 2019 2021 EDGEWATER HIGH 76 9 15 75 9 16 59 16 25 83 8 9. This quiz is incomplete! To play this quiz. rational numbers IV. Tutorial 6: Polynomials. #25 Rubric Part A 2280-M41337 Score Description 2. Solve One-Step Equations - Section 3. downward (left to right) at least how many points are needed to graph a staright line. 11th - 12th grade. 2 What is the sum of 3x2 2x 5 and x2 2x 8, expressed in its simplest form? A 4x2 3 B 4x2 3 C 3x4 3 D 4x4 3. Homework Practice WorkbookMathematicsAlgebra 2 Chapter 3 Resource MastersMathematics Curriculum in School EducationPre AlgebraGlencoe Algebra 1, Student EditionMathematics ConnectionsAlgebra 2 Chapter 13 Resource MastersGeometryMath ApplicationsAlgebra 2 School-to-Career MastersMathematicsAlgebra 1Algebra 1 Chapter 9 Resource MastersAlgebra. answer choices. Algebra I Module 3. Khan Academy's Algebra 1 course is built to deliver a comprehensive, illuminating, engaging, and Common Core aligned experience!. 132 - 137) Quiz 3. 15) 8M - 4M - 6 - 3 + 5M = 82 - 1 16) Check 17) (-3)2 ÷ 9 + 6 = D 18) Check Pre-Algebra Competency Exam To receive the full benefit of this test, watch the student to ensure he has mastered the concepts presented in Pre-Algebra. Math1222 Lesson 1 Quiz week 3. Solve for x: 2(x+ 7) - 3(2x-4) = -18 A. In this module, students extend their study of functions to include function notation and the concepts of domain and range. 
in descending order of the exponents. Equation Game In this interactive concentration game, students will try to match each equation with the correct solution as fast as they can. Unit 1 Lesson 1, 2 & 3 Quiz Review And Answers Algebra[1] unit 1 lesson 1, 2 & 3 quiz review and answers Algebra[1] Unit 1 Lesson 1, 2, and 3 Review 2. Multi-Step, With Variables on Both Sides - Sections 3. Solve the equation for x. 1 3 𝑥−1, 𝑥≤3 1 2 𝑥, 𝑥≥2. Which of the following is a function?. 50 per month to play this quiz and over 3,500 others that help you with. Questions and Answers. natural numbers III. 3 Quiz Answers | latest! Algebra 2 (3) Apex - Semester 2 Part B Apex Algebra 1 Semester Answer Key - Booklection. Algebra I Module 3. Algebra 1 and algebra 2 are the Maths courses included for students in their early and later stages of academics, respectively. Textbook Authors: Charles, Randall I. Report an issue. 9A1-4-DGFP. 5x + 10y = 25 2. CC MATH I STANDARDS UNIT 3. Below is a list of all the units and the lessons contained in the units. Show your work! 1) 1 − 7(5 n + 5) = −139 2) −96 = −6(8 + 2a) 3). Class Syllabus Free Graphing Calulators for Computer: Math Calculator Quiz 10. 1-3 Bell Work - Solving Equations. This quiz is incomplete! To play. This quiz is timed. Students will learn new material through animations, videos, reading, and guided practice. Click here to see solutions for all Machine Learning Coursera Assignments. Solve Systems of Linear Inequalities by Graphing. HOW TO SOLVE MULTI-STEP EQUATIONS. solution of an inequality. Review for Test on Solving Equations - Chapter 3. College Algebra Questions With Answers Sample 1. 106 - 111) 3. A PHOTO ID TESTING REGULATIONS No cell phones, food, drink, books, dictionaries, notes of any kind are allowed in the test room. 4 Solve the following proportion for x: x 9 6 27 F 2 G 3 H 18 I 40. 8th - 9th grade. 
Teachers using ALGEBRA 1 may photocopy complete pages in sufficient quantities for classroom use only and not for resale. A) 2 1 − B) 2 17 − C) 2 1 D) 2 17 2. College Algebra quiz. by amoreno. Quiz 1A - Signed Numbers #1. 2 out of 5 stars 16. 1 | P a g e. second is to undo addition and subtraction. 50 per month to play this quiz and over 3,500 others that help you with. Unit 2-1 and Unit 2-2 Quiz. 3 with audio to aid students. Evaluate the expression 2 3a+2b when a = -3 and b = -4. Algebra 1 Name: _ Unit 3: PQ1 Date: _ Period: _ "PRACTICE" QUIZ. Algebra I Test & Quiz Generator Quiz Banker creates student-ready editable quiz and answer documents based on an item bank of over 2500 state exam questions. Answers sold separately in the Algebra 1 Quiz/Test Key. Keith Calkins, remediate high school. Date: 2021-1-7 | Size: 10. Each quiz will be worth either 1 or 0 depending on whether you can answer the questions correct or not. as a composition of two functions, f and g, where. The area of the garden is tripled B. in descending order of the exponents. Each retake is 3 hours long. Download free editable and probable test in PDF and doc file. 132 - 137) Quiz 3. Textbook Authors: Hall, Prentice, ISBN-10: 0133500403, ISBN-13: 978-0-13350-040-0, Publisher: Prentice Hall. 3) The area of a rectangle is 72 square meters. Click on the correct answer. Benchmark Test : Algebra 1 ©1999-2011 Progress Testing Page 3 Benchmark: MA. Free Algebra 1 Practice Test Questions. College Algebra Quiz 3 Version 1. standard quadratic. You will receive incredibly detailed scoring results at the end of your Algebra 1 practice test to help you identify your strengths and weaknesses. Simplify -32. © {{ currentYear | date. D: The situation describes a geometric series, with a common ratio of 2. This is where you will find weekly agendas and copies of assignments that were given in class. Algebra 1 EOC Test Item Specifications Florida Department of Education| |73 BENCHMARKMA. Algebra 1abeka quiz 3. 
This quiz is incomplete! To play this quiz, please finish editing it. Algebra has a reputation for being difficult, but Math Games makes struggling with it a thing of the past. Delete Quiz. Algebra 1 3. The majority of items come from the following content areas: Functions Exponents Complex Numbers Arithmetic and Geometric Sequences and Series Matrices (basic operations, equations, and determinants). 44 Learners. integers II. 5 Factoring Trinomials Day 1 2/27 - 7. Click to take algebra tests below. Click on the correct answer. 42 KB) Algebra I Module 1: Mid-Module Assessment (847. Algebra Trivia Quizzes. 3937 days 7 hours 4 minutes ago. 7 8 of 56 = Fill in the missing numbers in the numerators or denominators to make equivalent. A statement that compares two quantities using <, >, "d,"攀, or ≠. Answers, Teach, Algebra, Cheat answer for gradpoint algebra 1, Gradpoint. Addition Property of Equality. what is the x-intercept of the equation 3x+4y=12. This quiz is incomplete! To play this quiz, please finish. Standard 3 Solve linear equations and inequalities. Under each lesson you will find theory, examples and video. Mathematics. solution of an inequality. Level up on the above skills and collect up to 500 Mastery points Start quiz. On the y-axis. Unit 8 Quiz. Instead of adding 4 to -42, Fred added 4 to 42. 17 per container 65. 50 for each pound of fruit picked at the orchard. Start Now. RPJ refers to the 8th Grade Algebra 1 Record and Practice Journal. •Attendance Quizzes: On randomly chosen days, a 1 to 3 question quiz (time allocated about 3-5 minutes) will be given to you during the regular class time. 2) Solve for x: x² - 36 = 0 x = and. Dividing 5th grade test, algebra 1 problems square root of symbol, analytic cubes for dummies, rate and ratios 6th grade math free worksheet, linear algebra done right solution manual, holt mathematics answers, sample codes on how to find the gcf of 3 numbers in java. MATH 1222 - Winter 2016. 464 (b) 1128.
https://mikespivey.wordpress.com/2015/02/17/does-pointwise-convergence-to-a-continuous-function-imply-uniform-convergence/
## Does Pointwise Convergence to a Continuous Function Imply Uniform Convergence?
Recently in my advanced calculus class we discussed how the uniform limit of a sequence of continuous functions is itself continuous. One of my students turned the question around: If the pointwise limit of a sequence of continuous functions is continuous, does that mean that the convergence is uniform?
The answer is “No,” although I couldn’t think of a counterexample off the top of my head. I found one (not surprisingly) on Math.SE. Although there is some good discussion and there are multiple good answers to the question there, I’m going to discuss Ilmari Karonen’s answer.
Let $\displaystyle f_n: [0,1] \mapsto \mathbb{R}$ be given by
$\displaystyle f_n(x) = \begin{cases} nx, & 0 \leq x \leq 1/n; \\ 2 - nx, & 1/n < x \leq 2/n; \\ 0, & \text{ otherwise}. \end{cases}$
Each $f_n$ is continuous (piecewise-defined, but that’s fine). For any $x \in (0,1]$, there exists N such that $x > 2/n$ for all $n \geq N$, and so $f_n(x) = 0$ for all $n \geq N$. Since $f_n(0) = 0$ for all n as well, this means that $f_n(x) \to 0$ for all $x \in [0,1]$. The zero function is clearly continuous.
However, for each $n \in \mathbb{N}$ we have that $f_n(1/n) = 1$. This means that when $\epsilon < 1$ there cannot be an $N \in \mathbb{N}$ such that $|f_n(x)| < \epsilon$ for all $x \in [0,1]$ and $n \geq N$. Thus the convergence cannot be uniform.
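To see this concretely, here is a short numeric check (a sketch, not part of the original post): at any fixed point the values of $f_n$ eventually reach 0, yet each $f_n$ still attains the value 1 at $x = 1/n$, so $\sup_{x \in [0,1]} |f_n(x)|$ never drops below 1.

```python
def f(n, x):
    """The tent function f_n from the post, on [0, 1]."""
    if 0 <= x <= 1/n:
        return n * x
    elif 1/n < x <= 2/n:
        return 2 - n * x
    else:
        return 0.0

# Pointwise: at the fixed point x = 0.1 the values eventually become 0...
print([f(n, 0.1) for n in (1, 5, 20, 100)])   # -> [0.1, 0.5, 0.0, 0.0]

# ...but each f_n still peaks at 1 (attained at x = 1/n), so
# sup |f_n - 0| does not tend to 0 and the convergence is not uniform.
print([f(n, 1/n) for n in (1, 5, 20, 100)])   # -> [1.0, 1.0, 1.0, 1.0]
```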
An even simpler example is $$f_n: (0,1) \mapsto [0,1)$$, where $$f_n(x)=x^n$$. Each $$f_n(x)$$ is clearly continuous and converges point-wise to $$0$$ on $$(0,1)$$ (again a continuous limit), but the convergence is not uniform.
I think in class we had restricted ourselves to closed and bounded domains for $\{f_n\}$. But you’re right; for the question I stated in the post that is definitely a simpler example.
http://cscircles.cemc.uwaterloo.ca/10-def/
# 10: def
In this lesson we show you the most important idea in programming: defining your own functions! Functions let your program become shorter, better-organized, easier to read, easier to debug, and more reusable. In later lessons we will see other benefits like recursion.
# Defining a Function
We'll start with an example. Remember the function abs? It takes one argument (a number x), and returns its absolute value (which is x when x ≥ 0, and -x when x < 0). The way to define this function in Python is:
```python
def abs(x):        # define a function named 'abs' of one argument named 'x'
    if x >= 0:     # function body starts here
        return x
    else:
        return -x  # function body ends here
```
## The Two Parts of a Function Definition
The first line in a function definition specifies the name of the function and the arguments of the function. It should look like
def «function-name»(«argument-list»):
In the example above, abs was the function name, and the argument list was x. It is called an argument list because there can be multiple arguments such as x, y, z; it's also possible to have zero arguments, i.e. an empty list. The arguments are the input(s) to the function.
Everything after the first line of a function definition is the body of the function. The body is an indented block of code. Whenever the function is called, the body is executed on those arguments/inputs. Finally, when we reach this line in the function body,
return «value»
the function stops running and returns «value» as its output. That is, the return value is the output of the function. As the abs example shows, the body may contain several return statements; but only the first one executed has an effect since the function stops executing afterwards.
## Example: Defining and Calling a Function
The following program contains a function definition and some other statements that are executed after the function is defined. Can you guess what the output will be?
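The program itself appeared in an interactive widget on the original page; this reconstruction matches the description that follows (a function named square with one argument x whose body is return x*x, and two commands that call it a total of three times):

```python
def square(x):
    return x*x

print(square(10))          # first command: one call, prints 100
print(square(square(2)))   # second command: two calls, inner first, prints 16
```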
We see that the function name is square, there is just one argument x, and that the function body consists of just one line return x*x. Then outside of the function we have two commands, which call the function a total of three times.
As the visualizer shows, note that a separate chunk of working space (a frame) is allocated each time the function is called. Once the function returns, that working space is no longer needed and is erased.
Here are the steps explained in words:
1. When the first command is executed, Python must evaluate square(10).
• Python compares the list of inputs (10) to the argument list (x). So, it executes the function body return x*x while remembering that x equals 10. Thus x*x yields 100, which is printed.
2. When the second command is executed, Python must evaluate square(square(2)).
• The inner part square(2) is evaluated first. We temporarily set x equal to 2 and run the function body, which returns 4.
• Then the outer expression is evaluated; since square(2) gave 4, we are now calling square(4). This returns 16, which is printed.
## Four Common Mistakes
One common mistake you can make in defining a function is to forget the return statement.
Example
Mistake 1: forgetting to return
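The original interactive example did not survive extraction; here is a minimal sketch of the mistake (the function name is illustrative):

```python
def broken_abs(x):    # like abs, but missing one return statement
    if x >= 0:
        return x
    # oops: no return statement is reached when x < 0

print(broken_abs(5))   # prints 5
print(broken_abs(-5))  # prints None -- execution "fell off the end" of the body
```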
As you can see, if no return statement is encountered in the body, then the function gives None by default. So, if you are failing some exercise because a function is returning None, the problem is often that some function inputs are not causing a return statement to be executed.
You may also intentionally omit a return statement. This only makes sense if your function has some side effect other than its return value, for example printing some output:
Example: A side effect and no return statement
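The original widget code is missing; a sketch along these lines (the name greet is illustrative) shows a function whose only job is its side effect:

```python
def greet(name):                   # hypothetical example
    print("Hello, " + name + "!")  # the side effect: printing output
    # no return statement: the call still yields None

result = greet("Alice")
print(result)                      # prints None
```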
Another common mistake is forgetting to indent the code, resulting in an IndentationError.
Example
Mistake 2: forgetting to indent
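A sketch of this mistake (not the original widget code): the snippet below holds a def whose body is not indented, and compiling it raises an IndentationError. Wrapping it in compile() lets us demonstrate the error without crashing the program.

```python
bad_source = """def f(x):
return x + 1
"""   # the body line is not indented

try:
    compile(bad_source, "<example>", "exec")
    compiled = True
except IndentationError as err:
    compiled = False
    print("IndentationError:", err.msg)  # message text varies by Python version
```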
As we saw in lesson 2, calling with too few or too many arguments causes an error.
Example
Mistake 3: call with too many arguments
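A sketch of this mistake (reusing the square function from earlier in the lesson): calling a one-argument function with two arguments raises a TypeError.

```python
def square(x):            # one argument expected
    return x*x

try:
    square(3, 4)          # called with two arguments
except TypeError as err:
    print("TypeError:", err)
```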
Finally, if you make a typo when defining or calling the function, you will get an error saying that the function is undefined.
Example
Mistake 4: wrong name
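A sketch of this mistake: misspelling the function's name at the call site raises a NameError, since no function by that name was ever defined.

```python
def square(x):
    return x*x

try:
    print(sqaure(10))     # typo: 'sqaure' was never defined
except NameError as err:
    print("NameError:", err)
```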
## Try it Yourself
There's no need to use input() or print() for this exercise, or for future exercises where you're asked to "define a function." The grader will call your function with arguments automatically, and inspect the result it returns directly. A correct solution for this particular exercise will be 2 lines (see 'The Two Parts of a Function Definition' at the top of the lesson).
Coding Exercise: Cubism
Define a function cube(n), which takes a single number n as input, and outputs its cube n × n × n.
## Two or More Arguments
The functions above only took one argument, but a function can be designed to take any number of arguments. For example, you have been calling input() with 0 arguments (and you'll define a function with zero arguments in lesson 15A).
Here's an example with two arguments, about geometry. Suppose we need a function to compute the area of a triangle, given the length of its base and its height. The formula for the area is
area = base × height / 2
In code, this looks like the following: we replace def «function-name»(«argument1») with def «function-name»(«argument1», «argument2»).
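The code itself appeared in an interactive widget that did not survive extraction; a reconstruction along these lines (the function and argument names are assumptions) applies the formula directly:

```python
def triangleArea(base, height):    # two arguments
    return base * height / 2

print(triangleArea(10, 4))         # prints 20.0
```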
Another important thing to note is that a function being defined and a function being executed happen at different times: the body doesn't run until the function is actually called. To test this, complete the following exercise:
Multiple Choice Exercise: In-n-Out
What is the output of the following program?
```python
def f():             # function of 0 arguments
    print("in f")

print("not in f")
f()                  # call f now
```
Answer: not in f is printed first, then in f. The function f is defined first of all, but its body is not executed immediately. After the function is defined we print out not in f. Then the function is called, and the body is executed, printing in f.
In the last exercise we ask you to write your own function of two arguments, to compute the perimeter of a rectangle. (The perimeter is the total length of all sides.) In a rectangle with a given width and height, its perimeter is given by the following formula:
perimeter = width + height + width + height
See the diagram at right for an example.
Coding Exercise: Rectangle
Define a function rectanglePerimeter(width, height) that computes the perimeter of a rectangle.
### Functions Calling Functions
Functions are the building blocks of well-built large programs: you can do the same task twice without writing out the same code twice, and you can re-use solutions to common tasks. Here is an example of building one function and using it inside of another one.
Can you guess what this program does, before you run it? Remember from lesson 4 that multiplication of strings and integers means to repeat the string that many times. For example, "tar"*2 is "tartar".
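The program itself appears only in the interactive visualizer, so here is a hypothetical reconstruction along the same lines (the names `outer` and `inner` match the visualization; the exact bodies are our guess):

```python
def inner(word, times):
    # string * integer repeats the string: "tar" * 2 is "tartar"
    return word * times

def outer(word):
    # outer calls inner twice, waiting for each call to finish
    return inner(word, 2) + " " + inner(word, 3)

print(outer("tar"))  # prints tartar tartartar
print(outer("na"))   # prints nana nanana
```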
You can see a new phenomenon in the visualization: under the "Frames" column, when we are at Step 8 of 30 in the execution, there are not only some global variables, but two frames (one for outer, and one for inner that was just created). The outer one stays waiting until the inner one is completely finished, then the outer one resumes. It happens to make another call to inner, and again outer waits; and when inner is finished, then outer continues to completion. Finally, we call outer and the whole process is repeated.
You are now ready for the next lesson!
|
2017-02-27 13:56:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5948446393013, "perplexity": 973.3422083136854}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00033-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://mathstodon.xyz/@unknown/
|
Pinned toot
Now I have made a PDF shinshu.us/itou.pdf , thanks! We plan to generalize the Twitter post "じゃがりきんapu長ピーポ黄金角形" twitter.com/wasanp_/status/109 in the future.
Pinned toot
@jsiehler I'm playing with your games, so exciting! homepages.gac.edu/~jsiehler/ga (If I understand the mechanism correctly, that moving one piece one time counts as 1 move, I may be able to reduce MOVES further!)
Pinned toot
I am sorry because I write the following in Japanese.
Mathematical Objects: Stick of Chalk
A conversation about mathematics inspired by a stick of chalk. Presented by Katie Steckles and Peter Rowlett. aperiodical.com/2019/04/mathem
Regular polygon surfaces: arxiv.org/abs/1804.05452
Ian Alevy answers Problem 72 of The Open Problems Project (cs.smith.edu/~jorourke/TOPP/P7): every topological sphere made of regular pentagons can be constructed by gluing regular dodecahedra together.
You can also glue dodecahedra to get higher-genus surfaces, as in this image from momath.org/mathmonday/the-para, but Alevy's theorem doesn't apply, so we don't know whether all higher-genus regular-pentagon surfaces are formed that way.
Another one of my domain coloring stuff from my archives: an iterated rational function with icosahedral symmetry, plotted over the Riemann sphere
Mathematical sign language: aperiodical.com/2019/04/mathem
Hearing-impaired electrical engineering researcher Jess Boland discovered that there weren't enough technical terms in British Sign Language to cover the mathematics she uses in her work, so she's been creating new ones as well as promoting the ones BSL already had. Katie Steckles interviews her for The Aperiodical.
(@aperiodical has a Mathstodon account but they're letting it go stale instead of promoting their new posts...)
We've blogged about #Halcyon 2.3.0, what we did especially in this release, and also a few things before. You can read it here: nikisoft.myblog.de/nikisoft/ar
Additionally, new strings of Halcyon 2.3.0 are now ready for translation at translate.zanata.org/iteration . We need your help to offer Halcyon in even more languages and to update existing languages. Thank you very much for your help!
I've just updated mathstodon.xyz to Mastodon v2.8.0.
Release notes: github.com/tootsuite/mastodon/
It's got polls, an interface to manage your follow lists, and lets us change signups from open access to "ask for an invitation". We'll do that if spam becomes a problem again (it currently isn't)
- Adjusted the spacing between toots in the timeline (to smooth over differences between browsers)
- Extended the boost feature (you need to switch it on in the settings screen)
- Finally added support for Pleroma
...and more. It may feel like too much was crammed in, but see below for the details.
github.com/nishlumi/gplusdon/r
#quesdon You can now import unanswered questions from the original site into Quesdon (toot.app).
quesdon.toot.app
What we'll see when Andromeda and the Milky Way collide! twitter.com/universal_sci/stat
#newsbot
We're proud to announce that #Halcyon has been bought by @facebook . We're looking forward to a cooperation which will bring advantages for both platforms. Let's make federated social media for the masses together! We thank everyone who went the long way with us. Nothing will change for you as a user, except that we will abuse every action you do for advertising. Oh, by the way, ads will be introduced with the next release! Find out more here: nikisoft.one/firstapril.php
Interesting…
\begin{align}\sqrt[3]{50+\sqrt{2527}}&+\sqrt[3]{50-\sqrt{2527}}=4\\ \sqrt[3]{85+\sqrt{7252}}&+\sqrt[3]{85-\sqrt{7252}}=5\\ \sqrt[3]{44+\sqrt{1944}}&+\sqrt[3]{44-\sqrt{1944}}=4\\ \sqrt[3]{1414+\sqrt{1999404}}&+\sqrt[3]{1414-\sqrt{1999404}}=14\\ \sqrt[3]{3515+\sqrt{12355252}}&+\sqrt[3]{3515-\sqrt{12355252}}=19\\ \sqrt[3]{171710+\sqrt{29484324108}}&+\sqrt[3]{171710-\sqrt{29484324108}}=70\\ \sqrt[3]{3015103+\sqrt{9090846100636}}&+\sqrt[3]{3015103-\sqrt{9090846100636}}=182\end{align}
I'm looking for identities like this one that I've already found:
$\sqrt[3]{4n^3+6n+\sqrt{16n^6+48n^4+36n^2+8}}+\sqrt[3]{4n^3+6n-\sqrt{16n^6+48n^4+36n^2+8}}=2n$
I can't think of a better way of distributing the code we use to add to mathstodon.xyz, so I've made a GitHub gist containing a patch file: gist.github.com/christianp/617
It also includes a patch to add the automatic replacement of some LaTeX commands with equivalents inside the toot compose box, which is more convenient when you don't need "proper" LaTeX typesetting.
I came up with something like this (*´ω`*)
TheDesk's share has finally passed 1%!
Hundreds of thousands of protesters swamped London demanding another referendum on EU membership amid political par… https://twitter.com/i/web/status/1109568250503933953
A Mastodon instance for maths people. The kind of people who make $\pi z^2 \times a$ jokes.
Use \( and \) for inline LaTeX, and \[ and \] for display mode.
|
2019-07-22 13:08:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2753411531448364, "perplexity": 5876.7427064999365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528013.81/warc/CC-MAIN-20190722113215-20190722135215-00014.warc.gz"}
|
http://blogs.catapultsystems.com/asills/archive/2011/06/01/useful-expression-blend-sdk-controls-callout/
|
## Useful Expression Blend SDK Controls–Callout
The Expression Blend SDK has some interesting controls in it that you can easily miss if you're not poking around the SDK namespaces. The one I'm going to talk about right now is called Callout. The Callout control is a content control that renders itself with an optional arrow anchored at a given point. What makes this control interesting (and fun) is that it has options for displaying itself like a thought bubble or a word balloon.
The AnchorPoint property dictates where the optional arrow will be anchored on the callout. The CalloutStyle property has 4 options: Cloud, Oval, Rectangle, RoundedRectangle. The first two support arrows and the last two do not. To see this in action:
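The screenshots from the original post don't survive extraction, but a rough XAML sketch of using the control looks like the following (the `es:` namespace mapping is the Blend SDK drawing namespace as we recall it; verify it against the version of the SDK you have installed):

```xml
<!-- Sketch only: check the xmlns URI and assembly against your Blend SDK -->
<UserControl
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:es="http://schemas.microsoft.com/expression/2010/drawing">
    <Grid>
        <!-- A thought-bubble style callout, arrow anchored at the bottom-left -->
        <es:Callout CalloutStyle="Cloud"
                    AnchorPoint="0,1"
                    Margin="0,0,0,40"
                    Content="Hmm..." />
    </Grid>
</UserControl>
```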
If I switch CalloutStyle to Cloud and move the anchor point to “0,1” (and move the Callout up a bit with a margin) I end up with this:
And a quick example of Rectangle and RoundedRectangle (which are much more boring without arrows):
|
2018-01-16 23:32:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248972296714783, "perplexity": 2563.3740222086917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886758.34/warc/CC-MAIN-20180116224019-20180117004019-00053.warc.gz"}
|
https://muon-spectroscopy-computational-project.github.io/muons.html
|
# The Muon Spectroscopy Computational Project
Software and methods to make the muon spectroscopist's life easier
|
2023-03-24 08:44:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984287679195404, "perplexity": 13216.946560349783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00288.warc.gz"}
|
https://wwrenderer-staging.libretexts.org/render-api?sourceFilePath=Library/UMN/calculusStewartCCC/s_4_6_6.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&answersSubmitted=0&showSummary=1&displayMode=MathJax&language=en&outputFormat=nosubmit
|
Find the dimensions $l \times w$ of a rectangle with area $250 \text{yds.}^2$ whose perimeter is as small as possible.
Answer (in yds.): $l =$ and $w =$
You can earn partial credit on this problem.
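For reference, a worked solution (ours, not part of the rendered problem): substitute $w = 250/l$ into the perimeter $P = 2l + 2w$ and minimize:

```latex
P(l) = 2l + \frac{500}{l}, \qquad
P'(l) = 2 - \frac{500}{l^{2}} = 0 \implies l^{2} = 250,
```

so $l = w = \sqrt{250} = 5\sqrt{10} \approx 15.81$ yds; the perimeter-minimizing rectangle of fixed area is a square.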
|
2023-02-01 03:08:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4056026339530945, "perplexity": 566.4235167028852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499899.9/warc/CC-MAIN-20230201013650-20230201043650-00150.warc.gz"}
|
https://affarerhqgk.web.app/9174/39441.html
|
# apple mail > inställningar - La Buona Terra
implicit interest - Swedish translation – Linguee
An implicit method has been presented for solving singular initial value problems. The method is simple and gives more accurate solution than the implicit Euler method as well as the second order implicit Runge-Kutta (RK2) (i.e., implicit midpoint rule) method for some particular singular problems. There is a difference between implicit ODE $0=F(t,y,\dot y)$ and implicit numerical methods. Most numerical methods, explicit as well as implicit, are for explicit ODE $\dot y=f(t,y)$.
Ships within 10-15 weekdays. Buy Moving Particle Semi-implicit Method by Seiichi Koshizuka at Bokus.com. Price: 1009 kr. E-book, 2018.
## Acceleration of Compressible Flow - AVHANDLINGAR.SE
Sometimes, stating something clearly can be the best way to make your ... The IT method may be regarded as an implicit version of the HOS method, since it replaces the explicit perturbation approach for obtaining the still ... An implicit function is one that is not defined explicitly, but the given information implies that there is a function. There is no way to solve for the variable explicitly. Explicit means direct or clear.
### Untitled
Implicit teaching: The opposite of making facts, concepts and procedures explicit is to leave some of them implicit. This is what happens when we learn through discovery and inference. So 'implicit teaching' might be a good term to describe this approach, yet the expression does not seem to have caught on. The process that we used in the second solution to the previous example is called implicit differentiation, and that is the subject of this section. In the previous example we were able to just solve for $y$ and avoid implicit differentiation. Analysis of the scheme: We expect this implicit scheme to be order $(2,1)$ accurate, i.e., $O(\Delta x^2 + \Delta t)$. Substitution of the exact solution into the differential equation will demonstrate the consistency of the scheme for the inhomogeneous heat equation and give the accuracy.
The SQ-based collision detection method was compared to the GJK algorithm they could leverage the explicit and implicit representations of the SQ objects, In this talk, I describe the context and method for a study that we have been conducting on the effects of blame on expressions of implicit bias.* I will provide an Often, implicit bias can be the root of the problem. Simulation-based training is a proven method to change behaviors and improve outcomes, as well as to Explicit or Implicit Grammar? - Grammar Teaching Approaches in Three fulltext. Jakobsson, Ina; Knutsson, Emmalinn : Malmö universitet/Lärande och Part IV. 2004-10-20 Per Martin-Löf: Normalization by evaluation and by the method of computability.
3. The discretization of the heat equation MPM method is also included, followed by a description of the implicit and explicit methods. A complete description of the implicit equations, both linear and Implicit Method for the Wave Equation*. By Melvyn Ciment** and Stephen H. Leventhal***. Abstract.
The influence of a perturbation is felt immediately throughout the complete region. Crank-Nicolson Method: Crank-Nicolson splits the difference between the forward and backward difference schemes. A popular method for discretizing the diffusion term in the heat equation is the Crank-Nicolson scheme. It is a second-order accurate implicit method that is defined for a generic equation $y' = f(y, t)$ as: $$\frac{y_{n+1} - y_n}{\Delta t} = \frac{1}{2}\left(f(y_{n+1}, t_{n+1}) + f(y_n, t_n)\right).$$
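As a small sketch (our own test problem, not from the page above): for the linear equation $y' = \lambda y$, the implicit Crank-Nicolson update can be solved for $y_{n+1}$ in closed form, which makes a tiny demonstration possible:

```python
import math

def crank_nicolson_decay(y0, lam, dt, steps):
    # For y' = lam*y the CN update (y_{n+1}-y_n)/dt = lam*(y_{n+1}+y_n)/2
    # rearranges to y_{n+1} = y_n * (1 + lam*dt/2) / (1 - lam*dt/2).
    factor = (1 + lam * dt / 2) / (1 - lam * dt / 2)
    y = y0
    for _ in range(steps):
        y *= factor
    return y

dt, steps = 0.1, 10                       # integrate y' = -y from t=0 to t=1
approx = crank_nicolson_decay(1.0, -1.0, dt, steps)
exact = math.exp(-1.0)
print(approx, exact)                      # second-order: error shrinks like dt^2
```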
### Getting around static typing in Scala - blog.
If there is no implicit value of the right type in scope, it will not compile. The condition allows the implicit method to use arbitrarily large $k$; to maintain accuracy we still need $k \sim h^2$. Crank-Nicholson Method: Now that we have two different methods for solving parabolic equations, it is natural to ask, "can we improve by taking an average of the two methods?" The answer is yes. It is implicit in time and can be written as an implicit Runge-Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century.
|
2023-03-24 09:11:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5516429543495178, "perplexity": 1261.3026471793667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00590.warc.gz"}
|
https://en.wikitolearn.org/User:LSGNT/Advanced_Topics_In_Geometry/Geometric_Invariant_Theory/Quotients
|
# Quotients
We had a slogan that the goal of GIT is to construct quotients in algebraic geometry; one would like to consider the quotient topological space (which is the natural way to endow the set of orbits with a topology) and give it the structure of an algebraic variety. This is not always possible. On the other hand, at least in the affine setting, one is tempted to consider the spectrum of the algebra of invariants as a quotient, but this sometimes happen to record little information about the topological quotient. What we are going to do is to get rid of bad orbits and obtain a reasonable quotient space of the remaining ones. Let's now go through some definitions of reasonable quotients and the properties we would expect from them.
Definition 7.2
Let ${\displaystyle {\mathcal {C}}}$ be a category of algebro-geometric objects (we have in mind 1. (quasi-)affine algebraic varieties, 2. (quasi-)projective varieties, 3. schemes over some base, say ${\displaystyle \mathbb {C} }$). Let ${\displaystyle G}$ be a group (object in ${\displaystyle {\mathcal {C}}}$) acting on ${\displaystyle X\in \operatorname {ob} ({\mathcal {C}})}$. A morphism ${\displaystyle \pi \colon X\to Y}$ in ${\displaystyle {\mathcal {C}}}$ is called a categorical quotient if, for every ${\displaystyle G}$-invariant arrow ${\displaystyle f\colon X\to Z}$ (i.e. equivariant with respect to the trivial action of ${\displaystyle G}$ on ${\displaystyle Z}$) in ${\displaystyle {\mathcal {C}}}$, there exists a map ${\displaystyle Y\to Z}$ that factorizes ${\displaystyle f}$ through ${\displaystyle \pi }$.
Uniqueness follows from the universal property.
Exercise: let ${\displaystyle \varphi \colon G\times X\to X}$ be the action map. Write down the diagrams that this map is required to satisfy (i.e. identity, compatibility).
Definition 7.3
A map ${\displaystyle \pi \colon X\to Y}$ is called a good quotient if
1. ${\displaystyle \pi }$ is ${\displaystyle G}$-invariant (i.e. constant on orbits) and surjective;
2. ${\displaystyle \pi }$ is affine and, for every affine open ${\displaystyle U\subseteq Y}$, ${\displaystyle \pi ^{\sharp }}$ identifies ${\displaystyle \mathbb {C} [U]}$ as the subring of ${\displaystyle G}$-invariant functions inside ${\displaystyle \mathbb {C} [\pi ^{-1}(U)]}$;
3. if ${\displaystyle W}$ is closed and ${\displaystyle G}$-invariant in ${\displaystyle X}$, then ${\displaystyle \pi (W)}$ is closed (i.e. ${\displaystyle \pi }$ is submersive; it is sometimes required to be universally submersive);
4. if ${\displaystyle W_{1}}$ and ${\displaystyle W_{2}}$ are closed, ${\displaystyle G}$-invariant subsets of ${\displaystyle X}$ that do not intersect, then also ${\displaystyle \pi (W_{1})\cap \pi (W_{2})=\varnothing }$.
Definition 7.4
A map ${\displaystyle \pi \colon X\to Y}$ is called a geometric quotient if it is a good quotient and ${\displaystyle Y}$ is an orbit space.
Basically, a good quotient may fail to be a geometric quotient by grouping together a bunch of orbits. The basic results of GIT for reductive groups are the following:
• in the affine setting, one obtains a good quotient by taking ${\displaystyle X\to \operatorname {Spec} \mathbb {C} [X]^{G}}$;
• in the projective case, one can throw away some orbits and obtain a good quotient (that is also a projective variety) of the other ones (semistable orbits); the quotient map restricts to a geometric quotient on the set of stable orbits (the quotient is only quasi-projective).
Furthermore, it is possible to give a numerical characterization of (semi)-stable points.
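A standard illustration (our example, not from the text above) of a good quotient that is not geometric: let $\mathbb{C}^{\times}$ act on $X=\mathbb{C}^{2}$ by $t\cdot(x,y)=(tx,t^{-1}y)$. Then

```latex
\mathbb{C}[X]^{\mathbb{C}^{\times}} = \mathbb{C}[xy], \qquad
\pi\colon \mathbb{C}^{2} \to \operatorname{Spec}\mathbb{C}[xy] \cong \mathbb{C},
\qquad (x,y)\mapsto xy .
```

The fibre over $0$ contains three orbits (the two punctured axes and the origin), so $\pi$ is a good quotient but not an orbit space; over every nonzero value the fibre is a single closed orbit.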
|
2022-05-18 13:19:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 38, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9519757628440857, "perplexity": 207.74557040056968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00383.warc.gz"}
|
https://studyadda.com/solved-papers/jee-pyq-nlm-friction-circular-motion_q39/1108/524279
|
• Sand is being dropped on a conveyer belt at the rate of 2 kg per second. The force necessary to keep the belt moving with a constant speed of $3\ \text{m s}^{-1}$ will be [JEE ONLINE 19-05-2012] A) 12 N B) 6 N C) Zero D) 18 N
Answer: B. Here, the rate of fall of sand on the conveyer belt is $\frac{dm}{dt}=2\ \text{kg s}^{-1}$ and the conveyer belt speed is $v=3\ \text{m s}^{-1}$. The required force is $F=v\frac{dm}{dt}=3\times 2=6\ \text{N}$.
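The arithmetic in the solution can be checked in a couple of lines (a sketch using the values from the problem):

```python
dm_dt = 2.0    # kg/s, rate at which sand lands on the belt
v = 3.0        # m/s, constant belt speed
F = v * dm_dt  # force needed to accelerate the arriving sand to belt speed
print(F)       # prints 6.0 (newtons)
```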
|
2023-02-07 15:30:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8931092619895935, "perplexity": 2538.6824986032184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00199.warc.gz"}
|
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Quantum_Tutorials_(Rioux)/01%3A_Quantum_Fundamentals/1.88%3A_Planck's_Radiation_Equation_Fit_to_Experimental_Data_-_Another_Algorithm
|
# 1.88: Planck's Radiation Equation Fit to Experimental Data - Another Algorithm
$\mathrm{n} :=42 \qquad \mathrm{i} :=1 . . \mathrm{n} \nonumber$
Data pairs $(\rho_{i}, \lambda_{i})$, $i = 1 \ldots 42$, transcribed as given: (0.07, 0.667); (0.096, 0.720); (0.10, 0.737); (0.190, 0.811); (0.210, 0.383); (0.398, 0.917); (0.420, 0.917); (0.680, 1.027); (0.708, 1.021); (1.036, 1.167); (1.062, 1.172); (1.258, 1.247); (1.669, 1.484); (1.770, 1.697); (1.776, 1.831); (1.730, 2.039); (1.685, 2.170); (1.640, 2.275); (1.551, 2.406); (1.392, 2.563); (1.145, 2.27); (1.115, 2.824); (1.071, 2.916); (1.042, 2.921); (0.974, 3.050); (0.918, 2.151); (0.797, 3.344); (0.760, 3.450); (0.742, 3.556); (0.698, 3.661); (0.667, 3.754); (0.570, 4.027); (0.426, 4.427); (0.378, 4.613); (0.345, 4.805); (0.310, 4.968); (0.280, 5.128); (0.250, 5.296); (0.220, 5.469); (0.205, 5.632); (0.175, 5.783); (0.155, 6.168)
The data for this exercise is taken from page 19 of Eisberg and Resnick, Quantum Physics.
The values of $\rho$ are given in units of $10^{3}$ joules/m$^{3}$ and the values of $\lambda$ are given in $10^{-6}$ m. The temperature is 1595 K.
Define Planck radiation function and first derivatives with respect to parameters a and b:
$F(\lambda, a, b) :=\left[\begin{array}{c}{\dfrac{a \cdot \lambda^{-5}}{e^{b/\lambda}-1}} \\ {\dfrac{\partial}{\partial a} \dfrac{a \cdot \lambda^{-5}}{e^{b/\lambda}-1}} \\ {\dfrac{\partial}{\partial b} \dfrac{a \cdot \lambda^{-5}}{e^{b/\lambda}-1}}\end{array}\right] \nonumber$
Carry out nonlinear regression using Mathcad's genfit algorithm:
$\text{seed}:=\left(\begin{array}{c}{5 \cdot 10^{3}} \\ {10}\end{array}\right) \qquad P :=\text { genfit }(\lambda, \rho, \text { seed, } F) \qquad P=\left(\begin{array}{c}{4.715 \times 10^{3}} \\ {8.906}\end{array}\right) \qquad\left(\begin{array}{l}{a} \\ {b}\end{array}\right) :=P \nonumber$
Calculated radiation equation using output parameters:
$\rho_{\text { calc }}(\mathrm{L}, a, b)=\frac{a \cdot L^{-5}}{\left(e^{\frac{b}{L}}-1\right)} \nonumber$
Plot data and fit:
$\mathrm{L}=0.05, 0.1 \ldots 7 \nonumber$
Calculate Planck's constant using the value of b, which is equal to (hc)/(kT).
$\mathrm{h} :=\frac{\mathrm{b} \cdot 10^{-6} \cdot 1.381 \cdot 10^{-23} \cdot 1595}{2.9979 \cdot 10^{8}} \qquad \mathrm{h}=6.544 \times 10^{-34} \nonumber$
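Outside Mathcad, a genfit-style fit (the model plus its two parameter derivatives, exactly the vector $F$ defined above) can be sketched with a hand-rolled, damped Gauss-Newton iteration. Since the tabulated data is noisy in this extraction, the synthetic data below is generated from the fitted parameters reported above, purely to illustrate the algorithm:

```python
import math

def planck(lam, a, b):
    # Planck-form model rho(lambda) = a * lambda^-5 / (exp(b/lambda) - 1)
    return a * lam**-5 / (math.exp(b / lam) - 1.0)

def planck_grad(lam, a, b):
    # partial derivatives with respect to the fit parameters a and b
    e = math.exp(b / lam)
    denom = e - 1.0
    return lam**-5 / denom, -a * lam**-5 * e / (lam * denom**2)

def sse(lams, rhos, a, b):
    return sum((rho - planck(lam, a, b)) ** 2 for lam, rho in zip(lams, rhos))

def gauss_newton(lams, rhos, a, b, iters=100):
    for _ in range(iters):
        # normal equations J^T J p = J^T r for the 2-parameter problem
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for lam, rho in zip(lams, rhos):
            r = rho - planck(lam, a, b)
            ja, jb = planck_grad(lam, a, b)
            jtj[0][0] += ja * ja
            jtj[0][1] += ja * jb
            jtj[1][1] += jb * jb
            jtr[0] += ja * r
            jtr[1] += jb * r
        jtj[1][0] = jtj[0][1]
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        da = (jtr[0] * jtj[1][1] - jtr[1] * jtj[0][1]) / det
        db = (jtr[1] * jtj[0][0] - jtr[0] * jtj[1][0]) / det
        # crude damping: halve the step until the fit stops getting worse
        step, base = 1.0, sse(lams, rhos, a, b)
        while sse(lams, rhos, a + step * da, b + step * db) > base and step > 1e-8:
            step /= 2
        a, b = a + step * da, b + step * db
    return a, b

a_true, b_true = 4.715e3, 8.906                      # parameters reported by genfit
lams = [0.5 + 0.25 * i for i in range(25)]           # microns, roughly the data range
rhos = [planck(l, a_true, b_true) for l in lams]     # noise-free synthetic data
a_fit, b_fit = gauss_newton(lams, rhos, 5e3, 10.0)   # same seed as the Mathcad call
print(round(a_fit), round(b_fit, 3))
```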
This page titled 1.88: Planck's Radiation Equation Fit to Experimental Data - Another Algorithm is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Frank Rioux via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
|
2023-01-30 05:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9123497605323792, "perplexity": 147.22436592787486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499801.40/warc/CC-MAIN-20230130034805-20230130064805-00267.warc.gz"}
|
http://physics.stackexchange.com/tags/orbital-motion/new
|
# Tag Info
2
All you need to do is calculate the perigee distance $r_p$ that is the distance of closest approach. Then if $r_p < R_A$ your satellite will crash and burn. Once again we start from the vis-viva equation: $$v^2 = GM\left(\frac{2}{r} - \frac{1}{a} \right) \tag{1}$$ The parameter $a$ is the semi-major axis of the ellipse, and it is related to the ...
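The (truncated) answer computes $r_p$ from the vis-viva equation; as an illustrative sketch with our own numbers, and the simplifying assumption that the measured velocity is purely tangential at the given point:

```python
import math

GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_A = 6.371e6 + 100e3  # hypothetical "crash" radius: surface plus a drag cutoff

def perigee(r, v_t):
    # semi-major axis from vis-viva: v^2 = GM*(2/r - 1/a)
    a = 1.0 / (2.0 / r - v_t**2 / GM)
    # purely tangential velocity => specific angular momentum h = r * v_t,
    # and h^2 = GM * a * (1 - e^2) gives the eccentricity
    h = r * v_t
    e = math.sqrt(max(0.0, 1.0 - h**2 / (GM * a)))
    return a * (1.0 - e)

r, v = 7.0e6, 7000.0   # example state: radius (m) and tangential speed (m/s)
rp = perigee(r, v)
print(rp, rp < R_A)    # the satellite crashes if its perigee dips below R_A
```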
0
Assuming point particles and Newtonian gravity they will collide iff, in an inertial frame based in the centre of mass of the system: angular momentum is zero; total energy measured is less than zero. Where the zero of potential energy is when the particles are infinitely far apart. (1) means that the velocities are radial, (2) means there is a time ...
0
If the particles collide, it means they couldn't escape each other given their initial conditions of mass, locations, and velocities. For example, two masses of any value, positioned at any location, will definitely collide if their initial velocities are zero with regard to some reference grid you've pre-selected. Their initial kinetic energies, zero in ...
1
If you think of it in terms of conservation of momentum and collisions, the simplest version works just the same as tossing a handball at a on-coming freight train. The interaction is elastic, and the ball returns with the same speed it had going in in the center of momentum frame, but the center of momentum frame is moving in the ground frame, so the ball ...
0
You definitely don't need to use General Relativity to answer this question. It depends upon what you mean by "feel". If "feel" means "detectable by sophisticated instruments" then, yes, it can be "felt". But your body is not a very sophisticated detection instrument. According to what I've read elsewhere, the Earth speeds up by $1000$ $m/s$ as it moves ...
1
There are two reasons. Inside the Sun, the force is no longer an inverse-square law. It actually grows linearly with $r$. The second reason is that Goldstein (as well as any other classical mechanics book) is interested in orbits with a non vanishing angular momentum with respect to the center of the Sun. An oscillation along a line passing through this ...
0
There's an exhaustive discussion here.
1
What you need is Kepler's equation, $$M = E - e \sin E$$ where $M$ is a quantity called the mean anomaly, $e$ is the eccentricity of the orbit, and $E$ is called the eccentric anomaly, defined by this diagram where the sun is at $F$ and $C$ is the center of the ellipse (the distance $e$ in the diagram should be $ae$). The quantity $M$ is simply $2\pi t/T$ ...
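Numerically, Kepler's equation is usually inverted for $E$ with a few Newton iterations (a standard approach, not spelled out in the answer above):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    # solve M = E - e*sin(E) for E by Newton's method
    E = M if e < 0.8 else math.pi   # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M = 2.0 * math.pi * 0.25          # a quarter of the period after perihelion
E = eccentric_anomaly(M, 0.3)
print(E)                          # E satisfies E - 0.3*sin(E) = M
```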
2
R. Rankin's answer gives you the general solution when the velocity along a curve is known. If you're interested in elliptical orbits due to gravitational interactions, you can use Kepler's laws. Kepler's second law says that the time required for an object on an elliptical orbit is proportional to the area swept out by a line connecting the orbiting body ...
0
By increasing its radius. Angular momentum is the cross product of radius and velocity: $$\mathbf{L} = \mathbf{r} \times \mathbf{v}$$ It's a trade-off between radius and speed if the angular momentum is constant (which it is for a satellite)
2
Given your instantaneous velocity $\vec{v}(\vec{x})$, and your start and end points, say $p_{1},p_{2}$, just integrate it in space: $$\triangle t=\intop_{p_{1}}^{p_{2}}\left(\frac{d\vec{x}}{dt}\right)^{-1}d\vec{x}$$
0
The charged accelerometer will register a non vanishing acceleration. The reason why the setup you are proposing (interaction via electric charge) gives a physically distinct result from the setup in the answer you linked (interaction via gravity), even though they can be described by the same mathematical force, is because the Equivalence Principle applies ...
0
This goes along with ideas of terraforming planets and giant structures. I heard Susskind in a lecture talk about using quantum zeno machine with a black hole to eternally prevent hawking radiation and to insure the black hole is "eternal." I think that is about the most audacious idea I have heard, for you would have to keep it up for over $10^{70}$ or $10^{... 3 Even if the orbit were a perfect circle, there's some acceleration towards the sun. If there weren't acceleration then the earth would move in a straight line (instead of a circle); but it doesn't move in a straight line therefore there's acceleration. In a sense, the earth doesn't feel the acceleration because it doesn't try to resist it: if you stand on ... 6 There is indeed a nett force on the body owing to the electrostatic attraction / repulsion. Therefore, there is nonzero four acceleration, and the body will have a different orbit from the ones defined by the spacetime geodesics for the metric describing the massive body's neighborhood. From the standpoint of an observer stationary with respect to the ... 0 The orbit (in polar coordinates) of a body under a inverse-square force,$-K/r^2$, is given by $$r(\phi)=\frac{L^2/mK}{1+\sqrt{1+\frac{2L^2E}{mK^2}}\cos\phi},$$ where$E$and$L$are the energy and the angular momentum of the particle. The equation above is just the polar representation of a conic section of conic parameter$L^2/mK$and eccentricity $$\... -3 My answer is more metaphysics than physics. The reason we do not "feel" the acceleration is that the change is within the tolerances of our bodies. That said, I am sure there have been people born who are more attuned to these forces. But for the most part, for most of use, there are so many forces acting on our senses that the acceleration of the earth is ... 48 John Rennie's answer is right from the viewpoint of General Relativity -- but since the question is tagged with Newtonian mechanics, it deserves a Newtonian answer too. 
In the Newtonian framework, I think the best answer to "why don't we experience this force" is that we can't feel forces that apply to our body at all. What we actually experience with our ... 3 John Rennie has answered the question in terms on general relativity, but it can also be answered with Newtonian physics. Your question is very similar to this one: Why does the moon stay with the Earth? and I can refer you to my answer there. In short, the Sun isn't only pulling on the Earth itself, it's pulling on everything on it as well, including us, ... 12 According to the Equivalence Principle a free falling system cannot locally detect a gravitational field. However Earth is a large enough system such that non-local effects turn out to be appreciable. Solar tides are - although small - detectable. So in principle one can experience the Sun's gravitational field even though we are in free fall. What I claim ... 60 We don't feel any acceleration because the Earth and all of us humans on it is in free fall around the Sun. We don't feel the centripetal acceleration any more than the astronauts on the ISS feel the acceleration of the ISS towards the Earth. This happens because of the way general relativity describes motion in gravitational field. The motion of a freely ... 1 Air friction (simply a form of friction) as we observe in our everyday life opposes the motion/state of the body (in this case motion of satellite ). It's true that air friction is responsible for decreasing the speed of the satellite, thereby decreasing the kinetic energy and ultimately the total energy of satellite. But as we know that any form of system ... 2 The answer is simply that not every space-time has a corresponding effective potential in the sense that we have a coordinate x such that \dot{x}=\sqrt{2(E-V_{eff})}. 
But this is true even in Newtonian mechanics, consider a problem with a Lagrangian$$L = \frac{m}{2}(\dot{r}^2 + r^2 \dot{\varphi}^2) - V(\varphi)$$Obviously, p_r\equiv m \dot{r} is ... 2 If you add mass to the Earth, or the "Earth system", it makes not the slightest bit of difference to the orbit unless you also change the Earth's angular momentum around the Sun. That is because the basic dynamical equation controlling the orbit is that the inward gravitational force due to the the Sun is equal to the mass times centripetal acceleration. ... 7 Most asteroids are in an elliptical orbit around the Sun in the inner Solar System, i.e. a region comprising Mercury, Venus, Earth, Mars and the Asteroid Belt. What can happen is that an asteroid's elliptical orbit intersects a planet's orbit and this might gives rise to a collision. Most of times, when an asteroid gets too close to a planet, it has too ... 0 I am unsure how this is the precession of the ellipse and how the precessing ellipse can be proved to give a nearly circular orbit. Will showing that the angular velocity w_0 being nearly constant work? Consider the figure below showing the effective potential (continuous line) of a particle of mass m and angular momentum L under the ... 2 Let me call radial stability the stability of r around r_0, where r_0 is the radius of the circular orbit. The difference between this one and the Lyapunov stability is that the latter looks not only to r but also to the polar angle \theta (for a central force) and their conjugated momenta. So in this sense I would say Lyapunov is stronger. ... -1 Both are unstable (ring and sphere) for the same reason: the potential everywhere inside either one is zero. This is true for gravitational as well as electrical and magnetic forces, which are all inverse square law / central force situations, and it is that pattern which causes the result. The proof requires calculus, but is considered an elementary ... 
2 Juno probe's speed is in relation to what frame of reference? The escape speed of Jupiter is ~59.5 km/s and 150,000 kph ~ 67.1 km/s, so this speed must be in reference to the sun otherwise the spacecraft would not stay in orbit. Jupiter's orbital speed about the sun is ~13.1 km/s, which subtracted from the 67.1 km/s would result in ~54 km/s, thus more ... 1 Since the derivation of the Keppler's first law given in the other answers involves a non-trivial integration I think it is worth to see an easier way. Let \vec p and \vec L are the momentum and angular momentum of the planet, respectively, m its mass, K comes from the gravitational force \vec F=-K\hat r/r^2, and \hat r is the radial unit vector ... -4 The moon is dynamically and almost rigidly (but for librations) tied to the earth by means of an invisible solid rod, as it were. That is why we inhabitants of earth do not get to see the far side of the moon from our base. Solar eclipse occurs when the invisible rod places the moon between earth and sun casting a tiny shadow on earth. Lunar eclipse ... 1 Without seeing the particular example you have in mind it's impossible to be definitive, but without any context, for a probe in the Solar System, I'd assume heliocentric coordinates. Unless it's an orbital velocity around some other body, or an escape velocity from some other body, or of course if it's specifically stated to be in some other frame. 44 The Moon's orbit must be concave toward the Sun. The Moon's orbit with respect to the Sun is always convex. This is easily proven by comparing the minimum possible gravitational acceleration of the Moon toward the Sun (5.7 mm/s2) and the maximum possible gravitational acceleration of the Moon toward the Earth (3.1 mm/s2). The acceleration vector, and ... 45 Incorrect Path I'm curious as to what does the moon's orbit around the sun looks like? One might think the orbit (in the sun's rest frame) follows the path of an epitrochoid. 
A (very) over exaggerated view of this motion (for unrealistic parameters, thus, not an accurate representation) can be seen in the following animation: Note that if you change ... 16 Before answering let me mention that there is a terrific free applet showing the orbits, including the velocity vectors of the system Sun/Earth/Moon: https://phet.colorado.edu/en/simulation/gravity-and-orbits It is in java so pretty easy to download and use. The moon's orbit must be concave toward the sun. The Moon' orbit around the Sun is a ... 9 The orbital speed of the earth around the sun is about 30 km/s, whereas the orbital speed of the moon around the earth is about 1 km/s. From this it follows that at no point of its path around the sun the moon will ever show a backwards motion. The path is similar to the trajectory of a point (moon) on the perimeter of a (somewhat sliding) wheel rolling ... 0 What you just did was to find a condition for attractive power-law forces to have stable orbits where stable means they remain bounded when perturbed around the circular orbit. You got the correct result. The Bertrand's Theorem though says something different: the only forces whose bounded orbits imply closed orbits are the Hooke's law and the attractive ... 2 This cannot work in a 2-body scenario. In a 3-body scenario, yes, and this can even be shown from real world examples. Lagrangian orbits are reasonably well understood, where a 3rd body orbits a location defined by the masses and orbits of the first 2 bodies. the L4 and L5 locations follow or lead the 2nd body, but have stable orbits around them that do ... 1 This turns out to have a really boring answer. We can find the two circular orbits by finding the maxima and minima of the effective potential, and we get:$$ r = \frac{L^2}{2M}\left(1 \pm \sqrt{1 - \frac{12M^2}{L^2}}\right) \tag{1}$$where the$+$gives the outer stable orbit and the$-\$ gives the inner unstable orbit. Note that both orbits exist only ...
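One of the answers above invokes Kepler's equation $M = E - e \sin E$, which has no closed-form solution for $E$. A minimal Newton-iteration sketch (the function name, starting guess, and tolerance are my own choices, not taken from the answer):

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E via Newton's method."""
    E = M if e < 0.8 else math.pi  # common starting guess for high eccentricity
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Example: mean anomaly M = pi/2 (a quarter of the period) on an e = 0.5 orbit.
E = eccentric_anomaly(math.pi / 2, 0.5)
print(E)  # E satisfies E - 0.5*sin(E) = pi/2
```

For a circular orbit ($e = 0$) this reduces to $E = M$, as expected from the equation.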
Top 50 recent answers are included
https://www.appropedia.org/HSU_Wellness_Center-_Bicycle_Crank_Kinetic_Charger
|
### The Objective
The objective is to design a user-friendly bicycle powered generator to charge electronic devices while encouraging body movement in the workplace. Ideally, the device will fit under the office desk, be made of repurposed materials, and will charge most handheld electronic cell phones, ipads, and laptops.
### Status
This project is currently incomplete and in progress.
### Cost and Materials
| Material | Quantity | Cost |
|---|---|---|
| Voltage Regulator | 1 | $12.00 |
| Bicycle Crank | 1 | Donated (Thank you, Arcata Bicycle Library) |
| DC Motor | 1 | Donated (Thank you, HSU PowerSave Green Campus) |
| Heat Sink | 1 | Donated (Thank you, HSU Sculpture Lab) |
| BMX Bike Chain | 1 | Donated (Thank you, Arcata Bicycle Library) |
| Blocking Diode | 1 | $10.00 |
| Wire Connectors | 2 | $4.00 |
| Chain Lubricant | 1 | $3.50 |
| Fuse Holder | 1 | $2.50 |
| Scrap Wood (seek recycled pieces) | 4 | Donated (Thank you, Marty Reed!) |
| Square Bolt + Washer + Nut | 1 | $2.00 |
| Fuse | 1 | $1.50 |
| 12V Cigarette Lighter Socket | 1 | $10.00 |
| Car Charger/Inverter | 1 | $10.00 |
| 12V Battery | 1 | $50.00 |
| **Total Cost** | | **$105.50** |
### Final Design
This Pedal Powered Generator is designed to charge cell phones, iPads, laptops, and other handheld electronics while exercising, specifically in the workplace environment.
**Overview.** This design is built using a bicycle crank, a DC permanent magnet motor, a blocking diode, a (15 A) fuse, a voltage regulator, a 12 V battery, a 12 V cigarette lighter socket, and a cigarette lighter inverter. By using a cigarette lighter socket, the user can plug in a wide variety of car charging accessories, which can adapt to almost any handheld electronic device. Since the handheld electronics charged by this system use AC, the power stored in the 12 V battery (DC) must be converted to AC. Many systems hard-wire a separate inverter between the battery and the load, but this system does not need one because the plug-in cigarette lighter inverter performs that conversion, just as it would from a car battery.
**Mechanical System.** The user pedals the bicycle crank, which is connected to the geared motor by a bicycle chain, generating electricity to charge the 12 V battery. The bicycle crank carries a 46-tooth gear, and the motor is fitted with a 9-tooth bicycle gear. This ratio ensures that the motor makes more rotations than the pedal crank.
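The 46:9 sprocket ratio above can be sanity-checked with a line of arithmetic; the 60 RPM cadence below is an assumed figure for illustration, not a measurement from this project:

```python
# Drivetrain ratio for the pedal generator: crank gear 46 teeth, motor gear 9 teeth.
crank_teeth, motor_teeth = 46, 9
ratio = crank_teeth / motor_teeth          # motor revolutions per crank revolution
cadence_rpm = 60                           # assumed pedaling cadence
motor_rpm = cadence_rpm * ratio
print(round(ratio, 2))   # 5.11
print(round(motor_rpm))  # 307
```

So the motor spins roughly five times faster than the pedals, which is the point of the gearing: DC generators produce more voltage at higher shaft speed.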
**Electrical System.** The electronics used in the final design are a 130 V DC permanent magnet motor, a 15 A fuse and fuse holder, a 40 A blocking diode, and a voltage regulator, all wired in series to a 12 V deep-cycle battery using 12-gauge copper wire. Although this motor is more powerful than necessary for this design, it was free, and it will potentially produce the same output at lower RPMs than a less powerful motor would at the same speed.
The voltage regulator came from an old car stereo system and is crucial in the design to keep the voltage entering the battery at appropriate levels. Deep-cycle 12 V batteries like to be charged at a constant input of 12-14 volts.

The 40 A blocking diode ensures that electricity flows from the generator to the battery, rather than from the battery back into the motor when the operator stops pedaling.

In case of electrical mishaps, the 15 A fuse will likely blow. This ensures the safety of the system and the operator.
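The 15 A fuse described above implies a ceiling on system power at the stated 12-14 V charging range. This is simple P = VI arithmetic, not a measured specification of this build:

```python
# Maximum power the 15 A fuse allows through the charging circuit (P = V * I),
# evaluated at the 12 V and 14 V ends of the stated charging-voltage range.
fuse_amps = 15
power_12v = 12 * fuse_amps   # 180 W
power_14v = 14 * fuse_amps   # 210 W
print(power_12v, power_14v)
```

Anything drawing more than roughly 180-210 W through the circuit should blow the fuse, which comfortably covers phone, tablet, and most laptop chargers.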
### Research and Development
Fig 1: R&D, testing the angle of the crank for stationary pedaling.
Fig 2: R&D, testing the bicycle crank with a chain and sprocket (motor to be attached).
### Resources
Page data: Jenna Bader (2013). "HSU Wellness Center- Bicycle Crank Kinetic Charger". Appropedia. Licensed CC-BY-SA-4.0. Retrieved August 14, 2022.
https://wumbo.net/symbol/negation/
|
Negation Symbol
The negation symbol is used in math to represent the logical negation operator.
LaTeX command: `\neg`
https://ageconsearch.umn.edu/record/136498
|
### Abstract
The coastal village of Kampong Kuantan in the state of Selangor, Malaysia is noted for its fireflies (Pteroptyx tener). The fireflies, which flash every non-rainy night, are one of the world's most spectacular insect shows. The local operators formulated strict but informal rules to ensure the sustainability of the firefly trade, including a prohibition on the use of boats with outboard engines to view the fireflies. However, the natural resource is now under threat from potential development projects and errant boat operators with outboard engines. Boats with outboard engines create air and noise pollution: the air pollution distracts the fireflies, while the noise pollution affects the serenity of the river. Development projects planned in the vicinity of the recreational area could lead to irreversible catastrophe for the natural resource. This study attempts to assess the recreational value of the fireflies using the Travel Cost Method (TCM) and the Contingent Valuation Method (CVM). Consumer's surplus per recreationist per trip was found to range between RM62.00 (US$25.00) and RM120 (US$48.00). This yields a gross economic value in the range of RM1.1 million (US$440,000) to RM1.68 million (US$672,000) annually. The results indicate that the economic value of firefly recreation is highly substantial. Hence the recreational function provided by wetland or forest resources may be an important consideration for natural resource policy and management in Malaysia.
https://www.physicsforums.com/threads/variation-of-fine-structure-constant-and-spacetime.434635/
|
# Variation of fine structure constant and spacetime?
Hi all,
I'm going to ask a naive question - hope that's ok. There's been a lot of recent discussion of the results from Webb et al. which indicate that the fine structure constant varies spatially. I realize the results are very controversial - I'm wondering, hypothetically, if these results were shown to be correct:
Would this have implications for our view of spacetime? I.e. would the 'structure' of spacetime vary with location? E.g., would we still work with a smooth 3+1 manifold? Would the geometry of the manifold change?
Sorry, I realize that the above is probably not very coherent - it's a question from a novice ;-)
Thanks.
J.
http://math.stackexchange.com/questions/207384/how-many-numbers-will-i-guess-correctly/207387
|
# How many numbers will I guess correctly?
I play a guessing game. In this game, there are 100 equally-sized, folded-up cards randomly dispersed in a bag. The cards are labeled 1 through 100. I draw out the cards one by one without replacement and try to guess the number on the card every time I draw. On every guess, I open up the card and see the actual number on it to see if I am correct.
I start by guessing a random number from 1 to 100. I will keep guessing randomly, but I will not guess a number that has already been revealed since that would be silly. How many numbers will I guess correctly?
I wonder if there is a difference between this problem and the problem in which I do not actually get feedback on what specific number I just drew. If there is no difference, then the expected value of correct guesses should be $\sum_{n = 1}^{100} \frac{n^2}{(n + 1)!}$, right?
When you make your $k$-th guess, you’ve seen $k-1$ numbers, so the probability of guessing correctly is $\frac1{101-k}$. This is also the expected value of the $k$-th guess, counting $1$ for a correct guess and $0$ for an incorrect guess. Thus, you want
$$\sum_{k=1}^{100}\frac1{101-k}=\sum_{k=1}^{100}\frac1k=H_{100}\;,$$
the $100$-th harmonic number.
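The expected count $H_{100}\approx 5.19$ is easy to check by simulation. A sketch, assuming (as in the question) that the guesser picks uniformly at random among the numbers not yet revealed:

```python
import random
from fractions import Fraction

def play(n=100, rng=random):
    """One round of the guessing game with feedback: a revealed
    number is never guessed again."""
    deck = list(range(1, n + 1))
    rng.shuffle(deck)
    remaining = list(range(1, n + 1))
    correct = 0
    for card in deck:
        guess = rng.choice(remaining)
        if guess == card:
            correct += 1
        remaining.remove(card)   # feedback: this number is now known
    return correct

H100 = sum(Fraction(1, k) for k in range(1, 101))  # exact 100th harmonic number
random.seed(0)
avg = sum(play() for _ in range(10_000)) / 10_000
print(float(H100))  # 5.18737...
print(avg)          # close to H_100
```

The simulated average lands within sampling error of $H_{100}$, matching the answer above.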
https://en.wikiversity.org/wiki/Photon
|
# Photon
## Photon source
| Photon source | Example | Photon energy |
|---|---|---|
| Electromagnetic wave radiation | Coil of N turns | ${\displaystyle E=hf}$ |
| Sun | Celestial planet | ${\displaystyle E=hf}$ |
| Electric light | Light bulb | ${\displaystyle E=hf}$ |
Properties of photons:
- Photons always move with the speed of light (because photons are waves).
- Photons are electrically neutral (because photons are waves).
- Photons have no mass, but they have energy E = hf = hc/λ. Here h = 6.626×10^-34 J·s is a universal constant called Planck's constant. ... (because photons are waves).
- Photons can be created and destroyed (because photons are waves).
Definition of particle. The main characteristic of a particle: a particle as a source exists if and only if it repeatedly speeds up and slows down its movement in the source along an ellipse (when it blinks).

A particle as a source creates in the transmission medium an electromagnetic wave that spreads in all directions with the velocity c/n, regardless of the source's movement, where n is the refractive index of the transmission medium. In other words, the particle, which is the source, cannot become the transmission medium and remain in it.

Definition of waves. The main characteristic of waves is energy transfer through a transmission medium, with no transfer of substance (= of real particles) from the source to the transmission medium. A wave exists if and only if there is no source. In the case of electromagnetic waves, see 2.1.3 The electromagnetic field, Maxwell's equations, p. 28[3]
Kinetic energy $T_{kin\,id} = mc^2\left[\ln|1-v/c| + \frac{v/c}{1-v/c}\right]$ in the direction of motion of a particle (charge), according to Newton.

$T_{kin\,ad} = mc^2\left[\ln|1+v/c| - \frac{v/c}{1+v/c}\right]$ against the direction of motion of a particle in the transmission medium, according to Maxwell's electromagnetic wave energy $= E = hf$.
"Photon" mass = 0 kg
.
Electron creates waves in transmision medium with energy Tkin ad = mc^2[ln |1+v/c|- (v/c)/(1+v/c)]
The photons correspond the kinetic energy of the electrons against the direction of their movement
Corrected third Newton's law of motion:

The reaction creates in the transmission medium electromagnetic waves, as unstable "particles" - neutrinos νe, νμ, ντ, mesons π0, π+, π-, η, K and gamma rays (= waves of extremely high frequency > 10^19 Hz) - against the direction of motion of stable particles (e-, p+, n0, D, He-3, alpha). The main characteristic of the waves is energy transfer through a transmission medium, with no transfer of substance (= of real particles) from the source to the transmission medium. A wave exists if and only if there is no source.
A photon is a wave that the electron (a flashing electron) creates in the transmission medium, in accordance with the corrected third Newton's law of motion.

Wave-particle duality is kinetic energy against and in the direction of motion. Waves have no mass (kg). Photons also have no mass; they have only energy (J): the energy of the electromagnetic field, energy in the transmission medium.
Photon for spectral line Hα: 656.281 ± 1.4 nm.

Two minimal geodesics lying against each other = 1 ellipse.

There are very many minimal geodesics = ellipses (4.56794×10^14) between the north and south poles of a globe (between a higher Bohr energy level and a lower Bohr energy level, as the poles of the globe).

An electron radiates electromagnetic waves if and only if it moves with acceleration from a higher Bohr energy level to a lower one. In the atom, as a source of electromagnetic waves, it radiates when it moves from afnucleus to perinucleus along the ellipse. The electron cloud is 4.56794×10^14 ellipses per second (for spectral line Hα).
The kinetic energy of a charge moving at velocity $v$ has two different values:

Kinetic energy against the direction of motion, as a wave: $T_{kin\,ad} = mc^2[\ln|1+v/c| - (v/c)/(1+v/c)]$

Kinetic energy in the direction of motion, as a particle: $T_{kin\,id} = mc^2[\ln|1-v/c| + (v/c)/(1-v/c)]$

The electron has its own mass, 9.1×10^-31 kg. The electron creates waves in the transmission medium with kinetic energy against the direction of motion, as a wave in the transmission medium (as the energy of the electromagnetic field): $T_{kin\,ad} = mc^2[\ln|1+v/c| - (v/c)/(1+v/c)]$, with m = 9.1×10^-31 kg.

Blinking 4.56794×10^14 times per second is the "photon" for spectral line Hα.
"Photon" mass = 0 kg
.
Electron creates waves in transmision medium with energy Tkin ad = mc^2[ln |1+v/c|- (v/c)/(1+v/c)]
We have a bijection between the kinetic energy of real particles against the direction of motion as a wave, $T_{kin\,ad} = mc^2[\ln|1+v/c| - (v/c)/(1+v/c)]$, and a mass of "waves" = electromagnetic waves, as unstable "particles": neutrinos νe, νμ, ντ, mesons π0, π+, π-, η, K and gamma rays (= waves of extremely high frequency > 10^19 Hz).
## Photon's chracteristics
A photon is a quantum of energy that travels as an electromagnetic wave at the speed of light:
${\displaystyle E=hf=h{\frac {\omega }{2\pi }}=\hbar \omega }$
${\displaystyle p={\frac {h}{\lambda }}=h{\frac {k}{2\pi }}=\hbar k}$
${\displaystyle h=p\lambda =2\pi {\frac {E}{\omega }}=2\pi {\frac {p}{k}}}$
${\displaystyle \omega ={\frac {E}{\hbar }}}$
${\displaystyle k={\frac {p}{\hbar }}}$
${\displaystyle \hbar ={\frac {E}{\omega }}={\frac {p}{k}}={\frac {h}{2\pi }}}$
With
${\displaystyle E}$ - photon energy
${\displaystyle f}$ - frequency of the photon's wave
${\displaystyle \omega }$ - angular frequency of the photon's wave
${\displaystyle h}$ - Planck's constant
${\displaystyle p}$ - photon momentum
${\displaystyle \lambda }$ - wavelength of the photon's wave
${\displaystyle k}$ - wavenumber of the photon's wave
${\displaystyle \hbar }$ - reduced Planck's constant
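As a quick numerical check of the relations above, here is a short sketch evaluating $f = c/\lambda$ and $E = hf$ for the Hα line (656.281 nm) discussed elsewhere in this article; the constants are CODATA values and the variable names are my own:

```python
h = 6.62607015e-34       # Planck's constant, J*s
c = 299_792_458          # speed of light, m/s
wavelength = 656.281e-9  # H-alpha spectral line, m

f = c / wavelength       # frequency, Hz
E = h * f                # photon energy, J   (E = hf)
p = h / wavelength       # photon momentum, kg*m/s   (p = h/lambda)

print(f"{f:.4e} Hz")                  # ~4.568e+14 Hz
print(f"{E / 1.602176634e-19:.3f} eV")  # ~1.889 eV
```

The computed frequency, about 4.568×10^14 Hz, is consistent with the value this article quotes as the per-second blink count for Hα.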
### Photon's State
The photon is observed to exist in 2 states.
A radiant photon exists at frequency ${\displaystyle f=f_{o}}$ and obeys the following identities
${\displaystyle E=hf_{o}=h{\frac {\omega _{o}}{2\pi }}=\hbar \omega _{o}}$
${\displaystyle p={\frac {h}{\lambda _{o}}}=h{\frac {k}{2\pi }}=\hbar k}$
${\displaystyle \hbar ={\frac {E}{\omega _{o}}}={\frac {p}{k}}={\frac {h}{2\pi }}}$
A non-radiant photon exists at frequency ${\displaystyle f\neq f_{o}}$
${\displaystyle E=hf=h{\frac {\omega }{2\pi }}=\hbar \omega }$
${\displaystyle p={\frac {h}{\lambda }}=h{\frac {k}{2\pi }}=\hbar k}$
${\displaystyle \hbar ={\frac {E}{\omega }}={\frac {p}{k}}={\frac {h}{2\pi }}}$
Uncertainty of the photon's state: the photon exists in 2 states at a specific frequency. The chance of finding one of its states (the success rate of finding the photon) is 1/2, where $h = p\lambda$; $h$ and $p$ do not change, only the wavelength changes with frequency. Hence, the uncertainty principle:

A photon cannot exist in 2 states at the same time.

Mathematically, the uncertainty principle can be expressed as
${\displaystyle \Delta p\Delta \lambda ={\frac {1}{2}}{\frac {h}{2\pi }}={\frac {h}{4\pi }}={\frac {\hbar }{2}}}$
### Photon's Quantization
${\displaystyle E=hf}$
${\displaystyle h=p\lambda }$
### Wave Particle Duality
Wave-like: ${\displaystyle \lambda ={\frac {h}{p}}}$
Particle-like: ${\displaystyle p={\frac {h}{\lambda }}}$
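To put numbers on the duality relations just stated, here is a minimal sketch of the de Broglie wavelength $\lambda = h/p$ for an electron at the 0.003c speed this article associates with the Hα line. The classical momentum $p = mv$ is used as an approximation (fine at this speed), and the variable names are my own:

```python
h = 6.62607015e-34       # Planck's constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg
c = 299_792_458          # speed of light, m/s

v = 0.003 * c            # ~9e5 m/s, safely non-relativistic
p = m_e * v              # classical momentum approximation
lam = h / p              # de Broglie wavelength (lambda = h/p)
print(f"{lam * 1e9:.3f} nm")  # ~0.809 nm
```

A sub-nanometre wavelength is why wave behaviour of electrons shows up only at atomic scales, as the paragraph below notes for macroscopic particles.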
Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).[1] This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected.[2] Wave-particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Albert Einstein, who, in his search for a Unified Field Theory, did not accept wave-particle duality, wrote:[4] "This double nature of radiation (and of material corpuscles)...has been interpreted by quantum mechanics in an ingenious and amazingly successful fashion. This interpretation...appears to me as only a temporary way out..." The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather that a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which will direct them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community.[5]
If in this idea we replace the "quantum potential" with the "electromagnetic potential" (or with "interference of electromagnetic waves"), the idea would be accepted by a large majority of physicists.

In 1900 Max Planck hypothesized that the frequency of light emitted by a black body depended on the frequency of the oscillator that emitted it, and that the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). Planck's theoretical oscillator can be replaced with an electron circulating along an ellipse around the nucleus of an atom between two Bohr energy levels, the electron moving alternately with acceleration and deceleration. This electron really blinks. When an electron moves from the speed of the higher Bohr energy level (at the afnucleus) to the lower one (towards the perinucleus), it radiates spectral lines of a certain thickness (real blinks); for example, the spectral line Hα at 656.281 ± 1.4 nm. From the thickness of the spectral lines we can easily identify the smallest (at the afnucleus) and largest (at the perinucleus) speed of the electron around the nucleus of the atom, taking into account the kinetic energy of the electron in the direction of motion and against the motion, given that by the Doppler principle the wavelength is lowest (frequency highest) in the direction of motion of the electron, and highest (frequency lowest) against the direction of motion.
The kinetic energy of a charge moving at velocity v has two different values:

Kinetic energy against the direction of motion (as a wave):
${\displaystyle T_{\text{kin ad}}=mc^{2}\left[\ln \left|1+{\frac {v}{c}}\right|-{\frac {v/c}{1+v/c}}\right]}$

Kinetic energy in the direction of motion (as a particle):
${\displaystyle T_{\text{kin id}}=mc^{2}\left[\ln \left|1-{\frac {v}{c}}\right|+{\frac {v/c}{1-v/c}}\right]}$
An electron moving at a speed v_e = 0.003c creates the spectral line Hα. Accurate electron speeds are given in the table in this article, as confirmation of the Doppler principle in hydrogen for the Balmer line Hα. The accompanying activity of the reaction to the movement of stable particles in the transmission medium is waves.
Stable electrons moving at speeds of (0.99c – c) create the leptons (μ−, τ−), the neutrinos (νe, νμ, ντ) and the bosons W+, W−, Z (= β electrons). Weak interactions are caused by stable electrons, which create leptons (μ−, τ−) (= particles = electrons at different speeds), neutrinos νe, νμ, ντ (= waves), bosons W+, W−, Z (= particles = β electrons moving at nearly the speed of light) and gamma rays (= waves of extremely high frequency, >10^19 Hz). Stable particles (p+, n0, D, He-3, α) moving at speeds of (0.3c – 0.99c) create the baryons and mesons.

The strong interactions are caused by stable particles (p+, n0, D, He-3, α), which create the baryons and mesons.

All movements in physics are based on the principle of action and reaction and on the velocity of the stable particles (e−, p+, n0, D, He-3, α). Action, as a motion of stable particles (e−, p+, n0, D, He-3, α), is characterized by alternating acceleration and deceleration of the motion in the source, along an ellipse or quasi-ellipse (eccentricity e → 0).

Stable particles of various speeds (the leptons μ−, τ−, the baryons and mesons) and the bosons W+, W−, Z (β electrons) are characterized by the kinetic energy in the direction of motion, Tkin id = mc^2[ln|1 − v/c| + (v/c)/(1 − v/c)].

The reaction creates in the transmission medium electromagnetic waves, as unstable "particles": the neutrinos νe, νμ, ντ, the mesons π0, π+, π−, η, K and gamma rays (f > 10^19 Hz), characterized by the kinetic energy against the direction of motion, Tkin ad = mc^2[ln|1 + v/c| − (v/c)/(1 + v/c)]. The accompanying activity of the reaction to the movement of stable particles in the transmission medium is waves, or "unstable particles", i.e. neutrinos and mesons.
## Photon's effects
A photon interacts with matter to produce heat transfer in three forms: heat conduction, heat convection and heat radiation.
| Heat transfer | Explanation | Mathematical formulas |
| --- | --- | --- |
| Heat conduction | matter absorbs the photon's energy, creating a change in the matter's temperature | ${\displaystyle \Delta T=T_{1}-T_{o}}$, ${\displaystyle E=mC\Delta T}$ |
| Heat convection | matter conducts the photon's energy to the maximum at the threshold frequency | ${\displaystyle f_{o}={\frac {C}{\lambda _{o}}}}$, ${\displaystyle E=hf_{o}}$ |
| Heat radiation | matter ejects an electron off its atom at a frequency greater than the threshold frequency | ${\displaystyle f={\frac {C}{\lambda }}}$, ${\displaystyle E=hf}$, provided that f > f_o |
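The conduction-row formula E = mCΔT can be exercised numerically. A minimal sketch; the mass and the specific heat C (here water's) are illustrative values, not taken from the table:

```python
# E = m*C*dT: energy a body absorbs when its temperature rises by dT.
m = 1.0                  # kg of water (illustrative)
C = 4186.0               # specific heat of water, J/(kg*K)
T0, T1 = 293.15, 313.15  # heat from 20 C to 40 C
dT = T1 - T0
E = m * C * dT
print(f"E = {E:.0f} J")  # → E = 83720 J
```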
# Formula For Calculating Standard Error Of The Mean
All right, so here, just visually you can tell that when n was larger, the standard deviation here is smaller. Here we would take 9.3 -- so let me draw a little line here. Let's see if it conforms to our formula.
Sampling from a distribution with a small standard deviation: the second data set consists of the age at first marriage of 5,534 US women who responded to the National Survey of …. As will be shown, the standard error is the standard deviation of the sampling distribution. Because the 5,534 women are the entire population, 23.44 years is the population mean, μ, and 3.56 years is the population standard deviation, σ.
## How To Calculate Standard Error Of The Mean In Excel
What's your standard deviation going to be? The mean of our sampling distribution of the sample mean is going to be 5. The standard error of the mean (SEM) (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples drawn from the population.
So here what we're saying is this is the variance of our sample mean, and that this is going to be the true distribution. The standard deviation of the age for the 16 runners is 10.23. Two data sets will be helpful to illustrate the concept of a sampling distribution and its use to calculate the standard error.
The formula shows that the larger the sample size, the smaller the standard error of the mean. I'm going to remember these. n equal to 10 is not going to be a perfect normal distribution, but it's going to be close.
In each of these scenarios, a sample of observations is drawn from a large population. For illustration, the graph below shows the distribution of the sample means for 20,000 samples, where each sample is of size n = 16.
## How To Calculate Standard Error Of Estimate
So as you can see, what we got experimentally was almost exactly -- and this was after 10,000 trials -- what you would expect. And of course the mean -- so this has a mean -- this right here, we can just get our notation right: this is the mean of the sampling distribution of the sample mean.
The ages in that sample were 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55. If you know the variance, you can figure out the standard deviation. Compare the true standard error of the mean to the standard error estimated using this sample. So I have this on my other screen so I can remember those numbers.
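The quantity discussed throughout, the standard error of the mean s/√n, takes only a few lines to compute. A minimal Python sketch using the 16 runners' ages quoted above (their sample standard deviation comes out to the 10.23 mentioned in the text):

```python
import math

def standard_error_of_mean(sample):
    """SEM = s / sqrt(n), where s is the sample standard deviation (ddof = 1)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return s / math.sqrt(n)

# The ages of the 16 runners quoted in the text (sample std dev ~10.23):
ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]
print(round(standard_error_of_mean(ages), 3))   # → 2.558
```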
# what is the reference frame of the jacobian computed by moveit
When using RobotState.getJacobian() to compute the Jacobian matrix, in which frame is the result expressed? Is it the same as what we get from RobotModel.getModelFrame()? And if I get "world" from RobotModel.getModelFrame() but want to calculate the Jacobian with respect to the link_0 of my robot (which is different from the world link), what should I do?
I don't know if you have solved your issue yet, but I think I may have found a solution. I was working on the same problem myself and found that the getJacobian() function returns the Jacobian in the base frame of the JointModelGroup that you specified in the request, not world as I originally expected. If you want to rotate the Jacobian into a different reference frame, here is an example of how you might do it. I have not verified this code, but it looks like it should work. Note that getGlobalLinkTransform will not work if the base frame of reference of your group is not the same as the global frame, which was the issue I ran into with my robot. If the base frame is different from the global frame, you can use getGlobalLinkTransform to get the transforms for both the base link and the EE link, then use the two to compute the relative transform between base and tip. Then use this relative transform to rotate the Jacobian into the EE frame.
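The rotation step the answer describes amounts to applying the base-to-target rotation to both the linear (top three) and angular (bottom three) rows of the 6×N geometric Jacobian. The NumPy sketch below illustrates that algebra only, not the MoveIt API itself; the rotation matrix R is a stand-in for whatever transform you obtain from the getGlobalLinkTransform calls discussed above:

```python
import numpy as np

def rotate_jacobian(J, R):
    """Re-express a 6xN geometric Jacobian in another frame.

    J -- 6xN Jacobian (rows 0-2 linear, rows 3-5 angular) in frame A.
    R -- 3x3 rotation matrix taking frame-A vectors into frame B.
    """
    T = np.zeros((6, 6))
    T[:3, :3] = R   # rotate the linear-velocity rows
    T[3:, 3:] = R   # rotate the angular-velocity rows
    return T @ J

# Illustrative check: a 90-degree rotation about z maps x-velocities to y.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
J = np.zeros((6, 2))
J[0, 0] = 1.0   # joint 0 produces pure x translation in frame A
J[5, 1] = 1.0   # joint 1 produces pure z rotation
J_b = rotate_jacobian(J, Rz)
print(J_b[1, 0], J_b[5, 1])   # → 1.0 1.0
```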
## Hoisted from the Archives: How I Learned to Stop Neoclassicizing and Love the Liquidity Trap: Peccavi Nimis et Mea Maxima Culpa Department
July 5, 2011: Bloomberg View: There is only one real law of economics: the law of supply and demand. If the quantity supplied goes up, the price goes down.
Back in the third quarter of 2008, the public held about $5.3 trillion of U.S. Treasury bills, notes and bonds. As the recession hit, tax revenue plummeted, and government spending rose, that total reached$9.4 trillion by mid-2011.
We’re on target to have $10.7 trillion outstanding by mid-2012 -- doubling the Treasury debt held by the public in just four years. Supply and demand tells us that a steep rise in Treasury borrowings should produce a commensurate fall in Treasury bond prices and thus higher interest rates -- and that increase should crowd out other forms of interest-sensitive spending, slowing productivity growth. Yet the market has swallowed all these issues without so much as a burp. By all accounts, it’s smacking its lips in anticipation of the next tranches.

Treasury Demand

In the years of the Clinton budget surpluses -- remember those? -- the U.S. government was repaying $60 billion of debt each quarter. The Bush administration worked hard to make that surplus evaporate. It succeeded.
From 2002 to 2007, the Treasury issued, on average, $70 billion of debt per quarter. Like many watching this shift, I concluded that this expanded supply would exert substantial pressure on interest rates to rise.
The demand for Treasuries was inordinately high, in part because the supply of alternatives was low. Lacking confidence, corporate executives held back investment, reducing private issues of bonds. In addition, China and other emerging economies, eager to keep their currency values low, directed dollars earned from exports into U.S. Treasury debt. Reinforcing this demand, wealthy individuals around the world purchased Treasuries as a hedge.
Thus by late 2007, the 10-year U.S. Treasury rate was exactly where it had been when the Clinton surpluses ended at the close of 2001. “How long could this go on?” we wondered. Eventually the market’s appetite for Treasury bonds at high prices and low interest rates had to reach its limit, right? Supply and demand isn’t just a good idea -- it’s the law.
Discovering the Limit
At the end of 2008, as the economy collapsed and the pace of net Treasury debt increases quintupled, it seemed we were about to discover that limit. I presumed we had a little time for expansionary fiscal policy to boost the economy -- a year, maybe 18 months -- before the bond-market vigilantes would arrive. They would demand higher interest rates on Treasury bonds, which would begin seriously crowding out the benefits of fiscal stimulus. The U.S. government would have to react, pivoting from fighting joblessness, via deficit spending, to reassuring the bond market via long-run tax increases and spending cuts to Medicare and Medicaid.
But it didn’t happen in 2009. It didn’t happen in 2010. And it isn’t happening in 2011. There are no signs from asset prices that the market is betting heavily that it will happen in 2012. Looking at the yield curve, it appears the market intends to swallow every single bond that the Treasury will issue in the foreseeable future -- and at high prices. The prices of inflation-protected bonds suggest that the market expects the new Treasury issues to be devoured without any acceleration in inflation.
Keynes's Disciple
Although I worked for three years in the Clinton Treasury Department, and am a card-carrying member of the economist guild, I predicted none of this. Like most of my peers, I was wrong. Yet the most interesting thing is that I could have -- should have -- been right. I had read economist John Hicks; I just didn’t quite believe him.
Hicks, one of the clever young Brits dotting i’s and crossing t’s in the writings of John Maynard Keynes in the 1930s, was responsible for the workhorse formulation of Keynesian economics -- the IS-LM model -- that has been the bane of many an intermediate macroeconomics student. It was his version of the IS-LM model that formalized and elevated a key insight: that interest rates paid by creditworthy governments would remain low after a financial crisis. This formulation holds even in the face of enormous budget deficits that greatly expand the supply of government bonds.
A financial crisis initiates a sudden flight to safety among bondholders -- widening interest-rate spreads, diminishing the private sector’s desire to sell bonds to raise capital and encouraging individuals to save more and consume less as they, too, hunker down. Thus bond prices rise, and interest rates drop. As rates fall, firms see that they can get capital on attractive terms and so issue more bonds; households see the low interest rate earned on their savings and lose some of their desire to save. The market heads toward equilibrium.
Safeguarding Wealth
But something else happens on the path to equilibrium. The decline in interest rates and the rise in savings are accompanied by an increased desire among businesses and households to safeguard more of their wealth in cash. As a result, the speed with which cash turns over in the economy, the velocity of money, falls. And as the velocity of money falls, total spending falls, workers are fired, and their savings evaporate with their incomes.
Thus the equilibrium turns negative, with high unemployment and low capacity utilization.
In responding to a small financial disruption, the Federal Reserve can inject more money into the economy by buying bonds for cash, increasing the amount of cash so that even at the lower velocity of money we retain the same volume of spending. This eases the decline in interest rates, spending, employment and production into a decline in interest rates alone.
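The mechanism can be put in quantity-theory terms (M × V = nominal spending, a standard identity the post leaves implicit in its velocity argument). A toy numeric sketch, with purely illustrative figures:

```python
# Toy illustration of the velocity mechanism: nominal spending = M * V.
# If a flight to cash cuts velocity V, spending falls unless the money
# stock M expands to compensate.  All figures are illustrative, not data.
M, V = 2.0, 5.0                      # money stock and velocity
spending_before = M * V              # 10.0

V_crisis = 4.0                       # velocity falls in the crisis
spending_after = M * V_crisis        # 8.0: the slump
M_needed = spending_before / V_crisis
print(M_needed)                      # → 2.5  (money stock that restores spending)
```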
Little Difference
But when rates become so low that there’s little difference between cash and short-term government bonds, open-market operations cease having an effect; they simply swap one zero- yielding government asset for another, with their hunger to hold more safe, liquid assets unsatisfied.
This is the liquidity trap.
In this situation we need deficit spending. The government spends and borrows, creating more of the safe, cashlike assets that private investors want. As these bonds hit the market, people who otherwise would have socked their money away in cash -- diminishing monetary velocity and slowing spending -- buy bonds instead. A large, timely government deficit thus short-circuits the adjustment mechanism, avoiding the collapse in monetary velocity.
Hicks’s conclusion: As long as output remains depressed and there is slack in the economy, printing more bonds will have negligible effect in increasing interest rates.
Special Case
I had read Hicks. I even knew Hicks. But I thought that his era, the Great Depression, had passed. Sitting in my first graduate economics class in 1980, I listened to Marty Feldstein and Olivier Blanchard -- two of the smartest humans I am ever likely to see -- assure me that Hicks’s liquidity trap was a very special case, into which the economy was unlikely to wedge itself again. Yet it did.
On my shelf is a slim, turn-of-the-millennium volume by Paul Krugman titled “The Return of Depression Economics.” In it he argued that we mainstream economists had been too quick to ditch the insights of Hicks -- and of economists Walter Bagehot and Hyman Minsky. Krugman warned that their analysis was still relevant, and that if we dismissed it we would be sorry.
I am sorry.
# Calc II - Alternating Series Test/Limits
1. Aug 8, 2014
### jimbit
Hello PF,
I've got a homework question I'm having some trouble with regarding series, particularily alternating series.
The question asks you to test the following alternating series for convergence or divergence using the A.S.T.:

$\sum_{n=1}^{\infty} (-1)^{n-1}\, e^{2/n}$
2. Relevant equations
The Alternating Series Test states that an alternating series with terms $a_n$ converges if
(a) $a_{n+1} \leq a_n$ (i.e. the terms are decreasing) for all n, and
(b) $\lim_{n \to \infty} a_n = 0$
3. The attempt at a solution
In my attempt I have confirmed (a) by using the first derivative test to show that the series is decreasing:
Let $f(x) = e^{2/x}$. Then
$f'(x) = \frac{-2 e^{2/x}}{x^2}$,
which is going to be negative for all x > 1, thus satisfying the first condition.
The part where I am having trouble is confirming whether the limit of $e^{2/x}$ equals 0 as x approaches infinity (the second requirement for the A.S.T.).
Looking at the graph of $e^{2/x}$, the limit seems to be approaching 1, but I do not know how to prove this or which method to use.
It's been some years since I have taken Calc I and as a result, I'm having some difficulty with computing limits, though I am understanding the concepts of convergence/divergence well enough.
Thanks for reading and for any assistance that you might be able to give!
2. Aug 8, 2014
### Ray Vickson
Is the function $f(w) = e^w$ continuous in the variable $w$? For the case of $w = 2/x$ what is the limiting value of $w$ as $x \to \infty$? So, what is the limit of $e^{2/x}$?
3. Aug 8, 2014
### jimbit
f(w) = e^w is continuous.
For 2/x, it is not continuous at 0. But it is continuous from 1 to infinity.
So, the limit is 1?
4. Aug 8, 2014
You tell me.
5. Aug 8, 2014
### jimbit
Well, as 2/x becomes infinitely small, $e^{2/x}$ goes to 1. I see that now, after doing some calculations.
However, I'm just not sure why it doesn't go to zero instead. I seem to be missing some crucial fact about e.
6. Aug 8, 2014
### jimbit
Oh, goodness,
I understand now, as 2/x becomes increasingly small, it basically becomes 0.
e^0 = 1.
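A quick numeric check confirms the conclusion the thread reaches: $e^{2/n}$ tends to 1, not 0, so condition (b) of the A.S.T. fails (and since the terms do not tend to 0, the series in fact diverges by the test for divergence):

```python
import math

# The terms a_n = e^(2/n) tend to 1, not 0, as n grows:
for n in (10, 1_000, 1_000_000):
    print(n, math.exp(2 / n))
```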
# Extended Hückel method
Known as: Extended Huckel theory, Hückel, Extended huckel method
The extended Hückel method is a semiempirical quantum chemistry method, developed by Roald Hoffmann since 1963. It is based on the Hückel method but…
Wikipedia
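As background for readers new to the topic: the simple Hückel method that the extended method builds on reduces to diagonalizing H = αI + βA, with A the π-system's adjacency matrix. A minimal NumPy sketch for the benzene ring (this illustrates the simpler precursor method, not the extended-Hückel parameterization itself):

```python
import numpy as np

# Simple Hueckel model of the benzene pi system: H = alpha*I + beta*A,
# with A the adjacency matrix of the six-membered ring.  Orbital
# energies relative to alpha, in units of beta, are the eigenvalues of A.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

energies = np.sort(np.linalg.eigvalsh(A))[::-1]
print(energies)   # eigenvalues 2, 1, 1, -1, -1, -2
```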
## Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
- Highly Cited, 2013 (Corpus ID: 102293807): "Polyelectrolyte gels are known to exhibit strong response to changes in salt concentration, an effect attributed to osmotic…"
- 2011 (Corpus ID: 55661098): "The effect of the temperature dependence of the internal friction angle is studied in a boundary value problem simulating the…"
- Highly Cited, 2010 (Corpus ID: 33937053): "We present a semiempirical model for calculating electron transport in atomic-scale devices. The model is an extension of the…"
- 2008 (Corpus ID: 119180709): "Si(001)-(2×1) surface is one of the many two-dimensional systems of…"
- Highly Cited, 2006 (Corpus ID: 42821607): "We describe a semiempirical atomic basis extended Huckel theoretical (EHT) technique that can be used to calculate bulk band…"
- 2005 (Corpus ID: 54997351): "Approximate molecular orbital calculations have been applied to explain the low CO poisoning effects observed at PtBi2 and PtBi…"
- 2002 (Corpus ID: 36608308): "We have performed a theoretical and experimental study of the three isomers of xylene, C6H4(CH3)2, adsorbed on Pd(111)…"
- 1994 (J. Comput. Chem.; Corpus ID: 206038169): "A semiempirically parameterized version of the extended Hückel molecular orbital method has been combined with an efficient quasi…"
- Highly Cited, 1964: "The conformations, relative stabilities, and electronic distribution of a sample of the more important carbonium ions and…"
- Highly Cited, 1964: "The extended Huckel theory is applied to compounds of boron and nitrogen, with emphasis placed upon similarities and differences…"
### B Finding the Air Mass and Zenith Distance
This appendix gives some advice on how you can find out the air mass and zenith distance of your individual observations. It is impossible to give simple instructions which will work in all cases because the procedures adopted by different observatories are different. Ideally, at the conclusion of your observing run you would be given a summary list of all your observations which would include the air mass for each. However, it is much more likely that the air mass or zenith distance will be included in the auxiliary information stored in the data file for each observation. Again, different observatories use different data formats and different keywords17.
#### B.1 Information required
Ideally you need to know the average air mass, $X$, of each observation. Alternatively, the zenith distance, $z$, is just as good. The CURSA applications for calibrating instrumental magnitudes (see the recipe in Section 16) can automatically calculate the air mass from the zenith distance. Conversely, if you need to calculate the air mass from the zenith distance yourself then Section 8 gives the requisite formulæ.
If the auxiliary information for your observations contain neither the air mass nor the zenith distance then you will have to calculate the zenith distance from whatever information is available about the celestial coordinates and times of your observations. The zenith distance, $z$, can be calculated from:
$\sec z=\frac{1}{\left(\sin \psi \sin \delta +\cos \psi \cos \delta \cos h\right)}$ (23)
where:
- $\psi$ is the latitude of observation,
- $\delta$ is the Declination of the object observed,
- $h$ is the Hour Angle of the object observed.
The Hour Angle is simply:
$h=s-\alpha$ (24)
where $\alpha$ is the Right Ascension of the object observed and $s$ is the local sidereal time. Again, the local sidereal time may not be recorded in your observations and it might be necessary to calculate it from whatever information is available about the time of your observations. Most standard textbooks on spherical astronomy give further details of calculating the zenith distance and converting between time systems (see, for example, Spherical Astronomy by Green[31]). Another useful source of information is the explanation and notes for the SLALIB positional-astronomy subroutine library (see SUN/67[75]).
The keywords used to represent these various items of information differ between different observatories. Table 5 gives some examples. It is based on CCD frames observed with the Jacobus Kapteyn Telescope (JKT) on La Palma. In this case both the air mass and the zenith distance are included and hence there is no need to calculate them. The keywords used at the Anglo-Australian Observatory are available via the World Wide Web (at URL http://www.aao.gov.au/local/www/tjf/fits.html). The appropriate instrument and observatory manuals should document the keywords used in a given dataset. In case of difficulty staff at the observatory where the dataset was acquired should be able to advise.
| Keyword | Description |
| --- | --- |
| AIRMASS | air mass |
| ZENDIST | zenith distance (degrees) |
| TIMSTART | start time of exposure |
| TIMEND | end time of exposure |
| RA | Right Ascension of the object |
| DEC | Declination of the object |
| EQUINOX | equinox of coordinate system |
| DATE-OBS | date of the observation |
Table 5: Example of some keywords present in a CCD frame acquired with the Jacobus Kapteyn Telescope (JKT) on La Palma
#### B.2 Examining files
Files containing observations come in a number of different formats. The procedures for inspecting them to determine the values of the keywords that they contain differ for different formats. The following notes cover some of the more common formats, though they are not comprehensive. Note that you can convert a data file between any of the formats mentioned below (and others) using the CONVERT package (see SUN/55[12]).
If you are using Starlink applications such as PHOTOM (see Section 14) or GAIA (see Section 15) to measure instrumental magnitudes in CCD frames then you will probably have converted them to the $n$-dimensional Data Format (NDF; see SUN/33[77]) which itself is a special case of Starlink’s Hierarchical Data System (HDS; see SUN/92[78]). HDS files, including NDF ones, usually have file type ‘.sdf’. In this case, the file name specified to applications, such as those in KAPPA, must omit the ‘.sdf’ file type.
If the observations were originally formatted as FITS files (see below) prior to being converted to the NDF format then all the FITS keywords are preserved in an extension to the NDF file and usually this extension will contain any information about the air mass etc. Application fitslist in KAPPA (see SUN/95[11]) will list the FITS extension of an NDF. Briefly, if you have not previously started KAPPA type kappa. Then type fitslist filename (remembering to omit the file type).
If you know the name of the required keyword then you can use the Unix command grep to extract just the required line from the output produced by fitslist. For example, if the required keyword was ‘AIRMASS’ you would type:
% fitslist filename | grep -i AIRMASS
If you cannot find the required datum in the FITS keywords then it is worth reading the FITS comments to see if they give any useful information.
You can examine the entire contents of an HDS file using hdstrace (see SUN/102[10]). This option will be useful if the file is not an NDF which was created from a FITS file. Simply type hdstrace filename (again remembering to omit the file type). hdstrace is a flexible utility and you should refer to SUN/102 for a full description.
FITS files
The FITS18 (Flexible Image Transport System) format is in widespread use in astronomy. The original observations which you brought away from the observatory after your observing run are perhaps most likely to be in this format.
Application fitshead in KAPPA (see SUN/95[11]) will list all the header information, including the keywords, in a FITS file. Briefly, if you have not previously started KAPPA type kappa. Then type fitshead filename. Alternatively, and perhaps even more simply, the header information can be displayed using Unix command more. The resulting display is perfectly readable, though perhaps not very æsthetic. This technique works best with a window which is eighty characters wide.
A description of the FITS format is beyond the scope of this note. However, briefly, a FITS file comprises a primary dataset and optionally one or more extensions. fitshead allows you to access the header information for the primary dataset and all the extensions. Conversely, often only the primary header information can be conveniently accessed with more.
Figaro DST files
Figaro DST files are another special case of the Starlink HDS format and can be examined with hdstrace. See above for details. The air mass, zenith distance and similar information are most likely to be found in the .FITS or .OBS structures.
IRAF files
A given IRAF (Image Reduction Analysis Facility) dataset is comprised of two files. One file has type ‘.pix’, the other ‘.imh’. The .pix file contains the ‘bulk data’ for the dataset; the array comprising the two-dimensional image in the case of CCD photometry. The .imh file contains all the header information. It is a simple text file and the keywords have a similar format to FITS keywords. It can be listed using standard Unix commands such as more or cat.
17In this context, a keyword is simply the name of each datum or item of information. For example, the keyword for the air mass might be ‘AIRMASS’.
18The original FITS format was proposed by Wells et al.[79] in 1981. However, it has been developed and enhanced over the years. The FITS standard is now maintained and documented by the FITS Support Office of the Astrophysics Data Facility at the NASA Goddard Space Flight Center (see URL: http://fits.gsfc.nasa.gov/fits_home.html). Though FITS is basically an astronomical format it is sometimes mentioned in books about standard image formats. See, for example, Graphics File Formats by Kay and Levine[47].
# Three-Dimensional Shape Recognition and Classification Using Local Features of Model Views and Sparse Representation of Shape Descriptors
## Abstract
Abstract: In this paper, a new algorithm is proposed for three-dimensional (3D) shape recognition using local features of model views and its sparse representation. The algorithm starts with the normalization of 3D models and the extraction of 2D views from uniformly distributed viewpoints. Consequently, the 2D views are stacked over each other to form view cubes. The algorithm employs the descriptors of 3D local features in the view cubes after applying Gabor filters in various directions as the initial features for 3D shape recognition. In the training stage, we store some 3D local features to build the prototype dictionary of local features. To extract an intermediate feature vector, we measure the similarity between the local descriptors of a shape model and the local features of the prototype dictionary. We represent the intermediate feature vectors of 3D models in the sparse domain to obtain the final descriptors of the models. Finally, support vector machine classifiers are used to recognize the 3D models. Experimental results using the Princeton Shape Benchmark database showed an average recognition rate of 89.7% using 20 views. We compared the proposed approach with state-of-the-art approaches and the results showed the effectiveness of the proposed algorithm.
Keywords: Shape Classification , Sparse Representation , 3D Local Features , 3D Shape Recognition , View Cube
## 1. Introduction
Recently, the rapid development of three-dimensional (3D) scanners and sensors as well as graphics-accelerated hardware has attracted an enormous amount of interest in 3D image processing and analysis [1]. Nowadays, 3D shape models are used in various fields such as archaeology, cultural heritage, computer-aided design (CAD), medical applications, 3D object classification, and biometrics. For instance, in industrial applications, 3D CAD models are widely used in a new form of prototyping called digital prototyping. Digital prototyping allows for the easy evaluation of industrial components before fabrication. One of the most important applications in the field of medical diagnostics is the automatic analysis and recognition of abnormalities in anatomic structures. The recognition of abnormalities in 3D images requires the automatic representation and classification of 3D models, which is a challenging issue.
The recognition and classification of 3D models are of great importance in several applications. However, existing approaches for the classification and recognition of 2D images cannot be applied to 3D images directly because of their different nature and characteristics. The recognition and classification of a 3D model require the extraction of global or local characteristics of a 3D shape and their representation in an appropriate form, which is called a 3D shape descriptor or signature. The performance of a 3D shape recognition algorithm highly depends on the appropriateness of the 3D shape descriptor.
Existing algorithms for 3D shape representation can be roughly categorized into feature-based, view-based and graph-based approaches [2,3]. In feature-based approaches, global or local features of a 3D shape model are used as the descriptors for shape classification. Most early approaches for 3D shape classification employ global features for shape recognition. Global features are easy and fast to compute for 3D shape representation; however, they cannot describe the local deformation of a 3D shape. Therefore, local features have recently been considered as a solution to handle the shortcomings of global features and increase the discriminating power of shape descriptors.
In comparison with feature-based approaches that consider only the geometric characteristics of a 3D shape, graph-based approaches employ the relations and linkages between the various components of a model for the shape representation. In other words, graph-based approaches construct a graph representing the topological information of a 3D object as a descriptor. In general, graph-based approaches are computationally inefficient and their applications are mostly restricted to CAD/CAM models. Two similar 3D models look similar from various views; therefore, view-based approaches extract the 2D images of a 3D model from various views to describe it. The number of views and the method used to combine views have a major impact on the effectiveness of a view-based algorithm.
This study proposes a new approach for 3D shape classification and recognition using 3D local features of model views. The proposed approach can be considered as a combination of view-based and feature-based approaches. The algorithm commences with the normalization of 3D models. This stage makes the extracted views invariant to rotation and scale changes. Then, 2D images of 3D models are captured from various views. In contrast to existing view-based approaches that mostly extract the silhouette of 3D models, we consider the distance of points from the center of mass of 3D models as the intensity of the extracted views. Consequently, the 2D views of the 3D models are stacked over each other to form the view cubes. The algorithm employs the descriptors of 3D local features in the view cube after applying Gabor filters in various directions. To combine the descriptors of local features, various descriptors from different classes of 3D models are saved during the training stage. These descriptors are called the prototype dictionary of local descriptors. To extract an intermediate feature vector, we measure the similarity between the local descriptors of an input model and the local features of the prototype dictionary. We represent the intermediate feature vectors of 3D models in the sparse domain to obtain the final descriptors of models. Finally, support vector machine (SVM) classifiers are used for the recognition of 3D models. In brief, the main contributions of this paper are as follows:
- A new approach is proposed to represent a 3D model as a cube of model views. In contrast to most view-based algorithms that use the silhouette of a shape model as the view image, our algorithm utilizes grey-level images. The pixel intensities of the grey-level images are obtained by the calculation of distances of points from the center of mass of the shape model.
- The proposed approach leverages a new descriptor that is extracted from 3D local descriptors of the view cube. Therefore, the proposed algorithm can be considered as a combination of view-based and feature-based approaches.
- A new approach is proposed to represent the descriptor in the sparse domain.
The remainder of this paper is organized as follows. In Section 2, a survey of related work is presented. Section 3 introduces the proposed algorithm for 3D shape recognition. Section 4 presents the experimental results and their analysis and we conclude the paper in Section 5.
## 2. Related Work
Various applications of 3D shape recognition and classification have attracted a large amount of attention from researchers during the past decade, and several approaches have been proposed for 3D shape classification over that time. Several survey papers have discussed various algorithms for 3D shape recognition, classification, and retrieval [3-6]. As mentioned before, early approaches for 3D object representation mostly focused on global features like area, volume and moments of 3D shapes [7-10]. For instance, Elad et al. [7] calculated moments of a 3D shape and represented them as a descriptor for 3D shape retrieval. Osada et al. [8] used a shape distribution, sampled from a shape function measuring the global geometric properties of an object, as the shape descriptor. An enhanced version of these features was also used in [9].
Some existing approaches for 3D shape recognition employ descriptors based on a spatial map that uses the spatial relations between different sections of a 3D model [11-13]. For instance, Saupe and Vranic [11] used spherical harmonic coefficients as a shape descriptor. The descriptor measures the maximal extent of the shape across all rays from the origin. Spatial maps are generally sensitive to geometric transformations like the scale and rotation of a 3D object; therefore, a pose normalization stage is inevitable. Kazhdan et al. [12] used a spherical harmonic representation that transforms a rotation-dependent shape descriptor into a rotation-independent one. However, spherical harmonics that are based on the latitude-longitude parameterization of a sphere result in singularities at the poles. To handle the problem, Laga et al. [13] uniformly sampled points on the sphere and used the wavelet transform to extract spherical wavelet descriptors, which are an extended version of Zernike moments and spherical harmonics.
Recent feature-based approaches for 3D shape representation mostly focus on local features [14-19]. Descriptors based on local features generally give rise to better discrimination for inter-class shape recognition. The main stage of constructing a descriptor based on local features is the detection of salient points in a 3D shape model. Shilane and Funkhouser [14] defined the distinctive regions of a 3D shape as the areas whose shapes are consistent with objects of the same type and differ from objects of other types. Inspired by the characteristics of human visual perception, Zhao et al. [15] used two features called retinex-based importance feature (RIF) and relative normal distance (RND) for salient point extraction. Atmosukarto et al. [16] used the histogram of low-level features like Besl–Jain and Gaussian curvatures for salient point extraction by using a trained SVM. Salient points are then transformed into a 2D longitude-latitude spatial map as a descriptor to classify 3D models. In [17], an extension of the Harris method is used to extract the salient points of a 3D shape. The method uses an adaptive technique to determine the neighborhood of a vertex to apply the Harris operator. Bu et al. [18] employed deep belief networks (DBN) to learn high-level features that are obtained from local descriptors. They adopted a scale-invariant heat kernel signature and average geodesic distance as the local descriptors, which are robust against non-rigid and complex shape deformations.
Graph-based approaches are another group of algorithms for 3D shape recognition [20-24]. In contrast to feature-based methods that describe 3D objects using 3D geometrical properties, graph-based algorithms use the topological information and the relation between various parts of the object for 3D shape recognition. A graph shows how various parts of an object are linked together. Various graph representations have been proposed in the literature, such as skeletal graph [20], Reeb graph [21], and spectral Reeb graph [23,24], to name a few. Graph-based algorithms are computationally expensive and sensitive to small topological changes.
Another category of 3D shape recognition algorithms, called view-based algorithms, capture several 2D images from various directions of a 3D model [25-27]. In [25], a set of 2D images is automatically generated from a 3D object, by taking the views from uniformly distributed viewpoints. Then, a set of 2D rotation-invariant shape descriptors are extracted for each image. Finally, a similarity measure is used for 3D model retrieval. In [26], convolutional neural networks (CNNs) are used for 3D shape recognition using multiple views of a 3D model. The method first trains a set of CNNs to combine several views as a single and more informative view. Finally, another trained CNN is used to generate the shape descriptor. Most view-based methods can be considered as an extension of global feature-based methods that extract a global descriptor from the view images. However, Ding and Liu [27] used a view-based descriptor called sphere image for 3D shape retrieval that takes into account the relation between various views by constructing a star graph. In [28], a feature fusion method and multi-modal graph learning are used for view-based 3D object retrieval. After extracting different visual features, including 2D Zernike moments, 2D Fourier descriptors, and 2D Krawtchouk moments, the Hausdorff distance is computed to measure the similarity between two 3D objects with multiple views. Finally, several graphs are employed for the feature fusion task.
## 3. Proposed Method
This study aims to classify 3D objects using the local features of model views and the sparse representation of the extracted features. Fig. 1 shows the general block scheme of the proposed algorithm. The proposed method consists of two parts: a training phase and a testing phase. The proposed algorithm is a combined feature- and view-based approach that utilizes the 3D local features of model views to construct the required features for 3D shape classification. Both the training and testing phases start with the normalization of 3D models. This stage makes the extracted views invariant to rotation and scale changes. Then a set of 2D images is extracted from 3D shapes by taking views from uniformly distributed viewpoints. Consequently, the 2D views of the 3D models are stacked over each other to form view cubes. We use 2D Gabor filters in combination with a 3D max filter to extract proper characteristics of the view images and take into account the relation between various views. In the training phase of the system, we select some 3D local features from the training set of 3D models as the prototype dictionary of local features. In both the training and testing phases, the intermediate features of 3D models are computed by calculating the similarity between 3D patches of a 3D model and the prototype dictionary of local features. We use sparse representation to calculate the final descriptors for the classification. The sparse representation of descriptors needs the selection of appropriate basic signals or atoms. This stage, which is called dictionary learning, is conducted during the training phase. This dictionary is then used to obtain the final descriptors for 3D shape classification via sparse representation. Then the trained SVM classifiers are used for 3D shape recognition.
The general block scheme of the proposed algorithm.
##### 3.1 Pose Normalization
The 3D shape models may have an arbitrary scale, orientation, and position. To make the extracted features robust against the scale, rotation, and position of a model, a pose normalization stage is used before the feature or view extraction. Pose normalization algorithms aim to transform a 3D shape into a new canonical coordinate frame where the representation of the shape is independent of its scale, orientation, and position. Various normalization algorithms such as weighted principal component analysis (PCA) [29], continuous PCA [30] and PCA on the normals of the model (NPCA) [31] have been proposed in the literature. We use the weighted PCA [29] approach for the pose normalization.
In the weighted PCA, the mean vector and covariance matrix of vertex coordinates are calculated as follows:
##### (1)
$$\mathbf{m}_{V}=\frac{1}{n} \sum_{i=1}^{n} \omega_{i} \mathbf{v}_{i}$$
##### (2)
$$\mathbf{C}_{V}=\frac{1}{n} \sum_{i=1}^{n} \omega_{i}\left(\mathbf{v}_{i}-\mathbf{m}_{V}\right)^{T}\left(\mathbf{v}_{i}-\mathbf{m}_{V}\right)$$
where n is the number of vertices in a 3D shape model, $\mathbf{v}_{i}$ are the coordinates of the shape vertices and $\mathbf{m}_{V}$ and $\mathbf{C}_{V}$ are the mean vector and covariance matrix of the vertex coordinates, respectively. Here $\omega_{i}$ is the weight of vertex $\mathbf{v}_{i}$, defined as the proportion of the sum of the areas of all triangles that have $\mathbf{v}_{i}$ as a vertex to the sum of the areas of all triangles in the shape. Let $\mathbf{A}$ be a matrix whose rows are the ordered eigenvectors of the covariance matrix $\mathbf{C}_{V}$. The normalized coordinates of the vertices are then calculated as follows:
##### (3)
$$\hat{\mathbf{v}}_{i}=\mathbf{A}\left(\mathbf{v}_{i}-\mathbf{m}_{V}\right)$$
Results of pose normalization on two typical 3D shapes: (a) before applying the pose normalization and (b) after applying the pose normalization.
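Equations (1)-(3) can be sketched in NumPy as follows (a minimal illustration; the 1/n factor of Eqs. (1)-(2) is absorbed here by normalising the weights to unit sum, which is our own reading of the formulation):

```python
import numpy as np

def weighted_pca_normalize(vertices, weights):
    """Pose-normalize vertex coordinates with weighted PCA (Eqs. 1-3).

    vertices: (n, 3) array of vertex coordinates v_i
    weights:  (n,) array of per-vertex weights w_i, e.g. the proportion
              of triangle area incident on each vertex
    """
    w = weights / weights.sum()                # normalise weights to unit sum
    m = (w[:, None] * vertices).sum(axis=0)    # Eq. (1): weighted mean
    d = vertices - m
    C = (w[:, None] * d).T @ d                 # Eq. (2): weighted covariance
    eigvals, eigvecs = np.linalg.eigh(C)
    A = eigvecs[:, ::-1].T                     # rows = eigenvectors, largest first
    return d @ A.T                             # Eq. (3): rotate into canonical frame
```

After normalization the principal axes of the shape align with the coordinate axes, so the extracted views become independent of the model's original orientation.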
##### 3.2 View Cube Construction
After the normalization of 3D shape models, different views of the shape models are extracted to construct view cubes. To build a view cube, a set of 2D images is extracted from a 3D shape by taking views from uniformly distributed viewpoints. The view cube is constructed by stacking the various views of a shape model. The aim of constructing a view cube is to take into account the relation between various views of a shape model for the extraction of the shape descriptor. In the proposed approach, not only is the 3D information of a view cube used in the filtering steps, but the intermediate features are also based on the 3D information of adjacent views. Fig. 3 shows the extracted views and the constructed view cube for a typical 3D shape model with nine views. In contrast to most view-based algorithms that use the silhouette of a shape model as a view image, we use the distances of points from the center of mass as the intensities of pixels of a view image. This feature provides us the capability of using intensity-based filters such as the Gabor filter to extract the proper characteristics of a shape model from its view images.
The extracted views and the constructed view cube for a typical 3D shape model.
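Stacking the views into a cube is straightforward (a NumPy sketch; rendering the distance-valued view images themselves is outside its scope):

```python
import numpy as np

def build_view_cube(views):
    """Stack 2D view images (each H x W) into an H x W x V view cube.

    Each view is assumed to be a grey-level image whose pixel values are
    distances of surface points from the model's centre of mass, as
    described above.
    """
    return np.stack(views, axis=-1)

# Nine 150x200 views, as used in the experiments, give a 150x200x9 cube.
views = [np.zeros((150, 200)) for _ in range(9)]
cube = build_view_cube(views)
print(cube.shape)  # (150, 200, 9)
```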
##### 3.3 Gabor and Max Filtering
The use of distances of points from the mass center of shape models as the intensity of view images makes it possible to use filters such as the Gabor filter for texture-based feature extraction. Inspired by the biological characteristics of the human visual system, Gabor filters are found to be appropriate for texture representation and discrimination. The weights of the 2D Gabor filter are defined as follows:
##### (4)
$$\hat{x}=x \cos (\theta)-y \sin (\theta)$$
##### (5)
$$\hat{y}=x \sin (\theta)+y \cos (\theta)$$
##### (6)
$$G(x, y)=\exp \left(-\frac{\hat{x}^{2}+\gamma^{2} \hat{y}^{2}}{2 \sigma^{2}}\right) \cos \left(\frac{2 \pi}{\lambda} \hat{x}\right)$$
where $\theta$ denotes the direction of the Gabor filter, $\gamma$ is the aspect ratio, $\sigma$ is the effective width and $\lambda$ is the wavelength. Our approach for texture representation using the Gabor filter is based on the method of [32]. We apply 11×11 pixel 2D Gabor filters to all 2D images of the view cube in 16 directions and select the maximum value over the 16 directions as the output of the Gabor filter. The values of $\gamma$, $\sigma$ and $\lambda$ are all taken from [32].
After applying the 2D Gabor filter to the images of the view cube, 3D local max filter is employed to take into account the relations between various view images. To this intent, a 3D local max filter with the dimension of 5×5×5 pixels is experimentally used in this study.
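The Gabor and max filtering stages can be sketched as follows (a minimal NumPy/SciPy illustration; the gamma, sigma and lambda defaults are placeholders, since the paper takes its parameter values from [32]):

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import convolve2d

def gabor_kernel(theta, size=11, gamma=0.3, sigma=4.5, lam=5.6):
    """2D Gabor kernel following Eqs. (4)-(6). The gamma, sigma and
    lam defaults are placeholders, not the paper's values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_hat = x * np.cos(theta) - y * np.sin(theta)   # Eq. (4)
    y_hat = x * np.sin(theta) + y * np.cos(theta)   # Eq. (5)
    return np.exp(-(x_hat**2 + gamma**2 * y_hat**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * x_hat / lam)           # Eq. (6)

def gabor_max_response(cube, n_dirs=16):
    """Filter each slice of the view cube in n_dirs directions, keep the
    per-pixel maximum over directions, then apply a 5x5x5 local max filter."""
    out = np.full(cube.shape, -np.inf)
    for k in range(n_dirs):
        kern = gabor_kernel(np.pi * k / n_dirs)
        for v in range(cube.shape[2]):
            resp = convolve2d(cube[:, :, v], kern, mode="same")
            out[:, :, v] = np.maximum(out[:, :, v], resp)
    return maximum_filter(out, size=(5, 5, 5))
```

The final `maximum_filter` call is what couples adjacent views: the 5×5×5 window extends across the third (view) axis of the cube.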
##### 3.4 Extracting Intermediate Features
After applying Gabor and max filtering, the intermediate features are calculated by comparing all 3D local patches in the view cube with some previously-stored local patches, i.e., the prototype dictionary of local features. To this end, in the training stage, we randomly select $N_{i}$ 3D local patches with a size of 16×16×3 pixels. By comparing all the 3D local patches in the view cube with the prototype dictionary of local features, the intermediate features are calculated using the following algorithm:
Step 1. Select a 3D patch from the prototype dictionary of local features, i.e., $\mathbf{p}^{j}, j=1,2, \ldots, N_{i}$.
Step 2. Slide a 3D window with a size of 16×16×3 pixels over the view cube.
Step 3. Compare the 3D local patches of the view cube with $\mathbf{p}^{j}$ as follows: $r_{i}^{j}=\exp \left(\left\|\mathbf{x}_{i}-\mathbf{p}^{j}\right\|\right)$, where $\mathbf{x}_{i}$ denotes the i-th 3D local patch in the view cube and $\|\cdot\|$ denotes the 2-norm.
Step 4. Calculate the max difference $rm^{j}$ as follows: $rm^{j}=\max_{i} \left(r_{i}^{j}\right)$.
Step 5. Construct the intermediate feature vector, i.e., $\mathbf{r}=\left[rm^{1}, rm^{2}, \ldots, rm^{N_{i}}\right]^{T}$.
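The steps above can be sketched directly (a naive Python implementation; the `stride` parameter is our own addition to keep the sliding window affordable, and the similarity $r_{i}^{j}=\exp(\|\mathbf{x}_{i}-\mathbf{p}^{j}\|)$ is computed exactly as written in Step 3):

```python
import numpy as np

def intermediate_features(cube, prototypes, patch=(16, 16, 3), stride=8):
    """Compute the intermediate feature vector r of Section 3.4.

    cube:       filtered view cube (H x W x V)
    prototypes: list of N_i prototype patches, each 16x16x3
    """
    ph, pw, pd = patch
    H, W, V = cube.shape
    feats = []
    for p in prototypes:                       # one p^j per prototype
        rm = -np.inf
        for i in range(0, H - ph + 1, stride):
            for j in range(0, W - pw + 1, stride):
                for k in range(0, V - pd + 1):
                    x = cube[i:i + ph, j:j + pw, k:k + pd]
                    r = np.exp(np.linalg.norm(x - p))   # r_i^j (Step 3)
                    rm = max(rm, r)                     # rm^j  (Step 4)
        feats.append(rm)
    return np.array(feats)                     # r = [rm^1, ..., rm^{N_i}] (Step 5)
```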
##### 3.5 Sparse Representation
In this study, the sparse representations of intermediate features are used as the descriptors for 3D shape recognition. In the sparse representation, a feature vector $\mathbf{r}_{i} \in R^{N_{i}}$, which denotes an intermediate feature vector, is represented as follows:
##### (7)
$$\mathbf{r}_{i}=\mathbf{D} \boldsymbol{\alpha}_{i}+\mathbf{e}_{i}$$
where $\boldsymbol{\alpha}_{i} \in R^{k}$ is the sparse representation of the feature $\mathbf{r}_{i}$, $\mathbf{D} \in R^{N_{i} \times k}$ is the dictionary of basic signals and $\mathbf{e}_{i} \in R^{N_{i}}$ is the error vector. Here k is the number of atoms or basic signals in the dictionary. The sparse representation of an intermediate feature vector $\mathbf{r}$ can be obtained by solving an underdetermined system of linear equations as follows:
##### (8)
$$\begin{aligned} &\hat{\boldsymbol{\alpha}}=\arg \min \|\boldsymbol{\alpha}\|_{0}\\ &\text { s.t. } \mathbf{r}=\mathbf{D} \boldsymbol{\alpha} \end{aligned}$$
where $\|\cdot\|_{0}$ denotes the number of nonzero elements in a vector. To make the solution of Eq. (8) more feasible, the sparse representation problem may be expressed as:
##### (9)
$$\hat{\boldsymbol{\alpha}}=\arg \min \frac{1}{2}\|\mathbf{r}-\mathbf{D} \boldsymbol{\alpha}\|_{2}+\lambda\|\boldsymbol{\alpha}\|_{1}$$
where $\|\cdot\|_{1}$ refers to the L1 norm. Since both $\mathbf{D}$ and $\boldsymbol{\alpha}$ in Eq. (8) are unknown, the sparse representation of local descriptors consists of two different stages: (1) dictionary learning and (2) the calculation of sparse codes. The aim of dictionary learning is the selection of appropriate basic signals or atoms. We use the K-SVD algorithm [33] in the training stage of the proposed algorithm to learn a dictionary for sparse representation. To learn a dictionary for the sparse representation, we use the intermediate feature vectors of the training shape models. Let $\mathbf{R}_{t}$ denote the intermediate feature vectors of the training shape models and $\mathbf{A}_{t}$ represent the corresponding sparse coefficients of $\mathbf{R}_{t}$:
##### (10)
$$\mathbf{R}_{t}=\left\{\mathbf{r}_{1}, \mathbf{r}_{2}, \ldots, \mathbf{r}_{n_{t}}\right\}$$
##### (11)
$$\mathbf{A}_{t}=\left\{\boldsymbol{\alpha}_{1}, \boldsymbol{\alpha}_{2}, \ldots, \boldsymbol{\alpha}_{n_{t}}\right\}$$
where $n_{t}$ is the total number of training 3D shape models. The calculation of the matrix $\mathbf{D}$, i.e., the dictionary, using the K-SVD algorithm comprises the following steps:
Step 1. Initialize dictionary $\mathbf{D}$ with $L_{2}$ normalized columns.
Step 2. Calculate sparse coefficients using the following equation:
##### (12)
$$\mathbf{A}_{t}=\arg \min _{\mathbf{A}_{t}}\left\{\left\|\mathbf{R}_{t}-\mathbf{D} \mathbf{A}_{t}\right\|\right\} \text { s.t. } \quad\left\|\boldsymbol{\alpha}_{i}\right\|_{0} \leq T_{0}, \quad i=1,2, \ldots, n_{t}$$
Step 3. Update the dictionary as:
##### (13)
$$\mathbf{D}=\arg \min _{\mathbf{D}}\left\|\mathbf{R}_{t}-\mathbf{D} \mathbf{A}_{t}\right\|_{2}$$
Step 4. Repeat steps 2 and 3 until the convergence.
After the calculation of the dictionary, the feature-sign search algorithm [34] is used to calculate the sparse coefficients for intermediate features as the descriptors of 3D shape models.
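The two-stage procedure (dictionary learning, then sparse coding) can be sketched with scikit-learn. Note this uses scikit-learn's alternating dictionary-learning scheme rather than K-SVD [33], and L1-penalised coding rather than the feature-sign search of [34]; all sizes are toy values, so it illustrates the pipeline rather than the paper's exact method:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

# Toy stand-in for the intermediate feature vectors R_t of the training
# models: 60 vectors of dimension 40 (the real N_i is 5000).
rng = np.random.RandomState(0)
R_t = rng.randn(60, 40)

# Stage 1: dictionary learning (K-SVD in the paper; an alternating
# minimisation scheme here, used purely for illustration).
dl = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                        random_state=0, max_iter=10)
A_t = dl.fit_transform(R_t)   # sparse codes of the training set
D = dl.components_            # learned dictionary (20 atoms x 40 dims)

# Stage 2: sparse code for a new intermediate feature vector, solving the
# L1-penalised problem of Eq. (9).
alpha = sparse_encode(R_t[:1], D, algorithm="lasso_lars", alpha=0.1)
print(alpha.shape)  # (1, 20)
```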
##### 3.6 Classification
In this study, SVM classifiers are used to classify 3D shape models using the extracted descriptors. An SVM is a binary classifier that categorizes an input feature vector by evaluating the classifier function $f(\mathbf{x})$ as follows [35]:
##### (14)
$$f(\mathbf{x})=\operatorname{sgn}\left(\boldsymbol{\omega}^{T} \varphi(\mathbf{x})+b\right)$$
where $\mathbf{x}$ is the feature vector, b and $\boldsymbol{\omega}$ are the bias and the vector of SVM coefficients, respectively, $\varphi$ defines a kernel function, and sgn denotes the sign function. Since an SVM is inherently a binary classifier, the one-against-one approach is used in this study to create a multi-class SVM classifier for 3D shape classification.
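A minimal illustration of the classification stage with scikit-learn (the descriptors here are synthetic stand-ins; `SVC` implements the one-against-one scheme internally for multi-class problems):

```python
import numpy as np
from sklearn.svm import SVC

# Three synthetic classes of 10-dimensional "descriptors", well separated.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 10) + c * 3 for c in range(3)])
y = np.repeat([0, 1, 2], 20)

# Linear kernel, as found best in the experiments of Section 4; the
# multi-class decision is built from one-against-one binary SVMs.
clf = SVC(kernel="linear", decision_function_shape="ovo")
clf.fit(X, y)
print(clf.predict(X[:1]))
```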
## 4. Experimental Results
The proposed algorithm was implemented in the MATLAB environment and evaluated with two different 3D shape datasets comprising the McGill 3D shape benchmark (MSB) [36] and Princeton Shape Benchmark (PSB) databases [37]. Fig. 4 shows examples of 3D shape models from the MSB database. This database consists of 19 different kinds of 3D objects, with an emphasis on including models with articulating parts. Fig. 5 illustrates samples from the PSB database. This benchmark contains a database of 3D polygonal models collected from the World Wide Web. This dataset includes 1,814 models divided into three levels of classification. In our experiment, we used the third-level classification of the PSB database to assess the performance of the proposed algorithm and compare its results with those of other methods.
To construct a view cube, we use a set of nine 2D images with a resolution of 150×200 pixels. These images are extracted from 3D shapes by taking views from uniformly distributed viewpoints. In this study, the prototype dictionary of local features comprises 5,000 local patches of size 16×16×3 pixels.
Samples of 3D objects of the MSB database, (a) objects with articulating parts, (b) objects with moderate or no part articulation.
Samples of 3D objects of the PSB database.
Fig. 6 shows the correct classification rate of the proposed algorithm on the MSB database. Our experimental results demonstrate that an SVM classifier with a linear kernel function results in better classification accuracy. Therefore, one-against-one linear SVM classifiers are used in the experiments to create a multi-class SVM classifier. Fig. 6 illustrates the percentage of correct classification for the various dimensions of the sparse representation, i.e., k values. Since the MSB database does not specify test and training shape models for the classification, we use 190 random object models for training and the remaining models for testing. The experiments are repeated ten times and the average classification rates are reported in the figure. The results of Fig. 6 demonstrate that the maximum allowable dimension of the sparse representation, i.e., k=190, results in the maximum percentage of correct classification. Table 1 shows the correct classification rate for various classes of the MSB database using k=190. The results of Table 1 show average classification rates of 83.7% and 100% for seven and 19 classes of the MSB database, respectively. Table 2 illustrates the confusion matrix for the classification of 3D objects in the MSB database. As the results of Table 2 demonstrate, some sources of error for the proposed algorithm are related to the misclassification of airplanes as birds, tables as chairs and dinosaurs as four-limbs, to name a few. We also tested the proposed algorithm for 3D shape classification with and without the sparse representation. Table 3 compares the percentage of correct classification for the proposed algorithm with and without the sparse representation. The results of Table 3 show that the sparse representation of intermediate features enhances the classification rate by up to 6.4%.
Correct classification rates of the proposed algorithm on the MSB database for the various dimensions of the sparse representation, i.e., k values.
Correct classification rates (%) for various classes of the MSB database using k=190
Confusion matrix for the classification of 3D objects in the MSB database
Percentage of correct classification for the proposed algorithm on the MSB database with and without the sparse representation stage
To show the effectiveness of the proposed algorithm, we also compared its results on the MSB database with those of the method of Atmosukarto et al. [16]. This method uses histograms of low-level features like Besl–Jain and Gaussian curvatures for salient point extraction. Salient points are then transformed into a 2D longitude-latitude spatial map as a descriptor to classify 3D models. Fig. 7 illustrates the correct classification rate of the method of Atmosukarto et al. [16] on the MSB database using a K-nearest-neighbors (KNN) classifier with various K values. The results of Fig. 7 show a maximum correct classification rate of 65.47% for K=1. Table 4 illustrates the confusion matrix for the classification of 3D objects in the MSB database using the method of Atmosukarto et al. A comparison between the results of Tables 2 and 4 shows the effectiveness of the proposed algorithm.
Correct classification rates of the method of Atmosukarto et al. [16] on the MSB database using a KNN classifier for various K values.
Confusion matrix for the classification of 3D objects in the MSB database using the method of Atmosukarto et al. [16]
Fig. 8 shows the correct classification rate of the proposed algorithm on the PSB database. SVM classifiers with linear kernel function are experimentally used in this experiment as well. Fig. 8 illustrates the percentage of correct classification for the various dimensions of the sparse representation, i.e., k values. In this experiment, 907 objects are used to train the classifiers as well as obtaining the dictionary for sparse representation. The 907 remaining objects are also utilized to test the proposed algorithm. The results of Fig. 8 demonstrate that the maximum allowable dimension of sparse representation, i.e., k=907, results in the maximum percentage of correct classification.
Correct classification rates of the proposed algorithm on the PSB database for the various dimensions of the sparse representation, i.e., k values.
Table 5 shows the correct classification rate of the proposed algorithm on the PSB database with and without applying the sparse representation stage on intermediate features. The results of Table 5 show that the sparse representation of intermediate features increases the correct classification rate of the proposed algorithm up to 16.99%.
Percentage of correct classification for the proposed algorithm on the PSB database with and without the sparse representation stage
We also compared the results of applying the proposed algorithm on the PSB database with the methods of Bu et al. [18] and Hamid and Nakajima [38]. Bu et al. [18] leveraged the geodesics-aware bag-of-features (GA-BoF) as the initial features. Then DBNs were used for dimensionality reduction and 3D shape classification. They also reported the results of shape classification using GA-BoF features and some other dimensionality reduction algorithms, such as PCA, multidimensional scaling (MDS), linear discriminant analysis (LDA), and locally linear embedding (LLE). Table 6 compares the results of the proposed algorithm on the PSB database with those of GA-BoF features combined with DBNs and the other dimensionality reduction algorithms, as well as with the method of Hamid and Nakajima [38]. The results of this table demonstrate the higher performance of the proposed algorithm.
Comparison of the results of the proposed algorithm with the method of Bu et al. [18] using DBNs and other dimensionality reduction algorithms, as well as the Hamid and Nakajima [38] approach
## 5. Conclusions
This study reports a new approach for 3D shape recognition using 3D local features of model views. In comparison with the existing view-based approaches that mostly extract the silhouette of 3D models, we consider the distances of the points from the center of mass of a 3D model as the intensities of the pixels of extracted views. Then, we use Gabor-based filters to extract proper features for shape classification. To take into account the relationships between various views, we use a new approach for 3D local feature extraction by the construction of view cubes. Additionally, we describe intermediate features in the sparse domain to enhance the accuracy of classification.
Experimental results on both the PSB and MSB databases demonstrate the effectiveness and higher performance of the proposed method on the classification of 3D objects. Additionally, the comparisons of the results generated by the proposed method with those of other state-of-the-art approaches show that more reliable results could be obtained using the proposed method.
The experimental results also show that the sparse representation of intermediate features increases the accuracy of the proposed algorithm. Our experiments using SVM classifiers with a radial basis function (RBF) kernel show that the correct classification rate decreases relative to the linear kernel. This suggests that the sparse representation of intermediate features makes them linearly separable.
According to the results obtained by the proposed approach, increasing the number of coefficients for the sparse representation enhances its performance. In the proposed approach, we experimentally used the maximum number of allowable coefficients in the training phase, i.e., the number of intermediate features of the shape models. This indicates the scalability of the proposed algorithm, because as the number of classes and shape models grows it is possible to increase the number of coefficients for the sparse representation accordingly.
## Biography
##### Hussein Kanaan
https://orcid.org/0000-0001-5688-6953
He received the B.S. degree in biomedical engineering in 2008. In 2012, he received the M.S. degree in communication engineering from Shahed University, Tehran, Iran. He has been working towards the PhD degree in electronic engineering at Machine Vision and Image Processing laboratory of Shahed University, Tehran, Iran, since 2012. His current research interests include 3D object recognition, classification and machine vision.
## Biography
https://orcid.org/0000-0002-1990-6668
He received the B.S. degree in electronic engineering from Electrical Engineering Faculty, Tabriz University, Tabriz, Iran, in 1995. In 1998, he received M.S. degree in digital electronics from Electrical Engineering Faculty, Sharif University of Technology, Tehran, Iran. He received Ph.D. degree in electronic engineering from Electrical Engineering Faculty, Amirkabir University of Technology, Tehran, Iran, in 2004. Currently, he is an associate professor of Electrical Engineering Department, Shahed University, Tehran, Iran. His research fields are image and video processing and machine vision.
## References
• 1 J. S. Nam, H. Gao, M. K. Kang, K. T. Kim, S. C. Son, C. U. Pom, K. Heo, "Scenario-based 3D Objects Synthesizing System Design," Journal of Information Processing Systems, vol. 2, no. 1, pp. 18-22, 2006.
• 2 I. Atmosukarto, "3D shape analysis for quantification, classification, and retrieval," Ph.D. dissertation, University of Washington, Seattle, WA, 2010.
• 3 J. W. Tangelder, R. C. Veltkamp, "A survey of content based 3D shape retrieval methods," Multimedia Tools and Applications, vol. 39, no. 441, 2008. doi: 10.1007/s11042-007-0181-0
• 4 M. A. Savelonas, I. Pratikakis, K. Sfikas, "An overview of partial 3D object retrieval methodologies," Multimedia Tools and Applications, vol. 74, pp. 11783-11808, 2015. doi: 10.1007/s11042-014-2267-9
• 5 C. Li, A. B. Hamza, "Spatially aggregating spectral descriptors for nonrigid 3D shape retrieval: a comparative survey," Multimedia Systems, vol. 20, no. 3, pp. 253-281, 2014. doi: 10.1007/s00530-013-0318-0
• 6 G. L. Lopez, A. P. P. Negron, A. D. A. Jimenez, J. R. Rodriguez, R. I. Paredes, "Comparative analysis of shape descriptors for 3D objects," Multimedia Tools and Applications, vol. 76, pp. 6993-7040, 2017. doi: 10.1007/s11042-016-3330-5
• 7 M. Elad, A. Tal, S. Ar, "Content based retrieval of VRML objects: an iterative and interactive approach," in Multimedia 2001. Vienna: Springer, pp. 107-118, 2002.
• 8 R. Osada, T. Funkhouser, B. Chazelle, D. Dobkin, "Shape distributions," ACM Transactions on Graphics, vol. 21, no. 4, pp. 807-832, 2002. doi: 10.1145/571647.571648
• 9 R. Ohbuchi, T. Minamitani, T. Takei, "Shape-similarity search of 3D models by using enhanced shape functions," International Journal of Computer Applications in Technology, vol. 23, no. 2-4, pp. 70-85, 2005. doi: 10.1504/IJCAT.2005.006466
• 10 M. Mahmoudi, G. Sapiro, "Three-dimensional point cloud recognition via distributions of geometric distances," Graphical Models, vol. 71, no. 1, pp. 22-31, 2009. doi: 10.1016/j.gmod.2008.10.002
• 11 D. Saupe, D. V. Vranic, "3D model retrieval with spherical harmonics and moments," in Pattern Recognition. Heidelberg: Springer, pp. 392-397, 2001.
• 12 M. Kazhdan, T. Funkhouser, S. Rusinkiewicz, "Rotation invariant spherical harmonic representation of 3D shape descriptors," in Proceedings of Eurographics Symposium on Geometry Processing, Aachen, Germany, 2003, pp. 156-164.
• 13 H. Laga, H. Takahashi, and M. Nakajima, "Spherical wavelet descriptors for content-based 3D model retrieval," in Proceedings of IEEE International Conference on Shape Modeling and Applications, Matsushima, Japan, 2006.
• 14 P. Shilane, T. Funkhouser, "Distinctive regions of 3D surfaces," ACM Transactions on Graphics, vol. 26, no. 2, 2007. doi: 10.1145/1243980.1243981
• 15 Y. Zhao, Y. Liu, Y. Wang, B. Wei, J. Yang, Y. Zhao, Y. Wang, "Region-based saliency estimation for 3D shape analysis and understanding," Neurocomputing, vol. 197, pp. 1-13, 2016. doi: 10.1016/j.neucom.2016.01.012
• 16 I. Atmosukarto, K. Wilamowska, C. Heike, L. G. Shapiro, "3D object classification using salient point patterns with application to craniofacial research," Pattern Recognition, vol. 43, no. 4, pp. 1502-1517, 2010. doi: 10.1016/j.patcog.2009.11.004
• 17 I. Sipiran, B. Bustos, "Harris 3D: a robust extension of the Harris operator for interest point detection on 3D meshes," The Visual Computer, vol. 27, no. 11, 2011. doi: 10.1007/s00371-011-0610-y
• 18 S. Bu, Z. Liu, J. Han, J. Wu, R. Ji, "Learning high-level feature by deep belief networks for 3-D model retrieval and recognition," IEEE Transactions on Multimedia, vol. 16, no. 8, pp. 2154-2167, 2014. doi: 10.1109/TMM.2014.2351788
• 19 A. Flint, A. Dick, A. Van den Hengel, "Local 3D structure recognition in range images," IET Computer Vision, vol. 2, no. 4, pp. 208-217, 2008. doi: 10.1049/iet-cvi:20080037
• 20 H. Sundar, D. Silver, N. Gagvani, S. Dickinson, "Skeleton based shape matching and retrieval," in Proceedings of 2003 Shape Modeling International, Seoul, South Korea, 2003, pp. 130-139.
• 21 M. Hilaga, Y. Shinagawa, T. Kohmura, T. L. Kunii, "Topology matching for fully automatic similarity estimation of 3D shapes," in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, 2001, pp. 203-212.
• 22 B. Leng, C. Du, S. Guo, X. Zhang, Z. Xiong, "A powerful 3D model classification mechanism based on fusing multi-graph," Neurocomputing, vol. 168, pp. 761-769, 2015. doi: 10.1016/j.neucom.2015.05.048
• 23 A. Kacem, W. Mohamed, A. B. Hamza, "Spectral geometric descriptor for deformable 3D shape matching and retrieval," in Image Analysis and Recognition. Heidelberg: Springer, pp. 181-188, 2013.
• 24 W. Mohamed, A. B. Hamza, "Deformable 3D shape retrieval using a spectral geometric descriptor," Applied Intelligence, vol. 45, no. 2, pp. 213-229, 2016. doi: 10.1007/s10489-015-0746-y
• 25 P. Daras, A. Axenopoulos, "A compact multi-view descriptor for 3D object retrieval," in Proceedings of 2009 Seventh International Workshop on Content-Based Multimedia Indexing, Chania, Greece, 2009, pp. 115-119.
• 26 H. Su, S. Maji, E. Kalogerakis, E. Learned-Miller, "Multi-view convolutional neural networks for 3D shape recognition," in Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 2015, pp. 945-953.
• 27 K. Ding, Y. Liu, "A probabilistic 3D model retrieval system using sphere image," in Computer Vision – ACCV 2012. Heidelberg: Springer, pp. 536-547, 2012.
• 28 S. Zhao, H. Yao, Y. Zhang, Y. Wang, S. Liu, "View-based 3D object retrieval via multi-modal graph learning," Signal Processing, vol. 112, pp. 110-118, 2015. doi: 10.1016/j.sigpro.2014.09.038
• 29 D. V. Vranic, D. Saupe, "3D model retrieval," in Proceedings of the Spring Conference on Computer Graphics and its Applications (SCCG2000), Budmerice, Slovakia, 2000, pp. 89-93.
• 30 D. V. Vranic, D. Saupe, J. Richter, "Tools for 3D-object retrieval: Karhunen-Loeve transform and spherical harmonics," in Proceedings of 2001 IEEE 4th Workshop on Multimedia Signal Processing (Cat. No. 01TH8564), Cannes, France, 2001, pp. 293-298.
• 31 P. Papadakis, I. Pratikakis, S. Perantonis, T. Theoharis, "Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation," Pattern Recognition, vol. 40, no. 9, pp. 2437-2452, 2007. doi: 10.1016/j.patcog.2006.12.026
• 32 J. Mutch, D. G. Lowe, "Object class recognition and localization using sparse features with limited receptive fields," International Journal of Computer Vision, vol. 80, no. 1, pp. 45-57, 2008. doi: 10.1007/s11263-007-0118-0
• 33 M. Aharon, M. Elad, A. Bruckstein, "K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, 2006.
• 34 H. Lee, A. Battle, R. Raina, A. Y. Ng, "Efficient sparse coding algorithms," Advances in Neural Information Processing Systems, vol. 20, pp. 801-808, 2007.
• 35 P. H. Chen, C. J. Lin, B. Scholkopf, "A tutorial on ν-support vector machines," Applied Stochastic Models in Business and Industry, vol. 21, no. 2, pp. 111-136, 2005.
• 36 K. Siddiqi, J. Zhang, D. Macrini, A. Shokoufandeh, S. Bouix, S. Dickinson, "Retrieving articulated 3-D models using medial surfaces," Machine Vision and Applications, vol. 19, no. 4, pp. 261-275, 2008. doi: 10.1007/s00138-007-0097-8
• 37 P. Shilane, P. Min, M. Kazhdan, T. Funkhouser, "The Princeton Shape Benchmark," in Proceedings of Shape Modeling Applications, Genova, Italy, 2004, pp. 167-178.
• 38 L. Hamid, M. Nakajima, "Supervised learning of salient 2D views of 3D models," The Journal of the Society for Art and Science, vol. 7, no. 4, pp. 124-131, 2008.
Table 1.
Correct classification rates (%) for various classes of the MSB database using k=190
| Class name | Correct classification rate (%) |
|---|---|
| Airplanes | 70.0 |
| Ants | 100 |
| Birds | 62.5 |
| Crabs | 80.0 |
| Chairs | 100 |
| Cups | 100 |
| Dinosaurs | 87.5 |
| Dolphins | 100 |
| Fishes | 80.0 |
| Four-limbs | 83.3 |
| Hands | 50.0 |
| Humans | 75.0 |
| Octopus | 70.0 |
| Pliers | 100 |
| Snakes | 100 |
| Spectacles | 84.6 |
| Spiders | 76.1 |
| Tables | 71.4 |
| Teddy-bears | 100 |
| All classes | 83.7 |
Table 2.
Confusion matrix for the classification of 3D objects in the MSB database
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1. Airplane 0.70 - 0.20 - - - 0.10 - - - - - - - - - - - -
2. Ants - 1 - - - - - - - - - - - - - - - - -
3. Birds - - 0.63 - - - - - - - 0.13 - 0.13 - 0.13 - - - -
4. Crabs - - - 0.80 0.05 - - - - - 0.05 - - - 0.10 - - - -
5. Chairs - - - - 1 - - - - - - - - - - - - - -
6. Cups - - - - - 1 - - - - - - - - - - - - -
7. Dinosaurs - - - - - - 0.88 - - 0.13 - - - - - - - - -
8. Dolphins - - - - - - - 1 - - - - - - - - - - -
9. Fishes - - - - - - 0.20 - 0.80 - - - - - - - - - -
10. Four-limbs 0.08 - - - - - 0.09 - - 0.83 - - - - - - - - -
11. Hands - - - - - - - - - - 0.50 0.20 0.10 0.10 0.10 - - - -
12. Humans - - - 0.08 - - 0.08 - - - 0.08 0.75 - - - - - - -
13. Octopus - 0.10 - - - - - - - - 0.20 - 0.70 - - - - - -
14. Pliers - - - - - - - - - - - - - 1 - - - - -
15. Snakes - - - - - - - - - - - - - - 1 - - - -
16. Spectacles - - - - - 0.08 - - - - - - - 0.08 - 0.85 - - -
17. Spiders - - - 0.05 - - - - - - - - 0.19 - - - 0.76 - -
18. Tables 0.14 - - - 0.14 - - - - - - - - - - - - 0.71 -
19. Teddy-bears - - - - - - - - - - - - - - - - - - 1
Table 3.
Percentage of correct classification for the proposed algorithm on the MSB database with and without the sparse representation stage
| | With the sparse representation | Without the sparse representation |
|---|---|---|
| Percentage of correct classification | 83.7 | 77.3 |
Table 4.
Confusion matrix for the classification of 3D objects in the MSB database using the method of Atmosukarto et al. [16]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
1. Airplane 0.80 - - - - - - - 0.10 - - - - - - - - 0.10 -
2. Ants - 0.64 - 0.07 - - - - - - - 0.28 - - - - - - -
3. Birds 0.22 - 0.44 - - - - - - 0.11 - - - - - 0.11 - 0.11 -
4. Crabs - - - 0.84 - - 0.08 - - 0.08 - - - - - - - - -
5. Chairs - - 0.10 - 0.40 - - - 0.10 - - - - - - 0.10 - 0.30 -
6. Cups - - - - - 0.83 - - - 0.08 - - - - - - - - 0.08
7. Dinosaurs - - - - - - 0.76 - 0.07 0.16 - - - - - - - - -
8. Dolphins 0.20 - - - - - - 0.30 0.05 - - - - - - - - - -
9. Fishes - - - - - - - - 1 - - - - - - - - - -
10. Four-limbs - - - - - - - - - 1 - - - - - - - - -
11. Hands - 0.20 - - - - - - - - 0.30 0.10 - - - - 0.40 - -
12. Humans - - - - - - - - - - - 0.81 - 0.09 - - 0.09 - -
13. Octopus - - - - - 0.08 - 0.08 - 0.16 - 0.08 0.33 0.16 - - 0.08 - -
14. Pliers - - - - - - - - - - - - - 0.80 - 0.20 - - -
15. Snakes 0.16 0.16 - - - - - - 0.16 - - - - - 0.33 - - 0.16 -
16. Spectacles 0.08 - - - - - - - - - - - - - - 0.91 - - -
17. Spiders - - - - - 0.07 0.07 0.07 - 0.35 - 0.07 - - - - 0.35 - -
18. Tables - - - - - - - - 0.40 - - - - - - - - 0.60 -
19. Teddy-bears - - - - - - - - - - - - - - - - - - 1
Table 5.
Percentage of correct classification for the proposed algorithm on the PSB database with and without the sparse representation stage
| | With the sparse representation | Without the sparse representation |
|---|---|---|
| Percentage of correct classification | 85.9 | 68.91 |
Table 6.
Comparison of the results of the proposed algorithm with the method of Bu et al. [18] using DBNs and other dimensionality reduction algorithms, as well as the Hamid and Nakajima [38] approach
| Algorithm | Percentage of correct recognition |
|---|---|
| GA-BoF | 71.4 |
| PCA | 76.5 |
| MDS | 77.1 |
| LDA | 72.5 |
| LLE | 73.7 |
| DBN | 85.1 |
| Hamid and Nakajima [38] | 70.05 |
| Proposed method with 9 views | 85.9 |
| Proposed method with 20 views | 89.7 |
The general block scheme of the proposed algorithm.
Results of pose normalization on two typical 3D shapes: (a) before applying the pose normalization and (b) after applying the pose normalization.
The extracted views and the constructed view cube for a typical 3D shape model.
http://dispatchesfromturtleisland.blogspot.com/2016/11/a-short-history-of-nuclear-binding.html
## Monday, November 14, 2016
### A Short History Of Nuclear Binding Energy And The Nuclear Force
In the Standard Model of particle physics, the nuclear binding energy that binds protons and neutrons into atomic nuclei arises as a spillover from the strong force that binds quarks together into hadrons via an exchange of gluons according to the rules of quantum chromodynamics (QCD), and is mediated mostly via pions, rho mesons, and omega mesons exchanged between protons and neutrons in the nucleus of an atom.
But, for almost all practical purposes, what matters is the nuclear binding energy in an atom that arises from these interactions and not the details of the process that give rise to nuclear binding energy. Nuclear binding energy is the most important thing you need to know in order to do engineering and make predictions related to nuclear fission and nuclear fusion.
The nuclear binding energy of atoms in real life is summarized in the chart above.
#### Weizsäcker's formula
For a nucleus with A nucleons, including Z protons and N neutrons, a semi-empirical formula (also called Weizsäcker's formula, or the Bethe–Weizsäcker formula, or the Bethe–Weizsäcker mass formula) for the binding energy (BE) per nucleon is:
$$\frac{\text{BE}}{A\cdot\text{MeV}} = a - \frac{b}{A^{1/3}} - \frac{cZ^{2}}{A^{4/3}} - \frac{d\left(N-Z\right)^{2}}{A^{2}} \pm \frac{e}{A^{7/4}}$$
where the coefficients $a$, $b$, $c$, $d$, and $e$ are determined empirically by fitting to measured binding energies.
The first term, $a$, is called the saturation contribution and ensures that the binding energy per nucleon is the same for all nuclei to a first approximation. The $b/A^{1/3}$ term is a surface tension effect and is proportional to the number of nucleons that are situated on the nuclear surface; it is largest for light nuclei. The $cZ^{2}/A^{4/3}$ term is the Coulomb electrostatic repulsion; this becomes more important as $Z$ increases. The symmetry correction term $d(N-Z)^{2}/A^{2}$ takes into account the fact that in the absence of other effects the most stable arrangement has equal numbers of protons and neutrons; this is because the n-p interaction in a nucleus is stronger than either the n-n or p-p interaction. The pairing term $\pm e/A^{7/4}$ is purely empirical; it is + for even-even nuclei and − for odd-odd nuclei.
According to Wikipedia the formula: "gives a good approximation for atomic masses and several other effects, but does not explain the appearance of magic numbers of protons and neutrons, and the extra binding-energy and measure of stability that are associated with these numbers of nucleons. . . . The semi-empirical mass formula provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. This is because the formula does not consider the internal shell structure of the nucleus. For light nuclei, it is usually better to use a model that takes this structure into account."
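To make the formula concrete, here is a small evaluator. The coefficient defaults are illustrative, commonly quoted textbook-style values; they are assumptions, not values taken from this post:

```python
def semf_be_per_nucleon(Z, N, a=14.0, b=13.0, c=0.585, d=19.3, e=33.5):
    """Binding energy per nucleon (MeV) from the semi-empirical formula.

    The default coefficients are illustrative assumptions, not the
    post's own fitted values.
    """
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:        # even-even: pairing adds binding
        sign = +1
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd: pairing reduces binding
        sign = -1
    else:                                 # even-odd: no pairing term
        sign = 0
    return (a
            - b / A ** (1 / 3)             # surface tension term
            - c * Z ** 2 / A ** (4 / 3)    # Coulomb repulsion term
            - d * (N - Z) ** 2 / A ** 2    # symmetry correction term
            + sign * e / A ** (7 / 4))     # empirical pairing term

fe56 = semf_be_per_nucleon(26, 30)   # iron-56, near the curve's peak
u238 = semf_be_per_nucleon(92, 146)  # uranium-238
```

With these illustrative coefficients, iron-56 comes out near 8.7 MeV per nucleon and uranium-238 near 7.6 MeV, reproducing the qualitative shape of the binding-energy curve: heavy nuclei are less tightly bound per nucleon than iron-peak nuclei, which is why fission of heavy nuclei releases energy.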
#### Magic Numbers
What are the magic numbers (again per Wikipedia)?
In nuclear physics, a magic number is a number of nucleons (either protons or neutrons, separately) such that they are arranged into complete shells within the atomic nucleus. The seven most widely recognized magic numbers as of 2007 are 2, 8, 20, 28, 50, 82, and 126. Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula and are hence more stable against nuclear decay.
The unusual stability of isotopes having magic numbers means that transuranium elements can be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers.
Large isotopes with magic numbers of nucleons are said to exist in an island of stability. Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed. Before this was realized, higher magic numbers, such as 184, 258, 350, and 462, were predicted based on simple calculations that assumed spherical shapes: these are generated by the formula (see binomial coefficient). It is now believed that the sequence of spherical magic numbers cannot be extended in this way. Further predicted magic numbers are 114, 122, 124, and 164 for protons as well as 184, 196, 236, and 318 for neutrons. . . .
Nuclei which have neutron number and proton (atomic) numbers each equal to one of the magic numbers are called "double magic", and are especially stable against decay. Examples of double magic isotopes include helium-4, oxygen-16, calcium-40, calcium-48, nickel-48, nickel-78, and lead-208.
Double-magic effects may allow existence of stable isotopes which otherwise would not have been expected. An example is calcium-40, with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 and nickel-48 are double magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a light element, but like calcium-40, it is made stable by being double magic. Nickel-48, discovered in 1999, is the most proton-rich isotope known beyond helium-3. At the other extreme, nickel-78 is also doubly magic, with 28 protons and 50 neutrons, a ratio observed only in much heavier elements, apart from tritium, with one proton and two neutrons (Ni-78: 28/50 = 0.56; U-238: 92/146 = 0.63).
Magic number shell effects are seen in ordinary abundances of elements: helium-4 is among the most abundant (and stable) nuclei in the universe, and lead-208 is the heaviest stable nuclide.
Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, the nuclides tin-100 and tin-132 are examples of doubly magic isotopes of tin that are unstable, and represent endpoints beyond which stability drops off rapidly.
The nuclear shell model forms the basis of the "magic number" determination.
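The magic-number bookkeeping above reduces to a simple lookup. This sketch (the helper names are illustrative, not from the post) classifies a nuclide by how many of its nucleon counts fall on the seven widely recognized magic numbers:

```python
# The seven most widely recognized magic numbers (protons or neutrons).
MAGIC_NUMBERS = {2, 8, 20, 28, 50, 82, 126}

def magicness(Z, N):
    """Return 'double magic', 'magic', or 'not magic' for protons Z, neutrons N."""
    hits = (Z in MAGIC_NUMBERS) + (N in MAGIC_NUMBERS)
    return {2: "double magic", 1: "magic", 0: "not magic"}[hits]

examples = {
    "helium-4": magicness(2, 2),     # Z and N both magic
    "lead-208": magicness(82, 126),  # heaviest stable double-magic nuclide
    "tin-120": magicness(50, 70),    # only Z is magic
    "iron-56": magicness(26, 30),    # neither count is magic
}
```

This is only the counting rule; the stability consequences described above come from the underlying shell structure, not from the numbers themselves.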
#### The Nuclear Force a.k.a. Residual Strong Force
Nuclear binding energy arises from the nuclear force, a.k.a. the residual strong force, which is not to be confused with the gluon-mediated strong interaction of QCD from which it derives residually. The neutral rho meson plays a part in carrying this force, together with pions (pseudoscalar mesons made of up and down quarks), charged rho mesons (made of an up and an anti-down quark, or an anti-up and a down quark), and omega mesons (vector mesons that combine the components of the neutral rho meson in a different way), and it is this force that gives rise to nuclear binding energy within atoms. Per Wikipedia:
The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometer (fm, or 1.0 × 10^−15 m) between their centers, but rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. By comparison, the size of an atom, measured in angstroms (Å, or 1.0 × 10^−10 m), is five orders of magnitude larger. The nuclear force is not simple, however, since it depends on the nucleon spins, has a tensor component, and may depend on the relative momentum of the nucleons.
A chart from Wikipedia demonstrating the nuclear force as a function of distance; it ignores the more complex aspects of the nuclear force.
The Yukawa potential [of Yukawa, per his 1934 theory shown above] (also called a screened Coulomb potential) is a potential of the form

$$V_{\text{Yukawa}}(r) = -g^{2}\,\frac{e^{-mr}}{r}$$

where $g$ is a magnitude scaling constant, i.e., the amplitude of the potential, $m$ is the Yukawa particle mass, and $r$ is the radial distance to the particle. The potential is monotone increasing, implying that the force is always attractive. The constants are determined empirically. The Yukawa potential depends only on the distance between particles, r, hence it models a central force. . . .
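A quick numerical comparison of the screened Yukawa form against an unscreened 1/r potential shows how the exponential factor kills the force beyond a few times 1/m; here g and m are set to 1 purely for illustration, in natural units (not fitted physical values):

```python
import math

def yukawa(r, g=1.0, m=1.0):
    """Attractive Yukawa (screened Coulomb) potential: -g^2 * exp(-m*r) / r."""
    return -g**2 * math.exp(-m * r) / r

def unscreened(r, g=1.0):
    """The same amplitude without the exponential screening, for comparison."""
    return -g**2 / r

# At r = 5/m the screened potential is roughly 150x weaker than 1/r alone,
# illustrating why the force is negligible beyond a few femtometers.
ratio = yukawa(5.0) / unscreened(5.0)
```

The screening factor exp(−mr) is what makes the force short-ranged: a heavier exchanged particle (larger m) gives a shorter range, which is how Yukawa estimated the pion mass from the observed range of the nuclear force.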
To disassemble a nucleus into unbound protons and neutrons requires work against the nuclear force. Conversely, energy is released when a nucleus is created from free nucleons or other nuclei: the nuclear binding energy. Because of mass–energy equivalence (i.e. Einstein's famous formula E = mc2), releasing this energy causes the mass of the nucleus to be lower than the total mass of the individual nucleons, leading to the so-called "mass defect".
The nuclear force is nearly independent of whether the nucleons are neutrons or protons. This property is called charge independence. The force depends on whether the spins of the nucleons are parallel or antiparallel, as it has a non-central or tensor component. This part of the force does not conserve orbital angular momentum, which under the action of central forces is conserved.
The symmetry resulting in the strong force, proposed by Werner Heisenberg, is that protons and neutrons are identical in every respect, other than their charge. This is not completely true, because neutrons are a tiny bit heavier, but it is an approximate symmetry. Protons and neutrons are therefore viewed as the same particle, but with different isospin quantum number. The strong force is invariant under SU(2) transformations, just as are particles with intrinsic spin. Isospin and intrinsic spin are related under this SU(2) symmetry group. There are only strong attractions when the total isospin is 0, which is confirmed by experiment.
Our understanding of the nuclear force is obtained by scattering experiments and the binding energy of light nuclei.
The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons. The vector mesons account for the spin-dependence of the nuclear force in this "virtual meson" picture.
The nuclear force is distinct from what historically was known as the weak nuclear force. The weak interaction is one of the four fundamental interactions, and plays a role in such processes as beta decay. The weak force plays no role in the interaction of nucleons, though it is responsible for the decay of neutrons to protons and vice versa.
Feynman diagram of a strong proton–neutron interaction mediated by a neutral pion. Time proceeds from left to right.
#### Historical Timing
The semi-empirical formula for nuclear binding energy was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today.
Magic number shell effects were first noted in 1933, although the idea was largely dropped until it was rediscovered in 1948.
In 1934, Hideki Yukawa made the earliest attempt to explain the nature of the nuclear force. According to his theory, massive bosons (mesons) mediate the interaction between two nucleons. Although, in light of quantum chromodynamics (QCD), meson theory is no longer perceived as fundamental, the meson-exchange concept (where hadrons are treated as elementary particles) continues to represent the best working model for a quantitative NN potential.
Throughout the 1930s a group at Columbia University led by I. I. Rabi developed magnetic resonance techniques to determine the magnetic moments of nuclei. These measurements led to the discovery in 1939 that the deuteron also possessed an electric quadrupole moment. This electrical property of the deuteron had been interfering with the measurements by the Rabi group. The deuteron, composed of a proton and a neutron, is one of the simplest nuclear systems. The discovery meant that the physical shape of the deuteron was not symmetric, which provided valuable insight into the nature of the nuclear force binding nucleons. In particular, the result showed that the nuclear force was not a central force, but had a tensor character. Hans Bethe identified the discovery of the deuteron's quadrupole moment as one of the important events during the formative years of nuclear physics.
Historically, the task of describing the nuclear force phenomenologically was formidable. The first semi-empirical quantitative models came in the mid-1950s, such as the Woods–Saxon potential(1954). There was substantial progress in experiment and theory related to the nuclear force in the 1960s and 1970s. One influential model was the Reid potential (1968). In recent years, experimenters have concentrated on the subtleties of the nuclear force, such as its charge dependence, the precise value of the πNN coupling constant, improved phase shift analysis, high-precision NN data, high-precision NN potentials, NN scattering at intermediate and high energies, and attempts to derive the nuclear force from QCD.
By comparison, the muon was discovered in 1936 (a surprise, as it had not been predicted) and confirmed in 1937. It was originally believed to be the pion, the meson predicted by Hideki Yukawa in 1934; the pion itself was not discovered until 1947.
Kaons, the first particles discovered with the property of "strangeness" (i.e. a strange quark component), were found in 1947 and studied further through 1955, although how these mesons fit into the larger picture was not understood until the quark model was proposed in 1964.
The neutrino was predicted in 1930 and first experimentally detected in 1956. The muon neutrino was detected in 1962 and the tau neutrino was detected in 2000 (with each predicted not long after the charged counterpart was detected).
More of the history of nuclear mass measurements and evaluation can be found in this 2006 paper.
https://socratic.org/questions/how-do-you-simplify-frac-frac-2-3-frac-7-8
|
How do you simplify $\frac{\frac{2}{3}}{\frac{7}{8}}$?
May 16, 2018
$\frac{\frac{2}{3}}{\frac{7}{8}} = \frac{16}{21}$
Explanation:
Given -
$\frac{\frac{2}{3}}{\frac{7}{8}}$
It can be written as
$\frac{2}{3} \div \frac{7}{8}$
Rewrite it as
$\frac{2}{3} \times \frac{8}{7} = \frac{16}{21}$
So -
$\frac{\frac{2}{3}}{\frac{7}{8}} = \frac{16}{21}$
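As a quick numerical check, Python's `fractions` module performs exactly the same reciprocal-and-multiply step with exact rational arithmetic:

```python
from fractions import Fraction

# Dividing by a fraction is the same as multiplying by its reciprocal:
# (2/3) / (7/8) = (2/3) * (8/7) = 16/21
result = Fraction(2, 3) / Fraction(7, 8)
print(result)  # 16/21
```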
http://mathematica.stackexchange.com/questions/34554/nintegrate-receiving-a-function-with-numericq-which-returns-a-list
|
# NIntegrate receiving a function with NumericQ which returns a list
Can some one please explain why
f[x_?NumericQ] := {{x, 2}, {-1, -x}}
NIntegrate[ f[S], {S, 0, 1}]
(*->NIntegrate::inum: Integrand f[S] is not numerical at {S} = {0.00795732}. >> *)
A possible work around is
f1[x_?NumericQ][part__] := {{x, 2}, {-1, -x}}[[part]]
then use something like
ArrayReshape[ NIntegrate[ f1[S][#[[1]], #[[2]]] & /@ {{1, 1}, {1, 2}, {2, 1}, {2, 2}}, {S, 0, 1}], {2, 2}]
but this is ugly, and does it evaluate f1 four times?
This type of problem appears when f involves matrix calculations and a numerical component (so it cannot be done symbolically). One is then forced to calculate all the elements, even when only one is needed. So it would be advantageous if NIntegrate did not evaluate f more times than necessary.
Thanks in advance for any assistance.
Possible related questions 1, 2 and 3.
-
Am I the only one to see a missing } over there? – Peltio Oct 22 '13 at 14:18
Haha yes a } was missing, but that doesn't change the question. – Artur Gower Oct 22 '13 at 17:33
I can't think out a work-around with the existence of _?NumericQ, but what do you really want? Just a explanation for the failure? A numerical integration without symbolic processing for a list of expressions? Or something else? – xzczd Oct 25 '13 at 11:24
Hi @xzczd, basically yes, I want "A numerical integration without symbolic processing for a list". I've added a short explanation at the end of the post. Thanks for thinking on this. Cheers – Artur Gower Oct 26 '13 at 12:49
Er… Sorry but I can't understand the added explanation very well, and I just noticed that the listPart isn't actually used in your work-around… BTW, have you considered something like Method -> {Automatic, "SymbolicProcessing" -> False}? – xzczd Oct 26 '13 at 13:48
Perhaps someone who knows can address the actual limitations of NIntegrate. It seems to me that, despite a couple of examples in the documentation, NIntegrate does not integrate vectors, matrices, and other arrays as arrays per se. Instead it integrates their components individually, effectively mapping NIntegrate onto the components.
The main evidence is the following. If I make f[S] return four component expressions, NIntegrate gets mapped onto each one. Note the f[1] is evaluated first, which would tell NIntegrate that the value of f is a 2x2 array. In the OP's code, f[S] is a single expression, which NIntegrate does not like.
ff[x_?NumericQ] := {{x, 2}, {-1, -x}};
ff[x_] = Array[ff[x, ##] &, {2, 2}];
f[x_] := (Print[ff@x]; ff[x]);
NIntegrate[f[S], {S, 0, 1}]
{{1, 2}, {-1, -1}}
{{ff[S,1,1], ff[S,1,2]}, {ff[S,2,1], ff[S,2,2]}}
NIntegrate::inumr: The integrand ff[S,1,1] has evaluated to non-numerical values for all sampling points in the region with boundaries {{0,1}}. >>
...
General::stop: Further output of NIntegrate::inumr will be suppressed during this calculation. >>
(* {{NIntegrate[ff[S, 1, 1], {S, 0, 1}],
NIntegrate[ff[S, 1, 2], {S, 0, 1}]},
{NIntegrate[ff[S, 2, 1], {S, 0, 1}],
NIntegrate[ff[S, 2, 2], {S, 0, 1}]}} *)
I don't claim this is conclusive evidence, but I have not found a way to use NIntegrate to integrate the OP's f.
Workaround
NIntegrate and NDSolve are not equivalent in functionality, speed, or accuracy. In this case, NDSolve can address the OP's problem on its own terms -- that is, it can integrate a matrix differential equation in terms of matrices.
ClearAll[a, f];
f[x_?NumericQ] := {{x, 2}, {-1, -x}};
asol = NDSolveValue[{a'[t] == f[t], a[0] == {{0, 0}, {0, 0}}}, a, {t, 0, 1}];
asol[1]
(* {{0.5, 2.}, {-1., -0.5}} *)
It seems (to me) that if the component functions of the matrix function f could be integrated separately, then NIntegrate is likely to do a superior job finding the integrals. But when the components cannot be integrated independently, one can use NDSolve.
-
https://www.vedantu.com/physics/scalar-and-vector
|
# Scalar and Vector
Physical quantities can be classified into two categories, which are scalars and vectors. Quantities like mass or density can be described by their numerical values and appropriate units only. These quantities are called “scalars”. However, quantities like velocity or force require the specifications of a numerical value and a direction. For example, specifying the value of velocity is not enough to understand an object’s motion. It is necessary to mention the direction of its motion. Such quantities are referred to as vectors. The physical interpretations, algebra, and calculus are very different for the two types of quantities.
### Scalar Quantity Definition
A scalar quantity only has a magnitude and it can be represented by a number only. A scalar does not have any direction. The addition of scalars follows the generic rules of the addition of numbers.
### Vector Quantity
A physical quantity, having both magnitude and direction, is referred to as a vector. The addition of two vectors does not follow ordinary algebra. A vector quantity is represented with an arrow over a letter or a boldface letter. Geometrically, it is represented by a line segment, having an arrow at one end. The arrow describes the direction and the length of the segment gives the magnitude.
### Examples of Scalar and Vector Quantities
Some common examples of scalar quantities are mass, time, speed, volume, temperature, density, and many more.
Displacement, velocity, acceleration, momentum, force, weight, etc. quantities are represented by vectors.
### Laws of Addition of Vectors
Vector addition can be defined using any of the following laws,
Triangle Law: If two vectors are denoted by the sides of a triangle in the same order, the resultant vector is given by the third side of the triangle, taken in the opposite order.
Parallelogram Law: If two vectors are denoted by two adjacent sides of a parallelogram, the resultant vector is given by the diagonal that passes through the point of intersection of those sides.
The resultant (sum) of two vectors a and b, with magnitudes a and b and an angle $\alpha$ between them, is given by,
c = a + b
The resultant vector c has magnitude,
c = $\sqrt{a^{2}+b^{2}+2ab\cos\alpha}$
It makes an angle $\theta$ with the vector a such that,
$\tan\theta = \frac{b\sin\alpha}{a+b\cos\alpha}$
Vector subtraction can be expressed as addition of the inverted vector to be subtracted.
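The magnitude and direction formulas above can be checked numerically. Below is a minimal Python sketch (the function name `resultant` is ours, not from the text); for two perpendicular vectors of magnitudes 3 and 4 it recovers the familiar 3-4-5 triangle:

```python
import math

def resultant(a, b, alpha_deg):
    """Magnitude of a + b and its angle (in degrees) relative to vector a,
    where alpha_deg is the angle between the two vectors."""
    alpha = math.radians(alpha_deg)
    c = math.sqrt(a**2 + b**2 + 2 * a * b * math.cos(alpha))
    theta = math.degrees(math.atan2(b * math.sin(alpha), a + b * math.cos(alpha)))
    return c, theta

# Perpendicular vectors of magnitudes 3 and 4: resultant 5, at ~53.13 deg from a.
c, theta = resultant(3, 4, 90)
print(round(c, 3), round(theta, 2))  # 5.0 53.13
```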
### Force is Scalar or Vector?
Force is a vector quantity: it has both magnitude and direction, and it can change the state of motion of an object. The SI unit of force is the Newton (N).
### Mass is Scalar or Vector?
Mass is a scalar quantity. It is a measure of the inertia of an object. Mass can be represented by a number only. The SI unit of mass is kg.
### Weight is Scalar or Vector?
Weight is a vector quantity. It is given by the amount of force exerted on an object, due to a gravitational force. The weight of an object on the Earth has a direction towards the center of the Earth. SI unit of weight is Newton (N).
### Displacement is Scalar or Vector?
Displacement of an object is given by the straight distance traversed by the object at any given time interval. It is a vector quantity and it points from initial to final position of the body within that interval of time. The SI unit of displacement is meter (m).
### Speed is Scalar or Vector?
Speed of an object is a scalar but velocity is a vector. Velocity has a direction as that of displacement. Velocity points in the direction of motion. The SI units of both speed and velocity are m/s.
### Acceleration is Scalar or Vector?
Acceleration of an object is caused by a change in the velocity of the object. It is a vector quantity with SI unit $m/s^{2}$.
### Area is Vector or Scalar?
Area is a vector quantity. It has a magnitude equal to the amount of space inside any boundary. The direction normal to that space is associated with the area. The SI unit is $m^{2}$.
### Pressure is Scalar or Vector?
Pressure is the amount of normal force per unit area. It is a scalar quantity, although force is a vector. The Pascal (Pa), or $N/m^{2}$, is the SI unit of pressure.
### Work is Scalar or Vector?
Work is the energy associated with a force. If a force acts on a body and the body undergoes a displacement, the amount of work done is the product of force and displacement parallel to the force. Work has the dimensions of energy, which is also a scalar. The SI unit of work is Joule (J).
### Did You Know?
• When an object moves along a path joining two points, the distance is measured along the trajectory whereas displacement is the shortest path joining the two points. Consequently, distance varies if the object follows different trajectories between the initial and final positions. However, the displacement between two fixed positions is independent of the path followed by the object. Distance is a scalar, however, displacement is a vector.
• Speed and velocity are closely related but different concepts. For example, the speed of an object remains constant throughout a uniform circular motion but the velocity is different at every point since the direction of velocity changes.
• The weight of a body depends on its mass. Although the mass of an object remains the same, its weight can vary due to variation in the gravitational field.
https://kerodon.net/tag/00QP
|
# Kerodon
Variant 2.5.5.15 (Relative Chain Complexes). Let $S_{\bullet }$ be a simplicial set and let $S'_{\bullet } \subseteq S_{\bullet }$ be a simplicial subset. Then we can identify the free simplicial abelian group $\operatorname{\mathbf{Z}}[ S'_{\bullet } ]$ with a simplicial subgroup of $\operatorname{\mathbf{Z}}[ S_{\bullet } ]$. We let $\mathrm{C}_{\ast }( S,S'; \operatorname{\mathbf{Z}})$ and $\mathrm{N}_{\ast }(S,S'; \operatorname{\mathbf{Z}})$ denote the Moore complex and normalized Moore complex of the simplicial abelian group $\operatorname{\mathbf{Z}}[ S_{\bullet } ] / \operatorname{\mathbf{Z}}[ S'_{\bullet } ]$. By virtue of Proposition 2.5.5.11, these complexes have the same homology groups, which we denote by $\mathrm{H}_{\ast }(S,S'; \operatorname{\mathbf{Z}})$ and refer to as the relative homology groups of the pair $(S'_{\bullet } \subseteq S_{\bullet })$.
https://collegephysicsanswers.com/openstax-solutions/length-nylon-rope-which-mountain-climber-suspended-has-force-constant-140-times
|
Question
The length of nylon rope from which a mountain climber is suspended has a force constant of $1.40 \times 10^4 \textrm{ N/m}$. (a) What is the frequency at which he bounces, given that his mass plus the mass of his equipment is 90.0 kg? (b) How much would this rope stretch to break the climber’s fall if he free-falls 2.00 m before the rope runs out of slack? Hint: Use conservation of energy. (c) Repeat both parts of this problem in the situation where twice this length of nylon rope is used.
1. $1.99 \textrm{ Hz}$
2. $56.9 \textrm{ cm}$
3. $1.40 \textrm{ Hz}$, $84.8 \textrm{ cm}$
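All three answers can be reproduced with a short script. This is a sketch under the assumption $g \approx 9.81 \textrm{ m/s}^2$; the function and variable names are ours. Part (a) treats the rope as a spring ($f = \frac{1}{2\pi}\sqrt{k/m}$), part (b) applies conservation of energy ($mg(h+x) = \frac{1}{2}kx^2$), and part (c) halves the force constant, since doubling a rope's length halves its stiffness:

```python
import math

k = 1.40e4   # N/m, force constant of the rope
m = 90.0     # kg, climber plus equipment
g = 9.81     # m/s^2 (assumed value)
h = 2.00     # m of free fall before the rope engages

def bounce_frequency(k, m):
    # Mass on a spring: f = (1 / 2 pi) * sqrt(k / m)
    return math.sqrt(k / m) / (2 * math.pi)

def stretch(k, m, h):
    # Energy conservation: m g (h + x) = (1/2) k x^2
    # -> (k/2) x^2 - m g x - m g h = 0; take the positive root
    a, b, c = k / 2, -m * g, -m * g * h
    return (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)

print(bounce_frequency(k, m))       # ~1.99 Hz
print(stretch(k, m, h))             # ~0.569 m = 56.9 cm

# (c) twice the rope length -> half the force constant
print(bounce_frequency(k / 2, m))   # ~1.40 Hz
print(stretch(k / 2, m, h))         # ~0.848 m = 84.8 cm
```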
https://www.esaral.com/q/a-welding-fuel-gas-contains-carbon-and-hydrogen-only-burning-a-small-sample/
|
A welding fuel gas contains carbon and hydrogen only. Burning a small sample
Question.
A welding fuel gas contains carbon and hydrogen only. Burning a small sample of it in oxygen gives 3.38 g carbon dioxide, 0.690 g of water and no other products. A volume of 10.0 L (measured at STP) of this welding gas is found to weigh 11.6 g. Calculate
(i)empirical formula,
(ii)molar mass of the gas, and
(iii)molecular formula.
Solution:
(i) 1 mole $(44 \mathrm{~g})$ of $\mathrm{CO}_{2}$ contains $12 \mathrm{~g}$ of carbon.
$\therefore 3.38 \mathrm{~g}$ of $\mathrm{CO}_{2}$ will contain carbon
$=\frac{12 \mathrm{~g}}{44 \mathrm{~g}} \times 3.38 \mathrm{~g}$
$=0.9217 \mathrm{~g}$
$18 \mathrm{~g}$ of water contains $2 \mathrm{~g}$ of hydrogen.
$\therefore 0.690 \mathrm{~g}$ of water will contain hydrogen
$=\frac{2 \mathrm{~g}}{18 \mathrm{~g}} \times 0.690$
$=0.0767 \mathrm{~g}$
Since carbon and hydrogen are the only constituents of the compound, the total mass of the compound is
= 0.9217 g + 0.0767 g
= 0.9984 g
$\therefore$ Percent of $C$ in the compound
$=\frac{0.9217 \mathrm{~g}}{0.9984 \mathrm{~g}} \times 100$
$=92.32 \%$
Percent of $\mathrm{H}$ in the compound $=\frac{0.0767 \mathrm{~g}}{0.9984 \mathrm{~g}} \times 100$
$=7.68 \%$
Moles of carbon in the compound
$=\frac{92.32}{12.00}$
= 7.69
Moles of hydrogen in the compound
$=\frac{7.68}{1}$
= 7.68
$\therefore$ Ratio of carbon to hydrogen in the compound $=7.69: 7.68=1: 1$
Hence, the empirical formula of the gas is CH.
(ii) Given,
Weight of $10.0 \mathrm{~L}$ of the gas (at S.T.P) $=11.6 \mathrm{~g}$
$\therefore$ Weight of $22.4 \mathrm{~L}$ of gas at STP
$=\frac{11.6 \mathrm{~g}}{10.0 \mathrm{~L}} \times 22.4 \mathrm{~L}$
$=25.984 \mathrm{~g}$
$\approx 26 \mathrm{~g}$
Hence, the molar mass of the gas is 26 g.
(iii) Empirical formula mass of $\mathrm{CH}=12+1=13 \mathrm{~g}$
$n=\frac{\text { Molar mass of gas }}{\text { Empirical formula mass of gas }}$
$=\frac{26 \mathrm{~g}}{13 \mathrm{~g}}$
$n=2$
$\therefore$ Molecular formula of gas $=(\mathrm{CH})_{n}$
$=\mathrm{C}_{2} \mathrm{H}_{2}$
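The whole calculation can be checked with a few lines of Python. This is a sketch using the problem's given values; the variable names are ours:

```python
# Combustion analysis of the welding gas, reproduced numerically.
m_CO2, m_H2O = 3.38, 0.690          # g of combustion products
m_C = 12 / 44 * m_CO2               # g of carbon in the sample (12 g C per 44 g CO2)
m_H = 2 / 18 * m_H2O                # g of hydrogen in the sample (2 g H per 18 g H2O)
mol_C, mol_H = m_C / 12, m_H / 1.0  # moles of each element

ratio = mol_C / mol_H               # ~1, so the empirical formula is CH
molar_mass = 11.6 / 10.0 * 22.4     # g/mol: mass of 22.4 L at STP
n = round(molar_mass / 13)          # 13 g/mol is the empirical-formula mass of CH

print(round(ratio, 2), round(molar_mass, 1), n)  # 1.0 26.0 2, i.e. C2H2
```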
http://www.ck12.org/geometry/Tangent-Identification/lesson/Tangent-Ratio/r11/
|
# Tangent Identification
## Function of an angle equal to the opposite leg over the adjacent leg of a right triangle.
Tangent Ratio
As the measure of an angle increases between $0^\circ$ and $90^\circ$ , how does the tangent ratio of the angle change?
#### Guidance
Recall that one way to show that two triangles are similar is to show that they have two pairs of congruent angles. This means that two right triangles will be similar if they have one pair of congruent non-right angles.
The two right triangles above are similar because they have two pairs of congruent angles. This means that their corresponding sides are proportional. $\overline{DF}$ and $\overline{AC}$ are corresponding sides because they are both opposite the $22^\circ$ angle. $\frac{DF}{AC}=\frac{4}{2}=2$ , so the scale factor between the two triangles is 2. This means that $x=10$ , because $\frac{FE}{CB}=\frac{10}{5}=2$ .
The ratio between the two legs of any $22^\circ$ right triangle will always be the same, because all $22^\circ$ right triangles are similar. The ratio of the length of the leg opposite the $22^\circ$ angle to the length of the leg adjacent to the $22^\circ$ angle will be $\frac{2}{5}=0.4$ . You can use this fact to find a missing side of another $22^\circ$ right triangle.
Because this is a $22^\circ$ right triangle, you know that $\frac{opposite \ leg}{adjacent \ leg}=\frac{2}{5}=0.4$ .
$\begin{aligned}\frac{opposite \ leg}{adjacent \ leg} &= 0.4\\\frac{7}{x} &= 0.4\\0.4x &= 7\\x &= 17.5\end{aligned}$
The ratio between the opposite leg and the adjacent leg for a given angle in a right triangle is called the tangent ratio. Your scientific or graphing calculator has tangent programmed into it, so that you can determine the $\frac{opposite \ leg}{adjacent \ leg}$ ratio for any angle within a right triangle. The abbreviation for tangent is tan.
Example A
Use your calculator to find the tangent of $75^\circ$ . What does this value represent?
Solution: Make sure your calculator is in degree mode. Then, type “ $\tan (75)$ ”.
$\tan (75^\circ) \approx 3.732$
This means that the ratio of the length of the opposite leg to the length of the adjacent leg for a $75^\circ$ angle within a right triangle will be approximately 3.732.
Example B
Solve for $x$ .
Solution: From Example A, you know that the ratio $\frac{opposite \ leg}{adjacent \ leg} \approx 3.732$ . You can use this to solve for $x$ .
$\begin{aligned}\frac{opposite \ leg}{adjacent \ leg} & \approx 3.732\\\frac{x}{2} & \approx 3.732\\x & \approx 7.464\end{aligned}$
Example C
Solve for $x$ and $y$ .
Solution: You can use the $65^\circ$ angle to find the correct ratio between 24 and $x$ .
$\begin{aligned}\tan (65^\circ) &= \frac{opposite \ leg}{adjacent \ leg}\\2.145 & \approx \frac{24}{x}\\x & \approx \frac{24}{2.145}\\x & \approx 11.189\end{aligned}$
Note that this answer is only approximate because you rounded the value of $\tan 65^\circ$ . An exact answer will include “ $\tan$ ”. The exact answer is:
$x=\frac{24}{\tan 65^\circ}$
To solve for $y$ , you can use the Pythagorean Theorem because this is a right triangle.
$\begin{aligned}11.189^2+24^2 &= y^2\\701.194 &= y^2\\26.48 &= y\end{aligned}$
Concept Problem Revisited
As the measure of an angle increases between $0^\circ$ and $90^\circ$ , how does the tangent ratio of the angle change?
As an angle increases, the length of its opposite leg increases. Therefore, $\frac{opposite \ leg}{adjacent \ leg}$ increases and thus the value of the tangent ratio increases.
#### Vocabulary
Two figures are similar if a similarity transformation will carry one figure to the other. Similar figures will always have corresponding angles congruent and corresponding sides proportional.
AA, or Angle-Angle , is a criterion for triangle similarity. The AA criterion for triangle similarity states that if two triangles have two pairs of congruent angles, then the triangles are similar.
The tangent (tan) of an angle within a right triangle is the ratio of the length of the side opposite the angle to the length of the side adjacent to the angle.
#### Guided Practice
1. Tangent tells you the ratio of the two legs of a right triangle with a given angle. Why does the tangent ratio not work in the same way for non-right triangles?
2. Use your calculator to find the tangent of $45^\circ$ . What does this value represent? Why does this value make sense?
3. Solve for $x$ .
1. Two right triangles with a $32^\circ$ angle will be similar. Two non-right triangles with a $32^\circ$ angle will not necessarily be similar. The tangent ratio works for right triangles because all right triangles with a given angle are similar. The tangent ratio doesn't work in the same way for non-right triangles because not all non-right triangles with a given angle are similar. You can only use the tangent ratio for right triangles.
2. $\tan (45^\circ)=1$ . This means that the ratio of the length of the opposite leg to the length of the adjacent leg is equal to 1 for right triangles with a $45^\circ$ angle.
This should make sense because right triangles with a $45^\circ$ angle are isosceles. The legs of an isosceles triangle are congruent, so the ratio between them will be 1.
3. Use the tangent ratio of a $35^\circ$ angle.
$\begin{aligned}\tan (35^\circ) &= \frac{opposite \ leg}{adjacent \ leg}\\\tan (35^\circ) &= \frac{x}{18}\\x &= 18 \tan (35^\circ)\\x & \approx 12.604\end{aligned}$
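The same calculation can be done in Python instead of on a calculator. Note that `math.tan` expects radians, so the degree measure must be converted first:

```python
import math

# tan(35 deg) = opposite / adjacent = x / 18, so x = 18 * tan(35 deg)
x = 18 * math.tan(math.radians(35))
print(round(x, 3))  # 12.604
```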
#### Practice
1. Why are all right triangles with a $40^\circ$ angle similar? What does this have to do with the tangent ratio?
2. Find the tangent of $40^\circ$ .
3. Solve for $x$ .
4. Find the tangent of $80^\circ$ .
5. Solve for $x$ .
6. Find the tangent of $10^\circ$ .
7. Solve for $x$ .
8. Your answer to #5 should be the same as your answer to #7. Why?
9. Find the tangent of $27^\circ$ .
10. Solve for $x$ .
11. Find the tangent of $42^\circ$ .
12. Solve for $x$ .
13. A right triangle has a $42^\circ$ angle. The base of the triangle, adjacent to the $42^\circ$ angle, is 5 inches. Find the area of the triangle.
14. Recall that the ratios between the sides of a 30-60-90 triangle are $1:\sqrt{3} : 2$ . Find the tangent of $30^\circ$ . Explain how this matches the ratios for a 30-60-90 triangle.
15. Explain why it makes sense that the value of the tangent ratio increases as the angle goes from $0^\circ$ to $90^\circ$ .
### Vocabulary Language: English
**AA Similarity Postulate:** If two angles in one triangle are congruent to two angles in another triangle, then the two triangles are similar.
**Congruent:** Congruent figures are identical in size, shape and measure.
**Similar:** Two figures are similar if they have the same shape, but not necessarily the same size.
**Tangent:** The tangent of an angle in a right triangle is a value found by dividing the length of the side opposite the given angle by the length of the side adjacent to the given angle.
https://cs.paperswithcode.com/paper/node-multiway-cut-and-subset-feedback-vertex
## Node Multiway Cut and Subset Feedback Vertex Set on Graphs of Bounded Mim-width
2 Oct 2019 · Benjamin Bergougnoux, Charis Papadopoulos, Jan Arne Telle ·
The two weighted graph problems Node Multiway Cut (NMC) and Subset Feedback Vertex Set (SFVS) both ask for a vertex set of minimum total weight, that for NMC disconnects a given set of terminals, and for SFVS intersects all cycles containing a vertex of a given set. We design a meta-algorithm that solves both problems in time $2^{O(rw^3)}\cdot n^{4}$, $2^{O(q^2\log(q))}\cdot n^{4}$, and $n^{O(k^2)}$, where $rw$ is the rank-width, $q$ the $\mathbb{Q}$-rank-width, and $k$ the mim-width of a given decomposition. This answers in the affirmative an open question raised by Jaffke et al. (Algorithmica, 2019) concerning an XP algorithm for SFVS parameterized by mim-width. By a unified algorithm, this solves both problems in polynomial time on the following graph classes: Interval, Permutation, and Bi-Interval graphs, Circular Arc and Circular Permutation graphs, Convex graphs, $k$-Polygon, Dilworth-$k$ and Co-$k$-Degenerate graphs for fixed $k$; and also on Leaf Power graphs if a leaf root is given as input, on $H$-Graphs for fixed $H$ if an $H$-representation is given as input, and on arbitrary powers of graphs in all the above classes. Prior to our results, only SFVS was known to be tractable, and only on Interval and Permutation graphs; all other results are new.
# Categories
Data Structures and Algorithms
http://clay6.com/qa/45576/a-solution-of-urea-in-water-has-a-boiling-point-of-100-128-c-calculate-the-
# A solution of urea in water has a boiling point of 100.128$^{\large\circ}$C. Calculate the freezing point of the same solution. Molal constants for water, $K_f$ and $K_b$, are 1.86$^{\large\circ}$C/molal and 0.512$^{\large\circ}$C/molal respectively.
$-0.465^{\large\circ}C$
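A quick arithmetic check (a Python sketch, not part of the original solution): the boiling-point elevation fixes the molality of the solution, and that same molality drives the freezing-point depression.

```python
Kb = 0.512   # boiling-point elevation constant of water, degC/molal
Kf = 1.86    # freezing-point depression constant of water, degC/molal

delta_Tb = 100.128 - 100.0     # observed boiling-point elevation, degC
molality = delta_Tb / Kb       # urea molality implied by the elevation
delta_Tf = Kf * molality       # freezing-point depression, degC

print(round(0.0 - delta_Tf, 3))  # -0.465
```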
https://tex.stackexchange.com/questions/653681/how-to-use-xcolor-color-names-in-tikzpicture-style
# How to use xcolor color names in tikzpicture style?
I want to use color names like MidnightBlue in a style for a tikzpicture to e.g. fill it with a specific color, is this possible? Or am I limited to the other style (red!30 etc.)?
This MWE fails with Package xcolor Error: Undefined color MidnightBlue:
\documentclass[tikz]{standalone}
\PassOptionsToPackage{svgnames}{xcolor}
\usepackage{tikz}
\usetikzlibrary{positioning,fit,calc,shapes}
\begin{document}
\begin{tikzpicture}[myStyle/.style={rectangle, fill=MidnightBlue}]
\node (test) [myStyle] { Test };
\end{tikzpicture}
\end{document}
• @Qrrbrbirlbel Thanks, can you post that as an answer? Totally missed that tikz is already loaded before. Aug 11 at 11:38
The package tikz (by way of package pgfcore) loads the xcolor package.
In your case, the standalone class with the option tikz also loads the tikz package (and configures other settings so that it works as intended). The \usepackage{tikz} then has no effect.
Thus, you need to say
\PassOptionsToPackage{svgnames}{xcolor}
\documentclass[tikz]{standalone}
or even just
\documentclass[svgnames,tikz]{standalone}
Without standalone you can just do
\documentclass{article}% or another that does not load TikZ
\usepackage[svgnames]{xcolor}
\usepackage{tikz}
or, similarly as with standalone,
\documentclass[svgnames]{article}
\usepackage{tikz}
Similar problems arise with the beamer class, however this one has a class option xcolor that can be used to forward options only to the xcolor package (xcolor=dvipsnames) – and not to all packages.
https://www.zbmath.org/?q=an%3A0439.05025
# zbMATH — the first resource for mathematics
A Hamiltonian decomposition of $$K^*_{2m},2m\geq 8$$. (English) Zbl 0439.05025
##### MSC:
05C20 Directed graphs (digraphs), tournaments
05C45 Eulerian and Hamiltonian graphs
05B15 Orthogonal arrays, Latin squares, Room squares
05C99 Graph theory
##### References:
[1] Berge, (), 187
[2] Bermond, J.C.; Faber, V., Decomposition of the complete directed graph into k-circuits, J. Combinatorial Theory B, 21, 146-155 (1976) · Zbl 0344.05123
[3] Keedwell, A.D., Some problems concerning complete Latin squares, (), 89-96 · Zbl 0296.05012
[4] Williams, E.J., Experimental designs balanced for the estimation of residual effects of treatments, Austral. J. Sci. Res. Ser. A, 2, 149-168 (1949)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
http://www.luxrender.net/forum/viewtopic.php?f=17&p=81422
## Post your picture so we can SEE you ! :)
Community discussion for topics unrelated to the project.
Moderator: coordinators
### Re: Post your picture so we can SEE you ! :)
jeanphi wrote:Dade, where has the engine gone?
I'm going slower ... and slower ... may be I'm becoming *cough* old *cough*.
Posts: 4800
Joined: Sat Apr 19, 2008 6:04 pm
Location: Italy
### Re: Post your picture so we can SEE you ! :)
And OpenCL reminds you of good'ol'days?
Linux builds packager
SATtva
Posts: 5500
Joined: Tue Apr 07, 2009 12:19 pm
Location: from Siberia with love
### Re: Post your picture so we can SEE you ! :)
Azazeo
Posts: 5
Joined: Fri Mar 26, 2010 10:55 am
### Re: Post your picture so we can SEE you ! :)
sprocket
Posts: 324
Joined: Sun Jan 03, 2010 7:59 am
Location: Australia
### Re: Post your picture so we can SEE you ! :)
This is a photo from about a year or so ago... likely update it fairly soon
Eros
Posts: 415
Joined: Wed Jul 22, 2009 8:37 am
### Re: Post your picture so we can SEE you ! :)
I made some mobile phone self portraits
sorry for bad quality
Last edited by Meelis on Fri May 04, 2012 1:52 pm, edited 1 time in total.
Meelis
Posts: 876
Joined: Sat Oct 17, 2009 2:16 am
### Re: Post your picture so we can SEE you ! :)
sorry for bad quality restricted access
fixed
Linux builds packager
SATtva
Posts: 5500
Joined: Tue Apr 07, 2009 12:19 pm
Location: from Siberia with love
### Re: Post your picture so we can SEE you ! :)
SATtva wrote:
sorry for bad quality restricted access
fixed
Sorry for that.
Meelis
Posts: 876
Joined: Sat Oct 17, 2009 2:16 am
### Re: Post your picture so we can SEE you ! :)
No need to apologize, just kidding.
Linux builds packager
SATtva
Posts: 5500
Joined: Tue Apr 07, 2009 12:19 pm
Location: from Siberia with love
### Re: Post your picture so we can SEE you ! :)
Cool idea.
Here i am.
What a nice day.
areandres
Posts: 122
Joined: Wed May 26, 2010 2:38 am
Location: Germany
http://www.physicsbootcamp.org/oneD-Acceleration.html
## Section 2.5 Acceleration
The rate at which the $x$-coordinate of an object changes with time gave us velocity $v_x\text{.}$ Now, we study the rate at which $v_x$ changes with time. This will give us acceleration for a motion along the $x$ axis.
In a later chapter you will learn that the acceleration of an object depends on the force you apply on the object. Therefore, it is of considerable interest to understand the concept of acceleration well.
Similar to our treatment of velocity, we will define an average acceleration and an instantaneous acceleration. For a motion on the $x$ axis, we will denote instantaneous acceleration by symbol $a_x$ and average acceleration by $a_{\text{av},x}\text{.}$
### Subsection 2.5.1 Average Acceleration
The average rate of change of velocity during an interval can be obtained by dividing the change in velocity by the interval. Let $v_{i,x}$ be the velocity at $t_i$ and $v_{f,x}$ be the velocity at $t_f\text{,}$ then average acceleration, $a_{\text{av},x}\text{,}$ will be
$$a_{\text{av},x} = \dfrac{v_{f,x} - v_{i,x}}{ t_f - t_i}.\tag{2.5.1}$$
We often write this as
\begin{equation*} a_{\text{av},x} = \dfrac{\Delta v_x}{ \Delta t}, \end{equation*}
where $\Delta v_x = v_{f,x} - v_{i,x}$ and $\Delta t = t_f - t_i\text{.}$ Note that for average acceleration, we do not need to know what happened at any instant other than the initial and final ones. For instance, for the motion shown in Figure 2.5.1, the average acceleration between A and B is obtained from $v_x$ at those points and the time between them.
Note that $v_x(\text{at B})$ is negative since the particle is moving towards negative $x$ axis.
\begin{align*} \amp v_x(A) = 2.0\text{ m/s}, \\ \amp v_x(B) = - 2.0\text{ m/s}, \\ \amp a_{\text{av},x} = \frac{-2.0-2.0}{2.0} = -2.0\text{ m/s}^2. \end{align*}
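The same computation as the worked example above, as a Python sketch (velocities and time interval are the values quoted from Figure 2.5.1):

```python
# Velocities read off the motion in Figure 2.5.1 (values from the text)
v_A = 2.0    # m/s at point A
v_B = -2.0   # m/s at point B (moving toward negative x)
dt = 2.0     # seconds elapsed between A and B

# Average acceleration is change in velocity over the interval
a_av = (v_B - v_A) / dt
print(a_av)  # -2.0  (m/s^2)
```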
### Subsection 2.5.2 Instantaneous Acceleration
Often we want acceleration at a particular instant, called instantaneous acceleration, or simply, acceleration. This refers to the average acceleration between that instant, say $t \text{,}$ and an instant very close to $t\text{,}$ e.g., a tiny amount of time later than $t\text{,}$ which we can indicate by $t + \Delta t \text{.}$ Or, alternately, between $t$ and a tiny instant before $t\text{,}$ say at $t-\Delta t\text{.}$
How close should the instants $t$ and $t + \Delta t$ be? The answer: as close as possible, without being exactly at $t\text{.}$ This is the notion of an infinitesimal interval already discussed in Subsection 2.4.3 on instantaneous velocity.
Denoting instantaneous acceleration for a motion along $x$ axis by symbol $a_x \text{,}$ we write this special averaging formally using the notion of a limit in Calculus,
$$a_x = \lim_{\Delta t \rightarrow 0}\dfrac{v_x\left(\text{at } t+\Delta t\right) - v_x\left(\text{at } t\right)}{\Delta t},\label{eq-instantaneous-acceleration-1d-limit}\tag{2.5.2}$$
which is also denoted by the derivative symbol $dv_x/dt\text{.}$
$$a_x = \dfrac{dv_x}{dt}.\tag{2.5.3}$$
Graphically, this derivative can be computed by the slope of the tangent to the $v_x$ versus $t$ plot at the instant of interest.
Positive and Negative $a_x$ When Moving Towards $x = + \infty\text{:}$ When an object is moving towards $x = + \infty$ with increasing speed, we will have $v_x\left(\text{at } t+\Delta t\right) \gt v_x\left(\text{at } t\right)\text{,}$ which will give positive $a_x\text{.}$ And, when the object is moving towards $x = + \infty$ with decreasing speed, we will have $v_x\left(\text{at } t+\Delta t\right) \lt v_x\left(\text{at } t\right)\text{,}$ which will give negative $a_x\text{.}$
Positive and Negative $a_x$ When Moving Towards $x = - \infty\text{:}$ When the object is moving towards $x = - \infty\text{,}$ $v_x \lt 0\text{.}$ That means that if the motion is with increasing speed, we will have $v_x\left(\text{at } t+\Delta t\right) \lt v_x\left(\text{at } t\right)\text{,}$ which will give negative $a_x\text{.}$ And, when the object is moving towards $x = - \infty$ but with decreasing speed, we will have $v_x\left(\text{at } t+\Delta t\right) \gt v_x\left(\text{at } t\right)\text{,}$ which will give positive $a_x\text{.}$
### Subsection 2.5.3 Graphical Definition of Acceleration
Since the slope of a function is the rate of change of that function, the rate of change of velocity at a particular instant can be obtained from the slope of the tangent of the velocity versus time plot. We can take this as defining acceleration $a_x$ for one-dimensional motion on the $x$ axis.
$$a_x = \text{ slope of tangent to }v_x\text{ versus }t.\tag{2.5.4}$$
If the plot of $v_x$ versus $t$ is linear, i.e., a straight line, then the tangent line is the same as the line of the plot; hence $a_x$ is just the slope of the line.
But, if the plot of $v_x$ versus $t$ is curved, then you would draw a tangent line at the instant of interest, and compute the slope of the tangent line to get $a_x \text{.}$
From the $v_x$ versus $t$ plot in Figure 2.5.5, find $a_x$ at the following instants: (a) $t = 1\text{ sec}\text{,}$ (b) $t = 3\text{ sec} \text{,}$ and (c) $t = 3.75\text{ sec} \text{.}$
Hint
Use slopes.
(a) $0\text{,}$ (b) $-4\text{ m/s}^2\text{,}$ (c) $4\text{ m/s}^2$
Solution
(a) The $v_x$ versus $t$ plot through the time when $t = 1\text{ sec}$ is a straight line. Therefore, $a_x$ will simply be slope of this part of the plot. The slope of this line is clearly zero. Therefore, $a_x (\text{at }t=1\text{ sec}) = 0\text{.}$
(b) The $v_x$ versus $t$ plot through the time when $t = 3\text{ sec}$ is a straight line. Therefore, $a_x$ will simply be slope of this part of the plot. The slope of this line is obtained by picking two points on the line between $t = 2\text{ sec}$ and $t = 3.5\text{ sec}\text{.}$ We pick points at $t = 2\text{ sec}$ and $t = 3\text{ sec}$ to obtain
\begin{equation*} a_x = \dfrac{0 - 4\text{ m/s}}{3\text{ s}-2\text{ s}} = -4\text{ m/s}^2. \end{equation*}
(c) The $v_x$ versus $t$ plot through the time when $t = 3.75\text{ sec}$ is a straight line. Therefore, $a_x$ will simply be the slope of this part of the plot. The slope of this line is obtained by picking two points on the line between $t = 3.5\text{ sec}$ and $t = 4.0\text{ sec}\text{.}$ We pick points at $t = 3.5\text{ sec}$ and $t = 4\text{ sec}$ to obtain
\begin{equation*} a_x = \dfrac{0 - (-2)\text{ m/s}}{4\text{ s}-3.5\text{ s}} = 4\text{ m/s}^2. \end{equation*}
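The slope computations in parts (b) and (c) can be sketched in Python (the point values are the ones read off Figure 2.5.5 in the solution above):

```python
# Slope of a straight-line segment of the v_x versus t plot gives a_x.
def slope(t1, v1, t2, v2):
    return (v2 - v1) / (t2 - t1)

print(slope(2.0, 4.0, 3.0, 0.0))    # -4.0 m/s^2, around t = 3 s
print(slope(3.5, -2.0, 4.0, 0.0))   # 4.0 m/s^2, around t = 3.75 s
```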
### Subsection 2.5.4 $\Delta v_x$ from $a_x$ Graphically
Since $a_x$ is the derivative of $v_x(t)\text{,}$ a change in $v_x$ is given by an integral of $a_x(t)\text{.}$ We can think of this integral as the area between the plot of $a_x(t)$ and the line $a_x=0\text{,}$ as illustrated in Figure 2.5.6. We have seen the same relation between $v_x$ and $x(t)$ as we are seeing between $a_x$ and $v_x(t)\text{.}$
The area under the curve is the area between the graph and the $a_x=0$ line. Therefore, positive $a_x$ gives positive area and negative $a_x$ gives negative area. A positive $a_x$ leads to a positive change in $v_x$ and a negative $a_x$ to a negative change in $v_x\text{.}$ This makes sense; say the starting velocity is $v_x = 10\text{ m/s}$ and the acceleration is constant, $a_x=-3\text{ m/s}^2\text{.}$ Then, the change in velocity in $2\text{ sec}$ will be $\Delta v_x = -3\times 2 = -6\text{ m/s}\text{,}$ giving $v_x = 4\text{ m/s}$ after $2\text{ sec}\text{.}$
A ball is initially ($t=0$) moving at $v_x = 3\text{ m/s}\text{.}$ The acceleration of the ball at different times are as shown in the $a_x$ versus $t$ plot in Figure 2.5.8. Find $v_x$ at (a) $t = 1\text{ sec} \text{,}$ (b) $t = 2\text{ sec} \text{,}$ and (c) $t = 4\text{ sec} \text{.}$
Hint
Use area-under-curve.
(a) $5\text{ m/s}\text{,}$ (b) $5\text{ m/s}$ , (c) $1\text{ m/s} \text{.}$
Solution
(a) From the area-under-curve we can find the change in $v_x$ during $t= 0$ to $t = 1\text{ sec}\text{.}$ Adding this change to the $v_x$ at $t= 0$ will give us the $v_x$ at $t=1\text{ sec}\text{.}$
\begin{equation*} \Delta v_x = \dfrac{1}{2}\times 4 \times 1 = 2\text{ m/s}. \end{equation*}
Therefore, $v_x$ at $t=1\text{ sec}\text{:}$
\begin{equation*} v_x(\text{ at }t=1\text{ sec }) = v_x(\text{ at }t=0\ ) + \Delta v_x = 5\text{ m/s}. \end{equation*}
(b) The area-under-the-curve during $t= 1\text{ sec}$ to $t = 2\text{ sec}$ is zero. Hence the velocity does not change over this interval, i.e., $\Delta v_x = 0 \text{.}$
\begin{equation*} v_x(\text{ at }t=2\text{ sec }) = v_x(\text{ at }t=1\text{ sec } ) + \Delta v_x = 5\text{ m/s}. \end{equation*}
(c) From the area-under-curve we can find the change in $t= 2\text{ sec}$ to $t = 4\text{ sec}\text{.}$ Adding this change to the $v_x$ at $t= 2\text{ sec}$ will give us the $v_x$ at $t=4\text{ sec}\text{.}$
\begin{equation*} \Delta v_x = -2 \times (4-2) = -4\text{ m/s}. \end{equation*}
Therefore, $v_x$ at $t=4\text{ sec}\text{:}$
\begin{equation*} v_x(\text{ at }t=4\text{ sec }) = v_x(\text{ at }t=2\text{ sec } ) + \Delta v_x = 1\text{ m/s}. \end{equation*}
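The accumulation of signed areas in this example can be sketched in Python (the three areas are the values computed in the solution from Figure 2.5.8):

```python
v0 = 3.0  # m/s at t = 0

# Signed areas under the a_x versus t plot (values from the solution):
# a triangle, a zero-net-area stretch, and a rectangle below the axis.
dv_0_1 = 0.5 * 4 * 1      # +2 m/s over t = 0..1 s
dv_1_2 = 0.0              # no net area over t = 1..2 s
dv_2_4 = -2 * (4 - 2)     # -4 m/s over t = 2..4 s

print(v0 + dv_0_1)                    # 5.0 m/s at t = 1 s
print(v0 + dv_0_1 + dv_1_2)           # 5.0 m/s at t = 2 s
print(v0 + dv_0_1 + dv_1_2 + dv_2_4)  # 1.0 m/s at t = 4 s
```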
### Subsection 2.5.5 (Calculus) Analytic Definition of Acceleration
The derivative of velocity with respect to time gives the rate at which velocity changes. That is, acceleration $a_x$ on the $x$ axis is the derivative of the velocity $v_x \text{.}$
$$a_x = \dfrac{dv_x}{dt}.\label{eq-1d-review-accleration-def}\tag{2.5.5}$$
In theoretical work you might find that you are given position as a function $x(t) \text{.}$ For example, a block attached to a spring oscillates with its position $x(t) = (5 \text{ cm}) \cos( 2\pi t)\text{,}$ where $t$ is in seconds. If you plot this $x$ versus $t \text{,}$ you will notice that the block moves between $x = - 5$ cm and $x = 5$ cm, repeating a complete cycle every second.
(a) What is the velocity of this block at (i) $t = 0.25$ s, (ii) $t = 0.5$ s, and (iii) $t = 0.75$ s.
(b) What is the acceleration of this block at (a) $t = 0.25$ s, (b) $t = 0.5$ s, and (c) $t = 0.75$ s.
Hint
Take the derivatives: (a) $v_x = dx/dt\text{,}$ (b) $a_x = dv_x/dt\text{.}$
(a) (i) $-10\pi\text{ cm/s}\text{,}$ (ii) $0\text{,}$ (iii) $10\pi\text{ cm/s}\text{,}$ (b) (i) $0\text{,}$ (ii) $20\pi^2 \text{ cm/s}^2\text{,}$ (iii) $0\text{.}$
Solution 1 (a)
(a) Taking the derivative of given $x(t)$ and then evaluating the result at the required times should give us the answers.
\begin{equation*} v_x = \dfrac{dx}{dt} = (-10\pi \text{ cm/s}) \sin(2\pi t). \end{equation*}
(i) $v_x = (-10\pi \text{ cm/s}) \sin(2\pi \times 0.25) = -10\pi \text{ cm/s} = -31.4 \text{ cm/s}.$
(ii) $v_x = (-10\pi \text{ cm/s}) \sin(2\pi \times 0.50) = 0.$
(iii) $v_x = (-10\pi \text{ cm/s}) \sin(2\pi \times 0.75) = 10\pi \text{ cm/s}= 31.4 \text{ cm/s}.$
Solution 2 (b)
(b) Taking the derivative of $v_x$ and then evaluating the result at the required times should give us the answers.
\begin{equation*} a_x = \dfrac{dv_x}{dt} = (-20\pi^2 \text{ cm/s}^2) \cos(2\pi t). \end{equation*}
(i) $a_x = (-20\pi^2 \text{ cm/s}^2) \cos(2\pi \times 0.25) = 0.$
(ii) $a_x = (-20\pi^2 \text{ cm/s}^2) \cos(2\pi \times 0.50) = 20\pi^2 \text{ cm/s}^2.$
(iii) $a_x = (-20\pi^2 \text{ cm/s}^2) \cos(2\pi \times 0.75) = 0.$
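The analytic answers above can be cross-checked numerically (a Python sketch using finite differences, not part of the original text): approximating $v_x = dx/dt$ and $a_x = d^2x/dt^2$ for $x(t) = (5\text{ cm})\cos(2\pi t)$ reproduces $-10\pi$ cm/s at $t=0.25$ s and $+20\pi^2$ cm/s$^2$ at $t=0.5$ s.

```python
import math

def x(t):  # block position in cm: x(t) = (5 cm) cos(2*pi*t)
    return 5.0 * math.cos(2 * math.pi * t)

h = 1e-5  # small step for the finite differences

def v(t):  # central difference approximating v_x = dx/dt
    return (x(t + h) - x(t - h)) / (2 * h)

def a(t):  # central second difference approximating a_x = d2x/dt2
    return (x(t + h) - 2 * x(t) + x(t - h)) / h**2

print(round(v(0.25), 3))  # -31.416, i.e. -10*pi cm/s
print(round(a(0.5), 2))   # 197.39, i.e. +20*pi^2 cm/s^2
```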
### Subsection 2.5.6 (Calculus) Change in Velocity from Acceleration Analytically
The inverse problem of obtaining the change in velocity, $\Delta v_x\text{,}$ from a known acceleration, $a_x(t)\text{,}$ is similar to the problem of going from $v_x$ to $\Delta x\text{.}$ Therefore, the math is similar.
The inverse of Eq. (2.5.5) gives the change in velocity $\Delta v_x = v_{f,x} - v_{i,x}$ when we integrate acceleration $a_x \text{.}$
$$\Delta v_x \equiv v_{f,x} - v_{i,x} = \int_{t_i}^{t_f}\ a_x(t)\, dt.\label{eq-dvx-as-integral-of-ax}\tag{2.5.6}$$
In this equation, if we choose $t_f$ to be a variable $t\text{,}$ then we will get velocity as a function of $t\text{.}$ Writing $v_{f,x}$ as $v_x(t)$ we have
$$v_{x}(t) = v_{i,x} + \int_{t_i}^{t}\ a_x(t')\, dt',\label{eq-vx-as-integral-of-ax}\tag{2.5.7}$$
where $v_{i,x}$ is some known velocity at the initial instant and I have replaced the integration variable by symbol $t'$ so as not to get confused by $t\text{.}$ Integrating $v_x$ as a function we will get the change in position.
\begin{align*} x_f - x_i &= \int_{t_i}^{t_f} v_x(t) dt \\ &= \int_{t_i}^{t_f} \left[ v_{i,x} + \int_{t_i}^{t}\ a_x(t')\, dt' \right] dt \\ &= v_{i,x}\left(t_f - t_i \right) + \int_{t_i}^{t_f} \left( \int_{t_i}^{t}\ a_x(t')\, dt' \right) dt \end{align*}
Let us work out the case of a constant acceleration. Let us say that acceleration has a constant value $a_x=a_0\text{,}$ initial instant $t_i=0\text{,}$ and $v_{i,x}=v_0\text{.}$ Let us write $t_f-t_i$ as $T\text{.}$ Then, we will get the following for change in $x\text{.}$
\begin{align*} x_f - x_i &= v_{0} T + \int_{0}^{T} \left( a_0\, t \right) dt \\ &= v_{0} T + \frac{1}{2} a_0 T^2 \end{align*}
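Plugging in the numbers from the earlier example ($v_0 = 10$ m/s, $a_0 = -3\text{ m/s}^2$, $T = 2$ s) as a quick Python check of the constant-acceleration result:

```python
v0, a0, T = 10.0, -3.0, 2.0

dx = v0 * T + 0.5 * a0 * T**2   # x_f - x_i = v0*T + (1/2)*a0*T^2
v_final = v0 + a0 * T           # velocity after time T

print(dx)       # 14.0 m
print(v_final)  # 4.0 m/s, matching the -6 m/s change quoted earlier
```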
The position of a block on a spring along $x$ axis is given by
\begin{equation*} x(t) = A\, cos(\omega\, t + \phi), \end{equation*}
where $A\text{,}$ $\omega\text{,}$ and $\phi$ are constant. Show that its acceleration at an arbitrary instant $t$ is given by
\begin{equation*} a_x = -\omega^2\, x(t). \end{equation*}
Hint
Differentiate $x\text{.}$
Solution
Not provided.
The acceleration of a car at the start is changing with time, as given by the following function.
\begin{equation*} a(t) = 12.0\; t^2. \end{equation*}
The unit of acceleration is $\text{m/s}^2\text{.}$ If the car starts from rest, how far will it go in $3.0\text{ sec}\text{?}$ Place the motion of the car along $x$ axis.
Hint
Integrate $a_x\text{,}$ then integrate again.
$81.0\text{ m}\text{.}$
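A numerical sanity check of this checkpoint (a Python sketch, not part of the original text): $a(t) = 12t^2$ integrates to $v = 4t^3$ and $x = t^4$ for a start from rest, so $x(3.0) = 3^4 = 81$ m, and a crude Euler integration agrees.

```python
# a(t) = 12 t^2 integrates to v = 4 t^3 and x = t^4 (starting from rest),
# so x at t = 3.0 s should be 3.0**4 = 81.0 m.  Euler integration check:
dt = 1e-5
t = v = x = 0.0
while t < 3.0:
    v += 12.0 * t**2 * dt   # dv = a(t) dt
    x += v * dt             # dx = v dt
    t += dt

print(round(x, 1))  # 81.0
```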
https://www.helpteaching.com/tests/360842/probability-of-chance-events-word-problems
##### Notes
This printable supports Common Core Mathematics Standard 7.SP.C.5
# Probability of Chance Events Word Problems (Grade 7)
## Probability of Chance Events Word Problems
1.
A class contains 12 boys and 14 girls. The teacher calls on a student at random to go to the board. The probability that it is a girl is $7/13$.
1. True
2. False
2.
Teresa has 8 white pom-poms and 7 orange pom-poms in a bag. If she removes an orange pom-pom and does NOT return it to the bag, the probability that the next chosen pom-pom at random is white is $4/7$.
1. True
2. False
3.
A fair numbered cube with faces numbered 1 to 6 is rolled. The probability that a number other than 6 is rolled is $5/6$.
1. True
2. False
4.
In a game, Charlie is rolling a die. Which best describes the probability of rolling an even number?
1. Not Possible
2. The same as the probability of rolling an odd number
3. Less than the probability of rolling an odd number
4. Greater than the probability of rolling an odd number
5.
A student is randomly selected from a class of 32 students. What is the probability that the selected student is a boy, if there are 20 girls in the class?
1. 0.75
2. 0.25
3. 0.425
4. 0.375
6.
Hernando placed 5 green crayons, 6 blue crayons, and 9 yellow crayons in a container. He will reach in and pull one out without looking. What is the probability that the crayon he chooses will be green?
1. $0.20$
2. $0.25$
3. $0.30$
4. $0.33$
7.
Five puppies are randomly selected for a test on behavior out of a group of 10 puppies. The puppies are selected one at a time and not returned to the group till after all the testing is done. If 60% of puppies are male, what is the probability that the first three puppies selected are female?
1. $2.25%$
2. $3.33%$
3. $5.66%$
4. $8.50%$
8.
Timmy has 3 red pens and 2 black pens in the bottom of his book bag. They are all the same size and shape. He randomly selects a pen to use and then puts it back in his bag. Later he randomly selects a pen a second time. What is the probability that he selects a red pen both times?
1. $3/5$
2. $3/10$
3. $1/9$
4. $9/25$
9.
A quarter is tossed two times. What is the probability that the coin will land heads up both times?
1. 1
2. 0.75
3. 0.25
4. 0.5
10.
Morgan has a bag with 7 red jelly beans and 3 black jelly beans. He will randomly select one at a time and eat it. If the first jelly bean he selects is black, what is the probability that the second one will also be black?
1. $1/5$
2. $2/9$
3. $3/7$
4. $3/10$
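A quick sanity check of several of these answers (a Python sketch, not part of the original test), using exact fractions:

```python
from fractions import Fraction as F

# Q5: 32 students, 20 girls, so P(boy) = 12/32
print(float(F(12, 32)))            # 0.375

# Q7: 6 male and 4 female puppies; three females in a row, no replacement
p = F(4, 10) * F(3, 9) * F(2, 8)
print(round(float(p) * 100, 2))    # 3.33 (percent)

# Q8: 3 red of 5 pens, selected twice with replacement
print(F(3, 5) * F(3, 5))           # 9/25

# Q10: one black jelly bean eaten; 2 black of the 9 remaining
print(F(2, 9))                     # 2/9
```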
https://mail.openjdk.java.net/pipermail/amber-spec-experts/2019-January/000940.html
# Enhancing Java String Literals Round 2
Sun Jan 6 17:58:18 UTC 2019
The backslash prefix makes a lot of sense to me. Creating scenarios where I needed to toggle the raw-ness seemed forced.
The only awkwardness I see is with leading/trailing quotes.
"""\"Cooked\""""
""" "Raw" """.strip() or """"Raw""""
Cooked is fine with escapes. Raw could have a rule like: any quotes after/before the opening/closing TQ sequence get added to the string.
— Jim
Sent from my iPhone
> On Jan 6, 2019, at 1:43 PM, Brian Goetz <brian.goetz at oracle.com> wrote:
>
> As Reinier pointed out on amber-dev, regex strings may routinely contain escaped meta-characters — +, *, brackets, etc. So the embedded \- and \+ story has an obvious conflict. While these are not the only possible characters for such “shift” operators, his point that this might be overkill is a good one. So let’s look at options for denoting raw-ness.
>
> - Just make triple-quote strings always raw as well as multi-line-capable; regexes and friends would use TQ strings even though they are single line (Scala, Kotlin)
> - Letter prefix, such as R”…” (C++, Rust)
> - Symbol prefix, such as @“…” (C#), or \”…” (suggestive of “distributing” the escaping across the string.)
> - Embedded escape sequence that switches to raw mode, but can’t be switched back: “\+raw string”, “\{raw}raw string”.
>
> Data from Google suggests that, in their code base, on the order of 5% of candidates for multi-line strings use some escape sequences (Kevin/Liam, can you verify?) This suggests to me that the “just use TQ” approach is vaguely workable, but likely to be error-prone (5% is infrequently enough that people will say \t when they mean tab and discover this at runtime, and then have to go back and add a .escape() call.)
>
> (Of these, my current favorite is using the backslash: “cooked”, “””cooked and ML-capable”, \”raw”, \”””raw and ML capable”. The use of \ suggests “the backslashes have been pre-added for you”, building on existing associations with backslash.)
>
> Are there other credible candidates that I’ve missed?
>
>
>
>> On Jan 2, 2019, at 2:00 PM, Jim Laskey <james.laskey at oracle.com> wrote:
>>
>>
>>>
>>> First of all, I would like to apologize for leading us down the garden path re Java Raw String Literals. I jumped into this feature fully enamoured with the JavaScript equivalent and, "why can't we have this in Java?" As the proposal evolved, it became clear that what we came up with was not a good Java solution. I underestimated the concern that the original proposal was too left field and did not fit into Java very well. It's somewhat ironic that the backtick looks like a thorn.
>>>
>>> So, let's start the new year with a structured approach to the enhanced string literal design. Brian gave a summary of why the old design fails. Starting with this summary, Brian and I talked out a series of critical decision points that should be given thought, if not answers, before we propose a new design. As an exercise, I supplemented these points and created a series of small decision trees (a full-on decision tree would be complex and not very helpful). I found these trees good intuition pumps for getting the design at least 80% there. Hopefully, this exercise will help you in the same way.
>>>
>>>
>>>
>>>
>>> Even the label Raw String Literal put the emphasis on the wrong part of the feature. What developers really want is multi-line strings. They want to be able to paste alien source into their Java programs with as little fuss as possible.
>>>
>>> String raw-ness (not translating escapes) is a tangential aspect that may or may not be needed to implement multi-line strings. Yes, the regex and Windows file path arguments in JEP 326 are still valid, but this aspect needs to be separated from the main part of the design. Further in the discussion, we'll see that raw-ness is really a many-headed hydra, best slain one head at a time.
>>>
>>>
>>>
>>>
>>> We have to be honest. We know Java's primary market. Sure we want to embed Java in Java for writing tests. Sure there is JavaScript and CSS in web pages. Nevertheless, most uses of multi-line will be for non-complex grammars. Specifically, grammars that don't require special handling of multi-character delimiter sequences. If you can accept this, then the solution set is much smaller.
>>>
>>>
>>>
>>>
>>> This is an easy one. Familiarity is key to feature education. Wandering off radically with new syntax is not helpful to anyone but bloggers and authors.
>>>
>>>
>>>
>>>
>>> If you buy into the familiarity argument, then double quote is really the only choice for a delimiter. Double quote already indicates a string literal. Single quote indicates a character. We don’t want to gratuitously burn unused symbols like backtick. Backslash works for regex but maybe not for others. Combinations and nonces just introduce new noise when our original goal was to reduce noise and complexity.
>>>
>>>
>>>
>>>
>>> Other languages avoid delimiter escape sequences by doubling up; for example, "abc""def" -> abc"def. This concept is unfamiliar to Java developers, so why change now? Escape sequences are what we know.
>>>
>>>
>>>
>>>
>>> Language designers got very nervous when I suggested infinite delimiter sequences in the original proposal; lexically sacrilegious. I felt strongly that it was easy to explain and only 1 in 1M developers would ever use more than 4-5 character delimiter sequences. In round two, I have come to agree. This was taking on more complexity than is really warranted for a use case that doesn’t come along very often. I suggest we only need single and triple double quotes. A single double quote works today, so no argument there. Double double quotes mean the empty string, no problem. Triple double quotes are only necessary to avoid having to escape quotes in alien source.
>>>
>>> String json = """
>>> {
>>> "name": "Jean Smith",
>>> "age": 32,
>>> "location": "San Jose"
>>> }
>>> """;
>>>
>>> versus
>>>
>>> String json = "
>>> {
>>> \"name\": \"Jean Smith\",
>>> \"age\": 32,
>>> \"location\": \"San Jose\"
>>> }
>>> ";
>>>
>>> This second case is where we wandered off the tracks with raw-ness. We assumed raw-ness is necessary to avoid all the backslashes. Most cases can be handled with triple double quotes.
>>>
>>> Okay, so why not more combinations? Simply because, most of the time they are not needed. On the rare occasion we do have nested triple double quotes, we can then use escape sequences.
>>>
>>> String nestedJSON = """
>>> \"\"\"
>>> {
>>> "name": "Jean Smith",
>>> "age": 32,
>>> "location": "San Jose"
>>> }
>>> \"\"\";
>>> """;
>>>
>>> or better yet, you only have to escape every third double quote
>>>
>>> String nestedJSON = """
>>> \"""
>>> {
>>> "name": "Jean Smith",
>>> "age": 32,
>>> "location": "San Jose"
>>> }
>>> \""";
>>> """;
>>>
>>> Not so evil and it's familiar.
>>>
>>>
>>>
>>>
>>> Meaning, you can only use single quotes for simple strings and triple quotes for multi-line strings. I don't have a strong opinion other than it seems like an unneeded restriction. The only argument I've heard has been for better error recovery when missing a close delimiter during parsing. My counter to that argument is that if you are processing multi-line strings, then you can easily track the first newline after the opening delimiter and recover from there. I implemented that recovery in javac and it worked out well.
>>>
>>>
>>>
>>>
>>>
>>> Cooked (translated escape sequences) should be the default. Why should a multi-line string be different from a simple string? We have a solution for embedding double quotes. Single quotes don't require escaping. Tabs and newlines can exist as is. Unicode characters can be either an escape sequence or the unicode character. So the only problem case is backslash. I would argue that the rare backslash can be escaped. If not, then the developer can use the raw-ness solution.
>>>
>>>
>>>
>>>
>>> If we don't translate newlines, then source is not transferable across platforms. That is, a source from one platform may not execute the same way on another platform. Translating consistently guarantees execution consistency. As a note, programming languages that didn't translate newlines in multi-line string literals typically regretted it later (Python.)
>>>
>>>
>>>
>>>
>>> With the original Raw String Literal proposal, there was concern about leading and trailing nested delimiters. If we default to cooked strings, then we can use \".
>>>
>>>
>>>
>>>
>>> These questions have been answered numerous times and fall into the realm of library support. Same arguments as before, same outcome.
>>>
>>>
>>> To summarize the bold paths at this point:
>>> - multi-line strings are an extension of traditional simple strings
>>> - newlines in a string are no longer an error and the string can extend across several lines
>>> - error recovery can pick up at the first newline after the opening delimiter
>>> - multi-line strings process escape sequences (including unicode) in the same way as simple strings
>>> - multiple double quotes are handled with escape sequences
>>> - triple double quote delimiter is introduced to avoid escaping simple double quote sequences
>>>
>>> Generally, I think this is very much in the traditional Java spirit.
>>>
>>>
>>> Now, let's move on to the lesser but more interesting issue. As I stated above, raw-ness is a multi-headed beast. Raw-ness involves turning off the translation of
>>> - escape sequences
>>> - unicode escapes
>>> - delimiter sequences
>>> - escape sequence prefix (backslash)
>>> - tabs and newlines (control characters in general)
>>>
>>> Sometimes we need all of the translations, sometimes few and sometimes none. In the multi-line discussion above, we see we don't need raw as much as we might have expected. Maybe for occasional backslashes, as in regex and Windows path strings.
>>>
>>>
>>>
>>>
>>>
>>> The original Raw String Literal proposal suggested that raw-ness was a property of the whole string literal and thus we proposed an alternate delimiter syntax just to emphasize that fact. If we accept the bold path of multi-line discussion above, then alternate delimiter is out. This leaves prefixing as the best option to bless a string literal with raw-ness.
>>>
>>> At this point, I would like to suggest an alternate, maybe progressive, way to think of raw-ness. Since the original proposal, I have been thinking of raw-ness as a state of processing the literal. State is certainly obvious in the scanner implementation, so why not raise it to the language level? If it is a state, then we should be able to enter and leave that state in some way. Escape sequences are an obvious way of transitioning translation within the string. \- and \+ are available and not currently recognized as valid escape sequences, so why not use them to toggle escape processing?
>>>
>>> String a = "cooked \-raw\+ cooked"; // cooked raw cooked - a little odd but not so much so
>>> String b = "abc\-\\\\\+def"; // abc\\\\def - struggling
>>> String c = "\-abc\\\\def"; // abc\\\\def - more readable as an inner prefix
>>> String d = "abc\-\-def\+\+ghi"; // abc\-def\+ghi - raw on "\-" is "\" and "-", raw off "\+" is "\" and "+"
>>> String e = """\-"abc"\+"""; // "abc" - \- and \+ act as no-ops of sorts
>>>
>>> Comparing property vs state:
>>>
>>> Runtime.getRuntime().exec(R""" "C:\Program Files\foo" bar""".strip());
>>> Runtime.getRuntime().exec("""\-"C:\Program Files\foo" bar""");
>>>
>>> System.out.println("this".matches(R"\w\w\w\w"));
>>> System.out.println("this".matches("\-\w\w\w\w"));
>>>
>>> String html = R"""
>>> <html>
>>> <body>
>>> <p>Hello World.</p>
>>> </body>
>>> </html>
>>> """.align();
>>> String html = """\-
>>> <html>
>>> <body>
>>> <p>Hello World.</p>
>>> </body>
>>> </html>
>>> """.align();
>>>
>>>
>>> String nested = """
>>> String EXAMPLE_TEST = "This is my small example "
>>> + "string which I'm going to "
>>> + "use for pattern matching.";
>>> """ +
>>> R"""
>>> System.out.println(EXAMPLE_TEST.replaceAll("\\s+", "\t"));
>>> """;
>>> String nested = """
>>> String EXAMPLE_TEST = "This is my small example "
>>> + "string which I'm going to "
>>> + "use for pattern matching.";
>>> \-
>>> System.out.println(EXAMPLE_TEST.replaceAll("\\s+", "\t"));
>>> \+
>>> """;
>>>
>>> Hopefully, this is a good starting point for discussion. As before, I'm pragmatic about which direction we go, so feel free to comment.
>>>
>>> Cheers,
>>>
>>> -- Jim
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>
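The cooked-by-default, triple-quoted design sketched in this thread is essentially what later shipped as Java text blocks (JEP 378, final in Java 15). A minimal sketch against a Java 15+ compiler; the class name and sample strings are illustrative only:

```java
// Triple-quoted, cooked-by-default multi-line strings as they later shipped
// in Java 15 text blocks (JEP 378). Requires a Java 15+ compiler.
public class TextBlockSketch {
    public static void main(String[] args) {
        // Quotes inside the block need no escaping; escape sequences still cook.
        String json = """
                {
                    "name": "Jean Smith",
                    "age": 32
                }
                """;
        System.out.println(json.contains("\"name\": \"Jean Smith\""));

        // Escaping every third quote lets a nested triple-quote survive.
        String nested = """
                \"""nested\"""
                """;
        System.out.println(nested.strip());
    }
}
```

Note how only one quote per run of three needs escaping, exactly as suggested for the nested-JSON example above.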
https://math.stackexchange.com/questions/3260998/nasty-inverse-tan-integral
|
# Nasty inverse tan integral
I've been trying the following integral, which Mathematica will happily do: $$\int\frac{1}{\sqrt{x^2(1-cx)(x-1)}}\mathrm{d}x = \tan ^{-1}\left(\frac{c x+x-2}{2 \sqrt{x-1} \sqrt{1-c x}}\right)$$
However, some hints as to how to do it by hand would be very helpful. Particularly how you would do it without working back from the answer. I've tried $$u=\sqrt{x-1}$$ and $$x=t^2$$ followed by $$t=\sec\theta$$ but neither worked well. It's been a while since I've done one of these!
Taking a factor of $$x$$ out of each of $$(1-cx)$$ and $$(x-1)$$ we get $$\int\frac{dx}{\sqrt{x^2\cdot x(\frac{1}{x}-c)\cdot x(1-\frac{1}{x})}}$$ Let $$\frac{1}{x}=u$$; then $$\frac{-dx}{x^2}=du$$ and the integral becomes $$-\int\frac{du}{\sqrt{(u-c)(1-u)}}$$ With some manipulation you can find this integral.
• Great, thanks a lot! I followed it up with $t=\sqrt{1-u}$ and then $t=\sqrt{1-c}\sin\theta$ and it worked – dsfkgjn Jun 13 at 15:00
https://tex.stackexchange.com/questions/503724/theorem-name-and-number
|
Theorem name and number
I want code that can provide a link (ref) to a theorem with text that follows these conditions:
1.) If the theorem has a name (like \begin{theorem}[name]\end{theorem}), the text will be name Theorem.
2.) Otherwise, the text will be Theorem 1.1. (the number of the theorem).
I will be very grateful if someone can write me that code. Thank you.
Here is a way (as far as I understood the question):
\documentclass{article}
\usepackage{amsmath}
\usepackage{lipsum}
\def\thname{name}
\newtheorem{theorem}{Theorem}[section]%added section after edit and picture
\newtheorem{mytheorem}{\thname{} Theorem}
\newenvironment{mtheorem}[1][name]{\def\thname{#1}\mytheorem}{\endmytheorem}
\def\themytheorem{:}
\begin{document}
\section{Test Section}
\begin{theorem}
\lipsum[1]
\end{theorem}
\begin{mtheorem}[Some]
\lipsum[1]
\end{mtheorem}
\begin{mtheorem}[Some Other]
\lipsum[1]
\end{mtheorem}
\begin{theorem}
\lipsum[1]
\end{theorem}
\end{document}
• Edited the code but not the screenshot in order to have theorem within section... ie 1.1, 1.2 etc – koleygr Aug 11 '19 at 1:27
https://zbmath.org/authors/?q=ai%3Aralston.james-v
|
# zbMATH — the first resource for mathematics
## Ralston, James V.
Author ID: ralston.james-v
Published as: Ralston, James; Ralston, J.; Ralston, J. V.; Ralston, James V.
Homepage: http://www.math.ucla.edu/~ralston/
External Links: MGP · dblp
Documents Indexed: 86 Publications since 1969
Biographic References: 1 Publication
#### Co-Authors
22 single-authored · 23 Eskin, Gregory · 17 Guillot, Jean-Claude · 8 Trubowitz, Eugene · 5 Bardos, Claude Williams · 5 Majda, Andrew J. · 4 Dimassi, Mouez · 4 Liu, Hailiang · 3 Osher, Stanley Joel · 2 Sario, Leo · 2 Strauss, Walter Alexander · 2 Synge Morawetz, Cathleen · 2 Tanushev, Nicolay M. · 1 Burgin, Mark · 1 Chayes, Jennifer Tour · 1 Combescure, Monique · 1 Helton, John William · 1 Kawashita, Mishio · 1 Klopp, Frédéric · 1 Krstić, Miroslav · 1 Qian, Jianliang · 1 Robert, Didier · 1 Runborg, Olof · 1 Sarason, Donald Erik · 1 Skelton, Robert E. · 1 Soga, Hideo · 1 Yamamoto, Masahiro · 1 Yin, Peimeng
#### Serials
11 Communications on Pure and Applied Mathematics · 6 Communications in Mathematical Physics · 4 Inverse Problems · 4 Duke Mathematical Journal · 4 Journal of Differential Equations · 4 Communications in Partial Differential Equations · 3 Multiscale Modeling & Simulation · 2 Journal of Differential Geometry · 2 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A · 1 Journal d’Analyse Mathématique · 1 Journal of Mathematical Analysis and Applications · 1 Journal of Mathematical Physics · 1 Rocky Mountain Journal of Mathematics · 1 Mathematics of Computation · 1 Journal of the Mathematical Society of Japan · 1 Nagoya Mathematical Journal · 1 Transactions of the American Mathematical Society · 1 Ergodic Theory and Dynamical Systems · 1 Asymptotic Analysis · 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série · 1 Journal of Physics A: Mathematical and General · 1 SIAM Journal on Applied Mathematics · 1 Comptes Rendus de l’Académie des Sciences. Série I · 1 Russian Journal of Mathematical Physics · 1 Journées Équations aux Dérivées Partielles (Saint-Jean-de-Monts) · 1 Séminaire Équations aux Dérivées Partielles · 1 Methods and Applications of Analysis · 1 Matemática Contemporânea · 1 Mathematical Physics, Analysis and Geometry · 1 Analysis & PDE
#### Fields
70 Partial differential equations (35-XX) · 15 Quantum theory (81-XX) · 8 Operator theory (47-XX) · 7 Global analysis, analysis on manifolds (58-XX) · 5 Differential geometry (53-XX) · 4 Ordinary differential equations (34-XX) · 4 Numerical analysis (65-XX) · 4 Mechanics of deformable solids (74-XX) · 4 Fluid mechanics (76-XX) · 3 Optics, electromagnetic theory (78-XX) · 2 Potential theory (31-XX) · 2 Dynamical systems and ergodic theory (37-XX) · 2 Approximations and expansions (41-XX) · 2 Statistical mechanics, structure of matter (82-XX) · 2 Systems theory; control (93-XX) · 1 History and biography (01-XX) · 1 Functional analysis (46-XX)
#### Citations contained in zbMATH Open
69 Publications have been cited 795 times in 628 Documents
Solutions of the wave equation with localized energy. Zbl 0209.40402
Ralston, J. V.
1969
Decay of solutions of the wave equation outside nontrapping obstacles. Zbl 0372.35008
Morawetz, Cathleen S.; Ralston, James V.; Strauss, Walter A.
1977
Inverse scattering problem for the Schrödinger equation with magnetic potential at a fixed energy. Zbl 0843.35133
Eskin, G.; Ralston, J.
1995
On the construction of quasimodes associated with stable periodic orbits. Zbl 0333.35066
Ralston, J. V.
1976
La rélation de Poisson pour l’équation des ondes dans un ouvert non borne. Application à la théorie de la diffusion. Zbl 0496.35067
Bardos, Claude; Guillot, Jean-Claude; Ralston, James
1982
A proof of the Gutzwiller semiclassical trace formula using coherent states decomposition. Zbl 0939.58031
Combescure, Monique; Ralston, James; Robert, Didier
1999
$$L^ 1$$ stability of travelling waves with applications to convective porous media flow. Zbl 0479.35053
Osher, Stanley; Ralston, James
1982
On the inverse boundary value problem for linear isotropic elasticity. Zbl 1080.35175
Eskin, G.; Ralston, J.
2002
Inverse spectral theory for a singular Sturm-Liouville operator on $$[0,1]$$. Zbl 0688.34013
Guillot, Jean-Claude; Ralston, James V.
1988
On singular diffusion equations with applications to self-organized criticality. Zbl 0832.35142
Chayes, J. T.; Osher, S. J.; Ralston, J. V.
1993
The inverse backscattering problem in three dimensions. Zbl 0706.35136
Eskin, G.; Ralston, J.
1989
An analogue of Weyl’s theorem for unbounded domains. I. Zbl 0408.35069
Majda, Andrew; Ralston, James
1978
Mountain waves and Gaussian beams. Zbl 1149.41014
Tanushev, Nicolay M.; Qian, Jianliang; Ralston, James V.
2007
Semi-classical asymptotics in solid state physics. Zbl 0672.35014
Guillot, Jean-Claude; Ralston, James; Trubowitz, E.
1988
Gaussian beams and the propagation of singularities. Zbl 0533.35062
Ralston, James
1982
Discrete shock profiles for systems of conservation laws. Zbl 0388.35047
Majda, Andrew; Ralston, James
1979
Endpoints of the spectrum of periodic operators are generically simple. Zbl 1040.35050
Klopp, Frédéric; Ralston, James
2000
Trapped rays in spherically symmetric media and poles of the scattering matrix. Zbl 0206.39603
Ralston, J. V.
1971
A Lyapunov functional for the evolution of solutions to the porous medium equation to self-similarity. II. Zbl 0583.76115
Ralston, James
1984
Inverse backscattering in two dimensions. Zbl 0728.35146
Eskin, G.; Ralston, J.
1991
On isospectral periodic potentials in $${\mathbb{R}}^ n$$. I. Zbl 0574.35021
Eskin, Gregory; Ralston, James; Trubowitz, Eugene
1984
On isospectral periodic potentials in $${\mathbb{R}}^ n$$. II. Zbl 0582.35031
Eskin, Gregory; Ralston, James; Trubowitz, Eugene
1984
On stationary modes in inviscid rotating fluids. Zbl 0273.76068
Ralston, J. V.
1973
Note on the decay of acoustic waves. Zbl 0427.35043
Ralston, James
1979
Recovery of high frequency wave fields from phase space-based measurements. Zbl 1221.35031
Liu, Hailiang; Ralston, James
2010
Approximate eigenfunctions of the Laplacian. Zbl 0385.58012
Ralston, J. V.
1977
Inverse backscattering. Zbl 0819.35144
Eskin, G.; Ralston, J.
1992
Isospectral sets for boundary value problems on the unit interval. Zbl 0678.34025
Ralston, James; Trubowitz, Eugene
1988
The first variation of the scattering matrix. Zbl 0343.35069
Helton, J. W.; Ralston, J. V.
1976
Semiclassical asymptotics in magnetic Bloch bands. Zbl 1066.81541
Dimassi, M.; Guillot, J. C.; Ralston, J.
2002
Note on a paper of Kreiss. Zbl 0215.16802
Ralston, J.
1971
Recovery of high frequency wave fields for the acoustic wave equation. Zbl 1213.35284
Liu, Hailiang; Ralston, James
2010
An analogue of Weyl’s formula for unbounded domains. III: An epilogue. Zbl 0433.35055
Majda, Andrew; Ralston, James
1979
Inverse scattering problems for Schrödinger operators with magnetic and electric potentials. Zbl 0872.35121
Eskin, G.; Ralston, J.
1997
Gaussian beam construction for adiabatic perturbations. Zbl 1119.35071
Dimassi, M.; Guillot, J.-C.; Ralston, J.
2006
Deficiency indices of first-order symmetric operators with elliptic boundary conditions. Zbl 0188.40904
Ralston, J.
1970
A correction to: Decay of solutions of the wave equation outside nontrapping obstacles. Zbl 0404.35015
Morawetz, Cathleen S.; Ralston, James V.; Strauss, Walter A.
1978
An analogue of Weyl’s theorem for unbounded domains. II. Zbl 0416.35058
Majda, Andrew; Ralston, James
1978
Inverse scattering at fixed energy for layered media. Zbl 0930.35117
Guillot, J.-C.; Ralston, J.
1999
Inverse scattering for gratings and wave guides. Zbl 1151.35429
Eskin, Gregory; Ralston, James; Yamamoto, Masahiro
2008
Local decay of solutions of conservative first order hyperbolic systems in odd dimensional space. Zbl 0288.35041
Ralston, James V.
1974
Gaussian beam methods for the Helmholtz equation. Zbl 1302.35135
Liu, Hailiang; Ralston, James; Runborg, Olof; Tanushev, Nicolay M.
2014
On uniqueness for the inverse scattering problem at fixed energy for a metric on $$\mathbb R^2$$. Zbl 1202.35147
Eskin, G.; Ralston, J.
2002
Inverse boundary value problems for systems of partial differential equations. Zbl 1209.35150
Eskin, Gregory; Ralston, James
2003
The multidimensional inverse spectral problem with a periodic potential. Zbl 0545.58044
Eskin, G.; Ralston, J.; Trubowitz, E.
1984
On effective Hamiltonian for adiabatic perturbations of magnetic Schrödinger operators. Zbl 1130.81344
Dimassi, Mouez; Guillot, Jean-Claude; Ralston, James
2004
On the inverse boundary value problem for linear isotropic elasticity and Cauchy-Riemann systems. Zbl 1079.74027
Eskin, Gregory; Ralston, James
2004
On the propagation of singularities in solutions of symmetric hyperbolic partial differential equations. Zbl 0336.35066
Ralston, J. V.
1976
The first variation of the scattering matrix: An addendum. Zbl 0369.35037
Ralston, J.
1978
Semi-classical approximations in solid state physics. Zbl 0688.35085
Guillot, Jean-Claude; Ralston, James; Trubowitz, E.
1988
The Aharonov-Bohm effect in spectral asymptotics of the magnetic Schrödinger operator. Zbl 1293.35186
Eskin, Gregory; Ralston, James
2014
Inverse coefficient problems in perturbed half spaces. Zbl 0932.35203
Eskin, Gregory; Ralston, James
1999
Variation of the transmission coefficient and comparison theorems for the purely imaginary poles of the scattering matrix. Zbl 0231.35062
Ralston, James V.
1972
A relation between biharmonic Green’s functions of simply supported and clamped bodies. Zbl 0311.31009
Ralston, James; Sario, Leo
1976
The determination of moving boundaries for hyperbolic equations. Zbl 1180.35556
Eskin, Gregory; Ralston, James
2010
Diffraction by convex bodies. Zbl 0405.35028
Ralston, J.
1979
Rélation de Poisson pour l’équation des ondes dans un ouvert non borne. Application au scattering. Zbl 0443.35038
Bardos, Claude; Guillot, Jean-Claude; Ralston, James
1980
Relation de Poisson pour l’équation des ondes dans un ouvert non borne. Zbl 0445.35071
Bardos, Claude; Guillot, Jean-Claude; Ralston, James
1980
Propagation of singularities and the scattering matrix. Zbl 0471.35068
Ralston, James V.
1981
Les ondes laterales comme phénomènes de propagation de singularites. Zbl 0501.35056
Guillot, Jean-Claude; Ralston, James
1981
Inverse scattering at fixed energy for stratified media. Zbl 1005.35092
Guillot, Jean-Claude; Ralston, James
1997
Complex analysis of elastic symbols and construction of plane wave solutions in the half-space. Zbl 1031.35144
Kawashita, Mishio; Ralston, James; Soga, Hideo
2003
Gauge equivalence and the inverse spectral problem for the magnetic Schrödinger operator on the torus. Zbl 1311.81110
Eskin, G.; Ralston, J.
2013
A relation between biharmonic Green’s functions of simply supported and clamped bodies. Zbl 0319.31007
Ralston, James; Sario, Leo
1976
Remark on spectral rigidity for magnetic Schrödinger operators. Zbl 1179.35343
Eskin, Gregory; Ralston, James
2009
The role of Green’s functions in inverse scattering at fixed energy. Zbl 1055.81629
Ralston, James
1997
Magnetic breakdown. Zbl 0789.35151
Ralston, James
1992
Inverse spectral problems in rectangular domains. Zbl 1123.35089
Eskin, Gregory; Ralston, James
2007
The multidimensional inverse spectral problem with a periodic potential. Zbl 0581.58034
Eskin, G.; Ralston, J.; Trubowitz, E.
1984
Bardos, Claude; Guillot, Jean-Claude; Ralston, James
1980
Relation de Poisson pour l’équation des ondes dans un ouvert non borne. Zbl 0445.35071
Bardos, Claude; Guillot, Jean-Claude; Ralston, James
1980
Discrete shock profiles for systems of conservation laws. Zbl 0388.35047
Majda, Andrew; Ralston, James
1979
Note on the decay of acoustic waves. Zbl 0427.35043
Ralston, James
1979
An analogue of Weyl’s formula for unbounded domains. III: An epilogue. Zbl 0433.35055
Majda, Andrew; Ralston, James
1979
Diffraction by convex bodies. Zbl 0405.35028
Ralston, J.
1979
An analogue of Weyl’s theorem for unbounded domains. I. Zbl 0408.35069
Majda, Andrew; Ralston, James
1978
A correction to: Decay of solutions of the wave equation outside nontrapping obstacles. Zbl 0404.35015
Morawetz, Cathleen S.; Ralston, James V.; Strauss, Walter A.
1978
An analogue of Weyl’s theorem for unbounded domains. II. Zbl 0416.35058
Majda, Andrew; Ralston, James
1978
The first variation of the scattering matrix: An addendum. Zbl 0369.35037
Ralston, J.
1978
Decay of solutions of the wave equation outside nontrapping obstacles. Zbl 0372.35008
Morawetz, Cathleen S.; Ralston, James V.; Strauss, Walter A.
1977
Approximate eigenfunctions of the Laplacian. Zbl 0385.58012
Ralston, J. V.
1977
On the construction of quasimodes associated with stable periodic orbits. Zbl 0333.35066
Ralston, J. V.
1976
The first variation of the scattering matrix. Zbl 0343.35069
Helton, J. W.; Ralston, J. V.
1976
On the propagation of singularities in solutions of symmetric hyperbolic partial differential equations. Zbl 0336.35066
Ralston, J. V.
1976
A relation between biharmonic Green’s functions of simply supported and clamped bodies. Zbl 0311.31009
Ralston, James; Sario, Leo
1976
A relation between biharmonic Green’s functions of simply supported and clamped bodies. Zbl 0319.31007
Ralston, James; Sario, Leo
1976
Local decay of solutions of conservative first order hyperbolic systems in odd dimensional space. Zbl 0288.35041
Ralston, James V.
1974
On stationary modes in inviscid rotating fluids. Zbl 0273.76068
Ralston, J. V.
1973
Variation of the transmission coefficient and comparison theorems for the purely imaginary poles of the scattering matrix. Zbl 0231.35062
Ralston, James V.
1972
Trapped rays in spherically symmetric media and poles of the scattering matrix. Zbl 0206.39603
Ralston, J. V.
1971
Note on a paper of Kreiss. Zbl 0215.16802
Ralston, J.
1971
Deficiency indices of first-order symmetric operators with elliptic boundary conditions. Zbl 0188.40904
Ralston, J.
1970
Solutions of the wave equation with localized energy. Zbl 0209.40402
Ralston, J. V.
1969
#### Cited by 686 Authors
13 Metcalfe, Jason L. 13 Ralston, James V. 11 Qian, Jianliang 11 Uhlmann, Gunther Alberto 10 Sogge, Christopher D. 9 Eskin, Gregory 9 Lassas, Matti 9 Salo, Mikko 8 Liu, Hailiang 8 Petkov, Vesselin M. 8 Sjöstrand, Johannes 7 Stoyanov, Luchezar N. 6 Bellassoued, Mourad 6 Kappeler, Thomas 6 Robert, Didier 6 Royer, Julien 6 Spence, Euan A. 6 Weder, Ricardo A. 6 Yang, Xu 6 Zuazua, Enrique 5 Aloui, Lassaad 5 Bony, Jean-François 5 Dimassi, Mouez 5 Dobrokhotov, Sergei Yurievich 5 Dolbeault, Jean 5 Gosse, Laurent 5 Guillot, Jean-Claude 5 Isozaki, Hiroshi 5 Khludnev, Aleksandr Mikhaĭlovich 5 Kuchment, Peter A. 5 Novikov, Roman G. 5 Runborg, Olof 5 Teschl, Gerald 5 Tohaneanu, Mihai 5 Tzou, Leo 5 Zworski, Maciej 4 Bao, Gang 4 Barceló, Juan Antonio 4 Burq, Nicolas 4 Cassanas, Roch 4 Christiansen, Tanya J. 4 Christianson, Hans 4 Datchev, Kiril R. 4 Guillarmou, Colin 4 Jin, Shi 4 Kazarinoff, Nicholas D. 4 Khenissi, Moez 4 Krupchyk, Katsiaryna 4 Leung, Shingyu 4 Lu, Jianfeng 4 Simon, Barry 4 Smith, Hart F. 4 Taylor, Michael Eugene 4 Wang, Chengbo 4 Wang, Jenn-Nan 4 Wunsch, Jared 3 Ammari, Zied 3 Bloom, Clifford O. 3 Burridge, Robert 3 Camus, Brice 3 Cavalcanti, Marcelo Moreira 3 Gérard, Christian 3 Gordon, Carolyn S. 3 Grubb, Gerd 3 Guillemin, Victor W. 3 Gurarie, David 3 Helin, Tapio 3 Hryniv, Rostyslav O. 3 Ikehata, Ryo 3 Ivanova, Nataliya M. 3 Jin, Long 3 Klevin, A. I. 3 Klopp, Frédéric 3 Kostenko, Aleksey S. 3 Kurylev, Yaroslav V. 3 Lasiecka, Irena 3 Li, Xiaosheng 3 Lindblad, Hans 3 Markowich, Peter Alexander 3 McCann, Robert J. 3 Motamed, Mohammad 3 Päivärinta, Lassi 3 Paul, Thierry 3 Pohjola, Valter 3 Ramacher, Pablo 3 Rodnianski, Igor 3 Ruiz, Alberto 3 Smyshlyaev, Valery P. 3 Tamura, Hideo 3 Tanushev, Nicolay M. 3 Toundykov, Daniel 3 Troitskaya, Saule D. 3 Uribe, Alejandro 3 Vazquez, Juan Luis 3 Vodev, Georgi 3 Zakora, Dmitriĭ Aleksandrovich 2 Agaltsov, Alexey D. 2 Alber, Hans-Dieter 2 Amour, Laurent 2 Anikin, A. Yu. ...and 586 more Authors
#### Cited in 157 Serials
46 Communications in Partial Differential Equations 41 Journal of Differential Equations 37 Communications in Mathematical Physics 28 Journal of Functional Analysis 23 Journal of Mathematical Analysis and Applications 18 Journal of Mathematical Physics 17 Journal of Computational Physics 13 SIAM Journal on Mathematical Analysis 12 Journal de Mathématiques Pures et Appliquées. Neuvième Série 11 Transactions of the American Mathematical Society 11 Annales de l’Institut Henri Poincaré. Physique Théorique 11 Annales Henri Poincaré 10 Annales de l’Institut Fourier 9 Applicable Analysis 9 Duke Mathematical Journal 9 Journal of Computational and Applied Mathematics 8 Archive for Rational Mechanics and Analysis 8 Inverse Problems 8 Journal d’Analyse Mathématique 8 Mathematische Annalen 8 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 8 Journal of Mathematical Sciences (New York) 6 Mathematical Notes 6 Mathematics of Computation 6 Proceedings of the American Mathematical Society 6 SIAM Journal on Scientific Computing 5 Computers & Mathematics with Applications 5 Wave Motion 5 ZAMP. Zeitschrift für angewandte Mathematik und Physik 5 The Journal of Geometric Analysis 5 Russian Journal of Mathematical Physics 5 Journal of Inverse and Ill-Posed Problems 5 Inverse Problems and Imaging 4 Communications on Pure and Applied Mathematics 4 Letters in Mathematical Physics 4 Mathematical Methods in the Applied Sciences 4 Theoretical and Mathematical Physics 4 Reviews in Mathematical Physics 4 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 4 Inventiones Mathematicae 4 Mathematische Nachrichten 4 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 4 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 4 Journal of Spectral Theory 4 Nonlinear Analysis. 
Theory, Methods & Applications 3 Applied Mathematics and Computation 3 Applied Mathematics and Optimization 3 Journal of the Mathematical Society of Japan 3 Journal für die Reine und Angewandte Mathematik 3 Publications of the Research Institute for Mathematical Sciences, Kyoto University 3 Siberian Mathematical Journal 3 Applied Mathematics Letters 3 Journal of the American Mathematical Society 3 Geometric and Functional Analysis. GAFA 3 Annals of Mathematics. Second Series 3 Comptes Rendus. Mathématique. Académie des Sciences, Paris 3 Multiscale Modeling & Simulation 3 Inverse Problems in Science and Engineering 2 Arkiv för Matematik 2 Journal of Geometry and Physics 2 Advances in Mathematics 2 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 2 Functional Analysis and its Applications 2 Proceedings of the Japan Academy. Series A 2 Quarterly of Applied Mathematics 2 Rendiconti del Seminario Matematico della Università di Padova 2 Advances in Applied Mathematics 2 Physica D 2 RAIRO. Modélisation Mathématique et Analyse Numérique 2 Revista Matemática Iberoamericana 2 Journal of Scientific Computing 2 Proceedings of the National Academy of Sciences of the United States of America 2 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 2 SIAM Journal on Applied Mathematics 2 Bulletin of the American Mathematical Society. New Series 2 Mathematical Physics, Analysis and Geometry 2 Foundations of Computational Mathematics 2 Communications on Pure and Applied Analysis 2 Journal of the Institute of Mathematics of Jussieu 2 Bulletin of the American Mathematical Society 2 Analysis & PDE 2 Bulletin of Mathematical Sciences 2 Mathematical Control and Related Fields 2 Annals of PDE 2 Séminaire Laurent Schwartz. 
EDP et Applications 2 Pure and Applied Analysis 1 Moscow University Physics Bulletin 1 Rocky Mountain Journal of Mathematics 1 Russian Mathematical Surveys 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Acta Mathematica 1 Annali di Matematica Pura ed Applicata. Serie Quarta 1 BIT 1 Collectanea Mathematica 1 Demonstratio Mathematica 1 Integral Equations and Operator Theory 1 Manuscripta Mathematica 1 Mathematische Zeitschrift 1 Numerische Mathematik 1 Rendiconti del Circolo Matemàtico di Palermo. Serie II ...and 57 more Serials
#### Cited in 45 Fields
494 Partial differential equations (35-XX) 131 Quantum theory (81-XX) 87 Global analysis, analysis on manifolds (58-XX) 81 Operator theory (47-XX) 71 Numerical analysis (65-XX) 40 Ordinary differential equations (34-XX) 38 Optics, electromagnetic theory (78-XX) 37 Fluid mechanics (76-XX) 24 Differential geometry (53-XX) 24 Mechanics of deformable solids (74-XX) 23 Statistical mechanics, structure of matter (82-XX) 22 Dynamical systems and ergodic theory (37-XX) 21 Relativity and gravitational theory (83-XX) 15 Systems theory; control (93-XX) 11 Functional analysis (46-XX) 9 Calculus of variations and optimal control; optimization (49-XX) 9 Mechanics of particles and systems (70-XX) 8 Potential theory (31-XX) 8 Probability theory and stochastic processes (60-XX) 6 Harmonic analysis on Euclidean spaces (42-XX) 5 Integral equations (45-XX) 4 Several complex variables and analytic spaces (32-XX) 4 Approximations and expansions (41-XX) 4 Integral transforms, operational calculus (44-XX) 3 Geophysics (86-XX) 3 Biology and other natural sciences (92-XX) 2 Algebraic geometry (14-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Topological groups, Lie groups (22-XX) 2 Real functions (26-XX) 2 Functions of a complex variable (30-XX) 2 Difference and functional equations (39-XX) 2 Information and communication theory, circuits (94-XX) 1 General and overarching topics; collections (00-XX) 1 History and biography (01-XX) 1 Combinatorics (05-XX) 1 Number theory (11-XX) 1 $$K$$-theory (19-XX) 1 Measure and integration (28-XX) 1 Abstract harmonic analysis (43-XX) 1 Geometry (51-XX) 1 Manifolds and cell complexes (57-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Astronomy and astrophysics (85-XX) 1 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
http://physics.stackexchange.com/questions/17574/how-can-i-find-the-potential-created-by-spherical-capacitor-with-dielectric-mate?answertab=active
# How can I find the potential created by a spherical capacitor with dielectric material?
Suppose we have a spherical capacitor with inner radius r1 and outer radius r2, with charges (+/-)q on the spheres and a dielectric material (with constant e) between them.
What kind of potential would this create outside the entire capacitor? In the region with the dielectric? And inside the entire thing?
You can write down these fields directly without calculating anything because the field of a charged sphere is the same as that of a point charge outside the sphere and zero inside the sphere.
http://hyperphysics.phy-astr.gsu.edu/hbase/electric/potsph.html
$$C=\epsilon C_0$$
where $C_0$ is the capacitance in vacuum. For a full set of formulas in different geometrical configurations see here.
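As a sketch of what these observations give (assuming charge $+q$ on the inner sphere of radius $r_1$, $-q$ on the outer sphere of radius $r_2$, permittivity $\epsilon$ in between, and the convention $V(\infty)=0$):

$$V(r)=\begin{cases} 0, & r>r_2,\\ \dfrac{q}{4\pi\epsilon}\left(\dfrac{1}{r}-\dfrac{1}{r_2}\right), & r_1\le r\le r_2,\\ \dfrac{q}{4\pi\epsilon}\left(\dfrac{1}{r_1}-\dfrac{1}{r_2}\right), & r<r_1. \end{cases}$$

Outside, the total enclosed charge is zero, so the field and the potential vanish; inside the inner sphere the field is zero, so the potential is constant at its $r=r_1$ value.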
https://yetanothermathblog.com/2015/12/31/splitting-fields-of-representations-of-generalized-symmetric-groups-4/
# Splitting fields of representations of generalized symmetric groups, 4
First a technical definition.
Let $A=C_\ell^n$. Let $\eta_k(z)=z^k$, for $z\in C_\ell$ and $1\leq k\leq \ell-1$. For $\eta\in C_\ell^*$, let $\mu\otimes \eta =(\mu_1\eta,\mu_2\eta,...,\mu_n\eta)$ where $\mu=(\mu_1,\mu_2,...,\mu_n)$. This defines an action of $C_\ell^*$ on $A^*$ and hence on the set of equivalence classes of representations of $G$, denoted $G^*$. We call two representations $\theta_{\mu,\rho}$, $\theta_{\mu',\rho'}$ $C_\ell^*$-equivalent, and write
$\theta_{\mu,\rho}\sim_\ell \theta_{\mu',\rho'},$
if $\rho=\rho'$ and $\mu'=\mu\otimes \eta$ for some $\eta\in C_\ell^*$. Similarly, we call two characters $\mu$, $\mu'$ of $C_\ell^n$ $C_\ell^*$-equivalent, and write
$\mu\sim_\ell \mu',$
if $\mu'=\mu\otimes \eta$ for some $\eta\in C_\ell^*$.
For example, let $\ell =9$, $n=3$ and $\mu=(\eta_2,\eta_5,\eta_8)$. Then $\mu\sim_\ell \mu\otimes\eta_3$.
Let $\theta_{\mu,\rho}$ be as in the previous post. Note that
$\theta_{\mu\otimes \eta,\rho} = \theta_{\mu,\rho}\otimes \eta ,$
for $\eta\in C_\ell^*$. Therefore, the matrix representations of two $C_\ell^*$-equivalent representations differ only
by a character.
Let $G=C_\ell^n \rtimes S_n$.
The results in the above section tell us how to construct all the irreducible representations of $G$. We must
1. write down all the characters (i.e., 1-dimensional representations) of $A=C_\ell^n$,
2. describe the action of $S_n$ on $A^*$,
3. for each $\mu\in [A^*]$, compute the stabilizer $(S_n)_{\mu}$,
4. describe all irreducible representations of each $(S_n)_{\mu}$,
5. write down the formula for the character of $\theta_{\mu,\rho}$.
Write $\mu\in [A^*]$ as $\mu=(\mu_1,...,\mu_n)$, where each component is a character of the cyclic group $C_\ell$, $\mu_j\in C_\ell^*$. Let $\mu'_1,...,\mu'_r$ denote all the distinct characters which occur in $\mu$, so
$\{\mu'_1,...,\mu'_r\}=\{ \mu_1,...,\mu_n\}.$
Let $n_1$ denote the number of $\mu'_1$'s in $\mu$, $n_2$ denote the number of $\mu'_2$'s in $\mu$, ..., $n_r$ denote the number of $\mu'_r$'s in $\mu$. Then $n=n_1+...+n_r$. Call this the partition associated to $\mu$.
If two characters $\mu=(\mu_1,...,\mu_n)$, $\mu'=(\mu'_1,...,\mu'_n)$ belong to the same class in $[(C_\ell^n)^*]$, under the $S_n$-equivalence relation, then their associated partitions are equal.
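As a quick illustration of how the stabilizer in step 3 depends only on this partition: a permutation fixes $\mu$ exactly when it permutes equal components among themselves, so

$(S_n)_\mu \cong S_{n_1}\times \cdots \times S_{n_r},$

a Young subgroup of $S_n$. For instance, if $\ell=2$, $n=3$ and $\mu=(\eta_1,\eta_1,\eta_0)$, the associated partition is $3=2+1$, so $(S_3)_\mu\cong S_2\times S_1$, and each of the two irreducible representations $\rho$ of $S_2$ (trivial and sign) yields a representation $\theta_{\mu,\rho}$ of dimension $[S_3:(S_3)_\mu]\cdot \dim\rho = 3$.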
The Frobenius formula for the character of an induced representation gives the following character formula. Let $\chi$ denote the character of $\theta_{\mu,\rho}$. Then
$\chi(\vec{v},p)=\sum_{g\in S_n/(S_n)_\mu} \chi^o_\rho(gpg^{-1})\mu^g(\vec{v}),$
for all $\vec{v}\in C_\ell^n$ and $p\in S_n$. In particular, if $p=1$ then
$\chi(\vec{v},1)=({\rm dim}\ \rho)\sum_{g\in S_n/(S_n)_\mu} \mu^g(\vec{v}).$
http://nikgrozev.com/2014/06/08/cloudsim-and-cloudsimex-part-1/
# Introduction
CloudSim is one of the most popular Cloud infrastructure simulators and is used throughout academia and industry. Being one of its maintainers I often get emails with requests for new features. I also get quite a few emails about several CloudSim extensions that we published a paper about (pdf) last year. Thus, I’ve started the CloudSimEx project, which brings some of these extensions together. In a series of posts I’ll demo some of its functionalities.
So what is CloudSimEx? CloudSimEx is a set of CloudSim extensions making simulation development easier and enabling the modelling of new types of applications, not supported by CloudSim. Currently the following features are included:
• Web session modelling;
• Better logging utilities;
• Utilities for generating CSV files for statistical analysis;
• Automatic id generation;
• Modelling of disk operations;
• Utilities for running multiple experiments in parallel;
• Utilities for modelling network latencies;
• MapReduce simulation.
In this part I’ll explain how to set up CloudSim and CloudSimEx in Eclipse and how to use some of its base utility functionalities.
# Prerequisites
Before following this tutorial you should have JDK 7 and Maven 3.2 or later installed on your system. CloudSim currently cannot be built with Java 8, because of some issues in the integration of Java 8 and Maven. Edit: Both CloudSim and CloudSimEx now compile with Java 7 and 8 as well.
Installation instructions for Java 7 and Maven can be found here:
You should also have installed the Eclipse IDE for Java EE Developers and SVN and Git clients. If you’re unfamiliar with CloudSim itself, you can look at these examples.
# CloudSim SetUp
As a first step we need to get CloudSim’s code and build it. Navigate to a folder of choice and issue the following commands:
```shell
git clone https://github.com/Cloudslab/cloudsim.git
cd cloudsim
mvn clean install
```
This should take some time and in the end you should see the resounding “BUILD SUCCESSFUL”.
Before we open CloudSim in Eclipse, we need to make a few settings in the IDE. By default, Eclipse comes with an embedded installation of Maven, which is different from the one on the operating system. We need to tell Eclipse to use the already installed one. Otherwise, building from the IDE and from the terminal will be two different things. You can skip the next step if you have already done this. If not, go to Window -> Preferences -> Maven -> Installations and add the location of your Maven installation:
Now we can import the CloudSim project into Eclipse. Go to File -> Import -> Existing Maven Projects and follow the wizard. You can use the following screenshots as a guideline:
This should open the CloudSim projects in Eclipse.
# CloudSimEx SetUp
After CloudSim is set, we can continue to set up CloudSimEx as well. The following commands do just that.
```shell
git clone https://github.com/Cloudslab/CloudSimEx.git
cd CloudSimEx
mvn clean install
```
Again, if all is fine you should see “BUILD SUCCESSFUL” in your terminal. Next you can open the CloudSimEx projects in Eclipse. Again you need to go to File -> Import -> Existing Maven Projects and follow the wizard, as we did with CloudSim.
# Test Project
Now that CloudSim and CloudSimEx are set up we can create a simple test project, in which we can experiment with them. Go to File -> New -> Project... -> Maven Project and follow the wizard to create a simple Maven project. You can use the following screenshots as a guideline:
After the project is created, open its pom.xml file and add the following dependencies inside the <project> section:
```xml
<dependencies>
  <dependency>
    <groupId>org.cloudbus.cloudsim</groupId>
    <artifactId>cloudsim</artifactId>
    <version>3.1-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.cloudbus</groupId>
    <artifactId>cloudsimex-core</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.cloudbus</groupId>
    <artifactId>cloudsimex-geolocation</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.cloudbus</groupId>
    <artifactId>cloudsimex-web</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
  <dependency>
    <groupId>org.cloudbus</groupId>
    <artifactId>cloudsimex-mapreduce</artifactId>
    <version>1.0-SNAPSHOT</version>
  </dependency>
</dependencies>
```
Finally, you should make sure the Eclipse project is configured to use Java 7. Right click on the test project in the explorer, select Properties, and ensure that the Java compiler compliance level is set to 1.7, as in the following screenshot:
Now you can create a class with a main method in the new project and it can import all CloudSim and CloudSimEx classes. You’re ready to go!
# ID generation
CloudSim requires the end user to provide a lot of numerical ids for the simulation entities (e.g. cloudlets, VMs). This is error-prone: if you provide duplicate ids for entities of the same type (e.g. cloudlets), the result is an insidious bug, as CloudSim won’t give any warning. Moreover, in complex simulations virtual machines and cloudlets (i.e. jobs) need to be created dynamically, and thus you’ll need to maintain global counters for id generation.
To solve the problem I’ve created the simple utility class org.cloudbus.cloudsim.ex.util.Id. It allows you to create unique ids per simulation entity type. For example the following code creates three cloudlets with unique ids:
```java
Cloudlet cl1 = new Cloudlet(Id.pollId(Cloudlet.class), ... );
Cloudlet cl2 = new NetworkCloudlet(Id.pollId(NetworkCloudlet.class), ... );
Cloudlet cl3 = new NetworkCloudlet(Id.pollId(Cloudlet.class), ... );

System.out.println(cl1.getCloudletId()); // Prints 1
System.out.println(cl2.getCloudletId()); // Prints 2
System.out.println(cl3.getCloudletId()); // Prints 3
```
Note that the exact type you pass to Id.pollId() does not matter, as long as it is a subtype of a CloudSim entity - e.g. cloudlet, VM or host. The implementation automatically checks whether the type you pass is a cloudlet or something else and returns an appropriate id. For example, in the above code it didn’t matter whether we called Id.pollId(Cloudlet.class) or Id.pollId(NetworkCloudlet.class); either way it would return a unique cloudlet id.
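The counter-per-category idea behind Id.pollId can be sketched in a few lines of plain Java. This is an illustrative reimplementation, not CloudSimEx’s actual code: the class and member names (IdSketch, ROOTS, COUNTERS) are invented, and the nested Cloudlet/Vm classes stand in for CloudSim’s real entity types.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of a per-category id generator in the spirit of Id.pollId.
public class IdSketch {

    // Root types that define an id category; a subclass is mapped to the
    // first root type it is assignable to, so e.g. NetworkCloudlet shares
    // the Cloudlet counter.
    private static final List<Class<?>> ROOTS = Arrays.asList(Cloudlet.class, Vm.class);

    // One independent counter per root category.
    private static final ConcurrentMap<Class<?>, AtomicInteger> COUNTERS =
            new ConcurrentHashMap<>();

    public static int pollId(Class<?> type) {
        for (Class<?> root : ROOTS) {
            if (root.isAssignableFrom(type)) {
                return COUNTERS.computeIfAbsent(root, r -> new AtomicInteger())
                               .incrementAndGet();
            }
        }
        throw new IllegalArgumentException("Unknown entity type: " + type);
    }

    // Stand-ins for CloudSim's entity classes, just for this sketch.
    public static class Cloudlet {}
    public static class NetworkCloudlet extends Cloudlet {}
    public static class Vm {}
}
```

The key design point is that a subclass is resolved to its root category first, so all cloudlet subtypes draw ids from one shared counter while VM ids advance independently.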
If you’re developing classes that extend CloudSim entities, you can eliminate the need for ids altogether by redefining their constructors. For example, the following code creates a custom cloudlet that automatically creates its own id in the constructor, so its users won’t need to bother with ids.
```java
public class MyCloudlet extends Cloudlet {
    public MyCloudlet(...) {
        super(Id.pollId(getClass()), ...);
    }
}
```
In fact, this is the approach taken in all CloudSimEx classes. Thus, in CloudSimEx you don’t need to specify your own ids.
# CSV export of objects
When developing CloudSim simulations people often need to export tabular (i.e. comma-separated values, or CSV) data for further statistical analysis. For example, you may want to export data about your jobs’/cloudlets’ start, end and execution times to a CSV file, in order to compute statistics or perform numerical analysis with Excel, R, SAS or Matlab. Looping over all objects and their properties, taking care of column padding and so on can be a drag.
Enter the TextUtil class. Converting an object to a CSV line is now just a single line of code. Add another line and you’ve got yourself a header, as in this example:
```java
Vm vm = new Vm(Id.pollId(Vm.class), .....);

// getCaptionLine - prints a header
System.out.println(TextUtil.getCaptionLine(Vm.class));
// getTxtLine - prints a CSV line
System.out.println(TextUtil.getTxtLine(vm));
```
By default, TextUtil inspects all the properties (i.e. public no-arg get methods) of the class/object and concatenates them taking into account predefined formatting and padding options. You can modify the list and order of properties for a given class using the Textualize annotation. The output of the above is:
```
BeingInstantiated; Bw;CloudletScheduler;CurrentAllocatedBw;CurrentAllocatedMips;CurrentAllocatedRam;CurrentAllocatedSize;CurrentRequestedBw;CurrentRequestedMaxMips;CurrentRequestedMips;CurrentRequestedRam;CurrentRequestedTotalMips;Host; Id;InMigration; Mips;NumberOfPes; Ram; Size;StateHistory; Uid; UserId; Vmm;Class
true; 1000; ref<455659002>; 0; null; 0; 0; 1000; 1000.00; [...]; 512; 1000.00;null; 4; false; 1000.00; 1; 512; 10000; [...]; 0-4; 0; Xen; Vm
```
TextUtil automatically converts references, arrays and collections to simple representations, so that the CSV line format is maintained.
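The reflective machinery this relies on can be approximated with plain JDK reflection. The sketch below is a simplified, hypothetical stand-in (the class and method names are invented, and the real TextUtil additionally handles padding, per-type formatting and the Textualize annotation):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Simplified sketch of header/line generation from an object's getters.
public class CsvSketch {

    // Header: property names derived from the public no-arg getters.
    public static String captionLine(Class<?> type, String delim) {
        return getters(type).stream()
                .map(m -> m.getName().substring(3)) // strip the "get" prefix
                .collect(Collectors.joining(delim));
    }

    // Data line: the getter values, converted to strings.
    public static String txtLine(Object obj, String delim) {
        return getters(obj.getClass()).stream()
                .map(m -> {
                    try {
                        return String.valueOf(m.invoke(obj));
                    } catch (Exception e) {
                        return "err";
                    }
                })
                .collect(Collectors.joining(delim));
    }

    private static List<Method> getters(Class<?> type) {
        List<Method> result = new ArrayList<>();
        for (Method m : type.getMethods()) {
            if (m.getName().startsWith("get") && m.getParameterCount() == 0
                    && !m.getName().equals("getClass")) {
                result.add(m);
            }
        }
        // Sort by name so the column order is stable across runs.
        result.sort(Comparator.comparing(Method::getName));
        return result;
    }

    // A tiny stand-in entity for the demo.
    public static class DemoVm {
        public int getRam() { return 512; }
        public int getBw() { return 1000; }
    }
}
```

Because columns are discovered by reflection, the header and the data line are guaranteed to stay in sync, which is the property that makes one-liner CSV export possible.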
Often you don’t need all properties of an object in the CSV - you may only need 2 or 3 of them. Again this can be done with a single line of code. You just need to specify the names of the properties when invoking TextUtil:
```java
Vm vm = new Vm(Id.pollId(Vm.class), .....);
String[] props = new String[] { "Bw", "CurrentRequestedRam" };
System.out.println(TextUtil.getCaptionLine(Vm.class, props));
System.out.println(TextUtil.getTxtLine(vm, props));
```
This code will only print the Bw and CurrentRequestedRam properties, defined by the getBw and getCurrentRequestedRam methods:
```
Bw;CurrentRequestedRam
1000; 512
```
In all the above examples we used the default delimiter “;”. All TextUtil methods are overloaded, so you can specify another delimiter if needed.
Finally, you may often need a derived characteristic/column in the CSV. For example, you may like to have a field in gigabytes instead of megabytes, or you may want to compute the difference between two fields in a new column. TextUtil allows you to define so-called virtual properties. A virtual property is just a pair: a name and a Function (from the guava library) that defines its value. The following example shows how to define an additional virtual property “CurrentRequestedRamGB” and to print it in conjunction with the regular property “CurrentRequestedRam”:
```java
Vm vm = new Vm(Id.pollId(Vm.class), ...);
String[] props = new String[] { "CurrentRequestedRam" };

LinkedHashMap<String, Function<Vm, String>> virtualProps = new LinkedHashMap<>();
virtualProps.put("CurrentRequestedRamGB", new Function<Vm, String>() {
    @Override
    public String apply(Vm v) {
        return String.valueOf(v.getCurrentRequestedRam() / 1024.0);
    }
});

System.out.println(TextUtil.getCaptionLine(Vm.class, props, virtualProps.keySet()));
System.out.println(TextUtil.getTxtLine(vm, props, virtualProps));
```
This prints:
```
CurrentRequestedRam;CurrentRequestedRamGB
512; 0.5
```
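The same virtual-property pattern can be expressed without guava, using java.util.function.Function. This is an illustrative sketch with invented names, not CloudSimEx code:

```java
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the "virtual property" idea: a named column whose value is
// computed by a function of the object rather than read from a getter.
public class VirtualPropSketch {

    // Header built from the column names, in map iteration order.
    public static String header(Map<String, ?> props, String delim) {
        return String.join(delim, props.keySet());
    }

    // Data line built by applying each column's function to the object.
    public static <T> String line(T obj, Map<String, Function<T, String>> props,
                                  String delim) {
        return props.values().stream()
                .map(f -> f.apply(obj))
                .collect(Collectors.joining(delim));
    }
}
```

Using a LinkedHashMap keyed by column name preserves insertion order, so the header and the value line stay aligned, which is why the CloudSimEx example above uses one as well.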
# Custom Logging
CloudSim’s default Log implementation is designed only to print to the standard output. Hence, CloudSimEx introduces a new logger called CustomLog. It follows the same design principles as the original Log, but adds (i) output redirection to a file and (ii) flexible definition of log entry formats. Most importantly, it plays well with TextUtil and can be used to easily create CSV files for analysis.
To begin with, you need to configure the logger with a set of properties. You would typically store them in a separate configuration file:
```java
Properties props = new Properties();
try (InputStream is = Files.newInputStream(Paths.get("...your file..."))) {
    props.load(is);
}
CustomLog.configLogger(props);
```
The important properties are:
• FilePath - if present this property has the value of the target log file. If this property is not present, the log is written to the standard output.
• LogLevel - each log message has a level. This property identifies the minimal log level that will be printed. Levels are as in standard java logging - see LogLevel. If this property is not present, a default log level INFO is used.
• LogCloudSimClock - a boolean property. If “true” the current CloudSim simulation time will be included in every log entry.
• LogReadableSimClock - a boolean property. If “true” the current CloudSim simulation time will be included in every log entry and it will be formatted in the “days:hours:minutes:seconds” format.
• LogRealTimeClock - a boolean property. If “true” the actual system/computer time will be included in every log entry.
• LogFormat - a list of get method names of the class LogRecord. It allows you to specify what should compose your log entries. Typically, you would specify just getMessage or getLevel;getMessage.
• ShutStandardLogger - a boolean property. If “true” it will shut CloudSim’s standard logger. This is useful when the standard logger generates too many log messages and causes the simulation execution to slow down significantly.
Consider the following example:
```
FilePath=/mydesktop/log.log
LogLevel=INFO
LogCloudSimClock=true
LogReadableSimClock=false
LogRealTimeClock=true
LogFormat=getMessage
ShutStandardLogger=true
```
Given this configuration, CustomLog will output only log entries with level INFO or higher, will include both the actual time and the current CloudSim simulation time in each entry, and will shut the standard CloudSim logger. The output will be written to the file /mydesktop/log.log. Thus, if you call:
```java
CustomLog.printf("Hello %s", "World");
```
the output in the file will be something like this:

```
22:42:51    0.00    Hello World
```
If all you want to see is the message and not the current time, you need to switch LogCloudSimClock and LogRealTimeClock to “false”.
At any point during the simulation you can redirect CustomLog to another file or the standard output, by simply calling one of the following:
```java
// Redirects to stdout ...
CustomLog.redirectToConsole();

// Redirects to a file, overwrites it
CustomLog.redirectToFile("...your file...");

// Redirects to a file, appends to it
CustomLog.redirectToFile("...your file...", true);
```
CustomLog has many convenience methods for printing and formatting log messages, which you can explore on your own. As mentioned, it uses TextUtil and can print well-formatted CSV files. So if you have a list of objects (e.g. cloudlets or VMs) you can convert them to a CSV with a single line of code, by using the printResults method. The following example demonstrates this:
```java
List<Vm> vms = ......
CustomLog.redirectToFile("... your CSV file ...");
CustomLog.printResults(Vm.class, vms);
```
The printResults method is overloaded, so it can take property names and virtual properties, analogously to TextUtil.
# Conclusion
In this article we just scratched the surface by introducing CloudSimEx, explaining how to install it, and giving an overview of some of its basic functionalities. In subsequent articles I’ll talk about how CloudSimEx allows for I/O operation simulation, web session modelling, running multiple experiments in parallel, and utilities for modelling Internet latencies … so stay tuned :)
# Our Literature on the Keyword "Quantum Theory"
## A keyword-based selection of our specialist books
### Intuitionistic Set Theory Part II
Forschungsergebnisse zur Informatik
In part II of his study, the author deals with the following problems:
- Trees and Partitions
- Inaccessible Cardinals
- Descriptive Set Theory (Theory of a real variable)
- Auxiliary Notions
- Borel sets, B-measurable functions, Baire property
- Souslin space, projective sets
- Measurable Selectors [...]
### Intuitionistic Set Theory Part I
Forschungsergebnisse zur Informatik
In part I of his study, the author deals with the following problems:
- Foundation of Euclidean semi-rings by finite construction of irrational numbers in the semi-space (N algebraic reals)
- Decidable derivation of the axioms of the Axiomatic Set Theory (by K. Kuratowski and A. Mostowski), relations, functions
- Natural numbers (as special case..
Literature: Quantum Theory / A selection of specialist books from Verlag Dr. Kovač
# SQL

- Paradigm: multi-paradigm: declarative
- Designed by: Donald D. Chamberlin, Raymond F. Boyce
- First appeared: 1974
- Typing discipline: static, strong
- Platform: cross-platform
- File format: .sql (media type application/sql[1][2])
- Standardized: ISO/IEC, since 1986; latest revision SQL:2016 (December 2016)
- Standard: ISO/IEC 9075
- Influenced by: Datalog
- Influenced: CQL, LINQ, SOQL, PowerShell,[3] JPQL, jOOQ
SQL ([4] or ,[5]Structured Query Language[6][7][8][9]) is a domain-specific language used in programming and designed for managing data held in a relational database management system (RDBMS), or for stream processing in a relational data stream management system (RDSMS).
Originally based upon relational algebra and tuple relational calculus, SQL consists of a data definition language, data manipulation language, and data control language. The scope of SQL includes data insert, query, update and delete, schema creation and modification, and data access control. Although SQL is often described as, and to a great extent is, a declarative language (4GL), it also includes procedural elements.
SQL was one of the first commercial languages for Edgar F. Codd's relational model, as described in his influential 1970 paper, "A Relational Model of Data for Large Shared Data Banks."[10] Despite not entirely adhering to the relational model as described by Codd, it became the most widely used database language.[11][12]
SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987.[13] Since then, the standard has been revised to include a larger set of features. Despite the existence of such standards, most SQL code is not completely portable among different database systems without adjustments.
## History
SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s.[14] This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.[14] The acronym SEQUEL was later changed to SQL because "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.[15]
In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce, and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software, Inc. introduced the first commercially available implementation of SQL, Oracle V2 (Version2) for VAX computers.
After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype including System/38, SQL/DS, and DB2, which were commercially available in 1979, 1981, and 1983, respectively.[16]
## Design
SQL deviates in several ways from its theoretical foundation, the relational model and its tuple calculus. In that model, a table is a set of tuples, while in SQL, tables and query results are lists of rows: the same row may occur multiple times, and the order of rows can be employed in queries (e.g. in the LIMIT clause).
Critics argue that SQL should be replaced with a language that strictly returns to the original foundation: for example, see The Third Manifesto.
## Syntax
### Language elements
${\displaystyle \left.{\begin{array}{rl}\scriptstyle {\mathtt {UPDATE~clause}}&\{{\mathtt {UPDATE\ country}}\\\scriptstyle {\mathtt {SET~clause}}&\{{\mathtt {SET\ population=~}}\overbrace {\mathtt {population+1}} ^{\mathtt {expression}}\\\scriptstyle {\mathtt {WHERE~clause}}&\{{\mathtt {WHERE\ \underbrace {{name=}\overbrace {'USA'} ^{expression}} _{predicate};}}\end{array}}\right\}{\scriptstyle {\texttt {statement}}}}$
A chart showing several of the SQL language elements that compose a single statement
The SQL language is subdivided into several language elements, including:
• Clauses, which are constituent components of statements and queries. (In some cases, these are optional.)[17]
• Expressions, which can produce either scalar values, or tables consisting of columns and rows of data
• Predicates, which specify conditions that can be evaluated to SQL three-valued logic (3VL) (true/false/unknown) or Boolean truth values and are used to limit the effects of statements and queries, or to change program flow.
• Queries, which retrieve the data based on specific criteria. This is an important element of SQL.
• Statements, which may have a persistent effect on schemata and data, or may control transactions, program flow, connections, sessions, or diagnostics.
• SQL statements also include the semicolon (";") statement terminator. Though not required on every platform, it is defined as a standard part of the SQL grammar.
• Insignificant whitespace is generally ignored in SQL statements and queries, making it easier to format SQL code for readability.
### Operators
| Operator | Description | Example |
|----------|-------------|---------|
| = | Equal to | Author = 'Alcott' |
| <> | Not equal to (many DBMSs accept != in addition to <>) | Dept <> 'Sales' |
| > | Greater than | Hire_Date > '2012-01-31' |
| < | Less than | Bonus < 50000.00 |
| >= | Greater than or equal | Dependents >= 2 |
| <= | Less than or equal | Rate <= 0.05 |
| BETWEEN | Between an inclusive range | Cost BETWEEN 100.00 AND 500.00 |
| LIKE | Match a character pattern | First_Name LIKE 'Will%' |
| IN | Equal to one of multiple possible values | DeptCode IN (101, 103, 209) |
| IS or IS NOT | Compare to null (missing data) | Address IS NOT NULL |
| IS NOT DISTINCT FROM | Is equal to value or both are nulls (missing data) | Debt IS NOT DISTINCT FROM Receivables |
| AS | Used to change a field name when viewing results | SELECT employee AS 'department1' |
Other operators have at times been suggested or implemented, such as the skyline operator (for finding only those records that are not 'worse' than any others).
SQL has the case/when/then/else/end expression, which was introduced in SQL-92. In its most general form, which is called a "searched case" in the SQL standard, it works like else if in other programming languages:
```sql
CASE WHEN n > 0
     THEN 'positive'
     WHEN n < 0
     THEN 'negative'
     ELSE 'zero'
END
```
SQL tests WHEN conditions in the order they appear in the source. If the source does not specify an ELSE expression, SQL defaults to ELSE NULL. An abbreviated syntax, called "simple case" in the SQL standard, mirrors switch statements:
```sql
CASE n WHEN 1
       THEN 'One'
       WHEN 2
       THEN 'Two'
       ELSE 'I cannot count that high'
END
```
This syntax uses implicit equality comparisons, with the usual caveats for comparing with NULL.
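The NULL caveat is easy to observe in practice. The following sketch uses SQLite through Python's built-in sqlite3 module purely for illustration: a simple CASE compares with implicit equality, so a WHEN NULL branch can never match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simple CASE compares with =, so WHEN NULL can never match:
# NULL = NULL evaluates to Unknown, not True, and falls to ELSE.
simple = conn.execute(
    "SELECT CASE NULL WHEN NULL THEN 'matched' ELSE 'fell through' END"
).fetchone()[0]
# A searched CASE can test the missing value explicitly with IS NULL.
searched = conn.execute(
    "SELECT CASE WHEN NULL IS NULL THEN 'is null' ELSE 'other' END"
).fetchone()[0]
conn.close()
print(simple, searched)  # fell through is null
```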
For the Oracle-SQL dialect, the latter can be shortened to an equivalent DECODE construct:
```sql
SELECT DECODE(n, 1, 'one',
                 2, 'two',
                 'i cannot count that high')
FROM some_table;
```
The last value is the default; if none is specified, it also defaults to NULL. However, unlike the standard's "simple case", Oracle's DECODE considers two NULLs equal with each other.[18]
### Queries
The most common operation in SQL, the query, makes use of the declarative SELECT statement. SELECT retrieves data from one or more tables, or expressions. Standard SELECT statements have no persistent effects on the database. Some non-standard implementations of SELECT can have persistent effects, such as the SELECT INTO syntax provided in some databases.[19]
Queries allow the user to describe desired data, leaving the database management system (DBMS) to carry out planning, optimizing, and performing the physical operations necessary to produce that result as it chooses.
A query includes a list of columns to include in the final result, normally immediately following the SELECT keyword. An asterisk ("*") can be used to specify that the query should return all columns of the queried tables. SELECT is the most complex statement in SQL, with optional keywords and clauses that include:
• The FROM clause, which indicates the table(s) to retrieve data from. The FROM clause can include optional JOIN subclauses to specify the rules for joining tables.
• The WHERE clause includes a comparison predicate, which restricts the rows returned by the query. The WHERE clause eliminates all rows from the result set where the comparison predicate does not evaluate to True.
• The GROUP BY clause projects rows having common values into a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregation functions or to eliminate duplicate rows from a result set. The WHERE clause is applied before the GROUP BY clause.
• The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate.
• The ORDER BY clause identifies which column[s] to use to sort the resulting data, and in which direction to sort them (ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined.
• The DISTINCT keyword[20] eliminates duplicate data.[21]
The following example of a SELECT query returns a list of expensive books. The query retrieves all rows from the Book table in which the price column contains a value greater than 100.00. The result is sorted in ascending order by title. The asterisk (*) in the select list indicates that all columns of the Book table should be included in the result set.
```sql
SELECT *
FROM Book
WHERE price > 100.00
ORDER BY title;
```
The example below demonstrates a query of multiple tables, grouping, and aggregation, by returning a list of books and the number of authors associated with each book.
```sql
SELECT Book.title AS Title,
       count(*) AS Authors
FROM Book
JOIN Book_author
  ON Book.isbn = Book_author.isbn
GROUP BY Book.title;
```
Example output might resemble the following:
```
Title                  Authors
---------------------- -------
SQL Examples and Guide       4
The Joy of SQL               1
An Introduction to SQL       2
Pitfalls of SQL              1
```
Under the precondition that isbn is the only common column name of the two tables and that a column named title only exists in the Book table, one could re-write the query above in the following form:
```sql
SELECT title,
       count(*) AS Authors
FROM Book
NATURAL JOIN Book_author
GROUP BY title;
```
However, many vendors either do not support this approach, or require certain column-naming conventions for natural joins to work effectively.
SQL includes operators and functions for calculating values on stored values. SQL allows the use of expressions in the select list to project data, as in the following example, which returns a list of books that cost more than 100.00 with an additional sales_tax column containing a sales tax figure calculated at 6% of the price.
```sql
SELECT isbn,
       title,
       price,
       price * 0.06 AS sales_tax
FROM Book
WHERE price > 100.00
ORDER BY title;
```
#### Subqueries
Queries can be nested so that the results of one query can be used in another query via a relational operator or aggregation function. A nested query is also known as a subquery. While joins and other table operations provide computationally superior (i.e. faster) alternatives in many cases, the use of subqueries introduces a hierarchy in execution that can be useful or necessary. In the following example, the aggregation function AVG receives as input the result of a subquery:
```sql
SELECT isbn,
       title,
       price
FROM Book
WHERE price < (SELECT AVG(price) FROM Book)
ORDER BY title;
```
A subquery can use values from the outer query, in which case it is known as a correlated subquery.
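A correlated subquery can be sketched as follows. The Book table's genre column and the sample data are invented for illustration, and SQLite (via Python's built-in sqlite3 module) stands in for a full DBMS: the query finds books priced above the average of their own genre.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Book (title TEXT, genre TEXT, price REAL);
    INSERT INTO Book VALUES
        ('SQL Examples and Guide', 'reference', 150.00),
        ('The Joy of SQL',         'reference',  50.00),
        ('Pitfalls of SQL',        'essay',      80.00),
        ('An Introduction to SQL', 'essay',      20.00);
""")
# The inner query is re-evaluated per outer row: it references
# outer_b.genre, so each book is compared to its own genre's average.
rows = conn.execute("""
    SELECT title
    FROM Book AS outer_b
    WHERE price > (SELECT AVG(price)
                   FROM Book AS inner_b
                   WHERE inner_b.genre = outer_b.genre)
    ORDER BY title
""").fetchall()
titles = [r[0] for r in rows]
conn.close()
print(titles)  # ['Pitfalls of SQL', 'SQL Examples and Guide']
```

Each outer row triggers one evaluation of the inner query; that per-row dependence is what distinguishes a correlated subquery from an ordinary nested one.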
Since 1999 the SQL standard allows named subqueries called common table expressions (named and designed after the IBM DB2 version 2 implementation; Oracle calls these subquery factoring). CTEs can also be recursive by referring to themselves; the resulting mechanism allows tree or graph traversals (when represented as relations), and more generally fixpoint computations.
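A minimal recursive CTE can be sketched with SQLite (again via Python's sqlite3 module); the edge table and start node are invented for illustration. The UNION step deduplicates, so the traversal reaches a fixpoint even on cyclic graphs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A toy directed graph: 1 -> 2 -> 3 -> 4, plus an unreachable edge 5 -> 6.
conn.executescript("""
    CREATE TABLE edge (src INTEGER, dst INTEGER);
    INSERT INTO edge VALUES (1, 2), (2, 3), (3, 4), (5, 6);
""")
rows = conn.execute("""
    WITH RECURSIVE reachable(node) AS (
        SELECT 1                 -- base case: the start node
        UNION
        SELECT e.dst             -- recursive step: follow outgoing edges
        FROM edge AS e
        JOIN reachable AS r ON e.src = r.node
    )
    SELECT node FROM reachable ORDER BY node
""").fetchall()
nodes = [r[0] for r in rows]
conn.close()
print(nodes)  # [1, 2, 3, 4]
```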
#### Inline view
An inline view is an SQL subquery referenced in a FROM clause. Essentially, the inline view is a subquery that can be selected from or joined to; this allows the user to reference the subquery as a table. The inline view is also referred to as a derived table or a subselect. It was introduced in Oracle 9i.[22]
In the following example, the SQL statement involves a join from the initial "Book" table to the Inline view "Sales". This Inline view captures associated book sales information using the ISBN to join to the "Book" table. As a result, the Inline view provides the result set with additional columns (the number of items sold and the company that sold the books):
```sql
SELECT b.isbn, b.title, b.price, sales.items_sold, sales.company_nm
FROM Book b
JOIN (SELECT SUM(Items_Sold) Items_Sold, Company_Nm, ISBN
      FROM Book_Sales
      GROUP BY Company_Nm, ISBN) sales
  ON sales.isbn = b.isbn
```
#### Null or three-valued logic (3VL)
The concept of Null allows SQL to deal with missing information in the relational model. The word NULL is a reserved keyword in SQL, used to identify the Null special marker. Comparisons with Null, for instance equality (=) in WHERE clauses, results in an Unknown truth value. In SELECT statements SQL returns only results for which the WHERE clause returns a value of True; i.e., it excludes results with values of False and also excludes those whose value is Unknown.
Along with True and False, the Unknown resulting from direct comparisons with Null thus brings a fragment of three-valued logic to SQL. The truth tables SQL uses for AND, OR, and NOT correspond to a common fragment of the Kleene and Lukasiewicz three-valued logic (which differ in their definition of implication, however SQL defines no such operation).[23]
| p AND q | p = True | p = False | p = Unknown |
|---------|----------|-----------|-------------|
| q = True | True | False | Unknown |
| q = False | False | False | False |
| q = Unknown | Unknown | False | Unknown |

| p OR q | p = True | p = False | p = Unknown |
|--------|----------|-----------|-------------|
| q = True | True | True | True |
| q = False | True | False | Unknown |
| q = Unknown | True | Unknown | Unknown |

| p = q | p = True | p = False | p = Unknown |
|-------|----------|-----------|-------------|
| q = True | True | False | Unknown |
| q = False | False | True | Unknown |
| q = Unknown | Unknown | Unknown | Unknown |

| q | NOT q |
|---|-------|
| True | False |
| False | True |
| Unknown | Unknown |
There are however disputes about the semantic interpretation of Nulls in SQL because of its treatment outside direct comparisons. As seen in the table above, direct equality comparisons between two NULLs in SQL (e.g. NULL = NULL) return a truth value of Unknown. This is in line with the interpretation that Null does not have a value (and is not a member of any data domain) but is rather a placeholder or "mark" for missing information. However, the principle that two Nulls aren't equal to each other is effectively violated in the SQL specification for the UNION and INTERSECT operators, which do identify nulls with each other.[24] Consequently, these set operations in SQL may produce results not representing sure information, unlike operations involving explicit comparisons with NULL (e.g. those in a WHERE clause discussed above). In Codd's 1979 proposal (which was basically adopted by SQL92) this semantic inconsistency is rationalized by arguing that removal of duplicates in set operations happens "at a lower level of detail than equality testing in the evaluation of retrieval operations".[23] However, computer-science professor Ron van der Meyden concluded that "The inconsistencies in the SQL standard mean that it is not possible to ascribe any intuitive logical semantics to the treatment of nulls in SQL."[24]
Additionally, because SQL operators return Unknown when comparing anything with Null directly, SQL provides two Null-specific comparison predicates: IS NULL and IS NOT NULL test whether data is or is not Null.[25] SQL does not explicitly support universal quantification, and must work it out as a negated existential quantification.[26][27][28] There is also the "<row value expression> IS DISTINCT FROM <row value expression>" infixed comparison operator, which returns TRUE unless both operands are equal or both are NULL. Likewise, IS NOT DISTINCT FROM is defined as "NOT (<row value expression> IS DISTINCT FROM <row value expression>)". SQL:1999 also introduced BOOLEAN type variables, which according to the standard can also hold Unknown values. In practice, a number of systems (e.g. PostgreSQL) implement the BOOLEAN Unknown as a BOOLEAN NULL.
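The three-valued behaviour can be observed directly. SQLite is used here only for illustration (note that for scalars it spells these predicates IS / IS NOT rather than IS [NOT] DISTINCT FROM), and the Python driver surfaces Unknown as None:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
q = lambda sql: conn.execute(sql).fetchone()[0]

# Direct comparison with Null yields Unknown (None at the driver level).
eq_null = q("SELECT NULL = NULL")
# The Null-specific predicate gives a definite answer.
is_null = q("SELECT NULL IS NULL")
# WHERE keeps only rows where the predicate is True, so an
# Unknown comparison filters the row out.
kept = conn.execute("SELECT 'row' WHERE NULL = NULL").fetchall()
conn.close()
print(eq_null, is_null, kept)  # None 1 []
```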
### Data manipulation
The Data Manipulation Language (DML) is the subset of SQL used to add, update and delete data:
• INSERT adds rows (formally tuples) to an existing table, e.g.:
```sql
INSERT INTO example
(field1, field2, field3)
VALUES
('test', 'N', NULL);
```
• UPDATE modifies a set of existing table rows, e.g.:
```sql
UPDATE example
SET field1 = 'updated value'
WHERE field2 = 'N';
```
• DELETE removes existing rows from a table, e.g.:
```sql
DELETE FROM example
WHERE field2 = 'N';
```
• MERGE is used to combine the data of multiple tables. It combines the INSERT and UPDATE elements. It is defined in the SQL:2003 standard; prior to that, some databases provided similar functionality via different syntax, sometimes called "upsert".
```sql
MERGE INTO table_name USING table_reference ON (condition)
WHEN MATCHED THEN
  UPDATE SET column1 = value1 [, column2 = value2 ...]
WHEN NOT MATCHED THEN
  INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...])
```
### Transaction controls
Transactions, if available, wrap DML operations:
• START TRANSACTION (or BEGIN WORK, or BEGIN TRANSACTION, depending on SQL dialect) marks the start of a database transaction, which either completes entirely or not at all.
• SAVE TRANSACTION (or SAVEPOINT) saves the state of the database at the current point in the transaction, e.g.:
```sql
CREATE TABLE tbl_1(id int);
INSERT INTO tbl_1(id) VALUES(1);
INSERT INTO tbl_1(id) VALUES(2);
COMMIT;
UPDATE tbl_1 SET id=200 WHERE id=1;
SAVEPOINT id_1upd;
UPDATE tbl_1 SET id=1000 WHERE id=2;
ROLLBACK to id_1upd;
SELECT id from tbl_1;
```
• COMMIT makes all data changes in a transaction permanent.
• ROLLBACK discards all data changes since the last COMMIT or ROLLBACK, leaving the data as it was prior to those changes. Once the COMMIT statement completes, the transaction's changes cannot be rolled back.
COMMIT and ROLLBACK terminate the current transaction and release data locks. In the absence of a START TRANSACTION or similar statement, the semantics of SQL are implementation-dependent. The following example shows a classic transfer of funds transaction, where money is removed from one account and added to another. If either the removal or the addition fails, the entire transaction is rolled back.
```sql
START TRANSACTION;
UPDATE Account SET amount=amount-200 WHERE account_number=1234;
UPDATE Account SET amount=amount+200 WHERE account_number=2345;
IF ERRORS=0 COMMIT;
IF ERRORS<>0 ROLLBACK;
```
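The IF ERRORS checks are pseudocode, not standard SQL; in practice a host language typically decides whether to commit or roll back. A sketch of the same transfer using Python's built-in sqlite3 module (the account numbers and starting balances are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (account_number INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO Account VALUES (?, ?)", [(1234, 500), (2345, 100)])
conn.commit()

try:
    # Both updates belong to one transaction: either both persist or neither.
    conn.execute("UPDATE Account SET amount = amount - 200 WHERE account_number = 1234")
    conn.execute("UPDATE Account SET amount = amount + 200 WHERE account_number = 2345")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # undo everything since the last COMMIT

balances = dict(conn.execute("SELECT account_number, amount FROM Account"))
conn.close()
print(balances)  # {1234: 300, 2345: 300}
```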
### Data definition
The Data Definition Language (DDL) manages table and index structure. The most basic items of DDL are the CREATE, ALTER, RENAME, DROP and TRUNCATE statements:
• CREATE creates an object (a table, for example) in the database, e.g.:
```sql
CREATE TABLE example(
    column1 INTEGER,
    column2 VARCHAR(50),
    column3 DATE NOT NULL,
    PRIMARY KEY (column1, column2)
);
```
• ALTER modifies the structure of an existing object in various ways, for example, adding a column to an existing table or a constraint, e.g.:
```sql
ALTER TABLE example ADD column4 NUMBER(3) NOT NULL;
```
• TRUNCATE deletes all data from a table in a very fast way, deleting the data inside the table and not the table itself. It usually implies a subsequent COMMIT operation, i.e., it cannot be rolled back (data is not written to the logs for rollback later, unlike DELETE).
```sql
TRUNCATE TABLE example;
```
• DROP deletes an object in the database, usually irretrievably, i.e., it cannot be rolled back, e.g.:
```sql
DROP TABLE example;
```
### Data types
Each column in an SQL table declares the type(s) that column may contain. ANSI SQL includes the following data types.[29]
Character strings
• CHARACTER(n) or CHAR(n): fixed-width n-character string, padded with spaces as needed
• CHARACTER VARYING(n) or VARCHAR(n): variable-width string with a maximum size of n characters
• NATIONAL CHARACTER(n) or NCHAR(n): fixed width string supporting an international character set
• NATIONAL CHARACTER VARYING(n) or NVARCHAR(n): variable-width NCHAR string
Bit strings
• BIT(n): an array of n bits
• BIT VARYING(n): an array of up to n bits
Numbers
• INTEGER, SMALLINT and BIGINT
• FLOAT, REAL and DOUBLE PRECISION
• NUMERIC(precision, scale) or DECIMAL(precision, scale)
For example, the number 123.45 has a precision of 5 and a scale of 2. The precision is a positive integer that determines the number of significant digits in a particular radix (binary or decimal). The scale is a non-negative integer. A scale of 0 indicates that the number is an integer. For a decimal number with scale S, the exact numeric value is the integer value of the significant digits divided by 10^S.
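The scaled-integer rule can be checked with Python's decimal module (used purely for illustration):

```python
from decimal import Decimal

# NUMERIC(5, 2): up to five significant digits, two after the point.
# The exact value is the integer of significant digits divided by 10**scale.
digits, scale = 12345, 2
value = Decimal(digits) / (Decimal(10) ** scale)
print(value)  # 123.45

# Binary floats cannot represent such values exactly; scaled decimals can.
binary_exact = (0.1 + 0.2 == 0.3)
decimal_exact = (Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))
print(binary_exact, decimal_exact)  # False True
```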
SQL provides a function to round numerics or dates, called TRUNC (in Informix, DB2, PostgreSQL, Oracle and MySQL) or ROUND (in Informix, SQLite, Sybase, Oracle, PostgreSQL and Microsoft SQL Server).[30]
Temporal (date/time)
• DATE: for date values (e.g. 2011-05-03)
• TIME: for time values (e.g. 15:51:36). The granularity of the time value is usually a tick (100 nanoseconds).
• TIME WITH TIME ZONE or TIMETZ: the same as TIME, but including details about the time zone in question.
• TIMESTAMP: This is a DATE and a TIME put together in one variable (e.g. 2011-05-03 15:51:36).
• TIMESTAMP WITH TIME ZONE or TIMESTAMPTZ: the same as TIMESTAMP, but including details about the time zone in question.
SQL provides several functions for generating a date / time variable out of a date / time string (TO_DATE, TO_TIME, TO_TIMESTAMP), as well as for extracting the respective members (seconds, for instance) of such variables. The current system date / time of the database server can be called by using functions like NOW. The IBM Informix implementation provides the EXTEND and the FRACTION functions to increase the accuracy of time, for systems requiring sub-second precision.[31]
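These function names vary considerably by vendor. As one illustration of that variation, SQLite (queried here via Python's sqlite3 module) exposes strftime() rather than the TO_DATE/TO_TIMESTAMP family:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ts = "2011-05-03 15:51:36"
# Extract the year and the seconds component from a timestamp string.
year, seconds = conn.execute(
    "SELECT strftime('%Y', ?), strftime('%S', ?)", (ts, ts)
).fetchone()
conn.close()
print(year, seconds)  # 2011 36
```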
### Data control
The Data Control Language (DCL) authorizes users to access and manipulate data. Its two main statements are:
• GRANT authorizes one or more users to perform an operation or a set of operations on an object.
• REVOKE eliminates a grant, which may be the default grant.
Example:
```sql
GRANT SELECT, UPDATE
    ON example
    TO some_user, another_user;

REVOKE SELECT, UPDATE
    ON example
    FROM some_user, another_user;
```
## Procedural extensions
SQL is designed for a specific purpose: to query data contained in a relational database. SQL is a set-based, declarative programming language, not an imperative programming language like C or BASIC. However, extensions to Standard SQL add procedural programming language functionality, such as control-of-flow constructs. These include:
| Source | Common name | Full name |
|--------|-------------|-----------|
| ANSI/ISO Standard | SQL/PSM | SQL/Persistent Stored Modules |
| Interbase / Firebird | PSQL | Procedural SQL |
| IBM DB2 | SQL PL | SQL Procedural Language (implements SQL/PSM) |
| IBM Informix | SPL | Stored Procedural Language |
| IBM Netezza | NZPLSQL | (based on Postgres PL/pgSQL) |
| Microsoft / Sybase | T-SQL | Transact-SQL |
| Mimer SQL | SQL/PSM | SQL/Persistent Stored Module (implements SQL/PSM) |
| MySQL | SQL/PSM | SQL/Persistent Stored Module (implements SQL/PSM) |
| MonetDB | SQL/PSM | SQL/Persistent Stored Module (implements SQL/PSM) |
| NuoDB | SSP | Starkey Stored Procedures |
| Oracle | PL/SQL | Procedural Language/SQL (based on Ada) |
| PostgreSQL | PL/pgSQL | Procedural Language/PostgreSQL Structured Query Language (implements SQL/PSM) |
| Sybase | Watcom-SQL | SQL Anywhere Watcom-SQL Dialect |
| Teradata | SPL | Stored Procedural Language |
| SAP | SQL Script | SAP HANA SQL Script |
In addition to the standard SQL/PSM extensions and proprietary SQL extensions, procedural and object-oriented programmability is available on many SQL platforms via DBMS integration with other languages. The SQL standard defines SQL/JRT extensions (SQL Routines and Types for the Java Programming Language) to support Java code in SQL databases. SQL Server 2005 uses the SQLCLR (SQL Server Common Language Runtime) to host managed .NET assemblies in the database, while prior versions of SQL Server were restricted to unmanaged extended stored procedures primarily written in C. PostgreSQL lets users write functions in a wide variety of languages--including Perl, Python, Tcl, and C.[32]
## Interoperability and standardization
SQL implementations are incompatible between vendors and do not necessarily completely follow standards. In particular date and time syntax, string concatenation, NULLs, and comparison case sensitivity vary from vendor to vendor. A particular exception is PostgreSQL, which strives for standards compliance.[33]
Popular implementations of SQL commonly omit support for basic features of Standard SQL, such as the DATE or TIME data types. The most obvious such examples, and incidentally the most popular commercial and proprietary SQL DBMSs, are Oracle (whose DATE behaves as DATETIME,[34][35] and lacks a TIME type)[36] and MS SQL Server (before the 2008 version). As a result, SQL code can rarely be ported between database systems without modifications.
There are several reasons for this lack of portability between database systems:
• The complexity and size of the SQL standard means that most implementors do not support the entire standard.
• The standard does not specify database behavior in several important areas (e.g. indexes, file storage...), leaving implementations to decide how to behave.
• The SQL standard precisely specifies the syntax that a conforming database system must implement. However, the standard's specification of the semantics of language constructs is less well-defined, leading to ambiguity.
• Many database vendors have large existing customer bases; where the newer version of the SQL standard conflicts with the prior behavior of the vendor's database, the vendor may be unwilling to break backward compatibility.
• There is little commercial incentive for vendors to make it easier for users to change database suppliers (see vendor lock-in).
• Users evaluating database software tend to place other factors such as performance higher in their priorities than standards conformance.
SQL was adopted as a standard by the American National Standards Institute (ANSI) in 1986 as SQL-86[37] and by the International Organization for Standardization (ISO) in 1987. Nowadays the standard is subject to continuous improvement by the Joint Technical Committee ISO/IEC JTC 1, Information technology, Subcommittee SC 32, Data management and interchange, which is affiliated with both ISO and IEC. It is commonly denoted by the pattern ISO/IEC 9075-n:yyyy Part n: title, or, as a shortcut, ISO/IEC 9075.
ISO/IEC 9075 is complemented by ISO/IEC 13249: SQL Multimedia and Application Packages (SQL/MM), which defines SQL based interfaces and packages to widely spread applications like video, audio and spatial data.
Until 1996, the National Institute of Standards and Technology (NIST) data management standards program certified SQL DBMS compliance with the SQL standard. Vendors now self-certify the compliance of their products.[38]
The original standard declared that the official pronunciation for "SQL" was as an initialism: ("es queue el").[11] Regardless, many English-speaking database professionals (including Donald Chamberlin himself[5]) use the acronym-like pronunciation ("sequel"),[39] mirroring the language's pre-release development name, "SEQUEL".[14][15][5] The SQL standard has gone through a number of revisions:
| Year | Name | Alias | Comments |
|------|------|-------|----------|
| 1986 | SQL-86 | SQL-87 | First formalized by ANSI. |
| 1989 | SQL-89 | FIPS 127-1 | Minor revision that added integrity constraints; adopted as FIPS 127-1. |
| 1992 | SQL-92 | SQL2, FIPS 127-2 | Major revision (ISO 9075); Entry Level SQL-92 adopted as FIPS 127-2. |
| 1999 | SQL:1999 | SQL3 | Added regular expression matching, recursive queries (e.g. transitive closure), triggers, support for procedural and control-of-flow statements, non-scalar types, and some object-oriented features (e.g. structured types). Support for embedding SQL in Java (SQL/OLB) and vice versa (SQL/JRT). |
| 2003 | SQL:2003 | | Introduced XML-related features (SQL/XML), window functions, standardized sequences, and columns with auto-generated values (including identity columns). |
| 2006 | SQL:2006 | | ISO/IEC 9075-14:2006 defines ways that SQL can be used with XML: importing and storing XML data in an SQL database, manipulating it within the database, and publishing both XML and conventional SQL data in XML form. It also lets applications integrate queries into their SQL code with XQuery, the XML Query Language published by the World Wide Web Consortium (W3C), to concurrently access ordinary SQL data and XML documents.[40] |
| 2008 | SQL:2008 | | Legalizes ORDER BY outside cursor definitions. Adds INSTEAD OF triggers and the TRUNCATE statement.[41] |
| 2011 | SQL:2011 | | Adds temporal data definition and manipulation. |
| 2016 | SQL:2016 | | Adds row pattern matching, polymorphic table functions, and JSON support. |
Interested parties may purchase SQL standards documents from ISO,[42] IEC or ANSI. A draft of SQL:2008 is freely available as a zip archive.[43]
The SQL standard is divided into nine parts.
• ISO/IEC 9075-1:2016 Part 1: Framework (SQL/Framework). It provides logical concepts.
• ISO/IEC 9075-2:2016 Part 2: Foundation (SQL/Foundation). It contains the most central elements of the language and consists of both mandatory and optional features.
• ISO/IEC 9075-3:2016 Part 3: Call-Level Interface (SQL/CLI). It defines interfacing components (structures, procedures, variable bindings) that can be used to execute SQL statements from applications written in Ada, C, C++, COBOL, Fortran, MUMPS, Pascal or PL/I. (For Java see part 10.) SQL/CLI is defined in such a way that SQL statements and SQL/CLI procedure calls are treated as separate from the calling application's source code. Open Database Connectivity is a well-known superset of SQL/CLI. This part of the standard consists solely of mandatory features.
• ISO/IEC 9075-4:2016 Part 4: Persistent stored modules (SQL/PSM) It standardizes procedural extensions for SQL, including flow of control, condition handling, statement condition signals and resignals, cursors and local variables, and assignment of expressions to variables and parameters. In addition, SQL/PSM formalizes declaration and maintenance of persistent database language routines (e.g., "stored procedures"). This part of the standard consists solely of optional features.
• ISO/IEC 9075-9:2016 Part 9: Management of External Data (SQL/MED). It provides extensions to SQL that define foreign-data wrappers and datalink types to allow SQL to manage external data. External data is data that is accessible to, but not managed by, an SQL-based DBMS. This part of the standard consists solely of optional features.
• ISO/IEC 9075-10:2016 Part 10: Object language bindings (SQL/OLB). It defines the syntax and semantics of SQLJ, which is SQL embedded in Java (see also part 3). The standard also describes mechanisms to ensure binary portability of SQLJ applications, and specifies various Java packages and their contained classes. This part of the standard consists solely of optional features. (By contrast, JDBC defines an API and is not part of the SQL standard.)
• ISO/IEC 9075-11:2016 Part 11: Information and definition schemas (SQL/Schemata). It defines the Information Schema and Definition Schema, providing a common set of tools to make SQL databases and objects self-describing. These tools include the SQL object identifier, structure and integrity constraints, security and authorization specifications, features and packages of ISO/IEC 9075, support of features provided by SQL-based DBMS implementations, SQL-based DBMS implementation information and sizing items, and the values supported by the DBMS implementations.[44] This part of the standard contains both mandatory and optional features.
• ISO/IEC 9075-13:2016 Part 13: SQL Routines and types using the Java™ programming language (SQL/JRT). It specifies the ability to invoke static Java methods as routines from within SQL applications ('Java-in-the-database'). It also calls for the ability to use Java classes as SQL structured user-defined types. This part of the standard consists solely of optional features.
• ISO/IEC 9075-14:2016 Part 14: XML-Related Specifications (SQL/XML). It specifies SQL-based extensions for using XML in conjunction with SQL. The XML data type is introduced, as well as several routines, functions, and XML-to-SQL data type mappings to support manipulation and storage of XML in an SQL database.[40] This part of the standard consists solely of optional features.
ISO/IEC 9075 is complemented by ISO/IEC 13249 SQL Multimedia and Application Packages. This closely related but separate standard is developed by the same committee. It defines interfaces and packages based on SQL. The aim is a unified access to typical database applications like text, pictures, data mining or spatial data.
• ISO/IEC 13249-1:2016 Part 1: Framework
• ISO/IEC 13249-2:2003 Part 2: Full-Text
• ISO/IEC 13249-3:2016 Part 3: Spatial
• ISO/IEC 13249-5:2003 Part 5: Still image
• ISO/IEC 13249-6:2006 Part 6: Data mining
• ISO/IEC 13249-7:2013 Part 7: History
## Alternatives
A distinction should be made between alternatives to SQL as a language and alternatives to the relational model itself. Proposed relational alternatives to the SQL language include languages such as QUEL and Tutorial D; see navigational database and NoSQL for alternatives to the relational model.
## Distributed SQL processing
Distributed Relational Database Architecture (DRDA) was designed by a work group within IBM in the period 1988 to 1994. DRDA enables network connected relational databases to cooperate to fulfill SQL requests.[46][47]
An interactive user or program can issue SQL statements to a local RDB and receive tables of data and status indicators in reply from remote RDBs. SQL statements can also be compiled and stored in remote RDBs as packages and then invoked by package name. This is important for the efficient operation of application programs that issue complex, high-frequency queries. It is especially important when the tables to be accessed are located in remote systems.
The messages, protocols, and structural components of DRDA are defined by the Distributed Data Management Architecture.
## Notes
1. ^ "Media Type registration for application/sql". Internet Assigned Numbers Authority. 10 April 2013. Retrieved 2013.
2. ^ "The application/sql Media Type, RFC 6922". Internet Engineering Task Force. April 2013. p. 3. Retrieved 2013.
3. ^ Paul, Ryan. "A guided tour of the Microsoft Command Shell". Ars Technica. Retrieved 2011.
4. ^ Beaulieu, Alan (April 2009). Mary E Treseler, ed. Learning SQL (2nd ed.). Sebastapol, CA, USA: O'Reilly. ISBN 978-0-596-52083-0.
5. ^ a b c Gillespie, Patrick. "Pronouncing SQL: S-Q-L or Sequel?". Pronouncing SQL: S-Q-L or Sequel?. Retrieved 2012.
6. ^ "SQL". Britannica.com. Retrieved .
7. ^ "SQL". Oxforddictionaries.com. Retrieved .
8. ^ "SQL Guide". Publib.boulder.ibm.com. Retrieved .
9. ^ "Structured Query Language (SQL)". Msdn.microsoft.com. Retrieved .
10. ^ Codd, Edgar F (June 1970). "A Relational Model of Data for Large Shared Data Banks". Communications of the ACM. Association for Computing Machinery. 13 (6): 377-87. doi:10.1145/362384.362685. Retrieved .
11. ^ a b Chapple, Mike. "SQL Fundamentals". Databases. About.com. Retrieved .
12. ^ "Structured Query Language (SQL)". International Business Machines. October 27, 2006. Retrieved .
13. ^
14. ^ a b c d Chamberlin, Donald D; Boyce, Raymond F (1974). "SEQUEL: A Structured English Query Language" (PDF). Proceedings of the 1974 ACM SIGFIDET Workshop on Data Description, Access and Control. Association for Computing Machinery: 249-64. Retrieved .
15. ^ a b Oppel, Andy (February 27, 2004). Databases Demystified. San Francisco, CA: McGraw-Hill Osborne Media. pp. 90-1. ISBN 0-07-146960-5.
16. ^ "History of IBM, 1978". IBM Archives. IBM. Retrieved .
17. ^ ANSI/ISO/IEC International Standard (IS). Database Language SQL--Part 2: Foundation (SQL/Foundation). 1999.
18. ^ "DECODE". Docs.oracle.com. Retrieved .
19. ^ "Transact-SQL Reference". SQL Server Language Reference. SQL Server 2005 Books Online. Microsoft. 2007-09-15. Retrieved .
20. ^ SAS 9.4 SQL Procedure User's Guide. SAS Institute. 2013. p. 248. ISBN 9781612905686. Retrieved . Although the UNIQUE argument is identical to DISTINCT, it is not an ANSI standard.
21. ^ Leon, Alexis; Leon, Mathews (1999). "Eliminating duplicates - SELECT using DISTINCT". SQL: A Complete Reference. New Delhi: Tata McGraw-Hill Education (published 2008). p. 143. ISBN 9780074637081. Retrieved . [...] the keyword DISTINCT [...] eliminates the duplicates from the result set.
22. ^ "Derived Tables". ORACLE. Retrieved .
23. ^ a b Hans-Joachim, K. (2003). "Null Values in Relational Databases and Sure Information Answers". Semantics in Databases. Second International Workshop Dagstuhl Castle, Germany, January 7-12, 2001. Revised Papers. Lecture Notes in Computer Science. 2582. pp. 119-138. doi:10.1007/3-540-36596-6_7. ISBN 978-3-540-00957-3.
24. ^ a b
25. ^ ISO/IEC. ISO/IEC 9075-2:2003, "SQL/Foundation". ISO/IEC.
26. ^ "Semantics and problems of universal quantification in SQL". Portal.acm.org. doi:10.1093/comjnl/32.1.90. Retrieved .
27. ^ "Technique for universal quantification in SQL". Portal.acm.org. doi:10.1145/126482.126484. Retrieved .
28. ^ Kawash, Jalal (2004) Complex quantification in Structured Query Language (SQL): a tutorial using relational calculus; Journal of Computers in Mathematics and Science Teaching ISSN 0731-9258 Volume 23, Issue 2, 2004 AACE Norfolk, Virginia. Thefreelibrary.com
29. ^ "Information Technology: Database Language SQL". CMU. (proposed revised text of DIS 9075).
30. ^ Arie Jones, Ryan K. Stephens, Ronald R. Plew, Alex Kriegel, Robert F. Garrett (2005), SQL Functions Programmer's Reference. Wiley, 127 pages.
31. ^ [1]
32. ^ PostgreSQL contributors (2011). "PostgreSQL server programming". PostgreSQL 9.1 official documentation. postgresql.org. Retrieved .
33. ^ PostgreSQL contributors (2012). "About PostgreSQL". PostgreSQL 9.1 official website. PostgreSQL Global Development Group. Retrieved 2012. PostgreSQL prides itself in standards compliance. Its SQL implementation strongly conforms to the ANSI-SQL:2008 standard
34. ^ Lorentz, Diana; Roeser, Mary Beth; Abraham, Sundeep; Amor, Angela; Arora, Geeta; Arora, Vikas; Ashdown, Lance; Baer, Hermann; Bellamkonda, Shrikanth (October 2010) [1996]. "Basic Elements of Oracle SQL: Data Types". Oracle Database SQL Language Reference 11g Release 2 (11.2). Oracle Database Documentation Library. Redwood City, CA: Oracle USA, Inc. Retrieved 2010. For each DATE value, Oracle stores the following information: century, year, month, date, hour, minute, and second
35. ^ Lorentz, Diana; Roeser, Mary Beth; Abraham, Sundeep; Amor, Angela; Arora, Geeta; Arora, Vikas; Ashdown, Lance; Baer, Hermann; Bellamkonda, Shrikanth (October 2010) [1996]. "Basic Elements of Oracle SQL: Data Types". Oracle Database SQL Language Reference 11g Release 2 (11.2). Oracle Database Documentation Library. Redwood City, CA: Oracle USA, Inc. Retrieved 2010. The datetime data types are DATE...
36. ^ Lorentz, Diana; Roeser, Mary Beth; Abraham, Sundeep; Amor, Angela; Arora, Geeta; Arora, Vikas; Ashdown, Lance; Baer, Hermann; Bellamkonda, Shrikanth (October 2010) [1996]. "Basic Elements of Oracle SQL: Data Types". Oracle Database SQL Language Reference 11g Release 2 (11.2). Oracle Database Documentation Library. Redwood City, CA: Oracle USA, Inc. Retrieved 2010. Do not define columns with the following SQL/DS and DB2 data types, because they have no corresponding Oracle data type:... TIME
37. ^ "Finding Aid". X3H2 Records, 1978-95. American National Standards Institute.
38. ^ Doll, Shelley (June 19, 2002). "Is SQL a Standard Anymore?". TechRepublic's Builder.com. TechRepublic. Archived from the original on 2012-07-05. Retrieved .
39. ^ Melton, Jim; Alan R Simon (1993). "1.2. What is SQL?". Understanding the New SQL: A Complete Guide. Morgan Kaufmann. p. 536. ISBN 1-55860-245-3. SQL (correctly pronounced "ess cue ell," instead of the somewhat common "sequel")...
40. ^ a b Wagner, Michael (2010). SQL/XML:2006 - Evaluierung der Standardkonformität ausgewählter Datenbanksysteme. Diplomica Verlag. p. 100. ISBN 3-8366-9609-6.
41. ^ "SQL:2008 now an approved ISO international standard". Sybase. July 2008.
42. ^
43. ^ "SQL:2008 draft" (Zip). Whitemarsh Information Systems Corporation.
44. ^ "ISO/IEC 9075-11:2008: Information and Definition Schemas (SQL/Schemata)". 2008: 1.
45. ^ Fernando Saenz-Perez. "Outer Joins in a Deductive Database System" (PDF). Lbd.udc.es. Retrieved .
46. ^ Reinsch, R. (1988). "Distributed database for SAA". IBM Systems Journal. 27 (3): 362-389. doi:10.1147/sj.273.0362.
47. ^ Distributed Relational Database Architecture Reference. IBM Corp. SC26-4651-0. 1990.
https://www.postonline.co.uk/post/news/1202308/us-reinsurance-sector-performance-falls
# US reinsurance sector performance falls
The latest survey by the Reinsurance Association of America (RAA) showed that the combined ratio of a total of 30 major US property and casualty reinsurers reached 139.5% in the first nine months
https://willhoffer.com/2021-01-22/complex-number-operations/
22 Jan 2021
## Multiplication and Rotation
The following demo shows that multiplication by a complex number at distance one from the origin is the same as rotation.
Drag the red slider to rotate the point $$w$$. Click and drag the point $$z$$ to move it around.
The black $$+$$ represents the point $$z$$ rotated by $$a$$ degrees, while the purple $$\times$$ represents the product $$zw$$. They’re the same!
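The same fact can be checked numerically. Here is a minimal sketch in Python's standard library (the names `z`, `w`, and `a` mirror the demo above and are otherwise just illustrative choices):

```python
import cmath
import math

z = 3 + 4j   # the point being rotated
a = 90.0     # rotation angle in degrees

# A complex number at distance one from the origin: |w| == 1.
w = cmath.exp(1j * math.radians(a))

# Multiplying by w rotates z about the origin by a degrees:
product = z * w
rotated = cmath.rect(abs(z), cmath.phase(z) + math.radians(a))

# The product and the explicitly rotated point coincide (up to float error).
assert abs(product - rotated) < 1e-9
```

For `a = 90` this sends `3+4j` to `-4+3j`, exactly the behavior the draggable demo illustrates.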
https://bird.bcamath.org/handle/20.500.11824/19/browse?authority=109&type=author
• #### Centre-of-mass like superposition of Ornstein-Uhlenbeck processes: A pathway to non-autonomous stochastic differential equations and to fractional diffusion
(2018-10-25)
We consider an ensemble of Ornstein–Uhlenbeck processes featuring a population of relaxation times and a population of noise amplitudes that characterize the heterogeneity of the ensemble. We show that the centre-of-mass ...
• #### The challenge of brain complexity: A brief discussion about a fractal intermittency-based approach
(2016-10-30)
In the last years, the complexity paradigm is gaining momentum in many research fields where large multidimensional datasets are made available by the advancements in instrumental technology. A complex system is a ...
• #### The emergence of self-organization in complex systems-Preface
(2015-12-31)
[No abstract available]
• #### Finite-energy Lévy-type motion through heterogeneous ensemble of Brownian particles
(2019-02-01)
Complex systems are known to display anomalous diffusion, whose signature is a space/time scaling $x \sim t^\delta$ with $\delta \neq 1/2$ in the probability density function (PDF). Anomalous diffusion can emerge jointly ...
• #### Fractional Diffusion and Medium Heterogeneity: The Case of the Continuos Time Random Walk
(2021-07-24)
In this contribution we show that fractional diffusion emerges from a simple Markovian Gaussian random walk when the medium displays a power-law heterogeneity. Within the framework of the continuous time random walk, the ...
• #### Fractional kinetics emerging from ergodicity breaking in random media
(2016)
We present a modelling approach for diffusion in a complex medium characterized by a random lengthscale. The resulting stochastic process shows subdiffusion with a behavior in qualitative agreement with single particle ...
• #### Gaussian processes in complex media: new vistas on anomalous diffusion
(2019-09)
Normal or Brownian diffusion is historically identified by the linear growth in time of the variance and by a Gaussian shape of the displacement distribution. Processes departing from at least one of the above conditions ...
• #### A hypothesis about parallelism vs. seriality in dreams
(2019-10-10)
The current article discusses the hypothesis about parallelism vs. seriality in dreams. The process of dream building implies the construction of a complex network of closely interrelated sources. On the other hand, the ...
• #### Langevin equation in complex media and anomalous diffusion
(2018-07-30)
The problem of biological motion is a very intriguing and topical issue. Many efforts are being focused on the development of novel modelling approaches for the description of anomalous diffusion in biological systems, such ...
• #### A renewal model for the emergence of anomalous solute crowding in liposomes
(2015-12-31)
A fundamental evolutionary step in the onset of living cells is thought to be the spontaneous formation of lipid vesicles (liposomes) in the pre-biotic mixture. Even though it is well known that hydrophobic forces drive ...
• #### Scaling law of diffusivity generated by a noisy telegraph signal with fractal intermittency
(2015-12-31)
In many complex systems the non-linear cooperative dynamics determine the emergence of self-organized, metastable, structures that are associated with a birth-death process of cooperation. This is found to be described by ...
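Several of the abstracts above use the space/time scaling $x \sim t^\delta$. As a hedged illustration (not taken from any of the listed papers), a plain ±1 random walk recovers the normal-diffusion exponent δ = 1/2, the Brownian baseline these works depart from:

```python
import math
import random
import statistics

random.seed(0)

def mean_squared_displacement(n_steps, n_walkers=2000):
    """Average of x(t)^2 over many independent +/-1 random walks."""
    totals = []
    for _ in range(n_walkers):
        x = sum(random.choice((-1, 1)) for _ in range(n_steps))
        totals.append(x * x)
    return statistics.mean(totals)

# Normal diffusion: <x^2> grows linearly in t, i.e. x ~ t^delta with delta = 1/2.
msd_short = mean_squared_displacement(100)
msd_long = mean_squared_displacement(400)

# Estimate delta from the slope of log <x^2> vs. log t (halved, since <x^2> ~ t^(2*delta)).
delta = 0.5 * math.log(msd_long / msd_short) / math.log(400 / 100)

assert 0.4 < delta < 0.6  # close to the Brownian value 1/2
```

Anomalous diffusion, the subject of the papers above, is precisely the case where an analogous estimate yields δ ≠ 1/2.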
http://devmaster.net/posts/9793/calling-native-binary-code-from-c
102 May 13, 2006 at 09:26
Hi all,
I was wondering whether this C++ code has any C# equivalent:
```cpp
void main()
{
    unsigned char function[] = {0x90,  // NOP (no operation)
                                0xC3}; // RET (return)

    typedef void (*VoidFunction)();
    ((VoidFunction)&function)();
}
```
This is the first step towards run-time code generation. I’ve read about delegates and unsafe pointers, but this would require a real function pointer to native code. Any ideas?
Thanks!
Nick
#### 19 Replies
101 May 13, 2006 at 09:56
I don’t think that’s possible. C# is a purely managed language.
If you want to do something like that from C# you likely need a C++/CLI class.
101 May 13, 2006 at 11:29
Would something like this not do the job (albeit with a huge overhead compared to C++)? Scroll a bit down to ‘high-level assembler’:
101 May 13, 2006 at 11:30
Hi Nick
I don’t think that there is inline assembly or function pointers in C#, so this is a solution that came to my mind: you can write your unmanaged C++ code, assembly, or shellcode in an unmanaged DLL and call functions in it using Interop Services in C#.
I hope I have helped.
102 May 14, 2006 at 12:15
Thanks! Would it be possible to pass a C# memory buffer filled with x86 binary code to a native DLL and execute it there?
101 May 14, 2006 at 14:23
You can do whatever you want once you’re in the DLL.
Search for “C# DllImport”.
Of course Windows won’t let you run arbitrary code.
You have to allocate a buffer and change the protection on it.
I can’t remember the particular code.
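The allocate-a-buffer-and-change-its-protection idea can be sketched outside C# as well. Here is a hypothetical Python/ctypes version, assuming an x86-64 Linux machine (the three code bytes are `mov eax, edi; ret`, standing in for the NOP/RET pair in the original post):

```python
import ctypes
import mmap

# Machine code for: mov eax, edi; ret  -- returns its first integer argument
# under the x86-64 System V calling convention (assumption: x86-64 Linux).
code = bytes([0x89, 0xF8, 0xC3])

# Allocate a page that is readable, writable, AND executable.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(code)

# Build a typed function pointer to the buffer and call it.
address = ctypes.addressof(ctypes.c_char.from_buffer(buf))
identity = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)(address)

assert identity(42) == 42
```

On platforms enforcing W^X, requesting an executable page can fail or require extra steps; that is exactly the "change the protection" part being described.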
101 May 14, 2006 at 14:44
Sure
http://msdn2.microsoft.com/en-us/library/bd99e6zt.aspx
Also see
http://www.pinvoke.net/ for listing of most Windows API for using with c#’s pinvoke.
And like dave said, you should allocate a buffer in the native DLL and copy the passed contents into that new buffer to use it.
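For comparison, the DllImport pattern — declare a native entry point with a typed signature, then call it like an ordinary function — has a close analogue in Python's ctypes (used here only as an illustration; the C# attribute syntax itself differs):

```python
import ctypes
import ctypes.util

# Locate and load the C runtime -- the analogue of a [DllImport] declaration.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the native signature before calling, as DllImport declarations do.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

assert libc.strlen(b"interop") == 7
```

The marshalling step (here, passing `bytes` for `const char *`) is the same concern raised later in the thread about crossing the managed/native boundary.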
101 May 14, 2006 at 15:21
@Nick
Thanks! Would it be possible to pass a C# memory buffer filled with x86 binary code to a native DLL and execute it there?
I would really recommend using C++/CLI for such things. It’s much easier to handle. Since .NET allows mixing languages freely, you can do that stuff in a C++/CLI class (C++/CLI allows mixing managed and unmanaged code) and use that one in C#.
101 May 15, 2006 at 08:34
@Axel
I would really recommend using C++/CLI for such things. Its much easier to handle. Since .NET allows to mix languages freely you can do that stuff in a C++/CLI class (C++/CLI allows to mix managed and unmanaged code) and use that one in C#.
Well, I have the feeling that Nick is aiming for a SoftWire-like system for C# :) (which isn’t a bad idea at all, imho)
102 May 15, 2006 at 11:55
@roel
Well, I have the feeling that Nick is aiming for a softwire-alike system for c# :) (which isn’t a bad idea at all, imho)
Exactly. I wish to combine the nice language that is C# with the incredible power offered by soft-wiring technology. C++ is great too but it has clear flaws and stopped evolving…
101 May 15, 2006 at 12:07
C++ hasn’t stopped evolving, but besides that, C++/CLI has a lot of new features that can be used on the unmanaged side as well ;)
101 May 15, 2006 at 12:33
@.oisyn
C++ hasn’t stopped evolving, but besides that, C++/CLI has a lot of new features that can be used on the unmanaged side as well :)
I would do a thin wrapper with C++/CLI, which then uses a native DLL. C++/CLI is definitely a real improvement compared to Managed C++, but as it is a new technology it might have some bugs, and I’m not sure about the quality of the compiled code either. Managed C++, at least, is almost useless for real production use because of that (like calling destructors twice for an object in some cases…).
101 May 16, 2006 at 06:33
@kariem2k
Hi Nick
I don’t think that there is inline assembly or function pointers in C#, so this is a solution that came to my mind: you can write your unmanaged C++ code, assembly, or shellcode in an unmanaged DLL and call functions in it using Interop Services in C#.
I hope I have helped.
C# can use the reflection classes to emit MSIL inline, generally speaking, and I do believe C# has function pointers, but I’m not entirely sure (unsafe code is so much fun!!).
As for the main question: NO, you CANNOT use native assembly in managed languages, since the binaries are compiled to a high-level assembler which in turn is compiled to native assembler by the JIT engine upon execution.
You can probably use C++ with Managed Extensions to use assembler, but for most projects I wouldn’t recommend it, since you’ll be breaking that wonderful cross-platformness that C# has (assuming you have an environment on that platform, such as .NET, ROTOR, DotGNU, or Mono).
101 May 16, 2006 at 07:58
Prozac:
I am afraid you did not understand me.
Inline assembly is machine asm (not MSIL) that can be written directly inside your C/C++ code.
As for the main question, NO you CANNOT use native assembly in managed
languages since the binaries are compiled to a high-level assembler which in
turn are compiled to native assembler by the JIT engine upon execution.
You can use (not include) native (unmanaged) DLLs in C#: with the DllImport attribute you can call any function from any unmanaged DLL. Microsoft had to do that to enable programmers to use the Windows API.
You can probably use C++ with Managed Extensions to use assembler but
for most projects I wouldn’t recommend it since you’ll be breaking that wonderful
cross-platformness that C# has (assuming you have an environment on that platform such as .NET, ROTOR, DotGnu, or Mono).
I don’t think he wants to make his app cross-platform, for a simple reason: shellcode is not portable, like any low-level assembly application; you need much work to make it portable.
101 May 16, 2006 at 08:28
@juhnu
I would do a thin wrapper with the C++/CLI, which then uses a native dll.
You don’t need the native DLL. C++/CLI allows switching from managed to unmanaged code on the function level (the compiler will do that for you).
101 May 16, 2006 at 08:59
@Axel
You don’t need the native DLL. C++/CLI allows switching from managed to unmanaged code on the function level (the compiler will do that for you).
Yeah I know that, but you have to do marshalling between .NET and native C++ types at some point anyway. If you separate your code into different DLLs, it’s pretty easy to know what is actually compiled to IL and what is not. Also, as C++/CLI is really new technology, it probably has some bugs, which might make larger use impractical for the time being - its predecessor, Managed C++, suffered from this very problem.
101 May 16, 2006 at 09:08
Also as the C++/CLI is really new technology it probably has some bugs
Well it’s not that new, technically it’s just managed C++ with a revised syntax. The fundamentals are basically the same.
101 May 16, 2006 at 12:33
@.oisyn
Well it’s not that new, technically it’s just managed C++ with a revised syntax. The fundamentals are basically the same.
So are the bugs too? ;) Seriously, C++/CLI is a language in its own right and addresses many shortcomings of Managed C++. You are making it sound merely like a face-lift.
102 May 16, 2006 at 12:58
@Axel
You don’t need the native DLL. C++/CLI allows switching from managed to unmanaged code on the function level (the compiler will do that for you).
Interesting! I haven’t looked at C++/CLI yet.
So is C++/CLI indisputably better than unmanaged C++? I mean, could it replace it completely? Does it inherit unmanaged C++’s flaws, or is it more like C# with unmanaged support?
Thanks. :happy:
101 May 16, 2006 at 16:07
@juhnu
So are the bugs too? :blink: Seriously C++/CLI is a language on it’s own right and addresses many shortcomings of the Managed C++.
Well, most of the shortcomings were actually in the syntax itself. And sure it’s new, as it also targets .NET 2.0 while Managed C++ only targets 1.1, and it has some more interesting new features. And it’s a language of its own because MS took the liberty to standardize it at ECMA, but that has got nothing to do with the actual implementation of course :P.
But under the hood - pinning pointers, for example, and the rest of the interface with .NET itself - they’re very similar.
http://blog.jordanhillier.com/kindle/neuere-geometrie
|
# Pfaff H.'s Neuere geometrie PDF
By Pfaff H.
Similar geometry and topology books
Eckhard Krotscheck, Jesus Navarro's Microscopic Approaches to Quantum Liquids in Confined PDF
Quantum liquids in confined geometries exhibit a wide variety of new and interesting phenomena. For example, the internal structure of the liquid becomes more pronounced than in bulk liquids when the motion of the particles is restricted by an external matrix. In addition, free quantum liquid droplets make it possible to study the interaction of atoms and molecules with an external field without complications arising from interactions with container walls.
This book presents a powerful method for studying Einstein's special theory of relativity and its underlying hyperbolic geometry, in which analogies with classical results form the right tool. It introduces the notion of vectors into analytic hyperbolic geometry, where they are called gyrovectors. Newtonian velocity addition is the ordinary vector addition, which is both commutative and associative.
https://istopdeath.com/find-the-bounds-of-the-zeros-pxx25/
# Find the Bounds of the Zeros p(x)=x^2+5
p(x) = x^2 + 5
Check the leading coefficient of the function. This number is the coefficient of the expression with the largest degree.
Largest Degree: 2
Create a list of the coefficients of the function, excluding the leading coefficient of 1.
5
There will be two bound options, b1 and b2, the smaller of which is the answer.

To calculate the first bound option, take the absolute value of the largest coefficient from the list of coefficients, then add 1. The absolute value is the distance between a number and zero, so |5| = 5.

b1 = |5| + 1 = 5 + 1 = 6

To calculate the second bound option, sum the absolute values of the coefficients from the list. If the sum is greater than 1, use that number; if not, use 1. Here the sum is 5, which is greater than 1.

b2 = max(1, 5) = 5

Take the smaller bound option between b1 = 6 and b2 = 5.

Smaller Bound: 5

Every real root of p(x) = x^2 + 5 lies between -5 and 5.
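As a quick sanity check, the two bound options can be computed in a few lines of Python (the helper name is illustrative, not from any particular library):

```python
# Two classical bounds on the real zeros of a monic polynomial,
# given its non-leading coefficients. For p(x) = x^2 + 5 they are [0, 5].

def zero_bounds(coeffs):
    magnitudes = [abs(c) for c in coeffs if c != 0]
    b1 = max(magnitudes) + 1        # largest |coefficient|, plus 1
    b2 = max(1, sum(magnitudes))    # sum of |coefficients|, or 1 if larger
    return min(b1, b2)

print(zero_bounds([0, 5]))  # smaller of b1 = 6 and b2 = 5, i.e. 5
```

Every real root then lies in [-bound, bound].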
https://www.shaalaa.com/question-bank-solutions/find-the-sum-of-gp-1-3-9-27-to-12-terms-simple-applications-geometric-progression_65111
# Find the Sum of G.P.: 1 + 3 + 9 + 27 + ………. to 12 Terms - Mathematics
Find the sum of G.P.:
1 + 3 + 9 + 27 + ………. to 12 terms
#### Solution
Given G.P. : 1 + 3 + 9 + 27+.......
Here,
first term, a = 1
common ratio, r =3/1 = 3 (r > 1)
number of terms to be added, n = 12
∴ S_n = a(r^n - 1)/(r - 1)
S_12 = 1·(3^12 - 1)/(3 - 1) = (3^12 - 1)/2 = (531441 - 1)/2 = 531440/2 = 265720
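The same sum can be checked programmatically (a throwaway helper, not part of any textbook code):

```python
# Sum of a G.P.: S_n = a(r^n - 1)/(r - 1), valid for r != 1.

def gp_sum(a, r, n):
    return a * (r**n - 1) // (r - 1)  # exact integer division for integer a, r

print(gp_sum(1, 3, 12))  # 265720
```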
Concept: Simple Applications - Geometric Progression
#### APPEARS IN
Selina Concise Maths Class 10 ICSE
Chapter 11 Geometric Progression
Exercise 11 (D) | Q 1.1 | Page 161
https://underworld2.readthedocs.io/en/latest/build/underworld.systems.html
# underworld.systems module

This module contains routines relating to differential systems.

## Functions

- underworld.systems.Solver – This method simply returns a necessary solver for the provided system.

## Classes

- underworld.systems.AdvectionDiffusion – This class provides functionality for a discrete representation of an advection-diffusion equation.
- underworld.systems.HeatSolver – Steady state heat equation solver.
- underworld.systems.SteadyStateDarcyFlow – This class provides functionality for a discrete representation of the steady state Darcy flow equation.
- underworld.systems.SteadyStateHeat – This class provides functionality for a discrete representation of the steady state heat equation.
- underworld.systems.Stokes – This class provides functionality for a discrete representation of the Stokes flow equations.
- underworld.systems.StokesSolver – The Block Stokes Schur Complement Solver; solves the saddle-point system.
- underworld.systems.SwarmAdvector – Objects of this class advect a swarm through time using the provided velocity field.
- underworld.systems.TimeIntegration – Abstract class for integrating numerical objects (fields, swarms, etc.) in time.
underworld.systems.Solver(eqs, type='BSSCR', *args, **kwargs)[source]
This method simply returns a necessary solver for the provided system.
class underworld.systems.AdvectionDiffusion(phiField=None, velocityField=None, fn_diffusivity=None, fn_sourceTerm=None, method='SUPG', conditions=[], phiDotField=None, allow_non_q1=False, gauss_swarm=None, **kwargs)[source]
Bases: object
This class provides functionality for a discrete representation of an advection-diffusion equation.
$\frac{\partial\phi}{\partial t} + {\bf u } \cdot \nabla \phi = \nabla \cdot { ( k \nabla \phi ) } + H$
Two methods are available to integrate the scalar $$\phi$$ through time:
1. SUPG - The Streamline Upwind Petrov Galerkin method. [1]
2. SLCN - The Semi-Lagrangian Crank-Nicolson method. [2]
SLCN is the preferred method for Q1 elements on orthogonal Cartesian meshes. It is quicker, less diffusive, and unconditionally stable. SUPG, the legacy method, is more robust for arbitrarily deformed meshes. Both methods are considered EXPERIMENTAL for non-Q1 element meshes.
Parameters:
- phiField (underworld.mesh.MeshVariable) – The concentration field, typically the temperature field.
- velocityField (underworld.mesh.MeshVariable) – The velocity field.
- fn_diffusivity (underworld.function.Function) – A function that defines the diffusivity within the domain.
- fn_sourceTerm (underworld.function.Function) – A function that defines the heating within the domain. Optional.
- conditions (underworld.conditions.SystemCondition) – Numerical conditions to impose on the system. This should be supplied as the condition itself, or a list object containing the conditions.
- phiDotField (underworld.mesh.MeshVariable) – Only used for SUPG. A MeshVariable that defines the initial time derivative of the phiField. Typically 0 at the beginning of a model, e.g. phiDotField.data[:]=0. When using a phiField loaded from disk, one should also load the phiDotField to ensure the solving method has the time derivative information needed for a smooth restart. No Dirichlet conditions are required for this field, as the phiField degrees of freedom map exactly to this field's Dirichlet conditions, the value of which ought to be 0 for constant values of phi.
- gauss_swarm (underworld.swarm.GaussIntegrationSwarm) – If provided, this gauss_swarm will be used for (Gaussian) numerical integration rather than a default gauss integration swarm that is automatically generated and dependent on the element order of the mesh.
- allow_non_q1 (bool, default False) – Allow the integration to be performed over a non-Q1 element mesh. (Under Q2 elements instabilities have been observed, as the implementation is only for Q1 elements.)
Notes
Constructor must be called collectively by all processes.
[1] Brooks, A. N. and Hughes, T. J. R., “Streamline Upwind/Petrov-Galerkin Formulations for Convection Dominated Flows with Particular Emphasis on the Incompressible Navier-Stokes Equations”, Comput. Methods Appl. Mech. Eng., 32, 1982, 199-259.
[2] Spiegelman, M. and Katz, R.F., “A semi-Lagrangian Crank-Nicolson algorithm for the numerical solution of advection-diffusion problems”, Geochemistry, Geophysics, Geosystems, 7(4), 2006.
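The semi-Lagrangian idea behind SLCN can be sketched in one dimension with plain NumPy. This is only an illustration of the departure-point concept for pure advection at constant velocity; it is not underworld code, and it omits the Crank-Nicolson diffusion half of the scheme:

```python
import numpy as np

def semi_lagrangian_advect(phi, u, dt, dx):
    """One pure-advection step: sample phi at the departure points x - u*dt."""
    n = len(phi)
    length = n * dx
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % length                   # trace back along the velocity (periodic domain)
    return np.interp(x_dep, x, phi, period=length)  # linear interpolation at departure points

# Advecting by exactly one cell (u*dt = dx) shifts the profile one node to the right.
phi = np.array([0.0, 1.0, 0.0, 0.0])
print(semi_lagrangian_advect(phi, u=1.0, dt=0.25, dx=0.25))  # [0. 0. 1. 0.]
```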
get_max_dt()[source]
Returns a timestep size for the current system.
Returns: float – The timestep size.
integrate(dt=0.0, **kwargs)[source]
Integrates the advection diffusion system through time, dt. Must be called collectively by all processes.
Parameters: dt (float) – The timestep interval to use
class underworld.systems.HeatSolver(heatSLE, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
configure(solve_type='')[source]
Configure solver.
solve_type can be one of:
• mumps : MUMPS parallel direct solver.
• superludist : SuperLU parallel direct solver.
• superlu : SuperLU direct solver (serial only).
• lu : LU direct solver (serial only).
solve(nonLinearIterate=None, nonLinearTolerance=0.01, nonLinearMaxIterations=500, callback_post_solve=None, **kwargs)[source]
Solve the HeatEq system
Parameters:
- nonLinearIterate (bool) – True will perform nonlinear iterations, False (or 0) will not.
- nonLinearTolerance (float, default=1.0e-2) – Relative tolerance criterion for the change in the velocity field.
- nonLinearMaxIterations (int, default=500) – Maximum number of nonlinear iterations to perform.
- callback_post_solve (func, default=None) – Optional callback function to be performed at the end of a linear solve iteration. Commonly this will be used to perform operations between nonlinear iterations, for example, calibrating the solution or removing the system null space.
class underworld.systems.SteadyStateDarcyFlow(pressureField, fn_diffusivity, fn_bodyforce=None, voronoi_swarm=None, conditions=[], velocityField=None, swarmVarVelocity=None, _removeBCs=True, gauss_swarm=None, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
This class provides functionality for a discrete representation of the steady state darcy flow equation.
The class uses a standard Galerkin finite element method to construct a system of linear equations which may then be solved using an object of the underworld.system.Solver class.
The underlying element types are determined by the supporting mesh used for the ‘pressureField’.
The strong form of the given boundary value problem, for $$f$$, $$q$$ and $$h$$ given, is
\begin{split}\begin{align} q_i =& \kappa \, ( -u_{,i} + S_i ) & \\ q_{i,i} =& \: f & \text{ in } \Omega \\ u =& \: q & \text{ on } \Gamma_q \\ -q_i n_i =& \: h & \text{ on } \Gamma_h \\ \end{align}\end{split}
where,
• $$\kappa$$ is the diffusivity, $$u$$ is the pressure,
• $$S$$ is a flow body-source, for example due to gravity,
• $$f$$ is a source term, $$q$$ is the Dirichlet condition, and
• $$h$$ is a Neumann condition.
The problem boundary, $$\Gamma$$, admits the decomposition $$\Gamma=\Gamma_q\cup\Gamma_h$$ where $$\emptyset=\Gamma_q\cap\Gamma_h$$. The equivalent weak form is:
$-\int_{\Omega} w_{,i} \, q_i \, d \Omega = \int_{\Omega} w \, f \, d\Omega + \int_{\Gamma_h} w \, h \, d \Gamma$
where we must find $$u$$ which satisfies the above for all $$w$$ in some variational space.
Parameters:
- pressureField (underworld.mesh.MeshVariable) – The solution field for pressure.
- fn_diffusivity (underworld.function.Function) – The function that defines the diffusivity across the domain.
- fn_bodyforce (underworld.function.Function) – A function that defines the flow body-force across the domain, for example gravity. Must be a vector. Optional.
- voronoi_swarm (underworld.swarm.Swarm) – A swarm with just one particle within each cell should be provided. This avoids the evaluation of the velocity on nodes and inaccuracies arising from diffusivity changes within cells. If a swarm is provided, voronoi-type numerical integration is utilised, with the provided swarm used as the basis for the voronoi integration. If no voronoi_swarm is provided, Gauss integration is used.
- conditions (underworld.conditions.SystemCondition) – Numerical conditions to impose on the system. This should be supplied as the condition itself, or a list object containing the conditions.
- gauss_swarm (underworld.swarm.GaussIntegrationSwarm) – If provided, this gauss_swarm will be used for (Gaussian) numerical integration rather than a default gauss integration swarm that is automatically generated and dependent on the element order of the mesh. NB: if a voronoi_swarm is defined, it OVERRIDES this gauss_swarm as the preferred integration swarm (quadrature method).
- velocityField (underworld.mesh.MeshVariable) – Solution field for Darcy flow velocity. Optional.
- swarmVarVelocity (underworld.swarm.SwarmVariable) – If a swarm variable is provided, the velocity calculated on the swarm will be stored. This is the most representative velocity data object, as the velocity calculation occurs on the swarm, away from mesh nodes. Optional.
Notes
Constructor must be called collectively by all processes.
fn_bodyforce
The heating function. You may change this function directly via this property.
fn_diffusivity
The diffusivity function. You may change this function directly via this property.
class underworld.systems.SteadyStateHeat(temperatureField, fn_diffusivity, fn_heating=0.0, voronoi_swarm=None, conditions=[], gauss_swarm=None, _removeBCs=True, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
This class provides functionality for a discrete representation of the steady state heat equation.
The class uses a standard Galerkin finite element method to construct a system of linear equations which may then be solved using an object of the underworld.system.Solver class.
The underlying element types are determined by the supporting mesh used for the ‘temperatureField’.
The strong form of the given boundary value problem, for $$f$$, $$g$$ and $$h$$ given, is
\begin{split}\begin{align} q_i =& - \alpha \, u_{,i} & \\ q_{i,i} =& \: f & \text{ in } \Omega \\ u =& \: g & \text{ on } \Gamma_g \\ -q_i n_i =& \: h & \text{ on } \Gamma_h \\ \end{align}\end{split}
where, $$\alpha$$ is the diffusivity, $$u$$ is the temperature, $$f$$ is a source term, $$g$$ is the Dirichlet condition, and $$h$$ is a Neumann condition. The problem boundary, $$\Gamma$$, admits the decomposition $$\Gamma=\Gamma_g\cup\Gamma_h$$ where $$\emptyset=\Gamma_g\cap\Gamma_h$$. The equivalent weak form is:
$-\int_{\Omega} w_{,i} \, q_i \, d \Omega = \int_{\Omega} w \, f \, d\Omega + \int_{\Gamma_h} w \, h \, d \Gamma$
where we must find $$u$$ which satisfies the above for all $$w$$ in some variational space.
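The Galerkin procedure described above can be illustrated on a 1D model problem, -u'' = f on (0,1) with u(0) = u(1) = 0 and f = 1. This is a generic sketch with NumPy, not underworld's actual assembly code:

```python
import numpy as np

n = 8                          # number of linear elements
h = 1.0 / n
K = np.zeros((n - 1, n - 1))   # stiffness matrix over interior nodes
for i in range(n - 1):
    K[i, i] = 2.0 / h                           # integral of phi_i' * phi_i'
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -1.0 / h    # integral of phi_i' * phi_{i-1}'
F = np.full(n - 1, h)          # load vector: integral of f * phi_i = f*h for f = 1
u = np.linalg.solve(K, F)

# For this problem linear elements are nodally exact: u(x) = x(1-x)/2.
x = np.linspace(h, 1 - h, n - 1)
print(np.allclose(u, x * (1 - x) / 2))  # True
```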
Parameters:
- temperatureField (underworld.mesh.MeshVariable) – The solution field for temperature.
- fn_diffusivity (underworld.function.Function) – The function that defines the diffusivity across the domain.
- fn_heating (underworld.function.Function) – A function that defines the heating across the domain. Optional.
- voronoi_swarm (underworld.swarm.Swarm) – If a voronoi_swarm is provided, voronoi-type numerical integration is utilised. The provided swarm is used as the basis for the voronoi integration. If no voronoi_swarm is provided, Gauss integration is used.
- gauss_swarm (underworld.swarm.GaussIntegrationSwarm) – If provided, this gauss_swarm will be used for (Gaussian) numerical integration rather than a default gauss integration swarm that is automatically generated and dependent on the element order of the mesh. NB: if a voronoi_swarm is defined, it OVERRIDES this gauss_swarm as the preferred integration swarm (quadrature method).
- conditions (underworld.conditions.SystemCondition) – Numerical conditions to impose on the system. This should be supplied as the condition itself, or a list object containing the conditions.
Notes
Constructor must be called collectively by all processes.
Example
Setup a basic thermal system:
>>> linearMesh = uw.mesh.FeMesh_Cartesian( elementType='Q1/dQ0', elementRes=(4,4), minCoord=(0.,0.), maxCoord=(1.,1.) )
>>> tField = uw.mesh.MeshVariable( linearMesh, 1 )
>>> topNodes = linearMesh.specialSets["MaxJ_VertexSet"]
>>> bottomNodes = linearMesh.specialSets["MinJ_VertexSet"]
>>> tbcs = uw.conditions.DirichletCondition(tField, topNodes + bottomNodes)
>>> tField.data[topNodes.data] = 0.0
>>> tField.data[bottomNodes.data] = 1.0
>>> tSystem = uw.systems.SteadyStateHeat(temperatureField=tField, fn_diffusivity=1.0, conditions=[tbcs])
Example with nonlinear diffusivity:
>>> k = tField + 1.0
>>> tSystem = uw.systems.SteadyStateHeat(temperatureField=tField, fn_diffusivity=k, conditions=[tbcs])
>>> solver = uw.systems.Solver(tSystem)
>>> solver.solve()
Traceback (most recent call last):
...
RuntimeError: Nonlinearity detected.
Diffusivity function depends on the temperature field provided to the system.
Please set the 'nonLinearIterate' solve parameter to 'True' or 'False' to continue.
>>> solver.solve(nonLinearIterate=True)
fn_diffusivity
The diffusivity function. You may change this function directly via this property.
fn_heating
The heating function. You may change this function directly via this property.
class underworld.systems.Stokes(velocityField, pressureField, fn_viscosity, fn_bodyforce=None, fn_one_on_lambda=None, fn_source=None, voronoi_swarm=None, conditions=[], gauss_swarm=None, _removeBCs=True, _fn_viscosity2=None, _fn_director=None, fn_stresshistory=None, _fn_stresshistory=None, _fn_v0=None, _fn_p0=None, _fn_fssa=None, _callback_post_solve=None, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
This class provides functionality for a discrete representation of the Stokes flow equations.
Specifically, the class uses a mixed finite element method to construct a system of linear equations which may then be solved using an object of the underworld.system.Solver class.
The underlying element types are determined by the supporting mesh used for the ‘velocityField’ and ‘pressureField’ parameters.
The strong form of the given boundary value problem, for $$f$$, $$g$$ and $$h$$ given, is
\begin{split}\begin{align} \sigma_{ij,j} + f_i =& \: 0 & \text{ in } \Omega \\ u_{k,k} + \frac{p}{\lambda} =& \: H & \text{ in } \Omega \\ u_i =& \: g_i & \text{ on } \Gamma_{g_i} \\ \sigma_{ij}n_j =& \: h_i & \text{ on } \Gamma_{h_i} \\ \end{align}\end{split}
where,
• $$\sigma_{ij}$$ is the stress tensor,
• $$u_i$$ is the velocity,
• $$p$$ is the pressure,
• $$f_i$$ is a body force,
• $$\lambda$$ is pseudo compressibility factor,
• $$H$$ is the compressible equation source term,
• $$g_i$$ are the velocity boundary conditions (DirichletCondition)
• $$h_i$$ are the traction boundary conditions (NeumannCondition).
The problem boundary, $$\Gamma$$, admits the decompositions $$\Gamma=\Gamma_{g_i}\cup\Gamma_{h_i}$$ where $$\emptyset=\Gamma_{g_i}\cap\Gamma_{h_i}$$. The equivalent weak form is:
$\int_{\Omega} w_{(i,j)} \sigma_{ij} \, d \Omega = \int_{\Omega} w_i \, f_i \, d\Omega + \sum_{j=1}^{n_{sd}} \int_{\Gamma_{h_j}} w_i \, h_i \, d \Gamma$
where we must find $$u$$ which satisfies the above for all $$w$$ in some variational space.
Parameters:
- velocityField (underworld.mesh.MeshVariable) – Variable used to record system velocity.
- pressureField (underworld.mesh.MeshVariable) – Variable used to record system pressure.
- fn_viscosity (underworld.function.Function) – Function which reports a viscosity value. Function must return scalar float values.
- fn_bodyforce (underworld.function.Function, default=None) – Function which reports a body force for the system. Function must return float values of identical dimensionality to the provided velocity variable.
- fn_one_on_lambda (underworld.function.Function, default=None) – Pseudo-compressibility factor. Note that non-zero values are incompatible with the ‘penalty’ stokes solver. Ensure a ‘penalty’ equal to 0 is used if this function is non-zero. By default this is the case.
- fn_source (underworld.function.Function, default=None) – Mass source term. Check fn_one_on_lambda for usage caveats.
- fn_stresshistory (underworld.function.Function, default=None) – Function which defines the stress history term used for viscoelasticity. Function is a vector of size 3 (dim=2) or 6 (dim=3) representing a symmetric tensor.
- voronoi_swarm (underworld.swarm.Swarm) – If a voronoi_swarm is provided, voronoi-type numerical integration is utilised. The provided swarm is used as the basis for the voronoi integration. If no voronoi_swarm is provided, Gauss integration is used.
- gauss_swarm (underworld.swarm.GaussIntegrationSwarm) – If provided, this gauss_swarm will be used for (Gaussian) numerical integration rather than a default gauss integration swarm that is automatically generated and dependent on the element order of the mesh. NB: if a voronoi_swarm is defined, it OVERRIDES this gauss_swarm as the preferred integration swarm (quadrature method).
- conditions (underworld.conditions.SystemCondition) – Numerical conditions to impose on the system. This should be supplied as the condition itself, or a list object containing the conditions.
Notes
Constructor must be called collectively by all processes.
eqResiduals
Returns the stokes flow equations’ residuals from the latest solve. Residual calculations use the matrices and vectors of the discretised problem. The residuals correspond to the momentum equation and the continuity equation.
Returns: (r1, r2) – 2-tuple of doubles, where r1 is the momentum equation residual and r2 is the continuity equation residual.
Notes
This method must be called collectively by all processes.
fn_bodyforce
The body force function. You may change this function directly via this property.
fn_one_on_lambda
A bulk viscosity parameter
fn_source
The volumetric source term function. You may change this function directly via this property.
fn_viscosity
The viscosity function. You may change this function directly via this property.
stokes_callback
Return the callback function used by this system
velocity_rms()[source]
Calculates RMS velocity as follows
$v_{rms} = \sqrt{ \frac{ \int_V (\mathbf{v} \cdot \mathbf{v}) \, \mathrm{d}V } {\int_V \, \mathrm{d}V} }$
class underworld.systems.StokesSolver(stokesSLE, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
The Block Stokes Schur Complement Solver: This solves the saddle-point system
$\begin{split}\begin{bmatrix} K & G \\ G^T & C \end{bmatrix} \begin{bmatrix} u \\ p \end{bmatrix} = \begin{bmatrix}f \\ h \end{bmatrix}\end{split}$
via a Schur complement method.
We first solve:
$S p = G^T K^{-1} f - h, \qquad \text{(1)}$
where $$S = G^T K^{-1} G-C$$
Then we backsolve for the velocity:
$K u = f - G p. \qquad \text{(2)}$
The effect of $$K^{-1}$$ in (1) is obtained via a KSPSolve in PETSc. This has the prefix ‘A11’ (often called the ‘inner’ solve)
The solve in (1) for the pressure has prefix ‘scr’.
Assuming the returned solver is called ‘solver’, it is possible to configure these solves individually via the solver.options.A11 and solver.options.scr dictionaries.
Try help(solver.options.A11) for some details.
Common configurations are provided via the set_inner_method() method.
help(solver.set_inner_method) for more.
For more advanced configurations use the solver.options.A11/scr dictionaries directly.
help(solver.options) to see more.
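The algebra of steps (1) and (2) can be demonstrated on a tiny dense saddle-point system with NumPy. This is purely a sketch of the Schur complement recipe; the actual solver applies $K^{-1}$ matrix-free through PETSc:

```python
import numpy as np

# A toy saddle-point system [[K, G], [G^T, C]] [u; p] = [f; h].
K = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD "velocity" block
G = np.array([[1.0], [2.0]])
C = np.zeros((1, 1))
f = np.array([1.0, 2.0])
h = np.array([3.0])

Kinv = np.linalg.inv(K)                      # stands in for the inner 'A11' solve
S = G.T @ Kinv @ G - C                       # Schur complement
p = np.linalg.solve(S, G.T @ Kinv @ f - h)   # pressure solve, step (1)
u = np.linalg.solve(K, f - G @ p)            # velocity backsolve, step (2)

print(np.allclose(K @ u + G @ p, f), np.allclose(G.T @ u + C @ p, h))  # True True
```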
set_inner_method(solve_type='mg')[source]
Configure velocity/inner solver (A11 PETSc prefix).
Available options below. Note that associated solver software (for example mumps) must be installed.
• mg : Geometric multigrid (default).
• nomg : Disables multigrid.
• lu : LU direct solver (serial only).
• mumps : MUMPS parallel direct solver.
• superludist : SuperLU parallel direct solver.
• superlu : SuperLU direct solver (serial only).
set_mg_levels(levels)[source]
Set the number of multigrid levels manually. It is set automatically by default.
set_penalty(penalty)[source]
By setting the penalty, the Augmented Lagrangian method is used for the solve. This method is not recommended for normal use, as there is additional memory and CPU overhead. It can, however, often help improve convergence for problems with large viscosity contrasts that are having trouble converging.
A penalty of roughly 0.1 of the maximum viscosity contrast is not a bad place to start as a rule of thumb. (check notes/paper)
solve(nonLinearIterate=None, nonLinearTolerance=0.01, nonLinearKillNonConvergent=False, nonLinearMinIterations=1, nonLinearMaxIterations=500, callback_post_solve=None, print_stats=False, reinitialise=True, **kwargs)[source]
Solve the stokes system
Parameters:
- nonLinearIterate (bool) – True will perform nonlinear iterations, False (or 0) will not.
- nonLinearTolerance (float, default=1.0e-2) – Relative tolerance criterion for the change in the velocity field.
- nonLinearMaxIterations (int, default=500) – Maximum number of nonlinear iterations to perform.
- callback_post_solve (func, default=None) – Optional callback function to be performed at the end of a linear solve iteration. Commonly this will be used to perform operations between nonlinear iterations, for example, calibrating the pressure solution or removing the system null space.
- print_stats (bool, default=False) – Print out solver iteration and timing counts per solve.
- reinitialise (bool, default=True) – Rebuild the system discretisation storage (location matrix/petsc mats & vecs) and repopulate, if available, the stokes voronoi swarm before the system is solved.
class underworld.systems.SwarmAdvector(velocityField, swarm, order=2, **kwargs)[source]
Bases: underworld.systems._timeintegration.TimeIntegration
Objects of this class advect a swarm through time using the provided velocity field.
Parameters:
- velocityField (underworld.mesh.MeshVariable) – The MeshVariable field used for evaluating the velocity field that advects the swarm particles.
- swarm (underworld.swarm.Swarm) – Particle swarm that will be advected by the given velocity field.
integrate(dt, update_owners=True)[source]
Integrate the associated swarm in time, by dt, using the velocity field associated with this class.
Parameters:
- dt (double) – The timestep to use in the integration.
- update_owners (bool) – If set to False, particle ownership (which element owns a particular particle) is not updated after advection. This is often necessary when both the mesh and particles are advected simultaneously.
Example
>>> import underworld as uw
>>> import numpy as np
>>> from underworld import function as fn
>>> dim=2;
>>> elementMesh = uw.mesh.FeMesh_Cartesian(elementType="Q1/dQ0", elementRes=(9,9), minCoord=(-1.,-1.), maxCoord=(1.,1.))
>>> velocityField = uw.mesh.MeshVariable( mesh=elementMesh, nodeDofCount=dim )
>>> swarm = uw.swarm.Swarm(mesh=elementMesh)
>>> particle = np.zeros((1,2))
>>> particle[0] = [0.2,-0.2]
>>> swarm.add_particles_with_coordinates(particle)
array([0], dtype=int32)
>>> velocityField.data[:]=[1.0,1.0]
>>> np.allclose(swarm.particleCoordinates.data[0], [ 0.27856742, -0.12143258], rtol=1e-4)
True
class underworld.systems.TimeIntegration(order, **kwargs)[source]
Bases: underworld._stgermain.StgCompoundComponent
Abstract class for integrating numerical objects (fields, swarms, etc.) in time.
The integration algorithm is a modified Runge-Kutta method that only evaluates midpoint information varying in space, using only the present timestep solution. The order of integration can be 1, 2 or 4.
Parameters: order (int {1,2,4}) – Defines the numerical order ‘in space’ of the Runge Kutta like integration scheme.
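A minimal sketch of the order-2 (midpoint) flavour of such a scheme for a single particle, in plain NumPy (illustrative only; the real integrators live in the underlying StGermain components):

```python
import numpy as np

def rk2_advect(x, velocity, dt):
    # 2nd-order Runge-Kutta: evaluate the velocity at the predicted half step.
    k1 = velocity(x)
    k2 = velocity(x + 0.5 * dt * k1)
    return x + dt * k2

x0 = np.array([0.2, -0.2])
print(rk2_advect(x0, lambda pos: np.array([1.0, 1.0]), dt=0.1))  # [ 0.3 -0.1]
```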
dt
Time integrator timestep size.
time
Time integrator time value.
https://www.physicsforums.com/threads/relativistic-cyclotron-frequency.235611/
# Relativistic cyclotron frequency
Since the acceleration is transverse to the velocity, should we consider the transverse mass in the formula $mv^2/r$ ie
$\gamma \frac{mv^2}{r} = qvB \implies \frac{v}{\sqrt{1-(v/c)^2}} = \frac{qBr}{m}$?
How can you express the centripetal force in terms of momentum (with other terms)?
What is the relativistic momentum?
Since the acceleration is transverse to the velocity, should we consider the transverse mass in the formula $mv^2/r$ ie
$\gamma \frac{mv^2}{r} = qvB \implies \frac{v}{\sqrt{1-(v/c)^2}} = \frac{qBr}{m}$?
Yes. But just to be clear, using your symbols, the transverse mass = gamma*m.
Pete
It's clearer in terms of momentum: $pv/r = qvB \implies p = qBr$.
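The momentum form invites a quick numerical sanity check. The sketch below uses illustrative SI values for an electron (the specific numbers are not from the thread) to confirm that gamma*m*v^2/r = qvB and p = qBr are the same statement, and that the orbital angular frequency works out to omega = qB/(gamma*m):

```python
import math

# Illustrative SI values (not from the thread): a 0.9c electron in a 1 T field.
q = 1.602176634e-19      # elementary charge, C
m = 9.1093837015e-31     # electron rest mass, kg
c = 2.99792458e8         # speed of light, m/s
B = 1.0                  # magnetic field, T
v = 0.9 * c              # particle speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
p = gamma * m * v        # relativistic momentum
r = p / (q * B)          # orbit radius from p = qBr

# Force balance: the two sides of gamma*m*v^2/r = qvB agree.
centripetal = gamma * m * v ** 2 / r
lorentz = q * v * B

# Relativistic cyclotron (angular) frequency: omega = v/r = qB/(gamma*m).
omega = q * B / (gamma * m)
```

In the limit v/c -> 0 we have gamma -> 1, recovering the familiar non-relativistic cyclotron frequency qB/m.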
https://tex.stackexchange.com/questions/352104/create-simple-repetitive-macro-with-optional-argument
# Create simple repetitive macro with optional argument
I would like the simplest (in terms of length of characters) way to create a macro that does something or, optionally, does something n times.
In fact, all I want to do is to create something like --------X where the -s go from 1 to n. I do not want to have to supply 1 to the macro as that is default and the most used case.
So
\dash{X} gives -X
\dash[3]{X} gives ---X
or
\dash{X}[3] gives ---X
or
\dash{X}{3} gives ---X
or
\dash{3}{X} gives ---X
I don't care about the syntax. Just that it is short and works as intended and should be efficient. I probably don't want to have nested groups used because there would be no point and it would be inefficient, but either way doesn't matter.
I'd like to use as little extra packages as possible unless some package does this well to make it pretty short.
Something like
\newcommand{\dash}[2][1]{\directlua{for i=1,#1 do tex.print('-') end}#2}%
works with LuaLaTeX but tex.print seems to add a space after/before each dash resulting in - - - -X.
• Since it appears you have problems with the given images, could you please add some more information about intended usage? – egreg Feb 5 '17 at 11:09
• @egreg huh? I assume you are talking about the spacing? If so, I am simply trying to reduce the space between the symbols (-, +, x, y, or whatever) because I use these things in a large table and can only fit so much information in it. The symbols touching or being closer than normal does not cause any problems in my case, but having too much space, like - - X, will use far too much space, creating a table that won't fit on a page, etc. It's not a huge deal, I can shrink the table... but only so far. Luckily I was able to get a reasonable result with the default space. – AbstractDissonance Feb 5 '17 at 11:16
• That's caused by math mode automatic spacing: if you use Werner's \dash, fix Werner's code by doing \prg_replicate:nn {#1}{{#2\kern0pt}}#3 – egreg Feb 5 '17 at 11:20
Using xparse (with options):
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\dash}{O{1} O{-} m}{%
\prg_replicate:nn {#1} {#2}#3
}
\ExplSyntaxOff
\begin{document}
\dash{X}
\dash[2]{X}
\dash[15][{{-}}]{X}
\dash[7][a]{X}
\end{document}
Reference: Repeat characters n times
• Why does using a + cause a space/gap between each one? The same happens with - in one of your examples, similar to my directlua example. I want +++ to have no spaces between them, like your ------- example. In my code the +'s have almost a whole space ' ' between them (or so it seems). This might just be a property of the characters, but the -'s have no space in your example at a length of 2, and when I use your example with a rep of 15 there are no spaces either. I need to minimize the space between these characters to give me the room I need for other things. – AbstractDissonance Feb 5 '17 at 5:36
• - is special, since pairs of dashes are turned into an endash, and 3 dashes become an emdash. If you want to remove the spaces between + (say), try with \def\bsp{\def\bsp{$\!$}} \dash[15][\bsp+]{X}. – Werner Feb 5 '17 at 5:53
• When I use this in my real document I get errors. When I use it in the demo document it works. I put \bsp in the macro you provide, e.g., \bsp#2 and it works as expected. In my code, the only real difference is I'm using the macros in a tikzpicture environment(a matrix). Any idea why it would fail? – AbstractDissonance Feb 5 '17 at 6:17
• @AbstractDissonance Sorry, no. Not sure what it could be. – Werner Feb 5 '17 at 6:54
• I believe it is because I am using a matrix of math nodes (which, one would assume, would remove the space, but it doesn't...). I tried removing it but nothing changed. Is there a way to create a new normal symbol like + and - that simply does not have space, or is space something that TeX adds after the fact? – AbstractDissonance Feb 5 '17 at 7:09
Just use one of David Kastrup's good old \replicate-macros that are described here:
Example:
\documentclass{article}
\makeatletter
\newcommand\xii[2]{\if#2m#1\expandafter\xii\else\expandafter\@gobble\fi{#1}}
\newcommand\xiii{}\long\def\xiii#1\relax#2{\xii{#2}#1\relax}
\newcommand\replicate[1]{\expandafter\xiii\romannumeral\number\number#1 000\relax}
%
\newcommand\dash[1][1]{\replicate{#1}{-}\@firstofone}
%
% You can avoid consecutive dashes yielding en-dash-ligatures and em-dash-ligatures
% by wrapping the single dash into braces---\hyphendash yields hyphens:
%
\newcommand\hyphendash[1][1]{\replicate{#1}{{-}}\@firstofone}
\makeatother
\begin{document}
\verb|\dash[0]{X}| should yield X and indeed yields \dash[0]{X}
\verb|\dash{X}| should yield -X and indeed yields \dash{X}
\verb|\dash[1]{X}| should yield -X and indeed yields \dash[1]{X}
\verb|\dash[2]{X}| should yield --X and indeed yields \dash[2]{X}
\verb|\dash[3]{X}| should yield ---X and indeed yields \dash[3]{X}
\verb|\hyphendash[0]{X}| should yield X and indeed yields \hyphendash[0]{X}
\verb|\hyphendash{X}| should yield {-}X and indeed yields \hyphendash{X}
\verb|\hyphendash[1]{X}| should yield {-}X and indeed yields \hyphendash[1]{X}
\verb|\hyphendash[2]{X}| should yield {-}{-}X and indeed yields \hyphendash[2]{X}
\verb|\hyphendash[3]{X}| should yield {-}{-}{-}X and indeed yields \hyphendash[3]{X}
\end{document}
If it is only about dashes/hyphens and if you don't want dashes/hyphens to be broken across lines, you can probably also fill a horizontal box of predetermined width with horizontal \leaders:
\documentclass{article}
\makeatletter
\newbox\mytempbox
\newcommand\nobreakhyphendash[1][1]{%
\begingroup
\setbox\mytempbox\hbox{-}%
\hbox to #1\wd\mytempbox{\leaders\copy\mytempbox\hss}%
\endgroup
\@firstofone
}
\makeatother
\begin{document}
\verb|\nobreakhyphendash[0]{X}| should yield X and indeed yields \nobreakhyphendash[0]{X}
\verb|\nobreakhyphendash{X}| should yield {-}X and indeed yields \nobreakhyphendash{X}
\verb|\nobreakhyphendash[1]{X}| should yield {-}X and indeed yields \nobreakhyphendash[1]{X}
\verb|\nobreakhyphendash[2]{X}| should yield {-}{-}X and indeed yields \nobreakhyphendash[2]{X}
\verb|\nobreakhyphendash[3]{X}| should yield {-}{-}{-}X and indeed yields \nobreakhyphendash[3]{X}
\verb|\nobreakhyphendash[15]{X}| should yield {-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}X and indeed yields \nobreakhyphendash[15]{X}
\end{document}
You might enjoy this generalization:
\documentclass{article}
\usepackage{xparse}
\ExplSyntaxOn
\NewDocumentCommand{\pattern}{>{\SplitList{,}}m}
{
\ProcessList{#1}{\MakePattern}
}
\NewDocumentCommand{\MakePattern}{m}
{
\MakePatternAux #1 \MakePatternAux
}
\NewDocumentCommand{\MakePatternAux}{O{1}u{\MakePatternAux}}
{
\prg_replicate:nn { #1 } { {#2} }
}
\ExplSyntaxOff
\begin{document}
\pattern{[3]-,X,-,Y,[5]abc}
$\pattern{[3]-,X,-,Y,[5]+}$
\end{document}
You describe a pattern by a comma-separated list of items: X means “print one copy of X”, while [5]Y means “print five copies of Y”. The additional braces keep TeX from adding automatic spacing between atoms, because all are treated as ordinary atoms. Items can be more than one token.
• Just a side comment: is it appropriate to use \NewDocumentCommand for definitions that aren't intended to be author-level? – Sean Allred Feb 5 '17 at 17:47
• @SeanAllred Why not? – egreg Feb 5 '17 at 18:53
• I dunno… 'new document command' does seem to imply it will be document-/author-level. – Sean Allred Feb 5 '17 at 18:54
• @SeanAllred Probably there should be an inner interface also for reading optional arguments. – egreg Feb 5 '17 at 18:57
https://brilliant.org/problems/i-am-the-cube-of-my-digit-sum/
# I am the cube of my digit sum
Number Theory Level 1
There are certain integers that are equal to the cube of their own digit sum. For example, $$17^3=4913$$ while $$4+9+1+3=17$$. What is the smallest such positive integer?
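The search is easy to brute-force. This Python sketch (not part of the original problem page) lists every qualifying positive integer; note that 1 satisfies the condition trivially, while 512 = 8³ is the smallest multi-digit example:

```python
def cube_of_digit_sum(max_s=63):
    """All positive integers equal to the cube of their own digit sum.

    Any solution equals s**3 where s is its digit sum.  For s = 64,
    s**3 = 262144 has only 6 digits, so its digit sum is at most
    9*6 = 54 < 64; larger s fare even worse, hence s <= 63 suffices.
    """
    return [s ** 3 for s in range(1, max_s + 1)
            if sum(int(d) for d in str(s ** 3)) == s]

print(cube_of_digit_sum())  # [1, 512, 4913, 5832, 17576, 19683]
```

Iterating over the candidate digit sums s rather than over all integers keeps the search to 63 checks instead of hundreds of thousands.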
http://math.stackexchange.com/questions/149769/uniqueness-of-exponential-objects-up-to-isomorphism-in-any-category/149983
# Uniqueness of Exponential Objects up to Isomorphism in any Category
I want to prove that for any pair of objects $a,b$ in a category $\mathcal{C}$, the exponential object $a^b$ of $a$ and $b$, if it exists, is unique up to isomorphism. It looks to be really simple, but I can't figure out the proof myself. (I need a proof that doesn't mention adjoints or Yoneda's lemma, since I'm working from a category theory textbook that hasn't defined these concepts yet)
It's the same as the uniqueness proof for any universal object: define a category such that the object you want is an initial or terminal object in that category. – Qiaochu Yuan May 25 '12 at 19:23
Please define what you mean by exponential object, because the one that immediately comes to mind uses adjoints. – Zhen Lin May 25 '12 at 19:39
$b^a$ is an exponential object of two objects $a,b$ in a category $\mathcal{C}$ if there is an associated evaluation arrow $eval:b^a \times a \rightarrow b$ such that for any object $c$ and any arrow $g: c \times a \rightarrow b$ there is a unique arrow $\lambda g: c \rightarrow b^a$ such that $eval \circ (\lambda g \times 1_a) = g.$ – Anas May 25 '12 at 20:07
Short answer. The definition of the exponential object $b^a$ implies there is a natural bijection between arrows $c \to b^a$ and arrows $c \times a \to b$; the latter does not depend on $b^a$. So if $d$ also has the universal property of $b^a$, then there is a natural bijection between arrows $c \to b^a$ and arrows $c \to d$; hence $b^a$ must be isomorphic to $d$, by the Yoneda lemma.
Long answer. Let's spell out exactly where the isomorphism comes from. Suppose $d$ has the universal property of $b^a$: so there is a universal morphism $e : d \times a \to b$ such that for any $g : c \times a \to b$, there is a unique $g' : c \to d$ such that $e \circ (g' \times \textrm{id}_a) = g$. But in particular we can take $c = b^a$ and $g = \textrm{eval}$, and this gives us $g' : b^a \to d$ such that $e \circ (g' \times \textrm{id}_a) = \textrm{eval}$. But by the universal property of $b^a$ there is a unique $\lambda e : d \to b^a$ such that $\textrm{eval} \circ (\lambda e \times \textrm{id}_a) = e$, so we get $$\textrm{eval} \circ (\lambda e \times \textrm{id}_a) \circ (g' \times \textrm{id}_a) = \textrm{eval}$$ but $(\lambda e \times \textrm{id}_a) \circ (g' \times \textrm{id}_a) = (\lambda e \circ g') \times \textrm{id}_a$, hence $\lambda e \circ g' = \textrm{id}_{b^a}$ by uniqueness. Conversely, we have $$e \circ (g' \times \textrm{id}_a) \circ (\lambda e \times \textrm{id}_a) = e$$ so by uniqueness again we conclude $g' \circ \lambda e = \textrm{id}_d$. Thus $d \cong b^a$.
Remark. All of these answers are actually the same, just expressed in different ways. As others have said: go learn the Yoneda lemma and representable functors.
Thanks for the worked out solution. Yeah I think I need to work with another text alongside Goldblatt's Topoi, one that introduces functors and adjoints much earlier. – Anas May 26 '12 at 20:58
Awodey's great book, chapter 6, has a good introduction to exponentials, without Yoneda, etc.
The proof works out in the same way that is used for products (for example) or any other universal construction.
Remember: in the definition of an exponential E1 there is a unique arrow, so if you suppose that there is another exponential E2, then in the end there must be a unique arrow from E1 to E2 and a unique arrow from E2 to E1, so E1 and E2 must be isomorphic.
Study the technique used for products, equalizers or pullbacks for inspiration (always in Awodey's book).
Thanks for the recommendation and link. – Anas May 26 '12 at 21:02
I am glad it helped :-) – magma May 27 '12 at 9:12
$b^a$ is by definition a representation (or just the representing object) of the functor $c \mapsto \hom(c \times a , b)$, i.e. we have a natural bijection $\hom(c,b^a) \cong \hom(c \times a,b)$. The Yoneda Lemma tells us that representations are unique up to unique isomorphism.
http://wikieducator.org/User:Drmksharma/lectuernotes/sm
# User:Drmksharma/lectuernotes/sm
CONCEPT NOTES
Students may use these concept notes as an overview of the subject; PPT and PDF files of the class lectures will be mailed from time to time.
1: Introduction to Software Project Management
What is software project development?
What is a software project, and what are its different parts?
Software development as a project: an overview
Why projects fail, how to make them succeed.
Many software organizations have problems delivering quality software that is finished on time and meets the users' needs. Luckily, most software project problems have surprisingly few root causes, and these causes are well understood. Solutions to these problems have been discovered, explained, and tested in thousands of software organizations around the world. These solutions are generally straightforward and easy to implement. However, they are not always intuitive to people who do not understand project management, and that makes them difficult to introduce. It's possible to make projects succeed by recognizing the common causes of project failure, and applying basic project management principles in order to address those causes.
2: Software Project Planning
Using a Vision and Scope document to identify needs; Elements of a project plan.
If a project manager does not really understand the context in which the software is being built, then the only thing that the project team sees is the urgency; they lose track of the needs. They can see the individual problems that they are working to solve, but they may lose track of the big picture. Unless the team understands the needs that drive the project, they may end up with a narrow focus, causing them to waste time addressing problems that are of little importance to the stakeholders. It’s easy to build great software that solves the wrong problems, but the only way to build the appropriate software is for everyone in the project to understand and agree on both why and how that software will be built before the work begins. That’s the purpose of project planning.
3. Project Estimation
Helping the project team create realistic estimates.
To someone who has never estimated a project in a structured way, estimation seems little more than attempting to predict the future. This view is reinforced when off-the-cuff estimates are inaccurate and projects come in late. But a good formal estimation process, one that allows the project team to reach a consensus on the estimates, can improve the accuracy of those estimates, making it much more likely that projects will come in on time. A project manager can help the team to create successful estimates for any software project by using sound techniques and understanding what makes estimates more accurate.
4: Project Schedules
Creating, optimizing and tracking a project schedule.
The project schedule is the core of the project plan. It is used by the project manager to commit people to the project and show the organization how the work will be performed. Schedules are used to communicate final deadlines and, in some cases, to determine resource needs. They are also used as a kind of checklist to make sure that every task necessary is performed. If a task is on the schedule, the team is committed to doing it. In other words, the project schedule is the means by which the project manager brings the team and the project under control.
5: Reviews
Different practices for reviewing work products.
A review is any activity in which a work product is distributed to reviewers who examine it and give feedback. Different work products will go through different kinds of reviews: the team may do a very thorough, technical review of a software requirements specification, while the vision and scope document will be passed around via email and have higher level walkthroughs. Reviews are useful not only for finding and eliminating defects, but also for gaining consensus among the project team, securing approval from stakeholders, and aiding in professional development for team members. In all cases, the work product coming out of the review has fewer defects than it had when it was submitted—even though the author thought it was “complete” before the review. Every defect that is found during a review is a defect that someone did not have to spend time tracking down later in the project.
6: Software Requirements
Eliciting requirements, creating a software requirements specification and controlling changes.
Most software is built to meet the needs of someone other than the programmer. If those needs are going to be satisfied, the behavior of the software must be planned before the software is built. Software requirements engineering is the art and science of developing an accurate and complete definition of the behavior of software that can serve as the basis for software development. Like project management, programming, and testing, software requirements engineering encompasses a set of skills that require training and practice. Many projects are delayed (or fail completely) because development begins before anyone on the project team really understands how the software should behave. The solution to this problem is to take the time to gather and verify the software requirements — documentation that completely describes the behavior that is required of the software — before the software is designed, built, and tested.
7: Design and Programming
Reviewing the design, refactoring, unit testing and project automation.
While many development problems originate outside of the programming team, there are some basic changes that the programmers can make that will improve the quality of the code they produce. Most teams, even ones with skilled and talented programmers, are vulnerable to the same design and programming problems. These problems can be addressed with a few basic tools and techniques—which can often be put in place entirely within the programming team, without involving anyone else in the organization.
8: Software Testing
Test plans, test cases, defect tracking, automation and post-mortem reports.
In software testing, quality is defined as “conformance to requirements.” Every use case, functional requirement, and other software requirement defines a specific behavior that the software must exhibit. When the software does not behave the way that the requirements say it must behave, that is a defect. This means that your software testers are responsible for figuring out whether the software that was produced by the team behaves in the way that the requirements it was built from say that it should. Throughout the entire software project, the team does many things to find and prevent defects. Once the software has been built, it’s time to look back and make sure that it meets the requirements. The goal of software testing is to make sure that the product does what the users and stakeholders need it to do. Software testers review the final product to make sure that the initial requirements have been met.
9-10: How to diagnose and fix a troubled software project
There's an old saying: "There's only one way to be right, but a million ways to be wrong." This is not necessarily the case with software projects. In practice, the vast majority of projects go wrong in one of a small number of ways. Throughout Part I of "Applied Software Project Management," many scenarios are identified which highlight the most common causes of project failure. Lectures 1 and 2 show how projects can fail due to problems with project planning, estimation, and failure to hold reviews. Lecture 8 shows project problems that are caused by poor requirements engineering practices, bad programming habits, or a lack of software testing. Both lectures cover many scenarios that typify how projects fail, and point to some of the tools, techniques and practices that project managers can use to fix them.
11: Understanding Change
Why change fails, and how to make your changes succeed.
It would be nice if it were sufficient to understand why projects fail, and to know how to apply specific tools, techniques and practices to fix them. Unfortunately, that's not enough. Building better software requires changing the way things are done in your organization, and change makes many people very uncomfortable. Project managers around the world have tried to implement straightforward improvements to the way they build software, only to find that they can't convince the other people in their organizations to agree to discuss those changes, much less to actually alter the way their projects are carried out. It can be frustrating to see a problem, feel like you have the solution, and not be able to do anything about it. By understanding the most common ways that people respond to change and learning how to convince or reassure the ones who are resistant to change, it is possible to overcome these obstacles and successfully make the changes that your organization needs.
Managing your organization, team and project.
In a sense, part of the job of the project manager is to serve as an information conduit. The project manager helps information flow from the team up to senior management in the form of project status and analysis information. It is his job to understand all of the work being done, so that it can be summarized to the people who make the decisions about the future of the project; they need this information to make informed and intelligent decisions. This requires that the project manager put a lot of effort into understanding what it is the team is doing and why they are doing it. The project manager cannot simply ask for estimates, fit those estimates in a schedule, and quiz the team on the percentage they’ve completed. He must understand what actions each team member is taking to complete the task, and what possible complications they are running into. The project manager is the only person looking at how the tasks interrelate; he is the only one with the perspective to see the problems and, ideally, fix them.
13: Managing an Outsourced Project
Avoiding the common pitfalls that happen when working with an outsourced vendor.
Managing projects is hard. Scope creeps, changes go uncontrolled, defects are introduced, schedules are delayed…and that’s all in your own organization, where your software engineering team is right down the hall. Imagine how difficult it is to get even these results when your team is in another organization in an entirely different building—and possibly in a city halfway around the world! When you hire a company outside your organization to build your software, you open up yourself, your project, and your organization to exactly these problems.
Old semester Notes
e-Governance: Definitions. ‘E-Government’ or e-Governance is defined as ‘The utilization of the Internet and the world-wide-web for delivering government information and services to the citizens.’ (United Nations, 2006; AOEMA, 2005). For example, when you use the Indian Railways website to book a ticket online, you are using an e-Governance application.
'Electronic Governance' essentially refers to ‘how government utilizes IT, ICT, and other web-based telecommunication technologies to improve and/or enhance the efficiency and effectiveness of service delivery in the public sector.’ For example, when you use a bank's ATM to withdraw cash at any time from anywhere, or use an Indian bank's online fund-transfer facility, you are using an e-Governance application.
Examples of e-Governance applications
Public Grievances: Electricity, Water, Telephone, Ration Card, Sanitation, Public Transport, Police
Rural Services: Land Records, Below Poverty Line (BPL) / EWS Families
Police: FIR Registration, Lost and Found (Valuables, Persons, Dead Bodies)
Social Services: Pension (Old Age, Widows), Ex-gratia Scheme, Acquisition / Rehabilitation & Compensation
Registration of Licenses and Certificates: Ration Cards, Birth Certificates, Death Certificates, Domicile Certificates, Caste / Tribe Certificates, Arms Renewal, Registration of Documents, School Registration, University Registration, Motor Vehicle Registration, Driving Licenses
Public Information: Employment Exchange Registration, Employment Opportunities, Examination Results, Hospitals / Beds Availability / Services, Railway Time Tables, Airline Time Tables, Road Transport Time Tables, Charitable Trusts, Government Notifications, Government Forms, Government Schemes
EWS Services: Civil Supplies, Old Age Pension, Widow Pension, Handicapped Pension / Services, Ex-gratia Payment
Agriculture Sector: Seeds Information, Pesticides, Fertilizers, Crop Disease, Weather Forecast (short range / district-wise), Market Prices
Utility Payments / Billing: Electricity, Water, Telephone
Commercial Taxation & Return Filing: Income Tax, Corporate Tax, Custom Duty, Central / State Excise Duty, Sales Tax, House Tax, Property Tax, Road Tax, Company Returns
Government: Electronic Procurement, Education, University Model for e-Governance
Evolution of e-governance
E‐Governance originated in India during the seventies with a focus on in‐house government applications in the areas of defence, economic monitoring, planning and the deployment of ICT to manage data intensive functions related to elections, census, tax administration, etc. The efforts of the National Informatics Center (NIC) to connect all the district headquarters during the eighties was a watershed. From the early nineties, e‐governance has seen the use of IT for wider sectoral applications with policy emphasis on reaching out to rural areas and taking in greater inputs from NGOs and the private sector as well. While the emphasis was initially on automation and computerization, forays were later made into connectivity, networking, setting up systems for processing information and delivering services. At a micro level, this ranged from IT automation in individual departments, electronic file handling, access to entitlements, public grievance systems, and service delivery for high-volume routine transactions such as payment of bills and tax dues, to meeting poverty alleviation goals through the promotion of entrepreneurial models and provision of market information. The thrust has varied across initiatives, with some focusing on enabling the citizen‐state interface for various government services, and others focusing on bettering livelihoods.
https://topfuturepoint.com/1-8-as-a-decimal/
|
# 1/8 as a decimal
Let us learn about 1/8 as a decimal.
To convert 1/8 to a decimal, divide the numerator by the denominator: 1 ÷ 8 = 0.125.
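The division method is easy to check programmatically. Here is a small Python sketch (not from the original page; the function names are ours) that converts a fraction to a decimal and to a percentage:

```python
def fraction_to_decimal(numerator, denominator, places=4):
    """Convert a fraction to a rounded decimal by dividing
    the numerator by the denominator."""
    return round(numerator / denominator, places)

def fraction_to_percent(numerator, denominator, places=2):
    """A percentage is just the decimal value scaled by 100."""
    return round(numerator / denominator * 100, places)

print(fraction_to_decimal(1, 8))   # 0.125
print(fraction_to_decimal(1, 3))   # 0.3333
print(fraction_to_percent(3, 4))   # 75.0
```

The same two functions reproduce every conversion discussed below.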
Also, what does 1/3 have as a decimal?
Answer: 1/3 is expressed as 0.3333 in its decimal form.
Next, what do 1 and 3/4 have as a decimal?
Method 1: Write 1 3/4 in decimal using the division method. To convert a fraction to a decimal form, we need to divide its numerator by the denominator. It gives the answer 1.75 . So, the decimal to 1 3/4 is 1.75.
Also to know what 9 and 3/4 have as a decimal? So the answer is that 9 3/4 in the decimal form is 9.75 .
What is 3/8 in decimal?
Answer: 3/8 in decimal form is 0.375 .
#### What is 1/3 as a Decimal and a Percentage?
1/3 as a decimal is approximately 0.3333, which is about 33.33% as a percentage.
#### What is 3 and 1/3 as a decimal?
So the answer is that 3 1/3 as a decimal is 3.3333333333333 .
#### What is 3/4 percent as a decimal?
3/4 as a decimal is 0.75 .
#### What is 1 and 3/4 as a percentage?
Result: The mixed number 1 3/4 can be expressed as 175 percent .
#### What is 3 and 3/4 as a decimal?
The fractional part 3/4 is 0.75 in decimal, so 3 3/4 as a decimal is 3.75.
#### What is 7 and 3/4 as a decimal?
Explanation: It is a mixed number. 7 3/4 is between 7 and 8. 3/4 as a decimal is 0.75, so we add it to 7 to get 7.75.
#### What is 3 2 as a decimal?
Answer: 3/2 is expressed as a decimal 1.5 .
#### How do you write 5/8 as a decimal?
5/8 = 0.625.
#### What is 3/8 as a decimal and as a percentage?
3/8 as a decimal is 0.375, which is 37.5% as a percentage.
#### How do you write 1/3 as a percentage?
Now we can see that our fraction is 33.333333333333/100, which means that 1/3 as a percentage is approximately 33.33%.
#### How do you calculate 1/3 of the total?
Finding one third of a quantity is done by dividing by 3. For example: one third of 24 = 24/3 = 8.
#### What is 3/4 as a number?
Fraction (math) Three quarters (3⁄4) equals 0.75 .
#### How do you write 3/4 as a percentage?
Answer: 3/4 expressed as a percentage is 75%.
#### What is 8 and 3/4 as a decimal?
So the answer is that 8 3/4 in the decimal form is 8.75 .
#### What is 3/7 as a percentage?
3/7 ≈ 0.428571, which is approximately 42.86% as a percentage.
#### What is 1 and 3/4 as an improper fraction?
Answer and Explanation: The mixed number 1 3/4 will be equal to the improper fraction 7 / 4 .
#### What is 7/3 as a decimal?
7/3 as a decimal is approximately 2.3333, and 233.33/100 or 233.33% as a percentage.
#### What is 4/7 as a decimal?
Answer: 4/7 is expressed as a decimal 0.571 .
http://mathhelpforum.com/discrete-math/6936-number-time-statement-executed-print.html
|
# Number of time Statement Executed
• Oct 27th 2006, 06:56 PM
fifthrapiers
Number of time Statement Executed
From the following "pseudocodes", I have to determine how many times each statement
x <-- x + 1 is executed and explain why, from the following options:
O(1), O(log(n)), O(n), O(n*log(n)), O(n^2), O(n^3), O(2^n), or O(n!)
The pseudocode is the following (I don't recall how to do the indentations, but I think its just the "code" thing).
Code:
a.) for i <-- 1 to n
        for j <-- 1 to n
            for k <-- 1 to i
                x <-- x + 1
            next k
        next j
    next i
Code:
b.) i <-- n
    while i >= 1
        for j <-- 1 to n
            x <-- x + 1
        next j
        i <-- [i/2]    // [.] is the floor function (square brackets without horizontal line at "top")
    end while
Code:
c.) i <-- 2
    while i < n
        i <-- i^2
        x <-- x + 1
    end while
Code:
d.) i <-- n
    while i >= 1
        for j <-- 1 to i
            x <-- x + 1
        next j
        i <-- [i/3]    // [.] is the floor function (square brackets without horizontal line at "top")
    end while
*EDIT*
Yay, [code] worked.
• Oct 28th 2006, 02:28 PM
CaptainBlack
Quote:
Originally Posted by fifthrapiers
From the following "pseudocodes", I have to determine how many times each statement
x <-- x + 1 is executed and explain why, from the following options:
O(1), O(log(n)), O(n), O(n*log(n)), O(n^2), O(n^3), O(2^n), or O(n!)
The pseudocode is the following (I don't recall how to do the indentations, but I think its just the "code" thing).
Code:
a.) for i <-- 1 to n
        for j <-- 1 to n
            for k <-- 1 to i
                x <-- x + 1
            next k
        next j
    next i
First check that the limits on the loops are what you intend.
1. The innermost loop has i assignments.
2. The next loop performs the inner loop n times, so together with the inner loop it performs $n \cdot i$ assignments.
3. The outermost loop is performed for i=1..n, so the operations count
is:
$\sum_{i=1}^n n\cdot i = n \sum_{i=1}^n i = n\cdot n(n+1)/2 = O(n^3)$
• Oct 28th 2006, 02:54 PM
CaptainBlack
Quote:
Originally Posted by fifthrapiers
From the following "pseudocodes", I have to determine how many times each statement
x <-- x + 1 is executed and explain why, from the following options:
O(1), O(log(n)), O(n), O(n*log(n)), O(n^2), O(n^3), O(2^n), or O(n!)
The pseudocode is the following (I don't recall how to do the indentations, but I think its just the "code" thing).
Code:
b.) i <-- n
    while i >= 1
        for j <-- 1 to n
            x <-- x + 1
        next j
        i <-- [i/2]    // [.] floor function
    end while
1. The innermost loop has 1 assignment which is executed $n$ times.
2. The outer loop is executed $\lfloor \log_2(n) \rfloor$ times
3. Hence the total assignment count is:
$n\ \lfloor \log_2(n) \rfloor=O(n \log(n))$
• Nov 2nd 2006, 06:49 PM
fifthrapiers
Let me see if I can do c and d now.
For c:
1. The innermost loop has i assignments.
2. While x is less than n, there is 1 assignment.
Therefore, n*n = n^2 = O(n^2)
for d: This looks a lot like b.
1. The outer loop is executed log(n) times.
I think this one is similar to b, without the n scalar, so:
O(log(n)).
• Nov 2nd 2006, 11:28 PM
CaptainBlack
Quote:
Originally Posted by fifthrapiers
Let me see if I can do c and d now.
For c:
1. The innermost loop has i assignments.
2. While x is less than n, there is 1 assignment.
Therefore, n*n = n^2 = O(n^2)
Code:
c.) i <-- 2
    while i < n
        i <-- i^2
        x <-- x + 1
    end while
(I will assume that the while statement should be "while i<=n" as this
seems more natural to me than "while i<n", the changes needed if the latter
really is intended should be easy enough to make)
The loop has 1 assignment of the form x<--x+1, but how many trips
around this loop are there?
If n=2 or 3, then there is 1 trip, as i<--4 in the first trip and so
i is now >n and so no further trips will be made.
If n=4, 5, ..., 15, then there are two trips: i<--16 after the second trip, which is now greater than n, so there will be no further trips.
In general, after $t$ trips $i = 2^{2^t}$, so there are $\lfloor \log_2(\log_2(n)) \rfloor + 1$ trips, which is $O(\log\log(n))$, and hence also $O(\log(n))$, the tightest bound among the listed options.
RonL
• Nov 2nd 2006, 11:44 PM
CaptainBlack
Quote:
Originally Posted by fifthrapiers
for d: This looks a lot like b.
1. The outer loop is executed log(n) times.
I think this one is similar to b, without the n scalar, so:
O(log(n)).
Except in b the inner loop had a trip count independent of the outer loop
count, here we have a decreasing trip count as the outer loop progresses.
I think we have a total trip count:
$N \sim n+\lfloor n/3 \rfloor +\lfloor n/9 \rfloor + \cdots +\lfloor n/3^{\lfloor \log_3(n) \rfloor} \rfloor=O(n)$
In this case I feel the need to actually conduct an experiment to check this.
The attachment shows the result of this, and as the plot shows an essentially straight line
I am reasonably happy that the result is OK.
RonL
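The experiment is easy to reproduce. The following Python sketch (ours, not from the original thread; the Big-O notes in the comments reflect the analyses in the posts above) counts the x <-- x + 1 executions for each pseudocode:

```python
def count_a(n):
    # Triple nested loop: sum over i of n*i assignments -> O(n^3)
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for k in range(1, i + 1):
                count += 1
    return count

def count_b(n):
    # Inner loop runs n times per trip; outer loop halves i -> O(n log n)
    count, i = 0, n
    while i >= 1:
        count += n
        i = i // 2
    return count

def count_c(n):
    # i is squared each trip: 2, 4, 16, 256, ... -> very slow growth of trips
    count, i = 0, 2
    while i < n:
        i = i * i
        count += 1
    return count

def count_d(n):
    # Inner trip count shrinks with i: n + n/3 + n/9 + ... -> O(n)
    count, i = 0, n
    while i >= 1:
        count += i
        i = i // 3
    return count

print(count_a(10))  # 550, i.e. 10 * (10*11/2)
```

Plotting `count_d(n)` against `n` gives the essentially straight line described above.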
• Nov 2nd 2006, 11:47 PM
fifthrapiers
Thanks a lot, CaptainBlack.
https://www.intel.com/content/www/us/en/developer/articles/guide/get-started-with-intel-deep-learning-boost-and-the-intel-distribution-of-openvino-toolkit.html
|
# Get Started with Intel® Deep Learning Boost and the Intel® Distribution of OpenVINO™ Toolkit
Published: 11/14/2019
Last Updated: 11/14/2019
## Introduction
In recent years, deep learning has required an increasing number of computationally heavy calculations, making the acceleration of deep learning workloads an important area of research. While most deep learning applications today use 32 bits of floating-point (FP) precision, various researchers have demonstrated the successful use of lower numerical precision in deep learning training and inference workloads.
The 2nd Generation Intel® Xeon® Scalable processor includes new embedded acceleration instructions known as Intel® Deep Learning Boost (Intel® DL Boost) that uses Vector Neural Network Instructions (VNNI) to accelerate low precision performance.
Read this tutorial to learn how to use the new Intel DL Boost accelerator on an inference workload. We provide a set of guidelines for running inference with both 32-bit FP precision and 8-bit integer precision using the Intel® Distribution of OpenVINO™ toolkit. For more information, see the Intel white paper Lower Numerical Precision Deep Learning Inference. We use a pre-trained model to guide you through the following processes:
1. Transform a frozen graph into an intermediate representation (.bin or .xml), which is required by the Intel Distribution of OpenVINO toolkit.
2. Run inference on the trained model (FP32) over our dataset, which is a 37-category set of pet images with roughly 200 images for each class. We show the accuracy of our model on the entire dataset and measure inference frames per second.
3. Use the Calibration Tool (part of the OpenVINO™ toolkit) to quantize the model to INT8.
4. Rerun the same inference application on the same dataset and same machine.
Finally, we compare the inference results of the FP32 and INT8 models.
### Prerequisites
This tutorial was created using version 2019 R3.1 of the Intel Distribution of OpenVINO toolkit installed on a 2nd generation Intel Xeon Scalable processor.
## Inference Flow with the Intel® Distribution of OpenVINO™ Toolkit
In an end-to-end deep learning development cycle where you have a well-trained model, after you have tuned parameters you need tools to deploy the model on the hardware of your choice. This is where the Intel Distribution of OpenVINO toolkit enters the picture. It quickly deploys applications and solutions that emulate human vision. The toolkit also maximizes performance for computer-vision workloads across Intel® hardware. It includes the Deep Learning Deployment Toolkit (DLDT), Open Model Zoo, Intel® Media SDK, and drivers and runtimes for OpenCL™, OpenCV, and OpenVX*.
Figure 1. Intel® Distribution of OpenVINO™ Toolkit inference flow
Using the deep learning Model Optimizer and deep learning inference engine, which are both part of the DLDT, we built an inference flow as shown in Figure 1. To start the process we made sure that our pre-trained model was in the correct format for conversion into an intermediate representation (IR). In this example, we start from Keras or TensorFlow*, which requires us to freeze our graph into the protobuf format to allow the deep learning Model Optimizer to read and convert it into an IR. See Supported Frameworks and Formats in Intel Distribution of OpenVINO toolkit for further details.
Next, we use the inference engine API to load plugins and the network parameters. After configuring the input and output files, we load our model parameters into the network, perform pre-processing to prepare our dataset, and call into the inference engine, which generates our model’s output.
While Figure 1 summarizes the flow of the inference model in this article, the Intel Distribution of OpenVINO toolkit offers several inference engine demos and inference engine samples to help you get started with a variety of models and workloads.
### FP32 Model Inference Stages
This section explains the steps required to run FP32 inference:
1. Create an IR (.bin or .xml) using the deep learning Model Optimizer.
2. Understand the OpenVINO arguments.
3. Instantiate the OpenVINO network.
4. Run inference over a dataset.
#### Create the Intermediate Representation (IR)
Model Optimizer is a cross-platform command-line tool that facilitates the transition between the training and deployment environment, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices.
Model Optimizer process assumes you have a network model trained using a supported deep learning framework. Figure 2 below illustrates the typical workflow for converting a trained model to an IR file using the OpenVINO Model Optimizer tool.
Figure 2. OpenVINO Model Optimizer flow
1. Call out directly to the command line and look for the mo.py file.
2. Add parameters for the input model and input shape of the topology. The command below generates the FP32 model of the IR by default. To generate the FP16 version, you must add --data_type=FP16 to the parameters.
mo.py --input_model=all_layers.rn50.pb --input_shape=[128,224,224,3] --mean_values=[123,117,104] --reverse_input_channels
For more details on this process, see Converting a Model to Intermediate Representation.
#### Instantiate the Network
Note: To review the specific steps for instantiating the network, see the inference.py file included in the Intel® IoT Developer Kit samples.
The general steps are as follows:
1. Examine the parameters that were passed to the constructor. As shown below, you can see that the .XML file is passed as the base model, there are no CPU extensions, and the CPU is targeted as the device type.
2. Call out to the constructor for the network instantiation.
3. Load the model into that network by passing in the .bin and .xml parameters.
4. Read in the labels file that you will use to decode the results during your inference.
from inference import Network
import warnings
warnings.filterwarnings("ignore")
arg_model="pets.rn50.xml"
arg_device="CPU"
# Initialise the class
infer_network = Network()
# Load the network to IE Core to get shape of input layer
infer_network.load_model(arg_model, arg_device, 1, 1, 2, None)
print("Network Loaded")
#Read in Labels
arg_labels="pets/pets-labels.txt"
label_file = open(arg_labels, "r")
labels = label_file.read().split('\n')
print("Labels Read")
#### Run the Inference
The code below shows the following six steps:
1. Instantiate a figure to graph the output.
2. Gather paths for images.
3. Read the image and transform/preprocess.
4. Start the inference request.
5. Calculate the inference frames per second (FPS) and label.
6. Update the graph every N steps.
import random
import glob
import os
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image
from multiprocessing import Pool
import numpy as np
import tqdm
import math
import time
import matplotlib.pyplot as plt
%matplotlib widget
plt.rcParams.update({'font.size': 16})
from IPython.display import display, clear_output
file_list = glob.glob("pets/pets_images/*/*")
random.shuffle(file_list)
def process_images(img_path):
    img_cat = os.path.split(os.path.dirname(img_path))[1]
    img = image.load_img(img_path, target_size=(224, 224))
    img = image.img_to_array(img)
    img = np.transpose(img, (2, 0, 1))
    img = preprocess_input(img)
    return (img, img_cat)

if __name__ == '__main__':
    print("Processing Images")
    with Pool() as p:
        data = list(tqdm.tqdm(p.imap(process_images, file_list), total=len(file_list)))
    data_split = [list(t) for t in zip(*data)]
    processed_images = data_split[0]
    processed_labels = data_split[1]
    ips = 0
    true_result = 0
    ips_result = []
    batch_size = 128
    print("Start Inference")
    # Instantiate figure to graph IPS and image
    fig = plt.figure(figsize=(13, 6), dpi=80)
    ax1 = fig.add_subplot(121)
    ax2 = fig.add_subplot(122, frameon=False)
    ax2.axis('off')
    plt.tight_layout(pad=2)
    plt.subplots_adjust(top=0.70)
    pred_labels = []
    np_images = np.array(processed_images)
    for idx in range(math.ceil(len(processed_images) / batch_size)):
        start = idx * batch_size
        end = (idx + 1) * batch_size
        if len(np_images[start:end:]) != batch_size:
            break
        time1 = time.time()
        # Start asynchronous inference for specified request
        infer_network.exec_net(0, np_images[start:end:])
        # Wait for the result
        infer_network.wait(0)
        # Results of the output layer of the network
        res = infer_network.get_output(0)
        time2 = time.time()
        # Calculate inferences per second
        ips = (1 / (time2 - time1)) * batch_size
        ips_result.append(ips)
        # Gather label result for top prediction
        for result in res:
            top = result.argsort()[-1:][::-1]
            pred_label = labels[top[0]]
            pred_labels.append(pred_label)
        true_result = np.mean(np.array(processed_labels[:end]) == np.array(pred_labels))
        rand_img = random.randint(start, end)
        if rand_img == len(pred_labels):
            rand_img -= 1
        fig.suptitle("Inference on {0}/{1} images from Test Set\nAvg Inf/Sec: {2:.2f} Avg Acc: {3:.4f}".format(
            end, (math.floor(len(processed_images) / batch_size) * batch_size),
            sum(ips_result) / (idx + 1), true_result), fontsize=18)
        ax2.imshow(image.load_img(file_list[rand_img]))
        ax2.axis('off')
        ax2.set_title("Prediction: {0}\nActual: {1}".format(pred_labels[rand_img], processed_labels[rand_img], size=16))
        ax1.clear()
        ax1.set_title("Inference Per Second on Data Set", size=16)
        ax1.set(ylabel="Inference Per Second", xlabel="Batches Inferenced")
        ax1.plot(range(0, len(ips_result)), ips_result)
        clear_output(wait=True)
        display(fig)
    plt.close()
The results shown in Figure 3 were generated after running through the entire test set.
Figure 3. Inference Results with FP32 IR
The plot in Figure 3 shows the measured inference FPS for every batch while running through the entire dataset. In this experiment, the average FPS¹ was 490.13, and the overall accuracy on the test set was 0.8664.
The ResNet50 model is used here, and it can be better tuned and trained to increase accuracy. However, in this experiment, we are only interested in the performance of a trained model over a dataset to show how it performs with both FP32 and INT8 IRs.
### INT8 Model Inference Stages
This section describes the steps required to run INT8 inference.
1. Use the Calibration Tool in the OpenVINO™ toolkit to create the INT8 model.
2. Load the quantized IR.
3. Run inference with the INT8 IR.
#### Using the Calibration Tool
The Calibration Tool quantizes a given FP16 or FP32 model and produces a low-precision 8-bit integer (INT8) model while keeping model inputs in the original precision. To learn more about benefits of inference in INT8 precision, refer to Using Low-Precision 8-bit Integer Inference.
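The Calibration Tool's internals are more sophisticated (it selects layers and statistics to minimize accuracy loss), but the core idea of INT8 quantization can be sketched in a few lines of NumPy. This illustration is ours, not OpenVINO code, and the function names are hypothetical:

```python
import numpy as np

def quantize_int8(x, scale=None):
    """Symmetric INT8 quantization: map FP32 values into [-127, 127]
    using a scale derived from the tensor's maximum magnitude."""
    if scale is None:
        scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an FP32 approximation of the original tensor
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.27, 0.001, 1.0], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# Quantization error is bounded by about half a quantization step
print(np.abs(x - x_hat).max() <= s / 2 + 1e-7)  # True
```

The FPS gain comes from performing the heavy multiply-accumulate work on the 8-bit integers (via VNNI) rather than on FP32 values; the small rounding error above is what shows up as the fractional accuracy drop reported later.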
You can run the Calibration Tool in two modes: standard and simplified. In standard mode, it performs quantization with a minimal drop in accuracy. In simplified mode, all layers are considered to be executed in INT8 IR, and the tool produces the IR that contains plain statistics for each layer. Because simplified mode can cause a dramatic accuracy drop, use this mode only to understand the potential inference performance gain of your application. To learn more about how to use the Calibration Tool, refer to Calibration Tool.
Figure 4. Calibration Tool Flow
Figure 4 summarizes the steps to create the INT8 IR from the same frozen graph. The following steps use the new INT8 IR to perform inference on the same dataset.
##### Step 1: Convert the Annotations
This step uses the ImageNet-style files to create a Pickle file and a JSON file; each contains a subset of the dataset for accuracy checking. We use a subset of 256 images, a power of two, so the annotation files can be reused later when creating models with different batch sizes. Refer to Annotation Converters for further details.
mkdir annotations
python3 /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/convert_annotation.py \
imagenet \
--annotation_file val.txt \
--labels_file synset_words.txt \
--subset 256 \
--output_dir annotations \
--annotation_name val.pickle \
--meta_name val.json
##### Step 2: Generate the INT8 Model
Using the Calibration Tool to generate the INT8 model requires two files: pets-definition.yml and pets-config.yml. These files contain the information needed to create the IR, so only a minimum number of parameters is needed when calling the calibrate.py file. Once the calibration is done, the tool creates the .bin and .xml files with _i8 appended to the filename.
python3 /opt/intel/openvino/deployment_tools/tools/calibration_tool/calibrate.py \
--config pets-config.yml \
--definition pets-definition.yml \
-M /opt/intel/openvino/deployment_tools/model_optimizer/
The following two .yml files serve as an example for your applications:
pets-config.yml
models:
  - name: resnet50
    launchers:
      - framework: dlsdk
        device: CPU
        tf_model: pets.rn50.pb
        adapter: classification
        mo_params:
          data_type: FP32
          input_shape: "(128, 224, 224, 3)"
          mean_values: "(123,117, 104)"
        mo_flags:
          - reverse_input_channels
    datasets:
      - name: pets
pets-definition.yml
launchers:
  - framework: dlsdk
    device: CPU
datasets:
  - name: pets
    data_source: pets_images
    annotation: annotations/val.pickle
    dataset_meta: annotations/val.json
    preprocessing:
      - type: bgr_to_rgb
      - type: normalization
        mean: 123, 117, 104
    metrics:
      - name: accuracy @ top1
        type: accuracy
        top_k: 1
##### Step 3: Verify the Results
To verify the results using the Accuracy Checker (from the DLDT), run the following command:
python3 /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/accuracy_check.py \
--config pets-config.yml \
-d pets-definition.yml \
-M /opt/intel/openvino/deployment_tools/model_optimizer/
#### Instantiate the Network and Run Inference
This process uses the same code from above, pulling in the new .bin and .xml files and using them moving forward. Instead of defining arg_model="pets.rn50.xml", the INT8 IR arg_model="pets.rn50_i8.xml" is loaded; everything else in the inference code stays the same.
Running through the same dataset again, the new application performs as shown in Figure 5. The average FPS[ii] in the run is 1073.38 and the accuracy is 0.8642. Comparing Figures 3 and 5, you can see that there is a clear boost in performance when using INT8 (1073.38/490.13 = 2.19, and the loss in accuracy is 0.8664-0.8642 = 0.0022, which is less than 0.3 percent).
Figure 5. Inference results with INT8 IR
## Further Analysis Using OpenVINO Tools
While running our inference application, we found that all the cores were not fully utilized. OpenVINO provides the Benchmark Python* Tool and Benchmark C++ Tool, which perform inference using convolutional neural networks and help you measure the performance of your trained model.
Performance can be measured for two inference modes: synchronous (latency-oriented) and asynchronous (throughput-oriented). The code below demonstrates how to run the FP32 and INT8 IR through the Benchmark Python* Tool and save the outputs into a text file for further analysis.
#Run Benchmark App with FP32 IR
python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py -m pets.rn50.xml -t 20 2>&1 | tee pets_fp32_benchmark.txt
#Run Benchmark App with INT8 IR
python3 /opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py -m pets.rn50_i8.xml -t 20 2>&1 | tee pets_int8_benchmark.txt
OpenVINO provides a preview release of Deep Learning Workbench, which is another option to run a web-based graphical environment for your models. It helps you measure your model performance on a variety of hardware and can automatically fine-tune the performance of an OpenVINO model by reducing the precision of certain model layers (quantization) from FP32 to INT8. Using this tool, you can experiment with model optimizations and inference options, analyze inference results, and apply an optimal configuration.
## Comparison of FP32 Versus INT8 Inference Speed
Figure 6 summarizes the inference FPS results for all the above experiments. The FP32 and INT8 bars reflect the results from the inference application; they show a boost of approximately 2.19x. The FP32 Streams and INT8 Streams bars show the results from the OpenVINO Benchmark Python Tool, and we see a 3.42x boost when comparing FP32 Streams vs INT8 Streams as a result of better CPU core utilization.
Figure 6. FP32 versus INT8 Inference Speeds
## Conclusion
This article describes how the Intel Distribution of OpenVINO toolkit, together with the power of Vector Neural Network Instructions (VNNI) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512), can accelerate your workload. You can see a clear performance boost with Intel DL Boost on inference workloads.
## Additional Resources
Intel® Distribution of OpenVINO™ Toolkit
Calibration Tool User Guide
## Authors
Abdulmecit Gungor is an AI developer evangelist at Intel Corporation. He works closely with the developer community, trains developers, and speaks at universities and conferences on Intel® AI technologies. He has a master’s degree from Purdue University and a bachelor’s degree from City University of Hong Kong. He also holds an academic achievement award from The S. H. Ho Foundation. His interests are end-to-end natural language programming application development on real-life problems, text mining, statistical machine learning, and performance optimization of AI workloads.
Michael Zephyr is an AI developer evangelist in the Architecture, Graphics and Software Group at Intel Corporation. He promotes various Intel® technologies that pertain to machine learning and AI, and he regularly speaks at universities and conferences to help spread AI knowledge. Michael holds a bachelor's degree in computer science from Oregon State University and a master's degree in computer science from the Georgia Institute of Technology. In his free time, you can find him playing board games or video games and lounging with his wife and cat.
## Footnotes
1. Performance results are based on testing as of October 2019 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information, see Performance Benchmark Test Disclosure.
Per node using 2 x Intel® Xeon® Platinum 8280 Processor @ 2.7Ghz, 192 GB, 12 slots / 16 GB / 2933 Mhz DDR4 DIMM, HyperThreading: Enable, Turbo: Enable, Storage: 1x Intel® DC S4600 480GB, Network Device: 1 x Intel® Ethernet Network Adapter X722, OS: Ubuntu Server 18.04.3 LTS, Kernel: 5.0.0-32-generic, OpenVINO 2019 3.334 (2019_R3), and Tensorflow 1.14.
#### Product and Performance Information
1
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
https://www.nature.com/articles/s41467-020-14880-2?error=cookies_not_supported&code=1729381e-0205-4dc3-a23c-9555f53de947
|
Introduction
Organic semiconductors are now well established as a thin-film electronics platform for displays, solar power and printed electronics1,2,3,4. Their optical and electronic properties can be tailored through molecular design to give very efficient light emission, and to tune the band gap energy or thin film morphology5,6,7. Organic electronics and organic light emitting diodes (OLEDs) in particular are now widely adopted in smart devices and wearables8,9,10,11. They can be produced in very high volumes on very thin, flexible plastic substrates, at low cost12,13, with high density integration of pixels of different materials or for very large area electronics. While high power efficiency has been achieved for OLEDs, organic solar cells, and photodiodes, their temporal performance is generally thought to be intrinsically slow. For example, in comparison to inorganic LEDs, OLEDs have low modulation bandwidth which is mainly attributed to their low charge mobility and high capacitance. The charge mobility of organic semiconductor materials (especially the ones used in OLEDs) is several orders of magnitude lower than for inorganic semiconductors14. In addition, OLEDs have a planar structure of several tens to hundreds of nanometers thickness. This forms a high capacitance which limits bandwidth drastically14,15, with few reports exceeding 10 MHz16. A striking consequence of this is the highest reported data rate for an OLED optical data link is 51.6 Mbps by Chen et al.17, whereas GaN LEDs have achieved visible light communication (VLC) data links of several Gbps18,19,20.
OLEDs with high electrical bandwidth (of hundreds of MHz or above) could enable a wide range of new applications of organic electronics. For example, OLEDs are an attractive light source for low cost integration in lab-on-a-chip formats for disposable point of care diagnostics21; fast intensity modulation would enable time-resolved fluorescence measurements of biomarkers22, broadening the scope for diagnostic tests with high specificity. Visible light communications is a promising approach to address the rapidly increasing global demand for wireless communications bandwidth23,24,25; fast OLED transmitters could play a role in data links with smart phones, smart label sensors or secure contactless communications between credit cards and ATMs. Fast modulation of OLED arrays and displays could also enable new approaches to fluorescence lifetime imaging26,27,28, optical detection29 and ranging30,31, and structured light imaging32,33.
Here, we report a significant breakthrough in speeds of OLEDs. To achieve this, we carefully investigated the factors limiting the bandwidth of OLEDs and how to overcome them. We explored the effect of active area size, electrode design, and the emitting material. By simultaneously optimizing all three of these aspects, we fabricated OLEDs that show a significant improvement in bandwidth. We illustrate one area of possible application of these very fast OLEDs by showing that they can be used as a transmitter for a free space communication link at a rate of 1.13 Gbps over a distance of 2 m. To the best of our knowledge, this constitutes an improvement of a factor of 20 over previously reported OLED VLC data rates17, and is achieved over a much larger distance. We believe that these results will pave the way for efficient, low-cost, and high-speed organic optoelectronics, with potential applications in secure communications, point of care diagnostics, and optical imaging and ranging.
Results
Overview
Three generations of OLED devices were developed and a combination of optoelectronic and photophysical measurements made to identify the key factors limiting their modulation bandwidth and overcome them. Each OLED generation was then characterized in a data transmission measurement to assess their capacity to usefully exploit their modulation bandwidth. An orthogonal frequency division multiplexing (OFDM) modulation scheme was used, which permits adaptive allocation of data and energy in different frequency bands to optimally utilize the available bandwidth (see Methods section for further details)18,34,35.
First generation OLEDs with p-i-n structure and fluorescent emitter
The first generation of OLEDs was designed with a p-i-n structure36 with doped charge transport layers, based on an established blue fluorescent emitter, 2,5,8,11-tetra-tert-butylperylene (TBPe)37,38,39 (hereafter termed “G1-OLEDs”, see Fig. 1 for details). This fluorescent emitter was chosen over more efficient phosphorescent and thermally activated delayed fluorescent emitters due to its substantially shorter luminescence lifetime40 (4.4 ns for TBPe doped into 2-methyl-9,10-bis(naphthalen-2-yl)anthracene (MADN); hereafter termed “TBPe film”, see Supplementary Fig. 1a). A p-i-n structure with doped transport layers36 was selected to achieve good charge injection from the contacts and high conductivity. This reduced heating of the device and hence enabled operation at high current and high brightness (Supplementary Fig. 2a).
To assess the bandwidth of the G1-OLEDs, the frequency response of the system was determined (see Methods section). The overall optical link exhibits a low-pass characteristic whose bandwidth is significantly determined by the OLED. Figure 2a displays the bandwidth of the link (the frequency at which the power gain of a VLC link, |H|², decreases 6 dB from its maximum value) as a function of the voltage applied to the G1-OLEDs. We observed an increase of the bandwidth with increasing voltage. This may be attributed to a reduction in device resistance as the voltage was increased, leading to a reduction of the electrical time constant of the OLEDs41 (see Supplementary Note 2). We note that the maximum voltage for the G1-OLED was limited to around 5 V by device breakdown. The details and the limiting factors of the G1-OLEDs are discussed further in the next section.
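The qualitative link between device resistance, capacitance, and bandwidth can be sketched with a first-order RC low-pass model. The component values below are illustrative placeholders, not measured parameters of these OLEDs:

```python
import math

# Illustrative lumped values (assumptions, not measured device parameters)
R = 50.0   # ohms: series (wiring) plus device resistance
C = 2e-9   # farads: OLED capacitance

# -3 dB corner frequency of a first-order RC low-pass
f_c = 1 / (2 * math.pi * R * C)

# The text defines link bandwidth as the frequency where |H|^2 falls 6 dB
# below its maximum; for |H(f)|^2 = 1 / (1 + (f / f_c)**2) that occurs at
f_6dB = f_c * math.sqrt(10 ** 0.6 - 1)
```

Reducing either R or C pushes the corner frequency up, which is the rationale behind the electrode and device-area changes in the later OLED generations.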
A free-space data link using the G1-OLED as transmitter was next evaluated by locating a receiver 2 m away and driving the OLED with a data stream using an implementation of direct current (DC)-biased optical orthogonal frequency division multiplexing (DCO-OFDM) (see Supplementary Note 3 for the details of data transmission optimization). Figure 2b shows the data rate at a bit error ratio (BER) of 3.8 × 10−3, which corresponds to the 7% forward error correction limit (7% FEC limit42) as a function of voltage for the G1-OLEDs. A maximum data rate of 140 Mbps (average of 4 OLED samples, Supplementary Table 1) was achieved for an applied DC voltage of 5 V.
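The construction of a DCO-OFDM drive waveform can be sketched in a few lines of NumPy. The subcarrier count and biasing rule below are illustrative assumptions, not the parameters of the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                 # IFFT length (assumed for illustration)
n_data = N // 2 - 1    # independent data-bearing subcarriers

# Random QPSK symbols on the data subcarriers
bits = rng.integers(0, 2, size=(n_data, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Hermitian symmetry (X[N-k] = conj(X[k]), X[0] = X[N/2] = 0) makes the
# IFFT output real-valued, as required for intensity modulation of an OLED
X = np.zeros(N, dtype=complex)
X[1:N // 2] = qpsk
X[N // 2 + 1:] = np.conj(qpsk[::-1])

x = np.fft.ifft(X).real

# "DC-biased": shift the waveform so the OLED drive signal is non-negative
waveform = x + abs(x.min())
```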
Second generation OLEDs of different active area with thermal management and top emission geometry
To further improve the OLED bandwidth, it is necessary to reduce the electrical time constant. To achieve this, the device design was modified to reduce both the wiring resistance, which is the resistance between the contact pads and the active area of the OLED, and the capacitance of the OLED. These steps reduced the RC time constant, and also enabled the device to operate at higher voltage. The indium tin oxide (ITO) layer (90 nm; 42 Ω/sq.43) was replaced with a thin silver layer (30 nm; <3 Ω/sq.), thereby reducing the resistance of the transparent electrode. Smaller OLED areas can achieve higher bandwidth44 due to reduced capacitance, but this comes with a penalty of lower light output, potentially limiting the signal to noise ratio (SNR) of a data link. We therefore fabricated an improved set of OLEDs with the same organic stack but in a range of different sizes to test this trade-off and find an optimized area to maximize the communication performance. We used thermal evaporation of crossed electrodes to define the device area rather than introducing insulators with photolithography, which would cause parasitic capacitance45. Finally, we note that for the first generation OLEDs, both their bandwidth and data rate increased with applied voltage, but were limited to a maximum value by the temperature rise of the device, which led to device breakdown46. To suppress this temperature rise, the second generation OLEDs were fabricated on silicon substrates, which offer high thermal conductivity47,48. These second generation OLEDs (hereafter termed “G2-OLEDs”) were fabricated with three different device areas: G2-S-OLEDs (1.2 × 10−3 cm2), G2-M-OLEDs (1.1 × 10−2 cm2), and G2-L-OLEDs (9 × 10−2 cm2).
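The area dependence of the capacitance can be illustrated with a parallel-plate estimate, C = ε0·εr·A/d. The permittivity and stack thickness below are rough assumptions, not measured values:

```python
# Parallel-plate capacitance estimate for the three G2 device areas
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 3.0        # assumed relative permittivity of the organic stack
d = 120e-9         # assumed total stack thickness, m

areas_cm2 = {"G2-S": 1.2e-3, "G2-M": 1.1e-2, "G2-L": 9e-2}

# Convert cm^2 to m^2 and apply C = eps0 * eps_r * A / d
cap_farads = {name: EPS0 * eps_r * (a * 1e-4) / d for name, a in areas_cm2.items()}
```

With everything else fixed, capacitance (and hence the RC time constant) scales directly with active area, consistent with the smallest device showing the highest bandwidth.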
Figure 2a shows the results of bandwidth measurements of a link using G2-OLEDs as a function of voltage. The bandwidth is higher for G2-OLEDs with smaller device area—for example at 8 V the bandwidths of G2-L-OLED, G2-M-OLED, and G2-S-OLED are 24, 37, and 81 MHz, respectively. It can also be seen that the bandwidth increases with increasing voltage. Both observations are substantially due to a reduction in electrical time constant. This arises from the change in device area and (as diodes have nonlinear current–voltage characteristics) a reduction in resistance as voltage is increased. The results show that reducing device size and increasing operating voltage are helpful to realize high bandwidth. The data rate of these devices is shown in Fig. 2b. For all second generation OLEDs, the data rate increases with voltage, with the G2-S-OLEDs giving the highest data rate of 610 Mbps at 9 V (average of three OLED samples).
Third generation OLEDs with improved device configuration and faster emitter
The potential ways of increasing the modulation bandwidth of the G2-S-OLEDs are to reduce further the electrical time constant, decrease the charge transit time, and shorten the emission lifetime. These points were addressed in the third generation OLEDs (G3-OLEDs). The electrical time constant could be reduced further by reducing the device area, increasing the number of contact pads for the more resistive transparent electrode (as resistance decreases by connecting two resistors in parallel), and by increasing the thickness of the aluminum (Al) bottom electrode to reduce wiring resistance. G3-OLEDs were therefore designed with similar stack structure to the G2-OLEDs, but with the device area slightly reduced to 9.2 × 10−4 cm2, and a different electrode configuration. The cathode was re-designed with two contact pads, while the thickness of the Al anode was increased from 80 to 200 nm (see Fig. 1b).
Since the bandwidth of the G2-S-OLED is comparable to the emission lifetime of the TBPe film, OLEDs with an alternative emitter of shorter photoluminescence (PL) lifetime, 4,4′-bis[4-(diphenylamino)styryl]biphenyl (BDAVBi)49, were fabricated as well (G3-“fast-emission-molecule” (FEM)-OLEDs). The lifetime of BDAVBi films doped into MADN was measured to be 1.1 ns (see Supplementary Fig. 1a).
Figure 2a shows our initial measurements of the link’s bandwidth with the G3-OLED and G3-FEM-OLEDs as a function of applied voltage. Higher bandwidth was observed with the G3-OLED than with the G2-S-OLED. The bandwidth of the fastest OLEDs was then studied in more detail using a photodiode with flatter frequency response (Thorlabs, APD430A2/M)50 and the results are shown in Fig. 3a. The G3-OLEDs show higher bandwidth than the G2-OLEDs and the bandwidth increases substantially with increasing voltage. The G3-OLED and G3-FEM-OLEDs achieved maximum bandwidths of 178 and 245 MHz respectively, and the bandwidth of G3-FEM-OLED increases more steeply than the G3-OLED with voltage. Figure 3b shows a comparison of the frequency response of the OLEDs and the corresponding films of their emission layers. The BDAVBi film has much higher bandwidth than the TBPe film because of its shorter PL lifetime. Interestingly, it was found that the bandwidth of these OLEDs can be higher than the PL bandwidth of the emitting molecules, especially for the G3-OLED which uses TBPe as an emitter. This suggests there are additional processes in the device, such as singlet-triplet annihilation38, which shorten the emission lifetime.
There are two main factors that contribute to the increase of bandwidth with increasing voltage. One is that the higher electric field leads to a shorter charge transit time. The other is that the reduction in resistance reduces the electrical time constant. In the G3-FEM-OLED the resistance of the device is large compared with the wiring resistance at all operating voltages. In this scenario, the effect of the change in device resistance is minor41, and the main effect is the reduction of charge transit time. In combination with the short emission lifetime, this leads to the very high bandwidth for an organic semiconductor device of 245 MHz. We note that this measurement was made with a detector with a flat frequency response up to 400 MHz whereas the VLC link used a detector with a resonance around 100 MHz (Supplementary Fig. 6).
Finally, the VLC performance of the G3- and G3-FEM-OLEDs was investigated. Figure 2b and Supplementary Table 1 show the data rate at the 7% FEC limit of the OLEDs in a 2 m data link. Both G3-OLEDs achieved data rates exceeding 1 Gbps. The highest data rate was for the G3-FEM-OLED which exhibited a maximum data rate of 1.13 Gbps (average of 3 OLED samples) for a peak to peak voltage of 1.3 V with DC offset of 12 V and modulation bandwidth of 165 MHz.
Like other OLED-VLC work, we used a lens at the transmitter and receiver, and this allows us to compare our results with literature reports in Fig. 4. Our data rate of 1.13 Gbps with the G3-FEM-OLED is 22 times greater than that of earlier devices17,51,52,53,54. Furthermore, while Chen et al. recently demonstrated that the data transmission rate significantly drops with increasing separation distance17, we achieved our value in a link of a practical length of 2 m, i.e., 16 times longer than they used.
Discussion
We have demonstrated a breakthrough in high speed OLED performance. Organic optoelectronic devices are usually thought to be slow, but we have shown how the potential limitations of electrical time constant, low mobility and excited state lifetime can be overcome by careful device design and materials selection. This paves the way to a new generation of organic electronic and optoelectronic devices working much faster than previously thought possible. We have illustrated this by making a 20-fold advance in the VLC performance to achieve data rates of 1.13 Gbps from OLEDs as data transmitter over a 2 m link. While the lens-coupled 2 m data link demonstrated here requires point-to-point alignment of the transmitter and receiver, the fast OLEDs could potentially be applied lens-free in shorter contactless datalinks, or with high gain alignment-tolerant receivers55,56,57. Moreover, our results suggest that OLEDs may be suitable for a range of potential applications where fast modulation is required, spanning secure communications, point of care diagnostics, and optical imaging and ranging. The principles developed here may also prove useful for making other organic electronic devices faster than previously thought possible.
Methods
Fabrication of OLEDs and films of emission layers of OLEDs
The OLEDs were fabricated as follows: metals and organic materials were thermally evaporated through a shadow mask in a vacuum chamber at a base pressure of 10−7 mbar (Angstrom Engineering Inc., EvoVac). The OLED layer stack consisted of 40 nm 2,2′,7,7′-tetrakis(N,N′-di-p-methylphenylamino)-9,9′-spirobifluorene (Spiro-TTB) p-doped with 2,2′-(perfluoronaphthalene-2,6-diylidene)dimalononitrile (F6-TCNNQ) (4 wt%) as a hole transport layer (HTL), 10 nm N,N′-di(naphthalene-1-yl)-N,N′-diphenylbenzidine (NPB) as an electron blocking layer (EBL), 20 nm emission layer (EML, detailed below), 10 nm bis-(2-methyl-8-quinolinolato)-(4-phenyl-phenolato)-aluminum (III) (BAlq) as a hole blocking layer (HBL), and 40 nm cesium-doped 4,7-diphenyl-1,10-phenanthroline (BPhen) as an n-doped electron transport layer (ETL). Calibrated quartz crystal microbalances were used to control deposition rates and the final thicknesses of each layer. All materials were purchased from commercial suppliers and used without further purification. All OLEDs were encapsulated under nitrogen atmosphere using 1.1 mm thick glass lids (Shanghai Amerina Optoelectronic Co., Ltd.), epoxy resin (Norland Products Inc., Norland Optical Adhesive 68), and an additional moisture getter (Dynic Corporation, HD-071210T-50S). A similar procedure was used to make films of the emission layers of the OLEDs, although films for PL quantum yield were not encapsulated.
The G1-OLED was fabricated onto 1.1 mm thick glass substrates coated with a 90 nm thick pre-patterned indium tin oxide (ITO) anode (Thin Film Devices Inc.). The device stack was capped with a 100 nm thick aluminum cathode. The EML consisted of the fluorescent emitter 2,5,8,11-tetra-tert-butylperylene (TBPe), which was doped at 1.5 wt% in the host 2-methyl-9,10-bis(naphthalen-2-yl)anthracene (MADN). The active area of these G1-OLEDs was 16.1 mm2.
The G2- and G3-OLEDs were fabricated on silicon substrates with a thickness of 675 µm coated with a 300 nm silicon dioxide (SiO2) layer on both sides to prevent electrical leakage through the substrate. Aluminum anodes were evaporated onto the SiO2 layer through a shadow mask with different aperture sizes. The organic material stack was subsequently evaporated as detailed above. The EML of the G2- and G3-OLEDs consisted of TBPe, which was doped at 1.5 wt% in MADN. The EML of the G3-FEM-OLEDs consisted of 4,4′-bis[4-(diphenylamino)styryl]biphenyl (BDAVBi) doped at 3 wt% in MADN. A 30 nm silver layer was evaporated on top as a semi-transparent cathode and the stack was finished with a 50 nm thick NPB capping layer. Shadow masks were changed without breaking vacuum. A single layer of aluminum was used as a bottom electrode instead of the commonly used multi-layer of aluminum and silver58,59 as this was found to give higher device fabrication yield and more stable operation. The active areas were measured from electroluminescence (EL) images of the operating OLEDs under a microscope (ZEISS, Axio Lab A1; see Fig. 1d).
OLED characterization
Current density-voltage-radiance characteristics were measured using a source meter (Keithley Instruments, Inc., 2400 SourceMeter), and a multimeter (Keithley Instruments, Inc., 2000 Multimeter) with a calibrated Si photodiode. Emission spectra from the OLEDs under constant current operation at ≈6 mA/cm2 were recorded by a CCD spectrograph (Andor Technology Ltd., DV420-BV), and used to calculate the radiance. The external EL quantum efficiency (EQE) of the OLEDs was estimated assuming a Lambertian emission pattern.
The frequency response of G2-S-, G3-, and G3-FEM-OLEDs was measured using a photodiode with a linear frequency response (Thorlabs, APD430A2/M, −3 dB bandwidth at 400 MHz). The OLEDs and the receiver were connected to a network analyser (Keysight, E5061B) and the measurement known as “S21” measurement was conducted to estimate the frequency response of the OLEDs at different DC bias voltages and at frequencies in the range 0.1–500 MHz. Results are shown up to 400 MHz due to low SNR above this frequency. The frequency response of films of the emission layers were measured in a similar way but with excitation by a laser at 405 nm (US-Lasers Inc., D405-20) connected to the network analyser through a diode driver (Thorlabs, LDC210C) and with a long-pass filter (Comar Optics Ltd, 510 IY 50) inserted between the lenses. The frequency response of the laser was measured and the frequency response of the films was estimated by dividing the measured frequency response of the films by the frequency response of the laser.
In all the data transmission experiments, orthogonal frequency division multiplexing (OFDM) was used. It is a well-known and widely applied modulation scheme in visible light communication18,34,35. OFDM was selected because it is adaptive to communication system properties, and permits adaptive allocation of data and energy at different frequency bands. It further offers advantages such as cost-effective equalization with single-tap equalizers and an easy way to avoid low-frequency interference caused by ambient light.
In order to estimate the frequency response of the end-to-end system, including OLEDs, multiple frames of known quadrature phase shift keying (QPSK) modulated DCO-OFDM signals were sent through the link. In each frame transmission, the channel gain (H) was measured at each subcarrier and then averaged over several OFDM frames. After estimating the channel gain at each subcarrier, the average noise power, $$\sigma _n^2$$, was measured as the difference between the received power (PRx, i.e., signal plus noise power) and the noise-free received signal power, H²PTx (i.e., the transmitted power scaled by the estimated channel gain). Finally, the SNR was estimated at each subcarrier as the ratio between the received signal power and noise power (i.e., $${\mathrm{SNR}} = H^2P_{{\mathrm{Tx}}}/\sigma _n^2$$). The bandwidth estimated for the OLEDs is the frequency at which the value of |H|² decreases 6 dB from its maximum value. We confirmed that data transmission was due to the light emitted from the OLEDs by comparing the SNR before and after inserting a piece of paper to block the light in a shorter VLC link of 5 cm (see Supplementary Fig. 9). It was found that the SNR drops to almost 0 dB when the light is blocked.
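The per-subcarrier channel-gain and SNR estimation described above can be sketched numerically. The channel shape, pilot symbols, and noise level here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_frames = 16, 200

# Invented low-pass-like channel gains and unit transmit power per subcarrier
H = 1.0 / (1.0 + np.arange(n_sub) / 4.0)
P_tx = np.ones(n_sub)
noise_var = 0.01

# Known +/-1 pilot symbols sent over many frames
tx = rng.choice([-1.0, 1.0], size=(n_frames, n_sub))
rx = H * tx + rng.normal(0.0, np.sqrt(noise_var), size=(n_frames, n_sub))

# Channel gain per subcarrier, averaged over frames (E[rx * tx] = H for +/-1 pilots)
H_est = np.mean(rx * tx, axis=0)

# Noise power = received power minus noise-free signal power H^2 * P_tx
P_rx = np.mean(rx ** 2, axis=0)
sigma2 = P_rx - H_est ** 2 * P_tx

# SNR per subcarrier, in dB
snr_db = 10 * np.log10(H_est ** 2 * P_tx / sigma2)
```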
Steady state photoluminescence (PL) spectra of the films were recorded in a fluorimeter (Edinburgh Instruments, FLS980). Transient PL decay curves were recorded on the same fluorimeter by time-correlated single photon counting (TCSPC). Excitation was by a 379 nm laser diode (PicoQuant, LDH-D-C-375) operating at a repetition rate of 300 kHz. The measured PL decay curves were then fitted by a two exponential decay model considering the instrument response function (IRF), and an averaged emission lifetime (<τ>) of the films was estimated using the following equation: $$< \tau > = \gamma _1\tau _1 + \gamma _2\tau _2$$, where τ1 and τ2 are the emission lifetimes of each component and γ1 and γ2 are the contributions of the emission from each component to the total emission (i.e., $$\gamma _1 = \frac{{A_1\tau _1}}{{A_1\tau _1 + A_2\tau _2}},\gamma _2 = \frac{{A_2\tau _2}}{{A_1\tau _1 + A_2\tau _2}}$$, where A1 and A2 are the pre-exponential factors of each component). The PL quantum yields of the films were measured on unencapsulated films using an integrating sphere based measurement system (Hamamatsu, C9920-02) under nitrogen flow.
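The intensity-averaged lifetime formula above reduces to a couple of lines of arithmetic; the amplitudes and lifetimes below are made-up numbers, not the fitted values for the TBPe or BDAVBi films:

```python
# Biexponential fit parameters (illustrative, not the measured films)
A1, tau1 = 0.7, 1.0   # pre-exponential factor, lifetime in ns
A2, tau2 = 0.3, 4.0

# Fractional intensity contributions gamma_i = A_i*tau_i / sum(A_j*tau_j)
total = A1 * tau1 + A2 * tau2
g1 = A1 * tau1 / total
g2 = A2 * tau2 / total

# Averaged emission lifetime <tau> = gamma_1*tau_1 + gamma_2*tau_2
tau_avg = g1 * tau1 + g2 * tau2
```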
http://glueviz.org/en/stable/getting_started/index.html
|
Getting started¶
This page walks through Glue’s basic GUI features, using data from the W5 star forming region as an example. You can download the data files for this tutorial here:
After installing Glue, you can launch it by typing:
glue
or by clicking on the Glue icon in the Anaconda Navigator if using it. After a few seconds you should see the main window of glue which looks like this:
The main window consists of 4 areas:
1. The data collection. This lists all open data sets and subsets (highlighted regions).
2. The viewer layers. This is where you will see a list of layers in the current viewer, and be able to control the appearance of individual layers
3. The viewer options. This is where you will see global options for the active viewer
4. The visualization canvas. This is where visualization windows reside.
Opening Data¶
There are multiple ways to open data:
• By clicking on the red folder icon in the top left
• By selecting the Open Data Set item under the File menu or using the equivalent shortcut (e.g. Ctrl+O on Linux, Cmd+O on Mac).
• By dragging and dropping data files onto the main window
Find and open the file w5.fits which should be in the w5.tgz or w5.zip archive you downloaded above. This is a WISE image of the W5 Star Forming Region. While this is an astronomical dataset, glue can be used for data in any discipline, and many of the concepts shown below are applicable to many types of dataset.
Plotting Data¶
After opening w5.fits, a new entry will appear in the data manager:
To visualize a dataset, click and drag the entry from the data manager to the visualization dashboard. A popup window asks about what kind of plot to make. Since this is an image, select 2D Image Viewer.
Defining Subsets¶
Work in glue revolves around “drilling down” into interesting subsets within data. Each visualization type (image, scatterplot, …) provides different ways for defining these subsets. In particular, the image window provides 5 options:
• Rectangular selection
• Horizontal range
• Vertical range
• Circular selection
• Freeform selection
To use these, click on one of the selection icons then click and drag on the image to define a selection. If using the polygon selection, you should press ‘enter’ when the selection is complete (or ‘escape’ to cancel).
We can highlight the west arm of W5 using the rectangle selector:
Notice that this highlights the relevant pixels in the image, adds a new subset (named Subset 1) to the data manager, and adds a new visualization layer in the visualization dashboard.
We can redefine this subset by dragging a new rectangle in the image, or we can also move around the current subset by pressing the ‘control’ key and clicking on the subset then dragging it. As long as Subset 1 is selected in the data collection view in the top left, drawing selections will redefine Subset 1. If you un-select this subset and draw a new region, a new subset will be created.
You can edit the properties of a visualization layer (color, transparency, etc.) by clicking on the layer in the Plot layers list on the left. Likewise, you can re-arrange the rows in this widget to change the order in which each layer is drawn – the top entry will appear above all other entries.
Visualizations are linked in Glue – that is, we can plot this data in many different ways, to better understand the properties of each subset. To see this, click and drag the W5[PRIMARY] entry into the visualization area a second time, and make a histogram. Edit the settings in the histogram visualization dashboard to produce something similar to this:
This shows the distribution of intensities for the image as a whole (gray), and for the subset in red (the label PRIMARY comes from the FITS header).
Perhaps we wish to remove faint pixels from our selection. To do this, we pick the last mode (Remove From Selection) from the selection mode toolbar:
When this mode is active, new regions defined by the mouse are subtracted from the selected subsets. We can therefore highlight the region between x=450-500 in the histogram to remove this region from the data.
Note
Make sure you switch back to the first, default selection mode (Replace Selection) once you have finished defining the selection.
Open the second file, w5_psc.vot – a catalog of Spitzer-identified point sources towards this region. You will see a new entry in the data manager. We can double click on that entry to rename it to Point Sources, and the result will look like this:
At this point, you can visualize and drilldown into this catalog. However, Glue doesn’t know enough to intercompare the catalog and image. To do that, we must Link these two data entries. Click on the Link Data button in the toolbar. This brings up a new window, showing all the pieces of information within each dataset:
The image has an attribute Right Ascension. This is the same quantity as the RAJ2000 attribute in the Point Sources dataset – they are both describing Right Ascension (the horizontal spatial coordinate on the sky). Select these entries, and click Glue to instruct the program that these quantities are equivalent. Likewise, link Declination and DEJ2000 (Declination, the other coordinate). Click OK.
Note
What does this do? This tells Glue how to derive the catalog-defined quantities DEJ2000 and RAJ2000 using data from the image, and vice versa. In this case, the derivation is simple (it aliases the quantity Declination or Right Ascension). In general, the derivation can be more complex (i.e. an arbitrary function that maps quantities in the image to a quantity in the catalog). Glue uses this information to apply subset definitions to different data sets, overplot multiple datasets, etc.
After these connections are defined, subsets that are defined via spatial constraints in the image can be used to filter rows in the catalog. Let’s see how that works.
First, make a scatter plot of the point source catalog. Then, select Subset 1 and draw a new region on the image. You should see this selection applied to all plots:
You can also overplot the catalog rows on top of the image. To do this, click the arrow next to the new subset – this shows the individual selections applied to each dataset. Click and drag the subset for the point source catalog on top of the image. To see these points more easily, you may want to disable the layer showing all the points (named Point Sources) in the list of plot layers.
Glue is able to apply this filter to both datasets because it has enough information to apply the spatial constraint in the image (fundamentally, a constraint on Right Ascension and Declination) to a constraint in the catalog (since it could derive those quantities from the RAJ2000 and DEJ2000 attributes).
Tip
Glue stores subsets as sets of constraints – tracing a rectangle subset on a plot defines a set of constraints on the quantities plotted on the x and y axes (left < x < right, bottom < y < top). Copying a subset copies this definition, and pasting it applies the definition to a different subset.
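Outside of glue, the same constraint idea can be mimicked with a boolean mask; the coordinates and rectangle below are invented for illustration:

```python
import numpy as np

# Hypothetical catalog columns standing in for RAJ2000 / DEJ2000
ra = np.array([44.1, 44.6, 45.2, 46.0])
dec = np.array([60.2, 60.8, 61.1, 59.9])

# A rectangle traced on a plot becomes constraints on the plotted quantities:
# left < x < right and bottom < y < top
left, right = 44.0, 45.0
bottom, top = 60.0, 61.0

mask = (left < ra) & (ra < right) & (bottom < dec) & (dec < top)
selected_ra = ra[mask]
```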
As was mentioned above, the highlighted subsets in the data manager are the ones which are affected by selecting regions in the plots. Thus, instead of manually copy-pasting subsets from the image to the catalog, you can also highlight both subsets before selecting a plot region. This will update both subsets to match the selection.
Note
Careful readers will notice that we didn’t use the image subset from earlier sections when working with the catalog. This is because that selection combined spatial constraints (the original rectangle in the image) with a constraint on intensity (the histogram selection). There is no mapping from image intensity to quantities in the catalog, so it isn’t possible to filter the catalog on that subset. In situations where Glue is unable to apply a filter to a dataset, it doesn’t render the subset in the visualization.
Glue provides a number of ways to save your work, and to export your work for further analysis in other programs.
Saving The Session¶
You can save a Glue session for later work via the Save Session button in the toolbar or in the File menu. This creates a glue session file (the preferred file extension is .glu). You can restore this session later via the Open Session button in the toolbar or in the File menu.
By default, these files store references to the files you opened, and not copies of the files themselves. Thus, you won’t be able to re-load this session if you move any of the original data. To include the data in the session file, you can select ‘Glue Session including data’ when saving:
Saving Plots¶
Static images of individual visualizations can be saved by clicking the floppy disk icon on a given visualization window. There are also exporters available under the File menu - built-in exporters include one to export plots to the plotly service, and one to export plots using D3PO.
Saving Subsets¶
Glue is primarily an exploration environment – eventually, you may want to export subsets for further analysis. Glue currently supports saving subsets as FITS masks. Right click on the subset in the data manager (note that you need to select the subset applied to a specific dataset, not the overall subset, so be sure to expand the subset by clicking on the triangle on the left of the subset name), and select Export subset values or Export subset mask(s) to write the subset to disk.
https://www.physicsforums.com/threads/analysis-finding-limit-with-lhospital.548430/
|
# Analysis: Finding limit with l'Hospital
1. Nov 7, 2011
### Shackleford
I'm not sure what I did with this expression is right.
I looked at the argument as x tends to 0. I used l'Hospital twice and found that the argument tends to pi/3. Of course, tan(pi/3) = sqrt3.
http://i111.photobucket.com/albums/n149/camarolt4z28/Untitled-3.png [Broken]
2. Nov 7, 2011
### ehild
How did you use l'Hospital twice? Your method is valid, and the result is correct.
ehild
3. Nov 7, 2011
### Shackleford
Oops. You're right. I just used it once.
I plugged this into my TI-89 Ti, and it didn't agree with my analytical result. I suppose it's not very reliable as it didn't agree for a couple of other problems, too.
4. Nov 8, 2011
### Staff: Mentor
It's worth noting that you don't need to use L'Hopital's Rule at all for this problem.
$$\lim_{x \to 0^+}\tan\left( \frac{\sin 2\pi x}{6x}\right)$$
$$=\tan \left(\lim_{x \to 0^+} \frac{\sin 2\pi x \cdot 2\pi x}{2\pi x \cdot 6x}\right)$$
$$=\tan \left(\lim_{x \to 0^+} \frac{\sin 2\pi x}{2\pi x} \cdot \frac{2\pi x}{6x}\right)$$
$$=\tan (\frac{\pi}{3}) = \sqrt{3}$$
I used a well-known limit, $\lim_{t \to 0} \frac{\sin(t)}{t} = 1$, and the fact that you can interchange the limit and the function, provided the function is continuous at the limiting point.
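The value is also easy to sanity-check numerically; a quick Python sketch (the function name `f` is arbitrary):

```python
import math

# f(x) = tan(sin(2*pi*x) / (6*x)); the limit as x -> 0+ should be sqrt(3)
def f(x):
    return math.tan(math.sin(2 * math.pi * x) / (6 * x))

for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))  # values approach tan(pi/3) = sqrt(3) ~ 1.7320508
```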
5. Nov 8, 2011
### Shackleford
I like that method better. Ah. I didn't know that. I haven't read the section yet, but I'm sure it mentions being able to interchange the limit and function.
https://www.allanswered.com/post/okgk/fenics-loses-geometry-data/
### Fenics loses geometry data
Hi all, I've been having a weird problem that I can't seem to fix. I've been using gmsh to create geometry and meshes, then running them through FEniCS to solve Poisson's equation in 3D, and outputting the results as a .pvd to view in ParaView. The solver runs fine and I get understandable results. However, one of the geometries I defined in gmsh (the bottom rectangular prism labeled "Substrate", id number 2) doesn't make it through to the .pvd. It just doesn't display, even though the boundary condition applied to it works fine. I've attached pictures, and the FEniCS code. Any ideas why this might be? I'm thinking it might have something to do with the fact that I specified a couple of surfaces as members of multiple Physical Groups in gmsh, but I'm not sure why that would cause problems... let me know if any other info is needed!
Fenics code:
from fenics import *
from mshr import *
%matplotlib notebook
mesh = Mesh("/home/fenics/shared/electrodes_fem/gmsh/electrodes_3d_4.xml")
domains = MeshFunction('size_t', mesh, "/home/fenics/shared/electrodes_fem/gmsh/electrodes_3d_4_physical_region.xml")
boundaries = MeshFunction('size_t', mesh, "/home/fenics/shared/electrodes_fem/gmsh/electrodes_3d_4_facet_region.xml")
V = FunctionSpace(mesh, 'Lagrange', 2)
bc_border = DirichletBC(V, Constant(0), boundaries, 1)
bc_e1 = DirichletBC(V, Constant(1000), boundaries, 3)
bc_e2 = DirichletBC(V, Constant(-1000), boundaries, 4)
bc_substrate = DirichletBC(V, Constant(0), boundaries, 2)
bcs = [bc_border, bc_e1, bc_e2, bc_substrate]
dx = Measure('dx', domain=mesh, subdomain_data=domains)
# Define variational problem
u = TrialFunction(V)
v = TestFunction(V)
f = Constant(0.0)
# NB: the bilinear form was missing from the paste; for Poisson's equation it is
a = dot(grad(u), grad(v))*dx
L = f*v*dx
# Compute solution
u = Function(V)
solve(a == L, u, bcs)
# Save solution in VTK format
file = File("../shared/electrodes_fem_3d/electrode_field_fem_3d_4.pvd")
file << u
# Plot solution and mesh
plot(u)
Gmsh screenshot:
ParaView screenshot:
Community: FEniCS Project
https://chemistry.stackexchange.com/questions/66565/chaotic-melting-points-of-n-alkyl-carboxylic-acids
# Chaotic melting points of n-alkyl carboxylic acids
Is there any trend here at all? This seems very chaotic as a trend.
• This paper may help.
– DHMO
Jan 18 '17 at 13:27
• It has the common zig-zag practically all properties of alkylic homologues show, and there is a minimum at 5. Looks very regular to me. No.5 cannot decide if it's rather polar or nonpolar, and gets so confused it does not crystallise.
– Karl
Jan 18 '17 at 18:02
• More to the point the odd number carbons have one trend and the even another.
– MaxW
Feb 18 '17 at 6:30
My guess without looking at any literature:
Higher acidity $\implies$ higher hydrogen bonding $\implies$ higher melting point.
Thus, going from H and Me, where there is virtually no +I effect from the chain, to Et, Pr and Bu, the +I effect of the chain decreases the melting point: the COOH groups become less acidic, so the hydrogen bonds get weaker.
Further increasing the chain length after hexyl does not really lead to any additional +I effects on the COOH group that is now far away. Thus, the melting point increases as there are additional van der Waals interactions between the longer chains.
Then there seems to be a factor that makes crystals of odd chains more favorable.
• So there are basically two opposing trends: longer chains decrease the stability of the COO⁻ anion, but after a certain chain length each methylene group adds additional vdW interactions. Jan 18 '17 at 14:02
• Please, use proper punctuation, spacing, and capitalization, and do not use short forms unless in chemical formulas.
– DHMO
Jan 18 '17 at 14:05
• @DHMO Sorry, was kind of in a hurry ; ) Jan 18 '17 at 18:09
• I'm afraid only the last 2 sentences are OK to have in an answer to this question. Jan 18 '17 at 19:37
• @Mithoron Why exactly? Jan 18 '17 at 20:34
An obvious rule is that for even $n+1$ the melting point of the acid $\ce{C_{n+1}}$ is higher than the melting point of acid $\ce{C_n}$.
• I don't find it obvious. Can you explain why? Mar 28 '17 at 12:58
• The zigzag line in the above image shows that the melting point increases from 1 to 2, from 3 to 4, from 5 to 6, and so on. I'm quite sure that there's no exception from this rule for n-alkyl carboxylic acids. Mar 28 '17 at 20:57
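The odd-to-even rule can be made concrete with approximate literature melting points (rounded values supplied here for illustration only; they are not taken from the plot in the question):

```python
# Approximate melting points (deg C) of the n-alkanoic acids C1..C10
# (rounded literature values, for illustrating the odd/even rule only)
mp = {1: 8.4, 2: 16.6, 3: -20.7, 4: -5.1, 5: -34.5,
      6: -3.4, 7: -7.5, 8: 16.7, 9: 12.5, 10: 31.6}

# Going from odd n to even n+1, the melting point rises
for n in (1, 3, 5, 7, 9):
    print(n, "->", n + 1, mp[n] < mp[n + 1])  # True in every case
```

Note the minimum at n = 5, which matches Karl's comment above.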
https://www.math10.com/forum/viewtopic.php?f=42&t=1815
# Limit with trigonometric sums
### Limit with trigonometric sums
$\lim_{x \to 0}\frac{1-\cos x\cos 2x\cos 3x\cdots\cos nx}{\sum_{k=1}^{n}\sin(kx^{2})}$
Guest
### Re: Limit with trigonometric sums
$\frac{2n+1}{6}$
Use l'Hôpital's rule twice,
or alternatively
Taylor expand all the functions and keep track of terms $1$, $x$, $x^2$, and ignore higher order terms.
Hope this helped,
R. Baber.
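The claimed value $\frac{2n+1}{6}$ is easy to check numerically for small $x$; a quick sketch (reading the denominator as $\sum_{k=1}^{n}\sin(kx^2)$):

```python
import math

def ratio(n, x):
    # (1 - cos(x)cos(2x)...cos(nx)) / (sin(x^2) + sin(2x^2) + ... + sin(nx^2))
    prod = 1.0
    for k in range(1, n + 1):
        prod *= math.cos(k * x)
    num = 1.0 - prod
    den = sum(math.sin(k * x * x) for k in range(1, n + 1))
    return num / den

for n in (1, 2, 3, 5):
    print(n, ratio(n, 1e-3), (2 * n + 1) / 6.0)  # the two columns agree
```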
Guest
https://www.ques10.com/p/49999/a-mild-steel-link-as-shown-in-figure-no-2-by-full-/
A mild steel link, shown by full lines in Figure No. 2, transmits a pull of 80 kN; find the dimensions b and t if b = 3t. Assume the permissible stress as 70 N/mm².
Given:
For the M.S. link: P = 80 kN = $80 \times 10^{3}\,N$, b = 3t, $\sigma = 70\,N/mm^{2}$
Solution:
$\sigma = \frac{P}{A}$ $\therefore A = \frac{P}{\sigma} = \frac{80 \times 10^{3}}{70}$
$\therefore A = 1142.86\,mm^{2}$
Since b = 3t, $A = b \times t = 3t^{2}$
$\therefore 3t^{2} = 1142.86$
$\therefore t = 19.52\,mm$
and $b = 3t = 3 \times 19.52 = 58.56\,mm$
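The arithmetic above is simple enough to script; a short sketch (variable names are arbitrary):

```python
import math

P = 80e3       # pull in N (80 kN)
sigma = 70.0   # permissible stress in N/mm^2

A = P / sigma            # required cross-sectional area, mm^2
t = math.sqrt(A / 3.0)   # b = 3t, so A = b*t = 3*t^2
b = 3.0 * t

print(round(A, 2), round(t, 2), round(b, 2))  # 1142.86 19.52 58.55
```

The computed b differs from 58.56 mm above only because t is not rounded to 19.52 before multiplying.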
http://firsttimeprogrammer.blogspot.com/2014/07/maths-with-python.html
## Tuesday, 15 July 2014
### Maths with Python
Last week I was reviewing analytic geometry and, while looking at graphs, equations, theorems and points, I got a little bored and decided to try to implement some of it in Python.
Even though I am still not that practised with Python classes, I am beginning to experiment with classes for different purposes. As you know, a class is a sort of template: a collection of data and methods (functions inside a class are called methods) which can be called on instances of the class.
In this example, I created two classes: a class for points and a class for circumferences.
The modules I used are matplotlib and numpy. You can check on my Python page for the link to download these modules or simply google them.
The Point class is really basic, well, not the most basic class you could create but pretty basic.
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __repr__(self):
        return "(%s,%s)" % (round(self.x, 2), round(self.y, 2))
The __init__ method initializes the instance. After you have defined an instance of the Point class, as below
p = Point(0,0)
every time you need to know the x coordinate, you can simply type in
p.x
and Python will print the value of x for the instance (the point) p.
The __repr__ method gives you the freedom to decide how to represent (display) your instance when it is called. For example, in this case, when I type in the command line
p
The interpreter will show (0,0), because that is the representation of the point p I told it to display through __repr__
The screenshots below show the results
Now that we have our Point class, let's skip to fancier stuff :)
For the Circle class, pyplot and numpy are needed, so we have to import them. Note that import statements usually go at the top, in the first lines of the script; however, in order not to confuse anyone, I put them here.
This is the Circle class:
#New class for circumferences
import matplotlib.pyplot as plt
import numpy as np

class Circle(object):
    def __init__(self, point, r):
        self.point = point
        self.xc = point.x
        self.yc = point.y
        self.r = r
        # coefficients of the canonical form x^2 + y^2 + ax + by + c = 0
        self.a = -2*self.xc
        self.b = -2*self.yc
        self.c = self.xc**2 + self.yc**2 - self.r**2

    def check_if_circle(self, x, y):
        # True if the point (x, y) lies on the circumference
        return round(x**2 + y**2 + self.a*x + self.b*y + self.c, 1) == 0

    def __repr__(self):
        # build the equation string, skipping the x and y terms when a or b is 0
        equation = "x^2 + y^2"
        if self.a != 0:
            equation += " %+gx" % self.a
        if self.b != 0:
            equation += " %+gy" % self.b
        equation += " %+g = 0" % self.c
        return ("The circle has the centre in (%s, %s) and has a radius of %d.\n"
                "The equation is:\n%s"
                % (round(self.xc, 2), round(self.yc, 2), self.r, equation))

    def area(self):
        return np.pi * self.r**2

    def area_sphere(self):
        return 4 * np.pi * self.r**2

    def volume_sphere(self):
        # use a float ratio so this is also correct under Python 2's integer division
        return (4.0/3.0) * np.pi * self.r**3

    def plot_crf(self):
        # the circle is not a function, so draw the upper and lower semicircles
        def semi_circle_positive(x):
            return self.yc + np.sqrt(self.r**2 - (x - self.xc)**2)

        def semi_circle_negative(x):
            return self.yc - np.sqrt(self.r**2 - (x - self.xc)**2)

        z = np.linspace(self.xc - self.r, self.xc + self.r, 1000)
        plt.plot(z, semi_circle_positive(z))
        plt.plot(z, semi_circle_negative(z))
        plt.show()

#define an instance, a circumference of centre p and radius 2
C = Circle(p, 2)
The __repr__ method is slightly fancier, in order to display the equation of the circumference in its canonical form according to the values of a, b and c. Remember that a = -2x_c, b = -2y_c and c = x_c^2 + y_c^2 - r^2.
After this class has been defined, you can call all its methods on C and take a look at the results.
This is particularly interesting if you are actually doing your homework on analytic geometry and you want to develop your programming skills at the same time. Every geometrical figure has its own characteristics and properties. Classes are the perfect tool to collect data which is characterized by a common denominator.
Here you can find a screenshot of the methods in the Circle class
and finally the most interesting method: the graphical representation of the circumference:
C.plot_crf()
and the result:
Note that, since the equation of the circumference is not a function, I had to plot the two semicircles. The graphical result still looks good though.
Alternatively, you could have used polar coordinates, and the code would have shrunk considerably.
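A minimal sketch of that polar (parametric) alternative, assuming the same centre and radius as C above (the plotting call is left as a comment so the snippet stays self-contained):

```python
import numpy as np

# Parametric form of the circle: one curve, no square roots, no branch splitting
xc, yc, r = 0.0, 0.0, 2.0            # centre (0, 0) and radius 2, as for C above
t = np.linspace(0, 2 * np.pi, 1000)  # angle parameter
x = xc + r * np.cos(t)
y = yc + r * np.sin(t)
# import matplotlib.pyplot as plt; plt.plot(x, y); plt.axis("equal"); plt.show()
```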
http://www.statemaster.com/encyclopedia/Marginal-distribution
# Marginal distribution
In probability theory, given two jointly distributed random variables X and Y, the marginal distribution of X is simply the probability distribution of X ignoring information about Y, typically calculated by summing or integrating the joint probability distribution over Y.
For discrete random variables, the marginal probability mass function can be written as Pr(X = x). This is
$$\Pr(X=x) = \sum_{y} \Pr(X=x, Y=y) = \sum_{y} \Pr(X=x \mid Y=y) \Pr(Y=y),$$
where Pr(X = x, Y = y) is the joint distribution of X and Y, while Pr(X = x|Y = y) is the conditional distribution of X given Y.
Similarly for continuous random variables, the marginal probability density function can be written as pX(x). This is
$$p_{X}(x) = \int_y p_{X,Y}(x,y) \, dy = \int_y p_{X \mid Y}(x \mid y) \, p_Y(y) \, dy,$$
where pX,Y(x,y) gives the joint distribution of X and Y, while pX|Y(x|y) gives the conditional distribution for X given Y.
Why the name 'marginal'? One explanation is to imagine p(x,y) in a 2D table such as a spreadsheet. The marginals are obtained by summing the columns (or rows); the sums would then be written in the margin of the table, i.e. in the row or column at the edge.
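The margin-of-the-table picture translates directly into code; a sketch with a hypothetical joint pmf:

```python
import numpy as np

# Hypothetical joint pmf Pr(X=x, Y=y): rows index X, columns index Y
joint = np.array([[0.10, 0.20],
                  [0.15, 0.25],
                  [0.05, 0.25]])

p_x = joint.sum(axis=1)  # marginal of X: sum each row into the side margin
p_y = joint.sum(axis=0)  # marginal of Y: sum each column into the bottom margin

print(p_x)  # [0.3 0.4 0.3]
print(p_y)  # [0.3 0.7]
```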
http://lees2014.univ-paris-diderot.fr/program.html
# Program
Sun, 29th Mon, 30th Tue, 1st Wed, 2nd Thu, 3rd Fri, 4th
8:45 Welcome
Sacuto et al.
Mo1: 9:00-10:50
Pseudogap I
Chair: Bourges
Norman; Greven; Varma; Yakovenko; Mangin-Thro; Sakai
Tu1: 9:00-10:45
Pseudogap II
Chair: Norman
Chubukov; Taillefer; Pépin; Cyr-Choinière; Mansart; Cilento
We1: 9:00-10:45
Graphene
Chair: Hoffman
Basov; Fuchs; Orlita; Boris; Kuzmenko
9:00-10:40
URu2Si2
Chair: Dressel
Blumberg; Coleman; Timusk; Maslov
Fr1: 9:00-10:45
Pnictides & Heavy Fermions
Chair: Benfatto
Hackl; Gallais; Paul; Hall; Okamura
10:50 - Break 10:45 - Break 10:45 - Break 10:40 - Break 10:45 - Break
Mo2: 11:15-12:45
Multiferroics
Chair: Lobo
Park; de Brion; Petzelt; Nagel; Chaix
Tu2: 11:15-12:45
Electromagnons
Chair: Petzelt
We2: 11:15-12:45
Fermi Liquids
Chair: Timusk
van der Marel; Georges; Dressel; Scheffler
Th2: 11:10-12:45
Top. Insulators II
Chair: Armitage
Fr2: 11:15-12:35
Heterostructures
Chair: Gervais
Perez; Todorov; Bergeal; Perucchi
12:45 - Lunch 12:45 - Lunch 12:45
Lunch box
Excursion
12:45 - Lunch 12:45 - Lunch
14:00
Exhibit
14:00
Exhibit
14:00
Exhibit
Mo3: 15:15-16:50
Top. Insulators I
Chair: van der Marel
Lupi; Perfetti; Akrap; Autore; Wu; Kalinko
Tu3: 15:00-16:50
Methods
Chair: Roy
Hillenbrand; Shen; Damascelli; Tanner; Martin
Th3: 15:15-16:50
BCS & CDW
Chair: Tanner
Shimano; Carr; Méasson; Benfatto; Wang
16:50 - Break 16:50 - Break 16:50 - Break
17:00-19:00
Registration
Mo4: 17:20-18:50
Optical Cond.
Chair: Tajima
Homes; Armitage; Degiorgi; Janod
Tu4: 17:20-18:50
Cuprates CDW
Chair: Greven
Blackburn; Le Tacon; Proust; Comin
Th4: 17:20-18:50
Fluctuations
Chair: Damascelli
Giannetti; Hinton; Tajima; Munzar
19:00
Welcome
Buffet
19:00
Poster
Exhibit
Buffet
MagSurf
19:30
Poster
Buffet
MagSurf
# Monday, June 30th
## Monday 9:00-10:50
### Session Mo1: Pseudogap I
#### Chair: P. Bourges
• 9:00-9:25 - M.R. Norman -- What is the pseudogap?
A bewildering array of phenomena occur in underdoped cuprates, including superconductivity, nematicity, charge order, spin order, and (perhaps) orbital currents. Often lost in this discussion of the pseudogap phase is what the large energy gap seen by spectroscopic probes is actually due to. Here, I will discuss the pros and cons of the various scenarios that address the origin of this gap, and implications this has for the physics of the cuprates.
• 9:25-9:50 - M. Greven -- New insights into the cuprate phase diagram from transport, X-ray and neutron scattering studies of HgBa2CuO4+δ
We will review our extensive efforts to understand the properties of the tetragonal cuprate superconductor HgBa2CuO4+δ, with particular focus on recent charge transport, resonant X-ray scattering and neutron scattering experiments that reveal Fermi-liquid behavior, charge-density-wave correlations and an unusual magnetic response below optimal doping. The comparison with the properties of cuprates that feature a higher degree of disorder and/or lower structural symmetry leads to profound new insights into the phase diagram of the cuprates.
• 9:50-10:15 - C. M. Varma -- Some Constraints put by Experiments on Theories of High Temperature Superconductivity in Cuprates
There is widespread agreement now that a phase competes with superconductivity in the under-doped region of the cuprates. The scattering from its associated quantum-critical fluctuations determines both the strange-metal properties and the high-temperature superconductivity.
I will use ARPES measurements to raise an important issue: the inelastic scattering in the normal state is independent of angle on the Fermi surface, while scattering with maxima at ±90 degrees is required for d-wave superconductivity.
I will also ask why the phase transition to the under-doped phase is unobserved in the specific heat measurements, and what related sound-velocity measurements have to say on this issue.
• 10:15-10:30 - V.M. Yakovenko -- Possible spiral structure in the pseudogap phase of cuprates
We propose a novel chiral order parameter to explain recent experimental developments concerning the polar Kerr effect in underdoped cuprates. Originally, experimental observation of the polar Kerr effect in the pseudogap phase of cuprates was interpreted as a signature of a ferromagnetic-like time-reversal symmetry breaking. However, more recent measurements suggest that this interpretation is not correct, and the polar Kerr effect is more likely to originate from a chiral and inversion symmetry breaking, which results in handedness and spatial dispersion with gyrotropy. Our theoretical scenario is based on the loop-current model by Varma, which is characterized by the in-plane anapole moment N and exhibits the magnetoelectric effect. We propose a helical structure where the vector N(n) in the layer n is twisted by the angle π/2 relative to N(n-1), thus breaking inversion symmetry. We show that coupling between magnetoelectric terms in the neighboring layers for this structure produces optical gyrotropy, which results in circular and linear dichroism and the polar Kerr effect. Magnetic field lines produced by the loop currents get twisted and tilted in a double-helix manner reminiscent of DNA, which is consistent with tilted intra-unit-cell magnetic moments observed in neutron scattering experiments. The chiral order parameter requires only local correlations between neighboring layers and does not imply long-range translational order and additional Bragg peaks. We also discuss consequences of this peculiar symmetry breaking for non-linear optical response, such as second-harmonic generation and appearance of dc current in response to ac electromagnetic field.
• 10:30-10:40 - L. Mangin-Thro -- Search for Intra-Unit-Cell magnetic order close to optimal doping in superconducting cuprates
After more than two decades, the origin of superconductivity in copper oxide materials is still a mystery. Several broken symmetry states could compete with superconductivity, but could also produce electronic or magnetic fluctuations potentially important for the understanding of the superconducting pairing mechanism and the anomalous electronic properties observed in the normal state. Using polarized neutron scattering in four cuprates families, including YBa2Cu3O6+x, the existence of a new magnetic order was reported. This order develops below the pseudo-gap (PG) transition temperature, Tmag. Combined with ultrasound measurements, it has been established that the PG is a broken symmetry state. This new magnetic phase could be induced by the staggered orbital magnetism within the unit cell as proposed in the loop current model for the PG state. This Intra-Unit-Cell (IUC) magnetic order indicates that time reversal symmetry is broken in the PG state, but translation invariance is preserved. The ordering temperature matches the hole doping dependence of the PG state and is likely to vanish around a quantum critical point (QCP) close to p~0.2. The existence of the IUC magnetic order is well documented in a wide hole doping range. However, its evolution at larger hole doping (in particular, upon approaching the QCP) had not yet been addressed.
Our study on YBa2Cu3O6.85 (Tc = 89 K, p = 0.15), performed on 4F1 (Orphée), revealed that the IUC order settles in below T ~ 200 K > Tc, proving the persistence of the IUC order close to optimal doping. Compared to samples with a lower hole doping level, the measured intensity at the Bragg position is reduced by a factor of 4, suggesting a shortening of the magnetic correlation length upon increasing hole doping. Interestingly, this scenario is also consistent with the fast decay of the magnetic intensity from the under-doped to optimally doped range in the Bi2Sr2CaCu2O8+δ compounds. In order to shed light on the evolution of the correlation length upon approaching the QCP, complementary measurements were carried out on D7 (ILL). By performing momentum scans across the Bragg reflection, we indeed report a shortening of the magnetic correlation length in YBa2Cu3O6.85: the persistence of the signal away from the Bragg peak at low temperature means that the scattering remains short range even at temperatures well below Tmag. Diffraction measurements also showed a critical slowing down of magnetic fluctuations on both sides of the transition temperature. Besides, by moving away from the Bragg reflection in momentum and in energy, we could observe for the first time low-energy magnetic excitations.
• 10:40-10:50 - S. Sakai -- Evidences of an s-wave structure of the pseudogap in high-Tc cuprate superconductors
It is established that the superconducting gap in high-Tc cuprates is dominantly d wave, distinct from conventional superconductors, which have an s-wave gap. A gap (the pseudogap) in the one-electron excitation spectra persists even above the superconducting critical temperature Tc, and its relationship with the superconducting gap has been a central issue in the search for the high-Tc superconducting mechanism. According to angle-resolved photoemission spectroscopy (ARPES), the symmetry of the pseudogap also looks like d wave. Based on this observation, a number of phenomenological theories have assumed a d-wave structure also for the pseudogap. However, since ARPES in practice observes only the occupied spectra, almost nothing is actually known about the spectrum (and the gap structure) above the Fermi level. We explore this dark (unoccupied) side of the excitation spectra by combining electronic Raman spectroscopy experiments and a cluster extension of the dynamical mean-field theory. Our result reveals an unprecedented s-wave structure of the pseudogap, whose energy location is strongly dependent on momentum: the gap opens around the Fermi level in the antinodal region while it resides above the Fermi level in the nodal region. This s-wave pseudogap structure, which is different from the s-wave gap in a standard sense, is compatible with the ARPES observations because the gap below the Fermi level is seemingly d-wave; the main difference is in the unoccupied spectra, which have been elusive in experiments. The s-wave pseudogap furthermore explains well the electron-hole asymmetry observed in recent ARPES and STM experiments. Our results thus suggest that the pseudogap is not smoothly connected to the superconducting gap, imposing a strong constraint on phenomenological theories of the pseudogap and high-Tc superconductivity in cuprates.
## Monday 11:15-12:45
### Session Mo2: Multiferroics
#### Chair: R.P.S.M. Lobo
• 11:15-11:40 - J.G. Park -- Low energy spin dynamics of multiferroic RMnO3 and BiFeO3
Multiferroic materials, in which magnetic and ferroelectric ground states coexist, have drawn significant attention in materials science over the past ten years or so. The underlying origin of this unusual behavior, in either naturally occurring materials or artificially synthesized thin films, has been a strong motivation for materials scientists to discover or rediscover new multiferroic materials.
Among a long list of multiferroic materials, the hexagonal manganites RMnO3 and BiFeO3 are arguably the two most interesting. In particular, BiFeO3 has been extensively investigated for potential applications by virtue of its magnetic and ferroelectric transitions occurring above room temperature: TN=650 K and TC=1050 K. Moreover, it has a very interesting incommensurate magnetic phase transition with an extremely long period of 650 Å. On the other hand, the hexagonal manganite materials exhibit a natural two-dimensional triangular lattice, which provides an interesting platform to investigate low dimensional magnetism with a supposedly strong coupling to the ferroelectric order parameter.
Over the past few years, there have been a multitude of studies of both compounds. Largely thanks to these extensive studies, we have now come to know the physical properties of the two materials in great detail. One of the most fruitful approaches to addressing some fundamental issues is to examine the spin dynamics. This exercise leads to a microscopic understanding of how the magnetic moment is coupled, if at all, to the ferroelectric degree of freedom. In this talk, I will present our results obtained from high resolution inelastic neutron scattering experiments.
• 11:40-12:05 - S. de Brion -- THz Magneto-Electric excitations in multiferroic compounds
Magneto-electric (ME) phenomena, involving cross coupling between electric and magnetic degrees of freedom, are attracting considerable attention. These ME couplings can also leave signatures on the elementary excitations that emerge from the ordered states that prevail. The first examples, the so-called electromagnons, were observed in several compounds, mainly multiferroics, by Terahertz (THz) and Far Infrared spectroscopies. They are now perceived as magnons dressed with electric charges, hence excitable via an electric field. Several microscopic mechanisms for these ME couplings have been proposed, which vary with the investigated material, and even within the same compound. Here we report on new mechanisms for ME excitations that we have evidenced by combining THz/FIR spectroscopies with inelastic neutron measurements.
• 12:05-12:20 - J. Petzelt -- Broadband dielectric spectroscopy of BaTiO3-BaZrO3 solid solutions
xBaZrO3-(1-x)BaTiO3 (BZT-x) solid solution is the most studied lead-free perovskite system, with properties changing from ferroelectric (x=0), over diffuse ferroelectric (x=0.1-0.2), relaxor ferroelectric (x=0.3-0.8), and dipolar glass (x>0.8), up to standard dielectric behaviour (x=1). We have studied the dielectric response over the whole composition range in the wide 100 Hz-100 THz frequency range down to 10 K using several spectroscopic techniques. The complex phonon behaviour in the far IR range is ascribed to an eigenvector crossover of the lowest-frequency TO1 mode. The relaxation in the 100 GHz range is assigned to the theoretically proposed (in acentric systems) anharmonic quasi-Debye losses. The main lower-frequency relaxation is thermally activated with a universal (for x=0.4-0.8) activation energy of ∼ 0.16 eV. It is assigned to local hopping of the off-centered Ti4+ ions in the frozen microscopic BaTiO3 clusters. It appears to be the first relaxor ferroelectric system with such simple dynamics of polar clusters. First results are published and submitted.
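As a reminder, a thermally activated relaxation of the kind quoted above follows the standard Arrhenius form (a sketch; τ₀ denotes the attempt time, and the activation energy is the one reported in the abstract):

```latex
% Arrhenius law for the thermally activated relaxation time
\tau(T) = \tau_0 \exp\!\left(\frac{E_a}{k_B T}\right),
\qquad E_a \approx 0.16~\mathrm{eV}
```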
• 12:20-12:35 - U. Nagel -- Room temperature toroidal moment in multiferroic BiFeO3 and its interaction with THz light
Excitations in multiferroics, electromagnons, can be used to control the polarization and intensity of THz light. Using far infrared spectroscopy we show that spin excitations in BiFeO3 simultaneously interact with the electric and magnetic field components of light because an applied static magnetic field induces a toroidal moment in the cycloidal spin structure. These toroidal excitations exhibit strong direction-dependent absorption even in the room-temperature state of the material.
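In standard notation, the toroidal moment of a spin system invoked above is conventionally defined, up to a normalization, as:

```latex
% Toroidal moment of a spin arrangement (standard definition)
\mathbf{T} \;\propto\; \sum_i \mathbf{r}_i \times \mathbf{S}_i
```

A nonzero T permits nonreciprocal (direction-dependent) absorption of light, consistent with the effect reported in the abstract.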
• 12:35-12:45 - L. Chaix -- Bunched modulation of the magnetic structures in chiral compounds: towards new magnetoelectric excitations
The langasite Ba3NbFe3Si2O14 has recently attracted a lot of attention due to its unique chiral (structural, magnetic and excitations) and multiferroic properties \cite{lc1,lc2,lc3,lc4}. The magnetic order of the Fe3+ magnetic moments is made of a 120° arrangement in triangles, helically propagated in the perpendicular direction. We report here new experimental evidence showing that this picture is incomplete. An anti-crossing of magnons leading to extinction is observed by inelastic neutron scattering. Moreover, higher order and forbidden weak magnetic and structural satellites have been evidenced using single crystal neutron diffraction measurements, with and without longitudinal polarization analysis. From a comparison with the Ba3TaFe3Si2O14 compound, which has the reversed structural chirality and where these experimental signatures are significant, I will show that these features can be explained by the presence of single-ion anisotropy combined with the DM interaction, and by the loss of the 3-fold axis. This results in a bunched modulation of the helical order. Moreover, the resulting symmetry lowering is a key parameter for understanding the new kind of magnetoelectric excitations observed in these two compounds by THz spectroscopy, as well as their multiferroic properties \cite{lc3,lc4}.
## Monday 15:15-16:50
### Session Mo3: Topological Insulators I
#### Chair: D. van der Marel
• 15:15-15:40 - S. Lupi -- Plasmonic Excitations in Topological Insulators
The quantized collective oscillations of ordinary massive electrons have long been basic ingredients of research in plasmonics and in optical metamaterials. Instead, plasmons of massless Dirac electrons have recently been observed in graphene, and their properties applied to new tunable metamaterials from the terahertz to the mid-infrared range. Apart from graphene, Dirac particles have also been observed in topological insulators (TIs). TIs are quantum electronic materials with an insulating gap in the bulk of spin-orbit origin, and metallic states at the interface with the vacuum or another dielectric. In this talk, we report on plasmonic excitations in Bi$_2$Se$_3$ topological insulator thin films patterned in different micro-structures. Propagating plasmons were observed in micro-ribbon arrays, and localized plasmons in micro-disks and rings. In all cases the plasmon excitations can be well described in terms of the electrodynamic properties of single-particle Dirac carriers. Moreover, the plasmon lineshape was found to be extremely robust vs. temperature, as expected for the excitations of topological carriers.
• 15:40-16:05 - L. Perfetti -- Dynamics of the electrons in surface states with strong spin-orbit coupling
The realization of transistors via the transport of spin polarized electrons has attracted the interest of the solid state community for 20 years. In such devices, an applied gate voltage induces a spin torque on the injected electrons via the spin-orbit (SO) interaction. The energy scale of this effect is typically 1-10 meV in semiconductor heterostructures, but reaches values 10 times larger at the surfaces of systems containing heavy elements. Therefore, the latter are considered valuable models for future spintronic applications. This cross-fertilizing field has recently been enriched by the discovery of protected edge states in topological insulators. A macroscopic spin injection in these surface states may be obtained by optically pumping the system with circularly polarized photons. Such a technique has been widely employed to optically orient the spins of bulk semiconductors and GaAs/AlAs quantum wells. Here the optical transition selection rules of the Bloch bands are inherited from their parent atomic orbitals. Alternatively, the spin polarization can be entangled to valley degrees of freedom by intercellular currents. The ternary compound BiTeI is a valuable example of a noncentrosymmetric material where this process should induce an optical spin orientation. Our time resolved detection of the electronic states out of equilibrium discloses unusual scattering mechanisms on the femtosecond time scale and the nanometric length scale. Since BiTeI holds large spin-orbit coupling, optical pumping with circularly polarized photons naturally produces a macroscopic spin orientation of the electronic states. We monitor the orbital polarization of the photoexcited system via the dichroic contrast generated by circularly polarized laser pulses on a photoelectron current. Interestingly, the dichroic contrast acquired 80 fs after photoexcitation is already different from the one expected from the point group symmetry of the crystal.
We deduce that the initial spin orientation decays on a time scale faster than the electronic thermalization. The observed dichroism has been ascribed to the transport of electrons out of the photoexcitation volume within the duration of the pump pulse. In the case of BiTeI, this ultrafast transport depends on the photon helicity because of the lack of spatial inversion symmetry. We conclude that the different transport of energy density is due to chiral scattering events of an electronic distribution that does not respect detailed balance conditions.
• 16:05-16:20 - A. Akrap -- Magnetooptical study of giant Rashba systems BiTeBr and BiTeCl
Layered bismuth chalcogenides BiTeX (where X=I, Br or Cl) host giant Rashba spin splitting, which makes these materials interesting for spin manipulation. We present a comparative study of the optical properties - reflectance, transmission and optical conductivity - and Raman spectra of BiTeBr and BiTeCl at 300 K and 5 K. Despite different space groups, the optical properties of the two compounds are very similar. Both materials are doped semiconductors, with the absorption edge above the optical gap, which is lower in BiTeBr (0.62 eV) than in BiTeCl (0.77 eV). The same Rashba splitting is observed in the two materials. A non-Drude free carrier contribution in the optical conductivity as well as three Raman and two infrared phonon modes are observed in each compound. There is a dramatic difference in the highest infrared phonon intensity between the two compounds, and a difference in the doping levels. Aspects of the strong electron-phonon interaction are identified. Several interband transitions are assigned, among them the low-lying absorption $\beta$, which has the same value of 0.25 eV in both compounds and is caused by the Rashba spin splitting of the conduction band. An additional weak transition is found in BiTeCl, caused by the lower crystal symmetry. We performed a detailed study of the reflectivity and Kerr effect of BiTeBr and BiTeCl up to a field of 7 T and in a wide energy range from 2 meV to 0.5 eV. We find a large Kerr effect in BiTeBr in the far infrared (with Kerr rotation up to 0.15 rad) and a smaller but qualitatively similar response in BiTeCl. The analysis of the BiTeBr Kerr data is done in terms of the Rashba split bands.
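For reference, the Rashba splitting discussed in this and the preceding abstract is conventionally described by the Bychkov-Rashba Hamiltonian for a 2D band (a sketch; the effective mass m* and the Rashba parameter α_R are material dependent):

```latex
% Bychkov-Rashba model: a free 2D band split into two branches
H = \frac{\hbar^{2} k_{\parallel}^{2}}{2m^{*}}
  + \alpha_R \left(\boldsymbol{\sigma}\times\mathbf{k}_{\parallel}\right)\cdot\hat{\mathbf{z}},
\qquad
E_{\pm}(k_{\parallel}) = \frac{\hbar^{2} k_{\parallel}^{2}}{2m^{*}} \pm \alpha_R k_{\parallel}
```

The "giant" Rashba effect in the BiTeX family corresponds to an unusually large α_R, i.e. a momentum splitting of the two branches large enough to be resolved optically.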
• 16:20-16:30 - M. Autore -- Dirac plasmonics of Topological Insulators
Recently, a new development in the field of plasmonics has been achieved by means of engineering plasmonic structures in Topological Insulators (TIs). Indeed, their peculiar properties, such as 2D intrinsic transport carried by Dirac fermions, very high surface density (n ∼ 10^13 cm^-2) compared to the typical values for metallic surfaces, backscattering protection, and robustness of the topological phase at room temperature, make them perfect candidates to develop and advance plasmon based technology. We performed a wide study of plasmonic excitations in topological insulator Bi2Se3-based devices in the THz range, using the Fourier Transform Infrared (FT-IR) spectroscopy technique. In particular, we investigated the dispersion of the resonances in thin film samples patterned with several shapes by means of Electron Beam Lithography (EBL), in collaboration with the Institute for Photonics and Nanotechnologies in Rome. A first study was performed on Bi2Se3 micro-ribbon arrays, enabling us to trace the energy-momentum dispersion of the plasmonic excitations, in good agreement with the relation expected for 2D gases, ω_p ∼ k^1/2. Most importantly, we found a very good quantitative agreement with that predicted for Dirac plasmons. We then investigated devices patterned with microring arrays, which are interesting because they exhibit two distinct resonances (bonding and antibonding), arising from the hybridization between the resonance of a disk and that of an anti-dot. The measured extinction coefficient has been compared to an ab initio analytical model for plasmonic resonances, through a collaboration with the Institute of Photonic Sciences in Barcelona. The comparison gives an impressive agreement, given the fact that the model has no free parameters, thus opening the way for further prediction and tailoring of plasmonic devices in the THz range.
Finally, we studied the excitations of Bi2Se3 micro-disk arrays, and we intend to perform transmission measurements under the effect of a strong magnetic field (up to 30 T) perpendicular to the surface, expecting to observe and characterize the resonance splitting due to the excitation of magnetoplasmons. Moreover, we studied the quantum phase transition (QPT) from a topological insulator to a conventional band insulator exploiting the plasmonic properties of micro-ribbon arrays of (Bi1-xInx)2Se3. Under the effect of In-Bi substitution, this material undergoes a QPT around x=0.05, as recently observed by means of ARPES and time domain spectroscopy measurements. The study of plasmon dispersion and width as a function of doping represents a first study of the behavior of collective excitations through the QPT.
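As a reminder, the square-root dispersion quoted above is the standard long-wavelength (RPA) result for 2D plasmons; for Dirac carriers the density enters only through the Fermi level E_F (a sketch in SI units; ε̄ denotes the average dielectric constant of the surrounding media, and the numerical prefactor depends on the band degeneracy):

```latex
% 2D plasmon dispersion: omega_p scales as sqrt(q);
% for Dirac carriers n/m is replaced by a term proportional to E_F/hbar^2
\omega_p(q) \simeq
\sqrt{\frac{e^{2} E_F}{2\pi \varepsilon_0 \bar{\varepsilon}\,\hbar^{2}}\; q}
\;\;\propto\;\; q^{1/2}
```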
• 16:30-16:40 - Liang Wu -- A sudden collapse in the transport lifetime across the topological phase transition in (Bi1-xInx)2Se3
Topological insulators (TIs) are newly discovered states of matter with robust metallic surface states protected by the topological properties of the bulk wavefunctions. A quantum phase transition (QPT) from a TI to a conventional insulator and a change in topological class can only occur when the bulk band gap closes. In this work, we have utilized time-domain terahertz spectroscopy (TDTS) to investigate the low frequency conductance in (Bi1-xInx)2Se3 as we tune through this transition by indium substitution. Above certain substitution levels we observe a collapse in the transport lifetime that indicates the destruction of the topological phase. We associate this effect with the threshold where states from opposite surfaces hybridize. The substitution level of the threshold is thickness dependent and only asymptotically approaches the bulk limit x ≈ 0.06, where a maximum in the mid-infrared absorption is exhibited. This absorption can be identified with the bulk band gap closing and a change in topological class. The correlation length associated with the QPT appears as the evanescent length of the surface states. The observation of the thickness-dependent collapse of the transport lifetime shows the unusual role that finite size effects play in this topological QPT. The possibility of realizing a Weyl semi-metal near the quantum critical point (x ≈ 0.06) by applying a magnetic field will also be discussed. THz measurements on a new generation of Bi2Se3 samples with low bulk carrier concentration are also presented.
• 16:40-16:50 - A. Kalinko -- Low temperature and high pressure lattice dynamics study of alpha- and beta-SnWO4
Tin tungstate (SnWO4) exists in two polymorphs: the low-temperature orthorhombic alpha-phase (Pnna space group) with an indirect band gap Eg=1.64 eV, and the high-temperature cubic beta-phase (P213 space group) with a direct band gap Eg=2.6-2.7 eV. A diffusion-controlled phase transition from the alpha- to the beta-phase occurs upon heating under vacuum at about 670 °C, whereas the high-temperature beta-phase can be stabilized at room temperature by quenching from above 670 °C. The difference in band gaps is related to the atomic structure: while tin atoms are six-fold coordinated by oxygen atoms in both phases, tungsten atoms have WO6 octahedral coordination in alpha-SnWO4, but WO4 tetrahedral coordination in beta-SnWO4. The valence band in both phases is mainly composed of strongly interacting O 2p and Sn 5s states, whereas the conduction band is built up of W 5d and O 2p antibonding states with an admixture of Sn 5p states. These small band gaps and specific band structures allow the use of SnWO4 in photocatalysis and gas sensing. Only a few studies of both SnWO4 phases are available in the literature: their lattice dynamics has been probed by Raman and mid-infrared spectroscopies at room temperature and normal pressure. Nevertheless, their small band gap values and peculiar atomic structure suggest possible pressure-induced phase transitions. In the present study, the lattice dynamics of alpha- and beta-SnWO4 as a function of pressure (0-12 GPa) and in a wide temperature range (6-300 K) was investigated by Raman and synchrotron-based far/mid-infrared spectroscopies. The experimental results were supported by first-principles linear combination of atomic orbitals (LCAO) calculations. The alpha- and beta-phases of SnWO4 were synthesized using a solid-state reaction method by heating equimolar mixtures of SnO and WO3 under vacuum at 600 °C and 750 °C, respectively. X-ray powder diffraction was used to control the phase of the obtained samples.
Far (50-650 cm-1) and mid (550-8000 cm-1) infrared measurements were performed in transmission mode using synchrotron radiation combined with an IFS 125MR FTIR interferometer. The temperature and pressure control were realized using a closed-cycle He cryostat and a recently developed high-pressure setup based on a diamond anvil cell, respectively. The sample pressure was monitored using ruby luminescence. The high-pressure micro-Raman measurements were performed in backscattering geometry using a HORIBA Jobin Yvon iHR320 spectrometer and a 514.5 nm argon laser. Polyethylene and KBr pressure-transmitting media were used for the far- and mid-infrared experiments, respectively, whereas a methanol-ethanol mixture (4:1) was used for the Raman experiments. Far-infrared measurements in the low-temperature range (6-300 K) did not reveal any phase transitions in either alpha- or beta-SnWO4. The evolution of vibrational bands upon increasing temperature can be explained by thermal lattice expansion, leading to band broadening and a shift to lower frequencies. The main difference between the two phases is the 250 cm-1 wide gap in the phonon density of states of beta-SnWO4, separating the high-frequency stretching modes of the WO4^2- tetrahedra from the other modes. High-pressure Raman and infrared measurements of beta-SnWO4 indicate an almost linear increase of the frequencies of all phonon modes up to 10 GPa, except for those at about 270 and 800 cm-1, which show an anomalous pressure dependence: a negative shift upon applying pressure. The amorphisation of beta-SnWO4, giving rise to broad phonon structures, occurs above 10 GPa, and the amorphous phase remains stable after releasing the pressure. In contrast, the phonon modes of alpha-SnWO4 are only broadened with increasing pressure in this range, while a large shift of 25 cm-1 for the highest W-O stretching mode at 777 cm-1 appears in the Raman spectrum, indicative of a distortion of the WO6 octahedra upon lattice contraction.
First-principles LCAO simulations predict changes in the structure and electronic properties of alpha- and beta-SnWO4 as pressure increases up to 16 GPa. They suggest that the band gap is reduced upon increasing pressure in both phases, leading to an insulator-to-metal transition for alpha-SnWO4 only. Evidence of this transition was observed experimentally in the mid-infrared region (800-1600 cm-1), where an abrupt decrease of the transmission was detected in the 5-7 GPa pressure range in alpha-SnWO4.
## Monday 17:20-18:50
### Session Mo4: Optical Conductivity of Superconductors and MIT
#### Chair: S. Tajima
• 17:20-17:45 - C.C. Homes -- Optical properties of iron-based conductors and superconductors
In the high-temperature cuprate superconductors, only a single band is observed at the Fermi level; as a result the optical conductivity may be modeled using a single free-carrier component. In a simple metal the Drude model is usually sufficient; however, electronic correlations and electron-boson coupling in the cuprates require a more generalized form in which the scattering rate and the effective mass are both frequency dependent. The iron-based conductors and superconductors are multiband materials with several bands crossing the Fermi level, resulting in multiple hole and electron pockets at the center and corners of the Brillouin zone, respectively. The presence of multiple bands requires, at a minimum, a "two-Drude" model in which the electron and hole pockets are treated as separate contributions. In general, the two-Drude approach reveals: (i) a strong component associated with the hole pocket, with a large scattering rate (nearly incoherent transport) that is essentially temperature independent; (ii) a weaker component associated with the electron pocket, whose scattering rate has a strong temperature dependence. Some recent results using this approach in the pnictide materials BaFe2As2 (TN ∼ 138 K) and Ba0.6K0.4Fe2As2 (Tc ∼ 38 K) will be discussed, as well as some preliminary findings in the iron-chalcogenide systems Fe1+δTe (TN ∼ 68 K) and FeTe0.55Se0.45 (Tc ∼ 14 K).
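As a reminder, a "two-Drude" fit of the kind described above decomposes the optical conductivity into two independent free-carrier terms plus interband contributions (standard SI form; ω_{p,j} and 1/τ_j are the plasma frequency and scattering rate of the hole (j=1) and electron (j=2) channels):

```latex
% Two-Drude decomposition of the complex optical conductivity
\sigma(\omega) \;=\; \sum_{j=1}^{2}
\frac{\varepsilon_0\,\omega_{p,j}^{2}}{1/\tau_j - i\omega}
\;+\; \sigma_{\mathrm{interband}}(\omega)
```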
• 17:45-18:10 - N.P. Armitage -- Optical Birefringence and Dichroism of Cuprate Superconductors in the THz regime
The presence of optical polarization anisotropies, such as Faraday/Kerr effects, linear birefringence, and magnetoelectric birefringence, is evidence for broken-symmetry states of matter. The recent discovery of a Kerr effect using near-IR light in the pseudogap phase of the cuprates can be regarded as strong evidence for spontaneous symmetry breaking and the existence of an anomalous long-range ordered state. In this work we present a high precision study of the polarimetry properties of the cuprates in the THz regime. While no Faraday effect was found in this frequency range to the limits of our experimental uncertainty (1.3 milliradian or 0.07°), a small but significant polarization rotation was detected that derives from an anomalous linear dichroism. In YBa2Cu3Oy the effect has a temperature onset that mirrors the pseudogap temperature T* and is enhanced in magnitude in underdoped samples. In x=1/8 La2-xBaxCuO4, the effect onsets above room temperature, but shows a dramatic enhancement near a temperature scale known to be associated with spin and charge ordered states. These features are consistent with a loss of both C4 rotation and mirror symmetry in the electronic structure of the CuO2 planes in the pseudogap state.
• 18:10-18:35 - L. Degiorgi -- Hysteretic behavior in the optical response of the underdoped Fe-arsenide Ba(Fe1-xCox)2As2 in the electronic nematic phase
The tetragonal-to-orthorhombic structural phase transition at TS, coincident with or preceding the onset of an antiferromagnetic ground state at TN in the underdoped regime of nearly all families of iron-pnictide and chalcogenide superconductors, breaks the four-fold rotational symmetry of the tetragonal phase, implying the onset of a nematic phase. The relevance of nematicity, either electronic in nature or spin-induced, in shaping their phase diagram is certainly one of the most debated issues nowadays. We report on an optical reflectivity study of Ba(Fe1-xCox)2As2 with x = 0 and 2.5%, detwinned by uniaxial and in-situ tunable pressure acting as an external symmetry-breaking field. We discover a remarkable optical anisotropy as a function of the applied pressure, very much reminiscent of hysteretic behavior. Its temperature dependence supports the analogy between pressure and an external magnetic field with respect to the electronic anisotropy in iron-pnictides and the magnetization in ferromagnets, respectively. We estimate the nematic susceptibility, which is Curie-like at temperatures close to and above TS, and which may hint at ferro-orbital ordering as the driving mechanism for both the structural and magnetic transitions.
• 18:35-18:50 - E. Janod -- Optical Conductivity Measurements of GaTa4Se8 Under High Pressure: Evidence of a Bandwidth-Controlled Insulator-to-Metal Mott Transition
Mott insulators represent a large class of materials with half-filled d or f orbitals that should be metallic according to conventional band theory, but are actually insulators thanks to the onsite electron-electron (Coulomb) repulsion U. The theoretical description of the Mott insulating state has been a long-standing problem [1,2], and only modern approaches such as the dynamical mean field theory (DMFT) have successfully predicted the whole phase diagram (U/D, T/D) of this class of materials (where D represents the half bandwidth) [3]. An interesting characteristic of Mott insulators is that external perturbations, such as chemical or physical external pressure, may provoke insulator-to-metal transitions (IMTs) [4].
Recently, a new family of insulators, the lacunar spinel chalcogenides AM4Q8 (A = Ga, Ge; M = V, Nb, Ta; Q = S, Se) has attracted attention because of their surprising electronic properties emerging in the vicinity of metal-insulator transitions. As an example, GaTa4Se8 undergoes an insulator-metal-superconductor transition under pressure [5,6]. Moreover, a resistive switching induced by electric pulses was discovered in these compounds, making these materials promising for RRAM applications [7,8,9,10]. It was argued that these compounds belong to a new class of Mott insulators where the relevant entity for electronic correlation is a cluster of transition metals M4 rather than a single atomic site as usually encountered, and that their striking resistive switching properties are related to a Mott IMT [11,12].
In this work [13], we show that GaTa4Se8 behaves as a canonical Mott insulator. Our study of the optical conductivity under high pressure, combined with band structure calculations, indeed indicates that this compound undergoes an IMT through a crossover region, in remarkable agreement with the (U/D, T/D) phase diagram predicted by DMFT. Moreover, our measurements unravel a three-component optical conductivity, including a narrow low energy contribution related to the quasiparticle peak theoretically expected on the metallic side of the Mott metal-insulator transition.
To our knowledge, this is the first time that such a successful comparison between experimental and theoretical (U/D, T/D) values has been obtained using optical-conductivity measurements under pressure. This may result from several unique aspects of GaTa4Se8, which include (i) the high connectivity (n = 12 first neighbors) of its fcc lattice, particularly well suited for a comparison with DMFT, an approach based on infinite connectivity (n = ∞), and (ii) the absence of disorder due to chemical substitution, which plays a role in the prototypical Mott insulators considered so far, in particular (V1-xCrx)2O3 and NiS2-xSex. In this context, GaTa4Se8 could become the new archetypal Mott insulator, ideally suited for future in-depth comparisons between experiments and theory.
# Tuesday, July 1st
## Tuesday 9:00-10:45
### Session Tu1: Pseudogap II
#### Chair: M.R. Norman
• 9:00-9:25 - A. Chubukov -- Superconductivity, charge order, and pseudogap phase in hole-doped cuprates
I discuss the interplay between superconductivity and charge order within the magnetic scenario for hole-doped cuprates. I show that the magnetically-mediated interaction, which is known to give rise to d-wave superconductivity, is also attractive in the charge-density-wave channel with diagonal momenta Qx=(2Q,0) and Qy=(0,2Q), as seen in the experiments. I show that the emerging charge order with Qx/Qy is of stripe type (it appears with Qx or Qy, but not with both). I further show that a stripe charge order parameter has two components: one is an incommensurate density variation, the other is an incommensurate current. Such an order breaks time-reversal symmetry and generates an oscillating magnetic field. I show that, before CDW order develops, the system develops a pre-emptive composite order, in which the two CDW components form a bound state. Such an order does not break the U(1) symmetry associated with the CDW, but it breaks time-reversal symmetry. I also show that CDW order develops in parallel with pair-density-wave order in the particle-particle channel. I hope that this theory provides the "missing link" in the spin-fluctuation scenario for the cuprates.
• 9:25-9:50 - L. Taillefer -- The three phase diagrams of cuprate superconductors
I present three phase diagrams for superconductivity in the high-Tc cuprate YBCO. For the temperature-doping (T-p) phase diagram, with its characteristic Tc dome, important new information has emerged since the discovery of quantum oscillations and a negative Hall coefficient showed that the Fermi surface undergoes a reconstruction in the underdoped regime. This reconstruction is caused by the onset of charge order, and was shown to be universal in cuprates, occurring in YBCO, Eu-LSCO and Hg-1201. Its onset coincides with the fall of Tc below its maximal value -- so it is responsible for shaping the Tc dome. We used thermal conductivity to directly detect the upper critical field Hc2 and map out the field-temperature (H-T) phase diagram, revealing that in the T = 0 limit Hc2(T) becomes equal to the resistive critical field Hvs(T) at which the vortex solid melts. In other words, there is no vortex liquid at T = 0. Finally, I present the full (H-p) phase diagram of Hc2 vs. doping, using data from both YBCO and Tl-2201. It shows two peaks. Below the upper peak, located at p = 0.18, the condensation energy drops by a factor of 20. We attribute this dramatic drop to the mechanisms that cause Fermi-surface reconstruction, namely charge order and pseudogap formation.
• 9:50-10:15 - C. Pépin -- Charge ordering around a Quantum Critical Point in cuprate superconductors
Recent experiments on cuprate superconductors have unveiled the existence of charge ordering in the underdoped regime, inside the pseudogap phase. This new order, first observed by Nuclear Magnetic Resonance and Scanning Tunneling Microscopy, was confirmed through quantum oscillation measurements pointing to a Fermi surface reconstruction, and through soft and hard X-ray scattering. The phase diagram of the cuprates is thus becoming more and more complex, with the hope that the charge order will shed light on the still mysterious pseudogap regime. In our theoretical investigation, we address this issue starting from a minimal model where commensurate antiferromagnetic fluctuations become increasingly strong around a QCP. A recent solution of the critical region around the QCP showed the emergence of a pseudogap with an unusual composite SU(2) symmetry linking a Quadrupolar Density Wave (QDW) and pairing (SC) fluctuations. The issue of charge ordering and its relation to experiments will be discussed in the framework of these new theoretical findings.
• 10:15-10:25 - O. Cyr-Choinière -- Anisotropy of the thermoelectric response in the pseudogap phase of the cuprate superconductor YBa2Cu3Oy
We recently reported evidence of a broken rotational symmetry in the pseudogap phase of the cuprate superconductor YBa2Cu3Oy. This broken symmetry was inferred from the onset of a large in-plane anisotropy of the Nernst signal N below the pseudogap temperature T*, attributed to an anisotropy in the longitudinal coefficients: the electrical conductivity σ and/or the Seebeck coefficient S. It was pointed out that an anisotropy in N could also come from an anisotropy in the transverse coefficients: the Hall conductivity σxy and/or the off-diagonal Peltier conductivity αxy. Here we report a complete study of the anisotropy of all transport coefficients in a single crystal of underdoped YBa2Cu3Oy with a hole doping p = 0.12. The measurements were performed first with the sample length along the b-axis direction and then along the a-axis direction of the orthorhombic crystal structure, the change being achieved by rotating the CuO chain direction via detwinning. We therefore extract the anisotropy of the transport coefficients without uncertainty from geometric factors or sample variation. We disentangle the various contributions to the anisotropy of N, and find that there is a strong anisotropy in both the longitudinal and transverse thermoelectric coefficients, i.e. in both S and αxy. We discuss the implications for our understanding of the pseudogap phase and the Fermi-surface reconstruction, attributed to charge order.
• 10:25-10:35 - B. Mansart -- Real-time observation of structural and electronic degrees of freedom in high Tc superconductors
Unraveling the complex interplay between electronic and lattice degrees of freedom is one of the keys towards understanding the unconventional superconductivity mechanism in cuprates. In these complex systems, the applicability of conventional pairing theories, based on retarded interactions between electrons mediated by low-energy glue bosons, has been doubted, and a completely different framework has been proposed involving non-retarded interactions associated with high-energy electronic scales.
Electron-phonon interactions, leading to superconductivity in conventional systems, are accessible by ultrafast pump-probe techniques, which make it possible to distinguish thermally activated phonons from those emitted by conduction-band electrons. Separating the electronic from the out-of-equilibrium lattice subsystems, we probed their re-equilibration by monitoring the transient lattice temperature through femtosecond X-ray diffraction in La2-xSrxCuO4 single crystals with x=0.1 and 0.21. The temperature dependence of the electron-phonon coupling is obtained experimentally and shows similar trends to what is expected from the ab initio calculated shape of the electronic density of states near the Fermi energy. This study evidences the important role of band effects in the electron-lattice interaction in solids, in particular in superconductors.
Time-resolved spectroscopies also allow the coherent excitation and real-time observation of atomic motions and elementary electronic excitations. We performed broadband pump-probe optical spectroscopy measurements in La2-xSrxCuO4 (x=0.15), where a polarized ultrafast laser pulse excites the superconductor through the Impulsive Stimulated Raman Scattering (ISRS) effect. The coherent oscillations of the Cooper pair condensate are detected via delayed supercontinuum pulses and enable a new technique, Coherent Charge Fluctuation Spectroscopy (CCFS), which distinguishes the electronic excitations that couple to the superconducting quasiparticles. These results reveal strong resonance effects between the oscillating condensate and high-energy electronic transitions attributed to the Cu-O charge-transfer excitation, suggesting the importance of non-retarded interactions in the superconductivity mechanism of cuprates.
• 10:35-10:45 - F. Cilento -- Photoinduced antinodal metallicity in the pseudogap state of high-Tc cuprates
A major challenge in understanding the cuprate superconductors is to clarify the nature of the fundamental electronic correlations that lead to the pseudogap phenomenon. We used ultrashort light pulses to prepare a non-thermal distribution of excitations, and we performed time-resolved broadband reflectivity measurements in order to capture novel properties that are hidden at equilibrium. Our framework unveils a universal pseudogap-like region in the temperature (T) and hole-doping (p) phase diagram, delimited by a well-defined T*neq(p) line. In this region the photoexcitation process leads to a quench of local correlations triggering the evolution of antinodal excitations from gapped (localized) to metallic (delocalized) quasi-particles characterized by a longer lifetime. This photoinduced antinodal metallicity finds a natural explanation in terms of the single-band Hubbard model, in which the short-range Coulomb repulsion leads to a k-space differentiation between “nodal” quasiparticles and antinodal excitations, whose self-energy diverges as in the insulating state.
## Tuesday 11:15-12:45
### Session Tu2: Electromagnons
#### Chair J. Petzelt
• 11:15-11:40 - J. Hlinka -- How many different indicators of direction in space can be distinguished by the space-time symmetry?
This talk aims to draw attention to the spatiotemporal symmetry of various vector-like physical quantities. It will be argued that along with the canonical polar vector, there are another 7 symmetrically distinct classes of stationary physical quantities ("direction indicators") that can be - and often are - denoted as standard three-component vectors, even though they do not transform quite as the static polar vector should. Many of them are familiar from discussions of excitations and order parameters of magnetically ordered crystals, but the concept is general and its applications go beyond the scope of magnetism and multiferroic physics. I shall try to demonstrate the result using both simple arguments and examples, as well as the concepts of group theory.
• 11:40-12:05 - A. Loidl -- Electromagnons
Electromagnons are generic excitations of multiferroics and can be characterized as spin waves carrying dipolar weight via a strong magnetoelectric coupling. In many cases, in the magnetically ordered phases, optical weight is transferred via magnetoelectric coupling from low-lying optical phonons to magnons. Electromagnon excitations were theoretically predicted more than 40 years ago but were only recently observed experimentally in spin-driven multiferroic rare-earth manganites. Theoretical models to explain the character and nature of electromagnons include purely electronic processes like spin currents, and small lattice distortions via an inverse Dzyaloshinskii-Moriya interaction or Heisenberg exchange coupling.
In this talk I will discuss and review the recent status of experiments and theoretical modeling of the dynamics of multiferroics, reporting on recent experimental observations of magnetoelectric excitations in different classes of multiferroics, like in pure and mixed multiferroic rare earth manganites, or in triangular lattice antiferromagnets like calcium chromate.
Here I will focus on and raise a number of so far unsolved and open questions: (i) What are the important ingredients of electromagnons? How can one explain the typical twin-peak structure of multiferroic manganites? (ii) How can electromagnons be distinguished from multimagnon excitations and AFM resonances? (iii) Do different types of electromagnons exist, e.g. driven by spin currents or by Heisenberg exchange, or possibly of other origin and nature? (iv) Do electromagnons exist also in the magnetic but paraelectric state? (v) Why is the electromagnon response often so broad and accessible throughout the Brillouin zone, while antiferromagnetic resonances are sharp and well defined?
Finally, concerning the word electromagnon, we advise caution: not every excitation below TN is an electromagnon!
• 12:05-12:20 - A. Pimenov -- Electric field control of terahertz polarization with electromagnon
The magnetoelectric effect in multiferroics has attracted much interest recently because of the possibility of modulating the electric polarization with an external magnetic field. A related effect, the control of magnetization by an electric voltage, is more difficult to realize experimentally but is strongly desirable, especially from the point of view of applications. A promising way to achieve this is to use novel magnetoelectric excitations (electromagnons) which determine the terahertz dynamics of several multiferroic systems. In the first part of the talk we discuss an all-electrical control of a dynamic magnetoelectric effect in a classical multiferroic manganite, DyMnO3, a material containing coupled antiferromagnetic and ferroelectric orders. Due to off-diagonal elements of the dynamic magnetoelectric susceptibility, linearly polarized terahertz light rotates upon passing through the sample. The amplitude and the direction of the polarization rotation are defined by the orientation of the ferroelectric domains and can be controlled by a static voltage. These results can be explained within the model of the dynamic magnetoelectric susceptibility of a cycloidal magnet with inverse Dzyaloshinskii-Moriya coupling.
In the second part of the talk recent results on the terahertz dynamics in (i) multiferroic borate SmFe3(BO3)4 with record values of magnetodielectric effect and (ii) in triangular lattice antiferromagnet CuCrO2 will be presented.
• 12:20-12:35 - S. Kamba -- Electromagnons in multiferroic CaMn7O12 and ε-Fe2O3
Spin waves are usually excited by the magnetic component of electromagnetic radiation and contribute to the magnetic susceptibility. In multiferroics, spin waves can also be excited by the electric component of the electromagnetic radiation; therefore, they contribute to the dielectric permittivity. Such electrically active excitations are activated due to the dynamic magnetoelectric effect and are called electromagnons.
The properties of electromagnons and the reasons for their activation in the THz permittivity spectra will be discussed for two different compounds, which will be compared with the canonical spin-induced ferroelectric TbMnO3. There, the ferroelectricity is induced by the inverse Dzyaloshinskii-Moriya interaction, but two electromagnons are activated by the magnetostriction. The first example is CaMn7O12, whose ferroelectric polarization is the highest among all spin-induced ferroelectrics. In this material, we observed three infrared-active excitations below 80 cm-1, which change their frequencies and split in an external magnetic field. Their intensities show remarkable anomalies at the magnetic phase transitions occurring at TN1 = 90 K and TN2 = 50 K. The frequencies of these absorption peaks correspond to the maxima in the magnon density of states obtained by inelastic neutron scattering. These facts provide evidence of the magnonic origin of these excitations. Furthermore, these magnons receive their strength from the polar phonons observed in the infrared spectra; therefore they must be electromagnons. Surprisingly, two of them persist in the paramagnetic phase, 100-150 K above TN1, due to short-range magnetic correlations which we observed by means of quasielastic neutron scattering.
Further, we will demonstrate that electromagnons are not restricted to spin-induced ferroelectrics. We have observed an electromagnon in nanograin ceramics of the ε phase of Fe2O3. Below 490 K, this material is a ferroelectric ferrimagnet. The infrared-active electromagnon activates in the spectra only below 110 K, at the same temperature at which the magnetic structure becomes incommensurately modulated. Inelastic neutron scattering spectra provide evidence of the magnetic character of the electromagnon. Simultaneously, they show that the electromagnon corresponds to a magnon from the Brillouin-zone boundary.
Finally we will show how, by combining infrared, THz and inelastic neutron scattering experiments, the electromagnons can be generally discerned from magnons or phonons. In ε-Fe2O3, this was achieved for the first time using ceramics, i.e. without using single crystals. In a broader perspective, the electromagnons are sensitive not only to the static external magnetic field, but also to the external electric field. This opens a promising route for electrically controlled magnonics in the THz region.
• 12:35-12:45 - P. Rovillain -- Iron borate multiferroics, a new way to observe the electromagnon?
In recent years, multiferroics have attracted much attention worldwide because of their large magnetoelectric effects. They open a myriad of possibilities for spintronics applications, such as tuning the polarization direction with a magnetic field and/or changing the magnetization direction via an applied voltage. A particularly exciting prospect in the field of spintronics is to use the wave-like excitations of a magnetic material as a means to transmit and process information.
We used infrared scattering to probe excitations such as phonons and electromagnons, which reflect the dynamical coupling between the spin and lattice degrees of freedom in rare-earth iron borates. These materials have interesting magnetic properties due to the subtle interactions between the rare-earth and iron moments. Among these materials, Nd, Nd0.75Dy0.25, Sm0.5Nd0.5 and DyFe3(BO3)4 are multiferroics or become multiferroics under magnetic field. The iron moments order antiferromagnetically below TN ≈ 36 K. For all of these compounds we observe a renormalisation of some phonons at the magnetic transition. DyFe3(BO3)4 shows a structural phase transition at 280 K, which we observe through the change in the number of phonon modes.
In the Nd, Nd0.75Dy0.25 and Sm0.5Nd0.5 compounds, a splitting of the lowest-frequency mode in the ab plane appears at low temperature. This splitting can be the signature of a change in the lattice around the magnetic transition, although no structural phase transition is expected in these compounds. Moreover, along the c direction the frequency of the lowest phonon mode decreases with temperature, and this phonon dominates the total dielectric function along the c axis. This is the typical behaviour of a soft phonon mode in a ferroelectric compound. These materials are therefore good candidates in which to observe the electromagnon.
## Tuesday 15:00-16:50
### Session Tu3: Methods
#### Chair: P. Roy
• 15:00-15:15 - M. Dressel -- Genzel Prize Ceremony
• 15:15-15:40 - (Genzel Prize) R. Hillenbrand -- Infrared nanoimaging and nanospectroscopy
With the development of scattering-type scanning near-field optical microscopy (s-SNOM), the analytical power of visible, infrared and THz imaging has been brought to the nanometer scale. The spatial resolution of about 10 - 20 nm opens a new era for modern nano-analytical applications such as chemical identification, free-carrier profiling and plasmonic vector near-field mapping. After a brief overview of fundamentals and applications of s-SNOM, recent achievements such as broadband infrared-spectroscopic mapping of polymers and proteins will be presented, as well as the launching and mapping of propagating and localized plasmons in graphene nanostructures.
• 15:40-16:05 - Z.X. Shen -- Domain Walls and Edge Structures in Quantum System -- Views from Scanning Microwave Impedance Microscope
Understanding and controlling local conductivity have been a cornerstone for important scientific breakthroughs and technological inventions, as exemplified by the transistor, the integrated circuit, Anderson localization, the quantum Hall effect and the fractional quantum Hall effect. In these cases, local visualization and control of doping, mobility, gating, dielectrics, heterostructures, and inter-diffusion are important. Microwave impedance microscopy provides a new platform to measure local electrical properties.
Microwaves have several inherent advantages. They are coherent, so both amplitude and phase information can be analyzed to gain quantitative insight. Their high frequency naturally leads to efficient capacitive coupling, so no contact is needed for electrical measurement. They offer much higher inherent contrast for electrical properties than optical microscopy, since the conductivity diverges for a metal but approaches zero for an insulator. However, they also have two disadvantages – relatively poor spatial resolution and stray-field coupling that compromises quantitative analysis.
In this talk, we will report our progress in developing a scalable (batch-processed and shielded-tip) non-resonance microwave impedance microscope that achieves a resolution of ~30-50 nm, approaching the interesting physics length scales: the localization and coherence lengths, the edge-state width of topologically ordered systems, etc. The non-resonance approach and the merger with the AFM platform also greatly reduce many of the “practical problems” that severely compromised advances of the earlier resonator-based scanning microwave microscopes, such as thermal drift, height control, and tip consistency – all critical for quantitative and repeatable measurements.
The highlight of this talk will be the physics insight we gain by applying this technique to investigate topological structures of novel quantum systems, including the edge state of topological order in quantum Hall states of semiconductors and graphene, as well as a rich hierarchy of domain wall physics in charge-ordered oxides.
• 16:05-16:20 - A. Damascelli -- Probing spin-orbital entanglement by polarization- and spin-resolved ARPES
Novel phases of matter, such as topological and relativistic-Mott insulating behavior, are being discovered as a result of spin-orbit coupling effects in solids. These emergent phenomena stem from the strong momentum-dependent spin-orbital entanglement of the low-energy electronic wavefunction. A complete microscopic description critically hinges on the availability of techniques that can probe the entangled spin and orbital quantum numbers, with momentum resolution. This can be achieved, as I will illustrate using Bi2Se3 and Sr2RuO4 as examples, by taking advantage of novel developments in angle-resolved photoemission spectroscopy (ARPES). I will show in particular how polarization- and spin-resolved ARPES can be used to reveal the layer-by-layer entangled spin-orbital texture of the topological surface state in Bi2Se3 , and how 3-dimensional photoelectron spin-polarization control can be achieved – via quantum interference – by varying experimental configuration and photon energy . Finally, I will show how spin-resolved ARPES with circular polarization can be used to measure the strength of the effective spin-orbit coupling in Sr2RuO4, and reveal the spin-orbit-induced breakdown of pure singlet and triplet Cooper pairing, necessitating a description of the superconductivity of Sr2RuO4 in terms of the newly found spin-orbital entangled eigenstates.
• 16:20-16:35 - D.B. Tanner -- Use of X-ray scattering factors for Kramers-Kronig high-frequency extensions
Kramers-Kronig analysis --- for the most part of reflectance data --- is commonly used to estimate the optical conductivity, dielectric function, sum rules, and other optical functions of new materials. The experimenter typically has data from the far infrared through the near ultraviolet, covering, say, 40 to 40,000 cm-1. This is a reasonably wide bandwidth, but of course the Kramers-Kronig integral extends from zero to infinity, so extrapolations need to be made outside the measured range. The high-frequency extrapolation is especially problematic and can cause significant distortions to the conductivity near the end of the measured range, with consequences for sum rules as well. The usual approach is a slow power law in 1/ω, transitioning to 1/ω^4 at a considerably higher frequency and continuing this free-carrier extension to infinity. The mid-range power law is adjusted to match the slope of the data and to give pleasing curves, but the choice of power (typically between 0.5 and 2) is arbitrary. Here, I will present an analysis using the X-ray atomic scattering functions presented by Henke and co-workers. These basically treat the solid as a linear combination of its atomic constituents and, knowing the chemical formula and the density, allow the computation of the dielectric function, reflectivity, and other functions. The "Henke reflectivity" can be used over 10 eV-30 keV, after which a 1/ω^4 continuation is perfectly fine. The bridge between experimental data and the Henke reflectivity, as well as two corrections that needed to be made to the latter, will be discussed.
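The extrapolation issue above can be made concrete with a toy numerical Kramers-Kronig transform. The sketch below is my own illustration, not the Henke-based procedure of the talk: it recovers ε1 from the ε2 of a model Lorentz oscillator "measured" only on the 40-40,000 cm-1 window quoted in the abstract, with a linear extension below and a power-law tail above the window standing in for the reflectance extrapolations discussed there (all model parameters are arbitrary).

```python
import numpy as np

def lorentz(w, wp=1000.0, w0=500.0, gamma=50.0):
    """Complex dielectric function of a single Lorentz oscillator (cm^-1 units)."""
    return 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)

def _trapz(y, x):
    """Trapezoidal rule on a non-uniform grid (avoids the NumPy 2.0 trapz rename)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def kk_eps1(w_meas, eps2_meas, n_tail=3, w_max=2.0e5):
    """Kramers-Kronig transform eps2 -> eps1 on a finite 'measured' window.

    Outside the window eps2 is extrapolated: linearly to zero below the
    window, and as a w**(-n_tail) power law above it -- the analogue of the
    power-law reflectance extensions described in the abstract.
    """
    w_lo = np.linspace(1e-3, w_meas[0] * 0.999, 400)
    e2_lo = eps2_meas[0] * w_lo / w_meas[0]            # eps2 ~ w at low frequency
    w_hi = np.geomspace(w_meas[-1] * 1.001, w_max, 4000)
    e2_hi = eps2_meas[-1] * (w_hi / w_meas[-1]) ** (-n_tail)
    w_all = np.concatenate([w_lo, w_meas, w_hi])
    e2_all = np.concatenate([e2_lo, eps2_meas, e2_hi])

    eps1 = np.empty_like(w_meas)
    for i, wi in enumerate(w_meas):
        # Subtract the pole value: the principal-value integral of the
        # subtracted constant over (0, inf) vanishes, so this regularizes
        # the singular denominator without changing the result.
        f = (w_all * e2_all - wi * eps2_meas[i]) / (w_all**2 - wi**2)
        f[w_all == wi] = 0.0                           # removable point
        eps1[i] = 1.0 + (2.0 / np.pi) * _trapz(f, w_all)
    return eps1

# "measure" eps2 on a 40-40,000 cm^-1 window and recover eps1 from it
w = np.geomspace(40.0, 40000.0, 8000)
eps = lorentz(w)
eps1_kk = kk_eps1(w, eps.imag)
```

Varying `n_tail` away from the oscillator's true asymptotic decay mimics the arbitrariness of the mid-range power law: the recovered ε1 is then distorted mostly near the upper end of the measured window, which is exactly where the abstract warns the conductivity suffers.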
• 16:35-16:50 - M.C. Martin -- Ultra-broadband Synchrotron Infrared Nano-spectroscopy
Characterizing and ultimately controlling the heterogeneity underlying quantum behavior of complex matter, photonic materials, or catalysis requires large-scale spectroscopic imaging with simultaneous specificity to structure, phase, and chemical composition at nanometer spatial resolution. However, as with any ultrahigh spatial resolution microscopy technique, the associated demand for an increase in both spatial and spectral bandwidth often leads to a decrease in the desired sensitivity. We overcome this limitation in infrared vibrational scattering-type scanning near-field optical microscopy (s-SNOM) using synchrotron mid-infrared radiation. Tip-enhanced localized light-matter interaction is induced by low-noise, broadband, and spatially coherent synchrotron light of high spectral irradiance, and the near-field signal is sensitively detected using heterodyne interferometric amplification. We achieve sub-40 nm spatially resolved, molecular and phonon vibrational spectroscopic imaging, with rapid spectral acquisition, spanning the full mid-infrared (700-5000 cm-1) with few-cm-1 spectral resolution. We demonstrate the performance of synchrotron infrared nanospectroscopy (SINS) on boron nitride, semiconductor, biomineral, and protein nanostructures, providing vibrational chemical imaging with subzeptomole sensitivity. With a spatial resolution 100-1000 times better than conventional FTIR microscopy, SINS enables the investigation of nanoscale phenomena in both hard- and soft-matter systems.
## Tuesday 17:20-18:50
### Session Tu4: Cuprates CDW
#### Chair: M. Greven
• 17:20-17:45 - E. Blackburn -- Charge density waves in underdoped YBCO
The notion of competing order is central to many theories of unconventional superconductivity, where superconductivity emerges on tuning between phases governed by much larger energy scales. An example of this is the widely discussed "stripe order" observed in some HTSs and related compounds. Recently, charge density waves (CDWs), as sampled by a range of different probes, have been shown to be a widespread competitor to superconductivity in YBCO and other high-Tc materials. These CDWs appear to originate in the CuO2 planes, but also affect the lattice dynamics. This talk will discuss these observations and their relationship with other experimental probes, and will report on new investigations of the energy-, wavevector- and temperature-dependences by both elastic and inelastic X-ray techniques.
• 17:45-18:10 - M. Le Tacon -- Competing orders in underdoped cuprates
I will present an overview of recent results on high-temperature superconducting cuprates obtained by various photon scattering experiments. The greatly enhanced sensitivity of resonant x-ray scattering to the valence electron system led us to the discovery of a fluctuating charge density wave (CDW) competing with the superconducting order at low doping levels. Direct evidence for the CDW was also obtained using inelastic scattering of visible light (Raman scattering). To gain more insight into the mechanism leading to the CDW formation, we have used high-resolution inelastic x-ray scattering to study low-energy phonons with wavevectors near the CDW ordering vector. We found that they exhibit large superconductivity-induced lineshape renormalizations, attributed to a strongly anisotropic electron-phonon interaction. This provides important insights regarding the long-standing debate on the role of this interaction, which is a major factor influencing the competition between collective instabilities in correlated-electron materials. Finally, combining resonant scattering studies on Bi-based compounds with surface-sensitive techniques (ARPES and STM) allows us to locate, in reciprocal space, the electrons involved in the CDW phenomenon. These studies suggest that the CDW might be driven by a nesting of the tips of the Fermi arcs.
• 18:10-18:35 - C. Proust -- Fermi surface reconstruction by charge order in underdoped copper oxides
Twenty-five years after the discovery of high-temperature cuprate superconductors, the observation of quantum oscillations has deeply changed the theoretical landscape relevant to these materials. Measurements of quantum oscillations on both sides of the phase diagram of cuprates show that the Fermi surface undergoes a drastic modification on the underdoped side. Indeed, the small Fermi pocket inferred from quantum oscillations, combined with the negative Hall and Seebeck coefficients pointing to an electron pocket, shows that the Fermi surface of underdoped YBa2Cu3Oy undergoes a reconstruction because the translational symmetry of its lattice is broken at low temperature.
In underdoped YBa2Cu3Oy, many studies, such as NMR measurements and x-ray scattering, point to a reconstruction of the Fermi surface due to charge order. After providing the context for charge order in underdoped cuprates, I will present transport and ultrasonic measurements in magnetic fields large enough to suppress superconductivity at low temperature in underdoped YBa2Cu3Oy and HgBa2CuO4+δ. These results point to a universal Fermi surface reconstruction at a critical doping in underdoped cuprates. I will also discuss Fermi surface reconstruction scenarios with the aim of providing information on the Fermi surface in the pseudogap phase.
• 18:35-18:50 - R. Comin -- Charge Order, Fermi-Arc Instability, and d-wave bond order in underdoped cuprates
The phase diagram of cuprates features a rich diversity of exotic electronic states characterized by broken symmetries which induce corresponding orders: antiferromagnetism (AFM), pseudogap (PG), and charge-density-wave (CDW). Recent experiments in YBCO have revived the debate around the importance and universality of a charge-ordered electronic ground state emerging in the normal phase above the critical temperature Tc. In our work, we investigated the origin and characteristics of charge-ordered states in single- (Bi2201) and bi-layer (YBCO) compounds, using a suite of complementary real- and momentum-space, and surface and bulk probes – resonant X-ray scattering (REXS), scanning tunnelling microscopy (STM), and angle-resolved photoemission spectroscopy (ARPES). By bringing together these techniques, we identify the connection between charge order and the Fermi arcs to occur via the hot spots, and detect an onset of charge modulations right below T*, thus pointing to an intimate relationship between the coexisting CDW and PG orders. In addition, we have also explored the local symmetry of the charge modulations, and reveal it to correspond to a d-wave bond order. This discovery implies that the same mechanism which drives particle-particle pairing is also active in the particle-hole channel, and suggests that the CDW and SC instabilities originate from the very same attractive interactions.
## Tuesday 19:30
### Dinner talk
#### Chair: A. Sacuto
J.C. Séamus Davis -- Visualizing (Electronic) Dragons
High-Tc superconductivity is found in strongly correlated (repulsive electron-electron interaction) systems exhibiting antiferromagnetic 'parent' states. Examples include the copper-based, iron-based and heavy-fermion superconductors. However, the terra incognita between the superconducting and antiferromagnetic phases often harbors very exotic states whose identification/explanation has proven extremely challenging. Motivation for the introduction of atomically resolved spectroscopic imaging STM [RSI 70, 1459 (1998)] came from my desire to directly visualize these electronic dragons. I will discuss the history of this technique and explain how it was used to first reveal the cuprate Q ≠ 0 density wave [Science 266, 455 (2002)] along with its detailed microscopic structure [Science 315, 1380 (2007)], the cuprate Q = 0 intra-unit-cell nematic state [Nature 466, 374 (2010)] and its intimate relation to the Q ≠ 0 density wave [K. Fujita et al., Science (2014)], and the pnictide Q = 0 nematic state [Science 327, 181 (2010)]. Time permitting, I will discuss a model conceptual framework within which to understand the relationship between AF electron-electron interactions, these exotic states, and the correlated SC [PNAS 110, 17623 (2013)].
# Wednesday, July 2nd
## Wednesday 9:00-10:45
### Session We1: Graphene
#### Chair: J.E. Hoffman
• 9:00-9:25 - D.N. Basov -- Nano-plasmonic phenomena in graphene
The term “plasmonics” often carries an applied connotation owing to remarkable successes in controlling and manipulating light at the nanoscale in artificial structures. Infrared nano-spectroscopy and nano-imaging experiments on graphene carried out in our group [Nano Letters 11, 4701 (2011), Nature Nano 8, 821 (2013)] have uncovered a rich variety of plasmonic effects that may enable functionalities not attainable through metal-based plasmonics. Applications aside, the nano-scale exploration of surface plasmons has offered an entirely new perspective on the fundamental physics behind electronic phenomena in graphene. For example, by interferometric infrared imaging of plasmonic standing waves we were able to quantify the electronic losses in graphene. This latter result highlights the important role of many-body effects [Nature 487, 82 (2012)] that were not anticipated theoretically. By examining the sub-picosecond dynamics of plasmons using a unique pump-probe nano-spectroscopy apparatus we were able to discriminate between the roles of several photo-induced processes in mono-layer and few-layer graphene [Nano Letters 14, 894 (2014)]. Unexpectedly, infrared photo-excitation enables ultrafast control of plasmons with an efficiency rivaling that of electrostatic gating. Confined surface waves that can travel over macroscopic distances are generic to other classes of two-dimensional atomic crystals [Science 343, 1125 (2014)].
• 9:25-9:50 - J.N. Fuchs -- From dia- to paramagnetic orbital susceptibility of Dirac cones
We study the orbital susceptibility of multiband systems with a pair of Dirac points interpolating between honeycomb and dice lattices. Despite having the same zero-field energy spectrum, these different systems exhibit spectacular differences in their orbital magnetic response, ranging from dia- to paramagnetism at Dirac points. We show that this striking behavior is related to a topological Berry phase varying continuously from π (graphene) to 0 (dice). The latter strongly constrains interband effects, resulting in an unusual dependence of the magnetic response also at finite doping.
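The continuously varying Berry phase is the key quantity here. For the graphene end of the interpolation, its value of π can be checked with a minimal discrete Wilson-loop computation; the sketch below is my own illustration for the two-band Dirac Hamiltonian H = kx σx + ky σy only (the honeycomb-to-dice interpolation of the abstract is not reproduced).

```python
import numpy as np

def lower_band_state(kx, ky):
    """Lower-band eigenvector of the 2-band Dirac Hamiltonian
    H(k) = kx*sigma_x + ky*sigma_y (a graphene-like cone)."""
    phi = np.arctan2(ky, kx)
    return np.array([-np.exp(-1j * phi), 1.0]) / np.sqrt(2.0)

def berry_phase(loop):
    """Discrete (Wilson-loop) Berry phase: minus the phase of the product
    of link overlaps <u_i|u_{i+1}> around a closed loop of k-points."""
    states = [lower_band_state(kx, ky) for kx, ky in loop]
    prod = 1.0 + 0.0j
    n = len(states)
    for i in range(n):
        prod *= np.vdot(states[i], states[(i + 1) % n])
    return -np.angle(prod)

t = np.linspace(0.0, 2.0 * np.pi, 201)[:-1]
# a small circle of k-points encircling the Dirac point at k = 0 ...
around = [(0.1 * np.cos(a), 0.1 * np.sin(a)) for a in t]
# ... and one displaced so that it does not enclose the cone
away = [(0.5 + 0.1 * np.cos(a), 0.1 * np.sin(a)) for a in t]

print(abs(berry_phase(around)))  # ~ pi: loop encircles the Dirac point
print(abs(berry_phase(away)))    # ~ 0: trivial loop
```

The Wilson-loop product is gauge invariant, so no phase convention for the eigenvectors needs to be fixed; moving the loop off the Dirac point sends the phase to zero, as the second loop shows.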
• 9:50-10:15 - M. Orlita -- Massless fermions in 2D and 3D: infrared magneto-spectroscopy studies
Solid-state physics and quantum electrodynamics, with its relativistic (massless) particles, meet to their mutual benefit in a steadily expanding class of materials. These include 1D carbon nanotubes, 2D graphene or topological-insulator surfaces and, most recently, systems with an isotropic conical dispersion in 3D hosting Weyl, Dirac or Kane fermions. In this talk, I will review how the linear dispersion impacts the basic (magneto-)optical properties of these systems. To illustrate this, we focus on two representative materials: 2D graphene and bulk HgCdTe, which displays a 3D conical dispersion when tuned to the point of the semiconductor-to-semimetal topological transition. We demonstrate that it is the number of dimensions which defines the (joint) density of states and, in consequence, simple physical quantities such as the absorption of light - frequency-independent in graphene but displaying a linear-in-photon-energy dependence in HgCdTe. In magnetic fields, the conical dispersion is transformed into Landau levels (LLs) and the optical response is determined by electronic excitations between discrete (in 2D) or dispersed (in 3D) LLs, both, however, with the square-root dependence on the magnetic field that is typical of relativistic particles. Further relativistic effects may appear, depending on the strength of the spin-orbit coupling. Spin-related effects are largely absent in the optical response of graphene, which exhibits a weak spin-orbit coupling. Instead, we observe a pronounced spin splitting of LLs in HgCdTe, which follows the √B dependence -- a well-established signature of relativistic particles, never before observed in any condensed-matter system.
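The √B Landau-level scaling invoked above follows from the standard relativistic spectrum. A minimal sketch (textbook formulas, with a typical graphene Fermi velocity vF = 10^6 m/s assumed) contrasts it with the linear-in-B ladder of a conventional massive electron:

```python
import numpy as np

HBAR = 1.054571817e-34    # reduced Planck constant, J s
E_CH = 1.602176634e-19    # elementary charge, C
M_E  = 9.1093837015e-31   # free-electron mass, kg
V_F  = 1.0e6              # graphene Fermi velocity, m/s (typical value)

def landau_dirac(n, B, vf=V_F):
    """Relativistic (massless) Landau level in eV:
    E_n = sgn(n) * vf * sqrt(2 * e * hbar * |n| * B)."""
    return np.sign(n) * vf * np.sqrt(2.0 * E_CH * HBAR * abs(n) * B) / E_CH

def landau_massive(n, B, m=M_E):
    """Conventional parabolic-band Landau level in eV:
    E_n = (n + 1/2) * hbar * e * B / m, divided by e to convert J -> eV."""
    return (n + 0.5) * HBAR * B / m

for B in (1.0, 4.0, 16.0):
    print(B, landau_dirac(1, B), landau_massive(0, B))
# the Dirac level grows as sqrt(B) (x2 per x4 in field),
# the massive level grows linearly (x4 per x4 in field)
```

At B = 1 T the n = 0 to n = 1 Dirac transition is already ~36 meV, roughly 300 times the free-electron cyclotron energy of ~0.12 meV, which is why the relativistic √B Landau-level fan dominates infrared magneto-spectra of these systems.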
• 10:15-10:30 - A.V. Boris -- Fano resonances in hyperkagome iridates
Iridates exhibit enhanced spin-orbit coupling, and their 5d electrons have reduced on-site Coulomb repulsion compared to 3d-electron systems, giving rise to novel electronic phases, including relativistic Mott insulators with antiferromagnetic and spin-liquid ground states for geometrically unfrustrated and frustrated lattices, respectively. Geometrical frustration and spin-orbit interaction imply a strong spin-lattice coupling in a sublattice of Ir ions, but the effects of electron-phonon interactions have not been explored in these systems.
One of the most striking manifestations of the electron-phonon interaction is the quantum interference between discrete phonons and the continuum of electron-hole excitations. Drawing on a spectroscopic ellipsometry study, we report evidence that conditions favorable for Fano interference are met in the three-dimensional hyperkagome lattice of Na3Ir3O8, the semimetallic counterpart of Mott-insulating Na4Ir3O8, one of the best candidates for a three-dimensional (3D) spin-liquid state. The entire set of well-defined phonon modes in the ellipsometric IR spectra of Na3Ir3O8 single crystals exhibits highly asymmetric line shapes characteristic of Fano resonances. With decreasing temperature, we observe a sharp increase of the infrared intensity of the resonances, followed by concomitant changes in the underlying electronic background formed by electronic transitions between Ir 5d t2g bands of mostly Jeff = 1/2 character. Because of the lack of inversion symmetry, the four Jeff = 1/2 bands have linear Rashba-type dispersion in the vicinity of the Γ point. These bands originate from strong spin-orbit coupling and intersect near the Fermi level, resembling the Dirac cone in graphene. An analysis of the dipole matrix elements has shown that interband transitions between these partially filled bands have a high probability. This provides a high density of electron-hole excitations which interfere with superimposed discrete phonon states, in a similar manner as discussed for graphene.
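The asymmetric line shapes described above are conventionally modeled with the Fano profile. A minimal sketch (illustrative only; the q, ω0 and Γ values are arbitrary assumptions, not parameters from the study):

```python
def fano(omega, omega0, gamma, q):
    """Fano line shape I(eps) = (q + eps)^2 / (1 + eps^2),
    with reduced detuning eps = 2*(omega - omega0)/gamma."""
    eps = 2.0 * (omega - omega0) / gamma
    return (q + eps) ** 2 / (1.0 + eps ** 2)

# Hallmarks of the asymmetric profile (here q = 3, omega0 = 100, gamma = 10):
# an exact zero at eps = -q and a maximum of q^2 + 1 at eps = 1/q.
q, w0, g = 3.0, 100.0, 10.0
print(fano(w0 - q * g / 2.0, w0, g, q))    # → 0.0
print(fano(w0 + g / (2.0 * q), w0, g, q))  # → 10.0
```

The asymmetry parameter q measures the coupling to the electronic continuum; in the Lorentzian limit q → ∞ the zero and the peak merge back into a symmetric resonance.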
• 10:30-10:45 - A.B. Kuzmenko -- Strong Plasmon Reflection at Nanometer Gaps in Graphene
Graphene plasmons attract much attention as they exhibit a remarkable electrostatic tunability and the ability to strongly concentrate electromagnetic energy, which is potentially useful for applications. We used tip-enhanced infrared near-field microscopy (s-SNOM) to study propagating plasmons in epitaxial quasi-free-standing monolayer graphene on silicon carbide. We observe that plasmons are strongly reflected at tiny graphene gaps formed at the steps between the atomically flat substrate terraces. For a step height of only 1.5 nm, which is two orders of magnitude smaller than the plasmon wavelength, the plasmon reflection reaches 20 percent, and it approaches 50 percent for steps of 5 nm. We support this observation with numerical simulations and provide a physical rationale for this intriguing phenomenon. Our results suggest that plasmon propagation can in general be controlled using ultracompact nanostructures.
## Wednesday 11:00-12:30
### Session We2: Fermi Liquids
#### Chair: T. Timusk
• 11:15-11:40 - D. van der Marel -- Fermi liquid behaviour in strongly correlated metals
A reference point for research on a wider range of correlated behaviour is provided by the so-called Fermi liquids, characterized by a relaxation rate 1/τ ∝ ω² + (p π kB T)². The theoretical prediction for the relaxation rate appearing in the optical conductivity is p = 2 when considering the experimentally most accessible range ω > 2π kB T. A number of recent optical studies have addressed the issue of Fermi-liquid characteristics, indeed reporting ω² and T² dependences of the optical scattering rate for a number of different materials. However, a perfect match to the prediction p = 2 has not been observed. One possible scenario that has been proposed to explain this discrepancy is the presence of magnetic impurities.
In a recent study we have investigated Sr2RuO4, a material which can be synthesized in very pure form, with well-established T² resistivity below 25 K. Here we observe a perfect scaling collapse of 1/τ as a function of ω² + (p π kB T)² for ω < 36 meV and temperatures below 40 K, with p = 2. We also observe features in the spectrum at higher energy which are manifestly beyond the Fermi-liquid model. The sign and size of these features agree quantitatively with the notion of resilient quasiparticles predicted by dynamical mean-field theoretical calculations for this compound.
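The scaling collapse described in this abstract can be sketched with a toy evaluation of the Fermi-liquid rate 1/τ = A(ω² + (p π kB T)²): any (ω, T) pairs sharing the same value of ω² + (2π kB T)² must give the same rate. The prefactor A and the arbitrary units below are illustrative assumptions, not the measured values:

```python
import math

KB = 0.08617  # Boltzmann constant in meV/K

def fl_rate(omega_mev, t_kelvin, a=1.0, p=2.0):
    """Fermi-liquid scattering rate 1/tau = A * (omega^2 + (p*pi*kB*T)^2), arb. units."""
    return a * (omega_mev ** 2 + (p * math.pi * KB * t_kelvin) ** 2)

# Collapse: (10 meV, 0 K) and (0 meV, T2) with 2*pi*kB*T2 = 10 meV coincide.
t2 = 10.0 / (2.0 * math.pi * KB)  # ≈ 18.5 K
print(abs(fl_rate(10.0, 0.0) - fl_rate(0.0, t2)) < 1e-9)  # → True
```

Plotting measured 1/τ against the single variable ξ² = ω² + (2π kB T)² is exactly this test: Fermi-liquid data at all temperatures fall on one line through the origin.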
• 11:40-12:05 - A. Georges -- Fermi liquids and beyond: non-Drude universal scaling and optical signatures of resilient quasiparticles
Based on: X. Deng et al., Phys. Rev. Lett. 100, 086401 (2013); C. Berthod et al., Phys. Rev. B 87, 115109 (2013); D. Stricker et al., arXiv:1403.5445.
• 12:05-12:30 - M. Dressel -- Power-Law Behavior of Optical Conductivity Observed in Strongly-Correlated Organic Conductors
Organic charge-transfer salts are highly conducting one- or two-dimensional electron systems which exhibit unusual physical behavior due to significant electron-electron interactions. The best-known examples are the one-dimensional metals of the TMTSF family, which show significant deviations from a Drude metal and have been established as model compounds of a Luttinger liquid. The optical properties provide insight into the dynamics of the correlated electrons and the energy dependence of the interactions. In the two-dimensional metallic κ-(BEDT-TTF) salts the effective Coulomb repulsion can be tuned by pressure until the Mott-insulating state is reached. With increasing correlations, the Fermi-liquid behavior becomes more pronounced, the effective mass grows, and the prefactor of the temperature- and frequency-dependent scattering rate increases. The ratio between the temperature- and frequency-dependent contributions corresponds to Landau's predictions for a Fermi liquid. On the insulating side of the Mott transition, frustration on the triangular lattice causes spin-liquid behavior with no magnetic order down to the lowest temperatures. The optical conductivity of κ-(BEDT-TTF)2Cu2(CN)3 reveals a large in-gap absorption whose excess conductivity exhibits a power-law behavior σ(ω) ∝ ωⁿ that grows stronger as the temperature decreases. With n ∼ 0.8 to 1.5, the exponent is significantly smaller than predicted for spinon contributions to the optical conductivity.
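A power-law exponent n in σ(ω) ∝ ωⁿ is typically extracted as the slope of log σ versus log ω. A minimal sketch on synthetic data (the exponent 1.2 and the prefactor 3.0 are arbitrary illustrations, not values from the talk):

```python
import math

def powerlaw_exponent(freqs, sigmas):
    """Least-squares slope of log(sigma) vs log(omega): the exponent n in sigma ~ omega^n."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(s) for s in sigmas]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic conductivity sigma = 3.0 * omega^1.2 at a few frequencies:
freqs = [1.0, 2.0, 4.0, 8.0]
sigmas = [3.0 * f ** 1.2 for f in freqs]
print(round(powerlaw_exponent(freqs, sigmas), 3))  # → 1.2
```

On real data the fit is restricted to the in-gap frequency window, and the temperature dependence of n is tracked by repeating the fit per temperature.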
• 12:30-12:45 - M. Scheffler -- THz properties of CaRuO3: can we reconcile non-Fermi-liquid optics with Fermi-liquid concepts?
Landau Fermi-liquid (FL) theory is the established framework to describe the many-particle system of electrons in metals within a single-particle picture. In particular, the existence of well-defined Landau quasiparticles is taken as the hallmark that renders a metal a Fermi liquid. While experiments have verified FL predictions for many different physical quantities, the situation for the optical properties of a FL remains under debate. Furthermore, it is not clear how optical properties of metals that contradict FL predictions, often phenomenologically termed non-FL optics, should be interpreted or analyzed quantitatively. CaRuO3 has long been established as a non-FL metal in a wide temperature range, including non-FL optical properties, but with recent improvements in the growth of CaRuO3 thin films using metalorganic aerosol deposition we could document a FL ground state for CaRuO3 by the observation of Shubnikov-de Haas oscillations and a quadratic temperature dependence of the dc resistivity below 1.5 K. Metallic thin films allow optical measurements at frequencies well below the conventional infrared range, and this low-frequency range is particularly suitable to address optical FL and non-FL properties, which are usually confined to low energies in both temperature and frequency. Here we present the results of phase-sensitive THz transmission measurements on high-quality thin films of CaRuO3 at frequencies 0.2-1.4 THz and temperatures 2-300 K.
When cooled below 100 K, the optical scattering rate of our CaRuO3 films moves into our spectral range, and we can observe the charge dynamics. Below 40 K, the metallic response clearly develops a frequency-dependent scattering rate, i.e. it cannot be described in simple Drude terms. In particular, we find a pronounced increase of the scattering rate for frequencies above 0.6 THz, and this increase is much stronger than predicted in the context of FL theory. However, the temperature-dependent response below 0.6 THz, which exhibits only weak deviations from Drude behavior, can be described within FL concepts as recently suggested by Berthod et al. We discuss how the optical response of CaRuO3 in the non-FL temperature range fits into the present discussion of FL optics, and how future studies on CaRuO3 at even lower temperatures or in magnetic field could further elucidate the situation.
# Thursday, July 3rd
## Thursday 9:00-10:40
### Session Th1: URu2Si2 & Heavy Fermions
#### Chair: M. Dressel
• 9:00-9:25 - G. Blumberg -- Chiral density wave of the 'hidden order' phase in URu2Si2
A second-order phase transition is associated with the emergence of an order parameter and a spontaneous symmetry breaking. For the 'hidden order' phase below 17.5 K in the heavy-fermion superconductor URu2Si2, the symmetry of the order parameter and its associated collective modes has remained ambiguous despite 30 years of research. Here we use polarization-resolved Raman spectroscopy to specify the symmetry of low-energy excitations above and below the hidden-order transition. These excitations involve transitions between interacting heavy uranium 5f orbitals, responsible for the broken symmetry in the 'hidden order' phase. From the symmetry analysis of the collective mode, we determine that the hidden order parameter breaks local vertical and diagonal reflection symmetries at the uranium sites, resulting in crystal-field states with distinct chiral properties, which order into a chiral density wave.
• 9:25-9:50 - P. Coleman -- Composite and Topological Order in Heavy Fermion Materials
Heavy-electron materials provide a low-energy realization of many of the emergent properties of correlated electron systems, providing a readily tuneable test-bed for research into correlated electron materials. In this talk I shall discuss two aspects of these materials that are ripe for new spectroscopic study: topological order and composite pairing. Topological order appears to be present in some of the Kondo insulators, such as SmB6 and high-pressure SmS, but it may also be present in the quantum critical material YbAlB4 and could even be present in fully gapped scenarios of composite-paired heavy-electron superconductivity, such as in Yb-doped CeCoIn5. I shall review these three possibilities and discuss ways in which STM, optical and ARPES spectroscopy may be able to shed new light on our understanding.
• 9:50-10:15 - T. Timusk -- The normal state of URu2Si2: spectroscopic evidence for an anomalous Fermi liquid
The Landau Fermi liquid is recognized experimentally by an electrical resistivity that is proportional to the square of the absolute temperature. There is also a frequency-dependent term, which has received less attention since the experiments have to be performed in the difficult far-infrared region of the spectrum. Calculations show that, if electron-electron scattering dominates the electron lifetime in a Landau Fermi liquid, ρ(T,ω) = A′(ω² + b π² T²) where b = 4. Using an optical technique that minimizes interference artifacts, we find the coefficient b = 1.0 ± 0.1 in the normal state of the heavy-fermion metal URu2Si2. This unexpected result implies that the electrons are experiencing a novel scattering process. This scattering is intrinsic, and we suggest that, above 17.5 K, the uranium f electrons do not hybridize with the free spd electrons to form a coherent Fermi liquid but instead act like a dense array of elastic impurities, interacting incoherently with the charge carriers. Calculations by Maslov and Chubukov show that resonant elastic scattering in a Fermi liquid can yield a scattering rate with b = 1. This behavior is not restricted to URu2Si2: Fermi-liquid-like states with b ≠ 4 have been observed in a number of disparate systems, but the significance of this result has not been widely recognized.
• 10:15-10:40 - D.L. Maslov -- Optical conductivity of Fermi-liquid and non-Fermi-liquid metals
In the first part of the talk, I will discuss the robustness of the Fermi-liquid (FL) theory results for the imaginary part of the single-particle self-energy, Im Σ ∼ ω² + π² T², and for the real part of the (inverse) optical conductivity, Re σ⁻¹ ∼ ω² + b π² T², with b = 4. I will show that these scaling forms follow from exact analytic properties in the complex plane (the "first-Matsubara-frequency rules"), and also that the result for the optical conductivity is valid to any order in the electron-electron interaction and with all vertex corrections taken into account. However, optical measurements on a number of strongly correlated metals indicate that the scaling form of Re σ⁻¹ differs from the FL prediction: the coefficient b is closer to 1 than to 4. We propose a phenomenological model in which electrons, in addition to inelastic electron-electron interaction, are also scattered elastically by resonant levels. This model explains the data on URu2Si2. In the second part of the talk, I will discuss the optical conductivity of a 2D metal at the onset of the spin-density-wave instability. It will be shown that composite scattering of "lukewarm fermions" results in a non-FL behavior of Re σ, which scales as ω^(-1/3) and ω^(-1) below and above some characteristic frequency, respectively.
## Thursday 11:10-12:45
### Session Th2: Topological Insulators II
#### Chair: N.P. Armitage
• 11:10-11:35 - J.E. Hoffman -- Nanoscale Band Structure Imaging of Topological Materials: Sb and SmB6
We use STM to study the topological semimetal Sb(111), and report the first simultaneous observation and quantitative reconciliation of Landau-level spectroscopy and quasiparticle-interference imaging. We thus establish the technique of band structure tunneling microscopy (BSTM), and use it to reconstruct the multi-component surface-state band structure of Sb with nanoscale spatial resolution, and to quantify essential metrics for spintronics applications. We also conduct the first atomic-resolution spectroscopic study of the proposed topological Kondo insulator SmB6. We disentangle the tunneling interference between two distinct bands to reveal a robust hybridization gap which universally spans the Fermi level on four distinct surface morphologies, paving the way for a more detailed understanding of the purported topological surface states.
• 11:35-12:00 - K. Ishizaka -- Giant spin splitting in inversion-symmetry broken semiconductors
There has been increasing interest in phenomena emerging from relativistic electrons in solids, which have a potential impact on spintronics and magnetoelectrics. One such example is the Rashba and/or Dresselhaus effect, which lifts the electron-spin degeneracy as a consequence of spin-orbit interaction (SOI) under broken inversion symmetry. A high-energy-scale spin splitting is highly desirable for enhancing the coupling between electron spins and electricity relevant for spintronic functions. In my talk, I will present findings of huge SOI effects in several inversion-symmetry-broken semiconductors. One is a polar semiconductor composed of heavy elements, BiTeI, where all bulk carriers are governed by a large Rashba-like spin splitting. The band splitting and its spin polarization obtained by spin- and angle-resolved photoemission spectroscopy are well in accord with relativistic first-principles calculations, confirming that the spin splitting is indeed derived from the bulk atomic configuration. At the same time, the emergence of a strongly spin-orbit-coupled two-dimensional (2D) electron gas is also found in BiTeI, BiTeBr, and BiTeCl, arising from the 2D confinement of conduction electrons due to the surface band-bending effect. I will also show a recent result on another inversion-symmetry-broken semiconductor, the transition-metal dichalcogenide MoS2, clearly indicating nearly full spin polarization of the valence-band electrons at the Brillouin-zone corners.
• 12:00-12:15 - V. Madhavan -- Coexistence of Massless and Massive Dirac Fermions in Topological Crystalline Insulators
Topological crystalline insulators (TCIs) are recently discovered topological materials where topology and crystal symmetry intertwine to create linearly dispersing Dirac surface states similar to graphene. Among the theoretical predictions for TCIs is the possibility of imparting mass to the massless Dirac fermions by breaking crystal symmetry, as well as a Lifshitz transition with a change of Fermi-surface topology. In this talk I will discuss our recent experimental and theoretical investigations of a TCI, Pb1-xSnxSe. We performed scanning tunneling microscopy (STM) studies at low temperatures and as a function of magnetic field. By analyzing two types of STM data, Fourier transforms of interference patterns and Landau-level spectroscopy, we reveal two distinct regimes of fermiology separated by a Van Hove singularity at the Lifshitz transition point. Our studies reveal the coexistence of zero-mass Dirac fermions protected by crystal symmetry with massive Dirac fermions resulting from crystal-symmetry breaking. In addition, I will discuss our recent data on the evolution of the mass as well as the Dirac surface states as we go through a quantum phase transition from the topological to the trivial regime.
• 12:15-12:30 - R. Valdés Aguilar -- Time-resolved terahertz dynamics in thin films of the topological insulator Bi2Se3
We use optical pump-THz probe spectroscopy at low temperature to study the hot-carrier response in thin films of Bi2Se3 of several thicknesses, in order to separate the bulk from the surface response. We find that for thinner films photoexcitation changes the transport scattering rate and reduces the THz conductivity, which relaxes within 10 picoseconds (ps). For thicker films, the conductivity increases upon photoexcitation and scales with the increase in both the film thickness and optical fluence, with a decay time of approximately 5 ps, as well as a much larger scattering rate. The different dynamics of surface and bulk electrons indicate a decoupling of surface and bulk carriers, and present the possibility of accessing long-lived surface photo-carriers for optoelectronic applications.
• 12:30-12:45 - E.E.M. Chia -- Terahertz conductivity of Dirac-like materials
I will discuss terahertz conductivity results on two Dirac-like materials obtained using terahertz time-domain spectroscopy, as a function of temperature in the frequency range 0.3-3 THz. In twisted bilayer graphene, on top of a Drude-like response, we see a strong peak in the real conductivity σ1(ω) at ∼ 2.7 THz. We analyze the overall Drude-like response using a disorder-dependent (unitary scattering) model, then attribute the peak at 2.7 THz to the presence of van Hove singularities arising from a small-angle commensurate twisting of the two graphene layers. In the three-dimensional topological insulator Bi1.5Sb0.5Te1.8Se1.2, the complex conductivity was analyzed using the Drude-Lorentz model. By calculating the Drude spectral weights of the sample, we found that, compared to other bismuth-based topological insulators, the topological surface states are more clearly discerned, with the three-dimensional bulk states being suppressed. An impurity band is present about 30 meV below the Fermi level. We compare the calculated surface and bulk carrier densities with those obtained from transport data. From the surface Drude contribution, we obtained a ∼ 98% transmission through one surface layer, which is consistent with the transmission through single-layer or bilayer graphene, which shares a common Dirac-cone feature in the band structure. We also show that the temperature dependence of the experimental parameters is not the result of aging effects. The low-frequency real conductivity follows a thermally activated behavior.
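A Drude-Lorentz decomposition like the one used here sums a free-carrier (Drude) term and bound-carrier (Lorentz) oscillators. A minimal sketch in arbitrary units (the e^(-iωt) sign convention and all parameter values are assumptions for illustration, not fitted values from the talk):

```python
def drude_lorentz(omega, oscillators):
    """Complex optical conductivity (arb. units, e^{-i*omega*t} convention) as a sum
    of oscillators (wp, w0, gamma); a Drude term is simply the w0 = 0 case."""
    total = 0j
    for wp, w0, gamma in oscillators:
        total += wp ** 2 * omega / (1j * (w0 ** 2 - omega ** 2) + gamma * omega)
    return total

# A Drude free-carrier term plus one Lorentz oscillator centered at w0 = 2:
model = [(1.0, 0.0, 0.5), (0.5, 2.0, 0.1)]
# The dc limit of the Drude term is wp^2/gamma = 2; the Lorentz term vanishes there.
print(round(drude_lorentz(1e-6, model).real, 3))  # → 2.0
```

The Drude spectral weight (∝ wp² of the ω0 = 0 term) is what separates the surface free-carrier contribution from bound and bulk excitations in such fits.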
## Thursday 15:15-16:50
### Session Th3: BCS & CDW
#### Chair: D.B. Tanner
• 15:15-15:40 - R. Shimano -- Observation of Higgs Amplitude Mode in Superconductors
Superconductivity is a striking example of spontaneous symmetry breaking (SSB). In general, when a phase transition occurs associated with SSB, two kinds of collective excitations emerge: the gapless phase mode and the gapped amplitude mode of the complex order parameter. The latter is also called the Higgs mode from its analogy to the Higgs boson in elementary particle physics. The nature of the Higgs amplitude mode in superconductors has been intensively studied theoretically in the framework of a quench problem, according to which the Higgs amplitude mode can be thought of as the collective Rabi oscillation of Anderson's pseudo-spins. Within the mean-field approximation, a variety of collective-mode dynamics, such as collisionless damping, power-law decay, and persistent oscillation, have been investigated. On the other hand, since the Higgs mode does not couple directly to the electromagnetic field within linear response, the experimental investigation of the Higgs mode in superconductors has remained an open issue. In this presentation, we report on the observation of the Higgs amplitude mode in s-wave superconductors, Nb1-xTixN films, using a THz-pump and THz-probe spectroscopy technique. In order to excite the Higgs amplitude mode while suppressing heating effects, we irradiated the sample with an intense monocycle THz pulse whose center frequency was tuned to the superconducting gap. When the excitation pulse width was short enough compared to the inverse of the superconducting gap energy, namely in a non-adiabatic excitation regime, a damped oscillation was observed in the transmitted electric field of the THz probe pulse as a function of pump-probe delay. The oscillation frequency obtained from damped-oscillation fits coincides with the asymptotic BCS gap energy after the THz excitation, a behavior that indicates the character of the Higgs amplitude mode.
When the excitation pulse width is comparable to the inverse of superconducting gap energy, the Higgs mode becomes less prominent, as the non-adiabatic excitation condition is not satisfied. In the presentation, the dynamics of pseudo-spins arising from the coherent interaction with the THz electric field will also be discussed.
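Extracting the oscillation frequency from a pump-probe trace, as in the damped-oscillation fits mentioned above, amounts to measuring the period of a damped cosine. A minimal sketch on synthetic data (the 1.3 THz frequency, corresponding to 2Δ = h × 1.3 THz ≈ 5.4 meV, and the 3 ps decay time are illustrative assumptions, not the measured values):

```python
import math

def damped_osc(t, amp, freq, tau):
    """amp * exp(-t/tau) * cos(2*pi*freq*t)."""
    return amp * math.exp(-t / tau) * math.cos(2.0 * math.pi * freq * t)

def estimate_freq(ts, ys):
    """Oscillation frequency from the mean spacing of sign changes
    (two zero crossings per period; an exponential envelope does not shift them)."""
    crossings = [0.5 * (t1 + t2)
                 for (t1, y1), (t2, y2) in zip(zip(ts, ys), zip(ts[1:], ys[1:]))
                 if y1 * y2 < 0]
    mean_spacing = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return 1.0 / (2.0 * mean_spacing)

# Synthetic trace: 1.3 THz oscillation with a 3 ps decay, sampled every 10 fs.
ts = [0.01 * i for i in range(1000)]              # time in ps
ys = [damped_osc(t, 1.0, 1.3, 3.0) for t in ts]   # freq in THz = 1/ps
print(round(estimate_freq(ts, ys), 2))  # → 1.3
```

A full least-squares fit of amp, freq, and tau is the more robust route for noisy data; the zero-crossing estimate above is the quick consistency check.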
• 15:40-16:05 - G. L. Carr -- Strong-Field THz Study of Superconductivity in the Time Domain
We have developed a time-domain methodology for determining the THz response of BCS superconductors. The method, based on a finite difference (FDTD) calculation of electromagnetic fields, includes a time-domain susceptibility function valid for the superconducting state with T << Tc. We demonstrate the method's validity by comparing calculation results for the transmission and reflection properties of superconductors with their known frequency-domain counterparts. We also calculate the E-field waveform observed experimentally in a time-domain THz study of superconducting NbTiN.
Some of the advantages of the time-domain approach include calculating instantaneous quantities, such as the induced current density, that in turn affect other quantities. For example, it is known that a superconductor carrying a current has a reduced energy gap under equilibrium conditions, and that equilibrium is established through inelastic electron scattering. We have explored the limits on how fast this process can occur using a strong-field THz pulse to disrupt the superconducting state of a NbN film, and modeled the behavior using our time-domain methodology. Comparison between the two suggests that strong currents can destroy superconductivity on a ∼ 100 fs time scale.
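The superconducting part of such a time-domain response can be caricatured by the first London equation, dJ_s/dt = (n_s e²/m)E, under which the condensate current lags a harmonic field by 90 degrees, i.e. a purely inductive (imaginary) conductivity. This toy integration is a heavily simplified sketch with all units dropped, not the FDTD methodology of the talk:

```python
import math

# Integrate dJ/dt = K * E0 * cos(w*t) with J(0) = 0 (already the steady state);
# the exact solution is J(t) = (K*E0/w) * sin(w*t), lagging E by a quarter period.
K, E0, w, dt = 1.0, 1.0, 2.0, 1.0e-4
period_steps = int(round(2.0 * math.pi / w / dt))
J, t = 0.0, 0.0
for _ in range(9 * period_steps):          # integrate nine full periods
    J += dt * K * E0 * math.cos(w * t)
    t += dt
peak = 0.0
for _ in range(period_steps):              # track the peak current over one more period
    J += dt * K * E0 * math.cos(w * t)
    t += dt
    peak = max(peak, abs(J))
print(round(peak, 3))  # → 0.5  (= K*E0/w)
```

The peak current K·E0/ω reproduces the 1/ω amplitude of the inductive response; a realistic time-domain model replaces the constant K with the full BCS susceptibility kernel inside an FDTD field update.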
• 16:05-16:20 - M.A. Méasson -- Amplitude 'Higgs' mode in 2H-NbSe2 Superconductor
When a spontaneous breaking of a continuous symmetry takes place, as happens during a superconducting transition, collective excitations of the order parameter emerge, among which is the massive amplitude Higgs mode. Here, we report experimental evidence for the observation of such a superconducting (SC) amplitude mode, the so-called 'Higgs' mode, in the charge density wave (CDW) superconductor 2H-NbSe2 using Raman scattering. By comparing 2H-NbSe2 and its iso-structural partner 2H-NbS2, which shows superconductivity but lacks the charge density wave order, we demonstrate that the superconducting mode in 2H-NbSe2 owes its spectral weight to the presence of the coexisting charge density wave order. In addition, temperature-dependent measurements at ambient pressure in 2H-NbSe2 show a full spectral-weight transfer from the charge density wave mode to the superconducting mode upon entering the superconducting phase. Finally, thanks to the technical development of a new set-up for electronic Raman scattering at low temperature (3 K) and high pressure (20 GPa), we show that when the charge density wave order disappears (at 7 GPa) the superconducting collective mode turns into a pair-breaking peak, as observed in 2H-NbS2. All these observations are consistent with a superconducting amplitude mode, or Higgs mode. Moreover, we have closely followed the CDW and SC modes with pressure. Whereas the CDW mode softens and broadens until it collapses at ∼ 5 GPa, the SC mode hardens and gains spectral weight up to 3.5 GPa before losing intensity and softening up to 5 GPa. We will discuss these intriguing results in the context of quantum-critical-point theory.
• 16:20-16:35 - L. Benfatto -- Spectroscopic signatures of phase and amplitude modes in superconductors
The formation of a superconducting state leads to the appearance of two collective modes associated with the amplitude and phase fluctuations of the SC order parameter. A direct observation of these two modes is usually elusive, since in a standard BCS superconductor they do not couple to the current or density fluctuations, which are the ones probed by the different spectroscopic techniques. However, specific mechanisms able to break a fundamental symmetry of the system can make these modes observable. Here we discuss two paradigmatic cases suggested by recent experiments in conventional superconductors. The first example is the visibility of phase fluctuations in a strongly disordered superconductor: we show that phase modes become optically active in the presence of disorder, giving rise to additional optical absorption below the gap edge, probed recently by microwave spectroscopy. In particular, we show that the optical response is tightly connected to the spontaneous emergence of SC islands (embedded in a bad SC background), probed recently by STM in several conventional superconductors (e.g. NbN, TiN, InOx) near the superconductor-insulator transition. Indeed, isolated SC islands act as micro-antennas via the accumulation of a finite phase gradient across their edges. The second example is the visibility of the Higgs (amplitude) mode in the Raman response of a SC state formed on top of a CDW state. Indeed, the CDW state breaks particle-hole symmetry, leading to a finite direct coupling between amplitude fluctuations of the SC order parameter and density fluctuations, probed by Raman scattering. Such a purely electronic mechanism can explain the emergence of an additional peak below Tc in 2H-NbSe2, without assuming a direct coupling between the phonon and the amplitude fluctuations. We finally comment on the consequences of all these findings in the context of other SC systems, such as SC interfaces and cuprate superconductors.
• 16:35-16:50 - N.L. Wang -- Coexistence and competition of multiple charge-density-wave orders in RTe3 as revealed by optical probes
The rare-earth tri-telluride RTe3 (R = rare-earth element) is among the most well-known and representative CDW compounds driven by the nesting of Fermi surfaces. It is widely accepted that the compounds with light R ions in the series experience a single CDW phase transition, while the four heavy-R-ion compounds (R = Dy, Ho, Er, Tm) undergo two CDW transitions. The second transition occurs at lower temperature with a nesting wave vector perpendicular to the first one. Moreover, the two CDW transition temperatures follow opposite trends. Optical measurements on a heavy-R compound, ErTe3, indeed revealed two CDW energy gaps which match fairly well with the results established by other experimental probes. However, optical measurements (including ultrafast pump-probe) on the light and intermediate rare-earth compounds CeTe3 and TbTe3 revealed puzzling results: an additional CDW energy gap still develops at low temperature, which is at odds with other experimental measurements. Moreover, the gap values for CeTe3 and TbTe3 do not follow the trend observed for the four heavy rare-earth RTe3 compounds.
To resolve the puzzle, we grew single crystals of the whole series of rare-earth RTe3 and performed systematic optical spectroscopy studies on all eleven different compounds, which unexpectedly reveal the presence of a third CDW order that also evolves systematically in the series. The first and third CDW orders cooperate with each other and show similar suppression with decreasing radii of the R ions (or increasing chemical pressure). Meanwhile, both compete with the second CDW order by depleting the low-energy spectral weight. The energy gaps observed previously in CeTe3 and TbTe3 at lower energies actually belong to this third CDW order (this order had never been reported by any other probe before). On the basis of these results, we established a complete electronic phase diagram for the multiple CDW orders in the RTe3 system. These compounds offer a rare opportunity to study the interplay among multiple orders of the same type.
## Thursday 17:20-18:50
### Session Th4: Fluctuations in Cuprates
#### Chair: A. Damascelli
• 17:20-17:45 - C. Giannetti -- Snapshots of the retarded electronic interaction with antiferromagnetic fluctuations in high-temperature superconductors
One of the pivotal questions in the physics of high-temperature superconductors is whether the low-energy dynamics of the charge carriers is mediated by bosons with a characteristic timescale. This issue has remained elusive since electronic correlations are expected to dramatically speed up the electron-boson scattering processes, confining them to the very femtosecond timescale that is hard to access even with state-of-the-art ultrafast techniques.
Here we simultaneously push the time resolution and the frequency range of transient reflectivity measurements up to an unprecedented level that enables us to directly observe the ∼ 16 fs build-up of the effective electron-boson interaction in hole-doped copper oxides. This extremely fast timescale, together with the outcome of calculations for the t-J model and the repulsive Hubbard model, indicates that short-range antiferromagnetic fluctuations are the bosons that likely mediate the retarded electron interactions in copper oxides close to optimal doping, where the largest critical temperature is reached.
• 17:45-18:10 - J.P. Hinton -- Quasiparticle recombination dynamics in the model cuprate superconductor HgBa2CuO4+δ
The cuprate family of high temperature superconductors is characterized by a variety of electronic phases which emerge when charge carriers are added to the antiferromagnetic parent compound. In recent years, it has been established that various forms of charge ordering at temperatures proximate to the superconducting transition temperature are a universal feature of underdoped cuprates. The structural simplicity of the single layer cuprate system HgBa2CuO4+δ (Hg1201) makes it an ideal system for studying subtle interactions between charge order and superconductivity. In this work, we investigate the recombination dynamics of photo-excited quasiparticles in Hg1201 as a function of doping, temperature, and magnetic field using pump-probe optical reflectivity. We observe two distinct onset temperatures above Tc in the underdoped part of the phase diagram, corresponding to T* and T** as observed in transport and neutron scattering experiments. We also measure a suppression of the recombination rate near Tc. This suppression can be modeled as a crossover from fluctuating charge density wave to superconducting quasiparticle coherence.
• 18:10-18:35 - S. Tajima -- Optical observation of precursory superconductivity in YBa2Cu3Oy (Expand...)
We have systematically studied the c-axis polarized optical spectra of Zn-doped YBa2Cu3Oy over a wide range of oxygen and Zn contents. Subtracting the normal components carefully, we found finite superconducting condensates at temperatures far above Tc but below the pseudogap temperature T*. This temperature for precursory superconductivity, Tp, is sensitive to Zn-doping, like Tc, which indicates that this phenomenon is linked to superconductivity but different from the pseudogap. On the other hand, the doping dependence of Tp is similar to that of T*, namely, Tp increases with underdoping.
These results suggest that as the system approaches a Mott insulator the pairing interaction becomes stronger, but the simultaneously developed competing order (pseudogap) suppresses superconductivity. Although these two (superconductivity and pseudogap) are distinct orders, they may originate from the same interaction. Strong electron correlation is a possible candidate for this interaction. Since our recent study of the oxygen isotope effect in YBa2Cu3Oy suggests that the charge channel is involved in the competing order, the coexistence of spin and charge order such as the stripe order could be the origin of the pseudogap.
• 18:35-18:50 - D. Munzar -- Evidence for a superconducting origin of prominent features of the in-plane infrared response of underdoped cuprates and implications of their persistence above Tc (Expand...)
The possible persistence of some form of superconductivity many tens of K above the bulk superconducting transition temperature Tc in underdoped cuprate superconductors is among the most vividly discussed topics in the field of high-Tc superconductivity. Surprisingly high (up to 100 K above Tc) values of the temperature Tons of the onset of an increase of coherence, presumably due to an onset of a precursor superconducting phase, have been deduced from the data of the c-axis infrared response of underdoped YBa2Cu3O7-δ (Y-123). The interpretation of the Tons scale in terms of a precursor superconductivity, however, has not yet been widely accepted. The main reasons are: (i) The c-axis response of Y-123 is a fairly complex quantity due to the specific bilayer structure. (ii) Underdoped cuprates are known to exhibit ordered states distinct from superconductivity; in particular, charge modulations have been reported that set in at temperatures comparable to Tons. It is thus possible to speculate that the Tons scale is determined by an order competing with superconductivity rather than by superconducting correlations themselves. In this context it is of high importance to address manifestations of the increase of coherence below Tons in the in-plane response and to ascertain their relation to superconductivity.
We report on results of our analysis of published experimental data of the in-plane infrared response of two representative underdoped high-Tc cuprate superconductors (Y-123, Tc=59 K; HgBa2CuO4+δ, Tc=67 K), focusing on a characteristic gap feature in the spectra of the real part of the conductivity and the corresponding structures of the memory function/optical self-energy that develop below Tons. Several arguments based on comparisons of the data with results of our calculations will be provided indicating that these features are due to superconductivity and that Tons marks the onset of a precursor superconducting phase: (i) The low temperature data of the two moderately underdoped cuprates will be shown to be consistent with results of our calculations based on a well established model of the in-plane response of a d-wave superconductor. This has not been recognized earlier and it strongly suggests that the features are due to superconductivity. (ii) The onset of the gap feature in the data, below Tons, will be shown to be similar to that of our calculated spectra below Tc and to that of optimally doped cuprates below Tc. This finding supports the assignment of the Tons scale to precursor superconductivity. (iii) It will be demonstrated that the temperature dependence of the features cannot be simply accounted for in terms of a normal state pseudogap independent of superconductivity. (iv) Our interpretation of the infrared data will be shown to be consistent with the precursor superconductivity based interpretation of the photoemission data proposed by D. Dessau and coworkers: the above-Tc spectral functions provide a reasonable profile of the conductivity only when complemented with the corresponding off-diagonal components.
# Friday, July 4th
## Friday 9:00-10:45
### Session Fr1: Pnictides & Heavy Fermions
#### Chair: L. Benfatto
• 9:00-9:25 - R. Hackl -- A light scattering study of the pairing potential in Fe-based superconductors and related compounds (Expand...)
We present results of light scattering experiments on Fe-based superconductors. The main focus will be placed on superconducting gap excitations in Ba1-xKxFe2As2 (BKFA). It is shown that the response in A1g and B2g symmetry is dominated by pair-breaking. In contrast to Ba(Fe1-xCox)2As2 the energy gaps derived for the various bands are only slightly anisotropic. In B1g symmetry we find a sharp mode inside the gap that is most naturally explained in terms of an exciton-like excitation resulting from the final state interaction between the two electrons of a broken Cooper pair. Here, as opposed to the original prediction by Bardasis and Schrieffer, the interaction does not originate in an intraband anisotropy but rather in a subdominant contribution to the pairing potential from the interaction between the electron bands. The subdominant coupling is found to be almost as strong as the dominant one between the electron and hole bands. In BKFA the identification of the in-gap mode is straightforward from the temperature dependence of the energy and the transfer of spectral weight. The position and the intensity of the mode can be reproduced by a semi-quantitative calculation on the basis of model assumptions for the pairing potential. In addition to the Bardasis-Schrieffer modes, there exist various other collective excitations inside the gap allowing conclusions as to the interactions in the superconducting state. We give some examples of results observed earlier in superfluid He, A15 compounds, NbSe2, and MgB2 and propose an identification in terms of Bardasis-Schrieffer, Higgs and Leggett modes, respectively.
• 9:25-9:50 - Y. Gallais -- Raman scattering as a probe of charge nematic fluctuations in Iron-based superconductors (Expand...)
In this talk, I present recent Raman scattering results showing the presence of dynamical charge nematic fluctuations in the tetragonal phase of electron-doped (Ba,Sr)(Fe1-xCox)2As2 and hole-doped Ba1-xKxFe2As2 iron-based superconductors. The diverging Raman response at low temperatures unveils an underlying charge nematic state that extends to superconducting compositions and which has hitherto remained unnoticed. Comparison between the extracted static charge nematic susceptibility and elastic modulus measurements allows us to disentangle the charge contribution to the nematic instability. Key differences between hole and electron doping will also be highlighted.
• 9:50-10:15 - I. Paul -- Nesting Induced Large Magnetoelasticity in the Iron Arsenide Systems (Expand...)
An interesting feature of the iron arsenides is the magnetoelastic coupling between the long wavelength in-plane strains of the lattice and the collective spin fluctuations of the electrons near the magnetic ordering wavevectors. We study the microscopic origin of this feature from an electronic model with nested Fermi pockets and a nominal interaction. We find that the couplings diverge as a power law as the system is tuned to perfect nesting, thereby implying that the magnetoelasticity in these systems is a nesting-induced feature. We also elucidate how this nesting-induced singularity plays a role in triggering a spin-fluctuation-driven nematic instability that gives rise to the orthorhombic phase of these materials. These results show the microscopic connection between the nesting of the bands and the nematic and magnetoelastic properties of the iron arsenides.
• 10:15-10:30 - J.S. Hall -- Hybridization regime and hidden order state of URu2Si2: Effects of doping on Fermi liquid scattering and energy gap (Expand...)
We present new optical data on the heavy fermion metal URu2Si2 doped with Re and Fe. The un-doped material shows a Drude peak and a developing hybridization gap above 17.5 K. Below this temperature there is a second order phase transition to the "hidden order state". A gap develops in the incoherent part of the conductivity but the Drude peak remains. By doping Re into URu2Si2 the hybridization and hidden order states can be changed considerably. In the former, the Fermi liquid behaviour onsets at a lower temperature and persists over a broader temperature range. In the latter, the hidden order gap is weakened with a corresponding decrease in the transition temperature. Doping with Fe, by contrast, causes an increase in the hidden order transition temperature before pushing URu2Si2 into an antiferromagnetic state. The effect of these dopings on the hybridization and hidden order states, as revealed by optical conductivity, offers significant insight into the electrodynamics of this mysterious system.
• 10:30-10:45 - H. Okamura -- Electron-Hole Symmetry in the Electronic Structures of Ce and Yb Compounds Examined by Optical Study under High Pressure (Expand...)
The electron-hole symmetry is one of the most fundamental concepts in condensed matter physics. Electron-hole symmetry exists between Ce3+ and Yb3+ ions, since the former has one f electron while the latter has one f hole. Accordingly, many common properties are found between Ce and Yb compounds. For example, heavy fermion (HF) states with large effective mass have been observed for both Ce and Yb compounds. However, there are also differences between them. For example, there have been far fewer Yb-based HF superconductors than Ce-based ones. Therefore, it is interesting to compare the microscopic electronic structures of Ce and Yb compounds. Their properties under high pressure are particularly interesting, since external pressure makes it possible to tune the hybridization between the conduction and f electrons (c-f hybridization). We have compared the electronic structures of CeRhIn5 (Ce115) and YbNi3Ga9 (Yb139) by measuring their optical conductivity σ(ω) under high pressure with a diamond anvil cell. Ce115 is tuned by pressure from a localized state (antiferromagnet) to a delocalized HF state for P > 2.5 GPa, while Yb139 is tuned from a delocalized HF (intermediate valence) state to a localized state (magnetic order) for P > 9 GPa. In the measured σ(ω) of both Ce115 and Yb139, a characteristic mid-infrared (mIR) peak shows marked pressure evolution. This peak results from a c-f hybridized state, and hence its pressure evolution reflects that of the c-f hybridized state. It is found that the peak shift with pressure is opposite between Ce115 and Yb139: the mIR peak shifts to higher energy for Ce115 and to lower energy for Yb139. This result should reflect the electron-hole symmetry, i.e., the pressure evolution of the f electron state toward opposite characteristics in Ce and Yb compounds. On the other hand, the mIR peak becomes broader with pressure for both Ce115 and Yb139, in contrast to what is expected from a simple electron-hole symmetry.
We will discuss these results on the basis of microscopic electronic structures in Ce and Yb compounds under high pressure.
## Friday 11:15-12:35
### Session Fr2: Heterostructures
#### Chair: F. Gervais
• 11:15-11:40 - F. Perez -- Collective spin excitations in diluted magnetic quantum wells (Expand...)
We present ten years of work on collective spin excitations in an electron gas confined in diluted magnetic quantum wells made of Cd1-xMnxTe. The giant Zeeman effect due to the sd-exchange coupling between the conduction electrons and the magnetic impurities introduces a high spin-polarization degree of the electron gas at low magnetic fields, rendering the Landau orbital quantization negligible. Hence a model spin-polarized electron gas (SP2DEG) is formed. By Raman scattering, we studied the spin waves of the SP2DEG, which propagate in the plane of the quantum well thanks to the spin-resolved Coulomb interaction. The sd-coupling introduces spin-mixed modes in which the Mn spin precession is coherently coupled to the electron spin precession. The spin-mixture of the spin motion was observed in the transient THz radiation that follows the coherent spin precession induced by an optical pulse. More recently, we observed huge spin-orbit fields, reinforced by the Coulomb interaction and acting as a unique field on the spin waves. This effect might have applications in spin-wave based spintronics.
• 11:40-11:55 - Y. Todorov -- Collective effects in 2D electron gas and Ultra-strong light-matter coupling (Expand...)
The resonant interaction between light and a quantum structure is commonly described as a process between two electronic levels. However, in solid-state physics this simple picture must be revised when a large number of particles are involved in the process, as many-body effects can profoundly modify the optical response of the system. One such example is the intersubband transitions between the quantized states of a semiconductor quantum well hosting a high-density electron gas. In such a system, the optical response is provided both by the quantizing effects of the heterostructure potential and by the plasmonic nature of the electronic oscillations excited by the impinging photon.
In such systems, one direct physical effect of the collective electronic response is the possibility to gain very large oscillator strengths for the many-body state as compared to the single-particle electronic states. Such gain results in a considerable acceleration of the absorption and emission of light in these systems. For instance, when highly doped quantum wells are coupled with a resonant microcavity, one reaches the ultra-strong light-matter coupling regime, recently demonstrated experimentally by our group. In this regime, the light-matter coupling strength, the Rabi splitting 2ΩR, becomes a sizeable fraction of the intersubband transition frequency ω12, and fractions as large as 2ΩR / ω12 = 73% have been recently achieved at room temperature. Beyond the fundamental physical concepts, we are now exploring the large oscillator strength of the quantum plasmons for building novel devices operating in the infrared spectral range. In particular, I will present our recent investigations on electrically driven super-radiant emitters and low-dark-current microcavity-coupled quantum detectors.
• 11:55-12:20 - N. Bergeal -- Multiple Quantum Phase Transitions in a two-dimensional superconductor (Expand...)
Transition metal oxides display a great variety of quantum electronic behaviors where correlations often play an important role. The achievement of high-quality epitaxial interfaces involving such materials gives a unique opportunity to engineer artificial materials where new electronic orders take place. It has been shown recently that a two-dimensional electron gas (2DEG) can form at the interface of two insulators such as LaAlO3 and SrTiO3, or LaTiO3 (a Mott insulator) and SrTiO3. We study the magnetic-field-driven Quantum Phase Transition (QPT) in electrostatically gated superconducting LaTiO3/SrTiO3 and LaAlO3/SrTiO3 interfaces. Through finite size scaling analysis, we show that it belongs to the (2+1)D XY model universality class. The system can be described as a disordered array of superconducting islands coupled by a 2DEG. Depending on the 2DEG conductance tuned by the gate voltage, the QPT is single (corresponding to the long-range phase coherence in the whole array) or double (one related to local phase coherence, the other one to the array). By retrieving the coherence length critical exponent, we show that the QPT can be "clean" or "dirty" according to the Harris criterion, depending on whether the phase coherence length is smaller or larger than the island size. The overall behaviour is well described by a model of coupled superconducting puddles in the framework of the fermionic scenario of 2D superconducting QPT.
• 12:20-12:35 - A. Perucchi -- Electrodynamics of hetero-structured high temperature superconductors (Expand...)
Both the iron-based and the cuprate high temperature superconductors are intrinsically multi-layered materials. Particular efforts have thus been devoted to the deposition of thin superconducting films and to the artificial synthesis of heterostructures based on different superconducting materials. The study of these systems can provide new clues to the understanding of the general mechanism of high temperature superconductivity while offering the possibility to tailor important superconducting properties. An important example is provided by Co-doped Ba122 superlattices, where it was shown that heterostructuring the pristine superconducting compound can result in a substantial enhancement of the upper critical field, due to controlled flux pinning. On the other hand, in cuprates, the fabrication of artificial interfaces between the insulating CaCuO3 and SrTiO3 compounds results in superconducting interfaces, analogous to the copper-oxide planes of the cuprates. We address here the electrodynamics of both these classes of heterostructured superconductors. Optics allows us to probe the electronic structure of these new superconducting states, thereby addressing important issues such as the number and symmetry of the gaps, the density of the charge carriers and their effective masses.
https://math.stackexchange.com/questions/1561774/diophantine-equation-x2-y2-z3
# Diophantine equation $x^2 + y^2 = z^3$
I have found all solutions to the Diophantine equation $x^2 + y^2 = z^3$ when $z$ is odd. I am having some difficulty finding the solutions when $z$ is even. I am asking for a proof that provides the solutions where $z$ is even. I want the proof to be elementary and use only Number theory and perhaps Calculus or basic ideas about groups and rings.
• I see $x = y = z = 2$ is a trivial solution to the system. – Jack Tiger Lam Dec 6 '15 at 0:50
• If $z$ is even, then $x^2 + y^2$ is divisible by $4.$ In turn, this means both $x,y$ are even, with $x \equiv y \pmod 4.$ With enough effort this should allow you to finish. – Will Jagy Dec 6 '15 at 1:02
• @WillJagy: $x\equiv y\pmod 4$? – Greg Martin Dec 6 '15 at 1:16
• If $(x,y,z)$ is a solution to $x^2+y^2=z^3$, then so is $$(8n^3x, 8n^3y, 4n^2z)$$ for any non-negative integer $n$. This relies on the fact that multiplying both sides of the equation by an even sixth power will yield another solution. This gives you an infinite number of even solutions for each solution that you find (you said you'd already found all the solutions with odd $z$). This probably doesn't yield all solutions with even $z$, but it may help. – Zubin Mukerjee Dec 6 '15 at 1:36
• @Zubin Mukerjee Let $a,b$ be integers. Then $(a^3 - 3ab^2,\; 3a^2b - b^3,\; a^2 + b^2)$ is a solution. These solutions contain all the odd solutions. – Tanner Carawan Dec 6 '15 at 1:48

## 5 Answers

Unfortunately, there isn't (apparently) one complete polynomial parameterization to $$x^2+y^2 = z^k\tag1$$ when $k>2$. For $k=2$, the complete solution is $$x,\,y,\,z = (a^2-b^2)s,\; (2ab)s,\; (a^2+b^2)s$$ where $s$ is a scaling factor. Using complex numbers $a+bi$, one can generalize the method. For $k=3$, it is $$x,\,y,\,z = (a^3 - 3a b^2)s^3,\; (3a^2 b - b^3)s^3,\; (a^2+b^2)s^2\tag2$$ but you can no longer find rational $a,b,s$ for certain solutions. For example,

$$9^2+46^2 = 13^3\quad\text{Yes}$$

$$58^2+145^2=29^3\quad\text{No}$$

(You can click on the Yes/No links for WolframAlpha output.) A related discussion can be found in this post, while an alternative method is described here. For the case $k=3$, if $a^2+b^2=c^3$, then infinitely many more can be found as $$(a u^3 + 3 b u^2 v - 3 a u v^2 - b v^3)^2 + (b u^3 - 3 a u^2 v - 3 b u v^2 + a v^3)^2 = c^3(u^2+v^2)^3\tag3$$ which should provide some solutions not covered by $(2)$.

• Is this a joke? $$58^2+145^2=29^3$$ $$(2\cdot 29)^2+(5\cdot 29)^2=29^3$$ $$2^2+5^2=29$$ So any nonlinear equation can be reduced to a linear one. – individ Dec 6 '15 at 8:05
• @individ: Sigh... It should not take you a few seconds to answer your own question. – Tito Piezas III Dec 6 '15 at 8:19
• @TannerCarawan: I asked a more general version of your question. Kindly see this post. – Tito Piezas III Dec 6 '15 at 9:22
• @TitoPiezasIII: I looked at it. Thanks for answering my question. I have not yet studied enough abstract algebra to understand why there isn't a complete polynomial parametrization. – Tanner Carawan Dec 7 '15 at 23:00

Fermat's two squares theorem says exactly which integers $n$ can be written as the sum of two squares, and indeed it can be made constructive, with a procedure to find all such representations. I recommend applying that known procedure to $n=z^3$. I don't think there's a significantly easier way; for example, already when $z$ is a high power of $5$ (or twice a high power of $5$), there are many representations.

You can write the solution in this form: http://www.artofproblemsolving.com/community/c3046h1054060_cubes_with_squares But usually one uses the standard simple approach. In the equation $$X^2+Y^2=Z^3$$ set $$X=ab+cd$$ $$Y=cb-ad$$ And receive such record: $$(a^2+c^2)(b^2+d^2)=Z\cdot Z^2$$ $$b^2+d^2=Z^2$$ $$Z=a^2+c^2$$ So $$d=a^2-c^2$$ $$b=2ac$$ Then the decision on the record: $$X=3ca^2-c^3$$ $$Y=3ac^2-a^3$$ $$Z=a^2+c^2$$

• Is every solution of that form? Why can we say $X = ab + cd$ and $Y = cb - ad$? Why must $b^2 + d^2 = Z^2$ or $Z = a^2 + c^2$? What is all this talk about "record"? – Tanner Carawan Dec 6 '15 at 7:52
• @TannerCarawan Because such a simple entry allows you to simply solve the equation. So why not use it? – individ Dec 6 '15 at 8:08

Playing with the degrees and undetermined coefficients, we try to solve $$r^2(\alpha r^2+\beta s^2)^2+s^2(\gamma r^2+\delta s^2)^2=(\lambda r^2+\mu s^2)^3$$ in order to get an identity like that for the Pythagorean triples. Operating, $$\alpha^2 r^6+(2\alpha \beta+\gamma^2)r^4s^2+(\beta^2+2\gamma \delta)r^2s^4+\delta^2 s^6=(\lambda r^2+\mu s^2)^3$$ We find it convenient at first sight to take $\alpha^2=\delta^2=1$ so that $$\pm2\beta+\gamma^2=\beta^2\pm2\gamma=3$$ Finally we take the values $$(\alpha,\beta,\gamma,\delta,\lambda,\mu)=(1, -3, 3, -1, 1, 1)$$ and we get the identity $$[r(r^2-3s^2)]^2+[s(3r^2-s^2)]^2=(r^2+s^2)^3$$ from which infinitely many solutions follow.

• All this is good, but it does not solve the equation. In the first formula you actually fix what form the solutions must have, but when solving the equation we know nothing about that form in advance. – individ Dec 6 '15 at 7:51
• Would you explain more, please? I don't understand what you want to tell me. And note that I have not pretended to have found all the solutions, only infinitely many of them, via an identity which I was not sure I could get (please take into account that my English via Google Translate is weak). – Piquito Dec 6 '15 at 13:07

From the equation $\left(p^2+k^2\right)z^2=y^2+x^2$ one has the solutions $$(x,y,z)=(-2abp-b^2k+a^2k,\;(a^2-b^2)p+2abk,\;b^2+a^2)$$ $$(x,y,z)=(2abp-b^2k+a^2k,\;(a^2-b^2)p-2abk,\;b^2+a^2)$$ If $z=p^2+k^2$: $$\left(p^3+k^2p\right)^2+\left(kp^2+k^3\right)^2=\left(p^2+k^2\right)^3$$ $$\left(p^3-3k^2p\right)^2+\left(3kp^2-k^3\right)^2=\left(p^2+k^2\right)^3$$
• You lead the same solutions as above. What's the point of this response? – individ Dec 6 '15 at 13:57
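As a quick numerical sanity check (not part of the original thread), the $k=3$ parameterization from the answers above and the scaling $(8n^3x,\,8n^3y,\,4n^2z)$ from the comments can both be verified in a few lines of Python; the function names are my own:

```python
# Sketch (not from the thread): check the k = 3 parameterization
#   (x, y, z) = (a^3 - 3ab^2, 3a^2 b - b^3, a^2 + b^2)
# and the scaling (8n^3 x, 8n^3 y, 4n^2 z), which multiplies both
# sides of x^2 + y^2 = z^3 by the sixth power (2n)^6.

def param_solution(a, b):
    """A solution of x^2 + y^2 = z^3 from the polynomial identity."""
    return a**3 - 3*a*b**2, 3*a**2*b - b**3, a**2 + b**2

def scaled(x, y, z, n):
    """Turn one solution into another with even z (for n >= 1)."""
    return 8*n**3*x, 8*n**3*y, 4*n**2*z

for a in range(1, 6):
    for b in range(1, 6):
        x, y, z = param_solution(a, b)
        assert x*x + y*y == z**3
        for n in range(1, 4):
            xs, ys, zs = scaled(x, y, z, n)
            assert xs*xs + ys*ys == zs**3
```

For instance, `param_solution(1, 1)` returns `(-2, 2, 2)`, matching the trivial solution $x=y=z=2$ from the comments up to sign.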
https://www.nature.com/articles/s41467-020-15709-8?error=cookies_not_supported&code=5fe996af-4253-4e1d-93e8-5ebd79264afe
# Electron tunneling of hierarchically structured silver nanosatellite particles for highly conductive healable nanocomposites
## Abstract
Healable conductive materials have received considerable attention. However, their practical applications are impeded by low electrical conductivity and irreversible degradation after breaking/healing cycles. Here we report a highly conductive completely reversible electron tunneling-assisted percolation network of silver nanosatellite particles for putty-like moldable and healable nanocomposites. The densely and uniformly distributed silver nanosatellite particles with a bimodal size distribution are generated by the radical and reactive oxygen species-mediated vigorous etching and reduction reaction of silver flakes using tetrahydrofuran peroxide in a silicone rubber matrix. The close work function match between silicone and silver enables electron tunneling between nanosatellite particles, increasing electrical conductivity by ~5 orders of magnitude (1.02 × 10³ S cm⁻¹) without coalescence of fillers. This results in ~100% electrical healing efficiency after 1000 breaking/healing cycles and stability under water immersion and 6-month exposure to ambient air. The highly conductive moldable nanocomposite may find applications in improvising and healing electrical parts.
## Introduction
Healable and deformable conductive materials have received considerable attention for future electronics such as artificial human skin, internet of things, and bioelectronics1,2,3,4,5. The healable materials are essential components in robust electronics owing to recoverability after mechanical/electrical damages autonomously or in response to external stimuli4,5,6,7. A variety of mechanisms, such as metal-coordinated, covalent, and hydrogen bonds, have been investigated to synthesize healable polymer matrix8,9,10,11. In order to impart electrical conductivity (σ), various conductive fillers and conducting polymers were incorporated into the healable polymer matrix4,6,8,9,10,12,13,14,15,16,17,18,19,20,21,22.
The healable conductive nanocomposites can be classified into two types depending on the moldability at room temperature: rigid/flexible/stretchable4,8,9,10,12,13,14,15 or moldable viscoelastic nanocomposites6,16,17,18,19,20,21,22. The rigid/flexible/stretchable nanocomposites with crosslinking exhibited greater elastic behavior with little moldability4,8,9,10,12,13,14,15. They provided relatively higher σ as shown in Supplementary Table 1. A recent study reported a high σ (850 S cm⁻¹) with graphene (25 wt%)10. The nanocomposite was rigid at room temperature10. However, it became bendable and deformable at high temperatures. The excellent healing was achieved by the abundant Zn(II)-carboxylate interactions after heat treatment10. In contrast, the moldable nanocomposites showed more viscous behavior and changed shape permanently upon molding, similar to silly putty or playdough6,16,17,18,19,20,21,22. The elastic property of moldable nanocomposites in the literature was typically too low to retain their shape after demolding16,17,18,20,22. The moldable nanocomposites provided smaller σ than the rigid/flexible/stretchable nanocomposites in the literature (Supplementary Table 2). Note that only the healable conductive nanocomposites in the literature were listed in Supplementary Tables 1 and 2. Commercially available conducting playdoughs rely on ionic conduction of electrolytes and possess very low σ (≤0.1 S cm⁻¹, Supplementary Table 3). Moreover, they show irreversible electrical and mechanical degradation upon drying or heating, limiting practical applications (Supplementary Fig. 1). The incorporation of conductive nanofillers (<2 S cm⁻¹) or conducting polymers (≤78 S cm⁻¹) still could not provide high conductivity of moldable nanocomposites in previous reports (Supplementary Table 2)6,16,17,18,19,20,21,22.
Regarding the conducting mechanism, a percolation network with dynamic alignment and physical coalescence of fillers has been suggested as an efficient strategy for typical non-healable nanocomposites1,23,24,25,26. A high σ was achieved by the coalescence of fillers after thermal, optical, or chemical curing23,24,25,26. However, the coalesced particles were fractured under mechanical deformation or stretching, resulting in an irreversible decrease in σ23,24,25,26. In contrast, non-coalesced percolation networks based on physical contact or electron tunneling of randomly mixed metal/nanocarbon fillers have been investigated for healable nanocomposites4,8,9,10,13,14,15,16,17,18,19,20,21,22. However, there was an irreversible degradation in σ or mechanical properties after the healing process4,8,9,10,13,14,15. The fillers had to be transported to the damaged area, reoriented, and recontacted, without solid coalescence of fillers, decreasing electrical healing efficiency. The electron tunneling mechanism was also suggested2,21, but it was challenging to uniformly disperse nanocarbon fillers within the tunneling distance.
Here we report a highly conductive completely reversible electron tunneling-assisted percolation network of silver nanosatellite (AgNS) particles for putty-like healable and moldable nanocomposites. The hierarchically structured AgNS particles were generated by the unique radical and reactive oxygen species-mediated vigorous etching and reduction reaction of silver flakes (AgFLs) using tetrahydrofuran (THF) peroxide in a healable silicone rubber (SR) matrix. The AgNS network dramatically increased σ by ~5 orders of magnitude, achieving an unusually high σ (1.02 × 10³ S cm⁻¹) in putty-like nanocomposites with excellent moldability. The AgNS particles were uniformly and densely distributed with an interparticle distance of 3.1 nm, and the close work function match between Ag and SR enabled electron tunneling. This led to a completely reversible reconstruction of the percolation network, achieving ~100% electrical healing efficiency after 1000 breaking/healing cycles. Moreover, the σ was stable even after 1000 water immersion cycles and 6-month exposure to ambient air. An emergency electronics repair demonstration was also carried out by a robot.
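To give a feel for the distance scales involved, the sketch below is a toy estimate (not from the paper) using the textbook WKB result that tunneling transmission through a rectangular barrier decays as exp(−2κd), with κ = √(2mₑφ)/ħ; the barrier height φ is an assumed illustrative value, not a measured one:

```python
import math

# Toy WKB tunneling estimate (illustrative only; phi is an assumed
# barrier height, not a value reported in the paper).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # 1 eV in J

def transmission(d_nm, phi_ev=1.0):
    """Relative WKB transmission exp(-2*kappa*d) for a gap of d_nm nm."""
    kappa = math.sqrt(2.0 * M_E * phi_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * kappa * d_nm * 1e-9)

# For a ~1 eV barrier, each extra nanometre of gap suppresses the
# transmission by roughly four orders of magnitude, which is why a
# uniform few-nanometre interparticle spacing is critical.
suppression_per_nm = transmission(3.1) / transmission(4.1)
```

The steep exponential sensitivity of `transmission` to the gap width illustrates why the uniform ~3.1 nm interparticle distance quoted above matters for the network conductivity.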
## Results
### Synthesis of silver nanosatellite particles
Figure 1 describes the synthesis of the AgNS particles by the vigorous in-situ etching and reduction reaction of AgFLs (see Methods for details). First, THF was peroxidized by atmospheric oxygen to form THF peroxide (Fig. 1a)27. This peroxidation could be inhibited by a butylated hydroxytoluene (BHT) inhibitor28. Hereafter, THF containing BHT and peroxidized THF without the inhibitor are referred to as THF and THF peroxide, respectively, unless otherwise specified. The acronyms used in this study are summarized in Supplementary Table 4. The THF peroxidation was confirmed by proton nuclear magnetic resonance (1H NMR) analysis (Fig. 1b). A distinct peak was observed at δ = 5.18 ppm for THF peroxide due to oxidation at the α-carbon29, whereas no such peak appeared for THF, verifying the absence of peroxidation. The concentration of THF peroxide was 0.068 M (Supplementary Fig. 2 and Supplementary Note 1).
Figure 1c shows the schematic of the AgNS-AgFL-SR nanocomposite. SR was previously employed for putty-like healable nanocomposites due to the hydrogen bonding of cross-linked polydimethylsiloxane chains18,30. The vigorous in-situ etching and reduction reaction of microscale AgFLs (~4.3 μm) by THF peroxide in the SR matrix generated hierarchically structured (medium-size and small-size) AgNS particles. A radical and reactive oxygen species-mediated etching and reduction reaction of Ag was proposed as the mechanism, similar to the Ag-hydrogen peroxide (H2O2) reaction31,32,33. As shown in the reaction schematic of Fig. 1c, THF peroxide etches AgFLs, yielding Ag+, a THF peroxide radical, and a hydroxide anion31. The highly reactive THF peroxide radical then forms 2-hydroxy tetrahydrofuran and a C4H7O–OO radical. In the next step, the reaction between the C4H7O–OO radical and the hydroxide anion generates 2-hydroxy tetrahydrofuran and a superoxide anion (O2•−)31,32. Finally, O2•− reduces Ag+, yielding AgNS particles33. Supplementary Fig. 3a and b show optical and scanning electron microscopy (SEM) images of AgFLs treated with THF (THF-AgFLs). AgNS particles were not generated by THF. In contrast, AgNS particles were clearly observed when AgFLs were treated with THF peroxide (AgNS-AgFLs, Supplementary Fig. 3c). AgNS particles were not generated either when the THF peroxide-Ag reaction was carried out with a radical scavenger ((2,2,6,6-Tetramethylpiperidin-1-yl)oxyl), supporting the radical-mediated reaction mechanism (Supplementary Fig. 4). This in-situ etching and reduction reaction had not previously been noticed or investigated, although AgFLs or graphene had been treated with THF in the literature9,34. Interestingly, there was no oxidation of the AgNS-AgFLs (Supplementary Fig. 5), which prevented any decrease in σ of the nanocomposites, as will be discussed later.
The generation of AgNS particles changed the color of the AgNS-AgFL-SR nanocomposite to dark brown, whereas the AgFL-SR nanocomposite (AgFLs treated with THF in SR) was light gray (Fig. 1c inset). The nanocomposite synthesis process, which does not involve thermal curing, is provided in Supplementary Fig. 6.
Figure 1d compares the Fourier transform infrared (FTIR) spectra of the pure AgFLs, THF-AgFLs, AgNS-AgFLs, and AgNS-AgFL-SR nanocomposite. The CH and C–O peaks associated with THF were observed for the THF-AgFLs. An additional strong O–C=O peak (1569 cm−1), corresponding to conjugated γ-butyrolactone, was observed for the AgNS-AgFLs. The 2-hydroxy tetrahydrofuran formed in the reaction medium generated conjugated γ-butyrolactone as a byproduct upon further oxidation35,36. Additional peaks associated with SR (Si(CH3)2 stretching, Si–O–Si stretching, Si–CH3 stretching, and CH bending) were observed for the AgNS-AgFL-SR nanocomposite37. Note that there was no chemical reaction between THF peroxide and SR itself (Supplementary Fig. 7).
The peroxidation process was further investigated by NMR analysis. The 1H NMR analysis with dimethyl sulfone as an internal standard indicated that the concentration of THF peroxide was 0.061 M (Supplementary Fig. 8a and Supplementary Note 2). This was close to the iodometric titration result (Supplementary Fig. 2, 0.068 M). The 13C NMR analysis also confirmed the peroxidation. The THF with BHT inhibitor exhibited two peaks (Supplementary Fig. 8b). In contrast, oxidation at the α-carbon atom of THF gave four distinct peaks for THF peroxide (Supplementary Fig. 8c). The 1H NMR analysis was also carried out after the reaction of THF peroxide with AgFLs (AgNS-AgFL, Supplementary Fig. 8d). The residual THF peroxide peak was negligible after the AgNS particle-generating reaction.
### Characterization of silver nanosatellite particles
There was no AgNS particle generation in the AgFL-SR nanocomposite synthesized using the THF with BHT inhibitor (Fig. 2a). The SEM image of pristine AgFLs is also shown in Fig. 2a; the smooth edges of the AgFLs were clearly observed. In contrast, AgNS particles were generated when AgFLs were treated with THF peroxide in the SR matrix at room temperature, due to the radical and reactive oxygen species-mediated vigorous etching and reduction process (as discussed in Fig. 1c). The unique hierarchically structured AgNS particles with a bimodal size distribution were categorized as medium and small AgNS particles. Figure 2b (left and middle) shows the in-situ generated medium AgNS particles between microscale AgFLs. The average particle size of the medium AgNS particles was 164 nm (Fig. 2c). Figure 2b (right) shows the rough edges and dimples of an AgFL from which medium AgNS particles were etched. The severely etched AgFL surface could also be observed when the THF peroxide amount was increased to 45 mL (Supplementary Fig. 9). The area fraction of medium AgNS particles was 10.5% (Supplementary Fig. 10). Figure 2d, e show transmission electron microscopy (TEM) images of densely and uniformly distributed small AgNS particles. The average size of the small AgNS particles was 3.7 nm, with an interparticle distance of 3.1 nm (Fig. 2c). There was no small AgNS particle generation when AgFLs were treated using the THF with BHT inhibitor (Supplementary Fig. 11).
The in-situ formation of AgNPs from AgFLs was also previously reported2,12. Ag ions diffused from the Ag2O layer of AgFLs embedded in the fluorine rubber matrix and were reduced into AgNPs (average size: 8.1 nm) by the fluorine surfactant and thermal curing process (80 and 120 °C, 1 h each)2. The diffusion continued with time, increasing the size of the AgNPs and forming nanorods2. The possibility of exfoliation of AgNPs from AgFLs was excluded2. Another work also reported the in-situ formation of AgNPs by diffusion of Ag ions from AgFLs12. The diffused ions were reduced into AgNPs by the carbonyl groups of the self-healing polymer matrix12. In contrast, we do not rely on the diffusion mechanism to synthesize the medium and small AgNS particles. The reduction was achieved without the aid of a surfactant/polymer or a thermal curing process (Supplementary Fig. 3c). Both the medium and small AgNS particles were synthesized by the unique radical and reactive oxygen species-mediated vigorous etching and reduction reaction with THF peroxide (Fig. 1c). THF peroxide with radical scavenger or THF with BHT inhibitor could not generate AgNS particles (Supplementary Figs. 4 and 11). There was no change in the electrical conductivity of the AgNS-AgFL-SR nanocomposites after 6-month storage in an ambient air environment, excluding the diffusion mechanism (as will be discussed in Supplementary Fig. 23a). The unique AgNS particles with a bimodal (medium and small) size distribution enabled nearly perfect healing efficiency (~100%), as will be discussed later. In previous reports, the stretching cycle durability and/or healing efficiency were limited without an additional encapsulation layer, with resistance increasing over repeated cycles2,12. This work also differed from our previous report, in which solvated Ag ions were directly mixed with the polymer solution38; there was no Ag ion diffusion or etching process, and the solvated Ag ions were thermally reduced into AgNPs38.
Electron tunneling has previously been suggested as a transport mechanism between nanoparticles in nanocomposites2,21. The tunneling current depends on the height (V) and width (d) of the potential barrier, as described by the Simmons approximation39.
$$I \propto \exp\left( \frac{-2d\sqrt{2m^{*}V}}{h} \right)$$
(1)
where I is the electron tunneling current, m* is the effective mass of the electron, and h is Planck's constant39. V is determined by the work function difference between the AgFLs and the SR matrix. Figure 2f shows the work function distributions of SR and AgFLs measured by Kelvin probe force microscopy (KPFM). The mean work function of AgFLs (4.71 eV) was consistent with a previous report measured by ultraviolet photoelectron spectroscopy40. The close work function match between Ag and SR lowered the barrier height and enabled electron tunneling between small AgNS particles with an interparticle distance of 3.1 nm.
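Equation (1) can be evaluated numerically to illustrate how the barrier height and the 3.1-nm interparticle distance set the relative tunneling current. A minimal sketch, taking the formula exactly as printed (with h rather than ħ), the free-electron mass as a stand-in for m*, and hypothetical barrier heights:

```python
import math

# Physical constants (SI units)
H = 6.626e-34        # Planck's constant (J s), as written in Eq. (1)
M_E = 9.109e-31      # free-electron mass (kg), assumed here for m*

def tunneling_factor(d_m, barrier_eV, m_eff=M_E):
    """Relative tunneling current from Eq. (1): exp(-2d*sqrt(2*m*V)/h)."""
    v_joule = barrier_eV * 1.602e-19  # barrier height converted eV -> J
    return math.exp(-2.0 * d_m * math.sqrt(2.0 * m_eff * v_joule) / H)

d = 3.1e-9  # measured AgNS interparticle distance (3.1 nm)

# A lower barrier (closer work function match) gives a larger tunneling current.
low_barrier = tunneling_factor(d, 0.1)   # hypothetical 0.1 eV barrier
high_barrier = tunneling_factor(d, 1.0)  # hypothetical 1.0 eV barrier
print(low_barrier > high_barrier)  # True
```

The exponential sensitivity to both d and V is why the dense 3.1-nm spacing and the Ag-SR work function match together matter: either a wider gap or a taller barrier suppresses the current sharply.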
### Electrical transport property of the nanocomposites
The electrical transport property of the AgNS-AgFL-SR nanocomposite is shown in Fig. 3. The σ was measured using a square prism-shaped specimen (Supplementary Fig. 12). Figure 3a shows the σ of the AgNS-AgFL-SR nanocomposite (Ag = 44 vol%) as a function of THF peroxide amount in the initial mixture. The σ of the AgFL-SR specimen, synthesized without THF peroxide, was only 8.64 × 10−3 S cm−1. Surprisingly, the in-situ etching and reduction reaction of AgFLs by the optimum THF peroxide amount (15 mL) increased σ by ~5 orders of magnitude, achieving 2.06 × 102 S cm−1. This was striking because a highly conductive electrical percolation network was constructed only by electron tunneling of AgNS particles, bridging AgFL islands, without curing-induced physical coalescence of fillers. The carrier mobility and concentration of the optimum AgNS-AgFL-SR specimen, measured by the van der Pauw method, were 1.59 cm2 V−1 s−1 and 1.69 × 1021 cm−3, respectively. The reproducibility of the AgNS-AgFL-SR nanocomposite was good over multiple batches (Supplementary Fig. 13). The σ decreased beyond the optimum THF peroxide amount due to the excessive etching of AgFLs. The dimples and defects on the severely-etched AgFLs acted as electron scattering sites (Supplementary Fig. 9). The density (ρ) of the AgNS-AgFL-SR nanocomposite was also maximum at the optimum THF peroxide amount (Supplementary Fig. 14).
Figure 3b shows the σ of the AgNS-AgFL-SR nanocomposite as a function of the Ag filler fraction (THF peroxide = 15 mL). Note that the in-situ etching and reduction reaction of AgFLs and the generation of AgNS particles did not change the total Ag fraction. Surprisingly, there was no enhancement in σ of the AgFL-SR nanocomposite, synthesized with THF, even when the Ag fraction increased up to 47 vol% (9.07 × 10−3 S cm−1). The σ could be increased to 1.35 × 10−1 S cm−1 (AgFL = 47 vol%) only after an additional thermal curing process (200 °C, 1 h). In contrast, the σ of the AgNS-AgFL-SR nanocomposite increased dramatically, even without the thermal curing process, when the Ag fraction was greater than 36 vol%. The electron tunneling-assisted AgNS percolation network showed good agreement with 3D percolation theory.
$$\sigma = \sigma_0 \left( V_{\mathrm{f}} - V_{\mathrm{c}} \right)^{s}$$
(2)
where σ0 is the electrical conductivity of Ag, Vf is the filler volume fraction, Vc is the experimentally obtained percolation threshold (0.36), and s is the fitting exponent23. The maximum σ of the AgNS-AgFL-SR nanocomposite was 1.02 × 103 S cm−1 (Ag = 47 vol%). The mechanical healability and moldability started to degrade at excessive Ag filler fractions, so the optimized Ag concentration was selected as 44 vol%. The AgNS-AgFL-SR nanocomposite synthesized under the optimized condition (Ag = 44 vol%, THF peroxide = 15 mL) was primarily investigated in Figs. 3–7, unless otherwise specified.
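The fitting exponent s in Eq. (2) is typically extracted by linear regression in log space, since log σ = log σ0 + s·log(Vf − Vc). A sketch of that procedure on synthetic (Vf, σ) data generated from the model itself, with the paper's Vc = 0.36 and hypothetical values for σ0 and s:

```python
import numpy as np

SIGMA_0 = 6.3e5   # bulk Ag conductivity, S/cm (assumed literature value)
V_C = 0.36        # experimentally obtained percolation threshold
S_TRUE = 2.0      # hypothetical exponent used to generate synthetic data

# Synthetic (Vf, sigma) points above the threshold, following Eq. (2)
vf = np.array([0.38, 0.40, 0.42, 0.44, 0.47])
sigma = SIGMA_0 * (vf - V_C) ** S_TRUE

# Linear regression in log space: log(sigma) = s*log(Vf - Vc) + log(sigma0)
s_fit, log_sigma0 = np.polyfit(np.log(vf - V_C), np.log(sigma), 1)
print(round(s_fit, 3))  # recovers the exponent used to generate the data
```

With real measurements, Vc itself can also be treated as a free parameter and scanned to maximize the goodness of the log-log fit.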
As a control, a H2O2-AgFL-SR nanocomposite (Ag = 44 vol%) was synthesized by mixing H2O2-treated AgFLs with SR. The concentration of H2O2 was the same as that of the THF peroxide (0.068 M). There was only a slight increase in σ (9.13 × 10−3 S cm−1) compared with the AgFL-SR nanocomposite. The cyclic voltammetry analysis of AgFLs was carried out using the THF peroxide or H2O2 electrolyte (Supplementary Fig. 15). The oxidation peak was observed at a higher bias potential for H2O2, demonstrating that AgFLs were more resistant to oxidation in the H2O2 environment and thus generated a smaller number of AgNS particles. The H2O2 etching generated only a small number of AgNPs (<20 nm) even at a higher concentration (0.68 M, Supplementary Fig. 16). This demonstrated the efficient reaction of THF peroxide, generating densely and uniformly distributed AgNS particles. As another control, commercial AgNPs (~293 nm, 10.5 vol%) were mixed with AgFLs (33.5 vol%) in the SR matrix using THF with inhibitor instead of THF peroxide (AgNP-AgFL-SR nanocomposite). The size and concentration of the AgNPs were similar to those of the medium AgNS particles (Supplementary Fig. 17a). The σ of the AgNP-AgFL-SR nanocomposite (8.29 × 10−3 S cm−1) was significantly smaller than that of the AgNS-AgFL-SR nanocomposite (2.06 × 102 S cm−1). Commercial AgNPs with a smaller diameter (~30 nm, 10.5 vol%) were also investigated (Supplementary Fig. 17b). The σ of the AgNP-AgFL-SR nanocomposite (7.83 × 10−3 S cm−1, AgNP = ~30 nm) was similar to that of the nanocomposite with larger AgNPs (8.29 × 10−3 S cm−1, AgNP = ~293 nm). The commercial AgNPs could not mimic the in-situ generated AgNS particles, which had both medium (~164 nm) and small (~3.7 nm) sizes. This demonstrated the unique role of the in-situ AgNS particles synthesized by THF peroxide, which could not be replicated by other methods.
The σ of the AgNS-AgFL-SR nanocomposite was also compared with those of putty-like or playdough-like healable and moldable nanocomposites in the literature (Fig. 3b). Literature data for which the filler volume fraction or σ is unavailable are summarized in Supplementary Table 2. The maximum σ of the moldable conductive filler-polymer matrix nanocomposites was only 1.98 S cm−1 (ref. 17). One moldable nanocomposite made of conducting polymer provided 78 S cm−1 (ref. 6). However, the concept of filler fraction was not applicable to the conducting polymer since the conduction mechanism was completely different. The maximum σ of the AgNS-AgFL-SR nanocomposite (1.02 × 103 S cm−1) was about 3 orders of magnitude greater than that of the moldable conductive filler-polymer matrix nanocomposites in the literature17. As shown in Fig. 3c, the ρ of the AgNS-AgFL-SR nanocomposite was close to the theoretical value calculated by the rule of mixtures41. The densely and uniformly distributed fillers were embedded in the polymer matrix without air voids. The other control specimens showed a lower ρ, probably due to air voids, which scattered electrons and decreased σ.
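The rule-of-mixtures density referenced above is a volume-fraction-weighted average of the constituent densities. A minimal sketch, assuming handbook values for bulk Ag and a typical silicone rubber (the paper's exact matrix density is not stated):

```python
# Rule-of-mixtures density: rho = Vf*rho_filler + (1 - Vf)*rho_matrix
RHO_AG = 10.49  # g/cm^3, bulk silver (assumed literature value)
RHO_SR = 1.1    # g/cm^3, silicone rubber (assumed typical value)

def mixture_density(vf_ag, rho_filler=RHO_AG, rho_matrix=RHO_SR):
    """Theoretical void-free composite density at Ag volume fraction vf_ag."""
    return vf_ag * rho_filler + (1.0 - vf_ag) * rho_matrix

rho_44 = mixture_density(0.44)  # density at the optimized 44 vol% Ag loading
print(round(rho_44, 2))  # ~5.23 g/cm^3; air voids would lower the measured value
```

A measured ρ falling below this theoretical line is the signature of air voids, which is how the control specimens' lower σ is rationalized in the text.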
As shown in Fig. 3d, the electrical resistance (R) of the AgNS-AgFL-SR nanocomposite decreased monotonically with compression, primarily due to the geometry change. The steady-state R was recorded after an initial relaxation period at each strain. The dynamic stress relaxation behavior upon compressive step strain was also investigated (Fig. 3e). The stress rapidly increased upon step strain, followed by a slow (40–50 s) relaxation. The large stress relaxation was consistent with the characteristics of viscous moldable materials6. There was also a rapid decrease in R upon step strain, followed by a slow relaxation (Fig. 3f). The electrical relaxation duration (40–50 s) was similar to the stress relaxation duration. This indicated that the small AgNS particles immediately rearranged and reconstructed the tunneling-assisted percolation network during the stress relaxation period, stabilizing the electrical transport property. Note that randomly mixed graphene fillers in a viscoelastic polymer matrix exhibited a much slower electrical relaxation than stress relaxation, possibly due to the diffusion, reorientation, and recontact of fillers18.
### Healable electrical transport and healing mechanism
Figure 4 shows the mechanical and electrical healing process of the AgNS-AgFL-SR nanocomposite. Healability typically stems from the dynamic chemical bonds of polymers11,30. The healability of SR is attributed to hydrogen bonding, as it contains both a hydrogen bond donor (oxygen atom in the dimethylsiloxane network) and an acceptor (hydrogen atom)30. The bifurcated AgNS-AgFL-SR nanocomposites were immediately healed by a gentle finger touch for ~2 s (Fig. 4a and b). The applied pressure was 194 kPa (Supplementary Fig. 18). Figure 4c shows the change in R of the AgNS-AgFL-SR nanocomposite during a breaking/healing cycle. The R increased to infinity after bifurcation of the specimen. Surprisingly, it immediately recovered to its initial value (~0.1 Ω) upon gentle touch of the bifurcated specimens. The AgNS particles were already densely and uniformly distributed in the SR matrix within the electron tunneling distance. These particles immediately reconstructed the percolation network upon touch (Fig. 4b schematic), without the need for diffusion, reorientation, and recontact of fillers. The σ of the AgNS-AgFL-SR nanocomposite was completely recovered (electrical healing efficiency, σ/σ0 = ~100%) even after 1000 breaking/healing cycles (Fig. 4d). The specimen was reshaped into a square prism after the breaking/healing cycles for precise σ measurement by the four-point probe in-line method (Supplementary Fig. 12). The previously reported healable nanocomposites could not provide perfect electrical healing efficiency (Supplementary Tables 1 and 2). This demonstrated the excellence of the AgNS particles for repeatable electrical percolation network construction in healable nanocomposites.
The compressive modulus (E) of the AgNS-AgFL-SR nanocomposite was almost invariant during 20 breaking/healing cycles, indicating a reversible mechanical property (Supplementary Fig. 19). The specimen was compressed, manually bifurcated, healed, and reshaped for each stress-strain experiment. The tensile stress-strain characteristics were also investigated during 20 breaking/healing cycles (Supplementary Fig. 20). The dumbbell-shaped AgNS-AgFL-SR (Ag = 47 vol%, THF peroxide = 15 mL) specimen was clamped, stretched, bifurcated, healed, and reshaped for each tensile stress-strain experiment. Almost completely reversible mechanical properties were observed, realizing ~100% healing efficiency in tensile modulus, although there was some fluctuation. The fluctuation could be due to the slightly different shape of the dumbbell-shaped test specimen in each tensile test, because it was manually formed using a polylactic acid mold. There was no degradation in mechanical properties after the repeated tensile tests.
The electrical transport property of the AgNS-AgFL-SR nanocomposite was also investigated after thermal curing (200 °C, 1 h) in order to further elucidate the nature of the AgNS percolation network (Fig. 5a). Differential scanning calorimetry (DSC) analysis was employed to investigate the coalescence behavior of the AgNS particles at high temperatures (Fig. 5b and c). There was an exothermic peak at 159 °C for the AgNS-AgFL particles, indicating coalescence of Ag particles (Fig. 5b)41,42. However, the exothermic peak disappeared when the AgNS particles were embedded in the SR matrix (Fig. 5c). The AgNS particles were uniformly encapsulated by SR, preventing coalescence up to 300 °C. Coalescence of AgNS particles could not be observed in SEM images even after the heating process (200 °C, 1 h), as shown in Fig. 5d. Consequently, there was no increase in σ of the AgNS-AgFL-SR nanocomposite even after the thermal curing process (Fig. 5g). The electrical transport was still realized by the tunneling-assisted percolation network of AgNS particles. This was advantageous for reconstruction of the percolation network, maintaining a high and stable σ for healable nanocomposites, since there was no fracture of coalesced fillers during the breaking process (Fig. 5a). Indeed, there was no change in σ of the thermally-cured AgNS-AgFL-SR nanocomposite during 1000 breaking/healing cycles. The AgNS-AgFL-SR nanocomposite started to decompose above 400 °C (Supplementary Fig. 21). A cyclic temperature-sweep rheology measurement of the AgNS-AgFL-SR nanocomposite was also carried out (Supplementary Fig. 22). The storage (G’) and loss (G”) moduli returned to their initial values at room temperature when the maximum temperature was 80 °C. The variation in G’ and G” was less than an order of magnitude in the investigated temperature range, although there was some hysteresis. However, G’ and G” increased at room temperature when the maximum temperature was higher (100 and 120 °C), possibly due to evaporation of the remnant solvent. Nevertheless, the electrical conductivity could be healed after curing at 200 °C for 1 h, as discussed in Fig. 5g.
As a control, flower-shaped Ag nanoparticles (AgNFs) were mixed with the SR matrix (Fig. 5a, AgNF-SR nanocomposite). AgNFs had a hierarchical nanostructure with two-dimensional thin petals (~12 nm) protruding radially from a spherical bud (~400 nm) (Fig. 5e)24,41,43. The DSC analysis revealed a strong exothermic peak at 122 °C for AgNFs, indicating active coalescence of the thin petals (Fig. 5b)41,43. Unlike the AgNS-AgFL-SR nanocomposite, AgNFs embedded in SR still exhibited an exothermic coalescence peak at 198 °C (Fig. 5c), although the sintering temperature increased due to the wrapping polymer. The coalesced AgNFs could be clearly observed (Fig. 5f). The randomly mixed AgNFs coalesced due to imperfect encapsulation by the polymer matrix. Consequently, the σ of the AgNF-SR nanocomposite significantly increased from 1.80 to 632 S cm−1 after the thermal curing process (200 °C, 1 h, Fig. 5g). The σ of typical filler-polymer matrix nanocomposites in the literature also increased after a thermal curing process2,23,38,43. However, the σ dramatically decreased to 8.48 × 10−3 S cm−1 after the first breaking/healing cycle due to fracture of the coalesced AgNFs (Fig. 5a and g). The σ could not be recovered during the 1000 breaking/healing cycles.
Strikingly, there was no change in σ of the AgNS-AgFL-SR nanocomposite during 1000 water immersion cycles due to the hydrophobic nature of the SR (Fig. 5h). The σ and E were also stable for 6 months in an ambient air environment (Supplementary Fig. 23). The AgNS-AgFL-SR nanocomposite maintained stability under harsh conditions, which is useful for practical industrial applications.
### Mechanical property, moldability, and application
Figure 6a compares the E of the nanocomposites as a function of the Ag filler fraction. The E was estimated from stress-strain curves (Supplementary Fig. 24). The AgNS-AgFL-SR nanocomposite exhibited a greater E than the AgFL-SR nanocomposite over the entire filler fraction range. The E of the AgNS-AgFL-SR nanocomposite was also greater than those of the H2O2-AgFL-SR and AgNP-AgFL-SR nanocomposites (Ag = 44 vol%). As shown in Fig. 6b, a finite element analysis (FEA)44 was carried out to compare the E of the nanocomposite before and after the in-situ etching and reduction process (see Supplementary Note 3 for details). The AgNS-AgFL-SR nanocomposite was modeled as a unit cube (side length = 0.4 µm) composed of two parts (Fig. 6b inset). Before the in-situ etching process, the nanocomposite consisted of the polymer matrix and Ag flakes (AgFL-SR nanocomposite), which was taken as the reference matrix of the model. The stress-strain characteristics of this reference matrix were obtained directly from the experimental measurements of the AgFL-SR nanocomposite (Fig. 6a). The AgNS particles, modeled as cubic elements of similar size (4 nm), constituted the second part. The effect of the AgNS particles on the E of the AgNS-AgFL-SR nanocomposite was simulated by FEA. This approach allowed direct comparison of the simulation results with the experimental data (Fig. 6a). Four numerical models with different AgNS fractions (3, 6, 9, 12 vol%) were created (Supplementary Fig. 25). The simulated E of the AgNS-AgFL-SR nanocomposite increased as the AgNS fraction increased (Fig. 6b), consistent with the experimental data (Fig. 6a). From the regression analysis (goodness of fit = 93.1%), an AgNS fraction of 12.2 vol% matched the experimentally measured E. The stress distribution at 1% compressive strain is provided in Supplementary Fig. 26. A larger stress was generated around the AgNS particles, since the stiffer AgNS particles carried a much heavier load than the reference matrix. Furthermore, the reference matrix exhibited local stiffening in the AgNS particle-rich region, as if the particles were connected. This was reasonable because the reference matrix had a smaller E than the AgNS particles, absorbing the impact of compression.
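The regression step that matched the simulated E to the measured value can be sketched as a simple fit of E versus AgNS fraction, inverted to find the fraction reproducing a target modulus. The numbers below are placeholders standing in for the FEA outputs (the actual values are in the paper's Supplementary Fig. 25):

```python
import numpy as np

# Hypothetical FEA outputs: compressive modulus E (MPa) at four AgNS fractions.
# These values are illustrative only, chosen to show the regression step.
frac = np.array([3.0, 6.0, 9.0, 12.0])      # AgNS volume fraction, %
e_sim = np.array([10.0, 13.0, 16.0, 19.0])  # simulated E, MPa (assumed linear)

# Linear regression E = a*frac + b, then invert to find the fraction that
# reproduces a measured modulus.
a, b = np.polyfit(frac, e_sim, 1)
e_measured = 19.2  # hypothetical experimental E, MPa
frac_match = (e_measured - b) / a
print(round(frac_match, 1))  # AgNS fraction matching the measured E
```

A goodness-of-fit statistic (as the paper's 93.1%) would come from comparing the regression line against the simulated points, e.g., via the coefficient of determination.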
The G’ and G” were also measured as a function of frequency (Fig. 6c). The G’ and G” of the pure SR were similar in the low frequency range (<6.3 rad s−1), demonstrating viscoelastic behavior. The addition of AgFLs to SR (AgFL-SR) increased both G’ and G”. The in-situ etching and reduction reaction of AgFLs (AgNS-AgFL-SR) further increased G’ and G”. Interestingly, the relative magnitude of G” became greater than G’ in the low frequency range. This was also observed in the damping factor (tan δ = G”/G’, Fig. 6d)1,6,45. The tan δ of the AgNS-AgFL-SR nanocomposite was greater than one in the low frequency range (<6.3 rad s−1). The densely and uniformly dispersed AgNS particles made the nanocomposite more viscous, which is favorable for energy dissipation from mechanical shock or vibration1,45. The addition of AgFLs and AgNS particles increased the dynamic viscosity (η) (Fig. 6d inset). The η decreased with increasing shear strain rate, demonstrating shear thinning behavior. This indicated that there was no severe aggregation or alignment of the AgNS particles under high-rate mechanical deformation17. Figure 6e and f show the excellent macroscale and microscale moldability of the AgNS-AgFL-SR nanocomposite. Various shapes were easily formed by the simple molding process.
As an application demonstration, the highly conductive AgNS-AgFL-SR nanocomposite was employed as random-shaped electrical interconnectors, stably operating light-emitting diodes (Fig. 6g). An emergency electronics repair demonstration was also performed by a robot using the AgNS-AgFL-SR nanocomposite (Fig. 7 and Supplementary Movie 1). This is useful for accidents in places that humans cannot enter due to toxic gas leakage or high temperature. An exhaust fan connected to the healable nanocomposite circuit, which was heated to 200 °C, removed toxic gas under these harsh conditions (Fig. 7a). A situation was simulated in which the circuit was cut, allowing the toxic gas to accumulate and its concentration to rise above the safe level (Fig. 7b and c). Dry ice in water was used as a simulant for the toxic gas, and the relative humidity was monitored. A robot was dispatched to the emergency site, healing the AgNS-AgFL-SR circuit (Fig. 7d). The fan restarted, removing the toxic gas and decreasing the gas concentration below the safe level (Fig. 7e).
## Discussion
The vigorous in-situ etching and reduction reaction of microscale AgFLs by THF peroxide generated uniformly distributed medium (164 nm) and small (3.7 nm) AgNS particles in a healable and moldable SR matrix (AgNS-AgFL-SR nanocomposite) at room temperature. The close work function match between Ag and SR enabled electron tunneling between small AgNS particles (interparticle distance = 3.1 nm). This constructed an electron tunneling-assisted percolation network between AgFL islands without physical coalescence of fillers, increasing σ by ~5 orders of magnitude and achieving an extraordinarily high σ (1.02 × 103 S cm−1) for putty-like nanocomposites. The ~100% healing efficiency was maintained even after 1000 breaking/healing cycles since the percolation network relied on electron tunneling rather than physical coalescence of fillers. The σ was also stable under harsh conditions such as 1000 water immersion cycles and 6-month exposure to ambient air. The AgNS particles increased G” in the low frequency range, resulting in excellent moldability with shear thinning behavior. An emergency electronics repair demonstration was also performed by a robot. The highly conductive, moldable, healable, and stable AgNS-AgFL-SR nanocomposite is promising for future electronic materials.
## Methods
### Synthesis of moldable nanocomposites
The AgNS-AgFL-SR nanocomposite was synthesized by the following process. THF without inhibitor (Sigma-Aldrich, 401757) was exposed to ambient air for a week to prepare THF peroxide (0.068 M). THF with butylated hydroxytoluene inhibitor (Sigma-Aldrich, 186562) was also employed to prevent THF peroxidation. The viscoelastic SR (Wacker Chemie, Elastosil R 401/10, 15 wt%) was dissolved in the THF with inhibitor by stirring for 24 h at room temperature to prepare a polymer matrix solution. AgFLs (Metalor, SA-31812, 29–47 vol%) were dispersed in the SR matrix solution (2 g) with additional THF peroxide (0–45 mL) by tip sonication (Sonictopia, STH-750S, 525 W, 10 min) and subsequent stirring to generate AgNS particles. The mixture was then drop-cast on a Teflon petri dish and dried overnight to obtain the AgNS-AgFL-SR nanocomposite.
The AgFL-SR (AgFL = 29–47 vol%) nanocomposite was synthesized by dispersing AgFLs in the SR matrix solution (2 g) with additional THF with inhibitor (15 mL), instead of THF peroxide. The AgNP-AgFL-SR nanocomposite was synthesized by dispersing both commercial AgNPs (Inframat Advanced Materials, 47MN-06, ~293 nm, 10.5 vol% or CNVISION, Ag Nanopowder, ~30 nm, 10.5 vol%) and AgFLs (33.5 vol%) in the SR matrix solution (2 g) with additional THF with inhibitor (15 mL). AgNFs (0.735 g, 20 vol%), synthesized following a previously published protocol43, were mixed with the SR matrix solution (2 g) with additional THF with inhibitor (15 mL) to synthesize the AgNF-SR nanocomposite.
The H2O2-AgFL-SR nanocomposite was also prepared. Firstly, H2O2 (Sigma-Aldrich, 216763, 9.77 M) was diluted using ethanol to obtain an H2O2 solution (0.068 or 0.68 M, 15 mL). In the next step, AgFLs (2.311 g) were treated by the H2O2 solution and dried overnight to obtain H2O2-AgFL powders. The H2O2-AgFL powders were then mixed with the SR matrix solution (2 g) with additional THF with inhibitor (15 mL). The ratio of the AgFLs (2.311 g) to H2O2 (15 mL) was the same as that of the AgNS-AgFL-SR (Ag = 2.311 g at 44 vol%, THF peroxide = 15 mL, 0.068 M) nanocomposite.
### Characterization
The σ was measured by the four-point probe in-line method using a laboratory-built device23. The mean values and standard deviations of multiple specimens (≥3) were evaluated at each condition in Figs. 3–5. The R was measured by the two-point probe method using a direct current power supply (Keithley, 2280S-32-6). The resistance of the probing wires and contact electrodes was subtracted from the total resistance to precisely measure the resistance of the specimens38. The work function was measured by KPFM (Nanonavi, E-sweep) using a rhodium-coated silicon probe (Nanoworld, SI-DF3-R). The measurement was calibrated using highly oriented pyrolytic graphite. The ρ was measured by the Archimedes method (Sartorius, Quintix224-1SKR). The NMR (Bruker, AVANCEIII700, 700 MHz) analysis was carried out using CDCl3 (δ = 7.26 ppm, Sigma-Aldrich, 151858). The specimens were analyzed by FTIR (Bruker, IFS-66/S, TENSOR27), SEM (Jeol, JSM-7500F), TEM (Jeol, JEM-2100F), XRD (Smart Lab, Cu Kα radiation at 1.5418 Å, 45 kV, and 200 mA), XPS (Thermo Scientific, ESCALAB250), DSC (Seiko, DSC7020, 25–300 °C, 10 °C min−1, nitrogen atmosphere), and TGA (Seiko, TG/DTA7300, 25–800 °C, 10 °C min−1, nitrogen atmosphere). The water contact angles were measured (SmartDrop, FEMTOFAB, SDL200TEZD). The stress-strain characteristics were measured by a universal testing machine (Instron, 3343). The rheological property was measured by a rheometer (TA Instruments, ARES-G2). The cyclic voltammetry analysis was also carried out (CH Instruments, Electrochemical Workstation, CHI660C).
### Emergency electronics repair by a robot
A robot arm (ABB, IRB120) equipped with a robot hand (ROBOTIQ, 2F-85) performed the emergency electronics repair using the AgNS-AgFL-SR nanocomposite. The robot was controlled using a robot controller (ABB, IRC5) and a teach pendant. The fog effect was created using dry ice (Taekyung Chemical) in water. The relative humidity was also monitored (Grove-THO2 sensor).
## Data availability
The authors declare that the main data supporting the findings of this study are available within the article and its Supplementary Information files. All other relevant data are available from the corresponding author upon reasonable request.
## Code availability
The code used to calculate the results for this work is available from the authors upon reasonable request.
## References
1. Kim, Y. et al. Stretchable nanoparticle conductors with self-organized conductive pathways. Nature 500, 59–63 (2013).
2. Matsuhisa, N. et al. Printable elastic conductors by in situ formation of silver nanoparticles from silver flakes. Nat. Mater. 16, 834–840 (2017).
3. Choi, S. et al. Highly conductive, stretchable and biocompatible Ag–Au core–sheath nanowire composite for wearable and implantable bioelectronics. Nat. Nanotechnol. 13, 1048–1056 (2018).
4. Son, D. et al. An integrated self-healable electronic skin system fabricated via dynamic reconstruction of a nanostructured conducting network. Nat. Nanotechnol. 13, 1057–1065 (2018).
5. Markvicka, E. J., Bartlett, M. D., Huang, X. & Majidi, C. An autonomously electrically self-healing liquid metal–elastomer composite for robust soft-matter robotics and electronics. Nat. Mater. 17, 618–624 (2018).
6. Oh, J. Y., Kim, S., Baik, H.-K. & Jeong, U. Conducting polymer dough for deformable electronics. Adv. Mater. 28, 4455–4461 (2016).
7. Yang, Y. et al. Self-healing of electrical damage in polymers using superparamagnetic nanoparticles. Nat. Nanotechnol. 14, 151–155 (2019).
8. Zou, Z. et al. Rehealable, fully recyclable, and malleable electronic skin enabled by dynamic covalent thermoset nanocomposite. Sci. Adv. 4, eaaq0508 (2018).
9. Zhang, Q. et al. An elastic autonomous self-healing capacitive sensor based on a dynamic dual crosslinked chemical system. Adv. Mater. 30, 1801435 (2018).
10. Lai, J.-C. et al. A rigid and healable polymer cross-linked by weak but abundant Zn(II)-carboxylate interactions. Nat. Commun. 9, 2725 (2018).
11. Yanagisawa, Y., Nan, Y., Okuro, K. & Aida, T. Mechanically robust, readily repairable polymers via tailored noncovalent cross-linking. Science 359, 72–76 (2018).
12. Kim, S. H. et al. An ultrastretchable and self-healable nanocomposite conductor enabled by autonomously percolative electrical pathways. ACS Nano 13, 6531–6539 (2019).
13. Gong, C. et al. A healable, semitransparent silver nanowire-polymer composite conductor. Adv. Mater. 25, 4186–4191 (2013).
14. Bae, J.-S. et al. The feasibility of healable electronics and mechanical behavior of silver nanowire (AgNW)/healable polymer composite. Adv. Mater. Technol. 3, 1700364 (2018).
15. Sun, H. et al. Self-healable electrically conducting wires for wearable microelectronics. Angew. Chem. 126, 9680–9685 (2014).
16. D’Elia, E., Barg, S., Ni, N., Rocha, V. G. & Saiz, E. Self-healing graphene-based composites with sensing capabilities. Adv. Mater. 27, 4788–4794 (2015).
17. Wu, T. & Chen, B. A mechanically and electrically self-healing graphite composite dough for stencil-printable stretchable conductors. J. Mater. Chem. C 4, 4150–4154 (2016).
18. Boland, C. S. et al. Sensitive electromechanical sensors using viscoelastic graphene-polymer nanocomposites. Science 354, 1257–1260 (2016).
19. Wu, T. & Chen, B. Synthesis of multiwalled carbon nanotube-reinforced polyborosiloxane nanocomposites with mechanically adaptive and self-healing capabilities for flexible conductors. ACS Appl. Mater. Interfaces 8, 24071–24078 (2016).
20. Zhong, X., Hu, H. & Fu, H. Self-cleaning, chemically stable, reshapeable, highly conductive nanocomposites for electrical circuits and flexible electronic devices. ACS Appl. Mater. Interfaces 10, 25697–25705 (2018).
21. Yuan, F. et al. A flexible viscoelastic coupling cable with self-adapted electrical properties and anti-impact performance toward shapeable electronic devices. J. Mater. Chem. C 7, 8412–8422 (2019).
22. Chen, Y. et al. Shape-adaptive, self-healable triboelectric nanogenerator with enhanced performances by soft solid–solid contact electrification. ACS Nano 13, 8936–8945 (2019).
23. Chun, K.-Y. et al. Highly conductive, printable and stretchable composite films of carbon nanotubes and silver. Nat. Nanotechnol. 5, 853–857 (2010).
24. Ma, R., Kang, B., Cho, S., Choi, M. & Baik, S. Extraordinarily high conductivity of stretchable fibers of polyurethane and silver nanoflowers. ACS Nano 9, 10876–10886 (2015).
25. Park, M. et al. Highly stretchable electric circuits from a composite material of silver nanoparticles and elastomeric fibres. Nat. Nanotechnol. 7, 803–809 (2012).
26. Jiang, Z. et al. Highly stretchable metallic nanowire networks reinforced by the underlying randomly distributed elastic polymer nanofibers via interfacial adhesion improvement. Adv. Mater. 31, 1903446 (2019).
27. Matsubara, H., Suzuki, S. & Hirano, S. An ab initio and DFT study of the autoxidation of THF and THP. Org. Biomol. Chem. 13, 4686–4692 (2015).
28. Giammarse, A. T., Alliet, D. F. & Pacco, J. M. Precautions in the use of tetrahydrofuran for UV analysis of GPC effluents. Polym. Lett. 6, 499–506 (1968).
29. Zhang, B. et al. Delineating oxidative processes of aqueous C60 preparations: role of THF peroxide. Environ. Sci. Tech. 43, 108–113 (2009).
30. Stricher, A. M., Rinaldi, R. G., Barrès, C., Ganachaud, F. & Chazeau, L. How I met your elastomers: from network topology to mechanical behaviours of conventional silicone materials. RSC Adv. 5, 53713–53725 (2015).
31. Guo, J.-Z., Cui, H., Zhou, W. & Wang, W. Ag nanoparticle-catalyzed chemiluminescent reaction between luminol and hydrogen peroxide. J. Photochem. Photobiol. A 193, 89–96 (2008).
32. Kitajima, N., Fukuzumi, S. & Ono, Y. Formation of superoxide ion during the decomposition of hydrogen peroxide on supported metal oxides. J. Phys. Chem. A 82, 1505–1509 (1978).
33. He, D., Garg, S. & Waite, T. D. H2O2-mediated oxidation of zero-valent silver and resultant interactions among silver nanoparticles, silver ions, and reactive oxygen species. Langmuir 28, 10266–10275 (2012).
34. Fang, M. et al. Flexible and recyclable conductive composite based on few-layered graphene with excellent self-healing capability. Eur. Polym. J. 108, 536–541 (2018).
35. Ausavasukhi, A. & Sooknoi, T. Oxidation of tetrahydrofuran to butyrolactone catalyzed by iron-containing clay. Green Chem. 17, 435–441 (2015).
36. Mallat, T. & Baiker, A. Reactions in “sacrificial” solvents. Catal. Sci. Technol. 1, 1572–1583 (2011).
37. Gao, Y. et al. Investigation on permeation properties of liquids into HTV silicone rubber materials. IEEE Trans. Dielectr. Electr. Insul. 21, 2428–2437 (2014).
38. Ajmal, C. M., Bae, S. & Baik, S. A superior method for constructing electrical percolation network of nanocomposite fibers: in situ thermally reduced silver nanoparticles. Small 15, 1803255 (2019).
39. Reusch, T. Cross-Sectional Scanning Tunneling Microscopy of Au Contacts on GaAs (110). (Cuvillier Verlag, Gottingen, 2003).
40. Faseela, K. P., Kim, Y. J., Kim, S.-G., Kim, S. W. & Baik, S. Dramatically enhanced stability of silver passivated dicalcium nitride electride: Ag-Ca2N. Chem. Mater. 30, 7803–7812 (2018).
41. Suh, D., Lee, S., Xu, C., Jan, A. A. & Baik, S. Significantly enhanced phonon mean free path and thermal conductivity by percolation of silver nanoflowers. Phys. Chem. Chem. Phys. 21, 2453–2462 (2019).
42. Suh, D., Moon, C. M., Kim, D. & Baik, S. Ultrahigh thermal conductivity of interface materials by silver-functionalized carbon nanotube phonon conduits. Adv. Mater. 28, 7220–7227 (2016).
43. Ajmal, C. M., Faseela, K. P., Singh, S. & Baik, S. Hierarchically-structured silver nanoflowers for highly conductive metallic inks with dramatically reduced filler concentration. Sci. Rep. 6, 34894 (2016).
44. Lim, J. G. et al. Parametric study for optimal design of an air plasma sprayed thermal barrier coating system with respect to thermal stress. Surf. Coat. Technol. 315, 105–111 (2017).
45. Zhang, J., Perez, R. J. & Lavernia, E. J. Documentation of damping capacity of metallic, ceramic and metal-matrix composite materials. J. Mater. Sci. 28, 2395–2404 (1993).
## Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C3003199 and NRF-2017R1A2A1A17069289). We acknowledge Hyoyoung Lee for helping with the radical scavenger analysis.
## Author information
### Contributions
D.S., K.P.F., and S.B. conceived and designed the experiments, which were carried out by D.S., K.P.F., and W.K. C.P., S.S., and H.M. performed the robot demonstration. J.G.L. and M.K.K. carried out finite element analysis. D.S., K.P.F., and S.B. wrote the paper. All authors contributed to data analysis and scientific discussion.
### Corresponding author
Correspondence to Seunghyun Baik.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Suh, D., Faseela, K.P., Kim, W. et al. Electron tunneling of hierarchically structured silver nanosatellite particles for highly conductive healable nanocomposites. Nat Commun 11, 2252 (2020). https://doi.org/10.1038/s41467-020-15709-8
https://www.reddit.com/r/learnmath/comments/14glee/arithmetic_problem_a_little_confused/
**OP:** Do you just erase the brackets and subtract 9 - 8, thus becoming +1?

**Reply:** ... yes.

((x+2)+3) = (x+5)

**Reply:** Ah, the condescending ellipsis.

**Reply:** Well, note that (x + 9) - 8 = x + (9 - 8) by associativity, and then x + (9 - 8) = x + 1.

**OP:** I was thinking you multiply -8 by x and -8 by +9? Strange.

**Reply:** No. The "-" sign means subtraction, unless there's nothing to subtract from. Then, and only then, does it mean negation. So (x+9)-8 means "subtract 8 from the quantity (x+9)". On the other hand, (x+9)(-8) means "multiply negative 8 by the quantity (x+9)", which is also what you'd get from (-8)(x+9) or -8(x+9). In all those cases, there's nothing to the immediate left of the "-" sign, so it means negation, not subtraction.

**OP:** Another question: why does y = f(3x)/4 become y = (1/4)f(3x)? I must have forgotten...

**Reply:** Two ways to look at it; both give you the same result:

• Division is the same as multiplying by the reciprocal. So "f(3x) divided by 4" is the same as "f(3x) times the reciprocal of 4". In other words, (1/4) times f(3x).
• f(3x)/4 = (1*f(3x))/(4*1) = (1/4)*(f(3x)/1) = (1/4)*f(3x)

**Reply:** Because you were confused by the notation above, I just want to make sure you're clear on this: f is not a variable. f() usually stands for a function that's defined elsewhere in the problem. It's not "f times 3x", it's "a function with the input 3x".

Second, as you've written it, y = 1/4f(3x) is ambiguous: it could mean "1/4 * f(3x)" or "1/(4*f(3x))". The former is correctly equal to the original expression; the latter is not.

**Reply:** Multiplication would be written with the -8 in parentheses: ((x+9)(-8)). ((x+9)-8) is just subtraction. It looks like you're confused about notation, not math.

**OP:** Okay, thanks.
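Both points in this thread, that (x+9)-8 is plain subtraction and that dividing f(3x) by 4 equals multiplying it by 1/4, can be sanity-checked numerically. The function f below is an arbitrary stand-in, not one from the original problem:

```python
def f(t):
    # Arbitrary stand-in function; any f works for these identities
    return t * t + 1

for x in [-3, 0, 2.5, 10]:
    # (x + 9) - 8 is subtraction of 8, which simplifies to x + 1
    assert (x + 9) - 8 == x + 1
    # Dividing by 4 is the same as multiplying by the reciprocal 1/4
    assert f(3 * x) / 4 == (1 / 4) * f(3 * x)

print("both identities hold for the sample values")
```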
https://docs.vmware.com/en/Management-Packs-for-vRealize-Operations-Manager/1.5/kubernetes-solution/GUID-E1CFF270-7AC5-4303-B625-8FC55B681EF2.html
To set up the Windows exporter on a Windows node, you have to perform the following actions.
## Procedure
1. Install Go on each Windows node.
2. Download and install the Windows exporter MSI with the required collectors enabled. The original snippet only defined `$url` and ran `msiexec`; the `Invoke-WebRequest` line here is an assumed download step (any download method works):

   ```
   $url = "https://github.com/prometheus-community/windows_exporter/releases/download/v0.14.0/windows_exporter-0.14.0-amd64.msi"
   # Download the installer (assumed step; any download method works)
   Invoke-WebRequest -Uri $url -OutFile "windows_exporter-0.14.0-amd64.msi"
   # Install with the listed collectors enabled
   msiexec /i windows_exporter-0.14.0-amd64.msi ENABLED_COLLECTORS=cpu,cs,logical_disk,net,os,service,system,textfile,container,memory
   ```
https://www.freemathhelp.com/forum/threads/how-do-you-find-general-patterns-trends-in-a-sequence-of-random-numbers.126356/
# How do you find general patterns/trends in a sequence of random numbers?
#### MasterAria
##### New member
I'm currently working on a coding project and want to incorporate very basic pattern finding. However, the sequence of data that it needs to find patterns in is a random sequence of 5 numbers, each less frequent than the previous. How can I go about programming this? I can't seem to find any methods of doing this.
#### Dr.Peterson
##### Elite Member
I'm currently working on a coding project and want to incorporate very basic pattern finding. However, the sequence of data that it needs to find patterns in is a random sequence of 5 numbers, each less frequent than the previous. How can I go about programming this? I can't seem to find any methods of doing this.
I'm not sure what you mean by "each less frequent than the previous". Can you give an example?
But the bigger issue is, how are you defining "pattern"? Often that is entirely subjective, and can't be defined mathematically. If you have a specific kind of "pattern" in mind, you'll have to define it before you can program it.
#### MasterAria
##### New member
I'm not sure what you mean by "each less frequent than the previous". Can you give an example?
But the bigger issue is, how are you defining "pattern"? Often that is entirely subjective, and can't be defined mathematically. If you have a specific kind of "pattern" in mind, you'll have to define it before you can program it.
First of all, thank you for responding
Secondly, here's my clarification:
The Numbers
The 1st number has a 48% chance of appearing next in the sequence, the 2nd has a 24% chance, the 3rd = 16%, 4th = 8%, and the 5th = 4%.
The Patterns
Since the sequence is randomly generated, I can't predict the next number. The "patterns" that I'm trying to find are how often a number would repeat and form a "group" and the frequency of each "group" forming.
(ex: A group of four 1s appearing in a row = 1111)
My question
I was wondering if there was any way to predict when a group would form after seeing the chosen number appear in the sequence, or if the "patterns" are just as random as the numbers and are a coincidence of the probabilities.
(ex: Chosen number = 3; Sequence = 211143 < Will the 3 form into a group of 3s, or will it be an outlier like the 2 at the beginning?)
#### Dr.Peterson
##### Elite Member
The Numbers
The 1st number has a 48% chance of appearing next in the sequence, the 2nd has a 24% chance, the 3rd = 16%, 4th = 8%, and the 5th = 4%.
The Patterns
Since the sequence is randomly generated, I can't predict the next number. The "patterns" that I'm trying to find are how often a number would repeat and form a "group" and the frequency of each "group" forming.
(ex: A group of four 1s appearing in a row = 1111)
My question
I was wondering if there was any way to predict when a group would form after seeing the chosen number appear in the sequence, or if the "patterns" are just as random as the numbers and are a coincidence of the probabilities.
(ex: Chosen number = 3; Sequence = 211143 < Will the 3 form into a group of 3s, or will it be an outlier like the 2 at the beginning?)
I still don't understand; I need a clear example, and I think you need to use clearer terminology.
When you say "the first number", do you mean the first number in the sequence, or perhaps the first number in some unspecified set from which the numbers are drawn? You appear to be saying that there is a 48% chance that the first two terms in the sequence are the same. Is that what you mean? (No, it can't be, because you list a probability that the last one appears next, when there is no next! And the example 211143 consists of 6 terms, not 5.)
Maybe you're talking about a sequence of unspecified length, consisting of elements taken (with repetition) from the set {1, 2, 3, 4, 5}?
Also, this sounds like a question about the probability of "patterns", not one about writing a program to "find patterns". I'm always suspicious when a problem transmutes.
#### MasterAria
##### New member
Maybe you're talking about a sequence of unspecified length, consisting of elements taken (with repetition) from the set {1, 2, 3, 4, 5}?
Also, this sounds like a question about the probability of "patterns", not one about writing a program to "find patterns". I'm always suspicious when a problem transmutes.
Sorry for my poor wording, I didn't know how to phrase my question/where to ask.
The set is 5 (pre-set) numbers, (let's go with your example of {1, 2, 3, 4, 5})
The sequence has an undetermined length and can continue to grow if needed until enough data has been gathered.
The probabilities I mentioned earlier were about the numbers in the set, not the sequence. (The number 1 has a 48% probability of showing up next in the sequence, etc. It's almost like the numbers on a roulette wheel, with 1 being the most common and 5 being the least)
I believe you're correct, I'm trying to find the probability of "patterns".
However, my goal is to use machine learning so I can change the probabilities/length of the set/sequence and still get results. Before I can start programming a bot that can calculate the probability of patterns, I need to find out how the process of finding these probabilities works.
Thank you for your help and understanding.
#### Dr.Peterson
##### Elite Member
Sorry for my poor wording, I didn't know how to phrase my question/where to ask.
The set is 5 (pre-set) numbers, (let's go with your example of {1, 2, 3, 4, 5})
The sequence has an undetermined length and can continue to grow if needed until enough data has been gathered.
The probabilities I mentioned earlier were about the numbers in the set, not the sequence. (The number 1 has a 48% probability of showing up next in the sequence, etc. It's almost like the numbers on a roulette wheel, with 1 being the most common and 5 being the least)
I believe you're correct, I'm trying to find the probability of "patterns".
However, my goal is to use machine learning so I can change the probabilities/length of the set/sequence and still get results. Before I can start programming a bot that can calculate the probability of patterns, I need to find out how the process of finding these probabilities works.
Thank you for your help and understanding.
Okay, let's try stating the problem clearly.
A sequence is formed from a set of possible values in a given probability distribution. That is, each term has a known probability of being each of the numbers in the set, say $$\displaystyle p_1=0.48$$, $$\displaystyle p_2=0.24$$, $$\displaystyle p_3=0.16$$, $$\displaystyle p_4=0.08$$, and $$\displaystyle p_5=0.04$$. We can think of this as rolling the same weighted die to choose each number.
Given the nth term of the sequence, you want to find (a) the probability that it starts a run of n terms, or (b) the expected length of a run of that value. Or something like that.
Does that sound right?
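A minimal sketch of that weighted die, using `random.choices` from Python's standard library (the sample size and seed are arbitrary choices for illustration):

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible
values = [1, 2, 3, 4, 5]
weights = [0.48, 0.24, 0.16, 0.08, 0.04]  # the distribution described above

# Roll the weighted die 10,000 times to form the sequence
sequence = random.choices(values, weights=weights, k=10_000)

# Empirical frequencies should sit close to the weights
freq = Counter(sequence)
print({v: round(freq[v] / len(sequence), 2) for v in values})
```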
#### JeffM
##### Elite Member
I think this thread got off to a bad start by use of the word “pattern.” A random sequence by definition cannot have a pattern. Therefore, the pattern can’t be learned by a machine or anything else because it doesn’t exist.
Let’s assume for now that what you mean by patterns are repetitions of any of your symbols (there does not appear to be any significance to using numerals).
There are several questions that I can think of that make sense if Dr. Peterson is basically correct in his interpretation.
What is the probability that, in a string or sequence of n symbols drawn from a set of m distinct symbols, there is at least one sub-sequence consisting of k repetitions of at least one of the symbols, given that the probability of drawing a particular symbol is constant from drawing to drawing and the drawings are independent?
#### MasterAria
##### New member
What is the probability that, in a string or sequence on n symbols drawn from a set of m distinct symbols, there is at least one sub-sequence consisting of k repetitions of at least one of the symbols, given that the probability of drawing a particular symbol is constant from drawing to drawing and those probabilities are independent?
This is exactly what I wanted to ask. Thank you for narrowing it down into a solid question.
Since I didn't know how to describe this question, I had to go through the previous shenanigans of describing the situation but only making things more complicated/confusing.
I also want to be able to ask this question for each of the distinct symbols rather than "at least one of the symbols".
Some examples:
º What is the probability of a sub-sequence of 1 of any length greater than x?
º How many sub-sequences of 1 will there be on average if the main sequence is n digits long?
º How long is the average length of a sub-sequence of 1?
(Rinse & Repeat for all the distinct symbols in the set)
I want to thank you both for helping me this far. (I've been saying thank you in almost every post, but I really do appreciate the help)
#### Dr.Peterson
##### Elite Member
In probability, it is extremely important to be very clear about what you are asking. You have several types of questions here, each of which would still need some clarification. I'll just answer one:
Suppose you see a 2, and want to know the probability that there will be (at least) 4 of them in a row (that is, at least 3 more after the one you already have). Then you are asking for the probability of "222". The probability of the first (subsequent) term being a 2 is 0.24. For the second, it is again 0.24. For the third, it is again 0.24. So the probability of all three is 0.24*0.24*0.24 = 0.013824, or 1.4%.
You'd do similarly for elements with other probabilities.
The question about average length is what you found about the expected length of a run, and is more complicated. You can probably search for terms like "run length" to find more.
The question about how many runs is probably a lot more complicated, because the runs could be of any length.
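The run-of-four calculation above can be checked directly, and a Monte Carlo simulation using the thread's probabilities (the sample size is an arbitrary choice) gives an empirical estimate of the same quantity:

```python
import random

# Having just seen a 2 (p = 0.24), the chance that at least 3 more 2s follow:
p2 = 0.24
p_run_of_4 = p2 ** 3
print(round(p_run_of_4, 6))  # 0.013824, matching the calculation above

# Monte Carlo check: among occurrences of 2, how often are the next three also 2?
random.seed(1)
vals, weights = [1, 2, 3, 4, 5], [0.48, 0.24, 0.16, 0.08, 0.04]
seq = random.choices(vals, weights=weights, k=200_000)
hits = tries = 0
for i in range(len(seq) - 3):
    if seq[i] == 2:
        tries += 1
        hits += all(seq[i + j] == 2 for j in (1, 2, 3))
print(round(hits / tries, 4))  # close to 0.013824
```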
#### tkhunny
##### Moderator
Staff member
"How do you find general patterns/trends in a sequence of random numbers?"
Read that a couple of times and tell me it makes any sense at all. Why would there be trends or patterns in a sequence of Random Numbers? You have done well to take encouragement and important lessons from previous posts. Keep up the good work.
#### JeffM
##### Elite Member
To answer the question I posed is likely to turn into a rather ugly formula. I’m working on it. It may make more sense to do it as a spreadsheet algorithm.
#### MasterAria
##### New member
Take as much time as you need for the algorithm, I don't want to impede your progress, but I had a similar question about another program I'm working on and was wondering if you could look at it after you finished the algorithm.
Thank you and Happy (early) Thanksgiving!
Summary:
- I started working on a bot that learns to play simple tile-based games by playing against itself, then playing against people.
- I made the original version already know every possible move from the start and increase the value of moves that lead to victory while decreasing the value of losing moves after playing against people, but with games like chess, things started to get complicated.
- I got into machine learning and made a new version that only sees the score & current possible moves. It uses a neural network to select the next move, and "learns" by playing against itself.
Problem:
It was pretty good at playing simple games but often ran into issues when playing against people.
The problem is, it can't learn any long-term strategy or remember moves that players make often.
Example:
- Tic Tac Toe (some games like Tic Tac Toe have unbeatable strategies, but this is just an example)
- The numbers 1-9 in the set represent the player's moves (bottom-left to top-right), & 10-18 represent the computer's moves (bottom-left to top-right)
- A possible sequence of moves could be: {1, 13, 3, 11, 9, 15, 5} (Player wins)
Question:
> If the player plays the same corners in multiple games and the sub-sequence {1, 3, 9} shows up in previous rounds they played, how can the bot recognize the repetition and change its strategy?
> If someone plays multiple rounds, is there a way to identify a sequence of moves they repeat often?
> What about multiple people using the same strategy?
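Before reaching for a neural network, one low-tech way to spot repeated strategies is to count how often each combination of a player's moves recurs across games. The helper below is a hypothetical sketch (the function name and the choice to ignore move order are assumptions, not part of the bot described above):

```python
from collections import Counter
from itertools import combinations

def repeated_patterns(games, length=3):
    """Count how often each size-`length` set of moves recurs across games.

    `games` is a list of one player's move lists; order within a game is
    ignored, so corners taken as 1-3-9 match corners taken as 9-1-3.
    """
    counts = Counter()
    for moves in games:
        for combo in combinations(sorted(set(moves)), length):
            counts[combo] += 1
    return counts

# Three games in which the player keeps taking the same corners (1, 3, 9)
games = [[1, 3, 9, 5], [9, 1, 3], [3, 9, 1, 7]]
print(repeated_patterns(games).most_common(1))  # [((1, 3, 9), 3)]
```

The same counting works across players: feed in every opponent's game history, and frequently recurring combinations rise to the top for the bot to weight against.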
#### Romsek
##### Senior Member
Looking at the autocorrelation or power spectrum is a way to suss out patterns in a stream of possibly random numbers.
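A pure-Python sketch of that idea, using a naive sample-autocorrelation estimator (the example sequences are illustrative assumptions): for i.i.d. draws the lag-1 autocorrelation hovers near 0, while a built-in alternating pattern shows up clearly.

```python
import random

def autocorr(xs, lag):
    """Naive sample autocorrelation of xs at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean) for i in range(n - lag)) / n
    return cov / var

random.seed(0)
iid = [random.choice([1, 2, 3, 4, 5]) for _ in range(5000)]
patterned = [1 if i % 2 == 0 else 5 for i in range(5000)]  # strict 1,5,1,5,... pattern

print(round(autocorr(iid, 1), 3))        # near 0: no lag-1 structure
print(round(autocorr(patterned, 1), 3))  # near -1: strong alternating structure
```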
#### JeffM
##### Elite Member
We have an experiment where we select n times from a set of m arbitrary but distinct symbols. The probability of selecting a given symbol is constant and independent on each selection. $$\displaystyle p_i$$ is the probability of selecting the ith symbol, i = 1, ... m.
We have $$\displaystyle \text i \in \mathbb Z \text { and } 1 \le i \le m \implies 0 < p_i \text { and } \sum_{i=1}^m p_i = 1.$$
$$\displaystyle \therefore p_{min} = \text {min}(p_1,\ ... \ p_m) \implies 0 < p_{min} \le \dfrac{1}{m},$$
We are interested in the probability of at least one event, where an event is defined to be k > 0 successive occurrences of any one of the m symbols. Obviously, if m = 1, that probability is 1. Equally obviously, if k = 1, the probability is 1. Furthermore, it is also obvious that if k > n, the probability is 0. Thus, for the problem to be non-trivial, we must have
$$\displaystyle 2 \le k \le n \text { and } m \ge 2.$$
$$\displaystyle \therefore i \in \mathbb Z \text { and } 1 \le i \le m \implies 0 < p_i < 1 \text { and } \sum_{i=1}^m p_i = 1.$$
If we have at least one event, there must be a first event. That could start at the first selection, the second selection, and so on all the way up to selection n+1-k. (If n = 10 and k = 2, we cannot start a sequence of length 2 in position 10; the latest it can start is position 9.)
Let $$\displaystyle q_j$$ be the probability that the first event starts at selection j > 0,
and let $$\displaystyle c_j$$ be the probability that the first event starts on or before selection j > 0.
Define $$\displaystyle c_0 = 0.$$
Obviously, $$\displaystyle j > 0 \implies c_j = q_j + c_{j-1}.$$
Let r = the probability that a given selection is the start of a sequence of k occurrences of one of the m symbols.
$$\displaystyle \therefore r = \sum_{i=1}^m p_i^k \implies 0 < r < 1 \ \because \ k \ge 2.$$
The probability that the FIRST sequence of k occurrences of one of the m symbols occurs at position j is a conditional probability equal to the probability that the first such sequence did not occur earlier times the probability that such a sequence starts at this position. In other words,
$$\displaystyle q_j = (1 - c_{j-1}) * r.$$
And the probability that we want is $$\displaystyle c_{n+1-k}.$$
This is enough to set up a VERY simple spreadsheet to solve problems where m, k, n and the probabilities associated with each symbol are known. Moreover, messing around with such a spreadsheet seems to confirm my intuition that if n >> k, the probability of at least one sequence of at least length k of at least one symbol approaches 1. For example, with m = 4, k = 4, and probabilities for individual symbols of 0.5, 0.25, 0.15, and 0.1, the probability of at least one sequence of at least length k of at least one symbol > 99% if n > 66. It exceeds 50% if n > 10.
My difficulty has been to prove that as k/n approaches 0, the probability approaches 1, let alone say anything about the rate of convergence. I get a pattern that is similar to (but obviously different from) Pascal's triangle, but I have not been able to generalize it.
$$\displaystyle q_1 = (1 - c_0) * r = (1 - 0)r = r.$$
$$\displaystyle c_1 = q_1 + c_0 = r + 0 = r.$$
$$\displaystyle q_2 = (1 - c_1) * r = (1 - r)r = r - r^2.$$
$$\displaystyle c_2 = q_2 + c_1 = r - r^2 + r = 2r - r^2.$$
$$\displaystyle q_3 = (1 - c_2) * r = r - r(2r - r^2) = r - 2r^2 + r^3.$$
$$\displaystyle c_3 = q_3 + c_2 = r - 2r^2 + r^3 + 2r - r^2 = 3r - 3r^2 + r^3.$$
$$\displaystyle q_4 = (1 - c_3) * r = r - r(3r - 3r^2 + r^3) =\\ r - 3r^2 + 3r^3 - r^4.$$
$$\displaystyle c_4 = q_4 + c_3=\\ r - 3r^2 + 3r^3 - r^4 + 3r - 3r^2 + r^3 = 4r - 6r^2 + 4r^3 - r^4.$$
$$\displaystyle q_5 = (1 - c_4) * r = r - r(4r - 6r^2 + 4r^3 - r^4) =\\ r - 4r^2 + 6r^3 - 4r^4 + r^5.$$
$$\displaystyle c_5 = q_5 + c_4=\\ r - 4r^2 + 6r^3 - 4r^4 + r^5 + 4r - 6r^2 + 4r^3 - r^4 = \\ 5r - 10r^2 +10r^3 - 5r^4 + r^5.$$
$$\displaystyle q_6 = (1 - c_5) * r = r - r(5r - 10r^2 +10r^3 - 5r^4 + r^5) =\\ r - 5r^2 + 10r^3 - 10r^4 + 5r^5 - r^6.$$
$$\displaystyle c_6 = q_6 + c_5= r - 5r^2 + 10r^3 - 10r^4 + 5r^5 - r^6 \\ + 5r - 10r^2 +10r^3 - 5r^4 + r^5 = \\ 6r - 15r^2 + 20r^3 - 15r^4 + 6r^5 - r^6.$$
Does anyone see a way to generalize that?
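One check worth adding (not in the original post): since $$\displaystyle c_j = c_{j-1} + (1 - c_{j-1})r \implies 1 - c_j = (1 - c_{j-1})(1 - r),$$ the polynomials above should match the expansion of $$\displaystyle 1 - (1 - r)^j,$$ whose coefficients are signed binomial coefficients. A quick verification:

```python
# Check that the coefficient pattern of c_j matches the binomial expansion
# of 1 - (1 - r)^j, as the recursion 1 - c_j = (1 - c_{j-1})(1 - r) suggests.
from math import comb

def c_coeffs(j):
    """Integer coefficients of r^1..r^j in the polynomial c_j."""
    c = [0]  # c_0 = 0, coefficient list indexed by power of r
    for _ in range(j):
        one_minus_c = [1 - c[0]] + [-a for a in c[1:]]
        q = [0] + one_minus_c            # q_j = r * (1 - c_{j-1})
        c = c + [0] * (len(q) - len(c))  # pad c_{j-1} to the new degree
        c = [a + b for a, b in zip(q, c)]
    return c[1:]

print(c_coeffs(6))
# Coefficients of 1 - (1 - r)^j are -(-1)^i * C(j, i) for i = 1..j:
print([-((-1) ** i) * comb(6, i) for i in range(1, 7)])
```

Both lines print `[6, -15, 20, -15, 6, -1]`, matching the expression for $$\displaystyle c_6$$ above.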
#### anon_hedgepig
##### New member
We have an experiment where we select n times from a set of m arbitrary but distinct symbols. The probability of selecting a given symbol is constant and independent on each selection. $$\displaystyle p_i$$ is the probability of selecting the ith symbol, i = 1, ... m.
We have $$\displaystyle \text i \in \mathbb Z \text { and } 1 \le i \le m \implies 0 < p_i \text { and } \sum_{i=1}^m p_i = 1.$$
$$\displaystyle \therefore p_{min} = \text {min}(p_1,\ ... \ p_m) \implies 0 < p_{min} \le \dfrac{1}{m}.$$
We are interested in the probability of at least one event, where an event is defined to be k > 0 successive occurrences of any one of the m symbols. Obviously, if m = 1, that probability is 1. Equally obviously, if k = 1, the probability is 1. Furthermore, it is also obvious that if k > n, the probability is 0. Thus, for the problem to be non-trivial, we must have
$$\displaystyle 2 \le k \le n \text { and } m \ge 2.$$
$$\displaystyle \therefore i \in \mathbb Z \text { and } 1 \le i \le m \implies 0 < p_i < 1 \text { and } \sum_{i=1}^m p_i = 1.$$
Sounds like you're dealing with a negative binomial distribution to me? Check wikipedia? Also you posted a reply instead of an individual thread?
#### JeffM
##### Elite Member
Sounds like you're dealing with a negative binomial distribution to me? Check wikipedia? Also you posted a reply instead of an individual thread?
Yes. I was answering the original question. I suspect I gave the OP what was needed. The rest is more sort of intellectual curiosity. I'll look up negative binomial distribution.
http://math.stackexchange.com/questions/402314/two-algorithm-a-and-b-solve-the-same-problem
|
# Two algorithms A and B solve the same problem
Two algorithms $A$ and $B$ solve the same problem. $A$ solves a problem of size $n$ with $n^2~2^n$ operations. $B$ solves it with $n!$ operations. As $n$ grows, which algorithm uses fewer operations?
Hi & welcome! Firstly, please try to use $\LaTeX$ to write mathematics in future - for example, I don't know what "n22n" means, mathematically. Secondly, you should tell us what you do know, and any thoughts you've had about the problem. – Sharkos May 25 '13 at 19:44
Is it $n^2 2^n$ by any chance? – Andreas Caranti May 25 '13 at 19:45
thank you, sorry for that but yes thats what it means. and i have no idea on how to solve this. – mmk May 25 '13 at 19:47
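A quick numerical comparison of the two counts (my addition, not from the thread) makes the answer concrete: the factorial eventually dominates, so algorithm $A$ uses fewer operations for all sufficiently large $n$.

```python
# Compare the operation counts n^2 * 2^n (algorithm A) and n! (algorithm B).
from math import factorial

for n in range(1, 13):
    a = n ** 2 * 2 ** n
    b = factorial(n)
    print(f"n={n:2d}  A={a:>10d}  B={b:>10d}  fewer: {'A' if a < b else 'B'}")
```

With these counts, the crossover happens at $n = 8$: $8^2 \cdot 2^8 = 16384 < 8! = 40320$, and the gap only widens from there.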
https://www.groundai.com/project/compressed-secret-key-agreement-maximizing-multivariate-mutual-information-per-bit/
# Compressed Secret Key Agreement
Maximizing Multivariate Mutual Information Per Bit
Chung Chan, Institute of Network Coding,
The Chinese University of Hong Kong, Hong Kong.
###### Abstract
The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achieving scheme and some simpler conjectures to prove.
###### Keywords:
secret key agreement; source compression; rate-limited discussion; communication complexity; dimension reduction; multivariate mutual information
## 1 Introduction
In Information-theoretic security, the secret key agreement problem by public discussion is the problem where a group of users discuss in public to generate a common secret key that is independent of their discussion. The problem was first formulated by Maurer [34], Ahlswede and Csiszár [1] under a private source model involving two users who observe some correlated private sources. Rather surprisingly, public discussion was shown to be useful in generating the secret key, i.e., it strictly increases the maximum achievable key rate called the secrecy capacity. Such phenomenon was also discovered in [4] in a different formulation. Furthermore, the secrecy capacity was given an information-theoretically appealing characterization— it is equal to Shannon’s mutual information [41] between the two private sources, assuming the wiretapper can listen to the entire public discussion but not observe any other side information of the private sources. It was also shown that the capacity can be achieved by one-way public discussion, i.e., with only one of the users discusses in public.
As a simple illustration, let $\mathsf{X}_0$, $\mathsf{X}_1$ and $\mathsf{B}$ be three uniformly random independent bits, and suppose user 1 observes $(\mathsf{X}_0, \mathsf{X}_1)$ privately while user 2 observes $(\mathsf{B}, \mathsf{X}_{\mathsf{B}})$, where $\mathsf{X}_{\mathsf{B}} = \mathsf{X}_0$ when $\mathsf{B} = 0$ but $\mathsf{X}_{\mathsf{B}} = \mathsf{X}_1$ when $\mathsf{B} = 1$. If user 2 reveals $\mathsf{B}$ in public, then user 1 can recover $\mathsf{X}_{\mathsf{B}}$ and therefore the key $\mathsf{K} = \mathsf{X}_{\mathsf{B}}$. Furthermore, since $\mathsf{X}_{\mathsf{B}}$ is independent of $\mathsf{B}$, it can serve as a secret key bit that is recoverable by both users but remains perfectly secret to a wiretapper who observes only the public message $\mathsf{B}$. This scheme achieves the secrecy capacity equal to the mutual information of $1$ bit, roughly because user 2 reveals $1$ bit in public so there is $1$ bit of randomness left for the secret key. However, if no public discussion is allowed, it follows from the work of Gács and Körner [27] that no common secret key bit can be extracted from the sources. In particular, $\mathsf{B}$ cannot be used as a secret key because user 1 does not know whether $\mathsf{B}$ is $0$ or $1$. $\mathsf{X}_0$ and $\mathsf{X}_1$ cannot be used as a secret key either because they may not be observed by user 2 when $\mathsf{B} = 1$ and $\mathsf{B} = 0$ respectively. It can be seen that, while the private sources are clearly statistically dependent, public discussion is needed to consolidate the mutual information of the sources into a common secret key.
The secret key agreement formulation was subsequently extended to the multi-user case by Csiszár and Narayan [22]. Some users are also allowed to act as helpers who can participate in the public discussion but need not share the secret key. The designated set of users who need to share the secret key are referred to as the active users. Different from the two-user case, one-way discussion may not achieve the secrecy capacity when there are more than two users. Instead, an omniscience strategy was considered in [22] where the users first communicate minimally in public until omniscience, i.e., the users discuss in public at the smallest total rate until every active user can recover all the private sources. The scheme was shown to achieve the secrecy capacity in the case when the wiretapper only listens to the public discussion. This assumes, however, that the public discussion is lossless and unlimited in rate, and the sources take values from finite alphabet sets. If the sources were continuous or if the public discussion were limited to a certain rate, it may be impossible to attain omniscience.
This work is motivated by the search of a better alternative to the omniscience strategy for multiterminal secret key agreement. A prior work of Csiszár and Narayan [21] considered secret key agreement under rate-limited public discussion. The model involves two users and a helper observing correlated discrete memoryless sources. The public discussion by the users is conducted in a particular order and direction. While the region of achievable secret key rate and discussion rates remains unknown, single-letter characterizations involving two auxiliary random variables were given for many special cases, including the two-user case with two rounds of interactive public discussion, where each user speaks once in sequence, with the last public message possibly depending on the first. By further restricting to one-way public discussion, the characterization involves only one auxiliary random variable and was extended to continuous sources by Watanabe and Oohama in [48], where they also gave an explicit characterization without any auxiliary random variable for scalar Gaussian sources in [48]. For vector Gaussian sources, the characterization by the same authors in [49] involving some matrix optimization was further improved in [31] to a more explicit formula. However, if the discussion is allowed to be two-way and interactive, Tyagi [45] showed with a concrete two-user example that the minimum total discussion rate required, called the communication complexity, can be strictly reduced. Using the technique of Kaspi [30], multi-letter characterizations were given in [45] for the communication complexity and, similarly, by Liu et al. in [32] for the region of achievable secret key rate. [32] further simplified the characterization using the idea of convex envelope using the technique by Ma et al [33]. 
While these characterizations provide many new insights and properties, they are not considered computable, compared to the usual single-letter and explicit characterizations. Further extension to the multi-user case also appears difficult, as the converse can be seen to rely on the Csiszár sum identity [1, Lemma 4.1], which does not appear to extend beyond the two-user case.
Nevertheless, partial solutions under more restrictive public discussion constraints were possible. By simplifying the problem to the right extent, new results were discovered in the multi-user case, which has led to the formulation in this work. For instance, Gohari and Anantharam [28] characterized the secrecy capacity in the multi-user case under the simpler vocality constraint where some users have to remain silent throughout the public discussion. Using this result, simple necessary and sufficient conditions can be derived as to whether a user can remain silent without diminishing the maximum achievable key rate [36, 50, 7]. This is a simpler result than characterizing the achievable rate region because it does not say how much discussion is required if a user must discuss. Another line of work [19, 35, 37, 9] follows [45] to characterize the communication complexity but in the multi-user case. Courtade and Halford [19] characterized the communication complexity under a special non-asymptotic hypergraphical source model with linear discussion. [37] obtained a multi-letter lower bound on the communication complexity for the asymptotic general source model. It also gave a precise and simple condition under which the omniscience strategy for secret key agreement is optimal for a special source model called the pairwise independent network (PIN) [40], which is a special hypergraphical source model [18]. [9, 17] further derived some single-letter and more easily computable explicit lower bounds, from which one can also obtain conditions for the omniscience strategy to be optimal under the hypergraphical source model, which covers the PIN model as a special case. [10] considered the more general problem of characterizing the multiterminal secrecy capacity under rate-limited public discussion. In particular, an objective of [10] is to characterize the constrained secrecy capacity defined as the maximum achievable key rate as a function of the total discussion rate. 
This covers the communication complexity as a special case when further increase in the public discussion rate does not increase the secrecy capacity. While only single-letter bounds were derived for the general source model, a surprisingly simple explicit formula was derived for the PIN model [10]. The optimal scheme in [10] follows the tree-packing protocol in [39]. It turns out to belong to the more general approach of decremental secret key agreement in [6, 5] inspired by the achieving scheme in [19] and the notion of excess edge in [18]. More precisely, the omniscience strategy is applied after some excess or less useful edge random variables are removed (decremented) from the source. Since the entropy of the decremented source is smaller, the discussion required to attain omniscience of the decremented source is also smaller. Such decremental secret key agreement approach applies to hypergraphical sources more generally, and it results in one of the best upper bounds in [35] for communication complexity. However, for more general source models that are not necessarily hypergraphical, the approach does not directly apply.
The objective of this work is to formalize and extend the idea of decremental secret key agreement beyond the hypergraphical source model. More precisely, the secret key agreement problem is considered with an additional source compression step before public discussion, where each user independently compresses their private source component to filter away less correlated randomness that does not contribute much to the achievable secret key rate. The compression is such that the entropy rate of the compressed sources is reduced below a specified level. In particular, the edge removal process in decremental secret key agreement can be viewed as a special case of source compression, and the more general problem will be referred to as compressed secret key agreement. The objective is to characterize the achievable secret key rate maximized over all valid compression schemes. For simplicity, this work will focus on the case without helpers, i.e., when all users are active and want to share a common secret key. A closely related formulation is by Nitinawarat and Narayan [38], which characterized the maximum achievable key rate for the two-user case under the scalar Gaussian source model where one of the users is required to quantize the source to within a given rate. [46] also extended the formulation and techniques in [38] to the multi-user case where every user can quantize their sources individually to a certain rate. The compression considered in this work is more general than quantization of Gaussian sources, and the new results are meaningful beyond continuous sources.
The compressed secret key agreement problem is also motivated by the study of multivariate mutual information (MMI) [15], i.e., an extension of Shannon’s mutual information to the multivariate case involving possibly more than two random variables. The unconstrained secrecy capacity in the no-helper case has been viewed as a measure of mutual information in [11, 15], not only because of its mathematically appealing interpretations such as the residual independence relation and data processing inequalities in [15], but also because of its operational significance in undirected network coding [13, 14], data clustering [8] and feature selection [16] (cf. [20]). The optimal source compression scheme that achieves the compressed secrecy capacity can be viewed more generally as an optimal dimension reduction procedure that maximizes the MMI per bit of randomness, which is an extension of the information bottleneck problem [44] to the multivariate case. However, different from the multivariate extension in [25], the MMI is used instead of Watanabe’s total correlation [47], and so it captures only the information mutual to all the random variables rather than the information mutual to any subsets of the random variables. Furthermore, the compression is on each random variable rather than subsets of random variables.
The paper is organized as follows. The problem of compressed secret key agreement is formulated in Section 2. Preliminary results of secret key agreement are given in Section 3. The main results are motivated in Section 4 and presented in Section 5, followed by the conclusion and some discussions on potential extensions in Section 6.
## 2 Problem Formulation
Similar to the multiterminal secret key agreement problem [22] without helpers or wiretapper’s side information, the setting of the problem involves a finite set of users, and a discrete memoryless multiple source
$$\mathsf{Z}_V := (\mathsf{Z}_i \mid i \in V) \sim P_{\mathsf{Z}_V} \quad\text{taking values from}\quad Z_V := \prod_{i \in V} Z_i \ \text{(not necessarily finite)}.$$
N.b., letters in sans serif font are used for random variables, and the corresponding capital letters in the usual math italic font denote the alphabet sets. $P_{\mathsf{Z}_V}$ denotes the joint distribution of the $\mathsf{Z}_i$'s.
A secret key agreement protocol with source compression can be broken into the following phases:
Private observation:
Each user $i \in V$ observes an $n$-sequence
$$\mathsf{Z}_i^n := (\mathsf{Z}_{it} \mid t \in [n]) = (\mathsf{Z}_{i1}, \mathsf{Z}_{i2}, \dots, \mathsf{Z}_{in})$$
i.i.d. generated from the source for some block length $n$. N.b., for convenience, $[n]$ denotes the set of positive integers up to $n$, i.e., $[n] := \{1, \dots, n\}$.
Private randomization:
Each user $i \in V$ generates a random variable $\mathsf{U}_i$ independent of the private source, i.e.,
$$H(\mathsf{U}_V \mid \mathsf{Z}_V^n) = \sum_{i \in V} H(\mathsf{U}_i). \tag{1}$$
Source compression:
Each user $i \in V$ computes
$$\tilde{\mathsf{Z}}_i = \zeta_i(\mathsf{U}_i, \mathsf{Z}_i^n) \tag{2}$$
for some function $\zeta_i$ that maps to a finite set. $\tilde{\mathsf{Z}}_i$ is referred to as the compressed source.
Public discussion:
Using a public authenticated noiseless channel, a user $i_t \in V$ is chosen in each round $t$ to broadcast a message
$$\tilde{\mathsf{F}}_t := \tilde{f}_t(\tilde{\mathsf{Z}}_{i_t}, \tilde{\mathsf{F}}^{t-1}), \quad t \in [\ell], \tag{3a}$$
where $\ell$ is a positive integer denoting the number of rounds and $\tilde{\mathsf{F}}^{t-1} := (\tilde{\mathsf{F}}_1, \dots, \tilde{\mathsf{F}}_{t-1})$ denotes all the messages broadcast in the previous rounds. If the dependency on $\tilde{\mathsf{F}}^{t-1}$ is dropped, the discussion is said to be non-interactive. The discussion is said to be one-way (from user $i$) if $i_t = i$ for all $t \in [\ell]$. For convenience,
$$\mathsf{F}_i := (\tilde{\mathsf{F}}_t \mid t \in [\ell], i_t = i) \tag{3b}$$
$$\mathsf{F} := \tilde{\mathsf{F}}^{\ell} = \mathsf{F}_V \tag{3c}$$
denote the aggregate message from user $i$ and the aggregation of the messages from all users, respectively.
Key generation:
A random variable $\mathsf{K}$, called the secret key, is required to satisfy the recoverability constraint that
$$\lim_{n\to\infty} \Pr(\exists i \in V,\ \mathsf{K} \ne \theta_i(\tilde{\mathsf{Z}}_i, \mathsf{F})) = 0 \tag{1}$$
for some functions $\theta_i$, and the secrecy constraint that
$$\lim_{n\to\infty} \frac{1}{n} \left[\log|K| - H(\mathsf{K} \mid \mathsf{F})\right] = 0, \tag{2}$$
where $K$ denotes the finite alphabet set of possible key values.
N.b., unlike [45], non-interactive discussion is considered different from one-way discussion in the two-user case, since both users are allowed to discuss even though their messages cannot depend on each other. Different from [23], there is an additional source compression phase, after which the protocol can only depend on the original sources through the compressed sources.
The objective is to characterize the maximum achievable secret key rate for a continuum of different levels of source compression:
###### Definition 1
The compressed secrecy capacity with a joint entropy limit $\alpha \ge 0$ is defined as
$$\tilde{C}_{\mathrm{S}}(\alpha) := \sup \liminf_{n\to\infty} \frac{1}{n} \log|K|, \tag{3}$$
where the supremum is over all possible compressed secret key agreement schemes satisfying
$$\limsup_{n\to\infty} \frac{1}{n} H(\tilde{\mathsf{Z}}_V) - \alpha \le 0. \tag{4}$$
This constraint limits the joint entropy rate of the compressed sources.
N.b., instead of the joint entropy limit, one may also consider entropy limits on some subsets $B \subseteq V$, namely
$$\limsup_{n\to\infty} \frac{1}{n} H(\tilde{\mathsf{Z}}_B) - \alpha_B \le 0. \tag{5}$$
If multiple entropy limits are imposed, $\tilde{C}_{\mathrm{S}}$ will be a higher-dimensional surface instead of a one-dimensional curve. For example, in the two-user case under the scalar Gaussian source model, [38] considered the entropy limit only on one of the users. In the multi-user case under the Gaussian Markov tree model, [46] considered the symmetric case where the entropy limit is imposed on every user.
For simplicity, however, the joint entropy constraint (4) will be the primary focus in this work. It will be shown that $\tilde{C}_{\mathrm{S}}(\alpha)$ is closely related to the constrained secrecy capacity defined as [10]
$$C_{\mathrm{S}}(R) := \sup \liminf_{n\to\infty} \frac{1}{n} \log|K| \tag{6}$$
with $\tilde{\mathsf{Z}}_i = (\mathsf{U}_i, \mathsf{Z}_i^n)$ instead of (2), i.e., without compression, and with the entropy limit (4) replaced by the constraint on the total discussion rate
$$R \ge \limsup_{n\to\infty} \frac{1}{n} \log|F| = \limsup_{n\to\infty} \frac{1}{n} \sum_{i \in V} \log|F_i|. \tag{7}$$
N.b., it follows directly from the result of [22] that $C_{\mathrm{S}}(\infty)$ remains unchanged whether the discussion is interactive or not. Indeed, the relation between $\tilde{C}_{\mathrm{S}}$ and $C_{\mathrm{S}}$ to be shown in this work will not be affected either. Therefore, for notational simplicity, $C_{\mathrm{S}}(R)$ may refer to the case with or without interaction, even though it may be smaller with non-interactive discussion.
It is easy to show that $C_{\mathrm{S}}(R)$ is continuous, non-decreasing and concave in $R$ [10, Proposition 3.1]. As $R$ goes to $\infty$, the secrecy capacity
$$C_{\mathrm{S}}(\infty) := \liminf_{R\to\infty} C_{\mathrm{S}}(R) \tag{8}$$
is the usual unconstrained secrecy capacity defined in [22] without the discussion rate constraint (7). The smallest discussion rate that achieves the unconstrained secrecy capacity is the communication complexity, denoted by
$$R_{\mathrm{S}} := \inf\{R \ge 0 \mid C_{\mathrm{S}}(R) = C_{\mathrm{S}}(\infty)\}. \tag{9}$$
Similar to $C_{\mathrm{S}}(R)$, the following basic properties can be shown for $\tilde{C}_{\mathrm{S}}(\alpha)$:
###### Proposition 1
$\tilde{C}_{\mathrm{S}}(\alpha)$ is continuous, non-decreasing and concave in $\alpha \ge 0$. Furthermore,
$$C_{\mathrm{S}}(\infty) = \liminf_{\alpha\to\infty} \tilde{C}_{\mathrm{S}}(\alpha), \tag{10}$$
achieving the unconstrained secrecy capacity in the limit.
###### Proof
Continuity, monotonicity and (10) follow directly from the definition of $\tilde{C}_{\mathrm{S}}$. Concavity follows from the usual time-sharing argument, i.e., for any $\alpha_1, \alpha_2 \ge 0$ and $\lambda \in [0, 1]$, a secret key rate of $\lambda \tilde{C}_{\mathrm{S}}(\alpha_1) + (1-\lambda) \tilde{C}_{\mathrm{S}}(\alpha_2)$ is achievable with the entropy limit $\lambda \alpha_1 + (1-\lambda) \alpha_2$ by applying the optimal scheme that achieves $\tilde{C}_{\mathrm{S}}(\alpha_1)$ for the first $\lambda n$ samples of $\mathsf{Z}_V^n$ and applying the optimal scheme that achieves $\tilde{C}_{\mathrm{S}}(\alpha_2)$ for the remaining samples.
Because of (10), a quantity playing the same role for $\tilde{C}_{\mathrm{S}}$ as $R_{\mathrm{S}}$ plays for $C_{\mathrm{S}}$ can be defined as follows.
###### Definition 2
The smallest entropy limit that achieves the unconstrained secrecy capacity is defined as
$$\alpha_{\mathrm{S}} := \inf\{\alpha \ge 0 \mid \tilde{C}_{\mathrm{S}}(\alpha) = C_{\mathrm{S}}(\infty)\} \tag{11}$$
and is referred to as the minimum admissible joint entropy.
One may also consider both the entropy limit (4) and the discussion rate constraint (7) simultaneously, and define the secrecy capacity as a function of both $\alpha$ and $R$. For simplicity, however, we will not consider this case but, instead, focus on the relationship between $\tilde{C}_{\mathrm{S}}(\alpha)$ and $C_{\mathrm{S}}(R)$.
The following example illustrates the problem formulation. It will be revisited at the end of Section 5 (Example 3) to illustrate the main results.
###### Example 1
Consider $V = \{1, 2, 3\}$ and
$$\mathsf{Z}_1 := (\mathsf{X}_{\mathrm{a}}, \mathsf{X}_{\mathrm{b}}), \quad \mathsf{Z}_2 := (\mathsf{X}_{\mathrm{a}}, \mathsf{X}_{\mathrm{b}}, \mathsf{X}_{\mathrm{c}}), \quad\text{and}\quad \mathsf{Z}_3 := (\mathsf{X}_{\mathrm{a}}, \mathsf{X}_{\mathrm{c}}), \tag{12}$$
where $\mathsf{X}_{\mathrm{a}}$, $\mathsf{X}_{\mathrm{b}}$ and $\mathsf{X}_{\mathrm{c}}$ are uniformly random and independent bits. It is easy to argue that
$$\tilde{C}_{\mathrm{S}}(\alpha) \ge \alpha \quad\text{for } \alpha \in [0, 1]. \tag{13a}$$
To see this, notice that $\mathsf{X}_{\mathrm{a}}$ is observed by every user. Any choice of the key $\mathsf{K}$ as a function of $\mathsf{X}_{\mathrm{a}}^n$ can therefore be recovered by every user without any discussion, satisfying the recoverability constraint (1) trivially. Since there is no public discussion required, the secrecy constraint (2) also holds immediately by taking a portion of the bits from $\mathsf{X}_{\mathrm{a}}^n$ to be the key bits in $\mathsf{K}$. Finally, setting $\tilde{\mathsf{Z}}_i = \mathsf{K}$ for all $i \in V$ ensures that the entropy limit (4) is satisfied with $\alpha$ equal to the key rate. Hence, $\tilde{C}_{\mathrm{S}}(\alpha) \ge \alpha$ as desired. Indeed, we will show (by Proposition 5) that the reverse inequality holds in general, and so we have equality for $\alpha \in [0, 1]$ for this example.
For $\alpha \ge 3 = H(\mathsf{Z}_V)$, every user can simply retain their source without compression, i.e., $\tilde{\mathsf{Z}}_i = \mathsf{Z}_i^n$ for $i \in V$, while satisfying the entropy limit (4). Now, with the public message $\mathsf{F} := \mathsf{X}_{\mathrm{b}}^n \oplus \mathsf{X}_{\mathrm{c}}^n$ and the key $\mathsf{K} := (\mathsf{X}_{\mathrm{a}}^n, \mathsf{X}_{\mathrm{b}}^n)$, where $\oplus$ is the elementwise XOR, it can be shown that both the recoverability (1) and secrecy (2) constraints hold. This is because user 3 can recover $\mathsf{X}_{\mathrm{b}}^n$ from the XOR with the side information $\mathsf{X}_{\mathrm{c}}^n$. Furthermore, the XOR is independent of $\mathsf{K}$ and therefore does not leak any information about the key bits. With this scheme, $\tilde{C}_{\mathrm{S}}(\alpha) \ge 2$ for $\alpha \ge 3$. By the usual time-sharing argument,
$$\tilde{C}_{\mathrm{S}}(\alpha) \ge \begin{cases} \dfrac{1+\alpha}{2} & \text{for } \alpha \in [1, 3] \\ 2 & \text{for } \alpha \ge 3. \end{cases} \tag{13b}$$
Indeed, the reverse inequality can be argued using one of the main results (Theorem 5.1), and so the minimum admissible joint entropy will turn out to be $\alpha_{\mathrm{S}} = 3$.
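The scheme in Example 1 is simple enough to check by simulation. The following sketch (my code, not from the paper) verifies that user 3 recovers the key from the public XOR message, and that the message looks independent of the key bits:

```python
# Simulate Example 1: users 1, 2, 3 observe (Xa, Xb), (Xa, Xb, Xc), (Xa, Xc).
# Public message F = Xb XOR Xc; key K = (Xa, Xb), i.e. 2 key bits per symbol.
import random

random.seed(0)
n = 10_000
Xa = [random.getrandbits(1) for _ in range(n)]
Xb = [random.getrandbits(1) for _ in range(n)]
Xc = [random.getrandbits(1) for _ in range(n)]

F = [b ^ c for b, c in zip(Xb, Xc)]  # broadcast by user 2, who knows Xb and Xc

# Users 1 and 2 observe Xb directly; user 3 recovers it as F XOR Xc.
Xb_at_user3 = [f ^ c for f, c in zip(F, Xc)]
assert Xb_at_user3 == Xb  # recoverability of K = (Xa, Xb) by every user

# Secrecy (empirically): Xc acts as a one-time pad on Xb, so F is uniform
# and carries no information about the key; agreement with Xb is near 1/2.
agree = sum(f == b for f, b in zip(F, Xb)) / n
print(f"fraction of positions where F equals Xb: {agree:.3f}")
```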
## 3 Preliminaries
In this section, a brief summary of related results for the secrecy capacity and communication complexity will be given. The results for the two-user case will be introduced first, followed by the more general results for the multi-user case, and the stronger results for the special hypergraphical source model. An example will also be given at the end to illustrate some of the results.
### 3.1 Two-user case
As mentioned in the introduction, no single-letter characterization is known for $C_{\mathrm{S}}(R)$ and $R_{\mathrm{S}}$ even in the two-user case where $V = \{1, 2\}$. Furthermore, while multi-letter characterizations for $R_{\mathrm{S}}$ and $C_{\mathrm{S}}(R)$ were given in [45] and [32] respectively in the two-user case under interactive discussion, no such multi-letter characterization is known for the case with non-interactive discussion. Nevertheless, if one-way discussion from user 1 is considered, then the result of [21, Theorem 2.4] and its extension [48] to continuous sources gave the following characterization of the one-way constrained secrecy capacity:
$$C_{\mathrm{S},1}(R) := \sup I(\mathsf{Z}_1' \wedge \mathsf{Z}_2) \quad\text{where} \tag{1a}$$
$$I(\mathsf{Z}_1' \wedge \mathsf{Z}_1) - I(\mathsf{Z}_1' \wedge \mathsf{Z}_2) \le R \tag{1b}$$
$$I(\mathsf{Z}_1' \wedge \mathsf{Z}_2 \mid \mathsf{Z}_1) = 0. \tag{1c}$$
The last constraint (1c) corresponds to the Markov chain $\mathsf{Z}_1' - \mathsf{Z}_1 - \mathsf{Z}_2$, and so the supremum is taken over the choices of the conditional distribution $P_{\mathsf{Z}_1' \mid \mathsf{Z}_1}$. Using the double Markov property as in [45], it follows that the capacity without any discussion can be characterized more explicitly by the Gács–Körner common information
$$J_{\mathrm{GK}}(\mathsf{Z}_1 \wedge \mathsf{Z}_2) := \sup\{H(\mathsf{U}) \mid H(\mathsf{U} \mid \mathsf{Z}_1) = H(\mathsf{U} \mid \mathsf{Z}_2) = 0\}, \tag{1}$$
where $\mathsf{U}$ is a discrete random variable. If (1) is finite, a unique optimal solution exists and is called the maximum common function of $\mathsf{Z}_1$ and $\mathsf{Z}_2$, because any common function of $\mathsf{Z}_1$ and $\mathsf{Z}_2$ must be a function of it. The communication complexity also has a more explicit characterization [45, (44)]
$$R_{\mathrm{S},1} = J_{\mathrm{W},1}(Z_1 \wedge Z_2) - I(Z_1 \wedge Z_2), \quad\text{where} \tag{2}$$
$$J_{\mathrm{W},1}(Z_1 \wedge Z_2) := \inf\{H(W) \mid H(W|Z_1) = 0,\ I(Z_1 \wedge Z_2 \mid W) = 0\} \tag{3}$$
and is a discrete random variable. If is finite, a unique optimal solution exists and is called the minimum sufficient statistics of for since can only depend on through .
In Section 4, the expression will be related to the compressed secret key agreement restricted to the two-user case when the entropy limit is imposed only on user . This duality relationship in the two-user case will serve as the motivation for the main results for the multi-user case. Indeed, the desired characterization of for the two-user case has appeared in [38, Lemma 4.1] for the scalar Gaussian source model:
$$\tilde C_{\mathrm{S},1}(\alpha) := \sup I(Z_1' \wedge Z_2) \quad\text{where} \tag{4a}$$
$$I(Z_1' \wedge Z_1) \le \alpha \tag{4b}$$
$$I(Z_1' \wedge Z_2 \mid Z_1) = 0. \tag{4c}$$
For the general source model, the expression (3.1) has also appeared before with other information-theoretic interpretations as mentioned in [24]. The lagrangian dual of (3.1), in particular, reduces to the dimension reduction technique called the information bottleneck method in [44], where is an observable used to predict the target , and is a feature of that captures as much mutual information with the target variable as possible per bit of mutual information with the observable. Interestingly, the principle of the information bottleneck method was also proposed in [43, 42] as a way to understand deep learning, since the best prediction of from is nothing but a particular feature of sharing a lot of mutual information with .
### 3.2 General source with finite alphabet set
Consider the multi-user case where . If takes values from a finite set, then the unconstrained secrecy capacity was shown in [22] to be achievable via communication for omniscience (CO) and equal to
$$C_{\mathrm S}(\infty) = H(Z_V) - R_{\mathrm{CO}}, \tag{1}$$
where is the smallest rate of CO [22] characterized by the linear program
$$R_{\mathrm{CO}} = \min_{r_V} r(V) \quad\text{such that} \tag{2a}$$
$$r(B) \ge H(Z_B \mid Z_{V \setminus B}) \quad \forall B \subsetneq V, \tag{2b}$$
where $r(B)$ denotes the sum $\sum_{i \in B} r_i$. Further, can be achieved by non-interactive discussion. It follows that
$$R_{\mathrm S} \le R_{\mathrm{CO}}, \quad\text{or equivalently} \tag{1a}$$
$$C_{\mathrm S}(R) = C_{\mathrm S}(\infty) \quad\text{for } R \ge R_{\mathrm{CO}}. \tag{1b}$$
It was also pointed out in [22] that private randomization does not increase . Hence, if is finite, we have
$$\alpha_{\mathrm S} \le H(Z_V) \tag{1}$$
because can be achieved with . While it seems plausible that randomization neither decreases nor increases for any , a rigorous proof remains elusive. Similarly, it appears plausible that neither nor is affected by randomization but, again, no proof is known yet.
An alternative characterization of was established in [11, 18] by showing that the divergence bound in [22] is tight in the case without helpers. More precisely, with defined as the set of partitions of into at least two non-empty disjoint sets, then
$$C_{\mathrm S}(\infty) = I(Z_V) := \min_{\mathcal P \in \Pi'(V)} I_{\mathcal P}(Z_V), \quad\text{where} \tag{2a}$$
$$I_{\mathcal P}(Z_V) := \frac{1}{|\mathcal P| - 1}\, D\Bigl(P_{Z_V} \,\Big\|\, \prod_{C \in \mathcal P} P_{Z_C}\Bigr) = \frac{1}{|\mathcal P| - 1}\Bigl[\sum_{C \in \mathcal P} H(Z_C) - H(Z_V)\Bigr]. \tag{2b}$$
In the bivariate case when , reduces to Shannon’s mutual information . It was further pointed out in [15] that is the minimum solution to the residual independence relation
$$H(Z_V) - \gamma = \sum_{C \in \mathcal P} \bigl[H(Z_C) - \gamma\bigr] \tag{1}$$
for some . To get an intuition of the above relation, notice that is a solution when the joint entropy on the left is equal to the sum of entropies ’s on the right for some partition . In other words, the MMI is the smallest value of removal of which leads to an independence relation, i.e., the total residual randomness on the left is equal to the sum of individual residual randomness on the right according to some partitioning of the random variables. It was further shown in [15] that there is a unique finest optimal partition to (2a) with a clustering interpretation in [8]. The MMI is also computable in polynomial time, following the result of Fujishige [26].
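To make the partition minimization concrete, the following sketch brute-forces $I_{\mathcal P}(Z_V)$ over all partitions for a small hypergraphical source. The source (three users observing independent uniform bits $X_a$, $X_b$, $X_c$) is a hypothetical instance chosen for illustration; it is not taken from the paper.

```python
# Hypothetical 3-user hypergraphical source over independent uniform
# bits X_a, X_b, X_c (one bit of entropy each):
#   Z1 = (X_a, X_b, X_c),  Z2 = (X_a, X_b),  Z3 = (X_a, X_c).
Z = {1: {'a', 'b', 'c'}, 2: {'a', 'b'}, 3: {'a', 'c'}}

def H(users):
    """Joint entropy in bits = number of distinct unit-entropy edges seen."""
    return len(set().union(*(Z[i] for i in users)))

def partitions(items):
    """Yield all set partitions of `items` (as lists of blocks)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield p + [[first]]

V = [1, 2, 3]
# I(Z_V) = min over partitions with >= 2 blocks of
#          (sum_C H(Z_C) - H(Z_V)) / (|P| - 1)
mmi = min(
    (sum(H(C) for C in P) - H(V)) / (len(P) - 1)
    for P in partitions(V) if len(P) >= 2
)
print(mmi)   # 2.0 for this source
```

For three users there are only four partitions with at least two blocks, so the brute force is instant; for larger $V$ the Bell-number growth is why the polynomial-time algorithms cited above matter.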
In the opposite extreme with , it is easy to argue that
$$C_{\mathrm S}(0) \ge J_{\mathrm{GK}}(Z_V) \tag{2}$$
where is the multivariate extension of the Gács–Körner common information in (1)
$$J_{\mathrm{GK}}(Z_V) := \sup\{H(U) \mid H(U|Z_i) = 0 \ \forall i \in V\} \tag{3}$$
with again chosen as a discrete random variable. Note that, even without any public discussion, every user can compress their source independently to where is the maximum common function if is finite. Hence, it is easy to achieve a secret key rate of without any discussion. The reverse inequality of (2) seems plausible but has not been proven yet except in the two-user case. The technique in [21] which relies on the Csiszár sum identity does not appear to extend to the multi-user case to give a matching converse.
### 3.3 Hypergraphical sources
Stronger results have been derived for the following special source model:
###### Definition 3 (Definition 2.4 of [18])
is a hypergraphical source w.r.t. a hypergraph with edge functions iff, for some independent edge variables for with ,
$$Z_i := (X_e \mid e \in E,\ i \in \xi(e)) \quad\text{for } i \in V. \tag{4}$$
In the special case when the hypergraph is a graph, i.e., , the model reduces to the pairwise independent network (PIN) model in [40]. The hypergraphical source can also be viewed as a special case of the finite linear source considered in [12] if the edge random variables take values from a finite field.
For hypergraphical sources, various bounds on and have been derived in [35, 37, 9, 10]. The achieving scheme makes use of the idea of decremental secret key agreement [6, 5], where the redundant or less useful edge variables are removed or reduced before public discussion. This is a special case of the compressed secret key agreement, where the compression step simply selects the more useful edge variables up to the joint entropy limit.
For the PIN model, it turns out that decremental secret key agreement is optimal, leading to a single-letter characterization of and in [10]:
$$R_{\mathrm S} = (|V| - 2)\, C_{\mathrm S}(\infty). \tag{5a}$$
$$C_{\mathrm S}(R) = \min\Bigl\{\frac{R}{|V| - 2},\ C_{\mathrm S}(\infty)\Bigr\} \quad\text{for } R \ge 0. \tag{5b}$$
It can be verified that (5a) is the smallest value of such that using (5b). While the proof of converse, i.e., for (5b), is rather involved, the achievability is by a simple tree packing protocol, which belongs to the decremental secret key agreement approach that removes excess edges unused for the maximum tree packing. In other words, the achieving scheme is a compressed secret key agreement scheme. This connection will lead to a single-letter characterization of for the PIN model (in Theorem 5.2).
To illustrate the above results, a single-letter characterization for will be derived in the following for the source in Example 1. It will also demonstrate how an exact characterization for can be extended from a PIN model to a hypergraphical model via some contrived arguments. The characterization will also be useful later in Example 3 to give an exact characterization of .
###### Example 2
The source defined in (12) in Example 1, for instance, is a hypergraphical source with , , and . By (3.2), we have with the optimal solution and . This means that user needs to discuss bit to attain omniscience. In particular, user can reveal the XOR so that user and can recover and respectively from their observations. By (1b), then, we have
$$C_{\mathrm S}(R) = C_{\mathrm S}(\infty) = H(Z_V) - R_{\mathrm{CO}} = 2 \quad\text{for } R \ge R_{\mathrm{CO}} = 1. \tag{1}$$
It can also be checked that the alternative characterization of in (3.2) gives
$$C_{\mathrm S}(\infty) = I(Z_V) = \tfrac12\bigl[H(Z_1) + H(Z_2) + H(Z_3) - H(Z_{\{1,2,3\}})\bigr] = 2.$$
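The values $R_{\mathrm{CO}} = 1$ and $C_{\mathrm S}(\infty) = 2$ can also be checked numerically. The sketch below assumes a hypothetical hypergraphical source consistent with these numbers (three users observing independent uniform bits $X_a$, $X_b$, $X_c$); the actual source of Example 1 is the one defined in the paper.

```python
from itertools import chain, combinations

# Hypothetical source consistent with the numbers above:
#   Z1 = (X_a, X_b, X_c),  Z2 = (X_a, X_b),  Z3 = (X_a, X_c),
# with X_a, X_b, X_c independent uniform bits.
Z = {1: {'a', 'b', 'c'}, 2: {'a', 'b'}, 3: {'a', 'c'}}
V = set(Z)

def H(users):
    # Entropy in bits: number of distinct unit-entropy edges observed.
    return len(set().union(*(Z[i] for i in users))) if users else 0

def cond_entropy(B):
    # H(Z_B | Z_{V \ B}) = H(Z_V) - H(Z_{V \ B})
    return H(V) - H(V - B)

def feasible(r):
    # Omniscience constraints: r(B) >= H(Z_B | Z_{V \ B}) for proper subsets B.
    proper = chain.from_iterable(combinations(V, k) for k in range(1, len(V)))
    return all(sum(r[i] for i in B) >= cond_entropy(set(B)) for B in proper)

r = {1: 1, 2: 0, 3: 0}      # user 1 discusses one bit (the XOR)
assert feasible(r)          # achievable sum rate: 1
# Converse: any feasible r has r1 + r2 >= H(Z_{1,2} | Z_3) = 1 and r3 >= 0,
# so r(V) >= 1.  Hence R_CO = 1 and C_S(inf) = H(Z_V) - R_CO = 3 - 1 = 2.
print(sum(r.values()), H(V) - sum(r.values()))   # 1 2
```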
Next, we argue that
$$C_{\mathrm S}(R) = 1 + R \quad\text{for } R \in [0,1]. \tag{2}$$
The achievability, i.e., the inequality , is by the usual time-sharing argument. In particular, the bound , for example, can be achieved by the compressed secret key agreement scheme in Example 1 with , i.e., by time-sharing the compressed secret key agreement schemes for and for equally. More precisely, we set , , , and . It follows that the public discussion rate is .
Now, to prove the reverse inequality for (2), we modify the source to another source defined as follows with an additional uniformly random and independent bit :
N.b., is different from , namely, is obtained from by adding , and is obtained from by adding and removing . It follows that is a PIN. By (3.2) and (5b), the constrained secrecy capacity for the modified source is
$$C'_{\mathrm S}(R) = \min\{R, 2\}.$$
The desired inequality is proved if we can show that
$$C'_{\mathrm S}(R + 1) \ge C_{\mathrm S}(R).$$
To argue this, note that, if user reveals in public, then user can recover . Furthermore, does not leak any information about , and so the source effectively emulates the source . Consequently, any optimal discussion scheme that achieves for can be used to achieve the same secret key rate but after an additional bit of discussion . This gives the desired inequality that establishes (2).
## 4 Multi-letter characterization
We start with a simple multi-letter characterization of the compressed secrecy capacity in terms of the MMI (3.2).
###### Proposition 2
For any , we have
$$\tilde C_{\mathrm S}(\alpha) = \sup \lim_{n \to \infty} \frac{1}{n} I(\tilde Z_V) \tag{3}$$
where the supremum is over all valid compressed source satisfying the joint entropy limit (4).
###### Proof
This is because the compressed secrecy capacity is simply the secret key agreement on a compressed source. Hence, by (3.2), the MMI on the compressed source gives the compressed secrecy capacity.∎
The characterization in (3) is simpler than the formulation in (3) because it does not involve the random variables and , nor the recoverability (1) and secrecy (2) constraints. Although such a multi-letter expression is not computable and therefore not accepted as a solution to the problem, it serves as an intermediate step that helps derive further results. More precisely, consider the bivariate case where . Then, (3) becomes
$$\tilde C_{\mathrm S}(\alpha) = \sup \lim_{n \to \infty} \frac{1}{n} I(\tilde Z_1 \wedge \tilde Z_2) \quad\text{where} \tag{4a}$$
$$\limsup_{n \to \infty} \frac{1}{n} H(\tilde Z_1, \tilde Z_2) - \alpha \le 0. \tag{4b}$$
If in addition the joint entropy constraint (4b) is replaced by the entropy constraint on user only, i.e.,
$$\limsup_{n \to \infty} \frac{1}{n} H(\tilde Z_1) - \alpha \le 0, \tag{4c}$$
then can be single-letterized by standard techniques as in [21] to defined in (3.1). The following gives a simple upper bound that is tight for sufficiently small .
###### Proposition 3
defined in (3.1) is continuous, non-decreasing and concave in with
$$\tilde C_{\mathrm{S},1}(\alpha) \le \alpha. \tag{1}$$
Furthermore, equality holds iff .
###### Proof
Monotonicity is obvious. Continuity and concavity can be shown by the usual time-sharing argument as in Proposition 1. (1) follows directly from the data processing inequality that under the Markov chain required in (4c). If , then there exists a feasible solution to (1) (a common function of and ) with , and so the compressed sources and can be chosen as a function of to achieve the equality for (1). Conversely, suppose is finite and (1) is satisfied with equality. Then, in addition to , we also have , which implies by the double Markov property that, for the maximum common function achieving defined in (1),
$$I(Z_1' \wedge Z_1, Z_2 \mid U) = 0 \quad (\text{i.e., } Z_1' - U - (Z_1, Z_2)).$$
In other words, the optimal is a stochastic function of the maximum common function of and , and so as desired.∎
We will show that the above upper bound in (1) extends to the multi-user case (in Proposition 5). However, for , the above upper bound is not tight even in the two-user case. To improve the upper bound, the following duality between and will be used and extended to the multi-user case (in Theorem 5.1).
###### Proposition 4
For ,
$$\tilde C_{\mathrm{S},1}(\alpha) = C_{\mathrm{S},1}\bigl(\alpha - \tilde C_{\mathrm{S},1}(\alpha)\bigr). \tag{2}$$
Furthermore, the set of optimal solutions to the left (achieving defined in (3.1)) is the same as the set of optimal solutions to the right (achieving in (3.1) with ). It follows that the minimum admissible entropy (9) but with the entropy constraint on user instead is
$$\alpha_{\mathrm{S},1} = R_{\mathrm{S},1} + I(Z_1 \wedge Z_2) = J_{\mathrm{W},1}(Z_1 \wedge Z_2) \tag{3}$$
where and are defined in (2) and (3) respectively.
###### Proof
Set . Consider first an optimal solution to and show that it is also an optimal solution to . By optimality,
$$I(Z_1' \wedge Z_2) = \tilde C_{\mathrm{S},1}(\alpha). \tag{4}$$
By the constraint (4b), . It follows that the constraint (1b) holds, and so is a feasible solution to , i.e., we have for (2) that
$$\tilde C_{\mathrm{S},1}(\alpha) \ge C_{\mathrm{S},1}\bigl(\alpha - \tilde C_{\mathrm{S},1}(\alpha)\bigr). \tag{5}$$
To show that is also optimal to , suppose to the contrary that there exists a strictly better solution to , i.e., with
$$I(Z_1'' \wedge Z_2) > I(Z_1' \wedge Z_2) = \tilde C_{\mathrm{S},1}(\alpha). \tag{6}$$
It follows that
$$I(Z_1'' \wedge Z_1) > I(Z_1' \wedge Z_1) = \alpha. \tag{7}$$
The last equality means that the constraint (4b) is satisfied with equality. If, to the contrary, the equality did not hold, then setting to be for some fraction of time would give a better solution to , contradicting the optimality of . The first inequality can also be argued similarly by the optimality of . Now, we have
$$\frac{I(Z_1'' \wedge Z_2) - I(Z_1' \wedge Z_2)}{I(Z_1'' \wedge Z_1) - I(Z_1' \wedge Z_1)} \overset{(a)}{\le} \frac{I(Z_1' \wedge Z_2)}{I(Z_1' \wedge Z_1)} \overset{(b)}{\le} 1,$$
where (a) is by the concavity of ; and (b) is by the upper bound in (1). N.b., equality cannot hold simultaneously for (a) and (b) because, otherwise, we have , which, together with (6) and (7), contradicts the result in Proposition 3 that (with strict inequality) for . Hence,
$$\frac{I(Z_1'' \wedge Z_2) - I(Z_1' \wedge Z_2)}{I(Z_1'' \wedge Z_1) - I(Z_1' \wedge Z_1)} < 1,$$
which, together with (6) and (7), implies
$$I(Z_1'' \wedge Z_1) - I(Z_1'' \wedge Z_2) >$$
http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A173287&c=11&searchType=ORGANISATION&language=no&query=&af=%5B%5D&aq=%5B%5B%7B%22organisationId%22%3A%221071%22%7D%5D%5D&aq2=%5B%5B%5D%5D&aqe=%5B%5D&noOfRows=50&sortOrder=author_sort_asc&sortOrder2=title_sort_asc&onlyFullText=false&sf=all
Phenomenology of Charged Higgs Bosons and B-meson Decays
Uppsala University, Disciplinary Domain of Science and Technology, Physics Section, Department of Physics and Astronomy, Nuclear and Particle Physics.
2009 (English). Doctoral thesis, comprising papers (Other academic).
##### Abstract [en]
For more than 30 years the Standard Model has been the theoretical foundation for particle physics. The theory has been verified successfully by experimental tests. Its biggest shortcoming is the non-discovery of the Higgs boson, responsible for giving the other particles their masses. Despite its success, there are hints that the Standard Model is not the complete theory, and many extensions of it, such as supersymmetry, have been proposed.
Extended theories often predict the existence of a charged Higgs boson and its detection will be a clear sign of physics beyond the Standard Model. The main focus in this thesis is on various phenomenological aspects of the charged Higgs boson. For favorable mass and couplings direct detection is shown to be possible at the Large Hadron Collider in production with an associated W boson. It is also shown how a light charged Higgs can have measurable effects on spin correlations in decays of pair-produced top quarks. The charged Higgs boson can also be seen indirectly, in for example B-meson decays, which can be used to put constraints on its mass and fermion couplings. Exclusion limits in two supersymmetric models are given together with a comparison with the discovery potentials for the LHC experiments. A tool for calculating properties, such as masses and decays, of both charged and neutral Higgs bosons in the Two-Higgs-Doublet Model is also presented.
B-meson decays can also be used to test aspects of the strong interaction. Part of this thesis deals with improving and applying phenomenological models to B-meson decays. Although these models are not derived from first principles, their success shows that they capture important features of non-perturbative strong interactions.
##### Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2009, p. 40
##### Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 606
##### Keywords [en]
Supersymmetry, Beyond Standard Model, B-Physics, Charged Higgs Boson, LHC, QCD, Spin Correlations, CMSSM, NUHM
##### Identifiers
ISBN: 978-91-554-7422-5 (printed); OAI: oai:DiVA.org:uu-9564; DiVA id: diva2:173287
##### Public defence
2009-03-20, Siegbahnsalen, Ångström Laboratory, Lägerhyddsvägen 1, Uppsala, 13:15 (English)
##### Supervisor
Available from: 2009-02-24. Created: 2009-02-11. Last updated: 2010-12-09. Bibliographically checked.
##### List of papers
1. Associated charged Higgs and W boson production in the MSSM at the CERN Large Hadron Collider
2008 (English). In: European Physical Journal C, ISSN 1434-6044, E-ISSN 1434-6052, Vol. 53, no. 2, pp. 267-280. Journal article (Refereed). Published.
##### Abstract [en]
We investigate the viability of observing charged Higgs bosons () produced in association with bosons at the CERN large hadron collider, using the leptonic decay and hadronic decay, within different scenarios of the minimal supersymmetric standard model (MSSM) with both real and complex parameters. Performing a parton level study we show how the irreducible standard model background from jets can be controlled by applying appropriate cuts and find that the size of a possible signal depends on the cuts needed to suppress QCD backgrounds and misidentifications. In the standard maximal mixing scenario of the MSSM we find a viable signal for large and intermediate masses () when using softer cuts (, 50 GeV), whereas for harder cuts (, 100 GeV) we only find a viable signal for very large (). We have also investigated a special class of MSSM scenarios with large mass splittings among the heavy Higgs bosons where the cross-section can be resonantly enhanced by factors up to one hundred, with a strong dependence on the -violating phases. Even so we find that the signal after cuts remains small except for small masses () when using the softer cuts. Finally, in all the scenarios we have investigated we have only found small -asymmetries.
##### Identifiers
urn:nbn:se:uu:diva-98098 (URN); 10.1140/epjc/s10052-007-0453-x (DOI); 000252299200007 ()
2. PYBBWH: A program for associated charged Higgs and W boson production
(English). Manuscript (Other academic).
##### Abstract [en]
The Monte Carlo program, PYBBWH, is an implementation of the associated production of a charged Higgs and a W boson from $b\bar b$ fusion in a general Two-Higgs-Doublet model for both CP-conserving and CP-violating couplings. It is implemented as an external process to Pythia 6. The code can be downloaded from http://www.isv.uu.se/thep/MC/pybbwh.
##### Identifiers
urn:nbn:se:uu:diva-98099 (URN)
3. New angles on top quark decay to a charged Higgs
2009 (English). In: Journal of High Energy Physics (JHEP), ISSN 1126-6708, E-ISSN 1029-8479, Vol. 01, no. 1, p. 024. Journal article (Refereed). Published.
##### Abstract [en]
A proper discovery of a charged Higgs boson (H±) requires its spin and couplings to be determined. We investigate how to utilize t spin correlations to analyze the H± couplings in the decay t → bH+ → bτ+ντ. Within the framework of a general Two-Higgs-Doublet Model, we obtain results on the spin analyzing coefficients for this decay and study in detail its spin phenomenology, focusing on the limits of large and small values for tan β. Using a Monte Carlo approach to simulate full hadron-level events, we evaluate systematically how the H± → τ±ντ decay mode can be used for spin analysis. The most promising observables are obtained from azimuthal angle correlations in the transverse rest frames of t (t̄). This method is particularly useful for determining the coupling structure of H± in the large tan β limit, where differences from the SM are most significant.
##### Keywords
Beyond Standard Model, Higgs Physics, Spin and Polarization Effects, Hadronic Colliders
##### Identifiers
urn:nbn:se:uu:diva-98100 (URN); 10.1088/1126-6708/2008/01/024 (DOI); 000252983400053 ()
4. Charged Higgs effects on top spin correlations
2009 (English). In: Proceedings of Science, ISSN 1824-8039, Vol. CHARGED2008, no. 024. Journal article (Refereed). Published.
##### Identifiers
urn:nbn:se:uu:diva-98101 (URN)
5. Charged Higgs bosons in minimal supersymmetry: updated constraints and experimental prospects
2008 (English). In: Journal of High Energy Physics (JHEP), ISSN 1126-6708, E-ISSN 1029-8479, Vol. 11, no. 035. Journal article (Refereed). Published.
##### Abstract [en]
We discuss the phenomenology of charged Higgs bosons in the MSSM with minimal flavor violation. In addition to the constrained MSSM (CMSSM) with universal soft supersymmetry breaking mass parameters at the GUT scale, we explore non-universal Higgs mass models (NUHM) where this universality condition is relaxed. To identify the allowed parameter space regions, we apply constraints from direct searches, low energy observables, and cosmology. We find that values of the charged Higgs mass as low as mH+ 135 GeV can be accommodated in the NUHM models, but that several flavor physics observables disfavor large H+ contributions, associated with high tan β, quite independently of MSSM scenario. We confront the constrained scenarios with the discovery potentials reported by ATLAS and CMS, and find that the current exclusion by indirect constraints is similar to the expected LHC discovery reach with 30 fb−1 of data. Finally, we evaluate the sensitivity of the presented discovery potential to the choice of MSSM benchmark scenario. This sensitivity is found to be higher in the case of a light (mH+ < mt) charged Higgs.
##### Keywords
Supersymmetric Standard Model, Higgs Physics, Hadronic Colliders
##### Identifiers
urn:nbn:se:uu:diva-98102 (URN); 10.1088/1126-6708/2008/11/035 (DOI); 000261315100035 ()
6. Color rearrangements in B-meson decays
2009 (English). In: Physical Review D, ISSN 1550-7998, E-ISSN 1550-2368, Vol. 79, no. 1, p. 014011. Journal article (Refereed). Published.
##### Abstract [en]
We present a new model, based on color rearrangements, which at the same time can describe both hidden and open charm production in B-meson decays. The model is successfully compared to both inclusive decays, such as B → J/ψ X and B → Ds X, as well as exclusive ones, such as B → J/ψ K(*) and B → D(*)D(*)K. It also gives a good description of the momentum distribution of direct J/ψ's, especially in the low-momentum region, which earlier has been claimed as a possible signal for new exotic states.
##### Identifiers
urn:nbn:se:uu:diva-98103 (URN); 10.1103/PhysRevD.79.014011 (DOI); 000262979700036 ()
7. 2HDMC - two-Higgs-doublet model calculator
2010 (English). In: Computer Physics Communications, ISSN 0010-4655, E-ISSN 1879-2944, Vol. 181, no. 1, pp. 189-205. Journal article (Refereed). Published.
##### Abstract [en]
We describe the public C++ code 2HDMC which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z_2-symmetries or more general couplings, a decay library including all two-body - and some three-body - decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC.
##### Keywords
Higgs physics, Two-Higgs-doublet model
##### Identifiers
urn:nbn:se:uu:diva-111157 (URN); 10.1016/j.cpc.2009.09.011 (DOI); 000273168600022 ()
Erratum in: Computer Physics Communications, 2010, vol. 181, issue 5, p. 985, doi: 10.1016/j.cpc.2009.12.026
https://im.kendallhunt.com/HS/teachers/3/4/1/preparation.html
Lesson 1
Growing and Shrinking
Lesson Narrative
This lesson builds on students’ experience with exponential functions in a previous course and with geometric sequences from earlier in this course. The goal is to recall some features of exponential change, such as:
• Exponential change involves repeatedly multiplying a quantity by the same factor, rather than adding the same amount.
• Exponential growth happens when the factor is greater than 1, and exponential decay happens when the factor is between 0 and 1.
• A quantity that grows exponentially may appear to increase slowly at first but then increases very rapidly later.
In addition, students briefly explore the meaning of an exponential function at a non-whole number input and how they could determine the value of the function for that input. This exploration is done in the context of a pond whose surface is being covered by algae that doubles in size each day, asking how much of the surface is covered half a day before 100% coverage. In future lessons, students will focus on making sense of the meaning of rational inputs in other contexts before using the principle that exponential functions change by equal amounts over equal intervals to calculate things like growth factors over different intervals of time.
Students may represent exponential changes in different ways. They reason abstractly and quantitatively by using descriptions to write expressions, create a table, or make a graph in order to answer questions about a situation (MP2). They may also use expressions to capture regularity in repeated reasoning (MP8). For instance, after being shrunk $$n$$ times by a factor of $$\frac{4}{5}$$, the height of a passport picture gets multiplied by $$\left(\frac{4}{5}\right)^n$$. This work will support students throughout the unit, as they deepen their knowledge of exponential functions and extend it to include any type of rational input, with an emphasis on non-whole number input, later in the unit.
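The repeated-multiplication idea above can be checked directly. The short sketch below is illustrative and not part of the published lesson materials; the 5 cm starting height is an assumed value.

```python
# Shrinking a picture n times by a factor of 4/5 multiplies its height
# by (4/5)**n.  Assume a starting height of 5 cm (illustrative value).
height = 5.0
for n in range(1, 4):
    print(n, round(height * (4 / 5) ** n, 4))   # 4.0, 3.2, 2.56

# Algae that doubles each day: half a day before full coverage, the
# covered fraction is 2**(-1/2), about 70.7% rather than 50%, because
# exponential growth is not linear within the day.
print(round(2 ** (-0.5), 4))   # 0.7071
```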
Technology isn’t required for this lesson, but there are opportunities for students to choose to use appropriate technology, such as spreadsheets, to solve problems. We recommend making technology available (MP5). In particular, provide students access to calculators that can process exponential expressions for all lessons in this unit.
Learning Goals
Teacher Facing
• Compare and contrast (orally) exponential growth and decay.
• Determine values of simple exponential functions in context.
Student Facing
• Let’s calculate exponential change.
Required Preparation
Rulers should be made available for the activity Shrinking a Passport Photo, but won’t necessarily be used by all students.
Student Facing
• I understand how to calculate values that are changing exponentially.
https://mdolab-pygeo.readthedocs-hosted.com/en/latest/_modules/pygeo/geo_utils/remove_duplicates.html
# Source code for pygeo.geo_utils.remove_duplicates
import numpy as np
from .norm import eDist
# --------------------------------------------------------------
# Functions that remove duplicate entries from a list
# --------------------------------------------------------------
def unique(s):
    r"""Return a list of the elements in s, but without duplicates.
    For example, unique([1,2,3,1,2,3]) is some permutation of [1,2,3],
    unique("abcabc") some permutation of ["a", "b", "c"], and
    unique(([1, 2], [2, 3], [1, 2])) some permutation of
    [[2, 3], [1, 2]].
    For best speed, all sequence elements should be hashable. Then
    unique() will usually work in linear time.
    If not possible, the sequence elements should enjoy a total
    ordering, and if list(s).sort() doesn't raise TypeError it's
    assumed that they do enjoy a total ordering. Then unique() will
    usually work in :math:`\mathcal{O}(N\log_2(N))` time.
    If that's not possible either, the sequence elements must support
    equality-testing. Then unique() will usually work in quadratic
    time.
    """
    n = len(s)
    if n == 0:
        return []

    # Try using a dict first, as that's the fastest and will usually
    # work. If it doesn't work, it will usually fail quickly, so it
    # usually doesn't cost much to *try* it. It requires that all the
    # sequence elements be hashable, and support equality comparison.
    u = {}
    try:
        for x in s:
            u[x] = 1
    except TypeError:
        pass
    else:
        return sorted(u.keys())

    # We can't hash all the elements. Second fastest is to sort,
    # which brings the equal elements together; then duplicates are
    # easy to weed out in a single pass.
    # NOTE: Python's list.sort() was designed to be efficient in the
    # presence of many duplicate elements. This isn't true of all
    # sort functions in all languages or libraries, so this approach
    # is more effective in Python than it may be elsewhere.
    try:
        t = list(s)
        t.sort()
    except TypeError:
        pass
    else:
        assert n > 0
        last = t[0]
        lasti = i = 1
        while i < n:
            if t[i] != last:
                t[lasti] = last = t[i]
                lasti += 1
            i += 1
        return t[:lasti]

    # Brute force is all that's left.
    u = []
    for x in s:
        if x not in u:
            u.append(x)
    return u
def uniqueIndex(s, sHash=None):
    """
    This function is based on :meth:`unique`.
    The idea is to take a list s and reduce it as per unique.
    Additionally, this function returns an index array that is
    the same size as the original s and points to where each element
    ends up in the reduced list.
    If sHash is not specified for sorting, s is used.
    """
    if sHash is not None:
        ind = np.argsort(np.argsort(sHash))
    else:
        ind = np.argsort(np.argsort(s))

    n = len(s)
    t = list(s)
    t.sort()

    diff = np.zeros(n, "bool")

    last = t[0]
    lasti = i = 1
    while i < n:
        if t[i] != last:
            t[lasti] = last = t[i]
            lasti += 1
        else:
            diff[i] = True
        i += 1

    b = np.where(diff)[0]
    for i in range(n):
        ind[i] -= b.searchsorted(ind[i], side="right")

    return t[:lasti], ind
def pointReduce(points, nodeTol=1e-4):
    """Given a list of N points in ndim space, with possible
    duplicates, return a list of the unique points AND a pointer list
    for the original points to the reduced set"""

    # First sort the points by their distance from the origin
    points = np.array(points)
    N = len(points)
    if N == 0:
        return points, None
    dists = []
    for ipt in range(N):
        dists.append(np.sqrt(np.dot(points[ipt], points[ipt])))

    # We need to round the distances to 8 decimals before sorting
    # because 2 points might have "identical" distances to the origin,
    # but they might differ on the 16th significant figure. As a result
    # the argsort might flip their order even though the elements
    # should not take over each other. By rounding them to 8
    # significant figures, we somewhat guarantee that nodes that
    # have similar distances to the origin don't get shuffled
    # because of floating point errors.
    dists_rounded = np.around(dists, decimals=8)

    # The "stable" sorting algorithm guarantees that entries
    # with the same values don't overtake each other.
    # The entries with identical distances are fully checked
    # in the brute force check below.
    ind = np.argsort(dists_rounded, kind="stable")

    i = 0
    cont = True
    newPoints = []
    link = np.zeros(N, "intc")
    linkCounter = 0
    while cont:
        cont2 = True
        tempInd = []
        j = i
        while cont2:
            if abs(dists[ind[i]] - dists[ind[j]]) < nodeTol:
                tempInd.append(ind[j])
                j = j + 1
                if j == N:  # Overrun check
                    cont2 = False
            else:
                cont2 = False

        subPoints = []  # Copy of the list of sub points with the dists
        for ii in range(len(tempInd)):
            subPoints.append(points[tempInd[ii]])

        # Brute force search the subset of points sharing this distance
        subUniquePts, subLink = pointReduceBruteForce(subPoints, nodeTol)
        newPoints.extend(subUniquePts)

        for ii in range(len(tempInd)):
            link[tempInd[ii]] = subLink[ii] + linkCounter
        linkCounter += len(subUniquePts)

        i = j
        if i == N:
            cont = False

    return np.array(newPoints), link
def pointReduceBruteForce(points, nodeTol=1e-4):
    """Given a list of N points in ndim space, with possible
    duplicates, return a list of the unique points AND a pointer list
    for the original points to the reduced set

    Warnings
    --------
    This is the brute force version of :func:`pointReduce`.
    """
    N = len(points)
    if N == 0:
        return points, None
    uniquePoints = [points[0]]
    link = [0]
    for i in range(1, N):
        foundIt = False
        for j in range(len(uniquePoints)):
            if eDist(points[i], uniquePoints[j]) < nodeTol:
                link.append(j)
                foundIt = True
                break
        if not foundIt:
            uniquePoints.append(points[i])
            link.append(len(uniquePoints) - 1)
    return np.array(uniquePoints), np.array(link)
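A self-contained sketch of the same tolerance-based reduction, using hypothetical point data (it re-implements the brute-force idea with plain NumPy rather than importing pygeo, so it can be run standalone):

```python
import numpy as np

def dedup_bruteforce(points, tol=1e-4):
    """Return the unique points and, for each input point, the index
    of the unique point it maps to -- the same contract as the
    pointReduce functions above."""
    unique_pts = []
    link = []
    for p in points:
        for j, q in enumerate(unique_pts):
            if np.linalg.norm(np.asarray(p) - np.asarray(q)) < tol:
                link.append(j)
                break
        else:
            link.append(len(unique_pts))
            unique_pts.append(p)
    return np.array(unique_pts), np.array(link)

# Hypothetical data: the third point is a near-duplicate of the first,
# the fourth is an exact duplicate of the second.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1e-8), (1.0, 0.0)]
uniq, link = dedup_bruteforce(pts)
# uniq has 2 rows; link == [0, 1, 0, 1]
```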
https://wiki.pvmet.org/index.php?title=Optical_simulations_of_bifacial_PV_cells_and_modules
Optical simulations of bifacial PV cells and modules
Malte Ruben Vogt, Carsten Schinke, Karsten Bothe; Institut für Solarenergieforschung GmbH (ISFH), Germany
Bifacial PV cells and modules convert light incident onto the front as well as the rear side. The ability to use light incoming from the rear side for the generation of electrical current enables bifacial modules to achieve higher energy yields than monofacial modules under equal conditions [1][2]. The "International Technology Roadmap for Photovoltaic" (ITRPV) [3] predicts an increase in market share for bifacial cells and modules from 2% in 2016 to above 30% within the next ten years. In order for this prediction to become reality, fast and accurate simulation tools for bifacial PV cells, modules and power plants are required.
Optical simulations of bifacial PV devices are more complex compared to simulations of monofacial devices since light incident onto the rear surface must be considered as well. Moreover, the optical properties of objects behind the devices have to be included since light may be transmitted through and reflected back into the device from the rear. The necessity of taking the reflectance of objects behind the solar cell into account is evident from experimental observations [2][4].
An effective numeric simulation model needs to balance the required accuracy of the results against the computational requirements. This is commonly achieved by focusing on a sufficiently accurate description of relevant physical effects while neglecting those effects of minor impact on the results. Applying this principle to bifacial PV devices leads to splitting the simulation into two different types, depending on the focus of the optical simulation. If the goal of the simulation is to understand or improve the optical properties of bifacial solar cells and modules, then optical properties of the cells and other module components are modeled in steady state conditions and with wavelength resolution. Often, standard testing conditions as present in indoor testing laboratories are assumed for this purpose. However, if the goal of the simulation is to calculate the energy yield of a module or an entire power plant at a given location over a certain time, then the modules themselves are modeled in less detail and the focus is shifted to the mounting environment and the incoming irradiation. In this case, wavelength effects are usually neglected, but time dependence needs to be taken into account. The issues considered in the following are discussed in more detail in Ref. 5.
1. Optical simulations of bifacial PV cells and modules indoors
Modeling the optics of solar cells and modules must address the following physical effects, which are visualized in Figure 1:
• Refraction of light at interfaces between different materials (rays 2-9),
• Reflection of light at interfaces between different materials (rays 1, 3, 6, 7),
• Absorption of light within materials (rays 2, 4, 5, 8, 9),
• Scattering of light at rough surfaces (ray 6),
• Interference at thin films, e.g., anti-reflection coatings (ray 10),
• Diffraction, e.g., at surface textures (ray 11).
Figure 1: Optical effects in a PV module: Refraction (2-9), diffraction (11), reflection (1, 3, 6, 7), absorption (2, 4, 5, 8, 9), scattering (6) and interference (10).
Ray tracing [5][6][7][8] simulations are often used when simulating complex PV devices as shown in Fig. 1. The basic idea of optical ray tracing simulations is the calculation of the propagation of single light rays with various directions through a scene. The light rays are emitted by a light source and ray parameters such as intensity and direction are continuously monitored. When the light ray interacts with an object within the scene, a refraction calculation is performed (e.g., based on refractive index data and the morphology of the object's surface) and the ray parameters are adjusted accordingly. This process is continued until the ray is absorbed or leaves the scene. In this case, the ray tracer records the termination parameters, e.g., position of absorption, and the process is restarted with another ray. Often, rays are emitted with random direction and wavelength. Repeating the simulation for a large number of rays then yields an estimation for the average interaction of light with the objects in the simulation.
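As a toy illustration of this Monte Carlo idea, the sketch below traces rays through a single interface with made-up per-ray probabilities for reflection, absorption, and transmission; the tallies estimate the average interaction of light with the object, exactly as described above (all probabilities are hypothetical, not measured module data):

```python
import random

random.seed(0)

# Hypothetical per-ray outcome probabilities at one interface/layer.
P_REFLECT = 0.1
P_ABSORB = 0.6   # absorbed in the cell -> contributes to current
# Remaining probability (0.3) -> transmitted through the device.

n_rays = 100_000
tally = {"reflected": 0, "absorbed": 0, "transmitted": 0}

for _ in range(n_rays):
    u = random.random()
    if u < P_REFLECT:
        tally["reflected"] += 1
    elif u < P_REFLECT + P_ABSORB:
        tally["absorbed"] += 1
    else:
        tally["transmitted"] += 1

fractions = {k: v / n_rays for k, v in tally.items()}
# The three fractions sum to 1 and approach (0.1, 0.6, 0.3)
# as the number of rays grows.
```

Repeating the simulation for a large number of rays is what turns the per-ray randomness into a stable estimate of the average optical behaviour.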
Ray tracing models for photovoltaics have to deal with the interaction of light with objects whose size scale is on six different orders of magnitude. For instance, a PV module has to be modeled on the order of meters. In contrast, the lateral extent of solar cells in the module is of the order of centimeters, whereas their thickness is of the order of 100-200 microns. Surface textures, which alter the optical properties of solar cells significantly, are of the order a few microns. Combining these different size scales in one ray tracing simulation while restricting the computational effort to a reasonable amount is a challenging task.
For solar cells, a common ray tracing approach is the consideration of a simulation domain consisting of one element of the texture (e.g., a pyramid) and the part of the solar cell below, as depicted in Figure 2. The simulation domain then has dimensions of the order of 5 µm × 5 µm × 170 µm. Periodic boundary conditions are applied, which means that infinite lateral extent of the solar cell is assumed. This assumption is usually justified: Typical solar cells have a lateral extent of 156 mm, which is more than four orders of magnitude larger than the lateral extent of the simulation domain. Due to the periodic boundary conditions, this simulation describes solar cells with a regular structure of surface texture. However, solar cells with random textures can also be described by introducing a random displacement of the ray after application of the periodic boundary condition. This approach is sometimes called random boundary condition and is used by many solar cell ray tracing programs such as Sunrays [6], the PV Lighthouse wafer ray tracer [7] or the Sentaurus raytracer [8].
Figure 2: Classical simulation domain for Si based solar cells.
The simulation of bifacial PV devices requires an adaptation of the rear side of the simulation domain, i.e., the addition of a rear side texture, of materials below the cell, and possibly of a light source at the rear side of the cell. This can lead to two separate simulations (one with the light source positioned in front of and another one with the light source positioned behind the PV device) or the simultaneous use of two light sources in one simulation. In the first case, the simulation results need to be combined in an appropriate post processing step. The second option requires a simulation tool which is capable of using multiple light sources with correct relative intensities. For an optical loss analysis, it may be helpful to distinguish between light rays originating from either of the two light sources.
A limitation of the common simulation domain described above becomes obvious when trying to integrate the metalized contacts into the simulation domain, since the metallization structures are much larger than the domain. The front surface metallization is therefore usually not considered in the ray tracing simulation itself but taken into account during post processing of the simulation results by reducing the determined short circuit current of the solar cell by the fraction of metalized (shaded) front surface area. The symmetry of simulation domains as in Fig. 2 is also broken for bifacial PV devices, since they have localized metal structures on the rear side. These rear side metal contacts change the optical properties of the rear side locally, but on a size scale larger than the texture, which determines the domains width.
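The post-processing shading correction described above is a simple scaling of the short-circuit current by the unshaded area fraction. A minimal sketch with hypothetical numbers:

```python
def shaded_current(jsc_unshaded, metal_fraction):
    """Reduce the short-circuit current by the metalized (shaded)
    fraction of the front surface, as in the post-processing step."""
    return jsc_unshaded * (1.0 - metal_fraction)

# Hypothetical values: 42 mA/cm^2 before shading, 3% front metallization.
jsc = shaded_current(42.0, 0.03)
# jsc is about 40.74 mA/cm^2
```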
Figure 3: Multi-domain approach for bifacial modules. From the perspective of the light rays (red lines), the way through the different domains needs to represent the path through a real PV module.
In order to simulate the interaction of light with a PV module appropriately, not only light hitting the solar cells must be considered, but also light hitting the gaps between the solar cells. This can be achieved by using a multi-domain approach [9] as illustrated in Figure 3. In the multi-domain approach, the simulation scene is split into several domains. Each domain contains one symmetry element (modeled in three dimensions) and appropriate boundary conditions to represent its periodic or random lateral repetition. In order to enable interaction between objects in different domains, there needs to be a way for the light rays to travel from one domain to another. From the perspective of the light ray, the way through the different domains needs to represent the path through a real PV module. Depending on the cell architecture, up to five domains are required for a representation of a PV module. The first (top-level) domain is called "module domain". It is irradiated by the light source and contains one whole solar cell and its surrounding components in the module. The second domain is the "front finger domain". It contains the symmetry element surrounding one front finger. The third domain is the "front texture domain". It contains the symmetry element of the surface texture and its surroundings. Depending on the cell architecture, a "rear texture domain" and a "rear finger domain" are also used. Each of these domains applies periodic or random boundary conditions at its side faces (green lines). In contrast, the top and bottom faces of domains b)-e) connect the domains. One option for a practical realization of this connection are surface effects (magenta lines) which shift light rays from one domain to another.
Figure 4 shows the ray tracing results for a typical bifacial module illuminated by a front light source. The orange area represents the fraction of the light converted into electrical current. The two biggest losses are the reflection of all the module's components (green) and transmission through the module (magenta). The other losses are absorption by parts of the module other than the cell's absorber (e.g. glass or metallization), which are converted into heat.
Figure 4: The results of the interaction of the incoming light with a typical bifacial module. The orange area represents the fraction of the light converted into electrical power. Cumulated, the two biggest losses are the reflection of all the module's components (green) and transmission through the module (magenta).
2. Optical simulations of bifacial PV modules and power plants outdoors
Outdoors, the irradiation depends on the location, time and weather. Thus, the light source must either generate radiation representing irradiation conditions which are averaged over the whole year [10] or specific irradiation for each hour of the year [2][11]. The second option requires a post-processing step in order to combine the simulation results over the whole year.
Modelling the interaction of the module with its environment correctly requires the mounting height and angle to be included in the simulation domain (see Fig. 5a). Additionally, other objects that might block or reflect light, such as ground, trees, buildings or the mounting rack, need to be included in the simulation model as well.
Figure 5: Simulation domain for modelling the interaction of the module with its environment. a) The mounting height and angle of a module. b) The light rays can hit the module front and rear side directly or after reflection from the ambience.
Since the energy yield of bifacial and monofacial PV devices can be very similar under front side illumination [12], the possible advantage of bifacial modules is often quantified by the bifacial gain Gbif, which is the current generated by light incoming from the rear side Irear divided by the current generated by light incoming from the front side of the PV device Ifront:
$\displaystyle{ G_{bif} = \frac {I_{rear}} {I_{front}} }$ (1)
The average annual current of a PV device is the current generated under illumination by a light source representing irradiation conditions which are averaged over the whole year [10][13]. The average annual current facilitates a current rating for PV modules. A current rating can be used to evaluate the optical performance of PV modules under certain reference irradiance and mounting conditions. Analogously to Eq. (1), a "bifacial average annual current gain" Ḡbif can be defined [13]:
$\displaystyle{ \bar{G}_{bif} = \frac {\bar{I}_{rear}} {\bar{I}_{front}} }$ (2)
Figure 6: Average annual current gain of bifacial modules compared to monofacial modules.
Figure 6 shows the average annual bifacial current gain for different mounting heights and ground materials according to [13]. The simulations are conducted for a module facing south at a 35° tilt angle (see Fig. 5a) with the average yearly irradiation conditions in Hamelin, Germany. The results for a ground reflection of zero (Fig. 6, magenta) show that a bifacial gain of 3.6%abs is achieved due to light which hits the rear side of the module without being reflected at the ground before (see light paths Fig. 5b). Consequently, this 3.6%abs gain is independent of the module mounting height. The bifacial gains due to reflection from the ground are dependent on the ground albedo and on the module mounting height. For the bifacial module simulated above white ground, a 10%abs increase in average annual current is achieved by mounting it 1 m above the ground instead of 0.1 m. Making the same change in mounting height above green grass or asphalt only leads to a 4%abs or 1%abs increase in average annual current, respectively. This demonstrates that high ground reflectivity increases the current gain caused by higher mounting heights. This is a consequence of the larger ground area "seen" by the module when increasing its mounting height.
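Equations (1) and (2) are straightforward ratios to evaluate. A minimal sketch with hypothetical currents (the numbers below are illustrative, not simulation results from [13]):

```python
def bifacial_gain(i_rear, i_front):
    """G_bif = I_rear / I_front, Eq. (1); Eq. (2) is the same ratio
    applied to average annual currents."""
    return i_rear / i_front

# Hypothetical currents (arbitrary units): rear-side generation is a
# small fraction of front-side generation.
g = bifacial_gain(0.36, 9.0)
# g = 0.04, i.e. a 4% bifacial gain
```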
References
[1] A. Cuevas, A. Luque, J. Eguren, and J. del Alamo, "50 Per cent more output power from an albedo-collecting flat panel using bifacial solar cells," Sol. Energy, vol. 29, no. 5, pp. 419-420, 1982.
[2] G. J. M. Janssen, B. B. Van Aken, A. J. Carr, and A. A. Mewe, "Outdoor Performance of Bifacial Modules by Measurements and Modelling," Energy Procedia, vol. 77, pp. 364-373, 2015.
[3] "International Technology Roadmap for Photovoltaic (ITRPV)," no. Eighth Edition. 2017.
[4] C. Deline, S. Macalpine, B. Marion, F. Toor, A. Asgharzadeh, and J. S. Stein, "Assessment of Bifacial Photovoltaic Module Power Rating Methodologies-Inside and Out," IEEE J. Photovoltaics, vol. 7, no. 2, pp. 575-580, 2017.
[5] C. Schinke, M. R. Vogt, and K. Bothe, "Optical Modeling of Photovoltaic Modules with Ray Tracing Simulations," in Photovoltaic Modeling Handbook, 2018, pp. 27-92.
[6] R. Brendel, "Sunrays: A versatile tracing program for the photovoltaic community," in 12th EUPVSEC, 1994, no. April, pp. 1339-1342.
[7] PV Lighthouse, "Wafer ray tracer." [Online]. Available: https://www2.pvlighthouse.com.au/calculators/waferraytracer/waferraytracer.html.
[8] Synopsys Inc., "Sentaurus Device." Mountain View, CA, USA.
[9] M. R. Vogt et al., "PV module current gains due to structured backsheets," Energy Procedia, vol. 124, pp. 495-503, Sep. 2017.
[10] M. Winter, H. Holst, M. R. Vogt, and P. P. Altermatt, "Impact of realistic illumination on optical losses in Si solar cell modules compared to standard testing conditions," in 31st EU PVSEC, 2015, pp. 1869-1874.
[11] U. A. Yusufoglu et al., "Simulation of energy production by bifacial modules with revision of ground reflection," Energy Procedia, vol. 55, pp. 389-395, 2014.
[12] T. Dullweber et al., "Present status and future perspectives of bifacial PERC + solar cells and modules," Jpn. J. Appl. Phys., vol. 57, no. 08RA01, 2018.
[13] M. R. Vogt, T. Gewohn, K. Bothe, C. Schinke, and R. Brendel, "Impact of using spectrally resolved ground albedo data for performance simulations of bifacial modules," in Proc. 35th European Photovoltaic Solar Energy Conference and Exhibition, 2018, pp. 1011-1016.
https://proofwiki.org/wiki/Special_Linear_Group_is_Subgroup_of_General_Linear_Group
# Special Linear Group is Subgroup of General Linear Group
## Theorem
Let $K$ be a field whose zero is $0_K$ and unity is $1_K$.
Let $\SL {n, K}$ be the special linear group of order $n$ over $K$.
Then $\SL {n, K}$ is a subgroup of the general linear group $\GL {n, K}$.
## Proof
Because the determinant of every element of $\SL {n, K}$ is $1_K$, which is not $0_K$, all elements of $\SL {n, K}$ are invertible.
So $\SL {n, K}$ is a subset of $\GL {n, K}$.
Now we need to show that $\SL {n, K}$ is a subgroup of $\GL {n, K}$.
Note that the identity matrix has determinant $1_K$, so $\SL {n, K}$ is non-empty.
Let $\mathbf A$ and $\mathbf B$ be elements of $\SL {n, K}$.
As $\mathbf A$ is invertible we have that it has an inverse $\mathbf A^{-1} \in \GL {n, K}$.
From Determinant of Inverse Matrix:
$\map \det {\mathbf A^{-1} } = \dfrac 1 {\map \det {\mathbf A} }$
and so:
$\map \det {\mathbf A^{-1} } = 1$
So $\mathbf A^{-1} \in \SL {n, K}$.
Also, from Determinant of Matrix Product:
$\map \det {\mathbf A \mathbf B} = \map \det {\mathbf A} \map \det {\mathbf B} = 1$
So $\mathbf A \mathbf B \in \SL {n, K}$.
Hence the result from the Two-Step Subgroup Test.
$\blacksquare$
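The two closure properties used in the proof can be illustrated numerically for the case $K = \R$, $n = 2$ (a sanity check with hypothetical matrices, not part of the proof):

```python
import numpy as np

# Two matrices with determinant 1, i.e. elements of SL(2, R).
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[3.0, 0.0],
              [1.0, 1.0 / 3.0]])

det_inv = np.linalg.det(np.linalg.inv(A))   # det(A^-1) = 1/det(A) = 1
det_prod = np.linalg.det(A @ B)             # det(AB) = det(A) det(B) = 1
```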
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=145&t=19852
## When B and C is significantly larger than A
$aR \to bP, Rate = -\frac{1}{a} \frac{d[R]}{dt} = \frac{1}{b}\frac{d[P]}{dt}$
Julianne Seog 3K
Posts: 24
Joined: Wed Sep 21, 2016 2:58 pm
### When B and C is significantly larger than A
In the course reader, it says that if the concentrations of B and C are significantly larger than that of A, then the reaction rate depends only on A. How much larger is "significantly larger", so that this rule can apply?
Johnson Thai 1L
Posts: 19
Joined: Wed Sep 21, 2016 2:59 pm
### Re: When B and C is significantly larger than A
I believe it is just "significantly large" enough so that the concentrations of B and C remain constant when A is being used up. This way, as stated in the Course Reader, the reaction rate depends on just concentration of A.
Cherry_Deng_1K
Posts: 12
Joined: Wed Sep 21, 2016 2:55 pm
### Re: When B and C is significantly larger than A
Also, I am pretty sure a question will tell you that the concentrations of B and C are significantly larger than A. You will probably not know to write a pseudo-first-order rate law (or other order depending on the reaction) otherwise.
ChristinaRoble3J
Posts: 14
Joined: Fri Jul 15, 2016 3:00 am
### Re: When B and C is significantly larger than A
I also don't understand this. In my opinion, "significantly large" is very subjective, so I am also confused as to how we know if one or two of the reactants are in "large excess" in comparison to another. I hope Lavelle goes over this.
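A quick numerical illustration of why a large excess makes the rate pseudo-first-order (the rate constant and concentrations below are made up, not from the course reader): when [B] starts 100x larger than [A], integrating the full rate law rate = k[A][B] gives nearly the same decay of A as the pseudo-first-order approximation [A] = [A]0 e^(-k[B]0 t), because [B] barely changes while A is used up.

```python
import math

k = 1.0            # hypothetical rate constant
A, B = 0.01, 1.0   # B starts 100x larger than A
dt, t_end = 1e-4, 1.0

# Forward-Euler integration of the full second-order rate law.
t = 0.0
while t < t_end - 1e-12:
    rate = k * A * B
    A -= rate * dt
    B -= rate * dt
    t += dt

# Pseudo-first-order approximation: treat [B] as constant at B0.
A_pseudo = 0.01 * math.exp(-k * 1.0 * t_end)
# The two results agree to within about 1%.
```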
https://cracku.in/blog/logical-reasoning-questions-for-sbi-po-pdf/
# Logical Reasoning Questions For SBI PO PDF
Download the SBI PO Logical Reasoning Questions & Answers PDF for the SBI PO Prelims and Mains exams. Very important SBI PO logical reasoning questions with solutions.
Question 1: Select the related word/letters/number from the given alternatives.
81 : 10 :: 169 : ?
a) 23
b) 18
c) 12
d) 14
Question 2: If “!” denotes “added to”, “@” denotes “divided by”, “%” denotes “multiplied by” and “^” denotes “subtracted from”, then 13 ! 102 @ 6 % 2 ^ 41 = ?
a) 6
b) 9
c) 14
d) 12
Question 3: Pointing to the photograph, Ram said: “She is the only daughter of my father’s mother”. How is Ram related to the person in the photograph?
a) aunt
b) son
c) nephew
d) grandson
Question 4: If 1st January 1897 fell on a Friday, on what day of a week 1st January of 1901 would have fallen on?
a) Sunday
b) Thursday
c) Wednesday
d) Tuesday
Question 5: If 121 @ 34 = 87 and 28 ! 13 = 41 then what is the value of
353 ! 89 @ 167?
a) 275
b) 263
c) 317
d) 244
Question 6: A machine codes EAT as 100 and CAT as 60. Which of the following words will be coded as 40320?
a) BAIN
b) PAINT
c) HIGH
d) RAISE
Question 7: Find the order in which the following words will appear in a dictionary.
2.Quark
4.Quartet
a) 2143
b) 3124
c) 3214
d) 1324
Question 8: Virat first travels some distance in the west direction, then he takes a right and walks for some time. After that, he again takes a right and travels for some time. Finally, he takes a left and reaches his destination in some time. In which direction did he proceed just before reaching the destination?
a) North
b) South
c) East
d) West
Question 9: In this question, the sets of numbers given in the alternatives are represented. The columns and rows of Matrix I are numbered from 0 to 4 and that of Matrix II are numbered from 5 to 9. A letter from these matrices can be represented first by its row and next by its column, e.g., ‘V’ can be represented by 57, 79, etc., and ‘G’ can be represented by 76, 21, etc. Similarly, you have to identify the set for the word “SWINDLE”.
a) 75, 67, 13, 57, 85, 87, 88
b) 31, 12, 34, 77, 11, 86, 58
c) 23, 67, 76, 56, 85, 86, 65
d) 23, 95, 34, 77, 11, 87, 65
Question 10: From the given alternative words, select the word which can be formed using the letters of the given word?
GSPLQTIEQNU
a) STURDY
b) STERN
c) LISTEN
d) STEM
Question 11: In the following question, select the missing number form the given series.
a) 9
b) 7
c) 8
d) 13
Question 12: A series is given, with one term missing. Choose the correct alternative from the given ones that will complete the series.
-3, -1, 11, 39, 89, 167, ?
a) 277
b) 278
c) 279
d) 281
Question 13: Select the odd one out from the given alternatives.
a) 57
b) 51
c) 29
d) 91
Question 14: Five friends, Deepak, Rahul, Mukesh, Ajit and Sohan are standing in a queue in order of decreasing weight. Deepak is standing between Mukesh and Ajit and they all are standing consecutively. Mukesh is heavier than Ajit and Sohan. Deepak is lighter than Rahul.
Who is the heaviest?
a) Mukesh
b) Rahul
c) Deepak
d) Either Mukesh or Rahul
Question 15: Some children are standing in a row. Ram is fourth from the left while Rahul is third from the right. If Ram and Rahul interchange their position then Rahul is sixth from the right. What will be the position of Ram from the left after the interchange?
a) 8th
b) 6th
c) 9th
d) 7th
Question 16: Consider the given statement/s to be true and decide which of the given conclusions/assumptions can definitely be drawn from the given statement.
Statements:
A : Mid-day meal program has managed to increase the attendance of students.
B: But, there is a sharp decline in the number of students in the afternoon session.
Conclusion:
I. Mid-day meal program has rekindled the interest to study among the students.
II. Some students come to school just to avail a free meal.
a) Choose A if only conclusion I follows.
b) Choose B if only conclusion II follows.
c) Choose C if both conclusion I and II follow.
d) Choose D if neither conclusion I nor conclusion II follows.
Question 17: In each of the following questions two statements are given, and these statements are followed by two conclusions numbered (1) and (2). You have to take the given two statements to be true even if they seem to be at variance from commonly known facts. Read the conclusions and then decide which of the given conclusions logically follows from the two given statements, disregarding commonly known facts.
Statements :
Some fishes are alligators. All the alligators are cats.
Conclusions :
(1) Some fishes are cats.
(2) No alligator is fish.
a) Only (1) conclusion follows
b) Only (2) conclusion follows
c) Either (1) or (2) follows
d) Neither (1) nor (2) follows
Question 18: According to the following Venn diagram, how many teachers do not read books?
a) 69
b) 66
c) 79
d) 74
Question 19: Which of the following correctly depicts the mirror image of ‘INSTINCTIVE’?
a) 1
b) 2
c) 3
d) 4
Question 20: Select the word which CANNOT be formed using the letters of the given word.
CONVERSATION
a) VERSION
b) CONSERVATION
c) NATION
d) STATION
$\sqrt{81} + 1 = 10$
$\sqrt{169} + 1 = 14$
Hence, option D is the correct option.
13 ! 102 @ 6 % 2 ^ 41 = 13 + [(102 /6) * 2] – 41 = 13 + (17*2) – 41 = 6
Hence, option A is the right choice.
The person in the photograph should be Ram’s aunt. Thus, Ram is her nephew.
1st January of consecutive non-leap years will fall on consecutive days, since 365 leaves a remainder of 1 on division by 7.
1900 is not a leap year. 1900 is divisible by 100 but not by 400. Therefore, all the years from 1897 till 1901 are non-leap years.
Therefore, the day on which 1st January falls will move by 1 for every year. Therefore, in 1897, 1st January would have fallen on Saturday. In 1898, 1st January would have fallen on Sunday. Similarly, in 1901, 1st January would have fallen on Tuesday. Therefore, option D is the right answer.
121 @ 34 = 87 and 28 ! 13 = 41
121 – 34 = 87 and 28 + 13 = 41
So, 353 ! 89 @ 167 = 353 + 89 – 167 = 275
Hence, option A is right choice.
The numerical position of E is 5, A is 1 and T is 20, so EAT = 5*1*20 = 100.
The numerical position of C is 3, A is 1 and T is 20, so CAT = 3*1*20 = 60.
BAIN = 2*1*9*14 = 252.
PAINT = 16*1*9*14*20 = 40320.
HIGH = 8*9*7*8 = 4032.
RAISE = 18*1*9*19*5 = 15390.
As we can see, PAINT will be coded as 40320. Therefore, option B is the right answer.
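The coding rule above (a word is coded as the product of its letters' alphabetical positions) can be sketched in a few lines of Python 3; `word_code` is a name chosen here for illustration:

```python
def word_code(word):
    # Multiply the alphabetical positions (A=1 ... Z=26) of all letters.
    code = 1
    for ch in word.upper():
        code *= ord(ch) - ord('A') + 1
    return code

print(word_code("EAT"))    # 100
print(word_code("PAINT"))  # 40320
```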
The words, when arranged in dictionary order, will appear as:
2. Quark
4. Quartet
First Virat travelled in west direction then took a right and travelled in the north direction, then he took a right and travelled in east direction and finally took a left and travelled in the north direction. Hence, option A is the correct option.
In option A, 57 does not correspond to ‘N’. Hence option A is incorrect.
In option B, 86 does not correspond to ‘L’. Hence option B is also incorrect.
In option C, 76 does not correspond to ‘I’. Hence option C is also incorrect.
Thus option D is the correct answer.
Options A and B contain R, but it is not among the given letters, so both can be ruled out. Option D contains M, which is not among the given letters either, so D is also not possible. The word LISTEN can be formed from the given letters. Hence the correct answer is option C.
We see that 39/3 = 13
48/4 = 12
117/9 = 13
Hence, option D is right choice.
$1^3 – 2^2 = -3$
$2^3 – 3^2 = -1$
$3^3 – 4^2 = 11$
$4^3 – 5^2 = 39$
$5^3 – 6^2 = 89$
$6^3 – 7^2 = 167$
$7^3 – 8^2 = 279$
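The whole pattern can be confirmed with one line of Python 3:

```python
# Each term of the series is n^3 - (n+1)^2 for n = 1, 2, 3, ...
terms = [n**3 - (n + 1)**2 for n in range(1, 8)]
print(terms)  # [-3, -1, 11, 39, 89, 167, 279]
```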
57, 51 and 91 are composite numbers.
29 is a prime number. Hence, option C is the correct answer.
Deepak is standing between Mukesh and Ajit and Mukesh is heavier than Ajit. We have three possibilities:
It is given that Mukesh is heavier than Sohan. So, Sohan must stand somewhere behind Mukesh. But, this is not possible in case III. Thus, case III is invalid. In case I, Sohan can be at any of the two last positions and in case II, Sohan must be at the last position.
Deepak is lighter than Rahul. So, Rahul must be standing somewhere ahead of Deepak. But, this is not possible in case I. Thus, case I is invalid. In case II, Rahul must be at the first position. We get the final arrangement as:
From the arrangement, we can see that Rahul is the heaviest. Hence, option B is the correct answer.
Ram’s initial position is fourth from the left, and when Rahul comes at Ram’s position, he is sixth from the right. So total number of children in the row = 6 + 4 – 1 = 9
Rahul’s initial position is third from the right which is = 9 – 3 + 1 = 7th from the left. So, after interchange Ram’s position will be 7th from the left. Hence, option D is the correct option.
From the two statements, it is clear that some students avail the free lunch and leave the school.
If conclusion I were true, there should not have been a decline in attendance in the afternoon session.
Hence, only conclusion II follows.
As we can check from the diagram.
Therefore, Only Statement (1) follows. Hence, Option A is correct.
We want the portion of the circle (teachers) that lies outside the rectangle (those who read books).
Thus the required answer = 21 + 36 + 17 = 74
Hence, option D is the right choice.
https://codereview.stackexchange.com/questions/106238/the-minion-game-challenge/106273
# “The Minion Game” challenge
This is an implementation of the minion game, which takes in string input and declares a winner between two pre-determined players.
The logic is that player 1 gets all sequenced words not starting with a vowel, and player 2 gets all sequenced words starting with a vowel, in the input string.
Objective: performance optimisation. The code passed all the test cases, but I am very unsure where to tune the performance.
For a 1000 letter word, here are the time stats:
• real 0m10.142s
• user 0m6.386s
• sys 0m0.743s
import string
import sys
import itertools

def minion(str):
    person_a_name = 'Stuart'
    person_b_name = 'Kevin'
    letter_list = [a for a in str]
    l = len(letter_list)
    vowel = ['A','E','I','O','U']
    consonants = ['Q','W','R','T','Y','P','S','D','F','G','H','J','K','L','Z','X','C','V','B','N','M']
    all_word = []
    person_a_words = []
    person_b_words = []
    all_word = [letter_list[start:end+1] for start in xrange(l) for end in xrange(start, l)]
    for array in all_word:
        if array[0] in vowel:
            person_b_words.append(array)
    for array in all_word:
        if array[0] in consonants:
            person_a_words.append(array)
    if len(person_a_words) == len(person_b_words):
        print 'Draw'
    if len(person_a_words) > len(person_b_words):
        print person_a_name, len(person_a_words)
    if len(person_b_words) > len(person_a_words):
        print person_b_name, len(person_b_words)

def main():
    str = raw_input()
    minion(str.upper())

if __name__ == '__main__':
    main()
Sample input / output:
banana
Stuart 12
guavaisanotherfruit
Kevin 98
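For reference (not part of the original post), a brute-force scorer in Python 3 that reproduces the sample outputs by enumerating every substring; `score` is an illustrative name:

```python
def score(s):
    # Count substrings starting with a consonant (Stuart) vs. a vowel (Kevin).
    s = s.upper()
    stuart = kevin = 0
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):  # every substring s[i:j]
            if s[i] in 'AEIOU':
                kevin += 1
            else:
                stuart += 1
    return stuart, kevin

print(score('banana'))  # (12, 9) -> Stuart 12, as in the sample output
```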
The big performance hit probably comes in this block of code, where you do a bunch of array manipulations and looping:
all_word = [letter_list[start:end+1] for start in xrange(l)
                                     for end in xrange(start, l)]
for array in all_word:
    if array[0] in vowel:
        person_b_words.append(array)
for array in all_word:
    if array[0] in consonants:
        person_a_words.append(array)
if len(person_a_words) == len(person_b_words):
    print 'Draw'
if len(person_a_words) > len(person_b_words):
    print person_a_name, len(person_a_words)
if len(person_b_words) > len(person_a_words):
    print person_b_name, len(person_b_words)
Appending to an array is a (relatively) expensive operation, as is looping over a list. I can see a number of optimisations here. [Edit: I wrote these in the order I thought of them; the big performance gain comes on the last item. I'm leaving the remaining items because they're still instructive, even if not directly applicable here.]
• Only loop over all_word once. You can check for starting with a consonant and vowel in the same iteration of the loop:
for array in all_word:
    if array[0] in vowel:
        person_b_words.append(array)
    elif array[0] in consonants:
        person_a_words.append(array)
We've just cut out an iteration over all_word. If all_word is large, that will be a significant saving.
• Don't store the words in a list, just the count. All you care about is the relative number of words in each list; the words themselves don't matter. It's much easier to increment an integer than mutate a list, so consider the following:
person_a_words = 0
person_b_words = 0
for array in all_word:
    if array[0] in vowel:
        person_b_words += 1
    elif array[0] in consonants:
        person_a_words += 1
and then you can compare the two integers at the end. That's bound to be a performance saving.
• Don't construct all_word as a list; use a generator. If you replace the square brackets with parens:
all_word = (letter_list[start:end+1] for start in xrange(l)
                                     for end in xrange(start, l))
then this becomes a generator comprehension instead of a list comprehension. This means it only creates the elements as they're needed by the for loop; it doesn't create them all in memory before continuing.
Using generators instead of lists is a really good way to reduce occupancy and speed up programs.
• Do you even need to use all_word? For each value of start, the first letter of the resulting words will be the same, and this gives you (l - start) different words. You don't actually need to create the words; you just care about their initial letter, and how many distinct words they create.
You could just add the number of distinct words to each person's score directly:
person_a_words = 0
person_b_words = 0
for idx, letter in enumerate(letter_list):
    if letter in vowel:
        person_b_words += len(letter_list) - idx
    else:
        person_a_words += len(letter_list) - idx
That is substantially faster: I just ran this with a 1.5m character string, and it finished in ~1.5s.
• Don't use str as a variable name; overriding builtins is bad practice.
• You've imported the string, sys and itertools modules, but you never use any of them. Why?
• PEP 8 requires a space after commas in a list; you should add this in vowel and consonants.
• You can get the individual letters of a string by calling list() on it. These calls are equivalent:
letter_list = [a for a in my_string]
letter_list = list(my_string)
although in this case, you don't need to coerce to a list first – you can iterate over the characters of a string directly.
• There's no need to assign the length of letter_list to a variable, especially not one with as undescriptive a name as l. It just makes your code harder to read.
• The name of your function isn't particularly helpful. Ideally it should give me some idea of what the function does. There should also be a docstring to explain the result.
• I would rename the person_a* and person_b* variables to be consonant* and vowel*, respectively – that will make the code easier to read. A and B don't really mean anything (and as evidence, I got them the wrong way round when I first wrote that sentence).
• Your function is doing some work (finding out whether there are more vowel sub-words or consonant sub-words) and printing to screen (the result). It would be better to separate this into two functions: one that does the work, the other that prints the result.
That makes it easier to reuse the work of vowels vs. consonants.
• To aid readability, I'd keep the same order of variables when you do the comparison at the end. i.e.
if vowel_count > consonant_count:
    print("Vowels victorious!")
elif vowel_count < consonant_count:
    print("Consonants champion!")
else:
    print("Draw!")
• Was testing a solution of my own, and before posting I reread your post, and amongst all the suggestions I found the thingy I was going to focus on: Do you even need to use all_word. For each value of start, the first letter of the resulting words will be the same, and this gives you (l - start) different words. This one point makes the whole difference performance wise, it also removes the need for a very large memory structure to hold all words when having large strings – holroy Oct 1 '15 at 13:49
• Another small code smell in the original code, is the multiple execution of len(person_X_words) close to the end. Do it once, and be over with it. – holroy Oct 1 '15 at 13:54
There are two points I would like to emphasize as real performance penalties in your code:
• Memory usage when generating all the possible word combinations, which grows very quickly (in fact cubically) with word length
• Unnecessary complexity to calculate points
Before diving into these points, I would like to say that both alexwlchan and SuperBiasedMan have given good pointers related to other code smells and stuff you need to look into as well.
## Exponential memory usage
There is only line which really stands out and will require a load of memory when the length of the text increases:
all_word = [letter_list[start:end+1]
            for start in xrange(length)
            for end in xrange(start, length)]
Let's do some numbers. In a text of length = 4, like abcd, you'll get the following word combinations:
• 4 words starting with first letter: abcd, abc, ab, a
• 3 words starting with second letter: bcd, bc, b
• 2 words starting with third letter: cd, c
• 1 word starting with fourth letter: d
In other words, the total points available in your game for a text of length $N$ is the sum $N + (N-1) + (N-2) + ... + 2 + 1$. Luckily there exists an easy formula to calculate this number: $(N+1) \cdot N/2$. This is listed as points in the table below.
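A quick sanity check of that formula against direct enumeration (Python 3; `count_words` is an illustrative name):

```python
def count_words(n):
    # Enumerate every (start, end) pair, exactly as the original comprehension does.
    return sum(1 for start in range(n) for end in range(start, n))

# Matches the closed form (N+1)*N/2 for any length.
for n in (4, 10, 50, 100):
    assert count_words(n) == (n + 1) * n // 2
print(count_words(4))  # 10
```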
But that was only the points (or number of words); when looking at memory usage we need at least to look at how long each of the words is. Continuing with our example with $N=4$, we have 4 words of length 1, 3 words of length 2, and so on. In general: $N \cdot 1 + (N-1)\cdot 2 + ... + 2\cdot (N-1) + 1\cdot N$. I haven't found the general formula1 for this, but made a simple Python function to calculate it:
# Shift the range index by +1 so that we get the proper 1 to N sequence
sum( (n+1-k)*k for k in xrange(1, n+1) )
In the table below I've listed the length of text with corresponding number of points/words, and how many characters are needed to store these words. As can be seen these number increase quite fast. The last line is memory usage in megabytes when using memory_profiler on the original code.
text length :   4    10     50      100        500        1000         5000
points/words:  10    55   1275     5050     125250      500500     12502500
characters  :  20   220  22100   171700   20958500   167167000  20845835000
usage in MiB:  ~0  0.24   1.52   185.68    1251.93    too much
## Unnecessary complexity
When reviewing your original problem statement, you need to calculate points of words starting with either a vowel or a consonant. You don't need to actually know the words.
Combining this with knowledge from previous section that at a given position, $k$, in the text you can generate $N-k$ words, the total complexity reduces quite nicely to a method like the following:
def count_minion_words(text):
    text_length = len(text)
    word_count_vowels = 0
    word_count_consonants = 0
    for (index, character) in enumerate(text):
        if character in ['A', 'E', 'I', 'O', 'U']:
            word_count_vowels += text_length - index
        else:
            word_count_consonants += text_length - index
    return (word_count_vowels, word_count_consonants)
This doesn't require any memory besides the original text, loops through the entire text in one go, and calculates the points for word counts starting with either a vowel or consonant. Feel free to test this one with text length of 1000 or more. Tested it with a text of length 1.5m, and it completed within 0.35 seconds.
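A quick check of this function against the question's sample inputs (upper-cased first, as the original main() does):

```python
def count_minion_words(text):
    text_length = len(text)
    word_count_vowels = 0
    word_count_consonants = 0
    for (index, character) in enumerate(text):
        if character in ['A', 'E', 'I', 'O', 'U']:
            word_count_vowels += text_length - index
        else:
            word_count_consonants += text_length - index
    return (word_count_vowels, word_count_consonants)

print(count_minion_words('BANANA'))               # (9, 12): Stuart wins with 12
print(count_minion_words('GUAVAISANOTHERFRUIT'))  # (98, 92): Kevin wins with 98
```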
1 Added: Thanks to my question at Mathematica SE I now know that the formula is:
$$\frac{n(n+1)(n+2)}{6}$$
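The closed form can be checked against the summation given earlier (Python 3; `characters_stored` is an illustrative name):

```python
def characters_stored(n):
    # Total characters across all substrings of an n-letter text,
    # via the summation sum((n+1-k)*k) from the answer above.
    return sum((n + 1 - k) * k for k in range(1, n + 1))

# Matches the table above (length 4 -> 20, length 10 -> 220)
# and the closed form n(n+1)(n+2)/6.
for n in (4, 10, 100, 1000):
    assert characters_stored(n) == n * (n + 1) * (n + 2) // 6
print(characters_stored(4))  # 20
```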
• I wish I could give this answer multiple votes! Thanks – Grijesh Chauhan Apr 5 '19 at 17:57
I have multiple performance ideas, some smaller than others but consistent good performance choices are a good mindset to be in.
Minor notes, you can get a list of characters from a string just using list(). So instead of your list comprehension, use letter_list = list(str). Note that I agree you should use a different name than str, but for now I'm focusing on performance.
Also when you call len(letter_list) it's slower than getting the length of a string, so call len(str) instead. Generally, strings are faster than lists. If you don't need more complex list capabilities, stick with strings. So instead of making vowel and consonant lists, make them strings.
But even more efficient than lists, are integers. And you create lists of each person's words as if those matter. But all you need is a count of these values. Replacing every append with += 1 would be much faster. This is the main source of slowness I believe.
for array in all_word:
    if array[0] in vowel:
        person_b_words += 1
    else:
        person_a_words += 1
We also only need to loop once if you use an else. It might be faster to use sum here, but that does get more complicated and might not actually prove helpful. If you're interested to know more I could explain later.
Now of course you no longer need multiple len calls, you can just compare the variables directly as they're both integers. You should also use elif and else statements since you know only one of the conditions is possible.
if person_a_words == person_b_words:
    print 'Draw'
elif person_a_words > person_b_words:
    print person_a_name, person_a_words
else:
    print person_b_name, person_b_words
I think another thing you could do is find a more intelligent way to iterate over all the sequences. You make a huge list of strings, but you only need the first letter of each. It'd be easier if you used a loop that got each first letter the appropriate number of times rather than your huge list based on the indices range.
Using Python 2, this worked.
def minion_game(s):
    Stuart, Kevin = 0, 0
    length = len(s)
    for idx, sub in enumerate(s):
        if sub in 'AEIOU': Kevin += length - idx
        else: Stuart += length - idx
    print(['Draw', 'Kevin {}'.format(Kevin), 'Stuart {}'.format(Stuart)][0 if Kevin == Stuart else 1 if Kevin > Stuart else 2])

if __name__ == '__main__':
    s = raw_input()
    minion_game(s)
• Welcome to Code Review! A good code review answer takes the original code into account and describes how it can be improved. At the moment you have just presented an alternative solution without any reasoning or context. You can read more at How do I write a good answer?. – AlexV May 22 '19 at 21:18
https://www.jw.org/en/publications/bible/king-james-version/books/1-chronicles/10/
# 1 Chronicles 10:1-14
10 Now the Philistines fought against Israel; and the men of Israel fled from before the Philistines, and fell down slain in mount Gilboa. 2 And the Philistines followed hard after Saul, and after his sons; and the Philistines slew Jonathan, and Abinadab, and Malchishua, the sons of Saul. 3 And the battle went sore against Saul, and the archers hit him, and he was wounded of the archers. 4 Then said Saul to his armourbearer, Draw thy sword, and thrust me through therewith; lest these uncircumcised come and abuse me. But his armourbearer would not; for he was sore afraid. So Saul took a sword, and fell upon it. 5 And when his armourbearer saw that Saul was dead, he fell likewise on the sword, and died. 6 So Saul died, and his three sons, and all his house died together. 7 And when all the men of Israel that were in the valley saw that they fled, and that Saul and his sons were dead, then they forsook their cities, and fled: and the Philistines came and dwelt in them. 8 And it came to pass on the morrow, when the Philistines came to strip the slain, that they found Saul and his sons fallen in mount Gilboa. 9 And when they had stripped him, they took his head, and his armour, and sent into the land of the Philistines round about, to carry tidings unto their idols, and to the people. 10 And they put his armour in the house of their gods, and fastened his head in the temple of Dagon. 11 And when all Jabeshgilead heard all that the Philistines had done to Saul, 12 They arose, all the valiant men, and took away the body of Saul, and the bodies of his sons, and brought them to Jabesh, and buried their bones under the oak in Jabesh, and fasted seven days. 13 So Saul died for his transgression which he committed against the LORD, even against the word of the LORD, which he kept not, and also for asking counsel of one that had a familiar spirit, to enquire of it; 14 And enquired not of the LORD: therefore he slew him, and turned the kingdom unto David the son of Jesse.
http://mathhelpforum.com/calculus/150107-functions.html
# Math Help - Functions
1. ## Functions
Let $f_1(x)$ and $f_2(x)$ be odd and even functions respectively. How can we construct an even function out of these?
2. Plenty of ways, but why can't you just take $f_2(x)$ as the even function? What's the point of using $f_1(x)$ in constructing an even function?
Edit: In case you mean you need to construct an even function from only $f_1(x)$, some simple ways would be to take the absolute value or square the function.
$g(x)=|f_1(x)|$
$h(x)=(f_1(x))^2$
These would both be even functions.
3. What does "construct out of them" mean?
4. Originally Posted by drumist
Plenty of ways, but why can't you just take $f_2(x)$ as the even function? What's the point of using $f_1(x)$ in constructing an even function?
Edit: In case you mean you need to construct an even function from only $f_1(x)$, some simple ways would be to take the absolute value or square the function.
$g(x)=|f_1(x)|$
$h(x)=(f_1(x))^2$
These would both be even functions.
And if you really have to use both functions, so would $|f_1(x)|+ f_2(x)$ and $(f_1(x))^2+ f_2(x)$.
Now, if the problem had been to construct an even function from two odd functions, that would have been a little more interesting!
5. Also their compositions, $g(x) = f_1(f_2(x))$ and $h(x) = f_2(f_1(x))$, are even!
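A quick verification of this last claim, using only the defining properties $f_1(-x) = -f_1(x)$ and $f_2(-x) = f_2(x)$:

$g(-x) = f_1(f_2(-x)) = f_1(f_2(x)) = g(x)$

$h(-x) = f_2(f_1(-x)) = f_2(-f_1(x)) = f_2(f_1(x)) = h(x)$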
https://hal.archives-ouvertes.fr/hal-00724093
# A note on Fiedler value of classes with sublinear separators
Abstract : The $n$-th Fiedler value of a class of graphs $\mathcal C$ is the maximum second eigenvalue $\lambda_2(G)$ of a graph $G\in\mathcal C$ with $n$ vertices. In this note we relate this value to shallow minors and, as a corollary, we determine the right order of the $n$-th Fiedler value for some minor closed classes of graphs, including the class of planar graphs.
Document type :
Preprints, Working Papers, ...
2012
https://hal.archives-ouvertes.fr/hal-00724093
Contributor : Patrice Ossona de Mendez
Submitted on : Friday, August 17, 2012 - 1:31:24 PM
Last modification on : Wednesday, September 28, 2016 - 4:09:44 PM
Document(s) archived on : Sunday, November 18, 2012 - 2:30:26 AM
### Files
document.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-00724093, version 1
• ARXIV : 1208.3581
### Citation
Jaroslav Nesetril, Patrice Ossona de Mendez. A note on Fiedler value of classes with sublinear separators. 2012. <hal-00724093>
https://www.physicsforums.com/threads/two-trillion-dollar-meltdown.292357/
# News Two Trillion Dollar Meltdown
1. Feb 14, 2009
### Astronuc
Staff Emeritus
or 2+ trillion . . .
I heard an interview with Charles Morris last week, and I just happened to find his book last night. Two trillion is a revision of his earlier book "Trillion Dollar Meltdown".
The Two Trillion Dollar Meltdown
https://www.amazon.com/Two-Trillion-Dollar-Meltdown-Rollers/dp/1586486918/
By Andy Ross "Swimming with Dolphins"
By Rolf Dobelli
Morris indicates the current situation is more a problem of solvency than liquidity, but basically there is an inability to cover all outstanding debt, hence the huge losses (writedowns). The sub-prime mortgage problem is one of the contributing factors, but their negative effect was amplified by the various complex financial instruments built around them.
Solvency - the quality or state of being solvent, i.e. able to pay all legal debts
Liquidity - a: consisting of or capable of ready conversion into cash <liquid assets> b: capable of covering current liabilities quickly with current assets.
Morris slams both liberal and conservative, right and left policies. I would hope Obama would have read the book, but I doubt it. It provides a lot of 'food for thought'.
Last edited by a moderator: May 4, 2017
2. Feb 25, 2009
### mheslep
I read it a couple months back (first version). Morris certainly has some business and financial cred, and I agree with your comments as far as they go, but I got the impression it was a bit of a rush job, too general, and did not really try to build a self-consistent framework of what happened. I don't think we have yet seen THE book on the credit/mortgage collapse. I'm hoping McLean will do one.
Last edited by a moderator: May 4, 2017
3. Feb 25, 2009
### Astronuc
Staff Emeritus
I'd certainly like to know the details of who did what and when. I'd like to know going back to say 2003 about:
the GSEs, Fannie Mae and Freddie Mac, and what they did in terms of originating mortgages, bundling them into securities, and what high yield investments they bought for their portfolios,
Countrywide and others
GS, Lehman Brothers, Merrill Lynch, Bear Sterns, . . . . and their derivatives and SIVs,
AIG and its insurance
the Fed and Treasury,
Congress
who did I leave out?
I'd like to know more about ABSs, MBSs, CLOs, CDOs, CDSs, . . . .
However, I get the feeling those who know don't want this information made public.
Apparently McLean is writing a book with Joe Nocera on the 2008 financial crisis - according to the Wikipedia article about her.
Last edited: Feb 25, 2009
4. Feb 25, 2009
### DrClapeyron
Not surprisingly, financial derivatives were given the blame for the 1987 stock market crash. However, the stock market in 1987 had a steeper decline (based on DJIA) and was quicker to recover than the 2008-2009 stock market crash. There is a common theme that if people cannot remember their history then they are condemned to repeat history. Same scenario as 1987: out-of-control financial derivatives, declining oil prices and a stock market crash, and not surprisingly the blame is put on the same thing.
As best as I can say, the repo market grew rapidly from the end of the Clinton administration to 2006. Repos, or repurchase agreements, are short-term (most often overnight) collateralized instruments dealt amongst large primary-dealer banks (banks like Goldman Sachs and Lehman Brothers). Their collateral: mortgages. These are essentially mortgage-backed securities.
What are repos used for? Financing of overdrafts within the Federal Reserve system. What is so significant about 2006?
http://www.federalreserve.gov/releases/h6/discm3.htm
Eurodollar data indicate overseas interest rates on dollar-denominated deposits, and time deposits have significance for how reserve requirements are calculated. Here is how M1, M2 and M3 stood the last time M3 was published:

Type   Date     Billions($)   % from last period (AR)   % change from last year
M1     2006-02  781361.5      -11.166%                  0.487%
M2     2006-02  146702.3      3.095%                    4.933%
M3     2006-02  1810276.1     6.552%                    8.035%

http://www.economagic.com/fedstl.htm

I think it is quite obvious that those who know do not want this information public.

5. Feb 26, 2009

### mheslep

Actually I find it very surprising that derivatives were given credit for '87. Source?

6. Feb 26, 2009

### mgb_phys

Surely with mortgages this is less of a problem than derivatives. Derivatives are an unlimited bet - you might end up in a position where you 'owe' 100x the market cap of a stock because your bet is leveraged on movements. But mortgages are real money transfers: every $million lent to someone who can't pay it back was paid to the seller of the house - all the money is still in the system.
So a $T bailout to the banks to cover bad mortgage debt must equal a$T in savings accounts belonging to sellers/developers - it's pretty much a zero sum game.
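The contrast being drawn here can be put in toy numbers (all figures below are made up purely for illustration). A defaulted mortgage is a transfer: the lender's loss is cash that sellers and developers already received. A leveraged derivative position, by contrast, can lose a multiple of the original stake:

```python
# Toy comparison of a mortgage default vs. a leveraged derivative loss.
# Every number here is an assumption for illustration only.

principal = 500_000          # mortgage lent, paid out to the house seller
recovered = 300_000          # what the bank recovers by selling the house
mortgage_loss = principal - recovered
seller_received = principal  # that cash is still circulating in the system

stake = 500_000
leverage = 100               # 100x leveraged bet, per the post's example
adverse_move = 0.05          # a 5% price move against the position
derivative_loss = stake * leverage * adverse_move

print(mortgage_loss, seller_received, derivative_loss)
```

In the mortgage case the system-wide books balance (one party's loss is another's past gain); in the leveraged case the loss can dwarf the money originally put at risk, which is the asymmetry the post is pointing at.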
7. Feb 26, 2009
### DrClapeyron
One thing we do not have to walk away with, supposedly, is double-digit inflation. Rising OPEC oil prices in the 1950's and 1960's never had a significant impact on the US economy; at the time the US was not a net oil importer. By 1973, when the Yom Kippur War broke out, embargoes were placed on the US; gas prices, however, were slow to catch up to the rising oil prices. The result was double-digit inflation. By the 1979 oil price increase the US economy did not feel the same effects as it had in 1973.
The repeal of Regulation Q in the mid-1980's and declining oil prices in the mid-1980's led toward a new kind of financial derivative trade: mortgage-backed securities. Oil futures themselves were introduced in the 1980's and led to real estate value increases in Louisiana, Texas and Oklahoma. The failure of MBSs followed the 1987 stock market crash; only this time around, declining oil prices, failing MBSs and a declining stock market have all happened at nearly the same point in time... but not really.
8. Feb 26, 2009
### Jimmy Snyder
This web site seems to say that the US was a net oil importer in the 1950's and 1960's.
http://tonto.eia.doe.gov/dnav/pet/hist/mcrimus1m.htm
This web site seems to say that oil prices weren't rising (except for inflation) in the 1950's and 1960's.
http://www.inflationdata.com/inflation/Inflation_Rate/Historical_Oil_Prices_Table.asp
This web site implies that OPEC didn't even exist in the 1950's.
Am I missing something?
9. Feb 26, 2009
### mheslep
That's correct, though you want _net_ imports. The only non-wartime period when the US was a net exporter was in the 30's, with the large Texas discoveries, and the Middle East's cheap oil was not yet widely available.
http://tonto.eia.doe.gov/dnav/pet/hist/mcrntus2A.htm
10. Feb 26, 2009
### Jimmy Snyder
How careless of me. Fortunately you got my back.
11. Feb 26, 2009
### DrClapeyron
Yes, the 1956 Suez Canal Crisis caused a sharp increase in oil prices, and the same occurred during the Six-Day War. Price fluctuations were caused by OPEC embargoes or OPEC decreases in production; to their dismay, these actions had no discernible effect on the US economy. For one thing, the US could go to countries within OPEC who were not participating in the embargo, or to countries like Mexico, and purchase oil for import.
12. Feb 26, 2009
### Jimmy Snyder
Perhaps one reason there was no discernible effect on the US economy is that there was no discernible effect on the price of oil. Here is data from the Historical Oil Price web site linked to above; these are the numbers around 1956:

1955  $2.93
1956  $2.94
1957  $3.14

Barely keeping up with inflation. According to the site these are average annual prices. OPEC didn't exist in 1956. These are the numbers around 1967:

1966  $3.10
1967  $3.12
1968  $3.18

Falling behind inflation. Do you have something you can cite to show that "rising OPEC oil prices in the 1950's and 1960's" ever actually occurred?
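The "barely keeping up with inflation" point can be checked with a little arithmetic. The oil prices below are the thread's own numbers; the CPI figures are approximate annual averages I am supplying as an assumption, not data from the thread:

```python
# Real (inflation-adjusted) change in oil prices over two windows the
# post discusses. Oil prices are from the post; CPI values are
# approximate CPI-U annual averages (my assumption).

oil = {1955: 2.93, 1957: 3.14, 1966: 3.10, 1968: 3.18}
cpi = {1955: 26.8, 1957: 28.1, 1966: 32.4, 1968: 34.8}  # approximate

def real_change(y0, y1):
    """Percent change in the inflation-adjusted oil price from y0 to y1."""
    nominal = oil[y1] / oil[y0]
    inflation = cpi[y1] / cpi[y0]
    return (nominal / inflation - 1) * 100

print(f"1955-57 real change: {real_change(1955, 1957):+.1f}%")
print(f"1966-68 real change: {real_change(1966, 1968):+.1f}%")
```

With these assumed CPI values the mid-50's move is a small positive real change and the late-60's move is negative, which matches the post's reading: nominal prices roughly tracked or fell behind inflation.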
13. Feb 27, 2009
### DrClapeyron
What was unique about 1973 was that, for about the first time in US history, production and prices outside the US had a considerable effect on domestic gasoline prices. Inflation skyrocketed after the 1973 embargo and was generally high throughout the 1970's, with the CPI (consumer price index) reaching a 13% change in 1979 after the Iranian Revolution.
However, since the early 1980's rising oil prices have not appeared to cause CPI increases of the magnitude of those which occurred in the 1970's, i.e. we have not seen double-digit inflation since the early 80's. What is the difference? How about a huge increase in the US government budget deficit and the US public debt? Despite the downturn in the stock market, food prices have not increased at the rates seen in the 1970's.
There is a trade-off between large budget deficits and inflation: either you run large budget deficits and keep inflation low, or you do not run large budget deficits and see whether or not consumer prices increase. The Clinton administration saw the need to maintain a budget surplus during periods of low oil prices.
Conclusion: the spoils of war wane.