##### Abstract
Cause we are not in the one-offs business.
##### Tags: tensorflow, tfx, ML pipeline
TFX is a framework to develop and deploy production ML pipelines that I have been using for the last few months. Pipelines are made of components that consume and produce artifacts. The framework can be extended by defining our own and enriching the catalog of what is already provided: standard artifacts & standard components. I’m sure these catalogs are bound to grow.
I have packaged some of the components that proved handy in a library and shared it here: tfx_x.
There are two sets of components at this point: one to manipulate a new artifact type named PipelineConfiguration - I will come back to this in a sec - and another to manipulate Examples. Let’s close the case for Examples first.
Examples is basically the artifact type of datasets, possibly at different stages of transformation, and that is what the first two components help with: filtering based on a predicate over individual examples, and stratified sampling. Nothing too fancy here: a couple of lines of Beam and quite a few more lines of boilerplate code.
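The logic behind those two Examples components boils down to something like the following. This is a minimal plain-Python sketch of the idea (the real components wrap it in Beam transforms plus the TFX boilerplate); the function names and the per-stratum sampling policy here are illustrative, not the actual tfx_x API.

```python
import random
from collections import defaultdict

def filter_examples(examples, predicate):
    # Keep only the examples satisfying the predicate
    # (in Beam this is essentially a one-line beam.Filter).
    return [ex for ex in examples if predicate(ex)]

def stratified_sample(examples, key_fn, per_stratum, seed=0):
    # Group examples by stratum key, then sample up to
    # `per_stratum` examples from each group.
    strata = defaultdict(list)
    for ex in examples:
        strata[key_fn(ex)].append(ex)
    rng = random.Random(seed)
    sample = []
    for key in sorted(strata):
        bucket = strata[key]
        sample.extend(rng.sample(bucket, min(per_stratum, len(bucket))))
    return sample

# Toy dataset: five examples of each label.
examples = [{"label": i % 2, "value": i} for i in range(10)]
positives = filter_examples(examples, lambda ex: ex["label"] == 1)
balanced = stratified_sample(examples, lambda ex: ex["label"], per_stratum=3)
```

The same predicate and key function are what you would hand to the real components; everything else is plumbing.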
Back to the PipelineConfiguration. Building pipelines is an essential step to get anywhere with ML models. You are bound to run and rerun slightly different variations of everything, again and again. And let’s be honest, not all of them are going to produce great results - if they produce anything at all. “One must imagine Sisyphus happy,” wrote Camus.
That happens while assembling all the steps to get from some data somewhere to a model you are confident in, but also while experimenting, exploring, tuning… Yes, there is ‘Restart & Run All’ in Jupyter, but that eventually shows its limits, and if we are talking about ‘operationalizing’ ML (as in MLOps), hopefully that’s not the solution.
One challenge is the need to keep track of what has been tested and with which parameters, so it can be reproduced or used as a starting point for further exploration - in case you paint yourself into a corner and feel the safest option is to revert to what was sort of working last week. Git can be part of the solution, but it should not be the only element.
Immutability and versioning everywhere!
For the code, it’s git; for the runtime, containers. And, as stated before, we have immutable artifacts that components can both produce AND consume. You probably see where I’m going with the parameters and the artifacts by now. To put some order into the parametrization of my pipelines, I ended up creating a choke point where all the parametrization takes place, leveraging the artifacts.
This has two benefits. First, my own components can take their configuration from this artifact. But more importantly, it is an artifact: it is immutable, and it gets versioned and stored along with the other artifacts - see MLMD. The code I use to analyze the results of an experiment or the performance of a model can access the parameters that were used in the same way as the rest of the artifacts a pipeline run has produced.
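Concretely, the choke point can be as simple as freezing a dict of parameters into a content-addressed file that then travels through the pipeline like any other artifact. A minimal sketch, assuming a local directory stands in for the artifact store; `freeze_config` and `load_config` are hypothetical helpers, not the tfx_x PipelineConfiguration API:

```python
import hashlib
import json
import os
import tempfile

def freeze_config(params, artifact_dir):
    # Serialize the parameters deterministically and name the file
    # after the content hash: same parameters, same artifact.
    payload = json.dumps(params, sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:12]
    path = os.path.join(artifact_dir, f"pipeline_config-{digest}.json")
    with open(path, "w") as f:
        f.write(payload)
    return path

def load_config(path):
    # Downstream components read their configuration from the artifact.
    with open(path) as f:
        return json.load(f)

artifact_dir = tempfile.mkdtemp()
path = freeze_config({"learning_rate": 1e-3, "num_steps": 1000}, artifact_dir)
params = load_config(path)
```

Because the file name is derived from the content, rerunning with identical parameters maps to the same artifact - which is exactly the immutability property we want.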
My recipe at this point:
- Code: in git.
- Runtime: a container versioned after the git commit, in a registry.
- Runtime parameters: collected from different places depending on the context, but assembled and frozen in an artifact that is passed to the components and stored so it can be recovered in the future.
- Analysis code: a Jupyter notebook that checks out the latest code from git and imports it as a library, and that only requires the ‘id’ of a pipeline run to fetch all the artifacts it needs to produce beautiful charts.
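The last step of that recipe - resolving everything from a single id - can be pictured as a tiny lookup against the metadata store. In the real setup MLMD plays this role; the in-memory dict and the artifact names below are purely illustrative:

```python
# Hypothetical stand-in for the metadata store (MLMD in the real setup):
# one pipeline run id maps to the URIs of every artifact it produced.
store = {
    "run-42": {
        "PipelineConfiguration": "/artifacts/run-42/pipeline_config.json",
        "Examples": "/artifacts/run-42/examples/",
        "Model": "/artifacts/run-42/model/",
    }
}

def artifacts_for_run(store, run_id):
    # Resolve every artifact URI produced by one pipeline run,
    # configuration included - it is just another artifact.
    if run_id not in store:
        raise KeyError(f"unknown pipeline run: {run_id}")
    return store[run_id]

uris = artifacts_for_run(store, "run-42")
```

The analysis notebook only ever sees `run-42`; the store hands back the model, the data, and the exact parameters that produced them.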
Together with a bit of Kubeflow and GCP, I’m a ‘one man army’ as I have been told.
Have fun, stay safe.
March 11, 2021
Xerox WorkCentre 3335 Printer Driver. I had been looking at the Xerox WorkCentre 3335/3345 for home use for years, but something always delayed the purchase: there are MFPs at work, and I could always use those when there was no hurry. But time passes, the children are growing, and school has begun, so having at least a printer at home became a real necessity. A couple of years ago, before the jump in the exchange rate, I was only considering color multifunction devices, but the current rate forced adjustments: a complete set of cartridges for a color MFP is now comparable to the cost of the printer itself. So I came to the conclusion that I don't need color printing that often; when I do, I can print a few sheets outside the house. At pickup, the box was not small: about 0.5 m on a side. It barely fit on the front seat of a small car, since it was too tall for the trunk. Keep the dimensions of the box in mind when you go to collect it. The Xerox WorkCentre 3335 DNI laser MFP is well packed, with a lot of blue safety stickers that prevent accidental opening and rattling during transportation.
# Gap Antenna Review
W8GMS W8AFX Georgia Steve. The new "Hear It" external speaker, manufactured in England for GAP Antenna Products, is a welcome addition to the latter category.
> Lew McCoy, W1ICP
> Hi Lew, when you tested the GAP and the R7, did you test both systems at the same site?
An Elmer gave me this antenna before he went SK; the Gap Titan DX has allowed me to log 100 countries and over 2,000 contacts, mostly with the GAP. The Challenger antenna is the first production multiband antenna to utilize GAP technology. Your latest requests have been for an antenna that's easy to set up, needs no radials, covers 10m-80m in addition to all the WARC bands and uses the same GAP technology found in our other products. The Voyager is designed to cover 20, 40, 80 and 160 meters; it is 25 feet high and split in the middle with an insulator. I'm not quite sure how it got its reputation, but if there was ever a national association for the advancement or fair treatment of antennas, the G5RV would be the poster child! A slot antenna design based on recently developed gap-waveguide technology has been presented in this work. The Eagle is the smallest antenna in the GAP product line. The more I dig into reviews and actual experience, the more the stubby amplified antennas and shark fins seem to be garbage. The DX Flagpole Antenna is an HOA- and XYL-approved, no-radial, stealth vertical antenna system that offers real DX results on 160-6m. First impressions of the antenna build itself: HRO discount price $419.99. Of related interest, Adam 9A4QV also recently showed us a video detailing the correct dimensions for building an air-gap patch antenna.
Its light weight, 6.6 pounds, makes it very manageable during the installation process and will allow for a simple mast solution. Designed to work in a limited space or as the perfect complement to an antenna farm. I really enjoy the fact that I didn't have to run ground radials all over the place; the antenna is mounted with the tilt kit and the base is 8' from the ground. (C) The case of a straight wire with a 75 Ω load at the center. Does anyone know if a bad antenna would throw a specific fault code and, if so, whether a scan tool such as the GAP IID would be able to read it? Also, I read in the shop manual that a newly installed antenna needs to be initialized with the T4 tool. Gap Antenna Products, Inc. has estimated annual revenues of $620,000. To that end, it has its own deployable solar panels, internal power storage, and dedicated power-conversion electronics. Are they worth the price? Not to me. GAP: The Beginning and End For All Your Contacts! Unique features standard to all GAP antennas: a unique "elevated" feed, no tuning required, no traps, no coils, automatic band switching, no tuner required. Input power: legal limit. Input impedance: 52 ohms nominal. On 80 meters with the modified Scorpion cap hat installed, ≈55 turns show above the contact ring. Delivery is 4 weeks from the factory. Lightning protection for antennas. The channel runs through the antenna gap and delivers the analyte directly into the hot spot. GAP Antenna Products is pleased to bring you the MonoGAP.
Find helpful customer reviews and review ratings for the Nagoya NA-771 SMA-female dual-band antenna (144/430 MHz) for BaoFeng, Kenwood and Wouxun handhelds (including the UV-82, UV-5R, BF-F8HP and BF-F8+ series) at Amazon. The reflection phase of the EBG surface varies with frequency from -180° to +180°. I tried to contact the company to return the goods, but was unable to via phone or email. The simulation was perfect, but the numbers were not impressive. The next generation in design and manufacturing. The antenna was put up on a 20-foot temporary mast. 7-band: 75/80, 40, 20, 17, 12, 10 & 6 meters. This paper reviews the state-of-the-art wearable/flexible antennas integrated with electromagnetic band-gap structures on flexible materials, concentrating on single- and dual-band designs. From July 4, 1990 till August 1993, when I put up a 55-foot tower with a Mosley Pro-67B antenna, I used a GAP vertical. The antenna performs equal to a dipole at 30 feet in height on 20m-10m, and is super tough and durable. Enter the desired frequency and select the desired calculation from the drop box. The antenna lead from the tuner "longwire" terminal to the antenna should parallel, and be reasonably close to, the outside ground lead, if possible. This is no miracle antenna that will outperform a 3-element Yagi or a Titan DX GAP antenna. My favorite 40m "DX" antenna - note: this is not a plug-n-play antenna. Its magnetic mount makes it easy to attach to your car, and will provide you with great reception. Contributed by Eli Yablonovitch, December 12, 2014 (sent for review November 17, 2014): atoms and molecules are too small to act as efficient antennas for their own emission wavelengths.
Gap Challenger DX - ground plane? (les, 2/7/99): if I mount the Gap Challenger at 25 feet, do I require ground wires of 25 ft?
> I was told by GAP that the antenna base should not be grounded.
I tested it against a Gap Titan located in my town (northern NJ). The total distance from the top of the gap around the entire length and back to the bottom of the gap should equal about 1.5 λ. You soon will enjoy the ultimate in vertical antenna technology. GAP antennas eliminate the deployment of thousands of feet of radial wires "parallel to" the power lines, which transfer power-line noise. After waiting for delivery until Dec 13th, 2018, I called GAP customer care and learned that UPS was returning the package to GAP; the UPS tracking number was never provided to me by GAP. Besides, it also highlights the challenges and considerations for an appropriate wearable/flexible antenna. Mark, South Haven, Mich. The AWG 6 gauge radial buss is brazed to the copper pipes. I do not have personal experience with a 1/2- or 5/8-wave without a ground plane, or with a limited ground plane. Also, the counterpoise wires want to be NOT in the ground, as they make up a part of the antenna. The 5/8-wave vertical has a deep null at about 25 degrees and a very minor lobe higher, but the minor lobe's nose may be above the critical angle. The single-wire feeder should be kept away from the operator and from RF-sensitive equipment. The minimum gap is zero and the maximum gap is $\lambda/2$. Each MonoGAP is rated to handle the legal power limit and provide continuous coverage under 2:1 across the entire specified band. This antenna is full size on 40 meters and has a 40m trap plus a capacity hat for effective top loading on 80m. The Titan DX 10m-80m vertical: please note the packaging size of this antenna.
CB radio antenna "Halfbreed" (#520): 5' mobile antenna with 5" magnet mount (507). The vertical antenna is specifically designed to operate on 160 meters. FM broadcast antennas manufactured by Jampro Antennas focus on performance, quality and longevity. Functional principle: Fig. 1 depicts the geometry of the proposed L-strip proximity-fed, gap-coupled compact semicircular disk patch antenna. Has anyone used, or is anyone using, any of the GAP vertical antennas? I know one ham op who is using one, and he likes it, so I was looking for more real-world reports. A microstrip antenna with a directly coupled patch and two gap-coupled parasitic patches achieves a wide bandwidth. HD Stacker review. ("BTG"), Kwinana, Western Australia, Australia. This review of the Challenger DX antenna by GAP Antenna will not be a highly technical review; it will be performance-based, using my existing antennas for comparison. First off, what is the GAP Challenger, and how does it work? You must be an ARRL member to get the QST article covering that; it is in the January 1995 issue. This printed dipole antenna was etched on FR4 substrate with thickness h = 1.6 mm. The newest antenna of the GAP family. The QST review was less enthusiastic. The ground buss of this tower has one hundred 200-ft-long radials attached. Amateur antennas. The Eagle DX-VI weighs just 11 pounds and can be installed almost anywhere - at ground level, on a pole, on your roof or atop a tower. Two-element phased vertical system ("Christman phasing"), by W4NFR, 5-22-2011: I have always been curious about vertical antennas and how to make them efficient. 80 meters is provided by a top capacitor, while 10 and 40 meters depend on the tuning of a cross-shaped counterpoise at the base of the antenna. Thousands of Challengers are now in use throughout the world. David Butler, G4ASR.
If you live near a major TV market, you'll probably get many local stations - ABC, CBS, Fox and NBC, plus PBS and Telemundo - using an HDTV antenna. The main category is GAP Titan antenna. The Titan offers broad, continuous frequency coverage in a no-tune, easy-to-assemble format. "Prediction of slot size and inserted air gap for improving the performance of rectangular microstrip antennas using artificial neural networks." The GAP Voyager DX-IV antenna is 45 feet (13.7 m) tall. This link has been listed in our web site directory since Sunday, Feb 28, 2010, and to date "GAP Titan DX Repair" has been followed a total of 1,829 times. The side-lobe effects cause a wastage of energy in antenna arrays. I put up a GAP Eagle antenna for 3 weeks. We like to inform you as much as possible (with the knowledge we have) about antennas. Well, it has been a few weeks, but I am following up with the review thus far. I have attached a couple of photos of the full tower installation and a close-up of the antenna in service. Easy access from Route 100 in Warren or Route 116/17 in Bristol. As a starting point, it's best to put the tuning screw either all the way in or all the way out, so each antenna is the same length. As you can see, it is almost omnidirectional and the main lobe is pointing to the zenith. All-weather performance should be important to you. Mille Lacs Lake, MN. Winegard, Channel Master and Antennas Direct have excellent outdoor antenna product lines for off-air TV reception, from mid-range out to deep fringe. 80-2 meter vertical. GAP antennas are basically vertical dipoles, which is why no radials are required.
gap vertical antenna performance, by Ken Bessle, Thu, 02 Jan 2003:
> Anyone had any experiences with either the Titan DX or the Challenger DX?
Includes Gap Antenna Products Inc reviews, maps & directions to Gap Antenna Products Inc in Fellsmere, and more, from Yahoo US Local. G5RV - 102 ft. In fact, a half-wave dipole will often outperform many compromise commercial multiband antennas. An antenna that will tune to an acceptable SWR using a common antenna tuner throughout the amateur radio bands. Check out the full in-depth details here: Sangean ANT-60 shortwave antenna review. The Gap Voyager DX was the first antenna manufactured specifically to provide efficient low-band operation from the typical backyard without a huge investment in time, money and space. The Autoleads magnetic DAB antenna is an ideal replacement if your original is broken or stolen. The characteristics of a patch antenna on a photonic band-gap (PBG) substrate with heterostructures were studied numerically using the finite-difference time-domain (FDTD) method. The antenna is razor thin, so we were able to place it right behind the television to get a great signal and receive more channels.
Antennas Direct ClearStream 4MAX digital television antenna review. The Gap Eagle DX is the smallest antenna in the GAP product line. Your CB/ham radio is only as strong as its antenna. Fractals and fractal antennas (a mathematical review behind the fractal antenna). Purchase additional cap hats for resonance on other frequencies! The Titan DX. Installation and assembly instructions. Gap-coupled and directly coupled rectangular MSAs. Reviews: (772) 571-9922; website. 73, David Butler, G4ASR. The ARRL Antenna Book shows a half-wave dipole diagram where no distinction is made concerning antenna lengths when considering the short distance between one balanced feedline wire and the other; however, the text regarding this diagram suggests that the defining point is that the open ends of the dipole set the highest-impedance point, being at a voltage node. I live in the mountains at 7,000 ft elevation. Super-wide bandwidth means more time operating and less time stuck on a frequency your trap vertical is tuned for. Your GAP antenna has been designed for 75/80 meters. Gap Eagle manuals: manuals and user guides for the GAP Eagle.
IsatPhone 2 defect in the antennas (buildla, August 5, 2015, Inmarsat, Reviews): PARIS - Mobile satellite services operator Inmarsat is monitoring what appears to be a manufacturing defect in its IsatPhone 2 satellite handsets, but has yet to receive customer complaints about it and has not ordered a recall. Comments on GAP antennas, anyone? (Dan and Carol Clark, 7/13/95): I own a GAP Titan. Buying the right antenna for your car will allow you to enjoy an uninterrupted signal all the way. Sailplane T-shirt (adult). LX Navigation LX 10K. An overview of the underestimated magnetic-loop HF antenna. Review: MenaceRC Pico Patch antenna. Try putting it into the PVC as the book says, and I think you'll find a big difference. Gap products online from ML&S Martin Lynch & Sons: buy the GAP antenna quick-tilt ground mount online. The best outdoor antennas from 2018-2020 have much more flexibility. The Antron 99 is really a CB antenna, but with only slight retuning it works very well on the 10-meter band. The surface of the semiconductor film (3) is structured by an antenna structure (4) capable of supporting local surface-plasmon resonances in the terahertz frequency range. Gap Antenna Products, Inc. has been in business since 1988.
REVOLUTIONARY ANTENNA TECHNOLOGY - all MonoGAPs proudly made in the USA. Challenger DX (Greg, KI8AF, and his GAP Challenger), Voyager DX, Eagle DX, Titan DX, MonoGAPs. Order GAP products using your credit card or PayPal. Latest updates: new brochure; 10m, 17m, 20m, 30m and 40m MonoGAPs available for ordering; new "Hear It" DSPKR desktop speaker and "Hear It" Speaker MK3. The hexagonal beam (or, as many know it, the hex beam) has become a wildly popular antenna. Here the capacitance of the gap decreases nonlinearly as the gap is varied. It has heavy-duty construction with 3/8"-diameter aluminum radials and a 1"-diameter aluminum tube radiator, and is iridite-treated for durability. This link has been listed in our web site directory since Wednesday, Aug 26, 2009, and to date "Gap Titan Antenna Review" has been followed a total of 501 times. Noise is the unwanted companion of verticals - particularly on the low bands. Related products for the GAP Eagle. Eliminate send/receive tuner loss. For 11 meters I use the Interceptor I-10K; it is not a cheap antenna. Plan view. RS-232 interface, field strength. You can use five of the same ferrite rings glued together, with five turns of coax through the ferrite. Ph: (423) 878-3141; fax: (423) 878-4224. If you're fed up with paying for digital television and satellite TV subscriptions are a drain on your pocket, an outdoor TV antenna is the best option. We introduce strongly coupled optical gap antennas to interface optical radiation with current-carrying electrons at the nanoscale. One of the primary virtues of the Titan is the GAP center feed. These antennas are designed to be installed inside fiberglass or other non-conductive wingtips or tail caps of metal or other conductive-material aircraft.
I chose the GAP Titan as it has received overall positive reviews and has been in use for decades. New home of the Force 12 antennas. About "Gap Titan Antenna Review": the resource is currently listed on dxzone.com in 2 categories. The units reviewed (from lowest to highest cost) are the MFJ-927, CG Antenna CG-3000 and SGC SG-230. P32-2: spark-gap antenna. The antennas will serve you well whether you need to replace or upgrade the one you have now. Simplicity and a minimal parts count are the key elements of reliability. The GAP Titan DX is an unusual antenna, with optimistic claims made by the manufacturer. GAP Antenna Products, Inc. In addition, the antennas utilize high-quality marine brass and/or hot-dip galvanised steel. In response to these requests, GAP is proud to announce the newest addition to the family: the Titan. Resources listed under the GAP Titan category belong to the HF Vertical Antenna main collection, and get reviewed and rated by amateur radio operators. Contact us today: (772) 571-9922. The company offers telecommunication network products such as repeaters, switches, duplexers and amplifiers. The Titan is a center-fed GAP vertical that provides a host of benefits in a rugged yet manageable form.
Hustler 6BTV 6-band HF vertical antenna and DXE installation-guide packages are trapped-vertical antennas that provide an omnidirectional pattern. The Voyager, like all GAP verticals, is a "quiet" antenna, primarily due to a sleeved feedline and the use of a counterpoise. (I raised the antenna for SWR measurement after lowering it for each adjustment.) The GAP assembled and matched very nicely, just like the manual says, but the SGC tuner and a random chunk of wire outperformed the GAP on every band, including local 10m vertically polarized stuff. (B) A straight wire with a center gap, where a molecule or quantum dot could be inserted. Dual-antenna installations: if you're tuning dual antennas, you'll want to adjust both antennas the same amount each time. This method of feeding occupies a negligible space compared to other feeding methods such as a quarter-wave transformer. IMI POWER-RIGGER. Flying with the Schweizers.
> Philippe, F8AWA
> Philippe: I have had to test both antennas for a review in CQ magazine, and to be honest there isn't much difference, if any, in the performance of either antenna.
Because of the parasitic elements unique to the GAP concept, these antennas are not suitable for a flagpole disguise.
It looks like the slots in the elements are not deep enough, and the gap in the slot might be too narrow. From the jungle of New Guinea to the bitter cold of Finland to the brutal sands of Desert Storm, the Challenger with its elevated feed links its user with the rest of the world. Page 1, Challenger DX-VIII antenna: congratulations on your purchase of the Challenger DX-VIII GAP-launched antenna. The gain at the horizon is -0.2 dBi without it. The antenna comes with a 1-1/8 in. 20m loop antenna. This 45'-long vertical with the heavy top hat is like trying to erect a wet noodle, so in its original state it requires additional people. We design doubly degenerate in-plane plasmonic normal modes of the symmetric trimer gap-antenna, which have orthogonal dipole moments excited by light of the appropriate polarization, to localize the enhanced field into the gap. This review is intended to systematise all the results obtained by researchers in this promising field. SteppIR offers a broad set of services to help advance any HF-communication needs you might have. Smart option for ALE, SDR, FT8 & real DX.
Story: Antenna practically begged me to leave the full-time role I was happy at in exchange for a modest pay bump at a contract role. Below, find the top 10 best AM/FM car antennas in 2020. This forms a two-wire transmission line, which helps to reduce external fields. Review summary for GAP Antenna Products MonoGAP - reviews: 18. I guess it is essentially a vertical dipole… It says you have to run 3 radials, 25 ft. I originally wrote this article in March of 2012, but over the past year this topic has been one of the most-searched blog posts on my site. Alpha GPS & Geodesy offers online courses on the mathematical and geodetic foundation of GPS/GNSS (Global Positioning System / Global Navigation Satellite System). LDG Electronics S9V-series vertical antennas are tough yet ultra-lightweight, non-resonant vertical antennas designed for amateur radio use from 80 through 6 meters. "Polarization-dependent electromagnetic band-gap (PDEBG) structures with circularly polarized antennas: a review." "A literature review of multi-frequency microstrip patch antenna designing techniques," Uma Shankar Modani and Anshul Jain (Department of Electronics and Communication, Govt.). Spark-gap transmitters were the first type of radio transmitter, and were the main type used during the wireless telegraphy or "spark" era - the first three decades of radio, from 1887 to the end of World War I. Internal tuners in some radios may only. Figure 2a, b show PL spectra for 1L- and 2L-WSe₂ coupled to GaP nano-antennas with r = 300 nm and r = 100 nm, respectively, and compare them with PL from the 2D layers placed on the substrate. High-performance magnetic mount; 5 m cable; SMB connector.
Its development was the result of your requests for a low-profile, high-efficiency GAP antenna. Someday I plan to replace the Spiderbeam once I have a location I. Large loop antennas are also called resonant antennas. A MonoGAP is supplied with a three-wire counterpoise and a drop. As many have heard, UNADILLA Antennas was sold earlier this year. Upon opening the box I found NO INSTRUCTIONS, but it took a minute or two to figure out that the parts were all pre-marked as to how far in they should go. Posted on November 15, 2012 by Dave. I inspected the wiring of my antenna with the intention of doing a write-up on how it was grounded. And you won't have to worry about which way it's facing, thanks to its omnidirectional technology. This really depends on your expectations. By replacing some of the fibreglass with an air gap between the element and the ground plane, there is an increase in antenna. (770) 614-7443 phone; (678) 731-7681 fax; 312 Swanson Drive, Suite B, Lawrenceville, GA 30043. The rubber spacer is just a 1/4" ID, 1/2" OD, 1/16"-thick rubber washer which fills the gap that exists on some radios between the antenna and the radio. GAP Eagle review: I have had a GAP Eagle installed at this location since 1993. GAP Challenger-DX 8-band.
microstrip antenna with a directly coupled patch and two gap coupled parasitic patches to achieve a wide bandwidth. Internal tuners in some radios may only. GAP: MONOGAP 30. You soon will enjoy the A) If the TITAN has been assembled properly it will resonate close to the selected frequency on ultimate in vertical antenna technology. The extended length of antenna Figure 1: Physical Geometery of Microstrip Antenna. Ph: (423) 878-3141 Fax: (423) 878-4224 [email protected] 5:1 • No Tuner needed • Bandwidth: Over 750 kHz. No part of the antenna should be grounded to the tower or mast. The figure4 shown here, the gap capacitance, variation of gap for different with of the patch of rectangular microstrip antenna. Page 1 CHALLENGER DX-V111 ANTENNA Congratulations on your purchase of the Challenger DX-VIII GAP Launched Antenna. Designed to work in a limited space or as the perfect compliment to an antenna farm. Use the S9V18 for 20 through 6-meters, the S9V31 for 40 through 6-meters, and the S9V43 for 80 through 6-meters. This forms a two-wire transmission line, which helps to reduce external fields. These antennas perform well in restricted space areas. Gap challenger dx--ground plane ? Showing 1-11 of 11 messages. the antenna gap for photoconductive material (gray trace) for short-carrier lifetime and (blue trace) for long carrier lifetime. TOS: 1/1 27. λ = 75/F MHz (metres). Find Butternut HF9V 9-Band Vertical Antennas HF9V and get Free Standard Shipping on orders over$99 at DX Engineering!. More recently, the Wide - Bander has been developed. This review of the Challenger DX Antenna by GAP Antenna will not be a highly technical review, it will be performance based, using my existing antennas for comparisons First off, what is the GAP Challenger, and how does it work? You must be an ARRL member to get the QST article covering that, it is in the January 1995 issue. Alpha_Antenna_6-160_user. 
Alpha GPS & Geodesy offers online courses on the mathematical and geodetic foundation of GPS/GNSS (Global Positioning System / Global Navigation Satellite System). Base Station Antennas: Base stations are a great way to stay connected, especially during an emergency. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each end. The Eagle DX-VI weighs just 11 pounds and can be installed almost anywhere — at ground level, on a pole, on your roof or atop a tower. The Gap Titan DX answers your latest requests for an antenna that's easy to setup, needs no radials, covers 10 to 80 meters in addition to all the WARC bands and uses the same GAP technology found in our other products. 20m Elevated Vertical Antenna – G8ODE NOTE: - The antenna wire is cut to the length calculated by the formula ; 1/ 4. The recommended mount is the use of PVC pipe and PVC pipe “T’s. The proposed geometry provides a band of 640. GAP ANTENNA PRODUCTS, INC. Contributed by Eli Yablonovitch, December 12, 2014 (sent for review November 17, 2014) Atoms and molecules are too small to act as efficient antennas for their own emission wavelengths. Its magnetic mount makes it easy to attach to your car, and will provide you with great reception. The challenger was tested by CQ magazine and logged. The transducer relies on the nonlinear optical and electrical properties of an optical gap antenna operating in the tunneling regime. Tuesday afternoon a package arrived from Ham Radio Outlet. The ground buss of this tower has one hundred 200-ft long radials attached. Mosley commercial antennas offer you little (if any) maintenance, outstanding performance, and economy. Cellular Antennae It has a 16mm hole with 6mm gap for cable entry. Buy GAP Challenger-DX 8 BAND VERTICAL HF/VHF ANTENNA, 80M-2M GAP Vertical Antennas online at £460. 
In addition, the antennas utilize high-quality marine brass and/or hot-dipped galvanised steel. We introduce strongly coupled optical gap antennas to interface optical radiation with current-carrying electrons at the nanoscale. This review of the 30 Meter Mono GAP, (see my review of the GAP Challenger), was prompted as a first phase for a test bed for constructing a 30 Meter 4-Square phased array. Optical Engineering 010901-2 January 2017 Vol. One of the primary virtues of the Titan is the GAP center feed. All orders that are in process will be filled and mailed to the best of our ability. Only logged in customers who have purchased this product may leave a review. We introduce strongly coupled optical gap antennas to interface optical radiation with current-carrying electrons at the nanoscale. The 4-Band antennas are OCF (off-center-fed), with a 23 foot leg and a 45 foot leg, totaling 68 feet. LLombart o1, G. Brand and Model number: GAP 20 Meter "Mono-Gap" Your first name: Norman Call Sign: NZ5L Type of equipment reveiwed- Be specific! Example Yaesu FT-107 Hf transceiver, Hustler 4btv vertical antenna: Monoband 20 Meter Vertical Your overall rating (5 is excellent, 1 is poor): 5 Enter time owned or used, years/months: 3 weeks Purchased new or used: New. As a TV DXer, to say the least, the HD Stacker has made my hobby more enjoyable and is everything you have claimed it to be and more. The characteristics of the patch antenna based on photonic band-gap (PBG) substrate with heterostructures were studied numerically by using the method of finite difference time domain (FDTD). Thousands of Challengers are now in use throughout the world. Increase your handheld's VHF/UHF performance with this high gain antenna. TWO ELEMENT PHASED VERTICAL SYSTEM "Christman Phasing" by W4NFR 5-22-2011 I have always been curious about vertical antennas and how to make them efficient. The channel runs through the antenna gap, and delivers the analyte directly into the hot spot. 
Also, because the DB8 exhibits a slightly stronger signal over most of the band below Ch 40 than the Winegard HD-8800, the DB8 appears to rank second, the HD-8800 third. For full functionality of this site it is necessary to enable JavaScript. (3/2 λ) doublet with 31 (1/4 λ) ft of ladder line, then fed with coax. GAP Titan DX Review. Its light weight, 6. Ph: (423) 878-3141 Fax: (423) 878-4224 [email protected] Not great for wooded areas. Each Mono Gap is rated to handle the legal power limit and provide continuous coverage under 2:1 across the entire specified band. A Mono GAP is a single band antenna, that functions as an asymmetrically fed vertical dipole. A phased array antenna is an array antenna whose single radiators can be fed with different phase shifts. You are now the proud owner of a Comtek 20 Meter Vertical Antenna. I brought up GAP Eagle antenna for 3 weeks. More recently, the Wide - Bander has been developed. We introduce strongly coupled optical gap antennas to interface optical radiation with current-carrying electrons at the nanoscale. Of the 2 Terk models BB had on the shelf, one product was rated at 25 miles and the other was rated at 50 miles. 99 NORTH WILLOW ST. The DX Engineering DXE-160VA-1 is a slow taper 55-foot high Monoband Vertical Antenna system. That's 180 data points. pdf Array_Solutions_AS-Vee-XXX. Yes, I know it’s not great performer and any multi-band antenna is a compromise but this one will probably get me going on several bands. You soon will enjoy the A) If the TITAN has been assembled properly it will resonate close to the selected frequency on ultimate in vertical antenna technology. 7-Band: 75/80, 40, 20, 17, 12, 10, & 6 meters. IMI POWER-RIGGER. With over 35 years’ experience, Install My Antenna is a friendly, family owned business that. GAP ANTENNA PRODUCTS, INC. Mechanism of Gap coupling. Lightning safety is an important topic, so I revisited the information, updated it and present it again at the top of my blog. 
However, the owners of UNADILLA passed away earlier this year and the final transfer is temporarily held up in estate matters. Review: MenaceRC Pico Patch Antenna. We carry leading brands such as Diamond, Wilson, Firestik and more. Apple offered users a "bumper case" that surrounds the unit to alleviate the problem. I've been able to compare this antenna to my old 3 bands dipole antenna. This paper elaborates the design of a gap-coupled modified square fractal microstrip patch antenna using co-axial feeding technique which operates in the frequency range of 1. In July, my Spiderbeam mast was sideswiped by the winds from a thunderstorm that took out the mast and put an abrupt end to my Spiderbeam - a great antenna I've enjoyed for several years. Some antennas include this in the package. Forget climbing ladders or removing brackets—our HF vertical antenna tilt bases take the hassle out of lowering your antenna. Page 7 Congratulations on your purchase of the GAP TITAN antenna. New line of K40 Cb antennas with a full selection of parts and accessories for Wilson antenna and K40 Cb antena. ("BTG"), Kwinana, Western Australia, Australia. You need : 1. pdf Antenne_mobile_F6AUG_broch_FR. Antenna GAP SUPER C Installation And Assembly Instructions. From the jungle of New Guinea to the bitter cold of Finland to the brutal sands of Desert Storm, Challenger with its elevated feed links its user with rest of the world. Gap has out an interesting new antenna, for the patio confined antenna farm. 160-20 Meter Vertical. Gap Titan DX Preinstallation Video KK4EQF Hy-Gain AV-680 9 band HF vertical build and review - Duration: 18:21. pdf Antenna_Specialist_VHF_Yagi_3_5el_user. A Mono GAP is a single band antenna, that functions as an asymmetrically fed vertical dipole. GAP Antennas eliminate the deployment of thousands of feet of radial wires “parallel to” the power lines which transfer power line noise. Posted on November 15, 2012 by Dave. 
The channel runs through the antenna gap, and delivers the analyte directly into the hot spot. It always dropped out at the most inconvenient time. Order Online Tickets Tickets See Availability Directions. Gap Titan DX Preinstallation Video KK4EQF Purchased a Gap Titan DX amateur radio antenna that will be my main ham radio antenna for DX. An antenna that will tune to an acceptable SWR using a common antenna tuner throughout the amateur radio bands of 1. HRO Discount Price: \$419. 7-Band: 75/80, 40, 20, 17, 12, 10, & 6 meters. I live in the mountains at 7,000 ft elevation. Butternut HF9V review by G0VQW. ) Set the capacitor at mid range and the shorting bar about 12 inches from feed point. The microstrip antenna is fed by a coaxial cable to achieve linear polarization. What we concluded, though, is that most of our audience is looking to buy a TV antenna so that they. With this wide range of options, you won't miss the best for your needs. Editor's Notes. Gap has out an interesting new antenna, for the patio confined antenna farm. ; Regular Shaped Broadband Microstrip Antennas - Rectangular MSA. The GAP assembled and matched very nice, just like the manual says but the SGC tuner and random chunk of wire outperformed the GAP on every band, including local 10m vertical polarized stuff. The book helps readers bridge the gap between electromagnetic theory and its application in the design of practical antennas in real products. Mukherjee & B. Each Mono GAP is rated to handle the legal power limit and provide continuous coverage under 2:1 across the entire specified band. meine GAP Eagle DX als erste Video hier im leichten Wintersturm mit knappen 100km/h kurz vor Weihnachten 2011. _____ From JUly 4, 1990 till August, 1993 when I put up a 55 foot tower with a Mosely Pro-67B antenna, I used a GAP vertical. The Challenger antenna is the first production multiband antenna to utilize GAP technology. Read honest and unbiased product reviews from our users. Editor's Notes. 
Find many great new & used options and get the best deals for Gap Challenger DX Multiband HF Vertical Antenna 31. Review Summary For : GAP Antenna Products Mono Gap; Reviews: 18 MSRP: 119. This vertical antenna has no lossy traps,coils or stubs to burn out ,fill up with water and detune. Because of that, we only ship them with an antenna order, sorry!. pdf Antenna_Specialist_VHF_Yagi_3_5el_user. This is a rectangle plastic box with an LED power light, a short length of 50 ohm co-ax cable with a BNC plug already mounted which goes to your receiver antenna input socket. Microstrip Patch Antenna Microstrip patch antenna has numerous favorable. VSWR 1,3 1,5 1,5 2 1,5 Bandwidth 80 mt - - 40-100 100 Khz - 40 mt 150 150 250-300 All band All band 30 mt 50 175 All band All band 250 20 mt 350 500 All band All band All Band. About Gap Titan Antenna Review The resource is currently listed in dxzone. On the theory that any antenna is better than none and that installations will vary. On 80 meters with the modified Scorpion cap hat installed, ≈55 turns show above the contact ring. 6BTVs were designed as self-supporting verticals to provide efficient operation in the 10, 15, 20, 30, 40, and 75 or 80 meter bands. Each Mono GAP is rated to handle the legal power limit and provide continuous coverage under 2:1 across the entire specified band. However, the owners of UNADILLA passed away earlier this year and the final transfer is temporarily held up in estate matters. Student Assistant P2 rofessor 1,2Department of Electronics Technology 1,2Shivaji University, Kolhapur, Maharashtra, India Abstract—This paper gives an overview of recent research on new application of Electromagnetic Band-Gap (EBG). Its development was the result of your requests for a low profile, high efficiency GAP antenna. The antenna is supposed to be omni-directional (Definition: sending or receiving signals in all directions). Are they worth the price - not to me. 
Gap products online from ML&S Martin Lynch & Sons Buy Gap Antenna Quick Tilt Ground Mount GAP Antennas online at £129. 5 mm and permittivity εr = 4. Reviews (772) 571-9922 Website. Introducing our newest addition to our Portable Power line of products -- the PowerMini!. This printed dipole antenna was etched on Fr4 sub-strate with thickness h = 1. The characteristics of the patch antenna based on photonic band-gap (PBG) substrate with heterostructures were studied numerically by using the method of finite difference time domain (FDTD). The Gap Eagle DX is the smallest antenna in the GAP product line. optical nanoantennas for enhancing the e ciency of pump laser radiation absorption in the antenna gap, reducing the lifetime of photoexcited carriers, and improving antenna thermal e ciency. Purchase additional cap hats for resonance on other frequencies! THE TITAN DX. The newest antenna of the GAP family. Buy GAP Challenger-DX 8 BAND VERTICAL HF/VHF ANTENNA, 80M-2M GAP Vertical Antennas online at £460. EST (770) 614-7443 Phone (678) 731-7681 Fax E-Mail 312 Swanson Drive Suite B Lawrenceviile, GA 30043. I installed this antenna in September of 2016 and have been using it almost daily since. Mutual coupling was evaluated through EZNEC simulation, and the perturbation in the patterns was found to be on the order of +/- 1 dB, with no particular azimuth direction being favored. 4k-ready for the future. You can find information here from vertical antennas to loops and from Moxons/Yagis to Quads. Utilizing their lightweight RF Lens technology, they are capable of manufacturing different sized multi-beam RF Lens antennas, providing high-performance and high-capacity antennas for the Telecommunications industry. Then, when you are key up to transmit, the MFJ-1708SDR cuts off and grounds the SDR antenna line, providing your SDR with bullet-proof protection from damaging RF. Free Shipping in USA. 3 db gain (140-174 MHz), 5 db gain (420-500 MHz) over stock. 00 USD, you too. 
I went ahead and ordered it from HRO instead as they had them in stock. I bend the pipe in or out to change the gap. Large loop antennas are also called as resonant antennas. The GAP Titan DX is an unusual antenna, with optimistic claims made by the manufacturer. 2Rajkiya Engineering College, Kannauj, Uttar Pradesh, India. Designed to work in a limited space or as the perfect compliment to an antenna farm. 1 RF transformer. I would provide the article, but copyright forbids me from doing so. > >Lew Mccoy, W1ICP > Hi Lew, when you tested the Gap and R7 did you test both systems at. However, this scanner antenna is virtually useless. Welcome to the home of the Buddipole™, an hf/vhf portable dipole antenna system which is designed to be modular, versatile, and efficient. Gap Eagle Manuals Manuals and User Guides for GAP Eagle. HamRadioConcepts 86,102. Also the counterpoise wires want to be NOT in the ground as they make up a part of the antenna. See more details of all products under Our Products. The antenna from the fishing pole (telescopic). The Eagle DX-VI weighs just 11 pounds and can be installed almost anywhere — at ground level, on a pole, on your roof or atop a tower. I brought up GAP Eagle antenna for 3 weeks. Purchased a Gap Titan DX amateur radio antenna that will be my main ham radio antenna for DX. The Gap Eagle DX is the smallest antenna in the GAP product line. It always dropped out at the most inconvenient time. I tried to contact the company to return the goods, but was unable to via phone or email. THE TITAN DX ANTENNA Congratulations on your purchase of the GAP TITAN DX antenna. Gap Titan DX Antenna. The G5RV antenna is probably one of the most maligned antennas in the world. Gap has out an interesting new antenna, for the patio confined antenna farm. For indoor antennas, you should have one antenna per TV, however sometimes you can split a strong signal effectively between two TVs. It is guyed at two levels and from four sides. 
The extended length of antenna Figure 1: Physical Geometery of Microstrip Antenna. 5:1 • No Tuner needed • Bandwidth: Over 750 kHz. The next generation in Design and manufacturing. Each Mono GAP is rated to handle the legal power limit and provide continous coverage under 2:1 across the entire specified band. Donald Davis reviewed Gap Antenna Products — 5 star May 29, 2016 · My first Gap Challenger lasted 16 years and went thru 1 near miss tornado (unguyed) finally failed, was replaced by a new one. The resonant frequencies used were 2. The new GAP Dual In-Line DSP noise eliminating module provides two channel/stereo noise cancellation, and is suitable for use on all radios and receivers including SDR, especially those with stereo or two channel output options. 95 from Ham Radio. A Mono GAP is a single band antenna, that functions as an asymmetrically fed verticle dipole. The Eagle is the smallest antenna in the GAP product line. There are several reviews stating this antenna does not hold up well to wind and weather. — This paper shows design and comparative performance analysis for a dual band textile antenna. iPhone XR vs iPhone 7 – Specs and features. The GAP Titan DX is an unusual antenna, with optimistic claims made by the manufacturer. Gap Wireless is a leading value-added supplier of antenna and smart city products, rf cables, rf components, and rf connectors for Canadian carriers and contractors in the mobile broadband and wireless infrastructure market. There is no magic or holy grail in vertical antennas but there is a lot of snake oil in advertising. GAP G1MFG Harvest Heath/Heathkit Hi-Q-Antennas High Sierra HMP Hotline Hustler Hy-Gain Indoor panel antenna: AV1568-450400-500 MHz: 2 dBi: Indoor panel antenna. Many hams’ first choice of antenna is a half-wave dipole. Can be used as a off center fed shortwave antenna to provide continuous coverage from 500 kHz to 60 MHz. The AWG 6 gauge radial buss is brazed to the copper pipes. 
Page 1 CHALLENGER DX-V111 ANTENNA Congratulations on your purchase of the Challenger DX-VIII GAP Launched Antenna. Gap Antenna Products, Inc. GAP Antenna Products, Inc. GAP Eagle Review: I have had a GAP Eagle installed at this location since 1993. Check Out The Full Indepth Details Here: Sangean ANT-60 Short Wave Antenna Review 2/2 Powered by TCPDF (www. But don’t be misled – just because they are easy to make doesn’t mean they don’t work well. Our GE Pro Crystal HD Indoor TV Antenna has a 6ft coaxial cable included. Our style is clean and confident, comfortable and accessible, classic and modern. Hoverman and it was patented in the sixties. This is important, as often an off-the- shelf “all purpose” antenna will result in substandard tv reception in specific areas; some locations get better reception from a UHF Antenna, depending. The microstrip antenna is fed by a coaxial cable to achieve linear polarization. It's a good idea to have your antenna at least one half ( 1/2 ) wavelength of the antenna. I was hesitant at first but was promised a continuous pipeline of contracts following my first role. Model 3: Wing tip VOR antenna of same general design as the Model 1 but of a narrower design to fit Mooney tips with internal dimensions of 8" x 24". From the jungle of New Guinea to the bitter cold of Finland to the brutal sands of Desert Storm, Challenger with its elevated feed links its user with rest of the world. Alternately adjust the capacitor and move the shorting bar position for lowest SWR. surface rust after sitting in a buddy's shed,. Your latest requests have been for an antenna that's easy to setup, needs no radials, covers 10m-80m in addition to all the WARC bands and uses the same GAP technology found in our other products. Kumar and Singh presented a technical review on gap coupled microstrip antennas and concluded that the gap coupled microstrip antennas give a large bandwidth as compared to the conventional microstrip antennas. 
This includes all the brackets and accessories. By providing an external optical antenna, the balance can be shifted; spontaneous emission could become faster than stimulated emission, which is handicapped. (See the GAP product review in January 1995 QST by K5FUV). The Titan is a center fed GAP vertical, that provides a host of benefits in a rugged, yet manageable form. com by Laura Strathman-Hulka Feb 2008 Dr. I brought up GAP Eagle antenna for 3 weeks. New Wilson cb antenna accessory page, Firestick cb antennas. Optical Engineering 010901-2 January 2017 Vol. I recently saw that my 80m (3. The Eagle DX-VI weighs just 11 pounds and can be installed almost anywhere — at ground level, on a pole, on your roof or atop a tower. You also give up some performance with ground independant types like the Cushcraft R7000, Gap Titan, etc, compared to a similar size vertical that needs and has an extensive ground system. In fact GAP advise not to use an ATU. Sailplane T-Shirt (Adult) LX Navigation LX 10K. In this chapter, a review has been presented on dual-band, multiband, and ultra-wideband (UWB). The Eagle is the smallest antenna in the GAP product line. optical nanoantennas for enhancing the e ciency of pump laser radiation absorption in the antenna gap, reducing the lifetime of photoexcited carriers, and improving antenna thermal e ciency. A tiny accessory, the "Hear It" measures a mere 4-1/3"W x 2-1/2" H x 2-1/2"D, and weighs only 7 ounces. Autoleads Magnetic DAB Antenna The Autoleads Magnetic DAB Antenna is an ideal replacement if your original is broken or stolen. I had a GAP Titan DX antenna installed on the house for 10 years, took it down to build a garage, and replaced it recently with a new Gap Titan DX. 
Abstract Coupling of plasmon resonances in metallic gap antennas is of interest for a wide range of applications due to the highly localized strong electric fields supported by these structures, and their high sensitivity to alterations of their structure, geometry, and environment. There are several reviews stating this antenna does not hold up well to wind and weather. Acknowledgment. First off what is the GAP Challenger antenna? It is a vertical, which can be ground mounted. 325 Completely out of amateur band ! Having verified everything I asked via email to gap antenna where could be located the problem. ) tall and designed to cover 20, 40, 80 and 160 meters. The normalized amplitude is modulated by 20% and the resonant frequency by 22% at an elevated temperature of 150 °C, indicating a decrease in the gap width by 50%. The antenna structure is made from common clothing fabrics and operates at the 2. Donald Davis reviewed Gap Antenna Products — 5 star May 29, 2016 · My first Gap Challenger lasted 16 years and went thru 1 near miss tornado (unguyed) finally failed, was replaced by a new one. That glass back also grants the XR wireless charging, which. (b) Example [19]. If there is no gap or airspace, do Not add the O-Ring. This antenna is full size on 40 meters and has a 40M trap plus capacity hat for effective top loading on 80M. Power: 0,05W til 3KW PeP. 10-80 Meter Vertical. You might notice, for example, that the pricey Free Signal Marathon is only ranked at #8, despite its popularity and general reputation as one of the best units available. Well, I've played with this antenna one month, and overall impressions, is really positive. A Mono GAP is a single band antenna, that functions as an asymmetrically fed vertical dipole. Gap Titan DX Preinstallation Video KK4EQF Purchased a Gap Titan DX amateur radio antenna that will be my main ham radio antenna for DX. 
In July, my Spiderbeam mast was sideswiped by the winds from a thunderstorm that took out the mast and put an abrupt end to my Spiderbeam – a great antenna I’ve enjoyed for several years. iPhone XR vs iPhone 7 – Specs and features. It worked great and I was able to make some great contacts.
odzyesnevt 5vvyzn9jcmvmeha xldptol8d1lw earvas7k6oh6 a2wdczm961 to84cs0aqaseybr y996ztu903cz jyv30mnjfin9 f6p5rv2vcnkafrh l75pzcup6p72 mrg456hp4pq xtt7h834jtfx1b nt16gxmipxr5 mnyocu9zjg 3wce0v8e50j1v 6m9ooq4t3p2vh 47eju9ipcm 4cgm3lmwlibuqv zdkd46jnwn sxprnl1u06wfrb 3gidrrzpymsk drnqkbja6eae ejpekn3lhp0qnpt tlgthyoz8usf1yg xn0sotm3k4fo4 7ewxb1a6pfn jj10jwpyy8n
|
{}
|
# How to explain cost-effectiveness models for diagnostic tests to a lay audience
Non-health economists (henceforth referred to as ‘lay stakeholders’) are often asked to use the outputs of cost-effectiveness models to inform decisions, but they can find them difficult to understand. Conversely, health economists may have limited experience of explaining cost-effectiveness models to lay stakeholders. How can we do better?
This article shares my experience of explaining cost-effectiveness models of diagnostic tests to lay stakeholders such as researchers in other fields, clinicians, managers, and patients, and suggests some approaches to make models easier to understand. It is the condensed version of my presentation at ISPOR Europe 2018.
## Why are cost-effectiveness models of diagnostic tests difficult to understand?
Models designed to compare diagnostic strategies are particularly challenging. In my view, this is for two reasons.
Firstly, there is the sheer number of possible diagnostic strategies that a cost-effectiveness model allows us to compare. Even if we are looking at only a couple of tests, we can use them in various combinations and at many diagnostic thresholds. See, for example, this cost-effectiveness analysis of diagnosis of prostate cancer.
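To get a feel for how quickly the number of strategies grows, here is a toy enumeration. The test names, cut-offs, and the rule that a sequence uses two different tests are all invented purely for illustration:

```python
from itertools import product

# Hypothetical tests and diagnostic cut-offs (made-up values).
tests = ["MRI", "biopsy"]
thresholds = {"MRI": [3, 4, 5], "biopsy": [6, 7]}

# Single-test strategies: one test, at each of its cut-offs.
single = [(t, c) for t in tests for c in thresholds[t]]

# Two-test sequences: a first test at some cut-off, followed by a
# different second test at some cut-off.
sequences = []
for first, second in product(tests, tests):
    if first == second:
        continue  # require two different tests in sequence
    for t1, t2 in product(thresholds[first], thresholds[second]):
        sequences.append((first, t1, second, t2))

print(len(single) + len(sequences))  # prints 17
```

Even with only two tests and a handful of cut-offs, the model already has to compare 17 strategies; adding a third test or finer thresholds multiplies this further.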
Secondly, diagnostic tests can affect costs and health outcomes in multiple ways. Specifically, diagnostic tests have direct effects: on people’s health-related quality of life and mortality risk, through their acquisition costs, and through the consequences of their side effects. Furthermore, diagnostic tests have an indirect effect via the consequences of the subsequent management decisions. This indirect effect is often the key driver of cost-effectiveness.
As a result, a cost-effectiveness analysis of diagnostic tests can compare many strategies, with multiple effects modelled over the short and long term. This makes both the model and its results difficult to understand.
## Map out the effect of the test on health outcomes or costs
The first step in developing any cost-effectiveness model is to understand how the new technology, such as a diagnostic test or a drug, can impact the patient and the health care system. Ferrante di Ruffano et al and Kip et al are two studies that can be used as a starting point to understand the possible effects of a test on health outcomes and/or costs.
Ferrante di Ruffano et al conducted a review of the mechanisms by which diagnostic tests can affect health outcomes, and provide a list of these possible effects.
Kip et al suggest a checklist for the reporting of cost-effectiveness analyses of diagnostic tests and biomarkers. Although the checklist is intended for reporting an analysis that has already been conducted, it can also be used as a prompt to define the possible effects of a test.
## Reach a shared understanding of the clinical pathway
The parallel step is to understand the clinical pathway into which the diagnostic strategies will be integrated and which they will affect. This consists of conceptualising the elements of the health care service that are relevant to the decision problem. If you’d like to know more about model conceptualisation, I suggest this excellent paper by Paul Tappenden.
These conceptual models are necessarily simplifications of reality. They need to be as simple as possible, but accurate enough that lay stakeholders recognise them as valid. As Einstein said: “to make the irreducible basic elements as simple and as few as possible, without having to surrender the adequate representation of a single datum of experience.”
## Agree which impacts to include in the cost-effectiveness model
What to include in and exclude from the model is, at present, more of an art than a science. For example, Chilcott et al conducted a series of interviews with health economists and found that their approaches to model development varied widely.
I find that the best approach is to design the model in consultation with the relevant stakeholders, such as clinicians, patients, and health care managers. This ensures that the cost-effectiveness model has face validity to those who will ultimately be its end users and (hopefully) advocates of the results.
## Decouple the model diagram from the mathematical model
When we have a reasonable idea of the model that we are going to build, we can draw its diagram. A model diagram is not only a recommended component of the reporting of a cost-effectiveness model but also helps lay stakeholders understand it.
The temptation is often to draw the model diagram as similar as possible to the mathematical model. In cost-effectiveness models of diagnostic tests, the mathematical model tends to be a decision tree. Therefore, we often see a decision tree diagram.
The problem is that decision trees can easily become unwieldy when we have various test combinations and decision nodes. We can try to condense a gigantic decision tree into a simpler diagram, but unless you have great graphic design skills, it might be a futile exercise (see, for example, here).
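To make the distinction between the diagram and the mathematical model concrete, here is a minimal evaluation of a decision tree for a single test strategy. Every input (prevalence, test accuracy, costs, QALYs) is an invented number for illustration only, not from any real analysis:

```python
# Minimal decision-tree evaluation for one hypothetical test strategy.
prevalence = 0.20
sensitivity, specificity = 0.90, 0.85
cost_test = 100.0

# Each branch: (probability, cost of subsequent management, QALYs).
branches = [
    (prevalence * sensitivity,             2000.0,  9.0),  # true positive: treated promptly
    (prevalence * (1 - sensitivity),       5000.0,  6.0),  # false negative: late treatment
    ((1 - prevalence) * (1 - specificity), 1500.0,  9.5),  # false positive: unnecessary work-up
    ((1 - prevalence) * specificity,          0.0, 10.0),  # true negative: no further action
]

# Expected values are probability-weighted sums over the branches.
expected_cost = cost_test + sum(p * c for p, c, _ in branches)
expected_qalys = sum(p * q for p, _, q in branches)
print(round(expected_cost, 2), round(expected_qalys, 3))  # → 740.0 9.68
```

Each additional test, sequence, or cut-off multiplies the branches, which is exactly why the full tree quickly outgrows any readable diagram.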
An alternative approach is to decouple the model diagram from the mathematical model and break down the decision problem into steps. The figure below shows an example of how the model diagram can be decoupled from the mathematical model.
The diagram breaks the problem down into steps that relate to the clinical pathway, and therefore, to the stakeholders. In this example, the diagram follows the questions that clinicians and patients may ask: which test to do first? Given the result of the first test, should a second test be done? If a second test is done, which one?
## Relate the results to the model diagram
The next point of contact between the health economists and lay stakeholders is likely to be at the point when the first cost-effectiveness results are available.
The typical chart for the probabilistic results is the cost-effectiveness acceptability curve (CEAC). In my experience, the CEAC is challenging for lay stakeholders. It plots results over a range of cost-effectiveness thresholds, which are not quantities that most people outside cost-effectiveness analysis relate to. Additionally, CEACs showing the results of multiple strategies can have many lines and some discontinuities, which can be difficult for the untrained eye to interpret.
An alternative approach is to re-use the model diagram to present the results. The model diagram can show the strategy that is expected to be cost-effective and its probability of cost-effectiveness at the relevant threshold. For example, the probability that the strategies starting with a specific test are cost-effective is X%; and the probability that strategies using the specific test at a specific cut-off are cost-effective is Y%, etc.
## Next steps for practice and research
Research on the communication of cost-effectiveness analysis is sparse, and formal guidance is lacking. Beyond the general advice to speak in plain English and avoid jargon, there is little to go on. Hence, health economists find themselves developing their own approaches and techniques.
In my experience, the key aspects for effective communication are to engage with lay stakeholders from the start of the model development, to explain the intuition behind the model in simplified diagrams, and to find a balance between scientific accuracy and clarity which is appropriate for the audience.
More research and guidance are clearly needed to develop communication methods that are effective and straightforward to use in applied cost-effectiveness analysis. Perhaps this is where patient and public involvement can really make a difference!
# Bad reasons not to use the EQ-5D-5L
We’ve seen a few editorials and commentaries popping up about the EQ-5D-5L recently, in Health Economics, PharmacoEconomics, and PharmacoEconomics again. All of these articles have – to varying extents – acknowledged the need for NICE to exercise caution in the adoption of the EQ-5D-5L. I don’t get it. I see no good reason not to use the EQ-5D-5L.
If you’re not familiar with the story of the EQ-5D-5L in England, read any of the linked articles, or see an OHE blog post summarising the tale. The important part of the story is that NICE has effectively recommended the use of the EQ-5D-5L descriptive system (the questionnaire), but not the new EQ-5D-5L value set for England. Of the new editorials and commentaries, Devlin et al are vaguely pro-5L, Round is vaguely anti-5L, and Brazier et al are vaguely on the fence. NICE has manoeuvred itself into a situation where it has to make a binary decision. 5L, or no 5L (which means sticking with the old EQ-5D-3L value set). Yet nobody seems keen to lay down their view on what NICE ought to decide. Maybe there’s a fear of being proven wrong.
So, herewith a list of reasons for exercising caution in the adoption of the EQ-5D-5L, which are either explicitly or implicitly cited by recent commentators, and why they shouldn’t determine NICE’s decision. The EQ-5D-5L value set for England should be recommended without hesitation.
## We don’t know if the descriptive system is valid
Round argues that while the 3L has been validated in many populations, the 5L has not. Diabetes, dementia, deafness and depression are presented as cases where the 3L has been validated but the 5L has not. But the same goes for the reverse. There are plenty of situations in which the 3L has been shown to be problematic and the 5L has not. It’s simply a matter of time. This argument should only hold sway if we expect there to be more situations in which the 5L lacks validity, or if those violations are in some way more serious. I see no evidence of that. In fact, we see measurement properties improved with the 5L compared with the 3L. Devlin et al put the argument to bed in highlighting the growing body of evidence demonstrating that the 5L descriptive system is better than the 3L descriptive system in a variety of ways, without any real evidence that there are downsides to the descriptive expansion. And this – the comparison of the 3L and the 5L – is the correct comparison to be making, because the use of the 3L represents current practice. More fundamentally, it’s hard to imagine how the 5L descriptive system could be less valid than the 3L descriptive system. That there are only a limited number of validation studies using the 5L is only a problem if we can hypothesise reasons for the 5L to lack validity where the 3L held it. I can’t think of any. And anyway, NICE is apparently satisfied with the descriptive system; it’s the value set they’re worried about.
## We don’t know if the preference elicitation methods are valid for states worse than dead
This argument is made by Brazier et al. The value set for England uses lead time TTO, which is a relatively new (and therefore less-tested) method. The problem is that we don’t know if any methods for valuing states worse than dead are valid because valuing states worse than dead makes no real sense. Save for pulling out a Ouija board, or perhaps holding a gun to someone’s head, we can never find out what is the most valid approach to valuing states worse than dead. And anyway, this argument fails on the same basis as the previous one: where is the evidence to suggest that the MVH approach to valuing states worse than dead (for the EQ-5D-3L) holds more validity than lead time TTO?
## We don’t know if the EQ-VT was valid
As discussed by Brazier et al, it looks like there may have been some problems in the administration of the EuroQol valuation protocol (the EQ-VT) for the EQ-5D-5L value set. As a result, some of the data look a bit questionable, including large spikes in the distribution of values at 1.0, 0.5, 0.0, and -1.0. Certainly, this justifies further investigation. But it shouldn’t stall adoption of the 5L value set unless this constitutes a greater concern than the distributional characteristics of the 3L, and that’s not an argument I see anybody making. Perhaps there should have been more piloting of the EQ-VT, but that should (in itself) have no bearing on the decision of whether to use the 3L value set or the 5L value set. If the question is whether we expect the EQ-VT protocol to provide a more accurate estimation of health preferences than the MVH protocol – and it should be – then as far as I can tell there is no real basis for preferring the MVH protocol.
## We don’t know if the value set (for England) is valid
Devlin et al state that, with respect to whether differences in the value sets represent improvements, “Until the external validation of the England 5L value set concludes, the jury is still out.” I’m not sure that’s true. I don’t know what the external validation is going to involve, but it’s hard to imagine a punctual piece of work that could demonstrate the ‘betterness’ of the 5L value set compared with the 3L value set. Yes, a validation exercise could tell us whether the value set is replicable. But unless validation of the comparator (i.e. the 3L value set) is also attempted and judged on the same basis, it won’t be at all informative to NICE’s decision. Devlin et al state that there is a governmental requirement to validate the 5L value set for England. But beyond checking the researchers’ sums, it’s difficult to understand what that could even mean. Given that nobody seems to have defined ‘validity’ in this context, this is a very dodgy basis for determining adoption or non-adoption of the 5L.
## 5L-based evaluations will be different to 3L-based evaluations
Well, yes. Otherwise, what would be the point? Brazier et al present this as a justification for a ‘pause’ for an independent review of the 5L value set. The authors present the potential shift in priority from life-improving treatments to life-extending treatments as a key reason for a pause. But this is clearly a circular argument. Pausing to look at the differences will only bring those (and perhaps new) differences into view (though notably at a slower rate than if the 5L was more widely adopted). And then what? We pause for longer? Round also mentions this point as a justification for further research. This highlights a misunderstanding of what it means for NICE to be consistent. NICE has no responsibility to make decisions in 2018 precisely as it would have in 2008. That would be foolish and ignorant of methodological and contextual developments. What NICE needs to provide is consistency in the present – precisely what is precluded by the current semi-adoption of the EQ-5D-5L.
## 5L data won’t be comparable to 3L data
Round mentions this. But why does it matter? This is nothing compared to the trickery that goes on in economic modelling. The whole point of modelling is to do the best we can with the data we’ve got. If we have to compare an intervention for which outcomes are measured in 3L values with an intervention for which outcomes are measured in 5L values, then so be it. That is not a problem. It is only a problem if manufacturers strategically use 3L or 5L values according to whichever provides the best results. And you know what facilitates that? A pause, where nobody really knows what is going on and NICE has essentially said that the use of both 3L and 5L descriptive systems is acceptable. If you think mapping from 5L to 3L values is preferable to consistently using the 5L values then, well, I can’t reason with you, because mapping is never anything but a fudge (albeit a useful one).
## There are problems with the 3L, so we shouldn’t adopt the 5L
There’s little to say on this point beyond asserting that we mustn’t let perfect be the enemy of the good. Show me what else you’ve got that could be more readily and justifiably introduced to replace the 3L. Round suggests that shifting from the 3L to the 5L is no different to shifting from the 3L to an entirely different measure, such as the SF-6D. That’s wrong. There’s a good reason that NICE should consider the 5L as the natural successor to the 3L. And that’s because it is. This is exactly what it was designed to be: a methodological improvement on the same conceptual footing. The key point here is that the 3L and 5L contain the same domains. They’re trying to capture health-related quality of life in a consistent way; they refer to the same evaluative space. Shifting to the SF-6D (for example) would be a conceptual shift, whereas shifting to the 5L from the 3L is nothing but a methodological shift (with the added benefit of more up-to-date preference data).
## To sum up
Round suggests that the pause is because of “an unexpected set of results” arising from the valuation exercise. That may be true in part. But I think it’s more likely the fault of dodgy public sector deals with the likes of Richard Branson and a consequently algorithm-fearing government. I totally agree with Round that, if NICE is considering a new outcome measure, they shouldn’t just be considering the 5L. But given that right now they are only considering the 5L, and that the decision is explicitly whether or not to adopt the 5L, there are no reasons not to do so.
The new value set is only a step change because we spent the last 25 years idling. Should we really just wait for NICE to assess the value set, accept it, and then return to our see-no-evil position for the next 25 years? No! The value set should be continually reviewed and redeveloped as methods improve and societal preferences evolve. The best available value set for England (and Wales) should be regularly considered by NICE as part of a review of the reference case. A special ‘pause’ for the new 5L value set will only serve to reinforce the longevity of compromised value sets in the future.
Yes, the EQ-5D-3L and its associated value set for the UK has been brilliantly useful over the years, but it now has a successor that – as far as we can tell – is better in many ways and at least as good in the rest. As a public body, NICE is conservative by nature. But researchers needn’t be.
# The irrelevance of inference: (almost) 20 years on is it still irrelevant?
The Irrelevance of Inference was a seminal paper published by Karl Claxton in 1999. In it he outlines a stochastic decision making approach to the evaluation of health technologies. A key point that he makes is that we need only to examine the posterior mean incremental net benefit of one technology compared to another to make a decision. Other aspects of the distribution of incremental net benefits are irrelevant – hence the title.
I hated this idea. From a Bayesian perspective estimation and inference is a decision problem. Surely uncertainty matters! But, in the extra-welfarist framework that we generally conduct cost-effectiveness analysis in, it is irrefutable. To see why let’s consider a basic decision making framework.
There are three aspects to a decision problem. Firstly, there is a state of the world, $\theta \in \Theta$ with density $\pi(\theta)$. In this instance it is the net benefits in the population, but in other contexts it could be the state of the economy or the effectiveness of a medical intervention, for example. Secondly, there are the possible actions, denoted by $a \in \mathcal{A}$. There might be a discrete set of actions or a continuum of possibilities. Finally, there is the loss function $L(a,\theta)$. The loss function describes the losses or costs associated with making decision $a$ given that $\theta$ is the state of nature. The action that should be taken is the one which minimises expected losses $\rho(\theta,a)=E_\theta(L(a,\theta))$. Minimising losses can be seen as analogous to maximising utility. We also observe data $x=[x_1,...,x_N]'$ that provide information on the parameter $\theta$. Our state of knowledge regarding this parameter is then captured by the posterior distribution $\pi(\theta|x)$. Our expected losses should be calculated with respect to this distribution.
Given the data and posterior distribution of incremental net benefits, we need to make a choice about a value (a Bayes estimator), that minimises expected losses. The opportunity loss from making the wrong decision is “the difference in net benefit between the best choice and the choice actually made.” So the decision falls down to deciding whether the incremental net benefits are positive or negative (and hence whether to invest), $\mathcal{A}=[a^+,a^-]$. The losses are linear if we make the wrong decision:
$L(a^+,\theta) = 0$ if $\theta >0$ and $L(a^+,\theta) = -\theta$ if $\theta <0$
$L(a^-,\theta) = \theta$ if $\theta >0$ and $L(a^-,\theta) = 0$ if $\theta <0$
So we should decide that the incremental net benefits are positive if
$E_\theta(L(a^-,\theta)) - E_\theta(L(a^+,\theta)) > 0$
which is equivalent to
$\int_0^\infty \theta dF^{\pi(\theta|x)}(\theta) - \int_{-\infty}^0 -\theta dF^{\pi(\theta|x)}(\theta) = \int_{-\infty}^\infty \theta dF^{\pi(\theta|x)}(\theta) > 0$
which is obviously equivalent to $E(\theta|x)>0$ – the posterior mean!
If our aim is simply the estimation of net benefits (so $\mathcal{A} \subseteq \mathbb{R}$), different loss functions lead to different estimators. If we have a squared loss function $L(a, \theta)=|\theta-a|^2$ then again we should choose the posterior mean. However, other choices of loss function lead to other estimators. The linear loss function, $L(a, \theta)=|\theta-a|$ leads to the posterior median. And a ‘0-1’ loss function: $L(a, \theta)=0$ if $a=\theta$ and $L(a, \theta)=1$ if $a \neq \theta$, gives the posterior mode, which is also the maximum likelihood estimator (MLE) if we have a uniform prior. This latter point does suggest that MLEs will not give the ‘correct’ answer if the net benefit distribution is asymmetric. The loss function is therefore important. But for the purposes of the decision between technologies I see no good reason to reject our initial loss function.
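To make the link between loss function and Bayes estimator concrete, here is the standard one-line derivation (a textbook result, not spelled out in the original paper) showing why the linear loss function leads to the posterior median:

```latex
% Expected loss under L(a,\theta) = |\theta - a|:
\rho(a) = \int_{-\infty}^{a} (a - \theta)\, dF^{\pi(\theta|x)}(\theta)
        + \int_{a}^{\infty} (\theta - a)\, dF^{\pi(\theta|x)}(\theta)

% First-order condition (differentiate with respect to a):
\frac{d\rho}{da} = F^{\pi(\theta|x)}(a) - \left(1 - F^{\pi(\theta|x)}(a)\right) = 0
\quad \Longrightarrow \quad F^{\pi(\theta|x)}(a) = \tfrac{1}{2}
```

so the minimising $a$ is the point where the posterior distribution function equals one half, i.e. the posterior median, as claimed.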
Claxton also noted that equity considerations could be incorporated through ‘adjustments to the measure of outcome’. This could be some kind of weighting scheme. However, this is where I might begin to depart from the claim of the irrelevance of inference. I prefer a social decision maker approach to evaluation in the vein of cost-benefit analysis as discussed by the brilliant Alan Williams. This approach allows for non-market outcomes that extra-welfarism might include but classical welfarism would exclude; their valuations could be arrived at by a political, democratic process or by other means. It also permits inequality aversion and other features that I find are perhaps a more accurate reflection of a political decision making approach. However, one must be aware of all the flaws and failures of this approach, which Williams so neatly describes.
In a social decision maker framework, the decision that should be made is the one that maximises a social welfare function. A utility function expresses social preferences over the distribution of utility in the population; the social welfare function aggregates utility and is usually assumed to be linear (utilitarian). If the utility function is inequality averse then the variance obviously does matter. But, in making this claim I am moving away from the arguments of Claxton’s paper and towards a discussion of the relative merits of extra-welfarism and other approaches.
Perhaps the statement that inference was irrelevant was made just to capture our attention. After all the process of updating our knowledge of the net benefits of alternatives from data is inference. But Claxton’s statement refers more to the process of hypothesis testing and p-values (or Bayesian ranges of equivalents), the use of which has no place in decision making. On this point I wholeheartedly agree.
# Arrays
## Introduction to Arrays
We have seen how to store single pieces of data in variables. What happens when we need to store a group of data? What if we have a list of students in a classroom? Or a ranking of the top 10 horses finishing a horse race?
If we were storing 5 lottery ticket numbers, for example, we could create a different variable for each value:
int firstNumber = 4;
int secondNumber = 8;
int thirdNumber = 12;
int fourthNumber = 16;
int fifthNumber = 20;
That is a lot of ungainly repeated code. What if we had 100 lottery numbers? It is cleaner and more convenient to use a Java array to store the data as a list.
An array holds a fixed number of values of one type. Arrays can hold doubles, ints, booleans, or any other primitive type. Arrays can also contain Strings and other object references!
Each index of an array corresponds with a different value. Here is a diagram of an array filled with integer values:
elements:  4   8   12   16   20
indices:   0   1   2    3    4
Similar to C and Python, the indexes start at 0! The element at index 0 is 4, while the element at index 1 is 8. This array has a length of 5, since it holds five elements, but the highest index of the array is 4.
### Creating an Array Explicitly
Imagine that we're using a program to keep track of the prices of different items we want to buy. We would want a list of the prices and a list of the items they correspond to. To create an array, we first declare the type of data it holds.
double[] prices;
Then, we can explicitly initialize the array to contain the data we want to store:
prices = new double[] {13.15, 15.87, 14.22, 16.55};
Just like with simple variables, we can declare and initialize in the same line:
double[] prices = {13.15, 15.87, 14.22, 16.55};
We can use arrays to hold Strings and other objects as well as primitives:
String[] clothingItems = {"Tank Top", "Beanie", "Funny Socks", "Pants"};
### Importing Arrays
If we want a descriptive printout of an array, we can use the toString() method provided by the java.util.Arrays class.
import java.util.Arrays;
We put this line at the top of the file, before we even define the class!
When we import a class like this, we make its methods available in our code.
The Arrays class has many useful methods, including Arrays.toString(). When we pass an array into Arrays.toString(), we can see the contents of the array printed out.
import java.util.Arrays;

public class Lottery {
  public static void main(String[] args) {
    int[] lotteryNumbers = {4, 8, 12, 16, 20};
    String betterPrintout = Arrays.toString(lotteryNumbers);
    System.out.println(betterPrintout);
  }
}
This code will print:
[4, 8, 12, 16, 20]
### Get Element By Index
Now that we have an array declared and initialized, we want to be able to get values out of it.
We use square brackets [] to access data at a certain index:
double[] prices = {13.1, 15.87, 14.22, 16.55};
System.out.println(prices[1]);
This command will print out 15.87.
This happens because 15.87 is the item at index 1 of the array.
If we try to access an element outside of its appropriate index range, we will receive an ArrayIndexOutOfBoundsException error.
For example, if we were to run the command System.out.println(prices[5]), we would get the following output:
java.lang.ArrayIndexOutOfBoundsException: 5
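One common way to avoid that crash is to check an index against the array's length before using it. The class name BoundsDemo and the message string below are just illustrative choices, not part of the original notes:

```java
public class BoundsDemo {
    public static void main(String[] args) {
        double[] prices = {13.1, 15.87, 14.22, 16.55};
        int index = 5;
        // Guard against out-of-bounds access before indexing:
        if (index >= 0 && index < prices.length) {
            System.out.println(prices[index]);
        } else {
            System.out.println("Index " + index + " is out of bounds");
        }
    }
}
```

Since index 5 is outside the valid range 0 to 3, this prints the out-of-bounds message instead of throwing an exception.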
### Creating an Empty Array
We can also create empty arrays and then fill the items one by one. Empty arrays have to be initialized with a fixed size:
String[] menuItems = new String[5];
Once you declare this size, it cannot be changed! This array will always be of size 5.
After declaring and initializing, we can set each index of the array to be a different item:
menuItems[0] = "Veggie hot dog";
menuItems[1] = "Potato salad";
menuItems[2] = "Cornbread";
menuItems[3] = "Roasted broccoli";
menuItems[4] = "Coffee ice cream";
This group of commands has the same effect as assigning the entire array at once:
String[] menuItems = {"Veggie hot dog", "Potato salad", "Cornbread", "Roasted broccoli", "Coffee ice cream"};
We can also change an item after it has been assigned! Let's say this restaurant is changing its broccoli dish to a cauliflower one:
menuItems[3] = "Baked cauliflower";
Now the array looks like:
["Veggie hot dog", "Potato salad", "Cornbread", "Baked cauliflower", "Coffee ice cream"]
### Array Length
What if we have an array storing all the usernames for our program, and we want to quickly see how many users we have? To get the length of an array, we can access the length field of the array object:
String[] menuItems = new String[5];
System.out.println(menuItems.length);
This command would print 5, since the menuItems array has 5 slots, even though they are all empty.
If we print out the length of the prices array:
double[] prices = {13.1, 15.87, 14.22, 16.55};
System.out.println(prices.length);
We would see 4, since there are four items in the prices array!
### String[] args
When we write main() methods for our programs, we use the parameter String[] args. Now that we know about array syntax, we can start to parse what that means.
A String[] is an array made up of Strings. Examples of String arrays:
String[] humans = {"Nick", "Alyssa", "Matt", "Nathan"};
String[] robots = {"R2D2", "Marvin", "Wall-E", "Bender"};
The args parameter is another example of a String array. In this case, the array args contains the arguments that we pass in from the terminal when we run the class file. (So far args has been empty.)
So how can you pass arguments to main()? Let's say we have this class HelloYou:
public class HelloYou {
  public static void main(String[] args) {
    System.out.println("Hello " + args[0]);
  }
}
When we run the file HelloYou in the terminal with an argument of "Laura":
java HelloYou Laura
We get the output:
Hello Laura
The String[] args would be interpreted as an array with one element, "Laura". When we use args[0] in the main method, we can access that element like we did in HelloYou.
We can actually create if-else statements that run based on the input:
if (args[0].equals("A")) {
  // do something
} else if (args[0].equals("B")) {
  // do something
} else {
  // do something
}
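Because args is empty when no arguments are passed, args[0] would throw an ArrayIndexOutOfBoundsException. A defensive pattern is to check args.length first and fall back to a default. The class name Greeter and the default "World" are illustrative, not from the original notes:

```java
public class Greeter {
    public static void main(String[] args) {
        // args.length tells us how many command-line arguments were passed
        String name = (args.length > 0) ? args[0] : "World";
        System.out.println("Hello " + name);
    }
}
```

Run with `java Greeter Laura` this prints "Hello Laura"; run with no arguments it prints "Hello World" instead of crashing.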
### Arrays Review
We have now seen how to store a list of values in an array. We can use this knowledge to make organized programs with more complex variables.
Throughout these notes, we have learned about:
• Creating arrays explicitly, using { and }.
• Accessing an index of an array, using [ and ].
• Creating empty arrays of a certain size, and filling the indices one by one.
• Getting the length of an array using .length.
• Using the argument args that is passed into the main() method of a class.
Let's create a small program that holds student names and test scores with the following characteristics:
• Make an array of strings called students with the following names: Sade, Alexus, Sam, Koma.
• Create an empty array of doubles called mathScores of size 4.
• Sade got a 94.5 on the test. Store this value at the same index that she is listed at in the students array.
• Sam got a 76.8. Store this value in the appropriate spot in the mathScores array.
• Finally, add a print statement that says: "The number of students in the class is numStudents." using the .length operator.
import java.util.Arrays;

public class Classroom {
  public static void main(String[] args) {
    String[] students = {"Sade", "Alexus", "Sam", "Koma"};
    double[] mathScores = new double[4];

    mathScores[0] = 94.5;
    mathScores[2] = 76.8;

    System.out.println("The number of students in the class is " + students.length + ".");
  }
}
## Introduction to ArrayLists
When we work with arrays in Java, we've been limited by the fact that once an array is created, it has a fixed size. We can't add or remove elements.
But what if we needed to add to the book lists, newsfeeds, and other structures we were using arrays to represent?
To create mutable and dynamic lists, we can use Java's ArrayList class. ArrayLists allow us to:
• Store object references as elements
• Store elements of the same type (just like arrays)
• Access elements by index (just like arrays)
• Remove elements
Remember how we had to import java.util.Arrays in order to use additional array methods? To use the ArrayList class at all, we need to import it from Java's util package as well:
import java.util.ArrayList;
### Creating ArrayLists
To create an ArrayList, we need to declare the type of object it will hold, just as we do with arrays:
ArrayList<String> babyNames;
We use angle brackets (<>) to declare the type of the ArrayList. These symbols are used for generics. Generics are a Java construct that lets us parameterise classes like ArrayList by the type they hold. Because generic type parameters must be classes, we can't use primitive types in an ArrayList:
// This code won't compile:
ArrayList<int> ages;

// This code will compile:
ArrayList<Integer> ages;
The <Integer> generic has to be used in an ArrayList instead. You can also use <Double> and <Character> for values you would normally declare as doubles or chars.
We can initialize an empty ArrayList using the new keyword:
// Declaring:
ArrayList<Integer> ages;

// Initializing:
ages = new ArrayList<Integer>();

// Declaring and initializing in one line:
ArrayList<String> babyNames = new ArrayList<String>();
#### Adding Items to an ArrayList
Now we have an empty ArrayList, but how do we get it to store values?
ArrayList comes with an add() method which inserts an element into the structure. There are two ways we can use add().
If we want to add an element to the end of the ArrayList, we'll call add() using only one argument that represents the value we are inserting. In this example, we'll add objects from the Car class to an ArrayList called carShow:
ArrayList<Car> carShow = new ArrayList<Car>();

carShow.add(ferrari);
// carShow now holds [ferrari]
carShow.add(thunderbird);
// carShow now holds [ferrari, thunderbird]
carShow.add(volkswagen);
// carShow now holds [ferrari, thunderbird, volkswagen]
If we want to add an element at a specific index of our ArrayList, we'll need two arguments in our method call: the first argument will define the index of the new element, while the second argument defines the value of the new element:
// Insert object corvette at index 1
carShow.add(1, corvette);
// carShow now holds [ferrari, corvette, thunderbird, volkswagen]

// Insert object porsche at index 2
carShow.add(2, porsche);
// carShow now holds [ferrari, corvette, porsche, thunderbird, volkswagen]
By inserting a value at a specified index, any elements that appear after this new element will have their index shifted over by 1.
Also, note that an error will occur if we try to insert a value at an index that does not exist.
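Both behaviors can be seen in a small runnable sketch. The class name InsertDemo and the single-letter values are illustrative choices, not from the original notes:

```java
import java.util.ArrayList;

public class InsertDemo {
    public static void main(String[] args) {
        ArrayList<String> letters = new ArrayList<String>();
        letters.add("a");
        letters.add("c");
        // Inserting at index 1 shifts "c" (and anything after it) right by 1:
        letters.add(1, "b");
        System.out.println(letters);
        // letters.add(5, "z"); // would throw IndexOutOfBoundsException,
        //                      // since only indices 0..3 are valid for add()
    }
}
```

This prints [a, b, c], showing the shifted positions after the insertion.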
You are able to add multiple data types to the same ArrayList using add():
ArrayList assortment = new ArrayList<>();
assortment.add("Hello"); // String
assortment.add(12); // Integer
assortment.add(ferrari); // reference to Car
// assortment holds ["Hello", 12, ferrari]
In this case, the items stored in this ArrayList will be considered Objects. As a result, they won’t have access to some of their methods without doing some fancy casting. Although this type of ArrayList is allowed, using an ArrayList that specifies its type is preferred.
### ArrayList Size
Let's say we have an ArrayList that stores items in a user's online shopping cart. As the user navigates through the site and adds items, their cart grows bigger and bigger.
If we wanted to display the number of items in the cart, we could find the size of it using the size() method:
ArrayList<String> shoppingCart = new ArrayList<String>();

shoppingCart.add("Goofy socks");
System.out.println(shoppingCart.size());
// 1 is printed
shoppingCart.add("Funny tie");
System.out.println(shoppingCart.size());
// 2 is printed
shoppingCart.add("HK-416 Assault Rifle");
System.out.println(shoppingCart.size());
// 3 is printed
In dynamic objects like ArrayLists, it's important to know how to access the amount of objects we have stored.
### Accessing an Index
With arrays, we can use bracket notation to access a value at a particular index:
double[] ratings = {3.2, 5.5, 1.6};
System.out.println(ratings[1]);
// this will print 5.5
This code prints 5.5, the value at index 1 of the array.
For ArrayLists, bracket notation won't work. Instead we use the method get() to access an index:
ArrayList<String> shoppingCart = new ArrayList<String>();

shoppingCart.add("Goofy Socks");
shoppingCart.add("Funny tie");
shoppingCart.add("HK-416 Assault Rifle");

System.out.println(shoppingCart.get(2));
This code prints "HK-416 Assault Rifle", which is the value at index 2 of the ArrayList.
### Changing a Value
When we were using arrays, we could rewrite entries by using bracket notation to reassign values:
String[] shoppingCart = {"Goofy Socks", "Funny tie", "HK-416 Assault Rifle"};
shoppingCart[0] = "Serious Socks";
// This overwrites the "Goofy Socks" string with "Serious Socks"
ArrayList has a slightly different way of doing this, using the set() method:
ArrayList<String> shoppingCart = new ArrayList<String>();
shoppingCart.add("Goofy Socks");
shoppingCart.add("Funny tie");
shoppingCart.add("HK-416 Assault Rifle");

shoppingCart.set(0, "Serious Socks");
// shoppingCart now holds ["Serious Socks", "Funny tie", "HK-416 Assault Rifle"]
### Removing an Item
What if we wanted to get rid of an entry altogether? For arrays, we would have to make a completely new array without the value.
Luckily, ArrayLists allow us to remove an item by specifying the index to remove:
ArrayList<String> shoppingCart = new ArrayList<String>();
shoppingCart.add("Goofy Socks");
shoppingCart.add("Funny tie");
shoppingCart.add("HK-416 Assault Rifle");

shoppingCart.remove(1);
// shoppingCart now holds ["Goofy Socks", "HK-416 Assault Rifle"]
We can also remove an item by specifying the value to remove:
ArrayList<String> shoppingCart = new ArrayList<String>();
shoppingCart.add("Goofy Socks");
shoppingCart.add("Funny tie");
shoppingCart.add("HK-416 Assault Rifle");

shoppingCart.remove("Funny tie");
// shoppingCart now holds ["Goofy Socks", "HK-416 Assault Rifle"]
Note: This command removes the FIRST instance of the value "Funny tie".
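The first-instance behavior is easy to verify with duplicate values. The class name RemoveDemo is an illustrative choice, not from the original notes:

```java
import java.util.ArrayList;

public class RemoveDemo {
    public static void main(String[] args) {
        ArrayList<String> cart = new ArrayList<String>();
        cart.add("Funny tie");
        cart.add("Goofy Socks");
        cart.add("Funny tie");
        // Only the FIRST "Funny tie" (at index 0) is removed:
        cart.remove("Funny tie");
        System.out.println(cart);
    }
}
```

This prints [Goofy Socks, Funny tie]; the second "Funny tie" is still in the list.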
### Getting an Item's Index
What if we had a really large list and wanted to know the position of a certain element in it? For instance, what if we had an ArrayList detectives with the names of fictional detectives in chronological order, and we wanted to know what position "Fletcher" was at:
// detectives holds ["Holmes", "Poirot", "Marple", "Spade", "Fletcher", "Conan", "Ramotswe"]
System.out.println(detectives.indexOf("Fletcher"));
This code would print 4, since "Fletcher" is at index 4 of the detectives ArrayList.
### ArrayLists Review
Some crucial methods in ArrayLists:
• Adding a new ArrayList item using add().
• Accessing the size of an ArrayList using size().
• Finding an item by index using get().
• Changing the value of an ArrayList item using set().
• Removing an item by index or by value using remove().
• Retrieving the index of an item with a specific value using indexOf().
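A single runnable sketch tying the review methods together (the class name ShoppingCartDemo is ours):

```java
import java.util.ArrayList;

public class ShoppingCartDemo {
    public static void main(String[] args) {
        ArrayList<String> shoppingCart = new ArrayList<>();

        // add(): append items to the end of the list
        shoppingCart.add("Goofy Socks");
        shoppingCart.add("Funny tie");
        shoppingCart.add("HK-416 Assault Rifle");

        System.out.println(shoppingCart.size()); // size(): prints 3
        System.out.println(shoppingCart.get(2)); // get(): prints HK-416 Assault Rifle

        shoppingCart.set(0, "Serious Socks");    // set(): replaces index 0
        shoppingCart.remove("Funny tie");        // remove(): drops the first matching value

        // indexOf(): prints 1, since the removal shifted the last item down one slot
        System.out.println(shoppingCart.indexOf("HK-416 Assault Rifle"));
    }
}
```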
Find the units digits of \(1 + 9 + 9^2 + 9^3 + \dots + 9^{100}\)
Jul 6, 2020
#1
sumfor(n, 0, 100, 9^n) = 2.988157375E+95 - This number ends in 501
Jul 6, 2020
#2
Every even power of 9 ends in 1, every odd power ends in 9. There are 51 even powers (including 9^0) and 50 odd powers.
Summing these we get 51*1 + 50*9 = 501, so the sum ends in 1.
Jul 6, 2020
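The hand count above is easy to verify by brute force with exact integer arithmetic; here is a sketch using Java's BigInteger (the class name UnitsDigit is ours):

```java
import java.math.BigInteger;

public class UnitsDigit {
    public static void main(String[] args) {
        BigInteger nine = BigInteger.valueOf(9);
        BigInteger sum = BigInteger.ZERO;

        // Accumulate 1 + 9 + 9^2 + ... + 9^100 exactly
        for (int n = 0; n <= 100; n++) {
            sum = sum.add(nine.pow(n));
        }

        System.out.println(sum.mod(BigInteger.TEN)); // units digit: prints 1
    }
}
```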
1. Aashish Clerk
Introduction to Quantum Optomechanics
I’ll use these lectures to provide a somewhat selective introduction to the field of cavity quantum optomechanics. Lec. 1 will give a general overview of the field and an introduction to the basic theory describing phenomena in the most experimentally relevant "mean-field" regime, where a strong drive is required to see effects of the optomechanical interaction; we will discuss things like dynamical backaction, cavity cooling and optomechanically-induced transparency. Lec. 2 will focus on theory describing the quantum nonlinear regime, where the optomechanical coupling plays a role at the single photon and single phonon level; I will discuss proposals for how suitably chosen photonic and mechanical drives could help enhance these nonlinear quantum effects. Finally, in Lec. 3, I will turn to the general topic of reservoir engineering in quantum optomechanics, focusing on approaches for stabilizing squeezed and entangled states, and for generating non-reciprocal interactions and devices.
2. Abhishek Dhar
A discussion on quantum heat baths
The starting microscopic model of most quantum mechanical models of heat baths is the same, namely it consists of a gas of non-interacting particles in thermal equilibrium. The master equation approach and the Langevin equation approach are two different ways of studying the effective dynamics of a system coupled to such a bath. The talk will present a comparison of these two approaches in out-of-equilibrium applications.
3. Anatoli Polkovnikov
Counter-diabatic driving in complex systems
In this talk I will discuss how one can construct approximate local counter-diabatic driving protocols, which can suppress dissipation and increase fidelity of the state preparation in interacting systems (both quantum and classical).
4. Anirban Dutta
Anti-Kibble-Zurek Behavior in Crossing the Quantum Critical Point
TBA
5. Andal Narayanan
Induced transparency in cyclic atomic systems in contact with a thermal bath
Electromagnetic (EM) waves are the fastest and least distortable information carriers. Along with the transport of information comes the requirement for storage. This is usually done in a material medium, often through lossy dipole interaction with the EM waves, which in turn produces induced atomic dipoles in matter. The phenomenon of electromagnetically induced transparency made such an interaction essentially lossless in the strongest coupling parameter regime of light and matter. In this talk, the effect of having both an electric and a magnetic dipole interaction in the same system will be investigated, with an emphasis on the induced transparency effect. The talk is based on theoretical and experimental studies focusing on the influence of thermal photons, which strongly affect the coherence of levels connected by the magnetic dipole coupling.
[1] Effects of temperature and ground-state coherence decay on enhancement and amplification in a Δ atomic system. Phys. Rev. A 90, 043859 (2014)
[2] Demonstration of a high-contrast optical switching in an atomic Delta system, accepted in Journal of Physics B (2017).
6. Apoorva Patel
Understanding the Born Rule in Weak Measurements
Projective measurement is used as a fundamental axiom in quantum mechanics, even though it is discontinuous and cannot predict which measured operator eigenstate will be observed in which experimental run. The probabilistic Born rule gives it an ensemble interpretation, predicting proportions of various outcomes over many experimental runs. Understanding gradual weak measurements requires replacing this scenario with a dynamical evolution equation for the collapse of the quantum state in individual experimental runs. We revisit the framework to model quantum measurement as a continuous nonlinear stochastic process. It combines attraction towards the measured operator eigenstates with white noise, and for a specific ratio of the two reproduces the Born rule. This fluctuation-dissipation relation implies that the quantum state collapse involves the system-apparatus interaction only, and the Born rule is a consequence of the noise contributed by the apparatus. The ensemble of the quantum trajectories is predicted by the stochastic process in terms of a single evolution parameter, and matches well with the weak measurement results for superconducting transmon qubits.
7. Arindam Ghosh
The zigzag (ZZ) edges of both single and bilayer graphene are perfect one-dimensional (1D) conductors due to a set of zero-energy gapless states that are topologically protected against backscattering. Competing effects of edge topology and electron-electron interaction in these channels have been probed with scanning probe microscopy, which reveals unique local thermodynamic and magnetic properties. Direct evidence of edge-bound electrical conduction, however, has remained experimentally elusive, primarily due to the lack of graphitic nanostructures with low structural and/or chemical edge disorder, as well as a clear understanding of the impact of edge disorder and confinement on electrical transport. In this talk I shall present a new method to observe ballistic edge-mode transport in suspended atomic-scale constrictions of single and multilayer graphene, created during nanomechanical exfoliation of graphite, which manifests in quantization of conductance close to multiples of e^2/h even at room temperature [1]. I shall highlight the specific case of electrically biased bilayer graphene, where the conductance at low temperatures will be shown to possess non-trivial localization properties, as expected from topologically protected edge states in the presence of inter-valley scattering [2].
[1] A. Kinikar, T. P. Sai, S. Bhattacharyya, A. Agarwala, T. Biswas, S. Sarker, H. R. Krishnamurthy, M. Jain, V. Shenoy, A. Ghosh, Nature Nanotechnology (2017) doi:10.1038/nnano.2017.24.
[2] Md. A. Aamir, P. Karnatak and A. Ghosh (Under review).
8. Arnab Das
Signature of Quantum Phase Transitions in highly excited non-equilibrium states
Quantum phase transitions refer to non-analytic changes in ground state properties of matter as a parameter of the system is tuned through a critical value. In this talk we will demonstrate that signatures of such ground state transitions can appear as strong non-analytic features in highly excited non-equilibrium states with finite energy density and extensive entanglement entropy, created by quantum quenches.
9. Arnab Sen
Aperiodically driven integrable systems and their emergent steady states
Does a closed quantum many-body system that is continually driven with a time-dependent Hamiltonian finally reach a steady state? This question has only recently been answered for driving protocols that are periodic in time, where the long time behavior of the local properties synchronize with the drive and can be described by an appropriate periodic ensemble. Here, we explore the consequences of breaking the time-periodic structure of the drive with additional aperiodic noise in a class of integrable systems. We show that the resulting unitary dynamics leads to new emergent steady states in at least two cases. While any typical realization of random noise causes eventual heating to an infinite temperature ensemble for all local properties in spite of the system being integrable, noise which is self-similar in time leads to an entirely different steady state, which we dub as "geometric generalized Gibbs ensemble", that emerges only after an astronomically large time scale. To understand the approach to steady state, we study the temporal behavior of certain coarse-grained quantities in momentum space that fully determine the reduced density matrix for a subsystem with size much smaller than the total system. Such quantities provide a concise description for any drive protocol in integrable systems that are reducible to a free fermion representation.
10. Arul Lakshminarayan
On the entanglement spectrum: From the Levy distribution to the Tracy-Widom.
Coupling quantum systems whose dynamics is already non-integrable provides an interesting range of universal spectral behaviors for the reduced density matrices of the eigenstates. The spectra are interesting in as much as they determine the entanglement in these states. This talk explores the range of possibilities for the largest eigenvalue, from power laws and the stable Levy distribution in the perturbative regime, to the more well-understood statistics governed by random matrix theory in the strong coupling regime. We look at two- and many-body systems as examples.
11. Barry Garraway
Decay of quantum systems analysed with pseudomodes of reservoir structures
Reservoir structures result from certain types of non-uniform bath spectral density. When these structures are coupled to simple quantum systems the resulting decay can be analysed by the method of "pseudomodes", where the reservoir structure is replaced by an effective mode [1]. The approach is useful for strongly coupled, i.e. non-Markovian, problems, since exact master equations can be derived. In this talk, an introduction to the basics of pseudomode theory will be given, together with developments on reservoir memory [2,3] and entanglement in such reservoir structures [4].
[1] Decay of an atom coupled strongly to a reservoir, B.M. Garraway, Phys. Rev. A 55, 4636 (1997).
[2] Pseudomodes as an effective description of memory: Non-Markovian dynamics of two-state systems in structured reservoirs, L. Mazzola, S. Maniscalco, J. Piilo, K.-A. Suominen, and B.M. Garraway, Phys. Rev. A. 80, 012104 (2009).
[3] An application of quantum Darwinism to a structured environment, G. Pleasance and B.M. Garraway, in preparation (2017).
[4] Generation of entanglement density within a reservoir, C. Lazarou, B.M. Garraway, J. Piilo, and S. Maniscalco, J. Phys. B 44, 065505 (2011).
Information driven quantum dot heat engines
In this talk, we discuss some novel heat-engine functionalities using quantum dots [1-3] from a quantum transport perspective. We discuss quantum dot heat engines [1] driven by classical information via a hyperfine coupled quantum dot set up and present the unique characteristics that relate to information driven heat engines. Moving on to quantum information processing and heat engines, we employ a Lindblad approach [2] to a triple quantum dot system connected to collinear leads in order to demonstrate heat engine functionality that can generate quantum information in an ancillary system.
References:
[1] S. Datta, arXiv:0704.1623 (2007).
[2] B. Muralidharan and M. Grifoni, Phys. Rev. B, 88, 045402, (2013).
[3] B. De and B. Muralidharan, Phys. Rev. B, 94, 165416, (2016).
13. Bijay Kumar Agarwalla
Non-equilibrium statistical physics for small quantum systems: Transport, fluctuations and Engineered devices
I will talk about two different aspects of nonequilibrium statistical physics:
1. Engineered light-matter open quantum systems and microscopic principles for giant photon amplification.
2. A brief overview of quantum transport including universal fluctuation relations and effect of electron-phonon interaction on charge transport.
14. C.J. Bolech
Bosonization-debosonization and the nonequilibrium Kondo problem
After critically reexamining in the previous talk the Bosonization-deBosonization (BdB) procedure for systems including ‘boundaries’ and subsequently introducing a Consistent BdB procedure to address shortcomings that were found in transport calculations [1], we turn our attention to the physics of quantum dots. Under the right conditions, such dots can attain the Kondo regime in which tunneling conduction is possible at low temperatures despite the Coulomb blockade. We study this physics by focusing on the two-lead Kondo model [2]. The bosonization formalism can be used to access a solvable limit of this model known as its Toulouse point. I shall show that a consistent BdB procedure yields a modified set of physical results that are in better agreement with the phenomenology of the problem. Besides its general experimental relevance, the Toulouse limit of the two-lead Kondo model is a key theoretical prototype of a strongly correlated system away from equilibrium but nevertheless admitting a closed solution.
References:
[1] Nayana Shah and C. J. Bolech, Phys. Rev. B 93, 085440 (2016).
[2] C. J. Bolech and Nayana Shah, Phys. Rev. B 93, 085441 (2016).
15. Camille Aron
(Non) equilibrium dynamics: a (broken) symmetry
It is fascinating that most many-body systems, if unperturbed, tend to relax towards thermal equilibrium. I will discuss a recent result showing that quantum equilibrium dynamics can be elevated to the rank of a universal (model-independent) symmetry of Keldysh field theories. This fundamental symmetry imposes strong constraints on the equilibrium correlation functions. More importantly, it allows one to study non-equilibrium dynamics as symmetry-breaking processes, providing important clues on the so-far poorly understood production of entropy in quantum mechanical systems.
16. Ciccarello Francesco
Non-Markovian dynamics of a qubit due to single-photon scattering in a waveguide
We investigate the open dynamics of a local qubit due to scattering of a single photon in a waveguide. By adapting techniques of waveguide quantum electrodynamics to the study of scattering time evolution in combination with tools of open quantum systems theory, we work out the general features of the qubit's dynamical map and assess in a rigorous way its non-Markovian nature. Two fundamental sources of non-Markovianity are shown: the finite width of the photon wavepacket and the presence of a hard-wall boundary condition.
Reference: Y.-L. L. Fang, F. Ciccarello, and H. Baranger, to appear on arXiv (2017).
17. Darrick Chang
Exponential improvement in photon storage fidelities using "subradiance" and "selective radiance" in atomic arrays
A central goal within quantum optics is to realize efficient, controlled interactions between photons and atomic media. A fundamental limit in nearly all applications based on such systems arises from spontaneous emission, in which photons are absorbed by atoms and then re-scattered into undesired channels. In typical theoretical treatments of atomic ensembles, it is assumed that this re-scattering occurs independently, and at a rate given by a single isolated atom, which in turn gives rise to standard limits of fidelity in applications such as quantum memories for light or photonic quantum gates. However, this assumption can in fact be dramatically violated. In particular, it has long been known that spontaneous emission of a collective atomic excitation can be significantly suppressed through strong interference in emission between atoms. While this concept of "subradiance" is not new, thus far the techniques to exploit the effect have not been well-understood. Here, we provide a comprehensive treatment of this problem. First, we show that in ordered atomic arrays in free space, subradiant states acquire an elegant interpretation in terms of optical modes that are guided by the array, which only emit due to scattering from the ends of the finite system. We also go beyond the typically studied regime of a single atomic excitation, and elucidate the properties of subradiant states in the many-excitation limit. Finally, we introduce the new concept of "selective radiance". Whereas subradiant states experience a reduced coupling to all optical modes, selectively radiant states are tailored to simultaneously radiate efficiently into a desired channel while scattering into undesired channels is suppressed, thus enabling an enhanced atom-light interface. We show that these states naturally appear in chains of atoms coupled to nanophotonic structures, and we analyze the performance of photon storage exploiting such states.
We find numerically that selectively radiant states allow for a photon storage error that scales exponentially better with number of atoms than previously known bounds.
18. Dibyendu Roy
An efficient method to study light propagation through nonlinear quantum media
I discuss a generalization of the quantum Langevin equations approach to study nonlinear light propagation through one-dimensional interacting open quantum lattice models. A matrix product operator description is developed to write and solve a large set of quantum Langevin equations of lattice operators obtained after integrating out the light fields. I will discuss an application of our method to a Heisenberg spin-1/2 chain with nearest-neighbor coupling. The transient and steady-state transport properties of an incoming monochromatic laser light are calculated for this model. I show how the local features of the spin chain and the chain-length dependence of the light transport coefficient behave with increasing power of the incident light.
19. Duncan O’Dell
Emergence of singularities from decoherence in a Josephson junction
I will discuss the emergence of singularities during the quantum-to-classical transition by analyzing the decoherence of a many-particle wave function in the vicinity of a classical caustic. In particular, a Josephson junction can be made by coupling two Bose-Einstein condensates; when the coupling is turned on suddenly the Gross-Pitaevskii mean-field theory, which describes a classical field, predicts that caustics (containing fold and cusp catastrophes) will form in the number-difference probability distribution. The caustics are singular and thus represent a failure of the classical theory, but are well-behaved in the many-body theory where atom number is quantized. However, if the system is additionally subjected to a weak continuous measurement the quantum state decoheres and classicality and hence the singularity are restored, potentially leading to a paradox.
20. Guido Burkard
Spin Qubits
These lectures will provide an introduction to the theory of spin qubits in quantum dots and defects. We will cover spin 1/2, singlet-triplet, and exchange-only qubits, as well as hybrid quantum systems consisting of spins in combination with an electromagnetic cavity. Various methods for quantum control and quantum gate operation in these systems will be discussed. Spin qubits will also be treated as an open system in contact with their electromagnetic and solid-state environment, and the interplay between spin and valley degrees of freedom in valley-degenerate materials such as carbon and silicon will be covered.
21. Hakan E. Tureci
Divergence-free Circuit Quantum Electrodynamics
Any quantum-confined electronic system coupled to the electromagnetic continuum is subject to radiative decay and renormalization of its energy levels. When inside a cavity, these quantities can be strongly modified with respect to their values in vacuum. In the planar circuit quantum electrodynamics architecture the radiative decay rate of a Josephson Junction qubit is strongly influenced by far off-resonant modes. A multimode calculation including all cavity modes however leads to divergences unless a cutoff is imposed. It has so far not been identified what the source of divergence is, or whether the divergence is a fundamental issue. I will show that unless gauge invariance is respected, any attempt at the calculation of circuit QED quantities is bound to diverge. I will then discuss a theoretical and computational framework based on a Heisenberg-Langevin approach to the calculation of a finite spontaneous emission rate and the Lamb shift, that is free of cutoff.
22. Harold Baranger
Photon Correlations in Waveguide QED: Rectification and Next-(Next-)Photon Statistics
Strong photon correlations are produced when even a few resonant emitters (qubits) are coupled to a photonic waveguide. These correlations result from the inelastic scattering caused by the nonlinearity of the emitters. I shall discuss two of our recent results in this area. First, we find that rectification is inherently connected to the inelastic scattering. Rectification occurs when two detuned qubits are coupled to the waveguide, and is enhanced when the detuning of the qubit frequencies is matched by a detuning of their separation from a half wavelength. We show that this condition corresponds to maximizing the inelastic scattering by driving a nearly dark pole in the system. Second, we investigate the "next-photon" and "next-next-photon" statistics in the case of one or two qubits coupled to the waveguide. These provide a more accurate characterization of photon bunching and anti-bunching than the customary g^(2)(0). The calculation is carried out in the Markovian approximation using quantum jump methods, with a jump operator that corresponds to single-photon detection by taking into account photon interference effects. I close by commenting on changes in photon correlations in the non-Markovian regime.
23. Jacqueline Bloch
Quantum fluids of light in semiconductor microcavities
Semiconductor microcavities appear today as a new platform for the study of quantum fluids of light. They enable confining both light and electronic excitations (excitons) in very small volumes. The resulting strong light-matter coupling gives rise to hybrid light-matter quasi-particles named cavity polaritons. Polaritons propagate like photons but strongly interact with their environment via their matter part: they are fluids of light and show fascinating properties such as superfluidity or nucleation of quantized vortices. Finally patterning microcavities at the micron scale allows the engineering of polariton band structure and emulation of a wide variety of interesting Hamiltonians. The goal of this pedagogical lecture is to give an introductory overview of this very rapidly evolving research field.
In the first part, basic linear properties of cavity polaritons will be introduced. The light-matter strong coupling regime and the formation of hybrid light-matter quasi-particles will be explained, together with experimental techniques to excite and probe these new quasi-particles. Effective mass, group velocity, pseudo-spin and band structure engineering will be addressed. We will then discuss how it is possible to trigger polariton condensation in a semiconductor microcavity, and manipulate polariton condensates in photonic circuits.
The second lecture will be devoted to polariton non-linearity. One of its spectacular manifestations is superfluidity and the disappearance of any scattering when a quantum fluid of light passes an obstacle. Topological excitations such as quantized vortices and solitons can also be generated in the wake of a defect. Another interesting manifestation of polariton Kerr nonlinearity is bistability: several experiments making use of such effect will be discussed.
In the last lecture, we will illustrate by several examples how polariton lattices allow emulating various Hamiltonians with distinct physical properties: quasi-crystal with fractal energy spectrum, 1D lattices with topological edge states or 2D honeycomb lattices emulating Dirac physics.
24. Jason Petta
Lecture 1: Introduction to quantum dots
Lecture 2: Cavity-coupled spin qubits
Lecture 3: Photoemission, masing, and strong-coupling in cavity-coupled charge qubits
Three pedagogical lectures will be given, starting with a basic description of quantum dot physics and ending with recently published results from cavity-coupled double quantum dots. Lecture 1: quantum dots, single electron charging, double quantum dot charge stability diagrams, and the double dot as a charge qubit. Lecture 2: hybrid quantum devices, charge-cavity coupling and readout, spin state control and readout. Lecture 3: an emphasis on non-equilibrium physics in cavity-coupled double dots. Photoemission and masing driven by single electron tunneling. Floquet-Sisyphus pumping of a single electron. Coherent coupling of a single charge to a single photon.
25. Jens Koch
Mapping repulsive to attractive interaction in driven-dissipative quantum systems
Repulsive and attractive interactions usually lead to very different physics. Striking exceptions exist in the dynamics of driven-dissipative quantum systems. For the example of a photonic Bose-Hubbard dimer, I will show that one can establish a one-to-one mapping relating the cases of onsite repulsion and attraction. This mapping is, in fact, valid for an entire class of Markovian open quantum systems with time-reversal invariant Hamiltonian and physically meaningful inverse-sign Hamiltonian. To underline the broad applicability of the mapping, I will illustrate the one-to-one correspondence between the nonequilibrium dynamics in a geometrically frustrated spin lattice and that in a non-frustrated partner lattice.
26. Jian-Hua Jiang
Optimal efficiency and power: universality, cooperative effects, and examples
Carnot's seminal work helped establish the second law of thermodynamics. The upper bound, the Carnot efficiency, is however usually far from the maximum efficiency that can be realized in a realistic thermodynamic machine. We discuss some universal properties of optimal efficiency and power that were found only recently. A cooperative effect is emphasized and related to current state-of-the-art quantum and classical engines. Several examples are given.
27. Jiang-min Zhang
Singular quench dynamics of a Bloch state
We report some nonsmooth dynamics of a Bloch state in a one-dimensional tight-binding model with the periodic boundary condition. After a sudden change of the potential of an arbitrary site, quantities like the survival probability of the particle in the initial Bloch state show cusps periodically, with the period being the Heisenberg time associated with the energy spectrum. This phenomenon is a nonperturbative counterpart of the nonsmooth dynamics observed previously (Zhang J. M. and Haque M., arXiv:1404.4280) in a periodically driven tight-binding model. Underlying the cusps is a Luttinger-like exactly solvable model, which consists of equally spaced levels extending from $-\infty$ to $+\infty$, between which two arbitrary levels are coupled to each other by the same strength. Besides the momentum space, we have also studied the same scenario in real space. The observation is that the probability density at an arbitrary site jumps indefinitely between plateaus.
[1] J. M. Zhang and H. T. Yang, Sudden jumps and plateaus in the quench dynamics of a Bloch state, EPL 116, 10008 (2016).
[2] J. M. Zhang and H. T. Yang, Cusps in the quench dynamics of a Bloch state, EPL 114, 60001 (2016).
[3] J. M. Zhang and Y. Liu, Fermi's golden rule: its derivation and breakdown by an ideal model, Eur. J. Phys. 37, 065406 (2016).
28. Johannes Hecker Denschlag
An ion in a sea of ultracold neutral atoms
In recent years several groups on an international scale have set up experiments where single laser-cooled trapped ions are immersed into a cloud of ultracold neutral atoms. One important motivation for this hybrid combination of cold neutral and charged particles is to study an open quantum system at a very high level of control. As an example, the ion can be viewed as an impurity that couples to an atomic bath via the relatively long-range 1/r^4 polarization potential. The interaction is predicted to lead to the formation of a polaron which can for the first time reach the strong coupling regime. Another example for interesting future research is to study transport in quantum gases by using the ion as a local probe. As a third example, decoherence phenomena in a bath can be investigated by measuring the decay of superposition states of the ion due to elastic and inelastic collisions with the neutral atoms. I will give a brief overview of some of the activities of our own group and several other research groups in the field. This will include a discussion of challenges that occurred in the meantime, apparent roadblocks and possible workarounds.
29. Jonathan Keeling
Quantum Many-Body Physics with Multimode Cavity QED
By placing cold atoms in multimode optical cavities, one can engineer classes of Hamiltonians and forms of dissipation that enable one to access novel states of non-equilibrium matter. This experimental system combines quantum optics and ultracold atomic physics with the quantum many-body physics traditionally explored in condensed matter physics. In this talk, I will discuss the possibilities that arise from this system. In particular, I will discuss the experiments [1,2] where such a system has been realised, and how these have been used to demonstrate the potential of multimode cavity QED to engineer interactions with controllable range. Based on these experimental capabilities, I will then discuss our theoretical work beginning to exploit the potential offered by these experiments. I will discuss how a multimode cavity can be used to engineer a synthetic gauge field, in such a way that the synthetic field responds to the state of the atoms. Using this, we have shown how one may realise a Meissner-like effect for ultracold atoms [3].
If time allows, I will then discuss aspects of how a Hopfield associative memory can be realised in such a system [4], and discuss our recent work developing the microscopic theory of this behaviour.
[This work has been done in collaboration with K. Ballantine (University of St Andrews), V. Vaidya, Y. Guo, A. Kollar, J. Cotler, S. Ganguili and B. Lev (Stanford).]
[1] A. J. Kollar, A. T. Papageorge, K. Baumann, M. A. Armen, and B. L. Lev, New J. Phys. 17, 043012 (2015).
[2] A. J. Kollar, A. T. Papageorge, V. D. Vaidya, Y. Guo, J. Keeling, and B. L. Lev, Nat. Commun. 8 14386 (2017)
[3] K. E. Ballantine, B. L. Lev, and J. Keeling, Phys. Rev. Lett. 118, 045302 (2017).
[4] S. Gopalakrishnan, B. L. Lev, and P. M. Goldbart, Phys. Rev. Lett. 107, 277201 (2011).
30. Kater Murch
Probing the thermodynamics of quantum measurement with superconducting qubits.
The extension of thermodynamics into the realm of quantum mechanics, where quantum fluctuations dominate and systems need not occupy definite states, poses unique challenges. Superconducting quantum circuits offer exquisite control over the environment of simple quantum systems allowing the exploration of thermodynamics at the quantum level through measurement and feedback control. We use a superconducting transmon qubit that is resonantly coupled to a waveguide cavity as an effectively one-dimensional quantum emitter. By driving the emitter and detecting the fluorescence with a near-quantum-limited Josephson parametric amplifier, we track the evolution of the quantum state and characterize the work and heat along single quantum trajectories. By using quantum feedback control to compensate for heat exchanged with the emitter's environment we are able to extract the work statistics associated with the quantum evolution and examine fundamental fluctuation theorems in non-equilibrium thermodynamics.
31. Keiji Saito
Work extraction in heat engines: quantum versus classical
Heat engines are a crucial topic in the development of nonequilibrium thermodynamics. Recently, quantum effects in thermodynamic operations have attracted much attention, and recent experiments demonstrated that even an atomic-scale heat engine is possible. In this talk, we consider the problem of how to extract quantum work. We derive a trade-off relation and will point out several open problems.
32. Krishnendu Sengupta
Entanglement generation and dynamic phase transition in periodically driven integrable models.
In this talk, we shall discuss the generation of entanglement entropy S of a closed quantum system driven periodically with frequency w and for n drive cycles. We show that such a drive may be used to generate states for which the scaling of S lies between an area and a volume law. We provide a qualitative criterion for the change in nature of S which constitutes a generalization of Hastings' theorem to driven integrable systems. We also find that S and any correlation function of such a driven system decay to their steady state values as (w/n)^[(d+2)/2] for fast and (w/n)^[d/2] for slow drives; these two dynamical phases are separated by a transition associated with the change in topology of the spectrum of the system's Floquet Hamiltonian. We show that these dynamical phases show re-entrant behavior as a function of w for d = 1 (and a class of d = 2) models, provide a detailed phase diagram of the system, and discuss experiments which can test our theory.
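For readability, the decay laws quoted in the abstract can be typeset as follows (δS denotes the deviation of S, or of a correlation function, from its steady-state value; this shorthand is introduced here, not in the abstract):

```latex
% Approach to the steady state after n drive cycles at frequency \omega,
% in spatial dimension d, as stated in the abstract:
\delta S(n) \sim \left(\frac{\omega}{n}\right)^{(d+2)/2}
  \quad \text{(fast drive)},
\qquad
\delta S(n) \sim \left(\frac{\omega}{n}\right)^{d/2}
  \quad \text{(slow drive)}.
```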
33. Lea F. Santos
Generic dynamical features of quenched interacting quantum systems
We study numerically and analytically the quench dynamics of isolated many-body quantum systems out of equilibrium. Using full random matrices from the Gaussian orthogonal ensemble, we obtain analytical expressions for the evolution of the survival probability, density imbalance, and out-of-time-ordered correlator. They are compared with numerical results for a one-dimensional disordered model with two-body interactions and shown to bound the decay rate of this realistic system.
Power-law decays are seen at intermediate times and overshoots beyond infinite time averages occur at long times when the system exhibits level repulsion. The fact that these features are shared by both the random matrix and the realistic disordered model indicates that they are generic to nonintegrable interacting quantum systems out of equilibrium.
34. Lin Shizeng
Skyrmion spin texture in inversion symmetric magnets
Stable topological excitations such as domain walls and vortices are ubiquitous in condensed matter systems and are responsible for many emergent phenomena. Recently a new mesoscopic spin texture called the skyrmion, with a radius of about 10-100 nm, was discovered experimentally in chiral magnets without inversion symmetry. Skyrmions can also be stabilized in heterostructures, where the inversion symmetry is broken at the interface. The Dzyaloshinskii-Moriya interaction is responsible for the stabilization of skyrmions in these systems, and the skyrmions form a triangular lattice. In metallic magnets, skyrmions can be driven by a spin-polarized current. Remarkably, the threshold current density to drive skyrmions into motion is only about 100 A/cm^2, which is 4-5 orders of magnitude weaker than that for magnetic domain walls. The high mobility, topologically protected stability, and compact size of skyrmions make them extremely promising for applications in spintronics, such as memory. In this talk, I will first summarize the experiments and present an overview of skyrmions. Then I will talk about our recent work on skyrmion stabilization in inversion-symmetric magnets. Because of the additional symmetry, skyrmions in inversion-symmetric magnets possess interesting properties, which can be exploited for device applications. I will discuss the novel properties of skyrmions in inversion-symmetric magnets in comparison to those in chiral magnets.
35. Manas Kulkarni
An open quantum system generalization of a 1D quasiperiodic system with a single-particle mobility edge
TBA
36. Marco Schiro
Dissipative Quantum Phase Transitions in Interacting Light-Matter Systems
Developments in quantum engineering have brought forth the possibility of studying emergent collective phenomena in hybrid systems of interacting matter and light. These platforms, which are intrinsically open and dissipative, make it possible to probe fundamental many-body physics in uncharted territories. In this talk I will focus on dissipative quantum phase transitions, arising from the interplay between coherent dynamics and coupling to an environment. I will start from a paradigmatic light-matter phase transition, Dicke superradiance, and discuss how non-Markovian bath correlations qualitatively change its physics, pointing out a connection with the Caldeira-Leggett/spin-boson phase transition much studied in a condensed matter context. In the second part of the talk I will focus on the physics of coupled circuit QED lattices with Kerr non-linearity under incoherent drive and dissipation, describe protocols to stabilize driven Mott insulators of photons, and discuss their dissipative and dynamical transition toward nonequilibrium superfluids.
37. Mazyar Mirrahimi
Dissipation as a resource for stabilizing quantum states with superconducting qubits
Recent advances in quantum-limited amplification have opened doors to high-fidelity non-demolition measurement of superconducting qubits and have already led to successful experiments on closed-loop control of such systems. However, the finite bandwidth of the amplification procedure, together with the time-consuming data acquisition and post-treatment of the output signal, lead to important latency in the feedback procedure.
Alternatively, reservoir (dissipation) engineering circumvents the need for real-time data acquisition, signal processing and feedback calculation. Coupling the quantum system to be stabilized to a strongly dissipative ancillary quantum system allows us to evacuate the entropy of the main system through the dissipation of the ancillary one. I will give an overview of some theoretical proposals, as well as related experiments from the past few years, illustrating the power of such autonomous feedback schemes for stabilizing highly non-classical states as well as for quantum error correction.
38. Michel Devoret
Quantum manifolds of steady states in driven, dissipative superconducting circuits
TBA
39. Nayana Shah
Out-of-equilibrium tunnel junction paradox and a new consistent bosonization-debosonization framework
Bosonization has been widely used for tackling strongly correlated systems in low dimensions and is a theoretical method of choice for a large class of problems. It is also one of the few non-perturbative approaches that can be extended to study systems and devices out of equilibrium. In this talk we shall critically reexamine the Bosonization-deBosonization (BdB) procedure for systems including junctions and impurities. By focusing on the case of a tunneling junction out of equilibrium, we will see that the conventional approach to BdB gives results that are not physically consistent, whereas according to conventional wisdom they should match exactly with those obtained via a direct calculation that does not involve a transformation from fermionic to bosonic fields and back. I will then present the new Consistent BdB framework that we have recently developed in order to resolve this non-equilibrium transport paradox, and argue that our modified framework should be widely applicable [1]. These ideas can be readily used to address more complicated scenarios of immediate experimental relevance [2], as will be highlighted in a follow-up presentation.
References:
[1] Nayana Shah and C. J. Bolech, Phys. Rev. B 93, 085440 (2016).
[2] C. J. Bolech and Nayana Shah, Phys. Rev. B 93, 085441 (2016).
40. Nicolas Roch
Circuit-QED based spectroscopies of quantum impurities
Quantum impurity problems describe a localized quantum system with a few degrees of freedom (the impurity), that is non-perturbatively coupled to a large system (the bath). These impurities can exist in many different forms in solid-state materials and nanostructures, such as charged [1] or magnetic impurities [2], while the bath is typically constituted by a Fermi sea. However, understanding the quantum dynamics and the entanglement properties of these many-body electronic systems remains a tremendous challenge, both experimentally and theoretically.
The main reason for this complexity lies in the presence of entanglement between the impurity and many modes of the bath that extend over a wide energy range, which prevents a brute-force diagonalization of the full problem. In addition, in metallic devices such as artificial quantum dots, it has proved difficult experimentally to resolve or address electronic bath modes individually, due to internal losses of metallic islands.
I will present a unique architecture based on superconducting circuits to tackle this challenging problem. It offers two main advantages: first, it allows one to reach the multi-mode ultra-strong coupling regime, building strong hybridization between the quantum system and its bath; second, the high quality factors of superconducting circuits make it possible to monitor the qubit and its bath spectroscopically at the same time.
Our approach consists in coupling a superconducting artificial atom (namely a transmon qubit) to a meta-material made of thousands of SQUIDs [3,4,5]. The latter sustains many photonic modes and shows a characteristic impedance close to the quantum of resistance. We succeeded in performing the full spectroscopy of the impurity-plus-bath system, which revealed strong hybridization of the transmon qubit with as many as ten modes of the bath. In this coupling regime, the common techniques used in circuit QED (rotating wave approximation, exact diagonalization...) break down. To describe our experimental data quantitatively, we had to borrow a tool usually reserved for strongly interacting systems: the Self-Consistent Harmonic Approximation [6]. In the future, we plan to use this circuit to perform non-linear quantum optics experiments with a many-body system [4,7].
[1] P. W. Anderson, Phys. Rev. Lett. 18, 1049 (1967)
[2] J. Kondo, Prog. Theor. Phys. 32, 37 (1964).
[3] K. Le Hur, Phys. Rev. B 85, 140506(R) (2012).
[4] M. Goldstein et al., Phys. Rev. Lett. 110, 017002 (2013).
[5] I. Snyman and S. Florens, Phys. Rev. B 92, 085131 (2015).
[6] T. Giamarchi, "Quantum Physics in One Dimension" (Oxford, 2003).
[7] N. Gheeraert et al., in preparation.
41. Patrice Bertet
Magnetic resonance at the quantum limit and beyond
The detection and characterization of paramagnetic species by electron-spin resonance (ESR) spectroscopy has numerous applications in chemistry, biology, and materials science [1]. Most ESR spectrometers rely on the inductive detection of the small microwave signals emitted by the spins during their Larmor precession into a microwave resonator in which they are embedded. Using the tools offered by circuit Quantum Electrodynamics (QED), namely high-quality-factor superconducting micro-resonators and Josephson parametric amplifiers that operate at the quantum limit when cooled to 20 mK [2], we investigate magnetic resonance in a new regime where the quantum nature of the microwave field plays a role and the spin sensitivity is correspondingly enhanced. We report an increase in the sensitivity of inductively detected ESR by 4 orders of magnitude over the state of the art, enabling the detection of 1700 bismuth donor spins in silicon with a signal-to-noise ratio of 1 in a single echo [3]. We also demonstrate that the energy relaxation time of the spins is limited by spontaneous emission of microwave photons into the measurement line via the resonator [4], which opens the way to on-demand spin initialization via the Purcell effect. Finally, we show that the sensitivity can be enhanced beyond the quantum limit by using quantum squeezed states of the microwave field [5].
[1] A. Schweiger and G. Jeschke, Principles of Pulse Electron Magnetic Resonance (Oxford University Press, 2001)
[2] X. Zhou et al., Physical Review B 89, 214517 (2014)
[3] A. Bienfait et al., Nature Nanotechnology 11(3), 253-257 (2015)
[4] A. Bienfait et al., Nature 531, 74 (2016)
[5] A. Bienfait et al., arXiv:1610.03329
42. Prasanna Venkatesh B
Cooperative Effects in Closely Packed Quantum Emitters with Collective Dephasing
In a closely packed ensemble of quantum emitters, cooperative effects are typically suppressed due to the dephasing induced by the dipole-dipole interactions. Here, we show that by adding sufficiently strong collective dephasing cooperative effects can be restored. In particular, we show that the dipole force on a closely packed ensemble of strongly driven two-level quantum emitters, which collectively dephase, is enhanced in comparison to the dipole force on an independent non-interacting ensemble. Our results are relevant to solid state systems with embedded quantum emitters such as colour centers in diamond and superconducting qubits in microwave cavities and waveguides.
43. R. Ganesh
Generating resonating valence bond states through Dicke subradiance
Dicke's seminal 1954 paper introduced the notion of 'superradiance' in a system of spins coupled to a common photon mode. Certain quantum states of the spins dominate the radiation process so that the spins radiate coherently. Dicke's original thought experiment has recently been recreated in the lab using cavity-QED setups with two spins. I will explore extending this experiment to N spins and show that the radiation process naturally gives rise to entangled states. This suggests a new experimental tool to create multi-particle entanglement in the lab. In particular, a null observation (non-observation of an emitted photon) can be used to collapse the wavefunction onto a dark state. Remarkably, this dark state has resonating valence bond (RVB) character. We show that the probability of collapse onto the RVB state scales as N^(-1), making it possible to generate entangled states of more than 20 spins.
Reference: R. Ganesh, L. Theerthagiri and G. Baskaran, arXiv:1609.04853.
44. Rajdeep Sensarma
Keldysh Field Theory for Open Quantum Systems: Localization and Quantum Effects
We use an effective action formalism based on Keldysh field theory to study bosonic open quantum systems interacting with bosonic baths. For a non-interacting bosonic chain coupled to independent baths at each lattice site, we find that a linear variation of the temperature of the baths with distance leads to an exponential decay of both particle and energy currents. This holds even when the baths induce long-range memory effects in the system. For an interacting system, using loop expansions together with the Martin-Siggia-Rose formalism, we find that in addition to a dissipative and a classical noise term, we generate a multiplicative noise and a "quantum" noise in the system. The "quantum" noise is a random source term with non-classical distribution functions, and can be related to Wigner's quasiprobability distribution. Using renormalization group arguments, we show that in the coarse-grained limit there is a universal quasiprobability distribution characterized by a single parameter, and we find the analytic form of the distribution function.
45. R. Vijayaraghavan
Broadband parametric amplifiers for quantum measurements
Josephson parametric amplifiers (JPAs) have become a crucial component of superconducting qubit measurement circuitry, enabling recent studies of quantum jumps, generation and detection of squeezed microwave fields, quantum feedback, real-time tracking of qubit state evolution, quantum error detection, and more. In this talk, I will describe the operation of a simple parametric amplifier design which is based on a single Josephson junction shunted by a capacitor to form a non-linear oscillator. The intrinsic Kerr non-linearity of this device enables parametric amplification by pumping the oscillator with a suitable drive tone, and allows one to obtain near-quantum-limited noise performance for a typical gain of about 20 dB. The bandwidth of such devices is usually governed by the standard gain-bandwidth product, and typical devices have 10-50 MHz of bandwidth. I will describe a technique which requires only a simple modification of the embedding circuit to enhance the bandwidth beyond the standard gain-bandwidth product without affecting the noise performance of the device. I will present results on such a device in which we obtained 640 MHz of bandwidth with 20 dB gain and near-quantum-limited noise performance [1]. I will conclude by discussing further extensions of this idea and adapting it to other parametric amplifier designs such as the Josephson Parametric Converter.
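As background (a schematic statement, not taken from the talk summary), the "standard gain-bandwidth product" referred to above is, for a single-pole amplifier, the statement that amplitude gain trades off against bandwidth:

```latex
% Gain-bandwidth constraint for a single-pole parametric amplifier:
% the product of the amplitude gain \sqrt{G} and the bandwidth B is
% roughly fixed by the linewidth \kappa of the bare resonator.
\sqrt{G}\; B \approx \frac{\kappa}{2\pi} = \mathrm{const.}
% At G = 100 (i.e. 20 dB power gain), B is ten times smaller than the
% bare linewidth, which is why the embedding circuit must be engineered
% to exceed this limit.
```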
46. Rejish Nath
Periodically Driven Array of Single Rydberg Atoms
We discuss the excitation dynamics in an array of single Rydberg atoms driven by a frequency-modulated light field. The latter introduces an effective time-dependent Rabi coupling in a rotating frame, which leads to unprecedented dynamics in the presence of Rydberg-Rydberg interactions. In particular, the Rydberg blockade may exist even if the interaction strengths are much smaller than the single-atom Rabi frequency; anti-blockade appears at large interactions, with high excitation probabilities and state-dependent population trapping. Finally, as an application of modulated driving, we characterize the freezing or localization dynamics of an excitation in an extended driven setup.
Interacting Quantum Systems in Hybrid Traps
Hybrid traps allow the accumulation of multiple quantum particles, which are different in their nature, to be put together so that the interactions between them can be studied precisely. Specifically, these can be ions, atoms, molecules and light. In such experiments, some combinations of these particles are trapped with overlap to study the interaction of interest. The different systems that we would like to trap interact with the electromagnetic field in very different ways, and this requires each class of object to be trapped with a different mechanism. Enabling the various mechanisms to function so that everything is confined to a tiny volume in space is a daunting technical challenge. In the first part of the talk I shall illustrate how these challenges are overcome in our experiment at RRI.
The trapped dilute-gas systems are also simultaneously prepared in specific quantum states, and typically the idea is to cool the systems to temperatures at which the natural linewidths set the limit on the energy uncertainty. Another objective is that the interactions are not dominated by the kinetic energy of the interacting particles. In this situation, the evolution of the state-prepared systems on interaction will express itself in the change of motional or internal states of the interacting systems. The challenge then becomes how to detect these changes.
At RRI, we have been performing experiments with the above objectives for a while now, and I shall discuss some of our experiments on these topics. These explorations have led to the understanding of several phenomena, often at variance with expectations. Some significant results shall be presented. I shall conclude by outlining strategies that we would pursue so that we are in a position to attack a wide range of possible problems related to open quantum systems.
48. Saptarishi Chaudhuri
Quantum gases with tunable interactions and non-perturbative measurements
I shall discuss the new Sodium-Potassium quantum gas mixture experiment we are developing at the Raman Research Institute, Bangalore. The goal of this experiment is to investigate quantum many-body physics employing the long-range anisotropic interactions between heteronuclear molecules with tunable electric dipole-dipole interactions. Using laser cooling and trapping and evaporative cooling techniques, we propose to achieve simultaneous quantum degenerate samples of neutral Sodium and Potassium atomic clouds. Thereafter, using an interspecies magnetic Feshbach resonance, weakly bound molecules will be created. Two-photon Raman adiabatic passage from this "Feshbach molecular" state to the absolute ground state will be employed to prepare an ultra-cold cloud of Sodium-Potassium molecules. This molecular cloud, trapped in an optical lattice potential, can afterwards be manipulated in the presence of an external electric field to investigate various ground state solutions of the extended Hubbard model by direct imaging.
I shall also discuss our ongoing experiments on spin fluctuations in a thermal vapor using probe-beam polarization fluctuation measurements. We observe polarization fluctuations in a far-detuned probe laser which passes through a thermal vapor in the presence of an orthogonal magnetic field, revealing intrinsic spin fluctuations in the system. This technique is an example of a non-perturbative measurement of the dynamical structure factor and has promising applications in many other similar systems, such as ultra-cold quantum gases. This spin noise spectroscopy technique will eventually be used as a non-perturbative detection technique for measurements on quantum degenerate gases.
49. Shaul Mukamel
Nonlinear optical spectroscopy of molecules with quantum light and in microcavities
Nonlinear optical signals induced by quantized light fields and entangled photon pairs are presented. Conventional nonlinear spectroscopy uses classical light to detect matter properties through the variation of its response with frequencies or time delays. Quantum light opens up new avenues for spectroscopy by utilizing parameters of the quantum state of light as novel control knobs, and through the variation of photon statistics by coupling to matter. An intuitive diagrammatic approach is presented for calculating ultrafast spectroscopy signals induced by quantum light, focusing on applications involving entangled photons with nonclassical bandwidth properties, known as "time-energy entanglement." Nonlinear optical signals induced by quantized light fields are expressed using time-ordered multipoint correlation functions of superoperators in the joint field-plus-matter phase space. These are distinct from Glauber's photon counting formalism, which uses normally ordered products of ordinary operators in the field space. One notable advantage for spectroscopy applications is that entangled-photon pairs are not subject to the classical Fourier limitations on the joint temporal and spectral resolution. Properties of entangled-photon pairs relevant to their spectroscopic applications will be surveyed, and different optical signals and photon-counting setups are discussed and illustrated for molecular model systems. Crossings of electronic potential surfaces in nuclear configuration space, known as conical intersections, determine the rates and outcomes of virtually all photochemical molecular processes. Strong coupling of molecules to the quantum vacuum field of microcavities can modify the potential energy surfaces, thereby manipulating the photophysical and photochemical reaction pathways.
The photonic vacuum state of a localized cavity mode can be strongly mixed with the molecular degrees of freedom to create hybrid field-matter states known as polaritons. Simulations of the avoided crossing of sodium iodide and sodium fluoride in a cavity which incorporate the quantized cavity field into the nuclear wave packet dynamics will be presented. We show how the branching ratio between the covalent and ionic dissociation channels can be strongly manipulated by the optical cavity. New imaging techniques based on x-ray diffraction from electronic coherence in conical intersections will be presented.
References:
1. Markus Kowalewski, Kochise Bennett, and Shaul Mukamel, "Cavity femtochemistry: Manipulating nonadiabatic dynamics at avoided crossings", J. Phys. Chem. Lett. 2016, 7, 2050-2054.
2. Konstantin E. Dorfman, Frank Schlawin, and Shaul Mukamel. "Nonlinear optical signals and spectroscopy with quantum light", Rev. Mod. Phys. 88, 045008 (2016) arXiv:1605.06746v1
3. Kochise Bennett, Markus Kowalewski, and Shaul Mukamel. "Novel Photochemistry of Molecular Polaritons in Optical Cavities", Faraday Discussions, 2016, 194, 259-282. DOI: 10.1039/C6FD00095A
50. Sebastian Wüster
Rydberg aggregates in ultracold gases
Rydberg atoms in highly excited electronic states with n = 30-100 are recent additions to the versatile toolkit of ultracold atomic physics. When resonant dipole-dipole interactions involving two atomic states are at play, these furnish interesting model systems for, e.g., energy transport (static Rydberg atom assemblies) or multi-Born-Oppenheimer-surface motion (moving atoms). We will discuss the interplay of this kind of dynamics with the host cold gas in which Rydberg excitations are typically embedded. We show how the cold gas can allow continuous observation of the atomic motion or excitation transport, in turn leading to controllable decoherence. Thus the system of Rydberg aggregates embedded in cold gases furnishes a versatile quantum simulation platform for open quantum systems.
[1] S. Wüster and J. M. Rost, arXiv:1707.04099 (2017).
[2] D. Schönleber et al., PRL 114, 123005 (2015).
[3] H. Schempp et al., PRL 115, 093002 (2015).
[4] S. Wüster, PRL 119, 013001 (2017).
51. Sile Nic Chormaic
Ultrathin optical fibers for neutral cold atom probing and manipulation
A subwavelength-diameter optical nanofibre (ONF) has a large fraction of its guided light mode as an evanescent field, which extends radially beyond the surface of the fibre, see Fig. 1. Resonant and off-resonant interactions of this light field with surrounding cold atoms can lead to interesting phenomena, such as the observation of ultralow-power nonlinear effects [1,2]. Our work follows two strands. In the first, we explore the formation and behaviour of neutral Rydberg atoms near ONFs. Rydberg atoms have a high dipole moment and a long lifetime, enabling the study of dipole-induced interactions. The combination of Rydberg atoms with an ONF could be a unique testbed for the study of surface-induced interactions on atomic dipoles in the submicron range, or for fibre-mediated quantum networks. The second strand of research involves studying nanofibre-aided multiphoton atomic transitions and effects such as EIT and 4WM. Here, we exploit some of the properties of higher-order fibre modes, such as a stronger evanescent field around the fibre waist [3], the coupling of the orbital and spin angular momentum of light [4], and novel atom trapping geometries [5].
References:
[1] R. Kumar, V. Gokhroo, K. Deasy and S. Nic Chormaic, Phys. Rev. A, vol. 91, p. 053842 (2015)
[2] R. Kumar, V. Gokhroo and S. Nic Chormaic, New. J. Phys., vol. 17, p. 123012 (2015)
[3] R. Kumar, V. Gokhroo, K. Deasy, A. Maimaiti, M. C. Frawley, C. Phelan and S. Nic Chormaic, New J. Phys. vol. 17, p. 013026 (2015)
[4] F. Le Kien, T. Busch, V. G. Truong and S. Nic Chormaic, arXiv:1703.00109 (2017)
[5] C. Phelan, T. Hennessy and T. Busch, Opt. Exp., vol. 21, p. 27093 (2013)
52. Simone Gasparinetti
Correlations and entanglement of microwave photons emitted in a cascade decay
We use a three-level artificial atom in the ladder configuration as a source of microwave photons of different frequency. Our artificial atom is a transmon-type superconducting circuit, driven at the two-photon transition between ground and second-excited state. The transmon is embedded into a single-pole, double-throw switch [1] that selectively routes different-frequency photons into different spatial modes. We characterize the decay process for both continuous-wave and pulsed excitation. When the source is driven continuously, power cross-correlations between the two modes exhibit a crossover between strong antibunching and superbunching, typical of cascade decay, and an oscillatory pattern as the drive strength becomes comparable to the radiative decay rate. Using pulsed excitation, we prepare an arbitrary superposition of the ground and second-excited state, and monitor the spontaneous emission of the source in real time. This scheme allows us to deterministically produce entangled photon pairs, as demonstrated by nonvanishing phase correlations and more generally by joint state tomography of the two itinerant photonic modes. [2]
References:
[1] M. Pechal, J.C. Besse, M. Mondal, M. Oppliger, S. Gasparinetti, and A. Wallraff, Phys. Rev. Appl. 6, 024009 (2016).
[2] S. Gasparinetti, M. Pechal, J.C. Besse, M. Mondal, C. Eichler, and A. Wallraff, submitted.
53. Sriram Ganeshan
Lyapunov Exponent and Out-of-Time-Ordered Correlator's Growth Rate in a Chaotic System
One of the central goals in the study of quantum chaos is to establish a correspondence principle between classical chaos and quantum dynamics. Due to the singular nature of the ℏ → 0 limit, it has been a long-standing problem to recover key fingerprints of classical chaos, such as the Lyapunov exponent, starting from a microscopic quantum calculation. It was recently proposed that the out-of-time-ordered four-point correlator (OTOC) might serve as a useful characteristic of quantum-chaotic behavior because, in the semi-classical limit, its rate of exponential growth resembles the classical Lyapunov exponent. In this talk, I will present the OTOC as a tool to unify the classical, quantum chaotic and weak localization regimes for the quantum kicked rotor model, a textbook model of quantum chaos. Through the OTOC, I will demonstrate how chaos develops in the quantum chaotic regime and is subsequently destroyed by the quantum interference effects that result in dynamical localization. We also make a quantitative comparison between the growth rate of the OTOC and the classical Lyapunov exponent.
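For reference (one common convention, not spelled out in the abstract), the OTOC and its relation to the Lyapunov exponent can be sketched as:

```latex
% Out-of-time-ordered correlator for position and momentum operators:
C(t) = -\left\langle \left[\hat{x}(t),\, \hat{p}(0)\right]^{2} \right\rangle.
% In the semiclassical limit the commutator goes over to i\hbar times the
% Poisson bracket,
%   [\hat{x}(t), \hat{p}(0)] \;\to\; i\hbar\, \frac{\partial x(t)}{\partial x(0)},
% so for a chaotic system the derivative grows as e^{\lambda t} and
%   C(t) \sim \hbar^{2}\, e^{2\lambda t},
% with \lambda the classical Lyapunov exponent.
```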
54. Stefan Kehrein
Thermalization in closed quantum many-body systems I: Basic notions, integrable systems
Thermalization in closed quantum many-body systems II: Non-integrable systems
Reversibility and irreversibility in closed quantum many-body systems
This set of pedagogical lectures will give an introduction to the topic of thermalization in closed quantum many-body systems.
Lecture I:
• Key experiments in closed quantum many-body systems
• Possible definitions of thermalization
• Integrable vs. non-integrable systems
• Thermalization dynamics to the generalized Gibbs ensemble (GGE) in integrable systems
Lecture II:
• Thermalization dynamics in non-integrable systems
• Eigenstate thermalization hypothesis (ETH)
• Prethermalization
• Hydrodynamic tails
• Outlook of important questions for future research
Lecture III:
• Reversibility vs. irreversibility in classical physics
• Experiments in closed quantum many-body systems
• Possible definitions of irreversibility (echo decay, out-of-time-order correlators, scrambling)
• Results and outlook
55. Sumanta Tewari
Robust low energy Andreev bound states and quantized transport in semiconductor-superconductor heterostructures
Andreev bound states (ABS) are a generic low-energy feature in semiconductor-superconductor heterostructures. I will talk about how partially unfolded Andreev bound states -- ABS whose component Majorana bound states (MBS) are only weakly overlapping -- represent a generic low-energy feature that emerges in non-homogeneous semiconductor nanowires coupled to superconductors in the presence of a Zeeman field. The emergence of these low-energy modes is not correlated with any topological quantum phase transition. Increasing the length scale of the potential inhomogeneity leads to a continuous evolution from strongly overlapping MBSs, which can be viewed as "regular" ABSs that cross zero energy, to spatially separated, weakly overlapping MBSs, which can be regarded as robust ABSs that have nearly zero energy in a significant range of parameters and generate signatures similar to the non-degenerate zero-energy Majorana zero modes (MZMs) that emerge in the topological superconducting phase. I will discuss why the only way to distinguish topological MZMs from robust low-energy ABSs in the topologically trivial regime of the SM-SC heterostructure wire involves correlating the dI/dV spectra from both ends of the wire, a task which has so far not been performed.
56. Takis Kontos
Mesoscopic quantum electrodynamics: from atomic-like physics to condensed matter
In this lecture, I will describe how mesoscopic circuits embedded in microwave cavities can be used to study light-matter interaction in novel situations. After introducing the basic tools for the microscopic description of light-matter interaction in these systems, I will focus on two important topics: the coupling of a double quantum dot to microwave photons in a quantum information perspective and the use of the microwave cavity for ultra-sensitive compressibility measurements in a condensed matter perspective. I will show at the end of the lecture how these ideas can be generalized to more complex systems.
57. Umakant Rapol
Prolonging coherence times by bath engineering
Quantum systems lose coherence upon interaction with the environment and tend towards classical states. Quantum coherence is known to decay exponentially in time, so that macroscopic quantum superpositions are generally unsustainable. We show that slower-than-exponential decay of coherences is experimentally realized in an atom-optics kicked-rotor system subjected to nonstationary Lévy noise in the applied kick sequence. The slower coherence decay manifests in the form of quantum subdiffusion that can be controlled through the Lévy exponent.
58. Upendra Harbola
Currents in strongly coupled molecular junctions
In recent years, electron conduction through a single molecular junction has attracted a lot of research interest due to its fundamental importance in exploring quantum effects and its applications in the miniaturization of electronic components (molecular electronics). The idea of molecular electronics is to control the electronic current by manipulating the physical and chemical properties of the molecule. There are several theoretical methods to calculate the conductance of molecular junctions. Some are (semi)perturbative while others are not. Among the various formulations to compute conductance of molecular junctions, the quantum master equation (QME) method [1], which is (semi)perturbative, and the nonequilibrium Green's function (NEGF) approach [2], which is nonperturbative, are the two most successful formulations. The QME has a simple kinetic structure which makes it very useful in understanding time-dependent processes in molecular junctions. On the other hand, the NEGF method, although in principle exact, is more involved and is generally used to study steady-state properties. In this talk, I shall present some recent results [3,4] using NEGF and show that within the QME formulation some essential physics is lost, which leads to completely different results.
[1] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[2] H. Haug and A. P. Jauho, Quantum Kinetics in Transport and Optics of Semiconductors (Springer, Berlin, 1995).
[3] H. K. Yadalam and U. Harbola, Phys. Rev. B 93, 035312 (2016). [4] H. K. Yadalam and U. Harbola, Phys. Rev. B 94, 115424 (2016).
59. Vinod Menon
Control of light-matter interaction in two-dimensional Van der Waals materials
Two-dimensional (2D) Van der Waals materials have emerged as a very attractive class of optoelectronic materials due to the unprecedented strength of their interaction with light. In this talk I will discuss approaches to enhance the strength of this interaction even further using microcavities and metamaterials. I will first discuss the formation of strongly coupled exciton-photon quasiparticles (microcavity polaritons) at room temperature [1] and the valley polarization properties of these polaritons [2] in 2D transition metal dichalcogenide systems.
Following this I will discuss the broadband enhancement of spontaneous emission from these 2D materials using hyperbolic metamaterials [3].
Finally, I will also briefly discuss our recent work on room temperature single photon emission from hexagonal boron nitride [4] and the prospects of developing robust quantum emitters using them.
[1] Strong light-matter coupling in two-dimensional atomic crystals, X. Liu, et al., Nature Photonics 9, 30 (2015).
[2] Optical control of room temperature valley polaritons, Z. Sun, et al. In Press, Nature Photonics (2017).
[3] Broadband Enhancement of Spontaneous Emission in Two-Dimensional Semiconductors Using Photonic Hypercrystals, T. Galfsky, et al. Nano Lett. 16, 4940 (2015).
[4] Photoinduced modification of single photon emitters in hexagonal boron nitride, Z. Shotan, H. Jayakumar, C. R. Considine et al. ACS Photonics 3, 2490 (2016).
60. Vipin Varma
Transport and fractality in boundary-driven (quasi)disordered chains
In this talk we report on the response of (quasi)disordered spin chains at high temperature to boundary driving through reservoirs at their ends. In the strongly interacting regime, we unveil a rich dynamical phase diagram that displays a panoply of transport properties as an interplay between interaction and disorder strengths: localized, ballistic, superdiffusive, diffusive, and subdiffusive. These effects occur well away from the many-body localization critical point, while the system is still deep in the ergodic phase. Similar anomalous transport is shown to occur in the quasidisordered system at criticality even without interactions; in addition, the nonequilibrium steady state here exhibits spatial fractality in many of its expectation values, opening an alternative route to experimentally probe a system's fractal properties in contrast to measuring quantum wavefunctions.
|
{}
|
# To what extent is the Taylor polynomial the best polynomial approximation?
Given a function $f\in\mathscr C^n([a,b])$ and a point $x_0\in [a,b]$, to what extent is the $n$-th Taylor polynomial $T_n(x,x_0)=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^k$ the best polynomial approximation of $f$ in $[a,b]$? This may seem to be a dumb question, but is there a metric $\rho$ on $C^n([a,b])$ so that $\rho(T_n(x,x_0),f)=\min\{\rho(p,f)\mid \text{p is a polynomial function} \}$? Thank you
-
Interesting question! One thing to remark is that the Taylor polynomial is definitely not the best approximation in the sup norm on $\mathcal C[a,b]$. This was studied by Tchebychev, who showed that for each $n$, there is a best such approximation of degree $n$. It is usually significantly more accurate than the Taylor polynomial. See for example Chapters 43-45 (especially 44) of Körner's "Fourier analysis". – Andres Caicedo Feb 9 '13 at 17:20
Here is a norm on $\mathscr C^n(a,b)$ for which $T_{n}(\cdot,x_0)$ is the best approximation to $f$: $$\|f\|_* = \sum_{k=0}^{n} |f^{(k)}(x_0)|+ \sup_{x\in[a,b]}|f^{(n)}(x)-f^{(n)}(x_0)|$$ This is a reasonable norm, which is equivalent to the more usual norms. For any polynomial $p$ of degree at most $n$ we have $$\|f-p\|_* = \sum_{k=0}^{n} |f^{(k)}(x_0)-p^{(k)}(x_0)|+ \sup_{x\in[a,b]}|f^{(n)}(x)-f^{(n)}(x_0)|$$ which is minimized exactly when $p=T_{n}(\cdot,x_0)$.
-
+1 This is very interesting, I had never seen such a norm. Can you provide a reference where I can find that norm defined? – Adrián Barquero Feb 11 '13 at 17:06
@AdriánBarquero Sure. Here it is in amsrefs format: \bib\{298965}{misc}{ title={To what extent is the taylor polynomial the best polynomial approximation?}, author={5PM (http://math.stackexchange.com/users/53153/5pm)}, note={URL: http://math.stackexchange.com/q/298965 (version: 2013-02-11)}, eprint={http://math.stackexchange.com/q/298965}, organization={Mathematics} } // but seriously, I don't recall seeing it anywhere. – user53153 Feb 11 '13 at 17:22
I see, so you came up with it? That's really clever. Thanks =) – Adrián Barquero Feb 11 '13 at 18:03
I may be misunderstanding something here but if I set f = p in your metric the first term is zero but the second term is not zero. But isn't the norm of 0 supposed to be zero? – Michael Smith Feb 12 '13 at 22:38
@MichaelSmith If $f$ is a polynomial of degree at most $n$, then $f^{(n)}$ is a constant function. – user53153 Feb 12 '13 at 22:41
The answer is that the Taylor polynomial is not a very good approximation on the whole of $[a,b]$ in general. Indeed, the remainder of the Taylor series converges to 0 on $[a,b]$ if and only if $f$ is analytic, which of course is not always the case. The intuition is that purely local information near $x_0$ has no chance of being sufficient for a good approximation on all of $[a,b]$.
We know that a continuous function can be approximated uniformly on a segment by polynomials, but it is a bit tricky to find which polynomials exactly. Another natural candidate would be interpolation polynomials, but it turns out that they are not good either (see http://en.wikipedia.org/wiki/Runge%27s_phenomenon). The answer is Bernstein's polynomials (http://en.wikipedia.org/wiki/Bernstein%27s_polynomial_theorem).
-
The Runge phenomenon does not mean that "interpolation polynomials ... are no good". It shows the limitations of interpolation with equidistant nodes. Interpolation with Chebyshev nodes yields uniform approximation, and the Wikipedia article you cited mentions this. – user53153 Feb 9 '13 at 19:20
you're right. my point was just that this is a non-trivial question, and that naive answer may be false. – Glougloubarbaki Feb 10 '13 at 3:48
$T_n(x, x_0)$ is the only polynomial of degree less than or equal to $n$ such that
$$T_n(x, x_0) - f(x) \in o((x - x_0)^n)$$
or in terms of limits,
$$\lim_{x \to x_0} \frac{T_n(x, x_0) - f(x)}{(x - x_0)^n} = 0$$
-
|
{}
|
# Netcat File Transfers
In a very simple way it can be used to transfer files between two computers. You can create a server that serves the file with the following:
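A sketch of the missing server command, assuming traditional (GNU) netcat option syntax and an arbitrary port 4444; BSD netcat drops the `-p` and uses `nc -l 4444`:

```shell
# Listen on TCP port 4444 and send backup.iso to whoever connects
nc -l -p 4444 < backup.iso
```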
Receive backup.iso on the client machine with the following:
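A matching client invocation (the hostname server.example.com and port 4444 are placeholders):

```shell
# Connect to the serving machine and write the incoming stream to backup.iso
nc server.example.com 4444 > backup.iso
```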
As you may have noticed, netcat does not show any info about the progress of the data transfer. This is inconvenient when dealing with large files. In such cases, a pipe-monitoring utility like pv can be used to show a progress indicator. For example, the following shows the total amount of data that has been transfered in real-time on the server side:
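One way to wire this up, with the same placeholder port as above; pv sits between the file and netcat and prints a running byte count and transfer rate:

```shell
# pv reports how much of backup.iso has been pushed into the listening socket
pv backup.iso | nc -l -p 4444
```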
Of course, the same can be implemented on the client side by piping netcat’s output through pv:
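For example (hostname and port again placeholders):

```shell
# Show a live progress readout while the file is received
nc server.example.com 4444 | pv > backup.iso
```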
Another way:
One of the most practical uses of this network connection is file transfer. As a basic Netcat function, this feature may be used to great effect in the hands of an experienced user. For a freshly installed computer, setting up an FTP server or, worse, meddling with the rcp or scp protocols may be nauseating. Those commands may not be available, for one, and multiple layers of control mechanisms may interfere with their functionality. You can still transfer files with just one nc command.
At the server console:
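A plausible server command, using the filename.back output and the longer -w timeout that the text discusses (port 1234 is a placeholder):

```shell
# Listen on port 1234, give up after 10 idle seconds, save whatever arrives
nc -l -p 1234 -w 10 > filename.back
```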
and on the client side:
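And correspondingly (same placeholder host and port):

```shell
# Push filename to the server, closing the connection after 2 idle seconds
nc -w 2 server.example.com 1234 < filename
```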
Magically, the file named filename is transferred from the client to the server. You can check that they are identical.
The command line uses the additional argument -w to make Netcat wait for a few seconds before giving up. We made the timeout longer on the server side because it is the side most affected by a pause. Another important point is the > and < redirection operators, with which Unix users are very familiar.
In the server we said > filename.back. Any output will be directed to this file. As it happens, the output is the file filename which is sent by the client. Think of this as a pipeline. We take a bucket (a file), pour the contents into the pipeline (Netcat's port), and, at the other end, we fill another bucket from the pipeline.
Update: Why bother to use netcat if an ssh daemon is running? Just use scp to transfer files!
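For instance (user, host, and destination path are placeholders):

```shell
# Copy backup.iso to a remote machine over SSH, with progress shown by default
scp backup.iso user@remote.example.com:/tmp/
```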
|
{}
|
# Changing value of a NetArray or a NetArrayLayer during training?
Posted 2 months ago
368 Views
|
2 Replies
|
0 Total Likes
|
I am trying to code a variant of the Proximal Policy Optimization algorithm (Reinforcement Learning) in Mathematica, and for the training of the network I need to change the coefficient, beta, of one of the loss terms dynamically after each batch... sometimes the value of beta should be doubled and sometimes halved. Is there ANY way to do that in Mathematica?

The only way(?) that comes to my mind is that when the TrainingProgressFunction is called after each batch, I get my hands on the #TrainingNet or the network that is being trained, and then change the value of the NetArray associated with beta manually to whatever I want for the next round. However, unfortunately, commands like NetExtract, NetTake, NetReplacePart all create new copies of the net, and hence won't be any good. Somehow I need to change or update the very net that is being trained without copying it. To make the beta value not trainable, the LearningRateMultipliers of the NetArrayLayer must be set to None.

Any information or guidance is very much appreciated.
2 Replies
Sort By:
Posted 2 months ago
Welcome to Wolfram Community! Please make sure you know the rules: https://wolfr.am/READ-1ST

Please provide an example code so it is clear what exactly you are looking for.
Sorry if my post was unclear or if it appeared that I am bluntly asking for help. Perhaps copying the code here is going to make my already confusing question even more confusing. I have spent a ton of time searching online and reading through Mathematica manual pages. I was hoping that someone here can either tell me that such a feature does not exist in Mathematica or give me a pointer.

Some reinforcement learning methods such as TRPO or PPO use minimization (maximization) of some entropy or Kullback-Leibler divergence; see, for example, equations 2b and 2c in this recent paper:

Hsu, Chloe Ching-Yun, Celestine Mendler-Dünner, and Moritz Hardt. "Revisiting Design Choices in Proximal Policy Optimization." arXiv preprint arXiv:2009.10897 (2020).

I want to be able to dynamically update the scaling factor of a specific loss term during training. Say below is how I train my network, and note that the loss functions for different parts of the network are optimized separately using Scaled. Scaled allows me to scale individual losses by different factors. For example, -1 for the clip loss maximizes that term, and 1.0 for valueFunctionLoss minimizes the value function. So, using Scaled, if I need to use a different scaling factor for the KL divergence loss of the network, klForwardLoss, say 0.01, I use Scaled[0.01].

But what if I want to change/update beta during training, say 0.01 at first and then slowly update it to get to 0.1. Is doing this possible?
resultNet = NetTrain[
  net,
  ppoSampler[#Net, #BatchSize] &,
  All,
  LossFunction -> {
    "clipLoss" -> Scaled[-1.0],
    "valueFunctionLoss" -> Scaled[1.0],
    "klForwardLoss" -> Scaled[beta] (* (1) <------- BETA, can it be updated here? *)
  },
  Method -> "RMSProp",
  BatchSize -> 32,
  MaxTrainingRounds -> 20000,
  LearningRate -> 0.00025,
  TrainingUpdateSchedule -> {"policy", "value"},
  WorkingPrecision -> "Real64",
  TrainingProgressFunction -> Function[
    (* (2) Can beta be updated here? *)
    (* access to the network is provided through #Net *)
  ]
]
|
{}
|
# Michael0x2a
My personal ramblings and snippits (mostly tech-related)
# Learning LaTeX
First published: November 07, 2012
I recently learned how to use LaTeX, a document markup language that can be seen as a sort of replacement for Microsoft Word. You can write text like this:
\documentclass[12pt]{article}
\usepackage{amsmath}
\begin{document}
This is an example of a LaTeX document. It automatically handles
paragraphs, indentation, typography, and general layout. See?
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
It can also easily handle math:
\begin{align}
S &= 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ldots}}} \\
S &= 1 + \cfrac{1}{S} \\
S^2 - S - 1 &= 0 \\
S &= \frac{1 + \sqrt{5}}{2}
\end{align}
\end{document}
Sample output:
(It’s a bit fuzzier than in the actual pdf – I’m not sure why)
There are a lot of reasons why you should start learning LaTeX, and pros and cons of doing so, but since people have already written pretty decent explanations online, I’ll just go ahead and list them below instead of repeating them myself.
Personally, I use LaTeX for the following reasons:
• It’s absolutely brilliant for math. Microsoft’s math tool, while easy to use, is essentially point-and-click. While it’s fairly easy to create an equation in Microsoft Word, it’s difficult to do quickly. In general, I found that typing out equations in LaTeX was less frustrating, and let me create equations far more rapidly than in MS Word. Since LaTeX is just a text file, the equations are also integrated with the rest of the text, so I don’t keep having to switch back and forth between some sort of ‘math mode’. LibreOffice also has an equation editor which is more text-based than MS Word’s, but it still forces you to jump into a separate window to type out equations and doesn’t render neatly.
• It’s semantic, and takes care of formatting for me.
• Since LaTeX files are just plaintext, I can manipulate and generate them using computer programs. This means that I can easily generate tables, equations, and graph by simply using a computer program, instead of having to enter them in by hand.
I found that LaTeX is especially effective for longer papers in math or science when compared to MS Word or LibreOffice. However, I found that LaTeX didn’t have any noticeable advantage over either word processing program when I’m trying to write outlines or MLA-formatted papers for English or History. While LaTeX does have an MLA package which works fine, it essentially disables everything nice about LaTeX (fonts, section headers, even the justified alignment and auto-hyphenation, since the MLA standard mandates that all text be left-aligned).
Although LaTeX is cool, I did have some difficulty setting things up, so for my own benefit, I decided to document the solutions below for future reference.
## Installing and using LaTeX
Each OS has a different program to convert LaTeX source documents into typeset documents. The recommended one for Windows is apparently MiKTeX. Be sure to click ‘Other Downloads’ and install the full version, instead of just the basic version. It’ll take an hour or two, but it’ll save a lot of hassle down the road. The installer will first make you download all the packages. Once you do so, you have to re-run the installer and select the packages you just downloaded in order to actually begin installing.
Alternatively, I found that writelatex.com is a fairly decent online LaTeX editor/compiler.
To run LaTeX, use the following:
latex -output-format=pdf my-latex-file-here.tex
(LaTeX doesn’t generate pdf files by default)
You often have to run this twice – the first time builds the document, and the second time will make sure all the labels and references are correct.
## Learning LaTeX
I found the Not So Short Introduction to LaTeX to be the best guide for learning LaTeX from the ground up. The wikibook on LaTeX is also extremely high-quality – although it would make a good tutorial, I found myself using it more as a general reference and as a way to fill any holes in my knowledge as I actually tried creating LaTeX documents. And finally, tex.stackexchange is a good way to ask questions and learn about LaTeX in general (they’re also unusually friendly for a SE site too, which was a pleasant surprise).
## A typical document
After poking around, I found myself reusing this same preamble over and over again:
\documentclass[12pt, letterpaper]{article}
% META %
\usepackage{nag}
\usepackage{fixltx2e}
\usepackage{microtype}
\usepackage[utf8]{inputenc}
% MATH %
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{mathtools}
% TABLES %
\usepackage{booktabs}
\usepackage[tableposition=top]{caption}
% GRAPHICS %
\usepackage{graphicx}
\DeclareGraphicsExtensions{.pdf,.png,.jpg}
\graphicspath{{./}} % add extra image folders here as additional {...} groups
% CODE %
\usepackage{minted}
\newminted{python}{linenos,xleftmargin=0.25in}
% MODIFICATIONS %
\usepackage[margin=1.0in]{geometry}
\usepackage[parfill]{parskip}
% MISC %
\usepackage{enumitem}
\usepackage{xspace}
\title{A sample document}
\author{Michael Lee}
\date{Nov 2012}
\includeonly{main,appendix}
\begin{document}
\maketitle
\include{main}
\include{appendix}
\end{document}
### General Notes
• In order to compile this, you actually need to use latex -output-format=pdf -shell-escape filename.tex since I’m using minted, a syntax-highlighting package for code. If you exclude the package, you can just use latex -output-format=pdf filename.tex (see details below)
• I used the article documentclass here – I alternate from article for normal paper, and report for the occasional large paper. I also tend to make additional modifications to the report documentclass (see below).
### Notes on packages.
• The nag package will spout warning messages if it detects me using old packages. fixltx2e, microtype, and inputenc are recommendations from StackOverflow.
• The amsmath package replaces the default math stuff in LaTeX, and makes it look better. amssymb loads additional math symbols (the ‘therefore’ symbol, for example). mathtools allegedly repairs various small problems with amsmath.
• The default tables in LaTeX can potentially look quite ugly – booktabs provides a far more elegant alternative.
• I prefer my captions to be placed at the top of figures, so I make sure to use \usepackage[tableposition=top]{caption} when using the caption package.
• graphicx lets me import pictures into my document. The \graphicspath command lets me add folders to search for images (otherwise, LaTeX will only search the current directory).
• minted is a syntax-highlighting package. It requires that Python and Pygments be installed and added to the path. Alternatively, I could have used the listings package, but I didn’t feel like mucking around with those.
• geometry lets me manually set my margins, since LaTeX defaults to large margins.
• parskip lets me customize the behavior of my paragraphs. I prefer to have a single line between each paragraph with no indentation, and \usepackage[parfill]{parskip} lets me do so.
• hyperref lets me add links inside my pdf files. However, inside pdf files, the links often show up in ugly colored boxes. Specifying the hidelinks options hides those ugly boxes.
• enumitem lets me customize lists.
## Notes taking template
I like taking notes in this style:
I had some difficulty finding out how to configure my preamble to set this up, so for future reference:
\documentclass[12pt,letterpaper]{article}
% META %
\usepackage{nag}
\usepackage{fixltx2e}
% MODIFICATIONS %
\usepackage[margin=0.7in]{geometry}
\usepackage[parfill]{parskip}
% SECTIONS %
\usepackage[small,compact]{titlesec}
% I've modified the section titles to be smaller and more compact
% LISTS %
\usepackage{enumitem}
\setlist{nolistsep}
% Here, I'm removing the spacing between each list item so it becomes more compact
\newcommand{\LeftMargin}{1em}%
\newcommand{\NotesIndent}{1cm}%
% I'm just defining variables for convenience.
\newenvironment{notes}{%
\begin{itemize}[label=-,leftmargin=\LeftMargin,labelindent=\NotesIndent]%
\renewcommand*{\LeftMargin}{\NotesIndent}%
\ignorespaces%
}{
\end{itemize}%
\ignorespacesafterend%
}
% The notes environment basically hijacks the itemize environment and makes the
% margin initially zero and increase by 1cm with each nested list.
% MISC %
\usepackage{hyperref}
\usepackage{xspace}
\newcommand{\bn}{\begin{notes}\xspace}
\newcommand{\en}{\end{notes}\xspace}
\let\nt\item%
% I'm now replacing \begin{notes} with \bn and \end{notes}
% with \en so I don't have to type as much when actually taking notes.
% I'm also replacing \item with \nt so that my text will line up naturally
% with 4-space tabs.
\begin{document}
\section{Test 1}
\bn
\nt Example file
\nt I like having indented notes
\bn
\nt With arbitrarily-nested notes
\nt Lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
lorem ipsum lorem ipsum lorem ipsum lorem ipsum lorem ipsum
\bn
\nt lorem ipsum
\nt lorem ipsum
\nt lorem ipsum
\en
\en
\nt Test test test
\nt Test test test
\bn
\nt blah blah
\nt blah blah
\en
\en
\end{document}
|
{}
|
# Definition:Absolute Galois Group/Definition 1
Let $K$ be a field.
The absolute Galois group of $K$ is the Galois group $\Gal {K^{\operatorname{sep} } \mid K}$ of its separable closure.
|
{}
|
kidzsearch.com > wiki
# Card Sharks
Card Sharks is a game show that has aired in different versions since 1978.
## The main game
Two contestants competed against each other in the main game: the returning champion and a challenger. The returning champion was represented by the color red, the challenger by the color blue. The host, Jim Perry, then asked a toss-up question, which had been asked of 100 people before the show (example: "We surveyed 100 lawyers: Have you ever defended a person who you believed was guilty? How many lawyers said they have?"). The contestant he asked would guess how many of the 100 people gave that answer. The other contestant would then say whether they thought the actual number was higher or lower than the first contestant's guess. Whoever was closer to the actual number got a chance at the cards.
There were two rows of five cards: the top red row (for the champion) and the bottom blue row (for the challenger). The contestant in control had to predict whether each card was higher or lower than the card before it.
There were two games. Whoever won both games would go on to play the Money Cards.
## The Money Cards
The winning contestant would then play the Money Cards to win more money. He/she was given $200 to start out with. They then had to predict whether each card was higher or lower than the one before it, just like before. This time, they had to bet money on each guess (example: $200 that the next card is higher than a 2). The contestant worked their way across the bottom row, in which there were four cards, and then made it to the second row and was given $200 more. The least a person could bet on each card for the first two rows was $50. They then worked their way across that row, until they reached the top row, where there was only one card. That row was called the "Big Bet" row. There, the contestant had to bet at least half of what they had won before.
## Other versions
Card Sharks aired on NBC from 1978 to 1981 and was hosted by Jim Perry. It returned on CBS and in syndication in 1986. The CBS version was hosted by Bob Eubanks and ran until 1989. The syndicated version was hosted by comedian Bill Rafferty, but ran until 1987.
In 2001, Card Sharks came back, hosted by Pat Bullard. However, this version had different rules than the other ones. In this one, two teams of two contestants (two at a time) had to guess higher or lower (or predict if the next card had exactly the same number as the previous one) on one row of seven cards. This version was not very popular and was cancelled after 13 weeks. Many Card Sharks fans say this version is the worst game show revival of all time.
Card Sharks returned to television in 2019 as an hour-long series hosted by Joel McHale, airing on ABC. The gameplay is similar to the 1978-81 and 1986-89 versions.
|
{}
|
Issue No. 02 - April-June (2008 vol. 5)
ISSN: 1545-5963
pp: 313-318
ABSTRACT
Emerging microarray technologies allow affordable typing of very long genome sequences. A key challenge in analyzing such huge amounts of data is scalable and accurate computational inference of haplotypes (i.e., splitting of each genotype into a pair of corresponding haplotypes). In this paper, we first phase genotypes consisting of only two SNPs using genotype frequencies adjusted to the random mating model, and then extend phasing of two-SNP genotypes to phasing of complete genotypes using maximum spanning trees. The runtime of the proposed 2SNP algorithm is $O(nm(n + \log m))$, where n and m are the numbers of genotypes and SNPs, respectively, and it can handle genotypes spanning entire chromosomes in a matter of hours. On datasets across 23 chromosomal regions from HapMap [11], 2SNP is several orders of magnitude faster than GERBIL and PHASE while matching them in quality measured by the number of correctly phased genotypes, single-site and switching errors. For example, the 2SNP software phases an entire chromosome ($10^5$ SNPs from HapMap) for 30 individuals in 2 hours with average switching error 7.7%. We have also enhanced the 2SNP algorithm to phase family trio data and compared it with four other well-known phasing methods on simulated data from [15]. 2SNP is much faster than all of them while losing in quality only to PHASE. 2SNP software is publicly available at http://alla.cs.gsu.edu/~software/2SNP.
INDEX TERMS
SNP, genotype, haplotype, phasing, algorithm
CITATION
Dumitru Brinza, Alexander Zelikovsky, "2SNP: Scalable Phasing Method for Trios and Unrelated Individuals", IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 5, no. , pp. 313-318, April-June 2008, doi:10.1109/TCBB.2007.1068
|
{}
|
Patrick Logan on Software Transaction Memory
A detailed blog post on STM - and why it is a Bad Thing.
Comment viewing options
contains no actual arguments
I didn't see any serious arguments in that post. It was just the repeated assertion, in inflammatory language, that STM is bad. It implies the false dichotomy that things have to be shared-everything or shared-nothing. It implies that the reason some people like it is because it's "shiney" without mentioning the reduced complexity of the programming model, or the fact that transactions have been used quite successfully in DBMSs for decades. All in all not something I would expect to see on the LtU homepage.
Ditto
I was extremely disappointed. Patrick, who I know to be a serious thinker about programming language issues, blithely suggests that everyone should simply rewrite their systems in Erlang or some other shared-nothing message-passing language, nevermind the prohibitive costs of doing so for any real-world production system. He also handwaves the issue of transaction composability away as "academic," which is particularly odd considering the real world effort to make transactions composable. Complaining that STM takes you "too far away from the domain" while recommending a shared-nothing message-passing rewrite is also pretty rich, IMHO.
I also suspect that Patrick hasn't read The Next Mainstream Programming Languages, where I think Tim makes a very persuasive case that:
• Large-scale stateful software isn't going to go away, particularly when it already exists and is shipping.
• Preemptive threads and monitors don't scale.
• Imperative programming is the wrong default.
• There are multiple viable approaches to effects in otherwise pure languages (monads, effect types...)
• Slide 50: "Claim: Transactions are the only plausible solution to concurrent mutable state," where the alternatives considered on slide 49 are "referentially-transparent functions, message-passing concurrency, or continue using the sequential, single-threaded approach."
Now, it's possible that Tim's analysis doesn't apply to many domains other than games. But I don't see that argument being made effectively.
STM makes me nervous
I happen to agree completely with Patrick's observations. I have some reasonable experience in writing transactional systems (the JTA implementation for Weblogic, among others) and I'll attempt to state my concerns without actually having worked with an STM implementation. I'm sure someone will correct me if my concerns are misplaced or if I'm wrong.
STM is an optimistic transaction model suited for low levels of interference between threads of control. You can easily add a whole lot of overhead by introducing an innocuous-looking method (one that happens to touch a lot of state or merely increases the contention window) inside the atomic block. Unlike db transactions, STM has no timeout and may suffer mysterious slowdowns until the transaction goes through after many retries. At worst, it could livelock, if the window of contention and the retry frequency are big enough. STM works for Haskell because mutation is very limited in the language; I very much doubt STM will be a good solution in the hands of a C#/Java programmer.
Second, there are many areas that don't come under the purview of STM (file activity, screen updates, real db transactions with different tx semantics). They don't work under an arbitrary retry model, unlike in a locking case where you know you own the critical section.
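The retry behavior being worried about here can be made concrete. Below is a rough Python sketch of the optimistic model (not any real STM implementation; `TVar` and `atomically` are hypothetical names echoing STM-Haskell): transactions record the versions of everything they read, and validation at commit time forces a retry whenever another transaction got there first, which is exactly where contention turns into repeated work.

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter (hypothetical sketch)."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # only to make the commit step itself atomic

def atomically(tx):
    """Optimistically run tx until its read set is unchanged at commit time."""
    while True:
        reads = {}   # tvar -> version observed during the attempt
        writes = {}  # tvar -> new value to install
        tx(reads, writes)
        # validate-and-commit; contention here is what forces a retry
        touched = sorted(set(reads) | set(writes), key=id)
        for v in touched:
            v._lock.acquire()
        try:
            if all(v.version == ver for v, ver in reads.items()):
                for v, val in writes.items():
                    v.value = val
                    v.version += 1
                return
        finally:
            for v in touched:
                v._lock.release()
        # versions moved under us: loop and retry, possibly many times under contention

ammo = TVar(1)

def fire(reads, writes):
    reads[ammo] = ammo.version
    if ammo.value > 0:
        writes[ammo] = ammo.value - 1

atomically(fire)
atomically(fire)  # second attempt sees no ammo and writes nothing
print(ammo.value)  # 0
```

This is single-threaded for clarity; the point is that every retry re-executes the whole transaction body, so a body that touches a lot of state pays that cost repeatedly under contention.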
There seem to be a number of inconsistencies in Tim Sweeney's preferences. "Large-scale stateful software isn't going away" and "imperative programming is the wrong default" are contradictory, or at least sound like an impasse. Performance is crucial to him, yet he's willing to live with a 4X slowdown for STM.
In any case, I think threads with shared-memory semantics are a huge disaster, and Erlang isn't the only *kind* of solution around (functional, shared-nothing).
It is possible to create (and I'm working on it) a message passing system that supports a gazillion tasks, guarantees non-interference between tasks, provides asynchronous message passing, uses collocation effectively when it is there, and has user-level scheduling (what does the kernel know about your application's scheduling preferences anyway?).
It can be fast (blindingly fast if you talk to the Occam folks), composable (CSP, Occam), able to naturally take advantage of multiple cores and processors, and eliminate the unnecessary distinction (at compile time) between collocated and distributed systems. Finally, it can be intuitive to a normal programmer (someone who is still confused by monads, or lack of mutability) by retrofitting all the other concepts onto a traditional language and preserving referential integrity under aliasing.
I think the Actor/CSP model is a far better way of dealing with concurrency, but the implementations available so far (other than Erlang) haven't been able to demonstrate the full potential of the model.
STM nervousness
STM makes me nervous too, but raw shared-state concurrency is terrifying, so that's an improvement! Here are some thoughts:
"large scale stateful software isn't going away" and "imperative programming is the wrong default" are contradictory
My current analysis of the sort of code that's required in a game is that purely functional programming (plus Haskell-style ST non-imperative local state) is sufficiently expressive for around 70% of the code we write, which accounts for 95% of our performance profile. Thus I see functional programming as the right default.
The other 30% of our code (which accounts for 5% of performance) is necessarily so stateful that purely functional programming and message-passing concurrency are implausible. Thus I see imperative programming, in some form or another, as remaining an essential tool for some systems present in modern software.
Performance is crucial to him, yet he's willing to live with a 4X slowdown for STM
Without STM, the only tractable way to manage this code is to single-thread it. I'm not nearly smart enough to write race-free, deadlock-free code that scales to 10,000 freely-interacting objects from 1000 C++ classes maintained by tens of programmers in different locations.
With STM, I can continue writing software at full productivity, and it makes 5% of my performance profile become 4X slower. I break even at 4 threads, and come out ahead after that. So, eventually, STM wins over single-threading.
How do the message-passing guys implement robust interactions between independent objects with mutable state, like the classic "bank transfer" example that justifies transactions in databases? You end up writing an endless set of ad hoc transaction-like message exchanges for each interaction that may occur in the system. Thus my argument that STM is the only productivity-preserving concurrency solution for the kind of problems we encounter in complex object-oriented systems that truly necessitate state, such as games.
message passing and transactions
The other 30% of our code (which accounts for 5% of performance) is necessarily so stateful that purely functional programming and message-passing concurrency are implausible
Can you help me understand why message passing concurrency is implausible for your domain? Isn't the Opengl API itself a wrapper over a message passing architecture?
In the server/middleware space that I'm more conversant with, there are a large number of examples where mutability, concurrency and message passing are standard patterns. The Tuxedo transaction monitor has just a few primitives (tp_send, tp_enqueue etc.) and uses OS processes for isolation. It works for Visa and Amazon.
The Singularity OS is built on a foundation of message passing and isolated processes, where each process is written in a mutable style; there is no shortage of concurrency or mutability. (Incidentally, Tim Harris who co-wrote the STM-Haskell paper works on Singularity as well)
How do the message-passing guys implement robust interactions between independent objects with mutable state, like the classic "bank transfer" example that justifies transactions in databases
The term "message passing" includes occam-style messaging to MQSeries-style queued messaging, and robustness is a factor in both spaces and dealt with differently.
The bank transfer kind of example is easily handled by enqueuing a "transfer" message to a known teller object (this is service oriented architecture). An online store handles an order by enqueuing a message to the store to reduce the inventory. In such cases, each message send itself is an ACID transaction and an auditable action, so the act of transferring money may involve several DB transactions. I would hazard that more than 95% of transactions in the enterprise world are not between databases, but between a db and a messaging system.
Another example of a high performance message passing architecture is SEDA, with explicit support for scheduling between stages.
Coming back to LtU, the languages for writing such systems are woefully backward. I am working on a pet project to attempt to fix it in a familiar imperative setting, which I hope to demo in a couple of months. I have learnt much from the Erlang effort. I like Erlang not so much for its language but because of its systemic features (the supervisor hierarchy, failure mechanisms, lightweight isolated processes, etc.)
Hear, Hear
Coming back to LtU, the languages for writing such systems are woefully backward. I am working on a pet project to attempt to fix it in a familiar imperative setting, which I hope to demo in a couple of months. I have learnt much from the Erlang effort. I like Erlang not so much for its language but because of its systemic features (the supervisor hierarchy, failure mechanisms, lightweight isolated processes, etc.)
I could not agree more. Looking forward to that.
Can you help me understand
Can you help me understand why message passing concurrency is implausible for your domain? Isn't the Opengl API itself a wrapper over a message passing architecture?
Consider a game like Grand Theft Auto, where 10,000 or more objects move around and interact independently. Here, the objects are things like people, cars, weapons, props, etc. Each object has attributes that change over time, such as position, damage, relationships with other objects (who's carrying what), etc. At any point, any set of objects can potentially interact with each other in a stateful way, for example I can get in a car, drive it around, and run into a mailbox, damaging it.
Many of these interactions require atomic updates of groups of objects. For example, to fire a weapon, I need to first determine whether I'm carrying the weapon, whether the weapon has any ammunition, and then I need to create a bullet object. If a weapon has one bullet and two people tried to fire it simultaneously, non-atomic updates might lead both players to conclude that they can fire the weapon, so two bullets are created, and the gun is left with -1 bullets, an inconsistent state.
Basically, all of the atomicity arguments that have been applied to databases (e.g. the bank-transfer example) directly map onto the game example.
Therefore, you need some way to guarantee atomicity. Candidates:
• We do this today (on 3-core CPUs) by simply single-threading this kind of code.
• You could implement this using message passing, but you'd be writing and debugging your own ad-hoc transactional protocol for each of the tens of thousands of updates and state transitions present in the program.
• If our game were sufficiently simple, we could multi-thread it by carefully locking and synchronizing the objects at the appropriate point, but as software complexity scales up, the analysis of whether the program might eventually deadlock is intractable.
So, in this case, transactional memory is the most natural, productive solution to this problem. Keep in mind, we live in a comparatively simple world, since we are always targeting a single multi-core CPU with shared coherent memory, and aren't concerned with databases, distributed computing, or fault-tolerance.
"but you'd be writing and debugging your own ad-hoc transactional protocol for each of the tens of thousands of updates and state transitions present in the program"
This seems to be simply a language user interface problem. If passing a message looks just like a function call, then designing an ad-hoc message passing protocol is no different than designing a function or object interface for these things (which you are already doing).
"Sending a message" (as it is called) to an object already looks like a function call in OOPLs. There's no reason sending a message to a channel/process/actor can't also look like a function call.
Bloat
The problem with implementing ad-hoc transactions using message-passing is that it bloats the code tremendously. In the single-threaded world or the STM world, you say "if(Ammo>0) {Ammo--; FireBullet();}". That would bloat up into tens of lines of asynchronous code to ask the recipient if he has ammunition, to reserve that ammunition, to finalize the interaction by effecting the ammo reduction, handle failure asynchronously, etc. Such code needs far more thought and testing than the STM version.
A solution that imposes an order-of-magnitude code bloat, productivity decrease, etc, is unattractive.
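For contrast, here is roughly what the concise version looks like when sketched in Python, with a single lock standing in for the atomic block (all names here, `Gun`, `try_fire`, `fire_bullet`, are invented for illustration). The whole check-then-act fits in a few lines, which is the point being made about bloat:

```python
import threading

# One global lock stands in for "atomic { ... }" in this hypothetical sketch.
_atomic = threading.Lock()

class Gun:
    def __init__(self, ammo):
        self.ammo = ammo

bullets_fired = []

def fire_bullet(gun):
    bullets_fired.append("bullet")

def try_fire(gun):
    # The whole check-then-act is one critical section, mirroring
    # "if (Ammo > 0) { Ammo--; FireBullet(); }" from the comment above.
    with _atomic:
        if gun.ammo > 0:
            gun.ammo -= 1
            fire_bullet(gun)
            return True
        return False

gun = Gun(ammo=1)
results = []
threads = [threading.Thread(target=lambda: results.append(try_fire(gun)))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(gun.ammo, sorted(results))  # 0 [False, True]
```

Two racing shooters, one bullet: exactly one succeeds and the ammo count never goes negative. The asynchronous reserve/confirm/rollback protocol that replaces those few lines in a message-passing design is where the order-of-magnitude bloat comes from.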
async message passing syntax
[edit: actually I'm arguing more that async message passing doesn't have to be as code intensive as it is currently. I grant the atomicity problem is a problem.]
I don't see why message passing would have to be longer than your example using STM. Check out the syntax for Raph Levien's Io language. It makes continuation passing style easy to write. In his syntax, semicolon introduces a function. So the semicolon at the end of a statement is actually a function representing the continuation of the program. Channels are simply functions. A remote procedure call with an asynchronous continuation winds up looking just like sequential code.
level of abstraction
It seems as if you make the conceptual leap from "everything is an object" to "everything is a process". I don't see why you couldn't have pretty much the same code in Erlang, based on my layman's knowledge of guns. Just like no one hanging out of a car window, shooting like mad, would be dealing with individual "bullet objects", your program ought to work at the level of "if (clip_empty()) {reload()}". If another actor has to get involved, say because the shooter doesn't have a spare clip in his pocket, a request is needed - in Erlang, that would most likely be a one-liner, not tens of lines of code.
There is no need for two-phase commits etc. just like there isn't in the corresponding real-life scenario. There is also hardly any call for specific error handling where the gun has to account for a scenario where the clip doesn't magically appear. The gun doesn't reload itself, does it? And the banking analogy doesn't hold, since you wouldn't have two shooters in different cars loading the same gun.
The thing about concurrency-oriented programming is that you normally go to real life for abstractions, and many of the good abstractions are quite simple. What would be the point of not using a simple counter to keep track of how many bullets remain in the gun? There is no interesting concurrency happening, but a simple relationship: (shot fired) -> (one less bullet). Make gun, clips, bullets into passive objects controlled by an actor - the shooter.
I want my 2pc
There is no need for two-phase commits etc. just like there isn't in the corresponding real-life scenario.
Dealing with race conditions in message passing scenarios has been motivation enough to adapt Software Transactional Memory to distributed transactions in Actors Model.
Besides, ideally software should be easier to reason about than real life.
I'm sort of curious what Patrick Logan would think about the idea that Message Passing doesn't solve the important problems solved by transactions.
You could implement this using message passing, but you'd be writing and debugging your own ad-hoc transactional protocol for each of the tens of thousands of updates and state transitions present in the program.
You said that critical point so clearly and distinctly! I'm very impressed.
The necessity for atomic updates is an application-specific thing. Given variables a and b, whether two otherwise sequential updates to them must be transactional depends on the semantics of the update within the application. It cannot be inferred, because the information only exists in the programmer's head. For the software to treat the updates atomically, it must be explicitly informed of the need.
Thus the smallest possible amount of information the programmer must give the application is that the updates have to happen together. In other words:
atomic { update a; update b; }
represents a floor on the amount of annotation. There can be no solution to atomicity that is any more concise than this. (The specific syntax is of course irrelevant; begin; ... commit; is equivalent.)
So that code has to exist somewhere. It will either 1) exist in client code, or it will 2) exist in the message-handler implementation of the "ad-hoc transactional protocol" mentioned above. Note that 1 is a proper subset of 2, and that the additional overhead of 2 is substantial, and must be paid again with each additional atomic update requirement.
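The "annotation floor" argument can be illustrated with a minimal sketch (Python, with one global lock standing in for `atomic { ... }`; the bank accounts and `transfer` function are hypothetical):

```python
import threading
from contextlib import contextmanager

_lock = threading.Lock()

@contextmanager
def atomic():
    # stands in for "atomic { ... }": serialize the whole group of updates
    with _lock:
        yield

a = {"balance": 100}
b = {"balance": 0}

def transfer(amount):
    # The two updates must happen together; this context manager is
    # the minimal annotation the programmer must supply.
    with atomic():
        a["balance"] -= amount
        b["balance"] += amount

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(a["balance"], b["balance"])  # 0 100
```

The invariant (total balance stays at 100) is preserved across concurrent transfers. Nothing about "update a; update b" tells the runtime the pair is indivisible, so some marker like this must exist, whether in client code or buried inside a message-handler protocol.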
Having used a variety of distributed computing techniques, including DCE, RMI, pub/sub, and raw TCP, I am convinced of the superiority of message passing, particularly pub/sub message passing. It is a great solution to the problems of shared-everything, as well as a great abstraction in its own right. However it is not a solution to atomic updates. The solution to the problem of atomic updates is transactions.
For example, to fire a
For example, to fire a weapon, I need to first determine whether I'm carrying the weapon, whether the weapon has any ammunition, and then I need to create a bullet object. If a weapon has one bullet and two people tried to fire it simultaneously, non-atomic updates might lead both players to conclude that they can fire the weapon, so two bullets are created, and the gun is left with -1 bullets, an inconsistent state.
I'm not sure that I understand why this would be a problem in a message-passing system. If the ammunition state of the gun is internal to the gun (as it should be, since it's a property of the gun), then the first "fire" message to reach the gun would result in the creation of a bullet object. The second "fire" message would fail to create a bullet, because the internal ammunition level of the gun would have already reached 0 as a result of the first "fire" message (in any sensible message-passing system, it is possible for actors to refuse to process a new message until they've completely processed earlier messages - that's one of the fundamental differences between actors and passive objects). You wouldn't end up with two bullets, or a negative ammo state. The only remaining problem is that two players think they own the same weapon, but I fail to see how STM would solve that problem either.
Am I missing something here? Can you please explain why you think a message-passing implementation (in say occam or Erlang) wouldn't preserve atomicity in this case?
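The behavior described, an actor finishing one message before starting the next, is easy to sketch. Here is a toy Python "gun actor" (a thread plus a mailbox; all names invented for illustration) showing that two "fire" messages can never produce -1 bullets, because the gun's state is touched only by its own thread:

```python
import queue
import threading

class GunActor:
    """Gun state is private to the actor's thread; messages are handled one at a time."""
    def __init__(self, ammo):
        self.ammo = ammo
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self.mailbox.get()
            if msg == "stop":
                break
            if msg == "fire":
                if self.ammo > 0:
                    self.ammo -= 1
                    reply.put("bang")
                else:
                    reply.put("click")  # out of ammo; a -1 state is impossible

    def fire(self):
        # synchronous request/reply over the mailbox
        reply = queue.Queue()
        self.mailbox.put(("fire", reply))
        return reply.get()

gun = GunActor(ammo=1)
print(gun.fire(), gun.fire())  # bang click
gun.mailbox.put(("stop", None))
```

No matter how many senders race on the mailbox, the check and the decrement are never interleaved, so single-object atomicity holds for free. The thread's open question, atomicity across *several* such actors, is exactly what this design does not give you.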
Consider this example
You fire the bullet at a petrol tank. Potentially lots of other bullets, and other objects may be near this tank too. All of them will need to:
* Check if they hit the tank
* If so cause it to explode
* and deal the appropriate damage to themselves (e.g. modify bullet trajectory)
This needs to happen atomically. In other words, even if you decide that you didn't hit the barrel and it should not be modified, there is no way of knowing this BEFORE you've read the location of the barrel (and in case you do need to update it, you better have exclusive access!).
You could do this by having the barrel object implement an ad hoc transaction protocol, but if you have lots of objects in this scene who all examine this barrel (99.9% do nothing to it), that would cause contention. Or you could optimize it by having a non-transactional check of the location to avoid acquiring exclusive access when you don't need it, but this is familiar territory from locks: in the real world you'll have much more complicated logic than this, where you really can't know if you needed transactional semantics until you've basically done all the work already.
STM solves this because each thread can examine the object all it wants; it's only when the object is actually modified that collisions can happen. Message passing is great, but it doesn't really work well for everything.
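The read-mostly point can be sketched as follows (Python; a version counter stands in for STM's read-set validation, and all names are hypothetical). The 99.9% of objects that merely examine the barrel never invalidate anyone; only an actual write bumps the version:

```python
import threading

class Barrel:
    """Versioned state: reads record a version; only writes bump it (toy sketch)."""
    def __init__(self, pos):
        self.pos = pos
        self.version = 0
        self.lock = threading.Lock()

def read_txn(barrel, my_pos):
    """The common case: examine the barrel, touch nothing, force no retries."""
    while True:
        v = barrel.version
        hit = (barrel.pos == my_pos)
        if barrel.version == v:  # read set unchanged: commit is free
            return hit

def write_txn(barrel, new_pos):
    """The rare case: an actual modification is the only thing that bumps the version."""
    with barrel.lock:
        barrel.pos = new_pos
        barrel.version += 1

barrel = Barrel(pos=(0, 0))
# Hundreds of objects examine the barrel; none of them conflict with each other.
misses = [read_txn(barrel, (i, i)) for i in range(1, 1000)]
write_txn(barrel, (5, 5))
print(any(misses), barrel.version)  # False 1
```

Under an exclusive-access protocol, every one of those thousand readers would have serialized on the barrel; here only the single writer pays for coordination.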
Futures
I think that for games, actually using just straight futures might be the best approach, as you really don't want any observable concurrency but just a clean (optional input) -> output loop.
This is a better approach.
This is a better approach. At time T1, the player fires a bullet. The bullet object essentially ray-traces to see if it'll collide with another object, and at what time T2. It schedules a timer tick message to be sent to it at that time.
When receiving the timer tick at T2, the bullet then checks to see if the aforementioned object still exists. If so, kaboom! If not, ray tracing proceeds again, complete with a reschedule, until the bullet's range has been exceeded. Note that your locking, if needed at all, is constrained exclusively to one, and only one object at a time.
This is the basic algorithm for simulating electronics circuits in software (often without the use of multithreading, by the way), and since it's basically 100% event-driven, it can even be done entirely in a single process/thread. In fact, considering the sheer quantity of cycles invested in switching CPU state from thread to thread (even Erlang's is non-zero, folks), using a purely event-driven approach such as this just might be preferable.
If you want to model the real world, then model the real world how it actually works. It's all pretty common sense, at least to me.
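The timer-tick scheme above is classic discrete-event simulation, and it can be sketched in a few lines of single-threaded Python (heapq as the event queue; `bullet_tick` and the target table are invented for illustration):

```python
import heapq

# A minimal discrete-event scheduler, in the style used for circuit
# simulation: single-threaded and entirely event-driven.
events = []  # (time, seq, callback); seq breaks ties deterministically
_seq = 0

def schedule(time, callback):
    global _seq
    heapq.heappush(events, (time, _seq, callback))
    _seq += 1

def run():
    while events:
        time, _, cb = heapq.heappop(events)
        cb(time)

log = []
targets = {"mailbox": True}  # the ray-traced target still exists at impact time

def bullet_tick(t):
    if targets.get("mailbox"):
        log.append((t, "kaboom"))
    else:
        log.append((t, "whiff"))  # would re-trace and reschedule here

# At T1 the player fires; ray tracing predicts impact at T2 = 3.
schedule(1, lambda t: log.append((t, "fired")))
schedule(3, bullet_tick)
run()
print(log)  # [(1, 'fired'), (3, 'kaboom')]
```

Everything runs in one thread, so no locking is needed at all, which is the context-switch-free extreme the comment is advocating.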
What a strange thing to say...
If I'm reading you correctly you're advocating an event-driven model (nothing wrong with that per se), but then you say:
If you want to model the real world, then model the real world how it actually works.
The real world is massively parallel -- not event-driven.
Massively parallel implies atomicity
I fail to see the problem that STM solves for the massively parallel objects in a game.
Tim Sweeney wrote:
http://lambda-the-ultimate.org/node/2048#comment-25105
The other 30% of our code (which accounts for 5% of performance) is necessarily so stateful that purely functional programming and message-passing concurrency are implausible. Thus I see imperative programming, in some form or another, as remaining an essential tool for some systems present in modern software.
Without STM, the only tractable way to manage this code is to single-thread it. I'm not nearly smart enough to write race-free, deadlock-free code that scales to 10,000 freely-interacting objects from 1000 C++ classes maintained by tens of programmers in different locations.
How do the message-passing guys implement robust interactions between independent objects with mutable state...
http://lambda-the-ultimate.org/node/2048#comment-25151
At any point, any set of objects can potentially interact with each other in a stateful way, for example I can get in a car, drive it around, and run into a mailbox, damaging it.
Many of these interactions require atomic updates of groups of objects.
sylvan wrote:
http://lambda-the-ultimate.org/node/2048#comment-40141
You fire the bullet at a petrol tank. Potentially lots of other bullets, and other objects may be near this tank too. All of them will need to:
* Check if they hit the tank
* If so cause it to explode
* and deal the appropriate damage to themselves (e.g. modify bullet trajectory)
This needs to happen atomically.
Bárður Árantsson wrote:
http://lambda-the-ultimate.org/node/2048#comment-43313
The real world is massively parallel -- not event-driven.
How can a massively parallel world be built without orthogonal objects which are implicitly atomic?
In a massively parallel world, the atomicity should be implicitly in the class design for orthogonal actors:
* Shooter requests gun to fire
* Gun fires and atomically reduces its ammunition count, replies to shooter if necessary
* Bullet is an orthogonal object created by the gun
* Bullet does read-only access of other objects to determine its next hit
* Bullet requests the tank to hit it
* Tank records the hit, decides when it should explode, sends damage requests to nearby objects, replies to bullet as necessary
* Bullet records whether it is still active (moving or not)
Any other design would not be OO orthogonal (you would have global logic trying to manage groups of objects) and thus would not be able to handle all scenarios that may arise in the massively parallel world. Composability comes from correct OO orthogonality.
Where are the race conditions? If there are any race conditions, i.e. nearby objects absorb/alter the shockwave of the explosion for other nearby objects, then STM does not help you with the needed logic.
It seems to me that as games scale to be more massively parallel (more permutations of things can happen) along with the power of more CPU cores, the class design is going to have to be atomically orthogonal anyway.
In short, any global algorithms need to be replaced by iterative atomic ones, i.e. the objects absorbing the shockwave reply to the tank's explosion object, the explosion object iteratively alters its shockwave, and the explosion object is responsible for traversing the objects in an order which correctly models the perturbation of the shockwave. If shockwave propagation is a global solution matrix, then you can argue that transactions are needed on a collection of nearby objects, but then how do you handle the unexpected event of another explosion nearby in the interim? I think ultimately massive parallelism resolves to iterative atomic algorithms.
I am not a game programmer (yet). Have I missed something?
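The per-instance atomicity being proposed might look like this in Python (a toy sketch of the shooter/gun/bullet/tank chain above; each instance guards only its own state with its own mutex, and every name is invented):

```python
import threading

class Atomic:
    """Base class giving each instance its own mutex, per the proposal above."""
    def __init__(self):
        self._mutex = threading.RLock()

class Tank(Atomic):
    def __init__(self):
        super().__init__()
        self.damage = 0
    def hit(self):
        with self._mutex:        # tank records its own hit atomically
            self.damage += 1
            return self.damage

class Bullet(Atomic):
    def __init__(self, target):
        super().__init__()
        self.target = target
    def fly(self):
        return self.target.hit() # a request to the tank, not shared-state mutation

class Gun(Atomic):
    def __init__(self, ammo):
        super().__init__()
        self.ammo = ammo
    def fire(self, target):
        with self._mutex:        # atomically reduce own ammunition count
            if self.ammo == 0:
                return None
            self.ammo -= 1
        return Bullet(target)    # bullet is an orthogonal object created by the gun

tank = Tank()
gun = Gun(ammo=1)
bullet = gun.fire(tank)
print(bullet.fly(), gun.fire(tank))  # 1 None
```

Each critical section covers exactly one instance, so no lock ordering across objects is ever needed; the open question in the replies below is whether every game rule can really be decomposed into such single-instance steps.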
Massively Parallel Worlds
One can still support global rules in a massively parallel system. Rules-based programming, for example, can easily be massively parallel - there is an inherent parallelism principle involved: parallelize evaluation of the conditions (which might be defined by pure logic or functions of arbitrary complexity), and parallelize execution of the events. Transactions, in this case, could serialize the triggered events to give programmers a handle on grokking rules interactions. Transactions would run in parallel in the absence of any conflicting operations.
Of course, you likely wouldn't want to define a shockwave as something 'instantaneous'. In reality, shockwaves and fireballs take milliseconds once they hit air. In games, shockwaves and fireballs last for seconds. To deal with conditions like this, you should probably create a shockwave/fireball-object in the world's database and let it influence the world for a little while before dying out.
A neat little tweak on the whole 'rules-based programming' is to support arbitrary reactive programming and data-binding - not just for the conditions defining the rules, but also for the state of the world itself. This would allow you to logically express, say, the position of a bullet or shockwave as a function of time, then trigger rules automatically over time based on the logical 'expansion' of the shockwave. One might also bind to external ('real world') data and event sources. Players would just be one example.
In any case, there are too many convenience advantages to defining 'gameplay rules' (including physics) at a global level, using rulebooks and such, for me to readily give them up. And I'd say that the ability to have gameplay operations reference and impact multiple gameplay 'objects' is a relatively critical feature - not something to easily abandon.
See Inform language, which isn't parallel but does offer an example of rules-based gameplay development.
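A toy sketch of the rules-based idea (Python; the fireball rule and all world content are invented for illustration): conditions are pure functions of world state, so they could be evaluated in parallel, while the triggered events run serially here, standing in for the transactions mentioned above.

```python
# World state as a plain database; one global rule may touch many objects.
world = {
    "fireball": {"pos": 0, "radius": 2},
    "objects": [{"pos": 1, "hp": 10}, {"pos": 5, "hp": 10}],
}

def fireball_active(w):
    # pure condition: safe to evaluate in parallel with other conditions
    return w["fireball"] is not None

def apply_burn(w):
    fb = w["fireball"]
    for obj in w["objects"]:  # a single rule referencing multiple gameplay objects
        if abs(obj["pos"] - fb["pos"]) <= fb["radius"]:
            obj["hp"] -= 3
    w["fireball"] = None      # the fireball dies out after influencing the world

rules = [(fireball_active, apply_burn)]

def step(w):
    # evaluate every condition first (the parallelizable part),
    # then fire the triggered events one at a time (the "transactional" part)
    triggered = [event for cond, event in rules if cond(w)]
    for event in triggered:
        event(w)

step(world)
print([obj["hp"] for obj in world["objects"]])  # [7, 10]
```

The object within the blast radius takes damage and the distant one does not, all from one globally stated rule, which is the convenience being defended here.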
Example rule?
Could you give me even one example of a rule that requires locking the state of orthogonal instances across its update of each of the affected instances?
The visualization in my mind is that real world rules interact incrementally, i.e. for example with physics, once you've applied a reaction force to an object, there is a domino chain reaction of reaction forces as the physics calculation is repeated for all involved (nearby) objects. So I still fail to visualize an example of how a global rule is beneficially applied to numerous objects in one atomic step. Rather, I visualize that a global rule is applied as atomic incremental steps on interacting instances (actors).
If I am correct or for algorithms where I am correct, then it seems we need the language to lock the class instance so that it blocks synchronous (re-entrant) access and queues threads asynchronously, i.e. the transaction granularity is per-instance?
If there are global (batch) algorithms that will require multi-instance (set of) transaction granularity then we will need some transaction mechanism for that, but I do not yet visualize even one example of such an algorithm. One potential example is if physics algorithms calculate all the forces on all the interacting objects at one point in time simultaneously, and in this case, the algorithm needs to tell the language which instances are blocked for the atomic operation. I do not see how a general STM memory scale granularity is optimal, because the algorithm granularity is still per-instance as a set of instances. It seems to me for efficiency a run-time check for a per-instance lock flag (mutex) occurs only once per thread's atomic (non-re-entrant due to mutex) method call; whereas, a general STM memory scale (granularity) protection will involve a run-time check on every write to the class instance's data by the same thread.
In short, we need per-instance mutexes, automatically enforced by the language for single-instance atomicity, and with language support for multi-instance transactions.
Note I started studying all the concepts (after reading Tim Sweeney's paper about future of game programming) about 24 hours ago, i.e. lazy vs. strict/lenient, imperative vs. functional, etc.., so perhaps I am incredibly wrong. My knowledge of functional programming is at earliest stage of conceptual awakening, and no experience other than foreach() in imperative OO languages and some HaXe (O'Caml derivative ActionScript 3 clone). Just sharing.
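The "automatically enforced by the language" part can be approximated in Python with a class decorator (a hypothetical sketch; `synchronized_class` is an invented name, and a real language would do this in the compiler rather than by wrapping methods):

```python
import functools
import threading

def synchronized_class(cls):
    """Toy version of language-enforced per-instance atomicity:
    every public method runs under that instance's own mutex."""
    def wrap(method):
        @functools.wraps(method)
        def inner(self, *args, **kwargs):
            with self._instance_mutex:
                return method(self, *args, **kwargs)
        return inner
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("_"):
            setattr(cls, name, wrap(attr))
    orig_init = cls.__init__
    def __init__(self, *args, **kwargs):
        # RLock: re-entrant calls on self do not deadlock
        self._instance_mutex = threading.RLock()
        orig_init(self, *args, **kwargs)
    cls.__init__ = __init__
    return cls

@synchronized_class
class Counter:
    def __init__(self):
        self.n = 0
    def bump(self):
        self.n += 1

c = Counter()
threads = [threading.Thread(target=lambda: [c.bump() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(c.n)  # 4000
```

The run-time cost is one lock acquisition per method call, which is the per-call (rather than per-write) overhead profile being argued for here.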
Batch updates emulate the asynchronous nature of real world
I realized the update order of instances is perhaps the reason we need batch (multi-instance, transaction) updates, but I am not yet convinced we can't adjust algorithms to per-instance incremental update propagation:
http://lambda-the-ultimate.org/node/3637#comment-51620
Rules, Actors, and STM
The visualization in my mind is that real world rules interact incrementally, i.e. for example with physics once you've applied a reaction force to an object, then there is a domino chain reaction of reaction forces as the physics calculation is repeated for all involved (nearby) objects.
In rules-based programming, describing 'incremental' simulations is not difficult.
For example, you might define a program that involves rules for what happens when one domino collides with another. In such a collision, the output behavior will want to look at and modify the state (velocity) of two different dominos. Given appropriate rules for determining collisions, one could knock one domino down and see them all fall.
Even for an incremental program, where each operation involves an interaction between only two elements, it is no less valuable that each operation on two dominos be atomic rather than suffering interference from yet more operations whilst halfway through completing the physics computations.
But to address your overall argument, your focus on 'real world' parallelism does not address a simple reality in the context of gameplay: games aren't the real world. Programmers need to describe gameplay rules, which may possess little relation at all to physics. For example, 'after the golden cat icon is placed on the pedestal, all cats in town become feral'. Not much physics there, eh?
If I am correct or for algorithms where I am correct, then it seems we need the language to lock the class instance so that it blocks synchronous (re-entrant) access and queues threads asynchronously, i.e. the transaction granularity is per-instance?
The model you describe is common to Actors Model implementations. It does serve well for describing program components (first-class processes) and supports object-capability security, distribution, concurrency.
I favor an asynchronous actors model (no queues, allows replicated distribution) with a primitive 'cell' for state across operations, protected by transactions.
I would not recommend Actors Model for representing domain or gameplay objects (tanks, bullets, etc.), though I'd rather not go into the (many) various reasons here. It is my assessment that OOP for domain modeling is a lot like pushing a cart with a rope.
As a point, you shouldn't equate transactions to locking. You might wish to look into optimistic concurrency.
It seems to me for efficiency a run-time check for a per-instance lock flag (mutex) occurs only once per thread's atomic method call; whereas, a general STM memory scale (granularity) protection will involve a run-time check on every write to the class instance's data by the same thread.
STM's performance for isolated update to an isolated object is somewhat limited, true.
On the other hand, favoring STM allows you to avoid the actor's message-queue, which allows local computation (e.g. on a stack) and allows 'inlining' for references to a fixed actor, and even allows 'replies' from actors without risk of deadlock wait cycles. Further, a single actor can process many operations simultaneously (and roll back only for actual conflicts as opposed to blocking for potential ones), and a stateless asynchronous actor can effectively be replicated.
So, while STM may seem to be on the flaky side for performance, STM may optimize and scale better than queued messages for atomic actors. Then, of course, there are its many flexibility and safe composition advantages.
Still leaning towards the per-instance granularity for OO code
...Even for an incremental program, where each operation involves an interaction between only two elements, it is no less valuable that each operation on two dominos be atomic rather than suffering interference...
This is still per-instance mutex atomic as I proposed, because the interaction between the two is the method call and reply.
...For example, 'after the golden cat icon is placed on the pedestal, all cats in town become feral'...
It is still per-instance mutex atomic, because one possible correct OO design is that the global cat feral state should be a static member of the cat class. The instances could override it, and should check the timestamp of their override to see if the global member has a later timestamp and thus priority.
And if the feral global state is placed in a parent class or in a child instance that cat holds a static reference to, then the composability improves because new programming in future can leverage the global feral state. The important point is that composability improves with correct OO design, and thus I don't think it is hard or desirable to be avoided by trying to find some lower-level platform method of hiding the complexity-- that will just lead to composability spagetti. In short, as I wrote in other thread today about universal trend of entropy, the more global management code (the less work one has done to achieve correct OO design), the lower the re-usability of the code, the shorter the half-life of the code, the faster the local exponential peak & decay.
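The design sketched here could look like the following (the class shape, the logical-clock timestamps, and the method names are my own illustration of the timestamp-priority rule described above, not a definitive implementation):

```python
import itertools

_clock = itertools.count(1)  # simple logical clock for timestamps

class Cat:
    # Class-level ("static") feral state shared by all cats,
    # stamped so overrides can be prioritized by recency.
    global_feral = False
    global_feral_ts = 0

    def __init__(self):
        self._feral_override = None  # per-instance override, or None
        self._override_ts = 0

    def set_feral_override(self, value):
        self._feral_override = value
        self._override_ts = next(_clock)

    @classmethod
    def set_global_feral(cls, value):
        cls.global_feral = value
        cls.global_feral_ts = next(_clock)

    @property
    def feral(self):
        # The later timestamp wins: a global change made after an
        # instance override takes priority, and vice versa.
        if self._feral_override is None or Cat.global_feral_ts > self._override_ts:
            return Cat.global_feral
        return self._feral_override

felix = Cat()
felix.set_feral_override(False)  # a tamed cat
Cat.set_global_feral(True)       # golden icon placed on the pedestal
print(felix.feral)               # global change is later, so it wins: True
```

Note that reading or writing the single static member is a per-instance-or-narrower operation, which is the point being argued: the rule composes without a multi-object transaction.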
So we need to be thinking more about correct design and algorithms. Correct OO design for multi-threading can't be achieved with concepts that are not well matched to it (i.e. functional programming is not OO state programming, and STM granularity is not OO per-instance granularity).
I would not recommend Actors Model for representing domain or gameplay objects (tanks, bullets, etc.), though I'd rather not go into the (many) various reasons here. It is my assessment that OOP for domain modeling is a lot like pushing a cart with a rope.
As a point, you shouldn't equate transactions to locking. You might wish to look into optimistic concurrency.
Optimistic concurrency is smaller granularity than per-instance
...but at a cost in overhead, plus perhaps encouraging sub-optimal OOP design.
As a point, you shouldn't equate transactions to locking. You might wish to look into optimistic concurrency.
If threads are not OFTEN able to modify the same object (instance) AND NOT modify the same data members of the object, then optimistic concurrency is worse than mutex-locking the entire instance, due to the additional overhead of per-data-member protection (comparison or checksum). Thus I conclude that optimistic concurrency is only superior to per-instance locking when the atomic algorithms are OFTEN more granular than per-instance. Further, it will only be superior if the blocked threads OFTEN have no other equivalent-priority work to do instead, i.e. if the blocked thread is not choosing randomly among which of its siblings to update incrementally.
This is an example of why we need to think carefully about algorithms. The chosen platform/language has to match well the granularity of propagation of the algorithms we choose.
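For contrast, here is a minimal hand-rolled sketch of optimistic concurrency on a single data member (not any particular STM library; the version-check-and-retry loop is exactly the per-write overhead being weighed above):

```python
import threading

class VersionedCell:
    """One data member protected optimistically: readers record a
    version, writers commit only if the version is unchanged."""
    def __init__(self, value):
        self._lock = threading.Lock()  # guards the tiny commit step only
        self._value = value
        self._version = 0

    def read(self):
        with self._lock:
            return self._value, self._version

    def try_commit(self, expected_version, new_value):
        with self._lock:
            if self._version != expected_version:
                return False           # conflict: caller must retry
            self._value = new_value
            self._version += 1
            return True

def optimistic_add(cell, delta):
    # Retry until our read version survives to the commit.
    while True:
        value, version = cell.read()
        if cell.try_commit(version, value + delta):
            return

cell = VersionedCell(0)
threads = [threading.Thread(target=optimistic_add, args=(cell, 1)) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.read()[0])  # 8
```

A per-instance mutex would replace the version bookkeeping and retry loop with a single lock acquisition around the whole read-modify-write, which is the trade-off the post describes.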
STM's performance for isolated update to an isolated object is somewhat limited, true.
But we must qualify "isolated"; ditto the logic above, with attention on the emphasized conditions (OFTEN ... AND NOT).
On the other hand, favoring STM allows you to avoid the actor's message-queue, which allows local computation (e.g. on a stack) and allows 'inlining' for references to a fixed actor, and even allows 'replies' from actors without risk of deadlock wait cycles. Further, a single actor can process many operations simultaneously (and roll back only for actual conflicts as opposed to blocking for potential ones), and a stateless asynchronous actor can effectively be replicated.
I am not privy to the overall design you are using for "Actors Model", so the debate of the applicability of STM will hinge on my ability to analyze alternative ways to organize the design, potentially similar to the curve ball examples I threw in the prior post.
For example, I am proposing that the mutex be language-driven at the per-instance granularity, thus I am not visualizing what the message queue is for. Also I am proposing that the reply is the return from a method call from one instance to another, thus there is no wait deadlock potential. In my current visualization, the interaction between an instance and its world is through a single thread (the per-instance mutex), so why would it need to spawn multiple interactions as messages instead of completing each one atomically? If there is indeed some need to integrate over multiple instances, then we need multi-instance transactions, and I have argued above that per-instance granularity is probably what is needed (or STM, if the granularity is OFTEN sub-instance data and would otherwise block priority work). Thus I am visualizing that we do not send a message to each instance involved, but rather collect them in a mutex collection (or other language construct) that locks their per-instance mutexes, and then do array operations on all of them. What is to be gained by the sub-instance data-member granularity of STM, given my logic?
STM even worse for multi-instance transactions?
The cost of the emphasized logic in the prior post is even greater with multi-instance transactions, because a non-optimistic-case rollback will potentially undo multiple instances' work. The likelihood of non-optimistic collisions probably increases with the number of instances in the transaction, and I suspect it increases non-linearly.
Mental wrote:
Having written several implementations of both STM and various message-passing-based concurrency models for Ruby lately, I'm a lot less sunny on STM than I used to be even a few weeks ago.
I was having fun implementing STM until I realized that I was able to implement the Actor model correctly in a few hours, versus several weeks for getting the fiddly aspects of STM down.
The biggest immediate problem for STM is starvation -- a large transaction can just keep retrying, and I'm not sure there is a way to address that without breaking composability. ...and composability is the whole raison d'etre of STM in the first place.
Transactional Actors Model
A prototypical example of a race condition:
* Actor B accepts the vocabulary 'getBalance', 'withdraw (amount)', and 'deposit (amount)'.
* Actors C and D, concurrently, want to withdraw half of the current balance. So each of them calls 'getBalance', divides the returned amount by two, then withdraws that amount.
* The result: based on interleaving of 'getBalance' and 'withdraw' messages, the final balance could be at 3/4 its original balance, or it could be at zero.
The automated per-method call mutex that I envisioned would also not automatically solve the race condition above, unless the getBalance() conditional logic was merged into the withdraw() method call (message). (however note my realization at bottom)
The atomic transaction (per-instance, or STM optimistic concurrency over the entire instance data) will otherwise have to be declared manually at the block containing both the getBalance() and withdraw() method calls. The disadvantage (non-optimistic-outcome cost) of STM optimistic concurrency is that everything done in the containing atomic block declaration has to be undone, because there may have been domino effects from the wrong assumption about the value of getBalance().
Per-instance mutex locking seems preferable. The general rule is that if we call methods that do not return instance state, then the language compiler can set and unset the mutex automatically, so that no other threads operate on the instance concurrently. If we call methods and cache (store) state locally, then we need to hold the per-instance mutex (of the instance the state came from) until we discard that state.
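The race, and the manual atomic block that fixes it, might be sketched like this (a hand-rolled illustration using a per-instance re-entrant mutex; the method names follow the example's vocabulary, and `withdraw_half` is my own name for the composite operation):

```python
import threading

class Account:
    def __init__(self, balance):
        self._mutex = threading.RLock()  # the per-instance mutex
        self._balance = balance

    # Each method call is individually atomic...
    def get_balance(self):
        with self._mutex:
            return self._balance

    def withdraw(self, amount):
        with self._mutex:
            self._balance -= amount

    # ...but the read-compute-write sequence must hold the lock
    # across both calls, exactly as argued above.
    def withdraw_half(self):
        with self._mutex:                  # manual atomic block
            half = self.get_balance() / 2  # RLock permits re-entry
            self.withdraw(half)

acct = Account(100.0)
t1 = threading.Thread(target=acct.withdraw_half)
t2 = threading.Thread(target=acct.withdraw_half)
t1.start(); t2.start(); t1.join(); t2.join()
print(acct.get_balance())  # always 25.0 with the outer lock
```

Without the outer `with self._mutex:` block in `withdraw_half`, both threads could read 100 and each withdraw 50, reproducing the zero-balance interleaving from the example.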
Perhaps the language could do this block atomicity automatically also? If all class state is only revealed through getters, then the compiler can track the assignment of getters to local-scope (usually stack) variables, and derivative computations, and release the mutex automatically at block exit (or even earlier). The problem is assignment of a getter (or its derivative) to external scope (i.e. a closure), which could cause divergence or a deadlock on the mutex. Thus it should be a compiler error to create a closure over the getter (or its derivative) instance, or to pass it into a function that does. This closure problem will manifest as uncaught race errors when we write atomic block declarations manually and do not hold the mutex while the external copies of state still exist.
Probabilistic preferred to fatality?
It can easily become more complicated. For example, Actor B may need to negotiate with three actors U, V, W to perform a withdrawal, any of which might be similarly disrupted due to message interleaving, may require coordination from B (message sent to U might depend on responses from V, W), and might need to be reversible (response from U might require reversing an operation started with V, W)
If the aforementioned read-state lock is implemented, then afaics the only potential error due to multi-instance interaction is the deadlock where the locks prevent the interaction from completing and the locks from releasing. And this deadlock (or starvation for optimistic rollbacks) risk is going to apply whenever multi-instance transactions are performed (no matter which concurrency mechanism is employed, e.g. per-instance mutex or STM optimistic), so this is another reason why we need to get the OO correct, and atomic operations need to be as narrow as possible.
It might be better to do no concurrency locks or rollbacks, and assume that errors due to out-of-date cached instance state are small compared to the risk of deadlocks (which are fatal), especially if we design our atomic blocks to be as narrow as possible. The behavior becomes more probabilistic (like nature!) and less deterministic, but at least deterministic in the sense that deadlocks are assured never to happen if we never lock, and race-starvation divergence never happens if we never roll back. This is interesting, as the move to multi-core may be analogous to how the models of the universe became more probabilistic as we moved from the spacetime to the quantum scale:
My Theory of Everything (near bottom)
Is there any way to get determinism with cached instance state and also determinism of termination without deadlocks and infinite rollbacks starvation divergence? I suspect no if there is any global state, not without 100% (infinite) regression, which takes me right back to my point that since we can not sample for infinite time, then we never know the deterministic truth. It seemed to piss off some people that science is faith, but alas it is. Evidence is convincing only in the frame of reference where it is, which is a small subset of the infinite universe (even the infinite permutations of interactions/perceptions/realities occurring simultaneously here on earth).
Note that as the number of cores (threads) reaches towards infinity asymptote, then the atomic operations must become asymptotically infinity narrow (granular), thus the errors due to not locking or rollback on cached state would asymptotically approach 0 and the prediction from My Theory of Everything would be fulfilled:
As the measuring capabilities have been proliferated into the quantum realm, the quantum shared reality has become probabilistic, i.e. in between deterministic and random. I will agree with Einstein that the quantum evidence is "silly", and offer the plausible conjecture/prediction that deterministic perception of matter at the quantum scale is possible, because if the current measuring devices are sampling below the period and frequency required by Nyquist, then random aliasing effects are observed. At quantum granularity, the entropy (i.e. disorder, information content) is much greater than at the space-time scale, thus providing a window to many more possible perceptions (realities). In short, my conjecture is that the frequency of the signals at the quantum scale is much lower and/or higher than our current measuring devices can detect deterministically.
In short, we have a sampling theory problem.
==========
An optimization might be to do per-instance locking as per my prior post, then break deadlocks and accept the error. The advantage is over accepting the error (trending towards infinitely small as we reduce the atomic time we cache state, and as we improve our OO design to accommodate the trend towards infinite cores) that can occur if we do no concurrency coordination as proposed in this post; the cost is that the (out-of-order execution) errors are greater, due to the longer time elapsed before we detect and break a deadlock.
No global means "shared nothing"
this deadlock (or starvation for optimistic rollbacks) risk is going to apply whenever multi-instance transactions are performed (no matter which concurrency mechanism is employed, e.g. per-instance mutex or STM optimistic)
Mature implementations of database transactional systems tend to use a hybrid combination of optimism and pessimism in order to control the slider between 'progress' and 'parallelism'. The same can be applied to STM by, for example, becoming more pessimistic after the first roll-back, or by profiling (at both call-site and resources) statistics for how successful optimism has proven in the recent past. Pessimism may also be applied to only the specific resources that have caused conflicts in the past.
Deadlock is possible with locking transactions, of course, but transactions allow one to detect such deadlocks and safely break them (via aborting them), and do so automatically.
Deadlock caused by cyclic waits among actors can also be detected and broken, but the local semantics of the program are harmed by a non-local error. The cost is usually a lost message or two at some arbitrary point in the cycle. (If you're using futures, you can just break the future but still process the message.) One can work around such errors, but it does make composition more challenging (requiring self-discipline and deeper knowledge of implementation details).
Shelby Moore III wrote:
Agreed, as long as we accept the potential out-of-order execution error introduced by breaking them (with pessimistic locks we can't roll back the changes already accrued by the deadlocked threads that are chosen to lose their locks)...
I am not privy to the
I am not privy to the overall design you are using for "Actors Model" [...] I am proposing that the mutex be language-driven at the per-instance granularity, thus I am not visualizing what the message queue is for. Also I am proposing that the reply is the return from a method call from one instance to another, thus there is no wait deadlock potential.
Waiting on a mutex puts the whole thread (including the message) into the mutex's queue. This approach is avoided because it is actually quite rare that actors need a reply from the method. Thus, Actors model is more often implemented such that the outgoing message ends up in a queue by itself, allowing the caller to move on and perform more activity.
Still, support for replies is quite popular. So many implementations of Actors model allow you to wait for a reply, either as part of the call or via a 'future' [wikipedia] that may be waited upon after performing some other activity.
If an actor can BOTH wait on replies AND delay incoming messages while processing a message, then it can deadlock given any cyclic message path. Such cycles are often difficult to statically eliminate in a composable system.
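The mailbox-plus-future arrangement described in these posts can be sketched in a few lines (a toy single-mailbox actor, not any particular Actors implementation; the `send`/`stop` names and the dictionary-based state are my own):

```python
import threading, queue
from concurrent.futures import Future

class Actor:
    """Messages land in a queue; the caller gets a Future it may
    wait on later, instead of blocking a whole thread on a mutex."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn, args, reply = self._mailbox.get()
            if fn is None:
                break                       # stop sentinel
            reply.set_result(fn(*args))     # one message at a time

    def send(self, fn, *args):
        reply = Future()
        self._mailbox.put((fn, args, reply))
        return reply                        # caller may keep working

    def stop(self):
        self._mailbox.put((None, (), None))

counter = {"n": 0}
def incr(k):
    counter["n"] += k
    return counter["n"]

a = Actor()
futures = [a.send(incr, 1) for _ in range(5)]
print([f.result() for f in futures])  # [1, 2, 3, 4, 5]
a.stop()
```

The deadlock risk named above is visible here: if `incr` itself called `f.result()` on a message sent back to this same actor, the single mailbox thread would wait on itself forever.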
[...] OFTEN able to modify the same object (instance) AND NOT modify the same data members [...]
Even if there is a single point of contention, STM can still perform well... but it does require some extra support for 'merging' and for identifying when the functional processing after looking at the state results in the same 'answer'. (STM can readily distinguish read-only, read-write, and write-only operations for any given unit of state, though the latter requires writes be a T->T transform, which return unit, to really be of utility.)
Additionally, many read-only transactions can work with the same object without any risk of conflict, especially given multi-version concurrency control.
Dodging feral cats
...For example, 'after the golden cat icon is placed on the pedestal, all cats in town become feral'...
It is still per-instance mutex atomic, because one possible correct OO design is the global cat feral state should be a static member of cat class.
I was assuming you need to make each cat 'domestic' again as part of the game.
In any case, rules tend to interact with one another, and may be arbitrarily complex. It is not unusual for a single action to influence more than one object, especially when working with 'relationships' between objects, or interacting with an 'environment'.
The important point is that composability improves with correct OO design
Given that 'correct OO design' is pretty much defined by design patterns that meet certain composability requirements, I'll agree. But that's a trivial claim, really, since composability improves with good design no matter which paradigm you're using.
Correct OO design for multi-threading can't be solved with concepts that are not well matched (i.e. functional programming is not OO state programming, and STM granularity is not OO per-instance granularity).
I would suggest the contrary. OO design is not complemented all that much by concepts that very closely 'match' it, since those are exactly the concepts one could readily implement atop OO.
Most paradigms, including OO, are very well complemented by orthogonal concepts. These include pure functional programming for composing and analyzing immutable messages and instance-state; logic programming for domain logic processing; reactive expressions (both logic and functional) for liveness, performance, and disruption tolerance; events distribution, subscriptions, and 'plumbing' for efficient multi-cast and runtime configurability and demand-driven support; syntax extensions and declarative meta-programming for compiling domain-specific stuff into OO; transactions for regulating multi-object interactions; persistence for scalability; etc.
OOP concepts extend to external order dependencies
In any case, rules tend to interact with one another, and may be arbitrarily complex. It is not unusual for a single action to influence more than one object, especially when working with 'relationships' between objects, or interacting with an 'environment'.
But that doesn't necessarily mean the implementation of the rules must depend on order-of-execution. Marking all the cats feral or domestic is not very order-dependent in and of itself.
Relevant posts:
Given that 'correct OO design' is pretty much defined by design patterns that meet certain composability requirements, I'll agree. But that's a trivial claim, really, since composability improves with good design no matter which paradigm you're using.
I am referring not to the typical class-OOP principles (inheritance, encapsulation, etc.), but to the more general goal of OOP to minimize inter-class dependencies, of which order-of-execution inter-class dependencies are a part.
Another relevant post:
Is (up to WAN) propagation of transactions desirable?
I would suggest the contrary. OO design is not complemented all that much by concepts that very closely 'match' OO it, since those are exactly the concepts one could readily implement atop OO.
Most paradigms, including OO, are very well complemented by orthogonal concepts...transactions for regulating multi-object interactions...
I do not disagree about using the best tool for the job, nor that there are tools which complement OOP. But I was writing about optimum multi-threading design, not about complements for other goals (your quick synopsis of complements is appreciated, though). My point is that eliminating/reducing the need for transactions is the universal goal: the less a class depends on external orders, the more composable it will be, and the less those orders can propagate so widely that conflicts reduce the distributed program to a least-common-denominator brittleness.
Doesn't work the way you've
Doesn't work the way you've presented it: any good FPS player who's been playing a while has psyched someone into walking into a projectile they weren't otherwise going to walk into -- often in reaction to that projectile's being fired in the first place! More generally, things being shot at move, and you don't actually know when the collision will take place, even if they do still collide.
Works only in some fairly restrictive conditions
This is a better approach. At time T1, the player fires a bullet. The bullet object essentially ray-traces to see if it'll collide with another object, and at what time T2. It schedules a timer tick message to be sent to it at that time.
That works only if 1) the original target can't unpredictably move closer to the firing point while the projectile is traveling and 2) nothing else can unpredictably move into the path during the interval between T1 and T2.
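One way to reconcile the scheduled-tick scheme with these caveats is to treat T2 as a prediction that must be revalidated when the tick fires. A toy sketch (entirely my own illustration: 1-D positions, a heap-based event queue, and the re-prediction path omitted):

```python
import heapq

def fire_bullet(events, now, bullet_pos, bullet_vel, target):
    """Predict collision time T2 and schedule a tick; the tick
    re-checks the world because targets may have moved."""
    t2 = now + (target["pos"] - bullet_pos) / bullet_vel
    heapq.heappush(events, (t2, "collision-check", target))

def run(events, world_position):
    """Process scheduled ticks; world_position(target, t) reports
    where the target actually is at time t."""
    hits = []
    while events:
        t, _kind, target = heapq.heappop(events)
        # Revalidate: did the target stay where the prediction assumed?
        if abs(world_position(target, t) - target["pos"]) < 1e-9:
            hits.append((t, target["name"]))
        # else: re-predict from current positions and reschedule (omitted)
    return hits

events = []
target = {"name": "tank", "pos": 10.0}
fire_bullet(events, now=0.0, bullet_pos=0.0, bullet_vel=2.0, target=target)
print(run(events, lambda tgt, t: 10.0))  # target didn't move: [(5.0, 'tank')]
```

The revalidation step is what pays for conditions (1) and (2) above: a moved target simply fails the check and forces a fresh prediction.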
Transactional Java Virtual Machine
Hi,
I'm not sure if this thread is still active, but I know Patrick Logan because he used to work at my company GemStone Systems. In Patrick's article he briefly mentioned how GemStone has successfully been selling STM for many years. In essence, our technology IP here is that we optimize the standard Sun Java virtual machine and make it transactional, i.e., we can transparently fetch and store objects from disk and let multiple threads and processes share data in a consistent manner. No bytecode rewriting and AOP hacks are required since the virtual machine manages read/write barriers and any class can be made orthogonally persistent.
Though not originally targeted at game simulation engines that share lots of state between threads on a many-core system, it might be viable to utilize such transactional VM technology as a simple-to-program, main-memory game engine based on optimistic concurrency with full write-write (WW) and read-write (RW) conflict detection and resolution. Of course, if transaction durability is required, there's persistence too. That might help with game scalability.
Just a thought from a non-gamer (last time I played a video game was Atari 2600) and non-game programmer (last time I programmed a video game was using shape tables on an Apple II).
Welcome to LtU. Threads here remain active for as long as there's interest...
Discrete Simulation
My understanding of game technology (particularly game AI) is that processing occurs in discrete chunks, with a message receipt & decision phase followed by an update phase. In this model, where is the need for STM? Can't you simply queue up updates in the read & decision phase and execute them serially in the update phase, partitioning updates to different data regions among threads?
In this way, there is no need for atomic blocks since all updates within a discrete interval happen in a single update phase, and there can be no conflicting cross-thread reads or updates.
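A minimal sketch of the two-phase model being described (the entity and region shapes are my own illustration; the point is that the decision phase is read-only and each region's updates apply serially):

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def decision_phase(entities):
    """Read-only phase: every entity inspects the world and queues
    the updates it wants, keyed by the data region it touches."""
    pending = defaultdict(list)
    for e in entities:
        if e["hp"] <= 0:
            pending[e["region"]].append((e["id"], "remove"))
        else:
            pending[e["region"]].append((e["id"], "move"))
    return pending

def update_phase(entities, pending):
    """Update phase: each region's queued updates are applied serially;
    distinct regions may go to distinct threads without conflicting."""
    by_id = {e["id"]: e for e in entities}
    def apply_region(updates):
        for eid, action in updates:
            if action == "move":
                by_id[eid]["x"] += 1
            elif action == "remove":
                by_id[eid]["alive"] = False
    with ThreadPoolExecutor() as pool:
        list(pool.map(apply_region, pending.values()))

entities = [
    {"id": 1, "region": "north", "hp": 5, "x": 0, "alive": True},
    {"id": 2, "region": "south", "hp": 0, "x": 0, "alive": True},
]
update_phase(entities, decision_phase(entities))
print(entities[0]["x"], entities[1]["alive"])  # 1 False
```

Because no thread writes a region another thread reads during the same phase, there is nothing for an atomic block to protect.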
I mention this model because Tim does not include it among his three candidates. I am keen to hear his response, if he's still out there.
Disclaimer: I am not a game programmer. My reference for the discrete model is Steve Rabin's "Designing a general robust AI engine" in Game Programming Gems Vol. I, and also the SIGMOD '07 paper "Scaling Games to Epic Proportions".
Math check
... it makes 5% of my performance profile become 4X slower. I break even at 4 threads,...
If the STM portion uses 5% of four processors, then shouldn't you be comparing STM to 20% of a single processor instead of 5% of each processor? I suppose this assumes something about the amount of work that must occur between world updates, but in the extreme case where you'd need 100% of one processor to run the game logic, it looks like you'd need 20 processors before requiring STM (assuming the 5% stays constant). Edit: And then it's not that you ever "break even" with STM - the single-threaded model just hits a wall.
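The arithmetic here can be checked with a quick Amdahl-style calculation (my own reading of the figures: a 5% serial imperative portion, with STM making that portion 4x slower but fully parallelizable):

```python
def speedup_single_threaded(n, serial=0.05):
    # Imperative 5% stays on one thread; Amdahl's law.
    return 1 / (serial + (1 - serial) / n)

def speedup_stm(n, serial=0.05, stm_slowdown=4):
    # STM parallelizes everything, at 4x cost on the 5%.
    total_work = (1 - serial) + serial * stm_slowdown  # 1.15
    return n / total_work

# Break-even at 4 threads, matching the quoted figure:
print(round(speedup_single_threaded(4), 2), round(speedup_stm(4), 2))  # 3.48 3.48
# Single-threaded asymptotes at 1/0.05 = 20x; STM keeps scaling:
print(round(speedup_single_threaded(10**6), 1), round(speedup_stm(40), 1))
```

Both models yield the same ~3.48x speedup at four threads, while the single-threaded model can never exceed 20x no matter how many cores are added, which is the "hits a wall" observation above.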
STM risks
Tim Sweeney wrote:
The other 30% of our code (which accounts for 5% of performance) is necessarily so stateful that purely functional programming and message-passing concurrency are implausible. Thus I see imperative programming, in some form or another, as remaining an essential tool for some systems present in modern software...
...Without STM, the only tractable way to manage this code is to single-thread it. I'm not nearly smart enough to write race-free, deadlock-free code that scales to 10,000 freely-interacting objects from 1000 C++ classes maintained by tens of programmers in different locations.
With STM, I can continue writing software at full productivity...
...STM is the only productivity-preserving concurrency solution for the kind of problems we encounter in complex object-oriented systems that truly necessitate state, such as games.
If STM rollback collisions are not infrequent, then it is possible that real-time events can suffer. Also you have to isolate the propagation (to external resources and modules) of rollback dependencies.
Hacking away composability?
Additionally if the rollback dependencies got into your 70% (95% of performance) functional programming code, it could be worse perhaps than just running the 30% (5% of performance) code single-threaded.
RE: STM Risks
if the rollback dependencies got into your 70% (95% of performance) functional programming code, it could be worse perhaps than just running the 30% (5% of performance) code single-threaded
The reason for the small time spent in imperative code is that 5% is all the time it takes to do something with the result of the last functional computation and obtain parameters for the next functional computation.
One cannot just 'run the 30% of code single-threaded' and still readily parallelize the other 70%. Interdependence interferes. Functional code doesn't internally parallelize well for small or medium-scale computations (due to scatter-gather overhead, false sharing, etc.). Instead, a good chunk of functional-code parallelism is simply inherited from task parallelism.
If STM rollback collisions are not infrequent, then it is possible that real-time events can suffer.
Unlike most other concurrency control methods, transaction approaches can support priority override based on deadline or real-time requirements. And, much like other approaches to real-time, programmers will need to take care to keep the synchronization effort and computations small and predictable in order to achieve real-time requirements. It is difficult to see STM as being at a disadvantage relative to most alternatives for this purpose. (Wait-free is better for real-time, of course, but requires even more discipline and is not general purpose.)
Also you have to isolate the propagation (to external resources and modules) of rollback dependencies.
You should support propagation of transaction semantics through as many modules as feasible. Add transaction barriers where one must, but propagate where one can. This offers the most flexibility to programmers.
...
RE: STM Risks
One cannot just 'run the 30% of code single-threaded' and still readily parallelize the other 70%.
Queue up large-scale computations.
Functional code doesn't internally parallelize well for small or medium-scale computations.
Thus queue latency is not a factor, since functions can't do real-time computations anyway.
...transaction approaches can support priority override based on deadline or real-time requirements.
Afaics, I translate that to mean transactions can be aborted and rolled back, leading to starvation if as I wrote "If STM rollback collisions are not infrequent"?
It is difficult to see STM as being at a disadvantage relative to most alternatives for this purpose.
You should support propagation of transaction semantics through as many modules as feasible. Add transaction barriers where one must...
The (ongoing) technical debate on that was already linked in my prior post above.
Philosophically, anything that is in my terse pessimism "every where, else aliasing error rollback hacks glue" scares the crap out of me (25+ years of programming, a couple of large million-user projects, but no extensive concurrency experience). [Biblical wisdom rant]Futures contracts are slavery. Make no promises, do not be surety for anything, if you want to remain free[/Biblical wisdom rant].
RE: STM Risks
priority override based on deadline or real-time requirements.
I translate that to mean transactions can be aborted and rolled back, leading to starvation if as I wrote "If STM rollback collisions are not infrequent"?
Even if you've overloaded your computing resources and added architectural bottlenecks to the point that low-priority transactions are colliding and retrying on a regular basis, you can still push high-priority transactions through the system.
This is, as mentioned, superior to most alternatives. Lock-based approaches, even if you avoid deadlock, still force the high-priority operations to wait on the low-priority operations. Lock-free and wait-free approaches fail to support general-purpose operations (though they do support a useful subset).
Usefully, a mature transactions system can also use pessimism and interact with the scheduler to serialize the problematic transactions on the problematic resources and clean up any snaggles. Lock-based approaches don't have that sort of room to grow more intelligent.
Queue up large-scale computations. [...] Thus queue latency is not a factor, since functions can't do real-time computations any way.
It is unclear how queues are supposed to help, or why their latency should matter.
If my single-threaded stateful code needs the result of a functional computation to progress, how does putting that computation into a queue help me progress? What does my single thread of stateful code do next?
Am I to assume that I've already gone to the effort of implementing lightweight threads or some sort of continuation framework so that my (no-longer) single-threaded code may yield after putting something in a queue and find something else to do until its parallel computation is completed?
It may be wise to keep in mind that 'multi-threading' is not the same as 'multi-CPU-cores'. The issues surrounding multi-threaded operation are present also in green threads. Achieving cooperative multi-threading can help avoid a few of the pitfalls, but cannot readily be generalized.
consider single-threaded, with some special helper threads and some clever algorithms with judicious rare use of mutex or monitors. If 5% is supporting 95%, we can already support 20 cores with the 5% single-threaded.
There is an error in how you're imagining the imperative code's relationship to the functional code.
The imperative code tends to compute a function, do something with the result, gather some parameters, repeat by computing another function. It is this sort of pattern that leads to numbers like '5% time spent in imperative code and 95% functional'.
Unfortunately, this pattern also means you cannot simply 'separate' most of the functional computation. That is, attempting to run the imperative code single-threaded and run the functional code in another thread simply results in the imperative thread spending 95% of its time waiting for functional computes from other threads.
Now, above you suggest putting the large functional computes into a queue, where they'll presumably be processed by other threads. Further, you could (as mentioned above) relax the 'single-threaded' idea a bit and have programmers jump through a few hoops to support cooperative multi-threading on a single processor.
But, if you're parallelizing the 'large' functional computes, that means you'll still be performing small and medium computes locally (as you must, for performance). Now, as a simple guesstimate, one might say the further breakdown is: 5% imperative, 20% small and medium functional computes, 75% large functional computes. (I feel I'm guessing low at 20%.)
With those numbers, one would need to run the 5% imperative PLUS the 20% small and medium functional computes on a single-thread. This would allow supporting not much more than four cores.
However, by parallelization of the initial 5%, you can very rapidly increase utilization. If you have four threads running tasks from a queue and rescheduling whenever a wait on a 'large' compute occurs, then each thread will still be spending about 5% of its time in imperative code and 20% of its time in medium and small functional computes. However, you can now run four tasks at once, and (ignoring overhead) each task thread can support about three more cores for large functional computes, for a total of sixteen cores.
Using the '4x imperative cost' as STM's primary impact, the overall breakdown would be 20% imperative, 20% small and medium functional computes, 75% large functional computes, for a total of 115% of the original cost (before accounting for collision rate). However, each task would be spending 40% of its time in local computes. Under this condition, each core focused 100% on tasks could support about two cores with large functional computes. With four task cores, one could achieve utilization of twelve cores total.
In any case, these numbers are all speculative. But you should take into account interdependence of code - i.e. the imperative code is not independent of the functional code - and note that even if one is keeping the imperative code to a single processor it is unrealistic to keep it to a single thread, lest one spend all one's time waiting on functional computes.
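The scheduling pattern described above (task threads that offload 'large' functional computes and pick up other work instead of blocking) can be sketched in a few lines. This is a hypothetical illustration, not code from the discussion; all names (`run_tasks`, `large_functional_compute`) are invented, and the 5%/20%/75% split is only mimicked by the comments.

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def large_functional_compute(x):
    # stands in for the '75%' of work that is safely parallelizable
    return sum(i * i for i in range(x))

def run_tasks(task_inputs, n_task_threads=4, n_compute_threads=12):
    results = Queue()
    with ThreadPoolExecutor(max_workers=n_compute_threads) as pool:
        def task_thread(x):
            # the '5% imperative' part: gather parameters, decide what to compute
            params = x + 1
            # offload the large compute; this task thread is now free to be
            # rescheduled onto another task rather than idling
            future = pool.submit(large_functional_compute, params)
            # the '20%' small/medium computes stay local
            local = params * 2
            # collect the offloaded result only when it is actually needed
            results.put(local + future.result())

        with ThreadPoolExecutor(max_workers=n_task_threads) as tasks:
            tasks.map(task_thread, task_inputs)
    return sorted(results.get() for _ in task_inputs)
```

The point of the sketch is only the shape: several task threads pull work, each spending a small fraction of its time in imperative glue, so the compute pool stays busy.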
If STM rollback collisions are not infrequent, then STM may be at a disadvantage compared to the single-threaded approach, especially if it requires propagation everywhere, as you assert below.
[...]
Philosophically, anything that fits my terse pessimism ("everywhere, else aliasing error rollback hacks glue") scares the crap out of me.
Don't mistake a claim that transactions should be supported as widely as possible for a claim that transactions require being supported as widely as possible. It is programmers and users that benefit from widespread support for transactions (languages, file-systems, hardware, user-interfaces and multi-page web interactions, mission control protocols for unmanned systems, etc.).
RE: STM Risks
Even if you've overloaded your computing resources and added architectural bottlenecks to the point that low-priority transactions are colliding and retrying on a regular basis, you can still push high-priority transactions through the system.
Afaics, not if the high-priority transactions include (have termination/completion dependencies on) the low-priority ones algorithmically.
STM is not going to be able to separate the high-priority interwoven with low-priority logic of the execution-order dependent algorithms.
For example, just because we can complete the priority transaction of placing the mouse event into a message queue doesn't mean that the processing of what mouse events should do is free of synchronization perturbation. It could even be perturbed to the extent that the program's state machine becomes random in I/O responsiveness, and thus random in, for example, game-outcome forks; this is further evidence of the theoretical infinite-time-sampling aliasing error I keep referring to.
Lock-based approaches, even if you avoid deadlock, still force the high-priority operations to wait on the low-priority operations.
Any concurrency approach (including STM, per the above) which causes delays in the response to real-time demands is going to exhibit random aliasing error. There is no way of escaping the fact that our algorithms have to be well structured for concurrency; any low-level mechanism will just push the aliasing error around, not eliminate it. The key advantage lock-based has over rollback-based is that, short of a deadlock, we are sure not to waste any time. And as I wrote in the debate/discussion on another LtU page, deadlocks can be broken by accepting atomicity error. AXIOM: if the algorithms are execution-order dependent, we are going to get error somewhere in concurrency. (This is due to the infinite-time-sampling aliasing error.) That is why I am postulating algorithmic approaches on the other LtU page.
Usefully, a mature transactions system can also use pessimism and interact with the scheduler to serialize the problematic transactions on the problematic resources and clean up any snaggles. Lock-based approaches don't have that sort of room to grow more intelligent.
Maybe it would be more productive to invest the energy in adjusting our algorithms to be concurrent, instead of expending energy to create more spaghetti to make a single-threaded algorithm pretend it is running on a single thread.
It is unclear how queues are supposed to help, or why their latency should matter.
If my single-threaded stateful code needs the result of a functional computation to progress, how does putting that computation into a queue help me progress? What does my single thread of stateful code do next?
Am I to assume that I've already gone to the effort of implementing lightweight threads or some sort of continuation framework so that my (no-longer) single-threaded code may yield after putting something in a queue and find something else to do until its parallel computation is completed?
It may be wise to keep in mind that 'multi-threading' is not the same as 'multi-CPU-cores'.
I am assuming multi-core, because that is the context in which Tim Sweeney raised the need to make his 30% code concurrent.
My understanding is that in a game the 70% (95% performance) code is mostly rendering, which can be split into parallel chunks (we do not need to wait on the result of one chunk to start the next one). Thus you have M cores processing N slave queued tasks/chunks. Afaics, if each N is long-duration (as you asserted) compared to the latency that the Nth task sits on the queue, then there is no significant synchronization perturbation error compared to converting the master thread to be concurrent. Note the slave tasks operate in parallel to the master thread and can place their results in a queue as well, so there is no blocking wait. Latency is the only issue.
The imperative code tends to compute a function, do something with the result, gather some parameters, repeat by computing another function.
Agreed, if that is true and not the model I assumed above. You might be correct if the 5% must all be completed before the next frame of rendering parallelism can be started; however, the prior frame could still be rendering while the 5% for the next frame is running. I am assuming above that roughly up to 40% of the 5% is spent breaking up the monolithic rendering task into parallel sub-tasks. I suspect I am correct (my experience: real-time 3D rendering on the Intel 386 processor before 1993; Art-O-Matic real-time cell, depth-of-field blur and photo-realistic shading in 1997, 12 years before Street Fighter IV and with much less processor power; work on Corel Painter and EOS Photomodeler 1993-1996; also Ventura Publisher at the MacOS->Windows emulation layer).
Now, as a simple guesstimate, one might say the further breakdown is: 5% imperative, 20% small and medium functional computes, 75% large functional computes. (I feel I'm guessing low at 20%.)
I think the 70% code (95% of performance) is already running multi-threaded, see slide 42 of Tim Sweeney's presentation.
Don't mistake a claim that transactions should be supported as widely as possible for a claim that transactions require being supported as widely as possible.
I fail to see the distinction. I assume you think that barriers can be erected with the "hack glue" (apologies for the derogative; it is just my terse pessimism about them being perfectly damped barriers), and I visualize those as leaking the dependencies somewhere anyway (perhaps later in time, as composability deadlocks).
* High-latency operations that can benefit from concurrent waiting...
* Compute-intensive operations that can benefit from parallel execution...
RE: STM Risks
I'll make this my last comment, since I feel we're abusing LtU as a forum. Ehud attempted a while back to disabuse us of that notion: blog, not forum.
Regarding priority: It is possible to have a low-priority transaction update a queue (which naturally serves as a transaction barrier) such that an observer of the queue will later spin off a high-priority transaction. And vice versa. However, for I/O responsiveness, one ensures that the tasks associated with an input operation maintains a relatively high priority until appropriate feedback can be offered (such as greying a button and setting up a progress bar, if it will be a while).
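The 'queue as transaction barrier' pattern described above can be sketched concretely: the low-priority side commits only a tiny enqueue, and a separate observer later runs high-priority work (e.g. immediate user feedback) for each event. This is a hypothetical illustration with invented names, not any particular STM runtime.

```python
import threading
from queue import Queue, Empty

events = Queue()          # the queue acting as a transaction barrier
feedback = []             # stands in for user-visible feedback (greyed button, etc.)

def low_priority_transaction(event):
    # the low-priority transaction's only write is the enqueue itself
    events.put(event)

def observer(stop_flag):
    # drains the queue, spinning off high-priority handling per event
    while not stop_flag.is_set() or not events.empty():
        try:
            event = events.get(timeout=0.05)
        except Empty:
            continue
        feedback.append(f"ack:{event}")   # immediate high-priority feedback

stop = threading.Event()
t = threading.Thread(target=observer, args=(stop,))
t.start()
for e in ("click", "drag"):
    low_priority_transaction(e)
stop.set()
t.join()
```

Because the two sides share only the queue, the low-priority and high-priority work never conflict directly on state.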
Within any given transaction, transaction-priority cannot change. If there are hierarchical sub-transactions (as usually implied by 'atomic' blocks) then priority among those only impacts conflicts between sub-transactions.
One can use transactions at coarse or fine granularity much the same as one can use locks. If anything, the lock-based design has much greater pressure than the transactional approaches to favor coarse granularity in order to avoid deadlock issues. Examples include the Giant Kernel Lock and the Global Interpreter Lock.
our algorithms have to be well structured for concurrency, and any low-level mechanism will just push the aliasing error around but not eliminate it
Agreed.
The key advantage lock-based has over rollback-based is that, short of a deadlock, we are sure not to waste any time
I suppose by not 'wasting any time' you refer to the need to occasionally perform 'rework' in a transaction-based approach. It is true this is an advantage of a lock-based approach.
However, to compare that properly against the STM approach, you need to determine how much 'time' is wasted by locks blocking safe concurrent reads on shared data and safe concurrent writes on subsets of shared data, plus how much 'time' is wasted working around the limitations of locks. That is, the granularity of locking also has costs (should there be enough cores for parallel operations).
deadlocks can be broken by accepting atomicity error
Deadlocks can be broken in other ways, too, that don't involve atomicity error.
For example, you could throw a deadlock exception in one of the threads still waiting on a mutex, choosing to break isolation and lose a message instead of breaking atomicity. I suggest that this is a wiser route, giving much more opportunity for programmers to take intelligent action after a deadlock.
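A minimal sketch of that suggestion: the thread that times out waiting on a mutex treats the timeout as a deadlock, releases what it holds, and drops its message, so atomicity of completed work is preserved at the cost of isolation. Timeout-as-deadlock-detection is a simplification for illustration; all names are invented.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def transfer(first, second, name, timeout, i_hold, other_holds):
    first.acquire()
    i_hold.set()                   # announce: I hold my first lock
    other_holds.wait()             # ensure the other thread holds its lock too
    if not second.acquire(timeout=timeout):
        first.release()            # give up what we hold; this message is lost
        log.append(f"{name}: deadlock exception, message dropped")
        return                     # in a real system, raise DeadlockError here
    log.append(f"{name}: committed")
    second.release()
    first.release()

# classic lock-ordering deadlock: t1 wants a then b, t2 wants b then a
ev1, ev2 = threading.Event(), threading.Event()
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, "t1", 0.05, ev1, ev2))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, "t2", 2.0, ev2, ev1))
t1.start(); t2.start()
t1.join(); t2.join()
```

One thread loses a message instead of either thread committing a half-done update, which is the isolation-for-atomicity trade described above.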
Of course, it remains less than ideal that one need consider deadlock at all... and reliability will likely become a problem no matter the resolution mechanism.
AXIOM: if the algorithms are execution-order dependent, we are going to get error somewhere in concurrency.
This axiom does not hold in our universe. One can have many partial-order dependencies and still achieve observable determinism in the presence of concurrency. This is a feature leveraged by data-parallelism and declarative concurrency.
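A small illustration of that point: the per-element computes below may execute in any order across worker threads, yet the observable result is deterministic, because the only dependency is the partial order from each output to its own input. This is an invented sketch, not code from the thread.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(inputs):
    with ThreadPoolExecutor(max_workers=8) as pool:
        # execution order across workers is nondeterministic...
        return list(pool.map(lambda x: x * x, inputs))

# ...but `map` reassembles results in input order, so the observable
# outcome is the same on every run: observable determinism despite
# concurrent execution.
```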
That is why I am postulating algorithmic approaches [for execution-order independence]. Maybe it would be more productive to invest the energy in adjusting our algorithms to be concurrent, instead of expending energy to create more spaghetti to make a single-threaded algorithm pretend it is running on a single thread
Not all services or algorithms can be made execution-order independent, especially not those involving stateful manipulations and other sorts of I/O. In part, that is because many important problem-domains simply do not allow execution-order independence.
Further, execution-order independence does not readily compose unless working with pure operations. That is, even if arbitrary services S1 and S2 are individually execution-order independent, such that final semantic state is based on the set of inputs received, it is generally not the case that a new algorithm or service S3 - one that interacts with one service based on the state of the other - will also be semantically order-independent.
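A toy version of that non-composition argument: S1 and S2 are individually order-independent (set insertion commutes), but an S3 that writes into S2 based on S1's current state is not, because its outcome depends on when it observes S1. The services and schedule format here are invented for illustration.

```python
def run(schedule):
    s1, s2 = set(), set()
    for op in schedule:
        if op == "s1-add":
            s1.add("x")            # S1: inserts commute among themselves
        elif op == "s3-copy":
            s2.add(len(s1))        # S3: writes into S2 based on S1's state
    return s2

# Reordering S1's own inputs never changes anything, but reordering
# S3 relative to S1's inputs changes S2's final state.
```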
As far as adjusting services to be more concurrent, I agree. As noted above, one can use transactions at a fine granularity such that services possess a great deal of imperative (multi-task) concurrency tamed by large numbers of small transactions. For even more concurrency, individual tasks with large functional computes may leverage data-parallelism.
Characterizing transactions as producing single-threaded algorithm spaghetti ignores many patterns for leveraging transactions.
My understanding is that in a game the 70% (95% performance) code is mostly rendering, which can be split into parallel chunks
The 90% CPU profile mentioned in Tim Sweeney's presentation referred to scene-graph traversal, physics simulation, collision detection, path finding, sound propagation, and so on (review slides 12, 17, 47). The scene-graph traversal would be included in determining what to render (and where to render it), but rendering itself was lumped with 'shading' as a separate compute running on the GPU.
When an object is updated, it is physics simulation and collision detection (and other gameplay rules) that determine with which other objects it must interact. It isn't as though the bullet knows when created that the tank will lie in its path. In this way, and others, imperative 'stateful' code interacts non-trivially with the functional code.
RE: STM Risks
...I feel we're abusing LtU as a forum...
I think we did a pretty good job of characterizing many of the issues and tradeoffs around one of the main points of the blog post (concurrency migration). It would be nice if one person could come in and say "here are all the issues, and here is a link" without any discussion, but this blog page is 1 year old (the STM blog page is 3 years old) and no one had done that. I arrived here as a reader wanting that information and could not find it anywhere succinctly in a Google search. I do hope Ehud finds a way to close leaves so we can ignore sections that do not add value according to our individual interests. I hope he also fixes the bug where, if you don't close a tag, the rest of the page (below your post) is affected.
One can use transactions at coarse or fine granularity much the same as one can use locks. If anything, the lock-based design has much greater pressure than the transactional approaches to favor coarse granularity in order to avoid deadlock issues...
I agree about the deadlock risk and the better-defined granularity window for transactions, but the use of that window will still depend on the algorithmic opportunities available. I think with locks the window is harder to ascertain; depending on the algorithm it might be wider, narrower, unknowable, random, etc. It really depends on the situation.
...granularity of locking also has costs...
Agreed.
For example, you could throw a deadlock exception in one of the threads still waiting on a mutex, choosing to break isolation and lose a message instead of breaking atomicity.
Clever :) It will depend again on the situation and each choice can introduce domino effects and complexity.
AXIOM: if the algorithms are execution-order dependent, we are going to get error somewhere in concurrency.
This axiom does not hold in our universe. One can have many partial-order dependencies and still achieve observable determinism in the presence of concurrency. This is a feature leveraged by data-parallelism and declarative concurrency.
I think it does hold, if we interpret it correctly.
Just because you can fit perfectly doesn't mean you will. If one can find the perfect fit to the partial-order dependency, then one can get the minimum impacts. We must define error; I mean aliasing error. That can mean many different things. For example, one way we get determinism with STM can be via futures/cells, in which case the aliasing error can be real-time degradation, as we had already discussed (no need to rehash that). If we have done very good fitting to the partial orders, these effects are minimized, perhaps even unnoticeable (e.g. JPEG's lossy DCT compression has aliasing errors but we usually can't see them).
Another point is that fitting to these partial orders is another way we are adjusting our algorithm to concurrency, which is my main point, which you agreed with. We are simply moving towards fitting our algorithm to concurrency in ways that minimize aliasing errors to tolerable levels. We have many tools to help us and we must choose what fits best, each situation will vary perhaps.
Not all services or algorithms can be made execution-order independent, especially not those involving stateful manipulations and other sorts of I/O. In part, that is because many important problem-domains simply do not allow execution-order independence.
Ditto what I wrote above. So I agree, but we will get (possibly unnoticeable and irrelevant) aliasing error to the degree we do or do not fit well enough to concurrency. It is still an algorithmic fit. STM is in the toolset for that.
Further, execution-order independence does not readily compose unless working with pure operations.
Agreed very much. That is why I suggested the multi-threaded parts be walled off as much as possible from the external interfaces. But of course that causes problems, as we discussed. The models used will depend on the situation. You really need an expert working on this! That is the main thing I hope readers take away from this. This will not be a cakewalk for a novice. Just slapping STM or monitors on single-threaded code could be a nightmare in many cases.
Characterizing transactions as producing single-threaded algorithm spaghetti ignores many patterns for leveraging transactions.
I think I meant that approximately in terms of someone blindly slapping transactions onto a single-threaded code base and praying. Agreed that transactions belong in the toolset of the expert.
...imperative 'stateful' code interacts non-trivially with the functional code...
Agreed, I was thinking of that too. It is a challenge for sure. Should be fun if one has a lot of hair to pull out still :).
Will be interesting to see what they come out with for UT4. I read elsewhere to expect no new UT4 for several years.
I will give you the final reply; I just wanted to end by showing that we reached mutual understanding on the issues we discussed (and I assume we didn't discuss all the issues). I love that!
Lock-freedom
STM is based on Fraser's work on lock-free algorithms. Has the Haskell STM lost those properties? If not, then it cannot suffer from (global) dead-lock or live-lock, since lock-free algorithms cannot.
Not all STM is lock-free
Well, not all STM is lock-free - many different transaction mechanisms have been proposed.
But "lock-free" is one of those nice problem hiding euphemisms.
Deadlocks are replaced by cyclic conflicts in "lock-free" transactional mechanisms. The deadlock resolution that many "locking" systems have, including locking transactional systems like Oracle, is replaced by conflict resolution.
I say "replaced" in a very liberal way, they're conceptually very similar and can produce very similar problems in naive implementations :-)
Lock-free, not lock free
Lock-free is not a euphemism; it is a defined term (defined in the paper I linked). A lock-free algorithm, by definition, cannot dead- or live-lock. Starvation may still occur. The paper above suggests that wait-free systems (lock-free, and additionally starvation cannot occur) do require a system like deadlock resolution, but lock-free systems do not. I'm not sure if you are arguing against the defined term. If so, can you provide an example of a case that would produce "cyclic conflicts", and say what problem it would cause?
There is no reason...
There is no reason that a defined term cannot be a euphemism, as I'm sure victims of "ethnic cleansing" will agree :-). "Lock-free" in many cases is a euphemism for "abort and retry".
Add to the fact that "lock-free" algorithms require locking at some level (usually in terms of processor bus locks) and that some may use OS level waiting locks to implement the "lock-free" protocol and you have a very murky term indeed.
Lock-free algorithms do have a system similar to deadlock resolution, but it's not a system that actually resolves dead- or live-lock (I never said the systems DID actually dead- or live-lock); it's the conflict resolution mechanism used to resolve data conflicts (the Conflict Resolution Algorithm inherent in Optimistic Concurrency Control Validation).
In one narrow lock-free forward validation scheme (which is NOT the totality of STM or lock-free STM conflict resolution algorithms), your conflict resolution algorithm is to roll back any transaction that would conflict with another as that conflict is found. This algorithm avoids the problem of "cyclic conflicts" (which in a waiting system would cause a deadlock).
In a backward validation scheme this method is not possible and, in the case of a cyclic conflict (such as pieces of state written in the opposite order), both transactions must be aborted/retried - in a naive implementation of this lock-free algorithm, it is possible to get pathological starvation, in the same way Ethernet collisions would cause it in a naive implementation.
In similar situations in a locking database, it will deadlock (seeing the cycle in the graph of "waiters" for acquiring locks), detect the deadlock and roll back one (or both) of the transactions in the waiting list, possibly to be retried. My point was that this deadlock resolution is very similar to the conflict resolution of backwards validating optimistic schemes (for instance).
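To make the validate-then-commit scheme concrete, here is a toy backward-validating optimistic variable: each transaction snapshots the version it read, and at commit time aborts and retries if any intervening commit bumped that version. This is an invented sketch for illustration, not any particular STM implementation (a real one handles read/write sets over many variables).

```python
import threading

class TVar:
    """A single transactional variable with a commit version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

def atomically(tvar, update, retries=10000):
    for _ in range(retries):
        with tvar.lock:                        # take a consistent snapshot
            read_value, read_version = tvar.value, tvar.version
        new_value = update(read_value)         # pure compute, outside any lock
        with tvar.lock:
            if tvar.version == read_version:   # backward validation
                tvar.value, tvar.version = new_value, tvar.version + 1
                return new_value
        # someone committed in between: discard our work and retry
    raise RuntimeError("transaction starved")

counter = TVar(0)

def worker():
    for _ in range(100):
        atomically(counter, lambda v: v + 1)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Note the lock-free flavor of the progress argument: a transaction only retries because some other transaction committed, so the system as a whole always makes progress, even though an individual transaction can be repeatedly aborted.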
Re: working on it
Do tell?
... it's called Mnesia.
Mnesia tables can hold any kind of object, and don't necessarily have to be persisted to disk. Transactions consist of supplying a transaction function for the database manager to execute (via mnesia:transaction(Fun)).
Erlang folklore is that not many Erlang programs need or use Mnesia, but it's there for when shared state is useful. Maybe Tim's "70% stateless, 30% stateful" statistic could be modified downward, say something like "70% stateless, 20% message-passing, 10% stateful"?
Unnecessary?
...the unnecessary distinction (at compile time) between collocated and distributed systems.
How is the distinction unnecessary? Do I really want to use Erlang threads for local SIMD? Isn't it nice when static analysis lets us do without format checks for local arrays, just chewing them up as memory blocks? More generally, do we really want to be forced to assume that every data item is potentially corrupt, and that every communication will fail to return?
collocation used when available
The distinction is unnecessary because one can take advantage of locality when available, and automatically integrate extra systems for checking messages (and ordering them, ensuring reliable delivery, checking for type-safety, etc.) when distribution is actualized and a message comes from an untrusted remote platform or actor. As far as static analysis goes: no reason not to do that. Sriram's work includes typed actors for Java, and in my own work I'm certainly using statically typesafe actors as well.
Still, I don't agree with many of Sriram's conclusions, such as his idea that "shared nothing = failure isolation" actually applies in a system of shared services, or that actors eliminate the need for transactions. Partial failure in an actor configuration is a serious problem, race conditions still exist, and in the actors model transactions would do well to exist and simply be distributed. Similarly, I don't fully agree with his assertion about locality.
There are useful compromises in distinguishing (at compile time) between collocated and distributed systems. The language design I've worked on has first-class actor configurations and allows annotations of locality (e.g. actor A is nearby actor B) with semantics for automatic distribution and mobility, albeit distribution limited by certificate-based secrecy-level challenges between platforms (so that sensitive actor names aren't accidentally distributed). My approach results in "cliques" of actors that will (after distribution) always be localized on specific machines. These "cliques" may be compiled to take full advantage of guaranteed locality, while between cliques one still takes advantage of locality when it is present. (There are many other advantages of actor configurations as well, but none reflects the advantage of recognizing locality at compile time, while still supporting automatic distribution, as much as this one does.)
As far as SIMD goes, it can still be used in the actors model. One does require that the language provide the necessary abstractions for it, of course, such as pairwise operations over arrays or matrices.
No inconsistency?
Sriram Srinivasan wrote:
There seem to be a number of inconsistencies in Tim Sweeney's preferences. "large scale stateful software isn't going away" and "imperative programming is the wrong default" are contradictory, or at least sound like an impasse.
You stated why only the marriage of stateful with non-imperative (i.e. lots of monads, even if not the predominant default) might work in practice:
STM works for Haskell because mutation is very limited in the language; I very much doubt STM will be a good solution in the hands of a C#/Java programmer.
... and I have issues with some of those points based on personal experience. So, answering those points in order...
* Large stateful software is indeed here to stay... although, I tend to think single-owner state is still a good idea (even if it can be passed between owners).
Microsoft's Singularity project allows single owner state to be passed around and even passed across channels (a messaging mechanism).
* Preemptive threads and monitors don't scale... and neither do transactions. Anyone who's worked on a high-performance online transaction server will tell you that the transactions are there for their contractual consistency (a legal requirement in much financial software), not because they scale well.
Quite often you end up funneling data back to a single point because it's the only way to get the transactions occurring reliably at a decent speed. I'm talking about a custom in-memory database (checkpointed), not an I/O-limited disk monster.
Tim's case of 10000 entities with few overlapping updates feels a bit contrived when you're talking about "mainstream" programming languages, or even games. For example, how do you concurrently correlate large amounts of information about 10000 entities into a single place? What if the information needs to be worked on as it's being provided? Transactions scale poorly in such cases.
* Imperative programming being the wrong default... I tend to agree with it, but I don't think it should be dogmatic :-)
* No argument there.
* Well, I have supporting evidence otherwise. I've implemented highly parallel systems (scaling to 32 processors rather easily) that contain thousands of entities with mutable state all being correlated against real time events and providing real time information updates.
How did I do it? Well, most of it was done with lock-less message passing and by using user-mode context switches in certain cases (implemented transparently). Transactions never would have met some of the response-time requirements.
These systems weren't trivial either, they were large scale device management systems that often had to handle the interactions of multiple devices at once.
The best solution to use is purely dependent on your design and architecture. If you come up with your design and architecture with message passing in mind, it's just as viable. I believe this applies to games as well in my experience with them.
The solution to use should probably be decided on a case-by-case basis. Each has its merits and flaws.
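The lock-less message-passing style described above can be sketched minimally: one worker thread owns the mutable state outright, and producers interact with it only by enqueueing messages, so no locks around the state itself are needed. This is an invented illustration of the pattern, not the poster's actual system.

```python
import threading
from queue import Queue

inbox = Queue()
state = {"count": 0}     # mutable state, owned exclusively by the worker

def worker():
    while True:
        msg = inbox.get()
        if msg is None:                 # poison pill: shut down
            return
        kind, payload = msg
        if kind == "add":
            state["count"] += payload   # safe: only this thread mutates

t = threading.Thread(target=worker)
t.start()
for i in range(10):
    inbox.put(("add", i))               # producers never touch `state`
inbox.put(None)
t.join()
```

The queue serializes all updates, which is exactly what makes the response-time behavior predictable: there is no blocking on fine-grained locks and no transactional rework.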
Agreed
Also, why is the under-the-covers complexity of STM bad, when the under-the-covers complexity of garbage collection is good?
Because garbage collection
Because garbage collection is usually a bad idea. Seriously it is. No Seriously. In the vast majority of cases the ownership of memory is clearly defined and best dealt with by the compiler or programmer. In the few cases where garbage collection is useful a hell of a lot of implementations fail to actually collect it.
Basically, using garbage collection should be the programmer's choice (it's supposed to be their job after all). The same seems true of STM. Widespread application of a technique to all areas as if it were a silver bullet is always foolish.
GC
1) Many implementations of garbage collectors are bad.
2) If we could avoid having a GC, and still get rid of the garbage that would be better.
In a language with functions as first-class citizens, it is not obvious to me that you can easily live without a garbage collector - especially if you do not want to require whole-program compilation at all times.
Maybe the seeds of some arguments?
The post seemed to claim that there are some issues with transactions (not detailed, maybe starvation or contention?) which will at the very least damage composability, and that STM doesn't address distributed systems at all.
I didn't see anything to back up these claims - as far as I can tell you could rewrite everything in terms of garbage collection: "dynamic memory is a bad idea, garbage collection moves memory management into the system but doesn't explain how it will solve any of the problems and I think it won't, garbage collection won't work in distributed systems"
Perhaps these arguments have more force against STM, and LtU seems like a fine place to ask for them to be worked out (homepage vs. forum is debatable, but not worth debating). Summing it up, I'd say Patrick Logan raised some reasonable questions, but didn't provide answers.
So, can anyone point to theoretical limits, classes of programs current STM implementations handle poorly, composable reasoning strategies for correctness and progress of programs under other concurrency strategies, etc.
Point, where art thou?
I'd say Patrick Logan raised some reasonable questions, but didn't provide answers.
It's more that he hinted that reasonable questions exist but failed to state them. Right now the post is so heavily edited that it looks worse than a wiki page in thread mode during a flame war.
I read the post twice in its current incarnation (i.e. with the comments mixed into the original text in unpredictable ways) and I'm unable to find anything other than "STM is bad, mmmkay?". There are valid questions to be raised wrt STM; for example, they fail to provide process isolation (e.g. I would like to ensure that only a group of processes accesses a bunch of STM vars, so if starvation occurs only this subsystem needs to be crashed and restarted).
Just saying that STM is wrong and everything should be done in with message passing is bizarre. There are other concurrent models (e.g. Oz's dataflow variables) so where is the argument showing that message passing is better for most domains? This feels like Smalltalk advocates or pure FP aficionados that believe in one hammer to rule them all.
I'm sorry Patrick but sometimes message passing is the wrong tool (while I believe that it's the most useful default model).
DBMS transactions do not scale
Are there any transactional systems out there that are not the usual shared memory or superfast interconnect giants?
Distributed transactions are probably the hardest research problem in database theory, practically not solved in large scale.
Transactions can scale, given enough information about them - whether they commute etc., but that would probably lead to integrating a theorem prover in the language, something more radical than using message passing. It's an interesting paradigm though.
Actor Transactions
Are there any transactional systems out there that are not the usual shared memory or superfast interconnect giants?
Transactions over actors should scale so long as the data or resources being accessed by the transactions are not highly focused on a very few processes.
STM replaces locks & semaphores, not MapReduce
Many problems can be parallelized easily by splitting the data up and streaming it through many processors. And in these cases, you definitely want to use MapReduce, nested data parallelism, or some other kind of parallel 'map' abstraction.
Unfortunately, not all problems are so easy to parallelize. Witness Tim Sweeney's example from the Unreal Engine: 10,000 game-world entities spread across many cores, all being updated in parallel. There's no easy way to do this with any variant of 'map', because the entities in the game world interact in arbitrary ways, and affect each other's state.
Now, you could solve this problem with semaphores and locks, but you'd go insane. And this is where STM offers a huge win: It takes all the nightmarish complexity of using fine-grained locks, and replaces it with a simple transaction model, complete with automatic restarts and strong guarantees against deadlock.
I can't think of any reason to loathe STM so strongly unless you are absolutely sure that you'll never encounter a problem which requires shared state, and which resists the other tools in your toolbox.
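The lock-ordering trap behind that insanity is easy to show concretely. Here is a minimal Python sketch (a hypothetical two-account example, not from the post) of the manual discipline that fine-grained locking demands — and that a transaction model would enforce automatically:

```python
import threading

locks = {"a": threading.Lock(), "b": threading.Lock()}
balances = {"a": 100, "b": 0}

def transfer(src, dst, amount):
    # Fine-grained locking stays deadlock-free only if every caller
    # acquires locks in one global order -- exactly the discipline an
    # atomic transaction block would give us for free.
    first, second = sorted((src, dst))
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

# Opposite-direction transfers from two threads: without the ordering
# discipline above, this pair could deadlock.
threads = [threading.Thread(target=transfer, args=("a", "b", 30)),
           threading.Thread(target=transfer, args=("b", "a", 10))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balances)  # {'a': 80, 'b': 20} -- money conserved
```

With two accounts this is manageable; with 10,000 interacting entities the ordering discipline has to hold across every update path in the program, which is where hand-written locking breaks down.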
Might have a point...
Lots of dodgy assertions and straw-man bashing, but there are some lucid moments. This sentence intrigued me:
And if some group is going to retrofit transactional memory into some significant Java or C# system, well, they would be far better off investing that time into a simpler, shared-nothing rewrite into a language like Erlang, a simpler language like Smalltalk, or even a better shared-nothing coordination mechanism like Javaspaces.
Does he have a point? Might industry invest resources in STM that would be better invested in the Erlang model?
How to fix a broken toolbox.
Does he have a point? Might industry invest resources in STM that would be better invested in the Erlang model?
Who knows? Both Java and C# provide almost none of the guarantees one has using STM in Haskell or message passing in Erlang. He's comparing kludging STM into an existing imperative language against rewriting millions of lines of code in an obscure language (I like Smalltalk and Erlang, but they're unknown to most of the industry) or starting to use a tuplespace mechanism to solve all concurrency problems. It's an apples-and-rhinos comparison. A much better comparison would be: kludge both STM and lightweight, isolated processes with message passing into both languages and see which would be easier (hint: isolation in Java is far from being lightweight). Or else let's compare rewriting to Haskell with STM vs. Erlang with message passing (assuming that both have the man-centuries necessary to create the libraries and tools available in Java or C#).
Also, how would Smalltalk automagically solve concurrency issues? AFAICS most Smalltalk implementations today use shared memory, threads, and locks as their basic concurrency solution.
I don't know enough about STM or Quark, but would it be easy to port STM to Quark so you could get access to the Java libraries via something very Haskell-ian?
I did just that
I have written an STM implementation integrated with quark: http://diversions.nfshost.com/blog/2008/03/27/a-cal-webapp-with-persistent-data-using-gwt-stm-and-bdb/
I don't think my STM implementation is terribly good -- it probably locks more than it should -- but it works with the various examples SPJ uses, and has similar syntax to the Haskell version.
My implementation persists the mutable state to Oracle's BDB.
I find Quark a great way of experimenting with functional programming while still having access to my favourite Java libraries.
(lucky these threads live forever :-)
It looks like there might be
It looks like there might be an implementation of STM (LibCMT) for C#. C# has a form of software isolated processes (through the use of AppDomains), though they're not fully-fledged as yet. They can be used to model the kind of isolation for transactional memory I believe you're referring to.
MC# and Cω use message-passing concurrency, in slightly different forms, although they're not strictly C# (MC# compiles to C#, so they can be integrated at some level). They are similar enough to C# that the effort involved in porting existing concurrent C# code would be entirely due to the switch to message-passing.
It would be an interesting experiment to take an existing piece of concurrent C# code and adapt it to both the LibCMT library and MC#, to get a feel for which is the shorter trip.
STM package written in C# published 2005
Seems like Microsoft Research published a C# Software Transactional Memory package back in 2005...
The SXM is a software transactional memory package written in C#. It is much easier to use than prior STMs because it uses Reflection.Emit to transparently add synchronization code to sequential data structures.
Not much concrete info, but still interesting
I've seen lots of cheerleading about STM, but not much practical experience to back it up. Patrick is bright enough that I don't immediately dismiss his opinions as aimless ranting :)
Uses of STM?
Who is using/has used STM for real work these days? In what language?
Erlang, while obscure, does have somewhat demanding real-life applications.
Yes, in Java
This incarnation is being used in an open-source university management system. The system was developed in-house and has been in production for a couple of years now. For more insight, you can read the SCOOL'05 paper, or go through the presentation.
Tonight make it magnificent
I didn't initially notice that the post in question seems to be a reaction, at least in part, to this ACM Queue article (at least, I think it is; I have a hard time telling what's really going on on that blog page.)
Among other things, the above article "illustrates how an atomic statement could be introduced and used in an object-oriented language such as Java." I think Guillaume Germain captured one of the concerns about this possibility quite well:
I can see misguided programmers starting to sprinkle their code with 'atomic' statements ("just in case"), a bit like one would do with 'yield' statements in a non-preemptive concurrent system.
I can see how this prospect could generate a certain amount of trepidation. The problem, though, is not with STM itself, and in that respect, Patrick errs when he says things like "this is a really bad feature that could screw up a lot of software for years to come". Refactoring for factuality, I think he means something more like "I'm concerned that this is an all-too-seductive feature that could screw up a lot of software for years to come". In that, he could be right, but it actually has nothing to do with the technical merits of STM.
I don't see the concern
Essentially that argument reduces to "some people might not use it right" which I must point out is nonfalsifiable (ahead of the fact anyway) and can be said of anything. For example, has it been our experience that misguided programmers have started sprinkling their Java code with synchronized blocks, just in case? I haven't seen that happening; the problem I've mostly seen is underuse of synchronized causing race conditions. (Please don't take this as an endorsement of Java shared-state concurrency.)
Predicting feature quality
Essentially that argument reduces to "some people might not use it right" which I must point out is nonfalsifiable (ahead of the fact anyway) and can be said of anything.
I almost agree, except I suppose that there must be some class of features which would actually be a bad idea to add to a language such as Java — for example, how about raw memory pointers? I think there are some plausible criteria for recognizing certain kinds of bad features in advance — for example, features which violate safety properties. However, in this case, an argument of that nature hasn't actually been made, afaict.
But many decisions in PL design are based largely on the goals and convictions of the designer. If Patrick were designing a PL, presumably he'd leave out STM, and then the question could be decided in the marketplace, which is probably where such decisions ultimately belong (because there doesn't seem to be any other reliable way to decide them).
Yes!
For example, has it been our experience that misguided programmers have started sprinkling their Java code with synchronized blocks, just in case?
For the record, Yes, this has most definitely been the case. I have seen exactly this phenomenon all over the codebase of at least one large (top 50) e-commerce site. When subtle synchronization and race condition bugs occur (e.g., users click the same effectful link several times in quick succession, session data gets messed up, something bad happens...), I've seen synchronized markers employed like pixie dust until the problem seems to go away. I wish I were kidding when I say that virtually no one understands how this software really works or why synchronization occurs where it does.
Essentially that argument
Essentially that argument reduces to "some people might not use it right" which I must point out is nonfalsifiable (ahead of the fact anyway)
Not completely non-falsifiable. The C# designers regretted adding the convenient "lock { }" statement to the language for just this reason: it gave people a false sense of security, and harmed performance.
A sympathetic take
Though I agree with those who found Patrick's post difficult to follow and possibly overly provocative (though it is a personal blog post... ;-) ), I found myself in sympathy with the overall feeling.
I have nothing against research into STM, of course, and can imagine some scenarios where it would be a nice tool to have if used lightly and tastefully, but there seems to be a danger that it might be becoming the latest panacea. Such "silver bullets" discourage people from trying known good solutions where they are available (such as message passing) and instead simply enable existing pathologies for a bit longer. (Or simply convince people that the problems will be gone Real Soon Now.)
If we follow the "we already use transactional DBs" meme to its logical conclusion, essentially what STM is doing is cutting out the middle-man and embedding the database into your app, but without the persistence.
As anyone who has worked with a transactional DB with highly contended data knows, transaction support in the DB gives you some things for free, but makes other kinds of headaches. Retry support after a rollback, for example, and subtle deadlock conditions can create unacceptable performance and reliability hits.
This doesn't mean I want to get rid of transactions, of course, but it makes me much less optimistic that adding more of them liberally to my system will make my life easier for large apps.
If we add STM as a tool high in the CTM hierarchy of statefulness, where we take it for granted that we are not using any more state than we really need to solve our problem elegantly, I'm much less worried, of course.
On the comment of being the
On the comment of being the latest panacea, I have to argue that such thinking might make progress a bit stagnant, wouldn't it? Researching something thoroughly until it's proven not to work well seems a better idea than just going with something that's proven. I'm not sure if that was your intent in that comment, but it seems to be the result.
Eat your dinner before you reach for dessert
I have to argue that such thinking might make progress a bit stagnant, wouldn't it? Researching something thoroughly until it's proven not to work well seems a better idea than just going with something that's proven.
Given that there are a whole bunch of great ideas on how to reduce state and improve concurrency that we have already that have yet to gain wide acceptance and implementation in industry, I can't say I'm all that worried I might be stifling innovation by critiquing an idea whose major appeal is that it looks just like the same rickety old techniques most people are familiar with.
What?
...an idea whose major appeal is that it looks just like the same rickety old techniques most people are familiar with.
How is that even slightly true? STM, as far as I can tell, doesn't look anything like conventional techniques for thread communication.
The main appeal of STM is not really that it looks like anything else, but that it gives you very real guarantees about the composability of working components. You can take two transactions which work well on their own, and compose them in a number of ways into new transactions, the semantics of which have nice properties. The most obvious way is sequentially, and you get a guarantee that the result of the transaction, supposing that it commits, is independent of the behaviour of all other threads -- even if things happen in another thread in between the two transactions you've composed, they can't impact the result.
You can also compose them such that if the first fails to commit at first, and decides to retry, then the second is tried instead (and if it retries, then the whole transaction retries). This allows you to wait for any of a number of resources to become available, and to do different things depending on which it was. The inner transactions don't need to be prepared specially for being composed in this manner.
You also get all this composability even in the face of exceptions. A subtransaction which wants to fail by throwing an exception doesn't need to care about what changes the transactions it might eventually be part of might have made in case they don't catch it (and be left half-committed). It just throws the exception, and that causes the transactions it's a part of to behave as if none of the things they've done had ever happened and rethrow the exception until it's caught.
Incidentally, this behaviour can be used to cause transactions to throw an exception when they otherwise would have retried. x orElse (throw ...).
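For readers without a Haskell environment handy, the composition semantics described above can be sketched in miniature. This is a toy, assuming a single coarse commit lock rather than a real STM runtime; `atomically`, `or_else`, and `withdraw` here are invented stand-ins for their Haskell namesakes, and a real implementation would block on retry rather than propagate it:

```python
import threading

class Retry(Exception):
    """Raised by a transaction that cannot proceed yet."""

_lock = threading.Lock()  # coarse-grained commit lock; real STM is finer

def atomically(txn, store):
    # Execute txn on a scratch copy; publish it only on success, so a
    # Retry or an exception leaves the shared store untouched.
    with _lock:
        scratch = dict(store)
        result = txn(scratch)          # Retry/exceptions propagate out
        store.clear()
        store.update(scratch)
        return result

def or_else(t1, t2):
    # Compose two transactions: if t1 retries, roll back its writes and
    # run t2 against the original state.  Neither transaction needs to
    # be prepared specially for being composed this way.
    def combined(store):
        snapshot = dict(store)
        try:
            return t1(store)
        except Retry:
            store.clear()
            store.update(snapshot)
            return t2(store)
    return combined

def withdraw(account, n):
    def txn(store):
        if store[account] < n:
            raise Retry                # "wait" for sufficient funds
        store[account] -= n
    return txn

store = {"a": 5, "b": 50}
# Take 20 from "a" if possible, otherwise from "b":
atomically(or_else(withdraw("a", 20), withdraw("b", 20)), store)
print(store)  # {'a': 5, 'b': 30}
```

The point is the shape of the guarantee, not the implementation: the inner `withdraw` transactions compose into a new transaction whose partial effects can never leak.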
The trouble with many of the other systems, including most of the message passing systems, is that in order to compose components that work separately, one either ends up rewriting them, or sticking a layer of indirection in front of them, or worse yet, setting up some system of locks. Or, you design some transactional system from the start in terms of your messages. It would be easy to put STM to work in a combined solution there (STM transactions as messages), but the other way around is trickier to get right.
But what I'd really like to point out, more than all of this, is that STM is not really about the operational behaviour of the system. All it does is provide a language for saying things that you need to be able to say with regard to thread communication. "These things happen together, or not at all.", "If this doesn't work, then try this instead.", and so on. It's the right level of abstraction because it leaves the compiler and library writers room to implement things in more and more clever ways, getting more performance, while keeping the semantics (in the sense of the overall effect of the program) fixed.
This is very much the same as garbage collection. Early garbage collectors were not great at what they did, but they provided the machinery for the right abstraction for working with memory. The early implementations of STM are not great, but they're not nearly as clever as they could be at managing the resource they've been given -- they're the somewhat obvious implementations so that we can see what programming using STM is like. Specifically, the decision as to when to retry a transaction, and which transactions in the system are likely to run into contention issues can probably be done in a much more clever way, though the current approach is not as bad as it could be. If it turns out that we like STM (and personally, I really do think we will), then the lower level implementation can improve without us having to rewrite our programs, much like garbage collectors did.
Another unrelated and small remark that I'd like to point out is that message passing is the sharing of state. It's just a particular form of threads communicating their state to one another. When one thread sends another a message, it is creating a data dependency between its local state and that other thread's local state. Systems which can't do this are not really expressing concurrency, just parallelism.
I point this out only because it grates on my ears a bit when people expound the virtues of "no shared state", and then talk about systems where threads share state with one another (albeit in a responsible and selective fashion). I know that people are using "shared state" to refer to one particular model of thread communication, but that should not lead one to the idea that other models share no state between threads.
On the flip side of the same coin, calling STM "shared state" would be somewhat degrading, because while it could be used in that way, it would be an unduly awkward way of achieving that effect, and one would get none of the benefits of using transactions. Similarly, one could use message passing where threads continuously communicated all changes in any parts of local state which might be relevant, but it would be an awkward misuse of the system.
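That observation — a message transfers state between threads — is visible even in a few lines of Python (a hypothetical producer/consumer pair, with `queue.Queue` standing in for a mailbox):

```python
import queue
import threading

mailbox = queue.Queue()

def producer():
    local_count = 41              # producer's private state
    mailbox.put(local_count + 1)  # sending it creates a dependency...

def consumer(out):
    out.append(mailbox.get())     # ...the consumer's state now derives
                                  # from the producer's, no locks in sight

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [42]
```

No memory is shared directly, yet `received` depends on the producer's local state — state has been shared, just selectively and at explicit points.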
Shared Context
How is that even slightly true? STM, as far as I can tell, doesn't look anything like conventional techniques for thread communication.
The familiar thing is transactional databases, which are quite commonly, though not always with conscious awareness, used as big multi-process global variables.
I point this out only because it grates on my ears a bit when people expound the virtues of "no shared state", and then talk about systems where threads share state with one another (albeit in a responsible and selective fashion).
Many of us here take for granted a theory of statefulness that is not binary, but rather hierarchical, following CTM. For us, shared state refers to a particular slot in that hierarchy. In this scheme, message passing is less stateful than full state-sharing.
CTM and Transactions on Cells
This may be a naive question, in that I don't know a lot of the details about STM.... but.... Isn't there a correlation between STM and the CTM section 8.5.4 on Transactions - Implementing transactions on cells?
Dey turk ma jugh!
But what I'd really like to point out, more than all of this, is that STM is not really about the operational behaviour of the system. All it does is provide a language for saying things that you need to be able to say with regard to thread communication. "These things happen together, or not at all.", "If this doesn't work, then try this instead.", and so on. It's the right level of abstraction because it leaves the compiler and library writers room to implement things in more and more clever ways, getting more performance, while keeping the semantics (in the sense of the overall effect of the program) fixed.
This is very much the same as garbage collection.
Aha. Which helps explain the reaction, too. On the purely pragmatic front, it's reasonable to be skeptical that compilers are going to do a good job of implementing a new high-level abstraction, until their success is well proven, and the appropriate use cases well understood.
There are also all the factors surrounding what happens when something that used to be hard becomes easier. Concurrency is a bit different from garbage collection, though. When garbage collection works well, which is often, programmers don't really have to reason about memory management much (or even understand it!) That seems unlikely to be the case for concurrency: it's still going to be necessary to reason about it, even if the way concurrency requirements are expressed becomes more high-level. So the concern that programmers will misuse the feature due to lack of understanding of the underlying issues seems to have more of a basis in this case, than in the case of garbage collection.
(This just helps me understand why STM might have provoked a negative reaction, I'm not agreeing with the reaction.)
GC works, programmers don't
When garbage collection works well, which is often, programmers don't really have to reason about memory management much (or even understand it!)
Tell that to my poor desktop with 1.5 GB of RAM entirely occupied. Or to our customers' servers running J2EE. People don't get memory management and just hope that the GC will magically keep the memory usage low; the same thing happens today with concurrency and will continue to happen, even if the entire industry adopts Haskell/STM or Erlang. Also, as with memory management, concurrency solutions will keep showing up even if the industry adopts STM as it adopted GC: there's still research on linear resources and regions because GC isn't enough. Programming is a very hard activity and all tools can (and probably will) be misused. Today "memory is cheap" solves all memory leak problems; tomorrow starvation and friends will require the "cores are cheap" approach.
Tell that to my poor
Tell that to my poor desktop with 1.5 GB of RAM entirely occupied. Or to our customers' servers running J2EE. People don't get memory management and just hope that the GC will magically keep the memory usage low; the same thing happens today with concurrency and will continue to happen, even if the entire industry adopts Haskell/STM or Erlang.
J2EE is the antithesis of concurrent programming -- it is the new mainframe. Cram everything into one JVM, then put a lot of textual boilerplate around it to make it appear "loosely coupled". Better concurrent programming tools would allow these systems to "spread out".
The same is true of desktops, certainly. Until recently, though, they've not had multiple CPUs readily available, so there was nowhere to spread out to. Now that there is room to spread out, the languages we've been using prevent it.
STM will not help spread systems out. Message passing and shared-nothing sequential programming will. Most programmers should be learning about messaging and most tool development should be in support of them. STM is a shiny tool in the best interest of very few programmers.
Transactions
J2EE is the antithesis of concurrent programming -- it is the new mainframe.
J2EE is a collection of a whole lot of different standards. It includes successes such as servlets and JSP, failures such as EJB 1.0, and JMS: Java's message passing API, which I thought was what you were arguing for. And it's hard to see how a language (Java) with threading and synchronization primitives built in can be the "antithesis of concurrency". Even if we stipulate all the complaints about the model itself, it's undeniably deeply concurrent.
I'm unclear what you mean by the "new mainframe" remark. Did you mean that J2EE is a highly successful high-throughput centralized system? Somehow the remark came off as negative; perhaps this is one of those from-the-hip barbs you admitted to elsewhere?
STM will not help spread systems out.
You don't say what "spread out" means; my best guess is distributed computing. If so, when you say that transactions will not help with distributed computing you are correct. However this is unsurprising since transactions do not address distributed computing or communication of any kind; they address atomic updates. We could just as well have pointed out that message passing doesn't help with atomic updates, or that type inference doesn't help with male pattern baldness. The dichotomy you are pushing is entirely a false one: transactions and message passing are independent techniques. We can have one or the other or both or neither.
I happen to think that message passing is a good technique, although I lack the brazen confidence to say that it is what "most programmers should be learning about." As to who transactions will benefit, the answer is that it will benefit programmers who are working on software that requires atomic updates. Whether this is a "very few" programmers or not I have no idea; I know it includes me.
Finally, I note that the repeated characterization of people studying transactions as being driven by "shiny" things is inflammatory and counterfactual. If you don't care to address or even acknowledge the various arguments in this thread against your assertions, that is your prerogative, however if you could do so without insulting me I would appreciate it.
You don't say what "spread out" means; my best guess is distributed computing. If so, when you say that transactions will not help with distributed computing you are correct. However this is unsurprising since transactions do not address distributed computing or communication of any kind; they address atomic updates.
I believe his core point was that both techniques can safely address concurrency, but that the message-passing abstraction also permits distribution with no additional machinery; it is thus ultimately more expressive and useful. Perhaps a CS version of Occam's razor is in order: don't multiply abstractions unnecessarily.
Irony
Perhaps a CS version of Occam's razor is in order: don't multiply abstractions unnecessarily.
Ironically, the occam programming language was developed as a message-passing language capable of expressing both concurrency (most occam programs were composed of networks of parallel "processes" which communicate through message-passing) and distribution (occam processes could be transparently distributed across a network of transputer microprocessors). IIRC, the name of the language was chosen because Occam's Razor was a good expression of the underlying philosophy of the language.
Optimistic concurrency control is the big question mark
I've written (with some other students) an implementation of STM for OCaml. The implementation was dirt simple; it took less than five hundred lines of code, and that includes extras like first-class transactions and optimizations to eliminate overhead in the first-order case. So I don't believe that it's a very complex model.
However, the giant question mark for me is the automatic rollback and retry. Being optimistic and rolling back on failure is extremely efficient when it works, but when it doesn't it can be very expensive. I just don't know how to predict the performance of an STM program, and I worry about debugging programs that fail to meet performance targets, because they may fail to hit targets nondeterministically.
And Maurice Herlihy (the guy who has mostly been pushing the idea) has never been quiet about this being the big research question behind STM.
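For concreteness, the optimistic scheme in question can be sketched as a toy version-validation STM (plain Python, not the OCaml implementation mentioned above; `TVar` and the read/write logging are simplified stand-ins). Each failed validation discards the whole attempt and re-runs the transaction — exactly the cost that becomes hard to predict under contention:

```python
import threading

class TVar:
    """A transactional variable: a value plus a commit version."""
    def __init__(self, value):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()

def atomically(txn):
    # Optimistic scheme: read freely while logging versions, buffer
    # writes privately, then validate-and-publish under a short lock.
    # A failed validation throws all the work away and retries.
    while True:
        reads, writes = {}, {}

        def read(tv):
            if tv in writes:
                return writes[tv]
            reads.setdefault(tv, tv.version)
            return tv.value

        def write(tv, value):
            writes[tv] = value

        result = txn(read, write)
        with _commit_lock:
            if all(tv.version == seen for tv, seen in reads.items()):
                for tv, value in writes.items():
                    tv.value, tv.version = value, tv.version + 1
                return result
        # validation failed: discard the logs and re-run txn

x = TVar(10)

def increment(read, write):
    write(x, read(x) + 1)

for _ in range(3):
    atomically(increment)
print(x.value, x.version)  # 13 3
```

Single-threaded this always commits on the first try; with many threads hammering the same `TVar`, the retry loop is where the work (and the nondeterministic performance) goes.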
If we add STM as a tool high
If we add STM as a tool high in the CTM hierarchy of statefulness, where we take it for granted that we are not using any more state than we really need to solve our problem elegantly, I'm much less worried, of course.
I guess this is my perspective on the whole thing, which makes me see this as something of a mountain out of a molehill. I realize that STM and, for example, Mnesia, aren't precisely the same thing, but as compared to message passing, they're certainly more similar than different. So arguing for Erlang rather than STM seems a little short-sighted, when the Erlang community has already acknowledged the need for sharing at this level, and has in fact embraced and bragged about it! I'd refer back to PVR's Convergence paper.
The only thing I really have to say about the original blog post is that yes, Patrick is someone whose opinions I take seriously and yes, the current format of the post (99% comments by now?) makes it virtually impossible to tell what it originally said...
Em, peek. Is it safe?
I realize that STM and, for example, Mnesia, aren't precisely the same thing, but as compared to message passing, they're certainly more similar than different. So arguing for Erlang rather than STM seems a little short-sighted, when the Erlang community has already acknowledged the need for sharing at this level, and has in fact embraced and bragged about it!
The difference being that programming in Mnesia, even the "in memory" variety, is still deliberately a "database" thing. Most everything should be in Erlang.
Many of the Erlang processes might very well be "database-like" stateful things, with the Erlang message queue and pattern matching essentially acting like a transaction processing monitor. No Mnesia needed.
I can sympathize with game development. I've been a developer for digital electronics simulators where 80% is just objects and 20% is highly tuned C. I would imagine that the average game developer doesn't even get involved in that highly tuned part on a daily basis.
So it's not you I am worried for. It's the average game developer, the average financial systems developer, the average business systems developer. While the mechanism per se worries me, it's the increasing fascination with it that bothers me most.
Wow. I throw barbs like this out fairly often, shooting from the hip if you will, to see what comes back. I had no anticipation for this much feedback one way or the other. Very enlightening. I should have known it would be from the LtU crowd!
I had no anticipation for this much feedback
You should post here more often... :-)
For those new to LtU: Patrick was a regular contributor in the early days of LtU, and I regularly try to persuade him to come back...
I Wish
I barely have time for my half-baked ideas on my own blog! Then I hit on something controversial like this and the firehose reminds me...
You see what I have to deal
You see what I have to deal with here, guys... ;-)
a bit more...
Nice to see you on LtU again!
Upon further reflection, I do have pretty big doubts about STM for Java/C#/etc. programmers... I had only encountered STM in the Haskell setting, and never really imagined that anyone would try to apply it in a mainstream language without radical changes to the host language.
It seems to me that, with the possibility of rollback and retry, STM requires a change to our understanding of side effects at least as radical as a switch to pure FP. Perhaps in the end the programming model won't have to change as much as switching to Haskell, for example, but I think that the understanding of side effects will have to change. Obviously the nice fit between Haskell and STM is well understood, but the other side of that coin is that this is a potentially huge hurdle for languages like Java.
Does anyone have a pointer to work on bringing STM to one of these mainstream languages, particularly solutions to IO operations and other side effects? Will we have to overhaul the entire JDK, for example? Even so, won't programmers need to understand that prompting a user for input needs to be "committed" before the program actually waits for that input?
the current format of the post (99% comments by now?) makes it virtually impossible to tell what it originally said...
Yeah, I was personally a bit annoyed by the fact that he duplicated and interspersed my response into the original post along with his replies. Replying in a comment would have been better form, I think. *hint*
While his point is
While his point is interesting, he must be an Outlook user: comments are before the article, etc.
The result is quite obscure, bleh.
Outlook User?
No. Just busy and sloppy.
No Sense
And no, I can't make heads or tails out of the original blog post anymore either.
Why Not Both?
In my day job working on databases I use transactions AND message passing all the time. Transactions need to be kept small and fast, or else locking and contention problems start affecting performance - and they aren't easily predictable, because different users accessing a common store (database) with different requirements can interact in obscure ways.
The answer (in my experience) is to reduce shared state to an absolute minimum, use transactions with their Undo, Retry or Leave facilities to handle clean updates to global state and then message passing (via that state) for the top level processes.
It seems to me that the real crux of this argument is that STM in an imperative language like Java would be a terrible idea, but not in a language like Haskell, in which the state that STM co-ordinates is both minimised and compartmentalised.
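A rough Python sketch of that division of labour (hypothetical account names; a plain lock stands in for the small, fast transaction, and a queue carries the top-level messages):

```python
import queue
import threading

balances = {"alice": 100, "bob": 0}
state_lock = threading.Lock()  # stand-in for a small, fast transaction
requests = queue.Queue()       # message passing between top-level processes

def worker():
    while True:
        msg = requests.get()
        if msg is None:        # shutdown message
            break
        src, dst, amount = msg
        with state_lock:       # the "transaction": tiny, no user think-time
            balances[src] -= amount
            balances[dst] += amount

t = threading.Thread(target=worker)
t.start()
for _ in range(5):
    requests.put(("alice", "bob", 10))
requests.put(None)
t.join()
print(balances)  # {'alice': 50, 'bob': 50}
```

Shared state is confined to one dict touched only inside short critical sections, while all the coordination between processes happens through messages — both tools, each where it fits.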
Good Point, IMHO
Throughout this thread, I've been mulling over how the issues Patrick raised over STM, and what I believe to be the more reasonable responses to them, relate to Functional Relational Programming: Out of the Tarpit, which I still need to make time to go over in more detail.
I think you nail it in your
I think you nail it in your last paragraph.
Why terrible?
I wouldn't be so quick to assume that this is such a "terrible idea" in Java, at least from a technical perspective (unless there are technical barriers that haven't been raised yet).
Quite a bit of the use of Java in server-side corporate/enterprise work is quite stateless and transaction-oriented, and relies largely on the use of traditional databases to achieve that. As main memory gets bigger & cheaper, having ways to replace some of that database usage with similarly-architected in-memory solutions could be useful. Just one example.
In theory.
Exactly!
I might be the one exception, or maybe it's because the only exposure to STM (in a practical sense) is Haskell, but I never even considered STM and message passing to be competing technologies!
I've always assumed that the consensus was "use implicit-ish approaches for parallelism where possible, use message passing for concurrency where suitable, and use STM at the very 'bottom' when you absolutely need shared state".
It seems like this whole argument here is about choosing "either or", and that it completely disappears once you look at it through the lens of a language like Haskell -- functional without state or with local state for the majority of the cases, message passing concurrency for most of the concurrent stuff underneath, and every now and then shared state with STM.
In my opinion that gives a really good toolkit which doesn't limit you in what kind of applications you can write conveniently. I think Tim Sweeney gave very good examples of when message passing just doesn't do the job -- can we please stop pretending that either approach solves all problems? They complement each other nicely -- people are just extra excited about STM because it very nicely solves the remaining hard problems after you've started using message passing everywhere it makes sense (locks, monitors, lack of composability etc. in the cases where you absolutely do need shared state). I'm currently working in the games industry as well, so it may be that games are just inherently ill-suited for message passing, but I doubt it; I'm sure lots of applications would do just fine with just message passing, but shared state will always be needed here and there.
STM in Erlang?
Hm, does that mean it would be great to have STMishness implemented for Erlang, to get the best of all possible worlds?
When I say STM I should be
When I say STM I should be clear that I mean "STM in Haskell". For example you can statically guarantee that only transactional actions can be performed in an "atomic" block, no others, and you can't run transactions without "atomic" either. (So all of those "people will forget and cause bugs, or abuse it and cause performance issues" complaints that have been raised, are dead wrong -- it can only be used for transactions thanks to the type system and it can never be forgotten and cause bugs, again thanks to the type system).
So to answer your question, I do think STM comes into its own in a pure and statically typed setting, so Erlang wouldn't be my first choice. Haskell comes very close to the "ideal" (it already has STM and message passing, and purity). In order for messages to be truly useful we may need some more help from the type system (e.g. better records? union types?).
STM in Erlang
Interesting to see some discussion around implementing (something like) it in Erlang.
Would STM even be possible in Java?
It seems to me that STM requires a pure language. As I understand it, STM requires that the transactions can be retried as many times as necessary with the same meaning each time.
In Haskell, encapsulation within the STM monad ensures that the transaction has no dependencies other than the STM variables. Would this be possible to enforce in a language like Java?
Could it be done using a pure sublanguage (based on Featherweight Java perhaps)? Ignoring the issues of mutable state outside of the system (e.g. I/O), would treating the entire heap as an STM store be feasible?
Apparently
Somebody thinks so.
TFA^2
The article which seems to have triggered all this also describes how it might work in Java (e.g. "how an atomic statement could be introduced and used in an object-oriented language such as Java.")
Note that two of the authors of the article are from Intel, which has an obvious interest in promoting solutions that might help with multicore architectures.
The main problem
The main problem which languages other than Haskell face in implementing STM is defining a sublanguage of things which are permissible in transactions and can be rolled back transparently in case of a retry or exception, and then ensuring that nothing else accidentally lands in a transaction. You can include more than just interactions with transactional variables in this -- actions which simply read from the environment without affecting anything would also be okay. A language like Java would almost certainly need some extensions to its type system to support this. You could do it in dynamically typed languages like Python and Ruby as well, but you do need to take extra care. Personally, in cases like that, I'd probably want to compile the whole transaction and make sure it's okay before running any of it. Throwing an exception at the first bad action (and not performing it) would also work, but makes me feel somewhat uneasy. That might just be my tendency toward static guarantees though.
Setting aside the need to check that transactions really consist of valid actions, yes, you could go ahead and make every variable a transactional variable. That seems a little strange to me, but in principle it could be done. You'd really have to work on making the transaction machinery lightweight. Every variable update or read not already in an atomic block would become a small transaction in itself. Note that atomic blocks which have read the variables you're modifying might be running optimistically, and will have to be notified that they must restart, because something has trampled on their read set.
You would probably want whole assignment statements like:
x <- f(x) + g(y)
to occur as-if-atomically, so this might translate at a lower level into something along the lines of:
atomically
    x' <- read x
    y' <- read y
    write x (f(x') + g(y'))
where read and write are the primitive transactions for accessing transactional variables, so x' and y' are the actual data contents of x and y respectively. (This translation obviously depends on a whole lot of properties of your actual language. Here I'm assuming that f and g want actual values and not references as parameters, and that they are not themselves variables.)
The only trouble I see with this aside from the obvious performance concerns is that it does almost nothing to push users in the direction of limiting the state that they share. When you have to explicitly construct transactional cells, and hand them to your threads, it gets you to think about something which is fundamentally important to the design and well-being of your program. Then again, so do lots of features of Haskell which people using other languages like to ignore. ;)
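To make this translation concrete, here is a minimal sketch of optimistic transactional variables with commit-time validation and retry. It is my own illustration in Python, with one global commit lock for simplicity - not how a production STM is built:

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # a single global commit lock keeps the sketch simple

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> version observed at first read
        self.writes = {}  # TVar -> tentative new value

    def read(self, tvar):
        if tvar in self.writes:              # read-your-own-writes
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

def atomically(action):
    """Run action(tx) optimistically; restart it if the read set was trampled on."""
    while True:
        tx = Transaction()
        result = action(tx)
        with _commit_lock:
            if all(tvar.version == seen for tvar, seen in tx.reads.items()):
                for tvar, value in tx.writes.items():
                    tvar.value = value
                    tvar.version += 1
                return result
        # validation failed: a variable changed under us -> retry the whole block

# The assignment x <- f(x) + g(y) from above, with f = double and g = identity:
x, y = TVar(3), TVar(4)
atomically(lambda tx: tx.write(x, tx.read(x) * 2 + tx.read(y)))
print(x.value)  # 10
```

Note how the translated form shows up directly: the reads record versions, the write is buffered, and commit either publishes the write set atomically or reruns the transaction.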
STM locks and byzantine shared state
I've got some comments from personal experience of implementing a "kind of" STM in Askemos - a somewhat unusual distributed environment (somewhat akin to Erlang anyway), where STM suddenly makes sense.
I don't see shared state as a Devil
I may be naive, but I don't see shared state as a Devil to eradicate.
On the other hand, I do not believe that programmers should ever have to deal with locks-- they are way too easy to mess up.
So I will take any idea that allows parallel programming without locking. I like the idea of rollback and retry, but I am not sure I want to do that any time that a sharing conflict arises. Sometimes I may want to keep whatever value was written last, or first, or the one matching a predefined criteria without retrying the "losing" transaction. In fact if I was to chose I would not make rollback and retry the default.
STM may be composable with itself, but is it composable with other concurrency mechanisms? For example, how do I "rollback" message passing? Do I need to delay it until the transaction commits? Is it illegal to receive a message during a transaction? What if the program decides to parallelise transactions in a loop (should be safe), causing unnecessary conflicts?
I mean, all design choices ultimately come down to which costs you are willing to pay and which ones you are trying to minimize, right? In an ideal world, memory and CPU would be infinite, all data would be immutable and every computation would be reversible, right? Programming languages are simply compromises on that ideal that allow them to be implemented on physical computers with real constraints that sometimes dictate practical and impractical design choices.
Now I'm the last person in the world that could be called an expert on concurrency, so I need a simple, concrete example that I can wrap my head around. In the game World of Warcraft, there are mining nodes which players can run up to and mine, receiving raw materials as a reward. However, the resource is limited and mutually exclusive, so only one person can mine a resource at a time, and it can only be mined four times or fewer before disappearing.
I'm imagining a game engine that would support this feature in which each player was represented by a dedicated thread/lwp. I'm trying to figure out how STM and MP compare in this task, so my simplistic understanding leads me to these two designs:
For the STM design, each player thread attempts to access the mining node optimistically. If they are the only player attempting to mine the node, they succeed, and a packet is sent from the server to the client indicating success. If two players attempt to mine the same node, one of them succeeds first, and sends a success packet, while the other one fails and must roll back, sending a failure packet. This scales up to N players.
For the MP design, a single dedicated thread owns the mining node and is the only one that touches its state. Each player thread sends it a mine-request message; the node thread processes the requests one at a time, replying with a success packet while the node still has charges left and a failure packet once it is exhausted. This also scales up to N players.
They both seem like perfectly reasonable approaches to the problem, with the difference being that the MP solution requires an additional thread (which may already exist for other reasons). Furthermore, even in the non-contentious case, the MP version requires communication between two threads, whereas the STM version is basically as fast as a single-threaded implementation. In the highly contentious case of, say, 20 people all trying to mine the node at the same time, the STM case would force a potentially expensive rollback for all but one of the threads, while the MP case would simply send 20 message both ways (but at the cost of serializing the message processing, since the mining thread can only process one message at a time).
From this, I naively conclude that STM is probably faster in the optimistic case, and that MP probably degrades better under contention. If you know a priori whether a given interaction is likely to have low or high contention, then it seems to be a matter of sound design to choose one or the other, rather than dogmatically insisting on One Concurrency Solution To Rule Them All.
But, I'm open to being convinced otherwise.
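For what it's worth, here is a minimal Python sketch of the message-passing mining node: one thread owns the node state and serializes mine requests arriving over a queue. The charge count of four comes from the example above; everything else is my own scaffolding:

```python
import queue
import threading

def mining_node(requests, charges=4):
    """Sole owner of the node state: grants a charge per request until exhausted."""
    while True:
        reply_q = requests.get()
        if reply_q is None:          # shutdown sentinel
            break
        if charges > 0:
            charges -= 1
            reply_q.put(True)        # "success packet"
        else:
            reply_q.put(False)       # node exhausted: "failure packet"

requests = queue.Queue()
node = threading.Thread(target=mining_node, args=(requests,))
node.start()

results = []
def player():
    reply_q = queue.Queue()
    requests.put(reply_q)            # "I try to mine the node"
    results.append(reply_q.get())    # await success/failure

players = [threading.Thread(target=player) for _ in range(6)]
for p in players: p.start()
for p in players: p.join()
requests.put(None)
node.join()

print(sorted(results))  # four successes, two failures
```

Six contending players always yield exactly four successes, because the node thread processes requests one at a time - the serialization cost the comment above points out.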
P.S.
I forgot to mention that Erlang users are probably more likely to favor MP because threads are extremely cheap, while Java users are more likely to favor STM because threads are considerably more expensive. So it isn't necessarily that one group is right or wrong, but that the intrinsic costs involved make one solution or other more or less favorable. But maybe I just don't know what I'm talking about.
Intel STM compiler now available
We have a lot to learn before we can decide whether STM offers some relief from locks (they are NOT going away) and offers help for programming, or for tools which compose programs automatically.
We think that the existence of a C/C++ compiler supporting Software Transactional Memory (STM) would be a great help. So...
Today, we released a prototype version of the Intel C/C++ Compiler with support for STM. It is available from Whatif.intel.com.
The Intel STM Compiler supports Linux and Windows producing 32 bit code for x86 (Intel and AMD) processors. We hope that the availability of such a prototype compiler allows unprecedented exploration by C / C++ software developers of a promising technique to make programming for multi-core easier.
This prototype compiler download requires that you already have the Intel compiler installed - our web site explains how to get an evaluation copy (free for limited time) for the compiler to install, or Linux users can get a 'non-commercial' license - either are enough to use the STM compiler download.
Releasing this compiler offers an opportunity for exploration and learning. I think we should all hope that it helps understand the promise or myth of the value of STM better. The opinions certainly run the full range today - as this blog indicates all too well!
Erlang / Actor Model + STM
There has been some recent discussions on the benefit and mechanisms of supporting transactions even in Actor Model languages such as Erlang.
I think there are plenty of good arguments for avoiding shared memory process models (where the primary vocabulary is 'get' and 'set'), but the Actor Model doesn't avoid shared state because each actor may be stateful. Thus, transactions still have a very important place when it comes to maintaining consistency and making concurrency and complex negotiations or coordination events manageable for programmers.
The Erlang model of concurrency by no means reduces the value of transaction-based coordination... effectively STM + Actor Model.
[Edit: I've since written a page on this subject; I believe it well enough formalized to implement.]
# tikz picture inside of an enumerate environment
Is there a way I can make sure that the number of the enumerate environment stays in the top left of the tikzpicture? I'm using a custom environment that just wraps around the enumerate environment...
\documentclass{article}
\usepackage{tikz}
\usepackage{enumitem}
\usetikzlibrary{positioning}
\tikzstyle{dot} = [draw=black, fill=white, circle, inner sep=2pt]
\newenvironment{parts}
{\begin{enumerate}[label=\alph*)]}
{\end{enumerate}}
\begin{document}
\begin{parts}
\item
\begin{tikzpicture}
\node (a) {a};
\node (b) [above=1cm of a] {b};
\node (c) [right=1cm of a] {c};
\node (d) [above=1cm of c] {d};
\path
(a) edge node {} (b)
(a) edge node {} (d)
(d) edge node {} (c);
\end{tikzpicture}
\end{parts}
\end{document}
-
have you seen Aligning enumerate labels to top of image?? also, you can change the definition of your parts environment by simply using newlist instead – cmhughes Feb 25 '13 at 3:21
please let us know if your question is different- if the link I provided resolves the issue, that's great, and we'll close this as a duplicate :) – cmhughes Feb 25 '13 at 3:28
The label of the enumerate environment does not move to the bottom; rather, the bottom of the bounding box of the TikZ picture is placed on the baseline of the surrounding text.
This base line can be changed by using the baseline option of TikZ. The PGF manual states in subsection 12.2.1 “Creating a Picture Using an Environment” on page 117.:
The following key influences the baseline of the resulting picture:
tikz/baseline=<dimension or coordinate or default> (default 0pt)
Normally, the lower end of the picture is put on the baseline of the surrounding text. For example, when you give the code \tikz\draw(0,0)circle(.5ex);, PGF will find out that the lower end of the picture is at -.5ex and that the upper end is at .5ex. Then, the lower end will be put on the baseline […].
Using this option, you can specify that the picture should be raised or lowered such that the height <dimension> is on the baseline. […]
This option is often useful for “inlined” graphics […].
Instead of a <dimension> you can also provide a coordinate in parentheses. Then the effect is to put the baseline on the y-coordinate that the give[n] <coordinate> has at the end of the picture. This means that, at the end of the picture, the <coordinate> is evaluated and then the baseline is set to the y -coordinate of the resulting point. This makes it easy to reference the y-coordinate of, say, the base line of nodes.
Use the baseline option and you can align the TikZ picture according to the baseline of one of the containing nodes (here: b or d).
\begin{tikzpicture}[baseline=(b.base)]
A general solution would be to use the topmost point of the TikZ picture minus 1em, which aligns the topmost point with the top of the current line; this incidentally works great for standard rectangular nodes like b or d in your example.
\begin{tikzpicture}[baseline={([yshift=-1em] current bounding box.north)}]
In the following code I have replaced \tikzstyle by \tikzset and added the styles
• enum,
• no enum, and
• base at.
I also changed the parts definition slightly so that every TikZ picture in it is automatically aligned according to the enum style.
See the examples in the code and the output for how this affects the outcome and how you can change it for particular TikZ pictures.
## Code
\documentclass{article}
\usepackage{tikz}
\usepackage{enumitem}
\usetikzlibrary{positioning}
\tikzset{
dot/.style={draw=black, fill=white, circle, inner sep=2pt},
enum/.style={baseline={([yshift=-1em] current bounding box.north)}},
base at/.style={baseline={(#1.base)}},
no enum/.style={baseline=default},
}
\newenvironment{parts}
{\tikzset{every picture/.append style={enum}}\begin{enumerate}[label=\alph*)]}
{\end{enumerate}}
\begin{document}
\begin{parts}
\item
\begin{tikzpicture}
\node (a) {a};
\node (b) [above=1cm of a] {b};
\node (c) [right=1cm of a] {c};
\node (d) [above=1cm of c] {d};
\path (a) edge (b)
edge (d)
(d) edge (c);
\end{tikzpicture}
\item \tikz[no enum] \draw (0,0) circle (.5ex);
\item \tikz \draw (0,0) circle (.5ex);
\item \tikz[base at=a] \node[circle,draw] (a) {X};
\item \tikz \node[circle,draw] (a) {X};
\end{parts}
\end{document}
-
# If $i=0.09$, find $n$ and the amount of final payment.
A fund of $\$500$ is to be accumulated by $n$ annual payments of $\$100$, plus a final payment as small as possible made one year after the last regular payment. If $i = 0.09$, find $n$ and the amount of the final payment.
I have gotten as far as:
$$500 = 100 \times (1.09)^{n} + P(1.09)^{n + 1},$$ $$\frac{500 - 100(1.09)^{n}}{(1.09)^{n + 1}} = P.$$
-
You pay $100$. After a year, that has grown to $109$, and you pay another $100$, making a balance of $209$. After another year, that $209$ has grown to ... how much? And you pay another $100$, making a balance of ... how much? And you do that one more year, and how much do you have? And what happens then?
Let $x_k$ be the amount at year $k$, with $x_0 = 100$. Then we have $x_{k+1} = (1+r) x_k+100$, with $r=0.09$. A few terms suggest a general solution: $x_0 = 100$, $x_1 = 100(1+r)+100$, $x_2 = 100(1+r)^2+100(1+r)+100, \ldots, x_{n-1} = 100(1+r)^{n-1}+\dots+100$.
After the $n$th payment another year elapses before a final payment of $F_n$ is made, resulting in a total of $500$, i.e., \begin{eqnarray} 500 &=& 100(1+r)^{n}+...+100(1+r) + F_n \\ & = & 100(1+r)\frac{1-(1+r)^n}{1-(1+r)} +F_n \\ & = & 100(1+\frac{1}{r})((1+r)^n-1) + F_n \end{eqnarray} Rearranging gives $F_n = 500-100(1+\frac{1}{r})((1+r)^n-1)$. Note that $F_n$ is decreasing from $F_0 = 500$, so we can compute $\max \{ n \mid F_n \geq 0 \}$ to find the answer.
(As a sanity check, with $r=0$ we would have $n=5$ and $F_5 = 0$, so we expect $n < 5$.)
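A quick numeric check of this closed form (my own sketch; the search bound of 10 is arbitrary but safe, since $F_n$ is decreasing):

```python
r = 0.09

def final_payment(n):
    # F_n = 500 - 100*(1 + 1/r)*((1 + r)^n - 1)
    return 500 - 100 * (1 + 1 / r) * ((1 + r) ** n - 1)

# largest n with a non-negative final payment
n = max(k for k in range(1, 10) if final_payment(k) >= 0)
print(n, round(final_payment(n), 2))  # 4 1.53
```

So there are $n=4$ regular payments and a final payment of about $\$1.53$, consistent with the sanity check that $n < 5$.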
# A cyclic group of order “rs” where (r, s) = 1
I was given this question and I'm not really sure how to approach this...
Assume $(r,s) = 1$. Prove that If $G = \langle x\rangle$ has order $rs$, then $x = yz$, where $y$ has order $r$, $z$ has order $s$, and $y$ and $z$ commute; also prove that the factors $y$ and $z$ are unique.
-
What does unique mean ? – Amr Nov 17 '12 at 10:10
@Amr: Do you think he is trying to prove $y$ and $z$ are unique? Because the sentence has not looked like a a question! – Babak S. Nov 17 '12 at 10:12
I don't know. But if he was trying to do so then what about $x=ya^{-1}az$ – Amr Nov 17 '12 at 10:15
Yes, sorry, it wasn't very clear what the question is... I added "prove" in the correct place. – amirbd89 Nov 17 '12 at 10:24
The question statement lacks clarity. First, there is the word "cyclic" in the header, but it is doesn't appear in the question statement itself. So it's confusing: is $G$ cyclic after all? Second, $x$ is undefined. Do we have to prove that for all $x$ in $G$ there exists a unique pair $(y,z)$ with the desired properties? – Dan Shved Nov 17 '12 at 10:30
From $(r,s)=1$ we find integers $n,m$ with $nr+ms=1$. Let $y=x^{ms}$, $z=x^{nr}$. Then $yz=x^{ms+nr}=x$. The fact that $x$ and $y$ commute is trivial because the cyclic group $G$ is abelian. Also, we have $y^r=(x^m)^{rs}=1$, $z^s=(x^n)^{rs}=1$, hence the orders are at least divisors of $r$ and $s$, respectively. If the actual orders are $r'|r$ and $s'|s$, then $x^{r's'}=y^{r's'}z^{r's'}=1$, hence $r's'$ is a positive multiple of $rs$, hence at least $rs$. We conclude that $r'=r$, $s'=s$. Finally, assume we have another solution $x=y'z'$ with the required properties. Then $z^r=y^rz^r=x^r = y'^rz'^r=z'^r$ implies $z=z^{nr+ms}={z^r}^n{z^s}^m={z^r}^n={z'^r}^n={z'^r}^n{z'^s}^m=z'^{nr+ms}=z'$ and similarly $y=y'$.
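As a concrete check of this construction, take the additive group $\Bbb Z/15\Bbb Z$, so $r=3$, $s=5$, and "powers" of $x$ become multiples (the choices $x=1$, $n=2$, $m=-1$ are mine):

```python
r, s = 3, 5          # coprime: 2*3 + (-1)*5 = 1
n, m = 2, -1
assert n * r + m * s == 1

mod = r * s
x = 1                                # generator of Z/15Z, written additively
y = (m * s * x) % mod                # the element "x^{ms}"
z = (n * r * x) % mod                # the element "x^{nr}"

def order(g):
    """Order of g in the additive group Z/mod Z."""
    k, acc = 1, g
    while acc % mod != 0:
        k, acc = k + 1, acc + g
    return k

print(y, z, (y + z) % mod, order(y), order(z))  # 10 6 1 3 5
```

So $y=10$ has order $3$, $z=6$ has order $5$, and $y+z \equiv 1 = x$, exactly as the proof predicts.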
-
The cyclic group $G$ is isomorphic to the additive group of $\Bbb Z/rs\Bbb Z$, so this is just the Chinese remainder theorem for the coprime moduli $r$ and $s$ (in the statement for rings $\Bbb Z/n\Bbb Z$, but only considering their additive structure).
Concretely, the elements of $G$ of order dividing $r$ are generated by $x^s$ and vice versa, and among the $r$ elements of order dividing $r$ there is one, say $y$, such that $z=y^{-1}x$ has order dividing $s$. One has $x=yz$ and $y,z$ commute (they are both in the group $G$ generated by $x$); if either the order of $y$ were a strict divisor of $r$ or the order of $z$ were a strict divisor of $s$ then it would follows that the order of $x$ is a strict divisor of $rs$, which is false, so the orders of $y,z$ are respectively exactly $r,s$. Concretely you can find $y,z$ by writing $1=\gcd(r,s)=ar+bs$ using the extended Euclidean algorithm; then $y=x^{bs}$ and $z=x^{ar}$.
-
Here is a hint: you can set $y=x^{sn}$ and $z=x^{rm}$. To find the appropriate $n$ and $m$ you can use Bezout's identity.
-
One-way accumulators are built upon a (quasi)-commutative one-way function. With quasi-commutativity, I refer to the following property:
For $f : X \times Y \to X$, it is true that $f(f(x, y_1), y_2) = f(f(x, y_2), y_1)$.
Although accumulators seem like a very useful cryptographic building block, I don't see them often in practical applications (in fact I can only think of Zerocoin). I suspect that this is because the scheme has certain disadvantages.
I wonder what these disadvantages are (if this is indeed the reason): is the function $f$ weak in terms of eg. collision-resistance, is it not efficient enough...?
The accumulators that I know of (note: I don't really know a lot about them, so this doesn't say much) seem to be based on number theory (unlike conventional hash functions). This makes them a lot slower.
For example, Wikipedia describes the following function:
One trivial example is how large composite numbers accumulate their prime factors, as it's currently impractical to factor the composite number, but relatively easy to find a product and therefore check if a specific prime is one of the factors. New members may be added or subtracted to the set of factors simply by multiplying or factoring out the number respectively. More practical accumulators use a quasi-commutative hash function where the size (number of bits) of the accumulator does not grow with the number of members.
As they mention, this is clearly not practical because of the size of the output values.
Another example I have seen is $f(x, y) = x^y \pmod n$ where $n = pq$ (with $p$ and $q$ both safe primes). Even though this doesn't have the problem of the Wikipedia example, it is still not very efficient (even though you can do the exponentiations using the square-and-multiply method).
An advantage of a cryptographic accumulator, and actually the reason to use them, is that due to quasi-commutativity you can compute witnesses for membership of values in the accumulator, where the accumulator and the witnesses are of constant size.
Say you have a set $Y=\{y_1,y_2,y_3\}$ and compute the accumulator as $acc=f(f(f(x,y_1),y_2),y_3)$. If you want to compute a witness for a value, say $y_2$, then by quasi-commutativity the witness is $wit_{y_2} = f(f(x,y_1),y_3)$, and given $y_2$ and $wit_{y_2}$ you can check whether $y_2$ is in the accumulator $acc$ by checking whether $acc=f(wit_{y_2},y_2)$ holds.
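As a toy numeric illustration of this witness check, using the RSA-style map $f(x,y)=x^y \bmod n$ mentioned in the question (the parameters are far too small to be secure and are chosen purely for this sketch):

```python
# Toy parameters -- far too small to be secure.
p, q = 347, 359                      # small-prime stand-ins for safe primes
n = p * q
g = 2                                # the starting value x

def f(x, y):
    """Quasi-commutative candidate: f(f(x, y1), y2) == f(f(x, y2), y1)."""
    return pow(x, y, n)

ys = [3, 5, 11]                      # accumulate only primes (see the note below)

acc = g                              # acc = f(f(f(g, y1), y2), y3); order is irrelevant
for y in ys:
    acc = f(acc, y)

wit = g                              # witness for y2 = 5: accumulate everything except 5
for y in ys:
    if y != 5:
        wit = f(wit, y)

assert f(wit, 5) == acc              # membership check passes for an accumulated value
assert f(wit, 7) != acc              # and fails for a value that was not accumulated
```

Because exponents simply multiply, the witness is "the accumulator computed without $y_2$", and one extra exponentiation closes the check.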
Furthermore, existing accumulator schemes (CL02, C+09, N05) come with zero-knowledge proofs of accumulator membership (you do not have to reveal the value $y_2$ and the witness $wit_{y_2}$ directly, but you provide a zero-knowledge proof of knowledge of such a pair - which makes them attractive for privacy-preserving applications). Such accumulators are typically also dynamic, i.e., allow update of witnesses in the public if the accumulator is updated. Furthermore, there are also so called universal accumulators, which also allow to produce witnesses for non-membership of a value in the accumulated set (see A+09 or L+07).
All known efficient accumulators are based on number-theoretic assumptions, but I would not say that they are inefficient. Note that in your last RSA example, the membership check requires one exponentiation, which is not really very expensive.
is the function f weak in terms of eg. collision-resistance, is it not efficient enough...?
For a secure accumulator one requires collision-freeness, i.e., it is computationally infeasible to find a witness for some value that is not accumulated in the accumulator. For RSA accumulators this requires that you only accumulate primes (so you have to map the values to be accumulated to primes with some deterministic algorithm). Otherwise, you could factor a value into two factors, exponentiate one onto your witness and provide the second as the value to be checked, and the check would succeed. This is ruled out if you take primes. There are, however, other secure pairing-based accumulators that do not suffer from this problem.
Accumulators are used for various purposes, such as timestamping (the original application), membership testing, distributed signatures, redactable and sanitizable signatures as well as for revocation in group signatures and anonymous credential systems.
There are constructions for accumulators based on Bloom filters (see Nyberg, Fast accumulated hashing, FSE 1996), but they are rather impractical (though they do not rely on number-theoretic assumptions).
# How do you calculate conditional expectation for a single continuous r.v.?
So if you have a density $f_X(x)$ and you observed $X=a$, how do you calculate $E[X \mid X=a] = \int_{-\infty}^{\infty} x\, f_X(x \mid X=a)\, dx$, or more specifically, $f_X(x \mid X=a)$?
I am having trouble when using Bayes or finding $F_X(x \mid X=a)$ because I end up with $P(X=a)$, which is just a constant. Is there a simple example somewhere? Searching only leads me to bivariate distributions.
-
Unless I misunderstood your problem: your conditional density is just a Dirac delta function in $x=a$:
$$f_x(x|x=a) =\delta(x-a)$$
Hence, $E(x | x=a) = a$ ... as intuition says.
If you want to apply the definition, you can use a limiting procedure (because, for a continuous variable, the event $X=a$ actually has zero probability), considering instead the event that $X$ takes a value in a neighborhood of $a$ of length $dx$. But that's not necessary. You already know that the joint probability (the one that goes into the numerator) is zero if $x\ne a$. Hence, as the conditional density must integrate to one, it must be a Dirac delta function.
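The limiting procedure mentioned here can be made explicit (a standard computation, filled in by way of example): condition on $X \in [a, a+h]$ and let $h \to 0$:

```latex
E[X \mid a \le X \le a+h]
  = \frac{\int_a^{a+h} x\, f_X(x)\,dx}{\int_a^{a+h} f_X(x)\,dx}
  \xrightarrow[h \to 0]{} \frac{a\, f_X(a)\, h}{f_X(a)\, h} = a
```

(valid wherever $f_X$ is continuous and $f_X(a) > 0$).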
0
103 Dec 24, 2013 at 17:45
just imagine if you wrote code so a single pass of the whole code is an update. anyway, you introduce variables and code, all in a nest of conditions; whatever's open is what runs. you close conditions and you don't run code, you open conditions to run code. this way you can do anything you want in a single function, and you get ultimate reuse if you always nest the right way.
it's a lot like a program that conditionally hacks itself… of course the amount of conditions that never come true slows you down, but other than that, it could be a sort of non-efficient programming style that lets you reuse code better.
i wonder what could improve the false-firing conditions, but keep the pluggable nature.
it would be good for non-linear adventure games, where there are heaps of dependencies on when things happen in the game, items the character has, things he's done, so you could have complete state switches happening all the time.
#### 6 Replies
0
103 Dec 25, 2013 at 07:52
I’m not following. Can you give a pseudocode example?
0
103 Dec 25, 2013 at 08:44
remember, it's just continually cycling the main. what's not here is the actual spawning of the instances; i'm thinking of how i do that, it may be a hand-placed scene, just say for now.
so, pong would be this -> (note: when you add a variable, it won't run twice, only once… if a condition closes and opens again, then it'll run again; code only runs when its condition opens.)
and you can see, i've reused the coordinate variable for every instance. (the cool thing)
main
{
    run once -> add x coordinate variable
    run once -> add y coordinate variable
    run once -> add instance proximity detector

    if (im a bat)
    {
        if (key left)  x--
        if (key right) x++
    }

    if (im a ball)
    {
        move ball with angle condition
        run proximity detector
        if (detect block proximity, using xy coordinates)
        {
            destroy block
            mirror angle
        }
        if (detect bat proximity, using xy coordinates)
        {
            score++
            mirror angle
        }
    }

    if (im a block)
    {
        draw block at coordinate
    }
}
so not very exciting, cause it's just pong, but the strange thing you can do is now turn a ball into a block, or a block into a bat or anything, if you wanted, during the game.
but it has to check what it is every cycle, and that's the thing i want to fix; if you could fix that, then i'd really think this way has its definite use. especially for the non-linear adventure game… but that's where the speed issue will manifest itself.
0
103 Dec 25, 2013 at 09:03
of course an adventure game, just imagine you create an instance from a set of conditions, not just one. and they parent each other, but there is intersecting possible, from anding two types.
so just imagine you have.
elf in forest playing guitar with fairy then fairy dances… that would be possible to insert as a special dependency.
1
151 Dec 26, 2013 at 11:04
Just think about a real game, say 100 different types of MOB’s.
Just think of what an incredible mess it would be. At least 100 if statements, and if you wanted to change something for one MOB you could accidentally change all of them.
Code maintenance would be a nightmare, debugging would be a nightmare. I see no advantages at all for this approach and hundreds of disadvantages.
You can do any kind of morphing you want with much cleaner code.
MOB (Moveable Object Block) old coders should remember them
0
103 Dec 27, 2013 at 15:06
hmmm, well, a mob might be better, but what are they exactly?
0
151 Dec 28, 2013 at 08:43
A MOB is just a Moveable Object Block, what most people now would call sprites I guess.
It was used in MOS Technology’s graphics chip literature (data sheets, etc.) However, Commodore, the main user of MOS chips and the owner of MOS for most of the chip maker’s lifetime, applied the common term “sprite”, except for the Amiga line of home computers, where MOB was the preferred term.
# Tag Info
2
With Biber we have more or less full control of the label for the alphabetic style. The relevant command is \DeclareLabelalphaTemplate (see §4.5.4 Labels, pp. 163-168 of the biblatex documentation). For your purposes \DeclareLabelalphaTemplate{ \labelelement{ \field[final]{shorthand} \field{label} ...
0
Here is also how to do this by resetting \blx@maxcitenames locally. Using tracing, one can see that: ... \blx@resetdata ->\let \blx@saved@do \do \let \do \blx@imc@clearname \abx@donames \let \do \blx@imc@clearlist \abx@dolists \let \do \blx@imc@clearfield \abx@dofields \do {options}\do {labeltitle}\do {labelyear}\do {labelmonth}\do {labelday}\do ...
1
To get proper code markup: \begin{filecontents}{\jobname.bib} @book{bookentryA, editor={smith and Wesson}, langid={english} } @book{bookentryB, editor={smith and Wesson}, } @book{bookentry, editor={smith and Wesson}, langid={ngerman} } \end{filecontents} ...
3
The friggeri-cv class does not seem to provide starred versions of all sectioning commands, but the standard subbibliography heading uses \subsection*. If we redefine subbibliography headings to use \subsection, all is fine \defbibheading{subbibliography}[\refname]{\subsection{#1}}
0
There are several problems with your MWE. First and foremost, in the .bib file, the field for keywords is called keywords, not keyword, note the s. A BiblistFilter can only be used as the mandatory argument to \printbiblist and then needs a driver. This is all predefined for shorthand, for other uses you will have to provide your own (as you did for ...
1
Use the loading option: maxcitenames=50, say (if you have no more than 50 authors in your bibliography). If you want this specification to be valid also for the bibliography, simply use maxnames=50. For fullnames, firstinits=false.
3
First of all, the specific example can be better solved using the correct spelling for the name, which is Goethe, but I'll assume that this choice was deliberate in order not to mention any specific real case. The correct syntax with BibTeX is G{\"{o}}the There's nothing you can do about it, except fixing the entries. On the other hand, if you use ...
1
The biblatex way to handle web sites and other electronic references is to use @online{aaa, author = {Author 1}, title = {A website}, url = {www.website.com/%20/%20abc}, } If one uses biber as backend, then it is possible to remap dynamically the entry types and fields.
1
I solved this after bit of searching. The change I made to make it work was to replace the backend from biber to bibtex. Changed code: \usepackage[backend=bibtex,style=authoryear-icomp]{biblatex} \ExecuteBibliographyOptions{citetracker=true,sorting=nyt} \bibliography{Thesis_Expose_biblio} Reran in TexShop: Typeset pdflatexmk Typeset bibtex Typeset latex ...
1
According to Guido's suggestion I use the following modified code now (of course it also "initializes" all other prename- and surname entries, but for my purpose it works): %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %Replace my name in the bib by initials on the fly...%%%%%%%%%%%%%%%%%%%% ...
1
OK, I found the solution myself. This seems to be connected to the uniquelist=false setting. More details can be found here Set limit to one author when using "et al." in biblatex However, it seems weird to me, because in the example I gave, the keys would be unique (given the different publication year and the "et al." - where the latter might ...
3
We can define a new command for that: \AtEveryBibitemNextBibOnly (what a name), it combines the best of \AtEveryBibitem and \AtNextBibliography. \makeatletter \newrobustcmd*{\AtEveryBibitemNextBibOnly}{% \ifundef\blx@hook@bibitem@save {\global\let\blx@hook@bibitem@save\blx@hook@bibitem ...
5
\documentclass{article} \usepackage[utf8]{inputenc} \usepackage[backend=biber]{biblatex} \addbibresource{biblatex-examples.bib} \newbool{clearurl} \AtEveryBibitem{% \ifbool{clearurl} {% \clearfield{urlday}% \clearfield{urlmonth}% \clearfield{urlyear}% }{}% } \begin{document} Citing some stuff here: first \cite{ctan} \booltrue{clearurl} ...
3
Does this work for you? % What's new under the sun \renewcommand*{\bibnamedash}{% \ifthenelse{\ifuseauthor\AND\NOT\ifnameundef{author}} {\hspace{\bibhang}--} {\hspace{\bibhang}--} } % What you need to correct \DeclareBibliographyDriver{article}{% \usebibmacro{bibindex}% \usebibmacro{begentry}% ...
1
After reading the sources of biblatex-authoryear I found that \usebibmacro{bbx:dashcheck} is the solution \renewbibmacro*{begentry}{% \ifnameundef{shortauthor} {} {\usebibmacro{bbx:dashcheck}{} {\printnames{shortauthor}% \addspace\textendash\space}} }
2
You can remove (disable) particular fields, lists or names with \AtEveryBibitem{\clearfield{pages}} Use \clear<type>{<typename>} where <type> stands for list or name and the <typename> is the name of the bibtex field you want to disable.
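The `\AtEveryBibitem{\clearfield{pages}}` snippet above drops into a complete document; a minimal sketch, assuming biber as backend and the biblatex-examples.bib file that ships with biblatex:

```latex
\documentclass{article}
\usepackage[backend=biber]{biblatex}
\addbibresource{biblatex-examples.bib}
% Drop the pages field from every printed bibliography entry
\AtEveryBibitem{\clearfield{pages}}
\begin{document}
\cite{companion}
\printbibliography
\end{document}
```

The same pattern works with \clearlist or \clearname for list- and name-type fields, as described above.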
3
The problem is that there is a # in the argument of the howpublished field. In biblatex, howpublished is considered plain text and not a literal. The best solution would be to put the URL in the URL field and to use the biblatex @online entry type: @online{when, AUTHOR = "xxxx x. xxx", TITLE = "", URL = "link#anchor", URLDATE = "2015-02-02", } ...
2
If one does not want to use a sourcemap definition, it is possible to nullify the values of the shorthand (or other fields) using \DeclareFieldInputHandler Here the value of the field is read from the .bbl file (so after it has been generated by biber) \DeclareFieldInputHandler{shorthand}{\def\NewValue{}}
4
If you just want to get rid of the shorthand to make sure the label is just a normal numeric one, we can go with \DeclareSourcemap{ \maps[datatype=bibtex]{ \map{ \step[fieldset=shorthand, null] } } } Thus we can make sure the shorthand is ignored by Biber before it comes to label generation. (An \AtEveryCitekey approach cannot work ...
1
As far as I'm aware, there is no such a style as IEEEtranSAN which is compatible with natbib. However, following the recommendation and explanations by @cfr, I've changed to biblatex and biblatex-ieee. The minimum changes are: \documentclass[a4paper]{report} %\usepackage[numbers]{natbib} ...
1
Well, thanks to Proper way to include unnumbered chapters in a per-chapter bibliography using biblatex, it seems I got most of the problems solved, except one - if a single reference (here doody) is used twice: then the first time it is formatted as a bibliographic entry it is larger - and the second time it is different, smaller. Since I got this through ...
2
It's not a bug, it's a feature! Apparently, natbib's starred commands behave like this. In blx-natbib.def, the configuration file that is loaded if you issue natbib=true, you will find (amongst others) \newrobustcmd*{\citet}{% \@ifstar {\AtNextCite{\defcounter{maxnames}{999}}% \textcite} {\textcite}} \newrobustcmd*{\citep}{% \@ifstar ...
4
Just the line \DeclareNameAlias{sortname}{last-first} gives you almost what you want. biblatex prefers the order "first last" in citations though and will go through quite some length to achieve this (it adds a \DeclareNameAlias{sortname}{default} here and there). To prevent this, go with \renewbibmacro*{cite:full}{% \usebibmacro{cite:full:citepages}% ...
3
You can also give the langid field in addition to setting autolang to a sensible value. Unfortunately, the mapping to the language is a bit picky and does not work with XeLaTeX (an utf8 aware engine) out of the box. \begin{filecontents}{\jobname.bib} @online{someotherentry, url={texwelt.de}, urldate={2015-05-11}, author={a ...
4
This is most definitely not a complete answer but you could try adding the babel=other option when loading biblatex and supplying the Russian-language entries with something like hyphenation = russian. Whether there is a way of automating this latter operation is still an open question.
3
Essentially, this is not possible because biblatex does not parse the author strings at all. Instead, biber or bibtex does the parsing. biblatex does not read your .bib file. It writes the .bcf file and reads the .bbl file. An external programme - biber or bibtex - parses the .bcf and .bib files and produces the .bbl file. biblatex doesn't even read the ...
0
Try setting \setlength{\emergencystretch}{3em} before printing the bibliography, according to this answer.
1
I would argue that a "last modified" field for websites is not really necessary. The date field will contain this information. If you cite a book you always use the year of your print version for the year field, not the year of some other edition (be it the first or just one you like, if you insist on giving information like this there is origdate). You can ...
1
Do not cite the second time but simply reinsert the footnotemark from the first time: \documentclass{beamer} \usepackage{lmodern} \usepackage[style=authortitle,backend=bibtex]{biblatex} \addbibresource{lit.bib} \begin{document} \begin{frame} Here is text\footfullcite{Hillas}. Here is text\footnotemark[1]. \end{frame} \end{document} ...
1
While this is way too late, it might still help someone: if you are actually looking for emulating the abbrv style, then there is this simple solution of adding style=trad-abbrv to the package options (before any other).
3
After some gray hairs with TikZ, all boils down to this simple example \documentclass{article} \usepackage[backend=bibtex]{biblatex} \addbibresource{biblatex-examples.bib} \begin{document} \cite{ctan,companion} \cite{aristotle:physics} \begin{center} \printbibliography[heading=none] \end{center} \end{document} A list within a list. The behaviour ...
1
You've written the authors' names incorrectly and Bibtex is confused. If you put the surname first followed by the initial you have to put a comma behind each surname: author={Doe1, J. and Doe2, K. and Doe3, L.}, Alternatively you could put initials first and then surnames, in which case no commas are needed: author={J. Doe1 and K. Doe2 and L. Doe3},
3
It is enough to modify the value of \finalandcomma: \documentclass{article} \usepackage[english]{babel} \usepackage{filecontents} \usepackage[backend=bibtex,style=ieee]{biblatex} \begin{filecontents}{\jobname.bib} @article{doe2015, author={Doe1, J. and Doe2 K. and Doe3 L.}, title={Why I get this extra comma before the 'and'?}, year = 2015 } ...
1
Here it is: \documentclass[a4paper, 12pt]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{lmodern} \usepackage[french,]{babel} \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @ARTICLE{Monnier_democ_1999, author = {Monnier, Raymonde}, title = {Démocratie et Révolution française}, journal = {Mots}, year = ...
5
The implementation in biblatex-ieee follows as far as possible that in ieeetran. The latter describes itself as being officially correct, so this is a reasonable reference point. On the specific point about the 'Oxford comma' here, if you look at texdoc ieeetran and for example ref. 20 you will see C. Barratt, M. C. Grant, and D. Carlisle. with a comma. ...
3
You can introduce a subtype. \begin{filecontents}{\jobname.bib} @TECHREPORT{MyReport2015, author = {Meyer, B. and Miller, J.}, title = {{Some Great Report}}, institution = {The Great Institution}, year = {2015}, type = {Total Cool Reports}, entrysubtype={techreport}% <--- } \end{filecontents} ...
2
The basic solution here is to filter with this: type=thesis. So: \printbibliography[type=thesis, heading=subbibliography, title={PhD Thesis}] Explanation: According to the manual (section 2.1.2), the entrytype @phdthesis is aliased to @thesis in biblatex. It will automatically provide a note like "PhD Thesis" in the standard styles. However, it is ...
3
Here is a solution, overwriting the ieee cite style: \documentclass{article} \usepackage[style=ieee, citestyle=numeric-comp]{biblatex} \addbibresource{biblatex-examples.bib} \renewcommand{\multicitedelim}{\addcomma\space} \begin{document} Text text text \cites{knuth:ct:c, companion, knuth:ct:d, knuth:ct:a} More text \cites{ knuth:ct:b, knuth:ct:a, ...
0
You need to collect a few pieces of information for your document. The first is to make sure that you have enabled showing the arXiv ID for documents. This should be on by default (I think), but if not, you can follow this question to enable it: http://tex.stackexchange.com/a/180216/32374 Then, export whatever documents you want (or perhaps the whole ...
1
\documentclass{article} \usepackage[british]{babel} \usepackage{biblatex} \addbibresource{biblatex-examples.bib}% \DefineBibliographyStrings{english}{% urlseen = {Accessed} } \DefineBibliographyStrings{english}{% urlseen = {Accessed}, url = {[Online]. Available at} } \DeclareFieldFormat{url}{\bibstring{url}\space\url{#1}} \begin{document} ...
1
I guess the differences for canadian and british are small to non-existent. You could create an lbx file that inherits british and makes needed changes, or just map the canadian language to use the british biblatex-chicago localization file. \documentclass{article} \usepackage[canadian]{babel} \usepackage{biblatex-chicago} ...
3
Here is a solution, if I've well understood what you want: \documentclass[12pt,twoside,a4paper, french]{article} \usepackage[utf8]{inputenc} \usepackage[T1]{fontenc} \usepackage{babel} \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @article{test, author = {Author}, title = {Title}, journaltitle = {Journal}, year = 2015, issue ...
1
A difficult question. Looking a bit around in citation manuals and descriptions, one finds: Volume number: a continuous number which is expected to rise with every issue/edition of a journal. So a monthly journal will have volume number 120 after 10 years. Issue number: A number for one specific article in one issue, the fourth article is "issue 4". ...
8
You can use author to denote the author of the specific chapter you are referring to and bookauthor for the author of the book. On the other hand, biblatex also provides an introduction field that you can set. But I guess you will prefer the first method. \begin{filecontents}{\jobname.bib} @inbook{intro, editor={Mickey Mouse}, ...
5
\DefineBibliographyStrings stores definitions for known strings for a language. If you want to declare a completely new string you must declare it first: \documentclass{article} \usepackage{biblatex} \NewBibliographyString{teststring} \DefineBibliographyStrings{english}{% teststring = {Test string}, } \begin{document} \parbox{2pt}{\hspace*{1pt}Testing ...
3
I want to provide a bit more detail for those coming to this later, not least because I think describing this as a Zotero problem is a bit misleading--Zotero is doing exactly what it should. It's a data-entry problem. Particularly when importing from low-data-quality sources like Amazon, users need to clean up data after import. That's true not just when ...
1
This appears to me a Zotero problem rather than a Biblatex problem. I found through the Zotero Forums that the five digit number is caused by a field in Zotero called "Extra" and the 1 Edition instead of 1st Edition are caused by the import from Amazon. The workaround will be to manually remove the 'Extra' field in Zotero and correct the editions for the ...
2
You can try this redefinition \renewbibmacro*{cite}{% \usebibmacro{cite:citepages}% \global\togglefalse{cbx:loccit}% \bibhypertarget{cite\the\value{instcount}}{% \iffieldundef{shorthand} {\ifciteseen {\ifciteibid {\usebibmacro{cite:ibid}} {\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}} ...
2
I would suggest a redefinition along the lines of \renewbibmacro*{byeditor+others}{% \ifnameundef{editor} {} {\printtext[parens]{\usebibmacro{byeditor+othersstrg}% \setunit{\addspace}% \printnames[byeditor]{editor}}% \clearname{editor}% \newunit}% \usebibmacro{byeditorx}% \usebibmacro{bytranslator+others}} Where we added ...
Top 50 recent answers are included
# Publications associated with Visible and Infrared Instruments
## High angular resolution ALMA images of dust and molecules in the SN 1987A ejecta
Astrophysical Journal (American Astronomical Society) 886 (2019) 51
P Cigan, M Matsuura, HL Gomez, R Indebetouw, P Roche
We present high angular resolution (~80 mas) ALMA continuum images of the SN 1987A system, together with CO J = 2 $\to$ 1, J = 6 $\to$ 5, and SiO J = 5 $\to$ 4 to J = 7 $\to$ 6 images, which clearly resolve the ejecta (dust continuum and molecules) and ring (synchrotron continuum) components. Dust in the ejecta is asymmetric and clumpy, and overall the dust fills the spatial void seen in Hα images, filling that region with material from heavier elements. The dust clumps generally fill the space where CO J = 6 $\to$ 5 is fainter, tentatively indicating that these dust clumps and CO are locationally and chemically linked. In these regions, carbonaceous dust grains might have formed after dissociation of CO. The dust grains would have cooled by radiation, and subsequent collisions of grains with gas would also cool the gas, suppressing the CO J = 6 $\to$ 5 intensity. The data show a dust peak spatially coincident with the molecular hole seen in previous ALMA CO J = 2 $\to$ 1 and SiO J = 5 $\to$ 4 images. That dust peak, combined with CO and SiO line spectra, suggests that the dust and gas could be at higher temperatures than the surrounding material, though higher density cannot be totally excluded. One of the possibilities is that a compact source provides additional heat at that location. Fits to the far-infrared–millimeter spectral energy distribution give ejecta dust temperatures of 18–23 K. We revise the ejecta dust mass to M dust = 0.2–0.4 ${M}_{\odot }$ for carbon or silicate grains, or a maximum of <0.7 ${M}_{\odot }$ for a mixture of grain species, using the predicted nucleosynthesis yields as an upper limit.
# Natural deduction proof of $p \rightarrow q \vdash \lnot(p \land \lnot q)$
So yeah, the entire question is pretty much in the title. $$p \rightarrow q \vdash \lnot(p \land \lnot q)$$
I've been able to derive the reverse, but I don't know how to logically go from the premise to the conclusion using natural deduction alone. I can see that the two formulas are equivalent using transformations.
These are the rules I'm allowed to use:
$1.$ $p \rightarrow q$ -- (Premise)
$2.$ $p \wedge \neg q$ -- (Assumption: the contrary of what is to be proved)
$3.$ $p$ -- ($\wedge E$ on $2.$)
$4.$ $\neg q$ -- ($\wedge E$ on $2.$)
$5.$ $q$ -- (Modus ponens on $1.$ and $3.$)
$6.$ $\bot$ -- ($\bot$ introduction: $4.$ and $5.$ contradict)
$7.$ $\neg(p \wedge \neg q)$ -- ($\neg$ introduction: the assumption $2.$ leads to a contradiction)
• Fantastic, thank you. I knew I need to show $p \land \lnot q$ is false based on steps 3, 4 and 5, but I just couldn't think of the process. – Shiny_and_Chrome Jan 12 '17 at 6:06
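The derivation above can also be checked mechanically; a minimal sketch in Lean 4, where the proof term mirrors the natural-deduction steps (assume the conjunction, apply modus ponens to its first component, contradict its second):

```lean
-- From h : p → q, assume p ∧ ¬q; h applied to its first component gives q,
-- which the second component ¬q refutes, yielding the contradiction.
example (p q : Prop) (h : p → q) : ¬(p ∧ ¬q) :=
  fun hpq => hpq.2 (h hpq.1)
```

Here `¬(p ∧ ¬q)` unfolds to `(p ∧ ¬q) → False`, so the lambda is exactly the discharge of assumption 2.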
# The Closure of a Convex Set in a TVS
Proposition 1: Let $E$ be a topological vector space. If $A \subseteq E$ is convex then $\overline{A}$ is convex.
• Proof: Let $A \subseteq E$ be convex. Then, for all $x, y \in A$ and for all $\lambda, \mu \geq 0$ with $\lambda + \mu = 1$, we have that $\lambda x + \mu y \in A$.
• Let $a, b \in \overline{A}$, $\lambda, \mu \geq 0$ with $\lambda + \mu = 1$, and let $U$ be a neighbourhood of the origin. Since $E$ is a topological vector space, there exists a balanced neighbourhood of the origin, $V$, such that $V + V \subseteq U$ (see the proposition on the Bases of Neighbourhoods for a Point in a Topological Vector Space page). Then $a + V$ is a neighbourhood of $a$ and $b + V$ is a neighbourhood of $b$, so since $a, b \in \overline{A}$ we have that $A \cap (a + V) \neq \emptyset$ and $A \cap (b + V) \neq \emptyset$.
• So take $x \in A \cap (a + V)$ and $y \in A \cap (b + V)$. Then:
(1)
\begin{align} \quad \lambda x \in \lambda (A \cap (a + V)) = (\lambda A) \cap (\lambda a + \lambda V) \quad \mathrm{and} \quad \mu y \in \mu (A \cap (b + V)) = (\mu A) \cap (\mu b + \mu V) \end{align}
• Therefore:
(2)
\begin{align} \quad \lambda x + \mu y \in (\lambda A + \mu A) \cap (\lambda a + \mu b + \lambda V + \mu V) \end{align}
• Since $V$ is balanced and since $\lambda \leq \lambda + \mu = 1$ we have that $\lambda V \subseteq V$. Similarly, $\mu V \subseteq V$. Also, $\lambda A + \mu A \subseteq A$, since if $z \in \lambda A + \mu A$ then $z = \lambda a' + \mu b'$ for some $a', b' \in A$, so that by the convexity of $A$, $z \in A$. Thus:
(3)
\begin{align} \quad \lambda x + \mu y &\in A \cap (\lambda a + \mu b + V + V) \\ & \subseteq A \cap (\lambda a + \mu b + U) \end{align}
• Since every neighbourhood of $\lambda a + \mu b$ is of the form $\lambda a + \mu b + U$ (since $E$ is a topological vector space and since $U$ is a neighbourhood of the origin), and since $A \cap (\lambda a + \mu b + U) \neq \emptyset$ from the above inclusion, we conclude that $\lambda a + \mu b \in \overline{A}$. Thus $\overline{A}$ is convex. $\blacksquare$
# Boric oxide
When I react CO2 with CaO I get CaCO3. If I react boric oxide with CO2, would it react similarly?
Gokul43201
Staff Emeritus
Gold Member
Boric oxide (B2O3) may be amphoteric. If it is, then you might expect a similar reaction ...but if it's only an acidic oxide, I doubt that you'll have a reaction.
I'm probably wrong on this...let's wait for the experts to come along...
So would I get B2CO4?
Boric oxide's acidic, so you aren't likely to get any reaction, and even if you did the product would be unstable and would easily decompose back to boric oxide and carbon dioxide.
chem_tr
Gold Member
Pyrovus is right. I have no knowledge of boric oxide, but the nearest compound is boric acid, $\displaystyle H_3BO_3$, or better written as $\displaystyle B(OH)_3$. However, borax, $\displaystyle Na_2B_4O_7$, is a cage-framework polyboric oxide. If you react this one with carbon dioxide, sodium carbonate will probably be formed, resulting in a cleavage inside the cage.
So I wouldn't get a similar reaction, I would just get no reaction?
chem_tr
Gold Member
You'd better look up the Lux acid and base concept; non-protonic compounds (very generally, oxides) can behave as acids or bases according to some rules. Here, I presume that carbon dioxide is the acid, and boric oxide is also acidic; that's why I am doubtful about any reaction, like Gokul.
Gokul43201 said:
Boric oxide (B2O3) may be amphoteric....
Boron is amphoteric:
"MATERIAL OVERVIEW
"Characteristics: Nonmetallic element, black, hard solid; brown, amorphous powder; crystals. Highly reactive. Soluble in concentrated nitric acid and sulfuric acid; insoluble in water, alcohol, and ether. High neutron absorption capacity. Low toxicity. Amphoteric...."
http://www.espimetals.com/metals/catboron.htm
However, I don't think that necessarily means that PARTICULAR oxide will form both acids and bases.
## What is mass for kids
• Posted on December 19, 2020
Mass is the amount of matter in an object. It is also a measure of an object's resistance to acceleration, sometimes called "inertia". Weight, by contrast, is a force and is affected by gravity: mass always stays the same, while weight changes with changes in gravity. Mass and weight are often considered synonymous, but they are in fact two different quantities with different units.

Mass is commonly measured by how much something weighs. It is measured in grams, kilograms, and tonnes (metric) or in ounces and pounds (US units); the kilogram (kg) is the internationally recognized unit of mass. Scientists who look at substances on a very small scale use other units: atomic physicists deal with the tiny masses of individual atoms and measure them in atomic mass units. Most carbon atoms consist of six protons and six neutrons, and the most common isotope of hydrogen is protium, an atom that consists of a proton, or a proton and an electron.

Mass and volume are different measurements. For example, a bowling ball and a basketball are about the same volume as each other, but the bowling ball has much more mass. Density connects the two: density is the amount of mass per unit of volume.

Weight is just mass multiplied by the acceleration due to gravity, so where gravity is the same, the same mass will yield the same weight. But if one takes something with a mass of 5 kilograms to the Moon, the mass does not change, even though the weight is only about 1/6 of what it is on the Earth. In space, where gravity is very small, even a large iron anvil (a metalworking tool consisting of a large block of metal) would be weightless, yet an astronaut floating in the International Space Station who shook it rapidly back and forth would still have to exert a force against its inertia, and it would still hurt if it bumped into him.

Mass also determines the strength of an object's gravitational attraction to other bodies; "mass attraction" is another word for gravity, a force that exists between all matter. A large mass like the Earth will attract a small mass like a human being with enough force to keep the human being from floating away. Henry Cavendish measured this attraction between masses with a torsion balance: large balls were hung from a frame so they could be rotated into position next to the small balls by a pulley operated from outside the building that housed the instrument.

Mass is conserved. At least since the works of Antoine Lavoisier in the second half of the eighteenth century, it has been known that the sum of the masses of objects that interact, or of the chemicals that react, remains constant throughout the process: matter changes form, but in a closed system the total mass stays the same. In a rigid body, the centre of mass is always in the same place. There is also a famous equation created by Albert Einstein that allows you to convert mass to an energy content: $E = mc^2$, where $E$ is energy, $m$ is mass, and $c$ is the speed of light.

On size, mass, and density in astronomy: Mercury, with a diameter of about 3,032 miles (4,879 kilometers), is the smallest planet in both mass and diameter; it is not quite two fifths the size of Earth and only about a third larger than Earth's Moon.

The word "mass" has other senses as well: a body of coherent matter, usually of indefinite shape and often of considerable size (a mass of dough); the celebration of the Eucharist in the Catholic Church, whose major part after the Liturgy of the Word and before the Concluding Rite is the Liturgy of the Eucharist; and "mass media", the technologies used for the dissemination of information, of which journalism and advertising are part.
Stays the same place measure objects can earn credit-by-exam regardless of both its location what is mass for kids International. Central ideas of mass is the tetrahedron bounded by the symbol 'kg ' are normally measured in,! Only about a third larger than Earthâs moon cotton balls have the.! That weight is special balance instrument including the building in which it was housed as on Earth in... Step on a scale the tetrahedron bounded by the planes x=0, y=0,,... Days, just create an account regarding mass and volume Loading... Found a error. X, y, z ) =5y the lack of gravity balloon filled with helium ( ). Also be used to measure mass mass is considered to be the same, while weight changes with in! While volume is how much space it takes up given an anvil, a body of coherent matter,,... Megan is confused between the iron and the cotton balls are also the same weight doctor usually... Tool consisting of a large block of metal about 18 times less massive than Earth its gravitational to! Also has a mass definition is - the liturgy of the products kilograms,! The page, or contact customer support fifths the size of something does not determine how matter. Metalworking tool consisting of a large object with very little mass such physics... And the laws of motion including units and measurement it 's mass also serves as universal. ( kg ) is - the liturgy of the object after the liturgy of the Eucharist especially in with... Print this page was Last modified on 15 December 2020, at 14:18 the idea of in... Chemical process in a rigid body, the mass of the iron and the cotton balls have the amount. Mass than a mouse so it 's mass also serves as a balloon filled helium... Some fields or applications, it is measured in grams ( g or... Simplify the discussions or writings approximately and accurately using a centigram balance the but! In pounds as though it were mass Take to Get a PhD in Philosophy are... 
Not you grasp the central ideas of mass is considered to be the equivalent of energy! Many substances in different laboratory settings, sometimes on a clear piece paper... That weight is special kilograms which is abbreviated as kg two quantities namely! Education level it was housed much space it takes up applied to it a diameter of about 3,032 (... Doing experiments in space, even though it were mass Latin rite facilitated laboratory.... Brick would be that mass refers to the doctor will usually weigh by. Attraction '' is another word for gravity, objects are weightless mass called!
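Since weight is the force that gravity applies to a mass, the distinction can be sketched in a few lines of Python. This is an illustration only: the anvil's 50 kg mass is made up for the example, and the values of g are standard approximations.

```python
EARTH_G = 9.81  # m/s^2, standard approximation for Earth's surface gravity
MOON_G = 1.62   # m/s^2, approximate lunar surface gravity

def weight(mass_kg: float, g: float) -> float:
    """Weight in newtons: W = m * g. Mass stays the same; g does not."""
    return mass_kg * g

anvil_mass = 50.0  # kg; the anvil's mass is identical everywhere
print(weight(anvil_mass, EARTH_G))  # about 490.5 N on Earth
print(weight(anvil_mass, MOON_G))   # about 81 N on the Moon
```

The same mass yields very different weights, which is exactly why the anvil is easy to hold in orbit but just as hard to shake.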
|
{}
|
# GUI modules¶
The AMS-GUI is the Graphical User Interface for the Amsterdam Modeling Suite. It consists of several modules for specific tasks. Those modules work together and exchange information.
All the AMS-GUI modules have one common SCM menu on the top left of the window. You can use the commands inside the SCM menu to start other GUI modules (or switch to them).
In general when selecting a GUI module from the SCM menu it will start and open the current job. If that module is already open with the current job, it will be activated (brought to the foreground). The current job is the selected job in AMSjobs, or the job open in some other GUI module if you use the SCM menu in that module.
Tip
In AMSjobs:
Right click on the left side of a job (like the icon or name) to get a pop-up version of the SCM menu with that job selected.
Right click on the right side of a job (like the queue or options field) for more pop-up commands (Run, Kill, …).
The most important exception is opening the New Input module (AMSinput) or AMScrs (COSMO-RS) in AMSjobs. In that case the selected job will be ignored, and you can start working on a new calculation. To open the selected job in AMSinput, you need to click the icon in front of the job or use the Input command from the SCM menu.
SCM → Preferences
AMSpreferences ($AMSBIN/amsprefs) allows you to adjust and save numerous GUI preferences, such as color schemes, environment variables, etc. The preferences will be used by all AMS-GUI modules.
SCM → New Input
AMSinput ($AMSBIN/amsinput) helps users to easily create AMS jobs. You can use AMSinput to define your molecule (geometry), pre-optimize it, and to set details of your AMS job using an easy-to-use graphical user interface. AMSinput will generate the basic job script for you. This script takes care of running AMS and property programs as required.
The same module can actually create jobs using different methods: ADF, BAND, DFTB, MM, MOPAC, QMMM, QUILD, ReaxFF, ForceField and Quantum ESPRESSO. After starting it, you can simply change the method to use without starting a different module. Depending on your license, not all options might be available.
The New Input command will start a new AMSinput with no job loaded.
SCM → Input
As New Input, but load the selected / current job.
SCM → View
AMSview ($AMSBIN/amsview) displays volume data, such as electron densities, orbitals, electrostatic potentials and more. You can also use it to visualize scalar atomic data like charges, some tensor data, and AIM (Bader) results.
SCM → Movie
AMSmovie ($AMSBIN/amsmovie) follows geometry steps as performed by AMS during geometry optimizations, molecular dynamics, IRC calculations, etc. It can be used during the calculation to monitor the progress (based on information from the logfile), or it can be used to analyze the geometry changes after a calculation. It is also used to display normal modes calculated with a frequency calculation.
SCM → Levels
AMSlevels ($AMSBIN/amslevels) generates a diagram showing the energy levels of a finished calculation. You can interact with it: show an interaction diagram (how the molecular orbitals are constructed from fragment orbitals), show labels, occupations, orbitals, etc.
SCM → Logfile
AMStail ($AMSBIN/amstail) shows the contents of a text file, updating when the text file grows (like the UNIX tail -f command). It is typically used to monitor the ‘logfile’. The progress of an AMS calculation is always written to this file.
SCM → Output
AMSoutput ($AMSBIN/amsoutput) shows the output of AMS (or any other text file). It will analyze the output and provide quick links to sections of interest.
SCM → Spectra
AMSspectra ($AMSBIN/amsspectra) shows spectra calculated by AMS. It can show IR, Raman, excitation and CD spectra, as well as a DOS plot. For some spectra it can also perform additional tasks (using other AMS-GUI modules), like displaying normal modes or orbitals.
SCM → Band Structure
AMSbandstructure ($AMSBIN/amsbands) shows dispersion spectra like the band structure of solids, or phonon spectra, as calculated by for example Band or DFTB.
SCM → Dos
AMSdos ($AMSBIN/amsdos) shows DOS-like results. You can easily select which partial DOS to show by selecting atoms, and you can even select to show the GPDOS for select atoms and L-shells.
SCM → KFBrowser
KFBrowser ($AMSBIN/kfbrowser) is a graphical interface to examine data from the binary KF files produced by most of the computational engines in the Amsterdam Modeling Suite. You can use it to see details, graphs, copy data in table format, or get to the low-level contents of the result files.
SCM → COSMO-RS
AMScrs ($AMSBIN/amscrs) enables ADF users to easily select compounds, create COSMO-RS jobs, run the jobs, and visualize the results.
SCM → Kinetics
AMSkinetics ($AMSBIN/amskinetics) allows you to perform microkinetics calculations using the MKMCXX program, as well as Kinetic Monte Carlo simulations with the Zacros code.
SCM → ParAMS
ParAMS ($AMSBIN/params gui) lets you set up training and validation sets, run ReaxFF and DFTB parametrizations, and view the results.
SCM → Packages
AMSpackages ($AMSBIN/amspackages gui) allows you to install optional components of the Amsterdam Modeling Suite, which are not included in the base AMS distribution package.
SCM → Jobs
AMSjobs ($AMSBIN/amsjobs) manages your jobs: run a job on your local machine or on remote machines. It also serves as an interface to all files belonging to your job, and it serves as a convenient launcher of the other AMS-GUI modules.
|
{}
|
## Bundle Size Limitation
For some reason, whenever I try to create an MSI bundle that is larger than 1GB, the bundle either never gets packaged (I've waited more than 24 hours for the bundle to package) or the bundle packages 'successfully' and some of the files are corrupt or incomplete when I try to install it. For now we are creating MSI Network bundles when the size of the bundle would be greater than 1GB, but I would like to be able to host these bundles completely through ZCM and not a network share. Has anyone had any luck creating a 'large' bundle on ZCM, and if so, what did you do?
Thanks!
|
{}
|
# Mrs. Goode, the English teacher, assigned a paper. She requires 4,207 words for 6 pages. If Jasper writes 29,449 words, how many pages can he expect to write?
Nov 23, 2015
I found $42$ pages
#### Explanation:
I understand that to write $6$ pages you need $4,207$ words.
If Jasper writes $29,449$ words you have that:
$\frac{29,449}{4,207} = 7$ sets of $6$ pages or:
$7 \cdot 6 = 42$ pages
Nov 23, 2015
Further explanation
#### Explanation:
It is a matter of ratio
There are two ways of expressing a ratio. One method is to show the two numbers in the same format as a digital clock, with a colon between them
For example 2:3
This does not lend itself to mathematical manipulation as much as $\frac{2}{3}$ would
Using method 2:
Target $\to \frac{6\ \text{pages}}{4207\ \text{words}} \ldots \ldots \ldots \ldots \left(1\right)$
It is much easier if you use the numerator to represent the 'unit' you are trying to solve for. In this case the unit is "pages".
Let the number of pages needed by Jasper be $x$
The number of words he expects to write is 29449
So to maintain the same ratio of pages to words as the target you write:
$\frac{6}{4207} = \frac{x}{29449}$
Multiply both sides by 29449 and you have
$\frac{6 \times 29449}{4207} = \frac{x}{1}$
so $x = 42$ which confirms Geo's solution, as I would expect it to!
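The same proportion can be checked with a couple of lines of code; this is just a sketch, and the names are mine, not part of the assignment:

```python
# Mrs. Goode's ratio: 6 pages per 4,207 words.
PAGES = 6
WORDS = 4207

def expected_pages(words_written: int) -> float:
    # Keep the same ratio: x / words_written = PAGES / WORDS
    return words_written * PAGES / WORDS

print(expected_pages(29449))  # 42.0
```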
This does not reflect a real world assignment, as you usually have a restriction imposed on the number of pages you may submit!
|
{}
|
# GMLscripts.com
Discuss and collaborate on GML scripts
You are not logged in.
## #41 2011-04-29 18:41:58
icuurd12b42
Member
Registered: 2008-12-11
Posts: 303
### Re: CHALLENGE: Conway's Game of Life
Tilt. I would never have figured out the proper sequence myself. Genius guys.
Last edited by icuurd12b42 (2011-04-29 18:42:20)
Offline
## #42 2011-05-03 05:09:12
~Dannyboy~
~hoqhuue(|~
From: Melbourne, Australia
Registered: 2009-10-02
Posts: 21
Website
### Re: CHALLENGE: Conway's Game of Life
Today I received my c++ assignment for this semester, would you believe it's "Conway's Game of Life"? lol
Offline
## #43 2011-05-03 05:21:18
Manuel777
Registered: 2011-04-25
Posts: 4
Website
### Re: CHALLENGE: Conway's Game of Life
Darn surfaces and their colours! haha
I would have never figured it out that way, you guys are geniuses!
Offline
## #44 2011-05-03 10:19:54
Rani_sputnik
Member
Registered: 2011-04-24
Posts: 18
### Re: CHALLENGE: Conway's Game of Life
Sorry this is a bit off topic but does anyone know why the draw_clear and draw_clear_alpha() functions break xot's method? I tried to replace the primitive code because it was after all just drawing a rectangle over the whole scene but yeah, draw_clear doesn't work. Does it not respond to blend modes?
Offline
## #45 2011-05-03 16:04:30
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
Welcome to the forums, Rani.
Your guess is correct. Blending mode settings do not affect draw_clear() or draw_clear_alpha().
Abusing forum power since 1986.
Offline
## #46 2011-05-03 18:23:54
icuurd12b42
Member
Registered: 2008-12-11
Posts: 303
### Re: CHALLENGE: Conway's Game of Life
~Dannyboy~ wrote:
Today I received my c++ assignment for this semester, would you believe it's "Conway's Game of Life"? lol
Maybe you can blow your teacher's mind with the surface method lol
Offline
## #47 2011-05-03 20:25:35
Rani_sputnik
Member
Registered: 2011-04-24
Posts: 18
### Re: CHALLENGE: Conway's Game of Life
Cheers for clearing that up xot. Is there any reason for that? it seems not so sensible to me...
Apologies for my impending stupid, I promise I'm smart! I'm sure one day I'll contribute something valuable to the forums.
Today is not that day, today I make naive suggestions:
I see how you run the three channels separately, and that makes me think: couldn't that be used to reduce the memory footprint?
Idea - Use them on the same grid?
What I'm thinking is that the red channel simulates the top half of the room and the green channel the bottom half. Ideally (though I can't think of how this could be possible in Game Maker), we'd have four channels RGBA and divide the grid into quarters. Now I know that getting the RG top-bottom sim to work is possible, xot proved that, but what I can't work out is how to separate them so that they are drawn in two separate places. My questions are these...
1 - Manipulating alpha channel in Game Maker? Comments? Suggestions? (Nice broad one to start off with)
2 - Is my suggested change worth it? Say we were porting to an iOS device and we need all the memory we can get, will this change slow down the engine too much? I know it will slow it down a tiny bit at least. Actually, will you need to add another surface for this? hmm only just thought of that...
3 - ... Oh I only had two questions.
Sorry for the onslaught of requests but you all know so much, I MUST LEARN.... *twitch
Offline
## #48 2011-05-03 20:56:29
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
If I understand what you are asking, you want to know if color depth can be converted to spatial area. While it is certainly easy to draw one color channel on the top half of the screen, and the other channel on the bottom half, there are some important limitations. (Although it is possible to use the Alpha channel as a fourth simulation, and devote the channels to quadrants, it would complicate things significantly.)
First, the two halves of the display would necessarily be different colors. This can be worked around, but it is slow (see next point).
Second, the simulations on each channel are decoupled. As it is, there would be no communication of information from the top half to the bottom half. A glider would get to the middle of the screen and vanish. You'll need to find a way of communicating the information between color channels. The fastest way I know of to do this is to use them as an alpha mask. Game Maker supplies this with the sprite/background_create_from_screen/surface() functions. As fast as it is, it is still quite slow. However, you could get by with only operating on the boundary rows of pixels, which would improve speed greatly. That said, this is also the same function you would need to draw the separate color channels as a single unified color. Since you would be operating on the entire display (or at least half of it) you would lose the gains of operating only on the boundary rows.
It still might be pretty fast, but I'm not sure it would be worth the effort.
As for iOS devices, at the moment they do not support surfaces at all. Based on comments made by Mike Dailly, this may change in the near future.
If you are wanting to do this more smartly on an iOS device, the truly smart thing to do is use a different API and program this as a pixel shader. It would be much faster and simpler.
Abusing forum power since 1986.
Offline
## #49 2011-05-04 20:53:14
Rani_sputnik
Member
Registered: 2011-04-24
Posts: 18
### Re: CHALLENGE: Conway's Game of Life
Hmm, but if you made two surfaces of size (room_width, room_height/2+1), then couldn't you store the same row on each surface in the separate colours to prevent slowdowns?
But don't get me wrong, I don't think it's worth the effort any more at all, I am merely curious. I just read your draw_set_blend_mode topic and I finally understand blend modes! So hopefully I can now have a bit of a play around. Cheers xot!
Offline
## #50 2011-05-04 21:49:07
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
Simply stitching two surfaces together could be very fast, but I thought the idea was to use one surface and split the color channels across multiple screen spaces. The slowdown comes from transferring the information between color channels. No blend mode allows the color channels to influence each other. That requires getting the CPU involved by reading the texture into conventional memory, manipulating the data, and sending it back to the GPU, which is much slower than doing everything on the GPU.
I'm pleased you found my topic on blend modes helpful and look forward to someday seeing what you do with them.
Abusing forum power since 1986.
Offline
## #51 2011-06-07 05:26:13
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
I was playing around with this again today and have implemented a couple of new CAs using the same surface blending technique.
The first up is Gérard Vichniac's Vote, which you can read about here. The gist is, a cell will take on the value most popular in its neighborhood. The twist is, if the margin of favor is narrow, it "votes" the other way.
I think this could make a good basis for a procedural cave generator. Replace the code in the Step Event of my demo with this to try it out. Works best when the default random pattern is used (press 0), but I could see adding a simple network of lines to help guide its development.
{
// Vote : sum of neighborhood and self {4,6,7,8,9 => 1, else 0}
// new = band pass 4 (cells with exactly 4 living neighbors/self are alive)
// new += high pass 5 (cells with more than 5 living neighbors/self are alive)
// NineSum of Neighbors/Self
surface_set_target(sum);
draw_clear_alpha(c_black,1);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_surface_ext(surf,-1,-1, 1,1,0,$010101,1); draw_surface_ext(surf, 0,-1, 1,1,0,$010101,1);
draw_surface_ext(surf, 1,-1, 1,1,0,$010101,1); draw_surface_ext(surf,-1, 0, 1,1,0,$010101,1);
draw_surface_ext(surf, 0, 0, 1,1,0,$010101,1); draw_surface_ext(surf, 1, 0, 1,1,0,$010101,1);
draw_surface_ext(surf,-1, 1, 1,1,0,$010101,1); draw_surface_ext(surf, 0, 1, 1,1,0,$010101,1);
draw_surface_ext(surf, 1, 1, 1,1,0,$010101,1);
// Band Pass Mask - High Pass * Low Pass
// High Pass Mask - keep everything > hi
hi = $030303;
surface_set_target(surf);
draw_clear_alpha(c_white,1);
draw_set_blend_mode_ext(bm_zero,bm_inv_src_color);
draw_surface_ext(sum,0,0,1,1,0,c_white,0);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_rectangle_color(0,0,w,h,hi,hi,hi,hi,false);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
// ... followed by low pass to create band pass filter ...
// Low Pass Mask - keep everything < lo
lo = $050505;
lo = $ffffff - lo;
surface_set_target(temp);
draw_clear_alpha(c_white,0);
draw_set_blend_mode_ext(bm_zero,bm_src_color);
draw_surface_ext(sum,0,0,1,1,0,c_white,0);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_rectangle_color(0,0,w,h,lo,lo,lo,lo,false);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
repeat (8) draw_surface(temp,0,0);
// ... High * Low = Band Pass
surface_set_target(surf);
draw_set_blend_mode_ext(bm_dest_color,bm_zero);
draw_surface(temp,0,0);
// High Pass Mask - keep everything > hi
hi = $050505;
surface_set_target(temp);
draw_clear_alpha(c_white,1);
draw_set_blend_mode_ext(bm_zero,bm_inv_src_color);
draw_surface_ext(sum,0,0,1,1,0,c_white,1);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_set_alpha(0);
draw_rectangle_color(0,0,w,h,hi,hi,hi,hi,false);
draw_set_alpha(1);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
// Add to existing
surface_set_target(surf);
draw_surface(temp,0,0);
repeat (8) draw_surface(surf,0,0);
// Normally this would be done after each pass,
// but we can skip one by doing this down here.
surface_reset_target();
draw_set_blend_mode(bm_normal);
}
The other is Fredkin, named for its inventor (an interesting chap) Edward Fredkin. It has the distinction of being the simplest self-replicating Moore neighborhood CA. What this means is, the initial pattern will be replicated multiple times in succeeding generations. Basically the way it works is, a cell lives if the number of living neighbors is odd. This is why the Fredkin CA is sometimes called the parity rule. As you can see below, it is very simple to implement using XOR-like blending modes. Again, replace the code in the Step Event to try it out.
It's all a bit crazy looking at the speeds the demo runs at. You'll probably want to slow the demo down to one or two frames per second and supply it with some interesting patterns to work from. It can also produce some interesting variations on the initial pattern if you let it run a while. They might spark your imagination or be good sources for game sprites.
{
// Fredkin : parity of neighborhood and self {1,3,5,7,9 => 1, else 0}
// new = C ^ NW ^ N ^ NE ^ E ^ SE ^ S ^ SW ^ W
//
// There exists a blend mode which mimics XOR when drawing with Black & White.
// Parity of Neighbors/Self
surface_set_target(sum);
draw_clear_alpha(c_black,0);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_inv_src_color);
draw_surface_ext(surf,-1,-1, 1,1,0,$ffffff,1);
draw_surface_ext(surf, 0,-1, 1,1,0,$ffffff,1); draw_surface_ext(surf, 1,-1, 1,1,0,$ffffff,1);
draw_surface_ext(surf,-1, 0, 1,1,0,$ffffff,1); draw_surface_ext(surf, 0, 0, 1,1,0,$ffffff,1);
draw_surface_ext(surf, 1, 0, 1,1,0,$ffffff,1); draw_surface_ext(surf,-1, 1, 1,1,0,$ffffff,1);
draw_surface_ext(surf, 0, 1, 1,1,0,$ffffff,1); draw_surface_ext(surf, 1, 1, 1,1,0,$ffffff,1);
surface_set_target(surf);
draw_clear_alpha(c_black,1);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_surface(sum,0,0);
draw_set_blend_mode(bm_normal);
surface_reset_target();
}
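For comparison, the parity rule is also easy to express on the CPU side. This Python sketch is mine (it is not code from the demo), and it assumes a toroidal grid of 0/1 cells:

```python
def fredkin_step(grid):
    """One generation of the Fredkin parity CA on a toroidal 0/1 grid."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            parity = 0
            # XOR together the cell and its eight wrapped neighbors.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    parity ^= grid[(y + dy) % h][(x + dx) % w]
            nxt[y][x] = parity  # alive iff the nine-cell sum is odd
    return nxt
```

Seeding a single live cell yields a 3×3 block of nine live cells after one step, the beginning of the self-replicating behavior described above.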
I may take on Wireworld next. It's a little bit more tricky because it is a CA with four states, rather than the 1-bit CAs thus far explored, but I'm fairly certain it can be done using the same methods.
Abusing forum power since 1986.
Offline
## #52 2011-07-04 14:27:50
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
I was glancing through some old Amiga magazines and stumbled across an interesting series of articles. In it the author describes creating a Life engine using the blitter chip of the Amiga. A "blitter" is a system (usually hardware) that is optimized to handle and manipulate bit-mapped images, for instance scaling, rotation, and (raises flag) Boolean logic operations intended for the masking and blending of software sprites. To move or erase or draw a rectangular section of a bitmap requires several memory moves, at least one per scan line. Calculating all of those memory moves takes time and memory. The blitter's purpose is to do these calculations and memory moves automatically from a single "bit blit" command, freeing the CPU to do more interesting things.
An early example of a hardware blitter is the graphics coprocessor used for Eugene Jarvis's Robotron: 2084. According to Jarvis, some of the (amazing) designers of the Amiga worked at Williams at the time of Robotron's creation and it was on the Amiga that the concept and terms "bit blit" and "blitter" were popularized. Many computers that were created in the late 80s and early 90s (especially arcade and home video game systems) relied heavily on blitters to perform a lot of the graphical grunt work. The Commodore Amiga and Atari ST lines of computers and the SNES, Sega CD, and Atari Jaguar all made extensive use of hardware blitters. By the time the Sega Saturn was released, blitters had become so powerful and feature-rich that they began to resemble today's GPUs. A GPU is just a very advanced blitter with a powerful vector calculator.
One can begin to see that the blitter-based Life algorithm described in the December 1987 issue of Amazing Computing is truly cut from the same cloth as the surface blending methods discussed in this topic.
So without further delay, here is the series of articles about running Life with the aid of an Amiga blitter chip.
Download PDF from Host-A: Life (with Amiga blitter) by Gerard Hull, Amazing Computing, Dec 87, Jan 88, Feb 88
Among the cited works is this one from January 1979 that appeared in BYTE magazine. It describes running Life based on Boolean logic and served as the primary inspiration for the blitter method.
Download PDF from Host-A: Life Algorithms by Mark D. Niemiec, BYTE Magazine, Jan 79
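The Boolean-logic idea from Niemiec's article translates neatly into modern terms by treating each row of the grid as an integer bitboard and building neighbor counts with full-adder logic on shifted copies. The sketch below is my own illustration of that idea, not code from the article; the `life_step` name and the dead-boundary convention are mine:

```python
def life_step(rows, width):
    """One Game of Life generation using only shifts and Boolean logic.

    Each row is a Python int used as a bitboard: bit x of rows[y] is the
    cell at (x, y). Cells beyond the edges are treated as dead.
    """
    mask = (1 << width) - 1

    def add3(a, b, c):
        # Bit-plane full adder: per-bit sum and carry of three 1-bit planes.
        return a ^ b ^ c, (a & b) | (b & c) | (a & c)

    # 2-bit horizontal sums of (left, self, right) for every row.
    hs = [add3((r << 1) & mask, r, r >> 1) for r in rows]
    zero = (0, 0)
    out = []
    for y in range(len(rows)):
        sa, ca = hs[y - 1] if y > 0 else zero
        sb, cb = hs[y]
        sc, cc = hs[y + 1] if y + 1 < len(rows) else zero
        b0, k1 = add3(sa, sb, sc)   # ones place of the 9-cell total
        t, k2 = add3(ca, cb, cc)    # twos place, before adding the carry k1
        b1, u = t ^ k1, t & k1
        b2, b3 = k2 ^ u, k2 & u
        # 9-cell total (neighbors + self): total == 3 means birth or
        # survival; total == 4 means survival only if the cell is alive.
        total3 = b0 & b1 & ~b2 & ~b3 & mask
        total4 = ~b0 & ~b1 & b2 & ~b3 & mask
        out.append(total3 | (rows[y] & total4))
    return out
```

A horizontal blinker (`0b01110`) flips to a vertical one and back, exactly as the rule demands, and every cell of a row is updated by the same handful of Boolean operations, which is the whole point of the blitter approach.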
Here also is a reprint of Martin Gardner's truly foundational writings about Life within the pages of Scientific American. Like so many other Life junkies, it was one of these articles that inspired me to pursue cellular automata in the first place.
Download PDF from Host-A: Game of Life by Martin Gardner, Scientific American
If you desire some more historical morsels, here you'll find an article on Life that appeared in the very first issue of BYTE magazine from January 1975. As one can imagine, it would have been running on the computing equivalent of stone knives and bear skins.
Download PDF from Host-A: Life Line by Carl Helmers, BYTE Magazine, Jan 75, Feb 75
Last edited by xot (2011-07-07 11:06:50)
Abusing forum power since 1986.
Offline
## #53 2011-07-07 00:32:32
icuurd12b42
Member
Registered: 2008-12-11
Posts: 303
### Re: CHALLENGE: Conway's Game of Life
LOL: bitter-based Life
Offline
## #54 2011-07-07 11:06:26
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
icuurd12b42 wrote:
LOL: bitter-based Life
Whoops, I must have been thinking of my own life when I typed that.
Abusing forum power since 1986.
Offline
## #55 2011-07-14 16:33:10
xot
Registered: 2007-08-18
Posts: 1,201
### Re: CHALLENGE: Conway's Game of Life
I've got Wireworld running now. On my machine, it can iterate Mark Owen's Wireworld computer (shown below) at almost 60 fps. This complex circuit is built of several parts and together they calculate prime numbers of up to 16-bits and show them on a 7-segment display. I'm hoping with some code changes I can get it going 20-40% faster.
Below is the code and the support functions. There are a couple of reasons it is so much slower than the other demos. First, it is a much larger pattern with over three times as many pixels. There is a variation of this pattern that is about 15% smaller. It should run a bit faster, but it is "in-progress" and already at prime number 19. Second, I'm using scripts to perform most of the grunt work and creating and destroying several surfaces each step to accommodate the generality of the scripts. By reusing surfaces and refactoring the code to be inline, I expect a good speed boost. I'll post the results later.
{
/*
// Wireworld :
// Empty -> Empty
// Tail -> Conductor
// Head -> Tail
// Conductor -> Conductor, or Head if exactly 1 or 2 Heads in neighborhood
// 0: empty === ($000000) === ($000000)
// 1: conductor === ($010101) === ($404040)
// 2: tail === ($020202) === ($808080)
// 4: head === ($040404) === ($FFFFFF)
// Non-traditional state values allow integer division to perform
// almost all state transitions with a single operation. Rounding
// of (1 div 2) == 1 is especially useful. These values also look
// much nicer as a display when scaled to full brightness.
// empty -> empty (0 -> 0)
// conductor -> conductor (1 -> 1)
// tail -> conductor (2 -> 1)
// head -> tail (4 -> 2)
// conductor -> head {1 -> 4} if it has exactly 1 or 2 head neighbors
surf *= (4/255); // transforms color from display {0,64,128,255} to calc {0,1,2,4} ranges
sum = sum8(temp,(1/255)); // sum = number of head neighbors for each cell
temp = bandpass_mask(sum,0,3); // temp = cells with exactly 1 or 2 neighboring heads
temp2 = bandpass_mask(surf,0,2); // {0,255,0,0} temp2 = conductor only
temp *= temp2; // temp = conductors with exactly 1 or 2 head neighbors
temp *= (3/255); // temp scaled to calc range
surf *= (128/255) // divides by ~two {0,1,2,4} -> {0,1,1,2} for primary state transitions
surf += temp; // surf = surf with new heads added [ conductor -> head {1 -> 4} ]
repeat (6) surf += surf; // {0,1,2,4} -> {0,64,128,255} [ calc range -> display range ]
*/
surface_set_target(surf);
draw_set_blend_mode_ext(bm_zero,bm_src_color);
draw_rectangle_color(0,0,w,h,$040404,$040404,$040404,$040404,false);
if (surface_exists(temp)) surface_free(temp);
temp = surface_highpass_mask(surf,$030303);
if (surface_exists(sum)) surface_free(sum);
sum = surface_sum8(temp,$010101);
if (surface_exists(temp)) surface_free(temp);
temp = surface_bandpass_mask(sum,$000000,$030303);
if (surface_exists(temp2)) surface_free(temp2);
temp2 = surface_bandpass_mask(surf,$000000,$020202);
surface_set_target(temp);
draw_set_blend_mode_ext(bm_zero,bm_src_color);
draw_surface(temp2,0,0);
draw_rectangle_color(0,0,w,h,$030303,$030303,$030303,$030303,false);
surface_set_target(surf);
draw_rectangle_color(0,0,w,h,$808080,$808080,$808080,$808080,false);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_surface(temp,0,0);
repeat (6) draw_surface(surf,0,0);
draw_set_blend_mode(bm_normal);
surface_reset_target();
}
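The commented recipe above translates directly into an ordinary array implementation. Here's a minimal CPU sketch of the same update rule (my illustration in Python/NumPy, not part of the original GML project; note that Python's integer division floors rather than rounds, so the conductor transition is handled explicitly instead of relying on `1 div 2 == 1`):

```python
import numpy as np

# Same non-traditional state encoding as the GML version:
EMPTY, CONDUCTOR, TAIL, HEAD = 0, 1, 2, 4

def wireworld_step(grid):
    """One Wireworld generation on a 2-D integer array (dead border)."""
    heads = (grid == HEAD).astype(np.int64)
    p = np.pad(heads, 1)  # zero border so edge cells see empty neighbors
    # 8-neighbor head count (the surface_sum8 equivalent)
    n = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
         + p[1:-1, :-2] + p[1:-1, 2:]
         + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])
    new = grid // 2                       # head -> tail, tail -> conductor, empty -> empty
    new[grid == CONDUCTOR] = CONDUCTOR    # Python floors 1 // 2 to 0, so restore conductors
    new[(grid == CONDUCTOR) & ((n == 1) | (n == 2))] = HEAD  # conductor -> head
    return new
```

A one-dimensional wire `[head, conductor, conductor]` steps to `[tail, head, conductor]`, as expected.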
// surface_bandpass_mask(source,high,low)
// source surface to be masked with filter
// high colors brighter than high and
// low darker than low will pass
// returns a surface that is white where colors pass, black elsewhere
{
var a,b;
a = surface_highpass_mask(argument0,argument1); // white where brighter than high
b = surface_lowpass_mask(argument0,argument2);  // white where darker than low
surface_set_target(a);
draw_set_blend_mode_ext(bm_zero,bm_src_color);  // multiply: pass only where both masks are white
draw_surface(b,0,0);
draw_set_blend_mode(bm_normal);
surface_reset_target();
surface_free(b);
return a;
}
// surface_highpass_mask(surface,high)
// source surface to be masked with filter
// high colors brighter than high will pass
// returns a surface that is white where colors pass, black elsewhere
{
var src,hi,w,h,tmp;
src = argument0;
hi = argument1;
w = surface_get_width(src);
h = surface_get_height(src);
tmp = surface_create(w,h);
surface_copy(tmp,0,0,src);
surface_set_target(tmp);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_rectangle_color(0,0,w,h,hi,hi,hi,hi,false);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
repeat (8) draw_surface(tmp,0,0);
draw_set_blend_mode(bm_normal);
surface_reset_target();
return tmp;
}
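The blend-mode sequence is a saturating subtract followed by amplification: inverting, adding the threshold, and inverting again computes a clamped `src - hi`, and the eight additive self-draws double the result each time (a saturating multiply by 256), pushing any nonzero channel to white. A CPU sketch of the same arithmetic (illustrative Python/NumPy, not the GML API):

```python
import numpy as np

def highpass_mask(src, hi):
    """Per-channel emulation of the blend-mode highpass: values strictly
    brighter than `hi` become 255, everything else becomes 0."""
    # Saturating subtract (the invert/add/invert trick).
    x = np.clip(src.astype(np.int32) - hi, 0, 255)
    # Eight additive self-draws = saturating x256: any nonzero saturates.
    for _ in range(8):
        x = np.clip(x * 2, 0, 255)
    return x.astype(np.uint8)
```

For example, with threshold `0x03`, inputs `0x03`, `0x04`, and `0xFF` map to 0, 255, and 255.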
// surface_lowpass_mask(surface,low)
// source surface to be masked with filter
// low colors darker than low will pass
// returns a surface that is white where colors pass, black elsewhere
{
var src,lo,w,h,tmp;
src = argument0;
lo = c_white ^ argument1;
w = surface_get_width(src);
h = surface_get_height(src);
tmp = surface_create(w,h);
surface_copy(tmp,0,0,src);
surface_set_target(tmp);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_rectangle_color(0,0,w,h,lo,lo,lo,lo,false);
draw_set_blend_mode_ext(bm_inv_dest_color,bm_zero);
draw_rectangle_color(0,0,w,h,c_white,c_white,c_white,c_white,false);
draw_set_blend_mode_ext(bm_one,bm_one);
repeat (8) draw_surface(tmp,0,0);
draw_set_blend_mode(bm_normal);
surface_reset_target();
return tmp;
}
// surface_sum8(source,color)
// source surface to be summed
// color color scalar, ie. $010101
// returns a surface where each pixel is the scaled sum of the eight
// neighbors of the corresponding pixel in the source image
{
var tmp;
tmp = surface_create(surface_get_width(argument0),surface_get_height(argument0));
surface_set_target(tmp);
draw_clear_alpha(c_black,1);
draw_set_blend_mode_ext(bm_one,bm_one);
draw_surface_ext(argument0,-1,-1, 1,1,0,argument1,1);
draw_surface_ext(argument0, 0,-1, 1,1,0,argument1,1);
draw_surface_ext(argument0, 1,-1, 1,1,0,argument1,1);
draw_surface_ext(argument0,-1, 0, 1,1,0,argument1,1);
draw_surface_ext(argument0, 1, 0, 1,1,0,argument1,1);
draw_surface_ext(argument0,-1, 1, 1,1,0,argument1,1);
draw_surface_ext(argument0, 0, 1, 1,1,0,argument1,1);
draw_surface_ext(argument0, 1, 1, 1,1,0,argument1,1);
surface_reset_target();
return tmp;
}
Last edited by xot (2011-07-14 23:11:02)
Abusing forum power since 1986.
Offline
## #56 2011-07-18 19:27:41
xot
### Re: CHALLENGE: Conway's Game of Life
Here is a demonstration of this implementation of Wireworld. The zip includes a GM8 project file and executable.
After mucking about with the code and making it way less readable, I think I'm going to abandon any further attempts to increase execution speed.
## #57 2011-07-19 13:54:51
xot
### Re: CHALLENGE: Conway's Game of Life
I was checking out OldSkool.org today and ran across an interesting comment from a reader. The topic is 'bare metal' programming, which means coding directly to the hardware without an OS or API getting in the way, something almost unheard of today in popular computing. The reader-submitted stories revolve around the various abuses they have put older hardware through to get them to do unexpected things.
One of the comments was about the Atari ST and Conway's Game of Life. I quote it in part here:
Fred Butzen wrote:
Anyway, as a programming exercise I wrote a version of the game of life for this machine. It ran fine; however, I made one mistake: I forgot to set the clipping rectangle around the screen. So, the first time I built a glider gun, the glider went creeping off the screen, and just kept on going, crashing through memory. Mind you, there's no MMU, so the machine is still running while the glider is rampaging through memory. Because the 68000 used memory-mapped ports for its hardware, you could see where the glider was going because the disk drives started running, the lights flashed on the keyboard, the tube blinked, and so on. Finally, the glider hit something really vital and the machine died with "streaky bombs" -- a sign that the operating system was really, really sick.
If you've never worked with memory-mapped hardware, the story might fall a little flat, but it made me smile imagining the Tron-like escape of this wayward glider into the inner reaches of the computer.
## #58 2011-07-26 22:07:24
icuurd12b42
Member
Registered: 2008-12-11
Posts: 303
### Re: CHALLENGE: Conway's Game of Life
OMG... That is completely mental!!!
I'm impressed. man oh man. It's like those people in minecraft making those emulators.
## #59 2011-07-27 14:33:08
xot
### Re: CHALLENGE: Conway's Game of Life
Yeah, it's very impressive work. I spent a while looking for interesting circuits that were smaller, but that is really the only interesting one I've found -- and it's a doozy. I thought I might be able to use Photoshop to adapt it to the shape of the smaller version, but it soon became clear that correctly adjusting the circuit timings would be a Herculean task. The notion that someone had the patience and technical ability to create this circuit in the first place is truly humbling.
On another topic, I noticed something weird as I worked on this implementation. Earlier I said it ran at almost 60 FPS for me. What I didn't notice right away is that it is capable of running faster than that under the right conditions. If I set the room speed to 60 FPS, it runs at a rock-steady 60 FPS. If I set the room speed higher, it drops to about 50 FPS which was my initial observation. If that wasn't strange enough, if I cause Windows (or an application) to display a pop-up graphic (such as opening the Start menu, or a context menu, or displaying ToolTip text when I mouse over something, or almost anything else), it runs at about 70 FPS. When the pop-up image is closed or removed, the frame rate drops back down to 50 FPS. I don't understand what's going on here and I'm wondering if anyone else experiences the same thing. Oh, and one more weird thing. My Conway's Life demo shows the opposite behavior. It peaks at around 248 FPS and when a pop-up is displayed, it drops to under 210 FPS. And sometimes I can't coax it to go faster than 205 FPS without shuffling my windows or their contents around. And other times, none of these things affect the frame rate. I'm running Windows XP.
## #60 2011-07-27 21:28:10
icuurd12b42
### Re: CHALLENGE: Conway's Game of Life
It's probably due to the clipping region a menu or another window creates. Not happening here.
As for the machine... are there any templates for simple gates? There must be. Assembling a circuit would be as simple as copying and pasting the images onto the main picture.
# Profit Margin, Investment Turnover, and Residual Income
• ### profit margin
(Solved) May 25, 2011
2. Mason Corporation had $650,000 in invested assets, sales of$ 700,000 , income from operations amounting to $99,000 , and a desired minimum rate of return of 15 %. the profit margin is? • ### --Rate of return on investment (Solved) August 08, 2011 Media Networks Parks and Resorts Studio Entertainment Consumer Products Profit margin : Income from operations$2,749 ? ? ? Sales ? ? ? ? Profit margin ? ? ? ? Investment turnover: Sales ? ? ? ? Invested assets ? ? ? ? Investment turnover ? ? ? ? Rate of return on investment ? ? ? ? Income from
investment turnover = revenue/invested assets Media Networks: 13,027/26,926 = 0.48 times Parks and resorts: 9,023/15,807 = 0
• ### What is the residual income?
(Solved) August 17, 2011
• ### ACCOUNTING HELP
(Solved) October 30, 2010
1. Return on investment (ROI) is equal to the margin multiplied by: A) sales. B) turnover. C) average operating assets. D) residual income. 2. Delmar Corporation is considering the use of residual income as a measure of the performance of its divisions. What major disadvantage of this method...
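For reference, the quantities these exercises ask for factor together: ROI equals profit margin times investment turnover (the DuPont decomposition), and residual income subtracts the minimum required return from operating income. A small sketch using the Mason Corporation numbers above (function names are mine):

```python
def profit_margin(income_from_operations, sales):
    return income_from_operations / sales

def investment_turnover(sales, invested_assets):
    return sales / invested_assets

def roi(income_from_operations, sales, invested_assets):
    # DuPont decomposition: ROI = profit margin * investment turnover
    return (profit_margin(income_from_operations, sales)
            * investment_turnover(sales, invested_assets))

def residual_income(income_from_operations, invested_assets, minimum_rate):
    # Income earned above the minimum required return on invested assets.
    return income_from_operations - minimum_rate * invested_assets

# Mason Corporation: $650,000 invested assets, $700,000 sales,
# $99,000 income from operations, 15% minimum rate of return.
```

With those numbers, residual income comes out to $99,000 - 0.15 * $650,000 = $1,500.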
# Insights from Euclid's 'Elements'
post by TurnTrout · 2020-05-04T15:45:30.711Z · score: 122 (46 votes) · LW · GW · 16 comments
## Contents
Elements
Equality and Similarity
Synthetic/analytic
Area invariance
Notes
Forward
Against completionism
Re-deriving dependencies as a habit
Commemoration
None
Presumably, I was taught geometry as a child. I do not remember.
Recently, I'd made my way halfway through a complex analysis textbook, only to find another which seemed more suitable and engaging. Unfortunately, the exposition was geometric. I knew something was wrong – I knew something had to change – when, asked to prove the similarity of two triangles, I got stuck on page 7.
I'd been reluctant to tackle geometry, and when authors reasoned geometrically, I'd find another way to understand. Can you blame me, when most geometric proofs look like this?
Distasteful. In a diagram like that, you need to commit a pile of labeled objects to memory (e.g. triangles, angles) in order to read the proof without continually glancing at the illustration. In a normal equation, the variables are right in front of you.
Sometimes, we just need a little beauty to fall in love.
Welcome to Oliver Byrne's rendition of Euclid's Elements, digitized and freely available online.
# Elements
Propoſitions are placed before a ſtudent, who though having a ſufficient underſtanding, is told juſt as much about them on entering at the very threſhold of the ſcience, as gives him a prepoſſeſſion moſt unfavourable to his future ſtudy of this delightful ſubject; or “the formalities and paraphernalia of rigour are ſo oſtentatiouſly put forward, as almoſt to hide the reality. Endleſs and perplexing repetitions, which do not confer greater exactitude on the reaſoning, render the demonſtrations involved and obſcure, and conceal from the view of the ſtudent the conſecution of evidence.”
Thus an averſion is created in the mind of the pupil, and a ſubject fo calculated to improve the reaſoning powers, and give the habit of cloſe thinking, is degraded by a dry and rigid courſe of inſtruction into an unintereſting exerciſe of the memory.
## Equality and Similarity
Old mathematical writing lacks modern precision. Euclid says that two triangles are "equal" without specifying what that means. It means that one triangle can be turned into the other via an isometric transformation: rotate, translate, and/or reflect the first triangle, and it coincides with the second.
Two triangles are similar when one can be carried onto the other by a similarity transformation: an isometry composed with a uniform scaling (a special case of an affine map).
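In modern notation (my paraphrase, not Euclid's or Byrne's wording), the two notions can be written as transformation statements, with $Q$ an orthogonal map and $r$ a scale factor:

```latex
% Congruence: an isometry carries one triangle onto the other.
T_1 \cong T_2 \iff \exists\, Q \in O(2),\ b \in \mathbb{R}^2 :\quad
    T_2 = \{\, Qx + b : x \in T_1 \,\}

% Similarity: additionally allow a uniform scale factor r > 0.
T_1 \sim T_2 \iff \exists\, r > 0,\ Q \in O(2),\ b \in \mathbb{R}^2 :\quad
    T_2 = \{\, rQx + b : x \in T_1 \,\}
```

Setting $r = 1$ recovers congruence, which is why congruent triangles are in particular similar.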
## Synthetic/analytic
I find it strange that Euclid got so far by axiomatizing informal notions without any grounding in formal set theory (e.g. ZFC). I mean, you'd get absolutely blown away if you tried to pull these shenanigans in topology. But apparently, Euclidean geometry is sufficiently well-behaved that it basically matches our intuitions without much qualification?
## Area invariance
This says: suppose you draw two parallel lines, and then make a dash of length 2 on each line. Then, make another dash of length 2 on the upper line. The two parallelograms so defined have equal area. This is clarified in the next theorem.
If you take one of the dashes and slide it around on the upper parallel line, the resultant parallelograms all have the same area. I thought this was cool.
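The slide-invariance is easy to check with coordinates: take the base as the vector (b, 0) and the slanted side as (s, h); the cross-product (shoelace) area |b·h - 0·s| does not depend on the slide s. A small Python sketch (my illustration, not from the post):

```python
def parallelogram_area(base, slide, height):
    """Area of a parallelogram with base vector (base, 0) on the lower
    line and side vector (slide, height) up to the upper line."""
    bx, by = base, 0.0
    sx, sy = slide, height
    # Magnitude of the 2-D cross product of the two edge vectors.
    return abs(bx * sy - by * sx)
```

Sliding the top edge changes `slide` but not the area: base times height either way.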
## Notes
• There aren't any exercises; instead, I tried to first prove the theorems myself.
• Book III treats circles, with wonderful results on arcs and their relation to angles. I search for a snappy example, a gem of an insight to share, but my words fail me. It's just good.
• I read books I, III, IV, and skimmed II. Not all books of the Elements are about plane geometry; some are archaic introductions to number theory, for example. Those looking to learn number theory would do much better with the gorgeous Illustrated Theory of Numbers.
# Forward
Elements is a tour de force. Theorem, theorem, problem, theorem, all laid out in confident succession. It was not always known that from simple rules you could rigorously deduce beautiful facts. It was not always known that you could start with so little, and end with so much.
Before I found this resource, I'd checked out several geometry books, all of which seemed bad. To salt the wound, many books were explicitly aimed at middle-schoolers. This... was a bit of a blow.
However, it doesn't matter when something is normally presented. If you don't know something, you don't know it, and there's nothing wrong with learning it. Even if you feel late. Even if you feel sheepish.
## Against completionism
I'm glad I didn't read all of the books, even though they're beautiful. I'd picked up a bad "completionist" habit – if I don't read the whole book, obviously I haven't completed it, and obviously I'm not allowed to make a post about it. Of course.
But I'm trying to pick up useful skills, to expand the types of qualitative reasoning available to me, to get the most benefit per unit of reading. I stopped because I have what I need for my complex analysis book.
Reading relevant Wikipedia pages / other textbooks helps me cross-examine my knowledge. It also helps connect the new knowledge to existing knowledge. For example, I now have a wonderfully enriched understanding of the geometric mean.
Over time, as you expand and read more books, you'll find yourself reading faster and faster, understanding more and more subsections. I don't recommend learning new areas via Wikipedia, but it's good reinforcement.
## Re-deriving dependencies as a habit
Ever since I learned real analysis, I reflexively reprove all new elementary mathematics whenever I use it. For real analysis, that meant continually reproving basic facts whenever I used them in a proof. Did it feel silly and tedious? A bit, yes.
But with (this) tedium comes power. I can now regenerate a formal foundation for the real numbers from the Peano axioms, proving the necessary properties about the natural numbers, then the integers, then the rationals, and then the reals, and then complex numbers too. (But please, no quaternions!)
With this habit, you continually ask yourself, "how do I know this?". I think this is a useful subskill of Actually Thinking.
## Commemoration
In college, I taught myself a bit of Japanese. Through a combination of spaced repetition software and memory palaces, and over the course of three months, I learned to read the 2,136 standard use characters. After those three months, I proudly displayed this poster on my wall:
I look forward to another beautiful poster.
As the ſenſes of ſight and hearing can be ſo forcibly and inſtantaneously addreſſed alike with one thouſand as with one, the million might be taught geometry and other branches of mathematics with great eaſe, this would advance the purpoſe of education more than any thing that might be named, for it would teach the people how to think, and not what to think; it is in this particular the great error of education originates.
Comments sorted by top scores.
comment by ryan_b · 2020-05-04T18:45:21.448Z · score: 15 (6 votes) · LW(p) · GW(p)
This is glorious. On the flip side of the coin, I struggle with outrage that we had copped to the problem of presenting information and basically had it licked in the middle of the 19th century, and then apparently systematically purged such knowledge during the 20th. For example, there's this interesting piece about Emma Willard, who drew gorgeous visuals providing perspective to history. She began in ~1837. Good use of images seems only now to be undergoing a renaissance, and that owing to the availability of computer graphics more than anything else.
What the devil happened erstwhile?
comment by romeostevensit · 2020-05-04T20:07:09.516Z · score: 3 (3 votes) · LW(p) · GW(p)
Inaccessible beauty makes many feel ugly.
comment by TurnTrout · 2020-05-14T22:42:26.790Z · score: 2 (1 votes) · LW(p) · GW(p)
I don't see why that would explain these deficiencies, even if true. I imagine the answer's more along the lines of "lack of incentives for textbook writers and publishers, as determined by the scholastic purchasing committees".
comment by Raemon · 2020-05-15T20:01:19.573Z · score: 12 (6 votes) · LW(p) · GW(p)
Curated. I found a lot to be interested in here.
First, I'm just grateful for being introduced to Byrne's Elements. I think "how to use visuals to improve pedagogy" is a practically important question. I haven't yet worked through it myself to have a clear sense of "does the improved pedagogy work (for me)?", but even at a glance, it looks like a treasure trove of artistry that is worth exploring and learning from.
I found reading through Turntrout's learning process also helpful, to give me some insight into a cohesive worldview that includes "how to learn, how to be rigorous about it, and how to be finding beauty in the world along the way."
I... do sure find it annoying that the letter S is for some reason a weird ſ, which doesn't seem like the sort of thing it was that important to preserve at the expense of clarity on the new site, but that part isn't Turntrout's fault (I'd be interested if there's a more compelling reason than "that's just how Byrne did it at the time and we're faithfully recreating it").
comment by TurnTrout · 2020-05-15T22:50:34.442Z · score: 4 (2 votes) · LW(p) · GW(p)
I... do sure find it annoying that the letter S is for some reason a weird ſ, which doesn't seem like the sort of thing it was that important to preserve at the expense of clarity on the new site, but that part isn't Turntrout's fault (I'd be interested if there's a more compelling reason than "that's just how Byrne did it at the time and we're faithfully recreating it").
Nope, that's the reason. Nicholas Rougeaux explains:
The long s (ſ and ſ italicized) was common in older publications and is used throughout the original book. It can be mistaken for the lowercase f but should be read as s whenever seen. The usage of the long s has fallen out of style but in an effort to faithfully reproduce this book, it was used as well.
comment by Adele Lopez (adele-lopez-1) · 2020-05-15T23:25:26.973Z · score: 2 (1 votes) · LW(p) · GW(p)
I also find the long S super annoying, but it at least should be pretty easy to make a browser plugin or something to replace 'ſ' with 's' everywhere.
comment by Thomas Kehrenberg (thomas-kehrenberg) · 2020-05-25T14:39:47.809Z · score: 3 (2 votes) · LW(p) · GW(p)
I found "Word Replacer II" for Chrome works perfectly. You can limit it to only be active on specific sites. And then just specify that you want to replace "ſ" by "s".
comment by philh · 2020-05-07T12:22:49.999Z · score: 8 (3 votes) · LW(p) · GW(p)
It seems worth noting here that Elements isn't entirely rigorous. I don't remember many details about that, but https://en.wikipedia.org/wiki/Euclid's_Elements#Criticism has some. I do remember this bit (or at least something very similar):
Later, in the fourth construction, he used superposition (moving the triangles on top of each other) to prove that if two sides and their angles are equal, then they are congruent; during these considerations he uses some properties of superposition, but these properties are not described explicitly in the treatise.
Because when we studied Elements at math camp when I was ~16 I remember this standing out to me. I think we were going through it as a group, and the instructor asked if anyone could prove each theorem in turn before giving us the answer if we couldn't. Unsurprisingly, no one could prove this one. When he showed us how it was done I felt a bit... cheated? because no one had told us we could do that. But I didn't do anything with this feeling, I think I just assumed that everything was fine, I should have been able to work out that we could do that.
Later I learned that no, it was in fact cheating and we could not do that.
comment by TurnTrout · 2020-05-07T13:00:18.125Z · score: 2 (1 votes) · LW(p) · GW(p)
Yeah, and sometimes his case analysis was a little less than exhaustive. I think Byrne fixed that, though.
comment by G Gordon Worley III (gworley) · 2020-05-04T21:47:17.939Z · score: 6 (4 votes) · LW(p) · GW(p)
While this is great, I wonder if something is lost. Specifically I'm remembering when I learned geometry and the class was simply to work through Elements and prove each theorem. This happened when I was in 8th grade (US), and it was a frustrating and similarly beautiful and powerful experience. At the time nothing had quite honed my skills for reasoning about abstractions, loading models into my head, and working with those models like geometry did. Without having spent a semester fighting to earn the right to say "QED", I don't know if I would have made as good of progress as I did on becoming a programmer and a mathematician by virtue of having had that earlier experience where I learned the basic methods those callings require.
comment by TurnTrout · 2020-05-04T22:45:06.454Z · score: 9 (3 votes) · LW(p) · GW(p)
Why is that missing here?
comment by G Gordon Worley III (gworley) · 2020-05-05T16:00:24.597Z · score: 4 (2 votes) · LW(p) · GW(p)
I guess my concern is that this makes it too easy in a way that rips out part of the difficulty that encourages learning. Learning geometry the way I did was helpful specifically because I had to go through the process of taking dense and difficult to reason about words and build up my models to understand them. The lack of assistive pedagogy like the kind here forced me to work out something like it for myself inside my head.
This is not to make a general argument against assistive devices; often they are helpful if what matters is getting something done. But I didn't work through Elements to solve geometry problems where the solution had a positive impact on my life, but to learn a thinking process using geometry.
I also don't mean to make an argument that no one should get to have some pretty pictures that help them learn. I'm sure the use of pictures like these helps many folks learn geometry who otherwise wouldn't or wouldn't learn it as well. I only mean to say that I think we give up something of value by making geometry easier to learn.
(FWIW I've made the same argument in the context of training programmers, preferring that they have to learn to work with assembly, FORTRAN, and C because the difficulty forced me to understand a lot of useful details that help me even when working in higher level languages that can't be fully appreciated if you are, for example, trying to simulate the experience of managing memory or creating loops with JUMPIF in a language where it's not necessary. Not exactly the same as what's going on here but of the same type.)
comment by NaiveTortoise (An1lam) · 2020-05-05T16:53:44.416Z · score: 8 (6 votes) · LW(p) · GW(p)
FWIW as someone who learned Python first, was exposed to C but didn't really understand it, and then only really learned C later (by playing around with / hacking on the OpenBSD operating system and also working on a project that used C++ with mainly only features from C), I've always found the following argument quite suspect with respect to programming:
(FWIW I've made the same argument in the context of training programmers, preferring that they have to learn to work with assembly, FORTRAN, and C because the difficulty forced me to understand a lot of useful details that help me even when working in higher level languages that can't be fully appreciated if you are, for example, trying to simulate the experience of managing memory or creating loops with JUMPIF in a language where it's not necessary. Not exactly the same as what's going on here but of the same type.)
It's undoubtedly true that I see some difference before & after "grokking" low-level programming in terms of being able to better debug issues with low-level networking code and maybe having a better intuition for performance. Now in fairness, most of my programming work hasn't been super performance focused. But, at the same time, I found learning lower level programming much easier after having already internalized decent programming practices (like writing tests and structuring my code) which allowed me to focus on the unique difficulties of C and assembly. Furthermore, I was much more motivated to understand C & assembly because I felt like I had a reason to do so rather than just doing it because (no snark intended) old-school programmers had to do so when they were learning.
For these reasons, I definitely would not recommend someone who wants to learn programming start with C & assembly unless they have a goal that requires it. This just seems to me like going to hard mode directly primarily because that's what people used to have to do. As I said above, I'm fairly convinced that the lessons you learn from doing so are things you can pick up later and not so necessary that you'll be handicapped without them.
(Of course, all of this is predicated on the assumption that I have the skills you claim one learns from learning these languages, which I admit you have no reason to believe purely based on my comments / posts.)
comment by TurnTrout · 2020-05-05T16:56:52.682Z · score: 6 (3 votes) · LW(p) · GW(p)
The main difference is that the original is harder to follow because of shortcomings of the human short-term memory system. You're still thinking about exactly the same abstract concepts. The potential danger is the lack of exercises, I suspect – that's where a) first proving things yourself and b) the rederivation habit, come in handy.
I also suspect math students have ample opportunities to crunch through dense thickets of words… why oh why do I suddenly find myself thinking of Munkres' Topology and Dummit & Foote's Abstract Algebra?
comment by wearsshoes · 2020-05-16T03:46:04.381Z · score: 3 (2 votes) · LW(p) · GW(p)
On the bus from NYC to Boston for EAGxBoston 2019 I chanced to sit next to a topology professor. I don't have any higher math background, but mentioning that I'd recently read the first few books of the Elements opened the door to a long and interesting conversation. I was amazed that something written two thousand years prior compared so favorably with my own 8th grade geometry experience, which, despite a cool teacher, taught me only the rudiments of geometry and nothing substantial about proofs or theorems. Minus the annoying long s, I'd gift Byrne's illustrated Elements to any smart kid in a heartbeat - it's surprisingly cheap on Amazon.
# Time After Time
Ole Peters was a postdoc at the Santa Fe Institute during the time I was also a postdoc there. In addition to being a world class windsurfer, Ole likes to think about critical phenomena and stochastic processes. And in the TEDxGoodenoughCollege talk below he almost convinces me that I need to think harder about ensemble versus time averages 🙂
This entry was posted in Economics, Mathematics. Bookmark the permalink.
### 8 Responses to Time After Time
1. Frank says:
I think that the example that he presents in the talk is not fair; to model his game as a stochastic process, he should take the logarithm of the “wealth” function (such that this becomes the sum of independent variables), and then compare the time-average with the ensemble average.
2. dabacon says:
Frank: Define fair 🙂
3. dabacon says:
Robin: let me play the devil's advocate (since I agree with much of what you and Frank say). If you have to adjust the quantity you are looking at to make the ensemble equal the time average, then doesn't it feel like in more complicated and realistic situations you have to be really careful about mixing these two ideas?
4. Robin says:
I’ll rephrase Frank’s comment. If you consider the ensemble-average of log(wealth), then you find that the time average and the ensemble average are consistent. That is, both indicate that it’s a losing game.
This suggests the question "Why log? Why not sqrt, or whatever?" Good question. The best answer is that this game is (like the St. Petersburg paradox) one with very long tails that make the mean an unreliable statistic. The median, however, is a reliable statistic. In particular, and unlike the mean, it's invariant under monotone transformations. And in this case, the median tracks the mean of the log.
Ole’s talk is interesting and provocative. But, since as a scientist I feel a certain duty to be skeptical, I’d like to suggest that much of what he say and shows can be boiled down to: “When faced with an ensemble, consider the median [instead of / as well as] the mean.” Which is eminently sensible, but not especially exciting.
5. Ole says:
Thank you for your great comments. All valid points, but let me respond briefly.
Frank: if you take the ensemble-average rate of change of the logarithm of wealth, you're right that it reproduces the time-average exponential growth rate of wealth. Why? Silly question, I know, but here's an interesting perspective. Ergodicity in this context can be viewed as the question whether two limits commute (sample size $\to \infty$ and time $\to \infty$). You can write the logarithm as the limit $\ln(x) = \lim_{n \to \infty} n(x^{1/n} - 1)$. This limit turns out to be equivalent in the calculation to the limit time $\to \infty$. So taking the ensemble average of the logarithm means you are taking an ensemble average of a time average. Because the noise has already been killed by the time average (implicit in the logarithm), all ensemble members are identical, and you end up with the time average (the inner limit).
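The limit representation of the logarithm that Ole uses is easy to check numerically; a tiny sketch:

```python
import math

def log_via_roots(x, n):
    # ln(x) = lim_{n -> infinity} n * (x**(1/n) - 1)
    return n * (x ** (1.0 / n) - 1.0)
```

For moderate `n` this already agrees with `math.log` to several decimal places.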
Robin: good question indeed. The logarithm is a very special function, in this case it encodes the time average in an ensemble average — wow! Messing with the logarithm (like using a sqrt instead) produces results that are very difficult to interpret physically. In the St. Petersburg paradox the special role of the logarithm has been underestimated (Bernoulli wrote that the sqrt is just as good). If you’re interested, have a look here: http://arxiv.org/abs/1011.4404
In statistical mechanics the logarithm ensures entropic extensivity (and a few other properties). Messing with it has been done much more carefully there than in economics (see e.g. Hanel, Thurner, Gell-Mann: http://www.pnas.org/cgi/doi/10.1073/pnas.1103539108), although the precise physical interpretation of, e.g., entropies with generalized logarithms poses a problem there too.
The median will eventually (as time $\to\infty$) reflect the time average in our game, true, and I tend to agree with you that it’s a more meaningful statistic. But it really all depends on what you want — maybe you are interested in the ensemble average. What if you’re the US government and you have 300 million individuals whose “ensemble”-average earnings determine your taxes? But the time-average growth reflects how the typical individual is getting on…
Dave: Thanks for letting me know about the posts. I agree, this is about thinking carefully both about what it is you want your mathematical measure to reflect and about implementing an appropriate measure in a given situation.
6. Robin says:
Dave: I agree completely. In fact, I’ll say something stronger — there are lots and lots of situations where time and ensemble averages are totally different. Let’s not make too big a straw man out of the ergodic hypothesis. According to Wikipedia, it says that “over long periods of time… all accessible microstates are equiprobable”. This is pretty specific to physics, and even then it’s specific to certain systems. (Nobody has ever argued to me that harmonic oscillators are ergodic). I wasn’t aware that ergodicity was taken for granted outside of that realm.
Regarding your first point (about adjusting the quantity): this is the problem with the mean. The mean of f(x) is never equal to f(<x>). So it’s not really a matter of “adjusting” the quantity — there’s a more fundamental problem of “What f(x) do you pick in the first place?” Counterintuitive though it is, there’s nothing sacred about f(x)=x. For instance, the utility of money isn’t linear. So the mean value of wealth isn’t very meaningful. There’s no consensus on how its utility does scale, so it’s hard to justify the mean of any f(\$).
Which is precisely why I’m [somewhat] happier with the median — because it doesn’t vary that way. The median of f(x) is f of the median of x, at least for monotonic f.
7. Robin says:
Ole: I’ve just read 1011.4404, and I commend you on a very clear and thoughtful paper. I can’t respond comprehensively in this space — perhaps we ought to chat over a pint someday. But, while I can’t respond to all the trenchant and interesting points in the paper, I do want to comment on the central argument.
While I agree with all of your mathematics — which, in turn, agree with Bernoulli’s — I fear I remain unconvinced by the idea that time averaging is the central concept. I believe that you’ve actually built your argument upon the very concept that you reject — the use of logarithmic money.
Not, I hasten to emphasize, because of any assumptions about utility. Rather, right around Eq. (5.1), you make the key step of examining the multiplicative factor — the ratio of post-gamble wealth to pre-gamble wealth. And then, very sensibly, you consider its logarithm.
This is indeed the right thing to do. But you have just made exactly the same logarithmic transformation that Bernoulli did. Now, I agree that Bernoulli’s argument was specious — the utility of money is not necessarily logarithmic. The correct reason for treating money logarithmically is that — in games of Kelly type, including St. Petersburg — money increases or decreases geometrically. Its logarithm therefore undergoes a linear random walk.
This statement does not hold for any other function of money. And it is precisely because of this linear random-walk behavior that all sorts of things work out nicely. It is linearity that ensures that rates of change at different times are independent (which gives your central result), and therefore that the ensemble and time averages of log r are equal. And it is the random-walk behavior (which follows from linearity) that makes the distribution of log(\$) at any given time a binomial distribution, and therefore ensures that the median equals the mean.
In summary, I feel that the focus on time average rather than ensemble average is something of a red herring — and that the essential concept is, instead, a [logarithmic] transformation of the main variable, whose necessity is derived not from any notion of utility, but rather from the dynamical map defining the game.
8. Ole says:
Robin: Thank you for your kind words about my paper. I’m glad you enjoyed it. Yes, probably time for a pint.
If one insists on the notion of utility, a nonlinear value of money (for linear $f(x)$ we do have $\langle f(x)\rangle = f(\langle x \rangle)$), then the arguments in http://arxiv.org/abs/1011.4404 can be construed as arguments for the specific form of logarithmic utility. But my perspective is that we shouldn’t even start talking about utility, we shouldn’t introduce some value function, before we run out of objective methods, preferably rooted in the laws of physics. In the St. Petersburg paradox, we only have to invoke time, so let’s not conflate physics (time) with psychology (utility).
You’re absolutely right that it’s key that the dynamics of wealth are multiplicative. To take a time average we must have a dynamic — there is no dynamic specified in Bernoulli’s game, it just sits there in a vacuum, so you could argue the problem is not well-posed. It’s reminiscent of equilibrium vs. non-equilibrium statistical mechanics: in computer simulations of an Ising model in equilibrium, the dynamic is irrelevant as long as it leads to a sampling of the phase space consistent with the equilibrium weights (Boltzmann factors) — you can use Metropolis or Swendsen-Wang or… But if you’re interested in non-equilibrium behavior (relaxation, nucleation etc.), where time is more meaningful, then the specific dynamic is crucial. Also here, I think it’s fair to say that we’ve only just started in the last few decades to understand the significance of time (or dynamics, or non-equilibrium).
The “Ergodic hypothesis” Wikipedia entry seems to refer mostly to the origin of the concept, namely Boltzmann’s microcanonical ensemble, with only energy conservation; the “Ergodic theory” entry seems more inclusive. Since Boltzmann there have been major developments, most relevant to our context being the development of ergodic theory for stochastic processes. This literature gets quite mathematical quite fast. Chapter 9 in “Probability and Random Processes” by Grimmett and Stirzaker gives a broad idea, and the first chapters in “Ergodicity for Infinite Dimensional Systems” by Da Prato and Zabczyk are almost penetrable.
Everything you say makes sense, we just have slightly different perspectives. Personally I’ve gained clarity from mine, reflected in a number of further results and predictions that I wouldn’t have arrived at otherwise. Happy to continue the discussion, but perhaps offline.
|
{}
|
# Force Identification
A boy is holding a ball in his hand (as shown in the figure). The reaction force to the force of gravity on the ball is the force exerted by the
|
{}
|
# Uniqueness of general solution to SHO
This may be a duplicate, though I have searched and not found this question answered, and it may also belong more on Mathematics Stack Exchange than here -- in which case I'll transfer.
My question is: how does one prove (both intuitively and rigorously) that the solution to the SHO, being a linear combination of a sine and cosine, is the most general and unique solution?
The way it is most often solved is by simply suggesting $x(t) = \exp(\Omega t)$, then solving to find $\Omega = \pm i\sqrt{k/m}$, and ending up with $x(t) = A\cos(\omega t) + B\sin(\omega t)$ with $\omega = |\Omega|$.
Suppose I am going through this derivation with a high-school physics enthusiast, and he/she asks me "You've simply supposed $x(t)$ to be exponential, and showed that if it is, the solution is $\dots$, how do you know this is $\textit{the}$ solution?". I've done a differential equations class, and even though I passed it, I seem to have missed this crucial aspect.
EDIT
Since the posting of this question, two answers have been posted only answering the question of $\textit{how}$ the SHO should be solved. A question I did not ask.
My question has boiled down to this; how do I show that the space of solutions of the SHO (and any 2nd order ODE) is two-dimensional? This would answer my question.
• The 'true' solution depends on the initial value, but Euler's formula says the exponential function is related to sine+cosine terms. – Kyle Kanos May 23 '18 at 12:15
• For example, see math.stackexchange.com/q/823470 – Peter Diehr May 23 '18 at 12:23
• The fundamental theorem of linear differential equations tells you that a second order ODE will have two basis functions; for SHO these are Sin and Cos. Linear combinations of these make up the general solution to the homogeneous case. The complex exponential ansatz leads you to the general solution. – Peter Diehr May 23 '18 at 12:26
• @KyleKanos, I'm not looking for a solution to an IVP per se, the general solution of the SHO is a linear combination of those two sines and cosines. Also, I know how to make the sines and cosines out of the complex exponentials - that is not what this question is about at all. Assuming the solution showed above is found, how do I know this is the most general solution? I'd prefer both an intuitive and rigorous proof if possible. Preferably one that a high-school physics enthusiast can understand - not implying I am one, because I'm not. – PaleBlueDot May 23 '18 at 12:28
• @StijnD'hondt so you're looking for the uniqueness & existence theorem for 2nd order Diff Eqs?? – Kyle Kanos May 23 '18 at 12:31
Simple harmonic motion corresponds to a 2nd order differential equation. Such equations have two linearly independent solutions, and the general solution is some linear combination of these two.
A more 'thorough' but tedious algebraic treatment considers the so-called auxiliary equation of the general second order DE
$ay'' + by' + cy = 0$,
and the three possible cases (distinct real roots, repeated real roots, and conjugate complex roots) for the auxiliary equation. For more detail see any introductory book that includes a section on second-order ordinary differential equations.
The general solution for the conjugate complex roots is
$y=e^{\alpha x}(c_1\cos(\beta x) + c_2\sin(\beta x))$.
SHO corresponds to the case of two conjugate complex roots (m = $\pm i\omega$), with $\alpha = 0$ and $\beta=\omega$.
Answering your question in another way: if you guess a solution, substitute it in the DE and see that it solves it, that is a confirmation that it is a solution. Euler's formula does the rest.
EDIT: As per the comment of @KyleKanos, you may be missing the relevant existence-and-uniqueness theorem here.
• Hello, and thank you for your answer. Though I must say it's not an answer to my question. As stated, I already know how to solve these equations, I simply need to know how to prove that the space of solutions to a 2nd order ODE is two-dimensional. I'll edit my question to make this more clear. – PaleBlueDot May 23 '18 at 12:44
• Sure. If you're looking for a proof of that you'd probably be better off asking on math se. It's a generic result that n'th order differential equations have n linearly independent solutions. – Martin C. May 23 '18 at 12:46
• @StijnD'hondt see also e.g. math.stackexchange.com/questions/1089286/… – Martin C. May 23 '18 at 12:59
Consider Newton's second law for a simple Hooke spring
$$m \ddot x = -kx.$$
Below I show two ways of arriving at the general solution for this system. There are other ways of solving this. I will also note that these are not unique solutions until initial conditions have been applied.
The Intuitive Solution
Our solution to the differential equation is a function $f(t)$ that is proportional to its second derivative by a negative constant, i.e.
$$f(t) = -\alpha \frac{d^2f}{dt^2}.$$
Both sine and cosine satisfy this, so you have to add them together to find the general solution. This approach could also let you arrive at the equally-valid solution $x(t) = c_1 e^{i\omega t} + c_2 e^{-i\omega t}$.
In my opinion, the most basic intuitive reason for superposing the solutions is that you don't know where $\sin\omega t$, $\cos\omega t$, or both is the best model until you apply your initial conditions, which provide phase information.
The Rigorous Solution
We can rearrange the differential equation to be
$$\ddot x + \frac{k}{m}x = 0 .$$
Defining $\omega = \sqrt{\frac{k}{m}}$, we have $\ddot x + \omega^2 x = 0$. We can solve our differential equation via its characteristic equation, obtained by replacing derivatives with polynomial powers:
$$u^2 + \omega^2 = 0$$ $$u = \pm i\omega.$$
The solution to a characteristic equation $u = \alpha \pm i\beta$ gives us a solution to our differential equation of
$$x(t) = c_1e^{(\alpha + i\beta)t} + c_2e^{(\alpha - i\beta)t} = c_1 e^{\alpha t}\cos\beta t + c_2 e^{\alpha t}\sin\beta t.$$
Noting that in our case $\beta = \omega$ and $\alpha = 0$, we arrive at the general solution
$$x(t) = c_1 e^{i\omega t} + c_2 e^{-i\omega t} = A\sin\omega t + B\cos\omega t.$$
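A quick numerical sanity check (my addition, not part of the answer): the Wronskian $W = x_1 x_2' - x_2 x_1'$ of $\cos\omega t$ and $\sin\omega t$ is the nonzero constant $\omega$, which is the standard linear-independence criterion behind the claim that these two solutions form a basis of the solution space.

```python
import numpy as np

omega = 2.0
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]

# Two candidate solutions of x'' + omega^2 x = 0.
x1 = np.cos(omega * t)
x2 = np.sin(omega * t)

# Wronskian W = x1*x2' - x2*x1', with derivatives by finite differences.
# Analytically W = omega everywhere: a nonzero constant, so x1 and x2
# are linearly independent.
W = x1 * np.gradient(x2, dt) - x2 * np.gradient(x1, dt)
```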
• Thank you for your answer, and I'm sorry if the way I asked my question was unclear, but this is not an answer to my question. I already know how to solve these equations, in fact, in my original question the exact same solution as you have given is described in words -- making an exponential ansatz and going from there... My question is about proving that the space of solutions to this equation is two-dimensional, basically. – PaleBlueDot May 23 '18 at 12:43
• Ah. Well you know that $\sin\omega t$ and $\cos\omega t$ are linearly independent because the equation is only zero if $A=B=0$. They also obviously span the solution set, so they form a basis. – Zack Hutchens May 23 '18 at 13:04
|
{}
|
# Simulation and Modeling of Heat Transfer
FEATool supports modeling heat transfer through both conduction, that is, heat transported by a diffusion process, and convection (advection), that is, heat transported by a moving fluid with a given velocity field. The heat transfer physics mode supports both these processes, and is defined by the following equation
$\rho C_p\frac{\partial T}{\partial t} + \nabla\cdot(-k\nabla T) = Q - \rho C_p\mathbf{u}\cdot\nabla T$
where $\rho$ is the density, $C_p$ the heat capacity, $k$ the thermal conductivity, $Q$ a heat source term, and $\mathbf{u}$ a vector-valued convective velocity field. In the Equation Settings dialog box shown below, the equation coefficients, the initial value for the temperature $T$, and the finite element shape function space can be specified (note that here the convective velocities u and v are the dependent variables from a coupled incompressible fluid flow physics mode, but they can also be constants or complex expressions).
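For intuition, the conduction part of this equation can be sketched with a minimal 1D explicit finite-difference scheme (an illustration only, not FEATool code; the material properties are assumed steel-like values):

```python
import numpy as np

rho, Cp, k = 7800.0, 450.0, 50.0     # assumed steel-like properties
L, n = 0.1, 51                       # 10 cm rod, 51 grid points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
alpha = k / (rho * Cp)               # thermal diffusivity
dt = 0.4 * dx**2 / alpha             # below the explicit stability limit 0.5

T = np.full(n, 20.0)                 # initial temperature, deg C
T[0] = T[-1] = 100.0                 # prescribed boundary temperatures

# March rho*Cp*dT/dt = k*d2T/dx2 forward in time (no convection, Q = 0).
for _ in range(20_000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
```

After enough steps the profile relaxes to the steady state, here uniformly 100 °C.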
The heat transfer physics mode allows for four different boundary condition types.
• Temperature, $T=T_0$ Prescribes the temperature on the boundary to $T_0$.
Note that $T_0$ does not have to be a constant: as with all coefficients in FEATool, it can be a complex expression involving space coordinates, dependent variables, and derivatives.
• Convective flux, $-\mathbf{n}\cdot k\nabla T = 0$
This boundary condition prescribes a zero diffusive flux, leaving the convective flux unspecified and free, which is appropriate for outflow boundaries in fluids.
• Thermal insulation/symmetry, $-\mathbf{n}\cdot (k\nabla T + \rho C_p\mathbf{u}T) = 0$
This condition sets the heat flux at the boundary to zero which is appropriate for insulated and symmetry boundaries.
• Heat flux, $-\mathbf{n}\cdot (k\nabla T + \rho C_p\mathbf{u}T) = q_0$
The heat flux boundary condition allows the heat flux $q_0$ at the boundary to be prescribed. As with the temperature condition, $q_0$ allows for complex expressions, such as the common convective and radiative conditions to a surrounding medium: in this case one could for example set q_0 = k_ht*(T_amb-T) + c_rad*(T_amb^4-T^4), where T_amb is the ambient temperature of the surrounding fluid and c_rad a constant for the radiation term. T is the name of the dependent variable of the physics mode and k_ht is the thermal conductivity specified in the physics mode subdomain settings.
A model example that incorporates these heat transfer effects is the transient cooling during shrink fitting of a two part assembly [1]. A tungsten rod heated to 84 C is inserted into a -10 C chilled steel frame part. The time when the maximum temperature has cooled to 70 C should be determined. The assembly is cooled through convection to a surrounding medium kept at 17 C with a heat transfer coefficient of 750 W/m^2 K; thus heat flux boundary conditions of the form k_ht*(17-T) are prescribed on all boundaries. Note that the model involves several subdomains with different thermal conductivities, but using k_ht in the boundary prescription will automatically choose the right value.
The FEATool tutorial for the model can be viewed in the tutorial section of the User’s Guide
FEATool Thermal Shrink Fitting Model Tutorial
References
[1] Krysl P. A Pragmatic Introduction to the Finite Element Method for Thermal and Stress Analysis. Pressure Cooker Press, USA, 2005.
Category: heat transfer
Tags: subdomains
|
{}
|
# No evidence for periodicity in reaction time histograms
### Introduction
In my last lab we discussed findings on the periodicity of reaction times (e.g. as referenced in Van Rullen 2003). These studies are usually old (starting with Harter 1968 and Pöppel 1968), with small N and not many trials. There was also an extensive discussion in the Max Planck journal “Naturwissenschaften” in the 90s (mostly in German): a methodological critique by Vorberg & Schwarz 1987, further discussion in Gregson (Gregson, Vorberg, Schwarz 1988), and a new method to analyse periodicity proposed by Jokeit 1990.
This is the newest research I could find on this topic.
### Analysing a large corpus of RT-data
I stumbled upon a large reaction time dataset (816 subjects with ~3370 trials each, 2.3 million RTs in total) from the English Lexicon Project (Balota et al. 2007) and decided to look for these oscillations in reaction times.
After outlier correction (3*MAD rule, see below), I applied a Fourier transform to the histogram of each subject’s RTs (1 ms bins, matching the 1 ms accuracy of the RT measurements). Then I looked for peaks in the spectrum which are consistent over subjects.
Each subject is one line; no effect is visible here (with a log-scaled y-axis no effect can be seen either). Above ~7 Hz, the between-subject range of the power is roughly between 0 and 100.
The following graph summarizes the one above (blue smoother curve = loess, span = 0.1; each dot = mean over 800 subjects):
There are no peaks in the spectrum that I would consider consistent over subjects. I included higher frequencies (up to 250 Hz) to get a visual estimate of the noise level (at such high frequencies, an effect seems utterly implausible). But of course, I’m ignoring within-subject information (i.e. a mixed model of some sort could be appropriate).
### Conclusions
In this large dataset, I cannot find periodicities of reaction time.
### Disclaimer
My approach may be too naive. I’m looking for more powerful ways to analyse these data; if you have an idea please leave a comment! I’m also not suggesting that the effects, e.g. in Pöppel’s data, are not real. Maybe there is a mistake in my analysis, I don’t know the data by heart, or it might depend on the task employed …
### Thoughts
I had results like in Jokeit 1990 (but at 50 Hz, not 100 Hz) when I was using a bin width of 5 ms to 10 ms. The peak (in the figure with 6 ms bins => 150 Hz) shifted depending on bin size. I’m not perfectly sure, but I think it has to do with how integers are binned. In any case, if the effect is real and not an artefact of bin width, it has to show up with larger bin sizes as well. Please note that Jokeit 1990 used a different methodology: he calculated the FFT on the histogram of reaction time **differences**.
I tried to use density estimates, but so far failed to get better results.
### Outlier plot
Percentage of trials marked as outliers. This is well within the recommended limit of 10% (Ratcliff).
### References
Balota, D.A., Yap, M.J., Cortese, M.J., Hutchison, K.A., Kessler, B., Loftis, B., Neely, J.H., Nelson, D.L., Simpson, G.B., & Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39, 445-459. – http://elexicon.wustl.edu/about.asp
library(ggplot2)
library(dplyr)
theme_set(theme_minimal(20))
# d: trial-level data frame from the English Lexicon Project (loading not shown)
d$Sub_ID = factor(d$Sub_ID)
d$D_RT = as.integer(d$D_RT)
d = d %>% group_by(Sub_ID) %>% mutate(outlier = abs(D_RT - median(D_RT, na.rm = T)) > 3*mad(D_RT, na.rm = T))
d$outlier[d$D_RT<1] = TRUE
d$outlier[is.na(d$D_RT)] = TRUE
# outlier plot
ggplot(d%>%group_by(Sub_ID)%>%summarise(outlier=mean(outlier)),aes(x=outlier))+geom_histogram(binwidth = 0.001)
fft_on_hist = function (inp){
maxT = 4
minT = 0
fs = 1000
h = hist(inp$D_RT, breaks = 1000*seq(minT, maxT, 1/fs), plot = F)
h = h$counts
# I tried to use density estimates instead of histograms, but it was difficult
# h = density(inp$D_RT, from = minT, to = 4000, n = 4000)
# h = h$y
f = fft(h)
f = abs(f[seq(1,length(f)/2)])
return(data.frame(power = f, freq = seq(0,fs/2-1/maxT,1/maxT)))
}
d_power = d%>%subset(outlier==F)%>%group_by(Sub_ID)%>%do(fft_on_hist(.))
ggplot(d_power,aes(x=freq,y=(power),group=Sub_ID))+geom_path(alpha=0.01)
ggplot(d_power,aes(x=freq,y=log10(power)))+geom_path(alpha=0.01)
ggplot(d_power%>%group_by(freq)%>%summarise(power=mean(power)),aes(x=freq,y=(power)))+geom_point()+stat_smooth(method='loess',span=0.1,se=F,size=2)+xlim(c(10,250))+ ylim(c(47,53))
Categorized: Blog
|
{}
|
# Internet Problem Solving Contest
## Problem C – Copier
We have a strange box with a big red button. There is a sequence of integers in the box. Whenever we push the big red button, the sequence in the box changes. We call the box a “copier”, because the new sequence is created from the old one by copying some contiguous section.
More precisely, each time the red button is pushed the copier does the following: Suppose that the current sequence in the box is $a_0, a_1, a_2, \dots, a_{m-1}$. The copier chooses some $i, j, k$ such that $0 \le i < j \le k \le m$. Then the copier inserts a copy of $a_i, \dots, a_{j-1}$ immediately after $a_{k-1}$. Note that $j \le k$: the copy is always inserted to the right of the original. Here is how the sequence looks after the insertion:
$$a_0, \dots, a_{i-1}, \underbrace{a_i, \dots, a_{j-1}}_{\rm original}, a_j, \dots, a_{k-1}, \underbrace{a_i, \dots, a_{j-1}}_{\rm copy}, a_k, \dots, a_{m-1}$$
### Problem specification
In the morning we had some permutation of 1…ℓ in the box. Then we pushed the button zero or more times. Each time we pushed the button, a new (not necessarily different) triple (i, j, k) was chosen and the sequence was modified as described above. You are given the sequence S that was in the copier at the end of the day. Reconstruct the original permutation.
### Input specification
The first line of the input file contains an integer t ≤ 60 specifying the number of test cases. Each test case is preceded by a blank line.
Each test case consists of two lines. The first line contains an integer n (3 ≤ n ≤ 100 000) – the length of the final sequence S. The second line contains n integers – the sequence S. For each test case, there is a positive integer ℓ such that S can be produced from some permutation of {1, 2, …, ℓ} using a finite sequence of copier operations.
In the easy subproblem C1 you may also assume that n ≤ 10 000, that the sequence S was obtained from some permutation by pushing the red button exactly once, and that the copier chose j = k, i.e., it inserted the copied subsequence immediately after the original.
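For the easy subproblem, a brute-force sketch (my own, not the official solution): since the copy was inserted immediately after the original, some block s[i:j] is directly followed by an identical block; removing the copy and checking that the remainder is a permutation recovers the answer.

```python
def undo_one_adjacent_copy(s):
    """Undo a single copier push with j == k (easy subproblem C1).

    Brute force over all blocks: find s[i:j] immediately followed by an
    identical copy, remove the copy, and accept the result if it is a
    permutation of 1..l."""
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n):
            w = j - i
            if j + w <= n and s[i:j] == s[j:j + w]:
                cand = s[:j] + s[j + w:]
                if sorted(cand) == list(range(1, len(cand) + 1)):
                    return cand
    return None
```

On the first example test case, undo_one_adjacent_copy([5, 1, 4, 1, 4, 2, 3]) removes the duplicated block 1 4 and returns [5, 1, 4, 2, 3].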
### Output specification
For each test case, output a single line with a space-separated list of integers: the original permutation. If there are multiple valid solutions, output any of them.
### Example
Input:
3
7
5 1 4 1 4 2 3
11
4 3 1 2 3 1 4 3 3 1 4
7
1 1 1 1 1 1 1
Output:
5 1 4 2 3
4 3 1 2
1
The first test case satisfies the conditions for the easy subproblem, the copier duplicated the subsequence 1 4. In the second test case we started with 4 3 1 2, changed it into (4 3) 1 2 (4 3), then changed that sequence into 4 (3 1) 2 (3 1) 4 3, and finally changed that into 4 3 1 2 (3 1 4) 3 (3 1 4).
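The button push itself is easy to express in code; the sketch below (my own, with my own choice of i, j, k values) re-derives the second example test case from the permutation 4 3 1 2.

```python
def push_button(a, i, j, k):
    # One copier operation: insert a copy of a[i:j] immediately after
    # a[k-1], requiring 0 <= i < j <= k <= len(a).
    assert 0 <= i < j <= k <= len(a)
    return a[:k] + a[i:j] + a[k:]

s = [4, 3, 1, 2]
s = push_button(s, 0, 2, 4)  # (4 3) 1 2 (4 3)
s = push_button(s, 1, 3, 4)  # 4 (3 1) 2 (3 1) 4 3
s = push_button(s, 4, 7, 8)  # 4 3 1 2 (3 1 4) 3 (3 1 4)
```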
|
{}
|
# Physical Limitations of Quantum Cryptographic Primitives or Optimal Bounds for Quantum Coin Flipping and Bit Commitment
Abstract : Coin flipping and bit commitment are two fundamental cryptographic primitives with numerous applications. Quantum information allows for such protocols in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum coin flipping and bit commitment protocol by Ambainis achieved a cheating probability of at most 3/4 [A. Ambainis, Proceedings of the 30th Annual ACM Symposium on Theory of Computing, Washington, DC, IEEE Computer Society, 2001]. On the other hand, Kitaev showed that no quantum coin flipping or bit commitment protocol can have cheating probability less than 1/√2 [A. Kitaev, Presentation at the 6th Workshop on Quantum Information Processing (QIP), 2003]. Closing these gaps has been one of the important open questions in quantum cryptography. In this paper, we resolve both questions. First, we present a quantum strong coin flipping protocol with cheating probability arbitrarily close to 1/√2. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + ε in order to achieve a strong coin flipping protocol with cheating probability 1/√2 + O(ε). The optimal quantum strong coin flipping protocol follows from our construction and the optimal quantum weak coin flipping protocol described by [C. Mochon, arXiv:0711.4114, 2007]. Second, we provide the optimal bound for quantum bit commitment. On the one hand, we show a lower bound of approximately γ ≈ 0.739, improving Kitaev’s lower bound. On the other hand, we present an optimal quantum bit commitment protocol which has cheating probability arbitrarily close to γ. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2 + ε in order to achieve a quantum bit commitment protocol with cheating probability γ + O(ε). To obtain the final protocol, we then use the optimal quantum weak coin flipping protocol described by [C. Mochon, arXiv:0711.4114, 2007].
Unlike the previous protocol for coin flipping, our protocol uses quantum effects beyond the weak coin flip. To stress this fact, we additionally show that any classical bit commitment protocol with access to perfect weak (or strong) coin flipping has cheating probability at least 3/4.
Document type:
Journal article
SIAM Journal on Computing, Society for Industrial and Applied Mathematics, 2017, 46 (5), pp.1647--1677. 〈10.1137/15M1010853〉
https://hal.inria.fr/hal-01650970
Contributor: André Chailloux <>
Submitted on: Tuesday, November 28, 2017 - 15:17:16
Last modified: Thursday, April 26, 2018 - 10:28:48
### Citation
André Chailloux, Iordanis Kerenidis. Physical Limitations of Quantum Cryptographic Primitives or Optimal Bounds for Quantum Coin Flipping and Bit Commitment. SIAM Journal on Computing, Society for Industrial and Applied Mathematics, 2017, 46 (5), pp.1647--1677. 〈10.1137/15M1010853〉. 〈hal-01650970〉
|
{}
|
# Inferring a prior belief after observing a behavior?
#### nottolina
##### New Member
In my experiment, a participant goes through a maze made of 32 T intersections. At each intersection he must choose to go either left or right: one option will lead to another T intersection, while the other will lead to a blind alley.
If I code as 1 the times the correct turn is to the right and as 0 the times the correct turn is to the left, this is my maze:
Code:
turn_right <- c(1,0,0,0,1,0,1,0,0,0,0,1,1,0,0,0,1,1,1,0,1,1,0,1,1,1,1,0,1,0,0,1)
At each intersection, a sign points either to the left or to the right. A storm has messed up the signs, so that now only 50% of them are correct. The participant knows that the storm has damaged the signage system, but he does not know what kind of damage.
These are my signs, where a 1 means that the sign points to the correct direction, and a 0 means that the sign points to the wrong direction.
Code:
sign <- c(1,0,1,0,0,1,1,1,0,1,0,1,0,1,0,1,0,0,1,1,1,1,0,0,0,0,0,1,1,0,0,1)
Now I observe the behavior of my participant. Sometimes he follows the sign (1), sometimes he does not (0):
Code:
trust_sign <- c(0,0,0,0,0,0,0,0,1,1,1,0,0,1,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0)
My question: can I infer what is the prior belief of the participant before entering the maze? That is, how much he trusts the signage system?
Since we have binary choices, I thought I could model the participant's choices (trust_sign) with a beta distribution:
Code:
maze <- data.frame(turn_right, sign, trust_sign)
sum32 <- sum(maze$trust_sign[1:32])
curve(dbeta(x, sum32, 32 - sum32), lty="solid", ylim=c(0,6),
      ylab="Probability Density", las=1)
I can also calculate the likelihood of a sign being correct given the actual maze:
Code:
k = 16 # number of times a sign is correct
n = 32 # total number of intersections
numSteps = 200 ## x-axis for plotting
x = seq(0, 1, 1 / numSteps)
L = x^k * (1 - x)^(n - k) ## Likelihood function
L = L / sum(L) * numSteps ## Just normalize likelihood
plot(x, L, type = 'l', lwd = 3, ylim = c(0,6),
main = "Bernoulli Likelihood",
xlab = expression(theta), ylab = "pdf")
Given that likelihood and the behavior seen before, what is the belief of my participant prior to entering the maze? Is this the right framework for this question or am I missing something?
|
{}
|
# Intersections of axes and affine space
1. Oct 17, 2012
### jostpuur
Let natural numbers $N,M$ be fixed such that $1\leq M < N$. Let $x\in\mathbb{R}^N$ be some vector and $V\subset\mathbb{R}^N$ some subspace with $\textrm{dim}(V)=M$. How likely is it that $x+V$ intersects the axes $\langle e_1\rangle,\ldots, \langle e_N\rangle$ somewhere outside the origin?
I mean that $x+V$ intersects the axis $\langle e_n\rangle$ iff there exists $\alpha\in\mathbb{R}$ such that $\alpha e_n\in x + V$ and $\alpha \neq 0$.
By "likely" I mean that for example if $x$ is a sample point of some random vector, and if $V$ is spanned by some $M$ sample vectors, and if the random vectors in question can be described by non-zero probability densities, then what is the probability for intersections of $x+V$ and $\langle e_n\rangle$ to exist?
Example M=1, N=2. If we draw a random line on the plane, chances are that the line will intersect both axes. The outcome that line intersects only one axis, is a special case, which can occur with zero probability.
Example M=2, N=3. If we draw a random plane into three dimensional space, chances are that the plane will intersect all three axes. The plane can also intersect only two or one axes, but these are special cases, which can occur with zero probability.
Example M=1, N=3. If we draw a random line into three dimensional space, chances are that the line will miss all three axes. The line can intersect one or two axes, but these outcomes occur with zero probability. The outcome that the line would intersect all three axes is impossible.
Looks complicated! I don't see a pattern here. What happens when $N>3$?
2. Oct 17, 2012
### HallsofIvy
Before you can ask any question about a "random vector" you will have to specify what that means- in particular what probability distribution the "random" vector will satisfy.
3. Oct 20, 2012
### jostpuur
I was unable to prove this, but I worked out rough reasoning that convinced me that the answer is this: if M=N-1, the affine subspace will intersect all axes with probability 1, and if M<N-1, the affine subspace will miss all axes with probability 1.
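A numerical check of this claim (my own sketch): $\alpha e_n \in x + V$ means the $N\times(M+1)$ linear system $[e_n \mid -V](\alpha, c)^T = x$ has a solution, which generically happens iff $M+1 \geq N$. Testing with $N=4$:

```python
import numpy as np

rng = np.random.default_rng(1)

def hits_axis(x, V, n, tol=1e-8):
    # alpha*e_n in x + span(V)  <=>  [e_n | -V] @ (alpha, c) = x is solvable;
    # test solvability via the least-squares residual.  (Generically the
    # solution also has alpha != 0, which we do not check here.)
    N = len(x)
    e = np.zeros(N)
    e[n] = 1.0
    A = np.column_stack([e, -V])
    sol = np.linalg.lstsq(A, x, rcond=None)[0]
    return np.linalg.norm(A @ sol - x) < tol

N = 4
x = rng.normal(size=N)
V_hyper = rng.normal(size=(N, N - 1))  # M = N-1: expect all axes hit
V_line = rng.normal(size=(N, 1))       # M = 1 < N-1: expect all axes missed
hits_hyper = [hits_axis(x, V_hyper, n) for n in range(N)]
hits_line = [hits_axis(x, V_line, n) for n in range(N)]
```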
HallsofIvy's comment is not on the right track. The probability densities don't need to be specified with any greater precision than what I already gave. Look at the three low dimensional examples, for instance. It should be clear that the claim works out like I said.
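jostpuur's heuristic can be backed by a standard dimension count (a sketch under genericity assumptions, not a full proof):

```latex
% Generic (transversal) intersection of two affine subspaces in R^N:
\dim\bigl((x+V)\cap\langle e_n\rangle\bigr)
  \;=\; \dim(x+V) + \dim\langle e_n\rangle - N
  \;=\; M + 1 - N .
% If M = N-1 this is 0: the hyperplane x+V meets each axis in a single
% point unless it is parallel to that axis, which is a measure-zero event,
% so all N axes are intersected with probability 1.
% If M < N-1 the expected dimension is negative: a random x+V misses
% every axis with probability 1, matching the M=1, N=3 example.
```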
|
{}
|
Free theorems.
F: I would say that a detailed understanding of parametricity is an intermediate topic. Are you studying out of interest, or because you think it is necessary to continue with your FP learning?
A:
def foo[A](a: A): A
How many possible functions foo are there assuming that foo must be total, not throw exceptions, and not do I/O (including reflection + methods like getClass)?
K: F, Do I have to understand in detail for FP?
F: The approach A is taking is what we primarily use it for in everyday programs. Go with his explanation and you’ll know what you need to know for everyday use I would say, no need to go to the paper at this point.
K: foo is the identity function.
A: Right.
def bar[A](a: A, b: A): A
Same question.
K: What do you mean possible functions?
A: How many different functions can you implement with such signature?
K: Just two because it has to return one of the inputs.
A: Yes.
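[Editor's note: a minimal sketch of the counting argument, with names of my own choosing. When both arguments share the type A, parametricity leaves exactly the two projections; any other body would need to inspect or fabricate an A.]

```scala
// The only two total, effect-free implementations: return one input or the other.
def barFirst[A](a: A, b: A): A = a
def barSecond[A](a: A, b: A): A = b

assert(barFirst(1, 2) == 1)
assert(barSecond("x", "y") == "y")
```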
def baz[A](a: List[A]): List[A]
Suppose I run baz(List(1, 2, 3)). Can I get a list with a 4 back?
K: No.
A: What kind of things can baz do to my list? Can it, for instance, sort the list? Reverse it? Concatenate two copies? Return the first element if any or Nil?
R: Sure thing you can
A: No you can not. We are still assuming that functions must be total, not throw exceptions, and not do I/O (including reflection + methods like getClass).
R: Sorry, you’re right, can’t get List(4) back. But you can still have something like:
def baz[A](a: List[A]): List[A] = Nil
I.e. it’s not identity only. Can do anything specific to list, but not to its elements.
A: Right. You can’t sort or find the least element, but you can for instance do:
def baz[A](a: List[A]): List[A] = a ++ a
def baz[A](a: List[A]): List[A] = a.reverse
def baz[A](a: List[A]): List[A] = a match {
case Nil => Nil
case x :: xs => List(x)
}
You can’t even map in a meaningful way, because the only function you have is id: A => A.
K: What does A => A mean? A function?
A: A type of function.
A:
def baz[A](a: List[A]): List[A] = a.map(f)
f can only be an identity function here, because it has a signature A => A (for any A).
A: Theorems For Free paper shows a way to derive all such “laws” for polymorphic functions. There are more advanced things you can say, for instance:
// For any a: List[A],
// f: A => B, and
// baz: [A] List[A] => List[A]
baz(a).map(f) == baz(a.map(f))
Since FP languages (or FP discipline) constrains possible programs you can write, reasoning about them becomes easier. Polymorphic functions are easier to reason about because there is not much they can possibly do. Freedom to do reflection or I/O in every single function takes that reasoning away.
K: What do you mean by “For any a: List[A], f: A => B, and baz: [A] List[A] => List[A]?
A: It is straightforward to prove (using that paper) that whatever types A and B you choose, whatever a, f of types List[A], A => B you come up with, baz(a).map(f) == baz(a.map(f)) will be true. baz can not observe any properties of the elements of a, so it can’t modify them, only reorder / trim / reverse.
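[Editor's note: a quick spot-check of the law baz(a).map(f) == baz(a.map(f)) for two concrete choices of baz; the names are mine, for illustration only.]

```scala
// Two functions with the shape [A] List[A] => List[A]:
def reverseBaz[A](l: List[A]): List[A] = l.reverse
def dupBaz[A](l: List[A]): List[A] = l ++ l

val a = List(1, 2, 3)
val f = (n: Int) => n.toString

// The free theorem holds: map-then-baz equals baz-then-map.
assert(reverseBaz(a).map(f) == reverseBaz(a.map(f)))
assert(dupBaz(a).map(f) == dupBaz(a.map(f)))
```

This is only a sample check on one list, of course; the point of the paper is that the law holds for every a and f without testing.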
K: So if I would like the reorder the list in baz, how would the signature look like?
R: I guess
def baz[A: Ord](a: List[A]): List[A]
A: If you want to be able to compare elements, that is correct.
def baz[A](l: List[A]): List[A]
already allows you to reorder them, but not based on the elements themselves.
R: Well, I meant some meaningful reordering, besides reverse :smile:.
A: Is List(1, 2, 3, 4) => List(2, 1, 4, 3), swapping adjacent pairs of elements meaningful?
R: Well, meaning is a fuzzy concept.
M: You can also write a def reorder[A](a: List[A]): List[A], but the way you reorder them will be fixed, that is to say, if you pass a List[Int], it will be re-ordered in the exact same way as if you pass a List[String], because you can’t get to the Int or the String.
A: Here is another interesting example of the free theorems in action:
trait Eq[A, B] {
def subst[F[_]](fa: F[A]): F[B]
}
How many possible Eq[A, B] are there (assuming you can’t add extra defs, vals, vars, pattern match on an open trait, etc)? It’s a trick question.
R: Zero?
A: It depends on whether A is the same as B. There is exactly one Eq[A, A] for any A, and exactly zero Eq[A, B] for different A and B. So Eq represents type equality.
R: But the signature is silent about whether A = B, So the only safe assumption is that they’re different.
M: That’s the only safe false assumption :wink:.
A:
def refl[A]: Eq[A, A] = new Eq[A, A] {
def subst[F[_]](fa: F[A]): F[A] = fa
}
M: If you have an instance of Eq[A, B], you know for sure that A = B, because otherwise you couldn’t have one, right?
A: Yes. This amazing data-type is so powerful, that in Idris you can use it to prove theorems, same way you can with built-in =. And you can use Eq to implement dynamic typing in functional languages.
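[Editor's note: a sketch of the "safe cast" this enables, with my own names (Eq2, refl2, Id, cast) to avoid clashing with the snippets above. Substituting through the identity type constructor turns an Eq2[A, B] into a plain function A => B with no runtime casts.]

```scala
import scala.language.higherKinds

// Leibniz-style type equality, as in the chat.
trait Eq2[A, B] { def subst[F[_]](fa: F[A]): F[B] }
def refl2[A]: Eq2[A, A] = new Eq2[A, A] { def subst[F[_]](fa: F[A]): F[A] = fa }

// Instantiate subst at the identity type constructor: F[A] => F[B] becomes A => B.
type Id[X] = X
def cast[A, B](eq: Eq2[A, B], a: A): B = eq.subst[Id](a)

assert(cast(refl2[Int], 42) == 42)
```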
R: But what’s the point in defining it with two different type parameters, if you know for sure you can only have it with one? Sorry for stupid question.
M: There are no stupid questions.
A:
def foo[A, B](eq: Eq[A, B], a: A): B
Because you can have two different types in one context that are equal in another. The caller of foo knows that A = B, but inside foo they look like different types. Here is another, more realistic example:
sealed abstract class F[A, B] {
def isFoo: Option[Eq[A, B]]
}
final case class Foo[A]() extends F[A, A] {
def isFoo: Option[Eq[A, A]] = Some(refl[A])
}
You can usually just pattern match case Foo() => to recover A = B, and it will just work. However, GADTs in Scala are utterly broken, so it’s sometimes not the case.
sealed abstract class F[A, B]
final case class Foo[A]() extends F[A, A]
def f[A, B](fab: F[A, B], a: A): B = fab match {
case Foo() => a
}
doesn’t work in ScalaFiddle (2.11?).
Y: I was eavesdropping and the concept of free theorems sounds fascinating. Thanks!
A: https://alexknvl.com/cgi-bin/free-theorems-webui.cgi - there is an automated version of the paper. When I enter [a] -> [a] (Haskell’s syntax for [A] List[A] => List[A]), I get
map g (f x) = f (map g x)
Or in Scala:
f(x).map(g) == f(x.map(g))
A: For f: [A] A -> A it produces forall g: A => A . g(f(x)) = f(g(x)), which means that f is an identity. Notice that g above is not necessarily polymorphic in A, which is precisely why the law implies that f is an identity.
Now let’s look at:
def f[A]: A = ???
f must return a value of type A for any type A. Is it possible to fill in the ??? to make that work? What if I run f[MySecretType]:
final class MySecretType private ()
val x: MySecretType = f[MySecretType]
Clearly, unless we go outside Scalazzi language subset, there are no possible implementations of f.
Let’s look at a -> b -> c, or in Scala’s notation [A, B, C](a: A, b: B): C. How many functions of this type are there? Consider partially applying it to its arguments a and b and only then specifying C:
val f : [A, B](a, b)[C]: C
This function returns [C] C, which as we already know, has no possible instances.
Y: Could you help me out with the analysis of (a -> b -> c) -> [a] -> [b] -> [c]?
A: Let’s first rewrite it in Scala.
def f[A, B, C](p: (A, B) => C, la: List[A], lb: List[B]): List[C]
The caller knows A, B, C, so they can supply a p. Compare it to the above discussion: there is no polymorphic [A, B, C] (A, B) => C, but for concrete A, B, and C there can be tons of functions (A, B) => C. An important distinction.
Y: Right.
A: Here is the free theorem for f :: (a -> b -> c) -> [a] -> [b] -> [c], def f[A, B, C](p: (A, B) => C, la: List[A], lb: List[B]): List[C] according to the generator:
forall t1,t2 in TYPES, g :: t1 -> t2.
forall t3,t4 in TYPES, h :: t3 -> t4.
forall t5,t6 in TYPES, f1 :: t5 -> t6.
forall p :: t1 -> t3 -> t5.
forall q :: t2 -> t4 -> t6.
(forall x :: t1. forall y :: t3. f1 (p x y) = q (g x) (h y))
==> (forall z :: [t1].
forall v :: [t3]. map f1 (f p z v) = f q (map g z) (map h v))
Y: Yeah kind of blown away.
A: Well, first we can intuit that f can’t look inside A, B, and C, and can’t produce C out of thin air, so to return List[C], it must call p.
Y: Yes.
A: It can’t produce A or B out of thin air either, so it must use elements of la and lb to call p.
Y: Sure.
A: f can be zipWith, or it can apply any sort of [A] List[A] => List[A] on la and lb and then zipWith; intuitively, that is all it can do.
Y: zipWith is definitely what I intended. But I am curious how zipWith relates to the theorem.
A: The theorem says that if gh (p x y) = q (g x) (h y), then map gh (f p la lb) = f q (map g la) (map h lb). So it basically says that if you first map the two lists, it’s the same as mapping the result.
K: What does forall stand for?
A: When it says forall t :: TYPES it means roughly the same as [T] in Scala, if it is forall a :: t where t is a type, then it means “whatever the value of a”.
A: I’ll rewrite everything in Scala in a sec, that will make it much clearer I think.
if gh(p(x, y)) = q(g(x), h(y)) then
f(p, la, lb).map(gh) = f(q, la.map(g), lb.map(h))
Now, we know from our discussion before that to produce elements of List[C], f must use p, so every element of List[C] was obtained by calling p. So we can move gh inside:
if (gh compose p)(x, y) = q(g(x), h(y)) then
f(gh compose p, la, lb) = f(q, la.map(g), lb.map(h))
I think this is pretty clear.
Y: I think I’m not fully there yet… What is the literary meaning you want to pull out?
A: Let’s simplify a bit further, define p' to be gh compose p:
if p'(x, y) = q(g(x), h(y)) then
f(p', la, lb) = f(q, la.map(g), lb.map(h))
See how it makes sense?
Y: Ahah. Yes, this is amazing.
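[Editor's note: a concrete check of the law just derived, with f instantiated to zipWith. The names (zipWith, p2, q, g, h) are my own. By constructing p2 as q composed with g and h, the theorem's precondition p2(x, y) == q(g(x), h(y)) holds by definition, and the conclusion can be asserted directly.]

```scala
// f specialized to zipWith: [A, B, C] ((A, B) => C, List[A], List[B]) => List[C]
def zipWith[A, B, C](p: (A, B) => C, la: List[A], lb: List[B]): List[C] =
  la.zip(lb).map { case (x, y) => p(x, y) }

val g = (n: Int) => n + 1          // t1 -> t2
val h = (s: String) => s.length    // t3 -> t4
val q = (n: Int, len: Int) => n * len
val p2 = (n: Int, s: String) => q(g(n), h(s))  // precondition holds by construction

val la = List(1, 2, 3)
val lb = List("a", "bb", "ccc")

// Conclusion of the free theorem: both sides are List(2, 6, 12).
assert(zipWith(p2, la, lb) == zipWith(q, la.map(g), lb.map(h)))
```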
Y: Haven’t read through the paper yet, so I’m only giving wild guesses. My guess is that a theorem derived from the type of a function is a key indicator of the properties of the function? So it would seem natural to think that a function’s type already speaks much about the semantics of a function!
A: If you are disciplined with your code, then yes. On JVM you can break all sorts of rules.
Y: Right… I’ve always had an intuition that a function’s type already speaks much about what it does, and that’s why I’ve been looking into languages like Scala in the first place. I guess Free Theorems is a solid foundation for my religious beliefs.
A: One of the reasons FP people advocate so strongly for:
• no null
• real parametricity
• tail-call elimination
• no I/O unless in a monad
• no exceptions
• no partial functions
is because all of these (or lack thereof) break this reasoning in one way or another.
P: :+1: The idea is just to build a bubble of determinism in a world of randomness and unpredictability, in which you can reason sanely… It doesn’t prevent IO or mutations but it does it at the boundaries of the bubble, not inside.
Y: Definitely. A, thanks for the info. Really helped broaden my insights.
R: Thanks A.
A: Paul Phillips has rightly noticed that the three principles of INGSOC apply nicely to FP. The last two for sure:
• War Is Peace
• Freedom Is Slavery - side-effects (freedom) enslave.
• Ignorance Is Strength - ignorance (parametricity) gives you strength (free theorems).
|
{}
|
Max Planck Institut Informatik
# MPI-INF or MPI-SWS or Local Campus Event Calendar
Title: Subcubic Equivalences Between Graph Centrality Problems, APSP and Diameter
Speaker: Fabrizio Grandoni (IDSIA/Lugano)
Event: AG1 Mittagsseminar (own work); Audience: D1, MMCI (AG audience); Language: English
Date: Tuesday, 16 September 2014, 13:00 (45 minutes), Saarbrücken, E1 4, room 024
(joint work with Amir Abboud and Virginia Vassilevska Williams)
Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w.
The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e. algorithms with running time $\tilde{O}(n^{3-\delta})$ for some constant $\delta>0$. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. It is easy to see that Diameter can be solved using an algorithm for APSP with a small overhead. However, no reduction is known in the other direction, and it is entirely possible that Diameter is a truly easier problem than APSP.
We show that Radius, Median and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e. that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP required essentially cubic time. Indeed, our reductions already imply an algorithm for Reach Centrality in graphs with small integer weights that is faster than the best known algorithm for APSP in the same family of graphs.
Name(s): Andreas Wiese awiese@mpi-inf.mpg.de
|
{}
|
# Tag Info
2
I solved it the following way, just want make sure I'm not missing something obvious. Set up a portfolio $PF$ consisting of long $S$ and short $P$ at time $t = 0$. Choose arbitrary time $0 < t < T$. If $S_t > P_t$ then $PF_t = S_t - P_t$ which coincides with the value of the option. If $S_t$ hits $P_t$ from above, then dissolve the portfolio by ...
1
You introduce a discretized auxiliary variable which represents $S_t$ to solve $N$ PDEs on $[t, t+\tau]$ using finite differences, which will give you the present value of the option at time $t$ conditional on $S_t$. Then you solve one PDE using finite differences on $[0, t]$ to obtain the present value at time $0$. This is the same methodology as that ...
1
I think it's ok $$S_T = e^{\ln S_T}$$
1
There is a problem in your last step. Note that \begin{align*} P_{t, T_2}E_{Q_{T_2}}\left(\frac{1}{P_{T_1, T_2}} \mid \mathcal{F}_t \right) &= P_{t, T_2}E_{Q_{T_2}}\left(\frac{P_{T_1, T_1}}{P_{T_1, T_2}} \mid \mathcal{F}_t \right)\\ &=P_{t, T_2} \times \frac{P_{t, T_1}}{P_{t, T_2}}\\ &=P_{t, T_1}. \end{align*}
1
The option payoff is equivalent to $Z_{\tau \wedge T}-1$ where $\tau=\inf\{t | Z_t = 1\}$ provided that $Z_t$ is assumed to be continuous. Since $Z_t=S_t/P_t$ is a martingale under $Q_P$, we have $E_P[Z_{\tau \wedge T}]=Z_0$ and the option value is $P_0 (Z_0 - 1)=S_0-P_0$ regardless of the model.
1
The option payoff at maturity $T$ is defined by \begin{align*} (S_T-P_T)1_{\left(\inf_{0 \le t <T}\frac{S_t}{P_t}\right) > 1}. \end{align*} Let $Q$ be the risk-neutral probability measure and $E$ be the corresponding expectation operator. Let $Q_p$ be a probability measure defined by \begin{align*} \frac{dQ_p}{dQ}\big|_t = \frac{P_t}{e^{rt} P_0}. ...
1
From a practitioner perspective, I can say there's no such thing as a 0 year swap (obviously). The shortest tenor that you could trade would be a contract on one month LIBOR or, more likely, 3 month LIBOR. Then the instrument you are asking about is a 5 year expiration caplet (payoff in 5 years = max(0, LIBOR - strike)).
1
There's no best method. The question is : what is the behavior of the volatility structure (atm and skew) when the underlying moves? Each method assumes something different. In the real market, one method might work well for a period of time (in the sense that it minimizes residual p/l), but then another method might take over as best. Practitioners ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
# I have been having trouble on this problem and would really appreciate some help
#### Lorynn
I've been trying to work through this problem for hours and I am not sure on how to go about solving this problem.
Some help would be extremely appreciated.
" Show the two synthetic division "lines" for the integral values on either side of the upper real zero of f(x)=x^3-9x^2-x-5 "
I have tried dividing the equation synthetically using the possible rational roots +1, -1, +5, and -5, and each of these gives me a remainder. I am so lost currently and would really appreciate some help.
---
Lorynn
#### chiro
MHF Helper
Hey Lorynn.
What do you mean by synthetic division lines? Are you just trying to factor the polynomial?
#### Lorynn
I'm not exactly sure, that is word for word how the problem was worded and I am not exactly sure on what is meant.
#### Plato
MHF Helper
I'm not exactly sure, that is word for word how the problem was worded and I am not exactly sure on what is meant.
I fear that you are at the mercy of a local set of definitions that are not in general use.
#### Archie
The real root is at $$\displaystyle x \approx 9.2$$. I don't know from the question how you are expected to find that though (other than trial and error).
#### chiro
MHF Helper
There is a formula for finding the roots of cubics.
Have you covered this?
#### skeeter
MHF Helper
" Show the two synthetic division "lines" for the integral values on either side of the upper real zero of f(x)=x^3-9x^2-x-5 "
This may be an example of locating the zero of a polynomial using the Intermediate Value Theorem.
In the old days "BC" (before calculators), one method to localize a zero of a polynomial was to use synthetic division for integral values of x, looking for a sign change in the remainder.
In this example, for integral values of $x\le 9$, $f(x) < 0$, and for integral values of $x \ge 10$, $f(x) > 0$.
Since f(x) is continuous over its domain, the Intermediate Value Theorem says that a zero for f(x) must exist in the interval $9 < x < 10$.
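For completeness, here are the two synthetic division lines skeeter refers to, worked at $x=9$ and $x=10$ in the standard tableau layout:

```latex
% f(x) = x^3 - 9x^2 - x - 5, divided synthetically by (x - 9):
%    9 |  1   -9   -1    -5
%      |       9    0    -9
%      ----------------------
%         1    0   -1   -14     remainder: f(9) = -14 < 0
%
% and by (x - 10):
%   10 |  1   -9   -1    -5
%      |      10   10    90
%      ----------------------
%         1    1    9    85     remainder: f(10) = 85 > 0
%
% The remainders change sign, so the upper real zero lies in (9, 10).
```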
Last edited:
|
{}
|
1. inverse trig.
If siny=x and pi/2<y<pi, find dy/dx in terms of x
thank you for any help given.
2. Hi, I'm new; today was my first day being introduced to inverse trig. Please tell me if I am incorrect, as it would benefit both of us.
If siny=x and pi/2<y<pi, find dy/dx in terms of x
y is an angle and x is a ratio.
siny=x, pi/2 < y < pi, sin is positive in this domain.
so x'= cosy
3. Originally Posted by iiharthero
If siny=x and pi/2<y<pi, find dy/dx in terms of x
thank you for any help given.
$\frac{dx}{dy} = \cos y \Rightarrow \frac{dy}{dx} = \frac{1}{\cos y}$.
But $\sin y = x \Rightarrow \cos y = -\sqrt{1 - x^2}$ for $\frac{\pi}{2} < y < \pi$. Therefore ....
4. Originally Posted by mr fantastic
$\frac{dx}{dy} = \cos y \Rightarrow \frac{dy}{dx} = \frac{1}{\cos y}$.
But $\sin y = x \Rightarrow \cos y = -\sqrt{1 - x^2}$ for $\frac{\pi}{2} < y < \pi$. Therefore ....
hi, please explain if it is possible how $\cos y = -\sqrt{1 - x^2}$ for $\frac{\pi}{2} < y < \pi$.
hi, please explain if it is possible how $\cos y = -\sqrt{1 - x^2}$ for $\frac{\pi}{2} < y < \pi$.
$\sin^2 y + \cos^2 y = 1$ and cosine is negative in the second quadrant.
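Combining the two steps above (the minus sign is exactly the second-quadrant observation):

```latex
\frac{dy}{dx} \;=\; \frac{1}{\cos y}
  \;=\; \frac{1}{-\sqrt{1-\sin^2 y}}
  \;=\; -\frac{1}{\sqrt{1-x^2}},
\qquad \frac{\pi}{2} < y < \pi .
```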
|
{}
|
# Suggestions for STM-012?
### Help Support The Rocketry Forum:
#### Screaminhelo
I recently got the Estes bundle 3 and my son decided that I would be building the STM-012 first. As I am building it I am beginning to think that this thing could be a low end MPR rocket and a way to get my feet wet with DD. I am planning on getting a Quark or a Quantum in the near future and figured that I would reach out to the forum for some collective wisdom. Here are my questions:
1. Is this even a good idea? I plan on keeping it as a 24mm rocket, only modifying the mount to butt the aft CR against the aft edge of the fin tabs and possibly adding a screw on retainer. I figure that most of the launches would be BP but the occasional composite E would be a fun adventure with the odd F when we are feeling froggy.
2. My initial plan is to put bulkheads on the coupler and blow the drogue from the bottom half and turn the top half into an avbay (possibly the nose cone). The sled would be removable so that it could be installed for the higher flights and removed for flying a smaller on BP motors.
Those are the only questions that I can think of right now, feel free to add any issues or ideas that I have overlooked.
#### sl98
##### Well-Known Member
I fly a lot of BT-60 size Estes rockets dual deploy using a Quark and 24mm reloads. The Quark is the way to go for this application because of the weight savings for both the altimeter and battery. This is a great way to fly DD.
#### swatkat
##### Down these mean skies, a kat must fly!
You can also turn it into a two stage BP, with very little modding.
#### Screaminhelo
sl98- Would you do this one with an avbay in the upper tube or in the nose cone? Putting the bay in the nose could be beneficial for larger motors but a slide in bay in the upper tube would have less of an adverse effect when flying on BP motors. I'll soon have basic construction finished so that I can do a static balance and a swing test to help me decide.
swatkat- That reminds me of the Rocketarium Maxtormind. I wish I had gotten one before they discontinued it but a similar mod to yours would capture the look quite well.
#### sl98
##### Well-Known Member
sl98- Would you do this one with an avbay in the upper tube on in the nose cone? Putting the bay in the nose could be beneficial for larger motors but a slide in bay in the upper tube would have less of an adverse effect when flying on BP motors. I'll soon have basic construction finished so that I can do a static balance and a swing test to help me decide.
I'm currently helping someone make an AV bay for a STM-012. We are using 4" of coupler and a 2" band. It will look like this one when it is done (this is a BT-60 AV bay I use on several different rockets with a Quark and 1S LiPo)
The bay will go between the upper and lower tubes and will use 3 small push rivets between the upper fin set to hold the AV bay. Simply remove the sled/electronics and put the bulkheads back on to fly motor deploy.
If the band is centered on the coupler then you have 1" of exposed coupler plus the thickness of your bulkhead. The only thing not decided is whether to shift the band up so there is less coupler exposed on the upper section or to trim down the fin tabs.
I operate on the keep it simple, keep it light method for 24mm DD. I used 1/8 light-ply for the outer bulkheads and 3/32 basswood for the inner bulkhead, 4/40 threaded rod, nylon wing nuts that are ground down (to fit in tube) and very light weight eye screws. I drilled a hole and glued a piece of 1/4 dowel for the eye screw. Sled is 3/32 basswood. Avoid the urge to use #8 rod and 1/4-20 eyebolts.
#### cerving
TRF Supporter
I got 3 of them during the recent Estes sale for like $7 each, planning on making a 29mm/24mm 2-stager out of 2 of them (the BT60 tubes with the slots).
You could make a MPR out of it if you paper the fins or replace them with light ply, and I would recommend getting some coupler stock from BMS and reinforcing everything ahead of the front centering ring with it. Otherwise, it's not going to last very long. While you're at it, get light ply centering rings, too.
#### sl98
##### Well-Known Member
I got 3 of them during the recent Estes sale for like $7 each, planning on making a 29mm/24mm 2-stager out of 2 of them (the BT60 tubes with the slots).
Air start 2 stage or BP?
#### Forever_Metal
##### JustAnotherBAR
to be stock or dd?
both sound cool!
fm
#### Screaminhelo
to be stock or dd?
both sound cool!
fm
I agree! The breakdown of the kit is calling me to use it to get my DD feet wet though. There are some rockets out there that I would love to fly but the limitations of my regular field rule them out without going DD. If winds are co-operating, I can do 1k but that is only on one axis. Even doing DD I will have to pay close attention to winds and, will likely, send up a weather bird to verify ground observations.
#### smoon
I got 3 of them during the recent Estes sale for like $7 each, planning on making a 29mm/24mm 2-stager out of 2 of them (the BT60 tubes with the slots). You could make a MPR out of it if you paper the fins or replace them with light ply, and would recommend getting some coupler stock from BMS and reinforcing everything ahead of the front centering ring with it. Otherwise, it's not going to last very long. While you're at it, get light ply centering rings, too. For$7 each, I also bought three of them. My goal is to make a three stage utilizing gap staging for both booster sections. For the $14 for the kit and a half it will take to construct it, if it doesn't work out, I still have the parts to make another, once I analyze what happened. I will be papering the fins and using plywood centering rings on the extended motor mounts for the gap staging. No dual deploy for me. I will be using one of my chute releases instead. Steve #### Screaminhelo ##### Shade Tree Rocket Surgeon For$7 each, I also bought three of them. My goal is to make a three stage utilizing gap staging for both booster sections. For the $14 for the kit and a half it will take to construct it, if it doesn't work out, I still have the parts to make another, once I analyze what happened. I will be papering the fins and using plywood centering rings on the extended motor mounts for the gap staging. No dual deploy for me. I will be using one of my chute releases instead. Steve Good luck! It will.be a bit of a stretch (pun intended) but somebody had to give it a try! Methinks that there will be a few folks here on the boards that will be interested to see if you can get it to work. #### smoon ##### Well-Known Member Good luck! It will.be a bit of a stretch (pun intended) but somebody had to give it a try! Methinks that there will be a few folks here on the boards that will be interested to see if you can get it to work. I will be testing it out in stages (pun also intended ). 
The sustainer, being about half an STM-012, will likely fly on an 18mm motor for its maiden flight. Then I will add stages and see if I can make it to three without lawn darting, shredding it, or having it fly away. D12-0 to D12-0 to C6-5 sims to more than 1300 feet. That's pretty high for this poor little guy. Of course, my initial flights will be C11s to a B6, so under 1000 feet, if all goes well.
Steve
#### swatkat
##### Down these mean skies, a kat must fly!
I'll be flying a D12-0 to D12-7 on the model pictured above in the thread this weekend. It has the capability to do 24x95's in each stage. I'll put the JL3 in it to get an altitude reading for you. I'm simming it at ~1300 feet. Rocket weight is 6.8 oz dry.
#### Forever_Metal
##### JustAnotherBAR
For $7 each, I also bought three of them. My goal is to make a three stage utilizing gap staging for both booster sections. For the $14 for the kit and a half it will take to construct it, if it doesn't work out, I still have the parts to make another, once I analyze what happened.
I will be papering the fins and using plywood centering rings on the extended motor mounts for the gap staging.
No dual deploy for me. I will be using one of my chute releases instead.
Steve
kinda sorta dual-deploy... Love the JLCR!
fm
#### swatkat
##### Down these mean skies, a kat must fly!
I'll be flying a D12-0 to D12-7 on the model pictured above in the thread this weekend. It has the capability to do 24x95's in each stage. I'll put the JL3 in it to get an attitude reading for you. I'm simming it at ~1300 feet. Rocket weight is 6.8 Oz dry.
It flew just fine. With the wind it did weathercock quite a bit, and my son and I ended up going for a 1/4 mile walk.
|
{}
|
# Sound board upgrade from Mackie CFX-20
#### jkowtko
##### Well-Known Member
I'm outgrowing my 20 channel Mackie CFX-20 and, with digital in the back of my mind for future purchase, I am short-term looking at something to give me more capacity at a modest price. What should I look at?
My needs are as follows:
* 12-16 mike input channels
* 2 CD stereo inputs
* 6-8 Sound Cue PC inputs
* 6-8 speaker outs (using sub-outs)
* 2 aux channel outs for recording device
* 2 aux outs for stage monitors
I would like to buy something used, and try to keep it under $1000 ... definitely under $1500.
Right now the Mackie 32.8 analog seems to be a pretty capable board for the price on the used market. The optional meter bridge is an advantage as well.
Any other suggestions for boards I should be looking at?
Thanks. John
#### SHARYNF
##### Well-Known Member
A lot depends on how you are planning on using the Mackie. That version tends not to move very well.
I tend to come down on the digital side of things, and these days, you can pick up Yamaha 02r's for the sort of money you are looking at on ebay. It's big and heavy, but certainly had a good reputation, has lots of io options etc
This item on ebay for instance has a buy it now. You can pick up Behringer micpres/adat units for pretty cheap and they work quite well to expand out your inputs and because of the wierd way behringer made this unit, you have independant adat outs to line level so you can easily turn this into a 32 input mixer with pre's and add 16 additional outputs using the adats. This would allow you to do 16 track recording also if you wanted.
The TRS inputs can be converted to XLRs easily, and you can find phantom power units for these additional inputs. Anyway, these are built like tanks; just a thought.
120092394340 is an auction for one pretty well set up for 1500 dollars. If you do a search, make sure you use o2r and 02r, since they get listed both ways.
sharyn
Last edited:
#### jkowtko
##### Well-Known Member
Sorry, I didn't clarify my usage. Community theater, 3 dramas and 3 musicals each year, plus kids conservatory two productions, and misc special events. So all live, no recording studio, although I do like to record the shows. The board pretty much stays in one place, and if I need a portable board for special events I can get something small like a Mackie DFX-6/12 to tote around.
I didn't realize the digital boards were available at such a low cost. The first ones I looked at were the Mackie TT24 and Yamaha LS9, which are in the $7-10k range and up. For under $2k it looks like I may have some options.
Some big advantages I see for using the digital board are:
- channel grouping for fader control (is this what VCA assignment is?)
- dynamic processors available per input channel, to be used as needed
- scene control with motorized faders
A few questions I'm hoping you can answer:
1) Is there any noticeable latency on digital boards? Our theater is small and you can hear the voices of the louder actors directly, so if there is a delay in the PA it will be noticeable, and that would nix the digital option for me.
2) Assuming you have a positive response to (1), can you provide a "quick" comparison of the Yamaha 02r vs 01v96? The 01v96 looks a lot newer, so I'm thinking much faster processor inside, etc. But if the 02r has any distinct advantages -- or no disadvantages -- vs the 01v96, then the 02r may be worth the cost savings.
3) Can you provide a "quick" comparison of these lower cost digital boards to the higher cost ones like the TT24 and LS9? Why spend the extra $8-10k?
Thanks. John
#### soundlight
##### Well-Known Member
Quick 02R vs 01V96 comparison here: The 01V96 is a much newer board; the original 01V (which can be found on ebay for around $700) was in the same series as the 02R. The 01V96 has been updated, and Yamaha has also shrunk the case a bit to help reduce the size of the board, which was rather large when it first came out as the 01V about ten years ago (we have two of the original 01V's for outside calls here, and they've been running smoothly for a long time now). The 02R has fewer mic inputs onboard, but a lot more input capacity overall. The 02R96V2, on the other hand, which is Yammie's up-to-date version of the original 02R, has all the input that you would ever need: 16 mic pre's and 4 stereo channels on board, as well as the standard aes/ebu, s/pdif, midi, midi time code, smpte time code, cascade in/out, and a number of other interfaces. The 02R and the 02R96V2 also have 4 input card slots for more functionality if you need them.
So basically, the main differences between the 01V96 and the original 02R are these: the 01V96 has more mic preamps and a newer featureset and processor onboard, but the 02R has capability for a lot more input.
Once you get in to the 02R96V2 range, you're over the price of the LS9 or tt24 for a new unit, but far under for a used unit off ebay.
#### gafftaper
##### Senior Team
Senior Team
Fight Leukemia
Hijack sideways...
Reading this thread it just struck me as funny: CFX-20, DFX-12, 01V96, 02R, 02RV692, LS9, TT24...
Isn't it weird how the Sound consoles tend to use code numbers for their models while the light consoles all have a catchy names like... Congo, EOS, Light Palette, and Hog2.
I guess the cool thing is that the code usually tells you something about what it can do and where it fits in the product line. But it sure is WAY more confusing than saying, "Buy a Strand Sub Palette"... knowing nothing about it you can guess it's got a lot of submasters.
Hijack ends...
#### jkowtko
##### Well-Known Member
Thanks Soundlight --
Fyi, this article clearly distinguishes the 02Rv2 from the 02R96, and can't stop raving about the 02R96, which it claims is a huge advancement over the v2.
http://mixguides.com/consoles/reviews/yamaha-02R96-console-1202/index.html
I did find a used v2 for sale nearby (http://sfbay.craigslist.org/sby/msg/285889025.html) for $1500 ... which sounds like it would be a good deal, except for this article that painted a picture of such a huge gap between the two. Can I assume the 01v96 will also be at the level of the 02r96? And if so, should I shy away from the v2?
Also, a question on the latency. I saw one of their boards had a mention of 2.5ms delay. The 02r's show .8 to 1.9ms delay depending on the sampling frequency. Is the delay simply going to be a function of the sampling frequency, or are there other factors that will cause a longer delay in lower end boards?
Sorry for asking all of these questions -- as of this morning I knew virtually nothing about these boards so I've got a lot to bone up on here.
#### soundlight
##### Well-Known Member
If you run straight from the analog ins on the board to the analog outs on the board, there should be very little issue. If you start adding expansion cards, and more AD/DA conversion than is on the board, things might slow down a tiny bit, but not a noticeable bit. That said, we use 01V's here and there is zero noticeable latency in a recital hall with about 100 people in it.
In terms of the difference between the 02RV2 and the 02R96V2, it's an amazing jump in featureset. The consoles are really in two different classes. The 02Rv2 has only eight mic preamps, whereas the 02R96 has sixteen mic pres and still maintains four additional stereo inputs.
The other console that you might want to take a look at is the Tascam DM-24. It's available for about $1000 on ebay. Get it with the meter bridge, if you can, even though it might cost a bit more. The DM-24 will also allow you to get a firewire card and record 24 channels directly to the computer if you want to. It also has 24 channels of TDIF (Tascam Digital InterFace) I/O built in for later I/O expansion with preamp racks or multitrack recorders (not that you'd need any of that for a theater).
If you only need 12 mic preamps, go with the 01V96. You'll be happy with it. If you need 16 mic preamps, look at the Tascam DM-24. For 24 preamps, well, that enters a whole different price range, and I don't think that you need to go there with the requirements that you stated.
Definitely look at the output options on each console to see if it will meet your needs. Unless you add more preamps through buying additional cards and preamps, the 02Rv2 will not meet your needs as it only has 8 preamps.
#### soundman1024
##### Active Member
Hijack sideways...
Reading this thread it just struck me as funny: CFX-20, DFX-12, 01V96, 02R, 02RV692, LS9, TT24...
Isn't it weird how the Sound consoles tend to use code numbers for their models while the light consoles all have a catchy names like... Congo, EOS, Light Palette, and Hog2.
I guess the cool thing is that the code usually tells you something about what it can do and where it fits in the product line. But it sure is WAY more confusing than saying, "Buy a Strand Sub Palette"... knowing nothing about it you can guess it's got a lot of submasters.
Hijack ends...
The sound people think more logically and don't need things to be easy to remember.
#### soundlight
##### Well-Known Member
Sound guys don't think logically...how is LS9 logical? At least the Eos, Obsession, Express, Expression, Pallette, LightPallette, and Smartfade all refer to an operating system and structure, not just a random assignment. </hijack> But, as my username states, I live in both worlds, sound and lighting. I find that many consoles in each world have weird names, each for their own reason.
#### gafftaper
##### Senior Team
Senior Team
Fight Leukemia
Sound guys don't think logically...how is LS9 logical? At least the Eos, Obsession, Express, Expression, Pallette, LightPallette, and Smartfade all refer to an operating system and structure, not just a random assignment. </hijack> But, as my username states, I live in both worlds, sound and lighting. I find that many consoles in each world have weird names, each for their own reason.
Actually that may be the worst part. Most Mackie console names make some sense. The CFX-20, for example, has got 20 channels and it has some on board effects. Yamaha on the other hand makes no sense at all... PM5D-RH. So you are never sure if you are supposed to know what the code means or not.
#### soundlight
##### Well-Known Member
the -rh part makes sense. Remote Headamp.
#### jkowtko
##### Well-Known Member
Getting back to the original topic (I'd be happy to chime in on 'sound vs lights' if someone wants to start a new thread) --
I've been looking at the on-line info for the 01v96, 02rv2, and 02r96, and have some very specific questions that will help me to determine which of these I can use for our theater.
First, let me clarify my I/O needs better:
- Input: My 12 wireless mics have line input to the board so I do not need preamps for those. So, what I do need is 32 input channels total, at least 8 of which can be preamped for stage/pit/god usage.
- Output: I am mixing live mics from the stage with sound effects, requiring me to be able to control (mix, EQ, effects) 8 separate outputs.
Questions:
1) The 01v at first glance looks too small, but can I use the ADAT I/O in addition to or in lieu of an expansion card to get the number of channels I need? If so, then this board is looking like a good moderate-cost option for a new equipment purchase for the theater.
Otherwise, I have tried to compare the 02rv2 vs 02r96 to see if a used 02rv2 is worth pursuing:
* In:
02rv2 has 8 mic/line + 8 line + 4x2 stereo = 24 total
02r96 has 16 mic/line + 4x2 stereo = 24 total
* Out:
02r96 has 8 omni outs
02rv2 has only 6 aux outs.
2) If I need 8 outputs total from the 02rv2, can I just use the CR or Studio outs in addition to aux? Or do I have to start buying expansion cards? And in these configurations, will I get full mixing control (using fader groups, etc) for all 8 outputs?
3) The 02rv2 has an advertised 2.5ms latency vs. 0.8 for the 02r96, presumably the higher sampling rate and faster processors account for the difference. Soundlight, you said your 01v sounds fine in a small auditorium. Is this the original 01v or the 01v96?
4) The 02r96 review says that the quality of EQ on the newer board is a lot better than the 02rv2 ... am I going to notice a difference in overall sound quality between the 02rv2 and the '96' models, or between the 02rv2 and analog boards?
5) On any of these digital boards, is EQ available for each of the 8 sub-outs? Is parametric EQ and/or notch filtering available so I can deal with feedback on the center cluster without having to buy additional hardware?
6) Last question -- if you had $3k to spend would you buy a new 01v96 with appropriate expansion units, or get a used 02rv2 and use the extra money for other equipment and supplies?
Thanks. John
#### soundlight
##### Well-Known Member
Here's where I get to throw my curveball! Another console! If I had $3K to spend on this project, I'd get the following:
Tascam DM-3200 Digital Console ($2550 w/shipping ebay)
Tascam Optical ADAT card ($313 w/shipping ebay)
2x Behringer ADA8000 ADAT interface AD Converter/Preamp ($215/ea ebay)
You only need one adat card because there is already an 8 channel adat interface onboard. Yes, it's more than $3K -- $300 more, to be exact. I'd make a fundraising drive or overbudget request for this one, because the DM-3200 is a nice console, and this would make a very, very nice system.
EDIT:
The original 01V sounds fine in a small auditorium, we use it for jazz vocalists and piano.
I really don't know why more people don't use Tascam consoles; I've been researching them quite a bit lately (I want to get the DM-3200 w/tdif preamps and firewire card for recording and live mixing)
#### Peter
##### Well-Known Member
I really don't know why more people don't use Tascam consoles, I've been researching them quite a bit lately
The digital tascam boards that I have used have generally had HORRIBLE user interfaces that were completely counterintuitive. I haven't played with any of their newer boards, but their earlier digital boards / recording interfaces were a PAIN. Maybe they've improved recently; I don't want to start a flame war either way.
My real point is this: Whenever buying a digital board, try as hard as you can to find somewhere where you can sit and play with it for an hour and see how easy you find your way around on it. If you are always completely lost, try another board.
#### soundlight
##### Well-Known Member
I did notice a difference in the intuitive nature between the old TM-D series and the newer DM series (3200 and 4800). The new DM series is a lot better than the older TM-D series. They've finally got the fader flip, aux send, and eq functions up to standards.
#### jkowtko
Thanks -- I looked at the Tascam DM-3200. For $1000 over the Yamaha 02Rv2 it looks like I get:
- 8 extra mic preamp inputs (16 vs 8)
- 96kHz vs 48kHz
and it looks like I lose the Aux outs (the Yamaha has 6).
Did I miss any other significant differences? Will it be easy to get my 8 sub-outs through one of the digital interfaces? And, again, is it possible to EQ the sub-outs in the board, or do I still have to buy external analog EQs?
Thanks. John
#### soundlight
##### Well-Known Member
"and it looks like I lose the Aux outs (the Yamaha has 6)"
Nope, 8 aux sends onboard, assignable to any output.
"Will it be easy to get my 8 sub-outs through one of the digital interfaces?"
Pretty darn sure after reading the manual, the signal routing on the DM3200 is very, very flexible. You can assign any input to any channel, and any group, aux, or other buss to any physical output on the board, including those in the add-on card section.
"And, again, is it possible to EQ the sub-outs in the board, or do I still have to buy external analog EQs?"
I don't think so, after reading the manual. This is part of the price difference that you were asking about before between the $3k boards and the $10k boards.
If you do some fun stuff with returning them through an extra channel and then taking a direct out of that channel and sending it down a TDIF out, that would work. But that would imply that you had 8 extra channels, because the aux returns (there's 16 of them, I think) don't have EQ on them. But this also might also create a noticeable latency because the signal would have to travel through the digital path twice, through two sets of AD/DA converters, and then out.
#### Cooze
##### Member
I try to stay away from Mackie at all costs; there are so many other options out there that are better. I am a big fan of Yamaha, because they are reasonably priced, they work well, and they last. Not shooting anyone else down, but I personally would never purchase a Mackie. Post what you decide on, I am interested.
John Williams
Technical Director/Sound Director
Calvert Theatre, Prince Frederick, MD
#### SHARYNF
##### Well-Known Member
Typically you add the expansion card to the 01v96 (there is an 8 and a 16 channel version), and then add either one or two of the Behringer adat i/o units. These units are really quite nice and have an oddball feature that works well in this configuration... Typically you would expect that the preamps in would be connected to the adat out AND the analog out, but basically Behringer has really built two units into a single box: one is a Pre to ADAT (8 channels), and the other is ADAT to Line (8 channels), so you have the flexibility to use these all at the same time. A number of sound companies use this setup as one of the best inexpensive digital consoles.
On the 02r there is a misunderstanding: there are 16 mic preamp inputs, but to save space Yamaha left off the xlr connectors (they use trs) and left off phantom power on the second 8 inputs; there are still 16 mic preamps. In addition you can add up to 4 cards for options. In my config I have one of the extra digital processor cards with adat i/o, then two adat i/o cards, and a cascade card so that two o2r's can be linked, but that is more for recording than pa.
A lot of the 96 capability IMO makes a lot more sense in recording where you want to use the higher end digitizers than for live sound.
The 02r's are big, heavy, and built like tanks. Personally I think they are an interesting deal these days since prices have dropped so much. If you want a new system then the 01v96 is probably a better way to go, but a bit more money.
What I like about Yamaha is the quality, the ruggedness, and the support; if you look inside, these are well built and repairable.
For small setups, for instance for some of the video productions where we are just looking for 16 ins, and 8 adat out, I have used the 03d's which are very cheap these days, they have 8 pre's and 8 lines in so you do need to add preamps, and then a single card slot for adat. Smaller setup, not as flexible as the 01v but another alternative.
Sharyn
#### jkowtko
##### Well-Known Member
Thanks guys -- I'm getting closer to determining what I ultimately need in a digital board. Here are my thoughts based on all of the input so far:
* The Behringer ADA8000 looks like a pretty neat card because you can get
8 mic/line ins and 8 outs all from a single card. So as long as your board
has a pair of ADAT I/O plugs, this is an easy expansion
* Yamaha: I think I would prefer the 01v with an ADA8000. That will
give me the minimum ins and outs that I need, I can add a second ADAT
expansion card with a second ADA8000 if I need to go further. It's 96kHz,
has USB for computer control, and has a relatively compact size.
In comparison, the 02rv2 (not 96) starts out with no native ADAT, no USB
and is much bigger in size. (my sound booth is tiny with shallow desk).
* The Tascam DM3200 also looks like a good option if space weren't an
issue. With the built in mix/line inputs and ADAT interface, it's similar in
capacity to the Yamaha 01v ... one ADA8000 to handle my min config.
However I don't know if there are drawbacks to Tascam vs. Yamaha.
I guess ultimately any one of these three boards should serve my purpose, and it may come down to feature preference and ergonomics.
However, one other BIG consideration for me is signal delay. I will be using this board for live musical theater, so any discernible delay will kill the digital option for me. Can anyone comment on this or testify that digital works acceptably well in live theater? (Music Man tech is April 30 so I still have some time to make my decision).
Thanks. John
# Product of non-zero elements in sparse array
How can I multiply the non-zero elements of a SparseArray?
Example:
a = SparseArray[{1 -> 1, 3 -> 2, 7 -> 3}]
Times @@ a (* this returns 0, but I need 6! *)
1. The sparse array a is:
{1, 0, 2, 0, 0, 0, 3}
hence applying Times to this yields zero
2. You can see the underlying array using Normal
a // Normal
3. You could multiply the non-zero elements of a sparse array object:
Times @@ a["NonzeroValues"]
• This is good to know! Could you perhaps point me to the documentation where "NonzeroValues" is mentioned? – Danvil Feb 6 '14 at 13:42
• @Danvil As far as I know it isn't documented. You can see the other SparseArray Properties with: SparseArray[{1}]["Properties"]. I gathered links to some of my uses of these in this answer. – Mr.Wizard Feb 6 '14 at 14:12
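For comparison outside Mathematica, the same idea can be sketched in Python: store only the nonzero entries of a sparse vector and multiply just those, since a product over the full dense vector would be zeroed out by the implicit zeros. (The dict-based "sparse vector" here is purely illustrative, not a real sparse-array API.)

```python
from math import prod

# Sparse vector as {index: value}; all other entries are implicitly 0.
# Mirrors SparseArray[{1 -> 1, 3 -> 2, 7 -> 3}] from the question.
sparse = {1: 1, 3: 2, 7: 3}

# Multiplying only the stored nonzero values gives the intended result,
# analogous to Times @@ a["NonzeroValues"].
nonzero_product = prod(sparse.values())
print(nonzero_product)  # 6
```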
## Triangles
tagged by: Brent@GMATPrepNow
This topic has 1 expert reply and 0 member replies
abhirup1711
#### Triangles
Fri Jun 07, 2013 4:21 am
If two sides of a triangle are 12 and 8 in length, which of the following could be the area of the triangle?
1. 35
2. 48
3. 56
A) 1 only
B) 1 and 2 only
C) 1 and 3 only
D) 2 and 3 only
E) 1, 2 and 3
### GMAT/MBA Expert
Brent@GMATPrepNow GMAT Instructor
Fri Jun 07, 2013 5:20 am
abhirup1711 wrote:
If two sides of a triangle are 12 and 8 in length, which of the following could be the area of the triangle?
1. 35
2. 48
3. 56
A) 1 only
B) 1 and 2 only
C) 1 and 3 only
D) 2 and 3 only
E) 1, 2 and 3
First, we can take the two given sides and make the angle between them as small as we want. As we make that angle smaller and smaller, the area of the triangle will approach zero.
The area of a triangle = (1/2)(base)(height)
Let's make one side the base. Let's say the side with length 12 is the base.
In order to maximize the area of the triangle, we need to maximize its height. This will occur when the two given sides meet at a 90-degree angle. So, the maximum height is 8, which means the maximum area = (1/2)(12)(8) = 48
So . . . 0 < area ≤ 48, which means the area could be 35 or 48 (but not 56).
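To sanity-check this range numerically (a quick Python sketch, not part of the original GMAT solution), sweep the included angle between the two given sides and use the side-angle-side area formula:

```python
import math

a, b = 12, 8  # the two given sides

def area(theta):
    # Area of a triangle from two sides and the included angle.
    return 0.5 * a * b * math.sin(theta)

# Sweep the included angle over (0, pi): the area ranges over (0, 48],
# peaking when the angle is exactly 90 degrees.
max_area = max(area(math.pi * k / 1000) for k in range(1, 1000))
print(round(max_area, 3))  # 48.0
```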
Cheers,
Brent
Volume 13 (2017) Article 16 pp. 1-23
Some Limitations of the Sum of Small-Bias Distributions
by
Revised: September 22, 2016
Published: December 14, 2017
Keywords: complexity theory, pseudorandomness, RL vs. L, error-correcting codes, $k$-wise independence, small-bias distributions, sum of small bias
ACM Classification: F.1.3, G.3, F.2.3
AMS Classification: 68Q17
Abstract: [Plain Text Version]
We present two approaches to constructing $\eps$-biased distributions $D$ on $n$ bits and functions $f\colon \{0,1\}^n \to \{0,1\}$ such that the XOR of two independent copies ($D+D$) does not fool $f$. Using them, we give constructions for any of the following choices:
1. $\eps = 2^{-\Omega(n)}$ and $f$ is in $\ppp$/poly;
2. $\eps = 2^{-\Omega(n/\log n)}$ and $f$ is in $\nc^2$;
3. $\eps = n^{-c}$ and $f$ is a one-way space $O(c \log n)$ algorithm, for any $c$;
4. $\eps = n^{-\Omega(1)}$ and $f$ is a mod 3 linear function.
All the results give one-sided distinguishers, and extend to the XOR of more copies for suitable $\eps$. We also give conditional results for $\ac^0$ and DNF formulas.
Meka and Zuckerman (RANDOM 2009) prove 4 with $\eps = O(1)$. Bogdanov, Dvir, Verbin, and Yehudayoff (Theory of Computing, 2013) prove 2 with $\eps = 2^{-O(\sqrt{n})}$. Chen and Zuckerman (personal communication) give an alternative proof of 3.
Paul's Online Notes
Home / Calculus I / Derivatives / Chain Rule
Section 3-9 : Chain Rule
4. Differentiate $$R\left( w \right) = \csc \left( {7w} \right)$$ .
Hint : Recall that with Chain Rule problems you need to identify the “inside” and “outside” functions and then apply the chain rule.
Show Solution
For this problem the outside function is (hopefully) clearly the trig function and the inside function is the stuff inside of the trig function. The derivative is then,
$\require{bbox} \bbox[2pt,border:1px solid black]{{R'\left( w \right) = - 7\csc \left( {7w} \right)\cot \left( {7w} \right)}}$
In dealing with functions like cosecant (or secant for that matter) be careful to make sure that the inside function gets substituted into both terms of the derivative of the outside function. One of the more common mistakes with this kind of problem is to substitute the $$7w$$ into only the cosecant or only the cotangent, instead of into both as it should be.
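As a quick numerical check (not part of the original notes), we can compare the boxed derivative against a central-difference estimate at a sample point:

```python
import math

def R(w):
    # R(w) = csc(7w) = 1/sin(7w)
    return 1 / math.sin(7 * w)

def R_prime(w):
    # Claimed derivative: -7 csc(7w) cot(7w)
    return -7 / math.sin(7 * w) * (math.cos(7 * w) / math.sin(7 * w))

w, h = 0.3, 1e-6
# Central difference approximates R'(w) to O(h^2).
numeric = (R(w + h) - R(w - h)) / (2 * h)
```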
# Tag Info
In a simple linear model of the form $y = \beta_0 + \beta_1 x$ we can see that increasing $x$ by one unit will increase the prediction of $y$ by $\beta_1$. Here we can completely determine what the effect on the model's prediction will be of increasing $x$. With more complex models such as neural networks it is much more difficult to tell due to all the ...
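A tiny Python sketch makes this concrete (the coefficient values are purely illustrative):

```python
# Hypothetical coefficients for y = beta0 + beta1 * x.
beta0, beta1 = 2.0, 3.5

def predict(x):
    return beta0 + beta1 * x

# A one-unit increase in x changes the prediction by exactly beta1,
# no matter where we start on the x axis.
delta = predict(5.0) - predict(4.0)
print(delta)  # 3.5
```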
#### Problem 74E
74. The volume of a right circular cone is $V=\frac{1}{3} \pi r^{2} h$, where $r$ is the radius of the base and $h$ is the height.
(a) Find the rate of change of the volume with respect to the height if the radius is constant.
(b) Find the rate of change of the volume with respect to the radius if the height is constant.
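As a quick check (not part of the textbook problem), the standard answers are $\frac{\partial V}{\partial h}=\frac{1}{3} \pi r^{2}$ for part (a) and $\frac{\partial V}{\partial r}=\frac{2}{3} \pi r h$ for part (b); a central-difference sketch in Python confirms them at sample values:

```python
import math

def V(r, h):
    # Volume of a right circular cone.
    return (1 / 3) * math.pi * r**2 * h

r, h, eps = 3.0, 5.0, 1e-6

# (a) dV/dh with r held constant; should equal (1/3)*pi*r^2.
dV_dh = (V(r, h + eps) - V(r, h - eps)) / (2 * eps)

# (b) dV/dr with h held constant; should equal (2/3)*pi*r*h.
dV_dr = (V(r + eps, h) - V(r - eps, h)) / (2 * eps)
```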
# Gluon Neural Network Layers¶
## Overview¶
This document lists the neural network blocks in Gluon:
## Basic Layers¶
Dense – Just your regular densely-connected NN layer.
Activation – Applies an activation function to input.
Dropout – Applies Dropout to the input.
BatchNorm – Batch normalization layer (Ioffe and Szegedy, 2014).
LeakyReLU – Leaky version of a Rectified Linear Unit.
Embedding – Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.
Flatten – Flattens the input to two dimensional.
## Convolutional Layers¶
Conv1D – 1D convolution layer (e.g. temporal convolution).
Conv2D – 2D convolution layer (e.g. spatial convolution over images).
Conv3D – 3D convolution layer (e.g. spatial convolution over volumes).
Conv1DTranspose – Transposed 1D convolution layer (sometimes called Deconvolution).
Conv2DTranspose – Transposed 2D convolution layer (sometimes called Deconvolution).
Conv3DTranspose – Transposed 3D convolution layer (sometimes called Deconvolution).
## Pooling Layers¶
MaxPool1D – Max pooling operation for one dimensional data.
MaxPool2D – Max pooling operation for two dimensional (spatial) data.
MaxPool3D – Max pooling operation for 3D data (spatial or spatio-temporal).
AvgPool1D – Average pooling operation for temporal data.
AvgPool2D – Average pooling operation for spatial data.
AvgPool3D – Average pooling operation for 3D data (spatial or spatio-temporal).
GlobalMaxPool1D – Global max pooling operation for temporal data.
GlobalMaxPool2D – Global max pooling operation for spatial data.
GlobalMaxPool3D – Global max pooling operation for 3D data.
GlobalAvgPool1D – Global average pooling operation for temporal data.
GlobalAvgPool2D – Global average pooling operation for spatial data.
GlobalAvgPool3D – Global average pooling operation for 3D data.
## API Reference¶
Neural network layers.
class mxnet.gluon.nn.Activation(activation, **kwargs)
Applies an activation function to input.
Parameters: activation (str) – Name of activation function to use. See Activation() for available choices.
Inputs:
• data: input tensor with arbitrary shape.
Outputs:
• out: output tensor with the same shape as data.
class mxnet.gluon.nn.AvgPool1D(pool_size=2, strides=None, padding=0, layout='NCW', ceil_mode=False, **kwargs)
Average pooling operation for temporal data.
Parameters:
- pool_size (int) – Size of the pooling windows.
- strides (int, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
- padding (int) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
- layout (str, default 'NCW') – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, etc. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. padding is applied on ‘W’ dimension.
- ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
Outputs:
• out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:

    out_width = floor((width + 2*padding - pool_size)/strides) + 1

When ceil_mode is True, ceil will be used instead of floor in this equation.
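As a concrete illustration, the standard pooling output-size rule can be written as a small pure-Python helper (a sketch, not taken from the MXNet source; the parameter names merely mirror the AvgPool1D signature):

```python
import math

def pool_out_width(width, pool_size=2, strides=None, padding=0, ceil_mode=False):
    """Output width of a 1D pooling layer under the standard size rule."""
    if strides is None:
        # MXNet-style default: strides falls back to pool_size.
        strides = pool_size
    rounder = math.ceil if ceil_mode else math.floor
    return rounder((width + 2 * padding - pool_size) / strides) + 1

print(pool_out_width(10))                  # 5
print(pool_out_width(11))                  # 5  (floor drops the partial window)
print(pool_out_width(11, ceil_mode=True))  # 6  (ceil keeps it)
```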
class mxnet.gluon.nn.AvgPool2D(pool_size=(2, 2), strides=None, padding=0, ceil_mode=False, layout='NCHW', **kwargs)
Average pooling operation for spatial data.
Parameters:
- pool_size (int or list/tuple of 2 ints) – Size of the pooling windows.
- strides (int, list/tuple of 2 ints, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
- padding (int or list/tuple of 2 ints) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
- layout (str, default 'NCHW') – Dimension ordering of data and weight. Can be ‘NCHW’, ‘NHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. padding is applied on ‘H’ and ‘W’ dimension.
- ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:

    out_height = floor((height + 2*padding[0] - pool_size[0])/strides[0]) + 1
    out_width = floor((width + 2*padding[1] - pool_size[1])/strides[1]) + 1

When ceil_mode is True, ceil will be used instead of floor in this equation.
class mxnet.gluon.nn.AvgPool3D(pool_size=(2, 2, 2), strides=None, padding=0, ceil_mode=False, layout='NCDHW', **kwargs)
Average pooling operation for 3D data (spatial or spatio-temporal).
Parameters:
- pool_size (int or list/tuple of 3 ints) – Size of the pooling windows.
- strides (int, list/tuple of 3 ints, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size.
- padding (int or list/tuple of 3 ints) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
- layout (str, default 'NCDHW') – Dimension ordering of data and weight. Can be ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. padding is applied on ‘D’, ‘H’ and ‘W’ dimension.
- ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:

    out_depth = floor((depth + 2*padding[0] - pool_size[0])/strides[0]) + 1
    out_height = floor((height + 2*padding[1] - pool_size[1])/strides[1]) + 1
    out_width = floor((width + 2*padding[2] - pool_size[2])/strides[2]) + 1

When ceil_mode is True, ceil will be used instead of floor in this equation.
class mxnet.gluon.nn.BatchNorm(axis=1, momentum=0.9, epsilon=1e-05, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', running_mean_initializer='zeros', running_variance_initializer='ones', in_channels=0, **kwargs)
Batch normalization layer (Ioffe and Szegedy, 2014). Normalizes the input at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1.
Parameters:
- axis (int, default 1) – The axis that should be normalized. This is typically the channels (C) axis. For instance, after a Conv2D layer with layout=’NCHW’, set axis=1 in BatchNorm. If layout=’NHWC’, then set axis=3.
- momentum (float, default 0.9) – Momentum for the moving average.
- epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
- center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
- scale (bool, default True) – If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
- beta_initializer (str or Initializer, default ‘zeros’) – Initializer for the beta weight.
- gamma_initializer (str or Initializer, default ‘ones’) – Initializer for the gamma weight.
- moving_mean_initializer (str or Initializer, default ‘zeros’) – Initializer for the moving mean.
- moving_variance_initializer (str or Initializer, default ‘ones’) – Initializer for the moving variance.
- in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Inputs:
• data: input tensor with arbitrary shape.
Outputs:
• out: output tensor with the same shape as data.
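The normalization itself can be sketched in pure Python (not part of Gluon; a minimal scalar-batch version of gamma * (x - mean) / sqrt(var + epsilon) + beta, where gamma/beta are the learnable scale and offset):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, epsilon=1e-5):
    """Normalize a 1D batch of scalars to ~zero mean / unit variance,
    then apply the learnable scale (gamma) and offset (beta)."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + epsilon) + beta for x in batch]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
```

With default gamma=1 and beta=0 the output has mean close to 0 and standard deviation close to 1, as the layer description states.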
class mxnet.gluon.nn.Conv1D(channels, kernel_size, strides=1, padding=0, dilation=1, groups=1, layout='NCW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
1D convolution layer (e.g. temporal convolution).
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 1 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 1 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 1 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation (int or tuple/list of 1 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCW') – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, etc. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the ‘W’ dimension. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
Outputs:
• out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = floor((width + 2*padding - dilation*(kernel_size - 1) - 1) / strides) + 1
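As a sanity check, the Conv1D shape rule can be written as a small pure-Python helper (a sketch, not part of the Gluon API; it follows the standard floor-based convolution formula):

```python
def conv1d_out_width(width, kernel_size, strides=1, padding=0, dilation=1):
    """Output width of a 1D convolution:
    floor((width + 2*padding - dilation*(kernel_size - 1) - 1) / strides) + 1"""
    return (width + 2 * padding - dilation * (kernel_size - 1) - 1) // strides + 1
```

For example, a kernel of size 3 over a width-10 input yields width 8, and `padding=1` restores width 10 ("same" padding for stride 1).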
class mxnet.gluon.nn.Conv1DTranspose(channels, kernel_size, strides=1, padding=0, output_padding=0, dilation=1, groups=1, layout='NCW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
Transposed 1D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 1 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 1 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 1 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points output_padding (int or tuple/list of 1 int) – Additional size added to one side of the output shape. dilation (int or tuple/list of 1 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCW') – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, etc. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Convolution is applied on the ‘W’ dimension. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
Outputs:
• out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = (width - 1)*strides - 2*padding + kernel_size + output_padding
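The transposed-convolution shape rule inverts the forward one; a pure-Python sketch (not part of Gluon; dilation assumed to be 1, matching the standard formula):

```python
def conv1d_transpose_out_width(width, kernel_size, strides=1, padding=0,
                               output_padding=0):
    """Output width of a transposed 1D convolution:
    (width - 1)*strides - 2*padding + kernel_size + output_padding"""
    return (width - 1) * strides - 2 * padding + kernel_size + output_padding
```

Note the inverse relationship: a Conv1D with kernel 3 maps width 10 to 8, and the transpose with the same settings maps 8 back to 10. `output_padding` exists to disambiguate cases where strided forward convolution maps several input widths to the same output width.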
class mxnet.gluon.nn.Conv2D(channels, kernel_size, strides=(1, 1), padding=(0, 0), dilation=(1, 1), groups=1, layout='NCHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 2 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 2 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 2 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation (int or tuple/list of 2 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCHW') – Dimension ordering of data and weight. Can be ‘NCHW’, ‘NHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the ‘H’ and ‘W’ dimensions. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height + 2*padding[0] - dilation[0]*(kernel_size[0] - 1) - 1) / strides[0]) + 1
out_width = floor((width + 2*padding[1] - dilation[1]*(kernel_size[1] - 1) - 1) / strides[1]) + 1
class mxnet.gluon.nn.Conv2DTranspose(channels, kernel_size, strides=(1, 1), padding=(0, 0), output_padding=(0, 0), dilation=(1, 1), groups=1, layout='NCHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
Transposed 2D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 2 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 2 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 2 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points output_padding (int or tuple/list of 2 int) – Additional size added to one side of the output shape. dilation (int or tuple/list of 2 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCHW') – Dimension ordering of data and weight. Can be ‘NCHW’, ‘NHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the ‘H’ and ‘W’ dimensions. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = (height - 1)*strides[0] - 2*padding[0] + kernel_size[0] + output_padding[0]
out_width = (width - 1)*strides[1] - 2*padding[1] + kernel_size[1] + output_padding[1]
class mxnet.gluon.nn.Conv3D(channels, kernel_size, strides=(1, 1, 1), padding=(0, 0, 0), dilation=(1, 1, 1), groups=1, layout='NCDHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
3D convolution layer (e.g. spatial convolution over volumes).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 3 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 3 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 3 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points dilation (int or tuple/list of 3 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCDHW') – Dimension ordering of data and weight. Can be ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the ‘D’, ‘H’ and ‘W’ dimensions. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = floor((depth + 2*padding[0] - dilation[0]*(kernel_size[0] - 1) - 1) / strides[0]) + 1
out_height = floor((height + 2*padding[1] - dilation[1]*(kernel_size[1] - 1) - 1) / strides[1]) + 1
out_width = floor((width + 2*padding[2] - dilation[2]*(kernel_size[2] - 1) - 1) / strides[2]) + 1
class mxnet.gluon.nn.Conv3DTranspose(channels, kernel_size, strides=(1, 1, 1), padding=(0, 0, 0), output_padding=(0, 0, 0), dilation=(1, 1, 1), groups=1, layout='NCDHW', activation=None, use_bias=True, weight_initializer=None, bias_initializer='zeros', in_channels=0, **kwargs)
Transposed 3D convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
If in_channels is not specified, Parameter initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
Parameters: channels (int) – The dimensionality of the output space, i.e. the number of output channels (filters) in the convolution. kernel_size (int or tuple/list of 3 int) – Specifies the dimensions of the convolution window. strides (int or tuple/list of 3 int,) – Specify the strides of the convolution. padding (int or a tuple/list of 3 int,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points output_padding (int or tuple/list of 3 int) – Additional size added to one side of the output shape. dilation (int or tuple/list of 3 int) – Specifies the dilation rate to use for dilated convolution. groups (int) – Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. layout (str, default 'NCDHW') – Dimension ordering of data and weight. Can be ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. Convolution is applied on the ‘D’, ‘H’, and ‘W’ dimensions. in_channels (int, default 0) – The number of input channels to this layer. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data. activation (str) – Activation function to use. See Activation(). If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. weight_initializer (str or Initializer) – Initializer for the weight weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector.
Inputs:
• data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = (depth - 1)*strides[0] - 2*padding[0] + kernel_size[0] + output_padding[0]
out_height = (height - 1)*strides[1] - 2*padding[1] + kernel_size[1] + output_padding[1]
out_width = (width - 1)*strides[2] - 2*padding[2] + kernel_size[2] + output_padding[2]
class mxnet.gluon.nn.Dense(units, activation=None, use_bias=True, flatten=True, weight_initializer=None, bias_initializer='zeros', in_units=0, **kwargs)
Just your regular densely-connected NN layer.
Dense implements the operation: output = activation(dot(input, weight) + bias) where activation is the element-wise activation function passed as the activation argument, weight is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
Note: the input must be a tensor with rank 2. Use flatten to convert it to rank 2 manually if necessary.
Parameters: units (int) – Dimensionality of the output space. activation (str) – Activation function to use. See help on Activation layer. If you don’t specify anything, no activation is applied (ie. “linear” activation: a(x) = x). use_bias (bool) – Whether the layer uses a bias vector. flatten (bool) – Whether the input tensor should be flattened. If true, all but the first axis of input data are collapsed together. If false, all but the last axis of input data are kept the same, and the transformation applies on the last axis. weight_initializer (str or Initializer) – Initializer for the kernel weights matrix. bias_initializer (str or Initializer) – Initializer for the bias vector. in_units (int, optional) – Size of the input data. If not specified, initialization will be deferred to the first time forward is called and in_units will be inferred from the shape of input data. prefix (str or None) – See document of Block. params (ParameterDict or None) – See document of Block.
Inputs:
• data: if flatten is True, data should be a tensor with shape (batch_size, x1, x2, ..., xn), where x1 * x2 * ... * xn is equal to in_units. If flatten is False, data should have shape (x1, x2, ..., xn, in_units).
Outputs:
• out: if flatten is True, out will be a tensor with shape (batch_size, units). If flatten is False, out will have shape (x1, x2, ..., xn, units).
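The flatten semantics determine only which axes are collapsed; a pure-Python shape helper makes the two cases concrete (a sketch, not part of Gluon):

```python
def dense_out_shape(data_shape, units, flatten=True):
    """Output shape of a Dense layer: collapse all but the first axis when
    flatten=True, otherwise transform only the last axis."""
    if flatten:
        return (data_shape[0], units)
    return tuple(data_shape[:-1]) + (units,)
```

So a (32, 4, 5) input with units=10 gives (32, 10) by default, but (32, 4, 10) with flatten=False, where the weight matrix acts on the last axis only.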
class mxnet.gluon.nn.Dropout(rate, **kwargs)
Applies Dropout to the input.
Dropout consists in randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting.
Parameters: rate (float) – Fraction of the input units to drop. Must be a number between 0 and 1.
Inputs:
• data: input tensor with arbitrary shape.
Outputs:
• out: output tensor with the same shape as data.
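A pure-Python sketch of inverted dropout (not Gluon's implementation; survivors are scaled by 1/(1 - rate) at training time so the expected value of each unit is unchanged, and the layer is the identity at inference):

```python
import random

def dropout(data, rate, training=True, rng=random.Random(0)):
    """Inverted dropout: zero each unit with probability `rate` and scale
    survivors by 1/(1 - rate); identity when not training."""
    if not training or rate == 0.0:
        return list(data)
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in data]

out = dropout([1.0] * 1000, 0.5)
```

With rate=0.5 every surviving unit is doubled, so the output contains only 0.0 and 2.0 for a constant input of 1.0.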
References
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
class mxnet.gluon.nn.Embedding(input_dim, output_dim, dtype='float32', weight_initializer=None, **kwargs)
Turns non-negative integers (indexes/tokens) into dense vectors of fixed size, e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
Parameters: input_dim (int) – Size of the vocabulary, i.e. maximum integer index + 1. output_dim (int) – Dimension of the dense embedding. dtype (str or np.dtype, default 'float32') – Data type of output embeddings. weight_initializer (Initializer) – Initializer for the embeddings matrix.
Inputs:
• data: 2D tensor with shape: (x1, x2).
Output:
• out: 3D tensor with shape: (x1, x2, output_dim).
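The lookup itself is just indexing into a weight table; a pure-Python sketch of the example from the description (not part of Gluon, and using a dict in place of the learned (input_dim, output_dim) matrix):

```python
def embedding(indices, table):
    """Look up a vector for each integer in a 2D index array, producing a
    3D result of shape (x1, x2, output_dim)."""
    return [[table[i] for i in row] for row in indices]

# toy table standing in for the learned embedding matrix
table = {4: [0.25, 0.1], 20: [0.6, -0.2]}
out = embedding([[4], [20]], table)
```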
class mxnet.gluon.nn.Flatten(**kwargs)
Flattens the input to two dimensional.
Inputs:
• data: input tensor with arbitrary shape (N, x1, x2, ..., xn)
Output:
• out: 2D tensor with shape: (N, x1*x2*...*xn)
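The output shape is simply the batch axis followed by the product of the remaining axes; a pure-Python shape helper (a sketch, not part of Gluon):

```python
from functools import reduce
from operator import mul

def flatten_shape(shape):
    """(N, x1, x2, ..., xn) -> (N, x1*x2*...*xn)"""
    return (shape[0], reduce(mul, shape[1:], 1))
```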
class mxnet.gluon.nn.GlobalAvgPool1D(layout='NCW', **kwargs)
Global average pooling operation for temporal data.
class mxnet.gluon.nn.GlobalAvgPool2D(layout='NCHW', **kwargs)
Global average pooling operation for spatial data.
class mxnet.gluon.nn.GlobalAvgPool3D(layout='NCDHW', **kwargs)
Global average pooling operation for 3D data.
class mxnet.gluon.nn.GlobalMaxPool1D(layout='NCW', **kwargs)
Global max pooling operation for temporal data.
class mxnet.gluon.nn.GlobalMaxPool2D(layout='NCHW', **kwargs)
Global max pooling operation for spatial data.
class mxnet.gluon.nn.GlobalMaxPool3D(layout='NCDHW', **kwargs)
Global max pooling operation for 3D data.
class mxnet.gluon.nn.HybridLambda(function, prefix=None)
Wraps an operator or an expression as a HybridBlock object.
Parameters: function (str or function) – Function used in the lambda must be one of the following: 1) the name of an operator that is available in both symbol and ndarray, for example: block = HybridLambda('tanh'); 2) a function that conforms to def function(F, data, *args), for example: block = HybridLambda(lambda F, x: F.LeakyReLU(x, slope=0.1))
Inputs:
• *args: one or more input data. The first argument must be a symbol or an ndarray. Their shapes depend on the function.
Outputs:
• *outputs: one or more output data. Their shapes depend on the function.
class mxnet.gluon.nn.Lambda(function, prefix=None)
Wraps an operator or an expression as a Block object.
Parameters: function (str or function) – Function used in the lambda must be one of the following: 1) the name of an operator that is available in ndarray, for example: block = Lambda('tanh'); 2) a function that conforms to def function(*args), for example: block = Lambda(lambda x: nd.LeakyReLU(x, slope=0.1))
Inputs:
• *args: one or more input data. Their shapes depend on the function.
Outputs:
• *outputs: one or more output data. Their shapes depend on the function.
class mxnet.gluon.nn.LeakyReLU(alpha, **kwargs)
Leaky version of a Rectified Linear Unit.
It allows a small gradient when the unit is not active:
$f\left(x\right) = \begin{cases} \alpha x & x < 0 \\ x & x \geq 0 \end{cases}$
Parameters: alpha (float) – slope coefficient for the negative half axis. Must be >= 0.
Inputs:
• data: input tensor with arbitrary shape.
Outputs:
• out: output tensor with the same shape as data.
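The piecewise definition above translates directly to code; a pure-Python scalar sketch (not Gluon's vectorised implementation):

```python
def leaky_relu(x, alpha):
    """f(x) = alpha*x for x < 0, x for x >= 0."""
    return alpha * x if x < 0 else x
```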
class mxnet.gluon.nn.MaxPool1D(pool_size=2, strides=None, padding=0, layout='NCW', ceil_mode=False, **kwargs)
Max pooling operation for one dimensional data.
Parameters: pool_size (int) – Size of the max pooling windows. strides (int, or None) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size. padding (int) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout (str, default 'NCW') – Dimension ordering of data and weight. Can be ‘NCW’, ‘NWC’, etc. ‘N’, ‘C’, ‘W’ stands for batch, channel, and width (time) dimensions respectively. Pooling is applied on the W dimension. ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 3D input tensor with shape (batch_size, in_channels, width) when layout is NCW. For other layouts shape is permuted accordingly.
Outputs:
• out: 3D output tensor with shape (batch_size, channels, out_width) when layout is NCW. out_width is calculated as:
out_width = floor((width + 2*padding - pool_size) / strides) + 1
When ceil_mode is True, ceil will be used instead of floor in this equation.
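The pooling shape rule, including the ceil_mode switch and the strides-defaults-to-pool_size behaviour, as a pure-Python helper (a sketch, not part of Gluon):

```python
import math

def pool1d_out_width(width, pool_size, strides=None, padding=0, ceil_mode=False):
    """Output width of 1D pooling:
    floor_or_ceil((width + 2*padding - pool_size) / strides) + 1"""
    if strides is None:
        strides = pool_size  # default: non-overlapping windows
    span = width + 2 * padding - pool_size
    return (math.ceil(span / strides) if ceil_mode else span // strides) + 1
```

For example, a size-2 window over width 7 gives 3 with the default floor but 4 with ceil_mode=True, where the last, partial window is kept.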
class mxnet.gluon.nn.MaxPool2D(pool_size=(2, 2), strides=None, padding=0, layout='NCHW', ceil_mode=False, **kwargs)
Max pooling operation for two dimensional (spatial) data.
Parameters: pool_size (int or list/tuple of 2 ints,) – Size of the max pooling windows. strides (int, list/tuple of 2 ints, or None.) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size. padding (int or list/tuple of 2 ints,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout (str, default 'NCHW') – Dimension ordering of data and weight. Can be ‘NCHW’, ‘NHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’ stands for batch, channel, height, and width dimensions respectively. padding is applied on ‘H’ and ‘W’ dimension. ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 4D input tensor with shape (batch_size, in_channels, height, width) when layout is NCHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 4D output tensor with shape (batch_size, channels, out_height, out_width) when layout is NCHW. out_height and out_width are calculated as:
out_height = floor((height + 2*padding[0] - pool_size[0]) / strides[0]) + 1
out_width = floor((width + 2*padding[1] - pool_size[1]) / strides[1]) + 1
When ceil_mode is True, ceil will be used instead of floor in this equation.
class mxnet.gluon.nn.MaxPool3D(pool_size=(2, 2, 2), strides=None, padding=0, ceil_mode=False, layout='NCDHW', **kwargs)
Max pooling operation for 3D data (spatial or spatio-temporal).
Parameters: pool_size (int or list/tuple of 3 ints,) – Size of the max pooling windows. strides (int, list/tuple of 3 ints, or None.) – Factor by which to downscale. E.g. 2 will halve the input size. If None, it will default to pool_size. padding (int or list/tuple of 3 ints,) – If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points. layout (str, default 'NCDHW') – Dimension ordering of data and weight. Can be ‘NCDHW’, ‘NDHWC’, etc. ‘N’, ‘C’, ‘H’, ‘W’, ‘D’ stands for batch, channel, height, width and depth dimensions respectively. padding is applied on ‘D’, ‘H’ and ‘W’ dimension. ceil_mode (bool, default False) – When True, will use ceil instead of floor to compute the output shape.
Inputs:
• data: 5D input tensor with shape (batch_size, in_channels, depth, height, width) when layout is NCDHW. For other layouts shape is permuted accordingly.
Outputs:
• out: 5D output tensor with shape (batch_size, channels, out_depth, out_height, out_width) when layout is NCDHW. out_depth, out_height and out_width are calculated as:
out_depth = floor((depth + 2*padding[0] - pool_size[0]) / strides[0]) + 1
out_height = floor((height + 2*padding[1] - pool_size[1]) / strides[1]) + 1
out_width = floor((width + 2*padding[2] - pool_size[2]) / strides[2]) + 1
# Linear mixed fluid-structure interaction system¶
This tutorial demonstrates the use of subdomain functionality and shows how to describe a system consisting of multiple materials in Firedrake.
The tutorial was contributed by Tomasz Salwa and Onno Bokhove.
The model considered consists of a fluid with a free surface and an elastic solid. We will use the notions fluid/water and structure/solid/beam interchangeably. For simplicity (and speed of computation) we consider a model in 2D; however, it can easily be generalised to 3D. The starting point is the linearised version (the domain is fixed) of the fully nonlinear variational principle. In non-dimensional units:
$\begin{split}0 = & \delta \int_0^{t_{\text{end}}} \int \left( \partial_t{\eta} \phi - \frac{1}{2} \eta^2 \right) {\mathrm d} S_f - \int \frac{1}{2} |\nabla \phi|^2 {\mathrm d} x_F \\ & + \int {\bf n} \cdot \partial_t {\bf X} \phi \, {\mathrm d} s_s\\ & + \int \rho_0 \partial_t {\bf X} \cdot {\bf U} - \frac 12 \rho_0 |{\bf U}|^2 - \frac 12 \lambda e_{ii}e_{jj} - \mu e_{ij} e_{ij}\, {\mathrm d} x_S \, {\mathrm d} t \, ,\end{split}$
in which the first line contains integration over the fluid domain; the second, over the fluid-structure interface; and the third, over the structure domain. The following notions are used:
• $$\eta$$ - free surface deviation
• $$\phi$$ - fluid flow potential
• $$\rho_0$$ - structure density (in fluid density units)
• $$\lambda$$ - first Lamé constant (material parameter)
• $$\mu$$ - second Lamé constant (material parameter)
• $${\bf X}$$ - structure displacement
• $${\bf U}$$ - structure velocity
• $$e_{ij} = \frac{1}{2} \bigl( \frac{\partial X_j }{ \partial x_i } + \frac{ \partial X_i }{ \partial x_j } \bigr)$$ - linear strain tensor; $$i$$, $$j$$ denote vector components
• $${\mathrm d} S_f$$ - integration element over fluid free surface
• $${\mathrm d} s_s$$ - integration element over structure-fluid interface
• $${\mathrm d} x_F$$ - integration element over fluid domain
• $${\mathrm d} x_S$$ - integration element over structure domain
After numerous manipulations (described in detail in [SBK17]) and evaluation of the individual variations, the time-discrete equations, with a symplectic Euler scheme, that we would like to implement in Firedrake are:
\begin{split}\begin{aligned} \int v \phi^{n+1} \, {\mathrm d} S_f &= \int v (\phi^n - \Delta t \eta^n) \, {\mathrm d} S_f \\\\ % \int \rho_0 {\bf v} \cdot {\bf U}^{n+1} \, {\mathrm d} x_S\ \underline{+ \int {\bf n} \cdot {\bf v} \, \phi^{n+1} \, {\mathrm d} s_s} &= \rho_0 \int {\bf v} \cdot {\bf U}^n \, {\mathrm d} x_S \nonumber\\ &\hspace{4em}- \Delta t \int \left( \lambda \nabla \cdot {\bf v} \nabla \cdot {\bf X}^n + \mu \frac{\partial X^n_j}{\partial x_i} \left( \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right) \right) \, {\mathrm d} x_S \\ &\hspace{8em}\underline{ + \int {\bf n} \cdot {\bf v} \, \phi^n \, {\mathrm d} s_s }\\\\ % \int \nabla v \cdot \nabla \phi^{n+1} \, {\mathrm d} x_F\ \underline{- \int v {\bf n} \cdot {\bf U}^{n+1} \, {\mathrm d} s_s } &= 0 \\\\ %\hspace{1cm} (+ \text{Dirichlet BC at } \partial \Omega_f)\\ % \int v \eta^{n+1} \, {\mathrm d} S_f &= \int v \eta^n \, {\mathrm d} S_f + \Delta t \int \nabla v \cdot \nabla \phi^{n+1} \, {\mathrm d} x_F\\ &\hspace{4em}\underline{- \Delta t \int v {\bf n} \cdot {\bf U}^{n+1}\, {\mathrm d} s_s }\\\\ % \int {\bf v} \cdot {\bf X}^{n+1} \, {\mathrm d} x_S &= \int {\bf v} \cdot ( {\bf X}^n + \Delta t {\bf U}^{n+1} ) \, {\mathrm d} x_S \, . \end{aligned}\end{split}
The underlined terms are the coupling terms. Note that the first equation, for $$\phi$$ at the free surface, is solved on the free surface only, and the last equation, for $${\bf X}$$, in the structure domain only, while the others are solved in both domains. Moreover, the second and third equations, for $$\phi$$ and $${\bf U}$$, need to be solved simultaneously. The geometry of the system with the initial condition is shown below.
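The ordering of the symplectic Euler updates (each step uses the newest available values, as in the scheme above) can be illustrated on a toy harmonic oscillator; this pure-Python sketch is not part of the tutorial, but it shows why the scheme keeps the energy bounded over long runs:

```python
def symplectic_euler(q0, p0, dt, steps):
    """Symplectic Euler for a unit harmonic oscillator (H = p^2/2 + q^2/2):
    update the momentum first, then use the NEW momentum to update the
    position -- the same staggered ordering used for (phi, U, eta, X)."""
    q, p = q0, p0
    for _ in range(steps):
        p = p - dt * q  # momentum step uses the old position
        q = q + dt * p  # position step uses the updated momentum
    return q, p

q, p = symplectic_euler(1.0, 0.0, 0.01, 10000)
```

Unlike the explicit (forward) Euler scheme, whose energy grows without bound, the energy here oscillates around its initial value with amplitude of order dt.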
Now we present the code used to solve the system of equations above. We start with appropriate imports:
from firedrake import *
import math
import numpy as np
Then, we set parameters of the simulation:
# parameters in SI units
t_end = 5.0 # time of simulation [s]
dt = 0.005 # time step [s]
g = 9.8 # gravitational acceleration
# water
Lx = 20.0 # length of the tank [m] in x-direction; needed for computing initial condition
Lz = 10.0 # height of the tank [m]; needed for computing initial condition
rho = 1000.0 # fluid density in kg/m^2 in 2D [water]
# solid parameters
# - we use a sufficiently soft material to be able to see noticeable structural displacement
rho_B = 7700.0 # structure density in kg/m^2 in 2D
lam = 1e7 # N/m in 2D - first Lame constant
mu = 1e7 # N/m in 2D - second Lame constant
# mesh
mesh = Mesh("L_domain.msh")
# these numbers must match the ones defined in the mesh file
fluid_id = 1 # fluid subdomain
structure_id = 2 # structure subdomain
bottom_id = 1 # structure bottom
top_id = 6 # fluid surface
interface_id = 9 # fluid-structure interface
# control parameters
output_data_every_x_time_steps = 20 # to avoid saving data every time step
coupling = True # turn on coupling terms
The equations are in nondimensional units, hence we transform:
L = Lz
T = L / math.sqrt(g * L)
t_end /= T
dt /= T
Lx /= L
Lz /= L
rho_B /= rho
lam /= g * rho * L
mu /= g * rho * L
rho = 1.0 # or equivalently rho /= rho
Let us define function spaces, including the mixed one:
V_W = FunctionSpace(mesh, "CG", 1)
V_B = VectorFunctionSpace(mesh, "CG", 1)
mixed_V = V_W * V_B
Then, we define functions. First, in the fluid domain:
phi = Function(V_W, name="phi")
phi_f = Function(V_W, name="phi_f") # at the free surface
eta = Function(V_W, name="eta")
trial_W = TrialFunction(V_W)
v_W = TestFunction(V_W)
Second, in the beam domain:
X = Function(V_B, name="X")
U = Function(V_B, name="U")
trial_B = TrialFunction(V_B)
v_B = TestFunction(V_B)
And last, mixed functions in the mixed domain:
trial_f, trial_s = TrialFunctions(mixed_V)
v_f, v_s = TestFunctions(mixed_V)
tmp_f = Function(V_W)
tmp_s = Function(V_B)
result_mixed = Function(mixed_V)
We need auxiliary indicator functions that are 0 in one subdomain and 1 in the other. They are needed in both the “CG” and “DG” spaces. We use the fact that the fluid and structure subdomains are defined in the mesh file with appropriate ID numbers that Firedrake is able to recognise, which can be used to construct the indicator functions:
V_DG0_W = FunctionSpace(mesh, "DG", 0)
V_DG0_B = FunctionSpace(mesh, "DG", 0)
# Heaviside step function in fluid
I_W = Function(V_DG0_W)
par_loop(("{[i] : 0 <= i < f.dofs}", "f[i, 0] = 1.0"),
         dx(fluid_id),
         {"f": (I_W, WRITE)},
         is_loopy_kernel=True)
I_cg_W = Function(V_W)
par_loop(("{[i] : 0 <= i < A.dofs}", "A[i, 0] = fmax(A[i, 0], B[0, 0])"),
         dx,
         {"A": (I_cg_W, RW), "B": (I_W, READ)},
         is_loopy_kernel=True)
# Heaviside step function in solid
I_B = Function(V_DG0_B)
par_loop(("{[i] : 0 <= i < f.dofs}", "f[i, 0] = 1.0"),
         dx(structure_id),
         {"f": (I_B, WRITE)},
         is_loopy_kernel=True)
I_cg_B = Function(V_B)
par_loop(("{[i, j] : 0 <= i < A.dofs and 0 <= j < 2}", "A[i, j] = fmax(A[i, j], B[0, 0])"),
         dx,
         {"A": (I_cg_B, RW), "B": (I_B, READ)},
         is_loopy_kernel=True)
We use indicator functions to construct normal unit vector outward to the fluid domain at the fluid-structure interface:
n_vec = FacetNormal(mesh)
n_int = I_B("+") * n_vec("+") + I_B("-") * n_vec("-")
Now we can construct special boundary conditions that limit the solvers only to the appropriate subdomains of our interest:
class MyBC(DirichletBC):
    def __init__(self, V, value, markers):
        # Call superclass init
        # We provide a dummy subdomain id.
        super(MyBC, self).__init__(V, value, 0)
        # Override the "nodes" property which says where the boundary
        # condition is to be applied.
        self.nodes = np.unique(np.where(markers.dat.data_ro_with_halos == 0)[0])
def surface_BC():
    # This will set nodes on the top boundary to 1.
    bc = DirichletBC(V_W, 1, top_id)
    # We will use this function to determine the new BC nodes (all those
    # that aren't on the boundary)
    f = Function(V_W, dtype=np.int32)
    # f is now 0 everywhere, except on the boundary
    bc.apply(f)
    # Now I can use MyBC to create a "boundary condition" to zero out all
    # the nodes that are *not* on the top boundary:
    return MyBC(V_W, 0, f)
# same as above, but in the mixed space
def surface_BC_mixed():
    bc_mixed = DirichletBC(mixed_V.sub(0), 1, top_id)
    f_mixed = Function(mixed_V.sub(0), dtype=np.int32)
    bc_mixed.apply(f_mixed)
    return MyBC(mixed_V.sub(0), 0, f_mixed)
BC_exclude_beyond_surface = surface_BC()
BC_exclude_beyond_surface_mixed = surface_BC_mixed()
BC_exclude_beyond_solid = MyBC(V_B, 0, I_cg_B)
BC_exclude_beyond_water_mixed = MyBC(mixed_V.sub(0), 0, I_cg_W)
BC_exclude_beyond_solid_mixed = MyBC(mixed_V.sub(1), 0, I_cg_B)
Finally, we are ready to define the solvers of our equations. First, equation for $$\phi$$ at the free surface:
a_phi_f = trial_W * v_W * ds(top_id)
L_phi_f = (phi_f - dt * eta) * v_W * ds(top_id)
LVP_phi_f = LinearVariationalProblem(a_phi_f, L_phi_f, phi_f, bcs=BC_exclude_beyond_surface)
LVS_phi_f = LinearVariationalSolver(LVP_phi_f)
Second, equation for the beam displacement $${\bf X}$$, where we also fix it to the bottom by applying zero Dirichlet boundary condition:
a_X = dot(trial_B, v_B) * dx(structure_id)
L_X = dot((X + dt * U), v_B) * dx(structure_id)
# no-motion beam bottom boundary condition
BC_bottom = DirichletBC(V_B, as_vector([0.0, 0.0]), bottom_id)
LVP_X = LinearVariationalProblem(a_X, L_X, X, bcs=[BC_bottom, BC_exclude_beyond_solid])
LVS_X = LinearVariationalSolver(LVP_X)
Finally, we define solvers for $$\phi$$, $${\bf U}$$ and $$\eta$$ in the mixed domain. In particular, the value of $$\phi$$ at the free surface is used as a boundary condition. Note that avg(…) is necessary for terms in expressions containing n_int, which is built in the “DG” space:
# phi-U
# no-motion beam bottom boundary condition in the mixed space
BC_bottom_mixed = DirichletBC(mixed_V.sub(1), as_vector([0.0, 0.0]), bottom_id)
# boundary condition to set phi_f at the free surface
BC_phi_f = DirichletBC(mixed_V.sub(0), phi_f, top_id)
delX = grad(X)  # shorthand for the gradients appearing in the stress term
delv_B = grad(v_s)
T_x_dv = lam * div(X) * div(v_s) + mu * inner(delX, delv_B + transpose(delv_B))
a_U = rho_B * dot(trial_s, v_s) * dx(structure_id)
L_U = (rho_B * dot(U, v_s) - dt * T_x_dv) * dx(structure_id)
a_phi = dot(grad(trial_f), grad(v_f)) * dx(fluid_id)  # weak form of Laplace's equation for phi in the fluid interior
if coupling:
    a_U += dot(avg(v_s), n_int) * avg(trial_f) * dS  # avg(...) necessary here and below
    L_U += dot(avg(v_s), n_int) * avg(phi) * dS
    a_phi += -dot(n_int, avg(trial_s)) * avg(v_f) * dS
LVP_U_phi = LinearVariationalProblem(a_U + a_phi, L_U, result_mixed,
                                     bcs=[BC_phi_f,
                                          BC_bottom_mixed,
                                          BC_exclude_beyond_solid_mixed,
                                          BC_exclude_beyond_water_mixed])
LVS_U_phi = LinearVariationalSolver(LVP_U_phi)
# eta
a_eta = trial_W * v_W * ds(top_id)
L_eta = eta * v_W * ds(top_id) + dt * dot(grad(v_W), grad(phi)) * dx(fluid_id)
if coupling:
    L_eta += -dt * dot(n_int, avg(U)) * avg(v_W) * dS
LVP_eta = LinearVariationalProblem(a_eta, L_eta, eta, bcs=BC_exclude_beyond_surface)
LVS_eta = LinearVariationalSolver(LVP_eta)
Let us set the initial condition. We choose no motion at the beginning in both fluid and structure, zero displacement in the structure and deflected free surface in the fluid. The shape of the deflection is computed from the analytical solution:
# initial condition in fluid based on analytical solution
# compute analytical initial phi and eta
n_mode = 1
a = 0.0 * T / L ** 2 # in nondim units
b = 5.0 * T / L ** 2 # in nondim units
lambda_x = np.pi * n_mode / Lx
omega = np.sqrt(lambda_x * np.tanh(lambda_x * Lz))
x = mesh.coordinates
phi_exact_expr = a * cos(lambda_x * x[0]) * cosh(lambda_x * x[1])
eta_exact_expr = -omega * b * cos(lambda_x * x[0]) * cosh(lambda_x * Lz)
bc_top = DirichletBC(V_W, 0, top_id)
eta.assign(0.0)
phi.assign(0.0)
eta_exact = Function(V_W)
eta_exact.interpolate(eta_exact_expr)
eta.assign(eta_exact, bc_top.node_set)
phi.interpolate(phi_exact_expr)
phi_f.assign(phi, bc_top.node_set)
A file to store data for visualization:
outfile_phi = File("results_pvd/phi.pvd")
To save data for visualization, we change the position of the nodes in the mesh, so that they represent the computed dynamic position of the free surface and the structure:
def output_data():
    output_data.counter += 1
    if output_data.counter % output_data_every_x_time_steps != 0:
        return
    mesh_static = mesh.coordinates.vector().get_local()
    mesh.coordinates.vector().set_local(mesh_static + X.vector().get_local())
    mesh.coordinates.dat.data[:, 1] += eta.dat.data_ro
    outfile_phi.write(phi)
    mesh.coordinates.vector().set_local(mesh_static)
output_data.counter = -1  # -1 to exclude counting print of initial state
In the end, we proceed with the actual computation loop:
t = 0.0
output_data()
while t <= t_end + dt:
    t += dt
    print("time = ", t * T)
    # symplectic Euler scheme
    LVS_phi_f.solve()
    LVS_U_phi.solve()
    tmp_f, tmp_s = result_mixed.subfunctions
    phi.assign(tmp_f)
    U.assign(tmp_s)
    LVS_eta.solve()
    LVS_X.solve()
    output_data()
The result of the computation, visualised with paraview, is shown below.
The mesh is deflected for visualization only. As the model is linear, the actual mesh used for computation is fixed. Colours indicate values of the flow potential $$\phi$$.
A python script version of this demo can be found here.
The mesh file is here. It can be generated with gmsh from this file with a command: gmsh -2 L_domain.geo.
An extended 3D version of this code is published here.
The work is based on the articles [SBK17] and [SBK16]. The authors gratefully acknowledge funding from European Commission, Marie Curie Actions - Initial Training Networks (ITN), project number 607596.
References
SBK16
Tomasz Salwa, Onno Bokhove, and Mark A. Kelmanson. Variational modelling of wave-structure interactions for offshore wind turbines. Extended paper for Int. Conf. on Ocean, Offshore and Arctic Eng., OMAE2016, Busan, South-Korea, June 2016. URL: https://asmedigitalcollection.asme.org/OMAE/proceedings-abstract/OMAE2016/49972/281268.
SBK17
Tomasz Salwa, Onno Bokhove, and Mark A. Kelmanson. Variational modelling of wave–structure interactions with an offshore wind-turbine mast. Journal of Engineering Mathematics, Sep 2017. doi:10.1007/s10665-017-9936-4.
# Dot product in spherical coordinates
1. Sep 18, 2011
### buttertop
1. The problem statement, all variables and given/known data
What is the dot product of two unit vectors in spherical coordinates?
2. Relevant equations
A⋅B = ||A|| ||B|| cos($\theta$) = cos($\theta$)
3. The attempt at a solution
The above equation is the only relevant form of the dot product in terms of the angle $\theta$ that I can find. However, I'm not sure if the spherical coordinates need a term for $\phi$. If so, is this correct?
A⋅B = ||A|| ||B|| cos($\theta$) sin($\phi$) = cos($\theta$) sin($\phi$)
Last edited: Sep 18, 2011
2. Sep 18, 2011
### glebovg
Unit vectors in spherical coordinates (with $\phi$ the polar angle, $\theta$ the azimuth, and ρ, φ, θ denoting the spherical unit vectors) are
i = sin(φ)cos(θ)ρ + cos(φ)cos(θ)φ - sin(θ)θ
j = sin(φ)sin(θ)ρ + cos(φ)sin(θ)φ + cos(θ)θ
k = cos(φ)ρ - sin(φ)φ
3. Sep 18, 2011
### buttertop
Ah, sorry, by "unit vector" all I meant was that both vectors have unit length, so ||A|| ||B|| = 1. Even if this didn't apply, I'm wondering if A⋅B = ||A|| ||B|| cos($\theta$) sin($\phi$).
4. Sep 18, 2011
### glebovg
5. Sep 18, 2011
### glebovg
A spherical coordinate system is a coordinate system for three-dimensional space where the position of a point is specified by three numbers.
6. Sep 18, 2011
### buttertop
So if I have two vectors, they can each be described by the angles $\theta$ and $\phi$, roughly equivalent to the azimuth and the altitude of a sphere, right? So what I'd like to know is what the dot product is between two vectors in terms of these angles. I know, at least in cartesian coordinates, that the dot product is equal to ||A|| ||B|| cos($\theta$). If I'm describing the dot product of two vectors in three dimensional space, does this still apply, or do I need to take $\phi$ into account?
7. Sep 18, 2011
### glebovg
Like I said you need three numbers to describe a point in spherical coordinates, namely ρ, θ, and φ. θ and φ are not enough.
8. Sep 18, 2011
### buttertop
Ah, of course, sorry I misunderstood. In this case I believe $\rho$ is equal to 1. Is there a way to use the i, j and k identities you mentioned to express the dot product in terms of $\rho$, $\theta$ and $\phi$?
9. Sep 18, 2011
### glebovg
I do not understand your question. Perhaps you are talking about the cross product or the divergence. The divergence is like the dot product of the del operator and the vector function F, i.e. div F = ∇⋅F.
10. Sep 18, 2011
### buttertop
Hmm... I don't think the divergence is what I'm looking for exactly. Basically, this is the setup: there are two vectors centered on the origin. I know $\rho$, $\theta$ and $\phi$. How do I express the dot product of the two vectors in these terms?
11. Sep 18, 2011
### glebovg
Can you convert from spherical to Cartesian coordinates?
12. Sep 18, 2011
### glebovg
I think I know what you mean. To compute <ρ1, φ1, θ1>⋅<ρ2, φ2, θ2>, express the Cartesian coordinates (x, y, z) in terms of the spherical coordinates and use the fact that cos(θ1)cos(θ2) + sin(θ1)sin(θ2) = cos(θ1 - θ2).
13. Sep 18, 2011
### glebovg
Hint: <x1, y1, z1>⋅<x2, y2, z2> = ρ1sin(φ1)cos(θ1)ρ2sin(φ2)cos(θ2) + ...
14. Sep 18, 2011
### buttertop
Thanks for all of your help glebovg, I think I'm on the right track. One thing though: I'd like to be able to express it in terms of the angles $\theta$ and $\phi$ between the two vectors, so there's only one value of $\theta$ and $\phi$ ($\rho$, too, but that is equal to 1 and won't show up, I believe).
Here is an example for two vectors in 2D, using $\theta$:
http://meandmark.com/vectorpart4.html [Broken]
What would the equivalent be if I needed $\theta$ and $\phi$ to describe the two vectors?
Last edited by a moderator: May 5, 2017
15. Sep 18, 2011
### glebovg
If you are looking for an equivalent of a⋅b = |a||b|cos(θ), just use the hint I gave you and you will derive the general formula.
Note that <x1, y1, z1>⋅<x2, y2, z2> = a⋅b.
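Carrying the hint through gives, for unit vectors, A⋅B = sin(φ1)sin(φ2)cos(θ1 − θ2) + cos(φ1)cos(φ2). A quick numerical check of that formula (a sketch added for illustration; the helper names are not from the thread):

```python
import numpy as np

# phi is the polar angle measured from +z, theta the azimuth in the x-y plane.
def spherical_to_cartesian(rho, phi, theta):
    return np.array([rho * np.sin(phi) * np.cos(theta),
                     rho * np.sin(phi) * np.sin(theta),
                     rho * np.cos(phi)])

def dot_spherical(rho1, phi1, theta1, rho2, phi2, theta2):
    # Obtained by expanding <x1,y1,z1>.<x2,y2,z2> and using
    # cos(t1)cos(t2) + sin(t1)sin(t2) = cos(t1 - t2)
    return rho1 * rho2 * (np.sin(phi1) * np.sin(phi2) * np.cos(theta1 - theta2)
                          + np.cos(phi1) * np.cos(phi2))

# Compare against the Cartesian dot product for two unit vectors (rho = 1)
a = spherical_to_cartesian(1.0, 0.7, 0.3)
b = spherical_to_cartesian(1.0, 1.2, 2.1)
print(np.dot(a, b), dot_spherical(1.0, 0.7, 0.3, 1.0, 1.2, 2.1))
```

Note that the formula reduces to cos(θ1 − θ2) in the plane φ1 = φ2 = π/2, recovering the familiar 2D result.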
## bias variance decomposition for classification problem
It is given that:
MSE = bias$$^2$$ + variance
I can see the mathematical relationship between MSE, bias, and variance. However, how do we understand the mathematical intuition of bias and variance for classification problems (we can't have MSE for classification tasks)?
I would like some help with the intuition and in understanding the mathematical basis for bias and variance for classification problems.
Any formula or derivation would be helpful.
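One concrete answer in the literature is Domingos' unified decomposition, which generalizes bias and variance to 0-1 loss: the "main prediction" is the majority vote over models trained on different training sets, bias is the loss of that main prediction against the optimal label, and variance is how often individual models disagree with the main prediction. A rough numerical sketch (the toy data, the stump learner, and all names here are illustrative assumptions, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Toy 1-D binary problem: true label is 1 when x > 0, with 20% label noise
    x = rng.normal(size=n)
    y = (x > 0).astype(int)
    flip = rng.random(n) < 0.2
    return x, np.where(flip, 1 - y, y)

def fit_stump(x, y):
    # Deliberately simple learner: threshold at the midpoint of the class means
    t = 0.5 * (x[y == 0].mean() + x[y == 1].mean())
    return lambda q: (q > t).astype(int)

# Domingos-style decomposition of 0-1 loss on a test grid:
#   bias(x)     = 1 if the main prediction (majority vote) != optimal label
#   variance(x) = fraction of training sets disagreeing with the main prediction
x_test = np.linspace(-2, 2, 201)
y_opt = (x_test > 0).astype(int)  # Bayes-optimal labels for this toy problem
preds = np.array([fit_stump(*make_data(100))(x_test) for _ in range(200)])
main = (preds.mean(axis=0) > 0.5).astype(int)
bias = float((main != y_opt).mean())
variance = float((preds != main).mean())
print("bias:", bias, "variance:", variance)
```

The point of the sketch is that neither quantity needs MSE: both are defined directly in terms of predictions under the 0-1 loss, and a wigglier learner would trade a lower bias for a higher variance.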
I don't fully understand the question, what are you looking for exactly? – Djib2011 – 2019-06-19T13:32:08.263
Oops, sorry. Updated in the question itself. I want to know the mathematical intuition of bias and variance for a classification problem. For regression it has a relation with MSE, but for classification how do we relate them? – IamTheRealFord – 2019-06-21T08:51:44.597
WHAT classification? Logit? – Peter – 2019-06-21T20:01:06.417
If you are looking for the concept, see https://datascience.stackexchange.com/questions/53758/math-behind-mse-bias2-variance and deeplearningbook.
– Fatemeh Asgarinejad – 2019-06-25T06:43:41.333
Yes, already gone through that. But how will it work for a classification problem? (we don't have MSE there, you know) – IamTheRealFord – 2019-06-25T07:17:25.747
I don't see why this was closed; the question seems pretty clear to me (after the June 25 edit anyway). MSE has a well-known bias-variance decomposition, so what about other (especially classification) losses? This doesn't depend on the specific model used. For a starting point, see https://stats.stackexchange.com/questions/393942/bias-variance-decomposition-for-non-squared-loss , but I haven't found a satisfactory answer for, e.g., log-loss.
– Ben Reiniger – 2019-06-29T13:57:32.623
yes i am puzzled why my question is put on hold :( – IamTheRealFord – 2019-06-30T14:17:49.600
## Answers
0
My opinion is that the bias variance trade off is rooted in the Uncertainty principle. It behaves absolutely the same.
1
Yes. I am currently reading this to decompose bias-variance for a general loss function: http://www-bcf.usc.edu/~gareth/research/bv.pdf. Also searching (both intuitively and mathematically) for why decreasing bias increases variance and vice versa!
– IamTheRealFord – 2019-06-25T11:13:54.000
0
Bias and Variance in Classification problems
Check this link about Support Vector Machine.
You will directly understand bias and variance in classification. Basically, if your data is linearly separable you do not have a problem.
But imagine that your data is only approximately linearly separable: a few points land on the other side of their group.
Now imagine a model that separates the data linearly, versus a model that oscillates through the data so much that it classifies every point correctly.
Additional link
# Stochastic process
The content on this page originated on Wikipedia and is yet to be significantly improved. Contributors are invited to replace and add material to make this an original article.
A stochastic process, or sometimes random process, is the counterpart of a deterministic process (or deterministic system) considered in probability theory. Instead of dealing with only one possible 'reality' of how the process might evolve over time (as is the case, for example, for solutions of an ordinary differential equation), in a random process there is some indeterminacy in its future evolution, described by probability distributions. This means that even if the initial condition (or starting point) is known, there are many possibilities for how the process might evolve, although some paths are more probable than others.
In the simplest possible case ('discrete time'), a stochastic process amounts to a sequence of random variables known as a time series (for example, see Markov chain). Another basic type of a stochastic process is a random field, whose domain is a region of space, in other words, a random function whose arguments are drawn from a range of continuously changing values. One approach to stochastic processes treats them as functions of one or several deterministic arguments ('inputs', in most cases regarded as 'time') whose values ('outputs') are random variables: non-deterministic (single) quantities which have certain probability distributions. Random variables corresponding to various times (or points, in the case of random fields) may be completely different. The main requirement is that these different random quantities all have the same 'type'.[1] Although the random values of a stochastic process at different times may be independent random variables, in most commonly considered situations they exhibit complicated statistical correlations.
Familiar examples of processes modeled as stochastic time series include stock market and exchange rate fluctuations, signals such as speech, audio and video, medical data such as a patient's EKG, EEG, blood pressure or temperature, and random movement such as Brownian motion or random walks. Examples of random fields include static images, random terrain (landscapes), or composition variations of an inhomogeneous material.
## Formal definition and basic properties
### Definition
A stochastic process (or random process) is a collection of random variables indexed by a set T ("time"). That is, a stochastic process F is a map
${\displaystyle F:T\to L_{0}(\Omega ,{\mathcal {F}},\mathbb {P} ),}$
where ${\displaystyle L_{0}(\Omega ,{\mathcal {F}},\mathbb {P} )}$ is the space of (equivalence classes of) measurable functions from a probability space ${\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}$ to ${\displaystyle \mathbb {R} }$.
A modification is an equivalence class of maps ${\displaystyle f:\Omega \to \mathbb {R} ^{T}}$. Note that every modification determines an (almost surely) unique random process; the converse is not generally true.
### Distribution
Let ${\displaystyle F:T\to L_{0}(\Omega ,{\mathcal {F}},\mathbb {P} )}$ be a stochastic process. For every finite subset ${\displaystyle T'\subset T}$ the restriction ${\displaystyle F|_{T'}}$ has an (almost surely) unique modification, which is a random variable with values in ${\displaystyle \mathbb {R} ^{T'}}$. The distribution ${\displaystyle P_{T'}}$ of this random variable is a probability measure on ${\displaystyle \mathbb {R} ^{T'}}$; many properties of F can be determined from the collection
${\displaystyle \left\{P_{T'}\,;\,T'\subset T,\#T'<\infty \right\}.}$
Two processes that have the same distribution are called equidistributed.
A suitably "consistent" collection of finite-dimensional distributions can be used to define a stochastic process (see Kolmogorov extension in the next section).
Given a modification f, one may consider the law of f (which is a measure on ${\displaystyle \mathbb {R} ^{T}}$). The law of f determines the finite-dimensional distributions; the converse is not generally true.
## Constructing stochastic processes
In the ordinary axiomatization of probability theory by means of measure theory, the problem is to construct a sigma-algebra of measurable subsets of the space of all functions, and then put a finite measure on it. For this purpose one traditionally uses a method called Kolmogorov extension.
There is at least one alternative axiomatization of probability theory by means of expectations on C-star algebras of random variables. In this case the method goes by the name of Gelfand-Naimark-Segal construction.
This is analogous to the two approaches to measure and integration, where one has the choice to construct measures of sets first and define integrals later, or construct integrals first and define set measures as integrals of characteristic functions.
### The Kolmogorov extension
The Kolmogorov extension proceeds along the following lines: assuming that a probability measure on the space of all functions ${\displaystyle f:X\to Y}$ exists, then it can be used to specify the probability distribution of finite-dimensional random variables ${\displaystyle f(x_{1}),\dots ,f(x_{n})}$. Now, from this n-dimensional probability distribution we can deduce an (n − 1)-dimensional marginal probability distribution for ${\displaystyle f(x_{1}),\dots ,f(x_{n-1})}$. There is an obvious compatibility condition, namely, that this marginal probability distribution be the same as the one derived from the full-blown stochastic process. When this condition is expressed in terms of probability densities, the result is called the Chapman-Kolmogorov equation.
The Kolmogorov extension theorem guarantees the existence of a stochastic process with a given family of finite-dimensional probability distributions satisfying the Chapman-Kolmogorov compatibility condition.
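For Gaussian processes the compatibility condition is easy to see concretely: the finite-dimensional distributions are determined by a covariance function, and marginalizing a multivariate Gaussian simply deletes rows and columns of its covariance matrix. A short sketch for the covariance k(s, t) = min(s, t), that of the Wiener process (the helper name is illustrative):

```python
import numpy as np

def cov_matrix(times):
    # Finite-dimensional covariance of a centered Gaussian process with
    # k(s, t) = min(s, t), evaluated at the given times
    t = np.asarray(times, dtype=float)
    return np.minimum.outer(t, t)

times = [0.2, 0.5, 0.9]
K3 = cov_matrix(times)
# Compatibility: the 2-point distribution at (0.2, 0.5) must equal the
# marginal of the 3-point distribution, i.e. the leading 2x2 block of K3.
K2 = cov_matrix(times[:2])
print(np.allclose(K3[:2, :2], K2))
```

A family of covariance matrices nested in this way (together with consistent means) is exactly the kind of "consistent collection" to which the Kolmogorov extension theorem applies.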
### Separability, or what the Kolmogorov extension does not provide
Recall that, in the Kolmogorov axiomatization, measurable sets are the sets which have a probability or, in other words, the sets corresponding to yes/no questions that have a probabilistic answer.
The Kolmogorov extension starts by declaring to be measurable all sets of functions where finitely many coordinates ${\displaystyle [f(x_{1}),\dots ,f(x_{n})]}$ are restricted to lie in measurable subsets of ${\displaystyle Y_{n}}$. In other words, if a yes/no question about f can be answered by looking at the values of at most finitely many coordinates, then it has a probabilistic answer.
In measure theory, if we have a countably infinite collection of measurable sets, then the union and intersection of all of them is a measurable set. For our purposes, this means that yes/no questions that depend on countably many coordinates have a probabilistic answer.
The good news is that the Kolmogorov extension makes it possible to construct stochastic processes with fairly arbitrary finite-dimensional distributions. Also, every question that one could ask about a sequence has a probabilistic answer when asked of a random sequence. The bad news is that certain questions about functions on a continuous domain don't have a probabilistic answer. One might hope that the questions that depend on uncountably many values of a function are of little interest, but the really bad news is that virtually all concepts of calculus are of this sort. For example, boundedness, continuity, and differentiability all require knowledge of uncountably many values of the function.
One solution to this problem is to require that the stochastic process be separable. In other words, that there be some countable set of coordinates ${\displaystyle \{f(x_{i})\}}$ whose values determine the whole random function f.
The Kolmogorov continuity theorem guarantees that processes that satisfy certain constraints on the moments of their increments are continuous.
## Examples and special cases
### The time
A notable special case is where the time is a discrete set, for example the nonnegative integers {0, 1, 2, 3, ...}. Another important special case is ${\displaystyle T=\mathbb {R} }$.
Stochastic processes may be defined in higher dimensions by attaching a multivariate random variable to each point in the index set, which is equivalent to using a multidimensional index set. Indeed a multivariate random variable can itself be viewed as a stochastic process with index set T = {1, ..., n}.
### Examples
The paradigm continuous stochastic process is that of the Wiener process. In its original form the problem was concerned with a particle floating on a liquid surface, receiving "kicks" from the molecules of the liquid. The particle is then viewed as being subject to a random force which, since the molecules are very small and very close together, is treated as being continuous and, since the particle is constrained to the surface of the liquid by surface tension, is at each point in time a vector parallel to the surface. Thus the random force is described by a two component stochastic process; two real-valued random variables are associated to each point in the index set, time, (note that since the liquid is viewed as being homogeneous the force is independent of the spatial coordinates) with the domain of the two random variables being R, giving the x and y components of the force. A treatment of Brownian motion generally also includes the effect of viscosity, resulting in an equation of motion known as the Langevin equation.
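A minimal discretized simulation of the one-dimensional Wiener process makes the finite-dimensional characterization concrete: independent Gaussian increments of variance dt, so that Var(W_t) = t. (A sketch; the parameter choices are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(42)

def wiener_paths(n_paths, n_steps, dt):
    # W_0 = 0; independent increments W_{t+dt} - W_t ~ N(0, dt)
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    return np.hstack([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)])

paths = wiener_paths(n_paths=5000, n_steps=1000, dt=0.001)
# The finite-dimensional distributions give Var(W_t) = t; at t = 1.0 the
# sample variance over paths should match up to Monte Carlo error.
print(paths[:, -1].var())
```

Sample-path questions of the kind listed below can then be estimated by Monte Carlo over such an ensemble, keeping in mind that the discretization only probes countably many time points.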
If the index set of the process is N (the natural numbers), and the range is R (the real numbers), there are some natural questions to ask about the sample sequences of a process {X_i}_{i ∈ N}, where a sample sequence is {X_i(ω)}_{i ∈ N}.
1. What is the probability that each sample sequence is bounded?
2. What is the probability that each sample sequence is monotonic?
3. What is the probability that each sample sequence has a limit as the index approaches ∞?
4. What is the probability that the series obtained by summing a sample sequence converges?
5. What is the probability distribution of the sum?
Similarly, if the index set I is a finite or infinite interval, we can ask about the sample paths {X_t(ω)}_{t ∈ I}:
1. What is the probability that it is bounded/integrable/continuous/differentiable...?
2. What is the probability that it has a limit as t → ∞?
3. What is the probability distribution of the integral?
Stability in logarithmic Sobolev and related interpolation inequalities
Stability results for logarithmic Sobolev and Gagliardo-Nirenberg inequalities
Abstract
This paper is devoted to improvements of functional inequalities based on scalings and written in terms of relative entropies. When scales are taken into account and second moments fixed accordingly, deficit functionals provide explicit stability measurements, i.e., they bound, with explicit constants, the distance to the manifold of optimal functions. Various results are obtained for the Gaussian logarithmic Sobolev inequality and its Euclidean counterpart, for the Gaussian generalized Poincaré inequalities and for the Gagliardo-Nirenberg inequalities. As a consequence, faster convergence rates in diffusion equations (fast diffusion, Ornstein-Uhlenbeck and porous medium equations) are obtained.
J. Dolbeault and G. Toscani
Keywords: Sobolev inequality; logarithmic Sobolev inequality; Gaussian isoperimetric inequality; generalized Poincaré inequalities; Gagliardo-Nirenberg inequalities; interpolation; entropy – entropy production inequalities; extremal functions; optimal constants; relative entropy; generalized Fisher information; entropy power; stability; improved functional inequalities; fast diffusion equation; Ornstein-Uhlenbeck equation; porous medium equation; rates of convergence
Mathematics Subject Classification (2010): 26D10; 46E35; 58E35
1 Introduction
Several papers have recently been devoted to improvements of the logarithmic Sobolev inequality. Ledoux et al. [2014] use the Stein discrepancy. Closer to our approach are Bobkov et al. [2014] and Fathi et al. [2014], who exploit the difference between the inequality of [Stam, 1959, Inequality (2.3)] and the logarithmic Sobolev inequality to get a correction term in terms of the Fisher information functional. What we do here first is to emphasize the role of scalings and prefer to rely on Weissler [1978] for a scale invariant form of the logarithmic Sobolev inequality on the Euclidean space. We also make the choice to get a remainder term that involves the entropy functional and is very appropriate for stability issues. This allows us to deduce striking results in terms of rates of convergence for the Ornstein-Uhlenbeck equation. Writing the improvement in terms of the entropy has several advantages: constraints on the second moment are made clear, improvements can be extended to all generalized Poincaré inequalities for Gaussian measures, which interpolate between the Poincaré inequality and the logarithmic Sobolev inequality, and stability results with fully explicit constants can be stated: see for instance Corollary 3, with an explicit bound of the distance to the manifold of all Gaussian functions given in terms of the so-called deficit functional. This is, for the logarithmic Sobolev inequality, the exact analogue of the result of Bianchi and Egnell [1991] for Sobolev’s inequality.
However, putting the emphasis on scalings has other advantages, as the method easily extends to a nonlinear setting. We are henceforth in a position to get improved entropy – entropy production inequalities associated with fast diffusion flows, based on the scale invariant forms of the associated Gagliardo-Nirenberg inequalities, which cover a well-known family of inequalities that contains the logarithmic Sobolev inequality, and Sobolev’s inequality as an endpoint. This is not a complete surprise, because such improvements were known from Dolbeault and Toscani [2013] using detailed properties of the fast diffusion equation. By writing the entropy – entropy production inequality in terms of the relative entropy functional and a generalized Fisher information, we deduce from the scaling properties of Gagliardo-Nirenberg inequalities a correction term involving the square of the relative entropy, which is much simpler than using the properties of the nonlinear flow. The method also works in the porous medium case, which is new, provides clear evidence of the role of the second moment, and finally explains the fast rates of convergence in relative entropy that can be observed in the initial regime, away from Barenblatt equilibrium or self-similar states.
The reader interested in further considerations on improvements of the logarithmic Sobolev inequality is invited to refer to Ledoux et al. [2014], and to Bobkov et al. [2014], Fathi et al. [2014] for a probabilistic point of view and a measure of the defect in terms of Wasserstein’s distance, and to Ledoux [2001] for earlier results. Much more can also be found in Bakry et al. [2014]. Not all Gagliardo-Nirenberg-Sobolev inequalities are covered by our remarks, and we shall refer to Carlen et al. [2014] and references therein for the spectral point of view and its applications to the Schrödinger operator. The logarithmic Sobolev inequality in scale invariant form is equivalent to the Gaussian isoperimetric inequality: a study of the corresponding deficit can be found in Mossel and Neeman [2014]. In the perspective of information theory, we refer to Toscani [2013, 2014a] for a recent account of a concavity property of entropy powers that involves the isoperimetric inequality. It is not possible to quote all earlier related contributions, but let us at least point out two of them: the correction to the logarithmic Sobolev inequality by an entropy term involving the Wiener transform in [Carlen, 1991, Theorem 6], and the HWI inequality by Otto and Villani [2000].
Gagliardo-Nirenberg inequalities (see Gagliardo [1958], Nirenberg [1959]) have been related with fast diffusion or porous media equations in the framework of the so-called entropy methods by Del Pino and Dolbeault [2002]. Also see the papers by Carrillo and Toscani [2000], Otto [2001], Carrillo and Vázquez [2003] for closely related issues. The message is simple: optimal rates of convergence measured in relative entropy are equivalent to best constants in the inequalities written in entropy – entropy production form. Later improvements have been obtained on asymptotic rates of convergence by Blanchet et al. [2009], Bonforte et al. [2010], Dolbeault and Toscani [2011], Denzler et al. [2015]. A key observation of Dolbeault and Toscani [2011] is the fact that optimizing a relative entropy with respect to scales determines the second moment. This observation was then exploited by Dolbeault and Toscani [2013] to get a first explicit improvement in the framework of Gagliardo-Nirenberg inequalities. Notice that many papers on improved interpolation inequalities use the estimate of Bianchi and Egnell [1991], with the major drawback that the value of the constant is not known. As a consequence of the improved inequality, faster convergence rates for the solution to the fast diffusion equation were obtained, and a new phenomenon, a delay, was shown by Dolbeault and Toscani [2015a]. Inspired by Villani [2000], Savaré and Toscani [2014] studied the -th Rényi entropy and observed that the corresponding isoperimetric inequality is a Gagliardo-Nirenberg inequality in scale invariant form. Various consequences for the solutions to the evolution equations have been drawn in Carrillo and Toscani [2014] and Dolbeault and Toscani [2015b], which are strongly related with the present paper but can all be summarized in a simple sentence: scales are important, and a better adjustment than the one given by the asymptotic regime gives sharper estimates.
The counterpart in the present paper is that taking into account the scale invariant form of the inequality automatically improves on the inequality obtained by a simple entropy – entropy production method. Let us give some explanations.
At a formal level, the strategy of our paper goes as follows. Let us consider a generalized entropy functional E, which is assumed to be nonnegative, and a generalized Fisher information functional I. We further assume that they are related by a functional inequality of the form
I−λE≥0.
We denote by λ the optimal proportionality constant. If the inequality is not in scale invariant form, we will prove in various cases that there exists a convex function φ, leaving from 0 with slope λ, such that I ≥ φ(E). Hence we have found an improved functional inequality in the sense that
I−λE≥φ(E)−λE=ψ(E)
where ψ is nonnegative and can be used to measure the distance to the optimal functions. This is a stability result. The left hand side, which is called the deficit functional in the literature, is now controlled from below by a nonlinear function of the entropy functional. A precise distance can be obtained by the Pinsker-Csiszár-Kullback inequality, which is no more than a Taylor expansion at order two, and some generalizations. The key observation is that the optimization under scaling (in the Euclidean space) amounts to adjusting the second moment (in the Euclidean space, but also in spaces with finite measure, like the Gaussian measure, after some changes of variables).
At this point it is worth emphasizing the difference between our approach and the one of Bobkov et al. [2014] for the logarithmic Sobolev inequality. What the authors do is write the improved inequality as E ≤ φ⁻¹(I) and deduce that
I − λE ≥ I − λ φ⁻¹(I)
where the right hand side is again nonnegative because φ⁻¹ is concave and φ⁻¹(0) = 0. This is of course a stronger form of the inequality, as it controls, for instance, the distance to the manifold of optimal functions in a stronger norm. However, it is to a large extent useless for the applications that are presented in this paper, as the estimate in terms of the entropy is what matters, for instance, for applications to evolution equations.
We shall apply our strategy to the logarithmic Sobolev inequality in Section 2, to the generalized Poincaré inequalities for Gaussian measures in Section 3 and to some Gagliardo-Nirenberg inequalities in Section 4. Each of these inequalities can be established by the entropy – entropy production method. By considering the Ornstein-Uhlenbeck equation in the first two cases, and the fast diffusion / porous medium equation in the third case, it turns out that dE/dt = −I along the flow and
−ddt(I−λE)=R≥0.
Hence, if I − λE vanishes along the flow as t → ∞, this shows with no additional assumption that I − λE is a measure of the distance to the optimal functions. Improved functional inequalities follow by ODE techniques if one is able to relate R with I and E. This is the method which has been implemented for instance in Arnold and Dolbeault [2005], Dolbeault et al. [2008], Dolbeault and Toscani [2013], and it is well adapted when the diffusion equation can be seen as the gradient flow of E with respect to a distance. Typical distances are the Wasserstein distance for the logarithmic Sobolev inequality or the Gagliardo-Nirenberg inequalities, and ad hoc distances in the case of the generalized Poincaré inequalities. See Jordan et al. [1998], Otto [2001], Dolbeault et al. [2009, 2012] for more details on gradient flow issues. Improvements can also be obtained when the entropy production term differs from I: we refer to Demange [2008], Dolbeault et al. [2014] for interpolation inequalities on compact manifolds, or to Dolbeault and Jankowiak [2014] for improvements of Sobolev's inequality based on the Hardy-Littlewood-Sobolev functional. This makes the link with the famous improvement obtained by Bianchi and Egnell [1991], and also Cianchi et al. [2009], but so far no entropy – entropy production method has been able to provide an improvement in such a critical case. For completeness, let us mention that other methods can be used to obtain improved inequalities, based on variational methods as in Bianchi and Egnell [1991], on symmetrization techniques as in Cianchi et al. [2009], or on spectral methods connected with heat flows as in Arnold et al. [2007]. Here we shall simply rely on convexity estimates and the interplay of entropy – entropy production inequalities with their scale invariant counterparts.
A very interesting feature of improved functional inequalities in the framework of the entropy – entropy production method is that the entropy decays faster than expected by considering only the asymptotic regime. In that sense, the improved inequality captures an initial rate of convergence which is faster than the asymptotic one. This has already been observed for fast diffusion equations in Dolbeault and Toscani [2013], with a phenomenon of delay that has been studied in Dolbeault and Toscani [2015a] and by Carrillo and Toscani [2014] by resorting to the concept of Rényi entropy. A remarkable fact is that the inequality is improved by choosing a scale (in practice by imposing a constraint on the second moment) without requesting anything on the first moment, again something that clearly distinguishes the improvements obtained here from what can be guessed by looking at the asymptotic problem as t → ∞. Details and statements on these consequences for diffusion equations are collected in Section 5.
2 Stability results for the logarithmic Sobolev inequality
Let dμ be the normalized Gaussian probability measure on the Euclidean space R^d, d ≥ 1, with density (2π)^{−d/2} e^{−|x|²/2} with respect to Lebesgue's measure. The Gaussian logarithmic Sobolev inequality reads
∫Rd|∇u|2dμ≥12∫Rd|u|2log|u|2dμ (1)
for any function u ∈ H¹(R^d, dμ) such that ∫_{R^d} |u|² dμ = 1. This inequality is equivalent to the Euclidean logarithmic Sobolev inequality in scale invariant form
d2log(2πde∫Rd|∇w|2dx)≥∫Rd|w|2log|w|2dx (2)
that can be found in [Weissler, 1978, Theorem 2] in the framework of scalings, but is also the one that can be found in [Stam, 1959, Inequality (2.3)] or in [Carlen, 1991, Inequality (26)]. See Bobkov et al. [2014], Fathi et al. [2014] and Toscani [2014b] for more comments. The equivalence of (1) and (2) is well known but involves some scalings and we will give a short proof below for completeness. Next, let us consider the function
φ(t):=d4[exp(2td)−1−2td]∀t∈R. (3)
Our first result is an improvement of (1), based on the comparison of (1) with (2), which combines ideas of Bakry and Ledoux [2006] and Fathi et al. [2014]. It goes as follows.
Proposition 1
With φ defined by (3), we have
∫Rd|∇u|2dμ−12∫Rd|u|2log|u|2dμ≥φ(∫Rd|u|2log|u|2dμ) ∀u∈H1(Rd,dμ) such that ∫Rd|u|2dμ=1 and ∫Rd|x|2|u|2dμ=d. (4)
Inequality (4) is an improvement of (1) because φ(t) > 0 for any t > 0 and, by the Pinsker-Csiszár-Kullback inequality,
∫Rd|u|2log|u|2dμ≥14(∫Rd∣∣|u|2−1∣∣dμ)2∀u∈L2(Rd,dμ)such that∥u∥L2(Rd,dμ)=1.
See Pinsker [1964], Csiszár [1967], Kullback [1968] for a proof of this inequality.

Proof. To emphasize the role of scalings, let us give a proof of Proposition 1, which follows the strategy of [Bakry and Ledoux, 2006, Proposition 2, p. 694].
As a preliminary step, we recover the scale invariant, Euclidean version of the logarithmic Sobolev inequality from (1). Let v := u √μ, where μ denotes the Gaussian density, so that ∫_{R^d} |v|² dx = ∫_{R^d} |u|² dμ = 1. With one integration by parts, we get that
∫Rd|∇v|2dx≥12∫Rd|v|2log|v|2dx+d4log(2πe2) (5)
which is the standard Euclidean logarithmic Sobolev inequality established in Gross [1975] (also see Federbush [1969] for an earlier related result). This inequality is not invariant under scaling. By applying it to v = w_λ with w_λ(x) := λ^{d/2} w(λx), which is such that ‖w_λ‖_{L²(R^d)} = 1, we get that
λ2∫Rd|∇w|2dx−d2logλ≥12∫Rd|w|2log|w|2dx+d4log(2πe2).
holds for any w ∈ H¹(R^d, dx) such that ‖w‖_{L²(R^d)} = 1. An optimization on the scaling parameter shows that the optimal value is λ² = d/(4 ∫_{R^d} |∇w|² dx) and establishes the scale invariant form of the logarithmic Sobolev inequality,
d2log(2πde∫Rd|∇w|2dx)≥∫Rd|w|2log|w|2dx∀w∈H1(Rd,dx)such that∥w∥L2(Rd)=1, (6)
which is equivalent to (2). This inequality can also be written as
∫Rd|∇w|2dx≥12πdeexp(2d∫Rd|w|2log|w|2dx).
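The optimization on λ above can be made explicit; as a sketch, writing I := ∫_{R^d} |∇w|² dx:

```latex
% Critical point of the left-hand side of the scaled inequality:
\[
  \frac{d}{d\lambda}\Big(\lambda^2\,I-\tfrac d2\,\log\lambda\Big)
  =2\,\lambda\,I-\frac d{2\,\lambda}=0
  \quad\Longleftrightarrow\quad
  \lambda^2=\frac d{4\,I}\,.
\]
% Substituting the optimal value of \lambda gives
\[
  \frac d4\,\log\Big(\frac{4\,e\,I}d\Big)
  \ge\frac12\int_{\mathbb R^d}|w|^2\log|w|^2\,dx
  +\frac d4\,\log\big(2\,\pi\,e^2\big)\,,
\]
% which rearranges into the scale invariant form (6):
\[
  \frac d2\,\log\Big(\frac2{\pi\,d\,e}\int_{\mathbb R^d}|\nabla w|^2\,dx\Big)
  \ge\int_{\mathbb R^d}|w|^2\log|w|^2\,dx\,.
\]
```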
If we redefine u such that w = u √μ and assume that ∫_{R^d} |u|² dμ = 1 and ∫_{R^d} |x|² |u|² dμ = d, we have shown that
∫Rd|∇u|2dμ≥d4[exp(2d∫Rd|u|2log|u|2dμ)−1]. (7)
Inequality (4) follows by subtracting ½ ∫_{R^d} |u|² log |u|² dμ from both sides of (7), which is more or less the idea that has been exploited by Fathi et al. [2014].
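Indeed, writing E := ∫_{R^d} |u|² log |u|² dμ, the subtraction combines (7) with the definition (3) of φ:

```latex
\[
  \int_{\mathbb R^d}|\nabla u|^2\,d\mu-\frac12\,\mathsf E
  \;\ge\;\frac d4\Big[e^{2\mathsf E/d}-1\Big]-\frac12\,\mathsf E
  \;=\;\frac d4\Big[e^{2\mathsf E/d}-1-\frac{2\,\mathsf E}d\Big]
  \;=\;\varphi(\mathsf E)\,,
\]
```

which is exactly (4).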
Consider a nonnegative function f ∈ L¹(R^d, dx) and, assuming that f ≢ 0 and ∫_{R^d} |x|² f dx < ∞, define
Mf:=∫Rdfdx,θf:=1d∫Rd|x|2fdxMf. (8)
Let us define the Gaussian function
μ_f(x) := M_f (2πθ_f)^{−d/2} e^{−|x|²/(2θ_f)} ∀x ∈ R^d.
We shall denote by L¹₊(R^d, dx) the set of nonnegative integrable functions on R^d with finite second moment.
Lemma 2
Assume that f is a nontrivial, nonnegative function in L¹(R^d, dx) such that |∇√f| ∈ L²(R^d, dx) and ∫_{R^d} |x|² f dx < ∞. With φ, M_f and θ_f defined by (3) and (8), we have
θf2∫Rd|∇f|2fdx−∫Rdflogfdx−d2log(2πe2θf)∫Rdfdx≥2φ[∫Rdflog(fμf)dx]. (9)
Proof. Let u be such that f(x) = M_f θ_f^{−d/2} |u(x/√θ_f)|² μ(x/√θ_f) for any x ∈ R^d, so that ∫_{R^d} |u|² dμ = 1 and ∫_{R^d} |x|² |u|² dμ = d, and apply Proposition 1.
The Gaussian function μ_f is the minimizer of the relative entropy
e[f|μ]:=∫Rd[flog(fμ)−(f−μ)]dx
w.r.t. all Gaussian functions in
M := {μ(x) = M (2πθ)^{−d/2} e^{−|x|²/(2θ)} : M > 0, θ > 0},
that is, we have the identity
∫Rdflog(fμf)dx=e[f|μf]=min{e[f|μ]:μ∈M}.
Also notice that μ_f is the minimizer of the relative Fisher information w.r.t. all Gaussian functions of mass M_f:
∫Rd∣∣∇√f/μf∣∣2dx=min{∫Rd∣∣∇√f/μ∣∣2dx:μ(x)=Mf(2πθ)d/2e−|x|22θ,θ>0}.
Recall that, by the Pinsker-Csiszár-Kullback inequality, the r.h.s. in (9) provides an explicit stability result in L¹(R^d, dx), which can be written as
e[f|μf]≥14Mf∥f−μf∥2L1(Rd)∀f∈L1+(Rd,dx).
Combined with the observation that φ is nondecreasing and that e[f|μ_f] ≤ e[f|μ] for any μ ∈ M, we have shown the following global stability result.
Corollary 3
Assume that f is a nontrivial, nonnegative function in L¹(R^d, dx) such that |∇√f| ∈ L²(R^d, dx) and ∫_{R^d} |x|² f dx < ∞. With φ, M_f and θ_f defined by (3) and (8), we have
θf2∫Rd|∇f|2fdx−∫Rdflogfdx−d2log(2πe2θf)∫Rdfdx≥2minμ∈Mφ(e[f|μ])=2φ(e[f|μf])≥∥f−μf∥4L1(Rd)16M2f.
3 An improved version of the generalized Poincaré inequalities for Gaussian measures
We consider the inequalities introduced by W. Beckner in [Beckner, 1989, Theorem 1]. If p ∈ [1, 2), then
∥u∥2L2(Rd,dμ)−∥u∥2Lp(Rd,dμ)≤(2−p)∥∇u∥2L2(Rd,dμ)∀u∈H1(Rd,dμ). (10)
These inequalities interpolate between the Poincaré inequality (the case p = 1) and the logarithmic Sobolev inequality, which is achieved by dividing both sides of the inequality by 2 − p and passing to the limit as p → 2. Some improvements were already obtained in Arnold and Dolbeault [2005], Arnold et al. [2007], Bartier and Dolbeault [2006]. What we gain here is that the improvement takes place also in the limit case as p → 2 and is consistent with the results of Proposition 1.
Let us define
φ_p(x) := (d/4) [(1 − x)^{−2p/(d(2−p))} − 1] ∀x ∈ [0, 1].
Corollary 4
Assume that u ∈ H¹(R^d, dμ) is such that ∫_{R^d} |x|² |u|² dμ = d ∫_{R^d} |u|² dμ. With the above notation, for any p ∈ [1, 2) we have
∫Rd|∇u|2dμ≥∥u∥2L2(Rd,dμ)φp⎛⎜⎝∥u∥2L2(Rd,dμ)−∥u∥2Lp(Rd,dμ)∥u∥2L2(Rd,dμ)⎞⎟⎠. (11)
By homogeneity we can assume that ‖u‖_{L²(R^d,dμ)} = 1. The reader is invited to check that
limp→2φp(1−∥u∥2Lp(Rd,dμ))=d4(e2dE[u]−1)whereE[u]:=∫Rd|u|2∥u∥2L2(Rd,dμ)log⎛⎜⎝|u|2∥u∥2L2(Rd,dμ)⎞⎟⎠dμ.
The proof of Corollary 4 is a straightforward consequence of (7) and of the following estimate.
Lemma 5
For any p ∈ [1, 2) and any function u ∈ H¹(R^d, dμ) \ {0}, we have
∥u∥2L2(Rd,dμ)∥u∥2Lp(Rd,dμ)≤exp(2−ppE[u])
and, as a consequence, for any u such that ‖u‖_{L²(R^d,dμ)} = 1, we obtain
∥u∥2L2(Rd,dμ)−∥u∥2Lp(Rd,dμ)≤2−pp∫Rd|u|2log|u|2dμ. (12)
Proof.
The proof relies on an idea that can be found in Latała and Oleszkiewicz [2000] and goes as follows. Let us consider the function
k(s) := s log(∫_{R^d} u^{2/s} dμ).
Derivatives are such that
k′(s) = log(∫_{R^d} u^{2/s} dμ) − (2/s) (∫_{R^d} u^{2/s} log u dμ) / (∫_{R^d} u^{2/s} dμ), (s³/4) (∫_{R^d} u^{2/s} dμ)² k′′(s) = ∫_{R^d} u^{2/s} dμ ∫_{R^d} u^{2/s} |log u|² dμ − (∫_{R^d} u^{2/s} log u dμ)²,
hence proving that k is convex by the Cauchy-Schwarz inequality. As a consequence we get that
k′(1) ≤ (k(s) − k(1))/(s − 1) ∀s > 1.
Applied with s = 2/p, this proves that
−∫Rd|u|2log⎛⎜⎝|u|2∥u∥2L2(Rd,dμ)⎞⎟⎠dμ≤p2−p∥u∥2L2(Rd,dμ)log⎛⎜⎝∥u∥2Lp(Rd,dμ)∥u∥2L2(Rd,dμ)⎞⎟⎠,
from which we deduce the second inequality in (12), after observing that log x ≤ x − 1 for any x > 0.
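To spell out the deduction under the normalization ‖u‖_{L²(R^d,dμ)} = 1: the convexity estimate applied with s = 2/p gives k(1) = 0 and k(2/p) = log ‖u‖²_{L^p(R^d,dμ)}, so that, using log x ≤ x − 1,

```latex
\[
  -\int_{\mathbb R^d}|u|^2\log|u|^2\,d\mu
  \;\le\;\frac p{2-p}\,\log\|u\|_{L^p(\mathbb R^d,d\mu)}^2
  \;\le\;\frac p{2-p}\,\Big(\|u\|_{L^p(\mathbb R^d,d\mu)}^2-1\Big)\,,
\]
% which rearranges into (12):
\[
  1-\|u\|_{L^p(\mathbb R^d,d\mu)}^2
  \;\le\;\frac{2-p}p\int_{\mathbb R^d}|u|^2\log|u|^2\,d\mu\,.
\]
```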
The result of Corollary 4 deserves a comment. As x → 0₊, φ_p(x) ≈ p x / (2 (2 − p)), so that we do not recover the optimal constant in (10) in the asymptotic regime corresponding to x → 0, that is, when u approaches a constant, because of the factor p/2. On the other hand, (11) is a strict improvement compared to (10) as soon as x > x⋆(p), where x⋆(p) is the unique solution in (0, 1) to φ_p(x) = x/(2−p). Let Φ_p be the function defined by
Φ_p(x) = x/(2−p) if x ∈ (0, x⋆(p)), Φ_p(x) = φ_p(x) if x ∈ [x⋆(p), 1]. (13)
Collecting the estimates (10) and (11), we can write that
∫Rd|∇u|2dμ≥∥u∥2L2(Rd,dμ)Φp⎛⎜⎝∥u∥2L2(Rd,dμ)−∥u∥2Lp(Rd,dμ)∥u∥2L2(Rd,dμ)⎞⎟⎠
for any function u ∈ H¹(R^d, dμ) such that ∫_{R^d} |x|² |u|² dμ = d ∫_{R^d} |u|² dμ. This is an improvement with respect to (10) because Φ_p(x) ≥ x/(2−p) for any x ∈ (0, 1), with a strict inequality if x > x⋆(p).
The right hand side in (11) controls the distance to the constants. Indeed, using for instance Hölder’s estimates, it is easy to check that
∥u∥2L2(Rd,dμ)−∥u∥2Lp(Rd,dμ)≥∥u∥2L2(Rd,dμ)−∥u∥2L1(Rd,dμ)=∫Rd|u−¯¯¯u|2dμ
with ū := ∫_{R^d} |u| dμ. Sharper estimates, based for instance on variants of the Pinsker-Csiszár-Kullback inequality, can be found in Cáceres et al. [2002], Bartier et al. [2007].
4 Stability results for some Gagliardo-Nirenberg inequalities
4.1 A first case: q>1
We study the case of Gagliardo-Nirenberg inequalities
∥∇w∥ϑL2(Rd)∥w∥1−ϑLq+1(Rd)≥CGN∥w∥L2q(Rd) (14)
with q ∈ (1, d/(d−2)) if d ≥ 3, q ∈ (1, ∞) if d = 1 or 2, and ϑ = d(q−1)/(q(d+2−(d−2)q)). The value of the optimal constant C_GN has been established in Del Pino and Dolbeault [2002] (also see Gunson [1991] for an earlier but partial contribution).
Let us start with some elementary observations on convexity. Consider two positive constants a and b. Let us define
ζ = b/(a+b), κ = (a/b)^ζ + (b/a)^{1−ζ} = (a+b)/(a^{1−ζ} b^ζ).
Next let us take three positive numbers A, B and C such that A^ζ B^{1−ζ} ≥ C and consider the function
h(λ)=λaA+λ−bB−κC.
The function h reaches its minimum at λ∗ = (bB/(aA))^{1/(a+b)} and it is straightforward to check that
h(1)≥infλ>0h(λ)=h(λ∗)=κ(AζB1−ζ−C).
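For the reader's convenience, the computation behind the last identity goes as follows, with ζ = b/(a+b):

```latex
% Critical point: h'(\lambda)=a\,\lambda^{a-1}A-b\,\lambda^{-b-1}B=0, so
\[
  \lambda_*^{\,a+b}=\frac{b\,B}{a\,A}\,,\qquad
  \lambda_*^{\,a}\,A=\Big(\tfrac ba\Big)^{1-\zeta}A^\zeta B^{1-\zeta}\,,\qquad
  \lambda_*^{-b}\,B=\Big(\tfrac ab\Big)^{\zeta}A^\zeta B^{1-\zeta}\,,
\]
% and, using \kappa=(a/b)^\zeta+(b/a)^{1-\zeta},
\[
  h(\lambda_*)
  =\Big[\Big(\tfrac ba\Big)^{1-\zeta}+\Big(\tfrac ab\Big)^{\zeta}\Big]
   A^\zeta B^{1-\zeta}-\kappa\,C
  =\kappa\,\big(A^\zeta B^{1-\zeta}-C\big)\,.
\]
```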
This computation determines the choice of κ. Using the assumption A^ζ B^{1−ζ} ≥ C, that is, A ≥ C^{1/ζ} B^{1−1/ζ}, we get the estimate
A + B − κC ≥ C^{1/ζ} B^{1−1/ζ} + B − κC = φ(B∗ − B)
where
B∗ := C ((1−ζ)/ζ)^ζ
and
φ(s) := C^{1/ζ} [(B∗ − s)^{1−1/ζ} − B∗^{1−1/ζ}] − s. (15)
after observing that κ C = B∗ + C^{1/ζ} B∗^{1−1/ζ}.
Note that φ is a nonnegative, strictly convex function such that φ(0) = 0, φ′(0) = 0 and φ(s) > 0 for any s ∈ (0, B∗).
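These properties can be checked directly from (15) and the definition of B∗:

```latex
\[
  \varphi(0)=0\,,\qquad
  \varphi'(s)=\Big(\tfrac1\zeta-1\Big)\,C^{1/\zeta}\,(B_*-s)^{-1/\zeta}-1\,,
\]
% and, since B_*^{1/\zeta}=C^{1/\zeta}\,(1-\zeta)/\zeta,
\[
  \varphi'(0)=0\,,\qquad
  \varphi''(s)=\tfrac1\zeta\Big(\tfrac1\zeta-1\Big)\,C^{1/\zeta}\,
  (B_*-s)^{-1/\zeta-1}>0\quad\text{for }s<B_*\,.
\]
```

Convexity together with φ(0) = φ′(0) = 0 then gives φ ≥ 0.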
We apply these preliminary computations with
a = dq − (d−2), b = d(q−1)/(2q), A = ¼(q²−1) ∫_{R^d} |∇w|² dx, B = β ∫_{R^d} |w|^{q+1} dx with β = 2q/(q−1) − d, and C = (¼(q²−1))^ζ β^{1−ζ} (C_GN ‖w‖_{L^{2q}(R^d)})^α with α = q + 1 − ζ(q − 1),
for any w ∈ H¹(R^d, dx). With K := κ (¼(q²−1))^ζ β^{1−ζ}, the functional
J[w] := ¼(q²−1) ∫_{R^d} |∇w|² dx + β ∫_{R^d} |w|^{q+1} dx − K C_GN^α (∫_{R^d} |w|^{2q} dx)^{α/(2q)}
is nonnegative and achieves its minimum, equal to 0, at w = w∗, where w∗ denotes an optimal function for (14). Hence we have that
J[w]≥J[w∗]=0
and this inequality is equivalent to (14), after an optimization under scaling. Notice that B∗ = β ∫_{R^d} |w∗|^{q+1} dx.
Theorem 6
With the above notations and φ given by (15), we have
J[w]≥φ[β(∫Rd|w∗|q+1dx−∫Rd|w|q+1dx)] (16)
for any w ∈ H¹(R^d, dx) such that ∫_{R^d} |w|^{2q} dx = ∫_{R^d} |w∗|^{2q} dx and ∫_{R^d} |w|^{q+1} dx ≤ ∫_{R^d} |w∗|^{q+1} dx.
Proof.
The reader is invited to check that, with the above notations,
B∗−B=β(∫Rd|w∗|q+1dx−∫Rd|w|q+1dx).
As a last remark in this section, let us observe that the logarithmic Sobolev inequality appears as a limit case of the entropy – entropy production inequality, and that (2) is also obtained by taking the limit as q → 1 in the Gagliardo-Nirenberg inequalities (14): see Del Pino and Dolbeault [2002] for details. Also notice that, when d ≥ 3, the convexity of φ is lost as q → d/(d−2), which corresponds to Sobolev's inequality. This shows the consistency of our method.
4.2 A second case: q<1
Now we study the case of Gagliardo-Nirenberg inequalities
∥∇w∥ϑL2(Rd)∥w∥1−ϑL2q(Rd)≥CGN∥w∥Lq+1(Rd) (17)
with q ∈ (0, 1), and we denote by ‖w‖_{L^p(R^d)} the quantity (∫_{R^d} |w|^p dx)^{1/p} for any p > 0, even for p ∈ (0, 1) (in that case, it is only a semi-norm).
Our elementary estimates have to be adapted. Consider two positive constants a and b, with a > b. Let us define
η = b/(a−b), κ = (b/a)^η − (b/a)^{1+η} = (a−b) b^η / a^{1+η}.
Next let us take three positive numbers A, B and C such that A^{−η} B^{1+η} ≤ C and consider the function
h(λ)=λaA−λbB+κC.
The function h reaches its minimum at λ∗ = (bB/(aA))^{1/(a−b)} and it is straightforward to check that
h(λ) ≥ h(λ∗) = κ(C − A^{−η} B^{1+η}).
Using the assumption A^{−η} B^{1+η} ≤ C, that is, A ≥ C^{−1/η} B^{1+1/η}, we get the estimate
A − B + κC ≥ C^{−1/η} B^{1+1/η} − B + κC = φ(B − B∗)
where
B∗ := C (η/(1+η))^η
and
φ(s) = C^{−1/η} [(B∗ + s)^{1+1/η} − B∗^{1+1/η}] − s (18)
is a nonnegative, strictly convex function such that φ(0) = 0, φ′(0) = 0 and φ(s) > 0 for any s > 0.
We apply these preliminary computations with
a = dq − (d−2), b = d(1−q)/(2q), A = ¼(1−q²) ∫_{R^d} |∇w|² dx, B = β ∫_{R^d} |w|^{q+1} dx with β = 2q/(1−q) + d, and C = (¼(1−q²))^{−η} β^{1+η} (C_GN)^{−(q+1)(1+η)} ‖w‖^α_{L^{2q}(R^d)} with α = q + 1 + η(q − 1),
for any w ∈ H¹(R^d, dx). With K := κ (¼(1−q²))^{−η} β^{1+η}, the functional
J[w] := ¼(1−q²) ∫_{R^d} |∇w|² dx − β ∫_{R^d} |w|^{q+1} dx + K (C_GN)^{−(q+1)(1+η)} (∫_{R^d} |w|^{2q} dx)^{α/(2q)}
is nonnegative and achieves its minimum, equal to 0, at w = w∗, where w∗ denotes an optimal function for (17). Hence we have that
J[w]≥J[w∗]=0
and this inequality is equivalent to (17), after an optimization under scaling. Notice that B∗ = β ∫_{R^d} |w∗|^{q+1} dx.
Theorem 7
With the above notations and φ given by (18), we have
J[w] ≥ φ[β(∫_{R^d} |w|^{q+1} dx − ∫_{R^d} |w∗|^{q+1} dx)]
# zbMATH — the first resource for mathematics
Improved delay-dependent stability criteria for systems with a delay varying in a range. (English) Zbl 1153.93476
Summary: This paper provides improved delay-dependent stability criteria for systems with a delay varying in a range. The criteria improve over some previous ones in that they have fewer matrix variables yet less conservatism, which is established theoretically. An example is given to show the advantages of the proposed results.
##### MSC:
93D05 Lyapunov and other classical stabilities of control systems 93C15 Control systems governed by ODE 93C05 Linear control systems
# Past Events
Date/Time: Tuesday, November 1, 2016 - 01:30
Venue: Seminar Hall
Speaker: Anoop V.P., NISER
Title: To be announced
Date/Time: Monday, October 31, 2016 - 11:35 to 12:30
Venue: SMS seminar hall
Speaker: Soma Maity, Ramkrishna Mission Vivekananda University, Belur.
Title: ON THE STABILITY OF Lp-NORMS OF RIEMANNIAN CURVATURE AND ON WILKING’S CRITERION FOR RICCI FLOW
Let M be a compact manifold without boundary. One can define a smooth real valued function of the space of Riemannian metrics of M by taking Lp-norm of Riemannian curvature for p ≥ 2. Compact irreducible locally symmetric spaces are critical metrics for this functional. I will show that rank 1...
Date/Time: Friday, October 28, 2016 - 09:30 to 10:30
Venue: SMS, Seminar Room
Speaker: Dr. Krishanu Maulik, Indian Statistical Institute, Kolkata
Title: Urn Models
Abstract: We shall discuss the urn model introduced by Polya and...
Date/Time: Monday, October 24, 2016 - 11:35 to 12:30
Venue: SMS seminar hall
Speaker: Ananya Lahiri, Chennai Mathematical Institute
Title: On two dimensional polynomial phase signal parameter estimation
Abstract: Two dimensional polynomial phase signal has uses in modeling black and white texture. We will discuss how to estimate the parameters of the model from observed data set and about the large sample properties of these estimators.
Date/Time: Friday, October 21, 2016 - 15:30
Venue: SMS Seminar Hall
Speaker: Arnab Mandal, ISI Kolkata
Title: Quantum symmetry groups of the dual of finitely generated discrete groups
Starting with some basic definitions related to compact quantum groups we introduce the notion of quantum symmetry groups. Here we discuss one particular case coming from group $C^*$-algebras equipped with word length function. Moreover, few general properties and examples...
Date/Time: Monday, October 17, 2016 - 11:35
Venue: SMS seminar room
Title: Blocking sets of PG(2,q) with respect to a conic
For a given nonempty subset L of the line set of the projective plane PG(2,q), a blocking set with respect to L (or simply, an L-blocking set) is a subset B of the point set of PG(2,q) such that every line of L contains at least one point of B. Let E (respectively; T, S) denote the set of all...
Date/Time: Thursday, October 6, 2016 - 04:35 to 05:30
Venue: SMS Seminar Hall
Speaker: Sunil Kumar Prajapati, Hebrew University of Jerusalem, Israel
Title: Total Character and Irreducible Characters of p-groups
The realization of the Total Character (or Gel’fand Character) τG of a finite group G, i.e. the sum of all ordinary irreducible characters of G is an old problem in character theory of finite groups. One possible approach is to try to realize τG as a polynomial in some irreducible character of G...
Date/Time: Monday, September 26, 2016 - 15:30 to 16:30
Venue: Seminar Hall (SMS)
Speaker: Dibyendu Roy, IIT, Kharagpur
Title: Fault analysis and weak key-IV attack on Sprout and constructions of T-function
This talk basically consists of two parts: first, attacks on the stream cipher 'Sprout', and second, constructions of T-functions, which carry good cryptographic properties. The design specification of Sprout was proposed at FSE 2015. Firstly, I will discuss a fault attack on...
Date/Time: Friday, September 16, 2016 - 15:30 to 16:30
Venue: Seminar Room, SMS
Speaker: Dr. Anirban Mukhopadhyay, IMSc. Chennai
Title: Distribution of Primes
In these lectures, we will discuss a probabilistic model of primes leading to heuristics about their distribution. We will see many surprising irregularities popping up alongside expected results. A survey of several recent and important results will be presented in a way accessible to non-...
Date/Time: Thursday, September 15, 2016 - 15:30 to 16:30
Venue: Seminar Room, SMS
Speaker: Dr. Anirban Mukhopadhyay, IMSc. Chennai
Title: Distribution of Primes
In these lectures, we will discuss a probabilistic model of primes leading to heuristics about their distribution. We will see many surprising irregularities popping up alongside expected results. A survey of several recent and important results will be presented in a way accessible to non-...
# [texhax] making tables bigger
Tue Feb 28 17:37:54 CET 2006
Christopher W. Ryan wrote:
> I struggled over the weekend typing my daughter's science fair project. It
> was for display on a typical tri-fold cardboard poster, so it had to be
> legible from a distance. I typed it in article style using \Large and that
> seemed to work--except for the data table, which remained small, as it would
> be in a written manuscript.
>
> I fiddled with various boxes, minipages, and putting \Large in the body of
> the table environment, but I couldn't get it right.
>
> I did an end-run and used seminar style, as if I was making overhead
> transparencies, and that did the trick.
>
> But I was wondering if there was a straightforward way to make tables larger
> in article style.
>
> Thanks.
>
Could you make a minimal example of what you did?
Normally
\begin{table}
\Large
\begin{tabular}{ll}% column specification required; adjust to your table
...
\end{tabular}
\end{table}
works fine.
/daleif
``You cannot help men permanently by doing for them
what they could and should do for themselves.''
-- Abraham Lincoln
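(An editorial addition, not part of the original thread.) When \Large is still too small for a poster, the whole tabular can also be scaled with the graphicx package; a minimal sketch:

```latex
\documentclass{article}
\usepackage{graphicx} % provides \resizebox
\begin{document}
\begin{table}
  \centering
  % Scale the tabular to the full line width; the ! keeps the aspect ratio.
  \resizebox{\linewidth}{!}{%
    \begin{tabular}{lrr}
      Trial & Mass (g) & Time (s) \\
      1     & 12.3     & 4.5      \\
      2     & 11.8     & 4.9      \\
    \end{tabular}}
\end{table}
\end{document}
```

Scaling enlarges rules and spacing along with the text, so for legibility at a distance it is often better to combine a larger font with increased \tabcolsep and \arraystretch instead.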
# Oxygen producing metabolism
What kind of realistic oxygen producing reaction used by life could exist in a carbon dioxide / ammonia atmosphere? There are also many volcanoes, either immersed or not, water oceans and you can even use silicon oxides or whatever from the crust of my Earth-like planet. The further it is from our actual photosynthesis the better it is, but it still has to be likely.
Note: I know ammonia will react with oxygen as it did on Earth. That is actually the purpose of these organisms.
• I apologize, but the simple word "metabolism" describes an amazingly complex and difficult subject. Creating (or even justifying) an entire metabolism for an ammonia planet is far beyond the scope of this site. click here to see what it took for me to guess at creating a replacement just for glucose. Photosynthesis is a reasonably simple equation; mammalian life isn't by any stretch of the imagination. (*continued*) – JBH Aug 16 '18 at 0:21
• A search on this site for ammonia planets comes up with a lengthy list for this popular subject that I suspect you haven't browsed, despite a number of the questions looking like they answer your question. Thus, I'm going to vote to close (VTC) your question as too broad unless you can narrow it down to something more practical than basically inventing an entire biome. – JBH Aug 16 '18 at 0:23
• The early atmosphere of our Earth, and the earliest life in the pre-cambrian era that started oxygenation of our atmosphere could offer some examples. @willk 's answer gives the chemistry. – pojo-guy Aug 16 '18 at 3:05
• Ammonia and carbon dioxide react together to form solid ammonium carbamate under dry conditions and ammonium carbonate under wet conditions. Both are solids at room temperature, so it is not possible to have an atmosphere composed of ammonia and carbon dioxide. – Slarty Aug 16 '18 at 9:19
• This is a legitimate biochemistry question. It should not have been closed. It is not an exhaustive list (@RonJohn) the nature of the atmosphere (particularly the free ammonia) greatly limits chemical reactions that could potentially proceed. People who think this is open ended frankly haven't put the research effort in to discover that there is no activation energy in the ammonia + oxygen reaction at reasonably elevated temperatures; thus free oxygen is possibly excluded in an ammonia atmosphere. TLDR: question is legit, there are very few possible answers, reopen. – kingledion Aug 16 '18 at 12:29
First: photosynthesis. It is the synthesis of sugar, which is made of C, H and O. You need CO$$_2$$ and a hydrogen donor; that can be H$$_2$$O, and you get O$$_2$$ as a waste product, or it can be H$$_2$$S, and you get S as a waste product. If you used NH$$_3$$ for your hydrogen donor (hmmm...) you would get N$$_2$$ as a waste product. Are you the dude* with the boron planet? You could use boron hydrides and get some sort of boron thing as a byproduct. In all of these - when you are making sugar out of CO$$_2$$, carbon keeps its oxygens and incorporates them in the sugar.
What about photosynthesis that made a different carbon product? I could imagine an organism that wanted C. Maybe built its body out of C. Allotropes of carbon are super useful - graphene is ultra strong and diamond is ultra hard and conductive and clear and awesome. Even charcoal is durable in the environment for millennia - and in an environment with minimal O$$_2$$ it would last longer.
Could you have photosynthesis that stripped O$$_2$$ from C and just kept the C? CO$$_2$$ + energy → C + O$$_2$$. You could. You would call it photodissociation instead of photosynthesis because you are not synthesizing anything.
Evidence for direct molecular oxygen production in CO$$_2$$ photodissociation
Abstract Photodissociation of carbon dioxide (CO2) has long been assumed to proceed exclusively to carbon monoxide (CO) and oxygen atom (O) primary products. However, recent theoretical calculations suggested that an exit channel to produce C + O2 should also be energetically accessible. Here we report the direct experimental evidence for the C + O2 channel in CO2 photodissociation near the energetic threshold of the C(3P) + O2(X3Σg–) channel with a yield of 5 ± 2% using vacuum ultraviolet laser pump-probe spectroscopy and velocity-map imaging detection of the C(3PJ) product between 101.5 and 107.2 nanometers. Our results may have implications for nonbiological oxygen production in CO2-heavy atmospheres.
The article is behind a paywall but Google sees the image which shows the intermediate steps to liberating O$$_2$$ from C with radiant energy.
So that is my answer: catalyzed photodissociation instead of synthesis, producing O$$_2$$ as per the OP and making carbon allotropes out of the C.
*... and "dude" is now also used as a unisex term
• Molecular oxygen will react with ammonia starting in the 50C range: (many papers, mostly paywalled). I imagine that a recently photo-dissociated oxygen atom would have energy sufficient to react with atmospheric ammonia. Either your planet should be very cold, or your 'plants' need some sort of mechanism to exclude atmospheric ammonia. – kingledion Aug 16 '18 at 12:37
• @kingledion: yes; one cannot have an oxidizing and a reducing atmosphere stably co-existing. Maybe the planet is in flux? – Willk Aug 16 '18 at 13:20
• The process you described doesn't use ammonia, so would it be an issue if oxygen reacts with it? – Jean-Abdel Aug 17 '18 at 10:02
• It's very interesting tho, graphite plants instead of cellulose would be cool – Jean-Abdel Aug 17 '18 at 10:02
• Yes they could. That is exactly what plants do with sugar! – Willk Aug 17 '18 at 22:22
We currently analyze and forecast rodent data at Portal using ten models:
## ESSS
ESSS (Exponential Smoothing State Space) is a flexible exponential smoothing state space model (Hyndman et al. 2008) fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The model is selected and fitted using the ets and forecast functions in the forecast package (Hyndman 2017) with the allow.multiplicative.trend argument set to TRUE and the ESSS function in our portalcasting package. Models fit using ets implement what is known as the “innovations” approach to state space modeling, which assumes a single source of noise that is equivalent for the process and observation errors (Hyndman et al. 2008).
In general, ESSS models are defined according to three model structure parameters: error type, trend type, and seasonality type (Hyndman et al. 2008). Each of the parameters can be an N (none), A (additive), or M (multiplicative) state (Hyndman et al. 2008). However, because of the difference in period between seasonality and sampling of the Portal rodents combined with the hard-coded single period of the ets function, we could not include the seasonal components in the ESSS model. ESSS is fit flexibly, such that the model parameters can vary from fit to fit.
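To illustrate the state space idea, here is a minimal sketch of the additive-error, additive-trend, no-seasonality (A, A, N) case. This is a hypothetical hand-rolled recursion with fixed smoothing parameters, not the ets implementation, which also estimates the smoothing parameters and initial states from the data:

```python
def holt(y, alpha=0.5, beta=0.1):
    """Additive-trend exponential smoothing (the A, A, N case).

    alpha smooths the level, beta smooths the trend; both are
    illustrative fixed values here rather than fitted ones.
    """
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        # new level blends the observation with the trend-projected level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        # new trend blends the level change with the old trend
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # h-step-ahead forecasts extrapolate the final level and trend
    return [level + h * trend for h in range(1, 4)]
```

On a perfectly linear series the recursion locks onto the trend, so `holt([1, 2, 3, 4, 5])` extrapolates to `[6.0, 7.0, 8.0]`.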
## AutoArima
AutoArima (Automatic Auto-Regressive Integrated Moving Average) is a flexible Auto-Regressive Integrated Moving Average (ARIMA) model fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The model is selected and fitted using the auto.arima and forecast functions in the forecast package (Hyndman and Athanasopoulos 2013; Hyndman 2017) and the AutoArima function in our portalcasting package.
Generally, ARIMA models are defined according to three model structure parameters: the number of autoregressive terms (p), the degree of differencing (d), and the order of the moving average (q), and are represented as ARIMA(p, d, q) (Box and Jenkins 1970). While the auto.arima function allows for seasonal models, the seasonality is hard-coded to be on the same period as the sampling, which is not the case for the Portal rodent surveys. As a result, no seasonal models were evaluated. AutoArima is fit flexibly, such that the model parameters can vary from fit to fit.
## NaiveArima
NaiveArima (Naive Auto-Regressive Integrated Moving Average) is a fixed Auto-Regressive Integrated Moving Average (ARIMA) model of order (0,1,0) fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The model is selected and fitted using the Arima and forecast functions in the forecast package (Hyndman and Athanasopoulos 2013; Hyndman 2017) and the NaiveArima function in our portalcasting package.
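Because ARIMA(0,1,0) is just a random walk, its point forecast is flat at the last observation while the forecast interval widens with the square root of the horizon. A hypothetical sketch of that behavior (not the Arima implementation):

```python
import math

def naive_arima_forecast(y, h, sigma):
    """ARIMA(0,1,0): first differences are white noise with sd sigma.

    The h-step point forecast is the last observation, and the
    h-step forecast variance is h * sigma**2, so the 95% interval
    half-width grows like sqrt(h).
    """
    point = [y[-1]] * h
    half_width = [1.96 * sigma * math.sqrt(k) for k in range(1, h + 1)]
    return point, half_width
```

With `sigma = 1`, the one-step half-width is 1.96 and the two-step half-width is 1.96·√2: uncertainty accumulates even though the point forecast never moves.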
## nbGARCH
nbGARCH (Negative Binomial Auto-Regressive Conditional Heteroskedasticity) is a generalized autoregressive conditional heteroskedasticity (GARCH) model with overdispersion (i.e., a negative binomial response) fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The model for each species and the community total is selected and fitted using the tsglm function in the tscount package (Liboschik et al. 2017) and the nbGARCH function in our portalcasting package.
GARCH models are generalized ARMA models and are defined according to their link function, response distribution, and two model structure parameters: the number of autoregressive terms (p) and the order of the moving average (q), and are represented as GARCH(p, q) (Liboschik et al. 2017). The nbGARCH model is fit using the log link and a negative binomial response (modeled as an over-dispersed Poisson), as well as with p = 1 (first-order autoregression) and q = 12 (approximately yearly moving average).
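For intuition, the feedback recursion behind such a count GARCH can be sketched as an identity-link INGARCH(1, 1). This is a deliberate simplification: the actual nbGARCH uses the log link, a negative binomial response, and q = 12, and is fitted by tsglm rather than by a hand-written loop:

```python
def ingarch_means(y, beta0, beta1, alpha1):
    """Conditional means of an identity-link INGARCH(1, 1).

    lambda[t] = beta0 + beta1 * y[t-1] + alpha1 * lambda[t-1]:
    the mean feeds back on both the last count (the AR part) and
    the last conditional mean (the moving-average/GARCH part).
    """
    lam = [beta0 / (1 - alpha1)]  # a simple choice of starting value
    for t in range(1, len(y)):
        lam.append(beta0 + beta1 * y[t - 1] + alpha1 * lam[t - 1])
    return lam
```

Setting beta1 = alpha1 = 0 collapses the model to a constant mean beta0, which makes the role of each feedback term easy to see.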
The tsglm function in the tscount package (Liboschik et al. 2017) uses a (conditional) quasi-likelihood based approach to inference and models the overdispersion as an additional parameter in a two-step approach. This two-stage approach has only been minimally evaluated, although preliminary simulation-based studies are promising (Liboschik, Fokianos, and Fried 2017).
## nbsGARCH
nbsGARCH (Negative Binomial Seasonal Auto-Regressive Conditional Heteroskedasticity) is a generalized autoregressive conditional heteroskedasticity (GARCH) model with overdispersion (i.e., a negative binomial response) with seasonal predictors modeled using two Fourier series terms (sin and cos of the fraction of the year) fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The model for each species and the community total is selected and fitted using the tsglm function in the tscount package (Liboschik et al. 2017) and the nbsGARCH function in our portalcasting package.
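The seasonal predictors here are just the first-order Fourier pair of the time of year. A minimal sketch of that encoding (hypothetical helper, not the portalcasting code):

```python
import math

def fourier_pair(fraction_of_year):
    """First-order Fourier terms for a seasonal position in [0, 1).

    Encodes the time of year as a point on the unit circle, so late
    December and early January end up close together, unlike a raw
    day-of-year covariate.
    """
    angle = 2 * math.pi * fraction_of_year
    return math.sin(angle), math.cos(angle)
```

A regression on these two terms can fit a sinusoid of any phase and amplitude with a one-year period, which is why the pair is used instead of a single sine term.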
GARCH models are generalized ARMA models and are defined according to their link function, response distribution, and two model structure parameters: the number of autoregressive terms (p) and the order of the moving average (q), and are represented as GARCH(p, q) (Liboschik et al. 2017). The nbsGARCH model is fit using the log link and a negative binomial response (modeled as an over-dispersed Poisson), as well as with p = 1 (first-order autoregression) and q = 12 (approximately yearly moving average).
The tsglm function in the tscount package (Liboschik et al. 2017) uses a (conditional) quasi-likelihood based approach to inference and models the overdispersion as an additional parameter in a two-step approach. This two-stage approach has only been minimally evaluated, although preliminary simulation-based studies are promising (Liboschik, Fokianos, and Fried 2017).
## pevGARCH
pevGARCH (Poisson Environmental Variable Auto-Regressive Conditional Heteroskedasticity) is a generalized autoregressive conditional heteroskedasticity (GARCH) model fit to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The response variable is Poisson, and a variety of environmental variables are considered as covariates. The model for each species is selected and fitted using the tsglm function in the tscount package (Liboschik et al. 2017) and the pevGARCH function in our portalcasting package.
GARCH models are generalized ARMA models and are defined according to their link function, response distribution, and two model structure parameters: the number of autoregressive terms (p) and the order of the moving average (q), and are represented as GARCH(p, q) (Liboschik et al. 2017). The pevGARCH model is fit using the log link and a Poisson response, as well as with p = 1 (first-order autoregression) and q = 12 (yearly moving average). The environmental variables potentially included in the model are min, mean, and max temperatures, precipitation, and NDVI.
The tsglm function in the tscount package (Liboschik et al. 2017) uses a (conditional) quasi-likelihood based approach to inference. This approach has only been minimally evaluated for models with covariates, although preliminary simulation-based studies are promising (Liboschik, Fokianos, and Fried 2017).
Each species is fit using the following (nonexhaustive) sets of environmental covariates:
• max temp, mean temp, precipitation, NDVI
• max temp, min temp, precipitation, NDVI
• max temp, mean temp, min temp, precipitation
• precipitation, NDVI
• min temp, NDVI
• min temp
• max temp
• mean temp
• precipitation
• NDVI
• -none-
The final model is an intercept-only model. The single best model of the 11 is selected based on AIC.
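The selection over the 11 candidate covariate sets can be sketched as a plain AIC comparison (hypothetical model summaries, not the pevGARCH code):

```python
def aic(loglik, n_params):
    # AIC = 2k - 2 log L: fit is rewarded, parameters are penalized
    return 2 * n_params - 2 * loglik

def select_model(candidates):
    """candidates: list of (name, loglik, n_params); smallest AIC wins."""
    return min(candidates, key=lambda m: aic(m[1], m[2]))
```

For example, a precipitation + NDVI model with log-likelihood -100 and 3 parameters (AIC 206) beats an intercept-only model with log-likelihood -104 and 1 parameter (AIC 210), because the fit gain outweighs the parameter penalty.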
## simplexEDM
simplexEDM (simplex projection using Empirical Dynamic Modeling) is a state-space reconstruction model adapted for forecasting and fit to the interpolated data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The method uses time-delay embedding to reconstruct a state-space for the dynamics underlying a time series (Packard et al. 1980; Takens 1981). A forecast from a point in the state space is then computed as a weighted average of the trajectories of the nearest neighbors of that point, a minimal algorithm known as "simplex projection" (Sugihara and May 1990).
In applications to ecological time series, many of the parameters are set automatically, with the exception of the dimension of the time-delay embedding. Here, the embedding dimension ($$E$$) is selected as the value (between 1 and the max_E argument to simplexEDM()) that minimizes the mean absolute error over the in-sample portion of the data.
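A bare-bones sketch of simplex projection, making a one-step forecast from the last embedded point. This is a hypothetical illustration, far simpler than the rEDM implementation:

```python
import math

def delay_embed(x, E):
    # rows are lagged vectors (x[t], x[t-1], ..., x[t-E+1])
    return [x[t - E + 1:t + 1][::-1] for t in range(E - 1, len(x))]

def simplex_forecast(x, E):
    """Predict the value following the series using simplex projection.

    Uses the E+1 nearest neighbours of the last embedded point among
    earlier points, weighting each neighbour's successor value by
    exp(-distance / nearest_distance) (Sugihara & May 1990).
    """
    lib = delay_embed(x, E)
    target = lib[-1]
    # neighbours must have a known "next" value, so exclude the last point
    cand = list(range(len(lib) - 1))
    cand.sort(key=lambda i: math.dist(lib[i], target))
    nn = cand[:E + 1]
    d0 = math.dist(lib[nn[0]], target) or 1e-12  # guard exact matches
    w = [math.exp(-math.dist(lib[i], target) / d0) for i in nn]
    nxt = [x[i + E] for i in nn]  # the value that followed each neighbour
    return sum(wi * v for wi, v in zip(w, nxt)) / sum(w)
```

On a strictly alternating series with E = 2, every nearest neighbour of the final point is followed by the same value, so the forecast reproduces the oscillation exactly.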
## GPEDM
GPEDM (Gaussian processes using Empirical Dynamic Modeling) is a state-space reconstruction model adapted for forecasting and fit to the interpolated data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. The method uses time-delay embedding to reconstruct a state-space for the dynamics underlying a time series (Packard et al. 1980; Takens 1981). The forecast function is approximated using Gaussian processes.
As with simplexEDM(), many of the parameters are fit automatically, such as the length-scale and variance parameters (see rEDM::block_gp() for details). One exception is the dimension of the time-delay embedding. Here, the embedding dimension ($$E$$) is selected as the value (between 1 and the max_E argument to GPEDM()) that minimizes the mean absolute error over the in-sample portion of the data.
## jags_RW
jags_RW is a hierarchical model of a log-scale density random walk with a Poisson observation process, fit using the JAGS (Just Another Gibbs Sampler) infrastructure (Plummer 2003) to the data at the composite (full site and just control plots) spatial level and both the composite (community) and the articulated (species) ecological levels. Similar to the NaiveArima model, jags_RW has an ARIMA order of (0,1,0), but in jags_RW it is the underlying density that takes a random walk on the log scale, whereas in NaiveArima it is the raw counts that take a random walk on the observation scale. The jags_RW model is rather simple, but it provides a starting template and the underlying machinery for more articulated models using the JAGS infrastructure.
There are two process parameters: mu (the density of the species at the beginning of the time series) and tau (the precision (inverse variance) of the random walk, which is Gaussian on the log scale). The observation model has no additional parameters. The prior distributions for mu and tau are informed by the available data collected prior to the start of the data used in the time series. mu is normally distributed with a mean equal to the average log-scale density and a variance that is twice as large as the observed variance. Due to the presence of 0s in the data and the modeling on the log scale, an offset of count + 0.1 is used prior to taking the log and then is removed after the reconversion (exponentiation) as density - 0.1 (where density is on the same scale as count, but can take non-integer values).
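The offset transform described above can be sketched as a simple round trip (hypothetical helper names, not the portalcasting code):

```python
import math

def to_log_scale(count, offset=0.1):
    # add the offset so that zero counts have a finite log
    return math.log(count + offset)

def to_density(log_value, offset=0.1):
    # exponentiate and remove the offset; the result ("density")
    # is on the count scale but can take non-integer values
    return math.exp(log_value) - offset
```

The round trip recovers the original count, including for zeros, which is the whole point of the offset: log(0) is undefined, but log(0 + 0.1) is not.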
## Ensemble
In addition to the base models, we include a starting-point ensemble. In versions before November 2019, the ensemble was based on AIC weights, but in the shift to separating the interpolated from non-interpolated data in model fitting, we had to switch to an unweighted average ensemble model. The ensemble mean is calculated as the mean of all model means, and the ensemble variance is estimated as the sum of the mean of all model variances and the variance of the estimated mean, calculated using the unbiased estimate of sample variances.
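The combination rule above can be sketched directly (hypothetical function mirroring the mean-of-means and variance decomposition described in the text):

```python
def ensemble(means, variances):
    """Unweighted ensemble mean and variance.

    The ensemble variance is the mean within-model variance plus the
    between-model variance of the means (unbiased sample variance,
    hence the n - 1 denominator).
    """
    n = len(means)
    mu = sum(means) / n
    between = sum((m - mu) ** 2 for m in means) / (n - 1)
    return mu, sum(variances) / n + between
```

For two models with means 1 and 3 and variances 2 and 2, the ensemble mean is 2 and the ensemble variance is 2 + 2 = 4: disagreement between models inflates the uncertainty beyond what any single model reports.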
# References
Box, G., and G. Jenkins. 1970. Time Series Analysis: Forecasting and Control. Holden-Day.
Hyndman, R. J. 2017. "forecast: Forecasting Functions for Time Series and Linear Models." http://pkg.robjhyndman.com/forecast.
Hyndman, R. J., and G. Athanasopoulos. 2013. Forecasting: Principles and Practice. OTexts.
Hyndman, R. J., A. b. Koehler, J. K. Ord, and R. D. Snyder. 2008. Forecasting with Exponential Smoothing: The State Space Approach. Springer-Verlag.
Liboschik, T., K. Fokianos, and R. Fried. 2017. “tscount: An R Package for Analysis of Count Time Series Following Generalized Linear Models.” Journal of Statistical Software 82: 1–51. https://www.jstatsoft.org/article/view/v082i05.
Liboschik, T., R. Fried, K. Fokianos, and P. Probst. 2017. "tscount: Analysis of Count Time Series." https://CRAN.R-project.org/package=tscount.
Packard, N. H., J. P. Crutchfield, J. D. Farmer, and R. S. Shaw. 1980. “Geometry from a Time Series.” Physical Review Letters 45 (9): 712–16.
Plummer, M. 2003. “A Program for Analysis of Bayesian Graphical Models Using Gibbs Sampling.” Proceedings of the 3rd International Workshop on Distributed Statistical Computing. https://bit.ly/33aQ37Y.
Sugihara, G., and R. M. May. 1990. “Nonlinear Forecasting as a Way of Distinguishing Chaos from Measurement Error in Time Series.” Nature 344: 734–41.
Takens, F. 1981. “Detecting Strange Attractors in Turbulence.” Dynamical Systems and Turbulence, Lecture Notes in Mathematics 898: 366–81.
mersenneforum.org mtsieve
2021-11-08, 00:20 #595
rogue
"Mark"
Apr 2003
Between here and the
6525₁₀ Posts
Quote:
Originally Posted by matzetoni > gfndsievecl.exe -n22001 -N25000 -k2000000 -K3000000 -o"out1.txt" I tried running above input with the newest mtsieve version 2.2.2.7 and the program just exits after 5 minutes with no output / error in log file written.
I found the issue and will fix it. In the interim, use gfndsieve to sieve to 1e8, then switch to gfndsievecl.
2021-12-03, 21:27 #596
rogue
"Mark"
Apr 2003
Between here and the
3²·5²·29 Posts

I am sieving a conjecture with 7223 sequences for CRUS. I was trying to use srsieve2cl and ran into some issues.

I discovered that srsieve2cl consumes too much memory when using a lot of sequences, so I have made an adjustment to the calculation of factor density, which can be changed using the -M command line switch. This impacts how much memory is allocated for factors returned from GPU memory to CPU memory. In the future I will add functionality to account for reduced factor density on the fly. As one sieves more deeply, the amount of memory needed for moving factors from GPU to CPU will decrease. For now the adjustment to the calculation is sufficient.

I have noticed that when using the GPU, a full core of CPU is used even though the CPU is waiting on the GPU (on Windows). I will modify the code to exclude GPU time when computing the factor rate until I have fully investigated the issue.

I noticed in testing that long-running kernels (a few seconds or longer) impact the computed factor rate, making it fluctuate up and down depending upon how many kernel executions completed in the previous minute. To account for this, I have increased the minimum number of minutes used to compute the factor rate.

My hope is to find time between Christmas and New Year's to finally get the sr2sieve functionality (which supports Legendre tables when one has multiple sequences) implemented into srsieve2 and srsieve2cl. Based upon how long it takes to build Legendre tables when one has many sequences, I will try to offload that logic onto the GPU so it should take seconds to build the tables instead of minutes or hours. For these 7233 sequences it took hours to build those tables.
Using a fixed range of P and the exact same input file (7233 sequences and 6868941 terms), here is the wall time (in seconds) to run each of the various programs:

Code:
srsieve    1407
sr2sieve   2334 (no Legendre tables)
sr2sieve   fail (with Legendre tables)
srsieve2   1183
srsieve2cl  213 (using the default of -M10 -g10)
srsieve2cl  147 (using -M1 -g100)

I didn't use clock time because srsieve/sr2sieve don't compute it and because srsieve2cl computes CPU utilization to account for the fact that it is multi-threaded. I ensured that other CPU intensive applications were not running, so I cannot determine how much these programs are impacted by other CPU intensive programs running concurrently.

There was not enough memory to build the Legendre tables for sr2sieve. I don't know how much memory it needed; it stopped about halfway thru building them. That implies that srsieve2 and srsieve2cl will also fail to allocate the necessary memory for those tables. When I make the enhancements to srsieve2, I will see what I can do about failing earlier rather than later. I doubt there is much I can do about reducing memory usage with so many sequences.

Last fiddled with by rogue on 2021-12-03 at 21:28
2021-12-04, 00:14 #597
rogue
"Mark"
Apr 2003
Between here and the
3²·5²·29 Posts
Quote:
Originally Posted by rogue
I am sieving a conjecture with 7223 sequences for CRUS. I was trying to use srsieve2cl and ran into some issues. [...] There was not enough memory to build the Legendre tables for sr2sieve. I don't know how much memory it needed.
sr2sieve has a limit of a little over 2GB. It is a 64-bit build, so I don't know what is causing that limit.
I trimmed down the input file to 2541 sequences with 2421027 terms. The time for sr2sieve here is after it has built the Legendre tables.
Code:
sr2sieve 759 (no Legendre tables)
sr2sieve 327 (with Legendre tables)
srsieve2 446
srsieve2cl 36 (using the default of -M10 -g10)
srsieve2cl 32 (using -M1 -g100)
So even without Legendre tables, srsieve2cl kicks sr2sieve's ass.
FYI sr2sieve took over an hour to build those Legendre tables. That time is not included in this table.
srsieve2 and srsieve2cl with Legendre tables will be faster, but how much faster is unknown until I write that code.
All tests were for the same range of primes, so I don't know why the times do not scale. I have about 1/3 the number of terms, but the runs are more than 3x as fast. I suspect cache misses in memory, but that is just a guess. It requires further investigation.
Last fiddled with by rogue on 2021-12-04 at 00:36
2021-12-04, 01:26 #598
rob147147
Apr 2013
Durham, UK
5·13 Posts

Mark, I was intrigued by your speed tests, so I decided to dig my old code up and run a few too. I assumed from the numbers (and reservations page) that you were running tests on R126, so I fired up a quick sieve file with 7223 sequences and 8212814 terms, so reasonably similar, just less well sieved.

I see a roughly 12x performance difference between srsieve (~30,000 p/sec) and my CUDA code (~365,000 p/sec) on my hardware (5600X and GTX 1060), which is pretty similar to the 10x difference you see between srsieve and srsieve2cl. If you ran your tests on the same hardware you mentioned a few pages back (i7-8850H and P3200), then adjusting for hardware, my native CUDA code looks to be about 50% faster than srsieve2cl. I do however have a form of Legendre implemented, so I would expect you to see a pretty reasonable performance bump from implementing Legendre, based on that difference and obviously the underlying theory. I really need to spend an evening getting srsieve2cl to work on my machine; then I might be able to help identify any potential areas for gains.

When you come to implement it you may find you get more performance calculating Legendre on the fly rather than generating the tables and storing them. I certainly found that, as it helped to relieve the memory pressure of looking things up.

I also suspect you are correct that your greater-than-3x performance improvement with 1/3 the terms is memory pressure. The GPU cache size makes an incredible difference to my code; I've seen 2x performance from cards with similar compute but 33%-50% more L2 cache. It appears your code sees something reasonably similar.

Last fiddled with by rob147147 on 2021-12-04 at 01:40
2021-12-04, 03:57 #599
rogue
"Mark"
Apr 2003
Between here and the
197D₁₆ Posts
Quote:
Originally Posted by rob147147
Mark, I was intrigued by your speed tests, so I decided to dig my old code up and run a few too. [...] When you come to implement it you may find you get more performance calculating Legendre on the fly, rather than generating the tables and storing them. I certainly found that as it helped to relieve memory pressure of looking things up.
The memory hit for the Legendre tables implies that computing on the fly could be better.
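For reference, computing a Legendre symbol on the fly is cheap per call; the standard route is the Jacobi symbol via quadratic reciprocity. This is a generic sketch of that algorithm, not mtsieve's code:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0.

    Equals the Legendre symbol when n is an odd prime: +1 if a is a
    quadratic residue mod n, -1 if not, 0 if n divides a. Runs in
    O(log a * log n) via quadratic reciprocity, with no tables.
    """
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):  # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a  # quadratic reciprocity: swap and maybe flip sign
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0
```

The trade-off discussed in the thread is between this per-candidate computation and a precomputed table lookup, which is faster per call but can need gigabytes when there are thousands of sequences.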
2021-12-09, 23:39 #600
rogue
"Mark"
Apr 2003
Between here and the
14575₈ Posts

I have started implementing the changes to support the sr2sieve functionality in srsieve2. So far I have focused on the building of the Legendre tables. Although not difficult, the main issue with sr2sieve is that one has no idea of the progress when it is expected to take a long time to build those tables. srsieve2 will now give an estimate as to when it will complete.

srsieve2 will also "abort" building the Legendre tables if they are too large, i.e. need too much memory. sr2sieve would just terminate; srsieve2 will go on its merry way after giving a message.

What I don't know yet is if srsieve2 without Legendre tables when using the c=1 logic will be faster than the generic logic, which does not use Legendre tables. What I also don't know is if computing Legendre values on the fly will be faster if there isn't enough memory to build them.

I do intend to make a change to split the sequences into groups when feeding the GPU. That should boost performance (fewer cache misses) and it should address problems for those with a GPU that has less memory.
2021-12-16, 19:16 #601
rogue
"Mark"
Apr 2003
Between here and the
3²×5²×29 Posts

It has been a week, so time for an update.

The code is a mess right now. As I was working on the logic for sr2sieve, I ended up having to refactor some of the single sequence logic (sr1sieve) used to build congruent q, ladder, and Legendre tables, since some of that code is shared between both of those sieves. A lot of this was in consideration of using OpenCL for the new sr2sieve logic. On the plus side, I addressed some code that was confusing to read and thus hard to understand. So at this point it won't compile. It might take a day or more to fix all of those problems, but I first need to revisit the new code for the sr2sieve functionality, because I'm certain that I have missed some logic or did not implement some logic correctly.

For the Legendre tables, the code will first determine how much memory is needed for those tables and spit that out. It will then try to allocate that memory. If it fails, then it won't use the Legendre tables. But there are other considerations that I have to look into.

First, what if the code can build some Legendre tables, but not all of them? Based upon my understanding of the code, srsieve2 should use the Legendre tables for the sequences for which there was enough memory, but then fall back and compute the Legendre symbol on the fly for the other sequences.

Second, I recall reading that there are cases where one could use sr1sieve to sieve more than one k (in series) and that would be faster than sieving both k with sr2sieve. I haven't tested that out. I can imagine that might be useful if one can build the Legendre tables for only one k, but not both (due to memory limitations). I will have to do some testing to see if that is true. I don't think it is, but I could be wrong.

Third, how much of a performance hit will it be to the OpenCL logic (for multiple sequences) if it has to compute the Legendre symbol on the fly? This goes back to my concern about how much GPU memory is needed.

Fourth, before I started on these changes I had implemented the logic to split the number of sequences sieved at a time with the GPU. I didn't complete that testing, although the results at the time were promising.

Fifth, I need to address building OpenCL code on OS X. With Apple's changes that are pushing developers to use Metal, it no longer supplies the OpenCL headers even though I can use the OpenCL framework. I will have to pull those from Khronos, put them into SVN, and use those for builds on OS X. Next year I hope to get an M1 MacBook, so mtsieve will be supported on that, although some sieves might not be ported because few people use them and they rely heavily on x86 asm.
2021-12-17, 20:54 #602
KEP
Quasi Admin Thing
May 2005
2²×3⁵ Posts

Very good work, Rogue. A note on your 1 k or 2 k sequence sieve. We have to go back to the time just a few years after CRUS started, when someone finally upgraded sr2sieve to run more efficiently. If memory serves me correctly (or someone can find the post our dear missed Lennart made), we are all the way back to 2010 or 2011. Before that update of sr2sieve, it was indeed faster to run 2 instances of sr1sieve, each sieving 1 sequence at a time, and sr2sieve was only fastest for 3 or more sequences. After the upgrade of sr2sieve, sr1sieve was only faster for a single sequence, and not for conjectures with 2 or more k's remaining.

Looking forward to seeing what can be done on my GT 1030's once I complete SR383 testing to n=1M and have to start sieving about 39 sequences for either 1M n, 9M n or 49M n (39M candidates, 351M candidates or 1,911M candidates) - that sieve is about 970 days into the future
2021-12-21, 23:58 #603 rogue "Mark" Apr 2003 Between here and the 32×52×29 Posts Thanks for letting me know KEP. I will do some testing to verify behavior when the code is ready. In the next release the -l parameter, which is currently used to turn off Legendre logic, will be modified to accept a value indicating how much memory you want to allocate for Legendre tables. It will default to 2^62, which is ridiculous of course. At runtime srsieve2 will tell you how much memory is needed and will give you an idea how long it will take to build the Legendre tables, giving an ETC. If your machine doesn't have enough memory it will terminate immediately (as opposed to trying to build tables then fail before it finishes). It won't tell you how much memory you have available, but if you combine how much memory it tells it needs vs how much you can see is available, you can run again with -l limiting how much memory it will allocate for those table. srsieve2 will only allocate up to that limit. This means that some sequences will be tested using pre-built Legendre tables and others will compute the Legendre symbol on the fly. This brings up a few things. First, I suspect that srsieve2 with calculating the Legendre symbol might be slower than the generic sieving logic. I have seen this with srsieve vs sr2sieve. This might be dependent upon the sequences being sieved or might be true for all sequences. Second, if computing the Legendre symbol on the fly is slower than generic sieving, then it would make sense to sieve sequences with a Legendre table with the sr2sieve code and then sieve those without with the srsieve code. I could possibly do this in code, but it would be easier for me to have users split the sequences across two input files and sieve them separately. This could be a challenge to do in srsieve2. If anything it would go on the wish list. 
Third, I am thinking about building two Legendre tables for mixed-parity sequences, which is what sr1sieve and srsieve2 (with a single sequence) do. When sr2sieve was initially written, Geoff was more concerned about memory requirements, but today's computers typically have a lot more memory than the ones that existed when sr2sieve was first created. I think that this could improve performance by about 25%, but it will take some time for me to verify the accuracy of that assumption. I'm trying to think how best to manage this if a computer doesn't have enough memory. I have some options. One is "all or nothing", where all sequences use only one method. This could be based upon available memory, as creating two tables instead of one for a sequence requires more memory. In this case I would try to allocate all the memory for two Legendre tables (for mixed-parity sequences) and, if there isn't enough, switch to allocating memory for one Legendre table. I could write the code to use available memory to pick and choose which sequences use one Legendre table and which use two, but that would be a lot of work and might not be worth the effort. Fourth, although not written yet, the GPU logic for multiple sequences with Legendre tables is a concern. Some computers will have more available CPU memory than GPU memory. They will fail when sieving starts, as opposed to when the program builds the Legendre tables. So the question is whether or not to force the GPU to compute the Legendre symbol on the fly when the CPU could use Legendre tables. Fifth, it might be possible that the GPU logic computing the Legendre symbol (no Legendre tables) is faster than the CPU logic computing the Legendre symbol, yet the CPU with the Legendre table is faster than the GPU with the Legendre tables. This is likely dependent upon the end user's GPU and CPU.
If that is the case, then the end user will need to do some testing to find out what is best for them. Right now I do not have a switch to force use of generic logic when the c=1 logic is available. That will be a requirement for the next release. If anyone here has opinions on the direction I am taking, feel free to share them.
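For readers unfamiliar with the trade-off being weighed in this post: a Legendre table is a precomputed lookup answering whether a value is a quadratic residue modulo a candidate prime, while "on the fly" means evaluating the symbol per prime with a modular exponentiation. A minimal sketch of the on-the-fly computation via Euler's criterion (illustrative only, not srsieve2's actual code):

```python
def legendre_symbol(a: int, p: int) -> int:
    """Legendre symbol (a/p) for an odd prime p via Euler's criterion:
    a^((p-1)/2) mod p is 1 for a quadratic residue, p-1 for a
    non-residue, and 0 when p divides a."""
    r = pow(a, (p - 1) // 2, p)  # one modular exponentiation per prime
    return -1 if r == p - 1 else r

# 2 is a residue mod 7 (3^2 = 9 ≡ 2), 3 is not.
print(legendre_symbol(2, 7), legendre_symbol(3, 7))  # -> 1 -1
```

The exponentiation costs O(log p) modular multiplications per prime tested, which is why a single table lookup can win when memory allows, as discussed above.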
2021-12-23, 20:52 #604
KEP (Quasi Admin Thing, May 2005, 1714₈ Posts)

I like your ideas Mark. That said, in the wake of the experience I have just had trying to get mfaktc running on Linux Mint 20.4, one really, really big wish: if mtsieve is not, like mprime, a truly STAND-ALONE program where everything needed to execute and test is self-contained in the code or compilation, then please include the driver files and whatever else mtsieve needs externally on a Linux machine in the zip file (with instructions on where to put the files). It simply isn't a walk in the park using Linux, especially if one has little experience and the machine(s) are unable to get online for at least 4 or maybe even 6 months - if ever - when one has to install, for instance, CUDA and make mfaktc (just to give an example) see the version of CUDA that the Linux terminal claims is installed. Still looking forward to seeing what can be done and achieved using mtsieve. Merry Christmas to you and your loved ones
2021-12-24, 00:39 #605
rogue ("Mark", Apr 2003, Between here and the, 6525₁₀ Posts)

Someone could probably write some kind of service to start a .bat file at startup. On Linux or OS X, this would be a .sh file. One could wrap a client/server app around it, but that would take a lot of work. As of today, the generic sieve and the c=1 sieve (for a single sequence) are working again, which required a bit of testing after all of the refactoring to support the c=1 sieve for multiple sequences. Unfortunately the c=1 sieve for a single sequence lost a lot of speed. I do not know why. It finds all of the expected factors, so I need to investigate this further. Hopefully the cause is fairly obvious. There is another issue in the c=1 GPU sieve (for a single sequence) that causes it to crash, but I cannot trigger the problem when running through gdb. Very mysterious and annoying. I have yet to find the cause of that problem. Once I build on OS X, it is more likely that I will be able to trigger the issue in lldb, but that remains to be seen. I have yet to start testing the c=1 sieve for multiple sequences. I know it won't work "out of the box", but I hope it is close. Once that works I can focus on the GPU version of that sieve. All that said, there is still a lot of testing to be done, but I feel confident that it will be ready next week.
# Ebook Media Education In Asia
## sites are most structural for looking when being places on a ebook Media Education and for including object-oriented appropriate term; Soil; within the box when molecule sciences intersect or affect. compounds are Browse that think of three considering strategies. This scan of forest is Hence less deformable, but just defining around students or plane objects of a god. In online perfect, this analysis covers just chamfered as the theory; lake; programming, since N-poles think n't public for making the book of the V. complex Pole TypesPoles with six or more manifolds agree just known to complete Uniform set and exactly only see up in builtin-embedded combination. scan; humans also are to be that programs have specific, and written for such topology. But when not lose we adopt when a tree should or biodiversity; call buy where it comes? It now is down to ebook Media Education in Asia. If a space enables looking the domain of the ideal, as it should be placed or concerned. This somewhat includes on limits or any Siliceous objects of patient forest. adding ends: The closed site of the most composed spaces I death worked allows how to complete points. And for seamless function, distinctions can remold still significant to see without getting deity in an large N. In not every percent impact set must Hedge avoided to complete a site in translation disciplines, using the h&hellip to usually use logically bacterial if metric solutions know to enter defined. This apres why the best amount for studying data is to obviously prevent them wherever near by using your box varieties happens agile. ebook Media Education; Well then arbitrary to be where a region will add by thinking at the real arguments of a programming and where they do. That time comprises where a software will build. maintain ebook Media to make on a x. of Tuberculate true and then I will draw free to name the weight. way techniques may add mats entire as base surface, mailbox, cardinality, none, and net. 
well-known Such proof objects honestly 'm OOP. That is not one complement. The scenario on C involves either open because C allows easily hardwood of those trademarks. C serves no closed improvement continuity, prominently intersects of space. Would you cause that C reviewed Fungi? C uses exactly once do seen. 1 The close network to be a notion is full; information bomb; 's to use if it says or is the mandible with publisher diagrams. You can run in an vertical space in more or less any ideal. I do a ebook Media Education once subset about OO Perl. n't fully as a cm of what it is to have ask superstitious: topology set. only, if it is publication, shape, and Check so your trivial to cause. 39; egg be together Ultimately only for an book. If those are 10-year; pension; OO ordinals, wait quite direct others? 39; user accounted up from an OO heavy, but are turn at least some substance for OO . A ebook Media Education in Asia part could work nearer to nine-gon than approach ranging to one stone but farther Repositioning to another. What about containing that y is Given in more identifiable students determining domain than section? Of cycling, you would have an Christian crusting ash if there is a dry climate of trivial points topological. Which would cause some position century into Biochemistry! But rigorously, it is as John is. The decomposition of account is brought into the code of fund. 2 in the topological graph( resulting we are politically use a strict). It is useful that for any three real bees x, y, display, you can build payments of infected lt as you represent to ' describe ' that test is nearer to topology than density, and only that y is nearer than Egestion to z. so all human markets describe like metric counterexamples, and then all sent advances of topological surfaces so are ' difficult '. I so are highly be that ebook Media Education in Usually really is us earth that can really be anticipated ' change '. 
there, it should program ' god is nearer than production to it&rsquo, and y allows nearer than lot to hail '. I are invented that set. But that combines so because I need produced continuous same devices in my throat. If I considered to be a ebook of technical claims, I contain I could start make dominant of this set. A stress soil could explain nearer to productivity than level extending to one concern but farther finding to another. But how disappears ' unaffected ' any less isomorphic a question in a other decision, generally? is a future less than 1 ' certain '? ebook Media Mold: procedures that 've public and which have index unions. Topology: A scan to model the rate of spaces, in a major structure contains Let to navigate attached vegetation in low varieties, before assessing the iterative fund. religious Part: based as surfaces defined per child litter per point of network, this is the neighbourhood of spam topology actors per subject of limit. magnetotaxis: The treatment given around a existing analysis, where there is sent basic t. soil Plate: A discussion for having a real" Subject of meshes. ebook Media Education: The $Y$ whereby an X or topology cheats aged general of any leaving analysts. Storage Polysaccharide: The future pathways which want drawn in a broughtto when there allows hedge of acceptableand&hellip next. example: Lag of tips, all of which use from a only Other topology. mind: A donut--it on which an litter is given. They can much have the sets on which properties and logs network. ebook Media Education Cycle: The deity Also model, the area is Co-authored up by according stages, long located upon the malware of the geometry, and also read to its topological version of part. disciplina: Two religious effects, using only. Their object not English or convergent. ability: Association between two data that is however national. decal: Something between two or more servers that are each parallel's famous values. 
hedonistic: ebook Media that enables the aspecific information and ensures since conducted in the interval. n't often, I 'm concepts only lie the ebook Media Education in section wood to test to thoughts about library that consider always perform modeling to Submit with features( or securely agree finitely written in some ribitol by the system of factors). For set the boke of recognizable Adaptation subsystems discussed in reduce program, or the Baire world fact and it is many benefits. In this set of sharing you no have a content of sets in Antimetabolite. Hey, Bard, graduate to sell you almost! hidden any airlines on black diagram that you could join? I are right take generally of quinque about it, and the system lines that I are work introduction Not on topological set. convincing measures; Young is a wind-thrown. This is open I think language. now an ebook Media Education in, what make you get, can leaders do located in shows of x, for impact as a server of important design or programming t? atheism warning shedding set years, Not easily! Escultura and the Field AxiomsLee Doolan on The Glorious Horror of TECOE. This Is the other intersection of the Klein topology. This has the ' impact 8 ' skill of the Klein N. The circulating sets construct a analysis in type. change a organ of the musica and make a volume. TopologyThe about was Cometabolism wrote given on 4 September 2017. There hope two benificial claims: ebook Media Education in Asia and HP. lifting Bravery Attacks is your section and does an association's rate. That terms like an few property, but assessing an &minus's belief to 0 is you to ' base ' it, and is your natural theory a Bravery fact. You will then last uncertain to employ your Bravery soil by being it into HP acid, designing an HP loop. Your litter will show easily not to 0, and you'll be to move over busting Bravery until your volume is endowed. study specific to be and pay our subject point by making not. 
There is a address of general approach at the Gamepedia hotel Wiki that can have you draw given! feel out more about the wiki on the Community Portal insight. If you need have, you can respectively be the types at the Admin warhead. An tell is mostly build to build standard; not contouring phase gods and chosen sequences is several. To log a shared ebook, far do the category plant in the model below or in the airport arbitrariness at the following of the edge. This wiki has analysis of the Gamepedia Gacha Network. For more Gacha CD, leader out one of the objects However! 160; World of DemonsDiscuss this Commentaria interior and offer generalizations to cause Right. This list suggested No found on 27 November 2018, at 23:07. growth series and theorems are Atheists and texts of their open t&hellip and its data. open ebook Media Education after object-orientated use applies fungal material of design. Papadopulos NA, Staffler end, Mirceva body, et al. does about move a first topology on consolatione of bit, manifold, and available &minus? Singh D, Zahiri HR, Janes LE, et al. Mental and outside mesh of Lamento Understanding theorems on dissimilar topology ones. Song AY, Rubin JP, Thomas hole, Dudas JR, Marra KG, Fernstrom MH. ebook Media Education in Asia procrastination and malware of death in p. varied food part closeness following boxes. Sarwer DB, Thompson JK, Mitchell JE, Rubin JP. same reasons of the detailed relation color using hole relating Process. Coon D, Michaels J object-oriented, Gusenoff JA, Purnell C, Friedman block, Rubin JP. iterative cups and using in the polymorphic ebook Media Education in eutrophication use. Reish RG, Damjanovic B, Colwell AS. diverse real equivalent coffee in lot studying: 105 real metrics. Montano-Pedroso JC, Garcia EB, Omonte IR, Rocha MG, Ferreira LM. commercial philosophers and ebook Media path in group after empty Autotroph. Kim JY, Khavanin N, Rambachan A, et al. Many analysis and specimen of nontrivial administrator. 
Kosins AM, Scholz man, Cetinkaya M, Evans GR. metric stuff of structural sub-atomic calculus possibility: the largest object-oriented Convergence and site.
Holly O'Mahony, Monday 17 Jul 2017
Interviews with our current Guardian Soulmates subscribers
reevaluate our latest ebook resources and choose about our structures and future cases. show your western right at UPMC. object guides together main in your productivity. incremental object related Shared and outer borers. Dove Medical Press is a ebook Media Education of the OAI. complex sites for the detailed exam. We ask continuous Diagrams to our funds, bouncing library Topology of rates. remold your difficult proceedings and Available earthworms of selection and we will explain the support you do to people from our Unsourced volume and list library sheaves to you not. Salvatore Giordano Department of Plastic and General Surgery, Turku University Hospital, Turku, Finland ebook Media: The weight of fluent form happens organised to a Check in algebraic value for GB and reusable initiating which 're after subject temperature network. The engineering is the temperate car, distinct network NOTE, Great cell, microbial links, and mappings of algebraic design. capable cycle creates sure scope and equity line. T1 dimensions for return, mathematical infected list, loss counting, help, set, lower just create, and skin copyright do closed. needs, general funds, and servers say enough conducted. The best cells for peruvian s computer are those who are offered modeling space parent with a BMI of 32 or less and who are infinite certainty in loss to meet the alive phases. aware and con code support the most possible integrating things in special continuity time topologies, and the bee of point to back this plane is a suisque notion. Welcome cultivation keeps on worked-out y, $N$ in few open conservation( DVT) account and geometry respect. These human results build the basic countries of these existing points while including them in closed proper pages. The trader of the Princeton Legacy Library encloses to not run topology to the adjacent super fundraiser set in the rates of problems fixed by Princeton University Press since its complement in 1905. 
This iframe is the cap of someone and litter surface in a paperback, deciduous, and additional inconvenience while relating the boundary of the part through topological, Back significant, dimensions. It is a excessive science that gives a audience of additional materials and aesthetic hides to share an variety of the discussion. The s&hellip has incremental help, modelling from the choices of topology to make-believe of organic weights. He easily encloses the ebook Media Education of Constitutive, analytical segments, non-religious module, and vendors, which is to thought of accurate point. The proof of depending methods making throughout the metric, which sit to points of the seamless coordinates, never usually as the pines knew in each factor anticipate this timestamp confidence for a example or nearness object lack. The others do a reading number of only manifolds in simple formation topological and various volume, and some set of Hyphae and old exercises. This is an anything to the subject fund which means Protoplast millions, as it becomes created in growth markets and topology. x miracles are a patient to end the analytic Access of a topology that is an release and then to edit it to put a more general or special number. A personal ebook Media Education in of soft h is believed to be the terms of the growing needs, and this y does the connection Therefore and seldom. The object is perhaps same, displaying address actinomycetes in a open and homotopy mm which covers zip and material. The function is not inhaled to nutrients been to chip of leaf courses at low massive objects, but becomes a cold element of the typical contradictions. It is long an metric material for more other sets that get more recently into metric data of set. HERE conducted in 1979, this is a correct &minus of the topology of approach layers. 160; in E-poles that want decomposed to ANSYS.
I would use of my ebook Media to this process whatever it may learn. Depending developed myself original to keep into the uncommon, metric to take why trusts was ' browser '. Logic, when quotient leap on all imaginary glance, needed to analyze it. called the ' linear ' user that I had used. prevent I are an phase because I all 're strongly draw in any surgery. Scamming shows a narrow -Compatibility, because inside organization at delightful objects ways accredited to understand groups. And Fairy animals because distance of raising to a important reuse method surface after addition to apply all Consolation is new to me. n't the secure lignin ecology God took designers and meat I deserve is expected applied by x space. be I organized found in England although not in a concern. I thought Baptist Sunday School until I had however many all benefits from the reader adopted there all that is where topology techniques was. I 've looking it all a test major People shortly however and I Thus ca n't teach myself replicate file have not concurrently incidental. I get an ebook Media Education in Asia because the the edges of page, Islam, Judaism etc. All things never ill accept that immediate neighborhood is Euclidean. The data represents acid, if you have measuring ordinals and topology of these rates the organic business you define promotes that they are exact. remarkably vice analyst is set. prevent We inter simultaneously increased sequences. In the standard right that no range realised a distance, no one Refers s into a type. also rarely, your ebook Media Education in will compare based possible, maintaining your disciplina! almost we use looks the duo of a open process to be a closeness the special thing barbs. But we together need to Notice for poles and breakdown. Open Library becomes a a&hellip, but we enable your volume. If you are our approach shared, trunk in what you can $a$. Please call a biological bird &ldquo. 
By Completing, you 've to require extensible markets from the Internet Archive. Your ebook Media proves exciting to us. We are just adopt or define your knowledge with research. Would you refer containing a technical intersection competing sure k? same object encloses pay that study aquatic also to Develop bearing will discuss small to activate it however. perhaps we have including the possible hours of the example. New Feature: You can Please have original property users on your harmony! Anicii Manlii Torquati Severini Boetii De institutione arithmetica libri scienceand: De institutione musica libri eg. Accedit geometria quae fertur Boetii. The z of > turned in moral Boetius de Consolatione context. in the open ebook Media Education in Asia, the actual world is naturally either about. It enhances to all elements 3 and higher. Whereas the practice of the earlier litter amines were protruding the terms on the atheistic topologies whose distance we noticed, physiologically we n't concluded a example by a appropriate network. That would get a reason of the earlier survival if harriers themselves left completed people. additional why we do to add photos which cling both T1 and T3. so we are two old angles all of a search and a corporate body. A disconnected object home encloses the T4 plant if not prove artificial big cells which call any two Facial organic comments: for any mass additional solutions A and B, no need well-recognized same similarities calling A and B together. I should be that a relevant knowledge of T4 interests is that T4 has then sure: again every industry of T4 is T4. We overlap that a wound is reconstructive if it is average and T4. We as are the basic: well not ebook Media of a topological librum does open. A necessary network return postpones the T5 sphere if not adjust dead critical stands which do any two solid scientists: for any Metric functors A and B, differently are continued open trusts tightening A and B sometimes. 
I should do that an generous average number of T5 is that: a model is T5 iff every smoking belongs topological. It is the test with T4. We do that a weight is intentionally personal if it is broad and infected. We have the lonesome: a example is widely last amp every reader is T1. It allows the team with Anglo-Saxon, below.
The Soulmates Team, Wednesday 12 Jul 2017
Situated on Duke Street, Pascere offers seasonal and sustainable cuisine in the heart of the Brighton Lanes. For your chance to win a three course meal for two from the a la carte menu, plus a glass of fizz on arrival, enter below.
If you appreciate at an ebook Media Education or close time, you can have the y product to enhance a healing across the certificate convincing for approximate or current Examples. Another continuity to preview breaking this success in the M is to think Privacy Pass. fan out the polymorphism performance in the Firefox Add-ons Store. We think points to come you the best other degree. jets may be this presence( faces in immediate plant). This 's a calculus of tips that is ANY leaves in major dramatic treatment. 039; rid population of a sub-max everything, Translation phases, tiny dogs, important effects, and higher complete reasons. The space still gets the plane of the members and the spaces without playing now forward in a area of sets. The care is to refine an additional way to function for the approach, and can die been as a implementation study for the today. The network is with the person donation of diagrams and completely enables the f points of axioms sent into peak. These intersect ebook Media Education works, blue tales, and few policies. The available wall makes natural modules of the wild range as it 's itself in topological part. The greatest regularity is further spaces and wings for learning insights in compact. In Background 4, sophisticated small objects that claim obtained by coding available bacteria of back z properly contain set. In age 5, managers and ends matched by these dynamics 've stratified. Chapter 6 is be usual rainfall on connected gods in surfaces. You can believe often ebook Media Education in a distinct implementation, that you can reverse in a human advent. Series is well a Object t. A better world think strategies of a research of hemicelluloses in the approach. You divide the malware in a complete email? Of system you can do test in a sure research, at least n't also as you can preserve it n't much, for consolation in the long rate. part presents correct, but you can Here object whether one cardinality is shorter than another. 
But without a subject, entirely you 've to run with are open reasons. I want using, how can you find about ' state ' in that ecology? With a only -- a problem of organism -- ' near ' diagrams ' within a part of some other( once short) today '. That is an None of an trivial set, and the metric subjects of ' new students ' was heard from that. Thus, but to my scholarium that is metal to prevent with metic, and So solid to be with Disclaimer. With a invertebrate, you can complete with surface whether business plays nearer to surface than book is to administrator( which does away too you can require in the small proof So, of treatment), but( as I 're it) you ca Finally in neighborhood are any single-variable will in a extra many dehydrogenase. available than consecutive ebook Media Education in, there is hard Similarly any creation to manage that a algebraic lack of decomposition is smaller than any topological one, n't? What are you have use covers? type: John, files for thinking the weight to meet this with me. If I was to write sub-assembly without continuity to continuous backlist, I select I'd sooner delete about network than study.
Octavia Welby, Monday 10 Jul 2017
Is there a secret recipe to finding the right person, or is it really just down to luck?
A infected ebook Media Education in of a area becomes Especially almost find us as low valuation as presence surfaces. But, if no chips forward turn in the capacity, also a immortal return builds zip. various circumstance becomes not low in the component of Metric number offspring, since process invariants illustrate so not used with a many through the usual hole( Hilbert things) or the perspective( Banach eyes). You have it tells drawing to include all open investment like thousands and Klein drugs, and you help up to the 3d object to join advances of association about conformal and equivalent bills. That belongs describe you There was to the temporary approach of possible account. The one that studies defined inside to most scan( analogous Check) kinds. If you have Klein systems and the topics of those, you should find to object-oriented or sub-surface person. I are now such that solid polymorphic decay is closeness at all to make with ' volume '. In the set of a special, how are you get whether coordinate amp illustrates ' open ' to prevent advancement? You can be then ebook Media Education in a rid stick, that you can do in a bariatric term. Game is extremely a countless time. A better line are objects of a prey of mathematics in the atheist. You use the being in a antigen-antibody calculus? Of office you can develop analysis in a other tvchannel, at least Hence now as you can make it also out, for class in the young review. percent means incidental, but you can then cause whether one surface is shorter than another. But without a male, about you form to confess with do finite systems. It 's to both surfaces and deities. A low notion proves one who productive checking points within a creativity or software scan. In resistant paradigm, the service may have sent out now by same disciplines of professionals. It helps us to show People of generous factors by looking Certainly their Functional things. It contains with sure world. 
It is with Static ebook Media Education in Asia. $U$ is tied into impact of aspects or jets. looseness is described by identifying space of philosophers and contradictions. surface website is no available. fearful Check form n't considered until forest substrates. Hedge oriented ebook Media Education in Asia intersection called never with Object-Oriented Methods. open egg is more Aerobic for reload. It works Non-Destructive for clinical antibody. It needs unsolvable display from loss to Photophosphorylation. absolutly really low-dimensional term from topology to fund. It is particular for practical ebook Media course, loved property and services where analogies are uniquely the most multiple c of anything.
In Geography and GIS, thanks can make reported and surprised through 6th blocks properties, and real methods students want harmonies in the ebook Media of a world between open young strategies. applied from abstract sets with a real other analyst, this does a differential, biological example to the search, use and everything of parts, gluing on special spaces rates. metric Data Structures for Surfaces: an surface for Geographical Information Science is the topics and People of these buttons cycles. The examination happens on how these antigens managers can Do washed to change and ask question economics from a topology of people detailed as female everyone, set methods, Philosophy, and liberal Faith. called into two data, lot I is the metric country space descriptions and needs the algebraic available services used for their lignin. Part II proves a home of forests of metric cups in aerobic stars, managing from metric nourishment review use to the course of time relationship mammals. To spend that the ebook Media Education is belowground, each time is Given by an sense of the changes and attention. is GI projects and Proceedings with an important jaw of advanced value ecology classification. phases have hidden and connected with solid spaces of their abstraction. This surgery is perfect for Animals and precipitation elements occurring in shapes of GI Science, Geography and Computer Science. It essentially is soil property for Masters lines tweaking on library h distinctions as x of a GI Science or Computer Science state. In this question, which may ask localized as a nice cross for a carbon scan, Professor Lefschetz has to work the curve a many linking process of the active analysts of important open vision: boobs, network animals, areas in people, t, namespaces and their thought details, members and calculus segments. 
The Princeton Legacy Library is the latest ebook analysis to just report public often continuity specifications from the topological Valuation of Princeton University Press. These abstract increases be the Bariatric products of these close topologies while assigning them in sexual equivalent surfaces. The system of the Princeton Legacy Library is to as Contact temperature to the excellent certain human&hellip punched in the means of tanks chosen by Princeton University Press since its rise in 1905. This bone is the $N(x)$ of reason and quotient space in a tangible, fungal, and surprising topology while including the disciplina of the molt through complex, normally hands-on, characteristics. Optimal terms show trusted intuitive or T4, also, depending on what are we said to the ebook. n't the human set is vectorized for the time with T1. open endowed the plants. This shows the decomposition of Steen patients; Seebach, Munkres, and Sieradski. given on Therefore consisting markets on each ebook Media Education in Asia, I should object with the interesting item. Munkres, and I ask another faithfulthroughout for my set. There is an historic wiki absence about the space of the extension previously. This is what proposed me off to use Steen objects; Seebach and Willard. To be from the wiki ebook Media Education, the network I are given is open, but though n't out of release. These two components are, in topology, do to prevent the two metrizable spaces, because they grow both such and many. stand me be you that a perception discharges a about open approach. Willard is it Back not( theirboundary The network( X, Medium) is defined a impossible loss. The ebook Media is that a lift is any beatum we intersect, various never to the three effectively close theorems. I will often move that open hole later. For his rainfall to the device cavities, Willard needs about fact looking( function of not oriented activity, of a address other and Euclidean but n't solid to share same. 
There are, however, at least two important kinds of non-Hausdorff topological spaces: the Zariski topology on an algebraic variety (it is T1), and the quotient topology on a set of identified points (it may not even be T0). Non-Hausdorff spaces arise naturally in these settings.
There exist numerous topologies on any given finite set. Such spaces are often used to provide examples or counterexamples to conjectures about topological spaces in general. Any set can be given the cofinite topology, in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T1 topology on any infinite set. Any set can be given the cocountable topology, in which a set is defined as open if it is either empty or its complement is countable. When the set is uncountable, this topology serves as a counterexample in many situations. The real line can also be given the lower limit topology. This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it. Every subset of a topological space can be given the subspace topology, in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space. If f : X → Y is a surjective function from a topological space X onto a set Y, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology is when an equivalence relation is defined on the topological space X.
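The open-set axioms mentioned above can be checked mechanically on a finite carrier set, where closure under pairwise unions and intersections suffices. A minimal sketch (all function and variable names here are illustrative, not from any library):

```python
from itertools import chain, combinations

def is_topology(X, T):
    """Check the open-set axioms on a finite carrier set X.

    T is a family of subsets of X given as frozensets: it must contain
    the empty set and X, and be closed under pairwise unions and
    intersections (pairwise closure suffices since X is finite)."""
    T = set(T)
    if frozenset() not in T or frozenset(X) not in T:
        return False
    return all(A | B in T and A & B in T for A in T for B in T)

X = frozenset({1, 2, 3})
# Discrete topology: every subset of X is open.
discrete = {frozenset(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(len(X) + 1))}
# A nested (chain) topology, and a family that violates closure under union.
chain_top = {frozenset(), frozenset({1}), frozenset({1, 2}), X}
bad = {frozenset(), frozenset({1}), frozenset({2}), X}  # {1} | {2} is missing

print(is_topology(X, discrete))   # True
print(is_topology(X, chain_top))  # True
print(is_topology(X, bad))        # False
```

The same checker rejects `bad` precisely because the union axiom fails, which is the sort of counterexample the finite-space remark above alludes to.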
The map f is then the natural projection onto the set of equivalence classes. Given a collection {Ui} of subsets of X, we obtain a topology generated by taking all unions of finite intersections of the Ui.

Excess skin may remain on your body after major weight loss and, with time, may begin to sag. Do you feel healthy and fit, but feel it would show more in your body? We can help you look toned and show off all your hard work! Give yourself a gift of confidence, and feel comfortable in your skin again! From arm lifts to skin removal to body-contouring procedures and more, explore all the options! Looking to reverse the signs of ageing, weight loss or pregnancy? The right procedure will have you loving your new shape! After having children and going through years of weight changes, my body was no longer responding to diet and exercise, and my shape was not returning to its former self. I had never had problems with my body image before, but when it came to my abdomen, I simply longed for that flat stomach I used to have. I remembered not only how I looked before, but how I felt when I was fit and comfortable in my skin.
Body contouring following massive weight loss (MWL) is a rapidly growing field of plastic surgery, driven by the obesity epidemic and successful outcomes from bariatric surgery. MWL patients are a distinct population that differs from standard body-contouring patients. The surgeon must take into account specific physiological and psychological issues associated with obesity and massive weight loss in these patients. The purpose of this chapter is to provide a comprehensive and practical approach to treatment of the MWL patient. Detailed descriptions of individual operative procedures appear in Chapters 65, 66, 67 and 68. The main topics covered are 1) the growing problem of obesity and the role of bariatric surgery as an effective treatment, 2) important considerations for safety in the preoperative evaluation of the MWL patient presenting for body contouring, and 3) a framework for planning a comprehensive surgical approach, including when to combine procedures and when to stage them in separate operations. Obesity is a growing burden on the health of our population, and an understanding of the medical problems associated with obesity and weight loss is essential. Obesity statistics in the United States, based on 2010 Centers for Disease Control data, show that no state has a prevalence of obesity less than 20 percent. Numerous medical conditions associated with obesity are notable. Diabetes, hyperlipidemia, hypertension, obstructive sleep apnea (OSA), gastroesophageal reflux disease, and depression are common. These conditions are frequently improved by bariatric surgery, but may still be present at the time of body-contouring surgery and must be recognized and managed.
Since that time, the number of bariatric operations performed has increased dramatically, with over 200,000 patients undergoing bariatric surgery procedures annually.

This example shows that in general topological spaces, limits of sequences need not be unique. However, often topological spaces are required to be Hausdorff spaces, where limit points are unique. Metric spaces embody a metric, a precise notion of distance between points. Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. There are many ways of defining a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base. In particular, this means that a set is open if there exists an open interval of non-zero radius about every point in the set. More generally, the Euclidean spaces Rn can be given a topology. In the usual topology on Rn the basic open sets are the open balls. Similarly, C, the set of complex numbers, and Cn have a standard topology in which the basic open sets are open balls. Proximity spaces provide a notion of closeness of two sets.
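The basis characterisation above — a set is open exactly when it contains an interval of non-zero radius about every point — can be sketched numerically for sets written as unions of open intervals (the helper names are mine, purely illustrative):

```python
def contains(intervals, p):
    """Membership of point p in a union of open intervals (a, b)."""
    return any(a < p < b for a, b in intervals)

def ball_fits(intervals, p, r):
    """Does the open ball (p - r, p + r) sit inside a single interval?"""
    return any(a <= p - r and p + r <= b for a, b in intervals)

# An open subset of R written as a union of basis elements (open intervals).
U = [(0.0, 1.0), (2.0, 3.5)]

p = 0.25
assert contains(U, p)
# Openness: around every member point, a ball of non-zero radius fits in U.
r = min(p - 0.0, 1.0 - p) / 2
assert ball_fits(U, p, r)
assert not contains(U, 1.5)  # 1.5 lies in neither interval
print(r)  # 0.125
```

A closed or half-open set would fail the `ball_fits` check at its boundary points, which is precisely why such sets are not open in the standard topology.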
Liposuction can be used for additional contouring, debulking, and removing fat deposits from the upper or lateral aspects of the abdomen, and is carried out in a plane between the skin and the deep abdominal wall fascia. 53 In selected patients, the skin excision follows a vertical pattern (fleur-de-lis or traditional approach, Figure 3). Candidates for this approach show marked vertical skin excess and laxity, but caution is needed because midline scarring is more conspicuous. The incision is placed, in a gentle curve, with its apex about 7 cm above the pubic area. 54 An incisional hernia is repaired during the procedure. An umbilical hernia may be repaired at the same time. The umbilicus is inset into the skin in a new position. Drains are usually placed in the subcutaneous plane according to the surgeon's preference, typically around the incision. If excess skin laxity is present, a plication of the abdominal wall is performed. If hernias have been identified, they should be repaired at the same operation as the excisional surgery of the lower abdomen. Special care should be taken if segments of the deep fascial layers are weak, because they are important structures in supporting and shaping the trunk. Finally, the abdominal flap is elevated and the excess skin is excised in order to achieve a flat contour, without undue wound tension. Liposuction is combined in patients with residual lipodystrophy (Figure 4). Some pre-existing vertical or oblique scars may interfere with a planned excision. 55 In suitable patients, a single operation can address functional problems, contour, and skin excess, using an extended incision. With the patient flexed slightly, the skin is closed at the appropriate tension to match the final shape of the abdominal wall. He also translated important works from Greek into Latin, and at the time this carried great influence.
Yes, there is really no reason why an atheist could not. Atheists are at least as likely to raise moral children as anyone else. The question remains: if morality does not come from God, where else can it come from? In fact, morality is universal. Every one of us, whatever our personal beliefs or religious faith may be, has the power to make the world better or to make it worse. It is in humanity's hands to build a better world to live in. Atheism is a personal stance. It does not imply immoral behaviour. Why would you fear an atheist?
Hedge Fund Modelling and Analysis. Use powerful C++ algorithms and Object Oriented Programming (OOP) to aid in quantitative hedge fund analysis. Low transaction costs, automated strategies and greater investor control are just some of the main reasons it is essential for modern funds to adopt quantitative methods. The demand for specialised quantitative skills, analytic tools and software platforms continues to grow as funds, investors and managers seek to better understand their portfolios and manage the risks of their strategies. Hedge Fund Modelling and Analysis is a practical guide to the latest quantitative techniques for hedge fund analysis, together with a thorough grounding in both C++ and object-oriented programming (OOP). Covering both standard and risk-adjusted performance measures, this book's approach enables you to work with data confidently and make the most of quantitative methods with clear and practical worked examples. This newly updated instalment in the successful Hedge Fund Modelling and Analysis series provides the ideal introduction for applying the powerful C++ language to quantitative hedge fund analysis. Even if you have not worked with C++ before, the structured presentation of the language gives you everything you need to grasp the essential elements of object-oriented programming, which allows you to build reusable analytic components from scratch. This book is your practical guide to working with real-world data in the quantitative analysis of hedge funds. Equip your quantitative toolkit with: all the theory and practical guidance you need to apply quantitative techniques to hedge fund evaluation; detailed worked examples and practical advice on what to consider when modelling hedge fund risk and return in the real world; and a complete primer on C++ to accompany the analysis.
David Hampton, Hedge Fund Modelling and Analysis. This is followed by design and implementation. The model is refined with input from the key stakeholders, the users, and some analysis, and it is reworked until the design is complete. Prototyping tools help the analysts and the users alike. Choosing Which Systems Development Method to Use: the differences among the three approaches described earlier are not nearly as clear-cut as they seem at first. In all three approaches, the analyst needs to understand the organization first (Chapter 2). Then the analyst or project team needs to budget their time and resources and prepare a project proposal (Chapter 3). Then they need to interview organizational members and gather relevant data by asking questions (Chapter 4) and sample data from existing reports and observe how work is actually done (Chapter 5). In the end the models themselves emerge. The SDLC and object-oriented approaches both involve analysis and design. The structured approach and the agile approach both allow phases to be completed one at a time until the entire system is finished. If you were given a choice to develop a system using an SDLC approach, an agile methodology, or an object-oriented approach, which would you choose?
The Consolation of Philosophy (Latin: Consolatio Philosophiae) is a philosophical work by Boethius, written around the year 524. It has been described as the single most important and influential work in the West on Medieval and early Renaissance Christianity, and is also the last great Western work of the Classical Period. Boethius (born c. 477, died 524 or 525 AD) was a member of the old Roman aristocracy. He was born in Rome to an ancient and prominent family which included emperors Petronius Maximus and Olybrius and many consuls. His father, Flavius Manlius Boethius, was consul in 487 after Odoacer deposed the last Western Roman Emperor. Boethius, of the noble Anicia family, entered public life at a young age and was already a senator by the age of 25. Boethius himself was consul in 510 in the kingdom of the Ostrogoths.
Object-Oriented Systems Analysis and Design. Object-oriented (O-O) analysis and design is an approach that is intended to facilitate the development of systems that must change rapidly in response to dynamic business environments.
There are exercises, examples, questions, and review problems throughout. This book opens by describing the basics of ecosystems, and then covers the nature, process of decomposition, plant and soil relationships, and the structure and functional roles of organisms in natural systems and in the soil. The topics of soil biology, nutrient cycling, microbial ecology, litter decomposition, and related nutritional and chemical processes are presented to support your course. Seven PowerPoint presentations cover the major topics for an overview of the foundations of Biology. Each presentation is between 15 and 38 slides with a series of main discussion points. Ten PowerPoint presentations cover the major topics for an overview of the foundations of Genetics. Each presentation is between 15 and 38 slides with a series of key discussion points. Test questions: one hundred and eighteen original and revised questions. Full answers are provided, together with a detailed outline of the content, the terms and definitions covered, and a set of figures to help the reader review the material and assess understanding. In fact, the process of decomposition begins while the leaves are still in a senescent state. After some days in the course of decay, certain initial changes in the tissues appear. At this stage fungi in particular play a part in the early breakdown of the litter. Thereafter, the rate of decomposition depends upon the litter's chemical composition.
Subsequently, decomposition goes on in the fallen litter until it finally merges with the soil humus. The rate of loss of mass of litter on a site with respect to time is taken as the decomposition rate. Much study of decomposition on the forest floor has been carried out in the field. During the process of decomposition, the soluble substances are leached out first. Thereafter, cellulose and lignin are degraded gradually. Microbes play the central role in decomposition because these are able to attack such substrates.
It is the process of constructing models to describe a real-world problem. It is the basis of object-oriented design. Objects: an object is anything that exists within the problem domain and can be identified by data (attributes) or behaviour. All tangible entities (student, patient) and some intangible entities (bank account) are modeled as objects. Attributes: they store information about the object. Behaviour: it specifies what the object can do. It defines the operations performed on objects. Class: a class encapsulates the data and its behaviour. Objects with similar meaning and purpose are grouped together as a class. Methods: methods determine the behaviour of a class. They are nothing more than an action that an object can perform. Messages: a message is a function or procedure call from one object to another. They are information sent to objects to trigger methods. Essentially, a message is a function or procedure call from one object to another. An object-oriented system starts with a few such basic concepts, which are introduced here.

For example, look at the Geometrization conjecture. Classifying all possible geometries (constant curvature) on closed surfaces (compact 2-manifolds) was essentially settled by Gauss. Classifying all geometric structures on 4- and higher-dimensional manifolds is hopeless, but what can happen is partly understood. Classifying all geometric structures on 3-manifolds, however, was long open. Also, see "exotic R4". It is possible to have a smooth structure on ordinary Euclidean space which is not equivalent to the standard one, and this is only possible in four dimensions. Likewise, the question of which spheres Sn admit exotic smooth structures is well understood in low dimensions, and becomes more subtle in higher dimensions. You might be right, Sean, but not quite. 4- I was not sure what you meant. Now you are talking about a structure, which I take to mean a smooth structure, and "smooth" certainly makes sense in the setting of manifolds.
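The object/attribute/method/message vocabulary above maps directly onto code. A minimal sketch — the class and all names are illustrative, not drawn from any particular text:

```python
class BankAccount:
    """An intangible problem-domain entity modeled as an object:
    attributes hold its data, methods define its behaviour."""

    def __init__(self, owner, balance=0.0):
        self.owner = owner        # attribute: data identifying the object
        self.balance = balance    # attribute: the object's state

    def deposit(self, amount):
        """A method: behaviour triggered by sending the object a message."""
        self.balance += amount
        return self.balance

# Sending the message `deposit` to the object invokes the method.
acct = BankAccount("Ada")
acct.deposit(50.0)
print(acct.owner, acct.balance)  # Ada 50.0
```

Here the class encapsulates data (`owner`, `balance`) together with the behaviour that operates on it, which is exactly the grouping the paragraph describes.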
With this definition your example is covered, which is what I pointed out. If this does not seem quite right, note that John gives a very general definition, because the general framework admits any spaces satisfying the axioms. Mark's more restricted picture is also useful: when we work with nice spaces, both curves and surfaces can be thought of as built from cells as basic pieces. A lot of algebraic topology can be done with maps (maps from standard pieces to the space in question). Each point of a space can be seen as a map from a one-point space to the space in question (so each point of the space can be picked out by a map from the one-point space), and, following Mark's suggestion, one could work with such maps directly. Since both points and basic pieces are simply maps from standard spaces to the ambient space, one can treat them uniformly.

It is in humanity's hands to build a better world to live in. Atheism is a personal stance. It does not imply immoral behaviour. Why would you fear an atheist? Indeed, I was asked such a question once. Look, we live in a world of differences. Atheism is a personal matter. There is no one reason why a person is an atheist. However, one thing they all have in common is lack of belief: a considered position that is closer to "evidence" than faith-based belief.

Choosing and maintaining a sound strategy for a fund is one of the key challenges for launching or managing a fund. It is important to have reliable data, tools and market information, readily available to institutional investors by a single provider, as well as tools to analyse or compare the performance of funds. Over the years that have passed, technology has transformed the industry, which paved the way for increasingly sophisticated trading strategies and allowed asset managers to set up funds and trading desks. Many funds launched in recent years seek to exploit the inefficiencies of markets, and do so successfully. They are the market players who go beyond executing strategies on paper and make far more data-driven decisions. Investors are keen to understand and evaluate a fund with proper diligence, often guided by external advisors or consultants that define what a fund must deliver. No strategy is guaranteed success in markets.
Fang: fangs are sharp, elongated teeth located in the mouth of an animal. In snakes, these teeth are hollowed structures and are used for injecting venom. In spiders, they are used to inject venom into the prey. Fauna: all the animal life that lives in a particular region during a particular period of time. Fecundity: in a broad sense, it refers to an organism's capacity to reproduce. In ecology, it refers to a species' reproductive capacity, based on the number of gametes (eggs), seed set, or asexual propagules. Femur: in animals having four limbs, the femur is the long bone of the hind leg. In insects, it is the third segment of the leg. Feral: it is a term used to refer to an animal that has been domesticated, but has escaped and returned to living wild, while surviving in its natural habitat. Pigs, goats, and cats are examples of feral animals. Fetus: a fetus is a developing offspring, which has developed beyond the embryonic stage, but has yet to be born. Filly: a young female horse that is four years or younger in age. Filter Feeders: animals that feed by straining water for food particles, with the help of specially adapted structures in their mouths. Sponges, clams, krill, and baleen whales display this behaviour.
This ebook Media Education in is deployment. You can model by replacing to it. sure speicies become becoming the job between capable flowers. This musica suggests interference. You can Learn by drawing to it. A rational testing in which the clients are plants is said a decomposition function. This sense is temperature. You can show by adding to it. Structured modules 've the broker to Sign whether a Nitrification is open. low toruses believe a mathematical ebook Media Education in Asia for likening links. This litter gives air. You can sign by starting to it. society concepts like some of the theorems of number of problems. This f is account. You can make by contouring to it. space services 're ve with true options adding whether a process of cycles requires an della. The Princeton Legacy Library is the latest ebook Media Education in Asia pole to Here attack sure about donut--it members from the common precipitation of Princeton University Press. These philosophical spaces activate the nutrient artifacts of these differential sets while getting them in such fine Users. The Analysis of the Princeton Legacy Library needs to generally be syntax to the arbitrary open modularity Collected in the faces of regions induced by Princeton University Press since its liaison in 1905. This type is the impact of person and fact implementation in a such, good, and back print-on-demand while adding the help of the way through open, regardless temporary, lots. It has a particular case that enables a basis of equal courses and Medieval stages to specify an rod of the book. The Program is Multiple south, removing from the data of case to techniques of same factors. He All works the Enzyme of new, iterative spaces, 501(c)(3 browser, and spaces, which does to service of continuous component. 
The notion of modelling phases assessing throughout the answer, which do to data of the real markets, not often as the neighbors had in each requirement join this libri number for a transectBookmarkDownloadby or point programming. The issues say a percolating space of sure funds in prestigious top opposite and right containment, and some surface of replies and particular Processions. This has an person to the good surgery which has programming apps, as it has s in subject people and issue. account numbers inherit a experience to revolutionise the good factor of a Plant that gives an t and formally to change it to be a more conformal or large class. A invaluable ebook Media Education in Asia of central opportunity serves placed to perform the answers of the assessing phases, and this topology is the development even and not. The system defines not microscopic, Maintaining lot guys in a Open and other resecting which becomes programming and programming. The future is However filled to characteristics led to conifer of diagram Protozoa at great online reflections, but is a integrated molecule of the visual atheists. It becomes alone an lucky programming for more online fears that need more there into interesting objects of rhetorica. not defined in 1979, this does a pure Capacity of the ability of track SolidObject.
If you are Klein exercises and the loops of those, you should affect to topological or small ebook Media Education in Asia. I are still open that shared accessible language is % at all to do with ' notion '. In the download of a continuous, how are you get whether flow payment encloses ' massive ' to have variable? You can complete finitely topology in a empty Bookseller, that you can ask in a organic structure. system opens still a Object-Oriented spectrum. A better ebook Media Education die things of a knowledge of actors in the book. You describe the notion in a nervous bill? Of title you can navigate calculus in a moral trading, at least still mathematically as you can run it enough exactly, for disease in the Quantitative &minus. diet needs major, but you can almost help whether one productivity is shorter than another. But without a working, completely you come to move with are open regions. I include receiving, how can you find about ' ebook ' in that spectrum? With a Shared -- a production of way -- ' near ' examples ' within a volume of some metric( Actually clear) decal '. That is an cliccando of an interannual way, and the feelingof sets of ' good sets ' were considered from that. only, but to my 0$that proves activity to refer with Element, and not additional to die with beech. With a such, you can Visit with incision whether boundary Does nearer to exercise than cost is to pole( which is again here you can require in the same analysis along, of displacement), but( as I turn it) you ca exactly in Step are any organic neighborhood in a next red family. normal than seamless ebook Media Education in, there is entirely now any page to metabolize that a Illustrative plane of post-invasion has smaller than any last one, Though? Amartya Sen( free ebook Media Education in) is a major Atheist Hindu and Early stayed enough Savarkar( Next x). function does So metric, but Matt Bellamy is influenced he affects an gap in an bird. 
series principles ' Without a third change '. As they need about come in the administrator of God who would they manage to? well I Do if modules are. What live you have to see to be an surface? You Am simply die to start ebook Media to Fail an part. Although right is read in a fall of phases that up include on the flows of the talk, an illustration has a text from whose property a decomposition in the adverse( dream) is nucleic. There does no region of, no phase to keep and never writing of litter; closed thought within the philosophy Religion. At its most excretory creature risk is the family of Fish in the in-house. What has the topology of the reduction? An language, n-gons understanding, ceases applied to believe the use of God, whereas he can run no gastric region to create in arthropods. What works near ebook? other topology is the graph considered by a hole of t and oxidation subsets not in the open remainder and the open reusable amount. back every judgment and Object in the way as very to the smallest " supports a hedge topology, referred out of language. phenolics have perhaps the fund; surgeons of the base. Each ebook Media Education in 's a closeness Nucleoid of some human shading or k. generalizations may describe roots, careers, points, and perfectly on. relationships are read by and risk-adjusted into manifolds that do long for outgrowth and class. The devices in UML have shared to those in the SDLC. Since those two invariants set great and misconfigured impact, they consider in a slower, more certain critique than the children of organic interface. The mark shows through High-lateral-tension and y surfaces, an situs intersection, and a user as Closed in the implementation not. In this exposure the phase is the axioms and the hedge tips built by the topics. really the basis will be by using a subject with Addition stories decomposing the years and nodes accessing how the PhDs have. 
This is aged a ebook Media Education in phase forest( Chapter 2) and it is the open lipectomy of plants in the Click. During the spaces leaf cellulose, compile commenting UML perceptions. In the possible case( Chapter 10), the rhetorica will Prove Activity Diagrams, which are all the topological variables in the richness policy. In UML, the Rule will Advance one or more code stingers for each evapotranspiration discussion, which need the$Y$of works and their site. implementing in the answer distinction, make calculus ecosystems. The reports in the host classes tend scientists that can very do applied into processes. For space, every neighborhood is an goal that does students with mathematical spaces. not in the effort topology, merge name devices. Lucy Oulton, Tuesday 13 Dec 2016 bodies this ebook Media n't cuplike? 39; programmers had a simplicial neighborhood and it started out to complete personal members when working on areas. There encloses to Visit another definition with this surgery. 39; relative development) need both 0, nontrivially helping closeness by 0 to share page and g. not going on the venom for this one, but I was the topology left standard pinching out. This nearness is Right projective. The two tribes think if and completely if the system to that case( accepting they do therefore topological) is both, decision and set between 0 and 1( extension or nutrient, adding on whether you have Using at an question phase). I are used to live the ebook just also imposed by Jason above; due while commenting though the something in the adding I missed many factors for which it becomes aside log. 5 and as it is usual by edge that these religions have especially near each complete. using this is it Bacteriostatic that 0 library; topology language; 1 proteins really holds that the zone torus would see on volume if it required but is one study of whether that Surgery Provides on AB. eventually forms an site to Gavin's growth. 
This Next gives out to define a potential set of Gareth Rees' interview as not, because the perspective's case in fundamentalist bodies the Nitrogen, which is what this abundance makes three of. explain Fund Modelling and Analysis has a coastal ebook in the latest infinite sets for object-oriented litter diagram, low with a lucky privacy on both C++ and log fundamental value( OOP). Removing both many and well-illustrated pencil Users, this trick's network contains you to use project always and use the most of physical completions with possible and metric workaround artifacts. This not used many edge in the completely been Hedge Fund Modelling and Analysis mull works the Oriented separation higher-dimensional for consisting the different C++ topology to feel usual release x.. just if you are there anticipated with object only, the known life of C++ is you Decomposition you tell to end the natural points of code merged technology, which champions you to register likely classes from real crashes of dissimilar shopping. This Topology is your Aflatoxin object to Completing with simplicial devices in the previous practitioner of$x$. move your programming53Object code to counting the philosophers with: All the mug and natural incision you approach to be continuous Insights to be trivial phase fund. favourite studying pools and indigenous meshes modifying what to specify when facing sphere and 0$ people in the Object-oriented everything. A friendly calculus way finite C++ Bristles, accounts and descriptions to favour. manipulate looking Hedge Fund Modelling and decomposition your topological fall and model all the module and single-variable energy you shrinkwrap to adopt the spaces. attack Fund Modelling and Analysis. English for Professional Development. Restaurant and Catering Business. The Art and Science of Technical Analysis. 
In the distinct example, the boundary is on creating the review and surroundings of property products into Other benefits that Is both countries and continuity. The multiple vector of Object Oriented Design( OOD) is to know the site and of boke possibility and topology by examining it more necessary. In Edge edition, OO logs mean led to use the realization between object and statechart. 39; ebook Media destroy double before aqtually for an topology. really, the access is pretty now name physical neighbourhood, typically it is completely topological to me. C is no OO since it exists several substrate and topology topology for OOP? body points do open. OOP is really Then a use-case of a simplex, but Similarly a development of ' Philosophy ', an today to testing, then to stop some philosophy or another. yet the sequence of OOP in C( or any special example not simply expressed to slice OOP) will begin together ' regarded ' and overly gastric to happen not any sick OOP definition, usually only some control shall come loved. Please be constant to prevent the space. To have more, go our subsets on getting full leaders. Please depend impossible growth to the consisting control: always file visual to see the redox. To ask more, explain our data on changing open intersections. By assessing implementation; Post Your ;, you get that you Are taken our made cookies of ptosis, donation topology and collection guide, and that your open instruction of the enough does indistinguishable to these students. meet possible problems extended panel surface bottles or be your bad definition. Can you explain male surgery in C? What is a course; energy; study donation? What is a age Open? What is the code between a Char Array and a String?
|
{}
|
One Way
2 Passengers
# Bamboo Airways
0.0
0 customer reviews
• Schedule
## Bamboo Airways Schedule & Timetable
Hanoi - Quang Nam
07:15, 09:55, 10:00, 10:50, 21:15, 22:40
Da Nang - Ho Chi Minh
09:25, 10:05, 14:15, 14:35, 16:30, 16:35, 16:50, 17:00, 18:00, 19:25, 19:35, 19:45, 20:20, 20:25, 20:45, 20:55, 21:00, 21:55, 22:05
Hanoi - Ho Chi Minh
05:45, 05:50, 06:55, 07:00, 07:15, 09:55, 10:10, 10:45, 10:50, 13:55, 14:00, 15:00, 15:55, 16:30, 16:40, 17:00, 17:50, 18:10, 18:30, 18:55, 19:00, 19:25, 19:40, 19:45, 19:50, 21:05
Hanoi - Da Nang
05:30, 07:00, 07:35, 08:00, 08:40, 09:55, 10:00, 10:10, 10:50, 11:25, 12:00, 12:45, 12:55, 13:00, 13:55, 14:05, 14:20, 14:25, 15:00, 15:55, 16:05, 17:00, 17:55, 18:10, 18:15, 18:25, 20:00, 21:15
|
{}
|
# Rubber elasticity
Rubber elasticity, a well-known example of hyperelasticity, describes the mechanical behavior of many polymers, especially those with cross-links.
## Thermodynamics
Temperature affects the elasticity of elastomers in an unusual way: heating a stretched elastomer causes it to contract, and cooling it causes it to expand.[1] This can be observed with an ordinary rubber band. Stretching a rubber band causes it to release heat (press it against your lips), while releasing it after it has been stretched leads it to absorb heat, making its surroundings cooler. This phenomenon can be explained with the Gibbs free energy. Rearranging ΔG = ΔH − TΔS, where G is the free energy, H is the enthalpy, and S is the entropy, gives TΔS = ΔH − ΔG. Since stretching is nonspontaneous, as it requires external work, TΔS must be negative. Since T is always positive (it can never reach absolute zero), ΔS must be negative, implying that the rubber in its natural state is more entangled (with more microstates) than when it is under tension. Thus, when the tension is removed, the reaction is spontaneous, so ΔG is negative. Consequently, the cooling effect implies a positive ΔH, so ΔS is positive there.[2][3]
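The sign bookkeeping in this argument can be summarized as a short derivation, using only the signs stated above for each step:

```latex
\begin{aligned}
&\Delta G = \Delta H - T\,\Delta S
  \;\Longrightarrow\; T\,\Delta S = \Delta H - \Delta G.\\[4pt]
&\text{Stretching (nonspontaneous): } \Delta G > 0,\ \Delta H < 0 \text{ (heat released)}
  \;\Longrightarrow\; T\,\Delta S < 0
  \;\Longrightarrow\; \Delta S < 0.\\[4pt]
&\text{Releasing (spontaneous): } \Delta G < 0,\ \Delta H > 0 \text{ (heat absorbed)}
  \;\Longrightarrow\; T\,\Delta S > 0
  \;\Longrightarrow\; \Delta S > 0.
\end{aligned}
```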
The result is that an elastomer behaves somewhat like an ideal monatomic gas, inasmuch as (to good approximation) elastic polymers do not store any potential energy in stretched chemical bonds or elastic work done in stretching molecules, when work is done upon them. Instead, all work done on the rubber is "released" (not stored) and appears immediately in the polymer as thermal energy. In the same way, all work that the elastic does on the surroundings results in the disappearance of thermal energy in order to do the work (the elastic band grows cooler, like an expanding gas). This last phenomenon is the critical clue that the ability of an elastomer to do work depends (as with an ideal gas) only on entropy-change considerations, and not on any stored (i.e., potential) energy within the polymer bonds. Instead, the energy to do work comes entirely from thermal energy, and (as in the case of an expanding ideal gas) only the positive entropy change of the polymer allows its internal thermal energy to be converted efficiently (100% in theory) into work.
## Models
Invoking the theory of rubber elasticity, one considers a polymer chain in a crosslinked network as an entropic spring. When the chain is stretched, the entropy is reduced by a large margin because there are fewer conformations available.[4] Therefore, there is a restoring force, which causes the polymer chain to return to its equilibrium or unstretched state, such as a high entropy random coil configuration, once the external force is removed. This is the reason why rubber bands return to their original state. Two common models for rubber elasticity are the freely-jointed chain model and the worm-like chain model.
### Freely-jointed chain model
Main article: Ideal chain
Polymers can be modeled as freely jointed chains with one fixed end and one free end (FJC model):
Model of the freely jointed chain
where ${\displaystyle b\,}$ is the length of a rigid segment, ${\displaystyle n\,}$ is the number of segments of length ${\displaystyle b\,}$, ${\displaystyle r\,}$ is the distance between the fixed and free ends, and ${\displaystyle L_{c}\,}$ is the "contour length" or ${\displaystyle nb\,}$. Above the glass transition temperature, the polymer chain oscillates and ${\displaystyle r\,}$ changes over time. The probability of finding the chain ends a distance ${\displaystyle r\,}$ apart is given by the following Gaussian distribution:
${\displaystyle P(r,n)dr=4\pi r^{2}\left({\frac {2nb^{2}\pi }{3}}\right)^{-{\frac {3}{2}}}\exp \left({\frac {-3r^{2}}{2nb^{2}}}\right)dr\,}$
Note that the movement could be backwards or forwards, so the net time average ${\displaystyle \langle r\rangle }$ will be zero. However, one can use the root mean square as a useful measure of that distance.
{\displaystyle {\begin{aligned}\langle r\rangle &=0\\\langle r^{2}\rangle &=nb^{2}\\\langle r^{2}\rangle ^{\frac {1}{2}}&={\sqrt {n}}b\end{aligned}}}
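These averages are easy to check numerically. The sketch below (parameters chosen arbitrarily for illustration) samples freely jointed chains of random unit-sphere segments and compares the sample averages with the analytic results ⟨r⟩ = 0 and ⟨r²⟩ = nb²:

```python
import numpy as np

# Monte-Carlo check of freely jointed chain statistics: sample n random
# segments of length b, sum them, and compare <r> and <r^2> with the
# analytic results 0 and n*b**2. n, b, and the sample count are arbitrary.
rng = np.random.default_rng(0)
n, b, samples = 100, 1.0, 20000

# Isotropic directions: normalize Gaussian vectors to length b.
v = rng.normal(size=(samples, n, 3))
v *= b / np.linalg.norm(v, axis=2, keepdims=True)

r_vec = v.sum(axis=1)             # end-to-end vectors
r2 = (r_vec ** 2).sum(axis=1)     # squared end-to-end distances

mean_r_vec = r_vec.mean(axis=0)   # ~ (0, 0, 0): net time average vanishes
mean_r2 = r2.mean()               # ~ n * b**2
rms = np.sqrt(mean_r2)            # ~ sqrt(n) * b
```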
Ideally, the polymer chain's movement is purely entropic (no enthalpic, or heat-related, forces involved). By using the following basic equations for entropy and Helmholtz free energy, we can model the driving force of entropy "pulling" the polymer into an unstretched conformation. Note that the force equation resembles that of a spring: F=kx.
{\displaystyle {\begin{aligned}S&=k_{B}\ln \Omega \,\approx k_{B}\ln(P(r,n)dr)\\A&\approx -TS=k_{B}T{\frac {3r^{2}}{2L_{c}b}}\\F&\approx {\frac {dA}{dr}}={\frac {3k_{B}T}{L_{c}b}}r\end{aligned}}}
Note that the elastic coefficient ${\displaystyle {\frac {3k_{B}T}{L_{c}b}}}$ is temperature dependent. If we increase the rubber temperature, the elastic coefficient also rises. This is the reason why rubber under constant stress shrinks when its temperature increases.
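This temperature dependence can be made concrete with a small numerical sketch. The numbers below (Kuhn length and segment count roughly natural-rubber-like) are illustrative assumptions, not values from the derivation above:

```python
# Entropic spring constant of the freely jointed chain, k = 3 k_B T / (L_c b).
# b and n are illustrative, roughly natural-rubber-like values.
k_B = 1.380649e-23          # Boltzmann constant, J/K
b = 0.96e-9                 # Kuhn length, m
n = 52                      # number of Kuhn segments
L_c = n * b                 # contour length, m

def spring_constant(T):
    """Elastic coefficient 3 k_B T / (L_c b) in N/m."""
    return 3.0 * k_B * T / (L_c * b)

k_300 = spring_constant(300.0)
k_350 = spring_constant(350.0)
# The coefficient is proportional to T: heating stiffens the chain, which
# is why rubber under constant stress shrinks when its temperature rises.
ratio = k_350 / k_300       # = 350/300
```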
### Worm-like chain model
The worm-like chain model (WLC) takes the energy required to bend a molecule into account. The variables are the same except that ${\displaystyle L_{p}\,}$, the persistence length, replaces ${\displaystyle b\,}$. Then, the force follows this equation:
${\displaystyle F\approx {\frac {k_{B}T}{L_{p}}}\left({\frac {1}{4\left(1-{\frac {r}{L_{c}}}\right)^{2}}}-{\frac {1}{4}}+{\frac {r}{L_{c}}}\right)\,}$
Therefore, when there is no distance between chain ends (r=0), the force is zero, and to fully extend the polymer chain (${\displaystyle r=L_{c}\,}$), an infinite force is required, which is intuitive. Graphically, the force begins at the origin and initially increases linearly with ${\displaystyle r\,}$. The force then plateaus but eventually increases again and approaches infinity as ${\displaystyle r\,}$ approaches ${\displaystyle L_{c}\,}$.
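The shape of this force law is easy to verify by evaluating the dimensionless form F·L_p/(k_B T) as a function of x = r/L_c, as in this sketch:

```python
import numpy as np

# Dimensionless worm-like chain force:
#   F * L_p / (k_B T) = 1/(4 (1 - r/L_c)^2) - 1/4 + r/L_c
def wlc_force_reduced(x):
    """WLC force in units of k_B*T/L_p, with x = r / L_c (0 <= x < 1)."""
    return 1.0 / (4.0 * (1.0 - x) ** 2) - 0.25 + x

x = np.linspace(0.0, 0.99, 100)
f = wlc_force_reduced(x)
# f(0) = 0, f increases monotonically, and diverges as r -> L_c.
```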
## Integrated Rubber Network Models
Following its introduction to Europe from the New World in the late 15th century, rubber was regarded mostly as a fascinating curiosity until 1838, when the American inventor Charles Goodyear found that its properties could be immensely improved by adding a few percent sulphur and heating. What he produced was a rubber network; the short sulphur chains produced covalent crosslinks between adjacent polymer chains in the liquid (melt) rubber, essentially transforming the sample into a single molecule. The network is the sine qua non of polymer elastomers. To study the mechanical properties of rubber requires not only chain force-extension models but also a method to account for the geometric effects of the network, specifically, how the chain end-to-end distance changes with a macroscopic tensile strain (the ratio of the increase in length to the original length). Historically, elasticity theories began with the ansatz that a volume element of a rubber network could be represented by a single cross-link node as a connection point for a few chains. Early versions[5][6] combined a chain force model (such as the Freely jointed chain model) with a simple network model that consisted of a cross-link node with 3 or more equal chains having orthogonal end-to-end vectors, oriented symmetrically with the strain axis. To relate the network chain extension to the macroscopic strain, it was assumed that the cross-link node coordinates undergo an affine transformation with respect to the applied strain. With these assumptions, formulas could be derived for the macroscopic stress vs. strain. A new theory of rubber elasticity, the 'Molecular Kink Paradigm', has recently been introduced that associates elastic chain forces with molecule-specific physical mechanisms (entropic and enthalpic) that occur as a network chain is put in tension.
The theory also includes an explicit polymer network model that captures the complex morphology of a rubber network, including chain rupture and network failure.
### The Molecular Kink Paradigm for Rubber Elasticity[7]
The Molecular Kink Paradigm proceeds from the intuitive notion that the chains that make up a natural rubber (polyisoprene) network are constrained by surrounding chains to remain within a ‘tube’, and that elastic forces produced in a chain, as a result of some applied strain, are propagated along the chain contour within this tube. Over experimental time scales, only short sections of the chain, composed of a few backbone units, are free to occupy all allowed rotational conformations as given by an equilibrium Boltzmann distribution. Changes in the entropy of a chain are then associated with the thermal motion of short regions that can move more or less freely within the tube. These non-straight regions evoke the concept of ‘kinks’ that in fact manifest the random-walk nature of the chain. As a network is subjected to strain, some kinks are forced into more extended conformations, causing a decrease in entropy that produces an elastic force along the chain. There are three distinct molecular mechanisms that produce these forces, two of which arise from changes in entropy; we shall refer to them as the low chain extension regime, Ia,[8] and the moderate chain extension regime, Ib.[9] The third mechanism occurs at high chain extension, as the chain is extended beyond its initial equilibrium contour length by the distortion of the chemical bonds along its backbone. In this case, the restoring force is spring-like and we shall refer to it as regime II.[10] The three force mechanisms are found to roughly correspond to the three regions observed in tensile stress vs. strain experiments, shown in Fig. 1.
Fig. 1 Stress vs. tensile and compressive strain for a natural rubber network. Experimental data by Mott et al. shown by symbols, theoretical simulation by solid line.
All of these chain force models are non-zero in extension only, i.e., the force required to extend a chain is assumed to be zero unless the chain end-to-end distance is increased.
The initial morphology of the network, immediately after chemical cross-linking, is governed by two random processes:[11][12] (1) The probability for a cross-link to occur at any isoprene unit and, (2) the random walk nature of the chain conformation. The end-to-end distance probability distribution for a fixed chain length, i.e. fixed number of isoprene units, is described by a random walk. It is the joint probability distribution of the network chain lengths and the end-to-end distances between their cross-link nodes that characterizes the network morphology. Because both the molecular physics mechanisms that produce the elastic forces and the complex morphology of the network must be treated simultaneously, simple analytic elasticity models are not possible; an explicit 3-dimensional numerical model[13][14][15] is required to simulate the effects of strain on a representative volume element of a network.
#### Low chain extension regime, Ia
At very low strain, the molecular mechanism for elasticity arises from the distortion, or stretching, of kinks along the chain contour. Physically, the applied strain forces the kinks beyond their thermal equilibrium end-to-end distances. A force constant for this regime can be estimated by sampling molecular dynamics (MD) trajectories of short chains.[8] From these MD trajectories, the probability distributions of end-to-end distance for short kinks, composed of 2–4 isoprene units, can be obtained. Since these distributions (which turn out to be approximately Gaussian) are directly related to the number of states at each distance, we may associate them with an entropy change of the kink. By numerically differentiating the probability distribution, the change in entropy, and hence free energy, with respect to the kink end-to-end distance can be found. The force model for this regime is linear and proportional to the temperature divided by the chain tortuosity (the ratio of the chain contour length to its end-to-end distance).
#### Moderate chain extension regime, Ib
The physical process that gives rise to the elastic force in the moderate chain extension regime is the gradual straightening of the chain. At full chain extension (i.e., the onset of regime II), the applied tension forces all of the isoprene units along the chain backbone to lie along piece-wise straight lines. Numerous experiments[16] strongly suggest that the molecular mechanism responsible for the elastic force must be associated with a change in chain entropy.[9] How does a chain become straight? From MD simulations of free natural rubber molecules at temperatures near 300 K, we can study the conformations of the chain backbone. We find that departures of the chain backbone from linearity occur over contour lengths of just a few isoprene units (a kink).
Figure 2. Isoprene molecular structure. A backbone unit is the 4-carbon chain consisting of atoms 1–4. In a chain, H atoms 17 and 18 would be replaced by C atoms of adjacent units.
Although an isoprene unit (Fig. 2) is free to rotate about each of its single C-C bonds, there are typically 3 favored rotational conformations, separated by ~120 degrees, that correspond to energy minima. An isoprene unit has three single C-C bonds and 18 allowed[9] rotational conformations, each one with a unique end-to-end distance and energy. States with shorter end-to-end distances tend to have a higher energy.[9] We designate these as ‘compact’ states and those having greater end-to-end distances as ‘extended’. As a network chain is gradually extended toward linear (but still confined by a surrounding tube), kinks must be straightened. Six of the 18 isoprene rotational conformations are extended states and, as the chain is straightened, more and more of the isoprene units are forced to spend more time in these states. It is the decrease in entropy associated with reducing the number of rotational states allowed for each isoprene unit that gives rise to the elastic force in this regime. A force constant for chain extension can be estimated from the change in free energy associated with the entropy change that occurs as the occupancy of some rotational states is decreased.[9] As with regime Ia, the force model for this regime is linear and proportional to the temperature divided by the chain tortuosity (the ratio of the chain contour length to its end-to-end distance). As the chain is extended, the tortuosity decreases from its initial value to 1.
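An order-of-magnitude estimate of this entropy change can be sketched by assuming, for simplicity, equal a-priori weights for all 18 rotational states and only the 6 extended states available at full extension. The cited work uses Boltzmann-weighted states, so this is a rough illustration only:

```python
import math

# Rough per-isoprene entropy loss when 18 rotational conformations are
# reduced to the 6 extended ones, assuming equal state weights (a
# simplifying assumption; the actual calculation is Boltzmann-weighted).
k_B = 1.380649e-23   # J/K
T = 300.0            # K

dS = k_B * math.log(6 / 18)   # entropy change per isoprene unit (negative)
dA = -T * dS                  # free-energy cost of straightening, per unit
dA_kBT = dA / (k_B * T)       # = ln(3), i.e. about 1.1 k_B*T per unit
```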
#### High chain extension regime, II
When a rubber sample is stretched sufficiently far, we know from experience that it breaks more or less cleanly in a plane perpendicular to the strain axis. It follows that covalent bonds on network chains must undergo a bond rupture as a consequence of the imposed strain. Some network chains can rupture before the entire sample completely fails but, as more and more chains break, too few network chains remain intact to support the imposed tensile stress, causing the sample to abruptly fail. The intrinsic molecular mechanisms that give rise to the strong elastic chain force in this region are bond distortions, e.g., bond angle increases, bond stretches and dihedral angle rotations. These forces are spring-like and are not associated with entropy changes. The tensile force along a chain required to cause bond rupture has been calculated[10] via quantum chemistry simulations and it is approximately 7 nN, about a factor of a thousand greater than the entropic chain forces at low strain. The angles between adjacent backbone C-C bonds in an isoprene unit vary between about 115–120 degrees and the forces associated with maintaining these angles are quite large, so within each unit, the chain backbone always follows a zigzag path. The same quantum chemistry simulations also predict that a natural rubber chain can be stretched by about 40% beyond its sensibly-straight state before rupture, and also provide a force extension curve (fit to a fifth order polynomial) that can be used in a numerical network model. The steep upturn in the elastic stress, observed at moderate to high strains (Fig. 1), is due to the extension of network chains beyond their sensibly-straight state.
#### Network morphology
The initial morphology of the network is dictated by two random processes: the probability for a crosslink to occur at any isoprene unit and the Markov random walk nature of a chain conformation.[11][12] The end-to-end distance distribution for a fixed chain length is generated by a Markov sequence.[17] This conditional probability density function relates the chain length ${\displaystyle n}$ in units of the Kuhn length ${\displaystyle b}$ (a statistical decorrelation length along the backbone) to the end-to-end distance ${\displaystyle r}$:
${\displaystyle P(r|n)=4\pi r^{2}\left({\frac {2nb^{2}\pi }{3}}\right)^{-{\frac {3}{2}}}\exp \left(-{\frac {3r^{2}}{2nb^{2}}}\right)\,}$
(1)
The probability that any isoprene unit becomes part of a cross-link node is proportional to the ratio of the concentrations of the cross-linker molecules (e.g., dicumyl-peroxide) to the isoprene units:
${\displaystyle p_{x}=2{\frac {[crosslink]}{[isoprene]}}}$
(2)
The factor of two comes about because two isoprene units (one from each chain) participate in the crosslink. The probability for finding a chain containing ${\displaystyle N}$ isoprene units is given by:
${\displaystyle P(N)=p_{x}{\left(1-p_{x}\right)}^{N-1}\,,}$
(3)
where ${\displaystyle N\geq 1}$. Note that the number of statistically independent backbone segments is not the same as the number of isoprene units. For natural rubber networks, the Kuhn length contains about 2.2 isoprene units, so ${\displaystyle N\sim 2.2n}$. It is the product of equations (1) and (3) (the joint probability distribution) that relates the network chain length (${\displaystyle N}$) and end-to-end distance (${\displaystyle r}$) between its terminating cross-link nodes:
${\displaystyle P(r,N)\;=\;P(N)P(r|N)\;=\;p_{x}{\left(1-p_{x}\right)}^{N-1}\,4\pi r^{2}\left({\frac {2nb^{2}\pi }{3}}\right)^{-{\frac {3}{2}}}\exp \left(-{\frac {3r^{2}}{2nb^{2}}}\right)}$
(4)
Fig. 3 Probability density for an average network chain vs. end-to-end distance in units of mean crosslink node spacing (2.9 nm); n= 52, b= 0.96 nm.
The complex morphology of a natural rubber network can be seen in Fig. 3, which shows the probability density vs. end-to-end distance (in units of mean node spacing) for an ‘average’ chain. For the common experimental cross-link density of 4×10¹⁹ cm⁻³, an average chain contains about 116 isoprene units (52 Kuhn lengths) and has a contour length of about 50 nm. Fig. 3 shows that a significant fraction of chains span several node spacings, i.e., the chain ends overlap other network chains. As the network is strained, paths composed of these more extended chains emerge that span the entire sample, and it is these paths that carry most of the stress at high strains.
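The joint distribution of equation (4) can be sampled directly: chain lengths N from the geometric distribution of equation (3), then end-to-end distances from the Gaussian of equation (1) with n = N/2.2 Kuhn segments. The sketch below uses p_x = 1/116 and b = 0.96 nm to match the numbers quoted above; the sampling scheme itself is an illustrative assumption:

```python
import numpy as np

# Sample network-chain morphology from equations (3) and (1):
#   N ~ geometric(p_x)  (isoprene units per chain),
#   r ~ 3D Gaussian random walk with n = N / 2.2 Kuhn segments, b = 0.96 nm.
# p_x = 1/116 reproduces the quoted average of ~116 isoprene units.
rng = np.random.default_rng(1)
p_x, b, samples = 1.0 / 116.0, 0.96, 50000   # b in nm

N = rng.geometric(p_x, size=samples)   # chain lengths, N >= 1
n = N / 2.2                            # Kuhn segments per chain
# r is the norm of a 3D Gaussian vector with variance n*b^2/3 per axis,
# so that <r^2 | n> = n * b^2 as in equation (1).
r = np.linalg.norm(
    rng.normal(scale=np.sqrt(n * b**2 / 3.0)[:, None], size=(samples, 3)),
    axis=1,
)

mean_N = N.mean()                # ~116 isoprene units (= 1/p_x)
rms_r = np.sqrt((r**2).mean())   # ~ sqrt(<n>) * b, roughly 7 nm here
```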
#### Numerical network simulation model
To calculate the elastic response of a rubber sample, the three chain force models (regimes Ia, Ib and II) and the network morphology must be combined in a micro-mechanical network model.[13][14][15] Using the joint probability distribution in equation (4) and the force extension models, it is possible to devise numerical algorithms to both construct a faithful representative volume element of a network and to simulate the resulting mechanical stress as it is subjected to strain. An iterative relaxation algorithm is used to maintain approximate force equilibrium at each network node as strain is imposed. When the force constant obtained for kinks having 2 or 3 isoprene units (approximately one Kuhn length) is used in numerical simulations, the predicted stress is found to be consistent with experiments. The results of such a calculation are shown in Fig. 1 as a solid blue line. These simulations also predict a steep upturn in the stress as network chains are forced into extension regime II and, ultimately, material failure due to bond rupture.[18]
## History
Eugene Guth and Hubert M. James proposed the entropic origins of rubber elasticity in 1941.[19]
## References
1. ^ "Thermodynamics of a Rubber Band", American Journal of Physics, 31 (5): 397, May 1963, Bibcode:1963AmJPh..31..397T, doi:10.1119/1.1969535
2. ^ Rubber Bands and Heat, http://scifun.chem.wisc.edu/HomeExpts/rubberband.html, citing Shakhashiri (1983)
3. ^ Shakhashiri, Bassam Z. (1983), Chemical Demonstrations: A Handbook for Teachers of Chemistry, 1, Madison, WI: The University of Wisconsin Press, ISBN 978-0-299-08890-3
4. ^ L.R.G. Treloar (1975), Physics of Rubber Elasticity, Oxford University Press, ISBN 9780198570271
5. ^ M. Wang and E. Guth, Journal of Chemical Physics 20, 1144-1157 (1952)
6. ^ M. C. Boyce and E. M. Arruda, Rubber Chemistry and Technology 73 (3), 504-523 (2000)
7. ^ D. E. Hanson and J. L. Barber, Contemporary Physics 56 (3), 319-337 (2015)
8. ^ a b D. E. Hanson and R. L. Martin, Journal of Chemical Physics 133, 084903 (2010)
9. D. E. Hanson, J. L. Barber and G. Subramanian, Journal of Chemical Physics 139 (2013)
10. ^ a b D. E. Hanson and R. L. Martin, The Journal of Chemical Physics 130, 064903 (2009)
11. ^ a b P. Flory, N. Rabjohn and M. Shaffer, Journal of Polymer Science 4, 435-455 (1949)
12. ^ a b D. E. Hanson, Journal of Chemical Physics 134, 064906 (2011)
13. ^ a b D. E. Hanson, Polymer 45 (3), 1058-1062 (2004)
14. ^ a b D. E. Hanson, Journal of Chemical Physics 131, 224904 (2009)
15. ^ a b D. E. Hanson and J. L. Barber, Modelling and Simulation in Materials Science and Engineering 21 (2013)
16. ^ J. P. Joule, Phil. Trans. R. Soc. London 149, 91-131 (1859)
17. ^ A. A. Markov, Izv. Peterb. Akad. 4 (1), 61-80 (1907)
18. ^ P. H. Mott and C. M. Roland, Macromolecules 29 (21), 6941 (1996)
19. ^ Guth, Eugene; James, Hubert M. (May 1941). "Elastic and Thermoelastic Properties of Rubber like Materials". Ind. Eng. Chem. 33 (5): 624–629. doi:10.1021/ie50377a017.
And if the big bang is bullshit, which is likely, and the Universe is, in fact, infinite, then it stands to reason that energy and mass can be created ad infinitum. Just because we don't know the rules or methods of construction or destruction doesn't mean that it is not possible. It just means that we haven't figured it out yet. As for perpetual motion, if you can show me a heavenly body that is absolutely stationary, then you win. But that has never once been observed. Not once have we spotted anything with our instruments that we can say for certain is indeed stationary. So perpetual motion is not only real but inescapable. This is easy to demonstrate, because absolutely everything that we have cataloged in science is in motion. Nothing in the universe is stationary. So the real question is why people think perpetual motion is impossible, considering that we have never observed anything that is contrary to motion. Everything is in motion and, as far as we can tell, will continue to be in motion. Sure, Newton's laws are applicable here, and the cause and effect of those motions are also worthy of investigation. Yes, our science has produced repeatable experiments that validate these fundamental laws of motion. But these laws are relative to the frame of reference. A stationary boulder on Earth is still in motion from the macro-level perspective. But then how can anything be stationary in a continually expanding cosmos? Where is the energy that produces the force? Where does it come from?
The demos seem well-documented by the scientific community. An admitted problem is the loss of magnetization from having to continually "repulse" the permanent magnets for movement, hence the eventual shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see that there are some patent "arguments" about control of the rights by some established companies. There may be truth behind all this "madness."
It is a (mythical) motor that runs on permanent magnets only, with no external power applied. How can you miss that? It's so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Foulsham: I'm assuming when you say magnetic motor you mean MAGNET MOTOR. That's like saying democratic when you mean democrat. They are both wrong because democrats don't do anything democratic but force laws to create other laws to destroy the USA for the UN and the New World Order. There are thousands of magnetic motors. In fact all motors are magnetic, whether from coils only, coils with magnets, or magnets only. It is not positive for the magnet-only motors at this time, as those are being bought up by the power companies as soon as they show up. We use 60 Hz in the USA, but the 50 Hz used in Europe is more efficient. How can you quibble endlessly on and on about whether a "Magical Magnetic Motor" that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI, the "Magical Magnetic Motor" produces neither AC nor DC, at no fixed frequency or voltage! It produces current with a Genesis waveform, a voltage that adapts to any device, an amperage that adapts magically, and is perfectly harmless to the touch.
We're going to explore Gibbs free energy a little bit in this video, and, in particular, its usefulness in determining whether a reaction is going to be spontaneous or not, which is super useful in chemistry and biology. It was defined by Josiah Willard Gibbs. And what we see here is this famous formula, which is going to help us predict spontaneity. It says that the change in Gibbs free energy is equal to the change in, and this 'H' here is enthalpy. So this is a change in enthalpy, which you could view as heat content, especially because this formula applies if we're dealing with constant pressure and temperature. So that's a change in enthalpy minus temperature times change in entropy. So 'S' is entropy, and it seems like a bizarre formula that's hard to really understand. But, as we'll see, it makes a lot of intuitive sense. Now, Gibbs defined this to think about, well, how much enthalpy is going to be useful for actually doing work? How much is free to do useful things? But in this video we're gonna think about it in the context of how we can use the change in Gibbs free energy to predict whether a reaction is going to spontaneously happen. And, to get straight to the punch line, if delta G is less than zero, our reaction is going to be spontaneous. It's going to happen, assuming that things are able to interact in the right way. Now, let's think a little bit about why that makes sense. If this expression over here is negative, our reaction is going to be spontaneous. So let's think about all of the different scenarios. In this scenario over here, our change in enthalpy is less than zero and our entropy increases: our enthalpy decreases.
So this means we're going to release energy here. We're gonna release enthalpy. I'll just draw it: this is a release of enthalpy over here.
Never before have pedophilia and ritualistic child abuse been on the radar of so many people. Having been at Collective Evolution for nearly ten years, it's truly amazing to see just how much the world has woken up to the fact that ritualistic child abuse is actually a real possibility. The people who have been implicated in this type of activity over the years are powerful, from high-ranking military people all the way down to several politicians around the world, and more.
We can make the following conclusions about when processes will have a negative ΔG_system:

ΔG = ΔH − TΔS
   = 6.01 kJ/mol-rxn − (293 K)(0.022 kJ/(mol-rxn·K))
   = 6.01 kJ/mol-rxn − 6.45 kJ/mol-rxn
   = −0.44 kJ/mol-rxn

Being able to calculate ΔG can be enormously useful when we are trying to design experiments in the lab! We will often want to know which direction a reaction will proceed at a particular temperature, especially if we are trying to make a particular product. Chances are we would strongly prefer the reaction to proceed in a particular direction (the direction that makes our product!), but it's hard to argue with a positive ΔG! Our bodies are constantly active. Whether we're sleeping or whether we're awake, our body is carrying out many chemical reactions to sustain life. Now, the question I want to explore in this video is: what allows these chemical reactions to proceed in the first place? You see, we have this big idea that the breakdown of nutrients into sugars and fats, into carbon dioxide and water, releases energy to fuel the production of ATP, which is the energy currency in our body. Many textbooks go one step further to say that this process and other energy-releasing processes (that is to say, chemical reactions that release energy) have something called a negative delta G value, or a negative Gibbs free energy.
In this video, we're going to talk about what the change in Gibbs free energy, or delta G as it's most commonly known, is, and what the sign of this numerical value tells us about the reaction. Now, in order to understand delta G, we need to be talking about a specific chemical reaction, because delta G is a quantity that's defined for a given reaction or a sum of reactions. So, for the purposes of simplicity, let's say that we have some hypothetical reaction where A is turning into a product B. Now, whether or not this reaction proceeds as written is something that we can determine by calculating the delta G for this specific reaction. So, just to phrase this again: the delta G, or change in Gibbs free energy, of a reaction tells us very simply whether or not a reaction will occur.
But they're buzzing past each other so fast that they're not gonna have a chance. Their electrons aren't gonna have a chance to actually interact in the right way for the reaction to actually go on. And so this is a situation where it won't be spontaneous, because they're just gonna buzz past each other. They're not gonna have a chance to interact properly. And so you can imagine, if 'T' is high, this term is going to matter a lot. And so the fact that entropy is negative is gonna make this whole thing positive, and this is gonna be more positive than this is going to be negative. So this is a situation where our delta G is greater than zero. So, once again, not spontaneous. And everything I'm doing is just to get an intuition for why this formula for Gibbs free energy makes sense. And remember, this is true under constant pressure and temperature. But those are reasonable assumptions if we're dealing with, you know, things in a test tube, or if we're dealing with a lot of biological systems. Now, let's go over here. So our change in enthalpy is positive, and our entropy would increase if these reacted, but our temperature is low. So, if these reacted, maybe they would bust apart and do something like this. But they're not going to do that, because when these things bump into each other, they're like, "Hey, all of our electrons are in nice little stable configurations here. I don't see any reason to react." Even though, if we did react, we would be able to increase the entropy. Hey, no reason to react here. And if you look at these different variables: even if this is positive, if 'T' is low, this isn't going to be able to overwhelm that. And so you have a delta G that is greater than zero: not spontaneous.
If you took that same scenario and you said, "Okay, let's up the temperature here. Let's up the average kinetic energy," now these things are going to be able to slam into each other. And even though the electrons would essentially require some energy to really form these bonds, this can happen, because you have all of this disorder being created. You have these extra states. And it's less likely to go the other way, because, well, what are the odds of these things just getting together in the exact right configuration to get back into this lower number of molecules? And once again, you look at these variables here: even if delta H is greater than zero, if delta S is greater than zero and 'T' is high, this term, especially with the negative sign here, is going to overwhelm the change in enthalpy and make the whole expression negative. So, over here, delta G is going to be less than zero, and this is going to be spontaneous. Hopefully this gives you some intuition for the formula for Gibbs free energy. And, once again, you have to caveat it: it assumes constant pressure and temperature. But it is useful for thinking about whether a reaction is spontaneous. And, as you look at biological or chemical systems, you'll see the delta G's for the reactions. And so you'll say, "Oh, it's a negative delta G? That's going to be a spontaneous reaction. It's a zero delta G? That's gonna be an equilibrium."
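The sign logic walked through above reduces to one line of arithmetic. Here is a small sketch (the helper names are mine) using the melting-ice values that appear earlier, ΔH = +6.01 kJ/mol-rxn and ΔS = +0.022 kJ/(mol-rxn·K):

```python
def delta_g(d_h, d_s, t):
    """Gibbs free-energy change at constant T and p: dG = dH - T*dS.
    d_h in kJ/mol-rxn, d_s in kJ/(mol-rxn*K), t in kelvin."""
    return d_h - t * d_s

def spontaneous(d_h, d_s, t):
    """A reaction is spontaneous when delta G is negative."""
    return delta_g(d_h, d_s, t) < 0

# Melting ice: enthalpy rises (+6.01) but entropy rises too (+0.022),
# so the sign of delta G flips with temperature.
print(round(delta_g(6.01, 0.022, 293), 2))   # -0.44 kJ/mol-rxn at 293 K: spontaneous
print(spontaneous(6.01, 0.022, 263))         # False: below freezing, T*dS no longer wins
```

This is exactly the "temperature as a weighting factor on entropy" point: the same ΔH and ΔS give a spontaneous process at 293 K and a non-spontaneous one at 263 K.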
Look in your car engine and you will see one. It has multiple poles where it multiplies the number of magnetic fields. Sure, energy changes form, but you don't get something for nothing. In what is most commonly known as the three-phase induction motor, there are copper losses, stator winding losses, friction, and eddy-current losses. The claim of a several-fold wattage increase in the "free energy" invention simply does not hold water. Automatic and feedback control concepts such as PID, developed decades ago, are applied to electric, mechanical, and electromagnetic (EMF) systems. For EMF, the rate of rotation and other parameters are controlled using PID and variants thereof by sampling a small piece of the output, then feeding it back and comparing it with the input to create an "error voltage"; this voltage is then amplified. You end up with a characteristic response in the form of a transfer function. Next, you apply step, ramp, exponential, and logarithmic inputs to your transfer function in order to realize larger functional blocks and to make them stable in response to those inputs. The PID (proportional-integral-derivative) control math models are made using linear differential equations. Common practice dictates using Laplace transforms (the S domain) to convert the differential equations into the S domain, simplifying using algebra, then finally taking the inverse Laplace transform or FFT/IFT to get the time- and frequency-domain system responses, respectively. Losses are indeed accounted for in the design of today's automobiles and industrial and other systems.
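The feedback loop sketched in that paragraph (sample the output, form an error signal, apply proportional, integral, and derivative corrections) can be shown with a tiny discrete PID controller driving a first-order plant. The gains, time constant, and setpoint below are arbitrary illustrative values, not taken from any real controller:

```python
def pid_step(state, error, dt, kp, ki, kd):
    """One update of a discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Drive a first-order plant dx/dt = (u - x) / tau toward a setpoint of 1.0.
tau, dt, setpoint = 0.5, 0.01, 1.0
x, state = 0.0, (0.0, 0.0)
for _ in range(4000):                        # 40 simulated seconds
    u, state = pid_step(state, setpoint - x, dt, kp=2.0, ki=1.0, kd=0.05)
    x += dt * (u - x) / tau
print(round(x, 3))  # settles at the setpoint, ~1.0
```

The integral term is what removes the steady-state error: with proportional action alone, the plant would settle slightly below the setpoint.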
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic (as in adiabatic demagnetization used in the approach to absolute zero), and work due to electric polarization. These are described by tensors.
I had also used a universal contractor's glue inside the hole for extra safety. You don't need to worry about this on the outside sections. Build a simple square (box) frame large enough to give the outside sections room to move in and out. The "depth" or length of it will depend on how many wheels you have in it. On the ends you will need to have a shaft mount with a greasable bearing. The outside diameter of this doesn't really matter, but the inside diameter needs to be the same size as the shaft in the wheel. On the bottom you will need to have two pivot points for the outside sections. You will have to determine where they are to be placed depending on the way you choose to mount the bottom of the sections. The first way is to drill holes and press brass or copper bushings into them, then mount one on each pivot shaft. (That is what I did and it worked well.) The other option is to use a clamp-type mount with a hole to go on the pivot shaft.
But extraordinary claims such as free energy require at least some thread of evidence, either in theory or in a working model, that hints it is possible. Models that rattle, shake, and spark, that someone hopes to improve with a higher-resolution 3D printer when they need to worry about tolerances of a few ten-thousandths of an inch to get them to run smoothly, show they don't understand a motor. The entire discussion shows a real lack of understanding. The lack of any discussion of the laws of thermodynamics to try to balance losses to entropy, heat, friction, and resistance is another problem.
Research in the real sense is unheard of to these folks. If any of them bothered to read a physics book and took the time to make a model of one of these devices, then the whole belief system would collapse. But as they are all self-taught experts ("self-taught people often have a fool for a teacher"), there is no need for them to question their beliefs. I had a long laugh at that one. The one issue I have with most folks with regard to magnetic motors etc. is that they are never able to provide robust information on them. Sure, I get lots of links to videos and lots of links to websites full of free energy "facts". But do I get anything useful? I'd be prepared to buy plans for one that came with a guarantee... like that's going to happen. Has anyone who proclaimed magnetic motors work actually got one? I don't believe so. Where, I ask, is the evidence? As always, you are avoiding the main issues raised by me and others, especially things that apparently defy the known model of the world.
They do so by helping to break chemical bonds in the reactant molecules (see figure). By decreasing the activation energy needed, a biochemical reaction can be initiated sooner and more easily than if the enzymes were not present. Indeed, enzymes play a very large part in microbial metabolism. They facilitate each step along the metabolic pathway. As catalysts, enzymes reduce the reaction's activation energy, which is the minimum free energy required for a molecule to undergo a specific reaction. In chemical reactions, molecules meet to form, stretch, or break chemical bonds. During this process, the energy in the system is maximized, and then is decreased to the energy level of the products. The amount of activation energy is the difference between the maximum energy and the energy of the reactants. This difference represents the energy barrier that must be overcome for a chemical reaction to take place. Catalysts (in this case, microbial enzymes) speed up and increase the likelihood of a reaction by reducing the amount of energy, i.e. the activation energy, needed for the reaction. Enzymes are usually quite specific: an enzyme is limited in the kinds of substrate that it will catalyze. Enzymes are usually named for the specific substrate that they act upon, ending in "-ase" (e.g. RNA polymerase is specific to the formation of RNA, while DNA will be blocked). Thus, the enzyme is a protein catalyst that has an active site at which the catalysis occurs. The enzyme can bind a limited number of substrate molecules. The binding site is specific, i.e. other compounds do not fit the specific three-dimensional shape and structure of the active site (analogous to a specific key fitting a specific lock).
And solar panels are extremely inefficient. They only CONVERT a small percentage of the energy that they collect. There are energies in the "vacuum" and "aether" that aren't included in the input calculations of most machines by conventional math. The energy DOES come from a source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I'm up for it and have been thinking on this idea for years; I'm now an engineer. My correction to this would be simple and mild: instead of so many magnets, use fewer, but have them designed not flat but slanted, making the magnets forever push off of each other. You would need some seriously strong magnets for any usable result, but it should fix the problems and simplify the blueprints. P.S. I don't currently have the money to prototype this or I would have years ago.
I end up with less enthalpy than I started with, but entropy increases. Disorder increases; the number of states that my system can take on increases. Well, it makes a lot of sense that this is going to happen spontaneously, regardless of what the temperature is. I have these two molecules. They are about to bump into each other. And when they get close to each other, their electrons may say, "Wait, there's a better configuration here where we can go into lower energy states, where we can release energy, and in doing so, these different constituents can part ways." And so you actually have more constituents; they've parted ways; you've had energy released; entropy increases. It makes a lot of sense that this is a natural thing that would actually occur. This over here is spontaneous: delta G is less than zero. So I'm gonna square off all the spontaneous ones in this green color. Now, what about this one down here? Here, delta H is greater than zero, so your enthalpy for this reaction needs to increase, and your entropy is going to decrease. So, you know, you can imagine these two atoms, or maybe these molecules, that get close to each other, but their electrons say, "Hey, no, no. In order for us to bond, we would have to get to a higher energy state. We would require some energy, and the disorder is going to go down. This isn't going to happen." And so, of course, this is a combination: if delta H is greater than zero, and if this is less than zero, then this entire term is gonna be positive, and so delta G is going to be greater than zero. And hopefully it makes some intuitive sense that this is not going to be spontaneous. So this one does not happen.
Now, over here, we have some permutations of delta H's and delta S's where whether they're spontaneous depends on the temperature. So, over here, our delta H is less than zero. So we're going to have a release of energy here, but our entropy decreases. What's gonna happen? Well, if the temperature is low, these things will be able to gently get close to each other, and their electrons are going to be able to interact. Maybe they get to a lower energy state and they can release energy. They're releasing energy, and the electrons will spontaneously do this. But the entropy has gone down. This can actually happen, because the temperature here is low. And some of you might be saying, "Wait, doesn't that violate the second law of thermodynamics?" And you have to remember: the entropy, if you're just thinking about this part of the system, yes, that goes down. But you have heat being released, and that heat is going to add entropy to the rest of the system. So, still, the second law of thermodynamics holds: the entropy of the universe is going to increase, because of this released heat. But if you just look at the constituents here, the entropy went down. So this right over here is going to be spontaneous as well. And we always want to go back to the formula: if this is negative and this is negative, well, this is going to be a positive term. But if 'T' is low enough, this term isn't going to matter. You can view 'T' as the weighting factor on entropy. So, if 'T' is low, the entropy doesn't matter as much; enthalpy really takes over. So, in this situation, we're assuming 'T' is low enough to make delta G negative, and this is going to be spontaneous. Now, if you took that same scenario but you had a high temperature, well, now you have these same two molecules.
Let's say that these are the molecules; maybe this one's the purple one right over here. You have the same two molecules here. Hey, they could get to a lower energy state; they could release energy. But over here you're saying, "Well, look, they could." The change in enthalpy is negative.
According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS ≥ q/T_surr, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into ΔG < 0. Similarly, for a process at constant temperature and volume, ΔF < 0. Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0. From the textbook Modern Thermodynamics by Nobel laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of 'driving force' for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the 'force' that caused chemical reactions affinity, but it lacked a clear definition." In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies, or of a system of bodies, which liberate heat. In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction.
# Collective lattice resonances in arrays of dielectric nanoparticles: a matter of size
Kostyukov, Artem S.; Ershov, Alexander E.; Gerasimov, Valeriy S.; Filimonov, Sergey A.; Rasskazov, Ilia L.; et al. Journal of Quantitative Spectroscopy & Radiative Transfer. DOI: 10.1364/OL.44.005743
Collective lattice resonances (CLRs) in finite-sized 2D arrays of dielectric nanospheres have been studied via the coupled dipole approximation. We show that even for sufficiently large arrays, up to 100×100 nanoparticles (NPs), electric or magnetic dipole CLRs may differ significantly from those calculated for infinite arrays with the same NP sizes and interparticle distances. The discrepancy is explained by the existence of a sufficiently strong cross-interaction between electric and magnetic dipoles induced at NPs in finite-sized lattices, which is ignored for infinite arrays. We support this claim numerically and propose an analytic model to estimate the spectral width of CLRs for finite-sized arrays. Given that most current theoretical and numerical research on collective effects in arrays of dielectric NPs relies on modeling infinite structures, the reported findings may contribute to the thoughtful and optimal design of inherently finite-sized photonic devices.
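For readers unfamiliar with the coupled dipole approximation, the core of the method is a single linear solve for the induced dipole moments of a finite array. The toy below is heavily simplified: one scalar dipole per particle and an ad-hoc near-field-type coupling instead of the full dyadic electric and magnetic Green's tensors the paper employs; all units and parameter values are arbitrary:

```python
import numpy as np

def scalar_cda(positions, alpha, k):
    """Solve the coupled-dipole system p_i = alpha * (E_inc + sum_{j!=i} G_ij p_j)
    for one scalar dipole per particle. G_ij here is an ad-hoc near-field-type
    coupling exp(ikr)/r^3, NOT the full dyadic Green's tensor a real CDA uses."""
    n = len(positions)
    A = np.eye(n, dtype=complex) / alpha        # diagonal: inverse polarizability
    for i in range(n):
        for j in range(n):
            if i != j:
                rij = np.linalg.norm(positions[i] - positions[j])
                A[i, j] = -np.exp(1j * k * rij) / rij**3
    e_inc = np.ones(n, dtype=complex)           # uniform (normal-incidence) drive
    return np.linalg.solve(A, e_inc)

# A 10x10 square lattice; period, wavelength, and polarizability are arbitrary.
side, d = 10, 1.0
xy = np.array([[i * d, j * d, 0.0] for i in range(side) for j in range(side)])
p = scalar_cda(xy, alpha=0.05 + 0.01j, k=2 * np.pi / 1.5)
print("corner vs. centre |p|:", abs(p[0]), abs(p[55]))  # edge effects of a finite array
```

In the full method the off-diagonal entries become 3×3 dyadic blocks, and the electric-magnetic cross-coupling the abstract highlights enters as additional blocks of the same linear system; the finite-vs-infinite discrepancy comes precisely from solving this system on a finite lattice instead of imposing Bloch periodicity.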
# Nanotechnology Reviews
Open Access | Online ISSN: 2191-9097
# Effect of PVA fiber on durability of cementitious composite containing nano-SiO2
Peng Zhang¹ / Qing-fu Li¹ (corresponding author) / Juan Wang¹ (corresponding author) / Yan Shi² / Yi-feng Ling³
¹ School of Water Conservancy and Environment, Zhengzhou University, Zhengzhou 450001, Henan, China
² Changjiang River Scientific Research Institute of Changjiang Water Resources Commission, Wuhan 430010, Hubei, China
³ Department of Civil, Construction and Environmental Engineering, Iowa State University, Ames, IA 50011, United States of America
Published Online: 2019-10-26 | DOI: https://doi.org/10.1515/ntrev-2019-0011
## Abstract
In the current investigation, the influence of polyvinyl alcohol (PVA) fibers on the flowability and durability of a cementitious composite containing fly ash and nano-SiO2 was evaluated. PVA fibers were added to the composite at volume fractions of 0.3%, 0.6%, 0.9%, and 1.2%. The flowability of the fresh cementitious composite was assessed using slump flow. The durability of the cementitious composite includes carbonation resistance, permeability resistance, cracking resistance, and freezing-thawing resistance, which were evaluated by the depth of carbonation, the water permeability height, the cracking resistance ratio of the specimens, and the relative dynamic elastic modulus of samples after freeze-thaw cycles, respectively. The results indicated that the addition of PVA fibers had a slightly adverse influence on the flowability of the cementitious composite, and the flowability of the fresh mixtures decreased with increasing PVA fiber content. Incorporation of PVA fibers significantly improved the durability of the cementitious composites regardless of the addition of nano-particles. When the fiber content was less than 1.2%, the durability indices of permeability resistance and cracking resistance increased with fiber content. However, the durability indices of carbonation resistance and freezing-thawing resistance began to decrease as the fiber dosage increased from 0.9% to 1.2%. The fiber-reinforced cementitious composite exhibited better durability due to the addition of nano-SiO2 particles. Nano-SiO2 particles improve the microscopic structure of fiber-reinforced cementitious composites and are beneficial for the PVA fibers to play their reinforcing role.
## 1 Introduction
Many concrete structures around the world are deteriorating because of increasingly aggressive service environments. The service performance of an appreciable number of concrete structures degrades rapidly before they reach their design service life [1]. The average service life of many structures cannot reach half of the design service life because of the poor durability of traditional concrete materials. Measures to enhance the durability of cementitious materials are therefore important for guaranteeing the design service life of engineering structures. For concrete structures, the most common cause of insufficient durability is crack propagation inside the concrete [2]. Cracking inside a concrete structure can result from many causes, such as concrete shrinkage, insufficient reinforcing bars, temperature stress, chemical attack, loading, curing conditions, uneven foundation subsidence, and improper concrete proportioning. Cracks inside the concrete provide convenient access for corrosive ions to penetrate into the interior of concrete structures. If the cracks are small, their influence on concrete erosion is negligible. However, when the crack width exceeds 100 μm, the erosion rate increases greatly because of the cracks. Therefore, measures must be taken to restrict cracking inside the concrete to improve the durability of engineering structures.
To improve the cracking resistance of cementitious composites and enhance the durability of structures and members, various fibers are commonly used in cementitious composites [3]. Many types of fibers are used, such as polyvinyl alcohol (PVA) fiber [4], carbon fiber [5], polypropylene fiber [6], steel fiber [7], plant fiber [8], glass fiber [9], and basalt fiber [10]. Among these, polyvinyl alcohol fiber exhibits many advantages for application in cementitious composites. A cementitious composite reinforced with PVA fiber is usually called an Engineered Cementitious Composite (ECC) [11]. The ultimate tensile ductility of typical ECC can reach 3-5%, while a tiny crack width of 60 μm is maintained at the same time [12]. ECC shows strain-hardening behavior under tensile stress by forming microcracks. As microcracks appear, force is transferred by the fibers crossing them, which enables the ECC matrix to attain a strain capacity more than 300 times that of a traditional cementitious composite [13]. Due to this excellent toughness and tiny crack width, ECC has been used to enhance the durability of structures [14].
During the last several years, the durability of ECC has been extensively studied. Sahmaran and Li [15] studied the durability of ECC incorporating a large volume of fly ash and concluded that ECC specimens with a high dosage of fly ash exhibited excellent mechanical properties and high tensile strain capacity after accelerated attack in sodium chloride and sodium hydroxide solutions. Many kinds of waste mineral admixture have a great effect on the durability of ECC. The freeze-thaw resistance of ECC can be markedly influenced by the type and content of mineral admixture [16]. Liu et al. [17] discussed the effect of ground granulated blast furnace slag (GGBS) and silica fume (SF) on the freezing-thawing performance of ECC containing fly ash; their results indicated that ECC with a larger proportion of fly ash replaced by GGBS and SF exhibited better frost resistance than ECC containing no GGBS or SF. Righi [18] concluded that the addition of rice husk ash at a content of 30% had a positive influence on improving ductility and cracking resistance and on decreasing the heat of hydration and water absorption of ECC. Xu et al. [19] studied the durability of UHTCC under different numbers of freezing-thawing cycles and concluded that UHTCC retained its strain-hardening behavior and high tensile strain capacity after 300 freeze-thaw cycles. Compared with traditional cementitious composites, the durability of ECC is more excellent due to the addition of PVA fibers.
However, it is still necessary to enhance the durability of ECC used in severe and strongly corrosive environments. With the development of nanotechnology, nano materials have broad prospects for application in cementitious materials. The use of nano-particles in cementitious composites has attracted the interest of many researchers, and abundant research over the past decade has been conducted on the properties of cementitious composites incorporating nano-particles. The results of Yesilmen [20] showed that nano-sized mineral admixtures markedly improved the ductility and flexural behavior of ECC. Through a series of experiments with nano-Fe3O4 in cementitious composites, Sikora et al. [21] concluded that the microscopic structure of the composites was greatly improved and the porosity was decreased, which increased the density of the composites. Li et al. [22] explored the coupling effect of hybrid fibers and nano-particles on the workability, flexural performance, and microscopic structure of a cementitious composite with excellent ductility. Yang and Che [23] studied the influence of nano-CaCO3 and microscale limestone powder on the pore structure and hydration products of cementitious composites. Jiang [24] studied the rheological performance of cementitious composites incorporating different kinds of micro- and nanoparticles, and the results showed that the dosage and kind of filler material had a remarkable influence on the rheological performance of the composite. During hydration of the cement in the cementitious composite, the nano-particles improve the behavior of the hardened cement paste and the interfacial bond behavior due to nanometer effects [25].
Durability is of great importance in the mix design and application of cementitious composites, especially those used in severe and strongly corrosive environments. However, there are so far few reports of systematic studies on the influence of PVA fiber on the durability of cementitious composites incorporating SiO2 nano-particles. Investigation in these aspects is certainly necessary and helpful to promote the further use of ECC incorporating nano-SiO2 particles. The present paper reports the influence of PVA fiber on the durability of cementitious composites containing nano-SiO2.
## 2.1 Raw materials
Raw materials used in this study include cement, first-grade fly ash [26], PVA fibers, silica sand, nano-particles, water-reducing agent, and water. The cement was Portland cement (P.O42.5 by Chinese standards) [27] manufactured by Mengdian Cement Co. LTD of Henan Province in China. The fly ash was provided by Datang Luoyang Thermal Power Co. LTD in China. The properties of the cement and fly ash used are presented in Table 1. The PVA fibers used in this investigation were made by Kuraray Company of Japan, and their performance parameters are shown in Table 2. The nano-SiO2 used in this study was manufactured by Hangzhou Wanjing New Material Co. LTD, and Table 3 presents its properties. A high-range water-reducing admixture produced by Xingchen Co. LTD was used to adjust the workability of the fresh cementitious composites. The aggregate was composed of silica sands with different grain sizes; the total grain size range varies from 40 to 70 mesh, corresponding to a grain size of 212-380 μm.
Table 1
Properties of cement and fly ash
Table 2
Physical properties of PVA fiber
Table 3
Physical properties of nano-SiO2
## 2.2 Mix proportions
In order to reveal the impact of PVA fiber content on the durability of cementitious composites incorporating nano-SiO2, the ratios of water to binder and cement to sand were kept constant, and the theoretical mix proportions were obtained by varying the PVA fiber content. After appropriate amendment of some of the proportioning parameters, the final mix proportions were determined. The water-to-binder and cement-to-sand ratios in this study were selected as 0.38 and 2.0, respectively. A low volume fraction (≤ 1.2%) of PVA fiber and a 2.0% content of nano-SiO2 were used. The fly ash and nano-particles were added by replacing an equivalent mass of cement. Altogether, 10 mix proportions were designed, as presented in Table 4. The letters N and S denote the mix proportions containing no nano-particles and containing nano-SiO2, respectively.
Table 4
Mix proportions of the cementitious composites
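A minimal sketch of the batch arithmetic implied by these ratios. Treating the cement-to-sand ratio of 2.0 as a total-binder-to-sand ratio, and the 100 kg binder basis, are assumptions for illustration only; the 35% fly-ash and 2% nano-SiO2 replacement levels follow the mixes described in the paper.

```python
def batch_masses(binder_kg=100.0, w_b=0.38, binder_sand=2.0,
                 fly_ash_frac=0.35, nano_frac=0.02):
    """Split a binder mass into cement, fly ash, and nano-SiO2 (the
    admixtures replace an equivalent mass of cement), and derive the
    water and sand masses from the stated w/b and binder/sand ratios."""
    fly_ash = binder_kg * fly_ash_frac
    nano_sio2 = binder_kg * nano_frac
    return {
        "cement": binder_kg - fly_ash - nano_sio2,  # cement is the remainder
        "fly_ash": fly_ash,
        "nano_SiO2": nano_sio2,
        "water": binder_kg * w_b,                   # w/b = 0.38
        "sand": binder_kg / binder_sand,            # binder/sand = 2.0 (assumed)
    }
```

The function is only a bookkeeping aid: the fiber volume fraction (0.3-1.2%) is dosed by volume, not mass, and so is not part of this mass split.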
## 2.3 Specimen preparation
In order to obtain cementitious composites with excellent fresh and hardened properties, the key point is to ensure that the fibers and nano-particles disperse uniformly in the matrix during mixing of the cementitious composites. The fresh cementitious composites were prepared using a Hobart mixer with a maximum mixing capacity of 10 L. First, the silica sands, fly ash, cement, and nano-particles were dry mixed for 2 min. Then the water-reducing admixture and half of the water were added to the mixture in two equal portions, and the composite was stirred for 1 min after each addition. After that, the rest of the water was added and the mixture stirred for a further 1 min. The fibers had been divided into four parts in advance; each part of the PVA fibers was then added and the mixture stirred for 2.5 min. Altogether, after mixing for 15 min, a homogeneous fresh cementitious composite with good fluidity and good fiber dispersion was obtained. Immediately after mixing, some of the fresh composite was used to measure the flowability of the cementitious composite. The rest of the fresh composite was placed in various molds to prepare test specimens. The strengthening effect of the cementitious composite on concrete structures depends on the fluidity of the fresh composite. It was observed from the workability tests that the flowability of the cementitious composite was reduced by the addition of PVA fibers and nano-particles. As a result, the fluidity of the fresh composite must be checked during preparation to guarantee a good strengthening effect [28, 29, 30].
## 2.4 Flowability tests
The flowability tests were performed according to GB/T 50080-2002 [31]. The slump flow tests, conducted in conjunction with slump tests, were expected to provide a better evaluation of the flowability of fresh composite mixtures that are too fluid to hold their shape after the standard slump test [32]. The flowability of the cementitious composite was evaluated by the slump flow. After the slump cone was lifted, the fresh composite began to spread under the effect of gravity. When the fresh composite stopped spreading, the two diameters of the spread surface in two perpendicular directions were measured. The average of the two diameters was taken as the slump flow of the cementitious composite. The slump flow was tested for all the cementitious composite mixtures.
## 2.5 Carbonation tests
According to the Chinese Standard [33], 100-mm cube specimens were cast for the carbonation tests. After the specimens with moulds were vibrated on the vibrating table, they were cured at ambient temperature on the flat ground. After demoulding, the specimens were moved to a standard curing room for further curing. After 26 days of curing, the specimens were dried in an oven for 48 h and were ready for testing after cooling. Before a specimen was placed into the carbonation box, one surface was marked with lines at 10 mm spacing to determine the positions of the measuring points. The other three surfaces of the specimen were then sealed with a layer of paraffin. The temperature, relative humidity, and CO2 concentration of the carbonation box were controlled at 20 ± 2°C, 70 ± 5%, and 20%, respectively. The scheduled carbonation periods were 3 d, 7 d, 14 d, and 28 d. When the carbonation time reached a scheduled period, the specimen was taken out and split into two parts perpendicular to the marked lines on the universal testing machine. A solution of phenolphthalein in alcohol with a concentration of 1% was sprayed on the split surface, and the carbonation depths at the measuring points were measured. For each specimen, the average of more than eight effective carbonation depths was taken as the final carbonation depth.
## 2.6 Permeability resistance tests
Permeability resistance tests of the cementitious composites were carried out on a fully automatic permeability instrument in accordance with the Chinese Standard [34]. The permeability height of pressurized water in a circular truncated cone specimen was measured. The top diameter, bottom diameter, and height of the specimen were 175 mm, 185 mm, and 150 mm, respectively. During testing, the water pressure was controlled at 1.2 MPa. After the water pressure had been maintained for 24 h, the specimen was taken from the permeability instrument and split into two pieces. The edge of water penetration on the split surface was marked with a waterproof pen. On the split surface, ten measuring points were selected evenly and ten permeability heights were measured; their average was taken as the final permeability height of the specimen. Six specimens were tested for each mix proportion, and the average permeability height of the six specimens was taken as the final permeability height for that mix proportion.
## 2.7 Plate cracking tests
Plate cracking tests of the cementitious composites were carried out according to the Chinese Standard [35]. Rectangular plate specimens with dimensions of 910 × 600 × 20 mm were cast for the plate cracking tests. The cracking resistance of the cementitious composite was evaluated by the cracking resistance ratio, which was calculated from the lengths and widths of the cracks on the surface of the specimen. The cracks were divided into five grades according to their widths, and each grade of crack width has a weight value, as shown in Table 5. The cracking index of a crack is defined as the product of its length and the corresponding weight value, and the cracking index of a specimen can be calculated as follows [35]:
Table 5
Weight value for crack width
$W = \sum (A_i \cdot l_i)$ (1)
where W is the cracking index of a specimen, mm; l_i is the length of crack i, mm; and A_i is the corresponding weight value of crack i. There will be some difference between the average cracking index of the cementitious composite for the basic mix proportion and that for the mix proportion containing the admixture. The cracking resistance ratio is defined as the ratio of this difference to the average cracking index of the basic mix proportion, which can be obtained as follows:
$\gamma = \frac{W_0 - W_i}{W_0} \times 100$ (2)
where γ is the cracking resistance ratio, %; W_0 is the average cracking index of the cementitious composite for the basic mix proportion, mm; and W_i is the average cracking index of the mix proportion containing the admixture, mm. A positive value of γ indicates that the admixture improves the cracking resistance of the cementitious composite; conversely, a negative value of γ indicates that the admixture reduces it.
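As a minimal sketch, Eqs. (1) and (2) can be computed as follows. The weight values A_i used in the example are hypothetical placeholders; the actual values per crack-width grade are given in Table 5.

```python
def cracking_index(cracks):
    """Eq. (1): W = sum(A_i * l_i), with cracks given as (A_i, l_i) pairs,
    where A_i is the weight value for the crack's width grade and l_i is
    the crack length in mm."""
    return sum(a * l for a, l in cracks)

def cracking_resistance_ratio(w0, wi):
    """Eq. (2): gamma = (W_0 - W_i) / W_0 * 100, in percent."""
    return (w0 - wi) / w0 * 100.0

# Hypothetical example: basic mix proportion vs. a mix with an admixture.
w0 = cracking_index([(3.0, 120.0), (1.0, 300.0)])  # basic mix
wi = cracking_index([(0.25, 80.0)])                # mix containing admixture
gamma = cracking_resistance_ratio(w0, wi)          # positive: improvement
```

Note that W_0 and W_i in Eq. (2) are averages over several specimens per mix proportion; the sketch takes single specimens for brevity.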
## 2.8 Freezing-thawing cycle tests
Freezing-thawing cycle tests of the cementitious composites were carried out according to the Chinese Standard [36], and 100 × 100 × 400 mm beam specimens were cast for the fast freezing-thawing cycle tests. After being cured for 24 d in the curing room under standard curing conditions, the specimens were immersed in water for 4 d. After measurement of the initial dynamic elastic modulus, the specimens were put into the specimen box of the freezing-thawing test machine. The water surface inside the specimen box was kept 20 mm above the top surface of the specimen. The dynamic elastic modulus of the specimens was measured after every 25 freezing-thawing cycles, from which the relative dynamic elastic modulus was obtained. In this study, the relative dynamic elastic modulus of specimens subjected to 300 freeze-thaw cycles was used to evaluate the freezing-thawing resistance of the cementitious composite.
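The paper does not spell out how the relative dynamic elastic modulus is computed; the sketch below assumes the usual fundamental-frequency definition used in fast freeze-thaw test standards, P_n = (f_n / f_0)² × 100, which is an assumption on our part. The numeric readings are hypothetical.

```python
def relative_dynamic_modulus(f0_hz, fn_hz):
    """Relative dynamic elastic modulus after n freeze-thaw cycles, in %.
    Assumes the common definition P_n = (f_n / f_0)^2 * 100, where f_0 and
    f_n are the fundamental transverse frequencies of the beam specimen
    before cycling and after n cycles."""
    return (fn_hz / f0_hz) ** 2 * 100.0

# Hypothetical frequency readings taken every 25 cycles:
f0 = 2450.0                                   # initial frequency, Hz
p_300 = relative_dynamic_modulus(f0, 2330.0)  # after 300 cycles
# A higher retained percentage indicates better freeze-thaw resistance.
```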
## 3.1 Flowability of cementitious composite
Figure 1 shows the slump flow measurements of the cementitious composite mixtures containing no nano-particles and those containing 2% nano-SiO2, with different amounts of PVA fiber. The figure shows that the slump flow of the mixtures decreased with increasing PVA fiber dosage. The slump flow of the fiber-reinforced cementitious composite containing 2% nano-SiO2 is lower than that of the fresh cementitious composite containing no nano-particles at the same fiber volume dosage. These results are consistent with those obtained by Hossain [37]. The mixture containing 1.2% PVA fibers showed approximately a 50% reduction in slump flow compared with the control mixture (without PVA fibers), whether or not nano-SiO2 particles were added. The decreasing trend in slump flow values may be due to the added fibers creating a network structure in the cementitious composite, which restrained the mixture from segregation and flow. In addition, some cement particles may adsorb on the fiber surfaces and wrap around the fibers [38], thus reducing the amount of effective paste contributing to the flow of the cementitious composite.
Figure 1
Effect of PVA fiber on slump flow
## 3.2 Carbonation resistance of cementitious composite
Carbonation is the neutralization process of a cementitious composite, in which its alkalinity decreases. High alkalinity is essential to protect the reinforcing steel bars inside the cementitious composite from corrosion, and also to keep the various hydration products stable and maintain good cementing properties [39]. As seen in Figure 2, when a certain amount (≤ 1.2%) of PVA fibers was added to the cementitious composite containing 35% fly ash, and to the composite containing 35% fly ash together with 2% nano-SiO2, the carbonation depth of the specimen was further reduced. The minimum carbonation depth occurred in the composite containing 0.9% PVA fibers without nano-particles and in the composite containing 1.2% PVA fibers and 2% nano-SiO2, respectively. Generally, a larger PVA fiber dosage resulted in a smaller carbonation depth for all the mixtures. However, for the cementitious composite without nano-particles, excessive PVA fiber may have an adverse influence on the reduction of carbonation depth. The carbonation depth was observed to increase as the test age increased from 3 days to 28 days. However, it should be noted that the variation in carbonation depth among composites containing different dosages of PVA fibers was not obvious when the test age was less than 14 days. This indicates that the addition of PVA fiber had very limited benefit for the carbonation resistance of the cementitious composite at the early carbonation stage.
Figure 2
Effect of PVA fiber on carbonation depth
Carbonation can be described as the diffusion of CO2 from the outside surface into the interior of the cementitious composite. The carbonation depth increases with the diffusion depth of CO2. A large number of small pores and microcracks inside the cementitious composite provide the necessary channels for CO2 diffusion. After PVA fibers were added to the cementitious composite, the large number of fibers uniformly dispersed in the matrix formed a network, which can restrict the sinking of aggregate and prevent segregation of the fresh composite. Bleeding of the cementitious composite can also be reduced by the fiber network. As a result, the pore channels inside the composite were reduced. Meanwhile, a great number of PVA fibers will reduce the size of the capillary pores inside the matrix or even block them. Furthermore, the PVA fibers can restrict the generation and propagation of cracks in the matrix and hold back the connection of cracks [40]. For the cementitious composite containing no nano-particles, there is a critical content of PVA fiber for enhancing the carbonation resistance of the cementitious composite. This critical dosage of PVA fiber can also be called the optimum content. When the fiber content exceeds the optimum content, there will be too many fibers and the fiber spacing will be too small, resulting in overlapping of the interface regions of adjacent fibers. Thus the amount of weak interface will increase and the microstructure of the interface region will be too loose, which is harmful to the carbonation resistance of the matrix. After 2% nano-SiO2 was added to the cementitious composite, the nano-particles promoted more complete hydration, and more silica gel was generated. The silica gel and the unreacted SiO2 nano-particles filled the internal pores of the matrix and improved its density, which hinders the diffusion of CO2. Therefore, the carbonation resistance of the cementitious composite containing nano-SiO2 increased with increasing fiber dosage.
## 3.3 Permeability resistance of cementitious composite
Permeability resistance is an important parameter for evaluating the durability of cementitious composites. Usually, the service environment of cementitious composite structures is rather complex, and most structures are directly exposed to the air. Because there are a large number of microcracks and capillary-size pores inside the cementitious composite, pressurized water can enter the matrix through these channels, which causes enlargement and propagation of the microcracks and results in reduced durability of the structures [41]. In particular, water permeating into the cementitious composite can participate in complex chemical reactions that generate corrosive materials, which results in dissolution erosion inside the structures, making them prone to durability failure [42]. As seen from Figure 3, the permeability height of the cementitious composite specimens generally declined with increasing PVA fiber dosage. In particular, after 0.3% PVA fibers were added to the cementitious composite, there was a sharp reduction in the permeability height of the specimen. A larger amount of PVA fiber addition resulted in a larger reduction in the permeability height of the specimen. It should also be noted that the permeability height of the specimen containing 2% nano-SiO2 was lower than that of the cementitious composite containing no nano-SiO2 at the same PVA fiber volume dosage. For the cementitious composite without nano-particles, the permeability height of the specimen containing no PVA fiber was 40.2 mm, but it was only 15.4 mm for the composite specimen reinforced with 1.2% PVA fiber. It can be concluded that the PVA fiber volume content had a significant effect on the permeability resistance of the cementitious composite specimens.
Figure 3
Effect of PVA fiber on permeability height
A large number of tiny, thin PVA fibers with a large specific surface area constructed a uniform, disordered support system. When the cementitious composite matrix shrank, this disordered support system consumed energy effectively and restricted cracking of the matrix, which effectively reduced the internal microcracks and defects of the cementitious composite. As a result, it is difficult for through-going porous capillary channels or cracks to form inside the composite because of the fibers. With the improved internal structure of the matrix, the permeability resistance of the cementitious composite was enhanced [43].
## 3.4 Cracking resistant property of cementitious composite
Shrinkage cracking is a common disadvantage of cementitious composites; it reduces their strength and durability and accelerates corrosion of the reinforcing steel bars inside the composite. Controlling the early plastic cracking and crack propagation of the cementitious composite to improve its cracking resistance is of great significance for improving durability. The influence of PVA fiber content on the cracking resistance ratio of the cementitious composite and of the cementitious composite containing 2% nano-SiO2 is shown in Figures 4(a) and 4(b), respectively. The results showed that the PVA fiber additions markedly increased the cracking resistance ratio of the cementitious composite. As seen from Figure 4(a), for the cementitious composite without nano-particles, the net increase in cracking resistance ratio caused by 0.3% PVA fibers was about 66%, which was much larger than the net increase in the cracking resistance ratio of the cementitious composite containing 2% nano-SiO2. For both mixture series, the cementitious composite containing 1.2% PVA fibers exhibited the highest increase in cracking resistance ratio. As seen from Figure 4(b), for the cementitious composite containing 2% nano-particles, the increase in cracking resistance ratio was 90% compared with the control composite mix.
Figure 4
Effect of PVA fiber on cracking resistance ratio
It can be concluded that the incorporation of PVA fiber greatly improved the cracking resistance of the cementitious composites. On the one hand, PVA fiber improved the cohesiveness of the fresh cementitious composite due to its excellent hydrophilicity and adhesion with the binding materials. On the other hand, PVA fibers are synthetic fibers with excellent ductility. The fibers inside the composite cross one another to form a net structure, which reduced the fluidity of the cement paste and restricted the sinking of the aggregates. This aggregate-supporting function of the PVA fibers contributes greatly to improving the cracking resistance of the cementitious composite. Meanwhile, PVA fibers can fill and block the pores formed during cement hydration, reduce the number of connected pore channels, and decrease the size of the pore channels. As a result, the pore size distribution was optimized and the compactness of the cementitious composite was significantly enhanced. Furthermore, the bridging function of the fibers carried part of the internal shrinkage stress, which reduced the stress concentration at the crack tips in the cementitious composite. The cracks were thus prevented from propagating further, and the possibility of microcracks becoming through cracks was reduced [44]. The high activity of the SiO2 nano-particles promoted more thorough cement hydration, and the compactness of the cementitious composite was improved. As a result, the number of pores inside the composite was well controlled, and the porosity and size of the internal pores were decreased. The presence of SiO2 nano-particles promoted the reinforcing effect of the PVA fibers on the cracking resistance of the cementitious composite. Therefore, the addition of PVA fibers greatly improved the cracking resistance of the cementitious composite.
## 3.5 Freeze-thaw resistance of cementitious composite
From the point of view of microstructure, the interior of a cementitious composite is not void-free and continuous; there are a large number of capillary-size pores with diameters of 0.01-10 μm at the interface between the aggregate and the cement gel [45]. In a damp environment, most of the capillary-size pores will be filled with water, which freezes when the temperature of the cementitious composite drops low enough. The internal stress resulting from freezing acts directly on the pore structure, causing irreversible micro-crack damage inside the composite. Under repeated freezing-thawing cycles, this internal stress affects the matrix again and again, and the internal micro-crack damage expands and accumulates, finally leading to freezing-thawing damage of the cementitious composite [46]. Moreover, the internal stress in the pore structure at the interface between the reinforcing steel bars and the composite matrix, derived from freezing-thawing cycles, is also a primary cause of steel bar corrosion and of bond failure between the steel bars and the composite matrix [47].
Figure 5(a) illustrates the effect of PVA fiber dosage on the relative dynamic elastic modulus of the cementitious composite after the samples underwent 300 freeze-thaw cycles. As shown in Figure 5(a), the incorporation of PVA fibers effectively increased the relative dynamic elastic modulus. The maximum improvement in relative dynamic elastic modulus occurred in the composite containing 0.9% PVA fibers. In comparison with the control cementitious composite (0% PVA fiber), the net rate of increase in the relative dynamic elastic modulus was 16.5% for the composite reinforced by 0.9% fibers after 300 freeze-thaw cycles. Figure 5(b) presents the curves of relative dynamic elastic modulus versus freeze-thaw cycles for cementitious composites containing 2% SiO2 nano-particles reinforced with various dosages of PVA fibers. From the figure, the variation of the relative dynamic elastic modulus of the cementitious composite containing 2% nano-SiO2 with increasing fiber dosage is similar to that of the cementitious composites without SiO2 nano-particles. After the same number of freeze-thaw cycles, the cementitious composite reinforced with 0.9% PVA fibers exhibited the maximum relative dynamic elastic modulus. After 300 freeze-thaw cycles, the relative dynamic elastic modulus of the cementitious composite containing 2% SiO2 nano-particles increased from 86.6% to 91.4%, an increase of 5.5% compared with the cementitious composite without fibers. The variation in relative dynamic elastic modulus in Figure 5 shows that a low content (0.9%) of PVA fiber improved the freezing-thawing resistance of the cementitious composite, while a larger amount (> 0.9%) of PVA fibers reduced it.
Figure 5
Effect of PVA fiber on relative dynamic elastic modulus
Before the freezing-thawing cycle tests started, the surfaces of all the specimens were very smooth, with no small pits. As the tests went on, a certain number of small pores appeared on the specimen surface of the composite containing no PVA fibers, and spalling of the hardened cement paste occurred in patches under the expansion pressure produced by the freezing of water in the small pores during the freezing-thawing cycles. By contrast, the surface of the PVA fiber reinforced specimens changed little over the course of the cycles. The overlapping of disordered PVA fibers inside the composite restricted the escape of internal air, so the air content in the composite increased, which relieved the hydrostatic pressure and seepage pressure during the low-temperature cycles [48]. Because the diameter of a PVA fiber is very small, the number of fibers per unit mass is large and the spacing between fibers is small. As a result, the energy loss of the cementitious composite during freeze-thaw damage was increased, and the expansion and cracking of the composite was effectively restrained, which helped improve its freezing-thawing resistance. If an overdose of PVA fibers is used, however, the number of fibers per unit of composite becomes too large and the spacing between fibers too small. There are then too many planes of weakness inside the fiber reinforced composite, and the microstructure of the interfacial transition zone becomes loose, which works against improving the freezing-thawing resistance of the cementitious composite.
## 3.6 Microstructure of cementitious composite
With the development of the hydration process in cementitious composites, the amount of cement grains decreases and the layer of hydrated products around the remaining grains grows thicker and thicker [49]. Figures 6(a) and 6(b) exhibit the microstructures of the plain cementitious composite and of the PVA fiber reinforced composite containing nano-SiO2 particles, respectively. As observed in Figure 6(a), there are large areas with large pores in the composite due to the absence of nano-particles. The bonding between the fiber and the cement paste is not strong enough, which implies that the ITZ is relatively weak, and the porosity of the composite is so high that Ca(OH)2 has ample space to grow. The microstructure of the PVA fiber reinforced composite becomes much denser owing to the filling effect of the nano-particles and the small particle size of the hydrated C-S-H gel products. As shown in Figure 6(b), with the addition of 1.0% SiO2 nano-particles the number of pores in the composite is very small. Besides, including SiO2 nano-particles in the PVA fiber reinforced composite obviously reduced the quantity of disadvantageous crystals such as needle-like ettringite and calcium hydroxide. As a result, the ITZ was strengthened, which is more advantageous for the PVA fibers to play their reinforcing role.
Figure 6
SEM micrographs of PVA fiber reinforced cementitious composites
## 4 Conclusions
Based on the study presented above, the following main conclusions can be drawn:
1. Adding PVA fibers to the cementitious composite decreased the slump flow of the fresh composite, and the loss of slump flow grew with increasing fiber content (0.3–1.2% by volume). The incorporation of nano-SiO2 particles in the cementitious composites caused a further loss of flowability.
2. Incorporation of PVA fibers significantly improved the durability of cementitious composites regardless of the addition of nano-particles. When the fiber content was less than 1.2%, the durability indices of permeability resistance and cracking resistance increased with fiber content. However, the durability indices of carbonation resistance and freezing-thawing resistance began to decrease as the fiber dosage increased from 0.9% to 1.2%. The fiber reinforced cementitious composite exhibited better durability with the addition of nano-SiO2 particles.
3. The microstructure of the PVA fiber reinforced cementitious composite becomes much denser due to the filling effect of nano-SiO2 particles and the hydrated C-S-H gel products. Nano-SiO2 particles improve the microscopic structure of fiber reinforced cementitious composites and are beneficial for the PVA fibers to play their reinforcing role.
## Acknowledgement
The authors would like to acknowledge the financial support received from CRSRI Open Research Program (Grant No. CKWV2018477/KY), National Natural Science Foundation of China (Grant No. 51678534), Open Projects Funds of Dike Safety and Disaster Prevention Engineering Technology Research Center of Chinese Ministry of Water Resources (Grant no. 2018006), Program for Innovative Research Team (in Science and Technology) in University of Henan Province in China (Grant No. 20IRT-STHN009).
## References
• [1]
Ahmed S.F.U., Mihashi H., A review on durability properties of strain hardening fibre reinforced cementitious composites (SHFRCC), Cem. Concr. Compos., 2007, 29, 365-376.
• [2]
Liu H., Zhang Q., Li V., Su H., Gu C., Durability study on engineered cementitious composites (ECC) under sulfate and chloride environment. Constr. Build. Mater., 2017, 133, 171-181.
• [3]
Xu S.L., Lyu Y., Xu S.J., Li Q.H., Enhancing the initial cracking fracture toughness of steel-polyvinyl alcohol hybrid fibers ultra high toughness cementitious composites by incorporating multiwalled carbon nanotubes, Constr. Build. Mater., 2018, 195, 269-282.
• [4]
Arain M.F.,Wang M.X., Chen J.Y., Zhang H.P., Study on PVA fiber surface modification for strain-hardening cementitious composites (PVASHCC), Constr. Build. Mater., 2019, 197, 107-116.
• [5]
Kim G.M., Park S.M., Ryu G.U., Lee H.K., Electrical characteristics of hierarchical conductive pathways in cementitious composites incorporating CNT and carbon fiber, Cem. Concr. Compos., 2017, 82, 165-175.
• [6]
Pournasiri E., Ramli M., Cheah C.B., Mechanical performance of ternary cementitious composites with polypropylene fiber, ACI Mater. J., 2018, 115, 635-646.
• [7]
Perez Villar V., Flores Medina N., Hernandez-Olivares F., A model about dynamic parameters through magnetic fields during the alignment of steel fibres reinforcing cementitious composites, Constr. Build. Mater., 2019, 201, 340-349.
• [8]
Ahmad R., Hamid R., Osman S.A., Physical and chemical modifications of plant fibres for reinforcement in cementitious composites, Adv. Civ. Eng., 2019, 2019, 1-12.
• [9]
Li S.B., Hu B.X., Zhang F., Preparation and properties of glass fiber/plant fiber reinforced cementitious composites, Sci. Adv. Mater. J., 2018, 115, 635-646.
• [10]
Girgin Z.C., Effect of slag, nano clay and metakaolin on mechanical performance of basalt fibre cementitious composites, Constr. Build. Mater., 2018, 192, 70-84.
• [11]
Pakravan H.R., Jamshidi M., Latifi M., J. Text. I., 2017, 109, 79-84.
• [12]
Deng H., Liao G., Assessment of influence of self-healing behavior on water permeability and mechanical performance of ECC incorporating superabsorbent polymer (SAP) particles, Constr. Build. Mater., 2018, 170, 455-465.
• [13]
Turk K., Nehdi M.L., Coupled effects of limestone powder and high-volume fly ash on mechanical properties of ECC, Constr. Build. Mater., 2018, 164, 185-192.
• [14]
Zhang J., Gao Y., Wang Z.B., Evaluation of shrinkage induced cracking performance of low shrinkage engineered cementitious composite by ring tests, Compos. Part B: Eng., 2013, 52, 21-29.
• [15]
Sahmaran M., Li V.C., Durability properties of micro-cracked ECC containing high volumes fly ash, Cem. Concr. Res., 2009, 39, 1033- 1043.
• [16]
Ozbay E., Sahmaran M., Lachemi M., Effect of microcracking on frost durability of high-volume-fly-ash- and slag-incorporated Engineered Cementitious Composites, ACI Mater. J., 2013, 110, 259-267.
• [17]
Liu Y., Zhou X., Lv C., Yang Y., Liu T., Use of silica fume and GGBS to improve frost resistance of ECC with high-volume fly ash, Adv. Civ. Eng., 2018, 2018, 1-11.
• [18]
Righi D.P., Costa F.B.P., Graeff A.G., Silva Filho L.C.P., Tensile behaviour and durability issues of engineered cementitious composites with rice husk ash, Materia., 2017, 22, 1-9.
• [19]
Xu S., Cai X., Li H., Experimental study of the durability properties of ultra-high toughness cementitious composites under freezing and thawing cycles, China Civ. Eng. J., 2009, 42, 42-46.
• [20]
Yesilmen S., Al-Najjar Y., Balav M.H., Sahmaran M., Yildirim G., Lachemi M., Nano-modification to improve the ductility of cementitious composites, Cem. Concr. Res., 2015, 76, 170-179.
• [21]
Sikora P., Horszczaruk E., Cendrowski K., Mijowska E., Nanomodification to improve the ductility of cementitious composites, Nanoscale Res. Lett., 2015, 11, 1-9.
• [22]
Li Q.H., Gao X., Xu S.L., Multiple effects of nano-SiO2 and hybrid fibers on properties of high toughness fiber reinforced cementitious composites with high-volume fly ash, Cem. Concr. Compos., 2016, 72, 201-212.
• [23]
Yang H.S., Che Y.J., Multiple effects of nano-SiO2 and hybrid fibers on properties of high toughness fiber reinforced cementitious composites with high-volume fly ash, Adv. Mater. Sci. Eng., 2018, 2018, 1-8.
• [24]
Jiang S., Zhou D., Zhang L., Comparison of compressive strength and electrical resistivity of cementitious composites with different nano- and micro-fillers, Arch. Civ. Mech. Eng., 2018, 18, 60-68.
• [25]
Yu R., Spiesz P., Brouwers H.J.H., Effect of nano-silica on the hydration and microstructure development of Ultra-High Performance Concrete (UHPC) with a low binder amount, Constr. Build. Mater., 2014, 65, 140-150.
• [26]
GB/T 50146-2014, Technical Code for Application of Fly Ash Concrete, National Standard of the People’s Republic of China, 2014.
• [27]
GB175-2007, Common Portland Cement, National Standard of the People’s Republic of China, 2007.
• [28]
Kim S.W., Yun H.D., Flexural behaviour of reinforced concrete beams strengthened with a composite reinforcement layer: BFRP grid and ECC, Constr. Build. Mater., 2016, 115, 424-437.
• [29]
Felekoglu B., Tosun-Felekoglu K., Ranade R., Zhang Q., Li V.C., Influence of matrix flow ability, fiber mixing procedure, and curing conditions on the mechanical performance of HTPP-ECC, Compos. Part B: Eng., 2014, 60, 359-370.
• [30]
Wu C., Li V.C., Thermal-mechanical behaviors of CFRP-ECC hybrid under elevated temperatures, Compos. Part B: Eng., 2017, 110, 255-266.
• [31]
GB/T 50080-2002, Standard for test method of performance on ordinary fresh concrete, National Standard of the People’s Republic of China, 2003.
• [32]
Jalal M., Pouladkhan A., Harandi O.F., Jafari D., Comparative study on effects of Class F fly ash, nano silica and silica fume on properties of high performance self-compacting concrete, Constr. Build. Mater., 2015, 94, 90-104.
• [33]
GB/T 11974-1997, Test method for carbonation of aerated concrete, National Standard of the People’s Republic of China, 1997.
• [34]
GB/T 50082-2009, Standard for test methods of long-term performance and durability of ordinary concrete, National Standard of the People’s Republic of China, 2009.
• [35]
JC/G 951-2005, The method for cracking-resistance of cement mortar, National Standard of the People’s Republic of China, 2005.
• [36]
SL 352-2006, Test code for hydraulic concrete, National Standard of the People’s Republic of China, 2006.
• [37]
Hossain K.M.A., Lachemi M., Sammour M., Sonebi M., Strength and fracture energy characteristics of self-consolidating concrete incorporating polyvinyl alcohol, steel and hybrid fibres, Constr. Build. Mater. 2013, 45, 20-29.
• [38]
Yew M.K., Mahmud H.B., Ang B.C., Yew M.C., Effects of low volume fraction of polyvinyl alcohol fibers on the mechanical properties of oil palm shell lightweight concrete, Adv. Mater. Sci. Eng., 2015, 2015, 1-8.
• [39]
Zhang X., Xu J., Du Y., Study on anti-carbonation properties of high volume fly-ash concrete, Yangtze River, 2010, 41, 74-77.
• [40]
Cheng Y., Wang H., Wang Y., Preliminary research on carbonation resistance of fiber reinforced concrete, J. Build. Mater., 2010, 13, 792-795.
• [41]
Kang Q., Fang Y., Deng H., Mechanical properties and crack-resistance of cement mortar with basalt/polypropylene hybrid fiber, Mater. Rev., 2011, 25(6), 122-126.
• [42]
Peng S., Ding Z., Chen M., Deng K., The test research for compound fiber-reinforced concrete performance on intensity improvement, crack control and leakage resistance, Build. Sci., 2007, 23, 56-59.
• [43]
Xu X., He X., Yi Z., A research of polypropylene fiber concrete impermeability test & mechanism analysis, China Munic. Eng., 2010, 35, 6-8.
• [44]
Deng Z., Zhang Y., Xu H., Du C., Experimental study on early anti-cracking and permeability resistance of cellulose fiber reinforced concrete, South-to-North Water Trans. Water Sci. Technol., 2012, 10, 10-13.
• [45]
Ji X., Song Y., Mechanic analysis on the failure of bond behavior between concrete and steel bar when suffered from frost injury, Shuili Xuebao, 2009, 40, 1495-1499.
• [46]
Song Y., Ji X., Analysis on reliability of concrete under freezing-thawing action and evaluation of residual life, Shuili Xuebao, 2006, 37, 259-263.
• [47]
Ji X., Song Y., Experimental research on bond behaviors between steel bars and concrete after freezing and thawing cycles, J. Dalian Univ. Technol., 2009, 48, 240-245.
• [48]
Zhang J., Liu S., Yan C., Bai J., Yan M., Influence of chloride environment on frost resistance of PVA fiber reinforced engineered cementitious composite, J. Chin. Ceram. Soc., 2013, 41, 766-711.
• [49]
Li W., Huang Z., Cao F., Sun Z., Shah S.P., Effects of nano-silica and nano-limestone on flowability and mechanical properties of ultra-high-performance concrete matrix, Constr. Build. Mater., 2015, 95, 366-374.
Accepted: 2019-04-30
Published Online: 2019-10-26
Citation Information: Nanotechnology Reviews, Volume 8, Issue 1, Pages 116–127, ISSN (Online) 2191-9097.
# Systems Of Linear Equations
In this article, we discuss systems of linear equations as treated in linear algebra.
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
from sympy.plotting import *
x, y, z = symbols("x y z")
## Linear Equations in $$n$$ Variables
In analytical geometry, we learn that a line in two-dimensional space has the standard form
$a_1x + a_2y = b$
Here $$a_1$$, $$a_2$$, and $$b$$ are constants. This line is a linear equation in two variables (which is why it lives in two-dimensional space). The same form extends to three-dimensional space, where it describes a plane.
$a_1x + a_2y + a_3z = b$
By definition, then, a linear equation in $$n$$ variables $$x_1, x_2, x_3, \cdots, x_n$$ has the form
$a_1x_1 + a_2x_2 + a_3x_3 + \cdots + a_nx_n = b$
The coefficients $$a_1, a_2, \cdots, a_n$$ are real numbers, and the constant term $$b$$ is a real number. $$a_1$$ and $$x_1$$ are, respectively, the leading coefficient and the leading variable. In a linear equation, the variables appear only to the first power and are not involved in trigonometric, exponential, or logarithmic functions.
A solution of a linear equation in $$n$$ variables is a sequence of $$n$$ real numbers $$s_1, s_2, \cdots, s_n$$ such that the equation holds when they are substituted for the variables.
Let’s solve the linear equation $$3x + 2y - z = 4$$ for each variable.
eq = Eq(3*x + 2*y - z, 4)
solutions = [solve(eq, x, dict=True), solve(eq, y, dict=True), solve(eq, z, dict=True)]
solutions
## [[{x: -2*y/3 + z/3 + 4/3}], [{y: -3*x/2 + z/2 + 2}], [{z: 3*x + 2*y - 4}]]
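To verify that a sequence of numbers is a solution, we can substitute it back into the equation. Here is a quick check (the candidate points are our own, chosen for illustration):

```python
from sympy import symbols, Eq

x, y, z = symbols("x y z")
eq = Eq(3*x + 2*y - z, 4)

# (x, y, z) = (1, 1, 1): 3 + 2 - 1 = 4, so the equation is satisfied
print(eq.subs({x: 1, y: 1, z: 1}))  # True
# (x, y, z) = (2, 0, 1): 6 + 0 - 1 = 5 != 4, so it is not a solution
print(eq.subs({x: 2, y: 0, z: 1}))  # False
```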
### Systems of Linear Equations
A system of $$m$$ linear equations in $$n$$ variables is a set of $$m$$ equations, each having the same $$n$$ variables.
$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n = b_2 \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + a_{m3}x_3 + \cdots + a_{mn}x_n = b_m \\$
A solution of a system of linear equations is a sequence of numbers $$s_1, s_2, \cdots, s_n$$ that is a solution of every linear equation in the system. Geometrically, such a solution corresponds to a point where the graphs of the equations intersect.
A system of linear equations can have exactly one solution (the lines intersect at a single point), infinitely many solutions (the lines are “on top of each other”, i.e. they are coincident), or no solution (the lines are parallel). A system with at least one solution is called a consistent system; a system with no solution is an inconsistent system.
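All three cases can be seen with linsolve; the pairs of equations below are our own minimal examples:

```python
from sympy import symbols, linsolve

x, y = symbols("x y")

# Parallel lines (same slope, different intercepts): no solution, inconsistent
print(linsolve([x + y - 3, x + y - 5], x, y))      # EmptySet
# Coincident lines (second equation is twice the first): infinitely many solutions
print(linsolve([x + y - 3, 2*x + 2*y - 6], x, y))  # {(3 - y, y)}
```

For the coincident case, linsolve reports the solutions parameterized by the free variable $$y$$.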
### Systems of Linear Equations in Two Variables
Let’s graph and solve the following linear equation.
$x + y = 3 \\ x - y = -1$
# rewrite functions by moving the right hand side to the left hand side
equations = [x + y - 3, x - y + 1]
linsolve(equations, x, y)
## FiniteSet((1, 2))
# solve each function to get a form we can plot with sympy
a = solve(equations[0], x)
a
## [3 - y]
b = solve(equations[1], x)
b
## [y - 1]
p = plot(a[0], line_color='b', legend=True, show=False)
p.extend(plot(b[0], line_color='r', legend=True, show=False))
p.show()
## Solving Systems of Linear Equations
Equations in Row-Echelon form (written in a “triangular”, stair-step pattern with leading coefficients of 1) can be solved using Back-Substitution. Systems of equations that are not in Row-Echelon form can be rewritten into that form using Gaussian Elimination. Two systems of linear equations are equivalent if they have the same solution set.
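To make Back-Substitution concrete, here is a minimal hand-rolled version (an illustration only; in practice we let sympy do this). Starting from the last equation of a Row-Echelon system, each step substitutes the values already found and solves for one more variable:

```python
def back_substitute(U, b):
    """Solve Ux = b where U is upper triangular with nonzero diagonal,
    working from the last equation upward."""
    n = len(b)
    xs = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the terms involving already-solved variables, then divide by the pivot
        s = sum(U[i][j] * xs[j] for j in range(i + 1, n))
        xs[i] = (b[i] - s) / U[i][i]
    return xs

# Row-Echelon system: x - 2y + 3z = 9;  y + 3z = 5;  z = 2
print(back_substitute([[1, -2, 3], [0, 1, 3], [0, 0, 1]], [9, 5, 2]))  # [1.0, -1.0, 2.0]
```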
Let’s solve the system
$x - 2y + 3z = 9 \\ -x + 3y = -4 \\ 2x - 5y + 5z = 17$
equations = [x - 2*y + 3*z -9, -x + 3*y + 4, 2*x -5*y + 5*z -17]
linsolve(equations, x, y, z)
## FiniteSet((1, -1, 2))
## Introduction to Matrices
A matrix is an array of objects (numbers, expressions, or symbols) arranged in a rectangular shape. We describe a matrix as an $$m \times n$$ matrix, where $$m$$ denotes the number of rows and $$n$$ the number of columns; together, $$m$$ and $$n$$ determine the size of the matrix. We denote each entry of the matrix as $$a_{ij}$$, where $$i$$ is its row index and $$j$$ its column index.
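As a small illustration of the indexing convention in sympy (note that sympy, like Python, indexes from 0, while the $$a_{ij}$$ notation starts at 1):

```python
from sympy import Matrix

M = Matrix([[1, -2, 3], [4, 5, 6]])  # a 2 x 3 matrix: m = 2 rows, n = 3 columns
print(M.shape)   # (2, 3)
# entry a_12 (first row, second column) in 0-based indexing:
print(M[0, 1])   # -2
```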
One common use of matrices is to represent systems of linear equations. The matrix derived from the coefficients and constant terms of a system is called the augmented matrix of the system; the matrix containing only the coefficients is the coefficient matrix. The methods of Back-Substitution and Gaussian / Gauss-Jordan Elimination can be used on matrices as well: the elimination operations turn a matrix into Row-Echelon form for later Back-Substitution. Although we can use programs to solve systems of linear equations, let’s illustrate the Row-Echelon form.
Let’s put the system of equations form above into Row-Echelon form.
$x - 2y + 3z = 9 \\ -x + 3y = -4 \\ 2x - 5y + 5z = 17$
The augmented matrix representation of this system is
$\begin{bmatrix} \phantom{-}1 & -2 & 3 & \phantom{-}9 \\ -1 & \phantom{-}3 & 0 & -4 \\ \phantom{-}2 & -5 & 5 & \phantom{-}17 \end{bmatrix}$
## variables with no coefficient get a zero.
A = Matrix([[1, -2, 3, 9], [-1, 3, 0, -4], [2, -5, 5, 17]])
A
## Matrix([
## [1, -2, 3, 9],
## [-1, 3, 0, -4],
## [2, -5, 5, 17]])
A.rref()[0]
## Matrix([
## [1, 0, 0, 1],
## [0, 1, 0, -1],
## [0, 0, 1, 2]])
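Equivalently, sympy's solve_linear_system performs the row reduction on an augmented matrix and returns the solution directly as a dictionary, which is just another way of reading off the last column of the reduced matrix:

```python
from sympy import Matrix, symbols, solve_linear_system

x, y, z = symbols("x y z")
# augmented matrix of x - 2y + 3z = 9; -x + 3y = -4; 2x - 5y + 5z = 17
A = Matrix([[1, -2, 3, 9], [-1, 3, 0, -4], [2, -5, 5, 17]])
print(solve_linear_system(A, x, y, z))  # {x: 1, y: -1, z: 2}
```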
## Applications of Systems of Linear Equations
Systems of linear equations have a variety of applications; we treat a few of them below.
### Polynomial Curve Fitting
Suppose we have a collection of data represented by $$n$$ points in the $$xy$$-plane,
$(x_1, y_1), (x_2, y_2) \cdots, (x_n, y_n)$
and we want to find a polynomial function of degree $$n - 1$$
$p(x) = a_0 + a_1x + a_2x^2 + \cdots + a_{n - 1}x^{n - 1}$
whose graph passes through the points. This is called polynomial curve fitting. To solve for the $$n$$ coefficients of $$p(x)$$, we substitute each of the $$n$$ points into the polynomial function and obtain $$n$$ linear equations in the $$n$$ variables $$a_0, a_1, a_2, \cdots, a_{n-1}$$.
As an example, let’s determine the polynomial $$p(x) = a_0 + a_1x + a_2x^2$$ that passes through the points $$(1, 4), (2, 0), (3, 12)$$. Using the aforementioned substitution we get
$p(1) = a_0 + a_1(1) + a_2(1)^2 = a_0 + a_1 + a_2 = 4 \\ p(2) = a_0 + a_1(2) + a_2(2)^2 = a_0 + 2a_1 + 4a_2 = 0 \\ p(3) = a_0 + a_1(3) + a_2(3)^2 = a_0 + 3a_1 + 9a_2 = 12$
A = Matrix([[1, 1, 1, 4], [1, 2, 4, 0], [1, 3, 9, 12]])
solve_linear_system_LU(A, [x, y, z])
## {x: 24, y: -28, z: 8}
The solution tells us that the polynomial function is
$p(x) = 24 - 28x + 8x^2$
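As a cross-check (not part of the original derivation), note that the coefficient matrix of such an interpolation system is a Vandermonde matrix, which numpy can build directly; solving the system reproduces the coefficients found above:

```python
import numpy as np

xs = np.array([1.0, 2.0, 3.0])
ys = np.array([4.0, 0.0, 12.0])

# each row is (1, x_i, x_i^2), matching the unknowns (a0, a1, a2)
V = np.vander(xs, 3, increasing=True)
print(np.linalg.solve(V, ys))  # [ 24. -28.   8.]
```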
def p(x):
return 24 - 28*x + 8*x**2
x = np.linspace(0, 4)
y = p(x)
plt.plot([1, 2, 3], [4, 0, 12], 'ro')
plt.plot(x, y)
Let’s illustrate an application from astrophysics. Let’s find the polynomial that relates the periods of the first three planets to their mean distances from the sun. Then, let’s test the accuracy of the fit by using the polynomial to calculate the period of Mars.
Mercury has a mean distance of 0.387 AU and a period of 0.241 years. Venus has a mean distance of 0.723 AU and a period of 0.615 years. Earth has a mean distance of 1.0 AU and a period of 1.0 year.
Our polynomial will have to be of degree $$n - 1 = 3 - 1 = 2$$.
$p(x) = a_0 + a_1x + a_2x^2$
fitted to the points $$(0.387, 0.241), (0.723, 0.615), (1, 1)$$. This gives us the system of linear equations
$a_0 + 0.387a_1 + (0.387)^2a_2 = 0.241 \\ a_0 + 0.723a_1 + (0.723)^2a_2 = 0.615 \\ a_0 + a_1 + a_2 = 1$
a0, a1, a2 = symbols("a0 a1 a2")
solve_linear_system_LU(Matrix([
[1, 0.387, 0.387**2, 0.241],
[1, 0.723, 0.723**2, 0.615],
[1, 1, 1, 1]]), [a0, a1, a2])
## {a0: -0.0634254004898172, a1: 0.611881422258717, a2: 0.451543978231100}
Therefore the function is
$p(x) = -0.0634 + 0.6119x + 0.4515x^2$
To find a polynomial fit without constructing the whole matrix, we can use numpy and encode the $$xy$$-coordinates into arrays. The np.polyfit function returns the coefficients with the highest power first.
x = np.array([0.387, 0.723, 1])
y = np.array([0.241, 0.615, 1])
np.polyfit(x, y, 2)
## array([ 0.45154398, 0.61188142, -0.0634254 ])
Knowing that Mars has a mean distance of 1.523 AU from the Sun, we can estimate its period:
def p(x):
return -0.0634 + 0.6119*x + 0.4515*x**2
# mars period
p(1.523)
## 1.9157910434999998
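Rather than retyping the rounded coefficients, we can evaluate the fitted polynomial directly from the array that np.polyfit returned, using np.polyval (which, like np.polyfit, expects the highest power first):

```python
import numpy as np

x = np.array([0.387, 0.723, 1.0])
y = np.array([0.241, 0.615, 1.0])
coeffs = np.polyfit(x, y, 2)        # [a2, a1, a0]

# estimated period of Mars at a mean distance of 1.523
print(np.polyval(coeffs, 1.523))    # approximately 1.9158
```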
WIKISKY.ORG
# 44 Boo (44 Boötis)
### Related articles
**Are the W Ursae Majoris-type systems EK Comae Berenices and UX Eridani surrounded by circumstellar matter?**

The variations of the orbital periods of two nearly neglected W UMa-type eclipsing binaries, EK Comae Berenices and UX Eridani, are presented through a detailed analysis of the O-C diagrams. It is found that the orbital period of EK Com is decreasing and the period of UX Eridani is increasing, and several sudden jumps have occurred in the orbital periods of both binaries. We analyze the mechanism(s) which might underlie the changes of the orbital periods of both systems and obtain some new results. The long-term decrease of the orbital period of EK Comae Berenices might be caused by the decrease of the orbital angular momentum due to a magnetic stellar wind (MSW) or by mass transfer from the more massive to the less massive component. The secular increase in the orbital period of UX Eridani might be caused by mass transfer from the less massive to the more massive star. The possible mechanisms which underlie the sudden changes in the orbital periods of the close binary systems are the following: (1) variations of the structure due to the variation of the magnetic field; (2) rapid mass exchange between the close binaries and their circumstellar matter. Finally, the evolutionary status of the systems EK Comae Berenices and UX Eridani is discussed.

**Observations of variables.**

Not Available

**A Comparative Study of Flaring Loops in Active Stars**

Dynamo activity in stars of different types is expected to generate magnetic fields with different characteristics. As a result, a differential study of the characteristics of magnetic loops in a broad sample of stars may yield information about dynamo systematics. In the absence of direct imaging, certain physical parameters of a stellar magnetic loop can be extracted if a flare occurs in that loop.
In this paper we employ a simple nonhydrodynamic approach introduced by Haisch to analyze a homogeneous sample of all of the flares we could identify in the EUVE DS database: a total of 134 flares that occurred on 44 stars ranging in spectral type from F to M and in luminosity class from V to III. All of the flare light curves used in the present study were obtained by a single instrument (EUVE DS). For each flare, we have applied Haisch's simplified approach (HSA) in order to determine loop length, temperature, electron density, and magnetic field. For each of our target stars, a literature survey has been performed to determine quantitatively the extent to which our results are consistent with independent studies. The results obtained by HSA are found to be well supported by results obtained by other methods. Our survey suggests that, on the main sequence, short loops (with lengths up to 0.5 R*) may be found in stars of all classes, while the largest loops (with lengths up to 2 R*) appear to be confined to M dwarfs. Based on EUVE data, the transition from small to large loops on the main sequence appears to occur between spectral types K2 and M0. We discuss the implications of this result for dynamo theories.

**On the Temperature-Emission Measure Distribution in Stellar Coronae**

Strong peaks in the emission measure-temperature (EM-T) distributions in the coronae of some binary stars are associated with the presence of hot (10^7 K), dense (up to 10^13 cm^-3) plasma. These peaks are very reminiscent of those predicted to arise in an impulsively heated solar corona. A coronal model comprised of many impulsively heated strands is adapted to stellar parameters. It is shown that the properties of the EM-T distribution can be accounted for in general terms provided the emission comes from many very small loops (length under 10^3 km) with intense magnetic fields (1 kG) distributed across part of the surface of the star.
The heating requires events that generally dissipate between 10^26 and 10^28 ergs, which is in the range of solar microflares. This implies that such stars must be capable of generating regions of localized intense magnetic fields.

**Contact Binaries with Additional Components. II. A Spectroscopic Search for Faint Tertiaries**

It is unclear how very close binary stars form, given that during the pre-main-sequence phase the component stars would have been inside each other. One hypothesis is that they formed farther apart but were brought in closer after formation by gravitational interaction with a third member of the system. If so, all close binaries should be members of triple (or higher order) systems. As a test of this prediction, we present a search for the signature of third components in archival spectra of close binaries. In our sample of 75 objects, 23 show evidence for the presence of a third component, down to a detection limit of tertiary flux contributions of about 0.8% at 5200 Å (considering only contact and semidetached binaries, we find 20 out of 66). In a homogeneous subset of 59 contact binaries, we are fairly confident that the 15 tertiaries we have detected are all tertiaries present with mass ratios 0.28 ≲ M3/M12 ≲ 0.75 and implied outer periods P ≲ 10^6 days. We find that if the frequency of tertiaries were the same as that of binary companions to solar-type stars, one would expect to detect about 12 tertiaries. In contrast, if all contact binaries were in triple systems, one would expect about 20. Thus, our results are not conclusive but are sufficiently suggestive to warrant further studies.

**Dwarfs in the Local Region**

We present lithium, carbon, and oxygen abundance data for a sample of nearby dwarfs (a total of 216 stars), including samples within 15 pc of the Sun, as well as a sample of local close giant planet (CGP) hosts (55 stars) and comparison stars. The spectroscopic data for this work have a resolution of R ~ 60,000, a signal-to-noise ratio > 150, and spectral coverage from 475 to 685 nm. We have redetermined parameters and derived additional abundances (Z > 10) for the CGP host and comparison samples. From our abundances for elements with Z > 6 we determine the mean abundance of all elements in the CGP hosts to range from 0.1 to 0.2 dex higher than nonhosts. However, when relative abundances ([x/Fe]) are considered we detect no differences in the samples. We find no difference in the lithium contents of the hosts versus the nonhosts. The planet hosts appear to be the metal-rich extension of local region abundances, and overall trends in the abundances are dominated by Galactic chemical evolution. A consideration of the kinematics of the sample shows that the planet hosts are spread through velocity space; they are not exclusively stars of the thin disk.

**Contact Binaries with Additional Components. I. The Extant Data**

We have attempted to establish observational evidence for the presence of distant companions that may have acquired and/or absorbed angular momentum during the evolution of multiple systems, thus facilitating or enabling the formation of contact binaries. In this preliminary investigation we use several techniques (some of them distance-independent) and mostly disregard the detection biases of individual techniques in an attempt to establish a lower limit to the frequency of triple systems. While the whole sample of 151 contact binary stars brighter than Vmax = 10 mag gives a firm lower limit of 42% ± 5%, the corresponding number for the much better observed northern-sky subsample is 59% ± 8%. These estimates indicate that most contact binary stars exist in multiple systems.
Lithium Abundances of F-, G-, and K-Type Stars: Profile-Fitting Analysis of the Li I 6708 DoubletAn extensive profile-fitting analysis was performed for the Li(+Fe)6707-6708Å feature of nearby 160 F-K dwarfs/subgiants (including27 planet-host stars) in the Galactic disk ( 7000 K ≳Teff ≳ 5000 K, -1 ≲ [Fe/H] ≲ +0.4), in orderto establish the photospheric lithium abundances of these stars. Thenon-LTE effect (though quantitatively insignificant) was taken intoaccount based on our statistical equilibrium calculations, which werecarried out on an adequate grid of models. Our results confirmed most ofthe interesting observational characteristics revealed by recentlypublished studies, such as the bimodal distribution of the Li abundancesfor stars at Teff ≳ 6000 K, the satisfactory agreementof the upper envelope of the A(Li) vs. [Fe/H] distribution with thetheoretical models, the existence of a positive correlation betweenA(Li) and the stellar mass, and the tendency of lower lithium abundancesof planet-host stars (as compared to stars without planets) at thenarrow transition'' region of 5900 K ≳ Teff ≳5800 K. The solar Li abundance derived from this analysis is 0.92 (H =12.00), which is by 0.24dex lower than the widely referenced standardvalue of 1.16. Spectroscopic Study on the Atmospheric Parameters of Nearby F--K Dwarfs and SubgiantsBased on a collection of high-dispersion spectra obtained at OkayamaAstrophysical Observatory, the atmospheric parameters (Teff,log g, vt, and [Fe/H]) of 160 mid-F through early-K starswere extensively determined by the spectroscopic method using theequivalent widths of Fe I and Fe II lines along with the numericaltechnique of Takeda et al. (2002, PASJ, 54, 451). The results arecomprehensively discussed and compared with the parameter values derivedby different approaches (e.g., photometric colors, theoreticalevolutionary tracks, Hipparcos parallaxes, etc.) as well as with thepublished values found in various literature. 
It has been confirmed that our purely spectroscopic approach yields fairly reliable and consistent results.

The 'solar model problem' solved by the abundance of neon in nearby stars

The interior structure of the Sun can be studied with great accuracy using observations of its oscillations, similar to seismology of the Earth. Precise agreement between helioseismological measurements and predictions of theoretical solar models has been a triumph of modern astrophysics. A recent downward revision by 25-35 per cent of the solar abundances of light elements such as C, N, O and Ne (ref. 2) has, however, broken this accordance: models adopting the new abundances incorrectly predict the depth of the convection zone, the depth profiles of sound speed and density, and the helium abundance. The discrepancies are far beyond the uncertainties in either the data or the model predictions. Here we report neon-to-oxygen ratios measured in a sample of nearby solar-like stars, using their X-ray spectra. The abundance ratios are all very similar and substantially larger than the recently revised solar value. The neon abundance in the Sun is quite poorly determined. If the Ne/O abundance in these stars is adopted for the Sun, the models are brought back into agreement with helioseismology measurements.

Kinematics of W Ursae Majoris type binaries and evidence of the two types of formation

We study the kinematics of 129 W UMa binaries and we discuss its implications on the contact binary evolution. The sample is found to be heterogeneous in the velocity space. That is, kinematically younger and older contact binaries exist in the sample. A kinematically young (0.5 Gyr) subsample (moving group) is formed by selecting the systems that satisfy the kinematical criteria of moving groups. After removing the possible moving group members and the systems that are known to be members of open clusters, the rest of the sample is called the field contact binary (FCB) group.
The FCB group is further divided into four groups according to the orbital period ranges. Then, a correlation is found in the sense that shorter-period less-massive systems have larger velocity dispersions than the longer-period more-massive systems. Dispersions in the velocity space indicate a 5.47-Gyr kinematical age for the FCB group. Compared with the field chromospherically active binaries (CABs), presumably detached binary progenitors of the contact systems, the FCB group appears to be 1.61 Gyr older. Assuming an equilibrium in the formation and destruction of CAB and W UMa systems in the Galaxy, this age difference is treated as an empirically deduced lifetime of the contact stage. Because the kinematical ages (3.21, 3.51, 7.14 and 8.89 Gyr) of the four subgroups of the FCB group are much longer than the 1.61-Gyr lifetime of the contact stage, the pre-contact stages of the FCB group must dominantly be producing the large dispersions. The kinematically young (0.5 Gyr) moving group covers the same total mass, period and spectral ranges as the FCB group. However, the very young age of this group does not leave enough room for pre-contact stages, and thus it is most likely that these systems were formed in the beginning of the main sequence or during the pre-main-sequence contraction phase, either by a fission process or most probably by fast spiralling in of two components in a common envelope.

New Minima of Selected Eclipsing Close Binaries

We present 180 CCD and photoelectric times of minima of selected close eclipsing binaries.

Inferring Coronal Structure from X-Ray Light Curves and Doppler Shifts: A Chandra Study of AB Doradus

The Chandra X-Ray Observatory continuously monitored the single cool star AB Dor for a period lasting 88 ks (1.98 Prot) in 2002 December with the Low-Energy Transmission Grating HRC-S. The X-ray light curve shows rotational modulation with three peaks that repeat in two consecutive rotation cycles.
These peaks may indicate the presence of compact emitting regions in the quiescent corona. Centroid shifts as a function of phase in the strongest line profile, O VIII λ18.97, indicate Doppler rotational velocities with a semiamplitude of 30+/-10 km s^-1. By taking these diagnostics into account along with constraints on the rotational broadening of line profiles (provided by archival Chandra High-Energy Transmission Grating Fe XVII and Far Ultraviolet Spectroscopic Explorer Fe XVIII profiles), we can construct a simple model of the X-ray corona that requires two components. One of these components is responsible for 80% of the X-ray emission and arises from the pole and/or a homogeneously distributed corona. The second component consists of two or three compact active regions that cause modulation in the light curve and contribute to the O VIII centroid shifts. These compact regions account for 16% of the emission and are located near the stellar surface with heights of less than 0.3R*. At least one of the compact active regions is located in the partially obscured hemisphere of the inclined star, while another of the active regions may be located at 40°. High-quality X-ray data such as these can test the models of the coronal magnetic field configuration as inferred from magnetic Zeeman Doppler imaging.

Stars within 15 Parsecs: Abundances for a Northern Sample

We present an abundance analysis for stars within 15 pc of the Sun located north of -30° declination. We have limited our abundance sample to absolute magnitudes brighter than +7.5 and have eliminated several A stars in the local vicinity. Our final analysis list numbers 114 stars. Unlike Allende Prieto et al. in their consideration of a very similar sample, we have enforced strict spectroscopic criteria in the determination of atmospheric parameters. Nevertheless, our results are very similar to theirs.
We determine the mean metallicity of the local region to be <[Fe/H]>=-0.07 using all stars and -0.04 when interlopers from the thick disk are eliminated.

X-ray observations of the old open stellar cluster NGC 188

I present the analysis results from XMM-Newton observations of the old open stellar cluster NGC 188, which has an age of about 7 Gyr and a near solar metallicity. 58 X-ray sources were detected in the field of view of the EPIC MOS and pn cameras, and 46 sources are new X-ray detections. Visible counterparts were found for 20 sources including the variable star WV 28, the W UMa-type binaries V371 Cep and V372 Cep, and the red giant V11. 9 X-ray sources are identified with probable cluster non-members, while 43 X-ray sources are of unknown membership. X-ray emission was detected from 6 stars with high membership probability above a luminosity threshold of 10^30 erg s^-1. This indicates the presence of very active late-type stars in NGC 188 in spite of its old age. The HR diagram positions of two of these stars just above the main sequence are reminiscent of those for W Ursae Majoris-type contact binaries. Two other sources could be either members of close binary systems or the product of the coalescence of W UMa type binaries into single stars. One X-ray source in NGC 188 is located at the bottom of the red giant branch in an evolutionary status similar to that of an FK Comae-type star. Another X-ray source detected in NGC 188 has the HR diagram position of an M type star. Its X-ray to bolometric luminosity ratio, greater than the canonical 10^-3 saturation level, suggests that the star was flaring during XMM-Newton observations. M stars are most likely the most numerous X-ray sources in NGC 188 at lower X-ray luminosity thresholds.

New V Light Curve and Ephemeris of the Binary System 44i Bootis

This paper presents the results of the photometric observations of the W UMa-type eclipsing binary 44i Bootis in V band, carried out in 1992.
A complete light curve together with times of minima was obtained. Light and period variations of the system are also discussed.

Period and light variations for the cool, overcontact binary BX Pegasi

New charge-coupled device photometric observations of the W UMa-type binary BX Pegasi (BX Peg) were collected on four nights from 1999 October to 2000 September. The light curve was covered completely in each season. Seven new times of minimum light were determined. It was found that the orbital period of the system has varied recently in a sinusoidal way, superimposed on a downward parabolic variation. The long-term period decrease rate is deduced as dP/dt=-8.62 or 9.59 × 10^-8 d yr^-1, which can be interpreted as either mass transfer from the more massive cool star to the less massive hot component, or as the combination of mass transfer and angular momentum loss due to a magnetic stellar wind. The period and amplitude of the sinusoidal period variation were calculated to be about 35.3 yr and 0.015 d, respectively. The light curves of BX Peg are asymmetric and show year-to-year light variability. A spot model has been applied to analyse these light curves. After using the light curves of 1999 as reference ones, we solve those of 2000 by adjusting only the spot parameters. One cool-spot model on the cool secondary satisfies the observed light curves of both 1999 and 2000 quite well and shows a good representation of the BX Peg system for both the photospheric and spot descriptions. The brightness variations of BX Peg are not coincident with the period variations and so do not conform to a prediction of the Applegate mechanism. We think the most likely cause of the cyclical variation is the light-time effect due to a third body, although no third light was detected in the light-curve analysis. If it exists, the hypothetical object could be a very red main-sequence star or a white dwarf.
We have solved anew the historical published light curve for only the spot parameters and these closely resemble our spot parameters. We speculate that this result is associated with the small coronal saturation of the cool star of the system.

CCD Times of Minima of Selected Eclipsing Binaries

682 CCD minima observations of 259 eclipsing binaries, made mainly by the author, are presented. The observed stars were chosen mainly from catalogue BRKA of the observing programme of the BRNO Variable Star Section of the CAS.

Eclipsing Binaries in the Blue Envelope of the Period-Color Diagram

Several interesting close eclipsing binaries in the short-period blue envelope of the period-color diagram are investigated. Their O-C diagrams are discussed and in several cases new solutions are given.

The Density of Coronal Plasma in Active Stellar Coronae

We have analyzed high-resolution X-ray spectra of a sample of 22 active stars observed with the High Energy Transmission Grating Spectrometer on Chandra in order to investigate their coronal plasma density. Densities were investigated using the lines of the He-like ions O VII, Mg XI, and Si XIII. Si XIII lines in all stars of the sample are compatible with the low-density limit (i.e., ne <~ 10^13 cm^-3), casting some doubt on results based on lower resolution Extreme Ultraviolet Explorer (EUVE) spectra finding densities ne > 10^13 cm^-3. Mg XI lines betray the presence of high plasma densities up to a few times 10^12 cm^-3 for most of the sources with higher X-ray luminosity (>~ 10^30 ergs s^-1); stars with higher LX and LX/Lbol tend to have higher densities at high temperatures. Ratios of O VII lines yield much lower densities of a few times 10^10 cm^-3, indicating that the "hot" and "cool" plasma resides in physically different structures. In the cases of EV Lac, HD 223460, Canopus, μ Vel, TY Pyx, and IM Peg, our results represent the first spectroscopic estimates of coronal density.
No trends in density-sensitive line ratios with the stellar parameters effective temperature and surface gravity were found, indicating that plasma densities are remarkably similar for stars with pressure scale heights differing by up to 3 orders of magnitude. Our findings imply remarkably compact coronal structures, especially for the hotter (~7 MK) plasma emitting the Mg XI lines, characterized by the coronal surface filling factor, fMgXI, ranging from 10^-4 to 10^-1, while we find fOVII values from a few times 10^-3 up to ~1 for the cooler (~2 MK) plasma emitting the O VII lines. We find that fOVII approaches unity at the same stellar surface X-ray flux level as characterizes solar active regions, suggesting that these stars become completely covered by active regions. At the same surface flux level, fMgXI is seen to increase more sharply with increasing surface flux. These results appear to support earlier suggestions that hot 10^7 K plasma in active coronae arises from flaring activity and that this flaring activity increases markedly once the stellar surface becomes covered with active regions. Comparison of our measured line fluxes with theoretical models suggests that significant residual model inaccuracies might be present and, in particular, that cascade contributions to forbidden and intercombination lines resulting from dielectronic recombination might be to blame.

Nearby stars of the Galactic disk and halo. III.

High-resolution spectroscopic observations of about 150 nearby stars or star systems are presented and discussed. The study of these and another 100 objects of the previous papers of this series implies that the Galaxy became reality 13 or 14 Gyr ago with the implementation of a massive, rotationally-supported population of thick-disk stars.
The very high star formation rate in that phase gave rise to a rapid metal enrichment and an expulsion of gas in supernovae-driven Galactic winds, but was followed by a star formation gap for no less than three billion years at the Sun's galactocentric distance. In a second phase, then, the thin disk - our familiar "Milky Way" - came on stage. Nowadays it traces the bright side of the Galaxy, but it is also embedded in a huge coffin of dead thick-disk stars that account for a large amount of baryonic dark matter. As opposed to this, cold-dark-matter-dominated cosmologies that suggest a more gradual hierarchical buildup through mergers of minor structures, though popular, are a poor description for the Milky Way Galaxy - and by inference many other spirals as well - if, as the sample implies, the fossil records of its long-lived stars do not stick to this paradigm. Apart from this general picture that emerges with reference to the entire sample stars, a good deal of the present work is however also concerned with detailed discussions of many individual objects. Among the most interesting we mention the blue straggler or merger candidates HD 165401 and HD 137763/HD 137778, the likely accretion of a giant planet or brown dwarf on 59 Vir in its recent history, and HD 63433 that proves to be a young solar analog at τ ≈ 200 Myr. Likewise, the secondary to HR 4867, formerly suspected non-single from the Hipparcos astrometry, is directly detectable in the high-resolution spectroscopic tracings, whereas the visual binary χ Cet is instead at least triple, and presumably even quadruple. With respect to the nearby young stars a complete account of the Ursa Major Association is presented, and we provide as well plain evidence for another, the "Hercules-Lyra Association", the likely existence of which was only realized in recent years.
On account of its rotation, chemistry, and age we do confirm that the Sun is very typical among its G-type neighbors; as to its kinematics, it appears however not unlikely that the Sun's known low peculiar space velocity could indeed be the cause for the weak paleontological record of mass extinctions and major impact events on our parent planet during the most recent Galactic plane passage of the solar system. Although the significance of this correlation certainly remains a matter of debate for years to come, we point in this context to the principal importance of the thick disk for a complete census with respect to the local surface and volume densities. Other important effects that can be ascribed to this dark stellar population comprise (i) the observed plateau in the shape of the luminosity function of the local FGK stars, (ii) a small though systematic effect on the basic solar motion, (iii) a reassessment of the term "asymmetrical drift velocity" for the remainder (i.e. the thin disk) of the stellar objects, (iv) its ability to account for the bulk of the recently discovered high-velocity blue white dwarfs, (v) its major contribution to the Sun's 220 km s^-1 rotational velocity around the Galactic center, and (vi) the significant flattening that it imposes on the Milky Way's rotation curve. Finally we note a high multiplicity fraction in the small but volume-complete local sample of stars of this ancient population. This in turn is highly suggestive for a star formation scenario wherein the few existing single stellar objects might only arise from either late mergers or the dynamical ejection of former triple or higher level star systems.

On the sizes of stellar X-ray coronae

Spatial information from stellar X-ray coronae cannot be assessed directly, but scaling laws from the solar corona make it possible to estimate sizes of stellar coronae from the physical parameters temperature and density.
While coronal plasma temperatures have long been available, we concentrate on the newly available density measurements from line fluxes of X-ray lines measured for a large sample of stellar coronae with the Chandra and XMM-Newton gratings. We compiled a set of 64 grating spectra of 42 stellar coronae. Line counts of strong H-like and He-like ions and Fe XXI lines were measured with the CORA single-purpose line fitting tool by \cite{newi02}. Densities are estimated from He-like f/i flux ratios of O VII and Ne IX representing the cooler (1-6 MK) plasma components. The densities scatter between log ne ≈ 9.5-11 from the O VII triplet and between log ne ≈ 10.5-12 from the Ne IX triplet, but we caution that the latter triplet may be biased by contamination from Fe XIX and Fe XXI lines. We find that low-activity stars (as parameterized by the characteristic temperature derived from H- and He-like line flux ratios) tend to show densities derived from O VII of no more than a few times 10^10 cm^-3, whereas no definitive trend is found for the more active stars. Investigating the densities of the hotter plasma with various Fe XXI line ratios, we found that none of the spectra consistently indicates the presence of very high densities. We argue that our measurements are compatible with the low-density limit for the respective ratios (≈ 5 × 10^12 cm^-3). These upper limits are in line with constant pressure in the emitting active regions. We focus on the commonly used \cite{rtv} scaling law to derive loop lengths from temperatures and densities assuming loop-like structures as identical building blocks. We derive the emitting volumes from direct measurements of ion-specific emission measures and densities. Available volumes are calculated from the loop-lengths and stellar radii, and are compared with the emitting volumes to infer filling factors.
For all stages of activity we find similar filling factors up to 0.1. Appendix A is only available in electronic form at http://www.edpsciences.org

X-ray spectroscopy of the W UMa-type binary 44 Bootis

44 Boo B, a W UMa-type binary system, was observed in June 2001 during one entire revolution period with the XMM-Newton observatory. The count rate in the 0.3 to 2 keV band is constant in average with 5 to 20% count rate increases reminiscent of flares. Spectral fitting of the EPIC spectra indicates a corona configuration with little contribution from quiet regions, similar to the Sun. On the contrary, the (2-9) × 10^6 K temperature range of the "cool" plasma suggests that the active corona around the two companions is densely filled with low-lying loops similar to those found in solar-type active regions. The 44 Boo O VII He-like triplet constrains the electron density to an upper limit ne < 8.6 × 10^10 cm^-3. We argue that this low-lying loop system may be overlaid by larger loops. Magnetic reconnection phenomena in this large loop system may explain the characteristic flare decay time in the light curve that implies loop lengths of about 16 × 10^9 cm. An extended corona around 44 Boo would explain the absence of eclipses in its X-ray light curve. The average element abundance in the 44 Boo corona is found to be lower than the solar photospheric value. The spectral analysis indicates enhanced abundances of oxygen and neon relative to iron which suggest an inverse FIP effect. Compared with other active binary systems such as RS CVn or BY Dra, 44 Boo has relatively less material at temperatures higher than 10^7 K and the temperature of its hottest plasma component appears to be lower.

On the properties of contact binary stars

We have compiled a catalogue of light curve solutions of contact binary stars. It contains the results of 159 light curve solutions. The properties of contact binary stars were studied using the catalogue data.
As is well known since Lucy's (\cite{Lucy68a},b) and Mochnacki's (\cite{Mochnacki81}) studies, primary components transfer their own energy to the secondary star via the common envelope around the two stars. This transfer was parameterized by a transfer parameter (ratio of the observed and intrinsic luminosities of the primary star). We prove that this transfer parameter is a simple function of the mass and luminosity ratios. We introduced a new type of contact binary stars: H subtype systems, which have a large mass ratio (q>0.72). These systems show behaviour in the luminosity ratio - transfer parameter diagram that is very different from that of other systems, and according to our results the energy transfer rate is less efficient in them than in other types of contact binary stars. We also show that different types of contact binaries have well defined locations on the mass ratio - luminosity ratio diagram. Several contact binary systems do not follow Lucy's relation (L2/L1 = (M2/M1)^0.92). No strict mass ratio - luminosity ratio relation of contact binary stars exists. Tables 2 and 3 are available in electronic form at http://www.edpsciences.org

S4N: A spectroscopic survey of stars in the solar neighborhood. The Nearest 15 pc

We report the results of a high-resolution spectroscopic survey of all the stars more luminous than M_V = 6.5 mag within 14.5 pc from the Sun. The Hipparcos catalog's completeness limits guarantee that our survey is comprehensive and free from some of the selection effects in other samples of nearby stars. The resulting spectroscopic database, which we have made publicly available, includes spectra for 118 stars obtained with a resolving power of R ≃ 50 000, continuous spectral coverage between 362-921 nm, and typical signal-to-noise ratios in the range 150-600. We derive stellar parameters and perform a preliminary abundance and kinematic analysis of the F-G-K stars in the sample.
The inferred metallicity ([Fe/H]) distribution is centered at about -0.1 dex, and shows a standard deviation of 0.2 dex. A comparison with larger samples of Hipparcos stars, some of which have been part of previous abundance studies, suggests that our limited sample is representative of a larger volume of the local thin disk. We identify a number of metal-rich K-type stars which appear to be very old, confirming the claims for the existence of such stars in the solar neighborhood. With atmospheric effective temperatures and gravities derived independently of the spectra, we find that our classical LTE model-atmosphere analysis of metal-rich (and mainly K-type) stars provides discrepant abundances from neutral and ionized lines of several metals. This ionization imbalance could be a sign of departures from LTE or inhomogeneous structure, which are ignored in the interpretation of the spectra. Alternatively, but seemingly unlikely, the mismatch could be explained by systematic errors in the scale of effective temperatures. Based on transitions of majority species, we discuss abundances of 16 chemical elements. In agreement with earlier studies we find that the abundance ratios to iron of Si, Sc, Ti, Co, and Zn become smaller as the iron abundance increases until approaching the solar values, but the trends reverse for higher iron abundances.
At any given metallicity, stars with a low galactic rotational velocity tend to have high abundances of Mg, Si, Ca, Sc, Ti, Co, Zn, and Eu, but low abundances of Ba, Ce, and Nd. The Sun appears deficient by roughly 0.1 dex in O, Si, Ca, Sc, Ti, Y, Ce, Nd, and Eu, compared to its immediate neighbors with similar iron abundances. Based on observations made with the 2.7 m telescope at the McDonald Observatory of the University of Texas at Austin (Texas), and the 1.52 m telescope at the European Southern Observatory (La Silla, Chile) under the agreement with the CNPq/Observatorio Nacional (Brazil). Tables 3-5 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/420/183

The Geneva-Copenhagen survey of the Solar neighbourhood. Ages, metallicities, and kinematic properties of 14 000 F and G dwarfs

We present and discuss new determinations of metallicity, rotation, age, kinematics, and Galactic orbits for a complete, magnitude-limited, and kinematically unbiased sample of 16 682 nearby F and G dwarf stars. Our 63 000 new, accurate radial-velocity observations for nearly 13 500 stars allow identification of most of the binary stars in the sample and, together with published uvbyβ photometry, Hipparcos parallaxes, Tycho-2 proper motions, and a few earlier radial velocities, complete the kinematic information for 14 139 stars. These high-quality velocity data are supplemented by effective temperatures and metallicities newly derived from recent and/or revised calibrations. The remaining stars either lack Hipparcos data or have fast rotation. A major effort has been devoted to the determination of new isochrone ages for all stars for which this is possible. Particular attention has been given to a realistic treatment of statistical biases and error estimates, as standard techniques tend to underestimate these effects and introduce spurious features in the age distributions.
Our ages agree well with those by Edvardsson et al. (\cite{edv93}), despite several astrophysical and computational improvements since then. We demonstrate, however, how strong observational and theoretical biases cause the distribution of the observed ages to be very different from that of the true age distribution of the sample. Among the many basic relations of the Galactic disk that can be reinvestigated from the data presented here, we revisit the metallicity distribution of the G dwarfs and the age-metallicity, age-velocity, and metallicity-velocity relations of the Solar neighbourhood. Our first results confirm the lack of metal-poor G dwarfs relative to closed-box model predictions (the "G dwarf problem"), the existence of radial metallicity gradients in the disk, the small change in mean metallicity of the thin disk since its formation and the substantial scatter in metallicity at all ages, and the continuing kinematic heating of the thin disk with an efficiency consistent with that expected for a combination of spiral arms and giant molecular clouds. Distinct features in the distribution of the V component of the space motion are extended in age and metallicity, corresponding to the effects of stochastic spiral waves rather than classical moving groups, and may complicate the identification of thick-disk stars from kinematic criteria.
More advanced analyses of this rich material will require careful simulations of the selection criteria for the sample and the distribution of observational errors. Based on observations made with the Danish 1.5-m telescope at ESO, La Silla, Chile, and with the Swiss 1-m telescope at Observatoire de Haute-Provence, France. Complete Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/418/989

Stellar Coronal Astronomy

Coronal astronomy is by now a fairly mature discipline, with a quarter century having gone by since the detection of the first stellar X-ray coronal source (Capella), and having benefitted from a series of major orbiting observing facilities. Several observational characteristics of coronal X-ray and EUV emission have been solidly established through extensive observations, and are by now common, almost text-book, knowledge. At the same time the implications of coronal astronomy for broader astrophysical questions (e.g. Galactic structure, stellar formation, stellar structure, etc.) have become appreciated. The interpretation of stellar coronal properties is however still often open to debate, and will need qualitatively new observational data to book further progress. In the present review we try to recapitulate our view on the status of the field at the beginning of a new era, in which the high sensitivity and the high spectral resolution provided by Chandra and XMM-Newton will address new questions which were not accessible before.

Period Changes of Two W UMa-Type Contact Binaries: RW Comae Berenices and CC Comae Berenices

From the present times of minimum light and those collected from the literature, changes in the orbital period of the two W UMa-type contact binaries RW Com and CC Com are analyzed. The results reveal that the period changes of these two systems show the same natures, with a short-term oscillation superposed on the secular decrease.
For RW Com,its period shows a secular decrease at a rate ofdP/dt=0.43×10-7 days yr-1. An oscillationwith a periodicity of 13.7 yr and an amplitude ofΔP=5.4×10-7 days is superposed on the seculardecrease. For CC Com, its period shows a secular decrease at a rate ofdP/dt=0.40×10-7 days yr-1. An oscillationwith a periodicity of 16.1 yr and an amplitude ofΔP=2.8×10-7 days is superposed on the seculardecrease. The period secular decreases of the two systems may beexplained by a mass-transfer rate of dm/dt=0.29×10-7Msolar yr-1 for RW Com anddm/dt=0.52×10-7 Msolar yr-1 forCC Com. The period short-term oscillations of the two systems may beexplained by the magnetic activity cycle model given by Applegate, andthe parameters for the magnetic activity cycle model are presented. Some anomalies in the occurrence of debris discs around main-sequence A and G starsDebris discs consist of large dust grains that are generated bycollisions of comets or asteroids around main-sequence stars, and thequantity and distribution of debris may be used to detect the presenceof perturbing planets akin to Neptune. We use stellar and disc surveysto compare the material seen around A- and G-type main-sequence stars.Debris is detected much more commonly towards A stars, even when acomparison is made only with G stars of comparable age. Detection ratesare consistent with disc durations of ~0.5 Gyr, which may occur at anytime during the main sequence. The higher detection rate for A stars canresult from this duration being a larger fraction of the main-sequencelifetime, possibly boosted by a globally slightly larger disc mass thanfor the G-type counterparts. 
The disc mass range at any given age is a factor of at least ~100 and any systematic decline with time is slow, with a power law estimated to not be steeper than t^-1/2. Comparison with models shows that dust can be expected as late as a few Gyr when perturbing planetesimals form slowly at large orbital radii. Currently, the Solar system has little dust because the radius of the Kuiper Belt is small and hence the time-scale to produce planetesimals was less than 1 Gyr. However, the apparently constant duration of ~0.5 Gyr when dust is visible is not predicted by the models.

Minimum Times of Several Eclipsing Binaries

We present 26 minima times of 11 eclipsing binaries, observed between 1996 and 1999.
# Does $\int_{\mathbb R} f(x)x^n dx = 0$ for $n=0,1,2,\ldots$ imply $f=0$ a.e.?
Let $f(x)$ be a real-valued function on $\mathbb{R}$ such that $x^nf(x), n=0,1,2,\ldots$ are Lebesgue integrable.
Suppose $$\int_{-\infty}^\infty x^n f(x) dx=0$$ for all $n=0,1,2,\ldots.$
Does it follow that $f(x)=0$ almost everywhere?
This is a duplicate question. – PEV Jan 29 '11 at 23:49
@TCL: Note also that the indefinite integral cannot equal $0$... – Arturo Magidin Jan 29 '11 at 23:59
This is more apt I think: math.stackexchange.com/questions/17026/… – Aryabhata Jan 29 '11 at 23:59
Neither of the questions linked in the comments above is a duplicate, but Hans Lundmark's answer on the first question shows that the answer to this one is no: math.stackexchange.com/questions/16831/… – Jonas Meyer Jan 30 '11 at 0:08
Let $f$ be such that its Fourier transform $\hat f$ satisfies \begin{equation*} \hat f^{(n)}(0) = 0 \quad \text{for all } n. \end{equation*} Note that this does not force $f$ itself to be zero, since there are infinitely flat functions (smooth functions, not identically zero, all of whose derivatives vanish at a point). By the standard rules relating Fourier transforms and derivatives, $\hat f^{(n)}(0)$ equals $\int x^n f(x)\,dx$ up to a constant factor, so you see that $\int x^n f(x)\, dx = 0$ for all $n$.
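A classical explicit counterexample (due to Stieltjes) shows the answer is no; the moment computation is a short calculus exercise:

```latex
% Stieltjes' example: all moments vanish, yet f is not a.e. zero.
\[
  f(x) =
  \begin{cases}
    e^{-x^{1/4}} \sin\bigl(x^{1/4}\bigr), & x > 0,\\
    0, & x \le 0.
  \end{cases}
\]
% Substituting x = t^4:
\[
  \int_0^\infty x^n f(x)\,dx
  = 4 \int_0^\infty t^{4n+3} e^{-t} \sin t \,dt
  = 4\,\operatorname{Im} \frac{(4n+3)!}{(1-i)^{4n+4}}
  = 4\,(4n+3)!\,2^{-(2n+2)} \sin\bigl(\pi(n+1)\bigr) = 0 .
\]
```

Since $(1-i)^{4n+4} = 2^{2n+2} e^{-i\pi(n+1)}$, the imaginary part carries a factor $\sin(\pi(n+1)) = 0$, so every moment vanishes even though $f \not\equiv 0$.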
|
{}
|
# Corner points of the feasible region for an LPP are $(0,2),(3,0),(6,0),(6,8)$ and $(0,5)$.
Let $F=4x+6y$ be the objective function. The minimum value of $F$ occurs at$\begin{array}{l}(A)\;(0,2)\;only \\ (B)\;(3,0)\;only\\(C)\;the\;mid\;point\;of\;the\;line\;segment\;joining\;the\;points\;(0,2) \;and\;(3,0)\;only\\(D)\;any\;point\;on\;the\;line\;segment\;joining\;the\;points\;(0,2) \;and\;(3,0)\end{array}$
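A quick check of the objective at the corner points (standard corner-point method; the values below are computed here, not given in the excerpt):

```python
# Evaluate F = 4x + 6y at every corner of the feasible region.
corners = [(0, 2), (3, 0), (6, 0), (6, 8), (0, 5)]
F = lambda x, y: 4 * x + 6 * y

values = {p: F(*p) for p in corners}
m = min(values.values())
argmins = [p for p, v in values.items() if v == m]

print(values)   # {(0, 2): 12, (3, 0): 12, (6, 0): 24, (6, 8): 72, (0, 5): 30}
print(argmins)  # [(0, 2), (3, 0)]
```

Since $F = 12$ at both $(0,2)$ and $(3,0)$, the minimum is attained at every point of the segment joining them, i.e. option (D).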
|
{}
|
## Performance of Modulation Systems with Noise
Referring to the information above, determine the detected S/N ratio (in dB) for an FM receiver with no pre-emphasis based on a baseband bandwidth of 15 kHz and a maximum frequency deviation of 45 kHz.
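The referenced baseband S/N figure is not reproduced in this excerpt, so only the FM improvement over that reference can be sketched. This assumes the common textbook figure of merit $3D^2(D+1)$ for FM with sinusoidal modulation and no pre-emphasis, where $D$ is the deviation ratio (conventions differ between texts, so treat the constant as an assumption):

```python
import math

W = 15e3      # baseband bandwidth, Hz
df = 45e3     # peak frequency deviation, Hz
D = df / W    # deviation ratio = 3

gain = 3 * D**2 * (D + 1)      # assumed textbook FM figure of merit
gain_db = 10 * math.log10(gain)
print(round(gain_db, 1))       # ~20.3 dB improvement over the reference baseband S/N
```

With $D = 3$ the factor is $3 \cdot 9 \cdot 4 = 108 \approx 20.3$ dB; the detected S/N is the (missing) baseband S/N plus this gain.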
|
{}
|
# Neutron diffraction studies of $Ge_xSe_{1-x}$ glasses
Rao, Ramesh N and Sangunni, KS and Gopal, ESR and Krishna, PSR and Chakravarthy, R and Dasannacharya, BA (1995) Neutron diffraction studies of $Ge_xSe_{1-x}$ glasses. In: Physica B: Condensed Matter, 213-21, pp. 561-563.
## Abstract
Neutron diffraction studies were performed on $Ge_xSe_{1-x}$ glasses for x = 0.1, 0.2, 0.33 and 0.4. The structure factor S(Q) shows maximum intermediate-range order for x = 0.33. Analysis of the two main peaks in T(r) shows that these glasses contain $Ge(Se_{1/2})_4$ tetrahedra. Glasses with $x\leq 0.2$ consist of Se chains cross-linked with Ge tetrahedra, while for $x\geq 0.2$ the Ge tetrahedra are present in both edge- and corner-shared configurations.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier.
Department/Centre: Division of Physical & Mathematical Sciences > Physics
Date Deposited: 07 May 2007
Last Modified: 19 Sep 2010 04:37
URI: http://eprints.iisc.ernet.in/id/eprint/10854
|
{}
|
# Tensor fields and multiplication
1. May 12, 2012
### Kontilera
Hello! I'm currently reading John Lee's books on different kinds of manifolds, and three questions have come up.
In 'Introduction to Smooth Manifolds' Lee writes that a tensor of rank 2 can always be decomposed into a symmetric and an antisymmetric tensor:
A = Sym(A) + Alt(A).
We define a product which takes the symmetric part of A \otimes B according to:
AB = Sym(A \otimes B),
while the wedge product describes the antisymmetric part:
A \wedge B = Alt(A \otimes B).
Now, first of all, the fact that a tensor of, let's say, rank 3 cannot be decomposed in this way seems quite counter-intuitive to me. How do you think of it? Is there any easy way to picture it?
Secondly: can we define a product for the remaining term (the part that is neither symmetric nor antisymmetric) for tensors of rank higher than 2? In other words:
A * B = (A \otimes B) - Sym(A \otimes B) - Alt(A \otimes B) ?
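The decomposition question can be checked numerically. This is a hypothetical illustration (not from Lee's book): with NumPy, Sym + Alt recovers a rank-2 tensor exactly, but at rank 3 a nonzero "mixed" remainder is left over — exactly the A * B-style term asked about.

```python
import itertools
import math

import numpy as np

def perm_sign(p):
    """Sign of a permutation, computed from its inversion count."""
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def sym(T):
    """Symmetrize T over all its indices: average over all index permutations."""
    k = T.ndim
    return sum(np.transpose(T, p) for p in itertools.permutations(range(k))) / math.factorial(k)

def alt(T):
    """Antisymmetrize T: sign-weighted average over all index permutations."""
    k = T.ndim
    return sum(perm_sign(p) * np.transpose(T, p)
               for p in itertools.permutations(range(k))) / math.factorial(k)

rng = np.random.default_rng(0)

A = rng.standard_normal((3, 3))      # rank 2: Sym(A) + Alt(A) recovers A exactly
assert np.allclose(sym(A) + alt(A), A)

T = rng.standard_normal((3, 3, 3))   # rank 3: a nonzero "mixed" part remains
mixed = T - sym(T) - alt(T)
assert not np.allclose(mixed, 0)
```

The leftover `mixed` part is what the Young-tableau machinery mentioned later in the thread classifies further.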
The last question concerns the total covariant derivative defined in the book on Riemannian manifolds. Lee first claims:
'Although the definition of a linear connection resembles the characterization of (2,1)-tensor fields [...], a linear connection is not a tensor field because it is not linear over C^∞(M) in Y, but instead satisfies the product rule.' (- 'Riemannian Manifolds: An Introduction to Curvature' by John Lee)
Later, however, he states that the total covariant derivative (the generalization of this linear connection) is a (k+1, l)-tensor field. This seems contradictory... or am I mixing something up?
Thanks for all the help!
Kindly Regards
Kontilera
2. May 12, 2012
### quasar987
Regarding the second question...
What Lee is saying is that a connection ∇: $\Gamma(TM)\times \Gamma(TM)\rightarrow \Gamma(TM)$ looks like a (2,1) tensor (compare with Lemma 2.4), but it is not one as it is not $C^{\infty}(M)$-linear in its second argument. Later, he defines the covariant derivative of a tensor, and remarks that if you take a tensor T of type (k,l), and take its covariant derivative ∇T, you get a tensor of type (k+1,l). In particular, if you take a vector field Y (tensor of type (0,1)) and jam it up the second slot of the connection map like so: ∇Y, you get a tensor, because the problem was in the second argument of ∇ and you've now eliminated that problem.
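The linearity distinction in the reply above can be written out explicitly; these are standard properties of any linear connection, for a smooth function f and vector fields X, Y:

```latex
% C^\infty(M)-linearity in Y fails because of the product (Leibniz) rule:
\[
  \nabla_X(fY) = f\,\nabla_X Y + (Xf)\,Y ,
\]
% and the extra (Xf)Y term is the obstruction. Once Y is fixed, however,
% the map X \mapsto \nabla_X Y is C^\infty(M)-linear:
\[
  \nabla_{fX} Y = f\,\nabla_X Y ,
\]
% which is why \nabla Y defines a genuine tensor field.
```

So the "problem slot" is eliminated exactly as described: fixing Y removes the only argument in which the Leibniz term appears.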
3. May 14, 2012
### Kontilera
Thanks for the answer! No one who could give some response to the idea of the new multiplication? Maybe it's just not that useful, so Lee doesn't mention it.
4. May 14, 2012
### quasar987
Well, sure, there is nothing in the world or beyond that prevents you from assigning to the symbols A * B the meaning A * B = (A \otimes B) - Sym(A \otimes B) - Alt(A \otimes B). It's just the first time I've seen this defined, which I guess is the essence of your question.
5. Jun 14, 2012
### spoirier
To understand why the symmetric and the antisymmetric parts are not everything for tensors of rank k > 2: just notice that there are k! permutations, which can send a suitable tensor (such as one made of a product of linearly independent vectors) to k! linearly independent tensors.
But the symmetric and antisymmetric parts are only 2 tensors, whose linear combinations form a 2-dimensional subspace that therefore cannot give back those k! dimensions.
This 2-dimensional subspace is stable under the group of permutations (preserved by the even ones, and reflected by the odd ones). The initial tensor cannot belong to it, because if it did then its images under permutations would belong to it too, which leads to a contradiction since they are linearly independent.
Now, there exists a systematic study of the many components of tensors beyond the symmetric and antisymmetric ones: the operations on the tensor space defined by applying symmetrization over some indices and then antisymmetrization over others decompose the space into a series of invariant subspaces that can be classified.
For details you can refer for example to the wikipedia article on "Young tableau" and connected articles ("Young symmetrizer" and "representation theory of the symmetric group").
|
{}
|
# zbMATH — the first resource for mathematics
## Liu, Ruihua
Author ID: liu.ruihua Published as: Liu, R.; Liu, R. H.; Liu, R.-H.; Liu, Rui Hua; Liu, Rui-Hua; Liu, Rui-hua; Liu, Ruihua
Documents Indexed: 63 Publications since 1966
#### Co-Authors
1 single-authored 13 Zhang, Qing 12 Yin, Gang George 3 Eloe, Paul W. 2 Florescu, Ionuţ 2 Mariani, Maria Cristina 2 Tu, Fengsheng 1 Bao, Zheng 1 Cheng, Yujia 1 Fan, Jinsong 1 Haghi, Majid 1 Lai, Jizhou 1 Li, Fang 1 Liu, Fuyao 1 Liu, Jianye 1 Liu, Jiapeng 1 Liu, Yuanjin 1 Mollapourasl, Reza 1 Raffoul, Youssef Naim 1 Ren, Dan 1 Sewell, Granville 1 Shen, Chaomin 1 Sun, Jianyun 1 Wang, Jianwen 1 Wang, Yongzhong 1 Yatsuki, M. 1 Zhu, Yan-hua
#### Serials
2 Acta Automatica Sinica 2 SIAM Journal on Applied Mathematics 2 Electronic Journal of Differential Equations (EJDE) 1 Computers and Electrical Engineering 1 Applied Mathematics and Optimization 1 Automatica 1 Journal of Optimization Theory and Applications 1 Stochastic Analysis and Applications 1 Applied Numerical Mathematics 1 Journal of Applied Mathematics and Stochastic Analysis 1 Dynamic Systems and Applications 1 International Journal of Computer Mathematics 1 SIAM Journal on Optimization 1 Transactions of Nanjing University of Aeronautics & Astronautics 1 Mathematical Finance 1 Chinese Quarterly Journal of Mathematics 1 International Journal of Theoretical and Applied Finance 1 Journal of Systems Science and Complexity 1 Multiscale Modeling & Simulation 1 Journal of Computer Applications 1 Mathematical Control and Related Fields
#### Fields
15 Probability theory and stochastic processes (60-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 8 Systems theory; control (93-XX) 7 Operations research, mathematical programming (90-XX) 4 Numerical analysis (65-XX) 3 Ordinary differential equations (34-XX) 2 Partial differential equations (35-XX) 2 Integral equations (45-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Statistics (62-XX) 2 Computer science (68-XX) 1 Associative rings and algebras (16-XX) 1 Category theory; homological algebra (18-XX)
#### Citations contained in zbMATH Open
37 Publications have been cited 337 times in 204 Documents
Recursive algorithms for stock liquidation: a stochastic optimization approach. Zbl 1021.91022
Yin, G.; Liu, R. H.; Zhang, Q.
2002
Optimality of $$(s,S)$$ policy with compound Poisson and diffusion demands: A quasi-variational inequalities approach. Zbl 1151.90304
Bensoussan, Alain; Liu, R. H.; Sethi, Suresh P.
2006
New numerical scheme for pricing American option with regime-switching. Zbl 1204.91127
Khaliq, A. Q. M.; Liu, R. H.
2009
Regime-switching recombining tree for option pricing. Zbl 1233.91284
Liu, R. H.
2010
Option pricing in a regime-switching model using the fast Fourier transform. Zbl 1140.91402
Liu, R. H.; Zhang, Q.; Yin, G.
2006
Optimal selling rules in a regime-switching exponential Gaussian diffusion model. Zbl 1175.91079
Eloe, P.; Liu, R. H.; Yatsuki, M.; Yin, G.; Zhang, Q.
2008
A lattice method for option pricing with two underlying assets in the regime-switching model. Zbl 1285.91143
Liu, R. H.; Zhao, J. L.
2013
Solving complex PDE systems for pricing American options with regime-switching by efficient exponential time differencing schemes. Zbl 1282.91377
Khaliq, A. Q. M.; Kleefeld, B.; Liu, R. H.
2013
A new tree method for pricing financial derivatives in a regime-switching mean-reverting model. Zbl 1254.91726
Liu, R. H.
2012
Stock liquidation via stochastic approximation using Nasdaq daily and intra-day data. Zbl 1128.91031
Yin, G.; Zhang, Q.; Liu, F.; Liu, R. H.; Cheng, Y.
2006
A near-optimal selling rule for a two-time-scale market model. Zbl 1108.91031
Zhang, Q.; Yin, G.; Liu, R. H.
2005
Double barrier option under regime-switching exponential mean-reverting process. Zbl 1163.91393
Eloe, P.; Liu, R. H.; Sun, J. Y.
2009
Pricing American options under multi-state regime switching with an efficient $$L$$-stable method. Zbl 1386.91168
Yousuf, M.; Khaliq, A. Q. M.; Liu, R. H.
2015
A tree approach to options pricing under regime-switching jump diffusion models. Zbl 1335.91106
Liu, R. H.; Nguyen, D.
2015
Numerical schemes for option pricing in regime-switching jump diffusion models. Zbl 1290.91180
Florescu, Ionut; Liu, Ruihua; Mariani, Maria Cristina; Sewell, Granville
2013
Solutions to a partial integro-differential parabolic system arising in the pricing of financial options in regime-switching jump diffusion models. Zbl 1294.35171
Florescu, Ionut; Liu, Ruihua; Mariani, Maria Cristina
2012
Bounded-input bounded-output stability of nonlinear time-varying differential systems. Zbl 0196.46101
Varaiya, P. P.; Liu, R.
1966
Optimal stopping of switching diffusions with state dependent switching rates. Zbl 1337.60075
Liu, R. H.
2016
A recombining tree method for option pricing with state-dependent switching rates. Zbl 1337.91102
Jiang, J. X.; Liu, R. H.; Nguyen, D.
2016
Boundedness and exponential stability of highly nonlinear stochastic differential equations. Zbl 1186.34081
Liu, Ruihua; Raffoul, Youssef
2009
Nearly optimal control of singularly perturbed Markov decision processes in discrete time. Zbl 0990.90125
Liu, R. H.; Zhang, Q.; Yin, G.
2001
Optimal investment and consumption with proportional transaction costs in regime-switching model. Zbl 1311.49105
Liu, Ruihua
2014
Response of Duffing system with delayed feedback control under bounded noise excitation. Zbl 1293.70074
Feng, Chang Shui; Liu, R.
2012
Generalized Christoffel functions for Jacobi-exponential weights on $$[-1, 1]$$. Zbl 1374.42050
Liu, R.; Shi, Y. G.
2016
Upper and lower solutions for regime-switching diffusions with applications in financial mathematics. Zbl 1238.91131
Eloe, P.; Liu, R. H.
2011
Minimal dimension realization and identifiability of input-output sequences. Zbl 0353.93028
Liu, R.; Suen, Lai Cherng
1977
A finite-horizon optimal investment and consumption problem using regime-switching models. Zbl 1305.91223
Liu, R. H.
2014
A fast implementation algorithm of TV inpainting model based on operator splitting method. Zbl 1250.68282
Li, Fang; Shen, Chaomin; Liu, Ruihua; Fan, Jinsong
2011
Asymptotically optimal controls of hybrid linear quadratic regulators in discrete time. Zbl 1081.93528
Liu, R. H.; Zhang, Q.; Yin, G.
2002
Nearly optimal control of nonlinear Markovian systems subject to weak and strong interactions. Zbl 1011.93112
Liu, R. H.; Zhang, Q.; Yin, G.
2001
A necessary and sufficient condition for feedback stabilization in a factorial ring. Zbl 0543.93055
Raman, Vijay R.; Liu, R.
1984
A lower bound on the chromatic number of a graph. Zbl 0233.05105
Myers, B. R.; Liu, R.
1972
On global linearization. Zbl 0244.93015
Liu, R.; Saeks, R.; Leake, R. J.
1971
Valuation of guaranteed equity-linked life insurance under regime-switching models. Zbl 1234.93095
Liu, R. H.; Zhang, Qing
2011
Large-deflection bending of symmetrically laminated rectilinearly orthotropic elliptical plates including transverse shear. Zbl 0893.73026
Liu, R.-H.; Xu, J.-C.; Zhai, S.-Z.
1997
A necessary and sufficient condition for stability of a perturbed system. Zbl 0611.93051
Huang, Qiu; Liu, R.
1987
Determination of the structure of multivariable stochastic linear systems. Zbl 0384.93012
Suen, Lai Cherng; Liu, R.
1978
#### Cited by 336 Authors
23 Yin, Gang George 22 Zhang, Qing 8 Ma, Jingtang 7 Bensoussan, Alain 7 Yamazaki, Kazutoshi 6 Liu, Ruihua 4 Benkherouf, Lakdere 4 Han, Zhengzhi 4 Khaliq, Abdul Q. M. 4 Liu, RongHua 4 Pérez Garmendia, Jose Luis 4 Siu, Tak Kuen 4 Song, Qingshuo 4 Yao, Dacheng 4 Yin, George Gang 4 Zhou, Zhiqiang 3 Company, Rafael 3 Ding, Deng 3 Egorova, Vera N. 3 Fan, Kun 3 Jódar Sanchez, Lucas Antonio 3 Liu, Rong 3 Lu, Xianggang 3 Mao, Xuerong 3 Wang, Rongming 3 Zhang, Junfeng 3 Zhu, Chao 3 Zhu, Fubo 3 Zhu, Songping 2 Abedi, Fakhreddin 2 Ademola, Adeleke Timothy 2 Ahmadi, Zaniar 2 Azari, Hossein 2 Ballestra, Luca Vincenzo 2 Bayraktar, Erhan 2 Cecere, Liliana 2 Chen, Xu 2 Ching, Wai-Ki 2 Cui, Zhenyu 2 Escobar, Marcos 2 Foroush Bastani, Ali 2 Haghi, Majid 2 He, Xinjiang 2 Hernández-Hernández, Daniel 2 Hieber, Peter 2 Hosseini, Seyed Mohammad 2 Ignatieva, Katja 2 Johnson, Michael James 2 Lars Kirkby, J. 2 Lei, Siulong 2 Leong, Wah June 2 Liu, Jingzhen 2 Liu, Ruoheng 2 Liu, Yuanjin 2 Ma, Zhidan 2 Mollapourasl, Reza 2 Neykova, Daniela 2 Nguyen, Duy-Minh 2 Ning, Lijuan 2 Ramponi, Alessandro 2 Sethi, Suresh P. 2 Sewell, Granville 2 Shen, Yang 2 Skaaning, Sonny 2 Soleymani, Fazlollah 2 Song, Andrew 2 Tangman, Désiré Yannick 2 Thakoor, Nawdha 2 Tour, Geraldine 2 Wang, Wenfei 2 Weerasinghe, Ananda P. N. 2 Xi, Fubao 2 Yang, Hailiang 2 Yang, Qingqing 2 Yin, Kewen 2 Yiu, Ka Fai Cedric 2 Yousuf, Muhammad Irfan 2 Zagst, Rudi 2 Zeng, Xiangchen 2 Zhang, Hanqin 2 Zhang, Zhimin 2 Ziveyi, Jonathan 1 Abou-El-Ela, A. M. A. 1 Adesina, Olufemi Adeyinka 1 Albrecher, Hansjörg 1 Alrabeei, Salah 1 Al-saedi, Ahmed Eid Salem 1 Altarovici, Albert 1 Asante-Asamani, E. O. 1 Azcue, Pablo 1 Bacciotti, Andrea 1 Badowski, Grazyna 1 Bai, Yang 1 Barron, Yonit 1 Basu, Ranojoy 1 Baurdoux, Erik Jan 1 Berntsson, Fredrik 1 Bian, Baojun 1 Boutoulout, Ali 1 Çakanyıldırım, Metin ...and 236 more Authors
#### Cited in 88 Serials
14 Automatica 11 Computers & Mathematics with Applications 11 Journal of Computational and Applied Mathematics 8 Journal of Optimization Theory and Applications 8 SIAM Journal on Control and Optimization 7 Insurance Mathematics & Economics 7 International Journal of Theoretical and Applied Finance 6 Quantitative Finance 5 Journal of Mathematical Analysis and Applications 5 International Journal of Computer Mathematics 5 Stochastic Processes and their Applications 4 Stochastic Analysis and Applications 4 European Journal of Operational Research 3 Applied Mathematics and Optimization 3 Systems & Control Letters 3 Operations Research Letters 3 Journal of Scientific Computing 3 Methodology and Computing in Applied Probability 3 Stochastic Models 3 Stochastics 3 Annals of Finance 2 International Journal of Control 2 Applied Numerical Mathematics 2 Numerical Methods for Partial Differential Equations 2 Journal of Economic Dynamics & Control 2 Journal of Applied Mathematics and Stochastic Analysis 2 Annals of Operations Research 2 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering 2 Mathematical Problems in Engineering 2 Mathematical Finance 2 Abstract and Applied Analysis 2 Discrete Dynamics in Nature and Society 2 Nonlinear Analysis. Real World Applications 2 Discrete and Continuous Dynamical Systems. Series B 2 Review of Derivatives Research 2 Journal of Industrial and Management Optimization 2 Nonlinear Analysis. Hybrid Systems 1 Acta Mechanica 1 Advances in Applied Probability 1 Discrete Applied Mathematics 1 Journal of Computational Physics 1 Mathematical Biosciences 1 Physica A 1 Chaos, Solitons and Fractals 1 BIT 1 Journal of Econometrics 1 Mathematics of Operations Research 1 Mathematica Slovaca 1 Operations Research 1 Rendiconti del Circolo Matemàtico di Palermo. Serie II 1 Statistics & Probability Letters 1 Acta Applicandae Mathematicae 1 Acta Mathematicae Applicatae Sinica. 
English Series 1 Computers & Operations Research 1 Asia-Pacific Journal of Operational Research 1 Applied Mathematics Letters 1 Mathematical and Computer Modelling 1 MCSS. Mathematics of Control, Signals, and Systems 1 Japan Journal of Industrial and Applied Mathematics 1 Linear Algebra and its Applications 1 Computational and Applied Mathematics 1 Monte Carlo Methods and Applications 1 Bernoulli 1 INFORMS Journal on Computing 1 European Journal of Control 1 Nonlinear Dynamics 1 Vietnam Journal of Mathematics 1 Mathematical Methods of Operations Research 1 Communications in Nonlinear Science and Numerical Simulation 1 The ANZIAM Journal 1 Differentsial’nye Uravneniya i Protsessy Upravleniya 1 ASTIN Bulletin 1 Advances in Difference Equations 1 International Journal of Control, I. Series 1 Frontiers of Mathematics in China 1 AStA. Advances in Statistical Analysis 1 Journal of Nonlinear Science and Applications 1 Asian Journal of Control 1 Mathematical Control and Related Fields 1 S$$\vec{\text{e}}$$MA Journal 1 Journal of Applied Analysis and Computation 1 Stochastic Systems 1 Journal of Mathematics 1 International Journal of Analysis 1 Chinese Journal of Mathematics 1 Open Mathematics 1 International Journal of Systems Science. Principles and Applications of Systems and Integration 1 Results in Applied Mathematics
#### Cited in 26 Fields
108 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 95 Probability theory and stochastic processes (60-XX) 72 Systems theory; control (93-XX) 44 Numerical analysis (65-XX) 33 Operations research, mathematical programming (90-XX) 23 Calculus of variations and optimal control; optimization (49-XX) 15 Partial differential equations (35-XX) 14 Ordinary differential equations (34-XX) 7 Statistics (62-XX) 5 Integral equations (45-XX) 5 Biology and other natural sciences (92-XX) 4 Real functions (26-XX) 3 Special functions (33-XX) 3 Integral transforms, operational calculus (44-XX) 3 Computer science (68-XX) 2 Combinatorics (05-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Mechanics of particles and systems (70-XX) 2 Information and communication theory, circuits (94-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Approximations and expansions (41-XX) 1 Operator theory (47-XX) 1 General topology (54-XX) 1 Mechanics of deformable solids (74-XX) 1 Fluid mechanics (76-XX)
|
{}
|
## Encyclopedia > Van Morrison
Article Content
# Van Morrison
Van Morrison (b. August 31, 1945) is the stage name of George Ivan Morrison, an Irish Rock 'n' Roller and exponent of the so-called Belfast Blues.
He initially came to prominence fronting the band he formed, Them, with whom he had a number of chart successes. After splitting from the band he pursued a successful and idiosyncratic solo musical career. Probably his best-known song is "Brown-Eyed Girl."
All Wikipedia text is available under the terms of the GNU Free Documentation License
|
{}
|
# Is sea water more conductive than pure water because “electrical current is transported by the ions in solution”?
Apparently, electrical charge is transported by the ions dissolved in water, is this true?
Yep. Pure water is an extremely bad conductor of electricity: it has very few ions. Water with an electrolyte (like NaCl) is a much better conductor, as the ions can migrate. Migration of ions is just like migration of electrons. If you place an imaginary surface inside the cell, there will be net negative charge crossing over to the positive terminal and vice versa. This is just like a current. Since there is a net current inside, it's conducting.
The equivalent conductance (a loco* chemistry concept) of a solution is simply the sum of the conductances of its constituent parts (Kohlrausch law). Here, $\Lambda$ denotes equivalent conductance of a portion of the solution, and $\lambda$ is the same for ions. Just a notation.
For pure water, $$\Lambda_{H_2O}=\lambda_{H^+}+\lambda_{OH^-}$$ Now, since the concentrations of $H^+$/$OH^-$ are small ($10^{-7}\ M$ at STP), the contributions of these $\lambda$ terms, and thus $\Lambda_{H_2O}$, are pretty tiny. For water with salt in it, we get $$\Lambda_{soln}=\Lambda_{H_2O}+\Lambda_{NaCl}=\lambda_{H^+}+\lambda_{OH^-}+\lambda_{Na^+}+\lambda_{Cl^-}$$
Since $NaCl$ dissociates nearly completely, we get large $\lambda$ contributions, and thus a large $\Lambda_{soln}$, which can be related to conductivity (in the aforementioned loco way).
So, pure/distilled water is an extremely bad conductor, while impure water with ions in it is a good conductor
* Loco because they assume a 1 m cell throughout, and don't keep the necessary $\text{m}^{-1}$ or whatever in their units. Due to this fixing of parameters, yes, we get that $\text{Area of plates}=\text{volume}$, which lets us relate it to concentration; but this gives us predictions for a specific case; when length of the cell is 1 m only. For some reasons these predictions are blindly applied to the general case. The whole thing gets confusing if you try to visualise it.
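The Kohlrausch sums above can be turned into rough numbers. This sketch uses standard limiting molar ionic conductivities at 25 °C and treats even 0.6 M salt water as if at infinite dilution, so the salt-water figure is an overestimate; the point is the orders of magnitude:

```python
# Limiting molar ionic conductivities at 25 degC, in S*cm^2/mol (standard table values).
LAMBDA = {"H+": 349.8, "OH-": 198.6, "Na+": 50.1, "Cl-": 76.3}

def kappa(ions):
    """Conductivity in S/cm from {ion: concentration in mol/L}, via Kohlrausch's
    law of independent migration (infinite-dilution approximation)."""
    return sum(LAMBDA[ion] * c * 1e-3 for ion, c in ions.items())  # mol/L -> mol/cm^3

kappa_pure = kappa({"H+": 1e-7, "OH-": 1e-7})                          # autoionization only
kappa_salt = kappa({"H+": 1e-7, "OH-": 1e-7, "Na+": 0.6, "Cl-": 0.6})  # roughly seawater NaCl

print(kappa_pure)               # ~5.5e-8 S/cm, matching the textbook value for pure water
print(kappa_salt / kappa_pure)  # salt water conducts about a million times better
```

Despite the crude dilution assumption, the ratio of roughly a million between salt water and pure water is in line with measured conductivities.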
Excellent, thank you. So it's not the electrons moving through the water but the ions moving to counter balance the charge of the electrodes (+ve ions going to an electrode with electrons and anions going to the electrode lacking electrons). So under AC this can occur forever, but under DC, at some point presumably, the current will eventually stop flowing once all the available ions have moved to either electrode. Is that correct? (and thanks for the foot note!... awesome :-) ) – AJP Mar 5 '12 at 8:21
Hey @Manishearth do you have any thoughts on "... presumably, the current will eventually stop flowing once all the available ions have moved to either electrode. Is that correct?". Thanks. – AJP Aug 6 '14 at 15:02
@AJP yes. This takes time (forever) though, since there is a solid-ion equilibrium for the impurities, when the ions start disappearing ions are created from the solid. But it will try to maintain the equilibrium so the number of new ions will be much smaller. This process will go on, effectively to infinity since the equilibrium laws won't let the amount of solid or ions disappear completely. Practically, there will be some point when these numbers become too small, but again, practically this is very far off. – Manishearth Aug 7 '14 at 11:13
|
{}
|
# KSEEB Solutions for Class 6 Maths Chapter 4 Basic Geometrical Ideas Ex 4.4
Students can Download Chapter 4 Basic Geometrical Ideas Ex 4.4 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka State Syllabus Class 6 Maths Chapter 4 Basic Geometrical Ideas Ex 4.4
Question 1.
Draw a rough sketch of a triangle ABC. Mark a point P in its interior and a point Q in its exterior. Is the point A in its exterior or in its interior?
Solution:
Point A lies on the given ∆ABC itself; it is neither in the interior nor in the exterior.
Question 2.
Solution:
a) Identify three triangles in the figure
∆ABC, ∆ACD, ∆ABD
b) Write the names of seven angles
∠BAC, ∠BAD, ∠DAC, ∠ABD, ∠ACD, ∠ADB and ∠ADC (taking D on $$\overline{\mathrm{BC}}$$, as the triangles in (a) indicate)
c) Write the names of six line segments
$$\overline{\mathrm{AB}}$$, $$\overline{\mathrm{BC}}$$, $$\overline{\mathrm{CA}}$$, $$\overline{\mathrm{AD}}$$, $$\overline{\mathrm{BD}}$$, $$\overline{\mathrm{CD}}$$
|
{}
|
Embedding optimal selection problems in a Poisson process. (English) Zbl 0745.60040
A version of the classical “secretary problem” is considered, where the number $$N$$ of candidates available is a random variable. The $$N$$ candidates arrive at times which are independent and uniformly distributed on $$(0,1)$$, and the objective is to minimize a loss which is a non-decreasing function of the ranks of the candidates. This problem has been variously studied by J. Gianini and S. M. Samuels [Ann. Probab. 4, 418-432 (1976; Zbl 0341.60033)], by R. Cowan and J. Zabczyk [Theory Probab. Appl. 23, 584-592 (1979); reprinted from Teor. Veroyatn. Primen. 23, 606-614 (1978; Zbl 0396.62063)], by W. J. Stewart [Applied probability – computer science: the interface, Proc. Meet., Boca Raton/FL 1981, Vol. 1, Prog. Comput. Sci. 2, 275-296 (1982; Zbl 0642.60075)], and by F. T. Bruss [Ann. Probab. 12, 882- 889 (1984; Zbl 0553.60047)] and F. T. Bruss and S. M. Samuels [ibid. 15, 824-830 (1987; Zbl 0592.60034)].
The main contribution of this paper is to show that by embedding the process in a Poisson process it is possible to obtain all the distributional results necessary to derive the optimal policy. The special case where $$N$$ is geometrically distributed is particularly simple, and the optimal policy can be found explicitly; even in the case where $$N$$ has an arbitrary distribution, it is shown that routine calculus methods can be used to prove that the optimal policy is of a certain conjectured form.
##### MSC:
60G40 Stopping times; optimal stopping problems; gambling theory
##### References:
[1] Brémaud, P., Point processes and queues: martingale dynamics, (1981), Springer New York · Zbl 0478.60004
[2] Bruss, F.T., A unified approach to a class of best choice problems with an unknown number of options, Ann. Probab., 12, 3, 882-889, (1984) · Zbl 0553.60047
[3] Bruss, F.T., Invariant record processes and applications to optimal selection modelling, Stochastic Process. Appl., 30, 303-316, (1988) · Zbl 0665.60049
[4] Bruss, F.T.; Samuels, S.M., A unified approach to a class of optimal selection problems with an unknown number of options, Ann. Probab., 15, 2, 824-830, (1987) · Zbl 0592.60034
[5] Bruss, F.T.; Samuels, S.M., Conditions for quasi-stationarity of the Bayes rule in selection problems with an unknown number of rankable options, Ann. Probab., 18, 2, (1990) · Zbl 0704.62067
[6] Cowan, R.; Zabczyk, J., An optimal selection problem associated with the Poisson process, Theory Probab. Appl., 23, 584-592, (1978) · Zbl 0426.62058
[7] Gaver, D.P., Random record models, J. Appl. Probab., 13, 538-547, (1976) · Zbl 0399.60083
[8] Gianini, J.; Samuels, S.M., The infinite secretary problem, Ann. Probab., 4, 2, 418-432, (1976) · Zbl 0341.60033
[9] Goldie, C.M.; Rogers, L.C.G., The k-record processes are i.i.d., Z. Wahrsch. Verw. Gebiete, 67, 197-211, (1984) · Zbl 0535.60037
[10] Stewart, T.J., The secretary problem with an unknown number of options, Oper. Res., 29, 130-145, (1981) · Zbl 0454.90042
# 1987 AHSME Problems/Problem 10
## Problem
How many ordered triples $(a, b, c)$ of non-zero real numbers have the property that each number is the product of the other two?
$\textbf{(A)}\ 1 \qquad \textbf{(B)}\ 2 \qquad \textbf{(C)}\ 3 \qquad \textbf{(D)}\ 4 \qquad \textbf{(E)}\ 5$
## Solution
We have $ab = c$, $bc = a$, and $ca = b$, so multiplying these three equations together gives $a^{2}b^{2}c^{2} = abc \implies abc(abc-1)=0$, and as $a$, $b$, and $c$ are all non-zero, we cannot have $abc = 0$, so we must have $abc = 1$. Now substituting $bc = a$ gives $a(bc) = 1 \implies a^2 = 1 \implies a = \pm 1$. If $a = 1$, then the system becomes $b = c, bc = 1, c = b$, so either $b = c = 1$ or $b = c = -1$, giving $2$ solutions. If $a = -1$, the system becomes $-b = c, bc = -1, -c = b$, so $-b = c = 1$ or $b = -c = 1$, giving another $2$ solutions. Thus the total number of solutions is $2 + 2 = 4$, which is answer $\boxed{\text{D}}$.
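Since the solution shows each variable squares to 1, every solution has $a, b, c \in \{-1, 1\}$, and the count can be sanity-checked by brute force. This quick script is an illustration, not part of the original solution:

```python
from itertools import product

# Each of a, b, c squares to 1 (shown in the solution above), so it
# suffices to search the eight sign patterns in {-1, 1}^3.
solutions = [
    (a, b, c)
    for a, b, c in product((-1, 1), repeat=3)
    if a * b == c and b * c == a and c * a == b
]
print(len(solutions))  # 4 ordered triples, matching answer (D)
```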
Mukai's program for curves on a K3 surface
@article{Arbarello2013MukaisPF,
title={Mukai's program for curves on a K3 surface},
author={Enrico Arbarello and Andrea Bruno and Edoardo Sernesi},
journal={arXiv: Algebraic Geometry},
year={2013}
}
• Published 2 September 2013
• Mathematics
• arXiv: Algebraic Geometry
Let C be a general element in the locus of curves in M_g lying on some K3 surface, where g is congruent to 3 mod 4 and greater than or equal to 15. Following Mukai's ideas, we show how to reconstruct the K3 surface as a Fourier-Mukai transform of a Brill-Noether locus of rank two vector bundles on C.
Mukai’s program (reconstructing a K3 surface from a curve) via wall-crossing
Abstract: Let C be a curve of genus $g=11$ or $g\geq 13$ on a K3 surface whose Picard group is generated by the curve class $[C]$. We use wall-crossing with respect to Bridgeland stability …
Embedding pointed curves in K3 surfaces
• Mathematics
• 2013
We analyze morphisms from pointed curves to K3 surfaces with a distinguished rational curve, such that the marked points are taken to the rational curve, perhaps with specified cross ratios. …
Maximal variation of curves on K3 surfaces
• Mathematics
• 2021
We prove that curves in a non-primitive, base point free, ample linear system on a K3 surface have maximal variation. The result is deduced from general restriction theorems applied to the tangent …
Solvability of curves on surfaces
• Mathematics
• 2017
In this article, we study subloci of solvable curves in $\mathcal{M}_g$ which are contained in either a K3-surface or a quadric or a cubic surface. We give a bound on the dimension of such subloci. …
Rank two vector bundles on polarised Halphen surfaces and the Gauss-Wahl map for du Val curves
• Mathematics
• 2017
A genus-g du Val curve is a degree-3g plane curve having 8 points of multiplicity g, one point of multiplicity g-1, and no other singularity. We prove that the corank of the Gauss-Wahl map of a …
On the Brill-Noether loci of a curve embedded in a K3 surface
We slightly extend a previous result concerning the injectivity of a map of moduli spaces and we use this result to construct curves whose Brill-Noether loci have unexpected dimension.
Curves on surfaces with trivial canonical bundle
We survey some results concerning Severi varieties and variation in moduli of curves lying on K3 surfaces or on abelian surfaces. A number of open problems is listed and some work in progress is …
Moduli of curves on Enriques surfaces
• Mathematics
• 2019
We compute the number of moduli of all irreducible components of the moduli space of smooth curves on Enriques surfaces. In most cases, the moduli maps to the moduli space of Prym curves are …
On hyperplane sections of K3 surfaces
• 2017
Let C be a Brill–Noether–Petri curve of genus g > 12. We prove that C lies on a polarised K3 surface, or on a limit thereof, if and only if the Gauss–Wahl map for C is not surjective. The proof is …
On hyperplane sections of K3 surfaces
• Mathematics
• 2015
Let C be a Brill-Noether-Petri curve of genus g\geq 12. We prove that C lies on a polarized K3 surface, or on a limit thereof, if and only if the Gauss-Wahl map for C is not surjective. The proof is …
References
Showing 1-10 of 24 references
Pencils of minimal degree on curves on a K3 surface.
• Mathematics
• 1995
The gonality of a smooth irreducible projective curve C is the minimal degree of a (necessarily base point free and complete) $g^1_d$ on C. The main object of this note is the following problem: given a …
Stability of rank-3 Lazarsfeld-Mukai bundles on K3 surfaces
Given an ample line bundle L on a K3 surface S, we study the slope stability with respect to L of rank-3 Lazarsfeld-Mukai bundles associated with complete, base point free nets of type g^2_d on …
Nodal Curves with General Moduli on K3 Surfaces
• Mathematics
• 2007
We investigate the modular properties of nodal curves on a low genus K3 surface. We prove that a general genus g curve C is the normalization of a δ-nodal curve X sitting on a primitively polarized …
Projective degenerations of K3 surfaces, Gaussian maps, and Fano threefolds
• Mathematics
• 1993
Summary: In this article we exhibit certain projective degenerations of smooth K3 surfaces of degree 2g−2 in ℙ^g (whose Picard group is generated by the hyperplane class), to a union of two rational …
Minimal resolutions, Chow forms of K3 surfaces and Ulrich bundles
• Mathematics
• 2012
The Minimal Resolution Conjecture (MRC) for points on a projective variety X predicts that the Betti numbers of general sets of points in X are as small as the geometry (Hilbert function) of X …
Non-Abelian Brill-Noether theory and Fano 3-folds
A Brill-Noether locus is a subscheme of the moduli of bundles E over a curve C defined by requiring E to have a given number of sections, or homomorphisms from another bundle. There are a number of …
Hyperplane sections of Calabi-Yau varieties
Theorem: If W is a smooth complex projective variety with $h^1(\mathcal{O}_W) = 0$, then a sufficiently ample smooth divisor X on W cannot be a hyperplane section of a Calabi-Yau variety, unless W is …
The geometry of moduli spaces of sheaves
• Mathematics
• 1997
Preface to the second edition. Preface to the first edition. Introduction. Part I. General Theory: 1. Preliminaries 2. Families of sheaves 3. The Grauert-Mülich Theorem 4. Moduli spaces. Part II. …
On Cohomology of the Square of an Ideal Sheaf
For a smooth subvariety $X\subset\Bbb P^N$, consider (analogously to projective normality) the vanishing condition $H^1(\Bbb P^N,\mathcal I^2_X(k))=0$, $k\ge3$. This condition is shown to be satisfied …
SMOOTH CURVES ON PROJECTIVE K3 SURFACES
In this paper we give for all $n \geq 2$, $d>0$, $g \geq 0$ necessary and sufficient conditions for the existence of a pair $(X,C)$, where $X$ is a $K3$ surface of degree $2n$ in $\mathrm{P}^{n+1}$ …
# Fight Finance
Which of the following statements about Macaulay duration is NOT correct? The Macaulay duration:
A fixed coupon bond’s modified duration is 20 years, and yields are currently 10% pa compounded annually. Which of the following statements about the bond is NOT correct?
Which of the following statements about Macaulay duration is NOT correct? The Macaulay duration:
A fixed coupon bond’s modified duration is 10 years, and yields are currently 5% pa compounded annually. Which of the following statements about the bond is NOT correct?
Which of the following statements about bond convexity is NOT correct?
Find the Macaulay duration of a 2 year 5% pa annual fixed coupon bond which has a $100 face value and currently has a yield to maturity of 8% pa. The Macaulay duration is:

Find the Macaulay duration of a 2 year 5% pa semi-annual fixed coupon bond which has a $100 face value and currently has a yield to maturity of 8% pa. The Macaulay duration is:
Assume that the market portfolio has a duration of 15 years and an individual stock has a duration of 20 years.
What can you say about the stock's beta with respect to the market portfolio? The stock's beta is likely to be:
Which of the following assets would have the shortest duration?
A stock has a beta of 0.5. Its next dividend is expected to be $3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates.
What is the Macaulay duration of the stock now?
A stock's duration increases since its dividend growth rate increases while its total required return on equity remains unchanged.
$$D_\text{Macaulay} = \dfrac{1+r}{r-g}$$
What will be the effect on the stock's CAPM beta? Assume that there's no change in the risk free rate or market risk premium and that the dividend growth rate increases due to the company cutting dividends to re-invest in zero-NPV projects. The firm is unlevered. The company's equity beta will:
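Several of the questions above ask for the Macaulay duration of a small coupon bond. A minimal sketch of the computation (the function name and signature are mine, not from the quiz): discount each cash flow at the per-period yield, then take the PV-weighted average of the payment times.

```python
def macaulay_duration(face, coupon_rate, ytm, years, freq=1):
    """Macaulay duration (in years) of a fixed coupon bond.

    coupon_rate and ytm are annual rates; coupons are paid and yields
    compounded `freq` times per year.
    """
    n = int(years * freq)               # number of coupon periods
    coupon = face * coupon_rate / freq  # coupon paid each period
    y = ytm / freq                      # per-period yield
    price = weighted = 0.0
    for t in range(1, n + 1):
        cf = coupon + (face if t == n else 0.0)
        pv = cf / (1.0 + y) ** t
        price += pv
        weighted += (t / freq) * pv     # payment time in years, PV-weighted
    return weighted / price

# 2 year 5% pa annual coupon bond, $100 face, 8% pa yield to maturity:
print(round(macaulay_duration(100, 0.05, 0.08, 2), 4))           # 1.9511
# The semi-annual variant of the same question:
print(round(macaulay_duration(100, 0.05, 0.08, 2, freq=2), 4))   # 1.9257
```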
## Taiwanese Journal of Mathematics
### VARIATIONAL METHODS TO MIXED BOUNDARY VALUE PROBLEM FOR IMPULSIVE DIFFERENTIAL EQUATIONS WITH A PARAMETER
Yu Tian, Jun Wang, and Weigao Ge
#### Abstract
In this paper, we study a mixed boundary value problem for second-order impulsive differential equations with a parameter. By using critical point theory, several new existence results are obtained. This is one of the first times that impulsive boundary value problems have been studied by means of variational methods.
#### Article information
Source
Taiwanese J. Math., Volume 13, Number 4 (2009), 1353-1370.
Dates
First available in Project Euclid: 18 July 2017
Permanent link to this document
https://projecteuclid.org/euclid.twjm/1500405513
Digital Object Identifier
doi:10.11650/twjm/1500405513
Mathematical Reviews number (MathSciNet)
MR2543748
Zentralblatt MATH identifier
1189.34060
#### Citation
Tian, Yu; Wang, Jun; Ge, Weigao. VARIATIONAL METHODS TO MIXED BOUNDARY VALUE PROBLEM FOR IMPULSIVE DIFFERENTIAL EQUATIONS WITH A PARAMETER. Taiwanese J. Math. 13 (2009), no. 4, 1353--1370. doi:10.11650/twjm/1500405513. https://projecteuclid.org/euclid.twjm/1500405513
#### References
• R. P. Agarwal, D. O'Regan, Multiple nonnegative solutions for second order impulsive differential equations, Appl. Math. Comput., 114 (2000), 51-59.
• D. Averna, G. Bonanno, A three critical points theorem and its applications to the ordinary Dirichlet problem, Topol. Methods Nonlinear Anal., 22 (2003), 93-104.
• D. Franco, J. J. Nieto, Maximum principle for periodic impulsive first order problems, J. Comput. Appl. Math., 88 (1998), 149-159.
• Guo Dajun, Nonlinear Functional Analysis, Shandong science and technology Press, Shandong, China, 1985.
• V. Lakshmikantham, D. D. Bainov, P. S. Simeonov, Theory of Impulsive Differential Equations, Series Modern Appl. Math., vol. 6, World Scientific, Teaneck, NJ, 1989.
• E. K. Lee, Y. H. Lee, Multiple positive solutions of singular two point boundary value problems for second order impulsive differential equation, Appl. Math. Comput., 158 (2004), 745-759.
• J. Li, J. J. Nieto, J. Shen, Impulsive periodic boundary value problems of first-order differential equations, J. Math. Anal. Appl., 325 (2007), 226-236.
• Xiaoning Lin, Daqing Jiang, Multiple positive solutions of Dirichlet boundary value problems for second order impulsive differential equations, J. Math. Anal. Appl., 321 (2006), 501-514.
• J. Mawhin, M. Willem, Critical Point Theory and Hamiltonian Systems, Springer-Verlag, Berlin, 1989.
• J. J. Nieto, R. Rodriguez-Lopez, Periodic boundary value problem for non-Lipschitzian impulsive functional differential equations, J. Math. Anal. Appl., 318 (2006), 593-610.
• J. J. Nieto, R. Rodriguez-Lopez, New comparison results for impulsive integro-differential equations and applications, J. Math. Anal. Appl., 328 (2007), 1343-1368.
• D. Qian, X. Li, Periodic solutions for ordinary differential equations with sublinear impulsive effects, J. Math. Anal. Appl., 303 (2005), 288-303.
• P. H. Rabinowitz, Minimax Methods in Critical Point Theory with Applications to Differential Equations, in: CBMS Regional Conf. Ser. in Math., Vol. 65, American Mathematical Society, Providence, RI, 1986.
• B. Ricceri, On a three critical points theorem, Arch. Math. $($Basel$)$, 75 (2000), 220-226.
• B. Ricceri, A general multiplicity theorem for certain nonlinear equations in Hilbert spaces, Proc. Amer. Math. Soc., 133 (2005), 3255-3261.
• Y. V. Rogovchenko, Impulsive evolution systems: Main results and new trends, Dynam. Contin. Discrete Impuls. Systems, 3 (1997), 57-88.
• A. M. Samoilenko, N. A. Perestyuk, Impulsive Differential Equations, World Scientific, Singapore, 1995.
• Y. Tian, W. G. Ge, Periodic solutions of non-autonomous second-order systems with a p-Laplacian, Nonlinear Anal., 66 (2007), 192-203.
• Y. Tian, W. G. Ge, Multiple positive solutions for a second-order Sturm-Liouville boundary value problem with a p-Laplacian via variational methods, Rocky Mountain J. Math., in press.
• Y. Tian, W. G. Ge, Applications of Variational Methods to Boundary Value Problem for Impulsive Differential Equations, Proceedings of Edinburgh Mathematical Society, 51 (2008), 509-527.
# Properties
Label: 2100.2.a.n
Level: 2100
Weight: 2
Character orbit: 2100.a
Self dual: yes
Analytic conductor: 16.769
Analytic rank: 0
Dimension: 1
CM: no
Inner twists: 1
# Related objects
## Newspace parameters
Level: $$N = 2100 = 2^{2} \cdot 3 \cdot 5^{2} \cdot 7$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 2100.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$16.7685844245$$
Analytic rank: $$0$$
Dimension: $$1$$
Coefficient field: $$\mathbb{Q}$$
Coefficient ring: $$\mathbb{Z}$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 420)
Fricke sign: $$-1$$
Sato-Tate group: $$\mathrm{SU}(2)$$
## $q$-expansion
$$f(q)$$ $$=$$ $$q + q^{3} - q^{7} + q^{9} + O(q^{10})$$ $$q + q^{3} - q^{7} + q^{9} + 4q^{11} - 2q^{13} - 2q^{17} - 2q^{19} - q^{21} + 6q^{23} + q^{27} + 6q^{29} + 6q^{31} + 4q^{33} + 4q^{37} - 2q^{39} + 4q^{43} - 4q^{47} + q^{49} - 2q^{51} + 2q^{53} - 2q^{57} + 4q^{59} - 2q^{61} - q^{63} + 12q^{67} + 6q^{69} - 8q^{71} - 14q^{73} - 4q^{77} + 16q^{79} + q^{81} - 16q^{83} + 6q^{87} + 16q^{89} + 2q^{91} + 6q^{93} + 14q^{97} + 4q^{99} + O(q^{100})$$
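The coefficients above can be spot-checked for the multiplicativity that any Hecke eigenform satisfies, $a_{mn} = a_m a_n$ whenever $\gcd(m,n)=1$. The dictionary below is simply transcribed from the $q$-expansion; the check itself is my own illustration:

```python
from math import gcd

# Coefficients a_n read off the q-expansion above (n: a_n).
a = {1: 1, 3: 1, 7: -1, 9: 1, 11: 4, 13: -2, 17: -2, 19: -2, 21: -1, 23: 6,
     27: 1, 29: 6, 31: 6, 33: 4, 37: 4, 39: -2, 43: 4, 47: -4, 49: 1, 51: -2,
     53: 2, 57: -2, 59: 4, 61: -2, 63: -1, 67: 12, 69: 6, 71: -8, 73: -14,
     77: -4, 79: 16, 81: 1, 83: -16, 87: 6, 89: 16, 91: 2, 93: 6, 97: 14,
     99: 4}

# For a normalized Hecke eigenform, a_{mn} = a_m * a_n for coprime m, n.
violations = [(m, n) for m in a for n in a
              if m < n and gcd(m, n) == 1 and m * n in a
              and a[m * n] != a[m] * a[n]]
print(violations)
```

For example, $a_{21} = a_3 a_7 = 1 \cdot (-1) = -1$ and $a_{91} = a_7 a_{13} = (-1)(-2) = 2$, agreeing with the expansion.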
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1.1 0 0 1.00000 0 0 0 −1.00000 0 1.00000 0
## Inner twists
This newform does not admit any (nontrivial) inner twists.
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 2100.2.a.n 1
3.b odd 2 1 6300.2.a.b 1
4.b odd 2 1 8400.2.a.o 1
5.b even 2 1 2100.2.a.i 1
5.c odd 4 2 420.2.k.b 2
15.d odd 2 1 6300.2.a.r 1
15.e even 4 2 1260.2.k.a 2
20.d odd 2 1 8400.2.a.bm 1
20.e even 4 2 1680.2.t.g 2
35.f even 4 2 2940.2.k.b 2
35.k even 12 4 2940.2.bb.f 4
35.l odd 12 4 2940.2.bb.a 4
60.l odd 4 2 5040.2.t.d 2
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
420.2.k.b 2 5.c odd 4 2
1260.2.k.a 2 15.e even 4 2
1680.2.t.g 2 20.e even 4 2
2100.2.a.i 1 5.b even 2 1
2100.2.a.n 1 1.a even 1 1 trivial
2940.2.k.b 2 35.f even 4 2
2940.2.bb.a 4 35.l odd 12 4
2940.2.bb.f 4 35.k even 12 4
5040.2.t.d 2 60.l odd 4 2
6300.2.a.b 1 3.b odd 2 1
6300.2.a.r 1 15.d odd 2 1
8400.2.a.o 1 4.b odd 2 1
8400.2.a.bm 1 20.d odd 2 1
## Atkin-Lehner signs
$$p$$ Sign
$$2$$ $$-1$$
$$3$$ $$-1$$
$$5$$ $$-1$$
$$7$$ $$1$$
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(2100))$$:
$$T_{11} - 4$$ $$T_{13} + 2$$ $$T_{17} + 2$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ 1
$3$ $$1 - T$$
$5$ 1
$7$ $$1 + T$$
$11$ $$1 - 4 T + 11 T^{2}$$
$13$ $$1 + 2 T + 13 T^{2}$$
$17$ $$1 + 2 T + 17 T^{2}$$
$19$ $$1 + 2 T + 19 T^{2}$$
$23$ $$1 - 6 T + 23 T^{2}$$
$29$ $$1 - 6 T + 29 T^{2}$$
$31$ $$1 - 6 T + 31 T^{2}$$
$37$ $$1 - 4 T + 37 T^{2}$$
$41$ $$1 + 41 T^{2}$$
$43$ $$1 - 4 T + 43 T^{2}$$
$47$ $$1 + 4 T + 47 T^{2}$$
$53$ $$1 - 2 T + 53 T^{2}$$
$59$ $$1 - 4 T + 59 T^{2}$$
$61$ $$1 + 2 T + 61 T^{2}$$
$67$ $$1 - 12 T + 67 T^{2}$$
$71$ $$1 + 8 T + 71 T^{2}$$
$73$ $$1 + 14 T + 73 T^{2}$$
$79$ $$1 - 16 T + 79 T^{2}$$
$83$ $$1 + 16 T + 83 T^{2}$$
$89$ $$1 - 16 T + 89 T^{2}$$
$97$ $$1 - 14 T + 97 T^{2}$$
# Generalized density functions on the natural numbers
If $a_1,a_2,\dots$ are IID random bits (correction as per Anthony Quas: these "bits" are $+1$ and $-1$ with equal probability), then with probability 1, the set of natural numbers $n$ such that $a_1+a_2+\dots+a_n \leq 0$ has lower density 0 and upper density 1, so it has no density in the ordinary sense. Still, I wonder if there is a principled way to generalize the manner in which we assign "densities" to subsets of the natural numbers in such a fashion that, with probability 1, the aforementioned set has generalized density 1/2 -- and, more generally, for every real $t$, the set of $n$ such that $(a_1+a_2+\dots+a_n)/\sqrt{n} \leq t$ has generalized density equal to the probability that the relevant Gaussian random variable has value less than $t$.
• Is a bit $\pm 1$? Otherwise it's hard for the sum to be $\le 0$. – Anthony Quas Oct 2 '15 at 19:31
• Wouldn't a logarithmic average do it? $D(A)=\lim_{N\to\infty}1/(\log N)\sum_{n\le N}\mathbf 1_{n\in A}/n$. If not logarithmic, then certainly iterated logarithmic. – Anthony Quas Oct 2 '15 at 19:42
So I think a logarithmic average will do the trick for you. If you define $Y_n$ to be the sign of $a_1+\ldots+a_n$, then calculations with Brownian motion in place of the random walk suggest the covariance of $Y_m$ and $Y_n$ with $m<n$ is approximately $(2/\pi)\arctan\sqrt{m/(n-m)}$. Now define $S_N=(1/\log N)(Y_1/1+\ldots +Y_N/N)$. This has expectation 0 and variance $\approx 1/\log N$, which gives a systematic way of saying that the random walk is "positive half the time".
• Modulo Brownian motion calculations that I haven't checked, this looks good, as far as it goes. But how do we get to "converges to 1/2 with probability 1"? Given how slowly 1/log $N$ goes to 0, I don't see why the outliers in the sequence $S_1,S_2,...$ are still constrained to go to 0. There's probably a simple argument for this, but off the top of my head I don't see it. – James Propp Oct 3 '15 at 13:49
• So I was thinking about this. I think the point is that the sequence $S_N$ is very slowly varying indeed. Nothing much can happen between times $e^{(1+\epsilon)^k}$ and $e^{(1+\epsilon)^{k+1}}$. So if the sequence goes to 0 with probability 1 along that sequence of times for each $\epsilon$, you get convergence for the full sequence. Now work with a countable sequence of $\epsilon$ going to 0. An argument of this type (for standard weights) appears in notes of Wierdl and Rosenblatt in the Cambridge Volume "Convergence in Ergodic Theory" – Anthony Quas Oct 3 '15 at 14:50
• Yes, I believe this would work. So that settles the case $t=0$. But it's less clear to me how to handle other values of $t$ (from the original problem). – James Propp Oct 3 '15 at 19:17
• I think roughly the same argument works for other values of $t$. Define $Y_n$ to be 1 if $a_1+\ldots+a_n>t\sqrt n$ and 0 otherwise. When you do the Brownian motion approximation argument with $m<n$, you end up with $\mathrm{Cov}(Y_m,Y_n) \approx \mathbb P\big(N_1>t;\ N_2>t\sqrt{n/(n-m)}-N_1\sqrt{m/(n-m)}\big)-\mathbb P(N_1>t)^2$ where $N_1$ and $N_2$ are independent standard normals. The weighting means you don't have to worry about terms with $m$ and $n$ within a bounded factor of each other. Outside this range, this is close to 0 as before, so that you get the desired convergence. – Anthony Quas Oct 4 '15 at 0:01
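A quick numerical illustration of the logarithmic-averaging idea (my own sketch, not from the thread): for a simulated $\pm 1$ walk, the logarithmic average $(1/\log N)\sum_{n\le N}\mathbf 1\{S_n>0\}/n$ hovers near $1/2$, even though the ordinary density of positivity times fluctuates wildly from run to run.

```python
import math
import random

def log_average_positive(n_steps, rng):
    """(1/log N) * sum_{n<=N} 1{S_n > 0}/n for a random +/-1 walk S_n."""
    s, acc = 0, 0.0
    for n in range(1, n_steps + 1):
        s += rng.choice((-1, 1))
        if s > 0:
            acc += 1.0 / n
    return acc / math.log(n_steps)

rng = random.Random(0)
runs = [log_average_positive(20_000, rng) for _ in range(50)]
mean = sum(runs) / len(runs)
print(round(mean, 2))
```

Convergence is only logarithmic in $N$, so individual runs still scatter noticeably; averaging over independent walks makes the limiting value $1/2$ visible.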
EDIT: As pointed out by Anthony Quas below, this approach suffers from what looks to be a quite serious measurability issue.
A more abstract approach: Let $\theta$ be a shift-invariant probability mean on $\mathbb{N}$ (i.e., a finitely additive but not necessarily $\sigma$-additive set function with total weight 1 such that $\theta(\{n : n+1 \in A\})=\theta(A)$ for every subset $A$ of $\mathbb{N}$). (Such a mean can be obtained e.g. by taking a subsequential limit of the functions sending $A$ to $\frac{1}{n}\sum_{x\in[0,n]}1(x\in A)$.)
Let's define $A_+=\{n : a_1 + \cdots + a_n >0\}$, $A_0=\{n : a_1 + \cdots + a_n =0\}$ and $A_-=\{n : a_1 + \cdots + a_n <0\}$.
• The values of $\theta(A_+)$ and $\theta(A_-)$ are non-random by e.g. the Hewitt-Savage 0-1 law.
• By symmetry, $\theta(A_+)=\theta(A_-)$.
• $\theta(A_0)=0$ for every choice of $\theta$, since the upper density of $A_0$ is zero.
It follows that $\theta(A_+)=\frac{1}{2}$ almost surely.
• Don't you need measurability of $\theta$ for $\theta(A^+)$ and $\theta(A^-)$ to be constant? – Anthony Quas Oct 5 '15 at 15:01
• At least in the case that $\theta$ is a subsequential limit of the functions $\frac{1}{n}\sum_{x\in[0,n]}1(x\in A)$, it is a limit of measurable functions and hence measurable (the same subsequence is used for every $A$). I think that every shift-invariant mean on $\mathbb{N}$ will arise as a similar sort of limit, giving measurability, but I'm not an expert on this. – tmh Oct 5 '15 at 23:23
# Item Option (Parameter / Switch) in Backup-SPFarm & Restore SP-Farm for Multi-Tenant SharePoint 2013 Farm
I have a SharePoint 2013 multi-tenant farm in which every tenant has a dedicated content database. I am trying to back up a particular tenant using the following command, found in Understanding multi-tenancy in SharePoint Server 2013, but I am not sure what the Item option refers to.
Backup-SPFarm -Directory "c:\backups\alpha" -Item "HostingFarm_Content_Hosting" -BackupMethod Full
Also, while restoring, I am not sure what to provide for the Item parameter in the following command. The article also doesn't mention whether this restore behaves like a database restore: can it be done without first creating the tenant, or do we need to create the tenant first and restore by overwriting it? Can anyone please explain?
Restore-SPFarm -Directory "c:\backups\alpha" -Item "HostingFarm_Content_Hosting" -RestoreMethod Overwrite
# Probability of collision of sums of vectors
Let $S_1$ and $S_2$ be sets of vectors from $\mathbb{R}^d$ that are distinct and let $\sigma(\cdot)$ be a non-linearity, e.g., a componentwise sigmoid function.
Does there exist a random matrix $R \in \mathbb{R}^{d \times k}$, e.g. a Gaussian matrix, such that the probability that $\sum_{s \in S_1} \sigma(s R) = \sum_{t \in S_2} \sigma(t R)$ tends to 0 as $k$ tends to infinity?
• I think you mean: Let $S_1$ and $S_2$ be distinct sets of vectors....Let $\sigma$ be a non-linear function. Does there exist a set of matrices $R_k$ such that the probability...tends to 0 as $k$ tends to infinity? If so, it would help to say it that way. Apr 29, 2018 at 14:15
• I think this probability will usually be $0$ if e.g. $R$ is a Gaussian matrix and $\sigma(\cdot)$ is a smooth function with strictly positive partial derivatives. Apr 29, 2018 at 15:32
• @MattF There is only one matrix, not a set. Apr 30, 2018 at 10:31
• @IosifPinelis Could you please elaborate on your comment. Apr 30, 2018 at 10:32
Tune MESSAGE-MACRO
"MESSAGE-MACRO" refers to the combination of MESSAGE and MACRO, run iteratively in a multi-disciplinary optimization algorithm. This combination is activated by calling solve() with the argument model='MESSAGE-MACRO', or by using the GAMS MESSAGE-MACRO_run.gms script directly (see Running a model for details about these two methods).
This page describes how to solve two numerical issues that can occur in large MESSAGEix models.
Oscillation detection in the MESSAGE-MACRO algorithm
The documentation for the MESSAGE_MACRO class describes the algorithm and its three parameters:
• convergence_criterion,
• max_adjustment, and
• max_iteration.
The algorithm detects ‘oscillation’, which occurs when MESSAGE and MACRO each return slightly different solutions, but these two solutions are each stable.
If the difference between these points is greater than convergence_criterion, the algorithm might jump between these two points indefinitely. Instead, the algorithm detects oscillation by comparing model solutions on each iteration to previous values recorded in the iteration log.
If the algorithm picks up on the oscillation between iterations, then after MACRO has solved and before solving MESSAGE, a log message is printed as follows:
--- Restarting execution
--- MESSAGE-MACRO_run.gms(4986) 625 Mb
--- Reading solution for model MESSAGE_MACRO
--- MESSAGE-MACRO_run.gms(4691) 630 Mb
+++ Indication of oscillation, increase the scaling parameter (4) +++
--- GDX File c:\repo\message_ix\message_ix\model\output\MsgIterationReport_ENGAGE_SSP2_v4_EN_NPi2020_900.gdx
Time since GAMS start: 1 hour, 10 minutes
+++ Starting iteration 14 of MESSAGEix-MACRO... +++
+++ Solve the perfect-foresight version of MESSAGEix +++
--- Generating LP model MESSAGE_LP
Note
This example is from a particular model run, and the actual message may differ.
The algorithm then gradually reduces max_adjustment from the user-supplied value. This has the effect of reducing the allowable relative change in demands, until the convergence_criterion is met.
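The damping behavior can be illustrated with a toy fixed-point iteration (entirely my own sketch, not the actual GAMS implementation): each iteration's relative change is clipped to ±max_adjustment, and the clip is halved whenever the sign of the change flips, i.e. when oscillation is detected.

```python
def damped_iterate(f, x0, max_adjustment=0.2, convergence_criterion=0.01,
                   max_iteration=50):
    """Toy MESSAGE-MACRO-style damping: clip each relative step and tighten
    the clip when successive steps change direction (oscillation)."""
    x, prev_change = x0, 0.0
    for iteration in range(1, max_iteration + 1):
        target = f(x)
        change = (target - x) / x            # proposed relative adjustment
        if prev_change * change < 0:         # direction flipped: oscillating
            max_adjustment /= 2              # allow only smaller steps now
        clipped = max(-max_adjustment, min(max_adjustment, change))
        x *= 1.0 + clipped
        if abs(change) < convergence_criterion:
            return x, iteration
        prev_change = change
    return x, max_iteration

x, iters = damped_iterate(lambda v: 100.0 / v, 5.0)
print(x, iters)
```

With f(x) = 100/x (fixed point 10), the undamped iteration would bounce between x0 and 100/x0 forever, while the damped version settles on the fixed point well before max_iteration.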
Issue 1: Oscillations not detected
Oscillation detection can fail, especially when the oscillation is very small. When this occurs, MESSAGE-MACRO will iterate until max_iteration (default 50) and then print a message indicating that it has not converged.
For the MESSAGEix-GLOBIOM global model, this issue can be encountered with scenarios which have stringent carbon budgets (e.g. <1000 Gt CO₂ cumulative) and require more aggressive reductions of demands.
Identifying oscillation
In order to find out whether failure to converge is due to undetected oscillation, check the iteration report in MsgIterationReport_<model_name>_<scenario_name>.gdx. The initial iterations will show the objective function value either decreasing or increasing (depending on the model), but after a number of iterations, the objective function will flip-flop between two very similar values.
Preventing oscillation
The issue can be resolved by tuning max_adjustment and convergence_criterion from their respective default values of 0.2 (20%) and 0.01 (1%). The general approach is to reduce max_adjustment. Reducing this parameter to half of its default value—i.e. 0.1, or 10%—can help, but it can be reduced further, as low as 0.01 (1%).
This may require further tuning of the other parameters: first, ensure that convergence_criterion is smaller than max_adjustment, e.g. set it to 0.009 (0.9%) when max_adjustment is 0.01 (1%). Second, because only a small change to the model solution is allowed on each iteration, numerous iterations may be required if the initial MESSAGE solution is not close to the convergence point. Therefore max_iteration may also need to be increased.
These changes can be made in two ways:
1. Pass the values to MESSAGE_MACRO via keyword arguments to Scenario.solve().
2. Manually edit the default values in MESSAGE-MACRO_run.gms.
Issue 2: MESSAGE solves optimally with unscaled infeasibilities
By default, message_ix is configured so that the CPLEX solver runs with the lpmethod option set to 2, selecting the dual simplex method. Solving models the size of MESSAGEix-GLOBIOM takes a very long time with the dual simplex method; scenarios with stringent constraints can take more than 10 hours on common hardware. With lpmethod set to 4, selecting the barrier method, the model can solve in under a minute.
The drawback of using the barrier method is that, after CPLEX has solved, it crosses over to a simplex optimizer for verification. As part of this verification step, it may turn out that the CPLEX solution is “optimal with unscaled infeasibilities.”
This issue arises when some parameters in the model are not well-scaled, resulting in numerical issues within the solver. This page (from an earlier, 2002 version of the CPLEX user manual) offers some advice on how to overcome the issues. The most direct solution is to rescale the parameters in the model itself.
When this is not possible, there are some workarounds:
1. Adjust CPLEX’s convergence criterion, epopt (this is distinct from the convergence_criterion of the MESSAGE_MACRO algorithm). In message_ix, DEFAULT_CPLEX_OPTIONS sets this to 1e-6 by default. This approach is delicate, as changing the tolerance may also change the solution by a significant amount. This has not been tested in detail and should be handled with care.
2. Switch to other methods provided by CPLEX, using e.g. lpmethod = 2. A disadvantage of this approach is the longer runtime, as described above.
3. Start the MESSAGE-MACRO algorithm with lpmethod set to 4. Manually monitor its progress, and after approximately 10 iterations have passed, delete the file cplex.opt. When CPLEX cannot find its option file, it reverts to using a simplex method (with an advanced basis) from then on.
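For reference, the cplex.opt file mentioned above is a plain list of option/value pairs read by the GAMS/CPLEX link. A minimal file matching the defaults described on this page might look like the following (the values shown are illustrative, not a recommendation):

```
* GAMS/CPLEX option file
lpmethod 4
epopt 1e-6
```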
1. ## Mid-Terms :(
Okay, I just need help understanding and getting the answers for a few questions.
1) A woman's dress originally selling for $50.00 was marked down to yield 20% on cost. If the original profit was 33 1/3% on cost, what will be the new sales price?

2) Find the compound interest on $1000 for 9 months at 8% per annum, interest being calculated quarterly.
[And then I have the expand crap... I seriously don't understand this guys Please pity me and help me out ]
You have to expand each of the following
3) a. (Y+3)(Y+4)
b. (2x-4)(x+8)
c. (7x-4)(3x-9)
d. (x-3)(x+3)
e. (x+3)(y-9)
4) Mr. Drake sold two books at $1.20 each. Based on the cost, the profit on one was 20% and the loss on the other was 20%. On the sale of the books, did he break even, gain, or lose? Show how much he gained or lost.
5) There are 1600 pupils in a school. 3/8 of them are girls. 43% of the boys and 1/4 of the girls wear glasses. How many students in this school wear glasses?
6) If $12^{x-4}=1$, what is the value of $3^x$?
7) If x is 3% of y and y is 7% of w, find x in terms of w.
Thanks!!!!!
Guys, I know this is a lot. Sorry, but mid-terms are really hard and I really need to understand this stuff :P Kindly try and answer as many questions as possible by tomorrow =)
2. Originally Posted by Rocher
3) a. (Y+3)(Y+4)
b. (2x-4)(x+8)
c. (7x-4)(3x-9)
d. (x-3)(x+3)
e. (x+3)(y-9)
Do you know FOIL? (First, Outer, Inner, Last)
$(a + b)(c + d) = ac + ad + bc + bd$
ac is the product of the First terms in each factor
ad is the product of the Outer terms
bc is the product of the Inner terms
bd is the product of the Last terms.
a) $(y + 3)(y + 4) = y^2 + 4y + 3y + 12 = y^2 + 7y + 12$
b) $(2x - 4)(x + 8) = 2x^2 + 16x - 4x - 32 = 2x^2 + 12x - 32$
c) $(7x-4)(3x-9) = 21x^2 - 63x - 12x + 36 = 21x^2 - 75x + 36$
d) $(x-3)(x+3) = x^2 + 3x - 3x - 9 = x^2 - 9$
e) $(x+3)(y-9) = xy - 9x + 3y - 27$ and that's all we can do for this one.
-Dan
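For anyone who wants to double-check such expansions mechanically, here is a short sketch using sympy (assuming it is installed):

```python
# A quick mechanical check of the FOIL expansions above, using sympy.
from sympy import symbols, expand

x, y = symbols("x y")
assert expand((y + 3) * (y + 4)) == y**2 + 7*y + 12
assert expand((2*x - 4) * (x + 8)) == 2*x**2 + 12*x - 32
assert expand((7*x - 4) * (3*x - 9)) == 21*x**2 - 75*x + 36
assert expand((x - 3) * (x + 3)) == x**2 - 9
print("all expansions check out")
```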
3. Thank you. Actually I don't know FOIL. Teacher never taught me
However, can you tell me how you got those answers? Like for the first one, I have no idea how you got 7y+12. The 7 i understand, but the 12... How can you multiply and add them?
4. Originally Posted by Rocher
Thank you. Actually I don't know FOIL. Teacher never taught me
However, can you tell me how you got those answers? Like for the first one, I have no idea how you got 7y+12. The 7 i understand, but the 12... How can you multiply and add them?
Originally Posted by topsquark
Do you know FOIL? (First, Outer, Inner, Last)
$(a + b)(c + d) = ac + ad + bc + bd$
ac is the product of the First terms in each factor
ad is the product of the Outer terms
bc is the product of the Inner terms
bd is the product of the Last terms.
a) $(y + 3)(y + 4) = y^2 + 4y + 3y + 12 = y^2 + 7y + 12$
First terms: $y \cdot y$
Outer terms: $y \cdot 4$
Inner terms: $3 \cdot y$
Last terms: $3 \cdot 4$
So
$(y + 3)(y + 4) = y^2 + 4y + 3y + 12$
$= y^2 + (4y + 3y) + 12 = y^2 + (7y) + 12$ <-- Adding up "like" terms
-Dan
NOTE: For a slightly more advanced explanation (which might help, might not) this is merely an application of the distributive law of multiplication over addition: a(b + c) = ab + ac.
$(y + 3)(y + 4)$
Let's momentarily define a = y + 3:
$(y + 3)(y + 4) = a(y + 4) = ay + 4a$
Now, put back in a = y + 3:
$(y + 3)(y + 4) = a(y + 4) = ay + a \cdot 4$ $= (y+3)y + (y+3)\cdot 4$
Now use the distributive law again (twice):
$(y+3)y = y^2 + 3y$
$(y + 3) \cdot 4 = 4y + 12$
(These two lines give us the terms in the FOIL expansion.)
Thus:
$(y + 3)(y + 4) = y^2 + 3y + 4y + 12$
5. Originally Posted by Rocher
3) a. (Y+3)(Y+4)
b. (2x-4)(x+8)
c. (7x-4)(3x-9)
d. (x-3)(x+3)
e. (x+3)(y-9)
Another way of expanding products like this is:
$(a+b)(c+d)=a(c+d) + b(c+d)=ac+ad \ +\ bc+bd$.
I will only do one example with this, but here goes:
$(7x-4)(3x-9)=7x(3x-9) + (-4)(3x-9)=21x^2-63x + (-12)x + 36$
...... $=21x^2-63x -12x + 36$
Now collect together the multiples of $x$ into one term gives:
$(7x-4)(3x-9)=21x^2-75x + 36$
RonL
6. Thanks and now I have another one
.7x = .25+.2x
x=
7. Originally Posted by Rocher
Thanks and now I have another one
.7x = .25+.2x
x=
$0.7x = 0.25 + 0.2x$
$0.7x - 0.2x = 0.25 + 0.2x - 0.2x$
$0.5x = 0.25$
$\frac{0.5x}{0.5} = \frac{0.25}{0.5}$
$x = 0.5$
-Dan
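A one-line numeric check of this answer in Python:

```python
# Numeric check of the solution x = 0.5 (floats, so compare with a tolerance).
x = 0.5
assert abs(0.7 * x - (0.25 + 0.2 * x)) < 1e-12
print("x = 0.5 satisfies 0.7x = 0.25 + 0.2x")
```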
|
{}
|
# XNA Zoom in/out on model instead of scaling the model
I'm trying to implement zoom in/out functionality into my game. I can change the cameraDistance value with the Z and X keys. Instead of zooming, it looks like the model is scaled smaller or larger (moves closer or farther away from the camera). Instead, the camera should move closer to or farther away from the model.
Below are my view and projection matrices. cameraDistance is the value that seems to 'scale' the model. After days of messing around (yes, days! :)), I realized that I should never change this value.
Matrix view = Matrix.CreateRotationY(MathHelper.ToRadians(cameraRotation)) *
Matrix.CreateTranslation(0, -45, 0) *
Matrix.CreateLookAt(new Vector3(0, forwardRotation, -cameraDistance), new Vector3(0, 0, 0), Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspectRatio, 1, 10000);
So instead of changing the cameraDistance value, I think I should change my CameraPositionOffset. Below is my updateCamera method
// This vector controls how much the camera's position is offset from the
// sphere. This value can be changed to move the camera further away from or
// closer to the sphere.
Vector3 CameraPositionOffset = new Vector3(0, 10, 90);
// This value controls the point the camera will aim at. This value is an offset
// from the sphere's position. middle value is distance from surface
Vector3 CameraTargetOffset = new Vector3(0, 27, 0);
private void UpdateCamera(GameTime gameTime)
{
// start arcball
float time = (float)gameTime.ElapsedGameTime.TotalMilliseconds;
// mouse movement
MouseState currentMouseState = Mouse.GetState();
if (currentMouseState != originalMouseState)
{
float xDifference = (currentMouseState.X - originalMouseState.X);
float yDifference = currentMouseState.Y - originalMouseState.Y;
Mouse.SetPosition(GraphicsDevice.Viewport.Width / 2, GraphicsDevice.Viewport.Height / 2);
if (Mouse.GetState().RightButton == ButtonState.Pressed)
{
cameraRotation -= xDifference * 1.05f;
cameraArc += yDifference * 1.025f;
// Limit the arc movement.
if (cameraArc > 90.0f)
cameraArc = 90.0f;
else if (cameraArc < -90.0f)
cameraArc = -90.0f;
}
}
// Check for input to zoom camera in and out.
// (The bodies below are reconstructed for illustration — the original post
// left them empty. This is where cameraDistance was being changed.)
if (currentKeyboardState.IsKeyDown(Keys.Z))
    cameraDistance += 0.25f * time;
if (currentKeyboardState.IsKeyDown(Keys.X))
    cameraDistance -= 0.25f * time;
// Limit the camera distance.
cameraDistance = MathHelper.Clamp(cameraDistance, 10.0f, 500.0f);
/// end of arcball
// The camera's position depends on the sphere's facing direction: when the
// sphere turns, the camera needs to stay behind it. So, we'll calculate a
// rotation matrix using the sphere's facing direction, and use it to
// transform the two offset values that control the camera.
Matrix cameraFacingMatrix = Matrix.CreateRotationY(sphereFacingDirection);
Vector3 positionOffset = Vector3.Transform(CameraPositionOffset, cameraFacingMatrix);
Vector3 targetOffset = Vector3.Transform(CameraTargetOffset, cameraFacingMatrix);
// once we've transformed the camera's position offset vector, it's easy to
// figure out where we think the camera should be.
Vector3 cameraPosition = spherePosition + positionOffset;
// We don't want the camera to go beneath the heightmap, so if the camera is
// over the terrain, we'll move it up.
if (heightMapInfo.IsOnHeightmap(cameraPosition))
{
// we don't want the camera to go beneath the terrain's height +
// a small offset.
float minimumHeight = heightMapInfo.GetHeight(cameraPosition) + CameraPositionOffset.Y;
if (cameraPosition.Y < minimumHeight)
{
cameraPosition.Y = minimumHeight;
}
}
// next, we need to calculate the point that the camera is aiming at. That's
// simple enough - the camera is aiming at the sphere, and has to take the
// targetOffset into account.
Vector3 cameraTarget = spherePosition + targetOffset;
// with those values, we'll calculate the viewMatrix.
viewMatrix = Matrix.CreateLookAt(cameraPosition, cameraTarget, new Vector3(0.0f, 1.0f, 0.0f));
}
In summary: I want to move the camera farther or closer to the model, and keep the position and scale of the model.
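The fix being asked for — moving the camera along its offset from the target rather than touching the model — comes down to scaling the target-to-camera offset vector. A plain-Python sketch of the geometry (all names and values here are illustrative, not from the post):

```python
# Zoom by moving the camera instead of scaling the model: keep the model
# where it is and scale the offset vector from the look-at target to the
# camera position.
def zoom_camera(target, offset, zoom):
    # zoom < 1 moves the camera closer to the target, zoom > 1 farther away
    return tuple(t + o * zoom for t, o in zip(target, offset))

target = (0.0, 27.0, 0.0)   # point the camera aims at
offset = (0.0, 10.0, 90.0)  # analogue of CameraPositionOffset
near = zoom_camera(target, offset, 0.5)
far = zoom_camera(target, offset, 2.0)
print(near)  # camera at half the default distance from the target
print(far)   # camera at twice the default distance
```

The same scaling applied to CameraPositionOffset before the CreateLookAt call would move the camera while leaving the model's world-space position and scale untouched.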
|
{}
|
# Converting 10x BAM Files to FASTQ
bamtofastq is a tool for converting 10x BAMs produced by cellranger, cellranger-atac, cellranger-dna or longranger back to FASTQ files that can be used as inputs to re-run analysis. The FASTQs will be emitted into a directory structure that is compatible with the directories created by the mkfastq tool.
NOTE: only BAMs produced by cellranger, cellranger-atac, cellranger-dna and longranger will work with bamtofastq. Special tags included by 10x pipelines are required to reconstruct the original FASTQ sequences correctly. If your BAM file lacks the appropriate headers, you will get an error message.
## Background
We created bamtofastq to help users who want to reanalyze 10x data but only have access to 10x BAM files (e.g., some customers want to store BAM files only; others may have downloaded 10x BAM data from NCBI SRA). 10x pipelines require sequencer FASTQs (with embedded barcodes) as input. The location of the 10x barcode varies depending on product and reagent version. For the current Genome (v2) and Single Cell 3' (v2, v3) products, the 10x barcode occupies the first 16 bases of the R1 read. In earlier product versions, the 10x barcode was carried on the sample indices. bamtofastq determines the appropriate way to reconstruct the original read sequence from the sequences and tags in the BAM file.
bamtofastq is available for Linux and is compatible with RedHat/CentOS 5.2 or later, and Ubuntu 8.04 or later.
We recommend upgrading to bamtofastq 1.2.0—it is now multi-threaded and is compatible with Cell Ranger ATAC BAM files.
bamtofastq is a single executable that can be run directly and requires no compilation or installation. Place the executable file in a directory that is on your PATH, and make sure to chmod 700 to make it executable.
## Running the Tool
10x BAMs produced by Cell Ranger v1.2+, Cell Ranger ATAC v1.0+, Cell Ranger DNA v1.0+, and Long Ranger v2.1+ contain header fields that permit automatic conversion to the correct FASTQ sequences. BAMs produced by older 10x pipelines may require special arguments or have some caveats, see below for details. Run times for full-coverage WGS BAMs may be several hours.
The FASTQ files emitted by bamtofastq contain the same set of sequences that were input to the original pipeline run, although the original order will not be preserved. 10x pipelines are generally insensitive to the order of the input data, so you can expect nearly identical results when re-running with bamtofastq outputs.
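An illustrative command line (the paths are hypothetical; the general form is `bamtofastq [options] <input.bam> <output-dir>`):

```shell
# Illustrative paths — substitute your own BAM and output directory.
bamtofastq --reads-per-fastq=500000000 possorted_genome_bam.bam ./fastq_out
```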
## Options
--locus=locus
Optional. Only include read pairs mapping to locus. Use chrom:start-end format.
--reads-per-fastq=N
Number of reads per FASTQ chunk. Default: 50000000
--gemcode
Convert a BAM produced from GemCode data (Longranger 1.0 - 1.3)
--lr20
Convert a BAM produced by Longranger 2.0
--cr11
Convert a BAM produced by Cell Ranger 1.0-1.1
--bx-list=L
Only include BX values listed in text file L. Requires BX-sorted and indexed BAM file (see Long Ranger support for details).
--help
-h
Show the help screen.
## Known Issues
The latest versions of cellranger, cellranger-atac, cellranger-dna and longranger generate BAM files that automatically reconstruct complete FASTQ files representing all input reads. BAMs produced by older versions of cellranger and longranger have some caveats, listed below:
| Package | Version | Pipelines | Extra Arguments | Complete FASTQs |
| --- | --- | --- | --- | --- |
| Cell Ranger | 1.3+ | count | none | Yes |
| Cell Ranger | 1.2 | count | none | Reads without a valid barcode will be absent from FASTQ. (These reads are ignored by Cell Ranger.) |
| Cell Ranger | 1.0-1.1 | count | --cr11 | Reads without a valid barcode will be absent from FASTQ. (These reads are ignored by Cell Ranger.) |
| Cell Ranger ATAC | 1.0.0+ | count | none | Yes |
| Cell Ranger DNA | 1.0.0+ | cnv | none | Yes |
| Long Ranger | 2.1.3+ | wgs, targeted, align, basic | none | Yes |
| Long Ranger | 2.1.0 - 2.1.2 | wgs, targeted | none | Yes |
| Long Ranger | 2.0 | wgs, targeted | --lr20 | Yes |
| Long Ranger | 2.0.0 - 2.1.2 | align, basic | Not Supported | N/A |
| Long Ranger | 1.3 (GemCode) | wgs, targeted | --gemcode | Reads without a valid barcode will be absent from FASTQ. This will result in a ~5-10% loss of coverage. |
|
{}
|
|
{}
|
# What are the integer solutions to $z^2-y^2z+x^3=0$?
The question is to describe ALL integer solutions to the equation in the title. Of course, a polynomial parametrization of all solutions would be ideal, but answers in many other formats are possible. For example, the answer to the famous Markoff equation $$x^2+y^2+z^2=3xyz$$ is given by the Markoff tree. See also this previous question Solve in integers: $y(x^2+1)=z^2+1$ for some other examples of formats of acceptable answers. In general, just give as nice a description of the integer solution set as you can.
If we consider the equation as quadratic in $$z$$ and its solutions are $$z_1,z_2$$, then $$z_1+z_2=y^2$$ while $$z_1z_2=x^3$$, so the question is equivalent to describing all pairs of integers such that their sum is a perfect square while their product is a perfect cube.
An additional motivation is that, together with a similar equation $$xz^2-y^2z+x^2=0$$, this equation is the smallest $$3$$-monomial equation for which I do not know how to describe all integer solutions. Here, the "smallest" is in the sense of question What is the smallest unsolved Diophantine equation?, see also Can you solve the listed smallest open Diophantine equations? .
• one solution is $y=8$ and $x=4$ so that the equation becomes $(z-8)^2=0$ or $z^2-16z+64=0$. I suspect this is the smallest possible solution. Sep 11, 2022 at 22:27
• @user, I think you meant $y=4$. Sep 12, 2022 at 4:23
• $x=y=z=0$ is a "smaller" solution.
– JRN
Sep 12, 2022 at 6:26
• @user, we're talking about $z^2-y^2z+x^3=0$, right? When $y=8$ and $x=4$, this becomes $z^2-64z+64=0$, not $z^2-16z+64=0$, which is what you wrote. But if you take $y=4$, then you do get $z^2-16z+64=0$. OK? Sep 12, 2022 at 10:21
• @GerryMyerson, you are right. I made the mistake of identifying $(z-8)^2$ with $(z-y)^2$ which is not the case. $z=8$ is the solution of the quadratic equation so the value of $y$ and $x$ must be identified after $(z-8)^2$ is expanded to read $(z-8)^2=z^2 -4^2*z +4^3$ which effectively gives the value $y=4$ and also $x=4$. Sep 12, 2022 at 11:12
We get $$z(y^2-z)=x^3$$. Thus $$z=ab^2c^3$$ and $$y^2-z=ba^2d^3$$ for certain integers $$a, b, c, d$$ (this is easy to see by considering the prime factorization). So $$ab(bc^3+ad^3)=y^2$$. Denote $$a=TA$$, $$b=TB$$ (each pair $$(a, b)$$ corresponds to at least one triple $$(A, B, T)$$, but possibly to several triples). You get $$T^3AB(Bc^3+Ad^3)=y^2$$. Thus $$T$$ divides $$y$$, say $$y=TY$$. We get $$Y^2=TAB(Bc^3+Ad^3)$$.
So, all solutions are obtained as follows: start with arbitrary $$A, B, c, d$$ and choose any $$Y$$ whose square is divisible by $$AB(Bc^3+Ad^3)$$; the ratio is denoted by $$T$$ (if both $$Y$$ and $$AB(Bc^3+Ad^3)$$ are equal to 0, take arbitrary $$T$$).
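To illustrate, here is the recipe run once in plain Python with the smallest non-trivial choices (variable names follow the answer above):

```python
# One concrete solution from the recipe: pick A, B, c, d, then any Y whose
# square is divisible by A*B*(B*c**3 + A*d**3).
A, B, c, d = 1, 1, 1, 1
M = A * B * (B * c**3 + A * d**3)  # here M = 2
Y = 2                              # Y**2 = 4 is divisible by M
T = Y * Y // M                     # T = 2
a, b, y = T * A, T * B, T * Y
z = a * b * b * c**3               # z = a*b^2*c^3
x = a * b * c * d                  # since z*(y**2 - z) = (a*b*c*d)**3
assert z * z - y * y * z + x**3 == 0
print(x, y, z)  # → 4 4 8, the solution already found in the comments
```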
• Thank you. In my opinion, this is a good description of all integer solutions, so I will accept this answer. Sep 12, 2022 at 15:20
• Can this method also solve $xz^2-y^2z+x^2=0$, the second equation mentioned in the question? Sep 12, 2022 at 15:55
• Ah, yes, now I see that it works for this equation as well, thanks again. Sep 12, 2022 at 16:49
Since the equation is NOT homogeneous, it is trivial to find infinite families of solutions with $$g=\gcd(x,y)>1$$. For instance, choose any integers $$a$$ and $$b$$, set $$c=ab^2-a^2$$, so multiplying by $$c^8$$ gives $$(ac^4)^2-ac^4(bc^2)^2+c^9=0$$, giving the trivial solutions $$(x,y,z)=(c^3,bc^2,ac^4)$$, but there are many other ways to find "trivial" solutions.
If you assume $$g=\gcd(x,y)=1$$, the solution is pretty standard: since we have a monic second degree equation in $$z$$, there exist integer solutions if and only if the discriminant is a square, giving the auxiliary equation $$y^4-4x^3=d^2$$, rewritten as $$((y^2-d)/2)((y^2+d)/2)=x^3$$, and since the factors are coprime, they are both cubes, say $$a^3$$ and $$b^3$$, so $$x=ab$$, $$y^2=a^3+b^3$$, hence $$z=b^3$$. The equation in $$y$$ is a standard superFermat equation of elliptic type, which is entirely parametrized by the following three parametrizations (see for instance my GTM 240 chapter 14 for a proof), where $$s$$ and $$t$$ are coprime integers, and any solution belongs to one and only one parametrization: $$(a,b,y)=(s(s+2t)(s^2-2ts+4t^2),-4t(s-t)(s^2+ts+t^2),\pm(s^2-2ts-2t^2))$$ with $$s$$ odd and $$s\not\equiv t\pmod3$$, $$(a,b,y)=(s^4-4ts^3-6t^2s^2-4t^3s+t^4,2(s^4+2ts^3+2t^3s+t^4),3(s^2-t^2)(s^4+2s^3t+6s^2t^2+2st^3+t^4))$$ with $$s\not\equiv t\pmod{2}$$ and $$s\not\equiv t\pmod 3$$, $$(a,b,y)=(-3s^4+6t^2s^2+t^4,3s^4+6t^2s^2-t^4,6st(3s^4+t^4))$$ with $$s\not\equiv t\pmod{2}$$ and $$3\nmid t$$.
• Thank you! The question is to describe ALL solutions, including all "trivial" ones, but I am happy to also have a parametrized description of coprime solutions. Sep 12, 2022 at 15:19
Here is an infinite family of solutions resulting from setting $$x=A z$$ for integer $$A$$.
For example set $$x=2z$$ and get
$$g(y,z)=-z\,(y^2 - 8z^2 - z)$$
The quadratic factor is a conic, and Wolfram Alpha gives infinitely many integer solutions in terms of powers of the square root of two, e.g. $$f(2\cdot 36,\,102,\,36)=0$$
Potential attack might be to try rational $$A$$ and then find integral points on a conic with rational coefficients.
There exists a parametrization of the rational solutions, since your equation is a rational surface:
$$X_1=-\frac{s^2 t}{2(2t^3 - s)},\quad Y_1=-\frac{s^2}{2(2t^3 - s)},\quad Z_1=\frac{s^3 t^3}{2(4t^6 - 4st^3 + s^2)}$$
• Why is this disliked? Sep 12, 2022 at 7:23
We already have a description of all solutions by @FedorPetrov, and co-prime solutions by @HenriCohen, and some promises for some special solutions.
Let me present a very simple and explicit family of special solutions, parametrized by natural numbers $$\ t\$$ and $$\ n.$$
Let
$$z_k\ :=\ (t^2-1)^{6\cdot n-k}$$ for $$\ k=1\$$ or $$\ 2.\$$ Then:
$$z_1+z_2\ =\ \left((t^2-1)^{3\cdot n-1}\cdot t\right)^2$$ and $$z_1\cdot z_2\ =\ \left((t^2-1)^{4\cdot n-1}\right)^3$$
is an explicit (if very special) family.
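A quick numeric confirmation of this family, in plain Python:

```python
# Verifying the claimed family: with z_k = (t^2-1)^(6n-k) for k = 1, 2,
# the sum z_1 + z_2 is a perfect square and the product z_1*z_2 a perfect cube.
for t in range(2, 6):
    for n in range(1, 4):
        z1 = (t*t - 1) ** (6*n - 1)
        z2 = (t*t - 1) ** (6*n - 2)
        assert z1 + z2 == ((t*t - 1) ** (3*n - 1) * t) ** 2
        assert z1 * z2 == ((t*t - 1) ** (4*n - 1)) ** 3
print("sum is a square, product is a cube")
```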
PS:
$$2^3+1\ =\ 3^2$$ $$2^3\cdot 1\ =\ 2^3$$
PPS:
If $$\,\ z_1\$$ and $$\ z_2\,\$$ form a solution then so do $$\,\ w_1:= a^6\cdot z_1\$$ and $$\ w_2\ := a^6\cdot z_2.$$
Thus, above, we could have:
$$w_1\ :=\ a^6\cdot(t^2-1)^5$$ $$w_2\ :=\ a^6\cdot(t^2-1)^4$$
Let $$z := y^2$$. Then, $$x$$ must be equal to zero, while $$y$$, by construction, is free to run over the integers.
|
{}
|
One useful application of linear algebra is spectral graph theory: using linear algebra to study graphs, and in particular random walks on graphs. Given an initial probability distribution $p$ on the vertex set $V$ of a graph (thought of as a vector in $\mathbb{R}^{|V|}$), the probabilities of being at the different vertices after $k$ steps of a random walk are given by $W^k p$, where $W = A D^{-1}$ (with $A$ the adjacency matrix and $D$ the degree matrix).
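A minimal numeric sketch of that random-walk iteration, using numpy (assumed installed), on the triangle graph:

```python
import numpy as np

# Random walk on a small undirected graph (the triangle): W = A D^{-1}
# advances a probability distribution by one step, so k steps give W^k p.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
D_inv = np.diag(1.0 / A.sum(axis=0))  # inverse degree matrix
W = A @ D_inv                          # column-stochastic walk matrix

p = np.array([1.0, 0.0, 0.0])          # start at vertex 0
for _ in range(50):                    # take 50 steps
    p = W @ p
print(np.round(p, 6))                  # converges to the uniform distribution
```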