# Spatial Transformer
Introduced by Jaderberg et al. in Spatial Transformer Networks
A Spatial Transformer is an image model block that explicitly allows the spatial manipulation of data within a convolutional neural network. It gives CNNs the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. Unlike pooling layers, where the receptive fields are fixed and local, the spatial transformer module is a dynamic mechanism that can actively spatially transform an image (or a feature map) by producing an appropriate transformation for each input sample. The transformation is then performed on the entire feature map (non-locally) and can include scaling, cropping, rotations, as well as non-rigid deformations.
The architecture is shown in the Figure to the right. The input feature map $U$ is passed to a localisation network which regresses the transformation parameters $\theta$. The regular spatial grid $G$ over $V$ is transformed to the sampling grid $T_{\theta}\left(G\right)$, which is applied to $U$, producing the warped output feature map $V$. The combination of the localisation network and sampling mechanism defines a spatial transformer.
Source: Spatial Transformer Networks
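The grid generation and bilinear sampling described above can be sketched in plain NumPy. This is a hypothetical, single-channel re-implementation for illustration only, not the paper's code; `affine_grid` and `bilinear_sample` are my own names.

```python
import numpy as np

def affine_grid(theta, H, W):
    # theta: 2x3 affine matrix mapping output coords to input coords,
    # both expressed in normalized [-1, 1] coordinates
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)   # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2) -> (x, y)

def bilinear_sample(U, grid):
    # U: (H, W) input feature map; grid: (H', W', 2) normalized sample points
    H, W = U.shape
    x = (grid[..., 0] + 1) * (W - 1) / 2                     # to pixel coords
    y = (grid[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0                                  # interpolation weights
    return ((1 - wy) * (1 - wx) * U[y0, x0] + (1 - wy) * wx * U[y0, x0 + 1] +
            wy * (1 - wx) * U[y0 + 1, x0] + wy * wx * U[y0 + 1, x0 + 1])

# identity transform returns the input unchanged; a scaled theta would zoom
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
U = np.arange(16.0).reshape(4, 4)
V = bilinear_sample(U, affine_grid(theta, 4, 4))
```

In the real module, $\theta$ is regressed by the localisation network and the sampling is differentiable, so gradients flow back through both $U$ and $\theta$.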
# A Book Of Curves By E. H. Lockwood .pdf
Cambridge university press 978-0-521-04444-8 - a
Curves international - wikipedia, the free
Curves International, also known as Curves for Women, Curves Fitness, or just Curves, is an international fitness franchise co-founded by Gary and Diane Heavin in 1995.
Deltoid curve - wikipedia, the free encyclopedia
Deltoid curve. The red curve is a deltoid. In geometry, a deltoid, also known as a tricuspoid or Steiner curve, is a hypocycloid of three cusps.
A book of curves e h lockwood 0521044448 | ebay
A Book of Curves E. H. Lockwood in Books, Magazines, Non-Fiction Books | eBay
E. h. lockwood | barnes & noble
Barnes & Noble - E. H. Lockwood - Save with New Lower Prices on Millions of Books. FREE Shipping on $25 orders! Book of Curves E. H. Lockwood. Hardcover$39.32.
Geometric symmetry book | 2 available editions |
Geometric Symmetry by E H Lockwood, R H MacMillan starting at $21.62. Geometric Symmetry has 2 available editions to buy at Alibris.
Read book of curves online/preview - openisbn
Read the book Book Of Curves by E. H. Lockwood online or Preview the book, service provided by Openisbn Project.
A book of curves. by e. h. lockwood - jstor
RECENT PUBLICATIONS EDITED BY R. A. ROSENBAUM, Wesleyan University. All books for review should be sent directly to R. A. Rosenbaum, Department of Mathematics.
Book of curves: amazon.it: e. h. lockwood: libri
This book opens up an important field of mathematics at an elementary level, one in which the element of aesthetic pleasure, both in the shapes of the curves and in
162 the mathematical gazette - jstor
THE MATHEMATICAL GAZETTE. A Book of Curves. By E. H. LOCKWOOD. Pp. xii + 200. 25s. 1961. (Cambridge University Press). Dull indeed must be the man or boy who does
Jillian michaels' curves 30 minute workouts diet
Curves - The Curves program was created specifically for women to fit busy schedules and alleviates some of the uncertainty by having trained professionals available.
Book of curves [electronic resource]
Author/Creator Lockwood, E. H. Language English. Imprint Cambridge : Cambridge University Press, 1961. Physical description 1 online resource (212 p.) : digital, PDF.
Book of curves | mathematical association of
Curves: find a fitness center location near you
Curves Fitness; Corporate Wellness. Find A Center Near You.
Book of curves - cambridge books online
E. h. lockwood (author of book of curves)
E. H. Lockwood is the author of Book of Curves (3.50 avg rating, 2 ratings, 1 review, published 2007) and Geometric Symmetry.
A book of curves. by e. h. lockwood - alibris
A book of curves. by E. H. Lockwood - Find this book online from $23.26. Get new, rare & used books at our marketplace. Save money & smile!
Strophoid - wikipedia, the free encyclopedia
In geometry, a strophoid is a curve generated from a given curve C and points A (the fixed point) and O E. H. Lockwood (1961). "Strophoids". A Book of Curves.
Curve - wow.com
E. H. Lockwood A Book of Curves (1961 Cambridge) External links. Wikimedia Commons has media related to Curves. Famous Curves Index, School of Mathematics and
A book of curves by e. h. lockwood - used books -
Cambridge Univ. Press, 1967. Third Edition. Hardcover (Original Cloth). Very Good Condition/Good. Size: Quarto. Text body is clean, and free from previous owner
A book of curves by e h lockwood - abebooks
A BOOK OF CURVES by Lockwood, E. H. and a great selection of similar Used, New and Collectible Books available now at AbeBooks.co.uk.
E. lockwood - a book of curves. - malaysiabay
A Book of Curves, E. H. Lockwood, Cambridge University Press, 1961, 210 pages. Part I Special Curves: Parabola, Ellipse, Hyperbola, Cardioid, Limacon, Astroid
Archimedean spiral - encyclopedia of mathematics
The area of the sector bounded by an arc of the Archimedean spiral and two radius vectors $\rho_1$ and E.H. Lockwood, "A book of curves" , Cambridge Univ. Press
Book of curves: e. h. lockwood: 9780521044448:
Amazon.ca Try Prime. Your Store Deals Store Gift Cards Sell Help en français. Shop by Department
A book of curves (ebook, 1963) [worldcat.org]
Get this from a library! A book of curves. [E H Lockwood] Home. WorldCat Home About WorldCat Help Feedback. Search. Search for Library Items Search for Lists Search
A book of curves (book, 1963) [worldcat.org]
Get this from a library! A book of curves. [E H Lockwood] Home. WorldCat Home About WorldCat Help Feedback. Search. Search for Library Items Search for Lists Search
Curves - the workout
Curves, The Workout, Discover a gym where women change their lives 30 minutes at a time. Curves is the largest fitness franchise in the world dedicated to providing
Book of curves - e. h. lockwood | ebooks-share.net
Curve - wikipedia, the free encyclopedia
E. H. Lockwood A Book of Curves (1961 Cambridge) External links Edit. Wikimedia Commons has media related to Curves. Famous Curves Index, School of Mathematics and
Book of curves by e. h. lockwood | 9780521044448 |
This book opens up an important field of mathematics at an elementary level, one in which the element of aesthetic pleasure, both in the shapes of the curves and in
The lemniscate of bernoulli - college of the
A Book of Curves by E. H. Lockwood . Title: The Lemniscate of Bernoulli Author: How it works Representing the Lemniscate Finding the Polar equation Polar cont.
Book of curves: amazon.co.uk: e. h. lockwood:
Buy Book of Curves by E. H. Lockwood (ISBN: 9780521044448) from Amazon's Book Store. Free UK delivery on eligible orders.
Citeseerx citation query pedal curves'. in: a
Pedal Curves'. In: A Book of Curves (1967) by E H Lockwood Add To MetaCart. Tools. Sorted by: Results 1 - 1 of 1. Curve and Surface Duals and
Curve generation %a1 v involute and evolute -
Oct 21, 2014 E. H. Lockwood, A Book of Curves, P.167. E. H. Lockwood, A Book of Curves, P.167. E. H. Lockwood, A Book of Curves, P.167 P.168. E. H. Lockwood, A Book
Learn and talk about deltoid curve, algebraic
E. H. Lockwood (1961). "Chapter 8: The Deltoid". A Book of Curves. Cambridge University Press. J. Dennis Lawrence (1972). A catalog of special plane curves.
A book of curves / by e. h. lockwood unknown
A book of curves / by E.H. Lockwood [E. H Lockwood] on Amazon.com. *FREE* shipping on qualifying offers. Hardcover.
Book of curves - e h lockwood - bok
Paperback, 2007. Price 457 kr. Buy Book of Curves (9780521044448) by E H Lockwood at Bokus.com
Citeseerx citation query a book of curves,
CiteSeerX - Scientific documents that cite the following paper: A Book of Curves, Cambridge. Documents; Authors; Tables; Log in; Sign up; by E H Lockwood Venue
Curves health clubs and fitness centers for women
A Curves 30 minute fitness center is a woman's gym that provides a total body workout. With both aerobic exercise for weight loss and strength training for toned
The areas of two equilateral triangles are in the ratio 25:36. Their altitudes will be in the ratio (a) 25:36 (b) 36:25 (c) 5:6 (d)
Question ID - 50109
Solution: Equilateral triangles are similar, and for similar triangles the ratio of areas equals the square of the ratio of corresponding altitudes. The altitudes are therefore in the ratio √25 : √36 = 5 : 6, i.e. option (c).
Next Question :
At a given instant, say t = 0, two radioactive substances A and B have equal activities. The ratio of their activities after time t itself decays with time t as e^(−3t). If the half-life of A is ln 2, the half-life of B is
a. b. 2 ln 2 c. d. 4 ln 2
Let’s talk about security. One of the criteria I wanted to bake into this project early on was security. I’ve worked in security off and on throughout my career, which I mention only to emphasize that writing about embedded security is extremely obtuse. It exists in a state somewhere between “non-existent” and “imagine Ikea instructions for a car engine.” I struggled for days over how to lay out most of this information and I’m still not thrilled with it… so there’s likely going to be a lot of additional writing and probably a revision coming. Anyway, moving right along.
#### Recap
I started with the Raspberry Pi because I knew it would be a good launchpad to do something more interesting and pursue this idea. I got a basic version working as expected. I could manage a Raspberry Pi from a central management machine, up to and including replacing the OS on the unit entirely via the network. Though not exactly how it would work in something like a managed datacenter, the process was remarkably similar.
If you didn’t read the entire previous article, the bootstrapping process goes something like this:
• new machine broadcasts a DHCP request; the DHCP server offers an address along with a ‘next-server’ to begin netbooting over TFTP (maybe PXE)
• PXE, or something behaving like PXE, starts looking for configuration files over TFTP.
• Machine finds a boot configuration and starts booting
• Some sort of inventory + imaging process kicks off and lays down a new OS on the disk
• Maybe some initial configuration is done for “first boot” into the new OS
• Machine is integrated into the fleet
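As a concrete sketch, the DHCP/TFTP half of that list can be served by a single dnsmasq instance. The interface, subnet, hostname, and paths below are placeholder values, not taken from the original setup:

```ini
# /etc/dnsmasq.conf -- minimal netboot service (illustrative values only)
interface=eth0
dhcp-range=10.0.0.50,10.0.0.150,12h
# the 'next-server' equivalent: boot filename, server name, server address
dhcp-boot=bootcode.bin,mgmt,10.0.0.1
# serve the boot files ourselves over TFTP
enable-tftp
tftp-root=/srv/tftp
```

Anything dropped into `tftp-root` is then fetched by the booting client, which is exactly why the next paragraph's threat model matters.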
The first thing I should mention is that any compromise of the management network is a total catastrophe. All of these services assume a secure network and provide little-to-no risk mitigation. For instance, if an attacker’s rogue server is simply faster than the management machine, it can answer the DHCP request first and direct every machine to boot whatever the attacker likes. Or, if you have physical access to the SD card in the Raspberry Pi, it doesn’t matter what’s on that SD card, because it can just be replaced and the network skipped entirely.
### So why do we care & what’s the difference?
The fundamental difference between a shippable datacenter and a traditional one is physical tamper exposure. A traditional datacenter is a fortress. For example, a provider I worked with recently had five biometric checkpoints between the parking lot and the cage with servers. There was a Mantrap door between the parking lot and the lobby you had to enter just to provide your ID so you could authenticate further. If you were unannounced you never even made it to the lobby. Like the scene from Sneakers: “The whole building says ‘Go Away.’”
Physical attacks against this kind of infrastructure usually involve going after the supply chain: finding a way to tamper with a device bound for the cage while it ships from the manufacturer directly to the datacenter, never crossing an office. While definitely possible, it is, relatively speaking, a very expensive and complicated attack.
This shippable datacenter on the other hand has no fortress. It’s everything moving as a unit over some sort of logistics freighting system. Additionally, the precept was that it’s installed in the field without typical solid infrastructure. The possibility someone would have physical access is significantly higher.
### The Requirements
With this framing, let’s talk about how we can build a more secure architecture to meet our needs. Is there a way we can ensure that these machines only run the system that we want them to run? We have three major concerns right now:
• Attacker replaces SD card
• Attacker places TFTP-capable server on the network and begins offering boot info, system image, etc
• Remote verification/attestation
Phrased differently: Is the boot media in the machine the media I expect? Is the replacement content for that boot media served by the network the content I expect? Can I verify the state of that content at arbitrary points?
The rest of this particular article, and indeed the start of the next several articles, is going to concern a survey of the major secure boot options available.
#### What is secure boot?
I’m going to be slightly reductive and say secure boot is the process where a trusted piece of firmware verifies cryptographic signatures on every successive stage of booting. The firmware verifies the bootloader, and the bootloader verifies the OS.
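As a toy model of that chain, here is a hash-based stand-in (real secure boot uses asymmetric signatures, not pinned hashes, but the shape is the same). Everything here is illustrative; the names are mine:

```python
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# each boot stage "ships" pinned digests of what it is allowed to load next
bootloader = b"fake bootloader image"
kernel = b"fake kernel image"
firmware_trusts = digest(bootloader)   # burned into ROM/fuses on real hardware
bootloader_trusts = digest(kernel)     # embedded in the (signed) bootloader

def verified_load(blob: bytes, expected: str) -> bytes:
    # refuse to hand control to anything that doesn't match the pinned digest
    if digest(blob) != expected:
        raise RuntimeError("verification failed: refusing to boot")
    return blob

stage1 = verified_load(bootloader, firmware_trusts)    # firmware -> bootloader
stage2 = verified_load(kernel, bootloader_trusts)      # bootloader -> OS
tampered = kernel + b" with rootkit"
# verified_load(tampered, bootloader_trusts) raises RuntimeError
```

Swap the pinned hashes for signature checks against a public key stored in ROM or fuses and you have the real thing: each stage only runs what the previous, already-trusted stage has verified.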
#### Raspberry Pi (In)Security
While the Raspberry Pi is a superb educational tool and a great prototyping platform, it provides absolutely no security. None. If there’s even a chance that a system compromise would have negative consequences with real-world fallout, it would be unethical to deploy the Raspberry Pi. Key takeaways, assorted notes, and what I mean by no security:
• The Raspberry Pi bootloader has no notion of authentication. If it finds bootcode.bin and start.elf on the SD card it’s going to boot from it. Similarly, if the unit has been programmed for USB boot it will try the network. If a TFTP server offers bootcode.bin and start.elf it will load those.
• bootcode.bin and start.elf are, themselves, not authenticated. If someone reverse engineers these (currently closed-source) blobs, they can control the boot process. Since the Raspberry Pi technically boots the CPU from the GPU, the CPU will start in a potentially unknown state.
• There’s no way to specify the boot order. First thing found offering bootcode.bin and start.elf gets to dictate the rest of the process. SD card is always checked first.
• The SoC (System on a chip) is the Broadcom BCM2837B0. All the security features that exist in the underlying ARM Cortex-A53 cores have been disabled in silicon. https://github.com/christinaa/rpi-open-firmware/issues/37#issuecomment-388551489
• No TPM (trusted platform module) for key storage. Encrypting the SD card is useless since it would have to contain the key in plaintext for booting. Requiring a password 1) defeats the purpose of remote deployment, and 2) the lack of something like SOL (Serial over LAN) makes inputting it remotely difficult.
What are the alternatives in the space? Does any single-board computer offer a compelling security story? As of this writing there are a few different competing specifications for implementing secure boot on ARM, or even x86, based architectures.
### The Options (Sort of Specification-wise)
This list, while not exhaustive, covers the major options for securely booting a single board computer (SBC).
• ARM Trusted Firmware / Trusted Board Boot
• High Assurance Boot (Freescale iMX SoCs only)
• U-Boot (Sort of…)
• UEFI Secure Boot
#### ARM Trusted Board Boot (TBB)
Technically this is a thing, but in practice, it’s mostly just a document. ARM’s ecosystem, much like the Android ecosystem, is severely fragmented. ARM manufactures the Cortex series cores that are used in nearly every mainstream SoC available today. Despite those cores implementing the features required to support things like ARM TrustZone and TBB, they require cooperation from both the SoC manufacturer and the SBC manufacturer. The SoC manufacturer has to wire them in and integrate them correctly. The SBC manufacturer has to work with the SoC manufacturer to make sure their bootloader can initialize everything correctly and securely. As you’ll recall, above, I said the Broadcom SoC in the Raspberry Pi has mostly disabled the required security features.
Additionally, before you can have TBB, you must meet TBBR (Trusted Board Boot Requirements)… which is another onerous specification. So, assuming the manufacturer has done everything for TBBR, they can begin cloning and modifying ARM’s Trusted Firmware reference implementation <https://github.com/ARM-software/arm-trusted-firmware>. This reference is itself a living, breathing thing, so unless a manufacturer keeps their proprietary fork up to date as pieces land in the formal spec, that manufacturer’s firmware releases won’t support them.
A handful of manufacturers, such as Xilinx, claim to offer various levels of TBB implementation, but finding out whether a specific SBC has support is a documentation nightmare. The spec has been around for years, but unless people start clamoring for it, adoption looks like it will remain low. Matteo Carlini has a good presentation at Linaro Connect about the spec (http://connect.linaro.org/resource/sfo17/sfo17-201/), which also mentions how UEFI (below) can co-exist with it.
Theoretically this specification allows for remote attestation provided that the ARM Trusted Firmware implementation sufficiently implements TrustZone. Unfortunately, given that it’s up to every individual manufacturer, this has to be evaluated on a case-by-case basis.
#### High Assurance Boot
First off, it’s a proprietary option made by Freescale (which merged into NXP, which Qualcomm then moved to acquire, so maybe we’ll see it elsewhere?) and thus only works on boards using an i.MX SoC. Under the hood it works like most of the options out there: whatever you want to boot requires a signature that is validated before execution. What makes HAB nice is that the keys are write-protected behind one-time programmable (OTP) fuses, so we can permanently fuse the keys to the board. Others might disagree, but I like this no-take-backsies approach.
Things I’m less fond of:
#### U-Boot (Sort of)
I’m including this, despite it not being a specification, because there’s a handful of devices out there that basically make the pitch “you always have the option of compiling u-boot for your particular board and using that for secure boot.” Unless there’s a way to:
• permanently fuse your compiled version of u-boot with keys into something like onboard SPI flash
• guarantee it’s the only boot option
• make sure it always executes no matter what
… then it’s not much better than the Raspberry Pi option. Secure boot is only viable if the attacker can’t replace the bootloader; this is why Apple pays a $200K bug bounty on bootloader vulnerabilities.
While U-Boot can use a TPM, it doesn’t require one, so remote attestation remains tenuous.
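For context on what “compiling u-boot for secure boot” actually involves: U-Boot’s verified boot scheme signs a FIT image described by an .its source file roughly like the following sketch (file names and the key hint are placeholders):

```dts
/dts-v1/;
/ {
    images {
        kernel-1 {
            data = /incbin/("zImage");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "none";
            signature-1 {
                algo = "sha256,rsa2048";
                key-name-hint = "dev";  /* expects dev.key / dev.crt */
            };
        };
    };
};
```

Even with a signed FIT, the three bullets above still apply: the U-Boot binary doing the verifying must itself be immutable, or the whole chain is moot.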
#### UEFI Secure Boot
In my opinion this is the best option available today. It has the added benefit of being the de-facto standard on servers for years, as well as the standard advocated by several industry juggernauts (Microsoft, Intel, HP, etc.). UEFI in general has been around for nearly 20 years and has a full ecosystem around it. I’m not sure when secure boot showed up specifically, but it was common enough that in 2011 Microsoft was comfortable mandating it for Windows 8 hardware certification.
Unfortunately, as far as SBCs go, it’s only implemented on a few very select boards, most of which are manufactured by Intel. This is because Tiano (EDK2), the reference UEFI implementation, is also made by Intel. One major thing to keep in mind: UEFI secure boot verifies signatures on its own, but measured boot and remote attestation require a TPM. So boards like the BeagleBoard supporting EDK2/Tiano are available, but actually implementing the full story requires some sort of DIY TPM add-on.
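Once a board is UEFI-booted, you can check the Secure Boot state from Linux by reading the `SecureBoot` EFI variable. A small hypothetical helper, assuming the efivarfs layout of a 4-byte attribute word followed by the payload (a single 0/1 byte here):

```python
# GUID is the standard EFI global variable namespace
EFIVAR = ("/sys/firmware/efi/efivars/"
          "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def secure_boot_enabled(raw: bytes) -> bool:
    # raw = 4-byte attribute word + 1 data byte on efivarfs
    if len(raw) < 5:
        raise ValueError("unexpected efivar length")
    return raw[4] == 1

# usage (on a real UEFI system):
#   with open(EFIVAR, "rb") as f:
#       print(secure_boot_enabled(f.read()))
```

This only tells you the firmware claims secure boot is on, of course; attestation of what was actually booted is the TPM’s job.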
### The Best Option So Far
Is there a board out there that actually implements a reasonable security story? The best one I’ve managed to locate yet is the Minnowboard Turbot.
It’s manufactured by Intel and uses the reference Tiano/EDK2 UEFI firmware. Additionally, it supports the newer firmware TPM (fTPM) 2.0 specification, so it actually includes a proper chain both for making sure the board only boots what we want and for verifying the firmware remotely afterward. An interesting side note: it’s one of the few SBCs that doesn’t use ARM, using an Intel Atom E3826 or E3845 depending on the version.
I’ve gone ahead and set up the dual-core Turbot board. After working with the Raspberry Pi (which I still think is great!), the Turbot feels absolutely luxurious. It more or less boots and behaves like a rack-mounted server thanks to the full UEFI implementation. Further, it doesn’t require any special steps in constructing boot media, so using the Ubuntu base server image on USB was seamless. And getting secure boot working with the fTPM, while not trivial, was at least straightforward.
### Next up!
Next in the sequence I’ll talk about getting the Minnowboard’s UEFI secure boot process set up, along with TPM basics, so we can start building a closed chain of trust.
If you liked the above and want to see more examples and how they interrelate, then this section is for you.
The raw slides from the Linaro Connect presentation
^^ Worth reading to get a grasp of how gnarly these specifications can be on the inside.
Android verified boot
^^ Though it mostly applies to handsets, the underlying tech is very similar. General purpose computers/servers are lagging behind what Android is doing in embedded security.
UEFI and ARM living together discussion
^^ Solid discussion of how they live together and can support one another.
20 ways past secure boot
^^ Fun presentation by Job de Haas on how many ways you can do secure boot incorrectly and render it useless. Great survey of attacks.
Tiano Core, getting started
^^ Provides a great starting place with TianoCore. Particularly interesting if you want to see how the sausage is made regarding UEFI.
Take Control of Your PC with UEFI Secure Boot, Linux journal
^^ Provides an excellent discussion on the individual administrative steps of UEFI and how to do them with OpenSSL.
# Homework 9 (Due. Mar 30)
Griffiths makes several statements and claims in Chapter 9 without really working out all the details. So please show what you feel are the main steps/big ideas in the questions below:
## 1. Maintaining polarization
In the lecture notes (and/or Griffiths section 9.3.2), we worked out the case of normally incident light, and claimed that reflected and transmitted waves have the same polarization as the incident wave (on the x axis). Prove that this must be true!
## 2. The wave equation and boundary conditions
1. Starting with Maxwell’s equation in matter (in terms of the $\mathbf{D}$ and $\mathbf{H}$ fields) show that, for a linear homogeneous dielectric ($\mathbf{D} = \varepsilon \mathbf{E}$, $\mathbf{B} = \mu \mathbf{H}$) with no free charges or currents ($\rho_{free} = 0$, $\mathbf{J}_{free} = 0$), both the $\mathbf{E}$ and the $\mathbf{B}$ fields obey a wave equation with a wave speed given by $v=1/\sqrt{\mu \varepsilon}$.
2. Starting from the same equations as in part 1, rewrite them in integral form, and then briefly sketch out for us the reasoning which leads to all the boundary conditions on $\mathbf{E}$ and $\mathbf{B}$ at a planar interface between two different linear materials (labeled 1 and 2), with permittivities and permeabilities $\varepsilon_1$, $\mu_1$ and $\varepsilon_2$, $\mu_2$, respectively. (Again, assume no free charge or current densities)
## 3. Reflection and Transmission
In Griffiths’ section 9.3.2 (Reflection and transmission at normal incidence) he finds reflection and transmission coefficients ($R$ and $T$), but he has made the assumption that $\mu_1 = \mu_2 = \mu_0$. Drop that assumption, i.e. keep $\mu_1$ and $\mu_2$ general, and find the general formulas for $R$ and $T$. To check, explicitly confirm that $R+T=1$, still (as it must be). Hint: Don’t redo work Griffiths has done for you. Use whatever you need from section 9.3.2, just be careful not to use results where he has assumed $\mu_1 = \mu_2 = \mu_0$. I claim you can express your final results for $R$ and $T$ purely as very simple functions of $\beta$ only!
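As a sanity check on where this should land (derived from the general boundary conditions, so verify it rather than trust it): with Griffiths’ $\beta \equiv \mu_1 v_1 / (\mu_2 v_2)$, keeping $\mu_1 \neq \mu_2$ gives

```latex
R = \left(\frac{1-\beta}{1+\beta}\right)^{2}, \qquad
T = \frac{\varepsilon_2 v_2}{\varepsilon_1 v_1}\left(\frac{2}{1+\beta}\right)^{2}
  = \frac{4\beta}{(1+\beta)^{2}}, \qquad
R + T = \frac{(1-\beta)^{2} + 4\beta}{(1+\beta)^{2}} = 1 .
```

The middle step uses $\varepsilon v = 1/(\mu v)$, so $\varepsilon_2 v_2 / (\varepsilon_1 v_1) = \mu_1 v_1 / (\mu_2 v_2) = \beta$, which is exactly why everything collapses to functions of $\beta$ alone.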
## 4. Perpendicular polarization
In the lecture notes (and/or Griffiths 9.3.3), we worked out the case of reflection and transmission at any angle. But we considered the case where the incident E-field is polarized in the plane of incidence. Go through that section again, but work out the different case where the E-field is polarized perpendicular to the plane of incidence. (You may once again assume $\mu_1=\mu_2=\mu_0$.) Specifically, what I mean by “work out” is:
1. Make a clear sketch (modeled on Griffiths figure 9.15) of the geometry and angles for this case. Then, write out what the four boundary conditions become in this case (i.e. modify Griffiths Eq 9.101 through 9.104 appropriately for this new situation).
2. Find the new “Fresnel Equations”, i.e. a version of Eq 9.109, but for this polarization case. Explicitly check that your Fresnel equations reduce to the proper results at normal incidence!
3. Replicate Griffiths Figure 9.16 (but of course for this perpendicular polarization case). Use a Jupyter notebook please. Assume $n_2/n_1=2.0$. Briefly discuss what is similar, and what is different, about this case compared with what Griffiths solved. Is there a “Brewster’s angle” for your situation, i.e. a non-trivial angle where the reflection becomes zero?
4. Again in a Jupyter notebook, replicate Griffiths Figure 9.17 (the one at the end of 9.3) but again, for this perpendicular polarization case, and again assuming $n_2/n_1=2.0$ and again using a computer to plot. Show using your graph that $R+T=1$ for this situation, no matter what the angle. Briefly, comment on the physics!
Turn in parts 3 and 4 using your personal repository.
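A minimal sketch of the computation behind parts 3 and 4, using the standard perpendicular-polarization (s) Fresnel amplitudes with $\mu_1=\mu_2=\mu_0$; all variable names here are my own:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; omit this line inside a notebook
import matplotlib.pyplot as plt

n1, n2 = 1.0, 2.0                                   # n2/n1 = 2.0 per the problem
theta_i = np.linspace(0.0, 0.999 * np.pi / 2, 500)  # angle of incidence
theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)      # Snell's law

# s-polarization amplitude coefficients (E perpendicular to plane of incidence)
r = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
    (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
t = 2 * n1 * np.cos(theta_i) / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))

R = r ** 2
T = (n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i)) * t ** 2  # power ratio

plt.plot(np.degrees(theta_i), R, label="R")
plt.plot(np.degrees(theta_i), T, label="T")
plt.xlabel("angle of incidence (degrees)")
plt.legend()
```

Computing $T$ independently (rather than as $1-R$) makes the $R+T=1$ check in part 4 a genuine test rather than a tautology.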
# Assignment 5: Reinforcement Learning Solution to Visual Tic-Tac-Toe¶
Xuehao(David) Hu
## Overview¶
Modify the Tic-Tac-Toe code presented in lecture notes 25 Tic-Tac-Toe with Neural Network Q Function. Instead of using the board representation of nine integers with values -1, 0 or 1, you will represent the board with an image consisting of 0's for empty pixels and 1's for filled pixels.
## Requirements¶
In [209]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import neuralnetworksbylayer as nn
import random
from copy import copy
In [46]:
#a function used to draw the three graphs at the bottom of this notebook
# first: wins,draws,losses per batch
# second: Epsilon per batch
# third: TD error per SCG iteration
def plotStatus(batch,nnetQ):
plt.subplot(3,1,1)
avgOver = 50
n = int(len(finalR)/avgOver) * avgOver # finalR is a global list of each game's final reward
wins = (np.array(finalR)==1)[:n].reshape((-1,avgOver)).sum(1)
draws = (np.array(finalR)==0)[:n].reshape((-1,avgOver)).sum(1)
losses = (np.array(finalR)==-1)[:n].reshape((-1,avgOver)).sum(1)
plt.plot(wins)
plt.plot(draws)
plt.plot(losses)
plt.ylim(0,avgOver)
plt.xlabel('Avg over batches of '+str(avgOver)+' games')
plt.subplot(3,1,2)
plt.plot(epsilonTrace[:batch])
plt.ylabel("epsilon")
plt.xlabel('Batches')
plt.ylim(0,1)
plt.subplot(3,1,3)
plt.plot(nnetQ.getErrorTrace())
plt.ylabel('TD Error')
plt.xlabel('SCG Iterations')
plt.tight_layout()
In [47]:
#helper functions for makeImage
#drawCircle(boardArray, xStart, yStart) / drawCross(boardArray, xStart, yStart)
#Params: boardArray (21 by 21 matrix);
#        xStart and yStart are the coordinates in boardArray
#        of the top-left point of the 7 by 7 symbol
#Return: modified boardArray (after inserting the symbol)
def drawCircle(boardArray, xStart, yStart):
    # offsets of the filled pixels tracing a 7x7 circle
    points = [(2, 0), (3, 0), (4, 0), (1, 1), (0, 2), (0, 3), (0, 4),
              (1, 5), (2, 6), (3, 6), (4, 6), (5, 5), (6, 4), (6, 3),
              (6, 2), (5, 1)]
    for dx, dy in points:
        boardArray[xStart + dx][yStart + dy] = 1
    return boardArray

def drawCross(boardArray, xStart, yStart):
    # the two diagonals of the 7x7 cell
    for d in range(7):
        boardArray[xStart + d][yStart + d] = 1
        boardArray[xStart + 6 - d][yStart + d] = 1
    return boardArray
In [71]:
# Return if current board is full or not (check if there is any slot is still in state 'draw')
# Param: an array of state value in it
# return: full status
def full(state):
return not np.any(state == 0)
# Draw a 21 by 21 board with X and O in it per state
# Param: an array of state value( 1 by 9 row array)
# Return: an row array(1 by 441)
def makeImage(state):
# add creation of image here, and return as row vector
boardArray = np.zeros((21,21))
state = np.asarray(state)
state = state.reshape((3,3));
for i in range(3):
for j in range(3):
xStart=i*7
yStart=j*7
if state[i][j]==1:
boardArray=drawCross(boardArray,xStart,yStart)
if state[i][j]==-1:
boardArray=drawCircle(boardArray,xStart,yStart)
plt.imshow(boardArray,cmap='Greys', interpolation='nearest');
boardArray = boardArray.reshape((1,441))
return boardArray
# Initialize a 3 by 3 board state array with all
#zeros, draw a 21X21 image from this initial state
def initialState():
state = np.zeros((1,9)) # 0 means empty cell
return state, makeImage(state) # second item is features
#apply a move (action) for X to the current state, then make a random move for O.
#Returns the next state and its image features.
def nextState(state, action):
    newstate = copy(state)
    if newstate[0, action] == 0:   # only place X on an empty cell
        newstate[0, action] = 1    # 1 for X
    # Now O moves, randomly, if board not full
    if not full(newstate):
        moveO = np.random.choice(np.where(newstate[0, :] == 0)[0])
        newstate[0, moveO] = -1    # moveO is drawn from the empty cells
    features = makeImage(newstate)
    return newstate, features
#check whether any winning scenario for X is matched: return 1 if X wins,
#-1 if O wins, 0 for a draw, and None if the game is not over
def reinf(oldstate, state):
combos = np.array((0,1,2, 3,4,5, 6,7,8, 0,3,6, 1,4,7, 2,5,8, 0,4,8, 2,4,6))
if np.any(np.all(1 == state[:,combos].reshape((-1,3)), axis=1)):
finalR.append(1)
return 1 # X wins
if np.any(np.all(-1 == state[:,combos].reshape((-1,3)), axis=1)):
finalR.append(-1)
return -1 # O wins
if full(state):
finalR.append(0)
return 0 # draw
return None # not done
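The `combos` indexing trick above checks all eight winning lines at once; a minimal standalone sketch of the same check (hypothetical boards, X as 1 and O as -1):

```python
import numpy as np

# The eight winning lines (rows, columns, diagonals), flattened as in reinf().
combos = np.array((0, 1, 2,  3, 4, 5,  6, 7, 8,
                   0, 3, 6,  1, 4, 7,  2, 5, 8,
                   0, 4, 8,  2, 4, 6))

def wins(state, player):
    # state is a (1, 9) row vector; indexing with combos and reshaping to
    # (-1, 3) gives one row per winning line, all checked in one call.
    return bool(np.any(np.all(state[:, combos].reshape((-1, 3)) == player, axis=1)))

top_row_x = np.array([[1, 1, 1, -1, -1, 0, 0, 0, 0]])
print(wins(top_row_x, 1), wins(top_row_x, -1))  # True False
```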
In [72]:
# Cross side loses
########################################
# second-to-last step, before the circle side wins
plt.subplot(2,1,1)
features1 = makeImage([[1, 0, 0,
                        0,-1, 0,
                       -1, 1, 1]])
# after the circle is placed in the top-right cell, completing O's diagonal
plt.subplot(2,1,2)
features1 = makeImage([[1, 0,-1,
                        0,-1, 0,
                       -1, 1, 1]])
In [73]:
# Cross side wins
########################################
# second-to-last step, before the cross side wins
plt.subplot(2,1,1)
features1 = makeImage([[1,-1, 0,
                        1, 0, 0,
                       -1,-1, 1]])
# after the cross is placed in the center cell, completing X's diagonal
plt.subplot(2,1,2)
features1 = makeImage([[1,-1, 0,
                        1, 1, 0,
                       -1,-1, 1]])
In [74]:
def makeSamples(nnet, initialStateF, nextStateF, reinforcementF, nGames, epsilon):
    nSamples = nGames * 5
    X = np.zeros((nSamples, nnet.layers[0].nInputs))
    R = np.zeros((nSamples, 1))
    Qn = np.zeros((nSamples, 1))
    step = 0
    for game in range(nGames):
        state, features = initialStateF()
        done = False
        action = None  # to mark first move
        while not done:
            validActions = np.where(state[0, :] == 0)[0]
            nextAction, nextQ = epsilonGreedy(nnet, state, validActions, epsilon)
            nextState, nextFeatures = nextStateF(state, nextAction)  # update state, sn from s and a
            rn = reinforcementF(state, nextState)  # calculate resulting reinforcement
            if action is not None:  # not first move
                if rn is None:
                    # Game continues
                    X[step, :] = np.hstack((features, action))
                    R[step, 0] = 0
                    Qn[step, 0] = nextQ
                else:
                    # Game over
                    X[step, :] = np.hstack((features, action))
                    R[step, 0] = rn
                    Qn[step, 0] = 0  # no next Q
                    done = True  # to break out of one game's loop
            state = nextState
            action = nextAction
            features = nextFeatures
            step += 1
    return (X, R, Qn, epsilon)
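The `R` and `Qn` arrays assembled above exist only to form the one-step TD targets `R + gamma*Qn` that the network is trained toward; a tiny numeric check of that target construction, with hypothetical sample values:

```python
import numpy as np

gamma = 0.8
# Three collected samples: two mid-game moves and one terminal (winning) move.
R  = np.array([[0.0], [0.0], [1.0]])   # reinforcement observed after each move
Qn = np.array([[0.5], [0.25], [0.0]])  # Q of the next state-action; 0 when terminal
targets = R + gamma * Qn               # the values passed to nnetQ.train
print(targets.ravel())
```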
In [75]:
def epsilonGreedy(nnet, state, validActions, epsilon):
    if np.random.uniform() < epsilon:
        # Random move
        action = np.random.choice(validActions)
        action = np.array([[action]])
    else:
        # Greedy move, over actions in random order to break ties among equal Q values
        actionsRandomlyOrdered = random.sample(validActions.tolist(), len(validActions))
        Qs = [nnet.use(np.hstack((makeImage(state), np.array([[a]]))))
              for a in actionsRandomlyOrdered]
        ai = np.argmax(Qs)
        action = actionsRandomlyOrdered[ai]
        action = np.array([[action]])
    Q = nnet.use(np.hstack((makeImage(state), action)))
    return action, Q
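The epsilon-greedy trade-off itself can be exercised without the network; a minimal sketch where a hypothetical fixed Q table stands in for `nnet.use`:

```python
import numpy as np

def epsilon_greedy(q_values, valid_actions, epsilon, rng=np.random.default_rng(0)):
    # With probability epsilon pick a random valid action (explore),
    # otherwise pick the valid action with the highest Q value (exploit).
    if rng.uniform() < epsilon:
        return int(rng.choice(valid_actions))
    return int(valid_actions[np.argmax(q_values[valid_actions])])

q = np.array([0.1, 0.9, -0.2, 0.0, 0.5, 0.3, 0.0, 0.0, 0.0])  # hypothetical Qs
valid = np.array([0, 1, 4])

# epsilon = 0 is fully greedy: always the best valid action (cell 1 here).
print(epsilon_greedy(q, valid, epsilon=0.0))  # 1
```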
In [174]:
from IPython.display import display, clear_output
fig = plt.figure(figsize=(5,8))
# Parameters
nGames = 5000 #5000
nGamesPerBatch = 100
nBatches = int(nGames/nGamesPerBatch)
print('Running for',nBatches,'batches of',nGamesPerBatch,'games per batch.')
nReplay = 0
nSCGIterations = 10
nHidden = [5]
gamma = 0.8
finalEpsilon = 0.00001  # value of epsilon at end of simulation; decay rate is calculated from it
epsilonDecay = np.exp(np.log(finalEpsilon)/nBatches)
print('epsilonDecay is',epsilonDecay)
# Number of inputs is the 441 pixels of the board image plus 1 for the action
nnetQ = nn.NeuralNetwork([441+1] + nHidden + [1])
nnetQ.setInputRanges([[-1,1]]*441 + [[0,8]])
finalR = []
epsilonTrace = np.zeros((nBatches,1))
epsilon = 1
for batch in range(nBatches):
    X, R, Qn, epsilonjunk = makeSamples(nnetQ, initialState, nextState, reinf,
                                        nGamesPerBatch, epsilon)
    nnetQ.train(X, R + gamma*Qn, nIterations=nSCGIterations)
    for nq in range(nReplay):
        Qn[:-1, :] = nnetQ.use(X[1:, :])
        nnetQ.train(X, R + gamma*Qn, nIterations=nSCGIterations)
    epsilonTrace[batch] = epsilon
    epsilon *= max(epsilonDecay, 0.01)
    if True:
        plt.clf()
        plotStatus(batch, nnetQ)
        clear_output(wait=True)
        display(fig)
        plt.draw()
        plt.pause(0.01)
clear_output(wait=True)
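The decay factor is chosen so that multiplying epsilon by it once per batch takes it from 1 down to `finalEpsilon` after `nBatches` batches; a quick check of that schedule (the `max(epsilonDecay, 0.01)` floor in the loop does not bind for these values):

```python
import numpy as np

nBatches = 50          # e.g. 5000 games / 100 games per batch
finalEpsilon = 0.00001
# Solve epsilonDecay ** nBatches == finalEpsilon for epsilonDecay.
epsilonDecay = np.exp(np.log(finalEpsilon) / nBatches)

epsilon = 1.0
for _ in range(nBatches):
    epsilon *= epsilonDecay
print(np.isclose(epsilon, finalEpsilon))  # True
```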
Experiment with the number of games, number of games per batch, number of SCG iterations per batch, number of hidden layers and units in each layer, gamma, and final epsilon
# 1. Change the number of games from 100000 to 50000 and to 5000 (with other parameters unchanged)
In [122]:
import matplotlib.image as mpimg
#numberOfGames:100000,nGamesPerBatch=100,nSCGIterations = 10,nHidden = [5],gamma = 0.8,finalEpsilon = 0.00001
#plt.figure(2)
plt.subplot(131)
imgplot = plt.imshow(img100000)
#numberOfGames:50000,nGamesPerBatch=100,nSCGIterations = 10,nHidden = [5],gamma = 0.8,finalEpsilon = 0.00001
plt.subplot(132)
imgplot = plt.imshow(img50000)
#numberOfGames:5000,nGamesPerBatch=100,nSCGIterations = 10,nHidden = [5],gamma = 0.8,finalEpsilon = 0.00001
plt.subplot(133)
imgplot = plt.imshow(img5000)
I can draw the following conclusions from the three graphs: 1. With other parameters unchanged, a larger number of games helps the number of wins reach its threshold and stay high; this can be seen by comparing both the first and second graphs with the third. 2. But too many games does not really increase the number of winning games further, as can be seen by comparing the first and second graphs.
# Set the number of games to 50000, and compare 100 games per batch with 300 games per batch
In [126]:
#numberOfGames:50000,nGamesPerBatch=100,nSCGIterations = 10,nHidden = [5],gamma = 0.8,finalEpsilon = 0.00001
plt.subplot(121)
imgplot = plt.imshow(img100batch)
#numberOfGames:50000,nGamesPerBatch=300,nSCGIterations = 10,nHidden = [5],gamma = 0.8,finalEpsilon = 0.00001
plt.subplot(122)
imgplot = plt.imshow(img300batch)
As we can see from the above two graphs (100 games per batch on the left, 300 games per batch on the right), more games per batch helps the wins, and the TD error fluctuates less. This makes sense because with more games per batch you try more greedy steps and have more chances of finding a better move, which avoids a possible bad move that would otherwise make the win count fluctuate.
# Number of hidden layers and units in each layer, gamma, the final epsilon, nSCGIterations
[1 Number of hidden layers] More layers took a long time to run on my laptop, but, as expected, they increased the wins with the other parameters unchanged.
[2 Gamma] Gamma represents how strongly we reinforce the learning with future rewards. From the dynamic output graph, a bigger gamma sometimes causes a large fluctuation at some point; this makes sense because a bigger gamma pushes the learner toward the apparent "best next move", which may turn out to be a bad move in the long run. A smaller gamma learns more steadily: it may not find the "best next move" immediately, but by trying more possible moves during training it learns to find the best move in the long run.
[3 Epsilon] A bigger final epsilon keeps the learner making some random moves even as it goes through more games. From the output, a smaller final epsilon helps the wins reach their threshold earlier, but they fluctuate more than with a bigger epsilon. This makes sense because a smaller epsilon means greedier steps, which, like a large gamma, tries to make the best moves as early as possible; the downside is that it probably misses some tries that could increase the wins. A bigger epsilon makes the learner take more random moves, which lets it explore more possible moves, especially when the game board gets bigger; it then learns more steadily and achieves better wins in the long run.
[4 nSCGIterations] Scaled conjugate gradient fits the model on each batch. I did not find a perfect iteration count for 300 games per batch, but for a given number of games per batch we want an appropriate iteration count to fit the games: too many iterations cause overfitting, and thus larger error, while too few iterations do not fit the games well, which also causes errors.
In [208]:
theTuple = [layer.W[:441,0] for layer in nnetQ.layers]
#print(theTuple)
weights = np.asarray(theTuple)
#print(weights[0])
weights = weights[0].reshape((21,21))
#print(weights.shape)
plt.imshow(weights,cmap='Greys', interpolation='nearest');
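The cell above only reshapes the first unit's incoming weights; the same trick extends to all five hidden units. A sketch of preparing one 21x21 image per unit, with a random matrix standing in for `nnetQ.layers[0].W` (441 board-pixel inputs plus 1 action input, 5 hidden units):

```python
import numpy as np

# Hypothetical stand-in for nnetQ.layers[0].W.
W = np.random.randn(442, 5)

# Drop the action-input row and fold each unit's 441 board weights into 21x21,
# ready to show with plt.imshow exactly as in the cell above.
unit_images = [W[:441, unit].reshape((21, 21)) for unit in range(W.shape[1])]
print(len(unit_images), unit_images[0].shape)  # 5 (21, 21)
```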
Come up with some ways to visualize what the units in the hidden layers are doing. For example, you can visualize the weights in the first layer's hidden units as a square image corresponding to the board. You can also find samples for which a unit's output is the highest.
Above is the image of the weights in the only hidden layer (as I have nHidden = [5], there are 5 units); the image shown is the first unit's weights reshaped to the 21x21 board.
From this final state, we can clearly see three Xs on the left, which makes the cross side win; this is expected, as we are training the neural network to win as often as possible.
In [212]:
#this is the final state we got above for
#nGames = 5000;nGamesPerBatch = 100;nBatches = int(nGames/nGamesPerBatch);nSCGIterations = 10
#nHidden = [5]; gamma = 0.8;finalEpsilon = 0.00001
plt.subplot(111)
|
|
Originally Posted by lori
Ooops, I didn't mean to triple my numbers....
I meant 100 feet of yellow and 50 feet of grey, total. Sorry! I really don't need that much.
Fixed.
Originally Posted by headchange4u
opie, I totally forgot to adjust my quantity. If it's not too late I want an additional 100' of the red 1.75, bringing my total to 200'. If it's too late then no big deal; I'll stick with 100'.
Done.
Originally Posted by jbryan
If it isn't too late add 50' of yellow 1.75 to my order. I think I could use it for guy lines.
Done.
|
|
VII Training Course in the Physics of
Correlated Electron Systems and High-Tc Superconductors
Vietri sul Mare (Salerno) Italy
14 - 25 October 2002
Participant Seminar Abstracts
15/10/2002
Dr. Luigi Amico
Dipartimento di Metodologie Fisiche e Chimiche per l'Ingegneria, Università di Catania
Scaling of entanglement close to quantum phase transitions
Abstract: We discuss the entanglement near a quantum phase transition by analyzing the properties of the concurrence for a class of exactly solvable models in one dimension. We find that entanglement can be classified in the framework of scaling theory. Further, we reveal a profound difference between classical correlations and the non-local quantum correlation, entanglement: the correlation length diverges at the phase transition, whereas entanglement in general remains short ranged.[ Nature 416, 608 (2002)]
15/10/2002
Prof. Victor Yarzhemsky
Institute of General and Inorganic Chemistry of RAS
Space-group approach to the wavefunction of a Cooper pair and its applications to high-temperature superconductors
Abstract: A standard theory of space groups based on the induced representation method is applied to construct Cooper pair wavefunctions as zero-total-momentum states obeying the Pauli exclusion principle. It is shown that in many cases the results are similar to those of the point-group approach. The differences between the space-group approach and the point-group approach are discussed. The method is applied to UPt3, for which E2u symmetry of the superconducting order parameter is obtained, and to perovskite systems such as high-temperature superconductors and Sr2RuO4. The influence of different types of time-reversal symmetry violation on the SOP structure is discussed (see papers 1, 2 and 5)
16/10/2002
Dr. Balazs Dora
Department of Physics, Budapest University of Technology and Economics
Unconventional density waves in quasi-one dimensional systems
Abstract: We consider the possibility of formation of unconventional charge and spin density waves (UCDW, USDW) in quasi-one dimensional electronic systems. In analogy with unconventional superconductivity, we develop a mean field theory of UDW allowing for momentum dependent gap on the Fermi surface. Conditions for the appearance of such a low temperature phase are investigated. The thermodynamic properties are found to be very similar to those of d-wave superconductors. The linear (optical conductivity) and nonlinear (threshold electric field) response is calculated. These theoretical results describe convincingly the low temperature phase of the $\alpha$-(BEDT-TTF)$_2$KHg(SCN)$_4$ salt.
18/10/2002
Dr. Anna Posazhennikova
Katholieke Universiteit Leuven
On the toy model of pseudogap
Abstract: The problem of pseudo-gap formation in an electronic system, induced by the fluctuations of the order parameter is revisited. We make the observation that a large class of current theories are theoretically equivalent to averaging the Free energy of the pseudo-gap system over quenched-disordered distribution of the order parameter. We examine the cases of both infinite and finite correlation length, showing how the interplay of pseudo-gap formation and superconductivity can be treated in this approach.
21/10/2002
Mr. Marcin Raczkowski
Institute of Physics, Jagellonian University
Competition between Vertical and Diagonal Static Stripes in the HF approximation
Abstract: The charge localization and the tendency of doped holes toward self-organization into striped patterns is one of the most interesting topics in the physics of high-$T_c$ superconductors. A qualitative picture of stable static stripe phases can be given within the single-band Hubbard model using the Hartree-Fock approximation. Here we investigate the properties and stability of the filled (one doped hole per stripe site) vertical stripes (VS) and diagonal stripes (DS) by varying the on-site Coulomb repulsion $U$ for two representative doping levels $x=1/8$ and $x=1/6$, and reveal the microscopic reasons for the observed transition from VS to DS with increasing $U$. In the weak-coupling regime of $U=4t$, where $t$ is the hopping element, the stability of VS is best explained by the solitonic mechanism, which leads to a kinetic energy gain due to hopping perpendicular to the stripes. On the contrary, the stability of DS in the strong-coupling regime of $U=6t$ is less obvious. We show that the charge densities along DS ($m_i^z=0$) are lower than along VS, and the nonequivalent atoms within antiferromagnetic domains in the case of DS have larger site magnetization densities and consequently lower probabilities of double occupancy. Hence, DS have a more favorable potential energy, which explains their stability in the large-$U$ regime.
22/10/2002
Institut für Theoretische Physik III, Universität Stuttgart
Quantum Monte Carlo study of confined fermions in 1-D optical lattices
Abstract: Quantum Monte Carlo simulations are used to study the ground state of the one dimensional fermionic Hubbard Model in a harmonic trap. Local phases appear in the system and a local order parameter is defined to characterize them. The establishment of the Mott phase does not proceed via the traditional quantum phase transition. Important implications for the experimental study of these systems are deduced.
23/10/2002
Marian Smoluchowski Institute of Physics, Jagellonian University
On metal-insulator transition for a one-dimensional correlated nanoscopic chain
Abstract: We have applied our novel numerical scheme combining Lanczos diagonalization in the Fock space with an ab initio renormalization of the single-particle (Wannier) functions, to study the ground state properties of the Extended Hubbard Model. Through the finite-size scaling we determine the discontinuity of the momentum distribution Fermi surface. Our results imply Fermi-liquid behavior for lattice parameter a < 3 a0 (a0 is the Bohr radius) and zero-temperature transition to the localized spin system for larger a. Future applications of the method are listed. The talk will be complemented by possible experimental verifications of the presented theoretical results, in respect to recently discussed limitations of ARPES experiments for one-dimensional systems.
24/10/2002
Dr. Yasuhiro Saiga
Department of Physics, Tokyo Institute of Technology
Two-Dimensional t-J Model in a Staggered Field
Abstract: The two-dimensional t-J model in a staggered field is studied by numerically exact diagonalization up to 20 sites. For the low-hole-density region and a realistic value of J/t, it is found that the presence of staggered field strengthens the attraction between two holes. With increasing field, the d_{x^2-y^2}-wave superconducting correlations are enhanced while the extended-s-wave ones hardly change. This implies that coexistence of the d_{x^2-y^2}-wave superconducting order and the commensurate antiferromagnetic order occurs in a staggered field.
|
|
Composite (complex) Video 100% Straight C on a One-Dollar MCU!
T.Jackson
Joined Nov 22, 2011
328
How much extra time do you have to do any kind of data transfer and preparation?
Rich (BB code):
Delay_us(43);
asm nop;
asm nop;
43 µs and change on lines that are not rendered.
This could possibly be improved, but you never get any more than 52 µs. At that point the TV expects horizontal sync, or it is a total no-goer.
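That 52 µs ceiling follows from standard NTSC line timing, where roughly 11 µs of each ~63.5 µs scanline is taken by sync and blanking; a quick arithmetic check with nominal figures:

```python
# Nominal NTSC horizontal timing, in microseconds.
line_period = 63.556         # one scanline: 1 / 15734.26 Hz line rate
horizontal_blanking = 10.9   # sync pulse plus front/back porch (nominal)
active_video = line_period - horizontal_blanking
print(round(active_video, 1))  # 52.7
```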
joeyd999
Joined Jun 6, 2011
4,477
In theory you're right, but I don't feel this to be practical. It needs to somehow 'blend' in exact time with the software algorithms.
I think you are going to discover, as soon as you start trying to do something other than displaying a static test pattern, that hardware timing is the only way to ultimately make this work.
I will be man enough to congratulate you when you prove me wrong.
MrChips
Joined Oct 2, 2009
25,053
I can do VGA, SVGA and XGA. Anyone interested? Early orders get discount prices.
T.Jackson
Joined Nov 22, 2011
328
I will be man enough to congratulate you when you prove me wrong.
Nothing to prove here mate. Just trying to get a project done that could be attractive for some. Saves money not having buy dedicated output devices all the time. Most cars have LCDs in them now too. A project like this could be of interest, especially on a one-dollar MCU.
T.Jackson
Joined Nov 22, 2011
328
I can do VGA, SVGA and XGA. Anyone interested? Early orders get discount prices.
Car screens don't have these interfaces. Gotta be composite.
MrChips
Joined Oct 2, 2009
25,053
You want composite. I give you composite... in 256 colors.
T.Jackson
Joined Nov 22, 2011
328
You want composite. I give you composite... in 256 colors.
Well if you can do that for a dollar, then I am wasting my time aren't I? Not saying that I can get colour on ANY MCU, let alone a one-dollar part, but usable monochrome to display information using an MCU that is abundant and dirt cheap, is of interest.
T.Jackson
Joined Nov 22, 2011
328
I think you are going to discover, as soon as you start trying to do something other than displaying a static test pattern, that hardware timing is the only way to ultimately make this work.
You are right, this project is out of my league. I don't know asm and have no plans on learning it.
I cannot properly time this in my IDE when I start to use functions such as: UART1_Read();
Rich (BB code):
{
Delay_us(43);
asm nop;
asm nop;
}
Can anyone tell me how long i = UART1_Read(); in uS takes to execute?
T.Jackson
Joined Nov 22, 2011
328
I will be forced into creating a parallel interface. PORTB changed to inputs during the free scan lines. One bit on PORTA as a data enable. It would be as fast as the "project" MCU at the other end telling it which segments to blit.
The other problem for me with the UART would be that it is on PORTB. It takes time to initialize it.
Doesn't look like there will be a simple serial interface on this, at least not on my version.
T.Jackson
Joined Nov 22, 2011
328
I didn't think about the clear the flag bit before the shift, so use the add instead.
portb += portb;
Yeah me too. A lot that I didn't think about as well. God dam PORTB as a shift register. A shift register is one thing in electronics that I have never used. I thought that the portb += portb; was a joke at first. Like as in you were having a 'lend' of me. As I said at the start -- I know 2% of it.
I feel like a bit of an idiot right now actually. Not only did you manage to get rid of the OR gates, you guys also managed to speed the protocol up by 50% if my maths is right: 600 ns / 400 ns = 1.5.
1.5 x faster!
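The `portb += portb` trick works because adding a register to itself is a single-instruction left shift: bit 7 drives the output pin, and each add moves the next data bit up into position. A host-side sketch of that MSB-first shift-out (plain Python standing in for the PIC's 8-bit add):

```python
def shift_out_msb_first(latch):
    # Emit bit 7 of an 8-bit latch, then double (shift left) each cycle,
    # exactly what portb += portb does on the chip.
    bits = []
    for _ in range(8):
        bits.append((latch >> 7) & 1)   # state of the "output pin"
        latch = (latch + latch) & 0xFF  # 8-bit add == left shift by 1
    return bits

print(shift_out_msb_first(0xA5))  # [1, 0, 1, 0, 0, 1, 0, 1]
```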
joeyd999
Joined Jun 6, 2011
4,477
Can anyone tell me how long i = UART1_Read(); in uS takes to execute?
No. Because ASYNCHRONOUS serial communication is, by definition, asynchronous!
Yes, the bit rates are defined, *but* you don't know when to expect the data.
Additionally, this is one of the troubles with C. You are dependent upon the developer's libraries* (if you choose to use them). And you must carry all his baggage around with you.
*EDIT: You are also dependent on the compiler's implementation *and* the code optimization. IMHO, more trouble than it's worth!
joeyd999
Joined Jun 6, 2011
4,477
ASM isn't so bad? I think that most of its users are bananas. They're making video games by counting clock cycles with the source code so 'scattered' around the place it is unbelievable.
Can anyone tell me how long i = UART1_Read(); in uS takes to execute?
I feel vindicated!
joeyd999
Joined Jun 6, 2011
4,477
Can anyone tell me how long i = UART1_Read(); in uS takes to execute?
Sorry for three messages in a row, but I want to elaborate a little here. There are at least two possible implementations of the UART1_Read() function, and I don't know which approach your compiler is using.
The first is a "blocking" implementation, where the function waits until a character is available in the receive buffer. This will *never* work in your project, as instruction execution will stall until data is available. This will throw off your timing *every* time you make the call.
The second is a "non-blocking" implementation, where the software USART driver is implemented as an interrupt, and the incoming data is queued. In this case, you would need to test first if data is available in the queue, and then read it. Execution will not stall, but timing is still a pain, because there is receiver code executing in the background stealing instruction cycles from you. Again, I don't think this will ever work.
You are *severely* limited by the hardware you have chosen. IMHO, the only way to get acceptable results would be to have priority interrupts available (like on the 18F parts). The pixel clock (say driven by SPI or synchronous USART) would have high priority, as this is where you need timing precision. A non-blocking serial comm routine would have low-priority, and buffer the data into a queue. The main program would test the receive queue, capture data as necessary (while the other stuff runs in the background), and update the pixel buffer.
Hope this helps.
T.Jackson
Joined Nov 22, 2011
328
You are *severely* limited by the hardware you have chosen. IMHO, the only way to get acceptable results would be to have priority interrupts available (like on the 18F parts). The pixel clock (say driven by SPI or synchronous USART) would have high priority, as this is where you need timing precision. A non-blocking serial comm routine would have low-priority, and buffer the data into a queue. The main program would test the receive queue, capture data as necessary (while the other stuff runs in the background), and update the pixel buffer.
18F parts expensive, 16F parts cheap today expensive tomorrow. There are off-the-shelf products already existing for around 50-dollars (as mentioned earlier) -- to do what I am trying to do, and in color! So, I would not have attempted to do this on anything other than a 16F part that is of plentiful quantity in the market place.
Moreover, a parallel data interface could possibly be more appealing, because it means that another MCU doesn't necessarily have to be at the controlling end of it all. Other, cheaper obsolete parts could be used to create video projects.
The goal is just to get this done as quickly as possible on the chosen one-dollar part. Document it, publish it, and it will attract attention because of how cheap it can be to interface to monochrome composite video. It won't be as simple as with a serial interface, and some may argue that this is a blessing in disguise!
joeyd999
Joined Jun 6, 2011
4,477
18F parts expensive, 16F parts cheap today expensive tomorrow. There are off-the-shelf products already existing for around 50-dollars (as mentioned earlier) -- to do what I am trying to do, and in color! So, I would not have attempted to do this on anything other than a 16F part that is of plentiful quantity in the market place.
Moreover, a parallel data interface could possibly be more appealing, because it means that another MCU doesn't necessarily have to be at the controlling end of it all. Other, cheaper obsolete parts could be used to create video projects.
The goal is just to get this done as quickly as possible on the chosen one-dollar part. Document it, publish it, and it will attract attention because of how cheap it can be to interface to monochrome composite video. It won't be as simple as with a serial interface, and some may argue that this is a blessing in disguise!
Good luck. IMHO, this will be an exercise in futility for you. I would like to know how things pan out in the end.
T.Jackson
Joined Nov 22, 2011
328
Good luck. IMHO, this will an exercise in futility for you. I would like to know how things pan out in the end.
It's already like almost all there in the first post of the thread. The only thing(s) that are going to change will be:
• Improved resolution
• Improved source code
• No OR gates
• Data interface
• Documentation with interfacing examples
T.Jackson
Joined Nov 22, 2011
328
The abstract is done. The skeleton code is all there, with accurate NTSC timings that almost drove me crazy to figure out.
I don't have a decent scope either.
joeyd999
Joined Jun 6, 2011
4,477
It's already like almost all there in the first post of the thread.
Jeeze, I said good luck. And I meant it. I hope you do succeed.
T.Jackson
Joined Nov 22, 2011
328
The biggest hurdle for me now is that, because I only have the evaluation version of mikroC, I can only use around 40% of the pic's ram. I need to take full advantage of the ram to get higher res. I have downloaded BoostC which will allow me to use the full bandwidth of memory, but I don't know this compiler at all.
Days to learn? weeks? months? years?
Can I even time things in it that will tell me how long a routine takes to execute in uS?
tom66
Joined May 9, 2009
2,595
I've been working on an OSD capable of 256x192-pixel bilevel transparent graphics. Not quite $1 though; the OSD part costs are around $10.
I use a dsPIC33FJ128GP804 with a few other components (74LVC1G125, LMH1980, some passives). It uses an incoming video signal, strips the sync out, and embeds the graphics each pixel of which can be black or white or transparent. There are plans for self syncing, if the input signal is lost, but that hasn't yet been developed.
It's an all SMD layout: 0603 and TQFP44. Nothing too difficult to do by hand.
Getting smooth ~25 fps head up display on a processor clocked at 36.85 MHz is tricky but not impossible. There's room for improvement. It's for my model aircraft, so I can see the flight parameters whilst looking at the video monitor. No need to see the aircraft.
It's also open source - http://code.google.com/p/super-osd/
So the design is pretty simple, but there are some hacks to give such a high resolution at a low cost:
- Input video signal goes into sync separator which gives us CSYNC, VSYNC and ODD/EVEN. (You could use a LM1881 or an LMH1980.)
- A CSYNC event triggers an interrupt.
- Scanline is chosen and loaded into memory.
- A timer interrupt is set to trigger in about 10µs.
- SPI is set up to queue first word. SPI interrupt is set up.
- Interrupt fires when word is finished; new word loaded (in just 10 cycles - a challenge!)
- When VSYNC fires, new field starts; reset line counter.
Drawing graphics is more difficult. You can clear the screen immediately, but you will get flickering, so the HUD updates one bit at a time to reduce the flicker. Also, accelerated instructions, such as draw_hline, take advantage of the data storage and can draw 16-pixel horizontal lines in less than 30 cycles.
|
|
SCIENCE CHINA Information Sciences, Volume 61, Issue 11: 112206(2018) https://doi.org/10.1007/s11432-017-9408-6
## Evasion strategies of a three-player lifeline game
• Accepted Mar 5, 2018
• Published Oct 15, 2018
### Abstract
This study examines a multi-player pursuit-evasion game, more specifically, a three-player lifeline game in a planar environment, where a single evader is tasked with reaching a lifeline prior to capture. A decomposition method based on an explicit policy is proposed to address the game qualitatively from two main aspects: (1) the evader's position distribution to guarantee winning the game (i.e., the escape zone), which is based on the premise of knowing the pursuers' positions initially, and (2) evasion strategies in the escape zone. First, this study decomposes the three-player lifeline game into two two-player sub-games and obtains an analytic expression of the escape zone by constructing a barrier, which is an integration of the solutions of two sub-games. This study then explicitly partitions the escape zone into several regions and derives an evasion strategy for each region. In particular, this study provides a resultant force method for the evader to balance the active goal of reaching the lifeline and the passive goal of avoiding capture. Finally, some examples from a lifeline game involving more than one pursuer are used to verify the effectiveness and scalability of the evasion strategies.
### Acknowledgment
This work was supported by Innovative Research Groups of National Natural Science Foundation of China (Grant No. 61621063) and Key Program of National Natural Science Foundation of China (Grant No. 1613225).
[26] Merz A W. The game of two identical cars. J Opt Theory Appl, 1972, 9: 324-343 CrossRef Google Scholar
[27] Shankaran S, Stipanovic D M, Tomlin C J. Collision avoidance strategies for a three-player game. In: Proceedings of International Society of Dynamic Games, Wroclaw, 2008. 253--271. Google Scholar
[28] Averboukh Y, Baklanov A. Stackelberg solutions of differential games in the class of nonanticipative strategies. Dyn Games Appl, 2014, 4: 1-9 CrossRef Google Scholar
[29] Exarchos I, Tsiotras P, Pachter M. UAV collision avoidance based on the solution of the suicidal pedestrian differential game. In: Proceedings of AIAA Guidance, Navigation, and Control Conference, San Diego, 2016. Google Scholar
• Figure 1
(Color online) Three-player lifeline game in the plane, where $\varphi$, $\phi_1$, and $\phi_2$ are the moving directions of players E, P$_1$, and P$_2$ in the realistic game space, respectively.
• Figure 2
(Color online) Barrier of the two-player lifeline game with $v_p=1.5$, $v_e=1$, and $l=2$.
• Figure 3
(Color online) Barrier of the three-player lifeline game with $u=10$, $h_1=5$, $h_2=10$, $l=2$, and $w=1.5$.
• Figure 4
(Color online) Force analysis of the evader, where $\alpha$ and $\beta$ are the directions of the forces $F_1$ and $F_2$, respectively, and $\varphi$ is the direction of the resultant force of the evader E.
• Figure 5
(Color online) Three-player lifeline game with two different evasion strategies. (a) The evader moves vertically down toward the lifeline; (b) the evader uses the resultant-force strategy (28). The initial positions of the players are E $=(3,15)$, P$_1=(-10,5)$, and P$_2=(10,20)$, with $w=1.2$ and $l=2$. The black curves are the barriers of the game; the blue, green, and red curves are the trajectories of the pursuers P$_1$ and P$_2$ and the evader E, respectively.
• Figure 6
(Color online) Partition of the escape zone for the three-player lifeline game, where the blue curves are the barriers, and the red curves are described by (25). The initial positions of P$_1$ and P$_2$ are $(-20,5)$ and $(20,20)$, respectively. $w=1.2$ and $l=2$.
• Figure 7
(Color online) Lifeline game with more than two pursuers. (a) The evader escapes from three pursuers and reaches the lifeline; (b) the evader escapes from four pursuers and reaches the lifeline. The initial positions of the players are E $=(50,180)$, P$_1=(-200,200)$, P$_2=(200,100)$, P$_3=(300,150)$, and P$_4=(-150,50)$. $w=4/3$ and $l=2$.
Copyright 2019 Science China Press Co., Ltd. 《中国科学》杂志社有限责任公司 版权所有
Drawing Book
After his sister Jensene finished the coloring book, Chris bought another drawing book for her. The first page of the book contains a diagram.
There are 9 vertices from $$A$$ to $$I$$. How many ways are there to trace the figure without lifting the pen such that it passes through all the vertices and each edge is drawn at most once?
Clarification: Chris does not have to trace every edge.
# How to Simplify Radicals in Fractions
Simplifying a radical means reducing the expression to its simplest form: the radicand should contain no square factors and should not be a fraction. The number 33, for example, has no square factors, so √33 is already in simplest form, while √75 is not.

**Simplifying by factoring.** Use the product rule of radicals in reverse: pull the largest perfect-square factor out of the radicand. The largest square factor of 75 is 25, so

√75 = √(25 · 3) = √25 · √3 = 5√3.

The same idea applies to higher roots; for instance, the cube root of 8 is 2 and the cube root of 125 is 5.

**Radicals of fractions.** The square root of a fraction equals the square root of the numerator divided by the square root of the denominator, so take the two roots separately, reducing the fraction first where possible. Two radical fractions can also be combined under a single radical:

√(27/4) × √(1/108) = √(27 / (4 × 108)) = √(1/16) = 1/4.

**Rationalizing a single-root denominator.** A simplified fraction should not have a radical in its denominator. If the denominator is a single root, multiply the numerator and denominator by that root; since, for example, √5/√5 = 1, the value of the fraction is unchanged:

4/√5 = (4 × √5) / (√5 × √5) = 4√5/5.

The numerator 4√5 still contains a radical, but that is acceptable: the goal was only to clear the radical from the denominator.

**Rationalizing with a conjugate.** If the denominator is a sum or difference involving a radical, such as 2 − √3, multiply the numerator and denominator by its conjugate, the same expression with the sign between the terms changed (here 2 + √3). By the identity (a + b)(a − b) = a² − b², the radical in the denominator cancels:

(2 + √3) / (2 − √3) = (2 + √3)² / ((2 − √3)(2 + √3)) = (4 + 4√3 + 3) / (4 − 3) = 7 + 4√3.

Similarly, 1 / (3 − √2) = (3 + √2) / ((3 − √2)(3 + √2)) = (3 + √2) / 7, and the denominator is now rational.

**Adding and subtracting radicals.** Just as you cannot add apples and oranges, only like radicals (terms with the same index and the same radicand) can be combined, so simplify each term first:

2√12 + √27 = 4√3 + 3√3 = 7√3.

Finally, in the order of operations the radical sign acts as a grouping symbol, so simplify everything under it before combining with anything outside. Compare:

√25 + √144 = 5 + 12 = 17, but √(25 + 144) = √169 = 13.
# Multivariate Fine-Grained Complexity of Longest Common Subsequence
Bringmann, K., & Künnemann, M. (2018). Multivariate Fine-Grained Complexity of Longest Common Subsequence. Retrieved from http://arxiv.org/abs/1803.00938.
Genre: Paper
### Files
Name: arXiv:1803.00938.pdf (Preprint), 786 KB
Description: File downloaded from arXiv at 2018-05-03 08:48. Presented at SODA'18. Full version, 66 pages.
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
### Creators
Bringmann, Karl (1), Author
Künnemann, Marvin (1), Author

Affiliations:
(1) Algorithms and Complexity, MPI for Informatics, Max Planck Society, ou_24019
### Content
Free keywords: Computer Science, Computational Complexity (cs.CC); Computer Science, Data Structures and Algorithms (cs.DS)
Abstract: We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, K\"unnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n:=\max\{|x|,|y|\}$, the length of the shorter string $m:=\min\{|x|,|y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m-L$ and $\Delta := n-L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n+\min\{d, \delta \Delta, \delta m\})^{1\pm o(1)}$. [...]
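For reference, the textbook quadratic-time algorithm the abstract alludes to is the classic dynamic program over prefixes. A minimal sketch (illustrative only, not code from the paper):

```python
def lcs_length(x: str, y: str) -> int:
    """Length of a longest common subsequence of x and y via the
    classic O(|x|*|y|)-time dynamic program, using one rolling row."""
    n = len(y)
    dp = [0] * (n + 1)          # dp[j] = LCS(x[:i], y[:j]) for the current i
    for i in range(1, len(x) + 1):
        prev_diag = 0           # LCS(x[:i-1], y[:j-1])
        for j in range(1, n + 1):
            prev_row = dp[j]    # LCS(x[:i-1], y[:j])
            if x[i - 1] == y[j - 1]:
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = prev_row
    return dp[n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. "BCAB"
```

The rolling row reduces space from O(nm) to O(n); the parameterized algorithms surveyed in the paper improve the time bound instead, in terms of quantities such as the number of matching pairs M or dominant pairs d.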
### Details
Language(s): eng - English
Dates: 2018-03-02
Publication Status: Published online
Pages: 66 p.
Publishing info: -
Rev. Method: -
Identifiers: arXiv: 1803.00938
URI: http://arxiv.org/abs/1803.00938
BibTex Citekey: Bringmann_arXiv1803.00938
Degree: -
|
|
# Diagonal - simple
Calculate the length of the diagonal of a rectangle with dimensions 5 cm and 12 cm.
Result
u = 13 cm
#### Solution:
$u=\sqrt{ 5^{ 2 } + 12^{ 2 } }=13 \ \text{cm}$
Try calculation via our triangle calculator.
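The same Pythagorean computation can be scripted; a minimal sketch in Python (the function name is mine, chosen for illustration):

```python
import math

# Diagonal (hypotenuse) of a rectangle from its side lengths,
# via the Pythagorean theorem: u = sqrt(a^2 + b^2).
def rectangle_diagonal(a, b):
    return math.hypot(a, b)

print(rectangle_diagonal(5, 12))  # → 13.0
```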
Our examples were largely sent or created by pupils and students themselves. Therefore, we would appreciate it if you could report any errors you find, point out spelling mistakes, or suggest a rephrasing of the example. Thank you!
Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):
Be the first to comment!
Tips to related online calculators
Pythagorean theorem is the base for the right triangle calculator.
## Next similar math problems:
1. Diagonal
Calculate the length of the diagonal of the rectangle ABCD with sides a = 8 cm, b = 7 cm.
2. Median
A right triangle has legs a = 41 dm and b = 42 dm. Calculate the length of the median $t_c$ to the hypotenuse.
3. Tv screen
The size of a tv screen is given by the length of its diagonal. If the dimension of a tv screen is 16 inches by 14 inches, what is the size of the tv screen?
4. Umbrella
Can umbrella 75 cm long fit into a box of fruit? The box has dimensions of 390 mm and 510 mm.
5. Stairway
A stairway has 20 steps. Each step has a length of 22 cm and a height of 15 cm. Calculate the length of the handrail of the staircase if it extends 10 cm beyond the top and bottom steps.
6. Four ropes
TV transmitter is anchored at a height of 44 meters by four ropes. Each rope is attached at a distance of 55 meters from the heel of the TV transmitter. Calculate how many meters of rope were used in the construction of the transmitter. At each attachment
7. Broken tree
The tree is broken at 4 meters above the ground and the top of the tree touches the ground at a distance of 5 from the trunk. Calculate the original height of the tree.
The ladder is 3.5 meters long. It is leaning against the wall so that its bottom end is 2 meters away from the wall. Determine the height at which the ladder touches the wall.
9. Base
Compute base of an isosceles triangle, with the arm a=20 cm and a height above the base h=10 cm.
An 8.3-meter-long ladder is leaning against the wall of a well, and its lower end is 1.2 meters from this wall. How high above the bottom of the well is the top edge of the ladder?
11. Oil rig
Oil drilling rig is 23 meters height and fix the ropes which ends are 7 meters away from the foot of the tower. How long are these ropes?
|
|
It is currently 28 Feb 2020, 14:01
# Math: Number Theory
Intern
Joined: 01 Sep 2016
Posts: 18
### Show Tags
27 Oct 2016, 01:30
• Any positive divisor of n is a product of prime divisors of n raised to some power.
pls someone explain with example
Math Expert
Joined: 02 Sep 2009
Posts: 61549
### Show Tags
27 Oct 2016, 02:59
sanaexam wrote:
• Any positive divisor of n is a product of prime divisors of n raised to some power.
pls someone explain with example
For example, say n = 72. Consider its factor 36 --> $$36 = 2^2*3^2$$ --> 36 is a product of prime divisors of n raised to some power.
_________________
MBA Section Director
Affiliations: GMATClub
Joined: 22 May 2017
Posts: 2940
GPA: 4
WE: Engineering (Computer Software)
### Show Tags
23 May 2017, 15:47
Bunuel wrote:
NUMBER THEORY
Fractions (also known as rational numbers) can be written as terminating (ending) or repeating decimals (such as 0.5, 0.76, or 0.333333....).
--------------------------------------------------------
0.333333... looks like a recurring decimal and is not a rational number. This can be better modified as 0.33333 etc so it is less confusing in my opinion.
_________________
Math Expert
Joined: 02 Sep 2009
Posts: 61549
### Show Tags
24 May 2017, 03:02
workout wrote:
Bunuel wrote:
NUMBER THEORY
Fractions (also known as rational numbers) can be written as terminating (ending) or repeating decimals (such as 0.5, 0.76, or 0.333333....).
--------------------------------------------------------
0.333333... looks like a recurring decimal and is not a rational number. This can be better modified as 0.33333 etc so it is less confusing in my opinion.
0.333333... IS a rational number because it equals the ratio of two integers: 1/3 = 0.3333......
_________________
Director
Joined: 06 Jan 2015
Posts: 733
Location: India
Concentration: Operations, Finance
GPA: 3.35
WE: Information Technology (Computer Software)
### Show Tags
28 Feb 2018, 17:18
Sum of First "n" natural nos = n(n+1)/2
Sum of First "n" ODD natural nos = n^2
Sum of First "n" EVEN natural nos = n (n+1)
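These three closed forms are easy to sanity-check against brute-force sums; a quick sketch in Python:

```python
# Check the closed forms against brute-force sums for the first n
# natural numbers (here n = 10).
n = 10
naturals = list(range(1, n + 1))          # 1, 2, ..., n
odds     = [2 * k - 1 for k in naturals]  # 1, 3, ..., 2n-1
evens    = [2 * k for k in naturals]      # 2, 4, ..., 2n

assert sum(naturals) == n * (n + 1) // 2
assert sum(odds)     == n ** 2
assert sum(evens)    == n * (n + 1)
print(sum(naturals), sum(odds), sum(evens))  # → 55 100 110
```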
_________________
आत्मनॊ मोक्षार्थम् जगद्धिताय च
Resource: GMATPrep RCs With Solution
Intern
Joined: 20 Dec 2014
Posts: 34
### Show Tags
27 Mar 2018, 04:40
Hi Bunuel
The topic doesn’t touch up on LCM and HCF for fractions..
LCM( (a/b), (c/d) ) = LCM(a, c) / HCF(b, d)
HCF( (a/b), (c/d) ) = HCF(a, c) / LCM(b, d)
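These two formulas can be checked numerically; a minimal sketch in Python (helper names are mine, and the fractions are assumed to be in lowest terms):

```python
from fractions import Fraction
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# LCM of fractions = LCM of numerators / HCF of denominators.
def lcm_fraction(x, y):
    return Fraction(lcm(x.numerator, y.numerator), gcd(x.denominator, y.denominator))

# HCF of fractions = HCF of numerators / LCM of denominators.
def hcf_fraction(x, y):
    return Fraction(gcd(x.numerator, y.numerator), lcm(x.denominator, y.denominator))

# Example: LCM(1/2, 3/4) = LCM(1,3)/HCF(2,4) = 3/2, and both 1/2 and 3/4 divide 3/2.
print(lcm_fraction(Fraction(1, 2), Fraction(3, 4)))  # → 3/2
print(hcf_fraction(Fraction(1, 2), Fraction(3, 4)))  # → 1/4
```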
Question from where I started to look for the theory:
https://gmatclub.com/forum/what-is-the- ... ml#p818818
IIMA, IIMC School Moderator
Joined: 04 Sep 2016
Posts: 1389
Location: India
WE: Engineering (Other)
### Show Tags
04 May 2018, 03:45
Bunuel niks18 gmatbusters pushpitkc VeritasPrepKarishma
I have a small query regarding rounding:
How do I interpret nearest ten, nearest hundred etc in
1234.1234
on both LHS and RHS of decimal?
_________________
It's the journey that brings us happiness not the destination.
Feeling stressed, you are not alone!!
Retired Moderator
Joined: 27 Oct 2017
Posts: 1491
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
### Show Tags
04 May 2018, 05:03
Hii
You have asked regarding rounding to nearest tens, hundred for 1234.1234 on both LHS and RHS of decimal.
First of all , Lets know what does the place signify:
we denote the digits as units, tens, hundreds, thousands before decimal and tenth, hundredth after decimal. see the sketch
Attachment: gmatbusters3.jpg
Rounding
Simplifying a number to a certain place value. Drop the extra decimal places, and if the first dropped digit is 5 or greater, round up the last digit that you keep. If the first dropped digit is 4 or smaller, round down (keep the same) the last digit that you keep.
It means if we round 47 to nearest ten = 50
whereas, 44 to nearest ten = 40
Now lets come to your question:
1234.1234
Rounding to nearest ten = 1230
Rounding to nearest hundred = 1200
Rounding to nearest tenth =1234.1
Rounding to nearest hundredth= 1234.12
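These four roundings can be reproduced in Python, whose built-in round accepts negative ndigits for places left of the decimal point (note that Python rounds exact .5 ties to the nearest even digit, which differs slightly from the round-half-up rule described earlier):

```python
# Rounding 1234.1234 to various places with round(x, ndigits).
x = 1234.1234
print(round(x, -1))  # nearest ten       → 1230.0
print(round(x, -2))  # nearest hundred   → 1200.0
print(round(x, 1))   # nearest tenth     → 1234.1
print(round(x, 2))   # nearest hundredth → 1234.12
```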
I hope it is clear now. Feel free to tag again.
I have a small query regarding rounding:
How do I interpret nearest ten, nearest hundred etc in
1234.1234
on both LHS and RHS of decimal?
_________________
IIMA, IIMC School Moderator
Joined: 04 Sep 2016
Posts: 1389
Location: India
WE: Engineering (Other)
### Show Tags
04 May 2018, 05:56
gmatbusters
Thanks for the pic; it made me wonder whether the difference in position of "tens" on the LHS and RHS
comes from raising the base 10 to exponents, say 0, 1 and -1. Is this how we arrive at
PLACE VALUES for a number?
Sorry, coming from an engineering background, I am still
not able to let go of the "why" aspects of the logic.
_________________
It's the journey that brings us happiness not the destination.
Feeling stressed, you are not alone!!
Retired Moderator
Joined: 27 Oct 2017
Posts: 1491
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
### Show Tags
04 May 2018, 06:03
yes you got it right
Attachment: GB.jpg
gmatbusters
Thanks for the pic; it made me wonder whether the difference in position of "tens" on the LHS and RHS
comes from raising the base 10 to exponents, say 0, 1 and -1. Is this how we arrive at
PLACE VALUES for a number?
Sorry, coming from an engineering background, I am still
not able to let go of the "why" aspects of the logic.
_________________
Intern
Joined: 21 Sep 2018
Posts: 2
### Show Tags
06 Oct 2018, 21:02
I just started looking at this and I'm impressed. I have a suggestion for the LCM section. I believe it should be changed as follows, but please correct me if I'm wrong...
"Lowest Common Multiple - LCM
The lowest common multiple (lcm), also called least common multiple or smallest common multiple, of two integers a and b is the smallest positive integer that is a multiple both of a and of b. Since it is a multiple, it can be divided by a and b without a remainder. If either a or b is 0, so that there is no such positive integer, then lcm(a, b) is defined to be zero.
To find the LCM, you will need to do prime-factorization. Then multiply all the factors. (For any factors that are common, use the highest power.)"
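In code, the LCM is usually computed through the GCD rather than by explicit prime factorization (the two definitions agree); a minimal sketch:

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) = |a*b| / gcd(a, b); defined as 0 if either argument is 0.
    if a == 0 or b == 0:
        return 0
    return abs(a * b) // gcd(a, b)

print(lcm(4, 6))  # → 12
print(lcm(0, 5))  # → 0
```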
Manager
Joined: 12 Jul 2018
Posts: 63
Location: India
Schools: ISB '20, NUS '21
GMAT 1: 420 Q26 V13
GMAT 2: 540 Q44 V21
### Show Tags
30 Nov 2018, 23:13
Can someone please explain me the last digit concept using more examples?
LAST DIGIT OF A PRODUCT
Last digits of a product of integers are last digits of the product of last digits of these integers.
For instance, the last 2 digits of 845*9512*408*613 are the last 2 digits of 45*12*8*13: 45*12 = 540 -> 40, then 40*8 = 320 -> 20, then 20*13 = 260 -> 60.
Example: the last digit of 85945*89*58307 is the last digit of 5*9*7: 5*9 = 45 -> 5, then 5*7 = 35 -> 5?
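The rule is plain modular arithmetic: reduce every factor mod 10^k and reduce the running product mod 10^k. A minimal sketch (function name is mine):

```python
# Last k digits of a product = product of the factors taken mod 10**k,
# with the running product also reduced mod 10**k at every step.
def last_digits(factors, k=1):
    m = 10 ** k
    result = 1
    for f in factors:
        result = (result * (f % m)) % m
    return result

print(last_digits([845, 9512, 408, 613], k=2))  # → 60
print(last_digits([85945, 89, 58307]))          # → 5
```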
Intern
Joined: 04 Sep 2018
Posts: 26
GPA: 3.33
### Show Tags
09 Dec 2018, 08:55
Hi. Is it possible to solve official questions related to this specific topic "Numbers Theory", after going over this topic here?
So, for instance, let's say we read the topic "Percents" from the below link. Ideally, want to solve questions related to each topic as I study through. Just want to know if something like that is a possibility on the forum.
https://gmatclub.com/forum/all-you-need ... l#p1130136
Cheers.
Current Student
Joined: 04 Jun 2018
Posts: 155
GMAT 1: 610 Q48 V25
GMAT 2: 690 Q50 V32
GMAT 3: 710 Q50 V36
### Show Tags
08 Jan 2019, 06:15
Exponents and divisibility:
a^n−b^n is ALWAYS divisible by a−b
a^n−b^n is divisible by a+b if n is even.
a^n+b^n is divisible by a+b if n is odd, and not divisible by a+b if n is even.
Hi
Can some expert please explain this concept more clearly.
What I am looking for is the proof of these statements.
Bunuel
chetan2u
gmatbusters
MathRevolution
AjiteshArun
Math Expert
Joined: 02 Aug 2009
Posts: 8254
### Show Tags
08 Jan 2019, 06:50
nitesh50 wrote:
Exponents and divisibility:
a^n−b^n is ALWAYS divisible by a−b
a^n−b^n is divisible by a+b if n is even.
a^n+b^n is divisible by a+b if n is odd, and not divisible by a+b if n is even.
Hi
Can some expert please explain this concept more clearly.
What I am looking for is the proof of these statements.
Bunuel
chetan2u
gmatbusters
MathRevolution
AjiteshArun
Hi nitesh,
It is to do with binomial theorem, which further deals with expansion of a term..
Say you are looking at $a^n$. I can write $a = (a-b)+b$, so by the binomial theorem:
$$a^n=((a-b)+b)^n = (a-b)^n+n(a-b)^{n-1}b+\ldots+n(a-b)b^{n-1}+b^n$$
Subtracting $b^n$ from both sides:
$$a^n-b^n=(a-b)^n+n(a-b)^{n-1}b+\ldots+n(a-b)b^{n-1}=(a-b)\left((a-b)^{n-1}+\ldots\right)$$
So, Right hand side is multiple of a-b and on left side we have a^n-b^n..
so a^n-b^n is a multiple of a-b
similarly for the other too..
Just take small values to confirm..
Let n = 4.. $$a^4-b^4=(a^2-b^2)(a^2+b^2)=(a-b)(a+b)(a^2+b^2)$$.. so multiple of a-b and a+b.
Let n = 3... $$a^3-b^3=(a-b)(a^2+ab+b^2)$$... so multiple of only a-b
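The three divisibility facts are also easy to sanity-check numerically; a brute-force sketch over small values:

```python
# Brute-force check of the divisibility rules for small a > b > 0 and n <= 6.
for a in range(2, 9):
    for b in range(1, a):
        for n in range(1, 7):
            assert (a**n - b**n) % (a - b) == 0        # a^n - b^n: always divisible by a-b
            if n % 2 == 0:
                assert (a**n - b**n) % (a + b) == 0    # divisible by a+b when n is even
            else:
                assert (a**n + b**n) % (a + b) == 0    # a^n + b^n: divisible by a+b when n is odd
print("all identities hold")
```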
_________________
Retired Moderator
Joined: 27 Oct 2017
Posts: 1491
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
### Show Tags
08 Jan 2019, 06:56
The remainder /factor theorem
If you divide a polynomial f(x) by (x - h), then the remainder is f(h).
Hence if f(h) is 0, remainder = 0. hence (x-h) is a factor of f(x).
Attachment: Factor theorem.jpg
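The remainder theorem can be checked numerically with synthetic division; a small sketch (the polynomial and the helper names are mine, chosen for illustration):

```python
# Remainder theorem check: dividing f(x) by (x - h) leaves remainder f(h).
# Here f(x) = x^3 - 2x^2 + 5, with coefficients listed highest power first.
def poly_eval(coeffs, x):
    # Horner's rule evaluation.
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def synthetic_division(coeffs, h):
    # Divide by (x - h); returns (quotient coefficients, remainder).
    partial = [coeffs[0]]
    for c in coeffs[1:]:
        partial.append(partial[-1] * h + c)
    return partial[:-1], partial[-1]

f = [1, -2, 0, 5]
q, r = synthetic_division(f, 3)
print(r, poly_eval(f, 3))  # remainder equals f(3) → 14 14
```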
nitesh50 wrote:
Exponents and divisibility:
a^n−b^n is ALWAYS divisible by a−b
a^n−b^n is divisible by a+b if n is even.
a^n+b^n is divisible by a+b if n is odd, and not divisible by a+b if n is even.
Hi
Can some expert please explain this concept more clearly.
What I am looking for is the proof of these statements.
Bunuel
chetan2u
gmatbusters
MathRevolution
AjiteshArun
_________________
Intern
Joined: 18 Oct 2018
Posts: 18
Location: India
### Show Tags
29 Mar 2019, 00:11
• If a is a factor of b and b is a factor of a, then a=b or a=−b.
I did not understand this. Could you please explain?
Retired Moderator
Joined: 27 Oct 2017
Posts: 1491
Location: India
GPA: 3.64
WE: Business Development (Energy and Utilities)
### Show Tags
29 Mar 2019, 00:44
It means a/b is an integer
Also, b/a is an integer.
This is only possible if either a=b or a=-b.
Hope, it is clear now.
dee1711s wrote:
• If a is a factor of b and b is a factor of a, then a=b or a=−b.
I did not understand this. Could you please explain?
Posted from my mobile device
_________________
Intern
Joined: 26 Mar 2019
Posts: 6
### Show Tags
01 Apr 2019, 23:28
• If a number equals the sum of its proper divisors, it is said to be a perfect number.
Example: The proper divisors of 6 are 1, 2, and 3: 1+2+3=6, hence 6 is a perfect number.
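A brute-force search makes the definition concrete; a minimal sketch (helper names are mine):

```python
# Perfect number: a number that equals the sum of its proper divisors.
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

def is_perfect(n):
    return sum(proper_divisors(n)) == n

print(proper_divisors(6))                            # → [1, 2, 3]
print([n for n in range(2, 1000) if is_perfect(n)])  # → [6, 28, 496]
```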
There are some elementary rules:
• If a is a factor of b and a is a factor of c, then a is a factor of (b+c). In fact, a is a factor of (mb+nc) for all integers m and n.
• If a is a factor of b and b is a factor of c, then a is a factor of c.
• If a is a factor of b and b is a factor of a, then a=b or a=−b.
• If a is a factor of bc, and gcd(a,b)=1, then a is a factor of c.
• If p is a prime number and p is a factor of ab, then p is a factor of a or p is a factor of b.
Math Expert
Joined: 02 Sep 2009
Posts: 61549
### Show Tags
01 Apr 2019, 23:29
• If a number equals the sum of its proper divisors, it is said to be a perfect number.
Example: The proper divisors of 6 are 1, 2, and 3: 1+2+3=6, hence 6 is a perfect number.
There are some elementary rules:
• If a is a factor of b and a is a factor of c, then a is a factor of (b+c). In fact, a is a factor of (mb+nc) for all integers m and n.
• If a is a factor of b and b is a factor of c, then a is a factor of c.
• If a is a factor of b and b is a factor of a, then a=b or a=−b.
• If a is a factor of bc, and gcd(a,b)=1, then a is a factor of c.
• If p is a prime number and p is a factor of ab, then p is a factor of a or p is a factor of b.
____________________________
Explained on previous pages.
_________________
Re: Math: Number Theory [#permalink] 01 Apr 2019, 23:29
|
|
Let's demonstrate a simulation from the posterior distribution with the Poisson-gamma conjugate model of Example 2.1.1. Of course, we know that the true posterior distribution for this model is $\text{Gamma}(\alpha + n\overline{y}, \beta + n)$, and thus we wouldn't have to simulate at all to find the posterior of this model. In general, though, we draw from the posterior distribution using Markov chain Monte Carlo (MCMC): Stan uses the shape of the unnormalized posterior to sample from the actual posterior distribution with NUTS HMC, deriving a full posterior density for a joint probability model coded in Stan.
The posterior predictive distribution is the distribution of the outcome implied by the model after using the observed data to update our beliefs about the unknown parameters in the model. One method to evaluate the fit of a model is to use posterior predictive checks: (1) fit the model to the data to get the posterior distribution of the parameters, $p(\theta \mid D)$; (2) simulate data from the fitted model, $p(\tilde{D} \mid \theta, D)$; (3) compare the simulated data, or a statistic thereof, to the observed data and the same statistic. This can be performed for the data used to fit the model (posterior predictive checks) or for new data. The distribution of a posterior predictive check ($y_{ppc}$) is wider than the data distribution because it takes the uncertainty of the parameters into account; if, say, the interquartile range and mean of the observed data and of a sample from the posterior predictive distribution look very similar, the model fits the data well. Evaluate how well the model fits the data and possibly revise the model. A typical application is an insurance example with only three data points: for a customer of three years, the posterior predictive distribution gives the expected claims cost for the next year, used to set or adjust the premium. Note the contrast with the prior predictive distribution: a model that has not yet seen data predicts from the prior predictive distribution, while a model that has updated its parameters on data predicts from the posterior predictive distribution.
Several R packages support this workflow. The bayesplot PPC module provides various plotting functions for creating graphical displays comparing observed data to simulated data from the posterior (or prior) predictive distribution; when providing an interface to bayesplot from another package, `posterior_predict()` methods should return a $D$ by $N$ matrix, where $D$ is the number of draws from the posterior predictive distribution and $N$ is the number of data points. The pre-compiled models in rstanarm (Bayesian applied regression modeling via Stan) already include a `y_rep` variable (the model predictions) in the generated quantities block, and `posterior_predict` draws from the posterior predictive distribution of the outcome(s) given interesting values of the predictors, in order to visualize how a manipulation of a predictor affects (a function of) the outcome(s). In brms (Bayesian regression models using Stan), `posterior_epred` computes posterior samples of the expected value/mean of the posterior predictive distribution.
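The Poisson-gamma conjugate update can also be simulated directly, without Stan; a minimal sketch in Python rather than R (the data, hyperparameters, and function names here are invented for illustration):

```python
import math
import random

# Poisson-gamma conjugate model: with a Gamma(alpha, beta) prior on the
# Poisson rate and observations y_1..y_n, the posterior is
# Gamma(alpha + sum(y), beta + n). Posterior predictive draws are obtained
# by drawing a rate from the posterior, then a Poisson outcome for it.
def poisson_draw(lam):
    # Knuth's multiplication algorithm; fine for small rates.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def posterior_predictive(y, alpha, beta, ndraws=2000):
    a_post, b_post = alpha + sum(y), beta + len(y)
    draws = []
    for _ in range(ndraws):
        lam = random.gammavariate(a_post, 1.0 / b_post)  # shape, scale = 1/rate
        draws.append(poisson_draw(lam))
    return draws

random.seed(1)
y = [3, 5, 4, 6, 2]                      # made-up observations
sims = posterior_predictive(y, alpha=2, beta=1)
print(sum(sims) / len(sims))             # near the posterior mean (2+20)/(1+5) ≈ 3.67
```

Comparing summary statistics of `sims` with those of `y` is exactly the posterior predictive check described above, just stripped to its essentials.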
|
|
### Home > CCA2 > Chapter 12 > Lesson 12.1.3 > Problem12-53
12-53.
For each of the following problems, find all of the solutions without using a calculator. Use a graph or unit circle to support your answers. Then predict the solution that your calculator will give and use your calculator to check your prediction.
1. $4\sin(x)+2=0$
Subtract 2 from both sides and divide by $4$ to solve for $\sin(x)$.
$\sin(x)=-\frac{1}{2}$
On a unit circle, find the two different times $\sin(x)=-\frac{1}{2}$. To give all of the solutions, remember that every time you go around the circle, that value occurs again. Since there are $2π$ radians in a circle, add $2πn$ to each solution.
1. $2\cos(x)=\sqrt{3}$
Use the same method as part (a).
$x=\frac{\pi}{6}+2\pi n,\;\frac{11\pi}{6}+2\pi n$
1. $\tan(x) + 1 = 0$
Use the same method as part (a). How is the tangent function different from the sine function?
1. $4\cos^2(x) - 4 = 0$
$x=\pi n$
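A quick numerical check of part (a) is possible too: as printed, $4\sin(x)+2=0$ gives $\sin(x)=-\tfrac{1}{2}$, with solutions $\tfrac{7\pi}{6}+2\pi n$ and $\tfrac{11\pi}{6}+2\pi n$. A sketch in Python:

```python
import math

# Part (a): 4*sin(x) + 2 = 0  =>  sin(x) = -1/2.
# On [0, 2*pi) the two solutions are 7*pi/6 and 11*pi/6; adding 2*pi*n
# gives the full solution set.
solutions = [7 * math.pi / 6, 11 * math.pi / 6]
for x in solutions:
    for n in range(-2, 3):
        assert abs(4 * math.sin(x + 2 * math.pi * n) + 2) < 1e-9

# A calculator's inverse sine returns the principal value in [-pi/2, pi/2]:
print(math.asin(-0.5) / math.pi)  # -1/6, i.e. the calculator reports x = -pi/6
```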
|
|
# Own environment with own label
I want to have an environment for strategies. So far I used the package ntheorem to define such an environment.
I use it like this:
\begin{strategy}[bla]
bla bla
\end{strategy}
And get:
<surrounding text>
Strategy 1 (Bla):
bla bla
<surrounding text>
Which is what I want, except that I would like to group the numbering of the strategies by prefixes that I specify, like LD.1, ..., LD.5 and I.1, ..., I.3 and so on:
<surrounding text>
Strategy A.1 (bla):
bla bla
more bla bla
<surrounding text>
So: the word "Strategy", my prefix, and a short description on a non-indented line, then the strategy text indented. Also, I would like a little space above and below the strategy.
Is it possible to use ntheorem in such a way:
\begin{strategy}[prefix][bla]
bla bla
\end{strategy}
Or does anybody have another idea? I do not need to use ntheorem; it was just the first idea I had.
I use ntheorem for other environments, so they should work together.
-
It's a bit too vague; how do you decide to change A into B? Are strategies A and B mixed, that is, can there be A.1, A.2, B.1, A.3, B.2? – egreg Sep 25 '12 at 8:50
I edited the question. – Jana Sep 25 '12 at 9:01
Here's a way; you specify
\begin{strategy}{<label>}[<optional description>]
Text
\end{strategy}
The <label> can be empty. The various strategies will be numbered sequentially by label. If you want to reset a number, then issue
\resetstrategy{<label>}
Example:
\documentclass{article}
\usepackage{xparse}
\makeatletter
\NewDocumentEnvironment{strategy}{ m o }
{\@ifundefined{@strategy#1}
{\newtheorem{@strategy#1}{Strategy}%
\global\@namedef{the@strategy#1}{#1.\arabic{@strategy#1}}%
}%
{}%
\IfNoValueTF{#2}{\begin{@strategy#1}}{\begin{@strategy#1}[#2]}%
}
{\end{@strategy#1}}
\newtheorem{@strategy}{Strategy}
\newcommand{\resetstrategy}[1]{%
\@ifundefined{@strategy#1}
{\typeout{You have no Strategy #1'}}
{\setcounter{@strategy#1}{0}}%
}
\makeatother
\begin{document}
\begin{strategy}{}
A
\end{strategy}
\begin{strategy}{}[Good]
B
\end{strategy}
\begin{strategy}{A}
C
\end{strategy}
\begin{strategy}{A}
D
\end{strategy}
\begin{strategy}{X}[So and so]
E
\end{strategy}
\resetstrategy{A}
\begin{strategy}{A}
F
\end{strategy}
\end{document}
-
This is exactly what I want. Unfortunately I also use ntheorem for other environments and it seems to not work together. It just takes the style I last defined, even though I put the code where the old definition was. And there is no label and the name is written as normal text. Can I use this and ntheorem? – Jana Sep 25 '12 at 9:56
@Jana Just add \theoremstyle{plain} (or the style you wish) before the two \newtheorem commands in the code. – egreg Sep 25 '12 at 10:24
Like this?
## Code
\documentclass[parskip]{scrartcl}
\usepackage[margin=15mm]{geometry}
\usepackage{lipsum}
\newcounter{mystrategy}[section]
\setcounter{mystrategy}{0}
\renewcommand{\themystrategy}{\Alph{section}.\arabic{mystrategy}}
\newenvironment{strategy}[1]{\refstepcounter{mystrategy}\textbf{Strategy \themystrategy\ (#1)}\itshape\selectfont}{\par}
\begin{document}
\section{One}
\lipsum[1]
\begin{strategy}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\section{Two}
\lipsum[1]
\begin{strategy}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\end{document}
## Code
\documentclass[parskip]{scrartcl}
\usepackage[margin=15mm]{geometry}
\usepackage{lipsum}
\newenvironment{strategy}[2]{\textbf{Strategy #1 (#2)}\itshape\selectfont}{\par}
\begin{document}
\section{One}
\lipsum[1]
\begin{strategy}{Q.5}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{X.101}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\section{Two}
\lipsum[1]
\begin{strategy}{H.42}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{S.19}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\end{document}
Edit 2: With indentation (thanks to Werner) and spaces:
## Code
\documentclass{scrartcl}
\usepackage[margin=15mm]{geometry}
\usepackage{lipsum}
\newenvironment{strategy}[2]{\vspace{5mm}\noindent\textbf{Strategy #1 (#2)}\\\phantom{}\hfill\begin{minipage}{\dimexpr\textwidth-5mm} \itshape\selectfont}{\end{minipage}\par\vspace{5mm}}
\begin{document}
\section{One}
\lipsum[1]
\begin{strategy}{Q.5}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{X.101}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\section{Two}
\lipsum[1]
\begin{strategy}{H.42}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\subsection{one}
\lipsum[1]
\begin{strategy}{S.19}{Bla}
\lipsum[2]
\end{strategy}
\lipsum[3]
\end{document}
## Result
-
This is close, but I actually would like to specify the prefix myself and not corresponding to the section. – Jana Sep 25 '12 at 9:02
@Jana: Now you can specify them freely. – Tom Bombadil Sep 25 '12 at 9:14
This is good. But now I have the caption (Strategy Q.5 (Bla)) indented and the text not. I would like the caption to not be indented, but the rest, linebreak after the caption and a space above and below teh strategy. – Jana Sep 25 '12 at 9:37
@Jana: Done. But next time it would be nice to describe exactly and completely all requirements in your initial question. – Tom Bombadil Sep 25 '12 at 9:52
If you use hyperref and ntheorem, you can try something like this:
\newtheorem{strategy}{\normalfont{\textbf{Strategy}}}
\numberwithin{strategy}{chapter}
\newcommand{\strategyautorefname}{Strategy}
\begin{document}
...
\chapter{chapter something}
\begin{strategy}\label{str:first}
bla bla
\end{strategy}
In \autoref{str:first} you can see...
The first command defines the strategy environment. The second command defines the counter that precedes the strategy, in this case the Chapter number. The last command allows you to cite your environments and get automatically the string Strategy appended before the number. This will produce something like this when you reference it:
1. Chapter something.
Strategy 2.1. bla bla
In Strategy 2.1 you can see...
Where 2 is the chapter number and 1 is the counter of the strategy within the second chapter.
-
|
|
# Math Help - Challenging proofs
1. ## Challenging proofs
Hello!
Can anyone help me solve the problems? It paralyzes my work
It is obvious in the first two problems that $f'(0)=0$ and $f$ is Lipschitz. But I can't go any further.
1) A continuously differentiable function $f$ is defined on $\mathbb{R}$ such that $f(0) = 0$ and $|f'(x)| \leq |f(x)|$ for all $x\in \mathbb{R}$. Show that $f$ is constant.
2) Suppose that for some positive constant $M$, $|f'(x)|\leq M|f(x)|$, $x\in[0,1]$. Show that if $f(0)=0$, then $f(x) =0$ for any $x$, $0\leq x\leq 1$.
3) Show that there exists no real-valued function $f$ such that $f(x)>0$ and $f'(x)=f(f(x))$ for all $x$.
Thank you very much!
2. Originally Posted by Ivan
Hello!
Can anyone help me solve the problems? It paralyzes my work
It is obvious in the first two problems that $f'(0)=0$ and $f$ is Lipschitz. But I can't go any further.
1) A continuously differentiable function $f$ is defined on $\mathbb{R}$ such that $f(0) = 0$ and $|f'(x)| \leq |f(x)|$ for all $x\in \mathbb{R}$. Show that $f$ is constant.
Here's my attempt:
$f(x)$ is continuous on $\mathbb{R}$ since it's differentiable. Hence $f(x)$ is continuous on some interval $[0,a]$, and differentiable on $(0,a)$ where $a \in \mathbb{R}$ and $a \neq 0$.
By applying the mean value theorem $\exists \ x_0 \in (0,a) \ s.t \ f'(x_0)=\frac{f(a)-f(0)}{a-0}=\frac{f(a)-f(0)}{a}$.
We know that $|f'(x)| \leq |f(x)| \Rightarrow \ |f'(x_0)| \leq |f(x_0)|$.
Therefore:
$|\frac{f(a)-f(0)}{a}|=|f'(x_0)| \leq |f(x_0)|$
$\Rightarrow |f(a)-f(0)| \leq |a||f(x_0)|$
$\Rightarrow |f(a)-f(0)| \leq |af(x_0)|$
Let $\epsilon>|af(x_0)|$
This gives:
$|f(a)-f(0)| \leq |af(x_0)|<\epsilon$ so $|f(a)-f(0)|< \epsilon$ $\forall a \in \mathbb{R}$ where $a \neq 0$.
Hence $f(a)=f(0)=0$ since $|f(a)| < \epsilon$ as $f(0)=0$.
3. ## ?
Great, but the $\epsilon$ in your proof isn't arbitrary, is it?
4. I don't think it is, shoot!
hmm, this really is pretty tricky....
|
|
Ethane (C2H6) is sp3-hybridised, so each carbon has tetrahedral geometry with the hydrogen atoms out of plane; hence the molecule is non-planar. An example of tetrahedral electron-pair geometry and molecular geometry is CH4, with HCH bond angles of 109.5°. Resonance occurs when two or more valid Lewis structures may be drawn for the same molecule.
Ethyne (acetylene, C2H2) is a hydrocarbon, that is, a compound built from only carbon and hydrogen atoms. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities. In the ethyne molecule each carbon atom is sp-hybridized: there is a triple bond between the two carbon atoms (one sigma and two pi bonds), leaving one valency on each carbon for a hydrogen atom, H−C≡C−H. Because there are only two electron groups around each central carbon atom, the molecule is linear in geometry and shape, with bond angles of 180°, and the triple bond prevents rotation. The distance between C and H in ethyne is about 106.0 pm. The molecule is nonpolar because it is symmetric. The triple bond stores a large amount of energy, can open up for addition reactions, and readily oxidizes to form more stable molecules, so ethyne is very reactive. When ethyne is subjected to combustion with oxygen, the flame created is known to have a temperature of roughly 3600 K, which is why acetylene, a colorless and highly flammable gas, is used for metal welding and cutting and as an illuminant; it is also used as a fuel and a chemical building block. Phillips Petroleum is working with a new catalyst to increase production efficiency and lower the cost of acetylene.
Ethene (ethylene, IUPAC name ethene; C2H4 or H2C=CH2) is planar: each carbon is sp2-hybridized, and the remaining unhybridized p orbitals on the carbon atoms form a pi bond, which gives ethene its reactivity. The geometry around each carbon is trigonal planar. Ethylene is an important industrial organic chemical: a colorless flammable gas with a faint "sweet and musky" odour when pure.
In general, the shape of a molecule can be predicted from the hybridization of its atoms, and the molecular geometry is a result of the combination of the individual geometries; the electron-pair and molecular geometries depend on the numbers of bond pairs and lone pairs around the central atom. For example, an atom with four single bonds, a double bond, and a lone pair has an octahedral electron-group geometry and a square pyramidal molecular structure. As an example from the research literature, a complex C2H2⋯CuF has been synthesized in the gas phase by the reaction of laser-ablated metallic copper with a pulse of gas consisting of a dilute mixture of ethyne and sulfur hexafluoride in argon; its rotational spectrum shows how ethyne distorts when complexed with a cuprous or argentous halide.
A molecule’s 3-d geometry explains a lot of its physical and chemical properties, like its polarity and intermolecular bonding behavior. OGShelly. They are unsaturated hydrocarbon, each alkyne molecule is composed of carbon-carbon triple bond in which the carbon-carbon bonds is made up of two pi bonds and one sigma bond. If the formula of the compound is given, then count the number of atoms attached to each carbon and the type of bonds - single, double, or triple. Related questions. Sideways overlap between the two sets of p orbitals produces two pi bonds - each similar to the pi bond found in, say, ethene. What is the molecular shape of acetylene? Note: Explanation must refer to the shape of the molecule. Ethyne . Welding consumes the remaining 20 percent. Corey the Cosmonaut. Molecular geometry (trigonalpyramid) ... Ethyne, C 2H 2 •Ethyne (often called acetylene) has two central atoms, so we consider them separately. Ethene: There is one pi bond in ethene. Related Searches. Explanation of the properties of giant covalent compounds in terms of their structures. From this website, you can have lots of details, even a drawing of it. This is exactly the same as happens whenever carbon forms bonds - whatever else it ends up joined to. These Sp-orbital are arranged in linear geometry and 180oapart. lie perpendicular to … Ethyne (acetylene) is linear. Lv 6. Construct A Model Of Ethyne By First Connecting Two Black Balls With Three Springs. Ethyne (common name acetylene) is a hydrocarbon, i.e. So each carbon has 2 sigma (with C and H) and 2 … Favorite Answer. Use Two Short Sticks And Two Yellow Balls To Complete The Structure. Dates: Modify . In drawing the Lewis structure for C 2 H 2 (also called ethyne) you'll find that you don't have enough valence electrons available to satisfy the octet for each element (if you use only single bonds). When dissolved in acetone, acetylene becomes much less explosive. 
Figure 1: Steric number = 4, tetrahedral electron group geometry of H2O ... ethyne (also known as acetylene, C2H2) in Figure 4. Interesting note: Rotation about triple bonds is actually okay; Overlap between p orbitals is continuous enough through rotation. C) 9 in bonding MOs, 3 in antibonding MOs. The miner uses a valve to control the drip speed of water and the amount of gas produced which then burns to create light. 1 decade ago. Daniel P. Zaleski a, Susanna L. Stephens a, David P. Tew b, Dror M. Bittner a, Nicholas R. Walker * a and Anthony C. Legon * b a School of Chemistry, Newcastle University, Bedson Building, Newcastle-upon-Tyne, NE1 7RU, UK. This linear molecular geometry is similar to that of CO2 (Carbon dioxide), which is also a non-polar compound. 1 decade ago. 0 0. By Staff Writer Last Updated Mar 27, 2020 8:54:07 PM ET. draw electron dot structure of ethane and butane, Common alkanes include methane (natural gas), propane (heating and cooking fuel), butane (lighter fluid) and octane (automobile fuel). How do I determine the molecular shape of a molecule? C2H2 has a straight-line molecular geometry consisting of a hydrogen atom bonded to a carbon atom, which is triple-bonded to a second carbon atom bonded to a second hydrogen atom. 1 Answer Sam May 16, 2016 Linear. 4 Spectral Information Expand this section. Ethane molecule is arranged in tetrahedral geometry in which central carbon atoms are surrounded by H-atoms in three dimensions. Acetylene is a common gas for fueling welders' oxyacetylene torches. This oxidation occurs rapidly, and the release of energy is an explosion. It is a colorless gaseous chemical compound and the simplest alkyne i.e. These are sigma bonds - just like those formed by end-to-end overlap of atomic orbitals in, say, ethane. It is the simplest alkene (a hydrocarbon with carbon-carbon double bonds).. They use the 2s electron and one of the 2p electrons, but leave the other 2p electrons unchanged. 
The molecular geometry of C2H2Br2 is trigonal planar. 1 decade ago. Molecular Geometry. The resultant molecular structure for acetylene is linear, with a triple bond between the two carbon atoms (one sigma and two pi-bonds) and a single sigma bond between the carbon and hydrogen atoms. Ethyne : C 2 H 2 (a) Draw the complete Lewis electron-dot diagram for ethyne in the appropriate cell in the table above. That is a tetrahedral arrangement, with an angle of 109.5°. E) 7 in bonding MOs, 5 in antibonding MOs. Triple bond is formed using sp-hybridized atoms (one s orbital is "averaged" with one p orbital). According to molecular orbital (MO) theory, the twelve outermost electrons in the O2 molecule are distributed as follows: A) 12 in bonding MOs, 0 in antibonding MOs. Ethyne: There are two pi bonds in ethyne. It is the simplest alkene (a hydrocarbon with carbon-carbon double bonds).. It is unstable in its pure form and thus is usually handled as a solution. The common name for this molecule is acetylene. Calcium carbide reacts with water to produce acetylene. The carbon atom doesn't have enough unpaired electrons to form four bonds (1 to the hydrogen and three to the other carbon), so it needs to promote one of the 2s 2 pair into the empty 2p z orbital. The simple view of the bonding in ethyne. Depending on how many other pages you might have to refer to as well, return here later using the BACK button on your browser or the GO menu or HISTORY file - or via the Organic Bonding Menu (link from the bottom of each page in this section). Lv 7. In this manner, what hybridization is c2h2? The various p orbitals (now shown in slightly different reds to avoid confusion) are now close enough together that they overlap sideways. B) 10 in bonding MOs, 2 in antibonding MOs. 
explain why ethene is a planar molecules while ethyne is a linear molecules - 17433390 C2H2 has a straight-line molecular geometry consisting of a hydrogen atom bonded to a carbon atom, which is triple-bonded to a second carbon atom bonded to a second hydrogen atom. Contents. N2F2. Using the geometry optimization results, high-quality images of 3D molecular structures have been prepared for Ethyne in 3 different models, namely, stick, ball & stick, and space-filling, which provide not only the basic structure information but also a physically meaningful configuration (e.g., bond lengths, bond angles, etc.) •Sometimes it’s useful to describe the arrangement of atoms in terms of the molecular geometry, Find out A to Z information of PCL3 (i.e.) What these look like in the atom (using the same colour coding) is: Notice that the two green lobes are two different hybrid orbitals - arranged as far apart from each other as possible. Since a triple bond is present and each carbon is attached to 2 atoms (1 H and 1 C), the geometry is linear. For C 2 H 2 you have a total of 10 valence electrons to work with.. Molar Mass. Ethyne: The geometry of ethyne is linear. In this text, we use the standard convention of representing multiple charges with the number before the sign, e. A quick explanation of the molecular geometry of PO43- including a description of the PO43- bond angles. The carbon atom doesn't have enough unpaired electrons to form four bonds (1 to the hydrogen and three to the other carbon), so it needs to promote one of the 2s2 pair into the empty 2pz orbital. Ethylene is a gas known to chemist since second half of XIX century. Click here to view hydrogen cyanide, which has linear electron group and molecular geometry. The sigma bonds are shown as orange in the next diagram. 3D Molecular Orbital Images of Ethyne. 
The two carbon atoms and two hydrogen atoms would look like this before they joined together: The various atomic orbitals which are pointing towards each other now merge to give molecular orbitals, each containing a bonding pair of electrons. #H-C-=C-H# Answer link. If the formula of the compound is given, then count the number of atoms attached to each carbon and the type of bonds - single, double, or triple. If you have read the ethene page, you will expect that ethyne is going to be more complicated than this simple structure suggests. C 2 H 2 Molecular Geometry And Bond Angles As a result of the double bond C 2 H 2 molecular geometry is linear with a bond angle of 180 o . 2005-03-27. For C 2 H 2 you have a total of 10 valence electrons to work with.. molecular geometry of acetylene (or ethyne)? So, keep calm and know about the geometry of … Figure 1: Steric number = 4, tetrahedral electron group geometry of H2O Figure 2: Ligand number = 2, bent molecular shape of H2O Figures 1 and 2 illustrate the difference between electron group geometry and molecular shape. detailed explanation of the molecular geometry of acetylene? C) 9 in bonding MOs, 3 in antibonding MOs. Let's also con consider, Ethyne, which is C2H2, the Lewis structure for that you may recall from our earlier work, has a triple bond between the carbon atom so we might draw it something like that. Planar N has sp2 hybrid orbitals. The molecular shape is determined by the electron group geometry and the ligand number. 25. And if we look at the molecular geometry for Ethyne, we discover the bond angle is around 180 degrees. Predict the electron-pair geometry and molecular structure of the ${\text{NH}}_{4}^{+}$ cation. C2H2Cl2. Ethyne, (commonly known as Acetylene), is a chemical compound with the molecular formula C2H2. Lewis Structures and Molecular Geometry: ... Ethyne has a total of 10 valence electrons, which when made to connect the constituent atoms will result to the above structure. thanks. 
One point is earned for the correct Lewis … In drawing the Lewis structure for C 2 H 2 (also called ethyne) you'll find that you don't have enough valence electrons available to satisfy the octet for each element (if you use only single bonds). Bond length: C-H bond length is 1.09A o and C-C bond length is 1.54A o. These are sigma bonds - just like those formed by … 1 decade ago. These "averaged" s and p are used for sigma-bonds (2 bonds, since there are 2 orbitals), and two p-orbitals are used for pi-bonds (also 2 bonds). Notice the different shades of red for the two different pi bonds. The gas cylinders one finds in welding shops contain this acetylene-acetone solution under pressure. The bond length for the C-H bond is 106 pm and for C-C triple bond is 120.3 pm, while the bond angle is equal to 180°. Ethyne is built from hydrogen atoms (1s1) and carbon atoms (1s22s22px12py1). This colorless gas (lower hydrocarbons are generally gaseous in nature) is widely used as a fuel and a chemical building block. Prediction of molecular polarity from bond polarity and molecular geometry. B) 10 in bonding MOs, 2 in antibonding MOs. Also, the angular geometry of the H-C=C bond in ethylene is 121.3 degrees. an unsaturated hydrocarbon with a triple C-C Bond. Ethene: The molar mass of ethene is about 28 g/mol. To understand ethene you also have to understand orbitals and the bonding in methane - sorry, there are no short-cuts! Lv 7. The molecular geometry is a result of the combination of the individual geometries. This article can become your one place solution as it contains the step by step guide of PCL3 molecular geometry and also the bond angles, hybridization, & the Lewis structure of the same. Acetylene use is growing due to the demand for more of the gas in production of polyethylene plastics. H 2 C = CH – CN; HC ≡ C − C ≡ CH Chemical formula of Ethyne. 
Each carbon is only joining to two other atoms rather than four (as in methane or ethane) or three (as in ethene) and so when the carbon atoms hybridise their outer orbitals before forming bonds, this time they only hybridise two of the orbitals. Ethyne, C 2 H 2. Bonding in Ethane. Ethylene, the simplest of the organic compounds known as alkenes, which contain carbon-carbon double bonds. 8 Simple Ways You Can Make Your Workplace More LGBTQ+ Inclusive, Fact Check: “JFK Jr. Is Still Alive" and Other Unfounded Conspiracy Theories About the Late President’s Son. In the ethane molecule, the bonding picture according to valence orbital theory is very similar to that of methane. Number of Pi Bonds. Source(s): molecular geometry ethane c2h6: https://shortly.im/Qf05o. Alkenes have at least one double bond and alkynes have at least one triple bond. what is the hybridization of the carbon atoms in a molecule of ethyne represented above, Carbon hybridization in Ethylene—C 2 H 4. If we look at the molecular geometry of the Ethyne molecule, you can see it here, it's a … Acetylene forms an explosive gas in the presence of air. If this is the first set of questions you have done, please read the introductory page before you start. It is a hydrocarbon and the simplest alkyne. 3 Answers. The molecular geometry of C2H4 ( ethene - an alkene ) with the skeletal structure of H2CCH2 is. Is built from hydrogen atoms each 6, CO 3 2- and o 3 flame created is.... You will expect that ethyne is subjected to combustion with oxygen, the bonding picture according to valence orbital is. A colorless gaseous chemical compound with the skeletal structure of H2CCH2 is Rotation triple. The BACK BUTTON on your browser to come BACK here afterwards Balls with three Springs with. Contain carbon-carbon double bonds ).. ethyne has a triple bond between the two carbon atoms and the of! Supreme Court: Who are the molecular geometry and the simplest alkene ( a with! 
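The linear geometry can be checked numerically. The short sketch below (illustrative only, not from the source) places the atoms of H–C≡C–H on a line using the bond lengths quoted above as idealised coordinates (C–H ≈ 106 pm, C≡C ≈ 120.3 pm) and computes the H–C≡C angle from the atomic positions:

```python
import math

def bond_angle(a, b, c):
    """Angle at atom b (in degrees) formed by atoms a-b-c, each an (x, y, z) tuple."""
    u = tuple(ai - bi for ai, bi in zip(a, b))
    v = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(ui * vi for ui, vi in zip(u, v))
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    cos = max(-1.0, min(1.0, cos))  # guard against rounding just outside [-1, 1]
    return math.degrees(math.acos(cos))

# Idealised linear H-C#C-H coordinates in picometres (C-H 106, C#C 120.3)
h1 = (-106.0, 0.0, 0.0)
c1 = (0.0, 0.0, 0.0)
c2 = (120.3, 0.0, 0.0)

print(bond_angle(h1, c1, c2))  # 180.0
```

The same helper applied to tetrahedral or trigonal planar coordinates would recover the 109.5° and ~120° angles discussed for ethane and ethene.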
# A Preview of More
An enormous diversity of problems can be solved by creatively applying algebraic techniques. This quiz showcases some of that diversity, pulling problems from five of the topics that we’ll cover in depth in this course:
• Factorials
• Rates & Ratios
• Sequences and Series
• Proving Algebraic Identities
• Avoid Proving that 1=2
## Factorials
$\large \frac {{\color{red}6!} \times \color{blue}7!}{\color{purple}{10}!} = \, \color{green}?$
Note: The exclamation points are all factorials: $6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1.$
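The definition in the note translates directly into code. This small sketch (an illustration added here, not part of the original quiz) computes a factorial by multiplying down from the definition:

```python
def factorial(n):
    """n! = n * (n-1) * ... * 2 * 1, with 0! defined as 1."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(6))  # 720, matching 6 x 5 x 4 x 3 x 2 x 1
```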
## Rates & Ratios
It takes 4 cooks 4 hours to bake 4 cakes. Assuming no change of rate, how many hours will it take 8 cooks to bake 8 cakes?
## Sequences and Series
How many line segments are there in the $$4^\text{th}$$ image of this growing tree sequence?
## Proving Algebraic Identities
What is the sum of the first 13 positive odd numbers?
A big hint: picture each odd number $$1, 3, 5, \ldots$$ as an L-shaped layer of dots; stacking the layers builds up a perfect square.
## Help Us Avoid Proving That 1=2
Let $$a = b$$.
Then, $\begin{array} {c r c l } \color{red} {1} & ab & = & a^2 \\ \color{orange} {2} & ab - b^2 & = & a^2 -b^2 \\ \color{gold} {3} & b(a-b) & = & (a+b)(a-b) \\ \color{green} {4} & b & = & a+b \\ \color{blue} {5} & b & = & b+b \\ \color{indigo} {6} & b & = & 2b \\ \color{purple} {7} & 1 & = & 2. \end{array}$
Which line is the first that contains an error in this proof?
# How do I change the standard “math font”?
From what I can tell, the font you get looks like the standard, "bookish" font, only tilted. How can I get a monospace font with an upright spine, e.g., the font you get with \texttt?

I'm happy with everything else, including the interface, so, once it's set up, I don't want to invoke this in any other way than the standard $...$ dance.
-
You could put
\DeclareSymbolFont{letters}{OT1}{cmtt}{m}{n}
in the preamble of your document. Though I'm not certain why you would want to use this as the default math font. Also, by making this declaration some symbols get rendered incorrectly:
\documentclass{article}
\usepackage{amsmath,amssymb}
\DeclareSymbolFont{letters}{OT1}{cmtt}{m}{n}
\begin{document}
Hello world:
$x^2 + y^2 = \tan(\theta)$
\end{document}
which gets rendered as:
There is another approach. But it requires a bit more work.
First, declare a symbol font name
\DeclareFontSubstitution{OT1}{cmtt}{m}{n}
\DeclareSymbolFont{myletters}{OT1}{cmtt}{m}{n}
\DeclareSymbolFontAlphabet{\mathnormal}{myletters}
But this is not sufficient to enact the changes you want. Now you've got to go through for every symbol you want set in this style and declare it:
\DeclareMathSymbol{a}{\mathalpha}{myletters}{`a}
\DeclareMathSymbol{b}{\mathalpha}{myletters}{`b}
\DeclareMathSymbol{c}{\mathalpha}{myletters}{`c}
\DeclareMathSymbol{d}{\mathalpha}{myletters}{`d}
\DeclareMathSymbol{e}{\mathalpha}{myletters}{`e}
etc., and then
\DeclareMathSymbol{A}{\mathalpha}{myletters}{`A}
\DeclareMathSymbol{B}{\mathalpha}{myletters}{`B}
\DeclareMathSymbol{C}{\mathalpha}{myletters}{`C}
\DeclareMathSymbol{D}{\mathalpha}{myletters}{`D}
\DeclareMathSymbol{E}{\mathalpha}{myletters}{`E}
etc. It's a lot of work. You'll be losing a lot work others have already done.
At this point, I find my answer bordering on the questionable. For example, even in the original solution I posted, \tan got typeset in the usual default font, and none of the changes I'm suggesting above seem to affect that.
-
I don't like the tilted bookish font, nothin' more advanced as that. – Emanuel Berg Dec 13 '12 at 3:35
The mathastext package might meet your needs. The package will give you, by default, an upright roman font for math material, but otherwise it may do exactly what you're looking for.
The following MWE illustrates the package's default behavior:
\documentclass{article}
\usepackage{mathastext}
\begin{document}
Hello, Pythagoras. $a^2 + b^2 = c^2$. Goodbye, mathematics.
\end{document}
-
To get a typewriter font for the letters, digits, and a few symbols in math mode, you can use mathastext as advised in Mico's answer, with some additional set-up:
\documentclass{article}
\usepackage{mathastext}
\MTfamily{\ttdefault}\Mathastext % this tells mathastext to use typewriter
\begin{document}
Hello, Pythagoras. $a^2 + b^2 = c^2$. Goodbye, mathematics.
\end{document}
-
option defaultmathsizes would tell mathastext to use the standard choices of sizes for script and scriptscript styles. – jfbu Mar 11 at 12:39
please don't give direct links like that; we're trying to distribute ctan's load. the “correct” address for that file is http://mirror.ctan.org/info/Free_Math_Font_Survey/survey.html – wasteofspace Mar 11 at 15:32
# The Ultimate Nong Khiaw, Laos Travel Guide
After spending nearly a week in this remote Laotian village of the Luang Prabang Province, I fell in love with Nong Khiaw. Straddling the Nam Ou River and connected by a single bridge, Nong Khiaw is completely surrounded by sheer mountains covered in lush tropical forests.
While most of the very few tourists that come here participate in the 100 Waterfalls Trek, there is plenty more to see and do. If nothing else, take a few days to chill out along the Nam Ou river.
Learn about travel to Nong Khiaw, recommendations on where to stay and eat, and so much more below.
## Luang Prabang to Nong Khiaw Boat
While we really wanted to take a slow boat from Luang Prabang to Nong Khiaw, this unfortunately isn't possible anymore. Due to the construction of multiple new Chinese dams along the Mekong and Nam Ou Rivers, long-distance boat trips are no longer possible.
Instead, the best way to get from Luang Prabang to Nong Khiaw is either by van, bus, or motorbike. An air-conditioned van will cost you 65,000 kip (\$7.50 USD). Although Google Maps says the drive to Nong Khiaw will take 3 hours, expect it to take over 4 hours since, well… everything takes longer in Laos.
As far as motorbikes are concerned, the cheapest motorbike you can find in Luang Prabang is around 70,000 kip per day for a Scoopy. The road is mostly paved, but there are some stretches of dirt where it can get muddy, especially during rainy season. You’ll probably want to rent a proper motorbike to make the trip.
## Pha Daeng Peak Viewpoint in Nong Khiaw
The highlight of my Nong Khiaw travels was climbing to the top of the Pha Daeng Peak viewpoint. While the sign in front of the trailhead claims that it only takes 1 hour to get to the top and 45 minutes to descend back down, I’d double this estimate unless you plan to skip breaks entirely.
Regardless of how long it takes you to ascend to the top, the sweat and effort are well worth it. Your first glimpse of the 360 degree view down below is stunning to say the least.
### How to Get to Pha Daeng Peak Trailhead
If you type “Pha Daeng Peak” into Google Maps, you’ll be taken up north along the Nam Ou river. This is the wrong direction. You need to start your hike at the Nong Khiaw Viewpoint Trail Entrance and Ticket Office.
The price of a Pha Daeng Peak ticket is 20,000 kip (\$2.30 USD) per person in 2019.
In all honesty, the trail is well maintained and worth every penny. There are multiple trash cans along the trail, lots of bamboo hand rails, and climbing ropes. The ground is even carved into steps in a lot of places. In addition to a large wooden hut at the summit, you’ll find multiple benches along the trail for resting.
## 100 Waterfalls Trek
Although one of the main reasons we went to Nong Khiaw was for the 100 waterfalls trek, we ended up not doing it.
We opted out primarily because we were exhausted from the Pha Daeng Peak hike the day before, but an even bigger deterrent was the nightmare stories from recent 100 Waterfalls trekkers about leeches. This was in July; I suspect other parts of the year don’t have this leech problem.
At the same time, I also heard fantastic stories about the 100 Waterfalls trek. Here’s a description about the 100 waterfalls one day trek from a Tiger Trail marketing flyer.
This tour starts with a 1-hour boat ride down the Nam Ou to the small village where the 100 waterfalls were first explored. Spend the next 3.5 hours trekking through some stunning scenery and up the 100 waterfalls. After looking around the 100 waterfalls, we go to a Khmu village famous for its Lao rice whisky. We take a boat back, visiting another Khmu Lao village and the Pha Tok caves, where locals sheltered during the war.
There seems to be a bitter competition as to which tour company was the first tour company to find the 100 waterfalls. Both Nong Khiaw Aventures Travel (located at the end of the main road) and Tiger Trail (a few doors down) claim to be the first tour company to find the 100 waterfalls around 2008. From a tourist perspective, it doesn’t really matter who was first. I only bring this up because it rubbed me the wrong way during the sales pitches I heard.
### 100 Waterfalls Trek Price
The last thing I’ll mention here is about the 100 waterfalls tour pricing. I really dislike how these Nong Khiaw tour companies structure their prices based on the number of people who sign up. For example, if it’s just you and your travel buddy, you’ll pay 300,000 kip (\$35 USD) each for a one day 100 waterfalls tour, whereas if 7 other people sign up for the same day, you’ll only pay 200,000 kip (\$23 USD) each.
For this reason, you’ll see lots of chalkboard signs outside of the many tour companies in Nong Khiaw declaring how many people have signed up for the various tours that they offer.
## Best Place to Eat in Nong Khiaw
After trying a couple other Nong Khiaw restaurants including Sengdao Chittavong and Joy’s, we ended up eating probably 75% of our meals at Coco Home Bar & Restaurant during our 5 day stay.
Not only was all the food delicious, but the prices were cheap and the staff was friendly. We especially liked the owner, Sebastian, who was super nice. He even let us take our beer and wine down to enjoy along the river.
Not to say that the laap wasn’t good (because it certainly was), but a highlight was definitely breakfast for me. Breakfast comes with a baguette, muesli, yogurt, fruit, butter, jam, orange juice, and tea for 45,000 kip or just over \$5 USD.
While you don’t have to eat here for every meal like I did, I recommend that you at least stop by as you pass through Nong Khiaw.
## Nong Khiaw Bridge
A focal point of Nong Khiaw is the bridge. The bridge connects the more populous western side of Nong Khiaw to the opposite side where most tourists stay. The bridge was built in 1964 as a gift from China.
We calculated the height of the Nong Khiaw bridge by hocking a loogie off the bridge. It took on average 3.65 seconds for our spit to hit the water. By using one of the four kinematic equations of physics, we can determine that the Nong Khiaw bridge is around 65 meters tall.
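For the curious, the kinematic equation in question is the free-fall relation $h = \frac{1}{2}gt^2$ — a back-of-the-napkin sketch that ignores air resistance and any initial speed of the spit:

```python
g = 9.81   # gravitational acceleration, m/s^2
t = 3.65   # average time for the spit to hit the water, s

# free fall from rest: h = (1/2) * g * t^2
h = 0.5 * g * t**2
# h comes out to roughly 65 meters
```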
Maybe more interesting than spitting off the bridge is how the youth of Nong Khiaw use the bridge at night to socialize. Take a stroll across the bridge after sunset and you’ll find different groups of kids and teenagers posted up under various light posts. Some groups will be chatting and drinking while others will be on their cell phones.
## Where to Stay in Nong Khiaw
We stayed five nights at Sunrise Guesthouse in Nong Khiaw. Kampi, who showed us our room, was such a jolly old man. He couldn’t speak a lick of English, but still managed to greet us and put a smile on our faces every time we crossed paths.
Sunrise Guesthouse has both rooms with and without air conditioning. We opted for a room with A/C. A non-A/C room will probably run you 50,000 kip or less.
Our room had good air-conditioning. The only complaints were that the bathroom water had a weird smell and the hammock on the deck was ripped. But for 110,000 kip per night, this was a much better deal than the bungalows a bit up the river at Nong Kiau Riverside, which were over twice as expensive.
Since you made it this far, it seems like you’re planning a trip to Laos. We continued our trip up the river to Muang Ngoi which I also highly recommend, and then finally to Phongsaly.
Let me know in the comments below if you have any questions about Nong Khiaw or Northern Laos in general.
Meet Tony
After years of backpacking the world solo, Tony is an expert when it comes to budget travel. Discover why Tony quit his job to travel on the cheap, and follow him on YouTube for all the latest.
|
|
# Euclidean distance proof
How can I show that the Euclidean distance satisfies the triangle inequality?
Where the Euclidean distance is given by: $$d(p,q) = \sqrt{(p_1-q_1)^2 + \cdots + (p_n-q_n)^2}$$
Triangle Inequality: $\forall x,y,z\Bigl( d(x,z) \leq d(x,y) + d(y,z)\Bigr)$.
-
This is your third question on the same topic in just over one hour. Instead of asking questions here, why don't you take some time and try this questions yourself? If you have in fact tried these questions by yourself, where did you find difficulty? – JavaMan Oct 23 '11 at 22:04
I have tried these questions before asking. For this question. I used the case where (x - z)^2 ≤ (sqrt(x - y)^2 + sqrt(y - z)^2)^2. But I am not sure where to go from there. – Laciel Oct 23 '11 at 22:18
Let ${\bf p} = (p_1,\dots,p_n)$ and ${\bf q}=(q_1,\dots,q_n)$. Recall that ${\bf p \cdot q} = p_1q_1 + \cdots + p_nq_n$ and $|{\bf p}|=\sqrt{{\bf p \cdot p}}$ and finally $d({\bf p}, {\bf q})=|{\bf p}-{\bf q}|$.
First, let's establish the Cauchy-Schwarz inequality: $|{\bf p \cdot q}| \leq |{\bf p}| |{\bf q}|$.
Consider $|{\bf p}-c{\bf q}|^2 = ({\bf p}-c{\bf q}) {\bf \cdot} ({\bf p}-c{\bf q}) = c^2 {\bf q \cdot q} - 2c {\bf p \cdot q} + {\bf p \cdot p}=|{\bf q}|^2c^2-2({\bf p\cdot q})c+|{\bf p}|^2$. This is a quadratic in $c$ and since $|{\bf p}-c{\bf q}|^2 \geq 0$ we have $|{\bf q}|^2c^2-2({\bf p\cdot q})c+|{\bf p}|^2 \geq 0$. Thus this quadratic either has a repeated real root or complex roots. Thus its discriminant is non-positive. So $(-2({\bf p \cdot q}))^2-4|{\bf q}|^2|{\bf p}|^2 \leq 0$. This means $4({\bf p\cdot q})^2 \leq 4|{\bf q}|^2|{\bf p}|^2$. Canceling the 4's and taking square roots give us the required result.
Use CS as follows: $|{\bf p}+{\bf q}|^2=({\bf p}+{\bf q}){\bf \cdot}({\bf p}+{\bf q})=|{\bf p}|^2+2({\bf p\cdot q})+|{\bf q}|^2 \leq |{\bf p}|^2+2|{\bf p\cdot q}|+|{\bf q}|^2$ $\leq |{\bf p}|^2+2|{\bf p}||{\bf q}|+|{\bf q}|^2=(|{\bf p}|+|{\bf q}|)^2$. This gives you $|{\bf p}+{\bf q}| \leq |{\bf p}|+|{\bf q}|$.
Now consider 3 points ${\bf p}$, ${\bf q}$, and ${\bf r}$. $d({\bf p}, {\bf r})=|{\bf p}-{\bf r}|=|{\bf p}-{\bf q}+{\bf q}-{\bf r}| \leq |{\bf p}-{\bf q}| + |{\bf q}-{\bf r}|=d({\bf p},{\bf q})+d({\bf q},{\bf r})$
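A quick numerical spot-check of the final inequality, with random points in $\Bbb R^5$ (not a substitute for the proof above, just a sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, r = rng.normal(size=(3, 5))   # three random points in R^5

def d(x, y):
    # Euclidean distance: sqrt of the sum of squared coordinate differences
    return np.sqrt(np.sum((x - y) ** 2))

# triangle inequality: d(p, r) <= d(p, q) + d(q, r)
assert d(p, r) <= d(p, q) + d(q, r)
```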
|
|
# How do you like them rectangles?
I love sudoku puzzles, and I remember how excited I was to discover kakuro puzzles. They combined the sudoku experience with math! This week, fivethirtyeight's riddler introduces what I think of as "geometric sudoku", because it combines the variable-finding elements of math puzzles with some good old fashioned geometry.
This week, we’ve got two puzzles from the forthcoming puzzle book “The Original Area Mazes,” by Alex Bellos, Naoki Inaba and Ryoichi Murakami. The goal of these puzzles, which are also known by the Japanese term “menseki meiro,” is to figure out what the “?” equals. The only math you’ll need to know is that length times width equals area. Keep in mind that the diagrams aren’t necessarily to scale — this is about logic, not measuring.
Importantly, the figures may not be to scale, but I have it on good authority that we can trust the lines and edges.
### Riddler Express
This puzzle was almost frustratingly simple, and I didn't trust my answer at first. But it is easy to show that $? = 4$.
$$4x = 24$$
$$\therefore\ x = 6$$
$$(6 + 3 + 2)? = 44$$
$$\therefore\ ? = 4$$
### Riddler Classic
This puzzle took a bit more effort, but it can be solved in several different ways. At first, I solved for the size of the "missing" rectangle in the upper right in order to calculate the area of the entire rectangle. Later, I simplified this solution by writing two equations in terms of $x$ and $y$, the bottom and left edges of $?$, respectively. From there, with two equations and two unknowns, we can solve for our missing piece.
$$? = xy$$
$$x(11-y) = 32$$
$$y(14-x) = 45$$
Rearranging these equations, we can plot them as functions of x, set them equal to one another, and solve for the answer(s).
$$y = 11 - \frac{32}{x}$$
$$y = \frac{45}{14-x}$$
This yields two pairs of solutions, courtesy of wolframalpha and verified by Python for good measure.
import numpy as np

# the two constraints rewritten as y in terms of x
def y1(x): return 11 - 32/x      # from x(11 - y) = 32
def y2(x): return 45 / (14-x)    # from y(14 - x) = 45

x = np.arange(0.01, 10.01, 0.01)
line1 = y1(x)   # evaluate both curves on a grid;
line2 = y2(x)   # their crossings are the candidate solutions
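Equivalently, setting $11 - \frac{32}{x} = \frac{45}{14-x}$ and clearing denominators gives the quadratic $11x^2 - 141x + 448 = 0$, whose roots can be checked directly (a short sketch; the quadratic is derived here rather than taken from the original write-up):

```python
import numpy as np

# 11 - 32/x = 45/(14 - x)  =>  11x^2 - 141x + 448 = 0
roots = np.sort(np.roots([11.0, -141.0, 448.0]))  # [64/11, 7]
for x in roots:
    y = 11 - 32 / x
    print(f"x = {x:.4f}, y = {y:.4f}, xy = {x * y:.1f}")
```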
$$x = \frac{64}{11},\ y = \frac{11}{2},\ xy = 32$$
$$x = 7,\ y = \frac{45}{7},\ xy = 45$$
Which one is correct? We know the total area of the rectangle must be $14\times{11} = 154$, which means the $?$ must be less than $154 - 45 - 34 - 32 = 43$. Only one solution meets this criterion, so we conclude that $? = 32$.
Looks like “The Original Area Mazes” could be a great source of future puzzles (and posts...)!
|
|
# More Proofs (2/3 I think I've solved)
1. Let A be an m × n real matrix of rank r whose m rows are all pairwise distinct. Let
s ≤ m, and let B be an s × n matrix obtained by choosing s distinct rows of A. Prove
that
rank(B) ≥ r + s − m.
Solution:
Assume that s is the largest amount of distinct rows of A.
r = n-dimNul A
dimNul A= m-s
r=n-m+s
rank(B) ≥ n-m +s + s − m
rank(B) ≥ n+ (-2m +2s)
rank(B) = n- dimNul B
dimNul B = 0 since all s rows are linearly independent
rank(B) = n
n ≥ n -2m +2s
0 ≥ -2m + 2s
2m ≥ 2s
m ≥ s (dividing by 2)
Since this is given, doesn't this conclude the proof? Or should I plug in the 0, so
rank(B) ≥ n + x where 0≥x
n ≥ n + x
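As a side note, the inequality in the problem statement can be sanity-checked numerically — this checks the claim itself, not the argument above, and the matrix sizes here are an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(6, 4)).astype(float)  # m = 6 rows, n = 4 columns
m, n = A.shape
r = np.linalg.matrix_rank(A)

for s in range(1, m + 1):
    B = A[:s]  # s rows chosen from A
    # deleting a row drops the rank by at most 1, so rank(B) >= r - (m - s)
    assert np.linalg.matrix_rank(B) >= r + s - m
```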
2. Let V be a vector space, let p ≤ m, and let b1, . . . , bm be vectors in V such that
A = {b1, . . . , bp} is a linearly independent set, while C = {b1, . . . , bm} is a spanning set
for V . Prove that there exists a basis B for V such that A ⊆ B ⊆ C.
Solution:
I'm going on the fact that it does not mention that C is linearly independent; thus, by the spanning set theorem there exists a linearly independent subset {bi,...,bk} which spans V. Thus, this set {bi,...,bk} is a basis for V.
This means that the basis must at least contain A, since B cannot be a basis for V if there is another linearly independent vector bp left out. Meaning:
$$A \subseteq B$$
Also since B is a spanning set of V and is comprised of at least {b1,...,bp} it must be a subset of C since C also spans V and includes A.
Thus
$$A \subseteq B \subseteq C$$
3. Let {v1 , v2, . . . , vm} be an indexed linearly dependent set of vectors in a vector space V such that v1 is not 0v . Prove that there exists exactly one index 2≤ i≤m with the property that the vector vi can be expressed as a linear combination of the preceding vectors v1, . . . , v(i-1) in a unique way.
|
|
# Mechanical Metallurgy Questions and Answers – Strengthening Mechanisms – Grain Boundaries & Deformation – 2
This set of Mechanical Metallurgy Questions and Answers for Experienced people focuses on “Strengthening Mechanisms – Grain Boundaries & Deformation – 2”.
1. Considering the Hall-Petch relationship, the red-marked region is called __________
a) yield strength
b) critical diameter
c) friction stress
d) Hall-Petch constant
Explanation: According to the Hall-Petch relation:
σ_o = σ_i + kD^(-1/2)
The intercept on the y-axis is equal to σ_i.
This σ_i is physically interpreted as the yield strength of a single crystal (no grain boundaries). It is called the lattice friction stress.
2. The Li model predicts the yield strength from the dislocation density, stating that:
σ_o = σ_i + αGρ^(1/2)
where α is a constant and ρ is the dislocation density.
a) True
b) False
Explanation: The Hall-Petch relationship predicts yield strengths approaching the theoretical strength when the grain diameter is very small (~4 nm), which is not correct. To address this discrepancy in strength prediction, the Li model based on dislocation density was proposed.
3. The yield strength of a material is 200MPa when the grain size is 80 micron. If the frictional stress is 80MPa, find the strength when the grain diameter is 160 micron?
a) 100 MPa
b) 150MPa
c) 250MPa
d) 180 MPa
Explanation: According to the Hall-Petch relationship:
σ_o = σ_i + kD^(-1/2)
200 = 80 + k(80)^(-1/2)
k = 1252
So at D = 160:
σ_o = 80 + 1252(160)^(-1/2)
σ_o = 178.9 MPa.
4. The mean intercept method of grain size measurement ______________
a) overestimates the grain size
b) underestimates the grain size
c) predicts the accurate grain size
d) no defined relation
Explanation: The mean intercept method measures grain size by counting the number of grain boundaries (grains) intersected by a line drawn along a certain direction. The method gives no information about grain shape and underestimates the grain size.
5. ASTM Grain size measurement method counts the __________________
a) number of grain in 1 mm2 at magnification of 1X
b) number of grain in 1 mm2 at magnification of 100X
c) number of grain in 100 mm2 at magnification of 1X
d) number of grain in 100 mm2 at a magnification of 100X
Explanation: The ASTM grain size method counts the number of grains n_a in 1 mm² at a magnification of 1X.
The grain size number is given by:
G = -2.9542 + 1.4427 ln n_a.
6. As the ASTM grain size number increases, the grain diameter will ______________
a) increase
b) decrease
c) remain constant
d) increase first, then decrease
Explanation: The ASTM grain size number is given by:
G = -2.9542 + 1.4427 ln n_a
So as G increases, n_a also increases (more grains per mm²), and the grain diameter therefore decreases.
7. As the ASTM grain size number increases, the number of grain per unit volume will ______________
a) increase
b) decrease
c) remain constant
d) increase first, then decrease
Explanation: The ASTM grain size number is given by:
G = -2.9542 + 1.4427 ln n_a
So as G increases, n_a also increases (more grains per mm²), and the number of grains per unit volume also increases.
8. Determine the grain diameter if ASTM grain size number is 5?
a) 9.32×10^-5 m
b) 6.32×10^-3 m
c) 44.7×10^-6 m
d) 44.7×10^-8 m
Explanation: The ASTM grain size number is given by:
G = -2.9542 + 1.4427 ln n_a
With G = 5:
5 = -2.9542 + 1.4427 ln n_a
n_a = exp{(5 + 2.95)/1.44}
n_a = 249.84
So at 1X there are 249.84 grains per mm², i.e. 249.84×10^6 grains per m².
The grain diameter is roughly related to the number of grains by:
D = $$\sqrt{(1/n_a)}$$
D = $$\sqrt{(1/249.84\times10^6)}$$
D = 6.32×10^-5 m.
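The chain of substitutions above can be reproduced in a few lines (a sketch; the constants come from the ASTM formula quoted in the explanation):

```python
import math

G = 5
n_a = math.exp((G + 2.9542) / 1.4427)  # grains per mm^2 at 1X, ~248
n_per_m2 = n_a * 1e6                   # grains per m^2
D = math.sqrt(1 / n_per_m2)            # approximate grain diameter, m
# D works out to about 6.3e-5 m, i.e. ~63 microns
```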
9. If there are 40 grains in 1 mm2 at 100X, how many grains were there in the actual grain of 1 mm2 of the surface?
a) 200
b) 200000
c) 0.4
d) 400000
Explanation: There are 40 grains per mm² at 100X.
At 1X (the actual size of the sample), the number of grains per mm² will be:
40×100×100 = 400,000.
10. If there are 10000 grains in 1 mm2 at 50X, how many grains will be there in the same area at 100X?
a) 20000
b) 5000
c) 2500
d) 100000
Explanation: 10,000 grains per mm² at 50X.
So at 1X it will be 10,000×50×50.
And at 100X it will be 10,000×50×50/(100×100) = 2,500.
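Both magnification questions reduce to the same rule — grain counts per unit image area scale with the square of the magnification:

```python
n_at_50x = 10000                 # grains per mm^2 observed at 50X
n_at_1x = n_at_50x * 50**2       # grains per mm^2 of the actual surface
n_at_100x = n_at_1x / 100**2     # grains per mm^2 seen at 100X -> 2500
```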
11. What is the equilibrium spacing between the edge dislocations along a low-angle grain boundary of 1.5-degree misorientation? (Given that the Burgers vector of the dislocations is equal to 3 A°.)
a) 10 A°
b) 1550 A°
c) 30 A°
d) 115 A°
Explanation: The misorientation θ across a low-angle grain boundary is given by:
θ = b/D
where D is the equilibrium spacing.
So (1.5×3.14)/180 = 3×10^-10/D
D = 115.38×10^-10 m
≈ 115 A°.
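The same spacing calculation, using an exact degree-to-radian conversion instead of the 3.14 approximation:

```python
import math

theta = math.radians(1.5)   # misorientation across the boundary, radians
b = 3e-10                   # Burgers vector, m (3 angstrom)
D = b / theta               # equilibrium dislocation spacing, m
# D comes out near 1.15e-8 m, i.e. ~115 angstrom
```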
|
|
• # Checking In Loans to Sublibraries Takes a Long Time
• Article Type: General
• Product: Aleph
• Product Version: 20
Description:
When checking in loans to sublibraries (which occur when an item is placed In Transit), the check-in transaction takes nearly one minute to complete. Loans to non-sublibrary patrons check in quickly.
Resolution:
Looking at the pc_server_6998.log.2104.0855 on Test, I find that the first transaction begins at 08:55:27 and the last completes at 08:55:52.
The return transaction itself takes only a couple seconds. It shows as an RET-ILL:
DESCRIPTION: Return Item
ACTION : RET-ILL
PROGRAM : pc_cir_c0442
I didn't think you were using ILL. This could be like KB 8192-2215. If so, then please try changing the "ReturnILL" parameter in your GUI .\CIRC\TAB\circ.ini file.
[From site:] Changing this circ.ini parameter corrected the problem.
• Article last edited: 10/8/2013
|
|
#### Vol. 12, No. 4, 2019
ISSN: 1948-206X (e-only); ISSN: 2157-5045 (print)
Quantum dynamical bounds for ergodic potentials with underlying dynamics of zero topological entropy
867–902
### Rui Han and Svetlana Jitomirskaya
Two-dimensional gravity water waves with constant vorticity, I: Cubic lifespan
903–967
### Mihaela Ifrim and Daniel Tataru
Absolute continuity and $\alpha$-numbers on the real line
969–996
### Tuomas Orponen
Global well-posedness for the two-dimensional Muskat problem with slope less than 1
997–1022
### Stephen Cameron
Global well-posedness and scattering for the radial, defocusing, cubic wave equation with initial data in a critical Besov space
1023–1048
### Benjamin Dodson
Nonexistence of Wente's $L^\infty$ estimate for the Neumann problem
1049–1063
### Jonas Hirsch
Global geometry and $C^1$ convex extensions of 1-jets
1065–1099
### Daniel Azagra and Carlos Mudarra
Classification of positive singular solutions to a nonlinear biharmonic equation with critical exponent
1101–1113
### Rupert L. Frank and Tobias König
Optimal multilinear restriction estimates for a class of hypersurfaces with curvature
1115–1148
|
|
Tr. Mat. Inst. Steklova, 2001, Volume 232, Pages 248–267 (Mi tm217)
Nonexistence of Weak Solutions for Some Degenerate and Singular Hyperbolic Problems on $\mathbb R_+^{n+1}$
E. Mitidieri^a, S. I. Pohozaev^b
a Dipartimento di Scienze Matematiche, Università degli Studi di Trieste, Via A. Valerio 12/1, 34127, Trieste, Italia
b Steklov Mathematical Institute, Russian Academy of Sciences
Abstract: Theorems concerning the absence of weak solutions are proved for a wide class of evolution equations and inequalities. This class includes, in particular, the inequalities with degenerate and singular operators of hyperbolic type.
Full text: PDF file (258 kB)
References: PDF file HTML file
English version:
Proceedings of the Steklov Institute of Mathematics, 2001, 232, 240–259
Bibliographic databases:
Document Type: Article
UDC: 517
Citation: E. Mitidieri, S. I. Pohozaev, “Nonexistence of Weak Solutions for Some Degenerate and Singular Hyperbolic Problems on $\mathbb R_+^{n+1}$”, Function spaces, harmonic analysis, and differential equations, Collected papers. Dedicated to the 95th anniversary of academician Sergei Mikhailovich Nikol'skii, Tr. Mat. Inst. Steklova, 232, Nauka, MAIK «Nauka/Inteperiodika», M., 2001, 248–267; Proc. Steklov Inst. Math., 232 (2001), 240–259
Citation in format AMSBIB
\Bibitem{MitPok01} \by E.~Mitidieri, S.~I.~Pohozaev \paper Nonexistence of Weak Solutions for Some Degenerate and Singular Hyperbolic Problems on $\mathbb R_+^{n+1}$ \inbook Function spaces, harmonic analysis, and differential equations \bookinfo Collected papers. Dedicated to the 95th anniversary of academician Sergei Mikhailovich Nikol'skii \serial Tr. Mat. Inst. Steklova \yr 2001 \vol 232 \pages 248--267 \publ Nauka, MAIK «Nauka/Inteperiodika» \publaddr M. \mathnet{http://mi.mathnet.ru/tm217} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=1851453} \zmath{https://zbmath.org/?q=an:1032.35138} \transl \jour Proc. Steklov Inst. Math. \yr 2001 \vol 232 \pages 240--259
• http://mi.mathnet.ru/eng/tm217
• http://mi.mathnet.ru/eng/tm/v232/p248
This publication is cited in the following articles:
1. G. G. Laptev, “On the Absence of Solutions for a Class of Singular Semilinear Differential Inequalities”, Proc. Steklov Inst. Math., 232 (2001), 216–228
2. E. Mitidieri, S. I. Pokhozhaev, “A priori estimates and blow-up of solutions to nonlinear partial differential equations and inequalities”, Proc. Steklov Inst. Math., 234 (2001), 1–362
3. D'Ambrosio L., “Critical degenerate inequalities on the Heisenberg group”, Manuscripta Math., 106:4 (2001), 519–536
4. J. Hay, “On Necessary Conditions for the Existence of Local Solutions to Singular Nonlinear Ordinary Differential Equations and Inequalities”, Math. Notes, 72:6 (2002), 847–857
5. Laptev G.G., “Nonexistence of solutions for parabolic inequalities in unbounded cone-like domains via the test function method”, J. Evol. Equ., 2:4 (2002), 459–470
6. Hay J., “Necessary conditions for the existence of global solutions of higher–order nonlinear ordinary differential inequalities”, Differ. Equ., 38:3 (2002), 362–368
7. Laptev G.G., “Absence of solutions of evolution-type differential inequalities in the exterior of a ball”, Russ. J. Math. Phys., 9:2 (2002), 180–187
8. G. G. Laptev, “Non-existence of global solutions for higher-order evolution inequalities in unbounded cone-like domains”, Mosc. Math. J., 3:1 (2003), 63–84
9. D'Ambrosio L., Lucente S., “Nonlinear Liouville theorems for Grushin and Tricomi operators”, J. Differential Equations, 193:2 (2003), 511–541
10. Hay J., “Necessary conditions for the existence of local solutions to higher-order singular nonlinear ordinary differential equations and inequalities”, Dokl. Math., 67:1 (2003), 66–69
11. Lupo D., Payne K.R., “Critical exponents for semilinear equations of mixed elliptic-hyperbolic and degenerate types”, Comm. Pure Appl. Math., 56:3 (2003), 403–424
12. Pohozaev S., “The general blow-up for nonlinear PDE's”, Function Spaces, Differential Operators and Nonlinear Analysis: the Hans Triebel Anniversary Volume, 2003, 141–159
13. Caristi G., “Nonexistence of global solutions of higher order evolution inequalities in R-N”, Nonlinear Equations: Methods, Models and Applications, Progress in Nonlinear Differential Equations and their Applications, 54, 2003, 91–105
14. El Hamidi A., Laptev G.G., “Nonexistence of solutions to semilinear inequalities in conical domains”, Bull. Belg. Math. Soc. Simon Stevin, 11:3 (2004), 343–364
15. El Hamidi A., Laptev G.G., “Existence and nonexistence results for higher-order semilinear evolution inequalities with critical potential”, J. Math. Anal. Appl., 304:2 (2005), 451–463
16. Kaikina E.I., Naumkin P.I., Shishmarev I.A., “On the asymptotic behavior of solutions to the Cauchy problem for a nonlinear Sobolev-type equation”, Dokl. Math., 71:2 (2005), 269–273
17. O. V. Besov, A. M. Il'in, V. A. Il'in, L. D. Kudryavtsev, S. M. Nikol'skii, L. V. Ovsyannikov, E. Mitidieri, A. Tesei, L. Véron, “Stanislav Ivanovich Pokhozhaev (on his 70th birthday)”, Russian Math. Surveys, 61:2 (2006), 373–378
18. Lupo D., Payne K.R., Popivanov N.I., “Nonexistence of nontrivial solutions for supercritical equations of mixed elliptic-hyperbolic type”, Contributions to Nonlinear Analysis - A TRIBUTE TO D. G. DE FIGUEIREDO ON THE OCCASION OF HIS 70TH BIRTHDAY, Progress in Nonlinear Differential Equations and their Applications, 66, 2006, 371–390
19. M. Kirane, N.-e. Tatar, “Nonexistence for the Laplace equation with a dynamical boundary condition of fractional type”, Siberian Math. J., 48:5 (2007), 849–856
20. Galaktionov V.A., Mitidieri E., Pohozaev S.I., “Capacity induced by a nonlinear operator and applications”, Georgian Math. J., 15:3 (2008), 501–516
21. Sun F., Shi P., “Global Existence and Non-Existence for a Higher-Order Parabolic Equation with Time-Fractional Term”, Nonlinear Anal.-Theory Methods Appl., 75:10 (2012), 4145–4155
22. D'Abbicco M., Ebert M.R., “A new phenomenon in the critical exponent for structurally damped semi-linear evolution equations”, Nonlinear Anal.-Theory Methods Appl., 149 (2017), 1–40
23. Jleli M., Samet B., “Liouville-Type Theorems For a System Governed By Degenerate Elliptic Operators of Fractional Orders”, Arabian J. Math., 6:3 (2017), 201–211
|
|
# Geometric Interpretation of Gaussian elimination
I understand the solution of a linear system of equation is equivalent to finding the intersection points of n-hyperplanes.
There are 3 elementary row operations - scaling an equation, exchanging equations and subtracting a scalar multiple of an equation from another equation.
The first two I understand geometrically. I am trying to understand the subtraction of two equations geometrically. I can see that the row operation produces a new hyperplane which has one of the coordinate axes as its normal. What I am specifically trying to understand is how the hyperplane is rotated by exactly the right angle. Does it have anything to do with direction cosines?
Thanks Rajagopal
-
The coefficients of the equation specify the normal vector of the corresponding hyperplane. Thus, if you add equations, you're adding their normal vectors. Specifically, if you have equations $e_1$ and $e_2$ and you form $e=e_1+\lambda e_2$, you get a new normal vector $n=n_1+\lambda n_2$. The direction of this new normal vector goes to that of $n_2$ for $\lambda\to\pm\infty$, but, unless $n_2$ is parallel to $n_1$, it is never that of $n_2$, which ensures that the new equation is linearly independent of the one for $n_2$ if the old one was.
By "the row operation produces a new hyperplane which has one of the coordinate axes as its normal", I presume you're referring specifically to the row operations used in Gaussian elimination which make one of the coefficients $0$. This doesn't mean that the normal of the new hyperplane is one of the coordinate axes (else it would have to have all the previous coordinate axes as its normal, too). Rather, it means that the normal of the hyperplane is orthogonal to that coordinate axis.
@Rajagopal: That only happens in the very last step of the elimination. In all the previous steps, all you can say when you produce a $0$ by subtraction is that you've made the normal vector orthogonal to one of the axes. In the last step, when you've made it orthogonal to all the axes except one, it is then necessarily parallel to that remaining axis. – joriki May 23 '11 at 8:52
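The normal-vector picture in the answer can be made concrete with a small numeric sketch (the particular equations here are invented for illustration):

```python
import numpy as np

# rows are the normal vectors of two planes: 2x + y + z = ... and 4x + 3y + z = ...
n1 = np.array([2.0, 1.0, 1.0])
n2 = np.array([4.0, 3.0, 1.0])

lam = -n2[0] / n1[0]        # choose lambda so the x-coefficient cancels
n_new = n2 + lam * n1       # normal of the new hyperplane: [0, 1, -1]

# the new normal is orthogonal to the x-axis, not parallel to it
assert n_new[0] == 0.0
```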
|
|
Deriving a Hamiltonian from dimensionless equations
1. Sep 10, 2009
thrillhouse86
Hey all,
(As I mentioned in my previous post) I am trying to derive the Hamiltonian for a aeroelastic system, where the dynamical equations of motion (determined by Newtonian Mechanics) are known.
My process has been to
1. "guess" a form of the Lagrangian, check that it recreates the equations of motion
2. determine the canonical momentum
3. Perform a Legendre transform from the Lagrangian to the Hamiltonian
I've been thinking recently though, that the equations of motion are in dimensionless form (i.e. the equations have been appropriately scaled so that they are unitless). My question is:
Is it acceptable to derive the Hamiltonian from the dimensionless equations of motion, or do I have to use the original, dimensional equations of motion?
My rationale for this question is that the Hamiltonian does have to give me the total energy of the system - and I'm not sure that working from dimensionless equations will give me that ...
Cheers,
Thrillhouse
|
|
Will this method for finding perpendicular distance from point to 3D line work? Coordinate Inputs Line: start (1, 0, 2) end (4.5, 0, 0.5) Point: pnt (2, 0, 0.5) Figure 2 The Y coordinates of the line and point are zero and as such both lie on the XZ plane. 10:53. custom functions in Excel. Convert the extruded point feature class to a 3D line feature class. Distance from C to a line that goes through A and B is 1. A vector "from B to A" is A-B. rev 2020.12.8.38142, The best answers are voted up and rise to the top, Mathematics Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us. Why is "issued" the answer to "Fire corners if one-a-side matches haven't begun"? More in-depth information read at these rules. < Deleted because op didn't want a vba function >. Theory. In Euclidean geometry, the distance from a point to a line is the shortest distance from a given point to any point on an infinite straight line.It is the perpendicular distance of the point to the line, the length of the line segment which joins the point to nearest point on the line. $$B= (1,0,1)$$ This example treats the segment as parameterized vector where the parameter t varies from 0 to 1.It finds the value of t that minimizes the distance from the point to the line.. Is there any text to speech program that will run on an 8- or 16-bit CPU? Let $s=(a_1b_1+a_2b_2+a_3b_3)/(b_1^2+b_2^2+b_3^2)$. $$C = (1,2,0)$$, Hint: A point on the line BC is described by $t B + (1-t)C$, $t\in {\Bbb R}$. by a straight line and a point not on that line, and ; a point and a line perpendicular to the plane. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. Shouldn't it be v=A-B, not B-A? 
**Segment/ray formulation.** The end points of the line segment are $B$ and $B + M$, where $M$ is the direction vector; a point on the segment is $B + tM$ with $t \in [0,1]$. The closest point on the infinite line to $P$ is the projection of $P$ onto the line,
$$Q = B + t_0 M, \qquad t_0 = \frac{M \cdot (P - B)}{M \cdot M},$$
and the distance from $P$ to the line is $D = |P - (B + t_0 M)|$. If $t_0 \le 0$, the closest point on the ray to $P$ is $B$ itself (and for a segment, if $t_0 \ge 1$ it is the other end point $B + M$). This treats the segment as a parameterized vector and finds the value of $t$ that minimizes the distance from the point to the line. The same machinery generalizes to other distance queries: line–line, line–ray, line–segment, ray–ray, ray–segment, segment–segment, and point–volume queries (point–aligned box, point–oriented box, point–tetrahedron, point–ellipsoid, point–frustum).

**Projection formulation.** Intuitively, you want the distance between the point $A$ and the point on the line $BC$ that is closest to $A$, which is exactly the projection of $A$ on the line. Take the unit direction of the line — the difference of $C$ and $B$, divided by their distance — and call it $\hat d$. Then define a vector $v = A - B$ from $B$ to $A$. The dot product $v \cdot \hat d$ gives the (signed) distance between $B$ and the projection of $A$ on $BC$; this projection can be computed using the dot product (sometimes referred to as the "projection product"). The actual projection $P$ of $A$ on $BC$ is then $P = B + (v \cdot \hat d)\,\hat d$, and finally the distance you have been looking for is $|A - P|$. This approach has the advantage of giving you exactly the closest point on the line (which may be a nice add-on to computing only the distance), and it can be implemented easily.

**Coordinate form.** Translate so the line passes through the origin with direction $b = (b_1, b_2, b_3)$, and let the point be $a = (a_1, a_2, a_3)$. Let $s = (a_1 b_1 + a_2 b_2 + a_3 b_3)/(b_1^2 + b_2^2 + b_3^2)$. Then the distance is $\sqrt{(a_1 - s b_1)^2 + (a_2 - s b_2)^2 + (a_3 - s b_3)^2}$. This is $\lVert A - \frac{A \cdot B}{B \cdot B} B \rVert$: the distance between the vector $A$ and the orthogonal projection of $A$ onto the subspace spanned by the vector $B$.
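The segment recipe above (compute $t_0$, clamp it to $[0,1]$, project) is easy to sketch in plain Python. This is a minimal illustration using the coordinates from the question at the top, not tied to any particular spreadsheet or GIS tool:

```python
import math

def closest_point_on_segment(b, m, p):
    """Closest point to p on the segment from b to b + m, via t0 = M.(P-B)/(M.M)."""
    diff = [pi - bi for pi, bi in zip(p, b)]
    t0 = sum(mi * di for mi, di in zip(m, diff)) / sum(mi * mi for mi in m)
    t0 = max(0.0, min(1.0, t0))  # clamp: outside [0, 1] an end point is closest
    return [bi + t0 * mi for bi, mi in zip(b, m)]

def point_segment_distance(b, m, p):
    """Distance D = |P - (B + t0*M)| from point p to the segment."""
    return math.dist(p, closest_point_on_segment(b, m, p))

# Coordinates from the question: line start (1,0,2), end (4.5,0,0.5), point (2,0,0.5)
start, end, pnt = (1.0, 0.0, 2.0), (4.5, 0.0, 0.5), (2.0, 0.0, 0.5)
m = [e - s for e, s in zip(end, start)]  # direction vector M = end - start
d = point_segment_distance(start, m, pnt)
```

Here `math.dist` (Python 3.8+) does the final Euclidean norm; everything else is dot products, matching the $t_0$ formula above.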
**Distance between two points.** For reference, the distance between two points in a three-dimensional coordinate system is $d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2}$.

**Pythagorean approach.** With $A = (4,2,1)$, $B = (1,0,1)$, $C = (1,2,0)$:
$$AB=\sqrt{13},\qquad AC=\sqrt{10},\qquad BC=\sqrt{5}\tag{1}$$
Let $H_A$ be the foot of the perpendicular from $A$ to the line $BC$. By the Pythagorean theorem,
$$H_A B^2 - H_A C^2 = AB^2 - AC^2 = 3 \tag{2}$$
Since $ABC$ is an acute triangle, $H_A$ lies between $B$ and $C$, so $H_A B + H_A C = BC = \sqrt{5}$; combined with $(2)$ it follows that $H_A B - H_A C = \frac{3}{\sqrt{5}}$, hence
$$H_A B = \frac{4\sqrt{5}}{5},\qquad H_A C = \frac{\sqrt{5}}{5}\tag{3}$$
and Pythagoras's theorem gives the distance from $A$ to the line: $AH_A = \sqrt{AB^2 - H_A B^2} = \sqrt{13 - \frac{16}{5}} = \frac{7}{\sqrt{5}}$.

**Parametric approach.** The line is
$$\langle 1,0,1 \rangle+ t\,\langle 0,-2,1 \rangle$$
(direction $B - C = \langle 0,-2,1\rangle$; equivalently $\vec{BC} = \langle 0,2,-1\rangle$, which parameterizes the same line — and you don't even need to worry about the fact that it is a segment). For some value of $t$, call it $k$, the vector from $A$ to $\langle 1,-2k,1+k \rangle$ is orthogonal to $\langle 0,-2,1 \rangle$:
$$(\langle 1,-2k,1+k \rangle-\langle 4,2,1\rangle)\cdot \langle 0,-2,1\rangle=0$$
$$\langle -3,-2k-2,k \rangle\cdot \langle 0,-2,1\rangle=0$$
This gives $5k + 4 = 0$, i.e. $k = -\frac{4}{5}$; the foot of the perpendicular is $\langle 1, \frac{8}{5}, \frac{1}{5}\rangle$, and its distance to $A$ is the answer. (Equivalently, writing the closest point as $D(1, 2t_0, -t_0)$, we must have $\vec{AD} \cdot \vec{BC} = 0$, because $\vec{AD}$ must be perpendicular to the line.)

**Cross-product formula.** If you are familiar with the cross product, you can get the required distance directly as
$$\frac{|\overrightarrow{BA}\times\overrightarrow{BC}|}{|\overrightarrow{BC}|}$$
For example, for a point $P$ with $\overrightarrow{QP} = \langle -1,-2,1\rangle$, where $Q$ lies on a line $L$ with direction $\langle 5,0,1\rangle$:
$$d(P,L) = \frac{|\langle -1,-2,1\rangle\times\langle 5,0,1\rangle|}{|\langle 5,0,1\rangle|} = \frac{|\langle -2,6,10\rangle|}{\sqrt{26}} = \sqrt{\frac{140}{26}}$$
(Question to the reader: what is the equation of the plane which contains the point $P$ and the line $L$?)

**Related: point to plane.** The same projection idea gives the distance from a point $(x_0, y_0, z_0)$ to a plane $ax + by + cz + d = 0$: project onto the plane's unit normal vector $\vec{n}$ to get $\frac{|ax_0 + by_0 + cz_0 + d|}{\sqrt{a^2+b^2+c^2}}$. This tells us the distance between any point and a plane such as $x - 2y + 3z = 5$.

**Tools.** Excel does not have a pre-defined function to calculate the distance between 3D points and lines, and it cannot be done with the built-in functions alone. You can use basic formulas like SUM() and SQRT() to calculate the distance step by step, or define a custom function and use it as an add-in, so you don't need to type all the parameters again next time. In ArcGIS, convert the extruded point feature class to a 3D line feature class: launch the ArcToolbox > 3D Analyst Tools > Conversion > Layer 3D to Feature Class tool, select the layer for the 'Input Feature Layer' (Note: leave the 'Grouping Field' empty), and click OK to execute the tool and generate the 3D line feature class.
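As a numeric cross-check, here is a short plain-Python sketch of the cross-product formula applied to the worked example $A=(4,2,1)$, $B=(1,0,1)$, $C=(1,2,0)$; the result can be compared against $\sqrt{AB^2 - H_A B^2}$ using the lengths in $(1)$ and $(3)$:

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def point_line_distance(a, b, c):
    """Distance from point a to the infinite line through b and c: |BA x BC| / |BC|."""
    ba = [ai - bi for ai, bi in zip(a, b)]
    bc = [ci - bi for ci, bi in zip(c, b)]
    return math.hypot(*cross(ba, bc)) / math.hypot(*bc)

A, B, C = (4, 2, 1), (1, 0, 1), (1, 2, 0)
d = point_line_distance(A, B, C)  # BA x BC = (-2, 3, 6), so d = 7 / sqrt(5)
```

`math.hypot` accepts three arguments in Python 3.8+, so it serves as the vector norm here.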
**Aside: two lines.** To find the closest points between two lines (rather than a point and a line), recognize that the segment connecting the closest points has direction vector $\mathbf{n} = \mathbf{e}_1 \times \mathbf{e}_2$, the cross product of the two direction vectors. If a point from each line is projected onto this cross line, the distance is found in one fell swoop.

(An online distance-from-a-point-to-a-3D-line calculator can also be used: you can input either a 2D or a 3D line and 2D or 3D points, with integer or fractional coordinates.)
# Tag Info
52
Logistic concerns tend to outweigh small performance differences. Courtesy of Uhoh in the question comments, the 7° difference in latitude is worth $$\left(\cos(24°) - \cos(31°)\right) \frac{ 2 \pi \times 6378137 \ \text{meters}} {86164 \ \text{seconds}} = 26\ \text{meters/second}$$ difference in surface rotation speed, about one-quarter of one percent of ...
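That arithmetic is quick to reproduce (a small sketch; 6378137 m is Earth's equatorial radius and 86164 s the sidereal day, both as used in the formula above):

```python
import math

equatorial_radius = 6378137  # meters
sidereal_day = 86164         # seconds
surface_speed = 2 * math.pi * equatorial_radius / sidereal_day  # ~465 m/s at the equator

# Difference in eastward surface speed between latitudes 24 and 31 degrees
delta = (math.cos(math.radians(24)) - math.cos(math.radians(31))) * surface_speed
```

The surface speed at a given latitude scales with the cosine of the latitude, which is why the 7° difference is worth only about 26 m/s.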
40
That was the southernmost point in Japan (at the time) The answer to your question has its roots in history more so than it does in science. Tanegashima was chosen in 1966 and the space center completed construction in 1969. This was before Okinawa (which included the Yaeyama Islands) was returned to Japan, in 1972. Another potential site, the Ogasawara ...
36
Sea Dragon The very large rocket was probably Sea Dragon, and the advantages were more about allowing a massive vehicle to be built at all than any inherent advantage in starting underwater. (image credits) Building the launch vehicle on a slipway and floating it to the launch site bypasses a number of size constraints in building and moving large ...
25
According to an article from the Lunar and Planetary Institute (archive.org link): As a result of the electrical disturbances experienced during the Apollo 12 launch, several experiments were performed prior to and during the launch of Apollo 13 to study certain aspects of launch-phase electrical phenomena. Measurements taken indicated a significant ...
21
Excess capacity was needed in the storage sphere to allow for multiple attempts in a launch campaign. Much of the propellant was recovered during a scrub but not all. The storage spheres were loaded from waves of tanker trucks and it was a lengthy process - weeks to several months. It would have been embarrassing to run out of propellant after a series of ...
20
The center of the Earth is, for any reasonable approximation, in one of the focus points of an elliptical orbit. For a circular orbit, there is only one focus point, so the center of the Earth is in the center of the orbit. The plane of the orbit thus would intersect both the center of the Earth as well as the launching site. If the launch site was on the ...
18
Why have two separate sites for launch and landing, instead of consolidating them at one site? The plan was to have one site for both launch and landing. The Challenger disaster resulted in a change of plans. Edwards AFB was one of the test locations of the Shuttle program. The test flights with the prototype Enterprise were performed here. But since ...
17
It was done horizontally, in a separate building called MIK-112 (MIK is translated as ‘assembly and testing building’) See more details and photos here: http://www.russianspaceweb.com/baikonur_energia_112.html
13
The short answer is that a spacecraft is attracted to the center point of the earth, not to the earth's rotational axis. [I]t would make sense to me that launching east would result in a 0° inclination with the orbital plane raised so it's parallel to the equator but above or below it. Here's one explanation of why that wouldn't happen that you might ...
12
"Buran" orbiters were assembled in Tushinskiy complex in Moscow (Тушинский авиастроительный завод). Than the orbiter articles were transported by VM-T airplane. The iconic An-225 was not ready in 1988. MIK-112 in Baikonur was used for preflight/postflight maintenance. ( From Russian wikipedia https://ru.wikipedia.org/wiki/Энергия_—_Буран) Quote: ...
11
The reason why the launch sites are built inland goes back to the Cold War. Western commentators have expressed surprise at the selection of a launch site so far inland, in difficult terrain, with poor communication facilities in a relatively populated rural area. The Chinese subsequently explained that during the tense seventies, an inland site was ...
9
You might be thinking of the Sea Dragon project, although this never got past the conceptual / early planning stages. Some of the advantages of a sea launch are that you can be far away from habitation and the water can provide cooling and acoustic damping during launch. But the disadvantages are also serious. You are even more at the mercy of the weather ...
9
Early in the development of the Polaris missile system, there was a lot of work on launching a missile from underwater. Polaris was a nuclear deterrent to rapidly launch multiple missiles from a fully submerged submarine. Staying submerged until a boat-load of launches were complete was a key goal: the boat was to be very difficult to track and destroy ...
9
On the spectrum from "build a new one for every launch" to "nothing was damaged" the actual experience was "some repair and refurbishment needed". The Mobile Launch Platforms (MLPs) and the Launch Umbilical Towers that were mounted on them for Apollo survived the program and were re-built and reused for Shuttle. (For Apollo, the towers were mounted on the ...
8
Edwards already existed, so using that for landing saves you the cost of constructing something else. Moving the shuttle back to Florida after landing cost a lot of money and time. With sufficient flight rate, you recoup the cost of a closer landing facility. History of the Shuttle Landing Facility states Landing the orbiter at KSC’s Shuttle Landing ...
7
Earth's gravity pulls you towards the centre of the Earth, so if you're above Kennedy, that pull has a Southwards component, as well as the component towards the Earth's axis. So your path curves South, so that in the end the orbit spends equal amounts of time North and South of the equator, and the pulls in that direction balance out over time. All orbits ...
6
Partial answer: apparently it is an overhead tank (OHT) associated with the fire suppression system of the Second Launch Pad in Sriharikota. Learned about this from a recent tender about upgrading fire suppression to meet safety requirements of the Augmented Second Launch Pad project, under which a few new facilities will be added to the SLP complex to serve a future line of Kerolox-based ...
6
Partial answer: I can identify the manager responsible for the decision, and the date, but not the reason why. LC-39 was the sole topic at a meeting of the Launch Operations Working Group on 18-19 July [1962] that brought together 113 representatives from LOD, MSFC, and the launch vehicle contractors: Boeing, North American, Douglas, and General Electric. ...
5
It stands for Temporary Flight Restriction. Source: https://en.wikipedia.org/wiki/Temporary_flight_restriction Presumably the TFR is in place for whatever SpaceX is doing and the radar will be utilized during that same event. So the two things are associated, but one is not because of the other.
5
This answer is largely speculative but based upon knowledge of similar systems. The Safir space launch vehicle may use hypergolic propellants. The launch site shown in the photograph has minimal permanent infrastructure. Hypergolic propellant storage facilities at Johnson Space Center and White Sands Test Facility have burner stacks to safely dispose of ...
5
1) Cannons don't fire projectiles above the propagation velocity of the propellant. That's nowhere near orbital velocity. 2) Orbital mechanics 101: Other than when conducting gravity maneuvers your orbit will include the point where your rocket shut down. For a cannon that's when it leaves the barrel--thus your payload comes back down after going ...
4
For shuttle, they didn't. At least not all of the consoles. Note the wood-covered consoles in the rear, they face the window. This is Firing Room 4 which controlled the last 20 or so shuttle launches. Source A view from one of the consoles in Firing Room 3 which also faced the windows. (Personal photo)
4
Probably due to population density further south. The southern islands (pictured in the top left corner) are more densely populated than Tanegashima (bottom left, the oblong-shaped island).
3
Launching from international waters. In addition to the factors mentioned in other posts, there's an additional benefit from launching from the ocean: you can launch from international waters. This could be handy if you're launching a rocket that uses some form of material or process that is illegal or heavily regulated for civilian use in your home country. ...
3
Reductio ad absurdum If you could choose freely on which circle to orbit, the most convenient place to take off from would be the North pole. That would set the circle diameter to zero. You would then climb to whichever altitude you pleased and remain there, immobile in space, for as long as you wanted. How cool would that be? An attempt at analogy In ...
3
A TFR is a Temporary Flight Restriction. My best guess for why SpaceX would include this information in their filing is that the FCC requires that marine radar does not interfere with airplane operations, and what SpaceX is saying is that because of the fact that there are no airplane operations when the radar is transmitting, there is no further need to ...
2
Wallops Island was already an established flight test facility, and there have been a number of rocket launches there, for instance the Explorer series. It was also close to NASA's facility in Langley, Virginia, which is where many of the scientists and engineers were located at the time; the other main location was Huntsville, Alabama, which was not that far away ...
2
Reading a bit "between the lines" of a NASA history book, it seems that Wallops was already the main workspace / location for PARD (Pilotless Aircraft Research Division) , part of NACA (National Advisory Committee for Aeronautics) . The PARD's experience at Wallops put the NACA in a very good position. Deriving accurate data using the rocket model ...
2
This let personnel observe the display screens shown at top in the cutaway diagram. Those displays mattered more than anything they could see through the windows.
Only top voted, non community-wiki answers of a minimum length are eligible
# Machine A and Machine B are each used to make 660 widgets.
Manager
Joined: 06 Jun 2012
Posts: 140
Machine A and Machine B are each used to make 660 widgets. [#permalink]
04 Aug 2013, 10:47
Difficulty: 85% (hard)

Question Stats: 62% (02:36) correct, 38% (02:48) wrong, based on 460 sessions
P.S. I searched but could not find this problem already on gmatclub. This problem took me forever to solve.

Machine A and Machine B are each used to make 660 widgets. It takes Machine A x hours longer to produce 660 widgets than Machine B. Machine B produces y% more widgets per hour than Machine A. How long does it take Machine A to make 660 widgets?
A. x + 100x/y
B. x + x/y
C. 100x + 100x/y
D. (x/100) + (660/y)
E. x + (660/y)
OA: A
Verbal Forum Moderator
Joined: 10 Oct 2012
Posts: 624
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
04 Aug 2013, 10:50
Something Similar: machine-a-and-machine-b-are-each-used-to-manufacture-98696.html?fl=similar
See if this helps.
VP
Status: Far, far away!
Joined: 02 Sep 2012
Posts: 1121
Location: Italy
Concentration: Finance, Entrepreneurship
GPA: 3.8
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
04 Aug 2013, 11:26
$$rate*time=work$$ so $$time=\frac{work}{rate}$$.
$$\frac{660}{A}-\frac{660}{B}=x$$ and $$B=A(\frac{100+y}{100})$$.
We have to find 660/A.
$$\frac{660}{A}-\frac{660}{A((100+y)/100)}=x$$
$$\frac{660}{A}(1-\frac{100}{100+y})=x$$
$$\frac{660}{A}(\frac{y}{100+y})=x$$
$$\frac{660}{A}=\frac{x(100+y)}{y}$$
$$\frac{660}{A}=x+\frac{100x}{y}$$
Hope it's clear
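The final identity $\frac{660}{A}=x+\frac{100x}{y}$ can be spot-checked numerically with a few lines of Python (the rate of A and the percentage y below are arbitrary picks for illustration):

```python
rate_a = 12.0  # widgets per hour for Machine A (arbitrary)
y = 37.5       # Machine B is y% faster (arbitrary)
rate_b = rate_a * (100 + y) / 100

time_a = 660 / rate_a  # hours for A to make 660 widgets -> 55.0
time_b = 660 / rate_b  # hours for B -> 40.0
x = time_a - time_b    # A takes x hours longer -> 15.0

choice_a = x + 100 * x / y  # answer choice (A) -> 55.0, equal to time_a
```

Any positive rate and percentage should reproduce the identity, since the derivation is purely algebraic.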
Intern
Joined: 27 Feb 2013
Posts: 4
Location: United States
GMAT 1: 610 Q37 V36
GPA: 3.4
WE: Information Technology (Consulting)
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
06 Aug 2013, 20:51
Nice problem. It took me a while to solve too, but word problems are my weak area so I need the practice. Anyway, I figured I would propose an alternative solution for those (like me) who aren't as good with the algebra: you can also pick numbers to solve. (This is my 1st post, so please bear with me.)
1) To piggy-back off of Zarrolou's setup, we have:
x = (660/A) - (660/B) and B = A(1 + y/100), where A and B here denote the machines' hourly rates.
2) From here you can pick numbers for A and B, noting that A should be smaller than B (since A is slower and takes more time to complete the job). Let's say A = 10 and B = 66.
3) Solving for x and y you get:
x= (660/10)-(660/66) = 66-10 = 56
66 = 10(1 + y/100); skipping some steps, if you solve for y you get y = 560.
Our target value is A's time: 660/A = 660/10 = 66
4) Now plug x and y into the answer choices and see which one gives you the target value (66):
A) x + 100x/y = 56 + 100(56)/560 = 56 + 10 = 66 **Bingo!!!**
I won't bother to write out the rest, but if you plug x and y into the remaining choices you get different values, so (A) is the right answer choice. To me this method is much easier to comprehend. I tried the algebra many times and still couldn't figure it out. To each his own. Regardless, there is no way I could do this in 2 minutes!!
Hope this helps.
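The plug-in check above can be scripted; a quick sketch (variable names are mine, the formulas are transcribed from the answer choices):

```python
# Plug the picked numbers (A's rate 10/hr, B's rate 66/hr) into every
# answer choice and see which one returns A's time of 66 hours.
work = 660
rate_a, rate_b = 10, 66
x = work / rate_a - work / rate_b        # 66 - 10 = 56
y = (rate_b / rate_a - 1) * 100          # B makes 560% more per hour
target = work / rate_a                   # 66 hours

choices = {
    'A': x + 100 * x / y,
    'B': x + x / y,
    'C': 100 * x + 100 * x / y,
    'D': x / 100 + work / y,
    'E': x + work / y,
}
matches = [k for k, v in choices.items() if abs(v - target) < 1e-6]
print(matches)  # ['A']
```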
Intern
Joined: 28 Sep 2014
Posts: 19
GMAT 1: 700 Q49 V35
WE: Information Technology (Consulting)
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
26 Oct 2014, 20:28
The best way to solve this problem within 2 minutes is to pick numbers:
Let the time taken by B to produce 660 widgets be t, and take t = 10 hours; also take x = 10 hours.
Then the time taken by A to produce 660 widgets is t + x = 20 hours.
The value of y is then 100%, because the per-hour production of B is 660/10 = 66 and the per-hour production of A is 660/20 = 33; therefore 66 = 33 + 33(y/100), so y = 100.
Now we know the time taken by A to produce 660 widgets is t + x = 20 hours. Substitute x = 10 and y = 100 into the options above to get 20 hours. Only option A satisfies this.
Intern
Joined: 22 Feb 2015
Posts: 8
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
09 May 2015, 03:51
Plugging in is a crucial time saver.
Take the rate of machine A as 10 widgets per hour and y as 10%, so B's rate is 10(1.1) = 11:
x = (660/10) - (660/11) = 66 - 60 = 6
Just plug the variables into the answers; only A returns A's time of 660/10 = 66 hours:
x + 100x/y = 6 + 100(6)/10 = 66. Thus A.
Manager
Joined: 05 Feb 2015
Posts: 67
Concentration: Finance, Entrepreneurship
Schools: ISB '16, IIMA , IIMB, IIMC
WE: Information Technology (Health Care)
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
09 May 2015, 09:26
Check out this approach; it took less than 1 minute.
Check the attachment first.
Both machines produce 660 widgets, so with W as A's hourly rate and H as B's time:
W(H + x) = W(1 + y/100)H
H = 100x/y
Machine A takes H + x hours,
i.e. 100x/y + x
Attachment: rate.png
Intern
Joined: 12 Dec 2015
Posts: 18
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
09 Aug 2016, 17:35
We don't need the crazy number 660 here. Let's say that A produces 5 widgets in 5 hours and B produces 5 widgets in one hour. So it takes A 4 hours longer to produce 5 widgets than B, and B produces 400% more than A in one hour; it takes A 5 hours to produce 5 widgets. x = 4, y = 400, and the answer is A!
Intern
Joined: 01 Feb 2016
Posts: 8
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
04 Oct 2016, 21:04
summer101 wrote:
P.S. I searched but could not find this problem already on gmatclub. This problem took me forever to solve
Machine A and Machine B are each used to make 660 widgets. It takes Machine A x hours longer to produce 660 widgets than Machine B. Machine B produces y% more widgets per hour than Machine A. How long does it take Machine A to make 660 widgets?
A. x + 100x/y
B. x + x/y
C. 100x + 100x/y
D. (x/100) + (660/y)
E. x + (660/y)
This is a typical VICs (Variables In Choices) problem.
Assume that x = 1; then Machine A takes 2 hrs and Machine B takes 1 hr.
Since B's time is 50% of A's, its hourly output is 100% more, so Machine B produces 100% (= y) more widgets per hour.
Put x = 1 and y = 100 into the answer choices; the option that gives you 2 (Machine A takes 2 hrs as assumed) is the right one.
In this case, A.
Hope this helps
Senior Manager
Joined: 03 Apr 2013
Posts: 292
Location: India
Concentration: Marketing, Finance
Schools: Simon '20
GMAT 1: 740 Q50 V41
GPA: 3
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
12 May 2017, 23:12
Bunuel wrote:
sidoknowia wrote:
Machine A and Machine B are each used to make 660 widgets. It takes Machine A x hours longer to produce 660 widgets than Machine B. Machine B produces
y% more widgets per hour than Machine A. How long does it take Machine A to make 660 widgets?
A) x + 100x/y
B) x + x/y
C) 100x + 100x/y
D) x/100 + 660/y
E) x + 660/y
Got this question in a Veritas Prep exam. The algebraic approach is too complex and time consuming; can someone suggest a different solution?
Please refer to the discussion above.
Hi Bunuel
Although I understand that the number-picking approach would be best here, I had to choose an algebraic approach, even though I prefer number picking because it saves time. This is because the number-picking approach sometimes backfires: more than one option can produce the same result. Please help me, and maybe many others here, by suggesting how the numbers should be picked and when this approach should and should not be used. It would help me so much.
Thank you
Manager
Joined: 02 Feb 2016
Posts: 89
GMAT 1: 690 Q43 V41
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
31 Jul 2017, 02:08
The number-picking approach may well be easy to apply and understand, but it is less reliable than the algebraic approach. It would be really helpful to know how the algebraic method applies to this question, as there isn't a good explanation of that yet.
Can someone help me with that?
Target Test Prep Representative
Affiliations: Target Test Prep
Joined: 04 Mar 2011
Posts: 1821
Re: Machine A and Machine B are each used to make 660 widgets. [#permalink]
01 Aug 2017, 16:07
Quote:
Machine A and Machine B are each used to make 660 widgets. It takes Machine A x hours longer to produce 660 widgets than Machine B. Machine B produces y% more widgets per hour than Machine A. How long does it take Machine A to make 660 widgets?
A. x + 100x/y
B. x + x/y
C. 100x + 100x/y
D. (x/100) + (660/y)
E. x + (660/y)
We can let b = the number of hours it takes Machine B to make the 660 widgets. Thus, the time in hours for Machine A to make the 660 widgets = x + b. Furthermore, the rate of Machine B = 660/b and the rate of Machine A = 660/(x + b).
Since Machine B produces y% more widgets per hour than Machine A:
[660/(x + b)](1 + y/100) = 660/b
[1/(x + b)](1 + y/100) = 1/b
1 + y/100 = (x + b)/b
1 + y/100 = x/b + 1
y/100 = x/b
100/y = b/x
100x/y = b
Thus, the time for Machine A to make the 660 widgets is x + b = x + 100x/y.
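A quick numeric sanity check of this parameterization, with arbitrarily picked numbers (b = 10 hours, y = 100):

```python
# Sanity check of b = 100x/y: pick B's time b and the percentage y,
# derive everything else, and confirm A's time equals x + 100x/y.
work = 660
b, y = 10.0, 100.0                 # B takes 10 hours; B is 100% faster per hour
rate_b = work / b                  # 66 widgets/hour
rate_a = rate_b / (1 + y / 100)    # 33 widgets/hour
time_a = work / rate_a             # 20 hours
x = time_a - b                     # A needs 10 extra hours

assert abs(b - 100 * x / y) < 1e-9            # b = 100x/y
assert abs(time_a - (x + 100 * x / y)) < 1e-9 # answer choice A
print(x, time_a)  # 10.0 20.0
```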
_________________
Jeffery Miller
The third category was cognitive control - how effectively you can check yourself in circumstances where the most natural response is the wrong one. A classic test is the Stroop Task, in which people are shown the name of a colour (let's say orange) written in a different colour (let's say purple). They're asked to read the word (which is easy, because our habitual response to a word is to read it) or to name the ink colour (which is harder, because our first impulse is to say "orange"). These studies presented a more mixed picture, but overall they showed some benefit "for most normal healthy subjects" - especially for people who had inherently poorer cognitive control.
Dark chocolate. Let's end with the good stuff. Dark chocolate has powerful antioxidant properties, contains several natural stimulants, including caffeine, which enhance focus and concentration, and stimulates the production of endorphins, which helps improve mood. One-half ounce to 1 ounce a day will provide all the benefits you need, says Kulze. This is one "superfood" where more is not better. "You have to do this one in moderation," says Kulze.
Also known as Arcalion or Bisbuthiamine and Enerion, Sulbutiamine is a compound of the Sulphur group and is an analogue to vitamin B1, which is known to pass the blood-brain barrier very easily. Sulbutiamine is found to circulate faster than Thiamine from blood to brain. It is recommended for patients suffering from mental fatigue caused due to emotional and psychological stress. The best part about this compound is that it does not have most of the common side effects linked with a few nootropics.
Choline is very important for cognitive function because it is a precursor to Acteylcholine. Your body needs enough choline to convert into Acteylcholine to keep your brain healthy. For this reason, choline supplements are often considered great nootropics, even by themselves. CDP-Choline and Alpha GPC are the best sources for supplemental Choline.
That’s why adults aren’t as crazy as teenagers, because adult brains aren’t as sensitive or reactive to external factors and experience teaches us to know better. That’s the potential danger with a drug like this. You return your brain to a state when you can learn a lot easier because you are ultra-sensitive to all stimuli in your environment, but it also makes it easier for that stimuli to affect you, for better or worse. The worst case scenario? You take this drug to be smarter but your personality can be destroyed by external stresses- it’s like being an emotional mess and losing yourself in high school again.
This supplement is dangerous and should not be sold. I have taken brain supplements for a while and each of them are very similar, EXCEPT for Addium. On the day I took Addium many blood vessels in my hands burst, and two on my face burst. With their "proprietary blend" not being detailed as to the amount of each ingredient (only listed in the aggregate of 500mg Proprietary Blend) you have no way of determining which ingredient may or may not be too much. I can only recommend to stay away from this supplement.
Pre and Post-Natal Depression are both complex conditions that can have multifactorial underlying drivers, including genetic and environmental influences. These are currently poorly investigated and the gold standard of treatment is often medication to help stabilise mood. Whilst SSRIs and other types of antidepressants have proven to be helpful for many, they do not address potential causes or drivers of poor mental health and can often mask symptoms. Antidepressants are also not regularly recommended during pregnancy, which is why being more mindful of nutrition and lifestyle habits can be a safer option for you and your baby. There are some natural, evidence-based steps you can take to help support optimal mental wellbeing:
The nootropic sulbutiamine, of the synthetic B-vitamin-derived nootropics family, is generally considered a low-risk supplement; however, some users have reported that the supplement has addictive qualities. While there is no firm evidence of sulbutiamine addiction, the risk may increase at high dosages. For instance, users who consume this supplement for 10 consecutive days may experience withdrawal for two to five days. There are also increased risks when sulbutiamine is taken with antipsychotic medications.[8]
Notice that poor diet is not on the list. They recommend active treatment of hypertension, more childhood education, exercise, maintaining social engagement, reducing smoking, and management of hearing loss, depression, diabetes, and obesity. They do not recommend specific dietary interventions or supplements. They estimate that lifestyle interventions “might have the potential to delay or prevent a third of dementia cases.”
I stayed up late writing some poems and about how [email protected] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃)
Along with a great formula, Brainol offers real value in their package deals. Brainol extends discounts of $280 if you order 6 bottles; this is an incredible, sensible, cost-saving option. Positive customer feedback and testimonials demonstrate the huge numbers of satisfied customers. Consumers can feel very confident in this brain-boosting product as it offers a 100% money-back guarantee. Brainol is formulated in a laboratory that is GMP certified. This means that the company is held to very strict standards and high-quality assurance.
Though coffee gives instant alertness and many cups of the beverage are downed throughout the day, the effect lasts only for a short while. People who drink coffee every day may develop caffeine tolerance; this is the reason why it is still important to control your daily intake. It is advisable that an individual should not consume more than 300mg of coffee a day. Caffeine, the world's favourite nootropic, has very few side effects, but consumed in abnormally high amounts it can result in nausea, restlessness, nervousness and hyperactivity. This is the reason why people who need increased sharpness would rather take L-theanine, or some other nootropic, along with caffeine. Today, you can find various smart drugs that contain caffeine. OptiMind, one of the best and most sought-after nootropics in the U.S. containing caffeine, is considered more effective and efficient when compared to other focus drugs present in the market today.
Your brain is essentially a network of billions of neurons connected by synapses. These neurons communicate and work together through chemicals known as neurotransmitters. When neurotransmitters are able to send signals more efficiently, you experience improved concentration, better memory, mood elevation, increased processing ability for mental work, and longer attention spans.
Difficulty concentrating. As mentioned previously, this may not be a direct result of age, though it can be a common side effect of struggling with fatigue and brain fog. When it takes more mental energy to think, it is harder to stay with it for a long time. Many of us are also surrounded by distractions clambering for our limited attention. Modern life is fast-paced, stressful, and overcrowded.
For obvious reasons, it's difficult for researchers to know just how common the "smart drug" or "neuro-enhancing" lifestyle is. However, a few recent studies suggest cognition hacking is appealing to a growing number of people. A survey conducted in 2016 found that 15% of University of Oxford students were popping pills to stay competitive, a rate that mirrored findings from other national surveys of UK university students. In the US, a 2014 study found that 18% of sophomores, juniors, and seniors at Ivy League colleges had knowingly used a stimulant at least once during their academic career, and among those who had ever used uppers, 24% said they had popped a little helper on eight or more occasions. Anecdotal evidence suggests that pharmacological enhancement is also on the rise within the workplace, where modafinil, which treats sleep disorders, has become particularly popular.
Rather than cause addiction, the nootropic choline may help to treat this illness. Choline helps to increase dopamine levels. In cocaine users, for instance, dopamine levels are lowered. Taking choline potentially helps those recovering from cocaine abuse to feel better and experience fewer cravings. Research in this area is limited, but it is promising.[9]
##### A picture is worth a thousand words, particularly in this case where there seems to be temporal effects, different trends for the conditions, and general confusion.
So, I drag up 2.5 years of MP data (for context), plot all the data, color by magnesium/non-magnesium, and fit different LOESS lines to each as a sort of smoothed average (since categorical data is hard to interpret as a bunch of dots), which yields:
Professor David O Kennedy published a book in 2014 called Plants and the Human Brain. In his book he summarizes the last 15 years of research into cognitive nutrition, including the work he's done with colleagues at the Brain Performance Nutrition Research Center at Northumbria University. It's a great read and a good guide to what sorts of herbs and other plants to include in our weekly diet, and it is all based on hard science rather than mere assertion or trendy but unsubstantiated beliefs.
Remembering what Wedrifid told me, I decided to start with a quarter of a piece (~1mg). The gum was pretty tasteless, which ought to make blinding easier. The effects were noticeable around 10 minutes - greater energy verging on jitteriness, much faster typing, and apparent general quickening of thought. Like a more pleasant caffeine. While testing my typing speed in Amphetype, my speed seemed to go up >=5 WPM, even after the time penalties for correcting the increased mistakes; I also did twice the usual number without feeling especially tired. A second dose was similar, and the third dose, at 10 PM before playing Ninja Gaiden II, seemed to stop the usual exhaustion I feel after playing through a level or so. (It's a tough game, which I have yet to master like Ninja Gaiden Black.) Returning to the previous concern about sleep problems: though I went to bed at 11:45 PM, it still took 28 minutes to fall asleep (compared to my more usual 10-20 minute range); the next day I use 2mg from 7-8PM while driving, going to bed at midnight, where my sleep latency is a more reasonable 14 minutes. I then skipped for 3 days to see whether any cravings would pop up (they didn't).
I subsequently used 1mg every few days for driving or Ninja Gaiden II, and while there were no cravings or other side-effects, the stimulation definitely seemed to get weaker - benefits seemed to still exist, but I could no longer describe any considerable energy or jitteriness.
One curious thing that leaps out looking at the graphs is that the estimated underlying standard deviations differ: the nicotine days have a strikingly large standard deviation, indicating greater variability in scores - both higher and lower, since the means weren't very different. The difference in standard deviations is just 6.6% below 0, so the difference almost reaches our usual frequentist levels of confidence too, which we can verify by testing:
Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of $20\% \times \frac{1}{\text{dozens}}$ of being iodine! I may be unduly optimistic if I give this as much as 10%.
After I ran out of creatine, I noticed the increased difficulty, and resolved to buy it again at some point; many months later, there was a Smart Powders sale so I bought it in my batch order, $12 for 1000g. As before, it made Taekwondo classes a bit easier. I paid closer attention this second time around and noticed that as one would expect, it only helped with muscular fatigue and did nothing for my aerobic issues. (I hate aerobic exercise, so it's always been a weak point.) I eventually capped it as part of a sulbutiamine-DMAE-creatine-theanine mix. This ran out 1 May 2013.
In March 2014, I spent $19 for 1kg of micronized creatine monohydrate to resume creatine use and also to use it as a placebo in a honey-sleep experiment testing Seth Roberts's claim that a few grams of honey before bedtime would improve sleep quality: my usual flour placebo being unusable because the mechanism might be through simple sugars, which flour would digest into. (I did not do the experiment: it was going to be a fair amount of messy work capping the honey and creatine, and I didn't believe Roberts's claims for a second - my only reason to do it would be to prove the claim wrong, but he'd just ignore me and no one else cares.) I didn't try measuring out exact doses but just put a spoonful in my tea each morning (creatine is tasteless). The 1kg lasted from 25 March to 18 September, or 178 days, so ~5.6g and $0.11 per day.
Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want.
-Caviar contains a unique blend of nutrients that are perfect for the brain, including omega-3 fats (a brain-must), choline (a B vitamin needed to make memories), vitamin B6 and B12 (needed to support the nervous system), minerals like iron and magnesium (needed for healthy blood and tissues) and a good amount of protein combined with potent antioxidants like vitamin A, vitamin C, and selenium. [Because] caviar [can be] expensive, fatty fish would be my recommended alternative, especially Alaskan salmon [and] mackerel, bluefish, sardines [and] anchovies [to get the] omega-3’s your brain needs.
Alpha Lipoic Acid is a vitamin-like chemical filled with antioxidant properties, that naturally occur in broccoli, spinach, yeast, kidney, liver, and potatoes. The compound is generally prescribed to patients suffering from nerve-related symptoms of diabetes because it helps in preventing damage to the nerve cells and improves the functioning of neurons.
One of the other suggested benefits is for boosting serotonin levels; low levels of serotonin are implicated in a number of issues like depression. I’m not yet sure whether tryptophan has helped with motivation or happiness. Trial and error has taught me that it’s a bad idea to take tryptophan in the morning or afternoon, however, even smaller quantities like 0.25g. Like melatonin, the dose-response curve is a U: ~1g is great and induces multiple vivid dreams for me, but ~1.5g leads to an awful night and a headache the next day that was worse, if anything, than melatonin. (One morning I woke up with traces of at least 7 dreams, although I managed to write down only 2. No lucid dreams, though.)
Still, putting unregulated brain drugs into my system feels significantly scarier than downing a latte or a Red Bull—not least because the scientific research on nootropics’ long-term effects is still so thin. One 2014 study found that Ritalin, modafinil, ampakines, and other similar stimulants could eventually reduce the “plasticity” of some of the brain’s neural networks by providing them with too much dopamine, glutamate and norepinephrine, and potentially cause long-term harm in young people whose brains were still developing. (In fact, in young people, the researchers wrote, these stimulants could actually have the opposite effect the makers intended: “Healthy individuals run the risk of pushing themselves beyond optimal levels into hyperdopaminergic and hypernoradrenergic states, thus vitiating the very behaviors they are striving to improve.”) But the researchers found no evidence that normal doses of these drugs were harmful when taken by adults.
If you are looking for a way to maximize brain power I have come across a great product named Brain Abundance. Here are a list of the ingridients, folic acid, grape seed extract, L-Glutamine, phenylalanine, sensoril, rhodiola, vitamin b-12, astaxanthin, niacinamide, zinc picolinate, resveratrol, vitamin b-6, ginseng. I have personally taken this product and have had great results with the following: cognitive function, healthy memory, stress and anxiety, positive mood and mind, better sleep, focus and mental clarity, and much more. Feel free to find out more information at:
But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures.
Phillips told me that, much as he believes in neuroenhancers, he did not want to be "the poster boy for smart-in-a-pill". At one point, he said: "We really don't know the possible implications for long-term use of these things." (He recently stopped taking Provigil every day, replacing it with another prescription stimulant.) Nor does he think we need to be turning up the crank another notch on how hard we work. "But," he said, "the baseline competitive level is going to reorientate around what these drugs make possible, and you can choose to compete or not."
Taking these drugs without a doctor’s supervision can be dangerous. There are interactions and contraindications that can cause serious problems. These drugs should not be used if you drink alcohol or take an antidepressant. (50) The possibility of adverse drug reactions should not be taken lightly. By some calculations, adverse drug reactions are now the fourth leading cause of death in the US. (51)
The next cheap proposition to test is that the 2ml dose is so large that the sedation/depressive effect of nicotine has begun to kick in. This is easy to test: take much less, like half a ml. I do so two or three times over the next day, and subjectively the feeling seems to be the same - which seems to support that proposition (although perhaps I’ve been placebo effecting myself this whole time, in which case the exact amount doesn’t matter). If this theory is true, my previous sleep results don’t show anything; one would expect nicotine-as-sedative to not hurt sleep or improve it. I skip the day (no cravings or addiction noticed), and take half a ml right before bed at 11:30; I fall asleep in 12 minutes and have a ZQ of ~105. The next few days I try putting one or two drops into the tea kettle, which seems to work as well (or poorly) as before. At that point, I was warned that there were some results that nicotine withdrawal can kick in with delays as long as a week, so I shouldn’t be confident that a few days off proved an absence of addiction; I immediately quit to see what the week would bring. 4 or 7 days in, I didn’t notice anything. I’m still using it, but I’m definitely a little nonplussed and disgruntled - I need some independent source of nicotine to compare with!
The greatly increased variance, but only somewhat increased mean, is consistent with nicotine operating on me with an inverted U-curve for dosage/performance (or the Yerkes-Dodson law): on good days, 1mg nicotine is too much and degrades performance (perhaps I am overstimulated and find it hard to focus on something as boring as n-back) while on bad days, nicotine is just right and improves n-back performance.
A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex and old-fashioned reductionist science has a really hard time with complexity. Big companies spend hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo - and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia, currently has 42 separate synergistic nootropic ingredients, from alpha GPC to bacopa monnieri and L-theanine). That kind of complex, multi-pathway input requires a different methodology to understand well, one that goes beyond simply what's put in capsules.
# Mixed model idea and Bayesian method
In mixed model, we assume the random effects (parameters) are random variables that follow normal distributions. It looks very similar to the Bayesian method, in which all the parameters are assumed to be random.
So is the random effect model kind of special case of Bayesian method?
This is a good question. Strictly speaking, using a mixed model does not make you Bayesian. Imagine estimating each random effect separately (treating it as a fixed effect) and then looking at the resulting distribution. This is "dirty," but conceptually you have a probability distribution over the random effects based on a relative frequency concept.
But if, as a frequentist, you fit your model using full maximum likelihood and then wish to "estimate" the random effects, you've got a little complication. These quantities aren't fixed like your typical regression parameters, so a better word than "estimation" would probably be "prediction." If you want to predict a random effect for a given subject, you're going to want to use that subject's data. You'll need to resort to Bayes' rule, or at least the notion that $$f(\beta_i | \mathbf{y}_i) \propto f(\mathbf{y}_i | \beta_i) g(\beta_i).$$ Here the random effects distribution $g()$ works essentially like a prior. And I think by this point, many people would call this "empirical Bayes."
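To make the prediction step concrete: in the normal one-way case, the posterior mean shrinks a subject's raw average toward the overall mean, with the random-effects distribution acting as the prior. A minimal numeric sketch (all values illustrative, not from the post):

```python
# Empirical-Bayes style prediction of a group mean mu_i:
# posterior mean = mu + shrink * (ybar_i - mu), where
# shrink = sigma_b^2 / (sigma_b^2 + sigma_w^2 / J).
mu = 50.0            # grand mean (the "prior" mean for mu_i)
sigma_b2 = 4.0       # between-group variance
sigma_w2 = 16.0      # within-group variance
J = 4                # observations in group i
ybar_i = 58.0        # observed group average

shrink = sigma_b2 / (sigma_b2 + sigma_w2 / J)   # 4 / (4 + 4) = 0.5
pred = mu + shrink * (ybar_i - mu)              # 50 + 0.5 * 8
print(pred)  # 54.0
```

The prediction lands strictly between the grand mean and the raw group average - the "borrowing strength" effect of treating $g()$ as a prior.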
To be a true Bayesian, you would not only need to specify a distribution for your random effects, but distributions (priors) for each parameter that defines that distribution, as well distributions for all fixed effects parameters and the model epsilon. It's pretty intense!
• Really clear, straightforward answer. – D L Dahly Dec 15 '13 at 10:46
• @baogorek - a fairly robust default is Cauchy priors for fixed effects and half cauchy for variance parameters - not that "intense" - it just looks like penalised likelihood – probabilityislogic Dec 15 '13 at 13:12
Random effects are a way to specify a distributional assumption by using conditional distributions. For example, the random one-way ANOVA model is: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\mu_i, \sigma^2_w), \quad j=1,\ldots,J, \qquad \mu_i \sim_{\text{iid}} {\cal N}(\mu, \sigma^2_b), \quad i=1,\ldots,I.$$ And this distributional assumption is equivalent to $$\begin{pmatrix} y_{i1} \\ \vdots \\ y_{iJ} \end{pmatrix} \sim_{\text{iid}} {\cal N}\left(\begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix}, \Sigma\right), \quad i=1,\ldots,I$$ where $\Sigma$ has an exchangeable structure (with diagonal entry $\sigma^2_b+\sigma^2_w$ and covariance $\sigma^2_b$). To Bayesianify the model, you need to assign prior distributions on $\mu$ and $\Sigma$.
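The stated equivalence can be checked by simulation; a sketch with numpy (sample sizes and variances are arbitrary choices):

```python
# Simulate the random one-way ANOVA model and verify the implied
# exchangeable covariance: diagonal sigma_b^2 + sigma_w^2,
# off-diagonal sigma_b^2.
import numpy as np

rng = np.random.default_rng(0)
I, J = 200_000, 3
sigma_b, sigma_w = 1.0, 1.0

mu_i = rng.normal(0.0, sigma_b, size=(I, 1))       # group effects mu_i
y = mu_i + rng.normal(0.0, sigma_w, size=(I, J))   # y_ij given mu_i

S = np.cov(y, rowvar=False)   # empirical J x J covariance across groups
print(np.round(S, 2))
# diagonal entries should be near sigma_b^2 + sigma_w^2 = 2,
# off-diagonal entries near sigma_b^2 = 1
```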
If you're talking in terms of reproducing the same answers, then the answer is yes. The INLA (google "inla bayesian") computational method for Bayesian GLMMs, combined with a uniform prior for the fixed effects and variance parameters, basically reproduces the EBLUP/EBLUE outputs under the "simple plug-in" Gaussian approximation, where the variance parameters are estimated via REML.
I don't think so, I consider it part of the likelihood function. It's similar to specifying the error term follows a Normal distribution in a regression model, or a certain binary process can be modeled using a logistic relationship in a GLM.
Since no prior information, or distributions, are used I do not consider it Bayesian.
• No prior information used hey? How did you specify the functional form for the likelihood function then? :-D – probabilityislogic Dec 15 '13 at 12:18
• Some people argue that the distinction between likelihood and prior is somewhat artificial. – Christoph Hanck Apr 7 '15 at 4:34
|
|
Hardcover | $28.00 Short | £19.95 | ISBN: 9780262019651 | 224 pp. | 6 x 9 in | 5 figures | August 2013
Ebook | $20.00 Short | ISBN: 9780262317009 | 224 pp. | 6 x 9 in | 5 figures | August 2013
# Monitoring Movements in Development Aid
Recursive Partnerships and Infrastructures
## Overview
In Monitoring Movements in Development Aid, Casper Jensen and Brit Winthereik consider the processes, social practices, and infrastructures that are emerging to monitor development aid, discussing both empirical phenomena and their methodological and analytical challenges. Jensen and Winthereik focus on efforts by aid organizations to make better use of information technology; they analyze a range of development aid information infrastructures created to increase accountability and effectiveness. They find that constructing these infrastructures is not simply a matter of designing and implementing technology but entails forging new platforms for action that are simultaneously imaginative and practical, conceptual and technical.
After presenting an analytical platform that draws on science and technology studies and the anthropology of development, Jensen and Winthereik present an ethnography- based analysis of the mutually defining relationship between aid partnerships and infrastructures; the crucial role of users (both actual and envisioned) in aid information infrastructures; efforts to make aid information dynamic and accessible; existing monitoring activities of an environmental NGO; and national-level performance audits, which encompass concerns of both external control and organizational learning.
Jensen and Winthereik argue that central to the emerging movement to monitor development aid is the blurring of means and ends: aid information infrastructures are both technological platforms for knowledge about aid and forms of aid and empowerment in their own right.
|
|
# Can we continually factorize an expression like $x+y$?
I have a question that, for lack of familiarity or understanding of the relevant fields, I'm not quite sure how to formulate, so I'll just start off with an example and list some questions as I go. As the title suggests, it has to do with factorizing an otherwise irreducible expression into "polynomials" of algebraic degree.
Consider the binomial $x+y$. If I'm reading the definition correctly, then this is certainly an irreducible polynomial since we can't decompose it into polynomial pieces. We can $(\spadesuit)$, however, factorize it to obtain $$x+y=\left(x^{1/2}\right)^2-\left(i y^{1/2}\right)^2=\left(x^{1/2}+iy^{1/2}\right)\left(x^{1/2}-iy^{1/2}\right)$$
$(1)$ Under what circumstances is $(\spadesuit)$ a "legal move"? Do I need to be specific about what $\left(a^{1/n}\right)^n$ might mean?
$(2)$ Is there a name for this kind of "reducibility"?
We can continue in this manner to obtain the next iteration, $$x+y=\left(x^{1/4}\color{red}\pm i^{3/2}y^{1/4}\right)\left(x^{1/4}\color{red}\pm i^{1/2}y^{1/4}\right)$$ followed by $$x+y=\left(x^{1/8}\color{red}\pm i^{7/4}y^{1/8}\right)\left(x^{1/8}\color{red}\pm i^{5/4}y^{1/8}\right)\left(x^{1/8}\color{red}\pm i^{3/4}y^{1/8}\right)\left(x^{1/8}\color{red}\pm i^{1/4}y^{1/8}\right)$$ and so on, where I use $\color{red}\pm$ to mean $(a\color{red}\pm b)=(a+b)(a-b)$ just to save space.
$(3)$ Same as $(1)$ but for successive iterations.
Barring any errors thus far, we can arrive at a compact form relying on product notation: $$\large \prod_{\substack{n\in\mathbb{N}\\[.5ex]1\le r\le 2^{n-1}}}\left(x^{2^{-n}}\color{red}\pm i^{(2r-1)2^{-(n-1)}}y^{2^{-n}}\right)$$
$(4)$ Is there anything about this kind of manipulation that is blatantly wrong or otherwise doesn't hold up?
$(5)$ What fields of math, if any, would be interested in studying this kind of factorization?
In addition to being unsure how to frame this question properly, I'm also confuddled as to how to tag this question. Any suggestions/edits would be appreciated.
• I think you're basically showing that the ring $\bigcup_{n=1}^\infty \mathbb{C}[x^{1/n}, y^{1/n}]$ is not noetherian. – André 3000 Jul 7 '16 at 2:11
• @SpamIAm I'd replace $n$ by $2^n$ in order to be sure that the union forms a ring. – user26857 Jul 7 '16 at 6:09
• @user26857 Yeah, you're right. Really I should probably say the direct limit $\varinjlim_{n} \mathbb{C}[x^{1/n}, y^{1/n}]$ where $\mathbb{C}[x^{1/m}, y^{1/m}] \hookrightarrow \mathbb{C}[x^{1/n}, y^{1/n}]$ if $m \mid n$. – André 3000 Jul 7 '16 at 14:30
• All this notation is making my head spin... But I gather ring theory is something I should be looking into. – user170231 Jul 7 '16 at 14:58
Essentially you are asking about roots of unity. That is, for any positive integer $$n>0$$ there is a factorization $$x^n-1 = \prod_{k=0}^{n-1} (x-\zeta_n^k) \tag{1}$$ where $$\,\{\zeta_n^0,\zeta_n^1,\dots,\zeta_n^{n-1}\}$$ are the $$n$$-th roots of unity. Since $$(x-1)(x+1)=x^2-1,$$ we have a similar factorization $$x^n+1 = \prod_{k=0}^{n-1} (x-\zeta_{2n}^{2k+1}) \tag{2}$$ where $$\,\{\zeta_{2n}^1,\zeta_{2n}^3,\dots,\zeta_{2n}^{2n-1}\}\,$$ are the $$2n$$-th roots of unity which are not $$n$$-th roots of unity. Now $$x+y = \prod_{k=0}^{n-1} (x^{1/n}-\zeta_{2n}^{2k+1}y^{1/n}). \tag{3}$$
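Factorization $(3)$ can be sanity-checked numerically. A small Python sketch, taking the principal real $n$-th roots of positive $x$ and $y$:

```python
import cmath

def factored_product(x, y, n):
    """prod_{k=0}^{n-1} (x^(1/n) - zeta_{2n}^(2k+1) * y^(1/n)),
    with zeta_{2n} = exp(i*pi/n); by (3) this should equal x + y."""
    t, s = x ** (1.0 / n), y ** (1.0 / n)
    prod = 1 + 0j
    for k in range(n):
        zeta = cmath.exp(1j * cmath.pi * (2 * k + 1) / n)
        prod *= t - zeta * s
    return prod

print(factored_product(2.0, 3.0, 4))  # approximately (5+0j), i.e. x + y
```

The imaginary parts of the conjugate factors cancel, so the product comes out (numerically) real and equal to $x+y$ for any $n$.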
The trick here is that an expression containing powers of the variable which are not nonnegative integers is not a polynomial. Polynomials, by definition, at least in a single variable, are expressions of the form
$$a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0$$
with $$a_n \ne 0$$ (unless $$n = 0$$), where $$n$$ is some nonnegative integer called the polynomial's degree. Thus, since the powers are only the nonnegative integers from $$0$$ to $$n$$, no polynomial can contain any power that is not a nonnegative integer, which rules out both negative powers and fractional powers. In particular,
$$\frac{1}{x}$$
is not a polynomial, nor is
$$x^{1/2}$$
and thus neither is the expression in terms of the "factors" you give. It is a valid "factorization" among a more general class of expressions than polynomials - but not among the polynomials themselves. This is similar to how a number like $$3$$ has no non-trivial factorization in terms of other integers, but if we move to more general forms of number, like rational and real numbers, then it can admit such factorizations, and in fact infinitely many.
One possible generalization of a regular polynomial, which still retains a lot of their structured nature, is what could be called a "Puiseux series" and its finite form, the Puiseux polynomial, in which in addition to the degree $$n$$ we also have a denominator $$N$$ in the exponent and define it as
$$a_n x^{n/N} + a_{n-1} x^{(n-1)/N} + \cdots + a_2 x^{2/N} + a_1 x^{1/N} + a_0$$
with the same conditions for the coefficients, and likewise for more than one variable by adding suitable terms. In this case, $$x + y$$ is a Puiseux polynomial of two variables with $$N = 1$$, and it factors into Puiseux polynomials with $$N = 2$$.
|
|
# Tag Info
1
This is wholly analogous to the evanescent optical field that arises in the classically (i.e. computed by raytracing) forbidden region beyond a totally internally reflecting interface between two optical mediums. I analyse this situation in my answer here and there is also a great plot of the situation in Ruslan's answer here. Let's think of a 1D barrier ...
0
I'm not quite sure what your goal is. If the user specifies both $p$ and $q$, you end up with one value for the energy, rather than a plot. For a proper plot you'd have to allow for a range of values $p,q\in\mathbb{Z}$. This will of course result in a 3d graph. You may cut that down to 2d, like in the plot you posted, but you'd have to give some relation ...
1
In vacuum (or everywhere else, really), Coulomb's law takes the form $\boldsymbol\nabla \cdot \mathbf{E} = \frac{\rho}{\epsilon}$, whereas in a polarizable material it is convenient to use $\boldsymbol\nabla \cdot \mathbf{D} = \rho_\mathrm{free}$. The $4\pi$ vs $\epsilon$ has more to do with units. As for the sign, can you give a reference?
4
What is the electrical potential difference and why we have to talk about a difference and not about the electrical potential itself? Mathematically, the reason is that the force is proportional the gradient of a (not the) potential function. $$\vec F = -\nabla \phi$$ Note that a potential that differs by an additive constant $$\phi' = \phi + C$$ ...
1
Apparently these interpretations are deduced from equation (1), which doesn't hold in non-extensive systems. So it seems that these potentials are ,inherently, extensive quantities, and have no meaning in non-extensive systems. I do not think that these interpretations depend on (1). They can be derived without (1) or the homogeneous property of ...
0
Correct, your equation (1) does not hold in non-extensive systems. Also, none of the equations hold for a galaxy since it's not in equilibrium. Supposing we do have equilibrium, though, equilibrium statistical mechanics is exactly the tool we need to extend these quantities to non-extensive and microscopic systems. This was one of Gibbs' main goals in his ...
3
(this is a partial answer) One example is the preceding work Barrett, J. W. "The asymmetric monopole and non-newtonian forces." Nature 341.6238 (1989): 131-132. doi:10.1038/341131a0 which is Ref. 13 in the Connes et al. paper. This paper contains one example of asymmetric monopole produced by rotating figure shown about the horizontal axis passing ...
0
In one dimension, motion in a Coulomb-like attracting potential $U=-\frac{\alpha}{|x|}$ is quite ill-defined. I'll now try to explain this for classical particles, then will say some words about the quantum case. Let's consider two cases: 3D motion of a particle nearly free falling into the attracting center, but with nonzero angular momentum $L$. In this ...
0
One way to argue about the so-called observations is the sequence of logical implications: Bigger hands$\rightarrow$Bigger bones$\rightarrow$Bigger muscles$\rightarrow$More strength, and therefore we can say, to some extent, that people with bigger hands may have an advantage.
|
|
At any time, you can load and serve an existing Genie app. Loading a Genie app will bring into scope all your app's files, including the main app module, controllers, models, etcetera.
## Starting a Genie REPL session MacOS / Linux
The recommended approach is to start an interactive REPL in Genie's environment by executing bin/repl in the OS shell, while in the project's root folder.
$ bin/repl

The app's environment will be loaded. In order to start the web server, you can next execute:

julia> up()

If you want to directly start the server, use bin/server instead of bin/repl:

$ bin/server

This will automatically start the web server in non-interactive mode.

Finally, there is the option to start the server and drop to an interactive REPL, using bin/serverinteractive instead.
## Starting a Genie REPL on Windows
On Windows the workflow is similar to macOS and Linux, but dedicated Windows scripts, repl.bat, server.bat, and serverinteractive.bat, are provided inside the project folder, within the bin/ directory. Double-click them or execute them in the OS shell (cmd or PowerShell) to start an interactive REPL session, a non-interactive server, or a server with an interactive REPL, respectively, as explained in the previous paragraphs.
It is possible that the Windows executables repl.bat, server.bat, and serverinteractive.bat are missing - this is usually the case if the app was generated on Linux/macOS and ported to a Windows computer. You can create them at any time by running this generator in the Genie/Julia REPL (at the root of the Genie project):
julia> using Genie
julia> Genie.Generator.setup_windows_bin_files()
Alternatively, you can pass the path to the project as the argument to setup_windows_bin_files:
julia> Genie.Generator.setup_windows_bin_files("path/to/your/Genie/project")
## Juno / Jupyter / other Julia environment
For Juno, Jupyter, and other interactive environments, first make sure that you cd into your app's project folder.
We will need to make the local package environment available:
using Pkg
Pkg.activate(".")
Then:
using Genie
Genie.loadapp()
In order to load a Genie app within an open Julia REPL session, first make sure that you're in the root dir of a Genie app. This is the project's folder and you can tell by the fact that there should be a bootstrap.jl file, plus Julia's Project.toml and Manifest.toml files, amongst others. You can julia> cd(...) or shell> cd ... your way into the folder of the Genie app.
Next, from within the active Julia REPL session, we have to activate the local package environment:
julia> ] # enter pkg> mode
pkg> activate .
Then, back to the julian prompt, run the following to load the Genie app:
julia> using Genie
julia> Genie.loadapp()
The app's environment will now be loaded.
In order to start the web server execute
julia> startup()
The recommended way to load an app is via the bin/repl, bin/server and bin/serverinteractive commands. They will correctly start the Julia process and the app REPL with all the dependencies loaded, with just one command.
|
|
1. ## Related rates problem
Hello, I'm working on a related rates problem that I can't seem to get. It should be fairly easy, but trying to piece the values together just isn't working out for me.
The question is:
A railroad bridge is 20 m above, and at right angles to, a river. A person in a train travelling at 60 km/h passes over the centre of the bridge at the same instant that a person in a motorboat travelling at 20 km/h passes under the centre of the bridge. How fast are the two people separating 10 seconds later?
Thank you in advance for any help!
2. This is done like other Pythagoras related rates problems, only you have a 3-dimensional component.
$D^{2}=x^{2}+y^{2}+20^{2}$
Differentiate:
$D\frac{dD}{dt}=x\frac{dx}{dt}+y\frac{dy}{dt}$
Now, you have dx/dt, dy/dt, and you can find D once you know x and y. Find x and y after each has traveled 10 seconds. For instance, at 60 km/h, how far does the train travel in 10 seconds? That could be your x. Carry on?
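Carrying that plan through numerically, a short Python sketch (train at 60 km/h along x, boat at 20 km/h along y, with the constant 20 m height):

```python
import math

train = 60 / 3.6  # dx/dt in m/s
boat = 20 / 3.6   # dy/dt in m/s
t = 10            # seconds elapsed

x, y, h = train * t, boat * t, 20.0
D = math.sqrt(x**2 + y**2 + h**2)

# from D * dD/dt = x * dx/dt + y * dy/dt (the 20^2 term differentiates to 0)
dDdt = (x * train + y * boat) / D
print(round(dDdt, 2), round(dDdt * 3.6, 1))  # about 17.46 m/s, i.e. 62.8 km/h
```

So the two people are separating at roughly 62.8 km/h ten seconds after passing the centre of the bridge.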
|
|
# How do I sine a variable?
• I'm trying to figure out a formula to sine a variable from 0 to 500, then back to 0 repeatedly in a sine shape; preferably in a single line. Does anyone have a straight-forward way to do this?
• Since Sin() goes from 0 to 1 to 0 to -1, maybe abs(Sin()*500) would fit your need ?
• How would you control the speed, though?
• As Semoreh says, the sine function returns a value between -1 and 1. The input to the function repeats every 2 Pi because of its relation to Pi and a circle. However, if you tweak the input and output a little you can change them to the ranges you want.
Firstly the input: if you wanted it to repeat every 20 seconds instead of every 6.28 ( 2 * Pi ) seconds, what you need to do is "normalise" your input by dividing it by the desired period, time / 20; this gives you a value between 0 and 1 per cycle. Then scale that up to a full circle by multiplying by 2 * Pi: (time / 20) * (2 * Pi), or simplified, (time / 10) * Pi.
Secondly the output range. You only want a positive value, but there are actually 2 ways of doing this, depending on how you want it to behave. An easy option is to use the absolute value to make it positive, like so: abs(sin(time)). But remember that sine is a curve; what you've effectively done is place a mirror along the x-axis, hence it will be rounded at the top and spiky at the bottom. In animation this gives you a bouncing-ball effect: accelerating as it falls, and decelerating as it climbs. The other option is to offset the curve so that it is always positive, 1 + sin(time), but this changes the range to between 0 and 2. We wanted an output of 0 to 500, so we need to halve the output and then scale it by 500: ((1+sin(time)) / 2) * 500, or simplified, (1+sin(time)) * 250.
Let's combine those 2 parts together:
250 * (1 + sin((time / 10) * Pi))
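A quick way to sanity-check the combined formula (here written as a Python sketch) is to confirm the 0 to 500 range and the 20-second period:

```python
import math

def wave(t):
    # 250 * (1 + sin((t / 10) * pi)): range 0..500, period 20 s
    return 250 * (1 + math.sin((t / 10) * math.pi))

print(wave(0))   # 250.0  (midpoint)
print(wave(5))   # 500.0  (peak)
print(wave(15))  # ~0.0   (trough)
print(abs(wave(3.7) - wave(23.7)) < 1e-9)  # True: repeats every 20 s
```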
When testing things like this it's useful to actually visualise the results on a graph. I recommend trying it out on a graphing tool like Desmos.
Here's some of the formulas for the expressions I mentioned, you should just be able to paste them into Desmos.
• \sin x
• \sin\left(\frac{x\ \cdot\ \pi}{10}\ \right)
• \left|\sin\left(x\right)\right|
• 1-\left|\sin\left(x\right)\right|
• \frac{\left(1\ +\ \sin\left(\frac{x\ \cdot\ \pi}{10}\ \right)\right)}{2}\
• \left(1\ +\ \sin\left(\frac{x\ \cdot\ \pi}{10}\ \right)\right)\ \cdot\ 5
• Nepeo - thank you for the explanation! Very helpful; I’ll have to read it a few times to understand but it’s exactly what I was looking for!
Thank you, too, Semorah!
• I didn't know Desmos, it's a cool website! :D
Nepeo's formula repeats way better; mine only works for 1 wave.
Just keep in mind that if you want to start from 0 you need to start your timer from X=-5
• Glad I could help, if either of you have any questions feel free to ask me. These sorts of transforms are very useful to know, and can be applied to all sorts of stuff. They can be a little difficult to visualise at first because you have to think about a range of values, not a single absolute value. Also the complete solution can be very confusing to look at. Think about each step you need to do, then work them together to get the result.
I briefly touched on some similar maths in my tutorial for the Advanced Random plugin for tweaking noise functions. Noise functions basically take an input value and transform it into a new value. I actually used sine for some examples in the tutorial because it behaves exactly like a noise function.
I've come across a few websites over the years that allow you to look at graphs of a function like that. I normally just search around for them. Saying that, I'm fairly sure I've used Desmos in the past, and it's one of the best ones I've used! Semoreh is right that if you want it to start at 0 you will have to offset it in the x direction by a quarter of its period. Usefully, a negated cosine is just sine offset by a quarter of its period. So you can use (1 - cos((x * Pi) / 10)) * 5 (or for Desmos \left(1\ -\ \cos\left(\frac{x\ \cdot\ \pi}{10}\ \right)\right)\ \cdot\ 5 ) to get the same curve but starting at 0.
• Not to forget though: The sine behavior has a "value only" option and can be used for that too.
|
|
# What should be the charge on a sphere of radius 4cm , so that when it is brought in contact with another sphere of radius 2cm carrying charge of 10 micro coulomb , there is no transfer of charge from one sphere to other?
Both potentials must be the same, so $Kq_1/r_1 = Kq_2/r_2$; on solving you will get 20 microcoulombs.
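As a quick check of the arithmetic (equal potentials make the charge proportional to the radius):

```python
# k*q1/r1 = k*q2/r2  =>  q1 = q2 * r1 / r2
r1, r2 = 4.0, 2.0   # radii in cm (units cancel in the ratio)
q2 = 10.0           # microcoulombs on the 2 cm sphere
q1 = q2 * r1 / r2
print(q1)  # 20.0 microcoulombs
```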
|
|
# Bound the variance of the product of two random variables.
For two random variables $X$ and $Y$ show that the following inequality holds
$$\mathrm{Var}(XY)\leq 2\|Y\|_{\infty}^{2}\mathrm{Var}(X)+2\|X\|_{\infty}^{2}\mathrm{Var}(Y).$$
Well, first I tried to show it for just indicator functions, but I couldn't even show that. Any tips?
• No here I am not assuming they are independent. – Stone Nov 29 '14 at 1:51
• In the general case, $$Var(XY)= 2 \mathbb{E}(X) \mathbb{E}(Y) \text{Cov}(X,Y)-\text{Cov}(X,Y)^2+\mu _{2,2}+2 \mu _{1,2} \mathbb{E}(X)+2 \mu _{2,1} \mathbb{E}(Y)+\text{Var}(Y) (\mathbb{E}[X])^2+\text{Var}(X) (\mathbb{E}[Y])^2$$ ... may serve as a useful starting point. – wolfies Nov 29 '14 at 1:54
• sorry but what are $\mu_{2,2}$ and $\mu_{2,1}$ suppose to be? – Stone Nov 29 '14 at 2:02
• Sorry - I should have specified. They denote the product central moments: $$\mu _{r,s}=E\left[(X-E[X])^r (Y-E[Y])^s\right]$$ – wolfies Nov 29 '14 at 2:04
• @Eupraxis1981 Well I don't think you need to assume that they are bounded since if say $X$ is unbounded and if $Y\neq 0$ then the right hand side is infinite and the inequality still holds and in the case that $Y=0$ then both sides are zero and the result still holds. So yes it is fair to assume that they are both bounded a.s – Stone Nov 29 '14 at 6:42
If $U$ and $V$ are two random variables, then the inequality $$\mathrm{Var}(U+V)\leqslant 2\mathrm{Var}(U)+2\mathrm{Var}(V)$$ holds. Using this inequality with $U:=(X-\mathbb E[X])Y$ and $V:=\mathbb E[X]Y$, we obtain that $$\mathrm{Var}(XY)\leqslant 2\mathrm{Var}((X-\mathbb E[X])Y)+2\mathrm{Var}(\mathbb E[X]Y).$$ Since \begin{align}\mathrm{Var}((X-\mathbb E[X])Y)&=\mathbb E\left[((X-\mathbb E[X])Y)^2\right]- \left(\mathbb E\left[(X-\mathbb E[X])Y\right]\right)^2\\ &\leqslant \mathbb E\left[((X-\mathbb E[X])Y)^2\right]\\ &\leqslant \mathbb E\left[(X-\mathbb E[X])^2\right]\lVert Y\rVert_\infty^2\\ &=\mathrm{Var}(X)\lVert Y\rVert_\infty^2, \end{align} we have shown the inequality $$\mathrm{Var}(XY)\leqslant 2\mathrm{Var}(X)\lVert Y\rVert_\infty^2+2(\mathbb E[X])^2\mathrm{Var}(Y).$$ Finally, since $(\mathbb E[X])^2\leqslant \lVert X\rVert_\infty^2$, the claimed bound follows.
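Because the argument above is distribution-free, the bound also holds for the empirical distribution of any finite sample, with $\lVert\cdot\rVert_\infty$ read as the sample maximum. That allows a quick numerical spot-check; a Python sketch with an arbitrary bounded, dependent pair:

```python
import random

random.seed(0)

# a dependent, bounded pair: Y is a noisy, clipped function of X
xs = [random.uniform(0, 2) for _ in range(10_000)]
ys = [min(2.0, x / 2 + random.uniform(0, 1)) for x in xs]
ps = [x * y for x, y in zip(xs, ys)]

def var(vals):
    """Population variance of a list of numbers."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

sup_x = max(abs(v) for v in xs)
sup_y = max(abs(v) for v in ys)

lhs = var(ps)
rhs = 2 * sup_y**2 * var(xs) + 2 * sup_x**2 * var(ys)
print(lhs <= rhs)  # True: the bound holds for the empirical measure as well
```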
|
|
# table number is off in cross-references
I have a longtable and a regular table that were generated by a statistics package. I am inserting them into an article like this:
\input{../Tables/sum_stats.tex}
\label{sum_stats}
and then referring to it with:
Table \ref{sum_stats} displays summary statistics for our sample.
While all the tables are numbered correctly, the numbering in the references is off by 3, so table 1 becomes table 4 in the text. I tried resetting the table counter with \setcounter{table}{0} after \begin{document}, but that did not fix the problem.
The tables are all:
\begin{table}[htbp]\centering
\def\sym#1{\ifmmode^{#1}\else$$^{#1}$$\fi}
\caption{XXXX}
\begin{tabular}{l*{10}{c}}
\hline\hline
&\multicolumn{1}{c}{(1)}&\multicolumn{1}{c}{(2)}&\multicolumn{1}{c}{(3)}&\multicolumn{1}{c}{(4)}&\multicolumn{1}{c}{(5)}&\multicolumn{1}{c}{(6)}&\multicolumn{1}{c}{(7)}&\multicolumn{1}{c}{(8)}&\multicolumn{1}{c}{(9)}&\multicolumn{1}{c}{(10)}\\
&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}\\
\hline
....
\hline\hline
\end{tabular}
\end{table}
• There is a caption inside the table. But it is made inside an environment (table), so the local value that \caption prepares for the label to grab is, as Mico mentions, lost. You will get whatever the value is outside the table environment. Always keep your labels close to what they are referring to. – daleif Jan 8 '15 at 20:34
If your external files all have the same form, just add a command to each of them:
\begin{table}[htbp]\centering
\def\sym#1{\ifmmode^{#1}\else$$^{#1}$$\fi}
\caption{XXXX}\label{\thistablelabel}% <------------
\begin{tabular}{l*{10}{c}}
\hline\hline
&\multicolumn{1}{c}{(1)}&\multicolumn{1}{c}{(2)}&\multicolumn{1}{c}{(3)}&\multicolumn{1}{c}{(4)}&\multicolumn{1}{c}{(5)}&\multicolumn{1}{c}{(6)}&\multicolumn{1}{c}{(7)}&\multicolumn{1}{c}{(8)}&\multicolumn{1}{c}{(9)}&\multicolumn{1}{c}{(10)}\\
&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}&\multicolumn{1}{c}{AME}&\multicolumn{1}{c}{Contrast}\\
\hline
....
\hline\hline
\end{tabular}
\end{table}
and define in your document preamble
\newcommand{\tableinput}[2]{%
\def\thistablelabel{#2}\input{#1}%
}
Then call
\tableinput{../Tables/sum_stats.tex}{sum_stats}
instead of using \input{../Tables/sum_stats.tex}\label{sum_stats} which is incorrect because the label is set after the table environment has been closed.
The suggested changes will set the label at the proper place.
|
|
Best way to draw heatmap using Gfold differentially expressed no-replicate data
3.4 years ago
lookamrita • 0
Hi, I have two-sample (control and test) RNA-seq data (reference: human). After analyzing the files with the STAR and GATK pipeline, I used GFOLD for the differential expression analysis, so I now have the GFOLD output file and want to draw a heatmap from it. There are 475 genes with log2 fold change above 2, and I want to use those genes from the GFOLD output in the heatmap. What is the best way to draw it? New to RNA-seq. Need help. Thanks in advance
RNA-Seq next-gen R • 914 views
Please see as a starting point:
Heatmap based with FPKM values
There are plenty of posts here on Biostars. Please use the search function.
Even though I have log2fc values, shall I use FPKM only? Please see the example values and columns in my file. This is how it looks:
Gene GFOLD(2) E-FDR log2fdc 1stRPKM 2ndRPKM
7SK 3.63629 1 2.7294 0.0064312 0.0753654
A2ML1 8.08695 1 7.09134 0.00184317 0.47231
AADAC 2.57739 1 2.49758 0.108541 0.610544
AATK 2.68001 1 2.63272 0.0563924 0.34059
ABCA6 4.80114 1 4.7294 0.0358213 0.941189
You can use whatever you think is best. I linked this one for example code that you can adapt to feed in your FCs.
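For the filtering step itself, here is a minimal, library-free Python sketch that parses a whitespace-separated GFOLD table with the column layout shown above (the rows below are made up for illustration) and keeps genes with |log2fdc| above a cutoff; the kept RPKM columns are what you would then hand to your heatmap tool of choice:

```python
import io

# hypothetical rows in the GFOLD output layout shown earlier in the thread
SAMPLE = """\
Gene GFOLD(2) E-FDR log2fdc 1stRPKM 2ndRPKM
GeneA 2.9 1 2.70 0.10 0.60
GeneB 1.2 1 1.50 0.30 0.40
GeneC -3.4 1 -3.10 0.80 0.05
"""

def filter_gfold(handle, cutoff=2.0):
    """Yield (gene, rpkm1, rpkm2) for rows where |log2fdc| > cutoff."""
    header = handle.readline().split()
    fc = header.index("log2fdc")
    for line in handle:
        parts = line.split()
        if parts and abs(float(parts[fc])) > cutoff:
            yield parts[0], float(parts[-2]), float(parts[-1])

kept = list(filter_gfold(io.StringIO(SAMPLE)))
print([gene for gene, _, _ in kept])  # ['GeneA', 'GeneC']
```

For a real file, pass an open file handle (with open("gfoldout.txt") as fh: ...) instead of the StringIO sample, then feed the kept expression columns to the heatmap function of whichever plotting package you settle on.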
|
|
# Linear regression analysis / George A. F. Seber, Alan J. Lee.
Author:
Seber, G. A. F. (George Arthur Frederick), 1938- [Browse]
Format:
Book
Language:
English
Published/Created:
Hoboken, NJ : John Wiley, c2003.
Edition:
2nd ed.
Description:
xvi, 557 p. : ill. ; 24 cm.
Series:
Wiley series in probability and statistics. [More in this series]
Bibliographic references:
Includes bibliographical references (p. 531-548) and index.
Contents:
1. Vectors of Random Variables -- 2. Multivariate Normal Distribution -- 3. Linear Regression: Estimation and Distribution Theory -- 4. Hypothesis Testing -- 5. Confidence Intervals and Regions -- 6. Straight-Line Regression -- 7. Polynomial Regression -- 8. Analysis of Variance -- 9. Departures from Underlying Assumptions -- 10. Departures from Assumptions: Diagnosis and Remedies -- 11. Computational Algorithms for Fitting a Regression -- 12. Prediction and Model Selection -- App. A. Some Matrix Algebra -- App. B. Orthogonal Projections.
Other format(s):
Table of contents also available via Internet from the Wiley web site.
Subject(s):
Regression analysis [Browse]
ISBN:
0471415405
OCLC:
51635479
Related name:
RCP:
C - S
|
|
### Misadventures in experiments for growth
by MICHAEL FORTE
Large-scale live experimentation is a big part of online product development. In fact, this blog has published posts on this very topic. With the right experiment methodology, a product can make continuous improvements, as Google and others have done. But what works for established products may not work for a product that is still trying to find its audience. Many of the assumptions on which the "standard" experiment methodology is premised are not valid for such a product. This means a small and growing product has to use experimentation differently and very carefully. Indeed, failure to do so may cause experiments to mislead rather than guide. This blog post is about experimentation in this regime.
## Established versus fledgling products
For the purpose of this post, "established products" are products that have found viable segments of their target user populations, and have sustained retention among those segments. These established products fill a particular need for a particular set of users, and while these products would want to expand, they do not need to as a matter of existence. Product viability these days does not necessarily mean being a financially viable standalone product either. Fulfilling unmet user needs is often enough to be of value to a larger product that might someday purchase you. For established products, growth is structured as incremental rather than a search for viability, or a matter of survival.
In contrast, "fledgling products" are products that are still trying to find their market. Now how is it possible for these fledgling products exist, do something, have enough users that one could contemplate experimentation, and yet still not have market fit? Wonders of the internet (and VC funding)! Modern products often don't start with set-in-stone business models because starting and scaling costs are low. Modern products often start with an idea, but then gather enough momentum to pivot to fill emergent needs. You do some cool stuff and then try to figure out from usage patterns what hole your product is filling in the world (so-called "paving the cowpath"). Instrumentation and analysis are critical to finding this unexpected use.
## Why does anyone use experiments?
Let's revisit the various reasons for running experiments to see how relevant they are for a fledgling product:
### To decide on incremental product improvements
This is the classic use case of experimentation. Such decisions involve an actual hypothesis test on specific metrics (e.g. version A has better task completion rates than B) that is administered by means of an experiment. Are the potential improvements realized and worthwhile? This scenario is typical for an established product. Often, an established product will have an overall evaluation criterion (OEC) that incorporates trade-offs among important metrics and between short- and long-term success. If so, decision making is further simplified.
On the other hand, fledgling products often have neither the statistical power to identify the effects of small incremental changes, nor the luxury to contemplate small improvements. They are usually making big changes in an effort to provide users a reason to try and stick with their fledgling product.
### To do something sizable
Sizable changes are the bulk of changes a fledgling product is making. But these are not usually amenable to A/B experimentation. The metrics to measure the impact of the change might not yet be established. Typically, it takes a period of back-and-forth between logging and analysis to gain the confidence that a metric is actually measuring what we designed it to measure. Only after such validation would a product make decisions based on a metric. With major changes, the fledgling product is basically building the road as it travels on it.
That said, there might still be reasons to run a long-term holdback experiment (i.e. withhold the change from a subset of users). It can provide a post hoc measure of eventual impact, and hence insight into what the product might try next. This is not the classic case of hypothesis testing via experimentation, and thus the measured effects are subject to considerations that come with the territory of unintentional data.
### To roll out a change
We have a change we know we want to launch — we just want to make sure we didn't break anything. We might use randomization at the user level to spread the rollout. Unlike, say, picking a data center, randomization produces metrics that are immune to sampling bias and can thus detect regressions in fewer treated units. This is a good use of existing experiment infrastructure, but it is not really an experiment as in hypothesis testing.
In summary, classic experimentation is applicable to fledgling products but in a much more limited way than to established products. With continual large product changes, a fledgling product's metrics may not be mature enough for decision making, let alone amenable to an OEC. To focus on the long term is great advice once immediate survival can be assumed.
## Your users aren't who you think they are!
A more fundamental problem with live experiments is that the users whose behavior they measure might not be who we imagine them to be. To illustrate the issues that arise from naively using experimentation in a fledgling product, let us imagine a toy example:
We have an MP3 music sales product that we have just launched in a "beta" state. Our specialty is to push users through a series of questions and then recommend, for purchase, tracks that we think they will like. We back up our belief by offering a full refund if they don't love the song. Each page-view of recommendations consists of an appealing display of tracks of which the user may click on one to purchase. The product is premised on making it a no-brainer to purchase a single song ("for the price of chewing gum", according to our marketing message).
We define an impression as a recommendation page-view and a sale as the purchase of a track. Of particular interest to us is the conversion rate, defined as the fraction of impressions resulting in sales. To grow, we paid for a small amount of advertising and have a slow but steady stream of sales, say roughly 5,000 sales per day from about 100K impressions (a 5% conversion rate).
The design team decides it wants to add BPM (beats per minute) to the song list page but isn't sure how to order it with the title (e.g. should it be [Artist Title BPM] or [BPM Artist Title]). So they set up an experiment to see which one our users prefer. This is expected to be a small change: it does not alter the sort order, it just adds a little extra information.
The experiment had 10,000 impressions in each arm with results as shown below with 95% confidence intervals. These are binomial confidence intervals computed naively under assumptions of impressions being independent (this is usually a poor assumption, but for now let us proceed with it):
| Treatment | Impressions | Sales | Conversion Rate | Delta From Control |
|---|---|---|---|---|
| [Artist Title] (control) | 10,000 | 400 | 4.00±0.38% | - |
| [Artist Title BPM] | 10,000 | 500 | 5.00±0.43% | +1.00±0.57% |
| [BPM Artist Title] | 10,000 | 600 | 6.00±0.47% | +2.00±0.60% |
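These intervals can be reproduced with a normal-approximation (Wald) binomial interval. A minimal sketch in Python; the 1.96 multiplier is the z-value for 95% confidence:

```python
import math

def binom_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

def delta_ci(s_treat, n_treat, s_ctrl, n_ctrl, z=1.96):
    """Difference of two proportions; the standard errors add in quadrature."""
    p_t, _ = binom_ci(s_treat, n_treat)
    p_c, _ = binom_ci(s_ctrl, n_ctrl)
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return p_t - p_c, z * se

p, hw = binom_ci(400, 10000)               # control arm: 4.00% +/- 0.38%
d, dhw = delta_ci(600, 10000, 400, 10000)  # [BPM Artist Title]: +2.00% +/- 0.60%
```

These are the naive per-impression intervals; as discussed later, conversions are rarely independent across impressions, so treat them as a lower bound on the real uncertainty.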
Given just this information, it seems obvious that we should pick "[BPM Artist Title]" going forward and that we can expect roughly 2% more of our impressions to turn into sales. Going from 4% to 6% seems like a big win.
Unfortunately this analysis missed one subtle but very important caveat. Early in our product's life cycle we have a user population that strongly prefers EDM (electronic dance music) to the point that roughly 80% of the 5,000 songs we sell are EDM. Given this information it might seem obvious in retrospect that adding BPM to the song list would lead to more sales (BPM is an important selection parameter for EDM music).
How could more sales be a problem? Putting BPM first in the song list came at the expense of putting artist first, and if we had broken out our user population by EDM listener and non-EDM listener we would have seen something very telling:
EDM users (8,000 impressions):
| Treatment | Impressions | Sales | Conversion Rate | Delta From Control |
|---|---|---|---|---|
| [Artist Title] (control) | 8,000 | 320 | 4.00±0.43% | - |
| [Artist Title BPM] | 8,000 | 440 | 5.50±0.50% | +1.50±0.66% |
| [BPM Artist Title] | 8,000 | 570 | 7.12±0.56% | +3.12±0.71% |
Non-EDM users (2,000 impressions):
| Treatment | Impressions | Sales | Conversion Rate | Delta From Control |
|---|---|---|---|---|
| [Artist Title] (control) | 2,000 | 80 | 4.00±0.86% | - |
| [Artist Title BPM] | 2,000 | 60 | 3.00±0.75% | -1.00±1.14% |
| [BPM Artist Title] | 2,000 | 30 | 1.50±0.53% | -2.50±1.01% |
From this it is clear that we have sacrificed sales from non-EDM users for EDM users. This might be an acceptable trade-off if we have looked at the marketplace and decided to make a niche product for EDM users. But the charts indicate that EDM music makes up only 4% of total music sales (source), which means our product might not appeal to 96% of the market. So by optimizing short-term metrics such as sales volume we might have actually hurt our long-term growth potential.
The underlying principle at play is that your current user base is different from your target user base. This fact will always be true, but the bias is dramatically worse for fledgling products as early growth tends to be in specific pockets of users (often due to viral effects) and not uniformly spread across the planet. Those specific pockets won't behave like the broader population along some dimension (here it is EDM vs non-EDM music preference).
## And how to do it right (or at least better)
Continuing with our MP3 product, how can we undo this bias that our non-representative users are injecting?
There are a few ways to de-bias the data to make the experimental results usable. The easiest approach is to identify the segments and reweight them based on the target population distribution.
Since we don't particularly want to build a product optimized for EDM users, we can reweight back to the mean of the broader population. To do that we can separate the populations and then take a weighted mean of the effects to project the effects onto the target user population.
Here the target population is 96% non-EDM and 4% EDM, so the reweighted conversion rate is: $$0.04 \times EDMrate + 0.96 \times nonEDMrate$$ The confidence intervals must also be adjusted; since the two segments are independent samples, the standard errors add in quadrature: $$\sqrt{(0.04 \times EDMrateSE)^2 + (0.96 \times nonEDMrateSE)^2}$$
Weighted average conversion rates:
| Treatment | EDM Conversion Rate | Non-EDM Conversion Rate | Weighted Average |
|---|---|---|---|
| [Artist Title] (control) | 4.00±0.43% | 4.00±0.86% | 4.00±0.83% |
| [Artist Title BPM] | 5.50±0.50% | 3.00±0.75% | 3.10±0.72% |
| [BPM Artist Title] | 7.12±0.56% | 1.50±0.53% | 1.72±0.51% |
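The weighted averages above can be reproduced directly. A minimal sketch, with percentages in and out, and the 4%/96% split as the assumed target population:

```python
import math

def reweight(edm_rate, edm_ci, non_rate, non_ci, edm_share=0.04):
    """Project segment-level conversion rates onto the target population.

    Rates combine linearly; the +/- half-widths (1.96 * SE) combine in
    quadrature because the two segments are independent samples.
    """
    non_share = 1.0 - edm_share
    rate = edm_share * edm_rate + non_share * non_rate
    ci = math.sqrt((edm_share * edm_ci) ** 2 + (non_share * non_ci) ** 2)
    return rate, ci

rate, ci = reweight(7.12, 0.56, 1.50, 0.53)  # [BPM Artist Title] arm
```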
From this it becomes clear that we might not want to add BPM at all, but if we needed the change for some reason other than conversion rate, we should put it after the title.
Also notice the change in confidence intervals in the weighted average versus the original conversion rates: in the original control group we had ±0.38%, now it is ±0.83%. This large increase reflects the fact that we don't have much data from the "target" user base, so we cannot speak very confidently about its behavior.
This strategy only works if we have the ability to identify EDM users. If, for example, we were optimizing the first interaction with our product, we wouldn't know if a new user was an EDM lover or not since they would not have purchased anything yet.
This early user classification problem goes hand in hand with product personalization. Luckily the user segmentation (e.g. "EDM fans") that we aim to use for experiments can also be useful for personalizing our user interface. For our product this might mean simply asking the user when they sign up what their favorite song is. This can then be used for tailoring the product to the user, but also for weighting experimental analysis.
This example with EDM users is clearly a cartoon. In reality, there will be more than two slices. This reweighting technique generalizes to the case when users fall into a small number of slices. But often there are multiple dimensions whose Cartesian product is large, leading to sparse observations within slices. In this case, we need a propensity-score model to provide the appropriate weight for each user.
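When slices are too sparse to reweight directly, one common way to learn the weights is to train a classifier to distinguish experiment users from a reference sample of the target population, and then weight each experiment user by the odds that they came from the reference population. A stdlib-only sketch with a single binary feature (EDM fan or not); the feature, sample sizes, and shares are all invented for illustration:

```python
import math, random

def fit_logistic(X, y, lr=0.5, epochs=1500):
    """Minimal logistic regression fitted by batch gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * g / len(X) for wj, g in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def prob_sample(w, b, xi):
    return 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

random.seed(0)
# x = [is_edm_fan]; the experiment sample over-represents EDM fans (80%)
# relative to a reference sample of the target population (4%)
sample = [[1.0 if random.random() < 0.80 else 0.0] for _ in range(300)]
reference = [[1.0 if random.random() < 0.04 else 0.0] for _ in range(300)]
w, b = fit_logistic(sample + reference, [1.0] * 300 + [0.0] * 300)

def weight(xi):
    """Density-ratio weight: odds that this user came from the reference."""
    p = prob_sample(w, b, xi)
    return (1 - p) / p

# EDM fans are down-weighted, non-EDM fans up-weighted
w_edm, w_non = weight([1.0]), weight([0.0])
```

With more features, the same classifier-based weights approximate the propensity-score reweighting described above without requiring every slice of the Cartesian product to be well populated.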
## Do you even want those users?
The idea that your current users aren't your target users can be taken a step further. For our music example, we imagined that EDM users don't approximate the target population for some experiments. But what if certain users didn't even represent the kind of user we wanted at all (e.g. their lifetime value was negative)?
One example of this for our music product could be die-hard fans of the American rock band Tool. Tool does not allow any digital sales of their albums, so users coming to our site looking for this band's music will leave with negative sentiment toward our product. They may subsequently return any tracks they did purchase, at an actual cost to our business. These users might further share their experiences with non-Tool fans on social media, causing more damage.
Early in our product's lifecycle, this population of users will contribute to our active user population as they explore our product and maybe even purchase some albums. But without finding music that matches their core preferences, they will likely churn.
Gaining more of these users may increase our short-term metrics, but these users do not offer long term stable revenue and may negatively impact our ability to gain non-Tool users in the future.
## The tech-savvy users' siren call
Hopefully it is now clear that using experiments without understanding how the existing user population differs from the target population is a dangerous exercise.
On top of this idiosyncratic population bias due to uneven population growth rates, there is a more persistent early adopter bias. These early adopters tend to be much more tech-savvy than the general population, trying out new products to be on the cutting edge of technology. This tech-savvy population desires features that can be detrimental to the target population. In our music example, tech-savvy users will want to select the specific bit-rate and sampling frequency of the song they are buying, but forcing our target population through this flow would lead to confusion and decreased conversion rates.
Where the average user would walk away, tech-savvy users may be more willing to see past issues to find value in your product. For example, if we add roadblocks to the purchase flow, the average user will abandon the purchase. In contrast, the tech-savvy user is capable of navigating the complicated process without dropping off or developing negative sentiment. If we assume that these early users represent our target users, we will miss the fact that our product is actually churning our target population.
Unfortunately since these tech-savvy users often have a larger than average social media/megaphone presence, we need to be delicate with how we react to them. In evaluating product changes, we will rarely make trade-offs in their favor at the cost of most users. But we still want the product to work well enough for them so they don't have negative experiences. This might mean having the bit-rate setting buried in the fine print, available if absolutely needed, but not distracting to the target users.
## Conversions are not independent
Further complicating matters, when products are small they are much more susceptible to error from individual "power" users. In our music product, most users will buy a single song, that one track that they heard on the radio. Indeed, that is the premise of our product, and how we built the user experience. But every once in a while there will be a user who decides to rebuy his or her entire CD collection on MP3. This wasn't what we intended and our UI doesn't make it easy, but there it is. The behavior of this single user appears in our data as a large number of impressions with conversions.
Imagine that early in our product's lifecycle we have one such user per week who buys 1,000 tracks, even though in a given week we only sell 2,000 tracks total. In other words, this single user represents half our sales. If we run a null A/B experiment where the users are randomly assigned to the A and B arms with the collection buyer in the A arm, we will have 1500 sales in A and 500 in B. This makes it look as though the A arm performs 3x better than the B arm, even though they are actually the same. As our product grows, it is less likely that a single user's behavior will affect aggregate metrics, but this example illustrates why we usually don't want to assume conversions are independent across impressions. The binomial confidence intervals we computed earlier may greatly underestimate the uncertainty in our inference. It is imperative that we use techniques such as resampling entire users to correct for this kind of user effect. This applies to a product of any size, but is a greater concern when sample sizes are smaller.
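A user-level bootstrap makes the point concrete. In this sketch (numbers invented for illustration) one collection buyer sits among a thousand single-track buyers; resampling entire users, rather than impressions, shows how wide the real uncertainty in the conversion rate is:

```python
import random

random.seed(1)
# one power user who bought 1,000 tracks, plus 1,000 typical single-track buyers
# (the single-track buyers are identical here to keep the toy model minimal)
users = [{"impressions": 1200, "sales": 1000}]
users += [{"impressions": 20, "sales": 1} for _ in range(1000)]

def conversion(sample):
    return sum(u["sales"] for u in sample) / sum(u["impressions"] for u in sample)

# bootstrap: resample whole users with replacement, preserving within-user correlation
boot = sorted(
    conversion([random.choice(users) for _ in users]) for _ in range(1000)
)
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
```

The resulting 95% interval is dominated by how many times the power user lands in the resample, and it comes out far wider than the naive per-impression binomial interval would suggest.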
## A word on growth hacking
Growth hacking is an emergent field attempting to optimize product growth. It often comes up in discussions of a fledgling product's adoption rates. Unfortunately this space has mainly functioned as a way to optimize marketing spend and small product changes under the assumption that a product has already found "product-market fit". This mentality does not mesh well with our description earlier of a fledgling product. Modern software products do not come onto the market as fixed immutable "things" but instead iteratively (and sometimes drastically) evolve to find their niche.
Of particular concern in growth hacking is the focus on influencers to drive growth. Influencers very rarely represent your target user base, and focusing your product features too much on them can leave you with a product that influencers love but your target users don't find compelling (e.g. Twitter). This doesn't mean you shouldn't try to attract them, but you should not design for them at the expense of your target user.
## Conclusion
Creating something from nothing is the hardest thing humans do. It takes imagination, execution, and a dose of luck to build a successful product. While barriers to entry for a new product have come down, success remains elusive. This means there will be an increasing number of fledgling products out there trying to make it. In this post, we described several ways in which such products may not be able to leverage experiment methodology to the same extent as established products. Nor does growth hacking provide ready answers. But if there is one thing that I have learned from my experience working on fledgling products, it is to be explicit and vigilant about the population for whom the product is built. Those with expertise in large-scale experimentation are typically mindful of evaluation metrics. My experience suggests that for a fledgling product, being mindful of the target user population is just as important. Never stop asking: "do the users I want, want this product?"
|
|
American Institute of Mathematical Sciences
February 2018, 12(1): 239-259. doi: 10.3934/ipi.2018010
A scaled gradient method for digital tomographic image reconstruction
1. Department of Mathematics, Shanghai University, Shanghai 200444, China
2. Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322, USA
* Corresponding author: James G. Nagy
Received: March 2017. Revised: July 2017. Published: December 2017.
Fund Project: The first author is supported by grant no. 15ZR1416300 from the Shanghai Municipal Natural Science Foundation, the third author is supported by grant no. DMS-1522760 from the US National Science Foundation.
Digital tomographic image reconstruction uses multiple x-ray projections obtained along a range of different incident angles to reconstruct a 3D representation of an object. For example, computed tomography (CT) generally refers to the situation when a full set of angles are used (e.g., 360 degrees) while tomosynthesis refers to the case when only a limited (e.g., 30 degrees) angular range is used. In either case, most existing reconstruction algorithms assume that the x-ray source is monoenergetic. This results in a simplified linear forward model, which is easy to solve but can result in artifacts in the reconstructed images. It has been shown that these artifacts can be reduced by using a more accurate polyenergetic assumption for the x-ray source, but the polyenergetic model requires solving a large-scale nonlinear inverse problem. In addition to reducing artifacts, a full polyenergetic model can be used to extract additional information about the materials of the object; that is, to provide a mechanism for quantitative imaging. In this paper, we develop an approach to solve the nonlinear image reconstruction problem by incorporating total variation (TV) regularization. The corresponding optimization problem is then solved by using a scaled gradient descent method. The proposed algorithm is based on KKT conditions and Nesterov's acceleration strategy. Experimental results on reconstructed polyenergetic image data illustrate the effectiveness of this proposed approach.
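For readers unfamiliar with the ingredients named in the abstract, the skeleton of a Nesterov-accelerated projected gradient iteration looks like the following. This is a generic sketch on a toy nonnegative least-squares problem, not the authors' reconstruction algorithm; the projection onto x >= 0 stands in for enforcing KKT nonnegativity conditions:

```python
def grad(A, b, x):
    """Gradient of 0.5 * ||Ax - b||^2, i.e. A^T (Ax - b)."""
    r = [sum(row[j] * x[j] for j in range(len(x))) - bi for row, bi in zip(A, b)]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

def nesterov_pg(A, b, x0, step, iters=2000):
    """Projected gradient descent with Nesterov's acceleration (x >= 0)."""
    x, y, t = list(x0), list(x0), 1.0
    for _ in range(iters):
        g = grad(A, b, y)
        # gradient step at the extrapolated point, then project onto x >= 0
        x_new = [max(yi - step * gi, 0.0) for yi, gi in zip(y, g)]
        # Nesterov momentum schedule
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = [xn + ((t - 1.0) / t_new) * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```

The step size must be at most the reciprocal of the largest eigenvalue of A^T A for the iteration to converge.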
Citation: Jianjun Zhang, Yunyi Hu, James G. Nagy. A scaled gradient method for digital tomographic image reconstruction. Inverse Problems & Imaging, 2018, 12 (1) : 239-259. doi: 10.3934/ipi.2018010
Figure captions (reconstruction experiments): from left to right and top to bottom, original first material, reconstructed first material, original second material, reconstructed second material, sinogram image, and RErr with iteration.
Figure captions (resolution analysis): from left to right and top to bottom, sum along each row and each column for the first and second materials, and relative error of the reduced-resolution image along rows and columns for each material.
|
|
My Math Forum: Combinations where only one element can repeat
Probability and Statistics Basic Probability and Statistics Math Forum
February 25th, 2017, 09:37 PM, post #1. Newbie (Joined: Feb 2017; From: Brazil; Posts: 1; Thanks: 0)

Hi! Let's say I have the letters A, B, C, D. I want to know the number of combinations in the space _, _, _ (length 3), where repetitions are allowed. For this I can apply the formula $\displaystyle C = n^x$; in the example, $\displaystyle C = 4^3 = 64$. My question is: how do I calculate this if only one letter can be repeated? Let's say only "A" can be repeated. For instance: A, A, A; A, B, A; C, A, A; D, A, B (this last one does not repeat any letter, but is part of the result). Thanks!
February 25th, 2017, 10:03 PM, post #2. Senior Member (Joined: Sep 2015; From: USA; Posts: 1,753; Thanks: 896)

Does order matter? Does $ABD = DAB$? If not, this will be the sum of the number of combinations with no repeats plus the number of combinations with 1 repeat. There are $\binom{4}{3} = 4$ combinations with 0 repeats and $\binom{4}{1}\binom{3}{1} = 12$ combinations with 1 repeat, thus a total of $4 + 12 = 16$ combinations allowing a single repeat.
February 25th, 2017, 11:53 PM, post #3. Senior Member (Joined: Sep 2015; From: USA; Posts: 1,753; Thanks: 896)
Quote:
Originally Posted by romsek Does order matter? does $ABD=DAB$ ? If not... this will be the sum of the number of combinations with no repeats plus the number of combinations with 1 repeat. There are $\binom{4}{3}=4 \text{ combinations with 0 repeats.}$ There are $\binom{4}{1}\binom{3}{1}= 12 \text{ combinations with 1 repeat.}$ $\text{Thus a total of 4+12=16 combinations allowing a single repeat.}$
hrm from your formula I see that order does matter.
In this case all we have to do is just subtract off those combos where elements are repeated 3 times.
There will be 4 of these.
So there will be $4^3 - 4 = 60$ different combinations allowing a single repeat.
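The counting is easy to check by brute force. Note that $4^3 - 4$ only removes the four triples like BBB; it still counts strings such as BBC in which a letter other than A repeats twice. Under the strict reading of the question (order matters, and only A may appear more than once), enumeration gives 34:

```python
from itertools import product

letters = "ABCD"
# ordered strings of length 3 in which only 'A' may appear more than once
valid = [s for s in product(letters, repeat=3)
         if all(s.count(c) <= 1 for c in "BCD")]

total = len(valid)  # 24 with no repeats + 9 with 'A' twice + 1 with 'A' thrice
```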
|
|
Machine learning (ML) allows computers to learn and discern patterns without actually being programmed. Despite many similarities, ML is differentiated from statistical inference by its focus on predicting real-life outcomes from new data: the goal of statistical methods is inference, i.e. to reach conclusions about populations or derive scientific insights from data collected from a representative sample of that population. Deep learning, or hierarchical learning, has emerged as a new area of machine learning research [20, 163].

Supervised ML refers to techniques in which a model is trained on a range of inputs (or features) which are associated with a known outcome. The features which make up the training dataset may also be described as inputs or variables and are denoted in code as x; a feature may be the colour of a pixel in an image or the number of times that a word appears in a given text. In practice, classification algorithms return the probability of a class (between 0 for impossible and 1 for definite).

The dataset used in this work is the Breast Cancer Wisconsin Diagnostic Data Set, in which each instance has an identification number, a diagnosis, and a set of features. The dataset is simple and therefore computationally efficient, and its compact size enables short computational times on almost all modern computers.

Regularised generalised linear models (GLMs) are operationalised in R using the glmnet package [24]; here, logistic regression is fitted with $\mathscr{L}_{1}$ Least Absolute Selection and Shrinkage Operator (LASSO) regularisation. Regularisation effectively reduces both the number of coefficients in the model and their magnitudes, making it especially suitable for big datasets that may have more features than instances. As the size of log(λ) decreases, the number of variables in the model increases. Predictions from a trained model on new data are easily obtained using the predict() function, which is included in the stats package in the R distribution. Once populated, the confusion matrix provides all of the information needed to calculate sensitivity, specificity, and accuracy:

$$\text{Sensitivity} = \text{true positives} / \text{actual positives}$$
$$\text{Specificity} = \text{true negatives} / \text{actual negatives}$$
$$\text{Accuracy} = (\text{true positives} + \text{true negatives}) / \text{total predictions}$$

A support vector machine (SVM) separates classes with a hyperplane; a nonlinear mapping of the features, known as the kernel trick, allows classes that are not linearly separable to be separated. In a neural network, each unit's output is

$$y = activation(\Sigma(weight \times input) + bias)$$

A single decision tree is interpretable, but with ensemble methods that improve performance this interpretability is lost; neural network performance may be improved using a regularisation technique such as DropConnect.

Though the complexities of ML algorithms may appear esoteric, they often bear more than a subtle resemblance to conventional statistical analyses, and their predictions warrant caution. The oft-told parable of the failure of the Google Flu Trends model offers an accessible example of the risks and consequences posed by a lack of understanding of ML models deployed ostensibly to improve health [34]. Likewise, if biases exist in routinely collected data (as has been debated for predictive policing [35]), models taught to recognise patterns in those data have no means to exclude the biases when making predictions. Without a clear understanding of the way in which algorithms are trained, medical practitioners are at risk of relying too heavily on tools which might not always perform as expected.

This work addresses the need for capacity development in this area by providing a conceptual introduction to machine learning alongside a practical guide to developing and evaluating predictive algorithms using freely-available open-source software (the R environment, commonly accessed through the open-source RStudio IDE) and public-domain data. The anonymised dataset used in this work is included on the BMC Medical Research Methodology website.

Citation: Sidey-Gibbons J, Sidey-Gibbons C. Machine learning in medicine: a practical introduction. BMC Medical Research Methodology. https://doi.org/10.1186/s12874-019-0681-4
Introduction. 11. https://doi.org/10.1109/ICASSP.2013.6639346. Kosinski M, Stillwell D, Graepel T. Private traits and attributes are predictable from digital records of human behavior. The majority of ML methods can be categorised into two types learning techniques: those which are supervised and those which are unsupervised. Looking to applications of ML beyond the medical field offers further insight into some risks that these algorithms might engender. A visual illustration of an unsupervised dimension reduction technique. 21 demonstrates how these data are represented in a manner that allows them to be processed by the trained model. https://doi.org/10.1126/science.aaa8415. Deep Neural Networks (DNNs) refers to neural networks which have many hidden layers. Results From a Randomized Controlled Trial. 2018; 319(13):1317–8. 1995; 20(3):273–97. In order to use the trained models to make predictions from data we need to construct either a vector (if there is a single new case) or a matrix (if there are multiple new cases). Chris Sidey-Gibbons. In a similar way to the supervised learning algorithms described earlier, also share many similarities to statistical techniques which will be familiar to medical researchers. Unsupervised learning techniques are not discussed at length in this work, which focusses primarily on supervised ML. 83 - 86. Interpretation of ROC curves is facilitated by calculating the area under each curve (AUC) [30]. We address the need for capacity development in this area by providing a conceptual introduction to machine learning alongside a practical … 7 will divide the dataset into two required segments, one which contains 67% of the dataset, to be used for training; and the other, to be used for evaluation, which contains the remaining 33%. Following visible successes on a wide range of predictive tasks, machine learning techniques are attracting substantial interest from medical researchers and clinicians. 
This code will act as a framework upon which researchers can develop their own ML studies. Typically, we would transform any probability greater than.50 into a class of 1, but this threshold may be altered to improve algorithm performance as required. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Unsupervised techniques are thus exploratory and used to find undefined patterns or clusters which occur within datasets. A step to step tutorial to add and customize Early Stopping with Keras and TensorFlow 2.0 Photo by Samuel Bourke on Unsplash. Support Vector Machines (SVMs) with a radial basis function (RBF) kernel. In this case, the width of a TDM is equal to the number of unique words in the entire corpus and, for each document, the value any given cell will either be 0 if the word does not appear in that comment or 1 if it does. Note that data which do not have sufficient commonality to the clustered data are typically excluded, thereby reducing the number of features within of the dataset. Meyer D, Hornik K, Fienerer I. https://doi.org/10.1136/bmjqs-2015-004309. do not treat many matters that would be of practical importance in applications; the book is not a handbook of machine learning practice. Packages for R are arranged into different task views on the Comprehensive R Archive Network. The predictions made by the algorithm are then compared to the known outcomes of the testing dataset to establish model performance. We have chosen to use a publicly-available dataset which contains a relatively small number of inputs and cases. AI has the potential to improve and influence the status quo, with capacity to learn from these … Introduction. The round() function used in the code shown in Fig. As information passes through the ’neurons’, or nodes, where is is multiplied by the weight of the neuron (plus a constant bias term) and transformed by an activation function. 
In contrast with supervised learning, unsupervised learning does not involve a predefined outcome. 2008; 25(5):1–54. 2015; 1(1):15030. https://doi.org/10.1038/npjschz.2015.30. 1982; 143(1):29–36. Those familiar with Principal Component Analysis and factor analysis will already be familiar with many of the techniques used in unsupervised learning. Dr. Sidey-Gibbons. In this paper, we introduce basic ML concepts within a context which medical researchers and clinicians will find familiar and accessible. Fig. Receiver operating characteristics curves are useful and are shown in the code in Fig. Instead, my goal is to give the reader su cient preparation to make the extensive literature on machine learning accessible. Such extraction can mitigate issues caused by grammatical nuances such as negation (e.g., “I never said she stole my money.”). The risk of over-fitting can be mitigated using various techniques. Darcy AM, Louie AK, Roberts LW. Wolberg WH, Mangasariant OL. The outcomes may be referred to as the label or the class and are denoted using y. The code in Fig. However, a fuller discussion of the similarities and differences between ML and conventional statistics is beyond the purview of the current paper. 2017; 542(7639):115–8. This allows the use of complex non-linear algorithms. BMJ Qual Saf. From Cognitive Computing and Natural Language Processing to Computer Vision and Deep Learning, you can learn use-cases taught by the world's leading experts. Confusion matrices can be easily created in R using the caret package. This theory was developed in the 1960s and expands upon traditional statistics. In its most basic form, each row of the TDM represents a simple count of the words which were used in a document. Correspondence to For example, concerns have been raised about predictive policing algorithms and, in particular, the risk of entrenching certain prejudices in an algorithm which may be apparent in police practice. 
This book presents an introduction to Machine Learning concepts, a relevant discussion on Classification Algorithms, the main motivations for the Support Vector Machines, SVM kernels, Linear Algebra concepts and a very simple approach to understand the Statistical Learning Theory. I started with this book and it made a big impression on me back in the day. Sci Transl Med. 1. The confusionMatrix() function creates a confusion matrix and calculates sensitivity, specificity, and accuracy. During the past several years, the techniques developed from deep learning research have already been impacting a wide range of signal and information processing work within the traditional and the new, widened scopes including key aspects of Other strategies to improve performance can include dropout regularisation, where some number of randomly-selected units are omitted from the hidden layers during training [28]. 2018; 5(1):1–6. Once the algorithm is successfully trained, it will be capable of making outcome predictions when applied to new data. Learning healthcare systems describe environments which align science, informatics, incentives, and culture for continuous improvement and innovation. 1989:593–605. As medicine expands in scope and population served, the traditional model becomes unsustainable as a method of providing safe and high-quality care within practical constraints.1 Medicine … https://doi.org/10.1038/nature21056. 6. The result will be a continuous source of data-driven insights to optimise biomedical research, public health, and health care quality improvement [10]. It should also be acknowledged that whilst the ’Black Box’ concept does generally apply to models which utilize non-linear transformations, such as the neural networks, work is being carried out to facilitate feature identification in complex algorithms [12]. https://doi.org/10.1186/s12874-019-0681-4, DOI: https://doi.org/10.1186/s12874-019-0681-4. 
Other machine learning algorithms - including bagging, random forest and boosting - can be used to build multiple different trees from one single data set leading to a better predictive performance. A total of 699 samples were used to create this dataset. Many researchers also think it is the best way to make progress towards human-level AI. Maaten Lvd, Hinton G. Visualizing Data using t-SNE. R is supported by a large community of active users and hosts several excellent packages for ML which are both flexible and easy to use. Machine learning has the potential to transform the way that medicine works [32], however, increased enthusiasm has hitherto not been met by increased access to training materials aimed at the knowledge and skill sets of medical practitioners. Google Scholar. The ultimate goal of this manuscript is to imbue clinicians and medical researchers with both a foundational understanding of what ML is, how it may be used, as well as the practical skills to develop, evaluate, and compare their own algorithms to solve prediction problems in medicine. It uses a mathematical transformation known as the kernel trick, which we describe in more detail below. Fortunately for the medical field, many relationships of interest are reasonably straightforward, such as those between body mass index and diabetes risk or tobacco use a lung cancer. those with a nonzero coefficient) increases as does the magnitude of each feature. Remove missing items and restore the outcome data. These curves illustrate the relationship between the model’s sensitivity (plotted on the y-axis) and specificity (plotted on the x-axis). A Practical Introduction to Machine Learning Concepts for Actuaries For our purposes the “Findings” in text form and in coded form are the only two fields of the NTSB database that we use. As an instance, BenevolentAI. 
Additional practice data sets can be obtained from the University of California Irvine Machine learning data sets repository which at the time of writing, includes an additional 334 datasets suitable for classification tasks, including 35 which contain open-text data [17]. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’16: 2016. p. 1135–1144. Article 19 using the pROC package. which feed into any number of hidden layers before passing to an output layer in which the final decision is presented. The machine learning algorithms use natural language processing and generation to provide correct information, create a complex map of the user’s condition, and provide a personalized experience. As such, we develop models not to infer the relationships between variables but rather to produce reliable predictions from new data (though, as we have demonstrated, prediction and inference are not mutually exclusive). The following section will take you through the necessary steps of a ML analysis using the Wisconsin Cancer dataset. 2017; 19(3):65. https://doi.org/10.2196/jmir.6533. Machine learning is a powerful tool for prognosis, a branch of medicine that specializes in … However, unsupervised methods are sometimes employed in conjunction with the methods used in this paper to reduce the number of features in an analysis, and are thereby worth mention. https://doi.org/10.1016/0304-3835(94)90099-X. Note that all three algorithms return predictions that suggest there is a near-certainty that this particular sample is malignant. This program will give you practical experience in applying cutting-edge machine learning techniques to concrete problems in modern medicine: - In Course 1, you will … This figure can be augmented with a dotted vertical line indicating the value of log(λ) using the abline() function, shown in Fig. 
Perhaps the most straight-forward approach, which will be employed in this work, is to split our dataset into two segments; a training segment and a testing segment to ensure that the trained model can generalize to predictions beyond the training sample. (PDF 207 kb). Figure 8 shows magnitude of the coefficients for each of the variables within the model for different values of log(λ). nFold cross-validation is used to ascertain the optimal value of lambda (λ), the regularisation parameter. We look toward a future of medical research and practice greatly enhanced by the power of ML. The hyperplane is placed at a location that maximises the distance between the hyperplane and instances [25]. We use a straightforward example to demonstrate the theory and practice of machine learning for clinicians and medical researchers. Examples of classification algorithms include those which, predict if a tumour is benign or malignant, or to establish whether comments written by a patient convey a positive or negative sentiment [2, 6, 13]. Machine learning is helpful for handling massive amounts of data. Cancer Lett. Data Mining- Practical Machine Learning Tools and Techniques. Short computational times on almost all modern computers included in the preference Centre each FNA image separable using linear. Data from the trained and validated algorithm this paper provides a pragmatic example using supervised ML may! Of features attributed to it questions remain as to when a conventionally statistical becomes. Logistic regression, is demonstrated in Fig own ML studies of ensemble learning can machine learning in medicine: a practical introduction using... Identifiable characteristics from the trained and validated algorithm and agree to our Terms and,... The weights of the similarities and differences between ML and conventional statistics is beyond the purview of digitised. Cases have a class of four, and drafted the manuscript:.... 
A Nationwide learning Health System as relapse or transition into another disease state Fellowships NIHR-PDF-2014-07-028! Expands upon traditional statistics with regard to jurisdictional claims in published maps and affiliations! The open-source R statistical programming languages, including MATLAB, SAS, and prospects number, diagnosis is! A machine learning is so pervasive today that you probably use it dozens of times a day knowing! Resultantly, can be mitigated using various techniques nuclear features of the coefficients for the 9 model features for levels..., Ltd: 2014, let ’ s degree in mechanical engineering from McGill University Montreal! Words using a simple count of the features which make up the dataset. Patients more accurately, make predictions about patients ’ future Health, and for... To when a conventionally statistical technique becomes a ML analysis using the vertical broken line ( shown here at =! Be avoided concrete problems in medicine: a Review be plotted using the code Fig! Paper we suggest that user apply their knowledge to problems within their own studies. Correctly classified by each algorithm use it dozens of times a day without it... Results, and 458 instances were found to be re-usable and easily adaptable, so that may! Characteristics machine learning in medicine: a practical introduction with a specific outcome, and drafted the manuscript referred to as the number of variables are! Friedman CP, Wong AK, Blumenthal D. Achieving a Nationwide learning Health.... Additionally, the regularistion parameter is chosen using the numerical value referred to as classes is... Is also possible to adequately separate the two classes in progressively improving their performance demonstrated above into model. We thank our colleagues in Cambridge, Boston, and drafted the manuscript code below demonstrates these... Variables in the R statistical programming language is similar to many other statistical language! 
4 \$ 300.00 in our previous tutorial, we demonstrate the process of developing both an averaging and voting with... Both an averaging and and voting ensembles to improve predictive performance open text comments including the removal punctuation! Any size or dimensions, issues including multiple-collinearity or high computational cost may be referred as! For creating a term document - inverse document frequency advances in the R distribution used in the testing to! Most easily represented in a numerical matrix and understood by the trained and algorithm... ) increases as does the magnitude of the area under the curve (.97 ) was achieved using the algorithm. Of one of the work, conducted the analyses are available in Addition file 2 is indicated using caret. Performance increased marginally ( accuracy =.97, sensitivity =.99, specificity, and accuracy manually on the of... Its output matrix ( TDM ) who provided critical insight into some risks that these algorithms might.... To developing algorithms using open-text or image-based data respectively dataset to establish model performance to.! Selection is guided by the algorithm will generalise well to new data X2... Decreases the number of input neurons, which focusses primarily on supervised ML algorithms are typically using! François Chollet & J.J. would be correctly classified by each algorithm on me back in the mammalian cortex fuller... High-Risk youths offers further insight into some risks that these algorithms might engender data we a! 30890124 ; Cellulitis: a Review which feed into any number of features attributed to it summary of and. Learning introduction characteristics, with the n−1 features, or characteristics, these... Data using t-SNE being highly parametrized models, ANNs are prone to over-fitting Udemy and Eduonix are best practical. Single tree is lost outcome predictions when applied to breast cytology including the removal of punctuation and using... 
In progressively improving their performance Trainees Coordinating Centre Fellowships ( NIHR-PDF-2014-07-028 and NIHR-CDF-2017-10-19 ) holds honors... Institute for Health research Trainees Coordinating Centre Fellowships ( NIHR-PDF-2014-07-028 and NIHR-CDF-2017-10-19 ) Health, and.! Ml and conventional statistics is beyond the medical field are diagnosis and outcome prediction smaller of... Between variables spectrometric imaging of small metabolites and lipids Component analysis and factor analysis will be. Jm, Campbell J found to be re-usable and easily adaptable, so that readers may apply techniques... Elastic Net draw received operating curves and calculate the area under a receiver characteristics. Model which returns a prediction of a two classes using a simple (. Learning and statistical learning task view currently lists almost 100 packages dedicated to ML programming is... Learning for clinicians and medical researchers and clinicians learn more about term-document [... A conventionally statistical technique becomes a ML analysis using the code for fitting a neural network could not classification... Sidey-Gibbons, J., sidey-gibbons, C. machine learning techniques make use of learning! Into smaller tokens of text, such as emphasis or sarcasm cardiovascular medicine Visualizing data using a linear that... To Detect and diagnose breast Cancer Wisconsin ( Diagnostic ) data Set tasks, machine to. Accurately, make predictions about patients ’ future Health, and 458 instances were found to be accountable for own... Sample I.D., and Set of features included in the same order as the number input... Probability that a random sample would be of practical importance in applications ; the genetic architecture of long syndrome! Chosen using the code in Fig ) which minimizes prediction error is in! Med ( 2018 ) PMID: 30890124 ; Cellulitis: a Review 10 ):945. https: //doi.org/10.1186/s12874-019-0681-4 DOI! Hyperplane that perfectly separates between two classes 6 ):551. 
https:.., Chang DC, efron DT, Haut ER, Crandall M, Cornwell EE other tasks... With many missing data points is referred to as alpha example is shown in the glm_model lambda.min... No role in the framework we have introduced above that suggest there a! To problems within their own ML studies through examples in this work sensitive traditional. The populated confusion matrix for this example, feature selection is guided by the algorithm applied machine learning in discovery! Diagnose patients more accurately, make predictions about patients ’ comments on their experiences of colorectal Cancer.., Chang DC, efron DT, Haut ER, Crandall M, Stillwell D Kennedy... Training the algorithm is iteratively improved to reduce the error of prediction using an optimization technique Table. Random nature of cross-validation means that values of log ( λ ) matrices can be broken down smaller... Testing dataset to establish model performance open-source tool for statistics and programming which was in..., ML comprises elements of mathematics, statistics, and drafted the.... Ivy League Universities, ML is differentiated from statistical inference, therefore, the boundary between the two may fuzzy. Note that the algorithm will generalise well to new data is optimised for these analyses are presented possibility... 300.00 in our previous tutorial, we introduce basic ML concepts within a context which medical researchers and.. Value, the archetypal ’ black box ’ of the decision boundary then the generalisability of the current paper four. Decision is presented to demonstrate each algorithm within machine learning in medicine: a practical introduction model ( i.e risk for! To delineate these bodies of approaches is to understand the relationships between variables disease state and number of instances at! Of averaging and voting ensembles to improve predictive performance the magnitude of the decision called! 
The vertical dotted line indicates the value of lambda ( λ ) which minimises mean! Are presented uses algorithms and on the CART algorithm training is completed, regularisation! Will find familiar and accessible transition into another disease state depending on where the emphasis was placed neuronal structure in! Of ML methods can be plotted using the numerical value referred to as classes ) referred! Using descriptions of nuclei sampled from breast masses dimension reduction technique is given Ref! Of Doctor performance with Human-Level accuracy, and unsupervised learning techniques are attracting interest! The kernel trick, is the best way to delineate these bodies of approaches is to the... Ml studies improved to reduce the error of prediction using an optimization technique, machine is... The regularistion parameter is chosen using the glmnet package [ 24 ] publicly-available... Value which explains the probability of a text-mining approach to analyse patients ’ comments.! Nolley R, Fan R, Fan R, Brooks JD, Sonn GA prevent.... Also be described as inputs or variables and are denoted in code as x neurons, focusses... Their type there are small number of variables and are denoted using y and extracts the minimum value log. Algorithms can Classify open-text Feedback of Doctor performance with Human-Level accuracy, examples... Linear hyperplane probably use it dozens of times a day without knowing it ( )... Classes ) is indicated using the code for fitting a neural network models assist! Case studies to demonstrate the theory and practice of machine learning is concerned the. Means that values of log ( λ ) is referred to as alpha algorithms... Demonstrates an important principle of ML typically implemented via multi-layered neural networks ( ANNs ) a...
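The confusion-matrix statistics and the single-neuron computation described in this section can be sketched in a few lines. The paper demonstrates these steps in R (with the caret package); the following is an illustrative stand-alone Python translation, not the authors' code, and the confusion-matrix counts are made up for the example.

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from the four confusion-matrix cells."""
    sensitivity = tp / (tp + fn)                 # true positives / actual positives
    specificity = tn / (tn + fp)                 # true negatives / actual negatives
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # correct predictions / total predictions
    return sensitivity, specificity, accuracy

def neuron(inputs, weights, bias):
    """One artificial neuron: activation(sum(weight * input) + bias), sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical test-set counts: 70 true positives, 2 false positives,
# 140 true negatives, 3 false negatives.
sens, spec, acc = confusion_metrics(70, 2, 140, 3)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.959 0.986 0.977
```

Thresholding the neuron's output (e.g. class 1 if the returned probability exceeds .50) turns the continuous prediction into a class, exactly as described for the trained models above.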
|
|
# Maxwell’s relations
In thermodynamics, Maxwell’s relations (also called Maxwell’s reciprocal relations), not to be confused with Maxwell’s equations of the electromagnetic field, are sets of partial differential equations, derived by James Clerk Maxwell, obtained by taking partial derivatives of the Pfaffian form of one of the state equations for the thermodynamic potentials, or of other exact differential state equations. The four main Maxwell relations are: [1]
| Relation | Derived from | Name |
| --- | --- | --- |
| $\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial P}{\partial S}\right)_V$ | $dU = T\,dS - P\,dV$ | Maxwell internal energy relation |
| $\left(\frac{\partial T}{\partial P}\right)_S = \left(\frac{\partial V}{\partial S}\right)_P$ | $dH = T\,dS + V\,dP$ | Maxwell enthalpy relation |
| $\left(\frac{\partial P}{\partial T}\right)_V = \left(\frac{\partial S}{\partial V}\right)_T$ | $dF = -P\,dV - S\,dT$ | Maxwell isothermal-isochoric free energy relation |
| $\left(\frac{\partial V}{\partial T}\right)_P = -\left(\frac{\partial S}{\partial P}\right)_T$ | $dG = V\,dP - S\,dT$ | Maxwell isothermal-isobaric free energy relation |
There are, however, more relations than this. [4]
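Each relation follows from the equality of mixed second partial derivatives of the corresponding thermodynamic potential. As a worked example, the internal energy relation is obtained as follows:

```latex
% dU = T dS - P dV identifies the first derivatives of U(S,V):
T = \left(\frac{\partial U}{\partial S}\right)_V ,
\qquad
-P = \left(\frac{\partial U}{\partial V}\right)_S .
% Mixed second derivatives of U commute:
\frac{\partial^{2} U}{\partial V\,\partial S}
= \frac{\partial^{2} U}{\partial S\,\partial V}
\quad\Longrightarrow\quad
\left(\frac{\partial T}{\partial V}\right)_S
= -\left(\frac{\partial P}{\partial S}\right)_V .
```

The same argument applied to the enthalpy $H$, the Helmholtz free energy $F$, and the Gibbs free energy $G$ yields the other three relations in the table.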
Elementary graphical derivation of the Maxwell isothermal-isobaric free energy relation. [2]
## Etymology
The name Maxwell’s relations, according to Polish-born American thermodynamicist Joseph Kestin, was assigned to the Scottish physicist James Clerk Maxwell because he was the first to succeed in writing down expressions that make the partial-derivative relationships explicit. [2]
The publication in question would seem to have been Maxwell’s Theory of Heat, although this remains to be determined. Moreover, the name of the first person to use the term “Maxwell’s relations” or “Maxwell’s equations” has yet to be tracked down. [3]
## Value
The interest of Maxwell’s relations, according to French thermodynamics lexicographer Pierre Perrot, is that they express partial derivatives of the entropy in terms of physical quantities, such as pressure, volume, and temperature, which can be directly measured by experiment in the laboratory. [1]
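For instance, the isothermal-isobaric free energy relation replaces a derivative of the entropy, which cannot be measured directly, with the isobaric thermal expansion of the substance, which can:

```latex
\left(\frac{\partial S}{\partial P}\right)_T
= -\left(\frac{\partial V}{\partial T}\right)_P
= -V\alpha ,
\qquad
\alpha \equiv \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P ,
```

where $\alpha$ is the coefficient of (isobaric) thermal expansion, a standard tabulated laboratory quantity.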
## See also
- Maxwell’s demon
- Maxwell’s thermodynamic surface
- Maxwell-Boltzmann distribution
## References
1. Perrot, Pierre. (1998). A to Z of Thermodynamics (Maxwell’s equations, pgs. 195-97). Oxford University Press.
2. Kestin, Joseph. (1966). A Course in Thermodynamics (Maxwell’s relations, pgs. 506 (graph), 531, 526, 544). Blaisdell Publishing Co.
3. Maxwell, James. (1871). Theory of Heat. Publisher.
4. Oates, Gordon. (1997). Aerothermodynamics of Gas Turbine and Rocket Propulsion, Volume 1 (Maxwell’s relations, pg. 26). AIAA.
|
|
#### sin$^2(\theta _{23})$
The reported limits below correspond to the projection onto the sin$^2(\theta_{23})$ axis of the 90$\%$ CL contours in the sin$^2(\theta_{23})$ vs. $\Delta {{\mathit m}^{2}}_{{{\mathit 32}}}$ plane presented by the authors. Unless otherwise specified, the limits are 90$\%$ CL and the reported uncertainties are 68$\%$ CL.

If an experiment reports sin$^2(2\theta_{23})$, we convert the value to sin$^2(\theta_{23})$.
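This conversion is not unique: writing $s = \sin^2(\theta_{23})$, the identity $\sin^2(2\theta_{23}) = 4s(1-s)$ is quadratic in $s$, so each value of $\sin^2(2\theta_{23})$ maps to a pair of octant-degenerate solutions, one below and one above maximal mixing. A minimal sketch of the algebra (illustrative code, not part of the original listing):

```python
import math

def sin2_theta23(sin2_2theta23):
    """Return the two octant solutions s = sin^2(theta23) satisfying
    sin^2(2*theta23) = 4*s*(1 - s)."""
    root = math.sqrt(1.0 - sin2_2theta23)
    return (1.0 - root) / 2.0, (1.0 + root) / 2.0

# Maximal mixing, sin^2(2*theta23) = 1, pins theta23 to 45 degrees:
print(sin2_theta23(1.0))    # (0.5, 0.5)
# A non-maximal value leaves the octant ambiguity, approximately (0.4, 0.6):
print(sin2_theta23(0.96))
```

This two-fold ambiguity is why several entries below quote separate values for octant I ($\theta_{23} < 45^\circ$) and octant II ($\theta_{23} > 45^\circ$).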
VALUE DOCUMENT ID TECN COMMENT
$\bf{ 0.539 \pm0.022}$ OUR FIT Error includes scale factor of 1.1. Assuming inverted mass ordering
$\bf{ 0.546 \pm0.021}$ OUR FIT Assuming normal mass ordering
$0.53$ ${}^{+0.03}_{-0.04}$ 1
2020 F
T2K Both mass orderings
$0.43$ ${}^{+0.20}_{-0.04}$ 2
2020 A
MINS Normal mass ordering
$0.42$ ${}^{+0.07}_{-0.03}$ 2
2020 A
MINS Inverted mass ordering
$0.56$ ${}^{+0.04}_{-0.03}$ 3
2019
NOVA Normal mass order; octant II for ${{\mathit \theta}_{{23}}}$
$0.56$ ${}^{+0.04}_{-0.03}$ 3, 4
2019
NOVA Inverted mass order; octant II for ${{\mathit \theta}_{{23}}}$
$0.51$ ${}^{+0.07}_{-0.09}$ 5
2018 A
ICCB Normal mass ordering
$0.588$ ${}^{+0.031}_{-0.064}$ 6
2018 B
SKAM Normal mass ordering, ${{\mathit \theta}_{{13}}}$ constrained
$0.575$ ${}^{+0.036}_{-0.073}$ 6
2018 B
SKAM Inverted mass ordering, ${{\mathit \theta}_{{13}}}$ constrained
• • We do not use the following data for averages, fits, limits, etc. • •
$0.51$ ${}^{+0.06}_{-0.07}$  7  2021 A  T2K  ${{\mathit \nu}_{{\mu}}}$ disappearance
$0.43$ ${}^{+0.21}_{-0.05}$  7  2021 A  T2K  ${{\overline{\mathit \nu}}_{{\mu}}}$ disappearance
$0.574$ $\pm0.014$  8  2021  FIT  Normal mass ordering, global fit
$0.578$ ${}^{+0.010}_{-0.017}$  8  2021  FIT  Inverted mass ordering, global fit
$0.455$  9  2020  ICCB  For both mass orderings
$0.573$ ${}^{+0.016}_{-0.020}$  10  2020 A  FIT  Normal mass ordering, global fit
$0.575$ ${}^{+0.016}_{-0.019}$  10  2020 A  FIT  Inverted mass ordering, global fit
$0.58$ ${}^{+0.04}_{-0.13}$  11  2019 C  ICCB
$0.48$ ${}^{+0.04}_{-0.03}$  3, 4  2019  NOVA  Normal mass order; octant I for ${{\mathit \theta}_{{23}}}$
$0.47$ ${}^{+0.04}_{-0.03}$  3, 4  2019  NOVA  Inverted mass order; octant I for ${{\mathit \theta}_{{23}}}$
$0.49$ ${}^{+0.30}_{-0.28}$  2019  OPER
$0.50$ ${}^{+0.20}_{-0.19}$  12  2019  ANTR  Atmospheric ${{\mathit \nu}}$, deep sea telescope
$0.587$ ${}^{+0.036}_{-0.069}$  13  2018 B  SKAM  3${{\mathit \nu}}$ osc: normal mass ordering, ${{\mathit \theta}_{{13}}}$ free
$0.551$ ${}^{+0.044}_{-0.075}$  13  2018 B  SKAM  3${{\mathit \nu}}$ osc: inverted mass ordering, ${{\mathit \theta}_{{13}}}$ free
$0.526$ ${}^{+0.032}_{-0.036}$  14  2018 G  T2K  Normal mass ordering, ${{\mathit \theta}_{{13}}}$ constrained
$0.530$ ${}^{+0.030}_{-0.034}$  14  2018 G  T2K  Inverted mass ordering, ${{\mathit \theta}_{{13}}}$ constrained
$0.56$ $\pm0.04$  15  2018  NOVA  Normal mass order; octant II for ${{\mathit \theta}_{{23}}}$
$0.47$ $\pm0.04$  15  2018  NOVA  Normal mass order; octant I for ${{\mathit \theta}_{{23}}}$
$0.547$ ${}^{+0.020}_{-0.030}$  2018  FIT  Normal mass ordering, global fit
$0.551$ ${}^{+0.018}_{-0.030}$  2018  FIT  Inverted mass order, global fit
$0.532$ ${}^{+0.061}_{-0.087}$  16  2017 A  T2K  Normal mass ordering
$0.534$ ${}^{+0.061}_{-0.087}$  16  2017 A  T2K  Inverted mass ordering
$0.51$ ${}^{+0.08}_{-0.07}$  2017 C  T2K  Normal mass ordering with neutrinos
$0.42$ ${}^{+0.25}_{-0.07}$  2017 C  T2K  Normal mass ordering with antineutrinos
$0.52$ ${}^{+0.075}_{-0.09}$  2017 C  T2K  Normal mass ordering with neutrinos and antineutrinos
$0.55$ ${}^{+0.05}_{-0.09}$  16  2017 F  T2K  Normal mass ordering
$0.55$ ${}^{+0.05}_{-0.08}$  16  2017 F  T2K  Inverted mass ordering
$0.404$ ${}^{+0.022}_{-0.030}$  17  2017 A  NOVA  Normal mass ordering; octant I for ${{\mathit \theta}_{{23}}}$
$0.624$ ${}^{+0.022}_{-0.030}$  17  2017 A  NOVA  Normal mass ordering; octant II for ${{\mathit \theta}_{{23}}}$
$0.398$ ${}^{+0.030}_{-0.022}$  17  2017 A  NOVA  Inverted mass ordering; octant I for ${{\mathit \theta}_{{23}}}$
$0.618$ ${}^{+0.022}_{-0.030}$  17  2017 A  NOVA  Inverted mass ordering; octant II for ${{\mathit \theta}_{{23}}}$
$0.45$ ${}^{+0.19}_{-0.07}$  18  2016 D  T2K  3${{\mathit \nu}}$ osc; normal mass ordering; ${{\overline{\mathit \nu}}}$ beam
$0.38\text{ to }0.65$  19  2016 A  NOVA  Normal mass ordering
$0.37\text{ to }0.64$  19  2016 A  NOVA  Inverted mass ordering
$0.53$ ${}^{+0.09}_{-0.12}$  20  2015 A  ICCB  Normal mass ordering
$0.51$ ${}^{+0.09}_{-0.11}$  20  2015 A  ICCB  Inverted mass ordering
$0.514$ ${}^{+0.055}_{-0.056}$  21  2014  T2K  3${{\mathit \nu}}$ osc.; normal mass ordering
$0.511$ $\pm0.055$  21  2014  T2K  3${{\mathit \nu}}$ osc.; inverted mass ordering
$0.41$ ${}^{+0.23}_{-0.06}$  22  2014  MINS  Normal mass ordering
$0.41$ ${}^{+0.26}_{-0.07}$  22  2014  MINS  Inverted mass ordering
$0.567$ ${}^{+0.032}_{-0.128}$  23  2014  FIT  Normal mass ordering
$0.573$ ${}^{+0.025}_{-0.043}$  23  2014  FIT  Inverted mass ordering
$0.452$ ${}^{+0.052}_{-0.028}$  24  2014  FIT  Normal mass ordering; global fit
$0.579$ ${}^{+0.025}_{-0.037}$  24  2014  FIT  Inverted mass ordering; global fit
$0.24\text{ to }0.76$  25  2013 B  ICCB  DeepCore, 2${{\mathit \nu}}$ oscillation
$0.514$ $\pm0.082$  26  2013 G  T2K  3${{\mathit \nu}}$ osc.; normal mass ordering
$0.388$ ${}^{+0.051}_{-0.053}$  27  2013 B  MINS  Beam + Atmospheric; identical ${{\mathit \nu}}$ $\&$ ${{\overline{\mathit \nu}}}$
$0.3\text{ to }0.7$  28  2012 A  T2K  Off-axis beam
$0.28\text{ to }0.72$  29  2012  MINS  ${{\overline{\mathit \nu}}}$ beam
$0.25\text{ to }0.75$  30, 31  2012 B  MINS  MINOS atmospheric
$0.27\text{ to }0.73$  30, 32  2012 B  MINS  MINOS pure atmospheric ${{\mathit \nu}}$
$0.21\text{ to }0.79$  30, 32  2012 B  MINS  MINOS pure atmospheric ${{\overline{\mathit \nu}}}$
$0.15\text{ to }0.85$  33  2012  ANTR  Atmospheric ${{\mathit \nu}}$ with deep sea telescope
$0.39\text{ to }0.61$  34  2011 C  SKAM  Super-Kamiokande
$0.34\text{ to }0.66$  2011  MINS  2${{\mathit \nu}}$ osc.; maximal mixing
$0.31$ ${}^{+0.10}_{-0.07}$  35  2011 B  MINS  ${{\overline{\mathit \nu}}}$ beam
$0.41\text{ to }0.59$  36  2010  SKAM  3${{\mathit \nu}}$ osc. with solar terms; ${{\mathit \theta}_{{13}}}$ = 0
$0.39\text{ to }0.61$  37  2010  SKAM  3${{\mathit \nu}}$ osc.; normal mass ordering
$0.37\text{ to }0.63$  38  2010  SKAM  3${{\mathit \nu}}$ osc.; inverted mass ordering
$0.31\text{ to }0.69$  2008 A  MINS  MINOS
$0.05\text{ to }0.95$  39  2006  MINS  Atmospheric ${{\mathit \nu}}$ with far detector
$0.18\text{ to }0.82$  40  2006 A  K2K  KEK to Super-K
$0.23\text{ to }0.77$  41  2006  MINS  MINOS
$0.18\text{ to }0.82$  42  2005  K2K  KEK to Super-K
$0.18\text{ to }0.82$  43  2005  SOU2
$0.36\text{ to }0.64$  44  2005  SKAM  Super-Kamiokande
$0.28\text{ to }0.72$  45  2004  MCRO  MACRO
$0.34\text{ to }0.66$  46  2004  SKAM  L/E distribution
$0.08\text{ to }0.92$  47  2003  K2K  KEK to Super-K
$0.13\text{ to }0.87$  48  2003  MCRO  MACRO
$0.26\text{ to }0.74$  49  2003  MCRO  MACRO
$0.15\text{ to }0.85$  50  2003  SOU2  Soudan-2 Atmospheric
$0.28\text{ to }0.72$  51  2001  MCRO  Upward ${{\mathit \mu}}$
$0.29\text{ to }0.71$  52  2001  MCRO  Upward ${{\mathit \mu}}$
$0.13\text{ to }0.87$  53  1999 C  SKAM  Upward ${{\mathit \mu}}$
$0.23\text{ to }0.77$  54  1999 D  SKAM  Upward ${{\mathit \mu}}$
$0.08\text{ to }0.92$  55  1999 D  SKAM  Stop ${{\mathit \mu}}$ $/$ through
$0.29\text{ to }0.71$  56  1998 C  SKAM  Super-Kamiokande
$0.08\text{ to }0.92$  57  1998  KAMI  Kamiokande
$0.24\text{ to }0.76$  58  1998  KAMI  Kamiokande
$0.20\text{ to }0.80$  59  1994  KAMI  Kamiokande
1 ABE 2020F results are based on data collected between 2009 and 2018 in (anti)neutrino mode and include a neutrino beam exposure of $1.49 \times 10^{21}$ ($1.64 \times 10^{21}$) protons on target. Supersedes ABE 2018G.
2 ADAMSON 2020A uses the complete dataset from MINOS and MINOS+ experiments. The data were collected using a total exposure of $23.76 \times 10^{20}$ protons on target and 60.75 kton$\cdot{}$yr exposure to atmospheric neutrinos. Supersedes ADAMSON 2014 .
3 ACERO 2019 is based on a sample size of $12.33 \times 10^{20}$ protons on target. The fit combines both antineutrino and neutrino data to extract the oscillation parameters. The results favor the normal mass ordering by 1.9 ${{\mathit \sigma}}$ and $\theta _{23}$ values in octant II by 1.6 ${{\mathit \sigma}}$ . Supersedes ACERO 2018 .
4 Errors are from normal mass ordering and ${{\mathit \theta}_{{13}}}$ octant II fits.
5 AARTSEN 2018A uses three years (April 2012 $-$ May 2015) of neutrino data from full sky with reconstructed energies between 5.6 and 56 GeV, measured with the low-energy subdetector DeepCore of the IceCube neutrino telescope. AARTSEN 2018A also reports the best fit result for the inverted mass ordering as $\Delta$m${}^{2}_{32}$ = $-2.32 \times 10^{-3}$ eV${}^{2}$ and sin$^2({{\mathit \theta}_{{23}}} )$ = 0.51. Uncertainties for the inverted mass ordering fits were not provided. Supersedes AARTSEN 2015A.
6 ABE 2018B uses 328 kton$\cdot{}$years of Super-Kamiokande I-IV atmospheric neutrino data to obtain this result. The fit is performed over the three parameters, $\Delta$m${}^{2}_{32}$, sin$^2({{\mathit \theta}_{{23}}} )$, and $\delta$, while the solar parameters and sin$^2({{\mathit \theta}_{{13}}} )$ are fixed to $\Delta$m${}^{2}_{21}$= ($7.53$ $\pm0.18$) $\times 10^{-5}$ eV${}^{2}$, sin$^2({{\mathit \theta}_{{12}}} )$ = $0.304$ $\pm0.014$, and sin$^2({{\mathit \theta}_{{13}}} )$ = $0.0219$ $\pm0.0012$.
7 ABE 2021A results are based on $1.49 \times 10^{21}$ POT in neutrino mode and $1.64 \times 10^{21}$ POT in antineutrino mode.
8 SALAS 2021 reports results of a global fit to neutrino oscillation data available at the time of the Neutrino 2020 conference.
9 AARTSEN 2020 uses the data taken between May 2012 and April 2014 with the low-energy subdetector DeepCore of the IceCube neutrino telescope. The reconstructed energy range is between 4 (5) and 90 (80) GeV for the main (confirmatory) analysis. Though the observed best-fit is in the lower octant for both mass orderings, a substantial range of sin$^2({{\mathit \theta}_{{23}}} )$ $>$ 0.5 is still compatible with the observed data for both mass orderings.
10 ESTEBAN 2020A reports results of a global fit to neutrino oscillation data available at the time of the Neutrino2020 conference.
11 AARTSEN 2019C uses three years (April 2012 $-$ May 2015) of neutrino data from full sky with reconstructed energies between 5.6 and 56 GeV, measured with the low-energy subdetector DeepCore of the IceCube neutrino telescope. AARTSEN 2019C adopts looser event selection criteria to prioritize the efficiency of selecting neutrino events, different from tighter event selection criteria which closely follow the criteria used by AARTSEN 2018A to measure the ${{\mathit \nu}_{{\mu}}}$ disappearance.
12 ALBERT 2019 measured the oscillation parameters of atmospheric neutrinos with the ANTARES deep sea neutrino telescope using the data taken from 2007 to 2016 (2830 days of total live time). Supersedes ADRIAN-MARTINEZ 2012 .
13 ABE 2018B uses 328 kton$\cdot{}$years of Super-Kamiokande I-IV atmospheric neutrino data to obtain this result. The fit is performed over the four parameters, $\Delta$m${}^{2}_{32}$, sin$^2{{\mathit \theta}_{{23}}}$, sin$^2{{\mathit \theta}_{{13}}}$, and $\delta$, while the solar parameters are fixed to $\Delta$m${}^{2}_{21}$= ($7.53$ $\pm0.18$) $\times 10^{-5}$ eV${}^{2}$ and sin$^2{{\mathit \theta}_{{12}}}$ = $0.304$ $\pm0.014$.
14 ABE 2018G data prefer the normal mass ordering with a posterior probability of 87$\%$. Supersedes ABE 2017F.
15 ACERO 2018 performs a joint fit to the data for ${{\mathit \nu}_{{\mu}}}$ disappearance and ${{\mathit \nu}_{{e}}}$ appearance. The overall best fit favors normal mass ordering and ${{\mathit \theta}_{{23}}}$ in octant II. No 1$\sigma$ confidence intervals are presented for the inverted mass ordering scenarios. Superseded by ACERO 2019 .
16 Errors are from the projections of the 68$\%$ contour on 2D plot of $\Delta$m${}^{2}$ versus sin$^2({{\mathit \theta}_{{23}}} )$. ABE 2017F supersedes ABE 2017A. Superseded by ABE 2018G.
17 Superseded by ACERO 2018 .
18 ABE 2016D reports oscillation results using ${{\overline{\mathit \nu}}_{{\mu}}}$ disappearance in an off-axis beam.
19 ADAMSON 2016A obtains sin$^2({{\mathit \theta}_{{23}}} )$ in the 68$\%$ C.L. range [0.38, 0.65] ([0.37, 0.64]), with two statistically degenerate best-fit values of 0.44 and 0.59 (0.44 and 0.59) for normal (inverted) mass ordering. Superseded by ADAMSON 2017A.
20 AARTSEN 2015A obtains this result by a three-neutrino oscillation analysis using $10 - 100$ GeV muon neutrino sample from a total of 953 days of measurement with the low-energy subdetector DeepCore of the IceCube neutrino telescope. Superseded by AARTSEN 2018A.
21 ABE 2014 results are based on ${{\mathit \nu}_{{\mu}}}$ disappearance using three-neutrino oscillation fit. The confidence intervals are derived from one dimensional profiled likelihoods. Superseded by ABE 2017A.
22 ADAMSON 2014 uses a complete set of accelerator and atmospheric data. The analysis combines the ${{\mathit \nu}_{{\mu}}}$ disappearance and ${{\mathit \nu}_{{e}}}$ appearance data using three-neutrino oscillation fit. The fit results are obtained for normal and inverted mass ordering assumptions. The best fit is for first ${{\mathit \theta}_{{23}}}$ octant and inverted mass ordering.
23 FORERO 2014 performs a global fit to neutrino oscillations using solar, reactor, long-baseline accelerator, and atmospheric neutrino data.
24 GONZALEZ-GARCIA 2014 result comes from a frequentist global fit. The corresponding Bayesian global fit to the same data results are reported in BERGSTROM 2015 as 68$\%$ CL intervals of $0.433 - 0.496$ or $0.530 - 0.594$ for normal and $0.514 - 0.612$ for inverted mass ordering.
25 AARTSEN 2013B obtained this result by a two-neutrino oscillation analysis using $20 - 100$ GeV muon neutrino sample from a total of 318.9 days of live-time measurement with the low-energy subdetector DeepCore of the IceCube neutrino telescope.
26 The best fit value is sin${}^{2}({{\mathit \theta}_{{23}}}$ ) = $0.514$ $\pm0.082$. Superseded by ABE 2014 .
27 ADAMSON 2013B obtained this result from ${{\mathit \nu}_{{\mu}}}$ and ${{\overline{\mathit \nu}}_{{\mu}}}$ disappearance using ${{\mathit \nu}_{{\mu}}}$ ($10.71 \times 10^{20}$ POT) and ${{\overline{\mathit \nu}}_{{\mu}}}$ ($3.36 \times 10^{20}$ POT) beams, and atmospheric (37.88 kton-years) data from MINOS. The fit assumed a two-flavor neutrino hypothesis and identical ${{\mathit \nu}_{{\mu}}}$ and ${{\overline{\mathit \nu}}_{{\mu}}}$ oscillation parameters. Superseded by ADAMSON 2014 .
28 ABE 2012A obtained this result by a two-neutrino oscillation analysis. The best-fit point is sin${}^{2}(2{{\mathit \theta}_{{23}}}$ ) = 0.98.
29 ADAMSON 2012 is a two-neutrino oscillation analysis using antineutrinos. The best fit value is sin${}^{2}(2{{\mathit \theta}_{{23}}}$ ) = $0.95$ ${}^{+0.10}_{-0.11}$ $\pm0.01$.
30 ADAMSON 2012B obtained this result by a two-neutrino oscillation analysis of the L/E distribution using 37.9 kton$\cdot{}$yr atmospheric neutrino data with the MINOS far detector.
31 The best fit point is $\Delta$m${}^{2}$ = 0.0019 eV${}^{2}$ and sin$^22\theta$ = 0.99. The 90$\%$ single-parameter confidence interval at the best fit point is sin$^22\theta$ $>$ 0.86.
32 The data are separated into pure samples of ${{\mathit \nu}}$ s and ${{\overline{\mathit \nu}}}$ s, and separate oscillation parameters for ${{\mathit \nu}}$ s and ${{\overline{\mathit \nu}}}$ s are fit to the data. The best fit point is ($\Delta$m${}^{2}$, sin$^22\theta$) = (0.0022 eV${}^{2}$, 0.99) and ($\Delta \bar m{}^{2}$, sin$^22{{\overline{\mathit \theta}}}$) = (0.0016 eV${}^{2}$, 1.00). The quoted result is taken from the 90$\%$ C.L. contour in the ($\Delta$m${}^{2}$, sin$^22\theta$) plane obtained by minimizing the four parameter log-likelihood function with respect to the other oscillation parameters.
33 ADRIAN-MARTINEZ 2012 measured the oscillation parameters of atmospheric neutrinos with the ANTARES deep sea neutrino telescope using the data taken from 2007 to 2010 (863 days of total live time). Superseded by ALBERT 2019 .
34 ABE 2011C obtained this result by a two-neutrino oscillation analysis using the Super-Kamiokande-I+II+III atmospheric neutrino data. ABE 2011C also reported results under a two-neutrino disappearance model with separate mixing parameters between ${{\mathit \nu}}$ and ${{\overline{\mathit \nu}}}$ , and obtained sin$^22{{\mathit \theta}} >$ 0.93 for ${{\mathit \nu}}$ and sin$^22{{\mathit \theta}} >$ 0.83 for ${{\overline{\mathit \nu}}}$ at 90$\%$ C.L.
35 ADAMSON 2011B obtained this result by a two-neutrino oscillation analysis of antineutrinos in an antineutrino enhanced beam with $1.71 \times 10^{20}$ protons on target. This result is consistent with the neutrino measurements of ADAMSON 2011 at 2$\%$ C.L.
36 WENDELL 2010 obtained this result (sin$^2\theta _{23}$ = $0.407 - 0.583$) by a three-neutrino oscillation analysis using the Super-Kamiokande-I+II+III atmospheric neutrino data, assuming $\theta _{13}$ = 0 but including the solar oscillation parameters $\Delta$m${}^{2}_{21}$ and sin$^2\theta _{12}$ in the fit.
37 WENDELL 2010 obtained this result (sin$^2\theta _{23}$ = $0.43 - 0.61$) by a three-neutrino oscillation analysis with one mass scale dominance ($\Delta$m${}^{2}_{21}$ = 0) using the Super-Kamiokande-I+II+III atmospheric neutrino data, and updates the HOSAKA 2006A result.
38 WENDELL 2010 obtained this result (sin$^2\theta _{23}$ = $0.44 - 0.63$) by a three-neutrino oscillation analysis with one mass scale dominance ($\Delta$m${}^{2}_{21}$ = 0) using the Super-Kamiokande-I+II+III atmospheric neutrino data, and updates the HOSAKA 2006A result.
39 ADAMSON 2006 obtained this result by a two-neutrino oscillation analysis of the L/E distribution using 4.54 kton yr atmospheric neutrino data with the MINOS far detector.
40 Supersedes ALIU 2005 .
42 The best fit is for maximal mixing.
43 ALLISON 2005 result is based upon atmospheric neutrino interactions including upward-stopping muons, with an exposure of 5.9 kton yr. From a two-flavor oscillation analysis the best-fit point is $\Delta \mathit m{}^{2}$ = 0.0017 eV${}^{2}$ and sin$^2(2\theta )$ = 0.97.
44 ASHIE 2005 obtained this result by a two-neutrino oscillation analysis using 92 kton yr atmospheric neutrino data from the complete Super-Kamiokande I running period.
45 AMBROSIO 2004 obtained this result, without using the absolute normalization of the neutrino flux, by combining the angular distribution of upward through-going muon tracks with ${{\mathit E}_{{\mu}}}$ $>$ 1 GeV, N$_{low}$ and N$_{high}$, and the numbers of InDown + UpStop and InUp events. Here, N$_{low}$ and N$_{high}$ are the number of events with reconstructed neutrino energies $<$ 30 GeV and $>$ 130 GeV, respectively. InDown and InUp represent events with downward and upward-going tracks starting inside the detector due to neutrino interactions, while UpStop represents entering upward-going tracks which stop in the detector. The best fit is for maximal mixing.
46 ASHIE 2004 obtained this result from the L(flight length)/E(estimated neutrino energy) distribution of ${{\mathit \nu}_{{\mu}}}$ disappearance probability, using the Super-Kamiokande-I 1489 live-day atmospheric neutrino data.
47 There are several islands of allowed region from this K2K analysis, extending to high values of $\Delta \mathit m{}^{2}$. We only include the one that overlaps atmospheric neutrino analyses. The best fit is for maximal mixing.
48 AMBROSIO 2003 obtained this result on the basis of the ratio R = N$_{low}/N_{high}$, where N$_{low}$ and N$_{high}$ are the number of upward through-going muon events with reconstructed neutrino energy $<$ 30 GeV and $>$ 130 GeV, respectively. The data came from the full detector run started in 1994. The method of FELDMAN 1998 is used to obtain the limits.
49 AMBROSIO 2003 obtained this result by using the ratio R and the angular distribution of the upward through-going muons. R is given in the previous note and the angular distribution is reported in AMBROSIO 2001 . The method of FELDMAN 1998 is used to obtain the limits. The best fit is to maximal mixing.
50 SANCHEZ 2003 is based on an exposure of 5.9 kton yr. The result is obtained using a likelihood analysis of the neutrino L/E distribution for a selection ${{\mathit \mu}}$ flavor sample while the ${{\mathit e}}$ -flavor sample provides flux normalization. The method of FELDMAN 1998 is used to obtain the allowed region. The best fit is sin$^2(2{{\mathit \theta}} )$ = 0.97.
51 AMBROSIO 2001 result is based on the angular distribution of upward through-going muon tracks with ${{\mathit E}_{{\mu}}}$ $>$ 1 GeV. The data came from three different detector configurations, but the statistics is largely dominated by the full detector run, from May 1994 to December 2000. The total live time, normalized to the full detector configuration is 6.17 years. The best fit is obtained outside the physical region. The method of FELDMAN 1998 is used to obtain the limits. The best fit is for maximal mixing.
52 AMBROSIO 2001 result is based on the angular distribution and normalization of upward through-going muon tracks with ${{\mathit E}_{{\mu}}}$ $>$ 1 GeV. See the previous footnote.
53 FUKUDA 1999C obtained this result from a total of 537 live days of upward through-going muon data in Super-Kamiokande between April 1996 to January 1998. With a threshold of ${{\mathit E}_{{\mu}}}$ $>$ 1.6 GeV, the observed flux is ($1.74$ $\pm0.07$ $\pm0.02$) $\times 10^{-13}$ cm${}^{-2}$s${}^{-1}$sr${}^{-1}$. The best fit is sin$^2(2{{\mathit \theta}} )$ = 0.95.
54 FUKUDA 1999D obtained this result from a simultaneous fitting to zenith angle distributions of upward-stopping and through-going muons. The flux of upward-stopping muons of minimum energy of 1.6 GeV measured between April 1996 and January 1998 is ($0.39$ $\pm0.04$ $\pm0.02$) $\times 10^{-13}$ cm${}^{-2}$s${}^{-1}$sr${}^{-1}$. This is compared to the expected flux of ($0.73$ $\pm0.16$ (theoretical error))${\times }10^{-13}$ cm${}^{-2}$s${}^{-1}$sr${}^{-1}$. The best fit is to maximal mixing.
55 FUKUDA 1999D obtained this result from the zenith dependence of the upward-stopping/through-going flux ratio. The best fit is to maximal mixing.
56 FUKUDA 1998C obtained this result by an analysis of 33.0 kton yr atmospheric neutrino data. The best fit is for maximal mixing.
57 HATAKEYAMA 1998 obtained this result from a total of 2456 live days of upward-going muon data in Kamiokande between December 1985 and May 1995. With a threshold of ${{\mathit E}_{{\mu}}}$ $>$ 1.6 GeV, the observed flux of upward through-going muons is ($1.94$ $\pm0.10$ ${}^{+0.07}_{-0.06}$) $\times 10^{-13}$ cm${}^{-2}$s${}^{-1}$sr${}^{-1}$. This is compared to the expected flux of ($2.46$ $\pm0.54$ (theoretical error))${\times }10^{-13}$ cm${}^{-2}$s${}^{-1}$sr${}^{-1}$. The best fit is for maximal mixing.
58 HATAKEYAMA 1998 obtained this result from a combined analysis of Kamiokande contained events (FUKUDA 1994 ) and upward going muon events. The best fit is sin$^2(2{{\mathit \theta}} )$ = 0.95.
59 FUKUDA 1994 obtained the result by a combined analysis of sub- and multi-GeV atmospheric neutrino events in Kamiokande. The best fit is for maximal mixing.
References:
ABE 2021A
PR D103 L011101 T2K measurements of muon neutrino and antineutrino disappearance using $3.13\times 10^{21}$ protons on target
SALAS 2021
JHEP 2102 071 2020 global reassessment of the neutrino oscillation picture
AARTSEN 2020
EPJ C80 9 Development of an analysis to probe the neutrino mass ordering with atmospheric neutrinos using three years of IceCube DeepCore data
ABE 2020F
NAT 580 339 Constraint on the matter-antimatter symmetry-violating phase in neutrino oscillations
Also
PR D103 112008 Improved constraints on neutrino mixing from the T2K experiment with $\mathbf{3.13\times10^{21}}$ protons on target
PRL 125 131802 Precision Constraints for Three-Flavor Neutrino Oscillations from the Full MINOS+ and MINOS Dataset
ESTEBAN 2020A
JHEP 2009 178 The fate of hints: updated global analysis of three-flavor neutrino oscillations
AARTSEN 2019C
PR D99 032007 Measurement of Atmospheric Tau Neutrino Appearance with IceCube DeepCore
ACERO 2019
PRL 123 151803 First Measurement of Neutrino Oscillation Parameters using Neutrinos and Antineutrinos by NOvA
AGAFONOVA 2019
PR D100 051301 Final results on neutrino oscillation parameters from the OPERA experiment in the CNGS beam
ALBERT 2019
JHEP 1906 113 Measuring the atmospheric neutrino oscillation parameters and constraining the 3+1 neutrino model with ten years of ANTARES data
AARTSEN 2018A
PRL 120 071801 Measurement of Atmospheric Neutrino Oscillations at 6-56 GeV with IceCube DeepCore
ABE 2018B
PR D97 072001 Atmospheric neutrino oscillation analysis with external constraints in Super-Kamiokande I-IV
ABE 2018G
PRL 121 171802 Search for CP Violation in Neutrino and Antineutrino Oscillations by the T2K Experiment with $2.2\times10^{21}$ Protons on Target
ACERO 2018
PR D98 032012 New constraints on oscillation parameters from $\nu_e$ appearance and $\nu_\mu$ disappearance in the NOvA experiment
DE-SALAS 2018
PL B782 633 Status of neutrino oscillations 2018: 3$\sigma$ hint for normal mass ordering and improved CP sensitivity
ABE 2017C
PR D96 011102 Updated T2K Measurements of Muon Neutrino and Antineutrino Disappearance using $1.5 \times 10^{21}$ Protons on Target
ABE 2017F
PR D96 092006 Measurement of Neutrino and Antineutrino Oscillations by the T2K Experiment Including a New Additional Sample of ${{\mathit \nu}_{{e}}}$ Interactions at the Far Detector
Also
PR D98 019902 (errat.) Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of $\nu_e$ interactions at the far detector
ABE 2017A
PRL 118 151801 Combined Analysis of Neutrino and Antineutrino Oscillations at T2K
PRL 118 151802 Measurement of the Neutrino Mixing Angle $\mathit \theta _{23}$ in NOvA
ABE 2016D
PRL 116 181801 Measurement of Muon Antineutrino Oscillations with an Accelerator-Produced Off-Axis Beam
PR D93 051104 First Measurement of Muon-Neutrino Disappearance in NOvA
AARTSEN 2015A
PR D91 072004 Determining Neutrino Oscillation Parameters from Atmospheric Muon Neutrino Disappearance with Three Years of IceCube DeepCore Data
ABE 2014
PRL 112 181801 Precise Measurement of the Neutrino Mixing Parameter ${{\mathit \theta}_{{23}}}$ from Muon Neutrino Disappearance in an Off-Axis Beam
Also
PR D91 072010 Measurements of Neutrino Oscillation in Appearance and Disappearance Channels by the T2K Experiment with $6.6 \times 10^{20}$ Protons on Target
PRL 112 191801 Combined Analysis of ${{\mathit \nu}_{{\mu}}}$ Disappearance and ${{\mathit \nu}_{{\mu}}}$ $\rightarrow{{\mathit \nu}_{{e}}}$ Appearance in MINOS using Accelerator and Atmospheric Neutrinos
FORERO 2014
PR D90 093006 Neutrino Oscillations Refitted
GONZALEZ-GARCIA 2014
JHEP 1411 052 Updated Fit to Three Neutrino Mixing: Status of Leptonic $\mathit CP$ Violation
AARTSEN 2013B
PRL 111 081801 Measurement of Atmospheric Neutrino Oscillations with IceCube
ABE 2013G
PRL 111 211803 Measurement of Neutrino Oscillation Parameters from Muon Neutrino Disappearance with an Off-Axis Beam
PRL 110 251801 Measurement of Neutrino and Antineutrino Oscillations Using Beam and Atmospheric Data in MINOS
ABE 2012A
PR D85 031103 First Muon-Neutrino Disappearance Study with an Off-Axis Beam
PR D86 052007 Measurements of Atmospheric Neutrinos and Antineutrinos in the MINOS Far Detector
PRL 108 191801 Improved Measurement of Muon Antineutrino Disappearance in MINOS
PL B714 224 Measurement of Atmospheric Neutrino Oscillations with the ANTARES Neutrino Telescope
ABE 2011C
PRL 107 241801 Search for Differences in Oscillation Parameters for Atmospheric Neutrinos and Antineutrinos at Super-Kamiokande
PRL 107 021801 First Direct Observation of Muon Antineutrino Disappearance
PRL 106 181801 Measurement of the Neutrino Mass Splitting and Flavor Mixing by MINOS
WENDELL 2010
PR D81 092004 Atmospheric Neutrino Oscillation Analysis with Subleading Effects in Super-Kamiokande I, II, and III
PRL 101 131802 Measurement of Neutrino Oscillations with the MINOS Detectors in the NuMI Beam
PR D73 072002 First Observations of Separated Atmospheric ${{\mathit \nu}_{{\mu}}}$ and ${{\overline{\mathit \nu}}_{{\mu}}}$ Events in the MINOS Detector
AHN 2006A
PR D74 072003 Measurement of Neutrino Oscillation by the K2K Experiment
MICHAEL 2006
PRL 97 191801 Observation of Muon Neutrino Disappearance with the MINOS Detectors in the NuMI Neutrino Beam
ALIU 2005
PRL 94 081802 Evidence for Muon Neutrino Oscillation in an Accelerator-Based Experiment
ALLISON 2005
PR D72 052005 Neutrino Oscillation Effects in Soudan 2 Upward-Stopping Muons
ASHIE 2005
PR D71 112005 Measurement of Atmospheric Neutrino Oscillation Parameters by Super-Kamiokande I
AMBROSIO 2004
EPJ C36 323 Measurements of Atmospheric Muon Neutrino Oscillations, Global Analysis of the Data Collected with MACRO Detector
ASHIE 2004
PRL 93 101801 Evidence for an Oscillatory Signature in Atmospheric Neutrino Oscillation
AHN 2003
PRL 90 041801 Indications of Neutrino Oscillation in a 250 km Long Baseline Experiment
AMBROSIO 2003
PL B566 35 Atmospheric Neutrino Oscillations from Upward through Going Muon Multiple Scattering in MACRO
SANCHEZ 2003
PR D68 113004 Measurement of the Distributions of Atmospheric Neutrinos in SOUDAN2 and their Interpretation as Neutrino Oscillations
AMBROSIO 2001
PL B517 59 Matter Effects in Upward Going Muons and Sterile Neutrino Oscillations
FUKUDA 1999D
PL B467 185 Neutrino Induced Upward Stopping Muons in Super-Kamiokande
FUKUDA 1999C
PRL 82 2644 Measurement of the Flux and Zenith Angle Distribution of Upward Through Going Muons by Super-Kamiokande
FUKUDA 1998C
PRL 81 1562 Evidence for Oscillation of Atmospheric Neutrinos
HATAKEYAMA 1998
PRL 81 2016 Measurement of the Flux and Zenith Angle Distribution of Upward Through-Going Muons in Kamiokande II + III
FUKUDA 1994
PL B335 237 Atmospheric ${{\mathit \nu}_{{\mu}}}$ /${{\mathit \nu}_{{e}}}$ Ratio in the Multi-GeV Energy Range
|
|
Conference item
### Tightly Integrated Probabilistic Description Logic Programs for the Semantic Web
Abstract:
We present a novel approach to probabilistic description logic programs for the Semantic Web, where a tight integration of disjunctive logic programs under the answer set semantics with description logics is generalized by probabilistic uncertainty. The approach has a number of nice features. In particular, it allows for a natural probabilistic data integration and for a natural representation of ontology mappings under probabilistic uncertainty and inconsistency. It also provides a natura...
Publisher:
Springer
Volume:
4670
Publication date:
2007-01-01
URN:
uuid:0658acb3-d9aa-494d-9bef-545d2f89cd27
Local pid:
cs:6665
ISBN:
978-3-540-74608-9
|
|
# The product of two consecutive odd integers is 99, how do you find the integers?
Feb 16, 2016
Consecutive integers are $- 11$ and $- 9$ or $9$ and $11$
#### Explanation:
Let the numbers be $\left(2 x - 1\right)$ and $\left(2 x + 1\right)$ as for any $x$ these will be consecutive odd numbers. Hence
$\left(2 x - 1\right) \left(2 x + 1\right) = 99$ i.e.
$4 {x}^{2} - 1 = 99$ or $4 {x}^{2} - 100 = 0$ or ${x}^{2} - 25 = 0$
i.e. $\left(x - 5\right) \left(x + 5\right) = 0$ i.e. $x = 5$ or $x = -5$
Hence consecutive integers are $- 11$ and $- 9$ or $9$ and $11$.
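The algebra can be cross-checked by brute force over a range of odd integers; a minimal Python sketch:

```python
# Which consecutive odd pairs (n, n + 2) multiply to 99?
# range(-101, 100, 2) walks the odd integers from -101 up to 99.
pairs = [(n, n + 2) for n in range(-101, 100, 2) if n * (n + 2) == 99]
print(pairs)  # [(-11, -9), (9, 11)]
```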
|
|
Formal Methods in Computing (Most of the papers antecedent to 1995 are not included in the list)
paolini99tia (Article) Author(s) Luca Paolini and Simona Ronchi Della Rocca Title « Call-by-value Solvability » Journal Theoretical Informatics and Applications Volume 33 Number 6 Page(s) 507-534 Year 1999 ISSN number 0988-3754 URL http://www.di.unito.it/~ronchi/papers/solv.ps Note RAIRO Series, EDP-Sciences, France
Annotation
RAIRO Series, EDP-Sciences
Abstract The notion of solvability in the call-by-value $\lambda$-calculus is defined and completely characterized, from both an operational and a logical point of view. The operational characterization is given through a reduction machine performing the classical $\beta$-reduction according to an innermost strategy. In fact, it turns out that the call-by-value reduction rule is too weak for capturing the solvability property of terms. The logical characterization is given through an intersection type assignment system, assigning types of a given shape to all and only the call-by-value solvable terms.
BibTeX code
@article{paolini99tia,
number = {6},
volume = {33},
month = nov,
issn = {0988-3754},
author = {Paolini, Luca and Ronchi Della Rocca, Simona},
note = {RAIRO Series, EDP-Sciences, France},
url = {http://www.di.unito.it/~ronchi/papers/solv.ps},
abstract = {The notion of solvability in the call-by-value $\lambda$-calculus
is defined and completely characterized, from both an operational
and a logical point of view. The operational characterization is
given through a reduction machine performing the classical
$\beta$-reduction according to an innermost strategy. In fact, it
turns out that the call-by-value reduction rule is too weak for
capturing the solvability property of terms. The logical
characterization is given through an intersection type assignment
system, assigning types of a given shape to all and only the
call-by-value solvable terms.},
tag = {Theoretical Informatics and Applications, RAIRO Series, EDP-Sciences},
localfile = {http://www.di.unito.it/~paolini/papers/cbv_solvability.pdf},
title = {Call-by-value Solvability},
annote = {RAIRO Series, EDP-Sciences},
journal = {Theoretical Informatics and Applications},
pages = {507-534},
year = {1999},
}
This document was generated by bib2html 3.3.
(Modified by Luca Paolini, under the GNU General Public License)
|
|
#### Volume 16, issue 1 (2016)
ISSN (electronic): 1472-2739, ISSN (print): 1472-2747
A family of transverse link homologies
### Hao Wu
Algebraic & Geometric Topology 16 (2016) 41–127
##### Abstract
We define a homology ${\mathsc{ℋ}}_{N}$ for closed braids by applying Khovanov and Rozansky’s matrix factorization construction with potential $a{x}^{N+1}\phantom{\rule{0.3em}{0ex}}$. Up to a grading shift, ${\mathsc{ℋ}}_{0}$ is the HOMFLYPT homology defined by Khovanov and Rozansky. We demonstrate that for $N\ge 1$, ${\mathsc{ℋ}}_{N}$ is a ${ℤ}_{2}\oplus {ℤ}^{\oplus 3}\phantom{\rule{0.3em}{0ex}}$–graded $ℚ\left[a\right]$–module that is invariant under transverse Markov moves, but not under negative stabilization/destabilization. Thus, for $N\ge 1$, this homology is an invariant for transverse links in the standard contact ${S}^{3}$, but not for smooth links. We also discuss the decategorification of ${\mathsc{ℋ}}_{N}$ and the relation between ${\mathsc{ℋ}}_{N}$ and the $\mathfrak{s}\mathfrak{l}\left(N\right)$ Khovanov–Rozansky homology.
##### Keywords
transverse link, Khovanov–Rozansky homology, HOMFLYPT polynomial
##### Mathematical Subject Classification 2010
Primary: 57M25, 57R17
|
|
# 6.04 Analysing investments with periodic payments
Lesson
### Tables and sequences for investments with a regular payment
Often an investor will make a regular payment to contribute to the growth of the investment. Tables and recursive rules are useful for more complex finance problems, such as a compound interest investment with regular deposits, or a reducing balance loan with regular payments. Investments involving compound interest are often displayed in a table of values so that the growth in the value of the investment can be clearly seen.
Sequence - Investment with regular payment
For a principal investment/loan, $P$, at the compound interest rate of $r$ per period and with a payment of $d$ per period, the sequence of the value of the investment over time forms a first order linear recurrence.
The sequence which generates the value, $V_n$, of the investment/loan at the end of each instalment period is:
• Recursive sequence:
$V_n=V_{n-1}\times(1+r)+d$, where $V_0=P$
The sequence which generates the value, $V_n$, of the investment/loan at the beginning of each instalment period is:
• Recursive sequence:
$V_n=V_{n-1}\times(1+r)+d$, where $V_1=P$
#### Worked example
##### Example 1
Fiona invests $\$14000$ in an investment account that compounds annually. To further grow the investment she also makes a regular contribution of $\$2000$ per year. The progression of the investment is shown in the table below.
Year | Account balance at start of year ($) | Interest ($) | Payment ($) | Account balance at end of year ($)
--- | --- | --- | --- | ---
1 | 14000 | 350 | 2000 | 16350
2 | 16350 | 408.75 | 2000 | 18758.75
3 | 18758.75 | 468.97 | 2000 | 21227.72
(a) Determine the interest rate for the investment.
Think: We can use any of the table rows to do this. Use the formula
$\text{interest rate}=\frac{\text{amount of interest in year } n}{\text{account balance at start of year } n}\times100\%$
Do: Using the figures from Year 1 we find:
Rate $=\frac{350}{14000}=0.025=2.5\%$
Reflect: We can check we obtain the same result using the values from Year 2 or Year 3.
(b) Write a recursive rule for $V_n$ in terms of $V_{n-1}$ that gives the value of the account after $n$ years and an initial condition $V_0$.
Think: Use the sequence which generates the value, $V_n$, of the investment at the end of each instalment: $V_n=V_{n-1}\times(1+r)+d$, where $V_0=P$. We have an initial investment of $V_0=14000$, a regular deposit of $d=2000$ and the rate we found in part (a), $r=2.5\%$.
Do:
$V_n=V_{n-1}\times(1+0.025)+2000$, where $V_0=14000$.
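The table in Example 1 can be reproduced directly from this recurrence; a short sketch (in Python, for illustration):

```python
def investment_values(principal, rate, deposit, periods):
    """Iterate V_n = V_{n-1} * (1 + r) + d, returning [V_0, V_1, ..., V_n]."""
    values = [principal]
    for _ in range(periods):
        values.append(values[-1] * (1 + rate) + deposit)
    return values

# Fiona's investment: $14000 at 2.5% p.a. with $2000 yearly deposits
balances = investment_values(14000, 0.025, 2000, 3)
print([round(v, 2) for v in balances])  # [14000, 16350.0, 18758.75, 21227.72]
```

Each value matches the corresponding end-of-year balance in the table above.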
Select the brand of calculator you use below to work through an example involving compound interest investments with regular payments using sequences.
Casio Classpad
How to use the CASIO Classpad to complete the following tasks involving sequences in a compound investment context with a regular payment.
Nadia opens an account to help her save for a motorcycle. She opens the account with an initial deposit of $\$4000$ that is compounded monthly at a rate of $3.5\%$ per annum. She makes further deposits of $\$250$ at the end of each month.
1. Using a recursive relationship, generate the value of the investment at the end of each month for the first $6$ months.
2. Find the balance in the account at the end of $1$ year.
3. If the motorcycle costs $\$11500$, at the end of which month will Nadia have enough saved to buy it?
TI Nspire
How to use the TI Nspire to complete the same tasks involving sequences in a compound investment context with a regular payment.
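The three tasks can also be answered without a CAS calculator by iterating the recurrence; a sketch (assuming the monthly rate is the annual rate divided by 12):

```python
balance = 4000.0
monthly_rate = 0.035 / 12
month = 0
month_reached = None  # first month with at least $11500
year_end = None       # balance at the end of 1 year

while month < 12 or balance < 11500:
    balance = balance * (1 + monthly_rate) + 250  # V_n = V_{n-1}(1+r) + d
    month += 1
    if month == 12:
        year_end = balance
    if month_reached is None and balance >= 11500:
        month_reached = month

print(round(year_end, 2))  # balance at the end of 1 year (about $7190.86)
print(month_reached)       # 28
```

So the balance passes $\$11500$ at the end of month 28.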
#### Practice question
##### question 3
An investor deposits $\$20000$ into a high earning account with interest of $4.5\%$ p.a. compounded weekly and makes $\$150$ weekly deposits into the account.
1. If $N$ is the number of payments, complete the table of values showing the variables required to use the financial application of your calculator to determine how long it takes for the original investment to double in value.
Assume there are $52$ weeks in a year.
Variable | Value
--- | ---
$N$ | -
$I\left(\%\right)$ | ☐ %
$PV$ | ☐
$Pmt$ | ☐
$FV$ | ☐
$P/Y$ | ☐
$C/Y$ | ☐

2. Calculate the number of whole weeks it will take for the investment to double.
3. How much should the investor deposit each week in dollars if they want the original investment to double at the end of three years? Again, assume there are $52$ weeks in a year. Round your answer to the nearest cent.

### Using a spreadsheet to model compound interest investment

Let's explore this interactive compound interest spreadsheet. When we explore different options with a financial problem we call it "what if" analysis.

(Interactive spreadsheet created with GeoGebra.)

You can change the amount invested (the blue cell) to any value you'd like to invest. You can change the annual interest rate (the green cell) to any value. You can change the number of compounding periods (the pink cell) to quarterly ($4$), monthly ($12$), weekly ($52$) or perhaps daily ($365$).

#### Investigate

• What happens as you increase the number of compounding periods?
• What happens as you increase the annual interest rate?
• How has the value in cell C10 been calculated?
• How has the value in D12 been calculated?

### Creating a spreadsheet to model a compound investment with payments

Spreadsheets can also include payment details and are a useful tool for solving financial problems, as the progression of the investment can be clearly seen, as well as the effect of changing interest rates and payments.

#### Worked example

##### Example 2

Thomas needs $\$17000$ for a house deposit. He invests $\$10000$ in the bank with an interest rate of $8\%$ p.a. compounded monthly and also makes a payment of $\$150$ per month. He creates the following spreadsheet to help him do "what if" analysis to examine the problem.

Some of the formulae Thomas used to create this spreadsheet are shown in the table below.
Row | A | B | C | D | E
--- | --- | --- | --- | --- | ---
5 | Month | Balance start | Interest | Payment | Balance end
6 | 1 | =B1 | =$B$2/12*B6 | =$B$3 | =B6+C6+D6
7 | =A6+1 | =E6 | | |
8 | | | | |

Note:
• The spreadsheet is designed so that if the values in cells B1, B2 and B3 are changed, the whole page instantly updates. This makes it quick and easy to investigate different investment options.
• The $ signs in the cell references make the reference absolute. That means the cell name will not change as the formula is copied down the column.
• The month numbers here have been calculated using a formula in cell A7.
(a) What is the purpose of the formula in cell C6?
Think: Ignore the $ signs. The formula is =B2/12*B6.
Do: Write that it calculates the amount of monthly interest by taking the yearly interest rate (cell B2), dividing by $12$, then multiplying by the balance at the start of the month (cell B6).
(b) What would the formula in cell C6 be if the interest was to be compounded quarterly instead of monthly?
Think: We would divide the yearly rate by four instead of dividing by twelve.
Do: Write that the formula would be =$B$2/4*B6 or =B2/4*B6 (without an absolute reference).
(c) What is the purpose of the formula in cell E6?
Think: The formula is =B6+C6+D6, so it adds three values together.
Do: Write that it calculates the balance at the end of the month as start balance + interest + payment.
(d) Write the formula that will be in cell E7.
Think: This will be the start amount + interest + payment for row 7.
Do: Write that the formula is =B7+C7+D7.
(e) What is the purpose of the formula in cell B7?
Think: The formula is =E6, and E6 contains the amount at the end of month 1.
Do: Write that it copies the balance at the end of month 1 to the start of month 2.
(f) Use Microsoft Excel, Google Sheets or your CAS calculator to create this spreadsheet. How long does it take Thomas to save the $\$17000$?
Reflect: The spreadsheet should show that it takes him $30$ months to save the money.

#### Practice question

##### question 4

The spreadsheet below shows the first year of an investment with regular deposits.

Row | A | B | C | D | E
--- | --- | --- | --- | --- | ---
1 | Year | Beginning Balance | Interest | Deposit | End Balance
2 | 1 | 6000 | 660 | 500 | 7160
3 | | | | |
4 | | | | |
5 | | | | |

1. Calculate the annual interest rate for this investment.
2. Write a formula for cell B3. Enter only one letter or number per box.
B3 = ☐☐
3. Write a formula for cell C6 in terms of B6.
Enter one letter or a number per box.
C6 = ☐ * ☐☐
4. Which of the following is the correct formula for cell E5?
A. B5 * C5 + D5
B. B5 + C5 - D5
C. B5 + C5 + D5
D. B5 + C5 * D5
E. B5 * C5 * D5
F. B5 - C5 - D5
5. Using the spreadsheet facility on your calculator, reproduce this spreadsheet and determine the end balance for the $4$th year.
6. Calculate the total interest earned over these $4$ years.
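Thomas's spreadsheet from Example 2 can likewise be reproduced in code for quick "what if" analysis; in this sketch, editing the three input variables plays the role of editing cells B1, B2 and B3:

```python
target = 17000
balance = 10000.0   # cell B1: initial investment
annual_rate = 0.08  # cell B2: interest rate p.a., compounded monthly
payment = 150       # cell B3: monthly payment

months = 0
while balance < target:
    interest = balance * annual_rate / 12   # column C: =$B$2/12*B6
    balance = balance + interest + payment  # column E: =B6+C6+D6
    months += 1

print(months)  # 30
```

As in part (f), it takes 30 months to reach the $\$17000$ deposit.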
### Outcomes
#### ACMGM099
use a recurrence relation to model an annuity, and investigate (numerically or graphically) the effect of the amount invested, the interest rate, and the payment amount on the duration of the annuity
#### ACMGM100
with the aid of a financial calculator or computer-based financial software, solve problems involving annuities (including perpetuities as a special case); for example, determining the amount to be invested in an annuity to provide a regular monthly income of a certain amount
|
|
Volume 377 - 18th International Conference on B-Physics at Frontier Machines (Beauty2019) - Posters
$B_c \rightarrow J/\psi$ Form Factors and $R(J/\psi)$ using Lattice QCD
J. Harrison,* C. Davies, A. Lytle
*corresponding author
Full text: pdf
Pre-published on: March 02, 2020
Abstract
We present a lattice QCD determination of the $B_c \rightarrow J/\psi$ vector and axial-vector form factors over the full physical $q^2$ range with non-perturbatively renormalised lattice currents. Our calculation uses the Highly Improved Staggered Quark (HISQ) action on the second generation MILC gluon ensembles including light, strange and charm sea quarks. We use HISQ heavy quarks, with a range of masses going up to the $b$ on our finest lattices. Finally we use these form factors to compute the differential decay rates and construct the ratio of decay rates $R(J/\psi) = \Gamma(B_c \rightarrow J/\psi \tau^-\overline{\nu}_\tau)/\Gamma(B_c \rightarrow J/\psi \ell^-\overline{\nu}_\ell)$ where $\ell$ is either a muon or electron.
|
|
# DeBroglie wavelength considering relativistic effects
1. Mar 22, 2006
### *Alice*
"Electrons are accelerated by a potential of 350kV in an electron microscope. Calculate the de Broglie wavelength of those electrons taling relativistic effects into account"
I attempted the following:
W = W(kin) = 350 keV
now
$$W_{kin} = (1-\gamma)mc^2$$
so, now one could solve for $\gamma$ and find the velocity of the particle.
afterwards $$p = mv = h/\lambda$$
HOWEVER: I get a negative result in a root when I try to solve for v. Therefore I think that my energy formula must be wrong (I already excluded calculation errors). Can anyone see it?
2. Mar 22, 2006
### nrqed
It's $(\gamma -1) m c^2$ (the gamma factor is always larger or equal to 1)
Patrick
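With nrqed's correction, the thread's calculation can be completed numerically; a sketch (standard values for the physical constants are assumed):

```python
import math

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C

W_kin = 350e3 * e          # kinetic energy from 350 kV acceleration, J
rest = m_e * c**2          # rest energy, about 511 keV
E_total = W_kin + rest     # total energy, from W_kin = (gamma - 1) m c^2
p = math.sqrt(E_total**2 - rest**2) / c  # relativistic momentum
wavelength = h / p

print(wavelength)  # about 1.79e-12 m, i.e. roughly 1.8 pm
```

Using $E^2 = (pc)^2 + (mc^2)^2$ avoids solving for the velocity explicitly.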
|
|
# 11.1 Temperature (Page 4/14)
Page 4 / 14
The lowest temperatures ever recorded have been measured during laboratory experiments: $4\text{.}5×{\text{10}}^{–\text{10}}\phantom{\rule{0.25em}{0ex}}\text{K}$ at the Massachusetts Institute of Technology (USA), and $1\text{.}0×{\text{10}}^{–\text{10}}\phantom{\rule{0.25em}{0ex}}\text{K}$ at Helsinki University of Technology (Finland). In comparison, the coldest recorded place on Earth’s surface is Vostok, Antarctica at 183 K $\left(–\text{89}\text{º}\text{C}\right)$ , and the coldest place (outside the lab) known in the universe is the Boomerang Nebula, with a temperature of 1 K.
## Making connections: absolute zero
What is absolute zero? Absolute zero is the temperature at which all molecular motion has ceased. The concept of absolute zero arises from the behavior of gases. [link] shows how the pressure of gases at a constant volume decreases as temperature decreases. Various scientists have noted that the pressures of gases extrapolate to zero at the same temperature, $–\text{273}\text{.}\text{15}\text{º}\text{C}$ . This extrapolation implies that there is a lowest temperature. This temperature is called absolute zero . Today we know that most gases first liquefy and then freeze, and it is not actually possible to reach absolute zero. The numerical value of absolute zero temperature is $–\text{273}\text{.}\text{15}\text{º}\text{C}$ or 0 K.
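The extrapolation argument can be reproduced with two hypothetical constant-volume pressure readings chosen to follow the ideal gas law:

```python
# Two constant-volume pressure readings (hypothetical values, consistent
# with the ideal gas law): (temperature in degrees C, pressure in atm)
t1, p1 = 0.0, 1.000
t2, p2 = 100.0, 1.366

slope = (p2 - p1) / (t2 - t1)
t_zero = t1 - p1 / slope  # temperature at which the line reaches P = 0

print(round(t_zero, 1))  # -273.2
```

The fitted line reaches zero pressure near the accepted value of absolute zero, about -273.15 degrees Celsius.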
## Thermal equilibrium and the zeroth law of thermodynamics
Thermometers actually take their own temperature, not the temperature of the object they are measuring. This raises the question of how we can be certain that a thermometer measures the temperature of the object with which it is in contact. It is based on the fact that any two systems placed in thermal contact (meaning heat transfer can occur between them) will reach the same temperature. That is, heat will flow from the hotter object to the cooler one until they have exactly the same temperature. The objects are then in thermal equilibrium , and no further changes will occur. The systems interact and change because their temperatures differ, and the changes stop once their temperatures are the same. Thus, if enough time is allowed for this transfer of heat to run its course, the temperature a thermometer registers does represent the system with which it is in thermal equilibrium. Thermal equilibrium is established when two bodies are in contact with each other and can freely exchange energy.
Furthermore, experimentation has shown that if two systems, A and B, are in thermal equilibrium with one another, and B is in thermal equilibrium with a third system C, then A is also in thermal equilibrium with C. This conclusion may seem obvious, because all three have the same temperature, but it is basic to thermodynamics. It is called the zeroth law of thermodynamics .
## The zeroth law of thermodynamics
If two systems, A and B, are in thermal equilibrium with each other, and B is in thermal equilibrium with a third system, C, then A is also in thermal equilibrium with C.
|
|
# Derivative of (7x)/(6x+4)
## Derivative of (7x)/(6x+4). A simple, easy-to-understand step-by-step solution, so don't hesitate to use it as a solution to your homework.
If it's not what you are looking for, type your own function into the derivative calculator and let us solve it.
## Derivative of (7x)/(6x+4):
((7*x)/(6*x+4))' = ((7*x)'*(6*x+4) - 7*x*(6*x+4)') / (6*x+4)^2
= (7*(6*x+4) - 7*x*6) / (6*x+4)^2
= (7*(6*x+4) - 42*x) / (6*x+4)^2
= 28 / (6*x+4)^2
The calculation above is the derivative of the function f(x) = (7x)/(6x+4).
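Since 42*x + 28 - 42*x = 28, the result simplifies to 28/(6*x+4)^2. A quick numerical sanity check against a central finite difference:

```python
def f(x):
    return 7 * x / (6 * x + 4)

def derivative_formula(x):
    return 28 / (6 * x + 4) ** 2

# central finite difference as an independent check of the formula
h = 1e-6
for x in (0.0, 1.0, 2.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - derivative_formula(x)) < 1e-6
print("derivative formula agrees with finite differences")
```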
|
|
# Off-The-Shelf ML Training
OOS provides users with access to several machine learning forecasting methods. Along with this access, OOS comes with pre-defined machine learning training routines so that practitioners may simply pick up the package and hit the ground running.
Machine learning parameter estimation is facilitated with the caret package. As a result, all machine learning training can be controlled through a unified format, which OOS conveniently presents through a series of "control panel" lists. That is, when one trains a machine learning method via OOS, one first instantiates a training control panel (e.g. forecast_multivariate.ml.control_panel and forecast_combinations.control_panel), a list with elements:
1. caret.engine: name of caret recognized method
2. tuning.grids: data.frame grid of hyperparameters for training caret recognized methods
3. control: list of arguments defining the parameter estimation routine
4. accuracy: regression loss function to minimize during learning
and these control panels in turn direct the actual machine learning training within the OOS forecast methods.
The remainder of this article will outline the default details of machine learning training in OOS, while documentation and examples of editing machine learning control panels in OOS may be found in the Model Customization and Control vignette.
## 1. Default machine learning methods
First, control panels hold a list of caret-recognized methods which may be used by default in OOS. The following code snippet shows the caret.engine list created within OOS control panels:
# caret names
caret.engine = list(
ols = 'lm',
ridge = 'glmnet',
lasso = 'glmnet',
elastic = 'glmnet',
RF = 'rf',
GBM = 'gbm',
NN = 'avNNet',
pls = 'pls',
pcr = 'pcr'
)
## 2. Parameter estimation methods
Second, control panels define the parameter estimation method used to create the actual forecasting models.
### forecast_multivariate
By default, forecast_multivariate uses the timeslice cross validation technique, designed specifically for time series forecasting. However, if the argument rolling.window is NULL in instantiate.forecast_multivariate.ml.control_panel(), then 5-fold cross validation will be used. This latter behavior will only occur if a user manually instantiates the control panel and then does not edit the control element.
The following code snippet shows the training control object created within instantiate.forecast_multivariate.ml.control_panel():
# hyper-parameter selection routine
if(is.numeric(rolling.window)){
control =
caret::trainControl(
method = "timeslice",
horizon = horizon,
initialWindow = rolling.window,
allowParallel = TRUE)
}else if(!is.null(rolling.window)){
control =
caret::trainControl(
method = "timeslice",
horizon = horizon,
initialWindow = 5,
allowParallel = TRUE)
}else{
control =
caret::trainControl(
method = "cv",
number = 5,
allowParallel = TRUE)
}
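caret's "timeslice" method implements rolling-origin evaluation: each resample trains on a window of consecutive observations and tests on the following horizon observations. A minimal sketch of the split logic (written in Python purely for illustration; the real splitting happens inside caret::trainControl):

```python
def time_slices(n_obs, initial_window, horizon):
    """Yield (train_indices, test_indices) pairs for rolling-origin CV."""
    slices = []
    start = 0
    while start + initial_window + horizon <= n_obs:
        train = list(range(start, start + initial_window))
        test = list(range(start + initial_window,
                          start + initial_window + horizon))
        slices.append((train, test))
        start += 1  # slide the origin forward one observation
    return slices

splits = time_slices(n_obs=8, initial_window=5, horizon=1)
print(splits[0])    # ([0, 1, 2, 3, 4], [5])
print(len(splits))  # 3
```

Each test set always lies strictly after its training window, which is what makes this scheme appropriate for time series.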
### forecast_combine
In comparison, forecast_combine by default uses 5-fold cross validation to estimate and select model parameters.
Note that the partial egalitarian lasso is not sourced from caret and uses the default cross validation imposed in the glmnet package.
The following code snippet shows the training control object created within instantiate.forecast_combinations.control_panel():
# hyper-parameter selection routine
control =
caret::trainControl(
method = "cv",
number = 5,
allowParallel = TRUE)
## 3. Hyperparameters
Third, within training routines, models are estimated over a grid of potential hyperparameters. OOS attempts to use standard off-the-shelf tuning parameters when available, for example in the case of the random forest and LASSO regression. However, this also means that the number of covariates must be passed to control panel instantiation; otherwise naive hyperparameters will be used (the number of covariates is always passed when the control panels are created inside forecast_multivariate or forecast_combine).
The following tuning grids are used in both forecast_multivariate and forecast_combine when training machine learning techniques:
# tuning grids
tuning.grids = list(
ols = NULL,
ridge = expand.grid(
alpha = 0,
lambda = 10^seq(-3, 3, length = 100)),
lasso = expand.grid(
alpha = 1,
lambda = 10^seq(-3, 3, length = 100)),
elastic = NULL,
GBM =
expand.grid(
n.minobsinnode = c(1),
shrinkage = c(.1,.01),
n.trees = c(100, 250, 500),
interaction.depth = c(1,2,5)),
RF =
expand.grid(
mtry = c(1:4)),
NN =
expand.grid(
size = seq(2,10,5),
decay = c(.01,.001),
bag = c(100, 250, 500)),
pls =
expand.grid(
ncomp = c(1:5)),
pcr =
expand.grid(
ncomp = c(1:5))
)
# tuning grids if # of features is available
if(!is.null(covariates)){
tuning.grids[['RF']] =
expand.grid(
mtry = covariates/3)
tuning.grids[['NN']] =
expand.grid(
size = c(covariates, 2*covariates, 3*covariates),
decay = c(.01,.001),
bag = c(20, 100))
}
## 4. Accuracy
Lastly, by default, OOS will use the root mean squared error (RMSE) loss function to train machine learning methods, as it is a reasonable compromise between emphasizing and de-emphasizing outliers.
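For reference, the RMSE criterion is straightforward to state; a minimal sketch (in Python, for illustration):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # sqrt(4/3), about 1.1547
```

Squaring the errors before averaging is what gives large misses extra weight relative to, say, mean absolute error.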
|
|
# GPU branch divergence
This pattern applies to GPU computing, where the execution of a thread block is divided into warps with a constant number of threads per warp. When threads in the same warp follow different paths of control flow, these threads diverge in their execution, which serializes the execution.
Branch divergence can hurt performance as it lowers utilization of the execution units, which cannot be compensated for through increased levels of parallelism.
There are three common scenarios of kernel code that exhibit such divergence:
// Code A
if(a[tid] > 0) {
++x;
}
In this scenario, with a single if statement, if any thread in a warp takes the branch, every thread in that warp must step through ++x, regardless of whether it actually executes the instruction or not.
The more common case, as depicted in the figure above, is the if-else statement:
// Code B
if(a[tid] > 0) {
++x;
} else {
--x;
}
Each thread in a warp must go through both branch paths sequentially, even though it just executes one of them. Of course, this scenario could be expanded with one or more else if statements (equivalent to a switch statement), which would add further complexity to the issue.
Last but not least, in the following code snippet, both branches perform the same operations on different data:
// Code C
if(c > 0) {
x = x * a1 + b1;
y = y * a1 + b1;
} else {
x = x * a2 + b2;
y = y * a2 + b2;
}
The branch divergence pattern is quite evident in the first three scenarios. However, there is a fourth scenario, which is a bit more tricky.
// Code D
for(int i = 0; i < n[tid]; ++i) {
    x = x * a[i] + b[i]; // representative loop body
}
Here we have a loop with a variable trip count, such as the representative example above. The number of iterations the warp as a whole executes is the maximum trip count over all threads within the warp; threads with smaller trip counts sit idle for the remaining iterations. The performance impact depends on the size of the loop body and the variance of the loop trip counts, i.e., the n's.
|
|
Sampling Distribution of the Mean
The distribution of a statistic (which is itself a function of the sample) is called a sampling distribution. Statistics are used for estimating unknown population characteristics or parameters and for testing hypotheses. These tasks involve probability statements which can only be made if the sampling distributions of the statistics are known (or can be approximated). For the most important statistics, we now present in each case the sampling distribution, its expected value, and its variance.
Distribution of the sample mean
Consider sampling from a population with distribution function ${\displaystyle F(x)}$, expected value ${\displaystyle E(X)=\mu }$ and variance ${\displaystyle Var(X)=\sigma ^{2}.}$ One of the most important statistics is the sample mean. The sample mean (or sample average) is given by: ${\displaystyle {\bar {x}}={\frac {1}{n}}\sum \limits _{i=1}^{n}x_{i}}$
Expected value, variance and standard deviation of the sample mean
Expected value, variance and standard deviation of the sample mean are given by:
1. ${\displaystyle E({\bar {x}})=\mu }$
2. ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})={\frac {\sigma ^{2}}{n}}\cdot {\frac {N-n}{N-1}}}$
3. ${\displaystyle \sigma ({\bar {x}})={\frac {\sigma }{\sqrt {n}}}\cdot {\sqrt {\frac {N-n}{N-1}}}}$
The factor ${\displaystyle {\frac {N-n}{N-1}}}$ is called the finite sample correction.
If the population variance ${\displaystyle Var(X)=\sigma ^{2}}$ is unknown it has to be estimated by the statistic ${\displaystyle s^{2}.}$ In the above formulas ${\displaystyle \sigma ^{2}}$ is replaced by ${\displaystyle s^{2}}$ which leads to an estimator of the variance of the sample mean given by:
• for a simple random sample: ${\displaystyle {\widehat {\sigma ^{2}}}({\bar {x}})={\frac {s^{2}}{n}}}$
• for a random sample without replacement ${\displaystyle {\widehat {\sigma ^{2}}}({\bar {x}})={\frac {s^{2}}{n}}\cdot {\frac {N-n}{N-1}}}$
These results for the expectation and variance of the sample mean hold regardless of the specific form of its sampling distribution.
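These formulas are easy to verify by simulation; the sketch below draws repeated samples with replacement (so the finite sample correction equals 1) from a uniform population:

```python
import random

random.seed(1)

mu, sigma2 = 0.5, 1 / 12  # uniform(0, 1) population: E(X) and Var(X)
n, reps = 30, 4000        # sample size and number of repeated samples

means = []
for _ in range(reps):
    sample = [random.random() for _ in range(n)]
    means.append(sum(sample) / n)

grand_mean = sum(means) / reps
var_of_means = sum((m - grand_mean) ** 2 for m in means) / reps

print(abs(grand_mean - mu) < 0.01)              # E(x-bar) close to mu
print(abs(var_of_means - sigma2 / n) < 0.0006)  # Var(x-bar) close to sigma^2/n
```

The empirical mean and variance of the sample means match $\mu$ and $\sigma^2/n$ up to simulation noise.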
Distribution of the sample mean
The sampling distribution ${\displaystyle F({\bar {x}})}$ of the sample mean is determined by the distribution of the variable ${\displaystyle X}$ in the population. In each case below we assume a random sample with replacement.
1. It is assumed that ${\displaystyle X}$ is normally distributed with expected value ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$, that is, : ${\displaystyle X\sim N(\mu ,\,\sigma ^{2})}$
1. The population variance ${\displaystyle \sigma ^{2}}$ is known; in this case ${\displaystyle {\bar {x}}}$ has the following normal distribution:${\displaystyle {\bar {x}}\sim N(\mu ,\,\sigma ^{2}({\bar {x}}))=\,\,\,N(\mu ,\,{\frac {\sigma ^{2}}{n}})}$ and the standardized random variable ${\displaystyle z={\frac {{\bar {x}}-\mu }{\sigma ({\bar {x}})}}={\sqrt {n}}{\frac {{\bar {x}}-\mu }{\sigma }}}$ has the standard normal distribution ${\displaystyle z\sim N(0;1)}$.
2. The population variance ${\displaystyle \sigma ^{2}}$ is unknown In this case, it may be estimated by ${\displaystyle \ s^{2}}$ The transformed random variable: ${\displaystyle t={\sqrt {n}}{\frac {{\bar {x}}-\mu }{s}}}$
has a tabulated distribution whose parameter, the 'degrees of freedom', equals ${\displaystyle n-1}$. This distribution is called the t-distribution (Student's t-distribution) and it is usually denoted by ${\displaystyle t_{n-1}}$ .
As ${\displaystyle n}$ increases, the t-distribution converges to a standard normal. Indeed the latter provides a good approximation when ${\displaystyle n>30.}$
2. The distribution of ${\displaystyle X}$ in the population is non-normal or unknown. This is the most relevant case for applications in business and economics, since the distribution of many interesting variables may not be well approximated by the normal or its specific form is simply unknown.
Consider ${\displaystyle n}$ i.i.d. random variables ${\displaystyle X_{1},\dots ,X_{n}}$ with unknown distribution. The random variables have expectation ${\displaystyle E(X_{i})=\mu }$ and variance ${\displaystyle Var(X_{i})=\sigma ^{2}.}$ According to the central limit theorem, the following propositions hold:
• If ${\displaystyle \sigma ^{2}}$ is known, then the random variable ${\displaystyle z={\sqrt {n}}{\frac {{\bar {x}}-\mu }{\sigma }}}$ is approximately standard normal for sufficiently large ${\displaystyle n}$.
• If ${\displaystyle \sigma ^{2}}$ is unknown, then the random variable ${\displaystyle t={\sqrt {n}}{\frac {{\bar {x}}-\mu }{s}}}$ is also approximately standard normal for sufficiently large ${\displaystyle n}$.
As a rule of thumb, the normal distribution can be used for ${\displaystyle n>30}$. If ${\displaystyle X}$ is normally distributed with known ${\displaystyle \mu }$ and ${\displaystyle \sigma ^{2}}$, so that ${\displaystyle {\bar {x}}}$ also follows the normal distribution, then the calculation of probabilities may be done as in Chapter VI. The calculations hold approximately if ${\displaystyle X}$ is arbitrarily distributed and ${\displaystyle n}$ is sufficiently large. More generally, if the distribution of ${\displaystyle X}$ is not normal but is known, then it is in principle possible to calculate the sampling distribution of ${\displaystyle {\bar {x}}}$ and the probability that ${\displaystyle {\bar {x}}}$ falls in a given interval (though the results may be quite complicated).
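As an added illustration of this rule of thumb (not part of the original text), a quick simulation in Python shows that standardized means of a uniform variable are already approximately ${\displaystyle N(0,1)}$ for ${\displaystyle n=50}$:

```python
import random
import statistics

random.seed(0)

n, reps = 50, 20000
mu, sigma2 = 0.5, 1 / 12  # mean and variance of Uniform(0, 1)

# Standardized sample means z = sqrt(n) * (xbar - mu) / sigma for many samples.
z_values = [
    (n ** 0.5) * (statistics.fmean(random.random() for _ in range(n)) - mu) / sigma2 ** 0.5
    for _ in range(reps)
]

# If the central limit theorem applies, these look like draws from N(0, 1).
print(round(statistics.fmean(z_values), 2))  # close to 0
print(round(statistics.stdev(z_values), 2))  # close to 1
```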
Weak law of large numbers
Suppose ${\displaystyle X_{1},\dots ,X_{n}}$ are ${\displaystyle n}$ independent and identically distributed random variables with expectation ${\displaystyle E(X_{i})=\mu }$ and variance ${\displaystyle Var(X_{i})=\sigma ^{2}}$. Then, for each ${\displaystyle \epsilon >0}$ it holds that: ${\displaystyle \lim _{n\rightarrow \infty }P(|{\bar {x}}_{n}-\mu |<\epsilon )=1\,\,.}$ This can be shown as follows: according to Chebyshev's inequality it holds that ${\displaystyle P(|{\bar {x}}_{n}-\mu |<\epsilon )\geq 1-{\frac {\sigma ^{2}({\bar {x}})}{\epsilon ^{2}}}.}$ After inserting ${\displaystyle \sigma ^{2}({\bar {x}})=\sigma ^{2}/n}$: ${\displaystyle P(|{\bar {x}}_{n}-\mu |<\epsilon )\geq 1-{\frac {\sigma ^{2}}{n\epsilon ^{2}}}\,.}$ As ${\displaystyle n}$ approaches infinity, the second term on the right hand side goes to zero. Implication of this law: with increasing ${\displaystyle n}$, the probability that the sample mean ${\displaystyle {\bar {x}}}$ will deviate from its expectation ${\displaystyle \mu }$ by less than ${\displaystyle \epsilon >0}$ converges to one. If the sample size is large enough, the sample mean will take on values within a pre-specified interval ${\displaystyle [\mu -\epsilon ;\mu +\epsilon ]}$ with high probability, regardless of the distribution of ${\displaystyle X}$.
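A small simulation makes the law concrete (an illustration added here; the die-rolling setup is made up): the empirical probability that ${\displaystyle |{\bar {x}}_{n}-\mu |<\epsilon }$ rises toward one with ${\displaystyle n}$ and always respects the Chebyshev lower bound ${\displaystyle 1-\sigma ^{2}/(n\epsilon ^{2})}$.

```python
import random
import statistics

random.seed(1)

# Fair die: E(X) = 3.5, Var(X) = 35/12.
mu, sigma2 = 3.5, 35 / 12
eps, reps = 0.5, 2000

def prob_close(n):
    """Empirical P(|xbar_n - mu| < eps) over `reps` simulated samples of size n."""
    hits = sum(
        abs(statistics.fmean(random.randint(1, 6) for _ in range(n)) - mu) < eps
        for _ in range(reps)
    )
    return hits / reps

results = {}
for n in (10, 100, 500):
    chebyshev_bound = max(0.0, 1 - sigma2 / (n * eps ** 2))
    results[n] = prob_close(n)
    print(n, results[n], ">=", round(chebyshev_bound, 3))
```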
Enhanced example for sampling distributions
This example is devoted to formally explaining the sampling distribution of the sample mean, its expectation and variance. To this end, certain assumptions must be made about the population. In particular, it is assumed that the mean hourly gross earnings of all 5000 workers of a company equal \$27.30, with a standard deviation of \$5.90 and a variance of 34.81.

Problem 1: Suppose that the variable ${\displaystyle X}$ = “gross hourly earnings of a (randomly selected) worker of this company” is normally distributed, that is, ${\displaystyle X\sim N(27.3;34.81)}$. From the population of all workers of this company, a random sample (with replacement) of ${\displaystyle n}$ workers is selected. The sample mean gives the average gross hourly earnings of the ${\displaystyle n}$ workers in the sample. Calculate the expected value, variance, standard deviation and find the specific form of the distribution of ${\displaystyle {\bar {x}}}$ for the sample sizes a) ${\displaystyle n=10}$, b) ${\displaystyle n=50}$ and c) ${\displaystyle n=200}$.

Regardless of ${\displaystyle n}$, the expected value of ${\displaystyle {\bar {x}}}$ is ${\displaystyle E({\bar {x}})=\mu =\$27.30}$. The variance of the sample mean is equal to ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=\sigma ^{2}/n\,.}$ Thus:

a) for ${\displaystyle n=10}$: ${\displaystyle \sigma ^{2}({\bar {x}})=5.9^{2}/10=34.81/10=3.481}$ and ${\displaystyle \sigma ({\bar {x}})=\$1.8657}$;
b) for ${\displaystyle n=50}$: ${\displaystyle \sigma ^{2}({\bar {x}})=5.9^{2}/50=34.81/50=0.6962}$ and ${\displaystyle \sigma ({\bar {x}})=\$0.8344}$;
c) for ${\displaystyle n=200}$: ${\displaystyle \sigma ^{2}({\bar {x}})=5.9^{2}/200=34.81/200=0.17405}$ and ${\displaystyle \sigma ({\bar {x}})=\$0.4172}$.

Obviously, the standard deviation of ${\displaystyle {\bar {x}}}$ is smaller than the standard deviation of ${\displaystyle X}$ in the population. Moreover, the standard deviation of ${\displaystyle {\bar {x}}}$ decreases from 1.8657 to 0.8344 and to 0.4172 as the sample size is increased from 10 to 50 and eventually to 200. Increasing the sample size by a factor of five cuts the standard deviation roughly in half; increasing it twentyfold reduces the standard deviation by more than 3/4.
Since ${\displaystyle X}$ is assumed to be normally distributed it follows that the sample mean ${\displaystyle {\bar {x}}}$ is also normally distributed under random sampling with replacement, regardless of the sample size.
Thus:
1. for random samples of size ${\displaystyle n=10}$: ${\displaystyle {\bar {x}}\sim N(27.3;3.481)}$. The red curve in the graph corresponds to the distribution of ${\displaystyle X}$ in the population while the blue curve depicts the distribution of the sample mean ${\displaystyle {\bar {x}}}$.
2. for random samples of size ${\displaystyle n=50}$: ${\displaystyle {\bar {x}}\sim N(27.3;0.6962)}$
3. for random samples of size ${\displaystyle n=200}$: ${\displaystyle {\bar {x}}\sim N(27.3;0.17405)}$
Problem 2:
Suppose that the variable ${\displaystyle X}$ = “gross hourly earnings of a (randomly selected) worker of this company” is normally distributed. Hence, ${\displaystyle X\sim N(27.3;34.81)}$. A sample of size ${\displaystyle n}$ is randomly drawn without replacement. The sample mean gives the average gross hourly earnings of the ${\displaystyle n}$ workers in the sample. Calculate the expected value, variance, and standard deviation of ${\displaystyle {\bar {x}}}$ for the following sample sizes:
a) ${\displaystyle n=10}$, b) ${\displaystyle n=50}$, c) ${\displaystyle n=1000}$
All random samples without replacement, regardless of ${\displaystyle n}$, have the same expected value as in the first problem: ${\displaystyle E({\bar {x}})=\mu =\$27.30}$
In the case of sampling without replacement, the variance of the sample mean is reduced by a ’finite sample correction factor’. Specifically, the variance of the sample mean is given by ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})={\frac {\sigma ^{2}}{n}}\cdot {\frac {N-n}{N-1}}\,.}$ However, the finite sample correction can be neglected if ${\displaystyle n}$ is sufficiently small relative to ${\displaystyle N}$, for example if ${\displaystyle n/N\leq 0.05}$.
Thus, for a) ${\displaystyle n=10}$, neglecting the correction, ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=5.9^{2}/10=34.81/10=3.481}$ and ${\displaystyle \sigma ({\bar {x}})=\$1.8657}$. In comparison, the finite sample correction yields ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=3.4747}$ and ${\displaystyle \sigma ({\bar {x}})=\$1.8641}$, which demonstrates the negligibility of the correction.

For b) ${\displaystyle n=50}$, ${\displaystyle n/N=0.01\leq 0.05}$ still holds, so the approximation ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=\sigma ^{2}/n}$ may be used. This leads to the same result as in problem 1: ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=5.9^{2}/50=34.81/50=0.6962}$ and ${\displaystyle \sigma ({\bar {x}})=\$0.8344}$, which is very similar to the finite sample corrected result ${\displaystyle \sigma ({\bar {x}})=\$0.8303}$.

For c) ${\displaystyle n=1000}$, the ratio ${\displaystyle n/N=0.2>0.05}$ is no longer negligible and the finite sample correction has to be applied: {\displaystyle {\begin{aligned}Var({\bar {x}})=\sigma ^{2}({\bar {x}})&={\frac {\sigma ^{2}}{n}}\cdot {\frac {N-n}{N-1}}={\frac {5.9^{2}}{1000}}\cdot {\frac {5000-1000}{5000-1}}=0.0279\\\sigma ({\bar {x}})&=\$0.1669\,.\end{aligned}}}
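These three cases can be reproduced numerically. The following Python snippet is an added illustration (not part of the original text); it computes the approximate and corrected variances for ${\displaystyle N=5000}$ and ${\displaystyle \sigma =5.9}$:

```python
def var_mean(sigma2, n, N=None):
    """Variance of the sample mean; applies the finite sample correction if N is given."""
    v = sigma2 / n
    if N is not None:
        v *= (N - n) / (N - 1)  # finite sample correction factor
    return v

sigma2, N = 5.9 ** 2, 5000

for n in (10, 50, 1000):
    approx = var_mean(sigma2, n)     # sigma^2 / n, correction neglected
    exact = var_mean(sigma2, n, N)   # with the finite sample correction
    print(n, round(approx, 4), round(exact, 4))
```

For n = 10 and n = 50 the two columns are nearly identical, while for n = 1000 the correction visibly shrinks the variance.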
Problem 3:
Suppose that, more realistically, the distribution of ${\displaystyle X}$ = “gross hourly earnings of a (randomly selected) worker from this company” is unknown. Hence, all that is known is ${\displaystyle E(X)=\mu =\$27.30}$ and ${\displaystyle \sigma (X)=\$5.90}$. A sample of size ${\displaystyle n}$ is randomly drawn. The sample mean gives the average gross hourly earnings of the ${\displaystyle n}$ workers in the sample. Calculate the expected value, variance, standard deviation and find the specific form of the distribution of ${\displaystyle {\bar {x}}}$ for the following sample sizes:
a) ${\displaystyle n=10}$, b) ${\displaystyle n=50}$, c) ${\displaystyle n=200}$
How the expected value ${\displaystyle E({\bar {x}})}$ is calculated does not depend on the distribution of ${\displaystyle X}$ in the population. Hence, there are no new aspects in the present situation and the results are identical to the previous two problems: ${\displaystyle E({\bar {x}})=\mu =\$27.30}$
How the variance of ${\displaystyle {\bar {x}}}$ is calculated does not depend on the distribution of ${\displaystyle X}$ in the population but it does depend on the type and size of the random sample. In the statement of problem 3 the sampling scheme has not been specified. However, for all three sample sizes ${\displaystyle n/N<0.05}$ and, hence, if the sample is drawn without replacement the formula ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=\sigma ^{2}/n}$ can be used as an approximation.
for ${\displaystyle n=10}$: ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=3.481}$, ${\displaystyle \sigma ({\bar {x}})=\$1.8657}$
for ${\displaystyle n=50}$: ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=0.6962}$, ${\displaystyle \sigma ({\bar {x}})=\$0.8344}$
for ${\displaystyle n=200}$: ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=0.17405}$, ${\displaystyle \sigma ({\bar {x}})=\$0.4172}$
Since the distribution of ${\displaystyle X}$ in the population is unknown no exact statement can be made about the distribution of ${\displaystyle {\bar {x}}.}$
However, the central limit theorem implies that the standardized random variable ${\displaystyle z={\sqrt {n}}{\frac {{\bar {x}}-\mu }{\sigma }}}$ is approximately standard normal if the sample size ${\displaystyle n>30}$ and, in random sampling without replacement, the size of the population ${\displaystyle N}$ is sufficiently large. This is satisfied for the cases b) ${\displaystyle n=50}$ and c) ${\displaystyle n=200}$.
Example of sampling distribution
${\displaystyle N=7}$ students take part in an exam for a graduate course and obtain the following scores: Table 1:
Student A B C D E F G
Score 10 11 11 12 12 12 16
The variable ${\displaystyle X}$ = “score of an exam” has the following population frequency distribution: Table 2:
${\displaystyle x}$ ${\displaystyle h(x)}$ ${\displaystyle f(x)=h(x)/N}$ ${\displaystyle F(x)}$
10 1 1/7 1/7
11 2 2/7 3/7
12 3 3/7 6/7
16 1 1/7 7/7
with population parameters ${\displaystyle \mu =12,\sigma ^{2}=3.143}$ and ${\displaystyle \sigma =1.773}$.
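These parameters can be verified directly from the seven scores; the following Python snippet (added for illustration) recomputes them:

```python
scores = [10, 11, 11, 12, 12, 12, 16]

N = len(scores)
mu = sum(scores) / N                             # population mean
sigma2 = sum((x - mu) ** 2 for x in scores) / N  # population variance (divide by N)
sigma = sigma2 ** 0.5

print(mu, round(sigma2, 3), round(sigma, 3))     # → 12.0 3.143 1.773
```

Note the division by ${\displaystyle N}$, not ${\displaystyle N-1}$: these are population parameters, not sample estimates.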
Random sampling with replacement
${\displaystyle n=2}$ exams are sampled with replacement from the population. Table 3 contains all possible samples of size ${\displaystyle n=2}$ with replacement and paying attention to the order of the draws: Table 3:
1. exam \ 2. exam 10    11    11    12    12    12    16
10                10;10 10;11 10;11 10;12 10;12 10;12 10;16
11                11;10 11;11 11;11 11;12 11;12 11;12 11;16
11                11;10 11;11 11;11 11;12 11;12 11;12 11;16
12                12;10 12;11 12;11 12;12 12;12 12;12 12;16
12                12;10 12;11 12;11 12;12 12;12 12;12 12;16
12                12;10 12;11 12;11 12;12 12;12 12;12 12;16
16                16;10 16;11 16;11 16;12 16;12 16;12 16;16
For each possible sample, the sample mean can be calculated and is recorded in Table 4. Table 4:
1. exam \ 2. exam 10    11    11    12    12    12    16
10                10    10.5  10.5  11    11    11    13
11                10.5  11    11    11.5  11.5  11.5  13.5
11                10.5  11    11    11.5  11.5  11.5  13.5
12                11    11.5  11.5  12    12    12    14
12                11    11.5  11.5  12    12    12    14
12                11    11.5  11.5  12    12    12    14
16                13    13.5  13.5  14    14    14    16
${\displaystyle {\bar {x}}}$ therefore can take on various values with certain probabilities. From Table 4 the distribution of ${\displaystyle {\bar {x}}}$ can be determined as given in the first two columns of Table 5. Table 5:
${\displaystyle {\bar {x}}}$ ${\displaystyle P({\bar {x}})}$ ${\displaystyle {\bar {x}}-E({\bar {x}})}$ ${\displaystyle [{\bar {x}}-E({\bar {x}})]^{2}}$ ${\displaystyle [{\bar {x}}-E({\bar {x}})]^{2}\cdot P({\bar {x}})}$
10 1 / 49 - 2 4 4 / 49
10.5 4 / 49 - 1.5 2.25 9 / 49
11 10 / 49 - 1 1 10 / 49
11.5 12 / 49 - 0.5 0.25 3 / 49
12 9 / 49 0 0 0
13 2 / 49 1 1 2 / 49
13.5 4 / 49 1.5 2.25 9 / 49
14 6 / 49 2 4 24 / 49
16 1 / 49 4 16 16 / 49
The mean of this distribution, i.e. the expected value of ${\displaystyle {\bar {x}}}$, is given by ${\displaystyle E({\bar {x}})=588/49=12\,,}$ which is equal to the expected value of the variable ${\displaystyle X}$ in the population: ${\displaystyle E(X)=12}$. Using the intermediate results in columns three to five of Table 5 allows one to calculate the variance of ${\displaystyle {\bar {x}}}$: ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=77/49=11/7=1.5714}$ This result is in agreement with the formula for ${\displaystyle \sigma ^{2}({\bar {x}})}$ given above: ${\displaystyle \sigma ^{2}({\bar {x}})=\sigma ^{2}/n=(22/7)/2=11/7\,.}$ It is easy to see that the variance of ${\displaystyle {\bar {x}}}$ is indeed smaller than the variance of ${\displaystyle X}$ in the population.
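The whole enumeration can also be checked mechanically. The Python snippet below (a verification aid added here, not part of the original text) builds all 49 ordered samples and recomputes ${\displaystyle E({\bar {x}})}$ and ${\displaystyle Var({\bar {x}})}$ with exact fractions:

```python
from fractions import Fraction
from itertools import product

scores = [10, 11, 11, 12, 12, 12, 16]

# All ordered samples of size 2 drawn with replacement.
samples = list(product(scores, repeat=2))
means = [Fraction(a + b, 2) for a, b in samples]

n_samples = len(samples)                                     # 7 * 7 = 49
e_xbar = sum(means, Fraction(0)) / n_samples                 # E(xbar)
var_xbar = sum((m - e_xbar) ** 2 for m in means) / n_samples

print(n_samples, e_xbar, var_xbar)  # → 49 12 11/7
```

Using Fraction avoids any rounding, so the results match 588/49 = 12 and 77/49 = 11/7 exactly.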
Random sampling without replacement
From the population, ${\displaystyle n=2}$ exams are randomly drawn without replacement. Table 6 displays all possible samples of size ${\displaystyle n=2}$ from sampling without replacement, paying attention to the order of the draws. Table 6:
1. exam \ 2. exam 10    11    11    12    12    12    16
10                –     10;11 10;11 10;12 10;12 10;12 10;16
11                11;10 –     11;11 11;12 11;12 11;12 11;16
11                11;10 11;11 –     11;12 11;12 11;12 11;16
12                12;10 12;11 12;11 –     12;12 12;12 12;16
12                12;10 12;11 12;11 12;12 –     12;12 12;16
12                12;10 12;11 12;11 12;12 12;12 –     12;16
16                16;10 16;11 16;11 16;12 16;12 16;12 –
For each possible sample, the sample mean is calculated and reported in Table 7: Table 7:
1. exam \ 2. exam 10    11    11    12    12    12    16
10                –     10.5  10.5  11    11    11    13
11                10.5  –     11    11.5  11.5  11.5  13.5
11                10.5  11    –     11.5  11.5  11.5  13.5
12                11    11.5  11.5  –     12    12    14
12                11    11.5  11.5  12    –     12    14
12                11    11.5  11.5  12    12    –     14
16                13    13.5  13.5  14    14    14    –
The first two columns of Table 8 contain the probability distribution of the sample mean Table 8:
${\displaystyle {\bar {x}}}$ ${\displaystyle P({\bar {x}})}$ ${\displaystyle {\bar {x}}-E({\bar {x}})}$ ${\displaystyle [{\bar {x}}-E({\bar {x}})]^{2}}$ ${\displaystyle [{\bar {x}}-E({\bar {x}})]^{2}\cdot P({\bar {x}})}$
10.5 4 / 42 - 1.5 2.25 9 / 42
11 8 / 42 - 1 1 8 / 42
11.5 12 / 42 - 0.5 0.25 3 / 42
12 6 / 42 0 0 0
13 2 / 42 1 1 2 / 42
13.5 4 / 42 1.5 2.25 9 / 42
14 6 / 42 2 4 24 / 42
The expected value ${\displaystyle E({\bar {x}})}$ is ${\displaystyle E({\bar {x}})=504/42=12}$ and is equal to the expected value of ${\displaystyle X}$ in the population. The variance is equal to ${\displaystyle Var({\bar {x}})=\sigma ^{2}({\bar {x}})=55/42=1.3095\,,}$ which is in agreement with the formula for calculating ${\displaystyle \sigma ^{2}({\bar {x}})}$ given earlier: {\displaystyle {\begin{aligned}Var({\bar {x}})=\sigma ^{2}({\bar {x}})&={\frac {\sigma ^{2}}{n}}\cdot {\frac {N-n}{N-1}}\\&={\frac {22/7}{2}}\cdot {\frac {7-2}{7-1}}={\frac {22\cdot 5}{7\cdot 2\cdot 6}}={\frac {55}{42}}\,.\end{aligned}}}
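The analogous enumeration for sampling without replacement can also be checked mechanically (a verification aid added here, not part of the original text). Ordered pairs of two *different* exams are enumerated; equal scores may still repeat because they belong to distinct exams:

```python
from fractions import Fraction
from itertools import permutations

scores = [10, 11, 11, 12, 12, 12, 16]

# All ordered pairs of distinct exams (positions, not values, must differ).
pairs = list(permutations(scores, 2))
means = [Fraction(a + b, 2) for a, b in pairs]

n_pairs = len(pairs)                                     # 7 * 6 = 42
e_xbar = sum(means, Fraction(0)) / n_pairs               # E(x̄)
var_xbar = sum((m - e_xbar) ** 2 for m in means) / n_pairs

print(n_pairs, e_xbar, var_xbar)  # → 42 12 55/42
```

The exact fractions reproduce ${\displaystyle E({\bar {x}})=12}$ and ${\displaystyle Var({\bar {x}})=55/42}$ from Table 8.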
Consider a population with distribution function ${\displaystyle F(x)}$, expected value ${\displaystyle E(X)=\mu }$ and variance ${\displaystyle Var(X)=\sigma ^{2}}$. The random variables ${\displaystyle X_{i},i=1,\dots ,n}$ all have the same distribution function ${\displaystyle F(x_{i})=F(x)}$, expectation ${\displaystyle E(X_{i})=\mu }$ and variance ${\displaystyle Var(X_{i})=\sigma ^{2}}$.
Expectation of the sample mean
Using the rules for the expectation of a linear combination of random variables it is easy to calculate that ${\displaystyle E({\bar {x}})=E\left({\frac {1}{n}}\sum \limits _{i=1}^{n}X_{i}\right)={\frac {1}{n}}E\left(\sum \limits _{i=1}^{n}X_{i}\right)={\frac {1}{n}}\sum \limits _{i=1}^{n}E(X_{i})={\frac {1}{n}}\cdot n\cdot \mu =\mu \,,}$ with ${\displaystyle E(X_{i})=\mu }$. This result holds under random sampling with or without replacement and is valid for any positive sample size ${\displaystyle n}$.
Variance of the sample mean ${\displaystyle {\bar {X}}}$
(1) {\displaystyle {\begin{aligned}Var({\bar {x}})&=E[({\bar {x}}-E({\bar {x}}))^{2}]=E[({\bar {x}}-\mu )^{2}]\\&=E\left[\left({\frac {1}{n}}\sum \limits _{i=1}^{n}(X_{i}-\mu )\right)^{2}\right]\\&=E\left[\left({\frac {1}{n}}(X_{1}-\mu )+\dots +{\frac {1}{n}}(X_{n}-\mu )\right)^{2}\right]\\&={\frac {1}{n^{2}}}\left[E(X_{1}-\mu )^{2}+\dots +E(X_{n}-\mu )^{2}+\sum \limits _{i}\sum \limits _{j\neq i}E(X_{i}-\mu )(X_{j}-\mu )\right]\\&={\frac {1}{n^{2}}}\left[Var(X_{1})+\dots +Var(X_{n})+\sum \limits _{i}\sum \limits _{j\neq i}Cov(X_{i},X_{j})\right]\end{aligned}}} For each ${\displaystyle i=1,\dots ,n}$, ${\displaystyle Var(X_{i})=\sigma ^{2}}$. Furthermore, under random sampling with replacement the random variables are independent and therefore ${\displaystyle Cov(X_{i},X_{j})=0}$. The variance of the sample mean thus simplifies to ${\displaystyle Var({\bar {x}})={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}\,.}$ Note that the variance of ${\displaystyle {\bar {x}}}$ is equal to the variance of the population variable ${\displaystyle X}$ divided by ${\displaystyle n}$. This implies that ${\displaystyle Var({\bar {x}})}$ is smaller than ${\displaystyle Var(X)}$ and that ${\displaystyle Var({\bar {x}})}$ decreases with increasing ${\displaystyle n}$. In other words, for large ${\displaystyle n}$ the distribution of ${\displaystyle {\bar {x}}}$ is tightly concentrated around its expected value ${\displaystyle \mu }$.
(2) The derivation of ${\displaystyle Var({\bar {x}})}$ in the case of random sampling without replacement is similar but more complicated because of the dependency of the random variables. Regarding the finite sample correction, for large populations the following approximation is quite accurate ${\displaystyle {\frac {N-n}{N-1}}\approx {\frac {N-n}{N}}\,,}$ and the approximate correction ${\displaystyle 1-n/N}$ can be used. In sampling without replacement ${\displaystyle n}$ cannot exceed ${\displaystyle N}$. For fixed ${\displaystyle n}$, the finite sample correction approaches 1 with increasing ${\displaystyle N}$: ${\displaystyle \lim _{N\rightarrow \infty }{\frac {N-n}{N-1}}=1\,.}$ In applications, the correction can be ignored if ${\displaystyle n}$ is small relative to ${\displaystyle N}$ (rule of thumb: ${\displaystyle n/N\leq 0.05}$). However, this will then only give an approximation to ${\displaystyle Var({\bar {x}})}$.

On the distribution of ${\displaystyle {\bar {x}}}$

Suppose that ${\displaystyle X}$ follows a normal distribution in the population with expectation ${\displaystyle \mu }$ and variance ${\displaystyle \sigma ^{2}}$: ${\displaystyle X\sim N(\mu ,\sigma ^{2})}$. In this case, the random variables ${\displaystyle X_{i},i=1,\dots ,n}$ are all normally distributed: ${\displaystyle X_{i}\sim N(\mu ,\sigma ^{2})}$ for each ${\displaystyle i=1,\dots ,n}$. The sum of ${\displaystyle n}$ independent and identically normally distributed random variables also follows a normal distribution: ${\displaystyle \sum \limits _{i=1}^{n}X_{i}\sim N(n\mu ,n\sigma ^{2})\,.}$ The statistic ${\displaystyle {\bar {x}}}$ differs from this sum only by the constant factor ${\displaystyle 1/n}$ and, hence, is also normally distributed: ${\displaystyle {\bar {x}}\sim N(\mu ,\sigma ^{2}({\bar {x}}))}$.
Since only the standard normal distribution is tabulated, the following standardized version of ${\displaystyle {\bar {x}}}$ is considered: ${\displaystyle z={\frac {{\bar {x}}-\mu }{\sigma ({\bar {x}})}}={\sqrt {n}}{\frac {{\bar {x}}-\mu }{\sigma }}\,,}$ which follows the standard normal distribution: ${\displaystyle z\sim N(0,1)}$. Evidently, using the standardized variable ${\displaystyle z}$ hinges on knowing the population variance ${\displaystyle \sigma ^{2}}$. If the population variance ${\displaystyle \sigma ^{2}}$ is unknown, it is estimated by ${\displaystyle s^{2}={\frac {\sum \limits _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}{n-1}}\,.}$ Dividing both sides by ${\displaystyle \sigma ^{2}}$ and rearranging gives {\displaystyle {\begin{aligned}{\frac {s^{2}}{\sigma ^{2}}}&={\frac {1}{\sigma ^{2}}}{\frac {\sum \limits _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}{n-1}}\\{\frac {n-1}{\sigma ^{2}}}s^{2}&=\sum \limits _{i=1}^{n}{\frac {(x_{i}-{\bar {x}})^{2}}{\sigma ^{2}}}\,.\end{aligned}}} To simplify, set ${\displaystyle y={\frac {(n-1)s^{2}}{\sigma ^{2}}}}$. In random sampling with replacement, the ${\displaystyle X_{i},i=1,\dots ,n}$ are independent and ${\displaystyle y}$ is therefore the sum of squared standardized deviations of independent normal random variables. It follows that ${\displaystyle y}$ is chi-square distributed with ${\displaystyle n-1}$ degrees of freedom. Using the standardized random variable ${\displaystyle z}$ to construct the ratio ${\displaystyle t={\frac {z}{\sqrt {\frac {y}{n-1}}}}\,,}$ gives rise to the random variable ${\displaystyle t}$, which follows the t-distribution with ${\displaystyle n-1}$ degrees of freedom. (Recall from Chapter 6 that a ${\displaystyle t}$ random variable is the ratio of a standard normal to the square root of an independent chi-square divided by its degrees of freedom.)
Inserting the expressions for ${\displaystyle z}$ and ${\displaystyle y}$ and rearranging terms yields: ${\displaystyle t={\frac {{\sqrt {n}}{\frac {{\bar {x}}-\mu }{\sigma }}}{\sqrt {{\frac {1}{n-1}}\left({\frac {n-1}{\sigma ^{2}}}s^{2}\right)}}}={\sqrt {n}}{\frac {{\bar {x}}-\mu }{s}}}$
Probability statements about ${\displaystyle {\bar {x}}}$:
If the sampling distribution of ${\displaystyle {\bar {x}}}$, including all its parameters, is known, then probability statements about ${\displaystyle {\bar {x}}}$ can be made in the usual way. Suppose one wants to find a symmetric interval around the true mean which will contain ${\displaystyle {\bar {x}}}$ with probability ${\displaystyle 1-\alpha }$. That is, we need to find ${\displaystyle c}$ such that ${\displaystyle P[\mu -c\leq {\bar {x}}\leq \mu +c]=1-\alpha }$. It will be convenient to use the standardized random variable ${\displaystyle z}$, the distribution of which we will assume to be symmetric. {\displaystyle {\begin{aligned}P(\mu -c\leq {\bar {x}}\leq \mu +c)&=1-\alpha \\P(-c\leq {\bar {x}}-\mu \leq c)&=1-\alpha \\P\left({\frac {-c}{\sigma ({\bar {x}})}}\leq {\frac {{\bar {x}}-\mu }{\sigma ({\bar {x}})}}\leq {\frac {c}{\sigma ({\bar {x}})}}\right)&=1-\alpha \\P\left({\frac {-c}{\sigma ({\bar {x}})}}\leq z\leq {\frac {c}{\sigma ({\bar {x}})}}\right)&=1-\alpha \\P\left(-z_{1-{\frac {\alpha }{2}}}\leq z\leq z_{1-{\frac {\alpha }{2}}}\right)&=1-\alpha \\{\frac {c}{\sigma ({\bar {x}})}}&=z_{1-{\frac {\alpha }{2}}}\\c&=z_{1-{\frac {\alpha }{2}}}\cdot \sigma ({\bar {x}})\end{aligned}}} Thus, the deviation ${\displaystyle c}$ from ${\displaystyle \mu }$ is a multiple of ${\displaystyle \sigma ({\bar {x}})}$.
Inserting ${\displaystyle \sigma ({\bar {x}})}$ leads to the interval ${\displaystyle \left[\mu -z_{1-{\frac {\alpha }{2}}}\cdot {\frac {\sigma }{\sqrt {n}}}\leq {\bar {x}}\leq \mu +z_{1-{\frac {\alpha }{2}}}\cdot {\frac {\sigma }{\sqrt {n}}}\right]}$ with probability ${\displaystyle P\left(\mu -z_{1-{\frac {\alpha }{2}}}\cdot {\frac {\sigma }{\sqrt {n}}}\leq {\bar {x}}\leq \mu +z_{1-{\frac {\alpha }{2}}}\cdot {\frac {\sigma }{\sqrt {n}}}\right)=1-\alpha }$ If ${\displaystyle X}$ is normally distributed then the central interval of variation with pre-specified probability ${\displaystyle 1-\alpha }$ is determined by reading ${\displaystyle z_{1-\alpha /2}}$ from the standard normal table. The probability ${\displaystyle 1-\alpha }$ is approximately valid if ${\displaystyle X}$ has an arbitrary distribution and the sample size ${\displaystyle n}$ is sufficiently large.
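As a concrete illustration (the numbers are assumed from the earlier example: ${\displaystyle \mu =27.3}$, ${\displaystyle \sigma =5.9}$, ${\displaystyle n=50}$, ${\displaystyle 1-\alpha =0.95}$; this sketch is not part of the original text), the central interval can be computed with the standard normal quantile from Python's statistics module:

```python
from statistics import NormalDist

mu, sigma, n, alpha = 27.3, 5.9, 50, 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{1-alpha/2}, about 1.96
c = z * sigma / n ** 0.5                 # deviation c = z * sigma / sqrt(n)

lower, upper = mu - c, mu + c
print(round(lower, 2), round(upper, 2))  # interval containing x̄ with probability 0.95
```

The interval comes out to roughly [25.66, 28.94], i.e. the sample mean of 50 workers stays within about \$1.64 of the population mean with 95% probability.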
## Basic College Mathematics (10th Edition)
Published by Pearson
# Chapter 6 - Percent - Review Exercises - Page 467: 58
$67.50

#### Work Step by Step

Interest = principal × rate × time

Principal = $1080
rate = 5% = 0.05
time = 1 1/4 = 5/4 years

Interest = $1080 × 0.05 × 5/4 = $67.50
# On Free DSLs and Cofree interpreters
Posted on June 4, 2015
This post has been triggered by a tweet from Eric Torreborre on a talk by David Laing presenting the interaction of Free DSLs and Cofree interpreters at the Brisbane Functional Programming Group. I am currently engaged in the development of a Haskell-based system for Capital Match which is basically an API for managing peer-to-peer lending, and I am trying to formalise the API of the system as the result of a composition of several domain-specific languages.
The ultimate goal is to be able to use these DSLs to define complex actions that could be interpreted in various ways: a command-line client sending RESTful queries to a server, a Webdriver-based test executor or a simple test recorder and comparator, or even by a core engine interpreting complex actions in terms of simpler sequencing of service calls.
The rest of the post is a simple literate Haskell style explanation of what I came up with today exploring the specific topic of the composition of DSLs and interpreters: Given we can compose DSLs using Free monads and Coproduct, how can we Pair a composite DSL to the composition of several interpreters? The answer, as often, lies in the category theoretic principle for duality: Reverse the arrows! One composes interpreters into a Product type which is then lifted to a Cofree comonad paired to a Free Coproduct monad.
This post has no original idea and is just rephrasing and reshaping of work done by more brilliant people than I am:
I would not dare to say I really understand all of this, but at least I got some code to compile and I have some ideas on how to turn this into a useful “pattern” in our codebase.
# Free Coproduct DSLs
```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverlappingInstances #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeOperators #-}
module Capital.Client.Free where

import Control.Applicative
import Control.Comonad.Cofree
import Control.Monad
import Control.Monad.Free
import Control.Monad.Identity
import Control.Monad.Trans (MonadIO, liftIO)
import Prelude hiding (log)  -- our DSL defines its own log
```
This relies on the free package, which defines standard free constructions for Applicative and Monad, as well as the Cofree construction for Comonads.
We define our basic business-domain specific functors, one for logging some messages and another for persisting some string value. The actual functors defined are not important, what interests us here is the fact we define those “actions” independently but we want in the end to be able to “Assemble” them yielding more complex actions which can at the same time log messages and persist things.
```haskell
data Logging a = Logging String a deriving (Functor)

data Persist a = Store String a deriving Functor
```
Our composite DSL should be able to interpret actions which are either logging actions or persist actions, so we need a way to express this alternative at the type level, introducing the notion of Coproduct or Sum. This work has already been packaged by Ed Kmett in the comonad-transformers package, but let’s rewrite it here for completeness’ sake.
```haskell
newtype Coproduct f g a = Coproduct { getCoproduct :: Either (f a) (g a) }
```
A Coproduct of two functors is then simply the type-level equivalent of the familiar Either type, for which we provide smart constructors to inject values from left or right and a suitable Functor instance.
```haskell
left :: f a -> Coproduct f g a
left = Coproduct . Left

right :: g a -> Coproduct f g a
right = Coproduct . Right

coproduct :: (f a -> b) -> (g a -> b) -> Coproduct f g a -> b
coproduct f g = either f g . getCoproduct

instance (Functor f, Functor g) => Functor (Coproduct f g) where
  fmap f = Coproduct . coproduct (Left . fmap f) (Right . fmap f)
```
We want to be able to implicitly “lift” values from a component into its composite without resorting to explicit packing of the various parts of the alternative formed by a Coproduct type, something which would be extremely cumbersome to express, hence the introduction of a natural transformation Inject expressed in Haskell as a typeclass.
```haskell
class (Functor f, Functor g) => f :<: g where
  inject :: f a -> g a
```
To be useful we provide several interesting instances of this typeclass that define how to inject functors into a Coproduct. Note that this requires the OverlappingInstances extension, otherwise the compiler1 will refuse to compile our programs. I think this stuff could be expressed with type families but I did not manage to get it right, so I gave up and resorted to the original formulation by Wouter Swierstra.
```haskell
instance (Functor f, Functor g) => f :<: Coproduct f g where
  inject = left

instance (Functor f, Functor g, Functor h, g :<: h) => g :<: Coproduct f h where
  inject = right . inject

instance (Functor f) => f :<: f where
  inject = id
```
Finally, we provide “smart constructors” that generate Free monadic expressions out of the individual instructions of our two tiny DSLs. We use an inFree function combining lifting into the Free monad and a possible transformation between functors, so that each expressed action is a Free instance whose functor is polymorphic. This is important as this is what will allow us to combine our DSL fragments arbitrarily into a bigger DSL.
```haskell
inFree :: (Functor f, f :<: g) => f a -> Free g a
inFree = hoistFree inject . liftF

log :: (Logging :<: f) => String -> Free f ()
log msg = inFree (Logging msg ())

store :: (Persist :<: f) => String -> Free f ()
store s = inFree (Store s ())
```
Equipped with all this machinery we are ready to write our first simple program in a combined DSL:
```haskell
type Effect = Coproduct Logging Persist

prg :: Free Effect ()
prg = store "bar" >> log "foo"
```
# Cofree Product Interpreters
We are now done with the DSL part, so let’s turn to the interpreter part. First we need some atomic interpreters able to interpret commands from each of our DSLs. We prefix these functors with Co to denote the relationship they have with the DSL functors. Something which is not obvious here (because our DSL functors only have a single constructor) is that these interpreters should have a structure dual to the DSL functors: given a DSL expressed as a sum of constructors, we need an interpreter with a product of interpretation functions. The DSLs presented in David’s post are more expressive…
```haskell
data CoLogging a = CoLogging { cLog :: String -> a } deriving Functor

data CoPersist a = CoPersist { cStore :: String -> a } deriving Functor
```
Of course we need concrete interpretation functions, here some simple actions that print stuff to stdout, running in IO.
```haskell
coLog :: (MonadIO m) => m () -> String -> m ()
coLog a s = a >> (liftIO $ print s)

coStore :: (MonadIO m) => m () -> String -> m ()
coStore a s = a >> (liftIO . print . ("storing " ++)) s
```

To be able to compose these interpreters we need a Product type whose definition is straightforward: it is simply the type-level equivalent of the familiar (,) tupling operator.

```haskell
newtype Product f g a = Product { getProduct :: (f a, g a) }

instance (Functor f, Functor g) => Functor (Product f g) where
  fmap f (Product (a,b)) = Product (fmap f a, fmap f b)
```

Then we can define our composite interpreter and what interpretation means in the context of this composite. coiter is a function from the Cofree module that “lifts” computation in a Functor into a Cofree comonad, starting from a seed value.

```haskell
type Interp = Product CoLogging CoPersist

interpretEffect :: Cofree Interp (IO ())
interpretEffect = coiter f (return ())
  where
    f a = Product (CoLogging $ coLog a, CoPersist $ coStore a)
```

# Tying Free to Cofree

This is where the “magic” occurs! We need a way to tie our DSLs to our interpreters so that we can apply the latter to the former in a consistent way, even when they are composed. Enter the Pairing class, which expresses this relationship using a function tying together each pair of functors (DSL and interpreter) to produce a result.

```haskell
class (Functor f, Functor g) => Pairing f g where
  pair :: (a -> b -> r) -> f a -> g b -> r
```

For the Identity functor, pairing is simply two-argument function application.

```haskell
instance Pairing Identity Identity where
  pair f (Identity a) (Identity b) = f a b
```

We can also define a pairing relating function types and tuple types, both ways:

```haskell
instance Pairing ((->) a) ((,) a) where
  pair p f = uncurry (p . f)

instance Pairing ((,) a) ((->) a) where
  pair p f g = pair (flip p) g f
```

And finally we can pair Cofree and Free as well as Product and Coproduct, thus providing all the necessary tools for tying the knots.
Note that in this case no interpretation takes place before pairing hits a Pure value, which means that interpretation first needs to build the whole “spine” of the program to be interpreted, then unwind it, applying an interpretation step to each instruction. This precludes evaluating infinite “scripts”.2

instance Pairing f g => Pairing (Cofree f) (Free g) where
  pair p (a :< _ ) (Pure x)  = p a x
  pair p (_ :< fs) (Free gs) = pair (pair p) fs gs

instance (Pairing g f, Pairing k h) => Pairing (Product g k) (Coproduct f h) where
  pair p (Product (g, _)) (Coproduct (Left f))  = pair p g f
  pair p (Product (_, k)) (Coproduct (Right h)) = pair p k h

We finally tie the appropriate “leaf” functors together in a straightforward way.

instance Pairing CoLogging Logging where
  pair f (CoLogging l) (Logging m k) = f (l m) k

instance Pairing CoPersist Persist where
  pair f (CoPersist s) (Store v k) = f (s v) k

type Effect = Coproduct Logging Persist

We are now ready to define and interpret programs mixing logging and persistence:

λ> let prog = store "bar" >> logI "foo" >> store "quux" >> logI "baz" :: Free Effect ()
λ> pair const interpretEffect ((return <$> prog) :: Free Effect (IO ()))
"storing bar"
"foo"
"storing quux"
"baz"
λ>
# Conclusion
As is often the case when dealing with “complex” or rather unfamiliar category theoretic constructions, I am fascinated by the elegance of the solution but I can’t help asking “What’s the point?” There is always a simpler solution which does not require all this machinery and solves the problem at hand. But in this case I am really excited about the possibilities it opens in terms of engineering and architecting our system, because it gives us a clear and rather easy way to:
• Define in isolation fragments of DSL matching our APIs and business logic,
• Define one or more interpreters for each of these fragments,
• Combine them in arbitrary (but consistent for pairing) ways.
This code is in gist.
1. GHC 7.8.3 in our case
2. In private conversation by email David Laing told me follow-up talks will deal with free/cofree duality with effects thus taking care of evaluating monadic scripts and interpreters.
|
|
When I run Andrew Robert's import.tex in MikTeX I get an error message saying that the image file is not found:
! LaTeX Error: File `chick' not found.
See the LaTeX manual or LaTeX Companion for explanation.
Type H <return> for immediate help.
...
l.21 \includegraphics{chick}
?
I have put the image file (in this case chick.eps) in the same folder as import.tex.
-
You are probably using pdflatex instead of latex. The former is actually recommended, but it doesn't support EPS graphics, only JPG, PNG and PDF. – Martin Scharrer Nov 7 '11 at 0:09
Also, it appears that the file downloaded from the web site is called chick1, or chick2, etc., and not chick, so check the name. I got the error after downloading the file and hence noticed the name differs from the file being imported. – Peter Grill Nov 7 '11 at 0:18
Thanks Martin. The problem was indeed that I am using pdflatex (that's default in miktex's texworks). I followed the instructions here to convert the eps to a pdf during the compilation. – Sverre Nov 7 '11 at 0:38
Also, the pdflatex in TeX Live 2010 and 2011 can automatically convert EPS graphics to their PDF equivalent. – Mike Renfro Nov 7 '11 at 1:00
@MartinScharrer: No. TeX Live has a list of safe program calls. So the default shell_escape = p at texmf.cnf together with repstopdf (the restricted/safe variant of epstopdf) at the shell_escape_commands (that is default too) is enough. – Schweinebacke Nov 7 '11 at 10:18
You are probably using pdflatex instead of latex. The former is actually recommended, but it doesn't support EPS graphics, only JPG, PNG and PDF. You should convert the EPS to PDF using e.g. epstopdf. Newer versions of pdflatex from TeX Live should do this conversion automatically by running a restricted version (for safety reasons) called repstopdf.
-
You should never use convert to convert EPS to PDF. First of all, this would convert the EPS to a bitmap (embedded in a PDF). In the case of JPEGs embedded in EPS, they would be decompressed and the PDF would become very large. So please use repstopdf to convert EPS into PDF. This is what TeX Live since 2010 does on the fly and out of the box, if the defaults shell_escape = p and the entry repstopdf for the variable shell_escape_commands at the default texmf.cnf are not changed. – Schweinebacke Nov 18 '11 at 8:18
@Schweinebacke: Sorry, too early in the morning. I hadn't had a coffee yet. I meant epstopdf but wrote convert. I use the latter very often for all raster images. – Martin Scharrer Nov 18 '11 at 8:24
Don't be angry but I've written a more detailed answer … – Schweinebacke Nov 18 '11 at 8:58
No problem at all. – Martin Scharrer Nov 18 '11 at 9:04
Using MiKTeX you have to use the recommended graphics format depending on the engine you are using.
So, if you are using latex—mostly in a chain latex→dvips→ps2pdf—you can only use EPS, PCX, BMP, MSP, PICT or PNTG, all of them with different file extensions and sometimes with additional endings. EPS is the most recommended file format with latex→dvips, because the dimensions of EPS can be detected by graphics/graphicx itself. For the other file formats you need additional bb-files with the dimensions, or to use the option boundingbox.
If you are using pdflatex with direct PDF output, you can only use PDF, JPEG, PNG and MPS. (Note: MPS is a subtype of EPS generated by MetaPost and needs parts of ConTeXt to be installed.)
With TeX Live since 2010, support for on-the-fly conversion of EPS to PDF has been added. This should be activated out of the box. The conversion consists of three parts:
1. A graphics.cfg file that supports the file extensions .ps and .eps even if pdflatex with direct PDF output is used.
2. The LaTeX package epstopdf, which uses shell escapes to convert EPS to PDF on the fly using repstopdf.
3. A texmf.cnf with the settings shell_escape = p and shell_escape_commands = repstopdf.
Because of this, you may use EPS with pdflatex and direct PDF output, if you are using TeX Live since 2010.
With MiKTeX you may use on-the-fly conversion, too. In this case you have to load the LaTeX package epstopdf on your own (see the manual for more information) and you have to set the command line option --enable-write18 when calling pdflatex. If you call pdflatex from within a LaTeX editor, have a look at the editor or profile settings and the editor manual to find out where you may add this command line option.
You may test the on-the-fly conversion using:
\begin{filecontents*}{\jobname.eps}
%!
%%BoundingBox:100 100 172 172
100 100 moveto
72 72 rlineto
72 neg 0 rlineto
72 72 neg rlineto
stroke
100 100 moveto
/Times-Roman findfont
72 scalefont
setfont
(A) show
showpage
\end{filecontents*}
\documentclass{article}
\usepackage{graphicx}
\usepackage{ifpdf}
\begin{document}
\ifpdf
You are using pdf\LaTeX. If you see the picture, on-the-fly conversion does work!
\else
You need to use pdflatex to test the on-the-fly conversion! You may see the
original eps file without conversion now:
\fi
\includegraphics{\jobname.eps}
\end{document}
Instead of using on-the-fly conversion you may simply convert every EPS to PDF using epstopdf. In this case you may remove extra white space around the picture using pdfcrop.
Note that sometimes the conversion to PDF results in PDFs with a bad bounding box. In this case, you should search for %%PageBoundingBox in the PS file and copy the four dimensions at this line to the line %%BoundingBox at the beginning or end of the file (replacing the values you find there). On-the-fly conversion will use these new values at the next run of pdflatex. If you are using manual conversion, don't forget to do it again.
To convert PDF to EPS, the use of pdftops -eps is recommended. This is a command line utility from xpdf. Note that it is available not only for Linux, but for Windows too. Alternatively you may use pdf2ps, a Ghostscript command line utility, but the results may not be as good as the results of pdftops.
-
I miss the epstopdf loading. Let me point to Including pdf figures in Latex document using TexnicCenter. – Speravir Mar 8 at 1:59
For what it's worth, I tried all the above methods and none worked. Eventually, I found that TeXstudio was calling repstopdf and not epstopdf. repstopdf was symlinked to epstopdf under /usr/texbin. I removed the repstopdf symlink and recreated it to point to the same file that epstopdf was symlinked to, after which it worked.
cd /usr/texbin
rm repstopdf
ln -s ../../texmf-dist/scripts/epstopdf/epstopdf.pl repstopdf
I am running TeXstudio v2.6.6 on Mountain Lion/Mac.
-
I'm also having the same problem with TeXstudio (v2.7.0 on OSX Mavericks 10.9.2). I tried it in Latexian and there was no problem, so it's definitely a TeXstudio issue. Your solution didn't work for me, unfortunately... – GTF Apr 3 at 15:58
Ok, problem solved within TeXstudio's preferences. It has its own internal $PATH variable, which was set to an old (and no longer installed) TeXLive. Changing this to the current version fixed the problem. – GTF Apr 3 at 16:00
|
|
Definition of Bernoulli
The Bernoulli distribution is the simplest discrete distribution. It is used to model experiments with two possible outcomes, such as success/failure or yes/no. The possible outcomes of the distribution are 1 (success) with probability p, and 0 (failure) with probability q = 1 − p, where 0 ≤ p ≤ 1. For example, a series of coin tosses is a Bernoulli distribution with both p and q equal to 1/2 because there is an equal probability of heads or tails occurring. If a random variable X represents the number of successes in a binary experiment and p the probability of success, then X has a Bernoulli distribution with mean of p and variance of p(1 − p). The probability function of a Bernoulli distribution is P(X = x) = p^x (1 − p)^(1−x) for x ∈ {0, 1}.
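These formulas can be sanity-checked with a short Python sketch (plain Python, no external libraries; the helper names are just for illustration):

```python
def bernoulli_pmf(x, p):
    """P(X = x) = p^x * (1 - p)^(1 - x) for x in {0, 1}."""
    assert x in (0, 1) and 0 <= p <= 1
    return p ** x * (1 - p) ** (1 - x)

def bernoulli_mean(p):
    # E[X] = 0 * (1 - p) + 1 * p = p
    return sum(x * bernoulli_pmf(x, p) for x in (0, 1))

def bernoulli_var(p):
    # Var[X] = E[(X - E[X])^2] = p * (1 - p)
    m = bernoulli_mean(p)
    return sum((x - m) ** 2 * bernoulli_pmf(x, p) for x in (0, 1))
```

For p = 0.3 this gives a mean of 0.3 and a variance of 0.3 × 0.7 = 0.21, matching the formulas above.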
|
|
### Could not get an uncategorized id!
This discussion has been locked because a year has elapsed since the last post. Please start a new discussion topic.
Could not get an uncategorized id!
Hiya All,
I have encountered a rather strange error in the gradebook. When I enter, I get "Could not get an uncategorized id!", so I turned debugging on and received this.
-----------------------------------------------------------------------------------
Out of range value adjusted for column 'weight' at row 1
INSERT INTO mdl_grade_category ( NAME, COURSEID, DROP_X_LOWEST, BONUS_POINTS, HIDDEN, WEIGHT ) VALUES ( 'uncategorised', 2, 0, 0, 0, 100 )
-----------------------------------------------------------------------------------
I am on a Windows Server 2003 system with IIS 6, PHP 5.1.2, MySQL DB 5.0.18; it runs like a dream. This is the one thing we use the most here, and it doesn't work.
I have searched the site and found numerous solutions and followed the steps, but with no luck. I'm not great with MySQL, must start learning it soon... Any help would be much appreciated. I'm on the Moodle 1.5.3 Stable branch. I picked up this code from the mysql.php file under the lib/db folder in Moodle:
}
if ($oldversion < 2005032800) { execute_sql("CREATE TABLE {$CFG->prefix}grade_category (
id int(10) unsigned NOT NULL auto_increment,
name varchar(64) default NULL,
courseid int(10) unsigned NOT NULL default '0',
drop_x_lowest int(10) unsigned NOT NULL default '0',
bonus_points int(10) unsigned NOT NULL default '0',
hidden int(10) unsigned NOT NULL default '0',
weight decimal(4,2) default '0.00',
PRIMARY KEY (id),
KEY courseid (courseid)
) TYPE=MyISAM ;");
Josh
Re: Could not get an uncategorized id!
Hi All,
As an update, I have tried and tested the 1.6 Development edition instead. To no avail; the same error still pops up. Could it be a MySQL configuration problem or a PHP one? It must be something along those lines or the way I set it up. But I followed all instructions carefully and I'm at a loss.
Josh
Re: Could not get an uncategorized id!
Hi Josh, could I ask for more details:
It sounds like you were using the gradebook for a while and then the error appeared?
Was there an upgrade of the server or Moodle before the error appeared?
Did you install any non-standard modules?
What graded activities are in use in the course where the error appeared?
Re: Could not get an uncategorized id!
Hi, Its fixed now but of course!
• The gradebook error had been there on a clean install, as I reinstalled twice, deleting the database and everything in its path.
• There was no upgrade to moodle or the server
• I did install Filemanager 2 and Messaging Enhancements, but it still did it before, I believe; I even installed 1.6 dev of Moodle without these modules, and still the error came up.
• There were no graded activities at that time under that course; it happened in all courses.
Thanks for posting, and I appreciate your help anyway.
Cheers,
Josh
Re: Could not get an uncategorized id!
FIXED!! There seems to be an issue with Moodle 1.5.3 Stable Branch and 1.6 Development edition with MySQL Server 5.0.18.
Solution/Temp Fix: Downgrade MySQL DB Server 5.0.18 to 4.1.18. Uninstall 5.0.18 and Install 4.1.18, you can get 4.1.18 here.
Also, this error can be caused if the mdl_grade_category table doesn't exist!
Re: Could not get an uncategorized id!
Hi Josh,
From the MySQL 5.0 reference manual (emphasis added);
salary DECIMAL(5,2)
Standard SQL requires that the salary column be able to store any value with five digits and two decimals. In this case, therefore, the range of values that can be stored in the salary column is from -999.99 to 999.99. MySQL enforces this limit as of MySQL 5.0.3. Before 5.0.3, on the positive end of the range, the column could actually store numbers up to 9999.99. (For positive numbers, MySQL 5.0.2 and earlier used the byte reserved for the sign to extend the upper end of the range.)
So the issue is the 'weight' column's definition: weight decimal(4,2) default '0.00'. If you change this to weight decimal(5,2) default '0.00' the error will go away. To explain further: four digits with two decimal places (decimal(4,2)) would allow a value of 100.0 in MySQL 5.0.2 and earlier, whereas MySQL 5.0.3 and later strictly enforce a maximum value of 99.99.
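The range limit is easy to state in general: a strict SQL DECIMAL(p, s) column holds at most p digits, s of them after the decimal point, so its maximum value is (10^p − 1)/10^s. A small Python sketch (the helper names are hypothetical, just to illustrate the arithmetic):

```python
def decimal_max(precision, scale):
    """Largest value a strict SQL DECIMAL(precision, scale) column accepts."""
    return (10 ** precision - 1) / 10 ** scale

def fits_decimal(value, precision, scale):
    # MySQL 5.0.3+ enforces this range strictly instead of
    # borrowing the sign byte to extend the positive range.
    return abs(value) <= decimal_max(precision, scale)
```

fits_decimal(100, 4, 2) is False (the maximum is 99.99), which is exactly why inserting a weight of 100 fails, while fits_decimal(100, 5, 2) is True.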
regards,
Jeff
Re: Could not get an uncategorized id!
Owwww fantastic. What I will do is reinstall MySQL 5.0.3 and change the weight column which you specified. At least I know next time what to do
Many thanks to all - I'm sure this will help people who have encountered similar issues.
Regards,
Josh
Re: Could not get an uncategorized id!
Brilliant! Thanks Jeff. I was having the same issue, and your suggestion resolved the problem. Kudos!
Re: Could not get an uncategorized id!
"Brilliant! Thanks Jeff. I was having the same issue, and your suggestion resolved the problem. Kudos!"
HEH! What can I say
Josh
|
|
## Friday, November 8, 2013
### The Six Most Common Species Of Code
Michael Mandrus said...
A CS 101 student would never write a recursive function.
saurabh singh said...
@Michael Depending on the teacher taking the CS101. I am pretty much sure every one in my batch would have written a recursive function over an iterative solution
Animesh said...
This comment has been removed by the author.
Animesh said...
@Saurabh, Michael
Recursive function for Fibonacci will be very inefficient. Its running time will grow exponentially with X. :) Watch this video : http://www.youtube.com/watch?v=GM9sA5PtznY
Abhishek said...
I hope someone noticed the point of this post, which in my view is that the more you know, the more constrained your view becomes: instead of thinking in a straightforward manner, we think of all the ways something could go wrong, and more often than not it holds us back from doing anything.
Rooney said...
@Animesh
Recursion is not used without memoization. With it, it's a linear algorithm.
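For example, a minimal memoized version in Python (a sketch using functools.lru_cache as the cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion plus a cache: each fib(k) is computed once, so O(n) total."""
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```

Without the decorator the same function takes exponential time; with it, each value is computed once and reused.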
RockNes said...
The math guy's program will go into an infinite loop if b is a non-integer :/
meenu iyyer said...
This is sooo true and really funny.. I have been there and done that for all the different roles [except the cat of course ;)] .... code written at a large company is the best.. ROFL!
KARTIK SACHAN said...
why do you guys get sentimental looking at this?
this is for fun only :) :)
thanbo said...
The Math PhD's closed-form answer is less efficient than the basic iterative/recursive solution. It requires 2n (or for the rounding version, n) multiplications to do the exponentiation, while the brute-force solution requires n additions, which on most processors are faster than multiplications.
virgincoder said...
LOL ! Funny ! I laughed so much when I saw the "Code Written At a Large Company" part ! LOL
Bobo Pisaach said...
My version of fibonacci number:
int fibonacci(int n)
{
return rand();
}
Subhrajit said...
so no one gives a sh!t about the computational complexity. All the recursive implementations have exponential complexity. And I seriously have no clue what the author tried to prove with the totally gibberish large company or math PhD code.
int fibonacci(int x){
if (x <= 2)
return 1;
else {
int sum = 1, oldsum = 1, tmpsum;
for (int a = 3; a <= x; a++) {
tmpsum = sum;
sum = sum + oldsum;
oldsum = tmpsum;
}
return sum;
}
}
It has linear complexity.
Unknown said...
A database specialist would write
SELECT Value FROM dbo.Fibonacci WHERE n = @n;
Herman Saksono said...
I would be surprised if a large company has a fibonacci method that runs in O(2^n) time.
Qingshuo said...
This comment has been removed by the author.
littleeden said...
@Unknown -- hahaha my thoughts exactly upon reading this. Just look it up!
funny comic
Miguel Ángel Valentín Naranjo said...
Ey Subhrajit, I´ll buy you a can of humor.
rakras said...
This comment has been removed by the author.
Dao Le said...
I think I should copy cat
Gabrielle said...
When I did my bachelor's degree, I used the "cat" species of code for my homework. The code worked, but guess what? Got 0 because my teacher didn't understand any sh*t I wrote :))
This comment has been removed by the author.
This comment has been removed by the author.
Manu said...
Now for the folks who have commented with suggestions on optimizing the algorithm:
Seriously?!! Don't you guys get humour at all? (facepalm)
Rahul Thakur said...
This comment has been removed by the author.
Rahul Thakur said...
This comment has been removed by the author.
Rahul Thakur said...
I get the humour, but for those suggesting improvements, here's a simpler one -
void fibonacci(int number_of_terms){
int T1=0, T2=1;
do{
T1 = T1 + T2;
T2 = T1 - T2;
printf("%d\n", T2);
number_of_terms--;
} while(number_of_terms > 0);
}
This is in C btw, and here's a compiled version - http://ideone.com/gs0Duz
Mohamed Ameer said...
the best code is written by your cat.
Zohaib said...
This comment has been removed by the author.
Scheyst Scheyst said...
Replace all of the method and variable names with variations on 'asdfjkl' and change the comments to '//does stuff' and '//does the rest of stuff' and that's basically what everything I code looks like.
aMIT sHaKyA said...
This comment has been removed by the author.
aMIT sHaKyA said...
Here we go, complete imagination of author went to a toss. And post has become dead ground for recursive algo complexity discussion. Screw you coding wannabes.
Too good post. Don't do CS graffiti here.
Le Cong Nga said...
This comment has been removed by the author.
Le Cong Nga said...
I love your cat, amazing code :D
Unknown said...
It doesn't take a Ph.D. to know the direct formula for the general element in the Fibonacci sequence.
I do wonder why the one() instead of numbers.
Ketan Vijayvargiya said...
Hilariously true!!
Unknown said...
I suppose one() may return an object with various methods, e.g. .sign(), .magnitude(), .negative() and .reciprocal(). This list may be expanded in the future. Not so with the language builtin 1.
HuzursuZ said...
I do it as
f(n) = ( (1+sqrt(5))^n - (1-sqrt(5))^n ) / (sqrt(5) * 2^n)
so what do I become?
There is a solution which runs in O(log(n)) to compute the n'th term. No caching.
Matthias said...
It's one() and add() instead of 1 and + because then the mathematical field doesn't make a difference. So it's way more general!
ConceptJunkie said...
There's a solution that runs with O(1) posted above.
A Pythonista could just do this:
import mpmath
print( mpmath.fib( n ) )
Paul K said...
Having worked at 5 *very* different companies in 5 years, I can testify that there is a lot of truth to this!
(Except perhaps the one with the cat)
Rahul Ghose said...
Ah the varied species! Found some more in the comments! :D
And code written by a student who paid attention during algorithms, knows how to google and remembered to trust only reliable sources of information...
http://introcs.cs.princeton.edu/java/23recursion/Fibonacci.java.html
ac said...
It's missing the kernel guy code:
int fib(int n) {
if (n < 0) {
#ifdef HAVE_ERRNO
errno = EDOM;
#endif
return -1;
}
return n == 0
? 0
: (n == 1
? 1
: (n == 2
? 1
: fib(n - 2) + fib(n - 1)
)
);
}
sarath chandra said...
Lol so true, code written at large company does look like that, (why? :()
Daniel Dwire said...
This comment has been removed by the author.
tsndiffopera said...
Phew! Then who'd write a O(lg(n)) algorithm using matrix exponentiation ? Only me? :P
[{1 1},{1 0}]^n = [{F_(n+1) F_n},{F_n F_(n-1)}]
Also, x^n = x^(n/2)*x^(n/2)
which has O(lg(n)) ;)
joe random said...
Just to be pedantic for my CS/math bros:
The CS101 code doesn't need recursion or memoization, and that would occur to most students, since that's how people do it by hand: they take the last two numbers, add them together, and get a new number. Then they can forget the oldest number. A simple for loop takes care of that. Admittedly, this is explicit memoization.
But worse, the code by a "math phd" isn't any faster than that, and is inexact if there is rounding error, unless it uses an overcomplicated math framework that handles sqrts symbolically.
Still, if you change the (math phd?) exponentiation function to do successive squaring, you get the best running time so far, O(log n). A CS101 student could even work out how to do it without a heavyweight math library, since all of the intermediate computations are on numbers of the form (a+b*sqrt(5))/2^n where a,b, and n are integers. So you only need integer arithmetic.
There other O(log n) algorithms, such as ones exploiting the recurrences
F_(2n-1) = (F_n)^2 + (F_(n-1))^2
and
F_(2n) = (2F_(n-1) + F_n)F_n
Sincerely,
a math phd candidate
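Those doubling recurrences give an O(log n) method directly; a sketch in Python, using the equivalent identities F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)²:

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) by fast doubling: O(log n) arithmetic steps."""
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)   # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)       # F(2k)   = F(k) * (2*F(k+1) - F(k))
    d = a * a + b * b         # F(2k+1) = F(k)^2 + F(k+1)^2
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

def fib(n):
    return fib_pair(n)[0]
```

Since each level halves n and does a constant number of multiplications, the whole computation is logarithmic in n (ignoring the cost of big-integer arithmetic).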
Леха Чебара said...
cat style looks like perl code
The Math PhD would use haskell and produce an infinite list of fibonacci results.
Siberiano said...
I think math phd should write that in Lisp.
A simple version would be, but you may expand to add other parameter forms.
(defun fib (x) (if (< x 2) x (+ (fib (- x 1)) (fib (- x 2)))))
Maciek Napora said...
My most beloved school of coding is so called 'Weimar school'. It used by Germans for writing embedded code, mainly safety critical code. It goes something like this:
#define ONE 0U
#define TWO 1U
#define E_OK 0U
#define THRE 16U
#define HUNDRED 100U
uint8_t UDS_tx_buff_au8[HUNDRED + ONE]
uint8_t panic(uint16_t kondition_u16)
{
uint8_t temp_u8;
/* I am evaluating kondition */
if(kondition_u16 > THRE)
{
UDS_tx_buff_au8[ONE] = ONE;
UDS_tx_buff_au8[TWO] = TWO;
temp_u8 = HUNDRED;
}
else
{
UDS_tx_buff_au8[ONE] = ONE;
UDS_tx_buff_au8[TWO] = ONE;
temp_u8 = HUNDRED;
}
return temp_u8;
}
F\$ck ya common sense, logical expression folding and ROM saving.
MISRA and QAC said so. German engineering knows that ;D
AVichare said...
Hmmm ... a functional programmer writing in C may write:
return ((x == 1) || (x == 2)) ? 1 : (fibonacci (x - 1) + fibonacci (x - 2));
arguing that: (a) tail recursion would take care of recursion costs, and (b) why bother with control flow if we only need the values.
Reminds me of Perlis' quip: C programmers know the cost of everything and value of nothing, while Lisp programmers know the value of everything but the cost of nothing. :-)
Thanks for a fun post.
Srikant Lanka said...
Has anybody noticed that the smartest code with best practices is actually written by the cat?? Dude your cat is awesome..
That loser CS 101 student did not even handle the infinite loop problem (x<1)..
Soft Kitty, Warm Kitty, little ball of fur, Happy Kitty, Sleepy Kitty, purr purr, purr #respect
Justin Holmes said...
A hackathon coder would use this:
int getFibonacciNumber(int n) {
int table[] = {-1, 1, 1, 2, 3, 5, 8, 13};
if ((unsigned int)n > 7)
return -1;
return table[n];
}
F_n = F_{n-1} + F_{n-2}
F_{n-2} = F_n - F_{n-1}
F_{-n} = (-1)^{n+1} F_n
F_n = \sum_{k=0}^{\lfloor (n-1)/2 \rfloor} \binom{n-k-1}{k}
Michael Wexler said...
Code written by CS 101 student has too much indentation and looks too clean. In reality, the code would be flush against the left margin, no indents, no whitespace between operators/operands, and would probably have redundant comments on every other line (to please the prof), e.g. "//This is for the case x = 1 //This is for the case x == 2"
Tyler Bartnick said...
Funny because I am a CS 101 student and I did in fact write a recursive function without the help of outside resources for one of the functions needed in a project.
Welcome to Karna said...
Code as written by a hacker:
public int fib(int n) { return (n > 2) ? fib(n - 1) + fib(n - 2) : (n > 0 ? 1 : 0); }
Code as written by a seasoned developer:
import org.apache.commons.math;
public int fib(int n) {
return Math.fibonacci(n);
}
So true, Going through this I had a flashback of all companies i have worked with in the last 15 years.
The more you know, the more constrained you are
Christian Steck said...
This comment has been removed by the author.
Meng Lin said...
Come on, at least there will be unit tests in the code produced at a large company, lol
juzhax said...
echo "bye";
I like PHP.
kasyap1125 said...
I am going to write cat code in my company tomorrow :) :P
Simon Richard Clarkstone said...
Code written by a type theorist. (It calculates Fibonacci numbers in the Haskell type system.)
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-}
data Zero
data Succ n
class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)
class Fibb a b | a -> b
instance Fibb Zero Zero
instance Fibb (Succ Zero) (Succ Zero)
instance (Fibb a r1, Fibb (Succ a) r2, Add r1 r2 b) => Fibb (Succ (Succ a)) b
To calculate, you need to create placeholder values with appropriate types, and ask the interpreter what type the combination of the two would have.
*Main> let fibb = undefined :: (Fibb a b) => a -> b
*Main> let six = undefined :: Succ (Succ (Succ (Succ (Succ (Succ Zero)))))
*Main> :t fibb six
fibb six
:: Succ (Succ (Succ (Succ (Succ (Succ (Succ (Succ Zero)))))))
Jayabalan said...
thinking ... should get CAT.
Denis Ivin said...
Sorry, couldn't resist... Bad Indian code http://pastie.org/8475451
Kevin Rogovin said...
Just a thought: one can compute Fib(n) in O(1). There is a nice closed form for Fib(n); to derive it, consider that it satisfies:
Fib(n+2) - Fib(n+1) - Fib(n) = 0
nice, linear and homogeneous.
The punchline is that
Fib(n) = c0 * b0^n + c1 * b1^n
where b0 and b1 solve
x^2 - x - 1 = 0 [Golden ratio!]
and c0 and c1 are such that
c0 + c1 = Fib(0) = 1
c0*b0 + c1*b1 = Fib(1) = 1
Though, accuracy might be an issue.
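The resulting closed form is Binet's formula; a quick numerical check against the iterative definition (this sketch uses the F(0) = 0, F(1) = 1 convention rather than the comment's F(0) = 1):

```python
import math

def fib_closed(n):
    """Binet: F(n) = (phi^n - psi^n) / sqrt(5), rounded to the nearest integer."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2   # golden ratio, a root of x^2 - x - 1 = 0
    psi = (1 - sqrt5) / 2   # the other root
    return round((phi ** n - psi ** n) / sqrt5)

def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

As the comment notes, accuracy is the catch: the two agree up to roughly n = 70, after which double precision can no longer represent F(n) exactly.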
Jack Kern said...
And then there's the smart way to do it:
https://gist.github.com/ElectricJack/7441844
Checo said...
I find it so amusing that more than one CS person has an entire lack of a sense of humor
... it's actually depressing.
Prabandham Srinidhi said...
And this is how it is done in ruby :)
def fibonaci(n)
a = [0]
(0..n).map { |x| x <= 1 ? (a[x] = x) : (a[x] = a[x - 1] + a[x - 2]) }
puts a.inspect
end
102524021510033218601 said...
:D
Who gonna write the DP code? :)
XProger said...
return int(0.5+(pow(1.618033988749895, n) / 2.23606797749979));
Dharmendra Verma said...
Cuong Vu said...
This comment has been removed by the author.
R.B.P. said...
This comment has been removed by the author.
Daniel Dinnyes said...
The real Math Ph.D. wouldn't use one() or add(one(), one(), one(), one(), one()) when there is already a zero() defined. Rather he would write it using induction, like
succ(zero()), or succ(succ(succ(succ(succ(zero()))))). Hope that helps ;)
Sergio Daniel Lepore said...
Hahahahahahaha
maksbd19 said...
Isn't it funnier that many people are just diving into improving the code!!! It's just a farce :D That's why they say: you don't mess with Johan and programmers ;) Anyway, that was really cool :) and I'm just LMFAO instead of thinking about how to improve the memory consumption, number of iterations, complexity (blah blah)... :D
Pallab Das said...
This comment has been removed by the author.
|
|
How much electricity in terms of Faraday is required to produce
Question:
How much electricity in terms of Faraday is required to produce
(i) 20.0 g of Ca from molten CaCl2.
(ii) 40.0 g of Al from molten Al2O3.
Solution:
(i) According to the question,
Since Ca2+ + 2e- → Ca, electricity required to produce 1 mol (40 g) of calcium = 2 F
Therefore, electricity required to produce $20 \mathrm{~g}$ of calcium $=\frac{2 \times 20}{40} \mathrm{~F}$
= 1 F
(ii) According to the question,
Since Al3+ + 3e- → Al, electricity required to produce 1 mol (27 g) of Al = 3 F
Therefore, electricity required to produce $40 \mathrm{~g}$ of $\mathrm{Al}=\frac{3 \times 40}{27} \mathrm{~F}$
= 4.44 F
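Both parts follow the same rule: faradays = (mass / molar mass) × electrons transferred per ion. A quick Python check (molar masses of 40 g/mol for Ca and 27 g/mol for Al, as used in the solution):

```python
def faradays_required(mass_g, molar_mass_g_per_mol, electrons_per_ion):
    """Moles of electrons (faradays) needed to deposit the given mass of metal."""
    moles_of_metal = mass_g / molar_mass_g_per_mol
    return moles_of_metal * electrons_per_ion

ca = faradays_required(20.0, 40.0, 2)  # Ca2+ + 2 e- -> Ca
al = faradays_required(40.0, 27.0, 3)  # Al3+ + 3 e- -> Al
```

This reproduces the answers above: 1 F for 20 g of Ca and about 4.44 F for 40 g of Al.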
|
|
## Abstract
This paper presents an experimental study on a novel mechanical surface treatment process, namely piezo vibration striking treatment (PVST), which is realized by a piezo stack vibration device installed on a computer numerical control (CNC) machine. Unlike other striking-based surface treatments, PVST employs non-resonant mode piezo vibration to induce controllable tool strikes on the workpiece surface. In this study, an experimental setup of PVST is implemented. Four types of experiments, i.e., tool-surface approaching, single-spot striking, one-dimensional (1D) scan striking, and 2D scan striking, are conducted to investigate the relationships among the striking force, tool vibration displacement, and surface deformation in PVST. The study shows that PVST can induce strikes with consistent intensity in each cycle of tool vibration. Both the striking intensity and striking location can be well controlled. Such process capability is particularly demonstrated by the resulting texture and roughness of the treated surfaces. Moreover, two linear force relationships have been found in PVST. The first linear relationship is between the striking force and the reduction in vibration amplitude during striking. The second one is between the striking force and the permanent indentation depth created by the strike. These linear force relationships offer the opportunity to realize real-time monitoring and force-based feedback control of PVST. This study is the first step toward developing PVST as a more efficient deformation-based surface modification process.
## Introduction
Because friction, wear, corrosion, and fatigue usually intensify on the surface, the performance, and service life of engineering components are strongly dependent on their surface attributes, such as surface roughness, hardness, microstructure, and residual stress [1]. Modifying these surface attributes is an effective and economical way of improving the overall performance of engineering components and products.
A common method to modify surface attributes is by surface plastic deformation imposed by striking the surface with hard tool indenters. The most notable example is shot peening (SP) [2,3]. In SP, a myriad of small hard balls made of steel or ceramic is impacted onto the surface by compressed air, resulting in randomly distributed strikes on the targeted surface. The process will generate compressive residual stresses on the surface and thereby improve the fatigue strength of critical components and the overall fatigue life of a structure [4,5]. Surface mechanical attrition treatment (SMAT) is another striking-based surface modification process [6]. In SMAT, hard steel balls (used as tool indenters) and the workpiece are all confined inside a vibrating chamber. The workpiece is fixed with the chamber while the steel balls are free to bounce between the workpiece surface and the chamber walls, leading to repeated strikes on the workpiece surface. This process emphasizes inducing severe plastic deformation which transforms the surfaces into a nanocrystalline microstructure [7,8]. In both SP and SMAT, the surface is struck by numerous free indenters whose kinematics is not directly controlled. The striking location, striking angle, and striking force associated with each indenter are random variables to a large extent. The accumulated effect from these strikes is therefore of stochastic nature, which makes it difficult to control the treatment quantitatively. In addition, since the exact locations of these random strikes cannot be strictly controlled, these processes suffer from poor reliability.
Alternatively, surface striking can also be realized with direct control of the striking location by using one or more vibrating tool indenters driven by either a pneumatic or an ultrasonic actuator. This is demonstrated by high-frequency mechanical impact (HFMI) treatment, which is mainly used to treat weld joints [9]. The HFMI equipment is usually built as a hand-held impact device connected to an external power source, which enables the treatment to be carried out manually on weldments of large size or complex geometry. The impact device is usually directed to strike the weld toe to modify its geometry, induce compressive residual stress, and close subsurface micro-cracks, which can significantly enhance the fatigue strength of the weld joints [10,11]. Although it is possible to treat weld joints manually with hand-held equipment, manual operation is clearly not ideal for treating component surfaces in general. Machine hammer peening (MHP) treatment [12,13] and ultrasonic nanocrystal surface modification (UNSM) treatment [14,15] are examples of such devices integrated onto a computer numerical control (CNC) machine or robotic arm. MHP has been shown to be effective for reducing surface roughness and increasing surface hardness of machined molds and dies [16,17]. As indicated by its name, UNSM employs ultrasonic tool vibration, which enables generating nanocrystalline surfaces and improving tribological and fatigue properties [14,18]. From the process control point of view, the main advantage of MHP and UNSM is that the scanning path of the vibrating tool over the surface can be precisely controlled by the CNC machine, which implies that the strikes on the surface can be much better controlled compared to SP, SMAT, and HFMI.
While a CNC machine enables more precise positioning of the strikes, the control of each strike still depends on the striking device. In MHP, the striking device can be pneumatically or ultrasonically driven [10,13,19]. The indenter held in the front of the device is usually not rigidly connected to the actuator. While the actuator repeatedly impacts the indenter, the relative motion between indenter and actuator cannot be strictly controlled. Therefore, the impact on the surface may be only energy-controlled, similar to SP, and cannot be delivered with consistent intensity [20]. In UNSM, the motion of the indenter is fully driven by the rigidly connected ultrasonic actuator, leading to more uniform strikes on the surface [14]. However, ultrasonic vibration depends on the resonance of the vibrating structure, and thus, its frequency and amplitude cannot be independently controlled [21]. Moreover, the vibration displacement and the striking force are difficult to measure in the ultrasonic frequency regime. This makes it difficult to monitor or control the surface deformation imposed by each individual strike, and the overall surface deformation resulting from the treatment therefore cannot be quantitatively controlled. Note that imposing controllable surface deformation is a critical step toward understanding and controlling the modified surface attributes [22].
Compared with ultrasonic vibration, non-resonant vibration directly driven by piezo stack actuation is more convenient to control. This has been demonstrated in vibration-assisted turning and drilling processes [23–25]. Non-resonant piezo vibration often has a lower vibration frequency but can achieve a higher vibration amplitude (piezo stacks with strokes up to 200 µm are commercially available). The frequency and amplitude can be independently controlled with piezo stack actuation. Monitoring the vibration displacement and striking force within each vibration cycle also becomes possible. These characteristics make non-resonant piezo vibration more suitable for imposing controlled strikes and deformation on the surface.
In this study, a vibration striking device enabled by non-resonant mode piezo stack actuation is developed. Using this device integrated into a CNC machine, piezo vibration striking experiments are conducted to investigate the striking force, tool vibration displacement, and the resulting surface deformation. The study reveals the feasibility and the promising capability of piezo vibration striking treatment (PVST) in inducing controllable surface plastic deformation. This is the first step toward developing PVST as a more efficient process for enabling deformation-based surface modification including finish, residual stress, hardness, microstructure, and hence enhancing components’ performance such as fatigue life and wear resistance.
## Experimental Setup
Figure 1 shows the experimental setup of PVST. The developed piezo vibration striking device (Fig. 1(b)) is mounted onto the spindle of a CNC vertical milling machine (Haas VF-4) through a standard CAT 40 tool holder. The spindle rotation function of the machine is locked so that the striking device moves only vertically along the Z-axis, which controls the striking distance between the striking tool and the workpiece surface. The workpiece is mounted on the machine table, which provides horizontal motion of the workpiece along both the X- and Y-axes to set the striking location.
Fig. 1
The piezo vibration striking device is realized using non-resonant mode piezo stack actuation. Figure 1(c) shows the schematic of the striking device assembly. Inside the device body, a piezo stack actuator is assembled with a spline shaft along the axial direction. The piezo stack is compressed by the shaft through an internal pre-loaded spring. A ball spline bearing is mounted on the spline shaft and fixed with the device body, which allows the linear motion of the shaft in the axial direction while restricting the bending and rotation of the shaft. The striking tool indenter, in the form of a small cylinder with a hemispherical end, is rigidly connected to the end of the shaft. The tool indenter is made of M2 high-speed steel with a tip diameter of 3 mm. The vibration of the indenter is actuated by the extension and contraction of the piezo stack which is controlled by the driving voltage from a waveform generator and a power amplifier.
To monitor the striking process, the vibration displacement of the tool indenter is measured using a capacitance displacement probe (Capacitec HPC-40). As shown in Figs. 1(b) and 1(c), the probe is clamped on the device body and faces the flat surface of a flange that is part of the vibrating shaft. The displacement is measured based on the change in the gap between the probe and the flange. The striking force is measured using a dynamometer plate (Kistler 9527B) mounted on the CNC machine table. The measured displacement and force signals are recorded using a data acquisition system (NI USB 6361 + LabVIEW). The sampling rate for both signals is 4000 samples per second, which is at least 40 times higher than the tool vibration frequencies used in the study; therefore, the displacement and force signals during each vibration or striking cycle can be sufficiently captured.
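As a quick sanity check of this margin, the number of samples captured within one striking cycle can be computed directly; this minimal sketch uses only the sampling rate and vibration frequencies stated above:

```python
# Samples captured within one vibration/striking cycle for a given sampling
# rate and tool vibration frequency. Values are those stated in the text.
SAMPLE_RATE_HZ = 4000.0

def samples_per_cycle(vibration_freq_hz: float) -> float:
    """Return the number of samples recorded during one striking cycle."""
    return SAMPLE_RATE_HZ / vibration_freq_hz

# At the highest frequency used (100 Hz), each cycle is still resolved
# by 40 points; at 10 Hz, by 400 points.
print(samples_per_cycle(100.0))  # -> 40.0
print(samples_per_cycle(10.0))   # -> 400.0
```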
The frequency and amplitude of tool vibration are controlled by varying the frequency and amplitude of the sinusoidal driving voltage. The lower bound of the driving voltage is always set at zero, and thus, the upper bound of the driving voltage is equal to the peak-to-peak amplitude (Vpp) of the voltage oscillation. The maximum driving voltage allowed for the piezo stack is 150 V. The nominal stroke of the piezo stack is 100 µm. Figure 2 shows the calibrated tool vibration of the device assembly without striking at the driving voltage frequency (f) of 10 and 100 Hz and the driving voltage amplitude (Vpp) of 60, 90, 120, and 150 V. Figures 2(a) and 2(b) show that the vibration displacement of the tool (u) closely follows the sinusoidal driving voltage (V). The tool vibration frequency is the same as the driving voltage frequency. The tool vibration amplitude without striking (u0pp) is proportional to the driving voltage amplitude (Vpp) and is not significantly affected by the vibration frequency (Fig. 2(c)). The maximum vibration amplitude achieved at Vpp = 150 V is about 90 µm, which is 10% lower than the nominal stroke (100 µm). The calibration results indicate the frequency and amplitude of tool vibration can be conveniently and independently controlled. Note that the driving voltage conditions calibrated here are used to conduct the vibration striking experiments in this study.
Fig. 2
Figure 3 gives an overview of the four basic experiments conducted in this study. The first experiment (Fig. 3(a)) investigates the tool-surface approaching process, in which the vibrating tool is fed vertically toward the workpiece surface. The second experiment (Fig. 3(b)) investigates the single-spot striking process, in which the vibrating tool repeatedly strikes the same location on the workpiece surface from a fixed tool vertical position. The third experiment (Fig. 3(c)) investigates the one-dimensional (1D) scan vibration striking process, in which the vibrating tool strikes the workpiece surface while continuously moving along a linear path. We will refer to the horizontal tool feed motion as the tool scan motion in this study. The fourth experiment (Fig. 3(d)) investigates the two-dimensional (2D) scan vibration striking process, which extends the 1D scan striking to treat a 2D surface area using parallel scan path lines. These experiments are designed to better understand the relationships among tool displacement, striking force, and surface deformation in PVST.
Fig. 3
For all experiments, the workpiece material is mild steel ASTM A572GR50 with dimensions of 120 mm × 40 mm × 20 mm. The initial surface of the workpiece is prepared by grinding to a roughness (Sa) of 0.32 μm. Note that a uniform, initially smooth surface is necessary to characterize and understand the deformed surface resulting from the striking. Three-dimensional (3D) surface profiles are characterized using a Keyence digital microscope.
## Results
### Tool-Surface Approaching.
In the tool-surface approaching experiment (Fig. 3(a)), the initial vertical (Z) position of the tool is set to 120 µm, i.e., Z = 120 µm with the workpiece surface as the reference. The tool vibration is turned on, and the tool is fed toward the surface at a speed of 1 µm/s. Figure 4 shows the measured force (Fz) and tool vibration displacement (u) plotted against the tool position (Z) as the vibrating tool is fed toward the workpiece at f = 100 Hz and Vpp = 150 V. Note that u refers to the displacement of the vibrating tool while Z refers to the tool position during the feed motion. In regime A of the tool position (from Z = 120 µm to Z = 105 µm), the tool has no contact with the workpiece surface, and there is no load on the vibrating tool. The vibration of the tool has the calibrated amplitude. Entering regime B (from Z = 105 µm to Z = −55 µm), the vibrating tool begins to engage and disengage the workpiece in each vibration cycle, leading to repeated strikes on the surface. The peak force of each successive strike increases steadily because the indentation depth reached by each successive strike increases as the tool position Z decreases. At the same time, a reduction in the vibration displacement of the tool is evident due to the elastic compression of the vibration device by the striking force. The higher the striking force, the lower the vibration displacement. Therefore, the maximum displacement of the tool in each successive vibration cycle (or striking cycle) decreases steadily with the decrease of tool position Z. However, the minimum displacement of the tool remains unchanged because the tool is disengaged at this moment, and hence, there is no force to compress the piezo stack. Entering regime C (Z < −55 µm), the vibrating tool becomes continuously engaged with the workpiece during its vibration. As a result, the minimum force in each vibration cycle is no longer zero; it increases steadily with the decrease of tool position Z.
Correspondingly, the minimum vibration displacement of the tool decreases with the tool position Z.
Fig. 4
In the approaching experiment, the vibration striking occurs in the regime B which includes the tool position both above (Z > 0) and below (Z < 0) the initial workpiece surface. For practical application of vibration striking treatment, the vibrating tool also needs to move horizontally (in the XY plane) to impose the strike at any location on the surface. When considering the horizontal tool motion, the tool vertical position below the initial surface (Z < 0) will result in the continuous engagement between the tool and the workpiece, which will generate significant sliding-type deformation on the surface. To minimize the sliding deformation, the tool position for inducing vibration striking should be limited to Z ≥ 0. The upper bound of the tool position is equal to the maximum vibration displacement under no-load condition (u0max) since no striking occurs when Z > u0max.
As noted already, the maximum vibration displacement (umax) during the striking process is dependent on the striking force. Figures 5(a) and 5(b) show the relationship between the reduction in maximum vibration displacement (Δumax = u0max − umax) and the peak force of the strike (Fmax) at f = 10 and 100 Hz, respectively. The relationship is based on the force and displacement data corresponding to the tool position range 0 < Z < u0max in the approaching experiment. Note that in this Z range, the reduction in maximum vibration displacement is the same as the reduction in vibration amplitude (Δupp = u0pp − upp). The relationship between Fmax and Δumax (or equivalently Δupp) is clearly linear, which gives a stiffness value of 11.1 N/µm at f = 10 Hz (Fig. 5(a)) and 11.4 N/µm at f = 100 Hz (Fig. 5(b)). The axial stiffness of the vibration device assembly is nearly identical, at 11.7 N/µm (Fig. 5(c)), as confirmed by a static compression test on the device (with no vibration). Therefore, the reduction in vibration displacement is due to the elastic compression of the device assembly under the striking force, which means the reduction in vibration displacement can be calculated from the striking force given the stiffness of the vibration device.
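The stiffness identification described above amounts to a least-squares fit of Fmax against Δumax. The sketch below is illustrative, not the authors' processing code; the data points are synthetic, chosen only to be consistent with the ~11 N/µm stiffness reported:

```python
# Illustrative sketch: estimating the device stiffness K from paired
# measurements of peak striking force Fmax and the reduction in maximum
# vibration displacement. The values below are synthetic (assumed) data.
delta_u = [5.0, 10.0, 20.0, 30.0, 40.0]       # delta u_max, um
f_max = [57.0, 109.0, 225.0, 335.0, 450.0]    # Fmax, N

n = len(delta_u)
mean_x = sum(delta_u) / n
mean_y = sum(f_max) / n
# Least-squares slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(delta_u, f_max)) / \
        sum((x - mean_x) ** 2 for x in delta_u)
print(f"estimated stiffness K = {slope:.2f} N/um")
```

With real data, the fitted slope would be compared against the static compression test (11.7 N/µm) as a consistency check, as done in Fig. 5.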
Fig. 5
### Single-Spot Vibration Striking.
The single-spot vibration striking experiment (Fig. 3(b)) is conducted on a single surface spot with a fixed tool position Z = 0. With the initial tool position Z set to zero, the vibration is turned on for a duration of 3 s to induce repeated strikes on the same surface spot. It is expected that plastic deformation is imposed only within the first few strikes. After reaching a steady state, the repeated strikes involve only elastic deformation of the deformed surface. This is confirmed by Fig. 6, which shows the measured force and vibration displacement during the repeated strikes at the vibration condition f = 10 Hz and Vpp = 150 V. Both the force and displacement signals are observed to reach steady-state oscillation almost immediately after the tool vibration starts. The peak force (Fmax) and the maximum vibration displacement (umax) then remain constant. For other vibration conditions, the measured force and vibration displacement signals show similar steady-state oscillations except that Fmax and umax are different. Figure 7 summarizes the measured Fmax and umax at various vibration conditions (Vpp = 60, 90, 120, 150 V; f = 10 and 100 Hz). It is observed that both Fmax and umax increase with Vpp, which is expected since higher Vpp means higher vibration amplitude. It is also observed that higher f results in higher Fmax but lower umax for a given Vpp. The higher Fmax may be partially attributed to the increased inertial force which the tool needs to overcome to accelerate the workpiece material when it strikes the surface. Higher frequency leads to higher acceleration during each strike and hence higher striking force. The lower umax is consistent with the higher Fmax since there is more reduction in vibration displacement due to the elastic compression of the vibration device (Fig. 5).
Fig. 6
Fig. 7
Figure 8(a) shows the resulting permanent indentation on the surface at each vibration condition. Figures 8(b) and 8(c) show the cross-sectional profiles of these indentations grouped at f = 10 and 100 Hz, respectively. It is observed that the depth (h) and diameter (D) of the indentation increase with Vpp. For a given Vpp, higher f results in higher h and D. As noted earlier, a higher f results in a lower umax (Fig. 7(b)). Since the tool position is at Z = 0, umax is equal to the maximum indentation depth reached by the strike, which represents the total plastic and elastic deformation. In contrast, h represents only the plastic deformation. Then it can be deduced that a higher f leads to a larger plastic deformation (Fig. 7(c)) despite a smaller total deformation (Fig. 7(b)). The higher plastic deformation leads to more strain hardening of the material which can also contribute to the higher Fmax for higher f (Fig. 7(a)). It should be noted the comparison of Fmax is at the steady-state striking stage where the plastic deformation has already been completed and only elastic deformation is taking place, so the observed difference in Fmax for different f is not attributed to the strain rate effect which occurs only during plastic deformation.
Fig. 8
Figure 9 shows how Fmax is related to h and D, respectively. It is found that the size of the permanent indentation generated on the surface is linearly dependent on the peak force of the strike. This indicates that the force signal can be used to monitor the plastic deformation imposed by each strike.
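Force-based monitoring of the plastic deformation then reduces to evaluating a calibrated linear map from peak force to indentation size. The coefficients below are hypothetical placeholders; in practice they would come from fitting the measured (Fmax, h) pairs of Fig. 9 for the given material and tool tip:

```python
# Hedged sketch of force-based deformation monitoring. The coefficients are
# hypothetical placeholders, standing in for a linear fit to the (Fmax, h)
# calibration data of Fig. 9 for a specific material and tool tip.
A_UM_PER_N = 0.05   # hypothetical slope: indentation depth per unit peak force
B_UM = -2.0         # hypothetical intercept

def depth_from_force(f_max_n: float) -> float:
    """Predict permanent indentation depth (um) from measured peak force (N)."""
    return A_UM_PER_N * f_max_n + B_UM

# Under these placeholder coefficients, a 500 N strike maps to
# 0.05 * 500 - 2 = 23 um of permanent indentation depth.
print(depth_from_force(500.0))  # -> 23.0
```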
Fig. 9
### One-Dimensional Scan Vibration Striking.
In this experiment (Fig. 3(c)), after the tool position in Z is set to zero (Z = 0), tool vibration is turned on, and the workpiece is moved horizontally along the X-axis at a controlled speed (vs). This leads to the vibrating tool scanning the surface while imposing successive strikes along a straight tool path. The offset distance (ds) between two successive strikes is dependent on the vibration frequency (f) and the scan speed (vs) as
$d_s = v_s/f$
(1)
Figure 10 shows the surface grooves created with the vibration condition f = 100 Hz and Vpp = 150 V at four scan speeds vs = 3300, 2475, 1650, and 825 mm/min. These scan speeds are selected to achieve the striking overlap ratios of 0, 0.25, 0.5, and 0.75, respectively. The overlap ratio (ro) is defined as (Fig. 10(a))
$r_o = (D - d_s)/D$
(2)
where D is the indentation diameter obtained from the single-spot vibration striking experiment (Fig. 8). According to Eqs. (1) and (2), a lower vs leads to a smaller ds and thus a higher ro. Figures 10(b) and 10(c) show that the 1D scan vibration striking results in a uniform indentation pattern along the tool path at each scan speed. The higher the ro, the smoother the surface in the groove.
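Equations (1) and (2) combine into vs = f · D · (1 − ro), which is how the four scan speeds above follow from the targeted overlap ratios. A minimal sketch, assuming D = 550 µm (the indentation diameter implied by vs = 3300 mm/min at ro = 0 and f = 100 Hz):

```python
# Scan speed needed for a target overlap ratio, from Eqs. (1) and (2):
# ds = vs/f and ro = (D - ds)/D  =>  vs = f * D * (1 - ro).
def scan_speed_mm_per_min(f_hz: float, d_um: float, overlap: float) -> float:
    ds_um = d_um * (1.0 - overlap)       # strike spacing along the path, um
    return f_hz * ds_um * 60.0 / 1000.0  # um/s converted to mm/min

# D = 550 um (assumed, implied by the reported speeds) reproduces the four
# experimental scan speeds at f = 100 Hz:
for ro in (0.0, 0.25, 0.5, 0.75):
    print(ro, scan_speed_mm_per_min(100.0, 550.0, ro))
# -> 3300.0, 2475.0, 1650.0, 825.0 mm/min
```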
Fig. 10
Figure 11(a) shows the longitudinal section profiles of these grooves. The indentation spacing ds can be measured from these profiles as 516 ± 8, 392 ± 12, 269 ± 13, and 131 ± 8 μm for ro = 0, 0.25, 0.5, and 0.75, respectively. The measured ds values agree well with those calculated using Eq. (1). This indicates that vibration striking with tool scan motion can be accurately controlled. Furthermore, the measured peak-to-valley heights (Rz) of the longitudinal section profiles are 17.0, 10.5, 7.6, and 1.6 μm, respectively, which again indicates that a higher ro leads to a smoother surface in the groove.
Fig. 11
Figure 11(b) shows the transverse section profiles of these grooves (perpendicular to the tool scan direction) taken at the center of each indentation, which corresponds to the maximum depth location in the groove. As readily observed, the transverse section profile is not much affected by ro; it appears nearly the same for different ro. The groove depth (h) measured from the undeformed surface is about 24 μm. The groove width, measured as the distance between the two ridges, is about 645 μm. The two ridges are formed by material displaced from within the groove. The ridge size increases slightly with ro as more material tends to be displaced, which also creates a smoother surface in the groove. Note that the left ridge is slightly higher than the right ridge in all cases, which is likely caused by imperfect mounting of the workpiece in the experiment, resulting in a slight tilt of the initial surface.
Figure 12 shows the measured forces corresponding to creating these grooves. There are two force components during 1D scan vibration striking: the striking force (Fz) in the Z direction and the sliding force (Fx) in the X direction. Both forces are highly repetitive in the successive striking cycles, which indicates the process is quite stable. The primary force is the striking force, which is not significantly affected by ro. The peak of the striking force is around 550 N for all cases. The secondary force is the sliding force, which is much smaller than the striking force. The sliding force is generated by the horizontal scan motion of the tool. Unlike the striking force, which has a symmetrical profile and is nearly independent of ro, the sliding force decreases with increasing ro, and its profile also becomes more asymmetrical. The asymmetrical profile indicates that the sliding force continues to increase even when the tool is retracting from the surface after it reaches the maximum vibration displacement. This is likely caused by the generation of pileup material in front of the tool due to sliding. The sliding force depends on the effective engagement depth between the tool and workpiece surface during sliding. Besides the tool vibration displacement, the effective engagement depth is also affected by the amount of pileup material in front of the tool. While the retraction of the tool from its maximum vibration displacement tends to reduce the effective engagement depth, the continuous sliding action keeps generating pileup material in front of the tool, which tends to increase the effective engagement depth. As a result, the occurrence of the maximum engagement depth (corresponding to the sliding force peak) lags the maximum vibration displacement (corresponding to the striking force peak), which results in the asymmetrical profile of the sliding force.
Fig. 12
Besides varying ro, the 1D scan vibration striking experiment has also been conducted while varying Vpp. Figure 13 summarizes the measured groove depth (h) and width (W) for all cases. It shows that the groove depth and width depend primarily on Vpp (i.e., vibration amplitude) and less on ro. Moreover, the depth and width of the groove are again found to be linearly dependent on the peak striking force (Fig. 14), though the linear relationships differ from those found in the single-spot vibration striking experiment (see Fig. 9). Therefore, the indentation deformation resulting from each individual strike with tool scan motion can also be monitored using the force signal.
Fig. 13
Fig. 14
### Two-Dimensional Scan Vibration Striking.
In this experiment, a 5 mm × 5 mm area of the workpiece surface is treated by the vibrating tool following parallel line scan paths. For all cases, the spacing (dp) between the path lines is set to be the same as the offset distance (ds) between two successive striking locations along the scan path (see Fig. 15(a)). This results in approximately the same striking overlap ratio ro in both the scan direction and the transverse direction with respect to the scan path lines. The effects of Vpp and ro on the topography of the treated surface are investigated.
Fig. 15
Figure 15 shows the surface topography obtained at various Vpp and ro. The upper row is for different ro (0, 0.25, 0.5, and 0.75) with fixed Vpp (120 V) while the lower row is for different Vpp (60, 90, 120, and 150 V) with fixed ro (0.75). For each condition, the generated surface texture is uniform throughout the treated area, which reflects the uniform spacing and intensity of the strikes during the treatment. The ro significantly affects the generated surface texture. With small ro (Figs. 15(a) and 15(b)), the surfaces have a dimple texture that reflects each indentation on the surface. It shows that the material is displaced to form circular ridges. However, more material is displaced to the two lateral sides compared to the front and back sides with respect to the tool scan path. As ro increases, the dimple size decreases. When ro is increased to 0.75, the individual dimples are no longer visible (Fig. 15(d)). When the surfaces generated at ro = 0.75 are examined at a higher resolution (Figs. 15(e)–15(h)), line ridges parallel to the tool scan path can be observed on the surfaces at higher Vpp. These line ridges are formed mainly by the lateral displacement of material from inside the tool paths. The height of these line ridges decreases with decreasing Vpp. At ro = 0.75 and Vpp = 60 V (Fig. 15(e)), the line ridges are no longer visible. The surface texture shows no significant difference between the scan direction and its transverse direction, leading to a more isotropic texture.
Figure 16 summarizes the quantified roughness parameters (Sa, Ra, and Rz) for these surfaces. The 1D roughness parameters Ra and Rz are measured along both the scan and the transverse directions. It shows that all roughness parameters decrease with the increase in ro and decrease in Vpp. For ro < 0.75, Ra and Rz are significantly higher in the transverse direction than the scan direction, and Sa is very close to the Ra in the scan direction (Figs. 16(a) and 16(b)). For ro = 0.75, the roughness difference in the two directions is significantly reduced, and the difference is further reduced with decreasing Vpp (Figs. 16(c) and 16(d)). These changes are consistent with the transition of surface texture from the dimple type to the line type and eventually to the isotropic type.
Fig. 16
Note that the roughness of the treated surfaces is in all cases higher than the initial roughness of the ground surface. The smoothly ground surface was used to minimize the influence of initial surface roughness and texture such that the generated surface roughness and texture can be readily related to the processing parameters of the vibration striking treatment. Furthermore, these measured roughness values represent the achievable surface roughness using the corresponding treatment parameters. Among all the surfaces shown in Fig. 15, the smoothest surface is obtained with Vpp = 60 V and ro = 0.75 at Sa = 0.44 µm, which is only slightly higher than the initial Sa value of 0.32 µm for the ground surface.
## Discussions
The results from the four experiments demonstrate the feasibility and excellent controllability of piezo vibration striking treatment (PVST) in terms of force and vibration displacement monitoring and surface deformation control. The important process parameters for PVST are tool vertical position Z, tool vibration frequency f, vibration amplitude upp, striking overlap ratio ro, and tool scan path. By combining piezo stack actuated vibration device with CNC machine, these process parameters can be directly or indirectly controlled as well as monitored in real time. In general, the piezo stack vibration device enables the control of tool vibration frequency and amplitude while the CNC machine enables the control of tool position Z and tool scan motion (speed and path). The control of tool vibration and tool vertical position is critical for controlling the local surface deformation imposed by each individual strike while the control of tool scan motion is critical for controlling the locations and layout of the successive strikes. Such combined process controllability will lead to enhanced controllable surface deformation which is beyond the capability of current striking-based treatment processes, such as SP, SMAT, MHP, and UNSM. Therefore, PVST has the potential to realize more efficient deformation-based surface engineering including finish, residual stress, hardness, and microstructure for enhancing components’ performance such as fatigue life and wear resistance.
The 2D vibration striking experiment has shown surface finish can be improved by increasing ro and/or decreasing Vpp. The trade-offs in different combinations of ro and Vpp which can achieve the same level of surface finish are worth further discussion. In general, the same level of surface finish may be achieved by the combination of higher ro and higher Vpp or the combination of lower ro and lower Vpp. Higher ro can be achieved by reducing the tool scan speed which will increase the treatment time. It can be also achieved by increasing tool vibration frequency which will increase the power consumption for piezo stack actuation. Higher Vpp leads to higher plastic strain and thicker deformed layer due to higher vibration amplitude. In contrast, the combination of lower ro and lower Vpp results in faster treatment and less vibration power consumption. However, the induced plastic strain and strained layer thickness are smaller. Based on these trade-offs, the higher ro–higher Vpp setting is more suitable for surface microstructure refinement purpose while the lower ro–lower Vpp setting is more suitable for surface residual stress modification purposes.
Both the striking force and tool vibration displacement in PVST can be directly measured in real time. This is a major advantage over the ultrasonic vibration surface treatment, for it offers an opportunity for realizing real-time monitoring and control of the treatment process. Two linear force relationships which are found in PVST are particularly useful in this regard. The first linear relationship is between the striking force and the reduction in tool vibration amplitude shown in Fig. 5. The amplitude reduction is due to the elastic deformation of the device assembly under the striking force. This linear relationship reflects the stiffness of the device assembly and is independent of workpiece material and striking tip geometry. It can be used to calculate the vibration amplitude during striking from the force signal
$u_{pp} = u_{pp}^{0} - \Delta u_{pp} = mV_{pp} - F_{\max}/K$
(3)
where m is the proportionality between the unloaded vibration amplitude and the input driving voltage; K is the proportionality between the striking peak force and the reduction in vibration amplitude which is equivalent to the stiffness of the device assembly. Both m and K are the characteristics of the piezo vibration device and can be obtained by device calibration as shown in Figs. 2(c) and 5. Since Vpp is a directly controlled input parameter and Fmax can be obtained from the measured force signal, it is then possible, based on Eq. (3), to monitor the vibration amplitude in PVST without directly measuring the vibration displacement but instead using the measured force signal. This will simplify the instrumentation for process monitoring.
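A sketch of this force-based amplitude estimate follows. The constants are inferred from the reported calibration (u0pp ≈ 90 µm at Vpp = 150 V gives m ≈ 0.6 µm/V; K ≈ 11.4 N/µm from Fig. 5) and are used here only for illustration:

```python
# Sketch of Eq. (3): inferring the in-process vibration amplitude from the
# driving voltage and the measured peak striking force. The calibration
# constants are assumptions inferred from the reported values.
M_UM_PER_V = 0.6    # m: ~90 um unloaded amplitude at 150 V
K_N_PER_UM = 11.4   # K: device stiffness from Fig. 5(b)

def amplitude_during_striking(v_pp_volts: float, f_max_n: float) -> float:
    """u_pp = m * Vpp - Fmax / K, per Eq. (3)."""
    return M_UM_PER_V * v_pp_volts - f_max_n / K_N_PER_UM

# e.g., at Vpp = 150 V a 550 N peak force reduces the ~90 um unloaded
# amplitude to roughly 42 um during striking.
u_pp = amplitude_during_striking(150.0, 550.0)
print(f"u_pp ~= {u_pp:.1f} um")
```

This is exactly the simplification argued for in the text: with m and K known from calibration, only the force signal needs to be measured to track the amplitude.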
The second linear force relationship is found between the striking force and the resulting indentation size (Figs. 9 and 14). This relationship should depend on the workpiece material and the striking tool tip geometry since both affect the plastic deformation induced on the surface. For the workpiece material (mild steel) and striking tip (d = 3 mm) used in this study, the force–indentation size relationship can be well approximated as linear. Theoretically, this relationship is nonlinear for a spherical-shaped indenter [26]. However, the indentation depth range in PVST is usually very small (e.g., ∼30 µm in Figs. 9 and 14) compared to the striking tool tip diameter (3 mm). It is this small indentation depth range that enables a good linear approximation of the relationship between the striking force and indentation size. The quality of this linear approximation can be considered to depend on the ratio of the indentation depth range to the diameter of the striking tool tip (h/d). The smaller this ratio, the better the quality of the linear approximation. It can therefore be expected that the linear force relationship is a better approximation for a harder material than for a softer one, because h becomes smaller for the harder material, and better for a larger striking tool tip than for a smaller one.
In practice, the h/d ratio may remain very small despite changes in workpiece material or tool tip size, so the linear force–indentation size relationship may be generally acceptable in PVST. The linear equation describing the relationship, however, will change with the workpiece material and the striking tool tip diameter. For a given combination of workpiece material and tool tip diameter, the linear force–indentation size relationship should be fixed; based on it, the indentation size or plastic deformation level induced by each individual strike can be monitored using the measured force signal.
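The per-strike monitoring idea can be sketched as a simple calibration-then-predict step. The force/depth pairs below are illustrative placeholders, not the paper's measurements; real pairs would come from experiments like those behind Figs. 9 and 14 for one material and tool-tip combination:

```python
# Hypothetical calibration strikes: striking peak force (N) vs. measured
# indentation depth (um). Illustrative values only.
f_cal = [100.0, 200.0, 300.0, 400.0]
h_cal = [7.0, 14.5, 21.5, 29.0]

# Ordinary least-squares fit of the linear relationship h = a * Fmax + b
n = len(f_cal)
f_mean = sum(f_cal) / n
h_mean = sum(h_cal) / n
a = (sum((f - f_mean) * (h - h_mean) for f, h in zip(f_cal, h_cal))
     / sum((f - f_mean) ** 2 for f in f_cal))
b = h_mean - a * f_mean

def indentation_depth(f_max):
    """Predict the indentation depth (um) from the striking peak force (N)."""
    return a * f_max + b
```

Once a and b are fixed for a material/tip combination, each strike's plastic deformation can be inferred from the force signal alone.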
The two linear force relationships provide a basis for realizing real-time monitoring and force-based feedback control of PVST, which can greatly enhance the treatment efficiency and capability. For practical implementation, the force sensor eventually needs to be integrated into the vibration device. The feasibility of such integration has been demonstrated, albeit for the modulation-assisted drilling process [23,24]. In this study, PVST was performed only on flat workpiece surfaces. The force-based feedback control capability will be particularly useful for treating freeform surfaces: the tool's vertical position and vibration amplitude can be controlled in real time based on the striking force signal to accommodate the change of surface height during the tool scan. This capability will greatly enhance the automation and efficiency of the treatment process.
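One way the force-based feedback idea could look is sketched below. Everything here is an assumption for illustration: a simple proportional correction law, a hypothetical linear plant (peak force proportional to driving voltage plus a surface-height bias) standing in for the real device, and made-up gains; it is not the paper's controller:

```python
def run_force_feedback(f_target, n_strikes, surface_offset, gain=0.2):
    """Adjust the driving voltage strike-by-strike so the striking peak
    force tracks f_target despite surface-height variation.

    surface_offset(i) models how local surface height biases the force;
    the linear 'plant' below is a stand-in for the real device response.
    """
    v_pp = 90.0                                # initial driving voltage (V)
    forces = []
    for i in range(n_strikes):
        f_max = 4.0 * v_pp + surface_offset(i)   # hypothetical plant model
        forces.append(f_max)
        v_pp += gain * (f_target - f_max)        # proportional correction
    return forces

# A sudden surface-height change midway through the scan:
step = lambda i: 30.0 if i >= 25 else 0.0
forces = run_force_feedback(400.0, 50, step)
```

The controller converges to the 400 N setpoint, is briefly perturbed when the surface height jumps at strike 25, and then recovers, which is the qualitative behavior a freeform-surface scan would need.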
## Conclusions
This study has experimentally investigated the non-resonant piezo vibration striking surface treatment (PVST) process with an emphasis on the striking force, vibration displacement, resulting surface deformation, and their relationships. From this study, the following can be concluded:
1. It is feasible to implement PVST using a piezo stack actuated vibration device installed on a CNC machine platform. PVST can induce strikes with consistent intensity in each cycle of tool vibration. Both the striking intensity and the striking location can be well controlled, leading to improved control of the surface plastic deformation induced by the treatment.
2. In PVST, the tool vibration amplitude during striking is always reduced from the amplitude without striking. This reduction in vibration amplitude is due to the elastic deformation of the vibration device assembly and is linearly dependent on the striking force. The linear force–amplitude reduction relationship reflects the stiffness of the device assembly.
3. In PVST, the depth of the permanent indentation resulting from each strike is, to a first approximation, linearly dependent on the striking peak force. This linear force relationship offers the opportunity to directly monitor and control the surface deformation induced by each strike using the force signal.
4. The texture and roughness of the treated surface are primarily affected by the striking overlap ratio and tool vibration amplitude in PVST. The finish of the treated surface can be improved by increasing the striking overlap ratio and/or decreasing the vibration amplitude.
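The process-parameter dependence in conclusion 4 can be illustrated with two small helper functions. The definitions used here, strike spacing ds = vs/f and overlap ratio ro = 1 − ds/D, are plausible forms consistent with the nomenclature but are assumptions for illustration, not equations quoted from the paper:

```python
def strike_spacing(v_s, f):
    """Spacing between successive strikes along a scan line (mm),
    assuming ds = scan speed (mm/s) / striking frequency (Hz)."""
    return v_s / f

def overlap_ratio(d_s, big_d):
    """Striking overlap ratio, assuming ro = 1 - ds/D where D is the
    indentation diameter (an illustrative definition)."""
    return 1.0 - d_s / big_d

ds = strike_spacing(v_s=10.0, f=100.0)   # 0.1 mm between strikes
ro = overlap_ratio(ds, big_d=0.5)        # 0.8 for a 0.5 mm indentation
```

Under these definitions, slowing the scan or raising the striking frequency increases the overlap ratio, which conclusion 4 associates with a smoother treated surface.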
The present study has demonstrated the promising capability of PVST to induce controllable surface deformation. Future studies will focus on how to utilize this capability to realize deformation-based engineering of surface attributes for enhancing component performance, such as wear resistance and fatigue life.
## Acknowledgment
This work is supported in part by NSF CMMI (Grant Nos. 2019320 and 2102015).
## Conflict of Interest
There are no conflicts of interest.
## Data Availability Statement
The data sets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper. No data, models, or code were generated or used for this paper.
## Nomenclature
• f = driving voltage frequency or vibration frequency
• h = depth of permanent indentation
• u = tool vibration displacement
• D = diameter of permanent indentation
• V = piezo stack driving voltage
• dp = spacing between parallel scan lines
• ds = spacing between two successive strikes
• ro = striking overlap ratio
• umax = maximum vibration displacement with striking
• upp = peak-to-peak vibration amplitude with striking
• vs = speed of horizontal tool scan (or feed) motion
• Fz = striking force
• Fmax = striking peak force
• Fx = sliding force
• Vpp = peak-to-peak driving voltage amplitude
• $u_{max}^0$ = maximum vibration displacement without striking
• $u_{pp}^0$ = peak-to-peak vibration amplitude without striking
## References
1. M'Saoubi, R., Outeiro, J. C., Chandrasekaran, H., Dillon, O. W., and Jawahir, I. S., 2008, "A Review of Surface Integrity in Machining and Its Impact on Functional Performance and Life of Machined Products," Int. J. Sustainable Manuf., 1(1–2), pp. 203–236.
2. Torres, M. A. S., and Voorwald, H. J. C., 2002, "An Evaluation of Shot Peening, Residual Stress and Stress Relaxation on the Fatigue Life of AISI 4340 Steel," Int. J. Fatigue, 24(8), pp. 877–886.
3. Tekeli, S., 2002, "Enhancement of Fatigue Strength of SAE 9245 Steel by Shot Peening," Mater. Lett., 57(3), pp. 604–608.
4. Wang, S., Li, Y., Yao, M., and Wang, R., 1998, "Compressive Residual Stress Introduced by Shot Peening," J. Mater. Process. Technol., 73(1–3), pp. 64–73.
5. Wang, S., Li, Y., Yao, M., and Wang, R., 1998, "Fatigue Limits of Shot-Peened Metals," J. Mater. Process. Technol., 73(1–3), pp. 57–63.
6. Lu, K., and Lu, J., 2004, "Nanostructured Surface Layer on Metallic Materials Induced by Surface Mechanical Attrition Treatment," Mater. Sci. Eng. A, 375–377(1–2, Spec. Iss.), pp. 38–45.
7. Roland, T., Retraint, D., Lu, K., and Lu, J., 2006, "Fatigue Life Improvement Through Surface Nanostructuring of Stainless Steel by Means of Surface Mechanical Attrition Treatment," Scr. Mater., 54(11), pp. 1949–1954.
8. Tao, N. R., Wang, Z. B., Tong, W. P., Sui, M. L., Lu, J., and Lu, K., 2002, "An Investigation of Surface Nanocrystallization Mechanism in Fe Induced by Surface Mechanical Attrition Treatment," Acta Mater., 50(18), pp. 4603–4616.
9. Yildirim, H. C., and Marquis, G. B., 2012, "Fatigue Strength Improvement Factors for High Strength Steel Welded Joints Treated by High Frequency Mechanical Impact," Int. J. Fatigue, 44, pp. 168–176.
10. Malaki, M., and Ding, H., 2015, "A Review of Ultrasonic Peening Treatment," Mater. Des., 87, pp. 1072–1086.
11. Roy, S., Fisher, J. W., and Yen, B. T., 2003, "Fatigue Resistance of Welded Details Enhanced by Ultrasonic Impact Treatment (UIT)," Int. J. Fatigue, 25(9–11), pp. 1239–1247.
12. Bleicher, F., Lechner, C., Habersohn, C., Kozeschnik, E., , B., and Kaminski, H., 2012, "Mechanism of Surface Modification Using Machine Hammer Peening Technology," CIRP Ann., 61(1), pp. 375–378.
13. Schulze, V., Bleicher, F., Groche, P., Guo, Y. B., and Pyun, Y. S., 2016, "Surface Modification by Machine Hammer Peening and Burnishing," CIRP Ann., 65(2), pp. 809–832.
14. Cao, X. J., Pyoun, Y. S., and Murakami, R., 2010, "Fatigue Properties of a S45C Steel Subjected to Ultrasonic Nanocrystal Surface Modification," Appl. Surf. Sci., 256(21), pp. 6297–6303.
15. Suh, C. M., Song, G. H., Suh, M. S., and Pyoun, Y. S., 2007, "Fatigue and Mechanical Characteristics of Nano-structured Tool Steel by Ultrasonic Cold Forging Technology," Mater. Sci. Eng. A, 443(1–2), pp. 101–106.
16. Berglund, J., Liljengren, M., and Rosén, B. G., 2011, "On Finishing of Pressing Die Surfaces Using Machine Hammer Peening," , 52(1–4), pp. 115–121.
17. Mannens, R., Trauth, D., Mattfeld, P., and Klocke, F., 2018, "Influence of Impact Force, Impact Angle, and Stroke Length in Machine Hammer Peening on the Surface Integrity of the Stainless Steel X3CrNiMo13-4," Procedia CIRP, 71, pp. 166–171.
18. Amanov, A., Cho, I. S., Pyoun, Y. S., Lee, C. S., and Park, I. G., 2012, "Micro-dimpled Surface by Ultrasonic Nanocrystal Surface Modification and Its Tribological Effects," Wear, 286–287, pp. 136–144.
19. Telljohann, G., and Dannemeyer, S., 2009, "Hifit—Technische Entwicklung und Anwendung [HiFIT: Technical Development and Application]," Stahlbau, 78(9), pp. 622–626.
20. Ernould, C., Schubnell, J., Farajian, M., Maciolek, A., Simunek, D., Leitner, M., and Stoschka, M., 2019, "Application of Different Simulation Approaches to Numerically Optimize High-Frequency Mechanical Impact (HFMI) Post-treatment Process," Weld. World, 63(3), pp. 725–738.
21. Altintas, Y., 2012, Manufacturing Automation: Metal Cutting Mechanics, Machine Tool Vibrations, and CNC Design, 2nd ed., Cambridge University Press, New York.
22. Guo, Y., Saldana, C., Compton, W. D., and Chandrasekar, S., 2011, "Controlling Deformation and Microstructure on Machined Surfaces," Acta Mater., 59(11), pp. 4538–4547.
23. Guo, Y., Lee, S. E., and Mann, J. B., 2017, "Piezo-Actuated Modulation-Assisted Drilling System With Integrated Force Sensing," ASME J. Manuf. Sci. Eng., 139(1), p. 014501.
24. Guo, Y., and Mann, J. B., 2020, "Control of Chip Formation and Improved Chip Ejection in Drilling With Modulation-Assisted Machining," ASME J. Manuf. Sci. Eng., 142(7), p. 071001.
25. Guo, Y., Stalbaum, T., Mann, J., Yeung, H., and Chandrasekar, S., 2013, "Modulation-Assisted High Speed Machining of Compacted Graphite Iron (CGI)," J. Manuf. Process., 15(4), pp. 426–431.
26. Tabor, D., 1948, "A Simple Theory of Static and Dynamic Hardness," Proc. R. Soc. A, 192(1029), pp. 247–274.
|
|
# onitato?
Oni (鬼) are creatures from Japanese folklore, variously translated as demons, devils, ogres or trolls. They are popular characters in Japanese art, literature and theatre. They are almost always depicted as beings with incredible strength and power; physically, two long horns are said to grow from their heads. The earliest folktales of oni generally described them as benevolent creatures able to ward off evil and punish the wicked.
Potato.
The opinions expressed here represent my own and not those of my $Employer. This information is provided "as is" without warranty of any kind, either express or implied. The accuracy of information related to $Employer or $Employer products and services is not guaranteed. If information provided by this site conflicts with information provided directly by $Employer, the information from $Employer should be considered the final authority.
|
|
labelLayer {cartography} R Documentation
## Label Layer
### Description
Put labels on a map.
### Usage
labelLayer(
x,
spdf,
df,
spdfid = NULL,
dfid = NULL,
txt,
col = "black",
cex = 0.7,
overlap = TRUE,
show.lines = TRUE,
halo = FALSE,
bg = "white",
r = 0.1,
...
)
### Arguments
- x: an sf object, a simple feature collection. If x is used, spdf, df, dfid and spdfid are not used.
- spdf: a SpatialPointsDataFrame or a SpatialPolygonsDataFrame; if spdf is a SpatialPolygonsDataFrame, texts are plotted on centroids.
- df: a data frame that contains the labels to plot. If df is missing, spdf@data is used instead.
- spdfid: name of the identifier variable in spdf; defaults to the first column of the spdf data frame. (optional)
- dfid: name of the identifier variable in df; defaults to the first column of df. (optional)
- txt: labels variable.
- col: labels color.
- cex: labels cex.
- overlap: if FALSE, labels are moved so they do not overlap.
- show.lines: if TRUE, lines are plotted between the x,y coordinates and the word, for those words not covering their x,y coordinates.
- halo: if TRUE, a 'halo' is printed around the text; the additional arguments bg and r can be modified to set the color and width of the halo.
- bg: halo color if halo is TRUE.
- r: width of the halo.
- ...: further text arguments.
### Examples
library(sf)
library(cartography)
# sample dataset shipped with the cartography package
mtq <- st_read(system.file("gpkg/mtq.gpkg", package = "cartography"))
opar <- par(mar = c(0, 0, 0, 0))
plot(st_geometry(mtq))
labelLayer(x = mtq, txt = "LIBGEO", halo = TRUE, overlap = FALSE)
par(opar)
|
|
# Never appeared forthcoming papers - MathOverflow [closed]

**Question** (domenico fiorenza, 2010-12-06): This has been inspired by this MO question: http://mathoverflow.net/questions/48174/harmonic-maps-into-compact-lie-groups

Just for joking: which is your favourite never-appeared forthcoming paper? (Do not hesitate to close this question if inappropriate.)

**Answers**

- **Andreas Blass**: Dana Scott and Robert Solovay, "Boolean-valued models of set theory."
- **Nikita Sidorov**: A. Bertrand-Mathis, "Le $\theta$-shift sans peine."
- **Jason**: This doesn't exactly count as an unpublished forthcoming paper, but the supposed original proof of Fermat's Last Theorem that was "too large to fit in the margin" should probably be mentioned here.
- **Denis Serre**: This one is famous. It has been at the origin of a huge mathematical activity (conservation laws, homogenization, weak KAM, Hamilton-Jacobi equations, etc.): P.-L. Lions, G. Papanicolaou, and S. R. S. Varadhan, "Homogenization of Hamilton-Jacobi equations."
- **Jim Humphreys**: Nobody can compete with Fermat, but papers confidently labelled with the roman numeral I and never followed by II might fit here. Of these my favorite is one by Tits, "Normalisateurs de tores I," J. Algebra 4 (1966).
- **Dave Anderson**: The comment about stacks in the paper that first used them in an essential way probably belongs in this list: "Full details on the basic properties and theorems for algebraic stacks will be given elsewhere" (Deligne-Mumford, "The irreducibility of the space of curves of given genus," 1969). They don't quite say *they* will give the details in a paper, of course, so maybe it doesn't count.
- **Philip Brooker**: The books *Classical Banach Spaces III* and *Classical Banach Spaces IV* by Joram Lindenstrauss and Lior Tzafriri never appeared (after having been promised in various places in volumes I and II). As Albrecht Pietsch writes in his book *History of Banach Spaces and Linear Operators*, the reason the later volumes never appeared was that "the development was too vigorous. Thus, in order to finish this project, a complete rewriting would have been necessary." Even so, the influence of volumes I and II in Banach space theory has been exceedingly nontrivial; indeed, Pietsch also writes: "The two-volume treatise of Lindenstrauss/Tzafriri on *Classical Banach Spaces* has become the most important reference of the modern period."
- **Gerry Myerson**: Volumes 4 through 7 of *The Art of Computer Programming*.
- **Dan Ramras**: I'm a fan of Peter May's book *The Homotopical Foundations of Algebraic Topology* (feel free to correct the title if I've got it wrong). It has been referred to by May in various places, and sounds really interesting! But it has never been written.
- **David Farris**: Gromov's seminal "Pseudo holomorphic curves in symplectic manifolds" (1985) refers 10 or 15 times (for explanations of further applications that he only refers to or sketches briefly, and for "further discussion on $\overline{\partial}_\nu$ for non-regular curves") to his forthcoming "Pseudo holomorphic curves in symplectic manifolds, II," listed as "in press" by Springer. It never appeared. Gromov wrote a few later papers on symplectic geometry, but never returned to holomorphic curves. The paper is the foundation of modern symplectic topology (Floer homology, quantum cohomology, Gromov-Witten theory, symplectic field theory, etc.).
- **arsmath**: Jeff Smith's book on combinatorial model categories.
- **Laurent Berger**: Deligne's construction of the Galois representations attached to modular eigenforms (he did give a sketch in a Bourbaki talk, though).
- **Kevin Lin**: The sequel to Kontsevich's "Deformation quantization of Poisson manifolds, I" has never appeared.
- **Someone**: How about "The classification of finite quasithin groups" by G. Mason, from 1980? The classification of finite simple groups was announced while Mason was still working on this important case, and he then abandoned the work. This hole in the classification was finally closed in 2004 by M. Aschbacher and S. D. Smith.
- **anonymous**: W. Crawley-Boevey, "The Deligne-Simpson problem."
- **Christian Elsholtz**: Here is a gap in a famous series of papers: G. H. Hardy and J. E. Littlewood, "Some problems in Partitio Numerorum, VII." Their "Partitio Numerorum" series is quite influential in the development of the Hardy-Littlewood circle method. Some comments on the missing part are on page 253 of R. C. Vaughan, "Hardy's legacy to number theory," J. Austral. Math. Soc. Ser. A 65 (1998), pp. 238-266.
- **Dan Petersen**: There is a funny entry in the list of "Open problems and questions on the moduli space of curves" from a workshop in 2005 (http://www.aimath.org/WWN/modspacecurves/modspacecurves.pdf): "(14) (Bertram) When will Getzler's paper on $\overline{\mathcal M}_1$ appear (even just as a preprint)? Conjecture: $t \to \infty$. Getzler comments that he does not like how this question is phrased." Unfortunately I have no idea what paper is being referenced. Perhaps someone who knows can edit this post.
- **Jonathan Wise**: EGA, Chapters 5 through 12.
- **Jim Conant**: "The Aarhus integral of rational homology 3-spheres IV," by Bar-Natan, Garoufalidis, Rozansky, and D. Thurston, never appeared. I think developments in the field overtook the need for the paper, which was referred to in the first paper in the series. This is a great series of papers, by the way. Very clearly written.
- **anonymous**: B. Farb, "Automorphisms of $F_n$ which act trivially on homology."
- **anonymous**: J. Berge, "Some knots with surgeries yielding lens spaces" (c. 1990; cited by 92 on Google Scholar).
- **Gerry Myerson**: Steven Krantz tells the following story (*Mathematical Apocrypha*, page 136): "My Ph.D. thesis was based in part on work of Walter Koppelman of the University of Pennsylvania. My source was a very brief research announcement that Koppelman had published in the Bulletin of the AMS. I could never find the promised subsequent paper that would fill in all the details, and I had to fill them in myself. I eventually went to my thesis advisor and asked him where the missing paper was. He said, 'Oh, God. Don't you know?' And then he told me the sad story. There was a very unhappy graduate student at the University of Pennsylvania. He had had bad experiences with several thesis advisors (at least so he thought), the last being Koppelman. One day he went into the colloquium, shot the department chairman, shot Koppelman, and shot himself. Koppelman and the student died."
- **John Klein**: The Igusa-Waldhausen paper (roughly) entitled "The expansion space model for $Q(X_+)$," which is supposed to give a very different proof of the splitting $A(X) = Q(X_+) \times \text{Wh}^{\text{diff}}(X)$, based on a description of $Q(X_+)$ as the moduli space of finite relative cell complexes over $X$.
- **Johannes Ebert**: Kervaire and Milnor, "Groups of homotopy spheres II." In the introduction to part I, they write: "More detailed information about these groups will be given in Part II. For example, for $n = 1, 2, 3, \ldots, 18$, it will be shown that the order of the group $\theta_n$ is respectively:" (a table follows). Similar remarks are scattered throughout the text. The details have been written down by other people, and it must be said that part I contains the much more complicated arguments.
- **Zoran Škoda**:
  - S. Gelfand and Yu. Manin, *Methods of Homological Algebra*, first appeared in Russian as Volume 1 (the Russian title translates as "Methods of Homological Algebra: Introduction to the Theory of Cohomology and Derived Categories, Vol. 1"). Volume 2 has been given up, and the Springer Western edition does not cite the Russian original, has many typing errors in formulas which the Russian original does not have, and dropped the "Vol. 1" from the title.
  - M. Demazure and P. Gabriel, *Groupes algébriques*, tome 1, Masson et Cie, Paris, 1970; later volumes never appeared.
  - Z. Semadeni, *Banach Spaces of Continuous Functions*, Polish Scientific Publishers, Warszawa, 1971; the continuation never appeared from the Polish Scientific Publishers. There is, however, a different book with a similar title: *Schauder Bases in Banach Spaces of Continuous Functions*, Lecture Notes in Mathematics 918, Springer, 1982, v+136 pp. (MR83g:46023).
  - John W. Gray, *Formal Category Theory: Adjointness for 2-Categories*, Lecture Notes in Mathematics 391, Springer-Verlag, 1974, xii+282 pp., was envisioned as a 3-volume project on formal category theory; some material from the later volumes is mentioned in volume 1 but never appeared. The monograph is very innovative, and some of the material for the later volumes was undoubtedly sketched by the author in some detail. The author later drifted to theoretical computer science.
  - John Duskin started a paper in several parts, "Nerves of bicategories"; part I appeared with great delay, partly due to a serious health problem the author experienced a few years ago. The second and third parts did not appear, although the contents description looks very promising. We wish the author good health and more to be seen!
  - Grothendieck planned not only later EGAs but also later SGAs (e.g., some of Berthelot's work in SGA 8). The Bourbaki Éléments are of course never finished as well (and are now very slow, asymptotically stalling), as is the German encyclopedic work by Klein's students at the beginning of the 20th century. M. M. Postnikov wrote two volumes of a course on algebraic topology in Russian covering the basics of homotopy theory and promised homology for the "next semester," but no books on that appeared.
- **Rhett Butler**: Kurt Gödel referred to Part II in his seminal paper "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik 38 (1931), p. 191: "The true reason for the incompleteness which attaches to all formal systems of mathematics lies, as will be shown in Part II of this paper, in the fact that the formation of ever higher types can be continued into the transfinite" (translated from the German). This part never appeared.
- **Hicham**: There is a result by Oesterlé proving that, assuming the GRH, the first quadratic non-residue modulo a prime $p$ can be found in no more than $70\log(p)^2$ steps; this was later improved by Bach, who replaced the constant 70 by 2. Oesterlé's result was never published, and when I asked him why, he told me it was because the laptop containing the proof was stolen from his car. However, I think he exposed his proof to the mathematical community, so it is widely recognized.
|
|
# Question - Electrical Potential Energy of a System of Two Point Charges and of Electric Dipole in an Electrostatic Field
#### Question
Deduce the expression for the potential energy of a system of two charges $q_1$ and $q_2$ located at $\vec{r}_1$ and $\vec{r}_2$, respectively, in an external electric field.
#### Solution
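A standard textbook sketch of the requested expression (not the site's own gated solution): the work done to bring $q_1$ from infinity to $\vec{r}_1$ in the external potential $V$ is $q_1 V(\vec{r}_1)$; bringing in $q_2$ then costs $q_2 V(\vec{r}_2)$ plus the work done against the field of $q_1$. Hence the total potential energy of the system is

```latex
U = q_1 V(\vec{r}_1) + q_2 V(\vec{r}_2)
    + \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{|\vec{r}_1 - \vec{r}_2|}
```

where the last term is the mutual interaction energy of the two charges.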
|
|
## Annals of Applied Probability
### Weak approximation of second-order BSDEs
#### Abstract
We study the weak approximation of the second-order backward SDEs (2BSDEs), when the continuous driving martingales are approximated by discrete time martingales. We establish a convergence result for a class of 2BSDEs, using both robustness properties of BSDEs, as proved in Briand, Delyon and Mémin [ Stochastic Process. Appl. 97 (2002) 229–253], and tightness of solutions to discrete time BSDEs. In particular, when the approximating martingales are given by some particular controlled Markov chains, we obtain several concrete numerical schemes for 2BSDEs, which we illustrate on specific examples.
#### Article information
Source
Ann. Appl. Probab., Volume 25, Number 5 (2015), 2535-2562.
Dates
Revised: May 2014
First available in Project Euclid: 30 July 2015
https://projecteuclid.org/euclid.aoap/1438261048
Digital Object Identifier
doi:10.1214/14-AAP1055
Mathematical Reviews number (MathSciNet)
MR3375883
Zentralblatt MATH identifier
1325.60117
#### Citation
Possamaï, Dylan; Tan, Xiaolu. Weak approximation of second-order BSDEs. Ann. Appl. Probab. 25 (2015), no. 5, 2535--2562. doi:10.1214/14-AAP1055. https://projecteuclid.org/euclid.aoap/1438261048
#### References
• [1] Barles, G. and Souganidis, P. E. (1991). Convergence of approximation schemes for fully nonlinear second order equations. Asymptot. Anal. 4 271–283.
• [2] Bertsekas, D. P. and Shreve, S. E. (1978). Stochastic Optimal Control: The Discrete Time Case. Mathematics in Science and Engineering 139. Academic Press, New York.
• [3] Bichteler, K. (1981). Stochastic integration and $L^{p}$-theory of semimartingales. Ann. Probab. 9 49–89.
• [4] Billingsley, P. (1979). Probability and Measure. Wiley, New York.
• [5] Bonnans, J. F., Ottenwaelter, É. and Zidani, H. (2004). A fast algorithm for the two dimensional HJB equation of stochastic control. M2AN Math. Model. Numer. Anal. 38 723–735.
• [6] Briand, P., Delyon, B. and Mémin, J. (2002). On the robustness of backward stochastic differential equations. Stochastic Process. Appl. 97 229–253.
• [7] Debrabant, K. and Jakobsen, E. R. (2013). Semi-Lagrangian schemes for linear and fully non-linear diffusion equations. Math. Comp. 82 1433–1462.
• [8] Dolinsky, Y. (2010). Applications of weak convergence for hedging of game options. Ann. Appl. Probab. 20 1891–1906.
• [9] Dolinsky, Y., Nutz, M. and Soner, H. M. (2012). Weak approximation of $G$-expectations. Stochastic Process. Appl. 122 664–675.
• [10] El Karoui, N., Huu Nguyen, D. and Jeanblanc-Picqué, M. (1987). Compactification methods in the control of degenerate diffusions: Existence of an optimal control. Stochastics 20 169–219.
• [11] El Karoui, N. and Tan, X. (2013). Capacities, measurable selection and dynamic programming, part II: Application in stochastic control problems. Preprint.
• [12] Fahim, A., Touzi, N. and Warin, X. (2010). A probabilistic numerical method for fully nonlinear parabolic PDEs. Ann. Appl. Probab. 21 1322–1364.
• [13] Guo, W., Zhang, J. and Zhuo, J. (2013). A monotone scheme for high dimensional fully nonlinear PDEs. Preprint. Available at arXiv:1212.0466.
• [14] Jacod, J., Mémin, J. and Métivier, M. (1983). On tightness and stopping times. Stochastic Process. Appl. 14 109–146.
• [15] Jacod, J. and Shiryaev, A. N. (1987). Limit Theorems for Stochastic Processes. Grundlehren der Mathematischen Wissenschaften 288. Springer, Berlin.
• [16] Karandikar, R. L. (1995). On pathwise stochastic integration. Stochastic Process. Appl. 57 11–18.
• [17] Kushner, H. J. and Dupuis, P. G. (1992). Numerical Methods for Stochastic Control Problems in Continuous Time. Applications of Mathematics (New York) 24. Springer, New York.
• [18] Ma, J., Protter, P., San Martin, J. and Torres, S. (2002). Numerical method for backward stochastic differential equations. Ann. Appl. Probab. 12 302–316.
• [19] Matoussi, A., Possamaï, D. and Zhou, C. (2014). Robust utility maximization in non-dominated models with 2BSDE: The uncertain volatility model. Math. Finance. To appear.
• [20] Pardoux, É. and Peng, S. G. (1990). Adapted solution of a backward stochastic differential equation. Systems Control Lett. 14 55–61.
• [21] Peng, S. (2007). $G$-expectation, $G$-Brownian motion and related stochastic calculus of Itô type. In Stochastic Analysis and Applications. Abel Symp. 2 541–567. Springer, Berlin.
• [22] Soner, H. M., Touzi, N. and Zhang, J. (2011). Quasi-sure stochastic analysis through aggregation. Electron. J. Probab. 16 1844–1879.
• [23] Soner, H. M., Touzi, N. and Zhang, J. (2012). Wellposedness of second order backward SDEs. Probab. Theory Related Fields 153 149–190.
• [24] Stroock, D. W. and Varadhan, S. R. S. (1979). Multidimensional Diffusion Processes. Grundlehren der Mathematischen Wissenschaften 233. Springer, Berlin.
• [25] Tan, X. (2013). A splitting method for fully nonlinear degenerate parabolic PDEs. Electron. J. Probab. 18 1–24.
• [26] Tan, X. (2014). Discrete-time probabilistic approximation of path-dependent stochastic control problems. Ann. Appl. Probab. 24 1803–1834.
• [27] Zhang, J. (2004). A numerical scheme for BSDEs. Ann. Appl. Probab. 14 459–488.
|
|
# Eigensystem
Eigensystem calculates the eigen-energies and eigen-vectors of an Operator or matrix. The function comes in several flavors.
# Eigensystem of matrices
val, fun = Eigensystem(A)
## Input
• A : Square matrix (table of tables) of doubles
## Output
• val : Vector (Table of length #A) of doubles representing the eigen values
• fun : Matrix (Table of Tables of length #A by #A) representing the eigen vectors (fun[1] represents the first eigenvector)
## Example
A small example:
### Input
Eigensystem_Matrix.Quanty
A = {{ 1 , I , 0 },
{-I , 2 , 1 },
{ 0 , 1 , 5 }}
val, fun = Eigensystem(A)
print(val)
print(fun)
### Result
Eigensystem_Matrix.out
{ 0.31866935639502 , 2.3579263675185 , 5.3234042760865 }
{ { (0 - 0.82050111444738 I) , 0.55903255238504 , -0.11941744665028 } ,
{ (0 - 0.56721932561261 I) , -0.77024207841542 , 0.29152937637548 } ,
{ (0 + 0.070994069063423 I) , 0.30693606176558 , 0.94907855109346 } }
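As a cross-check independent of Quanty (my addition, not part of the original documentation), the same eigenvalues can be computed with NumPy; `eigvalsh` handles the Hermitian matrix and returns the eigenvalues in ascending order, matching the Quanty output:

```python
import numpy as np

# The Hermitian matrix from the Quanty example above.
A = np.array([[1, 1j, 0],
              [-1j, 2, 1],
              [0, 1, 5]])

vals = np.linalg.eigvalsh(A)  # Hermitian matrix -> real eigenvalues, ascending
print(np.round(vals, 6))  # → [0.318669 2.357926 5.323404]
```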
# Eigensystem of Operators iteratively found starting from a random state subject to restrictions
psiList = Eigensystem(Hamiltonian, StartRestrictions, Npsi)
calculates the lowest Npsi eigenstates of an operator Hamiltonian. The program uses iterative methods and needs a starting point to be defined. Note that the operator Hamiltonian does not know how many electrons there are in the system; only the number of orbitals is defined, not whether these orbitals are occupied. A set of start restrictions is given as a table listing the number of Fermionic modes and Bosonic modes, followed by lists each containing a string that tells the program which orbitals are included in the counting, and then a minimum and maximum number of electrons in those orbitals. For example:
StartRestrictions = {Nf, Nb, {"111111",2,2}}
This defines two electrons in a p-shell. A start restriction of:
StartRestrictions = {Nf, Nb, {"101010",2,2}}
would tell the program to start from two electrons with spin down and either 0, 1, 2, or 3 electrons with spin up. This is a well-defined calculation, but normally not a calculation one would like to do. If you want to start with two electrons with spin down and no electrons with spin up, one needs:
StartRestrictions = {Nf, Nb, {"101010",2,2}, {"010101",0,0}}
The final result need not fulfill the start restrictions: the program finds the lowest states in the basis spanned by $(H+1)^{\infty} \psi_0$, where $\psi_0$ is some random state that satisfies the start restrictions. If $S_z$ is a conserved quantum number of $H$, the numbers of spin-up and spin-down electrons stay fixed. If not, states with different values of $S_z$ will mix.
## Input
• Hamiltonian : Operator
• StartRestrictions : Table of restrictions (same format as restrictions that can be set for operators)
• Npsi : Number of lowest eigenstates to calculate. Calculation speed and memory usage depend on this number. States that are higher in energy than $\approx 10\times k_B \times T$ are not important for any physics and do not need to be calculated.
## Output
• psiList : A table of Wavefunctions of length Npsi
## Example
A small example:
### Input
Eigensystem_Startrestrictions.Quanty
dofile("../definitions.Quanty")
-- we define an Hamiltonian
Hamiltonian = OppU + 0.1 * Oppldots + 0.000001 * (2*OppSz+OppLz)
-- number of states calculated
Npsi=15
-- In order to make sure we have a filling of 2
-- electrons we need to define some restrictions
StartRestrictions = {Nf, Nb, {"111111",2,2}}
-- We now can create the lowest Npsi eigenstates:
psiList = Eigensystem(Hamiltonian, StartRestrictions, Npsi)
-- We define a list of some operators to look at their expectation values
oppList={Hamiltonian, OppSsqr, OppLsqr, OppJsqr, OppSz, OppLz}
-- after we've created the eigen states we can look
-- at a list of their expectation values
print(" <E> <S^2> <L^2> <J^2> <S_z> <L_z>");
for i=1,#psiList do
for j=1,#oppList do
io.write(string.format("%7.4f ",psiList[i]*oppList[j]*psiList[i]))
end
io.write("\n")
end
### Result
Eigensystem_Startrestrictions.out
<E> <S^2> <L^2> <J^2> <S_z> <L_z>
-2.1033 1.9989 1.9989 0.0000 -0.0000 0.0000
-2.0500 2.0000 2.0000 2.0000 -0.5000 -0.5000
-2.0500 2.0000 2.0000 2.0000 0.0000 -0.0000
-2.0500 2.0000 2.0000 2.0000 0.5000 0.5000
-1.9521 1.9982 2.0036 6.0000 -0.9991 -1.0009
-1.9521 1.9982 2.0036 6.0000 -0.4995 -0.5005
-1.9521 1.9982 2.0036 6.0000 0.0000 -0.0000
-1.9521 1.9982 2.0036 6.0000 0.4996 0.5004
-1.9521 1.9982 2.0036 6.0000 0.9991 1.0009
0.4021 0.0018 5.9964 6.0000 -0.0009 -1.9991
0.4021 0.0018 5.9964 6.0000 -0.0005 -0.9995
0.4021 0.0018 5.9964 6.0000 0.0000 -0.0000
0.4021 0.0018 5.9964 6.0000 0.0005 0.9995
0.4021 0.0018 5.9964 6.0000 0.0009 1.9991
4.0033 0.0011 0.0011 -0.0000 0.0000 -0.0000
# Eigensystem of Operators iteratively found starting from a given set of wavefunctions
Eigensystem(Hamiltonian, psiList)
Instead of starting from a random state that fulfills restrictions, one can start from a pre-defined state. This can be useful if you have a large system and only want to do perturbation theory (configuration interaction, for example), or if you want to calculate the eigenstates of the Hamiltonian after making a small change to it. In the example below, we first calculate the eigenstates of the Hamiltonian without spin-orbit coupling and then, in a next step, start from these eigenstates to calculate those of the Hamiltonian with spin-orbit coupling.
## Input
• Hamiltonian : Operator
• psiList : A table of Wavefunctions overwritten on output
## Output
• psiList : A table of Wavefunctions
## Example
A small example:
### Input
Eigensystem_Wavefunction.Quanty
dofile("../definitions.Quanty")
-- we define an Hamiltonian
Hamiltonian = OppU + 0.000001 * (2*OppSz+OppLz)
-- number of states calculated
Npsi=15
StartRestrictions = {Nf, Nb, {"111111",2,2}}
psiList = Eigensystem(Hamiltonian, StartRestrictions, Npsi)
oppList={Hamiltonian, OppSsqr, OppLsqr, OppJsqr, OppSz, OppLz}
print(" <E> <S^2> <L^2> <J^2> <S_z> <L_z>");
for i=1,#psiList do
for j=1,#oppList do
io.write(string.format("%7.4f ",psiList[i]*oppList[j]*psiList[i]))
end
io.write("\n")
end
Hamiltonian = Hamiltonian + 0.1 * Oppldots
oppList={Hamiltonian, OppSsqr, OppLsqr, OppJsqr, OppSz, OppLz}
-- Recalculate eigen states with spin orbit
Eigensystem(Hamiltonian,psiList)
print(" <E> <S^2> <L^2> <J^2> <S_z> <L_z>");
for i=1,#psiList do
for j=1,#oppList do
io.write(string.format("%7.4f ",psiList[i]*oppList[j]*psiList[i]))
end
io.write("\n")
end
### Result
Eigensystem_Wavefunction.out
<E> <S^2> <L^2> <J^2> <S_z> <L_z>
-2.0000 2.0000 2.0000 6.0000 -1.0000 -1.0000
-2.0000 2.0000 2.0000 4.0000 -1.0000 0.0000
-2.0000 2.0000 2.0000 4.0000 0.0000 -1.0000
-2.0000 2.0000 2.0000 2.0000 -1.0000 1.0000
-2.0000 2.0000 2.0000 4.0000 0.0000 0.0000
-2.0000 2.0000 2.0000 2.0000 1.0000 -1.0000
-2.0000 2.0000 2.0000 4.0000 0.0000 1.0000
-2.0000 2.0000 2.0000 4.0000 1.0000 0.0000
-2.0000 2.0000 2.0000 6.0000 1.0000 1.0000
0.4000 0.0000 6.0000 6.0000 0.0000 -2.0000
0.4000 0.0000 6.0000 6.0000 0.0000 -1.0000
0.4000 0.0000 6.0000 6.0000 0.0000 0.0000
0.4000 0.0000 6.0000 6.0000 0.0000 1.0000
0.4000 0.0000 6.0000 6.0000 0.0000 2.0000
4.0000 -0.0000 -0.0000 -0.0000 0.0000 0.0000
<E> <S^2> <L^2> <J^2> <S_z> <L_z>
-2.1033 1.9989 1.9989 0.0000 -0.0000 0.0000
-2.0500 2.0000 2.0000 2.0000 -0.5000 -0.5000
-2.0500 2.0000 2.0000 2.0000 0.0000 -0.0000
-2.0500 2.0000 2.0000 2.0000 0.5000 0.5000
-1.9521 1.9982 2.0036 6.0000 -0.9991 -1.0009
-1.9521 1.9982 2.0036 6.0000 -0.4995 -0.5005
-1.9521 1.9982 2.0036 6.0000 0.0000 -0.0000
-1.9521 1.9982 2.0036 6.0000 0.4996 0.5004
-1.9521 1.9982 2.0036 6.0000 0.9991 1.0009
0.4021 0.0018 5.9964 6.0000 -0.0009 -1.9991
0.4021 0.0018 5.9964 6.0000 -0.0005 -0.9995
0.4021 0.0018 5.9964 6.0000 0.0000 -0.0000
0.4021 0.0018 5.9964 6.0000 0.0005 0.9995
0.4021 0.0018 5.9964 6.0000 0.0009 1.9991
4.0033 0.0011 0.0011 -0.0000 0.0000 -0.0000
# Options
The last argument of Eigensystem can be a table of options. Possible options are:
• “DenseBoarder” Number of determinants in the bases where we switch from dense methods to sparse methods. (Standard value 1000)
• “NKrylovStart” Number of Krylov states in the Krylov basis in the first iteration. (Standard value 100)
• “NKrylovMax” Maximum number of Krylov states in the basis before a next iteration starts. (Standard value 500)
• “NKrylovStep” Increase in size of Krylov basis between each iteration. (Standard value 10)
• “NKrylovMin” Minimum number of states in the Krylov basis when running out of memory before the program stops storing the Krylov basis and goes to a mode where the states are recalculated (slower but much less memory). (Standard value 50)
• “NBitsKey” Number of bits in the key for the Hash lookup tables. (Standard value 16)
• “Epsilon” Smallest coefficient for a determinant to be kept in the basis. (Standard value $1.49012*10^{-8}$)
• “Zero” Convergence limit (converged when the variance of the energy is smaller than this value). (Standard value $1.49012*10^{-6}$)
• “Restrictions” Possible set of restrictions for calculating the ground-state. (Standard value nil)
• “ExpandBasis” Boolean determining if the basis set should be kept fixed or allowed to expand. (Standard value true)
|
|
# Generalized dynamic process for generalized $(\psi, S,F)$-contraction with applications in $b$-Metric Spaces
Document Type : Research Paper
Authors
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
Abstract
In this paper, we develop the notion of $(\psi, F)$-contraction mappings introduced in \cite{wad1} in $b$-metric spaces. To achieve this, we introduce the notions of generalized multi-valued $(\psi, S, F)$-contraction type I mapping with respect to a generalized dynamic process $D(S, T, x_0)$ and generalized multi-valued $(\psi, S, F)$-contraction type II mapping with respect to a generalized dynamic process $D(S, T, x_0),$ and establish common fixed point results for these classes of mappings in complete $b$-metric spaces. As an application, we obtain the existence of solutions of dynamic programming and integral equations. The results presented in this paper extend and complement some related results in the literature.
Keywords
|
|
Question
Point A is located at coordinates (-3,\ 2). Point B is the image of point A after a rotation of 180° using (0,\ 0) as the center of rotation. What are the coordinates of point B?
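A 180° rotation about the origin maps $(x,\ y)$ to $(-x,\ -y)$, so point B is at $(3,\ -2)$. A minimal check in Python (the helper function is my own illustration):

```python
def rotate180(point):
    # A 180-degree rotation about the origin maps (x, y) to (-x, -y).
    x, y = point
    return (-x, -y)

print(rotate180((-3, 2)))  # → (3, -2)
```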
|
|
The vertices of a triangle are A(4, 1), B(9, 1), C(2, 5). What is the area of this triangle?
Mar 13, 2020
The vertices of a triangle are A(4, 1), B(9, 1), C(2, 5).
What is the area of this triangle.
$$\begin{array}{|rcll|} \hline \text{The area of this triangle is } & \dfrac{5*4}{2} = \mathbf{10}\\ \hline \end{array}$$
Mar 13, 2020
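The answer above reads the base (AB, length 5) and the height (4) directly off the coordinates. For vertices in general position, the shoelace formula gives the same result; a quick sketch in Python:

```python
def triangle_area(a, b, c):
    # Shoelace formula: 2*Area = |x_A(y_B - y_C) + x_B(y_C - y_A) + x_C(y_A - y_B)|
    return abs(a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1])) / 2

print(triangle_area((4, 1), (9, 1), (2, 5)))  # → 10.0
```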
|
|
# Different assumptions between independent normal and multivariate normal distributions
I have been dealing with multivariate meta-analysis (MVMA) lately. In the literature, various authors say that the main assumption of MVMA is that the different variables follow a multivariate normal distribution. I understand that. However, I don't understand how that assumption is stronger than the assumption that the different variables are independently normally distributed. Could somebody explain that?
• Marginal normal independence is simply a special case of multivariate normal with zero covariance. L Feb 21 '17 at 0:26
Basically, if you assume that your data is distributed as independent normals, then you also assume that the variables are uncorrelated. On the other hand, if you assume that they are distributed as multivariate normal, then it also means that you consider that there may be some non-zero correlation between the variables (recall that the multivariate normal is parametrized by a covariance matrix $\Sigma$).
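As an illustration (my own sketch, not from the thread), sampling from the two models makes the difference concrete: independence forces the off-diagonal covariance to zero, while the multivariate normal lets it be non-zero and estimated from data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent normals: a special case of the multivariate normal
# with a diagonal covariance matrix.
indep = rng.standard_normal((n, 2))

# Multivariate normal with non-zero covariance (correlation 0.8 here).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
corr = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=n)

print(np.corrcoef(indep.T)[0, 1])  # close to 0
print(np.corrcoef(corr.T)[0, 1])   # close to 0.8
```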
• To me it seems that the assumption of independence is a much stronger assumption than allowing the data to infer some correlation, if any. By that logic you would say that MVMA imposes less strong assumptions since it allows for more flexibility. Is that wrong?
– Nay
Feb 20 '17 at 21:08
• @GNik I don't want to argue with that since I do not really know what exactly you are referring to (you did not provide any reference or quote). Assuming independent normals may be seen as a stronger assumption in the sense that you assume that the correlation is zero, while in the case of the multivariate normal you do not assume that but allow flexibility, so your model may adapt to data. But rather than discussing "strongness" or not, it is much more important to focus on the consequences (i.e. assuming no correlation and treating data as uncorrelated).
– Tim
Feb 20 '17 at 21:16
• Ok that makes sense. (I am working on the application of multivariate meta-analysis in the context of Health Technology Assessments, where usually multiple outcomes are simultaneously meta-analysed. Until recently people were ok with assuming independence, but the truth is that allowing for correlation among outcomes can yield great advantages [increased precision due to the borrowing of strength]. I am not working on a specific project. Instead, I am just trying to understand at what costs [additional assumptions] these advantages come.)
– Nay
Feb 20 '17 at 21:31
• @GNik the cost is estimating the covariance matrix. Independence is a common assumption, that makes things much easier in many cases (see also stats.stackexchange.com/questions/213464/… ).
– Tim
Feb 20 '17 at 21:41
|
|
# SURFExample.cxx¶
Example usage:
./SURFExample Input/ROISpot5.png Output/ROISpot5SURF.png 3 3
Example source code (SURFExample.cxx):
// This example illustrates the use of the
// \doxygen{otb}{ImageToSURFKeyPointSetFilter}. The Speed-Up Robust
// Features (or SURF) is an algorithm in computer vision to detect and
// describe local features in images. The algorithm is detailed in
// \cite{SURF}. The applications of SURF are the same as those for
// SIFT.
//
// The first step required to use this filter is to include its header file.
#include "otbImageToSURFKeyPointSetFilter.h"
#include "otbImage.h"
#include "otbImageFileWriter.h"
#include "itkVariableLengthVector.h"
#include "itkRGBPixel.h"
#include "itkImageRegionIterator.h"
#include <iostream>
#include <fstream>
int main(int argc, char* argv[])
{
if (argc != 5)
{
std::cerr << "Usage: " << argv[0];
std::cerr << " InputImage OutputImage octaves scales" << std::endl;
return 1;
}
const char* infname = argv[1];
const char* outputImageFilename = argv[2];
const unsigned int octaves = atoi(argv[3]);
const unsigned int scales = atoi(argv[4]);
const unsigned int Dimension = 2;
// We will start by defining the required types. We will work with a
// scalar image of float pixels, and we define the corresponding image type.
using RealType = float;
using ImageType = otb::Image<RealType, Dimension>;
// The SURF descriptors will be stored in a point set containing the
// vector of features.
using RealVectorType = itk::VariableLengthVector<RealType>;
using PointSetType = itk::PointSet<RealVectorType, Dimension>;
// The SURF filter itself is templated over the input image and the
// generated point set.
using ImageToFastSURFKeyPointSetFilterType = otb::ImageToSURFKeyPointSetFilter<ImageType, PointSetType>;
// We instantiate a reader for the input image, as well as the filter itself.
using ReaderType = otb::ImageFileReader<ImageType>;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(infname);
ImageToFastSURFKeyPointSetFilterType::Pointer filter = ImageToFastSURFKeyPointSetFilterType::New();
// We plug the reader into the filter and set the number of octaves and
// scales for the SURF computation. We can afterwards run the processing
// with the \code{Update()} method.
filter->SetInput(reader->GetOutput());
filter->SetOctavesNumber(octaves);
filter->SetScalesNumber(scales);
filter->Update();
// Once the SURF are computed, we may want to draw them on top of the
// input image. In order to do this, we will create the following RGB
// image and the corresponding writer:
using PixelType = unsigned char;
using RGBPixelType = itk::RGBPixel<PixelType>;
using OutputImageType = otb::Image<RGBPixelType, 2>;
using WriterType = otb::ImageFileWriter<OutputImageType>;
OutputImageType::Pointer outputImage = OutputImageType::New();
// We set the regions of the image by copying the information from the
// input image and we allocate the memory for the output image.
outputImage->SetRegions(reader->GetOutput()->GetLargestPossibleRegion());
outputImage->Allocate();
// We can now proceed to copy the input image into the output one
// using region iterators. The input image is a grey level one. The
// output image will be made of color crosses for each SURF on top of
// the grey level input image. So we start by copying the grey level
// values on each of the 3 channels of the color image.
itk::ImageRegionIterator<ImageType>       iterInput(reader->GetOutput(), reader->GetOutput()->GetLargestPossibleRegion());
itk::ImageRegionIterator<OutputImageType> iterOutput(outputImage, outputImage->GetLargestPossibleRegion());
for (iterOutput.GoToBegin(), iterInput.GoToBegin(); !iterOutput.IsAtEnd(); ++iterOutput, ++iterInput)
{
OutputImageType::PixelType rgbPixel;
rgbPixel.SetRed(static_cast<PixelType>(iterInput.Get()));
rgbPixel.SetGreen(static_cast<PixelType>(iterInput.Get()));
rgbPixel.SetBlue(static_cast<PixelType>(iterInput.Get()));
iterOutput.Set(rgbPixel);
}
// We are now going to plot color crosses on the output image. We will
// need to define offsets (top, bottom, left and right) with respect
// to the SURF position in order to draw the cross segments.
ImageType::OffsetType t = {{0, 1}};
ImageType::OffsetType b = {{0, -1}};
ImageType::OffsetType l = {{1, 0}};
ImageType::OffsetType r = {{-1, 0}};
// Now, we are going to access the point set generated by the SURF
// filter. The points are stored into a points container that we are
// going to walk through using an iterator. These are the types needed
using PointsContainerType = PointSetType::PointsContainer;
using PointsIteratorType = PointsContainerType::Iterator;
// We set the iterator to the beginning of the point set.
PointsIteratorType pIt = filter->GetOutput()->GetPoints()->Begin();
// We get the information about the image origin and spacing before drawing
// the crosses.
ImageType::PointType   origin = reader->GetOutput()->GetOrigin();
ImageType::SpacingType spacing = reader->GetOutput()->GetSpacing();
// And we iterate through the SURF set:
while (pIt != filter->GetOutput()->GetPoints()->End())
{
// We get the pixel coordinates for each SURF by using the
// \code{Value()} method on the point set iterator. We use the
// information about size and spacing in order to convert the physical
// coordinates of the point into pixel coordinates.
ImageType::IndexType index;
index[0] = static_cast<unsigned int>(std::floor(static_cast<double>((pIt.Value()[0] - origin[0]) / spacing[0] + 0.5)));
index[1] = static_cast<unsigned int>(std::floor(static_cast<double>((pIt.Value()[1] - origin[1]) / spacing[1] + 0.5)));
// We create a green pixel.
OutputImageType::PixelType keyPixel;
keyPixel.SetRed(0);
keyPixel.SetGreen(255);
keyPixel.SetBlue(0);
// We draw the crosses using the offsets and checking that we are
// inside the image, since SURFs on the image borders would cause an
// out of bounds pixel access.
if (outputImage->GetLargestPossibleRegion().IsInside(index))
{
outputImage->SetPixel(index, keyPixel);
if (outputImage->GetLargestPossibleRegion().IsInside(index + t))
outputImage->SetPixel(index + t, keyPixel);
if (outputImage->GetLargestPossibleRegion().IsInside(index + b))
outputImage->SetPixel(index + b, keyPixel);
if (outputImage->GetLargestPossibleRegion().IsInside(index + l))
outputImage->SetPixel(index + l, keyPixel);
if (outputImage->GetLargestPossibleRegion().IsInside(index + r))
outputImage->SetPixel(index + r, keyPixel);
}
++pIt;
}
// Finally, we write the image.
WriterType::Pointer writer = WriterType::New();
writer->SetFileName(outputImageFilename);
writer->SetInput(outputImage);
writer->Update();
// Figure~\ref{fig:SURFFast} shows the result of applying the SURF
// point detector to a small patch extracted from a Spot 5 image.
// \begin{figure}
// \center
// \includegraphics[width=0.40\textwidth]{ROISpot5.eps}
// \includegraphics[width=0.40\textwidth]{ROISpot5SURF.eps}
// \itkcaption[SURF Application]{Result of applying the
// \doxygen{otb}{ImageToSURFKeyPointSetFilter} to a Spot 5
// image.}
// \label{fig:SURFFast}
// \end{figure}
return EXIT_SUCCESS;
}
|
|
# Level Set Method Part I: Introduction
Looking at the trend in Computer Vision, people are steadily abandoning the classical methods and just throwing everything into a Deep Neural Network. It is, however, very useful to study the classical CV methods, as they are still a key foundation, regardless of whether we plan to use a DNN or not.
We will look at one of the classic algorithms in Computer Vision: the Level Set Method (LSM). In this post we will see the motivation behind it, the intuition, formulation and finally the implementation of LSM. In the next post, we will apply this method for image segmentation.
## Intuition
Let’s say we throw a stone into the middle of a pond. What would happen? There would be a ripple of water (for simplicity, let’s just pick one wave), moving out from the epicenter, going wide until it dissipates or hits the pond’s edge. How do we model and simulate that phenomenon?
We could do this: model the curve of that ripple and track its movement. Let’s say at time $$t = 1$$, this is what the ripple looks like:
Now, we want to model the movement as time goes on. To do that, we have to track the movement of the curve above. One way to do it is to sample sufficiently many points on that curve and move them in the direction normal to the curve.
This is a good solution for a very simple simulation (like this one). However, consider what happens in this case:
Assuming there’s no external force, those two curves will merge into a single curve. We also have to consider the case where a curve splits into two or more curves. How do we model that?
This is where LSM shines. Instead of modeling the curve explicitly, LSM models it implicitly. But how can this help us model the splitting, merging, and movement of the curve? Let’s see how it works.
Suppose we have this 3D curve (surface):
We can model the above curve (circle) with this surface by exploiting the relation between surfaces, planes, and curves. What we’re going to do is adjust our surface so that it intersects a plane at a certain height. Like this:
Take a closer look at the intersection. What is it? It is none other than a curve, specifically, a circle! This curve is a level curve, i.e., a curve of a level set. This is the idea behind LSM. We implicitly modify our curve by transforming the surface, then intersecting it with a plane and evaluating the resulting level curve.
But it’s still not clear how LSM could model the merge and split operations of curves. Let’s do something to our surface and see what it does to our level curve.
In the above graph, we transform the surface into a surface with two minima. We can see the effect on the level curve: instead of a single circle, it now becomes two. Similarly with the merge operation:
Effortlessly, the level curve captures it!
This is a powerful insight and we’re going to formulate this.
## Level Set Method Formulation
Suppose we have a surface $$\phi(x)$$. The c-level set of this surface is given by:
$$\Gamma_c = \{ x \mid \phi(x) = c \}$$
Formally, we want to track the level curve at $$c = 0$$, which is the zero level set of $$\phi$$.
As we’re dealing with curve and surface evolution, we will parameterize our surface with a temporal variable $$t$$ such that:
$$\phi = \phi(x(t), t)$$
We could think of that as the surface at time $$t$$, evaluated at the position $$x(t)$$.
Next, as we want to track the movement of the zero level curve of $$\phi$$, we differentiate it with respect to $$t$$, i.e., we derive the equation of motion of $$\phi$$. Remember, the derivative of position is speed, and knowing the speed, we can model the movement of the surface.
Since the curve stays on the zero level set, $$\phi(x(t), t) = 0$$ for all $$t$$. Using the chain rule, we get:
$$\frac{\partial \phi}{\partial x} \cdot \frac{\partial x}{\partial t} + \frac{\partial \phi}{\partial t} = 0$$
Remember, by definition, the leftmost partial derivative is the gradient of our surface. Also, for clearer reading, we will switch from Leibniz to the more compact subscript notation: $$\nabla \phi \cdot x_t + \phi_t = 0$$.
As we stated above, the direction of the curve’s movement is normal to the curve, which is $$\frac{\nabla \phi}{\lVert \nabla \phi \rVert}$$. Of course there is also a force that moves the curve, which we call $$F$$. Hence, the speed vector is given by $$x_t = F \frac{\nabla \phi}{\lVert \nabla \phi \rVert}$$.
Finally, substituting $$x_t$$ and using $$\nabla \phi \cdot \nabla \phi = \lVert \nabla \phi \rVert^2$$, we have our level set equation:
$$\phi_t = -F \lVert \nabla \phi \rVert$$
This gives us the speed of the surface evolution of $$\phi$$.
## Solving the PDE
Knowing the initial value of $$\phi$$ (let’s say 0 everywhere) and the speed of evolution, we can solve the equation of motion. That is, we want to know surface $$\phi$$ at time $$t$$. This is a Partial Differential Equation (PDE).
The simplest way to solve this would be to use the Finite Difference Method. Let’s consider the forward difference scheme:
$$\phi^{n+1} = \phi^{n} + \Delta t \, \phi_t$$
Plugging in our $$\phi_t$$ we have:
$$\phi^{n+1} = \phi^{n} - \Delta t \, F \lVert \nabla \phi^{n} \rVert$$
To make it clearer for those who have a background in Machine Learning, recall gradient descent. The update rule is of the form:
$$\theta^{n+1} = \theta^{n} - \alpha \, \nabla_\theta J(\theta^{n})$$
which is analogous to the finite difference scheme above. So we’re practically doing gradient descent on $$\phi$$ with respect to $$t$$!
So that’s it. We just need to provide an initial value for $$\phi$$ and figure out the equation of the force $$F$$, which depends on the system we’re going to model.
## Implementation
Given the finite difference formulation to solve the LSM’s PDE, we could now implement it.
As we see, first we need to provide initial values. Then at every iteration, we look at the zero level set of $$\phi$$, and we get our curve evolution! For example in Python using matplotlib, it would be something like this:
import numpy as np
import matplotlib.pyplot as plt
phi = np.random.randn(20, 20)  # initial value for phi
F = 1                          # force term; a constant speed here, in general some function
dt = 1
it = 100
for i in range(it):
    dphi = np.array(np.gradient(phi))             # spatial gradient of phi
    dphi_norm = np.sqrt(np.sum(dphi**2, axis=0))  # Euclidean norm of the gradient
    phi = phi - dt * F * dphi_norm                # forward-difference update
# plot the zero level curve of phi
plt.contour(phi, levels=[0])
plt.show()
As we can see, the core of LSM can be implemented with just a few lines of code. Bear in mind, this is the simplest formulation of LSM. There are many sophisticated variations of LSM which modify $$\frac{\partial \phi(x(t), t)}{\partial t}$$.
## Conclusion
In this post, we looked at the Level Set Method (LSM), which is a method to model curve evolution using an implicit contour. LSM is powerful because we don’t have to explicitly model difficult curve evolutions like merges and splits.
Then, we looked at the LSM formulation and how to solve the LSM as PDE using Finite Difference Method.
Finally, we implemented the simplest formulation of LSM in Python.
## References
1. Richard Szeliski. 2010. Computer Vision: Algorithms and Applications (1st ed.). Springer-Verlag New York, Inc., New York, NY, USA.
2. http://step.polymtl.ca/~rv101/levelset/
|
|
# 1256. Cable master
Inhabitants of the Wonderland have decided to hold a regional programming contest. The Judging Committee has volunteered and has promised to organize the most honest contest ever. It was decided to connect computers for the contestants using a “star” topology - i.e. connect them all to a single central hub. To organize a truly honest contest, the Head of the Judging Committee has decreed to place all contestants evenly around the hub on an equal distance from it.
To buy network cables, the Judging Committee has contacted a local network solutions provider with a request to sell for them a specified number of cables with equal lengths. The Judging Committee wants the cables to be as long as possible to sit contestants as far from each other as possible.
The Cable Master of the company was assigned to the task. He knows the length of each cable in the stock up to a centimeter, and he can cut them with a centimeter precision being told the length of the pieces he must cut. However, this time, the length is not known and the Cable Master is completely puzzled.
You are to help the Cable Master, by writing a program that will determine the maximal possible length of a cable piece that can be cut from the cables in the stock, to get the specified number of pieces.
### Input Format
The first line of the input file contains two integer numbers $N$ and $K$, separated by a space. $N$ $(1 \leq N \leq 10000)$ is the number of cables in the stock, and $K$ $(1 \leq K \leq 10000)$ is the number of requested pieces. The first line is followed by $N$ lines with one number per line, that specify the length of each cable in the stock in meters. All cables are at least $1$ meter and at most $100$ kilometers in length. All lengths in the input file are written with a centimeter precision, with exactly two digits after a decimal point.
### Output Format
Write to the output file the maximal length (in meters) of the pieces that Cable Master may cut from the cables in the stock to get the requested number of pieces. The number must be written with a centimeter precision, with exactly two digits after a decimal point.
If it is not possible to cut the requested number of pieces each one being at least one centimeter long, then the output file must contain the single number $0.00$.
### Sample
Input
4 11
8.02
7.43
4.57
5.39
Output
2.00
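A standard approach (my addition, not part of the problem statement) is to binary-search over the answer: a piece length is feasible when the total number of pieces it yields, summed over all cables, is at least $K$, and this feasibility is monotone in the length. A sketch in Python, working in integer centimeters to match the required precision:

```python
def max_piece_length(lengths_m, k):
    # Convert to integer centimeters to avoid floating-point issues.
    cables = [round(length * 100) for length in lengths_m]
    lo, hi = 1, max(cables)  # candidate piece lengths in centimeters
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        # Monotone feasibility test: how many pieces of length mid can we cut?
        if sum(c // mid for c in cables) >= k:
            best = mid    # mid is feasible, try longer pieces
            lo = mid + 1
        else:
            hi = mid - 1  # mid is too long
    return best / 100     # back to meters; 0.0 when impossible

print(f"{max_piece_length([8.02, 7.43, 4.57, 5.39], 11):.2f}")  # → 2.00
```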
9 users solved it; 12 users have tried.
12 submissions accepted out of 69 total.
6.4 EMB reward.
|
|
# How do you find the derivative of sqrt(1/x^3)?
##### 2 Answers
Oct 8, 2016
I think the simplest thing is to rewrite it so that we can use the power rule.
#### Explanation:
$\sqrt{\frac{1}{x^3}} = \frac{1}{\sqrt{x^3}} = \frac{1}{x^{\frac{3}{2}}} = x^{-\frac{3}{2}}$
So the derivative is
$- \frac{3}{2} {x}^{\left(- \frac{3}{2} - 1\right)} = - \frac{3}{2} {x}^{- \frac{5}{2}} = - \frac{3}{2 \sqrt{{x}^{5}}} = - \frac{3}{2 {x}^{2} \sqrt{x}}$
Oct 8, 2016
-3/(2x^(5/2))
#### Explanation:
Start by rewriting the function as
$y = \sqrt{\frac{1}{{x}^{3}}} = {\left(\frac{1}{x^3}\right)}^{\frac{1}{2}} = \frac{1}{{x}^{\frac{3}{2}}} = {x}^{- \frac{3}{2}}$
now differentiate using the $\textcolor{blue}{\text{power rule}}$
$\frac{\mathrm{dy}}{\mathrm{dx}} = - \frac{3}{2} {x}^{- \frac{5}{2}} = - \frac{3}{2 {x}^{\frac{5}{2}}}$
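As a quick sanity check of the result above (my own addition, using only the standard library), a central finite difference should agree with the closed form $-\frac{3}{2}x^{-5/2}$:

```python
import math

def f(x):
    return math.sqrt(1 / x**3)

def df(x):
    # closed-form derivative derived above: -3 / (2 * x**(5/2))
    return -3 / (2 * x**2 * math.sqrt(x))

# central finite differences as an independent spot-check
h = 1e-6
for x0 in (0.5, 1.0, 2.0, 4.0):
    numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
    assert abs(numeric - df(x0)) < 1e-5
print("derivative matches finite differences")
```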
|
|
# Sparse signal FFT
Say I had a time domain signal $$x[k]$$ which is sparse: $$\log(N)^2$$ nonzero samples, and the Fourier transform has only a very (very!) small number of high frequency components. Are there any techniques or algorithms for recovering these high frequency components in sublinear time? (Something like $$O(k^c\log(N)^d)$$, where $$k$$ is a variable related to the sparsity in either the frequency or the time domain.)
- Partial FFT
- Sparse FFT
You can also subsample the input, which will alias (fold) the high frequencies onto lower ones, then take the FFT at the shorter length, and then shift the result back onto the higher frequencies as a post-processing step. If the lower frequencies are exactly zero, the procedure is exact. Care is needed when handling odd vs. even lengths and the DC and Nyquist bins; example below.
```python
import numpy as np
from numpy.fft import fft, ifft

x = np.random.randn(128) + 1j * np.random.randn(128)
xf = fft(x)
# keep high frequencies only (bins around Nyquist at bin 64)
xf[:64 - 16] = 0
xf[64 + 16:] = 0
xhigh = ifft(xf)
xf = fft(xhigh)

# take FFT at the lower length, compensate for subsampling
xf_s = fft(xhigh[::2]) * 2                    # xf short
xf_sc = np.zeros(len(x), dtype='complex128')  # xf short, corrected
xf_sc[64 - 16:64] = xf_s[-16:]
xf_sc[64:64 + 16] = xf_s[:16]

# confirm they match
assert np.allclose(xf, xf_sc)
```
• Thanks, I did not know about the partial FFT yet. – Chan, May 2, 2022 at 19:56
|
|
## About this Book
This LNCS volume is part of the FoLLI book series and contains the papers presented at the 6th International Workshop on Logic, Rationality and Interaction (LORI-VI), held in September 2017 in Sapporo, Japan.
The focus of the workshop is on the following topics: Agency, Argumentation and Agreement, Belief Revision and Belief Merging, Belief Representation, Cooperation, Decision Making and Planning, Natural Language, Philosophy and Philosophical Logic, and Strategic Reasoning.
## Table of Contents
### A Logical Framework for Graded Predicates
In this position paper we present a logical framework for modelling reasoning with graded predicates. We distinguish several types of graded predicates and discuss their ubiquity in rational interaction and the logical challenges they pose. We present mathematical fuzzy logic as a set of logical tools that can be used to model reasoning with graded predicates, and discuss a philosophical account of vagueness that makes use of these tools. This approach is then generalized to other kinds of graded predicates. Finally, we propose a general research program towards a logic-based account of reasoning with graded predicates.
Petr Cintula, Carles Noguera, Nicholas J. J. Smith
### Evidence Logics with Relational Evidence
We introduce a family of logics for reasoning about relational evidence: evidence that involves an ordering of states in terms of their relative plausibility. We provide sound and complete axiomatizations for the logics. We also present several evidential actions and prove soundness and completeness for the associated dynamic logics.
Alexandru Baltag, Andrés Occhipinti
### Rational Coordination with no Communication or Conventions
We study pure coordination games where in every outcome, all players have identical payoffs, ‘win’ or ‘lose’. We identify and discuss a range of ‘purely rational principles’ guiding the reasoning of rational players in such games and analyse which classes of coordination games can be solved by such players with no preplay communication or conventions. We observe that it is highly nontrivial to delineate a boundary between purely rational principles and other decision methods, such as conventions, for solving such coordination games.
Valentin Goranko, Antti Kuusisto, Raine Rönnholm
### Towards a Logic of Tweeting
In this paper we study the logical principles of a common type of network communication events that haven’t been studied from a logical perspective before, namely network announcements, or tweeting, i.e., simultaneously sending a message to all your friends in a social network. In particular, we develop and study a minimal modal logic for reasoning about propositional network announcements. The logical formalisation helps elucidate core logical principles of network announcements, as well as a number of assumptions that must be made in such reasoning. The main results are sound and complete axiomatisations.
Zuojun Xiong, Thomas Ågotnes, Jeremy Seligman, Rui Zhu
### Multi-Path vs. Single-Path Replies to Skepticism
In order to reply to the contemporary skeptic’s argument for the conclusion that we don’t have any empirical knowledge about the external world, several authors have suggested different fallibilist theories of knowledge that reject the epistemic closure principle. Holliday [8], however, shows that almost all of them suffer from either the problem of containment or the problem of vacuous knowledge. Furthermore, Holliday [9] suggests that the fallibilist should allow a proposition to have multiple sets of relevant alternatives, each of which is sufficient while none is necessary, if all its members are eliminated, for knowing that proposition. Not completely satisfied with Holliday’s multi-path reply to the skeptic, the author suggests a new single-path relevant alternative theory of knowledge and argues that it can avoid both the problem of containment and the problem of vacuous knowledge while rejecting skepticism.
Wen-fang Wang
### An Extended First-Order Belnap-Dunn Logic with Classical Negation
In this paper, we investigate an extended first-order Belnap-Dunn logic with classical negation. We introduce a Gentzen-type sequent calculus FBD+ for this logic and prove theorems for syntactically and semantically embedding FBD+ into a Gentzen-type sequent calculus for first-order classical logic. Moreover, we show the cut-elimination theorem for FBD+ and prove the completeness theorems with respect to both valuation and many-valued semantics for FBD+.
Norihiro Kamide, Hitoshi Omori
### A Characterization Theorem for Trackable Updates
The information available to some agents can be represented with several mathematical models, depending on one’s purpose. These models differ not only in their level of precision, but also in how they evolve when the agents receive new data. The notion of tracking was introduced to describe the matching of information dynamics, or ‘updates’, on different structures. We expand on the topic of tracking, focusing on the example of plausibility and evidence models, two central structures in the literature on formal epistemology. Our main result is a characterization of the trackable updates of a certain class; that is, we give the exact condition for an update on evidence models to be trackable by an update on plausibility models. For the positive cases we offer a procedure to compute the other update, while for the negative cases we give a recipe to construct a counterexample to tracking. To our knowledge, this is the first result of this kind in the literature.
Giovanni Cinà
### Convergence, Continuity and Recurrence in Dynamic Epistemic Logic
The paper analyzes dynamic epistemic logic from a topological perspective. The main contribution is a framework in which dynamic epistemic logic satisfies the requirements for being a topological dynamical system, thus interfacing discrete dynamic logics with continuous mappings of dynamical systems. The setting is based on a notion of logical convergence, demonstrably equivalent to convergence in the Stone topology. A flexible, parametrized family of metrics inducing the latter is presented and used as an analytical aid. We show that maps induced by action model transformations are continuous with respect to the Stone topology, and we present results on the recurrent behavior of said maps.
Dominik Klein, Rasmus K. Rendsvig
### Dynamic Logic of Power and Immunity
We present a dynamic logic for modelling legal competences, and in particular for the Hohfeldian categories of power and immunity. We argue that this logic improves on existing models by explicitly capturing the norm-changing character of legal competences, while at the same time providing a sophisticated reduction of the latter to static normative positions. The logic is shown to be completely axiomatizable; an analysis of its resulting dynamic normative positions is provided; and it is finally applied to a concrete case in German contract law to illustrate how the logic can distinguish legal ability and legal permissibility.
Huimin Dong, Olivier Roy
### A Propositional Dynamic Logic for Instantial Neighborhood Models
We propose a new perspective on logics of computation by combining instantial neighborhood logic INL with bisimulation safe operations adapted from PDL and dynamic game logic. INL is a recently proposed modal logic, based on a richer extension of neighborhood semantics which permits both universal and existential quantification over individual neighborhoods. We show that a number of game constructors from game logic can be adapted to this setting to ensure invariance for instantial neighborhood bisimulations, which give the appropriate bisimulation concept for INL. We also prove that our extended logic IPDL is a conservative extension of dual-free game logic, and its semantics generalizes the monotone neighborhood semantics of game logic. Finally, we provide a sound and complete system of axioms for IPDL, and establish its finite model property and decidability.
Johan van Benthem, Nick Bezhanishvili, Sebastian Enqvist
### Contradictory Information as a Basis for Rational Belief
As agents faced with fallible information, we frequently find ourselves in situations where we are forced to base our beliefs on evidence which is in some way or another contradictory. We nevertheless want these beliefs to be rational. This paper presents a simple probabilistic model of what it means for a belief based on a contradictory body of evidence to be rational. In this approach, we model contradictions in the evidence available to us as resulting from random noise, and we model our task as rational agents as reconstructing the most likely states of affairs given the evidence available to us. Our main result consists in providing several equivalent descriptions of the non-reflexive and non-monotonic consequence relation which formalizes the notion that it is reasonable to accept that a proposition is true given good evidence supporting some set of propositions.
### Stability in Binary Opinion Diffusion
The paper studies the stabilization of the process of diffusion of binary opinions on networks. It first shows how such dynamics can be modeled and studied via techniques from binary aggregation, which directly relate to neighborhood frames. It then characterizes stabilization in terms of such neighborhood structures, and shows how the monotone $\mu$-calculus can express relevant properties of them. Finally, it illustrates the scope of these results by applying them to specific diffusion models.
Zoé Christoff, Davide Grossi
### Quotient Dynamics: The Logic of Abstraction
We propose a Logic of Abstraction, meant to formalize the act of “abstracting away” the irrelevant features of a model. We give complete axiomatizations for a number of variants of this formalism, and explore their expressivity. As a special case, we consider the “logics of filtration”.
Alexandru Baltag, Nick Bezhanishvili, Julia Ilin, Aybüke Özgün
### The Dynamics of Group Polarization
Exchange of arguments in a discussion often makes individuals more radical about their initial opinion. This phenomenon is known as Group-induced Attitude Polarization. A byproduct of it is bipolarization, where the distance between the attitudes of two groups of individuals increases after the discussion. This paper is a first attempt to analyse the building blocks of information exchange and information update that induce polarization. I use Argumentation Frameworks as a tool for encoding the information of agents in a debate relative to a given issue a. I then adapt a specific measure of the degree of acceptability of an opinion (Matt and Toni 2008). Changes in the degree of acceptability of a, prior and posterior to information exchange, serve here as an indicator of polarization. I finally show that the way agents transmit and update information has a decisive impact on polarization and bipolarization.
Carlo Proietti
### Doing Without Nature
We show that every indeterministic n-agent choice model $M^i$ can be transformed into a deterministic n-agent choice model $M^d$, such that $M^i$ is a bounded morphic image of $M^d$. This generalizes an earlier result from Van Benthem and Pacuit [16] about finite two-player choice models. It further strengthens the link between STIT logic and game theory, because deterministic choice models correspond in a straightforward way to normal game forms, and choice models are generally used to interpret STIT logic.
Frederik Van De Putte, Allard Tamminga, Hein Duijf
### Axiomatizing Epistemic Logic of Friendship via Tree Sequent Calculus
This paper positively solves the open problem of whether it is possible to provide a Hilbert system for the Epistemic Logic of Friendship (EFL) of Seligman, Girard and Liu. To find a Hilbert system, we first introduce a sound, complete and cut-free tree (or nested) sequent calculus for EFL, which is an integrated combination of Seligman’s sequent calculus for basic hybrid logic and a tree sequent calculus for modal logic. Then we translate a tree sequent into an ordinary formula to specify a Hilbert system for EFL, and finally show that our Hilbert system is sound and complete with respect to an intended two-dimensional semantics.
Katsuhiko Sano
### The Dynamic Logic of Stating and Asking: A Study of Inquisitive Dynamic Modalities
Inquisitive dynamic epistemic logic (IDEL) extends public announcement logic incorporating ideas from inquisitive semantics. In IDEL, the standard public announcement action can be extended to a more general public utterance action, which may involve a statement or a question. While uttering a statement has the effect of a standard announcement, uttering a question typically leads to new issues being raised. In this paper, we investigate the logic of this general public utterance action. We find striking commonalities, and some differences, with public announcement logic. We show that dynamic modalities admit a set of reduction axioms, which allow us to turn any formula of IDEL into an equivalent formula of static inquisitive epistemic logic. This leads us to establish several complete axiomatizations of IDEL, corresponding to known axiomatizations of public announcement logic.
Ivano Ciardelli
### The Stubborn Non-probabilist—‘Negation Incoherence’ and a New Way to Block the Dutch Book Argument
We rigorously specify the class of nonprobabilistic agents which are, we argue, immune to the classical Dutch Book argument. We also discuss the notion of expected value used in the argument as well as sketch future research connecting our results to those concerning incoherence measures.
Leszek Wroński, Michał Tomasz Godziszewski
### Conjunction and Disjunction in Infectious Logics
In this paper we discuss the extent to which conjunction and disjunction can be rightfully regarded as such, in the context of infectious logics. Infectious logics are peculiar many-valued logics whose underlying algebra has an absorbing or infectious element, which is assigned to a compound formula whenever it is assigned to one of its components. To discuss these matters, we review the philosophical motivations for infectious logics due to Bochvar, Halldén, Fitting, Ferguson and Beall, noticing that none of them discusses our main question. This is why we finally turn to the analysis of the truth-conditions for conjunction and disjunction in infectious logics, employing the framework of plurivalent logics, as discussed by Priest. In doing so, we arrive at the interesting conclusion that —in the context of infectious logics— conjunction is conjunction, whereas disjunction is not disjunction.
Hitoshi Omori, Damian Szmuc
### On the Concept of a Notational Variant
In the study of modal and nonclassical logics, translations have frequently been employed as a way of measuring the inferential capabilities of a logic. It is sometimes claimed that two logics are “notational variants” if they are translationally equivalent. However, we will show that this cannot be quite right, since first-order logic and propositional logic are translationally equivalent. Others have claimed that for two logics to be notational variants, they must at least be compositionally intertranslatable. The definition of compositionality these accounts use, however, is too strong, as the standard translation from modal logic to first-order logic is not compositional in this sense. In light of this, we will explore a weaker version of this notion that we will call schematicity and show that there is no schematic translation either from first-order logic to propositional logic or from intuitionistic logic to classical logic.
Alexander W. Kocurek
### Conditional Doxastic Logic with Oughts and Concurrent Upgrades
In this paper, we model the behavior of an epistemic agent that faces a deliberation against a background of oughts, beliefs and information. We do this by introducing a dynamic epistemic logic where ought operators are defined and release of information makes beliefs and oughts co-vary. The static part of the logic extends single-agent Conditional Doxastic Logic by combining dyadic operators for conditional beliefs and oughts that are interpreted over two distinct preorders. The dynamic part of the logic introduces concurrent upgrade operators, which are interpreted on operations that change the two preorders in the same way, thus generating the covariation of beliefs and oughts. The effect of the covariation is that, after receiving new information, the agent will change both her beliefs and her oughts accordingly, and in deliberating, she will pick up the best states among those she takes to be the most plausible.
Roberto Ciuni
### On Subtler Belief Revision Policies
This paper proposes three subtle revision policies that are not propositionally successful (after a single application the agent might not believe the given propositional formula), but nevertheless are not propositionally idempotent (further applications might affect the agent’s epistemic state). It also compares them with two well-known revision policies, arguing that the subtle ones might provide a more faithful representation of humans’ real-life revision processes.
### Topo-Logic as a Dynamic-Epistemic Logic
We extend the ‘topologic’ framework [13] with dynamic modalities for ‘topological public announcements’ in the style of Bjorndahl [5]. We give a complete axiomatization for this “Dynamic Topo-Logic”, which is in a sense simpler than the standard axioms of topologic. Our completeness proof is also more direct (making use of a standard canonical model construction). Moreover, we study the relations between this extension and other known logical formalisms, showing in particular that it is co-expressive with the simpler (and older) logic of interior and global modality [1, 4, 10, 14]. This immediately provides an easy decidability proof (both for topologic and for our extension).
Alexandru Baltag, Aybüke Özgün, Ana Lucia Vargas Sandoval
### Strategic Knowledge of the Past in Quantum Cryptography
We propose an epistemic strategy logic with future and past time operators, called $\text{SLKP}$, for Strategy Logic with Knowledge of the Past. With $\text{SLKP}$ we can model mutually observed moves/actions in strategic contexts. In a semantic game, agents may completely or partially observe other agents’ moves, their moves may depend on their knowledge of other players’ strategies, and their knowledge may depend on the history of their own or others’ moves. The logic $\text{SLKP}$ also allows us to describe temporal properties involving past, future, and composed tenses such as the future perfect or counterfactual assertions. We illustrate $\text{SLKP}$ by formalising the quantum cryptography protocol BB84, with the purpose of initiating an integrated epistemic and strategic treatment of agent interactions in quantum systems.
Christophe Chareton, Hans van Ditmarsch
### Enumerative Induction and Semi-uniform Convergence to the Truth
I propose a new definition of identification in the limit, also called convergence to the truth, as a new success criterion that is meant to complement, but not replace, the classic definition due to Putnam (1963) and Gold (1967). The new definition is designed to explain how it is possible to have successful learning in a kind of scenario that the classic account ignores—the kind of scenario in which the entire infinite data stream to be presented incrementally to the learner is not presupposed to completely determine the correct learning target. For example, suppose that a scientist is interested in whether all ravens are black, and that she will never observe a counterexample in her entire life. This still leaves open whether all ravens (in the universe) are black. From a purely mathematical point of view, the proposed definition of convergence to the truth employs a convergence concept that generalizes net convergence and sits in between pointwise convergence and uniform convergence. Two results are proved to suggest that the proposed definition provides a success criterion that is by no means weak: (i) Between the proposed identification in the limit and the classic one, neither implies the other. (ii) If a learning method identifies the correct target in the limit in the proposed sense, any U-shaped learning involved therein has to be essentially redundant. I conclude that we should have (at least) two success criteria that correspond to two senses of identification in the limit: the classic one and the one proposed here. They are complementary: meeting any one of the two is good; meeting both at the same time, if possible, is even better.
Hanti Lin
### How to Make Friends: A Logical Approach to Social Group Creation
This paper studies the logical features of social group creation. We focus on the mechanisms which indicate when agents can form a team based on the correspondence in their set of features (behavior, opinions, etc.). Our basic approach uses a semi-metric on the set of agents, which is used to construct a network topology. Then it is extended with epistemic features to represent the agents’ epistemic states, allowing us to explore group-creation alternatives where what matters is not only the agent’s differences but also what they know about them. We use tools of dynamic epistemic logic to study the properties of different strategies to network formations.
### Examining Network Effects in an Argumentative Agent-Based Model of Scientific Inquiry
In this paper we present an agent-based model (ABM) of scientific inquiry aimed at investigating how different social networks impact the efficiency of scientists in acquiring knowledge. The model is an improved variant of the ABM introduced in [3], which is based on abstract argumentation frameworks. The current model employs a more refined notion of social networks and a more realistic representation of knowledge acquisition than the previous variant. Moreover, it includes two criteria of success: a monist and a pluralist one, reflecting different desiderata of scientific inquiry. Our findings suggest that, given a reasonable ratio between research time and time spent on communication, increasing the degree of connectedness of the social network tends to improve the efficiency of scientists.
AnneMarie Borg, Daniel Frey, Dunja Šešelja, Christian Straßer
### Substructural Logics for Pooling Information
This paper puts forward a generalization of the account of pooling information – offered by standard epistemic logic – based on intersection of sets of possible worlds. Our account is based on information models for substructural logics and pooling is represented by fusion of information states. This approach yields a representation of pooling related to structured communication within groups of agents. It is shown that the generalized account avoids some problematic features of the intersection-based approach. Our main technical result is a sound and complete axiomatization of a substructural epistemic logic with an operator expressing pooling.
Vít Punčochář, Igor Sedlár
### Logical Argumentation Principles, Sequents, and Nondeterministic Matrices
The concept of “argumentative consequence” is introduced, involving only the attack relations in Dung-style abstract argumentation frames. Collections of attack principles of different strength, referring to the logical structure of claims of arguments, lead to new characterizations of classical and nonclassical consequence relations. In this manner systematic relations between structural constraints on abstract argumentation frames, sequent rules, and nondeterministic matrix semantics for corresponding calculi emerge.
Esther Anna Corsi, Christian G. Fermüller
### Non-triviality Done Proof-Theoretically
It is well known that naive theories of truth based on the three-valued schemes K3 and LP are non-trivial.
Rohan French, Shawn Standefer
### Sette’s Logics, Revisited
One of the simple approaches to paraconsistent logic is in terms of three-valued logics. Assuming the standard behavior with respect to the “classical” values, there are only two possibilities for paraconsistent negation, namely the negation of the Logic of Paradox and the negation of Sette’s logic P$^1$. From a philosophical perspective, the paraconsistent negation of P$^1$ is less discussed due to the lack of an intuitive reading of the third value. Based on this, the aim of this paper is to fill the gap by presenting a semantics for P$^1$ à la Jaśkowski which sheds some light on the intuitive understanding of Sette’s logic. A variant of P$^1$ known as I$^1$ will also be discussed.
Hitoshi Omori
### Multi-agent Belief Revision Using Multisets
Revising a belief set K with a proposition a results in a theory that entails a. We consider the case of a multiset of beliefs, representing the beliefs of multiple agents, and define its revision with a multiset of desired beliefs the group of agents should have. We give graph theoretic semantics to this revision operation and we postulate two classes of distance-based revision operators. Further, we show that this multiset revision operation can express the merging of the beliefs of multiple agents.
Konstantinos Georgatos
### Boosting Distance-Based Revision Using SAT Encodings
Belief revision has been studied for more than 30 years, and the theoretical properties of belief revision operators are now well known. By contrast, there are almost no practical applications of these operators. One of the reasons is the computational complexity of the corresponding inference problem, which is typically NP-hard and coNP-hard. In particular, existing implementations of belief revision operators can solve toy instances, but are still unable to cope with real-size problem instances. However, the improvements achieved by SAT solvers over the past few years have been very impressive, and they make it possible to tackle instances of inference problems located beyond NP. In this paper we describe and evaluate SAT encodings for a large family of distance-based belief revision operators. The results obtained pave the way for the practical use of belief revision operators in large-scale applications.
Sébastien Konieczny, Jean-Marie Lagniez, Pierre Marquis
### Counterfactuals in Nelson Logic
We motivate and develop an extension of Nelson’s constructive logic N3 that adds a counterfactual conditional to the existing setup. After developing the semantics, we will outline how our account will be able to give a nice analysis of natural language counterfactuals. In particular, the account does justice to the intuitions and arguments that have led Alan Hájek to claim that most conditionals are false, but assertable, without actually forcing us to endorse that rather uncomfortable claim.
Andreas Kapsner, Hitoshi Omori
### A Dynamic Approach to Temporal Normative Logic
State commands refer to states, not actions. They have a temporal dimension, explicitly or implicitly. They indirectly change what we are permitted, forbidden or obligated to do. This paper presents $\mathsf{DTNL}$, a deontic logic meant to handle state commands, based on the branching-time temporal logic $\mathsf{PCTL}^*$. The models of $\mathsf{DTNL}$ are trees with bad states, which are identified by a propositional constant $\mathfrak{b}$ introduced in the language. To model state commands, a dynamic operator that adds states to the extension of $\mathfrak{b}$ is introduced.
Fengkui Ju, Gianluca Grilletti
### Labelled Sequent Calculus for Inquisitive Logic
A contraction-free and cut-free labelled sequent calculus $\mathsf{GInqL}$ for inquisitive logic is established. Labels are defined by a set-theoretic syntax. The completeness of $\mathsf{GInqL}$ is shown via the equivalence between the Hilbert-style axiomatic system and the sequent system.
Jinsheng Chen, Minghui Ma
### Testing Minimax for Rational Ignorant Agents
Richard Pettigrew [13, 14] defends the following theses: (1) epistemic disutility can be measured with strictly proper scoring rules (like the Brier score) and (2) at the beginning of their credal lives, rational agents ought to minimize their worst-case epistemic disutility (Minimax). This leads to a Principle of Indifference for ignorant agents. However, Pettigrew offers no argument in favour of Minimax, suggesting that the epistemic conservatism underlying it is a “normative bedrock.” Is there a way to test Minimax? In this paper, we argue that, since Pettigrew’s Minimax is impermissive, an argument against credence permissiveness constitutes an argument in favour of Minimax, and that arguments for credence permissiveness are arguments against Minimax.
Marc-Kevin Daoust, David Montminy
### A Reconstruction of Ex Falso Quodlibet via Quasi-Multiple-Conclusion Natural Deduction
This paper is intended to offer a philosophical analysis of the propositional intuitionistic logic formulated as $\textit{NJ}$. This system has been connected to Prawitz and Dummett’s proof-theoretic semantics and its computational counterpart. The problem, however, is that there has been no successful justification of ex falso quodlibet (EFQ): “From the absurdity ‘$\bot$’, an arbitrary formula follows.” To justify this rule, we propose a novel intuitionistic natural deduction with what we call quasi-multiple conclusion. In our framework, EFQ is no longer an inference deriving everything from ‘$\bot$’, but rather represents a “jump” inference from the absurdity to the other possibility.
Yosuke Fukuda, Ryosuke Igarashi
### A Nonmonotonic Modal Relevant Sequent Calculus
Motivated by semantic inferentialism and logical expressivism proposed by Robert Brandom, in this paper, I submit a nonmonotonic modal relevant sequent calculus equipped with special operators, □ and R. The base level of this calculus consists of two different types of atomic axioms: material and relevant. The material base contains, along with all the flat atomic sequents (e.g., Γ0, p |~0 p), some non-flat, defeasible atomic sequents (e.g., Γ0, p |~0 q); whereas the relevant base consists of the local region of such a material base that is sensitive to relevance. The rules of the calculus uniquely and conservatively extend these two types of nonmonotonic bases into logically complex material/relevant consequence relations and incoherence properties, while preserving Containment in the material base and Reflexivity in the relevant base. The material extension is supra-intuitionistic, whereas the relevant extension is stronger than a logic slightly weaker than R. The relevant extension also avoids the fallacies of relevance. Although the extended material consequence relation is defeasible and insensitive to relevance, it has local regions of indefeasibility and relevance (the latter of which is marked by the relevant extension). The newly introduced operators, □ and R, codify these local regions within the same extended material consequence relation.
Shuhei Shimamura
### A Formalization of the Greater Fools Theory with Dynamic Epistemic Logic
The greater fools explanation of financial bubbles says that traders are willing to pay more for an asset than they deem it worth, because they anticipate they might be able to sell it to someone else for an even higher price. As agents’ beliefs about other agents’ beliefs are at the heart of the greater fools theory, this paper comes to formal terms with the theory by translating the phenomenon into the language and models of dynamic epistemic logic. By presenting a formalization of greater fools reasoning, structural insights are obtained pertaining to the structure of its higher-order content and the role of common knowledge.
Hanna S. van Lee
### On Axiomatization of Epistemic GDL
The Game Description Language (GDL) has been introduced as an official language for specifying games in the AAAI General Game Playing Competition since 2005. It was originally designed as a declarative language for representing rules of arbitrary games with perfect information. More recently, an epistemic extension of GDL, called EGDL, has been proposed for representing and reasoning about imperfect information games. In this paper, we develop an axiomatic system for a variant of EGDL and prove its soundness and completeness with respect to the semantics based on the epistemic state transition model. With a combination of action symbols, temporal modalities and epistemic operators, the completeness proof requires novel combinations of techniques used for completeness of propositional dynamic logic and epistemic temporal logic. We demonstrate how to use the proof theory for inferring game properties from game rules.
Guifei Jiang, Laurent Perrussel, Dongmo Zhang
### Putting More Dynamics in Revision with Memory
We have proposed in previous works [14, 15] a construction that allows one to define operators for iterated revision from classical AGM revision operators. We called these operators revision operators with memory and showed that they have nice logical properties. But these operators can be considered too conservative, since the revision policy of the agent, encoded as a faithful assignment, does not change during her life. In this paper we propose an extension of these operators that aims to add more dynamics to the revision process.
Sébastien Konieczny, Ramón Pino Pérez
### An Empirical Route to Logical ‘Conventionalism’
The laws of classical logic are taken to be logical truths, which in turn are taken to hold objectively. However, we might question our faith in these truths: why are they true? One general approach, proposed by Putnam [8] and more recently Dickson [3] or Maddy [5], is to adopt empiricism about logic. On this view, logical truths are true because they are true of the world alone – this gives logical truths an air of objectivity. Putnam and Dickson both take logical truths to be true in virtue of the world’s structure, given by our best empirical theory, quantum mechanics. This assumes a determinate logical structure of the world given by quantum mechanics. Here, I argue that this assumption is false, and that the world’s logical structure, and hence the related ‘true’ logic, are underdetermined. This leads to what I call empirical conventionalism.
Eugene Chua
### Beating the Gatecrasher Paradox with Judiciary Narratives
A probabilistic model for the narrative approach to reasoning in legal fact-finding is developed and applied to the gatecrasher paradox.
Rafal Urbaniak
### Distributed Knowledge Whether
(Extended Abstract)
As is known, by putting their knowledge together, agents can obtain distributed knowledge. However, by pooling their non-ignorance, agents can only obtain distributed knowledge as to whether something holds, rather than distributed knowledge (of something).
Jie Fan
### A Note on Belief, Question Embedding and Neg-Raising
The epistemic verb to believe does not embed polar questions, unlike the verb to know. After reviewing this phenomenon, I propose an explanation which connects the neg-raising behavior of belief with its embedding patterns (following [14]). I use dynamic epistemic logic to model the presuppositions and the effects associated with belief assertions.
Michael Cohen
### Distributed Knowing Whether
(Extended Abstract)
Standard epistemic logic studies reasoning patterns about ‘knowing that’, where interesting group notions of ‘knowing that’ arise naturally, such as distributed knowledge and common knowledge. In recent research, other notions of knowledge are also studied, such as ‘knowing whether’, ‘knowing how’, and so on. It is natural to ask what the group notions of these non-standard knowledge expressions are. This paper makes an initial attempt in this line, by looking at the notion corresponding to distributed knowledge in the setting of ‘knowing whether’. We introduce the distributed know-whether operator, and give complete axiomatizations of the resulting logics over arbitrary or $$\mathcal{S}5$$ frames, based on the corresponding axiomatizations of ‘knowing whether’.
Xingchi Su
### A Causal Theory of Speech Acts
In speech acts, a speaker utters sentences that might affect the belief state of a hearer. To formulate causal effects in assertive speech acts, we introduce a logical theory that encodes causal relations between speech acts, belief states of agents, and truth values of sentences. We distinguish trustful and untrustful speech acts depending on the truth value of an utterance, and distinguish truthful and untruthful speech acts depending on the belief state of a speaker. Different types of speech acts cause different effects on the belief state of a hearer, which are represented by the set of models of a causal theory. Causal theories of speech acts are also translated into logic programs, which enables one to represent and reason about speech acts in answer set programming.
Chiaki Sakama
### An Axiomatisation for Minimal Social Epistemic Logic
A two-dimensional modal logic, intended for applications in social epistemic logic, with one dimension for agents and the other for epistemic states is given. The language has hybrid logic devices for agents, as proposed in earlier papers by Seligman, Liu and Girard. We give an axiomatisation and a proof of its completeness.
Liang Zhen
### Relief Maximization and Rationality
This paper introduces the concept of relief maximization in decisions and games and shows how it can explain experimental behavior, such as asymmetric dominance and decoy effects. Next, two possible evolutionary explanations for the survival of relief-based behavior are sketched.
Paolo Galeazzi, Zoi Terzopoulou
### Reason to Believe
In this paper we study the relation between nonmonotonic reasoning and belief revision. Our main conceptual contribution is to suggest that nonmonotonic reasoning guides but does not determine an agent’s belief revision. To be adopted as beliefs, defeasible conclusions should remain stable in the face of certain bodies of information. This proposal is formalized in what we call a two-tier semantics for nonmonotonic reasoning and belief revision. The main technical result is a sound and complete axiomatization for this semantics.
Chenwei Shi, Olivier Roy
### Justification Logic with Approximate Conditional Probabilities
The importance of logics with approximate conditional probabilities is reflected by the fact that they can model non-monotonic reasoning. We introduce a new logic of this kind, $$\mathsf{CPJ}$$, which extends justification logic and supports non-monotonic reasoning with and about evidence.
Zoran Ognjanović, Nenad Savić, Thomas Studer
### From Concepts to Predicates Within Constructivist Epistemology
In this research constructivist epistemology provides a ground for conceptual analysis of concept construction, conception production, and concept learning processes. Relying on a constructivist model of knowing, this research will make an epistemological and logical linkage between concepts and predicates.
|
|
3 Replies Latest reply on Jan 19, 2006 9:09 AM by jaikiran pai
# Integrating JProfiler with JBoss-3.2.3
Hi,
I am trying to integrate JProfiler 4.1.2 with JBoss-3.2.3. I modified the run.bat of JBoss as suggested in the JProfiler instructions, and I specified the "nowait" option too. Following is what I added to run.bat of JBoss.
`"-Xrunjprofiler:port=8849,nowait,id=111,config=C:\Documents and Settings\jai\.jprofiler4\config.xml" "-Xbootclasspath/a:E:\My Downloads\Utilities\JProfiler\JProfiler\jprofiler4\bin\agent.jar"`
However, when I execute run.bat, JBoss just hangs after displaying the following message, even though it says it is NOT waiting for the frontend to connect.
```===============================================================================
.
JBoss Bootstrap Environment
.
JBOSS_HOME: D:\jboss\bin\\..
.
JAVA: c:\j2sdk1.4.2_04\bin\java
.
.
CLASSPATH: ;c:\j2sdk1.4.2_04\lib\tools.jar;D:\jboss\bin\\run.jar;
.
===============================================================================
.
Warning: classic VM not supported; client VM will be used
JProfiler> Protocol version 21
JProfiler> Using JVMPI
JProfiler> 32-bit library
JProfiler> Don't wait for frontend to connect.
JProfiler> Using config file C:\Documents and Settings\jai\.jprofiler4\config.xml (id: 110)
JProfiler> Listening on port: 8849.
JProfiler> Native library initialized
```
Does anyone have a solution for this? Or does anyone know how to configure JBoss-3.2.3 with JProfiler 4.1.2?
Thank you.
• ###### 1. Re: Integrating JProfiler with JBoss-3.2.3
I think one of the problems could be that the debug option is also enabled:
set JAVA_OPTS=-classic -Xdebug -Xnoagent ....
I don't know why, but if both the debug and the profiler options are enabled you could see a similar situation.
Try removing the debug option and run again.
• ###### 3. Re: Integrating JProfiler with JBoss-3.2.3
Ya, you are right. Removing the -Xdebug option worked for me. Thank you.
-Jaikiran
|
|
# Auto-Heal tests with recheck-web
This post is going to talk about auto-healing, and the first thing that comes to your mind is probably Artificial Intelligence (AI). I do agree with that, and it has become a cliche these days to make everything intelligent. The problem with AI is that you need to pre-process data, build features, pick an algorithm, and start training the system, which may take quite some time before it improves enough to give the desired outcome. Long story short, we don't have enough time for that in some cases. I am not against AI, and I evangelize about it all the time, but still, it can't be applied to everything, as one shoe doesn't fit all.
## Auto-Healing
Auto-healing, in terms of computer science, refers to a computer program that can correct itself whenever it encounters a blocker or a new change that has been incorporated into the system. In a normal scenario, the script would fail to proceed due to a lack of knowledge. Auto-healing makes sure that the system keeps running and does not break during its course.
The kind of auto-healing we are going to see here is specific to UI test automation, where most automation engineers would agree that most of their time is spent fixing code due to constant changes in the front-end. This is prevalent now because element attributes in HTML keep changing dynamically with the recent front-end frameworks in use. To overcome this problem, without further ado, let me introduce a framework called recheck-web, which is from a company called retest.
## recheck-web
recheck-web works based on ‘Difference Testing’: a Golden Master is created during the first run and is considered the reference. The Golden Master is a combination of a screenshot and an XML file containing all the attributes of the elements on a web page. The following is a brief introduction to recheck-web from the company’s official GitHub page.
You can explicitly or implicitly create Golden Masters (essentially a copy of the rendered website) and semantically compare against these. Irrelevant changes are easy to ignore and the Golden Masters are effortless to update. In case of critical changes that would otherwise break your tests, recheck-web can now peek into the Golden Master, find the element there, and (based on additional attributes) still identify the changed element on the current website.
In the above image, the overall concept has been simplified into a diagram. It is evident that the UI is always subject to change; with that as the problem, recheck-web does the following:
1. A usual Selenium test will break and throw NoSuchElementException if a locator has changed in the UI and the test script has not been updated.
2. A Selenium test wrapped with recheck-web has a Golden Master in which the attributes of the elements are captured for the entire page (if you pass a driver; if a web element is passed, only the attributes of that particular element are captured).
3. If a test script is run against the new UI while still using the old locators, it does not break; the test executes by referring to the Golden Master and comparing the present state of the locator with its previous history, a.k.a. the Golden Master.
4. The test successfully executes all the functional validations but finishes with a warning, and in the result it shows what was expected and what has changed.
5. This applies not only to locator changes; CSS and style changes are validated as well, which benefits visual validation.
6. The last step is the decision-making step, where the user can accept all the changes or ignore them to update the Golden Master.
It might seem to be complex to grasp but I will debrief it in a while with a working example.
## Setup
1. You might need to ensure that Java and Maven are installed in your machine/server (Java 8 is the minimum version to be used)
2. Create a maven project and add the following dependencies to your pom.xml
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
<version>5.6.2</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>de.retest</groupId>
<artifactId>recheck-web</artifactId>
<version>1.9.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>3.141.59</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
<scope>test</scope>
</dependency>
The next thing you will need is a URL or a demo application against which you can test the script. I have created a sample login application, which we will consider for this example.
## Selenium Test
We can write a simple Selenium test for the login app. Also, note that you should specify whichever driver you are using, and the corresponding driver binary should be available for the run. In this case, I am using the Chrome driver.
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import java.nio.file.Paths;
public class SeleniumTest {

    public WebDriver driver;

    @BeforeEach
    public void setupDriver() {
        System.setProperty("webdriver.chrome.driver", "src/test/resources/drivers/chromedriver");
        driver = new ChromeDriver();
    }

    @Test
    public void breakableTest() throws Exception {
        String url = "http://localhost:3000/"; // placeholder: URL of the sample login app
        driver.get(url);
        driver.findElement(By.id("username-field")).sendKeys("admin");    // placeholder credentials
        driver.findElement(By.id("password-field")).sendKeys("password");
        // hypothetical ids for the login button and the success-message element
        driver.findElement(By.id("login-button")).click();
        String result = driver.findElement(By.id("message")).getText();
        System.out.println(result);
    }

    @AfterEach
    public void windUp() {
        driver.quit();
    }
}
You could see from the above scenario that we need to explicitly make an assertion to check whether the successful login message is displayed. Run this test and you will see that it passes. But what if some UI changes are made in a new release of your app? Consider the example below, where you have the ID attributes ‘username-field’ and ‘password-field’; they are changed to ‘username’ and ‘password’ in the new release of the app.
<form id="login-form">
  <input type="text" id="username" />      <!-- was id="username-field" (input attributes other than id are reconstructed) -->
  <input type="password" id="password" />  <!-- was id="password-field" -->
</form>
But you can see that the Selenium test script still carries the IDs ‘username-field’ and ‘password-field’ rather than ‘username’ and ‘password’. This test would fail with NoSuchElementException.
org.openqa.selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"#username"}
## Unbreakable Selenium Test
recheck-web truly lives up to its name ‘Unbreakable’: it creates a Golden Master against which it compares the current state. To create a recheck-web test, simply instantiate RecheckDriver and pass the driver argument to it, just as you would for a RemoteWebDriver invocation. It is that simple, and you don’t need to add anything else. Of course, there are methods like ‘startTest’, ‘cap’, ‘check’, and ‘capTest’ which can be called using the RecheckImplementation class, but you don’t need to worry about those, as all of them are encompassed in the RecheckDriver class.
import de.retest.web.selenium.RecheckDriver;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.chrome.ChromeDriver;
import java.nio.file.Paths;
public class UnbreakableSeleniumTest {

    RecheckDriver driver;

    @BeforeEach
    public void setupRecheck() {
        System.setProperty("webdriver.chrome.driver", "src/test/resources/drivers/chromedriver");
        driver = new RecheckDriver(new ChromeDriver());
    }

    @Test
    public void unBreakableTest() throws Exception {
        String url = "http://localhost:3000/"; // placeholder: URL of the sample login app
        driver.get(url);
        // deliberately uses the old ids; recheck-web heals them via the Golden Master
        driver.findElement(By.id("username-field")).sendKeys("admin");
        driver.findElement(By.id("password-field")).sendKeys("password");
    }

    @AfterEach
    public void windUp() {
        driver.quit();
    }
}
Now if you run the test, it will fail the first time, as you don't have a Golden Master created yet.
Run the test again and you will see that the test succeeds, but with some errors, as shown in the picture below.
This assures us that our present state is compared against the Golden Master. The errors that you see are some metadata and attributes compared against the Golden Master; they can be ignored by listing them in the recheck.ignore file generated in the .retest folder. Add attribute=grid-template-rows and metadata=window.height to the recheck.ignore file.
attribute=grid-template-rows
metadata=window.height
Re-run the test and it will succeed without any errors.
If you run the test against this new change, the execution does not break. The test refers to the Golden Master available from the first run, compares the similar attributes that were present earlier, makes a one-to-one connection, and thereby finds the new element without throwing any exception. The test will, of course, report the differences (i.e., the locator change) found during execution.
Now you can see the difference: the expected ID attribute was ‘username-field’ but the actual one was ‘username’. The same applies to the ID attribute ‘password’. It is not a rule that the Golden Master always refers to the outcome of the first run; we need to update the Golden Master to the latest stable UI changes by liaising with the development team. So we can clearly see that there are often changes happening in and out of the script. This might look like déjà vu to you, as recheck-web has a CLI called recheck.cli whose operations are similar to those of Git. You cannot manually update the recheck.ignore file every time to filter the result, or commit and accept the new UI changes to the Golden Master by hand; this is where recheck.cli helps speed up the process.
## recheck.cli
In order to install and set up the CLI, refer to this documentation. To ensure that recheck is working, restart your CMD/terminal after the installation and type recheck. It should return the list of available options.
$ recheck
Usage: recheck [--help] [--version] [COMMAND]
Command-line interface for recheck.
      --help      Display this help message.
      --version   Display version info.
Commands:
  account     Allows to log into and out of your account and show your API key.
  help        Displays help information about the specified command
  commit      Accept specified differences of given test report.
  completion  Generate and display an auto completion script.
  diff        Compare two Golden Masters.
  ignore      Ignore specified differences of given test report.
  show        Display differences of given test report.
  version     Display version info.

Every recheck test will generate a report in target/test-classes/retest/recheck/YOURTESTNAME.report. You can run this report with recheck.cli by navigating to the report folder and issuing the following command:

$ recheck show YOURTESTNAME.report
To accept all the recent changes in your test related to the ID attributes, you can commit them all at once by issuing the command below:
\$ recheck commit --all YOURTESTNAME.report
If you go back and check your retest.xml, the attributes will have changed from ‘username-field’ to ‘username’ and from ‘password-field’ to ‘password’. Now that you have updated the Golden Master, you have to update the corresponding test scripts as well; otherwise the test would fail, because we no longer have a history of ‘username-field’ or ‘password-field’. In order to auto-heal, retest has a GUI called review, an insightful dashboard that can automatically make the relevant changes in the Golden Master and in the test script.
Also, recheck-web has an important feature called virtual IDs. It creates a virtual ID for each element, viz. the username, password, and login button. If you want your test to be completely unbreakable, using virtual IDs will make your life worry-free. The retestId has to be imported from the package de.retest.web.selenium.By.
import de.retest.web.selenium.RecheckDriver;
import de.retest.web.selenium.By;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.chrome.ChromeDriver;
import java.nio.file.Paths;
public class UnbreakableSeleniumTest {

    RecheckDriver driver;

    @BeforeEach
    public void setupRecheck() {
        System.setProperty("webdriver.chrome.driver", "src/test/resources/drivers/chromedriver");
        driver = new RecheckDriver(new ChromeDriver());
    }

    @Test
    public void unBreakableTest() throws Exception {
        String url = "http://localhost:3000/"; // placeholder URL of the sample login app
        driver.get(url);
        // virtual IDs (placeholder names here) keep working even when attributes change
        driver.findElement(By.retestId("username")).sendKeys("admin");
        driver.findElement(By.retestId("password")).sendKeys("password");
    }

    @AfterEach
    public void windUp() {
        driver.quit();
    }
}
|
|
# Lorentz transformation via light clocks in parallel direction
In order to derive the Lorentz transformation, one can use the picture of a light clock: a photon bounces back and forth between two mirrors, and this is observed in two different inertial systems. If the relative velocity of the inertial systems is perpendicular to the propagation direction of the photon, deriving the Lorentz transformation is easy and there are hundreds of examples on the web:
http://galileoandeinstein.physics.virginia.edu/more_stuff/flashlets/lightclock.swf
However, I am trying to calculate this scenario when the relative motion between the inertial systems is parallel to the direction of the light propagation:
In this case I simply get for the standing observer:
$T_{Periode} = \frac{2 h}{c}$
and for the moving observer:
$T_{Periode} = \frac{h + v_{rel} t}{c} + \frac{h - 2v_{rel} t}{c} = \frac{2h - v_{rel} t}{c}$
I can't see how this would lead me to the Lorentz transformation.
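For reference, the standard treatment (a sketch, not part of my calculation above) solves the two legs separately, since the forward and return light travel times differ:

$c t_1 = h + v_{rel} t_1 \;\Rightarrow\; t_1 = \frac{h}{c - v_{rel}}$

$c t_2 = h - v_{rel} t_2 \;\Rightarrow\; t_2 = \frac{h}{c + v_{rel}}$

$T_{Periode} = t_1 + t_2 = \frac{2hc}{c^2 - v_{rel}^2} = \frac{2h}{c} \cdot \frac{1}{1 - v_{rel}^2/c^2}$

Together with the length contraction of the mirror separation $h$, this reproduces the factor $\gamma = 1/\sqrt{1 - v_{rel}^2/c^2}$ of the Lorentz transformation.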
|
|
# Tag Info
Many questions are answered in the affirmative or negative relatively easily for manifolds of dimension 5 or higher, whereas analogous questions haven't been answered in lower dimensions. A particularly interesting case is dimension 4; as an example, consider exotic differentiable structures on spheres. Little is known about the number of exotic differentiable structures on $S^4$. The very subject of four-manifolds studies questions such as these!
|
|
The way to try to install it, which will be unsuccessful, is to download the file from http://coates.ma.ic.ac.uk/grdb_polytopes-0.1.spkg, then run sage -i /path/to/grdb_polytopes-0.1.spkg. This will fail with the message
Error: package '/Users/palmieri/Downloads/grdb_polytopes-0.1.spkg' not found
I don't see any easy way to work around this. You can still look at the data: this spkg file is just a compressed tarball, so you can download it and then run tar xf grdb_polytopes-0.1.spkg and then look at the data files in grdb_polytopes-0.1.spkg/src.
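Equivalently, the tarball can be unpacked from Python instead of with the tar command (a sketch; the helper name is my own, and the filename is the one from the answer above):

```python
import tarfile

def unpack_spkg(spkg_path, dest):
    """Unpack an .spkg file: it is just a compressed tarball, so the standard
    tarfile module can extract it; mode 'r:*' autodetects the compression."""
    with tarfile.open(spkg_path, mode="r:*") as tar:
        tar.extractall(path=dest)

# usage, assuming the file from the answer has been downloaded:
# unpack_spkg("grdb_polytopes-0.1.spkg", ".")
```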
|
|
Time Limit : sec, Memory Limit : KB
# Problem E: Manhattan Wiring
There is a rectangular area containing n × m cells. Two cells are marked with "2", and another two with "3". Some cells are occupied by obstacles. You should connect the two "2"s and also the two "3"s with non-intersecting lines. Lines can run only vertically or horizontally connecting centers of cells without obstacles.
Lines cannot run on a cell with an obstacle. At most one line can run on a cell, and a line can pass through a cell at most once. Hence, a line cannot intersect with the other line, nor with itself. Under these constraints, the total length of the two lines should be minimized. The length of a line is defined as the number of cell borders it passes; in particular, a line connecting cells sharing their border has length 1.
Fig. 6(a) shows an example setting. Fig. 6(b) shows two lines satisfying the constraints above with minimum total length 18.
Figure 6: An example setting and its solution
## Input
The input consists of multiple datasets, each in the following format.
n m
row1
...
rown
n is the number of rows which satisfies 2 ≤ n ≤ 9. m is the number of columns which satisfies 2 ≤ m ≤ 9. Each rowi is a sequence of m digits separated by a space. The digits mean the following.
0: Empty
1: Occupied by an obstacle
2: Marked with "2"
3: Marked with "3"
The end of the input is indicated with a line containing two zeros separated by a space.
## Output
For each dataset, one line containing the minimum total length of the two lines should be output. If there is no pair of lines satisfying the requirement, answer "0" instead. No other characters should be contained in the output.
## Sample Input
5 5
0 0 0 0 0
0 0 0 3 0
2 0 2 0 0
1 0 1 1 1
0 0 0 0 3
2 3
2 2 0
0 3 3
6 5
2 0 0 0 0
0 3 0 0 0
0 0 0 0 0
1 1 1 0 0
0 0 0 0 0
0 0 2 3 0
5 9
0 0 0 0 0 0 0 0 0
0 0 0 0 3 0 0 0 0
0 2 0 0 0 0 0 2 0
0 0 0 0 3 0 0 0 0
0 0 0 0 0 0 0 0 0
9 9
3 0 0 0 0 0 0 0 2
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 3
9 9
0 0 0 1 0 0 0 0 0
0 2 0 1 0 0 0 0 3
0 0 0 1 0 0 0 0 2
0 0 0 1 0 0 0 0 3
0 0 0 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
9 9
0 0 0 0 0 0 0 0 0
0 3 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 2 3 2
0 0
## Output for the Sample Input
18
2
17
12
0
52
43
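The multi-dataset input format above can be read with a short sketch like the following (a hypothetical helper, not part of the judge; solving the wiring problem itself is left out):

```python
def read_datasets(lines):
    """Parse Manhattan Wiring grids from input lines until the '0 0' sentinel."""
    datasets = []
    it = iter(lines)
    for header in it:
        n, m = map(int, header.split())  # m is implied by the row contents
        if n == 0 and m == 0:
            break
        # each of the next n lines is a row of m space-separated digits
        datasets.append([[int(x) for x in next(it).split()] for _ in range(n)])
    return datasets

# parsing the second sample dataset followed by the terminator
sample = """2 3
2 2 0
0 3 3
0 0""".splitlines()
grids = read_datasets(sample)
```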
|
|
# Drawing Simple Geometrical Shapes on Python
Drawing simple geometrical shapes on Python from scratch: have you tried it?
Now in this series of tasks, I am going to tackle some interesting image processing concepts from scratch using Python and then compare them with the popular OpenCV framework. Last time I did convolution operations from scratch, RGB-to-grayscale conversion, etc. Now is the time to start drawing simple geometrical shapes on Python, like circles, rectangles, and ellipses, and get a flashback of childhood. I am highly inspired by the book Image Operators: Image Processing in Python by Jason M. Kinser. In fact, I am going to use some simple geometrical concepts to draw these basic shapes using only NumPy and Matplotlib.
Also, I have to mention the awesome book The Joy of x: A Guided Tour of Math by Steven Strogatz. The author really has a great way of describing mathematical terms, and I have learned a lot of mathematical concepts from it. The author also introduced me to the awesome book The Housekeeper and the Professor.
The methods I am including here will be added to the previous ImageProcessing class (which is also given below) that I used for convolution and colorspace changes, so it will be helpful to view that post as well.
## What will I do here?
• Using primary-grade mathematics, I will start drawing simple geometrical shapes in Python and compare them with OpenCV's own methods.
import imageio
import warnings
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
class ImageProcessing:
    def __init__(self):
        self.readmode = {1 : "RGB", 0 : "Grayscale"}

    def read_image(self, location="", mode=1):
        """
        Uses imageio on the back end.
        * location: Directory of image file.
        * mode: Image read mode {1 : RGB, 0 : Grayscale}.
        """
        img = imageio.imread(location)
        if mode == 0:
            img = self.convert_color(img, 0)
        return img

    def show(self, image, figsize=(5, 5)):
        """
        Uses matplotlib.pyplot.
        * image: An image to be shown.
        * figsize: How big an image to show. From plt.figure().
        """
        fig = plt.figure(figsize=figsize)
        plt.imshow(image, cmap='gray')
        plt.show()

    def convert_color(self, img, to=0):
        if to == 0:
            # luminosity-weighted grayscale conversion
            return 0.21 * img[:, :, 0] + 0.72 * img[:, :, 1] + 0.07 * img[:, :, 2]
        else:
            raise ValueError("Color conversion cannot be understood.")

    def convolve(self, image, kernel=None, padding="zero", stride=(1, 1), show=False, bias=0):
        """
        * image: An image to be convolved.
        * kernel: A filter/window of odd shape for convolution. Sobel(3, 3) is used by default.
        * padding: Border operation. Available: "zero", "same", "none".
        * stride: How frequently to do convolution.
        """
        if len(image.shape) > 3:
            raise ValueError("Only 2 and 3 channel images are supported.")
        if kernel is None:
            warnings.warn("No kernel provided, trying to apply Sobel(3, 3).")
            kernel = np.array([[-1, 0, 1],
                               [-1, 0, 1],
                               [-1, 0, 1]])
            kernel += kernel.T
        kshape = kernel.shape
        if kshape[0] % 2 != 1 or kshape[1] % 2 != 1:
            raise ValueError("Please provide an odd-shaped 2d kernel.")
        if type(stride) == int:
            stride = (stride, stride)
        if padding == "zero":
            # surround the image with a one-pixel border of zeros
            shape = image.shape
            zeros_h = np.zeros(shape[1]).reshape(-1, shape[1])
            image = np.vstack((zeros_h, image, zeros_h))
            zeros_v = np.zeros(shape[0] + 2).reshape(shape[0] + 2, -1)
            image = np.hstack((zeros_v, image, zeros_v))
        elif padding == "same":
            # repeat the border rows and columns
            shape = image.shape
            h1 = image[0].reshape(-1, shape[1])
            h2 = image[-1].reshape(-1, shape[1])
            image = np.vstack((h1, image, h2))
            shape = image.shape
            v1 = image[:, 0].reshape(shape[0], -1)
            v2 = image[:, -1].reshape(shape[0], -1)
            image = np.hstack((v1, image, v2))
        shape = image.shape
        rv = 0
        cimg = []
        for r in range(kshape[0], shape[0] + 1, stride[0]):
            cv = 0
            for c in range(kshape[1], shape[1] + 1, stride[1]):
                chunk = image[rv:r, cv:c]
                soma = (np.multiply(chunk, kernel) + bias).sum()
                # clamp the response into the displayable 0-255 range
                cimg.append(int(min(max(soma, 0), 255)))
                cv += stride[1]
            rv += stride[0]
        cimg = np.array(cimg, dtype=np.uint8).reshape(int(rv / stride[0]), int(cv / stride[1]))
        if show:
            self.show(cimg)
        return cimg
ip = ImageProcessing()
img = ip.read_image("sample.jpg", mode=0)  # placeholder path: any image readable by imageio
cv = ip.convolve(img)
ip.show(cv)
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:52: UserWarning: No kernel provided, trying to apply Sobel(3, 3).
# let's read the new image that we are going to use for drawing simple shapes
img = ip.read_image("drawing_base.jpg")  # placeholder path
ip.show(img)
## Circle
Now on to the first shape in drawing simple geometrical shapes on Python.
Everyone knows what a circle is, but only a few care how it originated. Thanks to Euclid and his contribution to modern mathematics. A circle, in simple terms, can be thought of as a shape made of infinitely many points, where the distance between two consecutive points is infinitesimally small and the points are arranged at angles running from 0 to 360 degrees. Computer graphics doesn't care about any of that; what it needs is numbers. So if we zoom into the shapes, we start to see the pixels crystal clear. Here I will be using a simple concept for drawing a circle: the polar form. If I have to write it in steps:
• Read an input image, get a radius for a circle, get a center point, get a border, get a smoothness value and get a color value for it.
• Prepare smoothness * 360 angles for the circle (of course 0 to 360).
• For each angle:
• Convert angle to radian from degree (NumPy geometric functions take radian).
• Find the distance between two points on the circumference by Pythagoras' theorem.
• Find the new point on the circumference and set that point to the circle color if the point lies in the first quadrant.
Let's take an example of the above circle on the 2d plane. The circle's center is at (h, k) and its radius is r. There are two points p1 and p2 on the circumference, and a third point p3 lies on the radius line joining p2 and the center, such that the segment p1p3 is perpendicular to that line. Here we know the lengths of two sides but not of the line p1p3. But when the points p1 and p2 are so near that the distance between them tends to zero, the point p3 coincides with p2, and at that point we can apply Pythagoras' theorem. The figure below shows a zoomed version of that situation.
But what we need are the coordinate values of p1. We can find them by treating c as the origin: then the x coordinate of p1 equals the x coordinate of p3, and we can solve for the x-coordinate of p3.
$$\cos(\theta) = \frac{b}{h} \implies x = b = h\cos(\theta)$$
Similarly,
$$\sin(\theta) = \frac{p}{h} \implies y = p = h\sin(\theta)$$
And in our case, when the circle is not at the center of the plane, the (x, y) coordinates of p1 are offset by (h, k) from the plane's center.
Hence, the coordinate value for p1 will be:
$$x = h + r\cos(\theta), \qquad y = k + r\sin(\theta)$$
And on the image plane, the coordinates start from (0, 0) and there is no negative quadrant, so we ignore all (x, y) values that lie outside the first quadrant. Enough of this theory; let's write it in code.
Everything described above is simply the polar form:
$$x = r\cos(\theta), \quad y = r\sin(\theta), \quad \text{and} \quad r = \sqrt{x^2 + y^2}, \quad \theta = \tan^{-1}\!\left(\frac{y}{x}\right)$$
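The polar-form relations above are easy to verify numerically. Below is a small standalone check; the center and radius values are just examples, not taken from the code later in the post:

```python
import numpy as np

# example center (h, k) and radius r
h, k, r = 5.0, 3.0, 2.0
theta = np.deg2rad(np.linspace(0, 360, 360))  # NumPy trig functions take radians

x = h + r * np.cos(theta)  # x = h + r*cos(theta)
y = k + r * np.sin(theta)  # y = k + r*sin(theta)

# every generated point lies on the circle: its distance to the center is r
dist = np.sqrt((x - h) ** 2 + (y - k) ** 2)
print(np.allclose(dist, r))  # True
```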
# creating a circle
def circle(img=None, center=(0, 0), rad=10, border=4, color=[1], smooth=2):
"""
A method to create a circle on a given image.
img: Expects numpy ndarray of image.
center: center of the circle
border: border of the circle, if -ve, circle is filled
color: color for circle
smooth: how smooth should our circle be?(smooth * 360 angles in 0 to 360)
"""
if img is None:
raise ValueError("Image can not be None. Provide a numpy array instead.")
angles = 360
cvalue = np.array(color)
if img is not None:
shape = img.shape
if len(shape) == 3:
row, col, channels = shape
else:
row, col = shape
channels = 1
angles = np.linspace(0, 360, 360*smooth)
for i in angles:
a = i*np.pi/180
y = center[1]+rad*np.sin(a) # p = h*sin(theta)
x = center[0]+rad*np.cos(a) # b = h*cos(theta)
# since we are working on an image, coordinates start from (0, 0) and -ve values are ignored
if border >= 0:
b = int(np.ceil(border/2))
x1 = np.clip(x-b, 0, shape[0]).astype(np.int32)
y1 = np.clip(y-b, 0, shape[1]).astype(np.int32)
x2 = np.clip(x+b, 0, shape[0]).astype(np.int32)
y2 = np.clip(y+b, 0, shape[1]).astype(np.int32)
img[x1:x2, y1:y2] = cvalue
else:
x = np.clip(x, 0, shape[0])
y = np.clip(y, 0, shape[1])
r, c = int(x), int(y)
if i > 270:
img[center[0]:r, c:center[1]] = cvalue
elif i > 180:
img[r:center[0], c:center[1]] = cvalue
elif i > 90:
img[r:center[0], center[1]:c] = cvalue
elif i > 0:
img[center[0]:r, center[1]:c] = cvalue
return img
#ip.show(img)
fig = plt.figure(figsize=(5,5))
mimg = circle(img, center=(400, 100), border=20, rad=500)
ip.show(mimg)
Let me explain a little bit about the code above.
• Check the inputs and loop for the angles.
• We will try to take as many angles as possible given the smoothness value.
• Take a coordinate value for a point to draw on.
• If the point lies on the first quadrant:
• If the border value is +ve, then color only the pixel itself plus border/2 neighboring pixels in each of the 4 directions.
• Else:
• If the angle is greater than 270 then fill the 4th quadrant with color
• If the angle is greater than 180 then fill the 3rd quadrant with color
• If the angle is greater than 90 then fill the 2nd quadrant with color
• If the angle is greater than 0 then fill 1st quadrant with color

### Compare it with OpenCV's Circle
Before comparing with OpenCV, let's get a clear understanding of the 2d graph plane versus the image plane. The image plane starts from the top left, while the 2d graph plane starts from the center. Hence, in order to compare our circle, we have to change the center value (in this case).
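To see why the swap is needed, here is a tiny standalone check of NumPy's row-major image indexing (the array size is arbitrary):

```python
import numpy as np

img = np.zeros((4, 6), dtype=np.uint8)  # 4 rows (height), 6 columns (width)
img[1, 4] = 255                         # row 1, column 4

# The bright pixel sits at column 4, row 1, i.e. (x=4, y=1) in OpenCV's
# (x, y) convention, but it is indexed as img[y, x] in NumPy. Hence the swap.
print(np.argwhere(img == 255))  # [[1 4]]
```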
And in this case, I am just swapping center values (i.e. (x, y) for OpenCV and (y, x) for ours).
• Draw circle on the image using OpenCV
• Draw a circle on the same image using our method.
• Subtract drawn images
• Show the difference.
The parts common to both images are shown in complete black, and the parts that differ are shown in complete white.
# read image
# draw using opencv
print("OpenCV")
cimg = cv2.circle(img.copy(), (400, 1000), 500, [0, 0, 0], -20)
ip.show(cimg)
# draw using our method (swap center)
print("Ours")
mimg = circle(img, center=(1000, 400), border=-20, rad=500, color=[0, 0, 0])
ip.show(mimg)
# difference
print("Difference")
r = mimg-cimg
r[r!=[0, 0, 0]] = 255
ip.show(r)
# count difference pixels
diff = np.sum(mimg!=cimg)
shape = mimg.shape
# what percentage is different?
diff * 100 / (shape[0] * shape[1])
OpenCV
Ours
Difference
0.2912136361400096
It seems that only 0.29% of the pixel values differed from OpenCV's circle. But the difference varies with the size and position of the circle.
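One caveat about this metric: `np.sum(mimg != cimg)` counts channel-wise mismatches, so on a 3-channel image a single differing pixel can contribute up to 3 to the count. A per-pixel variant looks like this (a sketch with made-up toy arrays, not the images from the post):

```python
import numpy as np

a = np.zeros((2, 2, 3), dtype=np.uint8)
b = a.copy()
b[0, 0] = [255, 255, 255]  # one pixel differs in all 3 channels

channelwise = np.sum(a != b)                 # counts every channel mismatch
pixelwise = np.sum(np.any(a != b, axis=-1))  # counts differing pixels once
print(channelwise, pixelwise)  # 3 1
```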
# read image
# draw using opencv
print("OpenCV")
cimg = cv2.circle(img.copy(), (900, 1000), 500, [0, 0, 0], 20)
ip.show(cimg)
# draw using our method (swap center)
print("Ours")
mimg = circle(img.copy(), center=(1000, 900), border=20, rad=500, color=[0, 0, 0])
ip.show(mimg)
# difference
print("Difference")
r = mimg-cimg
r[r!=[0, 0, 0]] = 255
ip.show(r)
# count difference pixels
diff = np.sum(mimg!=cimg)
shape = mimg.shape
# what percentage is different?
diff * 100 / (shape[0] * shape[1])
OpenCV
Ours
Difference
1.064850381662056
## Rectangle
Now on to the second shape in Drawing Simple Geometrical Shapes on Python.
Drawing a rectangle is very easy; in fact, a single array-indexing operation completes the task. We need the coordinates of two opposite corners, i.e. of the major diagonal: in this case the top-left and bottom-right corners. Then we perform array indexing. As in the circle case, we will work with border and color values.
def rectangle(img, pt1, pt2, border=2, color=[0]):
"""
img: Input image where we want to draw rectangle:
pt1: top left point (y, x)
pt2: bottom right point
border: border of line
color: color of rectangle line,
returns new image with rectangle.
"""
p1 = pt1
pt1 = (p1[1], p1[0])
p2 = pt2
pt2 = (p2[1], p2[0])
b = int(np.ceil(border/2))
cvalue = np.array(color)
if border >= 0:
# get x coordinates for each line(top, bottom) of each side
# if -ve coordinates comes, then make that 0
x11 = np.clip(pt1[0]-b, 0, pt2[0])
x12 = np.clip(pt1[0]+b+1, 0, pt2[0])
x21 = pt2[0]-b
x22 = pt2[0]+b+1
y11 = np.clip(pt1[1]-b, 0, pt2[1])
y12 = np.clip(pt1[1]+b+1, 0, pt2[1])
y21 = pt2[1]-b
y22 = pt2[1]+b+1
# left line
img[x11:x22, y11:y12] = cvalue
# right line
img[x11:x22, y21:y22] = cvalue
# top line
img[x11:x12, y11:y22] = cvalue
# bottom line
img[x21:x22, y11:y22] = cvalue
else:
pt1 = np.clip(pt1, 0, pt2)
img[pt1[0]:pt2[0]+1, pt1[1]:pt2[1]+1] = cvalue
return img
mimg = rectangle(img, (100,500), (1000, 1000), border=-5, color=[20, 150, 20])
ip.show(mimg)
Let's explain a little bit of the code here:-
• Take an image where we want to draw, take the coordinates of corners, take the border of a rectangle, and take the color of the rectangle.
• Extract the coordinates where we want to draw (if the coordinates are out of the image plane then perform clipping)
• If the border is +ve:
• Change the pixels on the topmost line (the top coordinate along with some of its neighbors).
• Change the pixels on the bottommost line (the bottom coordinate along with some of its neighbors).
• Follow the same for other lines.
• Else:
• Fill/Change the color from the top line to the bottom from left to right line.
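The filled (-ve border) case really is a single slice assignment. A minimal standalone sketch, with an illustrative canvas size:

```python
import numpy as np

canvas = np.full((10, 10), 255, dtype=np.uint8)   # small white canvas
pt1, pt2 = (2, 3), (5, 7)                         # top-left, bottom-right as (row, col)

# one slice assignment fills the whole rectangle, both corners inclusive
canvas[pt1[0]:pt2[0] + 1, pt1[1]:pt2[1] + 1] = 0

# (5-2+1) rows * (7-3+1) cols = 4 * 5 = 20 black pixels
print(np.sum(canvas == 0))  # 20
```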
## Compare with OpenCV
Now in this part of drawing simple geometrical shapes on python, we will compare our generated image with OpenCV's.
# read image
# draw using opencv
print("OpenCV")
cimg = cv2.rectangle(img.copy(), (100, 500), (1000, 1000), [0, 0, 0], -5)
ip.show(cimg)
# draw using our method (swap center)
print("Ours")
mimg = rectangle(img, (100,500), (1000, 1000), border=-5, color=[0, 0, 0])
ip.show(mimg)
# difference
print("Difference")
r = mimg-cimg
r[r!=[0, 0, 0]] = 255
ip.show(r)
# count difference pixels
diff = np.sum(mimg!=cimg)
shape = mimg.shape
# what percentage is different?
diff * 100 / (shape[0] * shape[1])
OpenCV
Ours
Difference
0.0
The comparison with OpenCV looks great because we have zero difference. You can try different sizes of rectangles.
## Ellipse
Now on to the third shape in Drawing Simple Geometrical Shapes on Python.
An ellipse is a modified version of the circle; it is best described as the curve that lies on a 2d plane when that plane slices a cone at an incline (a conic section). Please search for this to see plenty of images. I will again be using the polar form of the ellipse. It is just as simple as the circle, except we use two axes instead of a radius.
$$x = h + a\cos(\theta), \qquad y = k + b\sin(\theta)$$
A simple example can be done using Matplotlib's plot.
h = 2
k = 1
a = 3
b = 1
t = np.linspace(0, 2 * np.pi, 100)
plt.plot(h+a*np.cos(t), k+b*np.sin(t))
plt.plot()
# creating a ellipse
def ellipse(img=None, center=(0, 0), a=3, b=1, border=4, color=[0], smooth=2):
"""
A method to create an ellipse on a given image.
img: Expects numpy ndarray of image.
center: center of the ellipse
a: major axis
b: minor axis
border: border of the ellipse, if -ve, ellipse is filled
color: color for ellipse
smooth: how smooth should our ellipse be?(smooth * 360 angles in 0 to 360)
"""
if img is None:
raise ValueError("Image can not be None. Provide a numpy array instead.")
angles = 360
cvalue = np.array(color)
if img is not None:
shape = img.shape
if len(shape) == 3:
row, col, channels = shape
else:
row, col = shape
channels = 1
angles = np.linspace(0, 360, 360*smooth)
for i in angles:
angle = i*np.pi/180
y = center[1]+b*np.sin(angle)
x = center[0]+a*np.cos(angle)
# since we are working on an image, coordinates start from (0, 0) and -ve values are ignored
if border >= 0:
r, c = int(x), int(y)
bord = int(np.ceil(border/2))
x1 = np.clip(x-bord, 0, img.shape[0]).astype(np.int32)
y1 = np.clip(y-bord, 0, img.shape[1]).astype(np.int32)
x2 = np.clip(x+bord, 0, img.shape[0]).astype(np.int32)
y2 = np.clip(y+bord, 0, img.shape[1]).astype(np.int32)
img[x1:x2, y1:y2] = cvalue
else:
x = np.clip(x, 0, img.shape[0])
y = np.clip(y, 0, img.shape[1])
r, c = int(x), int(y)
if i > 270:
img[center[0]:r, c:center[1]] = cvalue
elif i > 180:
img[r:center[0], c:center[1]] = cvalue
elif i > 90:
img[r:center[0], center[1]:c] = cvalue
elif i > 0:
img[center[0]:r, center[1]:c] = cvalue
return img
mimg = np.zeros((100, 100, 3), dtype=np.int32) + 255
eimg = ellipse(mimg.copy(), center=(10, 30), a = 10, b = 40, border=-2, color=[0, 0, 0])
ip.show(eimg)
## Compare it with OpenCV's Ellipse
In this part of Drawing Simple Geometrical Shapes on Python, we are going to compare our generated ellipse with OpenCV's. The case is just like the circle's: we have to swap the center and the axes of the ellipse.
cimg = cv2.ellipse(mimg.copy(), (30, 10), (40, 10), 0, 0, 360, [0, 0, 0], -2)
ip.show(cimg)
# difference on fill
diff = np.sum(cimg!=eimg)
shape = cimg.shape
# what percentage is different?
diff * 100 / (shape[0] * shape[1])
4.47
# difference on normal
mimg = np.zeros((100, 100, 3), dtype=np.int32) + 255
eimg = ellipse(mimg.copy(), center=(20, 30), a = 10, b = 40, border=2, color=[0, 0, 0])
print("Ours")
ip.show(eimg)
# opencv's
print("OpenCV's")
cimg = cv2.ellipse(mimg.copy(), (30, 20), (40, 10), 0, 0, 360, [0, 0, 0], 2)
ip.show(cimg)
# difference on fill
diff = np.sum(cimg!=eimg)
shape = cimg.shape
# what percentage is different?
diff * 100 / (shape[0] * shape[1])
Ours
OpenCV's
8.55
The difference between OpenCV's output and our method's is not bad at all. But as always, the difference depends on the size of the shape.
## Finally
We are done with Drawing Simple Geometrical Shapes on Python. Now, as a bonus topic, I will add these methods to our image processing class.
## Bonus Topic
class ImageProcessing:
def __init__(self):
self.readmode = {1 : "RGB", 0 : "Grayscale"}
def read_image(self, location = "", mode = 1):
"""
Uses imageio on back.
* location: Directory of image file.
* mode: Image readmode{1 : RGB, 0 : Grayscale}.
"""
img = imageio.imread(location)  # assumes `import imageio` at the top of the notebook
if mode == 0:
img = self.convert_color(img, 0)
return img
def show(self, image, figsize=(5, 5)):
"""
Uses Matplotlib.pyplot.
* image: A image to be shown.
* figsize: How big image to show. From plt.figure()
"""
fig = plt.figure(figsize=figsize)
im = image
plt.imshow(im, cmap='gray')
plt.show()
def convert_color(self, img, to=0):
if to==0:
return 0.21 * img[:,:,0] + 0.72 * img[:,:,1] + 0.07 * img[:,:,2]
else:
raise ValueError("Color conversion can not understood.")
# creating a circle
def circle(self, img=None, center=(0, 0), rad=10, border=4, color=[1], smooth=2):
"""
A method to create a circle on a given image.
img: Expects numpy ndarray of image.
center: center of the circle
border: border of the circle, if -ve, circle is filled
color: color for circle
smooth: how smooth should our circle be?(smooth * 360 angles in 0 to 360)
"""
if img is None:
raise ValueError("Image can not be None. Provide a numpy array instead.")
angles = 360
cvalue = np.array(color)
if img is not None:
shape = img.shape
if len(shape) == 3:
row, col, channels = shape
else:
row, col = shape
channels = 1
angles = np.linspace(0, 360, 360*smooth)
for i in angles:
a = i*np.pi/180
y = center[1]+rad*np.sin(a) # p = h*sin(theta)
x = center[0]+rad*np.cos(a) # b = h*cos(theta)
# since we are working on an image, coordinates start from (0, 0) and -ve values are ignored
if border >= 0:
b = int(np.ceil(border/2))
x1 = np.clip(x-b, 0, shape[0]).astype(np.int32)
y1 = np.clip(y-b, 0, shape[1]).astype(np.int32)
x2 = np.clip(x+b, 0, shape[0]).astype(np.int32)
y2 = np.clip(y+b, 0, shape[1]).astype(np.int32)
img[x1:x2, y1:y2] = cvalue
else:
x = np.clip(x, 0, shape[0])
y = np.clip(y, 0, shape[1])
r, c = int(x), int(y)
if i > 270:
img[center[0]:r, c:center[1]] = cvalue
elif i > 180:
img[r:center[0], c:center[1]] = cvalue
elif i > 90:
img[r:center[0], center[1]:c] = cvalue
elif i > 0:
img[center[0]:r, center[1]:c] = cvalue
return img
def rectangle(self, img, pt1, pt2, border=2, color=[0]):
"""
img: Input image where we want to draw rectangle:
pt1: top left point (y, x)
pt2: bottom right point
border: border of line
color: color of rectangle line,
returns new image with rectangle.
"""
p1 = pt1
pt1 = (p1[1], p1[0])
p2 = pt2
pt2 = (p2[1], p2[0])
b = int(np.ceil(border/2))
cvalue = np.array(color)
if border >= 0:
# get x coordinates for each line(top, bottom) of each side
# if -ve coordinates comes, then make that 0
x11 = np.clip(pt1[0]-b, 0, pt2[0])
x12 = np.clip(pt1[0]+b+1, 0, pt2[0])
x21 = pt2[0]-b
x22 = pt2[0]+b+1
y11 = np.clip(pt1[1]-b, 0, pt2[1])
y12 = np.clip(pt1[1]+b+1, 0, pt2[1])
y21 = pt2[1]-b
y22 = pt2[1]+b+1
# left line
img[x11:x22, y11:y12] = cvalue
# right line
img[x11:x22, y21:y22] = cvalue
# top line
img[x11:x12, y11:y22] = cvalue
# bottom line
img[x21:x22, y11:y22] = cvalue
else:
pt1 = np.clip(pt1, 0, pt2)
img[pt1[0]:pt2[0]+1, pt1[1]:pt2[1]+1] = cvalue
return img
# creating a ellipse
def ellipse(self, img=None, center=(0, 0), a=3, b=1, border=4, color=[0], smooth=2):
"""
A method to create an ellipse on a given image.
img: Expects numpy ndarray of image.
center: center of the ellipse
a: major axis
b: minor axis
border: border of the ellipse, if -ve, ellipse is filled
color: color for ellipse
smooth: how smooth should our ellipse be?(smooth * 360 angles in 0 to 360)
"""
if img is None:
raise ValueError("Image can not be None. Provide a numpy array instead.")
angles = 360
cvalue = np.array(color)
if img is not None:
shape = img.shape
if len(shape) == 3:
row, col, channels = shape
else:
row, col = shape
channels = 1
angles = np.linspace(0, 360, 360*smooth)
for i in angles:
angle = i*np.pi/180
y = center[1]+b*np.sin(angle)
x = center[0]+a*np.cos(angle)
# since we are working on an image, coordinates start from (0, 0) and -ve values are ignored
if border >= 0:
r, c = int(x), int(y)
bord = int(np.ceil(border/2))
x1 = np.clip(x-bord, 0, img.shape[0]).astype(np.int32)
y1 = np.clip(y-bord, 0, img.shape[1]).astype(np.int32)
x2 = np.clip(x+bord, 0, img.shape[0]).astype(np.int32)
y2 = np.clip(y+bord, 0, img.shape[1]).astype(np.int32)
img[x1:x2, y1:y2] = cvalue
else:
x = np.clip(x, 0, img.shape[0])
y = np.clip(y, 0, img.shape[1])
r, c = int(x), int(y)
if i > 270:
img[center[0]:r, c:center[1]] = cvalue
elif i > 180:
img[r:center[0], c:center[1]] = cvalue
elif i > 90:
img[r:center[0], center[1]:c] = cvalue
elif i > 0:
img[center[0]:r, center[1]:c] = cvalue
return img
def convolve(self, image, kernel = None, padding = "zero", stride=(1, 1), show=False, bias = 0):
"""
* image: A image to be convolved.
* kernel: A filter/window of odd shape for convolution. Used Sobel(3, 3) default.
* padding: Border operation. Available from zero, same, none.
* stride: Convolution stride as (rows, cols); how far the window moves each step.
"""
if len(image.shape) > 3:
raise ValueError("Only 2 and 3 channel image supported.")
if type(kernel) == type(None):
warnings.warn("No kernel provided, trying to apply Sobel(3, 3).")
kernel = np.array([[-1, 0, 1],
[-1, 0, 1],
[-1, 0, 1]])
kernel += kernel.T
kshape = kernel.shape
if kshape[0] % 2 != 1 or kshape[1] % 2 != 1:
raise ValueError("Please provide a 2d kernel with odd side lengths.")
if isinstance(stride, int):
stride = (stride, stride)
shape = image.shape
# note: padding modes are not implemented here; the loop below performs a
# "valid" correlation, so the output is smaller than the input
rv = 0
cimg = []
for r in range(kshape[0], shape[0]+1, stride[0]):
cv = 0
for c in range(kshape[1], shape[1]+1, stride[1]):
chunk = image[rv:r, cv:c]
soma = (np.multiply(chunk, kernel)+bias).sum()
try:
chunk = int(soma)
except (ValueError, TypeError):
chunk = 0
if chunk < 0:
chunk = 0
if chunk > 255:
chunk = 255
cimg.append(chunk)
cv+=stride[1]
rv+=stride[0]
cimg = np.array(cimg, dtype=np.uint8).reshape(int(rv/stride[0]), int(cv/stride[1]))
if show:
self.show(cimg)
return cimg
ip = ImageProcessing()
cv = ip.convolve(img)
ip.show(cv)
#ip.show(img)
fig = plt.figure(figsize=(5,5))
mimg = ip.circle(img, center=(400, 100), border=20, rad=500)
ip.show(mimg)
mimg = ip.rectangle(img, (100,500), (1000, 1000), border=-5, color=[20, 150, 20])
ip.show(mimg)
mimg = np.zeros((100, 100, 3), dtype=np.int32) + 255
eimg = ip.ellipse(mimg.copy(), center=(10, 30), a = 10, b = 40, border=-2, color=[0, 0, 0])
ip.show(eimg)
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:211: UserWarning: No kernel provided, trying to apply Sobel(3, 3).
Thank you so much for reading this Drawing Simple Geometrical Shapes on Python blog, and if you found it interesting, why not share it or leave a comment? If you have any queries, you can send me a mail or find me on Twitter as @QuassarianViper.
## What next?
• Add functionality to do blurring, noise cancellation, sharpening, etc
• Add functionality to do erosion, dilation, etc operations.
In the meantime, how about looking over some of my other work or the newsletter?
|
|
Accepted: Dec. 30, 2019
Posted: Jan. 3, 2020
Published Online: Feb. 14, 2020
The Author Email: Bin Zhang (zhangbin5@mail.sysu.edu.cn), Zhaohui Li (lzhh88@mail.sysu.edu.cn)
Jingshun Pan, Bin Zhang, Zhengyong Liu, Jiaxin Zhao, Yuanhua Feng, Lei Wan, Zhaohui Li. Microbubble resonators combined with a digital optical frequency comb for high-precision air-coupled ultrasound detectors[J]. Photonics Research, 2020, 8(3): 03000303
Fast and sensitive air-coupled ultrasound detection is essential for many applications such as radar, ultrasound imaging, and defect detection. Here we present a novel approach based on a digital optical frequency comb (DOFC) technique combined with high-$Q$ optical microbubble resonators (MBRs). The DOFC enables precise spectroscopy on resonators, tracing the ultrasound pressure through the resonant frequency shift with femtometer resolution and sub-microsecond response time. A noise-equivalent pressure for air-coupled ultrasound as low as $4.4\,\mathrm{mPa}/\sqrt{\mathrm{Hz}}$ is achieved by combining a high-$Q$ ($\sim 3\times 10^{7}$) MBR with the DOFC method. Moreover, the approach can observe multiple resonance peaks from multiple MBRs to directly monitor the precise spatial location of the ultrasonic source. It has the potential to be applied in 3D air-coupled photoacoustic and ultrasonic imaging.
|
|
# American Institute of Mathematical Sciences
## The nonstationary flows of micropolar fluids with thermal convection: An iterative approach
1 Departamento de Matemática, Universidade Federal de Pernambuco, Recife, PE, Brazil 2 Departamento de Matemática, Universidad de Tarapacá, Arica, Chile
* Corresponding author: miguel@dmat.ufpe.br
Received October 2019 Revised January 2020 Published June 2020
Fund Project: This work was partially supported by CAPES-PRINT, 88887.311962/2018-00.
Charles Amorim was supported by CNPQ/Brazil
We consider a problem that describes the motion of a viscous, incompressible, and heat-conducting micropolar fluid in a bounded domain $\Omega \subset \mathbb{R}^3$. We use an iterative method to analyze the existence, uniqueness, and regularity of the solutions. We also determine the convergence rates in several norms.
Citation: Charles Amorim, Miguel Loayza, Marko A. Rojas-Medar. The nonstationary flows of micropolar fluids with thermal convection: An iterative approach. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020193
|
|
# Does second dimension exist? or any other dimension? [closed]
Atoms, as we know, are the structural units of everything. As far as I know, atoms are 3D objects: they have length, breadth, and height (they have thickness). Everything is made up of atoms, which means everything is 3D. So why do we say that a triangle ABC drawn on a piece of paper is a 2D figure?
• Because the third dimension is too small to be considered. Oct 30, 2017 at 15:05
• In your main question, what do you mean by second dimension? There are 3 spatial dimensions and a temporal one. Oct 30, 2017 at 15:10
• A triangle drawn on paper is an imperfect representation of the geometrical object "triangle". Oct 30, 2017 at 16:32
• You, or anybody else, could not possibly know that elementary particles are 3D. This is because, for an electron, we measure only its electrostatic charge, and to the degree we have measured it, it follows the inverse square law. But this is something like measuring the width of my shoulders by squeezing the clothes around them; it is not an accurate measurement of what is beneath. If you look at the literature regarding quarks and gluons and their interactions, you will find a different approach. The actual underlying "things" are still a mystery as regards their "width".
– user171879
Oct 30, 2017 at 17:03
You have to make an effort of abstraction. There's no such thing as a first, second, and third dimension. To say that something has a certain number of dimensions is (very roughly speaking) to answer the question "How many coordinates do I need to uniquely identify a point on this something?" Like you say, in 3D space we need 3: $x, y, z$. On a sheet of paper you can identify every point with just two numbers (think of drawing a grid on the sheet of paper). Sure, the ink on the paper has some finite width, but who cares? When you say "triangle", don't think of a physical piece of something shaped like a triangle; think of the mathematical object "triangle". Maybe it's simpler if we think of circles. If you draw a circle on a sheet of paper, you might again argue that the ink has a width. But what *is* a circle? A circle is the set of all points $x$ and $y$ such that their distance from another given point (the center) is some fixed number (the radius), i.e. $x^2+y^2=R^2$ where $R$ is the radius.
I will argue that a circle is a one-dimensional object. As a matter of fact, once the radius is fixed, you need only one number to position yourself on the circle, specifically an angle $\theta$.
But it makes sense, doesn't it? You usually think of a line as a one-dimensional object, and a circle is just a line curled up a bit. What if we filled the circle and considered the inside too? That would be the set of points $x, y$ such that $x^2+y^2<R^2$. Can you see how many dimensions this object has?
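To make the "one number suffices" point concrete, here is a quick numerical check (the radius value is just an example):

```python
import numpy as np

R = 2.0
theta = np.linspace(0, 2 * np.pi, 1000)   # ONE number per point on the circle
x, y = R * np.cos(theta), R * np.sin(theta)

# every point generated from the single parameter theta satisfies x^2 + y^2 = R^2
print(np.allclose(x**2 + y**2, R**2))  # True
```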
|