## Roxanne Tellier: Bad Week for Sore Losers
Posted in Opinion on June 28, 2015 by segarini
When those charged with guiding and arbitrating the people become inured to the people's actual needs and opinions, it's time for them to go. When entitlement and arrogance override justice for ALL, not just the chosen few, it's time to reassess the entire system.
## Roxanne Tellier – Don’t Mention The War!
Posted in Opinion on March 29, 2015 by segarini
If I made (or kept) New Year's resolutions, I might have resolved to pay more attention to Canadian issues, and less to America's or the rest of the world's.
|
|
## Can I enchant a necklace with the equivalent of a healing potion? (DnD 5e)
I know nothing of magic items really, especially custom ones, so I'm just going to describe an ideal scenario and I'd love some help in how something like it could be achieved, please! I am looking for some way to enchant a necklace with the ability to give some HP to the wearer. In an ideal world this would be triggered if the wearer hits 0 HP, and would be rechargeable in some way. My player is not a spellcaster, so I would be looking for a merchant or someone in town who could do this for her; any advice on how much this might cost, up front and for each recharge, would be appreciated. (For flavour reasons, she already has a specific necklace she'd like to enchant.) If anything like this is possible, I'd love some help! Thanks!
## What is the equivalent of Lathander from the Dawn War Deities?
A player getting his cleric ready for a campaign I am creating has told me he usually plays Forgotten Realms, and his usual deity is Lathander; this is not a deity I am familiar with.
Largely I am basing my pantheon for this campaign on the Dawn War Deities, with some tweaks. I am perfectly happy to allow my cleric to worship Lathander, and he is equally happy to worship an equivalent deity in my pantheon.
Who would be the closest equivalent to Lathander among the Dawn War Deities, either to be replaced by Lathander or to replace him as my player's deity?
## Offered equivalent .com domain to purchase, what is going on, and how to proceed?
A quick bit of background: I run a small (single person) software company, and operate under a .com.au domain that I have had for around 8 years. There has always been a .com domain of the same name, operating in an entirely different sector and country, but with not much going on on their website. I have always wanted the .com equivalent of my current domain, just because over the 8 years not only has business improved a little, but I also left Australia and, of course, a .com is just a little more desirable. With that in mind, I had the .com on backorder with my Australian registrar for quite a few years – but it never came up.
In the last few weeks, I suddenly got an influx of offers to buy this .com domain, seemingly from a number of different entities. First thoughts – scam of some sort. After checking out the WHOIS, it does seem that the .com expired last September, but my backorder had lapsed (great!). Registrant details are hidden behind PERFECT PRIVACY, LLC in Florida, and the domain status is clientTransferProhibited and the domain servers are NS1.PENDINGRENEWALDELETION.COM.
I am really curious as to what is actually going on here. Is it just a scam, because people have scraped details of an expiring domain, or do these people actually have access? If so, how? Has it been purchased? I didn’t (and don’t) know much about the murky world of domain name transfers, but SnapNames came up in my search and I looked on there. I did find the .com domain there, with a status of "Closed". Does that mean it came up in an auction and was bought? Or is another possibility that it is coming up for sale, and multiple people are trying to work out if it is worth buying and flipping to me? If it has sold, is there any way of finding out how much it went for? That might give me a bit of leverage in any future negotiations.
It is not, I don't think, a valuable name. One of the potential sellers has offered it to me for \$1299, which I might consider paying, but another question would be: how does one go about safely purchasing a domain from an internet unknown?
## Is Exit (no square brackets) equivalent to Quit[] for refreshing the Kernel from within an Evaluation Notebook?
I prefer to use Exit as it conveniently requires fewer key presses than Quit[]. But before I use it regularly I need to know if there are any subtle differences between Quit[] and Exit. The Wolfram documentation pages for Quit and Exit appear to be very similar and even call these two functions synonymous, but I just need to be sure.
Thanks.
## Transforming multi-tape Turing machines to equivalent single-tape Turing machines
We know that multi-tape Turing machines have the same computational power as single-tape ones. So every $$k$$-tape Turing machine has an equivalent single-tape Turing machine.
About the computability and complexity analysis of such a transformation:
Is there a computable function that receives as input an arbitrary multi-tape Turing machine and returns an equivalent single-tape Turing machine in polynomial time and polynomial space?
## What is the AC equivalent of mirror image?
It is suggested (by the DMG) to account for defensive options by assuming increased AC, e.g. +2 AC for magic resistance.
I would like to give some of my monsters the spell mirror image as part of a spellcasting / innate spellcasting trait. There is precedent for this: the Alhoon, the Faerie Dragon, and the Lamia have it, for example.
What would be a rough estimate of the AC equivalent for mirror image for the purpose of determining CR / combat performance?
Tangentially related (compares mirror image to other spells for PCs concerning the defensive capabilities): What's my most efficient use of spell slots to help my AC?
## Are these two methods of handling Elven Accuracy “Double Advantage” mathematically equivalent?
For context, part of the Elven Accuracy feat (Xanathar’s Guide to Everything, p. 74) states:
Whenever you have advantage on an attack roll using Dexterity, Intelligence, Wisdom, or Charisma, you can reroll one of the dice once.
So the obvious way to handle this, mechanically, is to roll two dice, pick the lowest, and roll it again. But as an effort to save time, I’ve proposed instead simply rolling three dice simultaneously, and picking the highest rolled value.
The problem is that I’m not certain that this is mathematically correct.
I created a code simulation intended to model the probability curve of both methods, and it suggests that the two methods are mathematically equivalent; but the simulation only performs direct sampling of random numbers and their results, so it has unavoidable sampling error, and it doesn't attempt to solve the underlying mathematical principles involved.
//Roll 3, pick highest
ResultSet: Double Advantage
Average: 15.48246
Variance: 14.94721234837884
Std. Deviation: 3.8661624834425727
95% range: [6, 20]
Mode: 20
Median: 16

//Roll 2, reroll lowest, pick highest
ResultSet: Alternate Double Advantage
Average: 15.488486
Variance: 14.944649427739675
Std. Deviation: 3.8658310138623073
95% range: [6, 20]
Mode: 20
Median: 16
Is it correct to say that these two dice-rolling methods are equivalent, or should I stick to the RAW interpretation of how these dice should be rolled?
For full context, I’m planning out a build for a Shadow Sorcerer that fights only in melee combat, and if this character has the ability to nearly-permanently shroud themselves in Darkness (which is one of their class features), it’ll give them nearly permanent Advantage against creatures that don’t have Devil’s Sight or Truesight (or a reliable, spammable Counterspell/Dispel Magic). So simplifying this roll can matter in terms of time saved.
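A minimal Monte Carlo sketch of the two rolling methods (for illustration only; this is not the asker's simulation, and the trial count is arbitrary):

```python
import random

def reroll_lowest():
    # Roll 2d20, keep the higher die, reroll the lower die once, take the best.
    a, b = random.randint(1, 20), random.randint(1, 20)
    kept = max(a, b)
    reroll = random.randint(1, 20)
    return max(kept, reroll)

def roll_three():
    # Roll 3d20 simultaneously and take the highest.
    return max(random.randint(1, 20) for _ in range(3))

n = 1_000_000
print(sum(reroll_lowest() for _ in range(n)) / n)  # both averages come out ~15.49
print(sum(roll_three() for _ in range(n)) / n)
```

Analytically the two methods coincide exactly: taking the better of the kept die and the reroll computes max(max(a, b), c) = max(a, b, c), so both produce the distribution of the highest of three d20s, with P(X ≤ k) = (k/20)³.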
## Is my recursive algorithm for Equivalent Words correct?
Here is my problem.
Problem Given two words and a dictionary, find out whether the words are equivalent.
Input: The dictionary, D (a set of words), and two words v and w from the dictionary.
Output: A transformation of v into w by substitutions such that all intermediate words belong to D. If no transformation is possible, output “v and w are not equivalent.”
I need to write both recursive and dynamic programming algorithm. As for recursion, I came up with this algorithm. Is it correct?
EquivalentWordsProblem(v, w, D)
1.  m <- len(v)
2.  n <- len(w)
3.  substitutions <- []   # array to save substitutions
4.  if m != n:
5.      return "v and w are not equivalent"
6.  else
7.      for i <- m down to 1 do
8.          for j <- n down to 1 do
9.              if v[i] != w[j]:
10.                 substituted_word <- v[1…i-1] + w[j]   # we substitute w[j] for v[i]
11.                 if substituted_word in D:
12.                     substitutions.append(substituted_word)
13.                     return EquivalentWordsProblem(v[1…m-i], w, D)   # recur on the string of length m - i
14.                 else: return EquivalentWordsProblem(v[1…m-1], w, D)   # recur on the string, decreasing its length by 1
15. if len(substitutions) != 0:
16.     return substitutions
17. else
18.     return "v and w are not equivalent"
## Is checking if regular languages are equivalent decidable?
Is this problem algorithmically decidable?
L1 and L2 are both regular languages with alphabet $$\Sigma$$. Does L1 = L2?
I think that it is decidable because you can write regular expressions for each language and see if they are the same. But I'm not sure how to prove it, since I understand that you prove something is decidable by exhibiting a Turing machine that decides it.
## Are Online Problems always harder than the Offline equivalent?
I am currently studying online algorithms, and I just asked myself whether online problems are always harder than their offline equivalents.
The most probable answer is yes, but I can't figure out why.
Actually, I have a second, more specific question. When an offline problem has some integrality gap ($$IG\in[1,\infty)$$), we know that in the offline setting there is generally no randomized rounding algorithm which achieves a ratio better than $$IG$$.
Can this just be adapted to the online problem? If some fractional algorithm has competitive ratio $$c_{frac}$$ can some randomized rounding scheme only reach competitive ratio as good as $$\frac{c_{frac}}{IG}$$?
|
|
Hi,
I would like to use ADCMP580.
So, I have a question.
#1 What is the maximum hysteresis?
In the text of the datasheet (p. 14), the maximum hysteresis is stated as ±70 mV. The relevant text is quoted below:
" The maximum range of hysteresis that can be applied by using this method is approximately ±70 mV. "
However, looking at Figure 28, since hysteresis is 70 mV, I think that it is ± 35 mV.
Is "± 70 mV" in the text a mistake?
Best Regards,
Yuya
• Hi Yuya,
The ±70 mV is a typographical error; the correct hysteresis range for the ADCMP580 is ±35 mV, since each side is equivalent to ±Vhys/2, where Vhys = 70 mV.
Regards,
Joven
• Hi, Joven
|
|
The basic idea is to approximate the target distribution $\pi$ as lying in a finite dimensional Reproducing Kernel Hilbert Space and minimize the distance between the approximation $q$ and $\pi$ (measured using RKHS’ own brand of distribution distance, the Maximum Mean Discrepancy or MMD). This, so far, is an idea that is used in basically all modern methods for numeric integration – variational Bayes and EP of course, but also adaptive MC methods that sample from some $q_t$ at time t then update the proposal to $q_{t+1}$, which should be close to $\pi$ (e.g. a Gaussian approximation in Adaptive Metropolis, or a Mixture of Gaussians or Students with an EM-inspired update in both Andrieu and Moulines 2006 and Cappé et al 2007).
However, the nice thing in this paper is that they actually provide theoretical guarantees that their integral estimates are consistent and converge at rate $O(1/n)$ or $O(\exp(-Cn))$ (where n is the number of samples). While the assumption of $\pi$ lying in a finite dimensional RKHS is to be expected, the stronger assumption is that it has compact support.
The crucial point seems to me to be the following: while the estimator has a superior rate, picking design points costs $O(n^2)$ or $O(n^3)$ where n is the number of points. Drawing a sample using Monte Carlo algorithms on the other hand takes constant time, which makes MC linear in the number of points ($O(n)$). This of course is crucial for the wall clock time performance and makes theoretical comparison of MC vs FW Bayesian Quadrature delicate at best.
As far as a conclusive story for the paper goes, I think it is a bit unfortunate that the evaluation uses iid samples drawn directly from $\pi$ in its optimization algorithm, because if you have those, Monte Carlo estimates are really good as well. But after a nice discussion with the authors, I realized that this does not in any way taint the theoretical results.
I wonder whether it is mostly the idea of extrapolating $\pi$ that gives these good rates, as I think Dan Simpson suggested on Xi'an's blog. And finally, this does not really alleviate the problem that BQ practically works only with Gaussian RBF kernels, which is a rather strong assumption on the smoothness of $\pi$.
|
|
# Tag Info
## Hot answers tagged pressure
3
An ideal gas or near-ideal gas such as air at about atmospheric pressure and room temperature has a bulk modulus which is the same as its pressure. That can be readily confirmed by taking the ideal gas equation $PV=nRT$ and substituting it into the equation for the bulk modulus $B=-V \frac{dP}{dV}$. Now for what P-V equations-of-state does the bulk modulus ...
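Spelling out that confirmation (a standard one-line derivation added for clarity): at constant temperature, $PV = nRT$ gives $\frac{dP}{dV} = -\frac{nRT}{V^2} = -\frac{P}{V}$, so $B = -V \frac{dP}{dV} = P$; this is the isothermal bulk modulus (an adiabatic compression would give $\gamma P$ instead).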
3
Concerning your wording "force is transmitted (and maybe decreases because of loss of energy)" - no, no, the decrease of force is not easily connected to the loss of energy. Force can be decreased because there is friction, but this does not imply a loss of energy (not if nothing moves). And also energy can be lost (plastic deformation of the rope) without a ...
2
As mentioned by @Chester, Bernoulli isn't a good approximation for viscous flows, which blood flow is. Instead you should use the Hagen-Poiseuille law, which relates the average volumetric flowrate to the pressure gradient in the pipe. From it we find that the flowrate $Q$ is proportional to: $$Q \propto R^4 \Gamma$$ where $R$ is the radius of the pipe and ...
2
Your approach is along the right lines and you need to make use that $dp = \rho(z) \, g \, dz$ and use the relationship between pressure, volume and temperature for n moles of gas as $P V = n R T$, so that $P = \frac{n}{V} R T = \rho R T$, with $\rho$ the density. If you assume that the temperature of the atmosphere is uniform then the pressure varies as ...
2
I am not sure if the same laws apply to the heart as to a mechanical pump, but for a given flow rate, say X gallons per minute, the mechanical pump must develop a pressure P to overcome pipe friction and any other force trying to retard flow. If the pipe in a system is reduced in size, a higher pressure will be required to pump the same flow rate. The ...
1
The Bernoulli equation is a good approximation only if viscous flow resistance is not important. In blood flow through arteries, veins and (particularly) capillaries, viscous flow resistance is very important.
1
This has nothing to do with air pressure, since the air pressure is exactly the same at all of the holes (including the top one). Regarding pressure, only differences can cause stuff to move; the absolute pressure is only relevant for density and such. The concept you should read more about is called hydrostatic pressure, and it is given by a very simple formula. ...
1
A and C are not identical; that's where the thought experiment breaks down. Consider the pressure in the fluids around the opening between the long thin neck section and the wide base section. In C, you have one continuous fluid, and the pressure is the same both above and below the neck (and equal to $\rho g h$ where $h$ is the height of the neck and ...
1
If the pressure is uniform, there is no problem, because you just multiply the pressure by the total surface area of interest. However, if the pressure is varying on the surface, pressure should be regarded as a point function of location, and the contribution to the total force on the surface at a differential element of area on the surface dA is equal to ...
1
A self-contained, careful derivation of (4.10): We consider a thermodynamic system whose state can be characterized by the macroscopic variables $(S, V, N)$, then starting with the fundamental relationship $\mathrm dU = T\,\mathrm dS -P\,\mathrm dV + \mu\,\mathrm dN$, and noting that $\beta = 1/(k T)$, one can deduce the following useful expression for the ...
1
I think you misunderstand the definitions. The critical temperature is the temperature above which no amount of pressure will cause a gas to liquefy. The critical pressure is the pressure which will cause a gas to liquefy at its critical temperature. A supercritical fluid is another state of matter. A liquid and a gas phase have been subjected to ...
1
Before valve-A is opened, the pressure in the tube is atmospheric pressure, since the top of the tube is open. When valve-A is opened, assumed instantaneously, the water will rush into the tube at: $$V = \sqrt{2gh}$$ where: $g$ = gravitational constant (32.2 ft per sec per sec), $h = 100\ \mathrm m$ = height from the bottom of the tube to the tank ...
1
The real answer is quite complex; I think we should break it into a couple of different pieces. First - the static case. If you submerge an open pipe into water, the pressure inside and outside will be the same at a given height, and the water level inside the tube will settle at the same height as outside. If you add the effect of surface tension, it is ...
1
The other answer that I wrote earlier was wrong. Although there will be an unbalanced force, the pistons cannot move to the right due to the restriction on movement. $$F_1 = A_1P_{atm}$$ $$F_2 = A_2P_{atm}$$ $$A_1 > A_2 \implies \ F_1 > F_2$$
|
|
# Reveal TikZ Mindmap step by step with Beamer [duplicate]
Possible Duplicate:
Mindmap tikzpicture in beamer (reveal step by step)
Using a mindmap example from here, with only slight modifications. The target is to control which mindmap pieces are visible at a given time in the presentation.
Here is what I tried:
\begin{tikzpicture}
\path[mindmap,concept color=black,text=white]
node[concept] {Computer Science}
[clockwise from=0]
child[concept color=green!50!black] {
node[concept] (herb) {practical} %herb
[clockwise from=90]
child { node[concept] {algorithms} }
child { node[concept] {data structures} }
child { node[concept] {pro\-gramming languages} }
child { node[concept] {software engineer\-ing} }
}
child[concept color=blue] {
node[concept] (muh) {applied} %muh
[clockwise from=-30]
child { node[concept] {databases} }
child { node[concept] {WWW} }
}
child[concept color=red] { node[concept] {technical} }
child[concept color=orange] { node[concept] {theoretical} };
\uncover<1-1>{herb} %should uncover on 1
\uncover<2-2>{muh} %should uncover on 2
\end{tikzpicture}
Although I get to select the two different versions in the PDF, they appear identical. How can I display the mindmap step by step to help my audience follow my trail?
## marked as duplicate by cmhughes, Werner, Guido, Stephen, zeroth Jan 28 '13 at 20:14
• The same question was already asked in tikzpicture in beamer: have a look there and I suggest you to consider Daniel's approach. – Claudio Fiandrino Jan 28 '13 at 18:29
• The link that Claudio provided wouldn't have been obvious from a search- I have edited its title :) Let us know if your question is resolved... – cmhughes Jan 28 '13 at 18:57
• I will try this tomorrow but this looks really close to what i am looking for. – Johannes Jan 29 '13 at 0:38
• Yes, the solution worked. – Johannes Jan 29 '13 at 14:12
|
|
Canadian Mathematical Society www.cms.math.ca
Search results
Search: MSC category 11S90 ( Prehomogeneous vector spaces )
Results 1 - 1 of 1
1. CJM 2006 (vol 58 pp. 3)
Ben Saïd, Salem
The Functional Equation of Zeta Distributions Associated With Non-Euclidean Jordan Algebras. This paper is devoted to the study of certain zeta distributions associated with simple non-Euclidean Jordan algebras. An explicit form of the corresponding functional equation and Bernstein-type identities is obtained. Keywords: Zeta distributions, functional equations, Bernstein polynomials, non-Euclidean Jordan algebras. Categories: 11M41, 17C20, 11S90
© Canadian Mathematical Society, 2015 : https://cms.math.ca/
|
|
# Metric Spaces and Metrizability
## Metric Spaces
Definition: A Metric Space is a pair $(E, d)$ where $E$ is a set, and $d$ is a Metric (or Distance Function), i.e., a function $d : E \times E \to [0, \infty)$ that satisfies the following properties: (1) $d(x, y) = 0$ if and only if $x = y$. (2) $d(x, y) = d(y, x)$ for all $x, y \in E$. (3) $d(x, z) \leq d(x, y) + d(y, z)$ for all $x, y, z \in E$.
• Property (1) states that the distance between any pair of points is $0$ if and only if those points coincide.
• Property (2) states that the distance between $x$ and $y$ is the same as the distance between $y$ and $x$.
• Property (3) states that the distance between $x$ and $z$ will always be less than or equal to the sum of the distance between $x$ and an intermediary point $y$ and the distance between that intermediary point $y$ and $z$. This is sometimes referred to as the triangle inequality property for metrics.
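For a concrete (standard) example: on $E = \mathbb{R}^n$, the Euclidean distance $d(x, y) = \left( \sum_{k=1}^{n} (x_k - y_k)^2 \right)^{1/2}$ satisfies all three properties, with the triangle inequality following from the Cauchy-Schwarz inequality; and the discrete metric, $d(x, y) = 1$ if $x \neq y$ and $d(x, x) = 0$, shows that every nonempty set admits at least one metric.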
Definition: Let $(E, d)$ be a metric space. Given $x \in E$ and $\epsilon > 0$, the Open Ball Centered at $x$ with Radius $\epsilon$ is the set $V(x, \epsilon) := \{ y \in E : d(x, y) < \epsilon \}$.
If $(E, d)$ is a metric space and if $x \in E$, then a set $U$ is a neighbourhood of $x$ if there exists an $\epsilon > 0$ such that $x \in V(x, \epsilon) \subseteq U$. If $\mathcal U_x$ is the collection of all neighbourhoods of $x$, then:
• (1) $x \in U$ for all $U \in \mathcal U_x$.
• (2) If $U_1, U_2 \in \mathcal U_x$, then there exists $\epsilon_1, \epsilon_2 > 0$ with $x \in V(x, \epsilon_1) \subseteq U_1$ and $x \in V(x, \epsilon_2) \subseteq U_2$. By setting $\epsilon = \min \{ \epsilon_1, \epsilon_2 \}$ we see that $x \in V(x, \epsilon) \subset U_1 \cap U_2$, so that $U_1 \cap U_2 \in \mathcal U_x$.
• (3) If $U \in \mathcal U_x$, then there exists an $\epsilon > 0$ such that $x \in V(x, \epsilon) \subseteq U$. So if $U \subseteq V$ then $x \in V(x, \epsilon) \subseteq V$ so that $V \in \mathcal U_x$.
• (4) If $U \in \mathcal U_x$ then again, there exists an $\epsilon > 0$ such that $x \in V(x, \epsilon) \subseteq U$. By setting $V := V(x, \epsilon)$, we see that $V \in \mathcal U_y$ for all $y \in V(x, \epsilon)$.
Therefore, as noted on the Topologies and Topological Spaces page, $(E, d)$ becomes a topological space where for each $x \in E$, $\mathcal U_x$ is a collection of neighbourhoods for $x$, and moreover, for each $x \in E$, $\{ V(x, \epsilon) : \epsilon > 0 \}$ is a base of neighbourhoods of $x$, and also, $\{ V \left ( x, \frac{1}{n} \right ) : n \in \mathbb{N} \}$ is a countable base of neighbourhoods of $x$.
## Metrizability
Every metric space $(E, d)$ is a topological space $(E, \tau)$, where $\tau$ consists of all unions of open balls on $E$. However, in general, there are topological spaces that are not metric spaces.
Definition: A topological space $(E, \tau)$ is said to be Metrizable if there exists a metric $d : E \times E \to [0, \infty)$ such that the topology induced by $d$ is the same as the topology $\tau$, where the topology induced by $d$ is the topology generated by the open balls of $E$, i.e., the topology generated by $\{ V(x, \epsilon) : x \in E, \epsilon > 0 \}$.
When we say that the topology induced by $d$ is the same as the topology $\tau$, we mean that the topology on $E$ consisting of all possible unions of open balls in $E$ is the same as the topology $\tau$.
Proposition 1: Let $(E, \tau)$ be a topological space. If $(E, \tau)$ is metrizable, then $(E, \tau)$ is Hausdorff and first countable.
|
|
#### Archived
This topic is now archived and is closed to further replies.
# Collision impulse
This topic is 5335 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hello, I have another perhaps newbie question for all you math and physics gurus. I have tried to implement Chris Hecker's impulse formulas from http://www.d6.com/users/checker/pdfs/gdmphys3.pdf, but with no success. I want to implement collision response for a bounding box against the static world. I use this formula:

j = (-(1+e)v1ab.n) / (n.n(1/ma + 1/mb) + (rap.n)^2/Ia + (rbp.n)^2/Ib)

I have reduced this to the following, because object 'b' is the static world:

j = (-(1+e)v1.n) / (n.n(1/m) + (rp.n)^2/I)

Then I use:

v2 = v1 + (j/m)*n //linear velocity
W2 = W1 + (rp.jn/I) //angular velocity

But I get really strange values for the linear velocity. Is the collision point, rp, supposed to be relative to the center of mass? The inertias, I, I store in a vector with the inertia for the three axes. Same with the collision normal, n, and the angular velocity, w. For example, with the following values I get the following answer:

n = {0,0,1}
e = 1
ma = 1472
I = {2000, 2500, 500}
v1 = {0,0,-10}
rp = {0,0,10}

v2 = {0., 0., -9.973}

I find this a bit strange. How can the box still travel in the same direction? The collision point (rp) is right on the Z-axis of the collision box, so there wouldn't be any change in rotation; the box would just bounce back with {0,0,10} as its velocity vector. What am I doing wrong? I am very confused here and would gladly receive some help.

/Niklas
##### Share on other sites
rp should be relative to the centre of mass of the body, and if your body hits it head on, rp should be {0, 0, -10}
your change in velocity looks OK; you lose momentum, so the impulse is directed the right way. How do you calculate the inertia? It's got to be consistent with your body's mass (the inertia matrix looks nice and big, but the inverse inertia should be tiny), and the collision point should also be consistent with the size of your body. If you have a box-shaped body, the collision point has to be on the body surface. If it is too far forward (not on the box surface), rp.n will be massive (it is even squared), which will weaken the impulse. Remember, the inertia calculations rely on the box's size, so size and distances matter.
I think V2 = V1 + (n * j / ma), so j should be well in the range of 1,000s
Have a look at my rigid body demo if you like, I did a similar algorithm.
- Oli.
Home
rigid body demo
Sphere/cube/Triangle Collision And Physics demo
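To make the numbers concrete, here is a minimal NumPy sketch (illustrative only, not code from the demos linked above) of Hecker's impulse for a dynamic box against static geometry, using the values from the original post; it treats the diagonal inertia as a full tensor so the r x n term is handled properly:

```python
import numpy as np

e = 1.0                                  # restitution
m = 1472.0                               # mass
I_inv = np.linalg.inv(np.diag([2000.0, 2500.0, 500.0]))  # inverse inertia tensor
n = np.array([0.0, 0.0, 1.0])            # collision normal
v = np.array([0.0, 0.0, -10.0])          # velocity of the contact point
r = np.array([0.0, 0.0, 10.0])           # contact point relative to the COM

# j = -(1+e)(v.n) / (1/m + ((I^-1 (r x n)) x r).n)  -- static world, so no 'b' terms
denom = 1.0 / m + np.dot(np.cross(I_inv @ np.cross(r, n), r), n)
j = -(1.0 + e) * np.dot(v, n) / denom    # ~29440 for these values

v2 = v + (j / m) * n                     # new linear velocity
w2 = I_inv @ np.cross(r, j * n)          # change in angular velocity

print(j, v2, w2)  # head-on hit: v2 = [0, 0, +10], no change in spin
```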
##### Share on other sites
Okay, I don't think I'm with you on the inertia bit. I only have the inertia calculated for the three different axes and store it in a vector. How should you store it in a matrix? And should it be the inverse of that matrix when put in the above formulas?
/Niklas
[edited by - Niklas2k2 on November 12, 2003 11:39:51 AM]
##### Share on other sites
Usually, inertia matrices for simple, well-balanced shapes are diagonal matrices, so your vector components would be the diagonal of a 3x3 matrix. I'm not sure how you calculate your inertia matrix, but storing it in a vector is definitely wrong.
Anyway, you can't divide a vector by a vector.
AFAIK, an inertia matrix for an axis-aligned box of size (x, y, z) is calculated as follows:
x2 = Box.Size.x * Box.Size.x;
y2 = Box.Size.y * Box.Size.y;
z2 = Box.Size.z * Box.Size.z;
ix = (y2 + z2) / 12;
iy = (x2 + z2) / 12;
iz = (x2 + y2) / 12;
Box.InertiaTensor = [ix 0 0]
                    [ 0 iy 0]
                    [ 0 0 iz]
Box.LocalInertia = Box.InertiaTensor * Box.Mass;
Box.LocalInvInertia = Box.InertiaTensor.Transpose() / Box.Mass;
Box.WorldInvInertia = Box.Orientation * Box.InvInertiaTensor.Transpose() * Box.InvOrientation.Transpose();
Box.WorldInvInertia is the inertia matrix you use in the collision impulse equation.
The inertia tensor (not sure if the term is right) is the inertia matrix for an axis-aligned box of mass 1.
The local inertia is the inertia of the axis-aligned weighted box.
The world inertia is the inertia of the oriented box in world space (that is what you use when you calculate the collision impulse and the rotational velocity and acceleration).
The inverse is, well, ...the inverse. Since the matrix is orthogonal, the inverse can be simplified with a transpose.
##### Share on other sites
Okay, thanks for taking the time :D
I've looked at your rigid body demo a bit and seen your calculations.
the
ix = (y2 + z2) / 12;
iy = (x2 + z2) / 12;
iz = (x2 + y2) / 12;
are the same formulas that I use.
But when you invert it I do not quite follow.
How can transposing
[ix 0 0]
[ 0 iy 0]
[ 0 0 iz]
be the same as inverting it? Transposing this matrix does nothing, if I'm not mistaken.
/Niklas
##### Share on other sites
I got my knickers in a twist. Transpose = inverse only for orthogonal matrices, like orientation matrices, not inertia matrices. Inverse does mean inverse, unfortunately. D'oh.
to be on the safe side...
Box.LocalInertia = Box.InertiaTensor * Box.Mass;
Box.LocalInvInertia = Box.LocalInertia.Inverse();
Box.WorldInvInertia = Box.Orientation * Box.LocalInvInertia * Box.Orientation.Transpose();
Some kind of a brain fart. Working too hard, I suppose.
[edited by - oliii on November 13, 2003 6:43:08 PM]
|
|
## Geometry: Common Core (15th Edition)
Published by Prentice Hall
# Chapter 6 - Polygons and Quadrilaterals - Get Ready! - Page 349: 4
#### Answer
$\overline{AB}$ is parallel to $\overline{CD}$.
#### Work Step by Step
We could call $\angle ABC$ and $\angle DCB$ alternate interior angles. If two lines or line segments are cut by a transversal, and alternate interior angles are equal, then those lines or line segments are parallel. We see from the diagram that $\angle ABC$ and $\angle DCB$ are congruent; therefore, $\overline{AB}$ is parallel to $\overline{CD}$.
|
|
# Session 07: Some Special Probability Distributions of Continuous Random Variables
## Presentation titled: "Session 07: Some Special Probability Distributions of Continuous Random Variables" (presentation transcript)
Course: A Statistik Ekonomi (Economic Statistics); Year: 2010
Outline: the normal distribution, the standard normal distribution, the sampling distribution of the mean, and the sampling distribution of the proportion.
Learning Outcomes: By the end of this session, students are expected to be able to compute normal and standard normal probabilities and to apply the normal distribution.
The Normal Distribution
The formula that generates the normal probability distribution is $f(x) = \dfrac{1}{\sigma \sqrt{2\pi}}\, e^{-(x-\mu)^2 / (2\sigma^2)}$. The shape and location of the normal curve change as the mean and standard deviation change.
The Standard Normal Distribution
To find P(a < x < b), we need to find the area under the appropriate normal curve. To simplify the tabulation of these areas, we standardize each value of x by expressing it as a z-score, $z = (x - \mu)/\sigma$: the number of standard deviations $\sigma$ it lies from the mean $\mu$.
The Standard Normal (z) Distribution
Mean = 0; standard deviation = 1. When $x = \mu$, $z = 0$. The curve is symmetric about $z = 0$: values of z to the left of center are negative, and values of z to the right of center are positive. The total area under the curve is 1.
Using Table 3
The four-digit probability in a particular row and column of Table 3 gives the area under the z curve to the left of that particular value of z (for example, the area for z = 1.36).
Using Table 3
To find an area to the left of a z-value, read the area directly from the table. To find an area to the right of a z-value, find the area in Table 3 and subtract it from 1. To find the area between two values of z, find the two areas in Table 3 and subtract one from the other. Remember the Empirical Rule: approximately 99.7% of the measurements lie within 3 standard deviations of the mean, $P(-3 \leq z \leq 3) = .9974$, and approximately 95% lie within 2 standard deviations of the mean, $P(-1.96 \leq z \leq 1.96) = .9500$.
Working Backwards
Find the value of z that has area .25 to its left. 1. Look for the four-digit area closest to .2500 in Table 3. 2. What row and column does this value correspond to? 3. z = -.67. 4. What percentile does this value represent? The 25th percentile, or 1st quartile (Q1).
Working Backwards
Find the value of z that has area .05 to its right. The area to its left will be 1 - .05 = .95. Look for the four-digit area closest to .9500 in Table 3. Since .9500 is halfway between .9495 and .9505, we choose z halfway between 1.64 and 1.65: z = 1.645.
Finding Probabilities for the General Normal Random Variable
To find an area for a normal random variable x with mean $\mu$ and standard deviation $\sigma$, standardize or rescale the interval in terms of z, and find the appropriate area using Table 3. Example: x has a normal distribution with $\mu = 5$ and $\sigma = 2$. Find P(x > 7): $z = (7-5)/2 = 1$, so $P(x > 7) = P(z > 1) = 1 - .8413 = .1587$.
Example
The weights of packages of ground beef are normally distributed with mean 1 pound and standard deviation .10 pound. What is the probability that a randomly selected package weighs between 0.80 and 0.85 pound?
Example
What is the weight of a package such that only 1% of all packages exceed this weight?
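As a cross-check of these two package examples, here is a minimal SciPy sketch (for illustration; the normal CDF stands in for the slides' Table 3 lookups):

```python
from scipy.stats import norm

mu, sigma = 1.0, 0.10                 # mean and standard deviation, in pounds

# P(0.80 < x < 0.85): the z-scores are -2.0 and -1.5
print(norm.cdf(0.85, mu, sigma) - norm.cdf(0.80, mu, sigma))  # ~0.0440

# Weight exceeded by only 1% of packages: the 99th percentile
print(norm.ppf(0.99, mu, sigma))      # ~1.23 lb
```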
Sampling Distribution of Means
The mean of the sample means and the standard deviation of the sample mean (often called the standard error of the mean) are $\mu_{\bar{x}} = \mu$ and $\sigma_{\bar{x}} = \sigma / \sqrt{n}$.
Sampling distribution of a single mean
If a random sample of size n is drawn from a population of size N with mean $\mu$ and standard deviation $\sigma$, then for sufficiently large n the sampling distribution of the mean approaches a normal distribution with $\mu_{\bar{x}} = \mu$ and $\sigma_{\bar{x}} = \sigma/\sqrt{n}$. Accordingly, $Z = \dfrac{\bar{x} - \mu}{\sigma/\sqrt{n}}$.
Example: Given that the population of men has normally distributed weights with a mean of 172 lb and a standard deviation of 29 lb, if 12 different men are randomly selected, find the probability that their mean weight is greater than 167 lb.
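A quick numeric check of this example (illustrative sketch) using the standard error of the mean:

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 172, 29, 12
se = sigma / sqrt(n)                  # standard error of the mean, ~8.37 lb
print(1 - norm.cdf(167, mu, se))      # P(sample mean > 167) ~ 0.72
```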
The Normal Approximation to the Binomial
We can calculate binomial probabilities using the binomial formula or the cumulative binomial tables. When n is large and p is not too close to zero or one, areas under the normal curve with mean np and variance npq can be used to approximate binomial probabilities.
Approximate a Binomial Distribution with a Normal Distribution if:
$np \geq 5$ and $nq \geq 5$; then $\mu = np$ and $\sigma = \sqrt{npq}$, and the random variable has an approximately normal distribution.
Approximating the Binomial
Make sure to include the entire rectangle for the values of x in the interval of interest; this is called the continuity correction. Standardize the values of x using $z = \dfrac{x - np}{\sqrt{npq}}$. Make sure that np and nq are both greater than 5 to avoid inaccurate approximations!
Example
Suppose x is a binomial random variable with n = 30 and p = .4. Use the normal approximation to find $P(x \leq 10)$. Here n = 30, p = .4, q = .6, np = 12 and nq = 18, so the normal approximation is OK.
Example
A production line produces AA batteries with a reliability rate of 95%. A sample of n = 200 batteries is selected. Find the probability that at least 195 of the batteries work. Success = a working battery; n = 200, p = .95, np = 190, nq = 10, so the normal approximation is OK.
Sampling distribution of the difference of two means
If independent random samples of sizes $n_1$ and $n_2$ are drawn from populations of sizes $N_1$ and $N_2$ with means $\mu_1$ and $\mu_2$ and standard deviations $\sigma_1$ and $\sigma_2$, then for sufficiently large $n_1$ and $n_2$ the sampling distribution of the difference of the two means approaches a normal distribution with $\mu_{\bar{x}_1 - \bar{x}_2} = \mu_1 - \mu_2$ and $\sigma_{\bar{x}_1 - \bar{x}_2} = \sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}$. Accordingly, $Z = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}}$.
Sampling distribution of the difference of two proportions
If independent random samples of sizes $n_1$ and $n_2$, with sample proportions $\hat{p}_1$ and $\hat{p}_2$, are drawn from populations of sizes $N_1$ and $N_2$ with proportions $p_1$ and $p_2$, then for sufficiently large $n_1$ and $n_2$ the sampling distribution of the difference of the two proportions approaches a normal distribution with $\mu_{\hat{p}_1 - \hat{p}_2} = p_1 - p_2$ and $\sigma_{\hat{p}_1 - \hat{p}_2} = \sqrt{p_1 q_1/n_1 + p_2 q_2/n_2}$. Accordingly, $Z = \dfrac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{p_1 q_1/n_1 + p_2 q_2/n_2}}$.
HAPPY STUDYING, AND MAY YOU ALWAYS SUCCEED
|
|
# B28: Thin Lenses - Ray Tracing
A lens is a piece of transparent material whose surfaces have been shaped so that, when the lens is in another transparent material (call it medium 0), light traveling in medium 0, upon passing through the lens, is redirected to create an image of the light source. Medium 0 is typically air, and lenses are typically made of glass or plastic. In this chapter we focus on a particular class of lenses, a class known as thin spherical lenses. Each surface of a thin spherical lens is a tiny fraction of a spherical surface. For instance, consider the two spheres:
A piece of glass in the shape of the intersection of these two spherical volumes would be a thin spherical lens. The intersection of two spherical surfaces is a circle. That circle would be the rim of the lens. Viewed face on, the outline of a thin spherical lens is a circle.
The plane in which that circle lies is called the plane of the lens. Viewing the lens edge-on, the plane of the lens looks like a line.
Each surface of a thin spherical lens has a radius of curvature. The radius of curvature of a surface of a thin spherical lens is the radius of the sphere of which that surface is a part. Designating one surface of the lens as the front surface of the lens and one surface as the back surface, in the following diagram:
we can identify $$R_1$$ as the radius of curvature of the front surface of the lens and $$R_2$$ as the radius of curvature of the back surface of the lens.
The defining characteristic of a lens is a quantity called the focal length of the lens. At this point, I’m going to tell you how you can calculate a value for the focal length of a lens, based on the physical characteristics of the lens, before I even tell you what focal length means. (Don’t worry, though, we’ll get to the definition soon.) The lens-maker’s equation gives the reciprocal of the focal length in terms of the physical characteristics of the lens (and the medium in which the lens finds itself):
The Lens-Maker’s Equation:
$\frac{1}{f}=(n-n_0) \Big(\frac{1}{R_1}+\frac{1}{R_2} \Big) \label{28-1}$
where:
• $$f$$ is the focal length of the lens,
• $$n$$ is the index of refraction of the material of which the lens is made,
• $$n_o$$ is the index of refraction of the medium surrounding the lens ($$n_o$$ is typically 1.00 because the medium surrounding the lens is typically air),
• $$R_1$$ is the radius of curvature of one of the surfaces of the lens, and,
• $$R_2$$ is the radius of curvature of the other surface of the lens.
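For a quick numeric illustration of the lens-maker's equation (the values here are chosen for illustration, not taken from the text; the sign convention is the one introduced just below, with convex surfaces positive):

```python
def focal_length(n, n0, R1, R2):
    # Lens-maker's equation in the form used above: 1/f = (n - n0)(1/R1 + 1/R2)
    return 1.0 / ((n - n0) * (1.0 / R1 + 1.0 / R2))

# Biconvex glass lens in air: n = 1.5, R1 = R2 = +0.20 m  ->  f = +0.20 m (converging)
print(focal_length(1.5, 1.00, 0.20, 0.20))

# Biconcave lens: both surfaces concave, R1 = R2 = -0.20 m  ->  f = -0.20 m (diverging)
print(focal_length(1.5, 1.00, -0.20, -0.20))
```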
Before we move on from the lens-maker’s equation, I need to tell you about an algebraic sign convention for the R values. There are two kinds of spherical lens surfaces. One is the “curved out” kind possessed by any lens that is the intersection of two spheres. (This is the kind of lens that we have been talking about.) Such a lens is referred to as a convex lens (a.k.a. a converging lens) and each (“curved out”) surface is referred to as a convex surface. The radius of curvature $$R$$ for a convex surface is, by convention, positive.
The other kind of lens surface is part of a sphere that does not enclose the lens itself. Such a surface is said to be “curved in” and is called a concave surface.
By convention, the absolute value of $$R$$ for a concave surface is still the radius of the sphere whose surface coincides with that of the lens. But, the quantity $$R$$ contains additional information in the form of a minus sign used to designate the fact that the surface of the lens is concave. $$R$$ is still called the radius of curvature of the surface of the lens despite the fact that there is no such thing as a sphere whose radius is actually negative.
Summarizing, our convention for the radius of curvature of the surface of a lens is:
| Surface of Lens | Algebraic Sign of Radius of Curvature R |
|---|---|
| Convex | + |
| Concave | - |
So, what does a lens do? It refracts light at both surfaces. What's special about a lens is the effect that it has on an infinite set of rays, collectively. We can characterize the operational effect of a lens in terms of the effect that it has on incoming rays that are all parallel to the principal axis of the lens. (The principal axis of a lens is an imaginary line that is perpendicular to the plane of the lens and passes through the center of the lens.) A converging lens causes all such rays to pass through a single point on the other side of the lens. That point is the focal point $$F$$ of the lens. Its distance from the lens is called the focal length $$f$$ of the lens.
Note that in the diagram, we show the rays of light undergoing an abrupt change in direction at the plane of the lens. This is called the thin lens approximation and we will be using it in all our dealings with lenses. You know that the light is refracted twice in passing through a lens, once at the interface where it enters the lens medium, and again where it exits the lens medium. The two refractions together cause the incoming rays to travel in the directions in which they do travel. The thin lens approximation treats the pair of refractions as a single path change occurring at the plane of the lens. The thin lens approximation is good as long as the thickness of the lens is small compared to the focal length, the object distance, and the image distance.
Rays parallel to the principal axis of the lens that enter the lens from the opposite direction (opposite the direction of the rays discussed above) will also be caused to converge to a focal point on the other side of the lens. The two focal points are one and the same distance $$f$$ from the plane of the lens.
The two phenomena discussed above are reversible in the sense that rays of light coming from a point source, at either focal point, will result in parallel rays on the other side of the lens. Here we show that situation for the case of a point source at one of the focal points:
and here we show it for the case of a point source at the other focal point.
The important thing about this is that, any ray that passes through the focal point on its way to the lens is, after passing through the lens, going to be parallel to the principal axis of the lens.
In the case of a diverging lens, incoming parallel rays are caused to diverge:
so that they travel along lines which, trace-back shows, all pass through one and the same point. That is, on passing through the lens, the once-parallel rays diverge as if they originated from a point. That point is known as the focal point of the diverging lens. The distance from the plane of the lens to the focal point is the magnitude of the focal length of the lens. But, by convention, the focal length of a diverging lens is negative. In other words, the focal length of a diverging lens is the negative of the distance from the plane of the lens to the focal point.
As in the case of the converging lens, there is another focal point on the other side of the lens, the same distance from the plane of the lens as the focal point discussed above:
This effect is reversible in that any ray that is traveling through space on one side of the lens, and is headed directly toward the focal point on the other side of the lens, will, upon passing through the lens, become parallel to the principal axis of the lens.
Our plan here is to use the facts about what a lens does to incoming rays of light that are parallel to the principal axis of a lens, or are heading directly toward or away from a focal point, to determine where a lens will form an image of an object. Before we do that, I need to tell you one more thing about both kinds of thin spherical lenses. This last fact is a reminder that our whole discussion is an approximation that hinges on the fact that the lenses we are dealing with are indeed thin. Here's the new fact: any ray that is headed directly toward the center of a lens goes straight through. The justification is that at the center of the lens, the two surfaces of the lens are parallel. So, to the extent that they are parallel in a small region about the center of the lens, it is as if the light is passing through a thin piece of plate glass (or any transparent medium shaped like plate glass). When light in air is incident on plate glass at some angle of incidence other than 0°, then after it gets through both air/glass interfaces, the ray is parallel to the incoming ray. The amount by which the outgoing ray is shifted sideways, relative to the incoming ray, depends on how thick the plate is: the thinner the plate, the closer the outgoing ray is to being collinear with the incoming ray. In the thin lens approximation, we treat the outgoing ray as being exactly collinear with the incoming ray.
# Using Ray Tracing Diagrams
Given an object of height $$h$$, the object position $$o$$, and the focal length $$f$$ of the lens with respect to which the object position is given, you need to be able to diagrammatically determine: where the image of that object will be formed by the lens, how big the image is, whether the image is erect (right side up) or inverted (upside down), and whether the image is real or virtual (these terms will be defined soon). Here’s how you do that for the case of a diverging lens of specified focal length for which the object distance $$o > |f |$$:
Draw the plane of the lens and the principal axis of the lens. Draw the lens, but think of it as an icon, just telling you what kind of lens you are dealing with. As you proceed with the diagram be careful not to show rays changing direction at the surface of your icon. Also, make sure you draw a diverging lens if the focal length is negative. Measure off the distance |f | to both sides of the plane of the lens and draw the focal points. Measure off the object distance o from the plane of the lens, and, the height h of the object. Draw in the object.
We determine the position of the image of the tip of the arrow by means of three principal rays. The three principal rays are rays on which the effect of the lens is easy to determine based on our understanding of what a lens does to incoming rays that are traveling toward the center of the lens, incoming rays that are traveling toward or away from a focal point, and incoming rays that are traveling directly toward the center of the lens. Let’s start with the easy one, Principal Ray I. It leaves the tip of the arrow and heads directly toward the center of the lens. It goes straight through.
Next comes Principal Ray II. It comes in parallel to the principal axis of the lens, and, at the plane of the lens, jumps on a diverging line, which, if traced back, passes through the focal point on the same side of the lens as the object. Note the need for trace-back.
In the case of a diverging lens, Principal Ray III is the ray that, as it approaches the lens, is headed straight for the focal point on the other side of the lens. At the plane of the lens, Principal Ray III jumps onto a path that is parallel to the principal axis of the lens.
Note that, after passing through the lens, all three rays are diverging from each other. Trace-back yields the apparent point of origin of the rays, the image of the tip of the arrow. It is at the location where the three lines cross. (In practice, using a ruler and pencil, due to human error, the lines will cross at three different points. Consider these to be the vertices of a triangle and draw the tip of the arrow at what you judge to be the geometric center of the triangle.) Having located the image of the tip of the arrow, draw the shaft of the image of the arrow, showing that it extends from point of intersection, to the principal axis of the lens, and, that it is perpendicular to the principal axis of the lens.
Measurements with a ruler yield the image height $$h′$$ and the magnitude of the image distance $$|i|$$. The image is said to be a virtual image. A virtual image of a point, is a point from which rays appear to come, as determined by trace-back, but, through which the rays do not all, actually pass. By convention, the image distance is negative when the image is on the same side of the lens as the object. A negative image distance also signifies a virtual image. Note that the image is erect. By convention, an erect image has a positive image height $$h′$$. The magnification M is given by:
$M=\frac{h'}{h}$
By convention, a positive value of $$M$$ means the image is erect (right side up).
For the case of a converging lens, Principal Ray I is identical to the corresponding ray for the diverging lens. It starts out headed straight for the center of the lens, and, it goes straight through. Principal Ray II starts out the same way Principal Ray II did for the diverging lens—it comes in parallel to the principal axis of the lens—but, starting at the plane of the lens, rather than diverging, it is caused to converge to the extent that it passes through the focal point on the other side of the lens.
Principal Ray III, for a converging lens (with the object farther from the lens than the focal point is), passes through the focal point on the same side of the lens (the side of the lens the object is on) and then, when it gets to the plane of the lens, comes out parallel to the principal axis of the lens.
If you position yourself so that the rays, having passed through the lens, are coming at you, and, you are far enough away from the lens, you will again see the rays diverging from a point. But this time, all the rays actually go through that point. That is, the lens converges the rays to a point, and they don’t start diverging again until after they pass through that point. That point is the image of the tip of the arrow. It is a real image. You can tell because if you trace back the lines the rays are traveling along, you come to a point through which all the rays actually travel. Identifying the crossing point as the tip of the arrow, we draw the shaft and head of the arrow.
This time, the image is inverted. We can measure the length of the image and the distance of the image from the plane of the lens. By convention, the image height is negative when the image is inverted, and the image distance is positive when the image is on the side of the lens opposite that of the object. The magnification M is again given by
$M=\frac{h'}{h}\label{28-2}$
which, with $$h′$$ being negative, turns out to be negative itself. This is consistent with the convention that a negative magnification means the image is inverted.
Principal Ray III is different for the converging lens when the object is closer to the plane of the lens than the focal point is:
Principal Ray III, like every principal ray, starts at the tip of the object and travels toward the plane of the lens. In the case at hand, on its way to the plane of lens, Principal Ray III travels along a line that, if traced back, passes through the focal point on the same side of the lens as the object.
This concludes our discussion of the determination of image features and position by means of ray tracing. In closing this chapter, I summarize the algebraic sign conventions in the form of a table:

| Physical Quantity | Symbol | Sign Convention |
|---|---|---|
| focal length | $f$ | + for converging lens; - for diverging lens |
| image distance | $i$ | + for real image (on opposite side of lens from the object); - for virtual image (on same side of lens as the object) |
| image height | $h'$ | + for erect image; - for inverted image |
| magnification | $M$ | + for erect image; - for inverted image |
|
|
## Analytical engine was invented by?
• Ada Byron
• Blaise Pascal
• Charles Babbage
• Herman Hollerith
...
Explanation: Charles Babbage was an English mathematician and computer pioneer who proposed the Analytical Engine, a mechanical general-purpose computer, in 1837. It was the successor of Babbage's Difference Engine.
|
|
# Eigenvalue Analysis
## Eigenvalue Analysis
### Generalized Eigenvalue Problems
In free oscillation analysis of continuous bodies, a spatial discretization is performed, and the body is modeled as a multi-DOF system with concentrated mass points, as shown in Fig. 2.3.1. For free oscillation problems without damping, the governing equation (equation of motion) is as follows:

$$M \ddot{u} + K u = 0 \label{eq:2.3.1}$$
where $u$ is the generalized displacement vector, $M$ is the mass matrix and $K$ is the stiffness matrix. Further, a harmonic function $u(t)$ is assumed, with $\omega$ as the inherent angular frequency; $a$, $b$ and $c$ as arbitrary constants; and $x$ as an amplitude vector. Such a function satisfies $\ddot{u} = -\omega^{2} u$, and substituting it and its second derivative into Eq.$\eqref{eq:2.3.1}$ yields

$$\left( K - \omega^{2} M \right) x = 0$$

That is, the following equation is obtained:

$$K x = \lambda M x \label{eq:2.3.5}$$
Therefore, if a coefficient $\lambda(=\omega^2)$ and a vector $x$ that satisfy Eq.$\eqref{eq:2.3.5}$ can be determined, the function $u(t)$ is a solution of Eq.$\eqref{eq:2.3.1}$.
The coefficient $\lambda$ and the vector $x$ are called the eigenvalue and eigenvector, respectively, and the problem of determining them from Eq.$\eqref{eq:2.3.5}$ is known as a generalized eigenvalue problem.
Fig. 2.3.1: Example of a multi-DOF system of free oscillation without damping
### Problem Settings
Eq.$\eqref{eq:2.3.5}$, which can be posed at any order, appears in many situations. When dealing with physical problems, the matrix is often Hermitian (symmetric). For a complex matrix this means the transpose equals the complex conjugate, and a real Hermitian matrix is a symmetric matrix. Therefore, when the $ij$ component of matrix $K$ is defined as $k_{ij}$ and the complex conjugate of $k$ is written as $\bar{k}$, the relationship becomes

$$k_{ij} = \bar{k}_{ji}$$
In this study, it is assumed that the matrices are symmetric and positive definite. A positive definite matrix is a symmetric matrix with all positive eigenvalues; that is, it always satisfies Eq.$\eqref{eq:2.3.7}$:

$$x^{T} K x > 0 \quad (x \neq 0) \label{eq:2.3.7}$$
### Shifted Inverse Iteration Method
Structural analyses with the finite element method do not require all eigenvalues; in many cases, just a few low-order eigenvalues are sufficient. HEC-MW was designed to deal with large-scale problems; thus, the matrices are large and very sparse (with many zeros). Therefore, it is important to take this into account and determine the low-order eigenvalues efficiently.
When the lower limit of the eigenvalues is set to $\sigma$, Eq.$\eqref{eq:2.3.5}$ is modified according to the following (mathematically equivalent) equation:

$$\left( K - \sigma M \right)^{-1} M x = \frac{1}{\lambda - \sigma} x \label{eq:2.3.8}$$
This equation has the following convenient properties for calculation:
1. The mode is inverted.
2. The eigenvalues around $\sigma$ are maximized.
In actual calculations, the maximum eigenvalue is often determined first. Therefore, the main convergence calculation is applied to Eq.$\eqref{eq:2.3.8}$ rather than Eq.$\eqref{eq:2.3.5}$, in order to determine the eigenvalues around $\sigma$. This method is called shifted inverse iteration.
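For illustration, here is a minimal sketch of the shift-invert approach using SciPy (an illustrative assumption; HEC-MW's own solver is not SciPy-based). `eigsh` with a shift solves $K x = \lambda M x$ for the eigenvalues nearest the shift:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import eigsh

n = 1000
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()  # stiffness-like matrix
M = identity(n, format='csc')                                   # lumped (diagonal) mass

# Shift-invert about sigma = 0: returns the eigenvalues closest to the shift,
# i.e. the low-order modes, without computing the whole spectrum.
vals, vecs = eigsh(K, k=5, M=M, sigma=0.0, which='LM')
print(vals)
```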
### Algorithm for Eigenvalue Solution
The Jacobi method is an orthodox and popular method for solving eigenvalue problems.
It is effective for small, dense matrices; however, the matrices dealt with by HEC-MW are large and sparse, so the Lanczos iteration is preferred.
### Lanczos Method
The Lanczos method was proposed by C. Lanczos in the 1950s and is an algorithm for tridiagonalizing a matrix. The following are some of its characteristics:
1. It is an iterative convergence method that allows calculation of a matrix even if it is sparse.
2. The algorithm is built around matrix-vector products and is suitable for parallelization.
3. It is suitable for the geometric segmentation associated with finite element meshes.
4. It is possible to limit the number of eigenvalues to be determined and mode range to make the calculation more efficient.
The Lanczos method creates sequential orthogonal vectors, starting from the initial vector, to calculate the basis of subspaces. It is faster than the other subspace methods and is widely used in finite element method programs. However, this method is easily influenced by computer errors, which may impair the orthogonality of the vectors and interrupt it in the middle of the process. Therefore, it is essential to apply measures against errors.
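As a compact illustration of the recurrence, here is a minimal dense-matrix sketch (illustrative only; HEC-MW's actual implementation is parallel, sparse, and includes the safeguards against loss of orthogonality mentioned above):

```python
import numpy as np

def lanczos(A, q0, m):
    # Plain three-term Lanczos recurrence for a symmetric matrix A.
    # No reorthogonalization, so round-off gradually destroys the
    # orthogonality of the q vectors, the error issue noted in the text.
    q_prev, q = np.zeros(A.shape[0]), q0 / np.linalg.norm(q0)
    alpha, beta = [], [0.0]
    for _ in range(m):
        r = A @ q - beta[-1] * q_prev
        a = q @ r
        r -= a * q
        b = np.linalg.norm(r)
        alpha.append(a)
        beta.append(b)
        q_prev, q = q, r / b
    # Tridiagonal T_m: alpha on the diagonal, beta on the off-diagonals.
    return np.diag(alpha) + np.diag(beta[1:-1], 1) + np.diag(beta[1:-1], -1)

rng = np.random.default_rng(0)
A = rng.random((200, 200)); A = A + A.T       # random symmetric test matrix
T = lanczos(A, rng.random(200), 30)
# The extremal eigenvalues of T approximate those of A:
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```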
### Geometric Significance of the Lanczos Method
By applying a change of variable to Eq.$\eqref{eq:2.3.8}$
and rewriting the problem, the following equation is obtained:
An appropriate vector $q_0$ is linearly transformed with matrix $A$ (see Fig. 2.3.2).
Fig. 2.3.2: Linear Transformation of $q_0$ with Matrix $A$
The transformed vector is orthogonalized within the space created by the original vector; that is, it is subjected to the Gram–Schmidt orthogonalization shown in Fig. 2.3.2. If the vector obtained is defined as $r_1$ and normalized to length 1, it yields $q_1$ (Fig. 2.3.3). A similar calculation produces $q_2$ from $q_1$ (Fig. 2.3.4), which is orthogonal to both $q_1$ and $q_0$. If the same calculation is repeated, mutually orthogonal vectors are determined up to the order of the matrix.
Fig. 2.3.3: Vector $q_1$ orthogonal to $q_0$
Fig. 2.3.4: Vector $q_2$ Orthogonal to $q_1$ and $q_0$
The algorithm of the Lanczos method is a Gram–Schmidt orthogonalization on the vector sequence $\{A q_0, A^2 q_0, A^3 q_0, \ldots, A^n q_0\}$ or, in other words, $\{A q_0, A q_1, A q_2, \ldots\}$. This vector sequence is called a Krylov sequence, and the space it spans is called a Krylov subspace. If Gram–Schmidt orthogonalization is performed in this space, each new vector is determined from just the two preceding vectors (a three-term recurrence). This is called the principle of Lanczos.
### Tridiagonalization
The $(i+1)$-th calculation in the iteration above can be expressed as
In this case,
In matrix notation, this becomes
In this case,
That is, the eigenvalues are obtained through an eigenvalue calculation on the tridiagonal matrix obtained with Eq.$\eqref{eq:2.3.13}$.
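As a concrete illustration of the three-term recurrence and of one simple countermeasure against the loss of orthogonality mentioned earlier, here is a minimal Python sketch of the symmetric Lanczos process for the standard problem $Ax = \lambda x$. This is not the HEC-MW implementation; full reorthogonalization is used for simplicity.

```python
# Minimal sketch of symmetric Lanczos tridiagonalization.
# A is taken dense here purely for illustration; only the
# matrix-vector products A @ q are actually required.
import numpy as np

def lanczos_eigenvalues(A, m):
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q = np.random.default_rng(0).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization: a simple measure against the
        # rounding errors that would otherwise destroy orthogonality.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    # Eigenvalues of the small tridiagonal matrix T approximate
    # (a subset of) the eigenvalues of A.
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)
```

The extreme Ritz values converge first, which is why the method pairs naturally with the shifted inverse transformation described above when low-order modes are wanted.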
|
|
# Word problem involving a ship?
A ship at A is to sail to C, 56 km north and 258 km east of A. After sailing north 25 degrees 10 minutes east for 120 miles to P, the ship is headed toward C. Find the distance of P from C and the required course to reach C.
I do not know how to start solving it. The find part is just how the problem is written in my text. – Latino Heat Jul 7 '12 at 1:57
Are you doing trigonometry of the plane, or are you doing spherical trigonometry? If you are doing trigonometry of the plane, you are probably expected to assume that the Earth is flat. If distances are not too large, this gives a reasonably good approximation. Because the data mention minutes, you may be expected to do a spherical trig solution. – André Nicolas Jul 7 '12 at 2:03
The question is hard to understand. Can you check to see that you have copied it out exactly as it was written, or give us the reference so maybe we can see for ourselves, or put an image up somewhere? – Gerry Myerson Jul 7 '12 at 4:37
To answer the question in your title: Yes, what you wrote is a word problem involving a ship. – celtschk Jul 7 '12 at 20:59
I'll assume that you're working in the plane, rather than on the sphere. If that's not the case, skip this answer. Let's put some coordinates on your picture. Using $(east, north)$ coordinates, assume the ship starts at $A=(0,0)$ and sails to $P = (p_e,p_n)$. Along with the north axis, you can draw a right triangle with the lower angle $\alpha$ equal to 25 degrees 10 minutes and hypotenuse $193.1218$ (converting miles to kilometers). Now you can use $193.1218\sin\alpha$ and $193.1218\cos\alpha$ to find the coordinates $p_e$ and $p_n$ (do you see how?). Now from $P$ draw two lines, one to $C$ and one in the east direction. You can make another right triangle and using the (now known) lengths of the east and north sides you can find the heading angle and the distance to $C$. Hope this hint helps.
We are given the starting and ending positions $A(0,0)$ and $C(258,56)$. Suppose the angle $\theta = 25^\circ 10'$, measured from north as in the previous answer. Then the intermediate position is $P = (120 \sin(\theta), 120 \cos(\theta)) = (51.030, 108.609)$ in $(east, north)$ coordinates, and one can now determine the distance $PC$ and the angle (heading) from $P$ to $C$.
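For a quick numeric check of the plane-geometry approach (a sketch that keeps the 120-mile leg in miles, as in the second answer, and measures the bearing clockwise from north):

```python
# Numeric check of the plane-geometry solution, using (east, north)
# coordinates and a bearing of N 25 deg 10 min E measured from north.
import math

theta = math.radians(25 + 10 / 60)                  # 25 degrees 10 minutes
P = (120 * math.sin(theta), 120 * math.cos(theta))  # position of P
C = (258.0, 56.0)
d_east, d_north = C[0] - P[0], C[1] - P[1]
print(math.hypot(d_east, d_north))                  # distance from P to C
print(math.degrees(math.atan2(d_east, d_north)))    # course, deg east of north
```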
|
|
# Spectral approximation of a boundary condition for an eigenvalue problem
1 POEMS - Propagation des Ondes : Étude Mathématique et Simulation
Inria Saclay - Ile de France, UMA - Unité de Mathématiques Appliquées, CNRS - Centre National de la Recherche Scientifique : UMR7231
Abstract : To compute the guided modes of an optical fiber, the authors use a scalar approximation of Maxwell's equations. This formulation leads to a bidimensional eigenvalue problem set in an unbounded domain. An equivalent formulation set in a bounded domain is derived. The boundary condition involves a Fourier series expansion. For the numerical treatment, only a finite number N of terms of the series is retained. The authors prove that the error on the eigenvalues and the eigenfunctions decreases faster than any power of ${1 / N}$. Copyright © 1995 Society for Industrial and Applied Mathematics
Document type :
Journal articles
https://hal-ensta-paris.archives-ouvertes.fr//hal-01010193
Contributor: Aurélien Arnoux
Submitted on : Thursday, June 19, 2014 - 1:23:34 PM
Last modification on : Friday, January 22, 2021 - 11:54:03 AM
### Citation
Anne-Sophie Bonnet-Ben Dhia, Nabil Gmati. Spectral approximation of a boundary condition for an eigenvalue problem. SIAM Journal on Numerical Analysis, Society for Industrial and Applied Mathematics, 1995, 32 (4), pp.1263-1279. ⟨10.1137/0732058⟩. ⟨hal-01010193⟩
|
|
# Let $p$ be a prime such that $p\equiv3\pmod4$. If $p \mid a^2+b^2$, then $p \mid a,b$
How do I prove this small theorem? I know that it's quite useful. Are there other small theorems like this one? I am mostly searching for elementary proofs, not involving too complicated machinery...
• This isn't small. :-) – S. Y Sep 16 '16 at 18:51
• Well it isn't recognised as a theorem? Or is it? – Taumen Sep 16 '16 at 18:52
• It is a theorem. I will provide a proof later if nobody else does – S. Y Sep 16 '16 at 18:55
• Ok Thanks... In french, (I am french ;)) we have something that is called "lemme"... I don't know if that exists in English or something similar, but this one is recognised as a "lemme"... – Taumen Sep 16 '16 at 18:57
• Lemme is Lemma in English and German. It comes from the greek word $\lambda\eta\mu\mu\alpha$. – Dietrich Burde Sep 16 '16 at 19:01
I hope I didn't miss something; I think this is pretty elementary:
Using Fermat's little theorem, $a^p\equiv a\pmod{p}$ and $b^p\equiv b\pmod{p}$. Hence $a^{p+1}+b^{p+1}\equiv a^2+b^2 \equiv 0 \pmod{p}$. Because $4\mid p+1$, we can write $p+1=4k$ for some $k\in\mathbb{N}$. Now, using $b^2 \equiv -a^2 \pmod p$, we get $0\equiv a^{4k}+b^{4k}\equiv a^{4k}+(-a^2)^{2k}\equiv a^{4k}+a^{4k}\equiv 2a^{4k} \pmod{p}$. So $p$ divides $2a^{4k}$; but since $p>2$ it cannot divide $2$, so it must divide $a^{4k}$, and because $p$ is prime this forces $p\mid a$. Then $b^2 \equiv -a^2 \equiv 0 \pmod p$ gives $p\mid b$ as well.
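Not a proof, but here is a quick brute-force sanity check of the statement for some small primes $p \equiv 3 \pmod 4$ (a Python sketch):

```python
# Check: for p = 3 mod 4, whenever p | a^2 + b^2 we must have p | a and p | b.
def check(p):
    for a in range(p):
        for b in range(p):
            if (a * a + b * b) % p == 0 and (a != 0 or b != 0):
                return False
    return True

for p in [3, 7, 11, 19, 23, 31]:   # primes congruent to 3 mod 4
    assert check(p)
print("verified for small primes")
```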
The ring $\mathbf Z[i]$ is a principal ideal domain, and any prime that is $3$ modulo $4$ is inert in this ring. Indeed, if $p$ were not inert we could write $p = (a+bi)(a-bi) = a^2 + b^2$, and since squares are $0$ or $1$ modulo $4$, looking at this modulo $4$ shows that $p$ cannot be $3$ modulo $4$. Now, assume that $p$ divides $a^2 + b^2 = (a+bi)(a-bi)$; since $p$ remains prime in $\mathbf Z[i]$, it divides one of the factors on the right-hand side. Either way, $p$ divides both $a$ and $b$.
Another approach: if we have $a^2 + b^2 \equiv 0 \pmod{p}$ with $a, b \neq 0$, then $(a/b)^2 \equiv -1 \pmod{p}$, so $a/b$ has order $4$ in the group $(\mathbf Z/p \mathbf Z)^{\times}$, which has order $p - 1$. This is not divisible by $4$ as $p \equiv 3 \pmod{4}$, contradicting Lagrange's theorem.
• Is there a possibility to prove it without rings and complex numbers? I am searching an elementary proof that is not to complicated... – Taumen Sep 16 '16 at 19:00
• But it is a nice proof ;) – Taumen Sep 16 '16 at 19:00
• @DanielCortild your theorem means the same as: if $(a+bi)(a-bi) \equiv 0 \pmod p$ then $a+bi \equiv 0 \pmod p$ or $a-bi \equiv 0 \pmod p$. It's equivalent to $\mathbf Z[i]$ being a UFD, for the special case of primes in $\mathbf Z[i]$ that are in $\mathbf Z$. So there may not be a simpler proof, since the algebra of $\mathbf Z[i]$ really is involved here. – djechlin Sep 16 '16 at 19:03
• Ohhh... Ok I will take the time later to sit down and really try to understand it all... But thanks – Taumen Sep 16 '16 at 19:05
• I added another, arguably more elementary proof. – Ege Erdil Sep 16 '16 at 19:05
Your assertion can be restated in terms of the quadratic form $q(x,y) = x^2+y^2$ defined over the finite field $\mathbb{F}_p$ of order $p$ (for a prime number $p$): if $p \equiv 3 \pmod{4}$ then for all $(x,y) \in \mathbb{F}_p^2$, if $q(x,y) = 0$ then $x = y = 0$.
You ask for a generalization, so here is a (useful) one: let $F$ be any field of characteristic different from $2$. For $a,b,c \in F$ consider the binary quadratic form
$q(x,y) = ax^2 + bxy + cy^2$.
We say that $q$ is isotropic if there is $(x,y) \in F^2 \setminus (0,0)$ such that $q(x,y) = 0$ and otherwise anisotropic. And here we go:
(Small but Useful) Theorem: The binary form $q(x,y) = ax^2 + bxy + c y^2$ is isotropic over $F$ if and only if its discriminant $\Delta = b^2-4ac$ is a square in $F$ (meaning $\Delta = d^2$ for some $d \in F$).
Let me sketch the proof: feel free to ask if you want details. Since the characteristic is not $2$, we can diagonalize $q$ just by "completing the square". Moreover, replacing $q$ by $(1/a)*q$ changes the discriminant from $\Delta$ to $\frac{\Delta}{a^2}$ -- so does not affect whether it is a square. So we reduce to the case
$q'(x,y) = x^2 - \frac{\Delta}{4} y^2$, where the result is pretty clear: if $x,y \in F$ are not both $0$ and $q'(x,y) = 0$, then $x \neq 0$ and $y \neq 0$ and $\Delta = (2x/y)^2$. Conversely, if $\Delta = d^2$ then $q'(d/2,1) = 0$.
For the form $q(x,y) = x^2 + y^2$, the discriminant is $-4$, which is a square in $F$ iff $-1$ is a square in $F$. By (very) elementary number theory, when $F = \mathbb{F}_p$ for an odd prime $p$, we have that $-1$ is a square iff $p \equiv 1 \pmod{4}$.
To see why this is useful, now let $a,b,c \in \mathbb{Z}$ and consider the binary quadratic form $q(x,y) = ax^2 + bxy + cy^2$, of discriminant $\Delta$, and suppose that for a prime number $p$ not dividing $\Delta$ we have
$q(x,y) = p$. Then $x$ and $y$ are not both divisible by $p$: if $x = pX$, $y = pY$, then $q(x,y) = p^2 q(X,Y) = p$ is a contradiction. So we find that (the reduction modulo $p$ of) $q(x,y)$ is isotropic over $\mathbb{F}_p$ and thus that $\Delta$ is a square modulo $p$. Using quadratic reciprocity, this translates in every case to congruence conditions on $p$ modulo $\Delta$.
This is really the first step of the arithmetic study of binary quadratic forms over $\mathbb{Z}$. See for instance this lovely book of Cox and these notes based on the book, in particular the first handout. In the latter reference, I call this fact the "fundamental congruence": it appears (in the special case $x^2 + ny^2$) on the very first page of the notes.
|
|
# How can I minimize this equation using differentiation?
I'm a software engineer and do not have much mathematical knowledge. I'm now facing a problem in my research. I have a system of equations as below: $$P_1 = \alpha V_p + \beta I_c^2$$ $$P_2 = \alpha V_c + \beta I_l^2$$ $$I_c = (P_p - P_1)/V_c$$ $$I_l = (V_c I_c - P_2)/V_b$$ In these equations $\alpha, \beta, V_p, P_p$, and $V_b$ are constants. I want to minimize $P_1 + P_2$ by choosing an optimal value for $V_c$. Thanks in advance.
• It seems as there is a problem with recursion here...is it a system of differential equations? Could you please provide some more context? – marco trevi Apr 12 '16 at 8:11
• @marcotrevi Yes, there is a recursion problem, and the system is not differential. I just want to differentiate $P_1 + P_2$. – Mahmoud Apr 12 '16 at 9:09
I am afraid that an analytical solution could be a monster (if any exists). Considering $$P_1 = \alpha V_p + \beta I_c^2\tag 1$$ $$P_2 = \alpha V_c + \beta I_l^2\tag 2$$ $$I_c = (P_p - P_1)/V_c\tag 3$$ $$I_l = (V_c I_c - P_2)/V_b\tag 4$$ you could replace $I_c$ from $(3)$ and $I_l$ from $(4)$ by their expressions in $(1)$ and $(2)$. You then have two quite complex polynomial equations in $P_1$ and $P_2$. You could solve one of them (say $(2)$) to get $P_2$ as a function of $P_1$; however, the problem is which root to select?
Assume you know; so now you want to minimize $P_1+P_2(P_1)$ with respect to $V_c$; this makes another monster.
If I had to do it, I should consider numerical methods for optimization under four equality constraints $(1,2,3,4)$ and I suppose some bound constraints such as $P_1>0$, $P_2>0$ (if they apply).
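To make that concrete, a numerical route could look like the following sketch, where all constant values and the bounds on $V_c$ are purely illustrative assumptions; for each candidate $V_c$, `fsolve` resolves the recursion among $P_1, P_2, I_c, I_l$:

```python
# A minimal numerical sketch (not an analytical solution), assuming
# hypothetical values for the constants alpha, beta, V_p, P_p, V_b.
from scipy.optimize import fsolve, minimize_scalar

alpha, beta = 0.1, 0.05          # illustrative constants
V_p, P_p, V_b = 12.0, 100.0, 24.0

def residuals(z, V_c):
    P1, P2, I_c, I_l = z
    return [
        P1 - (alpha * V_p + beta * I_c**2),   # Eq. (1)
        P2 - (alpha * V_c + beta * I_l**2),   # Eq. (2)
        I_c - (P_p - P1) / V_c,               # Eq. (3)
        I_l - (V_c * I_c - P2) / V_b,         # Eq. (4)
    ]

def total_loss(V_c):
    # Solve the coupled system for this V_c, then return P1 + P2.
    P1, P2, _, _ = fsolve(residuals, x0=[1.0, 1.0, 1.0, 1.0], args=(V_c,))
    return P1 + P2

res = minimize_scalar(total_loss, bounds=(1.0, 50.0), method="bounded")
print("optimal V_c ~", res.x, "with P1+P2 ~", res.fun)
```

Solving the system numerically for each candidate $V_c$ sidesteps the root-selection issue of the analytical approach, and bound constraints such as $P_1 > 0$, $P_2 > 0$ can be checked on the solution afterwards.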
• These conditions $P_1 > 0$ and $P_2 > 0$ do hold. Is there any analytical solution under these conditions? – Mahmoud Apr 12 '16 at 9:59
|
|
# How much do lens lineups vary across DSLR platforms?
I was reading this question and recalled how often I hear the advice "choose your lenses and then choose the body that matches" in the context of deciding on a camera platform (Nikon/Canon/Pentax/Sony, etc).
I shoot Nikon, so I study a lot about Nikon-compatible lenses. From what I see, though, Canon seems to have an equivalent lens for most Nikon lenses. For example, there are the nifty fifties and the pro midrange zooms. On top of that, there are lots of lenses from 3rd-party manufacturers that come in versions for each system/mount.
I agree that lenses are a more important investment than a body; they have a huge impact on image quality and a longer product life cycle. And there's lots of variation between lenses within a platform's offering.
My question is: how much lens difference is there between the different platforms? Do you really pick the lens first and then pick the body that goes with it?
There is also the point about availability of the lens lineups & platforms in general (accessories, the cameras themselves) across different platforms in many countries. In this regard, the big names like Canon, Nikon & to an extent Sony are quite ubiquitous, while brands like Pentax & Olympus may not be very easily available in some of the developing markets. – ab.aditya Mar 4 '11 at 3:53
– mattdm Jan 21 '13 at 19:24
The lineups have a lot of overlap but there are considerable differences as well:
• Canon and Nikon have the most lenses by far, followed by Pentax, Sony, Olympus and Panasonic, in this order.
• Canon has the largest range of focal lengths, from 8 to 800mm, with Nikon a close second, going from 10 to 800mm. This is followed by Pentax, from 10 to 560mm, and then Sony, from 10mm to 500mm. Olympus has the shortest lineup, covering 7 to 300mm only.
• Canon has the most weather-sealed lenses, the most stabilized primes, and the most weather-sealed primes. Pentax has the most affordable weather-sealed lenses. Sony and Panasonic each have exactly two weather-sealed lenses; everyone else has more.
• Pentax has most of the smallest lenses and most of those are of extremely high-quality. Pentax lenses can save size and weight since they only need to be designed for cropped-sensors, although legacy lenses have full-frame coverage.
Specialty lenses:
Keep in mind that only Canon, Nikon and Panasonic need stabilized versions of their lenses, everyone else gets stabilization from the camera body.
+1 listing the specific differences. – Craig Walker Mar 1 '11 at 17:25
+1 This nice summary could save people a lot of research at the outset. – whuber Mar 1 '11 at 19:07
+1 The list, though, only touches on the primary providers. Some brand gaps will get covered by secondary options from companies like Sigma (such as going beyond 300mm on the Pentax mount). – John Cavan Mar 1 '11 at 20:37
@Philip - Yes, Canon covers a lot of ground. But it should not be about what you CAN do but about what you WANT to do. Weight is a serious concern for many and one reason people go with Pentax. Also, the stabilization issue is big. There are only 3 stabilized lenses with apertures wider than F/2.8; if you shoot in extremely low light, you can shoot at F/1.4 with stabilization with Pentax and Sony (also Olympus and Panasonic with third-party lenses). – Itai Mar 1 '11 at 22:52
@Jerry - Tied for what? Pentax has a few more lenses, but a shorter range of focal lengths (10-300 vs 11-500). On the other hand, they have a considerable number of weather-sealed lenses (Sony has zero). More than half of Sony's lenses are full-frame, which means heavy for a cropped-sensor camera ;) BTW, the 55mm F1.4 is weather-sealed and super-sonic, while the 50mm F1.4 is neither. I sold the 50mm to fund half of the 55mm, so I know. – Itai Mar 2 '11 at 13:44
If you're looking at generalities — are there normal-range primes, are there wide zooms, are there telephoto zooms — everyone has it covered. But if you start looking at specifics, there are meaningful differences.
This comes out in three different ways:
1. Individual quirks of a certain brand's lineup
2. Availability of niche/special-purpose lenses
3. Lenses in different price brackets
## Lineup Quirks
Pentax is the poster-child of a quirky lineup. Particularly, since they're very committed to APS-C rather than full-frame (steering you to the 645D if you want to go up), many typical lens types only exist in their 35mm-e field-of-view equivalents. For example, there's no 24-70mm / 70-200mm f/2.8 pro lens pair — instead, there's the DA★ 16-50mm and 50-135mm. There's no 85mm f/1.4 portrait lens — instead, there's the DA★ 55mm f/1.4. And the entire DA Limited and FA Limited series of jewel-like primes, with odd focal lengths and max apertures, pretty much trades on quirky.
Conversely, Canon does not offer very many non-entry-level lenses designed for APS-C, preferring to steer people who are interested in investing in lenses towards full frame. Nikon has put more effort into developing modern and interesting entry-level APS-C primes, but the nicer lenses are always full-frame.
Olympus and the Four-Thirds system is also somewhat quirky in the lens lineup, both for sensor-size reasons (there's no "nifty fifty", but there's the form-factor equivalent), and because it's an all-new designed-for-digital system with no legacy considerations (or legacy designs to fill gaps). That last means it's a rather small lineup overall.
And there are random "gaps" in the Big Two's offerings as well. Canon doesn't have a 12-24mm f/4, for example. (There are decent third-party offerings like Tokina's, though. A point, I should add, which also goes for 70-200mm on Pentax.) If some particular focal range or lens type is important to you, make sure to look for it.
## Specialty and Niche Lenses
Then, there's the issue of niche lenses. Nikon has three tilt-shift offerings, while Canon has four (including a 17mm); there's nothing in Four-Thirds, and for Pentax or Sony, only third-party options are available. On the other hand, if you want a super-compact and lightweight autofocusing "pancake" style normal lens (which, objectivity disclaimer, is what I use on my camera 90% of the time), Canon didn't have one until 2012 and Nikon doesn't have any, so you need to turn to Pentax or Panasonic/Olympus.
## Pricing and Lens Market Tiers
And finally, the price bracket distinction. Canon and Nikon have both the top and the bottom covered, from hundred-dollar 50mm f/1.8 lenses all the way up to the price of a new car.
Seriously — the Canon EF 800mm f/5.6L IS USM is $11,900 from B&H, and the Nikon AF-S Nikkor 600mm f/4G ED VR is $10,300. Both brands offer half a dozen lenses over $2000, and another dozen-and-a-half between one and two grand. Pentax doesn't have anything like that — the most expensive lenses they have for sale at B&H are the DA★ 60-250mm f/4 ED and the DA★ 300mm f/4, both of which come in at $1200. (You can order pricier lenses from Pentax Japan, as special build-to-order, but that doesn't really count.) The next-most expensive is the FA 31mm f/1.8, at $965. Sony is in between here, with the 300mm f/2.8 G-Series at $6300, and then about a dozen lenses between $1000 and $2000. Olympus too.
At the bottom end, Nikon and Canon again have things covered — cheap primes and zooms for all occasions. Sony too, although the selection is smaller. Pentax doesn't really have that. With the exception of the new cheap-normal DA 35mm f/2.4, they've mostly let those lenses (like the FA 35mm f/2) drop from the lineup.
But that's not the complete story — what Pentax has is a bunch of very nice medium-priced lenses, from $340 to $965. Some of these are almost legendary in their optical qualities (and not just among Pentax partisans), but they probably don't compete with the likes of the AF-S Nikkor 35mm f/1.4G. This may go back to the "quirky lineup" point overall: basically, they don't have super-cheap or super-expensive lenses, but the middle has some unique high-quality lenses.
I don't mean to slant this too personally (or to advocate my own choices for everyone), but on a personal note, lenses that cost over $1000 are nice to drool at, but practically speaking might as well not exist. If this is your bread and butter and those lenses cover your needs, though, definitely make sure not to choose Pentax. Conversely, if you're only going to pick up a few entry-level lenses and don't want to spend $500 for a single prime, Pentax might not be the best choice. Or, if you're just going with the kit lens plus maybe one telephoto zoom, they are fundamentally no different from any of the other major brands and it doesn't matter at all. (Unless you want to go all out collecting manual-focus legacy glass — a different story altogether....) For me, it fits my (saving up a bit!) budget, and I'm not compromising on quality. So, Pentax for me, specifically because their lens lineup is a great fit.
Good points about pricing. I do find that Sony and Pentax have a two-or-three grade system (good, better, best) while Canon seems to have at least 6 levels of price/quality compromises. Other brands are somewhere in between. – Itai Mar 1 '11 at 18:11
I picked a brand first.
I went and picked up several different cameras at a local store and compared them physically. Since each brand has technically similar lines, I decided that the ergonomics were a bigger factor than the technical aspects. For me, Canon bodies had a better feel. I have a friend that chose Nikon for the same reason.
I can see the argument for picking based on lenses, but that factor is negligible if you stay to larger brands (Canon, Nikon, and to some extent Pentax), as they all have an extensive lens selection.
To be honest, I think you've got the right answer - there's no substitute for holding something in your hands and seeing if it feels right. – AJ Finch Mar 2 '11 at 10:07
One thing to keep in mind, though, is that usability over a long time (such as the time for which one might own a camera!) is different from first impressions of ergonomics. While there may be such a thing as love at first sight, it's also true that handling characteristics which seem awful (or great) initially may turn out to be no big deal after a month of use, and there may be wonderful little touches that improve your everyday use which you don't discover until later. – mattdm Mar 2 '11 at 19:38
@mattdm That might be true if it were his first SLR. I discovered a lot of nice things on my D90 since I bought it, but the decisive basics like first and foremost the body-handling, then the bright pentaprism-viewfinder, on-top-lcd and fast-access wheels stay more important than the good and bad I found out later. (Took me some weeks to finally decide.) – Leonidas Mar 3 '11 at 3:14
@Leonidas — I'm assuming that this will be read by a lot more people than the specific original questioner. And, arguably, those big, basic things you mention don't need handling to discover — you can get a good sense of that level of things simply by reading dpreview (or anywhere else that does that style of review). I'm thinking about, for example, Canon's weird big flat back wheel as opposed to the way Nikon and Pentax do it; for me, that's always been really really awkward — but I bet if someone gave me a 5D MkII I'd get used to it in a month or less. – mattdm Mar 3 '11 at 3:34
@Leonidas — We'll have to agree to disagree, then. My point isn't that hands-on experience isn't better than reviews, but rather that in order to really get a proper impression, you need to really use a camera for a while. That's a flaw shared by both handling the camera in a store for a while and the big tech-focused review sites. Ordering something and returning if it doesn't suit is a good approach. In the US, B&H has a policy like that, although the number of exposures you can take before returning is limited without a restocking fee (fair enough). – mattdm Mar 4 '11 at 2:25
If you're interested in older, manual focus lenses, and not just lenses that are still in production (either because you already have several or, like me, just think they're fun and don't mind that they're typically not as optically sharp as modern lenses):
• Canon's EOS mount only dates back to the mid-1980s; Nikon's mount and the M42 mount used by Pentax have much older lenses available. So there aren't a lot of old Canon lenses that you can just mount on your Canon camera; there are plenty of old Nikon lenses you can put on your Nikon camera and M42 lenses you can put on your Pentax.
• Canon's lenses sit closer to the sensor than M42 lenses, which are closer than Nikon lenses. This means that Nikon and M42 lenses can be used on Canon DSLRs with adapters without compromising optical quality or losing infinity focus.
• Micro four-thirds cameras have the shortest lens-to-sensor (flange) distance of all of them, and can (with mechanical adapters) use virtually any old manual-focus lenses, including (I believe) rangefinder lenses.
if you're going to get into EVIL cameras (rather than actual SLRs) it's probably also worth mentioning that Sony's NEX cameras are pretty much the same as micro-four thirds in terms of short flange distance and being able to accept most lenses. NEX also uses a larger (APS-C) sensor. – Jerry Coffin Mar 2 '11 at 15:56
I have a Nikon F and a lot of lenses for it. While the Nikon F mount has been the same since the beginning (I think late 50s or early 60s), the fact is that most of my old non-AI lenses will not mount on most modern Nikon bodies. My lenses have the "prong" that was used to couple the lens to the meter on Nikon bodies of the 60s into the mid-70s. The prong will not clear the pentaprism of many modern Nikon bodies. I keep seeing folks say that Nikons take all old F-mount lenses. It's just not true. Lenses made in the past 30 years work. Not all. – Pat Farrell Aug 24 '12 at 3:03
This is sort of a tangential answer, but I picked the platform first, then specific equipment.
Lenses are a huge part of the platform, to be sure. The lenses offered by the camera manufacturers themselves represent the bulk of this portfolio, but you've also got third-party lenses from Sigma, Tamron, and others. Out of this portfolio, you might find that certain lenses aren't going to work with certain bodies (EF vs. EF-S, or focus motors in lenses vs. bodies, etc.), so the portion of that portfolio that's really available to you might vary depending on your body choice.
But when I picked a platform, I was also aware of the features of the body I was looking at, as well as the potential upgrade path for that body. I bought a used Canon 30D knowing that I'd be able to start buying lenses, batteries, memory cards, etc., and I'd be able to use them with a 40D or a 50D later. That was an important part of my "platform" choice (I've since moved up to the 40D, btw).
The choice for me ended up being not just about the specific equipment I was buying on day 1, but more about how much flexibility I could see in my future options.
With quality 3rd-party lenses from Sigma and Tokina (mainly; there are a few Tamrons that aren't too bad either), you can cover most of the focal-length range covered by the branded lenses on almost any body (though the ranges these brands offer for minor camera mounts like Pentax and Olympus may be only a subset of their total lineup; you'd have to check).
As to the smallest lineup of lenses for SLRs, that's probably Leica :)
In addition to the variations in lens lineups, focal length ranges, and available features (for example image stabilization or weather sealing) that others have mentioned, there's a lot to be said for preferences of the look of the images that come from a lens. Lens designs can emphasize different elements of a photographic image that people might find appealing and can drive their purchasing decisions.
Some lenses deliver smoother out-of-focus areas than others; lenses transmit colors in all different ways; all lens designs are a variety of compromises in sharpness and resolution across the frame at varying apertures. People can care very much about these variations. For example, you'll find Leica aficionados who will be able to discern the differences between the images from lenses from different eras of lens designers.
So in that respect, yes, if you're able to discern these differences in image rendering and like the look of one manufacturer over another, that could play a role in choosing one system over another.
Having wanted to buy a DSLR for a long time, I chose a brand first: Nikon. I went with the most affordable (to my pocket) camera, the D3100, which came with a basic 18-55 kit lens.
This was 2 years back. I did not know anything about pro-bodies or lenses at the time.
Gradually, as I started investing more time into photography, I realized that lens lineups not only vary across different platforms, they vary on their own platform as well.
Last year, I wanted to upgrade my lens to a 1.8 aperture lens and found the 50mm 1.8D to be in the affordable range. However, this lens does not auto-focus on the entry-level Nikon camera bodies. It would fit, but with only manual focusing. The 50mm 1.8 AF-S (autofocus) lens which would work for my camera was more than twice the price.
This is because Nikon entry-level bodies do not offer an "in-camera auto focus motor" required to auto-focus older lenses. This feature is only available in mid-upper-to-pro-level-camera-bodies.
Manual focus is fine for object or table-top photography, but I myself did not find it useful elsewhere for my work.
Also, the 50mm lens was better suited to a full-frame body, and on my camera the effective focal length after mounting a 50mm would be 75mm due to the 1.5x APS-C crop factor. That sounded OK for portraits or tight shots, but not for general everyday photography.
I ended up purchasing the 35mm 1.8 lens instead which auto-focuses on my camera and would give a 50mm equivalent focal range.
Comparatively, Canon does not have mount/auto-focus issues with their currently available entry-level or mid-level cameras and lenses. However, Canon does not offer a "budget-range" 35mm lens (50mm equivalent) for the crop-sensor body. The currently available Canon 50mm 1.8 lens would work, but with an effective focal length of 80mm due to the 1.6x APS-C crop. Other manufacturers or micro 4/3 cameras would pair a 25mm lens with the body to achieve a similar focal range.
I also have a group of friends with Nikon cameras and share multiple different lenses from time to time.
Also, with entry-to-mid-level cameras — having luckily found the time and chance to use Nikon, Sony, and Canon over the past two years — I find the kit lenses (18-55, 18-135, 55-200, 55-250, etc.) on all three platforms perform quite well, depending on shooting technique.
I have also used the Nikon 105mm 2.8 macro and the 300mm f/4 on the Nikon D3100 body, and the results are exquisite simply because of the lens quality. Similar lenses on other platforms would perform equally well, but in the end you need to figure out your style of photography first.
To answer your first question: yes, there is a difference between platforms, but final image quality depends on your shooting style. In printed images (such as magazines) or those you see on the internet, you will have a hard time figuring out the manufacturer and lens make just by looking at a photograph. Wide/tele can be figured out, but how far the final image was cropped before publishing is still a question.
For the second question, you "can" pick the lens first and then the body but that would only be the case in which you have first figured out your photography style - portraits, wildlife, macro, sports, etc. If you have more than 2 different styles - like for example, you are a wedding as well as a wildlife photographer, then you need to choose a platform which provides a more extensive lens choice.
Just a comment about your statement "Canon also does not offer a 35mm lens for the crop-sensor body.", which is wrong. Any Canon EF lens will work on a Canon crop-sensor body, and Canon currently sell two EF 35mm lenses. – Håkon K. Olafsen Dec 11 '13 at 12:27
@Håkon — fair point. However, Pentax, Nikon, and Sony all offer budget APS-C 35mm lenses for under $200. Canon currently does not cater to this particular niche — basically a normal prime kit lens alternative. (Kind of related to the second part of this question.) – mattdm Dec 12 '13 at 12:34

Hello Håkon, I agree with you. I was referring to the budget-range lens for an APS-C camera. I have edited the post accordingly. Thank you mattdm for the explanation. – yadunandan Dec 13 '13 at 2:47

Well, the short answer is that most camera brands have comparable lenses, but there are some differences. Canon, from what I can tell, has the most lenses available of any of the major manufacturers. Nikon has lenses comparable to most of Canon's, and lenses that can cover the gap for the rest (i.e., Canon tends to have cheaper lenses as well as the more expensive ones; Nikon tends to have only the more expensive ones). Of course, one can find cases where the trends are reversed. The third-party lenses also cover much of the same space, but they tend to cover the cheaper lens gaps better, and not so much the more expensive lenses. Sony/Pentax/etc. tend not to have the top-of-the-line lenses, but they do have the typical lenses used by most people. Just don't look for an 800mm lens for one of these camera brands. As for what brand to choose: as they are all really about the same, it's best to find out what your friends use, and use that. There's really not much of an argument one way or another to be made. Hope this helps!

There's nothing wrong with Sony's lenses -- many of them out-resolve the equivalent Canon/Nikon lenses. The old Rokkor designs (with updated coatings and adapted for autofocus) tend to be of very high quality, and much of the line now is Zeiss. (I'll let others speak to Pentax quality.) You're right about the extreme telephotos -- but if you want a truly hand-holdable 500mm lens, something you can throw into your camera bag just in case, Sony's the only game in town. And if bokeh's your game, you're really missing out if you haven't tried the 135 STF. – user2719 Mar 1 '11 at 16:54

All the brands typically have the lens basics covered: walkaround zooms, portrait primes, ultrawide zooms, telephoto zooms, etc. The differences tend to come in where the exotics reside, which may not matter to some people because of the expense, but might matter if that exotic just happens to be a lens you actually need. There are a lot of individual holes along those lines. Canon has a 17mm tilt-shift; Nikon has a crop-body fisheye; Canon has f/5.6, f/4, and f/2.8 400mm primes, while Nikon only offers an f/2.8 400mm prime; the Canon MP-E 60 Macro does 5x magnification; the Nikkor 105 portrait lens does soft focus; etc. I think the only basic lens Canon is "missing" is a low-cost normal-on-a-crop lens (i.e., no $200 EF-S 35/1.8 USM to set against Nikon's AF-S 35/1.8 DX lens), but there are higher-cost full-frame lenses that can fill that function.
But to overgeneralize, I think Nikon has more offerings in wide lenses, and Canon has more in the supertelephoto range. And both have more offerings than the other three brands.
Sony is unique among the dSLR mounts in having autofocusing Zeiss lenses, which are designed specifically for A-mount and are not identical in optical design to the ZE/ZF/ZK manual-focus lenses for Canon, Nikon and Pentax (e.g., the ZA 135/1.8).
Pentax is unique in having a variety of pancake lenses to offer.
Olympus and Panasonic four-thirds offer Leica-designed lenses, the only f/2 zooms, and the lenses overall are smaller and lighter (although the four-thirds development seems to have halted in favor of μ4/3). And a 2x crop factor gives more "reach" than APS-C with a mere 300mm lens.
|
|
## The Aleksandrov problem and optimal transport on $S^n$
Series:
School of Mathematics Colloquium
Thursday, September 2, 2010 - 11:00
1 hour (actually 50 minutes)
Location:
249 Skiles
Emory University
The purpose of this talk is to describe a variational approach to the problem of A.D. Aleksandrov concerning existence and uniqueness of a closed convex hypersurface in Euclidean space $R^{n+1}, ~n \geq 2$, with prescribed integral Gauss curvature. It is shown that this problem in variational formulation is closely connected with the problem of optimal transport on $S^n$ with a geometrically motivated cost function.
|
|
# How do you find the slope of a line passing through the points (-3,2) and (5,-8)?
Feb 26, 2016
$m = - \frac{10}{8}$ or $- 1.25$
#### Explanation:
To find the slope $\left(m\right)$ of a line from two points $\left({x}_{1} , {y}_{1}\right)$ and $\left({x}_{2} , {y}_{2}\right)$, use the slope formula
$m = \frac{{y}_{1} - {y}_{2}}{{x}_{1} - {x}_{2}}$
Substituting in the values from the question
$m = \frac{2 - - 8}{- 3 - 5}$
$m = - \frac{10}{8}$ or $- 1.25$
|
|
• Comment 1: jcmckeown, Mar 19th 2014
I only wanted to say, on the one hand, bravo on the sine page in your private nlab web; on the other hand, the main inspiration for the goofy inequality was that I wanted a similar presentation to that of the natural exponential function which, as everyone knows who knows it, satisfies
• $1 + x \leq e^x$
• $e^{2x} = (e^x)^2$
and furthermore that this pins it down exactly.
Cheers!
• Comment 2: Todd_Trimble, Mar 19th 2014
I see; thanks for the explanation! And sorry for the rudeness (“goofy”); I’ll get rid of it.
So this characterization of the sine is due to you? Very interesting; I’d never seen that before. You must have devised your own proof; I’d be interested in hearing it!
• Comment 3: jcmckeown, Mar 19th 2014
I don’t mind “goofy”.
• Comment 4: jcmckeown, Apr 6th 2014
Goodness, I just lost a lot of editing, and I’ve got the last assignment marking of the year still to do…
The Tricky Part of the argument is to consider that the function we want is of the form $g(x) = x h(x)$, and then construct an equivalent functional equation for $h$:
$h(x) = h(x/3) - \frac{4}{27} x^2 h(x/3)^3$
iff $g (x) = 3 g(x/3) - 4 g(x/3)^3$.
In terms of the (nonlinear nonlocal) transformation $T$
$T : f \mapsto x \mapsto f(x/3) - \frac{4}{27} x^2 f(x/3)^3$
one calculates
$(T F - T f)(x) = (F - f)(x/3) \left( 1 - \frac{4}{27} x^2 (F F + f F + f f) (x/3) \right)$
which shows
1. $T$ preserves the ordering of (small) $F$ and $f$, on small intervals $[-\delta,\delta]$, and
2. in the same circumstances also that the supremum distance between $T F$ and $T f$ on $[-\delta,\delta]$ is at most the supremum distance between $F$ and $f$ on $[-\delta/3,\delta/3]$.
Now consider the particular bounds $F_0 = 1$ and $f_0 (x) = 1-x^2$. An otherwise uninteresting calculation gives
$T F_0 (x) = 1 - \frac{4}{27} x^2$ and $T f_0 (x) = 1 - \frac{16}{27} x^2 + x^4 P(x)$
for an explicit polynomial $P$; in brief,
$f_0 \le T f_0 \le T F_0 \le F_0$
on some interval $[-r,r]$, which need not be bigger than $[-1,1]$. This is the start of an induction argument that
$f_0 \le T^n f_0 \le T^{n+1} f_0 \le T^{n+1} F_0 \le T^n F_0 \le F_0$
while at the same time (induction via item 2) we have the bounds
$| T^n F_0 (x) - T^n f_0 (x) | \leq \frac{x^2}{9^n} .$
It follows that $T$ has a unique fixed point within the specified bounds over the interval $[-r,r]$, and hence a unique fixed point over the whole real line. So that's existence and uniqueness. Since, obviously, the functional equation and the bounds are concocted to hold for sine, we might be happy with that.
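Incidentally, the functional equation $g(x) = 3 g(x/3) - 4 g(x/3)^3$ also gives a practical way to crunch values of sine. A small sketch, assuming only that $\sin t \approx t$ for tiny $t$:

```python
# Compute sin(x) from the triple-angle identity sin(3t) = 3 sin t - 4 sin^3 t,
# i.e. the functional equation g(x) = 3 g(x/3) - 4 g(x/3)^3 above.
import math

def sin_via_triple_angle(x, depth=20):
    s = x / 3**depth          # small-angle seed: sin t is approximately t
    for _ in range(depth):
        s = 3 * s - 4 * s**3  # rebuild with the triple-angle identity
    return s

print(sin_via_triple_angle(1.0))  # 0.8414709848...
print(math.sin(1.0))              # agrees to machine precision
```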
• Comment 5: Todd_Trimble, Apr 6th 2014
Thanks very much, Jesse – I think I get the overall idea; I can run over this with a fine-toothed comb maybe a little later. (That 4/27 looks weirdly suggestive…)
But where on earth does all this come from? Did you find this characterization in a book somewhere, or what? It looks just a bit off the beaten track, shall we say, at least to my eyes.
• Comment 6: jcmckeown, Apr 6th 2014
I wanted to impress on some calculus students just how much easier everything is with the right tools; so, here is a complete characterization of a familiar-ish thing, but what on earth can you do with it? But after developing some calculus, e.g. Taylor series, one can start crunching digits of things like $\sin(1)$, or prove that it's irrational, and so forth.
|
|
# The infinite spell of 2016
Algebra Level 3
Let $\displaystyle p(m) = \sum_{k=0}^\infty \dfrac1{m^k}$ and let $\displaystyle x = \prod_{r=2}^{2016} p(r)$. Find $\dfrac{x!}{2014!} \cdot \dfrac1{2015}$.
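For reference, both the series and the product close in elementary form (a worked sketch, assuming $m > 1$ so the geometric series converges): $p(m) = \sum_{k=0}^\infty \frac{1}{m^k} = \frac{m}{m-1}$, so the product telescopes to $x = \prod_{r=2}^{2016} \frac{r}{r-1} = 2016$, and then $\dfrac{x!}{2014!} \cdot \dfrac{1}{2015} = \dfrac{2016 \cdot 2015}{2015} = 2016$.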
|
|
## Textbook question 7B.3
$\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$
Paige Lee 1A
Posts: 136
Joined: Sat Sep 07, 2019 12:16 am
### Textbook question 7B.3
For part (c), could someone please explain how to get [A] = 0.085 mol/L? I understand how to get 0.068 mol/L, but I don't understand why you subtract it from the starting concentration of A, 0.153 mol/L.
Determine the rate constant for each of the following first-order reactions, in each case expressed for the rate of loss of A: (a) A → B, given that the concentration of A decreases to one-half its initial value in 1000. s; (b) A → B, given that the concentration of A decreases from 0.67 mol·L⁻¹ to 0.53 mol·L⁻¹ in 25 s; (c) 2 A → B + C, given that [A]₀ = 0.153 mol·L⁻¹ and that after 115 s the concentration of B rises to 0.034 mol·L⁻¹.
Betania Hernandez 2E
Posts: 107
Joined: Fri Aug 02, 2019 12:15 am
### Re: Textbook question 7B.3
$[A]_{t}$ represents the concentration of reactant A that remains at time t. The problem states that the concentration of product B rises to 0.034 M. This means that the concentration of reactant A decreased by 0.068 M. You would need to subtract this number from the initial concentration given to find the concentration of reactant A that remains.
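Putting numbers to that, using the integrated rate law quoted at the top of the thread: $[A]_{115} = 0.153 - 2(0.034) = 0.085$ mol·L⁻¹ (the factor 2 comes from the 2 A → B + C stoichiometry), and then $k = \frac{1}{115\ \mathrm{s}}\ln\frac{0.153}{0.085} \approx 5.1\times 10^{-3}\ \mathrm{s}^{-1}$.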
AKatukota
Posts: 100
Joined: Thu Jul 25, 2019 12:18 am
### Re: Textbook question 7B.3
Thank you! I was wondering this also and I see the relation in why you would subtract.
BeylemZ-1B
Posts: 95
Joined: Thu Jul 25, 2019 12:17 am
### Re: Textbook question 7B.3
after you get the new concentration [A]t = 0.085, you can set up the equation to solve for the rate constant:
ln[0.085] = -k*(115 seconds) + ln[.153]
so I got k = 5.1E-3, but what would the units be, and how do I differentiate the units of k for first- and second-order reactions?
Rida Ismail 2E
Posts: 139
Joined: Sat Sep 07, 2019 12:16 am
### Re: Textbook question 7B.3
The units are $s^{-1}$.
Brooke Yasuda 2J
Posts: 102
Joined: Sat Jul 20, 2019 12:17 am
### Re: Textbook question 7B.3
This may be the longer way, but it works. To find the units of the rate constant, just remember that the rate needs to have units of mol·L⁻¹·s⁻¹. So when you have an expression like rate = k[x]^n[y]^m, you can determine that k multiplied by the units of the rest of the expression has to produce mol·L⁻¹·s⁻¹.
|
|
# ALGEBRAIC ANALYSIS OF BENT-FROM-LINEAR TRANSITION INTENSITIES: THE EMISSION SPECTRUM OF METHINOPHOSPHIDE (HCP)
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/20948
Title: ALGEBRAIC ANALYSIS OF BENT-FROM-LINEAR TRANSITION INTENSITIES: THE EMISSION SPECTRUM OF METHINOPHOSPHIDE (HCP)
Creators: Pérez-Bernal, F.; Iachello, F.; Vaccaro, P. H.; Ishikawa, H.; Toyosaki, H.
Issue Date: 2003
Publisher: Ohio State University
Abstract: Emission spectra obtained from bulk-gas methinophosphide (HCP) have been interpreted through use of a novel algebraic scheme that explicitly takes into account inherent non-rigidity of the molecular framework.$^{a}$ Fluorescence accompanying selective excitation of individual $\tilde{A}^{1}A^{\prime \prime}- \tilde{X} {^{1}}\Sigma$ vibronic bands was dispersed under moderate resolution, with the appearance of substantial activity in the $\nu_{2}$ bending mode reflecting the bent-from-linear nature of the $\tilde{A} \leftarrow \tilde{X}$ transition. Aside from providing an economical parameterization for observed patterns of vibrational term energies, the algebraic approach affords a robust and facile means for the quantitative evaluation of multidimensional Franck-Condon factors. These results, as well as subsequent extensions designed to account for non-Condon effects, will be discussed in order to further elucidate the unique structure and dynamics exhibited by participating electronic states.
Description: $^{a}$ H. Ishikawa, H. Toyosaki, N. Mikami, F. Pérez-Bernal, P. H. Vaccaro, and F. Iachello, Chem. Phys. Lett. 365, 57 (2002).
Author Institution: Departamento de Física Aplicada, Facultad de Ciencias Experimentales, Avda. de las FF.AA. s/n, Universidad de Huelva; Departments of Physics and Chemistry, Yale University; Department of Chemistry, Graduate School of Science, Tohoku University
URI: http://hdl.handle.net/1811/20948
Other Identifiers: 2003-RI-15
|
|
CryptoDB
Paper: Advances in Cryptology - EUROCRYPT 2013, 32nd Annual International Conference on the Theory and Applications of Cryptographic Techniques, Athens, Greece, May 26-30, 2013. Proceedings
@proceedings{eurocrypt-2013-25024,
|
|
# ASTM D2983 – Standard test method for low-temperature viscosity of automatic transmission fluids, hydraulic fluids, and lubricants using a rotational viscometer
This test method determines the suitability of fluids like automatic transmission fluids, gear oils, hydraulic fluids, and other lubricants for use at low ambient temperatures, while covering a viscosity range of 300 mPa·s to 900,000 mPa·s. The standard describes four different procedures, A, B, C, D, each requiring a different configuration. Procedure D is an automated test method, which means that it is performed automatically with only one instrument configuration.
## Why measure the viscosity of lubricants at low temperatures?
The viscosity of lubricants plays a major role for the proper operation of mechanical devices used at low ambient temperatures such as -40 °C or -20 °C. Under such demanding conditions, lubricants must guarantee a sufficiently low viscosity, so that the lube is still able to flow. The viscosity has to be analyzed at the minimum temperature at which the lube is still applicable to ensure adequate lubrication of critical parts. The standard SAE J300 states a critical viscosity of >150,000 mPa·s at which flow problems and pinion bearing failures can occur.⁽¹⁾
## What are the requirements for viscosity measurement according to ASTM D2983?⁽²⁾
### Rotational viscometer
• Procedure A, B, C require a rotational viscometer having a torque range between 0.0670 mNm and 0.0680 mNm.
• Procedure D needs a programmable rotational viscometer having a torque range between 0.0670 mNm and 0.1800 mNm.
For viscosity measurements, the viscometer must at least have the following speeds available:
• 0.6 rpm
• 1.5 rpm
• 3.0 rpm
• 6.0 rpm
• 12.0 rpm
• 30.0 rpm
• 60.0 rpm
• 120 rpm is desirable for procedures A to C and mandatory for procedure D
### Temperature control units
The choice of temperature control unit depends on the test procedure. Samples are cooled as follows:
• Procedure A: with an air bath to test temperature
• Procedure B: with a mechanical refrigerated programmable liquid bath
• Procedure C: with a mechanical refrigerated constant temperature liquid bath by means of a simulated air cell (SimAir)
• Procedure D: with a thermo-electric temperature-controlled chamber in a range from -45 °C to +90 °C
### Measuring system
Viscometer spindle:
Procedures A, B, C, and D require a cylindrical viscometer spindle with the same geometry.
• An uninsulated steel spindle (Figure 1, A) should be used only for procedure A.
• A composite spindle, which has lower thermal conductivity, must be used for procedure C.
• A spindle with insulation on top (Figure 1, B) is required for procedure D.
Test tubes:
• Procedures A and B: standard test tube with approx. 25 mm ID and 115 mm in length, and 30 mL of sample volume
• Procedure C: special SimAir Stator with 15 mm ID, and 15 mL of sample volume
• Procedure D: test tube with approx. 25 mm OD and 150 mm in length, and 20 mL of sample volume
## Common units
According to the ASTM standard, dynamic viscosity must be indicated in millipascal seconds [mPa·s].
## Other relevant standards for rotational viscosity testing of lubricants at low temperatures
ASTM D8210: Standard test method for automatic determination of low-temperature viscosity of automatic transmission fluids, hydraulic fluids, and lubricants using a rotational viscometer
DIN 51398: Testing of lubricants; procedure for measurement of low- temperature apparent viscosity by means of the Brookfield viscometer (liquid bath method)
ASTM D5133: Standard test method for low temperature, low shear rate, viscosity/temperature dependence of lubricating oils using a temperature-scanning technique
ASTM D7110: Standard test method for determining the viscosity-temperature relationship of used and soot-containing engine oils at low temperatures
## What is the difference between ASTM D2983 and D8210?⁽³⁾
ASTM D8210 describes a test method which is equivalent to procedure D from ASTM D2983. According to ASTM D8210, this test procedure is called “Option A – Standard Thermal Conditioning”. Procedure D from ASTM D2983 and Option A from ASTM D8210 include the following main steps:
1. Preheating the sample to 50 °C
2. Cooling to room temperature
3. Cooling to test temperature according to Newton’s cooling law (Equation 1)
4. Keeping at test temperature for a period of time (~14 h)
5. Viscosity measurement at several speeds
$$ST = \left(C \cdot e^{K(ET-PT)} \cdot \frac{5}{9}\right) + T$$
ST = Segment set-point temperature after preheating and returning to room temperature, °C
ET = Elapsed time since the beginning of the test, minutes
PT = Preheating time includes the time to bring the sample to preheating temperature, soak time, and return to room temperature, minutes
T = Test temperature, °C
C = 102
K = -0.08
Equation 1: Application of Newton’s cooling law in order to cool the sample to test temperature according to ASTM D2983
The difference between these standards is that ASTM D8210 additionally describes an automated test method with a reduced thermal conditioning phase. This procedure is called “Option B – Abbreviated thermal conditioning”. The holding time at test temperature before the viscosity measurement starts is reduced from ~14 h to ~4 h. Shortening the thermal conditioning time can result in a lower viscosity value than that measured with the standard method.
|
|
# The tikz variable resets before the loop finishes [duplicate]
In TikZ, I define a variable outside a \foreach loop and use the \pgfmathtruncatemacro command to increase it inside the loop.
Below is my MWE.
\documentclass{article}
\usepackage{pgfplots,tikz,tikz-3dplot}
\begin{document}
\begin{figure}
\centering
\begin{tikzpicture}
\pgfmathtruncatemacro\j{1};
\foreach \i in {1,4,7}
{
\node at (\i,1.4) {\j};
\pgfmathtruncatemacro\j{\j+1};
\node at (\i+1.5,1.4) {\j};
\pgfmathtruncatemacro\j{\j+1};
\node at (\i,0) {\j};
\pgfmathtruncatemacro\j{\j+1};
\node at (\i+1.5,0) {\j};
}
\end{tikzpicture}
\end{figure}
\end{document}
Instead of the value of \j increasing up to 12, it resets back to 1 at the start of each new iteration of the loop. What am I doing wrong?
• I don't know what the problem is, but you can resolve it by adding \xdef\j{\j} as the last line inside the loop. – user108724 Jul 8 '20 at 11:47
• See this post for more information – user108724 Jul 8 '20 at 11:47
• See tex.stackexchange.com/a/222281/201158 for the reason why you failed – ZhiyuanLck Jul 8 '20 at 11:55
• @ZhiyuanLck Yup, every definition is local to loops. I keep forgetting this! – padawan Jul 8 '20 at 11:58
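Following the comments above, a sketch of a fixed version of the MWE body; the assumptions here are that \xdef is used to make each update global, and that one extra increment is added at the end of the loop body so the count continues across iterations and reaches 12:

```latex
\begin{tikzpicture}
\xdef\j{1}% global counter
\foreach \i in {1,4,7}
{
    \node at (\i,1.4) {\j};
    \pgfmathtruncatemacro\jnext{\j+1}\xdef\j{\jnext}
    \node at (\i+1.5,1.4) {\j};
    \pgfmathtruncatemacro\jnext{\j+1}\xdef\j{\jnext}
    \node at (\i,0) {\j};
    \pgfmathtruncatemacro\jnext{\j+1}\xdef\j{\jnext}
    \node at (\i+1.5,0) {\j};
    \pgfmathtruncatemacro\jnext{\j+1}\xdef\j{\jnext}% survives into the next iteration
}
\end{tikzpicture}
```

The \pgfmathtruncatemacro definition is local to the loop body, so the intermediate result \jnext is immediately promoted to a global \j via \xdef.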
|
|
## Transmitted probability
Suppose a binary message is transmitted through a noisy channel. The transmitted signal S has uniform probability of being either 1 or −1, the noise N follows the normal distribution N(0,4), and the received signal is R = S + N. Assume the receiver concludes the signal is 1 when R ≥ 0 and −1 when R < 0.
1. What is the error probability when one signal is transmitted?
2. What is the error probability when one signal is transmitted if we triple the amplitude of the transmitted signal? That is, S = 3 or −3 with equal probability.
3. What is the error probability if we send the same signal three times (with amplitude 1) and take the majority for the conclusion? For example, if the three received signals were concluded to be 1, −1, 1 by the receiver, we determine the transmitted signal to be 1.
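One way to set up the three computations numerically (a sketch; since the noise is N(0,4), its standard deviation is 2, and part 3 assumes the three transmissions are independent):

```python
# Error probabilities for the three parts, using the standard normal CDF.
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

p1 = Phi(-1 / 2)   # 1. P(N < -1) with sd = 2 (symmetric for S = -1)
p2 = Phi(-3 / 2)   # 2. amplitude tripled: P(N < -3)
# 3. majority vote over three independent transmissions:
p3 = 3 * p1**2 * (1 - p1) + p1**3
print(p1, p2, p3)  # ~0.3085, ~0.0668, ~0.2269
```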
|
|
A node X on a 10 Mbps network is regulated by a token bucket. The token bucket is filled at a rate of 2 Mbps and is initially filled with 16 megabits. The maximum duration for which X can transmit at the full rate of 10 Mbps is _________ secs.
(1) 1 (2) 2 (3) 3 (4) 4
16+2*x=10*x
x=2 sec
In token bucket algo
$C+\rho * S = M * S \\ C= Capacity \ of \ bucket \\ \rho = Token \ arrival \ rate \\ M = Maximum \ Output \ rate \\ S = Burst \ Time$
S = C/(M − ρ) = 16/(10 − 2) = 2 sec
Hence option 2) is correct
|
|
# Minimal-information description of sudoku solution (Latin square)
Sudoku puzzles consist of a $$9 \times 9$$ grid of cells in which some cells contain integers from the set $$\{ 1, \ldots, 9 \}$$ and the task is to fill in the remaining cells such that the numbers $$1$$ through $$9$$ appear in each row, in each column, and in each of the nine $$3 \times 3$$ boxes, as shown in this puzzle and its solution:
The solutions are special Latin squares (they must also satisfy the box constraint), and the number of distinct such $9 \times 9$ grids has been computed to be $N = 6,670,903,752,021,072,936,960$.
I am interested in finding the minimum-information description of such a solution by describing the minimal puzzle that leads to that unique solution. Of course such a solution has a very large number of constraints, which reduces the information needed to describe its source puzzle. In information theoretic terms, one need merely describe (or transmit) the minimal puzzle; the receiver can then solve the puzzle to fill in the full Latin square.
In a tour de force simulation taking the equivalent of $7.3M$ hours on a supercomputer, Gary McGuire, Bastian Tugemann, and Gilles Civario solved a long-outstanding problem, finding that the minimal number of puzzle cells that need to be filled to ensure a unique sudoku solution is 17 (as exemplified in the figure above). No $16$-clue puzzles exist.
A lower-bound on the information needed to describe such a puzzle would assume that the grid locations for the $$17$$ clues are fixed (and hence contribute zero bits to the description), and that all one need do is fill in the $$17$$ puzzle cell values. One might assume the maximum entropy set such as $${\cal S} = \{ 1,1,2,2,3,3,\ldots,8,8,9 \}$$, i.e., one instance of a single digit, and two copies of each remaining digit. Thus it takes $$\log_2 9 = 3.16993$$ bits to describe which is the "lone" or "singleton" digit. Then the creation of the puzzle corresponds to placing the 17 digits of $${\cal S}$$ in the $$17$$ (assumed fixed) cells, thus requiring $$\log_2 \left({17! \over (2!)^8} \right) = 40.3376$$ bits, so the total number of bits is: $$3.16993 + 40.3376 = 43.5075$$ bits. (Note that this is much lower than the naive estimate of $$\log_2 N = 72.4984$$ bits.)
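These counts are straightforward to reproduce (a quick check in Python):

```python
# Reproduce the bit counts quoted above.
from math import factorial, log2

N = 6670903752021072936960
bits_lone = log2(9)                      # choice of the singleton digit
bits_fill = log2(factorial(17) // 2**8)  # arrangements of the multiset S
print(bits_lone, bits_fill, bits_lone + bits_fill)  # 3.1699, 40.3376, 43.5075
print(log2(N))                                      # 72.4984
```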
I suspect this estimate is a loose lower bound because (perhaps) not all puzzles can be described by $$17$$ cell values, that even if one could use just $$17$$ such values the cell locations might need to differ (and hence require information to describe these locations), and other factors. Moreover, as @ZachTeitler pointed out, many of the assignments of digits to cells will lead to unsolvable puzzles (because, for example, there will be two equal digits in the same row, or column, or $$3 \times 3$$ box).
I don't expect someone to solve this problem fully on this site--it simply requires too much analysis and likely massive computer simulations. What I would appreciate are comments/criticisms on the casting of this problem and its assumptions, and a clear methodological approach toward solving it rigorously.
• You might enjoy reading this old question and its answers: mathoverflow.net/questions/129143 – j.c. Oct 23 '18 at 22:23
• Suppose I give you k bits of information to represent a Sudoku solution. If k is less than log_2 N, how would I use k bits or fewer to distinguish each of the other N-1 solutions? Are you asking about equivalence classes of solutions instead? Gerhard "Finds The Question Itself Puzzling" Paseman, 2018.10.24. – Gerhard Paseman Oct 24 '18 at 17:59
• 1. Can you please explain where $\binom{17}{9}/(2!)^8$ comes from? The number of ways to put $1,1,2,2,\dotsc,8,8,9$ into $17$ squares is $17!/(2!)^8$. If you allow a choice of which digit is the "odd" one ($9$) then you get $9 \cdot 17!/(2!)^8$. Have I misunderstood what you are counting? 2. Presumably the large majority of placements into those $17$ squares fail to uniquely determine a puzzle solution. Either they admit more than one solution (failure of uniqueness) or they admit no solutions — e.g., if the $17$ clue squares have a duplicated digit in a row, column, or region. ... – Zach Teitler Oct 24 '18 at 18:53
• ... counting these seems like a challenge, to put it mildly. It will certainly depend on which $17$ squares are chosen for the clue locations. E.g., if the $17$ squares are the first row and first column, then very, very few clue fillings will uniquely determine puzzle solutions! – Zach Teitler Oct 24 '18 at 18:54
• @ZachTeitler: My guess/ansatz was that a unique selection of 17 cell locations might uniquely determine all sudoku puzzles, but that set would be arranged like the set in my example, certainly not a row and a column. (Frankly, that would be one of the least constraining arrangements.) – David G. Stork Oct 24 '18 at 19:07
# Home
Greetings! I’m Mingrui Zhang, a second-year Ph.D. student in the University of Washington Information School. My advisor is Prof. Jacob O. Wobbrock, who directs the MAD Lab.
My research interests are in Human-Computer Interaction, specifically text entry, input methods, and interaction techniques, where I seek better methods to make technology natural and seamless.
Before moving to Seattle, I studied Computer Science & Technology at Tsinghua University, China, where I worked both as a researcher and in industry. Back then, Dr. Chun Yu opened the world of HCI to me.
I enjoy playing computer games in my spare time, but you know, I rarely have spare time :)
/\
/**\
/****\ /\
/ \ /**\
/ /\ / \ /\ /\ /\
/ / \ / \ / \/\/ \ / \/\ -------- __@
/ / \/ /\ \ / \ \ \ / / \ LIFE IS SHORT ----- _\<,_
/ / \/ \/\ \ / \ \ \ KEEP FORWARD ---- (*)/ (*)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2. Worksheet by Kuta Software LLC: Precalculus Extra Practice of Rational Functions. For each function, identify the holes, intercepts, horizontal and vertical asymptotes, and the domain; then sketch the graph.

A rational function is a fraction of polynomials: the numerator is $p(x)$ and the denominator is $q(x)$. An asymptote of a rational function is a line that the graph of the function approaches, but never touches. For the inverse-variation graph $y = k/x$, if $k$ is positive the graph lies in the 1st and 3rd quadrants; if $k$ is negative, in the 2nd and 4th. Inverse variation has applications beyond math. Reprogram your approach toward evaluating such functions with this printable collection of rational-function worksheets: input the specified value of $x$ in the function rule and evaluate to get the output.

Also included: one worksheet on evaluating a function at a given value and determining the domain of a function, and two worksheets on finding the inverse of a function. Inverse functions make solving algebraic equations possible, and the questions are carefully planned so that understanding can be developed, misconceptions identified, and progression made both across and down the sheet.

WORKSHEET 7.4: INVERSE FUNCTIONS. Inverse Relations: find the inverse for each relation, e.g. { (-5, 7), (-6, -8), (1, -2), (10, 3) }. Finding Inverses: find an equation for the inverse of each of the following relations, e.g. { (1, -3), (-2, 3), (5, 1), (6, 4) }. Topics covered include inverse function notation $f^{-1}(x)$, inverses of rational functions (fractions), factoring out the greatest common factor to solve for $y$, inverses with radicals (square roots and cube roots), switching $x$ and $y$ to find the inverse, and determining whether two functions are inverses of each other using composition, $f(g(x))$ and $g(f(x))$. The inverse cosine $y = \arccos x$ is the function such that $\cos y = x$; its domain is $[-1, 1]$ and its range is $[0, \pi]$.
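As a concrete instance of the skill these worksheets drill (my own example, not taken from any of the sheets): to find the inverse of a rational function, switch $x$ and $y$ and solve for $y$. For $f(x) = \dfrac{2x+3}{x-2}$, write $y = \dfrac{2x+3}{x-2}$, switch to $x = \dfrac{2y+3}{y-2}$, then $x(y-2) = 2y+3$, so $y(x-2) = 2x+3$ and $f^{-1}(x) = \dfrac{2x+3}{x-2}$. This particular function happens to be its own inverse, with domain all real numbers except $x = 2$.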
CHAPTER II
RATES LIMITS DERIVATIVES
4. Rate of Increase. Slope. In the study of any quantity, its rate of increase (or decrease), when some related quantity changes, is very important for any complete understanding. Thus, the rate of increase of the speed of a boat when the power applied is increased is a fundamental consideration. Graphically, the rate of increase of $y$ with respect to $x$ is shown by the rate of increase of the height of a curve. If the curve is very flat, there is a small rate of increase; if steep, a large rate.
The steepness, or slope, of a curve shows the rate at which the dependent variable is increasing with respect to the independent variable.
When we speak of the slope of a curve at any point $P$ we mean the slope of its tangent at that point. To find this, we must start, as in Analytic Geometry, with a secant through $P$.
Fig. 3
Let the equation of the curve, Fig. 3, be $y=x^{2}$, and let the point $P$ at which the slope is to be found, be the point (2, 4).
Let $Q$ be any other point on the curve, and let $\Delta x$ represent the difference of the values of $x$ at the two points $P$ and $Q$. [Footnote: $\Delta x$ may be regarded as an abbreviation of the phrase “difference of the $x$’s.” The quotient of two such differences is called a difference quotient. Notice particularly that $\Delta x$ does not mean $\Delta\times x$. Instead of “difference of the $x$’s” the phrases “change in $x$” and “increment of $x$” are often used.]
Then, in the figure, $OA=2,AB=\Delta x,$ and $OB=2+\Delta x$. Moreover, since $y=x^{2}$ at every point, the value of $y$ at $Q$ is $BQ=(2+\Delta x)^{2}$.
The slope $S$ of the secant $PQ$ is the quotient of the differences $\Delta y$ and $\Delta x$:
$S=\tan\angle MPQ=\frac{\Delta y}{\Delta x}=\frac{MQ}{PM}=\frac{(2+\Delta x)^{2}-4}{\Delta x}=4+\Delta x.$
The slope $m$ of the tangent at $P$, that is $\tan\angle MPT$, is the limit of the slope of the secant as $Q$ approaches $P$. The slope of the secant is the average slope of the curve between the points $P$ and $Q$. The slope of the curve at the single point $P$ is the limit of this average slope as $Q$ approaches $P$.
But, since $S=4+\Delta x$, it is clear that the limit of $S$ as $Q$ approaches $P$ is 4, since $\Delta x$ approaches zero when $Q$ approaches $P$; hence the slope $m$ of the curve is 4 at the point $P$.
At any other point the argument would be similar. If the coordinates of $P$ are $(a,a^{2})$, those of $Q$ would be $[(a+\Delta x),(a+\Delta x)^{2}]$; and the slope of the secant would be the difference quotient $\Delta y\div\Delta x$:
$S=\frac{\Delta y}{\Delta x}=\frac{(a+\Delta x)^{2}-a^{2}}{\Delta x}=\frac{2a\Delta x+\overline{\Delta x}^{2}}{\Delta x}=2a+\Delta x.$
Hence the slope of the curve at the point $(a,\ a^{2})$ is [Footnote: Read “$\Delta x\doteq 0$” as “$\Delta x$ approaches zero.” A detailed discussion of limits is given in §10, p. 16.]
$m=\lim_{\Delta x\doteq 0}S=\lim_{\Delta x\doteq 0}\Delta y/\Delta x=\lim_{% \Delta x\doteq 0}(2a+\Delta x)=2a.$
On the curve $y=x^{2}$, the slope at any point is numerically twice the value of $x$.
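A quick numeric illustration of this limit (my own check, not part of the text): the secant slope $S = 4 + \Delta x$ at $(2, 4)$ sinks toward the tangent slope $4$ as $\Delta x$ shrinks, much as Exercise III.2 below asks the reader to verify by hand for $y = x^{2}+2$.

```python
# Secant slopes of y = x^2 through P = (2, 4): S = ((2 + dx)^2 - 4) / dx = 4 + dx.
f = lambda x: x**2
for dx in (1, 0.1, 0.01, 0.001):
    print(dx, (f(2 + dx) - f(2)) / dx)  # -> 5.0, 4.1, 4.01, 4.001
```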
When the slope can be found, as above, the equation of the tangent at $P$ can be written down at once, by Analytic Geometry, since the slope $m$ and a point $(a,\ b)$ on a line determine its equation:
$(y-b)=m(x-a).$
Hence, in the preceding example, at the point (2, 4), where we found $m=4$, the equation of the tangent is
$(y-4)=4(x-2)$, or $4x-y=4$.
At the point $(a,a^{2})$ on the curve $y=x^{2}$, we found $m=2a$; hence the equation of the tangent there is
$(y-a^{2})=2a(x-a)$, or $2ax-y=a^{2}$.
5. General Rules.
A part of the preceding work holds true for any curve, and all of the work is at least similar. Thus, for any curve, the slope is
$m=\lim_{\Delta x\doteq 0}S=\lim_{\Delta x\doteq 0}(\Delta y/\Delta x);$
that is, the slope $m$ of the curve is the limit of the difference quotient $\Delta y/\Delta x$.
The changes in various examples arise in the calculation of the difference quotient, $\Delta y\div\Delta x$, or $S$.
This difference quotient is always obtained, as above, by finding the value of $y$ at $Q$ from the value of $x$ at $Q$ by means of the equation of the curve, then finding $\Delta y$ by subtracting from this the value of $y$ at $P$, and finally forming the difference quotient by dividing $\Delta y$ by $\Delta x$.
6. Slope Negative or Zero. If the slope of the curve is negative, the rate of increase in its height is negative; that is, the height is really decreasing with respect to the independent variable. [Footnote: Increase or decrease in the height is always measured as we go toward the right, i.e. as the independent variable increases.]
If the slope is zero, the tangent to the curve is horizontal. This is what happens ordinarily at a highest point (maximum) or at a lowest point (minimum) on a curve. [Footnote: A maximum need not be the highest point on the entire curve, but merely the highest point in a small arc of the curve about that point. See §37, p. 63. Horizontal tangents sometimes occur without any maximum or any minimum. See §38, p. 63.]
Example 1. Thus the curve $y=x^{2}$, as we have just seen, has, at any point $x=a$, a slope $m=2a$. Since $m$ is positive when $a$ is positive, the curve is rising on the right of the origin; since $m$ is negative when $a$ is negative, the curve is falling (that is, the height $y$ decreases as $x$ increases) on the left of the origin. At the origin $m=0$; the origin is the lowest point (a minimum) on the curve, because the curve falls as we come toward the origin and rises afterwards.
Example 2. Find the slope of the curve
$y=x^{2}+3x-5$ (1)
at the point where $x=-2$; also in general at a point $x=a$. Use these values to find the equation of the tangent at $x=-2$; the tangent at any point; the maximum or minimum points if any exist.
When $x=-2$, we find $y=-7$ ($P$ in Fig. 4); taking any second point $Q$, $(-2+\Delta x,-7+\Delta y)$, its coordinates must satisfy the given equation, therefore
(2) $-7+\Delta y=(-2+\Delta x)^{2}+3(-2+\Delta x)-5$,
or
(3) $\Delta y=-4\Delta x+\overline{\Delta x}^{2}+3\Delta x=-\Delta x+\overline{\Delta x}^{2}$,
where $\overline{\Delta x}^{2}$ means the square of $\Delta x$. Hence the slope of the secant $PQ$ is
(4) $S=\Delta y/\Delta x=-1+\Delta x$.
The slope $m$ of the curve is the limit of $S$ as $\Delta x$ approaches zero; i.e.
(5) $m=\displaystyle\lim_{\Delta x\doteq 0}S=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}=\lim_{\Delta x\doteq 0}(-1+\Delta x)=-1$.
It follows that the equation of the tangent at $(-2,\ -7)$ is
(6) $(y+7)=-1(x+2)$, or $x+y+9=0$.
Likewise, if we take the point $P(a,\ b)$ in any position on the curve whatsoever, the equation (1) gives
(7) $b=a^{2}+3a-5$.
Any second point $Q$ has coordinates $(a+\Delta x,\ b+\Delta y)$ where $\Delta x$ and $\Delta y$ are the differences in $x$ and in $y$, respectively, between $P$ and $Q$. Since $Q$ also lies on the curve, these coordinates satisfy (1) :
(8) $b+\Delta y=(a+\Delta x)^{2}+3(a+\Delta x)-5$.
Subtracting the equation (7) from (8), $\Delta y=2a\Delta x+\overline{\Delta x}^{2}+3\Delta x$, whence $S=\Delta y/\Delta x=(2a+3)+\Delta x$, and
(9) $m=\displaystyle\lim_{\Delta x\doteq 0}S=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}=\lim_{\Delta x\doteq 0}[(2a+3)+\Delta x]=2a+3$.
Therefore the tangent at $(a,\ b)$ is
(10) $y-(a^{2}+3a-5)=(2a+3)(x-a)$, or $(2a+3)x-y=a^{2}+5$.
From (9) we observe that $m=0$ when $2a+3=0$, i.e. when $a=-3/2$. For all values greater than $-3/2$, $m=2a+3$ is positive; for all values less than $-3/2$, $m$ is negative. Hence the curve has a minimum at $(-3/2,\ -29/4)$ in Fig. 4, since the curve falls as we come toward this point and rises afterwards.
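A numeric spot check of this conclusion (mine, assuming the corrected constant $-5$ in (1)): the slope $m=2a+3$ is negative just left of $a=-3/2$, zero there, and positive just right of it, and the height at $a=-3/2$ is $-29/4$.

```python
# y = x^2 + 3x - 5: the slope m = 2a + 3 changes sign at a = -3/2.
y = lambda a: a**2 + 3*a - 5
m = lambda a: 2*a + 3
print(m(-2.0), m(-1.5), m(-1.0))  # -> -1.0 0.0 1.0 (falling, level, rising)
print(y(-1.5))                    # -> -7.25, i.e. -29/4, the minimum height
```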
Example 3. Consider the curve $y=x^{3}-12x+7$. If the value of $x$ at any point $P$ is $a$, the value of $y$ is $a^{3}-12a+7$. If the value of $x$ at $Q$ is $a+\Delta x$, the value of $y$ at $Q$ is $(a+\Delta x)^{3}-12(a+\Delta x)+7$.
Hence
$S=\frac{\Delta y}{\Delta x}=\frac{[(a+\Delta x)^{3}-12(a+\Delta x)+7]-[a^{3}-12a+7]}{\Delta x}$
$=(3a^{2}+3a\Delta x+\overline{\Delta x}^{2})-12,$
and
$m=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}=3a^{2}-12.$
For example, if $x=1$, $y=-4$; at this point $(1,\ -4)$ the slope is $3\cdot 1^{2}-12=-9$; and the equation of the tangent is $(y+4)=-9(x-1)$, or
$9x+y-5=0$.
Since $3a^{2}-12$ is negative when $a^{2}<4$, the curve is falling when $a$ lies between $-2$ and $+2$. Since $3a^{2}-12$ is positive when $a^{2}>4$, the curve is rising when $x<-2$ and when $x>+2$. At $x=\pm 2$, the slope is zero. At $x=+2$ there is a minimum (see Fig. 5), since the curve is falling before this point and rising afterwards. At $x=+2$, $y=2^{3}-12\cdot 2+7=-9$, which is the lowest value of $y$ near that point. At $x=-2$, $y=23$, the highest value near it.
This information is quite useful in drawing an accurate figure. We know also that the curve rises faster and faster to the right of $x=2$.
Draw an accurate figure of your own on a large scale.
EXERCISES III.–SLOPES OF CURVES
1. Find the slope of the curve $y=x^{2}+2$ at the point where $x=1$. Find the equation of the tangent at that point. Verify the fact that the equation obtained is a straight line, that it has the correct slope, and that it passes through the point $(1,3)$.
2. Draw the curve $y=x^{2}+2$ on a large scale. Through the point $(1,3)$ draw secants which make $\Delta x=1,3,0.1,0.01$, respectively. Calculate the slope of each of these secants and show that the values are approaching the value of the slope of the curve at (1, 3).
3. Find the slope of the curve and the equation of the tangent to each of the following curves at the point mentioned. Verify each answer as in Ex. 1.
(a) $y=3x^{2};(1,3)$. (d) $y=x^{2}+4x-5;(1,0)$.
(b) $y=2x^{2}-5;(2,3)$. (e) $y=x^{3}+x^{2};(1,2)$.
(c) $y=x^{3};(1,1)$. (f) $y=x^{3}-3x+4;(2,6)$.
4. Find the slope of the curve $y=x^{2}-3x+1$ at any point $x=a$; from this find the highest (maximum) or lowest (minimum) point (if any), and show in what portions the curve is rising or falling.
5. Draw the following curves, using for greater accuracy the precise values of $x$ and $y$ at the highest (maximum) and the lowest (minimum) points, and the knowledge of the values of $x$ for which the curve rises or falls. The slope of the curve at the point where $x=0$ is also useful in $(b),(c),(e),(g)$.
(a) $y=x^{2}+5x+2$. (d) $y=x^{4}$. (g) $y=2x^{3}-8x$.
(b) $y=x^{3}$. (e)$y=-x^{2}+3x$. (h) $y=x^{3}-6x+5$.
(c) $y=x^{3}-3x+4$. (f) $y=3+12x-x^{3}$. (i) $y=x^{3}+x^{2}$.
6. Show that the slope of the graph of $y=ax+b$ is always $m=a$, (1) geometrically,(2) by the methods of §6.
7. Show that the lowest point on $y=x^{2}+px+q$ is the point where $x=-p/2,$ (1) by Analytic Geometry, (2) by the methods of §6.
8. The normal to a curve at a point is defined in Analytic Geometry to be the perpendicular to the tangent at that point. Its slope $n$ is shown to be the negative reciprocal of the slope $m$ of the tangent: $n=-1/m$. Find the slope of the normal, and the equation of the normal, in Ex. 1; in each of the equations under Ex. 3.
9. The slope $m$ of the curve $y=x^{2}$ at any point where $x=a$ is $m=2a$. Show that the slope is $+1$ at the point where $a=1/2$. Find the points where the slope has the value $-1,2,10$. Note that if the curve is drawn by taking different scales on the two axes, the slope no longer means the tangent of the angle made with the horizontal axis.
10. Find the points on the following curves where the slope has the values assigned to it;
(a) $y=x^{2}-3x+6;(m=1,\ -1,2)$.
(b) $y=x^{3};(m=0,\ +1,\ +6)$.
(c) $y=x^{3}-3x+4;(m=9,1)$.
11. Show that the curve $y=x^{3}-0.03x+2$ has a minimum at $(0.1,1.998)$ and a maximum at $(-0.1,2.002)$. Draw the curve near the point $(0,2)$ on a very large scale.
12. Draw each of the following curves on an appropriate scale; in each case show that the peculiar twist of the curve through its maximum and minimum would have been overlooked in ordinary plotting by points:
(a) $y=48x^{3}-x+1$.
[HINT. Use a very small vertical scale and a rather large horizontal scale. The slope at $x=0$ is also useful.]
(b) $y=x^{3}-30x^{2}+297x$.
[HINT. Use an exceedingly small vertical scale and a moderate horizontal scale. The slope at $x=10$ is also useful.]
7. Speed.
An important case of rate of change of a quantity is the rate at which a body moves,–its speed.
Consider the motion of a body falling from rest under the influence of gravity. During the first second it passes over 16 ft., during the next it passes over 48 ft., during the third over 80 ft. In general, if $t$ is the number of seconds, and $s$ the entire distance it has fallen, $s=16t^{2}$ if the gravitational constant $g$ be taken as 32. The graph of this equation (see Fig. 6) is a parabola with its vertex at the origin.
The speed, that is the rate of increase of the space passed over, is the slope of this curve, i.e.
$\lim_{\Delta t\doteq 0}\Delta s/\Delta t.$
This may be seen directly in another way. The average speed for an interval of time $\Delta t$ is found by dividing the difference between the space passed over at the beginning and at the end of that interval of time by the difference in time: i.e. the average speed is the difference quotient $\Delta s\div\Delta t$. By the speed at a given instant we mean the limit of the average speed over an interval $\Delta t$ beginning or ending at that instant as that interval approaches zero, i.e.
$\mathrm{speed}=\lim_{\Delta t\doteq 0}\Delta s/\Delta t.$
Taking the equation $s=16t^{2}$, if $t=1/2$, $s=4$ (see point $P$ in Fig. 6). After a lapse of time $\Delta t$, the new values are $t=1/2+\Delta t$ and $s=16(1/2+\Delta t)^{2}$ ($Q$ in Fig. 6).
Then
$\Delta s=16(1/2+\Delta t)^{2}-4=16\Delta t+16\overline{\Delta t}^{2},$
$\frac{\Delta s}{\Delta t}=16+16\Delta t.$
Whence
$\mathrm{speed}=\lim_{\Delta t\doteq 0}\frac{\Delta s}{\Delta t}=\lim_{\Delta t\doteq 0}(16+16\Delta t)=16;$
that is, the speed at the end of the first half second is 16 ft. per second.
Likewise, for any value of $t$, say $t=T$, $s=16T^{2}$, while for $t=T+\Delta t$, $s=16(T+\Delta t)^{2}$; hence
$\mathrm{average\ speed}=\frac{\Delta s}{\Delta t}=\frac{16(T+\Delta t)^{2}-16T^{2}}{\Delta t}=32T+16\Delta t$
and
$\mathrm{speed}=\lim_{\Delta t\doteq 0}\frac{\Delta s}{\Delta t}=32T.$
Thus, at the end of two seconds, $T=2$, and the speed is $32\cdot 2=64$, in feet per second.
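These speeds can be checked numerically (a sketch of my own; Exercise IV.2 below asks for the same computation by hand): the average speeds over shrinking intervals after $t=2$ approach $32T=64$ ft. per second.

```python
# s = 16 t^2: average speed over [2, 2 + dt] approaches 32 * 2 = 64 ft/sec.
s = lambda t: 16 * t**2
for dt in (1, 0.1, 0.01, 0.001):
    print(dt, (s(2 + dt) - s(2)) / dt)  # -> 80.0, 65.6, 64.16, 64.016
```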
8. Component Speeds. Any curve may be regarded as the path of a moving point. If a point $P$ does move along a curve, both $x$ and $y$ are fixed when the time $t$ is fixed. To specify the motion completely, we need equations which give the values of $x$ and $y$ in terms of $t$.
The horizontal speed is the rate of increase of $x$ with respect to the time. This may be thought of as the speed of the projection $M$ of $P$ on the $x$-axis. As shown in §7, this speed is the limit of the difference quotient $\Delta x\div\Delta t$ as $\Delta t\doteq 0$.
Likewise, the vertical speed is the limit of the difference quotient $\Delta y\div\Delta t$ as $\Delta t\doteq 0$. Since the slope $m$ of the curve is the limit of $\Delta y\div\Delta x$ as $\Delta x\doteq 0$; and since
$\frac{\Delta y}{\Delta x}=\frac{\Delta y}{\Delta t}\div\frac{\Delta x}{\Delta t},$
it follows that
$m=({\it vertical\ speed})\div({\it horizontal\ speed});$
that is, the slope of the curve is the ratio of the rate of increase of $y$ to the rate of increase of $x$.
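For instance (anticipating Exercise IV.7 below), for the motion $x=4t$, $y=16t^{2}$, the horizontal speed is $4$ and the vertical speed is $32t$; hence
$m=\frac{32t}{4}=8t=2x,$
since $x=4t$; and this agrees with the slope $2x$ of the path $y=x^{2}$ found in §4.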
9. Continuous Functions. In §§4-8, we have supposed that the curves used were smooth. The functions which we have had have all been representable by smooth curves; except perhaps at isolated points, to a small change in the value of one coordinate, there has been a correspondingly small change in the value of the other coordinate. Throughout this text, unless the contrary is expressly stated, the functions dealt with will be of the same sort. Such functions are called continuous. (See §10, p. 17.)
The curve $y=1/x$ is continuous except at the point $x=0$; $y=\tan x$ is continuous except at the points $x=\pm\pi/2,\pm 3\pi/2,$ etc. Such exceptional points occur frequently; we do not discard a curve because of them, but it is understood that any of our results may fail at such points.
EXERCISES IV.–SPEED
1. From the formula $s=16t^{2}$, calculate the values of $s$ when $t=1,2,1.1,1.01,1.001$. From these values calculate the average speed between $t=1$ and $t=2$; between $t=1$ and $t=1.1$ ; between $t=1$ and $t=1.01$; between $t=1$ and $t=1.001$. Show that these average speeds are successively nearer to the speed at the instant $t=1$.
2. Calculate as in Ex. 1 the average speed for smaller and smaller intervals of time after $t=2$; and show that these approach the speed at the instant $t=2$.
3. A body thrown vertically downwards from any height with an original velocity of 100 ft. per second passes over in time $t$ (in seconds) a distance $s$ (in feet) given by the equation $s=100t+16t^{2}$ (if $g=32$, as in §7). Find the speed $v$ at the time $t=1$; at the time $t=2$; at the time $t=4$; at the time $t=T$.
4. In Ex. $3$ calculate the average speeds for smaller and smaller intervals of time after $t=0$; and show that they approach the original speed $v_{0}=100$. Repeat the calculations for intervals beginning with $t=2$.
5. Calculate the speed of a body at the times indicated in the following possible relations between $s$ and $t$:
(a)$s=t^{2};t=1,2,10,T$. (c) $s=-16t^{2}+160t;t=0,2,5$.
(b) $s=16t^{2}-100t;t=0,2,T$. (d) $s=t^{3}-3t+4;t=0,1/2,1$.
6. The relation (c) in Ex. 5 holds (approximately, since $g=32$ approximately) for a body thrown upward with an initial speed of 160 ft. per second, where $s$ means the distance from the starting point counted positive upwards. Draw a graph which represents this relation between the values of $s$ and $t$.
In this graph mark the greatest value of $s$. What is the value of $v$ at that point? Find exact values of $s$ and $t$ for this point.
7. A body thrown horizontally with an original speed of 4 ft. per second falls in a curved path in a vertical plane, so that the values of its horizontal and its vertical distances from its original position are, respectively, $x=4t$, $y=16t^{2}$, where $y$ is measured downwards. Show that the vertical speed is $32T$, and that the horizontal speed is 4, at the instant $t=T$. Eliminate $t$ to show that the path is the curve $y=x^{2}$.
8. Show by Ex. 7 and §8 that the slope of the curve $y=x^{2}$ at the point where $t=1$, i.e. $(4,16)$, is $32\div 4$, or 8. Write the equation of the tangent at that point.
9. Show that the slope of the curve $y=x^{2}$ (Ex. 7) at the point $(a,\ a^{2})$, i.e. $t=a/4$, is $2a$, from Ex. 7 and §8; and also directly by means of §6.
10. If a body moves so that its horizontal and its vertical distances from the starting point are, respectively, $x=16t^{2},y=4t$, show that its path is the curve $y^{2}=x$; that its horizontal speed and its vertical speed are, respectively, $32T$ and 4, at the instant $t=T$.
11. From Ex. 10 and §8 show that the slope of the curve $y^{2}=x$ at the point $(16,4)$, i.e. when $t=1$, is $4\div 32=1/8$. Write the equation of the tangent at that point.
12. From Ex. 10 and §8 show that the slope of the curve $y^{2}=x$ at the point where $t=T$ is $4\div(32T)=1/(8T)=1/(2k)$, where $k$ is the value of $y$ at the point. Compare this result with that of Ex. 8.
10. Limits. Infinitesimals.
We have been led in what precedes to make use of limits. Thus the tangent to a curve at the point $P$ is defined by saying that its slope is the limit of the slope of a variable secant through $P$; the speed at a given instant is the limit of the average speed; the difference of the two values of $x,\Delta x$, was thought of as approaching zero; and so on. To make these concepts clear, the following precise statements are necessary and desirable.
When the difference between the variable $x$ and a constant $a$ becomes and remains less, in absolute value, than any preassigned positive quantity, however small, then $a$ is the limit of the variable $x$. [Footnote: When dealing with real numbers, absolute value is the value without regard to sign, so that the absolute value of $-2$ is $2$. A convenient symbol for it is two vertical lines; thus $|3-7|=4$.]
We also use the expression “$x$ approaches $a$ as a limit,” or, more simply, “$x$ approaches $a$.” The symbol for limit is $\lim$; the symbol for approaches is $\doteq$; thus we may write $\lim x=a$, or $x\doteq a$, or $\lim(a-x)=0$, or $a-x\doteq 0$.
When the limit of a variable is zero, the variable is called an infinitesimal. Thus $a-x$ above is an infinitesimal. The difference between any variable and its limit is always an infinitesimal. When a variable $x$ approaches a limit $a$, any continuous function $f(x)$ approaches the limit $f(a)$: thus, if $y=f(x)$ and $b=f(a)$, we may write
$\displaystyle\lim_{x\doteq a}y=b$, or $\displaystyle\lim_{x\doteq a}f(x)=f(a)$.
This condition is the precise definition of continuity at the point $x=a$. (See §9, p. 14.)
11. Properties of Limits. The following properties of limits will be assumed as self-evident; some of them have already been used in the articles noted below.
THEOREM A. The limit of the sum of two variables is the sum of the limits of the two variables. This is easily extended to the case of more than two variables. (Used in §§4, 6, and 7.)
THEOREM B. The limit of the product of two variables is the product of the limits of the variables. (Used in §§4, 6, and 7.)
THEOREM C. The limit of the quotient of one variable divided by another is the quotient of the limits of the variables, provided the limit of the divisor is not zero. (Used in §8.)
The exceptional case in Theorem $\mathrm{C}$ is really the most interesting and important case of all. The exception arises because when zero occurs as a denominator, the division cannot be performed. In finding the slope of a curve, we consider $\displaystyle\lim(\Delta y/\Delta x)$ as $\Delta x$ approaches zero; notice that this is precisely the case ruled out in Theorem C. Again, the speed is $\displaystyle\lim(\Delta s/\Delta t)$ as $\Delta t$ approaches zero. The limit of any such difference quotient is one of these exceptional cases.
Now it is clear that the slope of a curve (or the speed of an object) may have a great variety of values in different cases: no one answer is sufficient for all examples, in the case of the limit of a quotient when the denominator approaches zero.
THEOREM D. The limit of the ratio of two infinitesimals depends upon the law connecting them; otherwise it is quite indeterminate. Of this the student will see many instances; for the Differential Calculus consists of the consideration of just such limits. In fact, the very reason for the existence of the Differential Calculus is that the exceptional case of Theorem C is important, and cannot be settled in an offhand manner.
The thing to be noted here is, that, no matter how small two quantities may be, their ratio may be either small or large; and that, if the two quantities are variables whose limit is zero, the limit of their ratio may be either finite, zero, or non-existent. In our work with such forms we shall try to substitute an equivalent form whose limit can be found. Obviously, to say that two variables are vanishing implies nothing about the limit of their ratio.
12. Ratio of an Arc to its Chord.
Another important illustration of a ratio of infinitesimals is the ratio of the chord of a curve to its subtended arc:
$R=\frac{\mathrm{chord}\ PQ}{\mathrm{arc}\ PQ}.$
If $Q$ approaches $P$, both the arc and the chord approach zero. At any stage of the process the arc is greater than the chord; but as $Q$ approaches $P$ this difference diminishes very rapidly, and the ratio $R$ approaches 1:
$\lim_{Q\doteq P}R=\lim_{PQ\doteq 0}\frac{\mathrm{chord}\ PQ}{\mathrm{arc}\ PQ}=1.$
This property is self-evident because it amounts to the same thing as the definition of the length of the curve; we ordinarily think of the length of an arc of a curve as the limit of the length of an inscribed broken line, as the lengths of the segments of the broken line approach zero. Thus, the length of the circumference of a circle is defined to be the limit of the perimeter of an inscribed polygon as the lengths of all its sides approach zero. This would not be true if the ratio of an arc to its chord did not approach $1$. [Footnote: This point of view is fundamental. See Goursat-Hedrick, Mathematical Analysis, Vol. I, §80, p. 161.] At some exceptional points the property may fail, but such points are always subject to special investigation.
13. Ratio of the Sine of an Angle to the Angle. In a circle, the arc $PQ$ and the chord $PQ$ can be expressed in terms of the angle at the center. Let $\alpha=\angle QOP/2$; then $\mathrm{arc}\ PQ=2\alpha\times r$ if $\alpha$ is measured in circular measure (see Tables, II, F, 3); and the chord $PQ=2r\sin\alpha$, since $r\sin\alpha=AP$.
It follows that
$\lim_{\alpha\doteq 0}\frac{\mathrm{chord}\ PQ}{\mathrm{arc}\ PQ}=\lim_{\alpha\doteq 0}\frac{2r\sin\alpha}{2r\alpha}=\lim_{\alpha\doteq 0}\frac{\sin\alpha}{\alpha}=1,$
hence $\displaystyle\lim_{\alpha\doteq 0}\frac{\sin\alpha}{\alpha}=1$, for we have just seen that the limit of the ratio of an infinitesimal chord to its arc is 1.
This result is very important in later work; just here it serves as a new illustration of the ratio of infinitesimals: the ratio of the sine of an angle to the angle itself (measured in circular measure) approaches 1 as the angle approaches zero.
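The numeric behavior is easy to exhibit (a sketch of my own; the angle must be in radians, i.e. circular measure):

```python
import math

# sin(a) / a approaches 1 as a approaches 0 (a in radians).
for a in (0.5, 0.1, 0.01, 0.001):
    print(a, math.sin(a) / a)  # -> 0.9589..., 0.9983..., 0.99998..., 0.9999998...
```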
14. Infinity. Theorem D accounts for the case when the numerator as well as the denominator in Theorem C is infinitesimal. There remains the case when the denominator only is infinitesimal. A variable whose reciprocal is infinitesimal is said to become infinite as the reciprocal approaches zero.
Thus $y=1/x$ is a variable whose reciprocal is $x$. As $x$ approaches zero, $y$ is said to become infinite. Notice however that $y$ has no value whatever when $x=0$. Likewise $y=\sec x$ is a variable whose reciprocal, $\cos x$, is infinitesimal as $x$ approaches $\pi/2$; hence we say that $\sec x$ becomes infinite as $x$ approaches $\pi/2$.
In any case, it is clear that a variable which becomes infinite becomes and remains larger in absolute value than any preassigned positive number, however large.
The student should carefully notice that infinity is not a number; when we say that “$\sec x$ becomes infinite as $x$ approaches $\pi/2$,” [Footnote: Or, as is stated in short form in many texts, “$\sec(\pi/2)=\infty$.”] we do not mean that $\sec(\pi/2)$ has a value; we merely tell what occurs when $x$ approaches $\pi/2$.
EXERCISES V.–LIMITS AND INFINITESIMALS
1. Imagine a point traversing a line-segment in such fashion that it traverses half the segment in the first second, half the remainder in the next second, and so on, always half the remainder in the next following second. Will it ever traverse the entire line? Show that the remainder after $t$ seconds is $1/2^{t}$, if the total length of the segment is 1. Is this infinitesimal? Why?
2. Show that the distance traversed by the point in Ex. 1 in $t$ seconds is $1/2+1/2^{2}+\cdots+1/2^{t}$. Show that this sum is equal to $1-1/2^{t}$; hence show that its limit is 1. Show that in any case the limit of the distance traversed is the total distance, as $t$ increases indefinitely.
3. Show that the limit of $3-x^{2}$ as $x$ approaches zero is 3. State this result in the symbols used in §10. Draw the graph of $y=3-x^{2}$ and show that $y$ approaches 3 as $x$ approaches zero.
4. Evaluate the following limits:
(a)$\displaystyle\lim_{x\doteq 0}(2-6x+3x^{2})$. (d)$\displaystyle\lim_{x\doteq 1}\frac{3-2x^{2}}{4+2x^{2}}$. (g)$\displaystyle\lim_{x\doteq 7}\frac{x^{2}-3x+2}{x^{2}+2x+3}$.
(b)$\displaystyle\lim_{x\doteq 1}(2-6x+3x^{2})$. (e)$\displaystyle\lim_{x\doteq 0}\frac{x}{1-x}$. (h)$\displaystyle\lim_{x\doteq 0}\frac{a+bx}{c+dx}$.
(c) $\displaystyle\lim_{x\doteq k}(a+bx+cx^{2})$. (f) $\displaystyle\lim_{x\doteq 1}\frac{1-x}{x}$. (i) $\displaystyle\lim_{x\doteq 0}\frac{a+bx+cx^{2}}{m+nx+lx^{2}}$.
5. If the numerator and denominator of a fraction contain a common factor, that factor may be canceled in finding a limit, since the value of the fraction which we use is not changed. Evaluate before and after canceling a common factor:
(a) $\displaystyle\lim_{x\doteq 1}\frac{(x+2)(x+1)}{(2x+3)(x+1)}$. (b) $\displaystyle\lim_{x\doteq 0}\frac{x(x+2)}{(x+1)(x+2)}$.
Evaluate after (not before) removing a common factor:
(c) $\displaystyle\lim_{x\doteq 0}\frac{x^{2}}{x}$. (d) $\displaystyle\lim_{x\doteq 1}\frac{x^{2}-3x+2}{x^{2}-1}$. (e) $\displaystyle\lim_{x\doteq 1}\frac{(x+2)(x-1)}{(2x+3)(x-1)}$.
(f) $\displaystyle\lim_{x\doteq 1}\frac{\sqrt{x}-1}{x-1}$. (g) $\displaystyle\lim_{x\doteq 0}\frac{x^{2}(x+1)^{2}}{x^{3}+2x^{2}}$. (h) $\displaystyle\lim_{x\doteq 0}\frac{x^{n}}{x}=\left\{\begin{array}{ll}0,&n>1\\ 1,&n=1\end{array}\right.$
6. Show that
$\lim_{x\doteq\infty}\frac{2x^{2}+3}{x^{2}+4x+b}=2.$
[HINT. Divide numerator and denominator by $x^{2}$; then such terms as $3/x^{2}$ approach zero as $x$ becomes infinite.]
7. Evaluate:
(a) $\displaystyle\lim_{x\doteq\infty}\frac{2x+1}{3x+2}$. (b) $\displaystyle\lim_{x\doteq\infty}\frac{2x^{2}-4}{3x^{2}+2}$. (c) $\displaystyle\lim_{x\doteq\infty}\frac{ax+b}{mx+n}$.
(d) $\displaystyle\lim_{x\doteq\infty}\frac{x}{\sqrt{1+x^{2}}}$. (e) $\displaystyle\lim_{x\doteq\infty}\frac{\sqrt{1+x^{2}}}{\sqrt{x^{2}-1}}$ . (f) $\displaystyle\lim_{x\doteq\infty}\frac{\sqrt{ax^{2}+bx+c}}{mx+n}$.
8. Let $O$ be the center of a circle of radius $r=OB$, and let $\alpha=\angle COB$ be an angle at the center. Let $BT$ be perpendicular to $OB$, and let $BF$ be perpendicular to $OC$. Show that $OF$ approaches $OC$ as $\alpha$ approaches zero; likewise chord $CB\doteq 0$, arc $CB\doteq 0$, and $FC\doteq 0$, as $\alpha\doteq 0$.
9. In the figure of Ex. 8, show that the obvious geometric inequality $FB<\mathrm{arc}\ CB<BT$ is equivalent to $r\sin\alpha<r\cdot\alpha<r\tan\alpha$, if $\alpha$ is measured in circular measure. Hence show that $\alpha/\sin\alpha$ lies between 1 and $1/\cos\alpha$, and therefore that $\displaystyle\lim(\alpha/\sin\alpha)=1$ as $\alpha\doteq 0$. (Verification of §13.)
10. In the figure of Ex. 8, show that
$\displaystyle\lim_{\alpha\doteq 0}\frac{FB}{r}=0;\ \lim_{\alpha\doteq 0}\frac{OF}{r}=1;\ \lim_{\alpha\doteq 0}\frac{BT}{r}=0;\ \lim_{\alpha\doteq 0}\frac{FC}{r}=0;\ \lim_{\alpha\doteq 0}\frac{\mathrm{arc}\ CB}{r}=0$.
11. Show that the following quantities become infinite as the independent variable approaches the value specified: in $(a)$ and $(b)$ draw the graph.
(a) $\displaystyle\lim_{x\doteq 0}\frac{1}{x^{2}}$. (b) $\displaystyle\lim_{x\doteq 1}\frac{x+2}{x-1}$
(c) $\displaystyle\lim_{\alpha\doteq 0}\frac{r}{FC}$, (Ex. 8). (d) $\displaystyle\lim_{\alpha\doteq 0}\frac{r}{BT}$, (Ex. 8).
(e) $\displaystyle\lim_{x\doteq 0}\frac{x^{n}}{x},(n<1)$. (f) $\displaystyle\lim_{x\doteq 2}\frac{2x+3}{\sqrt{x^{2}-3x+2}}$.
12. As the chord of a circle approaches zero, which of the following ratios has a finite limit, which is infinitesimal, and which is becoming infinite: the chord to its arc; the radius to the chord; the sector of the arc to the triangle cut off by the chord; the area of the circle to the sector; the chord of twice the arc to the chord of thrice the arc; the radius of the circle to the chord of an arc a thousand times the given arc ?
13. Is the sum of two infinitesimals itself infinitesimal? Is the difference? Is the product? Is the quotient? Is a constant times an infinitesimal an infinitesimal?
14. If to each of two integers an infinitesimal is added, show that the ratio of these sums differs from that of the integers by an infinitesimal. [See Ex. 4 $(h).$]
15. Show that the graph of $y=f(x)$ has a vertical asymptote if $f(x)$ becomes infinite as $x\doteq a$. Illustrate this by drawing the following graphs:
(a) $y=\displaystyle\frac{3x+2}{x-2}$. (c) $y=\displaystyle\frac{1}{1-\cos x}$. (e) $y=\displaystyle\frac{1}{\sqrt{1-x^{2}}}$.
(b) $y=\displaystyle\frac{2x-1}{(x+1)(x-b)}$. (d) $y=\displaystyle\frac{e^{x}+e^{-x}}{e^{x}-e^{-x}}$. (f) $y=\displaystyle\frac{ax+b}{cx+d}$.
15. Derivatives. While such illustrations as those in §12 and Exercises V, above, are interesting and reasonably important, by far the most important cases of the ratio of two infinitesimals are those of the type studied in §§4-8, in which each of the infinitesimals is the difference of two values of a variable, such as $\Delta y/\Delta x$ or $\Delta s/\Delta t$. Such a difference quotient $\Delta y/\Delta x$ of $y$ with respect to $x$ evidently represents the average rate of increase of $y$ with respect to $x$ in the interval $\Delta x$; if $x$ represents time and $y$ distance, then $\Delta y/\Delta x$ is the average speed over the interval $\Delta x$ (§7, p. 13); if $y=f(x)$ is thought of as a curve, then $\Delta y/\Delta x$ is the slope of a secant or the average rate of rise of the curve in the interval $\Delta x$ (§4, p. 6).
The limit obtained in such cases represents the instantaneous rate of increase of one variable with respect to the other; this may be the slope of a curve, or the speed of a moving object, or some other rate, depending upon the nature of the problem in which it arises.
In general, the limit of the quotient $\Delta y/\Delta x$ of two infinitesimal differences is called the derivative of $y$ with respect to $x$; it is represented by the symbol $dy/dx$:
$\frac{dy}{dx}\equiv\mathrm{derivative\ of\ }y\mathrm{\ with\ respect\ to\ }x=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}.$
Henceforth we shall use this new symbol $dy/dx$ or other convenient abbreviations; [Footnote: Often read “the $x$ derivative of $y$.” Other names sometimes used are differential coefficient and derived function. Other convenient notations often used are $D_{x}y$, $y_{x}$, $y^{\prime}$, $\dot{y}$; the last two are not safe unless it is otherwise clear what the independent variable is.] but the student must not forget the real meaning: slope, in the case of a curve; speed, in the case of motion; some other tangible concept in any new problem which we may undertake; in every case the rate of increase of $y$ with respect to $x$.
Any mathematical formulas we obtain will apply in any of these cases; we shall use the letters $x$ and $y$, the letters $s$ and $t$, and other suggestive combinations; but the student should remember that any formula written in $x$ and $y$ also holds true, for example, with the letters $s$ and $t$, or for any other pair of letters.
16. Formula for Derivatives. If we are to find the value of a derivative, as in §§4-7, we must have given one of the variables $y$ as a function of the other $x$:
(1) $y=f(x)$.
If we think of (1) as a curve, we may, as in §4, take any point $P$ whose coordinates are $x$ and $y$, and join it by a secant $PQ$ to any other point $Q$, whose coordinates are $x+\Delta x,y+\Delta y$. Here $x$ and $y$ represent fixed values of $x$ and $y$; this will prove more convenient than to use new letters each time, as we did in §§4-7.
Since $P$ lies on the curve (1), its coordinates $(x,\ y)$ satisfy the equation (1), $y=f(x)$. Since $Q$ lies on (1), $x+\Delta x$ and $y+\Delta y$ satisfy the same equation; hence we must have
(2) $y+\Delta y=f(x+\Delta x)$.
Subtracting (1) from (2) we get
(3) $\Delta y=f(x+\Delta x)-f(x)$;
whence the difference quotient is
(4) $\displaystyle\frac{\Delta y}{\Delta x}=\frac{f(x+\Delta x)-f(x)}{\Delta x}={\it average\ slope\ over\ }PM$,
and therefore the derivative is
(5) $\frac{dy}{dx}=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}\equiv\lim_{\Delta x\doteq 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}={\it slope\ at\ }P$
[Footnote: Instead of slope, read speed in case the problem deals with a motion, as in §7. In general, $\Delta y/\Delta x$ is the average rate of increase, and $dy/dx$ is the instantaneous rate.]
This formula is often convenient; we shall apply it at once.
17. Rule for Differentiation.
The process of finding a derivative is called differentiation. To apply formula (5) of §16:
(A) Find $(y+\Delta y)$ by substituting $(x+\Delta x)$ for $x$ in the given function or equation; this gives $y+\Delta y=f(x+\Delta x)$.
(B) Subtract $y$ from $y+\Delta y$; this gives $\Delta y=f(x+\Delta x)-f(x)$.
(C) Divide $\Delta y$ by $\Delta x$ to find the difference quotient $\Delta y/\Delta x$; simplify this result.
(D) Find the limit of $\Delta y/\Delta x$ as $\Delta x$ approaches zero; this result is the derivative, $dy/dx$.
Example 1. Given $y=f(x)\equiv x^{2}$, to find $dy/dx$.
$(A)f(x+\Delta x)=(x+\Delta x)^{2}$.
$(B)\Delta y=f(x+\Delta x)-f(x)=(x+\Delta x)^{2}-x^{2}=2x\Delta x+\overline{% \Delta x}^{2}$.
$(C)\Delta y/\Delta x=(2x\Delta x+\overline{\Delta x}^{2})\div\Delta x=2x+\Delta x$.
$(D)dy/dx=\displaystyle\lim_{\Delta x\doteq 0}\Delta y/\Delta x=\lim_{\Delta x% \doteq 0}(2x+\Delta x)=2x$.
Compare this work and the answer with the work of §4, p. 6.
Example 2. Given $y=f(x)\equiv x^{3}-12x+7$, to find $dy/dx$.
$(A)f(x+\Delta x)=(x+\Delta x)^{3}-12(x+\Delta x)+7$.
$(B)\Delta y=f(x+\Delta x)-f(x)=3x^{2}\Delta x+3x\overline{\Delta x}^{2}+\overline{\Delta x}^{3}-12\Delta x$.
$(C)\Delta y/\Delta x=3x^{2}+3x\Delta x+\overline{\Delta x}^{2}-12$.
$(D)dy/dx=\displaystyle\lim_{\Delta x\doteq 0}\Delta y/\Delta x=\lim_{\Delta x% \doteq 0}(3x^{2}+3x\Delta x+\overline{\Delta x}^{2}-12)=3x^{2}-12$
Compare this work and the answer with the work of Example 3, §6.
Example 3. Given $y=f(x)\equiv 1/x^{2}$, to find $dy/dx$.
$(A)f(x+\Delta x)=\frac{1}{(x+\Delta x)^{2}}$ .
$(B)\displaystyle\Delta y=f(x+\Delta x)-f(x)=\frac{1}{(x+\Delta x)^{2}}-\frac{1% }{x^{2}}=-\frac{2x\Delta x+\overline{\Delta x}^{2}}{x^{2}(x+\Delta x)^{2}}\cdot$
$(C)\displaystyle\Delta y/\Delta x=-\frac{2x+\Delta x}{x^{2}(x+\Delta x)^{2}}\cdot$
$(D)dy/dx=\lim_{\Delta x\doteq 0}\displaystyle\frac{\Delta y}{\Delta x}=\lim_{% \Delta x\doteq 0}\left[-\displaystyle\frac{2x+\Delta x}{x^{2}(x+\Delta x)^{2}}% \right]=-\frac{2x}{x^{4}}=-\frac{2}{x^{3}}\cdot$
Example 4. Given $y=f(x)\equiv\sqrt{x}$, to find $dy/dx$, or $df(x)/dx$.
$(A)f(x+\Delta x)=\sqrt{x+\Delta x}$.
$(B)\Delta y=f(x+\Delta x)-f(x)=\sqrt{x+\Delta x}-\sqrt{x}$.
(C) $\displaystyle\frac{\Delta y}{\Delta x}=\frac{\sqrt{x+\Delta x}-\sqrt{x}}{% \Delta x}=\frac{\sqrt{x+\Delta x}-\sqrt{x}}{\Delta x}\cdot\displaystyle\frac{% \sqrt{x+\Delta x}+\sqrt{x}}{\sqrt{x+\Delta x}+\sqrt{x}}$
$=\frac{1}{\sqrt{x+\Delta x}+\sqrt{x}}$
$(D)\displaystyle\frac{dy}{dx}=\lim_{\Delta x\doteq 0}\frac{\Delta y}{\Delta x}=\lim_{\Delta x\doteq 0}\frac{1}{\sqrt{x+\Delta x}+\sqrt{x}}=\frac{1}{2\sqrt{x}}$.
(Compare Ex. 11, p. 10.)
Example 5. Given $y=f(x)\equiv x^{7}$, to find $df(x)/dx$.
$(A)f(x+\Delta x)=(x+\Delta x)^{7}=x^{7}+7x^{6}\Delta x+($ terms with a factor $\overline{\Delta x}^{2})$.
$(B)\Delta y=f(x+\Delta x)-f(x)=7x^{6}\Delta x+($ terms with a factor $\overline{\Delta x}^{2})$.
$(C)\Delta y/\Delta x=7x^{6}+($ terms with a factor $\Delta x)$.
$(D)dy/dx=\displaystyle\lim_{\Delta x\doteq 0}\Delta y/\Delta x=\lim_{\Delta x\doteq 0}\left[7x^{6}+($ terms with a factor $\Delta x)\right]=7x^{6}$.
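The same four steps apply to any positive integer power $x^{n}$, for the binomial expansion again leaves exactly one term free of a factor $\overline{\Delta x}^{2}$ after the subtraction; as a brief sketch of the general case, following the pattern of Example 5:

$f(x+\Delta x)=(x+\Delta x)^{n}=x^{n}+nx^{n-1}\Delta x+($ terms with a factor $\overline{\Delta x}^{2})$;

$\Delta y/\Delta x=nx^{n-1}+($ terms with a factor $\Delta x)$;

$\displaystyle\frac{d(x^{n})}{dx}=\lim_{\Delta x\doteq 0}\Delta y/\Delta x=nx^{n-1}$.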
EXERCISES VI.–FORMAL DIFFERENTIATION
1. Find the derivative of $y=x^{3}$ with respect to $x$. [Compare Ex. 8 (c), p. 11.] Write the equation of the tangent at the point $(2,8)$ to the curve $y=x^{3}$.
2. Find the derivatives of the following functions with respect to $x$:
(a)$x^{2}-3x+4$. (b) $x^{3}-6x+7$. (c) $x^{4}+5$.
(d) $x^{4}+3x^{2}-2$. (e) $x^{3}+2x^{2}-4$. (f) $x^{4}-3x^{3}+6x$.
(g) $\displaystyle\frac{1}{x^{2}}$. (h) $\displaystyle\frac{1}{x+1}$. (i) $\displaystyle\frac{1}{2x-3}$.
(j) $\sqrt{x+1}$. (k) $\displaystyle\frac{x}{x+1}$. (l) $\displaystyle\frac{2x+3}{x-2}$
3. Find the equation of the tangent and the equation of the normal to the curve $y=1/x$ at the point where $x=2$. (See Ex. 8, p. 11.)
4. Find the values of x for which the curve $y=x^{3}-16x+1$ rises and those for which it falls; find the highest point (maximum) and the lowest point (minimum). Draw the graph accurately.
5. Draw accurate graphs for the following curves:
(a)$y=x^{3}-18x+3$. (c) $y=x^{4}-32x$.
(b)$y=x^{3}+3x^{2}$. (d)$y=x^{4}-18x^{2}$.
6. Determine the speed of a body which moves so that
$s=16t^{2}+10t+5.$
[A body thrown down from a height with initial speed 10 ft. per second moves in this way approximately, if s is measured downward from a mark 5 ft. above the starting point.]
7. If a body moves so that its horizontal and its vertical distances from a point are, respectively, $x=10t,y=-16t^{2}+10t$, find its horizontal speed and its vertical speed. Show that the path is
$y=-16x^{2}/100+x,$
and that the slope of this path is the ratio of the vertical speed to the horizontal speed. [These equations represent, approximately, the motion of an object thrown upward at an angle of $45^{\mathrm{o}}$ with a speed $10\sqrt{2}$.]
8. A stone is dropped into still water. The circumference $c$ of the growing circular waves thus made, as a function of the radius $r$, is $c=2\pi r$.
Show that $dc/dr=2\pi$; i.e. the circumference changes $2\pi$ times as fast as the radius.
Let $A$ be the area of the circle. Show that $dA/dr=2\pi r$; i.e. the rate at which the area is changing compared to the radius is numerically equal to the circumference.
9. Determine the rates of change of the following variables: (a) The surface of a sphere compared with its radius, as the sphere expands. (b) The volume of a cube compared with its edge, as the cube enlarges. (c) The volume of a right circular cone compared with the radius of its base (the height being fixed), as the base spreads out.
10. If a man 6 ft. tall is at a distance $x$ from the base of an arc light 10 ft. high, and if the length of his shadow is $s$, show that $s/6=x/4$, or $s=3x/2$. Find the rate $(ds/dx)$ at which the length $s$ of his shadow increases as compared with his distance $x$ from the lamp base.
11. The specific heat of a substance (e.g. water) is the amount of heat required to raise the temperature of a unit volume of that substance $1^{\text{o}}$ (Centigrade). This amount is known to change for the same substance at different temperatures. The average specific heat between two temperatures is the quantity of heat $\Delta H$ consumed in raising the temperature divided by the change $\Delta t$ in the temperature; show that the actual specific heat at a given temperature is $dH/dt$.
12. The coefficient of expansion of a solid substance is the amount a bar of that substance 1 ft. long will expand when the temperature changes $1^{\mathrm{o}}$. Express the average coefficient of expansion, and show that the coefficient of expansion at any given temperature is $dl/dt$, if the bar is precisely 1 ft. long at that temperature. (See also Ex. 12, p. 145.)
|
|
Overview of Palettes
A palette is a collection of buttons representing items such as predefined symbols, expressions, operators, Matrices, and Vectors. By clicking the buttons on the palettes, you can build or edit mathematical expressions without having to remember the Maple command syntax. Maple provides over 30 palettes.
You can construct mathematical expressions using the Expression and Calculus palettes.
You can draw a symbol with the Handwriting palette and let Maple match it with existing symbols.
You can create a Favorites palette of the expressions and entities you use frequently.
Expression Palettes
Expression - a palette containing a collection of common operations, trigonometric expressions, and function building tools
Matrix - a palette consisting of a dialog that allows you to enter the number of rows and columns required; designate type, such as zero-filled; and designate shape, such as diagonal. Use this palette to insert a Matrix or Vector.
Layout - a palette that allows you to add math expressions using layout templates, such as superscripts and subscripts
Calculus - a palette for constructing expressions commonly used in calculus, such as derivatives and single, double and triple integrals
Handwriting - a palette that provides an efficient way to find and insert the right symbol. You draw the symbol with your mouse and then Maple matches your input against symbols available in the system.
Units - a palette that inserts units by dimensionality
Accents - a palette that allows you to insert decorated names such as an x with an arrow over it to denote a vector
Trigonometric and Hyperbolic Functions - a palette for constructing expressions containing trigonometric and hyperbolic functions
Student Random Variables - a palette for constructing random variables based on distributions in the Student Statistics package
Group Constructors - a palette for constructing groups based on the Group Theory package
Other Palettes
Favorites - a palette where you can add expressions and entities that you use frequently
Variables - manage all of your assigned variables in your current Maple session
Components - a palette that allows you to embed simple graphical interface components (for example, a button) into your worksheet. The components can be associated with actions that are to be executed.
eBook Metadata - a palette consisting of a collection of Metadata tags you can use, along with commands from the eBookTools package, to author documents
Live Data Plots - create and customize statistical plots
Tasks - a palette where you can store tasks that you have created
Alphabetical Palettes
Alphabetical palettes include Roman Extended Uppercase, Roman Extended Lowercase, Diacritical Marks, Greek, Cyrillic, Script, Open Face, and Fraktur.
Use Roman Extended Uppercase and Lowercase palettes for accents, such as grave or umlaut.
Use the Diacritical Marks palette in 2-D math regions where accents are required. To enter these marks, use underscript and overscript shortcut keys or the equivalent menu items under the Insert>Typesetting menu.
• Underscript: Ctrl+' (Command+', for Macintosh)
• Overscript: Ctrl+Shift+" (Command+Shift+", for Macintosh)
Mathematical Palettes
Common Symbols - a palette of common symbols for constructing expressions using sums, products, $\pi$, and $e$, among other things
Relational - a palette of standard relations for constructing expressions
Relational Round - a palette of relational round symbols for constructing expressions
Operators - a palette of operators for constructing expressions
Large Operators - a palette of large operators for constructing expressions
Negated - a palette of negation symbols for constructing expressions
Fenced - a palette of fenced symbols for constructing expressions
Arrows - a palette of arrow symbols for constructing expressions
Constants and Symbols - a palette of constants and symbols for constructing expressions
Punctuation - a palette of various punctuation symbols, such as the registered trademark and copyright symbols, for inserting into text regions
Miscellaneous - a palette of miscellaneous math and other symbols outside the above categories
|
|
# How do you find the slope given (-4, -2) and (0,0)?
Slope (gradient) $= \frac{{y}_{2} - {y}_{1}}{{x}_{2} - {x}_{1}} = \frac{0 - \left(- 2\right)}{0 - \left(- 4\right)} = \frac{1}{2}$,
assuming $\left({x}_{1},{y}_{1}\right) \to \left(-4,-2\right)$, as it is listed first.
|
|
Thomas' Calculus 13th Edition
$a=\dfrac{4}{3} \ T + \dfrac{2 \sqrt 5}{3} \ N$
We calculate the velocity as follows: $v(t)=\dfrac{dr}{dt}=i+2j+2tk \implies |v(t)|=\sqrt {1^2+(2)^2+(2t)^2}=\sqrt {4t^2+5}$. The acceleration is $a(t)=\dfrac{d\,v(t)}{dt}=2k$, so $|a|=2$. The tangential component is $a_{T}=\dfrac{d|v(t)|}{dt}=\dfrac{4t}{\sqrt {4t^2+5}}$, so $a_{T}(1)=\dfrac{4(1)}{\sqrt {4(1)^2+5}}=\dfrac{4}{3}$. Now, $a_{N}=\sqrt {|a|^2 -a^2_{T}}=\sqrt {2^2 -\left(\dfrac{4}{3}\right)^2}=\dfrac{2 \sqrt 5}{3}$. So, $a=a_{T} \ T+a_{N} \ N=\dfrac{4}{3} \ T + \dfrac{2 \sqrt 5}{3} \ N$.
|
|
SKY-MAP.ORG
# μα Boo (Alkalurops)
### Related articles
Observed Orbital Eccentricities

For 391 spectroscopic and visual binaries with known orbital elements and having B0-F0 IV or V primaries, we collected the derived eccentricities. As has been found by others, those binaries with periods of a few days have been circularized. However, those with periods up to about 1000 or more days show reduced eccentricities that asymptotically approach a mean value of 0.5 for the longest periods. For those binaries with periods greater than 1000 days their distribution of eccentricities is flat from 0 to nearly 1, indicating that in the formation of binaries there is no preferential eccentricity. The binaries with intermediate periods (10-100 days) lack highly eccentric orbits.

Statistical Constraints for Astrometric Binaries with Nonlinear Motion

Useful constraints on the orbits and mass ratios of astrometric binaries in the Hipparcos catalog are derived from the measured proper motion differences of Hipparcos and Tycho-2 (Δμ), accelerations of proper motions (μ˙), and second derivatives of proper motions (μ̈). It is shown how, in some cases, statistical bounds can be estimated for the masses of the secondary components. Two catalogs of astrometric binaries are generated, one of binaries with significant proper motion differences and the other of binaries with significant accelerations of their proper motions. Mathematical relations between the astrometric observables Δμ, μ˙, and μ̈ and the orbital elements are derived in the appendices. We find a remarkable difference between the distribution of spectral types of stars with large accelerations but small proper motion differences and that of stars with large proper motion differences but insignificant accelerations. The spectral type distribution for the former sample of binaries is the same as the general distribution of all stars in the Hipparcos catalog, whereas the latter sample is clearly dominated by solar-type stars, with an obvious dearth of blue stars. We point out that the latter set includes mostly binaries with long periods (longer than about 6 yr).

B Star Rotational Velocities in h and χ Persei: A Probe of Initial Conditions during the Star Formation Epoch?

Projected rotational velocities (v sin i) have been measured for 216 B0-B9 stars in the rich, dense h and χ Persei double cluster and compared with the distribution of rotational velocities for a sample of field stars having comparable ages (t ~ 12-15 Myr) and masses (M ~ 4-15 Msolar). For stars that are relatively little evolved from their initial locations on the zero-age main sequence (ZAMS) (those with masses M ~ 4-5 Msolar), the mean v sin i measured for the h and χ Per sample is slightly more than 2 times larger than the mean determined for field stars of comparable mass, and the cluster and field v sin i distributions differ with a high degree of significance. For somewhat more evolved stars with masses in the range 5-9 Msolar, the mean v sin i in h and χ Per is 1.5 times that of the field; the v sin i distributions differ as well, but with a lower degree of statistical significance. For stars that have evolved significantly from the ZAMS and are approaching the hydrogen exhaustion phase (those with masses in the range 9-15 Msolar), the cluster and field star means and distributions are only slightly different. We argue that both the higher rotation rates and the pattern of rotation speeds as a function of mass that differentiate main-sequence B stars in h and χ Per from their field analogs were likely imprinted during the star formation process rather than a result of angular momentum evolution over the 12-15 Myr cluster lifetime. We speculate that these differences may reflect the effects of the higher accretion rates that theory suggests are characteristic of regions that give birth to dense clusters, namely, (1) higher initial rotation speeds; (2) higher initial radii along the stellar birth line, resulting in greater spin-up between the birth line and the ZAMS; and (3) a more pronounced maximum in the birth line radius-mass relationship that results in differentially greater spin-up for stars that become mid- to late-B stars on the ZAMS.

Tidal Effects in Binaries of Various Periods

We found in the published literature the rotational velocities for 162 B0-B9.5, 152 A0-A5, and 86 A6-F0 stars, all of luminosity classes V or IV, that are in spectroscopic or visual binaries with known orbital elements. The data show that stars in binaries with periods of less than about 4 days have synchronized rotational and orbital motions. Stars in binaries with periods of more than about 500 days have the same rotational velocities as single stars. However, the primaries in binaries with periods of between 4 and 500 days have substantially smaller rotational velocities than single stars, implying that they have lost one-third to two-thirds of their angular momentum, presumably because of tidal interactions. The angular momentum losses increase with decreasing binary separations or periods and increase with increasing age or decreasing mass.

Differential rotation in rapidly rotating F-stars

We obtained high quality spectra of 135 stars of spectral types F and later and derived "overall" broadening functions in selected wavelength regions utilizing a Least Squares Deconvolution (LSD) procedure. Precision values of the projected rotational velocity v sin i were derived from the first zero of the Fourier transformed profiles and the shapes of the profiles were analyzed for effects of differential rotation. The broadening profiles of 70 stars rotating faster than v sin i = 45 km s-1 show no indications of multiplicity nor of spottedness. In those profiles we used the ratio of the first two zeros of the Fourier transform q_2/q_1 to search for deviations from rigid rotation. In the vast majority the profiles were found to be consistent with rigid rotation. Five stars were found to have flat profiles probably due to cool polar caps; in three stars cuspy profiles were found. Two out of those three cases may be due to extremely rapid rotation seen pole on; only in one case (v sin i = 52 km s-1) is solar-like differential rotation the most plausible explanation for the observed profile. These results indicate that the strength of differential rotation diminishes in stars rotating as rapidly as v sin i >~ 50 km s-1. Table A.1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/412/813 Based on observations collected at the European Southern Observatory, La Silla, 69.D-0015(B).

Automated spectroscopic abundances of A and F-type stars using echelle spectrographs. II. Abundances of 140 A-F stars from ELODIE

Using the method presented in Erspamer & North (hereafter Paper I), detailed abundances of 140 stars are presented. The uncertainties characteristic of this method are presented and discussed. In particular, we show that for a S/N ratio higher than 200, the method is applicable to stars with a rotational velocity as high as 200 km s-1. There is no correlation between abundances and V sin i, except a spurious one for Sr, Sc and Na which we explain by the small number of lines of these elements combined with a locally biased continuum. Metallic giants (Hauck) show larger abundances than normal giants for at least 8 elements: Al, Ca, Ti, Cr, Mn, Fe, Ni and Ba. The anticorrelation for Na, Mg, Si, Ca, Fe and Ni with V sin i suggested by Varenne & Monier is not confirmed. The predictions of the Montréal models (e.g. Richard et al.) are not fulfilled in general. However, a correlation between [Fe/H] and log g is found for stars of 1.8 to 2.0 M_sun. Various possible causes are discussed, but the physical reality of this correlation seems inescapable. Based on observations collected at the 1.93 m telescope at the Observatoire de Haute-Provence (St-Michel l'Observatoire, France) and CORALIE. Based on observations collected at the Swiss 1.2 m Leonard Euler telescope at the European Southern Observatory (La Silla, Chile). Tables 5 and 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u.strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/398/1121

Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions

Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II are compared with those of previous works in the field.

Metallicity Determinations from Ultraviolet-Visual Spectrophotometry. I. The Test Sample

New visual spectrophotometric observations of non-supergiant solar neighborhood stars are combined with IUE Newly Extracted Spectra (INES) energy distributions in order to derive their overall metallicities, [M/H]. This fundamental parameter, together with effective temperature and apparent angular diameter, is obtained by applying the flux-fitting method while surface gravity is derived from the comparison with evolutionary tracks in the theoretical H-R diagram. Trigonometric parallaxes for the stars of the sample are taken from the Hipparcos Catalogue. The quality of the flux calibration is discussed by analyzing a test sample via comparison with external photometry. The validity of the method in providing accurate metallicities is tested on a selected sample of G-type stars with well-determined atmospheric parameters from recent high-resolution spectral analysis. The extension of the overall procedure to the determination of the chemical composition of all the INES non-supergiant G-type stars with accurate parallaxes is planned in order to investigate their atmospheric temperature structure. Based on observations collected at the INAOE "G. Haro" Observatory, Cananea (Mexico).

Rotational Velocities of B Stars

We measured the projected rotational velocities of 1092 northern B stars listed in the Bright Star Catalogue (BSC) and calibrated them against the 1975 Slettebak et al. system. We found that the published values of B dwarfs in the BSC average 27% higher than those standards. Only 0.3% of the stars have rotational velocities in excess of two-thirds of the breakup velocities, and the mean velocity is only 25% of breakup, implying that impending breakup is not a significant factor in reducing rotational velocities. For the B8-B9.5 III-V stars the bimodal distribution in V can be explained by a set of slowly rotating Ap stars and a set of rapidly rotating normal stars. For the B0-B5 III-V stars that include very few peculiar stars, the distributions in V are not bimodal. Are the low rotational velocities of B stars due to the occurrence of frequent low-mass companions, planets, or disks? The rotational velocities of giants originating from late B dwarfs are consistent with their conservation of angular momentum in shells. However, we are puzzled by why the giants that originate from the early B dwarfs, despite having 3 times greater radii, have nearly the same rotational velocities. We find that all B-type primaries in binaries with periods less than 2.4 days have synchronized rotational and orbital motions; those with periods between 2.4 and 5.0 days are rotating within a factor 2 of synchronization or are "nearly synchronized." The corresponding period ranges for A-type stars are 4.9 and 10.5 days, or twice as large. We found that the rotational velocities of the primaries are synchronized earlier than their orbits are circularized. The maximum orbital period for circularized B binaries is 1.5 days and for A binaries is 2.5 days. For stars of various ages from 10^7.5 to 10^10.2 yr the maximum circularized periods are a smooth exponential function of age.

Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i

This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i, and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al., previously found in the first paper, is here confirmed. Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated and the relation between both scales follows a linear law v sin i (new) = 1.03 v sin i (old) + 7.7. Finally, these data are combined with those from the previous paper, together with the catalogue of Abt & Morrell. The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute Provence (CNRS), France. Tables are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897

The long-period companions of multiple stars tend to have moderate eccentricities

We examined the statistics of an angle gamma between the radius vector of a visual companion of a multiple star and the vector of its apparent relative motion in the system. Its distribution f(gamma) is related to the orbital eccentricity distribution in the investigated sample. We found that for the wide physical subsystems of the 174 objects from the Multiple Star Catalogue f(gamma) is bell-shaped. The Monte-Carlo simulations have shown that our f(gamma) corresponds to the population of the moderate-eccentricity orbits and is not compatible with the linear distribution f(e)=2e which follows from stellar dynamics and seems to hold for wide binaries. This points to the absence of highly elongated orbits among the outer subsystems of multiple stars. The constraint of dynamical stability of triple systems is not sufficient to explain the "rounded-off" outer orbits; instead, we speculate that it can result from the angular momentum exchange in multiple systems during their early evolution.

Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics

The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521

The proper motions of fundamental stars. I. 1535 stars from the Basic FK5

A direct combination of the positions given in the HIPPARCOS catalogue with astrometric ground-based catalogues having epochs later than 1939 allows us to obtain new proper motions for the 1535 stars of the Basic FK5. The results are presented as the catalogue Proper Motions of Fundamental Stars (PMFS), Part I. The median precision of the proper motions is 0.5 mas/year for mu alpha cos delta and 0.7 mas/year for mu delta. The non-linear motions of the photocentres of a few hundred astrometric binaries are separated into their linear and elliptic motions. Since the PMFS proper motions do not include the information given by the proper motions from other catalogues (HIPPARCOS, FK5, FK6, etc.) this catalogue can be used as an independent source of the proper motions of the fundamental stars. Catalogue (Table 3) is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strastg.fr/cgi-bin/qcat?J/A+A/365/222

ICCD Speckle Observations of Binary Stars. XXIII. Measurements during 1982-1997 from Six Telescopes, with 14 New Orbits

We present 2017 observations of 1286 binary stars, observed by means of speckle interferometry using six telescopes over a 15 year period from 1982 April to 1997 June. These measurements constitute the 23d installment in CHARA's speckle program at 2 to 4 m class telescopes and include the second major collection of measurements from the Mount Wilson 100 inch (2.5 m) Hooker Telescope. Orbital elements are also presented for 14 systems, seven of which have had no previously published orbital analyses.

Photometric Measurements of the Fields of More than 700 Nearby Stars

In preparation for optical/IR interferometric searches for substellar companions of nearby stars, we undertook to characterize the fields of all nearby stars visible from the Northern Hemisphere to determine suitable companions for interferometric phase referencing. Because the Keck Interferometer in particular will be able to phase-reference on companions within the isoplanatic patch (30") to about 17th magnitude at K, we took images at V, r, and i that were deep enough to determine if field stars were present to this magnitude around nearby stars using a spot-coated CCD. We report on 733 fields containing 10,629 measurements in up to three filters (Gunn i, r and Johnson V) of nearby stars down to about 13th magnitude at V.

Binary star speckle measurements during 1992-1997 from the SAO 6-m and 1-m telescopes in Zelenchuk

We present the results of speckle interferometric measurements of binary stars made with the television photon-counting camera at the 6-m Big Azimuthal Telescope (BTA) and 1-m telescope of the Special Astrophysical Observatory (SAO) between August 1992 and May 1997. The data contain 89 observations of 62 star systems on the large telescope and 21 on the smaller one. For the 6-m aperture 18 systems remained unresolved. The measured angular separation ranged from 39 mas, two times above the BTA diffraction limit, to 1593 mas.

Empirical calibration of the lambda 4000 Å break

Empirical fitting functions, describing the behaviour of the lambda 4000 Å break, D4000, in terms of effective temperature, metallicity and surface gravity, are presented. For this purpose, the break has been measured in 392 stars from the Lick/IDS Library. We have followed a very detailed error treatment in the reduction and fitting procedures, allowing for a reliable estimation of the break uncertainties. This calibration can be easily incorporated into stellar population models to provide accurate predictions of the break amplitude for relatively old, composite systems. Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Visual binary orbits and masses POST HIPPARCOS

The parallaxes from Hipparcos are an important ingredient to derive more accurate masses for known orbital binaries, but in order to exploit the parallaxes fully, the orbital elements have to be known to similar precision. The present work gives improved orbital elements for some 205 systems by combining the Hipparcos astrometry with existing ground-based observations. The new solutions avoid the linearity constraints and omissions in the Hipparcos Catalog by using the intermediate Transit Data which can be combined with ground-based observations in arbitrarily complex orbital models. The new orbital elements and parallaxes give new mass-sum values together with realistic total error-estimates. To get individual masses at least for main-sequence systems, the mass-ratios have been generally estimated from theoretical isochrones and observed magnitude-differences. For some 25 short-period systems, however, true astrometric mass-ratios have been determined through the observed orbital curvature in the 3-year Hipparcos observation interval. The final result is an observed "mass-luminosity relation" which falls close to theoretical expectation, but with "outliers" due to undetected multiplicity or to composition- and age-effects in the nonuniform near-star sample. Based in part on observations collected with the ESA Hipparcos astrometry satellite. Tables 1, 3, 4 and 6 are also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Averaged energy distributions in the stellar spectra.

Not Available

On the HIPPARCOS photometry of chemically peculiar B, A, and F stars

The Hipparcos photometry of the Chemically Peculiar main sequence B, A, and F stars is examined for variability. Some non-magnetic CP stars, Mercury-Manganese and metallic-line stars, which according to canonical wisdom should not be variable, may be variable and are identified for further study. Some potentially important magnetic CP stars are noted. Tables 1, 2, and 3 are available only in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

The Tokyo PMC catalog 90-93: Catalog of positions of 6649 stars observed in 1990 through 1993 with Tokyo photoelectric meridian circle

The sixth annual catalog of the Tokyo Photoelectric Meridian Circle (PMC) is presented for 6649 stars which were observed at least two times in January 1990 through March 1993. The mean positions of the stars observed are given in the catalog at the corresponding mean epochs of observations of individual stars. The coordinates of the catalog are based on the FK5 system, and referred to the equinox and equator of J2000.0. The mean local deviations of the observed positions from the FK5 catalog positions are constructed for the basic FK5 stars to compare with those of the Tokyo PMC Catalog 89 and preliminary Hipparcos results of H30.

H gamma and H delta Absorption Features in Stars and Stellar Populations

The H gamma and H delta absorption features are measured in a sample of 455 (out of an original 460) Lick/IDS stars with pseudo-equivalent width indices. For each Balmer feature, two definitions, involving a narrow (~20 Angstroms) and a wide (~40 Angstroms) central bandpass, are measured. These four new Balmer indices augment 21 indices previously determined by Worthey et al., and polynomial fitting functions that give index strengths as a function of stellar temperature, gravity, and [Fe/H] are provided. The new indices are folded into models for the integrated light of stellar populations, and predictions are given for single-burst stellar populations of a variety of ages and metallicities. Contrary to our initial hopes, the indices cannot break a degeneracy between burst age and burst strength in post-starburst objects, but they are successful mean-age indicators when used with sensitive metallicity indicators. An appendix gives data, advice, and examples of how to transform new spectra to the 25-index Lick/IDS system.

Chromospheric Activity in Dwarf and Evolved Late A- and Early F-Type Stars

Chromospheric activity in late A- and early F-type field stars of luminosity classes III through V has been investigated using the helium D3 absorption feature. This feature shows a detection boundary near b - y = 0.19 (B - V = 0.29). This color index corresponds to a dividing line in activity levels as determined from the C II lambda 1335 chromospheric emission line. On the red side of this boundary, stars exhibit strong or moderately strong C II emission and strong or moderately strong D3 absorption. However, on the blue side, D3 absorption does not conclusively appear, while several stars show moderately strong C II emission. The data suggest that D3 is sensitive to the boundary at B - V = 0.29, but they also suggest limitations in the use of D3 as an activity indicator in the late A-type stars. To within observational errors, the D3 boundary appears at the same color index for the full range of luminosity classes explored, in contradiction with some acoustic energy calculations. In addition, the strength of D3 absorption shows no significant trend with luminosity class or the Stromgren delta c1 index, with a wide range of activity levels at a given luminosity or surface gravity.

ICCD Speckle Observations of Binary Stars. XVII. Measurements During 1993-1995 From the Mount Wilson 2.5-M Telescope.

Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114.1639H&db_key=AST

Systematic Errors in the FK5 Catalog as Derived from CCD Observations in the Extragalactic Reference Frame.

Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1997AJ....114..850S&db_key=AST

MSC - a catalogue of physical multiple stars

The MSC catalogue contains data on 612 physical multiple stars of multiplicity 3 to 7 which are hierarchical with few exceptions. Orbital periods, angular separations and mass ratios are estimated for each sub-system. Orbital elements are given when available. The catalogue can be accessed through CDS (Strasbourg). Half of the systems are within 100 pc from the Sun. The comparison of the periods of close and wide sub-systems reveals that there is no preferred period ratio and all possible combinations of periods are found. The distribution of the logarithms of short periods is bimodal, probably due to observational selection. In 82% of triple stars the close sub-system is related to the primary of a wide pair. However, the analysis of mass ratio distribution gives some support to the idea that component masses are independently selected from the Salpeter mass function. Orbits of wide and close sub-systems are not always coplanar, although the corresponding orbital angular momentum vectors do show a weak tendency of alignment. Some observational programs based on the MSC are suggested. Tables 2 and 3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

Classification of Population II Stars in the Vilnius Photometric System. I. Methods

The methods used for classification of Population II stars in the Vilnius photometric system are described. An extensive set of standards with known astrophysical parameters compiled from the literature sources is given. These standard stars are classified in the Vilnius photometric system using the methods described. The accuracy of classification is evaluated by a comparison of the astrophysical parameters derived from the Vilnius photometric system with those estimated from spectroscopic studies as well as from photometric data in other systems. For dwarfs and subdwarfs, we find a satisfactory agreement between our reddenings and those estimated in the uvby-beta system. The standard deviation of [Fe/H] determined in the Vilnius system is about 0.2 dex. The absolute magnitude for dwarfs and subdwarfs is estimated with an accuracy of <=0.5 mag.

Transformations from Theoretical Hertzsprung-Russell Diagrams to Color-Magnitude Diagrams: Effective Temperatures, B-V Colors, and Bolometric Corrections

Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1996ApJ...469..355F&db_key=AST

The Relation between Rotational Velocities and Spectral Peculiarities among A-Type Stars

Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST

Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue.

We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978) to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the number HIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number (Catalogue des Composantes des etoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identificator numbers. Numerous remarks point out the problems we have had to deal with.
|
|
Journal article Open Access
L.G. Chystokletov, G.V.Kaplenko, O.L. Khitra
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource>
<identifier identifierType="DOI">10.5281/zenodo.3555010</identifier>
<creators>
<creator>
<creatorName>L.G. Chystokletov, G.V.Kaplenko, O.L. Khitra</creatorName>
</creator>
</creators>
<titles>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2019</publicationYear>
<subjects>
<subject>administrative conviction, features, classification, measures, factual basis.</subject>
</subjects>
<dates>
<date dateType="Issued">2019-11-25</date>
</dates>
<language>uk</language>
<resourceType resourceTypeGeneral="Text">Journal article</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3555010</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3555009</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>The relevance of the article is due to the fact that not enough attention is being paid in the scientific and educational legal literature to the problems of administrative conviction, and therefore the questions about its main features and definition remain debatable.</p>
<p>The purpose of the article is to find out the essence of administrative conviction. Systematic and historical approaches, methods of analysis and synthesis, comparative method, method of expert assessments were used in the process of realization of this goal. Informational basis of the article are literary sources, Code of Ukraine on Administrative Offenses, the law &quot;On the National Police&quot;.</p>
<p>An analysis has been made of the first serious attempt in the post-Soviet legal literature to identify the features of administrative persuasion. As part of this analysis:</p>
<p>1) it has been shown that the provisions under which: a) the application of an administrative conviction is a monopoly of public authorities; b) administrative conviction is not linked to individual influence;</p>
<p>2) the contradiction between the Ukrainian legislation and the allegations in the legal literature has been pointed out, and it has been confirmed that the factual basis of the administrative conviction is always absent and that its application does not have a regulatory framework;</p>
<p>3) it has been suggested that under the current conditions the primacy of persuasion over coercion should be regarded not as a feature of administrative persuasion but as a desirable tendency for the development of this institution;</p>
<p>4) it has been considered appropriate to include in the range of features of administrative conviction that: a) it is a universal method of public administration; b) subordination to its influence is voluntary; c) it is a means of preventing and averting an offense.</p>
<p>Criticism of the view that administrative conviction includes encouragement has been supported.</p>
<p>It has been concluded that the administrative conviction is a universal method of public administration, which consists in the application of law enforcement, educational and organizational measures, as well as various means of disseminating information to induce individuals or human groups to behave in accordance with the rules of law and goals, approved by the state, and recognized as socially useful.</p></description>
</descriptions>
</resource>
|
|
Nano Boba Fett? This is an artifact made of semiconductor laser material that appeared during a process meant to etch all the laser material away. Probably it was formed by a speck of dust landing in that spot and protecting the material underneath from the etching plasma… the remnants of the dust are likely the domed bit on top. The dark area in front is made of glass. Because glass is more of an electrical insulator than a semiconductor, it looks darker under the electron microscope, since it sucks up electrons rather than reflecting them back to the microscope’s detector. All of this is way, way too small to see by eye… in fact, the top surface looks perfectly smooth, even under a regular microscope. Only under an electron microscope is all this crazy topology revealed.
|
|
# Find the CT Fourier transform of $\left[ \frac{ \sin(\pi t) }{\pi t} \right] \left[ \frac{ \sin(2\pi (t-1)) }{\pi (t-1)} \right]$ using properties
Use properties of Fourier Transform to solve the question. The question is in the imgur link below.
I got the Fourier transform of $$\frac {\sin(\pi t)} {\pi t}$$ as a rectangular pulse with value $$1$$ from $$-\pi$$ to $$\pi$$; the second transform is $$e^{-j\omega}$$ from $$-2 \pi$$ to $$2 \pi$$. I'm unable to get the resultant of the two of them.
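A property-based outline (a sketch of the route, with the piecewise evaluation left to do): both transforms above are correct, and multiplication in time corresponds to convolution in frequency with a $$1/2\pi$$ factor, so

$$x_1(t)=\frac{\sin(\pi t)}{\pi t} \longleftrightarrow X_1(\omega)=\begin{cases}1,&|\omega|<\pi\\0,&\text{otherwise,}\end{cases} \qquad x_2(t)=\frac{\sin(2\pi (t-1))}{\pi (t-1)} \longleftrightarrow X_2(\omega)=\begin{cases}e^{-j\omega},&|\omega|<2\pi\\0,&\text{otherwise,}\end{cases}$$

$$x_1(t)\,x_2(t) \longleftrightarrow \frac{1}{2\pi}\,(X_1 * X_2)(\omega)=\frac{1}{2\pi}\int_{-\pi}^{\pi} X_2(\omega-\theta)\,d\theta,$$

which vanishes for $$|\omega|\ge 3\pi$$ and is evaluated piecewise over the overlap of the two rectangular supports.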
• We don't solve other people's homework when they don't even show any own attempt and explain exactly what they personally need help with. In fact, we have a question close reason for exactly that. – Marcus Müller Oct 7 '20 at 12:19
• Also use some LaTeX to clarify that equation. Do all those "*"s stand for multiplication, or is one of them a convolution? – Fat32 Oct 7 '20 at 12:21
• i.stack.imgur.com/Qbkj4.png link of question – Pranav Prabhu Oct 7 '20 at 12:29
|
|
8-18.
No calculator! Evaluate the following integrals.
1. $\int \frac { 8 x ^ { 3 } - 1 } { 6 x ^ { 4 } - 3 x } d x$
$U$-substitution.
$U = 6x^4 − 3x$
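Carrying the substitution through (a sketch of one way to finish): $dU=(24x^{3}-3)\,dx=3(8x^{3}-1)\,dx$, so

$\int \frac { 8 x ^ { 3 } - 1 } { 6 x ^ { 4 } - 3 x } d x=\frac{1}{3}\int\frac{dU}{U}=\frac{1}{3}\ln\left|6x^{4}-3x\right|+C.$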
1. $\int _ { 2 } ^ { 5 } \pi ( ( x + 1 ) ^ { 2 } - 3 ^ { 2 } ) d x$
Before you integrate, factor the $\pi$ out of the integrand. You may (or may not) choose to expand $(x + 1)^{2}$.
1. $\int ( 5 x - 2 + \frac { 5 } { x + 3 } ) d x$
Recall that $\frac{5}{x+3}$ is a transformation of $\frac{1}{x}$, and you know
the antiderivative of $\frac{1}{x}$.
1. $\int 2 x \operatorname { sin } ( 11 x ^ { 2 } - 3 ) d x$
$U$-substitution.
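Sketched out: let $U=11x^{2}-3$, so $dU=22x\,dx$ and

$\int 2 x \sin ( 11 x ^ { 2 } - 3 ) d x=\frac{1}{11}\int\sin U\,dU=-\frac{1}{11}\cos(11x^{2}-3)+C.$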
1. $\int _ { - 1 } ^ { 0 } \frac { 2 x } { x + 2 } d x$
You could long divide first or... use $U$-Substitution.
Let $U =$ the denominator.
If $U = x + 2$ then $x = U-2$.
Therefore numerator
$2x=2(U-2) \text{ and }\frac{dU}{dx}=1 \text{ so }dU=dx.$
Also bounds $U(0) = 2$ and $U(− 1) = 1$
Rewrite the integral and evaluate.
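For reference, the rewritten integral evaluates as follows (one way to finish):

$\int _ { - 1 } ^ { 0 } \frac { 2 x } { x + 2 } d x=\int_{1}^{2}\frac{2(U-2)}{U}\,dU=\int_{1}^{2}\left(2-\frac{4}{U}\right)dU=\Big[2U-4\ln U\Big]_{1}^{2}=2-4\ln 2.$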
1. $\int \frac { 3 } { \sqrt { 1 - 9 x ^ { 2 } } } d x$
Recall that $9x^{2} = (3x)^{2}$.
Then look for a familiar antiderivative.
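Putting the two hints together (a sketch): with $U=3x$ and $dU=3\,dx$,

$\int \frac { 3 } { \sqrt { 1 - 9 x ^ { 2 } } } d x=\int\frac{dU}{\sqrt{1-U^{2}}}=\arcsin(3x)+C.$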
|
|
Tags: forensics
Rating:
# Memory_lane (Forensics)
## Description:

## Difficulty:
### Medium
## Writeup:
Our team [Dc1ph3R](https://ctftime.org/team/69272) made it into the TOP 3 Pwners of this chall. The file took too much time to download (Slooow Internet). Extracting the 7z gave a vmem image. So, a forensics chall with a memory image. That certainly points to Volatility, but this time Volatility was not able to identify the profile of the image. So I decided to use foremost and dump all the files of the image. And got a whole bunch of files.

Now I checked each and every folder for some juicy content until I reached the png folder. It had several icons and PNGs, including two QR codes (which were the same, btw).
And there it was, our flag, lying inside a QR code.
secf{cr3d3nt1al_34sy_f3tch3d}
Original writeup (https://github.com/Himanshukr000/Security-Fest-2019/blob/master/memorylane.md).
|
|
# Limit Theorems for the M\"{o}bius Function and Statistical Mechanics
SPECIAL LECTURE Topic: Limit Theorems for the M\"{o}bius Function and Statistical Mechanics Speaker: Francesco Cellarosi Affiliation: Princeton University Date: Tuesday, March 29 Time/Room: 4:00pm - 5:00pm/S-101
I will present a recent joint work with Ya.G. Sinai. We investigate the "randomness" of the classical Möbius function by means of a statistical mechanical model for square-free numbers and we prove some new results, including a non-standard limit theorem where the Dickman-De Bruijn distribution appears. Although we use a probabilistic approach, this work is inspired by a conjecture by P. Sarnak, and by a number of recent results relating Number Theory and Ergodic Theory.
|
|
# MCQs R Language
This quiz will help you to check your ability to execute some basic operations on objects in the R language, and it will also help you to understand some basic concepts. This quiz may also improve your computational understanding.
Question 1: The R language is a dialect of which of the following programming languages?
A) S
B) C
C) Lisp
D) Matlab and Mathematica
E) SAS
Question 2: The definition of free software consists of four freedoms (freedoms 0 through 3). Which of the following is NOT one of the freedoms that are part of the definition?
A) The freedom to study how the program works, and adapt it to your needs.
B) The freedom to improve the program, and release your improvements to the public, so that the whole community benefits.
C) The freedom to run the program, for any purpose.
D) The freedom to sell the software for any price.
Question 3: In R language the following are all atomic data types EXCEPT
A) integer
B) logical
C) data frame
D) character
Question 4: If I execute the expression x <- 4 in the R language, what is the class of the object 'x' as determined by the 'class()' function?
A) Numeric
B) Integer
C) Real
D) Complex
Question 5: What is the class of the object defined by the expression x <- c(4, “a”, TRUE)?
A) Numeric
B) Character
C) Integer
D) Logical
Question 6: If I have two vectors x <- c(1,3, 5) and y <- c(3, 2, 10), what is produced by the expression rbind(x, y)?
A) A vector of length 2
B) a 2 by 2 matrix
C) a vector of length 3
D) a 2 by 3 matrix
Question 7: A key property of vectors in R language is that
A) a vector cannot have attributes like dimensions
B) elements of a vector can be of different classes
C) elements of a vector can only be character or numeric
D) elements of a vector all must be of the same class
Question 8: Suppose I have a list defined as x <- list(2, “a”, “b”, TRUE). What does x[[2]] give me?
A) a list containing a character vector with the elements “a” and “b”.
B) a character vector of length 1.
C) a character vector with the elements “a” and “b”.
D) a list containing the number 2 and the letter “a”.
Question 9: Suppose I have a vector x <- 1:4 and y <- 2:3. What is produced by the expression x + y?
A) a numeric vector with the values 3, 5, 3, 4.
B) an integer vector with the values 3, 5, 5, 7.
C) a numeric vector with the values 1, 2, 5, 7.
D) an error.
Question 10: Suppose I have a vector x <- c(3, 5, 1, 10, 12, 6) and I want to set all elements of this vector that are less than 6 to be equal to zero. What R code achieves this?
A) x[x < 6] == 0
B) x[x == 6] <- 0
C) x[x < 6] <- 0
D) x[x == 0]
Question 11: ______ is the function in R to get the number of observations in a data frame
A) n()
B) ncols()
C) nobs()
D) nrow()
Question 12: Which function is used to test for missing observations in a data frame?
A) missing()
B) NA.miss()
C) na()
D) is.na()
For MCQs Statistics ANSWERS, either download the pdf file or enjoy the online MCQs Statistics Test about R-Language.
#### The Author
Student and Instructor of Statistics and business mathematics. Currently a Ph.D. Scholar (Statistics) at Bahauddin Zakariya University, Multan. Interested in Applied Statistics, Mathematics, and Statistical Computing. Statistical and mathematical software used: SAS, STATA, GRETL, EVIEWS, R, SPSS, and VBA in MS-Excel. Likes to use LaTeX typesetting for composing articles, theses, etc.
|
|
Sharing .NET generic code under the hood
Talks about how .NET achieves generic code sharing
If you come from a C++ programming background, you are most likely already familiar with C++’s template code bloat problem. Each template instantiation gets its own copy of the code (of course, the compiler/linker can optimize by throwing away unused methods). The reason is that C++ templates are more like C macros on steroids. I know this is a great simplification, but at the end of the day, it is pretty much a code expansion feature with type safety. This grants C++ some powerful capabilities that C# developers don’t have - like template specialization, or calling arbitrary methods on a template class, or a whole different programming paradigm that’s known as template meta-programming. On the other hand, .NET generics require you to define what operations can be performed on T using constraints (otherwise you are limited to a small set of operations such as casts, assignments, etc). However, this does give .NET a unique advantage - it can do a better job at code sharing.
Instantiation over value types
First, let’s take a look at what it doesn’t do. Let’s say you have the following code:
public class GenericValue<T>
{
T val;
public void Assign(T t)
{
val = t;
}
}
When you have two instantiations over value types such as int and double, .NET doesn’t share the method body between the two instantiations because - you guessed it - they are value types. If you think about how a compiler emits code, you’ll see why it can be quite challenging to share the code:
• int and double have different sizes - 4 and 8 bytes. So a compiler can’t simply assign a register or allocate a fixed portion of the stack to hold the value, or make the copy.
• Depending on the platform, int and double can be passed in different registers / stack slots. The compiler doesn’t even know where T is when the call is made.
• The compiler also needs to know where the object fields are in order to track the GC fields. This is obviously not a problem with primitive types, but it can become an issue if you are dealing with structs with reference-type fields. Without knowing what T is, it doesn’t know where the reference-type fields are, and it won’t be able to mark the fields (see .NET Garbage Collector Fundamentals for more details).
To further illustrate my point, this is the assignment in the int version:
00007FFD73760BAC mov rax,qword ptr [rbp+50h]
00007FFD73760BB0 mov edx,dword ptr [rbp+58h]
00007FFD73760BB3 mov dword ptr [rax],edx
And this is the assignment in the double version:
00007FFD34160C1F mov rax,qword ptr [rbp+50h]
00007FFD34160C23 vmovsd xmm0,qword ptr [rbp+58h]
00007FFD34160C29 vmovsd qword ptr [rax+8],xmm0
Of course, challenging doesn’t mean it’s impossible. In theory, you could pass those value types as boxed value types, and therefore pass them by reference. Or change the callsite convention to pass the type along with the struct (and always pass the struct by reference). With a bit more code, you could in theory have a version that allocates the right amount of buffer, copies the right size, and knows where the fields are (because boxed value types are reference types, the first pointer-size field is the type, and from the type you can get the fields) when given the right information. However, this would significantly reduce performance with value types, which is why it’s not being done today in CLR (.NET Framework) and CoreCLR (.NET Core). However, .NET Native today does support some form of generic sharing for value types under limited circumstances, but that’s out of the scope of our discussion today.
Instantiation over reference types
The story is very different with reference types. Let’s say we have GenericValue<object> and GenericValue<string>.
This is the object version:
00007FFD73770CB1 mov rcx,qword ptr [rbp+50h]
00007FFD73770CB5 mov rdx,qword ptr [rbp+60h]
00007FFD73770CB9 call 00007FFDD2D83DE0
And this is the string version:
00007FFD73770CB1 mov rcx,qword ptr [rbp+50h]
00007FFD73770CB5 mov rdx,qword ptr [rbp+60h]
00007FFD73770CB9 call 00007FFDD2D83DE0
It’s easy to see that they are identical. As a matter of fact, they are from the same address - you can see .NET is sharing the method body between two instantiations!
If you think about it, being a reference type makes it very natural to share .NET generic method bodies:
1. All reference types have the same size - a pointer. Copying/storing a pointer works the same way no matter what it points to.
2. No matter what the size of the actual object is, a reference is always passed in the same way - as a pointer (duh!).
3. You can easily tell what fields a reference type has, because reference types have a MethodTable (the CLR's jargon for type) pointer in the first pointer-size field, and you can find a lot of information from that MethodTable, including the fields. This makes the GC happy.
Now, you might ask: what about method calls inside the method body? Are they really sharable?
This is actually a really interesting question. Let's look at a few different cases:
1. You are making a non-virtual instance method call - or even better, a static method call - on a specific class or on a T constrained over a class
This is the easy case. Obviously, for calls on T this can only be achieved through a class constraint, by having T constrained over a class. Any competent JIT implementation will see right through your intention and happily emit a direct call to the right method (or even inline it, if it is in a good mood). This is perfect for code sharing.
(BTW, a "direct call" in this case is actually a bit of a lie. The call actually jmps to another piece of code that either does the JITting or is the real code. But that's a topic for another post.)
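A minimal sketch of this case (the type names are mine, not from the original post):
public class Animal
{
    public void Eat() { }              // non-virtual instance method
    public static void MakeNoise() { } // static method
}

public class Feeder<T> where T : Animal
{
    public void Feed(T t)
    {
        t.Eat();            // resolved at JIT time - a direct call (or inlined)
        Animal.MakeNoise(); // static call - also a direct call
    }
}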
2. You are making an interface call, such as a call through IFoo
In .NET, an interface cast is achieved through a helper call into the CLR called JIT_ChkCastInterface, which simply performs a check (unlike C++, it doesn't change the value of the 'this' pointer). The actual interface call is made through a special piece of code called a virtual dispatch stub, which gets passed an additional secret argument telling it exactly which interface method is intended; the stub will happily find the right method to call.
00007ffd3b9005b5 e83692d25e call CoreCLR!JIT_ChkCastInterface (00007ffd9a6297f0)
00007ffd3b9005ba 488945e8 mov qword ptr [rbp-18h],rax
00007ffd3b9005be 488b4de8 mov rcx,qword ptr [rbp-18h]
00007ffd3b9005c2 49bb2000793bfd7f0000 mov r11,7FFD3B790020h
00007ffd3b9005cc 3909 cmp dword ptr [rcx],ecx
00007ffd3b9005ce 41ff13 call qword ptr [r11]
Note that there are also cases where the JIT can figure out which method is being called at JIT time, when T is a value type. But that's not really an interesting case for code sharing, since the code is compiled specifically for that value-type instantiation.
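As a sketch, this is the shape of code this case is talking about (IFoo and the double cast are my illustration):
public interface IFoo
{
    void Func();
}

public class Caller<T>
{
    public void Call(T t)
    {
        // The cast compiles to a JIT_ChkCastInterface helper call;
        // foo.Func() then dispatches through a virtual dispatch stub.
        IFoo foo = (IFoo)(object)t;
        foo.Func();
    }
}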
3. You are making a virtual method call on a specific class or a T constrained over a base class
In .NET, virtual functions are dispatched through a v-table, which is perhaps not at all surprising if you are a C++ programmer. The JIT spits out the following code for a virtual call:
00007ffd3b570582 488b4d20 mov rcx,qword ptr [rbp+20h]
00007ffd3b570586 488b4520 mov rax,qword ptr [rbp+20h]
00007ffd3b57058a 488b00 mov rax,qword ptr [rax]
00007ffd3b57058d 488b4048 mov rax,qword ptr [rax+48h]
00007ffd3b570591 ff5020 call qword ptr [rax+20h] ds:00007ffd3b3f7190={ConsoleApp7.Foo.Func() (00007ffd3b5700a0)}
Here is a brief explanation of what the code does:
1. First it puts the object pointer at [rbp+20h] into rcx, which is the this pointer and the first argument, preparing for the call.
2. this gets put into rax as well.
3. It retrieves the first pointer-size field. This is the magic MethodTable field. Again, you can think of it as a richer version of a C++ v-table.
4. It adds a magic offset 0x48 and retrieves the function pointer to call.
5. It calls the function pointer which goes to the underlying function body.
As you can see, this is not that different from a C++ virtual function call.
Given that you are calling a virtual function either on a specific class or on a T constrained over a base class, the v-table layout is compatible between that class and all of its derived classes. Every instantiation therefore uses the same magic offset 0x48 and needs the same code, which allows code sharing.
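Putting this case into code, here is a sketch consistent with the Foo.Func() seen in the disassembly above (the constraint and the derived class are my additions):
public class Foo
{
    public virtual void Func() { }
}

public class Bar : Foo
{
    public override void Func() { }
}

public class Dispatcher<T> where T : Foo
{
    public void Dispatch(T t)
    {
        // Foo and Bar have compatible v-table layouts, so the slot offset
        // is identical and this method body can be shared across T.
        t.Func();
    }
}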
4. Other calls
There are other interesting scenarios, such as calling a generic virtual method. Those may involve further sharing - the generic virtual method bodies can themselves be shared. This requires additional runtime magic that I'm not going to cover in this post.
What’s next
In this post I only touched the basics - how .NET generics achieve code sharing between different reference-type instantiations. Of course, it doesn't stop here - sharing code brings its own set of challenges. One such interesting challenge is: how do you know what T is? What is the magic that enables retrieving the value of typeof(T)?
This is something I’ll talk about in my next post.
|
|
# Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors

(Notes condensed from Gilbert Strang's MIT OpenCourseWare lecture in "Learn Differential Equations: Up Close with Gilbert Strang and Cleve Moler".)

A symmetric matrix S is a square matrix with S^T = S, that is, S_ij = S_ji for all i and j. Two facts make symmetric matrices special: their eigenvalues are real, and eigenvectors belonging to distinct eigenvalues are perpendicular, so a full orthonormal set of eigenvectors can be chosen. Equivalently, every real symmetric matrix is orthogonally diagonalizable: S = Q Lambda Q^T, where Q is an orthogonal matrix, so Q^T = Q^(-1).

Example: S = [[3, 1], [1, 3]] has trace 6 and determinant 8, so its eigenvalues are lambda = 2 and lambda = 4. For lambda = 2 an eigenvector is (1, -1); for lambda = 4 it is (1, 1). Their dot product is 0 - orthogonal eigenvectors, as promised - and dividing each by the square root of 2 gives orthonormal columns for Q.

Antisymmetric matrices (A^T = -A) sit at the opposite extreme: their eigenvalues are purely imaginary. For A = [[0, 1], [-1, 0]], det(lambda I - A) = lambda^2 + 1 = 0, so lambda = i and lambda = -i, with eigenvectors (1, i) and (1, -i). Adding a multiple of the identity just shifts the eigenvalues: B = A + 3I has eigenvalues 3 + i and 3 - i, with the same eigenvectors.

Complex vectors need one adjustment: the squared length of x is x-bar^T x (conjugate transpose times the vector), not x^T x. For x = (1, i) this gives 1 + 1 = 2, so the length is sqrt(2). "Orthogonal complex vectors" means x-bar^T y = 0, and the magnitude of a complex number a + ib is sqrt(a^2 + b^2), found by multiplying the number by its conjugate. The complex analogue of a symmetric matrix is a Hermitian matrix, S-bar^T = S (sometimes written S^H in Hermite's honor); Hermitian matrices also have real eigenvalues and orthogonal eigenvectors.

The third family is the orthogonal matrices Q themselves, with Q^T Q = I: all their eigenvalues have magnitude 1 and lie on the unit circle in the complex plane. That completes the picture: real axis for symmetric (and Hermitian) matrices, imaginary axis for antisymmetric matrices, unit circle for orthogonal matrices.

To orthogonally diagonalize a symmetric matrix: (1) find its eigenvalues; (2) find a basis for each eigenspace, e.g. by row reducing S - lambda I; (3) orthonormalize each basis - eigenvectors from different eigenvalues are automatically orthogonal; (4) take the orthonormal eigenvectors as the columns of Q.

The same facts underlie the singular value decomposition: AA^T and A^T A are symmetric, they have the same nonzero eigenvalues, and their orthonormal eigenvectors form the columns of U and V in the SVD, with the square roots of those eigenvalues on the diagonal.
|
|
# Produce an irreducible polynomial that can't be proved irreducible by using Eisenstein [closed]
Give an example of an irreducible polynomial whose irreducibility cannot be proved via the Eisenstein criterion, even after an arbitrary linear change of variable ($x - c = y$).
## closed as too localized by Felipe Voloch, Mark Sapir, Martin Brandenburg, Bruce Westbury, Torsten Ekedahl Nov 8 '11 at 12:39
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center. If this question can be reworded to fit the rules in the help center, please edit the question.
It would be nice if you didn't formulate this as a command ("give an example...") and explained why you are asking (idle curiosity, homework,...). Your question isn't at the intended level of MO, but I'll make a comment which I think is: if $K$ is a number field in which (1) the ring of integers has the form ${\mathbf Z}[\alpha]$ and (2) no prime number is totally ramified, then the minimal polynomial of $\alpha$ over ${\mathbf Q}$ has the feature you seek. Many cyclotomic extensions of ${\mathbf Q}$ fit these properties. – KConrad Nov 8 '11 at 11:59
Newton polygon scenarios systematically give examples just-slightly-more-complicated than Eisenstein-criterion examples. E.g., $x^{n+1}+2x+4$: the slopes are $1/n$ ($n$ times) and a single $1$. Thus, this has at least an irreducible degree-$n$ factor. Excluding a rational root is easy (not $\pm 1, \pm 2, \pm 4$), so it's irreducible. – paul garrett Nov 8 '11 at 16:40
$x^2+8$ is an example.
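To spell out why (my sketch, not part of the original answer): substituting $x = y + c$ gives $y^2 + 2cy + (c^2 + 8)$. For an odd prime $p$, $p \mid 2c$ forces $p \mid c$, and then $p \mid c^2 + 8$ forces $p \mid 8$, a contradiction. For $p = 2$: if $c$ is even then $4 \mid c^2 + 8$, and if $c$ is odd then $c^2 + 8$ is odd; either way Eisenstein fails. Meanwhile $x^2 + 8$ is irreducible over $\mathbf{Q}$ since $-8$ is not a rational square.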
Let $L/K$ be an unramified extension of number fields (for instance, the Hilbert class field of a $K$ with non-trivial class group), generated by $\alpha$, say. Then the minimal polynomial of $\alpha$ over $K$ will do.
|
|
Definition 42.10.2. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $\mathcal{F}$ be a coherent $\mathcal{O}_ X$-module.
1. For any irreducible component $Z' \subset \text{Supp}(\mathcal{F})$ with generic point $\xi$ the integer $m_{Z', \mathcal{F}} = \text{length}_{\mathcal{O}_{X, \xi }} \mathcal{F}_\xi$ (Lemma 42.10.1) is called the multiplicity of $Z'$ in $\mathcal{F}$.
2. Assume $\dim _\delta (\text{Supp}(\mathcal{F})) \leq k$. The $k$-cycle associated to $\mathcal{F}$ is
$[\mathcal{F}]_ k = \sum m_{Z', \mathcal{F}}[Z']$
where the sum is over the irreducible components of $\text{Supp}(\mathcal{F})$ of $\delta$-dimension $k$. (This is a $k$-cycle by Lemma 42.10.1.)
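For instance (an illustration, not part of the Stacks Project text): if $Z \subset X$ is an integral closed subscheme with $\dim_\delta(Z) = k$ and $\mathcal{F} = \mathcal{O}_Z$, then $\text{Supp}(\mathcal{F}) = Z$, the local ring at the generic point of $Z$ is a field, so $m_{Z, \mathcal{F}} = 1$ and $[\mathcal{F}]_k = [Z]$.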
|
|
# Bib
@inproceedings{wukong2015wang,
author = {Wang, Haoyu and Guo, Yao and Ma, Ziang and Chen, Xiangqun},
title = {WuKong: A Scalable and Accurate Two-Phase Approach to Android App Clone Detection},
year = {2015},
isbn = {9781450336208},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2771783.2771795},
doi = {10.1145/2771783.2771795},
booktitle = {Proceedings of the 2015 International Symposium on Software Testing and Analysis},
pages = {71--82},
numpages = {12},
keywords = {third-party library, Clone detection, mobile applications, repackaging, Android},
location = {Baltimore, MD, USA},
series = {ISSTA 2015}
}
@inproceedings{moran2018automated,
title={Automated reporting of GUI design violations for mobile apps},
author={Moran, Kevin and Li, Boyang and Bernal-C{\'a}rdenas, Carlos and Jelf, Dan and Poshyvanyk, Denys},
booktitle={Proceedings of the 40th International Conference on Software Engineering},
pages={165--175},
year={2018}
}
@article{Yang2018,
abstract = {This work develops a static analysis to create a model of the behavior of an Android application's GUI. We propose the window transition graph (WTG), a model representing the possible GUI window sequences and their associated events and callbacks. A key component and contribution of our work is the careful modeling of the stack of currently-active windows, the changes to this stack, and the effects of callbacks related to these changes. To the best of our knowledge, this is the first detailed study of this important static analysis problem for Android. We develop novel analysis algorithms for WTG construction and traversal, based on this modeling of the window stack. We also propose WTG extensions to handle certain aspects of asynchronous control flow. We describe an application of the WTG for GUI test generation, using path traversals. The evaluation of the proposed algorithms indicates their effectiveness and practicality.},
author = {Yang, Shengqian and Wu, Haowei and Zhang, Hailong and Wang, Yan and Swaminathan, Chandrasekar and Yan, Dacong and Rountev, Atanas},
doi = {10.1007/s10515-018-0237-6},
issn = {15737535},
journal = {Automated Software Engineering},
keywords = {Android,GUI analysis,Static analysis},
number = {4},
pages = {833--873},
publisher = {Springer US},
title = {{Static window transition graphs for Android}},
url = {https://doi.org/10.1007/s10515-018-0237-6},
volume = {25},
year = {2018}
}
@article{Rountev2014,
abstract = {The popularity of Android software has grown dramatically in the last few years. It is essential for researchers in programming languages and compilers to contribute new techniques in this increasingly important area. Such techniques require a foundation of program analyses for Android. The target of our work is static object reference analysis, which models the flow of object references. Existing reference analyses cannot be applied directly to Android because the software is component-based and event-driven. An Android application is driven by a graphical user interface (GUI), with GUI objects responding to user actions. These objects and the event handlers associated with them ultimately determine the possible flow of control and data. We propose the first static analysis to model GUI-related Android objects, their flow through the application, and their interactions with each other via the abstractions defined by the Android platform. A formal semantics for the relevant Android constructs is developed to provide a solid foundation for this and other analyses. Next, we propose a constraint-based reference analysis based on the semantics. The analysis employs a constraint graph to model the flow of GUI objects, the hierarchical structure of these objects, and the effects of relevant Android operations. Experimental evaluation on real-world Android applications strongly suggests that the analysis achieves high precision with low cost. The analysis enables static modeling of control/data flow that is foundational for compiler analyses, instrumentation for event/interaction profiling, static error checking, security analysis, test generation, and automated debugging. It provides a key component to be used by compile-time analysis researchers in the growing area of Android software. Copyright {\textcopyright} 2014 by the Association for Computing Machinery, Inc. (ACM).},
author = {Rountev, Atanas and Yan, Dacong},
doi = {10.1145/2544137.2544159},
isbn = {9781450326704},
journal = {Proceedings of the 12th ACM/IEEE International Symposium on Code Generation and Optimization, CGO 2014},
keywords = {Android,GUI analysis,Reference analysis},
pages = {143--153},
title = {{Static reference analysis for GUI objects in android software}},
year = {2014}
}
@article{Yang2015,
abstract = {With the fast growing complexity of software systems, developers experience new challenges in understanding program's behavior to reveal performance and functional deficiencies and to support development, testing, debugging, optimization, and maintenance. These issues are especially important to mobile software due to limited computing resources on mobile devices, as well as short development life cycles. The correctness, security, and performance of mobile software is of paramount importance for many millions of users. For software engineering researchers, this raises high expectations for developing a comprehensive toolset of approaches for understanding, testing, checking, and verification of Android software. Static program analyses are essential components of such a toolset. Because of the event-driven and framework-based nature of the Android programming model, it is challenging to clearly understand application semantics and to represent it in static analysis algorithms. This dissertation makes several contributions towards solving this challenge. The ability to understand the interprocedural control flow is critical for reasoning statically about the semantics of a program. For Android, this flow is driven by the Graphical User Interface (GUI) of the application. As the first contribution of this dissertation, we propose a novel technique that analyzes the control flow of GUI event handlers in Android software. We build a callback control-flow graph, using a context-sensitive static analysis of callback methods such as GUI event handlers. The algorithm performs a graph reachability analysis by traversing context-compatible interprocedural control-flow paths and identifying statements that may trigger callbacks, as well as paths that avoid such statements. We also develop a client analysis that builds a static model of the application's GUI. Experimental evaluation shows that this context-sensitive approach leads to substantial precision improvements, while having practical cost. The next contribution of this dissertation is an even more general model and static analysis of the control flow of an Android application's GUI. We propose the window transition graph (WTG), a model representing the possible GUI window sequences and their associated events and callbacks. A key component and contribution of our work is the careful modeling of the stack of currently-active windows, the changes to this stack, and the effects of callbacks related to these changes. To the best of our knowledge, this is the first detailed study of this important static analysis problem for Android. We develop novel analysis algorithms for WTG construction and traversal, based on this modeling of the window stack. We also describe an application of the WTG for GUI test generation, using path traversals. The evaluation of the proposed algorithms indicates their effectiveness and practicality. User's interactions with Android applications trigger callbacks in the UI thread. The handling of such events may initialize work on the background in order to perform expensive tasks. Because Android does not allow non-UI threads modifying the GUI state, standard Android “post” operations play a critical role in communicating between background and UI threads. To understand this additional aspect of Android semantics, we introduce a static analysis to model operations that post runnable tasks from non-UI threads to the UI thread's event queue. The results of this analysis are used to create a more general version of the WTG. This new WTG and the related static analysis present an important step toward other more comprehensive modeling of Android semantics. The experimental evaluation of the proposed representation indicates promising overall accuracy improvements. To conclude, this dissertation presents several static analysis techniques to model the behaviors of the GUIs of Android applications. These analyses present essential foundation for developing tools to uncover the symptoms of both functional and performance issues in the mobile system, to perform model-based testing, and to support the understanding, optimization, and evolution of Android software.},
author = {Yang, Shengqian},
title = {{Static Analyses of GUI Behavior in Android Applications}},
year = {2015}
}
@article{Wysopal2010,
abstract = {This paper describes a high level classification of backdoors that have been detected in applications. It provides real world examples of application backdoors, a generalization of the mechanisms they use, and strategies for detecting these mechanisms. These strategies encompass detection using static analysis of source or binary code.},
author = {Wysopal, Chris and Eng, Chris and Shields, Tyler},
doi = {10.1007/s11623-010-0024-4},
issn = {1614-0702},
journal = {Datenschutz und Datensicherheit - DuD},
number = {3},
pages = {149--155},
title = {{Static detection of application backdoors}},
volume = {34},
year = {2010}
}
@article{Trostanetski2017,
abstract = {In this work we present a modular and demand-driven analysis of the semantic difference between program versions. Our analysis characterizes initial states for which final states in the program versions differ. It also characterizes states for which the final states are identical. Such characterizations are useful for regression verification, for revealing security vulnerabilities and for identifying changes in the program's functionality. Syntactic changes in program versions are often small and local and may apply to procedures that are deep in the call graph. Our approach analyses only those parts of the programs that are affected by the changes. Moreover, the analysis is modular, processing a single pair of procedures at a time. Called procedures are not inlined. Rather, their previously computed summaries and difference summaries are used. For efficiency, procedure summaries and difference summaries can be abstracted and may be refined on demand. We have compared our method to well established tools and observed speedups of one order of magnitude and more. Furthermore, in many cases our tool proves equivalence or finds differences while the others fail to do so.},
author = {Trostanetski, Anna and Grumberg, Orna and Kroening, Daniel},
doi = {10.1007/978-3-319-66706-5_20},
isbn = {9783319667058},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {405--427},
title = {{Modular demand-driven analysis of semantic difference for program versions}},
volume = {10422 LNCS},
year = {2017}
}
@article{Li2018,
abstract = {With the thriving of mobile app markets, third-party libraries are pervasively used in Android applications. The libraries provide functionality such as advertising, location, and social networking services, making app development much more productive. However, the spread of vulnerable and harmful third-party libraries can also hurt the mobile ecosystem, leading to various security problems. Therefore, third-party library identification has emerged as an important problem and the basis of many security applications such as repackaging detection, vulnerability identification, and malware analysis. Previously, we proposed a novel approach to identifying third-party Android libraries at a massive scale. Our method uses the internal code dependencies of an app to detect and classify library candidates. With a fine-grained feature hashing strategy, it can better handle code whose package and method names are obfuscated. We have developed a prototypical tool called LibD and evaluated it with an up-to-date and humongous dataset. Our experimental results on 1,427,395 apps show that compared to existing tools, LibD can better handle multi-package third-party libraries in the presence of name-based obfuscation, leading to significantly improved precision without the loss of scalability. In this paper, we extend our previous work by demonstrating that effective and scalable library detection can significantly improve the performance of large-scale app analyses in the real world. We show that the technique of LibD can be used to speed up whole-app Android vulnerability detection and quickly identify variants of vulnerable third-party libraries. The extension sheds light on the practical value of our previous work.},
author = {Li, Menghao and Wang, Pei and Wang, Wei and Wang, Shuai and Wu, Dinghao and Liu, Jian and Xue, Rui and Huo, Wei and Zou, Wei},
doi = {10.1109/TSE.2018.2872958},
issn = {19393520},
journal = {IEEE Transactions on Software Engineering},
keywords = {Android,Androids,Feature extraction,Humanoid robots,Java,Libraries,Security,Tools,code similarity detection,software mining,third-party library},
title = {{Large-scale Third-party Library Detection in Android Markets}},
year = {2018}
}
@article{Gruska2010,
abstract = {Real production code contains lots of knowledge - on the domain, on the architecture, and on the environment. How can we leverage this knowledge in new projects? Using a novel lightweight source code parser, we have mined more than 6,000 open source Linux projects (totaling 200,000,000 lines of code) to obtain 16,000,000 temporal properties reflecting normal interface usage. New projects can be checked against these rules to detect anomalies - that is, code that deviates from the wisdom of the crowds. In a sample of 20 projects, ∼25{\%} of the top-ranked anomalies uncovered actual code smells or defects. {\textcopyright} 2010 ACM.},
author = {Gruska, Natalie and Wasylkowski, Andrzej and Zeller, Andreas},
doi = {10.1145/1831708.1831723},
isbn = {9781605588230},
journal = {ISSTA'10 - Proceedings of the 2010 International Symposium on Software Testing and Analysis},
keywords = {Formal concept analysis,Language independent parsing,Lightweight parsing,Mining specifications,Temporal properties},
pages = {119--129},
title = {{Learning from 6,000 projects: Lightweight cross-project anomaly detection}},
year = {2010}
}
@article{Analyses2017,
author = {{Program Analyses and Software Tools (PRESTO) Research Group at the Ohio State University}},
pages = {1--12},
title = {{GATOR: Program Analysis Toolkit For Android}},
url = {http://web.cse.ohio-state.edu/presto/software/gator/},
year = {2017}
}
@article{Lima2018,
abstract = {In a growing number of domains, the provisioning of end-to-end services to the users depends on the proper interoperation of multiple systems, forming a new distributed system, often subject to timing constraints. To ensure interoperability and integrity, it is important to conduct integration tests that verify the interactions with the environment and between the system components in key scenarios. To tackle test automation challenges, we propose algorithms for decentralized conformance checking and test input generation, and for checking and enforcing the conditions (local observability and controllability) that allow decentralized test execution.},
author = {Lima, Bruno},
doi = {10.1145/3236024.3275431},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Distributed Systems,Integration Testing,Scenario-based Testing},
pages = {956--958},
title = {{Automated scenario-based integration testing of distributed systems}},
year = {2018}
}
@article{Wang2015,
author = {Wang, Haoyu and Klein, Jacques},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {ad fraud,android,automation,mobile app,user interface},
title = {{FraudDroid: Automated Ad Fraud Detection for Android Apps}},
year = {2018}
}
@book{Jedrzejowicz2005,
author = {Jȩdrzejowicz, Joanna and Szepietowski, Andrzej},
booktitle = {Lecture Notes in Computer Science},
isbn = {9783540735885},
issn = {03029743},
title = {{Mathematical Foundations of Computer Science 2005 (MFCS 2005)}},
volume = {3618},
year = {2005}
}
@article{Sergey2015,
abstract = {We present a lightweight approach to Hoare-style specifications for fine-grained concurrency, based on a notion of time-stamped histories that abstractly capture atomic changes in the program state. Our key observation is that histories form a partial commutative monoid, a structure fundamental for representation of concurrent resources. This insight provides us with a unifying mechanism that allows us to treat histories just like heaps in separation logic. For example, both are subject to the same assertion logic and inference rules (e.g., the frame rule). Moreover, the notion of ownership transfer, which usually applies to heaps, has an equivalent in histories. It can be used to formally represent helping---an important design pattern for concurrent algorithms whereby one thread can execute code on behalf of another. Specifications in terms of histories naturally abstract granularity, in the sense that sophisticated fine-grained algorithms can be given the same specifications as their simplified coarse-grained counterparts, making them equally convenient for client-side reasoning. We illustrate our approach on a number of examples and validate all of them in Coq.},
archivePrefix = {arXiv},
arxivId = {1410.0306},
author = {Sergey, Ilya and Nanevski, Aleksandar and Banerjee, Anindya},
doi = {10.1007/978-3-662-46669-8_14},
eprint = {1410.0306},
isbn = {9783662466681},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
number = {1},
pages = {333--358},
title = {{Specifying and verifying concurrent algorithms with histories and subjectivity}},
volume = {9032},
year = {2015}
}
@article{Liang2016,
author = {Liang, Hongjin and Feng, Xinyu},
doi = {10.1145/2837614.2837635},
isbn = {9781450335492},
issn = {15232867},
journal = {POPL'16: Proceedings of the 43rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages},
keywords = {concurrency,program logic,progress,reasoning,refinement,rely-guarantee},
pages = {385--399},
title = {{A Program Logic for Concurrent Objects under Fair Scheduling}},
year = {2016}
}
@article{Krebbers2016,
author = {Krebbers, Robbert and Timany, Amin and Birkedal, Lars},
isbn = {9781450346603},
keywords = {coq,fine-grained concurrency,interactive theorem proving,logical relations,separation logic},
title = {{Interactive Proofs in Higher-Order Concurrent Separation Logic}},
year = {2017}
}
@article{Reynolds2002,
abstract = {In joint work with Peter O'Hearn and others, based on early ideas of Burstall, we have developed an extension of Hoare logic that permits reasoning about low-level imperative programs that use shared mutable data structure. The simple imperative programming language is extended with commands (not expressions) for accessing and modifying shared structures, and for explicit allocation and deallocation of storage. Assertions are extended by introducing a "separating conjunction" that asserts that its subformulas hold for disjoint parts of the heap, and a closely related "separating implication". Coupled with the inductive definition of predicates on abstract data structures, this extension permits the concise and flexible description of structures with controlled sharing. In this paper, we survey the current development of this program logic, including extensions that permit unrestricted address arithmetic, dynamically allocated arrays, and recursive procedures. We also discuss promising future directions.},
author = {Reynolds, John C.},
doi = {10.1109/LICS.2002.1029817},
isbn = {0-7695-1483-9},
issn = {1043-6871},
journal = {Symposium on Logic in Computer Science},
number = {1},
pages = {55--74},
title = {{Separation logic: a logic for shared mutable data structures}},
volume = {0},
year = {2002}
}
@article{Marlow2008,
abstract = {We present a parallel generational-copying garbage collector implemented for the Glasgow Haskell Compiler. We use a block-structured memory allocator, which provides a natural granularity for dividing the work of GC between many threads, leading to a simple yet effective method for parallelising copying GC. The results are encouraging: we demonstrate wall-clock speedups of on average a factor of 2 in GC time on a commodity 4-core machine with no programmer intervention, compared to our best sequential GC.},
author = {Marlow, Simon and Harris, Tim and James, Roshan P. and {Peyton Jones}, Simon},
doi = {10.1145/1375634.1375637},
isbn = {9781605581347},
journal = {Proceedings of the 7th international symposium on Memory management - ISMM '08},
pages = {11},
title = {{Parallel generational-copying garbage collection with a block-structured heap}},
url = {http://portal.acm.org/citation.cfm?doid=1375634.1375637},
year = {2008}
}
@article{SIVARAMAKRISHNAN2016,
abstract = {The runtime for a modern, concurrent, garbage collected language like Java or Haskell is like an operating system: sophisticated, complex, performant, but alas very hard to change. If more of the runtime system were in the high-level language, it would be far more modular and malleable. In this paper, we describe a novel concurrency substrate design for the Glasgow Haskell Compiler that allows multicore schedulers for concurrent and parallel Haskell programs to be safely and modularly described as libraries in Haskell. The approach relies on abstracting the interface to the user-implemented schedulers through scheduler activations, together with the use of Software Transactional Memory to promote safety in a multicore context.},
author = {Sivaramakrishnan, K. C. and Harris, Tim and Marlow, Simon and {Peyton Jones}, Simon},
doi = {10.1017/S0956796816000071},
issn = {0956-7968},
journal = {Journal of Functional Programming},
number = {April},
pages = {e9},
title = {{Composable scheduler activations for Haskell}},
url = {http://www.journals.cambridge.org/abstract{\_}S0956796816000071},
volume = {26},
year = {2016}
}
@article{Kang2016,
author = {Kang, Jeehoon and Hur, Chung-Kil and Lahav, Ori and Vafeiadis, Viktor and Dreyer, Derek},
keywords = {operational semantics,weak memory models},
pages = {1--17},
title = {{A Promising Semantics for Relaxed-Memory Concurrency}},
year = {2016}
}
@article{Loncaric2018,
abstract = {Data structure synthesis is the task of generating data structure implementations from high-level specifications. Recent work in this area has shown potential to save programmer time and reduce the risk of defects. Existing techniques focus on data structures for manipulating subsets of a single collection, but real-world programs often track multiple related collections and aggregate properties such as sums, counts, minimums, and maximums. This paper shows how to synthesize data structures that track subsets and aggregations of multiple related collections. Our technique decomposes the synthesis task into alternating steps of query synthesis and incrementalization. The query synthesis step implements pure operations over the data structure state by leveraging existing enumerative synthesis techniques, specialized to the data structures domain. The incrementalization step implements imperative state modifications by re-framing them as fresh queries that determine what to change, coupled with a small amount of code to apply the change. As an added benefit of this approach over previous work, the synthesized data structure is optimized for not only the queries in the specification but also the required update operations. We have evaluated our approach in four large case studies, demonstrating that these extensions are broadly applicable.},
author = {Loncaric, Calvin and Ernst, Michael D. and Torlak, Emina},
doi = {10.1145/3180155.3180211},
isbn = {9781450356381},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {,Automatic programming,Data structures,Program synthesis},
pages = {958--968},
title = {{Generalized data structure synthesis}},
year = {2018}
}
@article{Swamy2011,
abstract = {Distributed applications are difficult to program reliably and securely. Dependently typed functional languages promise to prevent broad classes of errors and vulnerabilities, and to enable program verification to proceed side-by-side with development. However, as recursion, effects, and rich libraries are added, using types to reason about programs, specifications, and proofs becomes challenging. We present F-star, a full-fledged design and implementation of a new dependently typed language for secure distributed programming. Unlike prior languages, F-star provides arbitrary recursion while maintaining a logically consistent core; it enables modular reasoning about state and other effects using affine types; and it supports proofs of refinement properties using a mixture of cryptographic evidence and logical proof terms. The key mechanism is a new kind system that tracks several sub-languages within F-star and controls their interaction. F-star subsumes two previous languages, F7 and Fine. We prove type soundness (with proofs mechanized in Coq) and logical consistency for F-star. We have implemented a compiler that translates F-star to NET bytecode, based on a prototype for Fine. F-star provides access to libraries for concurrency, networking, cryptography, and interoperability with C{\#}, F{\#}, and the other .NET languages. The compiler produces verifiable binaries with 60{\%} code size overhead for proofs and types, as much as a 45x improvement over the Fine compiler, while still enabling efficient bytecode verification. To date, we have programmed and verified more than 20,000 lines of F-star including (1) new schemes for multi-party sessions; (2) a zero-knowledge privacy-preserving payment protocol; (3) a provenance-aware curated database; (4) a suite of 17 web-browser extensions verified for authorization properties; and (5) a cloud-hosted multi-tier web application with a verified reference monitor.},
author = {Swamy, Nikhil and Chen, Juan and Fournet, C{\'{e}}dric and Strub, Pierre-Yves and Bhargavan, Karthikeyan and Yang, Jean},
doi = {10.1145/2034574.2034811},
isbn = {9781450308656},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {formal definitions,languages,refinement types,security,security type systems,theory,verification},
month = {sep},
number = {9},
pages = {266},
title = {{Secure distributed programming with value-dependent types}},
url = {http://dl.acm.org/citation.cfm?doid=2034574.2034811},
volume = {46},
year = {2011}
}
@article{VanHorn2012,
abstract = {We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines for higher-order and imperative programming languages. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman, a lazy variant of Krivine's machine, and the stack-inspecting CM machine of Clements and Felleisen into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique of store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection. In order to close the gap between formalism and implementation, we provide translations of the mathematics as running Haskell code for the initial development of our method.},
archivePrefix = {arXiv},
arxivId = {1107.3539},
author = {{Van Horn}, David and Might, Matthew},
doi = {10.1017/S0956796812000238},
eprint = {1107.3539},
issn = {0956-7968},
journal = {Journal of Functional Programming},
number = {4-5},
pages = {705--746},
title = {{Systematic abstraction of abstract machines}},
url = {http://www.journals.cambridge.org/abstract{\_}S0956796812000238},
volume = {22},
year = {2012}
}
@article{Devlin2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1805.04276v2},
author = {Bunel, Rudy and Hausknecht, Matthew and Devlin, Jacob and Singh, Rishabh and Kohli, Pushmeet},
eprint = {arXiv:1805.04276v2},
journal = {6th International Conference on Learning Representations, ICLR 2018},
pages = {1--15},
title = {{Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis}},
year = {2018}
}
@article{Sheard2002,
author = {Sheard, Tim and {Peyton Jones}, Simon},
isbn = {1581134150},
journal = {Proc. of the 2002 ACM SIGPLAN Workshop on Haskell},
keywords = {meta programming,templates},
pages = {1--16},
title = {{Template Meta-Programming for Haskell}},
year = {2002}
}
@article{Krebbers,
abstract = {Concurrent separation logics (CSLs) have come of age, and with age they have accumulated a great deal of complexity. Previous work on the Iris logic attempted to reduce the complex logical mechanisms of modern CSLs to two orthogonal concepts: partial commutative monoids (PCMs) and invariants. However, the realization of these concepts in Iris still bakes in several complex mechanisms—such as weakest preconditions and mask-changing view shifts—as primitive notions. In this paper, we take the Iris story to its (so to speak) logical conclusion, applying the reductionist methodology of Iris to Iris itself. Specifically, we define a small, resourceful base logic, which distills the essence of Iris: it comprises only the assertion layer of vanilla separation logic, plus a handful of simple modalities. We then show how the much fancier logical mechanisms of Iris—in particular, its entire program specification layer—can be understood as merely derived forms in our base logic. This approach helps to explain the meaning of Iris's program specifications at a much higher level of abstraction than was previously possible. We also show that the step-indexed "later" modality of Iris is an essential source of complexity, in that removing it leads to a logical inconsistency. All our results are fully formalized in the Coq proof assistant.},
author = {Krebbers, Robbert and Jung, Ralf and Jourdan, Jacques-henri and Dreyer, Derek and Birkedal, Lars},
pages = {1--29},
title = {{The Essence of Higher-Order Concurrent Separation Logic}},
year = {2017}
}
@article{Lahiri2010,
abstract = {It is widely believed that program analysis can be more closely targeted to the needs of programmers if the program is accompanied by further redundant documentation. This may include regression test suites, API protocol usage, and code contracts. To this should be added the largest and most redundant text of all: the previous version of the same program. It is the differences between successive versions of a legacy program already in use which occupy most of a programmer's time. Although differential analysis in the form of equivalence checking has been quite successful for hardware designs, it has not received as much attention in the static program analysis community. This paper briefly summarizes the current state of the art in differential static analysis for software, and suggests a number of promising applications. Although regression test generation has often been thought of as the ultimate goal of differential analysis, we highlight several other applications that can be enabled by differential static analysis. This includes equivalence checking, semantic diffing, differential contract checking, summary validation, invariant discovery and better debugging. We speculate that differential static analysis tools have the potential to be widely deployed on the developer's toolbox despite the fundamental stumbling blocks that limit the adoption of static analysis. Copyright 2010 ACM.},
author = {Lahiri, Shuvendu K. and Vaswani, Kapil and Hoare, C. A.R.},
doi = {10.1145/1882362.1882405},
isbn = {9781450304276},
journal = {Proceedings of the FSE/SDP Workshop on the Future of Software Engineering Research, FoSER 2010},
keywords = {Differential analysis,Equivalence checking,Regression testing,Semantic diff,Static analysis},
pages = {201--204},
title = {{Differential static analysis: Opportunities, applications, and challenges}},
year = {2010}
}
@article{Swamy2016,
abstract = {We present F*, a new language that works both as a proof assistant as well as a general-purpose, verification-oriented, effectful programming language. In support of these complementary roles, F* is a dependently typed, higher-order, call-by-value language with primitive effects including state, exceptions, divergence and IO. Although primitive, programmers choose the granularity at which to specify effects by equipping each effect with a monadic, predicate transformer semantics. F* uses this to efficiently compute weakest preconditions and discharges the resulting proof obligations using a combination of SMT solving and manual proofs. Isolated from the effects, the core of F* is a language of pure functions used to write specifications and proof terms—its consistency is maintained by a semantic termination check based on a well-founded order. We evaluate our design on more than 55,000 lines of F* we have authored in the last year, focusing on three main case studies. Showcasing its use as a general-purpose programming language, F* is programmed (but not verified) in F*, and bootstraps in both OCaml and F{\#}. Our experience confirms F*'s pay-as-you-go cost model: writing idiomatic ML-like code with no finer specifications imposes no user burden. As a verification-oriented language, our most significant evaluation of F* is in verifying several key modules in an implementation of the TLS-1.2 protocol standard. For the modules we considered, we are able to prove more properties, with fewer annotations using F* than in a prior verified implementation of TLS-1.2. Finally, as a proof assistant, we discuss our use of F* in mechanizing the metatheory of a range of lambda calculi, starting from the simply typed lambda calculus to F$\omega$ and even µF*, a sizeable fragment of F* itself—these proofs make essential use of F*'s flexible combination of SMT automation and constructive proofs, enabling a tactic-free style of programming and proving at a relatively large scale.},
author = {Swamy, Nikhil and Hritcu, Catalin and Keller, Chantal and Rastogi, Aseem and Delignat-Lavaud, Antoine and Forest, Simon and Bhargavan, Karthikeyan and Fournet, Cedric and Strub, Pierre-Yves and Kohlweiss, Markulf and Zinzindohoue, Jean-Karim and Zanella-Beguelin, Santiago},
doi = {10.1145/2837614.2837655},
isbn = {978-1-4503-3549-2},
issn = {15232867},
journal = {POPL'16: Proceedings of the 43rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages},
keywords = {effectful programming,proof assistants,verification},
pages = {256--270},
title = {{Dependent Types and Multi-Monadic Effects in F*}},
year = {2016}
}
@article{cibior2015,
abstract = {The machine learning community has recently shown a lot of interest in practical probabilistic programming systems that target the problem of Bayesian inference. Such systems come in different forms, but they all express probabilistic models as computational processes using syntax resembling programming languages. In the functional programming community monads are known to offer a convenient and elegant abstraction for programming with probability distributions, but their use is often limited to very simple inference problems. We show that it is possible to use the monad abstraction to construct probabilistic models for machine learning, while still offering good performance of inference in challenging models. We use a GADT as an underlying representation of a probability distribution and apply Sequential Monte Carlo-based methods to achieve efficient inference. We define a formal semantics via measure theory. We demonstrate a clean and elegant implementation that achieves performance comparable with Anglican, a state-of-the-art probabilistic programming system.},
author = {{\'{S}}cibior, Adam and Ghahramani, Zoubin and Gordon, Andrew D.},
doi = {10.1145/2804302.2804317},
isbn = {9781450338080},
journal = {Haskell 2015 - Proceedings of the 8th ACM SIGPLAN Symposium on Haskell, co-located with ICFP 2015},
pages = {165--176},
title = {{Practical probabilistic programming with monads}},
year = {2015}
}
@article{Cusumano-Towner2019,
abstract = {Although probabilistic programming is widely used for some restricted classes of statistical models, existing systems lack the flexibility and efficiency needed for practical use with more challenging models arising in fields like computer vision and robotics. This paper introduces Gen, a general-purpose probabilistic programming system that achieves modeling flexibility and inference efficiency via several novel language constructs: (i) the generative function interface for encapsulating probabilistic models; (ii) interoperable modeling languages that strike different flexibility/efficiency tradeoffs; (iii) combinators that exploit common patterns of conditional independence; and (iv) an inference library that empowers users to implement efficient inference algorithms at a high level of abstraction. We show that Gen outperforms state-of-the-art probabilistic programming systems, sometimes by multiple orders of magnitude, on diverse problems including object tracking, estimating 3D body pose from a depth image, and inferring the structure of a time series.},
author = {Cusumano-Towner, Marco F. and Lew, Alexander K. and Saad, Feras A. and Mansinghka, Vikash K.},
doi = {10.1145/3314221.3314642},
isbn = {9781450367127},
journal = {Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)},
keywords = {Markov chain Monte Carlo,Probabilistic programming,Sequential Monte Carlo,Variational inference},
pages = {221--236},
title = {{Gen: A general-purpose probabilistic programming system with programmable inference}},
year = {2019}
}
@article{Ellis2019,
abstract = {We present a neural program synthesis approach integrating components which write, execute, and assess code to navigate the search space of possible programs. We equip the search process with an interpreter or a read-eval-print-loop (REPL), which immediately executes partially written programs, exposing their semantics. The REPL addresses a basic challenge of program synthesis: tiny changes in syntax can lead to huge changes in semantics. We train a pair of models, a policy that proposes the new piece of code to write, and a value function that assesses the prospects of the code written so-far. At test time we can combine these models with a Sequential Monte Carlo algorithm. We apply our approach to two domains: synthesizing text editing programs and inferring 2D and 3D graphics programs.},
archivePrefix = {arXiv},
arxivId = {1906.04604},
author = {Ellis, Kevin and Nye, Maxwell and Pu, Yewen and Sosa, Felix and Tenenbaum, Josh and Solar-Lezama, Armando},
eprint = {1906.04604},
title = {{Write, Execute, Assess: Program Synthesis with a REPL}},
url = {http://arxiv.org/abs/1906.04604},
year = {2019}
}
@article{Bastani2017,
abstract = {We present an algorithm for synthesizing a context-free grammar encoding the language of valid program inputs from a set of input examples and blackbox access to the program. Our algorithm addresses shortcomings of existing grammar inference algorithms, which both severely overgeneralize and are prohibitively slow. Our implementation, GLADE, leverages the grammar synthesized by our algorithm to fuzz test programs with structured inputs. We show that GLADE substantially increases the incremental coverage on valid inputs compared to two baseline fuzzers.},
author = {Bastani, Osbert and Sharma, Rahul and Aiken, Alex and Liang, Percy},
doi = {10.1145/3140587.3062349},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {fuzzing,grammar synthesis},
number = {6},
pages = {95--110},
title = {{Synthesizing program input grammars}},
volume = {52},
year = {2017}
}
@article{Song2018,
abstract = {We study the problem of learning a good search policy for combinatorial search spaces. We propose retrospective imitation learning, which, after initial training by an expert, improves itself by learning from retrospective inspections of its own roll-outs. That is, when the policy eventually reaches a feasible solution in a combinatorial search tree after making mistakes and backtracks, it retrospectively constructs an improved search trace to the solution by removing backtracks, which is then used to further train the policy. A key feature of our approach is that it can iteratively scale up, or transfer, to larger problem sizes than those solved by the initial expert demonstrations, thus dramatically expanding its applicability beyond that of conventional imitation learning. We showcase the effectiveness of our approach on a range of tasks, including synthetic maze solving and combinatorial problems expressed as integer programs.},
archivePrefix = {arXiv},
arxivId = {1804.00846},
author = {Song, Jialin and Lanka, Ravi and Zhao, Albert and Bhatnagar, Aadyot and Yue, Yisong and Ono, Masahiro},
eprint = {1804.00846},
title = {{Learning to Search via Retrospective Imitation}},
url = {http://arxiv.org/abs/1804.00846},
year = {2018}
}
@article{Nye2019,
abstract = {Our goal is to build systems which write code automatically from the kinds of specifications humans can most easily provide, such as examples and natural language instruction. The key idea of this work is that a flexible combination of pattern recognition and explicit reasoning can be used to solve these complex programming problems. We propose a method for dynamically integrating these types of information. Our novel intermediate representation and training algorithm allow a program synthesis system to learn, without direct supervision, when to rely on pattern recognition and when to perform symbolic search. Our model matches the memorization and generalization performance of neural synthesis and symbolic search, respectively, and achieves state-of-the-art performance on a dataset of simple English description-to-code programming problems.},
archivePrefix = {arXiv},
arxivId = {1902.06349},
author = {Nye, Maxwell and Hewitt, Luke and Tenenbaum, Joshua and Solar-Lezama, Armando},
eprint = {1902.06349},
title = {{Learning to Infer Program Sketches}},
url = {http://arxiv.org/abs/1902.06349},
year = {2019}
}
@article{Wu,
author = {Wu, Zhengkai and Johnson, Evan and Yang, Wei and Bastani, Osbert and Song, Dawn and Peng, Jian and Xie, Tao},
isbn = {9781450355728},
journal = {ESEC/FSE 2019 - Proceedings of the 2019 27th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {dynamic symbolic execution,grammar synthesis,reinforcement learning},
pages = {488--498},
title = {{REINAM: Reinforcement Learning for Input-Grammar Inference}},
year = {2019}
}
@article{Chen2019,
abstract = {Neural program synthesis from input-output examples has attracted an increasing interest from both the machine learning and the programming language community. Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax. Although such approaches achieve a reasonable performance on simple tasks such as FlashFill, on more complex tasks such as Karel, the state-of-the-art approach can only achieve an accuracy of around 77{\%}. We observe that the main drawback of existing approaches is that the semantic information is greatly under-utilized. In this work, we propose two simple yet principled techniques to better leverage the semantic information, which are execution-guided synthesis and synthesizer ensemble. These techniques are general enough to be combined with any existing encoder-decoder-style neural program synthesizer. Applying our techniques to the Karel dataset, we can boost the accuracy from around 77{\%} to more than 90{\%}.},
author = {Chen, Xinyun and Liu, Chang and Song, Dawn},
journal = {7th International Conference on Learning Representations, ICLR 2019},
pages = {1--15},
title = {{Execution-guided neural program synthesis}},
year = {2019}
}
@article{Balunovic2018,
abstract = {We present a new approach for learning to solve SMT formulas. We phrase the challenge of solving SMT formulas as a tree search problem where at each step a transformation is applied to the input formula until the formula is solved. Our approach works in two phases: first, given a dataset of unsolved formulas we learn a policy that for each formula selects a suitable transformation to apply at each step in order to solve the formula, and second, we synthesize a strategy in the form of a loop-free program with branches. This strategy is an interpretable representation of the policy decisions and is used to guide the SMT solver to decide formulas more efficiently, without requiring any modification to the solver itself and without needing to evaluate the learned policy at inference time. We show that our approach is effective in practice - it solves 17{\%} more formulas over a range of benchmarks and achieves up to 100× runtime improvement over a state-of-the-art SMT solver.},
author = {Balunovi{\'{c}}, Mislav and Bielik, Pavol and Vechev, Martin},
issn = {10495258},
journal = {Advances in Neural Information Processing Systems},
number = {NeurIPS},
pages = {10317--10328},
title = {{Learning to solve SMT formulas}},
volume = {2018-December},
year = {2018}
}
@article{Chen2018,
abstract = {We introduce a learning-based framework to optimize tensor programs for deep learning workloads. Efficient implementations of tensor operators, such as matrix multiplication and high dimensional convolution, are key enablers of effective deep learning systems. However, current systems rely on manually optimized libraries, e.g., cuDNN, that support only a narrow range of server class GPUs. Such reliance limits the applicability of high-level graph optimizations and incurs significant engineering costs when deploying to new hardware targets. We use learning to remove this engineering burden. We learn domain-specific statistical cost models to guide the search of tensor operator implementations over billions of possible program variants. We further accelerate the search using effective model transfer across workloads. Experimental results show that our framework delivers performance that is competitive with state-of-the-art hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs.},
archivePrefix = {arXiv},
arxivId = {1805.08166},
author = {Chen, Tianqi and Zheng, Lianmin and Yan, Eddie and Jiang, Ziheng and Moreau, Thierry and Ceze, Luis and Guestrin, Carlos and Krishnamurthy, Arvind},
eprint = {1805.08166},
issn = {10495258},
journal = {Advances in Neural Information Processing Systems},
number = {NeurIPS},
pages = {3389--3400},
title = {{Learning to optimize tensor programs}},
volume = {2018-December},
year = {2018}
}
@article{Wang2019,
abstract = {Constraint-solving is an expensive phase for scenario finding tools. It has been widely observed that there is no single 'dominant' SAT solver that always wins in every case; instead, the performance of different solvers varies by cases. Some SAT solvers perform particularly well for certain tasks while other solvers perform well for other tasks. In this paper, we propose an approach that uses machine learning techniques to automatically select a SAT solver for one of the widely used scenario finding tools, i.e. Alloy Analyzer, based on the features extracted from a given model. The goal is to choose the best SAT solver for a given model to minimize the expensive constraint solving time. We extract features from three different levels, i.e. the Alloy source code level, the Kodkod formula level and the boolean formula level. The experimental results show that our portfolio approach outperforms the best SAT solver by 30{\%} as well as the baseline approach by 128{\%} where users randomly select a solver for any given model.},
author = {Wang, Wenxi and Wang, Kaiyuan and Zhang, Mengshi and Khurshid, Sarfraz},
doi = {10.1109/ICST.2019.00031},
isbn = {9781728117355},
journal = {Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation, ICST 2019},
keywords = {Alloy Analyzer,Machine learning,SAT solver},
pages = {228--239},
publisher = {IEEE},
title = {{Learning to optimize the alloy analyzer}},
year = {2019}
}
@article{Cruciani2019,
abstract = {Test suite reduction approaches aim at decreasing software regression testing costs by selecting a representative subset from large-size test suites. Most existing techniques are too expensive for handling modern massive systems and moreover depend on artifacts, such as code coverage metrics or specification models, that are not commonly available at large scale. We present a family of novel very efficient approaches for similarity-based test suite reduction that apply algorithms borrowed from the big data domain together with smart heuristics for finding an evenly spread subset of test cases. The approaches are very general since they only use as input the test cases themselves (test source code or command line input). We evaluate four approaches in a version that selects a fixed budget B of test cases, and also in an adequate version that does the reduction guaranteeing some fixed coverage. The results show that the approaches yield a fault detection loss comparable to state-of-the-art techniques, while providing huge gains in terms of efficiency. When applied to a suite of more than 500K real world test cases, the most efficient of the four approaches could select B test cases (for varying B values) in less than 10 seconds.},
author = {Cruciani, Emilio and Miranda, Breno and Verdecchia, Roberto and Bertolino, Antonia},
doi = {10.1109/ICSE.2019.00055},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Clustering,Random projection,Similarity-based testing,Software testing,Test suite reduction},
pages = {419--429},
title = {{Scalable Approaches for Test Suite Reduction}},
volume = {2019-May},
year = {2019}
}
@article{Banerjee2019,
abstract = {NullPointerExceptions (NPEs) are a key source of crashes in modern Java programs. Previous work has shown how such errors can be prevented at compile time via code annotations and pluggable type checking. However, such systems have been difficult to deploy on large-scale software projects, due to significant build-time overhead and/or a high annotation burden. This paper presents NullAway, a new type-based null safety checker for Java that overcomes these issues. NullAway has been carefully engineered for low overhead, so it can run as part of every build. Further, NullAway reduces annotation burden through targeted unsound assumptions, aiming for no false negatives in practice on checked code. Our evaluation shows that NullAway has significantly lower build-time overhead (1.15×) than comparable tools (2.8-5.1×). Further, on a corpus of production crash data for widely-used Android apps built with NullAway, remaining NPEs were due to unchecked third-party libraries (64{\%}), deliberate error suppressions (17{\%}), or reflection and other forms of post-checking code modification (17{\%}), never due to NullAway's unsound assumptions for checked code.},
archivePrefix = {arXiv},
arxivId = {1907.02127},
author = {Banerjee, Subarno and Clapp, Lazaro and Sridharan, Manu},
doi = {10.1145/3338906.3338919},
eprint = {1907.02127},
isbn = {9781450355728},
journal = {ESEC/FSE 2019 - Proceedings of the 2019 27th ACM Joint Meeting European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Null safety,Pluggable type systems,Static analysis,Type systems},
pages = {740--750},
title = {{NullAway: Practical type-based null safety for Java}},
year = {2019}
}
@article{Su2019,
abstract = {Modern software packages have become increasingly complex with millions of lines of code and references to many external libraries. Redundant operations are a common performance limiter in these code bases. Missed compiler optimization opportunities, inappropriate data structure and algorithm choices, and developers' inattention to performance are some common reasons for the existence of redundant operations. Developers mainly depend on compilers to eliminate redundant operations. However, compilers' static analysis often misses optimization opportunities due to ambiguities and limited analysis scope; automatic optimizations to algorithmic and data structural problems are out of scope. We develop LoadSpy, a whole-program profiler to pinpoint redundant memory load operations, which are often a symptom of many redundant operations. The strength of LoadSpy exists in identifying and quantifying redundant load operations in programs and associating the redundancies with program execution contexts and scopes to focus developers' attention on problematic code. LoadSpy works on fully optimized binaries, adopts various optimization techniques to reduce its overhead, and provides a rich graphic user interface, which make it a complete developer tool. Applying LoadSpy showed that a large fraction of redundant loads is common in modern software packages despite highest levels of automatic compiler optimizations. Guided by LoadSpy, we optimize several well-known benchmarks and real-world applications, yielding significant speedups.},
archivePrefix = {arXiv},
arxivId = {1902.05462},
author = {Su, Pengfei and Wen, Shasha and Yang, Hailong and Chabbi, Milind and Liu, Xu},
doi = {10.1109/ICSE.2019.00103},
eprint = {1902.05462},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Performance measurement,Software optimization,Tools,Whole-program profiling},
pages = {982--993},
title = {{Redundant Loads: A Software Inefficiency Indicator}},
volume = {2019-May},
year = {2019}
}
@article{Wang2019a,
abstract = {Crowdtesting has become an effective alternative to traditional testing, especially for mobile applications. However, crowdtesting is hard to manage in nature. Given the complexity of mobile applications and unpredictability of distributed crowdtesting processes, it is difficult to estimate (a) remaining number of bugs yet to be detected or (b) required cost to find those bugs. Experience-based decisions may result in ineffective crowdtesting processes, e.g., there is an average of 32{\%} wasteful spending in current crowdtesting practices. This paper aims at exploring automated decision support to effectively manage crowdtesting processes. It proposes an approach named ISENSE which applies incremental sampling technique to process crowdtesting reports arriving in chronological order, organizes them into fixed-size groups as dynamic inputs, and predicts two test completion indicators in an incremental manner. The two indicators are: 1) total number of bugs predicted with Capture-ReCapture model, and 2) required test cost for achieving certain test objectives predicted with AutoRegressive Integrated Moving Average model. The evaluation of ISENSE is conducted on 46,434 reports of 218 crowdtesting tasks from one of the largest crowdtesting platforms in China. Its effectiveness is demonstrated through two application studies for automating crowdtesting management and semi-automation of task closing trade-off analysis. The results show that ISENSE can provide managers with greater awareness of testing progress to achieve cost-effectiveness gains of crowdtesting. Specifically, a median of 100{\%} bugs can be detected with 30{\%} saved cost based on the automated close prediction.},
author = {Wang, Junjie and Yang, Ye and Krishna, Rahul and Menzies, Tim and Wang, Qing},
doi = {10.1109/ICSE.2019.00097},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Crowdtesting,automated close prediction,crowdtesting management,test completion},
pages = {912--923},
title = {{ISENSE: Completion-Aware Crowdtesting Management}},
volume = {2019-May},
year = {2019}
}
@article{Qiu2019,
abstract = {Sustained participation by contributors in open-source software is critical to the survival of open-source projects and can provide career advancement benefits to individual contributors. However, not all contributors reap the benefits of open-source participation fully, with prior work showing that women are particularly underrepresented and at higher risk of disengagement. While many barriers to participation in open-source have been documented in the literature, relatively little is known about how the social networks that open-source contributors form impact their chances of long-term engagement. In this paper we report on a mixed-methods empirical study of the role of social capital (i.e., the resources people can gain from their social connections) for sustained participation by women and men in open-source GitHub projects. After combining survival analysis on a large, longitudinal data set with insights derived from a user survey, we confirm that while social capital is beneficial for prolonged engagement for both genders, women are at a disadvantage in teams lacking diversity in expertise.},
author = {Qiu, Huilian Sophie and Nolte, Alexander and Brown, Anita and Serebrenik, Alexander and Vasilescu, Bogdan},
doi = {10.1109/ICSE.2019.00078},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {gender,open source software,social capital},
pages = {688--699},
title = {{Going Farther Together: The Impact of Social Capital on Sustained Participation in Open Source}},
volume = {2019-May},
year = {2019}
}
@article{Wei2019,
abstract = {The heavily fragmented Android ecosystem has induced various compatibility issues in Android apps. The search space for such fragmentation-induced compatibility issues (FIC issues) is huge, comprising three dimensions: device models, Android OS versions, and Android APIs. FIC issues, especially those arising from device models, evolve quickly with the frequent release of new device models to the market. As a result, an automated technique is desired to maintain timely knowledge of such FIC issues, which are mostly undocumented. In this paper, we propose such a technique, PIVOT, that automatically learns API-device correlations of FIC issues from existing Android apps. PIVOT extracts and prioritizes API-device correlations from a given corpus of Android apps. We evaluated PIVOT with popular Android apps on Google Play. Evaluation results show that PIVOT can effectively prioritize valid API-device correlations for app corpora collected at different time. Leveraging the knowledge in the learned API-device correlations, we further conducted a case study and successfully uncovered ten previously-undetected FIC issues in open-source Android apps.},
author = {Wei, Lili and Liu, Yepang and Cheung, Shing Chi},
doi = {10.1109/ICSE.2019.00094},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Android fragmentation,compatibility,learning,static analysis},
pages = {878--888},
title = {{PIVOT: Learning API-Device Correlations to Facilitate Android Compatibility Issue Detection}},
volume = {2019-May},
year = {2019}
}
@article{Saini2019,
abstract = {Current research in clone detection suffers from poor ecosystems for evaluating precision of clone detection tools. Corpora of labeled clones are scarce and incomplete, making evaluation labor intensive and idiosyncratic, and limiting intertool comparison. Precision-assessment tools are simply lacking. We present a semiautomated approach to facilitate precision studies of clone detection tools. The approach merges automatic mechanisms of clone classification with manual validation of clone pairs. We demonstrate that the proposed automatic approach has a very high precision and it significantly reduces the number of clone pairs that need human validation during precision experiments. Moreover, we aggregate the individual effort of multiple teams into a single evolving dataset of labeled clone pairs, creating an important asset for software clone research.},
archivePrefix = {arXiv},
arxivId = {1812.05195},
author = {Saini, Vaibhav and Farmahinifarahani, Farima and Lu, Yadong and Yang, Di and Martins, Pedro and Sajnani, Hitesh and Baldi, Pierre and Lopes, Cristina V.},
doi = {10.1109/ICSE.2019.00023},
eprint = {1812.05195},
isbn = {9781728108698},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Clone Detection,Machine learning,Open source labeled datasets,Precision Evaluation},
pages = {49--59},
title = {{Towards Automating Precision Studies of Clone Detectors}},
volume = {2019-May},
year = {2019}
}
@article{Alur2013,
abstract = {The classical formulation of the program-synthesis problem is to find a program that meets a correctness specification given as a logical formula. Recent work on program synthesis and program optimization illustrates many potential benefits of allowing the user to supplement the logical specification with a syntactic template that constrains the space of allowed implementations. Our goal is to identify the core computational problem common to these proposals in a logical framework. The input to the syntax-guided synthesis problem (SyGuS) consists of a background theory, a semantic correctness specification for the desired program given by a logical formula, and a syntactic set of candidate implementations given by a grammar. The computational problem then is to find an implementation from the set of candidate expressions so that it satisfies the specification in the given theory. We describe three different instantiations of the counter-example-guided-inductive-synthesis (CEGIS) strategy for solving the synthesis problem, report on prototype implementations, and present experimental results on an initial set of benchmarks. {\textcopyright} 2013 FMCAD Inc.},
author = {Alur, Rajeev and Bodik, Rastislav and Juniwal, Garvit and Martin, Milo M.K. and Raghothaman, Mukund and Seshia, Sanjit A. and Singh, Rishabh and Solar-Lezama, Armando and Torlak, Emina and Udupa, Abhishek},
isbn = {9780983567837},
journal = {2013 Formal Methods in Computer-Aided Design, FMCAD 2013},
pages = {1--8},
title = {{Syntax-guided synthesis}},
year = {2013}
}
@article{Sommarive2010,
author = {Truong, Duy Tin},
number = {November},
note = {Technical report, University of Trento},
title = {{Automated Parameter Configuration for an SMT Solver}},
year = {2010}
}
@article{Pradel2018,
abstract = {Natural language elements in source code, e.g., the names of variables and functions, convey useful information. However, most existing bug detection tools ignore this information and therefore miss some classes of bugs. The few existing name-based bug detection approaches reason about names on a syntactic level and rely on manually designed and tuned algorithms to detect bugs. This paper presents DeepBugs, a learning approach to name-based bug detection, which reasons about names based on a semantic representation and which automatically learns bug detectors instead of manually writing them. We formulate bug detection as a binary classification problem and train a classifier that distinguishes correct from incorrect code. To address the challenge that effectively learning a bug detector requires examples of both correct and incorrect code, we create likely incorrect code examples from an existing corpus of code through simple code transformations. A novel insight learned from our work is that learning from artificially seeded bugs yields bug detectors that are effective at finding bugs in real-world code. We implement our idea into a framework for learning-based and name-based bug detection. Three bug detectors built on top of the framework detect accidentally swapped function arguments, incorrect binary operators, and incorrect operands in binary operations. Applying the approach to a corpus of 150,000 JavaScript files yields bug detectors that have a high accuracy (between 89{\%} and 95{\%}), are very efficient (less than 20 milliseconds per analyzed file), and reveal 102 programming mistakes (with 68{\%} true positive rate) in real-world code.},
archivePrefix = {arXiv},
arxivId = {arXiv:1805.11683v1},
author = {Pradel, Michael and Sen, Koushik},
doi = {10.1145/3276517},
eprint = {arXiv:1805.11683v1},
journal = {Proceedings of the ACM on Programming Languages},
number = {OOPSLA},
pages = {1--25},
title = {{DeepBugs: a learning approach to name-based bug detection}},
volume = {2},
year = {2018}
}
@article{Margolies,
archivePrefix = {arXiv},
arxivId = {arXiv:1410.6935v1},
author = {Margolies, Robert and Gorlatova, Maria and Sarik, John and Kinget, Peter and Kymissis, Ioannis and Zussman, Gil},
eprint = {arXiv:1410.6935v1},
isbn = {9781450320788},
keywords = {embedded systems,interdisciplinary learning,internet of things,project-based learning,wireless networking},
title = {{Project-Based Learning Within a Large-Scale Interdisciplinary Research Effort}},
volume = {1}
}
@article{Tufano2019,
abstract = {Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore such a potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug-fixes from the change histories of projects hosted on GitHub, in order to extract meaningful examples of such bug-fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9-50{\%} of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.},
author = {Tufano, Michele and Watson, Cody and Bavota, Gabriele and Penta, Massimiliano Di and White, Martin and Poshyvanyk, Denys},
doi = {10.1145/3340544},
isbn = {9781450359375},
issn = {1049331X},
journal = {ACM Transactions on Software Engineering and Methodology},
keywords = {bug-fixes,neural machine translation},
number = {4},
pages = {1--29},
title = {{An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation}},
volume = {28},
year = {2019}
}
@article{Song2015,
abstract = {In June 2014, DARPA launched the Cyber Grand Challenge (CGC) to spur innovation in fully automated software vulnerability analysis and repair. The competitors' automated systems evaluated challenges in the CGC Qualifying Event, with the top seven including the University of Idaho's Center for Secure and Dependable Systems team moving on to the fully automated capture-the-flag competition, which will be held in August 2016.},
author = {Song, Jia and Alves-Foss, Jim},
doi = {10.1109/MSP.2016.14},
issn = {15584046},
journal = {IEEE Security and Privacy},
keywords = {DARPA,cyber competition,cybersecurity,defense,military,security,software},
number = {6},
pages = {76--81},
title = {{The DARPA Cyber Grand Challenge: A Competitor's Perspective, Part 2}},
volume = {14},
year = {2016}
}
@article{Stampoulis2018,
author = {Stampoulis, Antonis and Chlipala, Adam},
journal = {Proceedings of the ACM on Programming Languages},
keywords = {higher-order logic programming,programming languages},
number = {ICFP},
title = {{Prototyping a Functional Language using Higher-Order Logic Programming: A Functional Pearl on Learning the Ways of $\lambda$Prolog/Makam}},
volume = {2},
year = {2018}
}
@article{Breitner2018,
author = {Breitner, Joachim and Spector-Zabusky, Antal and Li, Yao and Rizkallah, Christine and Wiegley, John and Weirich, Stephanie},
journal = {Proceedings of the ACM on Programming Languages},
number = {ICFP},
title = {{Ready, Set, Verify! Applying hs-to-coq to Real-World Haskell Code (Experience Report)}},
volume = {2},
year = {2018}
}
@article{Wei,
author = {Wei, Jiayi and Chen, Jia and Feng, Yu and Ferles, Kostas and Dillig, Isil},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {availability vulnerability,complexity testing,fuzzing,genetic programming,optimal program synthesis,performance bug},
title = {{Singularity: Pattern Fuzzing for Worst Case Complexity}},
year = {2018}
}
@article{Dutta2018,
abstract = {Probabilistic programming systems (PP systems) allow developers to model stochastic phenomena and perform efficient inference on the models. The number and adoption of probabilistic programming systems are growing significantly. However, there is no prior study of bugs in these systems and no methodology for systematically testing PP systems. Yet, testing PP systems is highly non-trivial, especially when they perform approximate inference. In this paper, we characterize 118 previously reported bugs in three open-source PP systems (Edward, Pyro and Stan) and propose ProbFuzz, an extensible system for testing PP systems. ProbFuzz allows a developer to specify templates of probabilistic models, from which it generates concrete probabilistic programs and data for testing. ProbFuzz uses language-specific translators to generate these concrete programs, which use the APIs of each PP system. ProbFuzz finds potential bugs by checking the output from running the generated programs against several oracles, including an accuracy checker. Using ProbFuzz, we found 67 previously unknown bugs in recent versions of these PP systems. Developers already accepted 51 bug fixes that we submitted to the three PP systems, and their underlying systems, PyTorch and TensorFlow.},
author = {Dutta, Saikat and Legunsen, Owolabi and Huang, Zixin and Misailovic, Sasa},
doi = {10.1145/3236024.3236057},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Probabilistic programming languages,Software Testing},
pages = {574--586},
title = {{Testing probabilistic programming systems}},
year = {2018}
}
@article{Arcuri2011,
author = {Arcuri, Andrea and Briand, Lionel},
doi = {10.1145/1985793.1985795},
isbn = {9781450304450},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {bonferroni adjustment,confidence interval,effect size,non-parametric test,parametric test,statistical difference,survey,systematic review},
pages = {1--10},
title = {{A practical guide for using statistical tests to assess randomized algorithms in software engineering}},
year = {2011}
}
@article{Lemieux2018,
abstract = {Performance problems in software can arise unexpectedly when programs are provided with inputs that exhibit worst-case behavior. A large body of work has focused on diagnosing such problems via statistical profiling techniques. But how does one find these inputs in the first place? We present PerfFuzz, a method to automatically generate inputs that exercise pathological behavior across program locations, without any domain knowledge. PerfFuzz generates inputs via feedback-directed mutational fuzzing. Unlike previous approaches that attempt to maximize only a scalar characteristic such as the total execution path length, PerfFuzz uses multi-dimensional feedback and independently maximizes execution counts for all program locations. This enables PerfFuzz to (1) find a variety of inputs that exercise distinct hot spots in a program and (2) generate inputs with higher total execution path length than previous approaches by escaping local maxima. PerfFuzz is also effective at generating inputs that demonstrate algorithmic complexity vulnerabilities. We implement PerfFuzz on top of AFL, a popular coverage-guided fuzzing tool, and evaluate PerfFuzz on four real-world C programs typically used in the fuzzing literature. We find that PerfFuzz outperforms prior work by generating inputs that exercise the most-hit program branch 5× to 69× times more, and result in 1.9× to 24.7× longer total execution paths.},
author = {Lemieux, Caroline and Padhye, Rohan and Sen, Koushik and Song, Dawn},
doi = {10.1145/3213846.3213874},
isbn = {9781450356992},
journal = {ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis},
keywords = {Algorithmic complexity,Fuzz testing,Performance,Worst-case},
pages = {254--265},
title = {{PerfFuzz: Automatically generating pathological inputs}},
year = {2018}
}
@article{Long2016,
abstract = {We present Prophet, a novel patch generation system that works with a set of successful human patches obtained from open-source software repositories to learn a probabilistic, application-independent model of correct code. It generates a space of candidate patches, uses the model to rank the candidate patches in order of likely correctness, and validates the ranked patches against a suite of test cases to find correct patches. Experimental results show that, on a benchmark set of 69 real-world defects drawn from eight open-source projects, Prophet significantly outperforms the previous state-of-the-art patch generation system.},
author = {Long, Fan and Rinard, Martin},
doi = {10.1145/2837614.2837617},
isbn = {9781450335492},
issn = {15232867},
journal = {ACM SIGPLAN Notices},
keywords = {Code correctness model,Learning correct code,Program repair},
number = {1},
pages = {298--312},
title = {{Automatic patch generation by learning correct code}},
volume = {51},
year = {2016}
}
@article{Fan2019,
abstract = {Detecting memory leak at industrial scale is still not well addressed, in spite of the tremendous effort from both industry and academia in the past decades. Existing work suffers from an unresolved paradox-a highly precise analysis limits its scalability and an imprecise one seriously hurts its precision or recall. In this work, we present SMOKE, a staged approach to resolve this paradox. In the first stage, instead of using a uniform precise analysis for all paths, we use a scalable but imprecise analysis to compute a succinct set of candidate memory leak paths. In the second stage, we leverage a more precise analysis to verify the feasibility of those candidates. The first stage is scalable, due to the design of a new sparse program representation, the use-flow graph (UFG), that models the problem as a polynomial-time state analysis. The second stage analysis is both precise and efficient, due to the smaller number of candidates and the design of a dedicated constraint solver. Experimental results show that SMOKE can finish checking industrial-sized projects, up to 8MLoC, in forty minutes with an average false positive rate of 24.4{\%}. Besides, SMOKE is significantly faster than the state-of-the-art research techniques as well as the industrial tools, with the speedup ranging from 5.2X to 22.8X. In the twenty-nine mature and extensively checked benchmark projects, SMOKE has discovered thirty previously-unknown memory leaks which were confirmed by developers, and one even assigned a CVE ID.},
author = {Fan, Gang and Wu, Rongxin and Shi, Qingkai and Xiao, Xiao and Zhou, Jinguo and Zhang, Charles},
doi = {10.1109/icse.2019.00025},
journal = {Proceedings - International Conference on Software Engineering},
pages = {72--82},
title = {{SMOKE: Scalable Path-Sensitive Memory Leak Detection for Millions of Lines of Code}},
volume = {2},
year = {2019}
}
@article{Motwani2019,
abstract = {Software specifications often use natural language to describe the desired behavior, but such specifications are difficult to verify automatically. We present Swami, an automated technique that extracts test oracles and generates executable tests from structured natural language specifications. Swami focuses on exceptional behavior and boundary conditions that often cause field failures but that developers often fail to manually write tests for. Evaluated on the official JavaScript specification (ECMA-262), 98.4{\%} of the tests Swami generated were precise to the specification. Using Swami to augment developer-written test suites improved coverage and identified 1 previously unknown defect and 15 missing JavaScript features in Rhino, 1 previously unknown defect in Node.js, and 18 semantic ambiguities in the ECMA-262 specification.},
author = {Motwani, Manish and Brun, Yuriy},
doi = {10.1109/icse.2019.00035},
journal = {Proceedings - International Conference on Software Engineering},
pages = {188--199},
title = {{Automatically Generating Precise Oracles from Structured Natural Language Specifications}},
year = {2019}
}
@article{Heo2019,
author = {Heo, Kihong and Oh, Hakjoo and Yang, Hongseok},
doi = {10.1109/icse.2019.00027},
journal = {Proceedings - International Conference on Software Engineering},
pages = {94--104},
title = {{Resource-Aware Program Analysis Via Online Abstraction Coarsening}},
year = {2019}
}
@article{Rolando2018,
abstract = {Background: Statistical concepts and techniques are often applied incorrectly, even in mature disciplines such as medicine or psychology. Surprisingly, there are very few works that study statistical problems in software engineering (SE). Aim: Assess the existence of statistical errors in SE experiments. Method: Compile the most common statistical errors in experimental disciplines. Survey experiments published in ICSE to assess whether errors occur in high quality SE publications. Results: The same errors as identified in other disciplines were found in ICSE experiments, where 30{\%} of the reviewed papers included several error types such as: a) missing statistical hypotheses, b) missing sample size calculation, c) failure to assess statistical test assumptions, and d) uncorrected multiple testing. This rather large error rate is greater for research papers where experiments are confined to the validation section. The origin of the errors can be traced back to: a) researchers not having sufficient statistical training, and, b) a profusion of exploratory research. Conclusions: This paper provides preliminary evidence that SE research suffers from the same statistical problems as other experimental disciplines. However, the SE community appears to be unaware of any shortcomings in its experiments, whereas other disciplines work hard to avoid these threats. Further research is necessary to find the underlying causes and set up corrective measures, but there are some potentially effective actions that are a priori easy to implement: a) improve the statistical training of SE researchers, and b) enforce quality assessment and reporting guidelines in SE publications.},
author = {Reyes Ch., Rolando P. and Dieste, Oscar and Fonseca C., Efra{\'{i}}n R. and Juristo, Natalia},
doi = {10.1145/3180155.3180161},
isbn = {9781450356381},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Literature review,Prevalence,Statistical errors,Survey},
pages = {1195--1206},
title = {{Statistical errors in software engineering experiments: A preliminary literature review}},
year = {2018}
}
@article{Hellendoorn2018,
abstract = {Dynamically typed languages such as JavaScript and Python are increasingly popular, yet static typing has not been totally eclipsed: Python now supports type annotations and languages like TypeScript offer a middle-ground for JavaScript: a strict superset of JavaScript, to which it transpiles, coupled with a type system that permits partially typed programs. However, static typing has a cost: adding annotations, reading the added syntax, and wrestling with the type system to fix type errors. Type inference can ease the transition to more statically typed code and unlock the benefits of richer compile-time information, but is limited in languages like JavaScript as it cannot soundly handle duck-typing or runtime evaluation via eval. We propose DeepTyper, a deep learning model that understands which types naturally occur in certain contexts and relations and can provide type suggestions, which can often be verified by the type checker, even if it could not infer the type initially. DeepTyper leverages an automatically aligned corpus of tokens and types to accurately predict thousands of variable and function type annotations. Furthermore, we demonstrate that context is key in accurately assigning these types and introduce a technique to reduce overfitting on local cues while highlighting the need for further improvements. Finally, we show that our model can interact with a compiler to provide more than 4,000 additional type annotations with over 95{\%} precision that could not be inferred without the aid of DeepTyper.},
author = {Hellendoorn, Vincent J. and Bird, Christian and Barr, Earl T. and Allamanis, Miltiadis},
doi = {10.1145/3236024.3236051},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Deep Learning,Naturalness,Type Inference},
pages = {152--162},
title = {{Deep learning type inference}},
year = {2018}
}
@article{Foo2018,
abstract = {Software engineering practices have evolved to the point where a developer writing a new application today doesn't start from scratch, but reuses a number of open source libraries and components. These third-party libraries evolve independently of the applications in which they are used, and may not maintain stable interfaces as bugs and vulnerabilities in them are fixed. This in turn causes API incompatibilities in downstream applications which must be manually resolved. Oversight here may manifest in many ways, from test failures to crashes at runtime. To address this problem , we present a static analysis for automatically and efficiently checking if a library upgrade introduces an API incompatibility. Our analysis does not rely on reported version information from library developers, and instead computes the actual differences between methods in libraries across different versions. The analysis is scalable, enabling real-time diff queries involving arbitrary pairs of library versions. It supports a vulnerability remediation product which suggests library upgrades automatically and is lightweight enough to be part of a continuous integration/delivery (CI/CD) pipeline. To evaluate the effectiveness of our approach, we determine semantic versioning adherence of a corpus of open source libraries taken from Maven Central, PyPI, and RubyGems. We find that on average, 26{\%} of library versions are in violation of semantic versioning. We also analyze a collection of popular open source projects from GitHub to determine if we can automatically update libraries in them without causing API incompatibilities. Our results indicate that we can suggest upgrades automatically for 10{\%} of the libraries.},
author = {Foo, Darius and Chua, Hendy and Yeo, Jason and Ang, Ming Yi and Sharma, Asankhaya},
doi = {10.1145/3236024.3275535},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {api diffs,automated remediation,call graphs,library upgrades,semantic versioning},
pages = {791--796},
title = {{Efficient static checking of library updates}},
year = {2018}
}
@article{Hellendoorn2017,
abstract = {Current statistical language modeling techniques, including deep-learning based models, have proven to be quite effective for source code. We argue here that the special properties of source code can be exploited for further improvements. In this work, we enhance established language modeling approaches to handle the special challenges of modeling source code, such as: frequent changes, larger, changing vocabularies, deeply nested scopes, etc. We present a fast, nested language modeling toolkit specifically designed for software, with the ability to add {\&} remove text, and mix {\&} swap out many models. Specifically, we improve upon prior cache-modeling work and present a model with a much more expansive, multi-level notion of locality that we show to be well-suited for modeling software. We present results on varying corpora in comparison with traditional N-gram, as well as RNN, and LSTM deep-learning language models, and release all our source code for public use. Our evaluations suggest that carefully adapting N-gram models for source code can yield performance that surpasses even RNN and LSTM based deep-learning models.},
author = {Hellendoorn, Vincent J. and Devanbu, Premkumar},
doi = {10.1145/3106237.3106290},
isbn = {9781450351058},
journal = {Proceedings of the ACM SIGSOFT Symposium on the Foundations of Software Engineering},
keywords = {Language models,Naturalness,Software tools},
pages = {763--773},
title = {{Are deep neural networks the best choice for modeling source code?}},
volume = {Part F1301},
year = {2017}
}
@article{VanTonder2018,
abstract = {Static analysis tools have demonstrated effectiveness at finding bugs in real world code. Such tools are increasingly widely adopted to improve software quality in practice. Automated Program Repair (APR) has the potential to further cut down on the cost of improving software quality. However, there is a disconnect between these effective bug-finding tools and APR. Recent advances in APR rely on test cases, making them inapplicable to newly discovered bugs or bugs difficult to test for deterministically (like memory leaks). Additionally, the quality of patches generated to satisfy a test suite is a key challenge. We address these challenges by adapting advances in practical static analysis and verification techniques to enable a new technique that finds and then accurately fixes real bugs without test cases. We present a new automated program repair technique using Separation Logic. At a high-level, our technique reasons over semantic effects of existing program fragments to fix faults related to general pointer safety properties: resource leaks, memory leaks, and null dereferences. The procedure automatically translates identified fragments into source-level patches, and verifies patch correctness with respect to reported faults. In this work we conduct the largest study of automatically fixing undiscovered bugs in real-world code to date. We demonstrate our approach by correctly fixing 55 bugs, including 11 previously undiscovered bugs, in 11 real-world projects.},
author = {van Tonder, Rijnard and {Le Goues}, Claire},
doi = {10.1145/3180155.3180250},
isbn = {9781450356381},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {automated program repair,separation logic},
pages = {151--162},
title = {{Static automated program repair for heap properties}},
year = {2018}
}
@article{Shi2018,
abstract = {Test-suite reduction (TSR) speeds up regression testing by removing redundant tests from the test suite, thus running fewer tests in the future builds. To decide whether to use TSR or not, a developer needs some way to predict how well the reduced test suite will detect real faults in the future compared to the original test suite. Prior research evaluated the cost of TSR using only program versions with seeded faults, but such evaluations do not explicitly predict the effectiveness of the reduced test suite in future builds. We perform the first extensive study of TSR using real test failures in (failed) builds that occurred for real code changes. We analyze 1478 failed builds from 32 GitHub projects that run their tests on Travis. Each failed build can have multiple faults, so we propose a family of mappings from test failures to faults. We use these mappings to compute Failed-Build Detection Loss (FBDL), the percentage of failed builds where the reduced test suite misses to detect all the faults detected by the original test suite. We find that FBDL can be up to 52.2{\%}, which is higher than suggested by traditional TSR metrics. Moreover, traditional TSR metrics are not good predictors of FBDL, making it difficult for developers to decide whether to use reduced test suites.},
author = {Shi, August and Gyori, Alex and Mahmood, Suleman and Zhao, Peiyuan and Marinov, Darko},
doi = {10.1145/3213846.3213875},
isbn = {9781450356992},
journal = {ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis},
keywords = {Continuous integration,Regression testing,Test-suite reduction},
number = {1},
pages = {84--94},
title = {{Evaluating test-suite reduction in real software evolution}},
year = {2018}
}
@article{Saha2018,
abstract = {We present Bugs.jar, a large-scale dataset for research in automated debugging, patching, and testing of Java programs. Bugs.jar is comprised of 1,158 bugs and patches, drawn from 8 large, popular open-source Java projects, spanning 8 diverse and prominent application categories. It is an order of magnitude larger than Defects4J, the only other dataset in its class. We discuss the methodology used for constructing Bugs.jar, the representation of the dataset, several use-cases, and an illustration of three of the use-cases through the application of 3 specific tools on Bugs.jar, namely our own tool, Elixir, and two third-party tools, Ekstazi and JaCoCo.},
author = {Saha, Ripon K. and Lyu, Yingjun and Lam, Wing and Yoshida, Hiroaki and Prasad, Mukul R.},
doi = {10.1145/3196398.3196473},
isbn = {9781450357166},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {Java programs,large-scale dataset,reproducible bugs},
pages = {10--13},
title = {{Bugs.jar: A large-scale, diverse dataset of real-world Java bugs}},
year = {2018}
}
@article{Gyimesi2019,
abstract = {JavaScript is a popular programming language that is also error-prone due to its asynchronous, dynamic, and loosely-typed nature. In recent years, numerous techniques have been proposed for analyzing and testing JavaScript applications. However, our survey of the literature in this area revealed that the proposed techniques are often evaluated on different datasets of programs and bugs. The lack of a commonly used benchmark limits the ability to perform fair and unbiased comparisons for assessing the efficacy of new techniques. To fill this gap, we propose BUGSJS, a benchmark of 453 real, manually validated JavaScript bugs from 10 popular JavaScript server-side programs, comprising 444k LOC in total. Each bug is accompanied by its bug report, the test cases that detect it, as well as the patch that fixes it. BUGSJS features a rich interface for accessing the faulty and fixed versions of the programs and executing the corresponding test cases, which facilitates conducting highly-reproducible empirical studies and comparisons of JavaScript analysis and testing tools.},
author = {Gyimesi, P{\'{e}}ter and Vancsics, B{\'{e}}la and Stocco, Andrea and Mazinanian, Davood and Besz{\'{e}}des, {\'{A}}rp{\'{a}}d and Ferenc, Rudolf and Mesbah, Ali},
doi = {10.1109/ICST.2019.00019},
isbn = {9781728117355},
journal = {Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation, ICST 2019},
keywords = {Benchmark,Bug database,BugsJS,JavaScript,Literature survey,Real bugs,Reproducibility},
pages = {90--101},
title = {{BugsJS: A Benchmark of JavaScript Bugs}},
year = {2019}
}
@article{Fan2018,
author = {Fan, Yuanrui and Xia, Xin and Lo, David and Hassan, Ahmed E.},
doi = {10.1109/TSE.2018.2864217},
issn = {19393520},
journal = {IEEE Transactions on Software Engineering},
keywords = {Bug Report,Feature Generation,Machine Learning},
title = {{Chaff from the Wheat: Characterizing and Determining Valid Bug Reports}},
year = {2018}
}
@article{Licker2019,
abstract = {Automated build systems are routinely used by software engineers to minimize the number of objects that need to be recompiled after incremental changes to the source files of a project. In order to achieve efficient and correct builds, developers must provide the build tools with dependency information between the files and modules of a project, usually expressed in a macro language specific to each build tool. Most build systems offer good support for well-known languages and compilers, but as projects grow larger, engineers tend to include source files generated using custom tools. In order to guarantee correctness, the authors of these tools are responsible for enumerating all the files whose contents an output depends on. Unfortunately, this is a tedious process and not all dependencies are captured in practice, which leads to incorrect builds. We automatically uncover such missing dependencies through a novel method that we call build fuzzing. The correctness of build definitions is verified by modifying files in a project, triggering incremental builds and comparing the set of changed files to the set of expected changes. These sets are determined using a dependency graph inferred by tracing the system calls executed during a clean build. We evaluate our method by exhaustively testing build rules of open-source projects, uncovering issues leading to race conditions and faulty builds in 30 of them. We provide a discussion of the bugs we detect, identifying anti-patterns in the use of the macro languages. We fix some of the issues in projects where the features of build systems allow a clean solution.},
author = {Licker, Nandor and Rice, Andrew},
doi = {10.1109/icse.2019.00125},
journal = {Proceedings - International Conference on Software Engineering},
pages = {1234--1244},
title = {{Detecting Incorrect Build Rules}},
year = {2019}
}
@article{Tomassi2019,
abstract = {Fault-detection, localization, and repair methods are vital to software quality; but it is difficult to evaluate their generality, applicability, and current effectiveness. Large, diverse, realistic datasets of durably-reproducible faults and fixes are vital to good experimental evaluation of approaches to software quality, but they are difficult and expensive to assemble and keep current. Modern continuous-integration (CI) approaches, like Travis-CI, which are widely used, fully configurable, and executed within custom-built containers, promise a path toward much larger defect datasets. If we can identify and archive failing and subsequent passing runs, the containers will provide a substantial assurance of durable future reproducibility of build and test. Several obstacles, however, must be overcome to make this a practical reality. We describe BugSwarm, a toolset that navigates these obstacles to enable the creation of a scalable, diverse, realistic, continuously growing set of durably reproducible failing and passing versions of real-world, open-source systems. The BugSwarm toolkit has already gathered 3,091 fail-pass pairs, in Java and Python, all packaged within fully reproducible containers. Furthermore, the toolkit can be run periodically to detect fail-pass activities, thus growing the dataset continually.},
author = {Tomassi, David A. and Dmeiri, Naji and Wang, Yichen and Bhowmick, Antara and Liu, Yen-Chuan and Devanbu, Premkumar T. and Vasilescu, Bogdan and Rubio-Gonz{\'{a}}lez, Cindy},
doi = {10.1109/icse.2019.00048},
journal = {Proceedings - International Conference on Software Engineering},
pages = {339--349},
title = {{BugSwarm: Mining and Continuously Growing a Dataset of Reproducible Failures and Fixes}},
year = {2019}
}
@article{Mazi2007,
abstract = {Paxos is a simple protocol that a group of machines in a distributed system can use to agree on a value proposed by a member of the group. If it terminates, the protocol reaches consensus even if the network was unreliable and multiple machines simultaneously tried to propose different values.},
author = {Mazi{\`{e}}res, David},
journal = {Unpublished manuscript},
title = {{Paxos Made Practical}},
year = {2007}
}
@article{Klees2018,
abstract = {Fuzz testing has enjoyed great success at discovering security critical bugs in real software. Recently, researchers have devoted significant effort to devising new fuzzing techniques, strategies, and algorithms. Such new ideas are primarily evaluated experimentally so an important question is: What experimental setup is needed to produce trustworthy results? We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We conclude with some guidelines that we hope will help improve experimental evaluations of fuzz testing algorithms, making reported results more robust.},
archivePrefix = {arXiv},
arxivId = {arXiv:1808.09700v2},
author = {Klees, George and Ruef, Andrew and Cooper, Benji and Wei, Shiyi and Hicks, Michael},
doi = {10.1145/3243734.3243804},
eprint = {arXiv:1808.09700v2},
isbn = {9781450356930},
issn = {15437221},
journal = {Proceedings of the ACM Conference on Computer and Communications Security},
keywords = {Evaluation,Fuzzing,Security},
pages = {2123--2138},
title = {{Evaluating fuzz testing}},
year = {2018}
}
@article{Basios2018,
abstract = {Data structure selection and tuning is laborious but can vastly improve an application's performance and memory footprint. Some data structures share a common interface and enjoy multiple implementations. We call them Darwinian Data Structures (DDS), since we can subject their implementations to survival of the fittest. We introduce ARTEMIS, a multi-objective, cloud-based search-based optimisation framework that automatically finds optimal, tuned DDS modulo a test suite, then changes an application to use that DDS. ARTEMIS achieves substantial performance improvements for every project in 5 Java projects from the DaCapo benchmark, 8 popular projects and 30 uniformly sampled projects from GitHub. For execution time, CPU usage, and memory consumption, ARTEMIS finds at least one solution that improves all measures for 86{\%} (37/43) of the projects. The median improvement across the best solutions is 4.8{\%}, 10.1{\%}, 5.1{\%} for runtime, memory and CPU usage. These aggregate results understate ARTEMIS's potential impact. Some of the benchmarks it improves are libraries or utility functions. Two examples are gson, a ubiquitous Java serialization framework, and xalan, Apache's XML transformation tool. ARTEMIS improves gson by 16.5{\%}, 1{\%} and 2.2{\%} for memory, runtime, and CPU; ARTEMIS improves xalan's memory consumption by 23.5{\%}. Every client of these projects will benefit from these performance improvements.},
archivePrefix = {arXiv},
arxivId = {arXiv:1706.03232v3},
author = {Basios, Michail and Li, Lingbo and Wu, Fan and Kanthan, Leslie and Barr, Earl T.},
doi = {10.1145/3236024.3236043},
eprint = {arXiv:1706.03232v3},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Data Structure Optimisation,Genetic Improvement,Search-based Software Engineering,Software Analysis and Optimisation},
pages = {118--128},
title = {{Darwinian data structure selection}},
year = {2018}
}
@article{Henkel2018,
abstract = {With the rise of machine learning, there is a great deal of interest in treating programs as data to be fed to learning algorithms. However, programs do not start off in a form that is immediately amenable to most off-the-shelf learning techniques. Instead, it is necessary to transform the program to a suitable representation before a learning technique can be applied. In this paper, we use abstractions of traces obtained from symbolic execution of a program as a representation for learning word embeddings. We trained a variety of word embeddings under hundreds of parameterizations, and evaluated each learned embedding on a suite of different tasks. In our evaluation, we obtain 93{\%} top-1 accuracy on a benchmark consisting of over 19,000 API-usage analogies extracted from the Linux kernel. In addition, we show that embeddings learned from (mainly) semantic abstractions provide nearly triple the accuracy of those learned from (mainly) syntactic abstractions.},
archivePrefix = {arXiv},
arxivId = {arXiv:1803.06686v2},
author = {Henkel, Jordan and Lahiri, Shuvendu K. and Liblit, Ben and Reps, Thomas},
doi = {10.1145/3236024.3236085},
eprint = {arXiv:1803.06686v2},
isbn = {9781450355735},
journal = {ESEC/FSE 2018 - Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering},
keywords = {Analogical Reasoning,Linux,Program Understanding,Word Embeddings},
pages = {163--174},
title = {{Code vectors: Understanding programs through embedded abstracted symbolic traces}},
year = {2018}
}
@article{Arpaci2014,
author = {Arpaci-Dusseau, Remzi H. and Arpaci-Dusseau, Andrea C.},
journal = {Operating Systems: Three Easy Pieces},
title = {{Virtual Machine Monitors}},
url = {http://pages.cs.wisc.edu/{~}remzi/OSTEP/vmm-intro.pdf},
year = {2014}
}
@article{Fikes2010,
author = {Fikes, Andrew},
journal = {Talk at the Google Faculty Summit},
title = {{Google Storage architecture and challenges}},
year = {2010}
}
@article{Liskov2010,
abstract = {The paper provides an historical perspective about two replication protocols, each of which was intended for practical deployment. The first is Viewstamped Replication, which was developed in the 1980's and allows a group of replicas to continue to provide service in spite of a certain number of crashes among them. The second is an extension of Viewstamped Replication that allows the group to survive Byzantine (arbitrary) failures. Both protocols allow users to execute general operations (thus they provide state machine replication); both were developed in the Programming Methodology group at MIT.},
author = {Liskov, Barbara},
doi = {10.1007/978-3-642-11294-2_7},
isbn = {3642112935},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {121--149},
title = {{From viewstamped replication to Byzantine fault tolerance}},
volume = {5959 LNCS},
year = {2010}
}
@article{Cummins2018,
abstract = {Random program generation-fuzzing-is an effective technique for discovering bugs in compilers but successful fuzzers require extensive development effort for every language supported by the compiler, and often leave parts of the language space untested. We introduce DeepSmith, a novel machine learning approach to accelerating compiler validation through the inference of generative models for compiler inputs. Our approach infers a learned model of the structure of real world code based on a large corpus of open source code. Then, it uses the model to automatically generate tens of thousands of realistic programs. Finally, we apply established differential testing methodologies on them to expose bugs in compilers. We apply our approach to the OpenCL programming language, automatically exposing bugs with little effort on our side. In 1,000 hours of automated testing of commercial and open source compilers, we discover bugs in all of them, submitting 67 bug reports. Our test cases are on average two orders of magnitude smaller than the state-of-the-art, require 3.03× less time to generate and evaluate, and expose bugs which the state-of-the-art cannot. Our random program generator, comprising only 500 lines of code, took 12 hours to train for OpenCL versus the state-of-the-art taking 9 man months to port from a generator for C and 50,000 lines of code. With 18 lines of code we extended our program generator to a second language, uncovering crashes in Solidity compilers in 12 hours of automated testing.},
author = {Cummins, Chris and Petoumenos, Pavlos and Murray, Alastair and Leather, Hugh},
doi = {10.1145/3213846.3213848},
isbn = {9781450356992},
journal = {ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis},
keywords = {Compiler Fuzzing,Deep Learning,Differential Testing},
pages = {95--105},
title = {{Compiler fuzzing through deep learning}},
year = {2018}
}
@article{Qiu2018,
abstract = {Numerous static analysis techniques have recently been proposed for identifying information flows in mobile applications. These techniques are compared to each other, usually on a set of syntactic benchmarks. Yet, configurations used for such comparisons are rarely described. Our experience shows that tools are often compared under different setup, rendering the comparisons irreproducible and largely inaccurate. In this paper, we provide a large, controlled, and independent comparison of the three most prominent static analysis tools: FlowDroid combined with IccTA, AmAndroid, and DroidSafe. We evaluate all tools using common configuration setup and on the same set of benchmark applications. We compare the results of our analysis to the results reported in previous studies, identify main reasons for inaccuracy in existing tools, and provide suggestions for future research.},
author = {Qiu, Lina and Wang, Yingying and Rubin, Julia},
doi = {10.1145/3213846.3213873},
isbn = {9781450356992},
journal = {ISSTA 2018 - Proceedings of the 27th ACM SIGSOFT International Symposium on Software Testing and Analysis},
keywords = {Empirical studies,Information flow analysis,Mobile,Static analysis},
pages = {176--186},
title = {{Analyzing the analyzers: FlowDroid/IccTA, AmanDroid, and DroidSafe}},
year = {2018}
}
@article{Habib2018,
abstract = {Static bug detectors are becoming increasingly popular and are widely used by professional software developers. While most work on bug detectors focuses on whether they find bugs at all, and on how many false positives they report in addition to legitimate warnings, the inverse question is often neglected: How many of all real-world bugs do static bug detectors find? This paper addresses this question by studying the results of applying three widely used static bug detectors to an extended version of the Defects4J dataset that consists of 15 Java projects with 594 known bugs. To decide which of these bugs the tools detect, we use a novel methodology that combines an automatic analysis of warnings and bugs with a manual validation of each candidate of a detected bug. The results of the study show that: (i) static bug detectors find a non-negligible amount of all bugs, (ii) different tools are mostly complementary to each other, and (iii) current bug detectors miss the large majority of the studied bugs. A detailed analysis of bugs missed by the static detectors shows that some bugs could have been found by variants of the existing detectors, while others are domain-specific problems that do not match any existing bug pattern. These findings help potential users of such tools to assess their utility, motivate and outline directions for future work on static bug detection, and provide a basis for future comparisons of static bug detection with other bug finding techniques, such as manual and automated testing.},
author = {Habib, Andrew and Pradel, Michael},
doi = {10.1145/3238147.3238213},
isbn = {9781450359375},
journal = {ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering},
keywords = {Bug finding,Defects4J,Static analysis,Static bug checkers},
pages = {317--328},
title = {{How many of all bugs do we find? A study of static bug detectors}},
year = {2018}
}
@article{Lamport2001,
abstract = {The Paxos algorithm, when presented in plain English, is very simple.},
author = {Lamport, Leslie},
doi = {10.1145/568425.568433},
issn = {01635700},
journal = {ACM SIGACT News},
number = {4},
pages = {51--58},
title = {{Paxos Made Simple}},
volume = {32},
year = {2001}
}
@article{Luan2018,
abstract = {Programmers often write code that has similarity to existing code written somewhere. A tool that could help programmers to search such similar code would be immensely useful. Such a tool could help programmers to extend partially written code snippets to completely implement necessary functionality, help to discover extensions to the partial code which are commonly included by other programmers, help to cross-check against similar code written by other programmers, or help to add extra code which would fix common mistakes and errors. We propose Aroma, a tool and technique for code recommendation via structural code search. Aroma indexes a huge code corpus including thousands of open-source projects, takes a partial code snippet as input, searches the corpus for method bodies containing the partial code snippet, and clusters and intersects the results of the search to recommend a small set of succinct code snippets which both contain the query snippet and appear as part of several methods in the corpus. We evaluated Aroma on 2000 randomly selected queries created from the corpus, as well as 64 queries derived from code snippets obtained from Stack Overflow, a popular website for discussing code. We implemented Aroma for 4 different languages, and developed an IDE plugin for Aroma. Furthermore, we conducted a study where we asked 12 programmers to complete programming tasks using Aroma, and collected their feedback. Our results indicate that Aroma is capable of retrieving and recommending relevant code snippets efficiently.},
archivePrefix = {arXiv},
arxivId = {1812.01158},
author = {Luan, Sifei and Yang, Di and Barnaby, Celeste and Sen, Koushik and Chandra, Satish},
eprint = {1812.01158},
title = {{Aroma: Code Recommendation via Structural Code Search}},
url = {http://arxiv.org/abs/1812.01158},
year = {2018}
}
@article{Dean2008,
abstract = {MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.},
author = {Dean, Jeffrey and Ghemawat, Sanjay},
doi = {10.1145/1327452.1327492},
issn = {00010782},
journal = {Communications of the ACM},
number = {1},
pages = {107--113},
title = {{MapReduce: Simplified data processing on large clusters}},
volume = {51},
year = {2008}
}
@book{Bernstein1987,
author = {Bernstein, Philip A. and Hadzilacos, Vassos and Goodman, Nathan},
publisher = {Addison-Wesley},
title = {{Concurrency Control and Recovery in Database Systems}},
year = {1987}
}
@article{Rosenblum1991,
abstract = {This paper presents a new technique for disk storage management called a log-structured file system. A log-structured file system writes all modifications to disk sequentially in a log-like structure, thereby speeding up both file writing and crash recovery. The log is the only structure on disk; it contains indexing information so that files can be read back from the log efficiently. In order to maintain large free areas on disk for fast writing, we divide the log into segments and use a segment cleaner to compress the live information from heavily fragmented segments. We present a series of simulations that demonstrate the efficiency of a simple cleaning policy based on cost and benefit. We have implemented a prototype log-structured file system called Sprite LFS; it outperforms current Unix file systems by an order of magnitude for small-file writes while matching or exceeding Unix performance for reads and large writes. Even when the overhead for cleaning is included, Sprite LFS can use 70{\%} of the disk bandwidth for writing, whereas Unix file systems typically can use only 5-10{\%}.},
author = {Rosenblum, Mendel and Ousterhout, John K.},
doi = {10.1145/121132.121137},
isbn = {0897914473},
journal = {Proceedings of the 13th ACM Symposium on Operating Systems Principles, SOSP 1991},
keywords = {disk storage management},
number = {1},
pages = {1--15},
title = {{The design and implementation of a log-structured file system}},
volume = {10},
year = {1991}
}
@article{Meyer1988,
abstract = {UNIX is a multi-processing and multi-user timesharing operating system. It has become quite popular since its inception in 1969, running on machines of varying processing power from microprocessors to mainframes across different manufacturers' product lines. UNIX provides a great range of programs and services that have made the UNIX system environment popular among users. This environment contains the command interpreter shell, text processing packages, the source code control, a powerful mailing system and many more. The importance of UNIX for manufacturers lies in its philosophy of simplicity and consistency. Since UNIX is written almost totally in a high-level programming language it is very easy to port the system to all kinds of different machines. As a result, by the beginning of 1984, there were already about 100 000 UNIX or UNIX-like system installations around the whole world, and the number is still increasing.},
author = {Meyer, Veronika and Meyer, Walter},
doi = {10.1016/0010-4655(88)90115-4},
issn = {00104655},
journal = {Computer Physics Communications},
keywords = {command language,file system,operating system,pdp-11,time-sharing},
number = {1-2},
pages = {51--57},
title = {{The UNIX{\textregistered} timesharing operating system}},
volume = {50},
year = {1988}
}
@article{Barham2003,
abstract = {Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100{\%} binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service. This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort. Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead, at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests.},
author = {Barham, Paul and Dragovic, Boris and Fraser, Keir and Hand, Steven and Harris, Tim and Ho, Alex and Neugebauer, Rolf and Pratt, Ian and Warfield, Andrew},
isbn = {1581137575},
journal = {19th ACM Symposium on Operating Systems Principles},
keywords = {hypervisors,paravirtualization,virtual machine monitors},
pages = {164--177},
title = {{Xen and the Art of Virtualization}},
year = {2003}
}
@article{Kamp2019,
abstract = {We propose an efficient protocol for decentralized training of deep neural networks from distributed data sources. The proposed protocol allows to handle different phases of model training equally well and to quickly adapt to concept drifts. This leads to a reduction of communication by an order of magnitude compared to periodically communicating state-of-the-art approaches. Moreover, we derive a communication bound that scales well with the hardness of the serialized learning problem. The reduction in communication comes at almost no cost, as the predictive performance remains virtually unchanged. Indeed, the proposed protocol retains loss bounds of periodically averaging schemes. An extensive empirical evaluation validates major improvement of the trade-off between model performance and communication which could be beneficial for numerous decentralized learning applications, such as autonomous driving, or voice recognition and image classification on mobile phones.},
author = {Kamp, Michael and Adilova, Linara and Sicking, Joachim and H{\"{u}}ger, Fabian and Schlicht, Peter and Wirtz, Tim and Wrobel, Stefan},
doi = {10.1007/978-3-030-10925-7_24},
isbn = {9783030109240},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {393--409},
title = {{Efficient decentralized deep learning by dynamic model averaging}},
volume = {11051 LNAI},
year = {2019}
}
@article{Weng2018,
abstract = {Deep learning technology has achieved the high-accuracy of state-of-the-art algorithms in a variety of AI tasks. Its popularity has drawn security researchers' attention to the topic of privacy-preserving deep learning, in which neither training data nor model is expected to be exposed. Recently, federated learning becomes promising for the development of deep learning where multi-parties upload local gradients and a server updates parameters with collected gradients, the privacy issues of which have been discussed widely. In this paper, we explore additional security issues in this case, not merely the privacy. First, we consider that the general assumption of honest-but-curious server is problematic, and the malicious server may break privacy. Second, the malicious server or participants may damage the correctness of training, such as incorrect gradient collecting or parameter updating. Third, we discover that federated learning lacks an effective incentive mechanism for distrustful participants due to privacy and financial considerations. To address the aforementioned issues, we introduce a value-driven incentive mechanism based on Blockchain. Adapted to this incentive setting, we migrate the malicious threats from server and participants, and guarantee the privacy and auditability. Thus, we propose to present DeepChain which gives mistrustful parties incentives to participate in privacy-preserving learning, share gradients and update parameters correctly, and eventually accomplish iterative learning with a win-win result. At last, we give an implementation prototype by integrating deep learning module with a Blockchain development platform (Corda V3.0). We evaluate it in terms of encryption performance and training accuracy, which demonstrates the feasibility of DeepChain.},
author = {Weng, Jia-Si and Weng, Jian and Li, Ming and Zhang, Yue and Luo, Weiqi},
journal = {IACR Cryptology ePrint Archive},
number = {8},
pages = {679},
title = {{DeepChain: Auditable and Privacy-Preserving Deep Learning with Blockchain-based Incentive}},
url = {https://eprint.iacr.org/2018/679.pdf},
volume = {2018},
year = {2018}
}
@misc{DecentralizedTrainingActors,
abstract = {Training a machine learning model with terabytes to petabytes of data using very deep neural networks doesn't scale well within a single machine. A significant amount of work in recent years has gone into distributing the training of such neural networks across a cluster of machines, by partitioning on both the data and the model itself. The most well-established form of distributed training uses a centralized parameter server to manage the shared state of neural network weights used across all partitions of the data, but this introduces a bottleneck and single-point of failure during training. In this paper, we explore a more experimental form of decentralized training that removes this bottleneck. Finally, we show that by taking advantage of sparse updates to the shared parameter matrix, decentralized training can be tuned to make tradeoffs between training speed and model accuracy.},
title = {{Decentralized and Distributed Machine Learning Model Training with Actors}},
url = {https://blog.skymind.ai/distributed-deep-learning-part-1-an}
}
@article{Pham2016,
abstract = {The problem of anomaly detection has been studied for a long time. In short, anomalies are abnormal or unlikely things. In financial networks, thieves and illegal activities are often anomalous in nature. Members of a network want to detect anomalies as soon as possible to prevent them from harming the network's community and integrity. Many Machine Learning techniques have been proposed to deal with this problem; some results appear to be quite promising but there is no obvious superior method. In this paper, we consider anomaly detection particular to the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use three unsupervised learning methods including k-means clustering, Mahalanobis distance, and Unsupervised Support Vector Machine (SVM) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes.},
archivePrefix = {arXiv},
arxivId = {1611.03941},
author = {Pham, Thai and Lee, Steven},
eprint = {1611.03941},
title = {{Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods}},
url = {http://arxiv.org/abs/1611.03941},
year = {2016}
}
@article{Verma2015,
abstract = {Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.},
author = {Verma, Abhishek and Pedrosa, Luis and Korupolu, Madhukar and Oppenheimer, David and Tune, Eric and Wilkes, John},
doi = {10.1145/2741948.2741964},
isbn = {9781450332385},
journal = {Proceedings of the 10th European Conference on Computer Systems, EuroSys 2015},
title = {{Large-scale cluster management at Google with Borg}},
year = {2015}
}
@article{Jacobson1988,
abstract = {In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was yes. Since that time, we have put seven new algorithms into the 4BSD TCP: (i) round-trip-time variance estimation; (ii) exponential retransmit timer backoff; (iii) slow-start; (iv) more aggressive receiver ack policy; (v) dynamic window sizing on congestion; (vi) Karn's clamped retransmit backoff; (vii) fast retransmit. Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet. This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC. Algorithms (i)-(v) spring from one observation: the flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': a new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy? There are only three ways for packet conservation to fail: the connection doesn't get to equilibrium, or a sender injects a new packet before an old packet has exited, or the equilibrium can't be reached because of resource limits along the path. In the following sections, we treat each of these in turn.},
author = {Jacobson, Van},
doi = {10.1145/52324.52356},
isbn = {0897912799},
journal = {Symposium Proceedings on Communications Architectures and Protocols, SIGCOMM 1988},
number = {60},
pages = {314--329},
title = {{Congestion avoidance and control}},
year = {1988}
}
@article{Clark1988,
author = {Clark, David D},
pages = {1--10},
title = {{The Design Philosophy of the DARPA Internet Protocols}},
year = {1988}
}
@article{Basiri2019,
abstract = {Distributed systems often face transient errors and localized component degradation and failure. Verifying that the overall system remains healthy in the face of such failures is challenging. At Netflix, we have built a platform for automatically generating and executing chaos experiments, which check how well the production system can handle component failures and slowdowns. This paper describes the platform and our experiences operating it.},
archivePrefix = {arXiv},
arxivId = {arXiv:1905.04648v1},
author = {Basiri, Ali and Hochstein, Lorin and Jones, Nora and Tucker, Haley},
doi = {10.1109/icse-seip.2019.00012},
eprint = {arXiv:1905.04648v1},
pages = {31--40},
title = {{Automating Chaos Experiments in Production}},
year = {2019}
}
@article{Chang2006,
abstract = {Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.},
author = {Chang, Fay and Dean, Jeffrey and Ghemawat, Sanjay and Hsieh, Wilson C. and Wallach, Deborah A. and Burrows, Mike and Chandra, Tushar and Fikes, Andrew and Gruber, Robert E.},
journal = {Proceedings of the 7th Symposium on Operating Systems Design and Implementation (OSDI '06), November 6-8, Seattle, WA, USA},
keywords = {Seminal work},
pages = {205--218},
title = {{Bigtable: A distributed storage system for structured data}},
year = {2006}
}
@article{Balakrishnan2009,
abstract = {Our goal is to explain how routing between different administrative domains works in the Internet. We discuss how Internet Service Providers (ISPs) exchange routing information, packets, and (above all) money between each other, and how the way in which they buy service from and sell service to each other and their customers influences routing. We discuss the salient features of the Border Gateway Protocol, Version 4 (BGP4, which we will refer to simply as BGP), the current interdomain routing protocol in the Internet. Finally, we discuss a few interesting failures and shortcomings of the routing system. These notes focus only on the essential elements of interdomain routing, often sacrificing detail for clarity and sweeping generality.},
author = {Balakrishnan, Hari},
journal = {Interdomain Internet Routing},
keywords = {BGP},
number = {January},
pages = {1--24},
title = {{Interdomain Internet Routing}},
url = {http://bnrg.eecs.berkeley.edu/{~}randy/Courses/CS268.F09/papers/03{\_}L4-routing.pdf},
year = {2009}
}
@article{Purohit,
abstract = {Generating a synthetic graph that is similar to a given real-world graph is a critical requirement for privacy preservation and benchmarking purposes. Various generative models attempt to generate static graphs similar to real-world graphs. However, generation of temporal graphs is still an open research area. We present a temporal-motif based approach to generate synthetic temporal graph datasets and show results from three real-world use cases. We show that our approach can generate high-fidelity synthetic graphs. We also show that this approach can generate multi-type heterogeneous graphs. We also present a parameterized version of our approach which can generate linear, sub-linear, and super-linear preferential attachment graphs.},
author = {Purohit, Sumit and Holder, Lawrence B and Chin, George},
keywords = {Graph Generative Model,Motifs Distribution,Temporal Graph},
title = {{Temporal Graph Generation Based on a Distribution of Temporal Motifs}},
url = {http://www.mlgworkshop.org/2018/papers/MLG2018{\_}paper{\_}42.pdf},
year = {2018}
}
@article{Lin2019,
abstract = {Bitcoin is a cryptocurrency that features a distributed, decentralized and trustworthy mechanism, which has made Bitcoin a popular global transaction platform. The transaction efficiency among nations and the privacy benefiting from address anonymity of the Bitcoin network have attracted many activities such as payments, investments, gambling, and even money laundering in the past decade. Unfortunately, some criminal behaviors which took advantage of this platform were not identified. This has discouraged many governments to support cryptocurrency. Thus, the capability to identify criminal addresses becomes an important issue in the cryptocurrency network. In this paper, we propose new features in addition to those commonly used in the literature to build a classification model for detecting abnormality of Bitcoin network addresses. These features include various high orders of moments of transaction time (represented by block height) which summarizes the transaction history in an efficient way. The extracted features are trained by supervised machine learning methods on a labeling category data set. The experimental evaluation shows that these features have improved the performance of Bitcoin address classification significantly. We evaluate the results under eight classifiers and achieve the highest Micro-F1/Macro-F1 of 87{\%}/86{\%} with LightGBM.},
archivePrefix = {arXiv},
arxivId = {arXiv:1903.07994v1},
author = {Lin, Yu-Jing and Wu, Po-Wei and Hsu, Cheng-Han and Tu, I-Ping and Liao, Shih-wei},
doi = {10.1109/bloc.2019.8751410},
eprint = {arXiv:1903.07994v1},
isbn = {9781728113289},
pages = {302--310},
title = {{An Evaluation of Bitcoin Address Classification based on Transaction History Summarization}},
year = {2019}
}
@article{Kondor2014,
abstract = {A main focus in economics research is understanding the time series of prices of goods and assets. While statistical models using only the properties of the time series itself have been successful in many aspects, we expect to gain a better understanding of the phenomena involved if we can model the underlying system of interacting agents. In this article, we consider the history of Bitcoin, a novel digital currency system, for which the complete list of transactions is available for analysis. Using this dataset, we reconstruct the transaction network between users and analyze changes in the structure of the subgraph induced by the most active users. Our approach is based on the unsupervised identification of important features of the time variation of the network. Applying the widely used method of Principal Component Analysis to the matrix constructed from snapshots of the network at different times, we are able to show how structural changes in the network accompany significant changes in the exchange price of bitcoins.},
author = {Kondor, D{\'{a}}niel and Csabai, Istv{\'{a}}n and Sz{\"{u}}le, J{\'{a}}nos and P{\'{o}}sfai, M{\'{a}}rton and Vattay, G{\'{a}}bor},
doi = {10.1088/1367-2630/16/12/125003},
issn = {13672630},
journal = {New Journal of Physics},
keywords = {Bitcoin,transaction network,financial network,principal component analysis,temporal network},
publisher = {IOP Publishing},
title = {{Inferring the interplay between network structure and market effects in Bitcoin}},
volume = {16},
year = {2014}
}
@article{Babveyh2009,
author = {Babveyh, Afshin and Ebrahimi, Sadegh},
pages = {1--8},
title = {{Predicting User Performance and Bitcoin Price Using Block Chain Transaction Network}},
year = {2009}
}
@article{Paranjape2017,
abstract = {Networks are a fundamental tool for modeling complex systems in a variety of domains including social and communication networks as well as biology and neuroscience. Small subgraph patterns in networks, called network motifs, are crucial to understanding the structure and function of these systems. However, the role of network motifs in temporal networks, which contain many timestamped links between the nodes, is not yet well understood. Here we develop a notion of a temporal network motif as an elementary unit of temporal networks and provide a general methodology for counting such motifs. We define temporal network motifs as induced subgraphs on sequences of temporal edges, design fast algorithms for counting temporal motifs, and prove their runtime complexity. Our fast algorithms achieve up to 56.5x speedup compared to a baseline method. Furthermore, we use our algorithms to count temporal motifs in a variety of networks. Results show that networks from different domains have significantly different motif counts, whereas networks from the same domain tend to have similar motif counts. We also find that different motifs occur at different time scales, which provides further insights into structure and function of temporal networks.},
author = {Paranjape, Ashwin and Benson, Austin R. and Leskovec, Jure},
doi = {10.1145/3018661.3018731},
isbn = {9781450346757},
journal = {WSDM 2017 - Proceedings of the 10th ACM International Conference on Web Search and Data Mining},
pages = {601--610},
title = {{Motifs in temporal networks}},
year = {2017}
}
@article{Norman,
author = {Norman, Archie},
pages = {1--54},
title = {{Classification of Bitcoin transactions based on supervised machine learning and transaction network metrics}}
}
@article{Fire2019,
abstract = {Trends change rapidly in today's world, prompting this key question: What is the mechanism behind the emergence of new trends? By representing real-world dynamic systems as complex networks, the emergence of new trends can be symbolized by vertices that “shine.” That is, at a specific time interval in a network's life, certain vertices become increasingly connected to other vertices. This process creates new high-degree vertices, i.e., network stars. Thus, to study trends, we must look at how networks evolve over time and determine how the stars behave. In our research, we constructed the largest publicly available network evolution dataset to date, which contains 38,000 real-world networks and 2.5 million graphs. Then, we performed the first precise wide-scale analysis of the evolution of networks with various scales. Three primary observations resulted: (a) links are most prevalent among vertices that join a network at a similar time; (b) the rate that new vertices join a network is a central factor in molding a network's topology; and (c) the emergence of network stars (high-degree vertices) is correlated with fast-growing networks. We applied our learnings to develop a flexible network-generation model based on large-scale, real-world data. This model gives a better understanding of how stars rise and fall within networks, and is applicable to dynamic systems both in nature and society.},
archivePrefix = {arXiv},
arxivId = {arXiv:1706.06690v3},
author = {Fire, Michael and Guestrin, Carlos},
doi = {10.1016/j.ipm.2019.05.002},
eprint = {arXiv:1706.06690v3},
issn = {03064573},
journal = {Information Processing and Management},
keywords = {Big data,Data science,Network datasets,Network dynamics,Network science},
title = {{The rise and fall of network stars: Analyzing 2.5 million graphs to reveal how high-degree vertices emerge over time}},
year = {2019}
}
@article{Kondor2014a,
abstract = {The possibility to analyze everyday monetary transactions is limited by the scarcity of available data, as this kind of information is usually considered highly sensitive. Present econophysics models are usually employed on presumed random networks of interacting agents, and only macroscopic properties (e.g. the resulting wealth distribution) are compared to real-world data. In this paper, we analyze BitCoin, which is a novel digital currency system, where the complete list of transactions is publicly available. Using this dataset, we reconstruct the network of transactions, and extract the time and amount of each payment. We analyze the structure of the transaction network by measuring network characteristics over time, such as the degree distribution, degree correlations and clustering. We find that linear preferential attachment drives the growth of the network. We also study the dynamics taking place on the transaction network, i.e. the flow of money. We measure temporal patterns and the wealth accumulation. Investigating the microscopic statistics of money movement, we find that sublinear preferential attachment governs the evolution of the wealth distribution. We report a scaling relation between the degree and wealth associated to individual nodes.},
archivePrefix = {arXiv},
arxivId = {arXiv:1308.3892v3},
author = {Kondor, D{\'{a}}niel and P{\'{o}}sfai, M{\'{a}}rton and Csabai, Istv{\'{a}}n and Vattay, G{\'{a}}bor},
doi = {10.1371/journal.pone.0086197},
eprint = {arXiv:1308.3892v3},
issn = {19326203},
journal = {PLoS ONE},
number = {2},
pages = {1--9},
title = {{Do the rich get richer? An empirical analysis of the Bitcoin transaction network}},
volume = {9},
year = {2014}
}
@article{Garcia2014,
abstract = {What is the role of social interactions in the creation of price bubbles? Answering this question requires obtaining collective behavioural traces generated by the activity of a large number of actors. Digital currencies offer a unique possibility to measure socio-economic signals from such digital traces. Here, we focus on Bitcoin, the most popular cryptocurrency. Bitcoin has experienced periods of rapid increase in exchange rates (price) followed by sharp decline; we hypothesize that these fluctuations are largely driven by the interplay between different social phenomena. We thus quantify four socio-economic signals about Bitcoin from large datasets: price on online exchanges, volume of word-of-mouth communication in online social media, volume of information search and user base growth. By using vector autoregression, we identify two positive feedback loops that lead to price bubbles in the absence of exogenous stimuli: one driven by word of mouth, and the other by new Bitcoin adopters. We also observe that spikes in information search, presumably linked to external events, precede drastic price declines. Understanding the interplay between the socio-economic signals we measured can lead to applications beyond cryptocurrencies to other phenomena that leave digital footprints, such as online social network usage.},
author = {Garcia, David and Tessone, Claudio J. and Mavrodiev, Pavlin and Perony, Nicolas},
doi = {10.1098/rsif.2014.0623},
issn = {17425662},
journal = {Journal of the Royal Society Interface},
keywords = {Bitcoin,Bubbles,Social interactions,Socio-economic signals},
number = {99},
title = {{The digital traces of bubbles: Feedback cycles between socio-economic signals in the Bitcoin economy}},
volume = {11},
year = {2014}
}
@article{Pham2016a,
abstract = {The problem of anomaly detection has been studied for a long time, and many Network Analysis techniques have been proposed as solutions. Although some results appear to be quite promising, no method is clearly superior to the rest. In this paper, we particularly consider anomaly detection in the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use the laws of power degree and densification and the local outlier factor (LOF) method (which is preceded by the k-means clustering method) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes. We remark that the methods used here can be applied to any type of setting with an inherent graph structure, including, but not limited to, computer networks, telecommunications networks, auction networks, security networks, social networks, Web networks, or any financial networks. We use the Bitcoin transaction network in this paper due to the availability, size, and attractiveness of the data set.},
archivePrefix = {arXiv},
arxivId = {1611.03942},
author = {Pham, Thai and Lee, Steven},
eprint = {1611.03942},
keywords = {anomaly detection,bitcoin,k -means,lof},
title = {{Anomaly Detection in the Bitcoin System - A Network Perspective}},
url = {http://arxiv.org/abs/1611.03942},
year = {2016}
}
@article{Hassani2018,
abstract = {Cryptocurrency has been a trending topic over the past decade, pooling tremendous technological power and attracting investments valued over trillions of dollars on a global scale. The cryptocurrency technology and its network have been endowed with many superior features due to its unique architecture, which also determined its worldwide efficiency, applicability and data intensive characteristics. This paper introduces and summarises the interactions between two significant concepts in the digitalized world, i.e., cryptocurrency and Big Data. Both subjects are at the forefront of technological research, and this paper focuses on their convergence and comprehensively reviews the very recent applications and developments after 2016. Accordingly, we aim to present a systematic review of the interactions between Big Data and cryptocurrency and serve as the one stop reference directory for researchers with regard to identifying research gaps and directing future explorations.},
author = {Hassani, Hossein and Huang, Xu and Silva, Emmanuel},
doi = {10.3390/bdcc2040034},
journal = {Big Data and Cognitive Computing},
keywords = {big data,bitcoin,blockchain,cryptocurrency,review},
number = {4},
pages = {34},
title = {{Big-Crypto: Big Data, Blockchain and Cryptocurrency}},
volume = {2},
year = {2018}
}
@article{Sullivan2018,
author = {Sullivan, Danielle and Tran, Tuan and Gu, Huaping},
title = {{Fraud Detection in Signed Bitcoin Trading Platform Networks}},
year = {2018}
}
@article{Leskovec2014,
author = {Leskovec, Jure and Rajaraman, Anand and Ullman, Jeffrey David},
doi = {10.1017/cbo9781139924801.011},
journal = {Mining of Massive Datasets},
pages = {325--383},
title = {{Mining Social-Network Graphs}},
year = {2014}
}
@article{Lischke2016,
abstract = {In this explorative study, we examine the economy and transaction network of the decentralized digital currency Bitcoin during the first four years of its existence. The objective is to develop insights into the evolution of the Bitcoin economy during this period. For this, we establish and analyze a novel integrated dataset that enriches data from the Bitcoin blockchain with off-network data such as business categories and geo-locations. Our analyses reveal the major Bitcoin businesses and markets. Our results also give insights on the business distribution by countries and how businesses evolve over time. We also show that there is a gambling network that features many very small transactions. Furthermore, regional differences in the adoption and business distribution could be found. In the network analysis, the small world phenomenon is investigated and confirmed for several subgraphs of the Bitcoin network.},
author = {Lischke, Matthias and Fabian, Benjamin},
doi = {10.3390/fi8010007},
issn = {19995903},
journal = {Future Internet},
keywords = {Bitcoin,Blockchain,Complex networks,Cryptocurrencies,Electronic payment,Graph analysis,Network analysis},
number = {1},
title = {{Analyzing the Bitcoin Network: The First Four Years}},
volume = {8},
year = {2016}
}
@article{Hirshman2013,
author = {Hirshman, Jason and Huang, Yifei and Macke, Stephen},
journal = {Technical Report, Stanford University},
title = {{Unsupervised Approaches to Detecting Anomalous Behavior in the Bitcoin Transaction Network}},
year = {2013}
}
@article{Acar,
author = {Acar, Umut A and Blelloch, Guy E and Blume, Matthias},
isbn = {1595933204},
keywords = {computational geometry,dependence graphs,dynamic algorithms,memoization,self-adjusting computation},
title = {{An Experimental Analysis of Self-Adjusting Computation}}
}
@article{Haslhofer2016,
abstract = {Bitcoin is a rising digital currency and exemplifies the growing need for systematically gathering and analyzing public transaction data sets such as the blockchain. However, the blockchain in its raw form is just a large ledger listing transfers of currency units between alphanumeric character strings, without revealing contextually relevant real-world information. In this demo, we present GraphSense, which is a solution that applies a graph-centric perspective on digital currency transactions. It allows users to explore transactions and follow the money flow, facilitates analytics by semantically enriching the transaction graph, supports path and graph pattern search, and guides analysts to anomalous data points. To deal with the growing volume and velocity of transaction data, we implemented our solution on a horizontally scalable data processing and analytics infrastructure. Given the ongoing digital transformation in financial services and technologies, we believe that our approach contributes to development of analytics solutions for digital currency ecosystems, which is relevant in fields such as financial analytics, law enforcement, or scientific research.},
author = {Haslhofer, Bernhard and Karl, Roman and Filtz, Erwin},
issn = {16130073},
journal = {CEUR Workshop Proceedings},
keywords = {Anomaly detection,Bitcoin,Graph processing},
pages = {1--4},
title = {{O Bitcoin where art thou? Insight into large-scale transaction graphs}},
volume = {1695},
year = {2016}
}
@article{Chen2012,
abstract = {Application data often changes slowly or incrementally over time. Since incremental changes to input often result in only small changes in output, it is often feasible to respond to such changes asymptotically more efficiently than by re-running the whole computation. Traditionally, realizing such asymptotic efficiency improvements requires designing problem-specific algorithms known as dynamic or incremental algorithms, which are often significantly more complicated than conventional algorithms to design, analyze, implement, and use. A long-standing open problem is to develop techniques that automatically transform conventional programs so that they correctly and efficiently respond to incremental changes. In this paper, we describe a significant step towards solving the problem of automatic incrementalization: a programming language and a compiler that can, given a few type annotations describing what can change over time, compile a conventional program that assumes its data to be static (unchanging over time) to an incremental program. Based on recent advances in self-adjusting computation, including a theoretical proposal for translating purely functional programs to self-adjusting programs, we develop techniques for translating conventional Standard ML programs to self-adjusting programs. By extending the Standard ML language, we design a fully featured programming language with higher-order features, a module system, and a powerful type system, and implement a compiler for this language. The resulting programming language, LML, enables translating conventional programs decorated with simple type annotations into incremental programs that can respond to changes in their data correctly and efficiently. We evaluate the effectiveness of our approach by considering a range of benchmarks involving lists, vectors, and matrices, as well as a ray tracer. For these benchmarks, our compiler incrementalizes existing code with only trivial amounts of annotation. The resulting programs are often asymptotically more efficient, leading to orders of magnitude speedups in practice.},
author = {Chen, Yan and Dunfield, Joshua and Acar, Umut A.},
doi = {10.1145/2345156.2254100},
isbn = {9781450312059},
issn = {15232867},
journal = {ACM SIGPLAN Notices},
keywords = {Compiler optimization,Incrementalization,Performance,Self-adjusting computation,Type annotations},
number = {6},
pages = {299--310},
title = {{Type-directed automatic incrementalization}},
volume = {47},
year = {2012}
}
@article{Ahmad2009,
abstract = {We present DBToaster, a novel query compilation framework for producing high performance compiled query executors that incrementally and continuously answer standing aggregate queries using in-memory views. DBToaster targets applications that require efficient main-memory processing of standing queries (views) fed by high-volume data streams, recursively compiling view maintenance (VM) queries into simple C++ functions for evaluating database updates (deltas). While today's VM algorithms consider the impact of single deltas on view queries to produce maintenance queries, we recursively consider deltas of maintenance queries and compile to thoroughly transform queries into code. Recursive compilation successively elides certain scans and joins, and eliminates significant query plan interpreter overheads. In this demonstration, we walk through our compilation algorithm, and show the significant performance advantages of our compiled executors over other query processors. We are able to demonstrate 1-3 orders of magnitude improvements in processing times for a financial application and a data warehouse loading application, both implemented across a wide range of database systems, including PostgreSQL, HSQLDB, a commercial DBMS 'A', the Stanford STREAM engine, and a commercial stream processor 'B'.},
author = {Ahmad, Yanif and Koch, Christoph},
doi = {10.14778/1687553.1687592},
issn = {21508097},
journal = {Proceedings of the VLDB Endowment},
number = {2},
pages = {1566--1569},
title = {{DBToaster: A SQL compiler for high-performance delta processing in main-memory databases}},
volume = {2},
year = {2009}
}
@article{Dewar1979,
author = {Dewar, Robert B.K. and Grand, Arthur and Liu, Ssu Cheng and Schwartz, Jacob T. and Schonberg, Edmond},
doi = {10.1145/357062.357064},
issn = {15584593},
journal = {ACM Transactions on Programming Languages and Systems (TOPLAS)},
keywords = {automatic data structure choice,high level languages,optimization,set-theoretic languages,stepwise refinement},
number = {1},
pages = {27--49},
title = {{Programming by Refinement, as Exemplified by the SETL Representation Sublanguage}},
volume = {1},
year = {1979}
}
@article{Ramsey1930,
author = {Ramsey, F. P.},
doi = {10.1112/plms/s2-30.1.264},
issn = {1460244X},
journal = {Proceedings of the London Mathematical Society},
number = {1},
pages = {264--286},
title = {{On a problem of formal logic}},
volume = {s2-30},
year = {1930}
}
@article{Thompson1984,
abstract = {To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.},
author = {Thompson, Ken},
doi = {10.1145/358198.358210},
issn = {15577317},
journal = {Communications of the ACM},
number = {8},
pages = {761--763},
title = {{Reflections on Trusting Trust}},
volume = {27},
year = {1984}
}
@article{Koch2014,
abstract = {Applications ranging from algorithmic trading to scientific data analysis require realtime analytics based on views over databases that change at very high rates. Such views have to be kept fresh at low maintenance cost and latencies. At the same time, these views have to support classical SQL, rather than window semantics, to enable applications that combine current with aged or historical data. In this paper, we present viewlet transforms, a recursive finite differencing technique applied to queries. The viewlet transform materializes a query and a set of its higher-order deltas as views. These views support each other's incremental maintenance, leading to a reduced overall view maintenance cost. The viewlet transform of a query admits efficient evaluation, the elimination of certain expensive query operations, and aggressive parallelization. We develop viewlet transforms into a workable query execution technique, present a heuristic and cost-based optimization framework, and report on experiments with a prototype dynamic data management system that combines viewlet transforms with an optimizing compilation technique. The system supports tens of thousands of complete view refreshes a second for a wide range of queries.},
author = {Koch, Christoph and Ahmad, Yanif and Kennedy, Oliver and Nikolic, Milos and N{\"{o}}tzli, Andres and Lupei, Daniel and Shaikhha, Amir},
doi = {10.1007/s00778-013-0348-4},
issn = {0949877X},
journal = {VLDB Journal},
keywords = {Compilation,Database queries,Incremental view maintenance,Materialized views},
number = {2},
pages = {253--278},
title = {{DBToaster: Higher-order delta processing for dynamic, frequently fresh views}},
volume = {23},
year = {2014}
}
@article{Chen2018a,
abstract = {There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.},
archivePrefix = {arXiv},
arxivId = {1802.04799},
author = {Chen, Tianqi and Moreau, Thierry and Jiang, Ziheng and Zheng, Lianmin and Yan, Eddie and Cowan, Meghan and Shen, Haichen and Wang, Leyuan and Hu, Yuwei and Ceze, Luis and Guestrin, Carlos and Krishnamurthy, Arvind},
eprint = {1802.04799},
title = {{TVM: An Automated End-to-End Optimizing Compiler for Deep Learning}},
url = {http://arxiv.org/abs/1802.04799},
year = {2018}
}
@article{Author,
author = {Author, Anonymous},
keywords = {data structures,incremental computation,pro-},
title = {{Incrementalization with Data Structures}}
}
@article{Loncaric2016,
author = {Loncaric, Calvin and Torlak, Emina and Ernst, Michael D},
isbn = {9781450342612},
journal = {PLDI '16: Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation},
keywords = {data structure synthesis},
title = {{Fast Synthesis of Fast Collections}},
year = {2016}
}
@article{Loncaric2016a,
author = {Loncaric, Calvin and Torlak, Emina and Ernst, Michael D.},
doi = {10.1145/2908080.2908122},
isbn = {9781450342612},
journal = {Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI)},
keywords = {Data structure synthesis},
pages = {355--368},
title = {{Fast synthesis of fast collections}},
volume = {13-17-June},
year = {2016}
}
@article{Liu2016,
abstract = {Object queries are significantly easier to write, understand, and maintain than efficient low-level programs. However, a query may involve any number and combination of objects and sets, which can be arbitrarily nested and aliased. The objects and sets involved, starting from the given demand---the given parameter values of interest---can change arbitrarily. How to generate efficient implementations automatically, and furthermore to provide complexity guarantees? This paper describes such an automatic method. The method allows the queries to be written completely declaratively. It transforms demand into relations, based on the same basic idea for transforming objects and sets into relations in a prior work. Most importantly, it defines and incrementally maintains invariants for not only the query results, but also all auxiliary values about the objects and sets involved, starting from the demand. Implementation and experiments with problems from a variety of application areas, including distributed algorithms, confirm the analyzed complexities, trade-offs, and significant improvements over prior works.},
author = {Liu, Yanhong A. and Brandvein, Jon and Stoller, Scott D. and Lin, Bo},
doi = {10.1145/2967973.2968610},
isbn = {9781450341486},
journal = {Proceedings of the 18th International Symposium on Principles and Practice of Declarative Programming, PPDP 2016},
keywords = {Complexity guarantees,Demand-driven incremental computation,Object queries,Program transformation},
pages = {228--241},
title = {{Demand-driven incremental object queries}},
year = {2016}
}
@article{Raychev2019,
abstract = {We present a new approach for predicting program properties from massive codebases (aka "Big Code"). Our approach first learns a probabilistic model from existing data and then uses this model to predict properties of new, unseen programs. The key idea of our work is to transform the input program into a representation which allows us to phrase the problem of inferring program properties as structured prediction in machine learning. This formulation enables us to leverage powerful probabilistic graphical models such as conditional random fields (CRFs) in order to perform joint prediction of program properties. As an example of our approach, we built a scalable prediction engine called JSNice for solving two kinds of problems in the context of JavaScript: predicting (syntactic) names of identifiers and predicting (semantic) type annotations of variables. Experimentally, JSNice predicts correct names for 63{\%} of name identifiers and its type annotation predictions are correct in 81{\%} of the cases. In the first week since its release, JSNice was used by more than 30,000 developers and in only a few months has become a popular tool in the JavaScript developer community. By formulating the problem of inferring program properties as structured prediction and showing how to perform both learning and inference in this context, our work opens up new possibilities for attacking a wide range of difficult problems in the context of "Big Code" including invariant generation, decompilation, synthesis and others.},
author = {Raychev, Veselin and Vechev, Martin and Krause, Andreas},
doi = {10.1145/3306204},
isbn = {9781450333009},
issn = {15577317},
journal = {Communications of the ACM},
number = {3},
pages = {99--107},
title = {{Predicting program properties from 'big code'}},
volume = {62},
year = {2019}
}
@article{Decker2018,
abstract = {Modern machine learning frameworks have one commonality: the primary interface, for better or worse, is Python. Python is widely appreciated for its low barrier of entry due to its high-level built-ins and use of dynamic typing. However, these same features are also often attributed to causing the significant performance gap between the front-end in which users are asked to develop, and the highly-optimized back-end kernels which are ultimately called (generally written in a lower-level language like C). This has led to frameworks like TensorFlow requiring programs which consist almost entirely of API calls, with the appearance of only coincidentally being implemented in Python, the language. All recent ML frameworks have recognized this gap between usability and performance as a problem and aim to bridge the gap in generally one of two ways. In the case of tools like PyTorch's JIT compiler, executed tensor operations can be recorded via tracing based on operator overloading. In the case of tools like PyTorch's Torch Script, Python functions can be marked for translation entirely to a low-level language. However, both tracing and wholesale translation in this fashion have significant downsides in the respective inability to capture data-dependent control flow and the missed opportunities for optimization via execution while a low-level IR is built up. In this paper, we demonstrate the ability to overcome these shortcomings by performing a relatively simple source-to-source transformation, that allows for operator overloading techniques to be extended to language built-ins, including control flow operators, function definitions, etc. We utilize a preexisting PLT Redex implementation of Python's core grammar in order to provide assurances that our transformations are semantics preserving with regard to standard Python. We then instantiate our overloading approach to generate code, which enables a form of multi-stage programming in Python. We capture the required transformations in a proof-of-concept, back-end agnostic, system dubbed Snek, and demonstrate their use in a production system released as part of TensorFlow, called AutoGraph. Finally, we provide an empirical evaluation of these systems and show performance benefits even with existing systems like TensorFlow, Torch Script, and Lantern as back-ends.},
author = {Decker, James M and Moldovan, Dan and Wei, Guannan and Bhardwaj, Vritant and Essertel, Gregory and Wang, Fei and Wiltschko, Alexander B and Rompf, Tiark},
number = {November},
pages = {1--14},
title = {{The 800 Pound Python in the Machine Learning Room}},
volume = {1},
year = {2018}
}
@article{Li2015,
abstract = {Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.},
archivePrefix = {arXiv},
arxivId = {1511.05493},
author = {Li, Yujia and Tarlow, Daniel and Brockschmidt, Marc and Zemel, Richard},
eprint = {1511.05493},
number = {1},
pages = {1--20},
title = {{Gated Graph Sequence Neural Networks}},
url = {http://arxiv.org/abs/1511.05493},
year = {2015}
}
@article{Frostig2018,
abstract = {We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forward- and reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily programmable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs.},
author = {Frostig, Roy and Johnson, Matthew James and Leary, Chris},
journal = {SysML 2018},
title = {{Compiling machine learning programs via high-level tracing}},
url = {http://www.sysml.cc/doc/146.pdf},
year = {2018}
}
@article{Si2018,
abstract = {A fundamental problem in program verification concerns inferring loop invariants. The problem is undecidable and even practical instances are challenging. Inspired by how human experts construct loop invariants, we propose a reasoning framework Code2Inv that constructs the solution by multi-step decision making and querying an external program graph memory block. By training with reinforcement learning, Code2Inv captures rich program features and avoids the need for ground truth solutions as supervision. Compared to previous learning tasks in domains with graph-structured data, it addresses unique challenges, such as a binary objective function and an extremely sparse reward that is given by an automated theorem prover only after the complete loop invariant is proposed. We evaluate Code2Inv on a suite of 133 benchmark problems and compare it to three state-of-the-art systems. It solves 106 problems compared to 73 by a stochastic search-based system, 77 by a heuristic search-based system, and 100 by a decision tree learning-based system. Moreover, the strategy learned can be generalized to new programs: compared to solving new instances from scratch, the pre-trained agent is more sample efficient in finding solutions.},
author = {Si, Xujie and Dai, Hanjun and Raghothaman, Mukund and Naik, Mayur and Song, Le},
issn = {10495258},
journal = {Advances in Neural Information Processing Systems},
pages = {7751--7762},
title = {{Learning loop invariants for program verification}},
volume = {2018-December},
year = {2018}
}
@article{Hammer2014,
author = {Hammer, Matthew A and Phang, Khoo Yit and Hicks, Michael and Foster, Jeffrey S},
pages = {1--17},
title = {{Adapton: Composable, Demand-Driven Incremental Computation (Extended Version)}},
year = {2014}
}
@article{Wang2018,
abstract = {Despite the recent successes of deep neural networks in various fields such as image and speech recognition, natural language processing, and reinforcement learning, we still face big challenges in bringing the power of numeric optimization to symbolic reasoning. Researchers have proposed different avenues such as neural machine translation for proof synthesis, vectorization of symbols and expressions for representing symbolic patterns, and coupling of neural back-ends for dimensionality reduction with symbolic front-ends for decision making. However, these initial explorations are still only point solutions, and bear other shortcomings such as lack of correctness guarantees. In this paper, we present our approach of casting symbolic reasoning as games, and directly harnessing the power of deep reinforcement learning in the style of Alpha(Go) Zero on symbolic problems. Using the Boolean Satisfiability (SAT) problem as showcase, we demonstrate the feasibility of our method, and the advantages of modularity, efficiency, and correctness guarantees.},
archivePrefix = {arXiv},
arxivId = {1802.05340},
author = {Wang, Fei and Rompf, Tiark},
eprint = {1802.05340},
pages = {1--4},
title = {{From Gameplay to Symbolic Reasoning: Learning SAT Solver Heuristics in the Style of Alpha(Go) Zero}},
url = {http://arxiv.org/abs/1802.05340},
year = {2018}
}
@article{Roesch2018,
abstract = {Machine learning powers diverse services in industry including search, translation, recommendation systems, and security. The scale and importance of these models require that they be efficient, expressive, and portable across an array of heterogeneous hardware devices. These constraints are often at odds; in order to better accommodate them we propose a new high-level intermediate representation (IR) called Relay. Relay is being designed as a purely-functional, statically-typed language with the goal of balancing efficient compilation, expressiveness, and portability. We discuss the goals of Relay and highlight its important design constraints. Our prototype is part of the open source NNVM compiler framework, which powers Amazon's deep learning framework MxNet.},
author = {Roesch, Jared and Lyubomirsky, Steven and Weber, Logan and Pollock, Josh and Kirisame, Marisa and Chen, Tianqi and Tatlock, Zachary},
doi = {10.1145/3211346.3211348},
isbn = {9781450358347},
journal = {MAPL 2018 - Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, co-located with PLDI 2018},
keywords = {Compilers,Differentiable programming,Intermediate representation,Machine learning},
pages = {58--68},
title = {{Relay: A new IR for machine learning frameworks}},
year = {2018}
}
@article{Chen2018b,
abstract = {We introduce a learning-based framework to optimize tensor programs for deep learning workloads. Efficient implementations of tensor operators, such as matrix multiplication and high dimensional convolution, are key enablers of effective deep learning systems. However, existing systems rely on manually optimized libraries such as cuDNN where only a narrow range of server class GPUs are well-supported. The reliance on hardware-specific operator libraries limits the applicability of high-level graph optimizations and incurs significant engineering costs when deploying to new hardware targets. We use learning to remove this engineering burden. We learn domain-specific statistical cost models to guide the search of tensor operator implementations over billions of possible program variants. We further accelerate the search by effective model transfer across workloads. Experimental results show that our framework delivers performance competitive with state-of-the-art hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPU.},
archivePrefix = {arXiv},
arxivId = {arXiv:1805.08166v2},
author = {Chen, Tianqi and Zheng, Lianmin and Yan, Eddie and Jiang, Ziheng and Moreau, Thierry and Ceze, Luis and Guestrin, Carlos and Krishnamurthy, Arvind},
eprint = {arXiv:1805.08166v2},
issn = {10495258},
journal = {Advances in Neural Information Processing Systems},
pages = {3389--3400},
title = {{Learning to optimize tensor programs}},
volume = {2018-December},
year = {2018}
}
@article{Bui2018,
abstract = {Translating a program written in one programming language to another can be useful for software development tasks that need functionality implementations in different languages. Although past studies have considered this problem, they may be either specific to the language grammars, or specific to certain kinds of code elements (e.g., tokens, phrases, API uses). This paper proposes a new approach to automatically learn cross-language representations for various kinds of structural code elements that may be used for program translation. Our key idea is twofold: First, we normalize and enrich code token streams with additional structural and semantic information, and train cross-language vector representations for the tokens (a.k.a. shared embeddings based on word2vec, a neural-network-based technique for producing word embeddings); Second, hierarchically from bottom up, we construct shared embeddings for code elements of higher levels of granularity (e.g., expressions, statements, methods) from the embeddings for their constituents, and then build mappings among code elements across languages based on similarities among embeddings. Our preliminary evaluations on about 40,000 Java and C{\#} source files from 9 software projects show that our approach can automatically learn shared embeddings for various code elements in different languages and identify their cross-language mappings with reasonable Mean Average Precision scores. When compared with an existing tool for mapping library API methods, our approach identifies many more mappings accurately. The mapping results and code can be accessed at https://github.com/bdqnghi/hierarchical-programming-language-mapping. We believe that our idea for learning cross-language vector representations with code structural information can be a useful step towards automated program translation.},
archivePrefix = {arXiv},
arxivId = {arXiv:1803.04715v1},
author = {Bui, Nghi D.Q. and Jiang, Lingxiao},
doi = {10.1145/3183399.3183427},
eprint = {arXiv:1803.04715v1},
isbn = {9781450356626},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
keywords = {language mapping,program translation,software maintenance,syntactic structure,word2vec},
number = {2},
pages = {33--36},
title = {{Hierarchical learning of cross-language mappings through distributed vector representations for code}},
year = {2018}
}
@article{Ernst2007,
author = {Pacheco, Carlos and Ernst, Michael D},
isbn = {9781595937865},
keywords = {java,random testing},
pages = {6--7},
title = {{Randoop: Feedback-Directed Random Testing for Java}},
volume = {5},
year = {2007}
}
@article{EllDocuMENTe2014,
abstract = {Many researchers have proposed programming languages that support incremental computation (IC), which allows programs to be efficiently re-executed after a small change to the input. However, existing implementations of such languages have two important drawbacks. First, recomputation is oblivious to specific demands on the program output; that is, if a program input changes, all dependencies will be recomputed, even if an observer no longer requires certain outputs. Second, programs are made incremental as a unit, with little or no support for reusing results outside of their original context, e.g., when reordered. To address these problems, we present $\lambda^{cdd}_{ic}$, a core calculus that applies a demand-driven semantics to incremental computation, tracking changes in a hierarchical fashion in a novel demanded computation graph. $\lambda^{cdd}_{ic}$ also formalizes an explicit separation between inner, incremental computations and outer observers. This combination ensures $\lambda^{cdd}_{ic}$ programs only recompute computations as demanded by observers, and allows inner computations to be reused more liberally. We present ADAPTON, an OCaml library implementing $\lambda^{cdd}_{ic}$. We evaluated ADAPTON on a range of benchmarks, and found that it provides reliable speedups, and in many cases dramatically outperforms state-of-the-art IC approaches.},
author = {Hammer, Matthew A and Phang, Khoo Yit and Hicks, Michael and Foster, Jeffrey S},
doi = {10.1145/2594291.2594324},
isbn = {9781450327848},
keywords = {D33 [Programming Languages],F32 [Logics and Meanings of Programs],Formal Definitions and Theory,Language Constructs and Features,call-by-push-value (CBPV),self-adjusting computation,thunks},
title = {{Adapton: Composable, Demand-Driven Incremental Computation}},
year = {2014}
}
@article{Rasthofer2014,
author = {Rasthofer, Siegfried and Arzt, Steven and Bodden, Eric},
isbn = {1891562355},
number = {February},
pages = {23--26},
title = {{A Machine-learning Approach for Classifying and Categorizing Android Sources and Sinks}},
year = {2014}
}
@article{Arzt,
author = {Arzt, Steven},
isbn = {9781450339001},
keywords = {framework model,library,model,static analysis,summary},
title = {{StubDroid: Automatic Inference of Precise Data-flow Summaries for the Android Framework}}
}
@article{Padhye2013,
author = {Padhye, Rohan and Khedker, Uday P.},
isbn = {9781450322010},
keywords = {call graph,context-sensitive analysis,interprocedural analysis,points-to analysis},
title = {{Interprocedural Data Flow Analysis in Soot using Value Contexts}}
}
@article{Fratantonio,
author = {Fratantonio, Yanick and Bianchi, Antonio and Robertson, William and Kirda, Engin and Kruegel, Christopher and Vigna, Giovanni},
title = {{TriggerScope: Towards Detecting Logic Bombs in Android Applications}}
}
@article{Pacheco2005,
author = {Pacheco, Carlos and Ernst, Michael D},
pages = {504--527},
title = {{Eclat: Automatic Generation and Classification of Test Inputs}},
year = {2005}
}
@article{Saltzer1991,
author = {Saltzer, J H and Reed, D P and Clark, D D},
isbn = {0890063370},
pages = {509--512},
title = {{End-to-End Arguments in System Design}},
year = {1991}
}
@article{Pacheco2017,
author = {Pacheco, Carlos and Lahiri, Shuvendu and Ernst, Michael D and Ball, Thomas},
title = {{Retrospective: Random Test Generation}},
year = {2017}
}
@article{Pacheco,
author = {Pacheco, Carlos and Lahiri, Shuvendu K and Ernst, Michael D and Ball, Thomas},
title = {{Feedback-directed Random Test Generation}}
}
@article{Ferrante1987,
author = {Ferrante, Jeanne and Ottenstein, Karl J and Warren, Joe D},
number = {3},
pages = {319--349},
title = {{The Program Dependence Graph and Its Use in Optimization}},
volume = {9},
year = {1987}
}
@article{Singh,
author = {Singh, Ranjeet and King, Andy},
title = {{Partial Evaluation for Java Malware Detection}}
}
@article{Maiorca,
author = {Maiorca, Davide and Ariu, Davide and Corona, Igino and Aresu, Marco and Giacinto, Giorgio},
title = {{Stealth Attacks: An Extended Insight into the Obfuscation Effects on Android Malware}}
}
@article{Aonzo2018,
author = {Aonzo, Simone and Merlo, Alessio and Tavella, Giulio and Fratantonio, Yanick},
isbn = {9781450356930},
keywords = {instant apps,mobile security,password managers,phishing},
title = {{Phishing Attacks on Modern Android}},
year = {2018}
}
@article{Dufour,
author = {Dufour, Bruno and Bodden, Eric and Hendren, Laurie and Lam, Patrick},
title = {{Analyzing Java Programs with Soot}}
}
@article{Bichsel2016,
author = {Bichsel, Benjamin and Vechev, Martin},
isbn = {9781450341394},
pages = {343--355},
title = {{Statistical Deobfuscation of Android Applications}},
year = {2016}
}
@article{Nielsen,
author = {Nielsen, Janus Dam},
pages = {1--47},
title = {{A Survivor's Guide to Java Program Analysis with Soot}}
}
@article{Analysis2017,
number = {February},
title = {{Static Analysis of Mobile Programs}},
year = {2017}
}
@article{Fan2019a,
author = {Fan, Ming and Luo, Xiapu and Liu, Jun and Wang, Meng and Nong, Chunyin and Zheng, Qinghua and Liu, Ting},
doi = {10.1109/ICSE.2019.00085},
keywords = {android malware,familial analysis,graph embedding,unsupervised learning},
title = {{Graph Embedding based Familial Analysis of Android Malware using Unsupervised Learning}},
year = {2019}
}
@article{Android-anwendungen2017,
title = {{Static Data Flow Analysis for Android Applications}},
year = {2017}
}
@article{Rasthofer2016,
author = {Rasthofer, Siegfried and Arzt, Steven and Miltenberger, Marc},
isbn = {189156241X},
number = {February},
pages = {21--24},
title = {{Harvesting Runtime Values in Android Applications That Feature Anti-Analysis Techniques}},
year = {2016}
}
@article{Shin2009,
author = {Hu, Xin and Chiueh, Tzi-cker and Shin, Kang G},
isbn = {9781605583525},
keywords = {graph similarity,malware indexing,multi-resolution indexing},
title = {{Large-Scale Malware Indexing Using Function-Call Graphs}},
year = {2009}
}
@article{Hammad2018,
author = {Hammad, Mahmoud and Garcia, Joshua and Malek, Sam},
isbn = {9781450356381},
title = {{A Large-Scale Empirical Study on the Effects of Code Obfuscations on Android Apps and Anti-Malware Products}}
}
@article{Barros,
author = {Barros, Paulo and Vines, Paul and Ernst, Michael D},
title = {{Static Analysis of Implicit Control Flow: Resolving Java Reflection and Android Intents}}
}
@article{Mariconti2017,
archivePrefix = {arXiv},
arxivId = {arXiv:1612.04433v3},
author = {Mariconti, Enrico and Onwuzurike, Lucky and Andriotis, Panagiotis and Cristofaro, Emiliano De and Ross, Gordon and Stringhini, Gianluca},
eprint = {arXiv:1612.04433v3},
number = {Ndss},
title = {{MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models}},
year = {2017}
}
@article{Zhou,
author = {Zhou, Yajin and Jiang, Xuxian},
keywords = {android malware,smartphone security},
number = {4},
title = {{Dissecting Android Malware: Characterization and Evolution}}
}
@article{Yang,
author = {Yang, Chao and Xu, Zhaoyan and Gu, Guofei and Yegneswaran, Vinod and Porras, Phillip},
keywords = {android malware analysis and detection,mobile security},
title = {{DroidMiner : Automated Mining and Characterization of Fine-grained Malicious Behaviors in Android Applications}}
}
@article{Arp,
author = {Arp, Daniel and Spreitzenbarth, Michael and H{\"{u}}bner, Malte and Gascon, Hugo and Rieck, Konrad},
isbn = {1891562355},
title = {{DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket}}
}
@article{Poeplau2014,
author = {Poeplau, Sebastian and Fratantonio, Yanick and Bianchi, Antonio and Kruegel, Christopher and Vigna, Giovanni},
isbn = {1891562355},
number = {February},
pages = {23--26},
title = {{Execute This! Analyzing Unsafe and Malicious Dynamic Code Loading in Android Applications}},
year = {2014}
}
@article{Zhang,
author = {Zhang, Mu and Duan, Yue and Yin, Heng and Zhao, Zhiruo},
isbn = {9781450329576},
keywords = {android,anomaly detection,graph similarity,malware classification,semantics-aware,signature detection},
title = {{Semantics-Aware Android Malware Classification Using Weighted Contextual API Dependency Graphs}}
}
@article{Fan2018a,
author = {Fan, Ming and Liu, Jun and Luo, Xiapu and Chen, Kai and Tian, Zhenzhou and Zheng, Qinghua and Liu, Ting},
doi = {10.1109/TIFS.2018.2806891},
journal = {IEEE Transactions on Information Forensics and Security},
number = {8},
pages = {1890--1905},
publisher = {IEEE},
title = {{Android Malware Familial Classification and Representative Sample Selection via Frequent Subgraph Analysis}},
volume = {13},
year = {2018}
}
@article{Wang,
author = {Wang, Pei and Wang, Li and Wang, Shuai and Chen, Zhaofeng},
isbn = {9781450356381},
keywords = {empirical study,mobile app,obfuscation,reverse engineering},
title = {{Software Protection on the Go: A Large-Scale Empirical Study on Mobile App Obfuscation}}
}
@article{Arzta,
author = {Arzt, Steven and Rasthofer, Siegfried and Fritz, Christian and Bodden, Eric and Bartel, Alexandre and Klein, Jacques and Traon, Yves Le and Octeau, Damien and Mcdaniel, Patrick},
isbn = {9781450327848},
title = {{FlowDroid: Precise Context, Flow, Field, Object-sensitive and Lifecycle-aware Taint Analysis for Android Apps}}
}
@article{Zhu2016,
author = {Zhu, Ziyun and Dumitras, Tudor},
isbn = {9781450341394},
title = {{FeatureSmith : Automatically Engineering Features for Malware Detection by Mining the Security Literature}},
year = {2016}
}
@article{Bastani2017a,
author = {Feng, Yu and Bastani, Osbert and Martins, Ruben and Dillig, Isil and Anand, Saswat},
number = {March},
title = {{Automated Synthesis of Semantic Malware Signatures using Maximum Satisfiability}},
year = {2017}
}
@article{Aiken2014,
author = {Feng, Yu and Anand, Saswat and Dillig, Isil and Aiken, Alex},
isbn = {9781450330565},
keywords = {android,inter-component call graph,taint analysis},
title = {{Apposcopy: Semantics-Based Detection of Android Malware through Static Analysis}},
year = {2014}
}
@article{Tripp,
author = {Tripp, Omer},
title = {{A Bayesian Approach to Privacy Enforcement in Smartphones}}
}
@article{It,
pages = {6--8},
title = {{Thrust II: Behavioral Deobfuscation}}
}
@article{Gascon,
author = {Gascon, Hugo and Yamaguchi, Fabian and Rieck, Konrad and Arp, Daniel},
isbn = {9781450324885},
keywords = {graph kernels,machine learning,malware detection},
title = {{Structural Detection of Android Malware using Embedded Call Graphs}}
}
@article{Bodden2012,
author = {Bodden, Eric},
keywords = {flow-sensitive,inter-procedural static analysis},
title = {{Inter-procedural Data-flow Analysis with IFDS/IDE and Soot}},
year = {2012}
}
@article{Monperrus2012,
author = {Bartel, Alexandre and Klein, Jacques and {Le Traon}, Yves and Monperrus, Martin},
isbn = {9781450314909},
keywords = {android,dalvik bytecode,jimple,soot,static},
title = {{Dexpler : Converting Android Dalvik Bytecode to Jimple for Static Analysis with Soot}},
year = {2012}
}
@article{Balachandran2018,
author = {Balachandran, Vivek and Tan, Darell J J and Thing, Vrizlynn L L},
doi = {10.1016/j.cose.2016.05.003},
issn = {0167-4048},
journal = {Computers {\&} Security},
pages = {72--93},
publisher = {Elsevier Ltd},
title = {{Control flow obfuscation for Android applications}},
url = {http://dx.doi.org/10.1016/j.cose.2016.05.003},
volume = {61},
year = {2016}
}
@article{Meyerovich2013,
author = {Meyerovich, Leo A and Torok, Matthew E and Atkinson, Eric and Bod{\'{i}}k, Rastislav},
isbn = {9781450319225},
keywords = {attribute grammars,css,functional specification,layout,scheduling,sketching},
pages = {187--196},
title = {{Parallel Schedule Synthesis for Attribute Grammars}},
year = {2013}
}
@article{Si,
author = {Si, Xujie and Lee, Woosuk and Zhang, Richard and Albarghouthi, Aws and Koutris, Paraschos and Naik, Mayur},
isbn = {9781450355735},
keywords = {syntax-guided synthesis,Datalog,active learning,program analysis,template augmentation},
pages = {515--527},
title = {{Syntax-Guided Synthesis of Datalog Programs}}
}
@book{Gulwani2017,
author = {Gulwani, Sumit and Polozov, Oleksandr and Singh, Rishabh},
isbn = {9781680832921},
title = {{Program Synthesis}}
}
@article{Udupa2013,
author = {Udupa, Abhishek and Raghavan, Arun and Deshmukh, Jyotirmoy V and Mador-Haim, Sela and Martin, Milo M K and Alur, Rajeev},
isbn = {9781450320146},
keywords = {cache coherence protocols,distributed protocol synthesis,program synthesis,programming by example},
pages = {1--10},
title = {{TRANSIT: Specifying Protocols with Concolic Snippets}}
}
@article{Alur,
author = {Alur, Rajeev and Bodik, Rastislav and Dallal, Eric and Fisman, Dana},
keywords = {constraint solving,counterexamples,machine,program synthesis},
title = {{Syntax-Guided Synthesis}}
}
@article{Schulman2016,
archivePrefix = {arXiv},
arxivId = {arXiv:1506.02438v6},
author = {Schulman, John and Moritz, Philipp and Levine, Sergey and Jordan, Michael I and Abbeel, Pieter},
eprint = {arXiv:1506.02438v6},
pages = {1--14},
title = {{High-Dimensional Continuous Control Using Generalized Advantage Estimation}},
year = {2016}
}
@article{Yang2015a,
author = {Yang, Wei and Xiao, Xusheng and Andow, Benjamin and Li, Sihan and Xie, Tao and Enck, William},
doi = {10.1109/ICSE.2015.50},
isbn = {9781479919345},
journal = {2015 IEEE/ACM 37th IEEE International Conference on Software Engineering},
pages = {303--313},
publisher = {IEEE},
title = {{AppContext: Differentiating Malicious and Benign Mobile App Behaviors Using Context}},
volume = {1},
year = {2015}
}
@article{Spath2019,
author = {Sp{\"{a}}th, Johannes and Ali, Karim and Bodden, Eric},
number = {January},
title = {{Context-, Flow-, and Field-Sensitive Data-Flow Analysis using Synchronized Pushdown Systems}},
volume = {3},
year = {2019}
}
@article{Kellogg,
author = {Kellogg, Martin and Maus, Everett},
title = {{Synthesizing Static Analyses from Examples}}
}
@article{Reps,
author = {Reps, Thomas and Horwitz, Susan and Sagiv, Mooly},
title = {{Precise Interprocedural Dataflow Analysis via Graph Reachability}}
}
@article{Pauck,
archivePrefix = {arXiv},
arxivId = {arXiv:1804.02903v1},
author = {Pauck, Felix and Bodden, Eric and Wehrheim, Heike},
eprint = {arXiv:1804.02903v1},
keywords = {android taint analysis,benchmarks,empirical studies,tools},
title = {{Do Android Taint Analysis Tools Keep Their Promises?}}
}
@article{Koeplinger,
author = {Koeplinger, David and Feldman, Matthew and Prabhakar, Raghu and Zhang, Yaqi and Kozyrakis, Christos and Olukotun, Kunle},
isbn = {9781450356985},
keywords = {architectures,CGRAs,compilers,domain-specific languages,FPGAs,hardware accelerators,high-level synthesis,reconfigurable},
title = {{Spatial: A Language and Compiler for Application Accelerators}}
}
@article{Beckett2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1806.08744v1},
author = {Beckett, Ryan and Walker, David},
eprint = {arXiv:1806.08744v1},
title = {{Control Plane Compression (Extended Version of the SIGCOMM 2018 Paper)}},
year = {2018}
}
@article{Gulwani2010,
author = {Gulwani, Sumit},
keywords = {deductive synthesis,genetic programming,inductive synthesis,machine learning,probabilistic inference,belief propagation,programming by demonstration,programming by examples,SAT solving,SMT solving},
note = {Invited talk},
title = {{Dimensions in Program Synthesis}}
}
@article{Sachdev,
author = {Sachdev, Saksham and Li, Hongyu and Luan, Sifei and Kim, Seohyun and Sen, Koushik and Chandra, Satish},
isbn = {9781450358347},
keywords = {code search,embedding,TF-IDF,neural code search},
title = {{Retrieval on Source Code: A Neural Code Search}}
}
@article{Hawkins,
author = {Hawkins, Peter and Aiken, Alex and Fisher, Kathleen and Rinard, Martin},
isbn = {9781450306638},
keywords = {composite data structures,synthesis},
title = {{Data Representation Synthesis}}
}
@article{Neubig2018,
author = {Neubig, Graham},
title = {{Towards Open-domain Generation of Programs from Natural Language}},
year = {2018}
}
@article{Ernst,
author = {Ernst, Michael D},
keywords = {natural language processing,program analysis,software development},
number = {4},
pages = {1--4},
title = {{Natural language is a programming language : Applying natural language processing to software development}}
}
@article{Raychev2014,
author = {Raychev, Veselin and Vechev, Martin and Yahav, Eran},
isbn = {9781450327848},
title = {{Code Completion with Statistical Language Models}},
year = {2014}
}
@article{Xu2017,
archivePrefix = {arXiv},
arxivId = {arXiv:1711.04436v1},
author = {Xu, Xiaojun and Liu, Chang and Song, Dawn},
eprint = {arXiv:1711.04436v1},
pages = {1--13},
title = {{SQLNet: Generating Structured Queries from Natural Language Without Reinforcement Learning}},
year = {2017}
}
@article{Zhong1995,
archivePrefix = {arXiv},
arxivId = {arXiv:1709.00103v7},
author = {Zhong, Victor and Xiong, Caiming and Socher, Richard},
eprint = {arXiv:1709.00103v7},
pages = {1--12},
title = {{Seq2SQL: Generating Structured Queries from Natural Language Using Reinforcement Learning}},
year = {2017}
}
@article{Wua,
archivePrefix = {arXiv},
arxivId = {arXiv:1809.01357v2},
author = {Wu, Mike and Mosse, Milan and Goodman, Noah and Piech, Chris},
eprint = {arXiv:1809.01357v2},
title = {{Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference}}
}
@article{Vasic2019,
author = {Vasic, Marko and Kanade, Aditya and Maniatis, Petros and Bieber, David and Singh, Rishabh},
pages = {1--12},
title = {{Neural Program Repair by Jointly Learning to Localize and Repair}},
year = {2019}
}
@article{Goffi,
author = {Goffi, Alberto and Kuznetsov, Konstantin and Gorla, Alessandra and Ernst, Michael D},
isbn = {9781450356992},
keywords = {automatic test case generation,natural language processing,software testing,specification inference,test oracle generation},
pages = {242--253},
title = {{Translating Code Comments to Procedure Specifications}}
}
@article{Iyer,
archivePrefix = {arXiv},
arxivId = {arXiv:1808.09588v1},
author = {Iyer, Srinivasan and Konstas, Ioannis and Cheung, Alvin and Zettlemoyer, Luke},
eprint = {arXiv:1808.09588v1},
title = {{Mapping Language to Code in Programmatic Context}}
}
@article{Gu,
author = {Gu, Xiaodong and Zhang, Hongyu and Kim, Sunghun},
isbn = {9781450356381},
keywords = {code search,deep learning,joint embedding},
title = {{Deep Code Search}}
}
@article{Balog2017,
author = {Balog, Matej and Gaunt, Alexander L and Brockschmidt, Marc and Nowozin, Sebastian and Tarlow, Daniel},
title = {{DeepCoder: Learning to Write Programs}},
year = {2017}
}
@article{Dong2016,
author = {Dong, Li and Lapata, Mirella},
pages = {33--43},
title = {{Language to Logical Form with Neural Attention}},
year = {2016}
}
@article{Lee2018,
author = {Lee, Mina and Kim, Sonia},
title = {{Neural Contextual Code Search}},
year = {2018}
}
@article{Li2015a,
author = {Li, Jian and Wang, Yue and Lyu, Michael R and King, Irwin},
keywords = {machine learning,deep learning,natural language processing},
pages = {4159--4165},
title = {{Code Completion with Neural Attention and Pointer Networks}},
year = {2018}
}
@article{Kenton2017,
archivePrefix = {arXiv},
arxivId = {arXiv:1810.04805v1},
author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
eprint = {arXiv:1810.04805v1},
title = {{BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding}},
year = {2018}
}
@article{Vinyals,
author = {Vinyals, Oriol and Kaiser, {\L}ukasz and Koo, Terry and Petrov, Slav and Sutskever, Ilya and Hinton, Geoffrey},
pages = {1--9},
title = {{Grammar as a Foreign Language}}
}
@article{Allamanis2013,
author = {Allamanis, Miltiadis and Sutton, Charles},
doi = {10.1109/MSR.2013.6624004},
isbn = {9781467329361},
journal = {2013 10th Working Conference on Mining Software Repositories (MSR)},
pages = {53--56},
publisher = {IEEE},
title = {{Why, When, and What: Analyzing Stack Overflow Questions by Topic, Type, and Code}},
year = {2013}
}
@article{Polosukhin2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1802.04335v1},
author = {Polosukhin, Illia and Skidanov, Alex},
eprint = {arXiv:1802.04335v1},
pages = {1--11},
title = {{Neural Program Search: Solving Programming Tasks from Description and Examples}},
year = {2018}
}
@article{Murali2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1703.05698v5},
author = {Murali, Vijayaraghavan and Qi, Letao and Chaudhuri, Swarat and Jermaine, Chris},
eprint = {arXiv:1703.05698v5},
number = {2017},
pages = {1--17},
title = {{Neural Sketch Learning for Conditional Program Generation}},
year = {2018}
}
@article{Bielik,
author = {Bielik, Pavol and Raychev, Veselin and Vechev, Martin},
keywords = {probabilistic inference and learning,program analysis,open-source software},
pages = {1--10},
title = {{Programming with ``Big Code'': Lessons, Techniques and Applications}}
}
@article{Allamanis,
archivePrefix = {arXiv},
arxivId = {arXiv:1709.06182v2},
author = {Allamanis, Miltiadis and Barr, Earl T and Devanbu, Premkumar and Sutton, Charles},
eprint = {arXiv:1709.06182v2},
number = {1414172},
pages = {1--36},
title = {{A Survey of Machine Learning for Big Code and Naturalness}}
}
@article{Schkufza2013,
author = {Schkufza, Eric and Aiken, Alex},
isbn = {9781450318709},
keywords = {superoptimization,stochastic search,MCMC,Markov chain Monte Carlo,SMT,binary,x86-64},
title = {{Stochastic Superoptimization}},
year = {2013}
}
@article{Kanvar2015,
archivePrefix = {arXiv},
arxivId = {arXiv:1403.4910v5},
author = {Kanvar, Vini and Khedker, Uday P},
eprint = {arXiv:1403.4910v5},
title = {{Heap Abstractions for Static Analysis}},
year = {2015}
}
@article{Gottschlich,
archivePrefix = {arXiv},
arxivId = {arXiv:1803.07244v2},
author = {Gottschlich, Justin and Solar-Lezama, Armando and Carbin, Michael and Rinard, Martin and Barzilay, Regina and Tenenbaum, Joshua B and Mattson, Tim},
eprint = {arXiv:1803.07244v2},
keywords = {intention,invention,machine programming,program synthesis,software development,software maintenance},
title = {{The Three Pillars of Machine Programming}}
}
@article{Vilk2018,
author = {Vilk, John and Berger, Emery D},
title = {{BLeak: Automatically Debugging Memory Leaks in Web Applications}},
year = {2018}
}
@article{Beschastnikh2016,
author = {Beschastnikh, Ivan},
number = {april},
pages = {1--20},
title = {{Debugging Distributed Systems}},
year = {2016}
}
@article{Harman,
author = {Harman, Mark and O'Hearn, Peter},
pages = {1--23},
title = {{From Start-ups to Scale-ups: Opportunities and Open Problems for Static and Dynamic Program Analysis}}
}
@article{Huang,
author = {Huang, Waylon},
title = {{Evaluating the Effectiveness of Components of Guided Random Testing}}
}
@article{Gua,
archivePrefix = {arXiv},
arxivId = {arXiv:1605.08535v3},
author = {Gu, Xiaodong and Zhang, Hongyu and Zhang, Dongmei and Kim, Sunghun},
eprint = {arXiv:1605.08535v3},
isbn = {9781450321389},
keywords = {api,api usage,code search,deep learning,rnn},
title = {{Deep API Learning}}
}
@article{Devlin2017,
archivePrefix = {arXiv},
arxivId = {arXiv:1710.04157v1},
author = {Devlin, Jacob and Hausknecht, Matthew},
eprint = {arXiv:1710.04157v1},
number = {Nips},
title = {{Neural Program Meta-Induction}},
year = {2017}
}
@article{Zavershynskyi2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1807.03168v1},
author = {Zavershynskyi, Maksym and Skidanov, Alex and Polosukhin, Illia},
eprint = {arXiv:1807.03168v1},
title = {{NAPS: Natural Program Synthesis Dataset}},
year = {2018}
}
@article{Simmons-edler,
archivePrefix = {arXiv},
arxivId = {arXiv:1806.02932v1},
author = {Simmons-Edler, Riley and Miltner, Anders and Seung, Sebastian},
eprint = {arXiv:1806.02932v1},
title = {{Program Synthesis Through Reinforcement Learning Guided Tree Search}}
}
@article{Martins,
author = {Feng, Yu and Martins, Ruben and Bastani, Osbert and Dillig, Isil},
isbn = {9781450356985},
keywords = {conflict-driven learning,program synthesis},
title = {{Program Synthesis using Conflict-Driven Learning}}
}
@article{Allamanis2017,
author = {Allamanis, Miltiadis and Brockschmidt, Marc and Khademi, Mahmoud},
pages = {1--16},
title = {{Learning to Represent Programs with Graphs}},
year = {2017}
}
@article{Bodik2015,
author = {Bodik, Rastislav},
isbn = {9781450336697},
title = {{Program Synthesis: Opportunities for the Next Decade}},
year = {2015}
}
@article{Nelson,
author = {Joshi, Rajeev and Nelson, Greg and Randall, Keith},
isbn = {1581134630},
keywords = {optimizing compiler,superoptimizer},
title = {{Denali: A Goal-directed Superoptimizer}}
}
@article{Noble2016,
author = {Noble, James and Black, Andrew P and Bruce, Kim B and Homer, Michael and Miller, Mark S},
isbn = {9781450340762},
keywords = {abstraction,equality,identity,object-orientation},
pages = {224--237},
title = {{The Left Hand of Equals}},
year = {2016}
}
@article{Wrenn2018,
author = {Wrenn, John and Krishnamurthi, Shriram and Fisler, Kathi},
isbn = {9781450356282},
title = {{Who Tests the Testers? Avoiding the Perils of Automated Testing}},
year = {2018}
}
@article{Detlefs2005,
author = {Detlefs, David and Nelson, Greg and Saxe, James B},
journal = {Journal of the ACM},
number = {3},
pages = {365--473},
title = {{Simplify: A Theorem Prover for Program Checking}},
volume = {52},
year = {2005}
}
@article{Becker2017,
archivePrefix = {arXiv},
arxivId = {arXiv:1709.05703v1},
author = {Becker, Kory and Gottschlich, Justin},
eprint = {arXiv:1709.05703v1},
keywords = {artificial intelligence,code generation and optimization,evolutionary computation,genetic algorithms,genetic programming,machine learning,program synthesis,programming languages},
title = {{AI Programmer : Autonomously Creating Software Programs Using Genetic Algorithms}},
year = {2017}
}
@article{Henderson2012,
author = {Henderson, Keith and Gallagher, Brian and Eliassi-Rad, Tina},
isbn = {9781450314626},
keywords = {graph mining,network classification,sense-making,similarity search,structural role discovery},
title = {{RolX : Structural Role Extraction {\&} Mining in Large Graphs}},
year = {2012}
}
@article{Chaudhuri2017,
author = {Chaudhuri, Swarat},
title = {{Deep Learning for Program Synthesis}},
year = {2017}
}
@article{Alon,
archivePrefix = {arXiv},
arxivId = {arXiv:1803.09473v5},
author = {Alon, Uri and Zilberstein, Meital and Levy, Omer and Yahav, Eran},
eprint = {arXiv:1803.09473v5},
title = {{code2vec : Learning Distributed Representations of Code}}
}
@article{Murray2018,
abstract = {Given recent high-profile successes in formal verification of security-related properties (e.g., for seL4), and the rising popularity of applying formal methods to cryptographic libraries and security protocols like TLS, we revisit the meaning of security-related proofs about software. We re-examine old issues, and identify new questions that have escaped scrutiny in the formal methods literature. We consider what value proofs about software systems deliver to end-users (e.g., in terms of net assurance benefits), and at what cost in terms of side effects (such as changes made to software to facilitate the proofs, and assumption-related deployment restrictions imposed on software if these proofs are to remain valid in operation). We consider in detail, for the first time to our knowledge, possible relationships between proofs and side effects. To make our discussion concrete, we draw on tangible examples, experience, and the literature.},
author = {Murray, Toby and {Van Oorschot}, Paul},
doi = {10.1109/SecDev.2018.00009},
isbn = {9781538676622},
journal = {Proceedings - 2018 IEEE Cybersecurity Development Conference, SecDev 2018},
keywords = {Computer security,Formal verification,Software engineering},
number = {June},
pages = {1--10},
title = {{BP: Formal proofs, the fine print and side effects}},
year = {2018}
}
@article{Chajed2018,
abstract = {Writing concurrent systems software is error-prone, because multiple processes or threads can interleave in many ways, and it is easy to forget about a subtle corner case. This paper introduces CSPEC, a framework for formal verification of concurrent software, which ensures that no corner cases are missed. The key challenge is to reduce the number of interleavings that developers must consider. CSPEC uses mover types to reorder commutative operations so that usually it's enough to reason about only sequential executions rather than all possible interleavings. CSPEC also makes proofs easier by making them modular using layers, and by providing a library of reusable proof patterns. To evaluate CSPEC, we implemented and proved the correctness of CMAIL, a simple concurrent Maildir-like mail server that speaks SMTP and POP3. The results demonstrate that CSPEC's movers and patterns allow reasoning about sophisticated concurrency styles in CMAIL.},
author = {Chajed, Tej and Kaashoek, Frans and Lampson, Butler and Zeldovich, Nickolai},
journal = {OSDI},
pages = {306--322},
title = {{Verifying Concurrent Software Using Movers in CSPEC}},
url = {https://www.usenix.org/conference/osdi18/presentation/chajed},
year = {2018}
}
@article{Mokhov2018,
author = {Mokhov, Andrey and Mitchell, Neil and {Peyton Jones}, Simon},
journal = {Proceedings of the ACM on Programming Languages},
keywords = {build systems,functional programming,algorithms},
number = {ICFP},
title = {{Build Systems {\`{a}} la Carte}},
volume = {2},
year = {2018}
}
@article{Polikarpova2018,
archivePrefix = {arXiv},
arxivId = {arXiv:1807.07022v2},
author = {Polikarpova, Nadia and Sergey, Ilya},
eprint = {arXiv:1807.07022v2},
title = {{Structuring the Synthesis of Heap-Manipulating Programs}},
year = {2018}
}
@article{HUET1997,
abstract = {Almost every programmer has faced the problem of representing a tree together with a subtree that is the focus of attention, where that focus may move left, right, up or down the tree. The Zipper is Huet's nifty name for a nifty data structure which fulfills this need. I wish I had known of it when I faced this task, because the solution I came up with was not quite so efficient or elegant as the Zipper.},
author = {Huet, G{\'{e}}rard},
doi = {10.1017/S0956796897002864},
issn = {09567968},
journal = {Journal of Functional Programming},
number = {5},
pages = {549--554},
title = {{The Zipper}},
url = {http://www.journals.cambridge.org/abstract{\_}S0956796897002864},
volume = {7},
year = {1997}
}
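Huet's abstract above describes moving a focus around a tree in constant time; the heart of the paper is a few type definitions. A compact OCaml sketch for binary trees (the paper treats variadic trees, so this is a simplification, not the paper's exact definitions):

type tree = Leaf of int | Node of tree * tree

(* The path remembers, for every step taken down from the root,
   the sibling left behind and the context above it. *)
type path =
  | Top
  | L of path * tree   (* went left; right sibling saved *)
  | R of tree * path   (* went right; left sibling saved *)

type loc = tree * path (* the focused subtree plus how to rebuild *)

let go_left (t, p) = match t with
  | Node (l, r) -> (l, L (p, r))
  | Leaf _ -> failwith "no children"

let go_right (t, p) = match t with
  | Node (l, r) -> (r, R (l, p))
  | Leaf _ -> failwith "no children"

let go_up (t, p) = match p with
  | L (p', r) -> (Node (t, r), p')
  | R (l, p') -> (Node (l, t), p')
  | Top -> failwith "at the root"

(* O(1) local edit at the focus; the rest of the tree is shared. *)
let replace (_, p) t' = (t', p)

let rec root loc = match loc with
  | (t, Top) -> t
  | _ -> root (go_up loc)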
@article{Felleisen1989,
abstract = {The assignment statement is a ubiquitous building block of programming languages. In functionally oriented programming languages, the assignment is the facility for modeling and expressing state changes. Given that functional languages are directly associated with the equational $\lambda$-calculus-theory, it is natural to wonder whether this syntactic proof system is extensible to imperative variants of functional languages including state variables and side-effects. In this paper, we show that such an extension exists, and that it satisfies variants of the conventional consistency and standardization theorems. With a series of examples, we also demonstrate the system's capabilities for reasoning about imperative-functional programs and illustrate some of its advantages over alternative models. {\textcopyright} 1989.},
author = {Felleisen, Matthias and Friedman, Daniel P.},
doi = {10.1016/0304-3975(89)90069-8},
issn = {03043975},
journal = {Theoretical Computer Science},
number = {3},
pages = {243--287},
title = {{A syntactic theory of sequential state}},
volume = {69},
year = {1989}
}
@article{Krebbers2014,
author = {Krebbers, Robbert and Wiedijk, Freek},
doi = {10.1145/2676724.2693571},
isbn = {9781450332965},
keywords = {C11,Coq,interactive theorem proving,operational semantics,static analysis},
pages = {1--12},
title = {{A typed C11 semantics for interactive theorem proving}},
year = {2014}
}
@article{Chlipala2013,
author = {Chlipala, Adam},
doi = {10.1145/2500365.2500592},
isbn = {9781450323260},
journal = {Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming (ICFP)},
keywords = {functional programming,generative metaprogramming,interactive proof assistants,low-level programming languages},
pages = {391--402},
title = {{The Bedrock structured programming system: Combining generative metaprogramming and Hoare logic in an extensible program verifier}},
url = {http://dl.acm.org/citation.cfm?id=2500592},
year = {2013}
}
@article{Nordio2009,
abstract = {Object-oriented languages provide advantages such as reuse and modularity, but they also raise new challenges for program verification. Program logics have been developed for languages such as C{\#} and Java. However, these logics do not cover the specifics of the Eiffel language. This paper presents a program logic for Eiffel that handles exceptions, once routines, and multiple inheritance. The logic is proven sound and complete w.r.t. an operational semantics. Lessons on language design learned from the experience are discussed. {\textcopyright} 2009 Springer Berlin Heidelberg.},
author = {Nordio, Martin and Calcagno, Cristiano and M{\"{u}}ller, Peter and Meyer, Bertrand},
doi = {10.1007/978-3-642-02571-6_12},
isbn = {9783642025709},
issn = {18651348},
journal = {Lecture Notes in Business Information Processing},
keywords = {Eiffel,Operational semantics,Program proofs,Software verification},
pages = {195--214},
title = {{A Sound and Complete Program Logic for Eiffel}},
volume = {33 LNBIP},
year = {2009}
}
@article{Feng2009,
abstract = {Hardware interrupts are widely used in the world's critical software systems to support preemptive threads, device drivers, operating system kernels, and hypervisors. Handling interrupts properly is an essential component of low-level system programming. Unfortunately, interrupts are also extremely hard to reason about: they dramatically alter the program control flow and complicate the invariants in low-level concurrent code (e.g., implementation of synchronization primitives). Existing formal verification techniques—including Hoare logic, typed assembly language, concurrent separation logic, and the assume-guarantee method—have consistently ignored the issues of interrupts; this severely limits the applicability and power of today's program verification systems. In this paper we present a novel Hoare-logic-like framework for certifying low-level system programs involving both hardware interrupts and preemptive threads. We show that enabling and disabling interrupts can be formalized precisely using simple ownership-transfer semantics, and the same technique also extends to the concurrent setting. By carefully reasoning about the interaction among interrupt handlers, context switching, and synchronization libraries, we are able to—for the first time—successfully certify a preemptive thread implementation and a large number of common synchronization primitives. Our work provides a foundation for reasoning about interrupt-based kernel programs and makes an important advance toward building fully certified operating system kernels and hypervisors.},
author = {Feng, Xinyu and Shao, Zhong and Guo, Yu and Dong, Yuan},
doi = {10.1007/s10817-009-9118-9},
isbn = {978-1-59593-860-2},
issn = {01687433},
journal = {Journal of Automated Reasoning},
number = {2-4},
pages = {301--347},
title = {{Certifying low-level programs with hardware interrupts and preemptive threads}},
volume = {42},
year = {2009}
}
@article{Appel2011,
abstract = {The software toolchain includes static analyzers to check assertions about programs; optimizing compilers to translate programs to machine language; operating systems and libraries to supply context for programs. Our Verified Software Toolchain verifies with machine-checked proofs that the assertions claimed at the top of the toolchain really hold in the machine-language program, running in the operating-system context, on a weakly-consistent-shared-memory machine. Our verification approach is modular, in that proofs about operating systems or concurrency libraries are oblivious of the programming language or machine language, proofs about compilers are oblivious of the program logic used to verify static analyzers, and so on. The approach is scalable, in that each component is verified in the semantic idiom most natural for that component. Finally, the verification is foundational: the trusted base for proofs of observable properties of the machine-language program includes only the operational semantics of the machine language, not the source language, the compiler, the program logic, or any other part of the toolchain even when these proofs are carried out by source-level static analyzers. In this paper I explain some semantic techniques for building a verified toolchain.},
author = {Appel, Andrew W.},
doi = {10.1007/978-3-642-19718-5_1},
isbn = {9783642197178},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
number = {March},
pages = {1--17},
title = {{Verified software toolchain (Invited talk)}},
volume = {6602 LNCS},
year = {2011}
}
@article{Liang2016a,
author = {Liang, Hongjin and Feng, Xinyu and Shao, Zhong},
title = {{Compositional Verification of Termination-Preserving Refinement of Concurrent Programs (Technical Report)}},
year = {2016}
}
@article{Leroy2009,
abstract = {This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its soundness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.},
archivePrefix = {arXiv},
arxivId = {0902.2137},
author = {Leroy, Xavier},
doi = {10.1007/s10817-009-9155-4},
eprint = {0902.2137},
issn = {01687433},
journal = {Journal of Automated Reasoning},
keywords = {Compiler transformations and optimizations,Compiler verification,Formal methods,Program proof,Semantic preservation,The Coq theorem prover},
number = {4},
pages = {363--446},
title = {{A formally verified compiler back-end}},
volume = {43},
year = {2009}
}
@article{Liang2014,
author = {Liang, H and Feng, X and Shao, Z},
doi = {10.1145/2603088.2603123},
isbn = {9781450328869},
journal = {Proceedings of the Joint Meeting of the 23rd EACSL Annual Conference on Computer Science Logic, CSL 2014 and the 29th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2014},
keywords = {concurrency,refinement,rely-guarantee reasoning,simulation,termination preservation},
title = {{Compositional verification of termination-preserving refinement of concurrent programs}},
url = {http://www.scopus.com/inward/record.url?eid=2-s2.0-84905990125{\&}partnerID=40{\&}md5=019721a74ee1260c1b637bffecedb181},
year = {2014}
}
@article{Cohen2009,
abstract = {VCC is an industrial-strength verification environment for low-level concurrent system code written in C. VCC takes a program (annotated with function contracts, state assertions, and type invariants) and attempts to prove the correctness of these annotations. It includes tools for monitoring proof attempts and constructing partial counterexample executions for failed proofs. This paper motivates VCC, describes our verification methodology, describes the architecture of VCC, and reports on our experience using VCC to verify the Microsoft Hyper-V hypervisor.},
author = {Cohen, Ernie and Dahlweid, Markus and Hillebrand, Mark and Leinenbach, Dirk and Moskal, Micha{\l} and Santen, Thomas and Schulte, Wolfram and Tobies, Stephan},
doi = {10.1007/978-3-642-03359-9_2},
isbn = {364203358X},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {23--42},
title = {{VCC: A practical system for verifying concurrent C}},
volume = {5674 LNCS},
year = {2009}
}
@article{Chaki2004,
abstract = {There has been considerable progress in the domain of software verification over the last few years. This advancement has been driven, to a large extent, by the emergence of powerful yet automated abstraction techniques such as predicate abstraction. However, the state-space explosion problem in model checking remains the chief obstacle to the practical verification of real-world distributed systems. Even in the case of purely sequential programs, a crucial requirement to make predicate abstraction effective is to use as few predicates as possible. This is because, in the worst case, the state-space of the abstraction generated (and consequently the time and memory complexity of the abstraction process) is exponential in the number of predicates involved. In addition, for concurrent programs, the number of reachable states could grow exponentially with the number of components.},
author = {Chaki, S. and Clarke, E. and Groce, A. and Ouaknine, J. and Strichman, O. and Yorav, K.},
doi = {10.1023/B:FORM.0000040026.56959.91},
issn = {09259856},
journal = {Formal Methods in System Design},
keywords = {Abstraction refinement,Concurrency,Predicate abstraction,Process algebra,Software verification},
number = {2-3},
pages = {129--166},
title = {{Efficient verification of sequential and concurrent C Programs}},
volume = {25},
year = {2004}
}
@book{Symposium2015,
isbn = {9783319265285},
title = {{Programming Languages and Systems: 13th Asian Symposium, APLAS 2015}},
year = {2015}
}
@article{Siskind2016,
author = {Siskind, Jeffrey Mark and Pearlmutter, Barak A},
number = {April},
title = {{Efficient Implementation of a Higher-Order Language with Built-In AD}},
year = {2016}
}
@article{Felleisen1992,
abstract = {The syntactic theories of control and state are conservative extensions of the $\lambda_v$-calculus for equational reasoning about imperative programming facilities in higher-order languages. Unlike the simple $\lambda_v$-calculus, the extended theories are mixtures of equivalence relations and compatible congruence relations on the term language, which significantly complicates the reasoning process. In this paper we develop fully compatible equational theories of the same imperative higher-order programming languages. The new theories subsume the original calculi of control and state and satisfy the usual Church-Rosser and Standardization Theorems. With the new calculi, equational reasoning about imperative programs becomes as simple as reasoning about functional programs. {\textcopyright} 1992.},
author = {Felleisen, Matthias and Hieb, Robert},
doi = {10.1016/0304-3975(92)90014-7},
issn = {03043975},
journal = {Theoretical Computer Science},
number = {2},
pages = {235--271},
title = {{The revised report on the syntactic theories of sequential control and state}},
volume = {103},
year = {1992}
}
@article{Timany2018,
author = {Timany, Amin and Stefanesco, L{\'{e}}o and Krogh-Jespersen, Morten and Birkedal, Lars},
number = {POPL},
title = {{A Logical Relation for Monadic Encapsulation of State: Proving Contextual Equivalences in the Presence of runST}},
volume = {2},
year = {2018}
}
@article{Dragoi2016,
abstract = {Fault-tolerant distributed algorithms play an important role in many critical/high-availability applications. These algorithms are notoriously difficult to implement correctly, due to asynchronous communication and the occurrence of faults, such as the network dropping messages or computers crashing. We introduce PSYNC, a domain specific language based on the Heard-Of model, which views asynchronous faulty systems as synchronous ones with an adversarial environment that simulates asynchrony and faults by dropping messages. We define a runtime system for PSYNC that efficiently executes on asynchronous networks. We formalize the relation between the runtime system and PSYNC in terms of observational refinement. The high-level lockstep abstraction introduced by PSYNC simplifies the design and implementation of fault-tolerant distributed algorithms and enables automated formal verification. We have implemented an embedding of PSYNC in the SCALA programming language with a runtime system for asynchronous networks. We show the applicability of PSYNC by implementing several important fault-tolerant distributed algorithms and we compare the implementation of consensus algorithms in PSYNC against implementations in other languages in terms of code size, runtime efficiency, and verification.},
author = {Dr{\u{a}}goi, Cezara and Henzinger, Thomas A and Zufferey, Damien},
isbn = {9781450335492},
keywords = {fault-tolerant distributed algorithms,automated verification,consensus,partial synchrony,round model},
title = {{PSync: A Partially Synchronous Language for Fault-Tolerant Distributed Algorithms}},
year = {2016}
}
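The PSYNC entry above rests on the Heard-Of round model: each process broadcasts, then updates its state from whatever messages it happened to "hear of" that round, with faults visible only as missing messages. A small OCaml sketch of a lockstep executor in that style (an illustration of the model, not PSYNC's actual Scala API; ROUND and run_round are invented names, and List.filter_map assumes OCaml 4.08 or later):

module type ROUND = sig
  type state
  type msg
  val send : state -> msg                          (* broadcast for this round *)
  val update : state -> (int * msg) list -> state  (* fold over heard-of msgs *)
end

(* One synchronous round over n processes; [drop] injects faults by
   deciding which sender-to-receiver messages get lost. *)
let run_round (type s) (type m)
    (module R : ROUND with type state = s and type msg = m)
    ~(drop : sender:int -> receiver:int -> bool)
    (states : s array) : s array =
  let n = Array.length states in
  let msgs = Array.map R.send states in
  Array.mapi
    (fun receiver st ->
       let heard =
         List.filter_map
           (fun sender ->
              if drop ~sender ~receiver then None
              else Some (sender, msgs.(sender)))
           (List.init n (fun i -> i))
       in
       R.update st heard)
    states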
@article{Vaynberg2012,
abstract = {A virtual memory manager (VMM) is a part of an operating system that provides the rest of the kernel with an abstract model of memory. Although small in size, it involves complicated and interdependent invariants that make monolithic verification of the VMM and the kernel running on top of it difficult. In this paper, we make the observation that a VMM is constructed in layers: physical page allocation, page table drivers, address space API, etc., each layer providing an abstraction that the next layer utilizes. We use this layering to simplify the verification of individual modules of VMM and then to link them together by composing a series of small refinements. The compositional verification also supports function calls from less abstract layers into more abstract ones, allowing us to simplify the verification of initialization functions as well. To facilitate such compositional verification, we develop a framework that assists in creation of verification systems for each layer and refinements between the layers. Using this framework, we have produced a certification of BabyVMM, a small VMM designed for simplified hardware. The same proof also shows that a certified kernel using BabyVMM's virtual memory abstraction can be refined following a similar sequence of refinements, and can then be safely linked with BabyVMM. Both the verification framework and the entire certification of BabyVMM have been mechanized in the Coq Proof Assistant.},
author = {Vaynberg, Alexander and Shao, Zhong},
doi = {10.1007/978-3-642-35308-6_13},
isbn = {9783642353079},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
number = {Vmm},
pages = {143--159},
title = {{Compositional verification of a baby virtual memory manager}},
volume = {7679 LNCS},
year = {2012}
}
@article{Klein2004,
author = {Klein, Gerwin and Tuch, Harvey},
journal = {TPHOLs Emerging Trends},
title = {{Towards Verified Virtual Memory in L4}},
year = {2004}
}
@article{Tuch2004,
author = {Tuch, Harvey and Klein, Gerwin},
pages = {73--97},
title = {{Verifying the L4 virtual memory subsystem}},
url = {http://w.doclsf.de/papers/os-verify-04.pdf},
year = {2004}
}
@article{Meijer,
author = {Meijer, Erik and Fokkinga, Maarten and Paterson, Ross},
pages = {1--27},
title = {{Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire}}
}
@article{Appel2006,
abstract = {Separation logic is a Hoare logic for programs that alter pointer data structures. One can do machine-checked separation-logic proofs of interesting programs by a semantic embedding of separation logic in a higher-order logic such as Coq or Isabelle/HOL. However, since separation is a linear logic—actually, a mixture of linear and nonlinear logic—the usual methods that Coq or Isabelle use to manipulate hypotheses don't work well. On the other hand, one does not want to duplicate in linear logic the entire libraries of lemmas and tactics that are an important strength of the Coq and Isabelle systems. Here I demonstrate a set of tactics for moving cleanly between classical natural deduction and linear implication.},
author = {Appel, A.W.},
journal = {Unpublished draft, http://www.cs.princeton.edu/appel/papers/septacs.pdf},
title = {{Tactics for separation logic}},
year = {2006}
}
@article{Chlipala2011,
abstract = {Several recent projects have shown the feasibility of verifying low-level systems software. Verifications based on automated theorem-proving have omitted reasoning about first-class code pointers, which is critical for tasks like certifying implementations of threads and processes. Conversely, verifications that deal with first-class code pointers have featured long, complex, manual proofs. In this paper, we introduce the Bedrock framework, which supports mostly-automated proofs about programs with the full range of features needed to implement, e.g., language runtime systems. The heart of our approach is in mostly-automated discharge of verification conditions inspired by separation logic. Our take on separation logic is computational, in the sense that function specifications are usually written in terms of reference implementations in a purely functional language. Logical quantifiers are the most challenging feature for most automated verifiers; by relying on functional programs (written in the expressive language of the Coq proof assistant), we are able to avoid quantifiers almost entirely. This leads to some dramatic improvements compared to both past work in classical verification, which we compare against with implementations of data structures like binary search trees and hash tables; and past work in verified programming with code pointers, which we compare against with examples like function memoization and a cooperative threading library.},
author = {Chlipala, Adam},
doi = {10.1145/2345156.1993526},
isbn = {978-1-4503-0663-8},
journal = {ACM SIGPLAN Notices},
keywords = {functional programming,interactive proof assistants,low-level,programming languages,separation logic},
number = {6},
pages = {234--245},
title = {{Mostly-automated verification of low-level programs in computational separation logic}},
url = {http://dl.acm.org/citation.cfm?id=1993526},
volume = {46},
year = {2011}
}
@article{Nanevski2014,
author = {Nanevski, Aleksandar and Ley-Wild, Ruy and Sergey, Ilya and Delbianco, Germ{\'{a}}n Andr{\'{e}}s},
doi = {10.1007/978-3-642-54833-8_16},
isbn = {9783642548321},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {290--310},
title = {{Communicating state transition systems for fine-grained concurrent resources}},
volume = {8410 LNCS},
year = {2014}
}
@article{DaRochaPinto2014,
abstract = {To avoid data races, concurrent operations should either be at distinct times or on distinct data. Atomicity is the abstraction that an operation takes effect at a single, discrete instant in time, with linearisability being a well-known correctness condition which asserts that concurrent operations appear to behave atomically. Disjointness is the abstraction that operations act on distinct data resource, with concurrent separation logics enabling reasoning about threads that appear to operate independently on disjoint resources.We present TaDA, a program logic that combines the benefits of abstract atomicity and abstract disjointness. Our key contribution is the introduction of atomic triples, which offer an expressive approach to specifying program modules. By building up examples, we show that TaDA supports elegant modular reasoning in a way that was not previously possible.},
author = {{Da Rocha Pinto}, Pedro and Dinsdale-Young, Thomas and Gardner, Philippa},
doi = {10.1007/978-3-662-44202-9_9},
isbn = {9783662442012},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {207--231},
title = {{TaDA: A logic for time and data abstraction}},
volume = {8586 LNCS},
year = {2014}
}
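The TaDA entry above hinges on atomic triples. As a sketch of the notation (shape only; TaDA's actual rules carry additional region and interference machinery beyond what is shown), the specification of an atomic counter increment can be rendered in LaTeX as:

\[
  \vdash \forall n \in \mathbb{N}.\;
  \bigl\langle\, C(x, n) \,\bigr\rangle\;
  \mathtt{incr}(x)\;
  \bigl\langle\, C(x, n + 1) \,\bigr\rangle
\]

Here the pseudo-quantified $n$ tracks the counter's abstract state, which the environment may keep changing right up to the instant the increment appears to take effect; the angle brackets assert that the update itself is observed as atomic.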
@article{Svendsen2014,
author = {Svendsen, Kasper and Birkedal, Lars},
doi = {10.1007/978-3-642-54833-8_9},
isbn = {9783642548321},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {149--168},
title = {{Impredicative concurrent abstract predicates}},
volume = {8410 LNCS},
year = {2014}
}
@article{Timany2016,
author = {Timany, Amin and Krebbers, Robbert and Birkedal, Lars},
pages = {1--2},
title = {{Logical Relations in Iris}},
year = {2016}
}
@misc{Ma1992,
abstract = {The concept of relations over sets is generalized to relations over an arbitrary category, and used to investigate the abstraction (or logical-relations) theorem, the identity extension lemma, and parametric polymorphism, for Cartesian-closed-category models of the simply typed lambda calculus and PL-category models of the polymorphic typed lambda calculus. Treatments of Kripke relations and of complete relations on domains are included.},
author = {Ma, QingMing and Reynolds, John C},
booktitle = {Mathematical Foundations of Programming Semantics},
doi = {10.1007/3-540-55511-0_1},
isbn = {9780444867292},
number = {7597},
pages = {1--40},
title = {{Types , Abstraction , and Parametric Polymorphism}},
url = {http://www.cs.cmu.edu/afs/cs.cmu.edu/user/qma/www/papers/mfps.pdf},
volume = {598},
year = {1992}
}
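The Ma-Reynolds entry above generalizes logical relations to arbitrary categories; the base construction it generalizes is worth spelling out. For the simply typed lambda calculus, a binary logical relation is defined by induction on types (the standard textbook form, not the paper's categorical formulation):

\[
  (v_1, v_2) \in \mathcal{R}_{\mathsf{bool}} \iff v_1 = v_2
\]
\[
  (f_1, f_2) \in \mathcal{R}_{\sigma \to \tau} \iff
  \forall (a_1, a_2) \in \mathcal{R}_{\sigma}.\; (f_1\, a_1,\; f_2\, a_2) \in \mathcal{R}_{\tau}
\]

The abstraction theorem then states that every well-typed term is related to itself, which is the result the identity extension lemma and the parametricity treatment in the entry build on.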
@article{Carter2016,
author = {Carter, Adam S and Hundhausen, Christopher D},
isbn = {9781450344494},
keywords = {activity streams,educational data mining,learning analytics,predictive models of student performance and achievement,social},
pages = {201--209},
title = {{With a Little Help From My Friends: An Empirical Study of the Interplay of Students' Social Activities, Programming Activities, and Course Success}},
year = {2016}
}
@article{Radermacher2014,
author = {Radermacher, Alex and Walia, Gursimran and Knudson, Dean},
isbn = {9781450327688},
keywords = {computer science education,computer science pedagogy,required skills,software developer},
pages = {291--300},
title = {{Investigating the Skill Gap between Graduating Students and Industry Expectations}}
}
@article{Wu2014,
author = {Wu, Huiting and Wang, Yi},
keywords = {ability,practice teaching,reform,training goal},
number = {Ictcs},
pages = {154--157},
title = {{Exploration and Research of Practical Teaching System Based on Ability Training}},
year = {2014}
}
@article{Radermacher2013,
author = {Radermacher, Alex and Walia, Gursimran},
isbn = {9781450318686},
keywords = {computer science education,computer science pedagogy,knowledge deficiency,required skills,software developer},
pages = {525--530},
title = {{Gaps Between Industry Expectations and the Abilities of Graduates}},
year = {2013}
}
@article{Alur2013a,
abstract = {The classical formulation of the program-synthesis problem is to find a program that meets a correctness specification given as a logical formula. Recent work on program synthesis and program optimization illustrates many potential benefits of allowing the user to supplement the logical specification with a syntactic template that constrains the space of allowed implementations. Our goal is to identify the core computational problem common to these proposals in a logical framework. The input to the syntax-guided synthesis problem (SyGuS) consists of a background theory, a semantic correctness specification for the desired program given by a logical formula, and a syntactic set of candidate implementations given by a grammar. The computational problem then is to find an implementation from the set of candidate expressions so that it satisfies the specification in the given theory. We describe three different instantiations of the counterexample-guided-inductive-synthesis (CEGIS) strategy for solving the synthesis problem, report on prototype implementations, and present experimental results on an initial set of benchmarks.},
author = {Alur, Rajeev and Bodik, Rastislav and Juniwal, Garvit and Martin, Milo M. K. and Raghothaman, Mukund and Seshia, Sanjit A. and Singh, Rishabh and Solar-Lezama, Armando and Torlak, Emina and Udupa, Abhishek},
isbn = {978-0-9835678-3-7},
journal = {2013 Formal Methods in Computer-Aided Design},
pages = {1--8},
title = {{Syntax-guided synthesis}},
url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6679385},
year = {2013}
}
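The SyGuS abstract above describes CEGIS: synthesize against a finite example set, verify the candidate, and grow the example set with each counterexample. An OCaml sketch of just the loop, with the solver calls left abstract (synthesize and verify are assumed callbacks supplied by the caller, not a real SyGuS solver binding):

type 'input verdict = Correct | Counterexample of 'input

(* Iterate: find a candidate consistent with the examples so far,
   then ask the verifier; a counterexample becomes a new example. *)
let rec cegis
    ~(synthesize : 'input list -> 'prog option)
    ~(verify : 'prog -> 'input verdict)
    (examples : 'input list) : 'prog option =
  match synthesize examples with
  | None -> None                        (* candidate space exhausted *)
  | Some candidate ->
    (match verify candidate with
     | Correct -> Some candidate
     | Counterexample cex -> cegis ~synthesize ~verify (cex :: examples))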
@article{Anderson2015,
author = {Anderson, Paul V and Heckman, Sarah and Vouk, Mladen and Wright, David and Burge, Janet E and Gannod, Gerald C},
doi = {10.1109/ICSE.2015.178},
isbn = {9781479919345},
keywords = {communication across the curriculum,software engineering education},
title = {{CS/SE Instructors Can Improve Student Writing without Reducing Class Time Devoted to Technical Content: Experimental Results}},
year = {2015}
}
@article{Park2016,
author = {Park, Thomas H and Kim, Meen Chul and Chhabra, Sukrit and Lee, Brian and Forte, Andrea},
isbn = {9781450342315},
keywords = {assessment,computational thinking,program,web development},
pages = {302--307},
title = {{Reading Hierarchies in Code: Assessment of a Basic Computational Skill}},
year = {2016}
}
@article{Mao2016,
author = {Mao, Yanyan and Feng, Yanli and Cheng, Dapeng and Xie, Qingsong},
isbn = {9781509022182},
keywords = {Education Reform and Innovation},
number = {Iccse},
pages = {907--910},
title = {{Computer Curriculum System Reform Based On System Ability Training}},
year = {2016}
}
@article{Krutz1998,
author = {Krutz, Daniel E and Malachowsky, Samuel A and Reichlmayr, Thomas},
isbn = {9781450326056},
keywords = {software,software engineering education,software testing},
pages = {49--54},
title = {{Using a Real World Project in a Software Testing Course}},
year = {2014}
}
@article{Morrison2015,
author = {Morrison, Briana B and Margulieux, Lauren E and Guzdial, Mark},
isbn = {9781450336307},
keywords = {cognitive load,contextual transfer,subgoal labels},
pages = {21--29},
title = {{Subgoals, Context, and Worked Examples in Learning Computing Problem Solving}},
year = {2015}
}
@article{Manolios,
author = {Manolios, Panagiotis and Pais, Jorge and Papavasileiou, Vasilis},
title = {{The Inez Mathematical Programming Modulo Theories Framework}}
}
@article{Manoliosa,
author = {Manolios, Panagiotis and Papavasileiou, Vasilis},
title = {{ILP Modulo Theories}}
}
@article{Jain,
author = {Jain, Mitesh and Manolios, Panagiotis},
title = {{Skipping Refinement}}
}
@article{Learn2016,
number = {3},
title = {{Learning Computer Science: Dimensions of Variation Within What Chinese Students Learn}},
volume = {16},
year = {2016}
}
@article{Giantamidis,
archivePrefix = {arXiv},
arxivId = {arXiv:1605.07805v2},
author = {Giantamidis, Georgios and Tripakis, Stavros},
eprint = {arXiv:1605.07805v2},
pages = {1--19},
title = {{Learning Moore Machines from Input-Output Traces}}
}
@article{Manoliosb,
author = {Manolios, Panagiotis and Subramanian, Gayatri and Vroon, Daron},
isbn = {9781595937346},
keywords = {component-based software development,integrated modular avionics,pseudo-boolean,system assembly problem},
pages = {61--71},
title = {{Automating Component-Based System Assembly}}
}
@article{Chamarthi,
author = {Chamarthi, Harsh Raju and Dillinger, Peter and Manolios, Panagiotis and Vroon, Daron},
title = {{The ACL2 Sedan Theorem Proving System}}
}
@article{Solar-Lezama2006,
abstract = {Sketching is a software synthesis approach where the programmer develops a partial implementation - a sketch - and a separate specification of the desired functionality. The synthesizer then completes the sketch to behave like the specification. The correctness of the synthesized implementation is guaranteed by the compiler, which allows, among other benefits, rapid development of highly tuned implementations without the fear of introducing bugs. We develop SKETCH, a language for finite programs with linguistic support for sketching. Finite programs include many high-performance kernels, including crypto codes. In contrast to prior synthesizers, which had to be equipped with domain-specific rules, SKETCH completes sketches by means of a combinatorial search based on generalized boolean satisfiability. Consequently, our combinatorial synthesizer is complete for the class of finite programs: it is guaranteed to complete any sketch in theory, and in practice has scaled to realistic programming problems. Freed from domain rules, we can now write sketches as simple-to-understand partial programs, which are regular programs in which difficult code fragments are replaced with holes to be filled by the synthesizer. Holes may stand for index expressions, lookup tables, or bitmasks, but the programmer can easily define new kinds of holes using a single versatile synthesis operator. We have used SKETCH to synthesize an efficient implementation of the AES cipher standard. The synthesizer produces the most complex part of the implementation and runs in about an hour.},
author = {Solar-Lezama, Armando and Tancau, Liviu and Bodik, Rastislav and Seshia, Sanjit and Saraswat, Vijay},
doi = {10.1145/1168919.1168907},
isbn = {1595934510},
issn = {01635964},
journal = {ACM SIGARCH Computer Architecture News},
keywords = {design,languages,performance},
number = {5},
pages = {404--415},
title = {{Combinatorial sketching for finite programs}},
volume = {34},
year = {2006}
}
@article{Xu2007,
author = {Xu, Zhiwei and Li, Guojie},
doi = {10.1145/2001269.2001298},
journal = {Communications of the ACM},
title = {{Computing for the Masses}},
year = {2011}
}
@article{Lamport1988,
author = {Abadi, Mart{\'{i}}n and Lamport, Leslie},
title = {{The Existence of Refinement Mappings}},
year = {1988}
}
@article{Soh2011,
author = {Soh, Leen-Kiat and Shell, Duane F and Ingraham, Elizabeth and Ramsay, Stephen and Moore, Brian},
doi = {10.1145/2699391},
journal = {Communications of the ACM},
pages = {33--35},
title = {{Learning Through Computational Creativity}},
year = {2015}
}
@inproceedings{Lamport1983,
abstract = {Temporal logic is a formal system for specifying and reasoning about concurrent programs. It provides a uniform framework for describing a system at any level of abstraction, thereby supporting hierarchical specification and verification.},
author = {Lamport, Leslie},
booktitle = {Information Processing 83: Proceedings of the IFIP 9th World Congress},
isbn = {0444867295},
pages = {657--668},
title = {{What Good is Temporal Logic?}},
year = {1983}
}
@article{Kafai,
author = {Kafai, Yasmin B},
doi = {10.1145/2955114},
journal = {Communications of the ACM},
title = {{From Computational Thinking to Computational Participation in K--12 Education}},
year = {2016}
}
@article{Lo2010,
author = {Lo, Virginia M},
isbn = {9781605588858},
keywords = {china,computer science education},
pages = {396--400},
title = {{Undergraduate Computer Science Education in China}},
year = {2010}
}
@article{Chen2004,
author = {Chen, David Yunchao},
keywords = {china,higher education,mass expansion},
number = {1},
pages = {23--33},
title = {{China's Mass Higher Education: Problem, Analysis, and Solutions}},
volume = {5},
year = {2004}
}
@article{Wing2006,
author = {Wing, Jeannette M},
journal = {Communications of the ACM},
number = {3},
pages = {33--35},
title = {{Computational Thinking}},
volume = {49},
year = {2006}
}
@article{Education2014,
number = {3},
pages = {49--55},
title = {{Undergraduate IT Education in China}},
volume = {5},
year = {2014}
}
@article{Denning,
author = {Denning, Peter J},
doi = {10.1145/2535915},
journal = {Communications of the ACM},
pages = {29--31},
title = {{The Profession of IT: Design Thinking}},
year = {2013}
}
@article{Hu2011,
author = {Hu, Chenglie},
isbn = {9781450306973},
keywords = {computation,computational thinking,thinking model},
pages = {223--227},
title = {{Computational Thinking – What It Might Mean and What We Might Do About It}},
year = {2011}
}
@article{Otero2015,
author = {Otero, Rafael Rom{\'{a}}n and Aravind, Alex A},
isbn = {9781450329668},
pages = {430--435},
title = {{MiniOS: An Instructional Platform for Teaching Operating Systems Projects}},
year = {2015}
}
@article{Tafliovich2015,
author = {Tafliovich, Anya and Petersen, Andrew and Campbell, Jennifer},
isbn = {9781450329668},
keywords = {evaluation,motivating students,student perspective,student teamwork,undergraduate software development project},
pages = {494--499},
title = {{On the Evaluation of Student Team Software Development Projects}},
year = {2015}
}
@article{Sys2014,
author = {Sys{\l}o, Maciej M},
isbn = {9781450334402},
title = {{From Algorithmic to Computational Thinking: On the Way for Computing for All Students}},
year = {2014}
}
@article{Coffey,
author = {Coffey, John W},
pages = {39--45},
title = {{Relationship Between Design and Programming Skills in an Advanced Computer Programming Class}}
}
@article{Lamagna,
author = {Lamagna, Edmund A},
pages = {45--52},
title = {{Algorithmic thinking unplugged}}
}
@article{Harper,
author = {Harper, Robert and Plotkin, Gordon},
title = {{A Framework for Defining Logics}}
}
@article{Preoteasa,
author = {Preoteasa, Viorel and Tripakis, Stavros},
isbn = {9781450330527},
title = {{Refinement Calculus of Reactive Systems}}
}
@article{Dickson2012,
author = {Dickson, Paul E},
isbn = {9781450310987},
keywords = {android,App Inventor,apps,Cabana,iPhone,mobile devices,smartphone,Xcode},
pages = {529--534},
title = {{Cabana : A Cross-platform Mobile Development System}},
year = {2012}
}
@article{Schuurman,
author = {Schuurman, Derek C},
isbn = {9781450318686},
keywords = {computer organization,cpu simulation,education},
pages = {335--339},
title = {{Step-by-Step Design and Simulation of a Simple CPU Architecture}}
}
@article{Wiedijk,
author = {Wiedijk, Freek},
pages = {1--14},
title = {{Comparing mathematical provers}}
}
@article{Chong2016,
abstract = {Report on the NSF Workshop on Formal Methods for Security, held 19-20 November 2015.},
archivePrefix = {arXiv},
arxivId = {1608.00678},
author = {Chong, Stephen and Guttman, Joshua and Datta, Anupam and Myers, Andrew and Pierce, Benjamin and Schaumont, Patrick and Sherwood, Tim and Zeldovich, Nickolai},
eprint = {1608.00678},
month = {aug},
title = {{Report on the NSF Workshop on Formal Methods for Security}},
url = {http://arxiv.org/abs/1608.00678},
year = {2016}
}
@inproceedings{Vasudevan2013,
author = {Vasudevan, Amit and Chaki, S. and {Limin Jia} and McCune, J. and Newsome, James and Datta, A.},
booktitle = {2013 IEEE Symposium on Security and Privacy},
doi = {10.1109/SP.2013.36},
isbn = {978-0-7695-4977-4},
keywords = {dynamic root of trust,extensible modular hypervisor framework,hypapps,hypervisor-based applications,nested virtualization},
month = {may},
pages = {430--444},
publisher = {IEEE},
title = {{Design, Implementation and Verification of an eXtensible and Modular Hypervisor Framework}},
url = {http://ieeexplore.ieee.org/document/6547125/},
year = {2013}
}
@article{Dahlin2011,
author = {Dahlin, Mike and Johnson, Ryan and Krug, Robert Bellarmine and McCoyd, Michael and Young, William},
doi = {10.4204/EPTCS.70.3},
issn = {2075-2180},
journal = {Electronic Proceedings in Theoretical Computer Science},
month = {oct},
pages = {28--45},
title = {{Toward the Verification of a Simple Hypervisor}},
url = {http://arxiv.org/abs/1110.4672v1},
volume = {70},
year = {2011}
}
@inproceedings{Canning1989,
author = {Canning, Peter and Cook, William and Hill, Walter and Olthoff, Walter and Mitchell, John C},
booktitle = {Proceedings of the Fourth International Conference on Functional Programming Languages and Computer Architecture (FPCA '89)},
title = {{F-Bounded Polymorphism for Object-Oriented Programming}},
year = {1989}
}
@article{Ozeri,
author = {Ozeri, Or and Padon, Oded and Rinetzky, Noam and Sagiv, Mooly},
title = {{Conjunctive Abstract Interpretation using Paramodulation}}
}
@article{Reynolds,
author = {Reynolds, Andrew and Iosif, Radu and Serban, Cristina},
pages = {1--18},
title = {{Reasoning in the Bernays-Sch{\"{o}}nfinkel Fragment of Separation Logic}}
}
@article{Delaware2015,
abstract = {We present Fiat, a library for the Coq proof assistant supporting refinement of declarative specifications into efficient functional programs with a high degree of automation. Each refinement process leaves a proof trail, checkable by the normal Coq kernel, justifying its soundness. We focus on the synthesis of abstract data types that package methods with private data. We demonstrate the utility of our framework by applying it to the synthesis of query structures -- abstract data types with SQL-like query and insert operations. Fiat includes a library for writing specifications of query structures in SQL-inspired notation, expressing operations over relations (tables) in terms of mathematical sets. This library includes a set of tactics for automating the refinement of these specifications into efficient, correct-by-construction OCaml code. Using these tactics, a programmer can generate such an implementation completely automatically by only specifying the equivalent of SQL indexes, data structures capturing useful views of the abstract data. We conclude by speculating on the new programming modularity possibilities enabled by an automated refinement system with proved-correct rules.},
author = {Delaware, Benjamin and Pit-Claudel, Cl{\'{e}}ment and Gross, Jason and Chlipala, Adam},
doi = {10.1145/2775051.2677006},
isbn = {9781450333009},
issn = {0362-1340},
journal = {SIGPLAN Not.},
keywords = {deductive synthesis,mechanized derivation of abstract data types},
number = {1},
pages = {689--700},
title = {{Fiat: Deductive Synthesis of Abstract Data Types in a Proof Assistant}},
url = {http://doi.acm.org/10.1145/2775051.2677006},
volume = {50},
year = {2015}
}
@article{Silva,
author = {D'Silva, Vijay and Sousa, Marcelo},
pages = {1--18},
title = {{Complete Abstractions and Subclassical Modal Logics}}
}
@article{Bloem,
author = {Bloem, Roderick and Chockler, Hana and Ebrahimi, Masoud and Strichman, Ofer},
title = {{Synthesizing Non-Vacuous Systems}}
}
@article{Cohen,
author = {Cohen, Ernie},
title = {{Verified Concurrent Code: Tricks of the Trade}}
}
@article{FindingRelevantTemplates,
title = {{Finding Relevant Templates via the Principal Component Analysis}}
}
@article{Mukherjee,
author = {Mukherjee, Suvam and Kumar, Arun and D'Souza, Deepak},
title = {{Detecting All High-Level Dataraces in an RTOS Kernel}}
}
@article{Vizel,
author = {Vizel, Yakir and Gurfinkel, Arie and Shoham, Sharon and Malik, Sharad},
title = {{IC3 - Flipping the E in ICE}}
}
@article{Silvaa,
author = {D'Silva, Vijay and Kroening, Daniel and Sousa, Marcelo},
pages = {1--18},
title = {{Independence Abstractions and Models of Concurrency}}
}
@article{Ahmed,
author = {Ahmed, Zara and Benque, David and Berezin, Sergey and Dahl, Anna Caroline E and Fisher, Jasmin and Hall, Benjamin A and Ishtiaq, Samin and Nanavati, Jay and Riechert, Maik and Skoblov, Nikita},
title = {{Bringing LTL Model Checking to Biologists}}
}
@article{Monat,
author = {Monat, Rapha{\"{e}}l and Min{\'{e}}, Antoine},
keywords = {abstract interpretation,concurrent programs,invariant generation,numeric domains,program verification,rely-guarantee methods,thread-modular analyses},
title = {{Precise Thread-Modular Abstract Interpretation of Concurrent Programs using Relational Interference Abstractions}}
}
@article{Sharma,
author = {Sharma, Tushar and Reps, Thomas},
pages = {1--19},
title = {{Sound Bit-Precise Numerical Domains}}
}
@article{Chakraborty,
author = {Chakraborty, Supratik and Gupta, Ashutosh and Jain, Rahul},
pages = {1--19},
title = {{Matching Multiplications in Bit-Vector Formulas}}
}
@article{Henning,
author = {G{\"{u}}nther, Henning and Laarman, Alfons and Sokolova, Ana and Weissenbacher, Georg},
title = {{Dynamic Reductions for Model Checking Concurrent Software}}
}
@article{Cuoq,
author = {Cuoq, Pascal},
keywords = {c,static analysis,strict aliasing,type-based alias analysis},
title = {{Detecting Strict Aliasing Violations in the Wild}}
}
@article{Ferrara,
author = {Ferrara, Pietro and Tripp, Omer and Liu, Peng and Koskinen, Eric},
pages = {1--20},
title = {{Using Abstract Interpretation to Correct Synchronization Faults}}
}
@article{Abal,
author = {Abal, Iago and Brabrand, Claus and W{\k{a}}sowski, Andrzej},
keywords = {bug finding,c,double lock,linux,model checking,type and effects},
title = {{Effective Bug Finding in C Programs with Shape and Effect Abstractions}}
}
@article{Bride,
author = {Bride, Hadrien and Kouchnarenko, Olga and Peureux, Fabien},
title = {{Reduction of Workflow Nets for Generalised Soundness Verification}}
}
@article{Muscholl,
author = {Muscholl, Anca and Seidl, Helmut and Walukiewicz, Igor},
title = {{Reachability for dynamic parametric processes}}
}
@article{Hru,
author = {Hru{\v{s}}ka, Martin and Rogalewicz, Adam},
title = {{Counterexample Validation and Interpolation-Based Refinement for Forest Automata}}
}
@article{Jiang,
author = {Jiang, Jiahong and Chen, Liqian and Wu, Xueguang and Wang, Ji},
keywords = {abstract domains,abstract interpretation,block encoding,smt},
pages = {1--19},
title = {{Block-wise abstract interpretation by combining abstract domains with SMT}}
}
@article{Wanga,
author = {Wang, Wei and Barrett, Clark and Wies, Thomas},
title = {{Partitioned Memory Models for Program Analysis}}
}
@article{Blazy,
author = {Blazy, Sandrine and B{\"{u}}hler, David and Yakobowski, Boris},
title = {{Structuring Abstract Interpreters through State and Value Abstractions}}
}
@article{Botbol,
author = {Botbol, Vincent and Chailloux, Emmanuel and Le Gall, Tristan},
title = {{Static Analysis of Communicating Processes using Symbolic Transducers}}
}
@article{Konnov,
author = {Konnov, Igor and Widder, Josef and Spegni, Francesco and Spalazzi, Luca},
title = {{Accuracy of Message Counting Abstraction in Fault-Tolerant Distributed Algorithms}}
}
@article{Frumkin,
author = {Frumkin, Asya and Feldman, Yotam M Y},
title = {{Property Directed Reachability for Proving Absence of Concurrent Modification Errors}}
}
@article{Programming2013,
number = {July},
title = {{The GHC Runtime System}},
year = {2013}
}
@article{Møller2016,
author = {M{\o}ller, Anders},
isbn = {9781450339001},
pages = {1--12},
title = {{Feedback-Directed Instrumentation for Deployed JavaScript Applications}},
year = {2016}
}
@article{Chandra,
author = {Chandra, Satish and Gordon, Colin S and Jeannin, Jean-Baptiste and Schlesinger, Cole and Sridharan, Manu and Tip, Frank and Choi, Youngil},
isbn = {9781450344449},
keywords = {object-oriented type systems,type inference},
pages = {410--429},
title = {{Type Inference for Static Compilation of JavaScript}},
year = {2016}
}
@article{Andreasen2016,
archivePrefix = {arXiv},
arxivId = {arXiv:1605.01362v1},
author = {Andreasen, Esben and Gordon, Colin S and Chandra, Satish},
eprint = {arXiv:1605.01362v1},
keywords = {retrofitted type systems,trace typing,type system design},
title = {{Trace Typing: An Approach for Evaluating Retrofitted Type Systems (Extended Version)}},
year = {2016}
}
@article{Axelsson2012,
author = {Axelsson, Emil},
isbn = {9781450310543},
keywords = {embedded domain-specific languages,generic programming,the expression problem},
title = {{A Generic Abstract Syntax Model for Embedded Languages}},
year = {2012}
}
@article{Turon2013,
author = {Turon, Aaron and Thamsborg, Jacob and Ahmed, Amal and Birkedal, Lars and Dreyer, Derek},
isbn = {9781450318327},
keywords = {data abstraction,fine-grained concurrency,linearizability,local state,logical relations,refinement,separation logic},
pages = {1--14},
title = {{Logical Relations for Fine-Grained Concurrency}},
year = {2013}
}
@article{Sridharan,
author = {Sridharan, Manu and Dolby, Julian and Chandra, Satish and Sch{\"{a}}fer, Max and Tip, Frank},
keywords = {call graph construction,javascript,points-to analysis},
pages = {1--25},
title = {{Correlation Tracking for Points-To Analysis of JavaScript}}
}
@article{Cox,
author = {Cox, Arlen and Chang, Bor-yuh Evan and Li, Huisong and Rival, Xavier},
title = {{Abstract Domains and Solvers for Sets Reasoning}}
}
@article{Beyer2008,
abstract = {We present and evaluate a framework and tool for combining multiple program analyses which allows the dynamic (on-line) adjustment of the precision of each analysis depending on the accumulated results. For example, the explicit tracking of the values of a variable may be switched off in favor of a predicate abstraction when and where the number of different variable values that have been encountered has exceeded a specified threshold. The method is evaluated on verifying the SSH client/server software and shows significant gains compared with predicate abstraction-based model checking.},
author = {Beyer, Dirk and Henzinger, Thomas A. and Th{\'{e}}oduloz, Gr{\'{e}}gory},
doi = {10.1109/ASE.2008.13},
isbn = {9781424421886},
issn = {1938-4300},
journal = {ASE 2008 - 23rd IEEE/ACM International Conference on Automated Software Engineering, Proceedings},
pages = {29--38},
title = {{Program analysis with dynamic precision adjustment}},
year = {2008}
}
@article{Tiwari2007,
abstract = {This paper presents the foundations for using automated deduction technology in static program analysis. The central principle is the use of logical lattices – a class of lattices defined on logical formulas in a logical theory – in an abstract interpretation framework. Abstract interpretation over logical lattices, called logical interpretation, raises new challenges for theorem proving. We present an overview of some of the existing results in the field of logical interpretation and outline some requirements for building expressive and scalable logical interpreters.},
author = {Tiwari, Ashish and Gulwani, Sumit},
doi = {10.1007/978-3-540-73595-3},
isbn = {978-3-540-73594-6},
issn = {0302-9743},
journal = {Automated Deduction},
pages = {147--166},
title = {{Logical Interpretation: Static Program Analysis Using Theorem Proving}},
volume = {4603},
year = {2007}
}
@article{Madhavapeddy2014,
abstract = {What if all the software layers in a virtual appliance were compiled within the same safe, high-level language framework?},
author = {Madhavapeddy, Anil and Scott, David J.},
doi = {10.1145/2541883.2541895},
issn = {00010782},
journal = {Communications of the ACM},
month = {jan},
number = {1},
pages = {61--69},
title = {{Unikernels}},
url = {http://doi.acm.org/10.1145/2541883.2541895{\%}5Cnhttp://dl.acm.org/citation.cfm?doid=2541883.2541895 http://dl.acm.org/citation.cfm?doid=2541883.2541895},
volume = {57},
year = {2014}
}
@book{Krebbers2015,
author = {Krebbers, Robbert Jan},
isbn = {9789462599031},
title = {{The C standard formalized in Coq}},
year = {2015}
}
@article{Klein2009,
abstract = {Complete formal verification is the only known way to guarantee that a system is free of programming errors. We present our experience in performing the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, and hardware, and we used a unique design approach that fuses formal and operating systems techniques. To our knowledge, this is the first formal proof of functional correctness of a complete, general-purpose operating-system kernel. Functional correctness means here that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as the kernel will never crash, and it will never perform an unsafe operation. It also proves much more: we can predict precisely how the kernel will behave in every possible situation. seL4, a third-generation microkernel of L4 provenance, comprises 8,700 lines of C code and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels.},
author = {Klein, Gerwin and Elphinstone, Kevin and Heiser, Gernot and Andronick, June and Cock, David and Derrin, Philip and Elkaduwe, Dhammika and Engelhardt, Kai and Kolanski, Rafal and Norrish, Michael and Sewell, Thomas and Tuch, Harvey and Winwood, Simon},
doi = {10.1145/1629575.1629596},
isbn = {9781605587523},
issn = {00010782},
journal = {Proceedings of the ACM SIGOPS 22nd Symposium on Operating System Principles},
keywords = {hol,isabelle,l4,microkernel,sel4},
pages = {207--220},
title = {{seL4: Formal verification of an OS kernel}},
url = {http://dl.acm.org/citation.cfm?id=1629596},
year = {2009}
}
@article{Wang2014,
author = {Wang, Xi and Lazar, David and Zeldovich, Nickolai and Chlipala, Adam and Tatlock, Zachary},
isbn = {9781931971164},
title = {{Jitk: A Trustworthy In-Kernel Interpreter Infrastructure}},
year = {2014}
}
@article{Maus2011,
author = {Maus, Stefan},
number = {September},
title = {{Verification of Hypervisor Subroutines written in Assembler}},
year = {2011}
}
@article{Xu,
author = {Xu, Fengwei and Fu, Ming and Feng, Xinyu},
title = {{A Practical Verification Framework for Preemptive OS Kernels (Technical Report)}}
}
@article{Degenbaev2012,
author = {Degenbaev, Ulan},
title = {{Formal specification of the x86 instruction set architecture}},
year = {2012}
}
@misc{NovaSpec,
title = {{Formal Nova interface specification}}
}
@article{Leinenbach2009,
abstract = {VCC is an industrial-strength verification suite for the formal verification of concurrent, low-level C code. It is being developed by Microsoft Research, Redmond, and the European Microsoft Innovation Center, Aachen. The development is driven by two applications from the Verisoft XT project: the Microsoft Hyper-V Hypervisor and SYSGO's PikeOS micro kernel. This paper gives a brief overview on the Hypervisor with a special focus on verification related challenges this kind of low-level software poses. It discusses how the design of VCC addresses these challenges, and highlights some specific issues of the Hypervisor verification and how they can be solved with VCC.},
author = {Leinenbach, Dirk and Santen, Thomas},
doi = {10.1007/978-3-642-05089-3_51},
isbn = {3642050883},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {806--809},
title = {{Verifying the Microsoft Hyper-V hypervisor with VCC}},
volume = {5850 LNCS},
year = {2009}
}
@article{Freitas2011,
abstract = {This paper reports on the Xenon project's use of formal methods. Xenon is a higher-assurance secure hypervisor based on re-engineering the Xen open-source hypervisor. The Xenon project used formal specifications both for assurance and as guides for security re-engineering. We formally modelled the fundamental definition of security, the hypercall interface behaviour, and the internal modular design. We used three formalisms: CSP , Z, and Circus for this work. Circus is a combination of Standard Z, CSP with its semantics given in Hoare and He's unifying theories of programming. Circus is suited for both event-based and state-based modelling. Here, we report our experiences to date with using these formalisms for assurance.},
author = {Freitas, Leo and McDermott, John},
doi = {10.1007/s10009-011-0195-9},
isbn = {1000901101959},
issn = {14332779},
journal = {International Journal on Software Tools for Technology Transfer},
keywords = {Circus,Formal modelling,Separation kernel,Theorem proving,Virtualisation,Xenon},
number = {5},
pages = {463--489},
title = {{Formal methods for security in the Xenon hypervisor}},
volume = {13},
year = {2011}
}
@article{Li,
author = {Li, Bojie and Tan, Kun and Chen, Enhong},
isbn = {9781450341936},
keywords = {compiler,network function virtualization,reconfigurable},
title = {{ClickNP: Highly Flexible and High Performance Network Processing with Reconfigurable Hardware}}
}
@book{Wester,
author = {Wester, Rinse},
isbn = {9789036538879},
title = {{A transformation-based approach to hardware design using higher-order functions}}
}
@incollection{Xu2016,
author = {Xu, Fengwei and Fu, Ming and Feng, Xinyu and Zhang, Xiaoran and Zhang, Hui and Li, Zhaohui},
booktitle = {Computer Aided Verification},
doi = {10.1007/978-3-319-41540-6_4},
pages = {59--79},
title = {{A Practical Verification Framework for Preemptive OS Kernels}},
year = {2016}
}
@article{Hall,
author = {Du, Wenliang and Atallah, Mikhail J},
keywords = {privacy,secure multi-party computation},
title = {{Secure Multi-Party Computation Problems and Their Applications: A Review and Open Problems}},
year = {2001}
@article{Tassarotti,
author = {Tassarotti, Joseph and Jung, Ralf and Harper, Robert},
title = {{A Higher-Order Logic for Concurrent Termination-Preserving Refinement}}
}
@article{Aydemir,
author = {Aydemir, Brian and Chargu{\'{e}}raud, Arthur and Pierce, Benjamin C and Pollack, Randy and Weirich, Stephanie},
isbn = {9781595936899},
keywords = {binding,coq,locally nameless},
title = {{Engineering Formal Metatheory}}
}
@article{Germane2014,
author = {Germane, Kimball and Might, Matthew},
doi = {10.1017/S0956796814000227},
journal = {Journal of Functional Programming},
number = {4},
pages = {423--433},
title = {{Deletion: The curse of the red-black tree}},
volume = {24},
year = {2014}
}
@article{Koeplingera,
author = {Koeplinger, David and Delimitrou, Christina and Kozyrakis, Christos},
title = {{Automatic Generation of Efficient Accelerators for Reconfigurable Hardware}}
}
@article{Motara2011,
archivePrefix = {arXiv},
arxivId = {arXiv:1201.5728v1},
author = {Motara, Yusuf Moosa},
eprint = {arXiv:1201.5728v1},
pages = {1--14},
title = {{Functional Programming and Security}},
year = {2011}
}
@article{Yang2015b,
abstract = {As a solution to the problem of information leaks, I propose a policy-agnostic programming paradigm that enforces security and privacy policies by construction. I present the implementation of this paradigm in a new language, Jeeves, that automatically enforces information flow policies describing how sensitive values may flow through computations. In Jeeves, the programmer specifies expressive information flow policies separately from other functionality and relies on the language runtime to customize program behavior based on the policies. Jeeves allows programmers to implement information flow policies once instead of as repeated checks and filters across the program. To provide strong guarantees about Jeeves programs, I present a formalization of the dynamic semantics of Jeeves, define non-interference and policy compliance properties, and provide proofs that Jeeves enforces these properties. To demonstrate the practical feasibility of policy-agnostic programming, I present Jacqueline, a web framework built on Jeeves that enforces policies in database-backed web applications. I provide a formalization of Jacqueline as an extension of Jeeves to include relational operators and proofs that this preserves the policy compliance guarantees. Jacqueline enforces information flow policies end-to-end and runs using an unmodified Python interpreter and SQL database. I show, through several case studies, that Jacqueline reduces the amount of policy code required while incurring limited overheads.},
author = {Yang, Jean},
title = {{Preventing Information Leaks with Policy-Agnostic Programming}},
year = {2015}
}
@article{Fournet2011,
abstract = {Type systems are effective tools for verifying the security of cryptographic programs. They provide automation, modularity and scalability, and have been applied to large security protocols. However, they traditionally rely on abstract assumptions on the underlying cryptographic primitives, expressed in symbolic models. Cryptographers usually reason on security assumptions using lower level, computational models that precisely account for the complexity and success probability of attacks. These models are more realistic, but they are harder to formalize and automate. We present the first modular automated program verification method based on standard cryptographic assumptions. We show how to verify ideal functionalities and protocols written in ML by typing them against new cryptographic interfaces using F7, a refinement type checker coupled with an SMT-solver. We develop a probabilistic core calculus for F7 and formalize its type safety in COQ. We build typed module and interfaces for MACs, signatures, and encryptions, and establish their authenticity and secrecy properties. We relate their ideal functionalities and concrete implementations, using game-based program transformations behind typed interfaces. We illustrate our method on a series of protocol implementations.},
author = {Fournet, C{\'{e}}dric},
doi = {10.1145/2046707.2046746},
isbn = {9781450309486},
issn = {15437221},
keywords = {cryptography,refinement types,security protocols},
pages = {341--350},
title = {{Modular Code-Based Cryptographic Verification}},
year = {2011}
}
@article{Dreyer2005,
abstract = {Ph.D. thesis, CMU-CS-05-131, May 2005. In this dissertation I contribute to the understanding and evolution of the ML module system by developing a unifying account of the ML module system.},
author = {Dreyer, Derek},
number = {May},
title = {{Understanding and Evolving the ML Module System}},
year = {2005}
}
@article{Swamy2013,
abstract = {Modern programming languages, ranging from Haskell and ML, to JavaScript, C{\{}{\#}{\}} and Java, all make extensive use of higher-order state. This paper advocates a new verification methodology for higher-order stateful programs, based on a new monad of predicate transformers called the Dijkstra monad. Using the Dijkstra monad has a number of benefits. First, the monad naturally yields a weakest pre-condition calculus. Second, the computed specifications are structurally simpler in several ways, e.g., single-state post-conditions are sufficient (rather than the more complex two-state post-conditions). Finally, the monad can easily be varied to handle features like exceptions and heap invariants, while retaining the same type inference algorithm. We implement the Dijkstra monad and its type inference algorithm for the F* programming language. Our most extensive case study evaluates the Dijkstra monad and its F* implementation by using it to verify JavaScript programs. Specifically, we describe a tool chain that translates programs in a subset of JavaScript decorated with assertions and loop invariants to F*. Once in F*, our type inference algorithm computes verification conditions and automatically discharges their proofs using an SMT solver. We use our tools to prove that a core model of the JavaScript runtime in F* respects various invariants and that a suite of JavaScript source programs are free of runtime errors.},
author = {Swamy, Nikhil and Weinberger, Joel and Schlesinger, Cole and Chen, Juan and Livshits, Benjamin},
doi = {10.1145/2491956.2491978},
isbn = {9781450320146},
issn = {03621340},
journal = {ACM SIGPLAN Conference on Programming Language Design and Implementation},
keywords = {hoare monad,predicate transformer,refinement types},
pages = {387},
title = {{Verifying higher-order programs with the dijkstra monad}},
url = {http://research.microsoft.com/apps/pubs/default.aspx?id=189686},
year = {2013}
}
@article{Swamy2014,
abstract = {JavaScript's flexible semantics makes writing correct code hard and writing secure code extremely difficult. To address the former problem, various forms of gradual typing have been proposed, such as Closure and TypeScript. However, supporting all common programming idioms is not easy; for example, TypeScript deliberately gives up type soundness for programming convenience. In this paper, we propose a gradual type system and implementation techniques that provide important safety and security guarantees. We present TS*, a gradual type system and source-to-source compiler for JavaScript. In contrast to prior gradual type systems, TS* features full runtime reflection over three kinds of types: (1) simple types for higher-order functions, recursive datatypes and dictionary-based extensible records; (2) the type any, for dynamically type-safe TS* expressions; and (3) the type un, for untrusted, potentially malicious JavaScript contexts in which TS* is embedded. After type-checking, the compiler instruments the program with various checks to ensure the type safety of TS* despite its interactions with arbitrary JavaScript contexts, which are free to use eval, stack walks, prototype customizations, and other offensive features. The proof of our main theorem employs a form of type-preserving compilation, wherein we prove all the runtime invariants of the translation of TS* to JavaScript by showing that translated programs are well-typed in JS*, a previously proposed dependently typed language for proving functional correctness of JavaScript programs. We describe a prototype compiler, a secure runtime, and sample applications for TS*. Our examples illustrate how web security patterns that developers currently program in JavaScript (with much difficulty and still with dubious results) can instead be programmed naturally in TS*, retaining a flavor of idiomatic JavaScript, while providing strong safety guarantees by virtue of typing.},
author = {Swamy, Nikhil and Fournet, C{\'{e}}dric and Rastogi, Aseem and Bhargavan, Karthikeyan and Chen, Juan and Strub, Pierre-Yves and Bierman, Gavin},
doi = {10.1145/2535838.2535889},
isbn = {9781450325448},
issn = {15232867},
journal = {Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL '14},
keywords = {compilers,language-based security,type systems},
pages = {425--437},
title = {{Gradual typing embedded securely in JavaScript}},
url = {http://dl.acm.org/citation.cfm?doid=2535838.2535889},
year = {2014}
}
@article{Strub2012,
author = {Strub, Pierre-Yves and Swamy, Nikhil and Fournet, C{\'{e}}dric and Chen, Juan},
doi = {10.1145/2103621.2103723},
isbn = {9781450310833},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {certification,dependent types,refinement types},
month = {jan},
number = {1},
pages = {571},
title = {{Self-Certification: Bootstrapping Certified Typecheckers in F* with Coq}},
url = {http://dl.acm.org/citation.cfm?doid=2103621.2103723},
volume = {47},
year = {2012}
}
@article{Chugh2013,
author = {Chugh, Ravi},
title = {{Nested Refinement Types for JavaScript}},
year = {2013}
}
@article{Fournet2013,
abstract = {Many tools allow programmers to develop applications in high-level languages and deploy them in web browsers via compilation to JavaScript. While practical and widely used, these compilers are ad hoc: no guarantee is provided on their correctness for whole programs, nor their security for programs executed within arbitrary JavaScript contexts. This paper presents a compiler with such guarantees. We compile an ML-like language with higher-order functions and references to JavaScript, while preserving all source program properties. Relying on type-based invariants and applicative bisimilarity, we show full abstraction: two programs are equivalent in all source contexts if and only if their wrapped translations are equivalent in all JavaScript contexts. We evaluate our compiler on sample programs, including a series of secure libraries.},
author = {Fournet, C{\'{e}}dric and Swamy, Nikhil and Chen, Juan and Dagand, Pierre-Evariste and Strub, Pierre-Yves and Livshits, Benjamin},
doi = {10.1145/2429069.2429114},
isbn = {9781450318327},
issn = {0362-1340},
journal = {Proceedings of the 40th annual ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '13},
keywords = {full abstraction,program equivalence,refinement types},
pages = {371},
title = {{Fully abstract compilation to JavaScript}},
url = {http://dl.acm.org/citation.cfm?doid=2429069.2429114},
year = {2013}
}
@article{Wangb,
author = {Wang, Peng and Parno, Bryan},
title = {{Extracting from F* to C: A Progress Report}}
}
@article{Liu,
author = {Liu, Chang and Harris, Austin and Maas, Martin and Hicks, Michael and Tiwari, Mohit and Shi, Elaine},
isbn = {9781450328357},
title = {{GhostRider: A Hardware-Software System for Memory Trace Oblivious Computation}}
}
@article{Wadler1998,
abstract = {John Hughes has made pretty printers one of the prime demonstrations of using combinators to develop a library, and algebra to implement it. This note presents a new design for pretty printers which improves on Hughes's classic design. The new design is based on a single concatenation operator which is associative and has a left and right unit. Hughes's design requires two separate operators for concatenation, where horizontal concatenation has a right unit but no left unit, and vertical concatenation has neither unit.},
author = {Wadler, Philip},
journal = {Journal of Functional Programming},
pages = {223--244},
title = {{A prettier printer}},
year = {1998}
}
@article{Landin1966,
abstract = {A family of unimplemented computing languages is described that is intended to span differences of application area by a unified framework. This framework dictates the rules about the uses of user-coined names, and the conventions about characterizing functional relationships. Within this framework the design of a specific language splits into two independent parts. One is the choice of written appearances of programs (or more generally, their physical representation). The other is the choice of the abstract entities (such as numbers, character-strings, list of them, functional relations among them) that can be referred to in the language. The system is biased towards “expressions” rather than “statements.” It includes a nonprocedural (purely functional) subsystem that aims to expand the class of users' needs that can be met by a single print-instruction, without sacrificing the important properties that make conventional right-hand-side expressions easy to construct and understand.},
author = {Landin, P. J.},
doi = {10.1145/365230.365257},
issn = {0001-0782},
journal = {Commun. ACM},
number = {3},
pages = {157--166},
title = {{The Next 700 Programming Languages}},
url = {http://doi.acm.org/10.1145/365230.365257{\%}5Cnhttp://dl.acm.org/ft{\_}gateway.cfm?id=365257{\&}type=pdf},
volume = {9},
year = {1966}
}
@article{Cadar2011,
abstract = {We present results for the "Impact Project Focus Area" on the topic of symbolic execution as used in software testing. Symbolic execution is a program analysis technique introduced in the 70s that has received renewed interest in recent years, due to algorithmic advances and increased availability of computational power and constraint solving technology. We review classical symbolic execution and some modern extensions such as generalized symbolic execution and dynamic test generation. We also give a preliminary assessment of the use in academia, research labs, and industry.},
author = {Cadar, Cristian and Godefroid, Patrice and Khurshid, Sarfraz and Pasareanu, Corina S. and Sen, Koushik and Tillmann, Nikolai and Visser, Willem},
doi = {10.1145/1985793.1985995},
isbn = {978-1-4503-0445-0},
issn = {0270-5257},
journal = {2011 33rd International Conference on Software Engineering (ICSE)},
keywords = {dynamic test generation,generalized symbolic execution},
pages = {1066--1071},
title = {{Symbolic execution for software testing in practice: preliminary assessment}},
year = {2011}
}
@article{Albarghouthi2016,
abstract = {Many problems in program analysis, verification, and synthesis require inferring specifications of unknown procedures. Motivated by a broad range of applications, we formulate the problem of maximal specification inference: Given a postcondition ϕ and a program P calling a set of unknown procedures F1, . . . , Fn, what are the most permissive specifications of procedures Fi that ensure correctness of P? In other words, we are looking for the smallest number of assumptions we need to make about the behaviours of Fi in order to prove that P satisfies its postcondition. To solve this problem, we present a novel approach that utilizes a counterexample-guided inductive synthesis loop and reduces the maximal specification inference problem to multi-abduction. We formulate the novel notion of multi-abduction as a generalization of classical logical abduction and present an algorithm for solving multi-abduction problems. On the practical side, we evaluate our specification inference technique on a range of benchmarks and demonstrate its ability to synthesize specifications of kernel routines invoked by device drivers.},
author = {Albarghouthi, Aws and Dillig, Isil and Gurfinkel, Arie},
doi = {10.1145/2914770.2837628},
isbn = {978-1-4503-3549-2},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {specification,synthesis,verification},
number = {1},
pages = {789--801},
title = {{Maximal specification synthesis}},
url = {http://dl.acm.org/citation.cfm?id=2914770.2837628},
volume = {51},
year = {2016}
}
@article{Cao2015,
author = {Cao, Jingyuan and Fu, Ming and Feng, Xinyu},
doi = {10.1145/2676724.2693162},
isbn = {978-1-4503-3296-5},
journal = {Proceedings of the 2015 Conference on Certified Programs and Proofs},
keywords = {c program verification,interactive proof assistants,practical tactics,separation logic},
pages = {97--108},
title = {{Practical Tactics for Verifying C Programs in Coq}},
url = {http://doi.acm.org/10.1145/2676724.2693162},
year = {2015}
}
@article{Lampropoulos2016,
abstract = {Property-based random testing in the style of QuickCheck demands efficient generators for well-distributed random data satisfying complex logical predicates, but writing these generators can be difficult and error prone. We propose a better alternative: a domain-specific language in which generators are expressed by decorating predicates with lightweight annotations to control both the distribution of generated values as well as the amount of constraint solving that happens before each variable is instantiated. This language, called Luck, makes generators easier to write, read, and maintain. We give Luck a formal semantics and prove several fundamental properties, including the soundness and completeness of random generation with respect to a standard predicate semantics. We evaluate Luck on common examples from the property-based testing literature and on two significant case studies; we show that it can be used in complex domains with comparable bug-finding effectiveness and a significant reduction in testing code size, compared to handwritten generators.},
archivePrefix = {arXiv},
arxivId = {1607.05443},
author = {Lampropoulos, Leonidas and Gallois-Wong, Diane and Hritcu, Catalin and Hughes, John and Pierce, Benjamin C. and Xia, Li-yao},
eprint = {1607.05443},
pages = {1--28},
title = {{Beginner's Luck: A Language for Property-Based Generators}},
url = {http://arxiv.org/abs/1607.05443},
year = {2016}
}
@article{DSilva2014,
abstract = {This article introduces an abstract interpretation framework that codifies the operations in SAT and SMT solvers in terms of lattices, transformers and fixed points. We develop the idea that a formula denotes a set of models in a universe of structures. This set of models has characterizations as fixed points of deduction, abduction and quantification transformers. A wide range of satisfiability procedures can be understood as computing and refining approximations of such fixed points. These include procedures in the DPLL family, those for preprocessing and inprocessing in SAT solvers, decision procedures for equality logics, weak arithmetics, and procedures for approximate quantification. Our framework provides a unified, mathematical basis for studying and combining program analysis and satisfiability procedures. A practical benefit of our work is a new, logic-agnostic architecture for implementing solvers.},
author = {D'Silva, Vijay and Haller, Leopold and Kroening, Daniel},
doi = {10.1145/2535838.2535868},
isbn = {9781450325448},
issn = {07308566},
journal = {Principles of Programming Languages},
keywords = {abstract interpretation,decision procedures,fixed points,satisfiability},
pages = {139--150},
title = {{Abstract Satisfaction}},
url = {http://dl.acm.org/citation.cfm?doid=2535838.2535868},
volume = {49},
year = {2014}
}
@article{Rompf2015,
abstract = {Scala's type system unifies aspects of ML-style module systems, object-oriented, and functional programming paradigms. The DOT (Dependent Object Types) family of calculi has been proposed as a new theoretic foundation for Scala and similar expressive languages. Unfortunately, type soundness has only been established for a very restricted subset of DOT (muDOT), and it has been shown that adding important Scala features such as type refinement or extending subtyping to a lattice breaks at least one key metatheoretic property such as narrowing or subtyping transitivity, which are usually required for a type soundness proof. The first main contribution of this paper is to demonstrate how, perhaps surprisingly, even though these properties are lost in their full generality, a richer DOT calculus that includes both type refinement and a subtyping lattice with intersection types can still be proved sound. The key insight is that narrowing and subtyping transitivity only need to hold for runtime objects, but not for code that is never executed. Alas, the dominant method of proving type soundness, Wright and Felleisen's syntactic approach, is based on term rewriting, which does not make an adequate distinction between runtime and type assignment time. The second main contribution of this paper is to demonstrate how type soundness proofs for advanced, polymorphic, type systems can be carried out with an operational semantics based on high-level, definitional interpreters, implemented in Coq. We present the first mechanized soundness proof for System F{\textless}: based on a definitional interpreter. We discuss the challenges that arise in this setting, in particular due to abstract types, and we illustrate in detail how DOT-like calculi emerge from straightforward generalizations of the operational aspects of F{\textless}:.},
archivePrefix = {arXiv},
arxivId = {1510.05216},
author = {Rompf, Tiark and Amin, Nada},
eprint = {1510.05216},
number = {July},
pages = {1--13},
title = {{From F to DOT: Type Soundness Proofs with Definitional Interpreters}},
url = {http://arxiv.org/abs/1510.05216},
year = {2015}
}
@article{Konnov2016,
abstract = {Distributed algorithms have many mission-critical applications ranging from embedded systems and replicated databases to cloud computing. Due to asynchronous communication, process faults, or network failures, these algorithms are difficult to design and verify. Many algorithms achieve fault tolerance by using threshold guards that, for instance, ensure that a process waits until it has received an acknowledgment from a majority of its peers. Consequently, domain-specific languages for fault-tolerant distributed systems offer language support for threshold guards. We introduce an automated method for model checking of safety and liveness of threshold-guarded distributed algorithms in systems where the number of processes and the fraction of faulty processes are parameters. Our method is based on a short counterexample property: if a distributed algorithm violates a temporal specification (in a fragment of LTL), then there is a counterexample whose length is bounded and independent of the parameters. We prove this property by (i) characterizing executions depending on the structure of the temporal formula, and (ii) using commutativity of transitions to accelerate and shorten executions. We extended the ByMC toolset (Byzantine Model Checker) with our technique, and verified liveness and safety of 10 prominent fault-tolerant distributed algorithms, most of which were out of reach for existing techniques.},
archivePrefix = {arXiv},
arxivId = {1608.05327},
author = {Konnov, Igor and Lazic, Marijana and Veith, Helmut and Widder, Josef},
doi = {10.1145/3009837.3009860},
eprint = {1608.05327},
keywords = {byzantine faults,fault-,parameterized model checking,reliable broadcast,tolerant distributed algorithms},
title = {{A Short Counterexample Property for Safety and Liveness Verification of Fault-tolerant Distributed Algorithms}},
url = {http://arxiv.org/abs/1608.05327{\%}0Ahttp://dx.doi.org/10.1145/3009837.3009860},
year = {2016}
}
@article{Hankin1996,
author = {Hankin, Chris and Palsberg, Jens and others},
journal = {ACM Computing Surveys},
number = {4},
pages = {644--652},
title = {{Strategic Directions in Research on Programming Languages}},
volume = {28},
year = {1996}
}
@article{Takikawa2016,
abstract = {Programmers have come to embrace dynamically-typed languages for prototyping and delivering large and complex systems. When it comes to maintaining and evolving these systems, the lack of explicit static typing becomes a bottleneck. In response, researchers have explored the idea of gradually-typed programming languages which allow the incremental addition of type annotations to software written in one of these untyped languages. Some of these new, hybrid languages insert run-time checks at the boundary between typed and untyped code to establish type soundness for the overall system. With sound gradual typing, programmers can rely on the language implementation to provide meaningful error messages when type invariants are violated. While most research on sound gradual typing remains theoretical, the few emerging implementations suffer from performance overheads due to these checks. None of the publications on this topic comes with a comprehensive performance evaluation. Worse, a few report disastrous numbers. In response, this paper proposes a method for evaluating the performance of gradually-typed programming languages. The method hinges on exploring the space of partial conversions from untyped to typed. For each benchmark, the performance of the different versions is reported in a synthetic metric that associates runtime overhead to conversion effort. The paper reports on the results of applying the method to Typed Racket, a mature implementation of sound gradual typing, using a suite of real-world programs of various sizes and complexities. Based on these results the paper concludes that, given the current state of implementation technologies, sound gradual typing faces significant challenges. Conversely, it raises the question of how implementations could reduce the overheads associated with soundness and how tools could be used to steer programmers clear from pathological cases.},
author = {Takikawa, Asumu and Feltey, Daniel and Greenman, Ben and New, Max S. and Vitek, Jan and Felleisen, Matthias},
doi = {10.1145/2914770.2837630},
isbn = {978-1-4503-3549-2},
issn = {03621340},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
number = {1},
pages = {456--468},
title = {{Is Sound Gradual Typing Dead?}},
url = {http://dl.acm.org/citation.cfm?id=2914770.2837630},
volume = {51},
year = {2016}
}
@article{Dinsdale-Young2013,
abstract = {Compositional abstractions underly many reasoning principles for concurrent programs: the concurrent environment is abstracted in order to reason about a thread in isolation; and these abstractions are composed to reason about a program consisting of many threads. For instance, separation logic uses formulae that describe part of the state, abstracting the rest; when two threads use disjoint state, their specifications can be composed with the separating conjunction. Type systems abstract the state to the types of variables; threads may be composed when they agree on the types of shared variables. In this paper, we present the "Concurrent Views Framework", a metatheory of concurrent reasoning principles. The theory is parameterised by an abstraction of state with a notion of composition, which we call views. The metatheory is remarkably simple, but highly applicable: the rely-guarantee method, concurrent separation logic, concurrent abstract predicates, type systems for recursive references and for unique pointers, and even an adaptation of the Owicki-Gries method can all be seen as instances of the Concurrent Views Framework. Moreover, our metatheory proves each of these systems is sound without requiring induction on the operational semantics.},
author = {Dinsdale-Young, Thomas and Birkedal, Lars and Gardner, Philippa and Parkinson, Matthew and Yang, Hongseok},
doi = {10.1145/2429069.2429104},
isbn = {978-1-4503-1832-7},
issn = {15232867},
journal = {POPL: Principles of Programming Languages},
keywords = {axiomatic semantics,compositional reasoning,concurrency},
pages = {287--300},
title = {{Views: compositional reasoning for concurrent programs}},
url = {http://doi.acm.org/10.1145/2429069.2429104{\%}5Cnhttp://dl.acm.org/ft{\_}gateway.cfm?id=2429104{\&}type=pdf},
year = {2013}
}
@article{Johnson2015,
author = {Johnson, J. Ian},
number = {April},
title = {{Automating Abstract Interpretation}},
year = {2015}
}
@article{Cimini2016,
author = {Cimini, Matteo and Siek, Jeremy G.},
doi = {10.1145/2837614.2837632},
isbn = {9781450335492},
issn = {0362-1340},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
number = {1},
pages = {443--455},
title = {{The Gradualizer: a methodology and algorithm for generating gradual type systems}},
url = {http://dl.acm.org/citation.cfm?id=2837614.2837632},
volume = {51},
year = {2016}
}
@article{Lorenzen2016,
author = {Lorenzen, Florian and Erdweg, Sebastian},
doi = {10.1145/2837614.2837644},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {automatic verification,desugaring,language extensibility,macros,metaprogramming,quasiquoting,scala,sugarj,template haskell,type soundness},
pages = {204--216},
title = {{Sound type-dependent syntactic language extension}},
url = {http://dl.acm.org/citation.cfm?doid=2837614.2837644},
year = {2016}
}
@inproceedings{Detlefs1992,
author = {Detlefs, David},
booktitle = {Proceedings of the C++ Conference. Portland, OR, USA, August 1992},
title = {{Garbage Collection and Run-time Typing as a C++ Library}},
year = {1992}
}
@inproceedings{Zhang2016,
address = {New York, New York, USA},
author = {Zhang, Zhen},
booktitle = {Companion Proceedings of the 2016 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity - SPLASH Companion 2016},
doi = {10.1145/2984043.2998545},
isbn = {9781450344371},
keywords = {data-flow analysis,definition language,interface,javascript,program verification,static analysis,webidl},
pages = {63--64},
publisher = {ACM Press},
title = {{xWIDL: modular and deep JavaScript API misuses checking based on extended WebIDL}},
url = {http://dl.acm.org/citation.cfm?doid=2984043.2998545},
year = {2016}
}
@article{Cadar2008,
abstract = {We present a new symbolic execution tool, KLEE, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs. We used KLEE to thoroughly check all 89 stand-alone programs in the GNU COREUTILS utility suite, which form the core user-level environment installed on millions of Unix systems, and arguably are the single most heavily tested set of open-source programs in existence. KLEE-generated tests achieve high line coverage - on average over 90{\%} per tool (median: over 94{\%}) - and significantly beat the coverage of the developers' own hand-written test suite. When we did the same for 75 equivalent tools in the BUSYBOX embedded system suite, results were even better, including 100{\%} coverage on 31 of them. We also used KLEE as a bug finding tool, applying it to 452 applications (over 430K total lines of code), where it found 56 serious bugs, including three in COREUTILS that had been missed for over 15 years. Finally, we used KLEE to crosscheck purportedly identical BUSYBOX and COREUTILS utilities, finding functional correctness errors and a myriad of inconsistencies.},
author = {Cadar, Cristian and Dunbar, Daniel and Engler, Dawson R.},
isbn = {978-1-931971-65-2},
journal = {Proceedings of the 8th USENIX conference on Operating systems design and implementation},
pages = {209--224},
title = {{KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs}},
url = {http://portal.acm.org/citation.cfm?id=1855756},
year = {2008}
}
@article{Altenkirch2016,
author = {Altenkirch, Thorsten and Kaposi, Ambrus},
doi = {10.1145/2837614.2837638},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {higher inductive types,homotopy type theory,ical relations,log-,metaprogramming},
pages = {18--29},
title = {{Type theory in type theory using quotient inductive types}},
url = {http://dl.acm.org/citation.cfm?doid=2837614.2837638},
year = {2016}
}
@article{Chlipala2010,
abstract = {Dependent types provide a strong foundation for specifying and verifying rich properties of programs through type-checking. The earliest implementations combined dependency, which allows types to mention program variables; with type-level computation, which facilitates expressive specifications that compute with recursive functions over types. While many recent applications of dependent types omit the latter facility, we argue in this paper that it deserves more attention, even when implemented without dependency. In particular, the ability to use functional programs as specifications enables statically-typed metaprogramming: programs write programs, and static type-checking guarantees that the generating process never produces invalid code. Since our focus is on generic validity properties rather than full correctness verification, it is possible to engineer type inference systems that are very effective in narrow domains. As a demonstration, we present Ur, a programming language designed to facilitate metaprogramming with first-class records and names. On top of Ur, we implement Ur/Web, a special standard library that enables the development of modern Web applications. Ad-hoc code generation is already in wide use in the popular Web application frameworks, and we show how that generation may be tamed using types, without forcing metaprogram authors to write proofs or forcing metaprogram users to write any fancy types.},
author = {Chlipala, Adam},
doi = {10.1145/1809028.1806612},
isbn = {978-1-4503-0019-3},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {dependent types,metaprogramming},
number = {6},
pages = {122},
title = {{Ur: Statically-Typed Metaprogramming with Type-Level Record Computation}},
url = {http://portal.acm.org/citation.cfm?doid=1809028.1806612},
volume = {45},
year = {2010}
}
@article{Kennedy2013,
abstract = {We describe a Coq formalization of a subset of the x86 architecture. One emphasis of the model is brevity: using dependent types, type classes and notation we give the x86 semantics a makeover that counters its reputation for baroqueness. We model bits},
author = {Kennedy, Andrew and Benton, Nick and Jensen, Jonas B and Dagand, Pierre-Evariste},
doi = {10.1145/2505879.2505897},
isbn = {9781450321549},
issn = {00220000},
journal = {PPDP '13: Proceedings of the 15th Symposium on Principles and Practice of Declarative Programming},
pages = {13--24},
title = {{Coq: the world's best macro assembler?}},
url = {http://dl.acm.org/citation.cfm?doid=2505879.2505897},
year = {2013}
}
@article{Jones2007,
abstract = {Haskell's popularity has driven the need for ever more expressive type system features, most of which threaten the decidability and practicality of Damas-Milner type inference. One such feature is the ability to write functions with higher-rank types—that is, functions that take polymorphic functions as their arguments. Complete type inference is known to be undecidable for higher-rank (impredicative) type systems, but in practice programmers are more than willing to add type annotations to guide the type inference engine, and to document their code. However, the choice of just what annotations are required, and what changes are required in the type system and its inference algorithm, has been an ongoing topic of research. We take as our starting point a $\lambda$-calculus proposed by Odersky and L{\"{a}}ufer. Their system supports arbitrary-rank polymorphism through the exploitation of type annotations on $\lambda$-bound arguments and arbitrary sub-terms. Though elegant, and more convenient than some other proposals, Odersky and L{\"{a}}ufer's system requires many annotations. We show how to use local type inference (invented by Pierce and Turner) to greatly reduce the annotation burden, to the point where higher-rank types become eminently usable. Higher-rank types have a very modest impact on type inference. We substantiate this claim in a very concrete way, by presenting a complete type-inference engine, written in Haskell, for a traditional Damas-Milner type system, and then showing how to extend it for higher-rank types. We write the type-inference engine using a monadic framework: it turns out to be a particularly compelling example of monads in action. The paper is long, but is strongly tutorial in style. Although we use Haskell as our example source language, and our implementation language, much of our work is directly applicable to any ML-like functional language.},
author = {Jones, Simon Peyton and Vytiniotis, Dimitrios and Weirich, Stephanie and Shields, Mark},
doi = {10.1017/S0956796806006034},
issn = {0956-7968},
journal = {Journal of Functional Programming},
number = {01},
pages = {1},
title = {{Practical type inference for arbitrary-rank types}},
volume = {17},
year = {2007}
}
@article{Feng2007,
abstract = {Software systems usually use many different computation features and span different abstraction levels (e.g., user code level and the runtime system level). To build foundational certified systems, it is hard to have one verification system supporting all computation features. In this paper we present an open framework for foundational proof-carrying code (FPCC). It allows program modules to be specified and certified separately using different type systems or program logics. Certified modules (code + proof) can be linked to compose fully certified systems. The framework supports modular verification and proof reuse. It is extensible, and is expressive enough to allow invariants established in verification systems to be maintained when they are embedded in. Our framework is the first FPCC framework that systematically supports interoperation between different verification systems. It is fully mechanized in the Coq proof assistant with machine-checkable soundness proof.},
author = {Feng, Xinyu and Ni, Zhaozhong and Shao, Zhong and Guo, Yu},
doi = {10.1145/1190315.1190325},
isbn = {159593393X},
journal = {Proceedings of the 2007 ACM SIGPLAN international workshop on Types in languages design and implementation - TLDI '07},
keywords = {foundational proof-carrying code,interoperability,modularity,open framework,program verifica-,tion},
pages = {67},
title = {{An open framework for foundational proof-carrying code}},
url = {http://portal.acm.org/citation.cfm?doid=1190315.1190325},
year = {2007}
}
@article{Sparks,
author = {Sparks, Zachary},
isbn = {9781605587684},
pages = {1--8},
title = {{Typestate-Oriented Programming}}
}
@inproceedings{JonesSimonPeytonMarkJones1997,
author = {Jones, Simon Peyton and Jones, Mark and Meijer, Erik},
booktitle = {Haskell Workshop},
title = {{Type Classes: an exploration of the design space}},
year = {1997}
}
@article{Guha2010,
abstract = {We reduce JavaScript to a core calculus structured as a small-step operational semantics. We present several peculiarities of the language and show that our calculus models them. We explicate the desugaring process that turns JavaScript programs into ones in the core. We demonstrate faithfulness to JavaScript using real-world test suites. Finally, we illustrate utility by defining a security property, implementing it as a type system on the core, and extending it to the full language.},
author = {Guha, Arjun and Saftoiu, Claudiu and Krishnamurthi, Shriram},
doi = {10.1007/978-3-642-14107-2_7},
isbn = {3642141064},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {126--150},
title = {{The essence of JavaScript}},
volume = {6183 LNCS},
year = {2010}
}
@article{Horn2011,
abstract = {We design a family of program analyses for JavaScript that make no approximation in matching calls with returns, exceptions with handlers, and breaks with labels. We do so by starting from an established reduction semantics for JavaScript and systematically deriving its intensional abstract interpretation. Our first step is to transform the semantics into an equivalent low-level abstract machine: the JavaScript Abstract Machine (JAM). We then give an infinite-state yet decidable pushdown machine whose stack precisely models the structure of the concrete program stack. The precise model of stack structure in turn confers precise control-flow analysis even in the presence of control effects, such as exceptions and finally blocks. We give pushdown generalizations of traditional forms of analysis such as k-CFA, and prove the pushdown framework for abstract interpretation is sound and computable.},
archivePrefix = {arXiv},
arxivId = {arXiv:1109.4467v2},
author = {{Van Horn}, David and Might, Matthew},
eprint = {arXiv:1109.4467v2},
journal = {arXiv preprint arXiv:1109.4467},
pages = {24},
title = {{Pushdown Abstractions of JavaScript}},
year = {2011}
}
@article{Stefanovic1993,
abstract = {Introduction: The UMass Garbage Collection Toolkit [4] was designed to facilitate language implementation by providing a language-independent library of collection algorithms and policies, and auxiliary data structures. Having integrated the toolkit collector into Standard ML of New Jersey, we found that the functionality of the toolkit allowed us to perform experiments revealing the nature of object allocation and object dynamics in the SML/NJ system. We explored ways to visualize the large quantities of data our instrumentation gathers. We believe that the techniques developed can be of use to the language implementor in reviewing overall performance, and to the application writer in tracking down the space behavior of the program (which, for functional languages, is often not intimately related to the source program). In the following we briefly describe the collector interface with SML, discuss the methodology of experiments, and outline the outcome of the experiments.},
author = {Stefanovic, D},
pages = {1--7},
title = {{The Garbage Collection Toolkit as an Experimentation Tool}},
url = {http://citeseer.ist.psu.edu/70158},
year = {1993}
}
@article{DeLine2004,
author = {DeLine, Robert and F{\"{a}}hndrich, Manuel},
doi = {10.1007/b98195},
isbn = {978-3-540-22159-3},
issn = {03029743},
journal = {European conference on object-oriented programming},
pages = {465--490},
title = {{Typestates for Objects}},
volume = {3086},
year = {2004}
}
@article{Jeon2016,
abstract = {Symbolic execution is a powerful program analysis technique, but it is difficult to apply to programs built using frameworks such as Swing and Android, because the framework code itself is hard to symbolically execute. The standard solution is to manually create a framework model that can be symbolically executed, but developing and maintaining a model is difficult and error-prone. In this paper, we present Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket's focus is on creating models by instantiating design patterns. Pasket takes as input class, method, and type information from the framework API, together with tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes a framework model whose behavior on the tutorial programs matches that of the original framework. We evaluated Pasket by synthesizing models for subsets of Swing and Android. Our results show that the models derived by Pasket are sufficient to allow us to use off-the-shelf symbolic execution tools to analyze Java programs that rely on frameworks.},
author = {Jeon, Jinseong and Qiu, Xiaokang and Fetter-Degges, Jonathan and Foster, Jeffrey S. and Solar-Lezama, Armando},
doi = {10.1145/2884781.2884856},
isbn = {9781450339001},
issn = {02705257},
journal = {Proceedings of the 38th International Conference on Software Engineering - ICSE '16},
keywords = {framework model,program synthesis,sketch,symbolic execution},
pages = {156--167},
title = {{Synthesizing framework models for symbolic execution}},
url = {http://dl.acm.org/citation.cfm?id=2884781.2884856},
year = {2016}
}
@incollection{Wilson1995,
author = {Wilson, Paul R. and Johnstone, Mark S. and Neely, Michael and Boles, David},
booktitle = {Memory Management},
doi = {10.1007/3-540-60368-9_19},
pages = {1--116},
title = {{Dynamic storage allocation: A survey and critical review}},
year = {1995}
}
@article{Varghese1987,
abstract = {Conventional algorithms to implement an Operating System timer module take O(n) time to start or maintain a timer, where n is the number of outstanding timers: this is expensive for large n. This paper begins by exploring the relationship between timer algorithms, time flow mechanisms used in discrete event simulations, and sorting techniques. Next a timer algorithm for small timer intervals is presented that is similar to the timing wheel technique used in logic simulations. By using a circular buffer or timing wheel, it takes O(1) time to start, stop, and maintain timers within the range of the wheel. Two extensions for larger values of the interval are described. In the first, the timer interval is hashed into a slot on the timing wheel. In the second, a hierarchy of timing wheels with different granularities is used to span a greater range of intervals. The performance of these two schemes and various implementation trade-offs are discussed.},
author = {Varghese, G. and Lauck, T.},
doi = {10.1145/37499.37504},
isbn = {089791242X},
issn = {01635980},
journal = {ACM SIGOPS Operating Systems Review},
number = {5},
pages = {25--38},
title = {{Hashed and hierarchical timing wheels: data structures for the efficient implementation of a timer facility}},
volume = {21},
year = {1987}
}
@article{Benton2009,
author = {Benton, Nick},
title = {{Step-Indexing: The Good, the Bad and the Ugly}},
year = {2009}
}
@article{Inala2015,
abstract = {In this paper, we show how synthesis can help implement interesting functions involving pattern matching and algebraic data types. One of the novel aspects of this work is the combination of type inference and counterexample-guided inductive synthesis (CEGIS) in order to support very high-level notations for describing the space of possible implementations that the synthesizer should consider. The paper also describes a set of optimizations that significantly improve the performance and scalability of the system. The approach is evaluated on a set of case studies which most notably include synthesizing desugaring functions for lambda calculus that force the synthesizer to discover Church encodings for pairs and boolean operations, as well as a procedure to generate constraints for type inference.},
archivePrefix = {arXiv},
arxivId = {1507.05527},
author = {Inala, Jeevana Priya and Qiu, Xiaokang and Lerner, Ben and Solar-Lezama, Armando},
eprint = {1507.05527},
journal = {PLDI Student Research Competition},
title = {{Type Assisted Synthesis of Recursive Transformers on Algebraic Data Types}},
url = {http://arxiv.org/abs/1507.05527},
year = {2015}
}
@article{Nanevski2010,
abstract = {Most systems based on separation logic consider only restricted forms of implication or non-separating conjunction, as full support for these connectives requires a non-trivial notion of variable context, inherited from the logic of bunched implications (BI). We show that in an expressive type theory such as Coq, one can avoid the intricacies of BI, and support full separation logic very efficiently, using the native structuring primitives of the type theory. Our proposal uses reflection to enable equational reasoning about heaps, and Hoare triples with binary postconditions to further facilitate it. We apply these ideas to Hoare Type Theory, to obtain a new proof technique for verification of higher-order imperative programs that is general, extendable, and supports very short proofs, even without significant use of automation by tactics. We demonstrate the usability of the technique by verifying the fast congruence closure algorithm of Nieuwenhuis and Oliveras, employed in the state-of-the-art Barcelogic SAT solver. Copyright {\textcopyright} 2010 ACM.},
author = {Nanevski, Aleksandar and Vafeiadis, Viktor and Berdine, Josh},
doi = {10.1145/1707801.1706331},
isbn = {9781605584799},
issn = {03621340},
journal = {POPL: Principles of Programming Languages},
keywords = {hoare logic,languages,monads,separation logic,type theory,verification},
number = {1},
pages = {261},
title = {{Structuring the verification of heap-manipulating programs}},
volume = {45},
year = {2010}
}
@article{Hicks,
author = {Hicks, Michael},
title = {{Symbolic Execution for finding bugs}}
}
@article{Berdine2005,
abstract = {We describe a sound method for automatically proving Hoare triples for loop-free code in Separation Logic, for certain preconditions and postconditions (symbolic heaps). The method uses a form of symbolic execution, a decidable proof theory for symbolic},
author = {Berdine, Josh and Calcagno, Cristiano and O'Hearn, Peter W.},
doi = {10.1007/11575467_5},
isbn = {3-540-29735-9},
journal = {Programming Languages and {\ldots}},
pages = {52--68},
title = {{Symbolic execution with separation logic}},
year = {2005}
}
@article{Mesbah2016,
author = {Mesbah, Ali},
doi = {10.1109/SANER.2016.109},
isbn = {9781509018550},
title = {{Software Analysis for the Web: Achievements and Prospects}},
year = {2016}
}
@article{Abadi1991,
abstract = {Statically typed programming languages allow earlier error checking, better enforcement of diciplined programming styles, and the generation of more efficient object code than languages where all type consistency checks are performed at run time. However, even in statically typed languages, there is often the need to deal with data whose type cannot be determined at compile time. To handle such situations safely, we propose to add a type Dynamic whose values are pairs of a value v and a type tag T where v has the type denoted by T. Instances of Dynamic are built with an explicit tagging construct and inspected with a type safe typecase construct. This paper explores the syntax, operational semantics, and denotational semantics of a simple language that includes the type Dynamic. We give examples of how dynamically typed values can be used in programming. Then we discuss an operational semantics for our language and obtain a soundness theorem. We present two formulations of the denotational semantics of this language and relate them to the operational semantics. Finally, we consider the implications of polymorphism and some implementation issues.},
author = {Abadi, Mart{\'{i}}n and Cardelli, Luca and Pierce, Benjamin and Plotkin, Gordon},
doi = {10.1145/103135.103138},
isbn = {0-89791-294-2},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
number = {2},
pages = {237--268},
title = {{Dynamic typing in a statically typed language}},
volume = {13},
year = {1991}
}
@article{Levy2015,
abstract = {Rust, a new systems programming language, provides compile-time memory safety checks to help eliminate runtime bugs that manifest from improper memory management. This feature is advantageous for operating system development, and especially for embedded OS development, where recovery and debugging are particularly challenging. However, embedded platforms are highly event-based, and Rust's memory safety mechanisms largely presume threads. In our experience developing an operating system for embedded systems in Rust, we have found that Rust's ownership model prevents otherwise safe resource sharing common in the embedded domain, conflicts with the reality of hardware resources, and hinders using closures for programming asynchronously. We describe these experiences and how they relate to memory safety as well as illustrate our workarounds that preserve the safety guarantees to the largest extent possible. In addition, we draw from our experience to propose a new language extension to Rust that would enable it to provide better memory safety tools for event-driven platforms.},
author = {Levy, Amit and Andersen, Michael P. and Campbell, Bradford and Culler, David and Dutta, Prabal and Ghena, Branden and Levis, Philip and Pannuto, Pat},
doi = {10.1145/2818302.2818306},
isbn = {9781450339421},
journal = {PLOS: Workshop on Programming Languages and Operating Systems},
keywords = {embedded operating systems,linear types,ownership,rust},
pages = {21--26},
title = {{Ownership is Theft: Experiences Building an Embedded OS in Rust}},
url = {http://dl.acm.org/citation.cfm?id=2818302.2818306},
year = {2015}
}
@article{Li2010,
author = {Li, Zhaopeng and Zhuang, Zhong and Chen, Yiyun and Yang, Simin and Zhang, Zhenting and Fan, Dawei},
doi = {10.1109/TASE.2010.8},
isbn = {9780769541488},
journal = {Proceedings - 2010 4th International Symposium on Theoretical Aspects of Software Engineering, TASE 2010},
keywords = {Certifying compiler,Program verification,Proof-Carrying code,Separation logic,Theorem prover},
pages = {47--56},
title = {{A certifying compiler for clike subset of C language}},
year = {2010}
}
@article{Bierhoff2007,
abstract = {Objects often define usage protocols that clients must follow in order for these objects to work properly. Aliasing makes it notoriously difficult to check whether clients and implementations are compliant with such protocols. Accordingly, existing approaches either operate globally or severely restrict aliasing. We have developed a sound modular protocol checking approach, based on typestates, that allows a great deal of flexibility in aliasing while guaranteeing the absence of protocol violations at runtime. The main technical contribution is a novel abstraction, access permissions, that combines typestate and object aliasing information. In our methodology, developers express their protocol design intent through annotations based on access permissions. Our checking approach then tracks permissions through method implementations. For each object reference the checker keeps track of the degree of possible aliasing and is appropriately conservative in reasoning about that reference. This helps developers account for object manipulations that may occur through aliases. The checking approach handles inheritance in a novel way, giving subclasses more flexibility in method overriding. Case studies on Java iterators and streams provide evidence that access permissions can model realistic protocols, and protocol checking based on access permissions can be used to reason precisely about the protocols that arise in practice.},
author = {Bierhoff, Kevin and Aldrich, Jonathan},
doi = {10.1145/1297105.1297050},
isbn = {9781595937865},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {aliasing,behavioral subtyping,linear logic,permissions,typestates},
number = {10},
pages = {301},
title = {{Modular typestate checking of aliased objects}},
volume = {42},
year = {2007}
}
@article{Feng2006,
abstract = {Runtime stacks are critical components of any modern software--they are used to implement powerful control structures such as function call/return, stack cutting and unwinding, coroutines, and thread context switch. Stack operations, however, are very hard to reason about: there are no known formal specifications for certifying C-style setjmp/longjmp, stack cutting and unwinding, or weak continuations (in C--). In many proof-carrying code (PCC) systems, return code pointers and exception handlers are treated as general first-class functions (as in continuation-passing style) even though both should have more limited scopes.In this paper we show that stack-based control abstractions follow a much simpler pattern than general first-class code pointers. We present a simple but flexible Hoare-style framework for modular verification of assembly code with all kinds of stackbased control abstractions, including function call/return, tail call, setjmp/longjmp, weak continuation, stack cutting, stack unwinding, multi-return function call, coroutines, and thread context switch. Instead of presenting a specific logic for each control structure, we develop all reasoning systems as instances of a generic framework. This allows program modules and their proofs developed in different PCC systems to be linked together. Our system is fully mechanized. We give the complete soundness proof and a full verification of several examples in the Coq proof assistant.},
author = {Feng, Xinyu and Shao, Zhong and Vaynberg, Alexander and Xiang, Sen and Ni, Zhaozhong},
doi = {10.1145/1133255.1134028},
isbn = {1-59593-320-4},
issn = {03621340},
journal = {PLDI: Programming Language Design and Implementation},
keywords = {assembly code verification,control abstractions,modularity,proof-carrying code,stack-based},
number = {6},
pages = {401},
title = {{Modular verification of assembly code with stack-based control abstractions}},
url = {http://dl.acm.org/citation.cfm?id=1133255.1134028},
volume = {41},
year = {2006}
}
@article{Kiselyov,
author = {Kiselyov, Oleg},
title = {{Typed Tagless Final Interpreters}}
}
@article{Kroning2009,
abstract = {Sets, lists, and maps are elementary data structures used in most programs. Program analysis tools therefore need to decide verification conditions containing variables of such types. We propose a new theory for the SMT-Lib standard as the standard format for such formulae.},
author = {Kr{\"{o}}ning, Daniel and R{\"{u}}mmer, Philipp and Weissenbacher, Georg},
journal = {Informal Proceedings of the 7th International Workshop on Satisfiability Modulo Theories (SMT 2009)},
pages = {1--10},
title = {{A Proposal for a Theory of Finite Sets, Lists, and Maps for the SMT-Lib Standard}},
url = {http://www.kroening.com/smt-lib-lsm.pdf},
year = {2009}
}
@article{Andersen1994,
abstract = {Software engineers are faced with a dilemma. They want to write general and wellstructured programs that are flexible and easy to maintain. On the other hand, generality has a price: efficiency. A specialized program solving a particular problem is often significantly faster than a general program. However, the development of specialized software is time-consuming, and is likely to exceed the production of today's programmers. New techniques are required to solve this so-called software crisis. Partial evaluation is a program specialization technique that reconciles the benefits of generality with efficiency. This thesis presents an automatic partial evaluator for the Ansi C programming language. The content of this thesis is analysis and transformation of C programs. We develop several analyses that support the transformation of a program into its generating extension. A generating extension is a program that produces specialized programs when executed on parts of the input. The thesis contains the following main results.},
author = {Andersen, Lars Ole},
journal = {PhD thesis, DIKU, University of Copenhagen},
keywords = {pointer analysis},
number = {May},
pages = {111},
title = {{Program Analysis and Specialization for the C Programming Language}},
url = {http://www-ti.informatik.uni-tuebingen.de/{~}behrend/PaperSeminar/Program Analysis and SpecializationPhD.pdf},
year = {1994}
}
@article{Jones1982,
abstract = {A new approach to data flow analysis of procedural programs and programs with recursive data structures is described. The method depends on simulation of the interpreter for the subject programming language using a retrieval function to approximate a program's data structures.},
author = {Jones, ND and Muchnick, SS},
doi = {10.1145/582153.582161},
isbn = {0897910656},
journal = {Proceedings of the 9th ACM SIGPLAN-SIGACT {\ldots}},
pages = {66--74},
title = {{A flexible approach to interprocedural data flow analysis and programs with recursive data structures}},
url = {http://dl.acm.org/citation.cfm?id=582161},
year = {1982}
}
@article{Li2013,
author = {Li, Zhaopeng and Zhang, Yu and Chen, Yiyun},
doi = {10.1007/s11390-013-1398-1},
issn = {10009000},
journal = {Journal of Computer Science and Technology},
keywords = {automated theorem proving,loop invariant inference,program verification,shape analysis,shape graph logic},
number = {6},
pages = {1063--1084},
title = {{A shape graph logic and a shape system}},
volume = {28},
year = {2013}
}
@article{Hoare1969,
author = {Hoare, C. A. R.},
doi = {10.1145/363235.363259},
issn = {00010782},
journal = {Communications of the ACM},
month = {oct},
number = {10},
pages = {576--580},
title = {{An axiomatic basis for computer programming}},
url = {http://portal.acm.org/citation.cfm?doid=363235.363259},
volume = {12},
year = {1969}
}
@article{Sagiv,
author = {Sagiv, Mooly and Reps, Thomas and Wilhelm, Reinhard},
title = {{Parametric Shape Analysis via 3-Valued Logic}}
}
@misc{Cousot1977,
abstract = {A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515*17 may be understood to denote computations on the abstract universe {\{}(+), (-), (+-){\}} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515*17 =={\textgreater} -(+)*(+) =={\textgreater} (-)*(+) =={\textgreater} (-), proves that -1515*17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515+17 =={\textgreater} -(+)+(+) =={\textgreater} (-)+(+) =={\textgreater} (+-)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried out in the absence of certainty about their feasibility, ...). Section 3 describes the syntax and mathematical semantics of a simple flowchart language, Scott and Strachey[71]. This mathematical semantics is used in section 4 to build a more abstract model of the semantics of programs, in that it ignores the sequencing of control flow. This model is taken to be the most concrete of the abstract interpretations of programs. Section 5 gives the formal definition of the abstract interpretations of a program. Abstract program properties are modeled by a complete semilattice, Birkhoff[61]. Elementary program constructs are locally interpreted by order-preserving functions which are used to associate a system of equations with a program. The program global properties are then defined as one of the extreme fixpoints of that system, Tarski[55]. The abstraction process is defined in section 6. It is shown that the program properties obtained by an abstract interpretation of a program are consistent with those obtained by a more refined interpretation of that program. In particular, an abstract interpretation may be shown to be consistent with the formal semantics of the language. Levels of abstraction are formalized by showing that consistent abstract interpretations form a lattice (section 7). Section 8 gives a constructive definition of abstract properties of programs based on constructive definitions of fixpoints. It shows that various classical algorithms such as Kildall[73], Wegbreit[75], compute program properties as limits of finite Kleene[52]'s sequences. Section 9 introduces finite fixpoint approximation methods to be used when Kleene's sequences are infinite, Cousot[76]. They are shown to be consistent with the abstraction process. Practical examples illustrate the various sections. The conclusion points out that the abstract interpretation of programs is a unified approach to apparently unrelated program analysis techniques.},
author = {Cousot, Patrick and Cousot, Radhia},
booktitle = {Principles of Progamming Languages},
doi = {10.1145/512950.512973},
issn = {00900036},
pages = {238--252},
pmid = {21744052},
title = {{Abstract Interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints}},
year = {1977}
}
@article{Shivers1991,
abstract = {Programs written in powerful, higher-order languages like Scheme, ML, and Common Lisp should run as fast as their FORTRAN and C counterparts. They should, but they don't. A major reason is the level of optimisation applied to these two classes of languages. Many FORTRAN and C compilers employ an arsenal of sophisticated global optimisations that depend upon data-flow analysis: common-subexpression elimination, loop-invariant detection, induction-variable elimination, and many, many more. Compilers for higher-order languages do not provide these optimisations. Without them, Scheme, LISP and ML compilers are doomed to produce code that runs slower than their FORTRAN and C counterparts. The problem is the lack of an explicit control-flow graph at compile time, something which traditional data-flow analysis techniques require. In this dissertation, I present a technique for recovering the control-flow graph of a Scheme program at compile time. I give examples of how this information can be used to perform several data-flow analysis optimisations, including copy propagation, induction-variable elimination, useless-variable elimination, and type recovery. The analysis is defined in terms of a non-standard semantic interpretation. The denotational semantics is carefully developed, and several theorems establishing the correctness of the semantics and the implementing algorithms are proven.},
author = {Shivers, O.},
journal = {Doctoral dissertation},
number = {May},
pages = {1--186},
title = {{Control-flow analysis of higher-order languages}},
year = {1991}
}
@article{Einarsson2008,
abstract = {These notes are written to provide our own documentation for the Soot framework from McGill University. They focus exclusively on the parts of Soot that we have used in various pro jects: parsing class files, performing points-to and null pointer analyses, performing data-flow anal- ysis, and extracting abstract control-flow graphs. The notes also contain the important code snippets that make everything work since it is our experience, that the full Soot API leaves novice users in a state of shock and awe.},
author = {Einarsson, A. and Nielsen, J. D.},
journal = {{\ldots} , Department of Computer Science, University of {\ldots}},
pages = {1--47},
title = {{A survivor's guide to Java program analysis with Soot}},
url = {http://www.brics.dk/SootGuide/sootsurvivorsguide.pdf?origin=publication{\_}detail},
year = {2008}
}
@article{OConnor2011,
abstract = {This paper gives two new categorical characterisations of lenses: one as a coalgebra of the store comonad, and the other as a monoidal natural transformation on a category of a certain class of coalgebras. The store comonad of the first characterisation can be generalized to a Cartesian store comonad, and the coalgebras of this Cartesian store comonad turn out to be exactly the Biplates of the Uniplate generic programming library. On the other hand, the monoidal natural transformations on functors can be generalized to work on a category of more specific coalgebras. This generalization turns out to be the type of compos from the Compos generic programming library. A theorem, originally conjectured by van Laarhoven, proves that these two generalizations are isomorphic, thus the core data types of the Uniplate and Compos libraries supporting generic program on single recursive types are the same. Both the Uniplate and Compos libraries generalize this core functionality to support mutually recursive types in different ways. This paper proposes a third extension to support mutually recursive data types that is as powerful as Compos and as easy to use as Uniplate. This proposal, called Multiplate, only requires rank 3 polymorphism in addition to the normal type class mechanism of Haskell.},
archivePrefix = {arXiv},
arxivId = {1103.2841},
author = {O'Connor, Russell},
eprint = {1103.2841},
keywords = {applicative,coalgebra,comonad,functional reference,generic programming,lens,monoidal functor,monoidal natural transformation},
pages = {1--21},
title = {{Functor is to Lens as Applicative is to Biplate: Introducing Multiplate}},
url = {http://arxiv.org/abs/1103.2841},
year = {2011}
}
@article{Vytiniotis2011,
abstract = {Advanced type system features, such as GADTs, type classes and type families, have proven to be invaluable language extensions for ensuring data invariants and program correctness. Unfortunately, they pose a tough problem for type inference when they are used as local type assumptions. Local type assumptions often result in the lack of principal types and cast the generalisation of local let-bindings prohibitively difficult to implement and specify. User-declared axioms only make this situation worse. In this paper, we explain the problems and – perhaps controversially – argue for abandoning local let-binding generalisation. We give empirical results that local let generalisation is only sporadically used by Haskell programmers. Moving on, we present a novel constraint-based type inference approach for local type assumptions. Our system, called OutsideIn(X), is parameterised over the particular underlying constraint domain X, in the same way as HM(X). This stratification allows us to use a common metatheory and inference algorithm. OutsideIn(X) extends the constraints of X by introducing implication constraints on top. We describe the strategy for solving these implication constraints, which, in turn, relies on a constraint solver for X. We characterise the properties of the constraint solver for X so that the resulting algorithm only accepts programs with principal types, even when the type system specification accepts programs that do not enjoy principal types. Going beyond the general framework, we give a particular constraint solver for X = type classes + GADTs + type families, a non-trivial challenge in its own right. This constraint solver has been implemented and distributed as part of GHC 7.},
author = {Vytiniotis, Dimitrios and {Peyton Jones}, Simon and Schrijvers, Tom and Sulzmann, Martin},
doi = {10.1017/S0956796811000098},
issn = {0956-7968},
journal = {Journal of Functional Programming},
number = {4-5},
pages = {333--412},
title = {{OutsideIn(X): Modular type inference with local assumptions}},
url = {http://www.journals.cambridge.org/abstract{\_}S0956796811000098},
volume = {21},
year = {2011}
}
@article{Kang2016a,
author = {Kang, Jeehoon and Kim, Yoonseung and Hur, Chung-Kil and Dreyer, Derek and Vafeiadis, Viktor},
doi = {10.1145/2837614.2837642},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {lightweight verification of separate compilation},
pages = {178--190},
title = {{Lightweight verification of separate compilation}},
url = {http://dl.acm.org/citation.cfm?doid=2837614.2837642},
year = {2016}
}
@article{Grigore2016,
abstract = {The core challenge in designing an effective static program analysis is to find a good program abstraction -- one that retains only details relevant to a given query. In this paper, we present a new approach for automatically finding such an abstraction. Our approach uses a pessimistic strategy, which can optionally use guidance from a probabilistic model. Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement. For each untried abstraction, our probabilistic model provides a probability of success, while the size of the abstraction provides an estimate of its cost in terms of analysis time. Combining these two metrics, probability and cost, our refinement algorithm picks an optimal abstraction. Our probabilistic model is a variant of the Erdos-Renyi random graph model, and it is tunable by what we call hyperparameters. We present a method to learn good values for these hyperparameters, by observing past runs of the analysis on an existing codebase. We evaluate our approach on an object sensitive pointer analysis for Java programs, with two client analyses (PolySite and Downcast).},
archivePrefix = {arXiv},
arxivId = {1511.01874},
author = {Grigore, Radu and Yang, Hongseok},
doi = {10.1145/2837614.2837663},
eprint = {1511.01874},
isbn = {978-1-4503-3549-2},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages},
keywords = {Datalog,Horn,hypergraph,probability},
pages = {485--498},
title = {{Abstraction Refinement Guided by a Learnt Probabilistic Model}},
url = {http://doi.acm.org/10.1145/2837614.2837663},
year = {2016}
}
@article{Liang,
author = {Liang, Hongjin},
keywords = {loop invariant inference,pointer logic,program analysis,shape analysis,shape graph},
title = {{A Shape System and Loop Invariant Inference}}
}
@article{Park2015,
abstract = {This paper presents KJS, the most complete and thoroughly tested formal semantics of JavaScript to date. Being executable, KJS has been tested against the ECMAScript 5.1 conformance test suite, and passes all 2,782 core language tests. Among the existing implementations of JavaScript, only Chrome V8 passes all the tests, and no other semantics passes more than 90{\%}. In addition to a reference implementation for JavaScript, KJS also yields a simple coverage metric for a test suite: the set of semantic rules it exercises. Our semantics revealed that the ECMAScript 5.1 conformance test suite fails to cover several semantic rules. Guided by the semantics, we wrote tests to exercise those rules. The new tests revealed bugs both in production JavaScript engines (Chrome V8, Safari WebKit, Firefox SpiderMonkey) and in other semantics. KJS is symbolically executable, thus it can be used for formal analysis and verification of JavaScript programs. We verified non-trivial programs and found a known security vulnerability.},
author = {Park, Daejun and Stefănescu, Andrei and Roşu, Grigore},
doi = {10.1145/2737924.2737991},
isbn = {9781450334686},
issn = {15232867},
journal = {Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation - PLDI 2015},
keywords = {javascript,k framework,mechanized semantics},
pages = {346--356},
title = {{KJS: a complete formal semantics of JavaScript}},
url = {http://dl.acm.org/citation.cfm?doid=2737924.2737991},
year = {2015}
}
@article{Diaz2010,
author = {D{\'{i}}az, Jorge Luis Guevara},
pages = {1--6},
title = {{Typestate-Oriented Design: A Coloured Petri Net Approach}},
year = {2010}
}
@misc{Strom1986,
abstract = {We introduce a new programming language concept called typestate, which is a refinement of the concept of type. Whereas the type of a data object determines the set of operations ever permitted on the object, typestate determines the subset of those operations which is permitted in a particular context. Typestate tracking is a program analysis technique which enhances program reliability by detecting at compile-time syntactically legal but semantically undefined execution sequences. These include, for example, reading a variable before it has been initialized, dereferencing a pointer after the dynamic object has been deallocated, etc. Typestate tracking detects errors that cannot be detected by type checking or by conventional static scope rules. Additionally, typestate tracking makes it possible for compilers to insert appropriate finalization of data at exception points and on program termination, eliminating the need to support finalization by means of either garbage collection or unsafe deallocation operations such as Pascal's dispose operation. By enforcing typestate invariants at compile time, it becomes practical to implement a "secure language" - that is, one in which all successfully compiled program modules have fully defined execution-time effects, and the only effects of program errors are incorrect output values. This paper defines typestate, gives examples of its application, and shows how typestate checking may be embedded into a compiler. We discuss the consequences of typestate checking for software reliability and software structure, and conclude with a discussion of our experience using a high-level language incorporating typestate checking.},
author = {Strom, Robert E. and Yemini, Shaula},
booktitle = {IEEE Transactions on Software Engineering},
doi = {10.1109/TSE.1986.6312929},
isbn = {0098-5589},
issn = {00985589},
keywords = {Program analysis,program verification,security,software reliability,type checking,typestate},
number = {1},
pages = {157--171},
title = {{Typestate: A Programming Language Concept for Enhancing Software Reliability}},
volume = {SE-12},
year = {1986}
}
@article{Pichon-Pharabod,
abstract = {Despite much research on concurrent programming languages, especially for Java and C/C++, we still do not have a satisfactory definition of their semantics, one that admits all common optimisations without also admitting undesired behaviour. Especially problematic are the "thin-air" examples involving high-performance concurrent accesses, such as C/C++11 relaxed atomics. The C/C++11 model is in a per-candidate-execution style, and previous work has identified a tension between that and the fact that compiler optimisations do not operate over single candidate executions in isolation; rather, they operate over syntactic representations that represent all executions. In this paper we propose a novel approach that circumvents this difficulty. We define a concurrency semantics for a core calculus, including relaxed-atomic and non-atomic accesses, and locks, that admits a wide range of optimisation while still forbidding the classic thin-air examples. It also addresses other problems relating to undefined behaviour. The basic idea is to use an event-structure representation of the current state of each thread, capturing all of its potential executions, and to permit interleaving of execution and transformation steps over that to reflect optimisation (possibly dynamic) of the code. These are combined with a non-multi-copy-atomic storage subsystem, to reflect common hardware behaviour. The semantics is defined in a mechanised and executable form, and designed to be implementable above current relaxed hardware and strong enough to support the programming idioms that C/C++11 does for this fragment. It offers a potential way forward for concurrent programming language semantics, beyond the current C/C++11 and Java models.},
author = {Pichon-Pharabod, Jean and Sewell, Peter},
doi = {10.1145/2837614.2837616},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {C/C++,D.3.3 [Programming Languages],Formal Definitions and Theory,Relaxed memory models},
title = {{A concurrency semantics for relaxed atomics that permits optimisation and avoids thin-air executions}},
year = {2016}
}
@article{Chatterjee2015,
abstract = {In this paper, we consider termination of probabilistic programs with real-valued variables. The questions concerned are: 1. qualitative ones that ask (i) whether the program terminates with probability 1 (almost-sure termination) and (ii) whether the expected termination time is finite (finite termination); 2. quantitative ones that ask (i) to approximate the expected termination time (expectation problem) and (ii) to compute a bound B such that the probability to terminate after B steps decreases exponentially (concentration problem). To solve these questions, we utilize the notion of ranking supermartingales which is a powerful approach for proving termination of probabilistic programs. In detail, we focus on algorithmic synthesis of linear ranking-supermartingales over affine probabilistic programs (APP's) with both angelic and demonic non-determinism. An important subclass of APP's is LRAPP which is defined as the class of all APP's over which a linear ranking-supermartingale exists. Our main contributions are as follows. Firstly, we show that the membership problem of LRAPP (i) can be decided in polynomial time for APP's with at most demonic non-determinism, and (ii) is NP-hard and in PSPACE for APP's with angelic non-determinism; moreover, the NP-hardness result holds already for APP's without probability and demonic non-determinism. Secondly, we show that the concentration problem over LRAPP can be solved in the same complexity as for the membership problem of LRAPP. Finally, we show that the expectation problem over LRAPP can be solved in 2EXPTIME and is PSPACE-hard even for APP's without probability and non-determinism (i.e., deterministic programs). Our experimental results demonstrate the effectiveness of our approach to answer the qualitative and quantitative questions over APP's with at most demonic non-determinism.},
archivePrefix = {arXiv},
arxivId = {1510.08517},
author = {Chatterjee, Krishnendu and Fu, Hongfei and Novotny, Petr and Hasheminezhad, Rouzbeh},
doi = {10.1145/2837614.2837639},
eprint = {1510.08517},
isbn = {9781450335492},
issn = {15232867},
keywords = {concentration,martingale,probabilistic programs,ranking super-,termination},
pages = {327--342},
title = {{Algorithmic Analysis of Qualitative and Quantitative Termination Problems for Affine Probabilistic Programs}},
url = {http://arxiv.org/abs/1510.08517},
year = {2015}
}
@article{Yorgey2012,
abstract = {Static type systems strive to be richly expressive while still being simple enough for programmers to use. We describe an experiment that enriches Haskell's kind system with two features promoted from its type system: data types and polymorphism. The new system has a very good power-to-weight ratio: it offers a significant improvement in expressiveness, but, by re-using concepts that programmers are already familiar with, the system is easy to understand and implement.},
author = {Yorgey, Brent a. and Weirich, Stephanie and Cretin, Julien and {Peyton Jones}, Simon and Vytiniotis, Dimitrios and Magalh{\~{a}}es, Jos{\'{e}} Pedro},
doi = {10.1145/2103786.2103795},
isbn = {9781450311205},
issn = {07308566},
journal = {Proceedings of the 8th ACM SIGPLAN workshop on Types in language design and implementation - TLDI '12},
number = {2011/10},
pages = {53},
title = {{Giving Haskell a promotion}},
url = {http://dl.acm.org/citation.cfm?doid=2103786.2103795},
volume = {1},
year = {2012}
}
@article{Ball2001,
author = {Ball, Thomas and Rajamani, Sriram K},
pages = {103--122},
title = {{Properties of Interfaces}},
year = {2001}
}
@article{Robbins2016,
author = {Robbins, Ed and King, Andy and Schrijvers, Tom},
doi = {10.1145/2914770.2837633},
isbn = {1595930566},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
month = {jan},
number = {1},
pages = {191--203},
title = {{From MinX to MinC: semantics-driven decompilation of recursive datatypes}},
url = {http://dl.acm.org/citation.cfm?doid=2914770.2837633},
volume = {51},
year = {2016}
}
@article{Devriese2016,
author = {Devriese, Dominique and Patrignani, Marco and Piessens, Frank},
doi = {10.1145/2837614.2837618},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {fully-abstract compilation by approximate back-translation},
pages = {164--177},
title = {{Fully-abstract compilation by approximate back-translation}},
url = {http://dl.acm.org/citation.cfm?doid=2837614.2837618},
year = {2016}
}
@article{Filipovi2010,
abstract = {Concurrent data structures are usually designed to satisfy correctness conditions such as sequential consistency or linearizability. In this paper, we consider the following fundamental question: What guarantees are provided by these conditions for client programs? We formally show that these conditions can be characterized in terms of observational refinement. Our study also provides a new understanding of sequential consistency and linearizability in terms of abstraction of dependency between computation steps of client programs. {\textcopyright} 2010 Elsevier B.V. All rights reserved.},
author = {Filipovi{\'{c}}, Ivana and O'Hearn, Peter and Rinetzky, Noam and Yang, Hongseok},
doi = {10.1016/j.tcs.2010.09.021},
isbn = {9783642005893},
issn = {03043975},
journal = {Theoretical Computer Science},
keywords = {Linearizability,Observational equivalence,Observational refinement,Sequential consistency},
number = {51-52},
pages = {4379--4398},
title = {{Abstraction for concurrent objects}},
volume = {411},
year = {2010}
}
@article{Hackett2012,
abstract = {JavaScript performance is often bound by its dynamically typed nature. Compilers do not have access to static type information, making generation of efficient, type-specialized machine code difficult. We seek to solve this problem by inferring types. In this paper we present a hybrid type inference algorithm for JavaScript based on points-to analysis. Our algorithm is fast, in that it pays for itself in the optimizations it enables. Our algorithm is also precise, generating information that closely reflects the program's actual behavior even when analyzing polymorphic code, by augmenting static analysis with run-time type barriers. We showcase an implementation for Mozilla Firefox's JavaScript engine, demonstrating both performance gains and viability. Through integration with the just-in-time (JIT) compiler in Firefox, we have improved performance on major benchmarks and JavaScript-heavy websites by up to 50{\%}. Inference-enabled compilation is the default compilation mode as of Firefox 9.},
author = {Hackett, Brian and Guo, Shu-yu},
doi = {10.1145/2254064.2254094},
isbn = {9781450312059},
issn = {0362-1340},
journal = {Proceedings of the 33rd ACM SIGPLAN conference on Programming Language Design and Implementation - PLDI '12},
keywords = {hybrid,just-in-time compilation,type inference},
pages = {239},
title = {{Fast and precise hybrid type inference for JavaScript}},
url = {http://dl.acm.org/citation.cfm?doid=2254064.2254094},
year = {2012}
}
@article{Flatt2016,
abstract = {Our new macro expander for Racket builds on a novel approach to hygiene. Instead of basing macro expansion on variable renamings that are mediated by expansion history, our new expander tracks binding through a set of scopes that an identifier acquires from both binding forms and macro expansions. The resulting model of macro expansion is simpler and more uniform than one based on renaming, and it is sufficiently compatible with Racket's old expander to be practical.},
author = {Flatt, Matthew},
doi = {10.1145/2837614.2837620},
isbn = {978-1-4503-3549-2},
issn = {07308566},
keywords = {binding,hygiene,macros,scope},
pages = {705--717},
title = {{Binding As Sets of Scopes}},
url = {http://doi.acm.org/10.1145/2837614.2837620},
year = {2016}
}
@article{Chatterjee2016,
abstract = {We study algorithmic questions for concurrent systems where the transitions are labeled from a complete, closed semiring, and path properties are algebraic with semiring operations. The algebraic path properties can model dataflow analysis problems, the shortest path problem, and many other natural problems that arise in program analysis. We consider that each component of the concurrent system is a graph with constant treewidth, a property satisfied by the controlflow graphs of most programs. We allow for multiple possible queries, which arise naturally in demand driven dataflow analysis. The study of multiple queries allows us to consider the tradeoff between the resource usage of the one-time preprocessing and for each individual query. The traditional approach constructs the product graph of all components and applies the best-known graph algorithm on the product. In this approach, even the answer to a single query requires the transitive closure, which provides no room for tradeoff between preprocessing and query time. Our main contributions are algorithms that significantly improve the worst-case running time of the traditional approach, and provide various tradeoffs depending on the number of queries. For example, in a concurrent system of two components, the traditional approach requires hexic time in the worst case for answering one query as well as computing the transitive closure, whereas we show that with one-time preprocessing in almost cubic time, each subsequent query can be answered in at most linear time, and even the transitive closure can be computed in almost quartic time. Furthermore, we establish conditional optimality results showing that the worst-case running time of our algorithms cannot be improved without achieving major breakthroughs in graph algorithms.},
archivePrefix = {arXiv},
arxivId = {1510.07565},
author = {Chatterjee, Krishnendu and Goharshady, Amir Kafshdar and Ibsen-Jensen, Rasmus and Pavlogiannis, Andreas},
doi = {10.1145/2837614.2837624},
eprint = {1510.07565},
isbn = {9781450335492},
issn = {15232867},
journal = {Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL 2016},
keywords = {algebraic path properties,concurrent systems,constant-treewidth graphs,shortest path},
pages = {733--747},
title = {{Algorithms for algebraic path properties in concurrent systems of constant treewidth components}},
url = {http://arxiv.org/abs/1510.07565},
year = {2016}
}
@article{Jensen2009,
abstract = {JavaScript is the main scripting language for Web browsers, and it is essential to modern Web applications. Programmers have started using it for writing complex applications, but there is still little tool support available during development. We present a static program analysis infrastructure that can infer detailed and sound type information for JavaScript programs using abstract interpretation. The analysis is designed to support the full language as defined in the ECMAScript standard, including its peculiar object model and all built-in functions. The analysis results can be used to detect common programming errors – or rather, prove their absence – and for producing type information for program comprehension. Preliminary experiments conducted on real-life JavaScript code indicate that the approach is promising regarding analysis precision on small and medium size programs, which constitute the majority of JavaScript applications. With potential for further improvement, we propose the analysis as a foundation for building tools that can aid JavaScript programmers.},
author = {Jensen, Simon Holm and M{\o}ller, Anders and Thiemann, Peter},
doi = {10.1007/978-3-642-03237-0_17},
isbn = {3642032362},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
number = {274},
pages = {238--255},
title = {{Type analysis for JavaScript}},
volume = {5673 LNCS},
year = {2009}
}
@article{Gardner2012,
abstract = {JavaScript has become the most widely used language for client-side web programming. The dynamic nature of JavaScript makes understanding its code notoriously difficult, leading to buggy programs and a lack of adequate static-analysis tools. We believe that logical reasoning has much to offer JavaScript: a simple description of program behaviour, a clear understanding of module boundaries, and the ability to verify security contracts. We introduce a program logic for reasoning about a broad subset of JavaScript, including challenging features such as prototype inheritance and with. We adapt ideas from separation logic to provide tractable reasoning about JavaScript code: reasoning about easy programs is easy; reasoning about hard programs is possible. We prove a strong soundness result. All libraries written in our subset and proved correct with respect to their specifications will be well-behaved, even when called by arbitrary JavaScript code.},
author = {Gardner, Philippa Anne and Maffeis, Sergio and Smith, Gareth David},
doi = {10.1145/2103621.2103663},
isbn = {9781450310833},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {javascript,reliability,separation logic,theory,verification,web},
number = {1},
pages = {31},
title = {{Towards a program logic for JavaScript}},
volume = {47},
year = {2012}
}
@article{DafnyTraits2014,
number = {October},
title = {{Dafny with Traits: Verifying Object Oriented Programs}},
year = {2014}
}
@article{Turner2004,
abstract = {The driving idea of functional programming is to make programming more closely related to mathematics. A program in a functional language such as Haskell or Miranda consists of equations which are both computation rules and a basis for simple algebraic reasoning about the functions and data structures they define. The existing model of functional programming, although elegant and powerful, is compromised to a greater extent than is commonly recognised by the presence of partial functions. We consider a simple discipline of total functional programming designed to exclude the possibility of non-termination. Among other things this requires a type distinction between data, which is finite, and codata, which is potentially infinite.},
author = {Turner, D. A.},
doi = {10.3217/jucs-010-07-0751},
issn = {0958695X},
journal = {Journal of Universal Computer Science},
keywords = {functional programming},
number = {7},
pages = {751--768},
title = {{Total Functional Programming}},
url = {http://www.jucs.org/jucs{\_}10{\_}7/total{\_}functional{\_}programming},
volume = {10},
year = {2004}
}
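The data/codata distinction drawn in the abstract above can be sketched in Haskell (my example; Haskell does not enforce the totality discipline, so this only shows its shape).

-- Data is finite and consumed by structural recursion: each call
-- recurses on a strictly smaller argument, so it terminates.
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- Codata is potentially infinite and produced by guarded corecursion:
-- each step yields one constructor before recursing, so it is productive.
nats :: [Integer]
nats = go 0 where go n = n : go (n + 1)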
@article{Bodin2014,
abstract = {JavaScript is the most widely used web language for client-side applications. Whilst the development of JavaScript was initially just led by implementation, there is now increasing momentum behind the ECMA standardisation process. The time is ripe for a formal, mechanised specification of JavaScript, to clarify ambiguities in the ECMA standards, to serve as a trusted reference for high-level language compilation and JavaScript implementations, and to provide a platform for high-assurance proofs of language properties. We present JSCert, a formalisation of the current ECMA standard in the Coq proof assistant, and JSRef, a reference interpreter for JavaScript extracted from Coq to OCaml. We give a Coq proof that JSRef is correct with respect to JSCert and assess JSRef using test262, the ECMA conformance test suite. Our methodology ensures that JSCert is a comparatively accurate formulation of the English standard, which will only improve as time goes on. We have demonstrated that modern techniques of mechanised specification can handle the complexity of JavaScript.},
author = {Bodin, Martin and Chargueraud, Arthur and Filaretti, Daniele and Gardner, Philippa and Maffeis, Sergio and Naudziuniene, Daiva and Schmitt, Alan and Smith, Gareth},
doi = {10.1145/2535838.2535876},
isbn = {9781450325448},
issn = {07308566},
journal = {Popl},
keywords = {coq,javascript,mechanised semantics},
pages = {87--100},
title = {{A trusted mechanised JavaScript specification}},
url = {http://dl.acm.org/citation.cfm?id=2535838.2535876},
year = {2014}
}
@inproceedings{Chakravarty2005,
address = {New York, New York, USA},
author = {Chakravarty, Manuel M. T. and Keller, Gabriele and Jones, Simon Peyton and Marlow, Simon},
booktitle = {Proceedings of the 32nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '05},
doi = {10.1145/1040305.1040306},
isbn = {158113830X},
pages = {1--13},
publisher = {ACM Press},
title = {{Associated types with class}},
url = {http://portal.acm.org/citation.cfm?doid=1040305.1040306},
year = {2005}
}
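Associated types as described in the entry above are available today via GHC's TypeFamilies extension; a minimal sketch (example mine, not from the paper):

{-# LANGUAGE TypeFamilies #-}

-- Each collection type declares its own element type as an
-- associated type of the class.
class Collection c where
  type Elem c
  empty  :: c
  insert :: Elem c -> c -> c

instance Collection [a] where
  type Elem [a] = a
  empty  = []
  insert = (:)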
@article{Hudson1991,
abstract = {We describe a memory management toolkit for language implementors. It offers efficient and flexible generation scavenging garbage collection. In addition to providing a core of language-independent algorithms and data structures, the toolkit includes auxiliary components that ease implementation of garbage collection for programming languages. We have detailed designs for Smalltalk and Modula-3 and are confident the toolkit can be used with a wide variety of languages. The toolkit approach is itself novel, and our design includes a number of additional innovations in flexibility, efficiency, accuracy, and cooperation between the compiler and the collector.},
author = {Hudson, Richard L and Moss, J Eliot B and Diwan, Amer and Weight, Christopher F},
journal = {Object Oriented Systems},
title = {{A Language-Independent Garbage Collector Toolkit}},
url = {ftp://ftp.cs.umass.edu/pub/osl/papers/tr9147.ps.Z},
year = {1991}
}
@inproceedings{Mayerhofer2016,
address = {New York, New York, USA},
author = {Mayerhofer, Tanja and Wimmer, Manuel and Vallecillo, Antonio},
booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering - SLE 2016},
doi = {10.1145/2997364.2997376},
isbn = {9781450344470},
keywords = {dimensions,measurement uncertainty,model-based engineering,modeling quantities,units},
pages = {118--131},
publisher = {ACM Press},
title = {{Adding uncertainty and units to quantity types in software models}},
url = {http://dl.acm.org/citation.cfm?doid=2997364.2997376},
year = {2016}
}
@article{Turon2013a,
author = {Turon, Aaron and Dreyer, Derek and Birkedal, Lars},
doi = {10.1145/2500365.2500600},
isbn = {9781450323260},
issn = {15232867},
journal = {Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming - ICFP '13},
keywords = {contextual refinement,fine-grained concurrency,higher-order functions,Kripke logical relations,separation logic},
pages = {377},
title = {{Unifying refinement and hoare-style reasoning in a logic for higher-order concurrency}},
url = {http://dl.acm.org/citation.cfm?doid=2500365.2500600},
year = {2013}
}
@inproceedings{Gordon2001,
address = {New York, New York, USA},
author = {Gordon, Andrew D. and Syme, Don},
booktitle = {Proceedings of the 28th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '01},
doi = {10.1145/360204.360228},
isbn = {1581133367},
pages = {248--260},
publisher = {ACM Press},
title = {{Typing a multi-language intermediate code}},
url = {http://portal.acm.org/citation.cfm?doid=360204.360228},
year = {2001}
}
@techreport{Vafeiadis2008,
author = {Vafeiadis, Viktor},
institution = {University of Cambridge, Computer Laboratory},
number = {726},
title = {{Modular fine-grained concurrency verification}},
url = {http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-726.pdf},
year = {2008}
}
@article{Reed2015,
abstract = {Rust is a new systems language that uses some advanced type system features, specifically affine types and regions, to statically guarantee memory safety and eliminate the need for a garbage collector. While each individual addition to the type system is well understood in isolation and known to be sound, the combined system is not known to be sound. Furthermore, Rust uses a novel checking scheme for its regions, known as the Borrow Checker, that is not known to be correct. Since Rust's goal is to be a safer alternative to C/C++, we should ensure that this safety scheme actually works. We present a formal semantics that captures the key features relevant to memory safety, unique pointers and borrowed references, specifies how they guarantee memory safety, and describes the operation of the Borrow Checker. We use this model to prove the soundness of some core operations and justify the conjecture that the model, as a whole, is sound. Additionally, our model provides a syntactic version of the Borrow Checker, which may be more understandable than the non-syntactic version in Rust.},
author = {Reed, Eric},
number = {February},
pages = {1--37},
title = {{Patina: A Formalization of the Rust Programming Language}},
year = {2015}
}
@article{Petricek2015,
author = {Petricek, Tomas},
title = {{F{\#} Data: Accessing structured data made easy}},
year = {2015}
}
@article{Rastogi2014,
author = {Rastogi, Aseem and Hammer, Matthew A. and Hicks, Michael},
doi = {10.1109/SP.2014.48},
isbn = {9781479946860},
issn = {10816011},
journal = {Proceedings - IEEE Symposium on Security and Privacy},
keywords = {Dependent type system,Functional language,Secure multi-party computation},
pages = {655--670},
title = {{Wysteria: A programming language for generic, mixed-mode multiparty computations}},
year = {2014}
}
@article{Hammer2015,
author = {Hammer, Matthew A. and Dunfield, Joshua and Headley, Kyle and Labich, Nicholas and Foster, Jeffrey S. and Hicks, Michael and {Van Horn}, David},
doi = {10.1145/2858965.2814305},
isbn = {978-1-4503-3689-5},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {call-by-push-value (CBPV),demanded computation graph (DCG),incremental compu- tation,laziness,memoization,nominal matching,self-adjusting computation,structural matching,thunks},
number = {10},
pages = {748--766},
title = {{Incremental computation with names}},
url = {http://dl.acm.org/citation.cfm?id=2858965.2814305},
volume = {50},
year = {2015}
}
@article{Pina2016,
abstract = {Dynamic software updating (DSU) is a technique for patching running programs, to fix bugs or add new features. DSU avoids the downtime of stop-and-restart updates, but creates new risks: an incorrect or ill-timed dynamic update could result in a crash or misbehavior, defeating the whole purpose of DSU. To reduce such risks, dynamic updates should be carefully tested before they are deployed. This paper presents Tedsuto, a general testing framework for DSU, along with a concrete implementation of it for Rubah, a state-of-the-art Java-based DSU system. Tedsuto uses system-level tests developed for the old and new versions of the updateable software, and systematically tests whether a dynamic update might result in a test failure. Very often this process is fully automated, while in some cases (e.g., to test new-version functionality) some manual annotations are required. To evaluate Tedsuto's efficacy, we applied it to dynamic updates previously developed (and tested in an ad hoc manner) for the H2 SQL database server and the CrossFTP server, two real-world, multithreaded systems. We used three large test suites, totalling 446 tests, and we found a variety of update-related bugs quickly, and at low cost.},
author = {Pina, Luis and Hicks, Michael},
doi = {10.1109/ICST.2016.27},
isbn = {9781509018260},
journal = {Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation, ICST 2016},
pages = {278--288},
title = {{Tedsuto: A General Framework for Testing Dynamic Software Updates}},
year = {2016}
}
@article{Rastogi,
author = {Rastogi, Aseem and Swamy, Nikhil and Hicks, Michael},
title = {{WYS: A Verified Language Extension for Secure Multi-party Computations}}
}
@article{McCreight2007,
abstract = {Garbage-collected languages such as Java and C{\#} are becoming more and more widely used in both high-end software and real-time embedded applications. The correctness of the GC implementation is essential to the reliability and security of a large portion of the world's mission-critical software. Unfortunately, garbage collectors-especially incremental and concurrent ones-are extremely hard to implement correctly. In this paper, we present a new uniform approach to verifying the safety of both a mutator and its garbage collector in Hoare-style logic. We define a formal garbage collector interface general enough to reason about a variety of algorithms while allowing the mutator to ignore implementation-specific details of the collector. Our approach supports collectors that require read and write barriers. We have used our approach to mechanically verify assembly implementations of mark-sweep, copying and incremental copying GCs in Coq, as well as sample mutator programs that can be linked with any of the GCs to produce a fully-verified garbage-collected program. Our work provides a foundation for reasoning about complex mutator-collector interaction and makes an important advance toward building fully certified production-quality GCs.},
author = {McCreight, A and Shao, Z and Lin, C and Li, L},
doi = {10.1145/1273442.1250788},
isbn = {0362-1340},
issn = {03621340},
journal = {PLDI},
keywords = {abstract data type,assembly code verification,exercise,garbage collection,programs,proof-carrying code,real-time,separation logic},
number = {6},
pages = {468--479},
title = {{A general framework for certifying garbage collectors and their mutators}},
volume = {42},
year = {2007}
}
@article{Fu2010,
abstract = {Optimistic concurrency algorithms provide good performance for parallel programs but they are extremely hard to reason about. Program logics such as concurrent separation logic and rely-guarantee reasoning can be used to verify these algorithms, but they make heavy uses of history variables which may obscure the high-level intuition underlying the design of these algorithms. In this paper, we propose a novel program logic that uses invariants on history traces to reason about optimistic concurrency algorithms. We use past tense temporal operators in our assertions to specify execution histories. Our logic supports modular program specifications with history information by providing separation over both space (program states) and time. We verify Michael's non-blocking stack algorithm and show that the intuition behind such algorithm can be naturally captured using trace invariants.},
author = {Fu, Ming and Li, Yong and Feng, Xinyu and Shao, Zhong and Zhang, Yu},
doi = {10.1007/978-3-642-15375-4_27},
isbn = {3642153747},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {388--402},
title = {{Reasoning about optimistic concurrency using a program logic for history}},
volume = {6269 LNCS},
year = {2010}
}
@article{Mardziel,
author = {Mardziel, Piotr and Hicks, Michael},
title = {{Quantifying vulnerability of secret generation using hyper-distributions}}
}
@article{Wang2009,
abstract = {Wang et al. (Softw. Pract. Exper. 2007; 37(7):727–745) observed a phenomenon of performance inconsistency in the graphics of Java Abstract Window Toolkit (AWT)/Swing among different Java runtime environments (JREs) on Windows XP. This phenomenon makes it difficult to predict the performance of Java game applications. Therefore, they proposed a portable AWT/Swing architecture, called CYC Window Toolkit (CWT), to provide programmers with high and consistent rendering performance for Java game development among different JREs. They implemented a DirectX version to demonstrate the feasibility of the architecture. This paper extends the above research to other environments in two aspects. First, we evaluate the rendering performance of the original Java AWT with different combinations of JREs, image application programming interfaces, system properties and operating systems (OSs), including Windows XP, Windows Vista, Fedora and Mac OS X. The evaluation results indicate that the performance inconsistency of Java AWT also exists among the four OSs, even if the same hardware configuration is used. Second, we design an OpenGL version of CWT, named CWT-GL, to take advantage of modern 3D graphics cards, and compare the rendering performance of CWT with Java AWT/Swing. The results show that CWT-GL achieves more consistent and higher rendering performance in JREs 1.4 to 1.6 on the four OSs. The results also hint at two approaches: (a) decouple the rendering pipelines of Java AWT/Swing from the JREs for faster upgrading and supporting old JREs and (b) use other graphics libraries, such as CWT, instead of Java AWT/Swing to develop cross-platform Java games with higher and more consistent rendering performance. Copyright {\textcopyright} 2009 John Wiley {\&} Sons, Ltd.},
archivePrefix = {arXiv},
arxivId = {1008.1900},
author = {Wang, Yi Hsien and Wu, I. Chen},
doi = {10.1002/spe},
eprint = {1008.1900},
issn = {00380644},
journal = {Software - Practice and Experience},
keywords = {CYC Window Toolkit,Directx,Linux,Mac OS x,OpenGL,Windows},
number = {7},
pages = {701--736},
title = {{Achieving high and consistent rendering performance of java AWT/Swing on multiple platforms}},
volume = {39},
year = {2009}
}
@article{Saur2015,
abstract = {NoSQL databases like Redis, Cassandra, and MongoDB are increasingly popular because they are flexible, lightweight, and easy to work with. Applications that use these databases will evolve over time, sometimes necessitating (or preferring) a change to the format or organization of the data. The problem we address in this paper is: How can we support the evolution of high-availability applications and their NoSQL data online, without excessive delays or interruptions, even in the presence of backward-incompatible data format changes? We present KVolve, an extension to the popular Redis NoSQL database, as a solution to this problem. KVolve permits a developer to submit an upgrade specification that defines how to transform existing data to the newest version. This transformation is applied lazily as applications interact with the database, thus avoiding long pause times. We demonstrate that KVolve is expressive enough to support substantial practical updates, including format changes to RedisFS, a Redis-backed file system, while imposing essentially no overhead in general use and minimal pause times during updates.},
archivePrefix = {arXiv},
arxivId = {1506.08800},
author = {Saur, Karla and Dumitraş, Tudor and Hicks, Michael},
eprint = {1506.08800},
journal = {arXiv preprint},
title = {{Evolving NoSQL Databases Without Downtime}},
url = {http://arxiv.org/abs/1506.08800},
year = {2015}
}
@article{Wei2016,
author = {Wei, Shiyi and Mardziel, Piotr and Ruef, Andrew and Foster, Jeffrey S and Hicks, Michael},
title = {{Evaluating Design Tradeoffs in Numeric Static Analysis for Java}},
year = {2016}
}
@article{Antopoulos,
author = {Antonopoulos, Timos and Gazzillo, Paul and Hicks, Michael and Koskinen, Eric and Terauchi, Tachio and Wei, Shiyi},
title = {{Decomposition Instead of Self-Composition for k-Safety}}
}
@article{Barthe2015,
author = {Barthe, Gilles and Hicks, Michael and Kerschbaum, Florian and Unruh, Dominique},
doi = {10.4230/DagRep.4.12.29},
journal = {Dagstuhl Reports},
keywords = {security,languages,theory},
number = {12},
pages = {29--47},
title = {{The Synergy Between Programming Languages and Cryptography (Dagstuhl Seminar 14492)}},
volume = {4},
year = {2015}
}
@inproceedings{Ruef2016,
abstract = {Typical security contests focus on breaking or mitigating the impact of buggy systems. We present the Build-it, Break-it, Fix-it (BIBIFI) contest, which aims to assess the ability to securely build software, not just break it. In BIBIFI, teams build specified software with the goal of maximizing correctness, performance, and security. The latter is tested when teams attempt to break other teams' submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended: teams can use any language, tool, process, etc. that they like. As such, contest outcomes shed light on factors that correlate with successfully building secure software and breaking insecure software. During we ran three contests involving a total of teams and two different programming problems. Quantitative analysis from these contests found that the most efficient build-it submissions used C/C++, but submissions coded in a statically-typed language were less likely to have a security flaw; build-it teams with diverse programming-language knowledge also produced more secure code. Shorter programs correlated with better scores. Break-it teams that were also build-it teams were significantly better at finding security bugs.},
address = {New York, New York, USA},
archivePrefix = {arXiv},
arxivId = {1606.01881},
author = {Ruef, Andrew and Hicks, Michael and Parker, James and Levin, Dave and Mazurek, Michelle L. and Mardziel, Piotr},
booktitle = {Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS'16},
doi = {10.1145/2976749.2978382},
eprint = {1606.01881},
isbn = {9781450341394},
issn = {15437221},
pages = {690--703},
publisher = {ACM Press},
title = {{Build It, Break It, Fix It}},
url = {http://dl.acm.org/citation.cfm?doid=2976749.2978382},
year = {2016}
}
@article{Hudak2007,
abstract = {This paper describes the history of Haskell, including its genesis and principles, technical contributions, implementations and tools, and applications and impact.},
author = {Hudak, Paul and Hughes, John and {Peyton Jones}, Simon and Wadler, Philip},
doi = {10.1145/1238844.1238856},
isbn = {978-1-59593-766-7},
issn = {00448249},
journal = {Proceedings of the third ACM SIGPLAN conference on History of programming languages},
pages = {12--55},
title = {{A History of Haskell: Being Lazy With Class}},
year = {2007}
}
@article{Reynolds1972,
abstract = {Higher-order programming languages (i.e., languages in which procedures or labels can occur as values) are usually defined by interpreters which are themselves written in a programming language based on the lambda calculus (i.e., an applicative language such as pure LISP). Examples include McCarthy's definition of LISP, Landin's SECD machine, the Vienna definition of PL/I, Reynolds' definitions of GEDANKEN, and recent unpublished work by L. Morris and C. Wadsworth. Such definitions can be classified according to whether the interpreter contains higher-order functions, and whether the order of application (i.e., call-by-value versus call-by-name) in the defined language depends upon the order of application in the defining language. As an example, we consider the definition of a simple applicative programming language by means of an interpreter written in a similar language. Definitions in each of the above classifications are derived from one another by informal but constructive methods. The treatment of imperative features such as jumps and assignment is also discussed.},
author = {Reynolds, John C},
doi = {10.1023/A:1010027404223},
issn = {1388-3690},
journal = {Proceedings of the ACM annual conference on ACM 72},
keywords = {applicative language,closure,continuation,gedanken,higher order function,interpreter,j operator,lambda calculus,language definition,lisp,order application,pal,programming language,reference,secd machine},
number = {30602},
pages = {717--740},
title = {{Definitional interpreters for higher-order programming languages}},
volume = {2},
year = {1972}
}
@inproceedings{Launchbury1994,
address = {New York, New York, USA},
author = {Launchbury, John and {Peyton Jones}, Simon L.},
booktitle = {Proceedings of the ACM SIGPLAN 1994 conference on Programming language design and implementation - PLDI '94},
doi = {10.1145/178243.178246},
isbn = {089791662X},
pages = {24--35},
publisher = {ACM Press},
title = {{Lazy functional state threads}},
url = {http://portal.acm.org/citation.cfm?doid=178243.178246},
year = {1994}
}
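The state-threads idea in the entry above survives as Haskell's ST monad; a minimal sketch (example mine) of mutation that stays externally pure because runST's rank-2 type keeps the state from escaping:

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Imperative summation behind a pure interface.
sumST :: [Int] -> Int
sumST xs = runST (do
  ref <- newSTRef 0                          -- allocate a mutable cell
  mapM_ (\x -> modifySTRef' ref (+ x)) xs    -- accumulate in place
  readSTRef ref)                             -- return the final value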
@article{Graunke2010,
author = {Graunke, KW},
title = {{Extensible Scheduling in a Haskell-based Operating System}},
url = {http://web.cecs.pdx.edu/{~}kennyg/house/thesis.pdf},
year = {2010}
}
@article{Grabmuller2006,
abstract = {In this tutorial, we describe how to use monad transformers in order to incrementally add functionality to Haskell programs. It is not a paper about implementing transformers, but about using them to write elegant, clean and powerful programs in Haskell. Starting from an evaluation function for simple expressions, we convert it to monadic style and incrementally add error handling, environment passing, state, logging and input/output by composing monad transformers.},
author = {Grabm{\"{u}}ller, M},
journal = {Draft paper, October},
pages = {1--16},
title = {{Monad transformers step by step}},
url = {http://www.cs.virginia.edu/{~}wh5a/personal/Transformers.pdf},
volume = {2006},
year = {2006}
}
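In the incremental spirit of the tutorial above, a minimal sketch (example mine) that stacks error handling on top of state using mtl-style monad transformers:

import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Control.Monad.State (State, evalState, get, put)

-- Errors layered over an Int counter.
type Eval a = ExceptT String (State Int) a

tick :: Eval ()                 -- count evaluation steps in the state layer
tick = get >>= \n -> put (n + 1)

safeDiv :: Int -> Int -> Eval Int
safeDiv _ 0 = throwError "division by zero"
safeDiv x y = tick >> pure (x `div` y)

runEval :: Eval a -> Either String a
runEval m = evalState (runExceptT m) 0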
@article{Jung2016,
abstract = {The development of concurrent separation logic (CSL) has sparked a long line of work on modular verification of sophisticated concurrent programs. Two of the most important features supported by several existing extensions to CSL are higher-order quantification and custom ghost state. However, none of the logics that support both of these features reap the full potential of their combination. In particular, none of them provide general support for a feature we dub "higher-order ghost state": the ability to store arbitrary higher-order separation-logic predicates in ghost variables. In this paper, we propose higher-order ghost state as an interesting and useful extension to CSL, which we formalize in the framework of Jung et al.'s recently developed Iris logic. To justify its soundness, we develop a novel algebraic structure called CMRAs ("cameras"), which can be thought of as "step-indexed partial commutative monoids". Finally, we show that Iris proofs utilizing higher-order ghost state can be effectively formalized in Coq, and discuss the challenges we faced in formalizing them.},
author = {Jung, Ralf and Krebbers, Robbert and Birkedal, Lars and Dreyer, Derek},
doi = {10.1145/2951913.2951943},
isbn = {9781450342193},
keywords = {F31 [Logics and Mean-ings of Programs],Formal Definitions and Theory,compositional verification,fine-grained concurrency,higher-order logic,interactive theorem proving},
pages = {1--13},
title = {{Higher-Order Ghost State}},
year = {2016}
}
@article{Marlow2007,
abstract = {In the light of evidence that Haskell programs compiled by GHC exhibit large numbers of mispredicted branches on modern processors, we re-examine the "tagless" aspect of the STG-machine that GHC uses as its evaluation model.},
author = {Marlow, Simon and Yakushev, Alexey Rodriguez and Jones, Simon Peyton},
doi = {10.1145/1291220.1291194},
isbn = {9781595938152},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
number = {9},
pages = {277},
title = {{Faster laziness using dynamic pointer tagging}},
url = {http://portal.acm.org/citation.cfm?doid=1291220.1291194},
volume = {42},
year = {2007}
}
@inproceedings{Marlow2004,
address = {New York, New York, USA},
author = {Marlow, Simon and Jones, Simon Peyton},
booktitle = {Proceedings of the ninth ACM SIGPLAN international conference on Functional programming - ICFP '04},
doi = {10.1145/1016850.1016856},
isbn = {1581139055},
pages = {4},
publisher = {ACM Press},
title = {{Making a fast curry}},
url = {http://portal.acm.org/citation.cfm?doid=1016850.1016856},
year = {2004}
}
@article{Terei2009,
abstract = {This thesis details the motivation, design and implementation of a new back-end for the Glasgow Haskell Compiler which uses the Low Level Virtual Machine compiler infrastructure for code generation. Haskell as implemented by GHC was found to map remarkably well onto the LLVM Assembly language, although some new approaches were required. The most notable of these was the use of a custom calling convention in order to implement GHC's optimisation feature of pinning STG virtual registers to hardware registers. When evaluated against GHC's C and native code generator back-ends, the LLVM back-end offered comparable performance in most situations, with the surprising finding that LLVM's optimisations didn't offer any improvement to the run-time of the generated code. The LLVM back-end proved to be far simpler, though, than either the native code generator or C back-ends, and as such it offers a compelling primary back-end target for GHC.},
author = {Terei, David Anthony},
pages = {64},
title = {{Low Level Virtual Machine for Glasgow Haskell Compiler}},
year = {2009}
}
@incollection{Wadler1995,
abstract = {The use of monads to structure functional programs is described. Monads provide a convenient framework for simulating effects found in other languages, such as global state, exception handling, output, or non-determinism. Three case studies are looked at in detail: how monads ease the modification of a simple evaluator; how monads act as the basis of a datatype of arrays subject to in-place update; and how monads can be used to build parsers.},
author = {Wadler, Philip},
doi = {10.1007/3-540-59451-5_2},
isbn = {978-3-540-59451-2},
issn = {03029743},
number = {August 1992},
pages = {1--31},
title = {{Monads for functional programming}},
year = {1995}
}
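The evaluator from the first case study in the entry above can be sketched as follows, with Maybe standing in for the exception effect (example mine, simplified from the paper's presentation):

data Term = Con Int | Div Term Term

-- The monad threads failure through the recursion; adding state or
-- output later changes only the monad, not the traversal.
eval :: Term -> Maybe Int
eval (Con n)   = pure n
eval (Div t u) = do
  x <- eval t
  y <- eval u
  if y == 0 then Nothing else pure (x `div` y)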
@article{Li2007,
abstract = {The Glasgow Haskell Compiler (GHC) has quite sophisticated support for concurrency in its runtime system, which is written in low-level C code. As GHC evolves, the runtime system becomes increasingly complex, error-prone, difficult to maintain and difficult to add new concurrency features.},
author = {Li, Peng and Marlow, Simon and Jones, Simon Peyton and Tolmach, Andrew},
doi = {10.1145/1291201.1291217},
isbn = {9781595936745},
journal = {Proceedings of the ACM SIGPLAN workshop on Haskell workshop - Haskell '07},
keywords = {concurrency,Haskell,multi-processor support,threads,transactional memory},
pages = {107},
title = {{Lightweight concurrency primitives for GHC}},
url = {http://portal.acm.org/citation.cfm?doid=1291201.1291217},
year = {2007}
}
@misc{SimonL.PeytonJones,
author = {{Peyton Jones}, Simon L. and Salkild, Jon},
title = {{The spineless tagless G-machine v2.5}}
}
@article{Liang2014a,
author = {Liang, Hongjing},
title = {{Refinement Verification of Concurrent Programs and Its Applications}},
year = {2014}
}
@article{Shankar2002,
author = {Shankar, Natarajan},
isbn = {978-3-540-43928-8},
issn = {16113349},
journal = {FME 2002: Formal Methods - Getting IT Right},
pages = {1--20},
title = {{Little Engines of Proof}},
volume = {2391},
year = {2002}
}
@inproceedings{Goncharenko2016,
address = {New York, New York, USA},
author = {Goncharenko, Boryana and Zaytsev, Vadim},
booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering - SLE 2016},
doi = {10.1145/2997364.2997386},
isbn = {9781450344470},
keywords = {conventions,software language design},
pages = {90--104},
publisher = {ACM Press},
title = {{Language design and implementation for the domain of coding conventions}},
url = {http://dl.acm.org/citation.cfm?doid=2997364.2997386},
year = {2016}
}
@article{Terei2012,
abstract = {Though Haskell is predominantly type-safe, implementations contain a few loopholes through which code can bypass typing and module encapsulation. This paper presents Safe Haskell, a language extension that closes these loopholes. Safe Haskell},
author = {Terei, David and Marlow, Simon and {Peyton Jones}, Simon and Mazi{\`{e}}res, David},
doi = {10.1145/2364506.2364524},
isbn = {9781450315746},
issn = {15232867},
journal = {Proceedings of the 2012 symposium on Haskell symposium - Haskell '12},
pages = {137},
title = {{Safe Haskell}},
url = {http://dl.acm.org/citation.cfm?doid=2364506.2364524},
year = {2012}
}
@article{Chargueraud2012,
abstract = {This paper provides an introduction to the locally nameless approach to the representation of syntax with variable binding, focusing in particular on the use of this technique in formal proofs. First, it explains the benefits of representing bound variables with de Bruijn indices while retaining names for free variables. It then describes the operations involved for manipulating syntax in that form, and shows how to define and reason about judgments on locally nameless terms.},
author = {Chargu{\'{e}}raud, Arthur},
doi = {10.1007/s10817-011-9225-2},
issn = {01687433},
journal = {Journal of Automated Reasoning},
keywords = {Binders,Cofinite quantification,Formal proofs,Locally nameless,Metatheory},
number = {3},
pages = {363--408},
title = {{The locally nameless representation}},
volume = {49},
year = {2012}
}
@inproceedings{Dreyer2009,
author = {Dreyer, Derek and Ahmed, Amal and Birkedal, Lars},
booktitle = {2009 24th Annual IEEE Symposium on Logic In Computer Science},
doi = {10.1109/LICS.2009.34},
isbn = {978-0-7695-3746-7},
month = {aug},
pages = {71--80},
publisher = {IEEE},
title = {{Logical Step-Indexed Logical Relations}},
url = {http://ieeexplore.ieee.org/document/5230591/},
year = {2009}
}
@article{Scibior2011,
author = {Scibior, Adam and Ghahramani, Zoubin and Gordon, Andrew D.},
doi = {10.1145/2804302.2804317},
isbn = {9781450338080},
keywords = {Bayesian statistics,Haskell,monads,Monte Carlo,probabilistic programming},
pages = {165--176},
title = {{Practical Probabilistic Programming with Monads}},
year = {2015}
}
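A toy discrete-distribution monad in the vein of the entry above (example mine; the paper develops far more capable sampling-based monads):

newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f x, p) | (x, p) <- xs]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [(f x, p * q) | (f, p) <- fs, (x, q) <- xs]

instance Monad Dist where
  Dist xs >>= k = Dist [(y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x)]

-- A fair coin; runDist (coin >>= \b -> if b then coin else pure False)
-- enumerates outcomes paired with their probabilities.
coin :: Dist Bool
coin = Dist [(True, 0.5), (False, 0.5)]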
@article{Torp-Smith2008,
abstract = {We present a programming language, model, and logic appropriate for implementing and reasoning about a memory management system. We then state what is meant by correctness of a copying garbage collector, and employ a variant of the novel separation logics [18, 23] to formally specify partial correctness of Cheney's copying garbage collector [8]. Finally, we prove that our implementation of Cheney's algorithm meets its specification, using the logic we have given, and auxiliary variables [19].},
author = {Torp-Smith, Noah and Birkedal, Lars and Reynolds, John C.},
doi = {10.1145/1377492.1377499},
isbn = {0362-1340},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
keywords = {Separation logic,copying garbage collector,local reasoning},
number = {4},
pages = {1--58},
title = {{Local reasoning about a copying garbage collector}},
url = {http://dl.acm.org/citation.cfm?id=1377492.1377499},
volume = {30},
year = {2008}
}
@article{Calcagno2007,
abstract = {Separation logic is an extension of Hoare's logic which supports a local way of reasoning about programs that mutate memory. We present a study of the semantic structures lying behind the logic. The core idea is of a local action, a state transformer that mutates the state in a local way. We formulate local actions for a class of models called separation algebras, abstracting from the RAM and other specific concrete models used in work on separation logic. Local actions provide a semantics for a generalized form of (sequential) separation logic. We also show that our conditions on local actions allow a general soundness proof for a separation logic for concurrency, interpreted over arbitrary separation algebras.},
author = {Calcagno, Cristiano and O'Hearn, Peter W. and Yang, Hongseok},
doi = {10.1109/LICS.2007.30},
isbn = {0769529089},
issn = {10436871},
journal = {Proceedings - Symposium on Logic in Computer Science},
pages = {366--375},
title = {{Local action and abstract separation logic}},
year = {2007}
}
@article{Gotsman2007,
abstract = {We present a resource oriented program logic that is able to reason about concurrent heap-manipulating programs with unbounded numbers of dynamically-allocated locks and threads. The logic is inspired by concurrent separation logic, but handles these more realistic concurrency primitives. We demonstrate that the proposed logic allows local reasoning about programs for which there exists a notion of dynamic ownership of heap parts by locks and threads.},
author = {Gotsman, Alexey and Berdine, Josh and Cook, Byron and Rinetzky, Noam and Sagiv, Mooly},
doi = {10.1007/978-3-540-76637-7},
isbn = {978-3-540-76636-0},
issn = {16113349},
journal = {APLAS 2007},
pages = {19--37},
title = {{Local reasoning for storable locks and threads (TR)}},
year = {2007}
}
@article{Cha2012,
abstract = {In this paper we present MAYHEM, a new system for automatically finding exploitable bugs in binary (i.e., executable) programs. Every bug reported by MAYHEM is accompanied by a working shell-spawning exploit. The working exploits ensure soundness and that each bug report is security-critical and actionable. MAYHEM works on raw binary code without debugging information. To make exploit generation possible at the binary-level, MAYHEM addresses two major technical challenges: actively managing execution paths without exhausting memory, and reasoning about symbolic memory indices, where a load or a store address depends on user input. To this end, we propose two novel techniques: 1) hybrid symbolic execution for combining online and offline (concolic) execution to maximize the benefits of both techniques, and 2) index-based memory modeling, a technique that allows MAYHEM to efficiently reason about symbolic memory at the binary level. We used MAYHEM to find and demonstrate 29 exploitable vulnerabilities in both Linux and Windows programs, 2 of which were previously undocumented.},
author = {Cha, Sang Kil and Avgerinos, Thanassis and Rebert, Alexandre and Brumley, David},
doi = {10.1109/SP.2012.31},
isbn = {9780769546810},
issn = {10816011},
journal = {Proceedings - IEEE Symposium on Security and Privacy},
keywords = {exploit generation,hybrid execution,index-based memory modeling,symbolic memory},
pages = {380--394},
title = {{Unleashing Mayhem on binary code}},
year = {2012}
}
@article{Hermida2014,
abstract = {In his seminal paper on "Types, Abstraction and Parametric Polymorphism," John Reynolds called for homomorphisms to be generalized from functions to relations. He reasoned that such a generalization would allow type-based "abstraction" (representation independence, information hiding, naturality or parametricity) to be captured in a mathematical theory, while accounting for higher-order types. However, after 30 years of research, we do not yet know fully how to do such a generalization. In this article, we explain the problems in doing so, summarize the work carried out so far, and call for a renewed attempt at addressing the problem. {\textcopyright} 2014 Elsevier B.V.},
author = {Hermida, Claudio and Reddy, Uday S. and Robinson, Edmund P.},
doi = {10.1016/j.entcs.2014.02.008},
issn = {15710661},
journal = {Electronic Notes in Theoretical Computer Science},
keywords = {Category Theory,Data abstraction,Definability,Fibrations,Homomorphisms,Information hiding,Logical Relations,Natural Transformations,Parametric polymorphism,Reflexive Graphs,Relation lifting,Relational Parametricity,Universal algebra},
pages = {149--180},
title = {{Logical relations and parametricity}},
volume = {303},
year = {2014}
}
@article{Halbwachs1991,
author = {Halbwachs, N. and Caspi, P. and Raymond, P. and Pilaud, D.},
doi = {10.1109/5.97300},
issn = {00189219},
journal = {Proceedings of the IEEE},
number = {9},
pages = {1305--1320},
title = {{The synchronous data flow programming language LUSTRE}},
url = {http://ieeexplore.ieee.org/document/97300/},
volume = {79},
year = {1991}
}
@misc{VafeiadisJones2011,
author = {Vafeiadis, Viktor and Jones, Cliff B.},
number = {687},
title = {{A marriage of rely/guarantee and separation logic}},
url = {http://www.mpi-sws.org/{~}viktor/rgsl-tutorial/part2.pdf},
year = {2011}
}
@article{Feng2009a,
author = {Feng, Xinyu},
doi = {10.1145/1594834.1480922},
isbn = {9781605583792},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {concurrency,information hiding,local reasoning,rely-guarantee reasoning,separation logic},
month = {jan},
number = {1},
pages = {315},
title = {{Local rely-guarantee reasoning}},
url = {http://portal.acm.org/citation.cfm?doid=1594834.1480922},
volume = {44},
year = {2009}
}
@article{Dreyer2007,
abstract = {ML modules and Haskell type classes have proven to be highly effective tools for program structuring. Modules emphasize explicit configuration of program components and the use of data abstraction. Type classes emphasize implicit program construction and ad hoc polymorphism. In this paper, we show how the implicitly-typed style of type class programming may be supported within the framework of an explicitly-typed module language by viewing type classes as a particular mode of use of modules. This view offers a harmonious integration of modules and type classes, where type class features, such as class hierarchies and associated types, arise naturally as uses of existing module-language constructs, such as module hierarchies and type components. In addition, programmers have explicit control over which type class instances are available for use by type inference in a given scope. We formalize our approach as a Harper-Stone-style elaboration relation, and provide a sound type inference algorithm as a guide to implementation.},
author = {Dreyer, Derek and Harper, Robert and Chakravarty, Manuel M. T. and Keller, Gabriele},
doi = {10.1145/1190215.1190229},
isbn = {1595935754},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {design,languages,modules,theory,type classes,type inference,type systems},
number = {1},
pages = {63},
title = {{Modular type classes}},
volume = {42},
year = {2007}
}
@article{Hobor2010,
abstract = {Building semantic models that account for various kinds of indirect reference has traditionally been a difficult problem. Indirect reference can appear in many guises, such as heap pointers, higher-order functions, object references, and shared-memory mutexes. We give a general method to construct models containing indirect reference by presenting a "theory of indirection". Our method can be applied in a wide variety of settings and uses only simple, elementary mathematics. In addition to various forms of indirect reference, the resulting models support powerful features such as impredicative quantification and equirecursion; moreover they are compatible with the kind of powerful substructural accounting required to model (higher-order) separation logic. In contrast to previous work, our model is easy to apply to new settings and has a simple axiomatization, which is complete in the sense that all models of it are isomorphic. Our proofs are machine-checked in Coq. Copyright {\textcopyright} 2010 ACM.},
author = {Hobor, Aquinas and Dockins, Robert and Appel, Andrew W.},
doi = {10.1145/1707801.1706322},
isbn = {9781605584799},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {indirection theory,step-indexed models},
number = {1},
pages = {171},
title = {{A theory of indirection via approximation}},
volume = {45},
year = {2010}
}
@article{Winter2013,
abstract = {Path-sensitive data flow analysis pairs classical data flow analysis with an analysis of feasibility of paths to improve precision. In this paper we propose a framework for path-sensitive backward data flow analysis that is enhanced with an abstraction of the predicate domain. The abstraction is based on a three-valued logic. It follows the strategy that path predicates are simplified if possible (without calling an external predicate solver) and every predicate that could not be reduced to a simple predicate is abstracted to the unknown value, for which the feasibility is undecided. The implementation of the framework scales well and delivers promising results.},
author = {Winter, Kirsten and Zhang, Chenyi and Hayes, Ian J. and Keynes, Nathan and Cifuentes, Cristina and Li, Lian},
doi = {10.1007/978-3-642-41202-8_27},
isbn = {9783642412011},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {415--430},
title = {{Path-sensitive data flow analysis simplified}},
volume = {8144 LNCS},
year = {2013}
}
@article{Anderson2016,
author = {Anderson, Brian and Bergstrom, Lars and Goregaokar, Manish and Matthews, Josh and McAllister, Keegan and Moffitt, Jack and Sapin, Simon},
doi = {10.1145/2889160.2889229},
isbn = {978-1-4503-4205-6},
journal = {Proceedings of the 38th International Conference on Software Engineering Companion},
keywords = {Rust,browser engine,concurrency,parallelism,servo},
pages = {81--89},
title = {{Engineering the Servo Web Browser Engine Using Rust}},
url = {http://doi.acm.org/10.1145/2889160.2889229},
year = {2016}
}
@article{Xu2010,
author = {Xu, Zhongxing and Kremenek, Ted and Zhang, Jian},
doi = {10.1007/978-3-642-16558-0_44},
isbn = {3-642-16557-5, 978-3-642-16557-3},
journal = {4th International Symposium on Leveraging Applications (ISoLA 2010)},
pages = {535--548},
title = {{A memory model for static analysis of C programs}},
url = {http://dl.acm.org/citation.cfm?id=1939281.1939332 http://rd.springer.com/chapter/10.1007{\%}2F978-3-642-16558-0{\_}44},
year = {2010}
}
@article{Leino2010,
abstract = {Traditionally, the full verification of a program's functional correctness has been obtained with pen and paper or with interactive proof assistants, whereas only reduced verification tasks, such as extended static checking, have enjoyed the automation offered by satisfiability-modulo-theories (SMT) solvers. More recently, powerful SMT solvers and well-designed program verifiers are starting to break that tradition, thus reducing the effort involved in doing full verification. This paper gives a tour of the language and verifier Dafny, which has been used to verify the functional correctness of a number of challenging pointer-based programs. The paper describes the features incorporated in Dafny, illustrating their use by small examples and giving a taste of how they are coded for an SMT solver. As a larger case study, the paper shows the full functional specification of the Schorr-Waite algorithm in Dafny.},
author = {Leino, K. Rustan M},
doi = {10.1007/978-3-642-17511-4_20},
isbn = {3642175104},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {348--370},
title = {{Dafny: An automatic program verifier for functional correctness}},
volume = {6355 LNAI},
year = {2010}
}
@article{Barnett2004,
abstract = {The Spec{\#} programming system is a new attempt at a more cost-effective way to develop and maintain high-quality software. This paper describes the goals and architecture of the Spec{\#} programming system, consisting of the object-oriented Spec{\#} programming language, the Spec{\#} compiler, and the Boogie static program verifier. The language includes constructs for writing specifications that capture programmer intentions about how methods and data are to be used, the compiler emits run-time checks to enforce these specifications, and the verifier can check the consistency between a program and its specifications.},
author = {Barnett, Mike and Leino, K. Rustan M. and Schulte, Wolfram},
isbn = {9783540242871},
issn = {03029743},
journal = {International Conference on Construction and Analysis of Safe, Secure and Interoperable Smart Devices (CASSIS '04)},
number = {October},
pages = {49--69},
title = {{The Spec{\#} Programming System: An Overview}},
url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.2133},
year = {2004}
}
@article{Fischer2005,
abstract = {Dataflow analyses sacrifice path-sensitivity for efficiency and lead to false positives when used for verification. Predicate refinement based model checking methods are path-sensitive but must perform many expensive iterations to find all the relevant facts about a program, not all of which are naturally expressed and analyzed using predicates. We show how to join these complementary techniques to obtain efficient and precise versions of any lattice-based dataflow analysis using predicated lattices. A predicated lattice partitions the program state according to a set of predicates and tracks a lattice element for each partition. The resulting dataflow analysis is more precise than the eager dataflow analysis without the predicates. In addition, we automatically infer predicates to rule out imprecisions. The result is a dataflow analysis that can adaptively refine its precision. We then instantiate this generic framework using a symbolic execution lattice, which tracks pointer and value information precisely. We give experimental evidence that our combined analysis is both more precise than the eager analysis in that it is sensitive enough to prove various properties, as well as much faster than the lazy analysis, as many relevant facts are eagerly computed, thus reducing the number of iterations. This results in an order of magnitude improvement in the running times from a purely lazy analysis.},
author = {Fischer, Jeffrey and Jhala, Ranjit and Majumdar, Rupak},
doi = {10.1145/1095430.1081742},
isbn = {1595930140},
issn = {01635948},
journal = {ACM SIGSOFT Software Engineering Notes},
keywords = {abstraction,counterexample analysis,dataflow analysis,model checking,predicate},
pages = {227},
title = {{Joining dataflow with predicates}},
volume = {30},
year = {2005}
}
@article{Ocariza2013,
abstract = {...The majority (65{\%}) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80{\%} of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself,...},
author = {Ocariza, Frolin and Bajaj, Kartik and Pattabiraman, Karthik and Mesbah, Ali},
doi = {10.1109/ESEM.2013.18},
isbn = {978-0-7695-5056-5},
issn = {19493770},
journal = {International Symposium on Empirical Software Engineering and Measurement},
keywords = {Document Object Model (DOM),JavaScript,empirical study},
pages = {55--64},
title = {{An empirical study of client-side JavaScript bugs}},
year = {2013}
}
@phdthesis{StanfordDissertation2011,
month = {November},
school = {Stanford University},
title = {{Precise and Automatic Verification of Container-Manipulating Programs}},
year = {2011}
}
@article{Tan2010,
abstract = {Through foreign function interfaces (FFIs), software components in different programming languages interact with each other in the same address space. Recent years have witnessed a number of systems that analyze FFIs for safety and reliability. However, lack of formal specifications of FFIs hampers progress in this endeavor. We present a formal operational model, JNI Light (JNIL), for a subset of a widely used FFI-the Java Native Interface (JNI). JNIL focuses on the core issues when a high-level garbage-collected language interacts with a low-level language. It proposes abstractions for handling a shared heap, cross-language method calls, cross-language exception handling, and garbage collection. JNIL can directly serve as a formal basis for JNI tools and systems. The abstractions in JNIL are also useful when modeling other FFIs, such as the Python/C interface and the OCaml/C interface.},
author = {Tan, Gang},
doi = {10.1007/978-3-642-17164-2_9},
isbn = {364217163X},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {114--130},
title = {{JNI light: An operational model for the core JNI}},
volume = {6461 LNCS},
year = {2010}
}
@article{Black2016,
abstract = {Programming languages serve a dual purpose: to communicate programs to computers, and to communicate programs to humans. Indeed, it is this dual purpose that makes programming language design a constrained and challenging problem. Inheritance is an essential aspect of that second purpose: it is a tool to improve communication. Humans understand new concepts most readily by first looking at a number of concrete examples, and later abstracting over those examples. The essence of inheritance is that it mirrors this process: it provides a formal mechanism for moving from the concrete to the abstract.},
archivePrefix = {arXiv},
arxivId = {1601.02059},
author = {Black, Andrew P. and Bruce, Kim B. and Noble, James},
doi = {10.1007/978-3-319-30936-1_4},
eprint = {1601.02059},
isbn = {9783319309354},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
keywords = {Abstraction,Inheritance,Object-oriented programming,Program understanding,Programming languages},
pages = {73--94},
title = {{The essence of Inheritance}},
volume = {9600},
year = {2016}
}
@article{Xiao2015,
author = {Xiao, Xiao and Han, Shi and Zhang, Charles and Zhang, Dongmei},
doi = {10.1007/978-3-319-26529-2_18},
isbn = {9783319265285},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {335--355},
title = {{Uncovering JavaScript performance code smells relevant to type mutations}},
volume = {9458},
year = {2015}
}
@inproceedings{Saxena2010,
author = {Saxena, Prateek and Akhawe, Devdatta and Hanna, Steve and Mao, Feng and McCamant, Stephen and Song, Dawn},
booktitle = {2010 IEEE Symposium on Security and Privacy},
doi = {10.1109/SP.2010.38},
isbn = {978-1-4244-6894-2},
keywords = {web security,string decision procedures,symbolic execution},
pages = {513--528},
publisher = {IEEE},
title = {{A Symbolic Execution Framework for JavaScript}},
url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5504700 http://ieeexplore.ieee.org/document/5504700/},
year = {2010}
}
@article{Wadler2009,
abstract = {We introduce the blame calculus, which adds the notion of blame from Findler and Felleisen's contracts to a system similar to Siek and Taha's gradual types and Flanagan's hybrid types. We characterise where positive and negative blame can arise by decomposing the usual notion of subtype into positive and negative subtypes, and show that these recombine to yield naive subtypes. Naive subtypes previously appeared in type systems that are unsound, but we believe this is the first time naive subtypes play a role in establishing type soundness.},
author = {Wadler, Philip and Findler, Robert Bruce},
doi = {10.1007/978-3-642-00590-9_1},
isbn = {9783642005893},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {1--16},
title = {{Well-Typed Programs Can't Be Blamed}},
volume = {5502},
year = {2009}
}
@article{Leino2009,
author = {Leino, K. Rustan M and M{\"{u}}ller, Peter and Smans, Jan},
doi = {10.1007/978-3-642-03829-7_7},
isbn = {364203828X},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {195--222},
title = {{Verification of concurrent programs with Chalice}},
volume = {5705 LNCS},
year = {2009}
}
@article{Ahmadi2015,
author = {Ahmadi, Reza and Leino, K. Rustan M. and Nummenmaa, Jyrki},
doi = {10.1145/2786536.2786542},
isbn = {9781450336567},
journal = {Proceedings of the 17th Workshop on Formal Techniques for Java-like Programs - FTfJP '15},
keywords = {boogie,dafny,program verification,traits},
pages = {1--5},
title = {{Automatic verification of Dafny programs with traits}},
url = {http://dl.acm.org/citation.cfm?doid=2786536.2786542},
year = {2015}
}
@article{Calcagno2011,
abstract = {This paper describes a compositional shape analysis, where each procedure is analyzed independently of its callers. The analysis uses an abstract domain based on a restricted fragment of separation logic, and assigns a collection of Hoare triples to each procedure; the triples provide an over-approximation of data structure usage. Compositionality brings its usual benefits -- increased potential to scale, ability to deal with unknown calling contexts, graceful way to deal with imprecision -- to shape analysis, for the first time. The analysis rests on a generalized form of abduction (inference of explanatory hypotheses) which we call bi-abduction. Bi-abduction displays abduction as a kind of inverse to the frame problem: it jointly infers anti-frames (missing portions of state) and frames (portions of state not touched by an operation), and is the basis of a new interprocedural analysis algorithm. We have implemented our analysis algorithm and we report case studies on smaller programs to evaluate the quality of discovered specifications, and larger programs (e.g., an entire Linux distribution) to test scalability and graceful imprecision.},
author = {Calcagno, Cristiano and Distefano, Dino and O'Hearn, Peter W. and Yang, Hongseok},
doi = {10.1145/2049697.2049700},
isbn = {9781605583792},
issn = {00045411},
journal = {Journal of the ACM},
keywords = {compositional analysis,languages,program analysis,reliability,theory,verification},
number = {6},
pages = {26:1--26:66},
title = {{Compositional Shape Analysis by Means of Bi-Abduction}},
url = {http://discovery.ucl.ac.uk/1342369/},
volume = {58},
year = {2011}
}
@article{Jung2015,
author = {Jung, Ralf and Swasey, David and Sieczkowski, Filip and Svendsen, Kasper and Turon, Aaron and Birkedal, Lars and Dreyer, Derek},
doi = {10.1145/2775051.2676980},
isbn = {978-1-4503-3300-9},
issn = {03621340},
journal = {POPL: Principles of Programming Languages},
keywords = {atomicity,compositional verification,fine-grained concurrency,higher-order logic,invariants,partial commutative monoids,separation logic},
number = {1},
pages = {637--650},
title = {{Iris: Monoids and Invariants as an Orthogonal Basis for Concurrent Reasoning}},
url = {http://dl.acm.org/citation.cfm?id=2775051.2676980},
volume = {50},
year = {2015}
}
@misc{Wikipedia2016,
author = {Wikipedia},
number = {February},
title = {{Garbage collection}},
url = {https://en.wikipedia.org/wiki/Garbage{\_}collection{\_}(computer{\_}science)},
year = {2016}
}
@article{Theisen,
author = {Theisen, Christopher and Williams, Laurie},
isbn = {9781450342773},
keywords = {attack surface,crash dumps,metrics,security,stack traces},
pages = {121--123},
title = {{Poster: Risk-Based Attack Surface Approximation}}
}
@article{Cadar2013,
abstract = {The challenges---and great promise---of modern symbolic execution techniques, and the tools to help implement them.},
author = {Cadar, Cristian and Sen, Koushik},
doi = {10.1145/2408776.2408795},
issn = {0001-0782},
journal = {Communications of the ACM},
number = {2},
pages = {82--90},
title = {{Symbolic execution for software testing: three decades later}},
url = {http://dl.acm.org/ft{\_}gateway.cfm?id=2408795{\&}type=html},
volume = {56},
year = {2013}
}
@article{Liang2013,
author = {Liang, Hongjin and Hoffmann, Jan and Feng, Xinyu and Shao, Zhong},
doi = {10.1007/978-3-642-40184-8_17},
isbn = {9783642401831},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {227--241},
title = {{Characterizing progress properties of concurrent objects via contextual refinements}},
volume = {8052 LNCS},
year = {2013}
}
@article{BAHR2015,
abstract = {In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms of lambda calculi, bounded and unbounded loops, non-determinism and interrupts. All the calculations in the article have been formalised using the Coq proof assistant, which serves as a convenient interactive tool for developing and verifying the calculations.},
author = {Bahr, Patrick and Hutton, Graham},
doi = {10.1017/S0956796815000180},
issn = {0956-7968},
journal = {Journal of Functional Programming},
month = {sep},
number = {July},
pages = {e14},
title = {{Calculating correct compilers}},
url = {http://www.journals.cambridge.org/abstract{\_}S0956796815000180},
volume = {25},
year = {2015}
}
@article{Wu2016,
abstract = {Call traces, i.e., sequences of function calls and returns, are fundamental to a wide range of program analyses such as bug reproduction, fault diagnosis, performance analysis, and many others. The conventional approach to collect call traces that instruments each function call and return site incurs large space and time overhead. Our approach aims at reducing the recording overheads by instrumenting only a small number of call sites while keeping the capability of recovering the full trace. We propose a call trace model and a logged call trace model based on an LL(1) grammar, which enables us to define the criteria of a feasible solution to call trace collection. Based on the two models, we prove that to collect call traces with minimal instrumentation is an NP-hard problem. We then propose an efficient approach to obtaining a suboptimal solution. We implemented our approach as a tool Casper and evaluated it using the DaCapo benchmark suite. The experiment results show that our approach causes significantly lower runtime (and space) overhead than two state-of-the-art approaches.},
author = {Wu, Rongxin and Xiao, Xiao and Cheung, Shing-Chi and Zhang, Hongyu and Zhang, Charles},
doi = {10.1145/2837614.2837619},
isbn = {9781450335492},
issn = {07308566},
keywords = {call trace,instrumentation,overhead,program analysis,tracing},
pages = {678--690},
title = {{Casper: An Efficient Approach to Call Trace Collection}},
year = {2016}
}
@article{Veanes2014,
author = {Veanes, Margus and Bj{\o}rner, Nikolaj and Nachmanson, Lev and Bereg, Sergey},
doi = {10.1007/978-3-319-08867-9_42},
isbn = {9783319088662},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {628--645},
title = {{Monadic Decomposition}},
volume = {8559 LNCS},
year = {2014}
}
@article{Luo2016,
author = {Luo, Zhaoyi and Atlee, Joanne M.},
doi = {10.1145/2997364.2997372},
isbn = {9781450344470},
journal = {Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering - SLE 2016},
keywords = {domain-specific language,language product line,mbeddr,mps,state-machine model},
pages = {105--117},
title = {{BSML-mbeddr: integrating semantically configurable state-machine models in a C programming environment}},
url = {http://dl.acm.org/citation.cfm?doid=2997364.2997372},
year = {2016}
}
@article{Tang2007,
abstract = {Production compilers' optimizers typically operate at low abstraction levels, with transformation rules acting on operations on built-in types only. Transformations at higher levels of abstraction, on operations of types defined in libraries or by the user, are typically not supported. Such high-level optimizations could, however, yield greater benefits than first lowering the abstractions and subjecting the result to low-level transformations. Built-in compiler optimizations can in principle apply to user-defined types, if those types possess properties that guarantee that the optimizing transformations preserve the meaning of the program. The problem is conveying this information to the compiler in a non-disruptive manner. This article describes a framework for specifying and implementing generic “concept-based” optimizations. The framework is based on careful categorization, applying the generic programming paradigm, of the algebraic properties that justify particular optimizing transformations. Our framework is built on top of ConceptGCC, a compiler implementing the new language features concept and concept map of the forthcoming standard C++. Concepts describe the syntactic and semantic properties of classes of types, for which generic semantics-preserving transformations can be defined. Concept maps establish memberships to these classes, enabling the generic optimizations for specific user-defined types.},
author = {Tang, Xiaolong and J{\"{a}}rvi, Jaakko},
doi = {10.1145/1512762.1512772},
isbn = {9781605580869},
journal = {Proceedings of the 2007 Symposium on Library-Centric Software Design},
keywords = {C++,Concepts,Design,Generic programming,High-level optimization,Languages,Performance},
pages = {97--108},
title = {{Concept-Based Optimization}},
url = {http://portal.acm.org/citation.cfm?doid=1512762.1512772},
year = {2007}
}
@article{Furr2008,
author = {Furr, Michael and Foster, Jeffrey S},
doi = {10.1145/1377492.1377493},
issn = {0164-0925},
journal = {ACM Trans. Program. Lang. Syst.},
keywords = {FFI,Foreign function interface,JNI,Java,Java Native Interface,OCaml,dataflow analysis,flow-sensitive type system,foreign function calls,multilingual type inference,multilingual type system,representational type},
number = {4},
pages = {18:1--18:63},
title = {{Checking Type Safety of Foreign Function Calls}},
url = {http://doi.acm.org/10.1145/1377492.1377493},
volume = {30},
year = {2008}
}
@article{Lattner2004,
author = {Lattner, Chris and Adve, Vikram},
isbn = {0769521029},
title = {{LLVM: A Compilation Framework for Lifelong Program Analysis {\&} Transformation}},
year = {2004}
}
@article{Baldoni2016,
abstract = {Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience.},
archivePrefix = {arXiv},
arxivId = {1610.00502},
author = {Baldoni, Roberto and Coppa, Emilio and D'Elia, Daniele Cono and Demetrescu, Camil and Finocchi, Irene},
eprint = {1610.00502},
number = {i},
pages = {1--39},
title = {{A Survey of Symbolic Execution Techniques}},
url = {http://arxiv.org/abs/1610.00502},
year = {2016}
}
@inproceedings{PolitzJoeGibbsandEliopoulosSpiridonAristidesandGuhaArjunandKrishnamurthi2011,
author = {Politz, Joe Gibbs and Eliopoulos, Spiridon Aristides and Guha, Arjun and Krishnamurthi, Shriram},
booktitle = {Proceedings of the 20th USENIX Conference on Security},
keywords = {sandbox,script inclusion,security architecture,web application security,web mashups},
pages = {1},
publisher = {USENIX Association},
title = {{ADsafety: type-based verification of JavaScript Sandboxing}},
url = {http://dl.acm.org/citation.cfm?id=2028067.2028079},
year = {2011}
}
@article{Barr2013,
abstract = {It is well-known that floating-point exceptions can be disastrous and writing exception-free numerical programs is very difficult. Thus, it is important to automatically detect such errors. In this paper, we present Ariadne, a practical symbolic execution system specifically designed and implemented for detecting floating-point exceptions. Ariadne systematically transforms a numerical program to explicitly check each exception triggering condition. Ariadne symbolically executes the transformed program using real arithmetic to find candidate real-valued inputs that can reach and trigger an exception. Ariadne converts each candidate input into a floating-point number, then tests it against the original program. In general, approximating floating-point arithmetic with real arithmetic can change paths from feasible to infeasible and vice versa. The key insight of this work is that, for the problem of detecting floating-point exceptions, this approximation works well in practice because, if one input reaches an exception, many are likely to, and at least one of them will do so over both floating-point and real arithmetic. To realize Ariadne, we also devised a novel, practical linearization technique to solve nonlinear constraints. We extensively evaluated Ariadne over 467 scalar functions in the widely used GNU Scientific Library (GSL). Our results show that Ariadne is practical and identifies a large number of real runtime exceptions in GSL. The GSL developers confirmed our preliminary findings and look forward to Ariadne's public release, which we plan to do in the near future.},
author = {Barr, Earl T. and Vo, Thanh and Le, Vu and Su, Zhendong},
doi = {10.1145/2429069.2429133},
isbn = {978-1-4503-1832-7},
issn = {0362-1340},
journal = {POPL: Principles of Programming Languages},
keywords = {algorithms,floating-point exceptions,languages,reliability,symbolic execution,verification},
pages = {549--560},
title = {{Automatic detection of floating-point exceptions}},
url = {http://doi.acm.org/10.1145/2429069.2429133 http://dl.acm.org/ft{\_}gateway.cfm?id=2429133{\&}type=pdf},
year = {2013}
}
@article{Petricek2015a,
author = {Petricek, Tomas},
doi = {10.1145/2814228.2814249},
isbn = {978-1-4503-3688-8},
journal = {2015 ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!)},
keywords = {philosophy,science,types},
pages = {254--266},
title = {{Against a Universal Definition of 'Type'}},
url = {http://doi.acm.org/10.1145/2814228.2814249},
year = {2015}
}
@article{Whaley2005,
abstract = {Many problems in program analysis can be expressed naturally and concisely in a declarative language like Datalog. This makes it easy to specify new analyses or extend or compose existing analyses. However, previous implementations of declarative languages perform poorly compared with traditional implementations. This paper describes bddbddb, a BDD-Based Deductive DataBase, which implements the declarative language Datalog with stratified negation, totally-ordered finite domains and comparison operators. bddbddb uses binary decision diagrams (BDDs) to efficiently represent large relations. BDD operations take time proportional to the size of the data structure, not the number of tuples in a relation, which leads to fast execution times. bddbddb is an effective tool for implementing a large class of program analyses. We show that a context-insensitive points-to analysis implemented with bddbddb is about twice as fast as a carefully hand-tuned version. The use of BDDs also allows us to solve heretofore unsolved problems, like context-sensitive pointer analysis for large programs.},
author = {Whaley, John and Avots, Dzintars and Carbin, Michael and Lam, Monica S.},
doi = {10.1007/11575467_8},
isbn = {3540297359},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {97--118},
title = {{Using Datalog with binary decision diagrams for program analysis}},
volume = {3780 LNCS},
year = {2005}
}
@article{Beyer2004,
abstract = { We have extended the software model checker BLAST to automatically generate test suites that guarantee full coverage with respect to a given predicate. More precisely, given a C program and a target predicate p, BLAST determines the set L of program locations which program execution can reach with p true, and automatically generates a set of test vectors that exhibit the truth of p at all locations in L. We have used BLAST to generate test suites and to detect dead code in C programs with up to 30 K lines of code. The analysis and test vector generation is fully automatic (no user intervention) and exact (no false positives).},
author = {Beyer, Dirk and Chlipala, Adam J. and Henzinger, Thomas A. and Jhala, Ranjit and Majumdar, Rupak},
doi = {10.1109/ICSE.2004.1317455},
isbn = {0-7695-2163-0},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
pages = {326--335},
title = {{Generating tests from counterexamples}},
volume = {26},
year = {2004}
}
@article{Cousot2001,
author = {Cousot, Patrick},
pages = {138--156},
title = {{Abstract Interpretation Based Formal Methods and Future Challenges}},
year = {2001}
}
@article{Ku2007,
abstract = {Software model checking based on abstraction-refinement has recently achieved widespread success in verifying API conformance in device drivers, and we believe this success can be replicated for the problem of buffer overflow detection. This paper presents a publicly-available benchmark suite to help guide and evaluate this research. The benchmark consists of 298 code fragments of varying complexity capturing 22 buffer overflow vulnerabilities in 12 open source applications. We give a preliminary evaluation of the benchmark using the SatAbs model checker.},
author = {Ku, Kelvin and Hart, Thomas E. and Chechik, Marsha and Lie, David},
doi = {10.1145/1321631.1321691},
isbn = {9781595938824},
journal = {Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering - ASE '07},
keywords = {array bounds checking,benchmark,buffer overflow,model checking},
pages = {389},
title = {{A buffer overflow benchmark for software model checkers}},
url = {http://dl.acm.org/citation.cfm?id=1321631.1321691},
year = {2007}
}
@article{Ball2001a,
abstract = {Model checking has been widely successful in validating and debugging designs in the hardware and protocol domains. However, state-space explosion limits the applicability of model checking tools, so model checkers typically operate on abstractions of systems. Recently, there has been significant interest in applying model checking to software. For infinite-state systems like software, abstraction is even more critical. Techniques for abstracting software are a prerequisite to making software model checking a reality. We present the first algorithm to automatically construct a predicate abstraction of programs written in an industrial programming language such as C, and its implementation in a tool — C2BP. The C2BP tool is part of the SLAM toolkit, which uses a combination of predicate abstraction, model checking, symbolic reasoning, and iterative refinement to statically check temporal safety properties of programs. Predicate abstraction of software has many applications, including detecting program errors, synthesizing program invariants, and improving the precision of program analyses through predicate sensitivity. We discuss our experience applying the C2BP predicate abstraction tool to a variety of problems, ranging from checking that list-manipulating code preserves heap invariants to finding errors in Windows NT device drivers.},
author = {Ball, Thomas and Majumdar, Rupak and Millstein, Todd and Rajamani, Sriram K.},
doi = {10.1145/381694.378846},
isbn = {1-58113-414-2},
issn = {03621340},
journal = {Proceedings of the ACM SIGPLAN conference on Programming language design and implementation (PLDI)},
number = {5},
pages = {203--213},
title = {{Automatic predicate abstraction of C programs}},
volume = {36},
year = {2001}
}
@article{Might2011,
abstract = {We present a functional approach to parsing unrestricted context-free grammars based on Brzozowski's derivative of regular expressions. If we consider context-free grammars as recursive regular expressions, Brzozowski's equational theory extends ...},
author = {Might, Matthew and Darais, David and Spiewak, Daniel},
doi = {10.1145/2034773.2034801},
isbn = {978-1-4503-0865-6},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {parsing},
number = {9},
pages = {189--195},
title = {{Parsing with derivatives}},
url = {http://dl.acm.org/citation.cfm?doid=2034574.2034801},
volume = {46},
year = {2011}
}
@article{Madsen2013,
abstract = {JavaScript is a language that is widely-used for both web-based and standalone applications such as those in the upcoming Windows 8 operating system. Analysis of JavaScript has long been known to be challenging due to its dynamic nature. On top of that, most JavaScript applications rely on large and complex libraries and frameworks, often written in a combination of JavaScript and native code such as C and C++. Stubs have been commonly employed as a partial specification mechanism to address the library problem; however, they are tedious to write, incomplete, and occasionally incorrect. However, the manner in which library code is used within applications often sheds light on what library APIs return or consume as parameters. In this paper, we propose a technique which combines pointer analysis with use analysis to handle many challenges posed by large JavaScript libraries. Our approach enables a variety of applications, ranging from call graph discovery to auto-complete to supporting runtime optimizations. Our techniques have been implemented and empirically validated on a set of 25 Windows 8 JavaScript applications, averaging 1,587 lines of code, demonstrating a combination of scalability and precision.},
author = {Madsen, Magnus and Livshits, Benjamin and Fanning, Michael},
doi = {10.1145/2491411.2491417},
isbn = {978-1-4503-2237-9},
journal = {Proceedings of the 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013)},
keywords = {JavaScript,frameworks,libraries,points-to analysis,use analysis},
pages = {499--509},
title = {{Practical Static Analysis of JavaScript Applications in the Presence of Frameworks and Libraries}},
url = {http://doi.acm.org/10.1145/2491411.2491417},
year = {2013}
}
@article{Hudak1999,
abstract = {Functional reactive programming, or FRP, is a style of programming based on two key ideas: continuous time-varying behaviors, and event-based reactivity. FRP is the essence of Fran [1,2], a domain-specific language for functional reactive graphics and animation, and has recently been used in the design of Frob [3,4], a domain-specific language for functional vision and robotics. In general, FRP can be viewed as an interesting language for describing hybrid systems, which are systems comprised of both analog (continuous) and digital (discrete) subsystems. Continuous behaviors can be thought of simply as functions from time to some value: Behavior a = Time -{\textgreater} a. For example: an image behavior may represent an animation; a Cartesian-point behavior may be a mouse; a velocity-vector behavior may be the control vector for a robot; and a tuple-of-distances behavior may be the input from a robot's sonar array. Both continuous behaviors and event-based reactivity have interesting properties worthy of independent study, but their integration is particularly interesting. At the core of the issue is that events are intended to cause discrete shifts in declarative behavior; i.e. not just shifts in the state of reactivity. Being declarative, the natural desire is for everything to be first-class and higher-order. But this causes interesting clashes in frames of reference, especially when time and space transformations are applied. In this talk the fundamental ideas behind FRP are presented, along with a discussion of various issues in its formal semantics. This is joint work with Conal Elliot at Microsoft Research, and John Peterson at Yale.},
author = {Hudak, Paul},
doi = {10.1007/3-540-49099-X_1},
isbn = {1581136056},
journal = {Proceedings of the 8th European Symposium on Programming (ESOP'99), LNCS 1576},
keywords = {domain-specific languages,FRP,functional programming,Haskell,hybrid modeling,synchronous dataflow languages},
pages = {1--1},
title = {{Functional Reactive Programming}},
volume = {1576},
year = {1999}
}
@article{Cadar2006,
abstract = {This paper presents EXE, an effective bug-finding tool that automatically generates inputs that crash real code. Instead of running code on manually or randomly constructed input, EXE runs it on symbolic input initially allowed to be "anything." As checked code runs, EXE tracks the constraints on each symbolic (i.e., input-derived) memory location. If a statement uses a symbolic value, EXE does not run it, but instead adds it as an input-constraint; all other statements run as usual. If code conditionally checks a symbolic expression, EXE forks execution, constraining the expression to be true on the true branch and false on the other. Because EXE reasons about all possible values on a path, it has much more power than a traditional runtime tool: (1) it can force execution down any feasible program path and (2) at dangerous operations (e.g., a pointer dereference), it detects if the current path constraints allow any value that causes a bug. When a path terminates or hits a bug, EXE automatically generates a test case by solving the current path constraints to find concrete values using its own co-designed constraint solver, STP. Because EXE's constraints have no approximations, feeding this concrete input to an uninstrumented version of the checked code will cause it to follow the same path and hit the same bug (assuming deterministic code). EXE works well on real code, finding bugs along with inputs that trigger them in: the BSD and Linux packet filter implementations, the udhcpd DHCP server, the pcre regular expression library, and three Linux file systems.},
author = {Cadar, Cristian and Ganesh, Vijay and Pawlowski, Peter M and Dill, David L and Engler, Dawson R},
doi = {10.1145/1180405.1180445},
isbn = {1595935185},
issn = {10949224},
journal = {Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS '06)},
keywords = {attack generation,bug finding,constraint solving,dynamic analysis,symbolic execution,test case generation},
number = {2},
pages = {322--335},
title = {{EXE: Automatically Generating Inputs of Death}},
url = {http://portal.acm.org/citation.cfm?id=1455518.1455522},
volume = {12},
year = {2006}
}
@article{Song2008,
abstract = {In this paper, we give an overview of the BitBlaze project, a new approach to computer security via binary analysis. In particular, BitBlaze focuses on building a unified binary analysis platform and using it to provide novel solutions to a broad spectrum of different security problems. The binary analysis platform is designed to enable accurate analysis, provide an extensible architecture, and combines static and dynamic analysis as well as program verification techniques to satisfy the common needs of security applications. By extracting security-related properties from binary programs directly, BitBlaze enables a principled, root-cause based approach to computer security, offering novel and effective solutions, as demonstrated with over a dozen different security applications.},
author = {Song, Dawn and Brumley, David and Yin, Heng and Caballero, Juan and Jager, Ivan and Kang, Min Gyung and Liang, Zhenkai and Newsome, James and Poosankam, Pongsin and Saxena, Prateek},
doi = {10.1007/978-3-540-89862-7_1},
isbn = {3540898611},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
keywords = {Binary analysis,Malware analysis and defense,Reverse engineering,Vulnerability analysis and defense},
pages = {1--25},
title = {{BitBlaze: A new approach to computer security via binary analysis}},
volume = {5352 LNCS},
year = {2008}
}
@article{Patai2009,
author = {Patai, Gergely},
journal = {Draft Proceedings of Implementation and Application of Functional Languages (IFL'09)},
pages = {126--140},
title = {{Eventless Reactivity from Scratch}},
year = {2009}
}
@article{Sergey2015a,
abstract = {Efficient concurrent programs and data structures rarely employ coarse-grained synchronization mechanisms (i.e., locks); instead, they implement custom synchronization patterns via fine-grained primitives, such as compare-and-swap. Due to sophisticated interference scenarios between threads, reasoning about such programs is challenging and error-prone, and can benefit from mechanization. In this paper, we present the first completely formalized framework for mechanized verification of full functional correctness of fine-grained concurrent programs. Our tool is based on the recently proposed program logic FCSL. It is implemented as an embedded domain-specific language in the dependently-typed language of the Coq proof assistant, and is powerful enough to reason about programming features such as higher-order functions and local thread spawning. By incorporating a uniform concurrency model, based on state-transition systems and partial commutative monoids, FCSL makes it possible to build proofs about concurrent libraries in a thread-local, compositional way, thus facilitating scalability and reuse: libraries are verified just once, and their specifications are used ubiquitously in client-side reasoning. We illustrate the proof layout in FCSL by example, and report on our experience of using FCSL to verify a number of concurrent programs.},
author = {Sergey, Ilya and Nanevski, Aleksandar and Banerjee, Anindya},
doi = {10.1145/2737924.2737964},
isbn = {978-1-4503-3468-6},
issn = {15232867},
journal = {Programming Language Design and Implementation},
keywords = {Compositional program verification,concurrency,dependent types,mechanized proofs,separation logic},
number = {4},
pages = {77--87},
title = {{Mechanized Verification of Fine-grained Concurrent Programs}},
url = {http://doi.acm.org/10.1145/2737924.2737964},
year = {2015}
}
@article{Liua,
author = {Liu, Peng and Tripp, Omer and Zhang, Charles},
isbn = {9781450330565},
keywords = {concurrency bugs,context-aware fixing},
pages = {318--329},
title = {{Grail: Context-Aware Fixing of Concurrency Bugs}},
year = {2014}
}
@article{Kashyap2014,
abstract = {JavaScript is used everywhere from the browser to the server, including desktops and mobile devices. However, the current state of the art in JavaScript static analysis lags far behind that of other languages such as C and Java. Our goal is to help remedy this lack. We describe JSAI, a formally specified, robust abstract interpreter for JavaScript. JSAI uses novel abstract domains to compute a reduced product of type inference, pointer analysis, control-flow analysis, string analysis, and integer and boolean constant propagation. Part of JSAI's novelty is user-configurable analysis sensitivity, i.e., context-, path-, and heap-sensitivity. JSAI is designed to be provably sound with respect to a specific concrete semantics for JavaScript, which has been extensively tested against a commercial JavaScript implementation. We provide a comprehensive evaluation of JSAI's performance and precision using an extensive benchmark suite, including real-world JavaScript applications, machine generated JavaScript code via Emscripten, and browser addons. We use JSAI's configurability to evaluate a large number of analysis sensitivities (some well-known, some novel) and observe some surprising results that go against common wisdom. These results highlight the usefulness of a configurable analysis platform such as JSAI.},
archivePrefix = {arXiv},
arxivId = {1403.3996},
author = {Kashyap, Vineeth and Dewey, Kyle and Kuefner, Ethan A and Wagner, John and Gibbons, Kevin and Sarracino, John and Wiedermann, Ben and Hardekopf, Ben},
doi = {10.1145/2635868.2635904},
eprint = {1403.3996},
isbn = {978-1-4503-3056-5},
journal = {Proceedings of the 22Nd ACM SIGSOFT International Symposium on Foundations of Software Engineering},
keywords = {Abstract Interpretation,JavaScript Analysis},
pages = {121--132},
title = {{JSAI: A Static Analysis Platform for JavaScript}},
url = {http://doi.acm.org/10.1145/2635868.2635904},
year = {2014}
}
@article{Lee2012,
abstract = {The prevalent uses of JavaScript in web programming have revealed security vulnerability issues of JavaScript applications, which emphasizes the need for JavaScript analyzers to detect such issues. Recently, researchers have proposed several analyzers of JavaScript programs and some web service companies have developed various JavaScript engines. However, unfortunately, most of the tools are not documented well, thus it is very hard to understand and modify them. Or, such tools are often not open to the public. In this paper, we present formal specification and implementation of SAFE, a scalable analysis framework for ECMAScript, developed for the JavaScript research community. This is the very first attempt to provide both formal specification and its open-source implementation for JavaScript, compared to the existing approaches focused on only one of them. To make it more amenable for other researchers to use our framework, we formally define three kinds of intermediate representations for JavaScript used in the framework, and we provide formal specifications of translations between them. To be adaptable for adventurous future research including modifications in the original JavaScript syntax, we actively use open-source tools to automatically generate parsers and some intermediate representations. To support a variety of program analyses in various compilation phases, we design the framework to be as flexible, scalable, and pluggable as possible. Finally, our framework is publicly available, and some collaborative research using the framework is in progress.},
author = {Lee, Hongki and Won, Sooncheol and Jin, Joonho and Cho, Junhee and Ryu, Sukyoung},
journal = {Fool},
keywords = {compiler,ECMAScript 5.0,formal semantics,formal specification,interpreter,javascript},
title = {{SAFE: Formal Specification and Implementation of a Scalable Analysis Framework for ECMAScript}},
year = {2012}
}
@inproceedings{Bae2014,
address = {New York, New York, USA},
author = {Bae, Sunggyeong and Cho, Hyunghun and Lim, Inho and Ryu, Sukyoung},
booktitle = {Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering - FSE 2014},
doi = {10.1145/2635868.2635916},
isbn = {9781450330565},
keywords = {bug detection,javascript,static analysis,web application},
pages = {507--517},
publisher = {ACM Press},
title = {{SAFEWAPI: web API misuse detector for web applications}},
url = {http://dl.acm.org/citation.cfm?doid=2635868.2635916},
year = {2014}
}
@article{Kuhn2016,
abstract = {Since the year 1977, role modeling has been continuously investigated as a promising paradigm to model complex, dynamic systems. However, this research had almost no influence on the design of today's increasingly complex and context-sensitive software systems. The reason for that is twofold. First, most modeling languages focused either on the behavioral, relational or context-dependent nature of roles rather than combining them. Second, there is a lack of tool support for the design, validation, and generation of role-based software systems. In particular, there exists no graphical role modeling editor supporting the three natures as well as the various proposed constraints. To overcome this deficiency, we introduce the Full-fledged Role Modeling Editor (FRaMED), a graphical modeling editor embracing all natures of roles and modeling constraints featuring generators for a formal representation and source code of a role-based programming language. To show its applicability for the development of role-based software systems, an example from the banking domain is employed.},
author = {K{\"{u}}hn, Thomas and Bierzynski, Kay and Richly, Sebastian and A{\ss}mannn, Uwe},
doi = {10.1145/2997364.2997371},
isbn = {9781450344470},
keywords = {I.6.5 [Simulation and Modeling],Model Validation and Analysis—Role-based Modeling},
pages = {132--136},
title = {{FRaMED: Full-Fledged Role Modeling Editor (Tool Demo)}},
year = {2016}
}
@article{Hathhorn2012,
author = {Hathhorn, Chris and Becchi, Michela and Harrison, William L. and Procter, Adam},
doi = {10.4204/EPTCS.102.11},
issn = {2075-2180},
journal = {Electronic Proceedings in Theoretical Computer Science},
number = {Ssv},
pages = {115--124},
title = {{Formal Semantics of Heterogeneous CUDA-C: A Modular Approach with Applications}},
url = {http://arxiv.org/abs/1211.6193v1},
volume = {102},
year = {2012}
}
@article{Park2016a,
abstract = {Now that HTML5 technologies are everywhere from web services to various platforms, assuring quality of web applications becomes very important. While web application developers use syntactic checkers and type-related bug detectors, extremely dynamic features and diverse execution environments of web applications make it particularly difficult to statically analyze them, leading to too many false positives. Recently, researchers have developed static analyzers for JavaScript web applications addressing quirky JavaScript language semantics and browser environments, but they lack empirical studies on the practicality of such analyzers. In this paper, we collect 30 JavaScript web applications in the wild, analyze them using SAFE, the state-of-the-art JavaScript static analyzer with bug detection, and investigate false positives in the analysis results. After manually inspecting them, we classify 7 reasons that cause the false positives: W3C APIs, browser-specific APIs, JavaScript library APIs, dynamic file loading, dynamic code generation, asynchronous calls, and others. Among them, we identify 4 cases which are the sources of false positives that we can practically reduce. Rather than striving for sound analysis with unrealistic assumptions, we choose to be intentionally unsound to analyze web applications in the real world with less false positives. Our evaluation shows that the approach effectively reduces false positives in statically analyzing web applications in the wild.},
author = {Park, Joonyoung and Lim, Inho and Ryu, Sukyoung},
doi = {10.1145/2889160.2889227},
isbn = {9781450342056},
journal = {Proceedings of the 38th International Conference on Software Engineering Companion - ICSE '16},
keywords = {JavaScript,Keywords Static analysis,false positives,web applications},
pages = {61--70},
title = {{Battles with false positives in static analysis of JavaScript web applications in the wild}},
url = {http://dl.acm.org/citation.cfm?doid=2889160.2889227},
year = {2016}
}
@article{Harris,
author = {Harris, Tim and Jones, Simon Peyton},
title = {{Lightweight Concurrency in GHC}}
}
@article{Midtgaard2012,
abstract = {We present a survey of control-flow analysis of functional programs, which has been the subject of extensive investigation throughout the past 30 years. Analyses of the control flow of functional programs have been formulated in multiple settings and have led to many different approximations, starting with the seminal works of Jones, Shivers, and Sestoft. In this paper, we survey control-flow analysis of functional programs by structuring the multitude of formulations and approximations and comparing them.},
author = {Midtgaard, Jan},
doi = {10.1145/2187671.2187672},
isbn = {0360-0300},
issn = {03600300},
journal = {ACM Computing Surveys},
number = {3},
pages = {1--33},
title = {{Control-flow analysis of functional programs}},
volume = {44},
year = {2012}
}
@article{Turing1938,
author = {Turing, A. M.},
doi = {10.1112/plms/s2-43.6.544},
issn = {1460244X},
journal = {Proceedings of the London Mathematical Society},
number = {1},
pages = {544--546},
title = {{On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction}},
volume = {s2-43},
year = {1938}
}
@article{Hobor2008,
author = {Hobor, Aquinas and Appel, Andrew W. and Nardelli, Francesco Zappa},
doi = {10.1007/978-3-540-78739-6_27},
isbn = {3540787380},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
number = {April},
pages = {353--367},
title = {{Oracle semantics for concurrent separation logic}},
volume = {4960 LNCS},
year = {2008}
}
@incollection{Graf1997,
author = {Graf, Susanne and Saidi, Hassen},
doi = {10.1007/3-540-63166-6_10},
pages = {72--83},
title = {{Construction of abstract state graphs with PVS}},
year = {1997}
}
@article{Myer1968,
author = {Myer, T. H. and Sutherland, I. E.},
doi = {10.1145/363347.363368},
isbn = {0001-0782},
issn = {00010782},
journal = {Communications of the ACM},
keywords = {computer graphics,display channel,display generator,display processor design,display programming,display system,displays,graphic terminal,graphical interaction,graphics,remote},
number = {6},
pages = {410--414},
title = {{On the design of display processors}},
volume = {11},
year = {1968}
}
@article{Guha,
author = {Guha, Arjun and Saftoiu, Claudiu and Krishnamurthi, Shriram},
pages = {1--20},
title = {{Typing Local Control and State Using Flow Analysis}}
}
@book{Turon2014,
abstract = {Weak memory models formalize the inconsistent behaviors that one can expect to observe in multithreaded programs running on modern hardware. In so doing, however, they complicate the already-difficult task of reasoning about correctness of concurrent code. Worse, they render impotent the sophisticated formal methods that have been developed to tame concurrency, which almost universally assume a strong (i.e., sequentially consistent) memory model. This paper introduces GPS, the first program logic to provide a full-fledged suite of modern verification techniques—including ghost state, protocols, and separation logic—for high-level, structured reasoning about weak memory. We demonstrate the effectiveness of GPS by applying it to challenging examples drawn from the Linux kernel as well as lock-free data structures. We also define the semantics of GPS and prove in Coq that it is sound with respect to the axiomatic C11 weak memory model.},
author = {Turon, Aaron and Vafeiadis, Viktor and Dreyer, Derek},
booktitle = {ACM International Conference on Object Oriented Programming Systems Languages},
doi = {10.1145/2660193.2660243},
isbn = {978-1-4503-2585-1},
issn = {0362-1340},
keywords = {c/c++,concurrency,program logic,separation logic,weak memory models},
pages = {691--707},
title = {{GPS: Navigating Weak Memory with Ghosts, Protocols, and Separation}},
url = {http://doi.acm.org/10.1145/2660193.2660243},
year = {2014}
}
@article{Birkedal2015,
abstract = {This report documents the program and the outcomes of Dagstuhl Seminar 15191 “Compositional Verification Methods for Next-Generation Concurrency”. The seminar was successful and facilitated a stimulating interchange between the theory and practice of concurrent programming, and thereby laid the ground for the development of compositional verification methods that can scale to handle the realities of next-generation concurrency.},
author = {Birkedal, Lars and Dreyer, Derek and Gardner, Philippa},
doi = {10.4230/DagRep.5.5.1},
journal = {Dagstuhl Reports},
keywords = {automated analysis,concurrent programming,logics,models,verification of concurrent programs},
number = {5},
pages = {1--23},
title = {{Compositional Verification Methods for Next-Generation Concurrency}},
volume = {5},
year = {2015}
}
@article{Zhang2014,
author = {Zhang, Lei},
title = {{DASE: Document-Assisted Symbolic Execution for Improving Automated Test Generation}},
year = {2014}
}
@article{Clarke2003,
abstract = {The state explosion problem remains a major hurdle in applying symbolic model checking to large hardware designs. State space abstraction, having been essential for verifying designs of industrial complexity, is typically a manual process, requiring considerable creativity and insight.In this article, we present an automatic iterative abstraction-refinement methodology that extends symbolic model checking. In our method, the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or "spurious") counterexamples. We devise new symbolic techniques that analyze such counterexamples and refine the abstract model correspondingly. We describe aSMV, a prototype implementation of our methodology in NuSMV. Practical experiments including a large Fujitsu IP core design with about 500 latches and 10000 lines of SMV code confirm the effectiveness of our approach.},
author = {Clarke, Edmund and Grumberg, Orna and Jha, Somesh and Lu, Yuan and Veith, Helmut},
doi = {10.1145/876638.876643},
issn = {0004-5411},
journal = {Journal of the ACM},
number = {5},
pages = {752--794},
title = {{Counterexample-Guided Abstraction Refinement for Symbolic Model Checking (CEGAR)}},
url = {http://portal.acm.org/citation.cfm?id=876643{\%}5Cnhttp://portal.acm.org/ft{\_}gateway.cfm?id=876643{\&}type=pdf{\&}coll=GUIDE{\&}dl=GUIDE{\&}CFID=45815308{\&}CFTOKEN=61927748},
volume = {50},
year = {2003}
}
@article{Whisnant2012,
author = {Whisnant, David},
journal = {Provided for Course Without Origin},
title = {{Relational Database Concepts for Beginners}},
year = {2012}
}
@article{Harris2008,
abstract = {Writing concurrent programs is notoriously difficult, and is of increasing practical importance. A particular source of concern is that even correctly-implemented concurrency abstractions cannot be composed together to form larger abstractions. In this paper we present a new concurrency model, based on transactional memory, that offers far richer composition. All the usual benefits of transactional memory are present (e.g. freedom from deadlock), but in addition we describe new modular forms of blocking and choice that have been inaccessible in earlier work.},
author = {Harris, Tim and Marlow, Simon and Jones, Simon Peyton and Herlihy, Maurice},
doi = {10.1145/1378704.1378725},
isbn = {1595930809},
issn = {00010782},
journal = {Communications of the ACM},
keywords = {algorithms,languages},
number = {8},
pages = {91},
title = {{Composable memory transactions}},
volume = {51},
year = {2008}
}
@inproceedings{Gu2015,
address = {New York, New York, USA},
author = {Gu, Ronghui and Koenig, J{\'{e}}r{\'{e}}mie and Ramananandro, Tahina and Shao, Zhong and Wu, Xiongnan (Newman) and Weng, Shu-Chun and Zhang, Haozhong and Guo, Yu},
booktitle = {Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL '15},
doi = {10.1145/2676726.2676975},
isbn = {9781450333009},
issn = {15232867},
keywords = {abstraction layer,certified compilers,certified os kernels,deep specification,modularity,program verification},
pages = {595--608},
publisher = {ACM Press},
title = {{Deep Specifications and Certified Abstraction Layers}},
url = {http://doi.acm.org/10.1145/2676726.2676975 http://dl.acm.org/citation.cfm?doid=2676726.2676975},
year = {2015}
}
@article{Chatterjee2016a,
archivePrefix = {arXiv},
arxivId = {1611.01063},
author = {Chatterjee, Krishnendu and Novotn{\'{y}}, Petr and {\v{Z}}ikeli{\'{c}}, Đorđe},
eprint = {1611.01063},
keywords = {martingales,probabilistic programs,termination},
title = {{Stochastic Invariants for Probabilistic Termination}},
year = {2016}
}
@article{Epstein2012,
abstract = {We present Cloud Haskell, a domain-specific language for developing programs for a distributed computing environment. Implemented as a shallow embedding in Haskell, it provides a message-passing communication model, inspired by Erlang, without introducing incompatibility with Haskell's established shared-memory concurrency. A key contribution is a method for serializing function closures for transmission across the network. Cloud Haskell has been implemented; we present example code and some preliminary performance measurements.},
author = {Epstein, Jeff and Black, Andrew P. and Peyton-Jones, Simon},
doi = {10.1145/2096148.2034690},
isbn = {9781450308601},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
number = {12},
pages = {118},
title = {{Towards Haskell in the cloud}},
volume = {46},
year = {2012}
}
@article{Amin2016,
abstract = {Focusing on path-dependent types, the paper develops foundations for Scala from first principles. Starting from a simple calculus D-{\textless}: of dependent functions, it adds records, intersections and recursion to arrive at DOT, a calculus for dependent object types. The paper shows an encoding of System F with subtyping in D-{\textless}: and demonstrates the expressiveness of DOT by modeling a range of Scala constructs in it.},
author = {Amin, Nada and Gr{\"{u}}tter, Samuel and Odersky, Martin and Rompf, Tiark and Stucki, Sandro},
doi = {10.1007/978-3-319-30936-1_14},
isbn = {9783319309354},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
keywords = {Calculus,Dependent types,Scala},
pages = {249--272},
title = {{The essence of dependent object types}},
volume = {9600},
year = {2016}
}
@article{StatefulManifestContracts2016,
keywords = {assertion,computational effects,contracts,dynamic verification,refinement types},
pages = {1--14},
title = {{Stateful Manifest Contracts}},
year = {2016}
}
@article{Willsey,
author = {Willsey, Max and Pfenning, Frank},
pages = {1--10},
title = {{Design and Implementation of Concurrent C0}}
}
@article{Rolandi2011,
abstract = {Figures are an essential part of the scientific paper. Scientists often learn how to create figures by trial and error. A scientist, a graphic designer, and a cognitive psychologist have teamed up to write a brief guide to ease this process. This guide, aimed at researchers in scientific fields, provides an easy-to-follow set of instructions to design effective figures.},
author = {Rolandi, Marco and Cheng, Karen and P{\'{e}}rez-Kriz, Sarah},
issn = {09359648},
journal = {Advanced Materials},
number = {38},
pages = {4343--4346},
pmid = {21960472},
title = {{A brief guide to designing effective figures for the scientific paper}},
volume = {23},
year = {2011}
}
@article{Chlipala,
author = {Chlipala, Adam},
title = {{A Program Optimization for Automatic Database Result Caching}}
}
@article{Paykin2016,
author = {Paykin, Jennifer and Rand, Robert and Zdancewic, Steve},
keywords = {denotational semantics,linear types,quantum circuits,quantum programming languages},
pages = {1--13},
title = {{QWIRE: A QRAM-Inspired Quantum Circuit Language}},
year = {2016}
}
@article{Dinsdale-Young2010,
abstract = {Abstraction is key to understanding and reasoning about large computer systems. Abstraction is simple to achieve if the relevant data structures are disjoint, but rather difficult when they are partially shared, as is often the case for concurrent modules. We present a program logic for reasoning abstractly about data structures that provides a fiction of disjointness and permits compositional reasoning. The internal details of a module are completely hidden from the client by concurrent abstract predicates. We reason about a module's implementation using separation logic with permissions, and provide abstract specifications for use by client programs using concurrent abstract predicates. We illustrate our abstract reasoning by building two implementations of a lock module on top of hardware instructions, and two implementations of a concurrent set module on top of the lock module.},
author = {Dinsdale-Young, Thomas and Dodds, Mike and Gardner, Philippa and Parkinson, Matthew J. and Vafeiadis, Viktor},
doi = {10.1007/978-3-642-14107-2_24},
isbn = {3642141064},
issn = {03029743},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {504--528},
title = {{Concurrent abstract predicates}},
volume = {6183 LNCS},
year = {2010}
}
@article{ResourceVerification2016,
pages = {1--19},
title = {{Resource Verification for Higher-order Functions with Memoization}},
year = {2016}
}
@article{Barthe,
author = {Barthe, Gilles and Gaboardi, Marco and Hoffmann, Jan},
keywords = {complexity analysis,relational reasoning,type and},
title = {{Relational Cost Analysis}}
}
@article{Odersky2016,
author = {Odersky, Martin and Martres, Guillaume and Petrashko, Dmitry},
doi = {10.1145/2998392.2998400},
isbn = {9781450346481},
journal = {Proceedings of the 2016 7th ACM SIGPLAN Symposium on Scala - SCALA 2016},
keywords = {dependent,dot,dotty,higher-kinded,higher-order genericity,scala,type constructor polymorphism,types},
pages = {51--60},
title = {{Implementing Higher-Kinded Types in Dotty}},
year = {2016}
}
@book{Assaf2016,
abstract = {We show how static analysis for secure information flow can be expressed and proved correct entirely within the framework of abstract interpretation. The key idea is to define a Galois connection that directly approximates the hyperproperty of interest. To enable use of such Galois connections, we introduce a fixpoint characterisation of hypercollecting semantics, i.e. a "set of set" transformer. This makes it possible to systematically derive static analyses for hyperproperties entirely within the calculational framework of abstract interpretation. We evaluate this technique by deriving example static analyses. For qualitative information flow, we derive a dependence analysis similar to the logic of Amtoft and Banerjee (SAS'04) and the type system of Hunt and Sands (POPL'06). For quantitative information flow, we derive a novel cardinality analysis that bounds the leakage conveyed by a program instead of simply deciding whether it exists. This encompasses problems that are hypersafety but not k-safety. We put the framework to use and introduce variations that achieve precision rivalling the most recent and precise static analyses for information flow.},
archivePrefix = {arXiv},
arxivId = {1608.01654},
author = {Assaf, Mounir and Naumann, David A. and Signoles, Julien and Totel, {\'{E}}ric and Tronel, Fr{\'{e}}d{\'{e}}ric},
eprint = {1608.01654},
isbn = {9781450346603},
keywords = {abstract interpretation,information flow,static analysis},
title = {{Hypercollecting Semantics and its Application to Static Analysis of Information Flow}},
url = {http://arxiv.org/abs/1608.01654},
year = {2016}
}
@article{Chang2017,
author = {Chang, Stephen and Greenman, Ben},
isbn = {9781450346603},
keywords = {macros,type systems,typed embedded dsls},
title = {{Type Systems as Macros}},
year = {2017}
}
@article{Jafery2016,
abstract = {A long-standing shortcoming of statically typed functional languages is that type checking does not rule out pattern-matching failures (run-time match exceptions). Refinement types distinguish different values of datatypes; if a program annotated with refinements passes type checking, pattern-matching failures become impossible. Unfortunately, refinement is a monolithic property of a type, exacerbating the difficulty of adding refinement types to nontrivial programs. Gradual typing has explored how to incrementally move between static typing and dynamic typing. We develop a type system of gradual sums that combines refinement with imprecision. Then, we develop a bidirectional version of the type system, which rules out excessive imprecision, and give a type-directed translation to a target language with explicit casts. We prove that the static sublanguage cannot have match failures, that a well-typed program remains well-typed if its type annotations are made less precise, and that making annotations less precise causes target programs to fail later. Several of these results correspond to criteria for gradual typing given by Siek et al. (2015).},
archivePrefix = {arXiv},
arxivId = {1611.02392},
author = {Jafery, Khurram A. and Dunfield, Joshua},
eprint = {1611.02392},
pages = {1--60},
title = {{Sums of Uncertainty: Refinements Go Gradual}},
url = {http://arxiv.org/abs/1611.02392},
year = {2016}
}
@article{Leijen2016,
author = {Leijen, Daan},
number = {August},
title = {{Type Directed Compilation of Row-typed Algebraic Effects}},
year = {2016}
}
@article{Krogh-jespersen,
author = {Krogh-Jespersen, Morten and Svendsen, Kasper and Birkedal, Lars},
keywords = {automatic parallelisation,logical relations,program transformation,separation logic,type-and-effect system},
title = {{A Relational Model of Types-and-Effects in Higher-Order Concurrent Separation Logic}}
}
@article{Tofte1994,
abstract = {We present a translation scheme for the polymorphically typed call-by-value $\lambda$-calculus. All runtime values, including function closures, are put into regions. The store consists of a stack of regions. Region inference and effect inference are used to infer where regions can be allocated and de-allocated. Recursive functions are handled using a limited form of polymorphic recursion. The translation is proved correct with respect to a store semantics, which models a region-based run-time system. Experimental results suggest that regions tend to be small, that region allocation is frequent and that overall memory demands are usually modest, even without garbage collection.},
author = {Tofte, Mads and Talpin, Jean-Pierre},
doi = {10.1145/174675.177855},
isbn = {0897916360},
issn = {07308566},
journal = {Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '94},
pages = {188--201},
title = {{Implementation of the typed call-by-value $\lambda$-calculus using a stack of regions}},
url = {http://portal.acm.org/citation.cfm?doid=174675.177855},
year = {1994}
}
@article{Grigore2016a,
abstract = {This note proves that nominal subtyping with contravariance is undecidable even in the absence of multiple instantiation inheritance, thus solving an open problem posed by Kennedy and Pierce in 2007.},
archivePrefix = {arXiv},
arxivId = {1605.05274},
author = {Grigore, Radu},
eprint = {1605.05274},
keywords = {decidability,java,subtype checking},
number = {0},
pages = {1--6},
title = {{Java Generics are Turing Complete}},
url = {http://arxiv.org/abs/1605.05274},
year = {2016}
}
@article{Moerman2016,
abstract = {We present an Angluin-style algorithm to learn nominal automata, which are acceptors of languages over infinite (structured) alphabets. The abstract approach we take allows us to seamlessly extend known variations of the algorithm to this new setting. In particular we can learn a subclass of nominal non-deterministic automata. An implementation using a recently developed Haskell library for nominal computation is provided for preliminary experiments.},
archivePrefix = {arXiv},
arxivId = {arXiv:1607.06268v1},
author = {Moerman, Joshua and Sammartino, Matteo and Silva, Alexandra and Klin, Bartek and Szynwelski, Micha{\l}},
doi = {10.1145/3009837.3009879},
eprint = {arXiv:1607.06268v1},
title = {{Learning nominal automata}},
year = {2016}
}
@article{Lago2016,
abstract = {We introduce a Geometry of Interaction model for higher-order quantum computation, and prove its adequacy for a full quantum programming language in which entanglement, duplication, and recursion are all available. Our model comes with a multi-token machine, a proof net system, and a PCF-style language. The approach we develop is not specific to quantum computation, and our model is an instance of a new framework whose main feature is the ability to model commutative effects in a parallel setting. Being based on a multi-token machine equipped with a memory, it has a concrete nature which makes it well suited for building low-level operational descriptions of higher-order languages.},
archivePrefix = {arXiv},
arxivId = {1610.09629},
author = {Lago, Ugo Dal and Faggian, Claudia and Valiron, Benoit and Yoshimizu, Akira},
eprint = {1610.09629},
keywords = {geometry of interaction,memory structure,quantum},
title = {{The Geometry of Parallelism. Classical, Probabilistic, and Quantum Effects}},
url = {http://arxiv.org/abs/1610.09629},
year = {2016}
}
@article{Kiselyova,
author = {Kiselyov, Oleg and Biboudis, Aggelos and Palladinos, Nick and Smaragdakis, Yannis},
keywords = {code generation,multi-stage programming,optimization,stream fusion,streams},
title = {{Stream Fusion, to Completeness}}
}
@article{Hoffmann,
author = {Hoffmann, Jan},
keywords = {amortized analysis,lp solving,resource bound analysis,static analysis,type inference,type systems},
title = {{Towards Automatic Resource Bound Analysis for OCaml}}
}
@article{Ilik2015,
abstract = {Lambda calculi with algebraic data types lie at the core of functional programming languages and proof assistants, but conceal at least two fundamental theoretical problems already in the presence of the simplest non-trivial data type, the sum type. First, we do not know of an explicit and implemented algorithm for deciding the beta-eta-equality of terms---and this in spite of the first decidability results proven two decades ago. Second, it is not clear how to decide when two types are essentially the same, i.e. isomorphic, in spite of the meta-theoretic results on decidability of the isomorphism. In this paper, we present the exp-log normal form of types---derived from the representation of exponential polynomials via the unary exponential and logarithmic functions---that any type built from arrows, products, and sums, can be isomorphically mapped to. The type normal form can be used as a simple heuristic for deciding type isomorphism, thanks to the fact that it is a systematic application of the high-school identities. We then show that the type normal form allows to reduce the standard beta-eta equational theory of the lambda calculus to a specialized version of itself, while preserving the completeness of equality on terms. We end by describing an alternative representation of normal terms of the lambda calculus with sums, together with a Coq-implemented converter into/from our new term calculus. The difference with the only other previously implemented heuristic for deciding interesting instances of eta-equality by Balat, Di Cosmo, and Fiore, is that we exploit the type information of terms substantially and this often allows us to obtain a canonical representation of terms without performing sophisticated term analyses.},
archivePrefix = {arXiv},
arxivId = {1502.04634},
author = {Ilik, Danko},
eprint = {1502.04634},
keywords = {eta equality,isomorphism,normal term,normal type,sum type,type,type-directed partial evaluation},
pages = {1--13},
title = {{The exp-log normal form of types}},
url = {http://arxiv.org/abs/1502.04634},
year = {2015}
}
@article{Types,
author = {Reynolds, John C.},
journal = {Information Processing 83, R.E.A. Mason, editor, IFIP},
pages = {513--523},
title = {{Types, Abstraction and Parametric Polymorphism}},
year = {1983}
}
@article{Raina2016,
author = {Raina, Sagar and Kaza, Siddharth and Taylor, Blair},
doi = {10.1145/2839509.2844609},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {auto-grading,buffer overflow,cs0,cs1,cs2,input validation,instant-feedback,integer overflow,interactive learning,learning sciences,modules,security injections},
pages = {144--149},
title = {{Security Injections 2.0: Increasing Ability to Apply Secure Coding Knowledge using Segmented and Interactive Modules in CS0}},
year = {2016}
}
@article{Amin2016a,
author = {Amin, Nada and Rompf, Tiark},
pages = {1--15},
title = {{LMS-Verify: Abstraction Without Regret for Verified Systems Programming}},
year = {2016}
}
@article{Kopczy2016,
author = {Kopczy{\'{n}}ski, Eryk and Toru{\'{n}}czyk, Szymon},
title = {{LOIS: Syntax and Semantics}},
year = {2016}
}
@article{Flur,
author = {Flur, Shaked and Gray, Kathryn E and Sezgin, Ali and Sewell, Peter},
keywords = {arm,ibm power,isa,mixed-size,relaxed memory models,semantics,x86},
title = {{Mixed-size Concurrency: ARM, POWER, C/C++11, and SC}}
}
@article{Zhang2016a,
abstract = {The growing popularity and adoption of differential privacy in academic and industrial settings has resulted in the development of increasingly sophisticated algorithms for releasing information while preserving privacy. Accompanying this phenomenon is the natural rise in the development and publication of incorrect algorithms, thus demonstrating the necessity of formal verification tools. However, existing formal methods for differential privacy face a dilemma: methods based on customized logics can verify sophisticated algorithms but come with a steep learning curve and significant annotation burden on the programmers; while existing type systems lack expressive power for some sophisticated algorithms. In this paper, we present AutoPriv, a simple imperative language that strikes a better balance between expressive power and usefulness. The core of AutoPriv is a novel relational type system that separates relational reasoning from privacy budget calculations. With dependent types, the type system is powerful enough to verify sophisticated algorithms where the composition theorem falls short. In addition, the inference engine of AutoPriv infers most of the proof details, and even searches for the proof with minimal privacy cost when multiple proofs exist. We show that AutoPriv verifies sophisticated algorithms with little manual effort.},
archivePrefix = {arXiv},
arxivId = {1607.08228},
author = {Zhang, Danfeng and Kifer, Daniel},
eprint = {1607.08228},
pages = {1--15},
title = {{AutoPriv: Automating Differential Privacy Proofs}},
url = {http://arxiv.org/abs/1607.08228},
year = {2016}
}
@article{Brookes2006,
author = {Brookes, Stephen},
number = {March},
pages = {1--80},
title = {{A semantics for concurrent permission logic}},
year = {2006}
}
@article{Dolan2016,
abstract = {We present a type system combining subtyping and ML-style parametric polymorphism. Unlike previous work, our system supports type inference and has compact principal types. We demonstrate this system in the minimal language MLsub, which types a strict superset of core ML programs. This is made possible by keeping a strict separation between the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits deep connections between subtyping and the algebra of regular languages. An implementation is available online.},
author = {Dolan, Stephen and Mycroft, Alan},
keywords = {algebra,polymorphism,subtyping,type inference},
title = {{Polymorphism, subtyping and type inference in MLsub}},
year = {2016}
}
@article{Bouajjani2016,
abstract = {Causal consistency is one of the most adopted consistency criteria for distributed implementations of data structures. It ensures that operations are executed at all sites according to their causal precedence. We address the issue of verifying automatically whether the executions of an implementation of a data structure are causally consistent. We consider two problems: (1) checking whether one single execution is causally consistent, which is relevant for developing testing and bug finding algorithms, and (2) verifying whether all the executions of an implementation are causally consistent. We show that the first problem is NP-complete. This holds even for the read-write memory abstraction, which is a building block of many modern distributed systems. Indeed, such systems often store data in key-value stores, which are instances of the read-write memory abstraction. Moreover, we prove that, surprisingly, the second problem is undecidable, and again this holds even for the read-write memory abstraction. However, we show that for the read-write memory abstraction, these negative results can be circumvented if the implementations are data independent, i.e., their behaviors do not depend on the data values that are written or read at each moment, which is a realistic assumption.},
archivePrefix = {arXiv},
arxivId = {1611.00580},
author = {Bouajjani, Ahmed and Enea, Constantin and Guerraoui, Rachid and Hamza, Jad},
eprint = {1611.00580},
keywords = {causal consistency,distributed systems,model checking,static program analysis},
title = {{On Verifying Causal Consistency}},
url = {http://arxiv.org/abs/1611.00580},
year = {2016}
}
@article{Wickerson2016,
author = {Wickerson, John and Batty, Mark and Sorensen, Tyler and Constantinides, George A},
keywords = {c,constraint solving,gpu,graphics processor,model checking,opencl,program synthesis,shared memory concurrency,weak memory models},
title = {{Automatically Comparing Memory Consistency Models}},
year = {2016}
}
@article{Antoni2017,
author = {D'Antoni, Loris},
isbn = {9781450346603},
keywords = {finite strings,mso logic,symbolic automata,sws1s,ws1s},
title = {{Monadic Second-Order Logic on Finite Sequences}},
year = {2017}
}
@article{Kumar2016,
author = {Kumar, Ananya and Blelloch, Guy E and Harper, Robert},
keywords = {arrays,concurrency,cost semantics,functional data structures,parallel,persistence},
pages = {1--13},
title = {{Parallel Functional Arrays}},
year = {2016}
}
@article{MetricPreservation2016,
title = {{A Semantic Account of Metric Preservation}},
year = {2016}
}
@article{Tafliovich2016,
author = {Tafliovich, Anya and Petersen, Andrew and Campbell, Jennifer},
doi = {10.1145/2839509.2844647},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {evaluation,motivating students,student teamwork,undergraduate software development project},
pages = {181--186},
title = {{Evaluating Student Teams: Do Educators Know What Students Think?}},
year = {2016}
}
@article{Nandi2016,
abstract = {Hackathons are fast-paced events where competitors work in teams to go from an idea to working software or hardware within a single day or a weekend and demonstrate their creation to a live audience of peers. Due to the “fun” and informal nature of such events, they make for excellent informal learning platforms that attract a diverse spectrum of students, especially those typically uninterested in traditional classroom settings. In this paper, we investigate the informal learning aspects of Ohio State's annual hackathon events over the past two years, with over 100 student participants in 2013 and over 200 student participants in 2014. Despite the competitive nature of such events, we observed a significant amount of peer-learning – students teaching each other how to solve specific challenges and learn new skills. The events featured mentors from both the university and industry, who provided round-the-clock hands-on support, troubleshooting and advice. Due to the gamified format of the events, students were heavily motivated to learn new skills due to practical applicability and peer effects, rather than merely academic metrics. Some teams continued their hacks as long-term projects, while others formed new student groups to host lectures and practice building prototypes on a regular basis. Using a combined analysis of post-event surveys, student academic records and source-code commit log data from the event, we share insights, demographics, statistics and anecdotes from hosting these hackathons.},
author = {Nandi, Arnab and Mandernach, Meris},
doi = {10.1145/2839509.2844590},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
pages = {346--351},
title = {{Hackathons as an informal learning platform}},
url = {http://doi.org/10.1145/2839509.2844590},
year = {2016}
}
@article{Hu2016,
author = {Hu, Chenglie},
doi = {10.1145/2839509.2844563},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {design education,software design,teaching of software design},
pages = {199--204},
title = {{Can Students Design Software? The Answer Is More Complex Than You Think}},
year = {2016}
}
@article{Tamer2016,
author = {Tamer, Bur{\c{c}}in and Stout, Jane G},
doi = {10.1145/2839509.2844573},
isbn = {978-1-4503-3685-7},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
pages = {114--119},
title = {{Understanding How Research Experiences for Undergraduate Students May Foster Diversity in the Professorate}},
url = {http://doi.acm.org/10.1145/2839509.2844573},
year = {2016}
}
@article{Podelski2005,
author = {Podelski, Andreas and Rybalchenko, Andrey},
doi = {10.1145/1047659.1040317},
isbn = {158113830X},
issn = {03621340},
journal = {Proc. POPL},
keywords = {fair termination,liveness,software model checking,transition predicate abstraction},
number = {1},
pages = {132--144},
title = {{Transition predicate abstraction and fair termination}},
url = {http://portal.acm.org/citation.cfm?doid=1047659.1040317},
volume = {40},
year = {2005}
}
@article{Shankar2009,
abstract = {Automated deduction uses computation to perform symbolic logical reasoning. It has been a core technology for program verification from the very beginning. Satisfiability solvers for propositional and first-order logic significantly automate the task of deductive program verification. We introduce some of the basic deduction techniques used in software and hardware verification and outline the theoretical and engineering issues in building deductive verification tools. Beyond verification, deduction techniques can also be used to support a variety of applications including planning, program optimization, and program synthesis},
author = {Shankar, Natarajan},
doi = {10.1145/1592434.1592437},
issn = {03600300},
journal = {ACM Computing Surveys},
number = {4},
pages = {1--56},
title = {{Automated deduction for verification}},
volume = {41},
year = {2009}
}
@article{Backman2016,
author = {Backman, Nathan},
doi = {10.1145/2839509.2844648},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {application security,capture the flag,hacking},
pages = {603--608},
title = {{Facilitating a Battle Between Hackers: Computer Security Outside of the Classroom}},
year = {2016}
}
@article{Becker2016,
abstract = {One of the many challenges novice programmers face from the time they write their first program is inadequate compiler error messages. These messages report details on errors the programmer has made and are the only feedback the programmer gets from the compiler. For students they play a particularly essential role as students often have little experience to draw upon, leaving compiler error messages as their primary guidance on error correction. However these messages are frequently inadequate, presenting a barrier to progress and are often a source of discouragement. We have designed and implemented an editor that provides enhanced compiler error messages and conducted a controlled empirical study with CS1 students learning Java. We find a reduced frequency of overall errors and errors per student. We also identify eight frequent compiler error messages for which enhancement has a statistically significant effect. Finally we find a reduced number of repeated errors. These findings indicate fewer students struggling with compiler error messages.},
author = {Becker, Brett A},
doi = {10.1145/2839509.2844584},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {compiler errors,cs1,debugging,error messages,errors,feedback,java,novice,programming,syntax errors},
pages = {126--131},
title = {{An Effective Approach to Enhancing Compiler Error Messages}},
year = {2016}
}
@article{Dragon2016,
author = {Dragon, Toby and Dickson, Paul E},
doi = {10.1145/2839509.2844607},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {computer science education,memory diagram,pedagogy,program memory traces,tracing},
pages = {546--551},
title = {{Memory Diagrams: A Consistent Approach Across Concepts and Languages}},
year = {2016}
}
@article{Kawash2016,
author = {Kawash, Jalal and Kuipers, Andrew and Manzara, Leonard and Collier, Robert},
doi = {10.1145/2839509.2844552},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {assembly language,computer science education,hardware,programming,raspberry pi,software interface},
pages = {498--503},
title = {{Undergraduate Assembly Language Instruction Sweetened with the Raspberry Pi}},
year = {2016}
}
@article{Boese2016,
abstract = {Computer Science education has to change - the students are demanding a new paradigm in this Just Google It era [3][8]. This paper discusses what Just in Time Learning is, how it is more effective than the traditional educational process, and how to change education to embrace the Internet through incorporating the Just-In-Time Learning model. There are five parts to incorporating the Just-In-Time Learning model: one – recognizing that the textbook is dead, as students Just Google It, two – help students learn how to vet the information they find online, three – incorporate real-world problems and support creative student ideas, four – modify the classroom to include an active-learning environment to fully support Just-In-Time Learning, and five – the role of the teacher is now as a tutor, helping students learn and learn how to learn. By incorporating these five parts of the Just-In-Time Learning model, there is no longer the concept of cheating, and students are learning the core necessary skills: problem-solving, critical thinking, good decision making, self-learning, and effective communication.},
author = {Boese, Elizabeth},
doi = {10.1145/2839509.2844583},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {active-learning,computer science education,flipped,just-in-case learning model,just google it,just-in-time learning model},
pages = {341--345},
title = {{Just-In-Time Learning for the Just Google It Era}},
year = {2016}
}
@article{Bloomfield2016,
author = {Bloomfield, Aaron},
doi = {10.1145/2839509.2844632},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {icpc,programming contest,strategy},
pages = {609--614},
title = {{A Programming Contest Strategy Guide}},
year = {2016}
}
@article{Adams2016,
author = {Adams, Joel C and Crain, Patrick A and Dilley, Christopher P and Unger, Javin B and {Vander Stel}, Mark B},
doi = {10.1145/2839509.2844557},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {graphics,image processing,integration,library,loop,mandelbrot},
pages = {473--478},
title = {{Seeing Is Believing: Helping Students Visualize Multithreaded Behavior}},
year = {2016}
}
@article{Clarke-midura2016,
author = {Clarke-Midura, Jody and Close, Kevin},
doi = {10.1145/2839509.2844581},
isbn = {9781450336857},
journal = {Proceedings of the 47th ACM Technical Symposium on Computer Science Education (SIGCSE '16)},
keywords = {app inventor,girls,near peer mentoring,self-efficacy},
pages = {297--302},
title = {{Investigating the Role of Being a Mentor as a Way of Increasing Interest in CS}},
year = {2016}
}
@article{Feng2007a,
abstract = {We study the relationship between Concurrent Separation Logic (CSL) and the assume-guarantee (A-G) method (a.k.a. rely-guarantee method). We show in three steps that CSL can be treated as a specialization of the A-G method for well-synchronized concurrent programs. First, we present an A-G based program logic for a low-level language with built-in locking primitives. Then we extend the program logic with explicit separation of “private data” and “shared data”, which provides better memory modularity. Finally, we show that CSL (adapted for the low-level language) can be viewed as a specialization of the extended A-G logic by enforcing the invariant that “shared resources are well-formed outside of critical regions”. This work can also be viewed as a different approach (from Brookes') to proving the soundness of CSL: our CSL inference rules are proved as lemmas in the A-G based logic, whose soundness is established following the syntactic approach to proving soundness of type systems. },
author = {Feng, Xinyu and Ferreira, Rodrigo and Shao, Zhong},
doi = {10.1007/978-3-540-71316-6_13},
isbn = {354071314X},
issn = {03029743},
journal = {ESOP},
pages = {173--188},
title = {{On the Relationship between Concurrent Separation Logic and Assume-Guarantee Reasoning}},
year = {2007}
}
@article{Loginov2006,
author = {Loginov, Alexey and Reps, Thomas W and Sagiv, Mooly},
isbn = {3540377565},
issn = {16113349},
journal = {13th International Static Analysis Symposium (SAS)},
pages = {261--279},
title = {{Automated Verification of the {D}eutsch-{S}chorr-{W}aite Tree-Traversal Algorithm}},
volume = {4134},
year = {2006}
}
@article{Matsakis2014,
author = {Matsakis, Nicholas D and {Klock II}, Felix S},
doi = {10.1145/2663171.2663188},
isbn = {978-1-4503-3217-0},
issn = {1094-3641},
journal = {Proceedings of the 2014 ACM SIGAda Annual Conference on High Integrity Language Technology},
keywords = {affine type systems,memory management,rust,systems programming},
pages = {103--104},
title = {{The Rust Language}},
url = {http://doi.acm.org/10.1145/2663171.2663188},
year = {2014}
}
@article{Tassarottia,
abstract = {Read-Copy-Update (RCU) is a technique for letting multiple readers safely access a data structure while a writer concurrently modifies it. It is used heavily in the Linux kernel in situations where fast reads are important and writes are infrequent. Optimized implementations rely only on the weaker memory orderings provided by modern hardware, avoiding the need for expensive synchronization instructions (such as memory barriers) as much as possible. Using GPS, a recently developed program logic for the C/C++11 memory model, we verify an implementation of RCU for a singly-linked list assuming "release-acquire" semantics. Although release-acquire synchronization is stronger than what is required by real RCU implementations, it is nonetheless significantly weaker than the assumption of sequential consistency made in prior work on RCU verification. Ours is the first formal proof of correctness for an implementation of RCU under a weak memory model.},
author = {Tassarotti, Joseph and Dreyer, Derek and Vafeiadis, Viktor},
doi = {10.1145/2737924.2737992},
isbn = {9781450334686},
keywords = {C/C++,concurrency,program logic,RCU,separation logic,verification,weak memory models},
pages = {1--11},
title = {{Verifying Read-Copy-Update in a Logic for Weak Memory}}
}
@article{Haller,
author = {Haller, Philipp and Geries, Simon and Eichberg, Michael and Salvaneschi, Guido},
isbn = {9781450346481},
keywords = {asynchronous programming,concurrent programming,deterministic concurrency,scala,static analysis},
pages = {11--20},
title = {{Reactive Async: Expressive Deterministic Concurrency}}
}
@article{Tofte1997,
abstract = {This paper describes a memory management discipline for programs that perform dynamic memory allocation and de-allocation. At runtime, all values are put into regions. The store consists of a stack of regions. All points of region allocation and deallocation are inferred automatically, using a type and effect based program analysis. The scheme does not assume the presence of a garbage collector. The scheme was first presented by Tofte and Talpin (1994); subsequently, it has been tested in The ML Kit with Regions, a region-based, garbage-collection free implementation of the Standard ML Core language, which includes recursive datatypes, higher-order functions and updatable references (Birkedal et al. 96, Elsman and Hallenberg 95). This paper defines a region-based dynamic semantics for a skeletal programming language extracted from Standard ML. We present the inference system which specifies where regions can be allocated and de-allocated and a detailed proof that the system is sound wi...},
author = {Tofte, M},
doi = {10.1006/inco.1996.2613},
issn = {08905401},
journal = {Information and Computation},
number = {2},
pages = {109--176},
title = {{Region-Based Memory Management}},
volume = {132},
year = {1997}
}
@inproceedings{Doeraene2016,
address = {New York, New York, USA},
author = {Doeraene, S{\'{e}}bastien and Schlatter, Tobias and Stucki, Nicolas},
booktitle = {Proceedings of the 2016 7th ACM SIGPLAN Symposium on Scala - SCALA 2016},
doi = {10.1145/2998392.2998404},
isbn = {9781450346481},
pages = {85--94},
publisher = {ACM Press},
title = {{Semantics-driven interoperability between Scala.js and JavaScript}},
url = {http://dl.acm.org/citation.cfm?doid=2998392.2998404},
year = {2016}
}
@article{Reps2004,
author = {Reps, Thomas W and Sagiv, Shmuel and Yorsh, Greta},
isbn = {9783540208037},
issn = {16113349},
journal = {VMCAI},
pages = {252--266},
title = {{Symbolic Implementation of the Best Transformer}},
year = {2004}
}
@article{Terragni2015,
abstract = {Concurrent programs proliferate as multi-core technologies advance. The regression testing of concurrent programs often requires running a failing test for weeks before catching a faulty interleaving, due to the myriad of possible interleavings of memory accesses arising from concurrent program executions. As a result, the conventional approach that selects a subset of test cases for regression testing without considering interleavings is insufficient. In this paper we present RECONTEST to address the problem by selecting the new interleavings that arise due to code changes. These interleavings must be explored in order to uncover regression bugs. RECONTEST efficiently selects new interleavings by first identifying shared memory accesses that are affected by the changes, and then exploring only those problematic interleavings that contain at least one of these accesses. We have implemented RECONTEST as an automated tool and evaluated it using 13 real-world concurrent program subjects. Our results show that RECONTEST can significantly reduce the regression testing cost without missing any faulty interleavings induced by code changes.},
author = {Terragni, Valerio and Cheung, Shing Chi and Zhang, Charles},
doi = {10.1109/ICSE.2015.45},
isbn = {9781479919345},
issn = {02705257},
journal = {Proceedings - International Conference on Software Engineering},
pages = {246--256},
title = {{RECONTEST: Effective regression testing of concurrent programs}},
volume = {1},
year = {2015}
}
@article{Nieto2003,
author = {{Prensa Nieto}, Leonor},
isbn = {3-540-00886-1},
issn = {03029743},
journal = {Programming Languages and Systems},
pages = {348--362},
title = {{The rely-guarantee method in Isabelle/HOL}},
year = {2003}
}
@article{Reynolds1998,
abstract = {To introduce the republication of ``Definitional Interpreters for Higher-Order Programming Languages'', the author recounts the circumstances of its creation, clarifies several obscurities, corrects a few mistakes, and briefly summarizes some more recent developments.},
author = {Reynolds, John C.},
doi = {10.1023/A:1010075320153},
issn = {13883690},
journal = {Higher-Order and Symbolic Computation},
keywords = {applicative language,assignment,call by name,call by value,closure,continuation,continuation-passing-style transformation,defunctionalization,denotational semantics,escape,functional language,higher-order function,interpreter,iswim,j-operator,lambda calculus,lisp,metacircularity,operational semantics,pal,scheme,secd machine},
pages = {355--361},
title = {{Definitional Interpreters Revisited}},
url = {http://www.brics.dk/{~}hosc/local/HOSC-11-4-pp355-361.pdf},
volume = {11},
year = {1998}
}
@article{Li2011,
abstract = {Points-to analysis is a fundamental static analysis technique which computes the set of memory objects that a pointer may point to. Many different applications, such as security-related program analyses, bug checking, and analyses of multi-threaded programs, require precise points-to information to be effective. Recent work has focused on improving the precision of points-to analysis through flow-sensitivity and great progress has been made. However, even with all recent progress, flow-sensitive points-to analysis can still be much slower than a flow-insensitive analysis. In this paper, we propose a novel method that simplifies flow-sensitive points-to analysis to a general graph reachability problem in a value flow graph. The value flow graph summarizes dependencies between pointer variables, including those memory dependencies via pointer dereferences. The points-to set for each pointer variable can then be computed as the set of memory objects that can reach it in the graph. We develop an algorithm to build the value flow graph efficiently by examining the pointed-to-by set of a memory object, i.e., the set of pointers that point to an object. The pointed-to-by information of memory objects is very useful for applications such as escape analysis, and information flow analysis. Our approach is intuitive, easy to implement and very efficient. The implementation is around 2000 lines of code and it is more efficient than existing flow-sensitive points-to analyses. The runtime is comparable with the state-of-the-art flow-insensitive points-to analysis.},
author = {Li, Lian and Cifuentes, Cristina and Keynes, Nathan},
doi = {10.1145/2025113.2025160},
isbn = {9781450304436},
journal = {Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering - SIGSOFT/FSE '11},
pages = {343},
title = {{Boosting the performance of flow-sensitive points-to analysis using value flow}},
url = {http://dl.acm.org/citation.cfm?doid=2025113.2025160},
year = {2011}
}
@article{Godefroid2005,
author = {Godefroid, Patrice and Klarlund, Nils and Sen, Koushik},
doi = {10.1145/1064978.1065036},
isbn = {1595930809},
issn = {03621340},
keywords = {automated test generation,interfaces,program verification,random testing,software testing},
pages = {213--223},
title = {{DART: Directed Automated Random Testing}},
year = {2005}
}
@article{Fogg2015,
author = {Fogg, Peter and Tobin-Hochstadt, Sam and Newton, Ryan R},
pages = {1--12},
title = {{Parallel Type-checking with Saturating LVars}},
year = {2015}
}
@article{Zhang2014a,
abstract = {Inclusion-based alias analysis for C can be formulated as a context-free language (CFL) reachability problem. It is well known that the traditional cubic CFL-reachability algorithm does not scale well in practice. We present a highly scalable and efficient CFL-reachability-based alias analysis for C. The key novelty of our algorithm is to propagate reachability information along only original graph edges and bypass a large portion of summary edges, while the traditional CFL-reachability algorithm propagates along all summary edges. We also utilize the Four Russians' Trick - a key enabling technique in the subcubic CFL-reachability algorithm - in our alias analysis. We have implemented our subcubic alias analysis and conducted extensive experiments on widely-used C programs from the pointer analysis literature. The results demonstrate that our alias analysis scales extremely well in practice. In particular, it can analyze the recent Linux kernel (which consists of 10M SLOC) in about 30 seconds.},
author = {Zhang, Qirun and Xiao, Xiao and Zhang, Charles and Yuan, Hao and Su, Zhendong},
doi = {10.1145/2660193.2660213},
isbn = {9781450325851},
issn = {0362-1340},
journal = {Proceedings of the 2014 ACM International Conference on Object Oriented Programming Systems Languages {\&} Applications},
keywords = {algorithms,alias analysis,cfl-reachability,experimentation,languages},
pages = {829--845},
title = {{Efficient Subcubic Alias Analysis for C}},
url = {http://doi.acm.org/10.1145/2660193.2660213},
year = {2014}
}
@article{Hall1996,
author = {Hall, Cordelia V. and Hammond, Kevin and {Peyton Jones}, Simon L. and Wadler, Philip L.},
doi = {10.1145/227699.227700},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
number = {2},
pages = {109--138},
title = {{Type classes in Haskell}},
volume = {18},
year = {1996}
}
@article{Guhaa,
author = {Guha, Arjun and Lerner, Benjamin and Politz, Joe Gibbs},
pages = {1--3},
title = {{Web API Verification: Results and Challenges (Summary of Prior and Current Work)}}
}
@article{Jones1983,
abstract = {Development methods for (sequential) programs that run in isolation have been studied elsewhere. Programs that run in parallel can interfere with each other, either via shared storage or by sending messages. Extensions to earlier development methods are proposed for the rigorous development of interfering programs. In particular, extensions to the specification method based on postconditions that are predicates of two states and the development methods of operation decomposition and data refinement are proposed.},
author = {Jones, C. B.},
doi = {10.1145/69575.69577},
isbn = {0164-0925},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
keywords = {communicating sequential processes,design,guarantee-conditions,languages,rely-conditions,verification},
number = {4},
pages = {596--619},
title = {{Tentative steps toward a development method for interfering programs}},
url = {http://dl.acm.org/citation.cfm?id=69575.69577{\%}5Cnhttp://portal.acm.org/citation.cfm?doid=69575.69577},
volume = {5},
year = {1983}
}
@article{Owicki1976,
abstract = {A language for parallel programming, with a primitive construct for synchronization and mutual exclusion, is presented. Hoare's deductive system for proving partial correctness of sequential programs is extended to include the parallelism described by the language. The proof method lends insight into how one should understand and present parallel programs. Examples are given using several of the standard problems in the literature. Methods for proving termination and the absence of deadlock are also given.},
author = {Owicki, Susan and Gries, David},
doi = {10.1007/BF00268134},
issn = {00015903},
journal = {Acta Informatica},
number = {4},
pages = {319--340},
title = {{An axiomatic proof technique for parallel programs I}},
volume = {6},
year = {1976}
}
@article{Maas2015,
author = {Maas, Alisa J},
doi = {10.1145/2814189.2815367},
isbn = {9781450337229},
journal = {Companion Proceedings of the 2015 ACM SIGPLAN International Conference on Systems, Programming, Languages and Applications: Software for Humanity},
keywords = {bindings,ffi,foreign function interfaces,libraries,static analysis,type inference},
pages = {69--70},
title = {{Automatic Array Property Detection via Static Analysis}},
year = {2015}
}
@article{Herlihy1990,
abstract = {A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.},
author = {Herlihy, Maurice P. and Wing, Jeannette M.},
doi = {10.1145/78969.78972},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
keywords = {concurrency,serializability,shared memory,specification},
number = {3},
pages = {463--492},
title = {{Linearizability: a correctness condition for concurrent objects}},
url = {http://portal.acm.org/citation.cfm?doid=78969.78972},
volume = {12},
year = {1990}
}
@article{Tofte1998,
abstract = {Region Inference is a program analysis which infers lifetimes of values. It is targeted at a runtime model in which the store consists of a stack of regions and memory management predominantly consists of pushing and popping regions, rather than performing garbage collection. Region Inference has previously been specified by a set of inference rules which formalize when regions may be allocated and deallocated. This article presents an algorithm which implements the specification. We prove that the algorithm is sound with respect to the region inference rules and that it always terminates even though the region inference rules permit polymorphic recursion in regions. The algorithm is the result of several years of experiments with region inference algorithms in the ML Kit, a compiler from Standard ML to assembly language. We report on practical experience with the algorithm and give hints on how to implement it.},
author = {Tofte, Mads and Birkedal, Lars},
doi = {10.1145/291891.291894},
issn = {01640925},
journal = {ACM Transactions on Programming Languages and Systems},
number = {4},
pages = {724--767},
title = {{A region inference algorithm}},
volume = {20},
year = {1998}
}
@article{Steimann2006,
abstract = {Aspect-oriented programming is considered a promising new technology. As object-oriented programming did before, it is beginning to pervade all areas of software engineering. With its growing popularity, practitioners and academics alike are wondering whether they should start looking into it, or otherwise risk having missed an important development. The author of this essay finds that much of aspect-oriented programming's success seems to be based on the conception that it improves both modularity and the structure of code, while in fact, it works against the primary purposes of the two, namely independent development and understandability of programs. Not seeing any way of fixing this situation, he thinks the success of aspect-oriented programming to be paradoxical.},
author = {Steimann, Friedrich},
doi = {10.1145/1167515.1167514},
isbn = {1595933484},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {aspect-oriented-programming,modularization},
number = {10},
pages = {481},
title = {{The paradoxical success of aspect-oriented programming}},
url = {http://dx.doi.org/10.1145/1167515.1167514{\%}5Cnhttp://portal.acm.org/citation.cfm?doid=1167515.1167514},
volume = {41},
year = {2006}
}
@article{Calcagno2007a,
abstract = {Concurrent programs are difficult to verify because the proof must consider the interactions between the threads. Fine-grained concurrency and heap allocated data structures exacerbate this problem, because threads interfere more often and in richer ways. In this paper we provide a thread-modular safety checker for a class of pointer-manipulating fine-grained concurrent algorithms. Our checker uses ownership to avoid interference whenever possible, and rely/guarantee (assume/guarantee) to deal with interference when it genuinely exists.},
author = {Calcagno, Cristiano and Parkinson, Matthew and Vafeiadis, Viktor},
doi = {10.1007/978-3-540-74061-2_15},
isbn = {9783540740605},
issn = {03029743},
journal = {International Static Analysis Symposium},
pages = {233--248},
title = {{Modular Safety Checking for Fine-Grained Concurrency}},
year = {2007}
}
@article{Li2013a,
author = {Li, Lian and Cifuentes, Cristina and Keynes, Nathan},
doi = {10.1145/2491894.2466483},
isbn = {9781450321006},
issn = {0362-1340},
journal = {ACM SIGPLAN Notices},
keywords = {cfl-reachability,compact,context-sensitive analysis,demand- ing compact parameterized,driven,flow-sensitive analysis,function summaries,function summaries also suggest,function summary,hence the computed results,however,may,not be preserved in,that some useful information,the summary},
pages = {85--96},
title = {{Precise and Scalable Context-sensitive Pointer Analysis via Value Flow Graph}},
url = {http://dl.acm.org/citation.cfm?doid=2491894.2466483},
year = {2013}
}
@article{Sieczkowski2015,
author = {Sieczkowski, Filip and Bizjak, Ale{\v{s}} and Birkedal, Lars},
doi = {10.1007/978-3-319-22102-1_25},
isbn = {9783319221014},
issn = {16113349},
journal = {Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)},
pages = {375--390},
title = {{ModuRes: A Coq library for modular reasoning about concurrent higher-order imperative programming languages}},
volume = {9236},
year = {2015}
}
@article{Kozen,
author = {Kozen, Dexter},
isbn = {9783319127354},
issn = {16113349},
keywords = {frenetic,kleene algebra,kleene algebra with tests,netkat,openflow,packet switching,soft-,ware defined networking},
title = {{NetKAT — A Formal System for the Verification of Networks}}
}
@article{Hansen2003,
author = {Hansen, Helle Hvid},
keywords = {algebraic duality,coalgebra,correspondence,craig interpolation,definability,neighbourhood semantics,non-normal modal logic,simulation,theory,university of amsterdam},
pages = {117},
title = {{Monotonic Modal Logics}},
year = {2003}
}
@article{Boyland2003,
abstract = {We describe a type system for checking interference using the concept of linear capabilities (which we call “permissions”).},
author = {Boyland, John and Boyland, John},
doi = {10.1007/3-540-44898-5_4},
isbn = {978-3-540-40325-8},
journal = {Sas},
number = {9984681},
pages = {1075$\backslash$r--1075},
title = {{Checking Interference with Fractional Permissions}},
volume = {2003},
year = {2003}
}
@article{Klein2008,
author = {Klein, Gerwin},
keywords = {formal software verification,operating systems,theorem proving},
title = {{Operating System Verification --- An Overview}},
year = {2008}
}
keywords = {concurrency,race condition,separation logic,soundness},
pages = {1--16},
title = {{Concurrent Separation Logic Lecture Notes}},
url = {http://concurrency.cs.uni-kl.de/documents/ConcurrencyTheory{\_}SS{\_}2014/lecturenotes/30{\_}04{\_}2014{\_}csl-soundness.pdf},
year = {2014}
}
@article{Might2010,
abstract = {Low-level program analysis is a fundamental problem, taking the shape of "flow analysis" in functional languages and "points-to" analysis in imperative and object-oriented languages. Despite the similarities, the vocabulary and results in the two communities remain largely distinct, with limited cross-understanding. One of the few links is Shivers's k-CFA work, which has advanced the concept of "context-sensitive analysis" and is widely known in both communities. Recent results indicate that the relationship between the functional and object-oriented incarnations of k-CFA is not as well understood as thought. Van Horn and Mairson proved k-CFA for k ≥ 1 to be EXPTIME-complete; hence, no polynomial-time algorithm can exist. Yet, there are several polynomial-time formulations of context-sensitive points-to analyses in object-oriented languages. Thus, it seems that functional k-CFA may actually be a profoundly different analysis from object-oriented k-CFA. We resolve this paradox by showing that the exact same specification of k-CFA is polynomial-time for object-oriented languages yet exponential-time for functional ones: objects and closures are subtly different, in a way that interacts crucially with context-sensitivity and complexity. This illumination leads to an immediate payoff: by projecting the object-oriented treatment of objects onto closures, we derive a polynomial-time hierarchy of context-sensitive CFAs for functional programs.},
archivePrefix = {arXiv},
arxivId = {arXiv:1311.4231v1},
author = {Might, Matthew and Smaragdakis, Yannis and {Van Horn}, David},
doi = {10.1145/1809028.1806631},
eprint = {arXiv:1311.4231v1},
isbn = {978-1-4503-0019-3},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
keywords = {control-flow analysis,functional,k-cfa,m-cfa,object-oriented,pointer analysis,static analysis},
number = {6},
pages = {305},
title = {{Resolving and exploiting the k -CFA paradox}},
url = {http://dl.acm.org/citation.cfm?id=1809028.1806631{\%}5Cnhttp://portal.acm.org/citation.cfm?doid=1809028.1806631},
volume = {45},
year = {2010}
}
@article{Damas1982,
abstract = {An abstract is not available.},
author = {Damas, Luis and Milner, Robin},
doi = {10.1145/582153.582176},
isbn = {0897910656},
issn = {01406736},
journal = {Proceedings of the 9th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '82},
number = {October},
pages = {207--212},
title = {{Principal type-schemes for functional programs}},
url = {http://portal.acm.org/citation.cfm?doid=582153.582176},
year = {1982}
}
@article{Najd2016,
abstract = {We describe a new approach to implementing Domain-Specific Languages(DSLs), called Quoted DSLs (QDSLs), that is inspired by two old ideas:quasi-quotation, from McCarthy's Lisp of 1960, and the subformula principle of normal proofs, from Gentzen's natural deduction of 1935. QDSLs reuse facilities provided for the host language, since host and quoted terms share the same syntax, type system, and normalisation rules. QDSL terms are normalised to a canonical form, inspired by the subformula principle, which guarantees that one can use higher-order types in the source while guaranteeing first-order types in the target, and enables using types to guide fusion. We test our ideas by re-implementing Feldspar, which was originally implemented as an Embedded DSL (EDSL), as a QDSL; and we compare the QDSL and EDSL variants. The two variants produce identical code.},
archivePrefix = {arXiv},
arxivId = {1507.07264},
author = {Najd, Shayan and Lindley, Sam and Svenningsson, Josef and Wadler, Philip},
doi = {10.1145/2847538.2847541},
eprint = {1507.07264},
isbn = {9781450340977},
journal = {Proceedings of the 2016 ACM SIGPLAN Workshop on Partial Evaluation and Program Manipulation - PEPM 2016},
keywords = {DSL,EDSL,QDSL,domain-specific language,embedded language,normalisation,quotation,subformula principle},
pages = {25--36},
title = {{Everything old is new again: quoted domain-specific languages}},
url = {http://dl.acm.org/citation.cfm?id=2847538.2847541},
year = {2016}
}
@article{OHearn1999,
abstract = {We introduce a logic BI in which a multiplicative (or linear) and an additive (or intuitionistic) implication live side-by-side. The propositional version of BI arises from an analysis of the proof-theoretic relationship between conjunction and implication; it can be viewed as a merging of intuitionistic logic and multiplicative intuitionistic linear logic. The naturality of BI can be seen categorically: models of propositional BI's proofs are given by bicartesian doubly closed categories, i.e., categories which freely combine the semantics of propositional intuitionistic logic and propositional multiplicative intuitionistic linear logic. The predicate version of BI includes, in addition to standard additive quantifiers, multiplicative (or intensional) quantifiers ∀ new and ∃ new which arise from observing restrictions on structural rules on the level of terms as well as propositions. We discuss computational interpretations, based on sharing, at both the propositional and predicate levels.},
author = {O'Hearn, Peter W. and Pym, David J.},
doi = {10.2307/421090},
isbn = {9788578110796},
issn = {1079-8986},
journal = {Bulletin of Symbolic Logic},
number = {02},
pages = {215--244},
title = {{The Logic of Bunched Implications}},
url = {https://www.jstor.org/stable/421090{\%}5Cnhttp://www.journals.cambridge.org/abstract{\_}S1079898600007022},
volume = {5},
year = {1999}
}
@article{Sider2010,
author = {Sider, Theodore},
issn = {0004-5411},
journal = {Logic For Philosophy},
pages = {1--10},
title = {{Propositional Modal Logic}},
year = {2010}
}
@article{Hallgren2005,
abstract = {We describe amonadic interface to low-level hardware features that is a suitable basis for building operating systems in Haskell. The interface includes primitives for controlling memory management hardware, user-mode process execution, and low-level device I/O. The interface enforces memory safety in nearly all circumstances. Its behavior is specified in part by formal assertions written in a programming logic called P-Logic. The interface has been imple- mented on bare IA32 hardware using the Glasgow Haskell Com- piler (GHC) runtime system.We show how a variety of simple O/S kernels can be constructed on top of the interface, including a sim- ple separation kernel and a demonstration system in which the ker- nel, window system, and all device drivers are written in Haskell.},
author = {Hallgren, Thomas and Jones, Mark P. and Leslie, Rebekah and Tolmach, Andrew},
doi = {10.1145/1090189.1086380},
isbn = {1595930647},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
number = {9},
pages = {116},
title = {{A principled approach to operating system construction in Haskell}},
url = {http://portal.acm.org/citation.cfm?doid=1090189.1086380},
volume = {40},
year = {2005}
}
@article{Kovacs2016,
abstract = {The theory of finite term algebras provides a natural framework to describe the semantics of functional languages. The ability to efficiently reason about term algebras is essential to automate program analysis and verification for functional or imperative programs over algebraic data types such as lists and trees. However, as the theory of finite term algebras is not finitely axiomatizable, reasoning about quantified properties over term algebras is challenging. In this paper we address full first-order reasoning about properties of programs manipulating term algebras, and describe two approaches for doing so by using first-order theorem proving. Our first method is a conservative extension of the theory of term algebras using a finite number of statements, while our second method relies on extending the superposition calculus of first-order theorem provers with additional inference rules. We implemented our work in the first-order theorem prover Vampire and evaluated it on a large number of algebraic data type benchmarks, as well as game theory constraints. Our experimental results show that our methods are able to find proofs for many hard problems previously unsolved by state-of-the-art methods. We also show that Vampire implementing our methods outperforms existing SMT solvers able to deal with algebraic data types.},
archivePrefix = {arXiv},
arxivId = {1611.02908},
author = {Kovacs, Laura and Robillard, Simon and Voronkov, Andrei},
eprint = {1611.02908},
keywords = {algebraic data types,automated reasoning,first-order theorem proving,program analysis and verification,superposition},
title = {{Coming to Terms with Quantified Reasoning}},
url = {http://arxiv.org/abs/1611.02908},
year = {2016}
}
@article{Lu,
author = {Lu, Shan},
isbn = {9781450344449},
keywords = {abstraction refinement,automated debugging,bug isolation,field failures,statistical},
title = {{Low-Overhead and Fully Automated Statistical Debugging with Abstraction Refinement}}
}
@article{Angiuli2016,
abstract = {Formal constructive type theory has proved to be an effective language for mechanized proof. By avoiding non-constructive principles, such as the law of the excluded middle, type theory admits sharper proofs and broader interpretations of results. From a computer science perspective, interest in type theory arises from its applications to programming languages. Standard constructive type theories used in mechanization admit computational interpretations based on meta-mathematical normalization theorems. These proofs are notoriously brittle; any change to the theory potentially invalidates its computational meaning. As a case in point, Voevodsky's univalence axiom raises questions about the computational meaning of proofs. We consider the question: Can higher-dimensional type theory be construed as a programming language? We answer this question affirmatively by providing a direct, deterministic operational interpretation for a representative higher-dimensional dependent type theory with higher inductive types and an instance of univalence. Rather than being a formal type theory defined by rules, it is instead a computational type theory in the sense of Martin-Lo ̈f's meaning explanations and of the NuPRL semantics. The definition of the type theory starts with programs, and defines types as specifica- tions of program behavior. The main result is a canonicity theorem, the first of its kind, stating that closed programs of boolean type evaluate to true or false.},
author = {Angiuli, Carlo and Harper, Robert and Wilson, Todd},
keywords = {homotopy type theory,logical relations},
title = {{Computational Higher-Dimensional Type Theory}},
year = {2016}
}
@article{Scherer2016,
abstract = {The logical technique of focusing can be applied to the {\$}\backslashlambda{\$}-calculus; in a simple type system with atomic types and negative type formers (functions, products, the unit type), its normal forms coincide with {\$}\backslashbeta\backslasheta{\$}-normal forms. Introducing a saturation phase gives a notion of quasi-normal forms in presence of positive types (sum types and the empty type). This rich structure let us prove the decidability of {\$}\backslashbeta\backslasheta{\$}-equivalence in presence of the empty type, the fact that it coincides with contextual equivalence, and a finite model property.},
archivePrefix = {arXiv},
arxivId = {1610.01213},
author = {Scherer, Gabriel},
eprint = {1610.01213},
title = {{Deciding equivalence with sums and the empty type}},
url = {http://arxiv.org/abs/1610.01213},
year = {2016}
}
@article{Marlow2009,
abstract = {Purely functional programs should run well on parallel hardware because of the absence of side effects, but it has proved hard to realise this potential in practice. Plenty of papers describe promising ideas, but vastly fewer describe real implementations with good wall-clock performance. We describe just such an implementation, and quantitatively explore some of the complex design tradeoffs that make such implementations hard to build. Our measurements are necessarily detailed and specific, but they are reproducible, and we believe that they offer some general insights.},
author = {Marlow, Simon and {Peyton Jones}, Simon and Singh, Satnam},
doi = {10.1145/1631687.1596563},
isbn = {9781605583327},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
number = {9},
pages = {65},
title = {{Runtime support for multicore Haskell}},
url = {http://portal.acm.org/citation.cfm?doid=1631687.1596563},
volume = {44},
year = {2009}
}
@article{Barthe2016,
abstract = {Couplings are a powerful mathematical tool for reasoning about pairs of probabilistic processes. Recent developments in formal verification identify a close connection between couplings and pRHL, a relational program logic motivated by applications to provable security, enabling formal construction of couplings from the probability theory literature. However, existing work using pRHL merely shows existence of a coupling and does not give a way to prove quantitative properties about the coupling, which are need to reason about mixing and convergence of probabilistic processes. Furthermore, pRHL is inherently incomplete, and is not able to capture some advanced forms of couplings such as shift couplings. We address both problems as follows. First, we define an extension of pRHL, called xpRHL, which explicitly constructs the coupling in a pRHL derivation in the form of a probabilistic product program that simulates two correlated runs of the original program. Existing verification tools for probabilistic programs can then be directly applied to the probabilistic product to prove quantitative properties of the coupling. Second, we equip pRHL with a new rule for while loops, where reasoning can freely mix synchronized and unsynchronized loop iterations. Our proof rule can capture examples of shift couplings, and the logic is relatively complete for deterministic programs. We show soundness of xpRHL and use it to analyze two classes of examples. First, we verify rapid mixing using different tools from coupling: standard coupling, shift coupling, and path coupling, a compositional principle for combining local couplings into a global coupling. Second, we verify (approximate) equivalence between a source and an optimized program for several instances of loop optimizations from the literature.},
archivePrefix = {arXiv},
arxivId = {1607.03455},
author = {Barthe, Gilles and Gr{\'{e}}goire, Benjamin and Hsu, Justin and Strub, Pierre-Yves},
eprint = {1607.03455},
keywords = {formal verification,probabilistic algorithms,probabilistic couplings,product programs,rela-,tional hoare logic},
title = {{Coupling proofs are probabilistic product programs}},
url = {http://arxiv.org/abs/1607.03455},
year = {2016}
}
@article{Ahman2016,
abstract = {Dijkstra monads are a means by which a dependent type theory can be enhanced with support for reasoning about effectful code. These specification-level monads computing weakest preconditions, and their closely related counterparts, Hoare monads, provide the basis on which verification tools like F*, Hoare Type Theory (HTT), and Ynot are built. In this paper we show that Dijkstra monads can be derived "for free" by applying a continuation-passing style (CPS) translation to the standard monadic definitions of the underlying computational effects. Automatically deriving Dijkstra monads provides a correct-by-construction and efficient way of reasoning about user-defined effects in dependent type theories. We demonstrate these ideas in EMF*, a new dependently typed calculus, validating it both by formal proof and via a prototype implementation within F*. Besides equipping F* with a more uniform and extensible effect system, EMF* enables within F* a mixture of intrinsic and extrinsic proofs that was previously impossible.},
archivePrefix = {arXiv},
arxivId = {1608.06499},
author = {Ahman, Danel and Hritcu, Catalin and Martinez, Guido and Plotkin, Gordon and Protzenko, Jonathan and Rastogi, Aseem and Swamy, Nikhil},
eprint = {1608.06499},
keywords = {effectful programming,proof assistants,verification},
url = {http://arxiv.org/abs/1608.06499},
year = {2016}
}
@article{Martins2016,
author = {Martins, Ruben and Wang, Yuepeng and Reps, Thomas W},
keywords = {component-based,petri-net,program,type-directed},
pages = {1--16},
title = {{Component-Based Synthesis for Complex APIs}},
year = {2016}
}
@article{Levy2017,
author = {Levy, Paul Blain},
keywords = {call-by-push-value,computational,contextual equivalence,effects,isomorphism},
title = {{Contextual Isomorphisms}},
year = {2017}
}
@article{Zhang2016b,
author = {Zhang, Yang and Feng, Xinyu},
doi = {10.1007/s11704-015-4492-4},
issn = {20952236},
journal = {Frontiers of Computer Science},
keywords = {DRF-Guarantee,JMM,happens-before,operatonal semantics,relaxed memory model},
number = {1},
pages = {54--81},
title = {{An operational happens-before memory model}},
volume = {10},
year = {2016}
}
@article{Shan2016,
abstract = {Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes's rule. But the observation of a continuous quantity usually has probability zero, in which case Bayes's rule says only that the unknown times zero is zero. To infer a posterior distribution from a zeroprobability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an observable expression, solving the problem of drawing inferences from zeroprobability observations. Because the method produces an exact posterior term, it composes with other inference methods in a modular way without sacrificing accuracy or performance.},
author = {Shan, Chung-chieh and Ramsey, Norman},
keywords = {conditional measures,continu-,probabilistic programs},
pages = {1--15},
title = {{Exact Bayesian Inference by Symbolic Disintegration}},
year = {2016}
}
@article{Lehmann2017,
author = {Lehmann, Nico},
isbn = {9781450346603},
keywords = {abstract interpreta-,gradual typing,refinement types},
number = {Dcc},
pages = {1543--1556},
year = {2017}
}
@article{Bornat,
author = {Bornat, Richard and Calcagno, Cristiano and Hearn, Peter O and Parkinson, Matthew},
isbn = {158113830X},
keywords = {2005,by permission of acm,c acm,concurrency,for your personal use,it is posted here,logic,not for redistribution,permissions,s version of the,separation,the,this is the author,work},
title = {{Permission Accounting in Separation Logic Categories and Subject Descriptors}}
}
@article{Danvy1992,
author = {Danvy, Oliver and Filinski, Andrzex},
doi = {10.1017/S0960129500001535},
issn = {0960-1295},
journal = {Mathematical Structures in Computer Science},
month = {dec},
number = {04},
pages = {361},
title = {{Representing Control: a Study of the CPS Transformation}},
url = {http://www.journals.cambridge.org/abstract{\_}S0960129500001535},
volume = {2},
year = {1992}
}
@article{Brutschy,
author = {Brutschy, Lucas and Peter, M and Vechev, Martin},
title = {{Effective Serializability for Eventual Consistency}}
}
@article{Omar2016,
abstract = {Programs are rich inductive structures, but human programmers typically construct and manipulate them only indirectly, through flat textual representations. This indirection comes at a cost – programmers must comprehend the various subtleties of parsing, and it can require many text editor actions to make a single syntactically and semantically welldefined change. During these sequences of text editor actions, or when the programmer makes a mistake, programmers and programming tools must contend with malformed or semantically illdefined program text, complicating the programming process. Structure editors promise to alleviate these burdens by exposing only edit actions that produce sensible changes to the program structure. Existing designs for structure editors, however, are complex and somewhat ad hoc. They also focus primarily on syntactic wellformedness, so programs can still be left semantically illdefined as they are being constructed. In this paper, we report on our ongoing efforts to develop Hazelnut, a minimal structure editor defined in a principled typetheoretic style where all edit actions leave the program in both a syntactically and semantically welldefined state. Uniquely, Hazelnut does not force the programmer to construct the program in a strictly " outsidein " fashion. Formally, Hazelnut is a bidirectionally typed lambda calculus extended with 1) holes (which mark subterms that are being constructed from the inside out); 2) a focus model ; and 3) a bidirectional action model equipped with a useful action sensibility theorem.},
archivePrefix = {arXiv},
arxivId = {arXiv:1607.04180v1},
author = {Omar, Cyrus and Voysey, Ian and Hilton, Michael and Aldrich, Jonathan and Hammer, Matthew A},
eprint = {arXiv:1607.04180v1},
title = {{Hazelnut : A Bidirectionally Typed Structure Editor Calculus}},
year = {2016}
}
@article{Lindley2016,
abstract = {We explore the design and implementation of Frank, a strict functional programming language with a bidirectional effect type system designed from the ground up around a novel variant of Plotkin and Pretnar's effect handler abstraction. Effect handlers provide an abstraction for modular effectful programming: a handler acts as an interpreter for a collection of commands whose interfaces are statically tracked by the type system. However, Frank eliminates the need for an additional effect handling construct by generalising the basic mechanism of functional abstraction itself. A function is simply the special case of a Frank operator that interprets no commands. Moreover, Frank's operators can be multihandlers which simultaneously interpret commands from several sources at once, without disturbing the direct style of functional programming with values. Effect typing in Frank employs a novel form of effect polymorphism which avoid mentioning effect variables in source code. This is achieved by propagating an ambient ability inwards, rather than accumulating unions of potential effects outwards. We introduce Frank by example, and then give a formal account of the Frank type system and its semantics. We introduce Core Frank by elaborating Frank operators into functions, case expressions, and unary handlers, and then give a sound small-step operational semantics for Core Frank. Programming with effects and handlers is in its infancy. We contribute an exploration of future possibilities, particularly in combination with other forms of rich type system.},
archivePrefix = {arXiv},
arxivId = {1611.09259},
author = {Lindley, Sam and McBride, Conor and McLaughlin, Craig},
eprint = {1611.09259},
keywords = {algebraic effects,bidi-,call-by-push-value,continuations,effect handlers,effect polymor-,pattern matching,phism},
title = {{Do be do be do}},
url = {http://arxiv.org/abs/1611.09259},
year = {2016}
}
@article{Markus2017,
author = {Markus, P and Vechev, Martin},
isbn = {9781450346603},
keywords = {abstract interpretation,numerical program analysis,partitions,performance optimization,polyhedra decomposition},
title = {{Fast Polyhedra Abstract Domain}},
year = {2017}
}
@article{Germane2016,
author = {Germane, Kimball},
keywords = {and may,be established by an,before inlining,critical role,environment analysis,environment analysis plays a,static analysis,the,the potential inline in,this condition concerns environments},
pages = {1--13},
title = {{A Posteriori Environment Analysis with Pushdown Delta CFA}},
year = {2016}
}
@article{Lange2016,
abstract = {Go is a production-level statically typed programming language whose design features explicit message-passing primitives and lightweight threads, enabling (and encouraging) programmers to develop concurrent systems where components interact through communication more so than by lock-based shared memory concurrency. Go can only detect global deadlocks at runtime, but provides no compile-time protection against all too common communication mis-matches or partial deadlocks. This work develops a static verification framework for liveness and safety in Go programs, able to detect communication errors and partial deadlocks in a general class of realistic concurrent programs, including those with dynamic channel creation, unbounded thread creation and recursion. Our approach infers from a Go program a faithful representation of its communication patterns as a behavioural type. By checking a syntactic restriction on channel usage, dubbed fencing, we ensure that programs are made up of finitely many different communication patterns that may be repeated infinitely many times. This restriction allows us to implement a decision procedure for liveness and safety in types which in turn statically ensures liveness and safety in Go programs. We have implemented a type inference and decision procedures in a tool-chain and tested it against publicly available Go programs.},
archivePrefix = {arXiv},
arxivId = {1610.08843},
author = {Lange, Julien and Ng, Nicholas and Toninho, Bernardo and Yoshida, Nobuko},
eprint = {1610.08843},
keywords = {channel-based programming,compile-time,deadlock detection,message-passing,process calculus,programming,safety and liveness,static,types},
title = {{Fencing off Go: Liveness and Safety for Channel-based Programming (extended version)}},
url = {http://arxiv.org/abs/1610.08843},
year = {2016}
}
@inproceedings{Sousa2016,
author = {Sousa, Marcelo and Dillig, Isil},
booktitle = {PLDI '16 Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation},
doi = {10.1145/2980983.2908092},
isbn = {8750152009},
issn = {03621340},
keywords = {automated verification,product programs,relational hoare logic,safety hyper-properties},
month = {jun},
number = {6},
pages = {57--69},
title = {{Cartesian hoare logic for verifying k-safety properties}},
url = {http://dl.acm.org/citation.cfm?doid=2980983.2908092},
volume = {51},
year = {2016}
}
@article{Richards,
author = {Richards, Gregor and Burg, Brian},
keywords = {dynamic behavior,dynamic met-,execution tracing,javascript,program analysis,rics},
title = {{An Analysis of the Dynamic Behavior of JavaScript Programs.pdf}}
}
@article{Shapiro2011,
author = {Shapiro, Marc},
number = {2011},
title = {{A comprehensive study of Convergent and Commutative Replicated Data}},
year = {2011}
}
@article{Cimini2016a,
author = {Cimini, Matteo and Siek, Jeremy G},
keywords = {gradual typing,operational semantics,type systems},
pages = {1--15},
title = {{Automatically Generating the Dynamic Semantics of Gradually Typed Languages}},
year = {2016}
}
@article{Introduction,
author = {Introduction, A N and Liquidhaskell, T O},
title = {{Programming with Refinement Types}}
}
@article{Rojas1997,
abstract = {This paper is a short and painless introduction to the $\lambda$ calculus. Originally developed in order to study some mathematical properties of effectively com- putable functions, this formalism has provided a strong theoretical foundation for the family of functional programming languages. We show how to perform some arithmetical computations using the $\lambda$ calculus and how to define recur- sive functions, even though functions in $\lambda$ calculus are not given names and thus cannot refer explicitly to themselves.},
archivePrefix = {arXiv},
arxivId = {arXiv:1503.09060v1},
author = {Rojas, Ra{\'{u}}l},
doi = {10.1006/anbe.1999.1219},
eprint = {arXiv:1503.09060v1},
issn = {00033472},
journal = {FU Berlin},
pages = {1--9},
pmid = {10512656},
title = {{A Tutorial Introduction to the Lambda Calculus}},
volume = {58},
year = {1997}
}
@article{Mandelbaum2003,
abstract = {We develop an explicit two level system that allows programmers to reason about the behavior of effectful programs. The first level is an ordinary ML-style type system, which confers standard properties on program behavior. The second level is a conservative extension of the first that uses a logic of type refinements to check more precise properties of program behavior. Our logic is a fragment of intuitionistic linear logic, which gives programmers the ability to reason locally about changes of program state. We provide a generic resource semantics for our logic as well as a sound, decidable, syntactic refinement-checking system. We also prove that refinements give rise to an optimization principle for programs. Finally, we illustrate the power of our system through a number of examples.},
author = {Mandelbaum, Yitzhak and Walker, David and Harper, Robert},
doi = {10.1145/944746.944725},
isbn = {1-58113-756-7},
issn = {03621340},
journal = {ACM SIGPLAN Notices},
pages = {213--225},
title = {{An effective theory of type refinements}},
volume = {38},
year = {2003}
}
@article{li2017static,
title={Static analysis of android apps: A systematic literature review},
author={Li, Li and Bissyand{\'e}, Tegawend{\'e} F and Papadakis, Mike and Rasthofer, Siegfried and Bartel, Alexandre and Octeau, Damien and Klein, Jacques and Traon, Le},
journal={Information and Software Technology},
volume={88},
pages={67--95},
year={2017},
publisher={Elsevier}
}
`
|
|
Acta Nat. Sci. | e-ISSN: 2718-0638

#### Volume 2 Issue 2 (December 2021)
Issue Information

Issue Full File (2021, Volume 2, Issue 2): Cover, Editorial Board, Indexing, Table of Contents
pp. i - vi | DOI: 10.29329/actanatsci.2021.350

Original Articles

Does Maritime Transport Network Converge? Evidence From EU Countries
Abdullah Açık | pp. 86 - 93 | DOI: 10.29329/actanatsci.2021.350.01
Abstract: The concept of convergence is an important issue that has gained a wide place in the economics literature. The convergence of economies is likely to have an impact on the transportation sector as well, since it is the most important supporter of economic activities and is directly affected by economic indicators due to its derived demand structure. However, the possible reflections of economic convergence in the transportation sector have not received sufficient attention in the literature. In this study, we investigate this issue in the European Union (EU) countries, where economic convergence is implemented as a union policy and where the existence of convergence is empirically supported in many studies. We select the Liner Shipping Connectivity Index (LSCI) as our variable, which indicates a country's level of connectivity in the liner transportation network, mostly used to transport intermediate and finished goods. We determine that there is convergence among EU countries in terms of the maritime transportation network. This result shows that economic integration leads to improvements not only in the incomes of poorer countries but also in their transportation networks.
Keywords: Convergence, Liner shipping, Unit root

Are Housing and Ship Demolition Markets Integrated? Evidence From Turkey
Kamil Özden Efes | pp. 94 - 100 | DOI: 10.29329/actanatsci.2021.350.02
Abstract: The purpose of this study is to investigate the impact of the demand for new houses on ship demolition prices in Turkey through new house sales statistics and Turkish ship demolition prices. To this end, an asymmetric causality test is used, which makes it possible to separate the shocks contained in the variables and to determine the causal relationships between these shocks. According to the test results, causality runs from positive shocks in house sales to positive shocks in demolition prices and from negative shocks in house sales to negative shocks in demolition prices. On the other hand, no significant causality from demolition prices to new house sales could be determined. This shows that changes in new house sales are determinative for ship demolition prices in Turkey.
Keywords: Maritime economics, Demolition prices, Asymmetric causality test

Effects of Temperature and Nitrogen Concentration on Growth and Lipid Accumulation of the Green Algae Chlorella vulgaris for Biodiesel
Şafak Seyhaneyıldız Can, Edis Koru, Semra Cirik, Gamze Turan, Hatice Tekoğul & Tuğba Subakan | pp. 101 - 108 | DOI: 10.29329/actanatsci.2021.350.03
Abstract: This study investigated the effect of different temperatures and different nitrogen concentrations on the lipid content and biomass of Chlorella microalgae. Algae were cultured in five media with different amounts of $NaNO_{3}$ (3, 1.5, 0.80, and 0.40 g/L, plus a nitrate-free medium) at three temperatures (10, 20, 30 °C). The results of the experiments showed that the optimal temperature and nitrogen concentration for biomass increase in Chlorella vulgaris are 30°C and 3 g/L, respectively. Biomass decreased and lipid amount increased as the nitrogen concentration decreased. The highest lipid amount, 20.80% of dry weight (DW), was obtained from the algae produced at 30°C in the nitrate-free medium. The contribution of temperature change to lipid production was not as effective as nitrogen deficiency. According to the GC-FID fatty acid analysis, C. vulgaris appears suitable for biodiesel production because it contains medium-length (C16-C18) fatty acid chains.
Keywords: Chlorella vulgaris, Nitrogen starvation, Temperature, Lipids biodiesel

Effect of Homalothecium sericeum (Hedw.) Schimp. Extract on SOD1 Activity in Rat Tissues (Kidney, Adrenal Gland, Ovary)
Özlem Yayıntaş, Latife Ceyda İrkin & Şamil Öztürk | pp. 109 - 117 | DOI: 10.29329/actanatsci.2021.350.04
Abstract: Homalothecium sericeum (Hedw.) Schimp. grows in habitats such as walls and roofs. Studies support that this moss contains antioxidant, antimicrobial, and antitumoral compounds. The aim was to determine the immunoreactivity of the Cu/Zn SOD enzyme in the kidney, adrenal gland, and ovarian tissues of rats as the dose of moss extract increases. The control group (G1) was given 1 mL of distilled water, while the other groups were administered 50 mg/kg (G2), 100 mg/kg (G3), 300 mg/kg (G4), and 500 mg/kg (G5) of moss extract by gavage for 30 days. At the end of the experimental period, the tissues taken from the rats were subjected to routine histopathological procedures. A Cu/Zn SOD primary antibody was applied using immunohistochemical staining methods to detect immunoreactivity. The stained tissue samples were evaluated with an image analysis system under the light microscope, and the differences between the groups were assessed using the Kruskal-Wallis test, one of the nonparametric tests. A significant dose-related difference was found in the positivity of the kidney, ovarian, and adrenal gland tissues of the groups given moss extract. It was determined that H. sericeum increases Cu/Zn SOD enzyme activity in the kidney, adrenal gland, and ovarian tissues, and that its cytotoxic effects show a dose-related increase in the histopathological picture.
Keywords: Cu/Zn SOD (SOD1), Kidney, Adrenal gland, Ovary, Moss, Immunohistochemistry

Determination of Sexual Dimorphism in the Freshwater Blenny, Salaria fluviatilis (Asso, 1801), Distributed in Brackish Water Habitats
Sule Gurkan & Deniz Innal | pp. 118 - 123 | DOI: 10.29329/actanatsci.2021.350.05
Abstract: The present study aimed to determine the presence of sexual dimorphism based on morphometric measurements in a total of 60 samples (♀: 26; ♂: 14; immature: 20) obtained in April 2017 from the population of Salaria fluviatilis distributed in the brackish waters of the Karpuzçay Creek (Antalya, Turkey). The morphometric analysis performed on both sexes determined differences between body parts in terms of total length (TL), dorsal fin length (DFL), snout length, and eye diameter in the head area. Accordingly, the allometrically growing body parts of males were larger than those of females. The properties of sexual dimorphism in the body parts of the freshwater blenny cause significant differences between the sexes in brackish water forms. Differences in male individuals such as TL and long DFL are important criteria for the selection of large male individuals in mating. The increases in snout length and eye diameter in the head region are thought to give males advantages in areas such as feeding performance in the habitat, mate selection in mating, and swimming performance.
Keywords: Blennid species, Salaria fluviatilis, Dimorphic structure, Morphometric features, Phenotypic response

Review Articles

A Mini-Review on Polycyclic Aromatic Hydrocarbons (PAHs) in Some Smoked Fish
Nuray Çiftçi & Deniz Ayas | pp. 124 - 129 | DOI: 10.29329/actanatsci.2021.350.06
Abstract: The effects of pollution sources on organisms can occur in different ways. Polycyclic aromatic hydrocarbons (PAHs), among the pollutants originating from organic materials, enter aquatic ecosystems by washing out of the atmosphere and soil, accumulate there, and are easily transported to higher trophic levels through the food chain. Consumption of these products with high nutritional value poses a threat to human health. Processing these widely consumed fresh products is another way they can be exposed to PAHs. As is known, PAHs are formed as a result of the pyrolysis and pyrosynthesis of insufficiently burned organic materials. In this sense, smoked products that are not produced under suitable conditions may carry a risk of PAH formation. In this study, the factors that cause PAH formation in smoked products and the processing steps developed to eliminate these factors were compiled.
Keywords: PAH, Smoked Fish, Formation, Toxicity, Prevention

Original Articles

ICCAT Inspections in Turkey and Turkey's National Legislation Compliance With the ICCAT Recommendations
Raziye Tanrıverdi | pp. 130 - 140 | DOI: 10.29329/actanatsci.2021.350.07
Abstract: It was determined that bluefin tuna fishing within the scope of the International Commission for the Conservation of Atlantic Tunas (ICCAT) inspections is carried out in accordance with ICCAT Recommendation 19-04 in Turkey. In the swordfish fishery, ICCAT Recommendation 16-05 was found not to be completely compatible with Turkey's national legislation. In Recommendation 16-05, the fishing gear for the swordfish fishery is specified as the longline, the length of pelagic longlines and the number of hooks are limited, a minimum weight limit is set for swordfish, and at-sea transshipment of swordfish is prohibited during the fishing season. However, these legal regulations are not present in Turkey's national legislation. The following points may explain why: the majority of the fishing vessels engaged in the swordfish fishery in the seas of Turkey are less than 12 meters in total length; fishing vessels of 12 meters or more may only use the first fishing gear; the length of pelagic swordfish longlines and the number of hooks used in Turkey are far below the limits set in Recommendation 16-05; caught swordfish are landed whole; and the majority of swordfish fishery vessels stay at sea for one to three days per trip. The ICCAT inspections of the swordfish fishery in Turkey were carried out according to Recommendation 16-05. The necessary information and incentives should be provided so that fishing vessels under 12 m in length holding swordfish fishery permits can use fuel without the special consumption tax, in order to monitor them electronically via the Vessel Tracking Module. In order to control the quota in the swordfish fishery, the use of a paper logbook should be made obligatory for fishing vessels under 12 meters in length. In addition, a Vessel Monitoring System could also be installed on fishing vessels under 12 m that will catch swordfish.
Keywords: Audit institutions, Electronic monitoring tools, Control, Inspection, Recommendation

Review Articles

A Review of Reported Bacterial Diseases and Antibiotic Use in Tilapia Culture in the Philippines
Albaris Tahiluddin & Ertuğrul Terzi | pp. 141 - 147 | DOI: 10.29329/actanatsci.2021.350.08
Abstract: Aquaculture has become important for meeting the demand for animal food in both local and international markets due to the increasing world population. Tilapias are among the most significant cultured species worldwide, and the Philippines is one of the leading tilapia-producing countries. Tilapias are the second most preferred fish in the Philippines, constituting about 12% of its total aquaculture production in 2018. Tilapia cultivation is practiced nationwide, mostly in fish ponds and cages in various environments. Although tilapias are fairly hardy fish, investigating them for bacterial infections also allows us to follow the changing bacterial world. In this study, we review articles that previously reported bacterial diseases and antibiotic use in tilapia culture in the Philippines. Streptococcosis, motile Aeromonas septicemia, and Pseudomonas infection, caused by Streptococcus agalactiae and S. iniae, Aeromonas hydrophila, and Pseudomonas fluorescens and P. aeruginosa, respectively, were the identified fish diseases. Chloramphenicol, ampicillin, tetracycline, and erythromycin were among the most commonly used antibiotics in tilapia culture.
Keywords: Tilapia, Bacterial disease, Antibiotics, Philippines

Original Articles

The Effects of Mucilage Event on the Population of Critically Endangered Pinna nobilis (Linnaeus 1758) in Ocaklar Bay (Marmara Sea, Turkey)
Deniz Acarlı, Sefa Acarlı & Semih Kale | pp. 148 - 158 | DOI: 10.29329/actanatsci.2021.350.09
Abstract: This paper aims to understand the potential effects of the mucilage event on the critically endangered Pinna nobilis in Ocaklar Bay, in the southern Marmara Sea. Underwater surveys were carried out in October 2020 and July 2021. The study area covers 500 $m^{2}$, divided into 5 main zones of 100 $m^{2}$ (10×10 m) each. Each main zone was then separated into sub-areas of 25 $m^{2}$ (5×5 m). The habitat structure, depth, and presence of mucilage were observed with SCUBA diving equipment in the sub-areas. During the underwater observations, a total of 228 individuals were counted, of which 130 were alive and 98 were dead. The minimum and maximum population densities (including both dead and alive individuals) of P. nobilis were found to be 10 and 112 individuals per 100 $m^{2}$ in the study area. The mortality rates were calculated as 35.96% and 16.12% for 2020 and 2021, respectively. This paper puts forward that the P. nobilis population may be resistant to extreme environmental stress, and that even juvenile individuals (smaller than 15 cm) were recruited in the study area during the mucilage event.
Keywords: Critically endangered species, Pinna nobilis, Mucilage, Mortality, Survival, Marmara Sea

Length-Weight Relationships of Four Symphodus Species (Perciformes: Labridae) off Gökçeada Island (Northern Aegean Sea, Turkey)
Özgür Cengiz | pp. 159 - 165 | DOI: 10.29329/actanatsci.2021.350.10
Abstract: The present work provides length-weight relationships (LWRs) for four Symphodus species off Gökçeada Island (Northern Aegean Sea, Turkey). Sampling took place between November 2013 and December 2014 from commercial fishmongers. This study presents the most recent and broadest analysis of the LWRs for the studied species: Symphodus ocellatus, Symphodus tinca, Symphodus mediterraneus, and Symphodus rostratus. The b value varied between 2.81 and 3.37, whereas $r^{2}$ ranged from 0.89 to 0.95.
Keywords: Length-weight relationships, Symphodus species, Gökçeada Island, Turkey

Effects of Annual Grass with the Mixtures of Legume on Agronomic Growth of Plants
Fırat Alatürk, Ahmet Gökkuş & Baboo Ali | pp. 166 - 176 | DOI: 10.29329/actanatsci.2021.350.11
Abstract: This study was carried out to determine the variations in the vegetative characteristics of mixtures of legume and cereal crops. Experiments were conducted according to a randomized complete block design with three replications of flowerpots. In the experiment, annual grass, Hungarian vetch, and hairy vetch were grown at 1, 2, and 4 plants per flowerpot, alone and in double mixtures. The effects of sole and mixed cultivation on plant characteristics (plant height, number of branches, total wet and dry weight, and total root weight) and nutritional characteristics (NDF, ADF, ADL, crude protein, crude ash, digestibility of dry and organic matter, and total fiber) were examined. According to the results, as the number of plants per flowerpot increased, the total wet and dry weight and root mass also increased; in mixed sowing in particular, both the above-ground and underground organic mass increased. In the mixtures of cereals with legumes, the ratios of NDF, ADF, and fiber decreased, while crude protein, crude ash, and the digestibility of dry and organic matter increased. On the other hand, in the mixtures of legumes with cereals, the NDF and ADF ratios increased, while the crude protein and crude ash ratios decreased. This indicates that annual grass along with hairy vetch and Hungarian vetch can be cultivated in winter both to obtain higher grass production and to provide more organic matter to the soil. It is concluded that the most suitable mixing ratios are the two-fold and four-fold ratios of grass along with the single ratio of vetches.
Keywords: Nutritional characteristics, Botanical characteristics, Crude protein, ADF, Cereal

Fish Consumption Preferences and Habits in Babaeski and Demirköy Districts of Kırklareli
Serkan Tozakçı & Musa Bulut | pp. 177 - 191 | DOI: 10.29329/actanatsci.2021.350.12
Abstract: Fish consumption is very important for a healthy and balanced diet. Fish consumption is recommended from a young age for the development of the brain and immune system, and its importance grows in later life for avoiding or managing conditions such as heart disease, atherosclerosis, and high cholesterol. In this context, it is necessary to determine fish consumption habits and to take measures accordingly. For these reasons, this study was carried out to determine the fish consumption and habits of people living in the Demirköy and Babaeski districts of Kırklareli province. As quantitative research, the survey method was used, with a questionnaire as the data collection tool. After surveying a total of 250 people, the data were analyzed with the SPSS 25 program and evaluated by tabulating frequencies and percentages. In addition, the relationship between the participants' monthly income and the frequency of fish consumed in a month was examined with chi-square analysis. Participants from Demirköy and Babaeski showed similar fish preferences. In determining fish consumption preferences and habits, monthly income level, number of people in the family, hunting season, and the freshness of the fish were also evaluated. It was concluded that the places where fish is bought differ between the two districts, and that there is a relationship between monthly income and the amount of fish consumed in a month. It is thought that fish consumption could be increased by opening markets where people can buy fresh fish in all seasons, and that fish prices should be set with monthly income in mind.
Keywords: Fish consumption, Consumption habits, Kırklareli, Babaeski, Demirköy

Issue Information

In Memoriam: Prof. Dr. Yılmaz Emre (1957-2021)
Acta Natura et Scientia Editorial | pp. i - vi | DOI: 10.29329/actanatsci.2021.350.13
Abstract: This volume is dedicated to our editorial board member and colleague Prof. Dr. Yılmaz EMRE, who was devoted to aquaculture, fisheries, marine and freshwater biology, and microplastics. Prof. Dr. Yılmaz EMRE, a scientist at the Faculty of Science at Akdeniz University, Turkey, passed away due to the Covid-19 pandemic in Antalya, Turkey, on August 22, at the age of 64. The departure of our colleague, who was a member of the editorial board of our journal, represents a major loss for the Turkish scientific community. Dr. Yılmaz EMRE was born on February 1, 1957, in Şanlıurfa, Turkey. He graduated from Selçuk University, Faculty of Science and Arts (Bachelor of Science in Biology, 1982). He obtained a Master of Science degree in Medical Biology at Selçuk University, Department of Medical Biology (Faculty of Medicine), in 1987, and earned a Doctor of Philosophy degree in Biology at Bursa Uludağ University, Department of Biology, in 1992. He became a full professor in 2016. He worked as a biologist at the Bursa Fisheries Regional Directorate (Turkey) between 1982-1985, the Bursa Provincial Directorate (Ministry of Agriculture and Rural Affairs, Turkey) between 1985-1989, and the Antalya Kepez Fisheries Production Directorate between 1989-2003. He also served as founding director of the Mediterranean Fisheries Research, Production and Training Institute (Ministry of Agriculture and Forestry, Turkey) from 2005 to 2013, and as Deputy General Manager of Fisheries and Aquaculture (Ministry of Agriculture and Forestry, Turkey) in 2016. He was the dean of the Faculty of Science at Akdeniz University between 2016-2017 and worked as an academic at Akdeniz University from 2013 to 2021. He was the Turkey correspondent of the European Inland Fisheries Advisory Commission (EIFAC) of the Food and Agriculture Organization (FAO) of the United Nations for seven years. His main areas of interest were aquaculture, fisheries, and marine and freshwater biology. He published more than 130 original scientific papers, studies, and reports. He shared his knowledge and scientific experience with young researchers throughout his life and trained many of them. Dr. Yılmaz Emre's passing leaves an unfillable void for his family, loved ones, and friends. We express our sadness at the loss of Dr. Yılmaz Emre, a great mentor and scientist, and hope that he enjoys a lovely and peaceful afterlife. He was loved and will be respected by his worldwide academic family, who will remain forever grateful.
Keywords: Obituary
# Entity polymorphism and entity attributes
I want to design the entity system of my game so that entities are modular, easily modified without affecting other entities, and easy to extend with new types of entities; ideally, some version of component-based design.
In some parts of the code I want to deal with entities generally and abstractly, for example "update all entities" and "render all entities"; in other parts, however, I want to deal with specific types of entity, like when a Soldier entity is fighting a Tank entity.
The problem is that some entities need attributes which others do not. For example, a soldier needs HP but a missile does not.
How do I keep entity attributes modular, and avoid adding every attribute in the game to the Entity object?
• I'm a bit confused - you say you want a component based design, but then you talk about a soldier "class." In a component-centric model, you wouldn't have a soldier class, just an entity with a set of components that combine to make it soldier-ish. For the HP example, you might have a "Health" component that tracks HP of characters and destructible props. If your missile doesn't need it (eg. you can't shoot them out of the air) then you don't attach that component. – DMGregory Jul 26 '16 at 15:47
• I want to know how to store attributes that need to be accessed by multiple different components, without storing superfluous attributes. For example, some entities need hit points while others don't, so how are hit points stored so that they are available to the entities that need them without wasting memory in those that don't? – Anonymous Entity Jul 26 '16 at 16:51
• The data is in a component; if your entity that represents the player needs hit-points, create a hit-point component, add it to the 'player' entity, and don't add it to the missile entity? – Vaillancourt Jul 26 '16 at 17:07
• So does every entity have a null hit-points component by default? Then every time anyone needs access you have to do if entity.hp != null – Anonymous Entity Jul 26 '16 at 17:36
• That's one (naive, IMHO) way to do it. Depends on the language you're using. You should take a look at how Artemis does it if you're in Java. t-machine.org also has a lot of material on the subject. You can also take a look at the component model of Unity. – Vaillancourt Jul 26 '16 at 17:59
How do I keep entity attributes modular, and avoid adding every attribute in the game to the Entity object?
If you embrace the "entity is just a bag of components" concept, then this can be fairly simple. You package up related attributes into a component, and attach that component to certain entities but not to others. Attaching the component can be as simple as stuffing a pointer/reference to the new component into a list or map or something stored in the entity; the entity doesn't need to know it might have a HitPointComponent* specifically, just that it contains Component*, the base class of all components:
#include <map>
#include <typeindex>
#include <typeinfo>

struct Component {};   // base class of all components

struct Entity {
    std::map<std::type_index, Component*> m_components;

    // Store the component under a key derived from its static type.
    template<typename ComponentType>
    void attachComponent (Component* component) {
        std::type_index key(typeid(ComponentType));
        m_components.emplace(key, component);
    }

    // Test for the presence of a component of the given type.
    template<typename ComponentType>
    bool hasComponent() {
        std::type_index key(typeid(ComponentType));
        return m_components.find(key) != std::end(m_components);
    }
};
(type_index provided as a simple example, not a suggestion that it is the "best" approach, in particular because it requires you always know the component types at compile-time.)
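A matching getComponent() accessor in the same style might look like this (a sketch under the same map-based storage assumptions; a helper like this is also discussed in the comments below):

template<typename ComponentType>
ComponentType* getComponent() {
    std::type_index key(typeid(ComponentType));
    auto it = m_components.find(key);
    // Return nullptr when the entity has no such component attached.
    return it != m_components.end() ? static_cast<ComponentType*>(it->second) : nullptr;
}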
In some parts of the code I want to deal with entities generally and abstractly, for example "update all entities" and "render all entities"
I'd say you don't need to do this. Rather than operate on "all entities," instead operate on "all components of a given type." It's fairly common in component-based entity design to have a component actually belong to a system, some kind of container that is responsible for the creation, deletion and updating of all components of a particular type (for example, the implied HitpointSystem in my above code example).
Thus you might have a VisualizationSystem which hands out VisualizationComponents which describe how some object might look or is otherwise presented in the game. Non-visible entities don't get such a component. To render, the VisualizationSystem simply does something like:
for(auto && visualizationComponent : m_allVisualizationComponents) {
renderer.draw(visualizationComponent);
}
This sort of design tends to use the "list of components associated with the entity" that attachComponent and the like modify purely as a way to test for the presence of a component on an entity, which is why it can be fairly simple. There's no need in this style of design to do "for each entity, for each component in that entity, update it."
however in other parts I want to deal with specific types of entity, like when a Soldier entity is fighting a Tank entity.
I'm still not sure what specifically you have in mind here for this kind of interaction that requires knowing about the type of entity. So I'll say that in general it might be better to push this kind of thing into data instead of into the type system.
For example, say you have a CombatantComponent to indicate that a given entity can be involved in combat interactions with other combatants (maybe you store your hitpoints here too). Suppose also the purpose of knowing whether the entity is a "soldier" or a "tank" is because a tank gets a defensive bonus versus a soldier when exchanging blows. This typing information can be expressed directly in the CombatantComponent as an enumeration (UnitType) which you switch on to apply bonus damage (et cetera). Or it can be expressed less directly as an enumeration indicating the damage types a CombatantComponent can use, and the damage types it gets bonuses or penalties to upon receipt.
Thus a tank might be defined as an entity with a combat component that has a damage multiplier for the Bullet damage type of 0.1, so it takes 10% of any incoming bullet damage. But maybe it also has a 2.0 multiplier for Corrosive damage as well.
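As a rough sketch of that data-driven approach (the names DamageType and CombatantComponent, and the numbers, are illustrative, not from any particular engine):

#include <map>

enum class DamageType { Bullet, Corrosive };

struct CombatantComponent {
    int hitpoints = 100;
    // Multiplier applied to incoming damage of each type;
    // missing entries default to 1.0 (no bonus or penalty).
    std::map<DamageType, float> damageMultipliers;

    void receiveDamage(DamageType type, float amount) {
        auto it = damageMultipliers.find(type);
        float m = (it != damageMultipliers.end()) ? it->second : 1.0f;
        hitpoints -= static_cast<int>(amount * m);
    }
};

// A "tank" is then just data, e.g.
// damageMultipliers = {{DamageType::Bullet, 0.1f}, {DamageType::Corrosive, 2.0f}}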
• So when someone dealing with the entity wants to access some attribute, the access has to be preceded by if (entity.hasComponent<...>()) every time? Great answer by the way. – Anonymous Entity Jul 26 '16 at 18:34
• @AnonymousEntity You'd probably want a getComponent() method in there as well. You could call that and check for null. If you expect you will be doing that a lot for any given piece of code you can of course call it once and cache it to avoid some of that overhead and annoyance. There are also ways to build things so that you can provide stronger guarantees that all required dependency components are available (see, for example, what EntityX does on that front), although I'm not personally a huge fan of most implementations of that technique. – user1430 Jul 26 '16 at 18:58
• Thanks, that makes me re-evaluate the whole prospect of ECS. There must be some nicer way of doing things while still having good modularization. – Anonymous Entity Jul 26 '16 at 19:22
• @AnonymousEntity It really depends what you mean by "nice." It's fairly subjective. The ECS paradigm can be very nice by several metrics, but particularly in C++ there's a fair bit of ugliness you need to hide simply because C++ lacks the built-in language facilities to do some of it elegantly. – user1430 Jul 26 '16 at 19:28
• @AnonymousEntity: yeah, ECS is not perfect, and there are other ways of building or structuring component-based engines that have their own pros and cons. That said, one possible solution to the complexity problem you're observing is just more abstraction ("there is no problem in computer science that cannot be solved with another layer of abstraction.. except for the problem of too many layers of abstraction"). e.g., if you are querying for attributes FooBar, have a getFooBar(entity) function that hides any checks, returns good safe default values, etc. Simple. – Sean Middleditch Jul 26 '16 at 20:27
|
|
# Cycle type of a permutation
Given an element $$\sigma$$ of a symmetric group $$S_n$$ on finitely many elements, we may express $$\sigma$$ in cycle notation. The cycle type of $$\sigma$$ is then the list of the lengths of the cycles in $$\sigma$$, where by convention we omit length-$$1$$ cycles. The lengths are listed in decreasing order, and the list is presented as a comma-separated collection of values.
The concept is well-defined because disjoint cycle notation is unique up to reordering of the cycles.
# Examples
• The cycle type of the element $$(123)(45)$$ in $$S_7$$ is $$3,2$$, or (without the conventional omission of the cycles $$(6)$$ and $$(7)$$) $$3,2,1,1$$.
• The cycle type of the identity element is the empty list.
• The cycle type of a $$k$$-cycle is $$k$$, the list containing a single element $$k$$.
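As a quick illustration, here is a small Python sketch (not part of the original article) that computes the cycle type of a permutation given in one-line form, following the conventions above:

def cycle_type(perm):
    # perm is a list where perm[i] is the image of i (0-based labels).
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:        # walk the cycle containing `start`
            seen[i] = True
            i = perm[i]
            length += 1
        if length > 1:            # omit length-1 cycles by convention
            lengths.append(length)
    return sorted(lengths, reverse=True)

# (123)(45) in S7, written with 0-based labels:
print(cycle_type([1, 2, 0, 4, 3, 5, 6]))   # [3, 2]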
|
|
# 2. Writing Fruitful Functions Practice¶
Quick Overview of Day
Warmup drawing problem. "What Does This Program Do?" (WDTPD) questions about functions. Students practice writing fruitful functions.
## 2.1. Warmup Problem¶
Draw the image above using the Python turtle module. You must define a function as part of your solution!
## 2.2. What Does This Program Do?¶
Remember that variables created inside of a function have local scope (can only be used inside that function), whereas variables created outside of a function have global scope (can be accessed from anywhere).
Note
Your teacher may choose to use the following examples as a class activity, by displaying the examples, and having you take a guess as to what you think each will do before running the code.
What will the following programs output? Why?
## 2.3. Practice Problems¶
Try the following practice problems to be sure you understand how to create fruitful functions. Your functions have to return the correct value – using print() will not work. When you run your code for these questions, your code will automatically be checked with a number of test cases to see if your function works in all situations. You will be able to see any situations in which your function does not provide the correct answer.
Note
The only thing you need to do for the following is to complete the function definition! You do not need to call the function, as that will be done automatically for you.
### 2.3.1. Area of rectangle¶
The parameters length and width represent the lengths of the sides of a rectangle. Calculate the area of the rectangle with the given values, and return the result.
Examples:
rectangle_area(5, 10) → 50
rectangle_area(1, 10) → 10
rectangle_area(2, 6) → 12
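One possible solution sketch (the checker calls the function for you):

def rectangle_area(length, width):
    # The area of a rectangle is its length times its width.
    return length * width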
### 2.3.2. Letter Grade¶
Write a function that returns the letter grade, given an exam mark as the parameter. The grading scheme is:
A >= 90
B [80, 90)
C [70, 80)
D [60, 70)
F < 60
The square and round brackets denote closed and open interval endpoints: a square bracket includes the number, and a round bracket excludes it. So 79.99999 gets grade C, but 80 gets grade B.
Examples:
letter_grade(83) → "B"
letter_grade(73) → "C"
letter_grade(80) → "B"
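One possible solution sketch for the grading scheme above; checking the boundaries from highest to lowest makes each lower bound inclusive, which matches the half-open intervals:

def letter_grade(mark):
    # Checks run top-down, so e.g. 80 <= mark < 90 returns "B".
    if mark >= 90:
        return "A"
    elif mark >= 80:
        return "B"
    elif mark >= 70:
        return "C"
    elif mark >= 60:
        return "D"
    else:
        return "F"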
### 2.3.3. Find the Smallest¶
The function find_min(a, b, c) will take three numbers as parameters and return the smallest value. If more than one number is tied for the smallest, still return that smallest number. Note that you cannot use the min function in this solution.
Examples:
find_min(4, 7, 5) → 4
find_min(4, 5, 5) → 4
find_min(4, -7, 5) → -7
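One possible solution sketch that avoids min() by tracking the smallest value seen so far:

def find_min(a, b, c):
    smallest = a
    if b < smallest:
        smallest = b
    if c < smallest:
        smallest = c
    return smallest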
### 2.3.4. Is Even¶
The function is_even(number) will return True if the number passed in is even, and False if it is odd. Hint: You might want to look back at the Math Operators list.
Examples:
is_even(4) → True
is_even(-4) → True
is_even(5) → False
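One possible solution sketch, using the modulus operator from the Math Operators list:

def is_even(number):
    # A number is even exactly when the remainder after dividing by 2 is 0.
    return number % 2 == 0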
### 2.3.5. Leap Year¶
A year is a leap year if it is divisible by 4, unless it is a century year that is not divisible by 400. Write a function that takes a year as a parameter and returns True if the year is a leap year, False otherwise. The following pseudocode determines whether a year is a leap year or a common year in the Gregorian calendar (from Wikipedia):
if (year is not divisible by 4) then (it is a common year)
else if (year is not divisible by 100) then (it is a leap year)
else if (year is not divisible by 400) then (it is a common year)
else (it is a leap year)
Examples:
leap_year(2001) → False
leap_year(2020) → True
leap_year(1900) → False
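A direct translation of the pseudocode above into Python (one possible solution sketch):

def leap_year(year):
    if year % 4 != 0:
        return False    # common year
    elif year % 100 != 0:
        return True     # leap year
    elif year % 400 != 0:
        return False    # common year
    else:
        return True     # leap year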
### 2.3.6. Using Your Is Even Function¶
Write a program that continues to take in a number from the user until the number given is NOT even. For example, the user might enter 4, 10, 42, 5. The program would only stop when the non-even number 5 is entered. You need to use the is_even function you defined above.
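One possible sketch of such a program, reusing is_even (the prompt text is illustrative):

number = int(input("Enter a number: "))
while is_even(number):
    number = int(input("Enter a number: "))
print(number, "is not even, stopping.")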
|
|
What is a Vector?
A vector can be defined in several ways depending on the context in which it is used. A vector has both magnitude and direction, and is represented by a directed line segment whose length denotes the magnitude and whose arrow indicates the direction from tail to head.
Two vectors are equal if they have the same magnitude and direction. Moving a vector to a different position does not change its magnitude or direction, but stretching or rotating it does.
In other words, a vector is a quantity having both magnitude and direction; scalar quantities, by contrast, have only magnitude. Vectors are important not only in mathematics but also in physics and its applications, such as aeronautics and navigation: pilots use vector quantities to fly a plane safely in the intended direction.
Once you are sure of the definition of a vector and its usage, the next important step is to study how vectors are represented. A vector is drawn as a ray and written in either lowercase or uppercase letters. If a vector is written in the form AB, then A is the tail and B is the head.
List of Basic Vector Formulas
Vector products are divided into two major categories: the dot product and the cross product. A list of basic formulas is available for both categories, used to solve geometric transformations in two and three dimensions. These formulas are frequently used in physics and mathematics, and are widely applicable to analytical and coordinate geometry problems.
Formula of Magnitude of a Vector
Magnitude of a vector whose tail is at the origin, where x and y are the components of the vector:
$\large \left|v\right|=\sqrt{x^{2}+y^{2}}$
Magnitude of a vector with starting point $(x_{1}$, $y_{1})$ and end point $(x_{2}$, $y_{2})$:
$\large \left|v\right|=\sqrt{\left(x_{2}-x_{1}\right)^{2}+\left(y_{2}-y_{1}\right)^{2}}$
The magnitude of the resultant of two perpendicular vectors with magnitudes x and y is given by:
$\large \left|\overrightarrow{R}\right|=\sqrt{x^{2}+y^{2}}$
Vector Projection formula is given below:
$\large proj_{b}\,a=\frac{\vec{a}\cdot\vec{b}}{\left|\vec{b}\right|^{2}}\;\vec{b}$
The scalar projection formula gives the length of the given vector projection:
$\large proj_{b}\,a=\frac{\vec{a}\cdot\vec{b}}{\left|\vec{b}\right|}$
Unit Vector Formula is given by
$\large \widehat{V}=\frac{v}{\left|v\right|}$
The formula for the direction (the angle with the positive x-axis) is
$\LARGE \theta =\tan^{-1}\frac{y}{x}$
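A quick numerical check of the formulas above (an illustrative Python sketch, not part of the original article):

import math

def magnitude(x, y):
    return math.sqrt(x**2 + y**2)

def scalar_projection(a, b):
    # Length of the projection of a onto b: (a · b) / |b|
    dot = a[0]*b[0] + a[1]*b[1]
    return dot / magnitude(*b)

def direction(x, y):
    # Angle with the positive x-axis, in degrees
    return math.degrees(math.atan2(y, x))

print(magnitude(3, 4))                    # 5.0
print(scalar_projection((3, 4), (1, 0)))  # 3.0
print(direction(1, 1))                    # 45.0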
Parts of Vectors
The parts (components) of a vector are its projections along the coordinate axes. For example, if a vector is directed to the northwest, its parts are a westward vector and a northward vector. Vectors are generally resolved into two such parts; the names may differ, but the concept is the same.
Why Do Students Need Vector Formulas?
A study of old geometry books shows the evolution of vectors in algebra and how they benefit students. Vectors were initially described as an algebra of segments and directed displacements. Let us see some of the reasons why students should learn vectors in school and during higher studies.
Vectors are important in both physics and mathematics, and were developed to make geometric transformations easier. They give quick insight into geometry and form an important part of linear algebra. Popular applications of vectors include particle mechanics, fluid mechanics, planar descriptions, trajectory calculations, 3D motion, etc.
Other areas where vectors are used include electromagnetism, analytical geometry, and coordinate geometry. With a clear understanding of vectors, students not only progress in their careers but also do well in various competitive exams.
|
|
# How do I teach algebra?
I find that soon I'll be working with high school students that are struggling with math. In particular, we'll be talking a lot about algebra and some basic trigonometry. The latter I have experience with (via working with students in calculus and "pre-calculus"), but I have legitimately no idea how one would teach algebra. If I see $3x+5=14$, it's obvious to me what to do, and unlike, say, calculus, I can't really even see how someone would get confused on that (even though I know they do!)
This is a bit broad, but how do you teach introductory algebra? Do you have any references for new teachers?
-
Part of your problem is what is called "expert blindness" or similar: the subject is so familiar to you that the trouble your students have becomes incomprehensible. First step is obviously to see the phenomenon, next step is to find out what specific problems are common and how to handle them. – vonbrand Apr 21 '14 at 4:25
@vonbrand I'm unfortunately aware. My struggle is that I don't know how to solve it. I have to admit no experience teaching algebra in the past, and I'm a bit worried I'll show up and do poorly without some practice/background. – Mike Miller Apr 21 '14 at 4:56
For the symbol-manipulation side, I would recommend having them play with DragonBox ( dragonboxapp.com ). It doesn't explain any of the theory behind why the rules are what they are, so it's not sufficient by itself, but it's fantastically good in teaching the rules and making it seem fun. I once saw a 5-year old solving (with assistance, but still) about a hundred first-degree equations within a couple of hours when playing with it, and also later on some older kids arguing over who gets to play and solve algebraic equations next. – Kaj_Sotala Apr 21 '14 at 8:34
All of the technical advice offered here is golden. I wouldn't change a syllable! On some level, I envy you. My own life was changed 40 years ago by a man who had the patience to do the job you now face. His name was Mr. Shetler. He taught me that I wasn't an idiot and that this stuff isn't magic. There are simple rules that we apply to do algebra. Learn the rules and the problems solve themselves. Above all I counsel patience. – user1168 Apr 22 '14 at 9:34
Remarkably, when I was in middle school, I went half a year effectively solving these problems via the bisection method (en.wikipedia.org/wiki/Bisection_method). The teacher at one point pulled me aside when I was getting answers like 9.97 when the actual answer was 10, and asked me if I was being a wise guy. The idea of subtracting a number from both sides is extremely foreign, so much so, that I effectively developed an algorithm from numerical analysis to avoid it! – MHH Jul 27 '14 at 16:14
As a personal tutor, I’ve been teaching algebra to kids from ages 8 to 16 for many years. Mostly I find myself in the position of picking up the pieces when the kids are failing and fearing more failure.
The root of the problem, in my experience, is the way algebra is taught as something alien, and in particular, different from arithmetic, which it really isn’t (at least in the early years).
So first off, constant emphasis on the fact that “$x$ is just a number you don’t know yet”. So it behaves like a number, and you can do all the stuff to it, that you can do to numbers.
Next, the nature of equality $2 + 3 = 4 + 1$.
And from there, the fact that when you do the same thing to both sides, you still end up with two things that are equal.
Always “do the same thing to both sides” (since this is clearly based on the nature of equality), never “move this from one side to the other and change the sign” (which is a magic rule that makes no sense until you have a deeper understanding).
Once you get them happy with the idea that doing the same thing to both sides is the way to go, you can give them suggestions for which things to do in which order, but stress that provided they rigorously write down the consequence of the thing they decide to do to both sides, they won't go wrong (although some ways are harder – look out for these as a pointer that choosing another way will be easier).
The manipulation of each line is easy, once you’ve got them to decide what they’re going to do at each stage.
For instance, in the example, $3x + 5 = 14$:
• First, decide what to do to both sides (subtract 5)
• Write down first what you have ($3x + 5$), then do what you’ve decided. So you get $3x + 5 - 5$, and on the RHS, $14 - 5$.
• Then collect terms and simplify to get $3x = 9$.
• Then repeat for division by 3.
Emphasise that once you’ve decided what to do at each stage, there’s very little thinking, since you’re just writing – starting with what you had on the previous line, and adding on the chosen operation.
Figuring out what to do (add or multiply, subtract or divide) needs to come after they are truly grounded in the principle that doing the same thing to both sides is the key.
They will also need help with things like why $3x/3 = x$. Again, use numbers to illustrate, and stress that $x$ is just a number, so it behaves the same way as a number.
-
In school, we had to put a long vertical line to the right of the equation and had to write next to it what we did to go to the next line ($-5$ and $/3$ in this case). – user11235 Apr 21 '14 at 14:47
Yes, I've seen this, and a couple of variants. Personally, I don't like it, for two reasons. 1: It takes longer to write it all out, and anything that makes things take longer to write risks losing the child's attention, and/or making the problem seem laborious and hence dull. 2: If you get used to writing what you start with first (the 3x + 5 in this case), and then the thing that you're doing (the -5), you can still see clearly what's going on. This encourages a systematic approach to laying out the solution that minimises the additional thinking required. – ChrisA Apr 21 '14 at 18:47
How is it less work to write $-5$ twice instead of once? – user11235 Apr 21 '14 at 20:22
Exactly. I think you may have misunderstood me. The habit of writing the vertical line, and what you've done to get to the next line, is what they need to get out of. My point is that in going from 3x + 5 = 14, to 3x + 5 - 5 = 14 - 5, to 3x = 9, it becomes a natural progression to miss out the middle line, more so than no longer writing the vertical line stuff. You may find something else easier - that's fine. I'm only commenting on my experience. – ChrisA Apr 21 '14 at 22:32
constant emphasis on the fact that “x is just a number you don’t know yet <-- be careful with this -- this emphasis on "solving for x" and that x has only one value can really confuse kids when faced with learning about linear equations, where x can take an infinite number of different values. – PurpleVermont Jul 27 '14 at 21:24
For some students, the difficulty with solving $3x+5=14$ is even more basic than figuring out what operations to do in what order in order to reach the goal. Before getting to that, they need to know what the goal is. "Everybody knows" that, when solving an equation with one variable $x$, the goal is to end up with a statement of the form $x=$ some specific number. Unfortunately, this "everybody" doesn't really include everybody; some students have never had the goal made clear. Moreover, in some cases, once they understand the goal, they're remarkably good at finding strategies for working toward it.
-
Indeed. I remember the big sigh of relief ("Ohhhh!") when I told one of my students that "Solve" the equation means "Find the value(s) of x for which the equation is true". – ChrisA Apr 21 '14 at 18:54
I saw this question and laughed, "That is way too broad!", but I've been in your position. I was a classroom teacher for 10yrs in the public school system and was often tasked with teaching something that I hadn't had training in.
What you are looking for initially is a "Scope and Sequence" - a guide showing the steps in teaching a subject. Your 'expert blindness' makes it hard to make one on your own, but it is also redundant - experts have already done this. You can put together a S&S by looking at roughly two sources:
1. State or privately developed curriculums - some states offer their curriculum online in the form of "Standards". You can look at what is required at each grade level and get an idea of what you need to teach. You'll need to assess your student against current grade-level requirements and then work backwards until you get to the point they understand. That reveals their 'deficiency'. Then you remediate. So, the curriculum will tell you that at 8th grade they need to know 'this' and at ninth, 'this'. You teach them sequentially through the curriculum.
2. Books and guides - academic textbooks are often set up in a proper sequence that will show you a framework of what needs to be learned first. You can often obtain these at libraries, but you may need to dig. Ideally you can find the books the students have used in their classes. Homeschooling resources are also readily available and can be found to meet a lot of different special needs.
This is a tough nut to crack. It really highlights the fact that 'Teaching' is far more than knowledge of a subject! Teaching is its own skill. Good luck!
-
I think this is the most useful answer to such a broad question. We all have our favorite tips and tricks and points of view of what is important, but the first thing a new teacher needs to know and understand is scope and sequence. Understanding scope and sequence will give the framework around which to develop a point of view about what is important to emphasize. I would only add that there is a third resource that the questioner should seek out: excellent veteran teachers. – jbaldus Apr 25 '14 at 2:00
First, I want to comment on something that ChrisA seemed to have glossed over in his detailed description.
For instance, in the example, $3x+5=14$:
• First, decide what to do to both sides (subtract 5)
In my experience as a teacher and tutor, I have noticed that this is not easy for novice Algebraists. However, I have found that there is a way that you can help to make this "decision."
We are all familiar with the order of operations and many of us use the mnemonic PEMDAS to determine which operations to perform in which order when evaluating an expression. This is applicable for evaluating expressions and complicated/fabricated arithmetic problems.
This can be used in Algebra as well, though it is not something that I have seen often. The decision on what to do is the reverse of the order of operations. The two operations which are acting upon $x$ in the given example are multiplication and addition.
(As an aside, multiplication is represented with the $3$ immediately in front of it; multiplication has many forms and these forms may be something that you may want to discuss with your students as well if they have trouble recognizing each of the forms. Addition is represented with the $+5$, though you may wish to have a deeper discussion when dealing with a problem with subtraction, as it can be thought of as either subtraction or addition of a negative number.)
According to the order of operations and PEMDAS, multiplication comes before addition, and if this were a problem with only numerals, that is the order in which you would have to do things. However, in algebra, when we solve for a variable we are attempting to unravel the operations being performed on the variable, so we can read the variable alone and use the property of equality to determine its equivalent value. This unraveling is done by observing what is happening to the variable and performing the inverse operation, so that we are left with only an identity (in the case of multiplication and division that identity is $1$, while in addition and subtraction it is $0$; this is another topic you may wish to go into in more detail). Because identities provide equivalent values, we often do not write them. It is for this reason that in algebra we reverse the steps of the order of operations and PEMDAS, and because the operations are inverted, so too is their order. This is why we need to do the opposite of addition (subtraction) first, and the opposite of multiplication (division) second. I have found that making this thought process explicit has helped some of my students more easily make this "decision."
Do not be afraid to use the technical terminology either (e.g., inverse, identity), as I have found that this actually helps to clarify things.
## ---------
Second, there is a lot of value in rewriting the equations in two different ways. I have seen students who prefer each style, so you may want to try both:
Method 1:
$$3x+5=14$$
$$\qquad \color{red}-\color{red}5 = \color{red}-\color{red}5$$
$$\quad 3x=9$$
Method 2:
$$3x+5=14$$
$$3x+5\color{red}-\color{red}5 =14\color{red}-\color{red}5$$
$$\quad 3x =9$$
Using color here with this second method is particularly helpful, and is something that I use whether in a room with a white board or a blackboard.
-
You're right, I did gloss over that in the interests of brevity, and I agree that it's often not obvious to novices. Reversing the order of operations is certainly helpful. I try to build in the understanding of what to do from much simpler examples, eg x + 1 - 1 = x (with several numerical examples of x), and (x/3).3 = x, again with numbers as examples. If they can be persuaded to grasp that, remembering a rule (which I'm usually dead against!!) becomes unnecessary. – ChrisA Apr 22 '14 at 15:54
YES! You have no idea how many people I teach who are so very grateful for an organising principle. Suddenly they can track the decisions their lecturer makes when solving equations and do it themselves. @ChrisA, I find it's not enough that they know how to get rid of a "/3" or a "+1". They still need guidance on which to focus on first when there are both. – DavidButlerUofA Aug 27 '14 at 16:44
Just a note: Not all of us know what PEMDAS is. Wikipedia does, luckily. – Tommi Brander Oct 22 '14 at 8:57
I think you will need to be very cognizant of student conceptions of how to solve algebraic problems. It may be useful not to try to immediately show them how to solve problems, but rather to ask them how they would go about solving the problems. This will enable you to learn about their mathematical thinking and possible misconceptions they may have. In the example you gave, a student may try to divide both sides by 3 but then simplify it to x + 5 = 14/3. Students solving linear equations often forget to apply the operation to both entire sides and are very focused on eliminating a particular coefficient or term.
-
Emphasize word problems. If I have \$14 to spend on 3 toys and a hat, and the hat will cost me \$5, how much can I spend on each toy?
-
Nice. I'll use that. – ChrisA Apr 21 '14 at 18:43
Problem is that makes things worse for dyslexic people! – kjetil b halvorsen Apr 29 '14 at 15:56
A practical introduction is always a good idea: the box method. Here we have three sealed boxes. Each box contains the same number of counters. I will label each box with an x. I give Anna the three boxes and five counters. I give Bob fourteen counters. Now I will tell you that if Anna were allowed to open the boxes, she would have the same number of counters as Bob. Without opening the boxes, how can we work out how many counters there are in each box? See if the class can come up with a way of solving this. EDIT: Further to this idea, I have a blog that is developing it further.
-
I use a similar box method when teaching the concept of variables in programming. I heard about it from a professor who believes that you should try to involve all senses when teaching. If the students see an actual box with the label x, the idea becomes more concrete because neural connections involving touch are also reinforced. To my surprise it worked. Generally, if I can reduce something abstract into something concrete that they can touch, hear, smell, etc., I get better receptiveness from my class. – tls Dec 18 '14 at 8:32
Teach the students that they can (and should) check their own work. A student who knows that their check-by-substitution worked will be a lot more confident that they learned that day's lesson than a student who is waiting until the next day to find out they got some answers wrong. Also, it is great practice for professions (like accounting and programming) that need to "tie out" or "unit test" their work. Here is how I was taught to check my work. In the examples, most of the "·" signs are optional:
1) Write out my answer, such as x = 3. If it is the answer to a story problem, include a note about what the answer means, such as x = 3 \$/toy: each toy can cost an average of 3 dollars.
2) Circle the answer in a fluffy cloud.
3) Write "CBS:" below the answer. (CBS stands for check-by-substitution.)
4) Substitute the answer into the original problem. Put a question mark over the equals sign. For example,
3 toys · 3 $/toy + 1 hat · 5 $/hat ≟ 14 $
5) Do the math on both sides of the equals sign. Keep the question mark over the equals sign until it is obvious that the equation is true. Put each version of the equation on a following line, and try to line up the equals signs. For example,
3 · 3 $ · toy/toy + 1 · 5 $ · hat/hat ≟ 14 $
3 · 3 $ + 1 · 5 $ ≟ 14 $
9 $ + 5 $ ≟ 14 $
14 $ = 14 $
6) When/if it becomes obvious that the equation is true, put a check over the equals sign, and congratulate yourself.
7) If it becomes obvious that the equation is not true, either try to find the mistake, or start over, or try a different problem.
-
What does CBS stand for? Check by substitution? – Tommi Brander Oct 22 '14 at 8:59
If your curriculum allows you the flexibility to do this, I prefer to start with what are variables (a letter representing a number that varies), then what are expressions (a plan what you'll do once you know the variable's value), then how can we evaluate the expression for a particular variable value.
Stick with various expressions for at least a few days before turning the page to equations. With all this practice evaluating expressions, the guessing-game nature of equations will be clear: you guess the value of $x$, evaluate the LH expression, evaluate the RH expression, and see if they're equal, meaning the $x$ you guessed is a valid solution to the original equation.
Once the problems get too hard to solve by guessing, finally follow ChrisA's answer to teach a methodical way to solve equations, always preserving equality by doing the same thing to both sides.
-
I would recommend looking at Dan Chazan's excellent book, Beyond Formulas in Mathematics and Teaching: Dynamics of the High School Algebra Classroom, which grapples with many of the issues you raise. In particular Chazan narrates the challenges of working with struggling students like the one you anticipate working with, and he spends a lot of time unpacking fundamental issues like "What does an equation mean?".
-
Here is another way of looking at solving the problem $$3x + 5 = 14$$: break it up into two simpler problems. First solve $u + 5 = 14$ for $u$ (where $u$ stands for $3x$), and then solve $3x = 9$ for $x$; see if they can solve that. In fact, this is how the student ought to have arrived at the problem.
-
@JoeTaxpayer, you want me to introduce more variables? I have introduced an extra variable to make it clearer. – abel Dec 18 '14 at 3:57
@JoeTaxpayer, I have trouble subtracting, and this is not the first time either. – abel Dec 18 '14 at 12:02
@JoeTaxpayer, don't erase your comment. I should have made a separate edit. It is OK for the students to see that we make silly arithmetical errors too. I should have error-checked, as I tell my students. You make mistakes and learn. – abel Dec 18 '14 at 22:17
Too late, I can't undelete. On a general site, it's worth seeing we all make mistakes. Here, I'm more concerned for a clean Q&A with no need to see that history. If it were more characters, I'd have just edited. But only the author can edit 2 characters. I +1 this; I like the idea and will use it when the opportunity arises. – JoeTaxpayer Dec 18 '14 at 22:33
|
|
# Need help with finding particular solution to a second order differential equation
1. Nov 10, 2009
### wshfulthinker
1. The problem statement, all variables and given/known data
Consider the differential equation:
y''+10y'+25y= f(x)
Find a particular solution if f(x) = 32xe^(-x)
2. Relevant equations
I already did the general solution when f(x)=0 and that is Ae^(-5x) + Bxe^(-5x)
3. The attempt at a solution
I tried yp=axe^(-x) and got a= 4x+2 which is wrong
does anyone know what particular solution i can try in order to get the answer?
Thanks
2. Nov 10, 2009
### Staff: Mentor
Re: Need help with finding particular solution to a second order differential equation
If f(x) had been 32e^(-x), you would want to try yp = Ae^(-x). Since f(x) = 32xe^(-x), you want your particular solution to be yp = Ae^(-x) + Bxe^(-x).
If f(x) had been 32x^2 e^(-x), you would try a particular solution of the form yp = Ae^(-x) + Bxe^(-x) + Cx^2 e^(-x). There's a reason for all of this, but I'll leave that for your instructor.
BTW, this is hardly a Precalculus question. You should have posted it in Calculus and Beyond.
3. Nov 10, 2009
### wshfulthinker
Re: Need help with finding particular solution to a second order differential equation
Hi, thank you so much for your reply. I tried it and it worked!!! I shall write it down and remember that forever now!
Also, sorry about posting in the wrong section! I can't believe I did that, because I took so long to check that my post was right... I guess I forgot to check whether I had clicked on the right section! Thank you so much though.
4. Nov 10, 2009
### HallsofIvy
Re: Need help with finding particular solution to a second order differential equation
Generally speaking, when a "right hand side" involves an $n^{th}$ power of x, you should try a polynomial containing all powers of x from degree $n$ down.
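For completeness, a worked check of the suggested ansatz (added for illustration, not part of the original thread): with $y_p = (A + Bx)e^{-x}$, we get $y_p' = (B - A - Bx)e^{-x}$ and $y_p'' = (A - 2B + Bx)e^{-x}$. Substituting into $y'' + 10y' + 25y$ gives $(16A + 8B + 16Bx)e^{-x}$, and matching this with $32xe^{-x}$ yields $16B = 32$ and $16A + 8B = 0$, i.e. $B = 2$ and $A = -1$, so $y_p = (2x - 1)e^{-x}$.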
|
|
UnetStack enables software-defined open architecture modems (SDOAMs). While such modems come with one or more implementations of physical layers (PHY) for your use, there are times when you may wish to develop your own PHY. Perhaps it is because you have a special environment that demands a unique PHY, or because you want to interoperate with another modem. Or maybe you just want to try your hands at implementing communication techniques. Whatever the reason, I have often been asked for advice on how to go about writing a custom PHY. In this article, I will walk you through the process of implementing a simple PHY from scratch.
### Background
In an acoustic communication system, the PHY is responsible for converting data bits into an acoustic signal to be transmitted through the channel, and the received signal back into data bits. In UnetStack based modems, this functionality is usually provided by the phy agent. The phy agent implements the PHYSICAL service, and other agents such as uwlink, mac and ranging use this service to provide communication and navigation services to the user (and to other agents in the network stack).
At this point, it may be useful to fire up a Unet audio instance, or connect to a UnetStack powered modem if you’ve one handy.
$ bin/unet -c audio
Modem web: http://localhost:8080/

> ps
node: org.arl.unet.nodeinfo.NodeInfo - IDLE
phy: org.arl.yoda.Physical - IDLE
ranging: org.arl.unet.localization.Ranging - IDLE
uwlink: org.arl.unet.link.ReliableLink - IDLE
⋮

We see the phy agent among all the agents running on the modem. The Unet audio community edition, as well as most UnetStack based underwater modems (e.g. Subnero M25M series modems), use Yoda PHY (org.arl.yoda.Physical) as the default PHY. The Yoda PHY not only provides the PHYSICAL service, but also the BASEBAND service and a signal detection capability that we'll be using shortly. Just typing phy on the shell tells us more about the active PHY:

> phy
« Physical layer »
Provides software-defined physical layer communication services (including error detection & correction).

[org.arl.unet.DatagramParam]
  MTU ⤇ 31
  RTU ⤇ 31

[org.arl.unet.bb.BasebandParam]
  basebandRate ⤇ 12000.0
  carrierFrequency = 12000.0
  maxPreambleID ⤇ 4
  maxSignalLength ⤇ 2147483647
  signalPowerLevel = -42.0

[org.arl.unet.phy.PhysicalParam]
  busy ⤇ false
  maxPowerLevel ⤇ 0.0
  minPowerLevel ⤇ -138.0
  propagationSpeed = 1500.0
  refPowerLevel ⤇ 0.0
  rxEnable = true
  rxSensitivity ⤇ 0.0
  time = 20157105
  timestampedTxDelay = 1.0

[org.arl.yoda.ModemParam]
  adcrate ⤇ 48000.0
  dacrate ⤇ 96000.0
  downconvRatio = 4.0
  fullduplex = false
  upconvRatio ⤇ 8.0
⋮

There are a lot more parameters, but I've only reproduced the ones that might interest us here. If we check the uwlink, mac and ranging agents, we'll see that they are using this phy agent as their PHY:

> uwlink.phy
phy
> mac.phy
phy
> ranging.phy
phy

Our aim is to write our own custom PHY agent (we'll call it phy2), load it on the modem, and then ask uwlink, mac and ranging to use it instead! Our phy2 will use the BASEBAND service provided by Yoda PHY (phy) to transmit and record acoustic signals. We will also use phy to sense the acoustic channel, accurately timestamp transmissions and receptions, and continuously monitor the acoustic channel for incoming signals. However, we will implement our own frame format and modulation scheme in phy2.

We'll be writing our phy2 agent here in Groovy, but you could choose to write yours in Java if you wish. While our example here will be 100% pure Groovy for illustration, you may prefer to develop complex signal processing components in Julia or C (invoked via JNI) if you need higher performance or access to GPUs.

### Modulation and demodulation

The core component of a PHY implementation is the modulator and demodulator. The modulator converts a sequence of bits into an acoustic signal for transmission through the channel. The demodulator converts a received acoustic signal (a noisy, distorted version of the transmitted signal) back into the sequence of bits.

In UnetStack, acoustic signals are represented as sampled complex baseband signals. The basebandRate and carrierFrequency of the signal were shown when you looked up the parameters of phy earlier. For Unet audio, these are 12 kSa/s and 12 kHz respectively (but they may be different on other modems).

The focus of this article is to understand how the PHY agent is developed, and so we won't spend much time on the signal processing. For the purposes of illustration, we will develop a simple low-rate uncoded binary frequency-shift keying (BFSK) scheme.
In reality, you'd probably want to use a more performant communication technique, and also include forward error correction coding (FEC).

For the simple BFSK scheme, we'll use 150 baseband samples for each bit (symbol). We'll use frequency f0 to represent a bit 0, and f1 to represent a bit 1:

f0 = carrierFrequency + (1/15) × basebandRate
f1 = carrierFrequency - (1/15) × basebandRate

For Unet audio, this translates to f0 and f1 being 12.8 kHz and 11.2 kHz respectively, and a signaling rate of 80 bps.

The modulator function bytes2signal() takes in a byte array and converts it into a float array representing the baseband acoustic signal. Alternate entries in the float array are the real and imaginary parts of each sample. The implementation is fairly straightforward:

private final int SAMPLES_PER_SYMBOL = 150
private final float NFREQ = 1/15

private float[] bytes2signal(byte[] buf) {
  float[] signal = new float[buf.length * 8 * SAMPLES_PER_SYMBOL * 2]   // 8 bits/byte, 2 floats/sample
  int p = 0
  for (int i = 0; i < buf.length; i++) {
    for (int j = 0; j < 8; j++) {
      int bit = (buf[i] >> j) & 0x01
      float f = bit == 1 ? -NFREQ : NFREQ
      for (int k = 0; k < SAMPLES_PER_SYMBOL; k++) {
        signal[p++] = (float)Math.cos(2 * Math.PI * f * k)
        signal[p++] = (float)Math.sin(2 * Math.PI * f * k)
      }
    }
  }
  return signal
}

The demodulator function signal2bytes() takes in a float array with the received baseband acoustic signal and returns a byte array containing the decoded bits. Bit decisions are taken by running two matched filters for the f0 and f1 frequencies, and comparing the outputs:

private byte[] signal2bytes(float[] signal, int start) {
  int n = (int)(signal.length / (2 * SAMPLES_PER_SYMBOL * 8))   // number of bytes
  def buf = new byte[n]
  int p = start
  for (int i = 0; i < buf.length; i++) {
    for (int j = 0; j < 8; j++) {
      double s0re = 0      // real path of matched filter for f0
      double s0im = 0      // imaginary path of matched filter for f0
      double s1re = 0      // real path of matched filter for f1
      double s1im = 0      // imaginary path of matched filter for f1
      for (int k = 0; k < SAMPLES_PER_SYMBOL; k++) {
        float re = signal[p++]
        float im = signal[p++]
        float rclk = (float)Math.cos(2 * Math.PI * NFREQ * k)
        float iclk = (float)Math.sin(2 * Math.PI * NFREQ * k)
        s0re += re*rclk + im*iclk
        s0im += im*rclk - re*iclk
        s1re += re*rclk - im*iclk
        s1im += im*rclk + re*iclk
      }
      if (abs2(s1re, s1im) > abs2(s0re, s0im)) buf[i] = (byte)(buf[i] | (0x01 << j))
    }
  }
  return buf
}

private double abs2(double re, double im) {
  return re*re + im*im
}

The second argument start tells the function where in the signal array to start demodulating. This is required because the recorded signal that we will receive from phy contains an additional detection preamble that we'll need to skip over.

### Writing the agent

Now that we have our modulator and demodulator functions, we are ready to put together our phy2 agent (we call the agent class MyPhy). If you're not familiar with developing agents, now would be a good time to familiarize yourself with the key concepts.

Any PHY agent needs to implement the PHYSICAL service and the DATAGRAM service. We'll limit ourselves to the basic functionality and honor the TxFrameReq (subclass of DatagramReq), TxRawFrameReq and ClearReq requests. We'll generate RxFrameNtf (subclass of DatagramNtf) and BadFrameNtf notifications.
The other TxFrameStartNtf, RxFrameStartNtf and CollisionNtf notifications are generated by the Yoda PHY automatically, and we do not need to generate those. We will also need to implement all the parameters in both services. Let's start by registering the services we provide, as well as the parameters we support:

void setup() {
  register Services.DATAGRAM
  register Services.PHYSICAL
}

protected List<Parameter> getParameterList() {
  return allOf(DatagramParam, PhysicalParam)
}

protected List<Parameter> getParameterList(int ndx) {
  if (ndx == Physical.CONTROL || ndx == Physical.DATA)
    return allOf(DatagramParam, PhysicalChannelParam)
  return null
}

When we transmit data, we need to add a header to indicate the source node address, destination node address, data length and protocol number. Additionally, we also include a parity byte for error detection (in practice you may want to use a CRC, but we stick to a parity byte for simplicity). We define the header:

private final int HDRSIZE = 5            // bytes

private PDU header = new PDU() {
  void format() {
    length(HDRSIZE)
    uint8('parity')
    uint8('protocol')
    uint8('from')
    uint8('to')
    uint8('len')
  }
}

We fix the number of user data bytes in a frame (MTU and RTU). These are DATAGRAM service parameters, and we mark them as read-only through the use of the final modifier:

final int MTU = 8
final int RTU = MTU

We'll be needing the Yoda PHY (phy) agent often, so we save a reference to it in an attribute bbsp (baseband service provider). We subscribe to notifications from Yoda PHY, and also configure it to provide us acoustic signals when it detects a frame:

private final AgentID bbsp = agent('phy')   // Yoda PHY

void startup() {
  subscribe bbsp
  int nsamples = (MTU + HDRSIZE) * 8 * SAMPLES_PER_SYMBOL
  set(bbsp, Physical.CONTROL, ModemChannelParam.modulation, 'none')
  set(bbsp, Physical.CONTROL, ModemChannelParam.basebandExtra, nsamples)
  set(bbsp, Physical.CONTROL, ModemChannelParam.basebandRx, true)
  set(bbsp, Physical.DATA, ModemChannelParam.modulation, 'none')
  set(bbsp, Physical.DATA, ModemChannelParam.basebandExtra, nsamples)
  set(bbsp, Physical.DATA, ModemChannelParam.basebandRx, true)
}

By setting the modulation parameters for both CONTROL and DATA channels to 'none', we have instructed Yoda PHY not to process the received frames. By setting the basebandRx parameter to true, we have asked Yoda PHY to send us the baseband signal each time a CONTROL or DATA frame is detected. The basebandExtra parameter is set to the number of samples in our frame, so that Yoda PHY knows how long a signal to record for us. Yoda PHY detects acoustic signals in the channel by detecting unique preamble signals transmitted just before CONTROL and DATA frames. These signals are included in the recordings, and therefore our modulated signal starts a few samples into the signal buffer.
To find out exactly how long the preamble signals are (they can be configured using Yoda PHY parameters), we ask Yoda PHY for a copy of the preamble and extract the length:

private int getPreambleLength(int ndx) {    // ndx is Physical.CONTROL or Physical.DATA
  int prelen = 0
  def pre = request(new GetPreambleSignalReq(recipient: bbsp, preamble: ndx), 1000)
  if (pre instanceof BasebandSignal) prelen = ((BasebandSignal)pre).signalLength
  return prelen
}

Next, we implement all the PHYSICAL service parameters by delegating them to Yoda PHY:

// Physical service parameters (read-only) delegated to Yoda PHY
Float getRefPowerLevel()    { return (Float)get(bbsp, PhysicalParam.refPowerLevel) }
Float getMaxPowerLevel()    { return (Float)get(bbsp, PhysicalParam.maxPowerLevel) }
Float getMinPowerLevel()    { return (Float)get(bbsp, PhysicalParam.minPowerLevel) }
Float getRxSensitivity()    { return (Float)get(bbsp, PhysicalParam.rxSensitivity) }
Float getPropagationSpeed() { return (Float)get(bbsp, PhysicalParam.propagationSpeed) }
Long getTime()              { return (Long)get(bbsp, PhysicalParam.time) }
Boolean getBusy()           { return (Boolean)get(bbsp, PhysicalParam.busy) }
Boolean getRxEnable()       { return (Boolean)get(bbsp, PhysicalParam.rxEnable) }

We also implement the PHYSICAL service indexed parameters:

// Physical service indexed parameters (read-only)
int getMTU(int ndx)              { return MTU }
int getRTU(int ndx)              { return RTU }
int getFrameLength(int ndx)      { return MTU + HDRSIZE }
int getMaxFrameLength(int ndx)   { return MTU + HDRSIZE }
int getFec(int ndx)              { return 0 }       // no FEC
List<String> getFecList(int ndx) { return [] }      // FEC not supported
int getErrorDetection(int ndx)   { return 8 }       // 8 bits
boolean getLlr(int ndx)          { return false }   // LLR not supported

// Physical service indexed dynamic parameters
void setPowerLevel(int ndx, float lvl) {
  if (ndx != Physical.CONTROL && ndx != Physical.DATA) return
  set(bbsp, BasebandParam.signalPowerLevel, lvl)
}

Float getPowerLevel(int ndx) {
  if (ndx != Physical.CONTROL && ndx != Physical.DATA) return null
  return (Float)get(bbsp, BasebandParam.signalPowerLevel)
}

Float getFrameDuration(int ndx) {
  if (ndx != Physical.CONTROL && ndx != Physical.DATA) return null
  def bbrate = (Float)get(bbsp, BasebandParam.basebandRate)
  if (bbrate == null) return 0f
  int prelen = getPreambleLength(ndx)
  return (float)((prelen + (MTU + HDRSIZE) * 8 * SAMPLES_PER_SYMBOL) / bbrate)
}

Float getDataRate(int ndx) {
  if (ndx != Physical.CONTROL && ndx != Physical.DATA) return null
  return (float)(8 * getFrameLength(ndx) / getFrameDuration(ndx))
}

The powerLevel parameter is delegated to the Yoda PHY signalPowerLevel parameter, since we will ask Yoda PHY to transmit signals for us. We often require our node address.
Rather than ask the node agent each time, we use the UnetStack utility to request and cache the node address:

private NodeAddressCache addrCache = new NodeAddressCache(this, true)

We also need a temporary store for pending transmission requests, so that when Yoda PHY informs us that a transmission is complete, we can inform our client (the agent who sent us the transmission request) that the transmission was completed:

private Map<String,Message> pending = [:]

To process various requests, we override the processRequest() method:

Message processRequest(Message req) {
  if (req instanceof DatagramReq) return processDatagramReq(req)
  if (req instanceof TxRawFrameReq) return processTxRawFrameReq(req)
  if (req instanceof ClearReq) {
    send new ClearReq(recipient: bbsp)
    pending.clear()
    return new Message(req, Performative.AGREE)
  }
}

We don't need an if condition for TxFrameReq, as it is a subclass of DatagramReq and therefore processDatagramReq() will be called when a TxFrameReq is received. The processTxRawFrameReq() simply delegates the transmission to transmit():

private Message processTxRawFrameReq(TxRawFrameReq req) {
  if (transmit(req.type, req.data, req)) return new Message(req, Performative.AGREE)
  return new Message(req, Performative.FAILURE)
}

The processDatagramReq() also delegates transmission to transmit(), after composing a data frame (PDU) with the required header prepended:

private Message processDatagramReq(DatagramReq req) {
  def from = addrCache.address
  byte[] buf = composePDU(from, req.to, req.protocol, req.data)
  int ch = req instanceof TxFrameReq ? req.type : Physical.DATA   // default to DATA if DatagramReq
  if (transmit(ch, buf, req)) return new Message(req, Performative.AGREE)
  return new Message(req, Performative.FAILURE)
}

private byte[] composePDU(int from, int to, int protocol, byte[] data) {
  if (data == null) data = new byte[0]
  def hdr = header.encode([
    parity: 0, from: from, to: to, protocol: protocol, len: data.length
  ] as Map<String,Object>)
  def buf = new byte[HDRSIZE + MTU]
  System.arraycopy(hdr, 0, buf, 0, HDRSIZE)
  System.arraycopy(data, 0, buf, HDRSIZE, data.length)
  int parity = 0
  for (int i = 1; i < buf.length; i++)
    parity ^= buf[i]                     // compute parity bits
  buf[0] = (byte)parity
  return buf
}

The transmit() method simply converts the buffer into a signal and makes a TxBasebandSignalReq request to Yoda PHY to do the transmission. It adds the transmission request to the temporary store so that a notification can be sent to the requester when the transmission is completed.

private boolean transmit(int ch, byte[] buf, Message req) {
  def signal = bytes2signal(buf)
  def bbreq = new TxBasebandSignalReq(recipient: bbsp, preamble: ch, signal: signal)
  def rsp = request(bbreq, 1000)
  if (rsp?.performative != Performative.AGREE) return false
  pending.put(bbreq.messageID, req)
  return true
}

Incoming transmit notifications and signal receptions are processed by overriding the processMessage() method:

void processMessage(Message msg) {
  addrCache.update(msg)
  if (msg instanceof TxFrameNtf) handleTxFrameNtf(msg)
  else if (msg instanceof RxBasebandSignalNtf) handleRxBasebandSignalNtf(msg)
}

The addrCache.update() call ensures that any node address changes are updated in the address cache. When a transmission is completed by Yoda PHY, it sends us a TxFrameNtf.
We in turn send a TxFrameNtf to our client:

private void handleTxFrameNtf(TxFrameNtf msg) {
  def req = pending.remove(msg.inReplyTo)
  if (req == null) return
  def ntf = new TxFrameNtf(req)
  ntf.type = msg.type
  ntf.txTime = msg.txTime
  ntf.location = msg.location
  send ntf
}

When a baseband signal is received from Yoda PHY, we process it and convert it to bits. If the parity bits suggest that the frame is error-free, we send a RxFrameNtf for the received frame. In case of errors, we send a BadFrameNtf instead. The RxFrameNtf is published on the agent's default topic if the frame is a BROADCAST or intended for our node address. Otherwise it is published on the agent's SNOOP topic.

private void handleRxBasebandSignalNtf(RxBasebandSignalNtf msg) {
  def buf = signal2bytes(msg.signal, 2 * getPreambleLength(msg.preamble))
  int parity = 0
  for (int i = 1; i < buf.length; i++)
    parity ^= buf[i]                     // compute parity bits
  if (buf.length >= HDRSIZE && buf[0] == parity) {
    def hdr = header.decode(buf[0..HDRSIZE-1] as byte[])
    int len = (int)hdr.len
    def rcpt = topic()
    if (hdr.to != Address.BROADCAST && hdr.to != addrCache.address)
      rcpt = topic(agentID, Physical.SNOOP)
    byte[] data = null
    if (len > 0) {
      data = new byte[len]
      System.arraycopy(buf, HDRSIZE, data, 0, len)
    }
    send new RxFrameNtf(
      recipient: rcpt,
      type: msg.preamble,
      rxTime: msg.rxTime,
      location: msg.location,
      rssi: msg.rssi,
      from: (int)hdr.from,
      to: (int)hdr.to,
      protocol: (int)hdr.protocol,
      data: data
    )
  } else {
    send new BadFrameNtf(
      recipient: topic(),
      type: msg.preamble,
      rxTime: msg.rxTime,
      location: msg.location,
      rssi: msg.rssi,
      data: buf
    )
  }
}

That's it! The complete implementation (with a few additional error checks) is available from the unet-contrib repository.

### Testing our custom PHY

Now that we've implemented the phy2 agent, it is time to try it out on Unet audio or your modem. Copy the MyPhy.groovy file to the classes folder in Unet audio or the modem. Then on the shell:

> phy.fullduplex = true
true
> container.add 'phy2', new MyPhy()
phy2
> subscribe phy2

We turn on fullduplex so that we can transmit and receive on the same device. We subscribe to the phy2 agent's topic so that we see the RxFrameNtf when data is received.

TIP

Writing agents in Groovy in the classes folder of UnetStack is often convenient, since Groovy can load the class directly from source without needing explicit compilation. However, if there are errors in the code, Groovy's classloader sometimes gives a cryptic "BUG! exception in phase 'semantic analysis' in source unit" error message. If you encounter this, use groovyc to get a clearer error report:

$ groovyc -cp lib/unet-framework-3.2.0.jar:lib/fjage-1.8.0.jar:lib/unet-yoda-3.2.0.jar classes/MyPhy.groovy
and remember to delete the resulting MyPhy.class file to avoid a stale class file accidentally being used later.
To test the agent, we make a transmission:
> phy2 << new TxFrameReq(data: [1,2,3])
AGREE
phy2 >> TxFrameNtf:INFORM[txTime:19013099]
phy2 >> RxFrameNtf:INFORM[type:CONTROL from:1 rxTime:19039936 rssi:-49.1 (3 bytes)]
If you are using Unet audio, you should have been able to hear the transmission on your speaker. After a short delay, you’d see the reception (RxFrameNtf). We can check the contents to ensure that we got the correct data back:
> ntf.data
[1, 2, 3]
If the frame had any errors, you’d have gotten a BadFrameNtf. In that case, you may want to try adjusting your computer’s volume setting (for Unet audio) or transmit power (plvl command on Unet audio or modem), and try again.
Now that we can transmit and receive correctly, we can enable the rest of the network stack to use our new PHY:
> uwlink.phy = 'phy2'
phy2
> mac.phy = 'phy2'
phy2
> ranging.phy = 'phy2'
phy2
We can send a text message via UnetStack’s remote agent:
> tell 0, 'hi'
AGREE
phy2 >> RxFrameNtf:INFORM[type:DATA from:1 protocol:3 rxTime:96861353 rssi:-48.7 (3 bytes)]
[1]: hi
The resulting datagram goes down the layers of the stack, passes through our new PHY to yield an acoustic signal, is received by the PHY again and converted back into a datagram, which then goes up the stack all the way to the remote agent, which sends it to the shell for display!
### Conclusion
In this article, we have seen how to write a simple custom PHY agent. We intentionally kept the implementation simple by using an uncoded BFSK communication scheme, as the focus of this article was to illustrate how to implement a custom PHY.
In a practical system, you may wish to replace the communication scheme (the bytes2signal() and signal2bytes() methods) with something more sophisticated, including FEC coding. You may also want to use a stronger error detection scheme (CRC rather than parity bits). You'd perhaps also want to consider supporting a variable MTU and some of the optional features of the PHYSICAL service (e.g. timed transmission, timestamping, etc.).
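For instance, a CRC-32 check could replace the single parity byte. Here is a minimal receive-side sketch, assuming the transmitter appends a 4-byte big-endian CRC-32 to each frame (the crcOk helper and this framing are illustrative assumptions, not part of the implementation above):

```groovy
import java.util.zip.CRC32

// Minimal sketch (assumed framing): the last 4 bytes of buf carry a
// big-endian CRC-32 computed over everything before them.
boolean crcOk(byte[] buf) {
    if (buf.length < 5) return false
    def crc = new CRC32()
    crc.update(buf, 0, buf.length - 4)      // checksum over header + data
    long rx = 0
    for (int i = buf.length - 4; i < buf.length; i++)
        rx = (rx << 8) | (buf[i] & 0xff)    // unpack the received CRC field
    return rx == crc.value
}
```

A 32-bit CRC catches burst errors that a single XOR parity byte misses, at the cost of a few extra bytes per frame.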
If you find that Java or Groovy doesn’t meet your signal processing needs, you may consider writing the bytes2signal() and signal2bytes() methods in C (using JNI) or in Julia (my preferred choice!).
TIP
With Unet audio, C or Julia calls from Java work seamlessly. However, if you're running on a real modem, chances are that the modem's JVM sandbox won't let you run non-JVM code directly. If you have a coprocessor on your modem, you can run the phy2 agent on the coprocessor in a fjåge slave container, with no JVM sandbox restrictions. Alternatively, you can run the phy2 agent on your laptop, as long as the laptop is connected over Ethernet to the modem. To start a slave container, just install Unet audio on the coprocessor/laptop, and start Unet with bin/unet sh <ipaddr> 1100, where <ipaddr> is replaced by the IP address of your modem, and port 1100 is the API port set on the modem.
|
|
# What is the square root of 35 rounded to the nearest hundredth?
Wiki User
2015-05-28 23:01:56
The positive square root of 35 is approximately 5.916, which is 5.92 when rounded to the nearest hundredth. Remember, though, that there is another square root, which is -5.92 to the nearest hundredth.
Wiki User
2015-03-08 17:09:30
The square root of 35 is approximately 5.916, which rounded to the nearest hundredth is 5.92.
Wiki User
2015-05-28 23:01:56
5.92
Anonymous
Lvl 1
2020-09-30 15:15:03
5.90
|
|
# zeros or zeroes
Guessing, it's like colour vs. color: maybe "zeroes" is more often used in British English. In the case of "zeros"/"zeroes," the Merriam-Webster Online Dictionary (www.m-w.com) shows the plural forms as "plural zeros also zeroes", so both are right. In COBOL, the figurative constant ZERO/ZEROS/ZEROES represents the numeric value zero (0), or one or more occurrences of the character zero (0), depending on context; this also applies to the other figurative constants, such as LOW-VALUE/LOW-VALUES, SPACE/SPACES and HIGH-VALUE/HIGH-VALUES.

In algebra, a value of x that makes an equation equal to 0 is termed a zero: the zeros of a polynomial equation are the solutions of the function f(x) = 0, and you can find the zeros of an equation using a calculator. Zeroes with a multiplicity of 1 are often called simple zeroes. For example, the polynomial $P\left( x \right) = {x^2} - 10x + 25 = {\left( {x - 5} \right)^2}$ will have one zero, $x = 5$, and its multiplicity is 2.

With large numbers, the bigger the number, the more zeros it contains, and it gets difficult to count them as the numbers grow. We write numbers with commas separating sets of three zeros so that it's easier to read and understand the value; for example, you write one million as 1,000,000 rather than 1000000. Reference to sets of zeros is reserved for groupings of three zeros, meaning they are not relevant for smaller numbers. With supercomputers the more zeros, the more eye-popping it gets: supercomputing performance is measured in FLOPS (floating point operations per second).

In programming, "\x00" * size gives a buffer of binary zeros in Python and [0] * size a list of integer zeros; in general you should use more pythonic code like a list comprehension ([0 for unused in xrange(100)]) or string.join for buffers. A related question: I am looking for a regex pattern that would match several different combinations of zeros such as 00-00-0000 or 0 or 0.0 or 00000. In spreadsheets, you can display zeros as blanks or dashes, or select the entire data having zero values.

Elsewhere, the name turns up in titles: Zero's Sandwich Shop is known for its Lunch Specials and Sandwiches; Ones and Zeros is a puzzle game based on logic circuits, in which you build complex electronics from simple components like Adders, Latches and Multiplexers; the story of Ground Zeroes takes place in 1975, nine years before The Phantom Pain; and, already late in the startup race, four zeroes come together with an almost delusional ambition to create a great company.
|
|
# What Is Piezoelectric Power?
What is piezoelectric power? Piezoelectricity is the process of using crystals to convert mechanical energy into electrical energy, or vice versa. Regular crystals are defined by their organized and repeating structure of atoms that are held together by bonds, this is called a unit cell.
Nevertheless, How many watts can a piezoelectric produce?
The power generated by the vibration of the piezoelectric is shown to be a maximum of 2 mW, which provides enough energy to charge a 40 mAh button cell battery in one hour. Piezoelectric materials form transducers that are able to interchange electrical energy and mechanical motion or force.
Moreover, How is piezoelectric voltage measured? From the shape of the output-voltage curves and the way the voltage depends on the voltmeter's internal resistance and on frequency, one can deduce that the charge on the electrodes of the piezoelectric layer flows through the voltmeter during the measurement, rather than the voltmeter behaving as an ideal open circuit.
Secondly, How many volts is a piezo?
VOLTAGE GENERATED:
Output voltage from 1 piezo disc is 13 V. Thus the maximum voltage that can be generated across the piezo tile (three 13 V discs in series) is around 39 V.
How do piezoelectric materials generate electricity?
Piezoelectricity is the electricity generated by a piezo element through the piezoelectric effect. It is the ability of certain materials to generate an AC (alternating current) voltage when subjected to mechanical stress or vibration, or to vibrate when subjected to an AC voltage, or both.
## Related Question for What Is Piezoelectric Power?
How does a piezo switch work?
A piezo switch is an electrical switch based on the piezoelectric effect. The charge generated by the piezoelectric element in the switch is typically used to turn on an integrated semiconductor device such as a field effect transistor (FET), causing the switch assembly's output to be active, or "on".
What can be measured using piezoelectric transducer?
Explanation: A piezoelectric transducer is used for measuring non-electrical quantities such as vibration, acceleration, pressure and the intensity of sound.
How much current does a piezo draw?
Piezo buzzers have a wide operating voltage ranging from 3 – 250V, and low current draws, typically <10 mA.
Can quartz generate electricity?
Quartz can produce an electrical reaction. Minerals with this ability are called piezoelectric. The electrical reaction can be created by applying a charge, physical stress, or heat.
How do you increase piezoelectric voltage?
Connecting piezo capacitors in series multiplies the maximum voltage generated by the harvester. Increasing the current is possible by using piezo capacitors with a larger surface area.
How is charge sensitivity calculated?
Voltage sensitivity is expressed as the voltage generated in the PZT for every 'g' of input acceleration; its unit is volts/g, where g is 9.8 m/s². Charge sensitivity is the charge generated in the PZT for every 'g' of input acceleration; its unit is pC/g.
What is piezoelectric sensor principle?
Piezoelectric sensors work on the principle of the piezoelectric effect. A metal plate collects the generated charges, which can be used to produce a voltage and drive an electrical current through a circuit, converting mechanical stress into piezoelectricity.
How many volts can be generated using a piezo ignition device?
Voltage applied in the poling direction only can be raised up to ~300 volts. Use caution! How much mechanical power can I get out of one sheet?
How much energy does piezo flooring produce?
Piezoelectric floors generate from many microwatts up to several watts per step, depending on pedestrian traffic and the piezoelectric technology used. Several feasibility studies have been proposed [1][2], covering high-pedestrian public spaces and low-pedestrian private spaces.
How do you make a piezo switch?
Are buttons piezoelectric?
The piezoelectric circuit detects pressure on the surface of the button to provide a momentary actuation signal with no moving parts.
What is capacitive touch switch?
A capacitive switch is a type of touch-controlled electrical switch that operates by measuring change in capacitance. Upon touching a capacitive switch, this electrical charge disturbs the switch's own electrical charge; thus, causing a change in capacitance.
What is piezoelectric transducer?
Piezoelectric transducers are a type of electroacoustic transducer that convert the electrical charges produced by some forms of solid materials into energy. The word "piezoelectric" literally means electricity caused by pressure.
How is output voltage measured in piezoelectric transducers?
Maybe the easiest method to measure the piezoelectric effect is to apply an alternating pressure to the piezoelectric transducer and measure the voltage it develops. You can substitute an operational amplifier for the voltmeter to amplify the transducer voltage and then display the amplified voltage on an oscilloscope.
What is piezoelectric pressure gauge?
A piezoelectric sensor is a device that uses the piezoelectric effect to measure changes in pressure, acceleration, temperature, strain, or force by converting them to an electrical charge. The prefix piezo- is Greek for 'press' or 'squeeze'.
Why buzzer is called piezoelectric device?
The use of the piezo ceramic buzzer was discovered thanks to an inversion of the piezoelectricity principle that was discovered by Jacques and Pierre Curie back in 1880. They found that electricity could be generated when a mechanical pressure was applied to particular materials — and the inverse was true as well.
Is a piezo a capacitor?
Electrical Behavior
At operating frequencies well below the resonant frequency, a piezo actuator behaves like a capacitor. The actuator displacement is proportional to the stored electrical charge, as a first order estimate.
Are piezo actuators series or parallel?
Individual piezo elements in a stacked actuator have alternating polarity, and the electrical field is applied parallel to the direction of polarization.
Are Diamonds piezoelectric?
Piezocrystals are ideal for such devices, as they have a combination of properties such as low acoustic absorption, a high electromechanical coupling coefficient, and a high speed of sound. Diamonds satisfy all these requirements except for one -- there is no piezoelectric effect.
How many planes of cleavage does quartz have?
Are all ferroelectrics Piezoelectrics?
c) All ferroelectrics are therefore piezoelectric, but all piezoelectrics are not ferroelectric. For example: tourmaline is piezoelectric, but not ferroelectric. d) The piezoelectric coefficient is the ratio of the set-up charge to the stress applied along crystallographic axis.
What is piezoelectricity Class 12?
When mechanical stress is applied to polar crystals so as to deform them, electricity is produced due to the displacement of ions. This is known as the piezoelectric effect, and the electricity so produced is known as piezoelectricity or pressure electricity.
What is voltage in piezoelectric sensor?
A piezoelectric sensor applies pressure on the piezoelectric crystal in proportion to the charge output. The maximum pressure applied by piezoelectric sensors can be 1,000 psi and the voltage measurement range can be up to 5 volts.
What are piezo discs made of?
A traditional piezoelectric ceramic is a mass of perovskite ceramic crystals, each consisting of a small, tetravalent metal ion, usually titanium or zirconium, in a lattice of larger, divalent metal ions, usually lead or barium, and O2- ions (Figure 1.1).
Can piezoelectricity be stored?
Output stage of piezoelectric energy harvesting system: the output of a piezoelectric crystal is an alternating signal. With a suitable output stage, the energy can be stored in a capacitor and discharged when required.
What is the formula of voltage sensitivity?
Voltage sensitivity: $\sigma_V=\dfrac{\phi}{V}=\dfrac{NIAB}{kIG}=\dfrac{NAB}{kG}\ \mathrm{rad\,V^{-1}}$, where $G$ is the resistance of the galvanometer. Hence, the correct answer is option A. Note: Increasing current sensitivity does not necessarily mean voltage sensitivity also increases.
What is the relation between charge sensitivity and current sensitivity?
The above relation implies that if the current sensitivity and the resistance increase in the same proportion, the voltage sensitivity will remain unchanged.
|
|
## Tuesday, June 5, 2012
### Heron's Theorem (Teorema Heron)

Given a triangle with sides of length $a$, $b$ and $c$, its area is

${\displaystyle \text{Area}=\sqrt{s\left(s-a\right)\left(s-b\right)\left(s-c\right)}}$

$s=\frac{a+b+c}{2}$

Here $s$ is called the semiperimeter (half the perimeter).

Example 1:

A right triangle with a = 3, b = 4, c = 5 cm:

s = ½(3 + 4 + 5) = 6

Area = √(6·3·2·1) = √36 = 6

Checking via the right angle (legs as base and height):

Area = ½(3 × 4) = 6

Example 2:

An equilateral triangle with a = b = c = 20 cm:

s = ½(20 × 3) = 30

Area = √(30·10·10·10) = √30000 = 100√3

Checking with the Pythagorean theorem:

t = √(20² − 10²) = √300 = 10√3

Area = ½(20 × 10√3) = 100√3
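The formula is easy to check numerically. A minimal Groovy sketch (heronArea is an illustrative name, not from this post):

```groovy
// Minimal sketch of Heron's formula.
double heronArea(double a, double b, double c) {
    double s = (a + b + c) / 2                          // semiperimeter
    return Math.sqrt(s * (s - a) * (s - b) * (s - c))
}

println heronArea(3, 4, 5)      // Example 1: 6.0
println heronArea(20, 20, 20)   // Example 2: 173.205... = 100*sqrt(3)
```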
## Sunday, June 3, 2012
### Elementary Mathematics International Contest (IMC 2008) Solutions
Solution 2 :
3x + 2y = 32, y = (32 - 3x) / 2
4x + 3y = 44
4x + 3(32 - 3x) /2 = 44
8x + 96 - 9x = 88
x = 8
y = (32 - 24) / 2 = 4
2x + y = 16 + 4 = 20
Solution 3 :
sit + stand = 100, stand = 100 - sit.
0.75 stand + 0.25 sit = 70
0.75(100 - sit) + 0.25 sit = 70
75 - 0.75 sit + 0.25 sit = 70
5 = 0.50 sit
sit = 10, so 10% were sitting and hence 90% of the people were standing initially.
Solution 4 :
(110000 - 100000)/3600 × t = (17 + 3), i.e. the relative speed in metres per second times the time equals the 20 m that must be covered.
t = 20 x 3600 / 10000 = 7.2 seconds
## Friday, June 1, 2012
### Elementary Mathematics International Contest (IMC 2008)
1. Starting from the central circle, move between two tangent circles. What is the number of ways of covering four circles with the numbers 2, 0, 0 and 8 inside, in that order?
2. Each duck weighs the same, and each duckling weighs the same. If the total weight of 3 ducks and 2 ducklings is 32 kilograms, the total weight of 4 ducks and 3 ducklings is 44 kilograms, what is the total weight, in kilograms, of 2 ducks and 1 duckling?
3. If 25% of the people who were sitting stand up, and 25% of the people who were standing sit down, then 70% of the people are standing. How many percent of the people were standing initially?
4. A sedan of length 3 metres is chasing a truck of length 17 metres. The sedan is
travelling at a constant speed of 110 kilometres per hour, while the truck is travelling at a constant speed of 100 kilometres per hour. From the moment when the front of the sedan is level with the back of the truck to the moment when the front of the truck is level with the back of the sedan, how many seconds would it take?
5. Consider all six-digit numbers consisting of each of the digits ‘0’, ‘1’, ‘2’, ‘3’, ‘4’ and ‘5’ exactly once in some order. If they are arranged in ascending order, what is the 502nd number?
6. How many seven-digit numbers are there in which every digit is ‘2’ or ‘3’, and
7. How many five-digit multiples of 3 have at least one digit equal to '3'?
8. ABCD is a parallelogram. M is a point on AD such that AM=2MD, N is a point on AB such that AN=2NB. The segments BM and DN intersect at O. If the area of
ABCD is 60 cm2, what is the total area of triangles BON and DOM?
9. ABCD is a square of side length 4 cm. E is the midpoint of AD and F is the midpoint of BC. An arc with centre C and radius 4 cm cuts EF at G, and an arc with centre F and radius 2 cm cuts EF at H. The difference between the areas of the region bounded by GH and the arcs BG and BH and the region bounded by EG, DE and the arc DG is of the form m? ?n cm2, where m and n are integers. What is the value of m+n?
10. In a chess tournament, the number of boy participants is double the number of girl participants. Every two participants play exactly one game against each other. At the end of the tournament, no games were drawn. The ratio between the number of wins by the girls and the number of wins by the boys is 7:5. How many boys were there in the tournament?
11. In the puzzle every different symbol stands for a different digit.
What is the answer of this expression which is a five-digit number?
12. In the figure below, the positive numbers are arranged in the grid following the arrows' direction.
For example,
“8”is placed in Row 2, Column 3.
“9” is placed in Row 3, Column 2.
Which Row and which Column that “2008” is placed?
13. As I arrived home in the afternoon, the 24-hour digital clock showed the time as below (HH:MM:SS). I noticed instantly that the first three digits on the clock were the same as the last three, and in the same order. How many times in twenty-four hours does this happen?
Note: The clock shows time from 00:00:00 to 23:59:59.
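Problem 13 is easy to sanity-check by brute force. A minimal Groovy sketch (not part of the original post) that counts the qualifying times:

```groovy
// Count times HH:MM:SS whose first three digits equal the last three.
int count = 0
for (int t = 0; t < 24 * 3600; t++) {
    int h = t.intdiv(3600), m = t.intdiv(60) % 60, s = t % 60
    String d = String.format('%02d%02d%02d', h, m, s)
    if (d[0..2] == d[3..5]) count++
}
println count   // reports 96
```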
|
|
Coordinate Proofs
Writing proofs is an essential part of any high school geometry course. Consider, for instance, the triangle midsegment theorem, which states: "A midsegment of a triangle, which is a line segment connecting the midpoints of two sides, is parallel to the third side and exactly half its length." For example, in the figure below, $\overline{BE}$ is the midsegment of $\Delta ACD$. The usual "two-column" proof of this theorem goes as follows:
| Statement | Reason |
| --- | --- |
| $AE \cong DE$, $AB \cong BC$ | Hypothesis |
| $\Delta DAC \sim \Delta EAB$ | SAS similarity. |
| $\angle D \cong \angle E$ | Corresponding angles in similar triangles are congruent. |
| $\overline{BE} \parallel \overline{CD}$ | If a transversal $\overline{AD}$ intersecting $\overline{BE}$ and $\overline{CD}$ creates corresponding angles which are congruent, these segments are parallel. |
| $\frac{AD}{AE}=\frac{CD}{BE}$ | The ratios of corresponding sides of similar triangles are all equal. |
| $\frac{AD}{AE}=2$ | E is the midpoint of AD. |
| $CD=2BE$ | Substitution (Q.E.D.) |
The proof above did not actually involve any calculations, but instead took our hypothesis as the starting point and then followed a chain of logic involving various theorems in geometry until arriving at the conclusion. This kind of reasoning is new to most students and can be quite challenging (though they should still spend considerable time and effort to master this).
However, there is another way to prove this theorem which instead translates the problem completely into algebra. We begin by placing the triangle in the coordinate plane, assigning coordinates to the vertices $A$, $C$ and $D$. Since we are free to choose how we position the triangle in the plane, for simplicity we place vertex A at the origin and align side $\overline{CD}$ along the x-axis. We can then prove this theorem by using the following facts from coordinate geometry:
1. The midpoint of a segment with endpoints $(x_1,y_1)$ and $(x_2,y_2)$ is given by
$(\frac{x_1+x_2}{2}, \frac{y_1+y_2}{2})$.
2. The length of a segment with endpoints $(x_1,y_1)$ and $(x_2,y_2)$ is given by
$\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$.
3. The slope of a segment with endpoints $(x_1,y_1)$ and $(x_2,y_2)$ is given by
$\frac{y_2-y_1}{x_2-x_1}$.
4. Two segments are parallel if and only if their slopes are equal.
The proof of the theorem is now easily carried out through straightforward calculations. Working out the coordinates of $E$ and $B$ using the midpoint formula,
$E(\frac{0+x}{2},\frac{0+y}{2})=E(\frac{x}{2},\frac{y}{2})$
$B(\frac{x+z}{2},\frac{y+0}{2})=B(\frac{x+z}{2},\frac{y}{2})$
We compute the slope of $\overline{BE}$ to be
$\frac{y/2-y/2}{(x+z)/2-x/2}=0$
This is equal to the slope of $\overline{CD}$, which lies along the x-axis, so these lines are parallel.
Computing now the length of $\overline{BE}$, we find
$BE=\frac{x+z}{2}-\frac{x}{2}=\frac{z}{2}$
which is exactly half the length $z$ of $\overline{CD}$. (Q.E.D.)
Coordinate geometry also lends itself well to proofs involving quadrilaterals. For instance, a common problem is to determine whether a given quadrilateral is of a special type:
• Trapezoid (one pair of opposite sides are parallel)
• Parallelogram (both pairs of opposite sides are parallel)
• Rectangle (a parallelogram with 4 right angles)
• Rhombus (a parallelogram with 4 congruent sides)
• Square (a rectangle and a rhombus)
From the definitions, we see that all of these can be verified by using only the slope and/or distance formulas. For the case of a rectangle, we use the additional fact that two segments are perpendicular if and only if their slopes are negative reciprocals (e.g. $\frac{2}{3}$ and $-\frac{3}{2}$). However, there is one case in which one must be careful in using this method. In the coordinate system below, the sides of the rectangle are parallel to the x and y axes:
Since the slope of a vertical line is undefined, one cannot use the slope formula to show that adjacent sides have negative reciprocal slopes ($\frac{0}{1}$ and $\frac{1}{0}$ are not reciprocals!). To avoid this difficulty, one can instead use the fact that the diagonals of a rectangle are congruent, which can easily be checked using the distance formula.
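As a quick illustration of the diagonal check, a minimal Groovy sketch (the axis-aligned rectangle and the dist helper are illustrative):

```groovy
// Distance formula on [x, y] point pairs.
def dist = { p, q -> Math.sqrt((q[0] - p[0])**2 + (q[1] - p[1])**2) }

// Axis-aligned rectangle: slopes are 0 and undefined, so compare diagonals instead.
def A = [0, 0], B = [4, 0], C = [4, 3], D = [0, 3]
assert dist(A, C) == dist(B, D)   // congruent diagonals (both 5.0)
```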
Perimeter and Area
Coordinate methods are not only useful for writing proofs, but can also be used to compute the perimeter and area of polygons. Computing the perimeter is straightforward: one simply uses the distance formula to compute the length of each side of the polygon, and then adds all of these lengths together.
Computing the area, as we will see, can be more subtle. Let us first consider the triangle below:
The area is given by the familiar formula $\frac{1}{2}bh$, where $b$ is the base of the triangle and $h$ is the height. Taking the base to be the segment $\overline{BC}$, the height is the length of the altitude connecting vertex $A$ to $\overline{BC}$. Using the distance formula, it is easy to see that $b=4$ and $h=1$, so the area of the triangle is $\frac{1}{2}(4)(1)=2$ square units.
This example was easy, but suppose you are instead given a triangle which is “tilted” in the coordinate plane, such as the one below:
Taking again $\overline{AC}$ to be the base of the triangle, it is still straightforward to compute its length using the distance formula:
$AC=\sqrt{(4-(-2))^2+(2-3)^2}=\sqrt{37}$
However, the height is now much harder to compute! The altitude connecting vertex $B$ to the opposite side must intersect $\overline{AC}$ at a right angle. To find this segment, we can first find the equation of the line containing $\overline{AC}$, which has the form $y=mx+b$. Using the slope formula, we find
$m=\frac{2-3}{4-(-2)}=-\frac{1}{6}$.
Using the fact that the line must pass through point $A(-2,3)$, the y-intercept, $b$, is calculated by plugging in the coordinates of $A$:
$3=-\frac{1}{6}(-2)+b$, so $b=\frac{8}{3}$
The equation of the line containing $\overline{AC}$ is therefore given by
$y=-\frac{1}{6}x+\frac{8}{3}$
We can now construct the line perpendicular to this line and passing through point $B(1,5)$. To be perpendicular to $\overline{AC}$, the slope must be the negative reciprocal of the first, so this line has the form $y=6x+b$. We now solve for $b$ as before by plugging in the coordinates of $B$:
$5=6(1)+b$, so $b=-1$
The equation of the line containing the altitude through $B$ is therefore $y=6x-1$. We now need to find the coordinates of the point $P$ where these two lines intersect by solving the system of linear equations
$y=-\frac{1}{6}x+\frac{8}{3}$
$y=6x-1%0$
By substituting the second into the first, we find the solution
$P=(\frac{22}{37},\frac{95}{37})$
Finally, we can use the distance formula to compute the height of the triangle:
$h=BP=\sqrt{(\frac{22}{37}-1)^2+(\frac{95}{37}-5)^2}=\frac{15}{\sqrt{37}}$
The area of the triangle is therefore
$A=\frac{1}{2}\cdot\sqrt{37}\cdot\frac{15}{\sqrt{37}}=\frac{15}{2}=7.5$ square units.
Fortunately, there are other ways to compute the area of the triangle which do not involve so much work. There is actually an alternative formula for the area of a triangle which avoids computing the height altogether. This is known as Heron's formula. Denoting the side lengths of a triangle by $a,b$ and $c$, we first compute the semiperimeter, $s$, of the triangle, which is simply half the perimeter:
$s=\frac{a+b+c}{2}$
Heron's formula then states that the area is given by
$A=\sqrt{s(s-a)(s-b)(s-c)}$
(Note by the symmetry of this expression that it does not matter which side is labeled $a$, $b$ or $c$.) This formula is obtained by noting that an altitude of a triangle partitions it into two right triangles. Applying the Pythagorean theorem to each of these smaller triangles, we can eliminate $h$ from the area formula, and, after some factoring, we arrive at Heron's formula.
Returning to our example, using the distance formula we obtain
$a=AB=\sqrt{13}$
$b=BC=3\sqrt{2}$
$c=AC=\sqrt{37}$
Plugging in to Heron's formula (and a calculator!), we obtain an area of $7.5$
square units.
By using Heron’s formula, we arrive at the answer with far less work than the previous method. Moreover, since Heron’s formula only uses the distance formula, it is straightforward to use for any triangle in the plane, regardless of its orientation.
We will now show another method which will allow us to compute the area with even less work. The trick is to draw a rectangle containing the triangle which passes through all of its vertices:
We now see that the area of the triangle can be obtained by computing the area of this rectangle and subtracting the areas of the 3 right triangles at the corners:
$A=A_\text{rect}-A_{\Delta_1}-A_{\Delta_2}-A_{\Delta_3}$
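For the tilted triangle of the running example, with vertices $(-2,3)$, $(1,5)$ and $(4,2)$, the enclosing rectangle is $[-2,4]\times[2,5]$ with area $6\cdot3=18$, and the three corner right triangles have legs $(3,2)$, $(3,3)$ and $(6,1)$, so

$A = 18 - \tfrac{1}{2}(3)(2) - \tfrac{1}{2}(3)(3) - \tfrac{1}{2}(6)(1) = 18 - 3 - 4.5 - 3 = 7.5$

square units, matching both of the earlier computations.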
These methods of computing areas can be extended to any polygon: since any polygon can be partitioned into triangles, we can use Heron’s formula repeatedly to compute the area of each triangle, and then add them all together. Alternatively, we can enclose the polygon in a rectangle and compute its area, subtracting off the areas of some number of right triangles. While the latter method is usually the easiest, it is not always guaranteed to work. For instance, consider the parallelogram below:
In this case, enclosing the parallelogram in a rectangle does not create right triangles, but instead two concave quadrilaterals!
These examples illustrate the utility of a variety of different computational techniques, as there is no one method which is ideal for all situations. While the student must spend considerable time and effort mastering classical methods and the art of two column proofs, for many problems coordinate methods provide a powerful technique to quickly come to the same conclusion by straightforward, concrete calculations.
Be sure to give us a shout if you have any questions!
|
|
# LHCb Papers
Latest items:
2019-07-18
17:03
Precision measurement of the $\Lambda_c^+$, $\Xi_c^0$, and $\Xi_c^+$ baryon lifetimes / LHCb Collaboration We report measurements of the lifetimes of the $\Lambda_c^+$, $\Xi_c^+$ and $\Xi_c^0$ charm baryons using proton-proton collision data at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3.0 fb$^{-1}$, collected by the LHCb experiment. [...] arXiv:1906.08350 ; LHCb-PAPER-2019-008 ; CERN-EP-2019-122 ; LHCB-PAPER-2019-008. - 2019. - 20 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-07-02
10:45
Observation of the $\Lambda_b^0\to \chi_{c1}(3872)pK^-$ decay / LHCb Collaboration Using proton-proton collision data, collected with the LHCb detector and corresponding to 1.0, 2.0 and 1.9 fb$^{-1}$ of integrated luminosity at the centre-of-mass energies of 7, 8, and 13 TeV, respectively, the decay $\Lambda_b^0\to \chi_{c1}(3872)pK^-$ with $\chi_{c1}\to J/\psi\pi^+\pi^-$ is observed for the first time. [...] arXiv:1907.00954 ; CERN-EP-2019-131 ; LHCb-PAPER-2019-023 ; LHCB-PAPER-2019-023. - 2019. - 21 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-06-21
17:31
Updated measurement of time-dependent CP-violating observables in $B^0_s \to J/\psi K^+K^-$ decays / LHCb Collaboration The decay-time-dependent {\it CP} asymmetry in $B^{0}_{s}\to J/\psi K^{+} K^{-}$ decays is measured using proton-proton collision data, corresponding to an integrated luminosity of $1.9\,\mathrm{fb^{-1}}$, collected with the LHCb detector at a centre-of-mass energy of $13\,\mathrm{TeV}$ in 2015 and 2016. [...] arXiv:1906.08356 ; LHCb-PAPER-2019-013 ; CERN-EP-2019-108 ; LHCB-PAPER-2019-013. - 2019. - 42 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-06-21
17:07
Measurement of $C\!P$ observables in the process $B^0 \to DK^{*0}$ with two- and four-body $D$ decays / LHCb Collaboration Measurements of $C\!P$ observables in $B^0 \to DK^{*0}$ decays are presented, where $D$ represents a superposition of $D^0$ and $\bar{D}^0$ states. [...] arXiv:1906.08297 ; LHCb-PAPER-2019-021; CERN-EP-2019-111 ; LHCB-PAPER-2019-021. - 2019. - 30 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-06-03
12:12
Amplitude analysis of $B^\pm \to \pi^\pm K^+ K^-$ decays / LHCb Collaboration The first amplitude analysis of the $B^\pm \to \pi^\pm K^+ K^-$ decay is reported based on a data sample corresponding to an integrated luminosity of 3.0 fb$^{−1}$ of $pp$ collisions recorded in 2011 and 2012 with the LHCb detector. [...] arXiv:1905.09244 ; LHCb-PAPER-2018-051 ; CERN-EP-2019-062 ; LHCB-PAPER-2018-051. - 2019. - 18 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-05-16
15:50
Search for the lepton-flavour-violating decays $B^{0}_{s}\to\tau^{\pm}\mu^{\mp}$ and $B^{0}\to\tau^{\pm}\mu^{\mp}$ / LHCb Collaboration A search for $B^{0}_{s}\to\tau^{\pm}\mu^{\mp}$ and $B^{0}\to\tau^{\pm}\mu^{\mp}$ decays is performed using data corresponding to an integrated luminosity of 3 fb$^{-1}$ of proton-proton collisions, recorded with the LHCb detector in 2011 and 2012. [...] arXiv:1905.06614 ; CERN-EP-2019-076 ; LHCb-PAPER-2019-016 ; LHCB-PAPER-2019-016. - 2019. - 17 p. Fulltext - Related data file(s) - Supplementary information - Fulltext
2019-05-16
14:53
Amplitude analysis of the $B^0_{(s)} \to K^{*0} \overline{K}^{*0}$ decays and measurement of the branching fraction of the $B^0 \to K^{*0} \overline{K}^{*0}$ decay / LHCb Collaboration The $B^0 \to K^{*0} \overline{K}^{*0}$ and $B^0_s \to K^{*0} \overline{K}^{*0}$ decays are studied using proton-proton collision data corresponding to an integrated luminosity of 3fb$^{-1}$. An untagged and time-integrated amplitude analysis of $B^0_{(s)} \to (K^+\pi^-)(K^-\pi^+)$ decays in two-body invariant mass regions of 150 MeV$/c^2$ around the $K^{*0}$ mass is performed. [...] arXiv:1905.06662; LHCb-PAPER-2019-004; CERN-EP-2019-063; LHCB-PAPER-2019-004.- Geneva : CERN, 2019-07-05 - 30 p. - Published in : JHEP 1907 (2019) 032 Article from SCOAP3: PDF; Fulltext: PDF; Related data file(s): ZIP;
2019-05-16
14:31
Measurement of the mixing-induced and $CP$-violating observables of $B^0_s\to\phi\gamma$ decays / LHCb Collaboration A time-dependent analysis of the $B_s^0 \to \phi\gamma$ decay rate is performed to determine the $CP$-violating observables $S_{\phi\gamma}$ and $C_{\phi\gamma}$, and the mixing-induced observable $\mathcal{A}^{\Delta}_{\phi\gamma}$. [...] arXiv:1905.06284 ; LHCb-PAPER-2019-015; CERN-EP-2019-077 ; LHCB-PAPER-2019-015. - 2019. - 15 p. Fulltext - Related data file(s)
2019-05-09
11:08
A search for $\Xi_{cc}^{++}\to D^+pK^-\pi^+$ decays / LHCb Collaboration A search for the $\it{\Xi}^{++}_{cc}$ baryon through the $\it{\Xi}^{++}_{cc} \rightarrow D^{+} p K^{-} \pi^{+}$ decay is performed with a data sample corresponding to an integrated luminosity of 1.7 $\mathrm{fb}^{-1}$ recorded by the LHCb experiment in $pp$ collisions at a centre-of-mass energy of 13 TeV [...] arXiv:1905.02421 ; LHCb-PAPER-2019-011 ; CERN-EP-2019-067 ; LHCB-PAPER-2019-011. - 2019. - 20 p. Full text - Full text - Fulltext
2019-05-06
17:27
Measurement of charged hadron production in $Z$-tagged jets in proton-proton collisions at $\sqrt{s}=8$ TeV / LHCb Collaboration The production of charged hadrons within jets recoiling against a $Z$ boson is measured in proton-proton collision data at $\sqrt{s}=8$ TeV recorded by the LHCb experiment. [...] arXiv:1904.08878 ; CERN-EP-2019-065 ; LHCb-PAPER-2019-012 ; LHCB-PAPER-2019-012. - 2019. - 17 p. Fulltext - Supplemental information - Supplementary information
|
|
# Simplest method for additive colouring
## Recommended Posts
Greets guys, I was wondering what the simplest method for increasing the level of a given colour (red, green or blue) on a textured quad would be? I know I can call glColor(255,0,0) to draw with only the red component of a texture, of course, but if there is no red in the image it appears as black, which is not what I want. I just want all the red to be exaggerated. Thanks for any help, Rob
##### Share on other sites
Then don't remove the other color components completely by multiplying by zero. Multiply by something greater than zero, say, 128 to keep half of the other two components.
edit: This will not increase the red component, but decrease the other components. If you really want to increase it and keep the other, check out the GL_RGB_SCALE operation in the texture environment functions. Use that together with modulation to increase a color component.
[edited by - Brother Bob on November 8, 2003 8:24:40 PM]
##### Share on other sites
I can decrease the other components, as you suggest, but this will simply tend the resulting texture towards black, encountering the problem I have now. I need to actually increase the red component.
How do I go about using the GL_RGB_SCALE operation, if you don''t mind me asking?
##### Share on other sites
Set the texture environment combiners like this:

```
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PRIMARY_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_RGB_SCALE, 2);  // or 4
```
You can set the scale to 2 or 4, depending on how much you plan on boosting the colors. 2 means you cannot boost them more than 2x, same for 4.
Now you use the primary color to determine the exact gain. A color of 1.0/scale will result in no scaling, 1.5/scale will result in boosting it by 50%, and so on.
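For example (following the rule above), with the scale set to 2, a primary colour of (0.75, 0.5, 0.5) multiplies the texture's red by 0.75 x 2 = 1.5 while green and blue stay at 0.5 x 2 = 1.0, giving a 50% red boost with the other channels untouched.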
|
|
# An approximate converse of discrete uncertainty principle

Asked by Sasho Nikolov (2011-12-11) on MathOverflow (http://mathoverflow.net/questions/83208).

Let $f:\mathbb{Z}_n \rightarrow \{0, 1\}$ and let's normalize the Fourier transform $\hat{f}$ so that $\|\hat{f}\|_2 = \|f\|_2$, i.e.
$$\hat{f}(\xi) = \frac{1}{\sqrt{n}}\sum_{x \in \mathbb{Z}_n}{f(x)e^{-2\pi i x \xi/n}}$$
Also let $\hbox{supp}(f) = \{x \in \mathbb{Z}_n: f(x) \neq 0\}$.

What I am calling the *discrete uncertainty principle* is the following statement:

> If $|\hbox{supp}(f)| > 0$ then $|\hbox{supp}(f)| \cdot |\hbox{supp}(\hat{f})| \geq n$.

This inequality is tight for the Dirac comb. Also, for $n$ a prime number a much stronger inequality is true: $|\hbox{supp}(f)| + |\hbox{supp}(\hat{f})| \geq n + 1$ (again as long as $f$ is not the constant 0 function).

The uncertainty principle states that if $f$ is "concentrated" then $\hat{f}$ is "spread-out". I am interested in the existence of a weak converse, i.e. is it true in some approximate sense that if $f$ is very spread out then $\hat{f}$ is fairly concentrated.

Here is a possible theorem statement that I would like to be true:

> Let $f:\mathbb{Z}_n \rightarrow \{0, 1\}$ and let $\hat{f}$ be defined as above. Is it true that for any $f$ s.t. $\|f\|_2^2 \geq \sqrt{n}$ there exists a set $S \subseteq \mathbb{Z}_n$ s.t. $|S| \leq \sqrt{n}$ and
> $$\sum_{\xi \in S}{|\hat{f}(\xi)|^2} \geq \|\hat{f}\|_2^2 - \sqrt{n} = \|f\|_2^2 - \sqrt{n}$$

Note that since the range of $f$ is $\{0, 1\}$, $|\hbox{supp}(f)| = \|f\|_2^2$. Note also that the condition that $\|f\|_2^2 \geq \sqrt{n}$ is redundant given the error factor of $\sqrt{n}$. On the other hand, some error factor is necessary, given the strong inequality for $n$ a prime number that I mentioned above.

The reasons I have for guessing this statement are that

1. I want it to be true (for my application) :)
2. I have checked it by brute-force enumeration for $n \leq 23$.

Is there any statement of this form known? Or is it obviously false?

Answer by Terry Tao (2011-12-12):

(My previous comment, converted to an answer as requested.)

If one sets $f$ to be the random 0-1 valued function, then from the Chernoff inequality one sees that with non-zero probability, one has $\hat f(0) = \sqrt{n}/2 + O(1)$, $\|f\|_2^2 = n/2 + O(\sqrt{n})$ and $\hat f(\xi) = O(\log n)$ for all $\xi \neq 0$, so the Fourier transform is basically maximally dispersed, so there is no concentration at anywhere near the scale suggested.

If $n$ is prime, one can obtain a deterministic version of this example (without the losses of $\log n$) by taking $f$ to be the indicator function of the quadratic residues, and then using Gauss sums.

Informally, "most" functions (drawn from, say, a gaussian measure) will be more or less uniformly spread out in phase space, which implies that the function and its Fourier transform will both be spread out uniformly as well. Concentration (either in physical space or frequency space) is the exception rather than the rule.
|
|
# The standard reduction potentials of the $Cu^{2+}/Cu$ and $Ag^+/Ag$ electrodes are 0.337 and 0.799 volt respectively. For this galvanic cell (which has positive emf), at what concentration of $Ag^+$ will the emf of the cell at $25^{\large\circ}C$ be zero, if the concentration of $Cu^{2+}$ is 0.01 M?
$\begin{array}{1 1}(a)\;1.37\times 10^9M\\(b)\;1.67\times 10^9M\\(c)\;1.57\times 10^9M\\(d)\;1.47\times 10^9M\end{array}$
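A sketch of the working via the Nernst equation, assuming the usual $0.0592\,\text{V}$ value for $2.303RT/F$ at $25^{\large\circ}C$ (note that the answer options appear to have dropped the minus sign in the exponent, since $10^9$ M is not a physical concentration):

$$E_{cell}=\underbrace{(0.799-0.337)}_{0.462\ \text{V}}-\frac{0.0592}{2}\log\frac{[Cu^{2+}]}{[Ag^+]^2}=0$$

$$\log\frac{10^{-2}}{[Ag^+]^2}=\frac{2(0.462)}{0.0592}\approx 15.6\ \Rightarrow\ [Ag^+]\approx 1.57\times 10^{-9}\,M$$

which corresponds to option (c) up to the sign of the exponent.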
|
|
# Exercise C.1.11
$\star$ Argue that for any integers $n \ge 0, j \ge 0, k \ge 0$ and $j + k \le n$,
$$\binom{n}{j+k} \le \binom{n}{j}\binom{n-j}{k}$$
Provide both an algebraic proof and an argument based on a method for choosing $j + k$ items out of $n$. Give an example in which equality does not hold.
First, let's establish that $j!k! \le (j+k)!$. Both sides are products of $j+k$ factors; the last $k$ factors of $(j+k)!$, namely $(j+1)(j+2)\ldots(j+k)$, are each at least as large as the corresponding factors $1\cdot2\cdot\ldots\cdot k$ of $j!k!$.
Thus:
$$\binom{n}{j} \binom{n-j}{k} = \frac{n!}{j!(n-j)!} \frac{(n-j)!}{k!(n-j-k)!} = \frac{n!}{j!k!(n-j-k)!} \ge \frac{n!}{(j+k)!(n-j-k)!} = \binom{n}{j+k}$$
As for the argument, the right side is the number of ways in which we can:
1. Choose $j$ elements out of $n$
2. Choose $k$ elements out of the remaining $n-j$ elements
There are more ways to do that than ways to simply choose $j+k$ elements out of $n$, because the two-step approach imposes an ordering on the choice. For example, if $j = k = 1$, there are two ways to pick a given pair of elements (either one can be picked first), but only one way as a single choice of two.
If $n = 4, j = k = 1$, then equality does not hold: $\binom{4}{2} = 6 < 12 = \binom{4}{1}\binom{3}{1}$.
|
|
3 editions of *The three-dimensional flow past a rapidly rotating circular cylinder* found in the catalog.

# The three-dimensional flow past a rapidly rotating circular cylinder

Published by National Aeronautics and Space Administration, Langley Research Center; National Technical Information Service, distributor, in Hampton, Va. / Springfield, Va.
Written in English
Subjects:
• Reynolds number.,
• Cylinders.
• Edition Notes
The Physical Object
• Other titles: Three dimensional flow past a rapidly rotating circular cylinder
• Statement: James P. Denier, Peter W. Duck
• Series: ICASE report no. 93-52; NASA contractor report 191512; NASA contractor report NASA CR-191512
• Contributions: Duck, Peter W., Langley Research Center
• Format: Microform
• Pagination: 1 v.
• Open Library ID: OL14697292M
Purpose – The purpose of this paper is to study the unsteady boundary layer flow of a micropolar fluid past a circular cylinder which is started impulsively from rest. Design/methodology/approach – The nonlinear partial differential equations consisting of three independent variables are solved numerically using the 3D Keller‐box method. Findings – Numerical . On the influence of a wire placed upstream of a rotating cylinder: Three-dimensional effects, In Proceedings of the 19th Australasian Fluid Mechanics Conference (Eds: Harun Chowdhury and Firoz Alam, Pub: RMIT University, Melbourne, Australia, ISBN: ), RMIT University, Melbourne, Australia, December , (4 pages).
John Wilder Miles was born in Cincinnati, Ohio, on December 1, After graduating from high school in Oakland, CA, Miles entered Cal Tech and went on to receive a bachelor's degree in electrical engineering in , a master's degree in electrical and aeronautical engineering in , and a Ph.D. in electrical engineering in Moving Surface Boundary-layer Control (MSBC) was applied to several two dimensional bluff bodies using a high speed rotating cylinder as a momentum injecting device. Flow past a symmetric airfoil; a D-section; as well as square and rectangular prisms, representing a family of shapes with progressively increasing bluffness were studied in presence of the MSBC. In the case of the airfoil, the.
Finally, the translational and rotational motion of an elliptic cylinder around a fixed circular cylinder and its three-dimensional counterpart were studied. Acknowledgements The author was grateful to his former student, Dr. Ren Sun, for collaboration on the research of . In another example, Pralits et al. () studied the stability of the wake of a rotating cylinder, characterized by the suppression of the periodic shedding in a relatively narrow specific range of rotation rates, as long as the analysis is limited to two-dimensional flow. Structural-sensitivity analysis was instrumental in isolating this range.
### The three-dimensional flow past a rapidly rotating circular cylinder
Three-dimensional computations are carried out for a spinning cylinder placed in a uniform flow, with the non-dimensional rotation rate $\alpha$ varied over a range of values. A stabilized finite element method is utilized to solve the incompressible Navier–Stokes equations in primitive variables. Get this from a library:
The three-dimensional flow past a rapidly rotating circular cylinder. [James P Denier; Peter W Duck; Langley Research Center.]. Navrose et al. [38] described several new instability modes for the case of a three-dimensional flow past a rotating circular cylinder.
In three-dimensional systems, the cylinder span becomes. Flow past a spinning circular cylinder placed in a uniform stream is investigated via three-dimensional computations.
A stabilized finite element method is utilized to solve the incompressible Navier-Stokes equations in the primitive variables formulation. The Reynolds number is based on the cylinder diameter and freestream speed of the flow. Three-dimensional instability of the rotating cylinder flow: further investigations are needed on the onset of a three-dimensional flow past a rotating circular cylinder.
Ideal flow model of flow past a circular cylinder: in AOE you studied irrotational incompressible flow past a circular cylinder without circulation (see Bertin, section ). Such a flow can be generated by adding a uniform flow in the positive x direction to a doublet at the origin directed in the negative x direction.
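As a quick illustration of that superposition (my own sketch; the names and values below are mine, not from the text), the complex potential $w(z) = U(z + R^2/z)$ encodes exactly the uniform stream plus doublet, with complex velocity $u - iv = dw/dz$:

```
import numpy as np

U, R = 1.0, 1.0                      # free-stream speed and cylinder radius

def velocity(z):
    """Complex velocity u - i*v of ideal flow past a circular cylinder."""
    return U * (1.0 - R**2 / z**2)

# Stagnation points sit at z = +/- R on the cylinder surface:
print(velocity(R + 0j), velocity(-R + 0j))   # both ~0
# Far upstream the flow recovers the uniform stream:
print(velocity(-100 + 0j))                   # ~U
```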
Such a flow can be generated by adding a uniform flow, in the positive x direction to a doublet at the origin directed in the negative x direction. A two-dimensional numerical study on the laminar flow past a circular cylinder rotating with a constant angular velocity was carried out.
The objectives were to obtain a consistent set of data for the drag and lift coefficients for a wide range of rotation rates not available in the literature and a deeper insight into the flow field and vortex development behind the cylinder.
Figure 13: Streamlines around a circular cylinder rotating in a uniform flow; Reynolds number 68 and V/U = , where V is the circumferential velocity and U the velocity of the undisturbed flow (aluminum flake method). Figure 14: Sheets of tracer particles separated from the surface of a rotating circular cylinder (electrolytic precipitation).
The mean flow around a truncated cylinder of aspect ratio 1 can be seen to be composed of three distinct flow features; that is the flow over the free end, the arch vortex and the horseshoe vortex, which interact strongly with each other, generating a fully three-dimensional flow.
This paper is a numerical study of the initial flow past a circular cylinder with combined streamwise and transverse oscillations. The motion is governed by the two-dimensional unsteady Navier–Stokes equations in non-primitive variables.
The method of solution is based on conjugating the perturbation theory with the collocation method. Depending upon the relative sizes of the parameters of the problem, rotating flow of a vertically confined fluid past an asymmetric object, in this case a circular cylinder with top sliced at an angle, is investigated.
Extension of the familiar concept of boundary-layer separation to flow along moving walls and unsteady flows is a subject that attracted some interest in the ’s and has been investigated further in the past few years.
The well-known criterion of vanishing wall-shear does not apply in such flows, and therefore the definition of the phenomenon becomes more difficult than in the simpler case. The flow over bluff bodies like spheres [24,25,26] and circular cylinders [27,28,29,30] is a classical problem in fluid mechanics.
[] focused on the influence of turbulence on the wind pressure and aerodynamic behavior of smooth circular cylinders, and Duan [] studied numerically the flow past a yawed circular cylinder using large eddy simulation.
() The unsteady boundary layer flow past a circular cylinder in micropolar fluids. International Journal of Numerical Methods for Heat & Fluid Flow() Series solutions of unsteady boundary layer flow of a micropolar fluid near the forward stagnation point of a plane surface. Bell, J. R., Burton, D. & Thompson, M.
The boundary-layer characteristics and unsteady flow topology of full-scale operational inter-modal freight. Ninth International Conference on Numerical Methods in Fluid Dynamics Supersonic flow past circular cones at high angles of yaw, downstream of separation.
Multigrid solution of the Navier-Stokes equations for the flow in a rapidly rotating cylinder. Ramos-García, N., Sarlak, H., Andersen, S. and Sørensen, J. N., "Simulations of the flow past a cylinder using an unsteady double wake model": In the present work, the in-house UnSteady Double Wake Model (USDWM) is used to simulate flows past a cylinder at subcritical, supercritical, and transcritical Reynolds numbers.
Some Interesting Features of Flow Past Slotted Circular Cylinder at Re = (G K Suryanarayana, V Y Mudkavi, R Kurade, K M Naveen) A High-Resolution Compressible DNS Study of Flow Past a Low-Pressure Gas Turbine Blade (R Ranjan, S M Deshpande, R Narasimha).
Abstract. The purpose of this chapter is to survey classical and modern measurement techniques used in rotating flow experiments. Since the measurement of rotating flows is now a broad and rapidly developing art, it is clear that only a summary of the essential features of each measurement system can be given.
It is noted that the initial flow past an impulsively started rotating and translating circular cylinder had been discussed along with a novel analysis by Badr and Dennis [6], Badr et al.
[5], and Coutanceau and Menard [19].The main purposes of this work are: (i) to generalize some theoretical results of Collins and Dennis [17] by. Eighteenth Symposium on Naval Hydrodynamics. Washington, DC: The National Academies Press. doi: / [27] and three-dimensional flow [28]. The latter reference includes the ef- fects of shear of a radially and circumfer- entially varying axial inflow to the actuator disk and incorporates some non-linear terms in the equations.Moreover, they calculated the fluid region and the rigid region around the cylinder—i.e., the corresponding yield surface.
In another work by Mitsoulis, the creeping flow of a Bingham fluid past a circular cylinder and the wall effects were investigated by FEM simulations. The blockage ratio varied between 2–50 in their study.Flow-visualization tests were conducted for right circular cylinders at Reynolds numbers from to 21, and for wavy cylinders at Reynolds numbers of 5, 10, These tests revealed new information concerning the secondary streamwise vortical structures (ribs) in the immediate wake of a right circular cylinder.
|
|
# Math Help - Time of death problem using differential equation
1. ## Time of death problem using differential equation
The problem states: Give an interval that estimates the person's time of death. Ts = 21.1 degrees C is the room temperature. The normal temperature of the body, Ta, lies in an interval of 36.6 to 37.2 degrees C. The temperature of the body when found at midnight is Ti = 34.8 degrees C. The final temperature of the body a half hour later is Tf = 34.3 degrees C.
So using Newton's cooling equation:
k(t1 - t2) = -ln((T1 - Ts)/(T2 - Ts))
where, for the corpse's temperature readings, the constant is:
k = -(1/2)ln((Tf - Ts)/(Ti - Ts))
Time of death is:
D = -(1/k)ln((Ta - Ts)/(Ti - Ts))
So I'm confused on where to integrate. I wanted to integrate for the time of death To with respect to time, but there is not a time variable in that equation, so I'm stumped. Thanks to all who can help.
2. ## Re: Time of death problem using differential equation
Originally Posted by nivek0078
The problem states: Give an interval that estimates the person's time of death. Ts = 21.1 degrees C is the room temperature. The normal temperature of the body, Ta, lies in an interval of 36.6 to 37.2 degrees C. The temperature of the body when found at midnight is Ti = 34.8 degrees C. The final temperature of the body a half hour later is Tf = 34.3 degrees C.
So using Newton's cooling equation:
k(t1 - t2) = -ln((T1 - Ts)/(T2 - Ts))
where, for the corpse's temperature readings, the constant is:
k = -(1/2)ln((Tf - Ts)/(Ti - Ts))
Time of death is:
D = -(1/k)ln((Ta - Ts)/(Ti - Ts))
So I'm confused on where to integrate. I wanted to integrate for the time of death To with respect to time, but there is not a time variable in that equation, so I'm stumped. Thanks to all who can help.
So the first order ode is given by the model
$\frac{dT}{dt} \propto (T_s-T)$ where $T_s$ is the room temperature.
This gives that
$\frac{dT}{dt}=k(21.1-T)$
If we separate this equation and integrate, we get
$\frac{dT}{T-21.1}=-kdt \implies \ln|T-21.1|=-kt+C \iff T(t)=Ae^{-kt}+21.1$
We are told the temperature at midnight $t_m$ and 30 minutes later $t_m+30$
Using these two data point we get
$34.8=Ae^{-kt_m}+21.1 \iff Ae^{-kt_m}=13.7$
and
$34.3=Ae^{-kt_m-30k}+21.1 \iff e^{-30k}=\frac{13.2}{Ae^{-kt_m}}$
Putting the first equation into the second gives
$e^{-30k}=\frac{132}{137} \iff k=-\frac{1}{30}\ln \left( \frac{132}{137}\right)$
Putting this back into the original equation gives
$T(t)=Ae^{\frac{t}{30}\ln \left( \frac{132}{137}\right)}+21.1 =A \left( \frac{132}{137}\right)^{\frac{t}{30}}+21.1$
Solving this for time gives
$t=\frac{30\ln\left( \frac{T-21.1}{A}\right)}{\ln \left( \frac{132}{137}\right)}$
Since we know the temperature of the body at midnight we can plug that into this equation to get
$t=\frac{30\ln\left( \frac{13.7}{A}\right)}{\ln \left( \frac{132}{137}\right)}$
Now if you plug in the two values of $A$ corresponding to the initial temperatures, it will give you the number of minutes before midnight. Just use the two different values of $A$ given.
3. ## Re: Time of death problem using differential equation
Thank you TheEmptySet for your help! You made understanding this problem a lot easier! I have one question though. What do you mean by the initial temperatures A? Is it the initial temperature (Ti) at which the body was found and the final temperature (Tf), or is it the average body temperature (Ta) and the final (Tf)? Sorry, I'm sure I'm way over-thinking this! If anyone else has the answer please respond.
4. ## Re: Time of death problem using differential equation
Originally Posted by nivek0078
Thank you TheEmptySet for your help! You made understanding this problem a lot easier! I have one question though. What do you mean by the initial temperatures A? Is it the initial temperature (Ti) at which the body was found and the final temperature (Tf), or is it the average body temperature (Ta) and the final (Tf)? Sorry, I'm sure I'm way over-thinking this! If anyone else has the answer please respond.
Notice that
$T(0)=A+21.1$
This should be the temperature at the beginning. You are given a range of values for $T(0)$.
$36.6 \le T(0) \le 37.2 \implies 36.6 \le A+21.1 \le 37.2 \iff 15.5 \le A \le 16.1$
If you plug these values into the equation above it will give you the set of $t$ values.
5. ## Re: Time of death problem using differential equation
Ok that makes sense. I confused myself there for a minute. Thank you again for your help!
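A numeric check of the resulting interval (my own sketch, not from the thread; the variable names follow TheEmptySet's post, with $t$ measured in minutes from death back to midnight):

```
import math

k_log = math.log(132 / 137)                    # equals -30k
for A in (15.5, 16.1):                         # A = T(0) - 21.1 for T(0) = 36.6, 37.2
    t = 30 * math.log(13.7 / A) / k_log
    print(f"A = {A}: death ~{t:.0f} minutes before midnight")
# Gives roughly 100 to 130 minutes, i.e. between about 9:50 pm and 10:20 pm.
```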
|
|
# 2: What Do Data Look Like? (Graphs)
|
|
# Lattice Points on Circles (CJM)
We classify the sets of four lattice points that all lie on a short arc of a circle that has its center at the origin; specifically on arcs of length $tR^{1/3}$ on a circle of radius $R$, for any given $t>0$. In particular we prove that any arc of length $(40 + \frac{40}3\sqrt{10} )^{1/3}R^{1/3}$ on a circle of radius $R$, with $R>\sqrt{65}$, contains at most three lattice points, whereas we give an explicit infinite family of $4$-tuples of lattice points, $(\nu_{1,n},\nu_{2,n},\nu_{3,n},\nu_{4,n})$, each of which lies on an arc of length $(40 + \frac{40}3\sqrt{10})^{\smash{1/3}}R_n^{\smash{1/3}}+o(1)$ on a circle of radius $R_n$.
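To illustrate the phenomenon concretely, here is a small search sketch (entirely my own, not from the paper): enumerate the lattice points on $x^2+y^2=R^2$ and measure the shortest arc containing four of them. The radius chosen below is my own example.

```
import math

def lattice_points(N):
    """All integer points on the circle x^2 + y^2 = N."""
    pts = []
    r = int(math.isqrt(N))
    for x in range(-r, r + 1):
        y2 = N - x * x
        y = int(math.isqrt(y2))
        if y * y == y2:
            pts.extend({(x, y), (x, -y)})
    return pts

N = 65 * 65                   # R = 65 > sqrt(65), a circle rich in lattice points
R = math.sqrt(N)
pts = sorted(math.atan2(y, x) for x, y in lattice_points(N))
# Shortest arc spanning 4 consecutive points (wrapping around the circle):
gaps = [(pts[(i + 3) % len(pts)] - pts[i]) % (2 * math.pi) for i in range(len(pts))]
bound = (40 + 40 / 3 * math.sqrt(10)) ** (1 / 3) * R ** (1 / 3)
print(f"shortest 4-point arc ~ {R * min(gaps):.2f}, theorem's bound ~ {bound:.2f}")
```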
|
|
## Zenmo one year ago Write the partial fraction decomposition of the rational expression.
1. Zenmo
$\frac{ x ^{3}-4 }{ x ^{3}+2x }$
2. Loser66
ok, where are you stuck?
3. Loser66
knock knock!! are you sleeping??
4. Loser66
hehehe.. I go to bed also.
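The thread never reached an answer; for completeness, here is a worked decomposition (my own addition, not from the thread). Since the numerator and denominator have equal degree, divide first:

$$\frac{x^3-4}{x^3+2x} = 1 - \frac{2x+4}{x(x^2+2)}$$

Writing $\frac{2x+4}{x(x^2+2)} = \frac{A}{x} + \frac{Bx+C}{x^2+2}$ gives $2x+4 = A(x^2+2) + (Bx+C)x$; comparing coefficients yields $A = 2$, $B = -2$, $C = 2$, so

$$\frac{x^3-4}{x^3+2x} = 1 - \frac{2}{x} + \frac{2x-2}{x^2+2}$$

(Check at $x=1$: both sides equal $-1$.)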
|
|
All Questions

Commutative Encryption with RSA scheme?
I wanted to know how I could manage to do what I'm going to tell you next, with the RSA encryption/decryption scheme. So Alice and Bob each have a public key $(n, e)$ and a private key $(p, q, d)$; ...

Bilinear pairing
I am working on Efficient Construction of Pairings, which are being realized by Miller's algorithm. In this algorithm the basic steps are point doubling and line function computation, point addition ...

Blowfish vs. Twofish regarding power consumption
If I wanted to use Blowfish or Twofish to provide security on a device where power consumption is crucial: regarding power consumption, which one would win? Generally, which algorithms are known to ...

Berlekamp-Massey algorithm, correct stepping
I'm trying to use the Berlekamp-Massey algorithm on the following bit sequence: 0 1 0 0 1 0 0 1 0 1. I have the correct answer and most of the approach to get there, but I'm unable to fill in what I ...

Constructing of 16x16 Involutory Binary Matrices of Branch Number 7
In the PDF "Algebraic Construction of 16×16 Binary Matrices of Branch Number 7 with One Fixed Point", it was given that: ...

What constitutes a "description of B" for probabilistic encryption as defined in Cryptology 6.3.4?
On page 21 of Rivest's Cryptology chapter, he defines a trapdoor predicate as a boolean function for which it is easy to choose an x such that ...

Performance analysis of roaming authentication protocol using pbc library
Recently, I have surveyed a few research papers related to roaming authentication protocols for wireless networks. For example: "Efficient Privacy-Preserving Authentication in Wireless Mobile Networks" ...

The advantages of Merkle Signature and One time Signature
The Merkle signature scheme also requires a one-time signature to be used once per message. The signature in the Merkle scheme is even longer compared to the Lamport one-time signature. The verifier also has more ...

Decrypt a public encrypted message and Sign a signature, how is the math different?
As I understand it, when you want to send a confidential message to someone, you encrypt the message with his public key, and he uses his private key to decrypt the message. At the same time, one can use ...

Is container format relevant to security of encrypted message inside?
Still trying to design a fully binary cryptography container format for my mobile app, I am here asking if the container is ever relevant. Thanks to Apple, I cannot use GPG directly because I can neither ...

Mapping integers to Ed25519 and back again?
I would like to map integer values to points on Ed25519, and then back again. Is there a technique that takes advantage of the specific structure of Ed25519?

Shor's Algorithm values
I'm working with Shor's algorithm and I have a question regarding the following step: $$a^r -1 = (a^{r/2}+1)(a^{r/2}-1)=0 \pmod n$$ Now what is going to be the result if ${r/2}$ was -1? this will ...

Smart card Strong authentication / Verification (fingerprints)
I'm trying to make strong-authentication software and embedded software for a Java card. I have found many papers and publications about the subject… too much information to process, and I'm working ...

Does not using padding mean a lack of security?
I've read several texts which say that if the entire plaintext is a multiple of the block size, padding is not required (and not using padding would not mean a loss of security). I generally disagree ...

Question is a follow-up to this one. The question was about accelerating SHA1. I am writing an application where I do have a choice of hash algorithm, as long as it's a strong one. I want to be able ...

CBC MAC and DES combined question?
Suppose that we want to develop a MAC scheme which is as secure as Triple-DES CBC-MAC and at the same time as efficient as Single-DES CBC-MAC. We come up with the following idea: Except the last ...

Cryptography, with a semi-privileged user in the middle, to prevent request-tampering with another server
I'm working on a chat server for a mobile app I am writing. I would like to use a different application server for non-chat related operations and another application for chat operations. I would ...

Why does computing g^a * g^{-a} with the PBC library result in zero?
My example code is as follows: /* Example 1: 1) Calculate g^a 2) calculate g^{-a} 3) multiply g^a * g^{-a} */ Note: here ...

Cryptography Implementation in software
I am trying to implement a password manager in C and I had a question about the proper steps in implementing the crypto. I looked at some implementations, Google talks on crypto, and what the standards ...

Is anyone aware of Du-Atallah multiplicative secret sharing scheme for dot products for > 2 party scenario?
I am working on Du-Atallah's multiplicative secret sharing scheme for a more-than-2-party scenario. Is anyone aware of its multiparty version (more than 2 parties)? The paper for 2 parties can be found ...
|
|
# Math Help - incongruent integers
1. ## incongruent integers
find all incongruent integers having order 4 modulo 37
2. Originally Posted by mndi1105
find all incongruent integers having order 4 modulo 37
Notice that $2$ is a primitive root modulo $37$. Therefore, $2^{(37-1)/4} = 2^9$ has order $4$. But that means $2^9 \equiv 31$ and $(2^9)^3 = 2^{27} \equiv 6 \pmod{37}$ both have order 4; since the group of units mod 37 is cyclic of order 36, these are the only such elements (a cyclic group contains exactly $\varphi(4) = 2$ elements of order 4).
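A brute-force confirmation (my sketch, not from the thread) that 6 and 31 are exactly the residues of order 4 modulo 37:

```
def order(a, p):
    """Multiplicative order of a modulo p (a must be coprime to p)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

print([a for a in range(1, 37) if order(a, 37) == 4])   # -> [6, 31]
```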
|
|
# Solve the following equations and also check your result in each case: 5((7x+5)/3) - 23/3 = 13 - (4x - 2)/3
Solve the following equations and also check your result in each case:
$5(\frac{7x+5}3)-\frac{23}3$ = $13-\frac{4x-2}3$
$5\left(\frac{7x+5}3\right)-\frac{23}3$ = $13-\frac{4x-2}3$
On transposing $\frac{4x-2}3$ to the LHS and $-\frac{23}3$ to the RHS, we get
$\frac{5(7x+5)}{3}+\frac{4x-2}{3} = 13+\frac{23}{3}$
Multiplying both sides by 3 gives $35x+25+4x-2 = 39+23$, i.e. $39x+23 = 62$, so $39x = 39$ and $x = 1$.
On substituting x = 1, we get LHS $= \frac{60}{3}-\frac{23}{3} = \frac{37}{3}$ and RHS $= 13-\frac{2}{3} = \frac{37}{3}$.
We get LHS = RHS, so the result x = 1 is verified.
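As a one-line numeric cross-check (my own sketch, not part of the original answer):

```
x = 1
lhs = 5 * (7 * x + 5) / 3 - 23 / 3
rhs = 13 - (4 * x - 2) / 3
print(lhs, rhs)   # both print 12.333..., i.e. 37/3
```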
|
|
A self-aligning ball bearing has a basic dynamic load rating ($C_{10}$, for $10^6$ revolutions) of 35 kN. If the equivalent radial load on the bearing is 45 kN, the expected life (in $10^6$ revolutions) is
1. below 0.5
2. 0.5 to 0.8
3. 0.8 to 1.0
4. above 1.0
Correct Answer - Option 1 : below 0.5
Concept:
The approximate rating of service life of a ball or roller bearing is based on the fundamental equation.
$$L = \left( \frac{C}{W} \right)^{k} \times 10^6 \ \text{revolutions}$$
where L is the rating life, C is the basic dynamic load rating, and W is the equivalent dynamic load.
k = 3 for ball bearing
k = 10/3 for roller bearing
The relationship between the life in revolutions (L) and the life in working hours (LH) is given by:
L = 60 N.LH revolutions
where N is the speed in rpm
Calculation:
Given: C = 35 kN, W = 45 kN
$L = \left( \frac{C}{W} \right)^{k} \times 10^6 \ \text{rev} = \left( \frac{35}{45} \right)^3 \times 10^6 = 0.47 \times 10^6 \ \text{rev}$
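The same rating-life computation in code (my own sketch, not part of the original answer):

```
C, W, k = 35.0, 45.0, 3           # kN, kN, exponent for ball bearings
L = (C / W) ** k                  # in units of 10^6 revolutions
print(f"L = {L:.2f} x 10^6 rev")  # ~0.47, hence option 1 (below 0.5)
```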
|
|
Copy
Copy one dataset into another. Increase the size of the destination dataset as necessary. To determine the number of elements copied with the copy command, use the copy.n object property. The copy command should not be used to move data within the same dataset. Instead, create a duplicate of the worksheet and use that as the source dataset.
Syntax:
copy [option] dataset1 dataset2 [dataset3]
Options:
no option: Copy the content of dataset1 into dataset2.
Syntax: copy dataset1 dataset2

-a: Append contents of dataset1 to the end of dataset2.
Syntax: copy -a dataset1 dataset2

-b: Copy from rowIndex# of dataset1 to dataset2.
Syntax: copy -b rowIndex dataset1 dataset2 -b start# -e end#
Copy from rowIndex# of dataset1 to dataset2, beginning at the dataset2 start# and ending at end#. The destination range must be within the existing range of dataset2, and the source range must not go outside the existing range of dataset1.

-f: Copy dataset1 to dataset2 but remove the first point and swap the two halves of dataset2.
Syntax: copy -f dataset1 dataset2
This option reverses the effects of the -m option if applied to each of the new datasets.

-m: Copy the odd entries of dataset1 to dataset2 and the even entries to dataset3.
Syntax: copy -m dataset1 dataset2 dataset3
To use this command, there should be a multiple of 4 points in dataset1, the source dataset. If n is the number of rows in dataset1: take the cell value in row index n/2+1 of dataset1 and copy it to row index n/2+1 of dataset2; take the cell value in row index n/2+2 of dataset1 and copy it to row index n/2+1 of dataset3. Dataset2 and dataset3 now have n/2+1 points. Excluding the last row in dataset2 and dataset3 (row index n/2+1), move the first half of each dataset to the end (n/2) of the dataset.

-s: Set dataset2 to a size of npts.
Syntax: copy -s npts dataset1 dataset2
Copy dataset1 into dataset2 such that the entire range of dataset1 is interpolated to fit into dataset2 with npts.
Note: This command works only when the destination dataset does not have an associated X column.

-u: Split (unzip) dataset1 into dataset2 and dataset3.
Syntax: copy -u dataset1 dataset2 dataset3
Split (unzip) dataset1 into dataset2 and dataset3 so that the first point in dataset1 becomes the first point in dataset2, the second point in dataset1 becomes the first point in dataset3, etc. Reverse of the -z option.

-w: Copy FunctionShortName in GraphLayerRangeString to YcolumnDatasetName.
Syntax: copy -w GraphLayerRangeString FunctionShortName YcolumnDatasetName
Copy loose dataset FunctionShortName in GraphLayerRangeString to Y column YcolumnDatasetName, copying the corresponding X column as well. GraphLayerRangeString can be the graph page short name (e.g. "Graph1") or it can be a range string (e.g. "[Graph1]1!"). For more on range syntax, see Range Notation. The -w switch was added to make a dataset copy of a 2D function and paste it to the named Y column in a workbook created from the func2d.otw template. See window -t WF.

-x: Copy dataset1 to dataset2, copying the corresponding X dataset if it exists.
Syntax: copy -x dataset1 dataset2
Copy dataset1 to dataset2 where dataset2 is in a different worksheet. Copy the corresponding X dataset if it exists.

-z: Combine (zip) dataset1 and dataset2 and copy into dataset3.
Syntax: copy -z dataset1 dataset2 dataset3
Combine (zip) dataset1 and dataset2 and copy into dataset3 such that the first point in dataset3 is the first point in dataset1, the second point in dataset3 is the first point in dataset2, etc. Reverse of the -u option.
Examples:
Example 1:
The following script copies Book1_B to Book2_D and copies the X column of Book1_B to the X column of Book2_D.
copy -x Book1_B Book2_D;
Example 2:
The next two scripts copy 20 points from Book1_B (74 - 55 + 1), beginning at the 13th row, to Book1_F beginning at the 55th row.
// Use -begin -end notation for range
copy -b 13 Book1_B Book1_F -b 55 -e 74;
// Use -begin -total notation for range
copy -b 13 Book1_B Book1_F -b 55 -t 20;
Example 3:
Given an even number of values in column A, the next script creates odd and even 'folded' datasets (book1_b and book1_c), then 'unfolds' them (book1_d and book1_e) and combines these to re-create the original dataset (book1_f = book1_a). The number of rows in book1_a should be a multiple of four.
copy -m book1_a book1_b book1_c; // Splits and folds odd and even rows
copy -f book1_b book1_d; // Unfolds B
copy -f book1_c book1_e; // Unfolds C
// Interleaves D and E to reproduce A in F
copy -z book1_d book1_e book1_f;
Example 4:
Copy a dataset with 7 values into a new dataset with 19 values. The interpolation size is (7-1)/(19-1) or .333 which means that 0 to 3 and 3 to 6 (differences of 3 each) will be interpolated in steps of 1 ( 3 * .333 ~ 1) and 8 to 4 and 4 to 0 (differences of 4 each) will be interpolated in steps of 1.333 ( 4 * .333 ~ 1.333)
col(A) = {0,3,6,7,8,4,0};
copy -s 41 col(A) temp;
copy temp col(B);
Example 5:
Append one dataset to another. Copy the appended datasets to a new column and sort, then unzip the data such that odd rows are in one new column and even rows are in a second new column.
col(A) = {8,2,4,0,6};
col(B) = {7,3,9,1,5};
copy -a col(B) col(A);
col(C) = sort(col(A));
copy -u col(C) col(D) col(E);
|
|
1. ## cubic residue
Let p be an odd prime, and suppose p does not divide u. Prove that if p is congruent to 2 mod 3, then every unit u is a cubic residue mod p. I don't know where to start with this.
2. ## Re: cubic residue
Well, you might start with the definition of "cubic residue". What is that?
3. ## Re: cubic residue
u is a cubic residue mod p if there exists some b such that u is congruent to b^3 mod p
4. ## Re: cubic residue
Hi,
If you know a few facts, this is easy. Otherwise, it might be kinda hard.
The multiplicative group of $\mathbb Z_p$ is cyclic with a generator, say x (a primitive root mod p). Then the order of $x^3$ equals the number of elements in the set $\{x^{3k}\,:\, k\in\mathbb Z\}$, namely ${p-1\over \gcd(p-1,3)}$. Now if 3 divides p - 1, then p is congruent to 1 mod 3, but p is congruent to 2 mod 3. So gcd(p-1,3)=1. That is, the order of $x^3$ is p - 1 and so every non-zero element of $\mathbb Z_p$ is a cube; i.e. a cubic residue.
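An exhaustive check of the claim for small primes (my sketch, not from the thread): when $p \equiv 2 \pmod 3$, cubing permutes the units mod $p$, so every unit is a cube.

```
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (q for q in range(3, 200) if is_prime(q) and q % 3 == 2):
    cubes = {pow(b, 3, p) for b in range(1, p)}
    assert cubes == set(range(1, p)), p
print("every unit is a cube mod p for all such p checked")
```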
|
|
# Welcome to the UK List of TeX Frequently Asked Questions on the Web
## My section title is too wide for the page header
By default, LaTeX sectioning commands make the chapter or section title available for use by page headers and the like. Page headers operate in a rather constrained area, and it’s common for titles to be too big to fit: the LaTeX sectioning commands therefore take an optional argument:
```\section[short title]{full title}
```
If the <short title> is present, it is used both for the table of contents and for the page heading. The usual answer to people who complain that their title is too big for the running head is to suggest that they use the optional argument.
However, using the same text for the table of contents as for the running head may also be unsatisfactory: if your chapter titles are seriously long (like those of a Victorian novel), a valid and rational scheme is to have a shortened table of contents entry, and a really terse entry in the running head.
One of the problems is the tendency of page headings to be set in capitals (which take up more space); so why not set headings as written for “ordinary” reading? It’s not possible to do so with unmodified LaTeX, but the fancyhdr package provides a command `\nouppercase` for use in its header (and footer) lines to suppress LaTeX’s uppercasing tendencies. Classes in the KOMA-script bundle don’t uppercase in the first place.
In fact, the sectioning commands use ‘mark’ commands to pass information to the page headers. For example, `\chapter` uses `\chaptermark`, `\section` uses `\sectionmark`, and so on. With this knowledge, one can achieve a three-layer structure for chapters:
```\chapter[middling version]{verbose version}
\chaptermark{terse version}
```
which should supply the needs of every taste.
Chapters, however, have it easy: hardly any book design puts a page header on a chapter start page. In the case of sections, one has typically to take account of the nature of the `\*mark` commands: the thing that goes in the heading is the first mark on the page (or, failing any mark, the last mark on any previous page). As a result the recipe for sections is more tiresome:
```\section[middling version]{verbose version%
\sectionmark{terse version}}
\sectionmark{terse version}
```
(the first `\sectionmark` deals with the header of the page the `\section` command falls on, and the second deals with subsequent pages; note that here, you need the optional argument to `\section`, even if “middling version” is in fact the same text as “verbose version”.)
A similar arrangement is necessary even for chapters if the class you’re using is odd enough that it puts a page header on a chapter’s opening page.
Note that the titlesec package manages the running heads in a completely different fashion; for example, you can use the optional argument of sectioning commands for page headers, only, by loading the package as:
```\usepackage[toctitles]{titlesec}
```
The package documentation offers other useful techniques in this area.
The memoir class avoids all the silliness by providing an extra optional argument for chapter and sectioning commands, for example:
```\section[middling version][terse version]{verbose version}
```
As a result, it is always possible for users of memoir to tailor the header text to fit, with very little trouble.
fancyhdr.sty
macros/latex/contrib/fancyhdr (or browse the directory); catalogue entry
KOMA script bundle
macros/latex/contrib/koma-script (or browse the directory); catalogue entry
memoir.cls
macros/latex/contrib/memoir (or browse the directory); catalogue entry
titlesec.sty
macros/latex/contrib/titlesec (or browse the directory); catalogue entry
|
|
# Lyft Perception Challenge
Published:
Achieve pixel-wise identification of objects in camera images. It was hosted by Udacity and Lyft’s Level 5 Engineering Center. Details: https://www.udacity.com/lyft-challenge
GitHub repo: https://github.com/gwwang16/Lyft-Perception-Challenge
## Overview
The goal in this challenge is pixel-wise identification of objects in camera images. In other words, the task is to identify exactly what is in each pixel of an image! More specifically, you’ll be identifying cars and the drivable area of the road. The images below are a simulated camera image on the left and a label image on the right, where each different type of object in the image corresponds to a different color.
The challenge time is from May 1st 10:00 am PST to June 3rd at 6:00 pm PST.
### Result
I finally got the 19th rank.
The top 25 (only those with U.S. work authorization) were eligible for an interview with Lyft, so I participated in this challenge just for the fun of it.
### The Data
The challenge data is being produced by the CARLA Simulator, an open source autonomous vehicle platform for the testing and development of autonomous algorithms. You can download the data here.
The dataset consists of images and the corresponding ground truth pixel-wise labels for each image.
The images and ground truth are both 3 channel RGB images and the labels for the ground truth are stored as integer values in the red channel of each ground truth image. The integer values for each pixel in the ground truth images correspond to which category of object appears in that pixel, according to this table:
Value  Tag           Color
0      None          [0, 0, 0]
1      Buildings     [70, 70, 70]
2      Fences        [190, 153, 153]
3      Other         [72, 0, 90]
4      Pedestrians   [220, 20, 60]
5      Poles         [153, 153, 153]
8      Sidewalks     [244, 35, 232]
9      Vegetation    [107, 142, 35]
10     Vehicles      [0, 0, 255]
11     Walls         [102, 102, 156]
12     TrafficSigns  [220, 220, 0]
Given that the values are small (0 through 12), the 3-channel label images appear black at first glance. But if you plot up just the red channel (label_image[:,:,0]) you’ll see the labels, like this:
(Figure: camera image, 3-channel label image, and label image red channel.)
The task is to write an algorithm to take an image like the one on the left and generate a labeled image like the one on the right. Except you will be generating a binary labeled image for vehicles and a binary labeled image for the drivable surface of the road. You can ignore other things like trees, pedestrians, etc.
Your solution will be run against a hidden test dataset. This test set consists of images that are different from the training dataset, but taken under the same environmental conditions. Your algorithm will be evaluated on both speed and accuracy.
## Weighted $F_\beta$ Score
In some cases, you might be more concerned about false positives, like for example, when identifying where the drivable road surface is, you don’t want to accidentally label the sidewalk as drivable. In that case the precision of your measurement is more important than recall.
On the other hand, you might be more concerned with false negatives, for example, when identifying vehicles you want to make sure you know where the whole vehicle is, and overestimating is not necessarily a bad thing. In that case, recall is more important than precision.
In most cases, however, you would like to strike some balance between precision and recall, and you can do so by introducing a factor $\beta$ into your F score like this:

$$F_\beta = (1 + \beta^2)\,\frac{\mathrm{precision}\cdot\mathrm{recall}}{\beta^2\,\mathrm{precision} + \mathrm{recall}}$$

By setting $\beta<1$, you weight precision more heavily than recall. And setting $\beta>1$, you weight recall more heavily than precision.

For this challenge you’ll be scored with $\beta=2$ for vehicles and $\beta=0.5$ for road surface.

Your final F score will be the average of your $F_{0.5}$ score for road surface and your $F_{2}$ score for vehicles.
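A pixel-wise weighted F-score along these lines (my own sketch; the challenge's official scorer may differ in details):

```
import numpy as np

def f_beta(pred, truth, beta):
    """pred, truth: boolean masks of the same shape."""
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / max(b2 * precision + recall, 1e-12)

# Final score: average of F_0.5 on road and F_2 on vehicles, e.g.
# score = 0.5 * (f_beta(road_pred, road_truth, 0.5) + f_beta(car_pred, car_truth, 2.0))
```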
## Incorporating Frames Per Second (FPS)
The speed at which your algorithm runs is also important and will be factored into your final score. You’ll receive a penalty for running at less than 10 FPS.
Your final score on the leaderboard is then calculated from both your F score and your FPS.
### Model
fcn-mobilenet
##### Reference:
CARLA Simulator: https://github.com/carla-simulator/carla
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications :
https://arxiv.org/abs/1704.04861
https://github.com/keras-team/keras/blob/master/keras/applications/mobilenet.py
Keras implementation of Deeplab v3+ with pretrained weights: https://github.com/bonlime/keras-deeplab-v3-plus/blob/73aa7c38c4c8498ca0ddb831f1c7d744ca57daee/model.py
keras-mobile-colorizer: https://github.com/titu1994/keras-mobile-colorizer
|
|
# Left and right aligned on same line
I am currently busy typesetting my thesis in LaTeX, but I am stuck on something with my cover/title page. At the bottom of the page, I need to type my name and my supervisor's name.
This is a rough idea (written in Word) of the layout in which I wish to typeset my name and my supervisor's name.
However, I have no idea how I can get the left and right alignments for my name on the left and my supervisor's name on the right.
Can anybody please show me how I can go about typesetting something like this in LaTeX?
There are a couple things you could try. A simple technique is to use \hfill, such as
\documentclass{article}
\begin{document}
\textbf{\underline{Student:}}
\hfill
\textbf{\underline{Supervisor:}} \\
Mr. Thatshisname
\hfill
Prof. Whatshisname
\end{document}
This will produce the following:
This does not quite reproduce the spacing for the "Supervisor" word as shown in your sample. Another approach will reproduce this better, although it's a little more involved. This uses \hfill along with the minipage environment:
\documentclass{article}
\begin{document}
\begin{minipage}{2in}
\textbf{\underline{Student:}} \\
Mr. Thatshisname
\end{minipage}
\hfill
\begin{minipage}{1.3in}
\textbf{\underline{Supervisor:}} \\
Prof. Whatshisname
\end{minipage}
\end{document}
The size of the first minipage does not matter too much, since \hfill will fill the space, but the size of the second minipage needs to be tweaked to get the right edge as close to the margin as possible without forcing a line break. I used the showframe package to make that process easier, but perhaps someone else has a more elegant solution. Here is the output:
In order to get this on the bottom of the page, you can try using the \vfill command. Another approach to achieving the result you're looking for is by using the fancyhdr package (read "fancy header"):
\documentclass{article}
\usepackage{fancyhdr}
\usepackage{lipsum}
\pagestyle{fancy}
\renewcommand{\headrulewidth}{0pt}% zero out the header rule, as mentioned below
\lfoot{\textbf{\underline{Student:}} \\
Mr. Thatshisname}
\rfoot{\textbf{\underline{Supervisor:}}
\phantom{MMMi}\\
Prof. Whatshisname}
\begin{document}
\lipsum[1-4]
\end{document}
This will place these words on the bottom of every page with the pagestyle fancy. If you want it only on the first page, then use \thispagestyle{fancy} after \begin{document} instead of \pagestyle{fancy} as shown in my example. I had to redefine the \headrulewidth to zero so that the line did not show up at the top of the page (can someone comment on this answer if there is a "nicer" way to do this?). This solution also has the same problem as the first, where "Supervisor" is pushed all the way to the margin. However, I used the \phantom command to add some white space, and fiddled with the input to make it look right. Here is that output:
Hope this helps!
\begin{titlepage}
... whatever should be here ...
for a titlepage
\vfill\noindent
\begin{tabular}[t]{@{}l}
\underline{\textbf{Student:}}\\ Mr. Thatshisname
\end{tabular}
\hfill% move it to the right
\begin{tabular}[t]{l@{}}
\underline{\textbf{Supervisor:}}\\ Prof. Whatshisname
\end{tabular}
\end{titlepage}
• Is it good practice to use tabulars in LaTeX for alignment? I'm asking since in HTML, for example, this is frowned upon. Well, they have their <div>s for this purpose, but I'm still wondering. Tables and aligned text are two different things IMO. – manthano Feb 23 '16 at 8:05
• tabulars (arrays) can be aligned at top [t], center (default) [c] or the baseline [b]. For the current line the tabular itself is nothing else than a box. – user2478 Feb 23 '16 at 8:14
• True, but does this not soften the barrier between "meaning" and "purpose"? The tabular environment is supposed to set tables; if I start aligning my text with it, my code becomes harder to read and I can never redefine the tabular environment if I want to alter the look of my tables. – manthano Feb 23 '16 at 8:30
• Feel free, to use it in your way. But I wrote more than 10,000 pages with TeX and I never had the feeling that my way of using tabulars for aligning text makes the code hard to read. – user2478 Feb 23 '16 at 8:36
This uses a tabularx table to use the whole \linewidth and fills the rest of the table with an (empty!) X column.
\documentclass{book}
\usepackage{tabularx}
\usepackage{showframe}
\newcommand{\namestyle}[1]{#1}
\begin{document}
\noindent\begin{tabularx}{\linewidth}{@{}p{4cm}Xp{4cm}@{}}
\namestyle{Mr. Gumby} & & \namestyle{Prof. Gumby} \tabularnewline
\end{tabularx}
\end{document}
I remove the underlining on purpose -- the showframe package can be omitted of course.
As answers using tables or minipages have already been given, here is something a little more quirky:
\documentclass{article}
\usepackage[papersize={12cm,10cm}]{geometry}
\usepackage{lipsum}
\usepackage{calc}
\def\signat#1#2{%
\makebox[\widthof{#2}][l]{\vbox{%
\textbf{\underline{#1:}}\\[5pt] #2}}}
\begin{document}
\lipsum[2]\vfill
{\parindent0pt
\signat{Student}{Mr. Thatshisname}\hfill
\signat{Supervisor}{Prof. Whatshisname}}
\end{document}
Well, less quirky:
\def\signat#1#2{\parbox{\widthof{#2}}{\textbf{\underline{#1:}}\par #2}}
Note: Take care that the "p" of "Supervisor" makes the underline lower than in "Student". A good reason to not underline, or at least to underline \smash{#1:}, or \strut #1, or to use the soul package.
Are you sure that you want the supervisor name flush right? Here it is much more common to have something like this:
\documentclass{article}
\usepackage{showframe} % just to show the page boundaries
\begin{document}
whatever...
\vfill
\noindent \parbox[t]{0.5\linewidth}{%
\underline{\textbf{Student}} \\ Mr. Studentname \\ \vspace{2cm} } %
\parbox[t]{0.45\linewidth}{%
\underline{\textbf{Supervisor}} \\ Mr. Supervisorname}
\end{document}
...with the place for signing under the names and flush left with the names.
|
|
# International Journal of Fracture (INT J FRACTURE )
Publisher: Springer Verlag
## Description
The International Journal of Fracture is an outlet for original analytical, numerical and experimental contributions which provide improved understanding of the mechanisms of micro and macro fracture in all materials and their engineering implications. The Journal is pleased to receive papers from engineers and scientists working in various aspects of fracture. Contributions emphasizing empirical correlations, unanalyzed experimental results or routine numerical computations, while representing important necessary aspects of certain fatigue, strength and fracture analyses, will normally be discouraged; occasional review papers in these as well as other areas are welcomed. Innovative and in-depth engineering applications of fracture theory are also encouraged. In addition, the Journal welcomes for rapid publication concise Letters in Fracture and Micromechanics which serve the Journal's objective. Letters include: brief presentation of a new idea, concept or method; new experimental observations or methods of significance; short notes of quality that do not amount to full-length papers; discussion of previously published work in the Journal; and Letters Errata.
## Impact factor 1.35
• 5-year impact: 1.31
• Cited half-life: 0.00
• Immediacy index: 0.21
• Eigenfactor: 0.01
• Article influence: 0.67
• Website: International Journal of Fracture website
• Other titles: International journal of fracture, Fracture
• ISSN: 0376-9429
• OCLC: 1771045
• Material type: Periodical, Internet resource
• Document type: Journal / Magazine / Newspaper, Internet Resource
## Publisher details
• Pre-print
• Author can archive a pre-print version
• Post-print
• Author can archive a post-print version
• Conditions
• Author's pre-print on pre-print servers such as arXiv.org
• Author's post-print on author's personal website immediately
• Author's post-print on any open access repository after 12 months after publication
• Publisher's version/PDF cannot be used
• Published source must be acknowledged
• Must link to publisher version
• Set phrase to accompany link to published version (see policy)
• Articles in some journals can be made Open Access on payment of additional charge
• Classification
green
## Publications in this journal
• ##### Article: Crack propagation with adaptive grid refinement in 2D Peridynamics
ABSTRACT: The original Peridynamics formulation adopts a uniform grid with constant horizon over the whole discretized domain; as a consequence, computational resources may not be used efficiently. The present work proposes adaptive refinement algorithms for 2D peridynamic grids, an essential component in generating a concurrent multiscale model within a unified approach. Adaptive grid refinement is here applied to the study of dynamic crack propagation in two-dimensional brittle materials. Refinement is activated by a new trigger concept based on the damage state of the material, coupled with the more traditional energy-based trigger already proposed in the literature. We also present a method to generate the nodes in the refined zone which is suitable for an efficient numerical implementation. Moreover, strategies for the mitigation of spurious reflections and distortions of elastic waves due to the use of a non-uniform grid are presented. Finally, several examples of crack propagation in planar problems are presented; they illustrate the potential of the proposed algorithms and the good agreement of the numerical results with experimental data.
International Journal of Fracture 01/2015; Accepted to be appear.
• ##### Article: Numerical simulation of mode-III fracture incorporating interfacial mechanics
ABSTRACT: Continuum surface methods, including the Sendova–Walton theory, offer a novel approach to fracture modeling in which boundary mechanics are used to augment the classical linear elastic fracture mechanics model for improved prediction of material behavior near fracture surfaces. These methods would be extremely useful in design simulations, but would require numerical implementation which to date has not been available. This has not been previously addressed due to the higher-order tangential derivatives appearing in the fracture surface boundary conditions which make standard implementation techniques, such as the finite element method, a challenge to implement. We propose a method for this implementation which involves reformulating the fracture boundary conditions to remove these higher-order derivatives in the case of mode-III fracture. We also present the initial results of our finite element implementation, which verify the improved stress and displacement field predictions near fracture surfaces.
International Journal of Fracture 12/2014;
• ##### Article: Quasi-static and dynamic fracture behaviour of rock materials: phenomena and mechanisms
ABSTRACT: An experimental investigation is conducted to study the quasi-static and dynamic fracture behaviour of sedimentary, igneous and metamorphic rocks. The notched semi-circular bending method has been employed to determine fracture parameters over a wide range of loading rates using both a servo-hydraulic machine and a split Hopkinson pressure bar. The time to fracture, crack speed and velocity of the flying fragment are measured by strain gauges, crack propagation gauge and high-speed photography on the macroscopic level. Dynamic crack initiation toughness is determined from the dynamic stress intensity factor at the time to fracture, and dynamic crack growth toughness is derived by the dynamic fracture energy at a specific crack speed. Systematic fractographic studies on fracture surface are carried out to examine the micromechanisms of fracture. This study reveals clearly that: (1) the crack initiation and growth toughness increase with increasing loading rate and crack speed; (2) the kinetic energy of the flying fragments increases with increasing striking speed; (3) the dynamic fracture energy increases rapidly with the increase of crack speed, and a semi-empirical rate-dependent model is proposed; and (4) the characteristics of fracture surface imply that the failure mechanisms depend on loading rate and rock microstructure.
International Journal of Fracture 07/2014; 189(1):1-32.
• ##### Article: Investigation of fatigue crack growth characteristics of NR/BR blend based tyre tread compounds
ABSTRACT: Tyre tread directly comes in contact with various road surfaces and is prone to damage due to cuts from sharp objects during service. As tyres undergo millions of fatigue cycles, these cuts propagate continuously and lead to catastrophic failure. Therefore fatigue crack growth (FCG) characteristics should be an essential criterion for tread compound selection. The present study investigates FCG behavior of blends comprising of Natural Rubber (NR) and Polybutadiene Rubber (BR) over a wide range of tearing energy. Pure shear specimens with a notch on both edges were tested in a Tear Analyser. Rapid increase in FCG rate after a certain strain level was observed. This transition point appeared in a strain range of 20–35 %, depending on the blend composition. The higher BR containing compounds exhibited better FCG characteristics below the transition point but reversal of ranking was seen above this point. The influence of temperature, R ratio, waveforms and cure system on FCG characteristics was also investigated in NR and 60–40 NR/BR blend compounds. Higher FCG rate was achieved under pulse loading compared to the sine waveform. The relaxation time between pulse cycles played an important role. With an increase in relaxation time, FCG rate decreases significantly. The higher sensitivity towards R ratio was observed in NR compound. The 60–40 NR/BR blend showed higher FCG rate with increase in temperature compared to the NR compound. The NR compound with high Sulfur/Accelerator (S/Ac) ratio showed better FCG characteristics whereas for 60–40 NR/BR blend with low S/Ac ratio achieved superior FCG characteristics.
International Journal of Fracture 07/2014; 188(1).
• ##### Article: Estimation of cracking and damage mechanisms of rock specimens with precut holes by moment tensor analysis of acoustic emission
ABSTRACT: This work presents an experiment on the acoustic emission (AE) of coarse grain granites with two square-shaped precut holes under uniaxial loading. Studies were carried out on the temporal–spatial evolution behavior of micro-cracks by AE mechanisms with the use of the simplex location method and the moment tensor theory, with further analysis in comparing the numerical simulations using the software RFPA$^{\mathrm{2D}}$ (Rock Failure Process Analysis). The results show that during the loading process, from beginning to rock failure, shear-mode micro-cracks are prominent, constituting more than 60 % of the total events; next most common are tensile-mode micro-cracks at less than 35 % of the total events. Variations of micro-cracks of the three modes during the loading process have the same increase tendency, i.e. fewer were generated in the initial loading stage, with a rapid increase when the stress values are between 40 and 60 % of the peak stress, and a rate diminution before rock failure. It is observed that the tensile stress concentration is prone to appear at the tops of the two holes in the form of tensile type cracks, while the shear stress concentration usually appears at the bottom in the middle region of the specimen in the form of shear type cracks. The findings of the present work may serve as guidance for the prevention of roof and floor collapse in the stope exploration of mines.
International Journal of Fracture 07/2014; 188(1).
• ##### Article: On the fracture of small samples under higher order strain gradient plasticity
ABSTRACT: In this work we perform Finite Element simulations within the framework of large deformation elasto-viscoplasticity on a material that is sensitive to the gradients of plastic strain and incorporates a single intrinsic length scale parameter. Both small scale yielding simulations and those on a finite sized sample show that large stress enhancements can occur at the tip of a notch due to gradient effects. The amount of plastic strain and opening stress that can be expected at the notch tip depends on an interplay between the notch radius, specimen dimensions and boundary conditions. It is shown that cleavage can be the favored criterion for failure in even a ductile material when the notch radius is small compared to the intrinsic length scale. Moreover, for large intrinsic length scales, failure may not always initiate at a notch but may be triggered away from it due to the presence of a boundary impermeable to dislocations.
International Journal of Fracture 06/2014; 187(2).
• ##### Article: Nanostructural scaling effect in fracturing homogeneous solids
ABSTRACT: The scaling effect (power law dependence of number of newly-formed damages on damage size) during fracturing is inherent in heterogeneous materials, such as composites, concrete, rocks, etc., in which multi-site damaging takes place. Fracturing brittle homogeneous materials does not exhibit this phenomenon due to the lack of pre-failure damage accumulation at the microscopic scale level. This work aims to determine the role of structural heterogeneity in the process of primary defect nucleation occurring in conventional homogeneous materials. We present highly resolved time series of fractoluminescence (FL) emitted during multiple chemical bond breakage in shock-damaged silica glass and single crystals $\alpha$-SiO$_2$ and $\alpha$-SiC. The statistical analysis of the time series has shown that the energy distributions of FL pulses followed the power law indicative of long-range interactions between primary damage events. This scaling phenomenon is caused by the multiplicity of newly formed damaged sites at the level of structural heterogeneity. At the same time, the microscopic and larger damage formation reflected in the acoustic emission time series did not exhibit the presence of long-range interactions between growing brittle cracks.
International Journal of Fracture 06/2014; 187(2).
• ##### Article: Experiments and modeling of edge fracture for an AHSS sheet
ABSTRACT: With the emergence of advanced high strength steels (AHSSs) and other light-weight materials, edge fracture has been one of the important issues evading reliable prediction using CAE tools. To study edge fracture behavior of AHSS, a comprehensive hole expansion test (HET) program has been carried out on a DP780 sheet. Specimens with three different edge conditions (milled edge, water jet cut edge and punched edge) are manufactured and tested. Results reveal that the hole expansion ratio (HER) of the present DP780 sheet is around 38 % for milled and water jet cut specimens, and about 14 % for punched specimens. A novel method of a central hole specimen tension is also introduced for edge fracture study, showing a similar trend as found in HET. The paper briefly presents a procedure and the results for a full calibration of the DP780 sheet for plasticity and fracture, where a hybrid testing/simulation method is used to obtain parameters for the Hill 48 plasticity model and the modified Mohr–Coulomb fracture model. The finite element simulation gives an accurate prediction of HER, as well as the load displacement response and specimen deflection distribution in the hole expansion tests on uncracked material. The correlation between simulation and tests on the central hole specimen also turns out to be very good. The paper also presents a very interesting insight into the initiation and propagation of cracks from the hole edge during a hole expansion test by numerical simulation in comparison with testing observation. The number of final cracks is accurately predicted. Other new aspects of the present paper include an improved 3D DIC measurement technique and a simplified analytical solution, from which a rapid estimation of displacement and hoop strain field can be made (see “Appendix 2”).
International Journal of Fracture 06/2014; 187(2).
• ##### Article: The unacknowledged risk of Himalayan avalanches triggering
ABSTRACT: A “universal” model for avalanche triggering, as well as for the collapse of suspended seracs, is presented based on Quantized Fracture Mechanics, considering fracture, friction, adhesion and cohesion. It unifies and extends the classical previous approaches reported in the literature, including the role of the slope curvature. A new size-effect, on mountain height rather than the classical one on snow slab thickness, is also discussed and demonstrated through analysis of glacier data from the World Glacier Inventory (http://nsidc.org/data/glacier_inventory/browse.html, 2014). The most noteworthy related result is that the snow precipitation needed to trigger avalanches at 8,000 m could be up to 4 times smaller (with a realistic value of 1.7 times) than at 4,000 m. This super-strong size-effect suggests that the risk of Himalayan avalanches is still unacknowledged today. A discussion of the recent Manaslu tragedy concludes the paper.
International Journal of Fracture 06/2014; 187(2).
• ##### Article: Cracks at rounded V-notch tips: an analytical expression for the stress intensity factor
ABSTRACT: An analytical expression for the stress intensity factor related to a crack stemming from a blunted V-notch tip is put forward. The analysis is limited to mode I loading conditions and to crack lengths sufficiently small with respect to the notch depth. The proposed formula significantly improves the predictions of a recently introduced relationship by considering a notch-amplitude-dependent parameter. Its values are estimated through a finite element analysis: different notch amplitudes, ranging from $0^{\circ}$ to $180^{\circ}$, and different crack-length-to-root-radius ratios, ranging from 0 to 10, are taken into account. The evaluation of the apparent generalized fracture toughness according to equivalent linear elastic fracture mechanics concludes the paper.
International Journal of Fracture 06/2014; 187(2).
• ##### Article: A simplified method for the evaluation of fatigue crack front shapes under mode I loading
ABSTRACT: Two-dimensional elastic or elasto-plastic models dominate the current fatigue crack growth assessment and life prediction procedures for plate components with through-the-thickness cracks. However, as demonstrated in many theoretical and experimental papers, the stress field near the crack tip is always three-dimensional and, as a result, the fatigue crack front is not straight; it is normally curved towards the plate faces. Over the past few years there have been a number of very careful numerical studies focusing on the evaluation of fatigue crack front shapes. However, the application of direct numerical techniques to fatigue phenomena is a very tedious and time-consuming process and, sometimes, quite ambiguous. In the current paper we develop a simplified method for the evaluation of the front shapes of through-the-thickness fatigue cracks. Further, we validate the developed method against experimental results, investigate the influence of various parameters on the crack front shapes at stable (steady-state) propagation and analyse the differences in the results of fatigue crack growth evaluation obtained with two- and three-dimensional approaches.
International Journal of Fracture 06/2014; 188(2):203-211.
• ##### Article: Cohesive crack, size effect, crack band and work-of-fracture models compared to comprehensive concrete fracture tests
ABSTRACT: The simplest sufficiently realistic description of the fracture of concrete, as well as of some other quasibrittle materials, is a bilinear softening stress-separation law (or an analogous bilinear law for a crack band). This law is characterized by four independent material parameters: the tensile strength $f'_t$, the stress $\sigma_k$ at the change of slope, and two independent fracture energies—the initial one, $G_f$, and the total one, $G_F$. Recently it was shown that these four parameters cannot all be unambiguously identified from the standard size effect tests alone, nor from tests of the complete load-deflection curve of specimens of one size. A combination of both types of test is required, and is here shown to be sufficient to identify all four parameters. This is made possible by recent data from a comprehensive test program including tests of both types made with one and the same concrete. These data include Type 1 and Type 2 size effects over a rather broad size range (1:12.5), with notch depths varying from 0 to 30% of the cross-section depth. Thanks to the use of identically cured specimens cast from one batch of one concrete, these tests have minimum scatter. While the size effect and notch length effect were examined in a separate study, this paper deals with inverse finite element analysis of these comprehensive test data. Using the crack band approach, it is demonstrated: (1) that the bilinear cohesive crack model can provide an excellent fit of these comprehensive data through their entire range; (2) that the $G_f$ value obtained agrees with that obtained by fitting the size effect law to the data for any relative notch depth deeper than 15% of the cross section (as required by the RILEM 1990 Recommendation); (3) that the $G_F$ value agrees with that obtained by the work-of-fracture method (based on the RILEM 1985 Recommendation); and (4) that the data through their entire range cannot be fitted with linear or exponential softening laws.
International Journal of Fracture 05/2014; 187(1).
|
|
# Two solutions to the following problem are given on the attached pages.Find the minimum distance between the two skew lines
###### Question:
Two solutions to the following problem are given on the attached pages. Find the minimum distance between the two skew lines (i.e., lines that are neither parallel nor intersecting) given by the parametric equations: ... Identify which solution works and which does not. For the solution that works, explain why it works; in other words, add explanation and commentary to justify the work and explain what is going on. For the solution that does not work, explain what went wrong. Then fix up this incorrect solution so that it works; in other words, solve the problem using a corrected version of this approach. Do not solve the problem by some other unrelated method, but show and explain what the person who tried to solve the problem with the incorrect solution should have done to make that solution work.
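Since the parametric equations above are garbled in the original, here is a minimal sketch of the standard vector approach; the points and direction vectors below are illustrative placeholders, not the problem's actual data:

```python
import numpy as np

# Minimal sketch: the distance between skew lines P(t) = p0 + t*u and
# Q(s) = q0 + s*v is |(q0 - p0) . (u x v)| / |u x v|.
# These points and directions are placeholders (the original data is garbled).
p0, u = np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0])
q0, v = np.array([-1.0, 3.0, 7.0]), np.array([0.0, 5.0, 2.0])

n = np.cross(u, v)                              # common normal to both lines
d = abs(np.dot(q0 - p0, n)) / np.linalg.norm(n)
print(f"distance between the skew lines: {d:.4f}")
```

The key fact is that the minimum distance is the length of the projection of any vector connecting the two lines onto the common normal $u \times v$.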
#### Similar Solved Questions
##### Question 3. A crank shaft mechanism is shown in Figure 3. Link B is rotating with...
Question 3. A crank shaft mechanism is shown in Figure 3. Link B is rotating with a constant angular velocity $\omega_B$ in the fixed reference frame A as shown in the figure. For this mechanism: (a) Using the vector kinematic equations and methods taught in this module [20 marks], obtain the expres...
##### Using bromobenzene, cyclohexene and vinyl bromide as your source of carbons,...
Using bromobenzene, cyclohexene and vinyl bromide as your source of carbons, outline a...
##### Solve for x: $\log_3(x + 1) - \log_3 x = 4$
Solve for x: $\log_3(x + 1) - \log_3 x = 4$. Select the correct choice below and, if necessary, fill in the answer box to complete your choice. A. The solution is x = ___ (Simplify your answer. Type an integer or a fraction. Use a comma to separate answers as needed.) B. The equation has no solution. Click to select and enter your answer(s)....
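A quick worked check, assuming the reconstructed equation above is $\log_3(x+1) - \log_3 x = 4$: combining the logarithms gives
$$\log_3\frac{x+1}{x} = 4 \;\Rightarrow\; \frac{x+1}{x} = 3^4 = 81 \;\Rightarrow\; 1 + \frac{1}{x} = 81 \;\Rightarrow\; x = \frac{1}{80},$$
which is positive, so both logarithms are defined and choice A applies with $x = 1/80$.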
##### Write it as a journal entry?
Write it as a journal entry? ACCOUNTING Problem (52 Points): Royal Technology Company uses a job order cost system. The following data summarize the operations related to production for March: a. Materials purchased on account, $50,000. b. Materials re...
##### QUESTION 2 (40 MARKS) Figure Q2 shows a Binary to Gray code converter block diagram. Based on...
QUESTION 2 (40 MARKS) Figure Q2 shows a Binary to Gray code converter block diagram. Based on that figure, design: (a) A circuit using logic gates. Obtain the truth table and represent Y0, Y1, Y2 and Y3 in minimized SOP Boolean algebra terms. Draw the circuit using logic gates (CO2:PO3 - 20 Marks). (b) A cir...
##### The figure below shows a mass m=1.0 kg which is initially moving with speed v=7.2 m/s...
The figure below shows a mass m=1.0 kg which is initially moving with speed v=7.2 m/s at the top of a frictionless hill of height h=4.9 m. It slides down the hill until it encounters a flat section with unknown coefficient of kinetic friction µk. If the object travels a distance D=11.1 m before...
##### Rates of a reaction at 0 °C vary with initial concentration as shown below...
Rates of a reaction at 0 °C vary with initial concentration as shown below. ... Show your work clearly...
##### Imagine you created a toxin such that when a neuron fired an action potential, the toxin...
Imagine you created a toxin such that when a neuron fired an action potential, the toxin would bind immediately to the sodium-potassium pump at the top of the action potential, but the sodium and potassium channels would still function. What process(es) would be affected?...
##### Determine (a) the distance $d_c$ for which portion $DE$ of the cable is horizontal, and (b) the corresponding reactions at $A$ and $E$.
Determine (a) the distance $d_c$ for which portion $DE$ of the cable is horizontal, and (b) the corresponding reactions at $A$ and $E$....
##### If electricity costs 15 Bz per kilowatt-hour, how much does it cost to operate a 220 V microwave oven that draws 10 A of current for 2 hours per day for a month of thirty days?
If electricity costs 15 Bz per kilowatt-hour, how much does it cost to operate a 220 V microwave oven that draws 10 A of current for 2 hours per day for a month of thirty days? The total electrical energy consumed for the month: ___ kWh, and the total cost of electricity consumption: ___ riyal. 2. If a current of...
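A brief worked sketch, assuming the values reconstructed above, the truncated second part ($I = 2.00$ A for $4.00$ s, as visible in the garbled original), and 1,000 baisa per rial: the oven's power is $P = VI = 220\,\text{V} \times 10\,\text{A} = 2.2\,\text{kW}$, so the monthly energy is
$$E = 2.2\,\text{kW} \times 2\,\text{h/day} \times 30\,\text{days} = 132\,\text{kWh}, \qquad \text{cost} = 132 \times 15\,\text{Bz} = 1980\,\text{Bz} \approx 1.98\ \text{rial}.$$
For the second part, $Q = It = 2.00\,\text{A} \times 4.00\,\text{s} = 8.00\,\text{C}$, giving $n = Q/e = 8.00/(1.602\times10^{-19}) \approx 5.0\times10^{19}$ electrons.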
##### Find arc length of $r=2\cos\theta$ in the range $0\le\theta\le\pi$?
I know the length formula is $\int_a^b\sqrt{1+(y')^2}\,dx$ ... can someone check my answer?...
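For reference, a worked check using the polar arc-length formula $L=\int_a^b\sqrt{r^2+(dr/d\theta)^2}\,d\theta$ (the Cartesian formula quoted in the question is not the natural one for a polar curve):
$$L = \int_0^{\pi}\sqrt{(2\cos\theta)^2+(-2\sin\theta)^2}\;d\theta = \int_0^{\pi}2\,d\theta = 2\pi,$$
consistent with $r=2\cos\theta$ tracing a circle of radius 1 exactly once as $\theta$ runs from $0$ to $\pi$.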
##### The specific heat of liquid bromine is 0.226 J/g·°C. How much heat (J) is required to raise the temperature of 10.0 mL of bromine from 25.00 °C to 27.30 °C? The density of liquid bromine is 3.12 g/mL. A) 16.2 B) 10.4 C) 5.20 D) 32.4 E) 300
The specific heat of liquid bromine is 0.226 J/g·°C. How much heat (J) is required to raise the temperature of 10.0 mL of bromine from 25.00 °C to 27.30 °C? The density of liquid bromine is 3.12 g/mL. A) 16.2 B) 10.4 C) 5.20 D) 32.4 E) 300...
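A quick worked check, assuming the values reconstructed above: with $q = \rho V c\,\Delta T$,
$$q = (3.12\ \mathrm{g/mL})(10.0\ \mathrm{mL})(0.226\ \mathrm{J/g\,^\circ C})(27.30 - 25.00)\ ^\circ\mathrm{C} \approx 16.2\ \mathrm{J},$$
which matches choice A.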
##### A particle leaves the origin with an initial velocity $\vec{v}=(3.00 \hat{\mathrm{i}}) \mathrm{m} / \mathrm{s}$ and a constant acceleration $\vec{a}=(-1.00 \hat{\mathrm{i}}-$ $0.500 \hat{\mathrm{j}}) \mathrm{m} / \mathrm{s}^{2} .$ When it reaches its maximum $x$ coordinate, what are its (a) velocity and (b) position vector?
A particle leaves the origin with an initial velocity $\vec{v}=(3.00 \hat{\mathrm{i}}) \mathrm{m} / \mathrm{s}$ and a constant acceleration $\vec{a}=(-1.00 \hat{\mathrm{i}}-$ $0.500 \hat{\mathrm{j}}) \mathrm{m} / \mathrm{s}^{2} .$ When it reaches its maximum $x$ coordinate, what are its (a) velocity...
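A worked sketch of the standard approach: the maximum $x$ coordinate occurs when $v_x = 3.00 - 1.00t = 0$, i.e. at $t = 3.00\,\text{s}$. Then (a) $\vec{v} = -0.500 \times 3.00\,\hat{\mathrm{j}}\ \text{m/s} = -1.50\,\hat{\mathrm{j}}\ \text{m/s}$, and (b)
$$\vec{r} = \left(3.00 \times 3.00 - \tfrac{1}{2}(1.00)(3.00)^2\right)\hat{\mathrm{i}} + \left(-\tfrac{1}{2}(0.500)(3.00)^2\right)\hat{\mathrm{j}} = (4.50\,\hat{\mathrm{i}} - 2.25\,\hat{\mathrm{j}})\ \text{m}.$$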
##### From the change in the P$_2$O$_5$ and NaOH trap masses, determine the masses of H$_2$O and CO$_2$ produced. Be careful to use the change in mass of each trap for just before, and just after, the unknown sample is combusted. (You should not use the initial masses of the two traps at the start of the experiment.) Mass of H$_2$O
From the change in the P$_2$O$_5$ and NaOH trap masses, determine the masses of H$_2$O and CO$_2$ produced. Be careful to use the change in mass of each trap for just before, and just after, the unknown sample is combusted. (You should not use the initial masses of the two traps at the start of the experiment.)...
##### Income statements and balance sheets follow for The New York Times Company. Refer to these financial...
Income statements and balance sheets follow for The New York Times Company. Refer to these financial statements to answer the requirements. The New York Times Company Consolidated Statements of Income Fiscal year ended (in thousands) Dec. 29, 2016 Dec. 30, 2015 Revenues C...
##### Determine the slope at $B$ and the deflection at $C$ of the beam. $E=200 \mathrm{GPa}$ and $I=65.0\left(10^{6}\right) \mathrm{mm}^{4}$.
Determine the slope at $B$ and the deflection at $C$ of the beam. $E=200 \mathrm{GPa}$ and $I=65.0\left(10^{6}\right) \mathrm{mm}^{4}$....
##### Enter your answer in the provided box. Transition metals, located in the center of the periodic table, have many essential uses as elements and form many important compounds as well. Calculate the molecular mass of the following transition metal compound:
Enter your answer in the provided box. Transition metals, located in the center of the periodic table, have many essential uses as elements and form many important compounds as well. Calculate the molecular mass of the following transition metal compound: ...
##### Consider the e-billing case. The mean and the standard deviation of the sample of n = 65 payment times are 18.7922 and 3.9678...
Consider the e-billing case. The mean and the standard deviation of the sample of n = 65 payment times are 18.7922 and 3.9678. Test $H_0$ versus $H_1$ by setting $\alpha$ equal to 0.01 and using the critical value rule, assuming normality of the population. (Round your answers to the indicated decimal places; negative values should be indicated by a minus sign. Use a statistical software package, e.g., Minitab, MegaStat, etc., to derive the p-value.)...
##### Inverse trigonometric functions: if $y = \cos^{-1}(3x + 2)$, find $\frac{dy}{dx}$
Inverse trigonometric functions: if $y = \cos^{-1}(3x + 2)$, find $\frac{dy}{dx}$...
##### Mean relative points of the Warriors, years 2013–2015: hypothesis test for the population mean
Mean relative points of the Warriors, years 2013 and 2015. Hypothesis test for the population mean: test statistic ..., P-value 0.9957...
##### Find the center and radius of the circle with the given equation $(x - 4)^2 + (y + 2)^2 = \ldots$
Find the center and radius of the circle with the given equation: $(x - 4)^2 + (y + 2)^2 = \ldots$ Center $(x, y)$: ___; radius: ___. Sketch the circle....
##### Iodine can be separated from sand and sodium chloride using..... what?
Iodine can be separated from sand and sodium chloride using..... what?...
##### The reaction A → B + C is second order with respect to A. When [A]$_0$ = 0.689 M, the reaction is...
The reaction A → B + C is second order with respect to A. When [A]$_0$ = 0.689 M, the reaction is 36.3% complete at 90.7 min. Calculate the half-life for this reaction. Show your work....
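A worked sketch, assuming the reconstruction above: for a second-order reaction, $\frac{1}{[A]} - \frac{1}{[A]_0} = kt$. At 36.3% completion, $[A] = 0.637 \times 0.689\ \text{M} \approx 0.4389\ \text{M}$, so
$$k = \frac{1/0.4389 - 1/0.689}{90.7\ \text{min}} \approx 9.12\times10^{-3}\ \text{M}^{-1}\text{min}^{-1}, \qquad t_{1/2} = \frac{1}{k[A]_0} \approx 159\ \text{min}.$$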
##### 4. Suppose profits from investments in individual stocks follow a normal distribution with mean $100 and...
4. Suppose profits from investments in individual stocks follow a normal distribution with mean $100 and standard deviation $300. If you buy a single stock, selected at random, what is the probability that your profit is greater than zero? If you are buying a portfolio of 25 randomly selected stocks...
##### Provide the product for the following reaction:
Provide the product for the following reaction: ...
##### Ivanhoe Company purchased a new machine on October 1, 2017, at a cost of $87,800. The...
Ivanhoe Company purchased a new machine on October 1, 2017, at a cost of $87,800. The company estimated that the machine has a salvage value of$8,800. The machine is expected to be used for 64,200 working hours during its 8-year life. Compute depreciation using the following methods in the year ind...
##### Approximate the area of the shaded region under the graph of the given function by using the indicated rectangles. (The rectangles have equal width.) $f(x)=\frac{4}{x}$ (GRAPH CANT COPY)
Approximate the area of the shaded region under the graph of the given function by using the indicated rectangles. (The rectangles have equal width.) $f(x)=\frac{4}{x}$ (GRAPH CANT COPY)...
##### A toy making company increased production of their most popular board game from 30550 unit last month to 45550 unit this month. find the percent increase.
A toy making company increased production of their most popular board game from 30550 units last month to 45550 units this month. Find the percent increase....
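A one-line worked check:
$$\frac{45550 - 30550}{30550} = \frac{15000}{30550} \approx 0.491 \approx 49.1\%\ \text{increase}.$$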
##### Please add explanation. 13. A die is rolled 3,000 times and the number of 5’s and 6’s together is noted. What is the approximate probability that the total number of 5’s and 6’s together is greater than 1,052? (This is a binomial situation; a roll results in a 5 or a 6, or it doesn’t. Approximate it with a normal distribution and use its standard deviation.) a. 13.5% b. 16% c. 2.5% d. .015% 14. Factory I produces 40% of the light bulbs in a certain country. Factory II produces 50% and Facto
Please add explanation. 13. A die is rolled 3,000 times and the number of 5’s and 6’s together is noted. What is the approximate probability that the total number of 5’s and 6’s together is greater than 1,052? (This is a binomial situation; a roll results in a 5 or a 6, or...
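A worked sketch for problem 13: with $n = 3000$ rolls and $p = 1/3$ (a 5 or a 6), the count has mean $np = 1000$ and standard deviation $\sqrt{np(1-p)} = \sqrt{3000 \cdot \tfrac{1}{3} \cdot \tfrac{2}{3}} \approx 25.8$. Then
$$P(X > 1052) \approx P\!\left(Z > \frac{1052 - 1000}{25.8}\right) = P(Z > 2.01) \approx 2.2\%,$$
closest to choice (c) 2.5%.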
##### Apply the following formula for the derivative of an inverse function; also illustrate the result graphically
Apply the formula $(f^{-1})'(b) = 1/f'(a)$, where $f(a) = b$, at the given point, to the following function: $f(x) = \sqrt{x-3}$ ... Sketch graphs of both $f$ and $f^{-1}$ and describe the relationship between them. Also indicate the tangent lines to each (corresponding part) and give the slopes...
|
|
# Tattoo featuring quantum physics
1. May 8, 2013
### BslBryan
Hello community. This is my first post and thread in this forum.
I've been working with modeling circumbinary exoplanets lately and I'd like to commemorate the implications with a tattoo. I'm young and dumb - it's a perfect time to get a tattoo.
Instead of relying on my work with CBPs and getting something like a stellar luminosity equation, I'd prefer to get something that relates to the building blocks of the universe in entirety.
I've thought about the standard model, but that might change in my lifetime with a TOE. I don't want the theory of relativity. All I know is that I want a physics tattoo - either an equation with remarkable implications or, preferably, something visual (maybe with an equation) - and I figured you guys would be the ones to ask.
No idea is a bad idea, and you're free to make a case for anything, even things like relativity where I've already stated I'd rather not have.
Thanks, guys. Let's hear some cool ideas.
2. May 8, 2013
### Integral
Staff Emeritus
$$e^{i \pi} -1 = 0$$
Not exactly physics but a neat expression in any case.
3. May 8, 2013
### phosgene
Stephen Hawking on lower back.
4. May 8, 2013
### BslBryan
It is neat, and quite beautiful, but I don't really understand the implications - I can't quite process imaginary numbers or the significance of these seemingly unrelated values coming out to 0.
5. May 8, 2013
### WannabeNewton
The equations of motion for the Klein-Gordon field: $\partial^{a}\partial_{a}\varphi - m^{2}\varphi = 0$
6. May 8, 2013
### AnTiFreeze3
The complexity of this consequent equation may be off-putting to some, so I will include a brief motivation so that everyone here, from all levels of physics, can understand it:
Say we have a particle, be it a car, person, or ball. In order to determine both the speed and direction of this particle, we find the ${\frac{\Delta x}{\Delta t}}$, with $\Delta x$ denoting the overall change in position of the particle from its starting position, and $\Delta t$ denoting the change in time. Due to the displacement ($\Delta x$) being a vector, the resulting magnitude of this equation also comes with a direction. This resulting value and direction will be denoted by the symbol $\mathbf v$, and it shall henceforth be referred to as velocity.
Thus, we can conclude that the velocity of a particle, $\mathbf v$, is related to the object's displacement, and the time of its displacement, in the following equation: $$\mathbf v={\frac{\Delta x}{\Delta t}}.$$
The results of this conclusion are astoundingly complex, and spread throughout the entirety of physics, making its presence known to all who dare to learn and understand it in a truly deep level. I find that this would be your best choice for a tattoo.
7. May 8, 2013
### BslBryan
Nice idea. Any particular reason you suggested that over Schrodinger's Equation?
8. May 8, 2013
### WannabeNewton
The latter is quite clichéd, is it not? You could also try the Dirac equation for the Dirac spinor if you get bored!
9. May 8, 2013
### DiracPool
Forget the Schrodinger equation. Use the Dirac equation sandwiched between 2 p-orbitals! Cool.
10. May 8, 2013
### Office_Shredder
Staff Emeritus
Is this some sort of trolling attempt? :tongue:
11. May 8, 2013
### BslBryan
I eat antimatter for breakfast. Cool idea!
12. May 8, 2013
### SteamKing
Staff Emeritus
You would be better off learning something, say a trade or a skill, rather than running around like a two-legged billboard.
13. May 8, 2013
### BslBryan
Maybe you could learn something about running around like a two-legged billboard. Let's keep this to suggestions for tattoos.
14. May 8, 2013
### AnTiFreeze3
You would be better off learning something, say a trade or a skill, rather than wasting away time replying to random people on the internet about your discomfort with how they conduct themselves....
Oh wait, this could go on forever.
15. May 8, 2013
### lisab
Staff Emeritus
16. May 8, 2013
### chgol5270
Try some of the images for particle collisions
17. May 9, 2013
### Trollegionaire
Tattoo featuring quantum physics.
18. May 9, 2013
### Staff: Mentor
19. May 9, 2013
### dlgoff
20. May 9, 2013
### Pseudo Epsilon
Heisenberg's uncertainty principle is a VERY good idea. It's simple, looks cool and illustrates rather poetically how we can never know everything.
21. May 9, 2013
### Danger
I always find Hertzsprung-Russell diagrams, from the simple black ink type to the multicoloured variety, to be captivating.
(Yeah, I know that it's not quantum physics, but you did mention a lot about stars in your introduction.)
22. May 9, 2013
### Integral
Staff Emeritus
If you don't understand the implications of that relationship, how can you understand any equation in advanced Physics???
Perhaps you need:
F=ma
23. May 9, 2013
### Curious3141
That should be:
$$e^{i \pi} +1 = 0$$ It's a good thing he didn't go out and get a tattoo with a typo.
24. May 9, 2013
|
|
# How to prove that ideal is not principal
Let $$F$$ be a field and $$R$$ be the subring of $$F[x]$$ consisting of polynomials whose coefficient of $$x$$ is zero. Let $$I$$ be the ideal of $$R$$ consisting of elements with zero constant term. I have to prove that $$I$$ is not a principal ideal of $$R$$.
To show that, I will have to prove that $$I$$ is not generated by any single element of $$R$$. But to me it seems that every element of $$I$$, say, $$f(x)= a_2x^2+a_3x^3+\dots+a_nx^n$$, can be generated by an element of $$R$$, say, $$g(x)=b_0+b_2x^2+\dots +b_mx^m$$ with $$b_2=b_3=\dots=b_m=0$$, i.e.
$$f(x)=b_0(\alpha_2x^2+\alpha_3x^3+\dots+\alpha_nx^n)$$
where $$b_0=g(x)$$ and $$\alpha_2x^2+\alpha_3x^3+\dots+\alpha_nx^n$$ belongs to $$R$$. Thus it seems to me a principal ideal generated by $$g(x)$$. So what am I missing here? Please help.
• @mrtaurho The ideal generated by the generator $g(x)$ will thus also contain polynomials with constant terms? And since $I$ does not contain any polynomial with a nonzero constant term, does that make $I$ a non-principal ideal? May 26 at 8:37
• That's what I was referring to! You've to be careful when choosing generators to not accidentally get too much. For example, the unit element $1$ "generates" every ideal in any ring by your argument. But it will never really generate any proper ideal. (I re-added my comment below) May 26 at 8:45
• You are correct insofar as the ideal generated by $g(x)$ (or any constant for that matter) contains $I$. But as the constants are units in $R[x]$, the principal ideal generated by $g(x)$ will be the whole ring instead of only $I$, which is a proper subset of $R[x]$. May 26 at 8:46
Hint: look at $$x^2$$ and $$x^3$$. Can you find a non-unit $$f\in R$$ with $$g,h\in R$$ so that $$fg=x^2$$ and $$fh=x^3$$? There's a solution under the spoiler, but give yourself a chance before looking at it, please.
No - by examining the factorization $$fg=x^2$$ in $$F[x]$$, we find $$f=cx^2$$; but then writing $$h=h_0+h_2x^2+\cdots$$ (note $$h$$ has no $$x$$ term, since $$h\in R$$), we have that $$fh=ch_0x^2+ch_2x^4+\cdots$$, which has no $$x^3$$ term, so $$fh=x^3$$ is impossible and no single element of $$R$$ can generate both $$x^2$$ and $$x^3$$.
|
|
# 3.4: Kirchhoff's Rules
## Some Problems Cannot Be Solved with $$R_{eq}$$ and $$C_{eq}$$
Despite our ability to reduce circuits using equivalent resistances and capacitances, we can’t analyze every circuit imaginable using those shortcuts. For example:
Figure 3.4.1 – Circuits Not Solvable with Equivalent Resistance and Capacitance
It is possible to reduce fragments of networks with equivalent resistance/capacitance to simplify our work, but we have to be certain that the elements we are combining truly follow both of the conditions for the type of equivalence we are using. For example, in the third figure above, one might be inclined to proclaim that $$R_2$$ and $$R_3$$ are in parallel. After all, it's clear that the total current that comes into the junction joining them equals the sum of the currents through each. Well, that's one criterion, but what about the other – that the voltage drops across each is equal? This fails because the capacitor has a voltage drop across it.
## Getting Back to Basics
So how do we solve such problems? We do this by using the same principles that led to the equivalence formulas, which comes down to two simple rules (called Kirchhoff's rules) that are based on charge conservation and energy conservation.
junction rule – Charge remains conserved, so since there is no charge build-up or loss at any junction in a network, the rate at which charge enters a junction equals the rate at which it exits the junction. Put another way, the current into a junction equals the current out of that junction.
loop rule – Energy remains conserved, which means that when a charge travels around any loop in a network to return to where it started, its potential energy $$qV$$ should return to the value it had when it was previously at that position. [This of course also presupposes that the emf source is supplying energy to the circuit at the same rate that the circuit's resistance is converting energy to thermal energy, and that the kinetic energy of the charge doesn't change.] Put another way, the sum of the voltage drops around any closed loop is zero.
Applying these two rules to simple series and parallel circuits results in the same equivalence rules that we have already, but now these can be used to solve the more complicated problems mentioned above.
## Problem Solving
Let's consider a problem involving two batteries and multiple loops:
Figure 3.4.2a – Using Kirchhoff's Rules on a Network
Our goal is to find the current that runs through each of the resistors. There are some very standard steps to follow, and the most important thing to remember is that there exist many choices for these steps, and none of these choices is wrong. Don't get slowed down by trying to decide on the "correct" choice to make – every choice will get to the same answer! Without further ado, the steps:
1. label the currents
We are solving for the currents, so we need some variables to solve for. But a variable alone is not enough, we also need to label a direction for that current.
Figure 3.4.2b – Using Kirchhoff's Rules on a Network
Wait, these currents can't be right – they are all converging on the same junction! No, this labeling is perfectly fine, because in the end we will solve a system of equations, and one or two of these currents will end up being a negative number, indicating that the actual direction of the current is opposite to what we have labeled. We make these labels to solve the problem, and there is no need to be concerned with guessing the actual direction of current flow.
2. apply the junction rule
Identify all of the junctions in the diagram (in this case, there are two). We will not require all of the junctions, as we will see here. Choosing the upper junction in this case, and setting the incoming current equal to the outgoing current gives:
$\text{current in} = I_1+I_2+I_3 = 0 = \text{current out}$
Note that if we choose the other junction, the current in is zero and the current out is the sum of the three individual currents, giving us the same equation. The number of useful junction equations will be one fewer than the total number of junctions. As stated earlier, we can frequently reduce the need for junction equations by using equivalent resistance wherever possible.
3. apply the loop rule
Identify all of the loops in the diagram (in this case, there are three – left, right, and outer). As in the case of junctions, we will not require all of the loops (i.e. after we have enough of them, additional loops provide redundant information). The simple way to know if enough loops have been included is to count the number of unknowns and the number of equations. In this case, we have three unknowns (the three currents), and we already have one equation (the junction equation), so we need to use two different loops to attain enough equations to solve for the currents.
This step, while easily stated, comes with many sub-steps. For each loop that is used, follow this procedure:
1. choose a loop direction – clockwise or counterclockwise
2. choose a starting point – any point on the loop will do
3. follow the loop in the chosen direction and construct a sum of the voltage drops in that direction
• When crossing over a battery (or a capacitor):
• Add a positive value equal to the battery's emf if the loop journey crosses from the negative terminal to the positive terminal, because this is an increase in potential.
• Add a value equal to the negative of the battery's emf if the loop journey crosses from the positive terminal to the negative terminal, because this is a decrease in potential.
• When crossing over a resistor:
• Add a value equal to $$-IR$$ (where $$I$$ is the current labeled and $$R$$ is the resistor encountered) if the direction of the loop journey matches the direction of the labeled current. This is because current always flows from higher to lower potential.
• Add a value equal to $$+IR$$ if the direction of the loop journey is opposite to the direction of the labeled current.
4. set the sum of voltage drops equal to zero
For the example at hand, this all looks like this (all three loops are provided here, but only two of the equations are needed):
$\begin{array}{ll} \text{left loop, clockwise, start in lower-left corner:} & +\mathcal E_1 - \mathcal E_2 + I_2R_2 - I_1R_1 = 0 \\ \text{right loop, clockwise, start in lower-left corner:} & -I_2R_2+\mathcal E_2 + I_3R_3 = 0 \\ \text{outer loop, clockwise, start in lower-left corner:} & +\mathcal E_1 + I_3R_3 - I_1R_1 = 0 \end{array}$
4. do the algebra – Solve the simultaneous equations using whatever method you prefer (a numerical sketch is given below).
Of course, there are many variations on problems – the battery emfs and resistances are not always what is given – but the same principles apply.
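To make step 4 concrete, here is a minimal numerical sketch of solving the junction and loop equations above. The figure does not specify component values, so the emfs and resistances below are illustrative assumptions only:

```python
import numpy as np

# Junction and loop equations from above, written as A @ [I1, I2, I3] = b.
# The component values are illustrative assumptions; the figure gives none.
E1, E2 = 12.0, 5.0          # battery emfs (V)
R1, R2, R3 = 4.0, 6.0, 3.0  # resistances (ohms)

A = np.array([[1.0, 1.0, 1.0],    # junction:   I1 + I2 + I3 = 0
              [-R1, R2, 0.0],     # left loop:  E1 - E2 + I2*R2 - I1*R1 = 0
              [0.0, -R2, R3]])    # right loop: -I2*R2 + E2 + I3*R3 = 0
b = np.array([0.0, E2 - E1, -E2])

I1, I2, I3 = np.linalg.solve(A, b)
# A negative result simply means the actual current opposes the labeled direction.
print(f"I1 = {I1:.3f} A, I2 = {I2:.3f} A, I3 = {I3:.3f} A")
```

Note that only two loop equations are needed alongside the junction equation; adding the outer-loop equation would supply redundant information, exactly as noted above.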
Example $$\PageIndex{1}$$
Find the resistance $$R$$ in the network diagrammed below for which the ammeter will measure zero current.
Solution
Noting that there is no current in the central segment and summing the voltage drops clockwise around the left loop (starting at the lower left corner) gives us the current in the outer loop:
$+12.0V - I\left(4.0\Omega\right) - \left(0A\right)\left(7.3\Omega\right) - 5.0V = 0 \;\;\;\Rightarrow\;\;\; I = \dfrac{7}{4}A\nonumber$
Now use that current to sum the voltage drops around the outer loop to find $$R$$:
$+12.0V - I\left(4.0\Omega\right) - \left(\dfrac{7}{4}A\right)R + 16.0V = 0 \;\;\;\Rightarrow\;\;\; R = 12.0\Omega\nonumber$
This page titled 3.4: Kirchhoff's Rules is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Tom Weideman directly on the LibreTexts platform.
|
|
# The Medical Segmentation Decathlon
## Abstract
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems for the next two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.
## Introduction
Machine learning is beginning to revolutionize many fields of medicine, with success stories ranging from the accurate diagnosis and staging of diseases1, to the early prediction of adverse events2 and the automatic discovery of antibiotics3. In this context, a large amount of literature has been dedicated to the automatic analysis of medical images4. Semantic segmentation refers to the process of transforming raw medical images into clinically relevant, spatially structured information, such as outlining tumor boundaries, and is an essential prerequisite for a number of clinical applications, such as radiotherapy planning5 and treatment response monitoring6. It is so far the most widely investigated medical image processing task, with about 70% of all biomedical image analysis challenges dedicated to it7. With thousands of algorithms published in the field of biomedical image segmentation per year8, however, it has become challenging to decide on a baseline architecture as starting point when designing an algorithm for a new given clinical problem.
International challenges have become the de facto standard for comparative assessment of image analysis algorithms given a specific task7. Yet, a deep learning architecture well-suited to a certain clinical problem (e.g., segmentation of brain tumors) may not necessarily generalize well to different, unseen tasks (e.g., vessel segmentation in the liver). Such a “generalizable learner”, which in this setting would represent a fully-automated method that can learn any segmentation task given some training data and without the need for human intervention, would provide the missing technical scalability to allow many new applications in computer-aided diagnosis, biomarker extraction, surgical intervention planning, disease prognosis, etc. To address this gap in the literature, we proposed the concept of the Medical Segmentation Decathlon (MSD), an international challenge dedicated to identifying a general-purpose algorithm for medical image segmentation. The competition comprised ten different data sets with various challenging characteristics, as shown in Fig. 1. Two subsequent phases were presented to participants: first the development phase, which served for model development and included seven open training data sets; then the mystery phase, which aimed to investigate whether algorithms were able to generalize to three unseen segmentation tasks. During the mystery phase, participants were allowed to submit only one solution, able to solve all problems without changing the architecture or hyperparameters.
The contribution of this paper is threefold: (1) We are the first to organize a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities. More specifically, the underlying data set has been designed to feature some of the representative difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data and small objects. (2) Based on the MSD, we released the first open framework for benchmarking medical segmentation algorithms with a specific focus on generalizability. (3) By monitoring the winning algorithm, we show that generalization across various clinical applications is possible with one single framework.
In the following, we will show the MSD results in “Results”, in which we present the submitted methods and rankings based on the Dice Similarity Coefficient (DSC)9 and the Normalized Surface Dice (NSD)10 metrics as well as the results for the live challenge. We conclude with a discussion in “Discussion”. The challenge design, including the mission, challenge data sets and assessment method, can be found in the “Methods”. Further details including the overall challenge organization, detailed participating method descriptions and further results are presented in the Supplementary Information.
## Results
### Challenge submissions
In total, 180 teams registered for the challenge, of which 31 submitted fully-valid and complete results for the development phase. Of these, 19 teams submitted final and valid results for the mystery phase. All methods that fulfilled the criteria to move to the mystery phase were based on convolutional neural networks, with the U-Net11 being the most frequently used base architecture, employed by more than half of the teams (64%). The most commonly used loss function was the DSC loss (29%), followed by the cross-entropy loss (21%). Figure 2 provides a complete list of both network architectures and loss functions used in the challenge. 61% of the teams used the adaptive moment estimation (Adam) optimizer12, while stochastic gradient descent (SGD)13 was used by 33% of the teams.
### Method description of top three algorithms
In the following, the top three methods are briefly described, while the remaining participating methods are described in Supplementary Methods 2. Supplementary Table 1 further provides an overview of all methods that were submitted for the mystery phase and whose teams provided full algorithmic information (n = 14 teams), including links to public repositories (when available).
The key idea of nnU-Net's method was a fully-automated dynamic adaptation of the segmentation pipeline, performed independently for each task in the MSD, based on an analysis of the respective training data set. Image pre-processing, network topologies and post-processing were determined fully automatically and considered more important than the actual architecture8. nnU-Net was based on the U-Net architecture11 with the following modifications: the use of leaky ReLU, instance normalization and strided convolutions for downsampling8. It further applied a combination of augmentation strategies, namely affine transformation, non-linear deformation, intensity transformation (similar to gamma correction), mirroring along all axes and random cropping. The sum of the DSC and cross-entropy losses was used, together with the Adam optimizer. The method applied a purposely defined ensembling strategy in which four different architectures were used. The task-specific optimal combination was selected automatically via cross-validation on the training set.
The key idea of NVDLMED’s method was to use a fully-supervised uncertainty-aware multi-view co-training strategy14. They achieved robustness and generalization by initializing the model from 2D pre-trained models and using three views of networks to gain more 3D information through the multi-view co-training process. They further used a resampling strategy to cope with the differences among the ten tasks. The NVDLMED team utilized a 3D version of the ResNet15 with anisotropic 3D kernels14. The team further applied a combination of augmentation strategies, namely affine transformation, geometric left-right flip and random crop. The DSC loss and the SGD optimizer were employed. NVDLMED ensembled three models, each trained on a different view (coronal, sagittal and axial).
The key idea of K.A.V.athlon’s method was a generalization strategy in the spirit of AutoML16. The process was designed to train and predict automatically using given image data and description without any parameter change or intervention by a human. K.A.V.athlon’s method was based on a combination of the V-Net and U-Net architectures with the addition of a Squeeze-and-Excitation (SE) block and a residual block. The team further applied different types of augmentation, namely affine transformation, noise application, geometric left-right flip, random crop, and blurring. The DSC loss with a thresholded ReLU (threshold 0.5) and the Adam optimizer were employed. No ensembling strategy was used.
### Individual performances and rankings
The DSC values for all participants for the development phase and the mystery phase are provided as dot- and box-plots in Figs. 3, 4, respectively. For tasks with multiple target ROIs (e.g., edema, non-enhancing tumor and enhancing tumor segmentation for the brain data set), the box-plots were color-coded according to the ROI. The distribution of the NSD metric values was comparable to the DSC values and can be found in Supplementary Figs. 1, 2.
It can be seen that the performance of the algorithms, as well as their robustness, depends crucially on the task and target ROI. The median of the mean DSC, computed over all test cases of a single task and all participants, ranged from 0.16 (colon cancer segmentation, mystery phase; cf. Supplementary Table 9) to 0.94 (liver segmentation, development phase, cf. Supplementary Table 5, and spleen segmentation, mystery phase, cf. Supplementary Table 11). The full list of values is provided in Supplementary Tables 2–11.
The rankings for the challenge are shown in Table 1. The winning method (nnU-Net) was extremely robust with respect to the different tasks and target regions in both phases (cf. Figs. 3, 4). Ranks 2 and 3 (K.A.V.athlon and NVDLMED) switched places between the development and the mystery phase. Figure 5 further shows the ranks of all algorithms for all thirteen target regions of the development phase (red) and all four target regions of the mystery phase in the form of a box-plot. Many teams show a large variation in their ranks across target ROIs. The lowest rank difference of three ranks was achieved by team nnU-Net (minimum rank: 1, maximum rank: 4; development phase) and the largest rank difference of sixteen ranks was obtained by team Whale (minimum rank: 2, maximum rank: 18; development phase).
To investigate ranking robustness, line plots17 are provided in the Supplementary Figs. 3–12 for all individual target regions, indicating how ranks change for different ranking schemes. Furthermore, a comparison of the achieved ranks of algorithms for 1000 bootstrapped samples is provided in the form of a stacked frequency plot17 in Supplementary Fig. 13. For each participant, the frequency of the achieved ranks is provided for every task individually. It can be easily seen from both uncertainty analyses that team nnU-Net implemented an extremely successful method that was at rank 1 for nearly every task and bootstrap set.
The agreement between the original rankings computed for the development and mystery phases and the ranking lists based on the individual bootstrap samples was determined via Kendall’s τ18. The median (interquartile range (IQR)) Kendall’s τ was 0.94 (0.91, 0.95) for the colon task, 0.99 (0.98, 0.99) for the hepatic-vessel task and 0.92 (0.89, 0.94) for the spleen task. This shows that the rankings for the mystery phase were stable against small perturbations.
### Impact of the challenge winner
In the 2 years after the challenge, the winning algorithm, nnU-Net (sometimes with minor modifications), competed in a total of 53 further segmentation tasks. The method won 33 out of 53 tasks, with a median rank of 1 (interquartile range (IQR) of (1;2)) across the 53 tasks8, for example being the winning method of the famous BraTS challenge in 2020 (Team Name: MIC_DKFZ, https://www.med.upenn.edu/cbica/brats2020/rankings.html). This confirmed our hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The method further became the new state of the art and was used in several segmentation challenges by other researchers. For instance, eight nnU-Net derivatives were ranked in the top 15 algorithms of the 2019 Kidney and Kidney Tumor Segmentation Challenge (KiTS—https://kits19.grand-challenge.org/)8, the MICCAI challenge with the most participants in 2019. Nine out of the top ten algorithms in the COVID-19 Lung CT Lesion Segmentation Challenge 2020 (COVID-19-20, https://covid-segmentation.grand-challenge.org/) built their solutions on top of nnU-Net (98 participants in total). As demonstrated in19, nine out of ten challenge winners in 2020 built solutions on top of nnU-Net.
## Discussion
We organized the first biomedical image segmentation challenge in which algorithms competed in ten different disciplines. We showed that a single algorithm can indeed generalize over various different applications without human-based adjustments. This was further demonstrated by monitoring the winning method for 2 years, which showed continued generalizability to other segmentation tasks.
In the following sections, we will discuss specific aspects of the MSD challenge, namely the challenge infrastructure, data set, assessment method and outcome.
### Challenge infrastructure
The participating teams were asked to submit their results in the form of a compressed archive to the grand-challenge.org platform. For the development phase, a fully-automated validation script was run for each submission and the leaderboard was updated accordingly. Each team was allowed to submit one solution per day. In contrast, for the mystery phase, only one valid submission per algorithm could be submitted to prevent overfitting.
Despite the above-mentioned policies, there were attempts to create multiple accounts so that a team could test their method beyond the allowed limit, a problem that was discovered through the similarity of results between certain accounts. Teams who were found to be evading the rules were disqualified. Identity verification and fraud detection tooling has now been added to grand-challenge.org to help organizers mitigate this problem in the future.
Possibly, a better way of controlling overfitting, or possible forms of cheating (e.g., manual refinement of submitted results20) would have been to containerize the algorithms using Docker containers and for inference to be run by the organizers. This approach was unfortunately not possible at the time of the organization of MSD due to the lack of computational resources to run inference on all data for all participants. Thanks to a partnership with Amazon Web Services (AWS), the grand-challenge.org platform now offers the possibility to upload Docker container images that can participate in challenges and made available to researchers for processing new scans. With the recent announcement of a partnership between NVIDIA and the MICCAI 2020 and 2021 conferences, and the increased standardization of containers, such a solution should be adopted for further iterations of the MSD challenge.
### Challenge data set
In the MSD, we presented a unique data set, including ten heterogeneous tasks from various body parts and regions of interest, numerous modalities and challenging characteristics. MSD is the largest and most comprehensive medical image segmentation data set available to date. The MSD data set has been downloaded more than 2000 times in its first year alone, via the main challenge website (http://medicaldecathlon.com/). The data set has recently been accepted into the AWS Open-Data registry, (https://registry.opendata.aws/msd/) allowing for unlimited download and availability. The data set is also publicly available under a Creative Commons license CC-BY-SA4.0, allowing broad (including commercial) use. Due to data set heterogeneity, and usage in generalizability and domain adaptation research, it is likely to be very valuable for the biomedical image analysis community in the long term.
Regarding limitations, the MSD data set was gathered from retrospectively acquired and labeled data from many different sources, resulting in heterogeneous imaging protocols and differences in annotation procedures, and the annotations were limited to a single human rater. While the introduction of additional annotators would have benefited the challenge by allowing inter-rater reliability estimates and possibly improving the reliability of annotations, this was not possible due to restricted resources and the scale of the data. As shown in21, several annotators are often necessary to overcome issues related to inter-observer variability. Furthermore, the data set only consists of radiological data; we can therefore only draw conclusions for this application. Other areas like dermatology, pathology or ophthalmology were not covered. Finally, one specific region from one task (the vessel annotations of the liver data set) was found to be non-optimal from a segmentation point of view after the data release; nevertheless, we opted to follow the best practice recommendations on challenges7, 20, 22 and not change the challenge design after it was released to participants. Note, however, that the message of this challenge would not change if the vessel data set were omitted from the competition.
### Challenge assessment
Two common segmentation metrics were used to evaluate the participants’ methods: the DSC, an overlap measure, and the NSD, a distance-based metric. The choice of the right metrics was heavily discussed, as it is extremely important for the challenge outcome and interpretation. Some metrics are more suitable for specific clinical use-cases than others23. For instance, the DSC metric is a good proxy for comparing large structures but should not be relied on for very small objects, as single-pixel differences may already lead to substantial changes in the metric scores. However, to ensure that the results are comparable across all ten tasks, a decision was taken to focus on the two above-mentioned metrics, rather than using clinically-driven task-specific metrics.
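For reference, a minimal sketch of the DSC computation on binary masks is given below (the challenge's actual evaluation code may differ in details, e.g., in the handling of empty masks); the NSD additionally requires surface-distance computations and is omitted here:

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|); returns ~0 if both masks are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy example: two overlapping 4x4x4 cubes inside a 10x10x10 volume
pred = np.zeros((10, 10, 10))
pred[2:6, 2:6, 2:6] = 1
gt = np.zeros((10, 10, 10))
gt[3:7, 3:7, 3:7] = 1
print(round(dice_coefficient(pred, gt), 3))  # 2*27 / (64 + 64) ≈ 0.422
```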
Comparability was another issue for the ranking, as the number of samples varied heavily across tasks and target ROIs, which made a statistical comparison difficult. We therefore decided to use a ranking approach similar to the evaluation of the popular BraTS challenge (http://braintumorsegmentation.org/), which was based on pairwise Wilcoxon signed-rank statistical tests between algorithms. The rank of each algorithm was determined (independently per task and ROI) by counting the number of competing algorithms with a significantly worse performance. This strategy avoided the need for similar sample sizes across tasks and reduced the need for task-specific weighting and score normalization.
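A minimal sketch of this significance-ranking scheme follows; the significance level, the one-sided test direction, and the toy data are illustrative assumptions rather than the challenge's exact protocol:

```python
import numpy as np
from scipy.stats import wilcoxon

def significance_ranks(scores, alpha=0.05):
    """scores: {team_name: per-case metric array, all of equal length}."""
    ranks = {}
    for name, s in scores.items():
        worse = 0
        for other, t in scores.items():
            if other != name:
                # One-sided paired test: does `name` score higher than `other`?
                _, p = wilcoxon(s, t, alternative="greater")
                worse += p < alpha
        # An algorithm with more significantly-worse competitors ranks better.
        ranks[name] = len(scores) - worse
    return ranks

rng = np.random.default_rng(0)
scores = {f"team{i}": np.clip(rng.normal(0.6 + 0.1 * i, 0.1, 30), 0, 1)
          for i in range(3)}
print(significance_ranks(scores))
```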
Identifying an appropriate ranking scheme is a non-trivial challenge. It is important to note that each task of the MSD data set comprised one to three different target ROIs, introducing a hierarchical structure within the data set. Starting from a significance ranking for each target ROI, we considered two different aggregation schemes: (1) averaging the significance ranks across all target ROIs; (2) averaging the significance ranks per task (data set) and averaging those per-task ranks for the final rank. The drawback of (1) is that a possible bias between tasks might be introduced, as tasks with multiple target ROIs (e.g., the brain task with three target ROIs) would be over-weighted. We therefore chose ranking scheme (2) to avoid this issue. This decision was made prior to the start of the challenge, as per the challenge statistical analysis protocol. A post-challenge analysis was performed to test this decision, and the results showed that the overall ranking structure remained unchanged. The first three ranks were preserved; only minor changes (1 to 2 ranks) were seen in a couple of examples at the middle and end of the rank list. As shown in Supplementary Figs. 3–12, changing the ranking scheme will typically lead to different rankings in the end, but we observed the first three ranks to be robust across various ranking variations. More complex ranking schemes were discussed among organizers, such as modeling the variations across tasks and target ROIs with a linear mixed model24. As explainability and a clear articulation of the ranking procedure were found to be important, it was ultimately decided to use significance ranking.
### Challenge outcome
A total of 180 teams registered for the MSD challenge, of which only 31 teams submitted valid results for the development phase and 19 teams for the mystery phase. A high number of registrations followed by only a fraction of final participants is typical of biomedical image analysis challenges (e.g., the Skin lesion analysis toward melanoma detection 2017 challenge with 46/593 submissions25, the Robust Medical Instrument Segmentation (RobustMIS) challenge 2019 with 12/75 submissions26 or the Multi-Center, Multi-Vendor, and Multi-Disease Cardiac Segmentation (M&Ms) challenge 2020 with 16/80 submissions27). Many challenge participants register simply to get data access. However, teams are often not able to submit their methods within the deadline due to other commitments. Furthermore, participants may be dissatisfied with their training and validation performance and step back from the final submission. The performance of the submitted algorithms varied dramatically across the different tasks, as shown in Figs. 3, 4 and Supplementary Tables 2–11. For the development phase, the median algorithmic performance, defined as the median of the mean DSC, varied widely across tasks, with the lowest being the tumor mass segmentation of the pancreas data set (0.21, Supplementary Table 7) and the highest the liver segmentation (0.94, Supplementary Table 5). The performance drop was much more modest for the best performing method, nnU-Net (0.52 and 0.93 median DSC for the pancreas mass and liver ROI, respectively), demonstrating that methods have varying degrees of learning resiliency to the challenges posed by each task. The largest difference within one task was also obtained for the pancreas data set, with a median of the mean DSC of 0.69 for the pancreas ROI and 0.21 for the pancreas tumor mass, which is likely explained by the very small relative intensity difference between the pancreas and its tumor mass.
### The years after the challenge
Following the challenge event at MICCAI 2018, the competition was reopened for rolling submissions. This time, participants were asked to submit results for all ten data sets (https://decathlon-10.grand-challenge.org/) in a single phase. In total, 742 users signed up. To restrict exploitation of the submission system for other purposes, only submissions with per-task metric values different from zero were accepted as valid, resulting in only 17 complete and valid submissions. In order to limit overfitting while still allowing some degree of methodological development, each team was allowed to submit its results 15 times. The winner of the 2018 MSD challenge (nnU-Net, denoted as Isensee on the live challenge) submitted to the live challenge leaderboard on the 6th of December 2019 and held the first position for almost one year, until the 30th of October 2020.
Since teams were allowed to tune their methods on all ten data sets for the live challenge, the minimum value of the data-set-specific median DSC improved quite substantially compared to the 2018 MSD challenge, as shown in Supplementary Fig. 14. The two hardest tasks during the 2018 MSD challenge were the segmentation of the tumor inside the pancreas, with an overall median of the mean DSC of 0.21 over all participants (0.37 for the top five teams), and the segmentation of the colon cancer primaries, with an overall median of the mean DSC of 0.16 over all participants (0.41 for the top five teams). The hardest task for the rolling challenge was the segmentation of the non-enhancing tumor inside the brain, with a median DSC of 0.47.
At the other end of the spectrum was the spleen segmentation task, where the median task DSC over all participants was 0.94 during the 2018 challenge and improved to 0.97 for the rolling challenge. These observations suggest that the ability of methods to solve these tasks has improved, with methods performing substantially better on the harder tasks and slightly better on the easy ones.
In 2019 and 2020, the rolling challenge resulted in three methods that superseded the winning results of the 2018 MSD challenge. Within these two follow-up years, two main trends were observed. The first is the continuous, gradual improvement of already well-performing algorithms, such as the heuristics and task fingerprinting of the nnU-Net method. The second is the rise of Neural Architecture Search (NAS) (ref. 32) among the top teams; more specifically, both the third-placed team and the current (as of April 2021) leader (ref. 33) of the rolling challenge used this approach. NAS optimizes the network architecture itself for each task in a fully-automated manner, using a network-configuration fitness function that is optimized independently per task, thus providing an empirical approach to network architectural optimization. Compared to heuristic methods (e.g., nnU-Net), NAS appears to yield improved algorithmic performance at the expense of increased computational cost.
### Conclusion
Machine learning based semantic segmentation algorithms are becoming increasingly general purpose and accurate, but have historically required significant field-specific expertise to use. The MSD challenge was set up to investigate how accurate fully-automated image segmentation learning methods can be on a plethora of tasks with different types of complexity. Results from the MSD challenge have demonstrated that fully-automated methods can now achieve state-of-the-art performance without the need for manual parameter optimization, even when applied to previously unseen tasks. A central hypothesis of the MSD challenge, namely that an algorithm which works well and automatically on several tasks should also work well on other unseen tasks, has been validated among the challenge participants and across tasks. This hypothesis was further corroborated by monitoring the generalizability of the winning method in the two years following the challenge, where we found that nnU-Net achieved state-of-the-art performance on many tasks, even when compared against task-optimized networks. While many classic semantic segmentation problems (e.g., domain shift and label accuracy) remain, and methodological progress (e.g., NAS and better heuristics) will continue pushing the boundaries of algorithmic performance and generalizability, the MSD challenge has demonstrated that the training of accurate semantic segmentation networks can now be fully automated. This commoditization of semantic segmentation allows computationally-versed scientists who lack AI-specific expertise to use these techniques without detailed knowledge of how the models work or how to tune their hyperparameters. However, in order to make the tools easier to use by clinicians and other scientists, the current platforms would need to be wrapped in a graphical user interface, and their installation processes would need to be made simpler.
## Methods
This section is organized according to the EQUATOR (https://www.equator-network.org) guideline BIAS (Biomedical Image Analysis ChallengeS) (ref. 22), a recently published guideline specifically designed for the reporting of biomedical image analysis challenges. It comprises information on the challenge organization and mission, as well as the data sets and assessment methods used to evaluate the submitted results.
### Challenge organization
The Decathlon challenge was organized at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018, held in Granada, Spain. After the main challenge event at MICCAI, a live challenge was opened for submissions which is still open and regularly receives new submissions (more than 500 as of May 15th 2021).
The MSD challenge aimed to test the ability of machine-learning algorithms to accurately segment a large collection of prescribed regions of interest, as defined by ten different data sets, each corresponding to a different anatomical structure (see Fig. 1) and to at least one medical-imaging task (ref. 34). The challenge itself consisted of two phases:
In the first phase, named the development phase, the training cases (comprising images and labels) for seven data sets were released, namely for brain, liver, heart, hippocampus, prostate, lung, and pancreas. Participants were expected to download the data, develop a general-purpose learning algorithm, train the algorithm on each task's training data independently and without human interaction (no task-specific manual parameter settings), run the learned model on each task's test data, and submit the segmentation results. Each team was only allowed to make one submission per day to limit overfitting, and the results were presented in the form of a live leaderboard on the challenge website (http://medicaldecathlon.com/), visible to the public. Note that participants were only able to see the average performance obtained by their algorithm on the test data of the seven development tasks.
The purpose of the second phase of the challenge, named the mystery phase, was to investigate whether algorithms were able to generalize to unseen segmentation tasks. Teams that submitted to the first phase and completed all necessary steps were invited to download three more data sets (images and labels), i.e., hepatic vessels, colon, and spleen. They were allowed to train their previously developed algorithm on the new data, without any modifications to the method itself. Segmentation results of the mystery phase could only be submitted once. A detailed description of the challenge organization is summarized in Appendix A of the Supplementary Material, following the form introduced in ref. 22.
### The Decathlon mission
Medical image segmentation, i.e., the act of labeling or contouring structures of interest in medical-imaging data, is a task of crucial importance, both clinically and scientifically, as it allows the quantitative characterization of regions of interest. When performed by human raters, image segmentation is very time-consuming, thus limiting its clinical usage. Algorithms can be used to automate this segmentation process, but, classically, a different algorithm had to be developed for each segmentation task. The goal of the MSD challenge was to find a single algorithm, or learning system, able to generalize and work accurately across multiple different medical segmentation tasks, without the need for any human interaction.
The tasks of the Decathlon challenge were chosen as a representative sample of real-world applications, so as to test algorithmic generalizability to them. Different axes of complexity were explicitly explored: the type and number of input modalities, the number of regions of interest, their shape and size, and the complexity of the surrounding tissue environment (see Fig. 1). Detailed information on each data set is provided in "Challenge data sets" and Table 2.
### Challenge data sets
Table 2 presents a summary of the ten data sets, including the modality, image series, ROI targets and data set size. A brief description of each data set is provided below.
• Development Phase (1st) contained seven data sets with thirteen target regions of interest in total:
1. Brain: The data set consists of 750 multiparametric magnetic resonance imaging (mp-MRI) scans from patients diagnosed with either glioblastoma or lower-grade glioma. The sequences used were native T1-weighted (T1), post-Gadolinium (Gd) contrast T1-weighted (T1-Gd), native T2-weighted (T2), and T2 Fluid-Attenuated Inversion Recovery (FLAIR). The corresponding target ROIs were the three tumor sub-regions, namely edema, enhancing, and non-enhancing tumor. This data set was selected due to the challenge of locating these complex and heterogeneously-located targets. The Brain data set contains the same cases as the 2016 and 2017 Brain Tumor Segmentation (BraTS) challenges (refs. 36,37,38). The filenames were changed to avoid participants mapping cases between the two challenges.
2. Heart: The data set consists of 30 mono-modal MRI scans of the entire heart acquired during a single cardiac phase (free breathing with respiratory and electrocardiogram (ECG) gating). The corresponding target ROI was the left atrium. This data set was selected due to the combination of a small training data set with large anatomical variability. The data was acquired as part of the 2013 Left Atrial Segmentation Challenge (LASC) (ref. 39).
3. Hippocampus: The data set consists of 195 MRI images acquired from 90 healthy adults and 105 adults with a non-affective psychotic disorder. T1-weighted MPRAGE was used as the imaging sequence. The corresponding target ROIs were the anterior and posterior parts of the hippocampus, defined as the hippocampus proper and parts of the subiculum. This data set was selected due to the precision needed to segment such a small object in the presence of a complex surrounding environment. The data was acquired at the Vanderbilt University Medical Center, Nashville, US.
4. Liver: The data set consists of 201 contrast-enhanced CT images from patients with primary cancers and metastatic liver disease, as a consequence of colorectal, breast, and lung primary cancers. The corresponding target ROIs were the liver and the tumors inside the liver. This data set was selected due to the challenging nature of the significant label imbalance between the large (liver) and small (tumor) target regions of interest (ROIs). The data was acquired at the IRCAD Hôpitaux Universitaires, Strasbourg, France, and contains a subset of patients from the 2017 Liver Tumor Segmentation (LiTS) challenge (ref. 40).
5. Lung: The data set consists of preoperative thin-section CT scans from 96 patients with non-small cell lung cancer. The corresponding target ROI was the tumors within the lung. This data set was selected due to the challenge of segmenting small regions (tumor) in an image with a large field-of-view. The data was acquired via the Cancer Imaging Archive (https://www.cancerimagingarchive.net/).
6. Prostate: The data set consists of 48 prostate multiparametric MRI (mp-MRI) studies comprising T2-weighted, diffusion-weighted, and T1-weighted contrast-enhanced series. A subset of two series, the transverse T2-weighted and the apparent diffusion coefficient (ADC), was selected. The corresponding target ROIs were the prostate peripheral zone (PZ) and the transition zone (TZ). This data set was selected due to the challenge of segmenting two adjoined regions with very large inter-subject variability. The data was acquired at the Radboud University Medical Center, Nijmegen, The Netherlands.
7. Pancreas: The data set consists of 420 portal-venous phase CT scans of patients undergoing resection of pancreatic masses. The corresponding target ROIs were the pancreatic parenchyma and the pancreatic mass (cyst or tumor). This data set was selected due to the label imbalance between large (background), medium (pancreas), and small (tumor) structures. The data was acquired in the Memorial Sloan Kettering Cancer Center, New York, US.
• Mystery Phase (2nd) contained three (hidden) data sets with four target regions of interest in total:
1. Colon: The data set consists of 190 portal-venous phase CT scans of patients undergoing resection of primary colon cancer. The corresponding target ROI was colon cancer primaries. This data set was selected due to the challenge of its heterogeneous appearance and the associated annotation difficulties. The data was acquired in the Memorial Sloan Kettering Cancer Center, New York, US.
2. Hepatic Vessels: The data set consists of 443 portal-venous phase CT scans obtained from patients with a variety of primary and metastatic liver tumors. The corresponding target ROIs were the vessels and tumors within the liver. This data set was selected due to the tubular and connected nature of hepatic vessels neighboring heterogeneous tumors. The data was acquired in the Memorial Sloan Kettering Cancer Center, New York, US.
3. Spleen: The data set consists of 61 portal-venous phase CT scans from patients undergoing chemotherapy treatment for liver metastases. The corresponding target ROI was the spleen. This data set was selected due to the large variations in the field-of-view. The data was acquired in the Memorial Sloan Kettering Cancer Center, New York, US.
### Assessment method
#### Assessment of competing teams
Two widely known semantic segmentation metrics were used to evaluate the submitted approaches, namely the DSC (ref. 9) and the Normalized Surface Distance (NSD) (ref. 10), both computed on 3D volumes. The implementation of both metrics can be downloaded in the form of a Jupyter notebook from the challenge website (http://www.medicaldecathlon.com, section Assessment Criteria). A more memory-efficient implementation of the NSD metric, which has recently been made available, can be obtained by computing the distance transform map using https://evalutils.readthedocs.io/en/latest/modules.html#evalutils.stats.distance_transform_edt_float32 rather than scipy.ndimage.morphology.distance_transform_edt. The metrics DSC and NSD were chosen due to their popularity, rank stability (ref. 34), and smooth, well-understood and well-defined behavior when ROIs do not overlap. Having simple and rank-stable metrics also allows the statistical comparison between methods. For the NSD, tolerance values were based on clinical feedback and consensus, and were chosen by the clinicians segmenting each organ; each value represents what they would consider an acceptable error for the segmentation they were performing. The tolerance was defined at the task level and was the same for all targets of each task. The following values were chosen for the individual tasks (in mm): Brain: 5; Heart: 4; Hippocampus: 1; Liver: 7; Lung: 2; Prostate: 4; Pancreas: 5; Colon: 4; Hepatic vessels: 3; Spleen: 3. It is important to note that the proposed metrics are neither task-specific nor task-optimal, and thus they do not fulfill the necessary criteria for clinical algorithmic validation of each task, as discussed in "Challenge assessment".
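As an illustration of the two metrics, a minimal Python sketch follows (not the official notebook implementation; it assumes boolean 3D masks and isotropic 1 mm voxel spacing):

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def nsd(pred, gt, tolerance_mm):
    """Normalized Surface Distance: the fraction of boundary voxels of each
    mask that lie within `tolerance_mm` of the other mask's boundary.
    Simplified sketch assuming isotropic 1 mm voxel spacing."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # A mask's boundary is the mask minus its binary erosion.
    pred_border = pred ^ ndimage.binary_erosion(pred)
    gt_border = gt ^ ndimage.binary_erosion(gt)
    # Distance from every voxel to the nearest boundary voxel of each mask.
    dist_to_pred = ndimage.distance_transform_edt(~pred_border)
    dist_to_gt = ndimage.distance_transform_edt(~gt_border)
    close_pred = (dist_to_gt[pred_border] <= tolerance_mm).sum()
    close_gt = (dist_to_pred[gt_border] <= tolerance_mm).sum()
    denom = pred_border.sum() + gt_border.sum()
    return (close_pred + close_gt) / denom if denom else 1.0
```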
A so-called significance score was determined for each algorithm $a$, separately for each task/target ROI $c_i$ and metric $m_j \in \{\text{DSC}, \text{NSD}\}$, and referred to as $s_{i,j}(a)$. Similarly to what was used to infer the ranking across the different BraTS tasks (ref. 41), the significance score was computed according to the following four-step process (a minimal code sketch follows the list):
1. Performance assessment per case: Determine the performance $m_j(a_l, t_{ik})$ of all algorithms $a_l$, with $l \in \{1, \ldots, N_A\}$, for all test cases $t_{ik}$, with $k \in \{1, \ldots, N_i\}$, where $N_A$ is the number of competing algorithms and $N_i$ is the number of test cases in competition $c_i$. Set $m_j(a_l, t_{ik})$ to 0 if its value is undefined.
2. Statistical tests: Perform a Wilcoxon signed-rank pairwise statistical test between algorithms $(a_l, a_{l'})$, with values $m_j(a_l, t_{ik}) - m_j(a_{l'}, t_{ik})$, $\forall k \in \{1, \ldots, N_i\}$.
3. Significance scoring: $s_{i,j}(a_l)$ then equals the number of algorithms performing significantly worse than $a_l$ according to the statistical test (per-comparison $\alpha = 0.05$, not adjusted for multiplicity).
4. Significance ranking: The ranking is computed from the scores $s_{i,j}(a_l)$, with the highest score (rank 1) corresponding to the best algorithm. Note that shared scores/ranks are possible. If a task has multiple target ROIs, the ranking scheme is applied to each ROI separately, and the final ranking per task is computed as the mean significance rank.
The final score for each algorithm over all tasks of the development phase (the seven development tasks) and over all tasks of the mystery phase (the three mystery tasks) was computed as the average of the respective tasks' significance ranks. The full validation algorithm was defined and released prior to the start of the challenge and is available on the Decathlon website (http://medicaldecathlon.com/files/MSD-Ranking-scheme.pdf).
To investigate ranking uncertainty and stability, bootstrapping methods were applied with 1000 bootstrap samples, as described in ref. 34. The statistical analysis was performed using the open-source R toolkit challengeR (https://phabricator.mitk.org/source/challenger/), version 1.0.2 (ref. 17), for analyzing and visualizing challenge results. The original rankings computed for the development and mystery phases were compared to the ranking lists based on the individual bootstrap samples. The correlation of pairwise rankings was determined via Kendall's τ (ref. 18), which provides values between −1 (for reverse ranking order) and 1 (for identical ranking order). The source code for generating the results presented in "Results" and the Appendix is publicly available (https://phabricator.mitk.org/source/msd_evaluation/).
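The bootstrap stability analysis can be sketched as follows (again an illustration rather than the challengeR implementation; `rank_fn` stands for any ranking function, e.g., the significance-ranking sketch above):

```python
import numpy as np
from scipy.stats import kendalltau

def ranking_stability(scores, rank_fn, n_boot=1000, seed=0):
    """Resample test cases (columns of `scores`) with replacement, recompute
    the ranking, and compare each bootstrap ranking to the original one via
    Kendall's tau (1 = identical order, -1 = reversed order)."""
    rng = np.random.default_rng(seed)
    original = rank_fn(scores)
    n_cases = scores.shape[1]
    taus = []
    for _ in range(n_boot):
        cols = rng.integers(0, n_cases, size=n_cases)
        tau, _ = kendalltau(original, rank_fn(scores[:, cols]))
        taus.append(tau)
    return np.array(taus)  # e.g., inspect np.median(taus) as a stability summary
```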
#### Monitoring of the challenge winner and algorithmic progress
To investigate our hypothesis that a method capable of performing well on multiple tasks will generalize to an unseen task, and potentially even outperform a custom-designed task-specific solution, we monitored the winner of the challenge for a period of two years. Specifically, we reviewed the rank analysis and leaderboards presented in the corresponding article (ref. 8), as well as the leaderboards of challenges organized on the grand-challenge.org website in 2020. We also reviewed further articles mentioning the new state-of-the-art method nnU-Net (ref. 19). Finally, as the MSD challenge submission system was reopened after the challenge event (denoted the "MSD Live Challenge"), we monitored submissions for new algorithmic approaches achieving state-of-the-art performance, in order to probe new areas of scientific interest and development.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
Challenge data set. The MSD data set is publicly available under a Creative Commons CC BY-SA 4.0 license, allowing broad (including commercial) use. The training data used in this study is available at http://medicaldecathlon.com/. The test data of the challenge cannot be released since the live challenge is still open and users are able to submit their results at any time; we currently have no intention of closing the challenge.
Challenge assessment data. The raw challenge assessment data used to calculate the challenge rankings cannot be made publicly available due to privacy reasons; it contains the DSC and NSD values for every participating team, for every task and target region. However, the aggregated results can be found in Table 1 and Supplementary Tables 2–11, as well as at https://phabricator.mitk.org/source/msd_evaluation/ in the folders descriptive-statistics, mean-values-per-subtask and rankings-per-subtask.
## Code availability
The implementation of the metrics used in the challenge, namely the DSC and NSD, were provided as a Python Notebook42. The significance rankings have been computed with the R package challengeR, version 1.0.2, which is publicly available: https://phabricator.mitk.org/source/challenger/. Finally, the code to compute the final rankings and all tables and figures of this paper can be found here: https://phabricator.mitk.org/source/msd_evaluation/.
## References
1. Litjens, G. et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci. Rep. 6, 1–11 (2016).
2. Poplin, R. et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat. Biomed. Eng. 2, 158–164 (2018).
3. Stokes, J. M. et al. A deep learning approach to antibiotic discovery. Cell 180, 688–702 (2020).
4. Ayache, N. & Duncan, J. 20th anniversary of the medical image analysis journal (media). Med. Image Anal. 33, 1–3 (2016).
5. Liang, S. et al. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur. Radiol. 29, 1961–1967 (2019).
6. Assefa, D. et al. Robust texture features for response monitoring of glioblastoma multiforme on T1-weighted and T2-FLAIR MR images: a preliminary investigation in terms of identification and segmentation. Med. Phys. 37, 1722–1736 (2010).
7. Maier-Hein, L. et al. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217 (2018). https://doi.org/10.1038/s41467-018-07619-7
8. Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J. & Maier-Hein, K. H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2021).
9. Dice, L. R. Measures of the amount of ecologic association between species. Ecology 26, 297–302 (1945).
10. Nikolov, S. et al. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res. 23, e26151 (2021).
11. Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation, In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, 234–241 (Springer, 2015).
12. Kingma, D. P. & Ba, J. Adam: A method for stochastic optimization. arXiv Preprint at https://arxiv.org/abs/1412.6980 (2014).
13. Zhang, T. Solving large scale linear prediction problems using stochastic gradient descent algorithms, In Proc. Twenty-First International Conference on Machine Learning. 116 (Association for Computing Machinery, 2004).
14. Xia, Y. et al. 3d semi-supervised learning with uncertainty-aware multi-view co-training, In Proc. IEEE Winter Conference on Applications of Computer Vision, 3646–3655 (IEEE Computer Society, 2020).
15. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition, In Proc. IEEE Conference on Computer Vision and Pattern Recognition. 770–778 (IEEE, 2016).
16. He, X., Zhao, K. & Chu, X. Automl: a survey of the state-of-the-art. Knowl. Based Syst. 212, 106622 (2021).
17. Wiesenfarth, M. et al. Methods and open-source toolkit for analyzing and visualizing challenge results. Sci. Rep. 11, 1–15 (2021).
18. Kendall, M. G. A new measure of rank correlation. Biometrika 30, 81–93 (1938).
19. Ma, J. Cutting-edge 3d medical image segmentation methods in 2020: Are happy families all alike? arXiv Preprint at https://arxiv.org/abs/2101.00232 (2021).
20. Reinke, A. et al. How to exploit weaknesses in biomedical challenge design and organization, In Proc. International Conference on Medical Image Computing and Computer-Assisted Intervention, 388–395 (Springer, 2018).
21. Joskowicz, L., Cohen, D., Caplan, N. & Sosna, J. Inter-observer variability of manual contour delineation of structures in ct. Eur. Radiol. 29, 1391–1399 (2019).
22. Maier-Hein, L. et al. Bias: transparent reporting of biomedical image analysis challenges. Med. Image Anal. 66, 101796 (2020).
23. Reinke, A. et al. Common limitations of image processing metrics: A picture story. arXiv Preprint at https://arxiv.org/abs/2104.05642 (2021).
24. Breslow, N. E. & Clayton, D. G. Approximate inference in generalized linear mixed models. J. Am. Stat. Assoc. 88, 9–25 (1993).
25. Codella, N. C. F. et al. Skin lesion analysis toward melanoma detection: a challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In Proc. IEEE 15th International Symposium on Biomedical Imaging, 168–172 (ISBI 2018).
26. Ross, T. et al. Comparative validation of multi-instance instrument segmentation in endoscopy: results of the ROBUST-MIS 2019 challenge. Med. Image Anal. 70, 101920 (2021).
27. Campello, V. M. et al. Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge. IEEE Trans. Med. Imag. 40, 3543–3554 (2021).
28. Campadelli, P., Casiraghi, E. & Esposito, A. Liver segmentation from computed tomography scans: a survey and a new algorithm. Artif. Intel. Med. 45, 185–196 (2009).
29. Sirinukunwattana, K. et al. Gland segmentation in colon histology images: the glas challenge contest. Med. Image Anal. 35, 489–502 (2017).
30. Re, T. J. et al. Enhancing pancreatic adenocarcinoma delineation in diffusion derived intravoxel incoherent motion f-maps through automatic vessel and duct segmentation. Magn. Reson. Med. 66, 1327–1332 (2011).
31. Bello, I. et al. Revisiting resnets: Improved training and scaling strategies. arXiv Preprint at https://arxiv.org/abs/2103.07579 (2021).
32. Elsken, T., Metzen, J. H. & Hutter, F. et al. Neural architecture search: a survey. J. Mach. Learn. Res. 20, 1–21 (2019).
33. He, Y., Yang, D., Roth, H., Zhao, C. & Xu, D. Dints: differentiable neural network topology search for 3d medical image segmentation. CoRR abs/2103.15954. http://arxiv.org/abs/2103.15954 (2021).
34. Maier-Hein, L. et al. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217 (2018).
35. Simpson, A.L. et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv e-prints http://arxiv.org/abs/1902.09063 (2019).
36. Menze, B. H. et al. The multimodal brain tumor image segmentation benchmark (brats). IEEE Trans. Med. Imag. 34, 1993–2024 (2015).
37. Bakas, S. et al. Advancing the cancer genome atlas glioma mri collections with expert segmentation labels and radiomic features. Sci. Data 4, 1–13 (2017).
38. Bakas, S. et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge. arXiv Preprint at https://arxiv.org/abs/1811.02629 (2018b).
39. Tobon-Gomez, C. et al. Benchmark for algorithms segmenting the left atrium from 3d ct and mri datasets. IEEE Trans. Med. Imag. 34, 1460–1473 (2015).
40. Bilic, P. et al. The Liver Tumor Segmentation Benchmark (LiTS). arXiv e-prints http://arxiv.org/abs/1901.04056 (2019).
41. Bakas, S. et al. Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge. CoRR abs/1811.02629. http://arxiv.org/abs/1811.02629 (2018a).
42. The MSD Challenge Organisers. MSD metrics jupyter notebook. http://medicaldecathlon.com/files/Surface_distance_based_measures.ipynb (2018).
43. Milletari, F., Navab, N. & Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation, In Proc. Fourth International Conference on 3D Vision (3DV), 565–571. (IEEE, 2016).
44. Roy, A.G., Conjeti, S., Navab, N. & Wachinger, C. Quicknat: Segmenting MRI neuroanatomy in 20 seconds. CoRR abs/1801.04161. http://arxiv.org/abs/1801.04161 (2018).
45. Kamnitsas, K. et al. Deepmedic for brain tumor segmentation. In Proc. International workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 138–149 (Springer, 2016).
## Acknowledgements
This work was supported by the UK Research and Innovation London Medical Imaging & Artificial Intelligence Center for Value-Based Healthcare. Investigators received support from the Wellcome/EPSRC Center for Medical Engineering (WT203148) and the Wellcome Flagship Program (WT213038). The research was also supported by the Bavarian State Ministry of Science and the Arts, coordinated by the Bavarian Research Institute for Digital Transformation, and by the Helmholtz Imaging Platform (HIP), a platform of the Helmholtz Incubator on Information and Data Science. Team CerebriuDIKU gratefully acknowledges support from the Independent Research Fund Denmark through the project U-Sleep (project number 9131-00099B). R.M.S. is supported by the Intramural Research Program of the National Institutes of Health Clinical Center. G.L. reported research grants from the Dutch Cancer Society, the Netherlands Organization for Scientific Research (NWO), and HealthHolland during the conduct of the study, as well as grants from Philips Digital Pathology Solutions and consultancy fees from Novartis and Vital Imaging, outside the submitted work. Research reported in this publication was partly supported by the National Institutes of Health (NIH) under award numbers NCI:U01CA242871, NCI:U24CA189523, and NINDS:R01NS042645. The content of this publication is solely the responsibility of the authors and does not represent the official views of the NIH. Henkjan Huisman is receiving grant support from Siemens Healthineers. James Meakin received grant funding from AWS. The method presented by BCVUniandes was made in collaboration with Silvana Castillo, from Universidad de los Andes. We would like to thank Minu D. Tizabi for proof-reading the paper.
## Author information
Authors
### Contributions
M.A. worked on the conceptual design and data preparation of the challenge, gave challenge day-to-day support as co-organizer, coordinated the work, validated the participating methods and wrote the document. A.R. worked on the conceptual design of the challenge, coordinated the work, performed the statistical analysis of participating methods, designed the figures and wrote the document. S.B. donated the brain tumor data set and co-organized the challenge. K.F. donated the lung tumors data set and co-organized the challenge. A.K.S. worked on the conceptual design of the challenge, led the statistical analysis committee and co-organized the challenge. B.A.L. worked on the conceptual design, was a member of the metrics committee, co-organized the challenge and donated the hippocampus data set. G.L. donated the prostate data set and co-organized the challenge. B.M. donated the brain and liver tumors data sets for the challenge, was a member of the statistics and metrics committee and co-organized the challenge. O.R. worked on the conceptual design of the challenge, was a member of the metrics committee and co-organized the challenge. R.M.S. worked on the conceptual design and co-organized the challenge. B.v.G. worked on the conceptual design, co-organized the challenge and donated the prostate data set. A.L.S. worked on the conceptual design, co-organized the challenge and donated the pancreas, colon cancer, hepatic vessels and spleen data sets for the challenge. M.B., P.B., P.F.C., R.K.G.D., M.J.G., S.H.H., H.H., W.R.J., M.K.M., S.N., J.S.G.P., K.R., C.T.G., and E.V. donated data for the challenge. H.H. and J.A.M. supported the grand-challenge.org submissions. S.O. co-organized the challenge. M.W. implemented the toolkit for the statistical ranking analysis for the challenge. P.A., B.B., S.C., L.D., J.F., B.H., F.I., Y.J., F.J., N.K., I.K., D.M., A.P., B.P., M.P., R.R., O.R., I.S., W.S., J.S., C.W., L.W., Y.W., Y.X., D.X., Z.X., and Y.Z. participated in the challenge. L.M.H. initiated and co-organized the challenge, worked on the conceptual design, was a member of the statistical analysis committee, coordinated the work and wrote the document. M.J.C. initiated and organized the challenge (lead), coordinated the work and wrote the document.
### Corresponding author
Correspondence to Michela Antonelli.
## Ethics declarations
### Competing interests
No funding contributed explicitly to the organization and running of the challenge. The challenge award was kindly provided by NVIDIA; however, NVIDIA did not influence the design or running of the challenge, as they were not part of the organizing committee. R.M.S. received royalties from iCAD, Philips, ScanMed, Translation Holdings, and PingAn. Individual funding sources unrelated to the challenge are listed in the Acknowledgements section. The remaining authors declare no competing interests.
## Peer review
### Peer review information
Nature Communications thanks Elena Casiraghi and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Antonelli, M., Reinke, A., Bakas, S. et al. The Medical Segmentation Decathlon. Nat Commun 13, 4128 (2022). https://doi.org/10.1038/s41467-022-30695-9
|
|
# Radioactive nuclei with decay constant 0.5/s are being produced at a constant rate of 100 nuclei/s. If at t = 0 there were no nuclei, the time when there are 50 nuclei is:
$(a)\;1\;\text{s}\qquad(b)\;2\ln\left(\frac{4}{3}\right)\;\text{s}\qquad(c)\;\ln 2\;\text{s}\qquad(d)\;\ln\left(\frac{4}{3}\right)\;\text{s}$
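A worked derivation, for reference (my own addition, not part of the original question page): with production rate $R = 100$ nuclei/s and decay constant $\lambda = 0.5/\text{s}$, the population obeys a balance of production and decay,

$$\frac{dN}{dt} = R - \lambda N,\qquad N(0) = 0 \;\Rightarrow\; N(t) = \frac{R}{\lambda}\left(1 - e^{-\lambda t}\right) = 200\left(1 - e^{-0.5t}\right).$$

Setting $N(t) = 50$ gives $1 - e^{-0.5t} = \frac{1}{4}$, so $e^{-0.5t} = \frac{3}{4}$ and $t = 2\ln\left(\frac{4}{3}\right)\;\text{s}$, i.e., option (b).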
|
|
8th Grade Math - Calculator Active
### 8th Grade Math - Calculator Active Sample
Chapter: 3 Standard: 8.NS.2 DOK: 2 1 pt
2.
Which integer is $\sqrt{78}$ closest to?
Chapter: 1 Standard: 8.EE.1 DOK: 3 1 pt
3.
Simplify: $12x^5(2x^{-4})^{-3}$
Chapter: 5 Standard: 8.F.1 DOK: 1 1 pt
17.
Which ordered pair could be part of the linear function containing the points (–6, –8), (0, 0), and (3, 4)?
Chapter: 11 Standard: 8.G.5 DOK: 2 1 pt
31.
$\overline{AD}$ is parallel to $\overline{CE}$. What is the measure of the three missing angles?
Chapter: 8 Standard: 8.SP.4 DOK: 2 1 pt
42.
Matt predicted he had a 30% chance of making more than \$30,000/year by earning a Bachelor's degree rather than just having a high school diploma. The table below is taken from Matt's local newspaper. According to his local newspaper, would Matt’s prediction be correct?
|
|
# Reject worker after match
In the DMP family of labor models, we typically have $V$ denoting the value of a vacancy and $J$ denoting the value of a filled job. $V$ typically becomes 0 through free entry.
Say a firm gets matched with a worker of value $J_i$ and could reject him in order to search again. For rejection to be worthwhile, it would need to hold that $V > J_i$. But as (through free entry) $V = 0$, that would only be the case if the worker has negative value to the firm.
However, in the real world, we see firms rejecting applicants and searching for new applicants. How do DMP models generate this observation?
• The papers in this area I've read were a bit ambiguous about what exactly in the real world was the match part and which was the search part. I always assumed that the application and interview part of hiring were part of search, not part of the match as you indicate.
– BKay
May 1 '15 at 15:33
|
|
# Meeting Details
For more information about this meeting, contact Robert Vaughan, Mihran Papikian, Ae Ja Yee.
Title: Galois groups of Mori polynomials, semistable curves and monodromy. Algebra and Number Theory Seminar. Speaker: Yuri Zarhin, Penn State University. Abstract: We study the monodromy of a certain class of semistable hyperelliptic curves that was introduced by Shigefumi Mori forty years ago. Using ideas of Chris Hall, we prove that the corresponding monodromy groups are "(almost) as large as possible".
### Room Reservation Information
Room Number: MB106
Date: 09/04/2014
Time: 11:15am - 12:05pm
|
|
Search papers using Gene Sets
8.1 years ago
Opt ▴ 50
Hi,
I heard about a tool which lets you search for papers based on a set of genes that you input and it returns papers that have mentioned a significant set of those genes.
Does anyone know what tool this is?
Thanks
gene nlp gene-set enrichment tool • 1.5k views
8.1 years ago
tomluec ▴ 60
In R you can do this using the org.Hs.eg.db annotation package. Below is an example:
source("http://bioconductor.org/biocLite.R")
biocLite("org.Hs.eg.db")
library("org.Hs.eg.db")
# mapped_genes holds all the Entrez gene IDs covered by the org.Hs.egPMID map
# (Hs refers to Homo sapiens).
mapped_genes <- mappedkeys(org.Hs.egPMID)
# entrez2Pmid is a list mapping each Entrez gene ID to a vector of PubMed IDs.
entrez2Pmid <- as.list(org.Hs.egPMID[mapped_genes])
# Now, given an Entrez gene ID (a character string; myEntrezGene is a placeholder),
# you can look up the relevant papers by doing:
entrez2Pmid[[myEntrezGene]]
Here is a pastebin of the code since I stink at using the biostars code formatter
There is a tutorial here.
Also you can just use http://idconverter.bioinfo.cnio.es/IDconverter.php. Although I've found that sometimes the results seem to be a subset of the results you get using the method outlined above.
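A Python alternative (my own sketch, not part of the original answers) is to query NCBI's E-utilities through Biopython; the email address and the example gene ID are placeholders:

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

def papers_for_gene(entrez_gene_id):
    """Return the PubMed IDs linked to a given Entrez Gene ID."""
    handle = Entrez.elink(dbfrom="gene", db="pubmed", id=entrez_gene_id)
    record = Entrez.read(handle)
    handle.close()
    pmids = []
    for linkset in record[0].get("LinkSetDb", []):
        pmids.extend(link["Id"] for link in linkset["Link"])
    return pmids

# Example with a placeholder gene ID:
# print(papers_for_gene("12345"))
```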
8.1 years ago
tomluec ▴ 60
Also I haven't used it much but chilibot (does anybody know of a more updated approach?) does some NLP on papers mentioning genes in a gene set to build a biological network.
8.1 years ago
dario.garvan ▴ 490
The software is called GeneValorization and is available as a Java Web Start application.
|
|
(50g) Simple summation program that returns large integers
03-22-2017, 03:44 PM (This post was last modified: 03-22-2017 03:47 PM by pier4r.)
Post: #1
pier4r Senior Member Posts: 2,017 Joined: Nov 2014
(50g) Simple summation program that returns large integers
So, in my little explorations reusing the HP 50g after years, I found out that the \GS (sigma) built-in function of the 50g works on real or complex numbers, and returns only those two types of numbers.
After the suggestion of John Keit, I decided to create a program that simulates the summation function but returns all the digits when the computation involves only integers and produces an integer. So I wrote the following complement to the built-in summation function (more in the git repo on assembla.com).
Code:
%%HP: T(0)A(D)F(.);
@ You may edit the T(0)A(D)F(.) parts.
@ The earlier parts of the line are used by Debug4x.
@ Remarks: flags -3 and -105 will be cleared and not restored, since if they
@ were set and then restored, the result would be converted to a real number,
@ hiding digits.
@ Arguments on the stack:
@   4: index expression
@   3: starting index
@   2: ending index
@   1: expression to evaluate
@ Example: sum from n=1 to n=2014 of n^3
@   4: 'n'   3: 1   2: 2014   1: 'n^3' (or equation writer object)
\<< 0 \-> indexExpression @ already an expression between '', so it can be stored without ''
        startIndex endIndex expression resultingSum
  \<<
    PUSH      @ save the system flags
    -3 CF     @ symbolic (non-numeric) mode when symbols are used, for better integer precision
    -105 CF   @ exact mode, no approximations
    startIndex endIndex FOR counter
      counter indexExpression STO
      expression EVAL 'resultingSum' STO+
    NEXT
    resultingSum
    @ POP would restore the system flags; see Remarks above for why it is skipped
  \>>
\>>
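As a quick cross-check of the example in the header comments (my own addition, not part of the original post), the same exact-integer summation in Python, where integers are arbitrary-precision by default:

```python
# Exact integer summation, analogous to running the RPL program with
# flags -3 and -105 cleared: no floating-point rounding, all digits kept.
total = sum(n**3 for n in range(1, 2015))  # n = 1 .. 2014
print(total)
```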
I suppose that such a complement could be expanded into a proper function: handling errors, better flag handling, rational numbers composed of integers, double summations, multiple indices, index steps other than +1, and so on. As it is, the program is pretty rough but may help (it helped me); if someone wants to expand the idea, feel free to do it. Shared knowledge is the best.
I'm also pretty sure that someone has worked on programs like this already, because the number of amazing math libraries for Saturn-compatible calculators is incredible; I just did not find anything on this topic with the search string "site:http://www.hpmuseum.org/forum summation OR sum 50g". So if someone knows of existing programs that are better than the one I made, please share them in a reply to this thread!
Side note on this section: would be nice to organize a naming convention for the titles, since I was not sure about it. I just followed the most common pattern.
Wikis are great, Contribute :)
03-23-2017, 10:01 AM
Post: #2
pier4r Senior Member Posts: 2,017 Joined: Nov 2014
RE: (50g) Simple summation program that returns large integers
Update. The built-in summation function works fine with the proper flags cleared (as they also need to be cleared in my program). See here.
Wikis are great, Contribute :)
|
|
# During the replication of double-stranded DNA in E. coli, several problems must be overcome by the replication machinery. Briefly describe
###### Question:
During the replication of double-stranded DNA in E. coli, several problems must be overcome by the replication machinery. Briefly describe the mechanism or accessory protein factors by which the following problem is addressed: both strands of the DNA duplex are replicated at the same time at the replication fork, but the strands are antiparallel.
|
|
StudyBuddy allows FSU students to post and search for study sessions on campus. Users enter relevant session information like class, location, time, and duration. Once posted, a session can be found by searching (for individual classes) or browsing (for all classes). Users can also edit the posts they own. Results are displayed on a map of FSU.
Android Market: FSU Study Buddy
FSU Study Buddy was a team project completed by Sebastian Chande, Ernesto Serrano, and Matthew Husted.
This post is also located on the FSU Mobile Lab Website!
LaTeX is a document preparation system for high-quality typesetting. Unlike word processors, LaTeX allows you to focus on the content while specifying the formatting in a “language” similar to XML. Once the document is complete, you compile it, much as you would with many programming languages.

While most Linux distros bundle a version of LaTeX, Mac and Windows users will have to download and install the software that compiles the document.
Mac: MacTeX
Windows: proTeXt
After you install the needed software you can take one of two routes. You can either learn the language from scratch using guides or use templates to get a head start.
In our case we are going to use templates located at http://www.rpi.edu/dept/arc/training/latex/resumes/ to create a resume.
On the site above there are many templates to choose from. No matter which one you choose, you will need to download the res.cls file to the same directory as your .tex file. res.cls is a class file needed for formatting resumes.

When you choose and download a .tex file, you can edit it in any text editor, such as Notepad++, Emacs, or Vim (not Microsoft Word).
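As a rough illustration, a minimal res.cls document looks something like the sketch below. This is my own hedged example rather than one of the RPI templates (the exact commands and entries vary from template to template):

\documentclass[margin]{res}   % res.cls must sit in the same directory
\begin{document}
\name{Jane Doe}               % printed at the top of the resume
\address{123 Example St. \\ Tallahassee, FL}
\begin{resume}
\section{EDUCATION}
B.S. in Computer Science, Florida State University
\section{EXPERIENCE}
{\bf FSU Mobile Lab} \\ Developed Android applications and wrote tutorials.
\end{resume}
\end{document}

Save it next to res.cls (e.g. as resume.tex) and compile it as described next.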
Once you finish editing your files you can open the terminal and use the command
$ pdflatex yourfile.tex

A PDF will be created with the same name as your .tex file in that directory. This is just one of the many uses for LaTeX.

Few Extra Things

Underline: \underline{This text is underlined}
Bold: {\bf This text is bold}
Italics: \emph{This text is italicized}
Uppercase: \uppercase{THIS TEXT IS UPPERCASE}

This post is also located on the FSU Mobile Lab Website!

The Android Debug Bridge (ADB) is a command line program that helps developers communicate with USB-connected Android devices or emulators. The ADB is a good way to install/uninstall apps and access the files on your device.

Since the ADB comes bundled with the Android SDK, it should be located in the Android-SDK folder on your computer. Depending on the OS you are using, the Android-SDK folder will be named either android-sdk-macosx, android-sdk-linux, or android-sdk-windows. The path is:

/<path to sdk folder>/android-sdk-<platform you are on>/platform-tools/

For all commands on Windows the “./” is not needed. An example path is shown below, where /Applications/eclipse/ is the folder location and macosx is the platform.

ADB devices is a way to view a list of the currently attached devices, both USB-connected and emulators.

$ ./adb devices
ADB shell opens an interactive command-line shell on the connected device or emulator.

$ ./adb shell

ADB push allows you to move files or directories from your local machine to a USB-connected device or emulator.

$ ./adb push <file location on local machine> <destination on device>
ADB pull allows you to move files or directories from a USB-connected device or emulator to your local machine.

$ ./adb pull <file location on device> <destination on local machine>

ADB install allows you to install an APK to a USB-connected device or emulator.

$ ./adb install <apk location on local machine>
ADB uninstall allows you to uninstall an app on a USB-connected device or emulator.
\$ ./adb uninstall <name of package you want to uninstall>
These are just a few of the many functions of the ADB. For a complete list you can use the “adb help” command to print all commands to the screen.
This post is also located on the FSU Mobile Lab Website!
I am using this website as a way to reference projects I am currently working on, have worked on, or plan to work on in the future. These projects may include websites, mobile apps, or other programming-related topics. As an undergrad at Florida State University, some of the material may be school-related while some may be completely separate from my education.
|
|
# How to derive the number of pairwise combinations of a set of factors?
I am trying to understand pairwise testing.
How many combinations of tests would be there for example, if
a can take values from 1 to m
b can take values from 1 to n
c can take values from 1 to p
a, b, and c can take m, n, and p distinct values respectively. What is the total number of pairwise combinations possible?
With a pairwise testing tool that I am testing, I am getting 40 results for m = n = p = 6. I am trying to mathematically understand how I get 40 values.
Not sure what you are asking. Maybe an example to make it clearer? – Aryabhata Sep 8 '10 at 18:45
@Moron: updated. – Lazer Sep 8 '10 at 18:57
Pairwise testing tests for all possible 2-way interactions efficiently -- I gave a quick overview here: http://cstheory.stackexchange.com/questions/891/
You are looking for strength 2 covering arrays. In each pair of columns every pair of symbols occurs -- this ensures all 2-way interactions are observed in some way. Here's a very simple example of a covering array of strength 2 with 2 columns:
11
12
21
22
12
What castel has drawn is essentially the Latin square:
123456
612345
561234
456123
345612
234561
If you look at each entry and write the list (r,c,s), where r is the row index, c is the column index, and s is the symbol, you will construct an orthogonal array (as depicted below) -- a covering array of strength 2 with the minimum number of rows (36).
111
122
133
...
661
In fact, Latin squares exist for all orders n. So if you have three columns (e.g. three variables) and n symbols for each variable, then you can always find a strength 2 covering array with n² rows.

Many combinatorial designs give rise to particularly efficient covering arrays. Strength 2 covering arrays with more than three columns and n² rows are equivalent to sets of mutually orthogonal Latin squares (the reference shows the construction).
In your case, if you have 40 results, then you are not using the most efficient covering array.
After reading this page, it seems that pairwise testing requires a set of test cases in which every pair of values from any two of the n categories occurs at least once among the test case n-tuples. In the present case, the problem is to find a minimal subset of the 6x6x6 = 216 total triples (a,b,c) such that
• each pair of values for a and b occurs at least once, i.e. (a,b,*),
• each pair of values for a and c occurs at least once, i.e. (a,*,c),
• each pair of values for b and c occurs at least once, i.e. (*,b,c).
Any subset satisfying these requirements must have at least 36 elements just to satisfy the (a,b,*) requirement. In the present case I think 36 test cases are also sufficient, as in the following set of triples:
(1, 1, 1), (1, 2, 2), (1, 3, 3), (1, 4, 4), (1, 5, 5), (1, 6, 6)
(2, 1, 6), (2, 2, 1), (2, 3, 2), (2, 4, 3), (2, 5, 4), (2, 6, 5)
(3, 1, 5), (3, 2, 6), (3, 3, 1), (3, 4, 2), (3, 5, 3), (3, 6, 4)
(4, 1, 4), (4, 2, 5), (4, 3, 6), (4, 4, 1), (4, 5, 2), (4, 6, 3)
(5, 1, 3), (5, 2, 4), (5, 3, 5), (5, 4, 6), (5, 5, 1), (5, 6, 2)
(6, 1, 2), (6, 2, 3), (6, 3, 4), (6, 4, 5), (6, 5, 6), (6, 6, 1)
In this example each of the three kinds of pairs occurs once and only once, i.e. there is no overlap. I don't think this will be possible in general, so it might not always be easy to come up with minimal subsets that cover all the cases.
This works for m=n=p, just take c = (a+b) mod m. (Or any other multiplication table of a group with m elements :). – yatima2975 Sep 9 '10 at 15:47
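yatima2975's construction is easy to check by machine. The sketch below (my own illustration, not from the original thread) builds the 36-row array for m = n = p = 6 via c = (a + b) mod m and verifies strength-2 coverage:

from itertools import product

def covering_array(m):
    # Latin-square construction: one row (a, b, (a + b) mod m) per pair (a, b).
    return [(a, b, (a + b) % m) for a in range(m) for b in range(m)]

def covers_all_pairs(rows, m):
    # Strength 2: every pair of symbols must appear in every pair of columns.
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        if {(row[i], row[j]) for row in rows} != set(product(range(m), repeat=2)):
            return False
    return True

rows = covering_array(6)
print(len(rows))                  # 36, versus 6**3 = 216 exhaustive triples
print(covers_all_pairs(rows, 6))  # True

As noted above, a tool reporting 40 rows for this case is therefore a few rows short of the optimal 36.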
If each parameter had $10$ choices you'd be testing $300$ vs $1000$ combinations, namely hold $\rm a$ constant and vary $\rm b,c$ through $10\cdot 10 = 100$ values. Similarly hold $\rm b$ constant; then $\rm c$. As the number of variables $\rm k$ increases you get better savings, roughly $\rm (k N)^2$ vs. $\rm N^k$, where $\rm N =$ max domain size. For QA purposes usually such rough upper bounds suffice. Do you have an intended application where you need something more precise? If so, perhaps you should reveal some further details, e.g. the distribution of the sizes of the domains, etc.
EDIT: After reviewing your latest revision, it appears that the following web pages may be of interest: Pairwise Testing, which refers to various Taguchi methods such as those here. See also these links to introductions to combinatorial testing.
|
|
# QED Calculations
1. Jun 20, 2007
### ObsessiveMathsFreak
I'm not sure if this is the right forum for this topic, so apologies if I got it wrong.
I've been reading the Feynman Lectures on Physics. In it, Feynman states that though Quantum Electrodynamics is highly successful, it is still extremely difficult to evaluate the equations to obtain a theoretical result to compare to experiment. I believe he said it was the integrals that were the difficult part (I imagine this would indeed be the case).

The Feynman Lectures were written/given in the sixties, I believe, but I've also seen videos of Feynman at Auckland University in 1979, where he again reiterates this fact and even states that there are experiments for which no one has been able to evaluate a theoretical result.
The question I would like to ask is: what progress has been made on evaluating such integrals in the last 25-30 years? Specifically, have computers and computer algebra systems helped to tame this task? Can anyone give an example of the integrals QED theorists are faced with, if indeed it is the integrals that are giving the trouble.

Is this aspect of QED still a serious problem, or is it simply a question of throwing more CPU cycles at it? Naive, I know, but my real question is: can the equations nowadays be beaten into submission?
2. Jun 20, 2007
### nrqed
To get an idea of the state of the art, you could look up the work of Toichiro Kinoshita of Cornell on the calculation of g-2 (if I recall, he completed the four-loop calculation). It's more tricky than just throwing CPU at it because of all the divergences involved. One has to take care of nasty overlapping divergences and renormalize things in a very clever way.
Not surprisingly, things are even more difficult in a bound state. I did a two-loop calculation in positronium for my thesis, and it was doable only because I used a clever technique developed by my adviser, which was applicable only because positronium is nonrelativistic.

A lot of people are working on NNLO (next-to-next-to-leading order) contributions in things like the top quark decay and other systems, but usually the techniques are targeted at specific kinematical points.
Last edited: Jun 20, 2007
3. Jun 30, 2007
### Feynman diagram
Yeah, it's somewhat unfortunate that the fine structure constant isn't smaller than it is, which would have spared us from having to calculate third and fourth order corrections ;)
4. Jul 1, 2007
### Haelfix
Analytically not much progress has been made since the sixties. I know of only 2 counterexamples in 4d since then.
So people just keep doing perturbation theory, and yes, the numerical methods have vastly increased in efficiency: lattice methods, twistor methods, powerful algorithms for planar Feynman graphs, etc.
What is it now, they have QED down to 16 loops or something like that?
5. Jul 2, 2007
### nrqed
Are you sure? As far as I know, QED has "only" been done to 4 loops! (I am talking about a full calculation, including the finite pieces. Sometimes people may go a bit beyond if they are interested in just extracting the divergence structure for the renormalisation group analysis).
Sometimes people will talk about "eighth order" or "tenth order" instead, but this refers to the powers of the coupling constant (roughly, the electric charge), not the number of loops. For example, a one-loop calculation may be called either a fourth order calculation (because there are 4 powers of the coupling constant in the amplitude) or a second order correction to the tree level (because there are two more powers of e than the tree level).
I have never seen that, but maybe some people also give the powers of e appearing in the cross section or decay rate (in a measurable quantity). So maybe this is where the number 16 might have come from! A four-loop calculation would generate 8 powers of "e" relative to tree level, which when squared would give "a 16th order correction". Maybe that is the context in which you saw that number. But I am pretty sure that no complete five-loop calculation has been done. When I was at Cornell, Kinoshita (the world expert in g-2) had completed the four-loop calculation and he did not intend to do the five-loop one, I think!
Regards
Patrick
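To put the counting above in one line (my gloss, not part of the original thread): each loop adds two powers of the coupling to the amplitude, so

$$\frac{\mathcal{M}_{n\text{-loop}}}{\mathcal{M}_{\text{tree}}}\sim e^{2n}\sim \alpha^{n}$$

On this counting, a one-loop term is a "fourth order" amplitude (total powers of e) but a "second order" correction to tree level, and a four-loop amplitude sits 8 powers of e above tree level, which, when squared, is one way a "16th order" figure could arise.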
6. Jul 3, 2007
### Haelfix
Hi Nrqed. That sounds correct. It's been several years since I thought about this (probably dating back to when I was a grad student taking a class or some such), so it's very possible my memory has transformed it into something erroneous (loop instead of order).
7. Jul 3, 2007
### Hans de Vries
They are doing 10th order now, requiring 12,672 Feynman diagrams.
http://hussle.harvard.edu/~gabrielse/gabrielse/papers/2006/NewFineStructureConstant.pdf [Broken]
Another nice overview is this one, although it is specifically on the tau's magnetic anomaly:
http://arxiv.org/PS_cache/hep-ph/pdf/0701/0701260v1.pdf
Regards, Hans
Last edited by a moderator: Apr 22, 2017 at 6:27 PM
|
|
# Turtlebot Odometry Calibration (with and without IMU) Observations/Questions
After adding the Razor 9DOF IMU from Sparkfun, the resulting odometry actually seems to have gotten worse, as shown in the two videos below:
1. Wheels-only Odometry, IMU disabled, analog Gyro disabled: https://youtu.be/ibY_HCBGm6U
2. Wheels + IMU, analog Gyro disabled: https://youtu.be/mPiKbtP5szM
Both show the robot's position with respect to the /odom frame as calculated by robot_pose_ekf. If the odometry were perfect (i.e. no error), the position of the walls as seen by the Neato XV-11 LIDAR would remain in the same place as the robot is moved around.
The wheels-only odometry has a long-term drift as the TurtleBot is rotated continuously (via joystick teleop). In the short term, though, it is pretty accurate and can probably be improved further by fine-tuning the scaling parameters.
The wheel+IMU odometric position gyrates pretty significantly per turn but there is no long term drift.
When localizing with AMCL, adding the IMU will likely produce worse results, because localization can compensate for long-term odometric drift but the short-term gyrations will throw it off.
I did calibrate the IMU as described in the wiki page, including the Magnetometer (Section 7.1.3). The calibration_magn_use_extended parameter is set to true.
I'm looking for suggestions on how to improve the results of the wheels+IMU odometry. Thanks.
Also, here's the Matlab Analysis of the Magnetometer configuration data. I'm not sure how this is supposed to be interpreted. Please let me know if something's wrong.
So the problem was in the Mag calibration. I redid the calibration procedure while the IMU was mounted on the Turtlebot. I picked up the Turtlebot and moved it around in all directions as in the instructions. The "Ellipsoid Center" values came out significantly different:
Standalone Calibration Results:
const float magn_ellipsoid_center[3] = {138.402, -65.8241, -12.1968};
const float magn_ellipsoid_transform[3][3] = {{0.847013, -0.00538074, 0.00712010}, {-0.00538074, 0.865221, 0.00961725}, {0.00712010, 0.00961725, 0.999017}};
Results when mounted on the Turtlebot:
const float magn_ellipsoid_center[3] = {135.215, 4.24598, 5.24594};
const float magn_ellipsoid_transform[3][3] = {{0.820361, -0.0188837, 0.0166510}, {-0.0188837, 0.831140, 0.0388830}, {0.0166510, 0.0388830, 0.990194}};
Notice the changes in the Y and Z values of the Ellipsoid Center.
After this, the combined results of Wheels + IMU odometry were much better than wheels-only.
Other things to note:
• Experiment with positioning the IMU to minimize interference.
• Verify the ellipsoid center values -- the IMU node prints out the values on startup -- at least twice they were not correct (missing one digit). Not sure why that happened.
• The instructions say to modify the Arduino firmware with these values. I found that it was sufficient to change the parameters in the launch file (my_razor.yaml) -- no need to modify the firmware.
Edited to add: I had noted above that the ellipsoid_center or ellipsoid_transform values were getting corrupted on the readback from the IMU (after calibration configuration was sent to the IMU). Suspecting that it may be a serial line buffer overflow type of problem I added a few rospy.sleep(1) while writing these values from imu_node.py. Happy to report that that seems to have done the trick. The read-back values are no longer corrupted. Most definitely a hack, and perhaps specific to my particular IMU but it works!
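For reference, a rough sketch of that workaround is below. This is my own illustration, not the actual imu_node.py from razor_imu_9dof -- the "name=value" write format and port settings are hypothetical; the only part mirroring the real fix is pacing the serial writes with rospy.sleep(1):

#!/usr/bin/env python
# Hypothetical sketch: pace calibration writes so the IMU's serial input
# buffer is not overrun (the suspected cause of the corrupted read-backs).
import rospy
import serial

def write_calibration(ser, params):
    # Send one parameter per line, pausing between writes.
    for name, value in params.items():
        ser.write(("%s=%s\n" % (name, value)).encode())
        rospy.sleep(1)  # crude pacing; proper flow control would be cleaner

if __name__ == "__main__":
    rospy.init_node("imu_calibration_writer")
    ser = serial.Serial("/dev/ttyUSB0", 57600, timeout=1)
    write_calibration(ser, {"magn_ellipsoid_center_0": 135.215,
                            "magn_ellipsoid_center_1": 4.24598,
                            "magn_ellipsoid_center_2": 5.24594})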
|
|
# Tag Archives: sound
## Can light do some mechanical work?
“We know that sound can cause some mechanical work (e.g. when we keep our hand in front of a large speaker, we feel something hitting the back of our hand), but why doesn't this happen with light?” - Manishankar asks.
## Relativity, Black Hole and some doubts
In the model of the big bang I would like to pose the following question. My understanding is that
1. The expansion of the universe is accelerating and
2. The radiation, or shock wave to keep the analogy going, is something we can measure.
My question is this. While we can rewind the clock to a single moment when the universe was infinitely dense, how do we know that this was a moment at all? Let me elaborate. As the universe expands, could time, or spacetime, not also expand? If so, then when the big bang occurred is not something we can measure merely by rewinding the clock. If we are experiencing time at a given rate when the universe is at its current size, would time not also be affected by the expansion, and if so, would that mean there was simply no start and no end to the universe?

Perhaps the big bang is not as accurate as we may wish, and perhaps it is our ability to comprehend that is limited. I propose there is no start or end to the universe: it always has been and always will be, as it, and we, cannot exist without time, and as such these two things form a symbiotic relationship with each other. While the universe expands, so does time, and its effects are reduced in line with its expansion. Does this sound plausible, and if not, why?
## Question from Beats (Waves)
"the displacement of a particle in periodic motion is given by "y=(cos2t-2cost+2sin^2t+1). What is the no. of beats that can be heard in 10 seconds?"
## Working of a microphone
How does a microphone do its work? Please explain it to me; can you give the answer in Hindi? I'm very thankful to you. (Shubham Agnihotri asked)
Microphone is a transducer (a device which converts energy from one form to another). The microphone converts sound energy to mechanical energy and then to electrical energy.
When we speak into the mic, the vibrations of the air produced by the sound make the diaphragm of the mic vibrate. A coil attached to the diaphragm vibrates together with it. The coil moves in a strong magnetic field produced by a permanent magnet. Whenever the magnetic flux linked with the coil changes, an emf is induced in it. Therefore, when we speak into the mic, an emf varying in step with the pressure variations produced by our sound is set up across the coil by electromagnetic induction.

If the ends of the coil are connected to an amplifier, this signal can be amplified and sent to the loudspeaker. The working of the loudspeaker is just the reverse of that of the microphone: it converts the electrical signal back into sound.
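The coil's behaviour is governed by Faraday's law of induction (standard physics background, not specific to this post): for a coil of $N$ turns linked by magnetic flux $\Phi$, the induced emf is

$$\varepsilon =-N\frac{d\Phi }{dt}$$

so the faster the diaphragm (and with it the coil) moves through the magnet's field, the larger the emf, which is why the electrical signal tracks the sound-pressure variations.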
## Why a sound is produced when a fluorescent tube breaks?
Why does a big sound come when tube-light breaks?
(Question was posted by Vishwesh)
Answer: Every sound is produced by some vibrating body. In this case it is the air surrounding the tube that vibrates.

Inside the tube-light there is a partial vacuum. When it breaks, the surrounding air suddenly rushes into the vacuum, starts vibrating, and thus produces the sound.
## Factors affecting frequency of sound produced by a stretched string
Study how the frequency of sound produced will change in each case with the following strings of length 15 cm when the strings are tied between two fixed ends:
• aluminium string
• copper string
• cotton string
• metallic string
• jute string
Also study how the pitch changes when the strings are made taut and loose. Study how the frequency of sound changes with the thickness of the following strings:
• cotton strings
• copper strings
This seems to be a homework or project question. Therefore I am not giving a detailed answer, so as not to tamper with the basic aim of assigning a project.
The frequency of sound produced by a stretched string depends on the following factors:
1. the length of the string
2. the linear mass density (i.e. the mass per unit length) of the string
3. the tension in the string
When you are using strings of different materials, the factor that changes is the mass per unit length; the same is true when you are changing the thickness.
When you make the string more taut, the tension increases, and vice versa.
The question is given for a constant length. Therefore the case of effect of changing length does not come into picture.
The formula showing the relationship is $f=\frac{1}{2L}\sqrt{\frac{T}{m}}$
It is evident from the formula that the frequency of sound is
• inversely proportional to the length
• directly proportional to the square root of tension in the string and
• inversely proportional to the square root of linear density of the string.
On proper substitution, the formula can be recast as
$f=\frac{1}{Ld}\sqrt{\frac{T}{\pi \rho }}$
and this form will be more convenient for answering the questions.
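For completeness, here is how the recast form follows (my working, assuming a cylindrical string of diameter $d$ and volume density $\rho$, so that the linear density is $m=\rho \pi d^{2}/4$):

$$f=\frac{1}{2L}\sqrt{\frac{T}{\rho \pi d^{2}/4}}=\frac{1}{2L}\cdot \frac{2}{d}\sqrt{\frac{T}{\pi \rho }}=\frac{1}{Ld}\sqrt{\frac{T}{\pi \rho }}$$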
I recommend that you try to explore by actually performing the experiments.
## Doppler Effect
“What is Doppler Effect?”
|