STACK Documentation
PHP interface with the CAS
This document describes the design of the PHP interface to the CAS. This interface was developed specifically to connect STACK to Maxima.
High level objects
CAS text
CAS text is literally "computer algebra active text". This is documented here. Note that, when constructing CAS text, it must be able to take a CAS session as an argument to the constructor. In this way we can use lists of variables, such as question variables, to provide values.
E.g. we might have :
in the question variables field. The question is instantiated, and then later we need to create some CAS text, e.g. feedback, in which we have :
You were asked to find the integral of \((x+2)^{@n@}\) with respect to \(x\). In fact ....
Here, we need the CAS text to be able to construct the text, with the previously evaluated value of n. This need is also why the CAS session returns the value of the variables as well as a displayed
form. It is used here.
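The mechanics can be sketched in Python (names here are hypothetical; STACK itself does this in PHP): the CAS text holds @…@ placeholders which are filled in from the session's previously evaluated values.

```python
import re

def instantiate_castext(text, session_values):
    """Replace each @key@ placeholder with the session's evaluated value."""
    return re.sub(r"@(\w+)@", lambda m: str(session_values[m.group(1)]), text)

# Suppose the question variables assigned n, and the session evaluated it to 3.
session = {"n": 3}
feedback = instantiate_castext(
    r"You were asked to find the integral of \((x+2)^{@n@}\) with respect to \(x\).",
    session)
# "@n@" has been replaced by "3" in the feedback text.
```

This is also why the session must hand back plain values and not only display forms: the substitution needs something it can splice into the text.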
CAS session
This class actually calls the CAS itself. The basic idea is to take a list of Maxima commands, including assignments of the form:
and execute this list in a single Maxima session. The results are then captured and fed back into the CAS strings, so we essentially have data in the form:
key =>
An important point here is that expressions (values) can refer to previous keys. This is one reason why we can't tie teachers down to a list of allowed functions. They will be defining variable and
function names. We have implemented a lazy approach where the connection to the CAS is only made, and automatically made, when we ask for the values, display form or error terms for a variable. And
we don't generate display and value unless they are needed later. E.g. intermediate values do not create LaTeX.
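A minimal sketch of that lazy approach, in Python rather than PHP (all names are hypothetical, and Python's eval stands in for the round trip to Maxima):

```python
class CasSession:
    """Sketch of a lazily evaluated CAS session (hypothetical API).

    Commands are stored as key -> expression; the (expensive) CAS call is
    only made the first time a value, display form or error is asked for.
    """
    def __init__(self, commands):
        self.commands = commands      # e.g. {"n": "3 + 2", "a": "n ** 2"}
        self.results = None           # None until the CAS has been called

    def _instantiate(self):
        # Stand-in for the real Maxima round trip: evaluate each expression
        # in order, letting later keys refer to earlier ones.
        env = {}
        for key, expr in self.commands.items():
            env[key] = eval(expr, {}, dict(env))
        self.results = env

    def get_value(self, key):
        if self.results is None:      # connect to the CAS only on demand
            self._instantiate()
        return self.results[key]

session = CasSession({"n": "3 + 2", "a": "n ** 2"})
assert session.results is None        # no CAS call has been made yet
value = session.get_value("a")        # triggers the single evaluation
```

The single _instantiate call also shows why expressions may refer to previous keys: the commands are evaluated in order within one session.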
Answer tests
The answer tests essentially compare two expressions. These may accept an option, e.g. the number of significant figures. Details of the current answer tests are available elsewhere. The result of an
answer test should be
1. Boolean outcome, true or false,
2. errors,
3. feedback,
4. note.
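Those four outcomes can be pictured as a small record type. This Python sketch uses hypothetical names (the real classes are in STACK's PHP code) and a trivial string comparison in place of a genuine CAS call:

```python
from dataclasses import dataclass, field

@dataclass
class AnswerTestResult:
    """The four things an answer test should report."""
    passed: bool                                  # 1. boolean outcome
    errors: list = field(default_factory=list)    # 2. errors (e.g. CAS failures)
    feedback: str = ""                            # 3. feedback for the student
    note: str = ""                                # 4. answer note for the teacher

def algebraic_equivalence(student, teacher):
    # Stand-in comparison; a real answer test would ask the CAS.
    ok = student.replace(" ", "") == teacher.replace(" ", "")
    return AnswerTestResult(
        passed=ok,
        feedback="" if ok else "Your answer is not equivalent to the correct one.",
        note="ATAlgEquiv_" + ("true" if ok else "false"))

result = algebraic_equivalence("2*x + 1", "2*x+1")
```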
Other concepts
There are a number of reasons why a CAS expression needs to be "valid". These are
1. security,
2. stability,
3. error trapping,
4. pedagogic reasons, specific to a particular question.
Single call PRTs and simplification
As of STACK 4.4 we make a single Maxima call to execute an entire PRT. This significantly reduces the number of separate calls to Maxima, which is a significant efficiency boost for more complex questions.
Some answer tests rely on "unsimplified" expressions with a "what you see is what you get" approach.
This example illustrates the issue. The teacher computes the answer to their question, e.g. find \(\int_{0}^{1} {\frac{{\left(1-x\right)}^4\cdot x^4}{x^2+1}} \mathrm{d}x\), with the Maxima code
using simplification (obviously) at the start of the question variables, and this simplified expression is used by the PRT. The answer, \(\frac{22}{7}-\pi\), is held internally in "simplified" form.
The Maxima string output is 22/7-%pi but internally the answer is actually the following
((MPLUS SIMP) ((RAT SIMP) 22 7) ((MTIMES SIMP) -1 $%PI))
You can see the internal tree structure of a Maxima expression with the following code.
(simp:true, p1:22/7-%pi, ?print(p1));
Rather than 22/7-%pi the internal structure is really closer to this: rat(22,7)+ (-1*%pi).
On the other hand, a student types in the expression 22/7-%pi and we deal with this without simplification. The internal Maxima expression is now
(simp:false, p1:22/7-%pi, ?print(p1));
which gives a different internal structure
((MPLUS) ((MQUOTIENT) 22 7) ((MMINUS) $%PI))
which might best be thought of as 22/7+-(%pi) where - is now a function of a single argument.
In this example, the unsimplified MQUOTIENT and simplified RAT SIMP are not particularly problematic. However, the difference between -%pi and -1*%pi is seriously problematic. Indeed, this kind of
distinction is exactly what some answer tests, e.g. EqualComAss and CasEqual, are designed to establish. These tests will not work with a mix of simplified and unsimplified expressions, even if to
the user they look completely identical!
The solution to this problem is to "rinse" away any Maxima-internal simplification by using the Maxima string function to return the expression to the top-level form a user would expect to type. This process corresponds to what happened in older versions of STACK, in which expressions were routinely passed between Maxima and PHP with the string representation being used.
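A toy Python model of why the rinse works (the tuples below only mimic the Maxima forms quoted above; they are not Maxima's real internal representation):

```python
# The same user-visible expression has two internal trees: one built with
# simplification on, one with it off.
simplified   = ("MPLUS", ("RAT", 22, 7), ("MTIMES", -1, "%pi"))      # 22/7 + (-1*%pi)
unsimplified = ("MPLUS", ("MQUOTIENT", 22, 7), ("MMINUS", "%pi"))    # 22/7 + -(%pi)

def to_string(node):
    """Render an internal tree back to the string a user would type."""
    if not isinstance(node, tuple):
        return str(node)
    op, *args = node
    if op == "MPLUS":
        left, right = (to_string(a) for a in args)
        return left + right if right.startswith("-") else left + "+" + right
    if op in ("RAT", "MQUOTIENT"):
        return "%s/%s" % (args[0], args[1])
    if op == "MMINUS":
        return "-" + to_string(args[0])
    if op == "MTIMES" and args[0] == -1:
        return "-" + to_string(args[1])
    raise ValueError("unhandled operator: %r" % op)

# The trees differ, but the "rinsed" strings agree, so both sides of an
# answer test can be re-read from the same top-level starting point.
assert simplified != unsimplified
assert to_string(simplified) == to_string(unsimplified) == "22/7-%pi"
```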
Some expressions (lists, matrices) are passed by reference in Maxima, so even if the teacher's answer is created without simplification in the first instance, when it is evaluated by the answer tests
there is a risk of it becoming simplified when it is later compared by an answer test function.
Creative Commons Attribution-ShareAlike 4.0 International License.
What is a real life example of a trapezoid?
Handbag. Look at the shape of a handbag. You can easily observe that the top and the bottom portions of the bag are parallel to each other, while the rest of its sides are non-parallel. Hence, a
handbag is a prominent example of trapezoid-shaped objects used in real life.
What is trapezoid shape with examples?
A trapezoid, also known as a trapezium, is a flat closed shape having 4 straight sides, with one pair of parallel sides. The parallel sides of a trapezium are known as the bases, and its non-parallel
sides are called legs. A trapezium can also have parallel legs.
What is an example of inverse variation?
For example, when you travel to a particular location, as your speed increases, the time it takes to arrive at that location decreases. When you decrease your speed, the time it takes to arrive at
that location increases. So, the quantities are inversely proportional.
How useful are trapezoids in dealing with real life situations?
A trapezium is another quadrilateral having a wide base and a narrower top, or vice versa. Such shapes are very useful in the field of architecture and the construction of buildings. There are other
irregular quadrilaterals which have the capacity to make things look good. Hence, they are used mostly by designers.
Is rectangle a trapezoid?
Under the inclusive definition, all parallelograms (including rhombuses, rectangles and squares) are trapezoids.
What are some examples of inverse variation?
For example, if y varies inversely as x, and x = 5 when y = 2, then the constant of variation is k = xy = 5(2) = 10. Thus, the equation describing this inverse variation is xy = 10, or y = 10/x.
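The k = xy arithmetic is easy to check with a few lines of Python:

```python
# Inverse variation: y = k/x, so the product x*y is the same constant k
# for every point on the curve.
x0, y0 = 5, 2
k = x0 * y0              # k = x*y = 5 * 2 = 10

def y(x):
    return k / x         # the equation of this inverse variation

# The defining property: x * y(x) stays equal to 10 as x changes.
pairs = [(x, y(x)) for x in (1, 2, 4, 10)]
```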
What are some real-life examples of trapezoids?
Real-life examples of trapezoids include certain table tops, bridge supports, handbag sides and architectural elements. Since a trapezoid cannot be three-dimensional, many real-life examples of
trapezoids are only partly designed with that shape.
What are some real life examples of inverse proportion?
Real-life examples of the concept of inverse proportion: the time taken by a certain number of workers to accomplish a piece of work varies inversely as the number of workers, and the speed of a moving vessel such as a train, vehicle or ship varies inversely as the time taken to cover a given distance.
What is inversely proportional?
Inversely Proportional – Definition, Formula and Examples: two variables are called inversely proportional if and only if each is directly proportional to the reciprocal of the other.
How many sides does a trapezoid have?
A trapezoid is a two-dimensional shape with four straight sides and two parallel sides. The angles and lengths of the non-parallel sides vary depending on the trapezoid’s shape. A handbag is often
designed with two trapezoids as the largest sides of the purse.
Fundamentally Lacking
A while back I wrote up my experiences of trying to learn Lisp through various books and I got this comment:
If you want to go down the rabbit hole however (and since you asked), here’s my advice: read SICP. If you’ve been programming in Algol-inspired languages for the last 15 years, your head will
probably explode. It is worth it. Do the examples. Follow along with the videos: they contain all kinds of little nuggets of wisdom that may take time to sink in.
SICP, I thought, what the hell is that? It transpires that this book is, as they say in colloquial circles, like frikkin’ huge man. Huge in fame (it’s taught in a lot of places that I could only dream of attending when I was an undergraduate) and, as it turns out, also huge in terms of the sheer quantity of information inside it. So I ordered it on Amazon, waited a few weeks for it to arrive and then queued it up on my bookshelf.
About a month ago I started gently plodding my way through it, scratching my head, doing the exercises and all. It really is very good, but I have no idea how I would have done this as an
introductory programming course at Uni. Let’s just say the exercises can be a little challenging, to me at least.
The amazing thing is I’ve only really just finished Chapter 1 and I feel like I’ve put my head through a wringer. So I’d really like it if you can come and visit me at the funny farm after I’ve done Chapter 5. Visiting hours are 9-5, and please don’t shout at the other patients; it freaks them out. Anyway, today’s mind melt is Exercise 2.5 and it asks us to …
Exercise 2.5. Show that we can represent pairs of nonnegative integers using only numbers and arithmetic operations if we represent the pair a and b as the integer that is the product 2^a * 3^b. Give the corresponding definitions of the procedures cons, car, and cdr.
Up until now all the really hard problems had ‘Hints:’ in parentheses. That’s usually how I know when I’m going to be screwed. I wrote some numbers down, first on paper then in Egg-Smell but I just
couldn’t see how I was going to extract two numbers from the encoding. Then, bizarrely, whilst reading about Church numerals (eek) I discovered that there’s a fundamental theorem of arithmetic that I
never knew about. How does that happen? How did something that fundamental escape me all these years? Reading it was like being told by a German co-worker that the only time an apostrophe is
appropriate in “it’s” is for a contraction of “it is”. Scheiße.
Not only is there a fundamental theorem but it suggests the answer to Exercise 2.5. Perhaps one day I’ll be able to program & punctuate. One day.
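For the curious, here is how the theorem’s suggestion works out, sketched in Python rather than Scheme (so, fair warning, this is a spoiler for Exercise 2.5): since the factorisation of 2^a * 3^b is unique, a and b can be recovered by counting prime factors.

```python
def cons(a, b):
    """Represent the pair (a, b) as the single integer 2^a * 3^b."""
    return 2 ** a * 3 ** b

def _count_factors(n, p):
    """Count how many times the prime p divides n."""
    count = 0
    while n % p == 0:
        n //= p
        count += 1
    return count

def car(pair):
    return _count_factors(pair, 2)   # the exponent of 2 recovers a

def cdr(pair):
    return _count_factors(pair, 3)   # the exponent of 3 recovers b

p = cons(4, 7)
# By the fundamental theorem of arithmetic, 2^4 * 3^7 factors uniquely,
# so car(p) gives back 4 and cdr(p) gives back 7.
```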
The Free & The Damned
I sometimes wonder if I worked in a company that made software for others, instead of a company that makes software for itself, whether I would be a better: programmer, ventriloquist and lover. Ok, scratch the bit about the lover.
There are many potential reasons why this might be so but I want to focus on just two.
1. When the software is the company’s business then as a developer you are closer to that business with all the benefits that brings. Essentially it’s the difference between supporting the business
and being the business
2. The second point is the related point that when software is the business, rather than for internal use, it is necessarily of a higher standard.
Item #2 is because your internal users pay for their software only indirectly. They don’t sign a EULA, and the cheques that they do sign are for salaries and aren’t budgeted in quite the same way. Our internal users will put up with sub-standard software that people who have to sign a EULA and pay hard-green for will not.
The thing is that working for a company that doesn’t produce software as its primary function is still great fun. For much of what I’ve been doing for that last few years the business area I work in
is only tangentially computer related. That is, computers & technology are critical to the business because of the level of automation they provide, not for any other reason. This makes the technology
they use the means and not the ends.
This is both liberating and damning all at once. It’s liberating because I’m free to ignore decades of good software engineering practice if it is profitable and expedient to do so. Put trivially, if
my software could directly generate revenues then time spent making it compile without warnings is time that it could be accumulating wealth. It’s damning because all those unfixed warnings are going
to make an expensive mistake one day. I, as the programmer, have to choose where to draw the line.
Good quality software should be well-designed on very many levels: interaction, architecture, performance, etc. Anyone who buys software should demand good quality software or their money back.
Therefore I would think (obviously I don’t know :-)) that being both free & damned isn’t necessarily a bad place to be. At least you’ll get the chance to make a bit of lucre while you’re free and
when you’re in hell you’ll always have a job refactoring.
I just read this. Which was interesting. I love the way that Steve has a simple point to make and spends 000s of words doing it; the posts always seem to ramble a bit (a little like mine!) but
they’re usually fully of interesting tidbits and insights into software development so it’s usually worth spending 30 minutes or so on his issuances.
You can write C++ like straight C code if you like, using buffers and pointers and nary a user-defined type to be found. Or you can spend weeks agonizing over template metaprogramming with your
peers, trying to force the type system to do something it’s just not powerful enough to express. Guess which group gets more actual work done? My bet would be the C coders. C++ helps them iron
things out in sticky situations (e.g. data structures) where you need a little more structure around the public API, but for the most part they’re just moving data around and running algorithms,
rather than trying to coerce their error-handling system to catch programmatic errors. It’s fun to try to make a bulletproof model, but their peers are making them look bad by actually deploying
systems. In practice, trying to make an error-proof system is way more work than it’s worth.
This post raises a point that I hadn’t really considered before which is that perhaps we should consider static types to be a form of metadata, much like comments. The more static types you have the
more constraining your model will be on the system that you’re creating. This is as it should be because that’s why you created those static types in the first place, right? But that model could just
as well not exist. You could have created a system without all that new-fangled OOP crap and it might be a lot less complex. You could have the whole thing written in half-a-day and still be home in
time for the footy.
A few years ago I was assigned to a trading system project that was to replace an existing legacy system. The existing trading system was single threaded, multi-user and suffered all sorts of performance & concurrency problems. One of its strengths, though, was that it was partly written in Tcl. Now Tcl isn’t one of the world’s greatest languages but it is dynamically typed and that gives
it a certain flexibility. For instance, the shape and content of our core data was changing fairly constantly. Now, because that data was basically a bunch of name-value pairs inside the system it
was possible to change the shape of the data in the system while it was running. I doubt that this ‘feature’ was ever consciously designed in to the legacy system from the beginning, but the
flexibility and simplicity it gave was really very powerful.
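The kind of flexibility I mean is easy to show with a dynamically typed sketch (Python here rather than Tcl, and the field names are made up): the “shape” of a record is just whatever name-value pairs happen to be in it, so new fields can appear while the system is running.

```python
# A trade is just a bag of name-value pairs; nothing fixes its shape.
trade = {"id": 1, "instrument": "VOD.L", "qty": 500}

def describe(t):
    """Works on any shape of record -- it only reads what is present."""
    return ", ".join("%s=%s" % (k, v) for k, v in sorted(t.items()))

# Later, while the system is "running", the data grows a field that
# nobody planned for, and nothing downstream needs recompiling.
trade["settlement_ccy"] = "GBP"
summary = describe(trade)
```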
When the replacement system came it was written in C++ and Java and had its own language embedded within it. It was a technical triumph in many ways and represented a tremendous leap forwards in many
other ways. But the flexibility had gone because it would have taken extra effort to preserve that flexibility using statically typed languages. As a result the new system was faster and cleaner but
more fragile and much harder to maintain. It was only when we started turning the legacy system off that it occurred to me that we had sacrificed something really quite important without really
knowing it.
This flexibility, then, obviously has the downside that if our system is not very constrained it will most likely be a total mess. That was one of the drivers for replacing the legacy system in the
first place. Moreover, this is especially likely to be true if there’s more than one person working on the system. Those static types clearly served a purpose after all in allowing us to categorise
our thoughts about the behaviours of the system and then communicate them to others.
I suspect the solution is like almost everything, it’s a compromise. Experience will tell you when you need to constrain and when not to. Indeed, this is pretty close to the actual conclusion Steve
comes to as well. In practice though I suspect that on large complex projects you would have to be very disciplined about the programming idioms you are going to use in the absence of those static
types. It’s another dimension to the programming plane, and I don’t need to tell you that there are quite a few other dimensions already.
I’ve been aware for a while that I really should be doing some database unit-testing along with the main unit-testing I’m already doing of the other code. With databases there’s a bunch of stuff I’d
like to test apart from the logic inside any executable code:
• Relationships – the database model should support certain types of queries, make sure that the more common ones work as needed
• Query Execution Time – to verify that indices are present on SELECT’s and to monitor the cost of insertions and as a general (and fairly gross) monitor of performance
• Arbitrary Data Values – some data in the database is ‘special’. It’s always like that: it’s data that you don’t get from another source, static data that makes your abstractions work. When it’s there everything is ok; when it’s gone you’re screwed
• Constraints & Triggers – constraints and triggers occasionally get dropped when maintenance occurs; when they don’t get put back, things go south
• Permissions – certain types of activity in a database should be prohibited; ensure that these protections are in place for a class of users
There’s probably a lot more I could do too, but this will do to begin with. In the past I’ve spent significant investigative time hunting down problems that originated from some initial assumption
being violated. Since I don’t like feeling violated, at least not on a Thursday, it seems like I should unit test what I can to get early warnings before the shit-storm hits.
So I did what any lazy, self-respecting developer would do: I went looking for some code that someone else had written that I could steal. T-SQLUnit looked like the best match for my needs so I downloaded it and checked it out. Now, before I upset anyone I should say that T-SQLUnit is ok. But it suffers from a few fairly major drawbacks. There’s nothing wrong with T-SQLUnit per se; it’s just that all database unit testing frameworks that are written in SQL are broken. Here’s a few lowlights:
1. Results of expressions can not be inputs to stored procedures making traditional unit testing Assert statements awkward
2. Catching all exceptions and reporting automatically that as a failure (ala jUnit) is messy requiring a BEGIN/END TRY around every test block
It’s the first one that makes life hard because all your assertions have to read something like:
IF @MyVar <> @TheirVar
EXEC ts_failure 'My variable doesn''t equal your variable'
When what you really want to write is something like:
EXEC ts_assert @MyVar <> @TheirVar, 'My variable doesn''t equal your variable'
I don’t know, perhaps I’m just not smart enough but I could not see anyway to make something like the above work without ending up with a syntax so verbose and ugly that even Eurocrats would balk a
little. So bad that you might as well have just used a bunch of IF statements in the first place. Also, a direct consequence of not being able to have an ‘assert’ stored proc is that you can’t easily
count the number of assertions you make to report them later. Now whilst this is just icing on the unit-test cake it’s still a nice feature to have and adds to the warm fuzz after a test run.
If that was hard then testing permissions related activities is next-to impossible. This is because your unit-testing framework is running as a particular user in a particular session. For you to be
able to test particular permissions you might need to disconnect and reconnect as a different user. Well, it’s not impossible it’s just … a bit tricky
The obvious thing to do, then, is to completely give up on SQL unit testing frameworks and go back to your programming language of choice. As far as is possible you want to hide all the internals of
dealing with the database and leave just the pieces you need to setup, run and teardown your tests. To do this I made a helper class to do all the heavy lifting by: connecting to the database,
running any query, timing the execution, processing the results and storing them somewhere. Finally I made my class provide a function based query interface so that I could then write test code using
NUnit style assertions against it. Creating this custom class took only a few hours. Once I’d created it I could hand all the testing framework stuff to my old friend NUnit. This technique worked
well for me and integrated nicely with my existing code tests.
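The post doesn’t show the helper itself, so here is a re-imagining of the idea in Python with sqlite3 (every name is illustrative, and the original presumably targeted SQL Server): the class owns the connection, runs and times queries, and exposes results so that ordinary unit-test assertions can be made against them.

```python
import sqlite3
import time

class DbTestHelper:
    """Does the heavy lifting so tests are just setup, query, assert."""
    def __init__(self, dsn=":memory:"):
        self.conn = sqlite3.connect(dsn)
        self.last_elapsed = None              # execution time of the last query

    def run(self, sql, params=()):
        start = time.perf_counter()
        rows = self.conn.execute(sql, params).fetchall()
        self.last_elapsed = time.perf_counter() - start
        return rows

    def scalar(self, sql, params=()):
        """Function-based query interface: first column of the first row."""
        rows = self.run(sql, params)
        return rows[0][0] if rows else None

# Usage, NUnit/xUnit style:
db = DbTestHelper()
db.run("CREATE TABLE prices (sym TEXT, px REAL)")
db.run("INSERT INTO prices VALUES ('VOD', 1.25)")
assert db.scalar("SELECT COUNT(*) FROM prices") == 1
assert db.last_elapsed is not None            # crude execution-time monitor
```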
Muhammad Abdul Moeed Shahid^1*, M. Talha Khan^1, Farah Javaid^2
^1Department of Physics, Government Islamia Graduate College Civil Lines, Lahore, Pakistan.
^2Department of Physics, Govt. APWA College (W) Lahore, Pakistan
Received: 26^th-August-2021 / Revised and Accepted: 3^rd -October-2021 / Published On-Line: 10^th -October-2021
Structural steel has become one of the most widespread building materials of the last hundred years. It is broadly used in critical infrastructure such as buildings, cylinders, and marine pipelines. In this paper, natural convection heat transfer is used to cool steel cylinders using ANSYS Transient Thermal. A cylinder is designed with a radius of 10 mm and a depth of 40 mm. The initial temperature of the cylinder is 120°C and the ambient temperature is 22°C. When a convection coefficient of 1×10^-5 W/mm^2°C is applied, the steel cylinder starts to cool down with the passage of time; the temperature decreases from 120°C to 22.357°C in 10000 s. These results show that the heat transfer (cooling rate) is rapid at the start of the simulation but gradually decreases as time increases. When the final convection coefficient of 2.2×10^-5 W/mm^2°C is applied, the temperature decreases from 120°C to 22.755°C in 3800 s. These simulations show that the cooling rate is directly related to the convection heat coefficient.
Keywords: Structural Steel, Convection heat transfer, Convection heat coefficient, ANSYS
Structural steel is one of the most common construction materials, extensively used in all aspects of modern-day construction, and has become one of the most widespread building materials of the last hundred years. In high-rise buildings the dominance of structural steel is manifest, and it brings substantial social and economic benefits [1, 2]. It is broadly used for critical infrastructure such as ships, marine pipelines, aircraft hangars, commercial and residential buildings, bridges, solid cylinders, warehouses, metro stations, storage tanks, and harbor and offshore structures [3].
Fig. 1: Harbor facilities made of structural steel [4]
In many situations the rate of heat transfer is significant, and the cooling of systems has wide application in modern technology; cooled systems are extensively used in automobiles and industrial equipment. The most efficient methods of cooling include cooling by water spray, mist cooling, and forced convection by air-jet cooling or applied high air pressure. In many applications convection plays a significant role in transferring heat, and natural convection has clear advantages as a heat-transfer mechanism. Natural convection uses the flow of a fluid, such as air, over the surface of the material to transfer heat from the system to the surroundings [5-8]. Convection heat transfer depends on three factors: the temperature difference, the area, and the convection coefficient [9].
The most significant parameter for the convection process is the convection heat coefficient. It depends strongly on the wind velocity and on surface characteristics: smooth surfaces have a lower convection coefficient than rough surfaces at the same wind velocity, and faster wind corresponds to a greater convection coefficient. A greater convection coefficient leads to more heat transfer, which causes a more rapid decrease in the temperature of the body [10]. In this paper, we study the cooling behavior of a structural steel cylinder under natural convection heat transfer. Natural convection from a horizontal cylinder is a common problem in many fields, and hundreds of studies have been carried out to understand heat transfer from a horizontal cylinder to the surrounding air [5].
Given the importance of convection heat transfer, many studies and experiments have addressed convection heat transfer and thermal convection in solid cylinders. In 1971, R.L. Riley et al. studied air mixing, convection and ultraviolet (UV) air disinfection in rooms [11]. In 1979, A.B. Cohen analyzed fin thickness to improve natural convection in arrays of rectangular fins [12]. In 1983, A.B. Cohen et al. analyzed the natural convection of thermally optimal arrays of fins and cards [13]. In 1991, H. Barthels et al. carried out experimental and theoretical investigations, under natural convection conditions, of the safety behavior of small helium-gas-cooled high-temperature reactors (HTRs) [14]. In 2000, K. Nakajima et al. examined moist convection in Jupiter's atmosphere using a 2-D fluid-dynamical model with explicit cloud microphysics of water [15]. In 2002, Y. Bai et al. studied the systematic
changes in apple rings in the course of convection air-drying along with controlled humidity and temperature [16]. In 2003, M. Iyengar et al. analyzed the coefficient of performance (COPT) in forced
convection for plate fin heat sinks [17]. In 2008, N. Hatami et al. studied the natural convection heat transfer coefficient in a vertical flat-plate solar air heater [18]. In 2014, H.H. Al-Kayiem et
al. predicted the convection heat transfer coefficient between flowing air and surfaces on the rectangular passage solar air heater [19]. In 2019, Y.G Lee et al. did tests on vertical tube of passive
containment cooling system to study air-steam condensation under natural convection [20]. In 2019, A. Moradikazerouni et al. examined the computer’s central processing unit (CPU) heat sink through
stratified coerced convection by using structural stability method [21]. In 2020, M.H Alturaihi et al. studied heat transfer in square cavity filled with saturated porous media and fluid to look over
thermal conductivity and void ratio [22]. Different researchers have used ANSYS and fuzzy simulation in a variety of studies and obtained very useful results [23-38].
ANSYS is a powerful multiphysics simulation package, widely used in industry and engineering for fluid-dynamics, mechanical, electromagnetic, thermal and electrical simulations. In this work, the convection heat transfer of a solid cylinder in air is studied using ANSYS Transient Thermal. The ANSYS Workbench Design Modeler is used to create the geometry of the solid cylinder, with structural steel as the material. The designed geometry consists of a solid cylinder with a radius of 10 mm and a depth of 40 mm.
Fig. 2: Modeling of Solid Cylinder
To obtain more precise results, the geometry is finely meshed with an element size of 2 mm. The mesh settings use medium smoothing, a fine relevance center and a fine span angle center. The meshed geometry consists of 6836 elements and 28510 nodes in total. The number of elements can be increased for more accurate results, but finer meshing has the drawback of longer processing time in the simulation.
Fig. 3: Meshing of Solid Cylinder
Results and Discussion
ANSYS Transient Thermal is used to observe the convection heat transfer and cooling rate of the solid cylinder in air. The initial temperature given to the solid cylinder is 120°C, and the ambient temperature of the surroundings is 22°C. When a convection coefficient of 1×10^-5 W/mm^2°C is applied, the structural steel cylinder starts to cool with the passage of time. At 2000 s the cylinder showed a minimum temperature of 46.728°C at the edges; at 4000 s, 29.293°C; at 6000 s, 24.43°C; at 8000 s, 22.894°C; and at 10000 s, 22.357°C. The results show that the temperature decreases earlier at the edges of the cylinder than at the center.
Fig. 4 (a,b) : Result of decrease in Temperature from the edge of the cylinder
Table: Time vs Temperature
Temperature with respect to time at convection heat coefficient of 1×10^-5 W/mm^2 ^oC
Serial No. Time (seconds) Temperature (Celsius)
1- 2000s 46.728°C
2- 4000s 29.293°C
3- 6000s 24.43°C
4- 8000s 22.894°C
5- 10000s 22.357°C
These results show that the cooling rate is rapid at the start of the simulation and decreases gradually as time increases. The graph of the decrease in temperature with respect to time is shown in Fig. 5.
Fig. 5: Plot between temperature and time
ANSYS Transient Thermal is also used to observe the cooling rate with respect to the convection heat coefficient. At a convection heat coefficient of 1×10^-5 W/mm^2°C, the steel cylinder cooled down to 22.357°C in 10000 s. Increasing the convection heat coefficient has a favorable impact on the cooling rate: at 1.25×10^-5 W/mm^2°C, the cylinder cooled to 22.357°C in 8000 s; at 1.5×10^-5 W/mm^2°C, to 22.287°C in 7000 s; at 1.75×10^-5 W/mm^2°C, to 22.287°C in 6000 s; at 2×10^-5 W/mm^2°C, to 22.56°C in 4500 s; and at 2.2×10^-5 W/mm^2°C, to 22.755°C in 3800 s. These simulations show that the heat transfer (cooling rate) of the steel cylinder is directly related to the convection heat coefficient, and the plotted graphs show that increasing the convection heat coefficient makes the cooling faster. These ANSYS results show the same behavior as previous research, e.g. the results of M. Saidi et al. [5]. The graphs of temperature against time for the different convection heat coefficients are given below:
Graph. 1 (a): At a convection coefficient of 1×10^-5 W/mm^2°C
Graph. 1 (b): At a convection coefficient of 1.25×10^-5 W/mm^2°C
Graph. 1 (c): At a convection coefficient of 1.5×10^-5 W/mm^2°C
Graph. 1 (d): At a convection coefficient of 1.75×10^-5 W/mm^2°C
Graph. 1 (e): At a convection coefficient of 2×10^-5 W/mm^2°C
Graph. 1 (f): At a convection coefficient of 2.2×10^-5 W/mm^2°C
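The trend in these graphs, that a larger convection coefficient shortens the cooling time, can be checked with a simple lumped-capacitance model. This is a rough analytical sketch, not the ANSYS model: the steel density and specific heat are assumed textbook values, and the cutoff of 1°C above ambient is an arbitrary choice for "cooled down."

```python
import math

# Rough check of the trend "higher convection coefficient -> faster cooling"
# using a lumped-capacitance model of the same cylinder. Material properties
# are assumed textbook values, not taken from the paper.
r, L = 0.010, 0.040                       # m
rho, c = 7850.0, 434.0                    # kg/m^3, J/kg.K (assumed)
T0, T_amb, T_target = 120.0, 22.0, 23.0   # cool until 1 degC above ambient
A = 2 * math.pi * r * (r + L)
V = math.pi * r**2 * L

def time_to_cool(h_mm):
    """Seconds to reach T_target, for h in W/mm^2.degC (units as in the paper)."""
    h = h_mm * 1e6                        # convert W/mm^2.degC -> W/m^2.K
    tau = rho * V * c / (h * A)
    return tau * math.log((T0 - T_amb) / (T_target - T_amb))

for h_mm in (1e-5, 1.25e-5, 1.5e-5, 1.75e-5, 2e-5, 2.2e-5):
    print(f"h = {h_mm:.2e} W/mm^2.degC  t = {time_to_cool(h_mm):7.0f} s")
```

Since the time constant scales as 1/h, the cooling time falls in inverse proportion to the convection coefficient, matching the monotonic trend reported above.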
The graphs of natural convection heat transfer (cooling rate) with respect to time and convection coefficient are shown in the figures.
Fig. 6: Plot
Conclusion
In this paper, natural convection heat transfer is used to cool down a steel cylinder using ANSYS Transient Thermal. For this purpose, a steel cylinder was designed with a radius of 10 mm and a
depth of 40 mm. The initial temperature of the steel cylinder was 120°C and the ambient temperature of the surroundings was 22°C. When we apply a convection coefficient of 1×10^-5 W/mm^2°C, the steel
cylinder starts to cool with the passage of time. The results show that the temperature decreases earlier at the center than at the edges of the cylinder. The temperature decreases from 120°C
to 22.357°C in 10000 seconds. These results show that the cooling rate is rapid at the start of the simulation and decreases gradually with increasing time. When we apply our final convection
coefficient of 2.2×10^-5 W/mm^2°C, the temperature decreases from 120°C to 22.755°C in 3800 seconds. These simulations show that the heat transfer (cooling rate) of the steel cylinder is directly
related to the convection heat coefficient.
Author’s Contribution: M. A. M. Shahid conceived the idea, designed the simulated work, carried out the data acquisition, executed the simulations and wrote the basic draft. M. A. M. Shahid,
M. T. Khan and F. Javaid performed the data analysis and interpretation and carried out the language, grammatical and critical revisions.
Funding: This article received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
Acknowledgement: The authors would like to thank the Department of Physics of Government Islamia Graduate College Civil Lines, Lahore for assistance with the collection of data.
References
[1] M. C. Moynihan, J. M. J. P. o. t. R. S. A. M. Allwood, Physical, and E. Sciences, “Utilization of structural steel in buildings,” vol. 470, p. 20140170, 2014.
[2] X. Wei and X. He, “The Application of Steel Structure in Civil Engineering,” in International Conference on Education, Management and Computing Technology. Tianjin, 2014, pp. 253-256.
[3] R. J. C. a. o. a. s. Melchers, “Corrosion wastage in aged structures,” pp. 77-106, 2008.
[4] M. Hernández-Escampa and D. Barrera-Fernández, “Architectural and Archaeological Views on Railway Heritage Conservation in Mexico.”
[5] M. Saidi and R. H. Abardeh, “Air pressure dependence of natural-convection heat transfer,” in Proceedings of the World Congress on Engineering, 2010, pp. 1-5.
[6] S. Lee, J. Park, P. Lee, and M. J. H. t. e. Kim, “Heat transfer characteristics during mist cooling on a heated cylinder,” vol. 26, pp. 24-31, 2005.
[7] F. Gori, M. Borgia, A. D. Altan, M. Mascia, and I. Petracci, “Cooling of a finned cylinder by a jet flow of air,” 2005.
[8] F. Gori, L. J. I. C. i. H. Bossi, and M. Transfer, “On the cooling effect of an air jet along the surface of a cylinder,” vol. 27, pp. 667-676, 2000.
[9] F. Kreith and M. Bohn, “Principles of heat transfer 7th Edition,” ed: Boston: PWS Publishing Company, 1997.
[10] H. Li, “Chapter 12 – Simulation of Thermal Behavior of Design and Management Strategies for Cool Pavement,” in Pavement Materials for Heat Island Mitigation, H. Li, Ed., ed Boston:
Butterworth-Heinemann, 2016, pp. 263-280.
[11] R. L. Riley, S. Permutt, and J. E. J. A. o. E. H. A. I. J. Kaufman, “Convection, air mixing, and ultraviolet air disinfection in rooms,” vol. 22, pp. 200-207, 1971.
[12] A. Bar-Cohen, “Fin thickness for an optimized natural convection array of rectangular fins,” 1979.
[13] A. Bar-Cohen, W. J. I. T. o. C. Rohsenow, Hybrids,, and M. Technology, “Thermally optimum arrays of cards and fins in natural convection,” vol. 6, pp. 154-158, 1983.
[14] H. Barthels, W. Rehm, and W. J. E. Jahn, “Theoretical and experimental investigations into the safety behavior of small HTRs under natural convection conditions,” vol. 16, pp. 371-380,
[15] K. Nakajima, S. i. Takehiro, M. Ishiwatari, and Y. Y. J. G. r. l. Hayashi, “Numerical modeling of Jupiter’s moist convection layer,” vol. 27, pp. 3129-3132, 2000.
[16] Y. Bai, M. S. Rahman, C. O. Perera, B. Smith, L. D. J. J. o. A. Melton, and F. Chemistry, “Structural changes in apple rings during convection air-drying with controlled temperature and
humidity,” vol. 50, pp. 3179-3185, 2002.
[17] M. Iyengar, A. J. I. T. o. c. Bar-Cohen, and p. technologies, “Least-energy optimization of forced convection plate-fin heat sinks,” vol. 26, pp. 62-70, 2003.
[18] N. Hatami and M. J. S. E. Bahadorinejad, “Experimental determination of natural convection heat transfer coefficient in a vertical flat-plate solar air heater,” vol. 82, pp. 903-910, 2008.
[19] H. H. Al-Kayiem and T. A. J. s. e. Yassen, “On the natural convection heat transfer in a rectangular passage solar air heater,” vol. 112, pp. 310-318, 2015.
[20] Y.-G. Lee, Y.-J. Jang, and S. J. A. o. N. E. Kim, “Analysis of air-steam condensation tests on a vertical tube of the passive containment cooling system under natural convection,” vol. 131,
pp. 460-474, 2019.
[21] A. Moradikazerouni, M. Afrand, J. Alsarraf, S. Wongwises, A. Asadi, T. K. J. I. J. o. H. Nguyen, et al., “Investigation of a computer CPU heat sink under laminar forced convection using a
structural stability method,” vol. 134, pp. 1218-1226, 2019.
[22] M. H. Alturaihi, L. Jassim, A. R. ALguboori, L. J. Habeeb, H. K. J. J. o. M. E. R. Jalghaf, and Developments, “Porosity Influence on Natural Convection Heat Transfer from a Heated Cylinder
in a Square Porous Enclosure,” vol. 43, pp. 236-254, 2020.
[23] D. Afzal, F. Javaid, E. D. S. Tayyaba, M. Ashrf, and M. Yasin, “Study of Constricted Blood Vessels through ANSYS Fluent,” Biologia (Lahore, Pakistan), vol. 66 (II), pp. 197-201, 03/21 2021.
[24] M. Afzal, F. Javaid, S. Tayyaba, A. Sabah, and M. J. B. Ashraf, “Fluidic simulation for blood flow in five curved Spiral Microchannel,” vol. 65, 2019.
[25] M. J. Afzal, F. Javaid, S. Tayyaba, M. W. Ashraf, and M. K. J. S. Hossain, “Study on the Induced Voltage in Piezoelectric Smart Material (PZT) Using ANSYS Electric & Fuzzy Logic,” vol. 6,
pp. 313-318, 2020.
[26] M. J. Afzal, M. W. Ashraf, S. Tayyaba, A. H. Jalbani, F. J. M. U. R. J. O. E. Javaid, and Technology, “Computer simulation based optimization of aspect ratio for micro and nanochannels,”
vol. 39, pp. 779-791, 2020.
[27] M. J. Afzal, S. Tayyaba, M. W. Ashraf, M. K. Hossain, and N. Afzulpurkar, “Fluidic simulation and analysis of spiral, U-shape and curvilinear nano channels for biomedical application,” in
2017 IEEE International Conference on Manipulation, Manufacturing and Measurement on the Nanoscale (3M-NANO), 2017, pp. 190-194.
[28] S. Tayyaba, M. W. Ashraf, M. I. Tariq, M. Nazir, N. Afzulpurkar, M. M. Balas, et al., “Skin insertion analysis of microneedle using ANSYS and fuzzy logic,” vol. 38, pp. 5885-5895, 2020.
[29] K. Nabi, R. Rafi, M. W. Ashraf, S. Tayyaba, Z. Ahmad, M. Imran, et al., “Simulation and Analysis of T-Junction MicroChannel,” vol. 152, p. 146, 2013.
[30] N. Ismail, S. Neoh, N. Sabani, B. J. J. o. E. S. Taib, and Technology, “Microneedle Structure Design And Optimization Using Genetic Algorithm,” vol. 10, pp. 849-864, 2015.
[31] L. Dahmani, A. Khennane, and S. J. S. o. m. Kaci, “Crack identification in reinforced concrete beams using ANSYS software,” vol. 42, pp. 232-240, 2010.
[32] M. J. Afzal, M. W. Ashraf, S. Tayyaba, M. K. Hossain, and N. J. M. Afzulpurkar, “Sinusoidal Microchannel with Descending Curves for Varicose Veins Implantation,” vol. 9, p. 59, 2018.
[33] M. A. TAYYABA and A. AKHTAR, “Simulation of a Nanoneedle for Drug Delivery by Using MATLAB Fuzzy Logic.”
[34] M. J. Afzal, F. Javaid, S. Tayyaba, M. W. Ashraf, C. Punyasai, and N. Afzulpurkar, “Study of Charging the Smart Phone by Human Movements by Using MATLAB Fuzzy Technique,” in 2018 15th
International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2018, pp. 411-414.
[35] M. J. Afzal, S. Tayyaba, M. W. Ashraf, M. K. Hossain, M. J. Uddin, and N. J. M. Afzulpurkar, “Simulation, fabrication and analysis of silver based ascending sinusoidal microchannel (ASMC)
for implant of varicose veins,” vol. 8, p. 278, 2017.
[36] M. J. Afzal, S. Tayyaba, M. W. Ashraf, and G. Sarwar, “Simulation of fuzzy based flow controller in ascending sinusoidal microchannels,” in 2016 2nd International Conference on Robotics and
Artificial Intelligence (ICRAI), 2016, pp. 141-146.
[37] F. Javaid and S. M. El-Sheikh, “FUZZY SIMULATION OF DRUG DELIVERY SYSTEM THROUGH VALVE-LESS MICROPUMP,” 2021.
[38] K. Muhammad Talha, “STUDY OF THERMAL DEFORMATION ANALYSIS IN AL-STEEL AND CU-STEEL BIMETAL COMPOSITES BY ANSYS STATIC STRUCTURAL,” PJEST, vol. 1, p. 11, May 11, 2021 2021.
The Science of Hawking
The universe explodes. All of its energy is focused on this singular act of creation, and it grows at a rate that has no parallel in our existence. During the first few seconds, there is nothing
that we would recognize as matter yet. Space literally seethes with energy. This energy is carried by particles that we consider the elementary building blocks – quarks and leptons – and by particles
that are responsible for the four forces that we see at play today. As the universe expands, it cools. Within the first few minutes, more complicated particles such as protons and neutrons begin to
exist. Sometime later, maybe several hundred thousand years, the universe has cooled further so that atoms can form. First the very simplest, hydrogen, made up of just two particles. Later, heavier
elements are created. And even later still, more complicated structures begin to form through the force of gravity. This is the “big bang,” and history – at least the timeline that we are following – has started.
And this is the history that Stephen Hawking, Lucasian Chair of Mathematics at Cambridge University, has spent his life unraveling. Although Hawking has not been alone in this endeavour - he is one
of perhaps several hundred cosmologists and astrophysicists – he is without question the very best known, both in scientific circles and with the public. This point was made in vivid colors last
spring when, as a guest of a group of undergraduate students, he paid a visit to the University of Toronto. His public lecture was sold out in fifteen minutes, with people from all walks of life
coming to hear him talk about his quest to discover literally how time began.
Hawking’s thinking and writings have communicated, better than those of any other contemporary scholar, the history of our world to the man in the street. He is often compared to Albert Einstein, the
white-haired savant who early in this century discovered that to understand gravity means to understand that we live in a truly four-dimensional world. Hawking is the author of “A Brief History of
Time,” [1] a book without equations that describes in layman’s terms how we understand the evolution of the universe from the big bang on. He has even appeared on the science-fiction TV series “Star
Trek – The Next Generation,” playing himself as the scientific giant of the late twentieth century.
Much has been written about the man [2], but what is Hawking’s science? What have been the contributions that have brought him this stature? And what can we expect to hear from him in the years
ahead? The first question is easy to answer – at least for a physicist. The second question can be answered on several levels – his contributions to the creation of knowledge itself, and his role as
one of the preeminent spokespersons of his science to western society. The last question is one that I suspect Hawking himself would love to know the answer to. In addressing these questions,
I’m going to embed in the story some of the basic physics that you need in order to appreciate Hawking’s science. It is a story about black holes, creation and time’s arrow. I hope it will
help enlighten those readers who wouldn’t know a black hole if they met one in a back alley.
A Primer on General Relativity and Big Bang Cosmology
Hawking has spent his scientific life studying what can be described as the structure of our universe and its evolution. A simple view of our world is that it is three-dimensional space filled with
different kinds of matter that interact through several different forces(1). The concept we call time can be viewed as parametrizing changes in this three-dimensional space. For example, a motionless
ball is simply a certain amount of matter at rest relative to the observer. We describe it as motionless because as time advances, its position in three-dimensional space relative to the observer
does not change. Dropping the ball in a gravitational field causes the ball to accelerate – uniformly if the gravitational force is constant – and the motion can be parametrized as a function of time
by a simple quadratic formula. This simple view suffices for most people, and it is the world view that we impart to our children all through school. It is the world view that the natural
philosopher Sir Isaac Newton is given most credit for. His three laws of motion are based on this model of our universe. Newton founded his theory of gravitation on this world view, a view that
survived for over two centuries.
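For a ball dropped from rest in a uniform gravitational field, the “simple quadratic formula” mentioned above takes the familiar form

\[ y(t) = y_0 - \frac{1}{2} g t^2, \]

where \(y_0\) is the initial height and \(g \approx 9.8\ \mathrm{m/s^2}\) is the gravitational acceleration near the Earth’s surface.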
The truth about our world only begins to be revealed in advanced university physics courses. The true nature of our world, discovered in large part by Einstein, is one where the three space
dimensions and the time dimension must be treated in an even-handed manner. Our world is in fact four-dimensional, with one of those dimensions representing time. The same motionless ball would now
represent a line in this space-time, a line because its three spatial coordinates would remain constant, whereas the fourth coordinate, time, takes on all values from the point the ball is placed at
rest to the time it is subsequently moved.
This change in our view of the relationship between space and time arose from the discovery that the speed of light was a universal constant, something that did not change, no matter how fast we
would move relative to the source of the light. Einstein’s revelation was that in a four-dimensional space-time, one could both accommodate this remarkable fact and still have a world where
everything else appeared to work like normal. One should not be too critical of Newton – in his time, there wasn’t any hint that the speed of light would create troubles. By Einstein’s time, this had
become an irrefutable fact, and it was one of many observations that he was trying to account for. Einstein first solved the problems presented by the constant speed of light by introducing
space-time in his Special Theory of Relativity. It took him another ten years to solve what he considered the real mystery – gravity – by a model called the General Theory of Relativity.
In Einstein’s space-time, the effect of the gravitational force is really a consequence of the geometry of this space. Wait! What do you mean by geometry? Isn’t space just space, and objects have a
specific geometry? A simple analogy is often used to help explain what I mean. There are many different two-dimensional spaces (we would call them surfaces) that we can imagine if we allow ourselves
to think three-dimensionally. For example, the painter’s mat (the canvas stretched on a rectangular frame) is something we would call “flat” (we often call this Euclidean space, named after the
famous geometer Euclid). If I rolled a marble across this surface, it would travel in a straight line. If I took the same mat and twisted the frame, stretching the canvas as I did so, the canvas
would no longer be flat but have curves and ripples in it. Mathematically, it would now have non-zero curvature. The same marble would now have a trajectory that, if I weren’t aware of the geometry
of the canvas, would appear to be influenced by some “force” tugging away at it as it meandered up and around the ripples and curves in the canvas.
Think what would happen if our four-dimensional world had similar curvature in space-time. If I now sent a marble off in a specific direction, it would not necessarily follow a straight path. Its
path could be quite complex, depending on how complex the geometry was. We would call such a space-time non-Euclidean, and that is exactly the picture that Einstein suggested as the one for our
universe. But he went further and suggested that the curvature arose from the mass or mass-energy density in our space-time(2). The larger the energy density, the more curved the space-time. What this
means to our ball is that as it passes by a massive object, it will be deflected towards the object due to the space-time curvature. The more massive the object, the more it will be deflected.
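The quantitative statement of this idea is the Einstein field equation, which ties the curvature of space-time to its energy content:

\[ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \]

where \(G_{\mu\nu}\) encodes the curvature, \(T_{\mu\nu}\) describes the density and flow of mass-energy, \(G\) is Newton’s gravitational constant and \(c\) is the speed of light.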
This geometric interpretation of the force of gravity results in a description of the ball’s motion that is almost the same as what Newton’s law of gravity gives us. The two theories make virtually
identical predictions in what we call the “weak field” approximation, where the force of gravity is modest. Gravitationally speaking, the environment in which we live is always within this weak field
limit. That is why it took more than 200 years of observation and experimentation before there was a strong enough case to overthrow Newton’s law of gravity. The places where his theory did not work
had to do with such subtle effects as the slow precession of Mercury’s elliptical orbit around the sun and the bending of light, effects that required difficult and patient measurements.
General relativity is what we use to describe the large-scale structure of our universe. It has been verified as being, if not the right model, then such a good approximation that its failures are
quite subtle indeed. It is also the framework that predicts a host of phenomena that have stretched our imagination. It predicts that if there is enough mass in one region, the force of gravity will
be so strong that light itself will be unable to escape its pull, creating what we call a “black hole.” It also is the framework used to determine the past history of the universe. Measurements of
the velocity of very distant objects show that our universe is currently expanding rapidly. The farther away the object, the larger its recession velocity. This implies that, at some point in the
past, all matter in our universe resulted from some massive initial explosion – the big bang.
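The observed proportionality between distance and recession velocity is known as Hubble’s law,

\[ v = H_0 d, \]

where \(H_0\) is the Hubble constant; running this expansion backwards in time is what points to the initial explosion.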
It is important to note that the big bang is not simply matter exploding outward to fill the universe. It is the entire universe that is exploding outward – in effect, the size of our universe is
increasing. As we turn the clock back, we can see that matter and energy in our universe is getting more squished together, getting more dense, and is getting hotter. General relativity does not
predict the big bang – it allows for a universe that is unchanging with time, or what we call the “steady state” universe. The need to invent the big bang model arises from a large body of
observations (the recessional velocity of distant galaxies is just one) all pointing to the same model.
The most significant shortcoming of general relativity is that we do not have a way of integrating the theory with our understanding of our world at the most microscopic level. At the same time that
Einstein was developing his geometric model of gravity, other physicists were discovering that energy appeared to be doled out in small but well defined chunks, or “quanta.” When they looked at
materials that gave off light, they found that each light particle, or photon, had an energy that was some multiple of this fundamental quantum. This resulted in yet another revolution in our
understanding of our world and gave birth to a new theory called “quantum mechanics.” It predicted that an object could be found in one of a number of distinct states, each with a unique energy, and
for an object to change its quantum state, it would have to emit or absorb some number of these quanta of energy.
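The relationship the early quantum theorists found is that a photon’s energy is proportional to the frequency \(\nu\) of the light,

\[ E = h\nu, \]

where \(h\) is Planck’s constant, the fundamental quantum of action.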
Quantum mechanics remains the only theory that successfully describes how atoms and molecules work, and is the basis for such common-place devices as transistors and lasers. The implications of
quantum mechanics are enormous, yet Einstein was never able to find a way to reconcile it with his theory of gravity(3). We were left with two physical models, one that described gravity and that
had been tested at distance scales ranging from about one metre to many light-years, and the other that discussed what happened at the atomic level, but that had few manifestations at distance scales
larger than about a hundredth of a micron(4). Physicists had attempted to bridge the gap between these two theories with little success until the late 1960’s and early 1970’s.
Hawking and Black Holes
Enter Stephen Hawking. He was introduced to theoretical physics at Cambridge University by his Ph.D. advisor, Dennis Sciama, an expert in quantum theories of elementary particles and cosmology. As a
graduate student in the early 1960’s, he became fascinated by Einstein’s theory and its implications, in part inspired by the work of a British mathematician, Roger Penrose. As would any successful
scientist or problem solver, he chose to pick away at specific aspects of the theory, developing as he went a deep intuition for how the theory works. His first major exploration was into the
behaviour of a funny, theoretical object called a black hole.
What is a black hole? Before I can offer an answer, we first need to recall that gravity is a force that is always attractive. If one throws a bunch of matter out in space, the various pieces will
not stay put. Gravity will act on each chunk of matter and attempt to draw the pieces together. As a result, any collection of objects that interact gravitationally is inherently unstable(5).
Now, suppose we had a large amount of mass sitting in one place. Gravity would pull all this matter toward the centre of the mass. The more mass, the stronger the pull. At some point, the force of
gravity would be so strong that it would overwhelm even photons, the particles that make up light, so that any nearby photon would be sucked in. That is what we call a black hole.
In other words, a black hole is an object that has so much mass concentrated in one location that the force of gravity near the object is inescapable. Nothing – not even light – can escape its
gravitational pull. Since the black hole absorbs all light, it is completely black. The amount of mass required to create a black hole is surprisingly small on an astrophysical scale. All you need is
about 3 times the mass of our sun and you have the makings of a black hole sometime in the far future.
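That mass threshold can be turned into a horizon size with the standard Schwarzschild-radius formula, \(r_s = 2GM/c^2\). A quick sketch, using standard SI values for the constants:

```python
# Event-horizon radius of a non-rotating black hole: r_s = 2GM/c^2.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

r_s = schwarzschild_radius(3 * M_sun)
print(f"r_s for 3 solar masses: {r_s / 1000:.1f} km")   # roughly 9 km
```

Three solar masses collapse to a horizon only about nine kilometres across, which is why such objects are so extraordinarily dense.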
By the early 1970’s, Hawking became one of the leading experts in black holes, discovering a host of properties for these strange objects. Black holes had been considered cosmic vacuum cleaners,
since they suck in anything that comes too close to them. There is a specific radius from the centre of the black hole, known as the “event horizon,” that signifies the point of no return. Once you
step inside the event horizon, you are trapped in the black hole for eternity. Hawking discovered that quantum mechanics plays a certain game right at the event horizon. It allows matter and
antimatter particles to be created. Quantum mechanics plays this game all the time, usually with no real effect – a particle and its antimatter equivalent (for example, an electron and a positron,
the antimatter version of an electron) will be created, survive for an extremely brief instant of time, and then will vanish, leaving things exactly the way they were before the particle-antiparticle
pair was created. While the pair exists, nature plays with the law of the conservation of energy. One particle has positive energy and the other has negative energy. We would normally call such
particle-antiparticle pairs “virtual,” since quantum mechanics allows them to exist for only a short time. Hawking realized that when this game takes place at the event horizon of a black hole,
it is possible for the negative energy particle to be kicked across the horizon into the black hole, never to be heard from again, and for the positive energy particle to escape the black hole by
being kicked in the opposite direction, in effect being a “real” particle.
This has many consequences. A black hole is not truly black, since it can have a glow due to the particles that escape in this manner. This glow is, aptly enough, called “Hawking radiation.” A second
consequence is that since the black hole radiates energy, it could literally evaporate away! This also means that a black hole has a temperature, just like any other glowing object. Hawking along
with others quickly discovered that they could even quantify the entropy, or level of disorder, of a black hole (the entropy of a black hole is proportional to the surface area of the event horizon).
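These two results can be sketched numerically with the standard Hawking-temperature and Bekenstein–Hawking entropy formulas (textbook expressions, with standard SI constants, not drawn from this article):

```python
import math

# Hawking's results in numbers: a black hole's temperature falls as 1/M,
# and its entropy scales with the event-horizon area.
hbar = 1.0546e-34      # reduced Planck constant, J.s
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23       # Boltzmann constant, J/K
M_sun = 1.989e30       # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature in kelvin for a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def horizon_entropy(M):
    """Bekenstein-Hawking entropy (J/K), proportional to horizon area."""
    r_s = 2 * G * M / c**2
    area = 4 * math.pi * r_s**2
    return k_B * c**3 * area / (4 * G * hbar)

print(f"T for 1 solar mass:  {hawking_temperature(M_sun):.2e} K")   # ~6e-8 K
print(f"T for 10 solar masses: {hawking_temperature(10 * M_sun):.2e} K")
```

A solar-mass black hole glows at only tens of nanokelvin, which is why Hawking radiation has never been observed directly; only very small black holes would be hot enough to evaporate noticeably.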
The discovery of Hawking radiation started a flurry of theoretical work to understand all the quantum mechanical implications of black holes, and propelled Hawking to the forefront of research into
gravity and quantum mechanics. His next major research topic was somewhat more ambitious.
Hawking and the History of the Universe
Hawking turned his attention to an even more fundamental cosmological problem by the early 1980’s. Most of us have a picture of the big bang starting with an incredibly hot, dense universe filled
with swarms of matter and antimatter particles. As the universe expanded and cooled, these particles began to clump together to form the elementary particles we now see around us. Eventually, these
formed into stable, neutral atoms such as hydrogen and helium. As further cooling occurred, the atoms formed molecules, which aggregated into gas clouds, stars, galaxies, and other even larger scale
structures. But what happened to the antimatter? For every atom of normal matter, we should be left with a corresponding antimatter equivalent, or the energy that results when the matter and
antimatter annihilate each other. We see no evidence for this antimatter in our local cluster of galaxies. Where did it go?
This simple notion of the big bang has other shortcomings. It doesn’t tell us why our universe looks the way it does, with large clusters of galaxies separated from each other by even larger voids.
It also does not tell us whether the universe will continue to expand forever, or eventually reach some maximum size and begin to contract.
One possible explanation was suggested by the Soviet scientist Andrei Linde. He noted that it was conceivable that the antimatter could be converted into matter if a very unusual set of conditions arose during the very first instants after the big bang. These conditions required that the universe undergo what physicists would call a “phase transition” – move from one type of quantum mechanical
state to a second, more stable state. In addition, this phase transition had to allow the universe to grow very rapidly (or “inflate”) at the same time. During this inflationary period, what is
normally a subtle effect called “CP-violation” (CP stands for charge-parity) would cause the antimatter to convert into matter, leaving us with the matter-dominated universe we see around us today.
This model, appropriately known as “inflation,” was in large part the brainchild of Alan Guth, a theoretical physicist who had worried about this matter-but-no-antimatter problem. Together with a number of other cosmologists, Hawking took this idea and filled in many of the details. What ingredient in the theory ensured that the universe developed into what we see around us today? What is it that really gives us an arrow of time in this theory? What conditions had to be placed on the initial universe in order for the expansion to work out just right?
The first question is considered the “boundary condition problem,” and it formed a conceptual and practical roadblock to making predictions with the theory. The basic problem is that quantum
mechanics in principle would allow an infinite number of different universes to exist. However, if we want to ask questions about one of them – ours in this case – we have to determine what specific
conditions must be obeyed in order to get our universe out of the theory. Hawking proposed that you just avoid asking this question altogether by assuming that the structure of space-time at the
moment of the big bang was such that there was no “past” but only future events. In effect, he and his collaborators conjectured that one did not have to worry about what was going on with the
universe at a specific “boundary” in space-time, but that once you defined the type of universe you were in, you just had to let the universe evolve according to the laws of physics. Small,
unpredictable quantum fluctuations during inflation then gave us the universe we see today.
This view allowed Hawking to consider the universe as a single quantum mechanical system, a view that he has maintained in his subsequent work. Based on this, he also tackled the question of whether
the arrow of time, as we understand it, always points in the same direction, even if the universe contracts. Hawking himself waffled on this philosophical issue before being able to argue
persuasively that, in a well-defined sense, the arrow of time would continue to point in the same direction.
The idea of inflation was completely new in the early 1980’s, and it required a great deal of fine-tuning to get it to agree with what we observe in the world around us. Now, two decades later, there
is still some dispute about how well it describes the world. But it has been an extremely productive theory. It has led cosmologists to ask a number of key questions and prompt the right
measurements. The recent mapping of the cosmic microwave background radiation performed by the COBE satellite is an example of the sorts of observations that have been prompted by predictions made by
inflation. Hawking has continued to be at the forefront of this very exciting effort.
Returning to the Big Problem
Over the last 15 years, Hawking has continued to work on the original “big problem” that formed part of Einstein’s legacy: How do we marry quantum mechanics and general relativity? Our observations
of the world have given us few clues, mainly because gravity is such a weak force that probing it at a quantum scale has not been possible. We only have precise tests of gravity and of general
relativity at the scale of about a centimeter, much larger than the atomic scale where quantum effects become important and easily observed.
Actually, it is much worse than that, because what we do know comes from extrapolating our ideas back to the very earliest times in the universe and asking questions such as “What did gravity look like then? What would have been its effects?” Although we do not have definitive answers, the work of Hawking and others has shown that the nature of space-time only begins to exhibit quantum effects when we get down to an extraordinarily small distance, known as the Planck scale. This scale is so small it is even hard for a cosmologist to comprehend. The size of the atom (about 10^-8 m) is about 1 part in 10^26 of the size of the visible universe. The Planck scale is to the atom as the atom is to the visible universe.
Hawking is currently trying to understand what the geometry of the universe looked like at the moment of the big bang. He has advocated the idea that one form of a mathematical solution to general
relativity, known as the instanton, describes the state of the universe at the big bang, but that this instanton was not uniform or spherical, but shaped like a four-dimensional pea. Along with Neil Turok, a collaborator at Cambridge, he has developed this idea to show that it is compatible with his “no boundary” proposal and that, combined with the concept of inflation, it gives us something like the universe we see today.
These ideas are imaginative and controversial. They show that Hawking continues to be a strong influence in the search for an understanding of how our universe evolved.
Hawking Today and Tomorrow
Stephen Hawking’s relationship to the scientific community is first and foremost defined by his science, but it has also been shaped by Hawking’s complex personality.
Part of the problem is self-inflicted, due to his own hubris. He has been involved in several high-profile disputes with other scientists regarding priority, which are conflicts over who should be
given credit for a scientific discovery. These disputes have at times been divisive, and in at least one case Hawking has had difficulty graciously admitting defeat. These have revealed a man who can
be quite stubborn and unwilling to admit to error.
But part of his complex relationship with other researchers is due to his public appeal as a scientist who has captured the imagination of the person on the street. I believe there is distrust and at
times even contempt among some in the scientific fraternity toward attempts to reach out to the public. Some of this results from the need to water down science to present it to a non-expert.
However, the importance of such outreach in communicating the excitement of the research and the possibilities the work holds far outweighs any possible distortions that can occur.
Nonetheless, Hawking remains a historic figure, perhaps the most eminent physicist of his time. He also will continue to epitomize to the public the true character of the scientist struggling to
understand how our world really works.
The universe continues to unfold today. And what is remarkable is that its future will be uncovered by understanding our past -- a past into which Hawking has perhaps the strongest insights and
vision. I strongly recommend continuing to tune in to what he says. For this remarkable mind may one day give birth to the complete history of the physical universe.
(1) There are four known forces or interactions. In increasing order of strength, they are gravity, the weak force, electromagnetism and the strong force. Gravity is an attractive force between any
objects with mass. The weak force, as its name suggests, is a subtle force that we find at play in the nuclei of atoms – it is responsible for most radioactivity. Electromagnetism is the force that
influences objects with electric charge. The strong force is indeed the strongest of the four forces. It holds together the protons and neutrons that make up atomic nuclei. However, its range or the
distance over which it interacts is limited to the atomic scale.
(2) The identification of mass as one form of energy, E=mc^2, was another consequence of Einstein’s Special Theory of Relativity.
(3) Einstein found the whole idea of quantum mechanics repugnant, though he realized that he could not ignore it either. The comment “God does not play with dice” is widely attributed to Einstein,
who found the probabilistic nature of quantum mechanics to be one of its most serious defects.
(4) This corresponds to about 10^-5 m, or about the size of a good-sized molecule or a small virus.
(5) There are systems that interact gravitationally that appear stable, such as our solar system. Gravity does allow some forms of motion that are dynamically stable, such as the orbit of a planet
about a star. However, if you peer more closely, you will see evidence of various levels of instability – take the rings of Saturn, for example, where each ring has formed in a chaotic manner.
Explain what is meant by a Sharpe ratio of 3. Does this ratio differ from one asset to another? Explain why or why not.
The Sharpe ratio measures the return your asset or portfolio earns above the risk-free rate per unit of risk taken. Hence, a Sharpe ratio of 3 means that you are earning an extra return above the risk-free rate of 3 per unit of risk taken.
The Sharpe ratio is calculated as -
(Return on portfolio - risk-free rate) / standard deviation of portfolio
This means that every asset will have a different Sharpe ratio.
As the asset changes, even though the risk-free rate remains the same, the return on the asset/portfolio and the standard deviation of the asset/portfolio will differ. This change in return and standard deviation causes different assets to have different Sharpe ratios.
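For concreteness, here is a small Python sketch of the calculation above. It is not part of the original answer, and the return series are made-up illustrative numbers; it simply shows why two assets with the same risk-free rate generally end up with different Sharpe ratios.

```python
# Illustrative sketch: Sharpe ratio = (mean return - risk-free rate) / std dev.
# The return series below are hypothetical.

def sharpe_ratio(returns, risk_free_rate):
    """Mean excess return divided by the (population) standard deviation."""
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    std_dev = variance ** 0.5
    return (mean - risk_free_rate) / std_dev

# Two hypothetical assets facing the same risk-free rate: one with steady
# returns (low volatility), one with swings (high volatility).
asset_a = [0.10, 0.12, 0.08, 0.11]
asset_b = [0.30, -0.10, 0.25, -0.05]
rf = 0.02

print(round(sharpe_ratio(asset_a, rf), 2))
print(round(sharpe_ratio(asset_b, rf), 2))
```

Even though both assets have similar average returns, the steadier asset has a far higher Sharpe ratio because its standard deviation is smaller.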
Van der Waals equation of state
The van der Waals equation of state, developed by Johannes Diderik van der Waals ^[1] ^[2], takes into account two features that are absent in the ideal gas equation of state: the parameter ${\displaystyle b}$ roughly accounts for the repulsive behavior between pairs of molecules at short distances and represents the minimum molar volume of the system, whereas ${\displaystyle a}$ measures the attractive interactions between the molecules. The van der Waals equation of state leads to a liquid-vapor equilibrium at low temperatures, with the corresponding critical point.
Equation of state
The van der Waals equation of state can be written as
${\displaystyle \left(p+{\frac {an^{2}}{V^{2}}}\right)\left(V-nb\right)=nRT}$
• ${\displaystyle p}$ is the pressure,
• ${\displaystyle V}$ is the volume,
• ${\displaystyle n}$ is the number of moles,
• ${\displaystyle T}$ is the absolute temperature,
• ${\displaystyle R}$ is the molar gas constant; ${\displaystyle R=N_{A}k_{B}}$, with ${\displaystyle N_{A}}$ being the Avogadro constant and ${\displaystyle k_{B}}$ being the Boltzmann constant.
• ${\displaystyle a}$ and ${\displaystyle b}$ are constants that introduce the effects of attraction and volume respectively and depend on the substance in question.
Critical point
At the critical point one has ${\displaystyle \left.{\frac {\partial p}{\partial v}}\right|_{T=T_{c}}=0}$, and ${\displaystyle \left.{\frac {\partial ^{2}p}{\partial v^{2}}}\right|_{T=T_{c}}=0}$,
leading to
${\displaystyle T_{c}={\frac {8a}{27bR}}}$
${\displaystyle p_{c}={\frac {a}{27b^{2}}}}$
${\displaystyle \left.v_{c}\right.=3b}$
along with a critical point compressibility factor of
${\displaystyle {\frac {p_{c}v_{c}}{RT_{c}}}={\frac {3}{8}}=0.375}$
which then leads to
${\displaystyle a={\frac {27}{64}}{\frac {R^{2}T_{c}^{2}}{p_{c}}}}$
${\displaystyle b={\frac {RT_{c}}{8p_{c}}}}$
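As a numerical sanity check of the critical-point formulas above, the following Python sketch uses illustrative values of ${\displaystyle a}$ and ${\displaystyle b}$ (roughly those tabulated for water; treat them as assumptions) for one mole of gas, and verifies that the isotherm is flat at the critical point and that the compressibility factor comes out to 3/8:

```python
# Sketch: verify the van der Waals critical-point formulas numerically
# for one mole of gas. The a and b values are illustrative (roughly water).
R = 8.314        # J/(mol K), molar gas constant
a = 0.5536       # Pa m^6 / mol^2
b = 3.049e-5     # m^3 / mol

T_c = 8 * a / (27 * b * R)   # critical temperature
p_c = a / (27 * b ** 2)      # critical pressure
v_c = 3 * b                  # critical molar volume

def pressure(v, T):
    """Molar van der Waals equation solved for p."""
    return R * T / (v - b) - a / v ** 2

# At the critical point the isotherm has a flat inflection, so a central
# difference of p over a small step in v should be nearly zero.
h = v_c * 1e-3
dp_dv = (pressure(v_c + h, T_c) - pressure(v_c - h, T_c)) / (2 * h)

# Critical compressibility factor, algebraically exactly 3/8.
Z_c = p_c * v_c / (R * T_c)
print(T_c, Z_c)
```

With these (water-like) constants the computed critical temperature comes out near 647 K, close to water's measured value, which suggests why the van der Waals model was historically useful despite its simplicity.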
Virial form
One can re-write the van der Waals equation given above as a virial equation of state as follows:
${\displaystyle Z:={\frac {pV}{nRT}}={\frac {1}{1-{\frac {bn}{V}}}}-{\frac {an}{RTV}}}$
Using the well-known series expansion ${\displaystyle (1-x)^{-1}=1+x+x^{2}+x^{3}+...}$ one can write the first term of the right-hand side as ^[3]:
${\displaystyle {\frac {1}{1-{\frac {bn}{V}}}}=1+{\frac {bn}{V}}+\left({\frac {bn}{V}}\right)^{2}+\left({\frac {bn}{V}}\right)^{3}+...}$
Incorporating the second term of the right-hand side in its due place leads to:
${\displaystyle Z=1+\left(b-{\frac {a}{RT}}\right){\frac {n}{V}}+\left({\frac {bn}{V}}\right)^{2}+\left({\frac {bn}{V}}\right)^{3}+...}$.
From the above one can see that the second virial coefficient corresponds to
${\displaystyle B_{2}(T)=b-{\frac {a}{RT}}}$
and the third virial coefficient is given by
${\displaystyle B_{3}(T)=b^{2}}$
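The truncated virial series can be checked numerically against the exact van der Waals compressibility factor at low density. The values of ${\displaystyle a}$, ${\displaystyle b}$, and ${\displaystyle T}$ below are arbitrary illustrative choices:

```python
# Sketch: compare the exact van der Waals Z with its virial expansion
# truncated after B_3, at a low density where b*(n/V) << 1.
R = 8.314
a, b, T = 0.1, 3e-5, 300.0   # arbitrary illustrative values

def Z_exact(n_over_V):
    x = b * n_over_V
    return 1.0 / (1.0 - x) - a * n_over_V / (R * T)

def Z_virial(n_over_V):
    B2 = b - a / (R * T)     # second virial coefficient
    B3 = b * b               # third virial coefficient
    return 1.0 + B2 * n_over_V + B3 * n_over_V ** 2

rho = 10.0   # mol / m^3, so b*rho = 3e-4 is small
print(Z_exact(rho), Z_virial(rho))
```

The residual difference is of order ${\displaystyle (bn/V)^{3}}$, the first neglected term of the geometric series.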
Boyle temperature
The Boyle temperature of the van der Waals equation is given by
${\displaystyle B_{2}\vert _{T=T_{B}}=0=b-{\frac {a}{RT_{B}}}}$
leading to
${\displaystyle T_{B}={\frac {a}{bR}}}$
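A quick numerical check (with arbitrary illustrative ${\displaystyle a}$ and ${\displaystyle b}$) that the second virial coefficient vanishes at this temperature:

```python
# Sketch: at the Boyle temperature T_B = a/(bR), the van der Waals
# second virial coefficient B_2 = b - a/(R T) vanishes.
R = 8.314
a, b = 0.1, 3e-5          # arbitrary illustrative values
T_B = a / (b * R)
B2 = b - a / (R * T_B)
print(B2)                  # essentially zero, up to floating-point rounding
```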
Dimensionless formulation
If one takes the following reduced quantities
${\displaystyle {\tilde {p}}={\frac {p}{p_{c}}};~{\tilde {V}}={\frac {V}{V_{c}}};~{\tilde {t}}={\frac {T}{T_{c}}};}$
one arrives at
${\displaystyle {\tilde {p}}={\frac {8{\tilde {t}}}{3{\tilde {V}}-1}}-{\frac {3}{{\tilde {V}}^{2}}}}$
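One can verify numerically that the reduced form reproduces the dimensional equation. The sketch below uses one mole and the same illustrative ${\displaystyle a}$, ${\displaystyle b}$ values as before; the chosen reduced state point is arbitrary:

```python
# Sketch: check that p/p_c from the dimensional van der Waals equation
# matches the reduced form 8t/(3v - 1) - 3/v^2, for one mole.
R = 8.314
a, b = 0.5536, 3.049e-5   # illustrative constants (roughly water)

T_c = 8 * a / (27 * b * R)
p_c = a / (27 * b ** 2)
v_c = 3 * b

def p_dimensional(v, T):
    return R * T / (v - b) - a / v ** 2

def p_reduced(vt, tt):
    return 8 * tt / (3 * vt - 1) - 3 / vt ** 2

v, T = 2.5 * v_c, 0.9 * T_c        # an arbitrary state point
lhs = p_dimensional(v, T) / p_c
rhs = p_reduced(v / v_c, T / T_c)
print(lhs, rhs)
```

The agreement illustrates the law of corresponding states: in reduced variables the van der Waals isotherms contain no substance-specific parameters at all.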
The following image is a plot of the isotherms ${\displaystyle T/T_{c}}$ = 0.85, 0.90, 0.95, 1.0 and 1.05 (from bottom to top) for the van der Waals equation of state:
Critical exponents
The critical exponents of the Van der Waals equation of state place it in the mean field universality class.
Calculating Fraction of Kinetic Energy Transferred in Head-on Collision
• Thread starter stunner5000pt
• Start date
In summary, the question asks for the fraction of an electron's kinetic energy that can be transferred to a mercury atom in an elastic collision. Using classical mechanics and the formula for kinetic energy, the fraction can be calculated as the ratio of the mercury atom's final kinetic energy to the electron's initial kinetic energy. Momentum conservation can then be used to express the final velocities in terms of the electron's initial velocity, from which an approximate numerical value follows.
Determine from classical mechanics (using a head-on collision with recoil at 180 degrees) what fraction of an electron’s kinetic energy can be transferred to a mercury atom in an elastic collision.
Derive an approximate value of the fraction.
well ok let the mass of the electron be m_e
initial velocity of electron v_1
final velocity v_2
mercury atom mass m_Hg
mercury atom final velocity v_Hg
[tex] \frac{1}{2} m_{e} v_{1}^2 = \frac{1}{2} m_{Hg} v_{Hg}^2 + \frac{1}{2} m_{e} v_{2}^2 [/tex]
the fraction of kinetic energy is Kf/Ki right
[tex] \frac{K_{f}}{K_{i}} = \frac{v_{2}^2}{v_{1}^2} [/tex]
do i use momentum concepts to find v2 in terms of v1
still it doesn't give me a numerical value... if that's what the question is asking...
To calculate the fraction of kinetic energy transferred in a head-on collision between an electron and a mercury atom, we can use the principles of classical mechanics. In an elastic collision, both kinetic energy and momentum are conserved. Taking the mercury atom to be initially at rest and solving the two conservation equations simultaneously gives the standard results for a head-on elastic collision:
v_2 = ((m_e - m_Hg)/(m_e + m_Hg)) v_1
v_Hg = (2 m_e/(m_e + m_Hg)) v_1
The fraction of the electron's kinetic energy transferred to the mercury atom is therefore
(1/2 m_Hg v_Hg^2)/(1/2 m_e v_1^2) = 4 m_e m_Hg/(m_e + m_Hg)^2
Since the mercury atom is vastly more massive than the electron (m_Hg is roughly 3.7 x 10^5 times m_e), this fraction is approximately
4 m_e/m_Hg ≈ 1.1 x 10^-5
In other words, only about one part in a hundred thousand of the electron's kinetic energy can be transferred to the mercury atom in an elastic head-on collision; the electron essentially bounces back with nearly all of its original kinetic energy. This is why elastic collisions between electrons and mercury atoms leave the electron's energy almost unchanged, and appreciable energy loss occurs only in inelastic collisions that excite the atom.
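As a cross-check, the standard elastic-collision result for the fraction of kinetic energy transferred to an initially stationary target, 4 m M / (m + M)^2, can be evaluated numerically for an electron striking a mercury atom. The mass values below are standard physical constants, quoted approximately:

```python
# Sketch: fraction of kinetic energy an electron can transfer to a mercury
# atom in an elastic head-on collision, f = 4 m M / (m + M)^2.
m_e = 9.109e-31              # electron mass, kg (approximate)
m_hg = 200.59 * 1.6605e-27   # mercury atomic mass, kg (approximate)

f = 4 * m_e * m_hg / (m_e + m_hg) ** 2
print(f)   # a number of order 1e-5
```

Because m_e << m_Hg, the exact formula is extremely well approximated by 4 m_e / m_Hg, confirming that elastic collisions barely touch the electron's energy.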
FAQ: Calculating Fraction of Kinetic Energy Transferred in Head-on Collision
1. How is the fraction of kinetic energy transferred in a head-on collision calculated?
In an elastic head-on collision, the fraction of kinetic energy transferred to an initially stationary target is 4 m1 m2 / (m1 + m2)^2, where m1 and m2 are the masses of the two objects involved in the collision. (The related quantity (m1 - m2)^2 / (m1 + m2)^2 is the fraction of kinetic energy the incoming object retains.) These formulas assume an elastic collision, meaning no kinetic energy is lost during the collision.
2. What is the significance of the fraction of kinetic energy transferred in a head-on collision?
The fraction of kinetic energy transferred in a head-on collision helps us understand the amount of energy that is conserved or lost during the collision. This can be useful in predicting the outcome
of a collision and assessing potential damage or injury.
3. Can the fraction of kinetic energy transferred be greater than 1?
No, the fraction of kinetic energy transferred cannot be greater than 1. This would imply that the kinetic energy after the collision is greater than the initial kinetic energy, which is not physically possible.
4. How does the fraction of kinetic energy transferred change with different masses?
The fraction of kinetic energy transferred, 4 m1 m2 / (m1 + m2)^2, depends only on the ratio of the two masses: it reaches its maximum value of 1 when the masses are equal and decreases as the masses become more mismatched. This is why a very light projectile transfers almost no kinetic energy to a very heavy target.
5. Can the fraction of kinetic energy transferred be negative?
No, the fraction of kinetic energy transferred cannot be negative. A negative value would imply that the kinetic energy after the collision is less than zero, which is not physically possible.
MSSCYC_1: The Correspondence Between Monotonic Many Sorted Signatures and Well-Founded Graphs. Part I
:: The Correspondence Between Monotonic Many Sorted Signatures and Well-Founded Graphs. Part I
:: by Czesław Byliński and Piotr Rudnicki
:: Received February 14, 1996
:: Copyright (c) 1996-2021 Association of Mizar Users
Lm1: for G being Graph
for c being Chain of G
for p being FinSequence of the carrier of G st c is cyclic & p is_vertex_seq_of c holds
p . 1 = p . (len p)
Lesson 1
Tape Diagrams and Equations
1.1: Which Diagram is Which? (5 minutes)
Students recall tape diagram representations of addition and multiplication relationships.
For relationships involving multiplication, we follow the convention that the first factor is the number of groups and the second is the number in each group. But students do not have to follow that
convention; they may use their understanding of the commutative property of multiplication to represent relationships in ways that make sense to them.
Give students 2 minutes of quiet think time, followed by a whole-class discussion.
Student Facing
1. Here are two diagrams. One represents \(2+5=7\). The other represents \(5 \boldcdot 2=10\). Which is which? Label the length of each diagram.
2. Draw a diagram that represents each equation.
Activity Synthesis
Invite students to share their responses and rationale. The purpose of the discussion is to give students an opportunity to articulate how operations can be represented by tape diagrams. Some
questions to guide the discussion:
• “Where do you see the 5 in the first diagram?” (There are 5 equal parts represented by 5 same-size boxes.)
• “How did you find the length of the first diagram?” (Either computed \(2+2+2+2+2\) or \(5 \boldcdot 2.\))
• “Explain how you knew what the diagrams for \(4+3=7\) and \(4 \boldcdot 3=12\) should look like. How are they alike? How are they different?”
• “How did you represent \(4\boldcdot 3\)? How are they alike? How are they different?” (Some may represent \(4\boldcdot 3\) as 4 groups of size 3, while some may represent as 3 groups of size 4.)
1.2: Match Equations and Tape Diagrams (10 minutes)
In this first activity on tape diagram representations of equations with variables, students use what they know about relationships between operations to identify multiple equations that match a
given diagram. It is assumed that students have seen representations like these in prior grades. If this is not the case, return to the examples in the warm-up and ask students to write other
equations for each of the diagrams. For example, the relationship between the quantities 2, 5, and 7 expressed by the equation \(2+5=7\) can also be written as \(2=7-5\), \(5=7-2\), \(7=2+5\), and \
(7-2=5.\) Ask students to explain how these equations match the parts of the tape diagram.
Note that the word “variable” is not defined until the next lesson. It is not necessary to use that term with students yet. Also, we are sneaking in the equivalent expressions \(x+x+x+x\) and \(4 \
boldcdot x\) because these equivalent ways of writing it should be familiar from earlier grades, but equivalent expressions are defined more carefully later in this unit. Even though this familiar
example appears, the general idea of equivalent expressions is not a focus of this lesson.
Arrange students in groups of 2. Give students 2 minutes of quiet work time. Then, ask them to share their responses with their partner and follow with whole-class discussion.
If necessary, explain that the \(x\) in each diagram is just standing in for a number.
Student Facing
Here are two tape diagrams. Match each equation to one of the tape diagrams.
• \(4 + x = 12\)
• \(12 \div 4 = x\)
• \(4 \boldcdot x = 12\)
• \(12 = 4 + x\)
• \(12 - x = 4\)
• \(12 = 4 \boldcdot x\)
• \(12 - 4 = x\)
• \(x = 12 - 4\)
• \(x+x+x+x=12\)
Anticipated Misconceptions
Students may not have much experience with a letter standing in for a number. If students resist engaging, explain that the \(x\) is just standing in for a number. Students may prefer to figure out
the value that \(x\) must take to make each diagram make sense (8 in the first diagram and 3 in the second diagram) before thinking out which equations can represent each diagram.
Activity Synthesis
Focus the discussion on the reason that more than one equation can describe each tape diagram; namely, the same relationship can be expressed in more than one way. These ideas should be familiar to
students from prior work. Ideas that may be noted:
• A multiplicative relationship can be expressed using division.
• An additive relationship can be expressed using subtraction.
• It does not matter how expressions are arranged around an equal sign. For example, \(4+x=12\) and \(12=4+x\) mean the same thing.
• Repeated addition can be represented with multiplication. For example, \(4x\) is another way to express \(x+x+x+x.\)
Students are likely to express these ideas using informal language, and that is okay. Encourage them to revise their statements using more precise language, but there is no reason to insist they use
particular terms.
Some guiding questions:
• “How can you tell if a diagram represents addition or multiplication?”
• “Once you were sure about one equation, how did you find others that matched the same diagram?”
• Regarding any two equations that represent the same diagram: “What is the same about the equations? What is different?”
Writing, Representing, Conversing: MLR2 Collect and Display. While pairs are working, circulate and listen to student talk about the similarities and differences between the tape diagrams and the
equations. Ask students to explain how these equations match the parts of the tape diagram. Write down common or important phrases you hear students say about each representation onto a visual
display of both the tape diagrams and the equations. This will help the students use mathematical language during their paired and whole-group discussions.
Design Principle(s): Support sense-making; Cultivate conversation
1.3: Draw Diagrams for Equations (15 minutes)
In this activity, students draw tape diagrams to match given equations. Then, they reason about the unknown value that makes the equation true, a process also known as solving the equation. Students
should not be shown strategies to solve but rather should reason with equations or diagrams in ways that make sense to them. As they work, monitor for students who use the diagrams to find unknown
quantities and for those who use the equations.
Give students 5 minutes quiet work time followed by a whole-class discussion.
Representation: Internalize Comprehension. For each equation, provide students with a blank template of a tape diagram for students to complete and find the unknown quantities.
Supports accessibility for: Visual-spatial processing; Organization
Student Facing
For each equation, draw a diagram and find the value of the unknown that makes the equation true.
1. \(18 = 3+x\)
2. \(18 = 3 \boldcdot y\)
Student Facing
Are you ready for more?
You are walking down a road, seeking treasure. The road branches off into three paths. A guard stands in each path. You know that only one of the guards is telling the truth, and the other two are
lying. Here is what they say:
• Guard 1: The treasure lies down this path.
• Guard 2: No treasure lies down this path; seek elsewhere.
• Guard 3: The first guard is lying.
Which path leads to the treasure?
Anticipated Misconceptions
Students might draw a box with 3 for the equation \(18 = 3 \boldcdot y\). Ask students about the meaning of multiplication and specifically what \(3 \boldcdot y\) means. Ask how they could represent
3 equal groups with unknown size \(y\).
Students might think they need to show an unknown number (\(y\)) of equal groups of 3. While this is possible, showing 3 equal groups with unknown size \(y\) is simpler to represent.
Activity Synthesis
Invite students to share their strategies for finding the values of \(x\) and \(y\). Include at least one student who reasoned with the diagram and one who reasoned with the equation. Help students
connect different methods by thinking about the relationships between the three quantities in each problem and how both the equations and the diagrams represent them.
Speaking, Listening, Representing: MLR7 Compare and Connect. Use this routine when students present their equations and tape diagrams. Draw students’ attention to how the mathematical operations
(addition, subtraction, multiplication, division) are represented in each relationship. For example, ask, “Where do you see multiplication in the diagram?” This will strengthen students’ mathematical
language use and reasoning about tape diagram representations of the equations.
Design Principle(s): Support sense-making; Maximize meta-awareness
Lesson Synthesis
To ensure that students understand the use and usefulness of tape diagrams in representing equations and finding unknown values, consider asking some of the following questions:
• “Why are tape diagrams useful to visualize a relationship?” (Answers vary. Sample response: You can see the way quantities are related.)
• “Where in the tape diagram do we see the equal sign that is in the equation it represents?” (The fact that the sum of the parts has the same value as the whole; the numbers and letters in the
boxes add up to the total shown for the whole rectangle.)
• “Why can a diagram be represented by more than one equation?” (Because more than one operation can be used; for example, the same diagram can be represented by an addition or a subtraction
equation. Because when two expressions are equal, it doesn't matter how they are arranged around the equal sign.)
• “Describe some ways to represent the relationship \(23+x=132\)” (Tape diagram with two unequal parts, other equivalent equations like \(x=132-23\)).
• “Describe some ways to represent the relationship \(5x=230\)” (Tape diagram with 5 equal parts, other equivalent equations like \(x=230\div5\)).
1.4: Cool-down - Finish the Diagrams (5 minutes)
Student Facing
Tape diagrams can help us understand relationships between quantities and how operations describe those relationships.
Diagram A has 3 parts that add to 21. Each part is labeled with the same letter, so we know the three parts are equal. Here are some equations that all represent diagram A:
\(\displaystyle x+x+x=21\)
\(\displaystyle 3\boldcdot {x}=21\)
\(\displaystyle x=21\div3\)
\(\displaystyle x=\frac13\boldcdot {21}\)
Notice that the number 3 is not seen in the diagram; the 3 comes from counting 3 boxes representing 3 equal parts in 21.
We can use the diagram or any of the equations to reason that the value of \(x\) is 7.
Diagram B has 2 parts that add to 21. Here are some equations that all represent diagram B:
\(\displaystyle y+3=21\)
\(\displaystyle y=21-3\)
\(\displaystyle 3=21-y\)
We can use the diagram or any of the equations to reason that the value of \(y\) is 18.
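For a quick sanity check of the reasoning in both diagrams, the relationships can be verified with a few lines of code (Python, purely illustrative):

```python
# Diagram A: three equal parts labeled x add up to 21.
x = 21 / 3              # from the equation x = 21 ÷ 3
assert x + x + x == 21
assert 3 * x == 21

# Diagram B: a part labeled y and a part labeled 3 add up to 21.
y = 21 - 3              # from the equation y = 21 - 3
assert y + 3 == 21
assert 3 == 21 - y
```

Each assertion corresponds to one of the equivalent equations listed above, showing that x = 7 and y = 18 satisfy every form of their respective equations.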
Systems of Limited Size and Class 2 Behavior: A New Kind of Science | Online by Stephen Wolfram [Page 260]
In such a case, the pattern must repeat itself with a period of at most 2^n steps, where n is the size of the pattern.
In a class 2 system with random initial conditions, a similar thing happens: since different parts of the system do not communicate with each other, they all behave like separate patterns of limited
size. And in fact in most class 2 cellular automata these patterns are effectively only a few cells across, so that their repetition periods are necessarily quite short.
Repetition periods for various cellular automata as a function of size. The initial conditions used in each case consist of a single black cell, as in the pictures on the previous page. The dashed
gray line indicates the maximum possible repetition period of 2^n. The maximum repetition period for rule 90 is 2^(n - 1)/2 - 1. For rule 30, the peak repetition periods are of order 2^(0.63 n), while for rule 45, they are close to 2^n (for n = 29, for example, the period is 463,347,935, which is 86% of the maximum possible). For rule 110, the peaks seem to increase roughly like n^3.
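The "at most 2^n" bound follows from the pigeonhole principle: n two-state cells admit only 2^n distinct configurations, so some configuration must recur within 2^n steps, after which the evolution cycles. A small illustrative sketch (Python; the rule-90 update and cyclic boundary conditions are assumptions made for this demo, not taken from the book's exact setup):

```python
def step_rule90(cells):
    """One rule 90 update: each cell becomes the XOR of its two neighbors (cyclic boundary)."""
    n = len(cells)
    return tuple(cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n))

def repetition_period(n):
    """Length of the cycle eventually entered, starting from a single black cell."""
    state = tuple(1 if i == n // 2 else 0 for i in range(n))
    seen = {state: 0}
    t = 0
    while True:
        t += 1
        state = step_rule90(state)
        if state in seen:
            return t - seen[state]
        seen[state] = t

periods = {n: repetition_period(n) for n in range(3, 11)}
assert all(p <= 2 ** n for n, p in periods.items())  # the pigeonhole bound from the text
```

The same period-detection loop works for any fixed-size cellular automaton, since the state space is finite.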
Hello Please Help Thanks
Answer: during evaporation
Step-by-step explanation: The Sun heats up the water, which causes it to evaporate, so the last choice is most likely.
Step-by-step explanation:
A number is divisible by 11 if and only if the difference between the sums of its alternate digits is divisible by 11.
Here the number is x557x55. The difference between the sums of alternate digits is
\(\longrightarrow S=(x+5+x+5)-(5+7+5)\)
\(\longrightarrow S=2x-7\)
By this rule, the number x557x55 is divisible by 11 if and only if \(S=2x-7\) is divisible by 11, i.e., for some integer \(a\),
\(\longrightarrow 2x-7=11a\)
\(\longrightarrow 2x=11a+7\quad\dots(1)\)
We separate the RHS as follows.
\(\longrightarrow 2x=10a+a+6+1\)
\(\longrightarrow 2x=10a+6+a+1\)
\(\longrightarrow 2x=2(5a+3)+(a+1)\)
Here the LHS is an even integer, since x is a digit, i.e., an integer. So the RHS must also be an even integer.
Since \(2(5a+3)\) is even, \(a+1\) must be even. So let
\(\longrightarrow a+1=2k\)
\(\longrightarrow a=2k-1\)
Then (1) becomes
\(\longrightarrow 2x=11(2k-1)+7\)
\(\longrightarrow 2x=22k-4\)
\(\longrightarrow x=11k-2\quad\dots(2)\)
So this is the expression for x, for any integer k.
As x is a digit of x557x55, it is a single-digit integer, so its value lies between 1 and 9 [x ≠ 0 because it is also the leftmost digit of x557x55], i.e.,
\(\longrightarrow 1\leq x\leq 9\)
From (2),
\(\longrightarrow 1\leq 11k-2\leq 9\)
Adding 2,
\(\longrightarrow 3\leq 11k\leq 11\)
Dividing by 11,
\(\longrightarrow\dfrac{3}{11}\leq k\leq 1\)
Since k is an integer, we get
\(\longrightarrow k=1\)
Then from (2),
\(\longrightarrow x=11(1)-2\)
Hence the value of x is 9.
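The algebraic argument above can be cross-checked by brute force over the nine admissible leading digits (an illustrative Python sketch):

```python
# Brute-force check: which leading digits x make x557x55 divisible by 11?
solutions = [x for x in range(1, 10)            # x != 0, since x is the leading digit
             if int(f"{x}557{x}55") % 11 == 0]
assert solutions == [9]

# Cross-check via the alternating-digit-sum criterion S = 2x - 7.
assert [x for x in range(1, 10) if (2 * x - 7) % 11 == 0] == [9]
```

Both checks agree with the derivation: x = 9 is the only digit that works.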
Absolute conductivity reconstruction in magnetic induction tomography using a nonlinear method
Magnetic induction tomography (MIT) attempts to image the electrical and magnetic characteristics of a target using impedance measurement data from pairs of excitation and detection coils. This
inverse eddy current problem is nonlinear and also severely ill posed so regularization is required for a stable solution. A regularized Gauss-Newton algorithm has been implemented as a nonlinear,
iterative inverse solver. In this algorithm, one needs to solve the forward problem and recalculate the Jacobian matrix for each iteration. The forward problem has been solved using an edge based
finite element method for magnetic vector potential A and electrical scalar potential V, a so called A, A - V formulation. A theoretical study of the general inverse eddy current problem and a
derivation, paying special attention to the boundary conditions, of an adjoint field formula for the Jacobian is given. This efficient formula calculates the change in measured induced voltage due to
a small perturbation of the conductivity in a region. This has the advantage that it involves only the inner product of the electric fields when two different coils are excited, and these are
convenient computationally. This paper also shows that the sensitivity maps change significantly when the conductivity distribution changes, demonstrating the necessity for a nonlinear reconstruction
algorithm. The performance of the inverse solver has been examined and results presented from simulated data with added noise.
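To make the iterative reconstruction loop concrete, here is a minimal sketch of one regularized Gauss-Newton iteration on a toy one-parameter problem (fitting y = exp(a·t)). The actual MIT solver uses a finite-element forward model and the adjoint-field Jacobian described above, so everything below (the model, data, and damping value) is purely illustrative:

```python
import math

# Toy "forward problem": y(t) = exp(a * t); we recover a from noiseless synthetic data.
t_data = [0.0, 1.0, 2.0, 3.0]
a_true = 0.5
y_data = [math.exp(a_true * t) for t in t_data]

a = 0.0        # initial guess for the unknown parameter
lam = 1e-6     # damping term playing the role of the regularization parameter
for _ in range(20):
    model = [math.exp(a * t) for t in t_data]        # solve the forward problem
    resid = [y - m for y, m in zip(y_data, model)]   # data misfit
    jac = [t * m for t, m in zip(t_data, model)]     # Jacobian dy/da, recomputed each iteration
    # Regularized normal equations (scalar case): (J^T J + lam) * delta = J^T r
    delta = sum(j * r for j, r in zip(jac, resid)) / (sum(j * j for j in jac) + lam)
    a += delta

assert abs(a - a_true) < 1e-6
```

Note that, exactly as the abstract says, both the forward solve and the Jacobian must be recomputed every iteration because the problem is nonlinear.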
Bibliographical note
ID number: ISI:000242650400001
ANOVA in R | A Complete Guide to ANOVA Model In R
Updated March 20, 2023
Introduction to ANOVA in R
ANOVA in R is a mechanism, facilitated by R programming, for carrying out the statistical technique of ANOVA (analysis of variance), which lets the user test whether the mean of a particular metric is equal across various populations by formulating null and alternative hypotheses. R provides effective functions and packages to implement the concept.
Why ANOVA?
• This technique is used to answer a hypothesis while analyzing multiple groups of data. There are multiple statistical approaches; however, ANOVA is applied when the comparison involves more than two independent groups, such as three different age groups.
• ANOVA technique measures the mean of the independent groups to provide researchers with the result of the hypothesis. In order to get accurate results, sample means, sample size, and standard
deviation from each individual group must be taken into account.
• It is possible to observe the mean of each group individually for comparison. However, this approach has limitations: separate pairwise comparisons don't consider the data as a whole and thus inflate the chance of a type 1 error. R provides a function to conduct an ANOVA analysis examining variability among independent groups of data. There are four stages to conducting the analysis. In the first stage, data is arranged in CSV format, with a column for each variable: one column holds the dependent variable and the remaining columns the independent variables. In the second stage, the data is read into RStudio and named appropriately. In the third stage, the dataset is attached so that individual variables can be read from memory. Finally, the ANOVA model is defined and analyzed. In the sections below, I've provided a couple of case-study examples in which ANOVA techniques should be used.
• Six insecticides were tested on 12 fields each, and the researchers counted the number of bugs that remained in each field. The farmers need to know whether the insecticides make any difference and which one is best to use. You can answer this question by using the aov() function to perform an ANOVA.
• Fifty patients received one of five cholesterol-reducing drug treatments (trt). Three of the treatment conditions involved the same drug administered as 20 mg once per day (1 time), 10 mg twice per day (2 times), or 5 mg four times per day (4 times). The two remaining conditions (drugD and drugE) represented competing drugs. Which drug treatment produced the greatest cholesterol reduction?
ANOVA One-Way in R
• The one-way method is one of the basic ANOVA techniques, in which variance analysis is applied and the mean values of multiple population groups are compared.
• One-way ANOVA gets its name from the one-way classification of the data: there is a single dependent variable and one independent (grouping) variable.
• For example, we will perform the ANOVA technique on the cholesterol dataset from the multcomp package. The dataset consists of two variables: trt (treatments at 5 different levels) and a response variable. The independent variable is the drug-treatment group; the dependent variable is the response whose means are compared across the groups.
library(multcomp)
attach(cholesterol)
aov_model <- aov(response ~ trt)
summary(aov_model)
From these results, you can confirm that taking the 5 mg dose 4 times a day was better than taking a 20 mg dose once a day, and that drug D has better effects compared to drug E.
The ANOVA F test for treatment (trt) is significant (p < .0001), providing evidence that the five treatments aren’t all equally effective.
The plotmeans() function in the gplots package can be used to produce a graph of group means and their confidence intervals. This clearly shows treatment differences.
plotmeans(response ~ trt, xlab="Treatment", ylab="Response",
main="Mean Plot\nwith 95% CI")
Let’s examine the output from TukeyHSD() for pairwise differences between group means.
The mean cholesterol reductions for 1 time and 2 times aren't significantly different from each other (p = 0.138), whereas the difference between 1 time and 4 times is statistically significant.
par(mar=c(5,8,4,2))  # increase left margin
plot(TukeyHSD(aov_model), las = 2)
Confidence in results depends on the degree to which your data satisfies the assumptions underlying the statistical tests. In a one-way ANOVA, the dependent variable is assumed to be normally distributed and to have equal variance in each group. You can use a Q-Q plot to assess the normality assumption:
library(car)
qqPlot(lm(response ~ trt, data=cholesterol), simulate=TRUE, main="Q-Q Plot", labels=FALSE)
The dotted lines form a 95% confidence envelope, suggesting that the normality assumption has been met fairly well. ANOVA also assumes that variances are equal across groups or samples. The Bartlett test can be used to verify that assumption:
bartlett.test(response ~ trt, data=cholesterol)
Bartlett's test indicates that the variances in the five groups don't differ significantly (p = 0.97).
ANOVA is also sensitive to outliers; you can test for outliers using the outlierTest() function in the car package. You may need to update your car library first:
update.packages(checkBuilt = TRUE)
install.packages("car", dependencies = TRUE)
From the output, you can see that there’s no indication of outliers in the cholesterol data (NA occurs when p > 1). Taking the Q-Q plot, Bartlett’s test, and outlier test together, the data appear to
fit the ANOVA model quite well.
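Under the hood, the F statistic reported by aov() is the ratio of between-group to within-group mean squares. A minimal pure-Python sketch of that computation (the sample groups are invented for illustration; in R you would simply call aov() as shown above):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic: between-group mean square over within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total number of observations
    grand = sum(sum(g) for g in groups) / n  # grand mean
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)  # between-group SS
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)    # within-group SS
    return (ssb / (k - 1)) / (ssw / (n - k))

# Identical group means -> no between-group variability -> F = 0.
assert one_way_f([[1, 2, 3], [2, 3, 1], [3, 1, 2]]) == 0.0

# Widely separated group means -> large F (strong evidence the means differ).
assert one_way_f([[1, 2, 3], [11, 12, 13]]) > 1
```

A large F (with small p-value) is exactly what the significant result for trt above reflects: variation between treatment groups dominates variation within them.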
Two-Way ANOVA in R
In the two-way ANOVA test, another variable is added. When there are two independent variables and one continuous dependent variable, we need the two-way ANOVA rather than the one-way technique used in the previous case. For a two-way ANOVA to be valid, several assumptions need to be satisfied:
• Availability of independent observations
• Observations should be normally distributed
• Variance should be equal in observations
• Outliers should not be present
• Independent errors
To demonstrate the two-way ANOVA, another variable called BP is added to the dataset, indicating each patient's blood pressure. We would like to verify whether the response differs statistically with BP and with the dosage given to the patients.
df <- read.csv("file.csv")
anova_two_way <- aov(response ~ trt + BP, data = df)
From the output, it can be concluded that both the trt and BP effects are statistically significant. Hence, the null hypothesis can be rejected.
Benefits of ANOVA in R
• The ANOVA test determines whether there is a difference in means between two or more independent groups. This technique is very useful for analyzing multiple items, which is essential for market analysis. Using the ANOVA test, one can get necessary insights from the data.
• For example, during a product survey where multiple information such as shopping lists, customer likes, and dislikes are collected from the users.
• The ANOVA test helps us to compare groups of the population. The groups could be male vs. female or various age groups. The ANOVA technique helps distinguish which groups' mean values are indeed different.
ANOVA is one of the most commonly used methods for hypothesis testing. This article has performed an ANOVA test on a data set of fifty patients who received cholesterol-reducing drug treatments and has further shown how a two-way ANOVA can be performed when an additional independent variable is available.
Recommended Articles
This is a guide to ANOVA in R. Here we discuss the one-way and two-way ANOVA models along with respective examples and the benefits of ANOVA. You can also go through our other suggested articles to learn more.
IFERROR Function
Return value
The value you specify for error conditions.
• value - The value, reference, or formula to check for an error.
• value_if_error - The value to return if an error is found.
How to use
The IFERROR function returns a custom result when a formula returns an error and a normal result when a formula calculates without an error. The typical syntax for the IFERROR function looks like this:
=IFERROR(formula,custom)
In the example above, "formula" represents a formula that might return an error, and "custom" represents the value that should be returned if the formula returns an error. This makes IFERROR an elegant way to trap and manage errors in one step. Before the introduction of IFERROR, it was necessary to use more complicated nested IF statements together with the older ISERROR function.
You can use the IFERROR function to trap and handle errors produced by other formulas or functions. IFERROR checks for the following errors: #N/A, #VALUE!, #REF!, #DIV/0!, #NUM!, #NAME?, or #NULL!.
Example 1 - Trap #DIV/0! errors
In the example shown, the formula in E5 copied down is:
This formula catches the #DIV/0! error that occurs when the quantity field is empty or zero, and replaces it with zero. You are free to change the zero (0) to suit your needs. For example, to display
nothing, you could use an empty string ("") instead of zero:
This version of the formula will trap the #DIV/0! error and return an empty string, which looks like a blank cell.
Example 2 - request input before calculating
Sometimes you may want to suppress a calculation until the worksheet receives specific input. For example, if A1 contains 10, B1 is blank, and C1 contains the formula =A1/B1, the following formula
will return a #DIV/0 error if B1 is empty:
=A1/B1 // returns #DIV/0! if B1 is empty
The formula below has been modified to use the IFERROR function to trap the #DIV/0! error and remap it to the message "Please enter a value in B1".
=IFERROR(A1/B1,"Please enter a value in B1")
As long as B1 is empty, C1 will display the message "Please enter a value in B1". When a number is entered in B1, the formula will return the result of A1/B1.
Example 3 - Sum and ignore errors
A common problem in Excel is that errors in data will corrupt the results of other formulas. For example, in the worksheet shown below, the goal is to sum values in the range D5:D15. However, because
the range D5:D15 contains #N/A errors, the SUM function will return #N/A:
=SUM(D5:D15) // returns #N/A
To ignore the #N/A errors and sum the remaining values, we can adjust the formula to use the IFERROR function like this:
=SUM(IFERROR(D5:D15,0)) // returns 152.50
Essentially, we use IFERROR to map the errors to zero and then sum the result. For more details and alternatives, see Sum and ignore errors.
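The "map errors to a fallback, then aggregate" pattern is not Excel-specific. Here is a sketch of the same idea in Python (the helper function and sample values are hypothetical, for illustration only):

```python
def iferror(compute, value_if_error=0):
    """Mimic Excel's IFERROR: return compute()'s result, or a fallback if it raises."""
    try:
        return compute()
    except Exception:
        return value_if_error

# A column of divisions, one of which fails (quantity of zero),
# analogous to a #DIV/0! cell in a worksheet.
totals = [100.0, 50.0, 2.5]
quantities = [4, 0, 5]

prices = [iferror(lambda t=t, q=q: t / q) for t, q in zip(totals, quantities)]
assert prices == [25.0, 0, 0.5]   # the failing division was mapped to zero
assert sum(prices) == 25.5        # then summed, as in =SUM(IFERROR(...,0))
```

As in the worksheet version, the key design choice is replacing each error with a neutral value before aggregating, rather than letting one bad cell poison the whole sum.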
Example 4 - VLOOKUP #N/A
When VLOOKUP cannot find a lookup value, it returns an #N/A error. You can use the IFERROR function to catch the #N/A error VLOOKUP throws when a lookup value isn't found like this:
=IFERROR(VLOOKUP(value,data,column,0),"Not found")
In this example, the IFERROR function evaluates the result returned by VLOOKUP. If no error is present, the result is returned normally. However, if VLOOKUP returns an #N/A error, IFERROR catches the
error and returns "Not found".
IFERROR or IFNA?
The IFERROR function is useful, but it is a rather blunt instrument that will trap all kinds of errors. For example, if a function is misspelled in a formula, Excel will return the #NAME? error and
IFERROR will catch that error too, and return an alternate result. This can cause IFERROR to hide an important problem. In most cases, it makes more sense to use the IFNA function with VLOOKUP
instead of IFERROR.
=IFNA(VLOOKUP(value,data,column,0),"Not found")
Unlike IFERROR, IFNA only traps the #N/A error.
Other error functions
Excel provides several error-related functions, each with a different behavior:
• If value is empty, it is evaluated as an empty string ("") and not an error.
• If value_if_error is supplied as an empty string (""), no message is displayed when an error is detected.
• In Excel 2013+, you can use the IFNA function to trap and handle #N/A errors specifically.
Unlocking the Mysteries of Rotating Black Holes: Could They Lead to Other Universes? - Science Teacher Stuff
Rotating black holes have long enthralled scientists because of their peculiar dynamics and their capacity to reveal basic cosmic truths. In this article, we investigate the scientific underpinnings of black hole formation, the frame-dragging effects of their spin, and the speculative hypothesis that they might serve as gates to other realms. Motivated by my love of astronomy, I have studied these topics and am keen to discuss how concepts like energy extraction and time travel could one day alter our knowledge of the cosmos, all centered on the key topic of rotating black holes.
What Makes Rotating Black Holes Unique?
Ever wonder what happens when a cosmic giant whirls at amazing rates? This question has enthralled both physicists and science fiction aficionados, leading to the intriguing realm of rotating black holes. These vast objects are not only big; they rotate, distorting spacetime and producing amazing phenomena.
The Dance of Spacetime: Frame Dragging
Imagine a spinning top. As it spins, the top pulls the air around it to produce a vortex-like effect. In a similar vein, a rotating black hole pulls spacetime around it, producing a vortex-like action called frame dragging. This effect follows directly from Einstein's theory of general relativity, which treats gravity as a curvature of spacetime. With their spin, rotating black holes twist spacetime until it resembles a huge whirlpool, and the closer you approach, the more spacetime gets dragged along, resulting in a mesmerizing ballet of forces.
Exploring the Mysteries of Rotating Black Holes: The Penrose Process
The Penrose process is among the most fascinating features of rotating black holes. This speculative mechanism might be able to harvest energy from these celestial power plants, and it takes advantage of the frame-dragging effect. Consider a particle dispatched toward the black hole. As it nears the event horizon, the limit beyond which nothing, not even light, can escape, the particle divides into two. One part falls into the singularity, the infinitely dense point at the center of the black hole, while the other escapes with more energy than the original particle carried. It is like getting a free energy boost from the spin of the black hole itself!
The Potential for Inter-Universal Travel?
The notion that revolving black holes might act as gates to other worlds is maybe the most amazing feature of them. Though rather theoretical, this concept has captivated many people. Certain
theoretical models propose that the frame dragging effect may produce shortcuts or wormholes across spacetime, therefore enabling travel to other worlds. Nevertheless, this is a quite unlikely but
interesting option given the strong gravitational forces and the complexity of spacetime close to a rotating black hole.
Astrophysics’s exciting frontier is the study of whirling black holes and their special qualities. It may open more profound cosmic secrets, therefore changing our knowledge of the universe and our
role within it. Though the prospect of inter-universal travel sounds more like science fiction than science reality, it is evidence of the countless opportunities found in the large cosmic terrain.
Can Black Holes Be Used as Energy Sources?
Ever considered what drives the universe? Alternatively, what secrets lurk outside our current reality? For millennia, intellectuals and scientists have been enthralled with these topics; one of the
most fascinating opportunities resides in the depths of space and with an enigmatic entity known as a black hole. Some of the most important riddles of the cosmos can be solved by these cosmic giants
with their great gravitational pull and capacity to twist spacetime.
Tapping Into the Power of Black Holes
A feature of “black holes” that most intrigues me is the Penrose process. This concept suggests how one can get energy from these cosmic powerhouses. Imagine yourself aiming a particle toward a
“black hole”. The particle divides in two when it gets near the event horizon, the point of no return from which even light cannot escape. One reaches the “black hole”‘s center and falls into the
“singularity,” a location of high density and gravity; the other escapes using more energy than the original particle carried. It’s like having the spinning “black hole” itself provide a free energy
boost! The “black hole”‘s “frame dragging” property—where its spin drags “spacetime” around it—allows this procedure. Imagine a spinning ice skater dragging the ice around with them—just as a
whirling “black hole” drags “spacetime”.
Exploring the Possibilities of Black Hole Portals
Although using the energy of “black holes” is a great thought, another fascinating idea connects to these celestial giants. These portals, then, are doors to another “spacetime”. Consider “spacetime”
as a fabric; a “black hole” is like a heavy object laid on that fabric to produce a deep well. Einstein’s theory of general relativity holds that “black holes” can generate these portals allowing
items to enter and maybe leave to a different area of “spacetime”. The ramifications of this idea are astounding and might completely change our conception of “space” and “time”.
The Challenges and Future Prospects
There are difficulties even if using “black holes” and investigating these portals is quite fascinating. Sending things near the “event horizon” is quite challenging due to the great “gravitational
force” close to a “black hole”. Furthermore, theoretically, the Penrose process and the idea of “black hole portals” need more study and technological developments to confirm their feasibility.
Notwithstanding the difficulties, “black holes” have aroused great scientific interest. To grasp and maybe use these celestial giants, researchers are vigorously investigating fresh theoretical
models and technology. Scientists are working nonstop to reveal the secrets of these cosmic powerhouses, from building new technology for “space” travel to examining the behavior of “black holes”
through sophisticated simulations. Maybe one day the stars will be a source of energy and travel, not merely of light, therefore driving a future we can only dream of.
Could Rotating Black Holes Lead to Other Universes?
Have you ever looked up into the night sky and pondered whether other worlds, beyond our reach with our telescopes, exist? For millennia, people have been enthralled with this subject; now, thanks to
modern physics, we are beginning to explore the idea that our cosmos may not be unique. One of the most fascinating directions of inquiry is the study of rotating black holes, those cosmic behemoths with great gravitational force that twist spacetime around them. Might the secrets of inter-universal travel be revealed by these celestial giants?
Imagine a spinning top; as it whirls, it pulls the surrounding air into a vortex-like pattern. Comparably, a “rotating black hole” pulls spacetime around it to produce a vortex-like action known as
“frame dragging.” Einstein’s theory of general relativity—which treats gravity as a curvature of spacetime—directly produces this effect. According to Einstein’s theory, the curvature of spacetime
itself manifests itself as gravity rather than only a force. Massive objects such as “rotating black holes” can thus distort spacetime around them to produce a kind of gravitational well. Fascinating
consequences of this warping of spacetime relate to the nature of reality and the possibility for inter-universal travel.
“Black Hole Portals” and the “Frame Dragging” Effect
Certain theoretical models propose that this “frame dragging” phenomenon may produce “wormholes,” or shortcuts across spacetime, therefore enabling travel to other worlds. Imagine if we could pass
via a “rotating black hole” into another universe! A “wormhole” is a hypothetical tunnel across spacetime that might link two distinct places in our world or perhaps two different universes. Consider
it as a tunnel through a mountain; it lets you pass from one side of the mountain to the other without crossing the summit. However, the concept of "wormholes" remains very much within theoretical physics.
The Penrose Process: A Gateway to “Energy Extraction from Black Holes”
Rotating black holes have among its most fascinating features the Penrose process, a hypothesized method enabling “energy extraction from black holes.” Using a black hole’s frame dragging property,
scientists speculate that the Penrose process might be used to extract energy from it. The theory holds that a particle can split in two, one of which descends into the black hole and the other of
which runs away with more energy than the initial particle.
The Search for Evidence: A Cosmic Treasure Hunt
In theoretical physics, the concept of “black hole portals” is still very much a work in progress. Although there is no direct evidence for the idea, its theoretical possibilities are intriguing enough to warrant continued investigation. Using sophisticated telescopes and simulations of the behavior of rotating black holes, scientists are actively searching for signs of such portals.
One approach is to watch how matter behaves close to rotating black holes. A spinning black hole deforms the spacetime in its surroundings, and this distortion can accelerate matter to extraordinarily high speeds, producing signals detectable with telescopes.
Understanding the “Event Horizon”
To explore these riddles further, it helps to understand the event horizon, a fundamental element in the behavior of rotating black holes. The event horizon is the boundary surrounding a black hole beyond which nothing, not even light, can escape. Once something crosses the event horizon, it is permanently trapped by the black hole’s gravitational pull. There is no return.
The event horizon is a fascinating idea that highlights the extreme nature of rotating black holes: a point of no return, often described as the place where the familiar rules of physics break down, beyond which the black hole’s gravity allows nothing, not even light, to escape.
The Search Continues
The riddles of rotating black holes, and their possible role as portals to other worlds, remain a source of great fascination in astrophysics. Even if the evidence stays elusive, the pursuit of this knowledge keeps pushing the frontiers of our understanding of the cosmos and stimulating scientific activity. Though we may never pass through a rotating black hole into another universe, the quest to understand these enigmatic objects will inspire generations of dreamers and scientists to come.
If you have read this far, I can already tell you find the secrets of the cosmos as fascinating as I do! One article that truly helped me grasp cosmic motion more fully is “Why Do Celestial Bodies Rotate? Deciphering Rotational Motion: Physics Unleashed,” an eye-opening analysis of what drives the rotation of everything from planets to black holes.
If you would like more information about black holes specifically, I suggest Wikipedia’s “Rotating black hole” entry for a comprehensive discussion, and Britannica’s “The Concept of Multiverse” article is absolutely worth reading if you find the mind-bending possibility of many worlds intriguing. I believe you will find these resources equally fascinating; they helped me make the connections!
1 thought on “Unlocking the Mysteries of Rotating Black Holes: Could They Lead to Other Universes?”
Free boundary minimal surfaces with connected boundary and arbitrary genus
Staff Dipartimento di Matematica
Università degli Studi Trento
38123 Povo (TN)
Tel +39 04 61/281508-1625-1701-3898-1980.
dept.math [at] unitn.it
Venue: Povo Zero, via Sommarive 14 (Povo) – Seminar Room "-1"
Time: 14:30
• Alessandro Carlotto (ETH Zürich)
Besides their self-evident geometric significance, which can be traced back at least to Courant, free boundary minimal surfaces also naturally arise in partitioning problems for convex bodies, in
capillarity problems for fluids and, as has significantly emerged in recent years thanks to work of Fraser and Schoen, in connection to extremal metrics for Steklov eigenvalues for manifolds with
boundary (i.e. for eigenvalues of the corresponding Dirichlet-to-Neumann map). The theory has been developed in various interesting directions, yet many fundamental questions remain open. One of the
most basic ones can be phrased as follows: does the Euclidean unit ball contain free boundary minimal surfaces of any given topological type? In spite of significant advances, the answer to such a
question has proven to be very elusive. I will present some joint work with Giada Franz and Mario Schulz where we answer (in the affirmative) the well-known question whether there exist in B^3
(embedded) free boundary minimal surfaces of genus one and one boundary component. In fact, we prove a more general result: for any g there exists in B^3 an embedded free boundary minimal surface of
genus g and connected boundary. The proof builds on global variational methods, in particular on a suitable equivariant counterpart of the Almgren-Pitts min-max theory, and on a striking application
of Simon's lifting lemma.
Contact: Lorenzo Mazzieri
Paired Sample T-Test in R
The paired sample t-test is designed for comparing two related groups. This statistical tool is ideal when data points are linked, such as before-and-after measurements or paired observations. It
helps determine if there's a significant difference between the two measurements within each pair.
At The Statistics Assignment Help, we will take a closer look at the mechanics of the paired sample t-test, including its calculations, its assumptions, and how to interpret its results.
What is Paired Data?
Paired data refers to observations linked together. This occurs when data is collected from the same individuals or matched pairs under different conditions or time points. This relationship between
data points is crucial for statistical analysis.
Examples of paired data include:
• Pre-test and post-test scores of students
• Blood pressure measurements before and after medication
• Weight measurements of individuals before and after a diet program
For paired data, it is essential to use appropriate analysis techniques, such as the paired sample t-test.
What is Paired Sample T-Test?
The paired sample t-test compares the means of two related groups. It is designed for situations where each data point in one group has a corresponding pair in the other, such as before-and-after measurements.
The T-test statistic measures the difference between the sample means relative to the variability within the pairs.
• Null hypothesis (H0): There is no difference between the means of the two paired groups.
• Alternative hypothesis (H1): There is a significant difference between the means of the two paired groups.
• Test statistic: The t-statistic estimates the difference between the paired group means relative to the within-pairs variation.
• P-value: The probability, assuming the null hypothesis is true, of observing a result at least as extreme as the one obtained. If the p-value is smaller than 0.05, the null hypothesis is rejected, since the observed data would be highly unlikely under it.
Conducting a Paired Sample T-Test in R
The paired sample t-test is a statistical technique used to compare the means of two related groups. In R, conducting this test involves several steps:
• Data Preparation: Begin by importing the data into R with functions such as read.csv() or read.table(). Make sure the data is organized in the correct format, typically two columns holding the paired observations.
• Calculate the Difference Scores: Create a new variable measuring the difference between the corresponding observations of the paired variables. This can be done with R's vectorized arithmetic.
• Perform the T-Test: Run the paired sample t-test with R's t.test() function, passing the argument paired = TRUE to specify a paired design.
• Interpret the Results: Examine the output in detail; it includes the t-statistic, degrees of freedom, p-value, and confidence interval. A p-value of 0.05 or less indicates a significant difference between the paired means.
By following these steps and understanding the output, you can effectively conduct and interpret paired sample t-tests in R.
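The steps above can be put together in a short, self-contained sketch; the before/after numbers below are invented purely for illustration:

```r
# Hypothetical before/after measurements for 8 subjects (invented data)
before <- c(200, 190, 210, 205, 198, 215, 220, 202)
after  <- c(195, 186, 205, 202, 194, 210, 214, 199)

# Step 2: difference scores via vectorized arithmetic
diffs <- before - after

# Step 3: paired t-test (equivalent to a one-sample t-test on diffs)
result <- t.test(before, after, paired = TRUE)

# The t-statistic is mean(diffs) / (sd(diffs) / sqrt(n))
n <- length(diffs)
t_manual <- mean(diffs) / (sd(diffs) / sqrt(n))

print(result)   # t-statistic, df, p-value, and confidence interval
```

With these particular numbers the p-value falls well below 0.05, so the null hypothesis of no difference would be rejected; a paired test on your own data follows the same pattern.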
Assumptions and Interpreting Results
The paired sample t-test, like other statistical tests, operates under certain assumptions. The most crucial assumption is the normality of the differences between the paired observations. This means
the differences should follow a normal distribution. While the t-test is relatively robust to violations of normality, severe departures can impact the results.
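One common way to check this assumption in R is a Shapiro-Wilk test on the difference scores; the data below are invented for illustration, and shapiro.test() is part of base R's stats package:

```r
# Invented before/after data for illustration
before <- c(200, 190, 210, 205, 198, 215, 220, 202)
after  <- c(195, 186, 205, 202, 194, 210, 214, 199)
diffs  <- before - after

# Shapiro-Wilk test of normality on the differences:
# a p-value above 0.05 gives no evidence against normality
sw <- shapiro.test(diffs)
print(sw)
```

A significant Shapiro-Wilk result (p below 0.05) would suggest switching to a nonparametric alternative such as the Wilcoxon signed-rank test, wilcox.test(..., paired = TRUE).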
Here is how to interpret the output of a paired sample t-test:
• T-value: The t-statistic shows how much the group means differ relative to the within-pair variability.
• P-value: The probability of obtaining a result at least this extreme if the null hypothesis were true. Low p-values (below 0.05) favor rejecting the null hypothesis.
• Confidence interval: A range of plausible values for the true difference between the two means.
Learning equivalence classes of directed acyclic latent variable models from multiple datasets with overlapping variables, incl. discussion by Ricardo Silva
Published on May 06, 2011 · 3910 views
While there has been considerable research in learning probabilistic graphical models from data for predictive and causal inference, almost all existing algorithms assume a single dataset of i.i.d. samples.
Learning equivalence classes of acyclic models with latent and selection variables from multiple datasets with overlapping variables [00:00]
Learning from single i.i.d. dataset [00:13]
Learning from multiple datasets with overlapping variables [00:59]
Examples: Learning neural cascades during cognitive tasks [01:54]
Formal Problem Statement (1) [03:37]
Formal Problem Statement (2) [03:56]
Formal Problem Statement (3) [04:23]
Formal Problem Statement (4) [04:28]
Formal Problem Statement (6) [04:40]
Errors due to latent variables (1) [05:32]
Errors due to latent variables (2) [05:55]
Errors due to latent variables (3) [06:06]
Errors due to latent variables (4) [06:26]
Maximal Ancestral Graphs (1) [06:50]
Maximal Ancestral Graphs (2) [07:30]
Maximal Ancestral Graphs (3) [07:53]
Maximal Ancestral Graphs (4) [08:21]
Maximal Ancestral Graphs (5) [08:37]
Markov Equivalence and PAGs (1) [10:48]
Markov Equivalence and PAGs (2) [11:45]
Restated Goal [12:54]
Related Approach: ION Algorithm (1) [13:59]
Related Approach: ION Algorithm (2) [15:11]
Related Approach: ION Algorithm (3) [15:38]
Related Approach: ION Algorithm (4) [15:50]
Related Approach: ION Algorithm (5) [16:41]
Related Approach: ION Algorithm (6) [17:09]
Related Approach: ION Algorithm (7) [17:17]
Conditional Independence Testing with Multiple Datasets (1) [17:40]
Conditional Independence Testing with Multiple Datasets (2) [17:54]
Conditional Independence Testing with Multiple Datasets (3) [18:09]
Conditional Independence Testing with Multiple Datasets (4) [18:27]
Conditional Independence Testing with Multiple Datasets (5) [18:45]
Conditional Independence Testing with Multiple Datasets (6) [18:49]
Conditional Independence Testing with Multiple Datasets (7) [19:06]
Conditional Independence Testing with Multiple Datasets (8) [19:24]
The Integration of Overlapping Datasets (IOD) Algorithm (1) [19:53]
The Integration of Overlapping Datasets (IOD) Algorithm (2) [20:18]
The Integration of Overlapping Datasets (IOD) Algorithm (3) [20:37]
The Integration of Overlapping Datasets (IOD) Algorithm (4) [21:11]
The Integration of Overlapping Datasets (IOD) Algorithm (5) [21:34]
Removing edges and adding orientations (1) [21:58]
Removing edges and adding orientations (2) [22:15]
Removing edges and adding orientations (3) [22:39]
Removing edges and adding orientations (4) [22:44]
Removing edges and adding orientations (5) [23:33]
Removing edges and adding orientations (6) [24:16]
Removing edges and adding orientations (7) [24:50]
Removing edges and adding orientations (8) [24:54]
Example (1) [25:14]
Example (2) [25:30]
Example (3) [25:38]
Example (4) [25:51]
Example (5) [26:20]
Correctness and Completeness (1) [26:34]
Correctness and Completeness (2) [26:49]
Correctness and Completeness (3) [26:59]
Simulations - 2 Datasets, |V| = 14 [27:42]
Simulations - 3 Datasets, |V| = 14 [28:06]
Application: Learning neural cascades during cognitive tasks [28:10]
Conclusion (1) [28:54]
Conclusion (2) [29:07]
Conclusion (3) [29:16]
Conclusion (4) [29:27]
Conclusion (5) [29:31]
Conclusion (6) [29:34]
Conclusion (7) [29:38]
Conclusion (8) [29:47]
Discussion of “Learning Equivalence Classes of Acyclic Models with Latent and Selection Variables from Multiple Datasets with Overlapping Variables” [29:59]
On overlapping variables and partial information [30:20]
Built-in robustness [31:10]
On selection bias [31:55]
Beyond independence constraints (1) [32:33]
Beyond independence constraints (2) [33:07]
The Bayesian approach [33:44]
Related problems: finding substructure by generalizing penalized composite likelihood? [34:13]
Other approaches: generalizing penalized composite likelihood? [34:44]
Lesson 11
A New Way to Measure Angles
Problem 1
Here is a central angle that measures 1.5 radians. Select all true statements.
The radius is 1.5 times longer than the length of the arc defined by the angle.
The length of the arc defined by the angle is 1.5 times longer than the radius.
The ratio of arc length to radius is 1.5.
The ratio of radius to arc length is 1.5.
The area of the whole circle is 1.5 times the area of the slice.
The circumference of the whole circle is 1.5 times the length of the arc formed by the angle.
Problem 2
Match each arc length \(\ell\) and radius \(r\) with the measure of the central angle of the arc in radians.
Problem 3
Han thinks that since the arc length in circle A is longer, its central angle is larger. Do you agree with Han? Show or explain your reasoning.
Problem 4
Circle B is a dilation of circle A.
1. What is the scale factor?
2. What is the area of the 15 degree sector in circle A?
3. What is the area of the 15 degree sector in circle B?
4. What is the ratio of the areas of the sectors?
5. How does the ratio of areas of the sectors compare to the scale factor?
Problem 5
Priya and Noah are riding different size Ferris wheels at a carnival. They started at the same time. The highlighted arcs show how far they have traveled.
1. How far has Noah traveled?
2. How far has Priya traveled?
3. If the Ferris wheels will each complete 1 revolution, who do you think will finish first?
Problem 6
A circle has radius 8 units, and a central angle is drawn in. The length of the arc defined by the central angle is \(4\pi\) units. Find the area of the sector outlined by this arc.
Problem 7
Clare is trying to explain how to find the area of a sector of a circle. She says, “First, you find the area of the whole circle. Then, you divide by the radius.” Do you agree with Clare? Explain or show your reasoning.
Problem 8
Line \(BD\) is tangent to a circle with diameter \(AB\). List 2 right angles.
Gravitational collapse
Gravitational collapse in astronomy is the sudden inward fall of a massive body under the influence of the force of gravity. It occurs when all other forces fail to supply a sufficiently high
pressure to counterbalance gravity and keep the massive body in (dynamical) equilibrium. Gravitational collapse is at the heart of the structure formation in the universe. An initial smooth
distribution of matter will eventually collapse and cause the hierarchy of structures, such as clusters of galaxies, stellar groups, stars and planets. For example, a star is born through the
gravitational collapse of a cloud of interstellar matter. The compression caused by the collapse raises the temperature until nuclear fuel ignites in the center of the star and the collapse comes to
a halt. The thermal pressure gradient (leading to expansion) compensates the gravity (leading to compression) and a star is the dynamical equilibrium between these two forces.
More specifically the term gravitational collapse refers to the gravitational collapse of a star at the end of its life time, also called the death of the star. When all stellar energy sources are
exhausted, the interior of a star will undergo a gravitational collapse. In this sense a star is a "temporary" equilibrium state between a gravitational collapse at stellar birth and a gravitational
collapse at stellar death. The end states are called compact stars, either white dwarfs or neutron stars. Very massive stars cannot find a new dynamical equilibrium; they keep contracting. They are
said to undergo a continued gravitational collapse or catastrophic gravitational collapse. With increasing speed the stellar density grows beyond all bounds, and the star shrinks in much less than a second to a pointlike object. This is a physical singularity, a problem unsolved in present-day physics. At the final stage the density of matter is so high that current gravitational theories do not apply. Before the singular state is reached, however, the condensed matter has the properties of a black hole, and the ultimate fate cannot be observed, even in principle.
The gravitational collapse of the interior of a star releases so much energy that the outer layers are blown away in an explosion. The remnants of explosions leading to the formation of white dwarfs
are observed as planetary nebulae. Larger explosions, leading to the formation of a neutron star or black hole, are observed as supernovae, of which remnants can be observed. When the outer layers of
a star are already removed (through a stellar wind for example), a catastrophic gravitational collapse can be seen as a gamma ray burst, a short flash of gamma rays lasting only seconds to minutes
(see also gamma-ray astronomy). Each gamma ray burst marks the birth of a black hole, usually in a very distant galaxy.
Catastrophic gravitational collapse toward a black hole
A general relativistic description of catastrophic gravitational collapse has two points of view: as seen by a comoving observer and as seen by a distant (stationary) observer.
Viewed by a comoving observer
An observer standing on a star in catastrophic gravitational collapse toward the black hole state undergoes free fall (that is, in a comoving frame he does not feel gravity to first order). He feels only the tidal force, the difference between the gravity at his head and at his feet. This force increases beyond all bounds as the star shrinks to a smaller radius, and in the transverse direction the comoving observer is squashed by the combination of the tidal force and the increasing curvature of space.
The free fall ends in a finite proper time, with the observer stretched toward infinite length and squeezed toward zero thickness, as in the limit zero volume is reached and the density grows to infinity.
The comoving observer does not feel any particular force when he passes the Schwarzschild radius (the radius of a black hole, also called the event horizon). In other words, this radius is not a
physical singularity. If the black hole is large, perhaps a supermassive black hole at the center of a galaxy, the tidal forces may not even be strong at this radius. However, his observations of the
outside world change dramatically. During the fall he will see the horizon on the surface rising upward through the gravitational deflection of light. Just below the horizon he will see more and more
light coming from the back of the star until he can see the entire stellar surface. At the same time the part of the sky above him is becoming a smaller and smaller region around his zenith. When he
passes the Schwarzschild radius, nothing is left of the outside world and he can't see any stars in the sky. Instead he sees the (shrinking) stellar surface in every direction. "Direction" becomes
meaningless, or, rather, all directions become down.
Before the free falling observer passes the Schwarzschild radius, a call for help signal can in principle reach the distant Earth or a spaceship. After passing this radius, all the signals he sends
out will fall along with him in the gravitational collapse and never reach the outside world (hence the name event horizon).
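As a rough numerical aside (a sketch, not part of the article), the Schwarzschild radius just mentioned is given by r_s = 2GM/c^2; the constants below are approximate values:

```python
# Schwarzschild radius r_s = 2*G*M/c^2 (approximate physical constants)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # one solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole of the given mass."""
    return 2 * G * mass_kg / c**2

# A stellar black hole of 10 solar masses has a horizon radius of roughly 30 km
print(schwarzschild_radius(10 * M_SUN) / 1000)  # radius in kilometers
```

For one solar mass the result is about 3 km, which is why a star must be compressed enormously before horizon formation becomes relevant.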
Viewed by a distant (stationary) observer
A stationary observer on Earth or in a distant orbit has an entirely different view of the catastrophic gravitational collapse. A clock carried by the free-falling observer sits in a stronger part of the gravitational field and, viewed from a distance, appears to tick slower (gravitational time dilation). Radiation is likewise slowed and is therefore observed at a longer wavelength (gravitational redshift). As the free-falling observer (in his own time) falls faster and faster toward the Schwarzschild radius, the stationary observer sees him approach it ever more slowly and never sees him pass it. Instead the stationary observer sees the collapse grow progressively dimmer and redder, until the entire star, comoving observer included, disappears in much less than a second. The last photon the stationary observer receives comes from a stage of the collapsing star just outside the Schwarzschild radius.
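The time dilation seen by the distant observer can be made quantitative (a sketch under the standard Schwarzschild-exterior assumption, not text from the article): a static clock at radius r > r_s ticks slower by the factor sqrt(1 - r_s/r), which drops to zero at the horizon.

```python
import math

def dilation_factor(r, r_s):
    """Proper-time rate of a static clock at radius r, relative to a distant
    observer, outside a non-rotating (Schwarzschild) black hole; requires r > r_s."""
    return math.sqrt(1.0 - r_s / r)

r_s = 1.0  # work in units of the Schwarzschild radius
for r in (10.0, 2.0, 1.1, 1.01):
    print(r, dilation_factor(r, r_s))
# The factor approaches 0 as r -> r_s: the clock appears to freeze, and light
# emitted near the horizon is redshifted by the inverse of this factor.
```

This is the quantitative content behind "progressively dimmer and redder": both the tick rate and the photon frequency are suppressed by the same factor.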
See also Scientific wager.
An Onsager singularity theorem for Leray solutions of incompressible Navier-Stokes
We study, in the inviscid limit, the global energy dissipation of Leray solutions of incompressible Navier-Stokes on the torus \(\mathbb{T}^d\), assuming that the solutions have norms for the Besov space \(B^{\sigma,\infty}_3(\mathbb{T}^d)\), \(\sigma \in (0, 1]\), that are bounded in the \(L^3\)-sense in time, uniformly in viscosity. We establish an upper bound on energy dissipation of the form \(O(\nu^{(3\sigma-1)/(\sigma+1)})\), vanishing as \(\nu \to 0\) if \(\sigma > 1/3\). A consequence is that Onsager-type 'quasi-singularities' are required in the Leray solutions, even if the total energy dissipation vanishes in the limit \(\nu \to 0\), as long as it does so sufficiently slowly. We also give two sufficient conditions which guarantee the existence of limiting weak Euler solutions \(u\) which satisfy a local energy balance with possible anomalous dissipation due to inertial-range energy cascade in the Leray solutions. For \(\sigma \in (1/3, 1)\) the anomalous dissipation vanishes and the weak Euler solutions may be spatially 'rough' but conserve energy.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Mathematical Physics
• General Physics and Astronomy
• Applied Mathematics
• Onsager's conjecture
• anomalous dissipation
• fluid turbulence
Interview w/ M chief engineer Biermann hints at M3/M4 0-60 in 4.3 seconds and 26 MPG
Originally Posted by
I think the most reliable number we can use is the "80kg less than a similarly equiped E92" quoted by BMW.
Strongly agree.
No cars perform acceleration runs without a driver. That leads me to say that weights should always be reported with driver, with fluids, 90% or 100% fuel (what ever you like). I'm not too fussed
about 7 kg of cargo.
There is not a chance in hell that we'll see a single new M4 with driver weigh in at less than 1500 kg...
1500 kg with no driver, no DCT, no cargo is within about 10 lbs of "80 kg lighter than" 3704 lb assuming it includes DCT. Specifically
1500 + 68(driver) + 7(cargo) + 20(DCT) + 80(difference) = 1675 kg = 3692 lb. I sure hope the 1500 kg figure does not include CSiC brakes... Thus APPLES TO APPLES for a US DCT car is the following:
E92 M3: 3704 lb
F82 M4: 3528 lb (again simply 80 kg less)
Edit: And this 3528 lb should be very close to what the US website/brochure will list.
My earliest estimate/calculation of the car's weight was about 3590 back in December of 2010...
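The unit arithmetic in the post above can be double-checked in a few lines (a quick sanity check, not from the original thread):

```python
# Double-check of the kg/lb arithmetic above
KG_PER_LB = 0.45359237  # exact definition of the pound in kilograms

base, driver, cargo, dct, diff = 1500, 68, 7, 20, 80  # all in kg
total_kg = base + driver + cargo + dct + diff          # 1675 kg
total_lb = total_kg / KG_PER_LB                        # about 3692-3693 lb

e92_lb = 3704                       # quoted E92 M3 curb weight, lb
m4_lb = e92_lb - 80 / KG_PER_LB     # "80 kg lighter" claim

print(total_kg, round(total_lb, 1), round(m4_lb))
```

This reproduces the ~3692 lb sum and the ~3528 lb "apples to apples" F82 M4 figure quoted in the post.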
Colloquia — Spring 2012
Friday, April 27, 2012
Title: The Schwarzian norm and distortion of conformal mappings
Speaker: Peter Duren, University of Michigan
Time: 3:00pm‐4:00pm
Place: PHY 130
Dmitry Khavinson
The Schwarzian derivative of a locally univalent analytic function is $$ Sf=\left(f''/f'\right)'-\frac12\left(f''/f'\right)^2. $$ The Schwarzian norm of a function defined in the unit disk \(\mathbb
{D}\) is given by $$ \|Sf\|=\sup\limits_{z\in\mathbb{D}}\left(1-|z|^2\right)^2|Sf(z)|. $$ We begin by discussing some basic properties of the Schwarzian derivative and its classical applications to
problems of differential equations and conformal mapping. A long-celebrated theorem of Nehari says that if \(\|Sf\|\le 2\) then \(f\) is (globally) univalent in \(\mathbb{D}\). A weaker bound \(\|Sf\
|\le 2\left(1+\delta^2\right)\) for some \(\delta > 0\) is known to imply the sharp lower bound \(d(\alpha,\beta)\ge\pi/\delta\) on the hyperbolic distance between any pair of distinct points \(\
alpha\) and \(\beta\) in \(\mathbb{D}\) where \(f(\alpha)=f(\beta)\). We will outline a proof that any upper bound on the Schwarzian norm is equivalent to both an upper bound and a lower bound on
two-point distortion. It seems remarkable that such precise geometric information is encoded in the Schwarzian norm. Finally, we propose to describe a generalization of Nehari's theorem to harmonic
mappings, or rather their lifts to a minimal surface.
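As a quick illustration (an aside, not part of the abstract): a classical fact behind the Schwarzian's role in conformal mapping is that it vanishes identically on Möbius transformations, which can be checked numerically with finite differences. The sample map below is arbitrary.

```python
# Numerical check (illustrative): the Schwarzian derivative
#   Sf = (f''/f')' - (1/2)(f''/f')^2 = f'''/f' - (3/2)(f''/f')^2
# vanishes identically for Mobius maps f(z) = (a z + b)/(c z + d).

def schwarzian(f, z, h=1e-4):
    # central finite differences for f', f'', f'''
    f1 = (f(z + h) - f(z - h)) / (2 * h)
    f2 = (f(z + h) - 2 * f(z) + f(z - h)) / h**2
    f3 = (f(z + 2*h) - 2*f(z + h) + 2*f(z - h) - f(z - 2*h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

mobius = lambda z: (2 * z + 1) / (z + 3)   # arbitrary Mobius map, ad - bc = 5
print(abs(schwarzian(mobius, 0.5)))        # ~0, up to finite-difference error
```

By contrast, schwarzian(lambda z: z**3, 1.0) returns approximately -4, matching the known formula S(z^n) = (1 - n^2)/(2 z^2).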
Friday, April 20, 2012
Title: Exact solution of the six-vertex model: the Riemann-Hilbert approach
Speaker: Pavel Bleher, Indiana University-Purdue University Indianapolis
Time: 3:00pm‐4:00pm
Place: PHY 130
Razvan Teodorescu
We will review the Riemann-Hilbert approach to the exact solution of the six-vertex model with domain wall boundary conditions in different phase regions. This is a joint project with Karl Liechty.
Thursday, April 19, 2012
Title: How the Genome Folds
Speaker: Erez Lieberman Aiden, Harvard Society of Fellows/
Visiting Faculty at Google
Time: 11:00am‐12:00pm
Place: PHY 120
Nagle Lecture Committee
I describe Hi-C, a novel technology for probing the three-dimensional architecture of whole genomes. Developed together with collaborators at the Broad Institute and UMass Medical School, Hi-C
couples proximity-dependent DNA ligation and massively parallel sequencing.
Our lab employs Hi-C to construct spatial proximity maps of the human genome. Using Hi-C, it is possible to confirm the presence of chromosome territories and the spatial proximity of small,
gene-rich chromosomes. Hi-C maps also reveal an additional level of genome organization that is characterized by the spatial segregation of open and closed chromatin to form two genome-wide
compartments. At the megabase scale, the conformation of chromatin is consistent with a fractal globule, a knot-free conformation that enables maximally dense packing while preserving the ability to
easily fold and unfold any genomic locus. The fractal globule is distinct from the more commonly used globular equilibrium model. Our results demonstrate the power of Hi-C to map the dynamic
conformations of whole genomes.
Friday, April 13, 2012
Title: Construction of Riemann surfaces by using parallel translations
Speaker: Kiyoshi Ohba, Ochanomizu University
Time: 3:00pm‐4:00pm
Place: PHY 130
Masahiko Saito
A Riemann surface is a connected orientable surface with a complex structure. The problem of how to parametrize the complex structures on a fixed topological surface originated with B. Riemann. In
this talk, I will introduce some methods of constructing Riemann surfaces by cutting the complex plane along line segments and pasting by parallel translations, and observe visually the deformation
of complex structures of Riemann surfaces.
Friday, March 30, 2012
Title: Heine, Hilbert, Padé, Riemann, and Stieltjes: John Nuttall’s work 25 years later
Speaker: Andrei Martínez-Finkelshtein, Universidad de Almería
Almería, SPAIN
Time: 3:00pm‐4:00pm
Place: PHY 120
E. Rakhmanov
In 1986 J. Nuttall published in Constructive Approximation a paper where he studied the behavior of the denominators (“generalized Jacobi polynomials”) and the remainders of the Padé approximants to
a special class of algebraic functions with 3 branch points. I will try to look at this problem 25 years later from a modern perspective. On one hand, the generalized Jacobi polynomials constitute an
instance of the so-called Heine-Stieltjes polynomials, i.e. they are solutions of linear ODE with polynomial coefficients. On the other, they satisfy complex orthogonality relations, and thus are
suitable for the Riemann-Hilbert asymptotic analysis. Along with the names mentioned in the title, this talk features also a special appearance by Riemann surfaces, quadratic differentials, compact
sets of minimal capacity, special functions and other characters.
Friday, March 23, 2012
Title: \(n\)-ary algebras: from Physics to Mathematics
Speaker: Abdenacer Makhlouf, University of Haute Alsace
Time: 3:00pm‐4:00pm
Place: PHY 130
Mohamed Elhamdadi
Lie algebras and Poisson algebras have played an extremely important role in mathematics and physics for a long time. Their generalizations, known as \(n\)-Lie algebras and Nambu algebras, also arise naturally in physics in many different contexts. In this talk, I will review some basics on \(n\)-ary algebras, present some key constructions, and discuss the representation theory and cohomology.
Friday, March 9, 2012
Title: Multi-Dimensional Obliquely Reflected BSDEs Arising from Real Options
Speaker: Shanjian Tang, Fudan University
Time: 3:00pm‐4:00pm
Place: PHY 130
Yuncheng You
The definition of real options is recalled. The price of real options is associated with an optimal switching problem for backward stochastic differential equations (BSDEs). Interestingly, the dynamic programming equation associated with this kind of optimal switching problem is a multi-dimensional BSDE with oblique reflection.
Friday, February 24, 2012
Title: Fourier bases on fractal Hilbert Spaces
Speaker: Keri Kornelson, University of Oklahoma
Norman, OK
Time: 3:00pm‐4:00pm
Place: PHY 130
Catherine Bénéteau
The study of Bernoulli convolution measures, which are supported on Cantor subsets of the real line, dates back to the 1930's, and experienced a resurgence with the connection between the measures
and iterated function systems. We will use this IFS approach to consider the question of Fourier bases on the \(L^2\) spaces with respect to Bernoulli convolution measures.
There are some interesting phenomena that arise in this setting, e.g., one can sometimes scale or shift the Fourier frequencies of an orthonormal basis by an integer and obtain another ONB.
We also describe properties of the unitary operator mapping between two such Fourier bases. This operator exhibits a fractal-like self-similarity, leading us to call it an “operator-fractal”.
Monday, February 13, 2012
Title: An introduction to handlebody-knot theory
Speaker: Atsushi Ishii, Tsukuba University
Time: 3:05pm‐3:55pm
Place: PHY 108
Masahiko Saito
A handlebody-knot is a handlebody embedded in the \(3\)-sphere. A handlebody-link is a disjoint union of handlebodies embedded in the \(3\)-sphere. A handlebody-knot is a \(1\)-component
handlebody-link. Two handlebody-links are equivalent if one can be transformed into the other by an isotopy of the \(3\)-sphere. I will explain how two handlebody-links can be distinguished. We may
decompose handlebody-links to distinguish them, or we may use invariants. This talk is an introduction to handlebody-knot theory.
Friday, February 10, 2012
Title: Some Combinatoric and Comedic Consequences of the Proof of the Pizza Conjecture
Speaker: Rick Mabry^*, Louisiana State University in Shreveport
Shreveport, LA
Time: 4:00pm‐5:00pm
Place: PHY 130
Athanassios Kartsatos
The recent proof of the “Pizza Conjecture” by Deiermann and Mabry was accomplished by wishful thinking. Unable to solve the original problem, the would-be solvers generalized the problem in a fairly
extreme way and hoped for the best. The resulting problem was ultimately solved by reducing a nice, continuous (calculus) problem to a gruesome, discrete (combinatorial) one. The solution made some
news, and, in a comedy of errors, cheesy comments flooded the internet in the aftermath. This talk will highlight some mathematical, gastronomical, and comical slices of what ensued, and look at a
more recent pizza theorem and its combinatorial solution.
Title: Sparse Ramsey Hosts
Speaker: Kevin Milans, University of South Carolina
Columbia, SC
Time: 3:00pm‐4:00pm
Place: PHY 130
Brendan Nagle
In Ramsey Theory, we study conditions under which every partition of a large structure yields a part with additional structure. For example, Van der Waerden's theorem states that every \(s\)-coloring
of the integers contains arbitrarily long monochromatic arithmetic progressions, and the Hales-Jewett Theorem guarantees that every game of tic-tac-toe in high dimensions has a winner. Ramsey's
Theorem implies that for any target graph \(G\), every \(s\)-coloring of the edges of some sufficiently large host graph contains a monochromatic copy of \(G\). In Ramsey's Theorem, the host graph is
dense (in fact complete). We explore conditions under which the host graph can be sparse and still force a monochromatic copy of \(G\).
We write \(H \stackrel{s}{\to} G\) if every \(s\)-edge-coloring of \(H\) contains a monochromatic copy of \(G\). The \(s\)-color Ramsey number of \(G\) is the minimum \(k\) such that some \(k\)-vertex graph \(H\) satisfies \(H \stackrel{s}{\to} G\). The degree Ramsey number of \(G\) is the minimum \(k\) such that some graph \(H\) with maximum degree \(k\) satisfies \(H \stackrel{s}{\to} G\).
Chvátal, Rödl, Szemerédi, and Trotter proved that the Ramsey number of bounded-degree graphs grows only linearly, sharply contrasting the exponential growth that generally occurs when the
bounded-degree assumption is dropped. We are interested in the analogous degree Ramsey question: is the \(s\)-color degree Ramsey number of \(G\) bounded by some function of \(s\) and the maximum
degree of \(G\)? We resolve this question in the affirmative when \(G\) is restricted to a family of graphs that have a global tree structure; this family includes all outerplanar graphs. We also
investigate the behavior of the \(s\)-color degree Ramsey number as \(s\) grows. This talk includes results from three separate projects that are joint with P. Horn, T. Jiang, B. Kinnersley, V. Rödl,
and D. West.
Wednesday, February 8, 2012
Title: Automata generating free products of groups of order \(2\)
Speaker: Dmytro Savchuk, SUNY Binghamton
Binghamton, NY
Time: 3:00pm‐4:00pm
Place: PHY 118
Nataša Jonoska
We construct a family of automata with \(n\) states, \(n>3\), acting on a rooted binary tree that generate the free products of cyclic groups of order \(2\). Groups generated by automata form a
fascinating class of groups that includes counterexamples to several famous conjectures in group theory. I will start by discussing the definition and main properties of these groups. Then I will
give a short exposition of the history of the question, and explain the construction and main ideas behind the proof, which involve the notion of a dual automaton.
This is a joint result with Yaroslav Vorobets of Texas A&M University.
Monday, February 6, 2012
Title: Cyclic Sieving and Cluster Multicomplexes
Speaker: Brendon Rhoades, University of Southern California
Los Angeles, CA
Time: 3:05pm‐3:55pm
Place: PHY 108
Brian Curtin
Let \(X\) be a finite set, \(C=\langle c\rangle\) be a finite cyclic group acting on \(X\), and \(X(q)\in N[q]\) be a polynomial with nonnegative integer coefficients. Following Reiner, Stanton, and
White, we say that the triple \((X,C,X(q))\) exhibits the *cyclic sieving phenomenon* if for any integer \(d>0\), the number of fixed points of \(c^d\) is equal to \(X(\zeta^d)\), where \(\zeta\) is
a primitive \(|C|^{\mathrm{th}}\) root of unity. We explain how one can use representation theory to prove instances of the cyclic sieving phenomenon involving the action of tropical Coxeter elements
on (complexes closely related to) cluster complexes. The representation theory involves cluster monomial bases of geometric realizations of finite type cluster algebras.
Friday, February 3, 2012
Title: Searching for Structure in Graph Theory: Chromatic Index and Immersion
Speaker: Jessica McDonald, Simon Fraser University
Burnaby, British Columbia
Time: 3:00pm‐4:00pm
Place: PHY 130
Brendan Nagle
Searching for structure is a fundamental theme in graph theory. The celebrated Goldberg-Seymour Conjecture is an example of this — it asserts that all multigraphs with “high” chromatic index contain
a “dense” subgraph. In this talk we discuss edge-colouring in this context, and use the method of Tashkinov trees to gain new insights. Namely, we extend a classical characterization result of
Vizing, and prove an approximation bound towards the Goldberg-Seymour Conjecture. We also consider the important containment relation of immersion. In particular, motivated by the Graph Minors
Project of Robertson and Seymour and by Hadwiger's Conjecture, we explore conditions under which graphs and digraphs contain clique immersions. The results we obtain are in analogue to the clique
subdivision theorem of Bollobás-Thomason and Komlós-Szemerédi.
Wednesday, February 1, 2012
Title: Duality and equivalence of graphs in surfaces
Speaker: Iain Moffatt, University of South Alabama
Mobile, AL
Time: 3:00pm‐4:00pm
Place: PHY 108
Masahiko Saito
This talk revolves around two fundamental constructions in graph theory: duals and medial graphs. There are a host of well-known relations between duals and medial graphs of graphs drawn in the
plane. (This can also be thought of in terms of knot diagrams and their graphs.) By considering these relations we will be led to the working principle that duality and equality of plane graphs are
equivalent concepts. It is then natural to ask what happens when we change our notion of equality. In this talk we will see how isomorphism of abstract graphs corresponds to an extension of duality
called twisted duality, and how twisted duality extends the fundamental relations between duals and medial graphs from graphs in the plane to graphs in other surfaces. We will then go on to see how
this group action leads to a deeper understanding of the properties of, and relationships among, various graph polynomials, including the chromatic polynomial, the Penrose polynomial, and topological
Tutte polynomials.
Friday, January 20, 2012
Title: Freak Waves in the Ocean
Speaker: Victor L'vov, Weizmann Institute of Science
Rehovot, Israel
Time: 3:00pm‐4:00pm
Place: PHY 130
Arcadii Grinshpan
Ships are disappearing all over the world’s oceans at a rate of about one every week. These drownings often happen in mysterious circumstances. With little evidence researchers usually put the blame
on human errors or poor maintenance. But an alarming series of drownings and near drownings including world class vessels has pushed the search for better reasons than the regular ones: freak (or
rogue, monster, killing) waves.
A freak wave in the ocean is a catastrophic event in which the energy and momentum of the wave field spontaneously concentrate in a localized area of space, generating a short wave train of
several waves whose energy and momentum density exceed the background level by an order of magnitude. Freak waves can be disastrous for ships, drilling platforms, lighthouses and other coastal structures.
I will present observations of freak waves and their effect on ships, and discuss possible mechanisms of their creation and evolution.
^* Rick Mabry received his Ph.D. from USF in 1985 under the direction of Professor A. G. Kartsatos.↵
Magnetometer Calibration
Magnetometers detect magnetic field strength along a sensor's X,Y and Z axes. Accurate magnetic field measurements are essential for sensor fusion and the determination of heading and orientation.
In order to be useful for heading and orientation computation, typical low cost MEMS magnetometers need to be calibrated to compensate for environmental noise and manufacturing defects.
Ideal Magnetometers
An ideal three-axis magnetometer measures magnetic field strength along orthogonal X, Y and Z axes. Absent any magnetic interference, magnetometer readings measure the Earth's magnetic field. If
magnetometer measurements are taken as the sensor is rotated through all possible orientations, the measurements should lie on a sphere. The radius of the sphere is the magnetic field strength.
To generate magnetic field samples, use the imuSensor object. For these purposes it is safe to assume the angular velocity and acceleration are zero at each orientation.
N = 500;
acc = zeros(N,3);
av = zeros(N,3);
q = randrot(N,1); % uniformly distributed random rotations
imu = imuSensor('accel-mag');
[~,x] = imu(acc,av,q);
axis equal
title('Ideal Magnetometer Data');
Hard Iron Effects
Noise sources and manufacturing defects degrade a magnetometer's measurement. The most striking of these are hard iron effects. Hard iron effects are stationary interfering magnetic noise sources.
Often, these come from other metallic objects on the circuit board with the magnetometer. The hard iron effects shift the origin of the ideal sphere.
imu.Magnetometer.ConstantBias = [2 10 40];
[~,x] = imu(acc,av,q);
axis equal
title('Magnetometer Data With a Hard Iron Offset');
Soft Iron Effects
Soft iron effects are more subtle. They arise from objects near the sensor which distort the surrounding magnetic field. These have the effect of stretching and tilting the sphere of ideal
measurements. The resulting measurements lie on an ellipsoid.
The soft iron magnetic field effects can be simulated by rotating the geomagnetic field vector of the IMU to the sensor frame, stretching it, and then rotating it back to the global frame.
nedmf = imu.MagneticField;
Rsoft = [2.5 0.3 0.5; 0.3 2 .2; 0.5 0.2 3];
soft = rotateframe(conj(q),rotateframe(q,nedmf)*Rsoft);
for ii=1:numel(q)
    imu.MagneticField = soft(ii,:);
    [~,x(ii,:)] = imu(acc(ii,:),av(ii,:),q(ii));
end
axis equal
title('Magnetometer Data With Hard and Soft Iron Effects');
Correction Technique
The magcal function can be used to determine magnetometer calibration parameters that account for both hard and soft iron effects. Uncalibrated magnetometer data can be modeled as lying on an
ellipsoid with equation

(x − b) R (x − b)' = β

In this equation R is a 3-by-3 matrix, b is a 1-by-3 vector defining the ellipsoid center, x is a 1-by-3 vector of uncalibrated magnetometer measurements, and β is a scalar indicating the magnetic
field strength. The above equation is the general form of a conic. For an ellipsoid, R must be positive definite. The magcal function uses a variety of solvers, based on different assumptions about
R. In the magcal function, R can be assumed to be the identity matrix, a diagonal matrix, or a symmetric matrix.
The magcal function produces correction coefficients that take measurements which lie on an offset ellipsoid and transform them to lie on an ideal sphere, centered at the origin. The magcal function
returns a 3-by-3 real matrix A and a 1-by-3 vector b. To correct the uncalibrated data, compute

m = (x − b) A

Here x is a 1-by-3 array of uncalibrated magnetometer measurements and m is the 1-by-3 array of corrected magnetometer measurements, which lie on a sphere. The matrix A has a determinant of 1 and is
the matrix square root of R. Additionally, A has the same form as R : the identity, a diagonal, or a symmetric matrix. Because these kinds of matrices cannot impart a rotation, the matrix A will not
rotate the magnetometer data during correction.
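The geometry of the correction m = (x − b)A can be illustrated outside MATLAB. The NumPy sketch below is not MathWorks code: the offset b, the diagonal stretch S (chosen with det(S) = 1, matching the determinant-1 property of A), and the field strength of 50 are all invented for the example. It builds measurements on an offset ellipsoid and shows that the correction maps them back onto a sphere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal measurements: 500 points on a sphere of radius 50 (a made-up
# nominal field strength for the illustration).
v = rng.normal(size=(500, 3))
ideal = 50.0 * v / np.linalg.norm(v, axis=1, keepdims=True)

# Invented calibration parameters: hard iron offset b and a diagonal
# soft iron stretch S with det(S) = 1, so the "measured" data lie on
# an offset ellipsoid.
b = np.array([2.0, 10.0, 40.0])
S = np.diag([1.25, 1.0, 0.8])
x = ideal @ S + b                # uncalibrated measurements

# Correction m = (x - b) A with A = inv(S) maps them back to the sphere.
A = np.linalg.inv(S)
m = (x - b) @ A

radii = np.linalg.norm(m, axis=1)
print(radii.min(), radii.max())  # both approximately 50
```

Here A is simply the inverse of the known stretch; magcal instead has to estimate A and b from the data alone.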
The magcal function also returns a third output, which is the magnetic field strength β. You can use the magnetic field strength to set the ExpectedMagneticFieldStrength property of ahrsfilter.
Using the magcal Function
Use the magcal function to determine calibration parameters that correct noisy magnetometer data. Create noisy magnetometer data by setting the NoiseDensity property of the Magnetometer property in
the imuSensor. Use the rotated and stretched magnetic field in the variable soft to simulate soft iron effects.
imu.Magnetometer.NoiseDensity = 0.08;
for ii=1:numel(q)
    imu.MagneticField = soft(ii,:);
    [~,x(ii,:)] = imu(acc(ii,:),av(ii,:),q(ii));
end
To find the A and b parameters which best correct the uncalibrated magnetometer data, simply call the function as:
[A,b,expMFS] = magcal(x);
xCorrected = (x-b)*A;
Plot the original and corrected data. Show the ellipsoid that best fits the original data. Show the sphere on which the corrected data should lie.
de = HelperDrawEllipsoid;
The magcal function uses a variety of solvers to minimize the residual error. The residual error is the sum of the distances between the calibrated data and a sphere of radius expMFS.
r = sum(xCorrected.^2,2) - expMFS.^2;
E = sqrt(r.'*r./N)./(2*expMFS.^2);
fprintf('Residual error in corrected data : %.2f\n\n',E);
Residual error in corrected data : 0.01
You can run the individual solvers if only some defects need to be corrected or to achieve a simpler correction computation.
Offset-Only Computation
Many MEMS magnetometers have registers within the sensor that can be used to compensate for the hard iron offset. In effect, the (x-b) portion of the equation above happens on board the sensor. When
only a hard iron offset compensation is needed, the A matrix effectively becomes the identity matrix. To determine the hard iron correction alone, the magcal function can be called this way:
[Aeye,beye,expMFSeye] = magcal(x,'eye');
xEyeCorrected = (x-beye)*Aeye;
[ax1,ax2] = de.plotCalibrated(Aeye,beye,expMFSeye,x,xEyeCorrected,'Eye');
view(ax1,[-1 0 0]);
view(ax2,[-1 0 0]);
Hard Iron Compensation and Axis Scaling Computation
For many applications, treating the ellipsoid matrix as a diagonal matrix is sufficient. Geometrically, this means the ellipsoid of uncalibrated magnetometer data is approximated to have its semiaxes
aligned with the coordinate system axes and a center offset from the origin. Though this is unlikely to be the actual characteristics of the ellipsoid, it reduces the correction equation to a single
multiply and single subtract per axis.
[Adiag,bdiag,expMFSdiag] = magcal(x,'diag');
xDiagCorrected = (x-bdiag)*Adiag;
[ax1,ax2] = de.plotCalibrated(Adiag,bdiag,expMFSdiag,x,xDiagCorrected,...
    'Diag');
Full Hard and Soft Iron Compensation
To force the magcal function to solve for an arbitrary ellipsoid and produce a dense, symmetric A matrix, call the function as:

[Asym,bsym,expMFSsym] = magcal(x,'sym');
Auto Fit
The 'eye', 'diag', and 'sym' flags should be used carefully and the output values inspected. In some cases, there may be insufficient data for a high order ('diag' or 'sym') fit and a better set of
correction parameters can be found using a simpler A matrix. The 'auto' fit option, which is the default, handles this situation.
Consider the case when insufficient data is used with a high order fitter.
xidx = x(:,3) > 100;
xpoor = x(xidx,:);
[Apoor,bpoor,mfspoor] = magcal(xpoor,'diag');
There is not enough data spread over the surface of the ellipsoid to achieve a good fit and proper calibration parameters with the 'diag' option. As a result, the Apoor matrix is complex.
Apoor =

   0.0000 + 0.4722i   0.0000 + 0.0000i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.5981i   0.0000 + 0.0000i
   0.0000 + 0.0000i   0.0000 + 0.0000i   3.5407 + 0.0000i
Using the 'auto' fit option avoids this problem and finds a simpler A matrix which is real, symmetric, and positive definite. Calling magcal with the 'auto' option string is the same as calling
without any option string.
[Abest,bbest,mfsbest] = magcal(xpoor,'auto');
Comparing the results of the 'auto' fitter with those of an incorrect, high-order fitter shows the perils of not examining the returned A matrix before correcting the data.
Calling the magcal function with the 'auto' flag, which is the default, will try all possibilities of 'eye', 'diag' and 'sym' searching for the A and b which minimizes the residual error, keeps A
real, and ensures R is positive definite and symmetric.
The magcal function can give calibration parameters to correct hard and soft iron offsets in a magnetometer. Calling the function with no option string, or equivalently the 'auto' option string,
produces the best fit and covers most cases.
Function machines
Here we will learn about function machines, including finding outputs, finding inputs and using function machines to solve equations.
There are also function machine worksheets based on Edexcel, AQA and OCR exam questions, along with further guidance on where to go next if you're still stuck.
Function machines are used to apply operations in a given order to a value known as the input. The final value produced is known as the output.
A function machine can be applied to numbers or be used for algebraic manipulation. They can be used to solve number problems, solve equations and rearrange formulae.
To solve equations or rearrange formulae we need to use inverse operations and work backwards. We will see how to use function machines to solve equations on this page.
Not all equations can be solved using a function machine but they can be applied to a lot of situations where the unknown is on one side of the equation.
Function machines can be used to help produce tables of values for graphs such as quadratic or cubic graphs.
A number machine is an alternative name for function machines. A number machine is a way of writing the rules which link the inputs and the outputs.
In order to solve an equation using a function machine:
1. Consider the order of operations being applied to the unknown.
2. Draw a function machine starting with the unknown as the Input and the value the equation is equal to as the Output.
3. Work backwards, applying inverse operations to find the unknown Input.
Consider the order of operations being applied to the unknown.
Draw a function machine starting with the unknown as the Input and the value the equation is equal to as the Output.
Work backwards, applying inverse operations to find the unknown Input.
The order of operations is \(\div\,2\), then \(-\,6\).
The order of operations is \(+\,2\), then \(\times\,5\).
The order of operations is \(\times\,3\), then \(-\,1\), then \(\div\,4\).
The order of operations is \(\times\,2\), then \(+\,18\), then \(\div\,3\).
When using two-step function machines or others with more operations to solve equations, a common error is to forget to work backwards. The inverse operations are used but in the wrong order.
A common error is to not follow the correct order of operations when creating a function machine for an equation. E.g. for the equation 2x-1=7, the multiplication by two takes place before subtracting 1.
1. Find the missing Output and missing Input for the function machine.
Work forwards to find a and backwards, using inverse operations, to find b.
2. Find the missing Output and missing Input for the function machine.
Work forwards to find m and backwards, using inverse operations, to find n.
3. Find the missing Output and missing Input for the function machine.
Work forwards to find p and backwards, using inverse operations, to find q.
4. Select the correct function machine for the equation:
x is the input, the operation is multiplying by 5 , the output is 10.
5. Select the correct function machine for the equation:
x is the input, the operation is multiplying by 2 , the second operation is subtracting 6, the output is 10.
6. Select the correct function machine and solution to the equation.
Working backwards, you need to add four, and then divide by 2.
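The working-backwards step above can be sketched in code. This is a hypothetical Python example, not part of the lesson: the forward machine multiplies by 2 and then subtracts 4, so the inverse applies the opposite operations in reverse order.

```python
def machine(x):
    # Forward machine: input -> multiply by 2 -> subtract 4 -> output
    return x * 2 - 4

def inverse(y):
    # Work backwards: undo each operation in reverse order
    return (y + 4) / 2

x = inverse(6)        # solve 2x - 4 = 6
print(x)              # 5.0
print(machine(x))     # 6.0, recovering the original output
```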
(a) What is the output when the input is 6 ?
(b) What is the output when the input is 10 ?
2. (a) Use the function machine to write a formula for y in terms of h.
(b) Use inverse operations to write a formula for h in terms of y.
Unit 2 (Ch.2) Motion | Quizalize
• Q1
Which graph best matches a person sitting on a bench and waiting?
• Q2
Which graph best matches a person walking away at a constant speed forward?
• Q3
Which graph best matches a person walking away slowing and returning quickly?
• Q4
Which person below is moving the fastest?
• Q5
A duck flies 60 meters in 10 seconds. What is the duck’s speed?
• Q6
A beetle crawls 2 cm/minute for ten minutes. How far did it crawl?
none of the above.
0.2 cm
5 cm
20 cm
• Q7
A quantity of 60 km/h is a _________________.
• Q8
What is the final velocity of a car that accelerates from rest at 9.0 ft/s^2 for 8.0 s?
17 ft/s
72 ft/s
1.125 ft/s
0.89 ft/s
• Q9
A pitcher throws a ball at 40.0 m/s and the ball is electronically timed to arrive at home plate 0.4625 s later. What is the distance from the pitcher to the home plate?
18.5 m
86.5 m
0.0116 m
86.485 m
• Q10
What is a sprinter's speed if a distance of 200.0 m is covered in 21.4 s?
179 m/s
9.35 m/s
0.107 m/s
.93454944 m/s
• Q11
A car with an initial velocity of 88.0 ft/s is able to come to a stop over a distance of 100.0 ft when the brakes are applied. If they decelerate at 9.8 ft/s^2, how much time was required for the
stopping process?
862 s
.111 s
8.98 s
Need more information to answer.
• Q12
If an object starts from rest and attains a velocity of 20 feet per second after 4 seconds, then its acceleration in ft/ s^2 is
• Q13
The rate of change of displacement is called
• Q14
Acceleration can be negative.
• Q15
A bus covers a distance of 180 miles in 3 hours due North. Its average velocity in miles per hour is
.0167 m/h
60 m/h
60 m/h N
.0167 m/h N
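Assuming the standard constant-acceleration formulas, the numeric items above can be checked directly; the question numbers below refer to the quiz as listed, and the formulas used are noted in the comments.

```python
# Kinematics checks for the numeric quiz items (standard formulas assumed).

assert 60 / 10 == 6.0                    # Q5: speed = d/t, 6 m/s
assert 2 * 10 == 20                      # Q6: distance = rate * time, 20 cm
assert 0 + 9.0 * 8.0 == 72.0             # Q8: v = v0 + a*t from rest, 72 ft/s
assert round(40.0 * 0.4625, 1) == 18.5   # Q9: d = v*t, 18.5 m
assert round(200.0 / 21.4, 2) == 9.35    # Q10: speed = d/t, 9.35 m/s
assert round(88.0 / 9.8, 2) == 8.98      # Q11: t = v0/a at the given deceleration, 8.98 s
assert (20 - 0) / 4 == 5.0               # Q12: a = (v - v0)/t, 5 ft/s^2
assert 180 / 3 == 60.0                   # Q15: average velocity, 60 mi/h due North
print("all numeric answers check out")
```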
(Solution) MATH225N Week 1 Assignment: Comparing Sampling Methods
1. Question: A television station plans to send a crew to a polling center on an election day.
Because they do not have time to interview each individual voter, they decide to count voters
leaving the polling location and ask every 20th voter for an interview. What type of sampling
is this?
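Question 1 describes systematic sampling: choose a fixed interval k and take every kth member of the stream. A quick Python sketch with an invented list of voter IDs:

```python
# Hypothetical stream of 200 voters, identified 1..200.
voters = list(range(1, 201))
k = 20

# Systematic sample: every 20th voter leaving the polling place.
sample = voters[k - 1::k]
print(sample)  # [20, 40, 60, 80, 100, 120, 140, 160, 180, 200]
```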
2. Question: In order to study the shoe sizes of people in his town, Billy samples the population
by dividing the residents by age and randomly selecting a proportionate number of residents
from each age group. Which type of sampling is used?
3. Question: The management of a large airline wants to estimate the average time after takeoff
taken before the crew begins serving snacks and beverages on their flights. Assuming that
management has easy access to all of the information that would be required to select flights
by each proposed method, which of the following would be reasonable methods of stratified
sampling? Select all that apply.
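Questions 2 and 3 both rest on stratified sampling: divide the population into non-overlapping strata and draw a proportionate random sample from each. A Python sketch with invented age strata and resident IDs:

```python
import random

random.seed(1)

# Invented strata: resident IDs grouped by age band.
strata = {
    "under 18": list(range(0, 100)),
    "18-64":    list(range(100, 400)),
    "65+":      list(range(400, 500)),
}

rate = 0.10  # sample 10% of every stratum, preserving proportions
sample = {name: random.sample(group, int(len(group) * rate))
          for name, group in strata.items()}

print({name: len(s) for name, s in sample.items()})
# {'under 18': 10, '18-64': 30, '65+': 10}
```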
4. Question: To study the mean respiratory rate of all people in his state, Frank samples the
population by dividing the residents by towns and randomly selecting 12 of the towns. He
then collects data from all the residents in the selected towns. Which type of sampling is
5. Question: When is stratified sampling appropriate?
6. Question: In reference to different sampling methods, cluster sampling includes the steps:
use simple random sampling to select a set of groups; every individual in the chosen groups is
included in the sample.
7. Question: When is cluster sampling appropriate?
8. Question: To study the mean head size of all people in her state, Jacqueline collects data
9. Question: An executive for a large national restaurant chain with multiple locations in each
of 513 counties wants to personally sample the cleanliness of the chain’s restaurants
throughout the country by visiting restaurants. The executive wants a good-quality sample but
wants to minimize travel time and expenses. Which of the following sampling methods
would be most appropriate?
10. Question: In order to study the wrist sizes of people in her town, Kathryn samples the
population by dividing the residents by age and randomly selecting a proportionate number of
residents from each age group. Which type of sampling is used?
11. Question: A manufacturer has three tool centers that each make about 1000 tools every day.
In order to implement better quality-control procedures, the manager wants to start sampling
the tools made each day to be able to identify issues as quickly as possible. Which sampling
method would be most appropriate?
12. Question: To study the mean blood pressure of all people in her state, Christine samples the
population by dividing the residents by towns and randomly selecting 9 of the towns. She
then collects data from all the residents in the selected towns. Which type of sampling is
13. Question: When is using a simple random sample appropriate?
14. Question: Donald is studying the eating habits of all students attending his school. He
samples the population by dividing the students into groups by grade level and randomly
selecting a proportionate number of students from each group. He then collects data from the
sample. Which type of sampling is used?
15. Question: A grocer receives cartons of 12 eggs in boxes of 100 cartons. In a particular month,
the grocer receives 4 shipments of eggs with 20 boxes in each shipment. The grocer wants to
estimate the proportion of cartons he receives this month that include at least one broken egg.
Which of the following sampling methods would be most appropriate?
16. Question: When considering different sampling methods, stratified sampling includes the steps: divide the population into groups (strata); use simple random sampling to select individuals from each stratum.
Hyperbolic Functions - (AP Calculus AB/BC) - Vocab, Definition, Explanations | Fiveable
Hyperbolic Functions
from class:
AP Calculus AB/BC
Hyperbolic functions are a set of mathematical functions that are analogs of the trigonometric functions. They are defined in terms of exponential functions and can be used to model various physical phenomena, such as the shape of a hanging cable (a catenary).
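Concretely, sinh x = (eˣ − e⁻ˣ)/2 and cosh x = (eˣ + e⁻ˣ)/2, and they satisfy cosh²x − sinh²x = 1, the hyperbolic analog of the Pythagorean identity. A quick numeric check in Python:

```python
import math

def sinh(x):
    # sinh x = (e^x - e^-x) / 2
    return (math.exp(x) - math.exp(-x)) / 2

def cosh(x):
    # cosh x = (e^x + e^-x) / 2
    return (math.exp(x) + math.exp(-x)) / 2

for x in (-2.0, 0.0, 1.0, 3.5):
    # Matches the library versions...
    assert math.isclose(sinh(x), math.sinh(x), abs_tol=1e-12)
    assert math.isclose(cosh(x), math.cosh(x))
    # ...and satisfies the hyperbolic Pythagorean identity.
    assert math.isclose(cosh(x) ** 2 - sinh(x) ** 2, 1.0)

print("identity holds at all test points")
```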
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
Can You Figure Out The Right Solution? This Math Equation Is Breaking The Internet
and social media users are divided over what the correct answer happens to be. Take a moment to work through it yourself and see what answer you come up with: 6 ÷ 2(1+2) =
The vast majority of people end up with an answer of either 1 or 9. Some even think it is 0, 3, or 6, but if you came up with any of those three numbers, you're way off. The reason most people land on either 9 or 1 comes down to how they were taught to do math, but in the end only one answer can be correct, and if you came up with 9 then congratulations, you solved it! So why has this equation caused so much confusion that it's managed to go viral? According to Presh Talwalkar, the man behind the YouTube channel MindYourDecisions
which posted the explanation, there is an ‘old’ way and a ‘new’ way to approach the problem. Depending on which method you use, you will end up with either 1 doing it the old way, or 9 doing it the
modern way. Check out his video for a much more thorough and complete explanation of each step and see if you agree with what's being asserted. Despite the comprehensive run through, it looks like a
lot of people are still confused and arguing over what the correct answer is in the comments!
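The two camps can be written out explicitly. Under the modern convention, division and multiplication share precedence and are evaluated left to right, so the expression reads (6 ÷ 2) × (1 + 2); the "old" reading treats the implied multiplication 2(1+2) as a single group that the 6 is divided by:

```python
# Modern convention: work left to right, so the division happens first.
modern = 6 / 2 * (1 + 2)     # (6 / 2) * 3 = 9.0
# Older convention: implied multiplication binds tighter than division.
old = 6 / (2 * (1 + 2))      # 6 / 6 = 1.0
print(modern, old)
```

Python, like most programming languages, follows the modern left-to-right rule, which is one reason 9 is the generally accepted answer.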
If you completely botched the answer or failed to come up with the correct solution, don't be too hard on yourself. Forgetting how to do math problems like this one is normal and inevitable. Other
than simple addition and subtraction, most of us don't do any type of complex math once we're finished with school. If and when we do need to solve something above our abilities, we use a calculator
or simply Google the answer. Between changes in the way math curriculum is taught, modern technology, and the passage of time, our math abilities tend to decrease and fade away. Whatever the case may
be, it's fascinating and quite interesting how one equation can end up causing such a stir. People are really passionate about what they think is the correct answer and way to go about solving this
equation. It just goes to show that even what we view as simple math isn't as straightforward and predictable as we often assume it to be. Check out the video and see what you think!
|
{"url":"http://www.social-consciousness.com/2017/07/quiz-this-math-equation-breaking-internet-can-you-figure-out-right-solution.html","timestamp":"2024-11-07T20:28:24Z","content_type":"application/xhtml+xml","content_length":"99994","record_id":"<urn:uuid:a6f3bf6f-504c-412b-8611-ceca5bcafd46>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00167.warc.gz"}
|
Scott Rome
Consider the retrieval problem in a recommendation system. One way to model the problem is to create a big classification model predicting the next item a user will click or watch. This basic
approach could be extended to large catalogs via a sampled softmax loss, as discussed in our last post. Naturally, the resulting dataset will only contain positive examples (i.e., what was clicked).
In this post, we explore a more optimized version of this approach for more complex models via in-batch negatives. We derive the LogQ Correction to improve the estimate of the gradient and present
code in PyTorch to implement the method, building on the previous blog post.
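The idea behind the correction can be made concrete with a framework-agnostic sketch (the post itself presents a PyTorch version; the function and argument names below are illustrative, not taken from the post). Each in-batch candidate's logit is reduced by the log of its sampling probability before the softmax, so popular items that frequently appear as in-batch negatives are not over-penalized:

```python
import math

def logq_corrected_loss(scores, log_q, positive_index):
    """Softmax cross-entropy over in-batch candidates with the logQ
    correction: each candidate's logit is shifted down by the log of
    its sampling probability, so frequently sampled (popular) items
    are not unfairly penalized when they serve as negatives."""
    corrected = [s - lq for s, lq in zip(scores, log_q)]
    log_norm = math.log(sum(math.exp(c) for c in corrected))
    # Negative log-likelihood of the one true (clicked) item.
    return log_norm - corrected[positive_index]
```

With uniform sampling (all `log_q` equal) this reduces to the ordinary softmax loss; the correction only reweights the logits when sampling is non-uniform.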
Read More
|
{"url":"http://srome.github.io/page5/","timestamp":"2024-11-12T03:32:17Z","content_type":"text/html","content_length":"11008","record_id":"<urn:uuid:baddff04-94f0-4e2e-8511-573e4492c1e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00090.warc.gz"}
|
What is the basic idea of a limit, and why do we need limits? | Filo
Question asked by Filo student
What is the basic idea of a limit, and why do we need limits?
Question Text: What is the basic idea of a limit, and why do we need limits?
Updated On: Sep 19, 2022
Topic: Calculus
Subject: Mathematics
Class: Class 12
Answer Type: Video solution (1)
Upvotes: 95
Avg. Video Duration: 4 min
|
{"url":"https://askfilo.com/user-question-answers-mathematics/limit-basic-kya-hoti-hai-limit-iski-kya-jarurat-hai-31353731373336","timestamp":"2024-11-07T09:07:06Z","content_type":"text/html","content_length":"309823","record_id":"<urn:uuid:0bb2e4c8-828f-4541-993c-eb101f71574a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00541.warc.gz"}
|
3 Times Table Worksheets - Worksheets Day
What are 3 Times Table Worksheets?
3 times table worksheets are educational tools that help children learn and practice multiplication facts related to the number 3. These worksheets typically consist of a series of math problems
where the student needs to multiply a given number by 3.
Why are 3 Times Table Worksheets important?
Learning multiplication tables is crucial for developing strong math skills. The 3 times table is particularly important as it helps children understand the concept of multiplication and builds a
foundation for more complex mathematical operations.
How to use 3 Times Table Worksheets?
Using 3 times table worksheets is simple. Students are presented with multiplication problems and are required to provide the correct answers. They can either write the answers directly on the
worksheet or orally respond to a teacher or parent. Repetitive practice using these worksheets helps in memorizing the multiplication facts.
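A worksheet of this kind is easy to generate programmatically; the function below is an illustrative sketch, not taken from any particular product:

```python
import random

def make_worksheet(table=3, n_problems=10, seed=None):
    """Return multiplication practice problems for one times table,
    with the second factor drawn at random from 1 through 12."""
    rng = random.Random(seed)
    return [f"{table} x {rng.randint(1, 12)} = ___" for _ in range(n_problems)]

for problem in make_worksheet(table=3, n_problems=5, seed=1):
    print(problem)
```

Passing a fixed `seed` reproduces the same worksheet, which is handy when a parent or teacher wants an answer key to match.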
Benefits of using 3 Times Table Worksheets
1. Easy learning: The structured format of these worksheets makes it easier for children to grasp the concept of multiplication and memorize the 3 times table.
2. Practice makes perfect: Regular practice with the worksheets helps reinforce the multiplication facts, ensuring children become faster and more accurate in their calculations.
3. Confidence booster: As children become proficient in the 3 times table, their confidence in math grows, making them more willing to tackle other mathematical concepts.
How to make 3 Times Table Worksheets engaging?
1. Use visuals: Incorporate colorful images, charts, or diagrams to make the worksheets visually appealing and enhance understanding.
2. Include real-life examples: Relate multiplication to everyday situations to make it more relatable and interesting for the students.
3. Add interactive elements: Include puzzles, games, or interactive exercises to make the learning process more enjoyable and engaging.
3 times table worksheets are valuable tools for helping children learn and master multiplication facts. By providing structured practice, these worksheets contribute to improving math skills,
boosting confidence, and laying a strong foundation for more advanced mathematical concepts.
|
{"url":"https://www.worksheetsday.com/3-times-table-worksheets/","timestamp":"2024-11-05T03:30:44Z","content_type":"text/html","content_length":"50632","record_id":"<urn:uuid:e290bf15-a045-4118-82b8-4c05d4a68278>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00178.warc.gz"}
|
Current Search: Marine turbines -- Mathematical models
Reliability-based fatigue design of marine current turbine rotor blades.
Hurley, Shaun., College of Engineering and Computer Science, Department of Civil, Environmental and Geomatics Engineering
The study presents a reliability-based fatigue life prediction model for the ocean current turbine rotor blades. The numerically simulated bending moment ranges based on the measured
current velocities off the Southeast coastline of Florida over a one-month period are used to reflect the short-term distribution of the bending moment ranges for an idealized marine current
turbine rotor blade. The 2-parameter Weibull distribution is used to fit the short-term distribution and then used to obtain the long-term distribution over the design life. The long-term
distribution is then used to determine the number of cycles for any given bending moment range. The published laboratory test data in the form of an ε-N curve is used in conjunction with the
long-term distribution of the bending moment ranges in the prediction of the fatigue failure of the rotor blade using Miner's rule. The first-order reliability method is used in order to
determine the reliability index for a given section modulus over a given design life. The results of reliability analysis are then used to calibrate the partial safety factors for load and
Subject Headings: Turbines, Blades, Materials, Fatigue, Marine turbines, Mathematical models, Composite materials, Mathematical models, Structural dynamics
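The long-term extrapolation step described in the abstract can be sketched as follows; the function and parameter names are illustrative, not taken from the thesis. A 2-parameter Weibull fit of the short-term bending-moment ranges gives an exceedance probability which, scaled by the total number of load cycles in the design life, yields the expected number of cycles at or above any given range:

```python
import math

def weibull_exceedance_cycles(m_range, shape_k, scale_c, total_cycles):
    """Expected number of cycles whose bending-moment range meets or
    exceeds m_range over the design life, using the Weibull survival
    function exp(-(m/c)^k) fitted to the short-term distribution."""
    return total_cycles * math.exp(-((m_range / scale_c) ** shape_k))

# e.g. cycles at or above half the scale parameter over 1e8 lifetime cycles:
n_exceed = weibull_exceedance_cycles(0.5, 2.0, 1.0, 1e8)
```

Binning this count over a grid of ranges gives the cycle counts that feed Miner's rule in the fatigue-damage step.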
Data gateway for prognostic health monitoring of ocean-based power generation.
Gundel, Joseph., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
On August 5, 2010, the U.S. Department of Energy (DOE) designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean energy research and development. Their focus is the research and development of open-ocean current systems and the associated infrastructure needed to develop and test prototypes. The
generation of power is achieved by using a specialized electric generator with a rotor called a turbine. As with all machines, the turbines will need maintenance and replacement as they near the
end of their lifecycle. This prognostic health monitoring (PHM) requires data to be collected, stored, and analyzed in order to maximize the lifespan, reduce downtime and predict when failure is
imminent. This thesis explores the use of a data gateway that separates high-level software from low-level hardware, including sensors and actuators. The gateway will standardize and store
the data collected from various sensors with different speeds, formats, and interfaces allowing an easy and uniform transition to a database system for analysis.
Subject Headings: Machinery, Monitoring, Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics
Detection, localization, and identification of bearings with raceway defect for a dynamometer using high frequency modal analysis of vibration across an array of accelerometers.
Waters, Nicholas., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis describes a method to detect, localize and identify a faulty bearing in a rotating machine using narrow band envelope analysis across an array of accelerometers. This
technique is developed as part of the machine monitoring system of an ocean turbine. A rudimentary mathematical model is introduced to provide an understanding of the physics governing the
vibrations caused by a bearing with a raceway defect. This method is then used to detect a faulty bearing in two setups: on a lathe and in a dynamometer.
Subject Headings: Marine turbines, Mathematical models, Vibration, Measurement, Fluid dynamics, Dynamic testing
Software framework for prognostic health monitoring of ocean-based power generation.
Bowren, Mark., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
On August 5, 2010, the U.S. Department of Energy (DOE) designated the Center for Ocean Energy Technology (COET) at Florida Atlantic University (FAU) as a national center for ocean
energy research and development of prototypes for open-ocean power generation. Maintenance on ocean-based machinery can be very costly. To avoid unnecessary maintenance it is necessary to monitor
the condition of each machine in order to predict problems. This kind of prognostic health monitoring (PHM) requires a condition-based maintenance (CBM) system that supports diagnostic and
prognostic analysis of large amounts of data. Research in this field led to the creation of ISO13374 and the development of a standard open-architecture for machine condition monitoring. This
thesis explores an implementation of such a system for ocean-based machinery using this framework and current open-standard technologies.
Subject Headings: Machinery, Monitoring, Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics
Modeling and control of the "C-Plane" ocean current turbine.
VanZwieten, James H., Florida Atlantic University, Driscoll, Frederick R., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The "C-Plane" is a submerged ocean current turbine that uses sustained ocean currents to produce electricity. This turbine is moored to the sea floor and is capable of changing depth, as
the current profile changes, to optimize energy production. A 1/30th scale physical prototype of the C-Plane is being developed and the analysis and control of this prototype is the focus of this
work. A mathematical model and dynamic simulation of the 1/30th scale C-Plane prototype is created to analyze this vehicle's performance, and aid in the creation of control systems. The control
systems that are created for this prototype each use three modes of operation and are the Mixed PID/Bang Bang, Mixed LQR/PID/Bang Bang, and Mixed LQG/PID/Bang Bang control systems. Each of these
controllers is tested using the dynamic simulation, and the Mixed PID/Bang Bang controller proves to be the most efficient and robust during these tests.
Subject Headings: Marine turbines--Automatic control, Ocean energy resources, Marine turbines--Mathematical models
Hydrodynamic analysis of ocean current turbines using vortex lattice method.
Goly, Aneesh, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The main objective of the thesis is to carry out a rigorous hydrodynamic analysis of ocean current turbines and determine power for a range of flow and geometric parameters. For the
purpose, a computational tool based on the vortex lattice method (VLM) is developed. Velocity of the flow on the turbine blades, in relation to the freestream velocity, is determined through
induction factors. The geometry of trailing vortices is taken to be helicoidal. The VLM code is validated by comparing its results with other theoretical and experimental data corresponding to
flows about finite-aspect ratio foils, swept wings and a marine current turbine. The validated code is then used to study the performance of the prototype gulfstream turbine for a range of
parameters. Power and thrust coefficients are calculated for a range of tip speed ratios and pitch angles. Of all the cases studied, the one corresponding to tip speed ratio of 8 and uniform
pitch angle of 20 produced the maximum power of 41.3 kW in a current of 1.73 m/s. The corresponding power coefficient is 0.45, which is slightly less than the Betz limit power coefficient of
0.5926. The VLM computational tool developed for the research is found to be quite efficient in that it takes only a fraction of a minute on a regular laptop PC to complete a run. The tool can
therefore be efficiently used or integrated into software for design optimization.
Subject Headings: Marine turbines, Mathematical models, Water currents, Forecasting, Mathematical models, Aerodynamics, Mathematics, Finite element method
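The quoted power coefficient can be checked against its definition, C_p = P / (0.5 ρ A V³), the fraction of the kinetic power flux through the rotor's swept area that is extracted. The rotor radius and seawater density in the example below are illustrative assumptions, not values from the thesis:

```python
import math

def power_coefficient(power_w, rho, rotor_radius, current_speed):
    """C_p = P / (0.5 * rho * A * V**3), where A is the swept area."""
    area = math.pi * rotor_radius ** 2             # swept area, m^2
    flux = 0.5 * rho * area * current_speed ** 3   # available power, W
    return power_w / flux

BETZ_LIMIT = 16 / 27  # ~0.5926, the theoretical maximum C_p
```

Any computed C_p should fall below the Betz limit, as the thesis's value of 0.45 does.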
Design and finite element analysis of an ocean current turbine blade.
Asseff, Nicholas S., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
A composite 3 meter ocean current turbine blade has been designed and analyzed using Blade Element Theory (BET) and commercial Finite Element Modeling (FEM) code, ANSYS. It has been
observed that using the numerical BET tool created, power production up to 141 kW is possible from a 3 bladed rotor in an ocean current of 2.5 m/s with the proposed blade design. The blade is of
sandwich construction with carbon fiber skin and high density foam core. It also contains two webs made of S2-glass for added shear rigidity. Four design cases were analyzed, involving
differences in hydrodynamic shape, material properties, and internal structure. Results from the linear static structural analysis revealed that the best design provides adequate stiffness and
strength to produce the proposed power without any structural failure. An Eigenvalue Buckling analysis confirmed that the blade would not fail from buckling prior to overstressed laminate failure
if the loading was to exceed the Safety Factor.
Subject Headings: Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics, Composite materials, Mathematical models
Mathematical modeling of wave-current interactions in marine current turbines.
Singh, Amit J., College of Engineering and Computer Science, Department of Civil, Environmental and Geomatics Engineering
The concept of marine current turbines was developed by Peter Fraenkel in the early 1970s. Ever since Fraenkel's efforts to modify and test the technology, several worldwide agencies
have been exploiting the technology to retrofit the marine current turbine to their particular application. The marine current turbine has evolved from generating a few kilowatts to a few
gigawatts. The present study focuses on a megawatt sized turbine to be located offshore the coast of Ft. Lauderdale, Florida. The turbine is to be placed in a similar location as a 20 kW test
turbine developed by the Southeast National Marine Renewable Energy Center (SNMREC) at Florida Atlantic University, Dania Beach, FL. Data obtained from the SNMREC is used in the mathematical
model. ANSYS FLUENT is chosen as the CFD software to perform wave-current interaction simulation for the present study. The turbine is modeled in SolidWorks, then meshed in ANSYS ICEM CFD, then
run in FLUENT. The results obtained are compared to published work by scholarly articles from Fraenkel, Barltrop and many other well known marine energy researchers. The effects of wave height on
the turbine operation are analyzed and the results are presented in the form of plots for tip speed ratio and current velocity.
Subject Headings: Wave resistance (Thermodynamics), Structural design, Mathematical models, Laser Doppler velocimetry, Marine turbines, Mathematical models
Methodology for fault detection and diagnostics in an ocean turbine using vibration analysis and modeling.
Mjit, Mustapha., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis describes a methodology for mechanical fault detection and diagnostics in an ocean turbine using vibration analysis and modeling. This methodology relies on the use of
advanced methods for machine vibration analysis and health monitoring. Because of some issues encountered with traditional methods such as Fourier analysis for non-stationary rotating machines,
the use of more advanced methods such as Time-Frequency Analysis is required. The thesis also includes the development of two LabVIEW models. The first model combines the advanced methods for
on-line condition monitoring. The second model performs the modal analysis to find the resonance frequencies of the subsystems of the turbine. The dynamic modeling of the turbine using Finite
Element Analysis is used to estimate the baseline of vibration signals in sensors locations under normal operating conditions of the turbine. All this information is necessary to perform the
vibration condition monitoring of the turbine.
Subject Headings: Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics, Composite materials, Mathematical models, Elastic analysis (Engineering)
Fatigue modeling of composite ocean current turbine blade.
Akram, Mohammad Wasim, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The success of harnessing energy from ocean currents will require a reliable structural design of the turbine blade used for energy extraction. In this study we are particularly
focusing on the fatigue life of a 3m length ocean current turbine blade. The blade consists of sandwich construction having polymeric foam as core, and carbon/epoxy as face sheet. Repetitive
loads (Fatigue) on the blade have been formulated from the randomness of the ocean current associated with turbulence and also from velocity shear. These varying forces will cause a cyclic
variation of bending and shear stresses subjecting to the blade to fatigue. Rainflow Counting algorithm has been used to count the number of cycles within a specific mean and amplitude that will
act on the blade from random loading data. The finite element code ANSYS has been used to develop an S-N diagram at a frequency of 1 Hz and a loading ratio of 0.1. The number of specific load cycles from
Rainflow Counting in conjunction with S-N diagram from ANSYS has been utilized to calculate fatigue damage up to 30 years by Palmgren-Miner's linear hypothesis.
Subject Headings: Turbines, Blades, Materials, Fatigue, Marine turbines, Mathematical models, Structural dynamics, Composite materials, Mathematical models, Sandwich construction, Fatigue
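The Palmgren-Miner step described in the abstract has a compact form: the cumulative damage is D = Σ n_i/N_i over the rainflow-counted bins, with failure predicted when D reaches 1. A minimal sketch (names illustrative, not from the thesis):

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i).
    cycle_counts: rainflow-counted applied cycles n_i per stress-range bin;
    cycles_to_failure: N_i for the same bins, read off the S-N curve.
    Fatigue failure is predicted when the sum reaches 1.0."""
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))
```

In a 30-year life prediction, the `cycle_counts` would come from the rainflow count of the random loading scaled to the design life, and the `cycles_to_failure` from the ANSYS-derived S-N diagram.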
Development of an integrated computational tool for design and analysis of composite turbine blades under ocean current loading.
Zhou, Fang., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
A computational tool has been developed by integrating National Renewable Energy Laboratory (NREL) codes, Sandia National Laboratories' NuMAD, and ANSYS to investigate a horizontal axis
composite ocean current turbine. The study focused on the design, analysis, and life prediction of composite blade considering random ocean current, cyclic rotation, and hurricane-driven ocean
current. A structural model for a horizontal axis FAU research OCT blade was developed. The following NREL codes were used: PreComp, BModes, ModeShape, AeroDyn, and FAST. PreComp was used to compute
section properties of the OCT blade. BModes and ModeShape calculated the mode shapes of the blade. Hydrodynamic loading on the OCT blade was calculated by modifying the inputs to AeroDyn and
FAST. These codes were then used to obtain the dynamic response of the blade, including blade tip displacement, normal force (FN) and tangential force (FT), flap and edge bending moment
distribution with respect to blade rotation.
Subject Headings: Structural dynamics, Fluid dynamics, Marine turbines, Mathematical models, Turbines, Blades, Design and construction
Numerical Simulation of an Ocean Current Turbine Operating in a Wake Field.
Pyakurel, Parakram, VanZwieten, James H., Dhanak, Manhar R., Florida Atlantic University, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
An Ocean Current Turbine (OCT) numerical simulation for creating, testing and tuning flight and power takeoff controllers, as well as for farm layout optimization is presented. This
simulation utilizes a novel approach for analytically describing oceanic turbulence. This approach has been integrated into a previously developed turbine simulation that uses unsteady Blade
Element Momentum theory. Using this, the dynamical response and power production of a single OCT operating in ambient turbulence is quantified. An approach for integrating wake effects into this
single device numerical simulation is presented for predicting OCT performance within a farm. To accomplish this, far wake characteristics behind a turbine are numerically described using
analytic expressions derived from wind turbine wake models. These expressions are tuned to match OCT wake characteristics calculated from CFD analyses and experimental data. Turbine wake is
characterized in terms of increased turbulence intensities and decreased mean wake velocities. These parameters are calculated based on the performance of the upstream OCT and integrated into the
environmental models used by downstream OCT. Simulation results are presented that quantify the effects of wakes on downstream turbine performance over a wide range of relative downstream and
cross stream locations for both moored and bottom mounted turbine systems. This is done to enable the development and testing of flight and power takeoff controllers designed for maximizing
energy production and reduce turbine loadings.
Subject Headings: Turbulence--Mathematical models, Marine turbines--Mathematical models, Wind turbines--Aerodynamics--Mathematical models, Structural dynamics, Computational fluid dynamics, Fluid dynamic measurements, Atmospheric circulation
Design and analysis of an ocean current turbine performance assessment system.
Young, Matthew T., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis proposes a sensor approach for quantifying the hydrodynamic performance of Ocean Current Turbines (OCT), and investigates the influence of sensor-specific noise and sampling
rates on calculated turbine performance. Numerical models of the selected sensors are developed, and then utilized to add stochastic measurement error to numerically-generated, non-stochastic OCT
data. Numerically-generated current velocity and turbine performance measurements are used to quantify the relative influence of sensor-specific error and sampling limitations on sensor
measurements and calculated OCT performance results. The study shows that the addition of sensor error alters the variance and mean of OCT performance metric data by roughly 7.1% and 0.24%,
respectively, for four evaluated operating conditions. It is shown that sensor error results in a mean, maximum and minimum performance metric to Signal to Noise Ratio (SNR) of 48.6% and 6.2%,
Date Issued
Subject Headings
Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics, Stochastic processes, Rotors, Design and construction, Testing
Document (PDF)
Dissipation and eddy mixing associated with flow past an underwater turbine.
Reza, Zaqie, College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The objective of this thesis is to analyze the flow past an ocean current turbine using a finite volume Navier-Stokes CFD solver. A full 3-D RANS approach in a moving reference frame is
used to model the flow. By employing periodic boundary conditions, one-third of the flow-field is analyzed and the output is replicated to other sectors. Following validation of the computation
with an experimental study, the flow fields and particle paths for the case of uniform and sheared incoming flows past a generic turbine with various blade pitch angles are evaluated and
analyzed. Flow field and wake expansion are visualized. Eddy viscosity effects and its dependence on flow field conditions are investigated.
Date Issued
Subject Headings
Vibration (Aerodynamics), Finite element method, Marine turbines, Mathematical models, Water currents, Forecasting, Computational fluid dynamics
Document (PDF)
Complete thermal design and modeling for the pressure vessel of an ocean turbine -: a numerical simulation and optimization approach.
Kaiser, Khaled., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis presents a numerical optimization approach to the thermal design of the ocean turbine developed by the Centre of Ocean Energy and Technology (COET). The technique used here
integrates finite element analysis (FEA) of heat transfer with an artificial neural network (ANN) and a genetic algorithm (GA) for optimization purposes.
Date Issued
Subject Headings
Thermal analysis, Computer programs, Heat exchangers, Design and construction, Marine turbines, Testing, Mathematical models, Fluid dynamics
Document (PDF)
Numerical performance prediction for FAU's first generation ocean current turbine.
Vanrietvelde, Nicolas., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
This thesis presents the analytically predicted position, motion, attitude, power output and forces on Florida Atlantic University's (FAU) first generation ocean current turbine for a
wide range of operating conditions. These values are calculated using a 7-DOF dynamics simulation of the turbine and the cable that attaches it to the mooring system. The numerical simulation
modifications and upgrades completed in this work include developing a wave model that incorporates the effects of waves into the simulation, upgrading the rotor model to specify the number of blades,
and upgrading the cable model to specify the number of cable elements. This enhanced simulation is used to quantify the turbine's performance in a wide range of currents and wave fields, and when
stopping and starting the rotor. For a uniform steady current this simulation predicts that when the rotor is fixed in a 1.5 m/s current the drag on the turbine is 3.0 kN, the torque on the rotor
is 384 N-m, and the turbine roll and pitch are 2.4º and -1.2º. When the rotor is allowed to spin up to the rotational velocity where the turbine produces maximum power, the turbine drag increases to
7.3 kN, the torque increases to 1482 N-m, the shaft power is 5.8 kW, the turbine roll increases to 9º and the turbine pitch stays constant. Subsequently, a sensitivity analysis is done to
evaluate changes in turbine performance caused by changes in turbine design and operation. This analysis shows, among other things, that a non-axial flow on the turbine of up to 10º has a minimal
effect on net power output and that the vertical stable position of the turbine varies linearly with the weight/buoyancy of the turbine with a maximum variation of 1.77 m for each increase or
decrease of 1 kg at a current speed of 0.5 m/s.
Date Issued
Subject Headings
Marine turbines, Mathematical models, Structural dynamics, Rotors, Design and construction, Testing, Fluid dynamics
Document (PDF)
Numerical simulation tool for moored marine hydrokinetic turbines.
Hacker, Basil L., Ananthakrishnan, Palaniswamy, VanZwieten, James H., College of Engineering and Computer Science, Department of Ocean and Mechanical Engineering
The research presented in this thesis utilizes Blade Element Momentum (BEM) theory with a dynamic wake model to customize the OrcaFlex numeric simulation platform in order to allow
modeling of moored Ocean Current Turbines (OCTs). This work merges the advanced cable modeling tools available within OrcaFlex with a well-documented BEM rotor modeling approach, creating a combined
tool that was not previously available for predicting the performance of moored ocean current turbines. This tool allows ocean current turbine developers to predict and optimize the performance
of their devices and mooring systems before deploying these systems at sea. The BEM rotor model was written in C++ to create a back-end tool that is fed continuously updated data on the OCT’s
orientation and velocities as the simulation is running. The custom designed code was written specifically so that it could operate within the OrcaFlex environment. An approach for numerically
modeling the entire OCT system is presented, which accounts for the additional degree of freedom (rotor rotational velocity) that is not accounted for in the OrcaFlex equations of motion. The
properties of the numerically modeled OCT were then set to match those of a previously numerically modeled Southeast National Marine Renewable Energy Center (SNMREC) OCT system and comparisons
were made. Evaluated conditions include uniform axial and off-axis currents, as well as axial and off-axis wave fields. For comparison purposes these conditions were applied to a geodetically
fixed rotor, showing nearly identical results for the steady conditions but varied, though in most cases still acceptable, accuracy for the wave environment. Finally, this entire moored OCT system was
evaluated in a dynamic environment to help quantify the expected behavioral response of SNMREC’s turbine under uniform current.
Date Issued
Subject Headings
Fluid dynamics, Hydrodynamics -- Research, Marine turbines -- Mathematical models, Ocean wave power, Structural dynamics
Document (PDF)
Numerical simulation and prediction of loads in marine current turbine full-scale rotor blades.
Senat, Junior., College of Engineering and Computer Science, Department of Civil, Environmental and Geomatics Engineering
Marine current turbines are submerged structures subjected to loading conditions from both current and wave effects. The associated phenomena pose significant challenges to the
analyses of the loading response of the rotor blades and practical limitations in terms of device location and operational envelopes. The effect of waves on marine current turbines can contribute
to the change of flow field and pressure field around the rotor and hence changes the fluid forces on the rotor. However, the effect of the waves on the rotor depends on the magnitude and
direction of flow velocity that is induced by the waves. An analysis is presented for predicting the torque, thrust, and bending moments resulting from the wave-current interactions at the root
of rotor blades in a horizontal axis marine current turbine using the blade element-momentum (BEM) theory combined with linear wave theory. Parametric studies are carried out to better understand
the influence of important parameters, which include wave height, wave frequency, and tip-speed ratio, on the performance of the rotor. The periodic loading on the blade due to the steady spatial
variation of current speeds over the rotor swept area is determined by a limited number of parameters, including Reynolds number, lift and drag coefficients, thrust and torque coefficients, and
power coefficient. The results established that the BEM theory combined with linear wave theory can be used to analyze the wavecurrent interactions in full-scale marine current turbine. The power
and thrust coefficients can be analyzed effectively using the numerical BEM theory in conjunction with corrections to the tip loss coefficient and 3D effects., It has been found both thrust and
torque increase as the current speed increases, and in longer waves the torque is relatively sensitive to the variation of wave height. Both in-plane and out-of-plane bending moments fluctuate
significantly and can be predicted by linear wave theory with blade element-momentum theory.
Date Issued
Subject Headings
Marine turbines, Mathematical models, Structural dynamics, Fluid dynamics, Rotors, Design and construction
Document (PDF)
Vibration analysis for ocean turbine reliability models.
Wald, Randall David., College of Engineering and Computer Science, Department of Computer and Electrical Engineering and Computer Science
Submerged turbines which harvest energy from ocean currents are an important potential energy resource, but their harsh and remote environment demands an automated system for machine
condition monitoring and prognostic health monitoring (MCM/PHM). For building MCM/PHM models, vibration sensor data is among the most useful (because it can show abnormal behavior which has yet
to cause damage) and the most challenging (because due to its waveform nature, frequency bands must be extracted from the signal). To perform the necessary analysis of the vibration signals,
which may arrive rapidly in the form of data streams, we develop three new wavelet-based transforms (the Streaming Wavelet Transform, Short-Time Wavelet Packet Decomposition, and Streaming
Wavelet Packet Decomposition) and propose modifications to the existing Short-Time Wavelet Transform. ... The proposed algorithms also create and select frequency-band features which focus on the
areas of the signal most important to MCM/PHM, producing only the information necessary for building models (or removing all unnecessary information) so models can run on less powerful hardware.
Finally, we demonstrate models which can work in multiple environmental conditions. ... Our results show that many of the transforms give similar results in terms of performance, but their
different properties as to time complexity, ability to operate in a fully streaming fashion, and number of generated features may make some more appropriate than others in particular
applications, such as when streaming data or hardware limitations are extremely important (e.g., ocean turbine MCM/PHM).
Date Issued
Subject Headings
Marine turbines, Mathematical models, Fluid dynamics, Structural dynamics, Vibration, Measurement, Stochastic processes
Document (PDF)
A Power Quality Monitoring System for a 20 kW Ocean Turbine.
Cook, Kevin, Xiros, Nikolaos I., Florida Atlantic University
This thesis explores an approach for the measurement of the quality of power generated by the Center of Ocean and Energy Technology's prototype ocean turbine. The work includes the
development of a system that measures the current and voltage waveforms for all three phases of power created by the induction generator and quantifies power variations and events that occur
within the system. These so-called "power quality indices" are discussed in detail, including the definition of each and how they are calculated using LabVIEW. The results of various tests
demonstrate that this system is accurate and may be implemented in the ocean turbine system to measure the quality of power produced by the turbine. The work then explores a dynamic model of the
ocean turbine system that can be used to simulate the response of the turbine to varying conditions.
Date Issued
Subject Headings
Marine turbines--Mathematical models, Fluid dynamics, Power electronics, Finite element method
Document (PDF)
Conferences and courses
Members of the research team have taken part in organizing various research activities and in lecturing graduate courses.
Planned activities
- Finite Geometry and Ramsey Theory, Banff, September 2025 (S. Ball and P. Morris: organisers)
- New trends in arithmetic combinatorics and related fields, Granada, April 2025 (O. Serra: organiser)
- BGSMath advanced course Threshold phenomena in random structures, Barcelona, November 2024 (P. Morris, T. Naia, G. Perarnau: organisers)
- CRM Exploratory Workshop on interplays between Algebra, Combinatorics and Proof Verification, CRM Barcelona, July 2024 (J. Rue: organizer, A. de Mier: speaker)
- VI Encuentro Conjunto RSME-SMM, Valencia, July 2024 (G. Perarnau: organiser of a special session)
- Discrete Mathematics Days 2024, Alcalà de Henares, July 2024 (M. Noy: member of the scientific committee, L. Vena: member of the organizing committee)
- Journées Nationales de l’Informatique Mathématique, Grenoble, March 2024 (G. Perarnau: plenary speaker)
- Congreso Bienal de la Real Sociedad Matemática Española - Matemática Discreta y Algorítmica, Pamplona, January 2024 (M. Noy: organiser of a special session and invited speaker).
- Discrete Probability Days, CRM Barcelona, October 2023 (P. Morris and G.Perarnau: invited speakers)
- RandNET Workshop on Graph Limits and Networks, Prague, September 2023 (Part of our RandNet RISE network project)
- EUROCOMB'23: European Conference on Combinatorics (O. Serra: co-chair of the scientific committee, M. Noy: member of the scientific committee).
- BARCELONA INTRODUCTION TO MATHEMATICAL RESEARCH 2023, Barcelona, July 2023 (G. Perarnau: dissemination talk, J. Rué: course lecturer)
- FOCM'2023, Paris, June 2023 (J. Rué: organizer of workshop "Graph theory and combinatorics")
- 21st INFORMS Applied Probability Society Conference, Nancy, June 2023 (G. Perarnau: mini-symposium invited speaker)
- Aart and Combinatorics, Izmir, February 2023 (S. Ball: organizer)
- XV Barcelona Weekend in Group Theory (P. Burillo, J. Delgado, E. Ventura: organizers; M. Roy: speaker).
- FPSAC 2021, Bar-Ilan University in Ramat-Gan, Israel, January 2022 (A. de Mier: plenary speaker).
- Discrete Mathematics Days, Santander, June 2022 (M. Noy: chair of the scientific committee, A. de Mier, L. Vena: members of the scientific committee, G. Perarnau: plenary speaker).
- Probability and Combinatorics: a conference in celebration of Bruce Reed's mathematical career, Oxford, July 2022. (G.Perarnau: invited participant)
- Workshop on Logic and Random Discrete Structures, Dagstuhl, February 2022 (M. Noy: member of the organizing committee)
- Combinatorial Reconfiguration Workshop, Banff International Research Station, May 2022 (G. Perarnau: invited participant)
- Algorithmic and Enumerative Combinatorics, Vienna, July 2022 (M. Noy: invited speaker)
- RandNet workshop on Random graphs, Eindhoven, August 2022 (Part of our RandNet RISE network project).
- EUROCOMB'21: European Conference on Combinatorics (GAPCOMB members form the organizing committee; O.Serra: co-chair of the scientific committee, M. Noy: member of the scientific committee, J. Rué,
G. Perarnau: chairs of the organizing committee).
- BYMAT conference 2020: Bringing Young Mathematicians Together, December 2020 (M. Noy: invited speaker)
- 15th Workshop on Computational Logic and Applications, October 2020 (M.Noy: invited speaker)
- FPSAC'20, Ramat Gan, Israel (July 2020) (A. de Mier: plenary speaker)
- University of Cape Town, Cape Town, South Africa, (April 2020) (E. Ventura: plenary speaker)
- Workshop on Combinatorics, Oberwolfach, Germany, January 2020 (G. Perarnau: Invited participant).
- Workshop on Random Graphs. University of Tornio, January 2020 (G. Perarnau: Invited speaker).
- Workshop on Graph Theory & Combinatorics in Thuringia, Erfurt, July 2020 (G. Perarnau: Invited speaker).
- BGSMath Advanced Course: Quantum Error-Correcting Codes (8 hours), Universitat Politècnica de Catalunya, January 2020 (S. Ball: organizer and lecturer; Felix Huber, ICFO)
- Schloss Dagstuhl Seminar 19401: Comparative Theory for Graph Polynomials, October 2019 (M. Noy, A. de Mier: invited participants).
- CIRM Summer School "Random trees and graphs Summer School", July 2019 (J. Rué: organizer)
- 14th Barcelona Weekend on Group Theory (E. Ventura, P. Burillo: organizers)
- British Combinatorial Conference (G. Perarnau: co-chair of organizing committee)
- Finite Geometry Workshop 2019, University of Szeged, Hungary, February 2019 (S.Ball: invited speaker)
- CSASC 2018: Joint Meeting of the Czech, Slovenian, Austrian, Slovak and Catalan Mathematical Societies (O. Serra: organizer of the session 'Graph Theory and Combinatorics')
- 21st International Workshop for young Mathematicians (O.Serra: lecturer)
- Discrete Mathematics Days, Sevilla, June 2018 (O.Serra: chair of the scientific committee, J.Rué, A. de Mier: members of the scientific committee)
- Workshop on Enumerative Combinatorics, Oberwolfach, May 2018 (M. Noy: organizer)
- Workshop on Coding and Information Theory, Harvard, USA, April 2018 (S.Ball: invited speaker)
- 13th Barcelona Weekend on Group Theory (E. Ventura, P. Burillo: organizers)
- ALEA meeting, CIRM Luminy, France, March 2018. (M. Noy: lecturer on Logic and random graphs)
- Discretaly: A Workshop in Discrete Mathematics, La Sapienza, Rome, February 2018 (S. Ball: invited speaker)
- 18th Journées Combinatoires et Algorithmiques du Littoral Méditerranéen JCALM, Barcelona, January 2018 (O. Serra: organizer)
- The music of Numbers: a conference in honour of Javier Cilleruelo, ICMAT Madrid, September 2017 (J.Rué, O.Serra: organizers)
- Group Theory session at the 'Second Joint Meeting Edinburgh Mathematical Society - Societat Catalana de Matemàtiques', Edimburgh 2017 (E. Ventura: organizer)
- Summer school on finite geometries, Brighton, UK, 2017 (S. Ball: lecturer)
- Graph Theory and Combinatorics session at FOCM 2017, UB Barcelona, July 2017 (M. Noy: organizer)
- Groups of Intermediate Growth, Seville, July 2017: workshop and three advanced courses (P. Burillo: organizer)
- BGSMath Monthly Program 'Random Structures and Beyond', FME-UPC, May-June 2017 (J. Rué, O. Serra: organizers)
- BGSMath graduate course 'Interactions of harmonic analysis, combinatorics and number theory', UB Barcelona, April-May 2017 (J. Rué, O. Serra: organizers and lecturers)
- Twelfth Barcelona Weekend in Group Theory. FME-UPC Barcelona, May 2017 (P. Burillo: organizer)
- BGSMath Advanced Course: Interactions of harmonic analysis, combinatorics and number theory, April-May 2017 (J. Rué, O. Serra: lecturers on applications of Fourier analysis in combinatorics and number theory)
- Combinatorics and Graph Theory session at CSASC 2016, IEC Barcelona, September 2016 (O. Serra: organizer)
- Algebra and combinatorics session at 6th Iberian Mathematical Meeting, USC, Spain, October 2017 (J. Pfeifle: organizer)
- New directions in Combinatorics, Singapore, May 2016 (S. Ball: lecturer)
- Symposium Diskrete Mathematik , Freie Universität Berlin, July 2016 (J.Rué: organizer)
- Discrete Mathematics Days 2016, FME-UPC Barcelona, July 2016 (J.Rué: chair of the Organizing Committee, O. Serra: chair of the Scientific Committee)
- Eleventh Barcelona Weekend in Group Theory, FME-UPC Barcelona, May 2016 (P. Burillo, E. Ventura: organizers)
- ALEA in Europe Meeting, Munich, February 2016 (M. Noy: lecturer on Random Graphs from Constrained Classes)
- Fall school on Random Graphs, Cargèse (Corsica), September 2015 (M. Noy and O. Serra lecturers on Random graphs from constrained classes of graphs and the Lovász Local Lemma)
- Eurocomb 2015, Bergen, August-September 2015 (O. Serra: Chair of the Scientific Committee)
- Berlin-Poznán-Hamburg Seminar, 20th Anniversary, Freie Universität Berlin, 29-30 May (J. Rué: organizer)
- Tenth Barcelona Weekend in Group Theory, FME UPC Barcelona, May 2015 (P. Burillo, E. Ventura: organizers)
- 'Geometric Group Theory' session at the Joint Meeting of the Edinburgh Mathematical Society with the Catalan Mathematical Society May 2015 (P. Burillo, E. Ventura: organizers)
- Let's Matroid! CUSO Doctoral School, Neuchatel, May 2015 (A. de Mier: lecturer on Transversal matroids)
- Random Discrete Structures Session at Barcelona Mathematical Days, IEC, November 2014 (O. Serra: organizer)
- 3-week BlockCourse: towards the Polynomial Freiman-Ruzsa Conjecture, Freie Universität Berlin, October 2014 (J. Rué: organizer)
- Clay Mathematical Summer School 2014: Periods and Motives, ICMAT Madrid, June 2014 (J. Rué: organizer)
- Ninth Barcelona Weekend in Group Theory, FME UPC, May 2014 (P. Burillo, E. Ventura: organizers)
- Workshop on Enumerative Combinatorics , Oberwolfach, March 2014 (M. Noy: organizer)
Continuous Functions
This page is intended to be a part of the Real Analysis section of Math Online. Similar topics can also be found in the Calculus section of the site.
Continuity at a Point
Definition: Let $f : A \to \mathbb{R}$ be a function and let $c \in A$. We say that $f$ is continuous at the point $c$ if $\forall \epsilon > 0$ $\exists \delta > 0$ such that $\forall x \in A$ with
$\mid x - c \mid < \delta$ then $\mid f(x) - f(c) \mid < \epsilon$. Equivalently, $f$ is continuous at $c$ if $\forall x \in A \cap V_{\delta} (c)$ we have that $f(x) \in V_{\epsilon} (f(c))$.
The graphic below illustrates the definition of a function being continuous at a point $c$ in its domain.
Therefore, a function $f$ is continuous at the point $c$ if for every $\epsilon > 0$ there is a corresponding $\delta > 0$ such that whenever $x$ is in the domain $A$ and is $\delta$-close to $c$, then $f(x)$ is $\epsilon$-close to $f(c)$.
Continuity at Cluster and Isolated Points
If $c$ is a cluster point of the set $A$, then $f$ is continuous at $c$ if and only if $\lim_{x \to c} f(x) = f(c)$.
If $c$ is an isolated point of the set $A$, then $f$ is always continuous at $c$. Why? For all $\epsilon > 0$ there will exist a $\delta > 0$ such that if $x \in A$ and $\mid x - c \mid < \delta$
(notice how we aren't restricting $x$ to be different from $c$) then $\mid f(x) - f(c) \mid < \epsilon$. In other words, if we choose $\delta$ small enough, then since $c$ is isolated the
$\delta$-neighbourhood around $c$ will contain no point of $A$ other than $c$ itself, so the only value of $f$ we need to consider is $f(c)$, and clearly $\mid f(c) - f(c) \mid = 0 < \epsilon$ since $\epsilon > 0$.
Continuity on a Set
Definition: Let $f : A \to \mathbb{R}$ be a function and let $B \subseteq A$. Then $f$ is continuous on the set $B$ if $\forall b \in B$, $f$ is continuous at $b$.
For example, consider the function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 5$, which can geometrically be represented as a horizontal line that crosses the $y$-axis at the point $(0, 5)$.
Since $f(x) = 5$ for all $x \in \mathbb{R}$, it shouldn't be too hard to understand that $f$ is continuous on all of $\mathbb{R}$.
Another example is the function $f(x) = x$, which is the diagonal line that passes through the origin.
Yet another example is any function $p : \mathbb{R} \to \mathbb{R}$ that is an $n^{\mathrm{th}}$-degree polynomial defined by $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ where $a_0, a_1, ..., a_n \in
\mathbb{R}$. $p$ will be continuous on all of $\mathbb{R}$ as well.
The last three examples depicted functions that were continuous on all of $\mathbb{R}$. This is not always the case though. For example, consider the function $f : (0, \infty) \to \mathbb{R}$
defined by $f(x) = \frac{1}{x}$. The domain of this function is $\{ x \in \mathbb{R} : x > 0 \}$. We will show that $f$ is continuous on its entire domain.
Let $c$ be a member of the domain of $f$, i.e., $c \in D(f) = (0, \infty)$. Let $\epsilon > 0$ be given. We need to find $\delta > 0$ such that $\forall x \in D(f)$ where $\mid x - c \mid < \delta$
then $\biggr \rvert \frac{1}{x} - \frac{1}{c} \biggr \rvert < \epsilon$. Now:
\biggr \rvert \frac{1}{x} - \frac{1}{c} \biggr \rvert = \frac{\mid x - c \mid}{xc}
We want $\frac{\mid x - c \mid}{xc} < \epsilon$. Let $\delta = \mathrm{min} \{ \frac{c}{2}, \frac{\epsilon c^2}{2} \}$. Then $\mid x - c \mid < \frac{c}{2}$ gives $x > \frac{c}{2}$, so $\frac{1}{xc} < \frac{2}{c^2}$, and since $\mid x - c \mid < \frac{\epsilon c^2}{2}$:
\frac{\mid x - c \mid}{xc} < \frac{2 \mid x - c \mid}{c^2} < \frac{2}{c^2} \cdot \frac{\epsilon c^2}{2} = \epsilon
Therefore $f$ is continuous for all $c$ in its domain $D(f)$.
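As a quick numerical sanity check (not a proof), the $\delta$ chosen above can be tested in code. The function names below are illustrative:

```javascript
// For f(x) = 1/x at a point c > 0, the proof above chooses
// delta = min(c/2, eps * c^2 / 2). Sample points x with |x - c| < delta
// and confirm that |1/x - 1/c| < eps for every sample.
function deltaFor(c, eps) {
  return Math.min(c / 2, (eps * c * c) / 2);
}

function checkContinuityAt(c, eps, samples = 1000) {
  const delta = deltaFor(c, eps);
  for (let i = 0; i < samples; i++) {
    const x = c - delta + 2 * delta * Math.random(); // x in (c - delta, c + delta)
    if (Math.abs(1 / x - 1 / c) >= eps) return false;
  }
  return true;
}

console.log(checkContinuityAt(0.5, 0.01)); // true for every sample
```

Since the bound in the proof is strict, every sampled point satisfies the $\epsilon$ inequality, whichever random points are drawn.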
Find Lowest Number In Array Js With Code Examples
In this article, we will look at how to get the solution for the problem, Find Lowest Number In Array Js With Code Examples
How do you find the smallest number?
Steps to find the smallest number.
• Count the frequency of each digit in the number.
• If it is a non-negative number, place the smallest digit (except 0) at the left-most position of the required number.
• Else, if it is a negative number, place the largest digit at the left-most position of the required number.
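Those steps can be sketched in JavaScript for the non-negative case; the function name here is illustrative:

```javascript
// Rearrange the digits of a non-negative integer to form the smallest
// possible number, keeping a non-zero digit in the leading position.
function smallestRearrangement(n) {
  const digits = String(n).split("").sort(); // ascending digit order
  const i = digits.findIndex(d => d !== "0"); // smallest non-zero digit
  if (i > 0) {
    const [d] = digits.splice(i, 1);
    digits.unshift(d); // move it to the front to avoid a leading zero
  }
  return Number(digits.join(""));
}

console.log(smallestRearrangement(3040)); // 3004
console.log(smallestRearrangement(321)); // 123
```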
const arr = [14, 58, 20, 77, 66, 82, 42, 67, 42, 4]
const min = arr.reduce((a, b) => Math.min(a, b))
const arr = [14, 58, 20, 77, 66, 82, 42, 67, 42, 4]
const min = Math.min(...arr)
const arr = [1,2,3,4,5]
console.log(Math.min(...arr)) // 1
//Not using Math.min:
const min = function(numbers) {
  let smallestNum = numbers[0];
  for(let i = 1; i < numbers.length; i++) {
    if(numbers[i] < smallestNum) {
      smallestNum = numbers[i];
    }
  }
  return smallestNum;
}
Array.min = function( array ){
  return Math.min.apply( Math, array );
};
How do you find the shortest string in an array?
1. Finding Shortest String in List or ArrayList:
• Using Stream.min() method.
• Using Stream.collect() method.
• Using Stream.reduce() method.
• Using Stream.sorted() method.
• Using IntStream.summaryStatistics() method.
• Using Collection.min() method.
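The methods listed above are from the Java Stream API; in JavaScript, this article's language, the same idea can be sketched with a single reduce pass:

```javascript
// Find the shortest string in an array; ties keep the earlier element.
const words = ["banana", "fig", "cherry"];
const shortest = words.reduce((a, b) => (b.length < a.length ? b : a));
console.log(shortest); // "fig"
```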
How do you find the lowest value in an array?
M = min( A ) returns the minimum elements of an array. If A is a vector, then min(A) returns the minimum of A . If A is a matrix, then min(A) is a row vector containing the minimum value of each
column of A .
How do you find min and max values in an array in JavaScript?
Method 1: Using Math.min() and Math.max() The min() and max() methods of the Math object are static functions that return the minimum and maximum of the elements passed to them. These functions
can be passed an array with the spread (...) operator.
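A minimal sketch of Method 1 with the spread operator (the array values here are illustrative):

```javascript
// Math.min/Math.max take individual numbers, so spread the array.
const values = [14, 58, 20, 77, 66, 82, 42, 67, 42, 4];
const lowest = Math.min(...values);
const highest = Math.max(...values);
console.log(lowest, highest); // 4 82
```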
How do you find the smallest number in an array JavaScript?
I find that the easiest way to return the smallest value of an array is to use the spread operator with the Math.min() function.
How do you find the smallest number in a 2D array Java?
sort() :
• Declare a 2D int array to sort called data .
• Declare two int s, one to hold the maximum value, the other the minimum value. The initial value of the maximum should be Integer.MIN_VALUE and the initial value of the minimum should be Integer.MAX_VALUE.
• Loop through data from 0 to data.length :
• Output your result.
What is math ABS in JS?
The Math.abs() function returns the absolute value of a number.
How do you find the smallest number in a 2D array?
// assume the first element is both the largest and smallest: largest = arr[0][0]; smallest = arr[0][0];
It is better to assign the first element of the array to both the smallest and largest variables, and then compare it with the remaining elements of the array.
How do you find the lowest and highest number in Java?
How To Find the Largest and Smallest Value in Java
• If you need to find the largest of 2 numbers, you have to use Math.max().
• If you need to find the smallest of 2 numbers, you have to use Math.min().
• Please note that with the help of Math.max() and Math.min() ...
How do you find the highest and lowest number in an array?
We can use min_element() and max_element() to find the minimum and maximum elements of the array in C++.
Success is in NP
Recent Posts
One of the most interesting parts of theoretical computer science is complexity theory. At its core complexity theory attempts to answer this question: what kinds of problems are easy and what kinds
of problems are hard for a computer to solve? Problems are divided up into two classes: P and NP.*
A problem in P is easy for a computer to solve. A problem in NP is (we think) hard for a computer to solve. However, all problems in NP share the same thing: if you have the right answer, it’s very
easy for a computer to verify that the answer is right.
So NP problems are problems where coming to the right solution is hard, but verifying that your solution is correct is easy. To me this sounds like success.
Now let’s dive into the details so that I can explain why I think this is true.
We said before that a problem in P is easy for a computer to solve. What this really means is that a problem in P is polynomial-time solvable. Basically we’re saying that if the problem is in P it
can be solved in a reasonable amount of time by a computer. Something like calculating the greatest common divisor between two numbers is in P.
More interesting is the set of problems in NP. Problems in NP are not known to be solvable in polynomial time by a deterministic machine. This is a fancy way of saying that there may be a way for a
computer to solve problems in NP in a reasonable amount of time, but right now we don’t know of a way to do it.
A good example of a problem in NP is the traveling salesman problem. Imagine we have a network of 20 towns. Some towns are connected by a road and some aren't. The traveling salesman problem asks: is
there a way to visit every town in the network only once and return to the original town that you started in?
Imagine Alan is a salesman trying to figure this problem out without a map. He knows the towns that are connected with roads to the town he is currently in, but he has no way of getting an overall
picture of the problem. The only way to solve this is essentially by trying every single combination of routes from one town to another until he ends up back at the town he started in. As you can
see, this will probably take Alan a while.
But here’s the interesting distinction. Let’s say you were Alan’s skeptical boss. It would be very easy to verify that Alan had indeed found a route that visits each city only once and ends up back
at the town he started in.
Before he starts out on his journey Alan says, “Hey Boss I’m going out on my awesome sales route.”
All you have to do is give Alan a napkin and say, "I don't believe you've found this route. To prove it, here's what I want you to do: every time you reach a city, write the city's name on this napkin."
Alan's an honorable guy; you know he wouldn't cheat at this. And so he sets off on his journey. When he arrives back a few weeks later, all you have to do is look at the napkin with the list of
cities he’s visited and compare it to the list of cities he should have visited. If every city is accounted for, you’ve verified that Alan has solved the problem.
I believe that success is very similar to this. Just like Alan has to visit every possible combination of cities in order to figure out how to go to each city only once and end up back at the city he
started in, successful people have tried thousands of different combinations of actions to reach their goals. This takes years and years. It’s a hard problem.
But once they hit on the right combination of ingredients (specific to them and their particular situation) it's very easy to verify that such a combination did indeed make them successful.
People say “In order to be successful Bill Gates did this, then this, then this, then this, and then Microsoft IPO’d.” But in reality what happened is Bill Gates tried millions of different
combinations of actions and ideas (many of them simultaneously) but eventually he hit on a path that took him where he wanted to go. Then he showed us his napkin with the route he took, so that we
could verify it.
I’m still figuring out what to write on my napkin. Are you?
P.S. An interesting extension of this concept is the VC firm. A VC firm is like giving Alan the traveling salesman superpowers. Imagine if every time Alan is in a town and needs to decide which town
to go to next, he could split himself up. One copy of Alan goes to town A, one copy of Alan goes to town B, and one copy of Alan goes to town C. When those copies have to make decisions they just
make more copies. The first copy to arrive at the town Alan originally started in has solved the problem.
If there are a bunch of copies of Alan running around from one town to another it makes it much easier to come to a solution. VC firms do this with companies. They have a bunch of companies running
from town to town trying to solve “success.” Most of the companies get tired of doing all that running and die out. One or two find that route in a reasonable amount of time. We see them showing us
their napkins in tech news every day.
*Saying that problems are divided into P and NP is a simplification. P problems are in fact contained within the set of NP problems. However, all problems in NP may or may not be in P.
I’d love to hear from on Twitter at @danshipper. Or check out my startup Airtime for Email. We help you standardize and centrally control all of your company email signatures.
Browsing by Author "Urrutia Galicia, Jorge"
Now showing items 1-16 of 16
• Aichholzer, Oswin; Fabila Monroy, Ruy; Gonzalez Aguilar, Hernan; Hackl, Thomas; Heredia, Marco A.; Huemer, Clemens; Urrutia Galicia, Jorge; Vogtenhuber, Birgit (2014-08-01) Article Open Access
We consider a variant of a question of Erdos on the number of empty k-gons (k-holes) in a set of n points in the plane, where we allow the k-gons to be non-convex. We show bounds and structural
results on maximizing and ...
• Bereg, Sergey; Hurtado Díaz, Fernando Alfredo; Kano, Mikio; Korman, Matias; Lara, Dolores; Seara Ojea, Carlos; Silveira, Rodrigo Ignacio; Urrutia Galicia, Jorge; Verbeek, Kevin (2015) Article
Open Access
Let S be a finite set of geometric objects partitioned into classes or colors. A subset S' ⊆ S is said to be balanced if S' contains the same amount of elements of S from each of the
colors. We study several ...
• Cano, Javier; Garcia Olaverri, Alfredo Martin; Hurtado Díaz, Fernando Alfredo; Shakai, Toshinori; Tejel Altarriba, Francisco Javier; Urrutia Galicia, Jorge (2015-09) Article Open Access
Let P be a set of n points in the plane in general position. A subset H of P consisting of k elements that are the vertices of a convex polygon is called a k-hole of P, if there is no element of
P in the interior of its ...
• Alegría Galicia, Carlos; Orden, David; Palios, Leonidas; Seara Ojea, Carlos; Urrutia Galicia, Jorge (2019-04) Article Open Access
We study the problem of rotating a simple polygon to contain the maximum number of elements from a given point set in the plane. We consider variations of this problem where the rotation center
is a given point or lies on ...
• Fabila Monroy, Ruy; Garcia Olaverri, Alfredo Martin; Hurtado Díaz, Fernando Alfredo; Jaume, Rafel; Pérez Lantero, Pablo; Saumell, Maria; Silveira, Rodrigo Ignacio; Tejel Altarriba, Francisco
Javier; Urrutia Galicia, Jorge (2014) Conference report Open Access
We study the cyclic sequences induced at infinity by pairwise-disjoint colored rays with apices on a given balanced bichromatic point set, where the color of a ray is inherited from the color of
its apex. We derive a lower ...
• Fabila Monroy, Ruy; Garcia Olaverri, Alfredo Martin; Hurtado Díaz, Fernando Alfredo; Jaume, Rafel; Pérez Lantero, Pablo; Saumell, Maria; Silveira, Rodrigo Ignacio; Tejel Altarriba, Francisco
Javier; Urrutia Galicia, Jorge (2018-05) Article Open Access
We study the cyclic color sequences induced at infinity by colored rays with apices being a given balanced finite bichromatic point set. We first study the case in which the rays are required to
be pairwise disjoint. We ...
• Garcia Olaverri, Alfredo Martin; Hurtado Díaz, Fernando Alfredo; Tejel Altarriba, Francisco Javier; Urrutia Galicia, Jorge (2016-04) Article Open Access
Let S be a set of n points in the plane and let R be a set of n pairwise non-crossing rays, each with an apex at a different point of S. Two sets of non-crossing rays R1R1 and R2R2 are considered
to be different if the ...
• Alegría Galicia, Carlos; Orden Martin, David; Seara Ojea, Carlos; Urrutia Galicia, Jorge (Springer Nature, 2021-03) Article Open Access
Let P be a set of n points in the plane. We compute the value of θ ∈ [0, 2π) for which the rectilinear convex hull of P, denoted by RH_P(θ), has minimum (or maximum) area in optimal O(n log n) time and
O(n) space, improving the ...
• Aichholzer, Oswin; Fabila Monroy, Ruy; Hackl, Thomas; Huemer, Clemens; Urrutia Galicia, Jorge (2014-03-01) Article Open Access
Let S be a k-colored (finite) set of n points in R^d, d ≥ 3, in general position, that is, no (d+1) points of S lie in a common (d-1)-dimensional hyperplane. We count the number
of empty monochromatic ...
• Aichholzer, Oswin; Fabila Monroy, Ruy; Flores Peñaloza, David; Hackl, Thomas; Huemer, Clemens; Urrutia Galicia, Jorge; Vogtenhuber, Birgit (2009) Conference report Open Access
We study a generalization of the classical problem of illumination of polygons. Instead of modeling a light source we model a wireless device whose radio signal can penetrate a given number k of
walls. We call these ...
• Aichholzer, Oswin; Fabila Monroy, Ruy; Gonzalez Aguilar, Hernan; Hackl, Thomas; Heredia, Marco A.; Huemer, Clemens; Urrutia Galicia, Jorge; Valtr, Pavel; Vogtenhuber, Birgit (2015-08-01) Article
Open Access
We consider a variation of the classical Erdos-Szekeres problems on the existence and number of convex k-gons and k-holes (empty k-gons) in a set of n points in the plane. Allowing the k-gons to
be non-convex, we show ...
• Alegría Galicia, Carlos; Orden, David; Seara Ojea, Carlos; Urrutia Galicia, Jorge (2018-03-01) Article Open Access
We study the Oβ-hull of a planar point set, a generalization of the Orthogonal Convex Hull where the coordinate axes form an angle β. Given a set P of n points in the plane, we show how to
maintain the Oβ-hull of P while ...
• Aichholzer, Oswin; Fabila-Monroy, Ruy; Hurtado Díaz, Fernando Alfredo; Pérez Lantero, Pablo; Ruiz Vargas, Andrés; Urrutia Galicia, Jorge; Vogtenhuber, Birgit (2014) Conference report Restricted
access - publisher's policy
We consider sets L = {l1, ..., ln} of n labeled lines in general position in R^3, and study the order types of point sets {p1, ..., pn} that stem from the intersections of the lines in L with
(directed) planes Π, not ...
• Pérez Lantero, Pablo; Seara Ojea, Carlos; Urrutia Galicia, Jorge (Springer, 2024-05-04) Article Open Access
Let P be a set of n points in R^3 in general position, and let RCH(P) be the rectilinear convex hull of P. In this paper we obtain an optimal O(n log n) time and O(n) space algorithm to compute
RCH(P). We also obtain an ...
• Alegría Galicia, Carlos; Orden Martin, David; Seara Ojea, Carlos; Urrutia Galicia, Jorge (2023-04-01) Article Open Access
We explore the separability of point sets in the plane by a restricted-orientation convex hull, which is an orientation-dependent, possibly disconnected, and non-convex enclosing shape that
generalizes the convex hull. Let ...
• Cano, Javier; Hurtado Díaz, Fernando Alfredo; Urrutia Galicia, Jorge (Universidad de Sevilla, 2013) Conference report Open Access
Let S be a set of n points in R^d in general position. A set H of k-flats is called an mk-stabber of S if the relative interior of any m-simplex with vertices in S is intersected by at least one
element of H. In this paper we ...
Bessel function - Wikiquote
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are the canonical solutions y(x) of Bessel's differential equation
${\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+(x^{2}-\alpha ^{2})y=0}$
for an arbitrary complex number α (the order of the Bessel function). Although α and −α produce the same differential equation for real α, it is conventional to define different Bessel functions for
these two values in such a way that the Bessel functions are mostly smooth functions of α.
The most important cases are for α an integer or half-integer. Bessel functions for integer α are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to
Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α are obtained when the Helmholtz equation is solved in spherical coordinates.
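Numerically, J_n(x) for integer n ≥ 0 can be approximated directly from its power series, J_n(x) = Σ_{m≥0} (-1)^m / (m! (m+n)!) (x/2)^{2m+n}. The following is an illustrative sketch, not a library-quality routine:

```typescript
// Truncated power series for the Bessel function J_n(x), integer n >= 0.
function besselJ(n: number, x: number, terms = 40): number {
  let sum = 0;
  for (let m = 0; m < terms; m++) {
    let term = Math.pow(-1, m) * Math.pow(x / 2, 2 * m + n);
    // Divide by m! and (m + n)! incrementally to avoid overflow.
    for (let k = 2; k <= m; k++) term /= k;
    for (let k = 2; k <= m + n; k++) term /= k;
    sum += term;
  }
  return sum;
}
```

The truncated series converges quickly for moderate x; for large x one would switch to asymptotic expansions instead.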
• When I see equations, I see the letters in colors – I don’t know why. As I’m talking, I see vague pictures of Bessel functions from Jahnke and Emde’s book, with light-tan j’s, slightly
violet-bluish n’s, and dark brown x’s flying around. And I wonder what the hell it must look like to the students.
□ Richard Feynman, What Do You Care What Other People Think? (1988), It’s as Simple as One, Two, Three…
• I remember... at a Board meeting at Cambridge, the subject of Bessel's functions came into the discussion... to include them in an examination syllabus. Their utility in connection with Applied
Mathematics having been referred to, a very great Pure Mathematician who was present ejaculated—"Yes, Bessel's functions are very beautiful functions, in spite of their having practical
applications." It would have been interesting to have heard what this great man would have said if he had known that Professor Perry would one day propose the desecration of these beautiful
functions by recommending them as suitable playthings for young boys.
• [To use spherical harmonics or Bessel functions is] to be able to start in mathematics in Cambridge just about the place where some of the best mathematical men now end their studies forever.
Sphere partition function of Calabi-Yau GLSMs
The sphere partition function of Calabi-Yau gauged linear sigma models (GLSMs) has been shown to compute the exact Kaehler potential of the Kaehler moduli space of a Calabi-Yau. We propose a
universal expression for the sphere partition function evaluated in hybrid phases of Calabi-Yau GLSMs that are fibrations of Landau-Ginzburg...
Top Most Asked Angular Interview Questions and Answers
1) What is Angular? / What do you know about Angular?
Angular is one of the most popular JavaScript frameworks developed and maintained by Google. It is an open-source front-end web framework based on TypeScript. It is most suited for developing
enterprise web applications because its code is reusable and maintainable.
2) What are some powerful features integrated into Angular?
Angular integrates some powerful features like declarative templates, end to end tooling, dependency injection and various other best practices that smoothens the development path.
3) What is the main purpose of Angular?
The main purpose of using Angular is to create fast, dynamic and scalable web applications. We can create these applications very easily with Angular using components and directives.
Angular started as a SPA (Single-Page Application) framework, and now it supports dynamic content for different users through dependency injection. It provides a platform for easy
development of web-based applications and empowers front-end developers to build cross-platform applications. YouTube TV is a well-known example of an application that uses Angular.
4) What is the difference between AngularJS and Angular?
Let's compare the features of AngularJS and Angular:

• Version: AngularJS was the very first version, initially released in 2010. It was a browser-side JavaScript framework used within HTML code and created a revolution in web application development. It is popularly known as AngularJS. The later versions were a complete rewrite of AngularJS; for example, Angular 2 was initially released in 2016. There is nothing common between Angular 2 and AngularJS except the core developer team. After that, Angular 6, 7, 8, 9, and 10 were released, which are very similar to each other. These later versions are known as Angular.
• Architecture: AngularJS supports the MVC design model, while Angular uses components and directives.
• Supported Language: The recommended and supported language of AngularJS is JavaScript; for Angular it is TypeScript.
• Expression Syntax: In AngularJS, a specific ng directive is required to bind a property or an event, whereas Angular uses () for event binding and [] for property binding.
• Mobile Support: AngularJS doesn't provide any mobile support; Angular does.
• Dependency Injection: There is no concept of dependency injection in AngularJS, while Angular supports hierarchical dependency injection with unidirectional tree-based change detection.
• Routing: In AngularJS, $routeprovider.when() is used for routing configs; in Angular, @RouteConfig{(?)} is used for the routing config.
• Structure: AngularJS is the first and basic version, so it is very easy to manage; Angular has a very simplified structure that makes the development and maintenance of large applications very easy.
• Speed: AngularJS is slower because of its limited features; Angular is faster because of its upgraded features.
• Support: AngularJS no longer receives support or new updates, while Angular is actively supported, with frequent new updates.
5) What are the biggest advantages of using Angular?
Following is the list of the biggest advantages of using the Angular framework:
• Angular supports two-way data binding.
• It follows the MVC architecture pattern.
• It supports both static templates and Angular templates.
• It facilitates you to add custom directives.
• It also supports RESTful services.
• Validations are supported in Angular.
• Angular provides client-server communication.
• It provides support for dependency injection.
• It provides powerful features like event handlers, animation, etc.
6) What do you understand by Angular expressions? How are Angular expressions different from JavaScript expressions?
Angular expressions are code snippets that are used to bind application data to HTML. Angular resolves the expressions, and the result is returned to where the expression is written. Angular
expressions are usually written in double braces: {{ expression }} similar to JavaScript.
{{ expression }}
Following is a list of some differences between Angular expressions and JavaScript expressions:
1. The most crucial difference between Angular expressions and JavaScript expressions is that Angular expressions allow us to write JavaScript in HTML, whereas JavaScript expressions do not.
2. The Angular expressions are evaluated against a local scope object. On the other hand, the JavaScript expressions are evaluated against the global window object. We can understand it better with
an example. Suppose we have a component named test:
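The component snippet referred to here did not survive extraction; a minimal reconstruction, consistent with the surrounding text (the message property is the one the text mentions), could look like:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'test',
  template: '<p>{{ message }}</p>'
})
export class TestComponent {
  message = 'Hello World';
}
```

Inside this template, {{ message }} is evaluated against TestComponent's own scope.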
In the above example, we can see that the Angular expression is used to display the message property. In the present template, we are using Angular expressions, so we cannot access a property outside
its local scope (in this case, TestComponent). This proves that Angular expressions are always evaluated based on the scope object rather than the global object.
3. The Angular expressions can handle null and undefined, whereas JavaScript expressions cannot.
See the following JavaScript example:
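The JavaScript snippet was lost in extraction; a minimal sketch of the idea (the object and property names are assumed) is:

```typescript
const obj: { message?: string } = {};
console.log(obj.message); // prints: undefined
```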
After running the above code, you see undefined displayed on the screen. Although it's not ideal to leave any property undefined, the user does not need to see this.
Now see the following Angular example:
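This snippet is also missing; a reconstruction consistent with the point being made (property name assumed) is the same component with the property left unassigned, which Angular renders as an empty string:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'test',
  template: '<p>{{ message }}</p>' // message is never assigned
})
export class TestComponent {
  message?: string;
}
```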
In the above example, you will not see undefined being displayed on the screen.
4. In Angular expressions, we cannot use loops, conditionals, and exceptions. The difference which makes Angular expressions quite beneficial is the use of pipes. Angular uses pipes (known as filters
in AngularJS) to format data before displaying it.
See this example:
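The example did not survive extraction; an illustrative template using the built-in lowercase pipe (the input string is assumed) might be:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'test',
  template: `<p>{{ 'HELLO WORLD' | lowercase }}</p>` // renders: hello world
})
export class TestComponent { }
```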
In the above example, we have used a predefined pipe called lowercase, which transforms all the letters to lowercase. If you run the example, you will see the output displayed entirely in lowercase.
On the other hand, JavaScript does not have the concept of pipes.
7) What are templates in Angular?
In Angular, templates contain Angular-specific elements and attributes. These are written with HTML and combined with information coming from the model and controller, which are further rendered to
provide the user's dynamic view.
8) What is the difference between an Annotation and a Decorator in Angular?
In Angular, annotations are the "only" metadata set of the class using the Reflect Metadata library. They are used to create an "annotation" array. On the other hand, decorators are the design
patterns used for separating decoration or modification of a class without actually altering the original source code.
9) Why was Angular introduced as a client-side framework?
Before the introduction of Angular, web developers used VanillaJS and jQuery to develop dynamic websites. Later, when websites became more complex with added features and functionality, it was
hard for them to maintain the code. Along with this, jQuery offered no facilities for handling data across views. There was an obvious need for a client-side framework like Angular
that could make life easier for developers by handling separation of concerns and dividing code into smaller bits of information (components).
Client-side frameworks like Angular facilitate developers to develop advanced web applications like Single-Page-Application. These applications can also be developed using VanillaJS, but the
development process becomes slower by doing so.
10) How does an Angular application work?
Every Angular app contains a file named angular.json. This file contains all the configurations of the app. While building the app, the builder looks at this file to find the application's entry
point. See the structure of the angular.json file:
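The file listing was lost in extraction; a trimmed sketch of the relevant part of angular.json (the project name is hypothetical) is:

```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "options": {
            "main": "src/main.ts"
          }
        }
      }
    }
  }
}
```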
When the application enters the build section, the options object's main property defines the entry point of the application. The application's entry point is main.ts, which creates a browser
environment for the application to run and calls a function called bootstrapModule, which bootstraps the application.
These two steps are performed in the following order inside the main.ts file:
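The listing was lost in extraction; a typical Angular CLI main.ts looks like:

```typescript
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic().bootstrapModule(AppModule)
  .catch(err => console.error(err));
```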
In the above line of code, AppModule is getting bootstrapped.
The AppModule is declared in the app.module.ts file. This module contains declarations of all the components.
Below is an example of app.module.ts file:
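The listing was lost; a typical CLI-generated app.module.ts is:

```typescript
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule],
  providers: [],
  bootstrap: [AppComponent] // AppComponent is bootstrapped here
})
export class AppModule { }
```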
In the above file, you can see that AppComponent is getting bootstrapped. It is defined in app.component.ts file. This file interacts with the webpage and serves data to it.
Below is an example of app.component.ts file:
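Again, the listing was lost; a typical CLI-generated app.component.ts (the title value is assumed) is:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'my-app';
}
```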
Each component is declared with three properties:
1. Selector - It is used to access the component.
2. Template/TemplateURL - It contains HTML of the component.
3. StylesURL - It contains component-specific stylesheets.
Now, Angular calls the index.html file. This file consequently calls the root component that is app-root. The root component is defined in app.component.ts.
See how the index.html file looks like:
The HTML template of the root component is displayed inside the <app-root> tags. This is how every Angular application works.
11) Why is Angular preferred over other frameworks? / What are some advantages of Angular over other frameworks?
Due to the following features, Angular is preferred over other frameworks:
Extraordinary Built-in Features: Angular provides several out of the box built-in features like routing, state management, RxJS library, Dependency Injection, HTTP services, etc. That's why the
developers do not need to look for the above-stated features separately.
Declarative UI: Angular has declarative UI. It uses HTML to render the UI of an application as it is a declarative language. It is much easier to use than JavaScript.
Long-term Google Support: Angular is developed and maintained by Google. Google has a long term plan to stick with Angular and provide support.
12) What are the different Lifecycle hooks of Angular? Explain them in short.
When Angular components are created, they enter their lifecycle and remain in it until they are destroyed. Angular lifecycle hooks are used to check the phases and trigger changes at specific phases
during the entire duration.
ngOnChanges( ): This method is called when one or more input properties of the component are changed. The hook receives a SimpleChanges object containing the previous and current values of the changed properties.
ngOnInit( ): This is the second lifecycle hook. It is called once, after the first ngOnChanges hook. It is used to initialize the component after Angular has set its input properties.
ngDoCheck( ): This hook is called after ngOnChanges and ngOnInit and is used to detect and act on changes that Angular cannot detect. In this hook, we can implement our change detection algorithm.
ngAfterContentInit( ): This hook is called after the first ngDoCheck hook. This hook responds after the content gets projected inside the component.
ngAfterContentChecked( ): This hook is called after ngAfterContentInit and every subsequent ngDoCheck. It responds after the projected content is checked.
ngAfterViewInit( ): This hook is called after Angular initializes a component's view and its child components' views.
ngAfterViewChecked( ): This hook is called after ngAfterViewInit. It responds after the component's view, and its child components' views, have been checked.
ngOnDestroy( ): This hook is called just before Angular destroys the component. This is used to clean up the code and detach event handlers.
Of the hooks described above, the ngOnInit hook is used most often. Let's see how to use it. If you have to process a lot of data during component creation, it's
better to do it inside the ngOnInit hook rather than the constructor:
See the example:
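The example was lost in extraction; a sketch consistent with the surrounding description (the component name and data are invented) is:

```typescript
import { Component, OnInit } from '@angular/core';

@Component({
  selector: 'app-demo',
  template: '<p>{{ items.length }} items loaded</p>'
})
export class DemoComponent implements OnInit {
  items: number[] = [];

  ngOnInit(): void {
    // Heavy initialization work belongs here, not in the constructor.
    this.items = Array.from({ length: 1000 }, (_, i) => i);
  }
}
```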
In the above code, you can see that we have imported the OnInit interface and implemented its ngOnInit method. The rest of the hooks can be used in the same way.
13) What is AOT in Angular?
In Angular, AOT stands for Ahead-Of-Time compiler. It is used to convert your Angular HTML and TypeScript code into efficient JavaScript code during the build phase, before the browser downloads and
runs that code. Compiling the application during the build process provides faster rendering in the browser.
14) What is the reason for using the AOT compiler in Angular?
An Angular application is made of several components and their HTML templates. Browsers cannot understand these components and templates directly, so Angular applications require a compilation
process before they can run in a browser. That is why the AOT compiler is required.
15) What are the biggest advantages of AOT in Angular?
Following are the advantages of using the AOT compiler in Angular:
The rendering is faster: When we use the AOT compiler, the browser gets a pre-compiled version of the application to download. Here, the browser loads executable code to render the application
immediately, without waiting to compile the app first.
The Angular framework's download size is smaller: With AOT, there is no need to download the Angular compiler if the app is already compiled. The compiler is roughly half of Angular itself, so omitting
it dramatically reduces the application payload.
Fewer asynchronous requests: The compiler inlines external HTML templates and CSS style sheets within the application JavaScript, eliminating separate AJAX requests for those source files.
Detect template errors earlier: While using the AOT compiler, developers can easily detect and report template binding errors during the build step before users can see them.
Better security: AOT provides better security because it compiles HTML templates and components into JavaScript files before they are served to the client. With no templates to read and
no risky client-side HTML or JavaScript evaluation, the chances of injection attacks are greatly reduced.
16) What is JIT in Angular?
In Angular, JIT stands for Just-in-Time compiler. The JIT compiler performs run-time compilation: the code is compiled in the browser during the execution of the program rather than before
execution.
17) What is the main difference between JIT and AOT in Angular?
Following are the main differences between JIT and AOT compiler in Angular:
• Just-in-Time (JIT) compiler compiles our app in the browser at run-time while Ahead-of-Time (AOT) compiler is used to compile your app at build time on the server.
• The JIT compilation runs by default when you run the ng build (build only), or ng serve (build and serve locally) CLI commands. This is used for development. On the other hand, we have to include
the --aot option with the ng build or ng serve command for AOT compilation.
• JIT and AOT are both two ways used to compile code in an Angular project. JIT compiler is used in development mode while AOT is used for production mode.
• JIT is easy to use. We can easily implement features and debug in JIT mode because here we have a map file while AOT does not. On the other hand, the biggest advantage of using AOT for production
is that it reduces the bundle size for faster rendering.
18) What is the concept of scope hierarchy in Angular?
Angular provides the $scope objects into a hierarchy that is typically used by views. This is called the scope hierarchy in Angular. It has a root scope that can further contain one or several scopes
called child scopes.
In a scope hierarchy, each view has its own $scope. Hence, the variables set by a view's view controller will remain hidden to other view controllers.
Following is the typical representation of a Scope Hierarchy:
19) What are the main building blocks of an Angular application? Explain with the pictorial diagram of Angular architecture.
Following are the main building blocks of an Angular application: Modules, Components, Templates, Metadata, Data binding, Directives, Services, and Dependency injection.
20) What is the difference between Observables and Promises in Angular?
In Angular, as soon as we make a promise, the execution takes place, but this is not the case with observables because they are lazy. It means nothing happens until a subscription is made.
Promise | Observable
Emits a single value. | Emits multiple values over a period of time.
Not lazy: execution starts as soon as it is created. | Lazy: an observable does not run until we subscribe to it.
Cannot be cancelled. | Can be cancelled by using the unsubscribe() method.
Does not provide operators. | Provides operators like map, forEach, filter, reduce, retry, retryWhen, etc.
Let's understand it by an example:
When you run the above Observable, you can see the following messages displayed in the following order:
Here, you can see that observables are lazy. Observable runs only when someone subscribes to them. That's why the message "Before subscribing an Observable" is displayed ahead of the message inside
the observable.
Now see the example of a Promise:
When you run the above Promise, you will see the messages displayed in the following order:
Here, you can see that the message inside the Promise is displayed first. This means that the Promise executes immediately when it is created, before its then() callback is attached.
The next difference between them is that Promises are always asynchronous; even when the Promise is immediately resolved. On the other hand, an Observable can be both synchronous and asynchronous.
In the case of the above example, observable is synchronous. Let's see the case where an observable can be asynchronous:
When you run the above observable, you will see the messages in the following order:
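The missing examples above can be reconstructed as a single plain-TypeScript sketch. MiniObservable is a hand-rolled stand-in for an RxJS Observable (not the real API), used to show both the laziness and the ordering differences:

```typescript
// Minimal stand-in for an RxJS-style Observable (lazy: runs only on subscribe).
class MiniObservable<T> {
  constructor(private producer: (next: (value: T) => void) => void) {}
  subscribe(next: (value: T) => void): void {
    this.producer(next);
  }
}

const log: string[] = [];

// Creating the observable does NOT run its body.
const obs = new MiniObservable<number>((next) => {
  log.push("inside observable");
  next(42);
});
log.push("before subscribing");
obs.subscribe((v) => log.push(`observable value: ${v}`));

// The Promise executor, in contrast, runs immediately at creation.
const promise = new Promise<number>((resolve) => {
  log.push("inside promise");
  resolve(42);
});
log.push("after creating promise");
promise.then((v) => log.push(`promise value: ${v}`));

console.log(log.join(" | "));
```

The log shows that the observable's body runs only at subscribe time and delivers its value synchronously, while the promise executor runs at creation but its then() callback is always deferred to a later microtask.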
22) What are directives in Angular?
A directive is a class in Angular that is declared with a @Directive decorator. Every directive has its own behavior, and you can import them into various components of an application.
23) What were the main reasons behind introducing client-side frameworks like Angular?
Before Angular was introduced, the web developers used VanillaJS and jQuery to develop dynamic websites, but the biggest drawback of these technologies is that as the logic of the website grew, the
code became more and more complex to maintain. For websites and applications that use complex logic, developers had to put in extra effort to maintain the separation of concerns for the app. Also,
jQuery did not provide facilities for data handling across views.
The client-side frameworks like Angular were introduced to overcome the above problems. They provide developers many benefits over VanillaJS and jQuery by providing a new feature called components for
handling separation of concerns and dividing code into smaller bits of information.
Client-side frameworks such as Angular facilitate developers to develop advanced web applications like Single-Page-Applications. So, the main reasons behind introducing Angular were to create fast,
dynamic, and scalable web applications easily.
Note: We can also develop dynamic websites and SPAs (Single Page Applications) using VanillaJS, and jQuery but by doing so, the development process becomes slower.
24) What is Angular CLI?
Angular CLI is short for Angular Command Line Interface. It is a command-line interface to scaffold and build Angular apps using Node.js-style modules.
To use Angular CLI, install it with the following npm command: npm install -g @angular/cli
Following is a list of some useful commands which would be very helpful while creating angular projects:
• Creating New Project: ng new
• Generating Components, Directives & Services: ng generate/g
• Running the Project: ng serve
25) What is lazy loading in Angular?
Lazy loading is one of the most powerful and useful concepts of Angular Routing. It makes the web pages easy to download by downloading them in chunks instead of downloading everything in a big
bundle. Lazy loading facilitates asynchronously loading the feature module for routing whenever required using the property loadChildren.
See the following example where we are going to load both Employee and Order feature modules lazily.
See the example:
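A minimal sketch of such a route table follows. The module loaders below are placeholders standing in for the real `() => import('./employee/employee.module').then(m => m.EmployeeModule)` style loaders, so that the snippet stays self-contained:

```typescript
// Hypothetical lazy route table sketch (not Angular's real Route type).
interface LazyRoute {
  path: string;
  loadChildren: () => Promise<unknown>;
}

const routes: LazyRoute[] = [
  { path: "employee", loadChildren: async () => ({ name: "EmployeeModule" }) },
  { path: "order", loadChildren: async () => ({ name: "OrderModule" }) },
];

// Nothing is loaded until the router actually matches a path:
routes
  .find((r) => r.path === "order")!
  .loadChildren()
  .then((m) => console.log("loaded", m));
```

The key point is that loadChildren is a function: the chunk behind it is only fetched when the route is first activated.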
26) What is Angular Router?
Angular Router is a mechanism that facilitates users to navigate from one view to the next as users perform application tasks. It follows the concept model of browser's application navigation.
27) What do you understand by the router imports?
The Angular Router, representing a particular component view for a given URL, is not part of Angular Core. It is available in a library named @angular/router, and we have to import the required
router components. This process is called router imports.
See the following example of how we can import them in the app module:
28) What do you understand by RouterOutlet and RouterLink?
A RouterOutlet is a directive from the router library that acts as a placeholder. It marks the spot in the template where the Router should display the components for that outlet. Router outlet is
used as a component.
On the other hand, a RouterLink is a directive on the anchor tags that gives the router control over those elements. Since the navigation paths are fixed, you can assign string values to router-link
directive as below,
29) What are the different router events used in Angular Router?
During each navigation, the Router emits navigation events through the Router.events property. It allows us to track the lifecycle of the route.
Following is the list of different router events in sequence:
• NavigationStart
• RouteConfigLoadStart
• RouteConfigLoadEnd
• RoutesRecognized
• GuardsCheckStart
• ChildActivationStart
• ActivationStart
• GuardsCheckEnd
• ResolveStart
• ResolveEnd
• ActivationEnd
• ChildActivationEnd
• NavigationEnd
• NavigationCancel
• NavigationError
30) What do you understand by the RouterLinkActive?
The RouterLinkActive is a directive used to toggle CSS classes for active RouterLink bindings based on the current RouterState. i.e., the Router will add CSS classes when this link is active and
remove them when the link is inactive.
For example, you can add them to RouterLinks as follows:
31) What do you understand by the RouterState?
The RouterState is a tree of activated routes. Every node in this tree knows about the "consumed" URL segments, the extracted parameters, and the resolved data. We can access the current RouterState
from anywhere in the application by using the Router service and the routerState property.
32) What is HttpClient, and what are the advantages of it?
Most front-end applications use either XMLHttpRequest interface or the fetch() API to communicate with backend services over HTTP protocol. For the same purpose, Angular provides a simplified client
HTTP API known as HttpClient. This is based on top of XMLHttpRequest interface. This HttpClient is available in the @angular/common/http package, which you can import in your root module as follows:
Following are some of the crucial advantages of HttpClient:
• HttpClient contains testability features.
• It provides typed request and response objects.
• It can intercept requests and responses.
• It supports Observable APIs.
• HttpClient also supports streamlined error handling.
33) By default, Angular uses client-side rendering for its applications. Is it possible to make an Angular application render on the server-side?
Yes, it is possible to make an Angular application render on the server-side. Angular provides a technology called Angular Universal that can be used to render applications on the server-side.
The crucial advantages of using Angular Universal are as follows:
• Making an Angular application render on the server-side can provide a better user experience. By using this, first-time users can instantly see a view of the application. So, it can be used to
provide better UI.
• It can lead to a better SEO for your application. The reason is that many search engines expect pages in plain HTML. So, Angular Universal can ensure that your content is available on every
search engine, and it is good for better SEO.
• The server-side rendered applications load faster than normal pages. It is because the rendered pages are available to the browser sooner.
34) What is the best way to perform Error handling in Angular?
An error occurs when a request fails on the server or fails to reach the server due to network issues. In this situation, HttpClient returns an error object instead of a successful response. To
handle it, we pass an error callback as the second argument to the subscribe() method in the component.
See the following example to understand how we handle in the component:
You can write an error message to give the user some meaningful feedback instead of displaying the raw error object returned from HttpClient.
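As a hedged illustration of that callback shape, here is a plain-TypeScript sketch. FakeHttp is a made-up stand-in, not Angular's real HttpClient, and the URLs are placeholders:

```typescript
// Minimal sketch of subscribe(next, error): not real HttpClient, just the shape.
type Next<T> = (value: T) => void;
type ErrorHandler = (err: Error) => void;

class FakeHttp {
  get(url: string): { subscribe: (next: Next<string>, error: ErrorHandler) => void } {
    return {
      subscribe: (next, error) => {
        // Pretend only https requests succeed, to exercise the error path.
        if (url.startsWith("https://")) next(`response from ${url}`);
        else error(new Error(`request to ${url} failed`));
      },
    };
  }
}

const http = new FakeHttp();
let message = "";
http.get("http://insecure.example").subscribe(
  (data) => (message = data),
  // Give the user meaningful feedback instead of the raw error object.
  (err) => (message = `Something went wrong: ${err.message}`)
);
console.log(message);
```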
35) What do you understand by Angular bootstrapping?
Angular bootstrapping is the process of initializing or starting an Angular application. Angular supports two types of bootstrapping:
• Manual bootstrapping
• Automatic bootstrapping
Manual bootstrapping: Manual bootstrapping provides more control to developers and facilitates them regarding how and when they need to initialize the Angular app. It is useful when professionals
wish to perform other tasks and operations before Angular compiles the page.
Automatic bootstrapping: As the name specifies, automatic bootstrapping is started automatically to start the Angular app. The developers need to add the ng-app directive to the application's root if
they want Angular to bootstrap the application automatically. Angular loads the associated module once it finds the ng-app directive and, further, compiles the DOM.
36) What is the digest cycle process in Angular?
The digest cycle process in Angular is the process that is used to monitor the watchlist to track changes in the watch variable value. There is a comparison between the present and the previous
versions of the scope model values in each digest cycle.
37) What are the key differences between a Component and a Directive in Angular?
A Component is a directive that uses shadow DOM to create encapsulated visual behavior. Usually, components are used to create UI widgets by breaking up the application into smaller parts. In short,
we can say that a component (@component) is a directive-with-a-template.
A list of the major differences between a Component and a Directive in Angular:
Component Directive
Components are generally used for creating UI widgets. Directives are generally used for adding behavior to an existing DOM element.
We use @Component meta-data annotation attributes to register a component. We use @Directive meta-data annotation attributes to register directives.
It is used to break up the application into smaller parts called components. It is used to design re-usable components.
Only one component is allowed to be used per DOM element. Multiple directives are allowed to be used per DOM element.
@View decorator or templateUrl/template is mandatory in a component. A Directive doesn't use a view.
A component is used to define pipes. In a directive, it is not possible to define Pipes.
38) What do you understand by Angular MVVM architecture?
The MVVM architecture or Model-View-ViewModel architecture is a software architectural pattern that provides a facility to developers to separate the development of the graphical user interface (the
View) from the development of the business logic or back-end logic (the Model). By using this architecture, the view is not dependent on any specific model platform.
The Angular MVVM architecture consists of the following three parts:
• Model
• View
• ViewModel
Model: The Model consists of the structure of an entity and specifies the approach. In simple words, we can say that the model contains data of an object.
View: The View is the visual layer of the application. It specifies the structure, layout, and appearance of what a user sees on the screen. It displays the data inside the Model, represents the
model, and receives the user's interaction with the view in the form of mouse clicks, keyboard input, screen tap gestures, etc., and forwards these to the ViewModel via the data binding properties.
In Angular terms, the View contains the HTML template of a component.
ViewModel: The ViewModel is an abstract layer of the application. It is used to handle the logic of the application. It also manages the data of a model and displays it in the view. View and
ViewModel are connected with two-way data-binding. If you make any changes in the view, the ViewModel takes a note and changes the appropriate data inside the model.
39) What is the purpose of AsyncPipe in Angular?
The AsyncPipe is used to subscribe to an observable or promise and return the latest value it has emitted. When a new value is emitted, the pipe marks the component to be checked for changes.
See the following example where a time observable continuously updates the view every 2 seconds with the current time.
40) What do you understand by services in Angular?
In Angular, services are singleton objects that get instantiated only once during the lifetime of an application. An Angular service contains methods that are used to maintain the data throughout the
life of an application. Angular services are used to organize as well as share business logic, models, or data and functions with various components of an Angular application.
Angular services offer some functions that can be invoked from an Angular component, such as a controller or directive.
41) What is the key difference between a constructor and ngOnInit?
The constructor is a default method of a TypeScript class, normally used for initialization. On the other hand, ngOnInit is specifically an Angular lifecycle hook, used to set up
Angular bindings. Even though the constructor is called first, it is preferred to move all of your Angular bindings to the ngOnInit method.
See the following example how we can use ngOnInit by implementing OnInit interface as follows:
42) What do you understand by observable and observer in Angular?
Observable: An observable is a unique object, similar to a promise, that is used to manage asynchronous code. Observables are not part of the JavaScript language, so developers rely on a
popular Observable library called RxJS. Observables are created using the new keyword.
See a simple example of observable to understand it better:
Observer: Any object that has to be notified when the state of another object changes is called an observer. An observer is an interface for push-based notifications delivered by an Observable.
See the structure of an observer:
The handler that implements the observer interface for receiving observable notifications is passed as a parameter for observable as follows:
Note: If you don't use a handler for a notification type, the observer ignores notifications of that type.
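The observer structure mentioned above can be sketched as a plain TypeScript interface mirroring RxJS's shape (next is required here for simplicity; error and complete are optional):

```typescript
// The observer contract: three callbacks for the three notification types.
interface Observer<T> {
  next: (value: T) => void;
  error?: (err: unknown) => void;
  complete?: () => void;
}

const received: number[] = [];
let done = false;

const observer: Observer<number> = {
  next: (v) => received.push(v),
  error: (e) => console.error("error:", e),
  complete: () => {
    done = true;
  },
};

// A producer pushing notifications into the observer:
[1, 2, 3].forEach((v) => observer.next(v));
observer.complete?.();
console.log(received, done);
```

If a notification type has no handler (e.g. error is omitted), notifications of that type are simply ignored, matching the note above.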
43) How do you categorize data binding types in Angular?
In Angular, we can categorize data binding types in three categories distinguished by the direction of data flow. These data binding categories are:
• From the source-to-view
• From view-to-source
• View-to-source-to-view
Let's see their possible binding syntax:
Data direction | Syntax | Type
From source-to-view (one-way) | {{expression}}, [target]="expression", bind-target="expression" | Interpolation, Property, Attribute, Class, Style
From view-to-source (one-way) | (target)="statement", on-target="statement" | Event
View-to-source-to-view (two-way) | [(target)]="expression", bindon-target="expression" | Two-way data binding
44) What is multicasting in Angular?
Multicasting or Multi-casting is the practice of broadcasting to a list of multiple subscribers in a single execution.
Let's take a simple example to demonstrate the multi-casting feature:
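A minimal multicasting sketch, using a hand-rolled MiniSubject instead of RxJS's Subject, could look like this:

```typescript
// Minimal Subject: one execution broadcast to every registered subscriber.
class MiniSubject<T> {
  private subscribers: Array<(value: T) => void> = [];

  subscribe(fn: (value: T) => void): void {
    this.subscribers.push(fn);
  }

  next(value: T): void {
    // Single emission, delivered to all subscribers (multicast).
    this.subscribers.forEach((fn) => fn(value));
  }
}

const subject = new MiniSubject<number>();
const seenA: number[] = [];
const seenB: number[] = [];
subject.subscribe((v) => seenA.push(v));
subject.subscribe((v) => seenB.push(v));
subject.next(7); // one call, two deliveries
console.log(seenA, seenB);
```

With a plain (unicast) observable, each subscriber would trigger its own independent execution; the Subject shares one execution among all of them.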
45) What do you understand by Angular Material?
Angular Material is a UI component library used by professionals to develop consistent, attractive, and fully functional websites, web pages, and web applications. It follows modern
web design principles, such as graceful degradation and browser portability, and is capable of doing a lot of fascinating things in website and application development.
46) What is lazy loading in Angular? Why is it used?
In Angular, NgModules are eagerly loaded by default. It means that as soon as the app loads, all the NgModules are loaded, whether or not they are immediately necessary. That's why
lazy loading is required; it is practically mandatory for large apps with lots of routes. This design pattern makes the app load NgModules only when they are required. Lazy loading helps keep initial
bundle sizes smaller, which in turn helps decrease load times.
47) What is the use of Angular filters? What are its distinct types?
Filters are an essential part of Angular that helps in formatting the expression value to show it to the users. We can easily add filters to services, directives, templates, or controllers. We can
also create personalized filters as per requirements. These filters allow us to organize the data in such a way that only the data that meets the respective criteria are displayed. Filters are placed
after the pipe symbol ( | ) while used in expressions.
A list of various types of filters used in Angular:
• currency: It is used to convert numbers to the currency format.
• filter: It is used to select a subset containing items from the given array.
• date: It is used to convert a date into a necessary format.
• lowercase: It is used to convert the given string into lowercase.
• uppercase: It is used to convert the given string into uppercase.
• orderBy: It is used to arrange an array by the given expression.
• json: It is used to format any object into a JSON string.
• number: It is used to convert a numeric value into a string.
• limitTo: It is used to restrict the limit of a given string or array to a particular number of elements or strings.
48) When do we use a directive in Angular?
If you create an Angular application where multiple components need similar functionality, you would otherwise have to add that functionality individually to every component, which is not an
easy task. Directives are used to cope with this situation: we can create a directive with the required functionality and then import that directive into any component that requires it.
49) What are the different types of directives in Angular?
There are mainly three types of directives in Angular:
Component Directives: The component directives are used to form the main class in directives. To declare these directives, we have to use the @Component decorator instead of @Directive decorator.
These directives have a view, a stylesheet and a selector property.
Structural directives: These directives are generally used to manipulate DOM elements. Structural directives have a '*' sign before them. We can apply these directives to any DOM element.
Following are some example of built-in structural directives:
*ngIf Structural Directive: *ngIf is used to check a Boolean value and if it's truthy, the div element will be displayed.
*ngFor Structural Directive: *ngFor is used to iterate over a list and display each item of the list.
Attribute Directives: The attribute directives are used to change the look and behavior of a DOM element. Let's create an attribute directive to understand it well:
This is how we can create a custom directive:
Go to the command terminal, navigate to the directory of the angular app and type the following command to generate a directive:
This will generate the following directive. Manipulate the directive to look like this:
Now, you can easily apply the above directive to any DOM element:
50) What are string interpolation and property binding in Angular?
String interpolation and property binding are parts of data-binding in Angular. Data-binding is a feature of Angular, which is used to provide a way to communicate between the component (Model) and
its view (HTML template). There are two ways of data-binding, one-way data binding and two-way data binding. In Angular, data from the component can be inserted inside the HTML template. Any changes
in the component will directly reflect inside the HTML template in one-way binding, but vice-versa is not possible. On the other hand, it is possible in two-way binding.
String interpolation and property binding both are examples of one-way data binding. They allow only one-way data binding.
String Interpolation: String interpolation uses the double curly braces {{ }} to display data from the component. Angular automatically runs the expression written inside the curly braces. For
example, {{ 5+5 }} will be evaluated by Angular, and the output will be 10. This output will be displayed inside the HTML template.
Property Binding: Property binding is used to bind the DOM properties of an HTML element to a component's property. In property binding, we use the square brackets [ ] syntax.
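As a rough illustration only (Angular's real template compiler is far stricter and does not use Function evaluation), a naive interpolation function shows the "evaluate the expression inside {{ }}" idea:

```typescript
// Naive interpolation sketch: replaces {{ ... }} with the evaluated expression.
// This is NOT how Angular works internally; it only illustrates the concept.
function interpolate(template: string, scope: Record<string, number>): string {
  return template.replace(/\{\{([^}]+)\}\}/g, (_, expr: string) => {
    // Evaluate simple "a + b" style expressions against the scope.
    const value = Function(...Object.keys(scope), `return (${expr});`)(
      ...Object.values(scope)
    );
    return String(value);
  });
}

console.log(interpolate("Result: {{ 5 + 5 }}", {}));            // Result: 10
console.log(interpolate("Total: {{ a + b }}", { a: 2, b: 3 })); // Total: 5
```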
51) Is it possible to make an Angular application render on the server-side?
Yes, we can make an Angular application render on the server-side. Angular provides a technology called Angular Universal that enables you to render applications on the server-side.
Following are the benefits of using Angular Universal:
Better User Experience: It enables users to see the view of the application instantly.
Better SEO: Angular Universal ensures that the content is available on every search engine leading to better SEO.
Loads Faster: Angular Universal makes the rendered pages available to the browser sooner, so the application loads faster.
52) What is Dependency Injection in Angular?
Dependency injection is an application design pattern implemented by Angular, and it forms one of the core concepts of Angular. Dependencies are services in Angular which have some specific
functionality. Various components and directives in an application can need these functionalities of the service. Angular provides a smooth mechanism by which these dependencies are injected into
components and directives.
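The idea can be sketched without Angular at all; the class and service names below are hypothetical, and the "injector" is reduced to a plain object:

```typescript
// Constructor injection without a framework: the dependency is passed in,
// not constructed inside the component. Angular's injector automates this wiring.
class LoggerService {
  log(msg: string): string {
    return `[log] ${msg}`;
  }
}

class HeroComponent {
  // The component declares what it needs; it never does `new LoggerService()`.
  constructor(private logger: LoggerService) {}

  greet(): string {
    return this.logger.log("hero component ready");
  }
}

// A (trivial) injector wires the dependency in one place:
const injector = { logger: new LoggerService() };
const hero = new HeroComponent(injector.logger);
console.log(hero.greet()); // [log] hero component ready
```

Because the component receives its dependency from outside, it is easy to swap the service for a mock in tests — which is the main practical payoff of DI.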
53) Can you demonstrate navigation between different routes in an Angular application?
You can demonstrate the navigation between different routes in an Angular app in the following way. See the following code to demonstrate navigation in an Angular app named "My First App."
54) What is the difference between Angular and Backbone.js?
Following are the various notable differences between Angular and Backbone.js:
Comparison | Angular | Backbone.js
Architecture | Angular works on the MVC architecture and makes use of two-way data binding for driving application activity. | Backbone.js makes use of the MVP architecture and doesn't provide any data binding process.
Type | Angular is an open-source JavaScript-based front-end web application framework that extends HTML with new attributes. | Backbone.js is a lightweight JavaScript library that uses a RESTful JSON interface and the MVP pattern.
Data Binding | Angular is a little bit complex because it uses a two-way data binding process. | Backbone.js has a simple API because it doesn't have any data binding process.
DOM | Angular's main focus is on valid HTML and dynamic elements that imitate the underlying data, rebuilding the DOM as per the specified rules and then working on the updated data records. | Backbone.js follows the direct DOM manipulation approach for representing data and application architecture changes.
Performance | Because of its two-way data binding functionality, Angular provides powerful performance for both small and large projects. | Backbone.js has quite a significant upper hand in performance over Angular on small data sets or small web pages, but it is not recommended for larger web pages or large data sets due to the absence of any data binding process.
Templating | Angular supports templating via dynamic HTML attributes, which you can add to the document to develop an easy-to-understand application at a functional level. | Backbone.js uses Underscore.js templates, which are not as fully featured as Angular templates.
Testing Approach | The testing approach is lengthy for Angular because it is preferred for building large applications. | The testing approach is completely different for Backbone.js because it is ideal for developing smaller webpages or applications; it uses unit testing.
Community Support | The Angular framework is developed and maintained by Google, so it receives great community support, and extensive documentation is available. | Backbone.js also receives a good level of community support, but its documentation covers mainly Underscore.js templates, not much else.
Quentin Ehret
I am a post-doctoral researcher in mathematics at New York University Abu Dhabi under the supervision of Prof. Sofiane Bouarroudj.
Before that, I was a PhD student in mathematics at the Université de Haute-Alsace under the supervision of Prof. Abdenacer Makhlouf.
Address: Division of Science and Mathematics, New York University Abu Dhabi, P.O. Box 129188, Abu Dhabi, United Arab Emirates.
E-mail: qe209[at]nyu[dot]edu
Here is my CV (October 2024).
Here is my page on ResearchGate.
My ORCID number is 0009-0004-2629-4518.
I speak French (mother tongue), English (professional and colloquial) and German (intermediate). I also have (very) basic notions of Arabic.
The second edition of the MAThEOR Days took place 10-12 July 2024!
In February 2024, I obtained my national scientific qualification as associate professor ("Qualification aux fonctions de Maître de Conférences", Section 25)
In October 2023, I started my post-doc with Sofiane Bouarroudj at New York University Abu Dhabi.
Scientific interests
My scientific interests lie mainly in the domain of non-associative algebras, with a focus on restricted Lie and Lie-Rinehart (super)algebras over fields of positive characteristic.
During my post-doc, we are interested in the following topics:
Restricted cohomology of restricted Lie superalgebras;
Classification problem of restricted nilpotent Lie superalgebras of small dimension;
Restricted Lie-Rinehart and Poisson (super)algebras in characteristic p=2;
Double extensions of Lie superalgebras in various settings;
Connections on Lie superalgebras in characteristic 2 and Lagrangian extensions.
During my thesis (manuscript in French), I was interested in the following topics:
Deformation theory of Lie-Rinehart superalgebras over characteristic zero fields;
Computer-aided classification of Lie-Rinehart superalgebras over the complex field;
Restricted cohomology of restricted Lie algebras over positive characteristic fields;
Restricted formal deformations of restricted Lie-Rinehart algebras over positive characteristic fields;
Representations of restricted Lie-Rinehart algebras;
Double extensions of quasi-Frobenius restricted Lie superalgebras.
Rod to Hands Converter
How to use this Rod to Hands Converter
Follow these steps to convert given length from the units of Rod to the units of Hands.
1. Enter the input Rod value in the text field.
2. The calculator converts the given Rod into Hands in real time using the conversion formula, and displays the result under the Hands label. You do not need to click any button. If the input
changes, the Hands value is re-calculated, just like that.
3. You may copy the resulting Hands value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the button present below the input field.
What is the Formula to convert Rod to Hands?
The formula to convert given length from Rod to Hands is:
Length[(Hands)] = Length[(Rod)] / 0.02020202020048081
Substitute the given value of length in rod, i.e., Length[(Rod)] in the above formula and simplify the right-hand side value. The resulting value is the length in hands, i.e., Length[(Hands)].
Calculation will be done after you enter a valid input.
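Since 1 / 0.02020202020048081 ≈ 49.5, the formula is equivalent to multiplying the rod value by 49.5 (1 rod = 16.5 ft = 198 in, and 1 hand = 4 in). A minimal sketch of the conversion, in Python for illustration:

```python
# 1 rod = 16.5 ft = 198 in, and 1 hand = 4 in, so 1 rod = 49.5 hands.
HANDS_PER_ROD = 49.5

def rods_to_hands(rods: float) -> float:
    """Convert a length given in rods to hands."""
    return rods * HANDS_PER_ROD

print(rods_to_hands(40))  # 1980.0
```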
Consider that a boundary fence is 40 rods long.
Convert this length from rods to Hands.
The length in rod is:
Length[(Rod)] = 40
The formula to convert length from rod to hands is:
Length[(Hands)] = Length[(Rod)] / 0.02020202020048081
Substitute the given length Length[(Rod)] = 40 in the above formula.
Length[(Hands)] = 40 / 0.02020202020048081
Length[(Hands)] = 1980
Final Answer:
Therefore, 40 rd is equal to 1980 hand.
The length is 1980 hand, in hands.
Consider that a farmer marks a field boundary using 25 rods.
Convert this distance from rods to Hands.
The length in rod is:
Length[(Rod)] = 25
The formula to convert length from rod to hands is:
Length[(Hands)] = Length[(Rod)] / 0.02020202020048081
Substitute the given length Length[(Rod)] = 25 in the above formula.
Length[(Hands)] = 25 / 0.02020202020048081
Length[(Hands)] = 1237.5
Final Answer:
Therefore, 25 rd is equal to 1237.5 hand.
The length is 1237.5 hand, in hands.
Rod to Hands Conversion Table
The following table gives some of the most used conversions from Rod to Hands.
Rod (rd) Hands (hand)
0 rd 0 hand
1 rd 49.5 hand
2 rd 99 hand
3 rd 148.5 hand
4 rd 198 hand
5 rd 247.5 hand
6 rd 297 hand
7 rd 346.5 hand
8 rd 396 hand
9 rd 445.5 hand
10 rd 495 hand
20 rd 990 hand
50 rd 2475 hand
100 rd 4950 hand
1000 rd 49500 hand
10000 rd 495000 hand
100000 rd 4950000 hand
A rod is a unit of length used in land measurement and surveying. One rod is equivalent to 16.5 feet or approximately 5.0292 meters.
The rod is defined as 16.5 feet, providing a measurement that is useful for various applications in land surveying, agriculture, and construction.
Rods are commonly used in tasks such as property measurement, plotting land, and agricultural practices. The unit provides a practical measurement for shorter distances and has historical
significance in land surveying.
A hand is a unit of length used primarily to measure the height of horses. One hand is equivalent to 4 inches or approximately 0.1016 meters.
The hand is defined as 4 inches, providing a standardized measurement for assessing horse height, ensuring consistency across various contexts and practices.
Hands are used in the equestrian industry to measure the height of horses, from the ground to the highest point of the withers. The unit offers a convenient and traditional method for expressing
horse height and remains in use in equestrian competitions and breed standards.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Rod to Hands in Length?
The formula to convert Rod to Hands in Length is:
Rod / 0.02020202020048081
2. Is this tool free or paid?
This Length conversion tool, which converts Rod to Hands, is completely free to use.
3. How do I convert Length from Rod to Hands?
To convert Length from Rod to Hands, you can use the following formula:
Rod / 0.02020202020048081
For example, if you have a value in Rod, you substitute that value in place of Rod in the above formula, and solve the mathematical expression to get the equivalent value in Hands.
|
{"url":"https://convertonline.org/unit/?convert=rods-hands","timestamp":"2024-11-08T15:49:12Z","content_type":"text/html","content_length":"89685","record_id":"<urn:uuid:aa31e683-e0db-4f93-b6c7-106cb38c6c92>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00748.warc.gz"}
|
The Turbulent History of Fluid Mechanics
Naomi Tsafnat
May 17, 1999.
It all started with Archimedes, way back in BC,
Who was faced with an interesting problem, you see…
The king came to me, and this story he told:
I am not sure if my crown is pure gold.
You are a wise man, or so it is said,
Tell me: is it real, or is it just lead?
I paced and I thought, and I scratched my head,
But the answer eluded me, to my dread.
I sat in my bath, and pondered and tried,
And then … Eureka! Eureka! I found it! I cried.
As I sat in my tub and the water was splashing,
I knew suddenly that a force had been acting.
On me in the tub, it's proportional, see,
To the water that was where now there is me.
Of course, Archimedes caused quite a sensation
But not because of his great revelation;
As he was running through the streets of Syracuse
He didn't notice he was wearing only his shoes.
The great Leonardo, oh what a fellow,
No, not DiCaprio; DaVinci, I tell you!
He did more than just paint the lovely Mona,
He also studied fluid transport phenomena.
Then came Pascal, who clarified with agility,
Basic concepts of pressure transmissibility.
Everyone knows how a barometer looks,
But he figured out just how it works.
How can we talk about great scientists,
Without mentioning one of the best:
Sir Isaac Newton, the genius of mathematics,
Also contributed to fluid mechanics.
One thing he found, and it's easy as pie,
Is that shear stress, τ, equals μ dv/dy.
His other work, though, was not as successful;
His studies on drag were not all that useful.
He thought he knew how fast sound is sent,
But he was way off, by about twenty percent.
And then there was Pitot, with his wonderful tubes,
Which measure how fast an airplane moves.
Poiseuille, dAlembert, Lagrange and Venturi
Through his throats fluids pass in a hurry.
Here is another hero of fluid mechanics,
In fact, he invented the word hydrodynamics.
It would take a book to tell you about him fully,
But here is the short tale of Daniel Bernoulli:
Everyone thinks there is just one Bernoulli
It is not so! There are many of us, truly.
My family is big, many scientists in this house,
With father Johann, nephew Jacob and brother Nicolaus.
But the famous principle is mine, you know,
It tells of the relationship of fluid flow,
To pressure, velocity, and density too.
I also invented the manometer - out of the blue!
Yes, Bernoulli did much for fluids, you bet!
He even proposed the use of a jet.
There were others too, all wonderful folks,
Like Lagrange, Laplace, Navier and Stokes.
Here is another well-known name,
A mathematician and scientist of great fame:
He is Leonhard Euler, I'm sure you all know,
His equations are basis for inviscid flow.
He did more than introduce the symbols π, i, e,
He also derived the equation of continuity.
And with much thought and keen derivation,
He published the famous momentum equation.
Those wonderful equations and diagrams you see?
They are all thanks to Moody, Weisbach and Darcy.
Then there was Mach, and the road that he paves,
After studying the shocking field of shock waves.
Rayleigh studied wave motion, and jet instability,
How bubbles collapse, and dynamic similarity.
He was also the first to correctly explain
Why the sky is blue except when it rains.
Osborne Reynolds, whose number we know,
Found out all about turbulent flow.
He also examined with much persistence,
Cavitation, viscous flow, and pipe resistance.
In the discovery of the boundary layer
Prandtl was the major player.
It's no wonder that all the scientists say,
He’s the father of Modern Fluid Mechanics, hooray!
It is because of Prandtl that today we all can
Describe the lift and drag of wings of finite span.
If it weren't for him, then the brothers Wright
Would probably never have taken flight.
And so we come to the end of this story,
But it's not the end of the tales of glory!
The list goes on, and it will grow too
Maybe the next pioneer will be you?
|
{"url":"https://cpraveen.github.io/cfd/history.html","timestamp":"2024-11-15T04:52:36Z","content_type":"text/html","content_length":"9343","record_id":"<urn:uuid:1f64d436-1c17-4cd0-889f-57ebae31ea93>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00763.warc.gz"}
|
Calculation methods
ELPLA can be used to analyze raft/piled raft or any other structural problems such as slab floors, grids, plane frame, plane stress, a system of many slab foundations, rotational shell, axisymmetric
stress, and axisymmetric structures (Figure 2)
Figure 2 Calculation Methods
The analysis of slab foundation problems is available in ELPLA (Figure 3 and Figure 4).
Figure 3 "Analysis of slab foundation"
Figure 4 "Analysis of slab foundation" displacements
The analysis of slab floor problems is available in ELPLA (Figure 5 and Figure 6).
Figure 5 "Analysis of slab floor" slab thickness
Figure 6 "Analysis of slab floor" displacements
The analysis of piled raft problems is available in ELPLA (Figure 7 and Figure 8).
Figure 7 "Analysis of Burj Khalifah foundation"
Figure 8 "Analysis of Burj Khalifah foundation" displacements
The analysis of grid problems is available in ELPLA (Figure 9).
Figure 9 "Analysis of grid"
In the "Analysis type" Form, if the option "Analysis of system of many slab foundations" is chosen, the Dialog box in Figure 10 appears. Three different numerical calculation methods are considered for the analysis of a system of slab foundations: flexible, elastic, or rigid. For the analysis of a system of many slab foundations, the project filenames (slab foundations) are required (Figure 11).
Figure 10 "Analysis of system of many slab foundations" Dialog box
Figure 11 "Analysis of system of many slab foundations" displacements
The analysis of plane stress problems is available in ELPLA (Figure 12).
Figure 12 Analysis of plane stress
The analysis of Two-Dimensional frame problems is available in ELPLA (Figure 13).
Figure 13 Analysis of plane frame
It is possible to determine Eigenmodes and Eigenvectors due to free vibration for the following structures:
1. Beams
2. Trusses
3. Grids
4. Space frames
5. Shear walls
6. Floor slabs
7. Axisymmetric solids
Figure 14 Dynamic analysis of shear wall
The next step is to define the "System symmetry", Figure 15. In this step, select system symmetry and click "Next" button to go to the next step.
Figure 15 "System Symmetry" Form
By using the system symmetry, if the problem is symmetrical in loading, shape, and soil about x- or y-axis, the computational time and computer storage can be considerably reduced.
By defining the project data for a simple symmetrical or anti-symmetric slab system, the data are defined according to Figure 16, in which only the lower half slab is considered for symmetry about
the x-axis while only the left half slab is considered for symmetry about the y-axis.
Figure 16 Simple symmetrical slab system
By defining the project data for a double symmetrical slab system, the data are defined according to Figure 17. Only the left lower quarter of the slab is considered.
Figure 17 Double symmetrical loaded slab
If the slab is symmetrical in shape and unsymmetrical in loading, it is also possible to divide this general case of loading into two cases having symmetrical and anti-symmetrical loading, Figure 18.
Figure 18 General case of loading by symmetrical and anti-symmetric loading
The symmetrical cases are available for all calculation methods 1 to 9. The anti-symmetric case is only possible for calculation methods 4 to 8.
Some options are available in ELPLA such as the concrete design of sections, additional springs, supports, girders, piles, limit depth, nonlinear subsoil model, determining displacements, stresses,
and strains in soil. Also, ELPLA can study some external influences on the raft such as temperature change, additional settlements, or neighboring foundations. In the menu of Figure 19 check the
options that you want to consider in the analysis.
Figure 19 "Options" Check box
|
{"url":"https://mail.geotecsoftware.com/products/geotec-office/elpla/calculation-methods","timestamp":"2024-11-04T21:03:28Z","content_type":"application/xhtml+xml","content_length":"51437","record_id":"<urn:uuid:05290fdb-0234-49ac-9e24-cc166de6d097>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00096.warc.gz"}
|
A nitrogen gas occupies a volume of 500 ml at a pressure of 0.971 atm. What volume will the gas occupy at a pressure of 1.50 atm, assuming the temperature remains constant? | HIX Tutor
Answer 1
The answer is #324 mL#.
This is a straightforward use of Boyle's law.
#P_1V_1 = P_2V_2#,
which states that a gas' pressure and volume are inversely proportional to each other. It can be derived from the ideal gas law, #PV = nRT#, by keeping #n# and #T# constant.
So, we have #V_2 = P_1/P_2 * V_1 = 0.971/1.50 * 500 mL = 324 mL# -> pressure increases, volume decreases and vice versa.
Answer 2
Using Boyle's Law, ( P_1 \cdot V_1 = P_2 \cdot V_2 ), where ( P_1 ) and ( V_1 ) are the initial pressure and volume, and ( P_2 ) and ( V_2 ) are the final pressure and volume:
( 0.971 , \text{atm} \cdot 500 , \text{ml} = 1.50 , \text{atm} \cdot V_2 )
( V_2 = \frac{0.971 , \text{atm} \cdot 500 , \text{ml}}{1.50 , \text{atm}} )
( V_2 \approx 323.67 , \text{ml} ) (rounded to two decimal places)
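The rearrangement of Boyle's law used above can be checked with a few lines of Python (an illustrative sketch; the exact quotient is 485.5 / 1.50 ≈ 323.67 mL):

```python
def boyle_v2(p1: float, v1: float, p2: float) -> float:
    """Boyle's law at constant temperature: P1*V1 = P2*V2, solved for V2."""
    return p1 * v1 / p2

v2 = boyle_v2(0.971, 500, 1.50)
print(round(v2, 2))  # 323.67
```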
|
{"url":"https://tutor.hix.ai/question/a-nitrogen-gas-occupies-a-volume-of-500-ml-at-a-pressure-of-0-971-atm-what-volum-8f9af8bc5f","timestamp":"2024-11-01T23:11:06Z","content_type":"text/html","content_length":"574355","record_id":"<urn:uuid:a830c98e-9163-455b-b906-ae7452f52554>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00412.warc.gz"}
|
Analyzing the algorithmic complexity of the Kotlin API’s distinctBy function
A few times when I've conducted technical interviews with candidates for Software Engineer roles at the company where I was working, I've come across people who know how they might filter or sort a list; however, when I delve into the depths of the problem, difficulties start to appear.
Many times, as software developers, we get used to using libraries and APIs that already implement the most complex solutions in a single function, making our lives easier. This shouldn't mean that we put aside our curiosity and our ability to investigate how things work, at least in their most basic form.
In this post, we are going to specifically analyze a very used function of the Kotlin API, this is distinctBy.
You can find the official documentation by clicking at this link.
Applying an algorithm in real life
Suppose you have a list of products in which, for reasons not relevant to this post, there are repeated elements. We clearly don’t want to show the user a list of duplicate items. So we know that we
should filter the list.
We have a couple of alternatives:
Filtering the list with a loop
We could instantiate a new empty list and looping through the original list, adding element by element only if the current item of the iteration is not found in this new list.
Here we have a slight problem, that “only if it is not found in this new list” implies a search. Let’s assume that we haven’t used a hash table and are using a simple ArrayList.
We would have something like this:
fun filterList(products: List<Product>): List<Product> {
    val newList = mutableListOf<Product>()
    products.forEach { product ->
        // Add the product only if it is not already in the new list
        if (!findProduct(newList, product)) {
            newList.add(product)
        }
    }
    return newList
}

fun findProduct(list: List<Product>, productToFind: Product): Boolean {
    list.forEach { product ->
        if (product.id == productToFind.id) {
            return true
        }
    }
    return false
}
Doing a quick analysis, the filterList function will call the findProduct function N times, where N is the number of elements in the original list.
The findProduct function will take up to N times to complete. Assuming that the element to search for is in the last position of the list, the worst case (worst case scenario) will be O(n).
This means that the complexity of our algorithm will be O(n²). That is, it could take n² units of time to finish. And if we talk about Memory Consumption (space complexity), we will have a new list,
that is O(n).
Filtering the list with the distinctBy function
Great. It turns out that we know of the existence of a function called distinctBy, to which we must pass, as a parameter, a function that returns the "key" that will serve as an indicator of whether an element is repeated or not. This can be a first name, a last name, or a code.
We just need to write:
val newList = products.distinctBy { product -> product.code }
And, voilà!
That's it: one line of code was enough and we already have a new list that does not contain repeated products with the same code.
But… do we know what it does inside?
Let’s analyze the source code of this function.
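The Kotlin source snippet itself is not reproduced here; as a rough guide to the walkthrough that follows, the logic of distinctBy can be sketched in Python (a hypothetical simplification for illustration, not the actual Kotlin standard library code):

```python
def distinct_by(items, selector):
    """Keep the first item for each distinct key produced by selector."""
    seen = set()   # plays the role of the Kotlin HashSet of keys
    result = []    # the filtered list that gets returned
    for item in items:
        key = selector(item)
        if key not in seen:   # in Kotlin this is the single call set.add(key)
            seen.add(key)
            result.append(item)
    return result

print(distinct_by([("a", 1), ("b", 1), ("c", 2)], lambda p: p[1]))
# [('a', 1), ('c', 2)]
```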
As we can see, it is an inline extension function applicable to any data structure that implements the Iterable interface.
This function depends on two generic classes that it will need for its operation: one that determines the data types of the list to return (the same as the original list) denoted by the letter T, and
another that determines the differentiator of the objects, denoted with the letter K.
In the first two lines, two new objects are instantiated. The first is a hash table or dictionary data structure that at the same time implements the Set interface (HashSet): it will not allow
repeated elements thanks to the fact that it contains a hash table inside. The second is a simple list that will serve as the resulting filtered list.
It iterates over the original list (shown as this in the for statement) and executes the selector function that is passed as a parameter. This selector function is passed the current element of the iteration.
That means that in each iteration, the function { product -> product.code } will be returning what we want to serve as a unique identifier, that’s why its value is assigned to a variable named “key”.
Once we get the unique identifier (which in our example is the product code), we proceed to insert it into our hash table.
As we can see, this insertion occurs inside a conditional if.
The Set interface states that the add function will return true if, and only if, the element was inserted. And, at what point is the element not inserted? Well, when it already exists (it is verified
with a hash table!).
Once we know that this identifier did not exist in our identifier hash table (set.add(key) returns true), then we proceed to insert the element (the product) in our new list (list.add(e)).
No lookup is performed to see if the iteration item already exists in the list. We only try to insert its identifier (the product code) in the hash table and if this operation is successful (returns
true) then we just add this product to the filtered list.
Since the whole original list is iterated anyway, the time complexity will be O(n); however, the verification of whether or not an element already exists in the list returns in constant time, O(1), thanks to the hash table inside our HashSet, which tells us whether the identifier was inserted or not.
Of course, since two new objects are created, the memory space used will be a little larger, but not significantly so, since the purpose of the HashSet is to store only the identifiers, in order to know whether an element already exists in the original list.
1. The distinctBy function uses a HashSet structure to reduce the time cost of verifying whether an element already exists in our list.
2. Unlike a HashMap, a HashSet needs only one element at insert time and it cannot be repeated; it acts as key and value at the same time (actually, if we look at the internal implementation, a generic static Object is used as the value).
3. Of course, distinctBy offers us better performance than if we opted for an implementation of two iterations, one inside the other. However, we could also have used a HashMap and inserted into it using the product code as the key and the product as the value. The result would be a clean, duplicate-free product structure.
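The HashMap alternative mentioned above could look roughly like this (again a Python sketch for illustration; setdefault only inserts when the key is absent, so the first product seen for each code is kept, matching distinctBy's behavior):

```python
def distinct_by_code(products):
    """Deduplicate products by their 'code', keeping the first occurrence."""
    by_code = {}
    for product in products:
        # setdefault inserts only if the key is absent, like HashSet.add
        by_code.setdefault(product["code"], product)
    # dicts preserve insertion order in Python 3.7+, so list order is kept
    return list(by_code.values())
```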
What did you think? Do you also often analyze the functions that we use on a daily basis and that make our lives easier? I would like to know if you know of any other functions or APIs that also make
use of these data structures efficiently and simplify our day-to-day developments 😄.
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse
|
{"url":"https://dev.to/jflavio11/analyzing-the-algorithmic-complexity-of-the-kotlin-apis-distinctby-function-509m","timestamp":"2024-11-11T22:48:12Z","content_type":"text/html","content_length":"75100","record_id":"<urn:uuid:3f58b53d-936e-4544-8147-a583d27d3401>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00328.warc.gz"}
|
A measure of majorization
A measure of majorization emerging from single-shot statistical mechanics
Single-shot information theory inspires a new formulation of statistical mechanics which measures the optimal guaranteed work of a system.
New Journal of Physics 17, 73001 (2015)
D. Egloff, O. Dahlsten, R. Renner, V. Vedral
The use of the von Neumann entropy in formulating the laws of thermodynamics has recently been challenged. It is associated with the average work whereas the work guaranteed to be extracted in any
single run of an experiment is the more interesting quantity in general. We show that an expression that quantifies majorization determines the optimal guaranteed work. We argue it should therefore
be the central quantity of statistical mechanics, rather than the von Neumann entropy. In the limit of many identical and independent subsystems (asymptotic i.i.d) the von Neumann entropy expressions
are recovered, but in the non-equilibrium regime the optimal guaranteed work can be radically different from the optimal average. Moreover our measure of majorization governs which evolutions can be
realized via thermal interactions, whereas the non-decrease of the von Neumann entropy is not sufficiently restrictive. Our results are inspired by single-shot information theory.
|
{"url":"https://lims.ac.uk/paper/a-measure-of-majorization-emerging-from-single-shot-statistical-mechanics/","timestamp":"2024-11-06T14:17:50Z","content_type":"text/html","content_length":"79619","record_id":"<urn:uuid:93db76f1-441c-49c9-87f7-b2c382b7ccab>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00648.warc.gz"}
|
Comment on Unleashing the Power of Bayesian Re-Analysis
27 February 2024
I want to comment on the paper by Costa and colleagues on a Bayesian re-analysis of the Lecanemab phase 3 Clarity trial [1]. Costa and colleagues had already very meritoriously presented a Bayesian
meta-analysis of the Aducanumab Phase 3 trials [2]. They contribute to a growing field of research that adopts Bayesian approaches to overcome limitations of frequentist p-value based statistical
inference [3].
I was able to easily replicate the Bayesian part of their calculations in the JASP software (version 0.18.3). However, I have questions about their calculation of the effect size estimate (expressed
as a T value), which produced results that strongly contradict the results of the frequentist analysis presented in the Clarity study [4].
Their effect size estimate was based on the mean difference (MD) of the Clinical Dementia Rating sum of boxes (CDR-SB) between treatment groups and the confidence interval (CI) of the treatment
effect, reported in the Clarity study [4]. The MD was -0.45 CDR-SB points and the width of the CI of the treatment effect was |-0.67 - (-0.23)| = 0.44 CDR-SB points [4]. From the confidence interval, Costa and
colleagues derived the standard error (SE): “The effect size was obtained from the data by the mean difference of the CDR-SB and the corresponding confidence interval (CI) calculated using classical
formula as described in Higgins et al.” [5]. On page 149, Higgins et al. write: “the SE can be calculated as upper limit−lower limit /3.92” [5]. This means, SE = |upper limit of the 95% CI − lower
limit of the 95% CI| /(2*1.96), with 1.96 representing the z-score that cuts off the upper 2.5% of the area under the standard normal curve. In other words, the width of the 95% CI = 2*SE* z(p <
0.025), when one assumes that the data generation process followed a Gaussian distribution ([6], Section 9.5). With smaller number of cases (below about 120), one can rather assume a data generation
process that follows a t-distribution. Then, the width of the 95% CI becomes ([6], Section 9.5):
95% CI = 2* SE* t(df, p < 0.025), with t indicating the t-distribution and df the degrees of freedom.
With the large number of cases in the Clarity study (n > 1,600), the estimate of SE becomes:
SE = |upper threshold 95%CI - lower threshold 95%CI|/(2* z(p < 0.025)), i.e. 0.44/(2*1.96) = 0.11.
This is different from the value of 0.22 for the SE reported by Costa and colleagues, and it is not clear how they arrived at this larger estimate of variance. Using the SE estimate of 0.11 derived
from the Higgins et al. [5] formula, the T-value becomes:
T = MD/SE = -0.45/0.11 = -4, instead of a T-value of -2 as reported by Costa and colleagues [1].
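The arithmetic of this recalculation can be reproduced in a few lines (Python, for illustration; values taken from the Clarity CI quoted above):

```python
# 95% CI of the treatment effect (CDR-SB points) and the mean difference
ci_low, ci_high = -0.67, -0.23
md = -0.45

# SE per Higgins et al.: CI width divided by 2 * z(p < 0.025)
se = (ci_high - ci_low) / (2 * 1.96)
t = md / se
print(round(se, 2), round(t, 2))  # 0.11 -4.01
```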
This result is essentially unchanged if one assumes a t-distribution rather than a Gaussian distribution for the data generation process to estimate the limits of the CI. For 120 cases, the
difference would be a factor of z(p< 0.025)/t(df= 120, p < 0.025) = 1.96/1.98, i.e. the width of the 95% CI would have been underestimated by a factor of 0.9898 when assuming a Gaussian rather than a
t-distribution. With larger df, as is the case for Clarity [4] with more than 1,600 cases, this factor approximates 1. Thus, given the sample size, the assumption of a t-distribution or Gaussian
distribution for the data generation process has no effect on the estimate of the SE and effect size.
The Bayes factor in favor of the null hypothesis (BF01) resulting from the T value of -4.00 calculated according to Higgins et al. [5] and a standard Cauchy prior was 0.007, indicating very strong
evidence in favor of a treatment effect, i.e., the Bayes factor in favor of a treatment effect (BF10) was 1/0.007 = 145.3. The Bayes factor robustness check, shown in Fig. 1, suggests that this
result was robust to different specifications of the standard Cauchy prior.
Fig. 1. Bayes factor robustness check. Bayes Factor (BF) values for different prior widths (r), including the default prior width (r = 0.707), wide prior (r = 1), and ultrawide prior (r = 1.4). The
evidence for the alternative hypothesis remains relatively stable across the wide range of prior distributions, indicating the robustness of the analysis.
As shown in Table 1, the evidence was also strongly in favor of a treatment effect when using informed prior t-distributions with various minimally clinically important differences (MCID) in favor of
placebo, as proposed by Costa et al. [1]. Our results are generally consistent with the conclusion of the frequentist analysis presented in the Clarity study [4], which suggested a very low
probability of p < 0.001 of finding the observed effect or an even more extreme effect in future replications of the study, assuming that no effect exists.
Table 1. Bayes factor derived using informed prior t-distributions. Bayes factor in favor of the alternative hypothesis of a treatment effect (BF[10]). The BF[10] is calculated for prior
t-distributions assuming various minimally clinically important differences (MCID) in favor of placebo.
^§Estimates for the MCID in the CDR-SB for people with normal cognition, people with mild cognitive impairment due to Alzheimer’s disease (MCI-AD), and people with mild AD dementia reported by [8].
^$MCID estimate derived from the effect size reported in the Lecanemab 2b BAN2401-G000-201 trial [9].
Of note, neither the Bayes factor nor the p-value allow direct inference on the size of the underlying effect. The Bayes factor quantifies the evidence in favor of the alternative hypothesis that an
effect exists vs. the null hypothesis that an effects does not exist, given the data. In contrast, the p-value quantifies the probability of observing the same effect or an even more extreme effect
in future experiments, provided that the assumption of the absence of an effect is correct [7]. Thus, the Bayes factor provides a more intuitive interpretation than the p-value, but often Bayesian
and frequentist analyses do not lead to substantially different conclusions. The impression that the Bayesian analysis led to a substantially different conclusion than the frequentist analysis of the
Clarity data in the paper by Costa and colleagues [1] resulted from their inflated calculation of the SE estimate, which yielded about half the effect size that can be derived using classical
formulas under either assumption, a Gaussian or a t-distribution of the data generation process. Of course, this does not undermine the advantage of the Bayesian approach in providing direct
estimates of the plausibility of the presence of an effect given the observed data. However, the authors should explain why and how they arrived at such an inflated estimate of the SE, leading to
half of the effect size that would have resulted based on the Higgins et al. [5] formulas, which they cite in their paper.
Stefan Teipel, Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE) Rostock, Rostock, Germany, and Department of Psychosomatic Medicine University Medicine Rostock, Rostock, Germany. E-mail:
Conflict of Interest:
S.T. has served on advisory boards for Lilly, Eisai, and Biogen and is a member of the Independent Data Safety and Monitoring Board for the ENVISION trial (Biogen).
[1] Costa T, Premi E, Liloia D, Cauda F, Manuello J (2023) Unleashing the power of Bayesian re-analysis: Enhancing insights into lecanemab (Clarity AD) phase III trial through informed t-test. J
Alzheimers Dis 95, 1059-1065.
[2] Costa T, Cauda F (2022) A Bayesian reanalysis of the phase III aducanumab (ADU) trial. J Alzheimers Dis 87, 1009-1012.
[3] Goodman S (2008) A dirty dozen: Twelve P-value misconceptions. Semin Hematol 45, 135-140.
[4] van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, Kanekiyo M, Li D, Reyderman L, Cohen S, Froelich L, Katayama S, Sabbagh M, Vellas B, Watson D, Dhadda S, Irizarry M, Kramer LD,
Iwatsubo T (2023) Lecanemab in early Alzheimer's disease. N Engl J Med 388, 9-21.
[5] Higgins JPT, Cochrane Collaboration (2020) Cochrane handbook for systematic reviews of interventions, Wiley-Blackwell, Hoboken, NJ.
[6] Rees DG (2001) Essential Statistics, Chapman and Hall/CRC, New York.
[7] Temp AGM, Lutz MW, Trepel D, Tang Y, Wagenmakers EJ, Khachaturian AS, Teipel S (2021) How Bayesian statistics may help answer some of the controversial questions in clinical research on
Alzheimer's disease. Alzheimers Dement 17, 917-919.
[8] Andrews JS, Desai U, Kirson NY, Zichlin ML, Ball DE, Matthews BR (2019) Disease severity and minimal clinically important differences in clinical outcome assessments for Alzheimer's disease
clinical trials. Alzheimers Dement (N Y) 5, 354-363.
[9] Swanson CJ, Zhang Y, Dhadda S, Wang J, Kaplow J, Lai RYK, Lannfelt L, Bradley H, Rabe M, Koyama A, Reyderman L, Berry DA, Berry S, Gordon R, Kramer LD, Cummings JL (2021) A randomized,
double-blind, phase 2b proof-of-concept clinical trial in early Alzheimer's disease with lecanemab, an anti-Abeta protofibril antibody. Alzheimers Res Ther 13, 80.
We thank Prof. Stefan Teipel, author of the Letter to the Editor, for his comment on our recently published Research Article [1], as well as for appreciating our previous Bayesian meta-analysis of
the Aducanumab Phase 3 trials [2]. We also express our gratitude to the Editor-in-Chief for granting us the opportunity to clarify the point raised.
The criticism of our work pertains to the calculation of the effect size estimate, which yields results contradicting those of the frequentist analysis presented in the study by van Dyck et al. [3].
Specifically, the author contends that we derived an inflated estimate of standard error (i.e., 0.22 instead of 0.11), resulting in half of the effect size that would have been obtained based on the
formulas by Higgins et al. [4].
While we acknowledge the accuracy of the author's implementation of the standard error calculation for the specific scenario targeting the determination of a two-tailed interval in a hypothesis test,
it is crucial to highlight a fundamental aspect of our methodology. In our study, we opted for a Bayes Factor analysis for superiority design [5], aiming to assess whether the alternative hypothesis
is greater than the null hypothesis (rather than simply being different from it). Indeed, as stated in our work, “the research question of interest ... was whether there is a significant difference
in the primary endpoint (CDR-SB) favoring lecanemab at 18 months” [1]. From a methodological point of view, a superiority design inherently requires a one-tailed test [5], thus necessitating the
utilization of a standard error set at 0.22.
We are grateful to the author for affording us the opportunity to elucidate why and how we arrived at the estimation of the standard error of 0.22 in our study. In essence, our comment underscores
the validity of our results within the framework of the design we employed.
Tommaso Costa^a,b,c, Enrico Premi^d, Donato Liloia^a,b, Franco Cauda^a,b,c, Jordi Manuello^a,b
^aGCS-fMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy ^bFOCUSLAB, Department of Psychology, University of Turin, Turin, Italy
^cNeuroscience Institute of Turin, Turin, Italy
^dStroke Unit, Department of Neurological and Vision Sciences, ASST Spedali Civili, Brescia, Italy
Conflict of Interest
[1] Costa T, Premi E, Liloia D, Cauda F, Manuello J (2023) Unleashing the power of Bayesian re-analysis: Enhancing insights into lecanemab (Clarity AD) phase III trial through informed t-test. J
Alzheimers Dis 95, 1059-1065.
[2] Costa T, Cauda F (2022) A Bayesian reanalysis of the phase III aducanumab (ADU) trial. J Alzheimers Dis 87, 1009-1012.
[3] van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, Kanekiyo M, Li D, Reyderman L, Cohen S, Froelich L, Katayama S, Sabbagh M, Vellas B, Watson D, Dhadda S, Irizarry M, Kramer LD,
Iwatsubo T (2023) Lecanemab in early Alzheimer's disease. N Engl J Med 388, 9-21.
[4] Higgins JP, Thomas J, Chandler J, Cumpston M, Li T, Page M, Welch VA (2019) Cochrane handbook for systematic reviews of interventions, John Wiley & Sons.
[5] van Ravenzwaaij D, Monden R, Tendeiro JN, Ioannidis JP (2019) Bayes factors for superiority, non-inferiority, and equivalence designs. BMC Med Res Methodol 19, 71.
I would like to thank Dr. Liloia and colleagues for their clarification. Contrary to their argument, however, I do not see how the decision to use a one-tailed rather than a two-tailed test of
superiority would affect the estimate of the standard error (SE) and the resulting value of the T statistic. Of course, it affects the p-value or the Bayes factor. Dr. Liloia and colleagues cite van
Ravenzwaaij et al. [1]. When one follows these authors [1], the SE estimate is based on the confidence interval (CI) as follows:
SE = 0.5 × (width of the two-sided CI) / t(df, p < 0.025).
Applying this formula to the data of the lecanemab trial [2] yields:
SE = 0.5 × |-0.67 - (-0.23)| / t(df, p < 0.025) = 0.22/1.96 = 0.11.
This gives a T statistic of -0.45/0.11 ≈ -4.
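The arithmetic can be reproduced in a few lines (a sketch; the fixed critical value 1.96 stands in for t(df, p < 0.025) at the trial's large df):

```python
# Recover the SE of the treatment difference from the reported two-sided
# 95% CI of the lecanemab trial, then form the T statistic.
ci_low, ci_high = -0.67, -0.23   # reported 95% CI for the CDR-SB difference
effect = -0.45                   # reported adjusted mean difference

half_width = 0.5 * abs(ci_high - ci_low)  # 0.22
t_crit = 1.96                             # approximates t(df, p < 0.025)
se = half_width / t_crit                  # about 0.11
t_stat = effect / se                      # about -4

print(round(half_width, 2), round(se, 2), round(t_stat, 1))
```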
The SE estimate of a measurement is not affected by the decision to use a one-tailed or two-tailed superiority test. For the frequentist analysis, the one-tailed vs. two-tailed decision affects the
selection of the critical t-value. For the Bayesian analysis in the JASP software (version 0.18.3), the T-statistic of -4 yields a Bayes factor for the two-tailed difference of Lecanemab vs. placebo
(i.e., undirected effect) of 145.3. For the one-tailed superiority test under the assumption that Lecanemab is superior to placebo (directional effect in favor of Lecanemab), the Bayes factor is
286.5, and for the one-tailed superiority test under the assumption that placebo is superior to Lecanemab (directional effect in favor of placebo), the Bayes factor is 0.01. Using the formula of van
Ravenzwaaij et al. [1], we thus obtain a Bayesian estimate that is consistent with the frequentist result and indicates extreme evidence in favor of a superior effect of the active compound and
extreme evidence against a superior effect of placebo. The estimate of both the frequentist p-value and the Bayes factor are affected by the decision to use a one-tailed or two-tailed test, but the
estimate of the SE is not.
Conflict of Interest:
S.T. has served on advisory boards for Lilly, Eisai, and Biogen and is a member of the Independent Data Safety and Monitoring Board for the ENVISION trial (Biogen).
[1] van Ravenzwaaij D, Monden R, Tendeiro JN, Ioannidis JPA (2019) Bayes factors for superiority, non-inferiority, and equivalence designs. BMC Med Res Methodol 19, 71.
[2] van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, Kanekiyo M, Li D, Reyderman L, Cohen S, Froelich L, Katayama S, Sabbagh M, Vellas B, Watson D, Dhadda S, Irizarry M, Kramer LD,
Iwatsubo T (2023) Lecanemab in early Alzheimer's disease. N Engl J Med 388, 9-21.
American Mathematical Society
Given a closed $d$-rectifiable set $A$ embedded in Euclidean space, we investigate minimal weighted Riesz energy points on $A$; that is, $N$ points constrained to $A$ and interacting via the weighted
power law potential $V=w(x,y)\left |x-y\right |^{-s}$, where $s>0$ is a fixed parameter and $w$ is an admissible weight. (In the unweighted case ($w\equiv 1$) such points for $N$ fixed tend to the
solution of the best-packing problem on $A$ as the parameter $s\to \infty$.) Our main results concern the asymptotic behavior as $N\to \infty$ of the minimal energies as well as the corresponding
equilibrium configurations. Given a distribution $\rho (x)$ with respect to $d$-dimensional Hausdorff measure on $A$, our results provide a method for generating $N$-point configurations on $A$ that
are “well-separated” and have asymptotic distribution $\rho (x)$ as $N\to \infty$.
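The abstract's construction — N points constrained to a set A and repelling through a Riesz power-law kernel — can be illustrated numerically. The sketch below is an illustration only, not the paper's method; the values of N, s, the step size, and the choice A = S² (the unit sphere) are all arbitrary. It runs projected gradient descent on the unweighted Riesz s-energy:

```python
import math
import random

def riesz_energy(pts, s):
    """Unweighted Riesz s-energy: sum over pairs of |x - y|^(-s)."""
    return sum(math.dist(pts[i], pts[j]) ** (-s)
               for i in range(len(pts)) for j in range(i + 1, len(pts)))

def minimize_on_sphere(n=12, s=2.0, steps=300, lr=5e-4, seed=1):
    """Projected gradient descent for Riesz s-energy points on the unit sphere."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):                       # random start on S^2
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        r = math.sqrt(sum(x * x for x in v))
        pts.append([x / r for x in v])
    for _ in range(steps):
        grads = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                diff = [pts[i][k] - pts[j][k] for k in range(3)]
                d = math.sqrt(sum(x * x for x in diff))
                coef = -s * d ** (-s - 2)    # gradient of d^{-s} w.r.t. pts[i]
                for k in range(3):
                    grads[i][k] += coef * diff[k]
        for i in range(n):
            v = [pts[i][k] - lr * grads[i][k] for k in range(3)]
            r = math.sqrt(sum(x * x for x in v))
            pts[i] = [x / r for x in v]      # project back onto the sphere
    return pts

start = minimize_on_sphere(steps=0)
final = minimize_on_sphere(steps=300)
print(riesz_energy(start, 2.0), ">", riesz_energy(final, 2.0))
```

Configurations of this kind are what the paper's asymptotic results (energy growth and limiting distribution as N → ∞) describe.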
Additional Information
• S. V. Borodachov
• Affiliation: Department of Mathematics, Center for Constructive Approximation, Vanderbilt University, Nashville, Tennessee 37240
• Address at time of publication: School of Mathematics, Georgia Institute of Technology, Atlanta, Georgia 30332
• MR Author ID: 656604
• Email: sergiy.v.borodachov@vanderbilt.edu, borodasv@math.gatech.edu
• D. P. Hardin
• Affiliation: Department of Mathematics, Center for Constructive Approximation, Vanderbilt University, Nashville, Tennessee 37240
• MR Author ID: 81245
• ORCID: 0000-0003-0867-2146
• Email: doug.hardin@vanderbilt.edu
• E. B. Saff
• Affiliation: Department of Mathematics, Center for Constructive Approximation, Vanderbilt University, Nashville, Tennessee 37240
• MR Author ID: 152845
• Email: edward.b.saff@vanderbilt.edu
• Received by editor(s): February 10, 2006
• Published electronically: October 17, 2007
• Additional Notes: The research of the first author was conducted as a graduate student under the supervision of E.B. Saff and D. P. Hardin at Vanderbilt University.
The research of the second author was supported, in part, by the U. S. National Science Foundation under grants DMS-0505756 and DMS-0532154.
The research of the third author was supported, in part, by the U. S. National Science Foundation under grant DMS-0532154.
• © Copyright 2007 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
• Journal: Trans. Amer. Math. Soc. 360 (2008), 1559-1580
• MSC (2000): Primary 11K41, 70F10, 28A78; Secondary 78A30, 52A40
• DOI: https://doi.org/10.1090/S0002-9947-07-04416-9
• MathSciNet review: 2357705
PICUP Exercise Sets: Visualizing the off-axis electric field due to a ring of electric charges
Visualizing the off-axis electric field due to a ring of electric charges
Developed by Patrick Kelley and Gautam Vemuri - Published January 15, 2020
DOI: 10.1119/PICUP.Exercise.OffAxisRingofCharges
This exercise set is designed to calculate and pictorially visualize the off-axis electric field produced by a ring of charges. The graphical portion of the code plots electric field vectors
that display the direction and relative magnitude of the field at various points in space. The novelty of the exercise set lies in being able to calculate the off-axis E field, which is hard
to do analytically, and then visualize it in 3D.
Subject Area: Electricity & Magnetism
Level: First Year
Available Implementation: MATLAB
Learning Objectives: Students that complete this exercise will be able to
1. Use computer code to visualize in 3D magnitudes and directions of vector quantities they calculate computationally (Exercises 1-7)
2. Write the pseudocode for calculating the off-axis electric field due to a charge distribution (Exercise 1)
3. Using the pictures of the E-field, develop qualitative reasoning skills, such as using the symmetry of the charge distribution to answer questions (Exercise 4)
4. Investigate the effect of a ring consisting of both positive and negative charges, something difficult to do analytically (Exercises 5-7)
Time to Complete: 90 min
These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use
(e.g., Excel, Python, MATLAB, etc.).
## Exercise 1

Write the pseudocode to compute and plot the electric field on a meshgrid.

## Exercise 2

Plot the electric vector field for a positive point charge using a $7\times7\times7$ meshgrid. Do the same for a negative point charge.

## Exercise 3

Plot the on-axis electric field versus distance from the center of a uniformly charged ring with a linear charge density of $\lambda = \frac{10}{2\pi R}$, where $R$ is the radius of the ring. Recall the electric field for a uniformly charged ring is: $\overrightarrow{E}=k\frac{2\pi R\lambda z}{(R^2+z^2)^{3/2}}\hat{z}$ where $k$ is the Coulomb constant, $R$ is the radius of the ring, and $z$ is the on-axis coordinate.

## Exercise 4

Plot the electric field vectors in 3D for $N$ positive point charges (a discrete ring of charges). The $N$ charges should be equidistant from each other and lie on a circle. For example, for $N=2$, the charges would lie on the ends of the diameter of the circle, i.e. 180 degrees from each other. See the Theory section on the Discrete Ring of Charges for an illustration of $N=4$. Start with N = **2** and use a $7\times7\times7$ meshgrid. Do the same for:

* N=**5**
* N=**10**
* N=**100**

for a $7\times7\times7$ grid. Compare the on-axis electric field results to the analytical expression found in **Exercise 3** and explain your observations. Make sure to change the linear charge density $(\lambda=\frac{N}{2\pi R})$.

***Note***: You will have to look at electric field values, both for discrete charges and the continuous ring, on the meshgrid vertices. That means for more values, you have to increase the meshgrid space. Try a size of $27\times27\times27$ and see how that compares with the smaller meshgrid space of $7\times7\times7$.

## Power of Computational Physics: Other Charge Configurations

## Exercise 5

Now assign alternating negative charges (indicated with a red color) and positive charges (indicated with a blue color) around the ring. Show electric field vectors in 3D for $N$ charges, as follows:

* N=**5**
* N=**10**
* N=**100**

for a $7\times7\times7$ grid.

## Exercise 6

Now assign one half of the ring negative charges (red) and the other half of the ring positive charges (blue). Show electric field vectors in 3D for $N$ charges, as follows:

* N=**5**
* N=**10**
* N=**100**

for a $7\times7\times7$ grid.

## Exercise 7

Now assign one fourth of the ring negative charges (red) and the next fourth of the ring positive charges (blue), and so on. Show electric field vectors in 3D for $N$ charges, as follows:

* N=**4**
* N=**16**
* N=**100**

for a $7\times7\times7$ grid.
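The exercises are platform-agnostic, so as a quick correctness check outside MATLAB the discrete sum of Exercise 4 can be compared against the analytic on-axis formula of Exercise 3 in Python (the values k = 1, R = 1, Q = 10, and z = 0.5 below are assumptions for illustration). On the axis every charge sits at the same distance from the field point, so the two agree exactly; off-axis is where the computational approach pays off:

```python
import math

def ring_field_discrete(N, R, Q, z, k=1.0):
    """On-axis E_z from N equal point charges q = Q/N evenly spaced on a ring."""
    q = Q / N
    Ez = 0.0
    for i in range(N):
        theta = 2 * math.pi * i / N
        x, y = R * math.cos(theta), R * math.sin(theta)
        r2 = x * x + y * y + z * z          # squared distance charge -> field point
        Ez += k * q * z / r2 ** 1.5         # z-component of the Coulomb field
    return Ez

def ring_field_analytic(R, Q, z, k=1.0):
    """Exercise 3 formula, E_z = k (2 pi R lambda) z / (R^2 + z^2)^(3/2)."""
    return k * Q * z / (R * R + z * z) ** 1.5   # with Q = 2 pi R lambda

for N in (2, 5, 10, 100):
    print(N, ring_field_discrete(N, 1.0, 10.0, 0.5), ring_field_analytic(1.0, 10.0, 0.5))
```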
Credits and Licensing
Patrick Kelley and Gautam Vemuri, "Visualizing the off-axis electric field due to a ring of electric charges," Published in the PICUP Collection, January 2020, https://doi.org/10.1119/
DOI: 10.1119/PICUP.Exercise.OffAxisRingofCharges
The instructor materials are ©2020 Patrick Kelley and Gautam Vemuri.
The exercises are released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license
Charles Yong, Author at Charles' Rhapsodies
• Metallic Materials Properties Development and Standardization (MMPDS-2024)
1. DJI FlyCart 30
1.1. Current Power System
• Type: Battery
• Weight: 22.5 kg
• Energy: 4 kWh
• Continuous Power during Max Speed: 14 kW
• Flight Range: 16 km
• Cost (Single Pack): 4,700 USD
• Cost (Triple Pack): 14,000 USD
1.2. Proposed MGT PPU
• Type: Gas Turbine PPU
• Weight: 22.5 kg
□ Fuel: 15 kg
□ Backup Battery: 2.5 kg
□ Gas Turbine: 3 kg
□ Generator Module: 2 kg
• Energy (20% Efficiency): 36 kWh
• Energy (30% Efficiency): 54 kWh
• Flight Range (20% Efficiency): 150 km (vs 16 km w/ battery)
• Flight Range (30% Efficiency): 220 km (vs 16 km w/ battery)
• Cycle Spec
□ TIT: 780 C (material: 330 SS)
□ $\eta_t$: 90%
□ $\eta_c$: 83%
□ $\Pi$: 7.0
□ $\dot{m}$: 0.1 kg/s
□ $\eta_{\text{thermal}}$: 23.6%
□ $\eta_{\text{electrical}}$: 22% (assuming 93% generator efficiency)
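The energy and range figures above follow from a simple scaling. As a back-of-envelope check (the ~12 kWh/kg fuel lower heating value and the assumption that range scales linearly with onboard electrical energy are mine, not DJI's):

```python
fuel_kg = 15.0
lhv_kwh_per_kg = 12.0                       # assumed fuel lower heating value
battery_kwh, battery_range_km = 4.0, 16.0   # FlyCart 30 battery baseline

for eta in (0.20, 0.30):
    energy = fuel_kg * lhv_kwh_per_kg * eta        # electrical energy out
    rng = battery_range_km * energy / battery_kwh  # linear range scaling
    print(f"eta={eta:.0%}: {energy:.0f} kWh, ~{rng:.0f} km")
```

This gives 36 kWh / ~144 km at 20% and 54 kWh / ~216 km at 30%, in the ballpark of the 150 km and 220 km quoted above.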
1. Compressor Impeller Design
1.1. Design Requirements
The design target for the second generation engine is a power output of 10 kW with a thermal efficiency, defined by the equation below, of more than 10%. $$ \eta_\textrm{thermal}=\frac{\text{net power out}}{\text{fuel mass flow rate} \times \text{fuel lower heating value}}=\frac{(1+f)c_p^c T_{t4} \eta_T (1-\Pi^{\frac{1-\gamma^c}{\gamma^c}}) - \frac{c_p^a T_{0}}{\eta_c} (\Pi^{\frac{\gamma^a-1}{\gamma^a}}-1)}{f h_c} $$ For ease of fabrication, the turbine material is selected to be 316 Stainless Steel, which has a yield strength of more than 120 MPa at 650 C. For long-term exposure to heat, it has less than 0.1% elongation at 1000 hours at 85 MPa at 650 C [1]. This decision leads to $T_{t4}=650 \text{ C} \approx 920 \text{ K}$.
The second generation engine is expected to have a radial compressor and an axial turbine. A conservative estimate of compressor and turbine efficiency is made: $\eta_c = 0.8$, $\eta_t = 0.85$.
The fuel is selected to be propane, which has a lower heating value of $h_c = 4.64 \times 10^7$ J/kg.
Some common assumptions are made to the rest of the variables:
1. $c_p^a = 1004$ J⋅kg$^{−1}$⋅K$^{−1}$
2. $c_p^c = 1200$ J⋅kg$^{−1}$⋅K$^{−1}$
3. $\gamma^a = 1.4$
4. $\gamma^c = 1.3$
5. Compressor Inlet Temperature $T_0 = 298$ K
Now, plotting $\eta_\textrm{thermal}$ vs pressure ratio $\Pi$ using this code, we have
The pressure ratio is set to 4.0 to maximize specific work. At this design point, the specific work is 78,700 J/kg and $\eta_\textrm{thermal}$ is predicted to be 12.3%. The fuel-to-air ratio at the design point is calculated as $$f = \frac{c^c_p T_{t4} - c_p^a T_{t3}}{h_c-c_p^c T_{t4}}= 1.376\%,$$ where $T_{t3}$ is calculated as $$T_{t3} = T_0 (\frac{\Pi^{(\gamma^a-1)/\gamma^a}-1}{\eta_c}+1) = 479 \text{ K}$$
From the specific work, in order to have a 10 kW output, we need a massflow rate of 0.127 kg/s. This is less than half of the massflow rate designed for Gentoo. Due to a previous calculation error, a massflow rate of 0.128 kg/s will be used for the rest of this document.
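The intermediate cycle quantities at $\Pi = 4$ can be reproduced directly from the stated constants (a sketch checking $T_{t3}$, the fuel-to-air ratio $f$, and the compressor specific work; it does not re-create the original plot):

```python
# Cycle constants as given in the text
cp_a, cp_c = 1004.0, 1200.0   # J/(kg K): air, combustion gas
g_a = 1.4                     # gamma for air
T0, Tt4 = 298.0, 920.0        # K: compressor inlet, turbine inlet
eta_c = 0.8
hc = 4.64e7                   # J/kg: propane lower heating value
Pi = 4.0                      # compressor pressure ratio

tau = Pi ** ((g_a - 1.0) / g_a)            # isentropic temperature ratio
Tt3 = T0 * ((tau - 1.0) / eta_c + 1.0)     # compressor exit stagnation temp
f = (cp_c * Tt4 - cp_a * Tt3) / (hc - cp_c * Tt4)   # fuel-to-air ratio
W_comp = cp_a * T0 / eta_c * (tau - 1.0)   # compressor specific work, J/kg

print(f"Tt3 = {Tt3:.0f} K, f = {100*f:.3f}%, W_comp = {W_comp:.0f} J/kg")
```

The results match the 479 K and 1.376% quoted here, and the 181,757 J/kg compressor specific work used later in the turbine-matching section.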
Aiming for a compact size, reaching this kind of pressure ratio almost requires hybrid or full ceramic bearings. A very cost-effective choice is this hybrid ceramic bearing from GRW,
which is rated at 160,000 rpm. So, let's set the upper limit of spool speed to 160,000 rpm. Other bearing choices for this speed would be: CXMC-1980 and HY KH 61900 C TA.
Due to the limitation of fabrication equipment, we can process rod stock with a diameter up to 200 mm. Since roughly half of the diameter is reserved for the diffuser, the impeller diameter $D_2 \leq 100$ mm.
As a summary, the following design requirements are selected for the impeller:
• Compressor Pressure Ratio $\Pi_c = 4.0$
• Isentropic Efficiency $\eta_c = 80\%$ (Polytropic Efficiency $\eta_{c,\text{poly}}=83.5\%$)
• Massflow Rate $\dot{m} = 0.128$ kg/s
• Spool Speed $\omega \leq$ 160,000 rpm
• Impeller Diameter $D_2 \leq 100$ mm
1.2. Flow Coefficient
According to [3, p. 326], it is common to pick a flow coefficient in the range of $0.07 \leq \phi_{t1} \leq 0.10$. See the figure below (quoted from the book) for the relationship. Originally, $\phi_{t1} = 0.09$ was selected for highest efficiency, but later $\phi_{t1} = 0.057$ was used due to the speed limitation imposed by the bearings.
1.3. Work Coefficient
According to multiple sources, it is common to have a work coefficient around 0.7 for higher efficiency and lower tip speed. Therefore, the work coefficient is chosen to be $\lambda = \frac{c_{u2}}{u_2} = \frac{\Delta h_t}{(\omega R_2)^2} = \frac{\omega \Delta (r c_\theta)}{(\omega R_2)^2} = 0.7$.
1.4. Sizing and Velocities
According to [3, p. 330], We have:
$$ M_{u2}^2=\frac{\Pi^{\frac{\gamma-1}{\eta_{\text{poly}} \cdot \gamma}}-1}{(\gamma-1)\lambda} $$
According to the definition of polytropic efficiency and assumption for ideal gas, we have
$$ \eta_c = \frac{\Pi^{(\gamma-1)/\gamma}-1}{\Pi^{(\gamma-1)/(\eta_{\text{poly}}\gamma)}-1} $$
Therefore, we have
$$ M_{u2}^2 = \frac{\Pi^{(\gamma-1)/\gamma}-1}{\eta_c(\gamma-1)\lambda} \approx 2.17 $$
$$ M_{u2} \approx 1.473 $$
$$ u_2=M_{u2}\sqrt{\gamma \text{R} T_{0}} \approx 510 \text{ m/s} $$
This result matches the prediction in [3, p. 267]. The Diameter of the impeller can be then calculated as (from [3, p. 255]):
$$ D_2 = \sqrt{\frac{\dot{m}}{\rho_{t1}u_2\phi_{t1}}} $$
When using $\rho_{t1}=1.18$ kg/m$^3$, we have $D_2$ = 61.1 mm. The speed is then calculated to be $\omega$ = 159,315 rpm. Both $D_2$ and $\omega$ satisfy our design requirements.
Then the relative Mach number at the inlet tip can be calculated as
$M_{w1}=\frac{M_{u2}(\frac{3.2 \phi_{t1}}{k})^{0.36}}{1-0.15M_{u2}(0.45+\frac{\phi_{t1}}{k})} = 0.931$, where impeller inlet shape factor $k = 1-(\frac{r_{1h}}{r_{1c}})^2 = 0.91$ is assumed. Equation
source: [3, p. 353].
Then the optimal flow inlet angle can be calculated as $\beta_{1c}=\cos^{-1}(\frac{\sqrt{3+\gamma M_{w1}^2 + 2 M_{w1}} - \sqrt{3+ \gamma M_{w1}^2-2M_{w1}}}{2M_{w1}})=1.047 \text{ rad}= 60.0 ^\circ$.
Source: [3, p. 348].
The inlet diameter ratio can then be calculated as $\frac{D_{1c}}{D_2}= \frac{M_{w1}\sin (\beta_{1c})}{M_{u2}\sqrt{1+\frac{\gamma -1}{2}M_{w1}^2 \cos ^2{\beta_{1c}}}}=0.536$. Source: [4].
Then we know our impeller eye diameter $D_{1c}=32.8$ mm. After that, the inlet hub diameter can be calculated using the inlet shape factor $k$: $D_{1h}=D_{1c}\sqrt{1-k}=9.8$ mm.
The impeller relative velocity at the casing can then be calculated as $w_{1c} = u_2 \frac{2D_{1c}}{\sqrt{3}D_2} = 316$ m/s.
A de Haller number of $\text{DH} = 0.6$ is chosen for medium diffusion. Then $w_2= \text{DH} \cdot w_{1c}=190$ m/s.
By utilizing the outlet velocity triangle, $\frac{w_{u2}-c_s}{c_{m2}}=\tan \beta_2'$, and the Wiesner correlation $\frac{c_s}{u_2}=\frac{\sqrt{\cos \beta_2'}}{Z^{0.7}}$, where $Z$ is the number of
blades at the outlet, we can solve for the outlet backsweep angle: $\beta_2'=26.8 ^\circ$ if $Z = 10$, $\beta_2'=32.0 ^\circ$ if $Z = 12$, or $\beta_2'=35.5 ^\circ$ if $Z = 14$. In order to have high
efficiency and a wide operating range, $Z = 12$ is chosen, for a backsweep angle of $\beta_2'=32.0 ^\circ$.
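The sizing chain above can be collected into a short script (a sketch using the text's stated coefficients; $R = 287$ J/(kg K) for air is an assumption):

```python
import math

g, R_gas = 1.4, 287.0                  # gamma and gas constant for air
eta_c, lam, phi_t1 = 0.8, 0.7, 0.057   # isentropic eff., work and flow coefficients
Pi, T0 = 4.0, 298.0                    # pressure ratio, inlet temperature (K)
rho_t1, mdot = 1.18, 0.128             # inlet stagnation density (kg/m^3), massflow (kg/s)

Mu2 = math.sqrt((Pi ** ((g - 1) / g) - 1) / (eta_c * (g - 1) * lam))
u2 = Mu2 * math.sqrt(g * R_gas * T0)             # impeller tip speed, m/s
D2 = math.sqrt(mdot / (rho_t1 * u2 * phi_t1))    # impeller diameter, m
rpm = u2 / (D2 / 2) * 60 / (2 * math.pi)         # spool speed

print(f"Mu2 = {Mu2:.3f}, u2 = {u2:.0f} m/s, D2 = {D2*1000:.1f} mm, {rpm:.0f} rpm")
```

This is consistent with the $M_{u2} \approx 1.473$, $u_2 \approx 510$ m/s, $D_2 = 61.1$ mm, and $\omega \approx 159{,}315$ rpm above; small differences come from rounding $u_2$.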
1.5. Velocity Triangles
1.5.1 Inlet Velocity Triangle
(Figure Source: [3])
• $\omega = $ 159,315 rpm
• $\beta_{c1}=1.047 \text{ rad}= 60.0 ^\circ$
• $u_{c1} = 273.6$ m/s
• $c_{m1}=c_1=158.1 $ m/s
• $w_{c1}=316$ m/s
1.5.2 Outlet Velocity Triangle
(Figure & Estimation Source: [3]. See section 11.6.6 in this book for outlet velocity triangle design process.)
• $u_2=510$ m/s
• $c_{u2}=\lambda u_2=357 $ m/s
• $w_{u2}=u_2-c_{u2}=153$ m/s
• $w_2 = 190$ m/s
• $c_{m2}=\sqrt{w_2^2-w_{u2}^2}=112.7 $ m/s
• $\phi_2=\frac{c_{m2}}{u_2}= 0.221$
• $c_2 = \sqrt{c_{m2}^2+c_{u2}^2}=374.4 $ m/s
• $\alpha_2 = \sin^{-1}(\frac{c_{u2}}{c_2}) = 1.265 \text{ rad} = 72.5 ^\circ $
• $\beta_2 = \sin^{-1}(\frac{w_{u2}}{w_2})=0.936 \text{ rad} = 53.6 ^\circ$
• $\beta_2’= 0.559 \text{ rad} = 32.0 ^\circ $
• $c_s = u_2\frac{\sqrt{\cos \beta_2’}}{Z^{0.7}}=82.5$ m/s
1.6. Verification
1.6.1 Pressure Rise Coefficient
The pressure rise coefficient $\psi=\frac{\Delta h_0}{U^2}=\frac{\Delta p}{\rho \omega^2D_2^2}=\lambda\eta_s=0.56$. According to [3, p. 285], an optimal value of $\psi=0.55 \pm 0.05$ is recommended.
Our $\psi$ is in the optimal range.
1.6.2 Specific Speed & Specific Diameter
The specific speed is calculated as $\omega_s=2\frac{\phi_{t1}^{1/2}}{\psi^{3/4}}=0.7376$, while the specific diameter is calculated as $D_s = \frac{\psi^{1/4}}{\phi_{t1}^{1/2}} = 3.623$.
According to the Cordier diagram plotted in [3, p. 291], our design point is very close to the Cordier line (maximum efficiency).
2. Turbine Design
An axial turbine configuration rather than a radial turbine is selected for higher efficiency. Since the pressure ratio of the compressor is only 4, a single-stage axial turbine should be able to do the work
with fairly high efficiency.
2.1. Flow Coefficient, Work Coefficient, and Degree of Reaction
Multiple versions of the Smith chart are available to help select these parameters. The following values are selected for maximum efficiency:
• Work Coefficient $\Psi = 1.2 = \frac{\Delta h_0}{U^2} = \frac{\Delta c_{\theta}}{U}$
• Degree of Reaction $\text{R} = 0.5 = \frac{h_2-h_3}{h_1-h_3} \approx \frac{p_2-p_3}{p_1-p_3}$
The following parameter is a bit away from optimal due to stress considerations imposed by the turbine material:
• Flow Coefficient $\phi = 0.90 = \frac{c_x}{U}$
The definitions of these parameters can be found in [6, pp. 122-123].
A recommended paper about the Smith chart is [7]. Here is one version of the Smith chart from that paper:
2.2. Turbine-Compressor Matching
The turbine will be installed on the same shaft as the compressor impeller, and therefore $\omega_t = \omega_c =$ 159,315 rpm.
The turbine must supply enough power to run the compressor. From the compressor design, the compressor needs a specific work of
$$ W_{sp}= \frac{c_p^a T_{0}}{\eta_c} (\Pi^{\frac{\gamma^a-1}{\gamma^a}}-1)= \text{181,757 J/kg} $$
Assuming a 1% mechanical loss, as suggested in [8, p. 65], the turbine needs to extract $W_{sp} =$ 183,593 J/kg. Therefore, the power needed to run the compressor is 23.5 kW.
Then the pressure ratio needed for the turbine to provide enough power can be calculated as $$ \Pi_{\text{turbine}}=(1-\frac{W_{sp}}{(1+f)c_p^c T_{t4} \eta_T })^{\frac{\gamma^c}{1-\gamma^c}} = 2.53 $$
2.3. Blade Speed, Height, and Mean Radius
According to [8, p. 134], the minimal blade tip speed can be calculated as $$ U \geq \sqrt{\frac{\dot{W}}{\dot{m}\Psi}}=391.1 \text{ m/s} $$ Using an air density of $\rho=1.53$ kg/m$^3$ at 4 atm pressure and 650 C, the annulus area can be calculated as $$ A_x=\frac{\dot{m}}{\rho\phi U}=237.7 \text{ mm}^2 $$ By using the equation $$ A_x=\pi r_t^2(1-(\frac{r_h}{r_t})^2) $$ and the following equation from [8, p. 152], which estimates centrifugal stress, $$ \frac{\sigma_c}{\rho_m}=\frac{(\omega r_t)^2}{2}(1-(\frac{r_h}{r_t})^2) $$ we can estimate the centrifugal stress (assuming $\rho_m$ = 7980 kg/m$^3$, $\omega=159{,}315 \text{ rpm}=16{,}683.4 \text{ rad/s}$): $$ \sigma_c = \frac{\rho_m\omega^2A_x}{2\pi}=84 \text{ MPa} $$ This satisfies the creep-resistance requirement with some margin.
The blade mean radius can be calculated as $r_m=\frac{U}{\omega}=23.4$ mm, and the blade height can then be calculated as $$ r_t-r_h=H\approx \frac{\dot{m}}{\rho \phi U 2 \pi r_m}=1.61 \text{ mm} $$
Therefore, we have turbine hub radius $r_h = 21.8$ mm and turbine tip radius $r_t=25$ mm.
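The turbine sizing numbers can likewise be checked in a few lines (a sketch; $\rho_m = 7980$ kg/m³ for the stainless blade material, the 1.53 kg/m³ gas density, and the turbine efficiency value below are taken or inferred from the text):

```python
import math

W_sp, mdot = 183593.0, 0.128        # turbine specific work (J/kg), massflow (kg/s)
Psi, phi = 1.2, 0.9                 # work and flow coefficients
rho, rho_m = 1.53, 7980.0           # gas density at 4 atm / 650 C; blade material density
omega = 159315 * 2 * math.pi / 60   # spool speed, rad/s

U = math.sqrt(W_sp / Psi)                           # minimal mean blade speed, m/s
A_x = mdot / (rho * phi * U)                        # annulus area, m^2
sigma_c = rho_m * omega ** 2 * A_x / (2 * math.pi)  # centrifugal stress, Pa
r_m = U / omega                                     # mean radius, m
H = A_x / (2 * math.pi * r_m)                       # blade height, m

# Turbine pressure ratio needed to drive the compressor; eta_t = 0.85 is an
# assumed turbine efficiency, consistent with the quoted Pi_turbine = 2.53.
f, cp_c, Tt4, g_c, eta_t = 0.01376, 1200.0, 920.0, 1.3, 0.85
Pi_t = (1 - W_sp / ((1 + f) * cp_c * Tt4 * eta_t)) ** (g_c / (1 - g_c))

print(f"U = {U:.1f} m/s, A_x = {A_x*1e6:.1f} mm^2, sigma_c = {sigma_c/1e6:.0f} MPa")
print(f"r_m = {r_m*1000:.1f} mm, H = {H*1000:.2f} mm, Pi_turbine = {Pi_t:.2f}")
```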
2.4. Velocity Triangles
$\Psi = \frac{\Delta c_\theta}{U} = 2\phi \tan \alpha_2-1=2\phi \tan \beta_3-1$, and therefore we have $$ \alpha_2 = \beta_3 = \tan^{-1}(\frac{\Psi+1}{2\phi})=0.885 \text{ rad}=50.7 ^\circ $$ and
because $\tan \beta_3 - \tan \beta_2 = \frac{U}{c_x} = \frac{1}{\phi}$, $$ \alpha_3 = \beta_2 = \tan^{-1}(\frac{\Psi-1}{2\phi})=0.111 \text{ rad}=6.3 ^\circ $$ From the previous calculation, we
have $U = 391.1 \text{ m/s}$ and $c_x=\phi U = 352.0 \text{ m/s}$, and therefore we have $$ V_{2R} = V_3 = \frac{c_x}{\cos \beta_2} = 354.2 \text{ m/s} $$
$$ V_2 = V_{3R}=\sqrt{c_x^2 + (c_x\tan \beta_2 + U)^2} = 556.0 \text{ m/s} $$
(Figure and Equation Source: [9, p. 697])
[1] Yagi, K., et al. “Creep properties of heat resistant steels and superalloys.” Landolt-Börnstein Numerical Data & Functional Relationships in Science & Technology, Springer, 2nd edition (2004).
[2] Kerrebrock, Jack L. Aircraft engines and gas turbines. MIT press, 1992.
[3] Casey, Michael, and Chris Robinson. Radial Flow Turbocompressors. Cambridge University Press, 2021.
[4] Rusch, Daniel, and Michael Casey. “The design space boundaries for high flow capacity centrifugal compressors.” Journal of turbomachinery 135.3 (2013): 031035.
[5] Cumpsty, Nicholas A. “Compressor aerodynamics.” Longman Scientific & Technical (1989).
[6] Dixon, S. Larry, and Cesare Hall. Fluid mechanics and thermodynamics of turbomachinery. Butterworth-Heinemann, 2013.
[7] Bertini, Francesco, et al. “A critical numerical review of loss correlation models and Smith diagram for modern low pressure turbine stages.” Turbo Expo: Power for Land, Sea, and Air. Vol. 55232.
American Society of Mechanical Engineers, 2013.
[8] Rogers, Gordon Frederick Crichton, et al. Gas turbine theory. Pearson Higher Ed, 2017.
[9] Mattingly, Jack D. Elements of gas turbine propulsion. Vol. 1. New York: McGraw-Hill, 1996.
1. Faster
1.1. Better Machine Design
B-axis for 6-sided complete machining
Double tools for double MRR
Four 5-axis Turret: Kill CNC Programmers
Swiss Lathe: Assembly Line for Turning
Laser Drilling
Smart Fixturing
1.2. Better Tooling
Ceramic Endmills
Custom Tooling
Focused ion beam-shaped microtools for ultra-precision machining
1.3. Better Control
Eccentric Turning
Interpolation Turning
Chatter Control
Collision Protection
AI Chip Removal
2. Tighter Tolerance
Active cooling and compensation
In-process 3D tool wear compensation
In-process surface roughness check
In-process surface probing + grinding for sub-micron accuracy
3. Make Impossible Possible
“Machining” ceramics and glass
Additive + Subtractive
During IAP 2023, I had the pleasure of conducting a two-unit workshop at MIT, where students learned advanced manual and CNC machining techniques to construct a fully operational internal combustion
engine in just two weeks.
I would like to extend my heartfelt gratitude to Mark Belanger, Morningside Academy for Design, Project Manus, and the Edgerton Center for their invaluable support. Additionally, a special thanks
goes out to Lee Zamir and Conor McArdle for their efforts in preparing the interview.
I was reading a newsletter, a pretty famous one within the product management community. In this week’s issue, the writer said he is also doing angel investing now. So I looked up startups he invested in.
I can’t believe those startups are real, because they’re just ideas from the discussions I had with schoolmates last semester. For example, I pitched an idea in 15.390 about using LiDAR to scan one’s body measurements in order to eliminate the sizing problem of online apparel shopping. And then, boom! Unspun is out there.
Another example is Cerebral. Another team in 15.390 started by building a community to let depressed patients help each other. Then they found out that a bunch of apps doing exactly the same thing were available already, so they pivoted to licensed therapy, which is exactly what Cerebral offers.
And it didn’t stop there. Last October, I was discussing a proposal to compete in MIT DesignX. The proposal aimed to connect the young with the elderly so that the elderly could get help and companionship when they need it. But there’s one thing we didn’t figure out: how to create motives for the young to help the elderly. Papa‘s answer is payment from the elderly’s health plan or employer.
The above examples are just from one part-time angel investor. If we look at a wider scope, there are Modsy, Curated, AptDeco, MicroAcquire, Lawtrades, and Betterleap. Every single one of these companies came up as a startup idea in conversations I had last year. So I have come to believe that everybody is being hit by apples every day, but only a few of them become Newton. What we should do is identify ideas with market demand and start making them come true.
Be ready to be surprised by the crazy, wonderful events that will come dancing out of your past when you stir the pot of memory.
Writing about Your Life: A Journey into the Past by William Zinsser (Marlowe 2004)
I have understood this point for a long time: reviewing inspires people. Actually, that’s the reason why I write blogs. So, when I came across this line from William Zinsser, I was so impressed that he could put this simple idea into such a vivid expression.
The other day, I watched an interview with the famous director Ang Lee. I was fascinated by the accuracy and effectiveness of his wording. For a long time, I have felt feeble when trying to express myself, whether verbally or in writing. And Lee showed me a prospect. Maybe I should take some courses in creative writing and narrative writing someday.
I’m always attracted by sci-fi works, or more specifically, space operas. A so-called space opera tells stories rooted in real life in a space-age setting. Recently, an outstanding new work became available via Netflix: Space Sweepers (2021). This film talks about inequality, apartheid, and genocide.
As I am reading works by Lévi-Strauss, it occurs to me that apartheid seems to be the only way to solve the problem of overpopulation so far, even in a sci-fi setting. Three thousand years ago, India tried to solve its population problem with the caste system. However, this was tragic for mankind. As Lévi-Strauss put it: “In the course of history, the various castes did not succeed in reaching a state in which they could remain equal because they were different.” and “When a community becomes too numerous, however great the genius of its thinkers, it can only endure by secreting
It appears that there are two ultimate society forms in today’s sci-fi works. One form, exemplified by Elysium (2013) directed by Neill Blomkamp, is enslavement. The other, exemplified by the Star Trek series, holds that space is the final yet interminable frontier of mankind. In other words, there will always be planets where no man has gone before.
Perhaps how we understand the problem of overpopulation is just as limited as how Malthus understood it in the 1800s. But, anyway, it is a little depressing that sci-fi works, representing the boundary of human imagination, reach the same conclusion that thinkers in India came to three thousand years ago.
It has been three months since I first tried to write about this spot on planet Earth. It seems that Aranya carries too much meaning for me; every time I tried to order my thoughts, I accomplished nothing but becoming effusive. Until recently.
Last week, I came across some lines from Claude Lévi-Strauss. It felt like Thanos snapped his fingers and my confusion vanished. I share a dilemma about Aranya similar to what Lévi-Strauss feels about mountains: on the one hand, I value its solitude, and on the other hand, that solitude may cause the termination of Aranya.
For some years past, this preference has taken the form of a jealous passion. I hated those who shared my predilection, for they were a menace to the solitude I value so highly; and I despised
those others for whom mountains meant merely physical exhaustion and a constricted horizon and who were, for that reason, unable to share in my emotions. The only thing that would have satisfied
me would have been for the entire world to admit to the superiority of mountains and grant me the monopoly of their enjoyment.
Lévi-Strauss, C. (1992). Tristes tropiques (1955). Trans. John and Doreen Weightman. London: Penguin Books, 333.
In other words, it’s similar to the Arc Reactor for Tony Stark. It is the reactor that keeps Stark alive while the palladium within the reactor is poisoning him.
This is when I fell in love with Aranya. It’s very much like a scene in Oblivion: after a long period of time, humans have become all but extinct. Just as Lévi-Strauss wrote in another chapter of the book I quoted above, the signs of the existence of human beings mark out, more clearly than if humans had not been there, the extreme limit that they have attempted to cross. Great nature placates the rebellion of humans and quells my turbulence.
I’m also a diehard fan of open water. Ripples on the surface make him alive. And he’s always here, listening. There’s a peaceful lake on my college campus. Without doubt, that’s the place for the inner me. Every time I approach the lake, the clamour in my mind fades away. It’s like a gigantic jar that purges and stores all my feelings: happy, sad, and everything in between. I always characterize the lake as a pensive old man. Not in the style of The Old Man and the Sea, but more like Yoda.
In addition, Aranya is not only a certainty of space but also a state of mind. I’ve been super busy for the past year. And as far as I can tell, the coming one will not offer much leisure. Therefore, some time simply for wandering around is such a luxury for me. In the space opera Star Trek: The Next Generation, there’s some discussion about what makes humans different. Or, in other words, if a single feature of humanity were picked as the representative of mankind, which one would occupy the place of honour? The answer proposed by the scriptwriters is curiosity. And I kind of agree with this one. Sir Ken Robinson once said there are two types of people in this world: those who divide the world into two types and those who do not. Continuing in this manner, I would like to divide the pursuit of curiosity into two kinds: those which require teamwork and those which do not. Throughout the last year, I was offered some wonderful chances to collaborate with brilliant people. A lost chance never returns. So, collaborations dominated my time. By and large, Aranya has become the only, dilapidated refuge for the private part of curiosity.
Every time I plan to spend some time at Aranya, I agonize. I’m so afraid that it may perish; every visit might be the valediction. I took the opening picture of this article on my first visit to Aranya. And on my second trip, it was gone, encroached on by a construction site. The following ones were taken during my first and second visits. Unfortunately, the spot disappeared before my third visit, with no trace. I hate what’s happening out there, but there’s nothing I can do except present my raw passion.
How is this gonna end? Maybe Aranya will still be there long after my death, while it’s entirely possible that it will be gone by tomorrow. I guess the only thing I can do, and what I’m doing, is enjoy every moment with Aranya until the termination of either one of us.
Who knows where the road will lead us?
Only a fool would say
But if you’ll let me love you
It’s for sure I’m gonna love you
All the way
Sinatra, F. (1957). All the Way.
simple formula | Qrew Discussions
Hey All,
I have a Formula Numeric field that calculates Units Remaining.
[Weekly Units]-[# Units Used]
This works. Now I'm trying to create a message if this field goes NEGATIVE.
if (([Weekly Units]-[# Units Used])<0, ToText("TOO MANY UNITS USED, TRY AGAIN"),
if (([weekly units]=[# units used]),"zero",
([weekly units]-[# units used]))))
I'm receiving a Formula Syntax Error for the last line: "Expecting Text but found Number". If this is a formula NUMERIC field, why am I getting this message?
Here are other variations that I've tried for that last line, but still get the same message:
(ToNumber([weekly units])-ToNumber([# units used]))))
(ToNumber([weekly units]-[# units used]))))
Thanks in advance! Mary
Mary Perkins
M3TR1CS Business Solutions
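For reference, the "Expecting Text but found Number" error is a type-consistency issue: every branch of the formula must return the type the field is declared with, and here two branches return text while the last returns a number. One hedged sketch of a fix, assuming the field is switched to a Formula Text field so that every branch can return text:

```
If([Weekly Units] - [# Units Used] < 0, "TOO MANY UNITS USED, TRY AGAIN",
If([Weekly Units] = [# Units Used], "zero",
ToText([Weekly Units] - [# Units Used])))
```

If the field must stay numeric, the warning message would instead need to live in a separate Formula Text field.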
From Encyclopedia of Mathematics
cardinal number, of a set
That property of a set which is shared by every set that can be put into one-to-one correspondence with it (see Cardinal number).
[1] P.S. Aleksandrov, "Einführung in die Mengenlehre und die allgemeine Topologie" , Deutsch. Verlag Wissenschaft. (1984) (Translated from Russian)
More information and references can be found in the article Cardinal number.
How to Cite This Entry:
Cardinality. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Cardinality&oldid=12401
This article was adapted from an original article by B.A. Efimov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
JEE Main Syllabus 2024 - Updated Syllabus for Mathematics, Physics, and Chemistry | iDreamCareer
The National Testing Agency has released the complete syllabus for the JEE Main exam 2024. The syllabus can be found on the official website in PDF form. Aspirants should keep note of the syllabus and the important topics for preparation. The JEE Main syllabus 2024 comprises six sections – mathematics, physics, chemistry, aptitude, drawing, and planning.
The mathematics, physics, and chemistry syllabus is for paper 1. The aptitude & drawing and the planning test syllabus are for papers 2a and 2b respectively. The National Testing Agency updates the JEE Main syllabus frequently, so students should stay updated with the latest syllabus through the official website.
JEE Main 2024 Syllabus
The syllabus for the first paper of the JEE Main exam 2024 is quite vast. Students need to understand the entire syllabus and its subtopics in order to perform well in the exam. The syllabus for JEE Main Paper 1 can be divided into six sections.
1. JEE Main Syllabus for Mathematics
2. JEE Main Syllabus for Physics
3. JEE Main Syllabus for Chemistry
4. JEE Main Syllabus for Aptitude Test
5. JEE Main Syllabus for Drawing Test
6. JEE Main Syllabus for Planning Test
JEE Main Syllabus 2024 for Mathematics
According to the official syllabus of mathematics syllabus released by the NTA, there are 14 units in total. The table below depicts the total units for the mathematics syllabus.
SETS, RELATIONS, AND FUNCTIONS COMPLEX NUMBERS AND QUADRATIC EQUATIONS MATRICES AND DETERMINANTS
PERMUTATIONS AND COMBINATIONS BINOMIAL THEOREM AND ITS SIMPLE APPLICATIONS SEQUENCE AND SERIES
LIMIT, CONTINUITY, AND DIFFERENTIABILITY INTEGRAL CALCULUS DIFFERENTIAL EQUATIONS
CO-ORDINATE GEOMETRY THREE-DIMENSIONAL GEOMETRY VECTOR ALGEBRA
STATISTICS AND PROBABILITY TRIGONOMETRY
Let’s now discuss the topic-wise syllabus for mathematics subjects in detail.
Unit 1: Sets, Relations, and Functions
The JEE main syllabus for the mathematics subject for the unit Sets, relations, and functions are mentioned here. The important topics include – Sets and their representation: Union, intersection,
and complement of sets and their algebraic properties; Power set; Relation, Type of relations, equivalence relations, functions; one-one, into and onto functions, the composition of functions.
Unit 2: Complex Numbers and Quadratic Equations
The JEE main syllabus for the mathematics subject for the unit complex numbers and quadratic equations are mentioned here. The important topics include – Complex numbers as ordered pairs of reals,
Representation of complex numbers in the form a + ib and their representation in a plane, Argand diagram, algebra of complex numbers, modulus, and argument (or amplitude) of a complex number,
Quadratic equations in real and complex number system and their solutions Relations between roots and co-efficient, nature of roots, the formation of quadratic equations with given roots.
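As a one-line illustration of the roots–coefficients relation listed above: for $ax^2+bx+c=0$ with roots $\alpha$ and $\beta$,

```latex
\alpha + \beta = -\frac{b}{a}, \qquad \alpha\beta = \frac{c}{a}
```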
Unit 3: Matrices and Determinants
The JEE main syllabus for the mathematics subject for the third unit are mentioned here. The important topics include – Matrices, algebra of matrices, type of matrices, determinants, and matrices of
order two and three, evaluation of determinants, area of triangles using determinants, Adjoint, and evaluation of inverse of a square matrix using determinants and, Test of consistency and solution
of simultaneous linear equations in two or three variables using matrices.
Unit 4: Permutation and Combination
The JEE main syllabus for the mathematics subject for the fourth unit is mentioned here. The important topics include – The fundamental principle of counting, permutation as an arrangement and combination as a selection, the meaning of P(n,r) and C(n,r), and simple applications.
Unit 5: Binomial Theorem and Its Simple Applications
The JEE main syllabus for the mathematics subject for the fifth unit is mentioned here. The important topics include the binomial theorem for a positive integral index, general term and middle term,
and simple applications.
Unit 6: Sequence and Series
The JEE main syllabus for the mathematics subject for the sixth unit is mentioned here. The important topics include – Arithmetic and Geometric progressions, insertion of arithmetic, geometric means
between two given numbers, Relation between A.M and G.M.
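For reference, the relation between A.M. and G.M. named above, for two positive numbers $a$ and $b$:

```latex
\mathrm{A.M.} = \frac{a+b}{2} \;\ge\; \sqrt{ab} = \mathrm{G.M.},
\quad \text{with equality if and only if } a = b.
```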
Unit 7: Limit, Continuity, and Differentiability
The JEE main syllabus for the mathematics subject for the seventh unit is mentioned here. The important topics include – Real–valued functions, algebra of functions, polynomials, rational,
trigonometric, logarithmic, and exponential functions, inverse functions. Graphs of simple functions. Limits, continuity, and differentiability. Differentiation of the sum, difference, product, and
quotient of two functions. Differentiation of trigonometric, inverse trigonometric, logarithmic, exponential, composite, and implicit functions; derivatives of order up to two, Applications of
derivatives: Rate of change of quantities, monotonic-increasing and decreasing functions, Maxima and minima of functions of one variable.
Unit 8: Integral Calculus
The JEE main syllabus for the mathematics subject for the eighth unit is mentioned here. The important topics include – Integral as an anti-derivative and fundamental integral involving algebraic,
trigonometric, exponential, and logarithmic functions. Integrations by substitution, by parts, and by partial functions. Integration using trigonometric identities.
Unit 9: Differential Equations
The JEE main syllabus for the mathematics subject for the ninth unit is mentioned here. The important topics include – Ordinary differential equations, their order, and degree, the solution of
differential equations by the method of separation of variables, solution of a homogeneous and linear differential equation of the type.
Unit 10: Co-ordinate Geometry
The JEE main syllabus for the mathematics subject for the tenth unit is mentioned here. The important topics include – A cartesian system of rectangular coordinates in a plane, distance formula,
section formula, locus, and its equation, the slope of a line, parallel and perpendicular lines, and intercepts of a line on the co-ordinate axis.
• Straight line: Various forms of equations of a line, intersection of lines, angles between two lines, conditions for concurrence of three lines, the distance of a point from a line, coordinate
of the centroid, orthocentre, and circumcentre of a triangle.
• Circle, conic sections: A standard form of equations of a circle, the general form of the equation of a circle, its radius and central, equation of a circle when the endpoints of a diameter are
given, points of intersection of a line and a circle with the centre at the origin and sections of conics, equations of conic sections (parabola, ellipse, and hyperbola) in standard forms.
Unit 11: Three-Dimensional Geometry
The JEE main syllabus for the mathematics subject for the eleventh unit is mentioned here. The important topics include the coordinates of a point in space, the distance between two points, section
formula, direction ratios, direction cosines, and the angle between two intersecting lines. Skew lines, the shortest distance between them, and its equation. Equations of a line.
Unit 12: Vector Algebra
The JEE main syllabus for the mathematics subject for the twelfth unit is mentioned here. The important topics include – Vectors and scalars, the addition of vectors, components of a vector in two
dimensions and three-dimensional space, and scalar and vector products.
Unit 13: Statistics and Probability
The JEE main syllabus for the mathematics subject for the thirteenth unit is mentioned here. The important topics include –
• Measures of dispersion; calculation of mean, median, and mode of grouped and ungrouped data; calculation of standard deviation, variance, and mean deviation for grouped and ungrouped data.
• Probability: Probability of an event, addition and multiplication theorems of probability, Baye’s theorem, probability distribution of a random variate.
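Bayes’ theorem, listed above, in its standard form for a partition of events $A_1, A_2, \dots$:

```latex
P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}
```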
Unit 14: Trigonometry
The JEE main syllabus for the mathematics subject for the fourteenth unit are mentioned here. The important topics include – Trigonometrical identities and trigonometrical functions, inverse
trigonometrical functions, and their properties.
JEE Main Syllabus 2024 for Physics
According to the official physics syllabus released by the NTA, there are 20 units in total. The table below depicts the units for the physics syllabus.
PHYSICS AND MEASUREMENT KINEMATICS LAWS OF MOTION
WORK, ENERGY, AND POWER ROTATIONAL MOTION GRAVITATION
PROPERTIES OF SOLIDS AND LIQUIDS THERMODYNAMICS KINETIC THEORY OF GASES
OSCILLATIONS AND WAVES ELECTROSTATICS CURRENT ELECTRICITY
MAGNETIC EFFECT OF CURRENT AND MAGNETISM ELECTROMAGNETIC INDUCTION AND ALTERNATING CURRENTS ELECTROMAGNETIC WAVES
OPTICS DUAL NATURE OF MATTER AND RADIATION ATOMS AND NUCLEI
ELECTRONIC DEVICES EXPERIMENTAL SKILLS
Let’s now discuss the topic-wise syllabus for physics subjects in detail.
Unit 1: Physics and Measurement
The JEE main syllabus for the physics subject for the first unit is mentioned here. The important topics include – Units of measurement, System of Units, S I Units, fundamental and derived units,
least count, significant figures, Errors in measurements, Dimensions of Physics quantities, dimensional analysis, and its applications.
Unit 2: Kinematics
The JEE main syllabus for the physics subject for the second unit is mentioned here. The important topics include – The frame of reference, motion in a straight line, Position-time graph, speed and velocity; Uniform and non-uniform motion, average speed and instantaneous velocity, uniformly accelerated motion, velocity-time, position-time graph, relations for uniformly accelerated motion, Scalars and Vectors, Vector addition and subtraction, scalar and vector products, Unit Vector, Resolution of a Vector, Relative Velocity, Motion in a plane, Projectile Motion, Uniform Circular Motion.
Unit 3: Laws of Motion
The JEE main syllabus for the physics subject for the third unit is mentioned here. The important topics include – Force and inertia, Newton’s First Law of motion; Momentum, Newton’s Second Law of
motion, Impulses; and Newton’s Third Law of motion.
Law of conservation of linear momentum and its applications. Equilibrium of concurrent forces. Static and Kinetic friction, laws of friction, rolling friction. Dynamics of uniform circular motion:
centripetal force and its applications: vehicle on a level circular road, vehicle on a banked road.
Unit 4: Work, Energy, and Power
The JEE main syllabus for the physics subject for the fourth unit is mentioned here. The important topics include – Work done by a constant force and a variable force; kinetic and potential energies,
work-energy theorem, and power.
The potential energy of spring conservation of mechanical energy, conservative and nonconservative forces; motion in a vertical circle: Elastic and inelastic collisions in one and two dimensions.
Unit 5: Rotational Motion
The JEE main syllabus for the physics subject for the fifth unit is mentioned here. The important topics include the centre of the mass of a two-particle system, the Centre of the mass of a rigid
body; Basic concepts of rotational motion; moment of a force; torque, angular momentum, conservation of angular momentum and its applications.
The moment of inertia, the radius of gyration, values of moments of inertia for simple geometrical objects, parallel and perpendicular axes theorems, and their applications. Equilibrium of rigid
bodies, rigid body rotation and equations of rotational motion, comparison of linear and rotational motions.
Unit 6: Gravitation
The JEE main syllabus for the physics subject for the sixth unit is mentioned here. The important topics include – The universal law of gravitation. Acceleration due to gravity and its variation with
altitude and depth. Kepler’s law of planetary motion.
Gravitational potential energy; gravitational potential. Escape velocity, Motion of a satellite, orbital velocity, time period, and energy of satellite.
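The escape velocity named above follows from energy conservation at a planet’s surface (mass $M$, radius $R$, surface gravity $g = GM/R^2$):

```latex
\frac{1}{2}mv_e^2 = \frac{GMm}{R}
\;\Rightarrow\;
v_e = \sqrt{\frac{2GM}{R}} = \sqrt{2gR}
```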
Unit 7: Properties of Solids and Liquids
The JEE main syllabus for the physics subject for the seventh unit is mentioned here. The important topics include – Elastic behaviour, Stress-strain relationship, and Hooke’s Law. Young’s modulus,
bulk modulus, and modulus of rigidity. Pressure due to a fluid column; Pascal’s law and its applications. Effect of gravity on fluid pressure. Viscosity, Stokes’ law, terminal velocity, streamline and turbulent flow.
Critical velocity. Bernoulli’s principle and its applications. Surface energy and surface tension, angle of contact, excess of pressure across a curved surface, application of surface tension –
drops, bubbles, and capillary rise. Heat, temperature, thermal expansion; specific heat capacity, calorimetry; change of state, latent heat. Heat transfer conduction, convection, and radiation.
Unit 8: Thermodynamics
The JEE main syllabus for the physics subject for the eighth unit is mentioned here. The important topics include – Thermal equilibrium, the zeroth law of thermodynamics, and the concept of
temperature. Heat, work, and internal energy. The first law of thermodynamics, isothermal and adiabatic processes. The second law of thermodynamics: reversible and irreversible processes.
Unit 9: Kinetic Theory of Gases
The JEE main syllabus for the physics subject for the ninth unit is mentioned here. The important topics include the equation of the state of a perfect gas, work done on compressing a gas, the
Kinetic theory of gases – assumptions, and the concept of pressure. Kinetic interpretation of temperature: RMS speed of gas molecules: Degrees of freedom. Law of equipartition of energy and
applications to specific heat capacities of gases; Mean free path. Avogadro’s number.
Unit 10: Oscillations and Waves
The JEE main syllabus for the physics subject for the tenth unit is mentioned here. The important topics include – Oscillations and periodic motion – time period, frequency, displacement as a
function of time. Periodic functions. Simple harmonic motion (S.H.M.) and its equation; phase: oscillations of a spring -restoring force and force constant: energy in S.H.M. – Kinetic and potential
energies; Simple pendulum – derivation of expression for its time period.
Wave motion. Longitudinal and transverse waves, speed of the travelling wave. Displacement relation for a progressive wave. Principle of superposition of waves, reflection of waves. Standing waves in
strings and organ pipes, fundamental mode, and harmonics. Beats.
Unit 11: Electrostatics
The JEE main syllabus for the physics subject for the eleventh unit is mentioned here. The important topics include –
• Electric charges: Conservation of charge. Coulomb’s law forces between two point charges, forces between multiple charges: superposition principle and continuous charge distribution.
• Electric field: Electric field due to a point charge, Electric field lines. Electric dipole, Electric field due to a dipole. Torque on a dipole in a uniform electric field.
• Electric flux. Gauss’s law and its applications to find field due to infinitely long uniformly charged straight wire uniformly charged infinite plane sheet, and uniformly charged thin spherical
shell. Electric potential and its calculation for a point charge, electric dipole and system of charges; potential difference, Equipotential surfaces, Electrical potential energy of a system of
two point charges and of electric dipole in an electrostatic field.
• Conductors and insulators. Dielectrics and electric polarization, capacitors and capacitances, the combination of capacitors in series and parallel, and capacitance of a parallel plate capacitor
with and without dielectric medium between the plates. Energy is stored in a capacitor.
Unit 12: Current Electricity
The JEE main syllabus for the physics subject for the twelfth unit is mentioned here. The important topics include – Electric current. Drift velocity, mobility, and their relation with electric
current. Ohm’s law. Electrical resistance. V-l characteristics of Ohmic and non-ohmic conductors. Electrical energy and power. Electrical resistivity and conductivity.
Series and parallel combinations of resistors; Temperature dependence of resistance. Internal resistance, potential difference, and emf of a cell, a combination of cells in series and parallel.
Kirchhoff’s laws and their applications. Wheatstone bridge. Metre Bridge.
Unit 13: Magnetic Effect of Current and Magnetism
The JEE main syllabus for the physics subject for the thirteenth unit is mentioned here. The important topics include –
• Biot – Savart law and its application to the current carrying circular loop. Ampere’s law and its applications to infinitely long current carrying straight wire and solenoid. Force on a moving
charge in uniform magnetic and electric fields.
• Force on a current-carrying conductor in a uniform magnetic field. The force between two parallel currents carrying conductors-definition of ampere. Torque experienced by a current loop in a
uniform magnetic field: Moving coil galvanometer, its sensitivity, and conversion to ammeter and voltmeter.
• Current loop as a magnetic dipole and its magnetic dipole moment. Bar magnet as an equivalent solenoid, magnetic field lines; Magnetic field due to a magnetic dipole (bar magnet) along its axis
and perpendicular to its axis. Torque on a magnetic dipole in a uniform magnetic field. Para-, dia- and ferromagnetic substances with examples, the effect of temperature on magnetic properties.
Unit 14: Electromagnetic Induction and Alternating Currents
The JEE main syllabus for the physics subject for the fourteenth unit is mentioned here. The important topics include – Electromagnetic induction: Faraday’s law. Induced emf and current: Lenz’s Law,
Eddy currents. Self and mutual inductance. Alternating currents, peak and RMS value of alternating current/ voltage: reactance and impedance: LCR series circuit, resonance: power in AC circuits,
wattles current. AC generator and transformer.
Unit 15: Electromagnetic Waves
The NTA JEE main syllabus 2024 for the physics subject for the fifteenth unit is mentioned here. The important topics include – Displacement current. Electromagnetic waves and their characteristics,
Transverse nature of electromagnetic waves, Electromagnetic spectrum (radio waves, microwaves, infrared, visible, ultraviolet. X-rays. Gamma rays), Applications of e.m. waves.
Unit 16: Optics
The NTA JEE main syllabus 2024 for the physics subject for the sixteenth unit is mentioned here. The important topics include –
• Reflection of light, spherical mirrors, mirror formula. Refraction of light at plane and spherical surfaces, thin lens formula, and lens maker formula. Total internal reflection and its
applications. Magnification. Power of a Lens. Combination of thin lenses in contact. Refraction of light through a prism. Microscope and Astronomical Telescope (reflecting and refracting ) and
their magnifying powers.
• Wave optics: wavefront and Huygens’ principle. Laws of reflection and refraction using Huygens principle. Interference, Young’s double-slit experiment, and expression for fringe width, coherent
sources, and sustained interference of light. Diffraction due to a single slit, width of central maximum. Polarization, plane-polarized light: Brewster’s law, uses of plane polarized light and
Unit 17: Dual Nature of Matter and Radiation
The NTA JEE main syllabus 2024 for the physics subject for the seventeenth unit is mentioned here. The important topics include the dual nature of radiation. Photoelectric effect. Hertz and Lenard’s
observations; Einstein’s photoelectric equation: particle nature of light. Matter waves-wave nature of particle, de Broglie relation.
Unit 18: Atoms and Nuclei
The NTA JEE main syllabus 2024 for the physics subject for the eighteenth unit is mentioned here. The important topics include the alpha–particle scattering experiment; Rutherford’s model of the
atom; the Bohr model, energy levels, hydrogen spectrum. Composition and size of nucleus, atomic masses, Mass-energy relation, mass defect; binding energy per nucleon and its variation with mass
number, nuclear fission, and fusion.
Unit 19: Electronic Devices
The NTA JEE main syllabus 2024 for the physics subject for the nineteenth unit is mentioned here. The important topics include – Semiconductors; semiconductor diode: I-V characteristics in forward
and reverse bias; diode as a rectifier; I-V characteristics of LED, photodiode, solar cell, and Zener diode; Zener diode as a voltage regulator. Logic gates (OR, AND, NOT, NAND, and NOR).
Unit 20: Experimental Skills
The NTA JEE main syllabus 2024 for the physics subject for the twentieth unit is mentioned here. The important topics include –
Familiarity with the basic approach and observations of the experiments and activities:
1. Vernier callipers – its use to measure the internal and external diameter and depth of a vessel.
2. Screw gauge – its use to determine the thickness/diameter of a thin sheet/wire.
3. Simple pendulum – dissipation of energy by plotting a graph between the square of amplitude and time.
4. Metre scale – the mass of a given object by the principle of moments.
5. Young’s modulus of elasticity of the material of a metallic wire.
6. Surface tension of water by capillary rise and effect of detergents.
7. Co-efficient of viscosity of a given viscous liquid by measuring the terminal velocity of a given spherical body.
8. Speed of sound in air at room temperature using a resonance tube.
9. Specific heat capacity of a given (i) solid and (ii) liquid by the method of mixtures.
10. The resistivity of the material of a given wire using a metre bridge.
11. The resistance of a given wire using Ohm’s law.
These were some of the important JEE main exam syllabi for physics.
JEE Main Syllabus 2024 for Chemistry
According to the official notification of the chemistry syllabus released by the NTA, there are 20 units in total. The list below depicts the total units for the chemistry syllabus.
• Some Basic Concepts in Chemistry
• Atomic Structure
• Chemical Bonding and Molecular Structure
• Chemical Thermodynamics
• Solutions
• Equilibrium
• Redox Reactions and Electrochemistry
• Chemical Kinetics
• Classification of Elements and Periodicity in Properties
• P-Block Elements
• d- and f-Block Elements
• Co-ordination Compounds
• Purification and Characterization of Organic Compounds
• Some Basic Principles of Organic Chemistry
• Hydrocarbons
• Organic Compounds Containing Halogens
• Organic Compounds Containing Oxygen
• Organic Compounds Containing Nitrogen
• Biomolecules
• Principles Related to Practical Chemistry
Unit 1: Some Basic Concepts in Chemistry
Matter and its nature, Dalton’s atomic theory: Concept of atom, molecule, element, and compound; Laws of chemical combination; Atomic and molecular masses, mole concept, molar mass, percentage
composition, empirical and molecular formulae; Chemical equations and stoichiometry.
Unit 2: Atomic Structure
Nature of electromagnetic radiation, photoelectric effect; Spectrum of the hydrogen atom. Bohr model of a hydrogen atom – its postulates, derivation of the relations for the energy of the electron
and radii of the different orbits, limitations of Bohr’s model; Dual nature of matter, de Broglie’s relationship. Heisenberg uncertainty principle. Elementary ideas of quantum mechanics, quantum
mechanics, the quantum mechanical model of the atom, and its important features.
Concept of atomic orbitals as one-electron wave functions: Variation of Ψ and Ψ² with r for 1s and 2s orbitals; various quantum numbers (principal, angular momentum, and magnetic quantum numbers) and
their significance; shapes of s, p, and d – orbitals, electron spin, and spin quantum number: Rules for filling electrons in orbitals – Aufbau principle. Pauli’s exclusion principle and Hund’s rule,
electronic configuration of elements, and extra stability of half-filled and completely filled orbitals.
Unit 3: Chemical Bonding and Molecular Structure
• Kossel-Lewis approach to chemical bond formation, the concept of ionic and covalent bonds.
• Ionic Bonding: Formation of ionic bonds, factors affecting the formation of ionic bonds; calculation of lattice enthalpy.
• Covalent Bonding: Concept of electronegativity. Fajan’s rule, dipole moment: Valence Shell Electron Pair Repulsion (VSEPR ) theory and shapes of simple molecules.
• Quantum mechanical approach to covalent bonding: Valence bond theory – its important features, the concept of hybridization involving s, p, and d orbitals; Resonance.
• Molecular Orbital Theory – Its important features. LCAOs, types of molecular orbitals (bonding, antibonding), sigma and pi-bonds, molecular orbital electronic configurations of homonuclear
diatomic molecules, the concept of bond order, bond length, and bond energy.
• Elementary idea of metallic bonding. Hydrogen bonding and its applications.
Unit 4: Chemical Thermodynamics
• Fundamentals of thermodynamics: System and surroundings, extensive and intensive properties, state functions, Entropy, types of processes.
• The first law of thermodynamics – Concept of work, heat, internal energy and enthalpy, heat capacity, molar heat capacity; Hess’s law of constant heat summation; Enthalpies of bond
dissociation, combustion, formation, atomization, sublimation, phase transition, hydration, ionization, and solution.
• The second law of thermodynamics – Spontaneity of processes; ΔS of the universe and ΔG of the system as criteria for spontaneity. ΔG° (Standard Gibbs energy change) and equilibrium constant.
Unit 5: Solutions
The NTA JEE main syllabus 2024 for the chemistry for the fifth unit is mentioned here. The important topics include –
• Different methods for expressing the concentration of solution – molality, molarity, mole fraction, percentage (by volume and mass both), the vapour pressure of solutions and Raoult’s Law – Ideal
and non-ideal solutions, vapour pressure – composition, plots for ideal and nonideal solutions.
• Colligative properties of dilute solutions – a relative lowering of vapour pressure, depression of freezing point, the elevation of boiling point and osmotic pressure; Determination of molecular
mass using colligative properties; Abnormal value of molar mass, Van’t Hoff factor and its significance.
Unit 6: Equilibrium
• The meaning of equilibrium is the concept of dynamic equilibrium.
• Equilibria involving physical processes: Solid-liquid, liquid-gas and solid-gas equilibria, Henry’s law. General characteristics of equilibrium involving physical processes.
• Equilibrium involving chemical processes: Law of chemical equilibrium, equilibrium constants (Kp and Kc) and their significance, the significance of ΔG and ΔG° in chemical equilibrium, factors
affecting equilibrium concentration, pressure, temperature, the effect of catalyst; Le Chatelier’s principle.
• Ionic equilibrium: Weak and strong electrolytes, ionization of electrolytes, various concepts of acids and bases (Arrhenius, Bronsted-Lowry and Lewis) and their ionization, acid-base equilibria
(including multistage ionization) and ionization constants, ionization of water. pH scale, common ion effect, hydrolysis of salts and pH of their solutions, the solubility of sparingly soluble
salts and solubility products, and buffer solutions.
Unit 7: Redox Reactions and Electrochemistry
Electronic concepts of oxidation and reduction, redox reactions, oxidation number, rules for assigning oxidation number, and balancing of redox reactions.
Electrolytic and metallic conduction, conductance in electrolytic solutions, molar conductivities and their variation with concentration: Kohlrausch’s law and its applications.
Electrochemical cells – Electrolytic and Galvanic cells, different types of electrodes, electrode potentials including standard electrode potential, half-cell and cell reactions, emf of a Galvanic
cell and its measurement: Nernst equation and its applications; Relationship between cell potential and Gibbs’ energy change: Dry cell and lead accumulator; Fuel cells.
Unit 8: Chemical Kinetics
The NTA JEE main syllabus 2024 for the chemistry for the eighth unit is mentioned here. The important topics include the rate of a chemical reaction, factors affecting the rate of reactions:
concentration, temperature, pressure, and catalyst; elementary and complex reactions, order and molecularity of reactions, rate law, rate constant and its units, differential and integral forms of
zero and first-order reactions, their characteristics and half-lives, the effect of temperature on the rate of reactions, Arrhenius theory, activation energy and its calculation, collision theory
of bimolecular gaseous reactions (no derivation).
Unit 9: Classification of Elements and Periodicity in Properties
Modern periodic law and present form of the periodic table, s, p, d and f block elements, periodic trends in properties of elements – atomic and ionic radii, ionization enthalpy, electron gain enthalpy,
valence, oxidation states, and chemical reactivity.
Unit 10: P-Block Elements
The NTA JEE main syllabus 2024 for the chemistry for the tenth unit is mentioned here. The important topics include –
• Group -13 to Group 18 Elements
• General Introduction: Electronic configuration and general trends in physical and chemical properties of elements across the periods and down the groups; unique behaviour of the first element in
each group.
Unit 11: d and f Block Elements
Transition Elements General introduction, electronic configuration, occurrence and characteristics, general trends in properties of the first-row transition elements – physical properties, ionization
enthalpy, oxidation states, atomic radii, colour, catalytic behaviour, magnetic properties, complex formation, interstitial compounds, alloy formation; Preparation, properties, and uses of K2Cr2O7,
and KMnO4.
• Inner Transition Elements
• Lanthanoids – Electronic configuration, oxidation states, and lanthanoid contraction.
• Actinoids – Electronic configuration and oxidation states.
Unit 12: Co-Ordination Compounds
Introduction to coordination compounds. Werner’s theory; ligands, coordination number, denticity. chelation; IUPAC nomenclature of mononuclear coordination compounds, isomerism; Bonding-Valence bond
approach and basic ideas of Crystal field theory, colour and magnetic properties; Importance of co-ordination compounds (in qualitative analysis, extraction of metals, and in biological systems).
Unit 13: Purification and Characterization of Organic Compounds
• Purification – Crystallization, sublimation, distillation, differential extraction, and chromatography – principles and their applications.
• Qualitative analysis – Detection of nitrogen, sulphur, phosphorus, and halogens. Quantitative analysis (basic principles only) – Estimation of carbon, hydrogen, nitrogen, halogens, sulphur,
and phosphorus.
• Calculations of empirical formulae and molecular formulae: Numerical problems in organic quantitative analysis.
Unit 14: Some Basic Principles of Organic Chemistry
Tetravalency of carbon: Shapes of simple molecules – hybridization (s and p): Classification of organic compounds based on functional groups: and those containing halogens, oxygen, nitrogen, and
sulphur; Homologous series: Isomerism – structural and stereoisomerism.
Nomenclature (Trivial and IUPAC)
• Covalent bond fission – Homolytic and heterolytic: free radicals, carbocations, and carbanions; stability of carbocations and free radicals, electrophiles, and nucleophiles.
Electronic displacement in a covalent bond
• Inductive effect, electromeric effect, resonance, and hyperconjugation. Common types of organic reactions are substitution, addition, elimination, and rearrangement.
Unit 15: Hydrocarbons
Classification, isomerism, IUPAC nomenclature, general methods of preparation, properties, and reactions.
• Alkanes – Conformations: Sawhorse and Newman projections (of ethane): Mechanism of halogenation of alkanes.
• Alkenes – Geometrical isomerism: Mechanism of electrophilic addition: addition of hydrogen, halogens, water, hydrogen halides (Markownikoffs and peroxide effect): Ozonolysis and polymerization.
• Alkynes – Acidic character: Addition of hydrogen, halogens, water, and hydrogen halides: Polymerization.
• Aromatic hydrocarbons – Nomenclature, benzene – structure and aromaticity: Mechanism of electrophilic substitution: halogenation, nitration, Friedel-Crafts alkylation and acylation, directive
influence of the functional group in monosubstituted benzene.
Unit 16: Organic Compounds Containing Halogens
• General methods of preparation, properties, and reactions; Nature of C-X bond; Mechanisms of substitution reactions.
• Uses; Environmental effects of chloroform, iodoform, freons, and DDT.
Unit 17: Organic Compounds Containing Oxygen
General methods of preparation, properties, reactions, and uses.
Alcohols: Identification of primary, secondary, and tertiary alcohols: mechanism of dehydration.
Phenols: Acidic nature, electrophilic substitution reactions: halogenation, nitration and sulphonation. Reimer-Tiemann reaction.
Ethers: Structure.
Aldehydes and Ketones: Nature of carbonyl group; Nucleophilic addition to >C=O group, relative reactivities of aldehydes and ketones; Important reactions such as – Nucleophilic addition reactions
(addition of HCN, NH3, and its derivatives), Grignard reagent; oxidation; reduction (Wolff-Kishner and Clemmensen); the acidity of α-hydrogen, aldol condensation, Cannizzaro reaction, Haloform
reaction; Chemical tests to distinguish between aldehydes and ketones.
Carboxylic Acids
Acidic strength and factors affecting it.
Unit 18: Organic Compounds Containing Nitrogen
• General methods of preparation. Properties, reactions, and uses.
• Amines: Nomenclature, classification structure, basic character, and identification of primary, secondary, and tertiary amines and their basic character.
• Diazonium Salts: Importance in synthetic organic chemistry.
Unit 19: Biomolecules
General introduction and importance of biomolecules.
CARBOHYDRATES – Classification; aldoses and ketoses: monosaccharides (glucose and fructose) and constituent monosaccharides of oligosaccharides (sucrose, lactose, and maltose).
PROTEINS – Elementary Idea of α-amino acids, peptide bonds, polypeptides. Proteins: primary, secondary, tertiary, and quaternary structure (qualitative idea only), denaturation of proteins, enzymes.
VITAMINS – Classification and functions.
NUCLEIC ACIDS – Chemical constitution of DNA and RNA.
Biological functions of nucleic acids.
Hormones (General introduction)
Unit 20: Principles Related to Practical Chemistry
Detection of extra elements (Nitrogen, Sulphur, halogens) in organic compounds; Detection of the following functional groups; hydroxyl (alcoholic and phenolic), carbonyl (aldehyde and ketones)
carboxyl, and amino groups in organic compounds.
JEE Main Syllabus 2024 for Aptitude Test
The JEE main syllabus for the aptitude test is required for paper 2a of the JEE main exam. Students must keep a note of these topics:
• Unit 1: Awareness of persons, buildings, materials
• Unit 2: Three-dimensional Perception
Unit 1: Awareness of persons, buildings, materials
The NTA JEE main syllabus 2024 for the first unit of the aptitude test is mentioned here. The important topics include –
Objects, Texture related to Architecture and Built environment, Visualizing three-dimensional objects from two-dimensional drawings, visualizing different sides of three-dimensional objects,
Analytical Reasoning, Mental Ability (Visual, Numerical and Verbal).
Unit 2: Three-dimensional Perception
The NTA JEE main syllabus 2024 for the second unit of the aptitude test is mentioned here. The important topics include –
Understanding and appreciation of scale and proportions of objects, building forms and elements, colour texture harmony and contrast Design and drawing of geometrical or abstract shapes and patterns
in pencil. Transformation of forms both 2D and 3D union, subtraction rotation, development of surfaces and volumes, Generation of plans, elevations, and 3D views of objects, creating two-dimensional
and three-dimensional compositions using given shapes and forms.
JEE Main Syllabus 2024 for Drawing Test
The JEE main syllabus for the drawing test is required for paper 2a of the JEE main exam. Students must keep a note of these topics. Sketching of scenes and activities from memory of urbanscape
(public space, market, festivals, street scenes, monuments, recreational spaces, etc.), landscape (riverfronts, jungles, gardens, trees, plants, etc.) and rural life.
Note: Candidates are advised to bring pencils, own geometry box set, erasers, colour pencils, and crayons for the Drawing Test.
JEE Main Syllabus 2024 for Planning Test
The JEE main syllabus for the planning test is required for paper 2b of the JEE main exam. Students must keep a note of these topics:
• Unit 1: General Awareness
• Unit 2: Social Sciences
• Unit 3: Thinking Skills
Unit 1: General Awareness
The syllabus for the general awareness unit of the planning test includes General knowledge questions and knowledge about prominent cities, development issues, government programs, etc.
Unit 2: Social Sciences
The syllabus for the social sciences unit of the planning test includes:
• The idea of nationalism, nationalism in India, pre-modern world, 19th-century global economy, colonialism, and colonial cities, industrialization, resources, and development, types of
resources, agriculture, water, mineral resources, industries, national economy; Human Settlements.
• Power-sharing, federalism, political parties, democracy, the constitution of India.
• Economic development – economic sectors, globalization, the concept of development, poverty; Population structure, social exclusion, and inequality, urbanization, rural development, colonial cities.
Unit 3: Thinking Skills
The syllabus for the thinking skills unit of the planning test includes comprehension (unseen passage); map reading skills, scale, distance, direction, area, etc.; critical reasoning; understanding
of charts, graphs, and tables; basic concepts of statistics and quantitative reasoning.
Is JEE 2024 syllabus released?
Yes, the JEE Main 2024 syllabus has been released. It covers topics in Mathematics, Physics, and Chemistry. For the Mathematics section, the syllabus includes topics like sets, relations, functions,
complex numbers, matrices, permutations, combinations, and more. You can find the detailed syllabus on the official website.
Is JEE 2024 going to be tough?
The JEE 2024 is expected to be challenging, as it assesses candidates’ understanding of complex concepts in Mathematics, Physics, and Chemistry. However, with consistent preparation and practice, you
can overcome the difficulty and perform well!
What is the pattern of JEE 2024?
The JEE Main 2024 exam pattern consists of three papers and will be conducted in two shifts – morning and evening.
Is vernier caliper removed from JEE 2024?
As of now, there is no official confirmation regarding the removal of the vernier caliper from the JEE 2024 syllabus. However, it’s essential to stay updated with any announcements or changes made by
the exam authorities.
Basic Properties of Incidence Matrices of Balanced Incomplete Block Designs
Recall from The Incidence Matrix of a Balanced Incomplete Block Design page that if $(X, \mathcal A)$ is a $(v, b, r, k, \lambda)$-BIBD then the incidence matrix of this BIBD is the $v \times b$
matrix $M$ whose entries $m_{ij}$ are defined as:
m_{ij} = \begin{cases} 0 & \text{if } x_i \not\in A_j \\ 1 & \text{if } x_i \in A_j \end{cases}
We will now state some very basic properties regarding these incidence matrices.
Proposition 1: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then every column of $M$ contains exactly $k$ many $1$s.
• Proof: The number $k$ represents the size of each block. Each column in $M$ represents a block in the BIBD and along each column there is a $1$ if the corresponding point is contained in that
block and a $0$ otherwise. Hence each column of $M$ contains exactly $k$ many $1$s. $\blacksquare$
Proposition 2: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then every row of $M$ contains exactly $r$ many $1$s.
• Proof: The number $r$ represents how many blocks contain any arbitrary point in $X$. Each row in $M$ represents a point in the BIBD, and along each row there is a $1$ in column $j$ if that point
is contained in block $A_j$ and a $0$ otherwise. Since each point lies in exactly $r$ blocks, each row of $M$ contains exactly $r$ many $1$s. $\blacksquare$
Proposition 3: Let $(X, \mathcal A)$ be a $(v, b, r, k, \lambda)$-BIBD and let $M$ be a corresponding incidence matrix. Then between any two distinct rows there are exactly $\lambda$ many $1$s in the
same corresponding columns.
• Proof: The number $\lambda$ means that for any pair of distinct points $x, y \in X$ with $x \neq y$ there are exactly $\lambda$ blocks containing both $x$ and $y$. So between any two
distinct rows (representing points) there are exactly $\lambda$ many $1$s in the same corresponding columns. $\blacksquare$
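These propositions are easy to verify computationally on a small example. The sketch below checks all three on the Fano plane, a $(7, 7, 3, 3, 1)$-BIBD; the block list and the helper code are our own illustration, not taken from this page:

```python
from itertools import combinations

# Blocks of the Fano plane, a (7, 7, 3, 3, 1)-BIBD on points 0..6
# (so v = b = 7, r = k = 3, lambda = 1).
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]
v, k, r, lam = 7, 3, 3, 1

# Incidence matrix M: rows are points, columns are blocks, M[i][j] = 1 iff x_i in A_j.
M = [[1 if i in A else 0 for A in blocks] for i in range(v)]

# Proposition 1: every column contains exactly k ones.
col_sums = [sum(M[i][j] for i in range(v)) for j in range(len(blocks))]

# Proposition 2: every row contains exactly r ones.
row_sums = [sum(row) for row in M]

# Proposition 3: any two distinct rows have a common 1 in exactly lambda columns.
overlaps = [sum(a * b for a, b in zip(M[i], M[j]))
            for i, j in combinations(range(v), 2)]

print(all(s == k for s in col_sums),
      all(s == r for s in row_sums),
      all(t == lam for t in overlaps))
```

The same three checks apply verbatim to any $(v, b, r, k, \lambda)$-BIBD with the corresponding parameters substituted.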
Chapter Review
12.1 Linear Equations
The most basic type of association is a linear association. This type of relationship can be defined algebraically by the equations used, numerically with actual or predicted data values, or
graphically from a plotted curve. Lines are classified as straight curves. Algebraically, a linear equation typically takes the form y = mx + b, where m and b are constants, x is the independent
variable, and y is the dependent variable. In a statistical context, a linear equation is written in the form y = a + bx, where a and b are the constants. This form is used to help you distinguish
the statistical context from the algebraic context. In the equation y = a + bx, the constant b that multiplies the x variable (b is called a coefficient) is called the slope. The slope describes the
rate of change between the independent and dependent variables; in other words, the rate of change describes the change that occurs in the dependent variable as the independent variable is changed.
In the equation y = a + bx, the constant a is called the y-intercept. Graphically, the y-intercept is the y-coordinate of the point where the graph of the line crosses the y-axis. At this point, x = 0.
The slope of a line is a value that describes the rate of change between the independent and dependent variables. The slope tells us how the dependent variable (y) changes for every one-unit increase
in the independent (x) variable, on average. The y-intercept is used to describe the dependent variable when the independent variable equals zero. Graphically, the slope is represented by three line
types in elementary statistics.
12.2 Scatter Plots
Scatter plots are particularly helpful graphs when we want to determine whether there is a linear relationship among data points. These plots indicate both the direction of the relationship between
the x variables and the y variables and the strength of the relationship. We calculate the strength of the relationship between an independent variable and a dependent variable using linear regression.
12.3 The Regression Equation
A regression line, or a line of best fit, can be drawn on a scatter plot and used to predict outcomes for the x and y variables in a given data set or sample data. There are several ways to find a
regression line, but usually the least-squares regression line is used because it creates a uniform line. Residuals, also called errors, measure the distance from the actual value of y and the
estimated value of y. The sum of squared errors, or SSE, when set to its minimum, calculates the points on the line of best fit. Regression lines can be used to predict values within the given set of
data but should not be used to make predictions for values outside the set of data.
The correlation coefficient, r, measures the strength of the linear association between x and y. The variable r has to be between –1 and +1. When r is positive, x and y tend to increase and decrease
together. When r is negative, x increases and y decreases, or the opposite occurs: x decreases and y increases. The coefficient of determination, r^2, is equal to the square of the correlation
coefficient. When expressed as a percentage, r^2 represents the percentage of variation in the dependent variable, y, that can be explained by variation in the independent variable, x, using the
regression line.
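The quantities above (slope, intercept, correlation coefficient, and coefficient of determination) can all be computed by hand from the usual sums of squares. A minimal sketch on made-up data (the data values below are ours, purely for illustration):

```python
# Least-squares fit of y = a + bx and the correlation coefficient r,
# computed from sums of squares on toy data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

sxx = sum((x - xbar) ** 2 for x in xs)
syy = sum((y - ybar) ** 2 for y in ys)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

b = sxy / sxx                   # slope
a = ybar - b * xbar             # y-intercept (statistical form y = a + bx)
r = sxy / (sxx * syy) ** 0.5    # correlation coefficient, always in [-1, 1]
r_squared = r ** 2              # fraction of variation in y explained by x

print(round(b, 3), round(a, 3), round(r, 3))
```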
12.4 Testing the Significance of the Correlation Coefficient (Optional)
Linear regression is a procedure for fitting a straight line of the form ŷ = a + bx to data. The conditions for regression are as follows:
• Linear—In the population, there is a linear relationship that models the average value of y for different values of x.
• Independent—The residuals are assumed to be independent.
• Normal—The y values are distributed normally for any value of x.
• Equal variance—The standard deviation of the y values is equal for each x value.
• Random—The data are produced from a well-designed random sample or a randomized experiment.
The slope, b, and intercept a of the least-squares line estimate the slope β and intercept α of the population (true) regression line. To estimate the population standard deviation of y, σ, use the
standard deviation of the residuals, s: $s = \sqrt{\frac{SSE}{n-2}}$. The variable ρ (rho) is the population correlation coefficient. To test the null hypothesis, H[0]: ρ = hypothesized value, use a
linear regression t-test. The most common null hypothesis is H[0]: ρ = 0, which indicates there is no linear relationship between x and y in the population. The TI-83, 83+, 84, 84+ calculator
function LinRegTTest can perform this test (STATS, TESTS, LinRegTTest).
12.5 Prediction (Optional)
After determining the presence of a strong correlation coefficient and calculating the line of best fit, you can use the least-squares regression line to make predictions about your data.
12.6 Outliers
To determine whether a point is an outlier, do one of the following:
1. Input the following equations into the TI 83, 83+,84, or 84+ calculator:
$y_1 = a + bx, \quad y_2 = a + bx + 2s, \quad y_3 = a + bx - 2s,$
where s is the standard deviation of the residuals.
If any point is above y[2] or below y[3], then the point is considered to be an outlier.
2. Use the residuals and compare their absolute values to 2s, where s is the standard deviation of the residuals. If the absolute value of any residual is greater than or equal to 2s, then the
corresponding point is an outlier.
3. Note: The calculator function LinRegTTest (STATS, TESTS, LinRegTTest) calculates s.
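The second, calculator-free method (compare each residual's absolute value to 2s) can be sketched in a few lines. The data and the fitted line below are invented for illustration; in practice a and b come from the least-squares fit:

```python
# The 2s outlier rule on residuals. The line y = a + bx is assumed already
# fitted; the data below are invented, with one planted outlier.
a, b = 1.0, 2.0
data = [(1, 3.05), (2, 5.1), (3, 6.9), (4, 9.0),
        (5, 11.1), (6, 12.95), (7, 25.0)]

residuals = [y - (a + b * x) for x, y in data]
n = len(data)

# s = sqrt(SSE / (n - 2)), the standard deviation of the residuals.
sse = sum(e ** 2 for e in residuals)
s = (sse / (n - 2)) ** 0.5

# Flag any point whose residual is at least 2s in absolute value.
outliers = [(x, y) for (x, y), e in zip(data, residuals) if abs(e) >= 2 * s]
print(outliers)  # the planted point (7, 25.0) is flagged
```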
Calling all Physicists - help with physics needed!
Ok, here's the next ten ..... :-)
#11. D and E are out as the lenses are convex ( if concave one can have a virtual image on the same side as the object ). So how far to the right of the
rightmost lens?
If you have a focal length of F, with D1 and D2 being the distances from ( midline within ) the lens then
1/F = 1/D1 + 1/D2
if either D1 ( or D2 ) go to infinity then 1/F = 1/D2 ( 1/D1 ), thus the image is at D2 = F ( or D1 = F ). This defines the meaning of focal length ie. at
what point do infinitely distant rays ( parallel to symmetry axis ) converge?
So for the left lens with object at 40cm .....
1/20 = 1/40 + 1/D2
-> 1/D2 = 1/40 and D2 = 40 which is 10cm to the right of the second lens. You still use the above formula again but with care in interpretation.
1/10 = 1/D1 + 1/(-10) [ the converging rays act as a virtual object 10cm past the lens ]
1/D1 = 2/10
D1 = 5cm
now this is on the right side of the second lens also. Basically you have rays from the first lens already converging before they hit the second's
surface. If from infinity ie. parallel they would have converged at 10cm. So the extra convergence has brought the image in closer than 10cm.
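The same two-step calculation can be scripted with the lens formula 1/F = 1/D1 + 1/D2, treating the converging rays at the second lens as a virtual object at a negative distance (the helper below is our own sketch, not from the original problem set):

```python
def image_distance(f, d_obj):
    """Solve the thin-lens equation 1/f = 1/d_obj + 1/d_img for d_img.
    Distances in cm; a virtual object gets a negative d_obj."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

# Left lens: f = 20 cm, real object 40 cm away.
d_img1 = image_distance(20.0, 40.0)    # about 40 cm, i.e. 10 cm past the second lens

# Right lens: f = 10 cm; the still-converging rays form a virtual
# object 10 cm beyond it, so d_obj = -10 cm.
d_img2 = image_distance(10.0, -10.0)   # about 5 cm to the right of the second lens

print(d_img1, d_img2)
```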
#12. If the object was at the focal point ( F/II ) then the rays would go to infinity on the left side ( definition of focal length ). Now the object is
closer in to the mirror than F and rays will make a greater angle to the normal when hitting the mirror's surface. This will give diverging rays to
the left which converge to a virtual image on the right. Virtual means the light never actually comes/goes there but the rays behave as if they did!
Thus V is the only possible point - option E.
#13. I think it was Airy or Rayleigh who did the diffraction analysis and came up with
sin(angular separation) = wavelength / aperture
- plus or minus a bit ( factor of 1.2-ish? ) as one has to 'define' when a diffraction minimum from one source overlaps/obscures an adjacent maximum from the other. For small angles the sin(angle) =
angle. So
aperture = wavelength / angle = 600 * 10^(-9) / 3 * 10^(-5) = 200 * 10^(-4) = 2 * 10^(-2) = 2 cm
So I'd take option B.
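As a quick numeric check of #13 (small-angle form of the Rayleigh criterion, aperture ≈ wavelength / angle):

```python
wavelength = 600e-9   # m
angle = 3e-5          # rad; small enough that sin(theta) ~ theta
aperture_cm = (wavelength / angle) * 100  # convert m to cm
print(aperture_cm)    # about 2 cm, matching option B
```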
#14. With the source at the face the detector is capturing all radiation emitted to that side, a hemispheres worth. The 8cm depth/thickness is sufficient to grab all that is incident when source is
close, assume the same with source at distance. So at 1m away a surface of 8cm radius intercepts what fraction of a hemisphere? [ Area of a sphere = 4 * PI * R^2 ]
Total area of hemisphere = (4 * PI * 1^2)/2
Total area of surface = PI * (0.08)^2
Total area of surface/Total area of hemisphere = ((0.08)^2)/2 = 8 * 10^(-4)/2 = 4 * 10^(-4)
Thus option C
#15. A - although not as accurate as the others ( ie. reflecting the true value ), they have the least spread of results.
#16. To use a sample's statistics to guess the error in one's estimate of the population mean you use:
(variance of sample)/(size of sample)
meaning that you'd like the sample mean to estimate the ( unknown ) population mean to within some degree of confidence. The central limit theorem says that
pretty well every type of population distribution ( various assumptions! ) will behave like this.
The mean is 2 and the variance is 8 for this sample. We want
8/N ≤ (0.1)^2, ie. N ≥ 100 * 8 = 800
So that would be option C.
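The sample-size arithmetic in #16 can be written out explicitly. Here the target standard error of 0.1 is our assumption, chosen to reproduce the 100 * 8 factor above:

```python
import math

sample_variance = 8.0
target_se = 0.1   # assumed desired standard error of the sample mean

# Require variance of the mean, sample_variance / N, to be <= target_se**2.
n_needed = math.ceil(sample_variance / target_se ** 2)
print(n_needed)   # 800
```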
#17. They all identify 15 electron spots, so that's good. The right one is option B. [ The orbital filling order doesn't get squirrelly until the 4th row of the periodic table. ]
#18. A singly ionised Helium is a hydrogen-like where the energy goes like 1/Z^2 ( Z is nuclear charge ). For hydrogen ( Z = 1 ) E = 13.6 eV hence for Z = 2, E = 13.6 * 4 = 54.4. Option D. The double
ionization value is a red herring ( except that the singly ionised value ought be less than that, but all of A thru E are ).
#19. Assuming we are talking basic Hydrogen to Helium fusion, then 4 separate protons wind up being bound as 2 protons + 2 neutrons in Helium. The electrons
are irrelevant, so is talk of atoms vs. ions. So that's option B, the others don't balance nucleons except for option D but that's not the primary source in
our Sun.
#20. E - 'braking radiation' as the electrons get deep into the well near the metal nuclei and flick off photons going around the sharp turn. Unrelated to
electron shells except that you need energy to penetrate them to get to the nucleus. Thus smooth and continuous as all sorts of offsets from 'dead on' the
nucleus occurs.
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Of the four fundamental interactions, gravitation is the dominant at astronomical length scales. Gravity effects are cumulative; by contrast, the effects of positive and negative charges tend to
cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with
distance; their effects are confined mainly to sub-atomic length scales.
This diagram shows Earth's location in the universe on increasingly larger scales. The images, labeled along their left edge, increase in size from left to right, then from top to bottom.
The size of the universe is somewhat difficult to define. According to the general theory of relativity, far regions of space may never interact with ours even in the lifetime of the universe due to
the finite speed of light and the ongoing expansion of space. For example, radio messages sent from Earth may never reach some regions of space, even if the universe were to exist forever: space may
expand faster than light can traverse it.
Because we cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite.
Spacetimes are the arenas in which all physical events take place. The basic elements of spacetimes are events. In any given spacetime, an event is defined as a unique position at a unique time. A
spacetime is the union of all events in the same way that a line is the union of all of its points, formally organized into a manifold.
Spacetime events are not absolutely defined spatially and temporally but rather are known to be relative to the motion of an observer. Minkowski space approximates the universe without gravity; the
pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity.
An important parameter determining the future evolution of the universe is the density parameter, Omega, defined as the average matter density of the universe divided by a critical value of
that density. This selects one of three possible geometries depending on whether Omega is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes.
Observations, including the Cosmic Background Explorer, Wilkinson Microwave Anisotropy Probe, and Planck maps of the CMB, suggest that the universe is infinite in extent with a finite age, as
described by the Friedmann–Lemaître–Robertson–Walker (FLRW) models.
The universe is composed almost completely of dark energy, dark matter, and ordinary matter. Other contents are electromagnetic radiation (estimated to constitute from 0.005% to close to 0.01% of the
total mass-energy of the universe) and antimatter.
The observable universe is isotropic on scales significantly larger than superclusters, meaning that the statistical properties of the universe are the same in all directions as observed from Earth.
The universe is bathed in highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.72548 kelvins.
Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose
energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be
formulated to be equivalent to vacuum energy. Scalar fields having only a slight amount of spatial inhomogeneity would be difficult to distinguish from a cosmological constant.
Ordinary matter commonly exists in four states: solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein
condensates and fermionic condensates.
A photon is the quantum of light and all other forms of electromagnetic radiation. It is the force carrier for the electromagnetic force, even when static via virtual photons. The effects of this
force are easily observable at the microscopic and at the macroscopic level because the photon has zero rest mass; this allows long distance interactions. Like all elementary particles, photons are
currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of waves and of particles.
With the assumption of the cosmological principle that the universe is homogeneous and isotropic everywhere, a specific solution of the field equations that describes the universe is the metric
tensor called the Friedmann–Lemaître–Robertson–Walker (FLRW) metric.
Some speculative theories have proposed that our universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of
the universe. Scientific multiverse models are distinct from concepts such as alternate planes of consciousness and simulated reality.
The Indian philosopher Kanada, founder of the Vaisheshika school, developed a notion of atomism and proposed that light and heat were varieties of the same substance.
|
{"url":"https://ww82.brnodaily.cz/2018/07/10/news/politics/ministers-adopt-socialism-ethos-values-rural-areas-combat-capitalism/","timestamp":"2024-11-14T07:00:36Z","content_type":"text/html","content_length":"228369","record_id":"<urn:uuid:2a271c14-a38d-46d1-a2b7-afd9a356f408>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00323.warc.gz"}
|
Reflections of a High School Math Teacher
I want my students to struggle--to squirm and to be frustrated. I often struggle with math questions. It takes me a while to process and sort out my thinking. Struggling with a math problem gives me
confidence for the next one. In the classroom I love when my students are working on a difficult math question and then someone has an ‘aha’ moment. It is like someone receiving a clue to the
location of a hidden treasure. It spurs others to continue working and finding other connections. I want them to take ownership and celebrate the journey. I could sum up my teaching philosophy with
the phrase, "I want my students in a productive struggle." I want my students to struggle, but I want them to be productive in that quest. If the struggle is too easy or too tough, then I need to
help make some adjustments by creating the right environment and finding the right questions to problem solve. That is why I'm trying 20% Struggle Time.
20% Struggle Time (Problem Solving)
My instructional learning coach Chris DeWald recently challenged me. "Why don't you devote 20% of your class time to (productive struggle) problem solving questions?" I have accepted the challenge and I've been on this journey since late last
year. I'm taking 20% of my class time and devoting it towards problem solving, and not necessarily content-based problem solving either. As far as actual time in the classroom that means 1 day a week
or two half days a week we will be in a productive struggle solving problems. It has been fun and really rewarding--and frustrating. It is all about finding the right problems and creating the right
atmosphere for learning.
Easy Problems Do Not Accomplish Much
You know what an easy problem is.... the ones that you don't have to think too much to solve. If you don't have to think too much to solve something, you probably won't remember too much about your
work, nor will you be satisfied with yourself. I tend to give out too many easy problems. My lesson will include some kind of formula and the problems use the formula--too easy and too forgettable.
Fawn Nguyen says, “Are they really ‘problems’ if we know how to solve them?” I firmly believe this.
Problem Solving creates productive struggle. I'm not talking about word problems at the end of each section of practice problems. I'm talking about NON-Content Specific Problem Solving. The problems
students can't just look up in their notes and see one almost exactly like the one given. A problem that they can solve, but it will take time and maybe more than one class period. It will take
reflection of possible solutions. It might even take some research, or multiple attempts at the problem, or scaffolding from the teacher or other classmates.
What does 20% Struggle Time look like when Problem Solving?
1. I have a walk that I take often that leads me under a canopy of oak trees. Students need to feel like they are in a canopy of oak trees—a safety net. They need to know that the students around
them and especially myself as the teacher are WITH them. A positive classroom atmosphere creates safety nets in case they don't get it. Examples of safety nets would be: formative testing without
grade impact, retakes on summatives, daily work that explores but does not affect final grades, classmates who are willing to help others, and a teacher who offers multiple avenues for help.
2. Finding the right problems is important--Inviting and Approachable Problems with Escalating Difficulty (Low Floor High Ceiling). Everyone has a different threshold of pain. Some complain wildly
because of a small cut, and others would not be fazed by a large gash. That is just how we are made. Accordingly, I believe we all have a different threshold of struggle. For instance, some people
might look into a problem with their laptop with bitter frustration and give up quickly, while another might struggle for a long period of time. It is the same thing with a problem in math class,
some students look at the problem and then give up quickly. Other students look at the problem and start formulating some ideas. The trick is to find questions that anyone can start and yet most will
be challenged at some point in the problem. Here are some of my 'goto' problems.
There are actual examples at the bottom of the post
3. I’m still working out how to approach grading Productive Struggle. Is my 20% struggle time a completion grade? Do I grade on correctness? Should there be a rubric? Maybe I should just build in
accountability with group presentations of thinking? Any thoughts or suggestions on this would be welcome!!
Plumbing and my Father-in-Law
Confidence solving problems is a real world skill. When we had just bought our first house, my father-in-law and I were looking into the electrical box to see how to stop one of the circuit breakers
from going off regularly. It was a nest of confusing wires. I asked him how he felt so confident that he could figure this out. I was ready to give up. He said, "Whatever mess I get myself into, I’ll
just keep trying to figure the problem out by taking a generous amount of time and many setbacks trying to prevail. A last resort solution would be that I could hire someone to fix it." I have used
this advice often and have learned that with determination and confidence, I can often see it through. Like every year when I do my taxes. I can usually figure out most questions (with Turbo Tax).
This is the attitude that I want my students to come out of my class with. I want them to be confident that they can attempt and find a method of solution with almost anything they encounter and that
it will be a productive struggle. My father-in-law without knowing it, was using the Mathematical Practice Number 1. I'm hoping that 20% Struggle Time will help my students with this mindset.
Teach Before or After They Need It?
My wife will never let me teach her about any technology until she needs it. Why? Because she says she will forget it all and then just have to ask me again when she actually does need it. We need to
give the problem first and then our students will need to use the skills and formulas to get the answer. That is true problem solving. Sometimes our classrooms are backwards in that we do all this
upfront teaching so that they will remember it when they need it. Which by the way, rarely happens. Needing it might mean that it might be on a quiz that happens a couple days after they learned it
so that you can just memorize the steps to get through. Introducing 20% struggle problems will help exercise the skills
needed to solve new challenging problems that they have never seen before.
What about the Content?
Some would argue that they don't have enough time to teach the content now, how are they going to introduce more problems? I get that. However, I believe math is an attitude not a skill. We really
are teaching our students to have grit, to never give up, to exhaust all resources, to struggle, and to fail. We can teach skills until our students are robots or we can teach them to be real world problem solvers.
The Struggle is Real, so that is why I'm going on the 20% struggle time journey. I'll keep you posted. I welcome any thoughts or advice.
My Best,
My adult son has had some medical issues recently. My wife said something to him that really made me think. She said, "We are 'with you' as you go through this". It was simple, yet really meaningful.
Now fast forward to the education world. I thought to myself, do my students know that I'm WITH THEM? Do I communicate the idea of "WITH YOU" in everything I do as a teacher? Do I actually tell them
that we are partners in this adventure/struggle? Every single interaction with each student is important.
A message to my students: I'm WITH YOU. I can feel your struggle. I hurt when you hurt and am happy when you thrive. I can't do it for you, but I will be right beside you encouraging you the whole
way. We are partners. You are not alone.
This post is now on the Corwin-Connect Site
I want my students to ... (in no particular order)
Have fun in class
Move while learning
Create stuff
Be respectful
Embrace learning from mistakes
Teach someone else something
Think "Whoa, this is cool"
Have an "I can do this" attitude
Give/Take advice freely
Discern what others say mathematically
Notice patterns
Enjoy a challenge
Ask questions
Understand/Believe in many different approaches to a problem.
Transfer learning to new places within the course
Work hard
Listen closely to others
Connect ideas
Feel safe and a part of a collaborative community
Care about others and know that others care
I hope my students....
Love rigor
Transfer ideas to new places outside the course
Say that Math is their favorite subject
Get the grade they want
Are organized
Set personal goals
Ask questions for curiosity's sake alone
Are completely ready for next year's course
Please help me with this list. What else? Please leave a comment with your thoughts on what should be added.
CS1026A Assignment 3 solved
Exercise 1. This problem deals with parsing numerical data and performing simple statistical analysis. Imagine this scenario. Your chemistry lab-mate has collected many measurements of an experiment
and has put them all into a single file for you. This file is named
A3-data-file.txt and is posted on OWL alongside these instructions. It is now your job to
analyze the results. In this experiment there were four distinct trials and you must perform
the analysis on each trial individually. Unfortunately, your lab-mate has mixed data from
different trials together! Fortunately, each measurement has a label to indicate of which trial
it is a part. For example, the first few lines of the data file are:
trial1 123.43
trial3 341.32
trial2 123.42
trial4 89.337
trial3 355.12
Therefore, your program must perform the analysis of the data in three steps:
1. Read the data in the file and sort each measurement based on the trial of which it is a part.
2. Perform the statistical analysis on each trial.
3. Write the statistical analysis of each trial to a new file (i.e. write out four different files).
To perform these three steps your program should be broken into two parts.
Part 1. In this section we will define the contents of the myStatistics.py file. In this
file we look to implement six functions for statistical analysis: myMin, myMax, myAverage,
myMedian, myStandardDeviation, myCountBins.
For all functions do not use Python's statistics package nor the built-in min, max
functions. You must implement the math yourself.
myMin is a function which takes as its only parameter a list of floating point values and
returns the minimum value among all values in the list.
myMax is a function which takes as its only parameter a list of floating point values and
returns the maximum value among all values in the list.
myAverage is a function which takes as its only parameter a list of floating point values and
returns the average of all values in the list.
myMedian is a function which takes as its only parameter a list of floating point values and
returns the median of the values in the list.
myStandardDeviation is a function which takes as its only parameter a list of floating point
values and returns the standard deviation of the sample of values. Standard deviation (σ)
can be computed by the following formula:
σ = sqrt( (1 / (n - 1)) * sum_{i=1}^{n} (x_i - x̄)^2 )
where x_i are the individual values in a list of n values, and x̄ is the average of the list of values.
myCountBins is a function which takes two parameters: a list of floating point values, and a
floating point number. This second parameter is the bin size. This function will implement
a simplified form of data binning https://en.wikipedia.org/wiki/Data_binning. This function will go through the list of values given as the first parameter to count the number of
values in the list which fall into a certain "bin". The bins are defined as: 0 ≤ x_i < bin size,
bin size ≤ x_i < 2 × bin size, 2 × bin size ≤ x_i < 3 × bin size, . . . until all values in the input list
have been found to exist in a certain bin. For example, if the maximum value in the input list
is 30 and the bin size is 10 then there should be 4 bins: 0 ≤ x_i < 10, 10 ≤ x_i < 20, 20 ≤ x_i < 30,
30 ≤ x_i < 40.
All functions only need to handle lists of floating point numbers. That is to say, if you
come across a non-number in the input list then your program is allowed to crash. The
myCountBins function only needs to handle non-negative numbers in its input list. All
other functions must handle all possible floating point numbers.
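A possible sketch of myCountBins under the same constraints (again mine, not the posted solution); integer floor division by the bin size picks each value's bin:

```python
def myCountBins(values, bin_size):
    # Find the largest value to size the bin list (no built-in max allowed).
    largest = values[0]
    for v in values:
        if v > largest:
            largest = v
    num_bins = int(largest // bin_size) + 1   # bins [0,b), [b,2b), ...
    counts = [0] * num_bins
    for v in values:
        counts[int(v // bin_size)] += 1       # v falls in bin floor(v / b)
    return counts
```

With the spec's example shape, myCountBins([5.0, 15.0, 30.0, 7.0], 10.0) makes four bins and returns [2, 1, 0, 1] — the value 30 lands in the fourth bin, 30 ≤ x < 40, matching the example above.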
Part 2. In this section we define the contents of the userid_main.py file. In this file you
shall import your other file (e.g. by the command from myStatistics import *) and then
use those imported functions within your main function to prompt the user for the name of
the data file, read the data in that file, and then output the results of the analysis to four
different files. In particular, your main function should:
1. Prompt the user to input the name of the file which contains the data to analyze.
2. Open the file for reading, if possible. If the file is not found or not available for any
reason, simply print an error message "Sorry, the file is not available" and
then terminate the program.
• Hint: use try: except: to accomplish this.
3. Read all of the data in the file and separate the data into four different lists, one list
for each trial.
• Hint: A dictionary of lists would be very helpful here!
4. For each trial compute, using your myStatistics module,
• the minimum,
• the maximum,
• the average,
• the median,
• the standard deviation, and
• the list of bin counts for a bin size of 25.
5. For each trial we wish to output the computed data to the files trial1-data-analysis.txt,
trial2-data-analysis.txt, trial3-data-analysis.txt, and trial4-data-analysis.txt
where trial1 data goes in the file trial1-data-analysis.txt, etc. The data should be
output in the following format where each item inside angled brackets (e.g. <minimum>)
is replaced by the actual value computed for that trial. All numbers should be printed
using 5 digits after the decimal place, except for bin_count which can simply be the
string representation of the list of counts.
minimum : <minimum>
maximum : <maximum>
average : <average>
median : <median>
std_dev : <std_dev>
bin_count: <bin_count>
An example output file can be found on OWL next to these assignment instructions
but is also repeated here as an example. trial1-data-analysis.txt should have the
following contents:
minimum : 0.47074
maximum : 398.21285
average : 207.06971
median : 209.04432
std_dev : 115.91112
bin_count: [16, 12, 16, 15, 18, 19, 16, 13, 13, 12, 29, 11, 19, 14, 19, 19]
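A hypothetical sketch of the two data-handling pieces of userid_main.py (not the posted solution; in the real file the statistics come from `from myStatistics import *`, and step 2 wraps the open() call in try:/except: printing "Sorry, the file is not available"):

```python
def read_trials(filename):
    """Step 3: separate measurements into one list per trial label."""
    trials = {}                      # e.g. {"trial1": [123.43, ...], ...}
    with open(filename) as f:
        for line in f:
            parts = line.split()     # "trial1 123.43" -> ["trial1", "123.43"]
            if len(parts) == 2:
                trials.setdefault(parts[0], []).append(float(parts[1]))
    return trials

def write_analysis(label, results):
    """Step 5: write one trial's statistics to <label>-data-analysis.txt.
    `results` holds values computed with the myStatistics functions."""
    with open(label + "-data-analysis.txt", "w") as out:
        for key in ("minimum", "maximum", "average", "median", "std_dev"):
            out.write("{:<9}: {:.5f}\n".format(key, results[key]))
        out.write("bin_count: {}\n".format(results["bin_count"]))
```

A real main() would prompt with input() for the filename, call read_trials, then for each trial build the results dict from the myStatistics functions (bin size 25) and call write_analysis.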
The Magic Cafe Forums - Seven Queens stack is the easiest acaan calculation of the good looking tetradistic stacks?
Does anyone know of a tetradistic stack that is:
1. easier to learn to do?
2. and quicker to calculate the position?
Note that the Seven Queens stack requires no memorization of any card position.
The Eight Kings stacks are good looking tetradistic stacks but I have not found any Eight Kings variant that can quickly calculate the position of any named card.
Offset number:
In order to do the acaan calculation the really important feature of these good looking (better looking than Si Stebbins) stacks is how fast can the magician determine the offset number (0
or 13 or 26 or 39). That's really the holy Grail for a tetradistic stack to do acaan.
The final step of adding the value of the Harry Riser twin pair is pretty much the same concept in most of these stacks.
How fast can the Seven Queens method determine the offset number? Very fast.
Add the card value and its suit (SHCD) value (this will give a number 1 through 17).
Now divide by 4 giving a remainder of 0 or 1 or 2 or 3. Note: instead of actually dividing by four I mentally just think of how many notches above the nearest multiple of four.
This tells the offset number.
0 equals 0
1 equals 13
2 equals 26
3 equals 39
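The rule above fits in a few lines of code (a sketch of just the offset step, using the SHoCkeD suit values S=1, H=2, C=3, D=4 mentioned below; adding the Harry Riser pair value to finish the position calculation is not shown):

```python
SUIT_VALUE = {"S": 1, "H": 2, "C": 3, "D": 4}   # SHoCkeD order

def offset(card_value, suit):
    # card_value: A=1 ... J=11, Q=12, K=13
    total = card_value + SUIT_VALUE[suit]       # the 1-through-17 number
    return (total % 4) * 13                     # remainder 0/1/2/3 -> 0/13/26/39
```

For the Queen of Hearts, 12 + 2 = 14 leaves remainder 2, so offset(12, "H") returns 26 — matching the worked example later in the thread.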
I have seen postings about the Karma stacks and suspect that they may use this same technique? Don't know? Am thinking about purchasing the Karma Pro version just to find out.
Below is the Eight Queens stack which uses SHoCkeD as the suit values:
7S, QD, 8D, KC, 10H, 9C, AC, 3S, 6H, 5C, JS, 2H, 4D
7H, QS, 8S, KD, 10C, 9D, AD, 3H, 6C, 5D, JH, 2C, 4S
7C, QH, 8H, KS, 10D, 9S, AS, 3C, 6D, 5S, JC, 2D, 4H
7D, QC, 8C, KH, 10S, 9H, AH, 3D, 6S, 5H, JD, 2S, 4C
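The listing's tetradistic structure can be verified mechanically — each card 13 positions later keeps its value and steps one suit along S→H→C→D (a throwaway check I wrote against the rows above):

```python
stack = ("7S QD 8D KC 10H 9C AC 3S 6H 5C JS 2H 4D "
         "7H QS 8S KD 10C 9D AD 3H 6C 5D JH 2C 4S "
         "7C QH 8H KS 10D 9S AS 3C 6D 5S JC 2D 4H "
         "7D QC 8C KH 10S 9H AH 3D 6S 5H JD 2S 4C").split()

assert len(set(stack)) == 52          # all 52 cards, no repeats
NEXT_SUIT = {"S": "H", "H": "C", "C": "D", "D": "S"}
for i in range(39):
    value, suit = stack[i][:-1], stack[i][-1]
    # the card 13 positions later has the same value, next suit in rotation
    assert stack[i + 13] == value + NEXT_SUIT[suit]
```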
Your response is welcome.
Oops, I said "Eight Queens" when I should have said "Seven Queens".
Hi Glowball,
When you refer to an "acaan calculation" you seem to mean just calculating what position a card is at, or vice versa.
The easiest way to do this is surely to memorise a stack.
This is not for me, I have been using Aronson for about 20 years.
I'm trying to find the easiest stack that a bunch of my local magic club members can quickly learn and use that meet my 3 criteria:
1. Better looking than a Si Stebbins.
2. Can quickly calculate the position of a named card.
3. Can quickly calculate the next (top) card after glimpsing the bottom card.
The "quickly learn and use" overriding criterion rules out a memorized deck for our group.
But more specifically I am trying to find the fastest good looking tetradistic stack that can calculate "card to number".
Note that with a tetradistic stack we should later be able to come up with mnemonic methods to address the "next card" calculation if none currently exist.
Therefore my immediate focus is to find the fastest easiest to learn "card to number" calculation amongst the good looking tetradistic stacks.
The Doug Dyment DAO (Dyment, Aryes, Osterlind) Stack is hands down the fastest and easiest to learn of the good looking "next card" stacks that I have seen, but it was not designed to
calculate a card position.
So far my Seven Queens stack appears to be the fastest easiest of the good looking tetradistic stacks that meet my criteria, but am open to other stacks.
As always thanks for your input.
ddyment: This design is a trivial modification of my own QuickStack, originally published over twenty years ago. But QuickStack features a considerably simplified calculation process (with no
multiplications, divisions, or difficult additions/subtractions; 75% of the cards can be located with at most one single-digit addition). So considerably faster than what is described
here.
Doug, here is my rebuttal:
I disagree with your statement that the Seven Queens stack is a "trivial modification" of your QuickStack.
It is true that they are both tetradistic stacks (so are a lot of other stacks).
It is true that they both use the Harry Riser pairs concept (that's been around a long time).
It's true that they both add the pair value to the offset.
But that's where the similarities end.
Your QuickStack methodology requires the memorization of 13 key cards whereas the Seven Queens stack does not require any such memorization.
The Seven Queens stack adds the card value and the suit value and does a mod 4 to almost instantly reveal the offset number (the QuickStack does not use this technique).
I disagree with your statement:
"So (QuickStack) considerably faster than what is described here."
Both stacks require the final addition of the paired value to the offset so there is no difference in the speed of the two stacks for that part of the calculation (of course I give a nod
to QuickStack for the 25% of the time the spectator names one of its 13 memorized key cards).
I believe if we each took a Joe Blow club magician and taught them to perform our respective stacks (your QuickStack versus Seven Queens stack) that I could teach my Joe Blow to perform
the Seven Queens stack three times faster than you could teach your Joe Blow to perform QuickStack (mainly because they have to first memorize 13 key cards).
I believe that at performance time my Joe Blow would arrive at the offset number every bit as fast (maybe faster) than your Joe Blow.
I like your stacks (especially DAO) and your book.
Respectfully, glowball
ddyment: Glowball and I shall have to agree to disagree about the relative simplicity of the two stacks in question. I will not attempt a detailed response, as I am confused by the rebuttal claims
in any case.
QuickStack does not have any key cards, nor does it require memorization of any cards, so I'm not sure what is being referenced above; perhaps glowball is unclear on its construction.
QuickStack requires no divisions, and 75% of the cards can be located with at most one single-digit addition. I don't think Seven Queens comes even close to such simple conversions (and
most people are uncomfortable performing mod 4 calculations -- which require a division -- on the fly).
So I'll stand by my claims, and leave it to others to decide which approach they find easier to use.
My rebuttal number two:
Some clarifications needed: are we talking about the same QuickStack?
I'm comparing my Seven Queens stack to your QuickStack 3.0 as specified in your book Calculated Thoughts.
I maybe used the wrong term "13 KEY cards", however let's just say there are 13 relationships that must be memorized to do QuickStack 3.0.
From your book, and I quote:
"the suit associated with each value in the 13 xxxxxxxxxxxxx; the suits in the other xxxx are defined in relationship to these xxxxxxx, so they - like the numerical positions - must be
thoroughly memorized. ... To reiterate then this is a list of all 13 cards in xxxxx: when given any of these card names you should be able to respond rapidly with its stack position...".
Note that I put in xxxxxxxx in several places to protect your method.
Notice that you say these 13 relationships "MUST BE THOROUGHLY MEMORIZED".
The Seven Queens methodology does not require such memorization thus making it much faster to learn.
The key to both of these stacks is how fast the magician can determine the OFFSET number.
I maintain that the Seven Queens methodology (mod 4) is probably faster. I agree that division is undesirable therefore the way I teach my friends to do the calculation instead of dividing
is just to think "how much am I above the nearest multiple of 4". For example: how much is the number 9 above the nearest multiple of four? Well, eight is the nearest multiple of 4
therefore the answer is 1 (1 equates to 13).
Mod 4:
Reminder to everyone out there that the only multiples of four that we have to know are 0, 4, 8, 12, 16. That's it! How hard is it to know that 7 is 3 above four? Pretty easy I would say.
Seven Queens methodology:
Let's say the card value times the suit value is equal to 14 (as in the case of the Queen of Hearts):
The magician should know instantly the mod 4 result is 2 (14 is two above the nearest multiple of four). And 2 equates to 26.
This calculation is lightning fast!
Theories about which methodology is faster are interesting but as they say the proof of the pudding is in the eating therefore I'm going to thoroughly learn QuickStack 3.0 and later get
one of my friends that knows the Seven Queens stack methodology to bone up on it and then get someone with a stopwatch to time us. We'll do a lot of cards.
I will legitimately try to win the competition using your QuickStack 3.0 (regardless of who wins I believe the difference will be negligible).
We will do "card to number" contests and then we will do "number to card" contests.
I suspect that the Seven Queens will win the "card to number" contest, but I also suspect that QuickStack 3.0 will win the "number to card" contests.
It may be a few months (holidays coming up) before I and a friend can do this. This will be fun, and I will be honest either way it turns out.
Respectfully, glowball
glowball: Oops, as one of my examples I said:
"Let's say the card value times the suit value is equal to 14 (as in the case of the Queen of Hearts):"
Instead of "times" I should have said "plus". So the correct wording should have been:
Let's say the card value PLUS the suit value is equal to 14 (as in the case of the Queen of Hearts):
ddyment: I said that I would avoid any kind of detailed response, and I intend to honour that, but I see now how glowball is interpreting things. QuickStack is a fully algorithmic stack: there is no need to memorize any cards or positions. But it is necessary to learn some things (I refer to this in the book as "memorizing"; arguably I should just have said "learned"). One must learn the mirror pair structure. One must learn how the suits are computed. This is the same for Seven Queens.
Indeed, this is true in some sense with any memorized stack. The difference with an algorithmic stack is that one is never exclusively dependent on having memorized the value/location pairings: in case of difficulty, it is always possible to go back to first principles (which will not be as quick, but will be better than failing). That said, with the continued use of any algorithmic stack, card value/position associations will, over time, become memorized.
It is, of course, extremely difficult to do any sort of A:B comparison with matters of this sort. The experiment proposed by glowball cannot prove anything. It may suggest that one person is better at doing mental arithmetic than another. It may suggest that one has a better memory than another. It may suggest that one person has a different learning style than another (this is why there are four different methods of learning a memorized stack, the algorithmic approach being but one of them). But it can provide no significant A:B evidence. Again, though, based simply on the numbers and types of actions required by the two methods, I stand by my original comments.
On Dec 10, 2022, glowball wrote:
Doug, here is my rebuttal:
I disagree with your statement that the Seven Queens stack is a "trivial modification" of your QuickStack...
I second Glowball because the 7 Queens stack is a shadow stack. Any card points directly to the card's bank. Of course, QuickStack 3.0 could easily be converted into a shadow stack as
Guys, not for nothing, and if you just like calculating things to produce the magic, then I say go for it, as a bit of it is quite interesting to say the least. I do enjoy the bit of memory work and stacks I've been through on my journey; they are indeed clever inventions. But if you don't want to do all that math, or any math or memorization, etc., to perhaps concentrate more on just the magic to present an amazing ACAAN type of effect, then I implore both glow and dd to check out "Berg FAST," part of the F.A.S.T. Project by Daniel Johnson. It uses these similar concepts but makes it all work more automatically for you, sorta in the way an elevator just works, taking you more automatically up or down with a push of a button and you're there. Nothing to calculate, nothing to memorize. Leaves much more time for a good casual presentation, much in the manner of The Berglas Effect.
I do hope you take a look at it if you haven't already, or perhaps take a good look at it again (but of course, as is shown above right here in debate, people will always find something that they don't like about most things, as mileage may vary). But F.A.S.T. is very, very clever and much easier to do, IMHO. Sorta the difference between taking the stairs or an elevator, or something like that.
Happy New Years, fellas!
*Check out my latest: Gifts From The Old Country: A Mini-Magic Book, MBs Mini-Lecture on Coin Magic, The MB Tanspo PLUS, MB's Morgan, Copper Silver INC, Double Trouble, FlySki, Crimp Change - REDUX!, and other fine magic at gumroad.com/mb217magic
"Believe in YOU, and you will see the greatest magic that ever was." -Mb
I think the Shadow Sequeira by Hans-Christian Solka should be a contender.
glowball: This excellent formula-based stack was developed in 2015, well before the Seven Queens stack. To calculate the position for "card to number", the two stacks both use mod 4 plus a pair value (they use a different pairing scheme). Therefore the mental calculation is very fast and easy on both stacks.
To do the "next card" calculation, the Shadow Sequeira method can be learned much quicker. The Seven Queens method of determining the "next card" using the 13 stories takes longer to learn, but once learned it is just as fast as Shadow Sequeira.
Look at the thread below and go to the bottom two or three posts, where it starts with "I immediately bought the Shadow Sequeira", to see a more complete comparison:
This stack was mentioned by hcs.
I just did a facepalm! I thought that hcs was a third-party observer; wrong, hcs must mean Hans-Christian Solka!
The Magic Cafe Forum Index » » Shuffled not Stirred » » Seven Queens stack is the easiest acaan calculation of the good looking tetradistic stacks? (3 Likes)
Jitendra Bajpai
Visiting Professor
Department of Mathematics
University of Kiel
Heinrich-Hecht-Platz 6
24118 Kiel, Germany
Contact responsible for this site: Jitendra Bajpai
general legal notice
Research Interests
My research interest lies in the interface of number theory, group theory, representation theory and geometry. More precisely,
• Automorphic and Modular Forms : Classical and Vector-Valued.
• Cohomology of Arithmetic Groups : Boundary and Eisenstein Cohomology.
• Group Theory : Computational and Combinatorial.
• A short visit to Thiruvananthapuram, Kerala, India, July 23-29, 2024.
• A short visit to IHP, Paris, France, October 20 - November 3, 2024.
• I will be teaching a course on Automorphic Forms in winter term 2024-2025, Mathematics Department, University of Kiel, Germany.
• Groups and Topological Groups conference , Ljubljana, Slovenia, January 23-24, 2025.
Here I list some co-organized upcoming, ongoing and past activities.
Most of my articles listed below can be found in arXiv.
• Relative Lie algebra cohomology of SU(2,1) and Eisenstein classes on Picard surfaces (with Mattia Cavicchi), preprint 2024.
• Commentary on Sp(6) hypergeometric groups, submitted 2024.
• New dimensional estimates for subvarieties of linear algebraic groups (with Daniele Dona and Harald Helfgott), In a special volume of Vietnam Journal of Mathematics, celebrating 60th Birthday of
Pham Huu Tiep, 2024.
• Thin monodromy in O(5) (with Martin Nitsche), Annales Mathématiques du Québec, 2024.
• Bloch-Beilinson conjectures for Hecke characters and Eisenstein cohomology of Picard surfaces (with Mattia Cavicchi), preprint 2022.
• Arithmetic monodromy in Sp(2n) (with Daniele Dona and Martin Nitsche), submitted 2022.
• Lifting of vector-valued automorphic forms (with Subham Bhakta), submitted 2022.
• Arithmeticity of some hypergeometric groups (with Sandip Singh and Shashank Singh), Linear Algebra and Its Applications, 2023.
• Thin monodromy in Sp(4) and Sp(6) (with Daniele Dona and Martin Nitsche), submitted 2022.
• Growth estimates and diameter bounds for classical Chevalley groups (with Daniele Dona and Harald Helfgott), submitted 2022.
• Growth of Fourier coefficients of vector-valued automorphic forms (with Subham Bhakta and Renan Finder), Journal of Number Theory, 2023.
• Boundary and Eisenstein cohomology of G[2](ℤ) (with Lifan Guan), Research in Mathematical Sciences, 2022.
• Exponential sums in prime fields for modular forms (with Subham Bhakta and Victor García), Research in Number Theory, 2022.
• Euler characteristic and cohomology of Sp[4](ℤ) with nontrivial coefficients (with Ivan Horozov and Matias Moya Giusti), European Journal of Mathematics, 2023.
• Symplectic hypergeometric groups of degree six (with Daniele Dona, Sandip Singh and Shashank Singh), Journal of Algebra, 2021.
• Boundary and Eisenstein cohomology of SL[3](ℤ) (with Günter Harder, Ivan Horozov and Matias Moya Giusti), Mathematische Annalen, 2020.
• Ghost classes in rank two orthogonal Shimura varieties (with Matias Moya Giusti), Mathematische Zeitschrift, 2020.
• Commensurability and arithmetic equivalence for orthogonal hypergeometric monodromy groups (with Sandip Singh and Scott Thomson), Experimental Mathematics, 2020.
• On orthogonal hypergeometric groups of degree five (with Sandip Singh), Transactions of the American Mathematical Society, 2019.
• Lifting of modular forms, in a conference proceeding published by Publications Mathématiques de Besançon, 2019.
• Appendix B (with Daniele Dona) for English version of the article "Graph Isomorphisms in quasi-polynomial time" by Harald Helfgott, Seminaire Bourbaki, 69th year, 2016-2017, no. 1125, January
2017, arXiv 2017.
• Bilateral series and Ramanujan's radial limits (with Susie Kimport, Jie Liang, Ding Ma, James Ricci), Proceedings of the American Mathematical Society, 2014.
• Ph.D. 2015, University of Alberta, Canada. Thesis Title: On Vector-Valued Automorphic Forms. Supervisor: Prof. Terry Gannon
• M.Sc. 2008, McGill University, Canada. Thesis Title: Omnipotence of Surface Groups. Supervisor: Prof. Dani Wise.
• M.Sc. 1998, C.S.J.M. University, Kanpur, India.
• B.Sc. 1995, C.S.J.M. University, Kanpur, India.
• Research Associate , January 2023 - August 2023, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton AB T6G 2G1, Canada.
• Postdoctoral Fellow , October 2021 - September 2022, Max-Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany.
• Postdoctoral Fellow, April, 2020 - September 2021. Institute of Geometry, Technical University of Dresden, Helmholtzstraße 10, 01069 Dresden, Germany.
• Postdoctoral Fellow, October, 2016 - March, 2020. Mathematics Institute, Georg-August University Göttingen, Bunsenstraße 3-5, D-37073 Germany.
• Postdoctoral Fellow , February, 2015 - September, 2016, Max-Planck Institute for Mathematics, Vivatsgasse 7, 53111 Bonn, Germany.
Short Term Academic Visits
• Research in Pair, with Daniele Dona, and to organize a mini-workshop on Growth and Expansion in Groups, March 24 - April 14, 2024, at MFO Oberwolfach, Germany.
• Research in pair meeting at Oberwolfach, Germany, October 17 - 23, 2021, with Daniele Dona from Hebrew University of Jerusalem, Israel.
• Max-Planck Institute for Mathematics, Bonn, Germany, August, 2021.
• Research in pair meeting at Oberwolfach, Germany, February 28 - March 27, 2021, with Mattia Cavicchi from IRMA Strasbourg, France.
• Max-Planck Institute for Mathematics, Bonn, Germany, August, 2020.
• Max-Planck Institute for Mathematics, Bonn, Germany, August 4 - 9, 2019.
• IHES, Paris, France, October 29 - December 8, 2018.
• Logic and Algorithms in Group Theory, Trimester Program at Hausdorff Institute of Mathematics (HIM), Bonn, Germany, September 10 - 14, and October 1 - 24, 2018.
• Max-Planck Institute for Mathematics, Bonn, Germany, August 19 - 24, 2018.
• Periods in Number Theory, Algebraic Geometry and Physics, Trimester Program at Hausdorff Institute of Mathematics (HIM), Bonn, Germany, March 15 - April 12, 2018.
• Max-Planck Institute for Mathematics, Bonn, Germany, August, 2017.
• Institut Camille Jordan, Université Claude Bernard Lyon 1, France, April 9-14, 2017.
• Research in pair meeting at Oberwolfach, Germany, February 19-24, 2017.
• Mathematical Institute, University of Heidelberg, Germany, February 1-7, 2017.
• Max-Planck Institute for Mathematics, Bonn, Germany, December 2-7, 2016.
• Department of Mathematics, University of Bern, Switzerland, October 23-28, 2016.
• Department of Mathematics, RWTH-Aachen University, Germany, July 11-17, 2016.
Colloquium Talks
• Department of Mathematics Colloquium, University of Ulm, Germany, November 24, 2017. Title: Vector-Valued Automorphic Forms.
• Junior Mathematics Colloquium, Georg-August Universität Göttingen, Germany, June 14, 2017. Title: Fuchsian Groups : Cusp vs. Index.
• Colloquium in the Department of Mathematics, University of Bern, Switzerland, October 24, 2016. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Colloquium in the Department of Mathematics, RWTH-Aachen University, Germany, February 2, 2016. Title: Theory of Hypergeometric Groups.
• Graduate Colloquium Talk, November 14, 2012, University of Alberta, Canada. Title: Vector-Valued Modular Forms.
Seminar Talks
• COGENT Seminar, an online seminar in and around the cohomology of groups, April 24, 2023.
• Korteweg-de Vries Institute for Mathematics, University of Amsterdam, The Netherlands. February 8, 2023.
• Short lectures series (4 talks online), Yanqi Lake Beijing Institute of Mathematical Sciences and Applications (BIMSA), China. January 12 - February 2, 2023.
• Department of Mathematics, Indian Institute of Technology - Banaras Hindu University, Varanasi, India. November 15, 2022.
• Department of Mathematics, Harish-Chandra Research Institute (HRI), Prayagraj(Allahabad), India. November 11, 2022.
• Department of Mathematics, Indian Institute of Technology Kanpur, India. November 1, 2022.
• Number Theory Seminar, University of Copenhagen, Denmark. May 14, 2021 (online). Title: Theory of Vector-Valued Modular Forms.
• Oberseminar: Algebra-Geometry-Combinatorics, TU Dresden, Germany. February 4, 2021 (online). Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Mathematics Seminar Series, IIIT Delhi, India. September 29, 2020 (online). Title: Theory of Vector-Valued Modular Forms.
• Oberseminar Number Theory and Arithmetic Geometry, Institute for Algebra, Number Theory and Discrete Mathematics, Leibniz University Hannover, Germany. November 21, 2019. Title: Arithmeticity and
Thinness of Hypergeometric Groups.
• Oberseminar Number Theory, Georg-August Universität Göttingen, Germany, June 24, 2019. Title: Boundary cohomology of SL[3](ℤ)
• Séminaires Mathématique-Physique, Institut de Mathématiques de Bourgogne, Dijon, France, May 16, 2019. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Séminaire de Géométrie Arithmétique et Motivique, LAGA, Institut Galilée, Université Paris 13, France, February 8, 2019. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Séminaire de Géométrie et Topologie, Université Paul Sabatier, Institut de Mathématiques de Toulouse, France, February 5, 2019. Title : Arithmeticity and Thinness of Hypergeometric Groups.
• Seminar on Groups, Geometry and Topology, Department of Mathematics, Karlsruhe Institute of Technology, Germany, January 31, 2019. Title : Arithmeticity and Thinness of Hypergeometric Groups.
• Geometry Seminar / Graduate Lectures, Faculty of Mathematics, Technische Universität Dresden, Germany, December 11, 2018. Title : Arithmeticity and Thinness of Hypergeometric Groups.
• Séminaires Mathématique-Physique, Institut de Mathématiques de Bourgogne, Dijon, France, November 14, 2018. Title: Vector-Valued Automorphic Forms.
• Seminar of Algebraic Geometry and Number Theory, École Polytechnique Fédérale de Lausanne, Switzerland, May 30, 2018. Title: Vector-Valued Automorphic Forms.
• IRMA Institut de Mathematique, Université de Strasbourg, France, February 19, 2018. Title : Arithmeticity and Thinness of Hypergeometric Groups.
• Number Theory Seminar, University of Copenhagen, Denmark, January 3, 2018. Title : Arithmeticity and Thinness of Hypergeometric Groups.
• Oberseminar Number theory, Georg-August Universität Göttingen, Germany, May 22, 2017. Title: Theory of Vector-Valued Modular Forms.
• Séminaire de Combinatoire et Théorie des Nombres, Institut Camille Jordan, Université Claude Bernard Lyon 1, France, April 11, 2017. Title: Vector-Valued Modular Forms.
• Geometry Seminar, Laboratoire de Mathématiques, Université Savoie Mont Blanc, Chambery, France, April 6, 2017. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Oberseminar in Automorphic Forms, Cologne University, Germany, January 16, 2017. Title: Vector-Valued Modular Forms.
• Oberseminar Number Theory, Georg-August Universität Göttingen, Germany, October 31, 2016. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Algebra Seminar, RWTH-Aachen University, Germany, July 12, 2016. Title: Arithmeticity and Thinness of Hypergeometric Groups.
• Research Seminar in Algebra, University of Cologne, Germany, July 7, 2016. Title: Theory of Hypergeometric Groups.
• Seminar Aachen-Bonn-Köln-Lille-Siegen on Automorphic Forms at MPIM, Bonn, Germany, March 1, 2016. Title: Vector-Valued Modular Forms.
• Number Theory Seminar, Université de Caen, France, January 22, 2016. Title: Vector-Valued Modular Forms.
• Algebra and Representation Theory Seminar, Université de Caen, France, January 19, 2016. Title: Theory of Hypergeometric Groups.
• Research Seminar in Algebra and Mathematical Physics, University of Hamburg, Germany, January 12, 2016. Title: Vector-Valued Modular Forms.
• Hauptseminar Modulformen, Mathematical Institute, University of Heidelberg, Germany, November 18, 2015. Title: Vector-Valued Modular Forms.
• MPIM Number Theory Lunch Seminar, Bonn, Germany, November 11, 2015. Title: Vector-Valued Modular Forms.
• MPIM Oberseminar, Bonn, Germany, October 29, 2015. Title: Theory of Hypergeometric Groups.
• Algebra and Number Theory Seminar, Technische Universität Darmstadt, Germany, October 13, 2015. Title: Vector-Valued Modular Forms.
• Séminaire de Géométrie Arithmétique et Motivique, LAGA, Institut Galilée, Université Paris 13, France, October 9, 2015. Title: Vector-Valued Modular Forms.
Contributory Talks
• Arbeitsgemeinschaft: Rigidity of Stationary Measure, Oberwolfach, Germany, October 7 - 12, 2018. Title: Examples of stationary measures.
• Summer School on Thermodynamic Formalism and Transfer Operator Method, Göttingen, September 14-18, 2015. Title: Hecke triangle groups and Vector-Valued Modular Forms.
• 10th PIMS Young Researchers Conference in Mathematics and Statistics, May 21 - 24, 2013, University of Alberta, Canada. Title: Bilateral series and Ramanujan's mock theta functions.
• Atkin Memorial Lecture and Workshop on Noncongruence modular forms and Galois representations, May 1, 2011, University of Illinois at Chicago, USA. Title: Weakly Holomorphic Vector-Valued Modular Forms.
• 25th Automorphic Forms Workshop, March 23, 2011, Oregon State University, USA. Title: Weakly holomorphic vector-valued modular forms for genus-zero subgroups of the modular group.
• Seminar on Geometric Group Theory, March 28, 2007, McGill University, Canada. Title: Surface Groups are Omnipotent.
Conferences/Workshops/Schools Participation (Selected)
• 41st Autumn School in Algebraic Geometry : Arithmetic of Differential Equations, September 2-8, 2018, Lukecin, Poland.
• Hausdorff Summer School: L-functions, Open Problems and Current Methods, June 25-29, 2018, Hausdorff Center for Mathematics, University of Bonn, Germany.
• Automorphic Forms and Arithmetic, September 3 - 9, 2017, Oberwolfach, Germany.
• Hot Topics Workshop on Galois theory of periods, March 27 - 31, 2017, MSRI Berkeley, USA.
• Ergodic Theory and its Connections with Arithmetic and Combinatorics, December 12 - 16, 2016, CIRM Luminy, France.
• Applications of Ergodic Theory in Number Theory, October 17 - 21, 2016, CIRM Luminy, France.
• Analogies between Number Fields and Function Fields , June 26 - July 2, 2016, Université Claude Bernard Lyon, France.
• Thin Groups and SuperApproximation Workshop, March 28 - April 1, 2016, Institute for Advanced Study, Princeton, NJ, USA.
• Hausdorff School : Arithmetic Groups, their Cohomology and Arithmetic Applications, January 25 - 29, 2016, Hausdorff Center for Mathematics, Bonn, Germany.
• Eigenfunction estimates and related topics, July 7 - 10, 2015, Marburg, Germany.
• New Perspectives on the Interplay between Discrete Groups in Low-Dimensional Topology and Arithmetic Lattices, June 21 - 27, 2015, Oberwolfach, Germany.
• Workshop on "Mock Modular Forms, Moonshine, and String Theory", August 26 - 30, 2013, Simons Center for Geometry and Physics, Stony Brook, USA.
• Summer Graduate Workshop : New Geometric Techniques in Number Theory, July 1 - 12, 2013, MSRI Berkeley, USA.
• Arizona Winter School 2013 : Modular Forms and Modular Curves, March 9 - 13, 2013, University of Arizona, USA.
• 1st EU/US conference on automorphic forms and related topics, July 30 - August 10, 2012, Aachen, Germany.
• Hot Topics Workshop on Thin Groups and Super-strong Approximation, February 6 - 10, 2012, MSRI Berkeley, USA.
• The Birch and Swinnerton-Dyer Conjecture Summer School for graduate students and young researchers, June 26 - July 3, 2011, Sardinia, Italy.
• PIMS Algebra Summer School 2007, July 30 - August 8, University of Alberta, Canada.
• Conference on Geometric Group Theory, July 3 - 14, 2006, Centre de Recherches Mathématiques, Montreal, Canada.
Teaching Experience
Course Instructor
• Summer 2024 : A Graduate Course on Modular Forms, Mathematics Department, University of Kiel, Germany.
• Spring-Summer 2023 : Math 201 (Ordinary Differential Equations), University of Alberta, Canada.
• Spring 2011, Spring 2012, Winter 2013 and Spring 2014 : Math 125 (Linear Algebra-I), University of Alberta, Canada.
• Fall 2007 : Math 150 (Advanced Calculus), McGill University, Canada.
• Fall 2007 : Calculus-I (Differential Calculus), CEGEP Vanier College, Montreal, Canada.
• Summer 2007 : Calculus-I (Differential Calculus), CEGEP Dawson College, Montreal, Canada.
• August 2001-April 2002 : As a guest lecturer in the Department of Mathematics, V.S.S.D. College, Kanpur, India, taught the following courses:
□ Abstract Algebra to final year master's degree students. The areas covered in this course include Group theory, Ring theory, Field theory and Galois theory.
□ A short course in Advanced Calculus to second year undergraduate students.
□ A course in Real Analysis to third year undergraduate students.
Teaching Assistant
• Fall 2008, Fall 2009, Winter 2010, Fall 2010, Winter 2011, Fall 2011 and Winter 2012 : Teaching Assistant in various sections for the courses Math 101 (Calculus-II), Math 102 (Linear Algebra-1),
Math 113 (Calculus-I) and Math 209 (Advanced Calculus) in the University of Alberta, Canada.
• Winter 2007, Fall 2006, Winter 2006, Fall 2005, July 2005, and May 2005 : Teaching Assistant for Math 141 (Calculus-II) in McGill University, Canada.
Fellowships and Awards
• Azrieli International Postdoctoral Fellowship to pursue postdoctoral studies at Technion, Haifa, Israel.
• Josephine M. Mitchell Graduate Scholarship 2011 and 2014, Department of Mathematical and Statistical Sciences, University of Alberta, Canada.
• Provost Doctoral Fellowship, University of Alberta, Canada, January 2008 - December 2009.
• Best Teaching Assistant Award in Mathematics, McGill University, Canada, 2006.
• Junior Research Fellowship, Harish-Chandra Research Institute (HRI), Allahabad, India, August 2002 - June 2004.
• Lectureship, V.S.S.D. College, Kanpur, India, August 2001 - March 2002.
Setting Ticks and Tick Labels in Matplotlib
In the Matplotlib library, ticks are the markers used to denote the data points on an axis.
• It is important to note that in the matplotlib library the task of spacing points on the axis is done automatically.
• The default tick locators and formatters in matplotlib are also good and are sufficient for most of the use-cases.
• This task can be done explicitly with the help of two functions: set_xticks() and set_yticks().
• Both these functions take a list object as their argument. The elements of the list denote the tick positions, and the values shown at those positions are set using the set_xticklabels() and set_yticklabels() functions.
For Example:
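A minimal sketch of such a call, assuming tick positions [2, 4, 6, 8, 12] to match the label names used just below:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 12], [0, 12])
# place ticks at the chosen positions on the x axis
ax.set_xticks([2, 4, 6, 8, 12])
```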
The above code will mark the data points at the given positions with ticks.
Then to set the labels corresponding to tick marks, we use the set_xticklabels() and set_yticklabels() functions respectively.
ax.set_xticklabels(['two', 'four', 'six', 'eight', 'twelve'])
Now with the help of the above command, it will display the text labels just below the markers on the x-axis.
Custom Ticks and Tick labels
Now let us take a look at an example of setting ticks and tick labels in Matplotlib Library:
import matplotlib.pyplot as plt
import numpy as np
import math
x = np.arange(0, math.pi*2, 0.04)
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])
y = np.cos(x)
ax.plot(x, y)
# this will label the x axis
ax.set_xlabel('angles')
# setting title of plot
ax.set_title('cosine curve')
# set the tick marks for x axis
ax.set_xticks([0, 2, 4, 6])
# provide names for the x axis tick marks
ax.set_xticklabels(['zero', 'two', 'four', 'six'])
Here is the output:
In the above code, you must have noticed the function set_xlabel(); this function is used to specify a label for the x axis. In our case we are showing angles on the x axis, hence the name angles.
Then we have specified the tick marks explicitly and have also provided labels for the tick marks.
This is a good technique to customize your graphs completely.
Excel Calculator for Mortgage Modification & Loan Refinancing
Have you got an existing Loan or Mortgage, would like to refinance it and are keen to see how the modification may influence your financial situation?
In this case we have just the right Calculator in Excel for you. It will show you exactly what impact amending multiple Loan Parameters will have.
This Calculator requires Microsoft Excel 2007 or 2010 to be installed on your computer.
To download a sample of this calculator: click here
The sample is limited to max. 30 repayments.
Please do some testing and make sure that it will suit your requirements.
Then you can purchase the full version on this link: click here
Variable Parameters That Can Be Changed
As stated earlier, there are many Parameters you can change and instantly see how refinancing will impact your Loan or Mortgage.
These parameters are:
• Loan Amount
• Loan Start Date
• Interest Rate (annual or “p.a.”)
• Interest Compounding Frequency (how often bank adds interest to your loan amount)
• Repayment Frequency (how often you pay the loan)
• Repayment Dates
• Repayment Amounts
• Add Extra Payments
User Manual and Instructions
This Excel Calculator for Loan Modification was developed so that it is easy to use. You will still require some basic Excel knowledge, but even without it, these instructions will make sure that you succeed.
Our Loan Calculator consists of 3 parts:
1. Entry Parameters Section (top left)
2. Loan Amortization Schedule (bottom screen-wide)
3. Summary Results (top right)
Ok, now let’s go and have a look at how it works. As already mentioned, you should begin with part (1), then carry on to part (2) and finally see the results in part (3).
1. Data Entry Section for Loan Parameters
In this part you will have to enter the Main Loan Parameters, such as:
• Your Loan Amount. In other words, how much you plan to borrow on your new loan, respectively into what Amount your old Loan is going to be modified.
• Repayment Amount. The majority of Loan Calculators out there require the Loan Term as entry data; based on the Loan Length they calculate what the Repayment Amount is going to be. Our Excel Calculator is slightly different: it asks you how much you want to pay and then figures out how long it will take to repay the Loan.
• Repayment Frequency. You have the ability to select daily, weekly, bi-weekly (fortnightly) or monthly Repayment Frequency
• Interest Compounding Frequency. Many Banks announce Interest Rates as yearly percentage. However, they compound the Interest daily or monthly. Doing so they are pushing up the Total Interest
Rate. It’s one of those ethically questionable things they do to make their loans look more attractive. Therefore, always ask your bank up-front how they compound the Interest on your Loan.
• Loan Start Date. Start Date will determine your Repayment Calendar (or Loan Amortization Schedule). The good thing about this Calculator is that you can choose whatever Date you like, even
retrospectively. Or in the future – so you can generate new or re-create existing Loan Schedule.
• Yearly Interest Rate. Interest Rates are almost always published as yearly percentages. The Loan Calculator then automatically adjusts the rate to calculate daily, weekly, fortnightly or monthly Interest. The formula to do that is already built into our Calculator.
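To illustrate why the compounding frequency matters, here is a small sketch of the standard conversion (written in Python rather than as an Excel formula; the function name and numbers are illustrative, not taken from the calculator):

```python
def effective_annual_rate(nominal_rate: float, periods_per_year: int) -> float:
    # A nominal yearly rate compounded more often yields a higher effective rate,
    # which is why a "6% p.a." loan compounded monthly costs more than 6% a year.
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

annual = effective_annual_rate(0.06, 1)    # 6% compounded yearly: exactly 6%
monthly = effective_annual_rate(0.06, 12)  # 6% compounded monthly: about 6.17%
```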
2. Loan Amortization Schedule
You may have heard different terms, like the Loan Amortization Schedule or the Repayment Calendar – they both mean the same thing.
In this section you will get all details about your periodic Payment Amounts. The schedule will also show you detailed break-down. You will know exactly how much Principal and how much Interest you
pay in each Repayment Period.
There is one major improvement compared with other Loan Calculators. You will be able to adjust Repayment Amounts and Actual Payment Date for each Repayment Period.
This is to ensure that the Loan Schedule stays current at all times – for example when you miss a Payment or pay late. Our Calculator will then automatically re-calculate the Interest and adjust the
Loan Balance.
Another great improvement is that you are able to add any Extra Payment (Additional Principal) in each Repayment Period.
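The recalculation described above amounts to one standard amortization step per repayment period; a hedged sketch of the arithmetic (not the calculator's actual formulas, and with illustrative numbers):

```python
def amortization_step(balance, annual_rate, periods_per_year, payment, extra=0.0):
    # Interest accrues on the outstanding balance for one period;
    # the rest of the payment, plus any extra payment, reduces the principal.
    interest = balance * annual_rate / periods_per_year
    principal = payment - interest + extra
    return balance - principal, interest, principal

# e.g. a 200,000 loan at a 6% nominal rate with monthly payments of 1,199.10
new_balance, interest, principal = amortization_step(200_000, 0.06, 12, 1199.10)
```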
The fields which can be over-written are highlighted in green color. Just one thing to remember: do not change formulas in any non-green fields. If you do, the Loan Calculator may not work correctly.
Once you specify all your Loan Parameters in section (1) and complete Adjustments of your modified Loan Schedule (Payment Dates, Repayment Amounts or Extra Payments) in section (2) then proceed to
the Summary Results Section.
Here you will get the following information:
• Total Principal Amount (Should be the same as your original Loan Amount.)
• Total Interest Amount (Paid over the whole Loan Term.)
• Loan Term (That is the time it takes to repay the modified Loan in years)
So, that’s it. Our Excel Calculator designed for Mortgage Refinance (and for modifications of other types of Loans as well) is very easy to use, yet offers quite comprehensive functionality.
It is an ideal tool to get the most out of your Loan. You can also use it for an interactive modelling.
Every update of the Loan Parameters is instantly translated into the Amortization Schedule as well as the Final Results.
We are sure that you will enjoy using our enhanced Excel Calculator. If you have any feedback or would like to suggest some additional features then please don’t hesitate to talk to us. You can reach
us via our Contact Form.
Links to get our Loan Calculator: Download Sample or Buy Full Version
The source distribution contains the LaTeX sources to the document, (fractal.ps.gz, or fractal.pdf, about 6MB.) The figures in Appendix B were made from the regression tests for the programs, and can
be reconstructed with the Unix make(1) utility in the ../simulation/test and ../utilities/test directories in the source tree.
Fractal Time Series Analytical Utilities
Source tsderivative.c, for taking the derivative of a time series. The value of a sample in the time series is subtracted from the previous sample in the time series. The derivative time series is
printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsderivative program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsintegrate.c, for taking the integral of a time series. The value of a sample in the time series is added to the previous samples in the time series. The integral time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsintegrate program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tshcalc.c, for calculating the H parameter for a one variable fractional Brownian motion time series.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tshcalc program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tshurst.c, for calculating the Hurst coefficient for a time series. The time series is broken into variable length intervals, which are assumed to be independent of each other, and the R/S
value is computed for each interval based on the deviation from the average over the interval. These R/S values are then averaged for all of the intervals, then printed to stdout. The -r flag sets
operation as described in "Chaos and Order in the Capital Markets," by Edgar E. Peters, pp 81, and should only be used for time series from market data since logarithmic returns sum to cumulative
return; negative numbers in the time series file are not permitted with this option. The ln (R/S) vs ln (time) plot is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tshurst program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tslogreturns.c, is for taking the logarithmic returns of a time series. The value of a sample in the time series is divided by the value of the previous sample in the time series, and the
logarithm of the quotient is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tslogreturns program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannon.c, for calculating the probability, given the Shannon information capacity.
An example output from the tsshannon program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannonmax.c, for calculating unfair returns of a time series, as a function of Shannon probability. The input time series is presumed to have a Brownian distribution. The main function of
this program is regression scenario verification-given an empirical time series, speculative market pro forma performance can be analyzed, as a function of Shannon probability. The cumulative sum
process is Brownian in nature.
To find the maximum returns, the "golden" method of minimization is used.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsshannonmax program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsfraction.c, for finding the fraction of change in a time series. The value of a sample in the time series is subtracted from the previous sample in the time series, and divided by the value
of the previous sample. The fraction time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsfraction program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrms.c, for taking the root mean square of a time series. The value of a sample in the time series is squared and added to the cumulative sum of squares to make a new time series. The new
time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrms program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tslsq.c, for making a least squares fit time series from a time series.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time. The Newton-Raphson method is used for an iterative solution for the probability, p.
An example output from the tslsq program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsnormal.c, for making a histogram or frequency plot of a time series.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsnormal program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tschangewager.c, for changing the unfair returns of a time series. The idea is to change the returns of a time series which is weighted unfairly, by changing the increments by a constant
factor. The main function of this program is regression scenario verification-given an empirical time series, and a "wager" fraction, speculative market pro forma performance can be analyzed. The
input time series is assumed to be cumulative sum with fractional or Brownian characteristics.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tschangewager program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsavg.c, for taking the average of a time series. Each sample in the time series is added to the cumulative sum of the samples, and the cumulative sum divided by the number of samples so far forms the new time series. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsavg program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tssample.c, for sampling a time series. The value of a sample in the time series is printed to stdout only if its position in the series is a multiple of the specified interval.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tssample program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsXsquared.c, for taking the Chi-Square of two time series, the first file contains the observed values, the second contains the expected values.
The input file structures are text files consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character
as the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many
fields, then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsXsquared program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsavgwindow.c, for taking the windowed average of a time series. The value of a sample in the time series is added to the cumulative sum of the samples in the window to make a new time series by dividing the cumulative
sum by the number of samples in the window, for each sample. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsavgwindow program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrmswindow.c, is for taking the windowed root mean square of a time series. The square of the value of a sample in the time series is added to the cumulative sum of the squares of the samples to make a new
time series by dividing the cumulative sum of the squares of the samples by the number of samples, for each sample. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrmswindow program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannonwindow.c, for finding the windowed Shannon probability of a time series. The Shannon probability is calculated by the following method: the average, avg, and root mean square, rms, of the normalized increments are computed over the window, and P = (avg / rms + 1) / 2.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsshannonwindow program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tspole.c, is for single pole low pass filtering of a time series. The single pole low pass filter is implemented from the following discrete time equation: v(t) = v(t - 1) + k * (input - v(t - 1)), where 0 < k <= 1.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tspole program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsdft.c, is for taking the Discrete Fourier Transform (power spectrum) of a time series.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsdft program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsmath.c, for performing arithmetic operations on each element in a time series. The resultant time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsmath program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsdeterministic.c, is for determining if a time series was created by a deterministic mechanism. The idea is to place each element of a time series in an array structure that contains the element
and the next element in the time series, and then sort the array. The array is output and may be plotted. For example, using the program tsdlogistic to make a discrete time series of the logistic,
(quadratic function,) with the command "tsdlogistic -a 4 -b -4 1000 > XXX" and then using this program on the output file, XXX, will result in a plot of a parabola.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsdeterministic program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsstatest.c, for making a statistical estimation of a time series. The number of samples, given the maximum error estimate, and the confidence level required is computed for both the standard
deviation, and the mean.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsstatest program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannonaggregate.c, for calculating the aggregate Shannon probability of many concurrent Shannon probabilities.
An example output from the tsshannonaggregate program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsunfraction.c, is for making a cumulative sum of the fraction of change in a time series. The value of a sample in the time series is multiplied by the running cumulative sum of the time
series, and added to the running sum of the time series. The resultant time series is printed to stdout. (This program is the inverse of the tsfraction program.) Note that V(t) = V(t - 1) + f(t) * V(t - 1) = V(t - 1) * (1 + f(t)).
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsunfraction program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsinstant.c, for finding the instantaneous fraction of change in a time series. The value of a sample in the time series is subtracted from the previous sample in the time series, and divided
by the value of the previous sample. For Brownian motion, random walk fractals, the absolute value of the instantaneous fraction of change is also the root mean square of the instantaneous fraction
of change. Squaring this value is the average of the instantaneous fraction of change, and adding unity to the absolute value of the instantaneous fraction of change, and dividing by two, is the
Shannon probability of the instantaneous fraction of change. The values are printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsinstant program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrunlength.c, is for finding the run lengths of zero free intervals in a time series, which is assumed to be a Brownian fractal. The value of each sample in the time series is stored, and the
run length to a like value in the time series is stored. A histogram of the number of run lengths of each run length value is printed to stdout as tab delimited columns of run length value, positive
run lengths, negative run lengths, and the sum of both positive and negative run lengths, followed by the cumulative sum of the positive run lengths, the cumulative sum of negative run lengths, and
the cumulative sum of both positive and negative run lengths.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrunlength program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrootmean.c, for finding the root mean of a time series. The number of consecutive samples of like movements in the time series is tallied, and the resultant distribution is printed to
stdout; a simple random walk fractal with Gaussian/normal distributed increments would give the combinatorial probabilities, 0.5, 0.25, 0.125, 0.0625, ...
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrootmean program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrunmagnitude.c is for finding the magnitude of the run lengths in a time series. The value of each sample in the time series is stored, and subtracted from all other values in the time
series, each point being tallied as a root mean square. The magnitude deviation is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrunmagnitude program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannonvolume.c, is for finding the fundamental Shannon probability of a time series, given a stock's value, and the number of shares traded, in each time interval. The value of a sample in the time
series is divided by the volume, and added to the cumulative sum of the samples, and the square of the value, after dividing by the volume, is added to the sum of the squares to make a new time
series by dividing both the cumulative sum and the square root of the sum of the squares by the number of samples for each sample. The new time series is printed to stdout as a tab delimited table.
Note: Conceptually, this program is used to "adjust" the Shannon probability of a stock by considering the volumes of trade in a time interval. Unfortunately, the results were not encouraging, and
the concept was abandoned. It is left in the program inventory for future reference.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least two fields, which is the data value of the sample, followed by the volume of the sample, but may contain many
more fields-if the record contains many more fields, then the first field is regarded as the sample's time, and the next to the last field the value, with the last field as the sample's volume at
that time.
Source tsshannonfundamental.c, is for finding the fundamental Shannon probability of a time series, given a stock's value, and the number of shares traded. The value of a sample in the time series is
added to the cumulative sum of the samples, and the square of the value is added to the sum of the squares to make a new time series by dividing the cumulative sum by the number of samples, and the
square root of the sum of the squares divided by the number of samples for each sample. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least two fields, which is the data value of the sample, followed by the volume of the sample, but may contain many
more fields-if the record contains many more fields, then the first field is regarded as the sample's time, and the next to the last field the value, with the last field as the sample's volume at
that time.
Note that since the average of the normalized increments of a time sampled time series goes up linearly with the number of samples in a sampled interval, and the root mean square of the normalized
increments goes up with the square root of the number of samples in a sampled interval, it would be reasonable to assume that the average of the normalized increments would go up linearly
with the trading volume of a stock, and the root mean square would go up with the square root of the trading volume of a stock.
Source tsnumber.c, is for numbering the records of a time series. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
Source tsunshannon.c, is for calculating the Shannon information capacity, (and optimal gain,) given the Shannon probability.
Source tskurtosis.c is for finding the coefficient of excess kurtosis of a time series. The value of a sample in the time series is analyzed to find the running coefficient of excess kurtosis to make
a new time series. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tskurtosis program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tskurtosiswindow.c is for finding the windowed coefficient of excess kurtosis of a time series. The value of a sample in the time series is analyzed to find the running windowed coefficient of excess kurtosis
to make a new time series. The new time series is printed to stdout.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tskurtosiswindow program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsgain.c is for finding the gain of a time series. The value of a sample in the time series is added to the cumulative sum of the samples, and is squared and added to the cumulative sum of
squares, and the Shannon probability, P, is calculated using P = (avg / rms + 1) / 2, where avg is the cumulative sum divided by the number of samples, and rms is the square root of the cumulative sum of squares divided by the number of samples.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsgain program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsgainwindow.c is for finding the windowed gain of a time series. The value of each sample in the time series is added to the cumulative sum of the samples, and its square is added to the cumulative sum of squares.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsgainwindow program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsscalederivative.c, for taking the derivative of a time series. The value of a sample in the time series is subtracted from the previous sample in the time series. The derivative time series
is printed to stdout.
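The differencing step can be sketched in a few lines of C++ (illustrative only, not the tsscalederivative.c source; the sketch takes current minus previous, and the actual source's sign convention may differ):

```cpp
#include <vector>

// First-difference "derivative" of a time series: each output sample is
// the current sample minus the previous one, so the output is one
// sample shorter than the input.
std::vector<double> difference(const std::vector<double> &x)
{
    std::vector<double> d;
    for (std::size_t i = 1; i < x.size(); ++i)
        d.push_back(x[i] - x[i - 1]);
    return d;
}
```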
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a '#' character as
the first non white space character in the record. Data records must contain at least two fields, which are the time followed by the data value of the sample at that time, but may contain many
fields-if the record contains many fields, then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsscalederivative program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsrootmeanscale.c is for finding the root mean of a time series, at different scales. The number of consecutive samples of like movements in the time series is tallied, at different scales, and the resultant value of the distribution, as calculated by using the first value in the distribution, the running mean of the distribution, and the least squares fit of the distribution, is printed to stdout; for a simple random walk fractal with Gaussian/normal distributed increments, these would be the combinatorial probabilities, 0.5, 0.25, 0.125, 0.0625, ...
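In a fair random walk, the probability of a run of n consecutive like movements is 0.5^n, which is where the combinatorial sequence 0.5, 0.25, 0.125, 0.0625, ... comes from. A one-line check (illustrative, not from the source):

```cpp
#include <cmath>

// Probability of n consecutive like movements in a fair random walk,
// 0.5^n, computed exactly as a power of two via ldexp.
double runProbability(int n)
{
    return std::ldexp(1.0, -n); // 1.0 * 2^(-n)
}
```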
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsrootmeanscale program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tskalman.c, for taking the Kalman filtered average of a time series. The n'th running Kalman filtered linear average, A, of a time series is calculated by:
A_n = ((n - 1) / n) A_(n-1) + (1 / n) a_n
where a_n is the n'th value in the time series. The new time series of the running Kalman filtered average is printed to stdout.
Note the similarity to the running average:
A_n = (1 / n) (a_1 + a_2 + ... + a_n)
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tskalman program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Fractal Time Series Simulation Utilities
Source tsbrownian.c, brownian noise generator-generates a time series. The idea is to produce a 1/f squared power spectrum distribution by running a cumulative sum on white noise.
An example output from the tsbrownian program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
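The cumulative-sum construction can be sketched as follows; the uniform white-noise source and the seeding are assumptions for illustration, not details of tsbrownian.c:

```cpp
#include <random>
#include <vector>

// Brownian-noise sketch: running cumulative sum of zero-mean white noise.
std::vector<double> brownian(int n, unsigned seed)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> white(-1.0, 1.0); // white noise
    std::vector<double> series(n);
    double sum = 0;
    for (int i = 0; i < n; ++i) {
        sum += white(gen); // cumulative sum turns 1/f^0 into 1/f^2
        series[i] = sum;
    }
    return series;
}
```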
Source tsblack.c, black noise generator-generates a time series. The idea is to produce a 1/f cubed power spectrum distribution by running a cumulative sum on pink noise which is made by running a
cumulative sum on relaxation processes which are generated by a white noise generator.
An example output from the tsblack program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsfractional.c, fractional brownian noise generator-generates a time series. The idea is to produce a 1/f squared power spectrum distribution by running a cumulative sum on a Gaussian power
spectrum distribution.
An example output from the tsfractional program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsgaussian.c, Gaussian noise generator-generates a time series. The idea is to produce a Gaussian power spectrum distribution.
An example output from the tsgaussian program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tswhite.c, white noise generator-generates a time series. The idea is to produce a flat power spectrum distribution.
An example output from the tswhite program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tspink.c, pink noise generator-generates a time series. The idea is to produce a 1/f power spectrum distribution by running a cumulative sum on relaxation processes which are generated by a
white noise generator.
An example output from the tspink program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsunfairbrownian.c, unfair returns of a time series. The idea is to produce the returns of a time series which is weighted unfairly, by a Shannon probability, p, or alternately, a fraction of
reserves to be wagered on each time increment. The input time series is presumed to have a Brownian distribution. The main function of this program is regression scenario verification-given an
empirical time series, a Shannon probability, or a "wager" fraction, (which were probably derived from the program tsshannon,) speculative market pro forma performance can be analyzed. The cumulative
sum process is Brownian in nature.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsunfairbrownian program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tscoin.c, brownian noise generator, with unfair bias, and cumulative sum-generates a time series. The idea is to produce a 1/f squared power spectrum distribution by running a cumulative sum
on white noise. The program accepts an unfair bias and a wager factor.
An example output from the tscoin program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tscoins.c, fractional brownian noise generator, with unfair bias, and cumulative sum-generates a time series. The idea is to produce a 1/f squared power spectrum distribution by running a
cumulative sum on a Gaussian power spectrum distribution. The program accepts an unfair bias and a wager factor.
An example output from the tscoins program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsfBm.c, fractional brownian noise generator-generates a time series. The idea is to produce a programmable power spectrum distribution.
Example outputs from the tsfBm program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tslogistic.c, logistic function generator-generates a time series.
An example output from the tslogistic program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsdlogistic.c, discrete logistic function generator-generates a time series. The idea is to iterate the function x(t) = x(t - 1) * (a + b * x(t - 1)).
An example output from the tsdlogistic program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
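The iteration is easy to reproduce (illustrative C++, not the tsdlogistic.c source). Note that with a = 4 and b = -4 this is the classic chaotic logistic map x(t) = 4 x(t-1) (1 - x(t-1)):

```cpp
#include <vector>

// Iterate the discrete logistic map x(t) = x(t-1) * (a + b * x(t-1)),
// returning the full trajectory including the initial value.
std::vector<double> logisticSeries(double x0, double a, double b, int steps)
{
    std::vector<double> xs{x0};
    for (int t = 0; t < steps; ++t)
        xs.push_back(xs.back() * (a + b * xs.back()));
    return xs;
}
```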
Source tsstockwager.c, stock capital investment simulation. The idea is to simulate an optimal wagering strategy, dynamically determining the Shannon probability by counting the up movements in a
stock's value in a window from the stock's value time series, and using this to compute the fraction of the total capital to be invested in the stock for the next iteration of the time series, which
is 2P - 1, where P is the Shannon probability.
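The windowed estimate can be sketched as follows; the function name and the handling of a degenerate window are assumptions for illustration, not details of tsstockwager.c:

```cpp
#include <vector>

// Estimate the Shannon probability P as the fraction of up movements in
// a window of the value time series, and return the fraction of capital
// to wager on the next iteration, f = 2P - 1.
double wagerFraction(const std::vector<double> &window)
{
    int ups = 0, moves = 0;
    for (std::size_t i = 1; i < window.size(); ++i) {
        if (window[i] > window[i - 1]) ++ups;
        ++moves;
    }
    double P = moves ? static_cast<double>(ups) / moves : 0.5;
    return 2.0 * P - 1.0;
}
```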
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsstockwager program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsbinomial.c, is for generating binomial distribution noise, with unfair bias, and cumulative sum-generates a time series. The idea is to produce a 1/f squared power spectrum distribution by
running a cumulative sum on a binomial distribution. The program accepts an unfair bias and a wager factor.
This program is a modification of the program tscoin. The wager fraction is computed by first calculating the optimal wager fraction, f = 2P - 1, where P is the Shannon probability, and f is the
optimal wager fraction, (which is the root mean square = standard deviation of the normalized increments of the time series,) and then reducing this value by the standard deviation of the binomial
distribution, which is the square root of the number of elements in the distribution, ie., the root mean square of the normalized increments of the cumulative sum is the same as the standard
deviation of the binomial distribution.
An example output from the tsbinomial program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Note: Conceptually, this program is used to "weight" the returns of a time series with a Gaussian distribution, ie., produce a fractional Brownian motion time series, as opposed to a Brownian
distribution which would produce a Brownian time series, as produced by the program tsunfairbrownian. Unfortunately, the precision of the results were not encouraging, and the concept was abandoned.
It is left in the program inventory for future reference.
Source tsunfairfractional.c, unfair returns of a time series. The idea is to produce the returns of a time series which is weighted unfairly, by a Shannon probability, p. The input time series is
presumed to have a Gaussian distribution. The main function of this program is regression scenario verification-given an empirical time series, a Shannon probability, and a "wager" fraction, (which
were probably derived from the program tsshannon,) speculative market pro forma performance can be analyzed. Uses Newton-Raphson method for an iterative solution for the inverse function of the
normal function. Also iterates, using Romberg integration, to calculate the cumulative interval value of the normal function.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsunfairfractional program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsintegers.c, integers function generator-generates a time series.
An example output from the tsintegers program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsshannonstock.c, is for simulating the gains of a stock investment using Shannon probability.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsshannonstock program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsmarket.c, is for market simulation by fractional brownian noise generation, with unfair bias, and cumulative sum-generates a time series. The idea is to produce a 1/f squared power spectrum
distribution for each company in an industrial market by running a cumulative sum on a Gaussian power spectrum distribution. The aggregate of all companies participating in the market is obtained by
summing the production of the individual companies. The program accepts an unfair bias and a wager factor, and the number of companies in the market.
An example output from the tsmarket program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsstock.c, is for simulating the gains of a stock investment using Shannon probability.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
An example output from the tsstock program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tsstocks.c, is for simulating the optimal gains of multiple stock investments. The program decides which of all available stocks to invest in at any single time, by calculating the
instantaneous Shannon probability of all stocks, and using an approximation to statistical estimation techniques to estimate the accuracy of the calculated Shannon probability.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a `#' character as
the first non white space character in the record. Data records must contain at least one field, which is the data value of the sample, but may contain many fields-if the record contains many fields,
then the first field is regarded as the sample's time, and the last field as the sample's value at that time.
Source tstrade.c is for simulating the optimal gains of multiple equity investments. The program decides which of all available equities to invest in at any single time, by calculating the
instantaneous Shannon probability of all equities, and using an approximation to statistical estimation techniques to estimate the accuracy of the calculated Shannon probability.
The input file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a '#' character as
the first non white space character in the record. Each data record represents an equity transaction, consisting of a minimum of six fields, separated by white space. The fields are ordered by time
stamp, equity ticker identifier, maximum price in time unit, minimum price in time unit, closing price in time unit, and trade volume. The existence of a record with more than 6 fields is used to
suspend transactions on the equity.
Source tstradesim.c is for generating a time series for the tstrade program. Generates a fractal time series, of many stocks, concurrently.
The input file is organized, one stock per record, with each record having up to five fields, of which only the Shannon probability need be specified. The fields are sequential, in any order, with the field type specified by a single letter-P for Shannon probability, F for wager fraction, N for trading volume, and I for initial value. Any field that is not one of these letters is assumed to be the stock's name.
The output file structure is a text file consisting of records, in temporal order, one record per time series sample. Blank records are ignored, and comment records are signified by a '#' character
as the first non white space character in the record. Each data record represents an equity transaction, consisting of a minimum of six fields, separated by white space. The fields are ordered by time
stamp, equity ticker identifier, maximum price in time unit, minimum price in time unit, closing price in time unit, and trade volume. The existence of a record with more than 6 fields is used to
suspend transactions on the equity.
Source tscauchy.c, Cauchy distributed noise generator-generates a time series. The idea is to produce a 1 / f power spectrum distribution.
An example output from the tscauchy program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
Source tslognormal.c is for changing the distribution of a time series to a log-normal distribution. The value of a sample in the time series is subtracted from the previous sample in the time
series, and divided by the value of the previous sample. This value is multiplied by its exponentiation (i.e., e-to-the-power), and the log-normal fractional time series is printed to stdout.
An example output from the tslognormal program appears in Appendix B of the document, (fractal.ps.gz, or fractal.pdf, about 6MB.)
A license is hereby granted to reproduce this software source code and to create executable versions from this source code for personal, non-commercial use. The copyright notice included with the
software must be maintained in all copies produced.
So there.
Copyright © 1994-2011, John Conover, All Rights Reserved.
Comments and/or bug reports should be addressed to:
John Conover
January 6, 2006
Introduction to Shading
Spherical Light
As mentioned in the previous chapter, this is merely a high-level introduction to light and shadows. Readers interested in studying this topic further are invited to read the lessons A Creative Dive
into BRDF, Linearity, and Exposure and Introduction to Lighting. While these lessons are also part of the beginner section, they delve deeper into the topic of light implementation. You can then
continue with the lessons from the advanced section.
Structure of this Chapter
In this chapter, we will learn how to simulate spherical (or point) lights and how to cast shadows with spherical lights.
Spherical Lights
We have already introduced the concept of directional light sources, which can be used to simulate distant light sources such as the sun. Directional sources are light sources that are so far away
from the scene that the light rays they emit can be considered parallel to each other. In other words, a distant light source emits rays in a single direction.
Figure 1: Compute the light direction for point light sources.
Spherical light sources are different. They are not distant like directional light sources but local. A spherical light can be represented as a single point of light in 3D space from which light is
emitted radially from the point of emission. For this reason, they are also frequently called point light sources. Point light sources can be used to simulate things such as the flame of a candle or
a light bulb. Although, as mentioned in the chapter on directional light sources, spherical or point light sources are also considered light sources with no size. In other words, we model them as an
ideal point in space from which light is emitted radially, but such an object does not exist in nature. The flame of a candle or a light bulb may well emit light radially, though they have a size.
This is not a problem right now, but keep in mind that so far, our lights are considered to be infinitesimally small in size. Such light sources are often referred to as delta lights, as explained in
the introductory chapter on lights. When spherical lights emit light equally in all directions, we say they are isotropic.
How do we simulate point light sources? First, we need to consider the way light is emitted: radially. From a coding perspective, this can be simulated by simply tracing a line from the point that is
being shaded (\(P\)) to the spherical light position. This line will indicate the direction of the light ray emitted by the point light source towards \(P\), as shown in Figure 1.
From a coding perspective, all we need to do to define a point light source is to add a member variable to the Light base class to keep track of its position in space. We will assume that the point
light source is created at the origin of the world coordinate system. To modify its position in 3D space, we will use the light-to-world transformation matrix (as we did to modify the directional
light source's direction):
class PointLight : public Light
{
public:
    Vec3f pos; // Position of the light in world space
    PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
    { l2w.multVecMatrix(Vec3f(0), pos); }
};
Since the light direction depends on the position of \(P\) and the position of the light source in 3D space, we will add a getDirection() function to the Light base class. This function will take the
point \(P\) as an argument and return a normalized light direction. For directional light sources, this direction is constant. For spherical light sources, it must be computed explicitly:
class PointLight : public Light
{
public:
    Vec3f pos; // Position of the light
    PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
    { l2w.multVecMatrix(Vec3f(0), pos); }
    // P: is the shaded point
    Vec3f getDirection(const Vec3f &P) const { return (P - pos).normalize(); }
};
Figure 2: Light energy emitted from a point light source is distributed across the surface of a sphere of radius \(r\). There is, of course, an infinity of such spheres centered around the point light source.
Spherical light sources differ from directional light sources in one more aspect. Let's consider point light sources first. As mentioned earlier in this chapter, spherical light sources emit light
from a single point in space. The light energy emitted from that point is distributed across the surface of a sphere, as shown in Figure 2. Without delving into the details (which would involve a
more thorough understanding of radiometry), let's simply state for now that the entire light energy or power of the point light source is redistributed over that sphere. As with diffuse surfaces, the
amount of light arriving on a small differential area \(dA\) is somewhat proportional to the amount of light contained within a small patch of the sphere with radius \(r_1\), roughly equal in area to
\(dA\), as shown in the illustration below. However, as you can see in the same illustration, as the sphere expands, the area of the differential solid angle \(d\omega\) that matched the area of the
differential area \(dA\) also increases. In other words, the energy that was concentrated over \(d\omega\) is now spread across a much larger area. Consequently, \(P_2\) receives less light than \(P_1\), or to put it differently, \(P_2\) will appear darker than \(P_1\) (the amount of light \(P_2\) receives is proportional to the green solid angle over the red solid angle).
This effect can easily be observed with any real point light source, such as a light bulb. Objects that are closer to the source are brighter. The contribution of the light seems to decrease as the
distance from the light source to the objects increases. The question now is to determine what governs this falloff.
The amount of light energy arriving at a point in the scene from a point light source depends on the area of the "sphere," which itself depends on the sphere's radius. Note that this sphere doesn't actually exist; we use this image merely to help you visualize the process. When we speak of a sphere, what we actually mean is a sort of virtual sphere centered around the light source's position, with a radius equal to the distance from the light source to \(P\), the point in the scene that we wish to shade. What we are trying to determine is how much light arrives at \(P\) from that point light source. As suggested, the contribution of the point light source depends on the area of the sphere of radius \(r\):
$$A = 4\pi r^2.$$
The intensity of the light arriving at \(P\) is inversely proportional to the sphere's area. In other words:
$$L_i = \dfrac{\text{light intensity * light color}}{4 \pi r^2}.$$
For a more formal explanation, please refer to the lesson on radiometry.
Where \(r\) is equal to the distance between the light position and \(P\). This falloff is known in CG as the square falloff. Note that when the radius of the sphere doubles (if the distance between
the point light source and \(P\) doubles), the area of the sphere is multiplied by 4, and thus the light contribution is four times smaller. More generally, this kind of light attenuation follows
what we call the inverse-square law. The profile of this falloff can be seen in the following figure.
As you can observe from this graph, the contribution of a point light source decreases very rapidly with distance.
To implement this square falloff, we will make a small modification to the getDirection method, which we will rename getDirectionAndIntensity. As you may have guessed, this is also where we will compute the light intensity for a given \(P\). We do so because the method already computes the vector connecting the light to \(P\) for the light direction; we can use the square of this vector's length to attenuate the light intensity according to the inverse-square law:
class PointLight : public Light
{
public:
    Vec3f pos; // Position of the light
    PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
    { l2w.multVecMatrix(Vec3f(0), pos); }
    // P: is the shaded point
    void getDirectionAndIntensity(const Vec3f &P, Vec3f &lightDir, Vec3f &lightIntensity) const
    {
        lightDir = P - pos; // Vector from the light to P
        float r2 = lightDir.norm(); // norm() returns the squared length here
        lightIntensity = intensity * color / (4 * M_PI * r2);
    }
};
Spherical Lights & Shadows
Computing shadows with spherical lights is quite straightforward. We can use similar techniques as those used for distant light sources. We simply need to compute the light direction and trace a ray in the opposite direction. Computing the light direction is straightforward: we trace a line from the light source to \(P\), the shaded point. By normalizing the resulting vector, we obtain the light direction:
class PointLight : public Light
{
public:
    Vec3f pos;
    PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
    { l2w.multVecMatrix(Vec3f(0), pos); }
    // P: is the shaded point
    void getDirectionAndIntensity(const Vec3f &P, Vec3f &lightDir, Vec3f &lightIntensity) const
    {
        lightDir = P - pos;
        float r2 = lightDir.norm();
        lightIntensity = intensity * color / (4 * M_PI * r2);
    }
};
Figure 3: Any intersection whose distance is greater than the distance between \(P\) and the light can be ignored.
However, there is a caveat. Directional lights, by definition, are extremely far away—considered to be at infinity. This is not the case with spherical lights. If we trace a shadow ray in the
direction of the light, this ray could very well overshoot the point light source and intersect an object further away than the source. This would incorrectly suggest that the point is in shadow
when, in fact, \(P\) and the light are visible to each other (as shown in Figure 3). The solution to this problem is to set the ray's maximum length, or \(tNear\), to the distance between \(P\) and
the light position (the distance we denoted \(r\)). That way, the trace() function will never accept an intersection whose distance is greater than \(r\). In other words, if an intersection is found
but the intersection distance is further away than the distance between \(P\) and the light, then it will be ignored. Since we already computed the square of that distance in the
getDirectionAndIntensity() function, we will just add a variable to the method that will be set to the distance between \(P\) and the light:
class PointLight : public Light
{
public:
    Vec3f pos;
    PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
    { l2w.multVecMatrix(Vec3f(0), pos); }
    // P: is the shaded point
    void getShadingInfo(const Vec3f &P, Vec3f &lightDir, Vec3f &lightIntensity, float &dist) const
    {
        lightDir = P - pos;
        // Compute the square distance
        float r2 = lightDir.norm();
        dist = sqrtf(r2);
        // Normalize the incident light ray direction
        lightDir.x /= dist, lightDir.y /= dist, lightDir.z /= dist;
        // Apply square falloff
        lightIntensity = intensity * color / (4 * M_PI * r2);
    }
};
Then we will set the shadow ray \(tNear\) variable to the result of dist. This can be done directly if we pass the isectShad.tNear variable directly to the getShadingInfo() method of the light (we
can change the method's name in the process to make it more generic).
IsectInfo isectShad;
light->getShadingInfo(hitPoint, lightDir, lightIntensity, isectShad.tNear);
bool vis = !trace(hitPoint + hitNormal * options.bias, -lightDir, objects, isectShad, kShadowRay);
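The distance clamp described above amounts to a comparison like the following; the function name is hypothetical (the real test lives inside the trace() routine), but it isolates the rule that only intersections closer than the light can occlude it:

```cpp
// An intersection at parametric distance tHit along the shadow ray only
// occludes the light if it lies strictly between P and the light, which
// sits at distance maxDist along the ray.
bool occludes(double tHit, double maxDist)
{
    return tHit > 0.0 && tHit < maxDist;
}
```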
What's Next?
We can now simulate diffuse surfaces and the effects of directional and point light sources. However, so far, we have only rendered scenes containing one light source. How we can extend the technique
to multiple light sources will be the topic of the next chapter.
|
{"url":"https://scratchapixel.com/lessons/3d-basic-rendering/introduction-to-shading/shading-spherical-light.html","timestamp":"2024-11-08T10:26:11Z","content_type":"text/html","content_length":"19359","record_id":"<urn:uuid:8b6dda02-5ba5-4b86-924b-2672f254d478>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00623.warc.gz"}
|
How to Prepare Coco Coir Bricks to Grow Microgreens - Home Microgreens
We love to use coconut coir as the main ingredient of our microgreen growing media.
We currently use bags of loose coir, but we would like to purchase coco coir bricks as it would be more economical and take up less storage room.
Using coco coir bricks would also be ideal for the home microgreen grower. But for every positive aspect, there can be a negative factor to consider and work around.
In this article, we will go over the pros & cons of coco coir bricks. We will also explain the following.
• Show how to expand the coco coir bricks in a video;
• Determine how much water to use to expand a brick;
• Provide ideas on how to store the extra coir;
• Measure how much loose coir is in a coco coir brick; and
• Discuss how well pure coco coir grows microgreens.
Coco Coir Bricks – What Are They?
If you are unfamiliar with coconut coir (what it is, where it comes from, how it is processed, its advantages, and its renewable & sustainable properties), we have published an in-depth article on the subject that has become very popular.
Therefore, we won’t touch on those topics. Instead, we will concentrate on managing coco coir bricks and how much loose coir is contained in a brick.
But first, the pros and cons of coco coir bricks.
Coco Coir Bricks Pros vs. Cons
Our thoughts on coconut coir bricks and their usage (in no particular order).
• Compact size takes up little room.
• Brick is a convenient shape – easy to stack and store.
• Reduces shipping costs both for seller and buyer.
• Tightly sealed & dry – much less chance of fungus gnat infestation.
• Needs preparation before use.
• Once expanded, excess moisture can cause mold issues in leftover coir.
• Pure coconut coir needs additives for the best plant growth.
• We don’t recommend thoroughly wetting soil when planting microgreens. Often, the coir from the bricks is wet.
• We have wasted coco coir from bricks because of improper handling.
We will discuss each of these in more detail later in the article. But first, let’s expand a brick of coconut coir!
Below is a video of a coco coir brick expanding. The data collected is presented below and manipulated to assess other-sized bricks for your use.
We go deep into the numbers later in the article to determine the best way to buy coconut coir.
I intended to fast-forward through much of this, but I brought up some good points throughout the video.
I was surprised by how this coco coir brick expanded. I expected more from it, like the coco coir puck in my Instagram video.
As you saw in the Instagram video (@myviewfromthewoods), I indiscriminately poured water into the nursery pot.
The pot has holes in the bottom, and any excess water will drain out. Also, I was planting tomato sets, and extra moisture isn’t a problem.
Why Moisture Is A Problem With Coco Coir Bricks
If you’re using all of the coconut coir from a brick, moisture isn’t a problem. However, in most cases, when planting microgreens, you only need a small amount of coir compared to how much a brick
will expand into (more on this later).
So the extra coconut coir will need to be stored, and if it holds too much water, mold will quickly grow on the coir.
You can leave the top off the container and allow the water to evaporate and the coir to dry out.
However, what can happen is that fungus gnats will be attracted to the moist soil and lay eggs. From then on, all hell breaks loose.
Where do fungus gnats come from anyway? I’ll have to research that.
So adding enough water to the brick without over-saturating the coconut coir is the trick we need to learn.
Coconut Coir Moisture Content Is Deceiving
As you saw in the video, coconut coir has the uncanny ability to look perfectly moist, but in reality, it’s saturated.
Even the fluffy coconut coir in the video dripped water and flowed out in a steady stream.
This is too wet to plant microgreen seeds on, especially those susceptible to damping off disease, such as beets and Swiss chard, or those that take more than a couple of days to germinate.
Coco Coir Bricks By The Numbers
Before we get to recommendations, let’s review the numbers and make some assumptions.
First, the facts.
Dimensions and Expansion
The video’s Coco Bliss coco coir brick dimensions are 8- by 4- by 2 inches.
That makes the volume of the compressed brick 64 cubic inches.
Once the coco coir brick was expanded, the volume was measured to be 476 cubic inches.
That is 8.25 quarts for those familiar with the 8-quart bags sold by garden centers.
Therefore the coco coir brick expanded 7.4 times its size.
Can We Assume That All Bricks Are Compressed The Same?
No, I don’t think we can.
Here’s why.
1.4-Pound Brick
Our compressed brick of coir was 64 cubic inches. It weighed 1.45 pounds (we round to 1.4 in the calculations below).
To normalize the bricks, we will calculate how many cubic inches are in a pound of compressed coconut coir.
64 divided by 1.4 is 45.7 cubic inches per pound.
10 Pound Brick
According to Amazon, the 10-pound Coco Bliss (same brand) coir brick is 12 by 12 by 5.5 inches. That is 792 cubic inches.
792 divided by 10 is 79.2 cubic inches per pound.
The Smaller Brick is Much More Dense
The smaller 1.4-pound coco coir brick is 1.7 times as dense as the 10-pound brick. That is the same as saying that, for the same compressed volume, the smaller brick contains 1.7 times more coir.
I’m not going to compare all the brands, but you can compare how much coir you will get from any coconut coir brick from the example above.
How Much Coconut Coir in a 10-Pound Brick
Assuming that a given weight of coir expands in proportion to how tightly it was compressed, we can estimate how many quarts of expanded coir we will get from the 10-pound Coco Bliss brick.
The inverse of 1.7 is 0.59. From the 1.4-pound brick, we can calculate that we will get 340 cubic inches of expanded coir for each pound of compressed coir (476 divided by 1.4).
Since the 10-pound brick is 0.59 times as dense, we should get about 200.6 cubic inches for each pound.
So a 10-pound brick would yield 2,006 cubic inches or 34.7 quarts, which is 1.2 cubic feet.
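The chain of arithmetic above can be checked with a few lines of Python (our own sketch; the brick dimensions and the 476 cubic-inch measurement come from the article — note that exact arithmetic gives about 34 quarts where the article's intermediate rounding gives 34.7):

```python
QUART_IN3 = 57.75  # cubic inches per US liquid quart

# Compressed volumes and weights from the article.
small_vol, small_lb = 8 * 4 * 2, 1.4        # 64 in^3, 1.4-lb brick
big_vol, big_lb = 12 * 12 * 5.5, 10.0       # 792 in^3, 10-lb brick

small_specific = small_vol / small_lb       # ~45.7 in^3 per pound compressed
big_specific = big_vol / big_lb             # ~79.2 in^3 per pound compressed
ratio = big_specific / small_specific       # ~1.73: the small brick is denser

expanded_small = 476                        # measured expanded volume, in^3
per_lb_expanded = expanded_small / small_lb # ~340 in^3 of loose coir per pound
big_yield = per_lb_expanded / ratio * big_lb

print(round(small_specific, 1), round(big_specific, 1), round(ratio, 2))
print(round(big_yield / QUART_IN3, 1), "quarts from the 10-pound brick")
```

You can swap in any other brick's dimensions and weight to compare brands the same way.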
Most bagged loose soil is sold in 1.5 or 2 cubic feet bags.
From these numbers, we can make cost comparisons to soil media products in stores.
We can also determine if it’s wiser to buy more small bricks or larger bricks.
How Much Water is Needed to Expand a Coco Coir Brick?
The value we recommend below is something we have long been searching for.
Most videos or articles say to add water and dump in a random amount.
Maybe they plan on using all of the coir?
I don’t know.
I know that I have thrown quite a bit of coconut coir into the compost heap because it was stored too wet, and mold grew on it.
Remember, we want to expand the coir to the moisture level to store it without mold growing on it.
Seems like a simple problem to solve, but as the video showed you, coconut coir is good at storing water and not looking wet.
We use about two-thirds of a gallon of water (about 2.4 liters) with the 1.4-pound brick.
Sorry to switch between units here, but for liquid measure, liters are easier, and all measuring cups also have liter measurements on them.
As mentioned in the video, 2.4 liters was too much water.
Even trying to be careful, I overdid it.
Doing it again, I would start with 2 liters of water and, if necessary, allow the coir to sit awhile and absorb the water slowly as not to over-wet the coir.
Can We Use The Recommended 1.4-pound Value on Other Bricks?
Truthfully, I'm not sure.
If I tried to expand the 10-pound brick referenced above, I would normalize the water amount to a pound of coir and multiply by the density difference.
The normalized value would be 2-liter (2,000 mL) divided by 1.4 or about 1,430 mL per pound.
Since the 10-pound brick is 0.57 times as dense, I’d use 815 mL per pound of coir. So for the 10-pound brick, I’d start with 8 liters of water and see how it goes.
Using Other Brands of Coco Coir Bricks
You should be able to use my numbers with other brands of coco coir bricks. Find their density, compare it to the 1.4-pound Coco Bliss brick, and go from there.
Storing Expanded Coco Coir
As mentioned, I’ve thrown out quite a bit of coir when messing around with hydroponics.
I’d mix it in a 5-gallon pail, use what I needed and store the rest with a tight lid. The next time I went to use some, sure enough, what remained in the bucket was covered in mold.
This is why I’m so concerned with finding the perfect amount of water!
Instead of pails, I now store soil in totes—either the HDX or Rubbermaid brands.
The tops seal tight enough to keep insects out, but they do breathe some and allow air exchange.
Overwet or saturated coco coir will still mold if you’re not careful.
How Wet is Too Wet?
If you grab a small handful of coconut coir and squeeze it, water should not drip.
Your hand may come away moist, but if water drips out when you squeeze, you shouldn't store the coir in a tightly covered container.
The Coir in the Video – How to Dry Out Wet Coir
I did overwet the coir in the video. Water was dripping out even after quite a bit of mixing.
To dry it out, I placed the container in front of a fan (not so close, the coir flew out) and let some of the water evaporate.
When the top dried out and was light brown, I mixed it up again to let the dry coir absorb some of the wetness and bring some moisture up to let the fan dry it.
I now have a beautiful container of coco coir that smells like earth and not mold.
How Well Does Coir Grow Microgreens?
Honestly, not well.
At least not pure coir.
Below are links to some articles that show how well microgreens germinate in coir but then stall out.
The Best Soil For Microgreens
Which Soil Is Best For Slow-growing Microgreens?
However, according to my tests, nothing performs better once the right additives are mixed in.
I have run dozens of trials, most documented on this website.
These show that, with the correct additives, coco coir outperforms peat moss-based potting mixes and any fiber mat.
One of the purposes of expanding the coco coir brick is to use the result to test organic liquid fertilizers and vermicompost in pure coir to see how well microgreens grow.
We don’t give up trying to find the best combination to grow microgreens. We present the tests, and you can make up your own mind on how you want to grow microgreens.
Are Coco Coir Bricks a Value?
I think this question is premature.
As it is, pure coir will not grow the best microgreens. So although we can provide a price comparison without the additives’ cost, it’s a moot point.
But as mentioned, we will show those results in future articles and add those costs to the ones shown below. You will have a better idea of the cost differences.
1.4-pound & 10-pound Coco Coir Brick Cost
It used to be that you could purchase 1.4-pound coco coir bricks in any quantity from 2 to 50, but now they sell them in 5-packs.
I recommend the Coco Bliss brand because it’s certified organic and one of the larger companies. We hope that their quality control is top-notch.
Coco Bliss now also offers 250-gram bricks (~1/2-pound). These might even be better for home microgreen growers.
The density of the 250-gram brick is between that of the 1.4- and 10-pound bricks — about 30% less coconut coir per pound, or 5.8 quarts per pound. So a 250-gram brick should produce about 3.2 quarts
of expanded coconut coir.
You can see the current price from Amazon below.
Strange, but Coco Bliss only offers a minimum of 20 1.4-pound bricks at the time of publishing. Hopefully, that changes.
Cost Per Quart Coconut Coir
Using the prices when this article is published, let’s calculate the cost per quart of expanded coconut coir. From this, you can compare the costs with your favorite microgreen soil.
• 250-gram brick is $2.40
• 1.4-pound brick is $4.00
• 10-pound brick is $35
We will consider the density difference of the bricks and assume that this will normalize the expansion.
250-gram Brick of Coco Coir*
A brick should expand to 3.2 quarts, so a quart will cost about 75 cents.
*but you have to buy 10…
1.4-pound Brick of Coco Coir*
A brick expands to 8.25 quarts, so a quart will cost 48 cents.
*but you have to buy 20…hopefully, they will offer fewer bricks.
10-pound Brick of Coco Coir
A brick should expand to 34.7 quarts, so a quart will cost about $1.01.
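The cost-per-quart arithmetic above is easy to reproduce (a small sketch of our own, using the prices and expanded yields quoted in this article; prices will have changed since publication):

```python
# (price in USD, expanded yield in quarts) for each brick size, per the article
bricks = {
    "250-gram": (2.40, 3.2),
    "1.4-pound": (4.00, 8.25),
    "10-pound": (35.00, 34.7),
}

for name, (price, quarts) in bricks.items():
    print(f"{name}: ${price / quarts:.2f} per quart")
```

To compare against bagged soil, add its price and bag volume (in quarts) as another entry.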
Coco Coir Brick Takeaways
Here are some points to consider when deciding whether to use and expand coco coir bricks to grow microgreens.
With Coco Coir Bricks Density Matters
Always check the density of the coco coir bricks when comparing the cost. The product listing should contain the size of the bricks.
Note that what we call density here is really volume per pound: the volume (length x width x height) divided by the weight. The fewer cubic inches per pound, the denser the brick.
What Brand to Buy?
We have always used Coco Bliss bricks and found them consistent and always clean.
Other brands might be as good, but we have no experience with them. Below are the top sellers on Amazon.
If you’ve used any of these or another brand, comment below and let us know how you liked the product.
Use Limited Water
Expand the bricks slowly, adding a little water at a time. Overwet coconut coir will not store well and, in most cases, will grow mold.
Even if you plan on using all of the coir, don’t oversoak it, as it retains a lot of water and may impact seed germination.
Saves on Shipping Costs
Coconut coir bricks will save you money on shipping. Remember, even "free" shipping isn't free; it's built into the cost. The compressed nature of the bricks makes them much more ideal for shipping than loose soil.
You Need More Than Coir to Grow Plants
Coconut coir will germinate plants well, but it needs additives (granular or liquid) to grow good plants.
It is basically nutrient deficient.
Calculating How Much Water to Use
For a brick with a density of 0.022 pounds per cubic inch, we recommend 1.4 liters of water per pound of compressed coir.
To estimate how much you should use for other blocks, find your brick’s density and divide it by 0.022. Then multiply that value by 1.4 to determine how many liters of water you need to add per pound
of your brick.
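That rule translates directly into a small helper function (our own sketch — the function name is made up; the 0.022 lb per cubic inch reference and the 1.4 liters-per-pound factor come from the recommendation above):

```python
def water_liters(length_in, width_in, height_in, weight_lb):
    """Estimate liters of water to start with when expanding a coir brick."""
    volume = length_in * width_in * height_in   # brick volume, cubic inches
    density = weight_lb / volume                # pounds per cubic inch
    liters_per_lb = (density / 0.022) * 1.4     # scale the reference rule
    return liters_per_lb * weight_lb

# The 1.4-pound Coco Bliss brick (8 x 4 x 2 inches): start with about 2 liters.
print(round(water_liters(8, 4, 2, 1.4), 1))
# The 10-pound brick (12 x 12 x 5.5 inches): about 8 liters, matching the text.
print(round(water_liters(12, 12, 5.5, 10), 1))
```

Start with this amount, let the coir absorb it, and only add more if needed — under-wetting is easy to fix, over-wetting is not.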
More Articles on Using Coir with Additives
This post’s actual purpose is to set up a series of articles using a variety of additives and fertilizers with pure coco coir to see how well it performs.
Of course, we will compare how the additives perform against our Home Microgreens potting mix and against pure coir.
You should look for these kinds of tests. Many growers show you microgreens, and they look great. Still, judging is tough without comparing them to a known top performer and a control tray.
We will link the articles here once they have been published.
|
{"url":"https://homemicrogreens.com/coco-coir-bricks/","timestamp":"2024-11-12T07:03:56Z","content_type":"text/html","content_length":"519090","record_id":"<urn:uuid:cfca4d04-e616-4703-bf47-fff5c120c6c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00235.warc.gz"}
|
Wilberd van der Kallen
Hecke eigenforms
The programs used for the paper
Hecke eigenforms in the Cuspidal Cohomology of Congruence Subgroups of SL(3,Z)
Experimental Mathematics
6 (1997), 163-174,
have been packed in a zipped tar file (cf. the instructions to unpack).
Intersection Cohomology
We have
implemented the recursions
in the paper
T.A. Springer, Intersection Cohomology of B×B orbits in group compactifications
, Journal of Algebra 258 (2002) 71-111.
Lenstra Lenstra Lovász
Our implementations of the extended Lenstra Lenstra Lovász algorithm (with integer arithmetic and allowing dependent generators) try to use smaller integers than, say, the implementation in
Version 1.38.71. One can prove
complexity bounds
for our algorithm that are similar to those proved for the original LLL algorithm. One can also do
such an analysis
for the Hermite Normal Form algorithm of Havas, Majewski, Matthews. One may use the same principles to avoid coefficient explosion in a straightforward
GCD based Hermite Normal Form algorithm
. It is just a matter of understanding where the explosion is produced. Severe countermeasures like modular arithmetic are quite unnecessary.
Shortest vector in a lattice
depends on the Mathematica Package
Style files for LaTeX
To make an Index with pagerefs under LaTeX
To make many Indexes under LaTeX
Chladni plates
These pictures
have been made with a Mathematica
package for a simple model
of round and square Chladni plates. Note that plates are not membranes, so that the wave equation does not apply.
|
{"url":"https://webspace.science.uu.nl/~kalle101/","timestamp":"2024-11-07T06:12:55Z","content_type":"application/xhtml+xml","content_length":"5232","record_id":"<urn:uuid:e2fdf8a8-d2d3-44d9-82e9-81e92a5c0baa>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00643.warc.gz"}
|
Instructor:Muhammad Usman
Enrolled: 1 Students
Courses:6 Courses
Hi, I am Dr. Muhammad Usman. I graduated from Peking University in the field of applied and computational mathematics as a distinguished student, and I also completed my postdoc in computational mathematics. I have numerous research articles published in well-reputed international journals, so I have rich experience in research and in supervising project students. As far as my teaching career is concerned, I have 14 years' experience. I have taught various courses at A-level, O-level, graduate, and postgraduate level, and in recent years I have been extensively involved in teaching online at A-level and O-level. Positive feedback from my past students always encourages me to deliver more in a better way. I look forward to working together with you as partners in your child's educational growth and development!
|
{"url":"https://torusacademia.com/course/calculus-i/136","timestamp":"2024-11-11T11:42:24Z","content_type":"text/html","content_length":"136990","record_id":"<urn:uuid:e6a7eb21-0e2c-47f6-81d4-75c592c757da>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00017.warc.gz"}
|
Distributionally Robust Bottleneck Combinatorial Problems: Uncertainty Quantification and Robust Decision Making
In a bottleneck combinatorial problem, the objective is to minimize the highest cost of elements of a subset selected from the combinatorial solution space. This paper studies data-driven
distributionally robust bottleneck combinatorial problems (DRBCP) with stochastic costs, where the probability distribution of the cost vector is contained in a ball of distributions centered at the
empirical distribution specified by the Wasserstein distance. We study two distinct versions of DRBCP from different applications: (i) Motivated by the multi-hop wireless network application, we
first study the uncertainty quantification of DRBCP (denoted by DRBCP-U), where decision-makers would like to have an accurate estimation of the worst-case value of DRBCP. The difficulty of DRBCP-U
is to handle its max-min-max form. Fortunately, similar to the strong duality of linear programming, the alternative forms of the bottleneck combinatorial problems using clutters and blocking systems
allow us to derive equivalent deterministic reformulations, which can be computed via mixed-integer programs. In addition, by drawing the connection between DRBCP-U and its sampling average
approximation counterpart under empirical distribution, we show that the Wasserstein radius can be chosen in the order of negative square root of sample size, improving the existing known results;
and (ii) Next, motivated by the ride-sharing application, decision-makers choose the best service-and-passenger matching that minimizes the unfairness. That is, we study the decision-making DRBCP,
denoted by DRBCP-D. For DRBCP-D, we show that its optimal solution is also optimal to its sampling average approximation counterpart, and the Wasserstein radius can be chosen in a similar order as
DRBCP-U. When the sample size is small, we propose to use the optimal value of DRBCP-D to construct an indifferent solution space and propose an alternative decision-robust model, which finds the
best indifferent solution to minimize the empirical variance. We further show that the decision robust model can be recast as a mixed-integer conic program. Finally, we extend the proposed models and
solution approaches to the distributionally robust $\Gamma$-sum bottleneck combinatorial problem (DR$\Gamma$BCP), where decision-makers are interested in minimizing the worst-case sum of the $\Gamma$ highest costs of elements.
|
{"url":"https://optimization-online.org/2020/02/7650/","timestamp":"2024-11-12T16:40:12Z","content_type":"text/html","content_length":"86183","record_id":"<urn:uuid:2f55829b-8ae2-467b-a2ef-9969fcab1bf8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00353.warc.gz"}
|
First Law Of Thermodynamics Equation & Definition
The first law of thermodynamics is a law of energy conservation and, at the same time, a precise definition of heat. Because energy can neither be created nor destroyed (leaving aside the implications of mass-energy equivalence), the amount of energy transferred to a system in the form of heat plus the amount of energy transferred in the form of work done on the system must equal the increase in the system's internal energy (U). Heat and work are the mechanisms by which systems exchange energy with each other.
Q + L = U (1)
or more precisely:
ΔQ + ΔL = ΔU (2)
The first law of thermodynamics defines heat as a form of energy: it may be converted into mechanical energy and stored.
Any machine needs a certain amount of energy to produce work; it is impossible for a machine to do work without consuming energy. A hypothetical machine that could is called a perpetual motion machine. The law of energy conservation rules out the possibility that such a machine will ever be invented. Sometimes the first law of thermodynamics is stated as the impossibility of a perpetual motion machine.
Heat, like work, corresponds to energy in transit (an energy-exchange process). Heat is an energy transfer and can cause the same changes in a body as work. Mechanical energy can be converted to heat through friction, and the mechanical work needed to produce 1 calorie is known as the mechanical equivalent of heat.
According to the energy conservation law, all mechanical work done to produce frictional heat appears in the form of energy in the objects on which the work is performed. James Prescott Joule was the
first to prove it in a classic experiment: he heated water in a closed container by turning a pair of paddle wheels and found that the increased energy level of the water was proportional to the work
done to move the wheels.
First law of thermodynamics definition
When heat is converted to mechanical energy, as in an internal combustion engine, the energy conservation law is also valid. However, energy is always lost or dissipated in the form of heat because
no motor has perfect efficiency.
Q = m·ce·ΔT (3)
Replacing (3) in (1):
m·ce·ΔT + L = U (4)
The following equation rigorously expresses the first law of thermodynamics:
dQ = dW + dU (5)
Expressions (1) and (2), or any combination of them, may be used for the general representation of the equation and its signs.
Equation (5), together with expressions (1) and (2), is valid in any system; conceptually it is the synthesis of the principle of energy conservation in a closed system. Remember that a thermodynamic system (S T D) is a set of elements with known characteristics and known mutual relationships, enclosed in a container of known geometry and properties through which exchanges of different types with the medium may or may not occur.
Our task in every case is to determine what the S T D is, for which we must have a perfectly defined container and contents.
Next we will analyze the following cases:
1. Case 1: an ideally elastic ball (the S T D) at a height h above a reference plane. To apply equation (2) to this case we take the following considerations into account:
i. We neglect friction with the air, and therefore:
ΔQ = 0
And we have:
0 = ΔW + ΔU (6)
ii. As there are no applied forces, no work is done on the system or by the system on the medium; therefore ΔW = 0 and the expression of the first law of thermodynamics becomes:
ΔU = 0
iii. dU is the mathematical expression of an infinitesimal change in energy; its integration between points 1 and 2 gives the following expression:
∫dU = U2 − U1 (7)
iv. From (7) it arises that:
U2 = U1
v. In the terms posed, the S T D analyzed possesses only total mechanical energy (E MT). In position (1) this energy is purely potential, since the ball is at rest, and in (2) it is purely kinetic, since the distance to the reference axis is zero. Recalling the expressions for Ep and Ec, we can write:
½·m·v² = m·g·h (8)
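A quick numeric check of equation (8), with arbitrary values of our own choosing: the speed reached after a fall of height h satisfies ½·m·v² = m·g·h, so v = √(2·g·h) regardless of the mass.

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
h = 2.0    # drop height in meters (arbitrary example)
v = math.sqrt(2 * g * h)

# The mass cancels from both sides of (8), so it never appears here.
assert abs(0.5 * v**2 - g * h) < 1e-9
print(round(v, 2), "m/s")
```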
vi. If we want to analyze conceptually what happens when there is an exchange of thermal energy between the S T D and the medium, we must carry out a different analysis. Before that, note that the system described is perfectly reversible: the ball falls, bounces, rises, falls, bounces, rises...
Let's look at a sequence of the same case considering friction; we note that it occurs in two ways:
a) external: friction of the S T D along the path (1)-(2), which produces a thermal contribution;
b) internal: it occurs at the moment of impact, when kinetic energy is stored as elastic potential energy in the S T D and used almost instantly to reverse the direction of motion; during this accumulation and return of energy, intramolecular friction within the ball generates thermal energy that is given to the medium. With all these considerations the sequence would be:
i. On the descent, friction produces a ΔQ that the S T D exchanges with the medium.
ii. During the impact (accumulation of energy and inversion of the path), kinetic energy is partially or totally transformed into ΔQ. We clarify that the transformations along the path and in the impact are functions closely linked to the speed of the process.
iii. Expression (2) in this case becomes:
ΔQ = ΔU
iv. Making the same considerations as in the preceding example we can write:
U1 − U2 = ΔQ
Without going into a detailed analysis of where the ΔQ was produced, we can infer the expression:
U1 = ΔQ + U2
This tells us that E MT1 becomes E MT2 plus thermal energy. When the path is reversed, the available (kinetic) energy is less than what the S T D had at the start; therefore, it is clear that the ball will not be able to reach the original height, even if there is no new friction along the trajectory (2)-(1). We could therefore represent what happens in the following approximate form:
In the graph we represent schematically the bounce height, which reaches 0 after n cycles; the graph also indicates that the total energy remains constant, having been transformed, in this irreversible process, into thermal energy.
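The decaying-bounce picture can be sketched numerically (a model of our own, not from the article): assume a fixed fraction k of the mechanical energy is converted to heat on each cycle; the bounce height then decays geometrically while the sum of mechanical and thermal energy stays constant.

```python
def bounce_series(h0, k, n_cycles, m=1.0, g=9.81):
    """Bounce heights and accumulated heat for a ball losing a fraction k per cycle."""
    heights, heat, h = [h0], 0.0, h0
    for _ in range(n_cycles):
        heat += k * m * g * h   # energy converted to heat this cycle
        h *= 1 - k              # remaining mechanical energy sets the new height
        heights.append(h)
    return heights, heat

heights, heat = bounce_series(h0=1.0, k=0.2, n_cycles=10)
total = 1.0 * 9.81 * heights[-1] + heat   # E_MT + accumulated Q
print(round(heights[-1], 3))  # bounce height tends toward 0
print(round(total, 6))        # total energy stays equal to m*g*h0 = 9.81
```

The second printed value never changes no matter how many cycles are run, which is exactly the conservation statement the graph illustrates.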
When we speak of U as internal energy, we actually refer to variations with respect to a base energy level containing other types of energy (molecular and nuclear); therefore ΔU refers only to the variation of thermal energy, and this symbolic expression encompasses the already familiar expression:
Q = m·ce·ΔT
In the case of combustion engines, the S T D is a gas contained in known, variable volumes, which receives thermal energy from the medium and delivers thermal energy to it, and which performs positive work; part of this work, stored mechanically, constitutes the energy resource used to complete the cycle.
|
{"url":"https://cosmos.theinsightanalysis.com/first-law-of-thermodynamics-conservation/","timestamp":"2024-11-03T16:56:24Z","content_type":"text/html","content_length":"179235","record_id":"<urn:uuid:241c838b-71de-43c2-b58a-707773b4ad53>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00725.warc.gz"}
|
Preface
This book is an introduction to combinatorial mathematics, also known as combinatorics. The book focuses especially but not exclusively on the part of combinatorics that mathematicians refer to as
“counting.” The book consists almost entirely of problems. Some of the problems are designed to lead you to think about a concept, others are designed to help you figure out a concept and state a
theorem about it, while still others ask you to prove the theorem. Other problems give you a chance to use a theorem you have proved. From time to time there is a discussion that pulls together some
of the things you have learned or introduces a new idea for you to work with. Many of the problems are designed to build up your intuition for how combinatorial mathematics works. There are problems
that some people will solve quickly, and there are problems that will take days of thought for everyone. Probably the best way to use this book is to work on a problem until you feel you are not
making progress and then go on to the next one. Think about the problem you couldn't get as you do other things. The next chance you get, discuss the problem you are stymied on with other members of
the class. Often you will all feel you've hit dead ends, but when you begin comparing notes and listening carefully to each other, you will see more than one approach to the problem and be able to
make some progress. In fact, after comparing notes you may realize that there is more than one way to interpret the problem. In this case your first step should be to think together about what the
problem is actually asking you to do. You may have learned in school that for every problem you are given, there is a method that has already been taught to you, and you are supposed to figure out
which method applies and apply it. That is not the case here. Based on some simplified examples, you will discover the method for yourself. Later on, you may recognize a pattern that suggests you
should try to use this method again.
The point of learning from this book is that you are learning how to discover ideas and methods for yourself, not that you are learning to apply methods that someone else has told you about. The
problems in this book are designed to lead you to discover for yourself and prove for yourself the main ideas of combinatorial mathematics. There is considerable evidence that this leads to deeper
learning and more understanding.
You will see that some of the problems are marked with bullets. Those are the problems that I feel are essential to having an understanding of what comes later, whether or not it is marked by a
bullet. The problems with bullets are the problems in which the main ideas of the book are developed. Your instructor may leave out some of these problems because he or she plans not to cover future
problems that rely on them. Many problems, in fact entire sections, are not marked in this way, because they use an important idea rather than developing one. Some other special symbols are described
in what follows; a summary appears in the table below.
\(\bullet\) essential
\(\circ\) motivational material
\(+\) summary
\(\rightarrow\) especially interesting
\(*\) difficult
\(\cdot\) essential for this section or the next
Some problems are marked with open circles. This indicates that they are designed to provide motivation for, or an introduction to, the important concepts, motivation with which some students may
already be familiar. You will also see that some problems are marked with arrows. These point to problems that I think are particularly interesting. Some of them are also difficult, but not all are.
A few problems that summarize ideas that have come before but aren't really essential are marked with a plus, and problems that are essential if you want to cover the section they are in or, perhaps,
the next section, are marked with a dot (a small bullet). If a problem is relevant to a much later section in an essential way, I've marked it with a dot and a parenthetical note that explains where
it will be essential. Finally, problems that seem unusually hard to me are marked with an asterisk. Some I've marked as hard only because I think they are difficult in light of what has come before,
not because they are intrinsically difficult. In particular, some of the problems marked as hard will not seem so hard if you come back to them after you have finished more of the problems.
If you are taking a course, your instructor will choose problems for you to work on based on the prerequisites for and goals of the course. If you are reading the book on your own, I recommend that
you try all the problems in a section you want to cover. Try to do the problems with bullets, but by all means don't restrict yourself to them. Often a bulleted problem makes more sense if you have
done some of the easier motivational problems that come before it. If, after you've tried it, you want to skip over a problem without a bullet or circle, you should not miss out on much by not doing
that problem. Also, if you don't find the problems in a section with no bullets interesting, you can skip them, understanding that you may be skipping an entire branch of combinatorial mathematics!
And no matter what, read the textual material that comes before, between, and immediately after problems you are working on!
One of the downsides of how we learn math in high school is that many of us come to believe that if we can't solve a problem in ten or twenty minutes, then we can't solve it at all. There will be
problems in this book that take hours of hard thought. Many of these problems were first conceived and solved by professional mathematicians, and they spent days or weeks on them. How can you be
expected to solve them at all then? You have a context in which to work, and even though some of the problems are so open ended that you go into them without any idea of the answer, the context and
the leading examples that preceded them give you a structure to work with. That doesn't mean you'll get them right away, but you will find a real sense of satisfaction when you see what you can
figure out with concentrated thought. Besides, you can get hints!
Some of the questions will appear to be trick questions, especially when you get the answer. They are not intended as trick questions at all. Instead they are designed so that they don't tell you the
answer in advance. For example the answer to a question that begins “How many...” might be “none.” Or there might be just one example (or even no examples) for a problem that asks you to find all
examples of something. So when you read a question, unless it directly tells you what the answer is and asks you to show it is true, don't expect the wording of the problem to suggest the answer. The
book isn't designed this way to be cruel. Rather, there is evidence that the more open-ended a question is, the more deeply you learn from working on it. If you do go on to do mathematics later in
life, the problems that come to you from the real world or from exploring a mathematical topic are going to be open-ended problems because nobody will have done them before. Thus working on
open-ended problems now should help to prepare you to do mathematics later on.
You should try to write up answers to all the problems that you work on. If you claim something is true, you should explain why it is true; that is, you should prove it. In some cases an idea is
introduced before you have the tools to prove it, or the proof of something will add nothing to your understanding. In such problems there is a remark telling you not to bother with a proof. When you
write up a problem, remember that the instructor has to be able to “get” your ideas and understand exactly what you are saying. Your instructor is going to choose some of your solutions to read
carefully and give you detailed feedback on. When you get this feedback, you should think it over carefully and then write the solution again! You may be asked not to have someone else read your
solutions to some of these problems until your instructor has. This is so that the instructor can offer help which is aimed at your needs. On other problems it is a good idea to seek feedback from
other students. One of the best ways of learning to write clearly is to have someone point out to you where it is hard to figure out what you mean. The crucial thing is to make it clear to your
reader that you really want to know where you may have left something out, made an unclear statement, or failed to support a statement with a proof. It is often very helpful to choose people who have
not yet become an expert with the problems, as long as they realize it will help you most for them to tell you about places in your solutions they do not understand, even if they think it is their
problem and not yours!
As you work on a problem, think about why you are doing what you are doing. Is it helping you? If your current approach doesn't feel right, try to see why. Is this a problem you can decompose into
simpler problems? Can you see a way to make up a simple example, even a silly one, of what the problem is asking you to do? If a problem is asking you to do something for every value of an integer \(n\), then what happens with simple values of \(n\) like 0, 1, and 2? Don't worry about making mistakes; it is often finding mistakes that leads mathematicians to their best insights. Above
all, don't worry if you can't do a problem. Some problems are given as soon as there is one technique you've learned that might help do that problem. Later on there may be other techniques that you
can bring back to that problem to try again. The notes have been designed this way on purpose. If you happen to get a hard problem with the bare minimum of tools, you will have accomplished much. As
you go along, you will see your ideas appearing again later in other problems. On the other hand, if you don't get the problem the first time through, it will be nagging at you as you work on other
things, and when you see the idea for an old problem in new work, you will know you are learning.
There are quite a few concepts that are developed in this book. Since most of the intellectual content is in the problems, it is natural that definitions of concepts will often be within problems.
When you come across an unfamiliar term in a problem, it is likely it was defined earlier. Look it up in the index, and with luck (hopefully no luck will really be needed!) you will be able to find
the definition.
Above all, this book is dedicated to the principle that doing mathematics is fun. As long as you know that some of the problems are going to require more than one attempt before you hit on the main
idea, you can relax and enjoy your successes, knowing that as you work more and more problems and share more and more ideas, problems that seemed intractable at first become a source of satisfaction
later on.
The development of this book is supported by the National Science Foundation. An essential part of this support is an advisory board of faculty members from a wide variety of institutions who have
made valuable contributions. They are Karen Collins, Wesleyan University, Marc Lipman, Indiana University/Purdue University, Fort Wayne, Elizabeth MacMahon, Lafayette College, Fred McMorris, Illinois
Institute of Technology, Mark Miller, Marietta College, Rosa Orellana, Dartmouth College, Vic Reiner, University of Minnesota, and Lou Shapiro, Howard University. The overall design and most of the
problems in the appendix on exponential generating functions are due to Professors Reiner and Shapiro. Any errors or confusing writing in that appendix are due to me! I believe the board has managed
to make the book both more accessible and more interesting.
CIE AS/A Level Further Pure Mathematics 1 9231 (Paper 1) - The Maths Centre
CIE AS/A Level Further Pure Mathematics 1 9231 (Paper 1)
Cambridge International AS/A Level Further Mathematics is accepted by universities and employers as proof of
mathematical knowledge and understanding. Successful candidates gain lifelong skills, including:
• a deeper understanding of mathematical principles;
• the further development of mathematical skills including the use of applications of mathematics in the context of everyday situations and in other subjects that they may be studying;
• the ability to analyse problems logically, recognising when and how a situation may be represented mathematically;
• the use of mathematics as a means of communication;
• a solid foundation for further study.
Updated to 2020 Syllabus…CLICK TO VIEW SYLLABUS CHANGES
Further Pure Mathematics 1 9231 (Paper 1) Syllabus 2020-2022 [PDF]
We would like to remind those of you taking Further Maths that the CIE A Levels syllabus states:
‘Knowledge of the syllabus for Pure Maths P1 and P3 in Mathematics 9709 is assumed, and candidates may need to apply such knowledge in answering questions’
We have therefore prepared our further maths modules based on the assumption that students have knowledge of P1 and P3 maths.
Nevertheless, our solutions are fully detailed, not sketchy. Our Further Maths modules follow the syllabus closely and use actual exam questions, plus some of our own selections, to explain the topics. These modules are made available for viewing only.
Free Preview
Topics :
│ 1. Roots of polynomial equations (68:03 mins) │ View Here │ USD$ 40 │
│ 2. Rational functions and graphs (140:26 mins) │ View Here │ USD$ 50 │
│ 3. Summation of series (135:55 mins) │ View Here │ USD$ 45 │
│ 4. Matrices (182:20 mins) │ View Here │ USD$ 50 │
│ 5. Polar coordinates (118:54 mins) │ View Here │ USD$ 45 │
│ 6. Vectors (251:54 mins) │ View Here │ USD$ 60 │
│ 7. Proof by induction (96:40 mins) │ View Here │ USD$ 40 │
Full Package (16 hrs 34 mins eVideos) USD$ 300 (6 months subscription)
** Prices are subject to change without prior notice. **
** All eProduct Sales Are Final. No Refunds Can Be Made For Electronic Digital Downloads. **
* For online viewing only, No Download or Print allowed *
** This coursework includes detailed notes and full solutions to many CIE Further Maths exam questions going back as far as 2002.
** Please contact us for more details. **
** Please respect our copyright and do not reprint without our permission. **
Payment Procedure:
After you have made the payment, a receipt will be sent to you for confirmation, to ensure you are paying for the right content.
Once we receive your confirmation, we will give you access to the content you subscribed to.
We usually grant access within an hour of payment.
Access will always be granted on the same day as payment.
List of formulae and statistical tables
Anthony Liu
I recently started learning Portuguese, and I wrote a short script to help me practice my vocabulary. It randomly quizzes me in my terminal as I enter commands like cd and ls, allowing me to practice
throughout my entire day, little by little!
With a little bit of vim trickery on my .zsh_history file, I noticed that I was averaging over 100 commands a day in my terminal. A little more massaging and I saw that most of these commands were in
the set $\{\text{ls},\text{cd},\text{git},\text{python},\text{make}\}$. Hmm I thought. I could alias each CMD to something like alias CMD="./precmd_program; CMD" to run some code before they
executed. I could make a study program, and this would allow me to spread my studying thinly over the course of many hours. I wouldn’t even feel like I was studying, ideally.
I wrote a short node.js script that does the following: 80% of the time, it does nothing. 20% of the time (this is a parameter of course), it prints out “How do you say __ in Portuguese?” and waits
until you type in an answer. It keeps asking you questions until you answer $n$ correctly, at which point the original command is allowed to run.
You wouldn’t want this pre-command script to run on every command because it would seriously interrupt your workflow. But if you instead ask several questions ($n$) at a time, less frequently, then amortized you’re studying the same amount while being much less annoyed by the interruptions.
This script reads from a CSV-esque file of the following form: question,answer,# correct answers,# wrong answers. For Portuguese vocabulary, this looks something like:
goodbye,tchau,0,0
yes,sim,0,0
your,seu,0,0
who,quem,0,0
good afternoon,boa tarde,1,0
You first write down all your question/answer pairs (omitting the numbers, they’ll default to 0,0 automatically), one pair per line. Then, as the script asks you questions, it keeps track of how many
times you’re missing each question. It quizzes you on questions with low correct/incorrect ratios more frequently than questions with high ratios. Stated differently, it asks you the questions you
struggle with more frequently than those you find easy.
For the full source code of this script and some example aliases in a .zshrc/.bash_profile/etc, check out this gist.
Yeah so some math came up as I was implementing this (as math tends to do). Specifically, given an array of questions sorted by their correct/incorrect ratio, how can I randomly select a question,
such that the question with the highest ratio is selected $f$ times less frequently than the question with the lowest ratio?
For simplicity, I can map each question to $[0,1)$ by dividing each question index by the number of questions, and I can arbitrarily decide that my PDF, $PDF(x)$, over this interval will be linear.
If the intercept is at $(0,y_{max})$, then $PDF(1)=y_{max}/f$ by definition.
Since this is a probability density function, we know that the integral from $0$ to $1$ must be $1$, which constrains the value of $y_{max}$. Using this, we can write a formula for our PDF and integrate it to find the CDF, which in this case will be a quadratic function.
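One way to carry the derivation out explicitly (my own working, consistent with the setup above but not shown in the post):

$$PDF(x) = y_{max}\left(1 - \left(1 - \tfrac{1}{f}\right)x\right), \qquad \int_0^1 PDF(x)\,dx = y_{max}\,\frac{f+1}{2f} = 1 \;\Longrightarrow\; y_{max} = \frac{2f}{f+1}.$$

$$CDF(x) = \int_0^x PDF(t)\,dt = y_{max}\,x - \frac{y_{max}}{2}\left(1 - \frac{1}{f}\right)x^2.$$

As a check, $CDF(1) = 1$ and $PDF(1) = y_{max}/f$, as required.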
Finally, we can take the inverse $CDF^{-1}(x)$ and use inverse transform sampling to sample according to $PDF(x)$. Then, we convert our result, which will be in the range $0$ to $1$, back into a
question index and return the corresponding question.
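In code, the sampling step might look like this (my own sketch with hypothetical names, not the actual gist): the quadratic CDF inverts via the quadratic formula, taking the root that lands in $[0,1)$.

```javascript
// Inverse-transform sampling from a linear PDF on [0, 1) where the
// lowest-ratio question is drawn f times more often than the highest.
// Sketch of the approach described above, not the actual gist code.
function inverseCdf(u, f) {
  if (f === 1) return u;              // flat PDF: plain uniform sampling
  const a = (2 * f) / (f + 1);        // y_max, fixed by requiring integral = 1
  const m = (2 * (f - 1)) / (f + 1);  // slope, chosen so PDF(1) = a / f
  // CDF(x) = a*x - (m/2)*x^2 = u; solve the quadratic for x.
  return (a - Math.sqrt(a * a - 2 * m * u)) / m;
}

// Map a uniform draw back to a question index (lowest ratio = index 0).
// The clamp guards against floating-point error at the interval edges.
function sampleQuestionIndex(numQuestions, f) {
  const x = inverseCdf(Math.random(), f);
  return Math.min(numQuestions - 1, Math.max(0, Math.floor(x * numQuestions)));
}
```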
Closing thoughts
This was a fun quick project that will hopefully help me learn Portuguese a lot more easily. I think of this script as forcing me to have brief flash-card study sessions every few minutes. I’d love
to hear your thoughts if you try this out and like/dislike it!
Also, just to give a sense of what else you could use this for, it doesn’t just need to be used for questions; another idea I had was to print out one sentence of a book with every terminal command
you ran. A lot of people say they “don’t have time to read”, but with something like this, there’s no excuse (effectiveness yet to be tested). Até a próxima vez! (Until next time!)
Four-Dimensional Tic Tac Toe
Skip to demo
Tic tac toe is a classic game, but the standard version is pointless. It’s far too easy to develop strategies that guarantee you’ll draw or win. You can make the game more interesting by increasing
the board size from $3\times 3$ to $4\times 4$, but even that becomes too simple after a while. To develop a truly intriguing alternative version, you need to move on to higher dimensions.
3D Tic Tac Toe Board
Tic tac toe is a two-dimensional game because you play it on a piece of paper. A line of $x$’s or $o$’s on the paper is a win. In three dimensions, you’re still trying to form lines, but there are
significantly more ways to win because you’re playing in a cube and not on a flat sheet.
If you’re having trouble visualizing this, take out a $3\times 3\times 3$ Rubik’s cube. Instead of writing $x$’s and $o$’s with a pen, you’re putting an $x$ or an $o$ inside of a Rubik’s cube piece –
either a center, an edge, or a corner. Three-dimensional tic tac toe has one additional spot, though: the very middle of the cube, hidden from the outside.
Figure 1: A $3\times 3\times 3$ cube with $27$ cells. This is the “board” you would play 3D tic tac toe on.
There are a whole bunch of different ways to win in this version. You could look at each slice of the cube as a standard $3\times 3$ board and win in the normal ways, or you could form lines that
span multiple slices. For instance, three in a row down the central column, or three in a row from corner to corner through the middle.
Playing on Paper
If you want to play in three dimensions on a sheet of paper, all you need to do is draw three standard tic tac toe boards on top of one another. Each board represents a layer of the cube, and the
winning lines are the same. Players take turns writing in $x$’s and $o$’s in the squares just like in the normal version of the game.
Figure 2: Three two-dimensional tic tac toe boards with two examples of winning lines.
Please have a clear sense of why the above two lines are winning arrangements. It’s going to get a lot more confusing later on…
Mental Tic Tac Toe
Playing on paper is perfectly fine, but I recommend trying to play in your head with a friend! Yup, with your eyes closed, visualizing the tic tac toe cube in your mind. This isn’t too strange of an
idea. After all, some people can play chess in their heads, and chess is a lot more complicated. To play blindfold chess, participants take turns stating their moves in an agreed upon format (like
algebraic chess notation). “Blindfold” 3D tic tac toe works in the same way.
Whereas in chess there are a bunch of different piece types, in tic tac toe, all that matters is the location of an $x$ or an $o$. You could assign each three-dimensional grid space a letter of the
alphabet so long as you and your friend are consistent, but the most general, extensible method is to use coordinates.
Figure 3: A 3D tic tac toe board next to a set of axes.
In the above image, there’s a yellow axis, a cyan axis, and a magenta axis ($\color{goldenrod} x$, $\color{turquoise} y$, and $\color{magenta} z$ respectively). The axes intersect at $(\color
{goldenrod} 0,\color{turquoise} 0,\color{magenta} 0)$, which is the location of the bottom-left-most grid space and the red sphere. Each grid space is $1$ unit further away, so for a $3\times 3\times
3$ cube, the coordinates range from $0$ to $2$. Take a moment to determine the coordinates of the blue sphere under this system.
To play, take turns stating coordinates with another person. Keep track of all of the coordinates that have been said so far, imagining the cube and the pieces within it in your head. With a bit of
practice, playing mentally becomes a piece of cake. It’s a great way to improve your spatial reasoning and visualization skills, and it’ll enable you to play 3D tic tac toe wherever you are.
Playing in Four Dimensions
Okay. Now for the fun part! We’re going to add yet another dimension! Four-dimensional tic tac toe. I feel like another t– word is in order. Tic tac toe toc maybe? One per dimension?
It’s essential at this point that you are totally comfortable with tic tac toe in three dimensions, including playing it mentally. For the 3D version, I explained the game visually with figure 1
before explaining how to play it in your head. For four dimensions however, it’s best to start with the mental version because at this point we’re really stretching the capabilities of our 2D
computer screens. If you’re already comfortable thinking in 4D, skip to the game. Otherwise, read on.
Figures 1 and 3 may appear clearly three-dimensional, but they’re just flat projections. We perceive 3D objects through 2D mediums all the time (literally all the time; our eyes can’t see 3D), so we
don’t have a problem doing this, but projecting four dimensions down to two is a huge challenge.
Meaning of dimension
A spatial dimension is a line you can move along. Think: left and right, front and back. Multiple lines means multiple dimensions, but only if those lines are mutually perpendicular. A piece of paper
is two-dimensional because if you only have one line, there are regions of the paper you’ll never be able to reach. Adding a second line perpendicular to the first gives you the freedom to move not
just left and right but also up and down. Two lines encompass the entire sheet. Two dimensions.
But two lines aren’t enough to move around a cube; we would always be stuck on the bottom layer. We need a third line, bringing us to three dimensions total. If we wanted to state the location of a
point in this cube, we would need to say how far we moved along each line. Three lines, three coordinates, three dimensions.
Extending this idea to four dimensions would be easy if not for the requirement that these lines be mutually perpendicular. In fact, it’s impossible for four lines to be mutually perpendicular in
three dimensions. So in your head, do not try and visualize a fourth axis! Instead, focus on the idea of movement.
Visualizing the Fourth Dimension
When we move along a single dimension, our position in every other dimension remains constant. If you move left or right on a sheet of paper, you don’t change how far up or down you are, for
instance. So when we move through the fourth dimension, we actually stay right where we are in the first three.
Let’s pretend I’m a four-dimensional creature and you’re your normal 3D self. You’re in your room, which remains constant in the fourth dimension. I proceed to hover a glowing, 3D green cube right in
front of your face, and I place a red cube in the same spot further along in the fourth dimension. Remember: you’re 3D, so you can’t see the red cube.
At this point, I use my alien tendrils to give you a nudge in the fourth dimension. Suddenly, the green cube pops out of existence. There’s no warning – no smooth fade to nothingness and no
shrinkage. It just disappears. Why? Because it was 3D and its coordinates took the form $(x_g, y_g, z_g, w_g)$, where $_g$ signifies that these coordinates are for the green cube. Initially, your
position in the fourth dimension – the $w$ dimension – matched that of the cube, so you could see it. But the second my slimy alien fingers gave you that push, you moved through the fourth dimension
to $w_g+\Delta w$. Your 3D coordinates are exactly the same, but since your $w$ coordinate changed, the cube is no longer in your line of sight. You’re 3D.
Eventually, you drift to $w_r$, where I stop you. Now, there’s a red cube in front of you, and it popped into existence out of nowhere. At no point in this journey did you leave your room. All that
appeared to change was the color of the cube in front of you. Imagine me pushing you back and forth between the two; your body feels the acceleration of my push, and you alternatingly see the green
and red cubes.
Those cubes were 3D, which means they only existed in particular slices of the fourth dimension, just as a sheet of paper appears to have no depth in three dimensions. But now I’m going to use my
alien technology to generate a 4D cube, a tesseract, in front of you. What color tesseract? An infinitely colored, rainbow one! Sort of like this:
Figure 4: A cube that transitions from yellow to red smoothly left to right.
This cube’s color depends on where you’re looking along the left-right axis. The far left is yellow, and the far right is red. Pretend this were a stack of post it notes. The first note is bright
yellow and the note at the very bottom is red. Imagine flipping through the notes; each individual post it is a solid color, but flipping through them quickly yields a smooth gradient.
You’re back in your room, still your 3D self, but this time you’re faced with a red cube initially. If I didn’t already tell you this were a tesseract, you would have no idea there was something
special about this cube. That is, until the 4D push.
Since this cube does exist in four dimensions, it doesn’t disappear when your $w$ coordinate changes. The cube stays in precisely the same spot as you move along the $w$ axis, but it changes color,
first red, then orange, then yellow and green and blue and violet. I did say it was a rainbow tesseract, after all.
Think back to the post it note example. We flipped through two-dimensional slices – pieces of paper – along the third dimension and saw a smooth transition from yellow to red. This time, we’re moving
along the fourth dimension and seeing a 3D object change color. It’s important that you’re making an effort to see this in your mind’s eye and not just reading.
I continue pushing you, but once you pass blue and violet, the cube disappears. What happened? This was a finite tesseract, so it’s possible to move beyond it in the fourth dimension. At that point,
it would be behind you and thus not visible. If I pushed you in the other direction, you would see violet, blue, … , orange, red and then nothing. The tesseract would be “in front” of you in the
fourth dimension and again, not visible. Normally, you can see things that are in front of you, but you can’t see “forward” in the $w$ direction with your normal, human eyes. Therefore, the cube
blips out into the unknown.
And now for the final step! Using advanced alien technology, I upload your consciousness into a four-dimensional body complete with eyes that can see along $x$, $y$, $z$, and $w$. Let’s examine how
this changes your perception of the green-red example.
Consider what it’s like to look at objects along a dimension. If you had a dozen identical cars lined up along a straight road, the cars that were further away would appear smaller than the cars
right in front of you.
Figure 5: A large green cube and small red cube demonstrating perspective.
Here’s the glowing green and red cubes I placed in front of you earlier, but this time, there isn’t any four-dimensional monkey business. There are just two cubes, one near and one far. Internalize
the fact that the further away an object is, the smaller it appears.
If you’re next to a race car just as it’s about to zoom straight ahead, away from you, it would begin large and then get smaller and smaller as it moved down the track.
Now consider a race car traveling along a hundred mile straight track in the $w$ dimension. It’s only moving along the fourth dimension, so its 3D position shouldn’t change. Even though it’ll zoom
away from you, its wheels will always be right in front of your feet.
What then do you see? Well, we know that the farther away an object is, the smaller it appears, and it doesn’t matter if I’m a hundred miles in front of you, a hundred miles above you, or a hundred
miles away along $w$; regardless of the direction, far is far. So as this special race car zooms onwards, it will appear to shrink right in front of your eyes. Imagine that!
Back to the cube example. You’re in your room. In front of you is a glowing green cube, and in the same exact 3D spot further in the fourth dimension is a red cube. What do you see with your new 4D
eyes? Seriously, try and figure it out.
You see a small red cube inside of a large green cube. The red cube is further away after all, so the rules of perspective say it should appear smaller.
Figure 6: A representation of perspective along a fourth dimension.
The cubes aren’t actually transparent, but just as humans’ 3D eyes enable them to see 2D surfaces, our alien 4D eyes let us see in legitimate 3D. That is to say, we can see inside of solid 3D
objects. We can see the 3D red cube inside the 3D green cube even though they’re both opaque.
“Tic Tac Toe Toc” in Your Head
You are now armed with the necessary intuition to extend tic tac toe into 4D. First, picture an empty 3D grid as you normally would. Then, picture another in the exact same spot, just slightly
smaller. Finally, imagine one more, even smaller. Can you do it? (Why did we stop at three grids? Think for a moment if you don’t already know.)
If you can visualize that, mad props, but for me, that’s just way too crowded. We haven’t even placed any pieces yet! Instead, I recommend playing mental 4D in the same way that we played on-paper
3D. We stacked three ordinary two-dimensional grids and just thought extra hard about how the different slices related to one another. For 4D, picture three 3D grids side by side, left to right.
Isn’t that so much easier?
As far as coordinates go, you just need to make one minor change. Add a $\color{yellowgreen} w$ coordinate that designates which of the three 3D grids each piece goes into. Let $\color{yellowgreen}
{w=0}$ signify the leftmost grid and $\color{yellowgreen} {w=2}$ the rightmost.
You now know how to plot 4D coordinates in a modified 4D grid, and you know what that grid would actually look like in four dimensions. The final step is learning to identify winning lines in 4D. As
we saw with 2D → 3D, all the lower dimensional winning lines remain, but we have a bunch of extra options available to us. I’m only going to explain lines that extend across the fourth dimension,
which means there will be one piece per 3D grid (per “slice”).
The top-left-front corner of each slice, corresponding to $(\color{goldenrod} 0,\color{turquoise} 2,\color{magenta} 0,\color{yellowgreen} 0)$, $(\color{goldenrod} 0,\color{turquoise} 2,\color
{magenta} 0,\color{yellowgreen} 1)$, and $(\color{goldenrod} 0,\color{turquoise} 2,\color{magenta} 0,\color{yellowgreen} 2)$? That’s a winning line. It’s difficult, but visualize these pieces in 4D
perspective. The larger the $w$ coordinate, the smaller the piece would be and the closer it would be to the center. They would form a literal line of pieces in the top-left-front-ish corner. The
line would point to the exact center of the 4D grid.
What about $(\color{goldenrod} 0,\color{turquoise} 1,\color{magenta} 2,\color{yellowgreen} 0)$, $(\color{goldenrod} 1,\color{turquoise} 1,\color{magenta} 1,\color{yellowgreen} 1)$, and $(\color
{goldenrod} 2,\color{turquoise} 1,\color{magenta} 0,\color{yellowgreen} 2)$? Take as much time as you need to plot each of these coordinates in the modified 4D grid. Then try and view it in 4D
perspective. It’s also a winning combination.
And now for the coolest way to win, one of the many long diagonals: $(\color{goldenrod} 2,\color{turquoise} 0,\color{magenta} 2,\color{yellowgreen} 0)$, $(\color{goldenrod} 1,\color{turquoise} 1,\
color{magenta} 1,\color{yellowgreen} 1)$, and $(\color{goldenrod} 0,\color{turquoise} 2,\color{magenta} 0,\color{yellowgreen} 2)$. Make sure you see the line before moving on.
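If the visualization won’t come, the check can also be made mechanical: a sequence of cells forms a winning line exactly when the step from each cell to the next is one constant vector whose components are all $-1$, $0$, or $+1$ (and not all zero). A small sketch (the function name is mine, not from the game):

```python
def is_winning_line(points):
    """True if the grid cells form a straight line with unit steps:
    every consecutive difference equals one constant vector whose
    entries are each -1, 0, or +1 (and not all zero)."""
    if len(points) < 2:
        return False
    step = tuple(b - a for a, b in zip(points[0], points[1]))
    if all(c == 0 for c in step) or any(c not in (-1, 0, 1) for c in step):
        return False
    return all(
        tuple(q - p for p, q in zip(a, b)) == step
        for a, b in zip(points, points[1:]))
```

Running it on the three example lines above confirms each one; any triple whose steps disagree is rejected.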
Try the Game
The game below requires WebGL. If it doesn’t work, upgrade to a modern browser like Chrome. Left click and drag to rotate. Scroll to zoom in and out. To move around, right click and drag or ←↑→↓.
Enter piece coordinates in the color-coded form below.
Problem with 3D+
Unfortunately, there’s a strategy that allows the player who goes first to guarantee a win in $3\times 3\times 3$. I’m not going to tell you what it is because I had fun figuring it out, but you should know that if you want to have fair 3D and 4D games with your friends, you need to prohibit $(1,1,1)$ in 3D and $(1,1,1,w)$, $(x,1,1,1)$, $(1,y,1,1)$, and $(1,1,z,1)$ in 4D on the first move.
Alternatively (this is what I did with my friends), you can increase the size of the grid and graduate to $4\times 4\times 4$ and $4\times 4\times 4\times 4$ games. If you’ve followed along this far,
you should have no problem figuring out what that entails.
Number of Ways to Win
When I first started playing 3D tic tac toe, the first thing I wondered was how many ways there were to win. There are $8$ ways to win in normal tic tac toe, and counting those ways is
straightforward. But in three dimensions, counting is a whole lot harder.
Where $D$ is the number of dimensions and $S$ is the size of the grid, $2$ and $3$ in standard tic tac toe respectively, the following table summarizes the number of winning arrangements:
D=2 D=3 D=4 D=5
S=2 6 28 120 496
S=3 8 49 272 1441
S=4 10 76 520 3376
S=5 12 109 888 6841
There are more than $30$ times the number of ways to win in four-dimensional $3\times 3\times 3\times 3$ than there are in ordinary tic tac toe. As someone who has played many 4D games, believe me when I say that this difference is what makes 4D so interesting. You’re always on the lookout for sneaky lines, and there are far too many to just scan the grid procedurally. (Note: lots of math ahead.)
In order to count the total number of ways to win, we need to look at the various types of winning lines. In normal tic tac toe, I claim that there are just two: 1) diagonals and 2) horizontals and
verticals. Yes, horizontal and vertical lines are in the same category. Do you see the distinction? Draw a picture and try to figure out the key difference.
Hint: it has to do with dimensionality.
Lines are one-dimensional by definition, but they can span more than one dimension if they’re rotated relative to the coordinate system. Horizontal tic tac toe lines only take up space in one
dimension, the left-right dimension (up-down for vertical lines). Diagonal lines in contrast move both left-right and up-down.
Continuing along this train of thought, consider 3D tic tac toe. Are there other types of winning lines? We have our horizontal, vertical, and in-your-face lines. They span just one dimension. And
then we have the diagonal lines across slices of our cube. The fact that they exist on a slice means they don’t move in the direction in which the slice is skinny.
Figure 7: This visual highlights the middle slice of a 3D grid, which contains a short diagonal line.
Take any $3\times 3$ slice and draw its two diagonals; they span two dimensions.
Figure 8: The long diagonal of a 3 by 3 by 3 grid.
Lastly, there are the long diagonals, from corner to opposite corner, which span all three dimensions. Three dimensions, three types of winning lines.
More generally, for tic tac toe in $n$ dimensions, there are $n$ different types of lines, which correspond to lines spanning dimensions $1$ to $n$. If we can count each of these types, then we’ll be able to add them all up for the total.
The easiest line to count is the long diagonal, the line that spans all $n$ dimensions. There are two such lines in $3\times 3$ tic tac toe and four in $3\times 3\times 3$. Long diagonals begin at a
corner, so we have an upper bound of the total number of corners. For a cube in $n$ dimensions, the number of corners is $2^n$. Since long diagonals also end at a corner, we’re counting each of them
twice. Thus, the total number of long-diagonals for an $n$-hypercube is $\frac{1}{2}2^n$.
Luckily for us, all line types are instances of long diagonals. I called the line in Figure 7 a short diagonal because there were longer ones, but within its 2D slice, it’s a long diagonal. Even
horizontal and vertical lines are diagonals, just in one dimension instead of two or three.
If we can count the number of 2D slices in a 3D cube, multiplying by $\frac{1}{2}2^2$ will give the number of lines that span two dimensions. Similarly, we need to count the number of 1D “slivers”.
So far, we’ve focused on the dimensions in which a line changes, labeling lines as spanning either 1D, 2D, or 3D depending on how many dimensions it moves in. However, for the purposes of counting, I
think it’s easier to look at the dimensions in which a line doesn’t move. A 1D line doesn’t move in two of the three dimensions, and a 3D long diagonal moves in all of them. For a 1D line, two
dimensions are the same for all of its constituent points.
This brings us to a key question: in how many ways can two dimensions be the same? For a 3D cube, answering this question will give us the number of 1D slivers.
Various pairs of dimensions can be the same: $\langle x,y\rangle$, $\langle x,z\rangle$, and $\langle y,z\rangle$. Each of these dimensions can take on values from $0$ to $S-1$, where $S$ is the side
length, so there are $S^2$ slivers for each of the three pairings. This yields $27$ one-dimensional slivers for a 3D cube. Since the number of long diagonals in one dimension is $\frac{1}{2}2^1=1$,
there are $27$ horizontal, vertical, and in-your-face lines for $3\times 3\times 3$.
Let’s go back and examine the number of 2D slices in a 3D cube. We now know this is equivalent to finding the number of ways that one dimension can be the same. That constant dimension can be $\
langle x\rangle$, $\langle y\rangle$, or $\langle z\rangle$, each of which can take on $S=3$ different values, giving us $9$ slices total. There are $\frac{1}{2}2^2=2$ long diagonals per slice,
giving us $18$ “two-dimensional” lines in a $3\times 3\times 3$ cube.
Since there’s just one way for zero dimensions to be the same, there are $\frac{1}{2}2^3=4$ “three-dimensional” lines in a $3\times 3\times 3$. Summing these values gives $27+18+4=49$ ways to win in
three-dimensional tic tac toe ($S=3$).
For a general solution, we need an expression that gives the number of ways in which $C$ dimensions can be the same for a $D$-dimensional hypercube of side length $S$. First, we count the number of
unique ways there are to choose $C$ dimensions out of $D$. That’s just ${D \choose C}$. Then, since each of those $C$ dimensions can take $S$ possible values, we multiply by $S^C$ to get ${D \choose C} S^C$.
Since $C$ dimensions are the same, $D-C$ dimensions can vary. The number of long diagonals in this space is $\frac{1}{2}2^{D-C}$. Multiplying these two expressions tells us that when you hold $C$
dimensions constant, you can create $\frac{1}{2}{D \choose C} 2^{D-C} S^C$ lines.
Anywhere from $C=0$ to $C=D-1$ dimensions can be constant, so to get our final answer we just need to sum up all those possibilities. The total number of ways to win in $D$-dimensional tic tac toe
(side length $S$) is:
$$\frac{1}{2}\sum\limits_{C=0}^{D-1} {D \choose C} 2^{D-C} S^C\tag{1}$$
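Equation $(1)$ is easy to check against the table above. A quick sketch (the function name is mine):

```python
from math import comb  # Python 3.8+

def ways_to_win(D, S):
    """Total winning lines in D-dimensional tic tac toe of side S,
    per equation (1): (1/2) * sum_{C=0}^{D-1} C(D, C) * 2^(D-C) * S^C.
    The sum is always even, so integer division is exact."""
    return sum(comb(D, C) * 2 ** (D - C) * S ** C for C in range(D)) // 2
```

The sum also telescopes to the closed form $\frac{(S+2)^D - S^D}{2}$, since adding the $C=D$ term completes the binomial expansion of $(S+2)^D$.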
Algorithm for Detecting Winning Lines
If you’ve ever implemented $3\times 3$ tic tac toe, you know about that annoying moment when you have to decide how you’re going to detect winning moves. Since there are only eight ways to win,
writing an if statement for each possibility is easy and pain free. But it’s also ugly and terribly inelegant. You could be a little more clever and write two loops, one for the vertical lines and
one for the horizontals, checking diagonals separately, but that would be more work for little benefit. Heck, it’d probably take more lines of code.
In three-dimensional tic tac toe there are $49$ cases to consider ($S=3$). Yeaahhh. Clearly, you can’t hardcode each individual configuration in an if like you could with $3\times 3$. Perhaps you
could store winning lines in a condensed format, like [3,14,25] where each number corresponds to a cell in the grid, but that would be tedious and again, inelegant. And what if you wanted to expand
your game to support $4\times 4\times 4$? Abstracting out the idea of slices and writing a series of loops would probably be best. Unlike $3\times 3$, there’s enough repetition in 3D tic tac toe to
warrant looping over the different types of lines.
Do you see where I’m going with this? For four-dimensional tic tac toe ($S=3$), you would need to check $272$ cases. There is no way you’re listing all those lines out. As far as scaling back and
looking at the grid’s constituent layers and 3D slices goes, it’s a lot harder to program something you can’t visualize.
It’s easy to check for lines on the six faces of a cube (and the middle slices) because we can see them, but the analogy in four dimensions is that there are eight “surface” cubes on a tesseract.
Finding those faces and looping over them is incredibly confusing. And don’t forget, you still have your 2D layers and 1D slivers like you did before! When you can’t point to the geometry you’re
working with, it’s hard to reason about what your algorithm should even be checking for.
I spent a lot of time thinking about this problem back when I started playing 4D tic tac toe, and I knew that I wanted to find an algorithm that generalized to any size hypercube in any number of
dimensions. The solution is based off the method we used to count the number of ways to win. That is to say, given the game grid as a whole, we pick and choose dimensions to hold constant in order to
reduce the problem to a lower dimensional space.
Before I explain the algorithm, I need to explain the data structure I chose to store the game state. It’s a $D$-dimensional array whose elements correspond to grid spaces and are either $-1$,
meaning empty, $0$, meaning player 1 has a piece there, or $1$, meaning player 2 has a piece there. For two-dimensional tic tac toe, it’s just an everyday two-dimensional array. However, setting $D=
4$ makes it an array of an array of an array of an array. I wrote a special function to access elements of this high-dimensional array given an array of coordinates.
There are two key ideas we discovered when we found the formula for the number of ways to win. The first idea is that all winning lines vary in at least one dimension. The second idea is that some
dimensions may have the exact same coordinate value for all of the points in a winning line. For example, one of the four-dimensional winning lines we examined earlier:
$$\text{P}_1(\color{goldenrod} 0,\color{turquoise} {\underline 1},\color{magenta} 2,\color{yellowgreen} 0)$$
$$\text{P}_2(\color{goldenrod} 1,\color{turquoise} {\underline 1},\color{magenta} 1,\color{yellowgreen} 1)$$
$$\text{P}_3(\color{goldenrod} 2,\color{turquoise} {\underline 1},\color{magenta} 0,\color{yellowgreen} 2)$$
Notice how the $\color{turquoise} y$ coordinate is the same for all of the points. When we hold one dimension constant in three dimensions, we get a slice. When we hold one constant in four
dimensions as we just did, we ought to get a 3D slice… but good luck visualizing where this cube fits. Good thing we don’t need to! Notice how every single other dimension is different in points $\
text{P}_1$ to $\text{P}_3$. These points describe a long diagonal of a 3D slice of a tesseract. I’ll save you the trouble and just tell you that we’ll only ever need to check for long diagonals!
How do you check for those in $n$ dimensions? First, generate the coordinates of all the corners. Pair each up with its exact opposite (for corners, each coordinate is either $0$ or $S-1$; two
corners oppose each other if none of their corresponding coordinates are the same). Then, move grid space by grid space from one corner to the other.
Now we can start to sketch out an algorithm. First, check all the long diagonals in $n$-space for lines. Then, pick a dimension to hold constant. Set it constant for each value from $0$ to $S-1$ and
recurse on the now-smaller dimensional space. Do this for each of the $D$ dimensions.
That’s pretty much it. The base case is when $n=1$ since all you can do is iterate through the line. One way to implement this algorithm is to keep track of two arrays in each function call: one
stores the dimensions that are allowed to vary and the other stores the values at which every other dimension is held constant. How freaking cool is it that we can write algorithms that can find
lines we’ll likely never be able to see (in seven dimensions, for instance)?
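An equivalent, non-recursive way to enumerate every line is by direction vectors: each winning line has a constant step vector in $\{-1,0,+1\}^D$, canonicalized so that each geometric line is generated exactly once. This is a sketch of that idea, not the recursive slicing algorithm described above:

```python
from itertools import product

def winning_lines(D, S):
    """Enumerate every winning line of a D-dimensional grid of side S,
    each as a tuple of S coordinate tuples."""
    lines = []
    for d in product((-1, 0, 1), repeat=D):
        if all(c == 0 for c in d):
            continue
        # canonical form: the first nonzero step must be +1, so each
        # line is generated once rather than once per direction
        if next(c for c in d if c != 0) != 1:
            continue
        # the start cell is pinned to an edge in every moving dimension
        # and free in every constant dimension
        ranges = [(0,) if c == 1 else (S - 1,) if c == -1 else tuple(range(S))
                  for c in d]
        for start in product(*ranges):
            lines.append(tuple(
                tuple(start[i] + t * d[i] for i in range(D))
                for t in range(S)))
    return lines
```

Its counts agree with equation $(1)$: $8$ lines for $D=2$, $49$ for $D=3$, $272$ for $D=4$ (all with $S=3$).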
Future Improvements
This algorithm as posed has its fair share of problems. For starters, it needlessly checks some lines more than once. By far the biggest issue though is that it checks every possible spot a line
could be in, even if there aren’t any pieces near that region of the hypercube. For three and four dimensions this really isn’t that big of a deal because there are so few grid spaces in the first place.
Consider that a typical high-dimensional game of tic tac toe will have a dozen to a couple dozen placed pieces. At most, there could be $27$ pieces in 3D and $81$ in 4D (S=3), so around $15\%$ to $30
\%$ of the board would be filled. Now consider a five dimensional grid with $S=7$: there are $16807$ grid spaces, resulting in an expected $0.1\%$ fill rate. This algorithm would waste so much time
on empty space.
Ideally, I’d write an algorithm that found lines between sets of points. Instead of checking the grid, I would check the pieces that have actually been placed. Regardless of the number of dimensions
or size of hypercube, the number of player 1 points and player 2 points would be small, so I can’t imagine you’d need to do too much work. If any of you readers have implemented this, I’d be very
interested in seeing your code.
Closing Remarks
Tic tac toe is great, higher dimensions are great, and math and computer science are great. I strongly encourage you to fork this project on GitHub and make something neat. All the rendering is taken
care of; you can focus on your game ideas! Try making three-player 4D. Maybe that would nullify player 1’s advantage. Hmm. Or you could write an AI and have computers play each other! Anyway, this is
my longest blog post yet by far so I oughta stop thinking out loud.
An integer multiple of the fundamental frequency is called a? - Answers
What is a number divisible by 63 and 42?
In arithmetic and number theory, the least common multiple (also called the lowest common multiple or smallest common multiple) of two integers a and b, usually denoted by LCM(a, b), is the smallest positive integer that is a multiple of both a and b. LCM(63, 42) = 126.
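One standard way to compute it uses the identity lcm(a, b) · gcd(a, b) = a · b; a minimal sketch:

```python
from math import gcd

def lcm(a, b):
    # lcm(a, b) * gcd(a, b) == a * b for positive integers,
    # so divide the product by the greatest common divisor
    return a * b // gcd(a, b)
```

Python 3.9+ also ships `math.lcm` directly.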
What is odd parity with example?
Odd parity can be more clearly explained through an example. Consider the transmitted message 1010001, which has three ones in it. This is turned into odd parity by adding a zero, making the sequence
0 1010001. Thus, the total number of ones remains at three, an odd number.
What is an odd parity bit?
Description. A parity bit is a computer bit (1 or 0) within a byte of data that is used to enforce the parity checking rule agreed by two computers (even or odd). The parity bit is used to ensure
there are an even or odd number of 1s or 0s within the byte prior to transmission.
What is 3 bit parity?
3 bit Even Parity Generator: Let A, B, and C be the input bits and P the output, the even parity bit. The even parity bit is generated by counting the number of ones in the message bits: if the number of 1s is even, P gets the value 0, and if it is odd, P gets the value 1.
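That rule collapses to an XOR: the even parity generator’s output is simply P = A ⊕ B ⊕ C, which is 1 exactly when an odd number of the inputs are 1. A tiny sketch:

```python
def even_parity_bit(a, b, c):
    # XOR of the message bits: 1 when the count of 1s is odd,
    # which makes the total count (message + parity bit) even
    return a ^ b ^ c
```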
What is parity bit explain even and odd parity?
A parity bit is a check bit, which is added to a block of data for error detection purposes. It is used to validate the integrity of the data. The value of the parity bit is assigned either 0 or 1
that makes the number of 1s in the message block either even or odd depending upon the type of parity.
How is a parity bit used to check for errors?
You can determine if an error occurred during transmission by calculating the parity of the received bytes and com- paring the generated parity with the transmitted parity. Parity can only detect an
odd number of errors. If an even number of errors occurs, the computed parity will match the transmitted parity.
What is 4 bit odd parity generator?
Odd parity checker circuit receives these 4 bits and checks whether any error are present in the data. If the total number of 1s in the data is odd, then it indicates no error, whereas if the total
number of 1s is even then it indicates the error since the data is transmitted with odd parity at transmitting end.
Which gate is used in odd parity generator?
This output is the parity bit. To generate odd parity, simply invert the even parity. The last gate can be an Exclusive-NOR gate. To check parity first a new parity bit must be generated over the
date that was received.
What is an even parity bit used for?
Techopedia Explains Even Parity Parity bits are added to transmitted messages to ensure that the number of bits with a value of one in a set of bits add up to even or odd numbers.
In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a value of 1 is even, the parity bit value is set to 1 making the total count of 1s in the whole
set (including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the parity bit’s value is 0.
How do you calculate parity in algorithm?
Algorithm: getParity(n)
1. Initialize parity = 0
2. Loop while n != 0
   a. Invert parity: parity = !parity
   b. Unset the rightmost set bit: n = n & (n - 1)
3. Return parity

Example: n = 13 (1101), parity = 0
n = 13 & 12 = 12 (1100), parity = 1
n = 12 & 11 = 8 (1000), parity = 0
n = 8 & 7 = 0 (0000), parity = 1
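The same algorithm in code, a direct transcription using the n & (n - 1) trick to clear the lowest set bit:

```python
def get_parity(n):
    """Return 1 if n has an odd number of set bits, else 0.
    Each n & (n - 1) step clears the lowest set bit, so the loop
    runs once per set bit, toggling the parity each time."""
    parity = 0
    while n:
        parity ^= 1
        n &= n - 1
    return parity
```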
What is parity bit method in error detection?
A famous error detection code is a Parity Bit method. A parity bit is an extra bit included in binary message to make total number of 1’s either odd or even. Parity word denotes number of 1’s in a
binary string.
What is even and odd parity in binary?
Parity word denotes the number of 1’s in a binary string. There are two parity systems: even and odd. In an even parity system, a 1 is appended to the binary string if there is an odd number of 1’s in the string; otherwise a 0 is appended, to make the total number of 1’s even.
[Solved] Find the open intervals where the functio | SolutionInn
Find the open intervals where the function is concave upward or concave downward. Find any inflection points. f(x) = -(x+1)^6. Find any critical numbers for f and then use the second derivative test to decide whether the critical number(s) lead to relative maxima or relative minima. If f''(c) = 0 or f''(c) does not exist for a critical number c, then the second derivative test gives no information. In this case, use the first derivative test instead. f(x) = 4x - 10x + 9. Complete parts (a) through (c) for the following function. f(x) = x - 30x + 29. (a) Find intervals where the function is increasing or decreasing, and determine any relative extrema. (b) Find intervals where the function is concave upward or concave downward, and determine any inflection points. (c) Graph the function, considering the domain, critical points, symmetry, relative extrema, regions where the function is increasing or decreasing, inflection points, regions where the function is concave upward or concave downward, intercepts where possible, and asymptotes where applicable. Sketch the graph of a single function that has all of the properties listed. a. Continuous and differentiable for all real numbers b. f'(x) > 0 on (-∞, -2) and (2, 6) c. f'(x) 0 on (1, 4) f. f'(-2) = f'(6) = 0 g. f'(x) = 0 at (1, 3) and (4, 4)
Arithmetic in Disguise
Arithmetic in Disguise: What is it?
A Mathematical Droodle
Copyright © 1996-2018 Alexander Bogomolny
The picture reminds me of a smiling koala bear that was affinely mapped onto a right-angled triangle. (Turn your head left 45° and see if you agree with me. My wife does not.)
However, it was obtained with the applet below (# Rows 64, # Colors 64, Operation x·y AND (x+y) and Colors Reversed as the low right portion of the Square.)
The applet draws a square or triangular array of dots whose color is defined through simple arithmetic and bitwise operations. x and y coordinates are counted from the upper left corner of the array.
(The triangular array is just a sheared version of the lower left half of the square.) They then are combined by the selected Operation. The result is taken Modulo the number of colors. Or, as an
alternative, in the Binary mode all non-zero results are made to correspond to a single quantity. The finite result becomes an index into an array of gray shades.
Note how much the low right portion of the original display (# Rows 32, # Colors 31, Operation x OR y, and Colors Reversed unchecked) resembles the fractal structure of the Sierpinski gasket or that
of Pascal's triangle in modular arithmetic.
Simone Severini from the Institute for Quantum Computing and Department of Combinatorics and Optimization, University of Waterloo, has pointed out that Sierpinski gasket comes also through with the
Operation x AND y if the result of calculations is split into two classes: zero and non-zero. This is achieved by checking the Binary box.
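The x AND y observation is easy to reproduce without the applet: treat a cell (x, y) as “on” when x & y == 0. Over a 2^k by 2^k grid the on-cells trace stage k of the Sierpinski gasket, 3^k cells in all (each bit position allows the three combinations 00, 01, 10). A minimal sketch:

```python
def sierpinski(n=8):
    # cell (x, y) is "on" when x & y == 0; for n = 2**k the on-cells
    # form stage k of the Sierpinski gasket, with 3**k cells in total
    return ["".join("#" if (x & y) == 0 else "." for x in range(n))
            for y in range(n)]
```

Printing the rows of `sierpinski(32)` shows the same fractal structure the applet produces with Operation x AND y in Binary mode.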
How many different ways are there of arranging seven green and eight
How many different ways are there of arranging seven green and eight brown bottles in a row, so that exactly one pair of green bottles is side-by-side?
To solve this problem, we can use standard counting principles. Bottles of the same color are identical, so we are only choosing which positions in the row are green.

First, let's address the condition that exactly one pair of green bottles is side-by-side. Glue two green bottles together into a single "GG" block. Together with the five remaining single green bottles, that gives six green units, and no two of these units may touch each other; otherwise we would create a second side-by-side pair (or a run of three greens).

Now place the eight brown bottles in a row. They create 9 gaps (7 between adjacent brown bottles plus one at each end) where green units can go, with at most one unit per gap so the units stay separated. The number of ways to choose 6 of these 9 gaps is C(9,6) = 84.

Finally, choose which of the 6 occupied gaps holds the "GG" block rather than a single green bottle: 6 ways.

Therefore, the total number of ways of arranging the seven green and eight brown bottles such that exactly one pair of green bottles is side-by-side is:

C(9,6) * 6 = 84 * 6 = 504 ways.

So, there are 504 different ways of arranging the bottles satisfying the given condition.
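With only C(15, 7) = 6435 possible rows, the count is small enough to verify by brute force. A quick sketch: a green run of length k contributes k - 1 adjacent green pairs, so "exactly one side-by-side pair" means the total adjacency count is exactly 1.

```python
from itertools import combinations

def count_exactly_one_pair(greens=7, browns=8):
    """Count rows of identical green/brown bottles in which exactly
    one pair of green bottles stands side-by-side."""
    n = greens + browns
    total = 0
    for pos in combinations(range(n), greens):  # green positions
        s = set(pos)
        # adjacencies: indices i where cells i and i+1 are both green;
        # a run of length k contributes k-1, so a total of exactly 1
        # means one run of length 2 and the rest singletons
        adj = sum(1 for i in range(n - 1) if i in s and i + 1 in s)
        if adj == 1:
            total += 1
    return total
```

As a sanity check on the method, the tiny case of two green and one brown bottle (GGB and BGG) gives 2, matching the gap argument.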
Effect of Mass on Terminal Velocity
Published on Apr 02, 2024
The purpose of this investigation is to figure out the relationship mass has with an object’s terminal velocity. I have always found the concept of falling very interesting. While gravity on earth is
relatively constant, air friction is not. Thus, I wanted to study the drag coefficient and velocity of the fall, leading into terminal velocity.
Terminal Velocity is “When an object which is falling under the influence of gravity or subject to some other constant driving force is subject to a resistance or drag force which increases with
velocity, it will ultimately reach a maximum velocity where the drag force equals the driving force”(Nave). One example of the use of Terminal Velocity as a term is through the sport of skydiving.
When a skydiver is falling at the same speed as his drag force he has reached his terminal velocity. According to the Physics Factbook, the terminal velocity of a skydiver is 55 mph.
The purpose of this investigation is to find out the exact relationship, between change of the mass of the object falling and the changing terminal velocity as a result of the mass.
I believe the when I change the mass the relationship between change in mass and change in terminal velocity will have an exponential relationship, with the exponent being 2. This is because mass is
influential in both the drag and downward force. With two forces using the same variable it is most likely that the relationship will be squared.
Materials Required:
1. 20 Coffee Filters 6.67cm in Radius
2. 1 Stopwatch
Procedure for Data Collection
Find a “Drop Zone” that is 4.013 M high then repeat the following steps:
• Drop 1 coffee filter off of the Drop Zone and time the fall 2 times
• Average the two times in order to get a more precise answer
• Add 1 coffee filter to the stack and drop, and time, the filter stack 2 times
• Repeat until the time has been recorded for a stack of 20 coffee filters.
Procedure for Calculations
Once you have the time data, complete the following steps:
1. Use the formula s = ((u + v)/2)t and solve for v (here s = 4.013 m, u = 0 since the filters start from rest, and t = the average time of the drop). Do this for each of the 20 different stacks dropped.
2. Calculate the drag coefficient using the formula (C*ρ*A*(v^2))/2(WikiHow). C=the drag coefficient. I researched this, because in order to calculate it you need a wind tunnel. For a coffee filter,
the variable is approximately 1.00(meaning it can be disregarded for this experiment). The ρ is the density of the air that the coffee filters are falling through. In my experiment, the density was
1.21 kg/(m^3). The A is the surface area of the filters, and V is the Velocity calculated in step 1.
3. Calculate the Terminal Velocity. Use the formula v = sqrt((2*m*g)/(ρ*A*C)). The V is for Terminal Velocity. The m is the mass of the stack of filters. The G is the force of gravity. The ρ is the
density of the air. The A is the Area of the filters facing the ground when falling. C is the drag coefficient calculated in step 2.
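Steps 1 and 3 are easy to script. The sketch below uses the write-up's constants (C ≈ 1.0, ρ = 1.21 kg/m³, filter radius 0.0667 m); any mass value you pass in is up to you, since the per-filter mass is not stated above.

```python
import math

G = 9.81                        # gravitational acceleration, m/s^2
RHO = 1.21                      # air density, kg/m^3
C_DRAG = 1.0                    # drag coefficient for a coffee filter
AREA = math.pi * 0.0667 ** 2    # filter face area, m^2

def velocity_from_fall(s, t):
    # step 1: s = ((u + v)/2) * t with u = 0  ->  v = 2s / t
    return 2 * s / t

def terminal_velocity(mass):
    # step 3: v = sqrt(2 m g / (rho * A * C))
    return math.sqrt(2 * mass * G / (RHO * AREA * C_DRAG))
```

Note that this formula gives v proportional to the square root of the mass: doubling the mass multiplies the terminal velocity by sqrt(2).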
Looking at the data table, one thing that stands out is the increase in the drag force. Increasing from 4.657 all the way to 41.918, it is clear that adding each filter made a significant impact on the fall. This growth in drag force carries over to the terminal velocity, both in cm/s and m/s, and both quantities grow in tight, regular steps: the terminal velocity increases by approximately 16 cm/s each time, and the drag force increases by around 3 N. This makes sense because of the terminal velocity formula, v = sqrt((2*m*g)/(ρ*A*C)). The formula divides by the drag terms, so there are similarities in the rates of growth, but due to the other variables the terminal velocity grows at a higher numerical rate.
Looking at this graph, it is very easy to see that the trendline 0.171x + 0.258 is very accurate in predicting what the next data point will be. This contradicts my hypothesis, because I estimated that the graph would be exponential rather than linear.
Overall, my hypothesis that the relationship between the change in mass and the change in terminal velocity would be exponential was incorrect. The graph shown above demonstrates a trendline with an increase of 0.171 m/s for every incremental stage of the data collection process. I originally felt that because mass was influential in both the calculation of the drag force and that of the terminal velocity, the relationship would be squared. However, it turns out the relationship is not. This makes sense from a thought experiment: if it were exponential, it would be possible for objects with very large masses to travel at the speed of light and even faster. There are multiple possible reasons why the change is linear. The first is that the growth of the terminal velocity is most likely governed by the formulas: while the mass appears in the calculations for both the terminal velocity and the drag, the final calculation only uses it one time, as v = sqrt((2*m*g)/(ρ*A*C)). The second reason could be the laws of gravity. If everything falls at the same rate in a vacuum, then the only calculation to consider is the drag. As stated before, I did not calculate the drag coefficient, because to calculate it you need a wind tunnel. The drag calculation only uses mass one time, meaning it has to be linear.
1. Elert, Glenn. “Speed of a Skydiver (Terminal Velocity).” The Physics Factbook, hypertextbook.com/facts/1998/JianHuang.shtml.
2. Nave, N. “Terminal Velocity.” Fluid Friction, HyperPhysics, hyperphysics.phy-astr.gsu.edu/hbase/airfri2.html.
3. “How to Calculate Terminal Velocity.” WikiHow, WikiHow, 23 Nov. 2017, www.wikihow.com/Calculate-Terminal-Velocity.
Infinite time computable model theory
[bibtex key=HamkinsMillerSeaboldWarner2007:InfiniteTimeComputableModelTheory]
We introduce infinite time computable model theory, the computable model theory arising with infinite time Turing machines, which provide infinitary notions of computability for structures built on
the reals $\mathbb{R}$. Much of the finite time theory generalizes to the infinite time context, but several fundamental questions, including the infinite time computable analogue of the
Completeness Theorem, turn out to be independent of ZFC.
One thought on “Infinite time computable model theory”
1. Have you a copy of this paper as a pdf file in the Cantor’s attic library?
Mpan Checker - What is your Mpan and why is it important? | Energy Solutions
MPAN. What is it and Why is it Important?
An MPAN is a Meter Point Administration Number; MPANs act as labels for electricity meters in the UK. They are also sometimes called Supply Numbers or S-Numbers.
There is an equivalent for gas, too, known as the MPRN or Meter Point Reference Number.
You can use your MPAN to check and discover a lot of things about your energy supply. Our MPAN Checker is there to assist with that process.
Below you will find a guide to why checking your MPAN is important, what information it can provide you with, and how we help you figure everything out!
WHAT IS YOUR MPAN?
It is a 21-digit reference used to identify individual electricity meters. They were introduced in 1998 to help simplify administration for energy suppliers and make it easier for customers to switch
their supplier.
The number is separated into two sections: the core and the top-line data.
• The core is the final 13 digits. This is the unique identifier of your meter.
• The top-line data explains characteristics of the supply you are provided with.
Thanks to our MPAN calculator, you don't need to make sense of these numbers yourself.
However, for your interest, below is a breakdown of what your MPAN number means:
• Profile Class: This gives the supplier an indication of the property's typical energy consumption. Domestic properties will be in either profile class 01 or 02.
• Meter Time Switch Code: This refers to how many registers (sets of meter numbers or dials) your electricity meter has. Examples are a single-rate register, a day/night split register, or a seasonal time-of-day register.
• Line Loss Factor: These numbers indicate the Distribution Use of System charges that your supplier is expected to pay for using the networks and cables in your area.
• Distributor ID: These numbers identify the regional distribution company for your electricity supply. This company is responsible for the management of the wires that deliver electricity to your property. This number can be useful to you if you need to figure out who operates the electricity network in your area.
• Meter Point ID: This number is unique within the distribution area; it identifies the specific meter.
• Check Digit: A sum calculated from the Distributor ID and Meter Point ID to verify both numbers.
WHY IS IT SO IMPORTANT?
You will need your MPAN number when you’re switching energy supplier or moving home; your new supplier will often ask for this number as part of the switching process.
Below you can see the differences between the two processes.
If you’re… Then
Swapping Suppliers You will be asked your current MPAN. Your MPAN won’t change because the MPAN number is associated with property and is not dependent on the energy supplier used.
You’ll need to provide the MPANs of both the old and the new property. Your local DNO will be able to tell you the necessary information regarding your new property.
Moving House
You can also phone the National Grid’s Meter Number Helpline on 0870 608 1524.
It is also important to check your MPAN number in order to ensure it is valid and that you are paying for the correct energy supply.
Your MPAN can be found on your energy bills. If you do not have an energy bill for the property, as in if you have just moved in, instructions on what to do can be found above.
The MPAN comprises different elements which relate to your energy supply. Your MPAN can be validated by generating a checksum from 12 of the Core Identifier digits; this checksum is then compared to the check digit, which is the final digit of the MPAN.
The MPAN Checker carries out these complicated steps for you in order to make it an incredibly easy process; we do all the work for you!
The steps we carry out are below:
1. Note the final 13 digits of the MPAN; this is the Core Identifier.
2. Remove the final digit. This is the check digit and will be used later.
3. For the remaining 12 digits, multiply the first digit by 3.
4. Multiply the second digit by 5 (the next prime number after 3).
5. Repeat this process for each of the remaining 10 digits, using successive primes but EXCLUDING the number 11 (7, 13, 17, 19, 23, 29, 31, 37, 41, 43).
6. Calculate a checksum by adding up all these products (each of the 12 digits multiplied by its prime).
7. Generate a check digit as the checksum modulo 11 modulo 10.
8. Compare the calculated check digit to the check digit set aside in step 2. If they match, you have a valid MPAN.
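The validation steps above can be sketched as follows (a hedged illustration; the function name and interface are ours, not the site's actual implementation):

```python
# Successive primes used as multipliers, with 11 deliberately skipped.
PRIMES = [3, 5, 7, 13, 17, 19, 23, 29, 31, 37, 41, 43]

def mpan_core_is_valid(core: str) -> bool:
    """Validate a 13-digit MPAN core (12 data digits plus a check digit)."""
    if len(core) != 13 or not core.isdigit():
        return False
    digits = [int(c) for c in core]
    checksum = sum(d * p for d, p in zip(digits[:12], PRIMES))
    return checksum % 11 % 10 == digits[12]
```

For example, for the made-up core digits 123456789012 the checksum is 1084, and 1084 mod 11 mod 10 = 6, so "1234567890126" validates.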
If you receive an invalid result from the MPAN checker it could mean several things:
1. The coding could have returned the wrong result,
2. The Profile Class and Distributor ID you entered may not be valid,
3. You may not have completed the form correctly.
Please try again before contacting us further, as any one of these mistakes can easily occur.
We encourage you to check your own MPAN by using our service because there is the potential risk that your MPAN is wrong. You could be paying the wrong amount or even paying for someone else’s
electricity. The process is quick and easy; we take away the hassle of the above calculations.
All you need to provide is the entire 21-digit number, broken into the groups illustrated in the first section of this guide.
Techniques for Analyzing ML models
This is an article I had for quite a while as a draft. As part of my yearly cleanup, I've published it without finishing it. It might not be finished or have other problems.
Techniques for model analysis:
Prediction-based:
* Decision boundaries
* LIME
* Feature importance
* SHAP values
* Partial Dependence Plots
* Sensitivity analysis / perturbation importance
* Model parameter analysis
* ELI5
* Attention mapping / saliency mapping

Error-based:
* Confusion matrix

Data-based:
* Dimensionality reduction
* Feature correlations
If you're interested in analysis of CNNs, have a look at my masters thesis:
Analysis and Optimization of Convolutional Neural Network Architectures
Decision boundaries
Drawing this is only an option if you have 3 or fewer features, so it is not really useful in most problem settings.
SHAP values
SHAP Values (an acronym from SHapley Additive exPlanations) go in the direction of feature importance.
Let me explain them with an example of the Titanic dataset: You have a survival probability of a given person, e.g. 76%. You want to understand why it is 76%.
So what you can do is twiddle the features. How does the survival probability change when the person has fewer or more siblings? When the person has the median number of siblings?
There is the shap package for calculating the shap values.
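To make the idea concrete, here is a self-contained brute-force computation of exact Shapley values for a made-up two-feature value function; the shap package does this far more efficiently with approximations, and everything below (names, toy model) is illustrative:

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: average each feature's marginal contribution
    over all orderings in which the features can be added."""
    orderings = list(permutations(features))
    totals = {f: 0.0 for f in features}
    for order in orderings:
        coalition = set()
        for f in order:
            before = value(frozenset(coalition))
            coalition.add(f)
            totals[f] += value(frozenset(coalition)) - before
    return {f: t / len(orderings) for f, t in totals.items()}

def toy_value(coalition):
    """Made-up model output for a coalition of known features."""
    v = 0.0
    if "x1" in coalition:
        v += 2.0
    if "x2" in coalition:
        v += 1.0
    if {"x1", "x2"} <= coalition:
        v += 1.0  # interaction term: its credit gets split by symmetry
    return v

sv = shapley_values(["x1", "x2"], toy_value)  # x1 -> 2.5, x2 -> 1.5
```

Note that the values sum to the full model output (4.0), which is the "additive" part of SHapley Additive exPlanations.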
The structural distance between two graphs G and H is defined as $$d_S\left(G,H \left| L_G,L_H\right.\right) = \min_{L_G,L_H} d\left(\ell\left(G\right),\ell\left(H\right)\right)$$ where \(L_G\) is the set of accessible permutations/labelings of G, and \(\ell(G)\) is a permutation/relabeling of the vertices of G (\(\ell(G) \in L_G\)). The set of accessible permutations on a given graph is determined by the theoretical exchangeability of its vertices; in a nutshell, two vertices are considered to be theoretically exchangeable for a given problem if all predictions under the conditioning theory are invariant to a relabeling of the vertices in question (see Butts and Carley (2001) for a more formal exposition). Where no vertices are exchangeable, the structural distance becomes its labeled counterpart (here, the Hamming distance). Where all vertices are exchangeable, the structural distance reflects the distance between unlabeled graphs; other cases correspond to distance under partial labeling.
The accessible permutation set is determined by the exchange.list argument, which is dealt with in the following manner. First, exchange.list is expanded to fill an nx2 matrix. If exchange.list is a
single number, this is trivially accomplished by replication; if exchange.list is a vector of length n, the matrix is formed by cbinding two copies together. If exchange.list is already an nx2
matrix, it is left as-is. Once the nx2 exchangeability matrix has been formed, it is interpreted as follows: columns refer to graphs 1 and 2, respectively; rows refer to their corresponding vertices
in the original adjacency matrices; and vertices are taken to be theoretically exchangeable iff their corresponding exchangeability matrix values are identical. To obtain an unlabeled distance (the
default), then, one could simply let exchange.list equal any single number. To obtain the Hamming distance, one would use the vector 1:n.
Because the set of accessible permutations is, in general, very large (\(o(n!)\)), searching the set for the minimum distance is a non-trivial affair. Currently supported methods for estimating the structural distance are hill climbing, simulated annealing, blind Monte Carlo search, and exhaustive search (it is also possible to turn off searching entirely). Exhaustive search is not recommended for graphs larger than size 8 or so, and even this may take days; still, it is a valid alternative for small graphs. Blind Monte Carlo search and hill climbing tend to be suboptimal for this problem and are not, in general, recommended, but they are available if desired. The preferred (and default) option for permutation search is simulated annealing, which seems to work well on this problem (though some tinkering with the annealing parameters may be needed in order to get optimal performance). See the help for lab.optimize for more information regarding these options.
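To illustrate the fully exchangeable (unlabeled) extreme, here is a small Python sketch of the exhaustive-search case, written independently of the sna implementation:

```python
from itertools import permutations

def structdist_exhaustive(g, h):
    """Unlabeled structural distance between two graphs given as adjacency
    matrices: the minimum Hamming distance over all relabelings of h.
    Factorial cost, so only feasible for very small graphs."""
    n = len(g)
    best = None
    for perm in permutations(range(n)):
        d = sum(g[i][j] != h[perm[i]][perm[j]]
                for i in range(n) for j in range(n))
        best = d if best is None else min(best, d)
    return best

# Two labelings of the 3-vertex path are at structural distance 0,
# even though their labeled (Hamming) distance is nonzero.
path_a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]    # centre vertex is 1
path_b = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]    # centre vertex is 0
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

The triangle and the path differ by one edge under every relabeling, so their unlabeled distance is 2 (each missing edge counts twice in a symmetric adjacency matrix).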
Structural distance matrices may be used in the same manner as any other distance matrices (e.g., with multidimensional scaling, cluster analysis, etc.) Classical null hypothesis tests should not be
employed with structural distances, and QAP tests are almost never appropriate (save in the uniquely labeled case). See cugtest for a more reasonable alternative.
How to use XOR in SQL?
It's important to note that SQL uses different syntax for XOR depending on the database management system (DBMS). The above example demonstrates the use of XOR in standard SQL, but each DBMS may have
its own variation.
For example, in MySQL the ^ symbol is the bitwise XOR operator (MySQL also provides a logical XOR keyword). Here's an example:
SELECT (1 ^ 1) AS result;
This would return 0 (false), since the XOR of two identical operands is false.
In PostgreSQL there is no dedicated logical XOR operator, but inequality (!= or <>) behaves like XOR for boolean operands. Here's an example:
SELECT (1 != 1) AS result;
This would also return FALSE, since both operands are equal.
It's always best to consult the documentation of the specific DBMS you are working with to determine the correct syntax for XOR.
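A portable way to express XOR in any dialect is to expand it with AND/OR/NOT. The sketch below uses Python's built-in sqlite3 module purely as a convenient SQL sandbox (SQLite, too, has no logical XOR operator):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Build the full 2x2 truth table and compute XOR as
# (a AND NOT b) OR (NOT a AND b).
rows = conn.execute(
    "SELECT a, b, (a AND NOT b) OR (NOT a AND b) AS xor_result "
    "FROM (SELECT 0 AS a UNION ALL SELECT 1), "
    "     (SELECT 0 AS b UNION ALL SELECT 1)"
).fetchall()
# xor_result is 1 exactly when a and b differ
```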
Direct Search Factorization
Direct search factorization is the simplest Prime Factorization Algorithm. It consists of searching for factors of a number by systematically performing Trial Divisions, usually using a sequence of
increasing numbers. Multiples of small Primes are commonly excluded to reduce the number of trial Divisors, but just including them is sometimes faster than the time required to exclude them. This
approach is very inefficient, and can be used only with fairly small numbers.
When using this method on a number n, only Divisors up to floor(sqrt(n)) (where floor is the Floor Function) need to be tested. This is true since if all Integers up to floor(sqrt(n)) have been tried, then all possible Factors have had their Cofactors already tested. It is also true that, when the smallest Prime Factor p of n is greater than n^(1/3), then its Cofactor m (such that n = pm) must be Prime. To prove this, suppose that the smallest p is greater than n^(1/3). If m were Composite, say m = ab, then the smallest value a and b could assume is p. But then n = pm = pab >= p^3 > n, which cannot be true. Therefore, m must be Prime.
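A minimal Python sketch of direct search factorization, excluding even trial divisors after 2 and using the sqrt(n) bound discussed above:

```python
def trial_division(n):
    """Factor n (> 1) by trial division with divisors up to sqrt(n)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2  # after 2, test only odd candidates
    if n > 1:
        factors.append(n)  # the remaining cofactor is prime
    return factors
```

For example, trial_division(360) yields [2, 2, 2, 3, 3, 5].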
See also Prime Factorization Algorithms, Trial Division
© 1996-9 Eric W. Weisstein
Nuclear magnetic resonance and electron paramagnetic resonance: Basics
Nuclear Magnetic Resonance (NMR) and Electron Paramagnetic Resonance (EPR) are powerful and versatile techniques that have revolutionized the fields of chemistry, physics, and medicine. These
techniques exploit the intrinsic magnetic properties of atomic nuclei and unpaired electrons, respectively, to reveal detailed information about the structure, dynamics, and electronic environment of matter.
When placed in an external magnetic field, certain nuclei—such as hydrogen or carbon in NMR—or unpaired electrons in EPR behave like tiny gyroscopes, precessing around the magnetic field at a characteristic frequency known as the Larmor frequency. This frequency is directly related to the magnetic field strength and the specific particle being observed, whether it be a nucleus or an electron.
In NMR, the interaction between spinning nuclei and an applied external magnetic field leads to Zeeman splitting, where the energy levels of the nuclei split according to their spin states.
Similarly, in EPR, unpaired electrons experience energy level splitting in a magnetic field. By introducing a radio frequency (RF) pulse (in NMR) or a microwave pulse (in EPR), we can perturb these
spin systems, leading to a rich array of resonance phenomena that can be detected and analyzed.
This article delves deep into both the classical and quantum mechanical descriptions of NMR, beginning with the phenomenological Bloch equations, which govern the time evolution of nuclear
magnetization under the influence of static and oscillating magnetic fields. We explore the concepts of longitudinal ($T_1$) and transverse ($T_2$) relaxation times, which characterize how the system
returns to equilibrium after being disturbed, as well as the effects of a classical driving field on spin polarization.
Building on the classical foundation, we then transition to the quantum mechanical description of NMR and EPR, examining the system dynamics through the lens of quantum operators, the density matrix
formalism, and the Lindblad master equation. This comprehensive approach allows us to recover the classical Bloch equations from the quantum mechanical model, providing a deep understanding of the
microscopic origins of macroscopic observations.
Through a combination of theoretical models and simulations, this article aims to provide a thorough understanding of how NMR and EPR work and why these techniques remain cornerstones in both
research and industry.
Nuclear Zeeman effect - the two level system
The energy splitting in nuclear states exploited by NMR is caused by the coupling of the nuclear spins to an external magnetic field: depending on their orientation the spins have different energy states. From a quantum mechanical view this is, in its simplest configuration, a simple two level system with all of its consequences - the quantum mechanical view predicts and explains effects like Rabi oscillations and provides a more in-depth view of NMR. In this article we're going to focus on the classical phenomenological view that utilizes the classical coupling of a magnetic gyroscope to the polarizing field - depending on its orientation it can take different energy states.
In thermal equilibrium the population imbalance between the two states (the ground state $|\alpha\rangle$ and the excited state $|\beta\rangle$) is determined by the temperature of the system according to the Boltzmann law:
[ \begin{aligned} \Delta E &= \hbar \gamma B \\ \frac{N_1}{N_2} &= e^{-\frac{\Delta E}{k_B T}} \end{aligned} ]
The detectable net magnetization, caused by the population difference between the lower and the higher state, is thus described by:
[ \begin{aligned} \Delta N &= N_{lower} - N_{upper} \\ &= N \left( 1 - e^{-\frac{\Delta E}{k_B T}} \right) \end{aligned} ]
As one can see, one can influence the population difference either by increasing the energy gap (and thus going to higher magnetic fields, which also yields higher resonance frequencies, as we're going to see shortly) or by massively decreasing the temperature - which is the reason why one sees many NMR and EPR spectrometers in science operating at cryogenic temperatures of liquid nitrogen (77 K), liquid helium (4 K) or even below.
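These relations are easy to put into numbers; the sketch below uses standard constants (CODATA-style values for hbar, k_B and the proton gyromagnetic ratio, quoted here as approximations):

```python
import math

HBAR = 1.054571817e-34        # reduced Planck constant, J*s
KB = 1.380649e-23             # Boltzmann constant, J/K
GAMMA_1H = 2.6752218744e8     # proton gyromagnetic ratio, rad/(s*T)

def larmor_frequency(b_field, gamma=GAMMA_1H):
    """Larmor frequency f = gamma * B / (2 pi) in Hz."""
    return gamma * b_field / (2.0 * math.pi)

def polarization(b_field, temperature, gamma=GAMMA_1H):
    """Fractional population difference Delta N / N = 1 - exp(-Delta E / kT)
    for a spin-1/2 system, with Delta E = hbar * gamma * B."""
    delta_e = HBAR * gamma * b_field
    return 1.0 - math.exp(-delta_e / (KB * temperature))
```

At 1 T and 300 K this gives a proton Larmor frequency of about 42.58 MHz but a polarization of only a few parts per million, which is exactly why higher fields and cryogenic temperatures pay off.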
Phenomenological Bloch equations
In the following sections we’re going to look at a phenomenological non quantum mechanical description of NMR. This view is described by the phenomenological Bloch equations.
They describe the net magnetization in a sample when applying an external magnetic field. The idea starts off with looking at a single spin precessing around a quantization axis set by the external static polarizing magnetic field $B_0$, usually assumed along the $z$ axis. Since there is no net force acting on the dipole, it's the torque that will be of interest:
[ \begin{aligned} \frac{\partial \vec{L}}{\partial t} &= \sum{\vec{\tau}} \\ &= \vec{\mu} \times \vec{B} \end{aligned} ]
The magnetic momentum of a spin can be related with the angular momentum using the gyromagnetic ratio:
[ \frac{\partial \vec{\mu}}{\partial t} = \gamma \vec{\mu} \times \vec{B} ]
Assuming that the phenomenological net magnetization results from the sum of all magnetic momenta, this yields:
[ \begin{aligned} \vec{M} &= \sum_{i} \vec{\mu_i} \\ \frac{\partial \vec{M}(t)}{\partial t} &= \gamma \vec{M}(t) \times \vec{B}(t) \end{aligned} ]
Solutions to the Bloch equations
A steady state solution without relaxation
First let’s take a look at the steady state solution without relaxation in a static external magnetic field along the $z$ axis without applying any driving fields. We’re going to see the spins are
precessing around the quantization axis.
[ \begin{aligned} \vec{B} &= \begin{pmatrix} 0 \\ 0 \\ B_z \end{pmatrix} \\ \frac{\partial}{\partial t} \begin{pmatrix} M_x \\ M_y \\ M_z \end{pmatrix} &= \gamma \begin{pmatrix} M_x \\ M_y \\ M_z \end{pmatrix} \times \begin{pmatrix} 0 \\ 0 \\ B_z \end{pmatrix} \end{aligned} ]
This yields a very simple set of differential equations when being read component wise:
[ \begin{aligned} \partial_t M_x &= \gamma ( M_y * B_z - M_z * 0 ) \\ \partial_t M_y &= \gamma ( -M_x * B_z + M_z * 0 ) \\ \partial_t M_z &= \gamma * 0 \end{aligned} ]
The last equation shows that without relaxation effects $M_z$ is time independent - the spin precesses around the axis.
The remaining two differential equations are simple to solve assuming that $\partial_t B_z = 0$ (and also assuming the gyromagnetic ratio is constant over time). We do this by introducing the complex transversal magnetization
[ M_{xy} = M_{x} + i M_{y} ]
This will introduce a transversal co-rotating field component. We’ll see this shortly. Adding the two equations with an additional $i$ term:
[ \begin{aligned} \partial_t M_x &= \gamma ( M_y * B_z - M_z * 0 ) \\ \partial_t M_y &= \gamma ( -M_x * B_z + M_z * 0 ) \\ \to \partial_t M_x + i \partial_t M_y &= \gamma M_y B_z - i \gamma M_x B_z \\ \partial_t \underbrace{(M_x + i M_y)}_{M_{xy}} &= -i (i \gamma M_y B_z + \gamma M_x B_z) \\ &= -i \gamma B_z (i M_y + M_x) \\ &= -i \gamma B_z \underbrace{(M_x + i M_y)}_{M_{xy}} \end{aligned} ]
We’ve just reduced the system of two coupled differential equations to a single simply solvable differential equation:
[ \partial_t M_{xy} = -i \gamma B_z M_{xy} ]
Using the classical Ansatz
[ \begin{aligned} M_{xy} &= M_{xy}(0) e^{-i \omega t} \\ \to \partial_t M_{xy} &= -i \omega M_{xy}(t) \end{aligned} ]
we now arrive at a very simple solution:
[ \begin{aligned} \partial_t M_{xy} &= - i \gamma B_z M_{xy} \\ \to -i \omega M_{xy}(t) &= -i \gamma B_z M_{xy} \\ \to \omega &= \gamma B_z \end{aligned} ]
As we can see for the steady state solution without relaxation the $M_{xy}$ component precesses with the so called Larmor frequency $\omega = \gamma B_z$ around the bias / quantization ($z$) axis
keeping a constant amplitude - and the amplitude of the magnetization in the quantization axis $M_z(t)$ also stays constant.
[ \begin{aligned} M_{xy}(t) &= M_{xy}(0) e^{-i \gamma B_z t} \\ M_{z}(t) &= M_{z}(0) \end{aligned} ]
Longitudinal relaxation ($T_1$) and constant external field
Out of experimental data one can assume that the net magnetization will return to rest if perturbed. Assuming that one has oriented all spins along the $z$ axis at rest and then kicked them towards the $xy$ plane, one can assume that the spins will relax into their steady state orientation with a rate that's proportional to the difference between the current magnetization $M_z(t)$ and the equilibrium magnetization $M_0$, since it's a stochastic process:
[ \begin{aligned} \frac{\partial M_z(t)}{\partial t} = \frac{1}{T_1} \left(M_0 - M_z(t)\right) \end{aligned} ]
This is a simple inhomogeneous differential equation of first order. First let’s solve the homogeneous differential equation:
[ \begin{aligned} \frac{dM(t)}{dt} &= -\frac{1}{T_1} M(t) \\ \frac{dM(t)}{M(t)} &= -\frac{1}{T_1} dt \\ \int \frac{1}{M(t)} dM(t) &= -\frac{1}{T_1} \int dt \\ \ln(M(t)) &= -\frac{t}{T_1} + c_1 \\ M(t) &= e^{-\frac{t}{T_1}} \underbrace{e^{c_1}}_{c_2} \\ \to M_h(t) &= c_2 e^{-\frac{t}{T_1}} \end{aligned} ]
Now let's search for a particular solution. Since our inhomogeneity is constant and not time dependent, we assume that the particular solution will also be constant. We can insert this Ansatz into our differential equation:
[ \begin{aligned} M_p'(t) &= 0 \\ M'(t) &= -\frac{1}{T_1} M(t) + \frac{M_0}{T_1} \\ 0 &= -\frac{1}{T_1} M_p + \frac{M_0}{T_1} \\ \to M_p &= M_0 \end{aligned} ]
As usual we assume that our complete solution is formed by a combination of the homogeneous solution, which captures the dynamics, and the particular solution, which captures the steady state; the initial condition fixes the remaining constant:
[ \begin{aligned} M_h(t) &= c_2 e^{-\frac{t}{T_1}} \\ M_p(t) &= M_0 \\ M(t) &= M_h(t) + M_p(t) \\ M(t) &= c_2 e^{-\frac{t}{T_1}} + M_0 \end{aligned} ]
To determine the constant we use the initial condition:
[ \begin{aligned} M(0) &= M_z(0) \\ \to c_2 + M_0 &= M_z(0) \\ \to c_2 &= (M_z(0) - M_0) \\ \to M(t) &= M_z(0) e^{-\frac{t}{T_1}} + M_0 \left( 1 - e^{-\frac{t}{T_1}} \right) \end{aligned} ]
Transversal relaxation ($T_2$) and constant external field
For transversal relaxation one can assume a decay term with rate $\frac{1}{T_2}$ that is proportional to the transversal magnetization. The steady state should approach zero, so the equations are simpler than for longitudinal relaxation:
[ \begin{aligned} \partial_t M_x &= \gamma B_0 M_y - \frac{M_x}{T_2} \\ \partial_t M_y &= - \gamma B_0 M_x - \frac{M_y}{T_2} \end{aligned} ]
To solve the equations one uses the trick utilizing the complex plane again and combines both equations:
[ \begin{aligned} M_{xy} &= M_x + i M_y \\ \partial_t M_x + i \partial_t M_y &= (\gamma B_0 M_y - \frac{M_x}{T_2}) + i (- \gamma B_0 M_x - \frac{M_y}{T_2}) \\ \partial_t (M_x + i M_y) &= \gamma B_0 (M_y - i M_x) - \frac{M_x + i M_y}{T_2} \\ &= -i \gamma B_0 (i M_y + M_x) - \frac{M_x + i M_y}{T_2} \\ &= -i \gamma B_0 (M_x + i M_y) - \frac{M_x + i M_y}{T_2} \\ \to \partial_t M_{xy} &= -i \gamma B_0 M_{xy} - \frac{M_{xy}}{T_2} \end{aligned} ]
Now using again the Ansatz
[ \begin{aligned} M_{xy} &= c e^{-i \omega t} \\ \partial_t M_{xy} &= - i \omega M_{xy} \end{aligned} ]
We can solve the equation again:
[ \begin{aligned} -i \omega M_{xy} &= -i \gamma B_0 M_{xy} - \frac{1}{T_2} M_{xy} \\ &= \underbrace{\left(-i \gamma B_0 - \frac{1}{T_2}\right)}_{-i \omega} M_{xy} \\ \to M_{xy}(t) &= c e^{(-i \gamma B_0 - \frac{1}{T_2}) t} \\ &= M_{xy}(0) e^{-i \gamma B_0 t} e^{-\frac{t}{T_2}} \end{aligned} ]
As one can see the second factor yields decay of the transversal magnetization, i.e. relaxation back towards the polarizing field direction. When one waits indefinitely this yields zero magnetization in the transversal direction:
[ \lim_{t \to \infty} M_{xy}(t) = M_{xy}(0) e^{-i \gamma B_0 t} \underbrace{e^{- \frac{t}{T_2}}}_{\to 0} \to 0 ]
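The behaviour derived in the last two sections can be cross-checked numerically. The sketch below integrates the Bloch equations with a plain Euler step; all parameter values ($\gamma$, $B_0$, $T_1$, $T_2$) are illustrative, in arbitrary units:

```python
import math

def bloch_step(m, b, gamma, t1, t2, dt):
    """One explicit Euler step of the Bloch equations
    dM/dt = gamma * (M x B) plus T1/T2 relaxation,
    with the equilibrium magnetization M0 = 1 along z."""
    mx, my, mz = m
    bx, by, bz = b
    dmx = gamma * (my * bz - mz * by) - mx / t2
    dmy = gamma * (mz * bx - mx * bz) - my / t2
    dmz = gamma * (mx * by - my * bx) + (1.0 - mz) / t1
    return (mx + dmx * dt, my + dmy * dt, mz + dmz * dt)

# Tip the magnetization into the xy plane and evolve it in a static field
# B = (0, 0, B0): M_xy precesses at the Larmor frequency while decaying
# with T2, and M_z recovers towards M0 with T1.
m = (1.0, 0.0, 0.0)
dt = 1e-4
for _ in range(5000):  # integrate up to t = 0.5 (arbitrary units)
    m = bloch_step(m, (0.0, 0.0, 1.0), gamma=1.0, t1=1.0, t2=0.5, dt=dt)
```

After t = 0.5 the transversal magnitude lands close to the analytic $e^{-t/T_2}$ and $M_z$ close to $1 - e^{-t/T_1}$, reproducing the solutions derived above.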
Bloch equations with an external driving field
Up until now we only looked at the Bloch equations in a steady state - with only a static external $B_z$ bias field. To drive the system one usually applies a classical radio frequency (RF) drive field in an orthogonal direction. This is usually modeled in the $xy$ plane. Under realistic circumstances one also gets components in the $B_z$ direction; then it gets even harder to solve, since decoupling of the equations does not work any more.
Transforming vectors into rotating fields, differentials of rotation matrices
Now let’s first recapitulate a little math on rotation matrices. This will help in solving the equation massively. Let’s assume we rotate a vector around the $z$ axis on the $xy$ plane. We can do
this by applying the rotation matrix
[ R_{z} (\omega t) = \begin{pmatrix} cos(\omega t) & -sin(\omega t) & 0 \\ sin(\omega t) & cos(\omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} ]
To transform a vector from our laboratory frame into a co-rotating frame we just multiply it with the rotation matrix:
[ \vec{v}_R = \begin{pmatrix} cos(-\Omega t) & -sin(-\Omega t) & 0 \\ sin(-\Omega t) & cos(-\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} * \vec{v}_{L} ]
Now lets take a closer look at the time derivative of the vector in the rotating frame:
[ \frac{\partial \vec{v}_R}{\partial t} = \frac{\partial R_z(-\Omega t)}{\partial t} \vec{v}_L + R_z(-\Omega t) \frac{\partial \vec{v}_L}{\partial t} ]
As one can see this contains a time differential of a rotation matrix. Taking a closer look again:
[ \begin{aligned} \frac{\partial R_z(\Omega t)}{\partial t} &= \frac{\partial}{\partial t} \begin{pmatrix} \cos(\Omega t) & -\sin(\Omega t) & 0 \\ \sin(\Omega t) & \cos(\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \\ &= \Omega \begin{pmatrix} -\sin(\Omega t) & -\cos(\Omega t) & 0 \\ \cos(\Omega t) & -\sin(\Omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix} \end{aligned} ]
On first glance this doesn't look very convenient, but taking a closer look at the column-wise cross product with the $z$ unit vector
[ \begin{aligned} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times R_z(\Omega t) &= \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times \begin{pmatrix} \cos(\Omega t) & -\sin(\Omega t) & 0 \\ \sin(\Omega t) & \cos(\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \\ &= \begin{pmatrix} -\sin(\Omega t) & -\cos(\Omega t) & 0 \\ \cos(\Omega t) & -\sin(\Omega t) & 0 \\ 0 & 0 & 0 \end{pmatrix} \end{aligned} ]
one can actually rewrite this expression as
[ \frac{\partial R_z(\Omega t)}{\partial t} = \Omega \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times R_z(\Omega t) ]
and correspondingly, for the inverse rotation used below,
[ \frac{\partial R_z(-\Omega t)}{\partial t} = -\Omega \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times R_z(-\Omega t) ]
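As a consistency check, the relation $\partial_t R_z(\Omega t) = \Omega \, (\hat{z} \times R_z(\Omega t))$, with the cross product applied column-wise, can be verified by finite differences; $\Omega$ and $t$ below are arbitrary illustrative values:

```python
import math

def rz(a):
    """Rotation matrix about the z axis by angle a."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def cross_cols(v, m):
    """Cross product of vector v with each column of matrix m."""
    cols = list(zip(*m))
    out_cols = [(v[1] * c[2] - v[2] * c[1],
                 v[2] * c[0] - v[0] * c[2],
                 v[0] * c[1] - v[1] * c[0]) for c in cols]
    return [list(row) for row in zip(*out_cols)]

# Central-difference approximation of d/dt R_z(omega * t) ...
omega, t, h = 0.7, 0.3, 1e-6
numeric = [[(rz(omega * (t + h))[i][j] - rz(omega * (t - h))[i][j]) / (2 * h)
            for j in range(3)] for i in range(3)]
# ... compared against omega * (z_hat x R_z(omega * t)).
analytic = [[omega * e for e in row]
            for row in cross_cols((0.0, 0.0, 1.0), rz(omega * t))]
```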
Bloch equation without relaxation with driving field
First let's take a look at the Bloch equations when adding a driving field while neglecting the relaxation processes. This is a reasonable approximation for many pulsed experiments since the driving
pulse durations are often very short compared to relaxation times.
Assume the driving field rotates with frequency $\omega_{RF}$ in the $xy$ plane. Thus our driving field vector in the laboratory frame is
[ B_{RF} = \begin{pmatrix} B_1 \cos(\omega_{RF} t) \\ -B_1 \sin(\omega_{RF} t) \\ 0 \end{pmatrix} ]
We can now model our system again using the Bloch equation in the laboratory frame. In the following section all components in the laboratory frame are denoted by the $L$ index, all in the rotating
frame by the $R$ label (i.e. $M_L$ and $M_R$).
[ \frac{\partial M_L}{\partial t} = \gamma \vec{M_L} \times \vec{B_L} ]
As we will see later it's much easier to move into a co-rotating frame that rotates around our quantization axis $z$ at the Larmor frequency. In the following we assume the rotation frequency of our co-rotating frame is described by $\Omega$. Thus our Bloch equation in the co-rotating frame can be derived by transforming our magnetization vector into the co-rotating frame:
[ \frac{\partial M_R}{\partial t} = \frac{\partial (R_z(-\Omega t) M_L)}{\partial t} ]
Now we just differentiate with the product rule and apply the differential of the rotation matrix from the last subsection:
[ \begin{aligned} \frac{\partial M_R}{\partial t} &= \frac{\partial (R_z(-\Omega t) M_L)}{\partial t} \\ &= \frac{\partial R_z(-\Omega t)}{\partial t} M_L + R_z(-\Omega t) \frac{\partial M_L}{\partial t} \\ &= -\Omega \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times M_R + R_z(-\Omega t) \frac{\partial M_L}{\partial t} \\ &= -\Omega \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times M_R + R_z(-\Omega t) \underbrace{\left( \gamma M_L \times \begin{pmatrix} B_1 \cos(\omega_{RF} t) \\ -B_1 \sin(\omega_{RF} t) \\ B_z \end{pmatrix} \right)}_{\frac{\partial M_L}{\partial t}} \end{aligned} ]
Since a rotation distributes over a cross product, $R(\vec{a} \times \vec{b}) = (R\vec{a}) \times (R\vec{b})$, and the term $\frac{\partial M_L}{\partial t}$ is just a cross product of two vectors, we can transform each of the two vectors into the rotating frame separately. We first take a look at the magnetic field components:
[ \begin{aligned} \vec{B}_R &= \begin{pmatrix} \cos(\Omega t) & -\sin(\Omega t) & 0 \\ \sin(\Omega t) & \cos(\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} B_1 \cos(\omega_{RF} t) \\ -B_1 \sin(\omega_{RF} t) \\ B_z \end{pmatrix} \\ &= \begin{pmatrix} B_1 \cos(\omega_{RF} t) \cos(\Omega t) + B_1 \sin(\omega_{RF} t) \sin(\Omega t) \\ B_1 \cos(\omega_{RF} t) \sin(\Omega t) - B_1 \sin(\omega_{RF} t) \cos(\Omega t) \\ B_z \end{pmatrix} \\ &= \begin{pmatrix} \frac{1}{2} \left( B_1 \cos((\omega_{RF} + \Omega) t) + B_1 \cos((\omega_{RF} - \Omega) t) + B_1 \cos((\omega_{RF} - \Omega) t) - B_1 \cos((\omega_{RF} + \Omega) t) \right) \\ \frac{1}{2} \left( B_1 \sin((\omega_{RF} + \Omega) t) + B_1 \sin((-\omega_{RF} + \Omega) t) - B_1 \sin((\omega_{RF} + \Omega) t) - B_1 \sin((\omega_{RF} - \Omega) t) \right) \\ B_z \end{pmatrix} \\ &= \begin{pmatrix} B_1 \cos((\omega_{RF} - \Omega) t) \\ -B_1 \sin((\omega_{RF} - \Omega) t) \\ B_z \end{pmatrix} \end{aligned} ]
We can now insert this field into our Bloch equation in the rotating frame:
[ \begin{aligned} \frac{\partial M_R}{\partial t} &= -\Omega \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \times M_R + \left(\gamma M_R \times \begin{pmatrix} B_1 \cos((\omega_{RF} - \Omega) t) \\ -B_1 \sin((\omega_{RF} - \Omega) t) \\ B_z \end{pmatrix} \right) \\ &= - \Omega \begin{pmatrix} -M_{R,y} \\ M_{R,x} \\ 0 \end{pmatrix} + \gamma \begin{pmatrix} M_{R,x} \\ M_{R,y} \\ M_{R,z} \end{pmatrix} \times \begin{pmatrix} B_1 \cos((\omega_{RF} - \Omega) t) \\ -B_1 \sin((\omega_{RF} - \Omega) t) \\ B_z \end{pmatrix} \\ &= - \Omega \begin{pmatrix} -M_{R,y} \\ M_{R,x} \\ 0 \end{pmatrix} + \gamma \begin{pmatrix} M_{R,y} B_z + M_{R,z} B_1 \sin((\omega_{RF} - \Omega) t) \\ -M_{R,x} B_z + M_{R,z} B_1 \cos((\omega_{RF} - \Omega) t) \\ - M_{R,x} B_1 \sin((\omega_{RF} - \Omega) t) - M_{R,y} B_1 \cos((\omega_{RF} - \Omega) t) \end{pmatrix} \end{aligned} ]
Bloch equation with driving field and relaxation
As a last step let's now add relaxation again. Recall that in the laboratory frame the phenomenological relaxation is described by the last term in the following expression:
[ \frac{\partial M_L}{\partial t} = \gamma \vec{M}_L \times \vec{B}_{eff} - \begin{pmatrix} \frac{M_{L,x}}{T_2} \\ \frac{M_{L,y}}{T_2} \\ -\frac{M_0 - M_{L,z}}{T_1} \end{pmatrix} ]
To include them in the rotating frame let's just apply the rotation matrix (the $z$ component is left unchanged by a rotation around $z$):
[ \begin{pmatrix} \cos(\Omega t) & -\sin(\Omega t) & 0 \\ \sin(\Omega t) & \cos(\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{M_{L,x}}{T_2} \\ \frac{M_{L,y}}{T_2} \\ -\frac{M_0 - M_{L,z}}{T_1} \end{pmatrix} = \begin{pmatrix} \frac{1}{T_2} (\cos(\Omega t) M_{L,x} - \sin(\Omega t) M_{L,y}) \\ \frac{1}{T_2} (\sin(\Omega t) M_{L,x} + \cos(\Omega t) M_{L,y}) \\ -\frac{M_0 - M_{L,z}}{T_1} \end{pmatrix} ]
Taking a look at the transformed net magnetization vector $M_L$ in the rotating frame:
[ \begin{aligned} \begin{pmatrix} M_{R,x} \\ M_{R,y} \\ M_{R,z} \end{pmatrix} &= \begin{pmatrix} \cos(\Omega t) & -\sin(\Omega t) & 0 \\ \sin(\Omega t) & \cos(\Omega t) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} M_{L,x} \\ M_{L,y} \\ M_{L,z} \end{pmatrix} \\ &= \begin{pmatrix} \cos(\Omega t) M_{L,x} - \sin(\Omega t) M_{L,y} \\ \sin(\Omega t) M_{L,x} + \cos(\Omega t) M_{L,y} \\ M_{L,z} \end
{pmatrix} \end{aligned} ]
This shows that we can simply apply the prefactor $\frac{1}{T_2}$ and the structure $\frac{M_0 - M_{R,z}}{T_1}$ to the transformed vector, as one could have expected:
[ \frac{\partial M_R}{\partial t} = - \Omega \begin{pmatrix} -M_{R,y} \\ M_{R,x} \\ 0 \end{pmatrix} + \gamma \begin{pmatrix} M_{R,y} B_z + M_{R,z} B_1 \sin((\omega_{RF} - \Omega) t) \\ -M_{R,x} B_z + M_{R,z} B_1 \cos((\omega_{RF} - \Omega) t) \\ - M_{R,x} B_1 \sin((\omega_{RF} - \Omega) t) - M_{R,y} B_1 \cos((\omega_{RF} - \Omega) t) \end{pmatrix} - \begin{pmatrix} \frac{M_{R,x}}{T_2} \\ \frac{M_{R,y}}{T_2} \\ -\frac{M_0 - M_{R,z}}{T_1} \end{pmatrix} ]
Thus we have our set of three coupled differential equations that model a simple system with quantization or polarization axis $z$ and a perpendicular RF field in the $xy$ plane:
[ \begin{aligned} \partial_t M_{R,x}(t) &= \Omega M_{R,y}(t) + \gamma M_{R,y}(t) B_z + \gamma M_{R,z}(t) B_1 \sin((\omega_{RF} - \Omega) t) - \frac{1}{T_2} M_{R,x}(t) \\ \partial_t M_{R,y}(t) &= -\Omega M_{R,x}(t) - \gamma M_{R,x}(t) B_z + \gamma M_{R,z}(t) B_1 \cos((\omega_{RF} - \Omega) t) - \frac{1}{T_2} M_{R,y}(t) \\ \partial_t M_{R,z}(t) &= - \gamma M_{R,x}(t) B_1 \sin((\omega_{RF} - \Omega) t) - \gamma M_{R,y}(t) B_1 \cos((\omega_{RF} - \Omega) t) + \frac{M_0 - M_{R,z}(t)}{T_1} \end{aligned} ]
Numerical simulations
What do these equations actually mean? Keep in mind these are classical phenomenological equations - they don't describe effects such as Rabi oscillations - so for example you will only see saturation of the excited state with longer and longer pulse times. This does not fit the experiment, where the largest transverse magnetization (and thus the largest signal) is achieved using $\frac{\pi}{2}$ pulses.
The following simulations have been calculated with the following parameters:
• Time range over 25 ms
• Initial magnetization along the $z$ axis
• $B_0$ field of 7 T
• $B_1$ field strength of 1 mT
• $\omega_{RF}$ of 42 MHz
• $\Omega$ of 42 MHz
• $\gamma$ - the gyromagnetic ratio - of $42 \frac{MHz}{T}$
• The $T_1$ and $T_2$ times have been chosen way shorter than real values to keep the simulation time span short - they've been shortened by a factor of about 1 million compared to a free proton
□ $T_1 = 650ns$
□ $T_2 = 150ns$
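A minimal integration sketch for the three coupled equations above, using a classical fixed-step RK4 integrator in plain Python. This is not the original simulation code: the function and variable names are my own, and the example call uses placeholder values in scaled units rather than the physical parameters listed above.

```python
import math

def bloch_rhs(t, M, gamma, Bz, B1, w_rf, Omega, T1, T2, M0):
    """Right-hand side of the three rotating-frame Bloch equations above."""
    Mx, My, Mz = M
    d = (w_rf - Omega) * t
    s, c = math.sin(d), math.cos(d)
    return (Omega * My + gamma * My * Bz + gamma * Mz * B1 * s - Mx / T2,
            -Omega * Mx - gamma * Mx * Bz + gamma * Mz * B1 * c - My / T2,
            -gamma * Mx * B1 * s - gamma * My * B1 * c + (M0 - Mz) / T1)

def rk4(f, M, t0, t1, steps, *args):
    """Classical fixed-step fourth-order Runge-Kutta integrator."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, M, *args)
        k2 = f(t + h / 2, tuple(m + h / 2 * k for m, k in zip(M, k1)), *args)
        k3 = f(t + h / 2, tuple(m + h / 2 * k for m, k in zip(M, k2)), *args)
        k4 = f(t + h, tuple(m + h * k for m, k in zip(M, k3)), *args)
        M = tuple(m + h / 6 * (a + 2 * b + 2 * c + d) for m, a, b, c, d
                  in zip(M, k1, k2, k3, k4))
        t += h
    return M

# example call in scaled units (placeholder values): start along +z,
# weak drive with the frame rotating at the drive frequency
M_final = rk4(bloch_rhs, (0.0, 0.0, 1.0), 0.0, 10.0, 5000,
              1.0, 1.0, 0.1, 0.5, 0.5, 50.0, 20.0, 1.0)
print(M_final)
```

Without relaxation and without a drive the equations reduce to a pure rotation, so the magnitude of the magnetization vector is conserved - a convenient check for the integrator.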
Driving with an on resonant field
To simulate the driving, the initial state is a single spin polarized completely parallel to the quantization axis - in the $z$ direction. The driving field, relaxation and the bias field are enabled.
As one can see the driving field in the rotating frame is constant while oscillating in the lab frame (plots in the lower rows). This is due to the rotating frame rotating at the Larmor frequency. The drive slowly tilts the spin from the $z$ axis (the magnetization shown in the upper right plot) into the $xy$ plane (the magnetization shown in the upper left plot). As one can see this yields two oscillating components. One is in phase with the driving field (i.e. along the co-rotating $x$ axis) - this is called the absorptive component since it counteracts the effective field that also drives the system. The second signal, along the $y$ axis, is shifted with respect to the drive signal - it is called the dispersive signal due to this phase shift.
Decay due to relaxation effects
Illustrating the decay due to relaxation effects is done with the same simulation, but initially polarizing the spins in the $xy$ plane and disabling the drive field.
As one can see the magnetization in the $xy$ plane decays exponentially while the magnetization along the $z$ axis relaxes back to its initial value, as expected. Again there is an absorptive and a dispersive component in the co-rotating $xy$ plane, each yielding an exponentially decaying signal.
The quantum mechanical description
As we can see above, the description of relaxation matches what we would expect and what we will see later in experimental data - but it fails to describe some well known effects like Rabi oscillations (which we would expect for any quantum mechanical two level system), and it does not tell us how the system really behaves on a microscopic scale but delivers only a macroscopic phenomenological description of the effects. To dive further into the microscopic behavior we have to move on to the quantum mechanical description of our spins, starting with a single spin. Let's first recall the notation of spin operators and Pauli matrices that we're going to use to describe spins in $SU(2)$:
A short reminder on spin operators and Pauli matrices
Let’s recall the spin operators in quantum mechanics. They are designed to behave like angular momentum operators. We usually use Pauli matrices as one of the representations of the generators of the
$SU(2)$ Lie group that contains our spin state:
[ \begin{aligned} \hat{\sigma}_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \\ \hat{\sigma}_y &= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \\ \hat{\sigma}_z &= \begin{pmatrix} 1 & 0 \\ 0 &
-1 \end{pmatrix} \end{aligned} ]
Those matrices are designed to satisfy the commutation relations that we expect from an angular momentum operator:
[ \begin{aligned} \lbrack \hat{\sigma}_x, \hat{\sigma}_y \rbrack = 2 i \hat{\sigma}_z \\ \lbrack \hat{\sigma}_y, \hat{\sigma}_z \rbrack = 2 i \hat{\sigma}_x \\ \lbrack \hat{\sigma}_z, \hat{\sigma}_x \rbrack = 2 i \hat{\sigma}_y \end{aligned} ]
The spin operators are then built from the Pauli matrices:
[ \begin{aligned} \hat{S}_x &= \frac{\hbar}{2} \hat{\sigma}_x \\ \hat{S}_y &= \frac{\hbar}{2} \hat{\sigma}_y \\ \hat{S}_z &= \frac{\hbar}{2} \hat{\sigma}_z \end{aligned} ]
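These commutation relations are quick to verify numerically; the following plain-Python check (2x2 complex matrices as nested lists, no external libraries) confirms all three cyclic commutators exactly:

```python
def matmul(A, B):
    """2x2 complex matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

# [sigma_a, sigma_b] = 2i sigma_c for all cyclic permutations of (x, y, z)
for a, b, c in ((sx, sy, sz), (sy, sz, sx), (sz, sx, sy)):
    lhs = commutator(a, b)
    assert all(lhs[i][j] == 2j * c[i][j] for i in range(2) for j in range(2))
print("commutation relations verified")
```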
Our Hamiltonian and its Eigenvalues and Eigenvectors
Now that we have recalled how spin operators work let's set up a Hamiltonian for our system. This will model the coupling between a single spin and an external magnetic field:
[ \begin{aligned} \hat{H}_0 &= -\gamma \vec{B}_0 \cdot \hat{\vec{S}} \\ &= -\gamma B_0 \hat{S}_z \end{aligned} ]
As one can see we have assumed again that the external bias field $B_0$ is oriented along the $z$ axis - so the inner product of $\vec{B}_0$ and the spin operator $\hat{\vec{S}}$ only yields a $z$ component.
The eigenstates $\mid \uparrow >$ and $\mid \downarrow >$ of the $\hat{S}_z$ operator can be derived by solving the eigenvalue problem of the Hamiltonian:
[ \begin{aligned} \hat{H_0} &= -\gamma B_0 \hat{S}_z \\ &= -\frac{\hbar \gamma B_0}{2} \hat{\sigma}_z \\ &= -\frac{\hbar \gamma B_0}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \end{aligned} ]
Let's solve for the eigenvalues and eigenvectors of this operator:
[ \begin{aligned} \hat{H}_0 \mid \psi > &= E \mid \psi > \\ -\frac{\hbar \gamma B_0}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} &= E \begin{pmatrix} a \\ b
\end{pmatrix} \end{aligned} ]
Using the method of characteristic polynomials we can determine the eigenvalues:
[ A = \begin{pmatrix} -\frac{\hbar \gamma B_0}{2} & 0 \\ 0 & \frac{\hbar \gamma B_0}{2} \end{pmatrix} ] [ \begin{aligned} \det(\lambda I - A) &= 0 \\ \det\left(\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} - \begin{pmatrix} -\frac{\hbar \gamma B_0}{2} & 0 \\ 0 & \frac{\hbar \gamma B_0}{2} \end{pmatrix}\right) &= 0 \\ \det \begin{pmatrix} \lambda + \frac{\hbar \gamma B_0}{2} & 0 \\ 0 & \lambda - \frac{\hbar \gamma B_0}{2} \end{pmatrix} &= 0 \\ \left(\lambda + \frac{\hbar \gamma B_0}{2}\right) \left(\lambda - \frac{\hbar \gamma B_0}{2}\right) &= 0 \\ \lambda^2 - \left( \frac{\hbar \gamma B_0}{2} \right)^2 &= 0 \\ \lambda^2 &= \left( \frac{\hbar \gamma B_0}{2} \right)^2 \\ \lambda &= \pm \frac{\hbar \gamma B_0}{2} \end{aligned} ]
Inserting into the eigenvalue equation above yields the two eigenvectors:
[ \begin{aligned} E_1 = -\frac{\hbar \gamma B_0}{2} &\to \vec{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \mid \uparrow > \\ E_2 = \frac{\hbar \gamma B_0}{2} &\to \vec{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \mid \downarrow > \\ \end{aligned} ]
Thus we have seen that the eigenvalues and eigenstates of the Hamiltonian $\hat{H}_0$ (and hence of the spin operator $\hat{S}_z$) are:
[ \begin{aligned} \hat{H_0} \mid \uparrow > &= - \frac{\hbar \gamma B_0}{2} \mid \uparrow > \\ \hat{H_0} \mid \downarrow > &= \frac{\hbar \gamma B_0}{2} \mid \downarrow > \\ \end{aligned} ]
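The same eigenvalues drop out of a generic 2x2 characteristic-polynomial solver; a tiny sketch in scaled units (the name `E`, standing in for $\frac{\hbar \gamma B_0}{2}$, and its value are placeholders of my own):

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of a 2x2 matrix [[a, b], [c, d]] via the characteristic
    polynomial lambda^2 - (a + d) lambda + (a d - b c) = 0."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

E = 0.5  # placeholder standing in for hbar * gamma * B_0 / 2
lam1, lam2 = eig2(-E, 0.0, 0.0, E)
print(lam1, lam2)  # 0.5 -0.5, i.e. +/- hbar gamma B_0 / 2
```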
Density matrices and their time evolution
Since we are going to use density matrices to model our system later on, let's quickly recall the formalism used. A density matrix describes all states of a system and their population probabilities on the diagonal - as well as the coherences between states in the off diagonal terms.
[ \hat{\rho}(t) = \sum_{i} p_i \mid \psi_i(t) > < \psi_i(t) \mid ]
For a single pure state the density matrix thus would be very simple:
[ \hat{\rho}_{pure} = \mid \psi(t) > < \psi(t) \mid ]
The time evolution of a density matrix is described by the Liouville-von Neumann equation. This equation can be derived from the time dependent Schrödinger equation. First we reorder the Schrödinger equation:
[ \begin{aligned} i \hbar \frac{\partial}{\partial t} \mid \psi(t) > &= \hat{H}_0 \mid \psi(t) > \\ \to \frac{\partial \mid \psi(t) >}{\partial t} &= -\frac{i}{\hbar} \hat{H}_0 \mid \psi(t) > \end
{aligned} ]
Then we insert the definition of the density matrix into the time derivative of the density matrix:
[ \begin{aligned} \frac{\partial \hat{\rho}}{\partial t} &= \frac{\partial}{\partial t} \mid \psi(t) > < \psi(t) \mid \\ &= \frac{\partial \mid \psi(t) >}{\partial t} < \psi(t) \mid + \mid \psi(t) > \frac{\partial < \psi(t) \mid}{\partial t} \\ &= -\frac{i}{\hbar} \hat{H}_0 \mid \psi(t) > < \psi(t) \mid + \frac{i}{\hbar} \mid \psi(t) > < \psi(t) \mid \hat{H}_0 \\ \frac{\partial \hat{\rho}}{\partial t} &= -\frac{i}{\hbar} \lbrack \hat{H}_0, \hat{\rho} \rbrack \end{aligned} ]
As one can see the dynamics of the density matrix is fully described by the commutator between the Hamiltonian and the density matrix - as usual the quantum mechanical description arises from the
fact that the operators do not commute.
Initial state of our system
In thermal equilibrium the initial density matrix can be constructed by accounting for the population probability of each state, which is given by the Boltzmann distribution:
[ \hat{\rho}_{eq} = \frac{1}{ Tr(e^{-\frac{\hat{H}_0}{k_B T}}) } e^{-\frac{\hat{H}_0}{k_B T}} ]
The trace in the denominator provides the normalization of the probabilities of all states. Recall the trace is the sum of all diagonal elements:
[ \begin{aligned} Tr(\hat{A}) &= \sum_i < \psi_i \mid \hat{A} \mid \psi_i > \\ \to Tr(\hat{\rho}_{eq}) &= 1 \end{aligned} ]
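Since $\hat{H}_0$ is diagonal in the up/down basis, the matrix exponential reduces to exponentiating the diagonal entries. A small sketch in scaled units (the names `E`, standing in for $\frac{\hbar \gamma B_0}{2}$, and `beta`, standing in for $\frac{1}{k_B T}$, and their values are placeholders):

```python
import math

E, beta = 1.0, 0.3
# the up state has energy -E, the down state +E, so the Boltzmann
# weights of the diagonal of exp(-H0 / (k_B T)) are:
weights = [math.exp(beta * E), math.exp(-beta * E)]
Z = weights[0] + weights[1]  # the normalizing trace
rho_eq = [[weights[0] / Z, 0.0], [0.0, weights[1] / Z]]

print(rho_eq[0][0], rho_eq[1][1])  # the lower-energy up state is more populated
```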
Adding a driving field
In contrast to our treatment of the phenomenological Bloch equations we are adding the drive term before taking a look at the solution of the equation. The driving field is modeled in a classical way
(i.e. we’re not modeling the exchange of photons but just assume the presence of the field that is coupling to the spin system):
[ \vec{B}_1(t) = 2 B_1 \cos(\omega_{RF} t) \hat{x} ]
We've chosen to orient the drive field in the $\hat{x}$ direction - this is an arbitrary choice in the $xy$ plane that does not limit the validity of our model (i.e. we can always rotate the coordinate system around the $z$ axis so our model fits any driving field in the $xy$ plane - assuming dipole radiation).
The full Hamiltonian for our spin system now is
[ \hat{H}(t) = -\gamma B_0 \hat{S}_z - 2 \gamma B_1 \cos(\omega_{RF} t) \hat{S}_x ]
Moving to the rotating frame, rotating frame approximation
As in the classical description, solving the actual time dependent equations gets simpler when we move into a co-rotating frame. To model our rotation we utilize a unitary transformation (recall that unitarity preserves the norm of state vectors, so normalization is conserved).
[ \hat{U}(t) = e^{i \omega t \hat{S}_z / \hbar} ]
This yields the Hamiltonian in the co-rotating frame:
[ \begin{aligned} \hat{H}_{rot} &= \hat{U}^\dagger(t) \hat{H}(t) \hat{U}(t) - i \hbar \hat{U}^\dagger(t) \frac{\partial \hat{U}(t)}{\partial t} \\ &= e^{-i \omega t \hat{S}_z / \hbar} \left( -\gamma B_0 \hat{S}_z - 2 \gamma B_1 \cos(\omega_{RF} t) \hat{S}_x \right) e^{i \omega t \hat{S}_z / \hbar} - i \hbar \hat{U}^\dagger(t) \left(\frac{i \omega \hat{S}_z}{\hbar} \right) \hat{U}(t) \\ &= -\gamma B_0 \hat{S}_z - 2 \gamma B_1 \cos(\omega_{RF} t) \left( \cos(\omega t) \hat{S}_x + \sin(\omega t) \hat{S}_y \right) + \omega \hat{S}_z \\ &= (\omega - \gamma B_0) \hat{S}_z - \gamma B_1 \left( \cos((\omega_{RF} + \omega) t) + \cos((\omega_{RF} - \omega) t) \right) \hat{S}_x - \gamma B_1 \left( \sin((\omega + \omega_{RF}) t) + \sin((\omega - \omega_{RF}) t) \right) \hat{S}_y \end{aligned} ]
Now we perform the rotating wave approximation. We assume that the terms oscillating at $\omega_{RF} + \omega$ are very fast compared to our system's dynamics, so they average out over time. We also choose the rotating-frame frequency equal to the drive frequency, $\omega = \omega_{RF}$, which makes the remaining drive terms time independent. Labeling the detuning of the Larmor frequency from the drive $\Delta \omega = \gamma B_0 - \omega$, our Hamiltonian in the rotating frame simplifies to:
[ \hat{H}_{rot} = -\Delta \omega \hat{S}_z - \gamma B_1 \hat{S}_x ]
On resonance solutions and Rabi oscillations
To get a first impression of the equation and its implications, note that when driving the system exactly on resonance, i.e. $\omega_{RF} = \gamma B_0$, the detuning vanishes, $\Delta \omega = 0$; we keep $\Delta \omega$ general in the following:
[ \hat{H}_{rot} = -\Delta \omega \hat{S}_z - \gamma B_1 \hat{S}_x ]
Now we can take a look at the time dependent Schrödinger equation:
[ \begin{aligned} i \hbar \frac{\partial}{\partial t} \mid \psi_{rot}(t) > &= \hat{H}_{rot}(t) \mid \psi_{rot}(t) > \\ &= \left(- \frac{\hbar \Delta \omega}{2} \hat{\sigma}_z - \frac{\hbar \gamma
B_1}{2} \hat{\sigma}_x \right) \mid \psi_{rot}(t) > \end{aligned} ]
Let's assume our state can only be composed of a mixture of up and down states:
[ \mid \psi_{rot} (t) > = a(t) \mid \uparrow > + b(t) \mid \downarrow > ]
We can insert this into our differential equation:
[ \begin{aligned} i \hbar \frac{\partial}{\partial t} \left( a(t) \mid \uparrow > + b(t) \mid \downarrow > \right) &= \left(-\frac{\hbar \Delta \omega}{2} \hat{\sigma}_z - \frac{\hbar \gamma B_1}{2}
\hat{\sigma}_x \right) \left( a(t) \mid \uparrow > + b(t) \mid \downarrow > \right) \\ &= \left(- \frac{\hbar \Delta \omega}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} - \frac{\hbar \gamma B_1}
{2} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \right) \left( a(t) \mid \uparrow > + b(t) \mid \downarrow > \right) \\ &= \begin{pmatrix} -\frac{\hbar \Delta \omega}{2} & -\frac{\hbar \gamma B_1}
{2} \\ -\frac{\hbar \gamma B_1}{2} & \frac{\hbar \Delta \omega}{2} \end{pmatrix} \left( a(t) \mid \uparrow > + b(t) \mid \downarrow > \right) \end{aligned} ]
Writing everything in vector form:
[ \begin{aligned} i \hbar \frac{\partial}{\partial t} \begin{pmatrix} a(t) \\ b(t) \end{pmatrix} &= \begin{pmatrix} - \frac{\hbar \Delta \omega}{2} & -\frac{\hbar \gamma B_1}{2} \\ -\frac{\hbar \gamma B_1}{2} & \frac{\hbar \Delta \omega}{2} \end{pmatrix} \begin{pmatrix} a(t) \\ b(t) \end{pmatrix} \\ &= \begin{pmatrix} -\frac{\hbar \Delta \omega}{2} a(t) - \frac{\hbar \gamma B_1}{2} b(t) \\ -\frac{\hbar \gamma B_1}{2} a(t) + \frac{\hbar \Delta \omega}{2} b(t) \end{pmatrix} \end{aligned} ]
This yields two coupled differential equations:
[ \begin{aligned} i \hbar \frac{\partial}{\partial t} a(t) &= -\frac{\hbar \Delta \omega}{2} a(t) - \frac{\hbar \gamma B_1}{2} b(t) \\ i \hbar \frac{\partial}{\partial t} b(t) &= -\frac{\hbar \gamma
B_1}{2} a(t) + \frac{\hbar \Delta \omega}{2} b(t) \end{aligned} ]
These can be solved by first dividing by the prefactor $i \hbar$:
[ \begin{aligned} \frac{\partial}{\partial t} a(t) &= i \frac{\Delta \omega}{2} a(t) + i \frac{\gamma B_1}{2} b(t) \\ \frac{\partial}{\partial t} b(t) &= i \frac{\gamma B_1}{2} a(t) - i \frac{\Delta
\omega}{2} b(t) \end{aligned} ]
Now one can take the second derivative of the first equation and insert the second:
[ \begin{aligned} \frac{\partial^2}{\partial t^2} a(t) &= i \frac{\Delta \omega}{2} \frac{\partial a(t)}{\partial t} + i \frac{\gamma B_1}{2} \frac{\partial b(t)}{\partial t} \\ &= i \frac{\Delta \omega}{2} \left(i \frac{\Delta \omega}{2} a(t) + i \frac{\gamma B_1}{2} b(t) \right) + i \frac{\gamma B_1}{2} \left(i \frac{\gamma B_1}{2} a(t) - i \frac{\Delta \omega}{2} b(t) \right) \\ &= \left(-\frac{\Delta \omega^2}{4} - \frac{\gamma^2 B_1^2}{4} \right) a(t) + \underbrace{\left( -\frac{\Delta \omega \gamma B_1}{4} + \frac{\Delta \omega \gamma B_1}{4} \right)}_{0} b(t) \\ &= \left(-\frac{\Delta \omega^2}{4} - \frac{\gamma^2 B_1^2}{4} \right) a(t) \end{aligned} ]
This now simplifies to a very simple second order differential equation:
[ \begin{aligned} \frac{\partial^2}{\partial t^2} a(t) + \underbrace{\left(\frac{\Delta \omega^2}{4} + \frac{\gamma^2 B_1^2}{4} \right)}_{\frac{\Omega^2}{4}} a(t) &= 0 \\ \frac{\partial^2}{\partial t
^2} a(t) + \frac{\Omega^2}{4} a(t) &= 0 \end{aligned} ]
Here we have introduced the term $\Omega = \sqrt{\Delta \omega^2 + \gamma^2 B_1^2}$, which will turn out to be the Rabi frequency very soon.
Using the Ansatz $a(t) = c_1 \sin(c_2 t) + c_3 \cos(c_2 t)$ we arrive at
[ \begin{aligned} a(t) &= c_1 \sin(c_2 t) + c_3 \cos(c_2 t) \\ \partial_t a(t) &= c_1 c_2 \cos(c_2 t) - c_3 c_2 \sin(c_2 t) \\ \partial_t^2 a(t) &= -c_1 c_2^2 \sin(c_2 t) - c_3 c_2^2 \cos(c_2 t) \\ &= -c_2^2 a(t) \end{aligned} ] [ \begin{aligned} \to -c_2^2 a(t) + \frac{\Omega^2}{4} a(t) &= 0 \\ \to c_2^2 &= \frac{\Omega^2}{4} \\ \to c_2 &= \pm \frac{\Omega}{2} \\ \to a(t) &= c_1 \sin\left(\frac{\Omega}{2} t\right) + c_3 \cos\left(\frac{\Omega}{2} t\right) \end{aligned} ]
Calculating $b(t)$ in an analogous way results in the similar expression
[ b(t) = c_4 \sin\left(\frac{\Omega}{2} t\right) + c_5 \cos\left(\frac{\Omega}{2} t\right) ]
The prefactors $c_1, c_3, c_4$ and $c_5$ have to be determined by initial conditions. As we can see the states are oscillating between up and down state.
The probability of locating a system in states $\mid \uparrow >$ and $\mid \downarrow >$ can be determined by taking the absolute values:
[ \begin{aligned} P(\mid \uparrow >) &= \mid a(t) \mid^2 \\ P(\mid \downarrow >) &= \mid b(t) \mid^2 \end{aligned} ]
Let's assume we start at time $t=0$ fully in state $\mid \uparrow >$. Then we get the following constants (written for resonance, $\Delta \omega = 0$; in general the amplitude of $b(t)$ carries a factor $\frac{\gamma B_1}{\Omega}$):
[ \begin{aligned} \mid \psi(t=0) > &= 1 \cdot \mid \uparrow > + 0 \cdot \mid \downarrow > \\ \to a(t = 0) = 1 &\to a(t) = \cos\left(\frac{\Omega}{2}t\right) \\ \to b(t = 0) = 0 &\to b(t) = i \sin\left(\frac{\Omega}{2}t\right) \end{aligned} ]
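This on-resonance solution can be checked by integrating the two coupled amplitude equations directly. The sketch below uses only the standard library; the name `w1`, standing in for $\gamma B_1$, its value, and the step size are ad-hoc placeholder choices:

```python
import math

w1 = 2.0  # placeholder standing in for gamma * B_1; on resonance Omega = w1

def rhs(a, b):
    """The two coupled amplitude equations above with Delta omega = 0."""
    return 1j * w1 / 2 * b, 1j * w1 / 2 * a

a, b = 1.0 + 0j, 0.0 + 0j  # start fully in the up state
h, steps = 1e-4, 20000     # integrate up to t = 2 (arbitrary units)
for _ in range(steps):     # classical RK4 on the complex amplitudes
    k1 = rhs(a, b)
    k2 = rhs(a + h / 2 * k1[0], b + h / 2 * k1[1])
    k3 = rhs(a + h / 2 * k2[0], b + h / 2 * k2[1])
    k4 = rhs(a + h * k3[0], b + h * k3[1])
    a += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    b += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

t = h * steps
# P(up) follows cos^2(Omega t / 2) and normalization is preserved
assert abs(abs(a) ** 2 - math.cos(w1 * t / 2) ** 2) < 1e-8
assert abs(abs(b) ** 2 - math.sin(w1 * t / 2) ** 2) < 1e-8
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1.0) < 1e-8
print("Rabi oscillation confirmed, Omega =", w1)
```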
As for the electric dipole this shows oscillations between excited and ground state due to coherent exchange of energy between the driving field $B_1$ and the spin system.
The oscillation between ground and excited state and back is called Rabi oscillation; the frequency $\Omega$ is the Rabi frequency.
Again one can define $\frac{\pi}{2}$ and $\pi$ pulses, which are the extrema - with a $\pi$ pulse the system is transferred fully from the ground to the excited state, with a $\frac{\pi}{2}$ pulse the system is kicked into the $xy$ plane.
Introducing relaxation processes
Introducing relaxation processes is a little bit harder. Up until now we modeled unitary dynamics (which preserves the norm of the state). To model the dynamics using the density matrix formalism one uses the commutator with the Hamiltonian (also called the von Neumann equation):
[ \begin{aligned} \frac{\partial \hat{\rho}}{\partial t} = -\frac{i}{\hbar} \lbrack \hat{H}_0, \hat{\rho} \rbrack \end{aligned} ]
This equation works well for closed systems (due to its unitarity) - when coupling the system to a bath or reservoir one needs to introduce some non-unitary terms. This is usually done in the context of the Lindblad master equation (see the Gorini-Kossakowski-Sudarshan-Lindblad theorem):
[ \begin{aligned} \frac{\partial \hat{\rho}(t)}{\partial t} = -\frac{i}{\hbar} \lbrack \hat{H}, \hat{\rho}(t) \rbrack + \sum_i \left( \hat{L}_i \hat{\rho}(t) \hat{L}_i^\dagger - \frac{1}{2} \lbrace \hat{L}_i^\dagger \hat{L}_i, \hat{\rho}(t) \rbrace \right) \end{aligned} ]
The Lindblad operators $\hat{L}_i$ describe the interaction with the environment.
Longitudinal relaxation T1
To introduce longitudinal relaxation one has to account for the relaxation of the $z$ component back into equilibrium state by dissipation of energy to the environment (the bath). This is done by
decaying from the excited state to the ground state with a decay rate $\frac{1}{T_1}$. This can be modeled with the ladder operator $\hat{\sigma_{-}}$:
[ \begin{aligned} \hat{\sigma}_{-} &= \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \\ \hat{L}_1 &= \sqrt{\frac{1}{T_1}} \hat{\sigma}_{-} \end{aligned} ]
Transverse relaxation T2
The introduction of transverse relaxation works similarly - this is the loss of coherence in the $xy$ plane (i.e. the decay of the off diagonal elements of the density matrix - dephasing).
[ \begin{aligned} \hat{L}_2 &= \sqrt{\frac{1}{2 T_2}} \hat{\sigma}_z \end{aligned} ]
This is dephasing without a change in total energy.
Complete equation including relaxation
Inserting into the master equation yields
[ \frac{\partial \hat{\rho}}{\partial t} = -\frac{i}{\hbar} \lbrack \hat{H}, \hat{\rho} \rbrack + \frac{1}{T_1} \left(\hat{\sigma}_{-} \hat{\rho} \hat{\sigma}_{+} - \frac{1}{2} \lbrace \hat{\sigma}_
{+} \hat{\sigma}_{-}, \hat{\rho} \rbrace \right) + \frac{1}{2 T_2} \left( \hat{\sigma}_z \hat{\rho} \hat{\sigma}_z - \hat{\rho} \right) ]
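To see that this master equation preserves the trace and damps the coherences, one can iterate it directly for a single spin. A minimal Euler-step sketch in plain Python, with the Hamiltonian set to zero to isolate the relaxation terms; the $T_1$, $T_2$ values and the step size are arbitrary placeholder choices:

```python
def mm(A, B):
    """2x2 matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def scale(c, M):
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

sm = [[0, 0], [1, 0]]  # sigma_minus (text convention: first -> second state)
sp = [[0, 1], [0, 0]]  # sigma_plus
sz = [[1, 0], [0, -1]]
T1, T2 = 1.0, 0.5      # placeholder values in arbitrary units

def drho(rho):
    """Relaxation part of the master equation above (Hamiltonian omitted)."""
    spsm = mm(sp, sm)
    d1 = madd(mm(mm(sm, rho), sp),
              scale(-0.5, madd(mm(spsm, rho), mm(rho, spsm))))
    d2 = madd(mm(mm(sz, rho), sz), scale(-1.0, rho))
    return madd(scale(1.0 / T1, d1), scale(1.0 / (2.0 * T2), d2))

rho = [[0.5, 0.5], [0.5, 0.5]]  # equal superposition: finite coherences
h = 1e-3
for _ in range(20000):          # simple Euler steps up to t = 20 >> T1, T2
    rho = madd(rho, scale(h, drho(rho)))

trace = rho[0][0] + rho[1][1]
print(trace, rho[1][1], abs(rho[0][1]))
```

After a time long compared to $T_1$ and $T_2$ the population has fully decayed into the second basis state and the off diagonal coherences have vanished, while the trace stays at one throughout.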
Expectation values of Magnetization vectors
To determine the magnetization of the whole system - this is the quantity we also described phenomenologically before - we apply our spin operators and take the trace:
[ \begin{aligned} M_x(t) &= Tr\left(\hat{\rho}(t) \hat{S}_x\right) = Tr\left(\hat{\rho}(t) \frac{\hbar}{2} \hat{\sigma}_x\right) \\ M_y(t) &= Tr\left(\hat{\rho}(t) \hat{S}_y\right) = Tr\left(\hat{\rho}(t) \frac{\hbar}{2} \hat{\sigma}_y\right) \\ M_z(t) &= Tr\left(\hat{\rho}(t) \hat{S}_z\right) = Tr\left(\hat{\rho}(t) \frac{\hbar}{2} \hat{\sigma}_z\right) \\ \end{aligned} ]
Now let’s take a look at the time evolution of those operators. Recall that
[ \begin{aligned} \frac{\partial}{\partial t} \langle \hat{O} \rangle &= \frac{\partial}{\partial t} Tr\left(\hat{\rho}(t) \hat{O} \right) \\ &= Tr \left(\frac{\partial \hat{\rho}}{\partial t} \hat
{O} \right) \end{aligned} ]
This yields the time derivative of the expectation value of an arbitrary (time independent) operator under the Lindblad equation:
[ \frac{\partial}{\partial t} \langle \hat{O} \rangle = Tr \left( \left( -\frac{i}{\hbar} \lbrack \hat{H}_{rot}, \hat{\rho} \rbrack + \sum_k \left(\hat{L}_k \hat{\rho} \hat{L}_k^\dagger - \frac{1}{2} \lbrace \hat{L}_k^\dagger \hat{L}_k, \hat{\rho} \rbrace \right) \right) \hat{O} \right) ]
As one can see the equation consists - as expected - of two parts:
• A unitary part covering the system without relaxation
• A non-unitary term, described by the Lindblad operators, covering the relaxation processes
Unitary part (Hamiltonian)
[ \begin{aligned} Tr \left(-\frac{i}{\hbar} \lbrack \hat{H}_{rot}, \hat{\rho} \rbrack \hat{O} \right) &= -\frac{i}{\hbar} Tr \left( \lbrack \hat{H}_{rot}, \hat{\rho} \rbrack \hat{O} \right) \\ \to \frac{\partial}{\partial t} \langle \hat{O} \rangle &= \frac{i}{\hbar} \langle \lbrack \hat{H}_{rot}, \hat{O} \rbrack \rangle \end{aligned} ]
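The trace manipulation behind this step, $Tr(\lbrack \hat{H}, \hat{\rho} \rbrack \hat{O}) = Tr(\hat{\rho} \lbrack \hat{O}, \hat{H} \rbrack)$, follows from the cyclic invariance of the trace and is easy to confirm with concrete 2x2 matrices (plain Python, $\hbar = 1$; the matrix entries below are arbitrary test values of my own):

```python
def mm(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    AB, BA = mm(A, B), mm(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def tr(M):
    return M[0][0] + M[1][1]

# arbitrary hermitian H and O and a valid density matrix rho
H = [[0.3, 0.1 - 0.2j], [0.1 + 0.2j, -0.3]]
O = [[1.0, 0.5j], [-0.5j, -1.0]]
rho = [[0.7, 0.1 + 0.1j], [0.1 - 0.1j, 0.3]]

# d<O>/dt computed from Tr(drho/dt O) ...
lhs = tr(mm([[-1j * c for c in row] for row in comm(H, rho)], O))
# ... equals (i) Tr(rho [H, O]) = i <[H, O]>
rhs = 1j * tr(mm(rho, comm(H, O)))
print(abs(lhs - rhs))  # numerically zero
```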
We can now apply this to our spin operators:
For the x component:
[ \begin{aligned} M_x(t) &= Tr \left( \hat{\rho}(t) \hat{S}_x \right) \\ \lbrack \hat{H}_{rot}, \hat{S}_x \rbrack &= -\frac{\hbar \Delta \omega}{2} \lbrack \hat{\sigma}_z, \hat{\sigma}_x \rbrack \\ &= i \hbar \frac{\Delta \omega}{2} \hat{\sigma}_y \\ \to \frac{\partial M_x(t)}{\partial t} &= -\Delta \omega M_y(t) \end{aligned} ]
For the y component:
[ \begin{aligned} M_y(t) &= Tr \left( \hat{\rho}(t) \hat{S}_y \right) \\ \lbrack \hat{H}_{rot}, \hat{S}_y \rbrack &= - \frac{\hbar \gamma B_1}{2} \lbrack \hat{\sigma}_x, \hat{\sigma}_y \rbrack \\ &= i \hbar \frac{\gamma B_1}{2} \hat{\sigma}_z \\ \to \frac{\partial M_y(t)}{\partial t} &= \Delta \omega M_x(t) - \gamma B_1 M_z(t) \end{aligned} ]
And for the z component:
[ \begin{aligned} M_z(t) &= Tr \left( \hat{\rho}(t) \hat{S}_z \right) \\ \lbrack \hat{H}_{rot}, \hat{S}_z \rbrack &= - \frac{\hbar \gamma B_1}{2} \lbrack \hat{\sigma}_x, \hat{\sigma}_z \rbrack \\ &= i \hbar \frac{\gamma B_1}{2} \hat{\sigma}_y \\ \to \frac{\partial M_z(t)}{\partial t} &= \gamma B_1 M_y(t) \end{aligned} ]
This is consistent with the phenomenological model in the rotating frame (note that driving on resonance has been assumed - this is the reason why the driving field only acts on the $y$ and $z$ components' derivatives).
Relaxation (Lindblad terms)
Longitudinal relaxation (T1)
Recall that our operator is given by
[ \begin{aligned} \hat{L}_1 &= \sqrt{\frac{1}{T_1}} \hat{\sigma}_{-} \\ &= \sqrt{\frac{1}{T_1}} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \\ \to \frac{\partial \hat{\rho}}{\partial t} \mid_{T_1} &
= \frac{1}{T_1} \left( \hat{\sigma}_{-} \hat{\rho} \hat{\sigma}_{+} - \frac{1}{2} \lbrace \hat{\sigma}_{+} \hat{\sigma}_{-}, \hat{\rho} \rbrace \right) \end{aligned} ]
As we can see this influences the $z$ component via the ladder operators $\hat{\sigma}_{+}$ and $\hat{\sigma}_{-}$, which of course indirectly influences the absorptive and dispersive components $M_x$ and $M_y$ via the unitary dynamics.
[ \begin{aligned} \frac{\partial M_z(t)}{\partial t} \mid_{T_1} &= - \frac{M_z(t) - M_0}{T_1} \\ \frac{\partial M_x(t)}{\partial t} \mid_{T_1} &= 0 \\ \frac{\partial M_y(t)}{\partial t} \mid_{T_1} &=
0 \\ \end{aligned} ]
Transverse relaxation (T2)
Recall that the operator for transverse relaxation is given by
[ \begin{aligned} \hat{L}_2 &= \sqrt{\frac{1}{2 T_2}} \hat{\sigma}_z \\ &= \sqrt{\frac{1}{2 T_2}} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\ \to \frac{\partial \hat{\rho}}{\partial t} \mid_
{T_2} &= \frac{1}{2 T_2} \left( \hat{\sigma}_z \hat{\rho} \hat{\sigma}_z - \hat{\rho} \right) \end{aligned} ]
As one can see this does not affect the $z$ component directly - but the $x$ and $y$ components:
[ \begin{aligned} \frac{\partial M_x(t)}{\partial t} \mid_{T_2} &= -\frac{M_x(t)}{T_2} \\ \frac{\partial M_y(t)}{\partial t} \mid_{T_2} &= -\frac{M_y(t)}{T_2} \end{aligned} ]
Unitary and non-unitary contributions
Assembling the equations for all three axes yields
[ \begin{aligned} \frac{\partial M_x(t)}{\partial t} &= -\Delta \omega M_y(t) - \frac{1}{T_2} M_x(t) \\ \frac{\partial M_y(t)}{\partial t} &= \Delta \omega M_x(t) - \gamma B_1 M_z(t) - \frac{1}{T_2}
M_y(t) \\ \frac{\partial M_z(t)}{\partial t} &= \gamma B_1 M_y(t) - \frac{M_z(t) - M_0}{T_1} \end{aligned} ]
As one can see, we have just recovered the Bloch equations, including relaxation terms, for an on-resonance system from a microscopic theory of nuclear magnetic resonance (NMR) or electron paramagnetic resonance (EPR).
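The assembled equations can also be integrated numerically. The sketch below (plain Python with a hand-rolled RK4 step; all parameter values are arbitrary illustrative choices) reproduces the expected exponential behavior in the undriven case $B_1 = 0$, $\Delta\omega = 0$: $M_x$ decays as $e^{-t/T_2}$ and $M_z$ recovers toward $M_0$ with time constant $T_1$:

```python
# Minimal RK4 integration of the assembled Bloch equations (on resonance).
# All parameter values below are arbitrary illustrative choices.

def bloch_rhs(m, d_omega, gamma_b1, t1, t2, m0):
    mx, my, mz = m
    dmx = -d_omega * my - mx / t2
    dmy = d_omega * mx - gamma_b1 * mz - my / t2
    dmz = gamma_b1 * my - (mz - m0) / t1
    return (dmx, dmy, dmz)

def rk4_step(m, dt, *args):
    def shift(base, k, s):
        return tuple(b + s * ki for b, ki in zip(base, k))
    k1 = bloch_rhs(m, *args)
    k2 = bloch_rhs(shift(m, k1, dt / 2), *args)
    k3 = bloch_rhs(shift(m, k2, dt / 2), *args)
    k4 = bloch_rhs(shift(m, k3, dt), *args)
    return tuple(mi + dt / 6 * (a + 2 * b + 2 * c + d)
                 for mi, a, b, c, d in zip(m, k1, k2, k3, k4))

def simulate(steps, dt, d_omega=0.0, gamma_b1=0.0,
             t1=1.0, t2=0.5, m0=1.0, m_init=(1.0, 0.0, 0.0)):
    m = m_init
    for _ in range(steps):
        m = rk4_step(m, dt, d_omega, gamma_b1, t1, t2, m0)
    return m

# Undriven case: Mx should decay as exp(-t/T2), Mz recover as M0*(1 - exp(-t/T1)).
mx, my, mz = simulate(steps=1000, dt=1e-3)  # integrate up to t = 1
```

Switching on $\gamma B_1$ then produces the familiar damped nutation between $M_y$ and $M_z$.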
Linewidth and excited state lifetime
Another interesting property is the expected (minimum) width of the signal as well as the expected minimum lifetime of the excited state. For a spin system that is only affected by the longitudinal and transversal relaxation processes, this is mainly limited by the relaxation time $T_2$ (i.e. transversal relaxation). Keep in mind that for practical systems, effects like magnetic field inhomogeneity can quickly yield far larger contributions.
Limits by transversal relaxation
Taking a look at the spectral function of the transversal magnetization (the signal only exists for $t \geq 0$)
[ M_{xy}(t) \propto M_{xy}(0) e^{-\frac{t}{T_2}} ] [ \begin{aligned} \mathfrak{F}\lbrace M_{xy}(t) \rbrace &= \int_{0}^{\infty} M_{xy}(t) e^{-i 2 \pi \nu t} dt \\ &= \int_{0}^{\infty} M_{xy}(0) e^{-\frac{t}{T_2}} e^{-i 2 \pi \nu t} dt \\ &= \int_{0}^{\infty} M_{xy}(0) e^{-\left(\frac{1}{T_2} + i 2 \pi \nu \right)t} dt \end{aligned} ]
and using the standard integral
[ \int_{0}^{\infty} e^{-\alpha t} dt = \frac{1}{\alpha} ]
we arrive at
[ \begin{aligned} \mathfrak{F}\lbrace M_{xy}(t) \rbrace &= M_{xy}(0) \frac{1}{\frac{1}{T_2} + i 2 \pi \nu} \\ &= M_{xy}(0) \frac{1}{\frac{1 + i 2 \pi \nu T_2}{T_2}} \\ &= \frac{T_2 M_{xy}(0)}{1 + i 2 \pi \nu T_2} \\ &= T_2 M_{xy}(0) \frac{1 - i 2 \pi \nu T_2}{1 + (2 \pi \nu T_2)^2} \\ &= \frac{T_2 M_{xy}(0)}{1 + (2 \pi \nu T_2)^2} - i \frac{2 \pi \nu T_2^2 M_{xy}(0)}{1 + (2 \pi \nu T_2)^2} \\ \to \mid \mathfrak{F}\lbrace M_{xy}(t) \rbrace \mid &= \sqrt{ \left( \frac{T_2 M_{xy}(0)}{1 + (2 \pi \nu T_2)^2} \right)^2 + \left( \frac{2 \pi \nu T_2^2 M_{xy}(0)}{1 + (2 \pi \nu T_2)^2} \right)^2 } \\ &= \sqrt{\frac{ (T_2 M_{xy}(0))^2 (1 + (2 \pi \nu T_2)^2) }{(1 + (2 \pi \nu T_2)^2)^2}} \\ &= T_2 M_{xy}(0) \frac{\sqrt{1 + (2 \pi \nu T_2)^2}}{1 + (2 \pi \nu T_2)^2} \\ &= T_2 M_{xy}(0) \frac{1}{\sqrt{1 + (2 \pi \nu T_2)^2}} \\ &= \frac{T_2 M_{xy}(0)}{\sqrt{(2 \pi T_2)^2 \nu^2 + 1}} \\ &= \frac{\frac{1}{T_2} T_2 M_{xy}(0)}{\frac{1}{T_2} \sqrt{(2 \pi T_2)^2 \nu^2 + 1}} \\ &= \frac{M_{xy}(0)}{\sqrt{(2 \pi \nu)^2 + \left(\frac{1}{T_2}\right)^2}} \end{aligned} ]
The last step was performed so that this function can be compared to a typical Lorentzian distribution (Breit-Wigner function); strictly speaking, it is the squared magnitude $\mid \mathfrak{F}\lbrace M_{xy}(t) \rbrace \mid^2$ that takes exactly this form:
[ L(\nu_2) = \frac{A}{(\nu_2 - \nu_0)^2 + \Gamma^2} ]
Comparing factors yields:
• $\nu_0 = 0$
• $\Gamma = \frac{1}{T_2}$
• $\nu_2 = 2 \pi \nu$
Taking a look at the full width at half maximum of the Lorentzian we can determine the natural line width limit by transversal relaxation processes:
[ \begin{aligned} \mathfrak{L}(\nu_{FWHM}) &= \frac{\mathfrak{L}(\nu_0)}{2} = \frac{A}{2 \Gamma^2} \\ \frac{A}{\nu_{FWHM}^2 + \Gamma^2} &= \frac{A}{2 \Gamma^2} \\ \to \nu_{FWHM}^2 + \Gamma^2 &= 2 \Gamma^2 \\ \nu_{FWHM} &= \pm \Gamma = \pm \frac{1}{T_2} \\ \to \Delta \omega &= 2 \mid \nu_{FWHM} \mid = \frac{2}{T_2} \\ \to \Delta \nu &= \frac{\Delta \omega}{2 \pi} = \frac{1}{\pi T_2} \end{aligned} ]
In the last two lines we first obtained the full width in the variable $\nu_2 = 2 \pi \nu$, i.e. in angular frequency, and then converted it to a line width in Hz.
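The line width result is easy to verify numerically: the squared magnitude of the spectrum, $M_{xy}(0)^2 / ((2\pi\nu)^2 + (1/T_2)^2)$, is a Lorentzian in $\nu$ whose value at $\nu = \pm \frac{1}{2 \pi T_2}$ is exactly half the peak value, giving $\Delta \nu = \frac{1}{\pi T_2}$. A minimal check with an arbitrary $T_2$:

```python
# Numeric sanity check (arbitrary illustrative T2 value): the squared magnitude
# of the spectrum is a Lorentzian in nu, and its full width at half maximum
# comes out as 1/(pi*T2).
import math

T2 = 0.37  # arbitrary transversal relaxation time (seconds)
M0 = 1.0   # initial transversal magnetization (arbitrary units)

def power_spectrum(nu):
    # |F{M_xy}(nu)|^2 = M0^2 / ((2*pi*nu)^2 + (1/T2)^2)
    return M0 ** 2 / ((2 * math.pi * nu) ** 2 + (1 / T2) ** 2)

peak = power_spectrum(0.0)
nu_half = 1.0 / (2 * math.pi * T2)  # predicted half-maximum frequency
fwhm = 2 * nu_half                  # predicted line width: 1/(pi*T2)

ratio = power_spectrum(nu_half) / peak  # should be 1/2
```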
Associated lifetime limit by transversal relaxation
The lower lifetime boundary associated with the relaxation time $T_2$ can be derived from the energy-time uncertainty relation (and recalling that $E = h \nu$ and $\hbar = \frac{h}{2 \pi}$):
[ \begin{aligned} \Delta E \Delta t &\geq \frac{\hbar}{2} \\ h \Delta \nu \Delta t &\geq \frac{\hbar}{2} \\ \Delta \nu \Delta t &\geq \frac{1}{4 \pi} \\ \frac{1}{\pi T_2} \tau &\geq \frac{1}{4 \pi} \\ \tau &\geq \frac{T_2}{4} \end{aligned} ]
CHAPTER 11
THE UNIFICATION OF PHYSICS
Stephen Hawking
As was explained in the first chapter, it would be very difficult to construct a complete unified theory of everything in the universe all at one go. So instead we have made progress by finding
partial theories that describe a limited range of happenings and by neglecting other effects or approximating them by certain numbers. (Chemistry, for example, allows us to calculate the interactions
of atoms, without knowing the internal structure of an atom’s nucleus.) Ultimately, however, one would hope to find a complete, consistent, unified theory that would include all these partial
theories as approximations, and that did not need to be adjusted to fit the facts by picking the values of certain arbitrary numbers in the theory. The quest for such a theory is known as “the
unification of physics.” Einstein spent most of his later years unsuccessfully searching for a unified theory, but the time was not ripe: there were partial theories for gravity and the
electromagnetic force, but very little was known about the nuclear forces. Moreover, Einstein refused to believe in the reality of quantum mechanics, despite the important role he had played in its
development. Yet it seems that the uncertainty principle is a fundamental feature of the universe we live in. A successful unified theory must, therefore, necessarily incorporate this principle.
As I shall describe, the prospects for finding such a theory seem to be much better now because we know so much more about the universe. But we must beware of overconfidence – we have had false dawns
before! At the beginning of this century, for example, it was thought that everything could be explained in terms of the properties of continuous matter, such as elasticity and heat conduction. The
discovery of atomic structure and the uncertainty principle put an emphatic end to that. Then again, in 1928, physicist and Nobel Prize winner Max Born told a group of visitors to Göttingen
University, “Physics, as we know it, will be over in six months.” His confidence was based on the recent discovery by Dirac of the equation that governed the electron. It was thought that a similar
equation would govern the proton, which was the only other particle known at the time, and that would be the end of theoretical physics. However, the discovery of the neutron and of nuclear forces
knocked that one on the head too. Having said this, I still believe there are grounds for cautious optimism that we may now be near the end of the search for the ultimate laws of nature.
In previous chapters I have described general relativity, the partial theory of gravity, and the partial theories that govern the weak, the strong, and the electromagnetic forces. The last three may
be combined in so-called grand unified theories, or GUTs, which are not very satisfactory because they do not include gravity and because they contain a number of quantities, like the relative masses
of different particles, that cannot be predicted from the theory but have to be chosen to fit observations. The main difficulty in finding a theory that unifies gravity with the other forces is that
general relativity is a “classical” theory; that is, it does not incorporate the uncertainty principle of quantum mechanics. On the other hand, the other partial theories depend on quantum mechanics
in an essential way. A necessary first step, therefore, is to combine general relativity with the uncertainty principle. As we have seen, this can produce some remarkable consequences, such as black
holes not being black, and the universe not having any singularities but being completely self-contained and without a boundary. The trouble is, as explained in Chapter 7, that the uncertainty
principle means that even “empty” space is filled with pairs of virtual particles and antiparticles. These pairs would have an infinite amount of energy and, therefore, by Einstein’s famous equation
E = mc^2, they would have an infinite amount of mass. Their gravitational attraction would thus curve up the universe to infinitely small size.
Rather similar, seemingly absurd infinities occur in the other partial theories, but in all these cases the infinities can be canceled out by a process called renormalization. This involves canceling
the infinities by introducing other infinities. Although this technique is rather dubious mathematically, it does seem to work in practice, and has been used with these theories to make predictions
that agree with observations to an extraordinary degree of accuracy. Renormalization, however, does have a serious drawback from the point of view of trying to find a complete theory, because it
means that the actual values of the masses and the strengths of the forces cannot be predicted from the theory, but have to be chosen to fit the observations.
In attempting to incorporate the uncertainty principle into general relativity, one has only two quantities that can be adjusted: the strength of gravity and the value of the cosmological constant.
But adjusting these is not sufficient to remove all the infinities. One therefore has a theory that seems to predict that certain quantities, such as the curvature of space-time, are really infinite,
yet these quantities can be observed and measured to be perfectly finite! This problem in combining general relativity and the uncertainty principle had been suspected for some time, but was finally
confirmed by detailed calculations in 1972. Four years later, a possible solution, called “supergravity,” was suggested. The idea was to combine the spin-2 particle called the graviton, which carries
the gravitational force, with certain other particles of spin 3/2, 1, ½, and 0. In a sense, all these particles could then be regarded as different aspects of the same “superparticle,” thus unifying
the matter particles with spin ½ and 3/2 with the force-carrying particles of spin 0, 1, and 2. The virtual particle/antiparticle pairs of spin ½ and 3/2 would have negative energy, and so would tend
to cancel out the positive energy of the spin 2, 1, and 0 virtual pairs. This would cause many of the possible infinities to cancel out, but it was suspected that some infinities might still remain.
However, the calculations required to find out whether or not there were any infinities left uncancelled were so long and difficult that no one was prepared to undertake them. Even with a computer it
was reckoned it would take at least four years, and the chances were very high that one would make at least one mistake, probably more. So one would know one had the right answer only if someone else
repeated the calculation and got the same answer, and that did not seem very likely!
Despite these problems, and the fact that the particles in the super-gravity theories did not seem to match the observed particles, most scientists believed that supergravity was probably the right
answer to the problem of the unification of physics. It seemed the best way of unifying gravity with the other forces. However, in 1984 there was a remarkable change of opinion in favor of what are
called string theories. In these theories the basic objects are not particles, which occupy a single point of space, but things that have a length but no other dimension, like an infinitely thin
piece of string. These strings may have ends (the so-called open strings) or they may be joined up with themselves in closed loops (closed strings) Figure 11:1 and Figure 11:2.
A particle occupies one point of space at each instant of time. Thus its history can be represented by a line in space-time (the “world-line”). A string, on the other hand, occupies a line in space
at each moment of time. So its history in space-time is a two-dimensional surface called the world-sheet. (Any point on such a world-sheet can be described by two numbers, one specifying the time and
the other the position of the point on the string.) The world-sheet of an open string is a strip: its edges represent the paths through space-time of the ends of the string Figure 11:1. The
world-sheet of a closed string is a cylinder or tube Figure 11:2: a slice through the tube is a circle, which represents the position of the string at one particular time.
Two pieces of string can join together to form a single string; in the case of open strings they simply join at the ends Figure 11:3, while in the case of closed strings it is like the two legs
joining on a pair of trousers Figure 11:4.
Similarly, a single piece of string can divide into two strings. In string theories, what were previously thought of as particles are now pictured as waves traveling down the string, like waves on a
vibrating kite string. The emission or absorption of one particle by another corresponds to the dividing or joining together of strings. For example, the gravitational force of the sun on the earth
was pictured in particle theories as being caused by the emission of a graviton by a particle in the sun and its absorption by a particle in the earth Figure 11:5.
In string theory, this process corresponds to an H-shaped tube or pipe Figure 11:6 (string theory is rather like plumbing, in a way). The two vertical sides of the H correspond to the particles in
the sun and the earth, and the horizontal crossbar corresponds to the graviton that travels between them.
String theory has a curious history. It was originally invented in the late 1960s in an attempt to find a theory to describe the strong force. The idea was that particles like the proton and the
neutron could be regarded as waves on a string. The strong forces between the particles would correspond to pieces of string that went between other bits of string, as in a spider’s web. For this
theory to give the observed value of the strong force between particles, the strings had to be like rubber bands with a pull of about ten tons.
In 1974 Joel Scherk from Paris and John Schwarz from the California Institute of Technology published a paper in which they showed that string theory could describe the gravitational force, but only
if the tension in the string were very much higher, about a thousand million million million million million million tons (1 with thirty-nine zeros after it). The predictions of the string theory
would be just the same as those of general relativity on normal length scales, but they would differ at very small distances, less than a thousand million million million million millionth of a
centimeter (a centimeter divided by 1 with thirty-three zeros after it). Their work did not receive much attention, however, because at just about that time most people abandoned the original string
theory of the strong force in favor of the theory based on quarks and gluons, which seemed to fit much better with observations. Scherk died in tragic circumstances (he suffered from diabetes and
went into a coma when no one was around to give him an injection of insulin). So Schwarz was left alone as almost the only supporter of string theory, but now with the much higher proposed value of
the string tension.
In 1984 interest in strings suddenly revived, apparently for two reasons. One was that people were not really making much progress toward showing that supergravity was finite or that it could explain
the kinds of particles that we observe. The other was the publication of a paper by John Schwarz and Mike Green of Queen Mary College, London, that showed that string theory might be able to explain
the existence of particles that have a built-in left-handedness, like some of the particles that we observe. Whatever the reasons, a large number of people soon began to work on string theory and a
new version was developed, the so-called heterotic string, which seemed as if it might be able to explain the types of particles that we observe.
String theories also lead to infinities, but it is thought they will all cancel out in versions like the heterotic string (though this is not yet known for certain). String theories, however, have a
bigger problem: they seem to be consistent only if space-time has either ten or twenty-six dimensions, instead of the usual four! Of course, extra space-time dimensions are a commonplace of science fiction; indeed, they provide an ideal way of overcoming the normal restriction of general relativity that one cannot travel faster than light or back in time (see Chapter 10). The idea is to take a
shortcut through the extra dimensions. One can picture this in the following way. Imagine that the space we live in has only two dimensions and is curved like the surface of an anchor ring or torus
Figure 11:7.
If you were on one side of the inside edge of the ring and you wanted to get to a point on the other side, you would have to go round the inner edge of the ring. However, if you were able to travel
in the third dimension, you could cut straight across.
Why don’t we notice all these extra dimensions, if they are really there? Why do we see only three space dimensions and one time dimension? The suggestion is that the other dimensions are curved up
into a space of very small size, something like a million million million million millionth of an inch. This is so small that we just don’t notice it: we see only one time dimension and three space
dimensions, in which space-time is fairly flat. It is like the surface of a straw. If you look at it closely, you see it is two-dimensional (the position of a point on the straw is described by two
numbers, the length along the straw and the distance round the circular direction). But if you look at it from a distance, you don’t see the thickness of the straw and it looks one-dimensional (the
position of a point is specified only by the length along the straw). So it is with space-time: on a very small scale it is ten-dimensional and highly curved, but on bigger scales you don’t see the
curvature or the extra dimensions. If this picture is correct, it spells bad news for would-be space travelers: the extra dimensions would be far too small to allow a spaceship through. However, it
raises another major problem. Why should some, but not all, of the dimensions be curled up into a small ball? Presumably, in the very early universe all the dimensions would have been very curved.
Why did one time dimension and three space dimensions flatten out, while the other dimensions remain tightly curled up?
One possible answer is the anthropic principle. Two space dimensions do not seem to be enough to allow for the development of complicated beings like us. For example, two-dimensional animals living
on a one-dimensional earth would have to climb over each other in order to get past each other. If a two-dimensional creature ate something it could not digest completely, it would have to bring up
the remains the same way it swallowed them, because if there were a passage right through its body, it would divide the creature into two separate halves: our two-dimensional being would fall apart
Figure 11:8. Similarly, it is difficult to see how there could be any circulation of the blood in a two-dimensional creature.
There would also be problems with more than three space dimensions. The gravitational force between two bodies would decrease more rapidly with distance than it does in three dimensions. (In three
dimensions, the gravitational force drops to 1/4 if one doubles the distance. In four dimensions it would drop to 1/8, in five dimensions to 1/16, and so on.) The significance of this is that the
orbits of planets, like the earth, around the sun would be unstable: the least disturbance from a circular orbit (such as would be caused by the gravitational attraction of other planets) would
result in the earth spiraling away from or into the sun. We would either freeze or be burned up. In fact, the same behavior of gravity with distance in more than three space dimensions means that the
sun would not be able to exist in a stable state with pressure balancing gravity. It would either fall apart or it would collapse to form a black hole. In either case, it would not be of much use as
a source of heat and light for life on earth. On a smaller scale, the electrical forces that cause the electrons to orbit round the nucleus in an atom would behave in the same way as gravitational
forces. Thus the electrons would either escape from the atom altogether or would spiral into the nucleus. In either case, one could not have atoms as we know them.
It seems clear then that life, at least as we know it, can exist only in regions of space-time in which one time dimension and three space dimensions are not curled up small. This would mean that one
could appeal to the weak anthropic principle, provided one could show that string theory does at least allow there to be such regions of the universe – and it seems that indeed string theory does.
There may well be other regions of the universe, or other universes (whatever that may mean), in which all the dimensions are curled up small or in which more than four dimensions are nearly flat,
but there would be no intelligent beings in such regions to observe the different number of effective dimensions.
Another problem is that there are at least four different string theories (open strings and three different closed string theories) and millions of ways in which the extra dimensions predicted by
string theory could be curled up. Why should just one string theory and one kind of curling up be picked out? For a time there seemed no answer, and progress got bogged down. Then, from about 1994,
people started discovering what are called dualities: different string theories and different ways of curling up the extra dimensions could lead to the same results in four dimensions. Moreover, as
well as particles, which occupy a single point of space, and strings, which are lines, there were found to be other objects called p-branes, which occupied two-dimensional or higher-dimensional
volumes in space. (A particle can be regarded as a 0-brane and a string as a 1-brane but there were also p-branes for p=2 to p=9.) What this seems to indicate is that there is a sort of democracy
among supergravity, string, and p-brane theories: they seem to fit together but none can be said to be more fundamental than the others. They appear to be different approximations to some fundamental
theory that are valid in different situations.
People have searched for this underlying theory, but without any success so far. However, I believe there may not be any single formulation of the fundamental theory any more than, as Gödel showed,
one could formulate arithmetic in terms of a single set of axioms. Instead it may be like maps – you can’t use a single map to describe the surface of the earth or an anchor ring: you need at least
two maps in the case of the earth and four for the anchor ring to cover every point. Each map is valid only in a limited region, but different maps will have a region of overlap. The collection of
maps provides a complete description of the surface. Similarly, in physics it may be necessary to use different formulations in different situations, but two different formulations would agree in
situations where they can both be applied. The whole collection of different formulations could be regarded as a complete unified theory, though one that could not be expressed in terms of a single
set of postulates.
But can there really be such a unified theory? Or are we perhaps just chasing a mirage? There seem to be three possibilities:
1. There really is a complete unified theory (or a collection of overlapping formulations), which we will someday discover if we are smart enough.
2. There is no ultimate theory of the universe, just an infinite sequence of theories that describe the universe more and more accurately.
3. There is no theory of the universe: events cannot be predicted beyond a certain extent but occur in a random and arbitrary manner.
Some would argue for the third possibility on the grounds that if there were a complete set of laws, that would infringe God’s freedom to change his mind and intervene in the world. It’s a bit like
the old paradox: can God make a stone so heavy that he can’t lift it? But the idea that God might want to change his mind is an example of the fallacy, pointed out by St. Augustine, of imagining God
as a being existing in time: time is a property only of the universe that God created. Presumably, he knew what he intended when he set it up!
With the advent of quantum mechanics, we have come to recognize that events cannot be predicted with complete accuracy but that there is always a degree of uncertainty. If one likes, one could
ascribe this randomness to the intervention of God, but it would be a very strange kind of intervention: there is no evidence that it is directed toward any purpose. Indeed, if it were, it would by
definition not be random. In modern times, we have effectively removed the third possibility above by redefining the goal of science: our aim is to formulate a set of laws that enables us to predict
events only up to the limit set by the uncertainty principle.
The second possibility, that there is an infinite sequence of more and more refined theories, is in agreement with all our experience so far. On many occasions we have increased the sensitivity of
our measurements or made a new class of observations, only to discover new phenomena that were not predicted by the existing theory, and to account for these we have had to develop a more advanced
theory. It would therefore not be very surprising if the present generation of grand unified theories was wrong in claiming that nothing essentially new will happen between the electroweak
unification energy of about 100 GeV and the grand unification energy of about a thousand million million GeV. We might indeed expect to find several new layers of structure more basic than the quarks
and electrons that we now regard as “elementary” particles.
However, it seems that gravity may provide a limit to this sequence of “boxes within boxes.” If one had a particle with an energy above what is called the Planck energy, ten million million million
GeV (1 followed by nineteen zeros), its mass would be so concentrated that it would cut itself off from the rest of the universe and form a little black hole. Thus it does seem that the sequence of
more and more refined theories should have some limit as we go to higher and higher energies, so that there should be some ultimate theory of the universe. Of course, the Planck energy is a very long
way from the energies of around a hundred GeV, which are the most that we can produce in the laboratory at the present time. We shall not bridge that gap with particle accelerators in the foreseeable
future! The very early stages of the universe, however, are an arena where such energies must have occurred. I think that there is a good chance that the study of the early universe and the
requirements of mathematical consistency will lead us to a complete unified theory within the lifetime of some of us who are around today, always presuming we don’t blow ourselves up first.
What would it mean if we actually did discover the ultimate theory of the universe? As was explained in Chapter 1, we could never be quite sure that we had indeed found the correct theory, since
theories can’t be proved. But if the theory was mathematically consistent and always gave predictions that agreed with observations, we could be reasonably confident that it was the right one. It
would bring to an end a long and glorious chapter in the history of humanity’s intellectual struggle to understand the universe. But it would also revolutionize the ordinary person’s understanding of
the laws that govern the universe. In Newton’s time it was possible for an educated person to have a grasp of the whole of human knowledge, at least in outline. But since then, the pace of the
development of science has made this impossible. Because theories are always being changed to account for new observations, they are never properly digested or simplified so that ordinary people can
understand them. You have to be a specialist, and even then you can only hope to have a proper grasp of a small proportion of the scientific theories. Further, the rate of progress is so rapid that
what one learns at school or university is always a bit out of date. Only a few people can keep up with the rapidly advancing frontier of knowledge, and they have to devote their whole time to it and
specialize in a small area. The rest of the population has little idea of the advances that are being made or the excitement they are generating. Seventy years ago, if Eddington is to be believed,
only two people understood the general theory of relativity. Nowadays tens of thousands of university graduates do, and many millions of people are at least familiar with the idea. If a complete
unified theory was discovered, it would only be a matter of time before it was digested and simplified in the same way and taught in schools, at least in outline. We would then all be able to have
some understanding of the laws that govern the universe and are responsible for our existence.
Even if we do discover a complete unified theory, it would not mean that we would be able to predict events in general, for two reasons. The first is the limitation that the uncertainty principle of
quantum mechanics sets on our powers of prediction. There is nothing we can do to get around that. In practice, however, this first limitation is less restrictive than the second one. It arises from
the fact that we could not solve the equations of the theory exactly, except in very simple situations. (We cannot even solve exactly for the motion of three bodies in Newton’s theory of gravity, and
the difficulty increases with the number of bodies and the complexity of the theory.) We already know the laws that govern the behavior of matter under all but the most extreme conditions. In
particular, we know the basic laws that underlie all of chemistry and biology. Yet we have certainly not reduced these subjects to the status of solved problems: we have, as yet, had little success
in predicting human behavior from mathematical equations! So even if we do find a complete set of basic laws, there will still be in the years ahead the intellectually challenging task of developing
better approximation methods, so that we can make useful predictions of the probable outcomes in complicated and realistic situations. A complete, consistent, unified theory is only the first step:
our goal is a complete understanding of the events around us, and of our own existence.
Algorithms: Who Really Invented Them?
Unraveling the Mysteries of Algorithm Invention: A Journey Through Time
Who Invented Algorithms?
The history of algorithms spans back thousands of years, from the ancient civilizations of Babylon and Egypt to the modern day. In this article, we will explore the origins of algorithms and the
contributions of some of the most significant figures in their development.
Overview of Algorithms
An algorithm is a set of instructions that can be used to solve problems or perform specific tasks. They can be found in various fields such as mathematics, computer science, and engineering.
Algorithms are used every day in our lives, from finding the shortest distance between two points on a map to recommending movies and songs on streaming platforms.
Ancient Origins
The concept of algorithms can be traced back to the ancient civilizations of Babylon and Egypt. The Babylonians used algorithms to solve quadratic equations, and the Egyptians used algorithms to calculate the area of a circle. The Greeks also made significant contributions to the study of algorithms, with Euclid's algorithm for finding the greatest common divisor being one of the most famous examples.
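Euclid's procedure is short enough to state directly. As a concrete illustration (a minimal sketch, not tied to any particular historical formulation), it can be written in a few lines of Python:

```python
# A minimal sketch of Euclid's algorithm for the greatest common divisor.

def gcd(a, b):
    # Repeatedly replace the pair (a, b) by (b, a mod b) until the
    # remainder is zero; the last nonzero value is the GCD.
    while b != 0:
        a, b = b, a % b
    return a

result = gcd(1071, 462)  # classic textbook example: gcd(1071, 462) = 21
```

The same remainder-swapping idea is what Euclid described geometrically, as repeated subtraction of lengths.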
The Father of Computer Science
Alan Turing, a British mathematician, is often credited as the father of computer science and the inventor of algorithms. His work during World War II in breaking the German Enigma code used advanced
algorithms to decode messages. Turing's contributions to computer science included the concept of the universal machine, which became the foundation of modern computing. Despite Turing's significant
contributions to the field, it is important to note that algorithms were not invented by a single person. The development of algorithms has been a collaborative effort spanning thousands of years and
multiple cultures. Mathematicians and scientists all over the world have contributed to the study of algorithms, shaping our understanding of the world and the problems we face.
The Impact of Algorithms Today
Algorithms play a crucial role in modern life, from the way we communicate and work to the way we consume media and entertainment. The rise of big data has brought about a new era of algorithms,
where machine learning algorithms are used to predict and analyze consumer behavior, among many other things. However, the widespread use of algorithms has also brought about concerns regarding their
impact on society. The bias that can be inherent in algorithms, and the potential for these biases to be amplified in the decision-making process, is one such concern. As algorithms continue to play
a more significant role in our lives, it is important to consider these issues carefully and work towards developing fairer and more transparent algorithms. In conclusion, the development of
algorithms has been a collaborative effort spanning multiple cultures and thousands of years. While Alan Turing is often credited as the father of computer science and the inventor of algorithms, it
is important to recognize and acknowledge the many contributions of mathematicians and scientists throughout history. The impact of algorithms on modern life is undeniable, and it is up to us to
consider the ethical implications and strive towards fairer and more transparent algorithms in the future.
The Birth of Modern Algorithms
In mathematics, algorithms have existed for thousands of years. However, the concept of modern algorithms as we know it today can be traced back to the work of mathematicians and computer scientists
in the mid-20th century.
Early Computer Algorithms
During the 1950s and 1960s, computer scientists and mathematicians such as John von Neumann, Tony Hoare, and Donald Knuth developed and analyzed algorithms designed specifically for early computers. Algorithms like von Neumann's merge sort and Hoare's quicksort were able to sort large amounts of data with unprecedented speed and efficiency, paving the way for the development of modern computing as we know it today.
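The divide-and-conquer idea behind merge sort (split the data, sort each half, then merge the two sorted halves) can be sketched briefly in Python. This is an illustrative modern rendering, not von Neumann's original formulation:

```python
def merge_sort(items):
    """Recursive merge sort: split the list, sort each half,
    then merge the two sorted halves in linear time."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```

Because each level of recursion does linear work and there are logarithmically many levels, the running time is O(n log n), a decisive improvement over the quadratic sorts it displaced.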
These early algorithms formed the foundation of modern-day computer programming, and many of them continue to be used in various forms to this day. They were also instrumental in the development of
programming languages, which allowed computers to interpret and execute complex algorithms with ease.
Impact of the Internet
The rise of the internet and search engines like Google in the late 20th century created a need for more advanced algorithms to process and present information. As the amount of data available on the
internet continued to grow exponentially, more sophisticated algorithms were required to help users find the information they were looking for as quickly and easily as possible.
This led to the development of algorithms like Google's PageRank, which uses a complex set of algorithms and formulas to analyze web pages and determine their relevance to a given search query. Other
algorithms like recommendation engines and collaborative filtering algorithms were developed to help users find content and products that were personalized to their tastes and preferences.
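The core idea of PageRank, that a page matters if pages that matter link to it, can be illustrated with a toy power-iteration sketch in Python. The three-page link graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions; Google's production system is far more elaborate:

```python
# Toy PageRank by power iteration over a hypothetical 3-page link graph.
# With damping d, each page gets (1 - d)/N base rank plus d-weighted
# shares of the rank of every page that links to it.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the rank vector stabilizes
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```

In this graph, page C ends up ranked highest: it receives links from both A and B, while B receives only half of A's outgoing weight.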
Current Innovations
Today, modern computers and advanced technology have led to the creation of increasingly complex algorithms in fields like machine learning, artificial intelligence, and big data analysis. These
algorithms are able to process vast amounts of data and make decisions based on patterns and trends that would be impossible for humans to discern on their own.
Machine learning algorithms, for example, are able to "learn" from existing data sets to make predictions and decisions about new data. They are used in everything from self-driving cars to medical
diagnostics, and their potential applications are virtually limitless.
Overall, the invention and development of algorithms has had a profound impact on the world of computing and technology. Today, algorithms are used in everything from search engines and e-commerce
websites to scientific research and medical diagnostics. As technology continues to evolve, we can expect algorithms to become increasingly advanced and sophisticated, pushing the boundaries of what
is possible in the world of computing and beyond.
The Debate on Inventions
The Role of Collaboration
The history of algorithms is a complex one, with many different scientists and mathematicians contributing to its development. Some argue that algorithms were developed collaboratively, and that it
is difficult to pinpoint a single inventor. Much like other inventions, the creation of algorithms was more of a group effort, with contributions from many individuals who refined and built upon each
other's work.
One of the earliest known uses of algorithms was by the ancient Greeks, who developed algorithms for solving mathematical problems. The Chinese also contributed to the development of computation, with the suanpan, an early abacus-style computing device, in use by around the 2nd century BC.
During the 19th century, many mathematicians worked together to develop new algorithms for solving complex mathematical problems. One such group was the French Academy of Sciences, which included
luminaries such as Adrien-Marie Legendre, Sophie Germain, and Joseph-Louis Lagrange.
The Importance of Individual Contributions
While it is clear that many individuals contributed to the development of modern algorithms, some experts believe that certain individuals played a more significant role than others. For example,
Alan Turing, a British mathematician and computer scientist, is often credited with formalizing the modern notion of an algorithm. His 1936 paper introduced the Universal Turing Machine, a theoretical model of a computer that laid the foundation for modern computing, and his later work during World War II helped break the German Enigma code.
Donald Knuth, an American computer scientist, is also considered a pioneer in algorithm development. He began writing his multi-volume work "The Art of Computer Programming" in 1962; its volumes, the first published in 1968, include a comprehensive treatment of sorting, searching, and other fundamental algorithms in computer science.
The Future of Algorithm Development
Regardless of who is credited with inventing algorithms, one thing is certain - the field will continue to grow and evolve as technology advances. As the use of artificial intelligence and machine
learning becomes more widespread, algorithms will become increasingly important in automating tasks and making sense of vast amounts of data.
Advances in quantum computing are also expected to lead to the development of new algorithms that can solve problems that are currently intractable with classical computers. These algorithms have the
potential to revolutionize industries such as finance, cryptography, and drug discovery, among others.
In conclusion, the development of algorithms was a collaborative effort involving the contributions of many individuals throughout history. While some individuals, such as Turing and Knuth, played a
significant role in building the foundation of modern algorithms, it is important to recognize the contributions of all those who came before them. As technology continues to advance, the future of
algorithm development looks bright, with the potential for breakthroughs that could change the world.
Handbook Of Physics 2019 - School of Basic Science, Indian Institute of Technology Mandi - IIT Mandi
Course Coordinator:
Dr. Ajay Soni (2018 onwards)
Dr. Hari Varma (2015-2018)
Faculty advisors:
Batch 2015-16: Dr. Pradyumna Pathak
Batch 2016-17: Dr. Bindu Radhamany
Batch 2017-18: Dr. C. S. Yadav
Batch 2018-19: Dr. K. Mukherjee
Batch 2019-20: Dr. Ajay Soni
Laboratory staff
Ms. Sushma Verma
Course-Interests Group
Dr. Ajay Soni, Associate Professor
Specialization: Nanomaterials and Experimental Condensed Matter Physics
Phone: 267154, Email: ajay@iitmandi.ac.in

Dr. C. S. Yadav, Associate Professor
Specialization: Low Temperature Condensed Matter Physics
Phone: 267135, Email: shekhar@iitmandi.ac.in

Dr. Hari Varma, Associate Professor
Specialization: Atomic and Molecular Physics
Phone: 267064, Email: hari@iitmandi.ac.in

Dr. Kaustav Mukherjee, Associate Professor
Specialization: Experimental Condensed Matter Physics
Phone: 267043, Email: kaustav@iitmandi.ac.in

Dr. Bindu Radhamany, Associate Professor
Specialization: X-ray spectroscopy
Phone: 267060, Email: bindu@iitmandi.ac.in

Dr. Arti Kashyap, Associate Professor
Specialization: Magnetism and magnetic materials
Phone: 267042, Email: arti@iitmandi.ac.in

Dr. Pradeep Kumar, Assistant Professor
Specialization: Raman and Infrared Spectroscopy
Phone: 267137, Email: pkumar@iitmandi.ac.in

Dr. Prashant P. Jose, Assistant Professor
Specialization: Soft condensed matter physics
Phone: 267266, Email: prasanth@iitmandi.ac.in

Dr. Suman Kalyan Pal, Associate Professor
Specialization: Fast and Ultrafast Laser Spectroscopy
Phone: 267040, Email: suman@iitmandi.ac.in

Dr. Arghya Taraphder, Professor
Specialization: Condensed Matter Physics
Phone: 267803, Email: arghya@phy.iitkgp.ernet.in

Dr. Samar, Assistant Professor
Specialization: Information Theory, Wireless Communications
Phone: 267107, Email: samar@iitmandi.ac.in

Dr. Sudhir Kumar Pandey, Assistant Professor
Specialization: Condensed Matter Physics and Materials Science
Phone: 267066, Email: sudhir@iitmandi.ac.in
Course Content of I-PhD and M.Sc. Physics
Sem Course Credits M Sc I-PhD
I PH 511 Mathematical Physics 4-0-0-4 C C
PH 512 Classical Mechanics 4-0-0-4 C C
PH 513 Quantum Mechanics 3-0-0-3 C C
PH 514 Electronics 3-0-0-3 C C
PH 515P Physics Laboratory 0-0-5-3 C C
Technical Communications 1-0-0-1 C C
Elective (Outside Discipline) 3-0-0-3 E1 -
PH 516 Research Project I 0-0-4-2 - C
PH 517 Research Project II (Winter) 0-0-8-4 - C
21 20+4
II PH 521 Electromagnetic Theory 4-0-0-4 C C
PH 522 Statistical Mechanics 4-0-0-4 C C
PH 523 Cond. Matter Physics 3-0-0-3 C C
PH 524 Atom. Mol. Physics 3-0-0-3 C C
PH 525P Electronics Lab. Pract. 0-0-5-3 C C
Elective 3-0-0-3 E2 -
PH 526 Research Project III 0-0-6-3 - C
PH 527 Research Project IV (Summer) 0-0-6-3 - C
20 20+3
III PH 611P Exp. Res. Techniques 0-0-7-4 C C
PH 614 Seminar and Report 0-0-4-2 C C
PH 613 Spe. Topics. in QM 3-0-0-3 C E1
PH 518P PG Project-I 0-0-6-3 C -
PH 615P Mini Thesis-1 - C
Elective 3-0-0-3 E3 E2
Elective 3-0-0-3 E4 E3
Elective-5 (Outside Discipline) 3-0-0-3 E5 -
IV PH 621 Comput. Meth. Physics 2-0-4-4 C C
PH 519P PG Project-II 0-0-16-8 C -
PH 622 Mini Thesis –II 0-0-16-8 - C
Elective 3-0-0-3 E6 E4
Elective 3-0-0-3 E7 -
Over All 80 80
V-VI Electives (3) for 9 credits 3-0-0-3 09
MSc: Total: 35 (T) + 10 (L) + 13 (R) + 21 (E) + 1 (TC) = 80 Credits
I-PhD: Total: 35 (T) + 10 (L) + 25 (R) + 9 (E) + 1 (TC) + 9 (AE) = 89 Credits
List of Discipline Electives
1. PH 502 Optics and Photonics [3-0-0-3]
2. PH-503 Lasers and Applications [3-0-0-3]
3. PH 507 X-rays as a probe to study the material properties [3-0-0-3]
4. PH 508 Magnetism and Magnetic Materials [3-0-0-3]
5. PH 601 Mesoscopic Physics and Quantum Transport [3-0-0-3]
6. PH 603 Advanced Condensed Matter Physics [3-0-0-3]
7. PH 612 Nuclear and Particle Physics [3-0-0-3]
8. PH 701 Introduction to Molecular Simulations [2-0-4-4]
9. PH 706 Introduction to Stochastic Problems in Physics [3-0-0-3]
10. PH 605 Superconductivity
11. PH 591 Special Topics in High energy Physics [1-0-0-1]
Course Name : Mathematical Physics
Course Number : PH-511
Credits : 4-0-0-4
Preamble: Mathematical physics provides a firm foundation in the various mathematical methods developed and used for understanding different physical phenomena. This course provides the mathematical tools needed to address the formalisms used in the core courses of master's-level physics.
Course Outline: The course starts with vector calculus, followed by an introduction to tensor analysis and the concept of linear vector spaces. The course continues with differential equations and the special functions used to understand physical phenomena in different geometries. This is followed by complex analysis, and finally Fourier analysis and integral transforms are discussed.
Modules: Coordinate systems, Vector calculus in Cartesian and Curvilinear coordinates, Introduction to Tensor analysis. Linear vector spaces, Gram-Schmidt orthogonalization, Self-adjoint, Unitary, and Hermitian operators, transformation of operators, eigenvalue equation, Hermitian matrix diagonalization. Ordinary differential equations (ODEs) with constant coefficients, second-order Linear ODEs, Series Solution (Frobenius Method), Inhomogeneous linear ODEs. Sturm-Liouville equation, Hermitian operators, the eigenvalue problem. Special functions: Bessel, Neumann, Hankel, Hermite, Legendre, Spherical Harmonics, Laguerre, Gamma, Beta, Delta functions. Complex analysis, Cauchy-Riemann conditions, Cauchy's Integral theorem, Laurent expansion, Singularities, Calculus of residues, evaluation of definite integrals, Method of steepest descent, saddle points. Fourier series: general properties and applications; Integral transforms, Properties of the Fourier transform, Discrete Fourier transform, Laplace transform, Convolution theorem.
Text books:
1. Mathematical methods for physicists by Arfken and Weber (Elsevier Academic Press, 6th
edition, 2005)
2. Mathematical Methods in Physical Sciences by Mary L Boas (Willey 3rd edition, 2005)
References: 1. Mathematical Methods for Physics and Engineering: A Comprehensive Guide
by K. F. Riley, M. P. Hobson (Cambridge India South Asian Edition, 2009)
2. Mathematical Methods for Physicists by Mathews, J., and Walker, R.L. (Imprint, New
edition 1973)
3. Mathematics of Classical and Quantum Physics by F W Byron and R W Fuller (Dover
Publication, New edition, 1992)
4. Methods of Theoretical Physics Vol. I and II by P M Morse and H. Feshbach (McGraw-Hill,
5. Advanced Engineering Mathematics by E Kreyszig (Wiley India Private Limited, 10th
edition, 2003)
6. Mathematics for Physicists by Philippe Dennery and Andre Krzywicki (Dover Publications
Inc. 1996).
Course Name : Classical Mechanics
Course Number : PH-512
Credits : 4-0-0-4
Preamble: Classical mechanics is one of the backbones of physics, dealing with the motion of particles. The present course covers topics beyond Newtonian mechanics to provide a proper base for many other branches of physics.
Course Outline: The course presents an abstraction of mechanics, introducing Lagrangian mechanics starting from Newtonian mechanics, variational principles of mechanics, Hamilton's equations of motion, canonical transformations, Poisson brackets, and Hamilton-Jacobi equations. The concepts are illustrated using examples such as the harmonic oscillator, the two-body problem, rigid body dynamics, and small oscillations.
Modules: Introduction: Mechanics of a system of particles, Constraints, D'Alembert's Principle and Lagrange's Equations, Simple Applications of the Lagrangian Formulation, Hamilton's principle, Some techniques of the calculus of variations, Derivation of Lagrange's equations from Hamilton's principle, Conservation theorems and Symmetry properties.
The Central Force Problem: The Equivalent one-dimensional problem and classification of orbits, The virial theorem, The Kepler problem.
The Kinematics of Rigid Body Motion: Orthogonal transformations, Euler's theorem on the motion of a rigid body, Finite rotations, Infinitesimal rotations, Rate of change of a vector, Angular momentum and kinetic energy of motion, the inertia tensor and the moment of inertia, Euler's equations of motion of a rigid body.
Oscillations: Formulation of the problem, the eigenvalue equation and the principal axis transformation, Small oscillations, Frequencies of free vibration, Normal coordinates, Non-linear oscillations and Chaos.
The Hamilton Equations of Motion: Legendre Transformations and the Hamilton Equations of Motion, Cyclic Coordinates and Conservation Theorems, The Principle of Least Action.
Canonical Transformations: Examples of canonical transformations, Poisson Brackets and Canonical invariants, Liouville's theorem. Hamilton-Jacobi theory and Action-Angle Variables; the Hamilton-Jacobi equation for Hamilton's characteristic function.
Text books:
1. Classical Mechanics by H. Goldstein (Pearson Education; 3rd edition, 2011)
2. The Variational Principles of Mechanics by Cornelius Lanczos (Dover Publications Inc.)
3. Classical Mechanics by N.C. Rana and P.S. Joag (McGraw Hill Education (India) Private Limited; 1st edition, 16 February 2001)
References:
1. Classical Dynamics: A Contemporary Approach by J.V. Jose and E.J. Saletan (Cambridge University Press, 2002)
2. Mechanics by L.D. Landau and E.M. Lifshitz (Butterworth-Heinemann Ltd; 3rd revised edition, 29 January 1982)
3. Classical Dynamics by D. T. Greenwood (Dover Publications Inc.; new edition, 21 October 1997)
4. Introduction to Dynamics by I.C. Percival and D. Richards (Cambridge University Press, 2 December 1982)
5. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies by E.T. Whittaker (Forgotten Books, 27 September 2015)
6. Classical Mechanics by John R. Taylor (University Science Books, 15 September 2004)
7. Classical Dynamics of Particles and Systems by Thornton and Marion (Cengage; 5th edition, 17 December 2012)
8. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering by Steven H. Strogatz (Perseus Books; first edition, 1 February 1994)
Course Name: Quantum Mechanics
Course Number: PH-513
Credits: 3-0-0-3
Preamble: This course is an introductory-level course on quantum mechanics covering its basic principles. Several applications of quantum mechanics will be discussed to train students to apply these ideas to model systems in both one and three dimensions.
Course Outline: The course begins with a discussion of the origins of quantum theory and will introduce the basic postulates. Applications of quantum mechanics to various one-dimensional cases will be discussed, and Dirac notation will be introduced. Applications of quantum mechanics in three dimensions will then be discussed. Approximation techniques such as perturbation theory (both time-dependent and time-independent) and variational methods will also be discussed in this course.
Modules: Origins of quantum theory, Postulates of quantum mechanics, observables and
operators, theory of measurement in quantum mechanics, state of the system and expectation
values, time evolution of the state, wave-packets, uncertainty principle, probability current,
transition from quantum mechanics to classical mechanics-Ehrenfest theorem. Application of
Schrodinger equation: scattering, tunnelling, bound states , harmonic oscillator, electrons in a
magnetic field in 2D, comparison of classical and quantum results. Basic mathematical
formalism of quantum mechanics, Dirac notation, linear vector operators, matrix representation
of states and operators, commutator relations in quantum mechanics, commutator and
uncertainty relations, complete set of commuting observables. Theory of angular momentum
in quantum mechanics, commutator relations in angular momentum, Eigen values and Eigen
states of angular momentum, spin-angular momentum. Application of Schrodinger equation in
3-D models, symmetry and degeneracy, central potentials, Schrodinger equation in spherical
co-ordinates, solution to hydrogen atom problem. Time independent non-degenerate and
degenerate perturbation theory, fine structure of hydrogen, Zeeman effect and hyperfine structure.
Text books: 1. Introduction to Quantum Mechanics - D J Griffiths (Pearson, Second edition, 2004). 2. Quantum Mechanics - Vol. 1, Claude Cohen-Tannoudji, B Diu, F Laloe (Wiley, First edition). 3. Modern Quantum Mechanics - J J Sakurai (Addison Wesley, revised edition).
References: 1. Introductory Quantum Mechanics, R Liboff (Pearson, Fourth edition, 2002) 2.
Quantum physics of atoms and molecules-R Eisberg and R Resnick (Wiley, 2nd edition, 1985)
3. Quantum Mechanics B. H. Bransden and C. J. Joachain (Pearson, Second edition, 2000) 4.
Principles of Quantum Mechanics - R Shankar (Plenum Press, Second edition, 2011) Student
Section. 5. The Feynman Lectures in Physics, Vol. 3, R.P. Feynman, R.B. Leighton, and M.
Sands (Narosa Publishing House, 1992) 6. Practical Quantum Mechanics - Siegfried Flügge
(Springer 1994)
Course Name: Electronics
Course Number: PH-514
Credits: 3-0-0-3
Preamble: To understand the principles of analog and digital electronics.
Course Outline: The course begins with analog electronics, involving the study of amplifiers, oscillators, field effect transistors, and operational amplifiers. Then the concepts of Boolean algebra and digital electronics are introduced. Subsequently, various digital circuits, including combinational, clock and timing, sequential, and digital integrated circuits, are studied. Further, the course will introduce the microprocessor.
Modules: Amplifiers: BJT, Classification of Amplifiers, Cascading of amplifiers, Types of
power amplifiers, Amplifier characteristics, Feedback in amplifiers, Feedback amplifier
topologies, Effects of negative feedback. Oscillators and Multivibrators: Classification and
basic principle of oscillator, Feedback oscillator’s concepts, Types of oscillator, Classes of
multivibrators. Field effect transistors: JFET, MOSFET. Operational amplifiers: OPAMPs,
OPAMP applications. Boolean algebra and Digital circuit: Number systems, Boolean algebra,
De Morgan's theorem, Logic Gates, Karnaugh Maps, Combinational circuits: Adder, Multiplexer, Demultiplexer, Encoder, and Decoder. Clock and timing circuits: Clock waveform, Schmitt Trigger, 555 Timer (astable, monostable). Sequential circuits: Flip-Flops, Registers, Counters, and Memories, D/A and A/D conversion. Microprocessor Basics: Introduction, Outline of the 8085/8086 processor, Data analysis.
Text Books: 1) Integrated electronics by Millman and Halkias (McGraw-Hill, 2001) 2)
Electronic Principles: A. P. Malvino and D. P. Bates (7th Edn) McGraw-Hill (2006) 3) Digital
Principles and Applications: D. P. Leach, A. P. Malvino and G. Saha, (6th Edn), Tata McGraw
Hill (2007) 4) Digital Electronics-Principles, Devices and Applications: A. K. Maini John
Wiley & Sons (2007) 5) R. S. Gaonkar, Microprocessor Architecture: Programming and
Applications with the 8085, Penram India (1999). 6) Microelectronic circuits, Sedra and Smith,
Oxford publications, sixth edition 2013
Course Name: Physics Laboratory Practicum
Course Number: PH-515P
Credits: 0-0-5-3
Preamble: This experimental course is expected to develop the art of experimentation, analysis skills, an understanding of the basis of knowledge in physics, and collaborative learning skills among students.
Course Outline: The course content includes standard physics experiments from various modules of physics, the theory of which students have learnt during their final year of B.Sc.
Experiments: 1. Hall Effect in Semiconductor Objective: To measure the resistivity and Hall
voltage of a semiconductor sample as a function of temperature and magnetic field. The band
gap, the specific conductivity, the type of charge carrier and the mobility of the charge carriers
can be determined from the measurements.
2. Michelson Interferometer Objective: To determine the wavelength of the light source by
producing interference pattern.
3. Fabry-Perot Interferometer Objective: To investigate the multibeam interference of a laser
light, along with the determination of the wavelength of the light source and the thickness of a transparent plate.
4. Zeeman Effect Objective: To observe the splitting of the spectral lines of atoms within a magnetic field (normal and anomalous Zeeman effect) and find the value of Bohr's magneton.
5. Diffraction of ultrasonic waves Objective: To observe Fraunhofer and Fresnel diffraction
and determine the wavelength of the ultrasound wave.
6. Franck-Hertz Experiment Objective: To demonstrate the quantization of atomic energy states
and determine the first excitation energy of neon.
7. Fourier optics Objective: To observe Fourier transformation of the electric field distribution
of light in a specific plane.
8. Dispersion and resolving power Objective: Determination of the grating constant of a
Rowland grating based on the diffraction angle (up to the third order) of the high intensity
spectral lines. Determination of the angular dispersion and resolving power of a grating.
9. Geiger-Müller-Counter Objective: To study random events, determination of the half-life
and radioactive equilibrium. Verification of the inverse-square law for beta and gamma radiation.
10. Scintillation counter Objective: Energy dependence of the gamma absorption coefficient /
Gamma spectroscopy.
1. R. A. Dunlop, Experimental Physics, Oxford University Press (1988). 2. A. C. Melissinos,
Experiments in Modern Physics, Academic Press (1996). 3. E. Hecht, Optics, Addison-Wesley;
4 edition (2001) 4. J Varma, Nuclear Physics Experiments, New Age Publishers (2001) 5. E.
Hecht, Optics, Addison-Wesley; 4 edition (2001) 6. Worsnop and Flint, Advanced Practical
Physics for Students, Methuen & Co. (1950). 7. E.V. Smith, Manual for Experiments in
Applied Physics. Butterworths (1970). 8. D. Malacara (ed), Methods of Experimental Physics,
Series of Volumes, Academic Press Inc. (1988).
Course Title: Technical Communication
Course Number: H-S541
Credit: 1-0–0-1
Preamble: Students in general, and graduate students in particular, are required to share and communicate their academic activities, in both written and oral form, to their peers and reviewers for comments and review. The duration of these presentations may vary from a few minutes to a few hours, and the audience may be homogeneous or heterogeneous. This course intends to help students learn the art of communication in these areas.
Objectives: The course objectives include facilitating the skills of preparing poster presentations, slides, abstracts, reports, papers, and theses, and of presenting them orally, through lectures, examples, and practice in class. Students are expected to learn the structuring of these academic activities and the time allotment for each sub-element of the structure of an oral presentation.
Major topics:
1) Review of appropriate and correct use of articles, adjectives and adverbs, active and
passive voices, affirmative sentences, sentences with positive and negative connotations and
presentation styles. Examples and class exercise.
2) Poster preparation and presentation in conferences.
3) Research article for conference and journal and slides for their presentations.
4) Thesis and/or book
5) Job interviews
Reference: Perelman, Leslie C., and Edward Barrett. The Mayfield Handbook of Scientific and Technical Writing. New York, NY: McGraw-Hill, 2003. ISBN: 9781559346474.
General Resources: Carson, Rachel. "The Obligation to Endure," chapter 2 in Silent Spring. 40th anniversary ed. New York, NY: Mariner Books, 2002. ISBN: 9780618249060. (Originally published in 1962. Any edition will do.) Day, Robert A., and Barbara Gastel. How to Write and Publish a Scientific Paper. 6th ed. Westport, CT: Greenwood Press, 2006. ISBN:
---. Scientific English: A Guide for Scientists and Other Professionals. 2nd ed. Phoenix, AZ: Oryx Press, 1995. ISBN: 978-0897749893. Hacker, Diana. A Pocket Style Manual. 4th spiral ed. New York, NY: Bedford/St. Martin's, 1999. ISBN: 9780312406844. Jackson, Ian C. Honor in Science. Sigma Xi, The Scientific Research Society, Research Triangle Park, N.C., 1992. Klotz, Irving M. Diamond Dealers and Feather Merchants: Tales from the Sciences. Boston: Birkhauser, 1986.
Course Name: Research Project I [I-PhD]
Course Number: PH-516
Credits: 0-0-4-2
Preamble: This course is aimed at giving research exposure to students by giving small
projects to them in physics related areas
Course outline: Each student will be given a project which they have to complete during their
first semester
Modules: Faculty members of physics and related areas can offer this project course. Towards
the end of vacation they have to submit their report and must give a seminar based on their
work. Evaluation will be based on student’s performance during the period and their report and
talk. The evaluation will be carried out by the faculty members involved in the program.
Textbooks: As advised by the faculty member
References: As advised by the faculty member
Course Name: Research project II [ I-PhD ]
Course Number: PH517
Credits: 0-0-8-4
Preamble: This course is aimed at giving research exposure to students by giving small
projects to them in physics related areas.
Course outline: Each student will be given a project which they have to complete during their
first year winter vacation.
Modules: Faculty members of physics and related areas can offer this project course. Towards
the end of vacation they have to submit their report and must give a seminar based on their
work. Evaluation will be based on students’ performance during the period and their report and
talk. The evaluation will be carried out by the faculty members involved in the program.
Textbooks: As advised by the faculty member.
References: As advised by the faculty member.
Course Name: Electromagnetic Theory
Course Number: PH521
Credits: 4-0-0-4
Preamble: The course is intended for physics students at the advanced undergraduate or beginning graduate level. It is designed to introduce the theory of electrodynamics, mainly from a classical field-theoretical point of view.
Course outline: The course content includes electrostatics and magnetostatics and their unification into electrodynamics, gauge symmetry, and electromagnetic radiation. The special theory of relativity is included, with four-vector fields and the covariant formulation of classical electrodynamics.
Modules: 1) Overview of Electrostatics & Magnetostatics: Differential equation for electric
field, Poisson and Laplace equations, Boundary value problems, Dielectrics, Polarization of
a medium, Electrostatic energy, Differential equation for magnetic field, Vector potential,
Magnetic field from localized current distributions
2) Maxwell’s Equations: Maxwell's equations, Gauge symmetry, Coulomb and Lorentz
gauges, Electromagnetic energy and momentum, Conservation laws.
3) Electromagnetic Waves: Plane waves in a dielectric medium, Reflection and Refraction at
dielectric interfaces, Frequency dispersion in dielectrics and metals, Dielectric constant and
anomalous dispersion, Wave propagation in one dimension, Group velocity, and Metallic waveguides.
4) Electromagnetic Radiation: Electric dipole radiation, Magnetic dipole radiation, Radiation
from a localized charge, The Lienard-Wiechert potentials
5) Relativistic Electrodynamics: Michelson–Morley experiment, Special theory of relativity,
Relativistic kinematics, Lorentz transformation and its consequences, Covariance of Maxwell
equations, Radius four-vector in contra variant and covariant form, Four-vector fields,
Minkowski space, covariant classical electrodynamics.
Textbooks: 1) Classical Electrodynamics by J.D. Jackson (John Wiley & Sons Inc, 1999) 2)
Introduction to Electrodynamics by D.J. Griffiths (Prentice Hall, 1999)
References: 1) Classical theory of fields, by L.D. Landau, E.M. Lifshitz and L.P. Pitaevskii
(Elsevier,2010) 2) The Feynman Lectures on Physics, by Feynman, Leighton, Sands
(CALTECH, 2013) 3) Classical Electrodynamics by W. Greiner (Springer, 1998) 4)
Foundations of Electromagnetic Theory by J.R. Reitz, F.J. Milford and R.W. Christy
(Addison-Wesley, 2008)
Course Name: Statistical Mechanics
Course Number: PH 522
Credits: 4-0-0-4
Preamble: Statistical mechanics uses the methods of probability to extend mechanics to many-
body systems and make statistical predictions about their collective behaviour. It also acts as a
bridge between thermodynamics and the mechanics of the constituent particles. The statistical
mechanics of ideal gas systems illustrates the basic workings of the formalism. The methods of
statistical mechanics serve as an essential prerequisite to many advanced topics in various
branches of physics where many-body systems are dealt with.
Course Outline: This course starts by introducing the concepts of basic probability theory. The
next modules explain the connection between many-body mechanics, phase space, and
probability theory. The course introduces the different statistical ensembles and the statistical
behaviour of classical and quantum systems.
Modules: 1) Review of Thermodynamics: Laws of Thermodynamics, Specific heat, Maxwell
relations, Thermodynamic potentials, Ideal gas, Equation of state, van der Waal's equations. 2)
Probability concepts and examples: random walk problem in one dimension, mean values,
probability distribution for large N. Probability distribution of many variables. 3) Liouville
equation, Boltzmann ergodic hypothesis, Gibbsian ensemble. Phase space and connection
between mechanics and statistical mechanics. Microcanonical ensemble. Classical ideal gas.
Gibb’s paradox. 4) Canonical ensemble partition function. Helmholtz free energy,
Thermodynamics from the partition function. Classical ideal gas- equipartition and virial
theorem. Examples: harmonic oscillator and spin systems, Grand canonical ensemble- density
and energy fluctuations- Gibbs free energy. 5) Formulation of quantum statistical mechanics
density matrix- micro-canonical, canonical and grand canonical ensembles- Maxwell-
Boltzmann, Fermi-Dirac, and Bose-Einstein statistics - comparison. 6) Ideal gas in classical and
quantum ensembles. Ideal Bose and Fermi systems. Examples of quantum ideal gases,
Landau diamagnetism, Pauli paramagnetism, Phonons in solids, Bose-Einstein condensation
in Harmonic Trap, White dwarf Star, Phase transformation.
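The one-dimensional random walk named in module 2 can be made concrete with a short numerical sketch. The Python snippet below is an illustrative aside, not part of the syllabus (the function names, step size, and walker count are assumptions): for a symmetric walk of N unit steps, the exact mean values are ⟨x⟩ = 0 and ⟨x²⟩ = N, and for large N the distribution of x approaches a Gaussian.

```python
import random

def random_walk_displacement(n_steps, rng):
    """Net displacement after n_steps of a symmetric 1D random walk (steps of +1 or -1)."""
    return sum(rng.choice((-1, 1)) for _ in range(n_steps))

def sample_moments(n_steps, n_walkers, seed=0):
    """Estimate <x> and <x^2> by averaging over many independent walkers."""
    rng = random.Random(seed)
    xs = [random_walk_displacement(n_steps, rng) for _ in range(n_walkers)]
    mean = sum(xs) / n_walkers
    mean_sq = sum(x * x for x in xs) / n_walkers
    return mean, mean_sq

# Exact results for the symmetric walk: <x> = 0 and <x^2> = n_steps.
# Sampled values fluctuate around these by O(1/sqrt(n_walkers)).
mean, mean_sq = sample_moments(n_steps=100, n_walkers=5000)
```

Increasing the number of walkers tightens the estimates; histogramming the displacements exhibits the Gaussian limit for large N.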
Textbooks: 1. Statistical Mechanics, R K Pathria (Academic Press Inc; 3rd revised edition,
25 February 2011) 2. Statistical Physics by K Huang (Wiley; Second edition, 24
September 2008) 3. Concepts in Thermal Physics, Stephen Blundell (OUP UK; 2nd edition, 24
September 2009)
References: 1. Fundamentals of statistical and thermal physics, F. Reif (Waveland Press, 1
January 2010) 2. Statistical Physics Part I by L D Landau and E M Lifshitz (Butterworth-
Heinemann; 3rd edition, 22 October 2013) 3. Statistical physics of particles by Mehran Kardar
(Cambridge University Press; 1st edition, 7 June 2007) 4. The Principles of Statistical
Mechanics, R. C. Tolman (Dover Publications Inc.; new edition, 1 June 1980)
Course Name: Condensed Matter Physics
Course Number: PH 523
Credits: 3-0-0-3
Preamble: A basic understanding of solids is important for practicing physicists as well as for
many other related disciplines. The course is an introduction to the physics of the solid state.
Course Outline: The course emphasizes the large-scale properties of solid materials resulting
from their atomic-scale properties. This course provides a basic understanding of what makes
solids behave the way they do, how they are studied, and the basic interactions which govern
their behaviour.
Modules: Introduction: Crystal Structures, Reciprocal Lattice, Brillouin Zones, X-ray
diffraction and Structure factor, Defects in Crystal structures. Lattice Vibrations and Phonons:
Monoatomic and Diatomic basis, Quantization of elastic waves, Phonon momentum and
Phonon density of states, Einstein and Debye model of heat capacity, Thermal properties of
solids. Electrons in Solids: Drude and Sommerfeld theories, Fermi momentum and energy,
Fermi surface, Density of states, Electrical conductivity, Ohm’s law, Motion in a magnetic
field, Hall Effect, Bloch Theorem and crystal momentum, Electron motion in Solids, Kronig-
Penney Model, Formation of bands, Effective mass. Semiconductors: Intrinsic and extrinsic
semiconductors, Acceptor and donor level, Bound State and optical transitions in
semiconductors. Degenerate and non-degenerate semiconductor, Optical properties of solids.
Magnetism: Introduction, Origin of magnetism, Bohr-van Leeuwen theorem, Types of
magnetism: Diamagnetism, Paramagnetism, Ferro- and Anti-ferromagnetism. Superconductivity: Basic
phenomena, Meissner effect, Types of superconductors, London equation, Idea of Cooper pair, Flux
quantization, Josephson’s tunneling.
Textbooks: 1. Introduction to Solid State Physics by C. Kittel, 8th Edition, John Wiley & Sons,
Inc, 2005. 2. Solid State Physics by N. W. Ashcroft and N. D. Mermin. 3. Condensed Matter
Physics by M. P. Marder, (John Wiley & Sons, 2010).
References: 1) Advanced Solid State Physics by Phillips. (Cambridge University Press, 2012).
2) Solid State Physics, Hook and Hall, Wiley Science 3) Physics of Semiconductor Devices,
S. M. Sze.
Course Name: Atomic and Molecular Physics
Course Number: PH-524
Credits: 3-0-0-3
Preamble: This course introduces the basic ideas of atomic and molecular physics. It teaches
students how to apply quantum mechanics and extract information from many-electron atoms
and molecules. Introduction to group theory is also provided.
Course outline: The course begins with a review of some of the basic concepts in quantum
mechanics and then discusses the time-dependent perturbation theory and its applications. It
will then proceed to many-electron atomic systems and then to molecules. Further the course
discusses the ideas and concepts associated with various spectroscopy techniques and will also
introduce the elementary concepts of group theory.
Modules: 1) Time-independent perturbation theory, Time-dependent perturbation theory and
application Fermi- Golden rule. Interaction of electromagnetic radiation with single electron
atoms, Rabi flopping, Dipole approximation and dipole selection rules, Transition rates, Line
broadening mechanisms, spontaneous and stimulated emissions and Einstein coefficients. 2)
Review of atomic structure of H, Atomic structure of two electron system-variational method,
alkali system, central field approximation, Slater determinant, Introduction to self-consistent
field method, L-S coupling, J-J coupling. 3) General nature of molecular structure,
molecular binding, LCAO, Born-Oppenheimer approximation. 4) Introduction to microwave,
infra-red and Raman spectroscopy, NMR and ESR, Symmetry and Spectroscopy.
Textbooks: 1. Quantum Mechanics, Leonard Schiff, Mc Graw Hill Education; 3 edition (9
April 2010) 2. Physics of atoms and molecules - Bransden and Joachain (Pearson, second
edition, 2011) 3. Fundamentals of molecular spectroscopy - C. Banwell and E. McCash (McGraw
Hill, 2013) 4. Introductory Quantum Mechanics, R.L. Liboff, Addison-Wesley (2002).
References: 1. Atoms, Molecules and Photons - Wolfgang Demtroder (Springer, Second
edition, 2006).2. Atomic Physics, C. J. Foot (Oxford, First edition 2005) 3. Group theory and
Quantum Mechanics-M. Tinkham (Dover Publications, First edition, 2003) 4. Chemical
applications of group theory-F Albert Cotton (Willey, Third edition, 2015)
Course Name: Electronics Laboratory Practicum
Course Number: PH-525P
Credits: 0-0-5-3
Preamble: To provide instruction and acquaintance with electronic devices and
instrumentation techniques important in the modern physics laboratory. This course will serve
as an introduction to practical laboratory electronics by way of covering the application of
analog, digital, frequency and mixed signal electronics to experiments in the physical sciences.
Course Outline: The course is a laboratory support to the electronics course PH 414.
List of Experiments 1. To design and use bipolar junction transistor (BJT) as an amplifier and
switch, based on common emitter (CE), common collector (CC) and common base (CB)
configurations. 2. Design of Integrator, Differentiator, low pass and high pass filter using
operational amplifier (Op Amp) IC 741. 3. Design of Wien bridge and Colpitts oscillators. 4.
Verify the mathematical expressions of De Morgan's theorems using electronic circuits. 5. Design of a
4-bit multiplexer and demultiplexer using flip-flops. 6. Design of 4-bit shift registers and
counters using flip-flops. 7. Design and verify A/D and D/A converters using Op Amps. 8.
Design of astable and monostable multivibrators using IC 555. 9. Study of the 8085 microprocessor.
References: 1. Basic Electronics, B.L. Thareja 2. Principles of Electronics, V.K. Mehta and
Rohit Mehta
Course Name: Research project III [I-PhD]
Course Number: PH526
Credits: (0-0-6-3)
Preamble: This course is aimed at giving research exposure to students by giving small
projects to them in physics related areas.
Course outline: Each student will be given a project which they have to complete during their
Second semester.
Modules: Faculty members of physics and related areas can offer this project course. Towards
the end of vacation they have to submit their report and must give a seminar based on their
work. Evaluation will be based on student’s performance during the period and their report and
talk. The evaluation will be carried out by the faculty members involved in the program.
Textbooks: As advised by the faculty member.
References: As advised by the faculty member
Course Name: Research project IV [I-PhD]
Course Number: PH527
Credits: (0-0-6-3)
Preamble: This course is aimed at giving research exposure to students by giving small
projects to them in physics related areas.
Course outline: Each student will be given a project which they have to complete during their
first year summer vacation.
Modules: Faculty members of physics and related areas can offer this project course. Towards
the end of vacation they have to submit their report and must give a seminar based on their
work. Evaluation will be based on student’s performance during the period and their report and
talk. The evaluation will be carried out by the faculty members involved in the program.
Textbooks: As advised by the faculty member.
References: As advised by the faculty member
Course Name: Experimental Research Techniques
Course Number: PH 611P
Credits: (0-0-7-4)
Preamble: According to Newton's third law, we could move the earth up and down just by
throwing a ball up and down. But why don't we feel it? Simply because the effect is immeasurable
within the uncertainty of the measuring set-up. Performing an experiment without a
knowledge of uncertainty has no meaning. The students will be given a flavour of what it
really means to (a) perform an experiment; (b) develop a mini experiment; (c) assemble
and engineer tools.
Course Outline: The aim of the proposed course is to amalgamate the concepts in Physics
through assembling, developing mini experiments and building components.
Modules: Temperature dependence of Electrical resistivity of materials: This experiment
involves measuring the temperature-dependent resistivity of a material using the four-probe
and van der Pauw methods. The skills that one will develop are making fine contacts on the
sample and learning the intricacies involved in making this set-up.
Electronic properties of material using photoemission technique: Photoemission
experiments will be done on any material and its electronic properties will be studied. The
skills that one will develop are the intricacies involved in conducting experiments in ultra-high
vacuum conditions.
Seebeck coefficient measurement: Develop mini Seebeck coefficient experiment to
distinguish n-type and p-type semiconductors from a mixture of it.
Structural properties of material using powder X-ray diffraction (XRD) technique.
Course Name: Seminar and report
Course Number: PH614
Credits: 0-0-4-2
Preamble: This course is aimed at developing students' self-study and presentation skills,
which are very important for building a successful research career.
Course outline: Each student will choose a particular topic for their seminar. Student will be
continually preparing in a self-study mode in consultation with faculty members working in
physics related topics. Students are also required to write a report.
Modules: Students will prepare continually during the semester in consultation with
faculty members. At the end of the semester, each student has to give a seminar and submit a report. Faculty
members who are involved in the program will evaluate students based on their performance
during the period and their seminar and report.
Textbooks: As advised by the faculty member
References: As advised by the faculty member
Course Name: Special Topics in Quantum Mechanics
Course Number: PH 613
Credits: 3-0-0-3
Preamble: This course introduces some of the advanced level topics on quantum mechanics.
Course outline: The course begins with a review of some of the basic concepts in quantum
mechanics and then discusses the angular momentum algebra. It will then proceed to discuss
the concepts in scattering theory, symmetry principles and second quantisation. Relativistic
quantum mechanics will be introduced towards the end of the course.
Modules: 1. Review of basic concepts in quantum mechanics, measurements, observables and
generalized uncertainty relations, change of basis, generator of translation 2. Angular
Momentum: General theory of angular momentum, Angular momentum algebra, Addition of
angular momenta, Clebsch-Gordon coefficients, Tensor operators, matrix elements of tensor
operators, Wigner-Eckart theorem 3. Scattering Theory: Non-relativistic scattering theory.
Scattering amplitude and cross- section. The integral equation for scattering. Born
approximation. Partial wave analysis, optical theorem 4. Symmetries in Quantum Mechanics:
Symmetry principles in quantum mechanics, conservation laws and degeneracies, discrete
symmetries, parity and time reversal 5. Second Quantization: Systems of identical particles,
Symmetric and antisymmetric wave functions. Bosons and Fermions. Pauli's exclusion
principle, occupation number representation, commutation relations, applications of second
quantization. Instructors may choose any one of the modules given below: 6. Elements of
relativistic quantum mechanics. The Klein-Gordon equation. The Dirac equation. Dirac
matrices, spinors. Positive and negative energy solutions, physical interpretation.
Nonrelativistic limit of the Dirac equation.
Text Book: 1. Modern Quantum Mechanics - J J Sakurai (Addison-Wesley, revised edition,
1993) 2. Advanced Quantum Mechanics, J J Sakurai (Pearson, First edition, 2002) 3. Quantum
Mechanics, Cohen-Tannoudji, B Diu, F Laloe (Vol. II) (Wiley, second edition, 1977)
References: 1. Quantum Mechanics, Vol. I and II - Messiah (Dover Publications Inc., 2014) 2.
Practical Quantum Mechanics - Siegfried Flügge (Springer, 1994) 3. Many-electron theory - S.
Raimes (North-Holland Pub. Co., 1972) 4. Relativistic Quantum Mechanics - W. Greiner and D.
A. Bromley (Springer, 3rd edition, 2000) 5. Quantum theory of many-particle systems - Fetter
and Walecka (Dover Publications Inc., 2003) 6. Quantum Mechanics - Merzbacher (Third edition,
Wiley, 2011) 7. Quantum mechanics - Landau and Lifshitz (Butterworth-Heinemann Ltd; 3rd
revised edition, 18 December 1981)
Course Name: Post-Graduate Project-1 [M Sc]
Course Number: PH 518P
Credits: 0-0-6-3
Preamble: The course is aimed at giving research exposure to students by giving small projects
to them in physics related areas.
Course outline: Each student will be given a project which they have to complete during their
1st semester.
Modules: Faculty members of physics and related areas can offer this project course. Towards
the end of the semester they have to submit their reports and must give a seminar based on their
work. Evaluation will be based on students' performance during the period and their report and
talk. The evaluation will be carried out by the faculty members involved in the program.
Textbooks: As advised by the faculty member
References: As advised by the faculty member
Course Name: Computational Methods for Physicists
Course Number: PH 621
Credits: 2-0-4-4
Preamble: The objective of the proposed course is to introduce students to the basic ideas of
numerical methods and programming.
Course Outline: The course will cover the basic ideas of
various numerical techniques for interpolation, extrapolation, integration, differentiation,
solving differential equations, matrices and algebraic equations.
Modules: 1) Basic introduction to operating system fundamentals. 2) Introduction to C: Program
Organization and Control Structures, loops, arrays, and functions; Error, Accuracy, and Stability.
3) Interpolation and Extrapolation - Curve Fitting: Polynomial Interpolation and Extrapolation,
Cubic Spline Interpolation, Fitting Data to a Straight Line, examples from experimental data
fitting 4) Integration and differentiation: Numerical Derivatives, Romberg Integration, Gaussian
Quadrature and Orthogonal Polynomials. 5) Root Finding: Newton-Raphson Method Using
Derivative - Roots of a Polynomial 6) Ordinary Differential Equations: Runge-Kutta Method,
Adaptive Step size Control for Runge- Kutta, Examples from electrodynamics and quantum
mechanics 7) Matrices and algebraic equations: Gauss-Jordan Elimination Gaussian
Elimination with Back substitution, LU Decomposition 8) Concept of simulation, random
number generator.
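As a short worked example of the Newton-Raphson method from module 5, here is a sketch in Python for brevity, even though the course itself programs in C; the polynomial x³ − 2x − 5 and the function names are illustrative assumptions, not course material:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Root of the polynomial x^3 - 2x - 5 (a classical test case), starting from x0 = 2:
root = newton_raphson(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
# Convergence is quadratic near a simple root, so a handful of iterations suffice.
```

The same structure, with the derivative supplied analytically, carries over directly to the "Roots of a Polynomial" exercise in the module.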
Textbooks: 1. The C Programming Language by B W Kernighan and D M Ritchie (PHI
Learning Pvt. Ltd, 2011) 2. Elementary Numerical Analysis: An Algorithmic Approach by S D Conte
and C de Boor (McGraw-Hill International, 1980).
References: 1. Computer Programming in C by V. Rajaraman, (PHI Learning Pvt. Ltd, 2011).
2. Numerical Methods by Germund Dahlquist and Åke Björck (Dover Publications, 1974) 3.
Numerical Recipes by William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian
P. Flannery, (Cambridge University Press, 1992).
Course Name: Mini-thesis II [I-PhD]
Course Number: PH- 622
Credits: 0-0-6-3
Preamble: The course is aimed at equipping students the necessary knowledge and skills to
take up their Ph.D. work.
Course outline: Each student can work with their supervisor where they are expected to do
research at an advanced level.
Modules: At the end of semester they have to submit their report and must give a seminar
based on their work. A committee shall be formed to evaluate the students’ performance during
the period and their report and seminar.
Textbooks: As advised by the faculty member.
References: As advised by the faculty member
List of Discipline Elective Courses
Course Name: Magnetism and Magnetic Materials
Course Number: PH508
Credits: 3-0-0-3
Course Preamble: Magnetism is an open field where engineers, material scientists, physicists
and others work together. This course is proposed for undergraduate/postgraduate level
students. It starts with the fundamentals of magnetism and proceeds to explain magnetic
materials and their applications.
Course Outline: The course will cover a thorough study about different types of magnetism
along with the types of magnetic interactions. Also various types of glassy magnetism and
magnetism in low dimensions will be covered. A detailed study about novel magnetic materials
which are used for technological application will be carried out. Further, the course will
introduce various measurement techniques used for measuring magnetization.
Modules: Introduction History of magnetism, Magnetic units, Classical and quantum
mechanical model of magnetic moment of electrons, magnetic properties of free atoms. Types
of magnetism Classification of magnetic materials, Theories of Diamagnetism,
Paramagnetism, Theories of ordered magnetism, Quantum theory of magnetism: electron-electron
interactions, localized electron theory, itinerant electron theory.
Magnetic interactions Origin of crystal field, Jahn Teller effect, Magnetic dipolar interaction,
Origin of exchange interaction, Direct exchange interactions, Indirect exchange interactions in
ionic solid and metals, double and anisotropic exchange interaction.
Magnetic domains Development of domain theory, Bloch and Néel walls, Domain wall
pinning, Magnons, Bloch's law, Magnetic anisotropy, magnetostriction.
Competing interactions and low dimensionality Frustration, Spin glass,
superparamagnetism, one and two dimensional magnets, thin film and multilayers, Heisenberg
and Ising models
Novel magnetic materials Colossal and giant magneto resistive materials, magnetic
refrigerant materials, Shape memory alloys, multiferroics, spintronics devices and their
application in magnetic storage.
Measurement techniques Production and measurement of fields, magnetic shielding, Faraday
balance, AC susceptometer, Vibrating sample magnetometer, torque magnetometer, SQUID
magnetometer, Experimental methods at low temperature.
Textbooks: 1. B. D. Cullity and C. D. Graham, Introduction to Magnetic Materials. John Wiley
& Sons, Inc, 2011. 2. D. Jiles, Introduction to Magnetism and Magnetic Materials. Taylor and
Francis, CRC Press, 1998.
Reference books: 1. K. H. J. Buschow and F. R. de Boer, Physics of Magnetism and Magnetic
Materials. Kluwer Academic Publishers, 2003. 2. Stephen Blundell, Magnetism in Condensed
Matter. Oxford University Press, 2001. 3. Mathias Getzlaff, Fundamentals of Magnetism,
Springer, 2008.
Course Name: Mesoscopic Physics and Quantum Transport
Course Number: PH601
Credits: 3-0-0-3
Course Preamble: Though a young branch of science, mesoscopic physics already has several
exciting and instructive achievements in both fundamental understanding and technological
applications. This course highlights the mechanisms of electronic transport at
mesoscopic scales, where novel concepts of quantum mechanics are necessary. The course deals
with understanding how physics and quantum rules govern electronic
transport in low dimensional structures.
Course Outline: The course is planned to give a broad overview of the world of mesoscopic
physics and various approaches to studying quantum transport and related phenomena in
nanostructures. Among the topics covered are length scaling in physics, conductance from
transmission, scattering approaches, semiclassical transport, interference and decoherence effects;
it concludes by emphasizing applications of mesoscopic physics alongside the rapid
evolution of novel materials and experimental techniques.
Modules: 1. Introduction Drude and Sommerfeld model for electrons in solids, Quantum
mechanics of a particle in a box, Bloch states, Density of states and Dimensionality.
2. Mesoscopic physics Mesoscopic phenomena and length scaling in physics, Quantum
structures, Tunneling through a potential barrier, Coulomb blockade.
3. Quantum transport and Localization Influence of reduced dimensionality on electron
transport: Ballistic and Diffusive Transport, Single channel Landauer formula, Landauer-
Buttiker formalism, Localization, Thermally activated conduction, Thouless picture, General and
special cases of localization, Weak localization regime.
4. Quantum Hall effect Origin of zero resistance, Two Dimensional Electron Gas, Transport
in Graphene and two dimensional systems, Localizations in weak and strong magnetic fields,
Quantum Hall effect, Spin Hall effect.
5. Quantum interference effects in electronic transport Conductance in mesoscopic
systems, Shubnikov-de Haas, de Haas-van Alphen, and Aharonov-Bohm oscillations, Conductance fluctuations.
6. Mesoscopic Physics with Superconductivity Superconducting ring and thin wires, weakly
coupled superconductors, Josephson effects, Andreev Reflections, Superconductor-Normal
and Superconductor-Normal-Superconductorjunctions.
7. Application of Mesoscopic physics Optoelectronics, Spintronics and Nanoelectronic
Text Books: 1. Y. Imry, Introduction to Mesoscopic Physics, Oxford University Press, 2008. 2. S.
Datta, Electronic Transport in Mesoscopic Systems, Cambridge University Press, 1997.
Reference Books:
1. S. Datta, Quantum Transport: Atom to Transistor, Cambridge University Press, 2005. 2. B.L.
Altshuler, P.A. Lee, R.A. Webb (Eds.), Mesoscopic Phenomena in Solids
(Modern Problems in Condensed Matter Sciences), North Holland, July 26, 1991. 3. D. K.
Ferry, S. M. Goodnick, Transport in Nanostructures, Cambridge University Press, 2009. 4. N.
W. Ashcroft and N. D. Mermin, Solid State Physics, Cengage Learning, 1976. 5. P. Harrison,
Quantum Wells, Wires & Dots: Theoretical and Computational Physics of Semiconductor
Nanostructures, Second Edition, Wiley Science, 2009.
Course Name: Advanced Condensed Matter Physics
Course Number: PH603
Credits: 3-0-0-3
Course Preamble: The aim of the proposed course is to introduce the basic notions of
condensed matter physics and to familiarize the students with various aspects of
interaction effects. This course will bridge the gap between basic solid state physics and
the quantum theory of solids. The course is proposed for postgraduate as well as undergraduate
students.
Course Outline: The course begins with the review of some of the basic concepts of
introductory condensed matter physics and then sequentially explores the interaction effects of
electron-electron/phonon, optical properties of solids, interaction of light with matter and
finally the superconductivity.
Course Modules: 1. Second quantization for Fermions and Bosons. Review of Bloch's theorem,
tight-binding model, Wannier orbitals, density of states.
2. Born-Oppenheimer approximation. Effects of electron-electron interactions -Hartree-Fock
approximation, exchange and correlation effects. Fermi liquid theory, elementary excitations.
3. Dielectric function of electron systems, screening, random phase approximation, plasma
oscillations, optical properties of metals and insulators, excitons, polarons, fluctuation-
dissipation theorem.
4. Review of harmonic theory of lattice vibrations, harmonic effects, electron-phonon
interaction -mass renormalization, effective interaction between electrons, polarons.
5. Metal-Insulator transition, Mott insulators, Hubbard model, spin and charge density waves,
electrons in a magnetic field, Landau levels, integer quantum Hall effect.
6. Superconductivity: phenomenology, Cooper instability, BCS theory, Ginzburg-Landau theory.
Text books:
1. Solid State Physics by N. W. Ashcroft and N. D. Mermin. (Holt, Rinehart and
Winston, 1976). 2. Quantum Theory of Solids by C. Kittel. (Wiley, 1987). 3. Condensed Matter
Physics by M. P. Marder. (John Wiley & Sons, 2010). 4. Solid State Physics by H. Ibach and
H. Luth. (Springer Science & Business Media, 2009).
References: 1. Theoretical Solid State Physics by W. Jones and N. H. March. (Courier Corporation, 1985).
2. Advanced Solid State Physics by Phillips. (Cambridge University Press, 2012). 3. Many
Particle Physics by G. D. Mahan. (Springer Science & Business Media, 2000). 4. Elementary
Excitations in Solids by D. Pines. (Advanced Book Program, Perseus Books, 1999). 5. Lecture
Notes on Electron Correlation and Magnetism by Patrik Fazekas. (World Scientific, 1999). 6.
Quantum Theory of the Electron Liquid by Giuliani and Vignale. (Cambridge University Press).
Course Name: Nuclear and Particle Physics
Course Number: PH 612
Credits: (3-0-0-3)
Preamble: The objective of the proposed course is to introduce students to the fundamental
principles and concepts of nuclear and particle physics. Students will be able to know the
fundamentals of the interaction of high energy particles. This course is expected to provide a
working knowledge to real-life problems.
Course Outline: The course begins with basic nuclear phenomenology including stability.
Eventually it will explore nuclear models and reactions; experimental methods: accelerators,
detectors, detector systems; particle phenomenology: leptons, hadrons, quarks; elements of the
quark model: spectroscopy, magnetic moments, masses.
Modules: 1. Properties of Nuclei: Nuclear size, nuclear radius and charge distribution, mass
and binding energy, semi-empirical mass formula, angular momentum, parity and isospin,
magnetic dipole moment, electric quadrupole moment and nuclear shape.
2. Two-body problems: Deuteron ground state, excited states, spin dependence of nuclear
forces, two nucleon scattering, charge symmetry and charge independence of nuclear forces,
exchange nature of nuclear forces, Yukawa’s theory.
3. Nuclear decay: Alpha, Beta and Gamma decay, Gamow theory, Fermi theory, direct
evidence for the neutrino.
4. Nuclear models: Liquid drop model, shell model, magic numbers, ground state spin, and
collective model.
5. Nuclear Reactions: Different types of reactions, Breit-Wigner dispersion relation,
Compound nucleus formation and break-up, nuclear fission, neutron physics, fusion reaction,
nuclear reactor.
6. Elementary particles: Fundamental interactions. Particle Zoo: Leptons, Hadrons. Organizing
principle: Baryon and Lepton Numbers, Strangeness, Isospin, The eightfold way. Quarks:
Colour charge and strong interactions, confinement, Gell-Mann – Okubo mass relation,
magnetic moments of Hadrons. Field Bosons: charge carriers. The Standard Model: parity non-
conservation in weak interactions, Wu's experiment, elementary idea about electroweak
unification, Higgs boson and origin of mass, quark model, concept of colour charge, discrete
symmetries, properties of quarks and leptons, gauge symmetry in electrodynamics, particle
interactions and Feynman diagrams.
Text Books: 1. K.S. Krane, Introductory Nuclear Physics, John Wiley (2008). 2. D. J. Griffiths,
Introduction to Elementary Particles, John Wiley & Sons Inc. (2008).
References: 1. W. E. Burcham and M. Jobes, Nuclear and Particle Physics, John Wiley & Sons
Inc. (1979). 2. W. L. Cottingham and D. A. Greenwood, An Introduction to Nuclear
Physics, Cambridge University Press (2001). 3. A. Das and T. Ferbel, Introduction to nuclear
and particle physics, John Wiley (2003). 4. M. A. Preston and R. K. Bhaduri, Structure of the
nucleus, Addison-Wesley (2008). 5. S. N. Ghoshal, Atomic and Nuclear Physics (Vol. 2) (S.
Chand, 2010). 6. R. R. Roy and B. P. Nigam, Nuclear Physics: Theory and Experiment, New Age.
7. D. Perkins, Introduction to High Energy Physics, Cambridge University Press; 4th edition
(2000). 8. G. L. Kane, Modern Elementary Particle Physics, Westview Press. 9. B. R. Martin,
Nuclear and Particle Physics: An Introduction, Wiley (2013).
Course Name: Introduction to Molecular Simulations
Course Number: PH701
Credits: 2-2-0-4
Course content:
Classical statistical mechanics
1) Ensembles: microcanonical, canonical, grand canonical ensembles; ideal gas, harmonic
oscillator, spin systems. Introduction to stochastic processes, Brownian Motion, Langevin
equation, Fokker-Planck equation. Introduction to liquid state theory: pair distribution
functions, structure factor, coherent and incoherent scattering, Ornstein-Zernike correlation
function. Diffusion in a liquid: mean square displacement, self and collective van
Hove correlation functions, intermediate scattering function and dynamic structure factor.
2) Programming in C and FORTRAN 95 - essential for programming in this course
3) Introduction to Monte Carlo methods: Value of using the MC method, Gaussian distribution
from a 1d random walk, Metropolis algorithm for constructing the NVT ensemble, Implementation
of ensembles using MC methods.
4) Proj. 1: Write a Monte Carlo simulation of a model liquid.
5) Introduction to molecular dynamics simulations: numerical integration of linear differential equations, leap-frog algorithm, velocity Verlet algorithm, periodic boundary conditions in one, two and three dimensions.
6) Proj. 2: Write an MD simulation code for simple liquids and for a polymer chain connected by harmonic springs.
7) Introduction to Brownian and Langevin dynamics simulations: simple Brownian dynamics algorithm without hydrodynamic interactions; Langevin dynamics simulations.
8) Proj. 3: Write a Brownian dynamics code to simulate colloids in a solution and the motion of a single polymer chain.
9) Analysis of data from simulations: computation of the radial distribution function, structure factor, time series analysis, mean square displacement.
10) Proj. 4: Use the trajectories produced in the earlier simulations to compute: radial distribution functions; mean square displacement of the center of mass and of monomers for a polymer chain; stress, the stress correlation function and viscosity.
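As an illustration of the Monte Carlo material in item 3 above, here is a minimal Metropolis sketch for a toy system (a 1d particle in a harmonic potential, sampling the canonical distribution; the function name and parameter values are illustrative, not part of the course material):

```python
import math
import random

def metropolis_harmonic(n_steps, beta=1.0, step=0.5, seed=42):
    """Toy Metropolis sampler for U(x) = x^2 / 2 at inverse temperature beta."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)          # trial move
        dU = 0.5 * (x_new ** 2 - x ** 2)              # energy change
        if dU <= 0 or rng.random() < math.exp(-beta * dU):
            x = x_new                                 # Metropolis acceptance
        samples.append(x)
    return samples

samples = metropolis_harmonic(200_000)
# for U(x) = x^2 / 2 the equilibrium <x^2> equals 1/beta
var = sum(s * s for s in samples) / len(samples)
```

The same accept/reject step carries over to a model liquid (Proj. 1): the trial move then displaces one particle, and dU comes from the pair potential.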
Text & Reference Books:
Statistical Mechanics R. K. PathriaIntroduction stochastic process in physics and astronomy,
Rev. Mod. Phys. 1 15(1943) what is liquid? Understanding the state of matter, J. A. Barker and
D. Henderson, Rev. Mod.Phys. 587 48(1976).Theory of simple liquids by J. P. Hansen and I.
R. McDonald Statistical Mechanics by D. A. McQuarrieComputer simulation of liquids by M.
P. Allen and D. J. Tildesey Understanding molecular simulation by Daan Frenkel. The art of
molecular dynamics simulations by D. C. RappaportA guide to Monte Carlo simulations in
statistical Physics by D. P. Landau and Kurt Binder
Find Endpoint Given Midpoint Worksheet
Midpoint of a Line Segment - Worksheet with answers. The answer to the first problem leads students to the second, and so on until the answer to the last problem returns them to the start. Wizer.me free interactive worksheet "Midpoint and Distance Formula" by teacher Mary Causey. Watch below how to solve this example. The midpoint of two endpoints A(x1, y1) and B(x2, y2) can be found using this formula.
Pupils are to be encouraged to look for quicker ways that don't involve plotting; some pupils are likely to deduce the formula themselves, but this is not necessary to answer the majority of questions. This is a self-checking worksheet to practice finding both the midpoint and an endpoint given the midpoint and one endpoint. Sheets in which pupils work out the midpoint of line segments. Packed with problems like this, our printable midpoint formula worksheet provides the necessary practice in applying the midpoint formula ((x1 + x2)/2, (y1 + y2)/2) and finding the center point.
Round your answer to the nearest tenth if necessary.
Step-by-step guide to find the midpoint. This example explains how to find the endpoint of a segment given the midpoint and one endpoint. Answers are provided in a Smart Notebook file. Find the other endpoint of the line segment with the given endpoint and midpoint. The middle of a line segment is its midpoint. Find the midpoint of the line segment with the given endpoints. Free pdf worksheets are also included. This free worksheet contains 10 assignments, each with 24 questions, with answers.
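The "find the other endpoint" problems above simply invert the midpoint formula: if M is the midpoint of AB, then B = (2Mx - Ax, 2My - Ay). A short sketch (the helper names are mine):

```python
def midpoint(a, b):
    """Midpoint formula: ((x1 + x2) / 2, (y1 + y2) / 2)."""
    (ax, ay), (bx, by) = a, b
    return ((ax + bx) / 2, (ay + by) / 2)

def other_endpoint(m, a):
    """Given midpoint M of segment AB and endpoint A, return B = 2M - A."""
    (mx, my), (ax, ay) = m, a
    return (2 * mx - ax, 2 * my - ay)

# endpoint (2, 3) with midpoint (5, 5) gives the other endpoint (8, 7)
b = other_endpoint((5, 5), (2, 3))
```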
Offset Bending Calculator
In electrical work, precision and efficiency are crucial. Whether you’re a seasoned electrician or just starting out, mastering conduit bending is essential. The Offset Bending Calculator is a tool
that has transformed the way professionals approach conduit work, making it easier to achieve accurate bends and offsets. This guide will walk you through using the Offset Bending Calculator, helping
you become an expert in conduit bending and ensuring your work is precise and efficient.
What is an offset bending calculator?
Basics of offset bending
Offset bending is a fundamental technique in electrical conduit installation. It involves creating two bends in a conduit to navigate around obstacles or change the path of the conduit run. An offset
bending calculator is a specialized tool designed to simplify the complex calculations involved in determining the precise measurements needed for these bends.
By inputting a few key values, such as the desired offset distance and the angle of the bend, the calculator can provide accurate results for executing perfect offsets every time.
Benefits of using an offset calculator for electrical conduit work
The advantages of incorporating an offset calculator into your electrical conduit work are numerous. First and foremost, it significantly reduces the likelihood of errors in your bending
calculations. This precision not only saves time but also minimizes material waste, which can be costly in large-scale projects.
Additionally, using a calculator allows for consistent results across multiple bends, ensuring a professional and uniform appearance in your conduit runs. The offset calculator also enables
electricians to tackle more complex bending scenarios with confidence, such as rolling offsets or saddle bends, which require multiple calculations to execute correctly.
How an offset calculator improves accuracy and efficiency
By leveraging the power of a bending calculator, electricians can drastically improve their workflow efficiency. Instead of relying on manual calculations or trial-and-error methods, the calculator
provides instant, accurate results. This speed and precision allow for better project planning and execution. Moreover, many modern offset calculators come in the form of smartphone apps, making them
readily accessible on job sites. These apps often include additional features like bend angle calculations, conduit fill capacities, and even integration with other electrical calculation tools,
further enhancing an electrician’s capabilities and productivity.
How do I use an offset bending calculator for conduit work?
Step-by-step guide to using an offset calculator
Using an offset bending calculator may seem daunting at first, but with a systematic approach, it becomes second nature. Let’s try a step-by-step process:
1. Determine the offset distance required for your conduit run.
2. Choose the desired bend angle (typically 30, 45, or 60 degrees).
3. Measure the conduit size you’re working with.
4. Open your offset calculator app or use an online tool.
5. Enter the offset distance and bend angle into the calculator.
6. If prompted, input the conduit size for more accurate results.
7. Let the calculator process the information.
8. Review the output, which typically includes the distance between bends and the length of the offset.
By following these steps, you’ll be able to quickly generate the measurements needed for your offset bends.
Key measurements and inputs required for accurate calculations
To ensure the most accurate results from your offset bending calculator, it’s crucial to understand and correctly input the key measurements. The primary values you’ll need are:
1. Offset distance: The perpendicular distance between the two parallel sections of conduit.
2. Bend angle: The angle at which you’ll be bending the conduit (common angles are 30°, 45°, and 60°).
3. Conduit size: The diameter of the conduit you’re working with (e.g., 1/2″, 3/4″, 1″).
4. Conduit type: Whether you’re using EMT (Electrical Metallic Tubing), rigid conduit, or PVC, as different materials may require slight adjustments in calculations.
Some advanced calculators may also ask for additional information such as the type of bender you’re using or any deduct values specific to your tools.
Interpreting the results and applying them to your bending process
Once you’ve entered the necessary information into your offset calculator, you’ll receive several key pieces of data. The most important outputs are:
1. Distance between bends: This tells you how far apart your two bends should be on the conduit.
2. Shrinkage: The amount of length you’ll lose due to the bends, which is crucial for cutting your conduit to the correct length.
3. Travel: The overall length of the offset section of your conduit run.
To apply these results, mark your conduit according to the calculator’s output. Use a protractor or the degree markings on your conduit bender to ensure you’re bending at the correct angle. Always
double-check your measurements before making the bends, as precision at this stage will save time and materials in the long run.
What are the essential formulas and multipliers for offset bending calculations?
The offset multiplier is a critical concept in conduit bending calculations. It’s a factor used to determine the distance between bends based on the desired offset and the chosen bend angle. The
multiplier varies depending on the angle of the bend:
– For a 30° bend, the multiplier is 2
– For a 45° bend, the multiplier is 1.414
– For a 60° bend, the multiplier is 1.154
To use the offset multiplier, simply multiply your desired offset distance by the appropriate factor. For example, if you need a 10-inch offset using 30° bends, you would multiply 10 by 2, giving you
a distance of 20 inches between bends.
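Since each multiplier is just 1 ÷ sin(bend angle), the table above can be verified in a couple of lines (a quick check, not tied to any particular calculator app):

```python
import math

# offset multiplier = 1 / sin(bend angle), matching the 2 / 1.414 / 1.154 table
multipliers = {a: 1 / math.sin(math.radians(a)) for a in (30, 45, 60)}

# the worked example: a 10-inch offset with 30-degree bends
distance_between_bends = 10 * multipliers[30]  # 20 inches
```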
Calculating bend angles and distances using trigonometry
While offset calculators handle the complex math for you, understanding the underlying trigonometry can be beneficial. The calculations for offset bending rely on basic trigonometric functions,
particularly the sine and tangent.
The formula for calculating the distance between bends is:
Distance = Offset ÷ sin(Angle)
For the shrinkage, or “take-up,” the formula is:
Shrinkage = Offset × tan(Angle ÷ 2)
(equivalently, Offset ÷ sin(Angle) − Offset ÷ tan(Angle); this reproduces the familiar shrink constants of roughly 0.27, 0.41 and 0.58 per inch of offset at 30°, 45° and 60°).
These formulas form the backbone of offset bending calculations and are what your calculator uses to provide accurate results.
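A minimal sketch of the two formulas (the function name is illustrative; shrinkage is written here in the form Offset × tan(Angle ÷ 2), which reproduces the common shrink constants of roughly 0.27, 0.41 and 0.58 per inch of offset at 30°, 45° and 60°):

```python
import math

def offset_bend(offset, angle_deg):
    """Return (distance between bends, shrinkage) for a two-bend offset.

    Illustrative only: a real layout also needs the bender's own deduct.
    """
    a = math.radians(angle_deg)
    distance = offset / math.sin(a)        # Distance = Offset / sin(Angle)
    shrinkage = offset * math.tan(a / 2)   # Shrinkage = Offset * tan(Angle / 2)
    return distance, shrinkage

# a 10-inch offset with 45-degree bends
d, s = offset_bend(10, 45)   # d ~ 14.14 in, s ~ 4.14 in
```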
Common formulas electricians use for offset bending
While the offset multiplier and trigonometric formulas are fundamental, electricians often use simplified versions or rules of thumb for quick calculations in the field. Some common formulas include:
1. The “6 times” rule for 10° bends: The distance between bends is approximately 6 times the offset distance.
2. The “rise over run” method: For a 45° offset, the rise (offset distance) is equal to the run (the horizontal distance between the two bends).
3. The “center-to-center” formula: For determining the overall length of an offset, add the offset distance to the distance between bends.
These simplified formulas can be useful for quick estimates, but for precise work, it’s always best to rely on a calculator or the exact trigonometric equations.
How can I perform rolling offset calculations for complex conduit bending?
Differences between standard offsets and rolling offsets
Rolling offsets present a more complex challenge in conduit bending. Unlike standard offsets, which occur in a single plane, rolling offsets involve bends in two planes. This type of offset is
necessary when you need to navigate around obstacles that aren’t directly above or below the conduit’s path. The calculation for rolling offsets requires consideration of both the vertical and
horizontal displacement, making it more complicated than standard offset calculations.
Steps to calculate rolling offsets accurately
To calculate a rolling offset, follow these steps:
1. Determine the vertical and horizontal offset distances.
2. Use the Pythagorean theorem to calculate the true offset distance: True Offset = √(Vertical Offset² + Horizontal Offset²)
3. Decide on your bend angle (typically 22.5° for rolling offsets).
4. Use your offset calculator or the appropriate formula to determine the distance between bends based on the true offset and chosen angle.
5. Calculate the rolling angle using trigonometry: Rolling Angle = tan⁻¹(Horizontal Offset ÷ Vertical Offset)
6. Mark your conduit with both the distance between bends and the rolling angle.
Many advanced offset bending calculator apps include features for rolling offset calculations, simplifying this process significantly.
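The six steps above can be sketched as follows (the function name and the 22.5° default are illustrative choices):

```python
import math

def rolling_offset(vertical, horizontal, bend_angle_deg=22.5):
    """Rolling-offset recipe: true offset by Pythagoras, distance between
    bends from the true offset, and the rolling angle from the arctangent."""
    true_offset = math.hypot(vertical, horizontal)
    distance = true_offset / math.sin(math.radians(bend_angle_deg))
    roll_angle = math.degrees(math.atan2(horizontal, vertical))
    return true_offset, distance, roll_angle

# 6 inches up and 8 inches over: true offset 10 in, roll angle ~53.13 degrees
t, d, r = rolling_offset(6, 8)
```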
Tips for executing rolling offsets with precision
Executing rolling offsets requires careful planning and precision. Here are some tips to ensure success:
1. Use a good quality level to ensure your bends are in the correct plane.
2. Mark both the bend points and the rolling angle clearly on your conduit.
3. Consider using a protractor or angle finder to verify your rolling angle.
4. Make small adjustments as needed, checking your work frequently.
5. Practice on scrap pieces of conduit before attempting complex rolling offsets on a job site.
6. Utilize specialized rolling offset tools or jigs for consistent results.
Remember, mastering rolling offsets takes time and practice, but with the right calculations and techniques, you can achieve professional-looking results.
What are the best practices for using hand benders with offset calculations?
Choosing the right hand bender for your conduit size
Selecting the appropriate hand bender is crucial for accurate offset bending. Hand benders are designed for specific conduit sizes, typically ranging from 1/2″ to 1″ for EMT conduit. Using the wrong
size bender can result in inaccurate bends and potentially damaged conduit.
Always ensure that your bender matches the size of the conduit you’re working with. Additionally, familiarize yourself with the bender’s markings and features, as different models may have slightly
different reference points for making bends.
Techniques for marking and measuring conduit before bending
Proper marking and measuring are essential for successful offset bending. Here are some best practices:
1. Use a fine-point permanent marker for precise markings.
2. Measure twice before making any marks to ensure accuracy.
3. Use a tape measure that’s in good condition to avoid measurement errors.
4. Mark both the bend points and the back of the bend (where the conduit will contact the bender).
5. Consider using different color markers for various types of marks (e.g., red for bend points, blue for cut lines).
6. Use a conduit reamer to remove any burrs after cutting, as these can affect your measurements.
7. For complex bends, create a full-scale drawing on a piece of plywood to use as a template.
Common mistakes to avoid when using hand benders
Even experienced electricians can make mistakes when using hand benders. Here are some common pitfalls to avoid:
1. Overbending: Always underbend slightly and make minor adjustments as needed.
2. Ignoring “spring back”: Account for the conduit’s tendency to slightly unbend after pressure is released.
3. Incorrect foot placement: Ensure your foot is properly positioned on the bender to maintain control.
4. Forgetting to account for bender deduction: Each bender has a specific deduction that needs to be considered in your calculations.
5. Rushing the process: Take your time to ensure accuracy, especially when working with expensive materials.
6. Neglecting to check for level: Use a level frequently to ensure your bends are in the correct plane.
7. Failing to account for conduit “gain” or “shrinkage”: These factors can affect the overall length of your bent conduit.
By avoiding these mistakes and following best practices, you’ll be able to create precise offsets consistently.
How do offset bending calculators differ for EMT, rigid, and PVC conduits?
Adjusting calculations for different conduit materials
While the basic principles of offset bending remain the same, different conduit materials require slight adjustments in calculations and techniques. EMT (Electrical Metallic Tubing), rigid conduit,
and PVC each have unique properties that affect how they bend:
– EMT is relatively easy to bend and springs back minimally.
– Rigid conduit is stronger but requires more force to bend and has more spring back.
– PVC can be bent using heat and has almost no spring back, but requires temperature considerations.
Offset bending calculators often have settings to account for these material differences, adjusting the calculations accordingly.
Specific considerations for EMT bending
EMT is the most common type of conduit for offset bending. When using a calculator for EMT:
1. Ensure the calculator is set for EMT if it has material options.
2. Be aware that EMT has a slight spring back, typically about 2°.
3. Use the appropriate multiplier for your chosen bend angle.
4. Remember that EMT can kink if bent too sharply, so maintain a proper bend radius.
Many electricians prefer EMT for its ease of use and consistent bending properties.
Adapting offset formulas for rigid and PVC conduits
For rigid conduit:
1. Expect more resistance when bending, which may require additional force.
2. Account for greater spring back, often around 3-5°.
3. Use a calculator that factors in the increased wall thickness of rigid conduit.
4. Be prepared to make more minor adjustments after the initial bend.
For PVC conduit:
1. Use a calculator that includes temperature settings, as PVC bending requires heat.
2. Remember that PVC has virtually no spring back when cooled.
3. Factor in the expansion and contraction of PVC due to temperature changes.
4. Be cautious not to overheat the PVC, which can weaken or deform the conduit.
By understanding these material-specific considerations, you can adapt your use of offset bending calculators to achieve accurate results regardless of the conduit type you’re working with.
What advanced features should I look for in an offset bending calculator app?
Comparing free vs. paid offset calculator apps
When choosing an offset bending calculator app, you’ll find both free and paid options available on the app store. Free calculators often provide basic functionality, including standard offset
calculations and simple angle conversions.
These can be sufficient for occasional use or beginners. Paid apps, on the other hand, typically offer more advanced features, greater accuracy, and a more user-friendly interface. They may also
include regular updates and customer support. Consider your needs and frequency of use when deciding between free and paid options.
Must-have features for professional electricians
For professional electricians, certain features are essential in an offset bending calculator app:
1. Multiple conduit type settings (EMT, rigid, PVC)
2. Rolling offset calculations
3. Saddle bend calculations
4. Customizable bend angles beyond standard 30°, 45°, and 60°
5. Shrinkage and gain calculations
6. Ability to save and share calculations
7. Metric and imperial unit conversions
8. Visual diagrams of bends for reference
9. Concentric bend calculations for larger conduits
10. Pipe fill capacity calculations
How do you measure the mass flow of a gas?
We can determine the value of the mass flow rate from the flow conditions. A units check gives area × (length/time) × time = area × length = volume. The mass m contained in this volume is simply the density ρ times the volume. To determine the mass flow rate ṁ ("mdot"), we divide the mass by the time.
How do you calculate volumetric flow from mass flow?
Divide the mass flow by the density. The result is the volumetric flow, expressed as cubic feet of material. An example is: 100 pounds (mass flow) / 10 pounds per cubic foot (density) = 10 cubic feet
(volumetric flow).
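That division is a one-liner; the units of the result follow directly from the inputs (a sketch using the article's own numbers):

```python
def volumetric_flow(mass_flow, density):
    """Volumetric flow = mass flow / density (lb and lb/ft^3 give ft^3)."""
    return mass_flow / density

q = volumetric_flow(100, 10)   # 100 lb at 10 lb/ft^3 -> 10 cubic feet
```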
What is a measurement of the volumetric flow?
Volumetric flow is the measure of a substance moving through a device over time. Standard units of measurement for volumetric flow rate are cubic meters/second (m³/s), milliliters/second, or cubic feet/hour.
What is gas mass flow?
The mass flow rate shown on screen is the volumetric flow rate of the gas if it was flowing at standard temperature and pressure (STP) conditions. The simple steps are as follows: The device uses the
actual temperature and pressure of the gas to calculate the instantaneous volumetric flow rate.
How is air mass flow calculated?
Mass Flow Rate (ṁ) = V × A × ρ. Using the same example as above, if the density was 998 kg/m³ then the volumetric flow rate of 282.74 l/min would be equivalent to a mass flow rate of 4.703 kg/s.
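The numbers in that example can be checked directly; the litres-per-minute conversion below is my own addition, not from the source:

```python
# mass flow rate = density * volumetric flow; convert l/min to m^3/s first
density = 998.0                        # kg/m^3
q_l_per_min = 282.74                   # volumetric flow in litres per minute
q_m3_per_s = q_l_per_min / 1000 / 60   # 1000 l = 1 m^3, 60 s = 1 min
mdot = density * q_m3_per_s            # ~4.703 kg/s, as stated in the text
```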
How do you find volume from volumetric flow rate?
Flow rate Q is defined to be the volume V flowing past a point in time t, or Q = V/t, where V is volume and t is time. The SI unit of volume is m³. Flow rate and velocity are related by Q = A·v̄, where A is the cross-sectional area of the flow and v̄ is its average velocity.
What is the volumetric flow rate of air?
A volumetric flow at standard conditions translates to a specific mass flow rate. For example, 200 cm3/min of dry air at standard conditions of temperature and pressure (200 sccm) calculates to a
mass flow of 0.258 g/min as will be shown below.
How do you calculate mass from mass flow rate?
The conservation of mass tells us that the mass flow rate through a tube must be constant. We can compute the value of the mass flow rate from the given flow conditions. Mathematically, ṁ = ρ × V × A, where:
ṁ : mass flow rate
ρ : the density of the fluid
V : the velocity of the fluid
A : area of cross-section
What is flow rate in gas?
A gas flow rate is the volume of gas that passes a particular point in a particular period of time. Gas flow rate calculations are used extensively in the disciplines of chemical engineering and
process engineering.
Why do we use mass flow rate?
Many applications benefit from mass flow instruments, as they can precisely measure and control the flow of gas molecules into or out of a process. At any selected mass flow rate, the number of
molecules flowing through the point of measurement is constant regardless of temperature or pressure conditions.
What is the relationship between the mass and volume?
Mass is the amount of matter an object contains, while volume is how much space it takes up. Example: A bowling ball and a basketball are about the same volume as each other, but the bowling ball has
much more mass.
What is the relationship between mass and volume called?
density, mass of a unit volume of a material substance. The formula for density is d = M/V, where d is density, M is mass, and V is volume. Density is commonly expressed in units of grams per cubic centimetre.
What is the significance of mass flow rate?
Direct mass flow measurement is an important development across industry as it eliminates inaccuracies caused by the physical properties of the fluid, not least being the difference between mass flow
and volumetric flow. Mass is not affected by changing temperature and pressure.
What does mass flow rate depend on?
The mass flow rate is the mass of a liquid substance passing per unit time. In other words, the mass flow rate is defined as the rate of movement of liquid pass through a unit area. The mass flow is
directly dependent on the density, velocity of the liquid, and area of cross-section.
Listing all interactions of a model
Extensions of the Standard Model (SM) might involve a different gauge group and/or new fields; in either case, it is important to understand what is the most general Lagrangian invariant under the
symmetries of the model. However, sometimes one might forget to write down some valid interactions, and/or even end up writing some interactions which do not exist. Automatizing the process might
help to reduce such mistakes.
This page describes the Mathematica code Sym2Int (Symmetries to Interactions) which lists all valid interactions given the model's gauge group and fields (specified by their gauge and Lorentz
representations). The program is valid for renormalizable interactions (mass dimension $\leq4$) as well as the ones which are not renormalizable (mass dimension $>4$). Since version 2, terms with
derivatives and gauge bosons are also accounted for. More details can be found below.
Other references
The program was first described in the paper "Renato M. Fonseca, The Sym2Int program: going from symmetries to interactions, arXiv:1703.05221 [hep-ph]", with more details available in "Renato M.
Fonseca, Enumerating the operators of an effective field theory, arXiv:1907.12584 [hep-ph]".
Installing the code
Sym2Int requires the group theory code of GroupMath, so both programs must be downloaded and installed correctly. The GroupMath code can be found here while Sym2Int can be obtained from this page:
Both the GroupMath folder as well as the file sym2Int.m should be placed in a directory visible to Mathematica. A good place to place them is in

Linux, Mac OS
(Mathematica base directory)/AddOns/Applications

Windows
(Mathematica base directory)\AddOns\Applications

That's it. To load Sym2Int, type in the front end
For questions, comments or bug reports, please contact me at
Defining a model
A model is defined by the following elements:
• A name;
• A gauge group;
• A list of fields.
For each of the fields, it is necessary to indicate the following (the order is important):
• A name;
• The gauge representation;
• The Lorentz representation: scalar ("S"), right/left-handed Weyl fermion ("R"/"L"), vector ("V"), etc.;
• Whether it is a real ("R") or a complex ("C") field;
• Number of copies/flavors present in the model.
For example, consider the Standard Model:
gaugeGroup[SM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};
fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
The following tables are then printed on the screen:
Now, a bit of semantics: each independent gauge and Lorentz invariant contraction of the fields forms an operator. Leaving flavor indices unexpanded, several operators can be written down as a single
term in the Lagrangian. Finally, several terms can correspond to the same operator type, which is the set of all gauge and Lorentz invariant contractions of a given combination of fields (with flavor
indices ignored). So, even if there is more than one independent gauge and Lorentz invariant contraction of the fields (say) A, B, C and D (flavor indices are to be neglected), they are all
considered to be part of a single ABCD type of operator.
For example, in the Standard Model the lepton Yukawa interactions $L_i^* e_j H$ are composed of 9 complex operators, which can be written down as a single term in the Lagrangian. They also form a
single type of operator.
With this understanding, the first above table (with red text) contains the following data:
• First column: just a sequential number associated to each type of operator.
• Second column: the combination of fields which participate in the interaction. A star "*" means that a field is conjugated.
• Third column: mass dimension of the type of operator.
• Fourth column: is the combination of fields self-conjugate?
• Fifth column: counts the number of operators of a certain type. If the fourth column reads "True", these are real operators; otherwise this column indicates the number of complex operators of a given type (so the value in the fifth column must be multiplied by 2 in order to get the number of real operators).
• Sixth column: counts the minimal number of Lagrangian terms needed to write down all interactions of a certain type. These are either real terms (if the fourth column reads "True"), or complex terms (if the fourth column reads "False").
• Seventh column: list of fields which appear more than once in the operator.
• Eighth column: contains information on the permutation symmetry and number of parameters associated to the operator.
Everything should be self-evident, with the exception of the last column perhaps, so let us look into it in more detail.
Before that, note that a second table containing some summary statistics is also printed on screen. It shows the total number of operators, terms and operator types in the model, for each mass
dimension. It is worth pointing out that none of the tables includes the kinetic terms.
The symmetry column
Consider line #5, which corresponds to the Higgs quartic coupling in the Standard Model. In this interaction both H* and H appear twice, hence one can ask what happens when we permute the two H* fields, or the two H fields. The answer given in the last column is that the gauge and Lorentz invariant contraction of the fields H*H*HH is symmetric both under the exchange of the two H*'s and under the exchange of the two H's.
We have encountered here Young diagrams. These diagrams consist of a collection of boxes which can be used to label any irreducible representation of a permutation group $S_n$. Generically speaking, boxes along the same row are associated to symmetrization, while boxes sharing the same column are associated to anti-symmetrization.
Consider now the Standard Model with an additional Higgs doublet (the Two Higgs Doublet Model). One way to specify it is simply by adding a 7th field:
gaugeGroup[TwoHDM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H1", {1, 2, 1/2}, "S", "C", 1};
fld7 = {"H2", {1, 2, 1/2}, "S", "C", 1};
fields[TwoHDM] ^= {fld1, fld2, fld3, fld4, fld5, fld6, fld7};
The program prints the following:
There are scalar mass terms (operator types #1, #2 and #3), Yukawa couplings (operator types #4 to #9) and scalar quartic couplings (operator types #10 to #15). Let us focus just on this last group of interactions: how many numbers/parameters are necessary to describe the scalar quartic interactions? We need in total $1+2+1=4$ real couplings for the self-conjugate operators #12, #14 and #15, plus $1+1+1=3$ complex couplings for the non-self-conjugate operators #10, #11 and #13. The grand total is then 10 real numbers. Note in particular that line #14 corresponds to the operator $H_2^* H_1^* H_2 H_1$ and, according to columns 5 and 6, there are two possible independent contractions of the gauge+Lorentz indices.
However, we could have chosen to tell the program that the second Higgs doublet is simply a second flavor of the Standard Model one:
gaugeGroup[TwoHDM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 2};
fields[TwoHDM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
One should get the same interactions as before, but in a different language:
An immediate reaction to the above result might be that there are too few interactions, but in reality that is not the case. The first line says that 4 real numbers control the scalar masses, coinciding with the previous output, which said that 2 real parameters plus 1 complex one (4 real numbers in total) were required. Lines #2, #3 and #4 are the Yukawa interactions, and for each of these lines:
• There are no permutation symmetries to consider;
• 18 (complex) numbers are needed to parametrize each Lorentz and gauge invariant contraction of the fields;
• There is just one Lorentz and gauge invariant contraction of the fields, so only one term is needed. Each such term requires a Yukawa tensor with 3 indices: two contracting with the fermion flavors, and one contracting with the Higgs flavor index.
As for the last line (#5), it says that 10 numbers are needed to encode all quartic interactions (these are real numbers, since the operator type is self-conjugate). This counting coincides with the one we have seen above. But let us dig a little deeper into the meaning of the data in the last column of the last line. Since the two Higgses are seen as 2 flavors of a single field $H$, one needs flavored quartic couplings $\lambda_{ijkl}$ as prefactors to each independent field product $H^*_i H^*_j H_k H_l$. However, note that even though the indices $i,j,k,l$ can take two values (1 or 2), it would be wrong to assume that there are $2^4=16$ real independent numbers in the tensor $\lambda_{ijkl}$. Indeed, what the Young diagrams say is that one can break this $\lambda$ into two parts: $\lambda^{SS}_{ijkl}$, which is symmetric both under the exchange of indices $i \leftrightarrow j$ and under the exchange $k \leftrightarrow l$, and $\lambda^{AA}_{ijkl}$, which is anti-symmetric under these permutations of indices. It is easy to check that these tensors have 9 ($\lambda^{SS}$) and 1 ($\lambda^{AA}$) independent components, for a total of 10.
For more information on the permutation symmetry of interaction, see arXiv:1907.12584 [hep-ph].
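The counting of independent components of the symmetric and anti-symmetric parts of $\lambda_{ijkl}$ can be reproduced with a short script (an illustrative Python sketch; the helper name is ours, not part of Sym2Int):

```python
from itertools import combinations, combinations_with_replacement

def sym_antisym_counts(n):
    """Independent components of lambda^SS and lambda^AA for n flavors
    (illustrative helper, not part of Sym2Int)."""
    # lambda^SS_{ijkl}: labeled by two unordered pairs (i <= j) and (k <= l)
    ss = len(list(combinations_with_replacement(range(n), 2))) ** 2
    # lambda^AA_{ijkl}: labeled by two strictly ordered pairs (i < j) and (k < l)
    aa = len(list(combinations(range(n), 2))) ** 2
    return ss, aa

print(sym_antisym_counts(2))  # (9, 1): the 9 + 1 = 10 real couplings quoted above
```

With more flavors the same counting applies; for example, 3 flavors would give 36 symmetric plus 9 anti-symmetric components.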
Extra comment: Some operators might have permutation symmetries that are more complicated than plain symmetrizations and anti-symmetrizations. If an $S_n$ irrep is neither completely symmetric nor completely anti-symmetric, the program will just indicate the irrep by the corresponding partition of $n$, or its Young diagram.
Gauge and Lorentz representations
Fields can be in any irreducible representation of the gauge and Lorentz groups.
The gauge representation under each simple factor group such as $SU(3)$ can be indicated by the representation's dimension D (e.g., 1, 3, 6, 8, ... in $SU(3)$), or -D if it is the anti-representation (e.g., -3, -6, ... in $SU(3)$). In those cases where two or more representations have the same dimension (for example the 15 and 15' of $SU(3)$), one can use Dynkin coefficients, just like in the Susyno program. In the case of $U(1)$ factors, the program requires the charges of each field under them.
On the other hand, the Lorentz group is similar to an $SU(2)_L \times SU(2)_R$ group, and in particular it has irreducible representations which can be labeled by two non-negative half-integers $j_L$ and $j_R$: $\left(j_{L},j_{R}\right)$. The program needs these numbers. Nevertheless, for the common cases of {0,0} (a scalar), {1/2,0} (a left-handed Weyl spinor), {0,1/2} (a right-handed Weyl spinor) and {1/2,1/2} (a 4-vector), the user can simply type "S", "L", "R" and "V", respectively.
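For reference, the shorthand labels and the component counts $(2j_L+1)(2j_R+1)$ of these irreps can be tabulated in a few lines (an illustrative Python snippet; the dictionary is ours, not part of the program):

```python
from fractions import Fraction

# Shorthand labels for the common Lorentz irreps (j_L, j_R) described above
LORENTZ = {
    "S": (Fraction(0), Fraction(0)),        # scalar
    "L": (Fraction(1, 2), Fraction(0)),     # left-handed Weyl spinor
    "R": (Fraction(0), Fraction(1, 2)),     # right-handed Weyl spinor
    "V": (Fraction(1, 2), Fraction(1, 2)),  # 4-vector
}
# number of components of each irrep: (2 j_L + 1)(2 j_R + 1)
dims = {k: int((2 * jl + 1) * (2 * jr + 1)) for k, (jl, jr) in LORENTZ.items()}
print(dims)  # {'S': 1, 'L': 2, 'R': 2, 'V': 4}
```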
For example, consider a model with $n$ vectors and $m$ left-handed spinors (both with null charge under a $U(1)$ gauge group):
gaugeGroup[modelA] ^= {U1};
fld1 = {"A", {0}, "V", "R", n};
fld2 = {"Psi", {0}, "L", "C", m};
fields[modelA] ^= {fld1, fld2};
Notice the presence of a mixed symmetry {2,2} in the last line of the operators table. This somewhat complicated symmetry yields a rather peculiar expression for the number of operators: $\frac{1}{6}
n^2 \left(n^2+5\right)$ real numbers are needed to encode the quartic interactions of the $n$ vector fields.
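This peculiar count can be cross-checked with the hook content formula for the dimension of a GL(n) irrep labeled by a Young diagram. If one assumes that the flavor tensors of this operator type carry the $S_4$ symmetry types {4}, {2,2} and {1,1,1,1} (an assumption on our part, consistent with the quoted count), the formula $\frac{1}{6}n^2(n^2+5)$ is reproduced exactly (Python sketch, not part of Sym2Int):

```python
from fractions import Fraction

def gl_dim(partition, n):
    """Dimension of the GL(n) irrep labeled by a Young diagram,
    via the hook content formula: product over cells of (n + content)/hook."""
    cols = [sum(1 for r in partition if r > c) for c in range(partition[0])]
    d = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            hook = (row - j) + (cols[j] - i) - 1
            d *= Fraction(n + j - i, hook)
    return int(d)

# components with S4 symmetry {4} + {2,2} + {1,1,1,1} reproduce n^2 (n^2 + 5)/6
for n in range(1, 9):
    total = gl_dim([4], n) + gl_dim([2, 2], n) + gl_dim([1, 1, 1, 1], n)
    assert total == n**2 * (n**2 + 5) // 6
print("count check passed")
```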
Extra comment: Gauge bosons transform in a slightly different way than would a normal vector field in the adjoint representation of the gauge group. Indeed, under a gauge transformation controlled by the real parameters $\theta_{a}$, the gauge fields change according to the formula ($C_{abc}$ are the group structure constants)
\[ A_{\mu}^{a}\rightarrow A_{\mu}^{\prime a}=A_{\mu}^{a}+C_{abc}\theta^{b}A_{\mu}^{c}-g^{-1}\partial_{\mu}\theta^{a}\,. \]
The last term, $-g^{-1}\partial_{\mu}\theta^{a}$, is specific to gauge bosons. Starting with version 2.0, Sym2Int can automatically handle these fields in non-renormalizable operators (see next section). Note that the gauge bosons also appear in kinetic terms, but these are ignored by the program since they are essentially model independent.
Non-renormalizable terms, derivatives and gauge bosons
The program is not limited to the calculation of renormalizable interactions; it can calculate effective couplings to an arbitrarily large order. Note, however, that the computational time increases
rapidly for higher dimensional terms.
For example, the SM interactions up to dimension 6 can be obtained as follows (the SM taken beyond dimension 4 is often called SMEFT: Standard Model effective field theory).
gaugeGroup[SM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};
fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
GenerateListOfCouplings[SM, MaxOrder -> 6];
This output is in agreement with the results in the paper B. Grzadkowski et al., JHEP 1010 (2010) 085 (the 'GIMR paper' henceforth). Indeed, looking at the table of operators, the situation is as follows:
• Lines #1 to #5 are the renormalizable operators.
• Line #6 corresponds to the well known Weinberg operator (or rather operator type in the nomenclature we are using in this text).
• Operator types #24 to #41 and #46 to #49 involve four fermions, and correspond to those in table 3 of the GIMR paper. Take for example line #24, which corresponds to the $Q^{prst}_{\ell\ell}=\left(\overline{\ell}_{p}\gamma_{\mu}\ell_{r}\right)\left(\overline{\ell}_{s}\gamma^{\mu}\ell_{t}\right)$ operator in the language of the GIMR paper ($p,r,s,t=1,2,3$ are flavor indices). The number of independent entries of $Q^{prst}_{\ell\ell}$ is not $3^4=81$ because, as indicated in line #24, this tensor can be split into a part which is symmetric under an exchange of indices $p\leftrightarrow s$ as well as of $r\leftrightarrow t$, and a part which is anti-symmetric under each of these exchanges of indices. Hence, the correct number of independent entries of $Q^{prst}_{\ell\ell}$ is $36+9=45$ (this counting of parameters is explained in arXiv:1907.12584 [hep-ph]). Now consider line #36: it indicates that there are 2 distinct ways of contracting the gauge+Lorentz indices of $Q^*Q^*QQ$ which are symmetric under the permutation of the $Q^*$s and $Q$s, and 2 other distinct ways of contracting the gauge+Lorentz indices in an anti-symmetric fashion. A total of $36\times 2+9\times 2=90$ parameters is required. Crucially, one cannot rely on a single tensor such as $Q^{(1)prst}_{qq}=\left(\overline{Q}_{p}\gamma_{\mu}Q_{r}\right)\left(\overline{Q}_{s}\gamma^{\mu}Q_{t}\right)$ because, by (anti-)symmetrization of its indices, we get just one symmetric part and one anti-symmetric part. Two terms are therefore needed (this information is shown in column 6), hence $Q_{qq}^{(3)prst}=\left(\overline{Q}_{p}\gamma_{\mu}\tau^{I}Q_{r}\right)\left(\overline{Q}_{s}\gamma^{\mu}\tau^{I}Q_{t}\right)$ must be introduced as well in the effective Lagrangian.
• Operators of type #50, #51, #52 and #53 correspond to $Q_{e\varphi}$, $Q_{d\varphi}$, $Q_{u\varphi}$ and $Q_{\varphi}$.
• What remains is rows #7 to #23 and #42 to #45. They all have one of the following symbols: $\mathcal{D}$, F1, F2 and/or F3. The F's represent the field strength tensors of each gauge factor group, or more precisely the part $F$ with spin $(j_L,j_R)=(1,0)$ such that $F_{\mu\nu}=F+F^*$. The index identifies the gauge factor group: in this case, the gauge group is $SU(3)_C\times SU(2)_L \times U(1)_{Y}$, so F1=$F^C$, F2=$F^L$ and F3=$F^Y$. On the other hand, $\mathcal{D}$ represents a derivative: it must be applied to one of the other fields in the interaction, but the program does not indicate which. Note that operator types with derivatives, such as in row #9, can have permutation symmetries with negative coefficients (see last column). That is because some operators with derivatives are redundant and should be ignored; the counting in columns 5 and 6 takes this into consideration. In general, it is more complicated to describe the permutation symmetries of these operators with derivatives, so the information in the last column is more elaborate (see arXiv:1907.12584 [hep-ph]).
One can go beyond dimension 6 operators, but computational time and memory requirements increase quickly. It is not wise to show very long tables with many Young diagrams in Mathematica's front end, so Sym2Int stops printing the operator table if it contains more than 200 rows. The user can also suppress the table for smaller outputs with the Verbose option (see below).
For example, the following code takes around two hours to run, and it uses ~4 GB of RAM:
savedResults = GenerateListOfCouplings[SM, MaxOrder -> 15]
Because the table of operators is not printed on screen, one should instead analyze the raw data returned by the function GenerateListOfCouplings. In this case, it is being saved to the variable savedResults.
The number of operators in the statistics table shown above (second column), as well as the number of types of operators up to dimension 12 (fourth column), match the results in the paper B. Henning et al., JHEP 1708 (2017) 016.
Quick summary: It is possible to go beyond renormalizable operators with the MaxOrder option. The program will use the symbol $\mathcal{D}$ for derivatives, and F1, F2, F3, ... for the field strength
tensors of the 1st, 2nd, 3rd, ... gauge factor groups (following the order of gaugeGroup[<model>]).
More examples
A Left-Right model ($SU(3)_C\times SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ gauge group)
Consider the LR model with the following representations. Fermions (all taken to be left-handed Weyl spinors) are distributed across three copies of the representations $Q=\left(\mathbf{3},\mathbf{2},\mathbf{1},1/3\right)$, $Q^c=\left(\overline{\mathbf{3}},\mathbf{1},\mathbf{2},-1/3\right)$, $L=\left(\mathbf{1},\mathbf{2},\mathbf{1},-1\right)$ and $L^{c}=\left(\mathbf{1},\mathbf{1},\mathbf{2},1\right)$. The scalar sector is composed of the fields $\Phi=\left(\mathbf{1},\mathbf{2},\mathbf{2},0\right)$, $\Delta_{L}=\left(\mathbf{1},\mathbf{3},\mathbf{1},2\right)$ and $\Delta_{R}=\left(\mathbf{1},\mathbf{1},\mathbf{3},-2\right)$.
gaugeGroup[LR] ^= {SU3, SU2, SU2, U1};
fld1 = {"Q", {3, 2, 1, 1/3}, "L", "C", 3};
fld2 = {"Qc", {-3, 1, 2, -1/3}, "L", "C", 3};
fld3 = {"L", {1, 2, 1, -1}, "L", "C", 3};
fld4 = {"Lc", {1, 1, 2, 1}, "L", "C", 3};
fld5 = {"Phi", {1, 2, 2, 0}, "S", "R", 1};
fld6 = {"D", {1, 3, 1, 2}, "S", "C", 1};
fld7 = {"Dc", {1, 1, 3, -2}, "S", "C", 1};
fields[LR] ^= {fld1, fld2, fld3, fld4, fld5, fld6, fld7};
An $SU(5)$ model
gaugeGroup[modelSU5] ^= {SU5};
fld1 = {"F", {-5}, "L", "C", 3};
fld2 = {"T", {10}, "L", "C", 3};
fld3 = {"5", {5}, "S", "C", 1};
fld4 = {"24", {24}, "S", "R", 1};
fields[modelSU5] ^= {fld1, fld2, fld3, fld4};
The Pisano-Pleitez-Frampton model ($SU(3)_C \times SU(3)_L \times U(1)_X$ gauge group)
gaugeGroup[PPF331Model] ^= {SU3, SU3, U1};
Psil = {"Psil", {1, 3, 0}, "L", "C", 3};
Q23L = {"Q23L", {3, -3, -1/3}, "L", "C", 2};
Q1L = {"Q1L", {3, 3, 2/3}, "L", "C", 1};
uc = {"uc", {-3, 1, -2/3}, "L", "C", 3};
dc = {"dc", {-3, 1, 1/3}, "L", "C", 3};
J12 = {"J12", {-3, 1, 4/3}, "L", "C", 1};
J3 = {"J3", {-3, 1, -5/3}, "L", "C", 2};
Chi = {"Chi", {1, 3, -1}, "S", "C", 1};
Eta = {"Eta", {1, 3, 0}, "S", "C", 1};
Rho = {"Rho", {1, 3, 1}, "S", "C", 1};
fields[PPF331Model] ^= {Psil, Q23L, Q1L, uc, dc, J12, J3, Chi, Eta, Rho};
Details and extra options
There are some optional parameters which can be used to change the program's behaviour.
- Add discrete symmetries
At the moment Sym2Int has a limited capacity to deal with discrete symmetries (future versions will probably extend this part of the code). In particular, the program can only handle abelian discrete symmetries which commute with the gauge group. These symmetries associate to each field a (multiplicatively) conserved charge, which should be provided as follows:
GenerateListOfCouplings[<model>, DiscreteSym->{<charge of field #1>,
<charge of field #2>, ...}];
Each charge is a complex number (with modulus 1) associated to a $Z_n$ symmetry, or a list of such numbers, in case there are multiple $Z_{n_i}$ symmetries.
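The multiplicative conservation rule can be sketched as follows (illustrative Python; the helper names are ours, not Sym2Int's API):

```python
import cmath

def zn_charge(k, n):
    """Complex charge exp(2 pi i k/n) of a field with Z_n charge k."""
    return cmath.exp(2j * cmath.pi * k / n)

def allowed(charges):
    # an operator is invariant when the product of its fields' charges equals 1
    prod = 1.0
    for c in charges:
        prod *= c
    return abs(prod - 1.0) < 1e-9

odd = zn_charge(1, 2)          # a Z_2-odd field has charge -1
print(allowed([odd, odd]))     # True: two odd fields can appear together
print(allowed([odd]))          # False: a single odd field cannot
```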
- Suppress on-screen printing of output
The option
GenerateListOfCouplings[<model>, Verbose->False];
can be used to force the program to perform the calculation silently, in which case one should save the results of GenerateListOfCouplings to some variable (more details are given in the next section).
One can also print the statistics table only:
GenerateListOfCouplings[<model>, Verbose->"OnlyStatistics"];
Finally, it is possible to use partitions instead of Young diagrams in the last column of the operators list:
GenerateListOfCouplings[<model>, Verbose->"NoTableaux"];
Saving the output for further processing
We have seen above that the program prints a table with the results (plus another table with some statistics). However, in many cases, one would also like to have this data in a format suitable for
further processing. Such data can indeed be obtained by just saving the output of the GenerateListOfCouplings function, which returns a list containing information related to each operator type. For
each item in the list, corresponding to some operator type, the following information is provided:
1. The number associated to the operator-type.
2. The combination of fields which enter the interaction. Each field is identified by its position in the list provided by the user as input (if the field does not appear conjugated in the operator) or minus its position in that list (if the field does appear conjugated in the operator). Derivatives are assigned the number 0, and the field strength tensors have the indices $x+2$, $x+3$, ..., where $x$ is the number of fields given as input. In summary: 0 → derivative; ($1$ to $x$) → the fields given as input; ($x+2$ to $x+i+1$) → field strength tensors, where $i$ is the number of gauge factor groups. Note that the field strength tensors mentioned here are the $F$'s in the Lorentz representation $(1,0)$, such that $F_{\mu\nu}=F+F^*$.
3. Mass dimension of the operator.
4. Is the combination of fields self conjugated?
5. Number of operators (real ones if the previous entry is "True"; complex otherwise).
6. Number of terms (real ones if the operator type is self-conjugated; complex otherwise).
7. List of fields which appear more than once in the operator (the identification of each field is done as in 2.).
8. Information on the permutation symmetry and number of parameters associated to the operator.
9. Provides information on the explicit expression(s) of the gauge+Lorentz invariant contraction (or contractions, if there are several) of the fields which appear in the operator. In other words, it provides the information needed to write down the Lagrangian. This information will only be computed if the following optional arguments are used: CalculateInvariants -> True and IncludeDerivatives -> False.
Perhaps it is easier to follow what is going on with an example (the Standard Model again):
gaugeGroup[SM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", 3};
fld2 = {"d", {3, 1, -1/3}, "R", "C", 3};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", 3};
fld4 = {"e", {1, 1, -1}, "R", "C", 3};
fld5 = {"L", {1, 2, -1/2}, "L", "C", 3};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};
fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
operatorsDim4 = GenerateListOfCouplings[SM];
operatorsDim6 = GenerateListOfCouplings[SM, MaxOrder -> 6];
Note that, unlike some previous examples, now the results are being saved to the variables operatorsDim4 and operatorsDim6. We can get a quick glance at what information is in operatorsDim4, for
example, by making a grid:
Grid[operatorsDim4, Frame -> All]
For instance, the fourth row, second column reads {-1,3,6} because the u, Q, H fields are involved (these are the fields #1, #3 and #6 in the input list), and u is conjugated (hence the -1 instead of 1). Note that the last column is empty because the options "CalculateInvariants -> True" and "IncludeDerivatives -> False" were not used. Finally, note also that in the last row, column 7, there is a {{2},{2}} permutation symmetry: it stands for the trivial/symmetric $S_2 \times S_2$ irreducible representation (see section "The symmetry column" above).
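Decoding column 2 by hand can be automated along these lines (a hypothetical Python helper, assuming the SM field order used above):

```python
# Hypothetical decoder for column 2 of the saved output, following the
# numbering convention described in item 2 (SM input field order assumed):
names = {1: "u", 2: "d", 3: "Q", 4: "e", 5: "L", 6: "H"}

def decode(entry):
    out = []
    for f in entry:
        if f == 0:
            out.append("D")                      # derivative
        elif abs(f) in names:
            out.append(names[abs(f)] + ("*" if f < 0 else ""))
        else:
            out.append("F" + str(abs(f) - 7))    # field strengths start at x + 2 = 8
    return out

print(decode([-1, 3, 6]))   # ['u*', 'Q', 'H'] -- the up-quark Yukawa type
```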
In the example above, operatorsDim6 contains the SM operators up to dimension 6. We may write a small code to pick out the position of those types of operators which violate lepton and baryon number:
LeptonNumber[term_] := Module[{leptonNumber},
  leptonNumber =
   Sign[term[[2]]].(Abs[term[[2]]] /. {0 -> 0, 1 -> 0, 2 -> 0, 3 -> 0,
       4 -> 1, 5 -> 1, 6 -> 0, 8 -> 0, 9 -> 0, 10 -> 0});
  leptonNumber]
BaryonNumber[term_] := Module[{baryonNumber},
  baryonNumber =
   Sign[term[[2]]].(Abs[term[[2]]] /. {0 -> 0, 1 -> 1/3, 2 -> 1/3, 3 -> 1/3,
       4 -> 0, 5 -> 0, 6 -> 0, 8 -> 0, 9 -> 0, 10 -> 0});
  baryonNumber]
positionLViolation = Cases[operatorsDim6,
x_ /; LeptonNumber[x] =!= 0 :> x[[1]]];
PrintOperatorTable[SM, operatorsDim6[[positionLViolation]]]
positionBViolation = Cases[operatorsDim6,
x_ /; BaryonNumber[x] =!= 0 :> x[[1]]];
PrintOperatorTable[SM, operatorsDim6[[positionBViolation]]]
The following two tables are printed by the program (they list the lepton and baryon number violating operators):
The function PrintOperatorTable used here is part of Sym2Int: it allows the user to selectively print just some of the operators of the last model which was processed with GenerateListOfCouplings.
For example,
will print only operator types #3, #4 and #10 in a table.
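The same lepton- and baryon-number bookkeeping can be sketched outside Mathematica (illustrative Python; the B and L assignments follow the SM field numbering described earlier):

```python
from fractions import Fraction

# Hypothetical B and L assignments following the field numbering above:
# 1 = u, 2 = d, 3 = Q, 4 = e, 5 = L, 6 = H; 0 = derivative; 8, 9, 10 = field strengths
third = Fraction(1, 3)
B = {0: 0, 1: third, 2: third, 3: third, 4: 0, 5: 0, 6: 0, 8: 0, 9: 0, 10: 0}
Lnum = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 0, 8: 0, 9: 0, 10: 0}

def charge(field_list, table):
    # a negative index denotes a conjugated field, which carries the opposite charge
    return sum((-1 if f < 0 else 1) * table[abs(f)] for f in field_list)

print(charge([5, 5, 6, 6], Lnum))  # 2 -> a Weinberg-type LLHH operator violates L
print(charge([3, 3, 3, 5], B))     # 1 -> a QQQL-type operator violates B
```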
For SMEFT, the paper B. Henning et al., JHEP 1708 (2017) 016 provides (a) the number of operators of each type up to dimension 12; (b) the total number of operators with mass dimension $d \leq 12$ which violate baryon number in $\Delta B=0,1,2$ units; (c) the total number of operators with mass dimension up to 15. This last part (c) can be checked directly from the summary table printed by the program (see bottom of section "Non-renormalizable terms, derivatives and gauge bosons").
As for parts (a) and (b), they can be checked by computing the list of SM operators up to dimension 12, for Nf generations (it takes ~10 minutes), saving the results in a variable (operatorsDim12):
gaugeGroup[SM] ^= {SU3, SU2, U1};
fld1 = {"u", {3, 1, 2/3}, "R", "C", Nf};
fld2 = {"d", {3, 1, -1/3}, "R", "C", Nf};
fld3 = {"Q", {3, 2, 1/6}, "L", "C", Nf};
fld4 = {"e", {1, 1, -1}, "R", "C", Nf};
fld5 = {"L", {1, 2, -1/2}, "L", "C", Nf};
fld6 = {"H", {1, 2, 1/2}, "S", "C", 1};
fields[SM] ^= {fld1, fld2, fld3, fld4, fld5, fld6};
operatorsDim12 = GenerateListOfCouplings[SM, MaxOrder -> 12,
Verbose -> False];
For example, the reference above provides the dimension 5 operators in a data file as the sum \[\frac{\text{Nf}^{2}+\text{Nf}}{2}H^{2}L^{2}+\frac{\text{Nf}^{2}+\text{Nf}}{2}\text{Hd}^{2}\text{Ld}^{2}
\] where Nf is the number of generations, H is the Higgs field, Hd is its conjugate, and so on. Derivatives are represented by a t. To convert operatorsDim12 into this format, one can do as follows:
ConvertSym2IntResult[termAll_] := Module[{rule, aux, result},
  rule = {-10 -> Br, -9 -> Wr, -8 -> Gr, -6 -> Hd, -5 -> Ld, -4 ->
      e, -3 -> Qd, -2 -> d, -1 -> u, 0 -> t, 1 -> ud, 2 -> dd, 3 -> Q,
     4 -> ed, 5 -> L, 6 -> H, 8 -> Gl, 9 -> Wl, 10 -> Bl};
  aux = If[
    termAll[[4]], {termAll[[2]]}, {termAll[[2]], -termAll[[2]]}];
  result = Expand[termAll[[5]] Total[Times @@@ (aux /. rule)]];
  result]
Total[ConvertSym2IntResult /@ operatorsDim12]
This will provide a sum with all operators up to dimension 12. To get only those with dimension 6 (for example), one can do as follows:
Total[ConvertSym2IntResult /@ Cases[operatorsDim12, x_ /; x[[3]] == 6]]
The result is this:
In order to collect data on the number of operators of each dimension which violate baryon number, one can proceed as follows:
BaryonNumber[term_] := Module[{baryonNumber},
baryonNumber =
Sign[term[[2]]].(Abs[term[[2]]] /. {0 -> 0, 1 -> 1/3, 2 -> 1/3,
3 -> 1/3, 4 -> 0, 5 -> 0, 6 -> 0, 8 -> 0, 9 -> 0, 10 -> 0});
bTable = Table[
   temp = Cases[operatorsDim12,
     x_ /; x[[3]] == dim && Abs[BaryonNumber[x]] == b];
   Expand[temp[[All, 5]].(temp[[All, 4]] /. {True -> 1, False -> 2})]
   , {dim, 5, 12}, {b, 0, 2}];
Grid[Prepend[bTable,
  Style[#, Darker[Red]] & /@ {"Delta(B)=0", "Delta(B)=+-1",
    "Delta(B)=+-2"}], Frame -> All]
The rows printed on screen represent the various dimensions of the operators (5 to 12).
Renato Fonseca
Last updated
05 March 2024
EIS Mathematics #2 – (The simplicity of Laplace transform) Battery – Application Note 50-2
Latest updated: May 23, 2024
The Laplace transform and its inverse are widely used in electrochemistry. This note illustrates some of its applications in DC or EIS measurement.
Several electrochemical techniques use results obtained using the Laplace transform. Examples can be found in Application notes #21, #28, #38 and #41b [1-4].
The Laplace^1 transform of a function f(t) is defined by the integral
$$F(s)= \text L \{f(t)\} = \int^{\infty}_0 f(t)\text{exp}(-st)\text d t\tag{1}$$
Where s is the Laplace complex variable [5]. For example, the Laplace transform of the sine function sin(t) is given by
$$\int^{\infty}_0 \text{sin}(t)\text{exp}(-st)\text d t = \frac{1}{s^2+1}\tag{2}$$
On the other hand, recovering the time function from its Laplace transform in the s-domain requires a more complicated expression:
$$f(t)= \text L^{-1} \{F(s)\} = \frac{1}{2 \pi i} \lim\limits_{T \to \infty} \int^{\gamma + iT}_{\gamma - iT}\text{exp}(st)F(s)\text{d}s\tag{3}$$
Table Of Laplace Transforms
Calculation of the integral of a function of a real variable is sometimes easy, e.g. it is well known that $\int xdx = x^2/2$. Calculation of the integral of a function of a complex variable is much more difficult. Fortunately, it is very rare to have to use Eqs. (1) or (3). The Laplace transforms and inverse Laplace transforms usually used in electrochemistry have already been calculated. The result is obtained by consulting a table of Laplace transforms [5] (Tabs. I and II) or, better, by using software such as Wolfram Alpha [6].
Transfer Function of a Linear and Time Invariant (LTI) System
Laplace transform is the tool of choice for the study of linear and time invariant systems (Fig. 1) characterized by their transfer function H(s) defined by [7, 8]
$$H(s)= \frac{Y(s)}{X(s)} = \frac{\text{L}\{y(t)\}}{\text{L}\{x(t)\}}\tag{4}$$
Figure 1: Sketch of a scalar dynamic system.
For example, the impedance of a dipole is the transfer function for a current input
$$Z(s)= \frac{\text{L}\{\Delta E(t)\}}{\text{L}\{\Delta I(t)\}}= \frac{\Delta E(s)}{\Delta I(s)} \tag{5}$$
With $\Delta E(t)= E(t)-E_{t=0}$ and $\Delta I(t)= I(t)-I_{t=0}$.
Response of a LTI System
Knowing the transfer function of a system allows us to calculate its response to an input. For example, if Z(s) is known, it is possible to determine the response of the dipole in the Laplace domain
$$\Delta E(s)= \Delta I(s)Z(s)\tag{6}$$
Then in the time domain using inverse Laplace transform
$$\Delta E(t)= \text L^{-1}\{\Delta E(s)\}\tag{7}$$
Response of a Parallel RC Circuit to a Current Heaviside Step Function
a. Impedance of a parallel RC circuit
The impedance of a parallel RC circuit is directly written from the admittance Y(s) = 1/Z(s) [9]:
$$Y(s)= \frac{1}{R}+sC \implies Z(s)= \frac{R}{1+RCs} \tag{8}$$
b. Response to a Heaviside step function
The Heaviside step function is equal to 0 for t<0 and 1 for t>0 (Fig. 2). The Laplace transform of the Heaviside step function is given by (Tab. I, Eqs. (1) and (2)).
Figure 2: Potential response of the parallel RC circuit to a Heaviside step current using Eq. (11).
$$\Delta I(s)= \text L \{\Delta I(t)\} = \frac{\delta I}{s} \tag{9}$$
The Laplace transform of the output is written
$$\Delta E(s)= \frac{\delta I}{s} \frac{R}{1+RCs} \tag{10}$$
And the time domain response is given by (Tab. II, Eqs. (1) and (2)):
$$\Delta E(t)= \text L^{-1} \{\Delta E(s)\} = \text L^{-1} \left\{\frac{\delta I}{s}\frac{R}{1+RCs}\right\}= \delta IR \left( 1- \text{exp}\left(\frac{-t}{CR}\right) \right) \tag{11}$$
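Eq. (11) can be sanity-checked numerically by verifying that it satisfies the current balance of the parallel RC circuit, $C\,\mathrm{d}E/\mathrm{d}t + E/R = \delta I$ (a Python sketch; the component values are arbitrary illustrative choices):

```python
import math

# Numerical sanity check of Eq. (11): E(t) = dI*R*(1 - exp(-t/(R C)))
# should satisfy the parallel RC current balance C dE/dt + E/R = dI.
# R, C and dI below are arbitrary illustrative values.
R, C, dI = 2.0, 0.5, 1.0

def E(t):
    return dI * R * (1.0 - math.exp(-t / (C * R)))

h = 1e-6
for t in (0.1, 0.5, 1.0, 3.0):
    dEdt = (E(t + h) - E(t - h)) / (2.0 * h)   # central difference
    assert abs(C * dEdt + E(t) / R - dI) < 1e-6
print("Eq. (11) satisfies the circuit equation")
```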
c. Response to a sinusoidal input
The Laplace transform of a sinusoidal signal is given by (Tab. I, Eqs (1) and (4))
$$\Delta I(t) = \delta I \text{sin}(\omega t) \implies \Delta I(s) = \text L\{\Delta I(t)\}= \frac{\delta I \omega}{s^2 + \omega^2} \tag{12}$$
Therefore the output potential in the Laplace domain is
$$\Delta E(s) = \frac{\delta I \omega}{s^2 + \omega^2}\frac{R}{1+RCs}$$
And the time domain response is given by (Tab. II, Eqs. (1) and (4))
$$\Delta E(t) = \text L^{-1}\{\Delta E(s)\}= \text L^{-1}\left\{\frac{\delta I \omega}{s^2 + \omega^2}\frac{R}{1+\tau s}\right\} = \frac{\delta IR \left(\tau\omega\,\text{exp}\left(\frac{-t}{\tau}\right)-\tau\omega\,\text{cos}(\omega t) + \text{sin}(\omega t)\right)}{1+\tau^2\omega^2}\tag{13}$$
With τ = RC. The establishment of the sinusoidal steady-state of potential is shown in Fig. 3.
Figure 3: Potential response of the parallel RC circuit to a sinusoidal current using Eq. (13).
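Eq. (13) can likewise be cross-checked against a direct numerical integration of the circuit equation $C\,\mathrm{d}E/\mathrm{d}t + E/R = \delta I \sin(\omega t)$, $E(0)=0$ (Python sketch; component values are arbitrary):

```python
import math

# Cross-check of Eq. (13) against a direct numerical integration of
# C dE/dt + E/R = dI*sin(w t), E(0) = 0 (arbitrary illustrative values).
R, C, dI, w = 1.0, 1.0, 1.0, 3.0
tau = R * C

def E_formula(t):
    return dI * R * (tau * w * math.exp(-t / tau)
                     - tau * w * math.cos(w * t)
                     + math.sin(w * t)) / (1.0 + tau**2 * w**2)

dt, E, t = 1e-4, 0.0, 0.0
while t < 5.0:
    E += dt * (dI * math.sin(w * t) - E / R) / C   # explicit Euler step
    t += dt

assert abs(E - E_formula(t)) < 5e-3
print("Eq. (13) matches the numerical solution")
```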
Ramp Response of a Series RC Circuit
a. Impedance of a series RC circuit
The impedance of a series RC circuit is given by
$$Z(s)= R+\frac{1}{Cs}\tag{14}$$
b. Response to a potential ramp
The signal used in linear sweep voltammetry is a potential ramp defined as:
$$E(t)=E_i+\upsilon t \implies \Delta E(t) = E(t) - E_i= \upsilon t \tag{15}$$
The Laplace transform of a ramp is given by (Tab I., Eqs. (1) and (3))
$$\Delta E(s) =\text L \{\Delta E(t)\} =\text L \{\upsilon t\} = \frac{\upsilon}{s^2}\tag{16}$$
$$Z(s)= \frac{\Delta E(s)}{\Delta I(s)} \implies \Delta I(s) =\frac{\Delta E(s)}{Z(s)}$$
One obtains
$$\Delta I(t) =\text L^{-1}\{\Delta I(s)\} =\text L^{-1} \left\{\frac{C\upsilon}{s(1+CRs)}\right\}$$
And using Eqs. (1) and (2) (Tab. II),
$$\Delta I(t)= I(t)=C\upsilon\left( 1- \text{exp}\left(\frac{-t}{CR}\right) \right) \tag{17}$$
Figure 4. Current response of the series RC circuit to a ramp of potential calculated using Eq. (17).
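A numerical check of Eq. (17): the response should reproduce the ramp via $R\,I(t)+Q(t)/C=\upsilon t$, where $Q(t)$ is the accumulated charge (Python sketch; component values are arbitrary):

```python
import math

# Check of Eq. (17): for the series RC circuit driven by E(t) = v*t, the
# current I(t) = C*v*(1 - exp(-t/(C R))) should satisfy R*I(t) + Q(t)/C = v*t,
# where Q(t) is the accumulated charge (arbitrary illustrative values).
R, C, v = 2.0, 0.25, 1.5

def I(t):
    return C * v * (1.0 - math.exp(-t / (C * R)))

dt, Q, t = 1e-4, 0.0, 0.0
while t < 2.0:
    Q += 0.5 * dt * (I(t) + I(t + dt))   # trapezoidal accumulation of charge
    t += dt

assert abs(R * I(t) + Q / C - v * t) < 1e-6
print("Eq. (17) reproduces the ramp input")
```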
Laplace and Inverse Laplace Transforms Tables
Laplace Transforms
Laplace transforms used in this note are shown in Tab. I.
Table I: Table of Laplace transforms.
$$f(t)$$ $$F(s)$$
(1) $af(t)$ $aF(s)$
(2) $1$ $\frac{1}{s}$
(3) $t$ $\frac{1}{s^2}$
(4) $\text{sin}(\omega t)$ $\frac{\omega}{s^2 + \omega^2}$
Inverse Laplace Transforms
Inverse Laplace transforms used in this note are shown in Tab. II.
Table II: Table of inverse Laplace transforms.
$$F(s)$$ $$f(t)$$
(1) $aF(s)$ $af(t)$
(2) $\frac{1}{s(1+as)}$ $1-\text{exp}(-t/a)$
(3) $\frac{1}{s^2(1+as)}$ $a(\text{exp}(-t/a)-1)+t$
(4) $\frac{1}{(1+as)(1+bs^2)}$ $\frac{a\,\text{exp}(-t/a)-a\,\text{cos}(t/\sqrt{b})+\sqrt{b}\,\text{sin}(t/\sqrt{b})}{a^2 + b}$
1) Application note #21 “Measurement of the double layer capacitance”
2) Application note #28 “Ohmic drop. Part II: Introduction to Ohmic Drop measurement techniques.”
3) Application note #38 “Dynamic resistance determination. A relation between AC and DC measurement?”
4) Application note #41 II “CV Sim, Simulation of the simple redox reaction (E), Part II: The effect of the ohmic drop and the double layer capacitance.”
5) http://en.wikipedia.org/wiki/Laplace_transform.
6) Wolfram Alpha: Computational Knowledge Engine. http://www.wolframalpha.com.
7) http://en.wikipedia.org/wiki/Transfer_function.
8) Handbook of EIS – Transfer functions.
9) Handbook of EIS – Circuits made of resistors and capacitors.
Revised in 07/2018
^1Pierre-Simon Laplace was a French mathematician of the eighteenth century (1749-1827).
Joint Colloquium: Carleton University – University of Ottawa
Date: Friday, October 9, 2020
Time: 4:00 pm
Place: ZOOM
Speaker: Shamgar Gurevitch (University of Wisconsin)
Title: Harmonic Analysis on GL(n) over finite fields
Abstract: There are many formulas that express interesting properties of a finite group G in terms of sums over its characters. For estimating these sums, one of the most salient quantities to
understand is the character ratio: Trace(π(g)) / dim(π), for an irreducible representation π of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of the mentioned type
for analyzing certain random walks on G. In 2015, we discovered that for classical groups G over finite fields there is a natural invariant of representations that provides strong information on the
character ratio.
We call this invariant rank. Rank suggests a new organization of representations based on the very few “small” ones. This stands in contrast to Harish-Chandra’s “philosophy of cusp forms”, which is
(since the 60s) the main organization principle, and is based on the (huge collection) of “LARGE” representations. This talk will discuss the notion of rank for the group GL(n) over finite fields,
demonstrate how it controls the character ratio, and explain how one can apply the results to verify mixing time and rate for random walks. This is joint work with Roger Howe (Yale and Texas A&M).
The numerics for this work was carried by Steve Goldstein (Madison).
Join Zoom Meeting:
Meeting ID: 931 0465 4430
Passcode: pP73dd
Holding period return is 35% for 4 years. The annual growth rate will not be 35/4 = 8.75%; it will be less than this figure, namely 7.79%, because growth is compounded. If the holding period is 1 year, then the CAGR and the HPR are the same. If the holding period is more than 1 year, then the CAGR will be less than the HPR.
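The compounding arithmetic above can be verified directly (a short Python sketch):

```python
# CAGR implied by a multi-period holding period return (HPR), as described above
def cagr(hpr, years):
    return (1.0 + hpr) ** (1.0 / years) - 1.0

rate = cagr(0.35, 4)
print(round(100 * rate, 2))  # 7.79 (percent per year), below the naive 35/4 = 8.75
```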
ANS: F PTS: 1
4. The geometric mean of a series of returns is always larger than the arithmetic mean and the difference increases with the volatility of the series. ANS: F PTS: 1 (the geometric mean is never larger than the arithmetic mean; only the size of the gap grows with volatility.)
5. The expected return is the average of all possible returns.
The difference between nominal returns and real returns: the second impact of inflation is less obvious, but it can eventually take a major bite out of your portfolio returns.
The holding period return (HPR) is equal to the holding period yield (HPY) stated as a percentage. ANS: F PTS: 1
HSBC Holdings is part of the HSBC Group. The correlation between inflation-linked bonds and equities over the same period and region is −0.24. The real after-tax rate is approximately the after-tax nominal rate minus the inflation rate. 5.4 Risk and risk premiums: holding-period returns and the realized return.
The computation follows. For a mutual fund investing in real estate, the return comes in the form of dividends, capital gains distributions, and price appreciation. Real estate prices follow a generally upward trend, not least due to inflation.
The real rate of return is the actual annual rate of return after taking into account the factors that affect it, such as inflation. It is calculated as one plus the nominal rate, divided by one plus the inflation rate, minus one; the inflation rate can be taken from the consumer price index or the GDP deflator.
\( \text{Holding period return} = \cfrac{(4200 - 2800 + 120)}{2800} = 54.3\% \). We can use the holding period return to find the true investment return per unit of money invested, e.g. per dollar, because it incorporates all additional income received.
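The worked example above (buy at 2800, sell at 4200, receive 120 in income) can be sketched as a small function; the function and parameter names are ours:

```python
def holding_period_return(initial_value: float, end_value: float,
                          income: float = 0.0) -> float:
    """HPR = (income + end-of-period value - initial value) / initial value."""
    return (income + end_value - initial_value) / initial_value

hpr = holding_period_return(2800, 4200, income=120)
print(f"HPR = {hpr:.1%}")  # 54.3%
```

Passing `income` as a keyword with a default of zero lets the same function handle pure price returns as well.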
Gerry Fowler, Investment Director, Absolute Return: there have been only five calendar years when holding cash has been better than investing in either bonds or equities in the US*. True, benign inflation, rapid globalisation and technological developments have fuelled an incredible period of success for financial assets.
What would be the real rate of return? Real rate of return = (1 + Nominal rate) / (1 + Inflation rate) − 1 = (1 + 0.06) / (1 + 0.03) − 1 = 1.06 / 1.03 − 1 ≈ 0.0291 = 2.91%.
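The exact real-rate calculation just shown (6% nominal, 3% inflation) can be sketched as follows, alongside the common nominal-minus-inflation shortcut; the function name is ours:

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Exact real rate of return: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

exact = real_rate(0.06, 0.03)
approx = 0.06 - 0.03  # back-of-envelope approximation

print(f"exact:  {exact:.2%}")   # 2.91%
print(f"approx: {approx:.2%}")  # 3.00%
```

The exact figure is slightly below the approximation because inflation erodes the gain itself, not just the principal.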
Holding period return formula = [Income + (End-of-period value − Initial value)] / Initial value. An alternative version of the formula can be used to calculate the return over multiple periods from an investment; it is useful for calculating returns over regular intervals, such as annualized or quarterly returns. The real rate of return formula is one plus the nominal rate, divided by one plus the inflation rate, minus one; it can be used to determine the effective return on an investment after adjusting for inflation.
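The "alternative version" for multiple periods mentioned above is a geometric annualization of the total HPR. A minimal sketch, with names of our choosing:

```python
def annualized_return(total_hpr: float, periods: float,
                      periods_per_year: float = 1.0) -> float:
    """Annualize a total holding period return earned over `periods` periods."""
    years = periods / periods_per_year
    return (1 + total_hpr) ** (1 / years) - 1

# 54.3% earned over 3 years works out to roughly 15.6% per year.
print(f"{annualized_return(0.543, 3):.1%}")
```

Setting `periods_per_year=4` handles quarterly data: a total return measured over 12 quarters is annualized over 3 years.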