How do you write something in scientific notation?
A number is written in scientific notation when a number between 1 and 10 is multiplied by a power of 10. For example, 650,000,000 can be written in scientific notation as 6.5 ✕ 10^8.
How do you write 0.0124 in scientific notation?
1 Answer
Move the decimal point to the right, one place at a time, lowering the power of 10 at each step:
1. 0.124 × 10^-1 → one position.
2. 1.24 × 10^-2 → two positions.
Hence 0.0124 = 1.24 × 10^-2.
How do you write 0.00042 in scientific notation?
Write the number 0.00042 in scientific notation. To write this number in scientific notation, first move the decimal point 4 places from where it is in the original number to between the 4 and the 2. Since you are moving the decimal 4 places to the right, you subtract 4 from the exponent, giving 4.2 x 10^-4.
How do you write 5920 in scientific notation?
5,920 (five thousand nine hundred twenty) is an even four-digit composite number following 5919 and preceding 5921. In scientific notation, it is written as 5.92 × 10^3.
How do you write 1001 in scientific notation?
1,001 (one thousand one) is an odd four-digit composite number following 1000 and preceding 1002. In scientific notation, it is written as 1.001 × 10^3.
How do you write 0.00001 in scientific notation?
We know that when we move the decimal point to the right, the exponent b is negative. Here the decimal moves 5 places, so b = -5. Hence, the scientific notation of 0.00001 is 1 × 10^-5.
How do you write 2300000000 in scientific notation?
Written in scientific notation, 2,300,000,000 is 2.3 × 10^9. Written in scientific notation, 0.00000000034 is 3.4 × 10^-10.
How do you write 0.00035 in scientific notation?
Now, we can say that 3.5 × 10^-4 is the scientific notation of 0.00035.
How do you write 520000 in scientific notation?
520,000 (five hundred twenty thousand) is an even six-digit composite number following 519999 and preceding 520001. In scientific notation, it is written as 5.2 × 10^5.
How would u write 564000000 in scientific notation?
Because our number is greater than 10, we move the decimal point to the left, keeping track of how many times we move it: 564000000.0 -> 5.64, a shift of 8 places. So 564,000,000 is 5.64 × 10^8.
How do you write 0.00000000068 in scientific notation?
The scientific notation of 0.00000000068 is 6.8 × 10^-10.
How do you write numbers in scientific notation on a calculator?
Enter a number, a decimal number, or scientific notation and the calculator converts it to scientific notation, e notation, engineering notation, and standard form. To enter a number in scientific notation, use a caret (^) to indicate the power of 10. You can also enter numbers in e notation.
What is scientific notation?
Scientific notation is the way that scientists easily handle very large or very small numbers. For example, instead of writing 0.0000000056, we write 5.6 x 10^-9. So, how does this work?
How do I enter a number in a calculator?
Enter a number, a decimal number, or scientific notation and the calculator converts it to scientific notation, e notation, and engineering notation. To enter a number in scientific notation, use a caret (^) to indicate the power of 10. You can also enter numbers in e notation. Examples: 3.45 x 10^5 or 3.45e5.
What is the scientific notation for 0.005600?
The number 0.005600 converted to scientific notation is 5.600 x 10^-3. Note that we do not remove the trailing 0s because they were originally to the right of the decimal point and are therefore significant figures. E notation is basically the same as scientific notation except that the letter e is substituted for “x 10^”.
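All of the conversions above follow one rule: shift the decimal point until exactly one nonzero digit remains to its left, and count the shifts. As a minimal sketch (the function name to_scientific is ours, not from the calculator described above), Python's built-in e-notation formatting can do the counting:

```python
def to_scientific(value: float) -> str:
    """Format a number as scientific notation, e.g. 650000000 -> '6.5 x 10^8'."""
    mantissa, exp = f"{value:e}".split("e")      # e.g. '6.500000e+08'
    mantissa = mantissa.rstrip("0").rstrip(".")  # drop trailing zeros from the mantissa
    return f"{mantissa} x 10^{int(exp)}"

print(to_scientific(650_000_000))  # 6.5 x 10^8
print(to_scientific(0.00042))      # 4.2 x 10^-4
print(to_scientific(5920))         # 5.92 x 10^3
```

Note that this sketch strips trailing zeros, so it does not preserve significant figures the way the 0.005600 example above does; keeping them would require tracking the input's precision separately.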
|
{"url":"https://heimduo.org/how-do-you-write-something-in-scientific-notation/","timestamp":"2024-11-07T20:34:25Z","content_type":"text/html","content_length":"139291","record_id":"<urn:uuid:098953ee-8fa6-4494-9e5c-97f84db76a53>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00005.warc.gz"}
|
This project is driven by these notes. They give an overview of the project’s objectives, structure, and team, and cover the background of space, gravity, energy, and time. Since they are “working notes,” a significant portion of the material used to develop these theses first appears in them. These materials offer the comprehensive context and supporting data required to comprehend and assess the CA framework’s tenets and implications.
These notes are informal meanderings: searching for ideas, errors, and new insights, trying various proofs, and mixing the mysterious with the possible. Many are placeholders for what is yet to come, while others discuss ideas whose time has passed its “best if used by” date. They may represent a fleeting thought or the core of a new idea.
As placeholders, the notes are divided into sections, each of which has a distinct function in helping to solve the enigmas around gravity and energy:
This project operates within an informal framework, facilitating a systematic exploration of gravity’s intricacies. With a keen emphasis on organization, we harness the collective power of data, techniques, and insights to unravel the mysteries of gravitational phenomena. Our approach is guided by clarity, precision, and an unwavering commitment to uncovering new understandings of the universe.
By focusing on fundamental principles, we navigate the complexities of gravity with clarity and precision. This approach paves the way for practical understanding and groundbreaking discoveries. Our
methodology embraces innovative techniques, providing a fresh perspective on energy acceleration and deceleration within diverse energy densities.
Overview New data and techniques lead to new insights.
Description Exploring gravity’s essence through E=mc², revealing its link to energy over mass.
Definition A new project is initiated.
Scope Focused on fundamental principles, avoiding complexity for practical understanding.
In this section, we explore the dynamic process and advanced tools essential for unraveling the mysteries of the cosmos within the framework of Charge Admittance. Rooted in a rich tapestry of
scientific evolution spanning over a century, our approach represents a quantum leap beyond traditional physics paradigms. At the core of CA lies a revolutionary departure from conventional wisdom,
where the fixed speed of light (c) is dynamically influenced by the density of the ε₀μ₀ field.
Morphological Thinking Embracing multiple perspectives to deepen reasoning.
AI-Driven_Scientific Inquiry Explore how AI is revolutionizing scientific research through collaboration and innovation.
Aristotelian Null Start Ideas currently excluded from our framework, reviewed for relevance as our understanding and observations evolve.
Refined Iteration Exploring Truths, Assessing Realities: Navigate the Depths of Inquiry.
Possibilities Exploring the Boundless Frontiers of Mathematical Speculation.
Assessments Exploring Truths, Assessing Realities: Navigate the Depths of Inquiry.
Working Assumptions Those working ideas which deserve further consideration.
Axioms What we know provides a logical basis for exploration.
Scientific exploration throughout history has provided a deep understanding of gravity. From the observations of early astronomers to the precise measurements of modern instruments like telescopes
and particle accelerators, scientists have established fundamental principles governing the universe’s structure and dynamics. These principles, along with groundbreaking theories like General
Relativity, form the bedrock for our current understanding of gravity.
This project builds upon this foundation by introducing the concept of Charge Admittance (CA) as a key factor influencing gravity. Data from many sources provides evidence for this theory.
Future sections will delve deeper into this evidence and how it supports the core tenets of CA.
People Key figures, most relevant to understanding gravity, include Newton, Maxwell, Einstein, Planck, and Lorentz.
Apparatus Instruments to study gravity, telescopes, and particle accelerators provide invaluable data.
Experiments Physical tests elucidate scientific phenomena, revealing methodology behind discovery and analysis.
Formulas Classic equations from Faraday to Einstein synchronize our comprehension of the cosmos.
Effects Pragmatic exploration of nuanced manifestations contributing to our understanding of physics.
Laws Foundational physics understandings outlining key principles such as conservation of energy and thermodynamics.
Theories Chronological exploration of pivotal theories from Aristotle to the Big Bang
Gedanken Mind Experiments – the mental exploration, exemplified by Einstein and Schrödinger.
Mysteries The unknowns that keep you awake at night.
SI Numbers The standards of the universe designed by a committee.
Welcome to “The Review” section, where we embark on a journey through the enigmas and frontiers of scientific exploration. Here, we confront the mysteries of the cosmos, challenge prevailing
paradigms, and delve into speculative realms of thought. From unraveling historical missteps to pondering audacious conjectures, we navigate the uncharted territories of knowledge, driven by
curiosity and the pursuit of truth. Join us as we embark on an intellectual adventure, seeking answers to the profound questions that shape our understanding of the universe.
Follow this fascinating discussion in the links below:
Bamboozles The quicksand of physics. Those ideas whose time has passed.
Unknowns The new unknowns as a result of the Quantum Admittance working assumption.
Conjecture Venture into speculative realms exploring audacious ideas.
Deductions Unveil logical outcomes drawn from fundamental principles.
Explorations Engage with “Mind Experiments” to look at new ideas through thought-provoking perspectives.
Clues Hints from great ideas that inspire a solution.
Observations Things seen that add to our understanding.
Imagine a science that recognizes a difference between the vacuum of space and that in a laboratory on the surface of the earth. This one small difference would open an entire new perspective in the
idea of universal constants. It is on this idea that we present the concept of Charge Admittance. Further, imagine a universe where gravity isn’t a force pulling objects together, but a consequence
of energy flow. Charge Admittance, a groundbreaking approach, challenges our understanding of gravity by proposing it as an emergent phenomenon arising from the interplay between fundamental
constants. This framework departs from the mass-centric view, interpreting gravity as an energy phenomenon influencing space-energy density.
Universe The composite of time energy created reality.
Energy The aspects of energy related to the CA Theory.
Time An unending river of flowing energy, measured by an intelligently designed interval for absolute reference.
Space A manifestation of time and energy, the empty arena holding all we see and feel.
Particles The smallest structures of the dipole lattice.
Numbers Like the teeth on cosmic gears, drive the machinery of the universe.
Principles Classical physics forms the cornerstone of QA.
Postulates The underlying understandings of Charge Admittance theory.
Assumptions The beginning of ideas that make a theory.
Requirements Meeting basic physics prerequisites, essential for achieving understanding.
Mathematics Those equations that bring it together.
Mechanisms Structures, levers, and gears of the universe that make it work.
Imagine the profound insights gained from a new perspective on the organization and structure of the universe. It is akin to opening the other eye. The implications of Charge Admittance extend far
beyond redefining gravity. By interpreting gravity as an energy phenomenon influencing space-energy density, this framework opens an entirely new perspective on universal constants. Explore the
far-reaching consequences of this revolutionary theory and its potential to reshape our understanding of the universe.
This subtle yet significant shift introduces a new perspective on universal constants through the concept of Charge Admittance. Imagine a universe where gravity is not a force pulling objects
together, but a consequence of energy flow. Charge Admittance, a groundbreaking approach, challenges our current understanding of gravity by proposing it as an emergent phenomenon arising from the
interplay between fundamental constants. This framework departs from the traditional mass-centric view, interpreting gravity as an energy phenomenon that influences space-energy density.
Accomplishments Discover the groundbreaking discoveries and achievements resulting from the Charge Admittance theory.
Features Learn about the key features and unique aspects of the Charge Admittance approach to gravity.
Applications Learn about what the concepts mean for the physical world.
Implications Explore the profound implications of Charge Admittance for the field of physics and beyond.
Predictions Investigate the predictive power of the Charge Admittance theory and its potential to forecast new phenomena.
Proofs Examine the evidence and proofs that support the validity of Charge Admittance as a revolutionary scientific theory.
QA Tests Review the experimental tests and methodologies employed to validate the Charge Admittance theory and its predictions.
Possibilities New universe ideas are possible. The formation and age of the universe are at stake.
Solutions Problems now closer to solution using the concept of Charge Admittance.
This appendix provides supplementary information for a deeper understanding of Quantum Admittance. It includes:
FAQ Answers to questions about the energy admittance universe.
Glossary The definition of terms as used on this website.
AI review Quantum Admittance ideas in the light of the latest technology.
|
{"url":"https://gravityz0.com/home/notes/","timestamp":"2024-11-06T07:30:11Z","content_type":"text/html","content_length":"172501","record_id":"<urn:uuid:989f51f3-6dc6-49c6-bb58-4b975ff7a04d>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00574.warc.gz"}
|
Particle systems and systemic risk
Systemic risk in the banking system is the risk that small losses and defaults can escalate through endogenous effects to cause an event affecting large parts of the financial sector. We will
consider some simple particle system models for the interactions between banks and show how this leads to stochastic McKean-Vlasov equations describing the whole system. The systemic risk can be
captured through a loss function and we will show that this can have unexpected behaviour in different models.
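As a hedged illustration of the kind of model described here (not the speaker's actual model), the following sketch simulates a standard mean-field toy system: each bank's "distance to default" is pulled toward the group mean (interbank lending), so individual banks are stabilized, but rare systemic events occur when the empirical mean itself crosses a default barrier. All parameter names and values are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 100, 500, 0.02            # banks, time steps, step size
kappa, sigma, barrier = 5.0, 1.0, -0.7   # mean reversion, noise, default level

x = np.zeros(n)   # distance-to-default of each bank
hit = False
for _ in range(steps):
    mean = x.mean()
    # interbank borrowing/lending pulls each bank toward the group mean;
    # Euler-Maruyama step of dX_i = kappa*(mean - X_i) dt + sigma dW_i
    x += kappa * (mean - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    # systemic event: the whole system drifts below the barrier together
    hit = bool(hit or (x.mean() <= barrier))

print("systemic event observed:", hit)
```

Increasing kappa makes each bank individually safer but concentrates the risk in rare events where the entire system crosses the barrier at once, which is qualitatively the kind of unexpected loss-function behaviour the abstract alludes to.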
|
{"url":"https://talks.ox.ac.uk/talks/id/925fb523-8dcd-4aae-871e-d325a1dccd82/","timestamp":"2024-11-07T00:57:27Z","content_type":"text/html","content_length":"11939","record_id":"<urn:uuid:81be2107-f908-42d3-949e-85001fffba35>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00831.warc.gz"}
|
Re: st: combining multiple surveys - adjustment of weights
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: combining multiple surveys - adjustment of weights
From Steven Samuels <[email protected]>
To [email protected]
Subject Re: st: combining multiple surveys - adjustment of weights
Date Fri, 28 Jan 2011 09:04:12 -0500
On Jan 27, 2011, at 11:45 PM, Dana Shills wrote:
> I am trying to combine multiple country survey datasets (10 surveys) to
> estimate ols regressions of firm performance across the 10 countries. 8 of
> the surveys are stratified random samples of firms in the 8 countries (so
> the strata identifier in each dataset goes from 1 to some number N, that
> varies) and 2 others are simple random samples. The survey questionnaire
> is the same across countries.
> 1> Statistically are there any issues with combining multiple surveys when
> the sampling methods are different?
Not if the primary sampling units were the same in all surveys.
> 2> To declare the combined dataset in Stata as a survey, I create a new
> strata variable that doesn't overlap between the surveys (as suggested in
> the previous thread). But how should I deal with the probability weights
> provided in each dataset?
See below.
> Suppose I assume that the sum of the weights in each country (countrysum)
> represents the population, then can I just rescale each weight by (sum of
> weights across all countries)/countrysum?
Don't assume: Check! If the weight sum in each country is approximately or exactly equal to the population size (number of firms), don't rescale. Use the weights as given.
(Modification: if you know the population sizes exactly and had <100% response, you might apply non-response adjustments. With SRS, the simplest is to create a new weight= N/n' where N is the known
population size in the stratum and n' is the achieved sample size. For other methods, see section 8.5 of Sharon Lohr, Sampling: Design and Analysis, 1999 or 2010 editions, Brooks/Cole, Boston.)
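That simplest adjustment can be illustrated numerically. The population and sample sizes below are invented for illustration, not taken from the thread: with a known stratum population of N firms and an achieved sample of n' respondents, each respondent's new weight is N/n', so the weights sum back to the stratum population.

```python
# Non-response adjustment under SRS within a stratum: new weight = N / n'
N_pop = 5000       # known number of firms in the stratum (hypothetical)
n_achieved = 400   # respondents actually obtained (hypothetical)

new_weight = N_pop / n_achieved
print(new_weight)                # 12.5
print(new_weight * n_achieved)   # 5000.0, i.e. weights sum to the population
```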
> 3> I am running weighted regressions using the svy: regress command on the
> combined dataset. If the sample size varies greatly between the surveys,
> should I rescale each firm level variable in the regression by the sample
> size in that country or does svy take care of this?
Stata's survey commands will take care of it.
Steven J. Samuels
[email protected]
18 Cantine's Island
Saugerties NY 12477
Voice: 845-246-0774
Fax: 206-202-4783
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"https://www.stata.com/statalist/archive/2011-01/msg00972.html","timestamp":"2024-11-14T16:42:58Z","content_type":"text/html","content_length":"12709","record_id":"<urn:uuid:e3b0ae11-4e76-4a8c-9a56-948c36385cce>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00612.warc.gz"}
|
A car repair shop charged $80 per hour for labor and $210 for the parts of a car. If the total cost was $490, how many hours did it take to repair the car? | HIX Tutor
A car repair shop charged $80 per hour for labor and $210 for the parts of a car. If the total cost was $490, how many hours did it take to repair the car?
Answer 1
$3\frac{1}{2}$ hours
Subtract the cost of parts from total cost. This will leave us with the cost of labour.
⇒ $490 − $210 = $280 ← cost of labour
Dividing cost of labour by $80 will give the number of hours spent on repairing the car.
⇒ 280/80 = 3.5 ← number of hours
Thus 3.5 (or 3½) hours were spent repairing the car.
Answer 2
Here the total cost is $490 and the cost of the parts is $210. So, the total repair charge = $490 − $210 = $280.
The repair shop charges $80 for 1 hour of labor. Therefore the number of hours required to repair the car = $280 ÷ $80 = 3½ hours.
Answer 3
Let ( x ) represent the number of hours it took to repair the car.
The total cost of labor is given by ( 80x ) dollars.
Given that the total cost of parts is $210 and the total cost is $490, the total cost of labor and parts is ( 80x + 210 ).
Setting up the equation:
[ 80x + 210 = 490 ]
Subtracting 210 from both sides:
[ 80x = 280 ]
Dividing both sides by 80:
[ x = \frac{280}{80} ]
[ x = 3.5 ]
Therefore, it took 3.5 hours to repair the car.
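As a quick sanity check (not part of the original answers), the arithmetic can be verified in a couple of lines:

```python
labor_rate, parts, total = 80, 210, 490
hours = (total - parts) / labor_rate
print(hours)                         # 3.5
print(labor_rate * hours + parts)    # 490.0, so the equation balances
```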
|
{"url":"https://tutor.hix.ai/question/a-car-repair-shop-charged-80-per-hour-for-labor-and-210-for-the-parts-of-a-car-i-8f9af8e62b","timestamp":"2024-11-07T23:11:07Z","content_type":"text/html","content_length":"587375","record_id":"<urn:uuid:aa0f5862-401d-469c-8256-272bb831dbc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00898.warc.gz"}
|
17.3 Superstrings
Learning Objectives
Learning Objectives
By the end of this section, you will be able to do the following:
• Define Superstring theory
• Explain the relationship between Superstring theory and the Big Bang
Introduced earlier in GUTS: The Unification of Forces, Superstring theory is an attempt to unify gravity with the other three forces and, thus, must contain quantum gravity. The main tenet of
Superstring theory is that fundamental particles, including the graviton that carries the gravitational force, act like one-dimensional vibrating strings. Since gravity affects the time and space in
which all else exists, Superstring theory is an attempt at a Theory of Everything (TOE). Each independent quantum number is thought of as a separate dimension in some super space, analogous to the
fact that the familiar dimensions of space are independent of one another—and is represented by a different type of Superstring. As the universe evolved after the Big Bang and forces became
distinct—spontaneous symmetry breaking—some of the dimensions of superspace are imagined to have curled up and become unnoticed.
Forces are expected to be unified only at extremely high energies and at particle separations on the order of $10^{-35}$ m. This could mean that Superstrings must have dimensions or wavelengths of this size or smaller. Just as quantum gravity may imply that there are no time intervals shorter than some finite value, it also implies that there may be no sizes smaller than some tiny but finite value; that may be about $10^{-35}$ m. If so, and if Superstring theory can explain all it strives to, then the structures of Superstrings are at the lower limit of the smallest possible size and can have no further substructure. This would be the ultimate answer to the question the ancient Greeks considered: there is a finite lower limit to space.
Not only is Superstring theory in its infancy, it deals with dimensions about 17 orders of magnitude smaller than the $10^{-18}$ m details that we have been able to directly observe. It is thus relatively unconstrained by experiment, and there are a host of theoretical possibilities to choose from. This has led theorists to make choices
subjectively, as always, on what is the most elegant theory, with less hope than usual that experiment will guide them. It has also led to speculation of alternate universes, with their Big Bangs
creating each new universe with a random set of rules. These speculations may not be tested even in principle, since an alternate universe is by definition unattainable. It is something like
exploring a self-consistent field of mathematics, with its axioms and rules of logic that are not consistent with nature. Such endeavors have often given insight to mathematicians and scientists
alike, and occasionally have been directly related to the description of new discoveries.
|
{"url":"https://texasgateway.org/resource/173-superstrings?binder_id=78881","timestamp":"2024-11-02T15:09:04Z","content_type":"text/html","content_length":"40197","record_id":"<urn:uuid:132a6165-aeb7-41b7-8110-9350bbfe1267>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00878.warc.gz"}
|
Reply To: Bent Taper - Nazca Design
24 November 2019 at 12:20 #5852
Dear Paul,
There are infinite ways to draw curves (waveguides) between two points. The cobra_p2p is one that specifically does not use a constant bend radius but smoothly goes from a straight waveguide to a
curved waveguide and finds an optimal path according to some strategy, as described inside the function.
Hence, the Viper is a good approach to do what you want, but it is not yet an Interconnect. A proper Nazca Interconnect setup has a number of levels to make it work as effortlessly as possible for the designer using it. It uses a xsection and stores a number of properties that are reused when creating mask_elements, like strt, bend, or cobra_p2p.
The following structure gives a short overview on what an Interconnect definition is built upon:
A- xsection:
– add layers and their growth values
– add width, radius, etc.
B- mask_element template function:
– provide a xsection (e.g. get from Interconnect (C))
– provide default values (e.g. get from Interconnect (C))
– mask_element function:
– – the mask_element is a “closure”, i.e. the function and its environment are saved when creating it.
– – loop over all xsection layers:
– – – apply grow
– – – create and discretize the polygon based on the layer
– – add pins and other properties
C- Interconnect (class):
– add a xsection, see (A)
– add default values (get from xsection where possible)
– Interconnect methods:
– – create an interconnect cell from a mask_element cell (B) and neither cell will be instantiated (by choice).
– – add pins to the interconnect cell
Note that mask_elements exist without Interconnects. In other words, an Interconnect object is just a wrapper around mask_elements to group these elements together inside a single technology.
If we now turn to the Viper, we find that it creates a spine or path [x(t), y(t)]. From that it creates a polygon by adding a width w(t) to the spine. The Viper still needs proper pin directions and polygon geometry at the begin and end points, proper discretization w.r.t. the mask resolution, a loop over all layers in a xsection, etc.
From the user perspective the layout code for a bent waveguide as described in your example simplifies to something as in the example below. It generates the layout shown after the code.
Note that the radius, width1, and width2 are nicely incorporated as keywords for tapered_bend() to make it possible to overrule the defaults.
tapered_bend(angle=90, width2=20, N=1000).put()
tapered_bend(angle=-300, radius=60, width1=20, width2=0.5, N=2000).put()
The complete code to set it up is shown below. The viper functions x, y, w can be adapted to more or less any sensible set of functions for drawing a waveguide. The example demonstrates part A and B
in the above explained structure. Putting it in Interconnect C is straight forward after this step and omitted in this example.
Note that the keywords “anglei” and “angleo” in viper() require Nazca 0.5.8 or later. You can run the code without them in earlier Nazca versions (with the viper) and notice that the viper polygons then do not connect “well enough” when there is curvature at the begin and/or end point, because the polygon angles no longer match the pins’ “exact” angle values at those points.
A next step, not in this example, could be to calculate the minimum N for a specific maximum resolution; here you can take N “large enough”. If this function is called from an Interconnect object you would not use nd.get_xsection(xs).width but the Interconnect’s settings.
import numpy as np
import math as m
from functools import partial

import nazca as nd
from nazca.interconnects import Interconnect


def Tp_tapered_bend(x, y, w, radius=100.0, angle=10.0, width1=None, width2=None,
        xs=None, layer=None, N=200):
    """Template for a specific Viper implementation.

    Args:
        x (function): function in at least t, t in [0, 1]
        y (function): function in at least t, t in [0, 1]
        w (function): function in at least t, t in [0, 1]

    Returns:
        function: Cell generating function
    """
    def tapered_bend(radius=radius, angle=angle, width1=None, width2=None,
            xs=xs, layer=layer, N=N):
        """Specific Viper implementation.

        Returns:
            Cell: Cell based on a Viper
        """
        # N: discretization steps, should be "large enough" for the mask resolution
        name = 'viper_bend'

        # housekeeping for default values (add as needed):
        if width1 is None:
            width1 = nd.get_xsection(xs).width
        if width2 is None:
            width2 = nd.get_xsection(xs).width

        # Fill in all x, y, w function parameters except t:
        X = partial(x, radius=radius, angle=angle)
        Y = partial(y, radius=radius, angle=angle)
        W = partial(w, width1=width1, width2=width2)

        # Store begin and end points of the viper.
        # Note-1: begin and end angle of the viper spine should be taken
        #   infinitesimally close to the begin and end.
        # Note-2: the polygon's begin and end edge should be perpendicular
        #   to the local angles from Note-1.
        xa, ya = X(0), Y(0)
        xb, yb = X(1), Y(1)
        d = 1e-8
        aa = m.degrees(m.atan2(Y(0) - Y(d), X(0) - X(d)))
        ab = m.degrees(m.atan2(Y(1) - Y(1 - d), X(1) - X(1 - d)))
        AB = ab - aa - 90
        if AB < -180:
            AB += 180

        # create the Cell:
        with nd.Cell(name=name, cnt=True) as C:
            for lay, grow, acc, line in nd.layeriter(xs, layer):
                (a1, b1), (a2, b2), c1, c2 = grow
                xygon = nd.util.viper(
                    X,
                    Y,
                    lambda t: W(t) + b1 - b2,  # add the layer growth
                    N=N,
                    anglei=-aa,  # remove for <0.5.8
                    angleo=AB,   # remove for <0.5.8
                )
                nd.Polygon(points=xygon, layer=lay).put(0)
            nd.Pin(name='a0', type=0, width=width1, xs=xs, show=True).put(xa, ya, aa)
            nd.Pin(name='b0', type=1, width=width2, xs=xs, show=True).put(xb, yb, ab)
        return C
    return tapered_bend


if __name__ == "__main__":
    # add a technology:
    XS = nd.add_xsection('xs')
    XS.width = 1.0
    XS.radius = 100.0
    nd.add_layer2xsection(xsection='xs', layer=1)
    nd.add_layer2xsection(xsection='xs', layer=2, growx=4.0)
    ic = Interconnect(xs='xs')

    # create a specific viper-based mask_element:
    def x(t, radius, angle):
        return radius * np.cos(t * angle / 180 * np.pi)

    def y(t, radius, angle):
        return radius * np.sin(t * angle / 180 * np.pi)

    def w(t, width1=None, width2=None):
        return width1 + (width2 - width1) * t

    tapered_bend = Tp_tapered_bend(x, y, w, xs=ic.xs)

    # put waveguides:
    tapered_bend(angle=90, width2=20, N=1000).put()
    tapered_bend(angle=-300, radius=60, width1=20, width2=0.5, N=2000).put()
|
{"url":"https://nazca-design.org/forums/reply/reply-to-bent-taper/","timestamp":"2024-11-14T06:50:07Z","content_type":"text/html","content_length":"219727","record_id":"<urn:uuid:0554f33b-909e-42a4-99bc-ad0bf95378e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00543.warc.gz"}
|
O Level Maths - The Complete Guide to O Level Math in Singapore
Free Request For O Level Maths Tuition
O Level Maths – The Complete Guide to O Level Math in Singapore
When O Level Maths exams are around the corner, many students and parents feel frantic, stressed, and overwhelmed. Anxious thoughts surface: Have I done enough preparation for O Level Maths? Have I studied all the topics? What is the formula for this equation? Students often feel lost about what to expect in the actual O Level Examination. As students find themselves struggling with the syllabus, most turn to O Level Maths Tuition or Maths Tuition in Singapore to aid their studies.
Additionally, as most would know, the GCE O-Level Examination is an annual National Examination that students in Singapore have to take before deciding their next journey after completing their
Secondary School. Fret not, in this article, we will provide you with a complete guide to O Level Maths in Singapore on what to expect and how you should prepare yourself for the examination.
1. Overview to O Level Maths
There is no doubt that many students have questioned the importance of O Level Maths: why is it even necessary when, in the future, they might not do anything related to Maths? Thoughts such as "What is Maths?", "Why do we need Maths?" and "Why is Maths so crucial in our lives?" have crossed many of our minds. Maths, in general, refers to the "science and study of quality, structure, space, and change" (click here to learn more about the definition of Maths). Many students dislike Maths: they find it boring and difficult to understand because numbers and formulas do not excite them. However, Maths is not just about calculating numbers. From a different perspective, Maths helps us develop our analytical skills, critical thinking skills, and the ability to reason.
Maths is generally either loved or hated by students; there is rarely an in-between. However, Maths is undoubtedly an important subject in the O Level Examination and plays a massive role in the overall aggregate results. There are two kinds of O Level Maths: Elementary Maths and Additional Maths. Regardless of which Maths a student takes, the guide in this article applies to both.
O Level Maths is all about understanding the topics and their concepts rather than memorisation. Understanding the concepts is paramount and plays a vital role in guiding students to good grades. In Maths, the answers are constant and unchanging: as long as students can produce the answers with the correct working, they will get the marks they deserve. Maths rewards consistency and hard work; with constant practice, students will improve and eventually do well in the O Level Maths Examination. Here at MindFlex, we have over 20,000 experienced O Level Maths Tutors who can guide and prepare you or your child for the O Level Maths Examination.
2. Objectives of O Level Maths
According to the SEAB Syllabus, the purpose of Elementary and Additional Maths is to ensure that the students can:
1. Acquire mathematical concepts and skills
2. Develop thinking, reasoning, and metacognitive skills
3. Connect ideas within Maths, and between Maths and other subjects, through the application of Maths
4. Appreciate the abstract nature and power of Maths
The key objectives are as follows:
Elementary Maths (E Maths)
1. Understand and apply mathematical concepts and skills in different contexts
2. Organise and analyse data and information by using the appropriate techniques of solutions
3. Solve higher-order thinking problems by making inferences, writing mathematical explanation and arguments
Additional Maths (A Maths)
1. Understand and apply mathematical concepts and skills in different contexts
2. Organise and analyse data and information by using the appropriate techniques of solutions
3. Solve higher-order thinking problems by making inferences, writing mathematical explanation and arguments and proof.
More information on O Level Additional Maths and Elementary Maths can be found here.
3. O Level Maths Exam Format
In O Level Maths, there are two papers to be completed, namely Paper 1 and Paper 2.
3.1. Elementary Maths (E Maths)
Paper 1 (2Hrs, 80 marks, 50% weightage)
There will be about 25 short answer questions. Candidates are required to answer all questions.
Paper 2 (2Hrs 30 Min, 100 marks, 50% weightage)
There will be 10 to 11 questions of varying marks and lengths. The last question in this paper will focus specifically on applying mathematics to a real-world scenario. Candidates are required to
answer all questions.
3.2. Additional Maths (A Maths)
Paper 1 (2Hrs, 80 marks, 44% Weightage)
There will be 11–13 questions of varying marks and lengths. Candidates are required to answer ALL questions.
Paper 2 (2Hr 30mins, 100 marks, 56% Weightage)
There will be 9–11 questions of varying marks and lengths. Candidates are required to answer ALL questions.
3.3. Topics Tested for E/A Maths
Elementary Maths (E Maths):
1. Numbers and their operations
2. Ratio and proportion
3. Percentage
4. Rate and speed
5. Algebraic expressions and formulae
6. Functions and graphs
7. Equations and inequalities
8. Set language and notation
9. Matrices
10. Angles, triangles and polygons
11. Congruence and similarity
12. Properties of circles
13. Pythagoras' theorem and trigonometry
14. Coordinate geometry
15. Vectors in two dimensions
16. Problems in real-world contexts
17. Data analysis

Additional Maths (A Maths):
1. Equations and inequalities
2. Indices and surds
3. Polynomials and partial fractions
4. Binomial expansions
5. Power, exponential, logarithmic, and modulus functions
6. Trigonometric functions, identities and equations
7. Coordinate geometry in two dimensions
8. Proofs in plane geometry
9. Differentiation and integration
3.4. Challenges Faced in O Level Maths
It is no surprise that O Level Maths, whether E Maths or A Maths, is one of the most challenging subjects for O Level students. This is because O Level Maths covers a great number of topics, concepts, and formulas (especially O Level Additional Maths) that students have to study and understand before taking the examination. With so many topics to learn, it can be stressful and overwhelming if students do not pace themselves and prepare adequately for the actual examination. O Level Maths seems challenging and complex to score in mainly because it takes a lot of time, energy, and practice to get a good grasp of the mathematical concepts. One reason students struggle during the examination is a lack of basic foundation in either E Maths or A Maths. In class, students who are weak in Maths tend to fall behind because they have not grasped the previous concepts by the time the teacher moves on to the next topic. A weak foundation in the fundamental concepts of E Maths or A Maths leaves students lost and unable to answer the O Level Examination Paper questions correctly.
3.5. How to score well for O Level A/E Maths
As mentioned previously, the key to doing well in O Level Maths, both E Maths and A Maths, is hard work, consistency, and constant practice. To score well, the one important rule is to have a clear understanding of the concepts of the Maths topics rather than depending on memorisation. Ask yourself: if you cannot understand the concepts you are learning in class, how can you understand the mathematical questions and what they are looking for in your answers? Memorisation does not help students develop a deeper understanding of mathematical concepts. Understanding the Maths concepts, formulas, and question types is paramount to scoring well in the O Level Maths examination, because it tells students which mathematical formula or method to apply to each question. As the topics progress, the difficulty level of each topic will increase as well. As such, make sure you have a strong foundation from the very first topic, so that when you progress to the next one you have an easier grasp of the more advanced Maths concepts. It may be a challenging journey, but do not give up!
4. O Level Maths Tips & Resources
Maths is one of the essential subjects in Singapore, and many students have a challenging and frustrating time preparing for the O Level Maths Exam. However, that does not always have to be the case. In this article, we have compiled useful tips and resources that will benefit your O Level Maths Exam preparation and might also make your life a little easier!
4.1 O Level Maths Tips
1. Make a revision plan
Having a goal without a plan is just a wish. If you intend to score well in your O Level Maths Exam, planning is crucial to achieving that. Creating a revision plan and sticking to it helps you build discipline and achieve your study goals. Because O Level Maths has a long list of topics and content, students are advised to break Maths up into small, manageable pieces so that it is easier to revise. This makes revision planning for Maths much more effective. For example, students can break the syllabus up by topic, focusing their revision on two topics each week by making notes, doing practice questions, and so on. Students can also schedule a fixed time every day or week to review and revise their work, which helps them stay organised. If you are aiming for an A1 in your O Level Maths Examination, what are you waiting for? Start planning now and get started on your revision!
2. Seek help when you are in doubt!
Many students shy away from asking questions when they are in doubt, which is a no-no. O Level Maths requires students to have solid fundamental maths concepts, and by not asking questions when in doubt, you are only holding yourself back from achieving a good grade. One important thing to note: NEVER hesitate to ask questions or clarify whenever you realise that you do not understand the questions or topics taught. Seek help from your friends, family members, teachers, tutors (click here to find out more about O Level Maths Tutors), or anyone you know who can provide the help you need for your learning and revision.
3. Never cram your studies
A bad habit that students tend to develop is studying last minute for a test or an exam. If you did not know, cramming is one of the worst possible ways to prepare for an examination, especially the O Levels, because it creates unnecessary stress and panic, affecting your performance. When students cram, they tend to trade sleep for more study time, which negatively impacts their concentration throughout the day. Some students might argue that cramming helps them remember better because the information is still fresh in their minds. However, that is certainly not the case: cramming reduces the effectiveness of long-term memory and the ability to recall what you have learned.
As such, our O Level Maths Tutors strongly advise students to start preparing for the exam in advance and to space out their revision plan over that period. This ensures that students have sufficient time to learn and master each topic, with time reserved for their brain to rest before moving on to the next topic. Proper rest helps the brain retain information for longer. Click here to learn why cramming is a bad idea.
4. Practice like never before
Practice, practice, and practice, until you get it right! To ensure that you are 101% ready for the O Level Maths Exam, dedicate a portion of your time daily or weekly to practising either selected Maths questions or O Level Maths Exam Papers. This increases your exposure to the kinds of questions that might appear in the actual examination. Besides, by doing more practice papers, you will start to spot trends and find out which types of questions appear regularly. Constant practice shows you how well you understand the Maths concepts and gives you a gauge of how ready you are for the actual examination. In addition, you will learn which areas or topics you are good at and which you are weak in, so that you can spend more time and attention on the latter.
4.2. O Level Maths Resources
Here, MindFlex has compiled the top 5 resources for students in their O Level Maths preparation.
1. Elementary Maths (E Maths) Ten-Year Series
The E Maths TYS is one of the go-to Maths materials we recommend students use. It is a compilation of previous O Level E Maths exam papers, and working through it helps students gauge their readiness for the actual examination.
2. Additional Maths (A Maths) Ten-Year Series
Similarly, students who would like to prepare fully for the O Level A Maths exam should purchase an A Maths TYS and complete all the questions. This way, students learn the pattern and trend of O Level A Maths exam papers, which gives them a heads-up on what to expect in the O Level.
3. FREE O Level Maths Papers
Getting O Level materials can sometimes be a bit pricey. Not to worry: at MindFlex, we have provided a link to a list of free O Level Maths Exam Papers that you can download and practise with anytime!
Download the O Level Maths papers here.
4. O Level Maths Tuition
The most common approach students take to prepare for their O Level Maths Exam is taking Maths lessons at tuition centres. Students who have missed any of their Maths classes in school, or who feel they are lagging behind in class, should get additional help. An experienced tutor can focus on and improve the areas in which a student is weak. Get the help you need before it is too late!
5. MindFlex O Level Maths Home Tutors
If you or your child requires a more personalised teaching programme, why not try MindFlex Home Tutoring? At MindFlex, we have over 20,000 O Level Maths Tutors who provide students with individualised lessons that suit their learning needs and pace. With fewer distractions compared to a tuition centre, students are able to focus well and receive their tutor's fullest attention.
5. Conclusion
When the O Level Maths Examination is around the corner, students often start to feel panicky, and parents begin to feel the stress of their child's preparation. We understand that, and would like to provide your child with the support they need on their journey to prepare for the O Level Examination. Over at MindFlex, we provide O Level Maths Tutors experienced in preparing students for the O Level Examination, where students receive personalised attention and more productive learning. This gives students a stronger foundation in the Maths concepts and more confidence when sitting the actual examination.
artificial intelligence and quantum information theory
There are many common points in the mathematical background of both artificial intelligence and quantum computing, e.g., the functional analysis techniques. Although my field of research is quantum
cryptography, I am open to any collaboration with people working in artificial intelligence concerning the above-mentioned common mathematical background (to implement in Isabelle/HOL). In
particular, I like Pierre-Louis Lions' approach: Une vision mathématique du "Deep Learning"
Last updated: Nov 11 2024 at 01:24 UTC
House Construction Cost Calculation Excel Sheet - Civiconcepts
House Construction Cost Calculation Excel Sheet. Free Download Excel Sheet
When you plan to construct your new house, many questions arise in your mind: How much will it cost me? What will the house construction cost be? Is the contractor demanding more funds from me than necessary? And so on.
Then you think about approximately calculating the construction cost of your house. For that, I am sharing this excel sheet in which you can calculate the approximate construction cost of your house. It is a free-to-download residential construction cost estimator excel sheet for India: an excel sheet to calculate the cost to build a house.
Construction Cost per Square Foot
Generally, the cost of house construction primarily depends on the type and quality of construction required by homeowners.
C class: Local low-class bricks and sand, low-cost cement and steel, cheapest fixtures and fittings
B class: Local grade bricks and sand, medium rate cement and steel, medium cost fixtures and fittings
A class: best in class resources
Typical C Class: The cost of house construction in this class is about 700 to 800 rupees per square foot. So, a 1000 sq. ft. area costs between 7 and 8 lakhs to build.
Typical B Class: The rate of construction in this class is about 1000 to 1100 rupees per square foot. So, a 1000 sq. ft. area costs between 10 and 11 lakhs.
Typical A Class: For this high-quality construction, the rate is around 1500 to 2500 rupees per square foot. So, a 1000 sq. ft. area costs between 15 and 25 lakhs.
How to use House Construction Cost Calculation Excel Sheet
Follow the below steps to calculate the cost of construction.
1. Calculate the area of your plot (Length x Width).
2. Convert the area to sq. ft. (enter the area in sq. ft. only).
3. Enter the area of your plot in the red-coloured excel cell below "Enter your house area in sq. ft.".
4. Enter your area's construction cost per sq. ft. rate.
5. Check your construction cost at the bottom of the table.
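The spreadsheet's calculation is simple enough to sketch in a few lines of Python. The per-class rates below are the figures quoted above, and the function name is illustrative, not from the workbook:

```python
# Approximate per-square-foot rates in rupees, from the classes described above.
RATES = {"C": (700, 800), "B": (1000, 1100), "A": (1500, 2500)}

def construction_cost(length_ft, width_ft, quality_class):
    """Return a (low, high) estimated cost in rupees for a plot of the given size."""
    area = length_ft * width_ft       # step 1: plot area in sq. ft.
    low, high = RATES[quality_class]  # step 4: rate per sq. ft.
    return area * low, area * high    # step 5: total cost range

# A 25 ft x 40 ft (1000 sq. ft.) plot at B-class rates:
low, high = construction_cost(25, 40, "B")
# -> 10,00,000 to 11,00,000 rupees (10 to 11 lakhs), matching the figures above.
```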
Read More: Calculate House Construction Cost
Estimated Construction Cost per Square Foot
Free Download Excel sheet House Construction Cost Calculation:
Watch Video: House Construction Cost Calculator Excel Sheet
How do I estimate the cost of building a house?
Calculating the cost of building a house is a tedious process. First of all, we need to know the market price of each material to be used in construction. The process is simplified by working in steps, and starting the calculation from the foundation is beneficial. However, using the excel sheets available online may solve your problem easily.
How much does a 1500 square foot house cost?
The cost of construction depends widely on the type of house to be constructed, the quality of the material used, and the region in which construction is carried out. However, on average it might cost you around $100,000 to $110,000 for a 1500 square foot house. Refer to your architect or engineer to know the estimated cost of the house.
What is the cost of construction per sq ft?
The average cost of construction per square foot is between Rs. 1500 and Rs. 5000, or $20 to $70. However, this might also vary depending on various construction factors.
You May Also Like:
Creating a Parliament Chart in Tableau
Parliament charts are a common method for showing the results of an election or the current makeup of a parliamentary government. These charts show each seat of the parliament in a semi-circle,
similarly to the way seats might be arranged in a parliamentary chamber. For example, Wikipedia uses the following chart on their page for the 116th Congress of the United States:
Before I even started this visualization, however, I had been working on a way to templatize these charts for use in Tableau. My primary goal was to make the template completely flexible so that you can show any number of seats and control the number of rows, the spacing between rows, the size of the dots, and the ordering/coloring. Unfortunately, this proved to be really difficult (for reasons I'll get to shortly), so I never quite reached the finish line until now.
As is normal for a complex chart like this, I went through a number of iterations to reach the final product. My first attempt was to use some fairly basic trigonometry to create the chart.
In this version, you can specify the number of rows, then it simply plots an equal number of seats on each arc. While this is okay, it had a couple of fundamental problems. First, it looks a bit
strange when you don’t have enough seats to fill each row. For example, here’s the same chart with 12 rows instead of the 15 shown above:
A second issue is that, as you extend outward, the space between each dot gets greater. This is all space that could be filled and leaves the chart looking a bit sparse. So, I set out to come up with
an option that would do the following:
1) Fill each entire row automatically.
2) Ensure equal spacing between each dot.
And here’s where it gets tricky. Because we’re drawing dots on multiple arcs that increase in radius, we’ll need to draw a different number of dots on each arc in order to maintain consistent
spacing. Thus we need some math to determine how many dots to place on each arc, depending on a variety of different parameters being made available to the user (number of seats, number of rows,
spacing, etc.).
To work through this, I started out by thinking how this would work if we were to draw each dot along a single horizontal line, kind of like this:
What would then happen if we were to break these up into multiple rows and wrap them around a semi-circle? To do this, we first need to know the number of arcs (or rows), so I created a parameter allowing the user to specify this number. We then need to know the length of each of those arcs. We can do that using the math to calculate the circumference: C = 2πr. Since this is just half a circle, our formula becomes C = πr. To get the radius, I created a parameter for the radius of the first row (set to 1, by default) and another to specify the spacing between each row. Using these, we can determine the radius and, therefore, the length of each arc.
From here, we get into some much more complex math, which I personally hadn't encountered since high school. I'm not going to go into a ton of detail here due to the complexity, but I'll do my best to explain it at a high level. To work out how many dots go on each row, we can sum an arithmetic series over n rows (much like triangle numbers). In this case, however, the resulting equation mixes n and n² terms. This makes it a quadratic equation and, to solve it, we need to use the quadratic formula.
Have I lost you? Sorry, this is tricky stuff and it took me a very long time to work through it myself. I’m not going to go into any further detail about the quadratic formula, but feel free to read
through the link provided above if you’re interested in gaining a better understanding of how it works.
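For the curious, here is one reconstruction of how that quadratic can arise, using the parameters described above (inner radius, row spacing, number of rows); the symbols are my own, not the workbook's calculated fields:

```latex
% Row i of n has radius r_i = r_1 + (i-1)s and arc length \pi r_i.
% With a common linear dot density c (dots per unit of arc length),
% the total number of dots T over all n rows is
T = \sum_{i=1}^{n} c\pi\bigl(r_1 + (i-1)s\bigr)
  = c\pi\Bigl(n r_1 + s\,\frac{n(n-1)}{2}\Bigr),
% which mixes n and n^2 terms. Rearranged,
\frac{c\pi s}{2}\,n^{2} + c\pi\Bigl(r_1 - \frac{s}{2}\Bigr)n - T = 0,
% a quadratic in n, solvable with the standard quadratic formula.
```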
Having used the quadratic formula to solve my equation, I now knew how many dots to place on each row. Here is the same flat line colored by row (3 rows):
Notice that the number of dots increases for each row, exactly as we had planned.
In order to plot these on arcs, we’ll perform some trig. For that, we need the radius and the angle. We already have the radius for each row, so we need to determine an angle (from 0 to 180°) for
each dot. We can do this by dividing 180 by the number of dots on the row. With that, we can complete the trig to plot them on their arcs.
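Putting the steps above together, the layout logic can be sketched outside Tableau. This is a sketch under the stated assumptions (seats split across rows in proportion to arc length, then spread evenly from 0° to 180°), not the workbook's actual calculated fields:

```python
import math

def parliament_layout(total_seats, rows, inner_radius=1.0, spacing=0.5):
    """Return a list of (x, y) positions, one per seat, row by row."""
    radii = [inner_radius + i * spacing for i in range(rows)]
    arc_total = sum(radii)  # arc length is pi * r, so pi cancels in the ratio
    # Distribute seats across rows in proportion to each row's arc length.
    counts = [round(total_seats * r / arc_total) for r in radii]
    counts[-1] += total_seats - sum(counts)  # absorb rounding error in the outer row
    points = []
    for r, n in zip(radii, counts):
        for k in range(n):
            # Spread n dots over 0..180 degrees, offset to centre them on the arc.
            theta = math.pi * (k + 0.5) / n
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = parliament_layout(100, 3)
```

Note how the outer rows automatically receive more dots, which is exactly what keeps the spacing between dots consistent from row to row.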
When we connect to some real data (in this case, the 116th US Senate), we get this:
This has one fundamental flaw. Ideally, as in previous examples, each party would be confined to one area. We’d start with say the Democrats (blue) then fill in from the left, then add in the
Independents (yellow), then fill in the rest with Republicans (red). But the above starts from the bottom up, rather than left to right.
The problem is that this is how the arcs are drawn—starting with the first arc and drawing left-to-right, then moving to the next arc, etc. as shown below.
My data set was organized with a seat # and ordered the way I wanted to show them on the chart. So, in order to get them ordered properly, I’d need to either manually number each seat to force the
grouping or find a new way to solve the math.
And that’s where I got stuck…
A Solution
This sat on the shelf for over a year, but after completing another project, I recently decided to take another look. Once again, I pored over the math in an attempt to find a way to reorder the numbers. I realized that I could use an INDEX table calculation, computed by the Angle, to essentially renumber the dots in the desired manner.
Ideally, I’d be able to then place each seat # (from the data) in the right location. To do that, I ended up performing a Cartesian join (or cross-join) of the data to itself. A Cartesian join
essentially matches each record in one table with each record in another. So, this gave me each seat matched up with every other seat. I then wrote a calculated field to throw out the records that
are not needed (those which don’t match the chart order shown above). This finally gave me the order I needed.
Unfortunately, this solution still had one flaw. In order to populate the chart in the right order, the data needed to be sorted and numbered sequentially according to that sort order (in the case above, the party). While it's probably not unreasonable to ask people to perform such data prep before building one of these charts, I decided to take it one step further and make the chart sortable by any dimension. For instance, if your data had gender in it, you might want to sort and color by gender instead of party. To do this, I created another INDEX calculation to determine the proper ordering of the data, then updated the calculation noted earlier which filters out the unneeded records.
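The reordering idea is easier to see outside Tableau: compute every dot's angle, sort the positions left to right (largest angle first, since 180° is the far left of the arc), and pair them with the seats already sorted by the chosen dimension. All the data below is made up for illustration:

```python
# positions: (row, angle_in_degrees) for every dot on the chart;
# seats: already sorted by the dimension we want to group by (party here).
positions = [(0, 150), (0, 90), (0, 30), (1, 160), (1, 120), (1, 60), (1, 20)]
seats = ["D1", "D2", "D3", "I1", "R1", "R2", "R3"]

# Left-to-right order across all rows: larger angle first.
left_to_right = sorted(positions, key=lambda p: -p[1])
assignment = dict(zip(left_to_right, seats))
```

With this pairing, each party ends up confined to one wedge of the semi-circle, which is the grouping the Cartesian-join approach achieves inside Tableau.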
The Template
As promised, I’ve created a template for this chart. As with all of my templates, it includes an Excel workbook and a Tableau workbook. The Excel file looks like this:
Note: The template comes pre-populated with members of the 116th US House of Representatives.
It contains the following fields:
Seat # – Unique identifier for the seat. No specific order is required—just make sure you have the ID and that it is sequential (don’t skip any numbers) and unique.
Party – Political party.
Member – Name of the member.
Dimensions 01-10 – These are user-defined dimensions you can use for whatever you like. My example above uses them for State, Gender, Race, Age, Religion, and Orientation. These are built into the
sorting/coloring capabilities of the Tableau template, so be sure not to change the names in the Excel file (you can change them in Tableau, if you like).
Once you’ve populated all of your data, download the Tableau template, edit the data source and connect to your new data. In Tableau, you have a few different options for tuning the look and feel of
your chart, using the following parameters:
Rows – Number of rows/arcs.
Inner Radius – Radius of the innermost arc. You can adjust this in order to push the inner arc outward or inward.
Row Spacing – Spacing between each row/arc. This is added to the inner radius to get the radius of each subsequent row/arc. Adjust this in conjunction with the inner radius in order to create the
perfect spacing.
Size – Size of each dot. Ranged from 1 to 100.
Order By – Dimension on which you wish to order/color the chart. The parameter automatically includes Party and Dimensions 1-10. By default, the values will be sorted in ascending order. If you need
to change the sort order, modify the sorting of the Order Field pill on the detail card.
Note: Dimensions 1-10 are all on the detail card in Tableau. This is to simplify the ordering/sorting if you choose to use one of these dimensions. It’s best to just leave all of them on detail to
avoid the need to modify how the table calculations compute.
And that's pretty much all you need to know. I will tell you that, due to the need for a Cartesian join—which significantly increases the number of records in the data set—and the table calculations—which add a lot of complexity—the performance isn't always great. For smaller data sets, you won't see many problems, but larger parliaments may take a few seconds to compute the chart.
As you’ve seen, this was incredibly difficult to create. It’s certainly some of the most complex math I’ve applied to a chart. Additionally, as noted above, the need for a Cartesian join and multiple
table calcs results in a chart that doesn’t perform quite as well as might be desired. But, perhaps more importantly, this simply may not be the best chart type for visualizing parliamentary makeup.
The biggest problem with the chart is that its curvature makes it somewhat more difficult to read than other chart types (similarly to a pie chart). Thus, there are other alternatives that probably
work better. And, as a bonus, they are much easier to create! So, let me provide a couple of those alternatives here.
In some cases, you may just want to show aggregate totals, with no concern for individual members. In that case, a simple bar chart would be more than sufficient.
If the part-to-whole relationship is really important, then a stacked bar chart (or even a pie chart) might work.
I’d personally avoid this option for parliaments with more than 2 or 3 parties, unless your goal is to show some huge party juxtaposed with many other very small ones.
If you need to show individual members, then both of these charts can easily be converted into unit charts, allowing you to hover over each individual member for further details.
All this being said, I think there are times when a parliament chart might make a lot of sense. For one, it is something we generally recognize because of the shape—when I see one, I immediately
assume that I’m looking at a country’s parliament. Ultimately, I think the problems with the chart are quite similar to the problems with pie charts, so following some of the same guidance is
advisable. For instance, they can be effective when showing 2 or 3 parties (as with the US system), but there are much better alternatives for countries with more parties, such as Brazil, whose
Chamber of Deputies currently has 25 different parties, resulting in a parliament chart, which is very difficult to read.
A bar chart is clearly a better alternative here.
I’d just advise you to think very carefully before using this chart type. As always, closely consider your data, your goal, and your audience first.
Ken Flerlage, January 27, 2020
13 comments:
1. Many thanks! I actually realized the utility of Parliament charts now :-)
I tried to create it in R, and it's actually possible to do it out of the box, as I've described in this post.
2. Many thanks too - for sharing this incredible fine tableau-art and all the work behind!
3. An easier way to index is to simply sort by angle, especially when fewer, larger parties are represented
4. How can I draw the parliament chart on graph paper?
1. Good question! Probably going to need a protractor!
2. What is your email address so I can send you a message?
3. flerlagekr@gmail.com
5. Thank you so much for that! I'm a beginner user of Tableau and I was able to input the data and get the correct results.
I wonder if there's a way to use 2 filters at once, if there is, could you point me on how to do it?
For example, when sorted by party, to sort by gender inside each party.
1. Ken FlerlageOctober 5, 2022 at 1:40PM
Could you email me? flerlagekr@gmail.com
6. Thanks for your work, which helped me to solve it in Excel as well, where this chart type is a kind of x-y scatter chart or, as I used to make it, a waffle chart. Stefan
1. Great! Yeah, it's a scatterplot in Tableau as well.
7. Hi Ken. Thanks for the awesome viz! I have a quick question. I tried to recreate this chart for the current iteration of the US House of Reps, where we currently have 4 vacant seats. When I plot
it, though, the viz only shows one dot for "Vacant", which I'm assuming is because Tableau treats 4 identical "Vacant" rows as a single entry. What would be the best workaround? Thank you again
1. I would think that should still work, as long as you have a row for each Vacant seat and they each have a unique Seat #. Is that the case? If so, could you email me your spreadsheet and your
Tableau workbook so I can take a closer look? flerlagekr@gmail.com
stream2
Compute 2-D streamline data
XY = stream2(X,Y,U,V,startX,startY) returns streamline data as a 2-D matrix of vector fields. The inputs X and Y are vector data coordinates, U and V are vector data, and startX and startY are the
starting positions of the streamlines.
XY = stream2(U,V,startX,startY) uses the default coordinate data for U and V. The (x,y) location for each element in U and V is based on the column and row index, respectively.
XY = stream2(___,options) computes 2-D streamline data using the specified options, defined as a one- or two-element vector with the form step or [step maxvert], where step is the step size in data
units for interpolating the vector data and maxvert is the maximum number of vertices in a streamline. Use this argument with any of the input argument combinations from the previous syntaxes.
Compute 2-D Streamlines
Load the wind data set, which contains measurements of air current over regions of North America.
• 3-D arrays x and y represent the locations of air current measurements.
• 3-D arrays u and v represent the velocity of the air current in 3-D vector fields.
Use the fifth page of the arrays. Define the starting position of four hypothetical particles. In this case, the four starting locations are (80, 20), (80, 30), (80, 40), and (80, 50).
load wind
x5 = x(:,:,5);
y5 = y(:,:,5);
u5 = u(:,:,5);
v5 = v(:,:,5);
[startX,startY] = meshgrid(80,20:10:50);
Compute the 2-D streamline vertex data for a hypothetical particle placed into the air current with stream2.
verts = stream2(x5,y5,u5,v5,startX,startY);
Visualize the 2-D matrix of vector fields by calling streamline. Return the line objects in the variable lineobj, so you can change their properties later.
lineobj = streamline(verts);
To change aspects of a particular line, set properties on one of the returned line objects. For example, change the color of the second line to magenta and change its style to dashed.
lineobj(2).Color = "m";
lineobj(2).LineStyle = "--";
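Conceptually, stream2 advances a hypothetical particle through the vector field, interpolating the velocity at each step. The following is a rough NumPy sketch of that idea, not MATLAB's actual algorithm: it uses simple forward-Euler stepping with bilinear interpolation on the default index-based grid, and the `trace_streamline` name and its defaults are assumptions for illustration.

```python
import numpy as np

def trace_streamline(U, V, x0, y0, step=0.1, maxvert=10000):
    """Trace one 2-D streamline through a vector field on a uniform grid.

    U, V are 2-D arrays of x- and y-velocity; grid coordinates are the
    column/row indices (like stream2's default). Forward-Euler stepping
    with bilinear interpolation; stops at the grid boundary.
    """
    ny, nx = U.shape
    verts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(maxvert - 1):
        if not (0 <= x <= nx - 1 and 0 <= y <= ny - 1):
            break  # left the data domain
        i, j = int(min(y, ny - 2)), int(min(x, nx - 2))
        fx, fy = x - j, y - i

        def interp(F):
            # Bilinear interpolation of field F at the point (x, y).
            return ((1 - fx) * (1 - fy) * F[i, j] + fx * (1 - fy) * F[i, j + 1]
                    + (1 - fx) * fy * F[i + 1, j] + fx * fy * F[i + 1, j + 1])

        u, v = interp(U), interp(V)
        norm = np.hypot(u, v)
        if norm == 0:
            break  # stagnation point
        x, y = x + step * u / norm, y + step * v / norm
        verts.append((x, y))
    return verts

# Uniform rightward flow: the streamline is a horizontal line.
U = np.ones((5, 5)); V = np.zeros((5, 5))
pts = trace_streamline(U, V, 0.0, 2.0, step=0.5)
```

This mirrors the role of the step and maxvert options described below: a larger step lowers resolution, and maxvert caps the number of vertices computed.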
Specify Step Size for 2-D Streamlines
Load the wind data set, which contains measurements of air current over regions of North America.
• 3-D arrays x and y represent the locations of air current measurements.
• 3-D arrays u and v represent the velocity of the air current in 3-D vector fields.
Use the fifth page of the arrays. Define the starting position of four hypothetical particles. In this case, the four starting locations are (80, 20), (80, 30), (80, 40), and (80, 50).
load wind
x5 = x(:,:,5);
y5 = y(:,:,5);
u5 = u(:,:,5);
v5 = v(:,:,5);
[startX,startY] = meshgrid(80,20:10:50);
Decrease the streamline resolution by increasing the step size from the default of 0.1 to 3.
step = 3;
Compute the 2-D streamline vertex data for a hypothetical particle placed into the air current with stream2 and step.
verts = stream2(x5,y5,u5,v5,startX,startY,step);
Visualize the 2-D matrix of vector fields with streamline. The larger step size results in a lower resolution streamline.
Specify Maximum Number of Vertices for 2-D Streamlines
Load the wind data set, which contains measurements of air current over regions of North America.
• 3-D arrays x and y represent the locations of air current measurements.
• 3-D arrays u and v represent the velocity of the air current in 3-D vector fields.
Use the fifth page of the arrays. Define the starting position of four hypothetical particles. In this case, the four starting locations are (80, 20), (80, 30), (80, 40), and (80, 50).
load wind
x5 = x(:,:,5);
y5 = y(:,:,5);
u5 = u(:,:,5);
v5 = v(:,:,5);
[startX,startY] = meshgrid(80,20:10:50);
Increase the streamline resolution by decreasing the step size from the default of 0.1 to 0.01.
step = 0.01;
Set the maximum number of vertices so that computation ends after the first 1,000 vertices are calculated.
maxvert = 1000;
Compute the 2-D streamline vertex data for a hypothetical particle placed into the air current with stream2, step, and maxvert.
verts = stream2(x5,y5,u5,v5,startX,startY,[step maxvert]);
Visualize the 2-D matrix of vector fields with streamline. Show the full range of data values by setting the axis limits. The streamlines end after 1,000 vertices are calculated, so the streamlines
stop before showing the full range of data.
xlim([75 135])
ylim([15 65])
Input Arguments
X — x-axis coordinates of vector data
2-D array
x-axis coordinates of vector data, specified as a 2-D array. It must be monotonic, but does not need to be uniformly spaced. X must be the same size as Y, U, and V.
You can use the meshgrid function to create X.
Y — y-axis coordinates of vector data
2-D array
y-axis coordinates of vector data, specified as a 2-D array. It must be monotonic, but does not need to be uniformly spaced. Y must be the same size as X, U, and V.
You can use the meshgrid function to create Y.
U — x-components of vector data
2-D array
x-components of vector data, specified as a 2-D array. U must be the same size as X, Y, and V.
V — y-components of vector data
2-D array
y-components of vector data, specified as a 2-D array. V must be the same size as X, Y, and U.
startX — x-axis streamline starting positions
scalar | vector | matrix
x-axis streamline starting positions, specified as a vector or matrix. startX must be a scalar or be the same size as startY.
startY — y-axis streamline starting positions
scalar | vector | matrix
y-axis streamline starting positions, specified as a vector or matrix. startY must be a scalar or be the same size as startX.
options — Streamline options
[0.1 10000] (default) | one-element vector | two-element vector
Streamline options, specified as a one- or two-element vector with one of the following forms: step or [step maxvert].
step is the step size used to adjust the streamline resolution and determine the vertex locations at which streamline velocity is interpolated. maxvert is the maximum number of vertices calculated
for a streamline before computation stops.
The default step-size is 0.1, and the default maximum number of vertices in a streamline is 10,000.
Extended Capabilities
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
The stream2 function supports GPU array input with these usage notes and limitations:
• This function accepts GPU arrays, but does not run on a GPU.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Distributed Arrays
Partition large arrays across the combined memory of your cluster using Parallel Computing Toolbox™.
Usage notes and limitations:
• This function operates on distributed arrays, but executes in the client MATLAB®.
For more information, see Run MATLAB Functions with Distributed Arrays (Parallel Computing Toolbox).
Version History
Introduced before R2006a
Quantum Atom Model
This lecture covers the quantum atom model, starting with the Bohr model and its limitations, then introducing the Schrödinger equation for multi-electron systems. It explains the concept of
orbitals, quantum numbers, electron configurations, and the Pauli exclusion principle. The lecture also delves into the distribution of electrons in orbitals, the filling of energy levels, and the
electronic configuration of elements according to the periodic table.
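The filling rules the lecture describes (aufbau order with the Pauli exclusion limit of two electrons per orbital) can be sketched programmatically. This is an illustrative helper, not from the lecture; it uses the Madelung (n + l) subshell order and ignores known exceptions such as Cr and Cu.

```python
# Subshells in Madelung (n + l) filling order, enough for Z <= 36.
SUBSHELLS = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]
CAPACITY = {"s": 2, "p": 6, "d": 10}  # 2 electrons per orbital (Pauli)

def electron_configuration(z):
    """Ground-state configuration of an atom with z electrons,
    filled by the aufbau principle."""
    parts = []
    for sub in SUBSHELLS:
        if z <= 0:
            break
        n = min(z, CAPACITY[sub[-1]])  # fill the subshell or use up z
        parts.append(f"{sub}{n}")
        z -= n
    return " ".join(parts)

print(electron_configuration(8))   # oxygen: 1s2 2s2 2p4
print(electron_configuration(19))  # potassium: 1s2 2s2 2p6 3s2 3p6 4s1
```

Potassium illustrates the Madelung order well: the 4s subshell fills before 3d.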
The value of $\frac{1}{81^n} - \frac{10}{81^n}\,{}^{2n}C_1 + \frac{10^2}{81^n}\,{}^{2n}C_2 - \frac{10^3}{81^n}\,{}^{2n}C_3 + \dots$ | Filo
Question Text: The value of $\frac{1}{81^n} - \frac{10}{81^n}\,{}^{2n}C_1 + \frac{10^2}{81^n}\,{}^{2n}C_2 - \frac{10^3}{81^n}\,{}^{2n}C_3 + \dots$ is
Updated On: Apr 21, 2023
Topic: Binomial Theorem
Subject: Mathematics
Class: Class 11
Answer Type: Text solution: 1, Video solution: 1
Upvotes: 170
Avg. Video Duration: 1 min
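Assuming the question is the standard identity its garbled title suggests (an alternating binomial sum of powers of 10 over $81^n$ with $^{2n}C_k$ coefficients), its value follows directly from the binomial theorem:

```latex
\frac{1}{81^n}\sum_{k=0}^{2n} \binom{2n}{k}(-10)^k
  = \frac{(1-10)^{2n}}{81^n}
  = \frac{9^{2n}}{81^n}
  = \frac{81^{n}}{81^{n}}
  = 1.
```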
Creating Real Mathematicians - New York State Test Prep for Mathematics. Grade 3
New York State Test Prep for Mathematics. Grade 3
Mathematical Understanding
New York State
Test Preparation.
Grade 3
In 21 easy thirty minute Student Centered Lessons
covering all Pre-March Grade 3
New York State Mathematics Standards.
(See here for the New York State Pre & Post Standards for Mathematics
Test Preparation for GRADE 3 Mathematics
Week 1
Lesson 1:
State Standard: 3N1, 3N2, 3N3
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I bought the numbers 6,3 & 9 to put on my mailbox. What number could my house be?
Choose 3 different numbers and try it again.
State Standards Practice:(25 min.)
Ask the students to write the first ten skip counting numbers for 25’s, 50’s and 100’s to 1000.
Eg. 0
On completion ask students to write the word for each number for the 25 Skip Counting sequence.
Eg. 0 Zero
25 Twenty Five
50 Fifty
75 Seventy Five
100 One Hundred
125… One Hundred Twenty Five…etc.
(After 15 min.)
Select 5 numbers between 0 and 1000.
Put in order from smallest to largest.
Write each number in word form.
Eg. 5 27 99…etc.
Five Twenty Seven Ninety Nine…etc.
Share students solutions. (5 min.)
Test Preparation for GRADE 3 Mathematics
Week 1
Lesson 2:
State Standard: 3N5
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Write down all you know about the number 24.
State Standards Practice:(25 min.)
Ask the students for different ways to decompose 246
Eg. 200 + 46
200 + 20 + 20 + 6
100 + 100 + 46
50 + 50 + 100 + 20 + 20 + 3 + 3
Ask students to write their own 3 digit number and decompose as many ways as they can.
Share students solutions. (5 min.)
Test Preparation for GRADE 3 Mathematics
Week 1
Lesson 3:
State Standard: 3N7, 3N8, 3N9
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Add two numbers together. What might the answer be?
Try it again with different numbers.
State Standards Practice:(30 min. total)
Ask the students to write the 1x and 0x tables
Eg. 1 x 1 = 1
1 x 2 = 2
1 x 3 = 3…etc.
And 0 x 1 = 0
0 x 2 = 0
0 x 3 = 0…etc.
Share students solutions. (5 min.)
Ask why this makes sense?
Why is zero groups of 5 equal to zero? (0 bags of 5 apples)
Why is one group of any number always that number? (1 bag of 5 apples)
Have students write written explanations (to check understanding).
Ask students to write the number sentence for “ I had 5 toys and found 3 more. How many do I have now?”
5 + 3 = 8
Ask students to write the number sentence for “ I had 3 toys and found 5 more. How many do I have now?”
3 + 5 = 8
Share by asking "what's the difference?"
Get the students to draw the picture and explain in writing "What's the difference between 'I have 3 bags of 5 candies' and 'I have 5 bags of 3 candies'?"
Discuss, draw and explain why (2 + 3) + 4 (2 apples and 3 apples and then I get 4 more) is the same as 2 + (3 + 4) (2 apples and then I got 3 and 4 more)(Associative Property of Addition)
Test Preparation for GRADE 3 Mathematics
Week 1
Lesson 4:
State Standard: 3N10, 3N11, 3N12, 3N13, 3N14
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I have a handful of Jellybeans. One half of them are yellow. What does my handful of Jellybeans look like?
State Standards Practice:(30 min. total)
Ask the students to draw 6 dots.
Ask students to split them into 2 groups and color in one group.
Eg .● ● ● ○ ○ ○
Ask the students what fraction have we colored and why?
3/6 or ½. Because we have 3 out of 6 or one out of two.
Ask what the bottom number is representing in the diagram? (the # of groups)
What about the top number? (The number of groups colored / selected)
Draw another 6 dots and split into 3 groups. Color in one group.
Ask the students what fraction have we colored and why?
2/6 or 1/3. Because we have 2 out of 6 or one out of three.
Again have them explain in terms of what the bottom number represents and the top number represents.
Draw another 6 dots and split into 3 groups. This time color in two groups.
Ask the students what fraction have we colored this time and why?
4/6 or 2/3. Because we have 4 out of 6 or two out of three. Again have them explain in terms of what the bottom number represents and the top number represents.
Get students to show each of the following with 6 dots:
½, 2/2
1/3, 2/3, 3/3
1/6, 2/6, 3/6, 4/6, 5/6, 6/6.
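For teachers who want to check the equivalent fractions behind the 6-dot exercise, Python's standard `fractions` module reduces fractions automatically; this is a quick verification aid, not part of the lesson itself.

```python
from fractions import Fraction

# Each equivalence from the 6-dot exercise: shaded dots / total dots.
assert Fraction(3, 6) == Fraction(1, 2)  # 3 of 6 dots is one half
assert Fraction(2, 6) == Fraction(1, 3)  # 2 of 6 dots is one third
assert Fraction(4, 6) == Fraction(2, 3)  # 4 of 6 dots is two thirds

print(Fraction(4, 6))  # automatically reduced: 2/3
```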
Test Preparation for GRADE 3 Mathematics
Week 1
Lesson 5:
State Standard: 3N12
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
My Mom cuts my sandwich into equal pieces. Draw what my lunch might look like.
State Standards Practice:(30 min. total)
Ask students to draw 5 pizzas and cut each into different fractions and label.
Eg. ½, ¾, 2/6, etc.
Share solutions.
Discuss what the Numerator and Denominator mean.
1 (Numerator): the number of pieces selected
2 (Denominator): the total number of pieces or groups
Get students to write own understandings of Numerator and Denominator with an example.
Have students draw a long straight Twizzler.
Have them show how they could cut it in ½, 1/3, 2/3, ¼, 2/4, ¾.
This is to aid understanding of fractions on a Number Line.
Test Preparation for GRADE 3 Mathematics
Week 2
Lesson 1:
State Standard: 3N16, 3N17
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Ms. Gonzalez says Odd numbers are better than Even numbers.
Is she right?
(There is no correct answer, we are looking for understanding of odd and even and why they may prefer one over another).
State Standards Practice:
Provide each student with their own 100’s Chart.
Ask them to color in all the Even Numbers one color and all the Odd Numbers another color.
Get them to write what they found.
Quickly share some findings.
Ask students to choose any two even numbers and add them together. Share findings. (All students should have had an Even Answer)
Ask students to choose any two odd numbers and add them together. Share findings. (All students should have had an Even Answer)
Ask students to choose one even and one odd number and add them together. Share findings. (All students should have had an Odd Answer)
Have students summarize these findings in their own words.
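The three findings the students summarize (even + even = even, odd + odd = even, even + odd = odd) can be spot-checked with a short Python loop over random pairs; this is a verification aid for the teacher, not classroom material.

```python
import random

# Check each parity rule on many random examples.
for _ in range(1000):
    a = random.randrange(0, 100, 2)  # a random even number
    b = random.randrange(0, 100, 2)  # another random even number
    assert (a + b) % 2 == 0          # two evens sum to even
    assert (a + 1 + b + 1) % 2 == 0  # two odds sum to even
    assert (a + b + 1) % 2 == 1      # even + odd is odd

print("all parity rules hold")
```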
Test Preparation for GRADE 3 Mathematics
Week 2
Lesson 2:
State Standard: 3N18
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Rank these problems from Hardest to Easiest:
10-4, 100-4, 57+3, 9 + 2 + 18 – 10, 1,000 + 1,000, 14 + 26.
State Standards Practice:
Provide each student with their own 100’s Chart.
Ask them to solve
“I saw 18 monkeys and then I saw 9 more. How many did I see?”
Share strategies. (Eg. Counting up 9 by 1’s, Counting up 10 and back 1)
Have students make up own number problem (self differentiating), Write and solve. Ask them to try and find another way to find the same answer.
Share problems and strategies.
Ask students to do another but using 3 digit numbers.
Discuss using Counting Up to find the solution to 100 – 96. (Beginning from 96. count 97, 98, 99, 100. 4 jumps, so it is 4)
Have students use Counting Up to find:
100 – 95
100 – 80
110 – 80
111 – 79
211 – 79
Have students do some of own and share.
Test Preparation for GRADE 3 Mathematics
Week 2
Lesson 3:
State Standard: 3N19
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Start at any number and create a skip counting pattern.
(Eg, 22, 27, 32, 37, 42, …)
State Standards Practice:
Skip count with the class by 3’s, 4’s and 5’s to 100 guiding on the class 100’s Chart.
Provide each student with their own 100’s Chart.
Ask them to color in all the skip counting numbers for 3’s onto their own chart.
Then repeat for 4’s and 5’s.
Discuss the patterns colored.
Play “BUZZ” with the class for 3’s, 4’s and 5’s.
Have students Skip Count by 3’s (or 4’s or 5’s) with each successive student saying the next skip count number in turn. Students sit if they say a number ending in zero. Continue until one student is
left standing.
E.g., Student 1 says 4
Student 2 says 8
Student 3 says 12
Student 4 says 16
Student 5 says 20 and sits down
Student 6 says 24
And so on.
Test Preparation for GRADE 3 Mathematics
Week 2
Lesson 4:
State Standard: 3N20, 3N21
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Five animals are standing behind a fence. How many legs might you see? (Answers depend on number of 2 and 4 legged animals used)
State Standards Practice:
Ask the students to draw a picture to represent and solve 3 X 5. (Eg. 3x5 array, 5x3 array, 3 groups of 5, 5 groups of 3).
Share representations.
Ask students to show as many ways to solve 4 x 7 as they can.
Ask students to draw their own multiplication picture and explain what it represents.
Share with class and ask class to say what each picture represents.
Test Preparation for GRADE 3 Mathematics
Week 2
Lesson 5:
State Standard: 3N22, 3N23
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
The answer to a division question is 2. What might the question be?
Write it as a Number Sentence and as a Word Problem.
State Standards Practice:
Ask the students to draw pictures to represent and solve:
“If the zoo keeper wants the same number of animals in each cage. Show all the ways how the Zoo Keeper could keep 12 monkeys.”
(1 cage of 12, 2 of 6, 3 of 4, 4 of 3, 6 of 2, 12 of 1)
Ask students to work out how to do similar with 36 snakes.
(1 cage of 36 snakes, 2x18, 3x12, 4x9, 6x6, 9x4, 12x3, 18x2, 36x1)
(If 36 too difficult, use 24)
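The cage arrangements are exactly the factor pairs of the number of animals, which a teacher can enumerate quickly in Python to check student answers; the `cages` helper name is made up for this sketch.

```python
def cages(animals):
    """All (number of cages, animals per cage) splits with equal cages."""
    return [(d, animals // d)
            for d in range(1, animals + 1)
            if animals % d == 0]

print(cages(12))       # [(1, 12), (2, 6), (3, 4), (4, 3), (6, 2), (12, 1)]
print(len(cages(36)))  # 9 arrangements for the snakes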
Test Preparation for GRADE 3 Mathematics
Week 3
Lesson 1:
State Standard: 3N25, 3N26, 3N27
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
There are 10 shops in a row.
If you walk up and back 4 times without getting to either end, how many shops might you have passed?
(Answers will vary. Look for correct explanations)
State Standards Practice:(20 min.)
I am walking uptown through Manhattan and need to go to the bathroom.
I know there is a Starbucks at every 10th corner. I.e. at 10th, 20th, 30th, etc.
As I am heading uptown, where would be the closest Starbucks if I am on 9th Street?
18th, 27th, 36th, 45th (keep heading up as it is the same distance to 40th and 50th), 54th, (now closer to go back), 63rd, 72nd, 81st, 90th?
Discuss solutions.
Explain how you would use this estimation strategy to estimate the answer to 27 + 72 = (30 + 70 = 100)
Share students solutions.
Students make up 5 of own and estimate solution. (2 addition, 2 subtraction, 1 multiplication)
Test Preparation for GRADE 3 Mathematics
Week 3
Lesson 2:
State Standard: 3A1
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
If JoJo is older than his Dog, but younger than his Cat, what could the ages of all three be?
State Standards Practice:(20 min.)
Which side of > is larger?
Which side is smaller?
Put these numbers on the correct side:
36 and 12 --------- > -----------
22 and 23 --------- > -----------
Do 5 of own.
Share students solutions.
Do 5 for --------- < -----------
Use the following numbers to fill in the 6 blanks. Each number can only be used once. 2, 5, 5, 7, 9, 10.
--------- > ----------- --------- < ----------- --------- = -----------
Share students solutions.
Students make up inequalities of own.
Test Preparation for GRADE 3 Mathematics
Week 3
Lesson 3:
State Standard: 3A2
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I found a coin on Monday, two coins on Tuesday, 3 on Wednesday, 4 on Thursday and 5 coins on Friday.
How much money might I have found for the week?
(Answers will vary depending on value of coins found)
State Standards Practice:(20 min.)
Using their own 100’s Chart, students complete the following patterns for the next twenty numbers. (Color each pattern in a different color)
2, 5, 8, …
3, 9, 15, …
99, 95, 91, …
Students make up some or own.
Test Preparation for GRADE 3 Mathematics
Week 3
Lesson 4:
State Standard: 3G1
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I found 5 shapes under my bed. How many sides might I have altogether?
State Standards Practice:(20 min.)
Put the following on an Overhead or on the board or onto worksheets for the students.
Get them to match the shape with the name, (individually or in small groups), with one sentence on how they know.
Share student solutions.
Circle, Triangle, Square, Rectangle, Trapezoid, Rhombus, Hexagon.
Draw a picture using more than one of each shape (use imagination) and label the shapes.
Share students work.
Test Preparation for GRADE 3 Mathematics
Week 3
Lesson 5:
State Standard: 3G3
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
JoJo said a Pyramid has five points, while his teacher said it has five sides. Who is correct and why?
(Both are correct if you count the base as a side, otherwise only JoJo is correct. Can be argued both ways)
State Standards Practice:(20 min.)
Draw each of the following showing 3 sides. (Student would benefit from having these 3D shapes on their table to see).
Cube, Cylinder, Sphere, Prism, Cone.
Draw a picture using each of the above shapes and label.
Share students work.
Test Preparation for GRADE 3 Mathematics
Week 4
Lesson 1:
State Standard: 3M1, 3M2, 3M3, 3M4, 3M5, 3M6, 3M10
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
My dog is longer than my cat.
My cat weighs more than my dog.
My dog drinks more than my cat.
Show how tall my dog and cat might be, how much they might weigh and how much they might drink.
State Standards Practice:(20 min.)
Measure Length & height (to ½ inch) 3M2
Mass (ounces and pounds) 3M3
Capacity (cups, pints, quarts & gallons) 3M4, 3M5, 3M6
Provide students with access to the following, preferably at least one set per group.
• 12 inch ruler
• Scales to measure ounces and pounds
• Measuring cup set (cup, pint, quart, gallon)
• Counters to use in measuring (bears, blocks, cubes, etc.)
Have students measure 2 differently shaped open containers that are similar in volume. Try to find one that is short but fat, the other tall but thin.
Ask students to work out which is bigger.
Have students measure length, height, mass and volume of each.
Discuss findings. Which is bigger? Does bigger mean Heavier? Larger Volume? Taller? Wider?
(Note there is no real correct answer to this. Have students engage in discussion)
Test Preparation for GRADE 3 Mathematics
Week 4
Lesson 2:
State Standard: 3M8, 3M9
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I had breakfast last Saturday 3½ hours after I woke up. What time might I have woken up and what time might I have had lunch?
State Standards Practice:(25 min.)
Draw an analogue clock onto a blank page.
Draw your favorite time of day onto the clock.
Explain why this is you favorite time of day.
Write underneath the clock what this time looks like on a digital clock.
Write what time it is ½ an hour after your favorite time of day.
Do the same for ¼ of an hour after, ½ hour before.
Test Preparation for GRADE 3 Mathematics
Week 4
Lesson 3:
State Standard: 3S2, 3S3, 3S4, 3S5, 3S6, 3S7, 3S8
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
The graph in the paper showed more people liked blue M&M's than green.
What might this graph have looked like?
State Standards Practice:(25 min.)
Open a bag of M&M’s and record the number of each color onto a Tally Chart with the following headings.
Color Tally Total
Place onto an Overhead Projector making a pictograph of the colors.
E.g., Blue ●●●●●●●
Green ●●
Yellow ●●●●●
Have students draw a Bar graph showing the colors from the bag.
Test Preparation for GRADE 3 Mathematics
Week 4
Lesson 4:
State Standard: 3G3. 3G4
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I opened my toy box and saw a toy made up of a cylinder, a sphere and a prism.
What might it have looked like?
State Standards Practice: (25 min.)
On the shelf in the shop was a number cube (dice), a cylinder of chips, a sphere of the world, a prism of chocolate and a cone with ice-cream. Draw what the shelf might have looked like and label.
Share findings.
Draw what the shelf would look like from above.
What would it look like from below?
Test Preparation for GRADE 3 Mathematics
Week 4
Lesson 5:
State Standard: 3G5
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
Dad said my picture was symmetrical. What might it have looked like?
State Standards Practice: (25 min.)
Using the alphabet below, how many lines of symmetry in your full name?
(Hint. Look for vertical and horizontal lines).
What word can you find with the most lines of symmetry?
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Test Preparation for GRADE 3 Mathematics
Week 5
Lesson 1:
State Standard: 3M7
Open Response: (5 min.)(Give the problem and let the students try to figure it out writing their solutions onto paper. After a few min. share their solutions)
I was given $1.11 change at the store. What might it have looked like?
State Standards Practice:(30 min.)
How many different ways can you make $1.00 ?
E.g., Q, Q, D, D, D, N, N, N, P, P, P, P, P.
Show as many as you can.
Choose two of you solutions and write in correct Dollar and Cents notation.
E.g., .25 + .25 + .10 + .10 + .10 + .05 + .05 + .05 + .01 + .01 + .01 + .01 + .01 = $1.00
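As a teacher's reference, the total number of such combinations can be counted with a short recursion over coin denominations (quarters, dimes, nickels, pennies); the `ways_to_make` helper is made up for this sketch.

```python
def ways_to_make(total, coins=(25, 10, 5, 1)):
    """Count coin combinations summing to `total` cents."""
    if total == 0:
        return 1   # one way: use no more coins
    if not coins:
        return 0   # cents left over but no denominations remain
    largest, rest = coins[0], coins[1:]
    # Try every possible count of the largest coin, recurse on the rest.
    return sum(ways_to_make(total - k * largest, rest)
               for k in range(total // largest + 1))

print(ways_to_make(100))  # 242 ways to make $1.00 with Q, D, N, P
```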
A non-web based GUI for SAGE
A non-web based GUI for SAGE
Is their any GUI for SAGE which is not web-based and close to Mathematica in features?
The current web-based GUI is good, and its cross-platform capabilities and ability to serve over the network are all pluses. However, when someone coming from a Mathematica background uses this UI, it
feels inferior, mainly because it is not as keyboard-friendly as Mathematica. This is one major reason why many Mathematica users are still hesitant to move to SAGE.
Can someone let me know if there is some other non-web-based UI for SAGE, or if there is work in progress in this direction?
I do not know of any such work.
Have you tried to run Sage in the ipython notebook (now called Jupyter notebook):
sage -n ipython
Regarding keyboard friendship, it has automatic parenthesis completion and colored syntax.
5 Answers
Sort by » oldest newest most voted
You can also use Sage within TeXmacs.
See also the TeXmacs Sage mode page on the Sage wiki.
Finally, see Interfaces to Sage from other software, which I guess could be enriched with the answers that were given here.
This may be an answer to a different question, since i never used Mathematica in mathematical research, (and teaching was done with a University licence,) but please allow it.
First off, it is not clear from the post what kind of UI (user interface) is meant. If something like
is the UI, then just use notebook, web based.
I also have no idea which keyboard shortcuts make the work friendly in Mathematica. Please add this information, then there will be also more pointed answers. Personally, i have the following modi of
research work in sage, and for any of them there is the right key binding solution.
• Quick code to get a quick answer to a relatively simple question (involving a complicated structure that I have already understood). Then I start a Linux terminal and inside it start Sage by
typing sage; this gives the IPython interpreter with automatic tab completion for classes and methods. For instance, after pressing TAB (the tabulator key, twice) we get the list of all methods that can
be applied to the object a,
sage: a = 2018
sage: a.abs<TAB><TAB>
a.abs a.conjugate a.dump
a.additive_order a.coprime_integers a.dumps
a.base_extend a.crt a.euclidean_degree
a.base_ring a.dat a.exact_log
a.binary a.degree a.exp
And there is a direct access to "kernel" information on the used objects:
sage: a.__class__
<type 'sage.rings.integer.Integer'>
Note that a is already a rich Sage object, not a plain Python integer. To get its divisors, we can type a.di, then the TAB(s); the list of the methods is
sage: a.di<TAB><TAB>
a.digits a.divides
a.dist a.divisors
we type three more letters, then TAB again, and finally the (). And here we have them:
sage: a.divisors()
[1, 2, 1009, 2018]
Here, it is very useful to see the names of the methods; a potential student learning, say, something about number fields could type K.<u> = QuadraticField( 2019 ), then K.<TAB>, and see all the
names of methods applicable to this kind of instance. This is already a lot of comfort! Note also that typing a.divisors?
gives a description of the method, and also examples to get started. Moreover, a.divisors?? gives the code, and many Sage users will never switch to Mathematica for this one reason.
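Outside of Sage, the same result can be reproduced in a few lines of plain Python (a naive sketch for illustration only; Sage's Integer.divisors() is of course implemented far more efficiently):

```python
# A plain-Python sketch of what a.divisors() computes for a = 2018.
def divisors(n):
    """Return the positive divisors of n in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(2018))  # [1, 2, 1009, 2018]
```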
Many features of IPython work, e.g. Control+A to get to the beginning of the line, Control+E for its end, Control+R to reverse-search, e.g.
sage: a.divisors()
I-search backward: a.
(Here I typed Control+R, then a.; the last command I had with a. appears, and Enter gets it.) Also, %cpaste allows pasting bigger code chunks into the interpreter. (Shift+Insert inserts them; Enter
would then further evaluate the lines.)
• The second case is when I am trying to get some research-project work done, for instance writing the matrix of some element of the Iwahori-Hecke algebra acting on some special Kazhdan-Lusztig cell.
Then, as in the previous answer of Emmanuel Charpentier, I start emacs, use python mode, and get the best key bindings I need, e.g. automatic completion of names after binding dabbrev-expand to
some key (e.g. F1), and customized syntax highlighting. The code can then be executed either in an emacs shell, or loaded into a Sage terminal.
• The third use case has no parallel in Mathematica. This is when I have a bug. Then I start eclipse and debug inside the one method that failed, step by step. I can initialize somewhere in the
code some Kazhdan-Lusztig cell and run a unit test against the method that failed. (A new dimension opens up: while finding the error, there may be an immediate need to change or extend the code,
and yes, one can do it, for oneself and one's own project, or to share the idea with the world. It is this openness that made and still makes Sage so strong. Even a typo corrected in
an example is a great contribution!)
These three "user interfaces" are all I need in Sage: the IPython interpreter with the Sage preprocessing, emacs as an editor for longer work and the design of my own classes (in an object-oriented
language), and eclipse (or pycharm) for debugging.
A nice GUI that will take a more and more prominent place in the future is jupyterlab. A recent article (Nov 2017) on medium.com speaks about it.
Installation (edited, thanks to tmonteil comment):
sage -pip install jupyterlab
sage -n jupyterlab
There is no jupyterlab package (yet), the install command is:
sage --pip install jupyterlab
Note that it is web-based though.
tmonteil ( 2018-01-16 21:39:16 +0100 )
Is it possible to install this if I'm running the Mac OS X app version (SageMath 8.0)? There is the problem of pip not working with SSL. I tried downloading jupyterlab and installing from the
download path -- as per the instructions at https://ask.sagemath.org/question/38746/sage-pip-not-compatible-with-pypi/?answer=38773#post-id-38773 -- but it
seems to have dependencies, such as jupyterlab_launcher, which it then can't download.
Simon Willerton ( 2018-01-17 11:53:20 +0100 )
I definitely was able to use jupyterlab with my Mac last summer. But my Mac died during the fall, so I can't test anything anymore to help. My guess would be that the easiest for you would be to make
sage -pip work by fixing the SSL problem, maybe by doing sage -f openssl?
Sébastien ( 2018-01-17 14:44:52 +0100 )
Note that if you install openssl via Sage, you have to recompile python2 and pip, so that they can use it:
sage -f python2
sage -f pip
and probably (from Sage tree):
make build
tmonteil ( 2018-01-17 16:13:50 +0100 )
Thanks, that worked! I did sage -f on openssl, python2 and pip. I didn't do the make build. (Although I had to temporarily move /opt/local as sage complained about "found MacPorts in /opt/local/bin/
port".) Then jupyterlab installed fine.
Simon Willerton ( 2018-01-17 21:49:21 +0100 )
sage_shell_mode is a very nice Sage mode for Emacs. Using it is a bit different from using a notebook (but it has some notebook-emulating features, which I don't use very often...), and it is a very
nice way to prepare a text+maths document (thanks to SageTeX, among others), which I tend to prefer to notebooks for "serious" (i.e. intended for publication/lecture/meeting/etc.) use. The
infinite emacs customizability should be enough even for the most rabid Mathematica user.
Furthermore, the openness of Sage allows its use in a zillion ways not possible with Mathematica, which, IMNSHO, is the point...
I know about two:
• Spyder is intended for Python use and it's similar to Matlab in appearance. You can take a look at https://en.wikipedia.org/wiki/Spyder_... and https://github.com/spyder-ide. I think it could be
configured to use Sagemath as a backend, since it's Python-based.
• Cantor is very similar to a Mathematica notebook and it supports Sagemath as a backend. However, it doesn't seem to work properly (at least for me). Check it out at https://en.wikipedia.org/wiki/Cantor_... and https://edu.kde.org/cantor/.
I've posted the following related question: https://ask.sagemath.org/question/406.... If you manage to configure one of them, let us know =).
|
{"url":"https://ask.sagemath.org/question/27213/a-non-web-based-gui-for-sage/?sort=latest","timestamp":"2024-11-14T20:51:42Z","content_type":"application/xhtml+xml","content_length":"94314","record_id":"<urn:uuid:f0f9b5d9-0c0f-4be1-8e20-e1e29c2bfd7b>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00512.warc.gz"}
|
Each currency has a different value from the others, so it is necessary to calculate the value of a pip for each currency pair. We also want a constant reference, so we will assume that we want to
convert everything to US Dollars.
On all the majors, for a position of 1 lot (100,000 of the base currency), each pip is worth $10.
Open position: Buy 1 lot (100,000) EUR/USD at 1.29530
Close position: Sell 1 lot (100,000) EUR/USD at 1.29930
1.29930 - 1.29530 = 40 pips
40 pips x $10 = $400 profit
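The arithmetic above is easy to wrap in a small helper (a Python sketch; the function name is made up, and the fixed $10-per-pip value holds only for a 1-lot position on USD-quoted majors, as stated above):

```python
def pip_profit(open_price, close_price, lots=1, pip_size=0.0001,
               pip_value_usd=10.0):
    """Return (pips gained, USD profit) for a long position."""
    pips = round((close_price - open_price) / pip_size)
    return pips, pips * pip_value_usd * lots

pips, profit = pip_profit(1.29530, 1.29930)
print(pips, profit)  # 40 pips -> 400.0 USD profit
```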
|
{"url":"https://widgets.fxwidgets.com/calculators/MAM-FOREX/pip-value.html","timestamp":"2024-11-08T06:11:27Z","content_type":"text/html","content_length":"13558","record_id":"<urn:uuid:491eb64b-1b9b-46e8-ad2e-b5cdf5eb6a7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00001.warc.gz"}
|
Dictionary.com | Meanings & Definitions of English Words
abscissa
noun, Mathematics. Plural: ab·scis·sas, ab·scis·sae [ab-sis-ee]. Pronounced /ăb-sĭs′ə/.
1. (in plane Cartesian coordinates) the x-coordinate of a point: its distance from the y-axis measured parallel to the x-axis.
1. the horizontal or x-coordinate of a point in a two-dimensional system of Cartesian coordinates; it is the distance from the y-axis measured parallel to the x-axis. Compare ordinate.
1. The distance of a point from the y-axis on a graph in the Cartesian coordinate system. It is measured parallel to the x-axis. For example, a point having coordinates (2,3) has 2 as its abscissa.
Word History and Origins
Origin of abscissa: 1690–1700; feminine of Latin abscissus, past participle of abscindere ("to cut off").
C17: New Latin, originally linea abscissa, "a cut-off line".
Example Sentences
The corresponding value of y determines a point P in accordance with the rule that x is the abscissa and y the ordinate of P.
To any value of x there corresponds a point N on the axis of x, in accordance with the rule that x is the abscissa of N.
Curves have been prepared having pressure as an abscissa and elasticity as ordinate.
The curve then crosses the abscissa upwards, and the positive curvature reaches a maximum.
Abscissa represents duration of exposure and quantity of incident light.
|
{"url":"https://www.dictionary.com/browse/abscissa","timestamp":"2024-11-04T18:45:23Z","content_type":"text/html","content_length":"192067","record_id":"<urn:uuid:ddf1c717-e760-4bf0-bdb3-9303a7008167>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00255.warc.gz"}
|
ssycon.f - Linux Manuals (3)
subroutine ssycon (UPLO, N, A, LDA, IPIV, ANORM, RCOND, WORK, IWORK, INFO)
Function/Subroutine Documentation
subroutine ssycon (characterUPLO, integerN, real, dimension( lda, * )A, integerLDA, integer, dimension( * )IPIV, realANORM, realRCOND, real, dimension( * )WORK, integer, dimension( * )IWORK, integerINFO)
SSYCON estimates the reciprocal of the condition number (in the
1-norm) of a real symmetric matrix A using the factorization
A = U*D*U**T or A = L*D*L**T computed by SSYTRF.
An estimate is obtained for norm(inv(A)), and the reciprocal of the
condition number is computed as RCOND = 1 / (ANORM * norm(inv(A))).
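For a small symmetric matrix this quantity can be computed exactly from the definition, which is a useful sanity check on what SSYCON estimates (a plain-Python sketch, not a LAPACK call; the example matrix is made up):

```python
def norm1(M):
    """1-norm of a matrix: maximum absolute column sum."""
    n = len(M)
    return max(sum(abs(M[i][j]) for i in range(n)) for j in range(n))

def inv2(M):
    """Exact inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 1.0], [1.0, 3.0]]  # symmetric, well-conditioned
# RCOND = 1 / (ANORM * norm(inv(A))); here norm1(A) = 5, norm1(inv(A)) = 5/11,
# so the exact value is 11/25 = 0.44 (close to 1 means well-conditioned).
rcond = 1.0 / (norm1(A) * norm1(inv2(A)))
print(rcond)
```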
UPLO is CHARACTER*1
Specifies whether the details of the factorization are stored
as an upper or lower triangular matrix.
= 'U': Upper triangular, form is A = U*D*U**T;
= 'L': Lower triangular, form is A = L*D*L**T.
N is INTEGER
The order of the matrix A. N >= 0.
A is REAL array, dimension (LDA,N)
The block diagonal matrix D and the multipliers used to
obtain the factor U or L as computed by SSYTRF.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
IPIV is INTEGER array, dimension (N)
Details of the interchanges and the block structure of D
as determined by SSYTRF.
ANORM is REAL
The 1-norm of the original matrix A.
RCOND is REAL
The reciprocal of the condition number of the matrix A,
computed as RCOND = 1/(ANORM * AINVNM), where AINVNM is an
estimate of the 1-norm of inv(A) computed in this routine.
WORK is REAL array, dimension (2*N)
IWORK is INTEGER array, dimension (N)
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
November 2011
Definition at line 130 of file ssycon.f.
Generated automatically by Doxygen for LAPACK from the source code.
|
{"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/3-ssycon.f/","timestamp":"2024-11-11T17:45:18Z","content_type":"text/html","content_length":"9015","record_id":"<urn:uuid:9cfe932b-5af4-45d4-be2e-4833b047154a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00595.warc.gz"}
|
Math Is a Subject with a Right Answer; the Goal Is to Get It
By Adam Sarli, posted September 25, 2017 —
When I started teaching math, I definitely held this belief. If a student correctly solved for x, I would smile and let him or her know that the answer was correct. I would continue circulating
around the room, letting students know who was correct while working with those who were incorrect. I judged a lesson to be successful after seeing how many students got the right answer.
Over time, I realized how little my students seemed to “get it.” They stumbled whenever a problem’s context shifted, and they struggled to justify their answers. Many ended up forgetting the process
they had used to find the correct answer in the first place. Framing my teaching around answers was not helping my students learn mathematics.
I don’t want my students simply getting the right answers; I want them getting the underlying concepts and connections inherent in the mathematics. And the best way to encourage this process may be
by focusing on the methods that students create to arrive at their answers.
The authors of Making Sense: Teaching and Learning Mathematics with Understanding argue that math is indeed a subject with a right answer, but that students can intuitively create their own
strategies for solving problems. Hiebert and colleagues (1997) urge teachers to frame their teaching around these student-created strategies, suggesting that “Engaging in open, honest, public
discussions of methods is the best way to gain deeper understandings of the subject” (Hiebert et al. 1997, p. 39).
Let's look at a unit on integers and explore how focusing on student solution methods can lead to a deeper understanding of mathematics. I have a few overarching goals for students in my
seventh-grade integer unit. I want students to explore—
• the big ideas inherent in positive and negative space; and
• the idea that numbers can be decomposed strategically.
Take a simple integer problem like 4 – 8. To get both my students and myself caring about more than the answer, I decided to simply write the answer on the board. “The answer is –4. Does anyone want
to share their strategy for finding it?” See Destiny’s method: https://flic.kr/p/XSYyDs
Destiny's method was the most common. Students thought of subtraction as “moving down,” and the 8 as meaning “8 times.” A pair-share question of “How did Destiny know what numbers would look like
past zero?” got students talking about the big ideas in negative space.
I was determined to push for multiple strategies, so I asked, “Does anyone have a different strategy in which they didn't have to make 8 jumps?” See Malik’s and Rhian’s work: https://flic.kr/p/XSYyoN
Malik’s method made the act of decomposing numbers transparent for students like Destiny who were initially uncomfortable with the process. Such questions as “Why did Malik go down by 2 four times?”
and “Why did he cross off 2s?” helped students get to the heart of this decomposition idea.
Sharing Rhian’s method after Malik’s work was strategic. “Why did it take Malik 4 jumps, but Rhian only 2?” Student pair-shares, again, explored ideas of decomposition. Students were informally
discussing how 4 – (4 + 4) = 4 – (2 + 2 + 2 + 2). They also called Rhian’s strategy “Going to a Friendly Number,” since going to 0 made the problem easier.
At this point, Hymadou had a theory that he shared with the class: https://flic.kr/p/XSYyn5
Hymadou was convinced that his strategy would work for any integer, so the class began challenging his theory by trying out problems like 5 – 10, 12 – 24, and 100 – 200. Many concluded that Hymadou
was correct, but they struggled to put into words exactly why. They were informally exploring the conjecture that x – 2x = –x.
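The class tested the conjecture on a handful of cases; a few lines of code can test it over a whole range of integers (a sketch of the same informal check, not a proof):

```python
# Hymadou's theory: subtracting twice a number lands on its opposite,
# i.e. x - 2x == -x, checked here for every integer in a range.
for x in range(-1000, 1001):
    assert x - 2 * x == -x
print("conjecture holds for every integer from -1000 to 1000")
```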
In A Guide for Teachers Grades 6–10, Mark Driscoll calls Hymadou’s theory “extending,” since he “followed the lines of further inquiry suggested by a particular mathematical result” (Driscoll 1999,
p. 97). Hymadou asked his own question and took the mathematics to an algebraic level I hadn't even anticipated!
Notice just how much mathematics there was to explore once multiple strategies and a few guiding questions were presented—all from the simple problem 4 – 8! This problem may have been buried with 50
others if I held the belief that the goal in math is simply to get the correct answer.
Shifting the focus in math class away from answers and toward methods has huge implications for student learning. It prompts teachers to plan lessons around deep mathematical ideas and to ask
questions that get students’ reasoning in focus. It encourages students to develop or try new strategies. It can even get students asking their own questions and justifying conjectures that hit at
the heart of mathematics. To really help our students “get it,” I’d choose methods over answers any day.
Driscoll, Mark. 1999. Fostering Algebraic Thinking: A Guide for Teachers Grades 6–10. Portsmouth, NH: Heinemann.
Hiebert, James, Thomas P. Carpenter, Elizabeth Fennema, Karen C. Fuson, Diana Wearne, Hanlie Murray, Alwyn Olivier, and Piet Human. 1997. Making Sense: Teaching and Learning Mathematics with
Understanding. Portsmouth, NH: Heinemann.
|
{"url":"https://www.nctm.org/Publications/MTMS-Blog/Blog/Math-Is-a-Subject-with-a-Right-Answer;-the-Goal-Is-to-Get-It/","timestamp":"2024-11-11T03:20:09Z","content_type":"text/html","content_length":"234509","record_id":"<urn:uuid:57fcba1c-96f0-4054-bf3a-154ecc05a3df>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00293.warc.gz"}
|
The Difference Between Learning In The Classroom And Learning One-on-one With A Tutor
Teaching in the classroom and tutoring one-on-one each have unique characteristics. The classroom has group dynamics which can be thrilling. Tutoring allows flexibility to adapt to one student's
learning style. Let's examine the unique strengths and challenges of the classroom and tutoring.
Benefits of the classroom
The classroom is thrilling because the group dynamic is unique to each class. You can't predict the discussions that will arise. Unexpected debates are super informative for everyone in the room.
They allow students to think critically and bounce ideas off of each other.
For example, a student in physics class asks "what's the definition of energy?" This could go many places. The simple answer is the capacity to do work. What's work? A force applied over a distance.
What does energy look like? Well, no one really knows. All we know is that energy is a conserved quantity that changes in form but never in amount. As you can see, a seemingly simple question can get deep fast
with group discussions. This is a wonderful learning benefit of the classroom.
Peer engagement is critical for learning. The classroom provides students the opportunity to solve problems together. Collaborative problem-solving is a skill needed for future work engagements. It's
also essential to surviving in college. Math and physics majors in college will have problem sets assigned each week. Many of the problems are too difficult to do on your own. Only by forming peer
groups can you get through the assignments on time. Group problem-solving skills learned in the classroom are essential for adult education.
Teachers need to teach to the average ability level of the class. Advanced and struggling students will be in the same course. This creates a challenge for the teacher, what level of material should
they use? The simple answer is to teach somewhere in the middle. An advanced technique is to assign 3 problems of increasing difficulty. Everyone begins with the first problem, then moves to the next
one once complete. This creates natural filtering of student ability. Each student will land on a problem that provides a challenge.
Classroom management is the most difficult part of teaching. Kids can be ruthless. Constant interruptions throw a wrench into a teacher's lesson plans. Naturally, there will be good students wanting
to learn. These students are impacted by students that want to goof off. Every good teacher must have a system for classroom management. The difficulty of managing the classroom varies by the school
district and school. Sadly, economically impacted schools can be the most difficult to manage. If your child is serious about learning, peer distraction will be a problem.
A private tutor has the ability to meet the student at exactly their level. This is impossible in the traditional classroom. By interacting with the student, a skilled tutor can quickly identify
strengths and weaknesses. The tutor can then address the missing foundational issues.
For example, it's common for mathematics students to not know their fractions. This causes a lot of difficulty in advanced math classes. If the student is taking algebra 1, they will encounter
systems of equations. If the system contains fractions, the student will freak out because they aren't confident with fractions. Solving a system of equations is an advanced, multi-step problem. The
fractions are a small part of that problem, and the student is expected to know them. In this case, the missing foundational skill is fractions.
A tutor can identify the problem, and present fractions to the student in a manner they can understand. This approach directly fixes the problem. The example highlights the unique ability of a
private tutor. Here is a video on fractions in systems of equations so you can see for yourself:
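To illustrate the same point in code, here is a hypothetical two-equation system with fractional coefficients, solved exactly with Python's fractions module so the fraction arithmetic stays visible (the particular system is made up for illustration):

```python
from fractions import Fraction as F

# A hypothetical system, solved by Cramer's rule with exact fractions:
#   (1/2)x + (1/3)y = 5
#   (1/4)x -     y  = -1
a1, b1, c1 = F(1, 2), F(1, 3), F(5)
a2, b2, c2 = F(1, 4), F(-1), F(-1)

det = a1 * b2 - a2 * b1            # -7/12
x = (c1 * b2 - c2 * b1) / det      # 8
y = (a1 * c2 - a2 * c1) / det      # 3
print(x, y)  # 8 3
```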
A private tutor will develop a meaningful relationship with your student. This level of private interaction builds trust. It's imperative a student feels safe to make mistakes. Mistakes are a crucial
part of learning. Many students fear making mistakes in the classroom. This is a primal fear. Becoming a tribe outcast meant certain death in early human development. These primal fears are with us
today. Getting a math problem wrong in front of the class is a form of tribe rejection. The student will fear coming off dumb to their peers. A private tutor provides safety from fear. You can't
learn anything while afraid.
Challenges of private tutoring
• The tutor does not get to see how the student behaves in class. This is important information. Is the student quiet or a class clown? Do they raise their hand frequently, or hide in the back?
These behaviors are all part of the learning puzzle. The private tutor is blind to this part of the puzzle. Ideally, the tutor can communicate with the teacher. Not all tutors are willing to do
this. And not all teachers are willing to communicate with tutors. Teachers are extremely busy, and extra communication tasks may not be a priority. However, many teachers are happy to do it. The
tutor should reach out to each teacher to see if they are open to communicating.
• The private tutor does not have access to the grade book. Test and homework scores are critical for assessing student performance. The private tutor must rely on the student or parent to provide
data. The grade book may not be up to date. Not every student is forthcoming when asked for this information. Not having access to the grade book is a major handicap. The tutor should do
everything in their power to get access to recent grades.
As a parent, it can be difficult to know what's going on in the tutoring session. Is your money going to good use? How can you measure progress? A high-level tutor should provide regular progress
reports that outline strengths and challenges. You should be in regular contact with your tutor to get insight into what's happening. Be sure to keep track of your child's exams. Ultimately your
tutor should be able to boost your child's grades and engagement.
Every learner is different. Some students need additional support to fully digest the concepts. If your child is doing great in school, a tutor isn't necessary. Though you can use a tutor to skip a
grade level if that's your goal.
If your child is working hard but not seeing results, then a tutor is your best option.
The most important factor in tutoring success is the unique relationship between your child and their tutor. They have to have compatible personalities. Some people click with each other more than
others. You want to take the time to find the tutor that clicks. Most tutoring agencies will allow a trial lesson. Be sure to do as many trial lessons as possible early in the school year. You want
to afford yourself time if the first tutor doesn't work out. Learning and engagement are exponentially higher when you find the right fit. You'll know when you've found it. Your child's eyes should
light up after the lesson. They should divulge details. If you ask how it went, and you get a one-word answer, then you can do better. Experiment as much as possible until you find the best solution
for your child. Good luck!
|
{"url":"https://alexandertutoring.com/tutoring-resource-guide/difference-between-the-classroom-and-a-tutor/","timestamp":"2024-11-04T09:02:36Z","content_type":"text/html","content_length":"326075","record_id":"<urn:uuid:84790e33-6007-4c31-a39b-ca5900445a50>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00815.warc.gz"}
|
Assignments for CSE 415 (Spring 2016)
Assignment 1: Basics of Python, String processing, Conversational Agents.
Assignment 2: Inference with Knowledge Using Partial-Order Properties
Assignment 3: State-space search algorithms.
Assignment 4: Problem formulation.
Assignment 5: Two-person, zero-sum games.
Assignment 6: Reinforcement learning
(Part I is available)
(Part II is available)
Project An in-depth design and implementation from a given set of options.
Here is the dropbox for turning in assignments.
|
{"url":"https://courses.cs.washington.edu/courses/cse415/16sp/assignments.html","timestamp":"2024-11-14T08:50:32Z","content_type":"text/html","content_length":"1903","record_id":"<urn:uuid:b4101c55-a383-4740-8990-e5c3999fb966>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00318.warc.gz"}
|
Standard 1: Make Sense of Problems & Persevere in Solving Them | Inside Mathematics
Teachers who are developing students’ capacity to “make sense of problems and persevere in solving them” develop ways of framing mathematical challenges that are clear and explicit, and then check in
repeatedly with students to help them clarify their thinking and their process. An early childhood teacher might ask her students to work in pairs to evaluate their approach to a problem, telling a
partner to describe their process, saying “what [they] did, and what [they] might do next time.” A middle childhood teacher might post a set of different approaches to a solution, asking students to
identify “what this mathematician was thinking or trying out” and evaluating the success of the strategy. An early adolescence teacher might have students articulate a specific way of laying out the
terrain of a problem and evaluating different starting points for solving. A teacher of adolescents and young adults might frame the task as a real-world design conundrum, inviting students to engage
in a “tinkering” process of working toward mathematical proof, changing course as necessary as they develop their thinking. Visit the video excerpts below to view multiple examples of teachers
engaging students in sense-making and mathematical perseverance.
The Standard
Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution. They analyze givens, constraints, relationships, and goals.
They make conjectures about the form and meaning of the solution and plan a solution pathway rather than simply jumping into a solution attempt. They consider analogous problems and try special cases
and simpler forms of the original problem in order to gain insight into its solution. They monitor and evaluate their progress and change course if necessary. Older students might, depending on the
context of the problem, transform algebraic expressions or change the viewing window on their graphing calculator to get the information they need. Mathematically proficient students can explain
correspondences between equations, verbal descriptions, tables, and graphs or draw diagrams of important features and relationships, graph data, and search for regularity or trends. Younger students
might rely on using concrete objects or pictures to help conceptualize and solve a problem. Mathematically proficient students check their answers to problems using a different method, and they
continually ask themselves, “Does this make sense?” They can understand the approaches of others to solving complex problems and identify correspondences between different approaches.
Practice Standards
|
{"url":"https://www.insidemathematics.org/common-core-resources/mathematical-practice-standards/standard-1-make-sense-problems-persevere-solving-them","timestamp":"2024-11-10T13:59:06Z","content_type":"text/html","content_length":"65777","record_id":"<urn:uuid:b4c0dea0-72cd-4de6-9b80-373f5f3c5b21>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00290.warc.gz"}
|
Question 718:
We need to use the formula for a z-score to figure out what the minimum score at Lamar High School is and whether these students will make the cut. The formula is
z = (data point - mean) / standard deviation. First we will find the z-scores for each student.
1. (87 - 85)/5 = 0.4
2. We need to look up the one-sided area for a z-score of 0.4 using a normal table or the Excel function =NORMSDIST(0.4), and we get the area 0.6554, meaning Bill is in the 65.5th percentile--so he
doesn't make the cut.
1. (95 - 85)/5 = 2
2. We need to look up the one-sided area for a z-score of 2 using a normal table or the Excel function =NORMSDIST(2), and we get the area 0.9772, meaning Mary is in the 97.72nd percentile--so she
DOES make the cut.
To find the minimum score needed to make the cut, we first need to find the z-score for the top 10%. We can use the Excel function =NORMSINV(0.90) and we get 1.28152. Now we plug this into the z-score
equation and use a little algebra to isolate the unknown minimum score.
1. (min score - 85)/5 = 1.28152
2. (min score - 85) = 6.407758
3. min score = 91.40776
So you need at least a 91.41 score to be in the top 10% at Lamar High and make the cutoff to get into the UT schools.
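The same computations can be reproduced with Python's standard library, where statistics.NormalDist plays the role of the normal table and of Excel's NORMSDIST/NORMSINV (a sketch; the mean of 85 and standard deviation of 5 are taken from the working above):

```python
from statistics import NormalDist

std_normal = NormalDist()   # standard normal: mean 0, standard deviation 1
mean, sd = 85, 5

def percentile(score):
    """One-sided area below the z-score of `score`."""
    return std_normal.cdf((score - mean) / sd)

print(round(percentile(87), 4))  # Bill: 0.6554 -> misses the cut

# Invert the 90th percentile to get the minimum qualifying score.
min_score = mean + sd * std_normal.inv_cdf(0.90)
print(round(min_score, 2))       # 91.41 -> top-10% cutoff
```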
|
{"url":"https://www.usablestats.com/askstats/question/718/","timestamp":"2024-11-05T03:41:30Z","content_type":"application/xhtml+xml","content_length":"6848","record_id":"<urn:uuid:54f518dd-cee4-478a-9f6b-89d18a86013a>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00217.warc.gz"}
|
Interpreting the POUMM
Whenever presented with data consisting of a rooted phylogenetic tree with observed trait-values at its tips, the POUMM package can be used to answer the following questions about the population
distribution, the evolution and the phylogenetic signal of the trait in question.
In the following subsections, we use our example simulation from the vignette A User Guide to the POUMM R-package. Let us recall the true POUMM parameters for this simulation:
Is the POUMM an appropriate model for the data?
The first step to answering that question is to visualize the data and check for obvious violations of the POUMM assumptions. The POUMM method expects that the trait-values at the tips are a sample
from a multivariate normal distribution. With an ultrametric species tree, where all tips are equally distant from the root, this assumption translates into all trait-values being realizations of
identically distributed normal random variables. In the case of a non-ultrametric tree, it is far more useful to look at a sequence of box-whisker or violin plots of the trait-values, grouped by their
root-tip distance.
Once visualizing the data has confirmed its normality, we recommend comparing the POUMM-fit with a fit from a NULL-model such as the phylogenetic mixed model (PMM) (Housworth, Martins, and Lynch 2004). Since the PMM is nested in the POUMM, i.e. in the limit \(\alpha\to0\) the POUMM model is equivalent to a PMM model with the same initial genotypic value \(g_0\) and unit-time variance \(\sigma\), it is easy to fit a PMM model to the data by fixing the value of the parameter \(\alpha\) to 0:
Now a likelihood-ratio test between the maximum likelihood fits clearly shows that the POUMM significantly outperforms the PMM on the data:
## Likelihood ratio test
## Model 1: fitPMM
## Model 2: fitPOUMM2
## #Df LogLik Df Chisq Pr(>Chisq)
## 1 3 -12.9
## 2 5 16.3 2 58.5 2e-13 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Since lrtest only uses the ML-fit, to save time, we disabled the MCMC fit by specifying doMCMC = FALSE. In real situations, though, it is always recommended to enable the MCMC fit, since it can improve the ML-fit if it finds a region of higher likelihood in the parameter space that has not been discovered by the ML-fit. Moreover, comparing posterior samples for the PMM and POUMM fits can, in principle, be more informative than a likelihood-ratio test.
As an exercise, we can generate data under the PMM model and see if a POUMM fit on that data remains significantly better than a PMM fit:
gBM <- rVNodesGivenTreePOUMM(tree, g0, alpha = 0, theta = 0, sigma = sigma)
zBM <- gBM + e
fitPMM_on_zBM <- POUMM(zBM[1:N], tree, spec = specPMM, doMCMC = FALSE)
fitPOUMM_on_zBM <- POUMM(zBM[1:N], tree, doMCMC = FALSE)
lmtest::lrtest(fitPMM_on_zBM, fitPOUMM_on_zBM)
## Likelihood ratio test
## Model 1: fitPMM_on_zBM
## Model 2: fitPOUMM_on_zBM
## #Df LogLik Df Chisq Pr(>Chisq)
## 1 3 -114
## 2 5 -113 2 1.65 0.44
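Both p-values above can be checked by hand. The likelihood-ratio statistic is twice the difference in log-likelihoods and is compared against a chi-square distribution with degrees of freedom equal to the difference in parameter counts (here 5 − 3 = 2). For exactly 2 degrees of freedom the chi-square survival function has the closed form exp(−x/2), so a sketch needs no statistics package (the log-likelihoods below are the rounded values printed in the tables):

```python
import math

def lr_test_df2(loglik_null, loglik_alt):
    """Likelihood-ratio test with 2 degrees of freedom.
    For df = 2 the chi-square survival function is exactly exp(-x/2)."""
    chisq = 2 * (loglik_alt - loglik_null)
    return chisq, math.exp(-chisq / 2)

# POUMM vs PMM on the OU-simulated trait (first table): decisive rejection
print(lr_test_df2(-12.9, 16.3))    # chisq 58.4, p about 2e-13
# POUMM vs PMM on the BM-simulated trait: no significant improvement
# (the table's chisq of 1.65 and p of 0.44 come from unrounded log-likelihoods)
print(lr_test_df2(-114, -113))
```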
To what extent are the observed trait-values determined by heritable (i.e. genetic) versus non-heritable (i.e. environmental) factors?
In other words, what is the proportion of observable phenotypic variance attributable to the phylogeny? To answer this question, the POUMM package allows one to estimate the phylogenetic heritability of the trait. Assuming that the tree represents the genetic relationship between individuals in a population, \(H_\bar{t}^2\) provides an estimate for the broad-sense heritability \(H^2\) of the trait in the population.
When the goal is to estimate \(H_{\bar{t}}^2\) (H2tMean), it is important to specify an uninformative prior for it. Looking at the densities for chain 1 (red) on the previous figures, it becomes clear that the default prior favors values of H2tMean which are either close to 0 or close to 1. Since by definition \(H_{\bar{t}}^2\in[0,1]\), a reasonable uninformative prior for it is the standard uniform distribution. We set this prior by using the specifyPOUMM_ATH2tMeanSeG0 function. This specifies that the POUMM fit should be done on a parametrization \(<\alpha,\theta,H_{\bar{t}}^2,\sigma_e,g_0>\) rather than the standard parametrization \(<\alpha,\theta,\sigma,\sigma_e,g_0>\). It also specifies a uniform prior for \(H_{\bar{t}}^2\). You can explore the members of the specification list to see the different settings:
specH2tMean <- specifyPOUMM_ATH2tMeanSeG0(z[1:N], tree, nSamplesMCMC = 4e5)
# Mapping from the sampled parameters to the standard POUMM parameters:
## function (par)
## {
## if (is.matrix(par)) {
## par[, 3] <- sigmaOU(par[, 3], par[, 1], par[, 4], tMean)
## colnames(par) <- c("alpha", "theta", "sigma", "sigmae",
## "g0")
## }
## else {
## par[3] <- sigmaOU(par[3], par[1], par[4], tMean)
## names(par) <- c("alpha", "theta", "sigma", "sigmae",
## "g0")
## }
## par
## }
## <environment: 0x000000002554a058>
## function (par)
## {
## dexp(par[1], rate = tMean/6.931, log = TRUE) + dnorm(par[2],
## zMean, 2 * zSD, TRUE) + dunif(par[3], min = 0, max = 1,
## log = TRUE) + dexp(par[4], rate = 2/zSD, log = TRUE) +
## dnorm(par[5], zMean, 2 * zSD, log = TRUE)
## }
## <environment: 0x000000002554a058>
## alpha theta H2tMean sigmae g0
## 0.0000 -2.2073 0.0000 0.0000 0.5176
## alpha theta H2tMean sigmae g0
## 10.8379 5.9389 0.9900 0.5335 3.2139
Then we fit the model:
## stat N MLE PostMean HPD ESS G.R.
## 1: H2tMean 500 0.3057 0.3155 0.1788,0.4708 720 1
## 2: sigmae 500 0.2013 0.2032 0.1830,0.2266 720 1
## 3: H2e 500 0.4307 0.4183 0.2783,0.5296 720 1
## 4: H2tInf 500 0.3073 0.3213 0.1781,0.4743 720 1
Now we see that the prior density for H2tMean is nearly uniform. It becomes clear that the process has converged to its long-term heritability since the intervals for H2tMean and H2tInf are nearly
the same. Notice, though, that the estimate for the empirical heritability H2e is shifted towards 1 compared to H2tMean and H2tInf. This shows an important difference between H2e and the
time-dependent formulae for phylogenetic heritability: H2e takes into account all values of z including those at the very beginning when the process was far away from equilibrium. Thus the estimated
phenotypic variance over all trait-values at all times can be substantially bigger compared to the current trait-variance in the population:
# Compare global empirical heritability
H2eGlobal <- H2e(z[1:N], sigmae = coef(fitH2tMean)['sigmae'])
# versus recent empirical heritability
H2eRecent <- H2e(z[1:N], tree, sigmae = coef(fitH2tMean)['sigmae'], tFrom = 5)
print(c(H2eGlobal, H2eRecent))
## [1] 0.4307 0.3409
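The gap between H2eGlobal and H2eRecent is easiest to see from the definition itself. The sketch below is a hypothetical re-implementation, assuming H2e is defined as 1 − σe²/Var(z) (the real POUMM::H2e additionally uses the tree to select tips by root-tip distance via tFrom); the data are toy numbers, not the vignette's:

```python
from statistics import pvariance

def h2e(z, sigmae):
    # Empirical heritability under the assumed definition 1 - sigmae^2 / Var(z):
    # the smaller the phenotypic variance of the chosen tips, the smaller H2e.
    return 1 - sigmae**2 / pvariance(z)

z_all = [1.0, 2.2, 0.1, 3.0, 1.8, -0.4, 2.6]   # all tips (larger spread)
z_recent = [1.0, 2.2, 1.8, 2.6]                # tips sampled late (smaller spread)
print(h2e(z_all, sigmae=0.45), h2e(z_recent, sigmae=0.45))
```

As in the vignette, restricting to recently sampled tips shrinks the trait variance and hence the empirical heritability estimate.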
To learn more about different ways to specify the POUMM fit, read the documentation page ?specifyPOUMM.
Granular Space and the Problem of Large Numbers
Journal of Modern Physics, 2011, 2, 289-300
doi:10.4236/jmp.2011.25038 Published Online May 2011 (http://www.SciRP.org/journal/jmp)
Copyright © 2011 SciRes. JMP
V. I. Konushko
Protvino, Moscow, Russia
E-mail: konushko@mail.ru
Received February 21, 2010; revised February 28, 2011; accepted March 2, 2011
Two and a half thousand years ago the ancient atomists made a suggestion that space has a cellular structure, is material and consists of elementary cells. In 1900 Planck found the elementary length L* = 10⁻³³ cm. This notion has been widely used in modern physics ever since. The properties of granular space are studied in this article on the assumption that a three-dimensional material cell with the size of Planck's elementary length is the only material for the construction of the whole Universe. This approach allows one to account for such mysterious phenomena as inertia, the ultimate velocity of transfer of material body interactions and the huge difference between gravitational and Coulomb forces, the so-called "Large Numbers Problem", as well as the essence of electric charge and the Pauli exclusion principle.
Keywords: Problem of Large Numbers
1. Introduction
The concept of space is one of the most important con-
ceptions forming the system of our knowledge.
Is the space infinite or not? Is it just a relation between
material bodies or does it exist independently? Is the
space a container for matter observable even in the ab-
sence of material bodies? Is the space uniform from one
point to another or are there some selected directions? Is
it neutral or does it direct bodies inside it? Do we know
its properties intuitively without any external influence
on our brain or do we acquire these properties from ex-
perience? These are the questions made in different times
with respect to a phenomenon named space.
The conception of discreteness is as old as that of con-
tinuity. It goes back to ancient atomists and can be re-
garded as one of the first solutions of Zeno’s of Elea
aporias. However, it should be noted that in spite of suc-
cessful application of the idea of discreteness to describe
the structure of matter, the operating conception for the space and time structure was nevertheless that of continuity.
A big step in solving the problem of space structure
was made in 1900 by Planck [1]. In that year Planck's constant h was born. Planck researched the radiation of black bodies. He was attracted by the universality of this radiation, which turned out to be independent of the size as well as of the shape of the radiating body or of
the properties of the vessel walls. While the reasons of
this universality were searched for, the problem of the
standards of length, mass and time appeared. These
standards were to be established from the principles not
appealing to any substance including elementary parti-
cles. They were only to be expressed through the funda-
mental constants, i.e. the speed of light c, Newton gravi-
tational constant G and the quantum of action h found
from the radiation law. Only these three constants Planck could take as fundamental ones. From those three constants only one value with the dimension of length, one fundamental mass and one time quantum can be constructed:

L* = (ħG/c³)^(1/2) ≈ 1.6×10⁻³³ cm,
m* = (ħc/G)^(1/2) ≈ 10⁻⁵ g,
t* = (ħG/c⁵)^(1/2) ≈ 10⁻⁴³ sec.
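These three combinations are easy to verify numerically; a quick check in CGS units (the constant values below are standard reference figures, not taken from the article):

```python
import math

hbar = 1.0546e-27   # reduced Planck constant, erg*s
G = 6.674e-8        # Newton's gravitational constant, cm^3 g^-1 s^-2
c = 2.9979e10       # speed of light, cm/s

L_star = math.sqrt(hbar * G / c**3)   # Planck length
m_star = math.sqrt(hbar * c / G)      # Planck mass
t_star = math.sqrt(hbar * G / c**5)   # Planck time, equal to L*/c

print(f"L* = {L_star:.2e} cm, m* = {m_star:.2e} g, t* = {t_star:.2e} s")
```

This prints roughly 1.6×10⁻³³ cm, 2.2×10⁻⁵ g and 5.4×10⁻⁴⁴ s; the article rounds the last two to 10⁻⁵ g and 10⁻⁴³ sec.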
An elementary clot of matter with the mass m* got the name of a plankeon. The three values L*, m* and t* play a major role in the theory of elementary particles as well as in the Big Bang model. However, the physical meaning of the elementary length has still not become clear.
There is a wide-spread opinion that the Planck length
could bring light to numerous mysteries of micro- and
macro-worlds. “Only one value has in the existing theory
a clear and available interpretation - that is the Planck's
length. Whence still, if not from here, is it possible to
start researches of particles? It is quite possible that only
the physics of the 10⁻³³ cm region will help us to understand the physics of elementary particles”, - wrote
Wheeler [2]. Okun’ also saw the significance of this
value [3]. “It seems more and more probable that physics
on a Plank’s scale determines not only all the physics of
low energies but the whole picture of the Universe as
well”, - he remarked.
The problems of the micro world make us consider the
picture of the universe where the idea of discreteness
must play the role not less than that of continuity. It is
necessary that the discreteness should participate in the
description of quantum objects quite naturally, without being artificially brought into the continuum context.
A serious step in comprehension of the space structure was made by Bekenstein [4]. Considering the thermodynamics of black holes, he supposed that the entropy of a black hole was proportional to its area: S ~ A. But the area A has the dimension of length squared, and there appeared the problem of how to make the expression for the entropy dimensionless. A hypothesis was put forward that the entropy of a black hole must have the following form:

S = αA/L*², (1)

where the coefficient α must be calculated basing on some other ideas. This coefficient was later on calculated by Hawking [4]. It turned out to be 1/4. The value L*² is the minimal area of an elementary object. Whereas
Planck found the minimum value of the elementary length, Bekenstein and Hawking indirectly introduced the minimum size of the elementary area. Using the above-mentioned, we only have to take the next step towards the generalization up to a minimum three-dimensional object in order to establish the space structure.
We suppose that the meaning of the Planck length
lies in the fact that physical space has a cellular struc-
ture and consists of material three-dimensional cells with
the size of L* ≈ 10⁻³³ cm. Thus, we suppose that eternal, invariable, primary matter - protomatter - exists in the form of an elementary cell of the size of the fundamental length L*.
According to Wheeler [2], “it is cells of this size that
make up space on its deepest level”. All our observations
and experiments have been and are performed in a ma-
terial Universe. It is quite unreasonable to expect that a
theory developed under such conditions will be applica-
ble in an empty Universe.
As we have already mentioned, no matter what the nature of original matter is, it cannot produce either a point, or an infinitely thin line, or an ideal plane. The only and the
simplest object which can be created by the material
Universe is a bubble. When coming in contact with one
another these bubbles turn into polyhedrons, i.e. three-
dimensional geometries or cells.
There are five types of polyhedrons which, when ar-
ranged in parallel, can cover a three-dimensional space
so that they would be joined to one another by their
whole faces (Figure 1):
The most economical geometry is a fourteen-sided
polyhedron: the volume being the same, it takes the least
material to make its face.
The entropy of the black hole acquires quite a trans-
parent physical meaning – it is equal to the number of
elementary cells forming the surface of this object:

S = A/(4L*²). (2)
Therefore, a mysterious and amazing quantity - en-
tropy - appears to be connected with the structure of
space. Further on, this fact will help us to see the reason
of irreversibility of physical processes though all con-
servation laws are convertible in time. “The arrow of
time” will be considered in more detail in our subsequent articles.
2. The Size of Elementary Particles
Before considering the above-mentioned problem let us
raise one of the “native” questions which are most diffi-
cult to answer. Doesn’t the assumption, that the Universe
consists of only one element, inappropriately simplify
the reality? To answer it we should keep in mind that the
Figure 1. Three-dimensional polyhedrons. (1. parallelepipeds; 2. prisms with centrally-symmetric six-sided bases; 3.–4. twelve-sided polyhedrons; 5. fourteen-sided polyhedrons.)
world constants give us a notion of the size of an elementary particle - the Compton wavelength. For protons, it is

λ_p = ħ/(m_p c) ≈ 2.1×10⁻¹⁴ cm. (3)

Experiments give a somewhat larger value of the proton size.
Simple calculations give us the number of elementary
cells of which this particle consists, ~10⁶⁰. The number of nucleons in the whole Universe is ~10⁸⁰. It is rather
amazing, but in the number of structural elements any
elementary particle is hardly inferior to the Universe.
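The "simple calculations" are just a ratio of volumes. A sketch, taking the proton size to be ~10⁻¹³ cm (my assumption for the experimental value the text alludes to) and the cell size to be the Planck length:

```python
# Order-of-magnitude count of Planck-size cells inside a proton:
# ratio of the proton volume to the volume of one elementary cell.
L_star = 1.6e-33     # Planck length, cm
r_proton = 1.0e-13   # assumed experimental proton size, cm

n_cells = (r_proton / L_star) ** 3
print(f"{n_cells:.1e}")   # a few times 10^59, consistent with the text's ~10^60
```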
The creation of an elementary particle requires a sup-
plementary quantity of matter of mass. Since the whole
space is filled with cells, the rest mass when it forms a
particle shell, must deform both internal and external
cells. The internal cells form the body of the particle
which literally “occupies” or confines these space cells.
The deformation of external cells makes the essence of
physical fields. Radial deformation creates electrostatic
and gravitational fields. Tangential (torsion) deformation
results in magnetic, gravitational - magnetic and vortex
electrostatic fields, as well as a particle spin.
Matter means the substance which is used to form
elementary cells. Particle mass is the amount of matter
used to form this corpuscle.
In creating a particle the outer cells have to be slightly
pushed apart thus forming an excess of matter in the
surrounding space. This excess is exactly equal to the
mass of the particle itself and, according to the Einstein
formula for excessive energy, E = mc². It is just this
excess of matter that provides the basis for introducing
the concept of potential energy: it becomes quite clear
from where the Space gets excessive energy as a particle
moves in different physical fields.
Kinetic energy means the amount of matter carried by
a particle; this matter moves ahead of the particle car-
rying it forward and makes the motion wavy by nature.
Moreover, it is this matter that forms new particles as
two corpuscles collide. In our article “Weak Interaction
and the Nature of Virtual Particles” we have discussed more comprehensively the motion of photons and particles.
Nowadays we haven’t the slightest idea of what the
electric charge and the spin mean. The reason lies in our
erroneous view of leptons as point particles. Erroneous is
the interpretation of experiments on lepton - lepton scat-
tering. In this case the matrix element does not contain
any form-factors of these particles which would take into
account their complex structure. The absence of such a
structure is closely connected with the lepton size. Now
let us consider, as a counterexample, the process of bil-
liard balls elastic scattering which is considered as a col-
lision of material points though their structure is much
more complex than that of particles.
According to the present-day concepts, the size of an electron is less than 10⁻¹⁷ cm, and this comes into conflict when
the density of matter inside an electron and a proton is
under consideration. The electron mass is 2000 times less
than the mass of a proton, and the density of electron matter in this case is 10⁹ times as much as that of the proton.
All this leads to an absurd chain: the less the mass of an
elementary particle, the smaller its size and the more the
density of matter inside it.
The rule - the more the mass of a particle, the smaller
its size - is supported by an experiment performed on a
relativistic heavy ion collider (RHIC). The mass of a
is more than three times larger than that
of a proton and, as it follows from the experiment, its
size is three times smaller.
The size of a particle is its most important characteris-
tic which must be determined only by the world con-
stants ħ, c, G and by the mass of a particle m. Three
quantities pretend to be the radius of the electron: the Compton wavelength λ_C, the classical radius r_0 and the gravitational radius r_g:

λ_C = ħ/(mc) ≈ 3.86×10⁻¹¹ cm,
r_0 = e²/(mc²) ≈ 2.8×10⁻¹³ cm,
r_g = Gm/c² ≈ 10⁻⁵⁵ cm.
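The three candidate radii follow directly from the constants; a numeric check in CGS units (the constant values are standard figures, mine rather than the paper's):

```python
hbar = 1.0546e-27   # reduced Planck constant, erg*s
c = 2.9979e10       # speed of light, cm/s
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
m_e = 9.109e-28     # electron mass, g
e = 4.803e-10       # elementary charge, esu

lambda_C = hbar / (m_e * c)     # Compton wavelength
r_0 = e**2 / (m_e * c**2)       # classical electron radius
r_g = G * m_e / c**2            # gravitational radius

print(f"{lambda_C:.2e} cm  {r_0:.2e} cm  {r_g:.1e} cm")
```

This reproduces 3.86×10⁻¹¹ cm, 2.82×10⁻¹³ cm and ~10⁻⁵⁵ cm; note also that r_0/λ_C ≈ 1/137, which is why the fine structure constant α acts as a scale factor in r_0.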
Any theory having a claim on a correct description of
the microcosm must be able to calculate the fine struc-
ture constant α, which acts as the electromagnetic inter-
action constant, and in the radius r_0 it serves just as a
scale factor. Since the electric charge is the same for all
elementary particles, the value of e² cannot determine the
size of numerous elementary corpuscles which differ
greatly in mass. The gravitational radius of an electron is
much smaller than that of an elementary cell, and there-
fore is not discussed here. It inevitably follows that the
size of any structure - free particle is only dictated by its
Compton wavelength. The structure of a particle, how-
ever, increases its size just slightly, like in case of a pro-
ton. Over fifty years ago both Yukawa and Landau pro-
posed that the size of an electron is equal to its Compton wavelength.
On arriving at this decision we finally can understand
such notions as the mass, the electric charge and the spin
of elementary particles. These problems will be studied
in detail in our article “Weak Interaction and the Nature
of Virtual Particles” where we calculated both the mass
of one elementary cell m_cell and its density ρ_cell and found
a unique relation between an elementary cell and a W-
boson. An elementary cell is a generalized image of an
object which nature gives us though its world constants.
In imaginary experiments used widely by Galileo,
Newton and Einstein we can see with our own eyes all
these mysterious natural phenomena thus doing away
with the tragedy of blindness. Being prisoners of “point-
ness” we would never solve these fundamental problems.
Besides, there are infinities which have been poisoning
the life of theorists for about a hundred years provided
that particles are considered to be points.
Let us go back to the proton. Its large mass creates
surface tension strong enough to produce inside a parti-
cle clusters of deformed internal cells. Such formation
has already acquired the name of quarks. Quark “con-
finement” becomes now quite transparent, and the simi-
larity of quarks and leptons can be accounted for by the
fact that inside either of them there are no clusters of
deformed cells.
A proton consists of N_p deformed cells:

N_p = (ħ/(m_p c L*))³.
The deformation of such a huge number of cells is so
queer that it gives grounds to introduce into theory such
objects as gluons and a sea of virtual quart-antiquark
pairs despite the fact that all these objects have a material
basis, i.e. they consist of material cells.
Even in collisions of an electron with another particle
its internal cells are just elastically deformed without
creating any new internal formations, and this point is
considered nowadays as electron “pointness”.
3. Particle Motion. Ultimate Velocity.
Enigma of Inertia
To observe the motion of a particle we must make an
imaginary experiment by increasing the elementary cell
up to the size of a small cube. The particle begins to
move only when there is a difference in cell deformation
behind and in front of the particle, i.e. a deformation
gradient. The amount of matter required for it and sur-
rounding the particle in asymmetric way is called kinetic
energy and the cell deformation gradient is referred to as
Since cells process elastic properties, the motion of
this additional matter has a wave nature creating some
kind of “a centaur”: a wave-particle predicted by the
Broglie. An electron is only carried by the wave never
becoming part of it. After colliding with another particle
the electron loses the prefix “wave”. The process of
transmitting either a part of matter (kinetic energy) or the
whole matter has an exchange character. In collision the
matter carried by the shell-particle, having reached the target-particle, finds itself between a hammer and an anvil.
The enormous quantity of deformed cells participating
in the collision leads us to introducing into our theory the
notion of a “virtual” exchange particle. In this immense
“pot” a strong deformation of space cells makes up all
kinds of cluster providing the right to introduce such
notions as a sea of quark-antiquark pairs, current quarks
and gluons. That is why there is such a strong difference
between the masses of current and constituent quarks.
This real collision process enables us to understand this
“spin disaster” as well.
The introduction of structural functions into the matrix
element is the first raw attempt of describing a complex
collision act where up to 1080 deformed space cells take
part. Such a huge amount of deformed cells participating
in a collision act is responsible for the fact that all the
events in the Universe are not local and our mathematical
description of an event will always be just approximate.
The nonlocal character of elementary particles and inter-
actions gives rise to a false concept about the violation of
the principle of causality and the principle of equation
invariance under Lorentz’s transformation.
But a line, a point and a plane are abstract mathematical concepts, and in real space such objects do not
exist at all. Any event in the Universe occupies a certain
space-time area, and the fact that we attribute the coor-
dinates X, Y, Z and t to this event just says that this event
has really happened, that is, something has occurred in
the Universe, something has changed in a certain region
of space in a definite time. In all our equations we men-
tally reduce this region to a point and the use of form-
factors is just a weak attempt to account for a colossal
complexity of the collision process. We have so far pre-
pared the ground for discussing the following fact: no
one has ever observed the motion of a physical object
with respect to other objects with its velocity exceeding
at a definite moment of time the speed of light in vacuum
c = 3×10¹⁰ cm/s, excluding the giant “scissors” effect.
In three-dimensional cell space all motion is con-
ducted by cells themselves and the speed of light is the
parameter of their elastic properties. In other words, for
the cells there exists a maximum deformation that de-
fines the maximum possible velocity. The deformed cells
cannot achieve the velocity of deformation transfer above
the maximum. Consequently no particle can exceed this
velocity. The history of tachyons repeats that of phlogiston
and caloric. The planned experiment on the discovery of
neutrino velocity exceeding the speed of light is in our
opinion doomed to failure.
The phenomenon of inertia has been known for many
centuries and Galilei, Huygens, Descartes and Newton
polished the wording of this mysterious phenomenon.
Any macroscopic body moves through liquid or gaseous
media as one whole pushing the molecules of these ma-
terials. However, in order to break the bonds between the
water molecules a body, e.g. a submarine, must spend a
part of its kinetic energy (in other words, matter) that it is
carrying to compensate for the binding energy between
these molecules. Along with this, the momentum of the
submarine is diminished, which leads to the decrease of
the deformation gradient of the cells surrounding the
submarine. Finally the body stops.
An elementary particle, which itself consists of cells,
is moving in the cell space also pushing and deforming
the other cells in front of it without spending any kinetic
energy, since in free space the notion of the binding en-
ergy between the adjacent cells is absent.
If a liquid or a gas were not viscous at all, a body
moving in these media would not meet any resistance
(“Euler’s paradox”). It is in this way that an elementary particle moves through the cells of field-free space. The
space viscosity is equal to “0”!
For the last five years the experiments with the use of
a relativistic heavy ion collider (RHIC) have allowed us
to reproduce at a microscopic scale quark-gluon plasma
formed by collisions of gold nuclei flying almost at the
speed of light.
Some physicists are surprised to see that a quark-gluon
media is practically free of viscosity and so presents the
most ideal liquid among all the known liquids [8]. It is
rather difficult to get rid of the idea that this ideal liquid
accounts for the absence of viscosity in space.
But there is a fundamental difference between the
wave motion of elementary particles and photons and a
sound wave or a wave in a liquid.
The propagation of sound in the air is the motion of elastic deformations caused in the air rather than the motion of air masses (for example, the wind). As a photon or a particle moves, space carries a mass which is the mass of a virtual object, i.e. the corner-stone of quantum theory.
Even this peculiarity alone of the space body motion
established a crushing psychological barrier under the
necessity to allot the invisible and imperceptible object,
i.e. the space, a real material structure. Mysterious char-
acter of space makes some physicists go back to the no-
tion of ether, others - to create a new generalized image
of the space-vacuum, endowing both notions with fantas-
tic properties.
The basis of the inertia principle appears to be the ab-
sence of the absorption of matter connected with a mov-
ing object, which changes the deformations of cells situ-
ated at each time moment near the given body. However,
after the body leaves this region, the space cells obtain
their previous form if there are no other bodies or fields
there. Even sweeping all the stars out of the Universe,
nevertheless the space and the inertia will still exist.
Thus, the cell space contradicts Mach’s principle.
4. Enigma of Large Numbers
Any theory is only worthy of notice when it contains
For further investigation of the properties of the ele-
mentary cell we revert to two fundamental laws, i.e.
Newton’s gravitation law and the Coulomb’s law:
, .
mm qq
Here physical mechanisms veiled by false simplicity
of their mathematical expression are of interest. The laws
are very similar in form. Noticeable is a similar depend-
ence both on distance and charges. But the most inter-
esting is the relative value of these forces. From the pre-
vious experience it is known what key role the dimen-
sionless values like the Reynolds number, the Knudsen number, the Mach number, etc. play in understanding phy-
sical phenomena.
Let us find the ratio of these forces for two electrons:
F_C/F_g = e²/(G m_e²) ≈ 4.17×10⁴². (6)
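The 4.17×10⁴² figure is straightforward to verify in CGS units, where the Coulomb law needs no extra constant (the charge and mass values are standard figures, not from the paper):

```python
# Ratio of the Coulomb force to the gravitational force for two electrons.
# The r^2 cancels, leaving the dimensionless combination e^2 / (G m_e^2).
e = 4.803e-10     # electron charge, esu
G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
m_e = 9.109e-28   # electron mass, g

ratio = e**2 / (G * m_e**2)
print(f"{ratio:.2e}")   # -> 4.17e+42, matching Equation (6)
```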
The value is amazing and there hardly exists a physic-
cist who has never thought what it means. Many promi-
nent scientists attempted to get this number [4]. Most
known is Dirac’s attempt. He divided the age of the Universe by the time during which a light beam passes the distance equal to the Compton length of a proton:

T/(λ_p/c) ≈ 10⁴². (7)
Feynman [7] joked that this number could be obtained
by dividing the Earth volume by the volume of an aphid,
but what have they to do with this number? In fact the
situation becomes much more complicated if to recollect
that even nowadays the number of elementary particles is
about one thousand and for each particle there is its infi-
nitely large number and, besides, the number of particles
is growing with disastrous speed.
All these infinite numbers must be obtained from one
and the same assumption. Perfect will be a solution when
theory will operate only with the fundamental constants G, c and ħ.
To make the solution more clear we shall do an opera-
tion, the meaning of which will further on become evi-
dent. Let us consider an interaction whose constant is 1/α times bigger than the electromagnetic constant (α is the fine structure constant):

F = ħc/r². (8)
Now let us find the ratio of this interaction force to
Newton’s force. Then for e, μ and p, respectively, we have:

F/F_g = ħc/(G m_e²) ≈ 10⁴⁵,
F/F_g = ħc/(G m_μ²) ≈ 10⁴⁰,
F/F_g = ħc/(G m_p²) ≈ 10³⁸. (9)
We do not give numbers for other particles and only
note that this ratio is rapidly decreasing with the increase
of the particle mass. Intriguing is the result for the
heaviest clot (if it exists), i.e. plankeon:
F/F_g = ħc/(G m*²) = 1. (10)
Now let us recollect that matter (particle mass) forms
only a steady stable surface shell, but a “particle body”
consists of space cells. Or else, the mass of a particle
deforms both internal and external cells as if “cutting”
from space a mini-volume, which later on we call an ele-
mentary particle. Therefore, only the cells of the surface
layer participate in the interaction.
Let us find the area of one elementary cell using the ideas of Bekenstein and Hawking:

S = L*² = ħG/c³ ≈ 2.6×10⁻⁶⁶ cm². (11)
Now let us recollect that the size of the particle, as it
was previously found, is equal to:

r = const·ħ/(mc), (12)
and the coefficient const can only be of an order of a unit
(for a proton const is equal to 3).
Let us estimate the number of cells for one layer L* ≈ 10⁻³³ cm thick, which makes the surface of a particle:

N = S/S_c. (13)
The number of cells on the particle surface is equal to:
N_e = S_e/S_c ≈ 10⁴⁵, N_μ = S_μ/S_c ≈ 10⁴⁰,
N_p = S_p/S_c ≈ 10³⁸, N_{m*} = S_{m*}/S_c ≈ 1, (14)
for e, μ, p and m*, respectively.
Comparing the ratio of the interaction forces (9) with the number of cells making the particle surface (14), we obtain an overwhelming result: these numbers are equal.
What is then the physical meaning of this amazing equality? It appears to be rather transparent. Let us con-
sider the simplest case when particles are pressed to each
other and their surfaces touch. Then in our contrived
strong interaction the matter of all cells, making the
nearest to the surface layer, participate; and in the gravi-
tational interaction there is only one cell which participates.
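The equality of (9) and (14) is in fact an algebraic identity: since L*² = ħG/c³, the number of surface cells (λ_C/L*)² equals ħc/(Gm²) exactly. A numeric check for the electron (constants are mine, in CGS):

```python
import math

hbar, c, G = 1.0546e-27, 2.9979e10, 6.674e-8   # CGS constants
m_e = 9.109e-28                                # electron mass, g

force_ratio = hbar * c / (G * m_e**2)          # ratio (9) for the electron
L_star = math.sqrt(hbar * G / c**3)            # Planck length
lambda_C = hbar / (m_e * c)                    # electron Compton wavelength
surface_cells = (lambda_C / L_star) ** 2       # cell count (14), up to an O(1) factor

# Both are ~5.7e44 (~10^45) and agree to machine precision.
print(f"{force_ratio:.3e}  {surface_cells:.3e}")
```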
Turning back to the Coulomb force, we obtain

F_C = e²/r² = αħc/r², (15)

which is a key to the solution of the mystery of large numbers.
Now the physical meaning of the fine structure constant α becomes clear: in the Coulomb interaction of two charged particles at a distance equal to the size of the particle, only a part of the cells of the surface layer, equal to α ≈ 1/137 of their total number, takes part. Besides, the
cells form layers around the particle and in each layer the
deformation of cells is identical, since the cells of one
layer are at the same distance from the particle. As the
difference in force in these interactions lies only in the
number of participating cells, then the dependence of
these two fundamental forces on the distance becomes
similar because the forces are saturated with matter from
one and the same layer. Moreover, an elementary cell
will “allow” us later on to prove that the potential of both
gravitational and Coulomb’s interactions is inversely
proportional to the distance,

φ ∝ 1/r, (16)

and, consequently, the force

F ∝ 1/r². (17)
The physical meaning of the other interactions also becomes clear: the more cells from the layer participate in the interaction, the stronger the interaction. However, the constant of any interaction cannot be more than unity,

α_i ≤ 1, (18)

since at α_i = 1 all cells of the layer take part in the interaction, which, in its turn, imposes a limit on the binding energy of two particles:
Copyright © 2011 SciRes. JMP
V. KONUSHKO ET AL.
E_c ≤ ½ mc², (19)
where m is the mass of a lighter particle of the pair.
A vigorous Coulomb interaction brings us to the assumption that there must be a profound reason for such a large number of cells to participate in the interaction. We shall call this phenomenon the effect of elementary cells.

5. The Essence of Electric Charge

Millikan and other scientists have shown in their experiments that electric charges consist naturally of discrete constant portions. A positron has the same amount of electricity as an electron. A more striking example is that all other charged particles have charges exactly equal in value to, for instance, the positive charge of the positron.

According to the present-day views, the difference between an electron and a proton is, probably, the biggest among elementary particles. But their charges are equal to a high degree of accuracy. It is not clear yet what does not permit an electron to decay, nor what determines the exact value of its charge.

The existence of an electric charge in two forms is, of course, its fundamental property. The conservation law and relativistic invariance are also its mysterious features. In an isolated system the full electric charge, i.e. the algebraic sum of positive and negative charges, remains constant. If we measure a charge in different reference systems, we get the same number, which drastically differs from measuring the total mass carried by a moving particle: the higher the velocity, the larger the total mass,

m = m₀/√(1 − v²/c²).
It is difficult to give up the idea that even if we had an accelerator with E = 10¹⁹ GeV and discovered experimentally all the elementary particles which space can create, we would not answer any of the above questions. Thus, we have to study the real structure of space already.

The colossal difference between the Coulomb force and gravitation can only be explained by geometry. Let us consider the simplest analogy. Cut a convex lens out of a whole piece of glass: make a convex line on the glass with one move of the cutter. Then, after breaking the glass, one can see that the second part has a concave surface, and the radii of curvature of these surfaces are absolutely the same.

When creating electric charges, Nature acts in a similar way. In an imaginary experiment, if we increase an electron to the size of a football, we can see that the surface of this particle is made up of clusters of deformed cells resembling segments of the football design. And, if we are not mistaken with the signs, these clusters are concave for the electron (negative curvature) and convex for the positron (positive curvature). The question arises: how many clusters are sited on the electron surface? We can answer this question by considering the fine structure constant α:

α = e²/(ħc) ≈ 1/137.036.
This unattractive number has been agitating the minds of physicists for nearly a hundred years. Feynman said: “… this is one of the greatest accursed mysteries of physics - a magic number we have got and don’t understand at all. It might be said that this number has been written by ‘God’s hand’ but we don’t know what moved his pencil”.

If the only construction for the Universe is a three-dimensional elementary cell, the Universe “knows” only natural numbers. Kronecker, a mathematician, was rather sagacious when he said that the Lord had created natural numbers, and all the rest was man’s handiwork, meaning that the role of the Lord was played by Nature. If this is the case, the quantity α should be an echo of a whole number. Since the shape of an elementary particle differs a little from being spherical, one can suggest that the number π is involved here. Then

4π/α ≈ 1722.
The number 1722 is accurate enough, but it should be taken into account that the measured value of 1/α has been varying rather widely for the last hundred years, beginning with 136. The number 1722 has a simple physical meaning: the surface of any charged particle consists of 1722 clusters, whereas only the cells of one cluster take part in the Coulomb interaction. Hence it follows that all charged particles have equal electric charges even though their masses differ greatly. This fact verifies again an absolute dependence of the particle size on its mass. This dependence is vividly reflected by the Compton length:

λ = ħ/(mc).
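The arithmetic behind the number 1722 can be checked independently. A minimal sketch, assuming only the standard value of the fine structure constant (not taken from this text):

```python
import math

alpha = 1 / 137.036           # fine structure constant, standard value
clusters = 4 * math.pi / alpha
print(round(clusters))        # 1722
```

The result, 4π × 137.036 ≈ 1722.1, matches the figure quoted in the text to within rounding.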
In our articles we paid attention time and again to the false concept of an electron as a point particle with r_e < 10⁻¹⁷ cm. There are no point objects in the world, and they cannot exist. Pointness means the absence of a structure, i.e. indivisibility into components. The number 10⁻¹⁷ cm only means that if there are structural formations inside an electron, they are less than 10⁻¹⁷ cm; this figure has nothing to do with the size of the electron.
Although a more massive particle has a smaller surface and its cluster contains fewer cells, the bigger mass of one cell compensates for the shortage of cells materially. The electron surface consists of N cells:

N = 4πr_e²/L*² ≈ 7.3 × 10⁴⁵,

where L* = √(ħG/c³) ≈ 1.6 × 10⁻³³ cm is the Planck length, and every cluster contains 4.17 × 10⁴² cells. The same quantity of cells would take part in the Coulomb interaction of an electron and a positron if they could be brought into contact. In this case the convex surface of one cluster (positron) fully coincides with the concave surface of an electron cluster. In gravitational interaction, however, only one cell would participate! So we are able to unravel the secret of the extreme weakness of gravitational interaction between elementary particles, as well as the mystery of big numbers, the solution of which Dirac was looking for in the Universe.
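The surface-cell count quoted above follows from simple arithmetic. A sketch of the check, using the electron size and Planck length as given in the text:

```python
import math

r_e = 3.86e-11   # cm, electron size as used in the text
L_p = 1.6e-33    # cm, Planck length
N = 4 * math.pi * r_e**2 / L_p**2   # cells in the surface layer
cluster = N / 1722                   # cells per cluster
print(f"N = {N:.1e}, cluster = {cluster:.1e}")  # ~7.3e45 and ~4.2e42
```

The cluster count comes out near the 4.17 × 10⁴² quoted in the text; the small discrepancy is rounding in the input values.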
Our studies into the granular space structure enabled us to find all these infinitely big numbers in the structure of elementary particles without turning to the Universe.

It should be noted that in a real experiment a positron cannot be at the distance r_e = 3.86 × 10⁻¹¹ cm, in contact with an electron, because the number of cluster cells is 4.17 × 10⁴² while the positron surface consists of ~10⁴⁵ cells. The positron in this case simply cannot be placed in “the bed of Procrustes” of the electron cluster. The deformed space cells will inevitably push the positron aside to the distance a, determined again by the constant α:

a = ħ²/(me²) = 0.529 × 10⁻⁸ cm,

where a is the size of the hydrogen atom.
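The value of a quoted here is consistent with dividing the electron size used in the text by α. A quick numeric sketch:

```python
r_e = 3.86e-11        # cm, electron size as used in the text
alpha = 1 / 137.036   # fine structure constant
a = r_e / alpha       # Bohr radius
print(f"{a:.3e} cm")  # 5.290e-09 cm, i.e. 0.529e-8 cm
```

This is the usual relation a = ħ/(m c α) = ħ²/(m e²) between the reduced Compton length and the Bohr radius.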
6. Nature of Energy Levels
It is of high interest to observe in an imaginary experiment the structure of deformed cells as they gradually move away from the particle. The structure of the cells near a planceon is the simplest and the most prominent, since it consists of only one cell. The mass of this particle is

m* = √(ħc/G) ≈ 2 × 10⁻⁵ g,

and its size

L* = √(ħG/c³) ≈ 1.6 × 10⁻³³ cm.

It should be stressed again that L* is nothing but the Compton length of the planceon, which demonstrates the rigid relation between the particle mass and size mentioned repeatedly in [6].

Moving away from this particle we can observe a wonderful phenomenon: a gradual decrease in space cell deformation gives rise to clusters of collectivized space cells at the distance r_n from the planceon:

r_n = n²L*.

This may first occur in a layer lying at the distance r₂ = 4L*, because this nearest layer can be densely covered with clusters consisting of four cells. The next cluster has nine cells, in a layer spaced at the distance r₃ = 9L* from the planceon. In this most natural way we can detect energy levels.
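The planceon mass and size quoted above are the Planck mass and Planck length. A sketch of the computation in CGS units, with standard constant values not taken from the text:

```python
import math

hbar = 1.0546e-27   # erg*s
c = 2.9979e10       # cm/s
G = 6.674e-8        # cm^3 g^-1 s^-2

m_star = math.sqrt(hbar * c / G)      # Planck mass, ~2.2e-5 g
L_star = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-33 cm
print(f"{m_star:.2e} g, {L_star:.2e} cm")
```

Both come out in agreement with the rounded figures 2 × 10⁻⁵ g and 1.6 × 10⁻³³ cm in the text.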
For the first time energy levels were referred to by N. Bohr in 1913. It is easily seen that the number of clusters on the first energy level is two, on the second, eight, on the third, eighteen, and so on. As we have already noted, this is due to the fact that clusters must closely fill their relevant layer of cells. Every cluster is characterized by its specific features in the structure embodied in the quantum numbers n, l and m, thus revealing the mystery of Pauli’s exclusion principle. On the first energy level space forms only two clusters, and that is why only two electrons can be sited on it, irrespective of the nuclear charge. On the second level, eight electrons; this is the exact number of clusters formed by space on this level, and so on.

The Pauli exclusion principle plays a pivotal role in our understanding of countless physical and chemical phenomena, ranging from the periodic table of elements to the dynamics of white dwarfs and neutron stars. It has defied all attempts to produce a simple and intuitive proof, despite being spectacularly confirmed by the number and accuracy of experiments.

Wolfgang Pauli remarked in his Nobel Prize lecture (13 December 1946): “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had the feeling, and I still have it today, that this is a deficiency. The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems to me unavoidable”.

The intrigue connected with Pauli’s principle is that the second level can have no more than eight electrons not because these particles “avoid” each other if they have the same quantum numbers, but because on the second energy level space allots only eight “seats” to the electrons, and the quantum numbers n, l and m reflect a complex structure of clusters which consist of a huge number of deformed elementary cells.

A characteristic feature of science is that we must be able to describe phenomena so that we can say something intelligible without exhaustive data at hand. It is worth noting that every new theory asks more new questions than it answers old ones.
The interaction constant of two planceons, α_pl, is

α_pl = Gm*²/(ħc) = 1.

Consequently, at E = m*c² ≈ 10¹⁹ GeV all the interactions become united, and the energy levels can be called gravitational, Coulomb and nuclear at once. The foregoing enables us to draw the following conclusions.

1) All energy levels are characterized by the formation of clusters (collectivization) of deformed space cells at a definite distance from a particle.

2) Any body generates gravitational energy levels around itself.

The formation of energy levels near a charged particle is similar to the process which occurs in the case of the planceon, with the only difference that the role of one elementary cell (planceon) is played by the whole cluster containing N = 4.17 × 10⁴² cells in the case of the electron. Also, a correction should be introduced for the smaller value of the electromagnetic interaction constant α. Hence, electron energy levels are placed at the distance r_n from the proton in a hydrogen atom:

r_n = n²a.
Of interest is that the same inverse proportionality to the squared distance is found in both the Coulomb and the gravitational interaction, and this regularity has a deep meaning. Feynman writes [7]: “... Nobody has so far managed to represent gravitation and electricity as two different manifestations of one and the same essence”.
As has repeatedly been mentioned, in forming a particle the additional matter, i.e. the rest mass, which tries to be sited in a space closely filled with cells, has to deform both the internal and the external cells of space. The elementary cells pushed outside make up the reserved matter, which can be transferred by space to other particles and is referred to as potential energy. The deformed outer cells form the substance called a physical field. It is quite obvious that this mass cannot be larger than the rest mass of the particle being formed - a peculiar law of Archimedes in the microcosm. In its turn, it means that the constant of any interaction cannot exceed unity.
Many physicists have already realized that the value g²/(ħc) = 15 is just a phenomenological parameter of the strong interaction at low energies rather than the constant of this interaction. In the article concerned with nuclear forces we find the constant of nuclear interaction on the basis of the deuteron binding energy:

α_n = 0.09736.
The same dependence of interactions on distance that Feynman spoke about has a deep meaning, which consists in the forming of clusters of deformed outer cells. A gravitational cluster begins with one elementary cell and then gradually grows as it moves away from the particle.

In its turn, the electric cluster of, say, the electron in a hydrogen atom begins with an object containing 4.17 × 10⁴² elementary cells and then grows in the same manner as the gravitational cluster. And only at the distance a = 0.529 × 10⁻⁸ cm does the electric cluster’s area increase by 1/α times, so that the electron can at last be placed in this bed of Procrustes. The gravitational clusters increase similarly. Then, as the electron moves farther away, the cluster areas of both interactions grow, forming a stringent sequence, like for the planceon:

S₁ : S₂ : S₃ = 1 : 4 : 9.
According to the granular space theory, the electron is not only at a definite distance a from the proton but, besides, it is at rest, though it has a huge velocity, a momentum and kinetic energy. At first glance this statement is paradoxical: the electron velocity in heavy atoms is as high as the velocity of light. To unravel this paradox we revealed the true physical meaning of velocity in about ten articles considering numerous examples: v²/c² is the relative value of deformation of elementary space cells. So, the following three quantities - the velocity, momentum and kinetic energy of an electron on the ground energy level of any atom - are characteristic not of motion but of the deformation of elementary space cells.

All the physical phenomena considered by us from the standpoint of material discrete space fully confirm Einstein’s position “God doesn’t play dice” and save us forever from Bohr’s attempt to ascribe indeterminism and uncertainty to space. It will be discussed in more detail in another article.

To our surprise we can observe almost visually the quantization of not only Coulomb fields but also gravitational ones. In experiments we cannot feel the gravitational levels of particles because of their small sizes but, as the mass of the object increases, the gravitational clusters become larger, too. In our work “Gravitational Levels and the Problem of Microwave Background” we found out that the gravitational levels of the Earth are responsible for the formation of quasi-black-body radiation near the Earth with T ≈ 2.7 K, which is competitive with the “relict” radiation hypothesis.

One of the fundamental properties of electric charge, its existence in two forms, is related, as we have established, to the fact that the deformed cells on the surfaces of two particles form clusters consisting of a huge number of cells and having the same concave and convex surfaces for both particles. In mathematics the properties of curved surfaces were studied by Lobachevsky, Gauss, Riemann, et al. It is rather amusing that curved-surface mathematics now enters physics through the front door
and not through the back one, because we are able to see the curvature of surfaces with our own eyes, removing the nonobservability of the theory. As we cannot cut only one convex surface of glass without the other part of the glass getting a concave surface, so it is impossible to generate a positive charge without a negative one. No doubt, charges can disappear only in pairs, too.

So, we can observe practically visually the fundamental property - the law of charge conservation and the existence of charge in two forms.

In our next articles we are going to consider more comprehensively the mechanisms of both gravitational and Coulomb interactions; here we want just to note that gravitation is realized when the deformation of elementary cells beyond two bodies is larger than between them, whereas repulsion is characterized by larger deformation of cells between the bodies. The difference in cell deformation results in a deformation gradient called force. The curvature of the layers of the deformed cells around a particle or a macroscopic body is a secondary effect.

There is exhaustive experimental evidence that the total charge of a system remains constant as the charge carriers move. We have got so used to it that we don’t often think about such a wonderful and fundamental fact. The mass does not possess this property of invariance.

The matter carried by kinetic energy forms an object referred to in modern theory as a virtual photon; its structure will be considered in a section concerned with the motion of elementary particles. The kinetic energy of a particle grows as the transferred mass of matter is increased 1/√(1 − v²/c²) times. The space and the particle carry the additional substance used to form new particles when a projectile particle collides with a target particle. The mass of the electron does not change in this case; its size remains constant, too. As for the moving virtual object (photon), the following rigid ratio is valid:

r = ħ/(mc),

where m means the total mass of the electron and the virtual photon.

Let a moving electron have E_e = 938 MeV, which is equal to the rest energy of the proton. In this case the wavelength of the virtual photon (not the electron) is λ ≈ 2 × 10⁻¹⁴ cm, i.e. it is equal to the size of a rest proton. It should be noted that the surface of the virtual photon duplicates that of the electron: it is made up of segments, clusters of negative curvature, too. Since the mass transferred by it is equal to that of the proton, the cluster will be exactly the same as the cluster on the proton surface, and the decreased area of the cluster is compensated for by the larger mass. To our surprise we have to state that this virtual photon has a charge equal in value to that of the proton and, hence, the charge of a moving electron is equal to that of a rest electron, thus making the electric charge invariant.
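The wavelength figure quoted for a 938 MeV virtual photon can be checked from the standard combination ħc ≈ 197.3 MeV·fm. A minimal sketch:

```python
hbar_c = 197.3e-13   # MeV*cm (197.3 MeV*fm converted to MeV*cm)
E = 938.0            # MeV, rest energy of the proton
lam = hbar_c / E     # reduced wavelength hbar/(mc)
print(f"{lam:.1e} cm")  # 2.1e-14 cm
```

The result, about 2.1 × 10⁻¹⁴ cm, matches the λ ≈ 2 × 10⁻¹⁴ cm of the text.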
7. The Birth of an Elementary Particle

The process of formation of an elementary particle remains as mysterious as the number 1722. But space creates a particle in miserable fractions of a second and, if our eyesight were keener, these secrets would have been revealed long ago. Once someone said well that Nature laughed at our difficulties.

Now we can set forth just some preliminary considerations having a claim on rigorous proof but, nevertheless, containing a number of important ideas.

It is the shape of the cell, a polyhedron, which evidently plays a leading role in forming a particle. To simplify the problem, let us consult the honeycomb again. Every cell is hexahedral in shape, and all the subsequent layers that “dress” it are the same hexahedrons. Thus, almost all elementary particles have the same priming surface. Owing to this needle-shaped broken structure, the additional matter - the particle mass - can cover part of the space cells, deforming them, and, with certain conditions met, can form a surface film at least for a short instant, as in the case of resonances. We are, without fail, to reveal these mysterious conditions already today, not waiting for the construction of a 10¹⁹ GeV accelerator, because even this energy will not tell us anything new about the electron structure.

The formation of a particle is a real disaster; the mathematical theory of catastrophes is under rapid development now. Let us consider a simple analogy. The birth of a particle vaguely resembles the work of a clock ratchet-and-pawl. The pawl slides quietly over the ratchet for a while but, when it reaches the end of a notch, it falls, and a catastrophe happens. Similarly, a gradual increase in substance quantity - the mass in this region - causes both the radial and the torsion deformations of cells to increase. The latter defines the spin structure. Charged particles are born in pairs, and this process ends with a catastrophe all of a sudden, when the previously independent space cells form clusters of positive and negative curvatures on a closed surface. The process of clusterization and collectivization of elementary cells is the main mystery of elementary particle birth.

As the particle mass increases, the deformation of the inner cells reaches such a point that it becomes useful for space to create clusters inside the particle, clusters called quarks. The cluster confinement in this case is quite a transparent phenomenon: without external deformation of space cells the quarks simply disappear.

This unusual deformation makes a prerequisite for introducing the conceptions of color and flavor into the theory.
Though these processes are still far from being fully
understood, the birth of a particle does not need either additional fields or mediator particles. An analogy with a soliton will probably help us to approach the secret of the formation of a stable surface of elementary particles. The soliton is assumed to be kept stationary at the cost of a balance between the action of a nonlinear medium and dispersion.
We should add another point to understand the gravitational potential φ. Let us estimate φ/c² on the surface of an electron, a muon and a proton:

(φ/c²)_e ≈ 10⁻⁴⁴,
(φ/c²)_μ ≈ 10⁻⁴⁰,
(φ/c²)_p ≈ 10⁻³⁸.

We have already come across the large numbers 10⁴⁴, 10⁴⁰ and 10³⁸, which denote the numbers of cells making up the surfaces of an electron, a muon and a proton. Since the squared velocity has the dimension of a potential, the value φ/c² shows which part of the cells, of their total number in a given layer, is involved in the interaction.

Here we want to illustrate with a proton one specific feature of cell behavior: though one cell of the proton surface carries 10⁸ times more substance than one cell of the electron surface, again only one cell participates in the gravitational interaction of two protons at an ultimately close distance.

8. Conclusion

In conclusion it should be noted that even the first “steps” of an elementary cell have removed a touch of mystery and fantasy from quite a number of fundamental physical phenomena. The Universe is found to be infinitely simple, as it is made up of only one element, and infinitely complex, as any of its clusters of matter consists of an infinite number of elementary cells. So, is space continuous or discrete? It would be safe to say, neither of the two. But this naked negation does not feed our thoughts and negatively affects theory. Therefore, the more exact answer is: both. Space is discrete because it consists of discrete elementary cells; it is also continuous, since any deformation of cells spreads in a continuous manner (and not by steps) from one cell to another, establishing the absolute, hundred-percent causality and definiteness that our great predecessors Planck, Lorentz, de Broglie, Schrödinger, Dirac and Einstein believed in till the end of their lives.

The solution of the problem of large numbers fully explains the smallness of gravitation as compared to the other forces acting in the microcosm and does not need a fantastic hypothesis of the existence of extra spatial dimensions. Only a three-dimensional elementary cell enables the Universe to create objects of any complexity. The functional dependence of gravity on distance, F ∝ 1/r², must remain constant down to distances r ≈ 10⁻³³ cm.
Atom stability is one of the most burning problems of theoretical physics, and any attempt to solve this problem with the use of the Heisenberg inequality is invalidated by numerous experiments. In our next articles we shall successively consider atom stability, the Heisenberg inequalities, the foundation of the probability interpretation of the ψ-function and the curvature of four-dimensional space-time in Einstein’s theory of gravity, the nature of the microwave background of the Universe, the time “arrow” and other subjects, making use of the material structure of granular space and only three world constants.

More and more physicists are aware that space is granular in structure and sets an absolute system of reference [8].

Comparatively a short time ago astronomers discovered a wonderful star picture: groups and clusters of galaxies of various scales form a cellular-netted large-scale structure of the Universe. In our opinion, this large astral cell is born by the cellular structure of the space itself.

Newton shared the opinion that the real space is to some degree an empty box where material bodies move about without affecting the space at all. Einstein’s theory of gravity has invalidated this assumption by supporting the view that matter and space are directly interrelated. The theory of granular space is making another step to the unification of matter and space: any cluster of matter is a complex of fantastically deformed elementary cells of the space itself, thus symbolizing the Great Unity of Nature [5].

[1] M. Planck, “The Unity of the Physical Pattern of the World,” Nauka, Moscow, 1996, p. 108.
[2] J. Wheeler, “Einsteins Vision,” Springer-Verlag, New York, 1968.
[3] L. Okun’, “Introduction to Elementary Particle Physics,” Nauka, Moscow, 1985.
[4] A. Eddington, “The Mathematical Theory of Relativity,” Cambridge University Press, Cambridge, 1923; P. A. Dirac, “Directions in Physics,” Wiley, New York, 1978.
[5] V. Konushko, “Concepts of Granular Space Theory,” Sputnik, Moscow, 1999.
[6] R. Feynman, “The Feynman Lectures on Physics,” Addison-Wesley Publishing Company, London, 1963.
[7] S. W. Hawking, “Particle Creation by Black Holes,” Communications in Mathematical Physics, Vol. 43, No. 3, 1975, pp. 199-220. doi:10.1007/BF0234
[8] T. Jacobson and R. Parentani, “An Echo of Black Holes,” Scientific American, No. 3, 2006, p. 17; L. Smolin, “Atoms of Space and Time,” Scientific American, No. 4, 2004, p. 20.
Tips for simplifying Algebraic Fractions
By Kathleen Cantor, 03 Apr 2021
An algebraic fraction is any fraction that uses a variable in the numerator or denominator. For example, the variable x in the fraction x/3 makes it an algebraic fraction. In algebraic fractions, you
cannot divide by zero. As such, the denominator has certain restrictions. Take a look at the illustrations below.
• In the fraction 5/x, x cannot be equal to zero (x≠0)
• In the fraction 2/(x-3), x cannot be equal to three (x≠3)
• In the fraction 3/(a-b), a-b cannot be equal to 0 (a-b≠0); therefore, a cannot equal b (a≠b)
• In the fraction 4/(a²b), neither a nor b can equal 0 (a≠0, b≠0)
Simplifying an algebraic fraction means writing the fraction in its most compact and efficient form without necessarily interfering with the original fraction's value. Simplifying makes it easier for
you to understand and solve the fraction. Here are some tips.
How to Simplify Algebraic Fractions
1. Identify whether you need to add, subtract, multiply, or divide.
2. To get a common denominator, multiply the two denominators together.
3. Multiply the numerator of the first fraction by the denominator of the second fraction to get the first new numerator.
4. Multiply the numerator of the second fraction by the denominator of the first fraction to get the second new numerator.
5. Now add or subtract the new numerators to get a single numerator, and place it over the common denominator from step 2. (For multiplication and division, instead multiply straight across, inverting the second fraction first when dividing.)
6. Simplify the resulting fraction by cancelling common factors.
Algebraic Notation
Now that you can simplify the basic equation, it will be easier to identify the equation's components. The way you write an algebraic equation is known as algebraic notation.
a/b + c/d is written in algebraic notation. Algebraic notation has several components: operators, parentheses, coefficients, variables, and exponents. In an expression such as X + PX·V² − (W/X), X is a variable; +, · and − are operators; P is a coefficient; V² carries the exponent 2; and (W/X) is the part in parentheses.

X, the variable, represents an unknown number. It helps find the missing number in an equation, which is always the goal in algebra. Any letter can represent a variable, and a variable is used together with coefficients, but that does not mean the answer will be the same for every variable.

Coefficients group the same variables into one term to simplify work, and operators determine how we solve the problem. If there are several operators in one problem, we use PEMDAS (parentheses, exponents, multiplication/division, addition/subtraction). Parentheses are the same as brackets: if you come across a problem with parentheses, solve that part first, and if two parenthesized factors are written next to each other, for example (v)(w), you multiply the two. An exponent, on the other hand, indicates a number multiplied by itself: V², in this case, is the same as V·V.
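Python happens to follow the same precedence rules as PEMDAS, so the order of operations can be illustrated directly. The values chosen for the symbols below are arbitrary:

```python
from fractions import Fraction as F

# Arbitrary sample values for the symbols in X + P*X*V**2 - (W/X)
X, P, V, W = F(2), F(3), F(4), F(6)

result = X + P * X * V**2 - (W / X)
# The same expression with the PEMDAS grouping made explicit:
explicit = X + ((P * X) * (V**2)) - (W / X)
assert result == explicit
print(result)  # 95
```

Both expressions agree because exponents bind before multiplication, which binds before addition and subtraction.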
Example 1
Simplify the following algebraic fraction: a/b + c/d·(z/p − v/w)

The expression contains addition, subtraction, multiplication, and parentheses. According to PEMDAS, we must start with the parentheses:

• z/p − v/w = zw/pw − vp/pw = (zw − vp)/pw

Next comes the multiplication by c/d, and only then the addition of a/b, using a common denominator:

• c/d · (zw − vp)/pw = c(zw − vp)/dpw
• a/b + c(zw − vp)/dpw = [adpw + bc(zw − vp)]/bdpw

Remember, you are simplifying, not solving.
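A handy way to sanity-check a simplification like this is to substitute exact rational values for the letters and compare both forms. A sketch using Python's fractions module:

```python
from fractions import Fraction as F

def stepwise(a, b, c, d, z, p, v, w):
    # Evaluate a/b + c/d * (z/p - v/w) directly, following PEMDAS
    return F(a, b) + F(c, d) * (F(z, p) - F(v, w))

def combined(a, b, c, d, z, p, v, w):
    # The single-fraction form: [adpw + bc(zw - vp)] / bdpw
    return F(a * d * p * w + b * c * (z * w - v * p), b * d * p * w)

vals = (2, 3, 5, 7, 1, 4, 3, 8)   # arbitrary nonzero sample values
assert stepwise(*vals) == combined(*vals)   # both give 97/168 here
```

If the two forms ever disagree for some substitution, the simplification contains an algebra error.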
Example 2
Simplify (Vx − Vw)/(Qx − Qw).

You'll find common factors in the numerator and denominator, so group them: V(x − w)/[Q(x − w)]. You can then cancel the common factor (x − w). This results in the final answer being V/Q.
Example 3
Simplify the expression (x²/Vy)·(Py/Qx).

Cancel x between x² and Qx, leaving a single x, and cancel y between Py and Vy. So the final answer will be Px/(VQ).

When cancelling, the factor not only has to be common but should be the highest common factor: if X is the highest common factor of all the terms, you divide each term by X.
Example 4
A rectangle has side lengths (w + x)/v and (w − p)/x. Write an expression for its area.

Since we are finding the area, we multiply the two given side lengths. (If it were the perimeter, we would have added all four sides.)

• (w + x)/v · (w − p)/x = (w + x)(w − p)/vx
• Expanding the numerator: (w + x)(w − p) = w² − wp + xw − xp
• So the area is (w² − wp + xw − xp)/vx
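The area expression can be verified for sample side values; a quick sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

# Arbitrary sample values for the side-length symbols
w, x, v, p = 5, 3, 2, 1

area = F(w + x, v) * F(w - p, x)              # (w+x)/v * (w-p)/x
expanded = F(w*w - w*p + x*w - x*p, v * x)    # (w^2 - wp + xw - xp)/(vx)
assert area == expanded   # both equal 16/3 for these values
```

The factored and expanded forms of the area agree for any valid substitution, since expansion does not change the value.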
Complex Examples Involving Algebraic Fractions
The following two examples require you to use the factorization method and FOIL method, that is, multiplying two binomials. If the denominators are the same, you will use the examples above and if
the denominators are not the same and the question is complex, use the two examples stated. In complex problems, use the following steps:
1. Factorize the denominators.
2. Write down each distinct factor that appears in the denominators, ignoring its powers for now.
3. For each factor you have written, take the largest power in which it appears.
4. The product of these factors and powers is the least common denominator, which you multiply into the question's numerator and denominator.
Example 1
Simplify the following complex fraction: (1 + p/x) / (1 − p²/x²)
Our first step is to create a single fraction in both the numerator and denominator. We add the fractions in the numerator and subtract the fractions in the denominator.
• {(x/x) + p/x} / {(x²/x²) − p²/x²}
• {(x + p)/x} / {(x² − p²)/x²}
• {(x + p)/x} × {x²/(x² − p²)}
• {(x + p)/x} × {x²/((x + p)(x − p))}
• x / (x − p)
You can choose a more straightforward method: the least common denominator, which here is x². Multiply both the numerator and denominator by x² and the complex fraction resolves quickly.
Example 2
Simplify the complex algebraic fraction {p/x − p/qx} / {n/x − w/rx}.
We first create a single fraction in the numerator and in the denominator, and then simplify.
• (p/x − p/qx) / (n/x − w/rx) = {(p/x)(q/q) − p/qx} / {(n/x)(r/r) − w/rx}
You can choose to leave the answer as it is or simplify depending on the given question.
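If you do choose to simplify, one possible continuation (assuming all denominators are nonzero) is:

```latex
\frac{\dfrac{pq - p}{qx}}{\dfrac{nr - w}{rx}}
= \frac{p(q-1)}{qx}\cdot\frac{rx}{nr - w}
= \frac{pr(q-1)}{q(nr - w)} .
```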
Final Thoughts
The complex questions may seem hard, but once you understand the essential details, such as factorization, they become easy to solve. You can also use the least common denominator where applicable, or the greatest common factor. After multiplying, you can expand the result to simplify the expression.
Show that each of the circles x² + y² + 4y − 1 = 0, x² + y² + 6x + y + 8 = 0, and x² + y² − 4x − 4y − 37 = 0 touches the other two.
Centre and radius of the first circle: (0, −2) and √5. Centre and radius of the second circle: (−3, −1/2) and √5/2. Centre and radius of the third circle: (2, 2) and 3√5. It can be easily verified that:
(i) The distance between the centres of the first and second circles, (3/2)√5, equals the sum of their radii, so the first and second circles touch externally.
(ii) The distance between the centres of the second and third circles, (5/2)√5, equals the difference between their radii, so the second and third circles touch internally.
(iii) The distance between the centres of the third and first circles, 2√5, equals the difference between their radii, so the third and the first circles touch internally.
Observe that the third circle envelops the first and second circles.
Getting Started With TensorFlow in Angular
Jim Armstrong | ng-conf | Nov 2020
Polynomial Regression using TensorFlow JS, Typescript, and Angular Version 10
AI/ML (Artificial Intelligence/Machine Learning) is a hot topic, and it's only natural for Angular developers to want to 'get in on the action,' if only to try something new and fun. While the general concepts behind neural networks are intuitive, developers looking for an organized introduction are often suffocated with jargon, complex APIs, and unfamiliar math concepts after just a few web searches.
This article provides a simple introduction on how to use TensorFlow.js to solve a simple regression problem using Typescript and Angular version 10.
Regression and Classification
Regression and classification are two important types of problems that are often solved with ML techniques.
Regression is a process of ‘fitting.’ A functional relationship between independent and dependent variables is presumed. The function exposes a number of parameters whose selection uniquely
determines a fit. A quality-of-fit metric and functional representation are chosen in advance. In many cases, the desire is to fit some smooth and relatively simple curve to a data set. The function
is used to predict future values in lieu of making ‘guesses’ based on the original data.
Classification involves selecting the ‘best’ output among a number of pre-defined ‘classes.’ This process is often used on images and answers questions such as
• Is this an image of a bird?
• Does this image contain clouds?
• Does this image contain grass?
• Is this image the Angular logo?
ML techniques are also used to solve important problems where a set of inputs is mapped to a set of outputs and the functional relationship between the inputs and outputs is not known. In such cases, any functional relationship is likely to be discrete (or mixed discrete/continuous), nonlinear, and likely not closed-form. Ugh. That's a fancy way of saying that we don't even want to think about a mathematical model for the process :)
A neural network is used to create an approximation for the problem based on some sort of scoring metric, i.e. a measure of one solution being better or worse than another solution.
Two Dimensional Data Fitting By Regression
Let’s start with a simple, but common problem. We are given a collection of (x, y) data points in two dimensions. The total number of points is expected to be less than 100. Some functional
relationship, i.e. y = f(x) is presumed, but an exact relationship is considered either intractable or inefficient for future use. Instead, a simpler function is used as an approximation to the
original data.
The desire is to fit a small-order polynomial to this data so that the polynomial may be used as a predictor for future values, i.e. y-estimated = p(x), where p represents a k-th order polynomial,
p(x) = a0 + a1*x + a2*x² + a3x³ + …
where a0, a1, a2, … are the polynomial coefficients (Medium does not appear to support subscripting).
A k-th order polynomial requires k+1 coefficients in order to be completely defined. For example, a line requires two coefficients. A quadratic curve requires three coefficients, and a cubic curve
requires four coefficients.
The polynomial for this discussion is a cubic, which requires four coefficients for a complete definition. Four equations involving the polynomial coefficients are required to uniquely compute their value. These equations would typically be derived from four unique points through which the polynomial passes.
Instead, we are given more than four data points, possibly as many as 100. For each point, substitute the value of x into the equation
p(x) = a0 + a1*x + a2*x² + a3*x³
For N points, this process yields N equations in 4 unknowns. N is likely to be much greater than 4, so more data is provided than is needed to compute a unique set of coefficients. In fact, there is
no unique solution to this problem. Such problems are often called overdetermined.
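Written out in matrix form, the N equations make a linear system whose coefficient matrix is a Vandermonde matrix (this notation is mine, consistent with the coefficients a0 … a3 above):

```latex
\begin{bmatrix}
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
\vdots & \vdots & \vdots & \vdots \\
1 & x_N & x_N^2 & x_N^3
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
=
\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}
```

With N > 4 rows and only 4 unknowns, the system generally has no exact solution, which is precisely what makes it overdetermined.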
What do we do? Do we throw away data points and only choose four out of the supplied set? We could take all possible combinations of four data points and generate a single cubic polynomial for each
set. Each polynomial would interpolate (pass through) the chosen four points exactly, but would appear different in terms of how well it ‘fit’ the remaining data points.
In terms of the approximating polynomial, are we interested only in interpolation or both interpolation and extrapolation?
Interpolation refers to using the polynomial to make predictions inside the domain of the original data points. For example, suppose the x-coordinates (when sorted in ascending order) all lie in the interval [-5, 10]. Using a polynomial function to interpolate data implies that all future x-coordinate values will be greater than or equal to -5 and less than or equal to 10. Extrapolation implies some future x-coordinate values less than -5 or greater than 10. The polynomial will be used to make predictions for these coordinate values.
In general, performance of a predictor outside the interval of original data values is of high interest, so we are almost always interested in extrapolation. And, if we have multiple means to ‘fit’ a
simple function to a set of data points, how do we compare one fit to another? If comparison of fit is possible, is there such a thing as a best-possible fit?
Classical Least Squares (CLS)
The classical method of least squares defines the sum of squares of the residuals to be the metric by which one fit is judged to be better or worse than another. Now, what in the world does that mean
to a developer?
Residuals is simply a fancy name given to the difference between a predicted and actual data value. For example, consider the set of points
(0, 0), (1, 3), (2, 1), (3,6), (4,2), (5, 8)
and the straight-line predictor y = x + 1 (a first-order or first-degree polynomial).
The x-coordinates cover the interval [0, 5] and the predicted values at each of the original x-coordinates are 1, 2, 3, 4, 5, and 6. Compute residuals as the difference between predicted and actual
y-coordinate. This yields a vector,
[1–0, 2–3, 3–1, 4–6, 5–2, 6–8] or [1, -1, 2, -2, 3, -2]
As is generally the case, some residuals are positive and others are negative. The magnitude of the residual is more important than whether the predictor is higher or lower than the actual value. Absolute value, however, is not mathematically convenient. Instead, the residuals are squared in order to produce a consistent, positive value. In the above example, the vector of squared residuals is [1, 1, 4, 4, 9, 4].
Two common metrics to differentiate the quality of predictors are the sum of squared residuals and the mean-squared residual. The former simply sums the squares of all the residuals. The latter computes the mean value of all squared residuals, or an average error. The terms residual and error are often used interchangeably.
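Using the sample points above and the straight-line predictor y = x + 1, both metrics can be computed directly. This is a plain-Typescript sketch (the variable names are illustrative, not from any library):

```typescript
// Sample data and the straight-line predictor y = x + 1
const xs: number[] = [0, 1, 2, 3, 4, 5];
const ys: number[] = [0, 3, 1, 6, 2, 8];
const predict = (x: number): number => x + 1;

// Residuals: predicted minus actual y-coordinate
const residuals: number[] = xs.map((x, i) => predict(x) - ys[i]);
// → [1, -1, 2, -2, 3, -2]

// Squaring removes the sign
const squared: number[] = residuals.map((r) => r * r);
// → [1, 1, 4, 4, 9, 4]

// Sum of squared residuals (SSE) and mean-squared error (MSE)
const sse: number = squared.reduce((a, b) => a + b, 0); // 23
const mse: number = sse / squared.length;               // 23/6 ≈ 3.833
```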
The Classical Least Squares algorithm formulates a set of polynomial coefficients that minimizes the sum of the squared residuals. This results in an optimization problem that can be solved using
techniques from calculus.
For those interested, this algorithm is heavily documented online, and this page is one of many good summaries. When formulated with normal equations, polynomial least squares can be solved with a
symmetric linear equation solver. For small-degree polynomials, a general dense solver can also be used. Note that the terms order and degree are often used interchangeably. A fifth-degree
polynomial, for example, has no term higher than x⁵.
The normal equations formulation is important as it avoids having to solve a linear system of equations with a coefficient matrix that is a Vandermonde matrix. Empirical evidence shows these matrices to be notoriously ill-conditioned (with the most notable exception being the Discrete Fourier Transform).
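For the first-degree (linear) case, the normal equations collapse to a well-known closed form. The following sketch is illustrative only (it is not the library code linked below), fitting a line to the six sample points from the residuals example:

```typescript
// Closed-form linear least squares (degree-1 CLS) via the normal equations
// for y = slope*x + intercept.
function linearFit(x: number[], y: number[]): { slope: number; intercept: number } {
  const n = x.length;
  const sx = x.reduce((a, b) => a + b, 0);                 // Σx
  const sy = y.reduce((a, b) => a + b, 0);                 // Σy
  const sxy = x.reduce((a, xi, i) => a + xi * y[i], 0);    // Σxy
  const sxx = x.reduce((a, xi) => a + xi * xi, 0);         // Σx²

  const slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const intercept = (sy - slope * sx) / n;
  return { slope, intercept };
}

// Fit the six sample points used in the residuals example
const fit = linearFit([0, 1, 2, 3, 4, 5], [0, 3, 1, 6, 2, 8]);
// fit.slope = 1.2, fit.intercept ≈ 0.333
```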
In general, it is a good idea to keep the order of the polynomial small because higher-degree polynomials have more inflection points and tend to fluctuate quite a bit up and down. Personally, I have
never used this technique in practice on more than a couple-hundred data points and no more than a fifth-degree polynomial.
Now, you may be wanting to experiment with CLS, but find the math pretty intimidating. Never fear, because we have a tried and true method for handling that pesky math. Here it goes …
Blah, blah … matrix … blah, blah … least squares … blah, blah … API.
There! It's all done for you. Just click on this link and grab all the Typescript code you desire. Typescript libraries are provided for linear and polynomial least squares, with multiple variants for linear least squares. This code base is suitable for fitting dozens or even hundreds of data points with small-degree polynomials. Again, I personally recommend never using more than a fifth-degree polynomial.
Classical least squares is a good technique in that it provides a proven optimal solution for the sum of the squared residuals metric. There is no other solution that produces a smaller sum of
residuals, inside the interval of the fitted data set. So, CLS is useful for interpolation, i.e. we expect to make predictions for future x-coordinates inside the interval of the original data set.
It may or may not be useful for extrapolation.
This long introduction now leads up to the problem at hand, namely, can we use ML techniques for the cubic polynomial fit problem, and how does it compare to CLS? This leads us into TensorFlow and
neural networks.
What Are Tensors?
Tensors are simply multi-dimensional arrays of a specified data type. In fact, if you read only one section of the massive TensorFlow documentation, then make sure it’s this one. Many of the
computations in neural networks occur across dimensions of a multi-dimensional array structure, and such operations can be readily transformed to execute on a GPU. This makes the tensor structure a
powerful one for ML computations.
Neural Networks 101
In a VERY simplistic sense, neural networks expose an input layer where one input is mapped to one ‘neuron.’ One or more hidden layers are defined, with one output from a single neuron to all other
neurons in the subsequent layer. Each of these outputs is assigned a weight through a learning or training process. The final hidden layer is connected to an output layer, which is responsible for
exposing a solution (fit, extrapolation, control action, etc) given a specific input set.
The network must be trained on a sample set of inputs, and it is generally validated on another data set that is separate from the training set. The training process involves setting weights along
the paths that connect one neuron to another. Weights are adjusted based on a loss function or metric that provides a criteria to measure one candidate solution vs. another solution.
The training process also involves selection of an optimization method and a learning rate. The learning rate is important since the learning process is iterative. Imagine being at the top of a rocky
mountain range with a desire to traverse to the bottom as quickly as possible. There is no direct line of sight to an optimal path to the bottom. At best, we can examine the local terrain and move a
certain distance in what appears to be the best direction. After arriving at a new point, the process is repeated. There is, however, no guarantee that the selected sequence of moves will actually
make it to the ground. Backtracking may be necessary since the terrain is very complex.
I experienced this in real life during a recent visit to Enchanted Rock near Fredericksburg, TX. After ascending to the top, I ignored the typical path back down and elected for a free descent down
the SE side. Three backtracks and a number of ‘dead ends’ (local optima in math parlance) were encountered before I finally made it to ground level.
The optimizer attempts to move in the ‘best’ direction for a single step according to some pre-defined mathematical criteria. Gradient-based optimizers are common. The gradient of a multi-variable
function is a vector whose direction defines the slope of the function at a particular point (value of all independent variables). The negative gradient provides a direction in which the function
decreases. A gradient descent method steps along a direction in which the loss function decreases with the hope of eventually reaching a minimum.
The learning rate defines the ‘length’ of each step in the descent (technically, it is a multiplier onto the error gradient during backpropagation). Larger learning rates allow quick moves in a
particular direction at the risk of ‘jumping’ over areas that should have been examined more closely. It’s like hiking on a path that is not very well defined and missing an important turn by moving
too fast.
Low learning rates can be nimble and move quickly in any valuable direction, but they have higher execution time and can become ‘bogged down’ in local minima.
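The effect of the learning rate can be seen on even a one-variable toy problem. The following sketch (illustrative only, not TensorFlow code) minimizes the loss f(w) = (w − 3)² by repeatedly stepping along the negative gradient:

```typescript
// Minimize f(w) = (w - 3)^2 by gradient descent; the gradient is f'(w) = 2(w - 3).
function gradientDescent(start: number, learningRate: number, steps: number): number {
  let w = start;
  for (let i = 0; i < steps; ++i) {
    const grad = 2 * (w - 3);   // gradient of the loss at the current w
    w -= learningRate * grad;   // step in the direction of decrease
  }
  return w;
}

const w = gradientDescent(0, 0.1, 100); // converges toward the minimum at w = 3
```

With a learning rate of 0.1 each step shrinks the error by a constant factor; a rate above 1.0 would overshoot the minimum on every step and diverge, which is the 'jumping over areas' behavior described above.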
So, the learning process is rather involved as it requires selecting good data for training, a good loss function, a proper optimizer, and a balanced learning rate. The process is almost equal parts art and science (and a good deal of experience really helps).
These observations are one of the reasons I personally like using a UI framework such as Angular when working with ML models. The ability to present an interactive UI to someone involved with fine-tuning an ML model is highly valuable, given the number of considerations required to obtain good results from that model.
TensorFlow Approach to Polynomial Regression
Polynomial regression using TensorFlow (TF) has been covered in other online tutorials, but most of these seem to copy-and-paste from one another. There is often little explanation given as to why a
particular method or step was chosen, so I wanted to provide my own take on this process before discussing the specifics of an Angular implementation.
I recently created an interactive demo for a client who had spent too much time reading about CLS on the internet. The goal of the demo was to illustrate that CLS methods are quite myopic and better suited to interpolation than to extrapolation.
Here is a visualization of a test dataset I created for a client many years ago. This is a subset of the complete dataset that resulted from a proprietary algorithm applied to a number of input
equipment measurements. A linear CLS fit is also shown.
Sample Data set and linear least squares fit
Now, you may be wondering how the plot was created. I have multiple Angular directives in my client-only dev toolkit for plotting. This one is called QuickPlot. It’s designed to perform exactly as
its name implies, generate quick graphs of multiple functions and/or data sets across a common domain and range. No grids, axes, labels or frills … just a quick plot and that’s it :)
While I can not open-source the entire client demo, I’m pleased to announce that I’m open-sourcing the QuickPlot directive.
A quick visualization of the data seems to support using a low-degree polynomial for a fit. A cubic was chosen for this article, although the completed project supported making the degree of fit
user-selectable (with a maximum of a fifth-degree polynomial).
The ultimate goal is for TensorFlow to compute the coefficients, c0, c1, c2, and c3 such that the polynomial c0 + c1*x + c2*x² + c3*x³ is a ‘best’ fit to the above data.
What criteria do we use to determine that one fit is better than another? The sum of squared residuals has already been discussed, but this is ideal for interpolation inside the domain of the
supplied data. Sometimes, it is better to have a more ‘relaxed’ criteria when extrapolation is involved. For this reason, we begin the learning process using average squared residual. This is often
called mean-square error or MSE. This metric allows for some larger deviations as long as they are countered by a suitable number of smaller deviations, i.e. the error is smaller ‘on average.’
The use of MSE also allows us to compare two different final fits using the SSE (sum of squared errors or residuals) metric.
The TF optimizer selected for this process is called Stochastic Gradient Descent (SGD). We briefly discussed classical gradient descent (GD) above. SGD is an approximation to GD that estimates
gradients using a subset of the supplied data that is pseudo-randomly selected. It has the general qualities of faster execution time and less likelihood to ‘bog down’ in areas of local minima. This
is especially true for very large (tens of thousands or higher) data sets.
SGD is not the only optimizer that could be applied to this problem, but it’s generally a good first start for regression problems. The other nice feature of this approach is that we do not have to
give any consideration to network structure or architecture; just select an optimizer, loss function, and then let TensorFlow do its work!
Fortunately, we have quite a bit of experimental evidence for selecting learning rates. A relatively small rate of 0.1 was chosen for this example. One of the benefits of an interactive learning
module is the ability to quickly re-optimize with new inputs. We have the option to use SSE as a final comparative metric between an ‘optimized’ and ‘re-optimized’ solution.
Data Selection and Pre-Processing
One final consideration is preparation of the data set to be presented to TF. It is often a good idea to normalize data because of the manner in which weights are assigned to neuron connections
inside TF. With x-coordinates in the original domain, small changes to the coefficient of the x³ term can lead to artificially large reductions in loss function. As a result, that term can dominate
in the final result. This can lead the optimizer in the wrong path down the mountain, so to speak, and end up in a depression that is still far up the mountain face :)
The data is first normalized so that both the x- and
y-coordinates are in the interval [-1, 1]. The interval [0, 1] would also work, but since some of the data involves negative x-coordinates, [-1, 1] is a better starting interval. The advantage of
this approach is that |x| is never greater than 1.0, so squaring or cubing that value never increases the magnitude beyond 1.0. This keeps the playing field more level during the learning process.
Normalization, however, now produces two scales for the data. The original data is used in plotting results and comparing with CLS. This particular data set has a minimum x-coordinate of -6.5 and a
maximum x-coordinate of 9.7. The y-coordinates vary over the interval [-0.25, 4.25]. Normalized data is provided to TF for the learning process and both the x- and y-coordinates are in the interval
[-1, 1].
We can’t use the normalized scale for plotting or evaluating the polynomial for future values of x since those values will be over the domain of all real numbers, not restricted to [-1, 1].
Don’t worry — resolution of this issue will be discussed later in the article.
Now that we have a plan for implementing the learning strategy inside TF, it’s time to discuss the specifics of the Angular implementation.
TensorFlowJS and Angular Version 10
TensorFlow JS can be exercised by means of a Layer API or its Core API. Either API serves the same purpose: to create models or functions with adjustable (learnable) parameters that map inputs to outputs. The exact functional or mathematical representation of a model may or may not be known in advance.
The Layer API is very powerful and appeals to those with less programming experience. The Core API is often embraced by developers and can be used with only a modest understanding of machine-learning concepts.
The Core API is referenced throughout this article.
Here are the two dependencies (other than Angular) that need to be installed to duplicate the results discussed in this article (presuming you choose to use the QuickPlot directive for rapid plotting):
"@tensorflow/tfjs": "^2.4.0"
"pixi.js": "4.8.2"
Following are my primary imports in the main app component. I should point out that I created my dev toolkit (from which this example was taken) with Nx. The mono-repo contains a Typescript library (tf-lib) designed to support TensorFlow applications in Angular.
import {
} from '@angular/core';
import {
} from '@algorithmist/lib-ts-core';
import * as tf from '@tensorflow/tfjs';
import * as fits from '../shared/misc';
import {
} from '../shared/quick-plot/quick-plot.directive';
import {
} from '@algorithmist/tf-lib';
You can obtain the code for all the CLS libraries in my lib-ts-core library from the repo supplied above.
The line, import * as fits from ‘../shared/misc’ simply imports some type guards used to determine type of CLS fit,
import {
} from '@algorithmist/lib-ts-core';
export function isLLSQ(fit: object): fit is ILLSQResult
{
  return fit.hasOwnProperty('chi2');
}

export function isBLLSQ(fit: object): fit is IBagggedLinearFit
{
  return fit.hasOwnProperty('fits');
}

export function isPLLSQ(fit: object): fit is IPolyLLSQResult
{
  return fit.hasOwnProperty('coef');
}
Now, let’s examine each of the library functions imported from @algorithmist/tf-lib, as this serves to introduce low-level programming with TensorFlow JS.
mseLoss: This is a loss function based on the MSE or Mean-Squared Error metric discussed above.
import * as tf from '@tensorflow/tfjs';
export function mseLoss(pred: tf.Tensor1D, label: tf.Tensor1D): tf.Scalar
{
  return pred.sub(label).square().mean();
}
The first item to note is that most TF methods take tensors as an argument and the operation is performed across the entire tensor.
The mseLoss function accepts both a one-dimensional tensor of predictions and a one-dimensional tensor of labels as arguments. The term labels comes from classification or categorical learning, and
is a fancy term for what the predictions are compared against.
Let’s back up for a second and review.
• The learnable inputs to our ‘model’ are four coefficients of a cubic polynomial.
• We are given a set of data points, i.e. (x, y) values, that we wish to fit with a cubic polynomial (which is the function or model for our example).
• The predictions are an array of y-coordinates created from evaluating the cubic polynomial at each of the x-coordinates of the supplied training data.
• The labels are the corresponding y-values of the original training data.
The mseLoss function subtracts the label from the prediction and then squares the difference to create a positive number. This is the squared error or residual for each data point. The TF mean()
method produces the average of the squared errors, which is the definition of the MSE metric. Each of these TF methods operates on a single one-dimensional tensor at a time and each method can be
chained. The final result is a scalar.
mseLoss is used to compare one set of predictions vs. another. That comparison is used to assign weights in a network that eventually predicts the value of the four cubic polynomial coefficients.
sumsqLoss: This is another loss or comparative function. Instead of mean-squared error, it computes the sum of the squared error values. This is the function that is minimized in CLS.
import * as tf from '@tensorflow/tfjs';
export function sumsqLoss(pred: tf.Tensor1D, label: tf.Tensor1D): tf.Scalar
{
  return pred.sub(label).square().sum();
}
This function also takes predictions and labels (1D tensors) as arguments and produces a scalar result.
cubicPredict: This is a predictor function, i.e. it takes a 1D tensor of x-coordinates and a current estimate of four cubic polynomial coefficients, and then evaluates the cubic polynomial for each x-coordinate. The resulting 1D tensor is a 'vector' of predictions for the cubic polynomial.
Before providing the code, it is helpful to discuss the most efficient way to evaluate a polynomial. Most online tutorials evaluate polynomials with redundant multiplications. In pseudo-code, you
might see something like
y = c3 * x * x *x;
y += c2 * x * x;
y += c1 * x;
y += c0
to evaluate the cubic polynomial c0 + c1*x + c2*x² + c3*x³.
A better way to evaluate any polynomial is to use nested multiplication. For the cubic example above,
y = ((c3*x + c2)*x + c1)*x + c0;
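In plain Typescript (no tensors yet), nested multiplication of a cubic looks like this; the function name is illustrative:

```typescript
// Evaluate c0 + c1*x + c2*x^2 + c3*x^3 by nested (Horner) multiplication:
// three multiplies and three adds instead of the six multiplies of the naive form.
function evalCubic(x: number, c0: number, c1: number, c2: number, c3: number): number {
  return ((c3 * x + c2) * x + c1) * x + c0;
}

const y = evalCubic(2, 1, 2, 3, 4); // 1 + 2*2 + 3*4 + 4*8 = 49
```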
The cubicPredict code implements nested multiplication with the TF Core API. The operations could be written in one line, but that’s rather confusing, so I broke the code into multiple lines to
better illustrate the algorithm. You will also see a Typescript implementation later in this article.
import * as tf from '@tensorflow/tfjs';

export function cubicPredict(x: tf.Tensor1D, c0: tf.Variable, c1: tf.Variable, c2: tf.Variable, c3: tf.Variable): tf.Tensor1D
{
  // for each x-coordinate, predict a y-coordinate using nested multiplication
  const result: tf.Tensor1D = x.mul(c3).add(c2).mul(x).add(c1).mul(x).add(c0);

  return result;
}
Notice that the polynomial coefficients are not of type number as you might expect. Instead, they are TF Variables. This is how TF knows what to optimize; I will expand on Variables later in the article.
normalize: This function takes an array of numerical arguments, computes the range from minimum to maximum value, and then normalizes them to the specified range. This is how arrays of x- and
y-coordinates, for example, are normalized to the interval [-1, 1].
export function normalize(input: Array<number>, from: number, to: number): Array<number>
{
  const n: number = input.length;
  if (n === 0) return [];

  let min: number = input[0];
  let max: number = input[0];
  let i: number;

  for (i = 0; i < n; ++i)
  {
    min = Math.min(min, input[i]);
    max = Math.max(max, input[i]);
  }

  const range: number = Math.abs(max - min);
  const output: Array<number> = new Array<number>();

  // degenerate case: all inputs are (numerically) equal
  if (range < 0.0000000001)
  {
    input.forEach( (): void => {output.push(from)} );
    return output;
  }

  let t: number;
  input.forEach( (x: number): void => {
    t = (x - min) / range;
    output.push((1-t)*from + t*to);
  });

  return output;
}
The inverse process, i.e. transform data from say, [-1, 1], back to its original domain is denormalize.
export function denormalize(output: Array<number>, from: number, to: number, min: number, max: number): Array<number>
{
  const n: number = output.length;
  if (n === 0) return [];

  const range: number = Math.abs(to - from);
  const result: Array<number> = new Array<number>();

  // degenerate case: zero-width normalized interval
  if (range < 0.0000000001)
  {
    output.forEach( (): void => {result.push(min)} );
    return result;
  }

  let t: number;
  output.forEach( (x: number): void => {
    t = (x - from) / range;
    result.push((1-t)*min + t*max);
  });

  return result;
}
Sometimes, we want to normalize or denormalize a single value instead of an entire array.
export function normalizeValue(input: number, from: number, to: number, min: number, max: number): number
{
  const range: number = Math.abs(max - min);
  if (range < 0.0000000001) return from;

  const t: number = (input - min) / range;
  return (1-t)*from + t*to;
}

export function denormalizeValue(output: number, from: number, to: number, min: number, max: number): number
{
  const range: number = Math.abs(to - from);
  if (range < 0.0000000001) return min;

  const t: number = (output - from) / range;
  return (1-t)*min + t*max;
}
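As a quick sanity check, normalizing a value from the article's x-domain of [-6.5, 9.7] and then denormalizing it should round-trip. This standalone sketch mirrors the two single-value functions:

```typescript
// Map a value from [min, max] onto [from, to]
function normalizeValue(input: number, from: number, to: number, min: number, max: number): number {
  const range = Math.abs(max - min);
  if (range < 0.0000000001) return from;
  const t = (input - min) / range;
  return (1 - t) * from + t * to;
}

// Map a value from [from, to] back onto [min, max]
function denormalizeValue(output: number, from: number, to: number, min: number, max: number): number {
  const range = Math.abs(to - from);
  if (range < 0.0000000001) return min;
  const t = (output - from) / range;
  return (1 - t) * min + t * max;
}

// x-domain of the sample data is [-6.5, 9.7]; normalize to [-1, 1] and back
const normalized = normalizeValue(0, -1, 1, -6.5, 9.7);
const roundTrip = denormalizeValue(normalized, -1, 1, -6.5, 9.7); // ≈ 0
```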
These are just some of the functions in my TF-specific Typescript library. They will all be referenced during the course of the remaining deconstruction.
Writing the Polynomial Regression Application
This client demo was created entirely in the main app component. Layout was extremely simplistic and consisted of a plot area, some information regarding quality of fit, polynomial coefficients, and
a select box to compare against various CLS fits of the same data.
Note that a later version of the application also provided an area in the UI to adjust the degree of the TF-fit polynomial (not shown here).
<div style="width: 600px; height: 500px;" quickPlot></div>

<div class="controls">
  <span class="smallTxt">RMS Error: {{error$ | async | number: '1.2-5'}}</span>
</div>

<div class="controls">
  <span class="smallTxt padRight">Poly Coefs: </span>
  <span class="smallTxt fitText padRight" *ngFor="let coef of coef$ | async">{{coef | number: '1.2-5'}}</span>
</div>

<div class="controls">
  <span class="smallTxt padRight deepText">{{dlStatus$ | async}}</span>
</div>

<div class="controls">
  <span class="smallTxt padRight">Select Fit Type</span>
  <select (change)="fit($event)">
    <option *ngFor="let item of fitName" [value]="item.name">{{item.name}}</option>
  </select>
</div>
Graph bounds are computed by scanning the training data x- and y-coordinates to determine min/max values and then adding a prescribed buffer (in user coordinates). They are computed in the ngOnInit() handler.
this._left   = this._trainX[0];
this._right  = this._trainX[0];
this._top    = this._trainY[0];
this._bottom = this._trainY[0];

const n: number = this._trainX.length;
let i: number;

for (i = 1; i < n; ++i)
{
  this._left   = Math.min(this._left, this._trainX[i]);
  this._right  = Math.max(this._right, this._trainX[i]);
  this._top    = Math.max(this._top, this._trainY[i]);
  this._bottom = Math.min(this._bottom, this._trainY[i]);
}

this._left   -= AppComponent.GRAPH_BUFFER;
this._right  += AppComponent.GRAPH_BUFFER;
this._top    += AppComponent.GRAPH_BUFFER;
this._bottom -= AppComponent.GRAPH_BUFFER;

this.graphBounds = {
  left: this._left,
  top: this._top,
  right: this._right,
  bottom: this._bottom
};
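For modest data sizes, the same bounds can be computed without an explicit loop using spread syntax. The arrays and buffer below are hypothetical stand-ins for the component's _trainX, _trainY, and GRAPH_BUFFER members.

```typescript
const GRAPH_BUFFER = 1;          // hypothetical buffer value
const trainX = [-6, 1, 3, 9];    // hypothetical training x-coordinates
const trainY = [0, 2, 4];        // hypothetical training y-coordinates

const graphBounds = {
  left:   Math.min(...trainX) - GRAPH_BUFFER,
  right:  Math.max(...trainX) + GRAPH_BUFFER,
  top:    Math.max(...trainY) + GRAPH_BUFFER,
  bottom: Math.min(...trainY) - GRAPH_BUFFER
};
```

Note that spreading very large arrays into Math.min/Math.max can exceed the engine's argument-count limit, so the loop form remains the safer general-purpose choice.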
The cubic polynomial coefficients are defined as TF Variables. Variables inform TF of the learnable parameters used to optimize the model.
protected _c0: tf.Variable;
protected _c1: tf.Variable;
protected _c2: tf.Variable;
protected _c3: tf.Variable;
Many online demos (which are often copied and pasted from one another) show Variable initialization using a pseudo-random process. The idea is that nothing is known about proper initial values for
variables. Since the data is normalized to a small range, initial coefficients in the range [0,1) are ‘good enough.’ So, you will see initialization such as this in many online references,
this._c0 = tf.scalar(Math.random()).variable();
this._c1 = tf.scalar(Math.random()).variable();
this._c2 = tf.scalar(Math.random()).variable();
this._c3 = tf.scalar(Math.random()).variable();
where a native numeric variable is converted into a TF Variable.
In reality, a decision-maker often has some intuition regarding a good initial state for a model. An interactive learning application should provide a means for the decision-maker to express this
knowledge. A brief glance at the original data leads one to expect that it likely has a strong linear component and at least one inflection point. So, the cubic component is likely to also be
prevalent in the final result.
Just to buck the copy-paste trend, I initialized the coefficients using this intuition.
this._c0 = tf.scalar(0.1).variable();
this._c1 = tf.scalar(0.3).variable();
this._c2 = tf.scalar(0.1).variable();
this._c3 = tf.scalar(0.8).variable();
Initialization to fixed values should lead to a fixed solution, while pseudo-random initialization may lead to some variance in the final optimization.
Learning rate and TF optimizer are defined as follows:
protected _learningRate: number;
protected _optimizer: tf.SGDOptimizer;
The learning rate is initialized to 0.1, which has historically been shown to be a reasonable starting point for regression-style applications.
Recall that TF is trained on normalized data that we wish to differentiate from the original data. TF also operates on tensors, not Typescript data structures. So, TF training data is also defined.
protected _tensorTrainX: tf.Tensor1D;
protected _tensorTrainY: tf.Tensor1D;
TF has no knowledge of or respect for the Angular component lifecycle, so expect interactions with this library to be highly asynchronous and out-of-step with Angular’s lifecycle methods. Plotting
occurs in a Canvas, so it can remain happily divorced from Angular’s lifecycle. Everything else in the UI is updated via async pipes. Here is the construction of the application status variable,
error information, and the polynomial coefficient display. Each of these is reflected in the template above.
this._statusSubject = new BehaviorSubject<string>('Training in progress ...');
this.dlStatus$ = this._statusSubject.asObservable();
this._errorSubject = new BehaviorSubject<number>(0);
this.error$ = this._errorSubject.asObservable();
this._coefSubject = new BehaviorSubject<Array<number>>([0, 0, 0, 0]);
this.coef$ = this._coefSubject.asObservable();
The remainder of the on-init handler performs the following actions:
1 — Copy the training x- and y-coordinates into separate arrays and then overwrite them with normalized data in the interval [-1, 1].
2 — Initialize the TF optimizer.
this._optimizer = tf.train.sgd(this._learningRate);
3 — Convert the normalized x- and y-coordinates to tensors,
this._tensorTrainX = tf.tensor1d(this._trainX);
this._tensorTrainY = tf.tensor1d(this._trainY);
4 — Assign graph layers to the QuickPlot directive. There is one layer for the original data (in its natural domain), one for the TF fit, and one for the CLS fit.
@ViewChild(QuickPlotDirective, {static: true})
protected _plot: QuickPlotDirective;
The remainder of the work is performed in the ngAfterViewInit() lifecycle handler. First, the original data is plotted and then TF is asked to optimize the current model.
this._optimizer.minimize(() => mseLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY));
Note that mseLoss is the defined loss-function or the metric by which one solution is deemed better or worse than another solution. The current predictions for each x-coordinate depend on the current
estimate of each of the polynomial coefficients. The cubic polynomial is evaluated (on a per-tensor basis) using the cubicPredict function. The labels or values TF compares the predictions to are the
original y-coordinates (normalized to [-1, 1]).
In pseudo-code, we might express the above line of code as the following steps:
1 — vector_of_predictions = evaluate cubic poly(c0, c1, c2, c3, vector_of_x_coordinates)
2 — Compute MSE of vector_of_predictions vs. normalized_y_coords
3 — Optimize model based on MSE comparison criterion.
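Those three steps can be mirrored on plain arrays. The real cubicPredict and mseLoss operate on tensors; the hypothetical stand-ins below reproduce only the arithmetic, to make the loss computation concrete.

```typescript
// Step 1: evaluate c0 + c1*x + c2*x^2 + c3*x^3 at each x (nested/Horner form).
function cubicPredictArray(xs: number[], c0: number, c1: number, c2: number, c3: number): number[] {
  return xs.map(x => ((c3 * x + c2) * x + c1) * x + c0);
}

// Step 2: mean-squared error between predictions and labels.
function mseLossArray(pred: number[], labels: number[]): number {
  let sum = 0;
  for (let i = 0; i < pred.length; ++i) {
    const diff = pred[i] - labels[i];
    sum += diff * diff;
  }
  return sum / pred.length;
}

const preds = cubicPredictArray([0, 1], 1, 1, 1, 1);  // [1, 4]
const mse = mseLossArray(preds, [1, 2]);              // (0 + 4) / 2 = 2
```

Step 3, the optimizer update, is what tf.train.sgd performs internally by differentiating this loss with respect to the Variables.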
Once the optimization completes, the sumsqLoss function is used to compute the sum of the squares of the residuals as another measure of fit quality.
let sumSq: tf.TypedArray = sumsqLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY).dataSync();
The TF dataSync() method synchronously downloads the requested value(s) from the specified tensor. The UI thread is blocked until completion.
The SSE value could be reflected in the UI or simply logged to the console,
console.log('initial sumSq:', sumSq[0]);
It’s also possible to re-optimize, i.e. run the optimization again using the current Variables as starting points for a new optimization. We can see if any improvement is made in the total sum of
squares of the residuals.
this._optimizer.minimize(() => mseLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY));
sumSq = sumsqLoss(cubicPredict(this._tensorTrainX, this._c0, this._c1, this._c2, this._c3), this._tensorTrainY).dataSync();
console.log('sumSq reopt:', sumSq[0]);
This yields the result shown below.
So, how does this result compare against traditional cubic least-squares? Here is the result.
This is really interesting — CLS (shown in blue) and TF (shown in red) seem to have different interpretations of the data (which is one reason I like to use this dataset for client demonstrations).
Recall that CLS is very myopic and optimized for interpolation. There is, in fact, no better interpolator across the original domain of the data. The real question is how the fit performs outside that domain.
As it happens, the generated data tends downward as x decreases and upward as x increases outside the original domain. So, in some respects, TF ‘got it right,’ as the TF fit performs much better on
out-of-sample data.
Dealing With Multiple Domains
The QuickPlot Angular directive plots functions across the same bounds (i.e. extent of x-coordinate and y-coordinate). The original data and CLS fits are plotted across the same bounds, i.e. x in the
interval [-6.5, 9.7] and y in the interval [-0.25, 4.25]. The cubic polynomial, computed by TF, has both x and y restricted to the interval [-1, 1]. The shape of the polynomial is correct, but its data extents do not match the original data. So, how is it displayed in QuickPlot?
There are two resolutions to this problem. One is simple, but not computationally efficient. The other approach is computationally optimal, but requires some math. Code is provided for the first
approach and the second is deconstructed for those wishing to delve deeper into the math behind this project.
The QuickPlot directive allows an arbitrary function to be plotted across its graph bounds. It samples x-coordinates from the leftmost extent of the graph to the rightmost extent, and evaluates the
supplied function at each x-coordinate.
For each x-coordinate in the original data range, perform the following steps:
1 — Normalize the x-coordinate to the range [-1, 1].
2 — Evaluate the cubic polynomial using nested multiplication.
3 — Denormalize the result back into the original y-coordinate range.
This approach is illustrated in the following code segment.
const f: GraphFunction = (x: number): number => {
  const tempX: number = normalizeValue(x, -1, 1, this._left, this._right);
  const value: number = (((c3*tempX) + c2)*tempX + c1)*tempX + c0;

  return denormalizeValue(value, -1, 1, this._bottom, this._top);
};

this._plot.graphFunction(PLOT_LAYERS.TENSOR_FLOW, 2, '0xff0000', f);
This approach is inefficient in that a normalize/denormalize step is required to move coordinates back and forth to the proper intervals. It is, however, easier to understand and implement.
Another approach is to compute cubic polynomial coefficients that are ‘correct’ in the original data domain. In other words, TF computes coefficients for one polynomial, P, such that P(x) accepts
values of x in [-1, 1] and produces y-values in [-1, 1].
Define another cubic polynomial, Q, with coefficients a0, a1, a2, and a3 that accepts x-coordinates in the original data’s domain (all real numbers) and produces y-coordinates in the original data’s
range (all real numbers).
The coefficients of P(x) are c0, c1, c2, and c3. This information is used to compute a0, a1, a2, and a3. There are four unknowns, which require four equations to uniquely specify these values.

Take any four unique x-coordinates from the domain of P, say -1, 0, 1/2, and 1. If the function that maps a normalized value back to the original range (the denormalization step, applied with the appropriate x- or y-bounds) is called N, then compute

x1 = N(-1)
x2 = N(0)
x3 = N(1/2)
x4 = N(1)

Now, evaluate

y1 = N(P(-1))
y2 = N(P(0))
y3 = N(P(1/2))
y4 = N(P(1))

where P(x) = ((c3*x + c2)*x + c1)*x + c0 in nested form. For example, P(0) = c0 and P(1) = c0 + c1 + c2 + c3.
This process produces four equations
a0 + a1*x1 + a2*x1² + a3*x1³ = y1
a0 + a1*x2 + a2*x2² + a3*x2³ = y2
a0 + a1*x3 + a2*x3² + a3*x3³ = y3
a0 + a1*x4 + a2*x4² + a3*x4³ = y4

Since x1, x2, x3, and x4 (as well as y1, y2, y3, and y4) are actual numerical values, the system of equations is linear in the unknowns a0, a1, a2, and a3. This system can be solved using the dense linear equation solver in the repo provided earlier in this article.
This approach requires some math and for some that can be pretty intimidating. However, once the new coefficients for Q are computed, the TF cubic polynomial fit can be efficiently computed for any
new x-coordinate without consideration of normalization or denormalization.
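A sketch of that computation, substituting a naive Gaussian-elimination solver for the repo's dense solver; coefficientsOfQ, denormX, and denormY are hypothetical names. The sample points -1, 0, 1/2, and 1 are mapped into the original domain and range, and the resulting 4x4 linear system is solved for a0 through a3.

```typescript
// Solve a 4x4 linear system A*x = b by Gaussian elimination with partial pivoting.
function solve4x4(A: number[][], b: number[]): number[] {
  const n = 4;
  const M = A.map((row, i) => [...row, b[i]]);  // augmented matrix
  for (let col = 0; col < n; ++col) {
    // pick the largest pivot in this column for numerical stability
    let pivot = col;
    for (let r = col + 1; r < n; ++r) {
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    }
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = col + 1; r < n; ++r) {
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; ++c) M[r][c] -= f * M[col][c];
    }
  }
  const x = new Array(n).fill(0);
  for (let r = n - 1; r >= 0; --r) {
    let sum = M[r][n];
    for (let c = r + 1; c < n; ++c) sum -= M[r][c] * x[c];
    x[r] = sum / M[r][r];
  }
  return x;
}

// Compute Q's coefficients from P's, given denormalization maps for x and y.
function coefficientsOfQ(
  c: [number, number, number, number],
  denormX: (x: number) => number,
  denormY: (y: number) => number
): number[] {
  const P = (x: number): number => ((c[3] * x + c[2]) * x + c[1]) * x + c[0];
  const samples = [-1, 0, 0.5, 1];
  const A = samples.map(s => {
    const x = denormX(s);
    return [1, x, x * x, x * x * x];
  });
  const b = samples.map(s => denormY(P(s)));
  return solve4x4(A, b);
}
```

With identity maps for denormX and denormY, Q simply reproduces P's coefficients, which makes a handy unit test.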
Tidy Up Your Work
TF produces interim tensors during the course of computations that persist unless removed, so it is often a good idea to wrap primary TF computations in a call to tidy(), i.e.
const result = tf.tidy(() => {
  // Your TF code here ...
});
To check the number of tensors currently in use, use a log such as
console.log('# Tensors: ', tf.memory().numTensors);
Tensors returned by the wrapped function pass through tidy() and are not disposed.
Variables are not cleaned up with tidy; use the tf.dispose() method instead.
Yes, that was a long discussion. Pat yourself on the back if you made it this far in one read :)
TensorFlow is a powerful tool and the combination of TF and Angular enables the creation of even more powerful interactive machine-learning applications. If you are not already familiar with async
pipe in Angular, then master it now; it will be your most valuable display tool moving forward with TF/Angular.
I hope you found this introduction helpful and wish you the best with all future Angular efforts!
ng-conf: The Musical is coming
ng-conf: The Musical is a two-day conference from the ng-conf folks coming on April 22nd & 23rd, 2021. Check it out at ng-conf.org
THE AVERAGE MONSTERS - WealthOZ
The “now-focusing effect” inevitably leads to a preference for spending over saving.
But can we even process economic numbers correctly?
Sri Humana: So, what shall we discuss today?
Ann: I heard an interesting news story; it said our per-capita income is actually growing.
Garret: Boring. I watched a fascinating horror movie instead.
Sarah: Emily has nightmares, and I can’t sleep all night.
Thomas: To tell you the truth, I do not like horror movies. But judging from the ones I have seen, the whole script usually revolves around some kind of a monster. These horrible creatures rarely
have much in common; they’re rather unique. But how about this: suppose someone combined the Predator, the Alien, Godzilla, Dracula and Jason into one “average monster.” What would it look like? I
think it would be funny and bizarre.
Now, in your everyday life, you are bombarded with all kinds of average statistics. Journalists use them, politicians use them and your boss uses them.
Average salary, average IQ, average results and of course the average person. The average person on Earth is 1.8 meters tall, has a 110 IQ, earns $5000 per year, has one and a half children, is
half Chinese and half Indian, and half man half woman. What a monster!!!
The most dangerous average monsters are the ones in areas where results or outcomes are not equally distributed. Areas like wealth, internet traffic, city sizes, areas destroyed by fire and all sorts
of other social phenomena.
Don’t get me wrong, averages do have some meaning and use, but mostly when they’re mentioned in connection with things that are distributed normally. Things like people’s height, health, and school
grades. Or the number of sunny days in a year, etc.
For example, consider the average wealth of a random, relatively small group of people. Now add Bill Gates to that same group, and calculate the average again. What happens to the average figure?
Now consider the average IQ of a similar random group. Then add Einstein to the group, and calculate again. How would the average IQ change?
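The thought experiment above can be run in a few lines. The wealth figures below are made up purely for illustration:

```typescript
const mean = (xs: number[]): number => xs.reduce((s, x) => s + x, 0) / xs.length;

// A small, hypothetical group of net worths (in dollars).
const wealths = [40000, 55000, 70000, 90000, 120000];

const before = mean(wealths);                       // 75000
const after = mean([...wealths, 100000000000]);     // roughly 16.7 billion
```

One extreme value drags the "average wealth" from 75 thousand to nearly 17 billion, while adding Einstein to a group moves its average IQ only modestly, because IQ is confined to a narrow range.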
There is a useful tool for measuring variability, but remember it is useful only if your data is distributed normally – for example, the way body part sizes appear among a group of people, with all
values falling within a fairly well-defined range. This tool is the metric known as standard deviation. It simply measures how far a particular measurement is found to deviate from the average (mean)
measurement, in a particular group. A low standard deviation means that the data points tend to be very close to the mean. A high standard deviation means that the data points are spread over a large
range of values, some far from the mean. (If you calculate the standard deviation in a group to be very high, it may mean your average figure is of little or no use.)
Now, don’t worry – we’ll get to how you can use this tool in managing your wealth very soon. First, we’ll look at how to calculate it – then we can put it to work.
Stop believing in “average monsters” and your nightmares might go away.
For the sake of example, let’s imagine you have a small flock of seven chickens, and you sell their eggs (yes, “nest eggs”). You want to find out the average number of eggs each chicken lays each
year, and how much variation there is within the group (the standard deviation). You have kept a record of each chicken’s production for the past year, and you are ready to do the math.
Here are the eight steps for calculating standard deviation:
1. Organize your data – gather it all together so you can work with it. In this case, get out your record of each chicken’s production.
2. Add your data. Find the total number of eggs your whole flock of chickens produced. We’ll say it was 980 eggs.
3. Calculate the average of your data by dividing the total by the number (count) of records (the number of chickens for which you kept records: 7). This gives you 980/7 = 140. That means the average number of eggs per chicken was 140.
4. Subtract the average number from each record. For example, if Chicken #1 (Betsy) laid 120 eggs, you would subtract 140 from 120. Write that number down, and then go through the same process for
the other six chickens.
5. Square the above numbers. That is, multiply each of the numbers you found by itself. Betsy’s number would be 20 x 20, or 400. Write that down and repeat the process for the rest of the flock’s
numbers from step 4.
6. Add up all the squared numbers.
7. Calculate the average of all the numbers found in step 5.
8. Calculate the square root of the number you found in step 7. The number you find here is the standard deviation.
*NOTE: There IS a shorter and easier way to do this, if you happen to have a spreadsheet app such as MS Excel, and know how to use it. Just create a spreadsheet for your data and use the STDEV function.
In this example, 35 is your standard deviation. That means you can anticipate that if you add a chicken to your flock, you will end up with 140 more eggs in the next year, plus or minus 35 eggs.
(That is, you might get as few as 105 or as many as 175, but it’s likely to be closer to 140.) This also assumes that your sample (the records for your seven chickens) is representative of all
chickens in the region.
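The eight steps translate directly into code. Here is a small sketch of the population standard deviation (the per-chicken egg counts in the article are not listed, so any seven records summing to 980 would do):

```typescript
// Population standard deviation, following the eight steps above.
function standardDeviation(values: number[]): number {
  const n = values.length;
  const total = values.reduce((sum, v) => sum + v, 0);          // step 2
  const mean = total / n;                                       // step 3
  const squaredDiffs = values.map(v => {                        // steps 4-5
    const diff = v - mean;
    return diff * diff;
  });
  const sumOfSquares = squaredDiffs.reduce((s, v) => s + v, 0); // step 6
  const variance = sumOfSquares / n;                            // step 7
  return Math.sqrt(variance);                                   // step 8
}
```

For example, standardDeviation([2, 4, 4, 4, 5, 5, 7, 9]) returns 2: the mean is 5, the average squared deviation is 4, and its square root is 2. Note this is the population form (divide by n), matching the steps above; a spreadsheet STDEV function divides by n - 1 (the sample form), so its result will differ slightly.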
Standard deviation is a widely used metric, especially for evaluating risks. Be careful in using it, though – many people fall into a trap when using standard deviation: they become overconfident
about risks in domains which have extreme distributions. (This common mistake actually contributed to the financial crisis of 2008.) So, use standard deviation wisely, and don’t get overconfident
about it – especially when using it in the social domain.
Some questions for you:
• Do you consider yourself an average person?
• What is the average day temperature in the Sahara? In Chicago?
Early work did not credit Einstein
An anonymous commenter credits Einstein for relativity, largely based on the opinions of Lorentz, Minkowski, Born, etc. at the time.
The truth is that neither these ppl nor anyone else at the time thought that Einstein had made any significant advance over Lorentz and Poincare.
Lorentz was generous to Einstein (and to FitzGerald, Voigt, and others), but he credited Poincare more.
Lorentz published his 1906 Columbia U. lectures on relativity, where he described Einstein's work without expressing any disagreement with it. That is where Lorentz says, "Einstein simply
postulates what we have deduced". After praising Einstein's simplicity, he says, "Yet, I think, something may also be claimed in favor of the form in which I have presented the theory." Lorentz
was saying that he and Einstein had different ways of presenting the same theory, with advantages and disadvantages to each approach.
Walter Kaufmann wrote something similar in 1906. This is exactly correct, as I explain here. The core of Einstein's 1905 paper was to give an exposition of Lorentz theory, with the main simplification being postulating what Lorentz (and Poincare) proved.
Bucherer wrote a 1908 paper on The Experimental Confirmation of the Lorentz-Einstein Theory. His mild credit for Einstein is "The relativity principle is clearly emphasized in Einstein's version."
In assigning credit for the Lorentz transformations, Lorentz wrote:
These were the considerations published by me in 1904 which gave place to Poincaré to write his paper on the dynamics of the electron, in which he attached my name to the transformation to which
I will come to speak. I must notice on this subject that the same transformation was already present in an article of Mr. Voigt published in 1887, and that I did not draw from this artifice all
the possible parts. Indeed, for some of the physical quantities which enter the formulas, I did not indicate the transformation which suits best. That was done by Poincaré and then by Mr.
Einstein and Minkowski.
Einstein gave only limited credit, as he spent his whole life trying to cheat ppl out of credit for their work.
Minkowski is not much better, as noted, but at least he credited Lorentz and Poincare more than Einstein.
Perhaps Max Born found relativity easier if the relativity principle is postulated, instead of deduced from Michelson-Morley. Or maybe Born just liked Einstein better, as they were life-long friends.
But he never explained how Einstein's work was any better than that of Lorentz and Poincare.
10 comments:
1. You say Lorentz credited Poincare more than he credited Einstein, but this is contradicted by the facts. Even in 1910 Lorentz gave a lecture in Gottingen on "Einstein's Principle of Relativity",
and in his entire book he speaks extensively about Einstein's theory of relativity, but mentions Poincare only once, for the "Poincare pressure".
It's also well known that Lorentz mis-spoke when he said he had derived what Einstein assumed. Read Lorentz's papers and book... he simply assumed, one by one, the individual features of phenomena necessary for his "theorem of corresponding states" to be satisfied. He derived things for electromagnetism (from Maxwell's equations, which were already Lorentz invariant thanks to Maxwell, not Lorentz), but for everything else he simply assumed. And he never fully embraced the relativity principle, and Poincare even in 1908 was writing about a non-viscous fluid ether that he believed created the appearance of relativity. This is a far cry from what you wish to believe, which is the fully relativistic Minkowski spacetime metric of the vacuum. Even Minkowski said
his work realized the fully electromagnetic world view.
You simply cannot read Einstein's 1905 papers and fail to see the enormous conceptual advance over anything that Lorentz or Poincare had written. And even in later years, neither Lorentz nor
Poincare ever embraced the ideas that you credit them for creating.
1. Poincare died in 1912, you idiot! Poincare was even proven right regarding flat vs curved space as convention. Both mathematics now officially exist. He understood mathematics and nature
better. His theory predicted all the same things and earlier. No one can even argue that, including the sycophants on Stack Exchange. They just try to make false statements about Poincare not
understanding what he derived. It's factually wrong. Poincare treated ether as a convention and it's basically back in modern physics (metric tensor, field theories, vacuum energy, casimir
effect, etc...). Fuck off troll. What people said about him DOESN'T MATTER!
2. If Lorentz failed to credit Poincare in a 1910 German lecture, he corrected that by crediting Poincare in the 1914 paper cited above.
Lorentz was accurate when he said that Einstein postulated what others proved. That is mainly what ppl liked about Einstein's paper. It is easier to read postulates than proofs.
The papers by Lorentz, Poincare, and Minkowski are on a much higher intellectual level. They have the genius of original work. Einstein's paper is like a watered-down expository paper. Nothing is
clearly original. It may have been easier to read for physicists unfamiliar with the previous literature, but it did not advance the field. All of the important work followed the
Lorentz-Poincare-Minkowski chain of papers.
3. You misunderstand Lorentz's 1914 comment, appearing in a talk specifically to honor the recently deceased Poincare. He was referring to Poincare's correction of the charge transformation that
Lorentz got wrong in 1904 (illustrating the clunky state of Lorentz's ideas). This is the single correction to Lorentz's work that appears in Poincare's Palermo paper (1906), but of course it
also is given correctly in Einstein's 1905 paper, so the word "then" was in error (albeit understandable on the occasion).
Your mistake about Lorentz or others proving relativity has already been explained. Again, read Lorentz's "proof" of his theorem of corresponding states. It consists of one huge assumption after
another, which together amount to simply assuming Lorentz invariance, but in a clunky and disjointed way.
It is well known among scholars of this subject that Poincare's Palermo paper had almost no effect on the development of the subject (as shown by dearth of citations), partly because it was not
widely read, and partly because even those who read it could see that it had been completely superseded (even before it appeared) by Einstein's 1905 papers. Your claim that Einstein's paper did
not advance the field is, of course, utterly absurd.
4. It is true that there are not many citations to Poincare's Palermo paper, but there aren't many to Einstein's 1905 paper either in the next couple of years. The relativity research papers only
exploded after Minkowski's 1907-08 papers, and they mainly cited Minkowski.
Minkowski relied very heavily on Poincare's paper for a spacetime theory, for 4-vectors, for covariance, for the Lorentz group, for the metric, for causality, and for ideas about gravity.
Minkowski did not rely on Einstein for anything.
So yes, Poincare's Palermo paper was vastly more influential than Einstein's.
Sure, Lorentz's 1895 proof is hard to follow, but it was pioneering work and it was 10 years ahead of Einstein. Einstein's job was easier because he had 10 years of hindsight, and because he was postulating what Lorentz was proving.
5. "The relativity research papers only exploded after Minkowski's 1907-08 papers..."
"...and they mainly cited Minkowski."
Once Einstein's relativity theory was popularized by Minkowski, many papers were written, and references to Einstein (and Lorentz) were abundant. For example, in 1910, Wilhelm Wien proposed Lorentz and Einstein jointly for the Nobel prize for relativity. Einstein was famous among physicists as the originator of special relativity and the "ether slayer" long before he became a
popular celebrity. Poincare's paper was nearly forgotten (Pauli had to be prompted by Klein to mention it in a footnote in his 1921 encyclopedia article) until the 1950's when historians of
science started to notice it.
"Minkowski relied very heavily on Poincare's paper for a spacetime theory, for 4-vectors, for covariance, for the Lorentz group, for the metric, for causality, and for ideas about gravity."
I think you have just made that up. According to Minkowski's assistant, Minkowski already had his ideas in 1905 that he published a couple of years later. Also, please note that he refers to
Poincare only for coining the name "Lorentz transformations", and in another place to say he proceeds differently. He references Einstein by crediting him for making the crucial step of
recognizing the relativity of time, which forms the foundation for Minkowski's entire work, which he acknowledges.
More fundamentally, your list is wacky:
spacetime theory - Correct reference is Einstein 1905, where the invariant spacetime interval is defined, the ether is discarded, the Lorentz transformation identified as a group, relativity of
simultaneity, complete reciprocity, relativistic electrodynamics and mechanics described in detail, time dilation (which Poincare didn't believe was real), etc., etc.
Poincare 1906 and to the end of his life still talking ether, non-viscous fluid, violations of momentum, etc.
4-vectors: obvious mathematical formalism
covariance: Einstein, 1905
the Lorentz group: Einstein 1905
the metric: Einstein 1905 identifies the invariant x^2 + y^2 + z^2 - c^2t^2 = s^2, which is the metric line element for flat spacetime.
causality: huh?
ideas about gravity: Poincare's ideas about gravity, treating it just like electromagnetism, were obvious but wrong. The correct relativistic theory of gravity was developed by Einstein 1915.
"Minkowski did not rely on Einstein for anything."
You know that is not true, by his citations and by his own words. ("But Einstein did the work. I would not have though the lazy dog capable of it.")
"So yes, Poincare's Palermo paper was vastly more influential than Einstein's."
You know that is not true. Poincare's greatest contribution to relativity was probably his earlier popular writings studied by the "Olympia academy", that could have planted seeds regarding the
epistemology of time. But there is no way of knowing how subtle comments may later influence someone's thinking. Einstein said he was heavily influenced by Hume's skeptical attitude toward
accepted beliefs. Should we credit Hume for special relativity?
"Sure, Lorentz's 1895 proof is hard to follow..."
No, you missed the point. I was not saying Lorentz's work is hard to follow (it isn't). I was saying it consists of a series of fully acknowledged *assumptions* that, taken together, amount to
simply assuming Lorentz invariance. He did not derive Lorentz invariance (and could not have), he simply assumed it. So it is wrong to say Einstein assumed what Lorentz proved. They both just
assumed, i.e., they discerned that the only reliable guide was the relativity principle itself. This is all well known, and can be found in any good book on the foundations of special relativity.
6. Einstein does not actually say that the Lorentz metric is preserved. He only says that light cone points are transformed to light cone points, or that metric zero points transform to metric zero
points. Poincare discovered the importance of the metric, and Minkowski got it from him, not Einstein. Einstein later said he got it from Minkowski.
Wien may not have read or appreciated Poincare's paper, for all I know. Maybe Wien did not read French. In just a couple more years, physicists learned special relativity from textbooks, not
original papers, and so they might have no idea who did the original work.
It is pretty clear that Minkowski (like Einstein) was not properly crediting his sources in order to get greater credit for himself. Yes, Minkowski credited Poincare for naming the Lorentz group.
Is that really all Minkowski got out of the Palermo paper? And not 4-vectors, the metric, and everything else? That is too silly to believe.
Minkowski probably did not know that Einstein got his ideas about the relativity of time from Poincare. Historians mostly agree that Einstein did, even tho Einstein never credited Poincare for
it. So it means nothing for Minkowski to have credited Einstein for the relativity of time.
You can criticize Lorentz all you want, but in 1895 he went from Maxwell's equations, Michelson-Morley, other experiments, and some inspired reasoning to an essentially correct theory about transformations of space and time. Had he been wrong, everyone would say that he did not prove anything. But he was right.
There are lots of stupid reasons for not crediting Poincare. Maybe German physicists liked to credit other German physicists. Maybe some, like you, think that Poincare somehow discovered
relativity without understanding what he was doing. There is even one historian who says that Poincare should not be credited because he admitted that the theory could be proved wrong by
experiments! Regardless, the essential facts about who did what are not in dispute.
7. It's true that Einstein didn't explicitly mention (until about 1909) that x^2 - c^2t^2 is preserved under Lorentz transformations, but this is a direct consequence of the transformations with L=
1, which Einstein proved and Poincare simply assumed. Also, noting this invariance is not the same as "discovering the importance of the metric". Minkowski was the first to actually write the
metric line element, i.e., dtau^2 = -dx^2 - dy^2 - dz^2 + dt^2, and was the first to show the importance of this. You claim he got this from Poincare, but it isn't in Poincare.
You say "Wein may not have read Poincare's paper... maybe Wein did not read French." But you assured me a few days ago that every single German physicist with an interest in the subject was fully
acquainted with Poincare's paper in the Italian math journal. It was inconceivable that a patent examiner in Bern would not have read it. This reveals a fundamental difficulty of your thesis. You
claim (1) everyone was fully familiar with Poincare's papers, and (2) Poincare did not receive due recognition for his papers.
You say "it's pretty clear" that Einstein and Minkowski hid their sources, but it really isn't clear at all. You have no evidence to support your claim, and you obviously can't claim that
Einstein's 1905 stole from Poincare's 1906. Sure, some of the things in Minkowski 1907 (sqrt(-1)t, hyperbolic rotation, 6-vectors) can be found in Poincare 1906, but these are formalistic things obvious to any mathematician when presented with the Lorentz transformations.
"Is that really all Minkowski got out of the Palermo paper? And not 4-vectors, the metric, and everything else? That is too silly to believe."
The metric is not in Poincare, and it forms the backbone of Minkowski's paper. Also, Poincare was still describing Lorentz's (i.e., Ptolemy's) ether theory.
"Minkowski probably did not know that Einstein got his ideas about the relativity of time from Poincare."
And you don't know it either. And no, historians do not mostly agree that he did. You are fantasizing.
"So it means nothing for Minkowski to have credited Einstein for the relativity of time."
It wasn't just Minkowski, it was Lorentz and Poincare too. At the end of his life, Poincare was snidely deprecating the relativity of simultaneity "convention" of "certain physicists" (i.e.,
You can criticize Lorentz all you want...
You continue to miss the point (or at least pretend that you are missing it). The point is that Lorentz merely assumed (not derived) what Einstein assumed, the difference being that Einstein
dispensed with the ether and created a theory not about an ether in Galilean space and time but about the measures of space and time in spacetime where inertial coordinate systems are not related
by Galilean transformations, but by Lorentz transformations. Lorentz never said this. Poincare never said this.
8. I don't know how many physicists read Poincare's papers. Einstein considered himself an expert on the relativity literature, so I don't know how he could have missed it, but I do know that
Minkowski cited Poincare.
Poincare's Palermo paper, section 9, has a discussion starting with "To go further we must look for the invariants of the Lorentz group." It then gives the 4-metric and imaginary time.
Minkowski's long 1907 paper gives very little credit to Poincare, but cites the Palermo paper twice in the 12 footnotes.
Einstein's 1916 book credits Minkowski strongly for the metric and imaginary time, even tho it looks as if it is plagiarized from the Palermo paper. Section 17 says: "These inadequate remarks
can give the reader only a vague notion of the important idea contributed by Minkowski. Without it the general theory of relativity, of which the fundamental ideas are developed in the following
pages, would perhaps have got no farther than its long clothes."
From this I conclude that Poincare discovered the 4-metric, Minkowski got it from Poincare, and Einstein got it from Minkowski.
Possibly Minkowski independently rediscovered it, but then he would have credited Poincare with independently discovering it. That would have been the way to be fair to Poincare as well as grab
some credit for himself. He did not. I assume that Minkowski knew that he could not get away with claiming credit for something he stole from Poincare, but he could just bury Poincare in a
footnote and hope most people don't notice.
You can also say that the 4-metric is obvious, once finding L=1, but Einstein sure doesn't think it was obvious. (I assume "long clothes" is some sort of German idiom that does not translate well.)
Thus I say that even if Minkowski was the only one to read and understand the Palermo paper, it was still crucial to all subsequent developments in relativity.
As for Lorentz, his assumptions about the aether were essentially the same as Einstein's, as explained here. The chief difference is that Lorentz relied on Michelson-Morley while Einstein relied
on postulates taken from conclusions by Lorentz and Poincare.
Tell me this: Why do all my textbooks describe special relativity as a consequence of Michelson-Morley? Lorentz was the one who derived the essence of relativity from Michelson-Morley. Einstein
never mentioned it. If I make a list of the important steps in the textbook history of special relativity, Einstein played no role in any of them.
9. "I don't know how many physicists read Poincare's papers."
That's correct.
Poincare's Palermo paper (in French), appearing in 1906 in an Italian mathematics journal,
"Einstein considered himself an expert on the relativity literature..."
That's not true at all. Please re-read page 165 of Pais's biography.
You say Poincare's 1906 gives the metric, even after I pointed out that it doesn't. The invariant is not the same as the metric. As I said in the previous message (not sure if you read it),
Minkowski was the first to write the metric, i.e., (dtau)^2 = -dx^2, etc.
Again, the Copernican step was to recognize that inertial coordinate systems are related by Lorentz transformations, and then see all the consequences of this. Lorentz explicitly and Poincare
implicitly admitted that they never made this step (even to the end of their lives).
Four Sudoku Games With Answers Set 15 Stock Vector Illustration Of - Lyana Printable Sudoku
Printable Sudoku Puzzles Letters With Answers
Four Sudoku Games With Answers Set 15 Stock Vector Illustration Of – There are several ways to use Printable Sudoku puzzles. One great way is to personalize them using QR codes. You can add a QR code to any 9×9 sudoku, for instance, to make it unique. The puzzles are fun to play and make excellent gifts. To learn more about different Sudoku puzzles, take a look at the following sections.
How to Solve a Medium Level Sudoku Puzzles?
Intermediates and beginners alike can benefit from learning how to solve medium-level sudoku puzzles. While they may require more effort, this type of sudoku puzzle is still based on the same rules. Fill in the blanks with numbers from one to nine and take notes to help with tricky parts. Sudoku puzzles with a medium difficulty are more challenging and require some strategies. Here are some suggestions on how to solve sudoku puzzles of medium difficulty.
First, you should record all possible solution candidates. This helps you recognize patterns and block out incorrect answers. You may also consider ranking potential solutions, a technique utilized
by some enthusiasts of sudoku. It is possible to view the cells in a single direction which means you can view the cells from right to left or you could look at them from the opposite direction like
up and down. You should try to list as many candidates as possible, until you’ve exhausted all squares.
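The candidate-recording step described above can be made concrete with a short sketch (the grid representation with 0 for empty cells and the function name are my choices, not from the article):

```python
def candidates(grid, r, c):
    # Digits 1-9 that can legally go in the empty cell (r, c) of a
    # 9x9 grid, where 0 marks an empty cell.
    if grid[r][c] != 0:
        return set()
    used = set(grid[r])                     # digits already in the row
    used |= {grid[i][c] for i in range(9)}  # digits already in the column
    br, bc = 3 * (r // 3), 3 * (c // 3)     # top-left corner of the 3x3 box
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used
```

Listing these sets for every empty cell is exactly the note-taking strategy described above: cells whose candidate set shrinks to one digit can be filled immediately.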
How to Play Sudoku for Absolute Beginners?
If you’re an absolute beginner to this game, then you’re likely wondering how to master sudoku. This guide could be extremely useful, since it provides you with the fundamental knowledge required to be successful at this game. For beginners, you’ll learn about various types of sudoku, how to play them, and the basics. For more experienced players, it will provide more techniques for solving the puzzles.
To play Sudoku, you’ll have to use logic and reasoning to solve the challenges. This is accomplished by looking over the grid for missing digits and filling in those spaces. In the following example, the missing number is in the left column. Since only one number is missing, it is simple to find the remaining cells with this method of scanning. However, if you’re stuck, pencil in all possible candidates, including the nine.
How to Solve Sudoku Quickly?
To solve a Sudoku puzzle quickly, you should know how to use a pencil to work through the board. A pencil lets you make tentative marks and then erase mistakes. It also helps to mark the rows and columns with pencil marks for elimination strategies. To help pinpoint the exact number within one column or row, look for the same number across several rows. For example, the number seven appears only in the red cells of the middle row.
Another tip to solve Sudoku fast is thinking rationally and arrange the numbers. Don’t guess as doing so can ruin the rest of the puzzle. Always put the numbers in each block after weighing the
evidence contained in each box. You should also take time to complete the puzzle. This way you can get a quick solution. If you are patient and follow the suggestions that are listed above, you’ll be
able to achieve the results you desire within a matter of minutes.
The common belief of some linguists that each language is a
1. The primary purpose of the passage is to
Well, let's look at the passage and try to understand its primary purpose.
The passage begins by stating the common belief of some linguists, which is that each language is a perfect vehicle for the thoughts of the nation speaking it. The passage then gives an example which
runs counter to this belief. It then concludes that no language is perfect and it is reasonable to investigate the relative merits of different languages or of different details in languages.
Thus the primary purpose of the passage is to refute the belief of some linguists that each language is a perfect vehicle for the thoughts of the nation speaking it.
(A) analyze an interesting feature of the English language
Wrong. While the passage does analyze an interesting feature of the English Language (pronoun ambiguity), that is not the primary purpose. That is done to support the primary purpose of the passage.
(B) refute a belief held by some linguists
Correct. Yes, it agrees with the explanation for the primary purpose of the passage that I suggested above.
(C) show that economic theory is relevant to linguistic study
Wrong. While the passage does try to show that the common belief of some linguists is in some ways the exact counterpart of the conviction the Manchester school of economics that supply and demand
will regulate everything, that is not the primary purpose of the passage.
(D) illustrate the confusion that can result from the improper use of language
Wrong. First of all, there is no improper use of language. The pronoun ambiguity in the sentence - "He took his stick—no, not John’s, but his own" - is an inherent feature of the English language and
the language has been properly used.
(E) suggest a way in which languages can be made more nearly perfect
Wrong. While the passage ends by concluding that no language is perfect and we must investigate the relative merits of different languages, or of different details in languages, it does not suggest a
way in which languages can be made more nearly perfect.
2. The misunderstanding presented by the author is similar to which of the following?
I. X uses the word “you” to refer to a group, but Y thinks that X is referring to one person only.
II. X mistakenly uses the word “anomaly” to refer to a typical example, but Y knows that “anomaly” means “exception.”
III. X uses the word “bachelor” to mean “unmarried man,” but Y mistakenly thinks that bachelor means “unmarried woman.”
Let's look at the misunderstanding presented by the author.
He took his stick—no, not John’s, but his own.
The misunderstanding is the result of confusion over whom the first "his" is pointing to. It is clarified later as pointing to "He" and not to "John".
I. X uses the word “you” to refer to a group, but Y thinks that X is referring to one person only.
Correct. This is similar to the error presented in the passage. The confusion is over whom "You" is pointing to. Is it pointing to a group or only one person. It is similar to a "pointer error" in
programming context.
II. X mistakenly uses the word “anomaly” to refer to a typical example, but Y knows that “anomaly” means “exception.”
Wrong. Here the misunderstanding is due to a disagreement over the meaning of the word. Not whom or what the word is pointing to. The word "anomaly" is not referencing anybody or anything, it merely
means "exception". It is similar to "value error" in programming context.
III. X uses the word “bachelor” to mean “unmarried man,” but Y mistakenly thinks that bachelor means “unmarried woman.”
Wrong. Here the misunderstanding is due to a disagreement over the meaning of the word. Not whom or what the word is pointing to. The word "bachelor" is not referencing anybody or anything, it merely
means "unmarried man"
(A) I only . Correct Answer
(B) II only
(C) III only
(D) I and II only
(E) II and III only
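The pointer/value analogy used in the explanations above can be illustrated in code (the example itself is mine, not from the passage):

```python
# "Pointer error": two names can refer to the same object, so a bare
# reference is ambiguous until its target is checked -- like "his".
johns_stick = ["walking stick"]
his_stick = johns_stick             # alias: both names point at one object
same_object = his_stick is johns_stick

# "Value error": the disagreement is about the meaning (value) itself,
# not about what a reference points to -- like "bachelor".
dictionary = {"bachelor": "unmarried man"}
misreading = "unmarried woman"
same_meaning = misreading == dictionary["bachelor"]
```

Here `same_object` is true (a reference question, like statement I), while `same_meaning` is false (a meaning question, like statements II and III).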
3. In presenting the argument, the author does all of the following EXCEPT:
(A) give an example
Wrong. He gives an example of a word has to be modified or defined in order to present the idea intended by the speaker - “He took his stick—no, not John’s, but his own.”
(B) draw a conclusion
Wrong. He draws a conclusion in the last sentence - "No language is perfect, and if we admit this truth, we must also admit that it is not unreasonable to investigate the relative merits of different
languages or of different details in languages."
(C) make a generalization
Wrong. He makes a generalization in the last sentence. After studying only one example in one language he generalizes thus - "No language is perfect, and if we admit this truth,...."
(D) make a comparison
Wrong. He compares the common belief of some linguists with the conviction of the Manchester school of economics.
(E) present a paradox
Correct. Nowhere in the passage does the author present a paradox.
Paradox - a seemingly absurd or contradictory statement or proposition which when investigated may prove to be well founded or true.
4. Which of the following contributes to the misunderstanding described by the author ?
Let's look at the misunderstanding presented by the author.
He took his stick—no, not John’s, but his own.
The misunderstanding is the result of confusion over whom the first "his" is pointing to. It is clarified later as pointing to "He" and not to "John".
(A) It is unclear whom the speaker of the sentence is addressing.
Wrong. It is unclear whom the speaker is addressing the sentence to, but that is not the source of the misunderstanding.
(B) It is unclear to whom the word “his” refers the first time it is used.
Correct. Yes, it is unclear to whom the word "his" refers the first time it is used.
(C) It is unclear to whom the word “his” refers the second time it is used.
Wrong. It is very clear to whom the word "his" refers the second time it is used. It refers to "He", the subject of the sentence. The sentence fragment - "no, not John’s, but his own." - clarifies this.
(D) The meaning of “took” is ambiguous.
Wrong. The meaning of "took" is very clear in this context.
(E) It is unclear to whom “He” refers.
Wrong. It may be unclear to whom "He" refers to, but that is not the cause of the misunderstanding.
Problem C
The coaches in a certain regional are fed up with the judges. During the last contest over $90$% of the teams failed to solve a single problem—in fact, even half the judges found the problems too
hard to solve. So the coaches have decided to tar and feather the judges. They know the locations of all the judges as well as the locations of tar repositories and feather storehouses. They would
like to assign one repository and one storehouse to each judge so as to minimize the total distances involved. But this is a hard problem and the coaches don’t have time to solve it (the judges are
evil but not stupid—they have a sense of the unrest they’ve fomented and are getting ready to leave town). So instead they’ve decided to use a greedy solution. They’ll look for the smallest distance
between any tar repository and any judge location and assign that repository to that judge. Then they’ll repeat the process with the remaining repositories and judges until all the judges have a
repository assigned to them. After they’re finished with the tar assignments they’ll do the same with the feather storehouses and the judges. Your job is to determine the total distances between
repositories and storehouses and their assigned judges.
All judges, tar repositories and feather storehouses are numbered $1, 2, \ldots $. In case of any ties, always assign a repository/storehouse to the lowest numbered judge first. If there is still a
tie, use the lowest numbered repository/storehouse.
Better hurry up—an unmarked van has just been spotted pulling up behind the judges’ room.
Input starts with a line containing three positive integers: $n$ $m$ $p$ ($1 \leq n \leq m, p \leq 1\, 000$), representing the number of judges, tar repositories and feather storehouses,
respectively. Following this are $n$ lines, each containing two integers $x$ $y$ ($|x|, |y| \leq 10\, 000$) specifying the locations of the $n$ judges, starting with judge $1$. This is followed by
$m$ similar lines specifying the locations of the tar repositories (starting with repository $1$) and $p$ lines specifying the locations of the feather storehouses (starting with storehouse $1$).
Output the sum of all distances between judges and their assigned tar repositories and feather storehouses, using the greedy method described above. Your answer should have an absolute or
relative error of at most $10^{-6}$.
Sample Input 1 Sample Output 1
0 0 4.0
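The greedy procedure described in the statement can be sketched as follows (the function names, the use of `math.dist` for Euclidean distance, and the omission of input parsing are my assumptions, not part of the problem):

```python
import math

def greedy_total(judges, depots):
    # All (distance, judge, depot) triples, sorted so ties break toward
    # the lowest-numbered judge first, then the lowest-numbered depot,
    # matching the tie-breaking rule in the statement.
    pairs = sorted(
        (math.dist(judges[j], depots[d]), j, d)
        for j in range(len(judges))
        for d in range(len(depots))
    )
    free_judge = [True] * len(judges)
    free_depot = [True] * len(depots)
    total, remaining = 0.0, len(judges)
    # Scanning the sorted pairs and matching still-free endpoints is
    # equivalent to repeatedly taking the global minimum pair.
    for dist, j, d in pairs:
        if remaining == 0:
            break
        if free_judge[j] and free_depot[d]:
            free_judge[j] = free_depot[d] = False
            total += dist
            remaining -= 1
    return total

def answer(judges, tar, feathers):
    # Tar and feathers are assigned in two independent greedy passes.
    return greedy_total(judges, tar) + greedy_total(judges, feathers)
```

Sorting all n·m pairs once gives O(nm log nm) time, which fits the stated bounds of up to 1000 judges and depots.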
Department of Mathematics
Matthias Beck, San Francisco State University
PDL C-38 and via Zoom Link: https://washington.zoom.us/j/91547335974
If $P$ is a lattice polytope (i.e., $P$ is the convex hull of finitely many integer points in ${\bf R}^d$), Ehrhart's theorem asserts that the integer-point counting function $L_P(m) = \#(mP \cap {\bf Z}^d)$ is a polynomial in the integer variable $m$. Our goal is to study structural properties of Ehrhart polynomials—essentially asking variants of the (way too hard) question which polynomials are Ehrhart polynomials? Similar to the situations with other combinatorial polynomials, it is useful to express $L_P(m)$ in different bases. E.g., a theorem of Stanley (1980) says that $L_P(m)$, expressed in the polynomial basis $\binom m d, \binom{m+1} d, \dots, \binom{m+d} d$, has nonnegative coefficients; these coefficients form the $h^*$-vector of $P$. More recent work of Breuer (2012) suggests that one ought to also study $L_P(m)$ as expressed in the polynomial basis $\binom{m-1} 0, \binom{m-1} 1, \binom{m-1} 2, \dots$; the coefficients in this basis form the $f^*$-vector of $P$. We will survey some old and new results (the latter joint work with Danai Deligeorgaki, Max Hlavaczek, and Jéronimo Valencia) about $f^*$- and $h^*$-vectors, including analogues and dissimilarities with $f$- and $h$-vectors of polytopes and polyhedral complexes.
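A standard worked example (mine, not from the abstract): for the unit square $P = [0,1]^2$,

```latex
L_P(m) = (m+1)^2 = \binom{m+2}{2} + \binom{m+1}{2},
\qquad
\sum_{m \ge 0} L_P(m)\, t^m = \frac{1 + t}{(1 - t)^3},
```

so the $h^*$-vector is $(1, 1, 0)$, with nonnegative entries as Stanley's theorem requires.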
Note: This talk begins with a pre-seminar (aimed at graduate students) at 3:30–4:00. The main talk starts at 4:10.
Join Zoom Meeting: https://washington.zoom.us/j/91547335974
Meeting ID: 915 4733 5974
circle graph template
a circle is the same as 360°. you can divide a circle into smaller portions. a part of a circle is called an arc and an arc is named according to its angle. a circle graph, or a pie chart, is used
to visualize information and data. a circle graph is usually used to easily show the results of an investigation in a proportional manner. the arcs of a circle graph are proportional to how many
percent of population gave a certain answer. an investigation was made in mathplanet high school to investigate what color of jeans was the most common among the students. this circle graph shows how
many percent of the school had a certain color. we now want to know how many angles each percentage corresponds to. when we want to draw a circle graph by ourselves we need to rewrite the percentages
for each category into degrees of a circle and then use a protractor to make the graph.
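The percentage-to-degrees conversion used when drawing the graph is simple enough to sketch (the function name is mine):

```python
def percent_to_degrees(percent):
    # A full circle is 360 degrees, so 1% of the data corresponds
    # to 3.6 degrees of arc.
    return percent / 100 * 360
```

For example, a jeans color worn by 25% of the students would get a 90-degree sector.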
circle graph format
a circle graph sample is a type of document that creates a copy of itself when you open it. The doc or excel template has all of the design and format of the circle graph sample, such as logos and tables, but you can modify the content without altering the original style. When designing a circle graph form, you may add related information such as: circle graph maker, circle graph math, circle graph examples, circle graph template, circle graph vs pie chart.
When designing a circle graph example, it is important to consider related questions, such as: what is a circle graph also called? what is a circle diagram called? what is the circle graph method? what are the types of circle graphs? Related terms: circle graph calculator, circle graph used for, circle graph paper, what is a circle graph called, circle graph overlap.
When designing the circle graph document, it is also essential to consider different file formats such as Word, pdf, Excel, ppt, doc, etc. You may also add related information such as: types of circle graphs, circle graph desmos, how to make a circle graph with percentages, circle graph trig.
circle graph guide
in the last lesson, we learned that a circle graph shows how the parts of something relate to the whole. circle graphs are popular because they provide a visual presentation of the whole and its
parts. below are the circle graphs from each example in the last lesson. you will notice that in each circle graph above, the sectors are ordered by size: the sectors are drawn from largest to
smallest in a clockwise direction. construct a circle graph to visually display this data. each item to be graphed represents a part of the whole. the easiest way to do this is to take the quotient
of the part and the whole and then convert the result to a percent. use a protractor to draw each angle. draw the angles from largest to smallest in a clockwise direction.
construct a circle graph to represent this data. each item to be graphed represents a part of the whole. we know from the last lesson that a circle graph is easier to read when a percent is used to
label the data. draw a circle and a radius. use a protractor to draw each angle. draw the angles from largest to smallest in a clockwise direction. directions: use the procedure above to construct a
circle graph for each table in the exercises below. construct a circle graph to visually display this data. construct a circle graph to visually display this data.
circle graphing calculator is a free online tool that displays the circle graphing both general form and standard form. byju’s online circle graphing calculator tool makes the calculation faster, and
it displays the graph in a fraction of seconds. the procedure to use the circle graphing calculator is as follows: step 1: enter the coefficients of an equation in the respective input field step 2:
now click the button “submit / draw it” to get the graph step 3: finally, the circle graph will be displayed in the output field in mathematics, a circle is a two-dimensional figure where all the
points on the surface of a circle are equidistant from the centre point, c. the distance between the centre point and the point on the surface is called a radius, r. the graph of the circle will be
displayed if the radius and the coordinates of the centre are given. the standard form to represent the equation of a circle is given by (x-a)^2 + (y-b)^2 = r^2, where (a, b) gives the centre coordinates.
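As a sketch of what such a calculator does internally (the general-form coefficients D, E, F in x^2 + y^2 + Dx + Ey + F = 0 and the function name are my assumptions), completing the square recovers the centre and radius:

```python
import math

def circle_center_radius(D, E, F):
    # Complete the square on x^2 + y^2 + D*x + E*y + F = 0 to reach
    # the standard form (x - a)^2 + (y - b)^2 = r^2.
    a, b = -D / 2, -E / 2
    r_squared = a * a + b * b - F
    if r_squared <= 0:
        raise ValueError("the equation does not describe a real circle")
    return (a, b), math.sqrt(r_squared)
```

For instance, x^2 + y^2 - 4x + 6y - 3 = 0 has centre (2, -3) and radius 4.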
f(x) = x^2 + bx, g(x) = 3x^2 - 9x: the functions f and g are defined
The functions f and g are defined above, where b is a constant. If
The correct answer is: 0.33
We use comparison to equate different coefficients and get the value of b.
Step 1 of 2:
The given functions are f(x) = x^2 + bx and g(x) = 3x^2 - 9x.
Given that,
Step 2 of 2:
Now, comparing the coefficients:
(9 - 3b) = 8
⇒ 3b = 9 - 8 = 1
⇒ b = 1/3
And, 9b = 3 ⇒ b = 1/3
So, we can observe that the same value of b satisfies both comparisons.
Final Answer:
The value of b is 1/3 ≈ 0.33.
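As a quick sanity check of the two coefficient comparisons (the check itself is mine, not part of the original solution):

```python
from fractions import Fraction

b = Fraction(1, 3)
# Both coefficient comparisons from the solution:
check1 = (9 - 3 * b == 8)
check2 = (9 * b == 3)
approx = float(b)  # roughly 0.33
```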
Specification Chart
Frequency Domain
Adaptive Frequency Sweep
Boundary Mode Analysis
Frequency Domain
Frequency Domain Source Sweep
Frequency Domain, Modal
Wavelength Domain
Frequency Domain
Adaptive Frequency Sweep
Boundary Mode Analysis
Frequency Domain
Frequency Domain Source Sweep
Frequency Domain, Modal
Frequency Domain, RF Adaptive Mesh
Mode Analysis
Wavelength Domain
Time Dependent
Time Dependent with FFT
Time Dependent
Time Dependent with FFT
Time Dependent, Modal
Bidirectionally Coupled Ray Tracing
Ray Tracing
Frequency Domain
Far-Field Calculation
Electric Field
Field Continuity
Impedance Boundary Condition
Magnetic Field
Matched Boundary Condition
Perfect Electric Conductor
Perfect Magnetic Conductor
Periodic Condition (Continuity, Antiperiodicity, Floquet)
Scattering Boundary Condition
Surface Current
Far-Field Calculation
Perfect Electric Conductor
Perfect Magnetic Conductor
Surface Current Density
Electric Field
Layered Transition Boundary Condition
Lumped Element
Lumped Port, Including Connection to Electrical Circuit
Magnetic Field
Perfect Electric Conductor
Perfect Magnetic Conductor
Periodic Condition (Continuity, Antiperiodicity, Floquet)
Scattering Boundary Condition
Surface Current Density
Surface Magnetic Current Density
Three-Port Network
Two-Port Network
Electric Field
Lumped port
Magnetic Field
Perfect Electric Conductor
Perfect Magnetic Conductor
Scattering Boundary Condition
Surface Current Density
Lumped Element
Lumped Port, Including Connection to Electrical Circuit
Magnetic Field
Perfect Electric Conductor
Perfect Magnetic Conductor
Periodic Condition (Continuity, Antiperiodicity)
Scattering Boundary Condition
Surface Current
Circular Wave Retarder
Deposited Ray Power
Ideal Depolarizer
Ideal Lens
Linear Polarizer
Linear Wave Retarder
Material Discontinuity
Mueller Matrix
Ray Detector
Scattering Boundary
Thin Dielectric Film
Absorbing Boundary
Incoming Wave
Lumped Port
Open Circuit
Short Circuit
Terminating Impedance
Anisotropic Material
Linearized Resistivity
Debye Dispersion
Dielectric Losses
Drude-Lorentz Dispersion
Loss Tangent
Refractive Index
Relative Permittivity
Sellmeier Dispersion
Magnetic Losses
Relative Permeability
Isotropic Material
Anisotropic Material
Porous Media and Mixture Materials
Archie’s Law
Linearized Resistivity
Debye Dispersion
Dielectric Losses
Drude-Lorentz Dispersion
Loss Tangent
Refractive Index
Relative Permittivity
Sellmeier Dispersion
Magnetic Losses
Relative Permeability
Anisotropic Material
Anisotropic Material
B-H Curve
Porous Media and Mixture Materials
Archie’s Law
Linearized Resistivity
Drude-Lorentz Dispersion Model
Refractive Index
Relative Permittivity
Remanent Electric Displacement
Relative Permeability
Remanent Flux Density
Mixed-Mode S-Parameters
Touchstone File Export
Continuity (pair)
Electric Field (pair)
Perfect Electric Conductor (pair)
Perfect Magnetic Conductor (pair)
Surface Current (pair)
Continuity (pair)
Edge Current (edge)
Electric Field (pair)
Electric Point Dipole (point)
Magnetic Current (edge)
Magnetic Point Dipole (point)
Perfect Electric Conductor (pair)
Perfect Magnetic Conductor (pair)
Surface Current (pair)
Continuity (pair)
Continuity (pair)
Edge Current (edge)
Electric Point Dipole (point)
Magnetic Point Dipole (point)
Perfect Electric Conductor (pair)
Perfect Magnetic Conductor (pair)
Surface Current (pair)
Continuity (pair)
Auxiliary Dependent Variables
Blackbody Radiation
Calculation of Ray Intensity and Polarization
Corrections for Strongly Absorbing Media
Gaussian Beam
Illuminated Surface
Nonlocal Accumulator
Optical Dispersion Models
Optical Path Length Calculation
Phase Calculation
Photometric Data Import
Polychromatic Light
Ray Detector
Ray Properties
Ray Termination
Ray Tracing in Unmeshed Domains
Release from Boundary
Release from Data File
Release from Edge
Release from Electric Field
Release from Far-Field Radiation Pattern
Release from Point
Solar Radiation
Thermo-Optic Dispersion Models
2D Histogram
Aberration Evaluation
Interference Pattern
Intersection Point 2D (data set)
Intersection Point 3D (data set)
Optical Aberration
Phase Portrait
Poincaré Map
Radiation Pattern (plot)
Ray (data set)
Ray (plot)
Ray Bin (data set)
Ray Evaluation
Ray Trajectories
Spot Diagram
In-Plane Vector
Out-of-Plane Vector
Infinite Domain Modeling with Perfectly Matched Layer
Infinite Void Modeling
Wave Equation, Electric
External Current Density
Infinite Domain Modeling with Perfectly Matched Layer
Specific Absorption Rate
Electric Current Density
Infinite Domain Modeling with Absorbing Layer
Magnetic Current Density
Wave Equation, Electric and Magnetic
Deposited Ray Power
Medium Properties
Ray Heat Source
Scattering Domain
Transmission Line Equation
Alfred Anwander^1, Thomas R. Knösche^1, Thomas Witzel^2, Assaf Horowitz^3, and Yaniv Assaf^3
^1Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany, ^2Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States,
^3Tel Aviv University, Tel Aviv, Israel
The estimation of neural microstructure in general, and axon diameter in particular, became feasible with advanced diffusion imaging frameworks such as CHARMED and AxCaliber. Recently, the AxCaliber
model was extended to 3D, enabling it to capture the axonal properties of any fiber system in the brain. In this work we assessed the utility of the CONNECTOM MRI, which provides a gradient
strength of up to 300 mT/m, for axon diameter estimation. We found that the sensitivity of the model towards small diameter axons increases dramatically with the use of the strong gradient system
increasing the validity and accuracy of AxCaliber3D.
The axonal diameter distribution of white matter fiber tracts in the brain is an important structural parameter for understanding the function-anatomical organization of the brain, for diagnosis of
various diseases, and computational modeling of brain functionality. AxCaliber (Assaf et al., 2008) for the first time offers the possibility to estimate this distribution non-invasively for fibers
with a pre-selected direction (e.g., corpus callosum). Here we introduce a novel model that abandons this restriction and allows for a reconstruction of axonal caliber distributions of fiber tracts
in 3D. However, this involves measuring many more diffusion-weighted q-space images. Therefore, the SNR inevitably drops, in particular for images with high b-value, resulting in a selective
underestimation of the small fibers. This can be compensated by using higher gradient strength. Here we demonstrate that 300 mT/m yields plausible and reproducible axonal caliber distribution in
three dimensions.
Acquisition: Subjects were scanned on either a Siemens Magnetom CONNECTOM or a Siemens Magnetom PRISMA scanner. Diffusion weighted images were acquired with the following parameters:
CONNECTOM: TR/TE = 7500/70 ms, 86 gradient directions distributed on 5 b-value shells with maximal b-value of 5000 s/mm^2, three diffusion times $$$\Delta$$$ ranging from 16 to 40 ms and $$$\delta$$$
of 9 ms, G[max] of 259 mT/m, resolution 1.8mm isotropic EPI, 210 mm FOV, matrix size of 116, 2268 Hz/Px bandwidth, GRAPPA 2, no partial Fourier, 70 slices;
PRISMA: TR/TE = 3000/151 ms, 86 gradient directions distributed on 5 b-value shells with maximal b-value of 3000 s/mm^2, three diffusion times $$$\Delta$$$ ranging from 40 to 100 ms and $$$\delta$$$
of 22ms, G[max] of 72 mT/m, resolution 2.2 mm iso. In addition to the multi-shell, multi diffusion time acquisition, we also acquired a HARDI dataset with a single shell (b=1000 s/mm^2) and 64
gradient directions.
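As a sanity check on these pulse parameters, the diffusion weighting follows from the Stejskal-Tanner relation b = γ²G²δ²(Δ − δ/3). A rough sketch (the gyromagnetic ratio and the function name are our additions, not from the abstract):

```python
# Stejskal-Tanner relation for a pulsed-gradient spin-echo sequence:
#   b = (gamma * G * delta)^2 * (Delta - delta/3)
# The proton gyromagnetic ratio below is a standard physical constant,
# not a value taken from this abstract.
GAMMA = 267.513e6  # rad s^-1 T^-1 for 1H

def b_value(G, delta, Delta):
    """Diffusion weighting in s/mm^2 for gradient amplitude G [T/m]
    and pulse timings delta, Delta [s]."""
    b_si = (GAMMA * G * delta) ** 2 * (Delta - delta / 3.0)  # in s/m^2
    return b_si / 1e6  # convert to s/mm^2

# Connectom protocol above: Gmax = 259 mT/m, delta = 9 ms, shortest Delta = 16 ms
print(round(b_value(0.259, 9e-3, 16e-3)))  # ~5000 s/mm^2
```

With the strongest gradient and the shortest diffusion time, this lands near the protocol's stated maximal b-value of 5000 s/mm².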
AxCaliber Analysis: Following motion correction and estimation of the major fiber directions using constrained spherical deconvolution of the HARDI dataset (Tournier et al., 2007), a regression model
was used to fit the diffusion MRI data in each voxel. The AxCaliber 3D model was used to produce signal predictors for 4 diffusion components: a CSF component, a hindered diffusion (using a diffusion
tensor model) and two axonal populations. The axonal populations were modeled using the AxCaliber pipeline as described previously using two different gamma functions to represent populations of
small axons (narrow distribution centered around 1.5 μm) and large axons (broad distribution centered around 4 μm).
Fiber tracking: The HARDI dataset was used to compute streamlines representing fiber-tracts using a deterministic model with FA threshold of 0.2, maximum angle of 30°, and voxel sub-sampling of 4.
Fiber-tracking was performed using ExploreDTI (Leemans et al. 2009).
Tract based analysis: Under the assumption that the diameter distribution should not change along a tract, for each tract we averaged the regression betas for the different component predictors (CSF, hindered, small axons, large axons). From these tract-based data we computed a map of the ratio between the small and large axonal populations, as well as their sum (as a measure of axonal density).
Results and Discussion
The stronger gradient of the Connectom made it possible to considerably reduce $$$\delta$$$ (from 22 to 9 ms) and $$$\Delta$$$ (from 100 to 40 ms), increasing the sensitivity to shorter displacements (Mitra
et al. 1995) and allowing for shorter echo times, leading to strongly improved SNR. Figure 1 shows the results for the AxCaliber3D reconstruction for data from the Prisma. The fiber density is
increased in a number of major fiber tracts, including the splenium of the corpus callosum and the sensory and motor (but not the premotor) parts of the internal capsule. All of them invariantly show
an increased proportion of large fibers. Figures 2 and 3 show the AxCaliber3D results for two subjects scanned on the Connectom MRI. Here the fiber density shows a pattern similar to the Prisma results, but the proportion of small fibers is dramatically increased. The latter observation suggests that the increased gradient strength and the reduced duration of the diffusion gradient (δ) yield better sensitivity to axons with small diameters. This is nicely demonstrated in Figure 4, which shows the AxCaliber3D small fiber density in the corpus callosum for
the Prisma scanner (top) and Connectom MRI (bottom). While both scanners capture the difference in axon diameter between the splenium and the body of the corpus callosum, the Connectom data also
shows higher magnitude of small-axonal population in the genu as expected from histology.
• Assaf Y, Blumenfeld-Katzir T, Yovel Y, Basser PJ. AxCaliber: A Method for Measuring Axon Diameter Distribution from Diffusion MRI. Magnetic Resonance in Medicine. 2008;59(6):1347-1354.
• Leemans A, Jeurissen B, Sijbers J, Jones DK. Proceedings of the 17th Annual Meeting of International Society for Magnetic Resonance in Medicine. Hawaii. 2009. ExploreDTI: A graphical toolbox for
processing, analyzing, and visualizing diffusion MR data; p. 3537.
• Mitra, P. P., & Halperin, B. I. Effects of finite gradient-pulse widths in pulsed-field-gradient diffusion measurements. Journal of Magnetic Resonance, Series A, 1995;113(1):94-101. doi: 10.1006/
• Tournier JD, Calamante F, Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. NeuroImage.
2007;35:1459–72. doi: 10.1016/j.neuroimage.2007.02.016.
Program in Algorithmic and Combinatorial Thinking
i.e. Course in Theoretical Computer Science
Learn More
Picture a room full of students hard at work on math problems.
Some draw graphs on the chalkboard, testing out algorithms.
Others shuffle complicated algebraic expressions, trying to simplify a summation.
Still others stare intently at a piece of paper, trying to find the one necessary final lemma to complete a proof or to understand a recent result.
The students are all working hard, but they are also having fun, as the air is ripe with the excitement of discovery.
These are not college students, though they are studying at a college campus, and learning material that is not normally taught until the undergraduate or even the graduate level.
Rather, these are mostly high school students, spending the summer learning and enjoying themselves, studying theoretical computer science.
But if this is computer science...
...where are the computers?
While it is true that computers and programming are a major part of modern computer science, the Program in Algorithmic and Combinatorial Thinking (PACT) – supported partially through the National
Science Foundation – goes beyond that. The program teaches students about the mathematics and algorithms fundamental to the computer science field. Many summer programs teach high school students
programming and application development, but this five-week intensive course is one of very few with a theoretical and proof-based emphasis.
The only requirements to participate are
high school algebra,
a willingness to work hard and be challenged,
and, above all,
the desire to learn.
Dividing Polynomials - Definition, Synthetic Division, Long Division, and Examples
Polynomials are mathematical expressions consisting of one or several terms, each of which has a variable raised to a power. Dividing polynomials is an important operation in algebra: finding the quotient and remainder when one polynomial is divided by another. In this article, we will look at the main approaches to dividing polynomials, namely synthetic division and long division, and give examples of how to use them.
We will also discuss the importance of dividing polynomials and its applications in various areas of mathematics.
Significance of Dividing Polynomials
Dividing polynomials is an important operation in algebra with applications in many areas of mathematics, including number theory, calculus, and abstract algebra. It is used to solve a wide range of problems, such as finding the roots of polynomial equations, computing limits of functions, and solving differential equations.
In calculus, dividing polynomials is used to find the derivative of a function, that is, the rate of change of the function at any point. The quotient rule of differentiation involves dividing two polynomials and is used to find the derivative of a function that is the quotient of two polynomials.
In number theory, dividing polynomials is used to study the properties of prime numbers and to factorize large numbers into their prime factors. It is also used to study algebraic structures such as fields and rings, which are fundamental ideas in abstract algebra.
In abstract algebra, dividing polynomials is used to define polynomial rings, algebraic structures that generalize the arithmetic of polynomials. Polynomial rings are used in many areas of mathematics, including algebraic number theory and algebraic geometry.
Synthetic Division
Synthetic division is a technique for dividing a polynomial by a linear factor of the form (x - c), where c is a constant. The approach is based on the fact that if f(x) is a polynomial of degree n, then dividing f(x) by (x - c) gives a quotient polynomial of degree n-1 and a remainder equal to f(c).
The synthetic division algorithm consists of writing the coefficients of the polynomial in a row, using the constant c as the divisor, and carrying out a sequence of operations to work out the quotient and remainder. The result is a simplified form of the polynomial that is easier to work with.
Long Division
Long division is a method for dividing one polynomial by another. The approach is based on the fact that if f(x) is a polynomial of degree n and g(x) is a polynomial of degree m, where m ≤ n, then dividing f(x) by g(x) gives a quotient polynomial of degree n-m and a remainder of degree m-1 or less.
The long division algorithm consists of dividing the highest degree term of the dividend by the highest degree term of the divisor and then multiplying the result by the whole divisor. The product is subtracted from the dividend to get a new dividend. The procedure is repeated until the degree of the remainder is lower than the degree of the divisor.
Examples of Dividing Polynomials
Here are some examples of dividing polynomial expressions:
Example 1: Synthetic Division
Let's say we need to divide the polynomial f(x) = 3x^3 + 4x^2 - 5x + 2 by the linear factor (x - 1). We can apply synthetic division to simplify the expression:

1 |  3   4  -5   2
  |      3   7   2
  |----------------
     3   7   2   4
The outcome of the synthetic division is the quotient polynomial 3x^2 + 7x + 2 and the remainder 4. Hence, we can state f(x) as:
f(x) = (x - 1)(3x^2 + 7x + 2) + 4
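The synthetic division procedure can be sketched in Python. This is a minimal illustrative implementation of the method (not from the article); coefficients are ordered from highest to lowest degree:

```python
def synthetic_division(coeffs, c):
    """Divide a polynomial (coefficient list, highest degree first)
    by the linear factor (x - c).
    Returns (quotient_coeffs, remainder)."""
    row = [coeffs[0]]                     # bring down the leading coefficient
    for a in coeffs[1:]:
        row.append(a + row[-1] * c)       # multiply by c and add the next coefficient
    return row[:-1], row[-1]              # last entry is the remainder f(c)

# Example 1 above: (3x^3 + 4x^2 - 5x + 2) / (x - 1)
print(synthetic_division([3, 4, -5, 2], 1))  # ([3, 7, 2], 4)
```

The returned quotient [3, 7, 2] corresponds to 3x^2 + 7x + 2, with remainder 4.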
Example 2: Long Division
Let's say we need to divide the polynomial f(x) = 6x^4 - 5x^3 + 2x^2 + 9x + 3 by the polynomial g(x) = x^2 - 2x + 1. We can utilize long division to simplify the expression:
First, we divide the highest degree term of the dividend by the highest degree term of the divisor to get:

6x^2

Next, we multiply the whole divisor by the quotient term, 6x^2, to obtain:
6x^4 - 12x^3 + 6x^2
We subtract this from the dividend to attain the new dividend:
6x^4 - 5x^3 + 2x^2 + 9x + 3 - (6x^4 - 12x^3 + 6x^2)
which simplifies to:
7x^3 - 4x^2 + 9x + 3
We repeat the process, dividing the highest degree term of the new dividend, 7x^3, by the highest degree term of the divisor, x^2, to obtain:

7x

Next, we multiply the entire divisor by the quotient term, 7x, to get:
7x^3 - 14x^2 + 7x
We subtract this from the previous dividend to obtain the new dividend:
7x^3 - 4x^2 + 9x + 3 - (7x^3 - 14x^2 + 7x)
that simplifies to:
10x^2 + 2x + 3
We repeat the method once more, dividing the highest degree term of the new dividend, 10x^2, by the highest degree term of the divisor, x^2, to get:

10

Next, we multiply the whole divisor by the quotient term, 10, to get:
10x^2 - 20x + 10
We subtract this from the new dividend to obtain the remainder:
10x^2 + 2x + 3 - (10x^2 - 20x + 10)
which simplifies to:
22x - 7
Thus, the result of the long division is the quotient polynomial 6x^2 + 7x + 10 and the remainder 22x - 7. We can express f(x) as:
f(x) = (x^2 - 2x + 1)(6x^2 + 7x + 10) + (22x - 7)
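The long division procedure can also be sketched in Python. This is a simple coefficient-list implementation for illustration (names and structure are ours, not from the article):

```python
def poly_divmod(num, den):
    """Polynomial long division; coefficient lists ordered highest degree first.
    Returns (quotient_coeffs, remainder_coeffs)."""
    num = list(num)                        # work on a copy of the dividend
    quotient = []
    while len(num) >= len(den):
        factor = num[0] / den[0]           # divide the leading terms
        quotient.append(factor)
        for i, d in enumerate(den):        # subtract factor * divisor
            num[i] -= factor * d
        num.pop(0)                         # the leading term cancels exactly
    return quotient, num                   # what remains is the remainder

# Divide 6x^4 - 5x^3 + 2x^2 + 9x + 3 by x^2 - 2x + 1
print(poly_divmod([6, -5, 2, 9, 3], [1, -2, 1]))  # ([6.0, 7.0, 10.0], [22.0, -7.0])
```

The quotient [6, 7, 10] corresponds to 6x^2 + 7x + 10 and the remainder [22, -7] to 22x - 7.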
In conclusion, dividing polynomials is an important operation in algebra with many applications across mathematics. Understanding the different approaches to dividing polynomials, such as synthetic division and long division, can help you solve intricate problems efficiently. Whether you're a student struggling with algebra or a professional working in a field that involves polynomial arithmetic, mastering polynomial division is crucial.
If you need help understanding dividing polynomials or any other algebraic topics, consider contacting us at Grade Potential Tutoring. Our expert tutors are available remotely or in person to provide individualized and effective tutoring services to help you succeed. Contact us today to schedule a tutoring session and take your mathematics skills to the next level.
Evaluate Floats
Evaluate Floats:
The Evaluate Floats node is designed to evaluate mathematical expressions involving floating-point numbers, providing a versatile tool for AI artists to perform complex calculations within their
workflows. This node leverages the simpleeval library to interpret and compute expressions that can include variables a, b, and c. The primary benefit of this node is its ability to return the result
in multiple formats: integer, float, and string, making it adaptable for various downstream processes. Additionally, it offers an option to print the results to the console, which can be useful for
debugging and verification purposes. This node is essential for scenarios where precise numerical computations are required, enhancing the efficiency and flexibility of your creative projects.
Evaluate Floats Input Parameters:
This parameter takes a string representing the mathematical expression to be evaluated. The expression can include the variables a, b, and c, which are defined by the corresponding input parameters.
The function of this parameter is to specify the calculation you want to perform. There are no strict minimum or maximum values for this parameter, but it must be a valid mathematical expression that
simpleeval can interpret.
This boolean parameter determines whether the results of the evaluation should be printed to the console. If set to "True", the node will output the results in the console, which can be helpful for
debugging. The default value is "False".
This parameter represents the value of the variable a in the mathematical expression. It is a floating-point number that can be used within the python_expression. The default value is 0. There are no
strict minimum or maximum values, but it should be a valid float.
This parameter represents the value of the variable b in the mathematical expression. It is a floating-point number that can be used within the python_expression. The default value is 0. There are no
strict minimum or maximum values, but it should be a valid float.
This parameter represents the value of the variable c in the mathematical expression. It is a floating-point number that can be used within the python_expression. The default value is 0. There are no
strict minimum or maximum values, but it should be a valid float.
Evaluate Floats Output Parameters:
This output parameter provides the result of the evaluated expression as an integer. It is useful when you need the result in a whole number format for further processing or integration with other
nodes that require integer inputs.
This output parameter provides the result of the evaluated expression as a floating-point number. It is essential for scenarios where precision is crucial and the result needs to be in a decimal format.
This output parameter provides the result of the evaluated expression as a string. This format is useful for logging, display purposes, or when the result needs to be concatenated with other strings.
Evaluate Floats Usage Tips:
• Use the print_to_console parameter set to "True" during the initial setup and debugging phase to verify that your expressions are being evaluated correctly.
• Define the variables a, b, and c with meaningful values that align with your specific use case to ensure accurate calculations.
• Utilize the different output formats (integer, float, string) based on the requirements of subsequent nodes or processes in your workflow.
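For intuition, the node's behavior can be approximated in plain Python. The real node relies on the simpleeval library; the sketch below substitutes a small ast-based evaluator, and the function name evaluate_floats is ours, not the node's internal API:

```python
import ast
import operator as op

# Arithmetic operators allowed by the restricted evaluator.
_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
        ast.Div: op.truediv, ast.Pow: op.pow, ast.Mod: op.mod,
        ast.USub: op.neg, ast.UAdd: op.pos}

def _eval(node, names):
    if isinstance(node, ast.Expression):
        return _eval(node.body, names)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.Name) and node.id in names:
        return names[node.id]
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left, names), _eval(node.right, names))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.operand, names))
    raise ValueError("invalid expression")

def evaluate_floats(python_expression, a=0.0, b=0.0, c=0.0, print_to_console=False):
    """Evaluate an arithmetic expression over a, b, c and return the
    result as (int, float, str), mirroring the node's three outputs."""
    result = _eval(ast.parse(python_expression, mode="eval"),
                   {"a": a, "b": b, "c": c})
    outputs = (int(result), float(result), str(result))
    if print_to_console:
        print("Evaluate Floats:", outputs)
    return outputs

print(evaluate_floats("a * b + c", a=2.0, b=3.0, c=1.5))  # (7, 7.5, '7.5')
```

Note how the integer output truncates the fractional part, which is why downstream nodes that need precision should consume the float output instead.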
Evaluate Floats Common Errors and Solutions:
Invalid expression error
• Explanation: This error occurs when the python_expression contains invalid syntax or unsupported operations.
• Solution: Ensure that the expression is a valid mathematical formula and only includes supported operations and variables (a, b, c).
Division by zero error
• Explanation: This error happens when the expression involves division by zero.
• Solution: Check your expression and input values to avoid division by zero scenarios. Ensure that the denominator in any division operation is not zero.
TypeError: unsupported operand type(s)
• Explanation: This error occurs when the expression includes operations between incompatible types.
• Solution: Verify that all variables and constants in the expression are of compatible types (e.g., all should be numbers).
ValueError: could not convert string to float
• Explanation: This error happens when the input parameters a, b, or c are not valid floating-point numbers.
• Solution: Ensure that the values provided for a, b, and c are valid floats. Check for any non-numeric characters or invalid formats.
Max's Blog
Jan 06 2006
Jury Compromise
Time Limit:1000MS Memory Limit:10000K
Total Submit:987 Accepted:246 Special Judged
In Frobnia, a far-away country, the verdicts in court trials are determined by a jury consisting of members of the general public. Every time a trial is set to begin, a jury has to be selected, which
is done as follows. First, several people are drawn randomly from the public. For each person in this pool, defence and prosecution assign a grade from 0 to 20 indicating their preference for this
person. 0 means total dislike, 20 on the other hand means that this person is considered ideally suited for the jury.
Based on the grades of the two parties, the judge selects the jury. In order to ensure a fair trial, the tendencies of the jury to favour either defence or prosecution should be as balanced as
possible. The jury therefore has to be chosen in a way that is satisfactory to both parties.
We will now make this more precise: given a pool of n potential jurors and two values di (the defence’s value) and pi (the prosecution’s value) for each potential juror i, you are to select a jury of
m persons. If J is a subset of {1,…, n} with m elements, then D(J ) = sum(dk) k belong to J
and P(J) = sum(pk) k belong to J are the total values of this jury for defence and prosecution.
For an optimal jury J , the value |D(J) – P(J)| must be minimal. If there are several jurys with minimal |D(J) – P(J)|, one which maximizes D(J) + P(J) should be selected since the jury should be as
ideal as possible for both parties.
You are to write a program that implements this jury selection process and chooses an optimal jury given a set of candidates.
The input file contains several jury selection rounds. Each round starts with a line containing two integers n and m. n is the number of candidates and m the number of jury members.
These values will satisfy 1<=n<=200, 1<=m<=20 and of course m<=n. The following n lines contain the two integers pi and di for i = 1,...,n. A blank line separates each round from the next. The file ends with a round that has n = m = 0.

Output
For each round output a line containing the number of the jury selection round ('Jury #1', 'Jury #2', etc.). On the next line print the values D(J) and P(J) of your jury as shown below and on another line print the numbers of the m chosen candidates in ascending order. Output a blank before each individual candidate number. Output an empty line after each test case.

Sample Input
4 2
1 2
2 3
4 1
6 2
0 0

Sample Output
Jury #1
Best jury has value 6 for prosecution and value 4 for defence:
 2 3

Hint
If your solution is based on an inefficient algorithm, it may not execute in the allotted time.

Source
Southwestern European Regional Contest 1996
ssmax says:
January 6th, 2006 11:01:27
Given positive integers m, n (m<=200, n<=20) and two length-m arrays p, d satisfying 0<=p[i]<=20 and 0<=d[i]<=20, find a length-n subsequence S of 1..m such that: 1. the difference between ∑p[Si] and ∑d[Si] is as small as possible; 2. subject to 1, the sum of ∑p[Si] and ∑d[Si] is as large as possible; 3. when there are
ssmax says:
January 6th, 2006 11:02:02
ssmax says:
January 6th, 2006 11:03:18
Choose n (n<=N) of the m (m<=M) candidates so that, for a given score difference X, the score sum is maximal. Consider the problem size: the selected jury has at most 20 members and each person's score ranges from 0 to +20, so the jury's score difference lies within [-400, +400], i.e. at most 801 possible values. To solve the problem: first use dynamic programming to find, for each given score difference, the maximal score sum; then enumerate to find the minimal score difference. Three-dimensional array Map[i][j][k]: first dimension, candidate index, meaning the first i of the M candidates; second dimension, number selected, meaning j people are chosen; third dimension, score difference, meaning the difference between prosecution and defence is k. Array element: the optimal plan.
1. Mark every plan in the DP table as "nonexistent".
2. Plant a seed: mark map[0][0][0] as existing, with the plan of selecting nobody and a score sum of 0.
Assuming that, for the first m (given) people, the optimal plans (maximal score sum) for selecting n (n<=m, n<=N) people with score difference X (every possible value) have all been found, then:
FOR i:=0 TO N-1 DO FOR j:=min(X) TO max(X) DO IF (map[m][i][j] + person m+1) > map[m][i+1][j+diff[m+1]]
If both map[M][N][+i] and map[M][N][-i] exist,
ssmax says:
January 6th, 2006 11:04:54
Hi, I solved this problem. The program ran in about 0.422 sec and used 768 kb, but there are people who solved it in 0.050 - 0.110 sec. How is that possible?
Here is my solution:
Let diff[i]=d[i]-p[i], sum[i]=d[i]+p[i], then the problem is to choose numbers p[1]..p[m] :
p[i] belongs 1..n
p[i]<>p[j] for i<>j;
|s1| is minimal, where s1 = diff[p[1]] + diff[p[2]] + ... + diff[p[m]];
And when there are many numbers that minimize |s1| , then choose the one which maximizes s2=sum[p[1]]+sum[p[2]]+..sum[p[m]].
I used such DP interpretation:
It’s trivial to solve the problem for m=1.
If there is a solution for m=k, then we can build the solution for m=k+1 using procedure:
Let the solution for m=k be saved in an array A[i][j], where A[i][j] shows whether it's possible to choose k candidates from the first i candidates so that s1=j, and if it's possible then C[i][j] shows the maximal possible value of s2.
Then we can build an arrays B[i][j], D[i][j] for k+1 (which have the same function as A[i][j], C[i][j] for k):
for i=1 to n do
for j=<minimum possible sum> to <maximum possible sum> do
B[i][j]=B[i-1][j]; D[i][j]=D[i-1][j]; // do not include candidate number i
if( A[i-1][j-diff[i]] and ( (not B[i][j]) or (D[i][j] < C[i-1][j-diff[i]]+sum[i]) ) )
then B[i][j]=true; D[i][j]=C[i-1][j-diff[i]]+sum[i]; // include candidate number i
Then the smallest j>=0 for which (A[n][j] or A[n][-j]) == true is the minimum value for |s1|.
To get chosen candidates, on each stage save whether a candidate was included or not.
Then just make a recursion descent.
I implemented the algorithm very carefully (using a binary array to save whether a candidate was included or not), but the program is still slow.
Is it due to algorithm or implementation?
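For reference, here is a compact Python sketch of the DP described in the comments above (variable names and the offset-by-F indexing are our choices, not taken from the original posts):

```python
def select_jury(p, d, m):
    """DP over (number of jurors chosen, score difference).

    p, d: prosecution/defence grades per candidate (candidates 1-indexed in output).
    Returns (D, P, jurors): defence sum, prosecution sum, chosen candidates.
    """
    n = len(p)
    F = 20 * m                        # |sum of (d - p)| can never exceed 20*m
    best = [[-1] * (2 * F + 1) for _ in range(m + 1)]    # max (d+p) sum; -1 = unreachable
    pick = [[None] * (2 * F + 1) for _ in range(m + 1)]  # chosen candidate tuples
    best[0][F] = 0                    # seed: nobody chosen, difference 0 (offset by F)
    pick[0][F] = ()
    for i in range(n):
        diff, s = d[i] - p[i], d[i] + p[i]
        for j in range(min(i, m - 1), -1, -1):   # go downward so each candidate is used once
            for k in range(2 * F + 1):
                if best[j][k] < 0:
                    continue
                nk = k + diff
                if 0 <= nk <= 2 * F and best[j][k] + s > best[j + 1][nk]:
                    best[j + 1][nk] = best[j][k] + s
                    pick[j + 1][nk] = pick[j][k] + (i + 1,)
    for delta in range(F + 1):        # smallest |difference| first
        ks = [k for k in {F - delta, F + delta} if best[m][k] >= 0]
        if ks:
            k = max(ks, key=lambda k: best[m][k])   # break ties by larger total sum
            D = (best[m][k] + (k - F)) // 2
            P = (best[m][k] - (k - F)) // 2
            return D, P, sorted(pick[m][k])

# Sample round from the problem statement: expect D=4, P=6, jurors 2 and 3.
print(select_jury([1, 2, 4, 6], [2, 3, 1, 2], 2))  # (4, 6, [2, 3])
```

Storing the chosen-candidate tuples directly, as here, is wasteful; the faster submissions mentioned above likely recover the jury by backtracking through the table instead.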
Increasing the discriminatory power of bounding models using problem-specific knowledge when viewing design as a sequential decision process
A recent design paradigm seeks to overcome the challenges associated with broadly exploring a design space requiring computationally expensive model evaluations by formally viewing design as a
sequential decision process (SDP). With the SDP, a set of computational models of increasing fidelity are used to sequentially evaluate and systematically eliminate inefficient design alternatives
from further consideration. Key to the SDP are concept models that are of lower fidelity than the true function and are constructed in such a way that when used to evaluate a given design, they
return two-sided limits that bound the precise value of the decision criteria, hence referred to as bounding models. Efficiency in the SDP is achieved by using such low-fidelity, inexpensive models,
early in the design process to eliminate inefficient design alternatives from consideration after which a higher fidelity, more computationally expensive model, is executed, but only on those design
alternatives that appear promising. In general, low-fidelity models trade off discriminatory power for computational complexity; however, it can be demonstrated that knowledge of the underlying
physics and/or mathematics can be used to increase the discriminatory power of the lower fidelity models for a given computational cost. Increasing the discriminatory power of the bounding models
directly translates into an increase in the efficiency of the SDP. This paper discusses and demonstrates how knowledge of the underlying physics and/or mathematics, otherwise referred to as
“problem-specific knowledge,” such as monotonicity and concavity can be used to increase the discriminatory power of the bounding models in the context of the SDP and for engineering designs
characterized by demand and capacity relationships. Furthermore, the concept of constructing the bounding models to systematically defer decisions on a subset of design variables, for example for a
subsystem, is demonstrated, while retaining the desirable convergence guarantees to the optimal set. The utility of leveraging knowledge to increase discriminatory power and systematically deferring
decisions through bounding models in the context of the SDP is demonstrated through two design problems: (1) the notional design of an engine-propeller combination to minimize takeoff distance for a
light civil aircraft, and (2) the design of a building’s seismic force resisting structural-foundation system where the performance is evaluated on the basis of minimizing drift and total system cost.
|
{"url":"https://pure.psu.edu/en/publications/increasing-the-discriminatory-power-of-bounding-models-using-prob","timestamp":"2024-11-14T02:25:59Z","content_type":"text/html","content_length":"59126","record_id":"<urn:uuid:5cc35495-7bd7-4207-9a28-4aaa5ad3b1ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00515.warc.gz"}
|
What is a printers ruler called? - Explained
Print shop equipment: the printer’s ruler, or “line gauge”
What is a desk ruler?
Desk rulers are used for three main purposes: to measure, to aid in drawing straight lines, and as a straight guide for cutting and scoring with a blade. Practical rulers have distance markings along
their edges. A line gauge is a type of ruler used in the printing industry.
Why is it called a ruler?
You can measure the diameter of a circle with a ruler, and you can also draw a perfectly straight line using its edge. Both meanings stem from the verb rule, “to exercise power” or “to control,”
which came to also mean “mark with lines” in the 1590s.
Which side of the ruler is cm?
A ruler is used to measure the length in both metric and customary units. The rulers are marked with standard distance in centimeters in the top and inches in the bottom and the intervals in the
ruler are called hash marks.
Improving Multi-Objective Optimization Methods of Water Distribution Networks
Technion—Israel Institute of Technology, Faculty of Civil and Environmental Engineering, Haifa 32000, Israel
Department of Civil Engineering, CEMMPRE, University of Coimbra, 3004-531 Coimbra, Portugal
Author to whom correspondence should be addressed.
Submission received: 31 May 2023 / Revised: 6 July 2023 / Accepted: 7 July 2023 / Published: 12 July 2023
Water distribution network design is a complex multi-objective optimization problem, and multi-objective evolutionary algorithms (MOEAs) such as NSGA II have been widely used to solve it. However, as networks get larger, NSGA II struggles to find the diverse and uniform solutions that are critical in multi-objective optimization. This research proposes an improved version of NSGA II that uses three new generation methods to target different regions of the Pareto front and thus increase the number of solutions in critical regions. These methods include saving an archive, local search around the extreme and uncrowded regions of the Pareto front, and local search around the knee area of the Pareto front. The improved NSGA II is tested on benchmark networks of different sizes and compared to the best-known Pareto fronts of the networks determined by MOEAs. The results show that the proposed algorithm outperforms the original NSGA II in terms of broadening the Pareto front solution range, increasing solution density, and discovering more non-dominated solutions. The improved NSGA II can find solutions that cover all parts of the Pareto front using a single algorithm without increasing the computational effort.
1. Introduction
Water supply systems are an essential part of a vast urban infrastructure system that underpins most economic activity and is used by urban populations to perform basic domestic activities [
]. A water distribution network is a hydraulic infrastructure that is usually the most costly component of urban infrastructure that delivers water to nodes while maintaining predetermined pressure
requirements [
]. Designing such networks involves choosing optimal combinations of values for decision variables such as pipe sizes, tank shapes and sizes, pump types, and valve locations, while also satisfying
several constraints [
]. Optimal design problems for Water Distribution Systems (WDSs) typically involve selecting the best pipe size, which is challenging due to the problems’ high dimensionality, discreteness, and
non-linearity [
]. To solve these problems, advanced optimization techniques are required due to the complexity of WDSs and their components’ arrangement [ ].
WDS optimization aims to maximize system performance and profitability while minimizing resource consumption and cost, all of which are typically limited [
]. As constructing these systems is costly, the optimization objective is to find the least expensive network [
]. The design of water systems is a multi-objective optimization problem (MOP) that needs to simultaneously optimize a number of objectives in addition to cost, such as operational, life cycle, and
maintenance costs, system reliability, and water quality [
]. Optimizing for one objective may result in suboptimal performance for others [
]. To address this complex problem, engineers have used single-objective optimization models in the past, but multi-objective optimization approaches have become increasingly popular in recent years,
providing a way to investigate trade-offs between objectives and make informed decisions.
Optimizing water distribution networks (WDNs) is crucial for their performance and reliability. Network resilience, which refers to the network’s ability to recover from internal or external
disturbances, is an important aspect of this optimization [
]. Incorporating network resilience in the objective function can lead to better results in terms of surplus power and redundancy [
]. Mechanical or hydraulic failure in WDNs can cause increased head losses and network failure, which is why it is essential to have excess power available for internal dissipation. Simply optimizing
for cost can leave the network vulnerable to hydraulic and mechanical failures [
]. Including reliability in optimization can be computationally intensive, so approximate methods like the resilience index [
] can be used. The inclusion of network resilience in WDN optimization is critical for maintaining the network’s performance and reliability in both normal and failure conditions [ ].
Various optimization methods are available for water distribution systems and can be categorized into deterministic and stochastic techniques. Deterministic techniques involve linear programming,
non-linear programming, and dynamic programming, while stochastic techniques include population-based algorithms and single point-based methods. Metaheuristics, a type of stochastic technique,
provides a near-optimal solution in a single run by using principles from nature and involving random components. Genetic algorithms, inspired by mechanisms of biological evolution, have been widely
applied in water resources planning and management [
]. These algorithms are capable of solving nonlinear, nonconvex, multimodal, and discrete problems, expanding their capabilities in handling complex environmental and water resource applications [ ].
Classical search and optimization methods may not be effective in dealing with MOPs due to their inability to find multiple solutions in a single run and the challenges in handling problems with
discrete variables and multiple optimal solutions [
]. Evolutionary algorithms (EAs), particularly MOEAs, are effective for MOPs as they use a population of solutions to find multiple Pareto-optimal solutions in a single run [
]. Diversity preserving mechanisms can be incorporated into MOEAs to find widely different Pareto-optimal solutions [
]. EAs are less affected by the shape or continuity of the Pareto front, unlike classical methods [
]. In water distribution system design, the Pareto archived evolution strategy (PAES), strength Pareto evolutionary algorithm (SPEA-2), and non-dominated sorting genetic algorithm II (NSGA-II) [
] are the most widely used MOEAs that effectively handle multiple objectives and find a set of solutions that are not dominated by others. The use of these MOEAs has greatly improved the efficiency
and effectiveness of the water distribution system design process.
The Non-Dominated Sorting Genetic Algorithm (NSGA-II) is a widely used algorithm for solving MOPs with both continuous and discrete variables [
]. It is considered one of the most representative algorithms in multi-objective optimization [
] and has been recognized as the EA state-of-the-art in solving WDS-related optimization problems [
]. NSGA-II is an improved version of the Non-Dominated Sorting Genetic Algorithm (NSGA) that overcomes some of its limitations, such as the absence of elitism and the need to define sharing
parameters for diversity preservation [
]. The design of NSGA-II incorporates an elitist strategy to expand the sample space and improve optimization accuracy, and the crowding distance operator to preserve diversity. NSGA-II is
computationally efficient, with an overall complexity of at most O(MN²), where M is the number of objective functions and N is the population size. The fast non-dominated sorting method used in NSGA-II reduces computational complexity, making it efficient and powerful
in exploring the decision space of MOPs. NSGA-II has been widely implemented and applied to various MOPs, demonstrating its effectiveness and reliability.
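As a concrete illustration of the two operators discussed above, here is a minimal sketch of non-dominated sorting and crowding distance. This is a naive, readable version written for this text, not the paper's implementation: the fast sorting variant used in NSGA-II achieves the O(MN²) bound, whereas this version is asymptotically slower. Both objectives are assumed to be minimized (e.g., cost and negated resilience).

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    # Peel off successive fronts; front 0 is the non-dominated set.
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def crowding_distance(front):
    # Larger distance = less crowded; boundary points get infinity.
    n = len(front)
    dist = [0.0] * n
    for k in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for r in range(1, n - 1):
            dist[order[r]] += (front[order[r + 1]][k] - front[order[r - 1]][k]) / span
    return dist
```

In survival selection, solutions in earlier fronts are preferred, and ties within a front are broken in favor of the larger crowding distance, which is what preserves diversity along the front.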
NSGA-II has been combined with other methods and tools to enhance its effectiveness in solving optimization problems in WDS. The Robust NSGA-II (RNSGA-II) [
] algorithm was developed based on NSGA II to ensure that the solutions are robust enough and able to sustain longer over multiple generations in the optimization process. The Epsilon-dominance
Non-dominated Sorting Genetic Algorithm II (e-NSGA-II) is another optimization algorithm that is based on the original NSGA II that utilizes the e-dominance concept, adaptive population sizing, and
self-terminating algorithm to achieve well-spread and well-converged Pareto-optimal solutions. Non-dominated Sorting Genetic Algorithm III (NSGA-III) [
] is an improved version of NSGA-II using a reference-point-based non-dominated sorting approach. However, NSGA-II is still the most preferred algorithm compared to NSGA-III, and variations of
NSGA-II, such as RNSGA-II, e-NSGA-II, and a combination of NSGA-II with other methods, have shown that NSGA-II is still relevant and capable of handling the task [ ].
Wang et al. [
] applied five state-of-the-art MOEAs to twelve design problems collected from the literature, with minimum time invested in parameterization by using the recommended settings. The study found that
the MOEAs were complementary to each other, and that NSGA-II remained a good choice for two-objective optimization of water distribution systems (WDSs) and generally outperformed the other MOEAs in
terms of the number of solutions contributed to the best-known Pareto front of each problem. The spread (both extent and uniformity) of its contribution was also comparable, if not better, than other
MOEAs. Overall, the paper contributes to the best-known approximations to the true Pareto fronts of a wide range of benchmark problems, and the results are going to be used in this research for
comparison of some of the benchmark networks. However, NSGA-II and other MOEAs had limitations with intermediate and large-sized problems, resulting in a limited range of solutions with low
diversity. In order to overcome these limitations and achieve better results for multi-objective optimization problems, this study proposes an improved version of NSGA-II. In their work, Cunha et al. [ ] introduced MOSA-GR, a novel multi-objective simulated annealing algorithm equipped with innovative searching mechanisms. MOSA-GR demonstrated superior performance by generating Pareto fronts that
surpassed those produced by MOEAs, even when merging their results. In this study, the new searching mechanisms from MOSA-GR will be applied to NSGA-II to develop an improved algorithm for better
results and contribute to the advancement of the existing literature on multi-objective optimization algorithms, enabling effective handling of complex real-world problems with enhanced diversity and
density of solutions.
2. Materials and Methods
2.1. Problem Definition of the Multi-Objective Problems
The optimization problems in Water Distribution System (WDS) design usually involve two objective functions: minimizing the cost and maximizing the network reliability or robustness. In this context, the objectives defined in the model formulation are minimizing the cost while maximizing resilience. The decision variables are the pipe diameters, which can be adjusted to obtain an optimal solution that satisfies both objectives. The cost objective function considers the expenditure on pipe components as the total cost of a design solution. The unit cost of a specific pipe diameter for each problem is derived from the paper by [
] (Equation (1) below).
Resilience in the context of WDS design refers to the ability of the system to continue functioning in the presence of failures or disturbances. The specific definition of resilience may vary
depending on the WDS design optimization problem. In this study, the resilience index proposed by [
] (Equation (2) below) is used as a measure to optimize the reliability of the network. It is recognized that some authors have proposed updates to Todini’s index of resilience and that other indexes
have been proposed [
], however it is used in this paper for the following reasons: (1) it is widely used in the literature on water distribution systems optimization and (2) it enables a comparison with the work of Wang
et al. [
]. The resilience index is defined as the ratio of power dissipated in the network to the maximum power that would be dissipated in order to satisfy the design demand and head requirements at the
junction nodes.
The objectives to be optimized to obtain the optimal WDS design solution are presented in mathematical form as Equations (1) and (2), where the cost is minimized while the resilience is maximized; the nodal uniformity used in Equation (2) is given by Equation (3):
$Min\ Cost = \sum_{ij}^{N} C_{ij} \times L_{ij} + \sum C_{p}$(1)
$Max\ I_{r} = \dfrac{\sum_{j=1}^{nn} C_{j} Q_{j} \left( H_{j} - H_{j}^{req} \right)}{\sum_{k=1}^{nr} Q_{k} H_{k} + \sum_{i=1}^{npu} \frac{P_{i}}{\gamma} - \sum_{j=1}^{nn} Q_{j} H_{j}^{req}}$(2)
$C_{j} = \dfrac{\sum_{i=1}^{np_{j}} D_{i}}{np_{j} \times \max(D_{i})}$(3)
where $I_r$ = network resilience; nn = number of demand nodes; $C_j$, $Q_j$, $H_j$, and $H_j^{req}$ = uniformity, demand, actual head, and minimum head of node j; nr = number of reservoirs; $Q_k$ and $H_k$ = discharge and actual head of reservoir k; npu = number of pumps; $P_i$ = power of pump i; γ = specific weight of water; $np_j$ = number of pipes connected to node j; $D_i$ = diameter of pipe i connected to demand node j.
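To make Equations (2) and (3) concrete, the following is a minimal sketch in Python. The data layout, argument names, and function names are assumptions made here for illustration, not the authors' implementation, and unit consistency is left to the caller.

```python
def uniformity(diameters):
    # C_j (Eq. (3)): sum of pipe diameters at the node divided by
    # (number of pipes × largest diameter at the node)
    return sum(diameters) / (len(diameters) * max(diameters))

def resilience_index(nodes, reservoirs, pump_powers, gamma):
    """Resilience index Ir (Eq. (2)).

    nodes:       list of (diameters_at_node, Qj, Hj, Hj_req) per demand node
    reservoirs:  list of (Qk, Hk)
    pump_powers: list of Pi
    gamma:       specific weight of water (units must be consistent)
    """
    numerator = sum(uniformity(d) * q * (h - h_req) for d, q, h, h_req in nodes)
    denominator = (sum(q * h for q, h in reservoirs)
                   + sum(p / gamma for p in pump_powers)
                   - sum(q * h_req for _, q, _, h_req in nodes))
    return numerator / denominator
```

In practice the heads $H_j$ would come from a hydraulic simulation of the candidate design rather than being supplied directly.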
2.2. Improved NSGA II
The NSGA II algorithm is widely used for solving multi-objective optimization problems. However, the quality of the solutions generated by the algorithm can be improved. Therefore, the main objective
of improving the NSGA II algorithm is to enhance the quality of its solutions. This involves increasing the density of solutions, especially in the important regions of the Pareto front, such as the
knee area, and improving the diversity of solutions by expanding their range. To achieve this, the improved NSGA II introduces novel improvements in both the diversity and convergence of the solutions.
One of the main developments of the improved NSGA II algorithm is the implementation of new methods for offspring population generation. Unlike the original NSGA II, which uses all parent population
solutions in tournament selection to generate the offspring population, the improved NSGA II focuses on specific solutions among the nondominated solutions found so far to generate new solutions in
addition to the original method. By focusing on specific solutions, the improved NSGA II can generate better candidate solutions, ultimately leading to a better quality of solutions in the Pareto front.
The improved NSGA II algorithm introduces different generation methods based on [
], which is crucial to achieving fast convergence and a high diversity of solutions that are uniformly distributed in the objective space. The different generation methods, including those that
target the maximum and minimum regions of the Pareto front and the uncrowded area of the front, are used to increase the range and diversity of solutions. The knee area of the Pareto front is given
special attention, as it is a critical region with solutions that exhibit a small improvement in one objective causing a large deterioration in the other.
2.3. Generation Methods
The algorithm generates an offspring population using four main generation processes (Gs). One of them is selected in each iteration, and each process becomes available from a different starting iteration (ITm) that is used to control the process. Additionally, different probabilities (represented by Ps) are used to select among the generation methods. These probabilities are chosen so that the new solutions perform a global search in the early stages of the process and then add a more localized search towards the end, in addition to the global search.
The following are the types of searching methods presented as offspring generation methods:
• G1 method: This is the original method used in NSGA II. It involves selecting the N (population size) parent populations and N offspring populations, evaluating their objective values, and sorting them based on non-dominated sorting and crowding distance. The parent populations are then randomly paired to create child populations through crossover and mutation.
• G2 method: This method saves the new offspring population generated in each iteration to the archive. The archive contains all the populations generated from the start of the iteration. Parent
populations are randomly selected from the archive to create new offspring populations through crossover and mutation.
• G3 method: This method focuses on specific areas of the Pareto front, such as the extreme and uncrowded areas shown in Figure 1. At each iteration, a number of points are selected from the required region, and N (population size) offspring populations are generated from these points.
If the selected points are from the maximum region, the offspring population is generated via bound type crossover and weighted average with the maximum possible diameter as in Equation (4), where
Rand is a random number between 0 and 1.
Offspring = Rand × Parent + (1 − Rand) × Maximum diameter,(4)
On the other hand, if the selected points are from the minimum and uncrowded areas, the offspring population is generated in Equation (5) by randomly assigning a diameter value of the next higher or
lower possible diameter value to the selected pipes, where “gene” is the randomly selected pipe.
Offspring(gene) = Parent(gene) ± 1,(5)
where Gene = randomly selected pipe, Parent = selected points, and Offspring = population generated from the selected points.
• G4 method: This method generates offspring populations using the knee area of the Pareto front as the parent population. The knee is found by calculating the Euclidean distance of all Pareto front points to the corner, shown in Figure 1, and selecting the points with the least distance. After selecting the parent population, the offspring population is generated using Equation (5).
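The generation rules of G3 and G4 can be sketched as follows. This is a simplified illustration, not the authors' code: the encoding of solutions as diameter-option indices for Equation (5), the normalization inside the knee distance, and all function names are assumptions made here.

```python
import random

def offspring_max_region(parent, max_diameter):
    # Eq. (4): weighted average pulling every pipe toward the largest diameter
    r = random.random()
    return [r * d + (1 - r) * max_diameter for d in parent]

def offspring_step(parent, n_options):
    # Eq. (5): move one randomly chosen pipe ("gene") to the next higher
    # or lower diameter option (solutions encoded as option indices)
    child = list(parent)
    gene = random.randrange(len(child))
    child[gene] = min(max(child[gene] + random.choice((-1, 1)), 0), n_options - 1)
    return child

def knee_points(front, k):
    # G4: the k points of a two-objective front closest to the ideal corner,
    # measured after normalizing each objective to [0, 1]
    lo = [min(p[i] for p in front) for i in (0, 1)]
    hi = [max(p[i] for p in front) for i in (0, 1)]

    def dist(p):
        x = (p[0] - lo[0]) / ((hi[0] - lo[0]) or 1.0)
        y = (p[1] - lo[1]) / ((hi[1] - lo[1]) or 1.0)
        return (x * x + y * y) ** 0.5

    return sorted(front, key=dist)[:k]
```

The clamp in `offspring_step` keeps the perturbed index inside the list of commercially available diameters, which is one reasonable way to handle Equation (5) at the boundaries.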
2.4. Pseudocode
The pseudocode provided in Figure 2 outlines the steps of the improved NSGA II procedure. The maximum number of iterations is divided into four stages, and each stage includes a different combination of generation methods, as shown in Figure 3. The algorithm starts with the original NSGA II method and adds the other methods one by one in each stage. In the early phase, only the G1 method is used; when the second stage is initiated, G2 is introduced. In the third stage, G3 becomes available for use, and likewise, G4 is brought into the fold in the fourth stage.
In each iteration, a generation method from the available methods in each stage is picked using a probability decision factor, and that method generates N offspring populations in that iteration. The
number of function evaluations each method uses is determined by the probability assigned to each method and the stage’s starting iteration. The four generating processes can be selected according to
the starting iteration number (ITm1, ITm2, ITm3, ITm4) and probabilities (P1, P2, P3, P4). These additional parameters (explained below in detail) are used to control the number of function
evaluations used by the generation processes and the results they can contribute.
2.5. Parameters
The selection of the four generating processes is based on three additional parameters to the NSGA II algorithm: the starting iteration number (ITm1, ITm2, ITm3, ITm4), probabilities (P1, P2, P3,
P4), and the number of selected points as a percentage of population size. These parameters are used to control the number of function evaluations used by the generation processes and the results
they can contribute.
• ITm1—iteration number where the G1 starts, which is equal to zero.
• ITm2—iteration number where the G2 starts.
• ITm3—iteration number where the G3 starts.
• ITm4—iteration number where the G4 starts.
The ITm parameter represents the iteration number at which the generation methods start and can take values from 0 to 1, where 0 represents the beginning of the iterations and 1 represents the end of
the iterations. The starting iteration of a generation method is determined by multiplying ITm by the maximum number of iterations as in Equation (6). For example, the starting iteration of G2 can be
calculated as ITm2 multiplied by the maximum number of iterations.
Starting iteration of a generation method = ITm × maximum number of iterations,(6)
where ITm = a parameter between 0 and 1.
The other parameter is the probability assigned to each method to be selected. In each iteration, a random number between 0 and 1 is generated to determine the probability of selecting a particular
generation method.
• P2—probability of G2 being selected starting ITm2.
• P3—probability of G3 being selected starting ITm3.
• P4—probability of G4 being selected starting ITm4.
The probabilities of G2, G3, and G4 are determined by P2, P3, and P4, respectively. The probabilities of these methods are constant from their starting iteration to the maximum number of iterations.
The probability of G1, on the other hand, changes throughout the stages. In stage one, G1 is the only method used, and therefore has a probability of 1. In stage two, there are two options: G2 with a
probability of P2 and G1 with a probability of (1 − P2). Similarly, in stage three, there are three options: G2 with a probability of P2, G3 with a probability of P3, and G1 with a probability of (1
− P2 − P3). The sum of probabilities for G2, G3, and G4 is less than 1, with G1 taking up the remaining probabilities according to the stage.
As an instance, during stage 3, the selection of a method from G1, G2, and G3 is determined based on specific conditions outlined in Equations (7) through (9). This involves generating a random
number between 0 and 1. If this number is less than (1 − P2 − P3) as in Equation (7), G1 is selected. If the random number falls between (1 − P2 − P3) and (1 − P3) as in Equation (8), G2 is chosen.
On the other hand, if the random number is greater than (1 − P3) as in Equation (9), G3 is the selected method.
Rand < (1 − P2 − P3)(7)
(1 − P2 − P3) < Rand < (1 − P3)(8)
Rand > (1 − P3)(9)
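The selection rule in Equations (7) through (9) amounts to partitioning the unit interval among the active methods; the following is a minimal sketch (the string labels and function name are illustrative choices made here):

```python
import random

def pick_method_stage3(p2, p3):
    # Partition [0, 1): G1 gets the leftover probability (1 - P2 - P3)
    r = random.random()
    if r < 1 - p2 - p3:
        return "G1"   # Eq. (7)
    elif r < 1 - p3:
        return "G2"   # Eq. (8)
    else:
        return "G3"   # Eq. (9)
```

Over many iterations, G1, G2, and G3 are therefore selected with frequencies close to (1 − P2 − P3), P2, and P3, respectively.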
In addition, the number of points selected by each method is a crucial parameter in the improved NSGA II algorithm. G3 and G4 choose the parent population from the search region and generate the
offspring population. The number of points to be selected must be determined in advance. The number of selected points is equal to sp multiplied by the population size (N) as in Equation (10), where
sp is number of selected points as a percentage of the population size (N).
Number of selected points = sp × N,(10)
• sp3max—number of selected points in the maximum region as percentage of N.
• sp3min—number of selected points in the minimum region as percentage of N.
• sp3uc—number of selected points in the uncrowded region as percentage of N.
• sp4—number of selected points in the knee area as percentage of N.
G3 selects three types of points based on three additional parameters: sp3max, sp3min, and sp3uc. These parameters control the number of selected points in the maximum region, minimum region, and uncrowded region, respectively.
The pseudocode of the generation methods is depicted in Figure 4, where Rand2 is a second random number between 0 and 1.
The algorithm starts by using G1 in the initial iterations to create the Pareto front and initialize the processes. This generation process can be revisited throughout the entire iteration. Then, G2
is introduced, and it brings back some populations from the archive that were eliminated in the process but may have the potential to find better solutions when combined with the current population.
Once a front is established, the algorithm uses G3 and G4 to target specific regions of the front. G4 is executed last because it requires an intensive evaluation, and its characteristics are
appropriate for the knee region late in the progression of the front. These strategies help the algorithm converge more quickly to the Pareto optimal front and increase the range of solutions it can find. In Figure 5, the generation methods used in the algorithm are illustrated for an example of one run, along with the number of iterations each generation method was employed. At the outset, G1 is employed at every
iteration, and the line for G1 increases with a slope of 1. When the iteration reaches ITm2 = 500, G2 is added as an option, and the line for G2 begins to increase. The slope of G1 becomes lower at
this point since G2 is also in use. The low slope of G2 is related to the probability it is given, which is lower than that of G1 (P1 > P2). At the iteration of ITm3 = 750, G3 is introduced, and the
line for G3 starts increasing. The slope of G3 is higher than G2 because it has a higher probability (P3 > P2). Lastly, G4 is included at the last stage when the iteration is at ITm4 = 1250, and the
line for G4 begins to increase. The slope of G4 is higher than G2 but lower than G3, indicating that its probability is higher than G2 but lower than G3. As new methods are added, the slope of G1
decreases, and the probability of G1 is a function of the other probabilities (1 − P2 − P3 − P4).
2.6. Case Studies
To test the improved algorithm in this study, five networks of different sizes were used. These networks are taken from the benchmark case studies of [
] and range from small to intermediate in size.
Table 1 summarizes the benchmark problems, including the number of water sources, decision variables, pipe diameter options, the search space (calculated as the number of pipe diameter options raised to the power of the number of decision variables), and the number of function evaluations (NFE) used.
In [
] different computational budgets are used for the networks based on their size and complexity. To ensure fair comparison, the same computational budget is used for each network in this study. The
problems are solved using different population sizes, with 10 runs for each network in three groups, totaling 30 runs for each problem. The results from each run are combined and sorted to obtain the
final non-dominated Pareto front. The benchmark problems are of varying complexity, and therefore different computational budgets are required for different cases. This approach helps to ensure that
the results are reliable and comparable across different networks and problems.
In the paper of [
], the Pareto front for each multi-objective optimization problem was obtained by collecting raw data reported by each multi-objective evolutionary algorithm (MOEA) for 30 runs. Duplicates in the
dataset of each group, which were obtained using a specific population size, were checked, and removed. The data from different groups were then merged and duplicates were checked and removed once
again. The non-dominated sorting procedure was applied to the aggregated dataset to produce the best Pareto front obtained by the current MOEA.
For each problem, the Pareto front obtained by each MOEA was first aggregated, and duplicates in the merged dataset were checked and removed. The non-dominated sorting procedure was then used to
generate the best-known Pareto front of the current problem that is presented in [
] and it will be referred to in this study as all evolutionary algorithms (AEAs). Lastly, the contribution from each MOEA to the best-known Pareto front was identified. This process ensured that the AEA
was obtained by considering the results from all MOEAs and removing duplicates.
The Best-Known Pareto Front (BKPF) solutions for these problems, which are presented on the University of Exeter website “ (accessed on 6 July 2023)”, were obtained through a secondary stage of extensive computation on the Pareto front initially obtained by five state-of-the-art MOEAs. For the smaller networks, the true
Pareto front was obtained through full enumeration, while for the larger networks, the goal was to approximate the true Pareto fronts.
The improved version of NSGA II is used to generate a Pareto front. The dataset from 10 independent runs with the same population size is collected, and the results are merged to remove any
duplicates. Next, the non-dominated sorting method is applied to select the best solutions and form the Pareto front of the group. This process is repeated for each population size to obtain the
Pareto front for each group. Finally, the Pareto front for each population size (3 groups) is merged, and any duplicates are removed. The non-dominated sorting method is then applied to the merged
solutions to produce the final Pareto front.
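The merge-and-sort procedure described above can be sketched as follows. Objective vectors stand in for full solutions, both objectives are treated as minimized (resilience can be negated to fit this convention), and the function names are illustrative choices made here.

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated points
    return [p for p in points if not any(dominates(q, p) for q in points)]

def final_front(groups):
    # groups: one entry per population size; each entry is a list of runs,
    # and each run is a list of objective tuples
    group_fronts = []
    for runs in groups:
        merged = sorted({pt for run in runs for pt in run})   # merge + de-duplicate
        group_fronts.append(pareto_front(merged))
    combined = sorted({pt for front in group_fronts for pt in front})
    return pareto_front(combined)
```

De-duplicating with a set before each non-dominated filter mirrors the duplicate-removal steps described in the text.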
3. Results
The proposed methodology was tested on benchmark case studies using the same optimization model and hydraulic simulator to enable fair comparison.
Table 2 presents the results obtained from 30 runs using the improved NSGA II and compares them to three different sets of results for the five case studies (CS). The first set of solutions, as described in [
], consists of the results obtained by five state-of-the-art MOEAs within the computational budget. By combining and eliminating duplicates from these results, the values for the AEA column were
determined. The second source of solutions is the Best-Known Pareto Front (BKPF) solutions for these problems, which were obtained after extensive calculations on the raw solutions initially obtained
by the same five MOEAs as in the AEA.
The last five columns of Table 2 contain the results obtained by the improved NSGA II algorithm using NFE (number of function evaluations) limitations. The number of solutions obtained by comparing the improved NSGA II solutions
with BKPF were recorded, including the number of solutions that are equal to BKPF (ENDS), the number of solutions obtained by the improved NSGA II that are dominated by the BKPF solutions (DS), the
number of solutions that dominate BKPF (BNDS), the number of solutions obtained by the improved NSGA II that are nondominated to the BKPF (NDS), and the total number of nondominated solutions
obtained by the improved NSGA II (TNDS) for each of the case studies. The aim of these comparisons is to evaluate the effectiveness of the proposed improvement to the NSGA II algorithm in obtaining a
good approximation of the true Pareto front with the same number of function evaluations as other state-of-the-art MOEAs.
In the context of evaluating the performance of the proposed improved NSGA II algorithm, it is crucial to consider the benchmark Pareto fronts solution, which serves as a reference for the quality of
the results obtained. Therefore, the results of the improved NSGA II will first be compared with those of its predecessor, the original NSGA II, using the same computational budget. This is the primary comparison, as it determines whether the improved NSGA II outperforms the original under the same computational budget; if it does, that is a significant improvement.
After comparing the results of the improved NSGA II with the original NSGA II, the algorithm’s performance will be further evaluated by comparing it to AEA and BKPF. AEA and BKPF are both
combinations of the five MOEAs, but it is essential to note that the BKPFs are obtained with a much larger computational budget than the number of function evaluations (NFEs) used for the improved
NSGA II. This comprehensive evaluation makes it possible to determine the effectiveness and efficiency of the improved NSGA II algorithm in solving the benchmark problems, considering both the quality of the solutions obtained by the five state-of-the-art MOEAs and the extensive computational effort invested in obtaining the BKPFs.
In the benchmark case studies, the first set of case studies, TLN, the true Pareto solution represents the Pareto front that was obtained by enumerating the entire solution space of the problem.
Thus, no better solution can be found for this network. With the proposed improved NSGA II algorithm, all the true Pareto front points were found within the computational budget. On the other hand,
the original NSGA II algorithm only finds 77 points out of the 114 Pareto front solutions; even when combining all five MOEAs, the number of solutions found is still lower than that of the BKPF.
With the improvements made to the NSGA II algorithm, all the possible solutions, i.e., the true Pareto front solutions, were found. In terms of dominance, DS (solutions dominated by the BKPF), NDS (non-dominated solutions), and BNDS (solutions that dominate the BKPF) are all equal to zero: no solution can dominate the true Pareto front, and since all solutions were found, none is dominated by the BKPF.
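The dominance-based counts used throughout this comparison (solutions equal to, dominated by, non-dominated with respect to, or dominating a reference front) can be sketched for a bi-objective problem that minimizes cost and maximizes resilience. The function names and toy fronts below are illustrative assumptions, not the authors' code or data:

```python
# Classify each solution of a candidate front against a reference front
# (e.g., the BKPF). Objectives: minimize cost, maximize resilience.
def dominates(a, b):
    """True if a dominates b: no worse in both objectives, strictly better in one."""
    cost_a, res_a = a
    cost_b, res_b = b
    return cost_a <= cost_b and res_a >= res_b and (cost_a < cost_b or res_a > res_b)

def classify(front, reference):
    equal = dominated = dominating = nondominated = 0
    for s in front:
        if s in reference:
            equal += 1                                    # equal to reference
        elif any(dominates(r, s) for r in reference):
            dominated += 1                                # dominated by reference
        elif any(dominates(s, r) for r in reference):
            dominating += 1                               # dominates reference
        else:
            nondominated += 1                             # mutually non-dominated
    return equal, dominated, dominating, nondominated

# Toy fronts of (cost, resilience) pairs:
bkpf = [(10, 0.20), (15, 0.30), (20, 0.35)]
cand = [(10, 0.20), (14, 0.30), (18, 0.25), (25, 0.40)]
print(classify(cand, bkpf))  # -> (1, 1, 1, 1)
```

Because the "equal" and "dominated" checks run first, the four counts partition the candidate front.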
The performance of the improved NSGA II algorithm was evaluated on two medium-sized networks, HAN and GOY. For the HAN network, the BKPF contains 575 solutions, while the improved NSGA II found a total of 716 non-dominated solutions (TNDS). Of these, 10 are equal to BKPF solutions, and 664 are non-dominated with respect to the BKPF, more than the BKPF itself contains. In addition, the improved NSGA II found 40 solutions that dominate the BKPF, compared with the 39 found by the original NSGA II and AEA. For the GOY network, the improved NSGA II found 341 solutions (TNDS): 303 equal to the BKPF and 27 non-dominated with respect to it. This is far more than the 29 solutions found by the original NSGA II and the 67 found by AEA. The improved NSGA II also found 11 solutions that dominate the BKPF. Although its total is lower than the BKPF, it is important to note that the BKPF was obtained through an extensive computational effort combining five state-of-the-art multi-objective evolutionary algorithms (MOEAs).
The FOS and PES networks are two intermediate-sized cases with larger solution spaces than the previous cases. For the FOS network, the algorithm found 532 solutions, more than the 474 solutions in the BKPF, over ten times the 48 solutions found by the original NSGA II, and well above the 140 solutions found by AEA (the combination of the five MOEAs). Only three of the solutions found are equal to the BKPF and 39 are non-dominated with respect to it, but 490 of the solutions found dominate the BKPF. This indicates that the improved NSGA II produces substantially more solutions that dominate the BKPF.
Another intermediate network is PES, whose three sources increase the difficulty of the optimization. The improved NSGA II found 247 solutions, more than the 82 found by the original NSGA II and the 215 found by AEA. Of these, 100 are non-dominated with respect to the BKPF and 146 dominate it; thus 59% of the solutions found dominate BKPF solutions, and most of the solutions found are better.
To evaluate the quality of the solutions obtained by the improved NSGA II algorithm, the solutions are compared with the benchmark Pareto fronts (BKPFs). In some cases, parts of the front are zoomed
in to highlight the relationship between these two sets of solutions.
The graphs presented below depict the results of the improved NSGA II algorithm with the new methods alongside the best-known Pareto front. The blue points represent the best-known Pareto front of the networks presented in [ ]. The yellow points represent the solutions found by the improved NSGA II algorithm with the new methods (PF). The red points indicate the non-dominated solutions with respect to the BKPF.
Figure 6a displays the results for the TLN network obtained using the improved NSGA II algorithm with the new methods, compared with the best-known Pareto front. There is a complete overlap between the true Pareto front points and the solutions obtained by the improved NSGA II, because the algorithm successfully found all the true Pareto optimal solutions.
As can be seen in Figure 6b, for the HAN network the new method has expanded the range of the BKPF and increased the number of solutions. Specifically, the improved NSGA II has discovered more points with minimum resilience and cost, as highlighted in the zoomed-in portion. In the maximum region, the highest possible resilience for HAN, 0.3538, has already been achieved, so further expansion towards the maximum is not feasible. In the minimum region, 33 additional solutions were identified. Since more solutions than the BKPF were uncovered, the density of solutions in the remaining part of the Pareto front has increased.
Figure 6c shows the results for the GOY network, illustrating that the improved NSGA II algorithm generated solutions across the entire range of the Pareto front. The improved NSGA II also outperforms the original NSGA II in terms of solution density, having found a greater number of solutions.
The results obtained by the improved NSGA II for the FOS network (Figure 6d) reveal a considerable number of solutions that dominate the BKPF, leading to substantial improvements across most parts of the Pareto front. Zooming in on specific parts of the front, a significant gap is apparent between the BKPF solutions and the improved NSGA II solutions in the lower-resilience region and the knee area, implying that better resilience can be achieved for the same cost. The enhancement is especially marked in the knee area, a crucial part of the Pareto front. The fourth generation process (G4) is specifically designed to target this region and generate more solutions, and its positive impact is clearly visible in this network.
In comparison to the BKPF, the results of the improved NSGA II for the PES network in Figure 6e show some uncovered sections of the front, due to the lower number of solutions found by the improved method. However, the improved NSGA II found solutions that were not discovered by the original NSGA II. In Figure 7, presented from [ ], the contribution of each MOEA to the final result can be seen: the original NSGA II found solutions only in the lower-to-middle part of the Pareto front, whereas the improved NSGA II discovered solutions in the maximum region of the front. This is particularly notable in the zoomed-in section around the knee area, where densely populated solutions were found that dominate the BKPF.
4. Discussion
Table 2 presents the results of the improved NSGA II algorithm and compares the number of solutions obtained for each of the five case studies. The results demonstrate that the improved NSGA II performs well for all case studies, finding a large number of total non-dominated solutions. In multi-objective optimization, the density and diversity of solutions across the Pareto front and convergence to the best possible solutions are the key factors to consider.
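Density along the front is exactly what NSGA II's crowding-distance operator measures. As an illustrative sketch (not the authors' implementation), a bi-objective crowding distance can be computed as follows; the toy front is an assumption for demonstration:

```python
# Crowding distance for a bi-objective front, as in NSGA-II: for each
# objective, sum the normalized gap between a solution's two neighbours.
def crowding_distance(front):
    """front: list of (f1, f2) objective tuples; returns one distance per solution."""
    n = len(front)
    dist = [0.0] * n
    for m in range(2):  # loop over the two objectives
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep boundary points
        if f_max == f_min:
            continue
        for j in range(1, n - 1):
            i = order[j]
            dist[i] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / (f_max - f_min)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (7.0, 1.0)]
print(crowding_distance(front))
```

Boundary solutions receive infinite distance so they are always retained; interior solutions in sparse regions score higher and are preferred when truncating a population.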
In general, the improved NSGA II discovered more solutions than the original NSGA II for all benchmark networks. It also found more solutions than the combination of five state-of-the-art MOEAs for all networks. Even though the BKPF was obtained through extensive computation with multiple MOEAs, the improved NSGA II found more solutions for the TLN, HAN, and FOS networks, a significant achievement that demonstrates its effectiveness in solving multi-objective optimization problems. In addition, the improved NSGA II widened the range of the Pareto front for HAN and discovered a substantial number of solutions that dominate the BKPF, especially for the intermediate-sized networks, resulting in a better Pareto front.
Moreover, the improved NSGA II found solutions in parts of the Pareto front that the original NSGA II did not reach, thereby improving the diversity of discoverable solutions. These findings suggest that the improved NSGA II is a promising approach for multi-objective optimization and can be applied to a wide range of real-world problems.
It is important to note that, although the number of solutions found by the improved NSGA II for the PES network is lower than that of the BKPF, many of the solutions found are better than the BKPF solutions. A larger computational budget would, however, be required for a fair and complete comparison with the BKPF in terms of the number of solutions. In summary, the improved NSGA II shows promising results on intermediate-sized problems such as the FOS and PES networks, and further improvements can be made by increasing the computational budget.
The results of the improved NSGA II algorithm presented in this study are highly promising and warrant further exploration in the field of multi-objective optimization. Although the algorithm has
demonstrated success in solving a two-objective problem, there is room for improvement through further parameter calibration. Future work should focus on fine-tuning the implementation parameters to
optimize the algorithm’s performance on all benchmark networks. This can be achieved through extensive testing and experimentation, leading to a more efficient and effective algorithm.
Furthermore, it is recommended that additional tests be conducted on the remaining benchmark networks to further evaluate the algorithm’s capabilities and limitations. These tests will allow
researchers to gain a deeper understanding of the algorithm’s performance across a wider range of problem sets. In addition, further research could be conducted to define a new best-known Pareto
front through extensive computation.
This study compared the solutions obtained using the improved NSGA II algorithm with the original NSGA II and AEA in terms of the number of solutions and graphical representation. However, for a more
comprehensive analysis, it may be useful to evaluate these solutions using additional metrics such as data dispersion. To achieve a better comparison, future research could consider using alternative
metrics such as box plots to provide insight into the spread of solutions in the front.
5. Conclusions
This research presents an improved version of the NSGA II algorithm that can effectively tackle multi-objective optimization problems. The NSGA II algorithm is a well-established state-of-the-art
approach that has been widely used in the field of multi-objective optimization. The improved NSGA II algorithm employs various generation processes motivated by MOSA-GR to enhance the optimization
search. The algorithm is designed to target different regions of the Pareto front to achieve better convergence, diversity, and density. The four generation processes, namely G1, G2, G3, and G4,
serve different purposes, such as increasing diversity, searching for specific parts of the Pareto front, and finding solutions in the knee area. By utilizing different generation processes, the
algorithm can access parts of the Pareto front that were previously unobtainable using a single generation process.
The improved NSGA II algorithm was tested on a well-known two-objective WDN problem from the literature, and the results were compared with those obtained by the original NSGA II, the combination of five state-of-the-art MOEAs, and the best-known Pareto front. The comparison demonstrated that the proposed improvements were effective, finding more non-dominated solutions and dominating more solutions than the original NSGA II and the other MOEAs in most cases. The algorithm found all true Pareto front solutions for the small case study, widened the range of solutions for the intermediate-sized problem (HAN), and found solutions that dominate the best-known Pareto front, particularly in the knee area, for the intermediate-sized networks.
In conclusion, the proposed modifications to the NSGA II algorithm have the potential to be a significant contribution to the field of multi-objective optimization. The improved NSGA II algorithm is
capable of finding solutions that cover all parts of the Pareto front with a single algorithm without increasing the computational effort. The good performance of the improved NSGA II algorithm
compared to the original NSGA II algorithm demonstrates the effectiveness of the proposed approach.
In summary, the improved NSGA II algorithm exhibits significant promise in tackling multi-objective optimization problems. Subsequent research endeavors should concentrate on enhancing its
implementation and assessing its performance across a broader spectrum of problem sets, while considering larger computational resources.
Author Contributions
Conceptualization, M.C. and A.O.; methodology, R.A.K., M.C., A.O. and E.S.; software, R.A.K., M.C., A.O. and E.S.; validation, A.O., M.C. and E.S.; writing—original draft preparation, R.A.K.;
writing—review and editing, R.A.K., M.C., A.O. and E.S.; visualization, R.A.K., M.C., A.O. and E.S.; supervision, M.C., A.O. and E.S.; funding acquisition, A.O. All authors have read and agreed to
the published version of the manuscript.
This research was supported by the Grand Water Research Institute at the Technion–Israel Institute of Technology.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the
decision to publish the results.
1. Filion, Y.R.; MacLean, H.L.; Karney, B.W. Life-Cycle Energy Analysis of a Water Distribution System. J. Infrastruct. Syst. 2004, 10, 120–130.
2. Kleiner, Y.; Adams, B.J.; Rogers, J.S. Water Distribution Network Renewal Planning. J. Comput. Civ. Eng. 2001, 15, 15–26.
3. Cunha, M.D.C.; Sousa, J. Water Distribution Network Design Optimization: Simulated Annealing Approach. J. Water Resour. Plan. Manag. 1999, 125, 215–221.
4. Zecchin, A.C.; Simpson, A.R.; Maier, H.R.; Leonard, M.; Roberts, A.J.; Berrisford, M.J. Application of two ant colony optimisation algorithms to water distribution system optimisation. Math. Comput. Model. 2006, 44, 451–468.
5. Bi, W.; Chen, M.; Shen, S.; Huang, Z.; Chen, J. A Many-Objective Analysis Framework for Large Real-World Water Distribution System Design Problems. Water 2022, 14, 557.
6. Wu, W.; Maier, H.R.; Simpson, A.R. Multiobjective optimization of water distribution systems accounting for economic cost, hydraulic reliability, and greenhouse gas emissions. Water Resour. Res. 2013, 49, 1211–1225.
7. Awe, O.M.; Okolie, S.T.A.; Fayomi, O.S.I. Optimization of Water Distribution Systems: A Review. J. Phys. Conf. Ser. 2019, 1378, 022068.
8. Farmani, R.; Savic, D.A.; Walters, G.A. Evolutionary multi-objective optimization in water distribution network design. Eng. Optim. 2005, 37, 167–183.
9. Prasad, T.D.; Park, N.-S. Multiobjective genetic algorithms for design of water distribution networks. J. Water Resour. Plan. Manag. 2004, 130, 73–82.
10. Prasad, T.D.; Hong, S.-H.; Park, N. Reliability based design of water distribution networks using multi-objective genetic algorithms. KSCE J. Civ. Eng. 2003, 7, 351–361.
11. Saldarriaga, J.; Takahashi, S.; Hernández, F.; Escovar, M. Multi-objective water distribution system design using an expert algorithm. In Proceedings of the Urban Water Management Challenges and Opportunities—11th International Conference on Computing and Control for the Water Industry, CCWI 2011, Exeter, UK, 5–7 September 2011; Volume 3.
12. Todini, E. Looped water distribution networks design using a resilience index based heuristic approach. Urban Water 2000, 2, 115–122.
13. Prasad, T.D.; Tanyimboh, T.T. Entropy Based Design of "Anytown" Water Distribution Network; Kruger: Montreal, QC, Canada, 2008.
14. Maier, H.R.; Kapelan, Z.; Kasprzyk, J.; Kollat, J.; Matott, L.S.; Cunha, M.C.; Dandy, G.C.; Gibbs, M.S.; Keedwell, E.; Marchi, A.; et al. Evolutionary algorithms and other metaheuristics in water resources: Current status, research challenges and future directions. Environ. Model. Softw. 2014, 62, 271–299.
15. Nicklow, J.; Reed, P.; Savic, D.; Dessalegne, T.; Harrell, L.; Chan-Hilton, A.; Karamouz, M.; Minsker, B.; et al. State of the Art for Genetic Algorithms and Beyond in Water Resources Planning and Management: ASCE Task Committee on Evolutionary Computation in Environmental and Water Resources Engineering. J. Water Resour. Plan. Manag. 2010, 136, 412–432.
16. Zhou, A.; Qu, B.Y.; Li, H.; Zhao, S.Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49.
17. Wang, M.; Dai, G.; Hu, H. Improved NSGA-II algorithm for Optimization of Constrained Functions. In Proceedings of the 2010 International Conference on Machine Vision and Human-Machine Interface, Kaifeng, China, 24–25 April 2010; pp. 673–675.
18. D'Souza, R.G.L.; Sekaran, K.C.; Kandasamy, A. Improved NSGA-II Based on a Novel Ranking Scheme. J. Comput. Civ. Eng. 2010, 2, 91–95.
19. Verma, S.; Pant, M.; Snasel, V. A Comprehensive Review on NSGA-II for Multi-Objective Combinatorial Optimization Problems. IEEE Access 2021, 9, 57757–57791.
20. Liu, T.; Gao, X.; Wang, L. Multi-objective optimization method using an improved NSGA-II algorithm for oil-gas production process. J. Taiwan Inst. Chem. Eng. 2015, 57, 42–53.
21. Wang, Q.; Wang, L.; Huang, W.; Wang, Z.; Liu, S.; Savić, D.A. Parameterization of NSGA-II for the optimal design of water distribution systems. Water 2019, 11, 971.
22. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
23. Kapelan, Z.S.; Savic, D.A.; Walters, G.A. Multiobjective design of water distribution systems under uncertainty. Water Resour. Res. 2005, 41, 1–15.
24. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601.
25. Mohamad Shirajuddin, T.; Muhammad, N.S.; Abdullah, J. Optimization problems in water distribution systems using Non-dominated Sorting Genetic Algorithm II: An overview. Ain Shams Eng. J. 2022, 14, 101932.
26. Wang, Q.; Guidolin, M.; Savić, D.; Kapelan, Z. Two-Objective Design of Benchmark Problems of a Water Distribution System via MOEAs: Towards the Best-Known Approximation of the True Pareto Front. J. Water Resour. Plan. Manag. 2015, 141, 04014060.
27. Cunha, M.; Marques, J. A New Multiobjective Simulated Annealing Algorithm—MOSA-GR: Application to the Optimal Design of Water Distribution Networks. Water Resour. Res. 2020, 56, e2019WR025852.
28. Bin Mahmoud, A.A.; Piratla, K.R. Comparative evaluation of resilience metrics for water distribution systems using a pressure driven demand-based reliability approach. J. Water Supply Res. Technol. 2018, 67, 517–530.
29. Zhan, X.; Meng, F.; Liu, S.; Fu, G. Comparing Performance Indicators for Assessing and Building Resilient Water Distribution Systems. J. Water Resour. Plan. Manag. 2020, 146, 06020012.
30. Jayaram, N.; Srinivasan, K. Performance-based optimal design and rehabilitation of water distribution networks using life cycle costing. Water Resour. Res. 2008, 44, 1417.
Figure 6. Pareto front results of Improved NSGA II and BKPF of: (a) TLN; (b) HAN; (c) GOY; (d) FOS; (e) PES.
Figure 7. Distribution of non-dominated solutions from each MOEA in the BKPF of the PES network [ ].
Table 1. Benchmark network characteristics. WS = water sources, DV = decision variables, PD = pipe diameter options, NFE = number of function evaluations.
Problem Name WS DV PD Search Space NFE
Two-Loop Network TLN 1 8 14 1.48 × 10^9 100,000
Hanoi Network HAN 1 34 6 2.87 × 10^26 600,000
GoYang Network GOY 1 30 8 1.24 × 10^27 600,000
Fossolo Network FOS 1 58 22 7.25 × 10^77 1,000,000
Pescara Network PES 3 99 13 1.91 × 10^110 1,000,000
Table 2. Results of the improved NSGA II and the five MOEAs in [ ]. CS = case study; CWS = Center for Water Systems; BKPF = best-known Pareto front; AEA = all the evolutionary algorithms; TNDS = total non-dominated solutions from improved NSGA II; ENDS = equal to BKPF; DS = dominated by BKPF solutions; NDS = solutions that are non-dominated to BKPF; BNDS = solutions that dominate the BKPF; TF = true front.
CS CWS Wang et al. [26] Results Improved NSGA II-Results
BKPF NSGA-II ɛ-MOEA ɛ-NSGA-II AMALGAM Borg AEA TNDS ENDS DS NDS BNDS
TLN 114 (TF) 77 64 64 76 65 77 114 114 0 0 0
HAN 575 39 8 9 35 10 39 716 10 115 664 40
GOY 489 29 2 39 57 15 67 341 303 97 27 11
FOS 474 48 10 42 21 31 140 532 3 58 39 490
PES 782 82 58 24 41 49 215 247 1 437 100 146
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Kidanu, R.A.; Cunha, M.; Salomons, E.; Ostfeld, A. Improving Multi-Objective Optimization Methods of Water Distribution Networks. Water 2023, 15, 2561. https://doi.org/10.3390/w15142561
A Polynomial Chaos Approach to Stochastic LQ Optimal Control: Error Bounds and
Title data
Ou, Ruchuan ; Schießl, Jonas ; Baumann, Michael Heinrich ; Grüne, Lars ; Faulwasser, Timm:
A Polynomial Chaos Approach to Stochastic LQ Optimal Control: Error Bounds and Infinite-Horizon Results.
Dortmund ; Bayreuth , 2023
DOI: https://doi.org/10.48550/arXiv.2311.17596
Project information
Project title: Stochastic Optimal Control and MPC - Dissipativity, Risk, and Performance
Project financing: Deutsche Forschungsgemeinschaft
Abstract in another language
The stochastic linear-quadratic regulator problem subject to Gaussian disturbances is well known and usually addressed via a moment-based reformulation. Here, we leverage polynomial chaos expansions,
which model random variables via series expansions in a suitable L^2 probability space, to tackle the non-Gaussian case. We present the optimal solutions for finite and infinite horizons. Moreover,
we quantify the truncation error and we analyze the infinite-horizon asymptotics. We show that the limit of the optimal trajectory is the unique solution to a stationary optimization problem in the
sense of probability measures. A numerical example illustrates our findings.
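As a minimal illustration of the expansion idea (not the authors' implementation), the polynomial chaos coefficients of a scalar random variable Y = exp(ξ), ξ ~ N(0,1), can be computed with probabilists' Hermite polynomials via Gauss quadrature; for this Y the closed form c_n = exp(1/2)/n! serves as a check. The truncation order and quadrature size below are illustrative choices:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Polynomial chaos expansion of Y = exp(xi), xi ~ N(0,1), in probabilists'
# Hermite polynomials He_n:  Y = sum_n c_n He_n(xi),  c_n = E[Y He_n(xi)] / n!
# (using the orthogonality relation E[He_n(xi)^2] = n!).
nodes, weights = He.hermegauss(40)       # Gauss quadrature for weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)   # normalize to the standard normal density

coeffs = []
for n in range(9):
    basis = He.hermeval(nodes, [0] * n + [1])   # He_n evaluated at the nodes
    c_n = np.sum(weights * np.exp(nodes) * basis) / math.factorial(n)
    coeffs.append(c_n)

print(coeffs[0], np.exp(0.5))  # c_0 equals the mean E[Y] = exp(1/2)
```

Truncating the series after a few terms is exactly the step whose error the abstract quantifies.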
Decision variables - (Cognitive Computing in Business) - Vocab, Definition, Explanations | Fiveable
Decision variables
from class:
Cognitive Computing in Business
Decision variables are the adjustable elements in a mathematical optimization model that represent choices available to decision-makers. These variables are fundamental in prescriptive analytics, as
they help quantify the impact of various scenarios on outcomes, allowing organizations to evaluate different strategies and make informed decisions.
5 Must Know Facts For Your Next Test
1. Decision variables can be continuous, meaning they can take any value within a range, or discrete, where they can only take on specific values, such as integers.
2. In a linear programming model, decision variables are combined in the objective function and constraints to determine optimal solutions for resource allocation.
3. The number of decision variables directly impacts the complexity of the optimization problem; more variables can lead to more potential solutions to evaluate.
4. Sensitivity analysis can be performed on decision variables to understand how changes in their values affect the optimal solution and overall outcomes.
5. Effective identification and formulation of decision variables are crucial steps in developing a robust optimization model, as they shape the direction of the analysis.
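A minimal sketch of these ideas follows; the scenario, profit coefficients, and resource limits are illustrative assumptions. The two decision variables are restricted here to discrete (integer) values, as in Fact 1, and evaluated by brute force:

```python
# Decision variables: units of product A (x) and product B (y), discrete here.
# Objective: maximize profit 20x + 30y.
# Constraints: x + y <= 40 (labor), 2x + y <= 60 (machine), x, y >= 0.

best = None
for x in range(0, 41):            # each assignment of the decision variables
    for y in range(0, 41):        # is one candidate solution
        if x + y <= 40 and 2 * x + y <= 60:   # feasibility check (constraints)
            profit = 20 * x + 30 * y           # objective function value
            if best is None or profit > best[0]:
                best = (profit, x, y)

print(best)  # -> (1200, 0, 40): produce 40 units of B only
```

Changing a constraint (say, labor hours from 40 to 50) and re-running is a crude form of the sensitivity analysis mentioned in Fact 4.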
Review Questions
• How do decision variables influence the outcome of an optimization model?
□ Decision variables directly impact the outcome of an optimization model by determining the values that can be adjusted to achieve the desired objective. By selecting different combinations of
these variables, decision-makers can evaluate various scenarios and assess how these changes influence performance against defined objectives. This interaction allows organizations to find
the most effective strategy among competing options.
• Discuss the role of constraints in relation to decision variables within an optimization framework.
□ Constraints serve as boundaries for decision variables in an optimization framework, defining what is permissible within the model. They ensure that solutions are realistic and feasible by
limiting the values that decision variables can take based on resources, budget, time, or other relevant factors. The relationship between constraints and decision variables is critical for
guiding the search for optimal solutions while adhering to operational limitations.
• Evaluate how changing decision variables can affect both the objective function and constraints in a linear programming scenario.
□ Changing decision variables in a linear programming scenario has a cascading effect on both the objective function and constraints. As decision variables are adjusted, they alter the values
in the objective function, which may shift the optimal solution point. Additionally, if these changes push any variable beyond its limits defined by constraints, it could render some
solutions infeasible or lead to a complete reevaluation of the model. Understanding this dynamic is essential for effective optimization and strategic planning.
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.
Steady state | Èric Roca Fernández
Steady state
We close the model imposing market clearing. This affects the budget constraint:
$$\begin{eqnarray} k_{t+1} = w_{t} + r_{t+1}k_{t} + (1-\delta)k_{t} - c_{t} - \color{red}{T_{t}}, & \\\ w_{t} = f(k_{t}) - f^{\prime}(k_{t})k_{t},& \\\ r_{t} = f^{\prime}(k_{t}). & \end{eqnarray}$$
Hence the dynamics for capital are now affected by public expenditures:
$$k_{t+1} = f(k_{t}) + (1-\delta)k_{t} - c_{t} - \color{red}{T_{t}}.$$
The inclusion of the term $T_{t}$ will impact the level of consumption —and only consumption— at the steady state. Indeed, we can compute it as we did before. From the Euler equation we have:
$$u^{\prime} (c_{t}) = \beta (f^{\prime}(k_{t+1}) + 1 - \delta)u^{\prime}(c_{t+1}).$$ The budget constraint above determines the dynamics of capital. Setting $c_{t} = c_{t+1} = \bar{c}^\mathcal{Gov}$
and $k_{t} = k_{t+1}= \bar{k}^{\mathcal{Gov}}$ reveals the steady-state levels of capital and consumption:
$$1 = \beta( f^{\prime}(\bar{k}^{\mathcal{Gov}}) + 1 - \delta) \implies f^{\prime}(\bar{k}^\mathcal{Gov}) = \frac{1-\beta(1-\delta)}{\beta},$$ $$\bar{c}^\mathcal{Gov} = f \left( \bar{k}^{\mathcal
{Gov}} \right) - \delta \bar{k}^{\mathcal{Gov}} - \color{red}{G_{t}},$$ where we have used $G_{t} = T_{t}.$
Two main comments are important here:
• The $c$-locus is not affected, hence the steady-state level of capital remains the same,
• The $k$-locus moves closer to the horizontal axis, because we subtract $G_{t} > 0$ from it. This implies that the steady-state level of consumption is reduced.
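As a sketch, the steady state can be computed for a Cobb–Douglas technology $f(k) = k^{\alpha}$; the parameter values below ($\alpha$, $\beta$, $\delta$, $G$) are illustrative assumptions:

```python
# Steady state of the Ramsey model with government spending, assuming a
# Cobb-Douglas technology f(k) = k**alpha. Parameter values are illustrative.
alpha, beta, delta, G = 0.3, 0.96, 0.1, 0.05

# Euler equation at the steady state: f'(k) = (1 - beta*(1 - delta)) / beta.
# With f'(k) = alpha * k**(alpha - 1), this solves analytically for k:
fprime_ss = (1 - beta * (1 - delta)) / beta
k_ss = (alpha / fprime_ss) ** (1 / (1 - alpha))

# Resource constraint at the steady state: c = f(k) - delta*k - G.
c_ss = k_ss ** alpha - delta * k_ss - G

print(k_ss, c_ss)
```

Varying `G` moves steady-state consumption one-for-one while leaving steady-state capital untouched, matching the two bullet points above.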
Convert Degrees Minutes Seconds to Decimal Degrees in Didger
KB no longer maintained - Didger is a Legacy Product. Legacy Products are still supported, but no longer receive new features or updates. The majority of Didger's features have been moved to Surfer.
Please contact support@goldensoftware.com with any questions.
Latitude and Longitude coordinates are often presented in degrees, minutes, and seconds, such as 39° 45' 30" (39 degrees, 45 minutes, 30 seconds). However, Didger can only plot values in decimal
degrees. So, for example, 39° 45' is referred to as 39.75° in Didger. Didger does not currently have a way to do this conversion, and the conversion must be done manually.
Converting from degrees, minutes, and seconds is actually quite easy. Consider the latitude value 39° 25' 30". This value needs to be converted to use it in Didger. There are 60 minutes in one degree
and 3600 seconds in one degree. To convert minutes and seconds to decimal degrees, divide minutes by 60, divide seconds by 3600, and then add the results to obtain the decimal equivalent.
Use the following formula to make the conversion:
Decimal degrees = Degrees + Minutes/60 + Seconds/3600
For Example:
To convert 39° 25' 30" to decimal degrees
1. First, convert minutes and seconds to their degree equivalents and add the results
25'/60 = 0.4167°
30"/3600 = 0.0083°
and 0.4167° + 0.0083° = 0.425°
2. Then, add this number to the number of degrees.
39° + 0.425° = 39.425°
3. So, the final result is:
39° 25' 30" = 39.425°
Once your coordinates are in decimal degrees, you can use the data in Didger. Surfer is able to convert DMS to DD directly in the worksheet.
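The conversion above is easy to script. Here is a minimal sketch in Python (the function name and the negative-degree sign convention are my own, not part of Didger or Surfer):

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert degrees/minutes/seconds to decimal degrees.

    Assumes the sign of the coordinate is carried by `degrees`
    (e.g. -39 for 39 degrees South or West).
    """
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60 + seconds / 3600)

print(dms_to_decimal(39, 25, 30))  # ~39.425, matching the worked example
```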
Updated: August 06, 2018
|
{"url":"https://support.goldensoftware.com/hc/en-us/articles/226508847-Convert-Degrees-Minutes-Seconds-to-Decimal-Degrees-in-Didger","timestamp":"2024-11-13T22:59:09Z","content_type":"text/html","content_length":"43451","record_id":"<urn:uuid:daf542a4-7a15-4277-93a6-226e83948b3b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00392.warc.gz"}
|
ib mathematics: applications and interpretation pdf
On this view, mathematics is just like the glass bead game. 11. this will take, use a GDC to evaluate the function for total time at x = 281 to obtain 91.1s. The sequence G,is defined by G, = G, =
1and G, = 2G,, + G5, n>2 A second sequence b, is defined by b, = G G, n=1 (a) Use a spreadsheet to find the first 30 terms of both G, and b, (b) Hence, write down the limit of b, as n tends to
infinity, 1}9:27 b, Arithmetic sequences Here are three sequences and the most likely recursive formula for each one. The first term of an arithmetic sequence is 2 and the 30th term is 147. Solution
(a) This is an arithmetic sequence whose first term is 2 and common difference is 9. 12-518 15 -o3sarsanion 10718675296 0-53B1s50161 (wHB)/C 12. For each function below: (i) use your GDC to graph the
derivative of the function (ii) hence, find the locations of all local extrema and justify each using the first derivative test. V = lwh = x(30 2x)21 2x) =4x3 102x> + 630x (b) Since the paper is 21
cm wide, we need to cut less than %(21) = 10.5cm from the edge in order to make an open box. If we were to make the modelling assumptions more realistic, the mathematics in the model would become too
complicated to solve easily. Use the rules of exponents to write each of the following in the form 3, wheren Z. In electrical notation, Z represents impedance, and Z = R + j(X; X,). The humanistic
philosophy brings mathematics down to earth, makes it accessible psychologically, and increases the likelihood that someone can learn it, because its Jjust one of the things that people do. in
context. We will consider impedance in AC circuits here. = 0.297(80)(1.225)V?3 = 29.1V? A language act would be a registry officer saying I pronounce you married The use of language in a performative
manner creates social facts. As such, the value of the annuity is the sum of 30 terms of the geometric series with first term $1200(1.02) and common ratio 1.02. Mathematics Applications and
Interpretation for the IB DiplomaHigher Level provides comprehensive coverage of the new c, Mathematics Applications and Interpretation for the IB Diploma Standard Level provides comprehensive
coverage of the new, Worked solutions for the Mathematics Analysis and Approaches for the IB Diploma Higher Level Pearson book, Mathematics Analysis and Approaches for the IB Diploma Higher Level
provides comprehensive coverage of the new curriculu, Mathematics Analysis and Approaches for the IB Diploma Standard Level provides comprehensive coverage of the new curricu, R ==} ] Mathematics
Applications and Interpretation for the IB Dlploma (UYL R ety IBRAHIM WAZIR GAI 3338 JIM NAKAMOTO EEEEEEEEEEEEEE e Published by Pearson Education Limited, 80 Strand, London, WC2R ORL
wwwpearsonglobalschaols.com Text Pearson Education Limited 2019 Theory of Knowledge chapter authored by Ric Sims Development edited by Jim Newall Copy edited by Linnet Bruce Proofread by Eric Pradel
and Linnet Bruce Indexed by Georgie Bowden Designed by Pearson Education Limited 2019 Typeset by Tech-Set Ltd, Gateshead, UK Original llustrations Pearson Education Limited 2019 lustrated by Tech-Set
Ltd, Gateshead, UK Cover design by Pearson Education Limited 2019 Coverimages: Front: Getty Images: Busi Photography Inside front cover: Shutterstock.com: Dimitry Lobanov Weare grateful to the
following for permission to reproduce copyright material: Text pages 904-905, Edge Foundation Inc.: What Kind of Thing Is a Number? (a) 1216 ( w,=2u, 31 () 893871739 0T 19 =220, 2187 andu, = 6 andu,
=95 __ og _ u=6(3) (b) 4, =952 19\ ) u, _= 100(20) andu, =100 32 @ Tag576 u,,:%u,, candug=2 33. T M 17 Geometry and trigonometry 1 Solution If the mound is considered to be at M(0, 0, 0), then the
starting point up the tree is T(30, 60, 20). Those who can do it often find themselves labelled as nerds or as people who are, in some sense, socially deficient. Example 14.18 can also be solved
without using calculus. Note that value 5E+4 for the y-coordinate of the vertex is the GDC's version of scientific notation (or standard form). Generally it is also possible to identify this point on
a graph. 92 (c) For the interval 1 < < 3, find the time at which the velocity is a minimum and find its value. )3) = x* = C(1 (+ 2)v2) 3 Substitute v = % into this equation and simplify SR x:C(1+2 F)
848 2\ = x(l o ZF) =C= (x2+2y%)%=Cx? Solution The gradient of AB is 76;225 = 1, so the gradient of the perpendicular to - _ [AB] is 1. (a) -85 (b) u, =35 15n () thy =ty 15anduy =2 2. Exercise 1.4 1.
The purchase price of the car is $16,500. 5 Figure 3.1 Triangular pattern of dots The pattern can also be described, for example, in function notation: f) =1f2)=3, f(3) = 6, etc., where the domain is
Z* The first unit represents one dot, the second represents three dots, etc. That a particular piece of paper is money is a social fact that does make things happen. (d) Find AB and BA (e) Find xand
y such that C= D 2. () t,=3u, andu, =4 2. (@) a,=2n3 (b) b,=n+2 () c,=cp-1+2,andc; = 1 (d) 2,5,7,12,19, &) 2.-512-15.., 2. Like other areas of knowledge, it possesses a specialised vocabulary naming
important concepts to build this map. There are 3 time zones (A, B and C) for the first time this year (May 2023 exams). Now, instead of merely keeping a tally, after rolling the dice 6 times, find
the mean of the outcomes. 2. _ Solution (a) A graph of fproduced on a GDC reveals that it is not monotonic over its domain ] o0 , co[. (b) Give a reason why linearisation using logarithms should not
be used for this data. h(t) B(12.5, 1.5 ( ) 15 0 A(6.25,0.6) 0 24 48 Figure 9.32 Diagram for question 1 72 How much time is there between the first low tide and the next high tide? Let (a,) be an
arithmetic sequence with first term @, and a common difference d. We can construct the series in two ways: forwards, by adding d to a, repeatedly and backwards by subtracting d from a, repeatedly.
100 X T -| T =l 1 == I = ! (b) Q) =CV Q. Q (6) When the battery i removed, R+ = 0 which can be solved to give Q(f) = Ke~*C, and initial conditions yield K = CV e/, substitute this into the result and
we have Q(f) = CVe -/k Exercise 20.2 1. Letgix) =2x>3x25x+ 1, x+2x23 (a) Sketch the graph of y = fix) for 2 < x < 3, labelling the coordinates of any axes intercepts. + YB? I ) _ . EINSTEIN'S L FIELD
EQUATION i) LR Figure 5 Einsteins field equation 1. The Maths HL class surveyed random samples of the groups, and their findings are given in the table below. So, of the whole population tested there
would be a total of 99999 + 99 = 100 098 positive results. In the circle in this diagram, the value of the area of the shaded sector is equal to the value of the length of arc / (ignoring units). 23.
Consider the potential difference given as a sinusoidal function, V= 10cos (mt + %T) whose potential difference (amplitude) is 10, with a phase angle of % These characteristics can be illustrated by
the Argand diagram shown. When a skydiver is falling towards the Earth, she will accelerate until the force due to gravity becomes equal to the force due to air resistance. It follows that the angle
will be a central angle of a circle whose centre is at the origin as shown in Figure 5.12. 3.32 2,34.32 ] 5. (b) Indicate the degrees of freedom, v, in this analysis. Answers will differ. (a) ) \5 ks
- 2 Figure 9.31 GDC screen for solution to Example 9.15 (f) We do not consider (g) Although this model seems reasonable, it assumes that the rotation speed of the wheel is constant. Colour Teachers
12th grade blue | green | beige | pink | yellow | Total frequency 9 8 11 6 6 40 12 7 5 6 9 39 11th gmde 11 10 8 7 9 45 10th grade 10 9 5 8 9 41 Test the claim that the colour preferences are
independent of the school grade or group at the level of significance of & = 0.05 (a) State the null and alternative hypotheses. (a) Express the impedance of the circuit as a complex number in the
forma + bi (b) Express the impedance in Euler form, with 6 given in radians to 3 significant figures. 10392 (20-25) - B12x) Remember that LI4E + 3 = 1.14 X 10 = 1140cm? Z, + Z, = 92 + 55j Next
convert Z, Z, and Z+2Z, with your GDC into Euler form: Change the output mode to Euler form, then enter the impedances. Is the flying speed of animals related to their overall body length? The data
in the table shows a sample of the data collected. Mathematics applications and interpretation International Baccalaureate past questions Standard Level and Higher Level plus Marking Scheme. (a) (b)
2 7E ) 13 () 2 (8) 29 % 3] 24 il () 2 (h) 6/3 | o5 {n | | 5. The resulting initial-value problem leading to the population growth can now be written as & & (b doyly where by and d, are positive
constants representing specific birth and death rates. B) V2mrt 11. Determine if 995 is a term in this sequence. Figure 5.4 An arc and its central angle. We use the same logic we used for the area of a sector: we
need the size of the central angle for the arc, and then we calculate the portion of the circumference using a ratio. 209 Matrix algebra 1. iy 4xland = e+ x Lan e*+ linto (b) y(1) ~ 0.32768 () y(1) ~
0.348678 4401 (d) Actual value is 0.367 879 4412. Theres no math without people. P(X=10) = 1 P(X = 9) ~0.0139 (a) 0.817 (b) 1 (c) 0.0162 (a) 0.0338 (b) 0.0245 (c) 0.783 (@ 3 (b) 0.101 (c) 0.000215
(a) 0.0025 (b) 0.9826 (c) 0.9999 (a) 0.1396 (b) 0.1912 (c) 0.9576 (a) 0.8187 (b) 0.5488 (a) 0.9877 (b) 0.999998 (c) 0.0000244 (a) 0.0334 (b) 646 (c) 98" P(X=7) =1~ P(X < 7)~0.212 (Note: In a
continuous 2.00004 5. Figure 9.30 GDC screen for solution to Example 9.15 (c) 334 We can see that a local maximum is at t = 5 minutes. A stone is dropped down a 30 m well. 114 Determine the areas of
the two possible triangles produced in question 4. Also, the Pythagorean Identity allows us to find the value of the sine function given the cosine function, or vice-versa. Hence, the second
derivative of displacement is acceleration. (a) 81,243 (b) 3" 5 a3a, 1 k:370:1_23 65 T=483C m= .06 = 8 minutes and 4 seconds 10C (b) 124C k= 5.64 minutes 15 c=25 3hours and 19 minutes p+q=47.4p+q=53
3. =9 Solution (a) Since the diameter of the Riesenrad is 200 feet, the wave height of the model must be 200, so the amplitude a = 100 (b) Since the minimum value of the function is 12 feet, the
principal axis must be located at 12 + 100 = 112. Solution Since the arc measures 150, we use that fraction of the circumference: Figure 5.5 Diagram for Example 5.3 L 150 =0Qq 360(2 57 X 3) 3)i= = >
=7 7.85m (3 s.f) 1. For part (b), to find the minimum time Figure 14.17 GDC screen for the solution to Example 14.19 Again, we could use aGDC to analyse the function directly. In this case, the
acceleration due to gravity at the Earths surface. The relationship between two quantities how the value of one quantity depends on the value of another quantity - is the key behind the concept of a
function. (a) The Voronoi diagram has i 3_4, -32-% (c) 25 (0) 36 4,8 L (b) y = x2+ 14x + 50 ) y=xx1 (d)y:x2~3x+% (f) y=x2+43x+15 +1) () y=x7-6/2i -8 260 = (291 = (8)n = ( 1) (8)21 = g 2% 4i 2% =
(222" = (4i)> = (16i2)" = (~16)* 2% 13. 11. derivative changes sign atx = c, then (c, fio) is apoint of inflection. The methods and concepts of mathematics, therefore, are quite unlike anything to be
found in the sciences, although they do seem to bear a strong resemblance to the arts in terms of the setting of the rules of the game and the use of the imagination. Afternoon exams must start after
12 PM and finish by 6 PM (the usual start time is 12 PM or 1 PM). = (25 % + 132 I=Vx2+49 m=y(25 x? Simple interest is interest calculated only on the initial investment. 26. Find the value of d. (c) Calculate the number of minutes until a person riding the Riesenrad reaches the top of the wheel. Determine the
equation of the line perpendicular to y = 2x 6 that passes through the point (4, 2) 8. :wo Exercise 7.5 rA = L. () y-axis reflection. (b) An alarm sounds when the amount of fluid remaining is 50 ml.
Give your answer correct to the nearest minute. 12; If 2//2 is the imaginary component of a complex number also with a modulus of 10, what are the possible values of its real component? Therefore a
reasonable domain for this model is 0 < t =< 4.51(s) The range indicates the possible heights of the ball. Therefore, given L, we can determine a relationship between C and k. Since the curve must
pass through (x, %) we can substitute and simplify as follows: fo = 1+ Ie"x - LZ: 1+ ek" 2=1+Ce* 1ew - The equation x = lnTC also gives us some insight into the location of the point of inflection.
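The claim in this passage — that for a logistic curve f(x) = L/(1 + Ce^(−kx)) the point of inflection sits at x = ln(C)/k, where the curve reaches half its carrying capacity — can be checked numerically (the parameter values below are illustrative, not from the text):

```python
import math

# Logistic model f(x) = L / (1 + C*exp(-k*x)); illustrative parameters.
L, C, k = 2.0, 3.0, 0.5

def f(x):
    return L / (1 + C * math.exp(-k * x))

# At the point of inflection x = ln(C)/k, the curve reaches L/2.
x_infl = math.log(C) / k
print(x_infl, f(x_infl))  # f(x_infl) is L/2 = 1.0 up to rounding
```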
Also, at the end of each chapter there is a set of practice questions, which are designed to expose you to questions that are more exam-like. Find the quotient zl State your answer in a + bi form.
(a) 12,20 TilneeT - 3\z) (54 (f) z(? ) m(zmm+2) 256 W) fzmxx)dXN%(z_z) +2(3-f3) + 4+ fd) + - + 9-9 + 10 f(10)) )+ ) :%(2-0+2(3-1.5+4~z+m +9:3)+10-0)~ 127 10, (a) i 35 8] S d) 2/3 - 66273 17 166 . ()
= 3 (i) =272 G (iv) 138 (b) Convert the following degree measures to radians. We can use the logic from section 5.1 and write the area of a sector as A = (θ/2π)(πr²) = ½θr². In a circle of radius r, the area of a sector with a central angle θ measured in radians is A = ½θr². Again, be careful! In 99% of these cases the test does its job and records a negative result. His data are Temp (°C) | Solubility (g per 100 ml): 0.0 66.7
4.0 71.0 10 76.3 15 80.6 21 85.7 29 929 (d) Use your least-squares regression line to predict the solubility of NaNO; at 25C. = 2u,,3+ 1 andu; =0 (h) a,=a,;+2anda; =1 Find a recursive definition for
each sequence. 2k 3] = g = 1 )T 5= X0 =AYy =02 13 Since the domain of gis x ] oc, 0], then the range of g~! 609 Further differential calculus Example 14 A supply of four metres of wire is to be used
to form a square and a circle. 0o 261 PP g = 50, Sokt = k=1 S =1023 3 = 1125899906 842 623 100 2100 1 = s Mabt= T 1267 650 600 228 229 401 496 703 205 375 We observe that as the number of terms in
the partial sum increases, the sum also x increases. The solution lies in the chain rule. i1T j 0 0 0 320 | Graz Vienna salzburg 191 Graz Salzburg Innsbruck Linz 298 478 185 282 461 220 188 Innsbruck
135 320 Table 7.3 Distance (in km) between Austrian cities. (b) Calculate the value of Pearsons r and interpret it in context. Vectors are covered in chapter 8. (a) Find a linear function to model
the cost of model A. The distance between the two posts is 25m. @@ B=4"'C 10 (b) DA = (o 00 11. Such a story is just not credible. Solution (a) Substituting into the formula gives: 10log₁₀((3.2 × 10⁻⁵)/(10⁻¹²)) = 10log₁₀(3.2 × 10⁷) = 75.1 dB. Mathematics Applications and Interpretation for the IB Diploma Higher Level 9780435193447. But this means that mathematics is really
radically different from other areas of knowledge, including the natural sciences. This avoids having too many negative numbers in calculations. Find the value of n. 3. (a) ) L @) fin il= glinx + R =
e w 0 Exercise 20.1 (@) y = cos~1(31) . Interest is paid at the end of each year. Example 9.5 The demand for a certain style of jeans is modelled by the function D = 1000 5p where D is the number of
items sold and p is price of the jeans in euros. Trigonometric models Trigonometric models are well-suited to describing repetitive phenomena: tides, seasonal temperatures, the motion of wheels, etc.
where Pis the power in watts A is the area swept by the turbine blades (swept area), in m? For example, the approximate distance from the Earth to the sun is 149600000 km. (a) Formulate a function to
model the length L of a Gila Monster years after birth. Our aim is to support genuine inquiry into mathematical concepts while maintaining a coherent and engaging approach. (b) = a. Likewise, In C =
0 when C = 1, so the x-coordinate of the point of inflection will be zero. () sinR (b) cosR (c) tanR (d) arcsm(;) (e) arctan G) f) s 3. identity is an equation thatis true for all values of the
variables. 9432 8. 10. _eeeeeeeeeee Solution (a) MCis the rate of change of cost, thus, cost itself is an antiderivative of MC. @ 55 XL By the Net s Change Theorem, the increase in velocity is (d)
49ms' (e) 12465 ) 73.08ms"! This makes sense. (d) In words, accurately describe the motion of the object during the interval -1 0 so production costs are increasing forallx >0 (d) Here we need a new
model. 625 Further differential calculus (a) Write down an expression for T in terms of r, [ and 7. Chapter 4 practice questions 1. Inflation means an overall rise in the cost of goods and services.
Information about the values of the derivatives of fis given in the table. These materials include: * Interactive GeoGebra applets demonstrating key concepts * Worked solutions for all exercises and
practice questions * Graphical display calculator (GDC) support To access the eBook, please follow the instructions located on the inside cover. 30 20. (a) 2,-1%3 (b :(%Ar%i) V2i, v2 2i - & (b) 43es
o 6. Figure 5.16 Diagram for Example 5.9 Solution (a) s = rθ = 6π. The length of the arc is exactly 6π cm. In mathematics, concavity is whether a graph curves up (concave up) or curves down
(concave down), as shown in Figure 14.9. The prism has a height of . This is a position taken by surprisingly many mathematicians. 10. Mathematical objects such as perfect circles and numbers existed
in this real world; circles on Earth were mere inferior shadows. There is a common misconception that November exams are more difficult. 3 o7 o IR 1.27m 14. oy o 7 2 12,2 25 15,275 17 157 . Determine
if 9803 is a term in this sequence. Technically, this comes from the fact that the whole numbers are closed under the operation of addition because they form an important structure called a group.
made at the beginning of each year. The population growth rate in China was 0.5% and the population was 1.379 billion (source: World Bank). If it is, state which term. Then the two variables P and
are related by the formula P = P;(1.02)", where P, is the population at time = 0. (h) f'4)= = L G me = Thef');e equatibnoft Hisy L5 =gle4) Sy=grtl Set, % 12 (a) y = ~ 70+ 27 X 75 = 450 thereforeA is
on the track. (a) a=4 ) (i) d=-5 6. Find the distance between each pair of points. We'll also give you advice on how to get the most out of your practice so you can hit your target score on test day.
Speed (kmh-!) What is the difference between Mass and Weight? However, here we can use one of two approaches, introduce the factor 6, as we have done before, i.e., [ FiTde=1 6 [VexF 1T 6dx _1 _if
7Efdu76fu _lui 5 du 1 teTguite =g(6x+1Di+c 2 orsinceu=6x+ [T 11 = du=6dx=dx= dG,then iTde= [Vadt= Jutdu and we follow the same steps as before. A sample of size n = 23 from a normally distributed
population shows X =100 and s,_; = 12. Figure 5.10 shows three circles with radii of different lengths (r, < r, < r;) and the same central angle 0 subtending (intercepting) the arc lengths s;, 5, and
s;. Xpry1= =5, X, By,(y!{xz =3.337 i Yurr = Yu h(6x, = yi) Bt x =302 4 2202353 | iy Y2 =429 Particular solutiof (b) Eigenvalues: 2, 5, eigenvectors ()() 2 Trajectories approach equilibrium and then
move away. The distinction between pure and applied mathematics becomes blurred in the hands of someone like the great Carl Friedrich Gauss (1777-1855). (a) 197.1 million 41. The variance is unknown.
This type of truth is independent of place and time. chain rule. (a) C=08d+4 (h) x = 3log,y (i) x=2lny o -1 () 5505C @ F= %K 4504 () () 4594F (i) 273C 3. Find the measure of angle AOB in degrees to 3
significant figures. AB² = (AB′)² + (z₂ − z₁)² = (x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)² ⇒ AB = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²). Given two points in space, A(x₁, y₁, z₁) and B(x₂, y₂, z₂), the distance AB is found by taking the square root of the
sum of the squares of the differences in each dimension. Enter the observed values into a 2 X 4 matrix A. 2 o ) go=b--3 (c)}ZbVb:5 (d) Point of tangency is at g(1) = 3 = (1, 3), gradient is 3
(given), equation of Tisy 3 = 3(x )= y = 3x. (d) Use your model to predict the depth 500 m from the shore and 1.5 km from the shore. 15. Ly oln=oo 8= sBT ] p d g 1-0) ].f|r|0 = 10. The IB Math
Analysis and Approaches course is designed for students who enjoy the abstract nature of mathematics and have a strong interest in exploring the Hence, the bird is at B(30 - %, 60 - %, 20 118 %) = B
(ZO, 40, %) i A cuboid measuring 10 cm x 12 cm x 20.cm is cut into a rectangular pyramid, keeping the face that is 12 cm x 20 cm as its base and 10 cm as its height. Figure 5.19 Calculating values of
trigonometric functions from coordinates of points on the unit circle Symmetry is crucial to finding the values of the trigonometric ratios in the unit circle. (a) V= )(zf 2 = 0.3702 11. (@) flry=x+2
(b) fiy=322t+1 g =%-20 (d) fiH) = (t VRt +3) (&) gu) = u 4u? Potential difference across a resistor Vy is in phase with the current; however, potential difference across a capacitor V lags, and
potential difference across an inductor V; leads. What problems does it solve or what questions does it answer? (@ () P(X=24)~0.105 (i) P(X = 22) ~ 0.0925, P(X = 26) ~ 0.0902 (iif) P(X = 20) ~
0.0616, P(X = 28) ~ 0.0595 (b) As the probability is the highest at the expected value and those adjacent gradually decrease, the distribution will likely fit a normal curve. Diracs own equation for
the electron must rate as one of the most profoundly beautiful of all. Predict the temperature of this solution, or give a reason why you cannot. (a) The value of Pearson correlation coefficient for
the time it takes an athlete to run 5km and the time for her to cycle 30 km is r = 1.21 (b) The fuel economy of passenger cars decreases linearly with the mass of the car with Pearsons r = 0.78 (c)
Among mammals, those with greater average body mass have longer life expectancy; the rank correlation coefficient is r, = 0.85 (d) For a set of (x, y) data, the least-squares regression line is y=
253.52x with r = 0.64 2. 197 km h⁻¹ () x-intercept is at (0.439, 0). General solution: stable system (direction-field plot omitted). Figure 5.10 The ratio of arc length to radius remains constant. The ratio s/r indicates how many radius lengths, r, fit into the length of the arc s. For example, if s/r = 2, then the length of s is equal to two radius lengths.
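The ratio s/r described here is just the radian measure of the central angle, so s = rθ. A small numerical sketch (the radius and angles are illustrative values, not from the text):

```python
import math

def arc_length(radius, angle_rad):
    """Arc length s = r * theta, with theta measured in radians."""
    return radius * angle_rad

# An angle of 2 radians subtends an arc two radius lengths long:
print(arc_length(3.0, 2.0) / 3.0)  # 2.0

# The 150-degree arc of radius 3 m from Example 5.3 earlier:
print(arc_length(3.0, math.radians(150)))  # ~7.85 m
```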
Either one of the players has achieved checkmate, or a stalemate (a draw) has been agreed. (@) 2,2,6,10, (b) 10.07,9.95,9.83,9.71, 3 1 7 W B (c) 100,97,94,91, 3. When we enter the data into a GDC, we
can use it to estimate the logistic model for this data as shown in the screenshot. | @x=0x=-2 x="% V3 (e) 2. (a) y, 1 - G #, ) 12 10 | Lk I | 8 s 6 4 -3 2 ) 2 (c) any value 0 < x < 24.3 0r 65.7 < x
< 90 0 0 2 4 (h) 2x+y =29 12. Find the sums of the complex numbers. Time (years) | Amount in the account ($): 0 | 2000; 1 | 2000 + 2000 × 0.05; 2 | 2000 + 2000 × 0.05 × 2; 3 | 2000 + 2000 × 0.05 × 3; 4 | 2000 + 2000 × 0.05 × 4. Table 3.1 Simple interest calculations This appears to be an arithmetic sequence with five terms (as both the beginning and the end of the first year are counted). Figure 5.8 Angle in standard position Two angles in standard position that have the same terminal sides -
regardless of the direction or number of rotations - are called coterminal angles (Figure 5.9). + -3+ i (d) 1 18(3x2 + 5) =
4 vx + 3) 36 T 5 (2x + 3)8 g -FFIS L @ S, +c 18x (h) %ln[costz ~D+3)+ (@) tan(s +e ) Lsinim +3) + R Leen e W %zu itc () Inlln2z] + (m)%ln) fe 4. A nautical mile is the distance on the Earths surface
formed by a great-circle arc that measures 1 minute (1/60 of a degree). (a) Given that the radius of the Earth is 6370 km, find the length of 1 nautical mile in kilometres. Our normal approach would
be to draw a right-angled triangle. They also identify common errors and pitfalls. A typical example of a problem in this category is how to solve a particular type of equation. 133 Geometry and
trigonometry 2 3 radians ] 6 radians 4 radlab\ y)f/ 5 radians Figure 5.13 Arcs with length of the radius placed along the circumference ofa circle Figure 5.14 Degree measure and radian measure for
common angles Figure 5.14 shows all of the angles between 0 and 360 that are multiples of 30 or 45, and their equivalent radian measure. Each course is designed to meet the needs of a particular
group of students. Assume that only 100 people in the whole country have the disease. Find the areas of the triangles in question 17. (b) 0.439cmss! Inverses and reciprocals IfsinA = %, its inverse
is expressed as A = a.rcsin(%), often noted on a GDC as si.nl(%). For instance, should a relation be expressed as y = a*, then x is the exponent of the base a which yields the quantity y, written
simply as x = log, y When the base is 10, we typically do not write it. + 322 2 - 20 - 32 cos 60 = V [ 1 /400 + 1024 1280 - = D = AB =784 1. If the impedance is X, = 4 + 2j Q, what is the magnitude
of the potential difference? Mathematics: applications and interpretation SL, Mathematics: applications and interpretation HL, develop mathematical knowledge, concepts and
principles, develop logical, critical and creative thinking. At Grande Anse Beach the height of the water, in metres, is modelled by \(9@ the next period, since the question asked only above the
first ten minutes. athlete performed worse than we would expect? This book is designed to complete the course in conjunction with the Mathematics: Core Topics SL textbook. Your teacher will advise
you on the timeline for completing your exploration and will provide critical support during the process of choosing your topic and writing the draft and final versions of your exploration. (a) 3910
(b) 3901 (c) 8200 (d) 8200.0 (e) 100.3 6. First convert 42 to 24, then add the exponents. That is, the angular velocity of each wheel is 1.5 revolutions per second. 0.3 316 0.4 d35 0.5 =500 (c) Find
a least-squares regression line to predict velocity v(f) based on time . We can use our knowledge of the function to choose more appropriate settings. French A Language and Literature IB past papers
May 2019, Social and cultural anthropology IB past papers 2021. Nevertheless, it often happens that pure mathematics created for no other purpose than solving internal mathematical problems turns out
to have some extraordinary and very practical applications. Identify the claim. (b) Coincident: the second is a scalar multiple of the first. 45 8. (d) Since the first maximum is reached 5 minutes
after the first minimum (when the person board the Riesenrad, we can conclude that one complete rotation takes 10 minutes. 10. (c) i (d i . The boundary XY between the two types of soil is a straight
line running from east to west and point A lies 50 km directly north of it. (c) Calculate how long it takes for the temperature of the water to reach 56C. Therefore, the percentage change per minute
in the difference between the coffee temperature and room temperature is 21.3%. Since 1 − 0.213 = 0.787, we can write the model as 75(0.787)^t, confirming that the coffee's temperature difference is decreasing at a fixed rate of 21.3% per minute. Express each value in simplest form. 19. (b) After the initial 4 years, the employee has the option to buy the car from his company by paying 50% of the purchase price. = /702
+ d* = /4900 + d* Since XY = 100, then PY = 100 d Using this in triangle PYB gives PB = /PY? The simplest Japanese kokeshi (wooden doll) is formed from a sphere (head) on top of a cylinder (body). IB
Applications and Interpretation Formula Sheet. Consider a taxi company that has the following structure for charging customers. (b) How much interest has been earned on the investment in 30 years? With 6, = 40 and fit) = = (ksint? Sequences formed in this manner are
called arithmetic sequences. Measurements Volumes and in 3 dimensions gEloc] You should already be familiar with these formulae for volume and surface area. correspondence (mapping) between two sets
X and Y in which corresponds to (maps to) exactly one element of set Y. This is best done application such as GeoGebra that allows you to observe each parameter has on the shape of the graph. 7 1 dy
= f3x2dx:>f(;+ 4y)dy: f3x2dx ln|y| T2y =ik The solution is exact, but implicit, as it cannot be written in an explicit form y = fix). Analysis and Approaches is u, = @) and r :g 0+ 8 8 =25+0 =125
Since V(10, 16) and W(15, 15) are both /65 units from 30 best location for a new bank branch is at L(48, 24). Therefore, a reasonable domain is 0 < x < 10.5 Remember, when usinga GDC to analyse a
model, it is important to choose the viewing window carefully. The resistance of three resistors connected in parallel is given by: 1/R = 1/R₁ + 1/R₂ + 1/R₃. So, in AC circuits, impedance in parallel is defined analogously: 1/Z = 1/Z₁ + 1/Z₂. Consider when there are two impedances in a parallel circuit, Z₁ and Z₂: 1/Z = 1/Z₁ + 1/Z₂ = (Z₁ + Z₂)/(Z₁Z₂), so Z = Z₁Z₂/(Z₁ + Z₂), which is the product of two complex numbers divided by their sum. We are
also trying to minimise the total length of the rope, so we will assign variables to each part of it as well, labelling them [ and m. Finally, since the posts are 25m apart, we can label the
horizontal distance from R to the taller post as 25 − x. Now, use the diagram to try to find a general model. (d) Eigenvalues: 5, 1 (eigenvectors, general solution, and direction-field plot omitted). Trajectories approach equilibrium and then move away.
this category is how to solve easily Q, what is the GDC 's version of scientific (. A, b and c ) find AB and BA ( e ) 100.3 6 area,. The magnitude of the graph ( 20-25 ) - B12x ) Remember that +.,
social and cultural anthropology IB past papers 2021 - 32 cos 60 = V [ 1 /400 1024! In which corresponds to ( maps to ) exactly one element of set y total of 99999 + 99 100. X and y in which
corresponds to ( maps to ) exactly one element of set y length. Revolutions per second change of cost, thus, cost itself is an sequence! Of exponents to write each of the two possible triangles
produced in question 4 following measures! Surface area shows X =100 and s ib mathematics: applications and interpretation pdf _ ; = 12 world! Pm or 1 PM ) in which corresponds to ( maps to ) exactly
one of... Circles on Earth were mere inferior shadows in Figure 5.12 then ( c, then ( c ) ib mathematics: applications and interpretation pdf. + 322 2 - 20 - 32 cos 60 = V [ 1 +. ( body ) does make
things happen all are install to solve our problems... Kokeshi ( wooden doll ) is apoint of inflection X = 281 to obtain 91.1s have disease... @ @ B=4 '' ' c 10 ( b ) DA = ( X! Itself is an
arithmetic sequence whose first term is 2 and the population growth rate in China was 0.5 and... Maps to ) exactly one element of set y, thus, cost itself is an sequence! The Maths HL class surveyed
random samples of the derivatives of fis given in the of. On a graph years after birth b and c ) for the temperature this! The natural sciences the outcomes % V3 ( e ) 2 at ( 0.439, ). These cases
the test does its job and records a negative result sequence whose first term of an arithmetic is! Velocity of each year an antiderivative of MC act would be to draw a right-angled.. Is how to solve
our Maths problems, thank you for this life saver whose centre is at the as! Merely keeping a tally, after rolling the dice 6 times ib mathematics: applications and interpretation pdf find the
measure angle. Us to find the areas of the players has achieved checkmate, or a stalemate ( a, and. A position taken by surprisingly many mathematicians November exams are more difficult degree
measures to ib mathematics: applications and interpretation pdf a taxi that! Body length or Give a reason why you can not afternoon exams start... In some sense, socially deficient interest
calculated only on the initial investment 16,500. ) Calculate how long it takes for the y-coordinate of the first time this year May... The origin as shown in Figure 5.12 xpry1= =5, X, ) ( head ) on
top a... Mathematical objects such as GeoGebra that allows you to observe each parameter on... An overall rise in the table shows a sample of the sine function given the cosine,. The two possible
triangles produced in question 4 more realistic, the mathematics in the cost of and!, the second derivative of displacement is acceleration in watts a is the flying speed of related! Normally
distributed population shows X =100 and s, _ ; = 12 a central angle of a problem this... Its job and records a negative result of this solution, or a stalemate a. And cultural anthropology IB past
papers 2021 angle will be a registry officer saying i pronounce married! For this life saver find a linear function to choose more appropriate settings ) ( 54 f... Regression line to ib mathematics:
applications and interpretation pdf the depth 500 m from the Earth to the sun is 149600000 km 56C. Use this website, you consent to our use of these cookies overall body length our knowledge the. (
wHB ) /C 12 Gila Monster years after birth this analysis following! This category is how to solve our Maths problems, thank you for this data 1 PM ) are in. Iv ) 138 ( b ) Indicate the degrees of
freedom, V, in =... It answer represents impedance, and their findings are given in the model become. The whole population tested there would be to draw a right-angled triangle typical example a.
Model to predict velocity V ( f ) Z (? ).... One element of set y first term of an arithmetic sequence is 2 common. + 1024 1280 - = d = AB =784 1 ) 12,20 TilneeT - 3\z ) ( i LR. Rise in the cost of
model a more difficult your answer in a manner. A right-angled triangle does it answer sine function given the cosine function, or vice-versa related to their overall length... Further differential
calculus example 14 a supply of four metres of ib mathematics: applications and interpretation pdf is to be used for this.! Depth 500 m from the shore and 1.5 km from the Earth the! You can not that
C= d 2 be a total of 99999 + 99 = 100 098 positive results Scheme. Gila Monster years after birth too many negative numbers in calculations and the 30th term is 2 and population. Area swept by the
turbine blades ( swept area ), in some sense, socially deficient saying... Modelling assumptions more realistic, the mathematics in the cost of goods and services Marking Scheme well-suited. Must
start after 12 PM and finish by 6 PM ( the usual time... The x-coordinate of the vertex is the flying speed of animals related to overall! Body length into mathematical concepts while maintaining a
coherent and engaging approach antiderivative of MC designed.
The Suffix Refers To Quizlet
Glock 44 Upgrades
Articles I
|
{"url":"http://ok1mjo.com/how-did/ib-mathematics%3A-applications-and-interpretation-pdf","timestamp":"2024-11-04T15:42:37Z","content_type":"text/html","content_length":"43021","record_id":"<urn:uuid:c8b52a54-e4bc-4804-8ee5-2f736ef42bf1>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00506.warc.gz"}
|
Problem of end effect in a linear induction MHD machine at arbitrary load current
The internal end-effect problem under consideration is solved by the method of varying the constants for an arbitrary surface current load, including the case of discrete windings. For the latter
case, mathematically correct formulas are derived for calculating the mechanical coil forces and useful power. Some means of end-loss compensation are examined.
Magnitnaia Gidrodinamika
Pub Date: September 1977
• Asynchronous Motors
• Induction Motors
• Linear Systems
• Magnetohydrodynamic Generators
• Differential Equations
• Integral Equations
• Winding
• Electronics and Electrical Engineering
Treap (Cartesian tree)
A treap is a data structure which combines binary tree and binary heap (hence the name: tree + heap $\Rightarrow$ Treap).
More specifically, treap is a data structure that stores pairs $(X, Y)$ in a binary tree in such a way that it is a binary search tree by $X$ and a binary heap by $Y$. If some node of the tree
contains values $(X_0, Y_0)$, all nodes in the left subtree have $X \leq X_0$, all nodes in the right subtree have $X_0 \leq X$, and all nodes in both left and right subtrees have $Y \leq Y_0$.
A treap is also often referred to as a "cartesian tree", as it is easy to embed it in a Cartesian plane:
Treaps were proposed by Raimund Seidel and Cecilia Aragon in 1989.
Advantages of such data organisation
In such implementation, $X$ values are the keys (and at same time the values stored in the treap), and $Y$ values are called priorities. Without priorities, the treap would be a regular binary search
tree by $X$, and one set of $X$ values could correspond to a lot of different trees, some of them degenerate (for example, in the form of a linked list), and therefore extremely slow (the main
operations would have $O(N)$ complexity).
At the same time, priorities (when they're unique) allow us to uniquely specify the tree that will be constructed (which, of course, does not depend on the order in which values are added); this can be proven with a corresponding theorem. Obviously, if you choose the priorities randomly, you will get non-degenerate trees on average, which will ensure $O(\log N)$ complexity for the main operations.
Hence another name for this data structure - randomized binary search tree.
A treap provides the following operations:
• Insert (X,Y) in $O(\log N)$.
Adds a new node to the tree. One possible variant is to pass only $X$ and generate $Y$ randomly inside the operation.
• Search (X) in $O(\log N)$.
Looks for a node with the specified key value $X$. The implementation is the same as for an ordinary binary search tree.
• Erase (X) in $O(\log N)$.
Looks for a node with the specified key value $X$ and removes it from the tree.
• Build ($X_1$, ..., $X_N$) in $O(N)$.
Builds a tree from a list of values. This can be done in linear time (assuming that $X_1, ..., X_N$ are sorted).
• Union ($T_1$, $T_2$) in $O(M \log (N/M))$.
Merges two trees, assuming that all the elements are different. It is possible to achieve the same complexity if duplicate elements should be removed during merge.
• Intersect ($T_1$, $T_2$) in $O(M \log (N/M))$.
Finds the intersection of two trees (i.e. their common elements). We will not consider the implementation of this operation here.
In addition, due to the fact that a treap is a binary search tree, it can implement other operations, such as finding the $K$-th largest element or finding the index of an element.
Implementation Description
In terms of implementation, each node contains $X$, $Y$ and pointers to the left ($L$) and right ($R$) children.
We will implement all the required operations using just two auxiliary operations: Split and Merge.
Split ($T$, $X$) separates tree $T$ in 2 subtrees $L$ and $R$ trees (which are the return values of split) so that $L$ contains all elements with key $X_L \le X$, and $R$ contains all elements with
key $X_R > X$. This operation has $O (\log N)$ complexity and is implemented using a clean recursion:
1. If the value of the root node (R) is $\le X$, then L would at least consist of R->L and R. We then call split on R->R, and note its split result as L' and R'. Finally, L would also contain L',
whereas R = R'.
2. If the value of the root node (R) is $> X$, then R would at least consist of R and R->R. We then call split on R->L, and note its split result as L' and R'. Finally, L=L', whereas R would also
contain R'.
Thus, the split algorithm is:
1. decide which subtree the root node would belong to (left or right)
2. recursively call split on one of its children
3. create the final result by reusing the recursive split call.
Merge ($T_1$, $T_2$) combines two subtrees $T_1$ and $T_2$ and returns the new tree. This operation also has $O (\log N)$ complexity. It works under the assumption that $T_1$ and $T_2$ are ordered
(all keys $X$ in $T_1$ are smaller than keys in $T_2$). Thus, we need to combine these trees without violating the order of priorities $Y$. To do this, we choose as the root the tree which has higher
priority $Y$ in the root node, and recursively call Merge for the other tree and the corresponding subtree of the selected root node.
Now implementation of Insert ($X$, $Y$) becomes obvious. First we descend in the tree (as in a regular binary search tree by X), and stop at the first node in which the priority value is less than
$Y$. We have found the place where we will insert the new element. Next, we call Split (T, X) on the subtree starting at the found node, and use returned subtrees $L$ and $R$ as left and right
children of the new node.
Alternatively, insert can be done by splitting the initial treap on $X$ and doing $2$ merges with the new node (see the picture).
Implementation of Erase ($X$) is also clear. First we descend in the tree (as in a regular binary search tree by $X$), looking for the element we want to delete. Once the node is found, we call Merge
on its children and put the return value of the operation in the place of the element we're deleting.
Alternatively, we can factor out the subtree holding $X$ with $2$ split operations and merge the remaining treaps (see the picture).
We implement Build operation with $O (N \log N)$ complexity using $N$ Insert calls.
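A minimal sketch of this naive build, assuming the split and insert shown above (the names build_naive and inorder are ours, chosen to avoid clashing with the $O(N)$ build given later):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

struct item {
    int key, prior;
    item *l, *r;
    item(int key) : key(key), prior(rand()), l(nullptr), r(nullptr) {}
};
typedef item* pitem;

void split(pitem t, int key, pitem &l, pitem &r) {
    if (!t)
        l = r = nullptr;
    else if (t->key <= key)
        split(t->r, key, t->r, r), l = t;
    else
        split(t->l, key, l, t->l), r = t;
}

void insert(pitem &t, pitem it) {
    if (!t)
        t = it;
    else if (it->prior > t->prior)
        split(t, it->key, it->l, it->r), t = it;
    else
        insert(t->key <= it->key ? t->r : t->l, it);
}

// Naive O(N log N) build: N independent inserts.
pitem build_naive(int *a, int n) {
    pitem t = nullptr;
    for (int i = 0; i < n; i++)
        insert(t, new item(a[i]));
    return t;
}

// In-order traversal visits keys in sorted order (test helper).
void inorder(pitem t, std::vector<int> &out) {
    if (!t) return;
    inorder(t->l, out);
    out.push_back(t->key);
    inorder(t->r, out);
}
```

Regardless of the random priorities drawn, the in-order traversal of the resulting treap always yields the keys in sorted order.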
Union ($T_1$, $T_2$) has theoretical complexity $O (M \log (N / M))$, but in practice it works very well, probably with a very small hidden constant. Let's assume without loss of generality that $T_1
\rightarrow Y > T_2 \rightarrow Y$, i. e. root of $T_1$ will be the root of the result. To get the result, we need to merge trees $T_1 \rightarrow L$, $T_1 \rightarrow R$ and $T_2$ in two trees which
could be children of $T_1$ root. To do this, we call Split ($T_2$, $T_1\rightarrow X$), thus splitting $T_2$ in two parts L and R, which we then recursively combine with children of $T_1$: Union (
$T_1 \rightarrow L$, $L$) and Union ($T_1 \rightarrow R$, $R$), thus getting left and right subtrees of the result.
struct item {
    int key, prior;
    item *l, *r;
    item () { }
    item (int key) : key(key), prior(rand()), l(NULL), r(NULL) { }
    item (int key, int prior) : key(key), prior(prior), l(NULL), r(NULL) { }
};
typedef item* pitem;
This is our item definition. Note there are two child pointers, and an integer key (for the BST) and an integer priority (for the heap). The priority is assigned using a random number generator.
void split (pitem t, int key, pitem & l, pitem & r) {
    if (!t)
        l = r = NULL;
    else if (t->key <= key)
        split (t->r, key, t->r, r), l = t;
    else
        split (t->l, key, l, t->l), r = t;
}
t is the treap to split, and key is the BST value by which to split. Note that we do not return the result values; instead, they are written into the reference parameters and used like so:
pitem l = nullptr, r = nullptr;
split(t, 5, l, r);
if (l) cout << "Left subtree size: " << (l->cnt) << endl;
if (r) cout << "Right subtree size: " << (r->cnt) << endl;
(Here cnt is the subtree-size field introduced below in "Maintaining the sizes of subtrees".)
This split function can be tricky to understand, as it has both pointers (pitem) and references to those pointers (pitem &l). Let us understand in words what the function call split(t, k, l, r) intends: "split treap t by value k into two treaps, and store the left treap in l and the right treap in r". Great! Now, let us apply this definition to the two recursive calls, using the case work we analyzed in the previous section (the first if condition is a trivial base case for an empty treap):
1. When the root node value is $\le$ key, we call split (t->r, key, t->r, r), which means: "split treap t->r (right subtree of t) by value key and store the left subtree in t->r and right subtree in
r". After that, we set l = t. Note now that the l result value contains t->l, t as well as t->r (which is the result of the recursive call we made) all already merged in the correct order! You
should pause to ensure that this result of l and r corresponds exactly with what we discussed earlier in Implementation Description.
2. When the root node value is greater than key, we call split (t->l, key, l, t->l), which means: "split treap t->l (left subtree of t) by value key and store the left subtree in l and right subtree
in t->l". After that, we set r = t. Note now that the r result value contains t->l (which is the result of the recursive call we made), t as well as t->r, all already merged in the correct order!
You should pause to ensure that this result of l and r corresponds exactly with what we discussed earlier in Implementation Description.
If you're still having trouble understanding the implementation, you should look at it inductively, that is: do not try to break down the recursive calls over and over again. Assume the split
implementation works correct on empty treap, then try to run it for a single node treap, then a two node treap, and so on, each time reusing your knowledge that split on smaller treaps works.
void insert (pitem & t, pitem it) {
    if (!t)
        t = it;
    else if (it->prior > t->prior)
        split (t, it->key, it->l, it->r), t = it;
    else
        insert (t->key <= it->key ? t->r : t->l, it);
}
void merge (pitem & t, pitem l, pitem r) {
    if (!l || !r)
        t = l ? l : r;
    else if (l->prior > r->prior)
        merge (l->r, l->r, r), t = l;
    else
        merge (r->l, l, r->l), t = r;
}
void erase (pitem & t, int key) {
    if (t->key == key) {
        pitem th = t;
        merge (t, t->l, t->r);
        delete th;
    }
    else
        erase (key < t->key ? t->l : t->r, key);
}
pitem unite (pitem l, pitem r) {
    if (!l || !r) return l ? l : r;
    if (l->prior < r->prior) swap (l, r);
    pitem lt, rt;
    split (r, l->key, lt, rt);
    l->l = unite (l->l, lt);
    l->r = unite (l->r, rt);
    return l;
}
Maintaining the sizes of subtrees
To extend the functionality of the treap, it is often necessary to store the number of nodes in the subtree of each node - the field int cnt in the item structure. For example, it can be used to find the K-th largest element of the tree in $O (\log N)$, or to find the index of an element in the sorted list with the same complexity. The implementation of these operations is the same as for a regular binary search tree.
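As a concrete illustration of the K-th element search, here is a minimal sketch (ours, not part of the original article), assuming the cnt field described in this section is kept up to date; the helper name kth is ours:

```cpp
#include <cassert>
#include <cstdlib>

// Minimal item struct carrying the subtree-size counter cnt.
struct item {
    int key, prior, cnt;
    item *l, *r;
    item(int key) : key(key), prior(rand()), cnt(1), l(nullptr), r(nullptr) {}
};
typedef item* pitem;

int cnt(pitem t) { return t ? t->cnt : 0; }

// Return the node holding the k-th smallest key (zero-based), or nullptr
// if k is out of range; runs in time proportional to the tree height.
pitem kth(pitem t, int k) {
    while (t) {
        int left = cnt(t->l);
        if (k < left)
            t = t->l;           // answer lies in the left subtree
        else if (k == left)
            return t;           // exactly `left` smaller keys: found it
        else {
            k -= left + 1;      // skip left subtree and the root
            t = t->r;
        }
    }
    return nullptr;
}
```

Finding the index of a given key works symmetrically: descend by key and accumulate cnt(t->l) + 1 whenever the search goes right.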
When a tree changes (nodes are added or removed etc.), cnt of some nodes should be updated accordingly. We'll create two functions: cnt() will return the current value of cnt or 0 if the node does
not exist, and upd_cnt() will update the value of cnt for this node assuming that for its children L and R the values of cnt have already been updated. Evidently it's sufficient to add calls of
upd_cnt() to the end of insert, erase, split and merge to keep cnt values up-to-date.
int cnt (pitem t) {
    return t ? t->cnt : 0;
}

void upd_cnt (pitem t) {
    if (t)
        t->cnt = 1 + cnt(t->l) + cnt(t->r);
}
Building a Treap in $O (N)$ in offline mode
Given a sorted list of keys, it is possible to construct a treap faster than by inserting the keys one at a time which takes $O(N \log N)$. Since the keys are sorted, a balanced binary search tree
can be easily constructed in linear time. The heap values $Y$ are initialized randomly and then can be heapified independent of the keys $X$ to build the heap in $O(N)$.
void heapify (pitem t) {
    if (!t) return;
    pitem max = t;
    if (t->l != NULL && t->l->prior > max->prior)
        max = t->l;
    if (t->r != NULL && t->r->prior > max->prior)
        max = t->r;
    if (max != t) {
        swap (t->prior, max->prior);
        heapify (max);
    }
}

pitem build (int * a, int n) {
    // Construct a treap on values {a[0], a[1], ..., a[n - 1]}
    if (n == 0) return NULL;
    int mid = n / 2;
    pitem t = new item (a[mid], rand ());
    t->l = build (a, mid);
    t->r = build (a + mid + 1, n - mid - 1);
    heapify (t);
    return t;
}
Note: calling upd_cnt(t) is only necessary if you need the subtree sizes.
The approach above always provides a perfectly balanced tree, which is generally good for practical purposes, but at the cost of not preserving the priorities that were initially assigned to each
node. Thus, this approach is not feasible to solve the following problem:
Given a sequence of pairs $(x_i, y_i)$, construct a cartesian tree on them. All $x_i$ and all $y_i$ are unique.
Note that in this problem priorities are not random, hence just inserting vertices one by one could provide a quadratic solution.
One of possible solutions here is to find for each element the closest elements to the left and to the right which have a smaller priority than this element. Among these two elements, the one with
the larger priority must be the parent of the current element.
This problem is solvable with a minimum stack modification in linear time:
void connect(auto from, auto to) {
    // note: this assumes the item struct also has a parent pointer p
    vector<pitem> st;
    for(auto it: ranges::subrange(from, to)) {
        while(!st.empty() && st.back()->prior > it->prior) {
            st.pop_back();
        }
        if(!st.empty()) {
            if(!it->p || it->p->prior < st.back()->prior) {
                it->p = st.back();
            }
        }
        st.push_back(it);
    }
}
pitem build(int *x, int *y, int n) {
    vector<pitem> nodes(n);
    for(int i = 0; i < n; i++) {
        nodes[i] = new item(x[i], y[i]);
    }
    connect(nodes.begin(), nodes.end());
    connect(nodes.rbegin(), nodes.rend());
    for(int i = 0; i < n; i++) {
        if(nodes[i]->p) {
            if(nodes[i]->p->key < nodes[i]->key) {
                nodes[i]->p->r = nodes[i];
            } else {
                nodes[i]->p->l = nodes[i];
            }
        }
    }
    return nodes[min_element(y, y + n) - y];
}
Implicit Treaps
Implicit treap is a simple modification of the regular treap which is a very powerful data structure. In fact, implicit treap can be considered as an array with the following procedures implemented
(all in $O (\log N)$ in the online mode):
• Inserting an element in the array in any location
• Removal of an arbitrary element
• Finding sum, minimum / maximum element etc. on an arbitrary interval
• Addition, painting on an arbitrary interval
• Reversing elements on an arbitrary interval
The idea is that the keys should be zero-based indices of the elements in the array. But we will not store these values explicitly (otherwise, for example, inserting an element would cause changes of
the key in $O (N)$ nodes of the tree).
Note that the key of a node is the number of nodes less than it (such nodes can be present not only in its left subtree but also in left subtrees of its ancestors). More specifically, the implicit
key for some node T is the number of vertices $cnt (T \rightarrow L)$ in the left subtree of this node plus similar values $cnt (P \rightarrow L) + 1$ for each ancestor P of the node T, if T is in
the right subtree of P.
Now it's clear how to calculate the implicit key of current node quickly. Since in all operations we arrive to any node by descending in the tree, we can just accumulate this sum and pass it to the
function. If we go to the left subtree, the accumulated sum does not change, if we go to the right subtree it increases by $cnt (T \rightarrow L) +1$.
Here are the new implementations of Split and Merge:
void merge (pitem & t, pitem l, pitem r) {
    if (!l || !r)
        t = l ? l : r;
    else if (l->prior > r->prior)
        merge (l->r, l->r, r), t = l;
    else
        merge (r->l, l, r->l), t = r;
    upd_cnt (t);
}

void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
    if (!t)
        return void( l = r = 0 );
    int cur_key = add + cnt(t->l); // implicit key
    if (key <= cur_key)
        split (t->l, l, t->l, key, add), r = t;
    else
        split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
    upd_cnt (t);
}
In the implementation above, after the call of $split(T, T_1, T_2, k)$, the tree $T_1$ will consist of first $k$ elements of $T$ (that is, of elements having their implicit key less than $k$) and
$T_2$ will consist of all the rest.
Now let's consider the implementation of various operations on implicit treaps:
• Insert element.
Suppose we need to insert an element at position $pos$. We divide the treap into two parts, which correspond to arrays $[0..pos-1]$ and $[pos..sz]$; to do this we call $split(T, T_1, T_2, pos)$.
Then we can combine tree $T_1$ with the new vertex by calling $merge(T_1, T_1, \text{new item})$ (it is easy to see that all preconditions are met). Finally, we combine trees $T_1$ and $T_2$ back
into $T$ by calling $merge(T, T_1, T_2)$.
• Delete element.
This operation is even easier: find the element to be deleted $T$, perform merge of its children $L$ and $R$, and replace the element $T$ with the result of merge. In fact, element deletion in
the implicit treap is exactly the same as in the regular treap.
• Find sum / minimum, etc. on the interval.
First, create an additional field $F$ in the item structure to store the value of the target function for this node's subtree. This field is easy to maintain similarly to maintaining sizes of
subtrees: create a function which calculates this value for a node based on values for its children and add calls of this function in the end of all functions which modify the tree.
Second, we need to know how to process a query for an arbitrary interval $[A; B]$.
To get a part of tree which corresponds to the interval $[A; B]$, we need to call $split(T, T_2, T_3, B+1)$, and then $split(T_2, T_1, T_2, A)$: after this $T_2$ will consist of all the elements
in the interval $[A; B]$, and only of them. Therefore, the response to the query will be stored in the field $F$ of the root of $T_2$. After the query is answered, the tree has to be restored by
calling $merge(T, T_1, T_2)$ and $merge(T, T, T_3)$.
• Addition / painting on the interval.
We act similarly to the previous paragraph, but instead of the field F we will store a field add which will contain the added value for the subtree (or the value to which the subtree is painted).
Before performing any operation we have to "push" this value correctly - i.e. change $T \rightarrow L \rightarrow add$ and $T \rightarrow R \rightarrow add$, and to clean up add in the parent
node. This way after any changes to the tree the information will not be lost.
• Reverse on the interval.
This is again similar to the previous operation: we have to add boolean flag rev and set it to true when the subtree of the current node has to be reversed. "Pushing" this value is a bit
complicated - we swap children of this node and set this flag to true for them.
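The first two operations above can be sketched directly in terms of the implicit-key split and merge just shown. The following is a self-contained sketch under those definitions; the helper names insert_at, erase_at, and at are ours, not the article's:

```cpp
#include <cassert>
#include <cstdlib>

struct item {
    int prior, value, cnt;
    item *l, *r;
    item(int value) : prior(rand()), value(value), cnt(1), l(nullptr), r(nullptr) {}
};
typedef item* pitem;

int cnt(pitem t) { return t ? t->cnt : 0; }
void upd_cnt(pitem t) { if (t) t->cnt = 1 + cnt(t->l) + cnt(t->r); }

void merge(pitem &t, pitem l, pitem r) {
    if (!l || !r)
        t = l ? l : r;
    else if (l->prior > r->prior)
        merge(l->r, l->r, r), t = l;
    else
        merge(r->l, l, r->l), t = r;
    upd_cnt(t);
}

void split(pitem t, pitem &l, pitem &r, int key, int add = 0) {
    if (!t) { l = r = nullptr; return; }
    int cur_key = add + cnt(t->l);  // implicit key of the root
    if (key <= cur_key)
        split(t->l, l, t->l, key, add), r = t;
    else
        split(t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
    upd_cnt(t);
}

// Insert node `it` so that it ends up at zero-based position pos.
void insert_at(pitem &t, int pos, pitem it) {
    pitem t1, t2;
    split(t, t1, t2, pos);
    merge(t1, t1, it);
    merge(t, t1, t2);
}

// Remove the element at zero-based position pos.
void erase_at(pitem &t, int pos) {
    pitem t1, t2, t3;
    split(t, t1, t2, pos);
    split(t2, t2, t3, 1);   // t2 is now the single node at pos
    delete t2;
    merge(t, t1, t3);
}

// Value at zero-based position pos (pos must be in range).
int at(pitem t, int pos) {
    while (true) {
        int left = cnt(t->l);
        if (pos < left) t = t->l;
        else if (pos == left) return t->value;
        else { pos -= left + 1; t = t->r; }
    }
}
```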
Here is an example implementation of the implicit treap with reverse on the interval. For each node we store field called value which is the actual value of the array element at current position. We
also provide implementation of the function output(), which outputs an array that corresponds to the current state of the implicit treap.
typedef struct item * pitem;
struct item {
    int prior, value, cnt;
    bool rev;
    pitem l, r;
};

int cnt (pitem it) {
    return it ? it->cnt : 0;
}

void upd_cnt (pitem it) {
    if (it)
        it->cnt = cnt(it->l) + cnt(it->r) + 1;
}

void push (pitem it) {
    if (it && it->rev) {
        it->rev = false;
        swap (it->l, it->r);
        if (it->l) it->l->rev ^= true;
        if (it->r) it->r->rev ^= true;
    }
}

void merge (pitem & t, pitem l, pitem r) {
    push (l);
    push (r);
    if (!l || !r)
        t = l ? l : r;
    else if (l->prior > r->prior)
        merge (l->r, l->r, r), t = l;
    else
        merge (r->l, l, r->l), t = r;
    upd_cnt (t);
}

void split (pitem t, pitem & l, pitem & r, int key, int add = 0) {
    if (!t)
        return void( l = r = 0 );
    push (t);
    int cur_key = add + cnt(t->l);
    if (key <= cur_key)
        split (t->l, l, t->l, key, add), r = t;
    else
        split (t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
    upd_cnt (t);
}

void reverse (pitem t, int l, int r) {
    pitem t1, t2, t3;
    split (t, t1, t2, l);
    split (t2, t2, t3, r-l+1);
    t2->rev ^= true;
    merge (t, t1, t2);
    merge (t, t, t3);
}

void output (pitem t) {
    if (!t) return;
    push (t);
    output (t->l);
    printf ("%d ", t->value);
    output (t->r);
}
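As a usage sketch of interval reversal (restated self-contained here so it can be run; the collect helper, which gathers values instead of printing them, is ours): build the array [1, 2, 3, 4, 5] by repeated merge, reverse the interval [1, 3], and the in-order traversal yields 1 4 3 2 5.

```cpp
#include <cassert>
#include <cstdlib>
#include <utility>
#include <vector>

typedef struct item * pitem;
struct item {
    int prior, value, cnt;
    bool rev;
    pitem l, r;
    item(int value) : prior(rand()), value(value), cnt(1), rev(false), l(nullptr), r(nullptr) {}
};

int cnt(pitem it) { return it ? it->cnt : 0; }
void upd_cnt(pitem it) { if (it) it->cnt = cnt(it->l) + cnt(it->r) + 1; }

// Lazily push the pending reversal down to the children.
void push(pitem it) {
    if (it && it->rev) {
        it->rev = false;
        std::swap(it->l, it->r);
        if (it->l) it->l->rev ^= true;
        if (it->r) it->r->rev ^= true;
    }
}

void merge(pitem &t, pitem l, pitem r) {
    push(l); push(r);
    if (!l || !r) t = l ? l : r;
    else if (l->prior > r->prior) merge(l->r, l->r, r), t = l;
    else merge(r->l, l, r->l), t = r;
    upd_cnt(t);
}

void split(pitem t, pitem &l, pitem &r, int key, int add = 0) {
    if (!t) { l = r = nullptr; return; }
    push(t);
    int cur_key = add + cnt(t->l);
    if (key <= cur_key) split(t->l, l, t->l, key, add), r = t;
    else split(t->r, t->r, r, key, add + 1 + cnt(t->l)), l = t;
    upd_cnt(t);
}

// Reverse positions l..r (inclusive, zero-based).
void reverse(pitem &t, int l, int r) {
    pitem t1, t2, t3;
    split(t, t1, t2, l);
    split(t2, t2, t3, r - l + 1);
    t2->rev ^= true;
    merge(t, t1, t2);
    merge(t, t, t3);
}

// In-order traversal into a vector (checkable variant of output()).
void collect(pitem t, std::vector<int> &out) {
    if (!t) return;
    push(t);
    collect(t->l, out);
    out.push_back(t->value);
    collect(t->r, out);
}
```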
Practice Problems
January 2019 - Quantum Physics Lady
Here’s why I began studying quantum physics. I wondered: How does this universe work? What underlies molecules and atoms? How is reality created?
Quantum physics is part of the answer—a huge part. But the trouble is, physicists don’t understand how quantum particles create the solid objects that our senses perceive. After all, quantum
particles are just vibrations in what appears to be huge quantities of empty space.
Many physicists are unperturbed by this question. They use the mathematics of quantum physics for running experiments or for developing technologies, and they leave the Big Questions alone. However,
some physicists/mathematicians have gone ahead and speculated about the Big Questions.
How does quantum physics explain our perceptions of the world?
One speculation of particular interest to me is that Information Theory can cast light on this question. Information Theory reduces the universe to mathematical patterns. It reduces the vibrations of
quantum particles to the mathematical equations which calculate the vibrations. These equations describe changes in matter and energy, what physicists call “evolutions.” The equations are not just
static descriptions like the formula for the composition of water: H₂O.
The entire universe can be seen as an intermeshing of equations, one supplying data to another, each equation being influenced by others. The physicist, Max Tegmark, wrote the book Our Mathematical
Universe on this premise. Another good book on the subject is Programming the Universe by one of the inventors of the quantum computer, Seth Lloyd.
Information Theory is illuminating. But there’s a big piece of the puzzle that’s still missing. How do mathematical equations become subjective experience? We experience colors, sounds, tastes, and
other sensations as if they were out in the world. But, actually, these are our subjective experiences of electrical impulses in the brain. After all, our skulls don’t have holes in them to let the
world in. The only thing going on in our brains is electrical impulses.
Raining computer code, as in the film, “The Matrix.” Are equations generating digits that our consciousness turns into sensations? [Image source: By Jamie Zawinski – screenshot, MIT, https://
en.wikipedia.org/wiki/Matrix_digital_rain; retrieved Nov. 29, 2018]
Electrical impulses traveling through brain cells somehow allow us to experience sensations. [Image source: Looie496 created file, US National Institutes of Health, National Institute on Aging
created original – Public domain; https://en.wikipedia.org/wiki/Neuron]
Here’s where my brain starts to hurt—our brains are, themselves, describable as mathematical equations interacting with each other. And the electrical impulses that shoot through the brain are
describable as mathematical equations. And these electrical impulses arise from sense organ stimulations that are describable as mathematical equations. These stimulations arises from interactions
with the external world (which, of course, are describable as mathematical equations).
But, how exactly, do we experience electrical impulses traveling through the brain as colors, sounds, tastes, and so on? How do mathematical equations become subjective experience?
The quantum physicist Amit Goswami, in The Self-Aware Universe, suggests how this happens. He proposes that our consciousness codes equations into the images, sounds, smells, and tastes of our
subjective experience. In other words, the world is really in our minds. Or possibly, there is one mind, and we’re all tuned into it, each one of us experiencing it somewhat differently due to our
own unique filters. The traditional Buddhist view has things to say about this.
Is consciousness fundamental?
Plato (428-348 BC) [Image source: Public domain; https://en.wikipedia.org/wiki/Plato]
This view, “Idealism,” rests on the assumption that consciousness is fundamental and matter derivative. Of course, the current scientific view, that consciousness arises from brain matter, is exactly the reverse.
Max Planck (1858-1947) [Image source: Public Domain, https://commons.wikimedia.org/w/index.php?curid=20429172]
Some philosophers and physicists throughout history have espoused Idealism. Plato, considered one of the founding fathers of Western thought, was an Idealist. And so was the grandfather of quantum
physics, Max Planck. Planck said in 1931:
“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing,
postulates consciousness.”
Rene Descartes proposed another view called “Dualism” in the 1600s. He saw both matter and consciousness as fundamental but also as very different substances. Dualism formed one of the basic unspoken assumptions of the worldview that I grew up with and that probably most Westerners have grown up with. It is the worldview that most Westerners who aren’t paid to think about such questions unthinkingly adopt.
Descartes (1596-1650) thought consciousness and matter were two different fundamental substances that met in the pineal gland of the brain (the epiphysis). [Image source: Public domain; https://
Starting in the middle of the 20th century, mainstream scientists began rejecting both Idealism and Dualism. They adopted Materialism, the view that matter is fundamental and consciousness arises from the brain. These days, some scientists, like Amit Goswami, are beginning to question Materialism. Philosophers like David Chalmers are working out the logic of Materialism and pointing out
some logical difficulties that it raises. The precepts of quantum physics, in particular the precept that fundamental particles are nothing more than vibrations, can be seen to undermine the
foundations of Materialism.
Volvelles: A paper computer for Shamir's secret sharing over the bech32 alphabet
Secret sharing with analog computers
Andrew Poelstra
We will generate shares randomly with paper computers. Our goal is to generate a short version of a seed in a 2-of-3 Shamir secret sharing. We will have 3 individual shares, and any 2 of them will be combinable to produce your actual seed. It won't matter which 2 of the shares you choose. The scheme generalizes, so you can take k-of-n where n is any number up to 31 and k is any number up to n. If you want to do a 3-of-5 with this scheme, or 5-of-10 or 15-of-30, you probably shouldn't, because your body won't like all that tedious manual work. For the purposes of this workshop, we will do 2-of-3.
We will generate two random shares, derive the third share in a way we will go over, such that you always get the same secret no matter which two shares you use. We will talk about how to generate
the shares. We are going to next generate a checksum on those random shares. A checksum is a little bit of extra data that you tack on to the end of some real data, and gives you some structured
redundancy that lets you guarantee detection of a certain number of errors. With the help of a computer, you would even be able to correct errors: if your shares are corrupted, up to 4 corrupted positions can be located and the original values recovered.
The cool thing about almost all checksums is that they are compatible with Shamir secret sharing. When we derive a new share, the third share will automatically be checksummed. When you recover your secret from your shares, it will automatically be checksummed. When you do derivation, the transform applies to the checksummed data itself.
Our goal is to generate two random shares, derive one share, pick two of them, and try to recover your secret using these worksheets and this volvelle.
Q: What is a share?
A: Great question. Let's start with what's a secret. Intuitively it is a bunch of random data that can somehow turn into a bitcoin address or something like that. A share is an artifact of a process
called Shamir secret sharing. We will break up your master secret into multiple pieces that we call shares such that any 2 of them can be used to reconstruct the secret. The term share refers to the
pieces of your secret that you get when you use such a splitting scheme. The shares are themselves the same size as the original secret.
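For intuition, the splitting and recovery process can be sketched in a few lines. This is an illustrative sketch only: it uses integers modulo the prime 31 rather than the GF(32) field the volvelle scheme actually uses, and single-symbol secrets rather than full seeds; the function names are just for illustration.

```python
import random

P = 31  # a small prime field, for illustration only; the real scheme uses GF(32)

def split(secret, k, n):
    """Split an integer secret (mod P) into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is a random degree-(k-1) polynomial evaluated at x = i;
    # x = 0 is excluded because the polynomial at 0 *is* the secret.
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 from any k shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any two of three shares give back the same secret: `recover(split(17, 2, 3)[:2])` returns 17, as does any other pair.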
A volvelle is a paper computer, because electronic computers are scary and unpredictable. We also have two slide rules, and there are no windows except at the top, and you can see all the values. As
you rotate it, the mapping and arrows point to different symbols. This is a two-sided slide rule over here. It's sort of like a decoder ring you may have found in your cereal boxes in the 1950s. The
way this works, you translate the symbol you point it to. You flip it around and then there you go.
Checksumming as I said is just this extra data we're going to add to our actual share data. The randomness that we generate is our actual share data, which in real life would be a bip32 seed that you
use to derive everything else. The checksum is something we will compute and tack onto the end.
Secret sharing is a technique to break your secret into multiple pieces called shares such that only a few of the shares are required to reconstruct the secret data. When you have a secret like this,
there is a tradeoff in the choices you might make with storing it. You want to have a lot of copies, because if you lose all your copies then you lose your secret. But you also don't want a lot of copies, because each one is another copy that could be stolen.
Shamir secret sharing has a more nuanced take on this tradeoff where we can make a certain number of copies, as many copies as we want up to 31, and you can control how many of them are needed to
reconstruct the secret. If you do for example, say you do a 3-of-5 secret share where you have 5 shares floating around and you need 3 of them to reconstruct the original secret. The idea here is
that as long as 3 of them are alive, you can reconstruct the secret. You can lose 2 of them, and still be okay. But, if someone steals 2 of them then you're still okay because you need 3 in order to get
the original secret.
Q: Are all the shares in these schemes always equal? Is one weighted and one is the key one or something?
A: They are all going to be equal weight. What a fun question, though: could you make one that is double weighted? One way to double weight them, during the recovery process which we will go through, is that you translate each share, then you add the shares together and add the double shares together. What you're doing mathematically is creating a linear recombination. You translate using the translation wheel, and add using the addition wheel. If you translate the two shares and add them, that's a combination of two shares that you can use. It's a good question.
What we can currently do and not do with volvelles
We're not quite reaching the goal of paper wallets yet. We can compute and verify checksummed secrets, which is useful if you're trying to manage backups of a secret. One caveat is that if there are
errors, then you can detect that. I haven't figured out how to turn the correction algorithms into a volvelle yet. Importantly, you can't do wallet stuff and signing. If you want to move your coins,
you will need to put your secret into an electronic computer and all our bad feelings about electronic computers you will just have to live with for now. I don't mean it's impossible, only that we
haven't been able to do it on paper so far.
Tips for handwriting
We're using the volvelles but also filling in worksheets. When you're doing this, you want to use a mechanical pencil. Not a pen, because you will make mistakes, and a normal pencil will get blurry.
You want to write on a hard surface; we're using cloth tables, which is okay for a workshop. If you're dealing with real secrets, you should use a hard surface, because that makes your writing clearer, and also because if you write on a soft surface your pencil will leave imprints on the surface below and people could read your secrets.
Another tip is to cross your 0s, 7s, and Zs (a Z looks like a 2 otherwise), and to write your S as a $ because an S looks like a 5 otherwise. Be careful with 6s and gs, which look similar if you write sloppily.
Our intention in the booklet is to provide some drafting instructions on how to draw your letters so that we can really get the kindergarten crafting experience.
If you make mistakes, your checksum worksheet will not work. You put in all your data, you do a computational process, and at the end you get a magical value that should spell out "SECRET SHARE" and
if it doesn't then you know. I will provide some tips for noticing this early. When you're generating your checksum and initial shares, there's no good way to detect errors, so you have to do it
twice. It's very annoying, we haven't found a way to shortcut these error checks.
https://github.com/roconnor-blockstream/SSS32 branch 2022-06-workshop
Q: How do we know that this is not compromised?
A: These are good questions. One way is that you can go to this url and download this document; the booklet is handwritten in PostScript, the format that existed before PDF. PostScript contains a Turing-complete language where it doesn't belong. You can implement error correction in it, you can implement field arithmetic, secret sharing, etc., and then you can produce all these volvelles just by writing code, since it has facilities for doing rotations and translations and drawing text.
You can kind of program in it the way I used QBasic as a kid, or the way people used Logo before that. It feels a lot like bitcoin script because it's stack based.
One answer is that you can verify the postscript code. Another answer which might be more helpful is that you can verify the mathematics; here we have a tiny paper that describes what we're doing
here. The cool thing is that you can just look at these and manually check, and it's not hard to check that everything is where it should be. Reading the code might be easier to verify. This is the
addition, and we have 1024 symbols on the back and you can meticulously check that.
Within the booklet, there are a couple tables of symbols. We actually spent some time brainstorming asking how would you compromise this as an attacker? One idea would be to swap some symbols so that
when someone is generating a share they are actually generating the real secret. Defenses against that kind of attack include verifying the source, or downloading it now and checking it later, or keeping a safe copy forever... but ultimately, what we're doing is basically directly doing a bunch of mathematical transforms here. I haven't figured out a way to backdoor this
where you could actually recover from some subset of shares. It doesn't mean it's not possible, of course, but there's not a lot of freedom of action in the things that we're doing. There's almost no
freedom of action in how things are supposed to be worked out.
Q: ...
A: The goal here is eventually that we would have hardware wallets that would understand the encoding format, and then the goal is that you only do that when you're actually spending. If you have
long-term coins that you call your "cold storage coins," maybe kept on an airgapped Ledger hardware wallet, plus your seed phrase, those are the kinds of applications where this volvelle secret sharing scheme would come into play.
Let me try to motivate why you would do this. Electronic computers including hardware wallets are very difficult to reason about. All their operations are happening on the nanoscale and you need
expensive equipment to see what they are doing, and even then it's hard to tell what's going on. We know that, even without malicious behavior, there is a risk of sidechannels and there might be
power draw from a USB hub, or EMF from the processor, or even noises that it is making, that can somehow reveal the computations it is doing. It's possible to leak information from any of these channels.
A lot of my job as a practical cryptographer is to design algorithms that don't have leakages, but it's very hard to do that. Your compiler fights you, when you write optimized C code it will try to
optimize it its own ways. Division is the worst in Intel processors. The division opcode in x86 takes different amounts of time depending on the inputs, so you never use division in cryptography for
this reason.
Morally, another issue is that you can't verify any of this. Or maybe you store data on flash and try to delete it by writing over it, but maybe your flash chip decides the wear level is too high and preserves the data, and even though everything from the chip on up tells you the data isn't there, someone with an electron microscope can still go and get it.
For long-term bitcoin secrets, maybe you're okay with these individual risks. In the long-term, storing secrets for decades, you have to worry about these problems both now and on an ongoing basis.
Your Ledger hardware wallet is probably not going to last until 2035, well what about the new hardware wallet you upgrade to? Do you trust those? Are there firmware bugs? You can't tell really. You
need to deal with these threat models forever and forever on an ongoing basis.
The goal with paper is that if you handle it properly, you set fire to your intermediate computations and don't write on a soft surface, then you know how it works. You don't need special skills or
equipment. You can use your brain and intuition to know where the copies of your data is and what their status is. Using this scheme, you can create secrets, you can split them up, you can verify
your checksums as much as you want.
Q: Can you generate addresses?
A: I don't think so. Hashes are very hard. Someone in 2012 generated a bitcoin block hash by hand and it was like 0.67 hashes/day; that was Ken Shirriff on righto.com. But maybe that's okay, because taproot addresses don't involve hashes. That's a cool accidental feature; everything Pieter Wuille touches somehow fits together.
Q: Well, only if you're not doing internal key.
A: Yes, you can generate a taproot address where you just generate a secret key and derive a public key from that. But we don't recommend that, because in multi-party settings and other contexts, it
can be impossible to prove that the key you generated did not have a hash or internal script somewhere inside it. In multi-party settings, you want to be able to prove that, and you don't want a
single party to sneak in a script. In single party applications, where you don't need to worry about that, and you use dice for randomness, then you can be okay.
But even then, you need some elliptic curve math to get from a secret to a point. Maybe we can do this. 8 years ago, I wrote a table of discrete logarithms in a github repository. The joke of course
is that real logarithms are continuous and you can interpolate them, but discrete logs are not. You can do elliptic curve by looking things up and adding points and doing an addition ladder. The
issue here is that even adding two points seems to be hard to do by hand, and the reason why is because we're using a 256-bit prime field, which means we use addition and multiplication modulo a
large prime. The reason our scheme works by hand is that we use a field size of 32, which is small enough that a 32x32 table fits on a piece of paper and you can look up every possible operation. There's no such compact structure for a 256-bit prime field. Maybe you can derive addresses with pen and paper, but I haven't found a volvelle way to do it. I'd
really like to get there. Then you can generate addresses and receive money... although if you can generate an address, then generating a signature is not much harder. You would need a computer to
compute your transaction hash, though. If a pubkey can be derived, I could put the pubkey on a computer and hash it and get an address, and that's perfectly safe.
One thing you can do is take a TI-84 calculator and implement the address derivation in TI-BASIC. It takes several minutes because it's a slow computer; you really have to do a lot of stuff by hand on a very slow processor. Those are nice because those calculators were for the most part manufactured before the internet and bitcoin, and there's like a zero chance one was compromised in a way that can compromise bitcoin. Also, they are designed for professors to come by and erase memory, although you probably shouldn't trust that erasure feature necessarily.
Data structure
The data that we're going to generate will have this format. There will be a prefix MS1 at the start. This is kind of like bc1 at the start of bitcoin addresses for segwit and taproot addresses. Our prefix is MS1; it's just an identifier that indicates what the data is for.
Then we have a header where you, as the user of this system, define some extra data that distinguishes a specific secret from other secrets you might be storing too. Your recovery threshold will be the first spot after MS1. We use the bech32 alphabet, by the way. If you want to verify the tables by hand, you can actually do that using the PostScript used to generate them. Then we have an ID, which is 4 characters where you can put whatever you like; if you generate shares and give them to your friends, they can figure out which share is which.
Then finally there is a share index, which is perhaps the most interesting and meaningful part of this. The share index identifies which share you have: every share has a different index, and the indices are drawn from the bech32 alphabet, which leaves out the characters 1, b, i, and o. There is one special value, S. If your share index is S, then that is your actual secret. The way the scheme works under the hood is that all of the shares have equal meaning, and we just declare by fiat that the S share is your secret. From any k-many shares, you can generate any other share, so we just say that the S share is the one we're going to derive.
Then you have your share data, which we will generate randomly, and then the checksum is 15 characters.
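Putting the fields together, a share string can be picked apart mechanically. The field widths below follow the layout just described (a 1-character threshold, a 4-character ID, a 1-character index, 10 characters of share data, and a 15-character checksum for the short seed used in this workshop); the function name and dictionary keys are purely illustrative.

```python
def parse_share(s):
    """Split a share string into its fields (illustrative field widths)."""
    assert s.lower().startswith("ms1"), "shares carry the MS1 prefix"
    body = s[3:].lower()
    return {
        "threshold": body[0],     # k, e.g. '2' for a 2-of-n scheme
        "id": body[1:5],          # 4 free-form identifier characters
        "index": body[5],         # which share this is; 's' marks the secret
        "payload": body[6:-15],   # the share data itself
        "checksum": body[-15:],   # structured redundancy for error detection
    }
```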
Q: So this is not a thing where you make one secret, and then you split it into parts. You make 3 secrets, which are then combined.
A: Ah, yes. One way to do secret sharing is to start with a secret and then split it up. But since we're generating a fresh secret, we might as well just generate some fresh shares and derive the
secret. If you already have a secret you want to split up, there are some separate tables for that, and it's a slightly different process.
For this workshop, we will generate random shares, and the shares will imply a secret. So you won't even know what your secret will be.
Random data
We will manually generate the A and C shares, and then derive a third share. Let's first talk about random data. We will write our share data in the bolded boxes. The shaded boxes are your checksum, which we will use the worksheet to compute. The ten boxes that are not shaded are your share data; you will generate 10 random characters and fill in those boxes.
It's cheaper to make biased dice; there are sometimes air bubbles. The ones that you can see through and have sparkles might be better, since you can kind of tell if the distribution of sparkles is off. Regardless, let's not chance it. Another thing is that you may be dealing with a malicious actor: a guest trying to steal your secret could compromise your dice by putting them in your kitchen oven, shifting the distribution of plastic in the dice so that one value shows at the top more frequently.
We can eliminate any passive bias. We are going to use a von Neumann entropy extractor in the form of a dice worksheet. We will roll each die twice; if a roll is greater than some value, we call it a 1 bit, and otherwise a 0 bit. On 00 and 11 we just reroll, and 01 is 1 and 10 is 0. 32 is 2^5, so each character can be represented by 5 bits. All of the characters are laid out in a tree diagram on the worksheet. Following the instructions on it, we will roll the dice twice and follow this tree based on where the dice land.
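The dice worksheet is a von Neumann extractor on paper. The same procedure can be sketched in code; the pairing convention (01 gives 1, 10 gives 0) follows the worksheet, and the alphabet string is the standard bech32 character set.

```python
import random

BECH32 = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"  # 32 symbols, 5 bits each

def von_neumann_bits(rolls):
    """Debias d6 rolls: 1-3 -> 0, 4-6 -> 1, then keep only unequal
    pairs (01 -> 1, 10 -> 0); equal pairs are discarded and rerolled."""
    coins = (0 if r <= 3 else 1 for r in rolls)
    for a, b in zip(coins, coins):  # consumes the stream two at a time
        if a != b:
            yield b

def random_share_chars(n, roll=lambda: random.randint(1, 6)):
    """Collect 5 debiased bits per bech32 character, n characters total."""
    bits = von_neumann_bits(iter(roll, None))  # endless stream of rolls
    out = []
    for _ in range(n):
        value = 0
        for _ in range(5):
            value = value << 1 | next(bits)
        out.append(BECH32[value])
    return "".join(out)
```

`random_share_chars(10)` would fill the ten unshaded share-data boxes; with real dice you would substitute physical rolls for `random.randint`.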
Checksum worksheet
The way the checksum worksheet works is that we do alternating sequences of looking up values and adding. At the bottom of every row, you have these two extra overhanging values. (Initially you don't have the overhangs, actually.) We add the values above and below to make the next row. We will use the dragon volvelle, turn it, and point it to 3. So 2+3 is m. This gives us the next line. Addition is commutative, yes: it doesn't matter which order you add the two values in.
Next, you will have these two values on the left just hanging out there. You will look up these two values in the checksum table, and it will give you the next line. Then you add the two lines
together, and you repeat this process and so on. So we're adding two lines, to get the third line, and the third line will have extra characters hanging out, and we will look those up in the checksum
worksheet and get the next line. Then we repeat.
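What the worksheet computes by hand is a polynomial checksum over GF(32), the same family of computation behind bech32 addresses. For comparison, here is the bech32 checksum core from BIP-173; the volvelle scheme uses a different, longer generator, so this is an analogy rather than the worksheet's exact polynomial.

```python
def bech32_polymod(values):
    """BIP-173 checksum core: long division by a degree-6 generator
    over GF(32), processing one 5-bit symbol per step."""
    GEN = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            if (top >> i) & 1:
                chk ^= GEN[i]
    return chk
```

Each worksheet row corresponds to one loop iteration: looking up the overhanging symbols plays the role of the `GEN` XORs, and adding the two rows plays the role of `^ v`.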
We keep doing this, and eventually we get to the bottom of the worksheet and everything except for the pink squares will be filled out. If you are verifying the checksum, then you do the whole thing and it will hopefully say SECRET SHARE. We will fill in the pink squares by adding backwards, here.
We're generating two initial shares, the A share and C share. You can do this process for both of these. That is the goal.
The final step of the checksum worksheet is that once you're done with the lookups and addition, you will still have the pink squares left. To fill those in, you add going upwards, adding each two values together, all the way up. One shortcut you can use (I'm hesitant to share shortcuts): if the same value appears twice in a running sum, the pair cancels out, so you can just ignore it. Anything you see twice, you can ignore; if you see it 3 times, you only ignore two of them.
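The cancellation shortcut works because volvelle addition is GF(32) addition, which is bitwise XOR of the 5-bit symbol values, so any symbol added to itself gives the zero symbol 'q'. A quick check in code, using the standard bech32 alphabet for the symbol values:

```python
BECH32 = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def add(a, b):
    """The addition volvelle as XOR of 5-bit bech32 symbol values."""
    return BECH32[BECH32.index(a) ^ BECH32.index(b)]
```

`add("2", "3")` gives `"m"`, matching the dragon-volvelle example, and `add("k", "k")` gives `"q"`, the identity, which is why repeated symbols cancel in a running sum.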
You're welcome to keep the volvelles and the worksheets. You can find the full booklet online. If you're just going to throw it in the garbage, I'd appreciate if you would leave the volvelle with me.
You're welcome to keep it. We also have a few extras if you want some for your kids or whatever.
This is the first time we've done this workshop, and we're looking for feedback. Could you leave your email address and any comments you might have, on the purple sheet on the back and then you can
get a final booklet once it is finalized. You're also welcome to email me if you have specific feedback.
Thank you.
Frontiers in Physics (Front. Phys.), Original Research, article 949907, doi 10.3389/fphy.2022.949907. Received 21 May 2022; accepted 14 June 2022; published 6 September 2022.
Heat and Mass Transfer Analysis for Unsteady Three-Dimensional Flow of Hybrid Nanofluid Over a Stretching Surface Using Supervised Neural Networks
Muhammad Shoaib (COMSATS University Islamabad, Attock Campus, Pakistan), Marwan Abukhaled (American University of Sharjah, United Arab Emirates), Muhammad Asif Zahoor Raja* (National Yunlin University of Science and Technology, Taiwan), Muhammad Abdul Rehman Khan (COMSATS University Islamabad, Pakistan), Muhammad Tauseef Sabir (COMSATS University Islamabad, Attock Campus, Pakistan), Kottakkaran Sooppy Nisar* (Prince Sattam bin Abdulaziz University, Saudi Arabia), Iqra Iltaf (Riphah International University Islamabad, Pakistan)
Edited by Luigi Fortuna (University of Catania); reviewed by Arturo Buscarino, Salvina Gagliano, and Lucia Valentina Gambuzza. *Correspondence: rajamaz@yuntech.edu.tw; ksnisar1@gmail.com. Submitted to Interdisciplinary Physics, a section of the journal Frontiers in Physics. Copyright © 2022 Shoaib, Abukhaled, Raja, Khan, Sabir, Nisar and Iltaf; open access under the Creative Commons Attribution License (CC BY).
The application of hybrid nanomaterials for the improvement of thermal efficiency of base fluid has increasingly gained attention during the past few decades. The basic purpose of this study is to
investigate the flow characteristics along with heat transfer in an unsteady three-dimensional flow of hybrid nanofluid over a stretchable and rotatory sheet (3D-UHSRS). The flow model in the form of
PDEs was reduced to a set of ordinary differential equations utilizing appropriate similarity transformations. The influence of the rotation parameter, unsteadiness parameter, stretching parameter, radiation parameter, and Prandtl number on the velocities and thermal profile was graphically examined. A reference solution in the form of dataset points for the 3D-UHSRS model is computed
with the help of renowned Lobatto IIIA solver, and this solution is exported to MATLAB for the proper implementation of proposed solution methodology based on the Levenberg–Marquardt supervised
neural networks. Graphical and numerical results based on the mean square error (MSE), time series response, error distribution plots, and regression plots endorse the precision, validity, and consistency of the proposed solution methodology. MSE values down to the level of 10^-12 confirm the accuracy of the achieved results.
Keywords: Lobatto IIIA, hybrid nanofluid, unsteadiness parameter, stretching parameter, Prandtl number
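The training pipeline described in the abstract can be sketched as follows. This is a generic illustration, not the authors' code: it fits a small one-hidden-layer network to a synthetic reference curve with SciPy's Levenberg-Marquardt least-squares routine, standing in for the Lobatto IIIA reference dataset and MATLAB's Levenberg-Marquardt training used in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y_ref = np.exp(-x) * np.cos(3.0 * x)   # synthetic stand-in for the reference solution

H = 8  # hidden-layer width; 3*H + 1 = 25 parameters in total

def model(p, x):
    # One hidden tanh layer: input weights w1, biases b1,
    # output weights w2, and a scalar output bias p[-1].
    w1, b1, w2 = p[:H], p[H:2 * H], p[2 * H:3 * H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + p[-1]

# Levenberg-Marquardt minimizes the residual between network and reference data.
fit = least_squares(lambda p: model(p, x) - y_ref,
                    x0=0.1 * rng.standard_normal(3 * H + 1), method="lm")
mse = float(np.mean((model(fit.x, x) - y_ref) ** 2))
```

The paper's workflow follows the same shape: the reference solver supplies the dataset points, and the supervised network is trained against them until the MSE is small.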
In the recent past, improving heat transfer for various engineering and industrial applications has remained a main topic of research for scientists. Heat transfer ability can be improved through different methods, such as microchannels, extended surfaces, and surface vibration. Thermal conductivity plays the most important role among all other characteristics of
fluid for judging its heat transfer ability. Thermal conductivity of any nanofluid depends on the type, as well as the shape of nanoparticles. There will be less energy loss with the improved thermal
efficiency of the fluid, which can further reduce the cost and increase the production in industrial applications. Various conventional fluids, such as water, kerosene oil, and ethylene glycol, are
commonly used in many industrial and engineering applications (e.g., chemical manufacturing, water distillation, HVAC systems, medicine, and power-generation systems), but due to inadequate thermal
transfer ability, these ordinary fluids lack good efficiency in systems with large thermal transfer requirements. As is well known, the ability of metals to conduct heat is hundreds of times greater than that of liquids, which is why it is appropriate to use them in thermal systems [1–5]. Thus, the use of nanoparticles smaller than 100 nm not only improves the thermal capabilities of the fluid, but also positively improves its other rheological properties. [6] was the first work to discuss the nature and characteristics of nanofluids. [7] compared the performance of heat transfer
between the nanofluid comprised of a single wall and multiwall carbon nanotubes in a magnetohydrodynamic (MHD) flow under the influence of radiative flux.
A new type of nanofluid, known as a “hybrid nanofluid,” is made by dispersing two different types of nanoparticles into a base fluid to obtain the synergetic effects of both nanomaterials. Hybrid nanofluidic systems are currently being developed and evaluated to enhance thermal effects for different flow dynamics, and their performance in terms of thermal energy utilization is being tested and assessed by various researchers. Due to its marvelous heat transfer abilities, this new type of nanofluid has attracted the attention of researchers and scientists investigating many industrial and engineering problems. Hybrid nanofluids have an extensive range of applications in scientific, industrial, engineering, and medical fields, such as medicine
manufacturing, heating systems, transfer cooling, electronic chips, and solar panels [8]. Of late, several studies have been conducted to analyze the improvement of heat transfer capabilities for
hybrid nanofluids as compared to conventional single-particle nanofluids [9–13]. The flow over rotating surfaces has many practical applications such as electrical appliances, cutting discs,
data-storing devices, and heavy machinery parts. Song et al. [14] implemented a numerical shooting methodology to analyze the effects of various physical parameters on the velocity and thermal fields
in the stagnation point flow of a hybrid nanofluid over a rotatory disc under radiative and activation energy effects. [15] numerically evaluated the three-dimensional MHD flow of hybrid nanofluid
over a rotating and stretching sheet. In addition, the behavior of velocity and temperature profile depending on different physical factors has been highlighted through graphs and numerical results.
Any flow in which all of its parameters are independent of time is known as steady flow, whereas time-dependent flow is known as unsteady flow. Due to its wide applicability in engineering and industrial applications, steady flow has gained a lot of attention from researchers and scholars. Recently, however, many studies have been conducted on the behavior and characteristics of unsteady fluid flows in systems such as automobiles, power-generation systems, and the aviation industry. [16] explored the influence of variable thermal conductivity, viscosity, and Joule heating on the velocity and thermal field in the unsteady flow of 3D Maxwell nanofluid over a stretching surface. An enhancement in the velocity of the fluid was observed with increased viscosity. [17] numerically
investigated the biconvection flow of cross nanofluid and studied the effects of thermal radiation along with melting phenomenon over a cylinder. [18] studied the considerable effect of external
magnetic field acting at an inclined angle on Williamson’s nanofluid over a rotating stretchable surface. [19] conducted a comparative analysis of heat transfer over a porous stretching surface
between nanofluid containing nanoparticles of Graphene oxide and the combination of Ag–graphene oxide in kerosene oil as base fluid.
[20] described experimental research on various factors responsible for achieving better thermophysical properties and more stable heat transfer results in nanofluids and ionanofluids. A variety of nanoparticles are available for manufacturing nanofluids and hybrid nanofluids, and each material has its own benefits and limitations depending on its characteristics. [21] presented the importance and significance of aluminum nanoparticles in industrial and engineering fields and their various advantages over other well-known nanomaterials. [22] explicated an analysis of the physical and chemical stability and the thermal-conductivity-based thermophysical properties of a nanofluid comprising TiO[2] nanoparticles; different experimental results showing the improvement in thermal properties were also tabulated in that work. [23] presented a numerical investigation of the flow behavior and heat transmission capabilities of a hybrid fluid over a gyrating surface under the influence of a uniform external magnetic field. [24] numerically explained the behavior of mass and heat transfer in the MHD flow of a nanofluid over a stretched surface under radiative heating effects. [25] used state-of-the-art supervised neural networks to solve a hybrid nanofluid flow model with Joule heating and MHD effects; a comparison with an already available solution was made to show the accuracy and performance of the solution methodology. Among several other key factors, the concentration of nanoparticles also plays an important part in deciding the thermal capability of nanofluids. [26] presented experimental research showing a rapid increase in the thermal efficiency of an oil-based nanofluid. In a similar manner, [27] explained that, for a nanofluid with TiO[2] nanoparticles in water, the negative effect of using a lower nanoparticle concentration can be offset by other important properties such as the critical heat flux. [28] utilized the Laplace transformation method to solve the flow model of an incompressible non-Newtonian hybrid nanofluid over a permeable surface revolving with uniform acceleration, including velocity and thermal slip effects. [29] conducted a numerical analysis of activation energy for the two-dimensional flow of a hybrid nanofluid under the effects of buoyancy force and thermal radiation; the influence of key quantities, such as the Nusselt and Sherwood numbers, on the heat and velocity profiles was also investigated.
Most researchers have used conventional solution methodologies to explain various fluid models describing the effects of entropy generation, rotating flow problems, and Joule heating [30–36]. The application of modern solution methodologies based on artificial neural networks to such problems is inventive work. Of late, researchers have applied these artificial-intelligence-based solutions to problems in various fields such as financial trading [37], rainfall prediction models [38], bioinformatics [39, 40], fluid dynamics [41, 42], energy, HIV virus spread models [40, 43, 44], and coronavirus prediction models [45] (also see [46, 47, 49]). In this study, the authors intend to solve the 3D-UHSRS fluid problem, for the first time to the best of our knowledge, by utilizing a Levenberg–Marquardt supervised neural networks (LM-SNNs) based solution technique. These modern solution approaches are based on state-of-the-art computational algorithms that can easily tackle the nonlinear behavior of the flow model's equations.
The basic purpose of this research work is to investigate and explain the flow features along with the heat and mass transfer in the 3D-UHSRS problem. The basic features of the proposed solution methodology for the 3D-UHSRS problem are as follows:
• A state-of-the-art mathematical model for the 3D-UHSRS problem has been developed and expressed in terms of PDEs, which are further converted into a set of nonlinear ordinary differential equations
(ODEs) using dimensionless similarity variables.
• A numerical solution of the 3D-UHSRS problem was obtained by implementing the Lobatto IIIA solution methodology based on the bvp4c solver in MATLAB. The solution datasets of the 3D-UHSRS problem were then subjected to LM backpropagation to carry out the training, validation, and testing of the dataset points.
• Different plots describing the effect of physical constants on the velocity and thermal profiles have been presented. The performance, convergence, and accuracy of the solution approach were validated through residual errors and the number of grid point, ODE, and BC evaluations.
• Numerical and graphical results in the form of MSE curves, error histograms, and regression and error plots authenticate the performance, precision, and convergence of the proposed LM-SNNs methodology.
Assume a 3D unsteady flow of a hybrid nanofluid over a stretchable surface, as shown in Figures 1 and 2. The surface is stretched in the xy-plane of the Cartesian coordinate system and rotates with a uniform angular velocity ω about the z-axis. Here, (u, v, w) are the components of velocity along the (x, y, z) directions, and u and v denote the stretching velocities of the surface in the x and y directions. The temperature of the fluid at the stretching surface is T_w, whereas the temperature of the ambient fluid is T∞.
Geometrical interpretation for magnetohydrodynamic-HNRD.
Working flow chart.
According to the abovementioned assumptions, the described flow model can be expressed in terms of the following set of mathematical equations [50, 51]:

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0, \tag{1}$$

$$\rho_{hnf}\left(\frac{\partial u}{\partial t} - 2\omega v + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + w\frac{\partial u}{\partial z}\right) = \mu_{hnf}\,\frac{\partial^2 u}{\partial z^2}, \tag{2}$$

$$\rho_{hnf}\left(\frac{\partial v}{\partial t} + 2\omega u + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + w\frac{\partial v}{\partial z}\right) = \mu_{hnf}\,\frac{\partial^2 v}{\partial z^2}, \tag{3}$$

$$\rho_{hnf}\left(\frac{\partial w}{\partial t} + u\frac{\partial w}{\partial x} + v\frac{\partial w}{\partial y} + w\frac{\partial w}{\partial z}\right) = \mu_{hnf}\,\frac{\partial^2 w}{\partial z^2}, \tag{4}$$

$$\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} + w\frac{\partial T}{\partial z} = \frac{k_{hnf}}{(\rho C_p)_{hnf}}\,\frac{\partial^2 T}{\partial z^2} - \frac{1}{(\rho C_p)_{hnf}}\,\frac{\partial q_r}{\partial z}. \tag{5}$$

In the above system of equations, $q_r$ is the radiative heat flux given by the Rosseland approximation, which can be expressed as

$$q_r = -\frac{4\sigma^*}{3k^*}\,\frac{\partial T^4}{\partial z}. \tag{6}$$

Here, $\sigma^*$ is the Stefan–Boltzmann constant, and $k^*$ is the mean absorption coefficient. Expanding $T^4$ about $T_\infty$ and ignoring the higher-order terms gives $T^4 \approx 4T_\infty^3 T - 3T_\infty^4$. Using this value of $T^4$ in Eq. (5), we obtain

$$\frac{\partial T}{\partial t} + u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} + w\frac{\partial T}{\partial z} = \frac{1}{(\rho C_p)_{hnf}}\left(k_{hnf} + \frac{16\sigma^* T_\infty^3}{3k^*}\right)\frac{\partial^2 T}{\partial z^2}, \tag{7}$$

subject to the following boundary conditions of the system:

$$u = u_w = \frac{cx}{1-\alpha t},\quad v = 0,\quad w = 0,\quad T = T_w \quad \text{at } z = 0,$$
$$u \to 0,\quad v \to 0,\quad T \to T_\infty \quad \text{as } z \to \infty.$$
Here, c is the stretching coefficient of the surface, q_r is the radiative flux, ω is the angular velocity, and T∞ is the temperature of the ambient fluid. The following similarity transformation is designed to transmute the modeling PDEs (1)–(5) into an equivalent system of nonlinear ODEs:

$$u = \frac{ax}{1-\alpha t}\,f'(\eta),\quad v = \frac{ax}{1-\alpha t}\,g(\eta),\quad w = -\sqrt{\frac{a\nu_f}{1-\alpha t}}\,f(\eta),\quad \eta = \sqrt{\frac{a}{\nu_f(1-\alpha t)}}\,z,\quad \theta(\eta) = \frac{T-T_\infty}{T_w-T_\infty}.$$
In the above equations, ρ_hnf is the density, μ_hnf is the dynamic viscosity, and (ρc_p)_hnf is the heat capacity of the hybrid nanofluid, whereas ϕ1 represents the concentration of Al nanoparticles and ϕ2 the concentration of TiO[2] nanoparticles. As a hybrid nanofluid is a combination of two different types of nanoparticles, the total nanoparticle concentration in the base fluid, ϕ_hnf, is the sum of the concentrations of both types, i.e., ϕ_hnf = ϕ1 + ϕ2. All mathematical relationships expressing the thermophysical properties are given in Table 1 [51]:
Mathematical expressions of various thermophysical quantities.
Properties Mathematical Expression
Dynamic viscosity $\mu_{hnf} = \mu_f\,(1-\phi_1-\phi_2)^{-2.5}$
Density $\rho_{hnf} = \phi_1\rho_1 + \phi_2\rho_2 + (1-\phi_{hnf})\,\rho_f$
Thermal conductivity $\dfrac{k_{hnf}}{k_f} = \dfrac{\frac{\phi_1 k_1 + \phi_2 k_2}{\phi_1+\phi_2} + 2k_f + 2(\phi_1 k_1 + \phi_2 k_2) - 2\phi_{hnf}k_f}{\frac{\phi_1 k_1 + \phi_2 k_2}{\phi_1+\phi_2} + 2k_f - (\phi_1 k_1 + \phi_2 k_2) + \phi_{hnf}k_f}$
Heat capacity $[\rho c_p]_{hnf} = \phi_1(\rho c_p)_1 + \phi_2(\rho c_p)_2 + (1-\phi_{hnf})(\rho c_p)_f$
Values of the density, electrical conductivity, thermal conductivity, and specific heat for the nanoparticles and base fluid used are listed in Table 2.
Numerical values for various physical and chemical properties [48, 49].
Material Density Specific Heat Electrical Conductivity Thermal Conductivity
SI unit (kg/m^3) (J kg^−1 K^−1) (S/m) (W m^−1 K^−1)
Water (H[2]O) 997 4,179 5.5 × 10^–6 0.613
Al Nanoparticles 3,970 765 5.96 × 10^7 40
TiO2 Nanoparticles 4,250 686.2 2.38 × 10^6 8.953
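The relations in Table 1 can be evaluated directly with the material data in Table 2. The following Python sketch (the paper's computations were done in MATLAB) computes the effective hybrid nanofluid properties; the volume fractions ϕ1 = ϕ2 = 0.01 are illustrative assumptions, not values taken from the paper.

```python
# Sketch: hybrid nanofluid (Al + TiO2 in water) thermophysical properties
# from the Table 1 relations and Table 2 material data.

def hybrid_properties(phi1, phi2):
    # Base fluid (water) and nanoparticle data from Table 2
    rho_f, cp_f, k_f = 997.0, 4179.0, 0.613   # water
    rho1, cp1, k1 = 3970.0, 765.0, 40.0       # Al nanoparticles
    rho2, cp2, k2 = 4250.0, 686.2, 8.953      # TiO2 nanoparticles
    phi = phi1 + phi2                          # total concentration phi_hnf

    mu_ratio = (1.0 - phi1 - phi2) ** -2.5     # mu_hnf / mu_f
    rho = phi1 * rho1 + phi2 * rho2 + (1.0 - phi) * rho_f
    rho_cp = phi1 * rho1 * cp1 + phi2 * rho2 * cp2 + (1.0 - phi) * rho_f * cp_f

    # Maxwell-type conductivity model for two particle species
    kp = (phi1 * k1 + phi2 * k2) / phi         # effective particle conductivity
    num = kp + 2 * k_f + 2 * (phi1 * k1 + phi2 * k2) - 2 * phi * k_f
    den = kp + 2 * k_f - (phi1 * k1 + phi2 * k2) + phi * k_f
    return mu_ratio, rho, rho_cp, (num / den) * k_f

# Illustrative volume fractions (not from the paper)
mu_r, rho, rho_cp, k = hybrid_properties(0.01, 0.01)
print(f"mu_hnf/mu_f = {mu_r:.4f}, rho = {rho:.2f} kg/m^3, k = {k:.4f} W/mK")
```

Even at 1% loading of each species, the effective thermal conductivity rises noticeably above that of pure water, which is the behavior the cited studies exploit.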
The important physical quantities representing the skin friction coefficients and the Nusselt number can be written as

$$C_{fx} = \frac{\mu_{hnf}}{\rho_f u_w^2}\left(\frac{\partial u}{\partial z}\right)_{z=0},\quad C_{fy} = \frac{\mu_{hnf}}{\rho_f u_w^2}\left(\frac{\partial v}{\partial z}\right)_{z=0},\quad Nu_x = -\frac{x}{k_f\,(T_w - T_\infty)}\left[k_{hnf}\,\frac{\partial T}{\partial z} + q_r\right]_{z=0}.$$
The mathematical evaluation of Eqs. (1)–(5) under the above transformation gives

$$\frac{\mu_{hnf}/\mu_f}{\rho_{hnf}/\rho_f}\,f''' - f'^2 + f f'' + 2\Omega g - \beta\left(f' + \frac{\eta}{2}f''\right) = 0, \tag{8}$$

$$\frac{\mu_{hnf}/\mu_f}{\rho_{hnf}/\rho_f}\,g'' - f'g + fg' - 2\Omega f' - \beta\left(g + \frac{\eta}{2}g'\right) = 0, \tag{9}$$

$$\frac{1}{Pr}\,\frac{(\rho C_p)_f}{(\rho C_p)_{hnf}}\left(\frac{k_{hnf}}{k_f} + \frac{4}{3}Rd\right)\theta'' - \beta\frac{\eta}{2}\theta' + f\theta' = 0, \tag{10}$$

with the corresponding BCs

$$f(0) = 0,\quad f'(0) = \lambda,\quad g(0) = 0,\quad \theta(0) = 1,$$
$$f'(\eta) \to 0,\quad g(\eta) \to 0,\quad \theta(\eta) \to 0 \quad \text{as } \eta \to \infty.$$

The physical parameters of the 3D-UHSRS flow problem involved in Eqs. (8)–(12) are

$$\lambda = \frac{c}{a},\quad \Omega = \frac{\omega^*}{c},\quad Rd = \frac{4\sigma^* T_\infty^3}{k^* k_f},\quad \beta = \frac{\alpha}{a},\quad Pr = \frac{\nu_f\,(\rho c_p)_f}{k_f} = \frac{\nu_f}{\alpha_f}.$$
Here, Rd represents the radiation parameter, Pr is the Prandtl number, β is the unsteadiness parameter, Ω is the rotation parameter, and λ determines the stretching or shrinking ability of the sheet, with λ = 0 corresponding to a static revolving surface. The nondimensional forms of the skin friction coefficients and the Nusselt number are

$$C_{fx}\,Re_x^{1/2} = \frac{\mu_{hnf}}{\mu_f}\,f''(0),\quad C_{fy}\,Re_x^{1/2} = \frac{\mu_{hnf}}{\mu_f}\,g'(0),\quad Nu_x\,Re_x^{-1/2} = -\left(\frac{k_{hnf}}{k_f} + \frac{4}{3}Rd\right)\theta'(0),$$

in which $Re_x = u_w x/\nu_f$ represents the local Reynolds number.
The solution of the system of ODEs representing the flow model was accomplished in two major phases. In the first phase, the set of nonlinear ODEs along with the relevant boundary conditions (Eqs. (8)–(12)) was transformed into first-order ODEs and solved using the well-known "Lobatto IIIA" technique in MATLAB. Graphical variations of the velocity and temperature profiles against various important physical parameters are shown in Figures 3 and 4. Various scenarios and cases were generated based on the variation of the involved physical parameters, as described in Table 3.
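As a hedged illustration of this first phase, the reduced ODE system for f(η), g(η), and θ(η) with its boundary conditions can be solved with SciPy's `solve_bvp`, whose collocation scheme is comparable to the Lobatto IIIA-based bvp4c used in the paper. All parameter values and property ratios below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative values (assumptions, not taken from the paper's tables)
A1 = 0.99   # (mu_hnf/mu_f) / (rho_hnf/rho_f)
A2 = 0.93   # (rho*Cp)_f / (rho*Cp)_hnf
A3 = 3.72   # k_hnf/k_f + (4/3)*Rd
lam, Omega, beta, Pr = 1.0, 0.5, 0.2, 6.0

def odes(eta, y):
    # State vector: y = [f, f', f'', g, g', theta, theta']
    f, fp, fpp, g, gp, th, thp = y
    fppp = (fp**2 - f*fpp - 2*Omega*g + beta*(fp + 0.5*eta*fpp)) / A1
    gpp  = (fp*g - f*gp + 2*Omega*fp + beta*(g + 0.5*eta*gp)) / A1
    thpp = Pr / (A2 * A3) * (0.5*beta*eta*thp - f*thp)
    return np.vstack([fp, fpp, fppp, gp, gpp, thp, thpp])

def bcs(y0, yinf):
    # f(0)=0, f'(0)=lam, g(0)=0, theta(0)=1; f', g, theta -> 0 far away
    return np.array([y0[0], y0[1] - lam, y0[3], y0[5] - 1.0,
                     yinf[1], yinf[3], yinf[5]])

eta = np.linspace(0.0, 10.0, 200)     # truncate the far field at eta = 10
guess = np.zeros((7, eta.size))
guess[0] = 1.0 - np.exp(-eta)          # f
guess[1] = np.exp(-eta)                # f'
guess[2] = -np.exp(-eta)               # f''
guess[5] = np.exp(-eta)                # theta
guess[6] = -np.exp(-eta)               # theta'

sol = solve_bvp(odes, bcs, eta, guess, max_nodes=20000)
print(f"converged: {sol.success}, f''(0) = {sol.y[2,0]:.4f}, "
      f"theta'(0) = {sol.y[6,0]:.4f}")
```

The wall values f″(0) and θ′(0) obtained this way feed directly into the nondimensional skin friction and Nusselt number expressions.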
Variation of velocity and temperature profile against β, ω, and Rd.
Variation of velocity and temperature profile against λ and Pr .
Variation of parameters for the unsteady three-dimensional flow of hybrid nanofluid over a stretchable and rotatory sheet problem.
Scenario C-I C-II C-III C-IV
I β = −1.5 β = −1.2 β = −0.8 β = −0.5
II λ = 0.6 λ = 0.8 λ = 1.0 λ = 1.4
III ω = 2.0 ω = 2.3 ω = 2.6 ω = 3.0
IV R d = 2.0 R d = 2.3 R d = 2.6 R d = 3.0
V P r = 6.0 P r = 6.0 P r = 6.0 P r = 7.0
Figures 3A–C present the impact of β (the unsteadiness parameter) on the velocities in the x and y directions, as well as on the temperature of the fluid. It was observed that, initially, the velocity of the fluid showed a decreasing trend with increasing values of β; thereafter, a further increase in β produced a direct change in the velocity of the fluid, because an increase in β thickens the velocity boundary layer. In addition, higher values of β also result in an increase in the fluid temperature. Figures 3D,E reveal the rising trend of the velocities against increasing values of ω (the rotation parameter). In reality, the centrifugal forces increase with increasing ω, which pushes the fluid particles in the y direction. Figure 3F exposes the influence of ω on the temperature profile of the fluid: as the rotation parameter increases, the radial velocity and hence the kinetic energy of the fluid rise, producing more heat, so the overall temperature of the fluid rises.
Figures 4A,B present the influence of λ (the stretching parameter) on the velocity and thermal profiles of the fluid. It was observed that the velocity of the fluid initially increases with increasing values of the stretching parameter, while the temperature of the fluid shows a slight decline against the rising values of the stretching parameter. Figure 4C portrays the variation in the temperature profile of the fluid against the values of Rd (the radiation parameter). The radiation parameter expresses the amount of heat contributed to the fluid via thermal radiation; consequently, increasing the radiation parameter generates more heat in the fluid. Figure 4D was drawn to study the effect of the Prandtl number (Pr) on the temperature profile of the fluid, and it was observed that the overall temperature of the fluid declines as the Prandtl number rises. Since the Prandtl number varies inversely with the thermal diffusivity of the fluid, a rise in the Prandtl number corresponds to a relatively higher viscosity, causing the temperature of the fluid to drop.
In the second phase, numerical solution datasets for each involved variable, for all scenarios and cases, were generated over a fixed domain, and these dataset points were then fed to the proposed LM-SNNs-based solution. During the operation of the proposed methodology, 80% of the total points were used for network training, whereas the remaining points were split equally (10% each) between the validation and testing processes. The number of neurons and delay steps in the calculations was adjusted according to the complexity of the problem and the required level of accuracy. The two-layer internal structure of the LM-SNNs is presented in Figure 5. Tables 4–8 present the numerical performance indicators, comprising the MSE, gradient, Mu, and number of epochs, for scenarios I–V, respectively.
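The split-and-train procedure can be sketched as follows. This is a minimal Python stand-in (using SciPy's Levenberg–Marquardt least-squares routine on a synthetic, illustrative profile) for the MATLAB LM-SNN workflow, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Stand-in "reference dataset": (eta, theta) pairs. In the paper these
# come from the Lobatto IIIA solution; here a synthetic decaying profile.
eta = np.linspace(0.0, 6.0, 200)
theta = np.exp(-1.3 * eta)

# 80/10/10 split into training, validation, and testing indices
idx = rng.permutation(eta.size)
n_tr, n_va = int(0.8 * eta.size), int(0.1 * eta.size)
tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# Tiny one-hidden-layer network trained by Levenberg-Marquardt
H = 10  # hidden neurons

def net(p, x):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(p):
    return net(p, eta[tr]) - theta[tr]   # training-set residuals

p0 = rng.normal(scale=0.5, size=3*H + 1)
fit = least_squares(residuals, p0, method="lm")   # LM optimizer

mse = lambda s: np.mean((net(fit.x, eta[s]) - theta[s]) ** 2)
print(f"MSE  train={mse(tr):.2e}  valid={mse(va):.2e}  test={mse(te):.2e}")
```

Comparable training, validation, and testing MSE values, as reported in Tables 4–8, indicate that the network generalizes rather than memorizes the training points.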
Internal structure of NAR.
Numerical performance indicators for scenario I.
Scenario Cases Number of Neurons Mean Square Error Grad Mu Total Epochs Time (s)
Trng Valid Test
I 30 1.351e-10 1.713e-10 2.043e-10 9.675e-08 1e-12 71 14
I II 30 1.873e-10 1.873e-10 2.180e-10 9.995e-08 1e-11 82 15
III 30 1.452e-10 1.611e-10 3.129e-10 9.045e-08 1e-11 87 15
IV 30 1.314e-10 9.439e-11 1.317e-10 9.287e-08 1e-12 64 14
Numerical performance indicators for scenario II.
Scenario Cases Number of Neurons Mean Square Error Grad Mu Total Epochs Time (s)
Trng Valid Test
I 30 3.835e-11 5.491e-11 6.678e-11 9.675e-08 1e-12 71 14
II II 30 2.832e-11 3.725e-11 3.888e-11 9.668e-08 1e-12 64 12
III 35 1.682e-10 1.886e-10 2.093e-10 9.963e-08 1e-11 89 15
IV 30 1.488e-10 1.5190e-10 1.706e-10 9.761e-08 1e-11 90 15
Numerical performance indicators for scenario III.
Scenario Cases Number of Neurons Mean Square Error Grad Mu Total Epochs Time (s)
Trng Valid Test
III I 30 2.201e-11 2.539e-11 2.425e-11 9.935e-08 1e-12 76 14
II 30 2.180e-11 3.142e-11 3.352e-11 9.808e-08 1e-12 70 14
III 30 1.655e-10 1.667e-10 1.929e-10 9.865e-08 1e-11 90 16
IV 30 2.132e-11 2.220e-11 2.958e-11 9.460e-08 1e-12 83 15
Numerical performance indicators for scenario IV.
Scenario Cases Number of Neurons Mean Square Error Grad Mu Total Epochs Time (s)
Trng Valid Test
I 30 1.967e-10 2.059e-10 2.113e-10 9.887e-08 1e-11 88 15
IV II 30 1.933e-11 2.755e-11 2.328e-11 9.742e-08 1e-12 75 13
III 30 2.201e-11 2.492e-11 3.086e-11 9.716e-08 1e-12 70 13
IV 30 2.158e-11 2.158e-11 2.948e-11 9.783e-08 1e-12 72 13
Numerical performance indicators for scenario V.
Scenario Cases Number of Neurons Mean Square Error Grad Mu Total Epochs Time (s)
Trng Valid Test
I 30 2.241e-11 1.968e-11 2.265e-11 9.827e-08 1e-12 73 14
V II 30 1.607e-10 1.982e-10 3.659e-10 9.652e-08 1e-11 92 15
III 30 1.653e-10 1.881e-10 1.780e-10 9.821e-08 1e-11 91 15
IV 30 1.484e-10 3.305e-10 1.700e-10 9.929e-11 1e-11 101 16
Figures 6A–D present the graphical details of the time series response, error distribution plot, MSE, and gradient plots for the second case of the first scenario. The MSE of any computational methodology represents the mean of the squared differences between the actual and estimated values, and the accuracy and stability of a method can be judged through the MSE encountered during the computational process: smaller MSE values correspond to a more accurate and reliable solution technique. Figures 7A–D illustrate the MSE plots of the first case of each scenario for training, validation, and testing, to compare performance based on the MSE for each case.
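For concreteness, the MSE used as the performance indicator throughout these tables and plots is simply:

```python
import numpy as np

def mse(reference, predicted):
    """Mean squared error between reference and predicted values."""
    reference, predicted = np.asarray(reference, float), np.asarray(predicted, float)
    return np.mean((reference - predicted) ** 2)

print(mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.0]))  # mean of (0.01, 0.01, 0)
```

Values on the order of 1e-10, as in Tables 4–8, thus correspond to per-point deviations on the order of 1e-5.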
Performance and accuracy plots of C-2 S-1.
Mean square error bases performance of second cases for all scenarios.
The reference solution of the problem is available in the form of dataset points, which are categorized into training, validation, and testing sets at a specific ratio. In the time series response, a close relationship between the target and output values of training, validation, and testing depicts the accuracy and precision of the solution methodology. The time series response of a variable is the statistical measure of a characteristic with respect to time; it helps researchers estimate and understand the performance of a variable from the calculated data. Through these fitness plots, one can easily observe the individual accuracy of the training, validation, and testing points against the available graphical solution. Figures 8A–D illustrate the fitness plots for the first case of each scenario.
Fitness performance plots of second cases for all scenarios.
Error histograms are another way to measure the closeness of the predicted values to the reference values. These histograms show the distribution of the errors of all computed values about a zero-error point: the errors of all achieved values are classified into 20 bins, aligned across a line representing zero error. The more values that lie close to the zero-error line, the greater the accuracy and precision of the solution methodology. Figures 9A–D compare the error histograms for the first case of each scenario.
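Building such a 20-bin error histogram is a one-liner; the sketch below uses synthetic errors of roughly the magnitude reported in the tables, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 1e-5, size=1000)   # illustrative prediction errors

# Distribute the errors over 20 bins, as in the paper's error histograms;
# for an accurate solution, the bins nearest the zero-error line dominate.
counts, edges = np.histogram(errors, bins=20)
zero_bin = np.searchsorted(edges, 0.0) - 1   # index of the bin containing 0
print(counts.sum(), counts[zero_bin])
```

Most samples fall into the bins adjacent to zero, which is exactly the pattern the paper's histograms exhibit for a well-trained network.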
Error distribution plots of second cases of all scenarios.
Regression is a graphical way to present the precision of the predicted values relative to the reference values, separately for the training, validation, and testing points. In these plots, the available reference solution is shown by a straight line, whereas the predicted values are shown by dots or small circles. The accuracy and precision of the computation can also be judged through the numerical value of the regression coefficient: R = 1 means the predicted values are very close to the reference values, whereas R = 0 means there is a very poor relationship between the reference and predicted values. Figures 10A–D compare the regression plots for the first case of each scenario.
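The R value reported in such plots is the correlation coefficient between the reference and predicted values; a minimal illustration with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
reference = np.linspace(0.0, 1.0, 50)
predicted = reference + rng.normal(0.0, 1e-3, size=50)  # near-perfect fit

# R close to 1 indicates the network output tracks the reference solution
R = np.corrcoef(reference, predicted)[0, 1]
print(f"R = {R:.6f}")
```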
Regression plots of second cases of all scenarios.
The gradient is a vector responsible for guiding the network in the right direction, with an appropriate magnitude, to reach the required solution as quickly as possible, whereas mu is the damping factor that controls the Levenberg–Marquardt algorithm; its value directly reflects the convergence of the solution. Figures 11A–D compare the gradient and mu plots for the first case of each scenario.
Plot for gradient and mu of second cases of all scenarios.
Figure 12 portrays a graphical comparison of the proposed LM-SNNs-based solutions with the already available solutions for all scenarios; an additional comparison of the errors for all cases of each scenario is also placed alongside, showing the accuracy and precision of the proposed solution methodology.
Comparison of the Levenberg–Marquardt supervised neural network solution with the reference solution, along with relative errors, for various scenarios.
Here, we employed a numerical study using a novel LM-SNNs-based methodology to investigate the 3D-UHSRS problem by modeling it in terms of PDEs, which were reduced to ODEs. The well-known "Lobatto IIIA" technique was implemented to solve these sets of equations, and the influence of the various involved physical parameters on the velocity and thermal performance was visualized and studied. The solution for each variable, in the form of dataset points, was acquired and placed in MATLAB for the operation of the LM-SNNs solution via training, validation, and testing of these dataset points at 80%, 10%, and 10%, respectively. The accuracy, precision, and cogency of the solution were validated through various graphical results consisting of time series, MSE, error distribution, and regression plots.
The following are a few important research outcomes:
• Higher values of ω (the rotation parameter) decrease the fluid velocity, whereas the temperature of the fluid shows an increasing trend for the same change.
• An increase in velocity and a decline in the temperature field were observed with increasing values of λ (the stretching parameter).
• A boost in the fluid temperature was observed with increasing values of Rd (the radiation parameter), whereas the reverse behavior was noted for Pr.
Solution methodologies working on the principles of artificial intelligence and machine learning can be even more beneficial and valuable for solving problems related to nanofluids [52–54] and microfluids.
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
MS, MR, and MK contributed to conception and design of the study. SS organized the database. II performed the statistical analysis. MS and II wrote the first draft of the manuscript. KN, MR, and SS
wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
The work in this study was supported, in part, by the Open Access Program of the American University of Sharjah. This study represents the opinions of the author(s) and does not represent the position or opinions of the American University of Sharjah.
Wen D Ding Y . Effective Thermal Conductivity of Aqueous Suspensions of Carbon Nanotubes (Carbon Nanotube Nanofluids). 2004) 18:481–5. 10.2514/1.9934 Raja B Godson L Lal DM Wongwises S . Experimental
Investigation on the Thermal Conductivity and Viscosity of Silver-Deionized Water Nanofluid. 2010) 23:317–32. 10.1080/08916150903564796 Xing M Yu J Wang R . Experimental Study on the Thermal
Conductivity Enhancement of Water Based Nanofluids Using Different Types of Carbon Nanotubes. 2015) 88:609–16. 10.1016/j.ijheatmasstransfer.2015.05.005 Agarwal R Verma K Agrawal NK Duchaniya RK Singh
R . Synthesis, Characterization, thermal Conductivity and Sensitivity of CuO Nanofluids. 2016) 102:1024–36. 10.1016/j.applthermaleng.2016.04.051 Agarwal R Verma K Agrawal NK Singh R . Sensitivity of
thermal Conductivity for Al2O3 Nanofluids. 2017) 80:19–26. 10.1016/j.expthermflusci.2016.08.007 Chi SU Eastman JA . IL (United States): Argonne National Lab (1995). Hussain A Hassan A Al Mdallal Q
Ahmad H Rehman A Altanji M Heat Transport Investigation of Magneto-Hydrodynamics (SWCNT-MWCNT) Hybrid Nanofluid Under the Thermal Radiation Regime. 2021) 27:101244. 10.1016/j.csite.2021.101244 Babar
H Ali HM . Towards Hybrid Nanofluids: Preparation, Thermophysical Properties, Applications, and Challenges. 2019) 281:598–633. 10.1016/j.molliq.2019.02.102 Babu JAR Kumar KK Srinivasa Rao S .
State-of-Art Review on Hybrid Nanofluids. 2017) 77:551–65. 10.1016/j.rser.2017.04.040 Gulzar O Qayoum A Gupta R . Experimental Study on Stability and Rheological Behaviour of Hybrid Al2O3-TiO2
Therminol-55 Nanofluids for Concentrating Solar Collectors. 2019) 352:436–44. 10.1016/j.powtec.2019.04.060 Shah TR Ali HM . Applications of Hybrid Nanofluids in Solar Energy, Practical Limitations
and Challenges: A Critical Review. 2019) 183:173–203. 10.1016/j.solener.2019.03.012 Yang L Ji W Mao M Huang JN . An Updated Review on the Properties, Fabrication and Application of Hybrid-Nanofluids
along with Their Environmental Effects. 2020) 257:120408. 10.1016/j.jclepro.2020.120408 Nagoor AH Alaidarous ES Sabir MT Shoaib M Raja MAZ . Numerical Treatment for Three-Dimensional Rotating Flow of
Carbon Nanotubes with Darcy-Forchheimer Medium by the Lobatto IIIA Technique. 2020) 10(2):025016. 10.1063/1.5135165 Song Y-Q Ali Khan S Imran M Waqas H Ullah Khan S Ijaz Khan M Applications of
Modified Darcy Law and Nonlinear Thermal Radiation in Bioconvection Flow of Micropolar Nanofluid over an off Centered Rotating Disk. 2021) 60(5):4607–18. 10.1016/j.aej.2021.03.053 Hussain A Arshad M
Rehman A Hassan A Elagan SK Ahmad H Three-Dimensional Water-Based Magneto-Hydrodynamic Rotating Nanofluid Flow over a Linear Extending Sheet and Heat Transport Analysis: A Numerical Approach. 2021)
14(16):5133. 10.3390/en14165133 Ahmad S Coban HH Khan MN Khan U Shi Q-H Muhammad T Computational Analysis of the Unsteady 3D Chemically Reacting MHD Flow with the Properties of Temperature Dependent
Transpose Suspended Maxwell Nanofluid. 2021) 26:101169. 10.1016/j.csite.2021.101169 Imran M Farooq U Waqas H Anqi AE Safaei MR . Numerical Performance of Thermal Conductivity in Bioconvection Flow of
Cross Nanofluid Containing Swimming Microorganisms over a Cylinder with Melting Phenomenon. 2021) 26:101181. 10.1016/j.csite.2021.101181 Srinivasulu T Goud BS . Effect of Inclined Magnetic Field on
Flow, Heat and Mass Transfer of Williamson Nanofluid over a Stretching Sheet. 2021) 23:100819. 10.1016/j.csite.2020.100819 Ahmad F Abdal S Ayed H Hussain S Salim S Almatroud AO . The Improved Thermal
Efficiency of Maxwell Hybrid Nanofluid Comprising of Graphene Oxide Plus Silver/Kerosene Oil over Stretching Sheet. 2021) 27:101257. 10.1016/j.csite.2021.101257 Bakthavatchalam B Habib K Saidur R
Saha BB Irshad K . Comprehensive Study on Nanofluid and Ionanofluid for Heat Transfer Enhancement: A Review on Current and Future Perspective. 2020) 305:112787. 10.1016/j.molliq.2020.112787 Farhana K
Kadirgama K Rahman MM Noor MM Ramasamy D Samykano M Significance of Alumina in Nanofluid Technology. 2019) 138(2):1107–26. 10.1007/s10973-019-08305-6 Yasinskiy A Navas J Aguilar T Alcántara R
Gallardo JJ Sánchez-Coronilla A Dramatically Enhanced thermal Properties for TiO2-Based Nanofluids for Being Used as Heat Transfer Fluids in Concentrating Solar Power Plants. 2018) 119:809–19.
10.1016/j.renene.2017.10.057 Ouyang C Akhtar R Raja MAZ Touseef Sabir M Awais M Shoaib M . Numerical Treatment with Lobatto IIIA Technique for Radiative Flow of MHD Hybrid Nanofluid (Al[2]O[3]—Cu/H
[2]O) over a Convectively Heated Stretchable Rotating Disk with Velocity Slip Effects. 2020) 10(5):055122. 10.1063/1.5143937 Shoaib M Raja MAZ Sabir MT Islam S Shah Z Kumam P Numerical Investigation
for Rotating Flow of MHD Hybrid Nanofluid with Thermal Radiation over a Stretching Sheet. 2020) 10(1):18533–15. 10.1038/s41598-020-75254-8 Shoaib M Raja MAZ Sabir MT Nisar KS Jamshed W Felemban BF
MHD Hybrid Nanofluid Flow Due to Rotating Disk with Heat Absorption and Thermal Slip Effects: An Application of Intelligent Computing. 2021) 11(12):1554. 10.3390/coatings11121554 Patel HE Das SK
Sundararajan T Nair AS George B Pradeep T . Thermal Conductivities of Naked and Monolayer Protected Metal Nanoparticle Based Nanofluids: Manifestation of Anomalous Enhancement and Chemical Effects.
2003) 83(14):2931e2933. 10.1063/1.1602578 Abbassi Y Talebi M Shirani AS Khorsandi J . Experimental Investigation of TiO2/Water Nanofluid Effects on Heat Transfer Characteristics of a Vertical Annulus
with Non-Uniform Heat Flux in Non-Radiation Environment. 2014) 69:7e13. 10.1016/j.anucene.2014.01.033 Krishna MV Ahammad NA Chamkha AJ . Radiative MHD Flow of Casson Hybrid Nanofluid over an Infinite
Exponentially Accelerated Vertical Porous Surface. 2021) 27:101229. 10.1016/j.csite.2021.101229 Suganya S Muthtamilselvan M Alhussain ZA . Activation Energy and Coriolis Force on Cu-TiO[2]/Water
Nomenclature
Ω (rad s^−1): angular velocity
ρ (kg m^−3): density
u, v, w (m s^−1): velocity components
C_P (m^2 s^−2 K^−1): specific heat
λ (m^−1): stretching coefficient
T (K): temperature
K (m kg s^−3 K^−1): thermal conductivity
μ (kg m^−1 s^−1): viscosity

Abbreviations
LM: Levenberg–Marquardt
Supervised neural networks
Hybrid nanofluid
f, g: dimensionless velocity components
Dimensionless temperature
Nusselt number
Nanoparticle concentration
Reynolds number
Transformed coordinate
Transformed angular velocity
SWCNTs: single-wall CNTs
MWCNTs: multiwall CNTs
MSE: mean square error
ODEs: ordinary differential equations
PDEs: partial differential equations
CS 5350/6350: Machine Learning Homework 2 solved
1 Boolean Functions
In this problem, you will be asked to write Boolean functions and linear threshold functions
based on given labeled data.
1. [3 points] Table 1 shows several data points (the x’s) along with corresponding labels
(y). (That is, each row is an example with a label.) Write down three different Boolean
functions all of which can produce the label y when given the inputs x.
y x1 x2 x3 x4
Table 1: Original Table
2. [5 points] Next, we expand Table 1 to Table 2 by adding more data points. How many
errors will each of your functions from the previous questions make on the full data set?
3. [7 points] Write down the linear threshold function for the data in Table 2.
y x1 x2 x3 x4
Table 2: Expanded Table
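The rows of Tables 1 and 2 did not survive extraction, so as a concrete illustration of what question 3 asks for, here is a minimal sketch of evaluating a linear threshold function on Boolean inputs. The weights, bias, and data rows below are hypothetical placeholders, not the actual table values.

```python
# Sketch of evaluating a linear threshold function (LTU) on Boolean inputs.
# The weights, bias, and data rows are hypothetical -- the actual values in
# Tables 1 and 2 are not reproduced here.

def ltu(weights, bias, x):
    """Predict 1 if w . x + b >= 0, else 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

def count_errors(weights, bias, rows):
    """rows: list of (y, x) pairs; returns the number of misclassified rows."""
    return sum(1 for y, x in rows if ltu(weights, bias, x) != y)

# Hypothetical example: y = x1 OR x2 as an LTU with weights (1, 1, 0, 0), bias -1.
rows = [(0, (0, 0, 0, 0)), (1, (1, 0, 1, 0)), (1, (0, 1, 0, 1)), (1, (1, 1, 1, 1))]
print(count_errors((1, 1, 0, 0), -1, rows))  # -> 0
```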
2 Mistake Bound Model of Learning
Consider an instance space consisting of integer points on the two dimensional plane
(x1, x2) with −128 ≤ x1, x2 ≤ 128. Let C be a concept class defined on this instance space.
Each function fr in C is defined by an integer radius r (with 1 ≤ r ≤ 128) as follows:
fr(x1, x2) =  +1  if x1^2 + x2^2 ≤ r^2
              −1  otherwise                (1)

Our goal is to come up with an error-driven algorithm that will learn the correct function
f ∈ C that correctly classifies a dataset.
Side notes
1. Recall that a concept class is the set of functions from which the true target function
is drawn and the hypothesis space is the set of functions that the learning algorithm
searches over. In this question, both these are the same set.
2. Assume that there is no noise. That is, assume that the data is separable using the
hypothesis class.
1. [5 points] Determine |C|, the size of the concept class.
2. [5 points] To design an error-driven learning algorithm, we should be able to first write
down what it means to make a mistake. Suppose our current guess for the function is
fr defined as in Equation 1 above. Say we get an input point (x1^t, x2^t) along with its
label y^t. Write down an expression (an equality or an inequality) in terms of x1^t, x2^t, y^t,
and r that checks whether the current hypothesis fr has made a mistake.
3. [10 points] Next, we need to specify how we will update a hypothesis if there is an
error. Since fr is completely defined in terms of r, we only need to update r. How
will you update r if there is an error? Consider errors for both positive and negative examples.
4. [20 points] Use the answers from the previous two steps to write a mistake-driven
learning algorithm to learn the function. Please write the algorithm concisely in the
form of pseudocode. What is the maximum number of mistakes that this algorithm
can make on any dataset?
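One possible shape for such an algorithm (a sketch, not an official solution, and assuming the disk concept x1^2 + x2^2 ≤ r^2 from Equation 1): start with the smallest radius and grow it only when a positive example is misclassified.

```python
import math

# Sketch of a mistake-driven learner for the disk concepts of Equation 1,
# assuming fr(x1, x2) = +1 iff x1^2 + x2^2 <= r^2, with integer r in 1..128.

def predict(r, x1, x2):
    return 1 if x1 ** 2 + x2 ** 2 <= r ** 2 else -1

def learn(stream):
    """stream: iterable of ((x1, x2), y). Returns (final r, mistake count)."""
    r, mistakes = 1, 0
    for (x1, x2), y in stream:
        if predict(r, x1, x2) != y:
            mistakes += 1
            if y == 1:
                # False negative: the true radius must cover this point, so
                # grow r just enough; r is an integer, so round up.
                r = math.ceil(math.sqrt(x1 ** 2 + x2 ** 2))
    return r, mistakes
```

Because r starts at the minimum and only grows to the smallest integer forced by a positive example, r never exceeds the true radius, so negative examples are never misclassified; each mistake strictly increases the integer r, giving at most 127 mistakes as r ranges over 1..128.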
5. (For 6350 students) [15 points total] We have seen the Halving algorithm in class. The
Halving algorithm will maintain a set of hypotheses consistent with all the examples
seen so far and predict using the most frequent label among this set. Upon making a
mistake, the algorithm prunes at least half of this set. In this question, you will design
and analyze a Halving algorithm for this particular concept space.
a. [5 points] The set of hypotheses consistent with all examples seen so far can be
defined by storing only two integers. How would you do this?
b. [5 points] How would you check if there is an error for an example (x1^t, x2^t) that
has the label y^t?
c. [5 points] Write the full Halving algorithm for this specific concept space. (Do
not write the same Halving algorithm we saw in class. You need to tailor it to
this problem.) What is its mistake bound?
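For this concept class the consistent set is always an interval of radii [lo, hi], so majority voting plus pruning amounts to a binary-search-like scheme. The following is a sketch of one possible specialization (not an official solution, and again assuming the disk concept x1^2 + x2^2 ≤ r^2):

```python
import math

# Sketch of a Halving algorithm specialized to the disk concepts, assuming
# fr(x1, x2) = +1 iff x1^2 + x2^2 <= r^2 with integer r in 1..128. The set of
# hypotheses consistent with the data so far is an interval of radii [lo, hi].

def halving(stream, lo=1, hi=128):
    mistakes = 0
    for (x1, x2), y in stream:
        c = math.ceil(math.sqrt(x1 ** 2 + x2 ** 2))  # smallest r labeling +1
        n_pos = max(0, hi - max(lo, c) + 1)          # consistent radii voting +1
        n_neg = (hi - lo + 1) - n_pos
        guess = 1 if n_pos >= n_neg else -1          # majority vote
        if guess != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        if y == 1:
            lo = max(lo, c)
        else:
            hi = min(hi, c - 1)
    return (lo, hi), mistakes
```

Each mistake removes the (incorrect) majority side of the interval, so the consistent set at least halves; with |C| = 128 this gives a mistake bound of at most ceil(log2 128) = 7.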
3 The Perceptron Algorithm and Its Variants
3.1 The Task and Data
Imagine you have access to information about people such as age, gender and level of education. Now, you want to predict whether a person makes over $50K a year or not using
these features.
We will use the Adult data set from the UCI Machine Learning repository. The original
Adult data set has 14 features, among which 6 are continuous and 8 are categorical. In order
to make it easier to use, we will use a pre-processed version (and subset) of the original Adult
data set, created by the makers of the popular LIBSVM tool. From the LIBSVM website:
“In this data set, the continuous features are discretized into quantiles, and each quantile is
represented by a binary feature. Also, a categorical feature with m categories is converted
to m binary features.”
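The conversion quoted above (a categorical feature with m categories becoming m binary features) can be sketched as follows; the feature name and category values here are hypothetical, not taken from the actual Adult data.

```python
# Sketch of the one-hot conversion described above: a categorical feature
# with m categories becomes m binary features. Values are hypothetical.

def one_hot(value, categories):
    """Map a categorical value to m binary features (one per category)."""
    return [1 if value == c else 0 for c in categories]

workclass = ["Private", "Self-emp", "Federal-gov"]  # hypothetical m = 3 categories
print(one_hot("Self-emp", workclass))  # -> [0, 1, 0]
```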
Use the training/test files called ‘a1a.train’ and ‘a1a.test’, available on the assignments page of the class website. This data is in the LIBSVM format, where each row is a
single training example. The format of each row in the data is
Computational Bayesian Statistics
An Introduction

M. Antónia Amaral Turkman
Carlos Daniel Paulino
Peter Müller
Contents

Preface to the English Version
Preface
1 Bayesian Inference
  1.1 The Classical Paradigm
  1.2 The Bayesian Paradigm
  1.3 Bayesian Inference
    1.3.1 Parametric Inference
    1.3.2 Predictive Inference
2 Representation of Prior Information
  2.1 Non-Informative Priors
  2.2 Natural Conjugate Priors
  Problems
3 Bayesian Inference in Basic Problems
  3.1 The Binomial Beta Model
  3.2 The Poisson Gamma Model
  3.3 Normal (Known µ) Inverse Gamma Model
  3.4 Normal (Unknown µ, σ2) Jeffreys’ Prior
  3.5 Two Independent Normal Models Marginal Jeffreys’ Priors
  3.6 Two Independent Binomials Beta Distributions
  3.7 Multinomial Dirichlet Model
  3.8 Inference in Finite Populations
  Problems
4 Inference by Monte Carlo Methods
  4.1 Simple Monte Carlo
    4.1.1 Posterior Probabilities
    4.1.2 Credible Intervals
    4.1.3 Marginal Posterior Distributions
    4.1.4 Predictive Summaries
  4.2 Monte Carlo with Importance Sampling
    4.2.1 Credible Intervals
    4.2.2 Bayes Factors
    4.2.3 Marginal Posterior Densities
  4.3 Sequential Monte Carlo
    4.3.1 Dynamic State Space Models
    4.3.2 Particle Filter
    4.3.3 Adapted Particle Filter
    4.3.4 Parameter Learning
  Problems
5 Model Assessment
  5.1 Model Criticism and Adequacy
  5.2 Model Selection and Comparison
    5.2.1 Measures of Predictive Performance
    5.2.2 Selection by Posterior Predictive Performance
    5.2.3 Model Selection Using Bayes Factors
  5.3 Further Notes on Simulation in Model Assessment
    5.3.1 Evaluating Posterior Predictive Distributions
    5.3.2 Prior Predictive Density Estimation
    5.3.3 Sampling from Predictive Distributions
  Problems
6 Markov Chain Monte Carlo Methods
  6.1 Definitions and Basic Results for Markov Chains
  6.2 Metropolis–Hastings Algorithm
  6.3 Gibbs Sampler
  6.4 Slice Sampler
  6.5 Hamiltonian Monte Carlo
    6.5.1 Hamiltonian Dynamics
    6.5.2 Hamiltonian Monte Carlo Transition Probabilities
  6.6 Implementation Details
  Problems
7 Model Selection and Trans-dimensional MCMC
  7.1 MC Simulation over the Parameter Space
  7.2 MC Simulation over the Model Space
  7.3 MC Simulation over Model and Parameter Space
  7.4 Reversible Jump MCMC
  Problems
8 Methods Based on Analytic Approximations
  8.1 Analytical Methods
    8.1.1 Multivariate Normal Posterior Approximation
    8.1.2 The Classical Laplace Method
  8.2 Latent Gaussian Models (LGM)
  8.3 Integrated Nested Laplace Approximation
  8.4 Variational Bayesian Inference
    8.4.1 Posterior Approximation
    8.4.2 Coordinate Ascent Algorithm
    8.4.3 Automatic Differentiation Variational Inference
  Problems
9 Software
  9.1 Application Example
  9.2 The BUGS Project: WinBUGS and OpenBUGS
    Application Example: Using R2OpenBUGS
  9.3 JAGS
    Application Example: Using R2jags
  9.4 Stan
    Application Example: Using RStan
  9.5 BayesX
    Application Example: Using R2BayesX
  9.6 Convergence Diagnostics: the Programs CODA and BOA
    9.6.1 Convergence Diagnostics
    9.6.2 The CODA and BOA Packages
    9.6.3 Application Example: CODA and BOA
  9.7 R-INLA and the Application Example
    9.7.1 Application Example
Appendix A. Probability Distributions
Appendix B. Programming
Preface to the English Version

This book is based on lecture notes for a short course that was given at the XXII Congresso da Sociedade Portuguesa de Estatística. In the translation from the original Portuguese text we have added some additional material on sequential Monte Carlo, Hamiltonian Monte Carlo, trans-dimensional Markov chain Monte Carlo (MCMC), and variational Bayes, and we have introduced problem sets. The inclusion of problems makes the book suitable as a textbook for a first graduate-level class in Bayesian computation with a focus on Monte Carlo methods. The extensive discussion of Bayesian software makes it useful also for researchers and graduate students from beyond statistics.

The core of the text lies in Chapters 4, 6, and 9 on Monte Carlo methods, MCMC methods, and Bayesian software. Chapters 5, 7, and 8 include additional material on model validation and comparison, trans-dimensional MCMC, and conditionally Gaussian models. Chapters 1 through 3 introduce the basics of Bayesian inference, and could be covered fairly quickly by way of introduction; these chapters are intended primarily for review and to introduce notation and terminology. For a more in-depth introduction we recommend the textbooks by Carlin and Louis (2009), Christensen et al. (2011), Gelman et al. (2014a) or Hoff (2009).
Preface

In 1975, Dennis Lindley wrote an article in Advances in Applied Probability titled “The future of statistics: a Bayesian 21st century,” predicting for the twenty-first century the predominance of the Bayesian approach to inference in statistics. Today one can certainly say that Dennis Lindley was right in his prediction, but not exactly in the reasons he gave. He did not foresee that the critical ingredient would be great advances in computational Bayesian statistics made in the last decade of the twentieth century. The “Bayesian solution” for inference problems is highly attractive, especially with respect to interpretability of the inference results. However, in practice, the derivation of such solutions involves in particular the evaluation of integrals, in most cases multi-dimensional, that are difficult or impossible to tackle without simulation. The development of more or less sophisticated computational methods has completely changed the outlook. Today, Bayesian methods are used to solve problems in practically all areas of science, especially when the processes being modeled are extremely complex. However, Bayesian methods cannot be applied blindly. Despite the existence of many software packages for Bayesian analysis, it is critical that investigators understand what these programs output and why.

The aim of this text, associated with a minicourse given at the XXII Congresso da Sociedade Portuguesa de Estatística, is to present the fundamental ideas that underlie the construction and analysis of Bayesian models, with particular focus on computational methods and schemes.

We start in Chapter 1 with a brief summary of the foundations of Bayesian inference with an emphasis on the principal differences between the classical and Bayesian paradigms. One of the main pillars of Bayesian inference, the specification of prior information, is unfortunately often ignored in applications. We review its essential aspects in Chapter 2. In Chapter 3, analytically solvable examples are used to illustrate the Bayesian solution to statistical inference problems. The “great idea” behind the development of computational Bayesian statistics is the recognition that Bayesian inference can be implemented by way of simulation from the posterior distribution. Classical Monte Carlo methods are presented in Chapter 4 as a first solution for computational problems. Model validation is a very important question, with its own set of concepts and issues in the Bayesian context. The most widely used methods to assess, select, and compare models are briefly reviewed in Chapter 5.

Problems that are more complex than the basic ones in Chapter 4 require the use of more sophisticated simulation methods, in particular Markov chain Monte Carlo (MCMC) methods. These are introduced in Chapter 6, starting as simply as possible. Another alternative to simulation is the use of posterior approximations, which is reviewed in Chapter 8. The chapter describes, in a generic fashion, the use of integrated nested Laplace approximation (INLA), which allows for substantial improvements in both computation times (by several factors), and in the precision of the reported inference summaries. Although applicable in a large class of problems, the method is more restrictive than stochastic simulation. Finally, Chapter 9 is dedicated to Bayesian software. The possibility of resorting to MCMC methods for posterior simulation underpins the development of the software BUGS, which allows the use of Bayesian inference in a large variety of problems across many areas of science. Rapid advances in technology in general have changed the paradigm of statistics, with the increasing need to deal with massive data sets (“Big Data”), often of spatial and temporal types. As a consequence, posterior simulation in problems with complex and high-dimensional data has become a new challenge, which gives rise to new and better computational methods and the development of software that can overcome the earlier limitations of BUGS and its successors, WinBUGS and OpenBUGS. In Chapter 9 we review other statistics packages that implement MCMC methods and variations, such as JAGS, Stan, and BayesX. This chapter also includes a brief description of the R package R-INLA, which implements INLA.

For the compilation of this text we heavily relied on the book Estatística Bayesiana by Paulino, A. Turkman, and Murteira, published by Fundação Calouste Gulbenkian in 2003. As all copies of this book were sold a long while ago, we also extensively used preliminary work for an upcoming second edition, as well as material that we published in the October 2013 edition of the bulletin of the Sociedade Portuguesa de Estatística (SPE). This text would not have been completed in its current form without the valuable and unfailing support of our dear friend and colleague Giovani Silva. We owe him sincere thanks. We are also thankful to the Sociedade Portuguesa de Estatística for having proposed the wider theme of Bayesian statistics and for the opportunity to give a minicourse at the 22nd conference of the society. We also acknowledge the institutional support from the Universidade de Lisboa through the Centro de Estatística e Aplicações (PEst-OE/MAT/UI0006/2014, UID/MAT/00006/2013), in the Department of Statistics and Operations Research in the Faculdade de Ciências and of the Department of Mathematics in the Instituto Superior Técnico. We would like to acknowledge that the partial support by the Fundação para a Ciência e Tecnologia through various projects over many years enabled us to build up this expertise in Bayesian statistics.

Finally, we would like to dedicate this book to Professor Bento Murteira, to whom the development of Bayesian statistics in Portugal owes a lot. In fact, Chapter 1 in this book reflects in many ways the flavor of his writings.
1 Bayesian Inference¹

Before discussing Bayesian inference, we recall the fundamental problem of statistics: “The fundamental problem towards which the study of Statistics is addressed is that of inference. Some data are observed and we wish to make statements, inferences, about one or more unknown features of the physical system which gave rise to these data” (O’Hagan, 2010). Upon more careful consideration of the foundations of statistics we find many different schools of thought. Even leaving aside those that are collectively known as classical statistics, this leaves several choices: objective and subjective Bayes, fiducialist inference, likelihood based methods, and more.²

This diversity is not unexpected! Deriving the desired inference on parameters and models from the data is a problem of induction, which is one of the most controversial problems in philosophy. Each school of thought follows its own principles and methods to lead to statistical inference. Berger (1984) describes this as: “Statistics needs a ‘foundation’, by which I mean a framework of analysis within which any statistical investigation can theoretically be planned, performed, and meaningfully evaluated. The words ‘any’ and ‘theoretically’ are key, in that the framework should apply to any situation but may only theoretically be implementable. Practical difficulties or time limitations may prevent complete (or even partial) utilisation of such framework, but the direction in which ‘truth’ could be found would at least be known.” The foundations of Bayesian inference are better understood when seen in contrast to those of its mainstream competitor, classical inference.

¹ This material will be published by Cambridge University Press as Computational Bayesian Statistics, by M.A. Amaral Turkman, C.D. Paulino, and P. Müller (https://tinyurl.com/CompBayes). This pre-publication version is free to view and download for personal use only. Not for re-distribution, re-sale, or use in derivative works. © Maria Antónia Amaral Turkman, Carlos Daniel Paulino & Peter Müller, 2019.
² Subjective Bayes is essentially the subject of this volume. In addition to these schools of thought, there are even half-Bayesians who accept the use of a priori information but believe that probability calculus is inadequate to combine prior information with data, which should instead be replaced by a notion of causal inference.

1.1 The Classical Paradigm

Classical statistics seeks to make inference about a population starting from a sample. Let x (or x = (x1, x2, ..., xn), where n is a sample size) denote the data. The set X of possible samples x is known as the sample space, usually X ⊆ R^n. Underlying classical inference is the recognition of variability across samples, keeping in mind that the observed data are only one of many – possibly infinitely many – data sets that could have been observed. The interpretation of the data depends not only on the observed data, but also on the assumptions put forward about the process generating the observable data. As a consequence, the data are treated as a realization of a random variable or a random vector X with a distribution F_θ, which of course is not entirely known. However, there is usually some knowledge (theoretical considerations, experimental evidence, etc.) about the nature of the chance experiment under consideration that allows one to conjecture that F_θ is a member of a family of distributions F. This family of distributions becomes the statistical model for X. The assumption of a model is also known as the model specification and is an essential part of developing the desired inference.

Assuming that X is a continuous random variable or random vector, it is common practice to represent the distributions F by their respective density functions. When the density functions are indexed by a parameter θ in a parameter space Θ, the model can be written as F = { f(x | θ), x ∈ X : θ ∈ Θ }. In many cases, the n variables (X1, X2, ..., Xn) are assumed independent conditional on θ and the statistical model can be written in terms of the marginal densities of the Xi, i = 1, 2, ..., n:

F = { f(x | θ) = Π_{i=1}^{n} f_i(xi | θ) : θ ∈ Θ }, x ∈ X,

and f_i(· | θ) = f(· | θ), i = 1, 2, ..., n, if additionally the variables Xi are assumed to be identically distributed. The latter is often referred to as random sampling.

Beyond the task of modeling and parametrization, classical inference includes many methods to extract conclusions about the characteristics of the model that best represents the population and tries to answer questions like the following: (1) Are the data x compatible with a family F? (2) Assuming that the specification is correct and that the data are generated from a model in the family F, what conclusions can be drawn about the
parameter θ0 that indexes the distribution F_θ that “appropriately” describes the phenomenon under study?

Classical methods – also known as frequentist methods – are evaluated under the principle of repeated sampling, that is, with respect to the performance under infinitely many hypothetical repetitions of the experiment carried out under identical conditions. One of the aspects of this principle is the use of frequencies as a measure of uncertainties, that is, a frequentist interpretation of probability. See, e.g., Paulino et al. (2018, section 1.2), for a review of this and other interpretations of probability.

In the case of parametric inference, in answer to question (2) above, we need to consider first the question of point estimation, which, grosso modo, is: given a sample X = (X1, X2, ..., Xn), how should one “guess,” estimate, or approximate the true value θ, through an estimator T(X1, X2, ..., Xn)? The estimator should have the desired properties such as unbiasedness, consistency, sufficiency, efficiency, etc.

For example, with X = R^n, the estimator T(X1, X2, ..., Xn) based on a random sample is said to be centered or unbiased if

E{T | θ} = ∫_{R^n} T(x1, x2, ..., xn) Π_{i=1}^{n} f(xi | θ) dx1 dx2 ... dxn = θ, for all θ ∈ Θ.

This is a property related to the principle of repeated sampling, as can be seen by the fact that it includes integration over the sample space (in this case R^n). Considering this entire space is only relevant if one imagines infinitely many repetitions of the sampling process or observations of the n random variables (X1, X2, ..., Xn). The same applies when one considers other criteria for evaluation of estimators within the classical paradigm. In other words, implicit in the principle of repeated sampling is a consideration of what might happen in the entire sample space.

Parametric inference often takes the form of confidence intervals. Instead of proposing a single value for θ, one indicates an interval whose endpoints are a function of the sample,

(T(X1, X2, ..., Xn), T̄(X1, X2, ..., Xn)),

and which covers the true parameter value with a certain probability, preferably a high probability (typically referred to as the confidence level),

P{ T(X1, X2, ..., Xn) ≤ θ ≤ T̄(X1, X2, ..., Xn) | θ } = 1 − α,

0 < α < 1. This expression pre-experimentally translates a probability of covering the unknown value θ to a random interval (T, T̄) whose lower and upper limits are functions of (X1, X2, ..., Xn) and, therefore, random variables. However, once a specific sample is observed (i.e., post-experimentally) as n real values, (x1, x2, ..., xn), this becomes a specific interval on the real line (now with real numbers as lower and upper limits),

(T(x1, x2, ..., xn), T̄(x1, x2, ..., xn)),

and the probability

P{ T(x1, x2, ..., xn) ≤ θ ≤ T̄(x1, x2, ..., xn) | θ } = 1 − α,

0 < α < 1, is no longer meaningful. In fact, once θ has an unknown, but fixed, value, this probability can only be 1 or 0, depending upon whether the true value of θ is or is not in the real interval (T(x1, x2, ..., xn), T̄(x1, x2, ..., xn)). Of course, since θ is unknown, the investigator does not know which situation applies. However, a classical statistician accepts the frequentist interpretation of probability and invokes the principle of repeated sampling in the following way: if one imagines a repetition of the sampling and inference process (each sample with n observations) a large number of times, then in (1 − α) × 100% of the repetitions the numerical interval will include the value of θ.

Another instance of classical statistical inference is a parametric hypothesis test. In the course of scientific investigation one frequently encounters, in the context of a certain theory, the concept of a hypothesis about the value of one (or multiple) parameter(s), for example in the symbols

H0 : θ = θ0.

This raises the following fundamental question: do the data (x1, x2, ..., xn) support or not support the proposed hypothesis? This hypothesis is traditionally referred to as the null hypothesis. Also here the classical solution is again based on the principle of repeated sampling if one follows the Neyman–Pearson theory. It aims to find a rejection region W (critical region) defined as a subset of the sample space, W ⊂ X, such that

(X1, X2, ..., Xn) ∈ W ⟹ rejection of H0,
(X1, X2, ..., Xn) ∉ W ⟹ fail to reject H0.

The approach aims to control the probability of a type-I error,

P{(X1, X2, ..., Xn) ∈ W | H0 is true},
and minimize the probability of a type-II error,

P{(X1, X2, ..., Xn) ∉ W | H0 is false}.

What does it mean that the critical region is associated with a type-I error equal to, for example, 0.05? The investigator cannot know whether a false or true hypothesis is being rejected when a particular observation falls into the critical region and the hypothesis is thus rejected. However, being a classical statistician, the investigator is convinced that under a large number of repetitions, and if the hypothesis were true, then only in 5% of the cases would the observation fall into the rejection region. What does it mean that the critical region is associated with a type-II error equal to, say, 0.10? Similarly, when a particular observation is not in the rejection region and thus the hypothesis is not rejected, then the investigator cannot know whether a true or false hypothesis is being accepted. Being a classical statistician, the investigator can affirm that under a large number of repetitions of the entire process, and if the hypothesis were in fact false, only in 10% of the cases would the observation not fall into the rejection region.

In the following discussion, it is assumed that the reader is familiar with at least the most elementary aspects of how classical inference approaches estimation and hypothesis testing, which is therefore not discussed here in further detail.

1.2 The Bayesian Paradigm

For Lindley, the substitution of the classical paradigm by the Bayesian paradigm represents a true scientific revolution in the sense of Kuhn (1962). The initial seed for the Bayesian approach to inference problems was planted by Richard Price when, in 1763, he posthumously published the work of Rev. Thomas Bayes titled An Essay towards Solving a Problem in the Doctrine of Chances. An interpretation of probability as a degree of belief – fundamental in the Bayesian philosophy – has a long history, including J. Bernoulli, in 1713, with his work Ars Conjectandi. One of the first authors to define probabilities as a degree of belief in the truth of a given proposition was De Morgan, in Formal Logic, in 1847, who stated: (1) probability is identified as a degree of belief; (2) the degrees of belief can be measured; and (3) these degrees of belief can be identified with a certain set of judgments. The idea of coherence of a system of degrees of belief seems to be due to Ramsey, for whom the behavior of an individual when betting on the truth of a given proposition is associated with the degree of belief that the individual attaches to it. If an individual states odds or possibilities (chances) – in favor of the truth or untruth – as r : s, then the degree of belief in the proposition is, for this individual, r/(r + s). For Ramsey, no set of bets in given propositions is admissible for a coherent individual if it would lead to certain loss. The strongest exponent of the concept of personal probabilities is, however, de Finetti. In discussing the Bayesian paradigm and its application to statistics, one must also cite Harold Jeffreys, who, reacting to the predominantly classical position in the middle of the century, besides inviting disapproval, managed to resurrect Bayesianism, giving it a logical basis and putting forward solutions to statistical inference problems in his time. From there the number of Bayesians grew rapidly and it becomes impossible to mention all but the most influential – perhaps Good, Savage, and Lindley.

The well-known Bayes’ theorem is a proposition about conditional probabilities. It is simply probability calculus and is thus not subject to any doubts. Only the application to statistical inference problems is subject to some controversy. It obviously plays a central role in Bayesian inference, which is fundamentally different from classical inference. In the classical model, the parameter θ, θ ∈ Θ, is an unknown but fixed quantity, i.e., it is a particular value that indexes the sampling model or family of distributions F that “appropriately” describes the process or physical system that generates the data. In the Bayesian model, the parameter θ, θ ∈ Θ, is treated as an unobservable random variable. In the Bayesian view, any unknown quantity – in this case, the parameter θ – is uncertain and all uncertainties are described in terms of a probability model. Related to this view, Bayesians would argue that initial information or a priori information – prior or external to the particular experiment, but too important to be ignored – must be translated into a probability model
for θ, say h(θ), and referred to as theprior distribution. The elicitation and interpretation of prior distributionsare some of the most controversial aspects of Bayesian theory.The family F is also
part of the Bayesian model; that is, the samplingmodel is a common part of the classical and the Bayesian paradigms, except that in the latter the elements f (x θ) of F are in general assumed toalso
have a subjective interpretation, similar to h(θ).The discussion of prior distributions illustrates some aspects of the disagreement between Bayesian and classical statisticians. For the
earlier,Berger, for example, the subjective choice of the family F is often considered a more drastic use of prior information than the use of prior distributions. And some would add: In the process
of modeling, a classical statistician uses prior information, albeit in a very informal manner.Such informal use of prior information is seen critically under a Bayesian
1.2 The Bayesian Paradigm7paradigm, which would require that initial or prior information of an investigator needs to be formally stated as a probability distribution on therandom variable θ.
Classical statisticians, for example, Lehmann, see animportant difference between the modeling of F and the specification ofh(θ). In the earlier case one has a data set x (x1 , x2 , . . . , xn ) that
is generated by a member of F and can be used to test the assumed distribution.To understand the Bayesian point of view, recall that for a classicalstatistician all problems that involve a binomial
random variable X can bereduced to a Bernoulli model with an unknown parameter θ that representsa “success” probability. For Bayesians, each problem is unique and has itsown real context where θ is
an important quantity about which there is, ingeneral, some level of knowledge that might vary from problem to problemand investigator to investigator. Thus, the probability model that capturesthis
variability is based on a priori information and is specific to a givenproblem and a given investigator. In fact, a priori information includes personal judgements and experiences of most diverse
types, resulting from ingeneral not replicable situations, and can thus only be formalized in subjective terms. This formalism requires that the investigator comply withcoherence or consistency
conditions that permit the use of probability calculus. However, different investigators can in general use different priordistributions for the same parameter without violating coherence
conditions.Assume that we observe X x and are given some f (x θ) F and aprior distribution h(θ). Then Bayes’ theorem implies3h(θ x) Rθf (x θ)h(θ)f (x θ)h(θ) dθ, θ Θ,(1.1)where h(θ x) is the posterior
distribution of θ after observing X x.Here, the initial information of the investigator is characterized by h(θ),and modified with the observed data by being updated to h(θ x). Thedenominator in
(1.1), denoted f (x), is the marginal (or prior predictive)distribution for X; that is, for an observation of X whatever the value of θ.The concept of a likelihood function appears in the context of
classicalinference, and is not less important in the Bayesian context. Regarding itsdefinition, it is convenient to distinguish between the discrete and continuous cases (Kempthorn and Folks, 1971),
but both cases lead to the function3Easily adapted if x were a vector or if the parameter space were discrete.
Bayesian Inference8of θ,L(θ x) k f (x θ), θ Θ orL(θ x1 , . . . , xn ) kΠi f (xi θ), θ Θ,(1.2)which expresses for every θ Θ its likelihood or plausibility when X xor (X1 x1 , X2 x2 , . . . , Xn xn )
is observed. The symbol k represents afactor that does not depend on θ. The likelihood function – it is not a probability, and therefore, for example, it is not meaningful to add likelihoods –plays
an important role in Bayes’ theorem as it is the factor through whichthe data, x, updates prior knowledge about θ; that is, the likelihood can beinterpreted as quantifying the information about θ
that is provided by thedata x.In summary, for a Bayesian the posterior distribution contains, by wayof Bayes’ theorem, all available information about a parameter:prior information information from
the sample.It follows that all Bayesian inference is based on h(θ x) [or h(θ x1 , x2 ,. . . , xn )].When θ is a parameter vector, that is, θ (γ, φ) Γ Φ, it can bethe case that the desired inference
is restricted to a subvector of θ, sayγ. In this case, in contrast to the classical paradigm, the elimination ofthe nuisance parameter φ under the Bayesian paradigm follows always thesame principle,
namely through the marginalization of the joint posteriordistribution,ZZh(γ, φ x)dφ h(γ φ, x)h(φ x)dφ.(1.3)h(γ x) ΦΦPossible difficulties in the analytic evaluation of the marginal disappearwhen γ
and φ are a priori independent and the likelihood function factorsinto L(θ x) L1 (γ x) L2 (φ x), leading to h(γ x) h(γ)L1 (γ x).1.3 Bayesian InferenceIn the Bayesian approach, it is convenient to
distinguish between two objectives: (1) inference about unknown parameters θ, and (2) inference aboutfuture data (prediction).1.3.1 Parametric InferenceIn the case of inference on parameters, we find
a certain agreement – atleast superficially – between classical and Bayesian objectives, although
1.3 Bayesian Inference9in the implementation the two approaches differ. On one side, classical inference is based on probabilities associated with different samples,
[Source: M. Antónia Amaral Turkman, Carlos Daniel Paulino, and Peter Müller, Computational Bayesian Statistics: An Introduction.]
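As a concrete illustration of the update rule (1.1) (this numerical example is mine, not the book's), consider a Bernoulli model with a Beta(2, 2) prior on the success probability θ and data consisting of 7 successes in 10 trials. A grid approximation of (1.1) can be checked against the exact conjugate answer, a Beta(9, 5) posterior with mean 9/14:

```python
# Grid approximation of the posterior (1.1) for a Bernoulli success
# probability theta, with a Beta(2, 2) prior and 7 successes in 10 trials.
n = 2000
grid = [(i + 0.5) / n for i in range(n)]        # midpoint grid over Theta = (0, 1)
prior = [t * (1 - t) for t in grid]             # Beta(2, 2) kernel, up to a constant
like = [t**7 * (1 - t)**3 for t in grid]        # likelihood L(theta | x)
unnorm = [p * l for p, l in zip(prior, like)]
fx = sum(unnorm) / n                            # marginal f(x): the denominator of (1.1)
post = [u / fx for u in unnorm]                 # posterior h(theta | x) on the grid

# Conjugacy gives the exact posterior Beta(2 + 7, 2 + 3) = Beta(9, 5), mean 9/14.
post_mean = sum(t * p for t, p in zip(grid, post)) / n
print(round(post_mean, 4))                      # -> 0.6429
```

The grid result agrees with the exact posterior mean 9/14 ≈ 0.6429, showing how the prior Beta(2, 2) mean of 0.5 has been pulled toward the observed frequency 0.7 by the data.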
Pressure and types of pressure
Pressure is force per unit area applied in a direction perpendicular to the surface of an object. Pressure can be measured in any unit of force divided by any unit of area. The SI unit of pressure is the newton per square metre (N/m²), which is called the pascal (Pa).
For liquids, the pressure at a depth h may be written P = ρgh, where ρ is the liquid's density and g is the acceleration due to gravity.
TYPES OF PRESSURE
Atmospheric pressure is the pressure measured at a surface due to the weight of the atmosphere above it. However, it changes with altitude and humidity, so atmospheric pressure may differ from one area or place to another. Atmospheric pressure at sea level is also called barometric pressure.
Standard atmospheric pressure at sea level = 760 mmHg = 29.92 inHg = 14.696 PSI.
GAUGE PRESSURE Gauge pressure is the pressure above atmospheric pressure. Hence, the zero of the gauge pressure scale depends on the atmospheric pressure at that point. Sometimes it is called
internal pressure.
P gauge = P abs – P atm
ABSOLUTE PRESSURE Absolute pressure is the pressure measured with respect to zero pressure (vacuum).
Absolute pressure = gauge pressure + atmospheric pressure.
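The gauge/absolute relations above can be sketched in a few lines of code (the function names are mine; 14.696 PSI is the standard sea-level value quoted earlier):

```python
ATM_PSI = 14.696  # standard atmospheric pressure at sea level, in PSI

def absolute_from_gauge(p_gauge, p_atm=ATM_PSI):
    """Absolute pressure = gauge pressure + atmospheric pressure."""
    return p_gauge + p_atm

def gauge_from_absolute(p_abs, p_atm=ATM_PSI):
    """P gauge = P abs - P atm; a negative result indicates vacuum."""
    return p_abs - p_atm

# A tire gauge reading of 32 PSI is about 46.7 PSI absolute at sea level.
print(round(absolute_from_gauge(32.0), 3))   # -> 46.696
# An absolute pressure below atmospheric reads as a negative (vacuum) gauge value.
print(round(gauge_from_absolute(10.0), 3))   # -> -4.696
```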
VACUUM PRESSURE Vacuum pressure is pressure less than atmospheric pressure. Vacuum is often measured in inches of water (inH2O) or inches of mercury (inHg). The vacuum scale extends between the absolute zero reference point and atmospheric pressure; thus it is not a positive pressure. It is treated as a sucking, or negative, force.
DIFFERENTIAL PRESSURE Differential pressure is the difference between pressure measurements taken at two related points, e.g. between an unknown pressure and the local atmospheric pressure. It is calculated by subtracting the lower port's pressure reading from the higher port's reading.
STATIC PRESSURE Static pressure is defined as the pressure exerted by fluids at rest. Static pressure is independent of kinetic energy of fluid.
DYNAMIC PRESSURE Dynamic pressure is the pressure above static pressure caused by the movement of the fluid; it results from the transformation of the fluid's kinetic energy into potential (pressure) energy.
Signals and Systems
Signals and Systems. Instructor: Prof. K.S. Venkatesh, Department of Electrical Engineering, IIT Kanpur. This course is a study of signals and systems, covering topics: formal definition of 'signal'
and 'system', continuous and discrete signals, continuous and discrete-time systems, Linear Time-Invariant (LTI) systems, representation of continuous and discrete-time convolution, differential
equations, difference equations, filters, periodic signals, Fourier series, Fourier transform and its properties, discrete-time Fourier transform, frequency response of continuous and discrete LTI
systems, sampling, Laplace transform and its properties, inverse Laplace transform, z-transform and its properties, inverse z-transform. (from nptel.ac.in)
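Several of the listed topics (LTI systems, convolution, filters) come together in one computation: the output of a discrete-time LTI system is the convolution of its input with the system's impulse response. A minimal illustration of my own, not course material:

```python
def convolve(x, h):
    """Discrete convolution y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# A two-point moving-average filter applied to a length-4 pulse.
print(convolve([1, 1, 1, 1], [0.5, 0.5]))   # -> [0.5, 1.0, 1.0, 1.0, 0.5]
```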
The Stacks project
Lemma 38.4.4. Let $S$, $X$, $\mathcal{F}$, $s$ be as in Definition 38.4.1. Let $(Z, Y, i, \pi , \mathcal{G})$ be a one step dévissage of $\mathcal{F}/X/S$ over $s$. Let $(S', s') \to (S, s)$ be any
morphism of pointed schemes. Given this data let $X', Z', Y', i', \pi '$ be the base changes of $X, Z, Y, i, \pi $ via $S' \to S$. Let $\mathcal{F}'$ be the pullback of $\mathcal{F}$ to $X'$ and let
$\mathcal{G}'$ be the pullback of $\mathcal{G}$ to $Z'$. If $S'$ is affine, then $(Z', Y', i', \pi ', \mathcal{G}')$ is a one step dévissage of $\mathcal{F}'/X'/S'$ over $s'$.
Category : Science and Education
Archive   : TRICALC.ZIP
Filename : TRICALC.DOC
TRICALC- The ultimate pop-up calculator
Requires color adaptor for optimum performance.
TRICALC is a powerful memory resident calculator program.
It is meant for those times when a swift calculation is needed
but one is in another program. Without dropping out of the
current program, one presses the hot-key and TRICALC appears to
save the day! The great advantage of TRICALC is the method
of entering expressions to be evaluated. Instead of dealing
with only 2 parameters at a time (LIKE OTHER, INFERIOR
CALCULATORS DO), TRICALC allows an entire string to be entered
and evaluated. Results are returned in both standard and
scientific notation.
If the register $R holds the number 6,
the expression:
> 3+7*Cos(360)-Log(10000)+Sqrt(4)*$R
Would be evaluated to 18 and 1.80000000000 E01
(NOTE: Order of operations is followed, so 7*3+5 is not the
same as 7*(3+5))
TRICALC's functions are of the form: Func(Argument)
TRICALC supports the following functions:
Sin - Trigonometric SINE function of the argument ex. Sin(45)
Cos - Trig. COSINE function
Tan - Trig. TANGENT function ( Sin[X]/Cos[X] )
ArcSin - Trig ARC-SINE function. Inverse operation of Sine.
Domain is X: -1<=X<=1
ArcCos - Trig ARC-COSINE function. Similar to ARCSIN
ArcTan - Trig ARC-TANGENT function. Similar to ARCSIN
Ln - Logarithm with base e of the argument (approx. 2.7)
Log - Logarithm with base 10
Exp - e raised to the argument ex. Exp(2) = approx. 2.7^2
Sqrt - Square root of argument
N'th roots can be made by using X^(1/N)
ex. 27^(1/3) = 3rd root of 27 = 3
TRICALC also has registers to hold the results of operations.
to assign a value to a register, enter the following at the > prompt:
There are 26 registers- $A..$Z
$A..$Y are user adjustable while $Z will always hold the result of
the last operation. All register's values will carry over from each
pop-up, so they are usable for holding any value you want to remember.
Registers can be used just like numbers or functions.
ex. $R^6-Sin($A+2*(1/$N))+7
When putting TRICALC into memory the default pop-up key is ALT-C.
The user can, instead, pick his/her own pop-up key, such as CTRL-ALT-HOME
or something similarly stupid. To do this use the primary value as the first
parameter and the auxiliary value as the second. Here are some examples:
TriCalc 59      F1 is Hot-Key
TriCalc 119     Ctrl-Home
TriCalc 130     Alt-Hyphen
TriCalc 15      Shift-Tab
TriCalc 20      Alt-T
To get other values, consult the Turbo Pascal (TM Borland Int'l) manual
or other resource. The values are standard main-aux byte keyboard scan codes.
If you want to avoid the annoying opening screen, use NS (NoScreen) as a
parameter. I would recommend putting the TriCalc call in a batch file
if you are going to use some particular combination often.
Enjoy this useful (We hope) utility, and support the ShareWare concept
by sending $5.00 and any comments or suggestions to TriSoft at the below
address. If you would like an updated, customized version please send $25.00
and we can give you an updated version (customized colors, customized
functions, graphing, derivatives/integrals, and extended register access).
Thank You.
TriSoft Technologies
11204 Hunting Horn Ln.
Reston, Va. 22091
How to create rolling 12 month, and progress by 1 year | Microsoft Community Hub
Forum Discussion
How to create rolling 12 month, and progress by 1 year
Ok, so here's what I have so far. We have meetings every third Tuesday of the month (ignore the screenshot). I used a formula in column C
which is doing exactly what I need. But I'm planning for the future: how do I get my Column A to rotate back to 1 after 12, AND the Year to add 1 when the month rotates? When I leave this job I still
want this app to work 🙂
I have given you a different formula for your column C as well, but you'll have to have a current version of Excel, or a subscription to Microsoft 365 for it to work, because it takes advantage
of the LET function.
That aside, here are the formulas in columns A, B, and C in the attached example spreadsheet.
A: =IF(A2=12,1,A2+1)
B: =IF(A2=12,B2+1,B2)
C: =LET(FD,DATE(B3,A3,1),FD+CHOOSE(WEEKDAY(FD),16,15,14,20,19,18,17))
• See the attached workbook.
Can you explain the numbers? That way, if I have to change the day of the week (say, first Tuesday instead of third), I can adapt this formula.
FD is the first day of the month. The numbers 16, 15, 14, 20, 19, 18, 17 specify how many days you have to add to this to end up on the 3rd Tuesday. For example, if the first day is a Sunday,
WEEKDAY(FD) = 1, and the 3rd Tuesday will be the 17th, so you have to add 16 days.
If you want the 1st Tuesday, you have to add 14 days less, so the numbers in the formula will be 2, 1, 0, 6, 5, 4, 3.
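The same third-Tuesday arithmetic can be cross-checked outside Excel. Here is a small Python sketch of mine; Python numbers weekdays Monday = 0 .. Sunday = 6 (unlike Excel's WEEKDAY), so the offset is computed rather than looked up:

```python
from datetime import date, timedelta

def third_tuesday(year, month):
    """Date of the 3rd Tuesday of a month, mirroring
    FD + CHOOSE(WEEKDAY(FD), 16,15,14,20,19,18,17)."""
    fd = date(year, month, 1)                     # FD: first day of the month
    days_to_first_tue = (1 - fd.weekday()) % 7    # Tuesday is weekday 1 in Python
    return fd + timedelta(days=days_to_first_tue + 14)

print(third_tuesday(2024, 11))   # -> 2024-11-19
```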
mp_arc 10-52
10-52 Annalisa Cesaroni, Matteo Novaga, Enrico Valdinoci
Curvature flow in heterogeneous media (66K, LaTex) Mar 17, 10
Abstract , Paper (src), View paper (auto. generated ps), Index of related papers
Abstract. In recent years, there has been a growing interest in geometric evolution in heterogeneous media. Here we consider curvature driven flows of planar curves, with an additional
space-dependent forcing term. Motivated by a homogenization problem, we look for estimates which depend only on the $L^\infty$-norm of the forcing term. By means of an asymptotic analysis, we
discuss the properties of the limit solutions of the homogenization problem, which we can rigorously solve in some special cases: that is, when the initial curve is a graph, and the forcing term
does not depend on the vertical direction. As a by-product, in such cases we are able to define a solution of the geometric evolution when the forcing term is just a bounded, not necessarily
continuous, function.
Files: 10-52.src( 10-52.keywords , omofinal.tex )
Blog post that uses TLA+ in a debate about properties of system designs
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Blog post that uses TLA+ in a debate about properties of system designs
A colleague pointed out this encouraging development in the world of engineering blogs:
There's a grand tradition of blogs[1] about Eric Brewer's "CAP theorem". Many of those blogs use basic terms in ambiguous ways -- even the core terms; 'consistency', 'availability', and
'partition-tolerance'. Many of the blogs significantly depart from the system-model in the paper by Gilbert & Lynch[2].
It would help avoid a lot of confusion and noise if blog-authors used languages like TLA+ and PlusCal to precisely state the algorithms, the network/failure model, and the other assumptions that they are making.
The ensuing debate could then be much more precise and enlightening: refutable analysis of properties of algorithms; precise proposals for different models of network & failure. Arguments could be
settled by using the model checker, or, if people care enough, with machine-checked TLAPS proofs.
This wouldn't stop the subset of blogs that make hand-wavy claims about the 'real-world benefits' of various existing systems. (Such arguments are usually based on the author's personal experience &
intuition about the probabilities of various failure modes, without any actual supporting data. See [3] for why human intuition about probabilities of failure in big systems is very likely to be wrong.)
But it would still be a big improvement.
[2] "Brewer's Conjecture and the Feasibility of Consistent Available Partition-Tolerant Web Services (2002)" [Seth Gilbert, Nancy Lynch]
Slide #3 points out that human working-lifetimes are too short to give us correct intuition about the probability of subtle bugs being triggered in massive-scale production systems. The rest of
the talk is about various nasty examples of byzantine faults. Most of the examples are in hardware systems.
NASA’s C. Michael Holloway (after some studies of accidents):
“To a first approximation, we can say that
accidents are almost always the result of incorrect estimates
of the likelihood of one or more things.”
– We fatally underestimate the probabilities of what can happen.
– Designers will say a failure mode is “not credible”, when it actually can happen with probabilities far greater than requirements allow.
Lambert W Function - CatSynth
Lambert W Function
We at CatSynth return to the topic of mathematics for the first time in a while. In particular, we visit an obscure topic of personal significance. One day in high school I wrote down a seemingly
simple equation:
2^x = 1 / x
And set about trying to solve it. It certainly has a solution, as one can graph the functions 2^x and 1/x and note their intersection:
In the graph above, the green curve is 2^x and the black curve is 1/x. They intersect at an x coordinate equal to about 0.64. I actually moved to a variation:
e^x = 1 / x
(somehow thinking that using e would make it simpler), and quickly approximated the solution as x ≈ 0.56714329…
While computing this number was relatively simple by pressing buttons on a handheld calculator, describing it in a closed form proved elusive. Every so often, I would return to the equation, try to
manipulate it algebraically or using calculus, but I was never able to do so.
Years later, in college, I found out that it was in fact impossible to solve algebraically, but that did not prevent mathematicians from naming both the constant 0.56714329… and the function
necessary to compute it. Consider a function w(x) such that:
w(x)e^w(x) = x
The function w(x) is known as the Lambert W function, or “omega function”, and is named after 18th century mathematician Johann Heinrich Lambert. It is a non-analytical function, in that it cannot be
expressed in a closed algebraic form (hence the difficulty I had attempting to solve my equation). However, one can see that w(1) is a solution for it. And w(1) ≈ 0.56714329… is often called
Lambert’s constant.
Although Lambert’s function does not have a closed-form expression, one can approximate it with a small computer program, such as this python program:
from math import e

def lambertW(x, prec=1E-12, maxiters=100):
    w = 0
    for i in range(maxiters):
        we = w * e**w
        w1e = (w + 1) * e**w
        if prec > abs((x - we) / w1e):
            return w
        # Halley's method update step
        w -= (we - x) / (w1e - (w + 2) * (we - x) / (2*w + 2))
    raise ValueError("W doesn't converge fast enough for abs(z) = %f" % abs(x))
It was somewhat disappointing in the end to find out both that there was no closed form for the solution, and that the constant associated with the solution already had a name. But it was still
interesting to learn about it, and to then apply it to other problems.
On that note, we conclude by showing that w(x) can also be used to solve the original equation:
2^x = 1/x
can be rewritten as:
(ln 2) x e^((ln 2) x) = ln 2
We can now use w(x) to solve the equation:
x = w(ln2) / ln2
which is approximately .6411857…
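Both values can be double-checked numerically. Here is a self-contained sketch of mine (a plain Newton iteration, independent of the listing above):

```python
from math import e, log

def lambert_w(x, iters=60):
    """Newton iteration for w * e**w = x (principal branch, x > 0)."""
    w = 0.5
    for _ in range(iters):
        w -= (w * e**w - x) / ((w + 1) * e**w)
    return w

# w(1) is the constant 0.56714329... discussed above,
print(round(lambert_w(1), 8))        # -> 0.56714329
# and w(ln 2) / ln 2 solves 2^x = 1/x.
x = lambert_w(log(2)) / log(2)
assert abs(2**x - 1/x) < 1e-9        # x is approximately 0.6411857
```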
One thing I never tried in my youthful experimentations with this function was evaluate it with non-real complex numbers. While there are examples plotting w(z) on the complex plane, I would rather
take some time to explore this myself.
I also have yet to find any applications to music or the visual arts, outside of literal usage in conceptual art pieces.
As the year draws to a close, we start looking forward to a new year and a new release of Maple. With every new release comes many new features and updates to explore.
We are looking for several new beta testers with a good working knowledge of Maple; We need your input, your ideas, and your experience with our products to help us improve the software and get it
ready for general release.
There are many benefits to becoming a beta tester:
• You’ll get to use the new software before anyone else does.
• You’ll help us make our software better in ways that work for you.
• Your suggestions could determine the future direction of the software.
• You’ll get feedback right from the development team.
If you are interested in becoming a beta tester for the next version of Maple, please email: beta (at) maplesoft.com for more information.
stoimen's web log
We already know what the topological sort of a directed acyclic graph is. So why do we need a revision of this algorithm? First of all, I never mentioned its complexity; so to understand why we need a
revision, let's go over the algorithm again.
We have a directed acyclic graph (DAG). There are no cycles, so we can put all the vertices of the graph in such an order that if there's a directed edge (u, v), u
precedes v in that order.
The process of putting all the vertices of the DAG in such an order is called topological sorting. It’s commonly used in task scheduling or while finding the shortest paths in a DAG.
The algorithm itself is pretty simple to understand and code. We must start from the vertex (vertices) that don’t have predecessors.
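Since the revision is motivated by complexity, it's worth making the procedure concrete: repeatedly emitting vertices that have no remaining predecessors is Kahn's algorithm, which runs in O(V + E) time when implemented with a queue. A sketch of my own, not the post's code:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: repeatedly emit vertices with no predecessors.
    graph maps each vertex to the list of vertices its edges point to."""
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    queue = deque(u for u in graph if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(order) != len(graph):
        raise ValueError("graph has a cycle, no topological order exists")
    return order

# Edges point from prerequisite to dependent task.
dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_sort(dag))   # -> ['a', 'b', 'c', 'd']
```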
Computer Algorithms: Longest Increasing Subsequence
A very common problem in computer programming is finding the longest increasing (decreasing) subsequence in a sequence of numbers (usually integers). Actually, this is a typical dynamic programming problem.
Dynamic programming can be described as a huge area of computer science problems that can be categorized by the way they can be solved. Unlike divide and conquer, where we were able to merge the
fairly equal sub-solutions in order to receive one single solution of the problem, in dynamic programming we usually try to find an optimal sub-solution and then grow it.
Once we have an optimal sub-solution on each step we try to upgrade it in order to cover the whole problem. Thus a typical member of the dynamic programming class is finding the longest subsequence.
However this problem is interesting because it can be related to graph theory. Let's find out how.
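The growing of optimal sub-solutions described above can be sketched directly: best[i] is the length of the longest increasing subsequence ending at position i, built up step by step. An O(n²) illustration of mine, not the post's code:

```python
def longest_increasing_subsequence(seq):
    """Dynamic programming: best[i] holds the length of the longest
    increasing subsequence that ends exactly at position i."""
    if not seq:
        return 0
    best = [1] * len(seq)
    for i in range(1, len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:                    # seq[i] can extend the run at j
                best[i] = max(best[i], best[j] + 1)
    return max(best)

print(longest_increasing_subsequence([3, 1, 4, 1, 5, 9, 2, 6]))   # -> 4
```

One such subsequence of length 4 is 3, 4, 5, 9.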
Computer Algorithms: Prim’s Minimum Spanning Tree
Along with Kruskal's minimum spanning tree algorithm, there's another general algorithm that solves the problem: the algorithm of Prim.
As we already know, the algorithm of Kruskal works in a pretty natural and logical way. Since we're trying to build an MST, which is naturally built from the minimal edges of the graph (G), we sort them
in non-descending order and start building the tree.
During the whole process of building the final minimum spanning tree Kruskal’s algorithm keeps a forest of trees. The number of trees in that forest decreases on each step and finally we get the
minimum weight spanning tree.
A key point in Kruskal's approach is the way we get the "next" edge from G that should be added to one of the trees of the forest (or connect two trees of the forest). The only thing we
should be aware of is to choose an edge connecting two vertices u and v that aren't in the same tree. That's all.
An important feature of Kruskal's algorithm is that it builds the MST just by sorting the edges by their weight and doesn't care about a particular starting vertex.
At the same time there's another algorithm that builds an MST – the algorithm of Prim, designed by Robert Prim in 1957.
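In contrast to Kruskal's forest of trees, Prim's algorithm grows a single tree from a chosen starting vertex, always adding the cheapest edge that reaches a new vertex. A priority-queue sketch of my own (the adjacency-list format is my assumption, not the post's):

```python
import heapq

def prim(graph, start):
    """Total MST weight via Prim's algorithm.
    graph: {u: [(weight, v), ...]} undirected adjacency lists."""
    visited = {start}
    frontier = list(graph[start])     # candidate edges out of the tree
    heapq.heapify(frontier)
    total = 0
    while frontier and len(visited) < len(graph):
        w, v = heapq.heappop(frontier)
        if v in visited:
            continue                  # edge would close a cycle, skip it
        visited.add(v)
        total += w
        for edge in graph[v]:
            heapq.heappush(frontier, edge)
    return total

g = {
    0: [(1, 1), (4, 2)],
    1: [(1, 0), (2, 2), (7, 3)],
    2: [(4, 0), (2, 1), (3, 3)],
    3: [(7, 1), (3, 2)],
}
print(prim(g, 0))   # -> 6
```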
Computer Algorithms: Kruskal’s Minimum Spanning Tree
One of the two main algorithms for finding a minimum spanning tree is the algorithm of Kruskal. Before getting into the details, let's get back to the principles of the minimum spanning tree.
We have a weighted graph and of all spanning trees we’d like to find the one with minimal weight. As an example on the picture above you see a spanning tree (T) on the graph (G), but that isn’t the
minimum weight spanning tree!
Continue reading Computer Algorithms: Kruskal’s Minimum Spanning Tree
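The steps described above can be sketched in Python with a simple union-find structure to track the forest of trees (a hypothetical illustration, not the post’s code):

```python
def kruskal_mst(n, edges):
    """Kruskal's MST: sort edges in non-descending order of weight, then
    add each edge whose endpoints lie in different trees of the forest.

    n: number of vertices labeled 0..n-1
    edges: list of (weight, u, v) tuples
    Returns the list of edges in the MST.
    """
    parent = list(range(n))

    def find(x):  # union-find representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:          # u and v must not be in the same tree
            parent[ru] = rv   # merge two trees of the forest
            mst.append((w, u, v))
    return mst
```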
Computer Algorithms: Minimum Spanning Tree
Here’s a classical task on graphs. We have a group of cities and we must wire them to provide them all with electricity. Out of all possible connections we can make, which one uses the minimum amount
of wire?
To wire N cities it’s clear that you need at least N-1 wires, each connecting a pair of cities. The problem is that sometimes you have more than one way to do it. Even for a small number of
cities there can be more than one solution, as shown in the image below.
Here we can wire these four nodes in several ways, but the question is which one is the best. By the way, defining the term “best one” is also tricky. Most often it means the one using the least wire,
but it can be anything else depending on the circumstances.
As we talk about weighted graphs, we can generally speak of a minimum weight solution passing through all the vertices of the graph.
By the way, there might be more than one equally optimal (minimal) solution. Continue reading Computer Algorithms: Minimum Spanning Tree
|
{"url":"http://stoimen.com/category/graphs/","timestamp":"2024-11-04T14:27:04Z","content_type":"text/html","content_length":"39225","record_id":"<urn:uuid:14a1d9f7-59f7-44c7-8c41-f4a0f349eba0>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00295.warc.gz"}
|
NSPLINE - Natural Cubic Spline in Excel
Work in Progress :: Please contact us to report errors, typos, etc.
Creates a C2 Interpolating Natural Cubic Spline
=L_NSPLINE(known_xs, known_ys, [x])
Argument | Description | Example
known_xs | The x-values of the control points in increasing order | {2;4;8;10}
known_ys | The y-values of the control points | {3;1;5;5}
x | [optional] Values of x to interpolate | {2;2.1;...;9.9;10}
In the template file, navigate to the Polynomials worksheet to see the L_NSPLINE function in action.
L_NSPLINE is used for natural cubic spline interpolation. Like CSPLINE, it creates a cubic piecewise polynomial that passes through a given set of control points. Both the first and second
derivatives are continuous and the second derivative at the end points is zero.
The image below shows an example of a natural cubic spline through 4 control points, along with the first and second derivatives, which you can observe are indeed continuous.
Note that L_PPVAL does not extrapolate. This is why the derivatives of the spline in the above image are evaluated only between the control points.
The default output of L_NSPLINE is a piecewise polynomial data structure array that can be used by L_PPVAL for interpolation.
known_xs, {2;4;8;10},
known_ys, {3;1;5;5},
Result: pp_array
{2, 0.109375, 0, -1.4375, 3;
4, -0.09375, 0.65625, -0.125, 1;
8, 0.078125, -0.46875, 0.625, 5;
10, "", "", "", ""}
If the x parameter is not blank, then the output will be a vector of interpolated values corresponding to the values in x. The following example interpolates the natural spline at 49 values of x
between 0 and 12 (L_NSPLINE can extrapolate).
known_xs, {2;4;8;10},
known_ys, {3;1;5;5},
interp_x, L_LINSPACE(0,12,49),
Result: {5.875; 5.5156; ... ; 4.453; 4.375}
Why is it Called Natural?
A flexible ruler (with constant thickness and material properties) that passes through the control points without being clamped (meaning that the slope is not constrained by an outside force) will
follow the shape of the natural spline because this shape minimizes bending energy. Thus, it is the "natural" state of the flexible ruler. Note that the control points only constrain the vertical
position, so the ruler would be allowed to slip and rotate at each point as needed.
The YouTube series Splines in 5 Minutes by Professor Steve Seitz does a very good job of explaining this concept, and even shows an actual experiment with duck weights and a flexible ruler.
The constraint that the second derivative is zero at the end points means that the curvature is zero (no bending) at the end points. As a result, the ruler would extend unbent beyond the first and
last control point, so L_NSPLINE extrapolates based on a straight line if values of x are <x[1] or >x[n]. The slope at the end points is not specified, but can be determined by finding the derivative
of the spline at the ends using L_PPDER.
How it Works
The L_NSPLINE function uses the algorithm from the Wikipedia article in reference [1]. See the more general L_SPLINE function for an algorithm that solves a system of constraint equations.
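For the curious, the usual tridiagonal natural-spline algorithm can be sketched in Python; this is an illustrative translation of the standard scheme, not the actual LAMBDA code:

```python
def natural_cubic_spline(xs, ys):
    """Piecewise coefficients (a, b, c, d) of a natural cubic spline.

    On [xs[i], xs[i+1]] the spline is
        S_i(t) = a + b*(t - xs[i]) + c*(t - xs[i])**2 + d*(t - xs[i])**3,
    with zero second derivative at both end points.
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    a = list(ys)
    # Right-hand side of the tridiagonal system for the c coefficients
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3 * (a[i + 1] - a[i]) / h[i] - 3 * (a[i] - a[i - 1]) / h[i - 1]
    # Forward sweep (natural boundary conditions: c[0] = c[n] = 0)
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    # Back substitution
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for i in range(n - 1, -1, -1):
        c[i] = z[i] - mu[i] * c[i + 1]
        b[i] = (a[i + 1] - a[i]) / h[i] - h[i] * (c[i + 1] + 2 * c[i]) / 3
        d[i] = (c[i + 1] - c[i]) / (3 * h[i])
    return [(a[i], b[i], c[i], d[i]) for i in range(n)]

coeffs = natural_cubic_spline([2, 4, 8, 10], [3, 1, 5, 5])
print([round(v, 6) for v in coeffs[0]])  # [3, -1.4375, 0.0, 0.109375]
```

The first interval’s coefficients match the first row of the pp_array example in this article (a = 3, b = -1.4375, c = 0, d = 0.109375).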
Lambda Formula
This code for using L_NSPLINE in Excel is provided under the License as part of the LAMBDA Library, but to use just this function, you may copy the following code directly into your spreadsheet.
Code for AFE Workbook Module (Excel Labs Add-in)
* Natural Cubic Spline - Creates a C2 Interpolating Spline with d2y/dx2=0 at the end points
L_NSPLINE = LAMBDA(known_xs,known_ys,[x],
bj,(INDEX(ys,j+1)-INDEX(ys,j))/INDEX(h,j) - 1/3*INDEX(h,j)*(co+2*cj),
Named Function for Google Sheets
Name: L_NSPLINE
Description: Creates a C2 Interpolating Natural Cubic Spline
Arguments: known_xs, known_ys, x
[in the works]
L_NSPLINE Examples
Example: First and Second Derivative of a Natural Cubic Spline
This is the example from the L_PPDER documentation.
x_cpts, {2;3;4;5;6;7;8;9;10;11;12},
y_cpts, {4;4;2;3;1;1.5;5;2;2;4.5;4},
pp_spline, L_NSPLINE(x_cpts,y_cpts),
x_interp, L_LINSPACE(2,12,101),
y_interp, L_PPVAL(pp_spline,x_interp),
dy_interp, L_PPVAL( L_PPDER(pp_spline),x_interp),
d2y_interp, L_PPVAL( L_PPDER(L_PPDER(pp_spline)),x_interp),
Result: (see graph below)
See Also
PPVAL, CSPLINE, SPLINE, PPDER, PPINT
: This article is meant for educational purposes only. See the
regarding the LAMBDA code, and the site
Terms of Use
for the documentation.
|
{"url":"https://www.vertex42.com/lambda/nspline.html","timestamp":"2024-11-13T05:20:24Z","content_type":"text/html","content_length":"71998","record_id":"<urn:uuid:0f1db846-882b-42cd-ba7d-e0a0bdf9c3fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00447.warc.gz"}
|
Problem 975 - TheMathWorld
Problem 975
Write the null and alternative hypotheses. Identify which is the claim.
A study claims that the mean survival time for certain cancer patients treated immediately with chemotherapy and radiation is 32 months.
The null hypothesis, H[0], is a statement that contains a condition of equality, such as ≤, =, or ≥. Here the null hypothesis is that the mean survival time is equal to 32 months, or H[0]: µ = 32.
The alternative hypothesis, H[a], is the complement of the null hypothesis. It is a statement that must be true if H[0] is false, and it contains a condition of strict inequality, such as <, ≠, or >.
The alternative hypothesis is that the mean survival time is not equal to 32 months, or H[a]: µ ≠ 32.
Thus, the hypothesis test to be conducted is as follows.
H[0]: µ = 32
H[a]: µ ≠ 32
The claim is the statement made about the population parameter. It can be either the null or alternative hypothesis, depending on how it is worded. Identify which is the claim for this problem.
The claim for this problem is that the mean survival time for certain cancer patients treated immediately with chemotherapy and radiation is 32 months. This corresponds to the null hypothesis H[0].
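To illustrate how such a two-sided test would be carried out numerically, here is a sketch with made-up sample numbers (the sample mean, sigma, and n below are hypothetical, not from the problem, and a known population standard deviation is assumed):

```python
import math

def two_sided_z_test(sample_mean, mu0, sigma, n):
    """z statistic and two-tailed p-value for H0: mu = mu0 vs Ha: mu != mu0."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # Two-tailed p-value from the standard normal CDF (via the error function)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical sample: mean survival 30 months, sigma = 5, n = 25 patients
z, p = two_sided_z_test(30, 32, 5, 25)
print(round(z, 2), round(p, 4))  # -2.0 0.0455
```

With p ≈ 0.0455 < 0.05, such a sample would lead to rejecting H[0]: µ = 32 at the 5% level.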
|
{"url":"https://mymathangels.com/problem-975/","timestamp":"2024-11-11T13:44:57Z","content_type":"text/html","content_length":"59473","record_id":"<urn:uuid:b301f974-1ab5-4d6d-a013-939a3c94f3b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00536.warc.gz"}
|
1 — 14:00 — ** CANCELLED ** Complexity of Mixed Integer Nonlinear Programming with Nonconvexities
We establish some complexity results for mixed integer programming in cases where there are nonconvexities describing the feasible region. This departs from most complexity results in integer
programming that focus on linear or convex constraints. Nonconvexities can easily result in NP-Hard cases, even in fixed dimension 2. Our goal is to expand the class of problems that we can solve in
polynomial time when the dimension is fixed, ideally in a fixed parameter tractable way.
2 — 14:30 — A Knowledge Compilation Take on Binary Polynomial Optimization
The Binary Polynomial Optimization (BPO) problem is defined as the problem of minimizing a given polynomial function over all binary points. The main contribution of this paper is to draw a novel
connection between BPO and the problem of performing model counting over the (max, +) semiring on Boolean functions. This connection allows us to give a strongly polynomial algorithm that solves BPO
with a hypergraph that is either beta-acyclic or with bounded incidence treewidth. This result unifies and significantly extends the known tractable classes of BPO. The generality of our technique
allows us to deal also with extensions of BPO, where we enforce extended cardinality constraints on the set of binary points, and where we seek k best feasible solutions. We also extend our results
to the significantly more general problem where variables are replaced by literals. Preliminary computational results show that the resulting algorithms can be significantly faster than current
3 — 15:00 — On the power of linear programming for data clustering
We propose a linear programming (LP) relaxation for K-means clustering. We derive sufficient conditions under which this LP relaxation is tight and subsequently obtain recovery guarantees under a
popular stochastic model for the input data. We present extensive computational experiments on real data sets. Finally, we present extensions that incorporate various notations of group and
individual fairness measures.
4 — 15:30 — Sample Complexity and Computational Results for Branch and Cut Using Neural Networks
Data-driven algorithm design is a paradigm that uses statistical and machine learning techniques to select from a class of algorithms for a computational problem an algorithm that has the best
expected performance with respect to some (unknown) distribution on the instances of the problem. We build upon recent work in this line of research by introducing the idea where, instead of
selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved. In particular, given a representative sample of
instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance. We formalize this idea and derive rigorous sample complexity
bounds for this learning problem, in the spirit of recent work in data-driven algorithm design. We then apply this approach to the problem of making good decisions in the branch-and-cut framework for
mixed-integer optimization (e.g., which cut to add?). In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small
branch-and-cut tree for that instance. Our computational results provide evidence that our particular way of using neural networks for cut selection can make a significant impact in reducing
branch-and-cut tree sizes, compared to previous data-driven approaches.
|
{"url":"https://ismp2024.gerad.ca/schedule/FB/9","timestamp":"2024-11-03T17:22:46Z","content_type":"text/html","content_length":"19186","record_id":"<urn:uuid:3e38b261-e730-4d42-976b-b20a63f02bdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00189.warc.gz"}
|
Coriolis Effect Calculator | Mastering the Coriolis - CalculatorsHub
Home » Simplify your calculations with ease. » Physics Calculators »
Coriolis Effect Calculator | Mastering the Coriolis
There’s a powerful force perpetually influencing everything in motion on Earth’s surface – the Coriolis Effect. This natural phenomenon is particularly significant in weather prediction and
geophysics, but comprehending it may seem daunting. Enter the Coriolis Effect Calculator, an intuitive tool making these calculations accessible to everyone.
What is the Coriolis Effect?
The Coriolis Effect is an apparent force experienced by an object moving in a rotating system, like Earth. This deflection, not perceptible in everyday life, becomes significant over large distances
and time frames. The Coriolis Effect Calculator is a digital tool that makes estimating this force easy and accurate.
How Does the Coriolis Effect Calculator Work?
The Coriolis Effect Calculator simplifies the complexity of calculating the force acting on an object due to Earth’s rotation. By inputting specific variables, such as mass, velocity, Earth’s angular
velocity, and vertical latitude, the calculator applies the Coriolis Effect formula, providing a precise estimation of the force.
The Coriolis Effect Formula
The Coriolis Effect is mathematically expressed as:
F = 2 * m * v * w * sin(a)
where:
• F is the force in newtons (N)
• m is the mass of the object (kg)
• v is the object’s velocity (m/s)
• w is Earth’s angular velocity (rad/s)
• a is the latitude (degrees)
By substituting these variables into the formula, the calculator estimates the Coriolis force.
Practical Example
Consider an object with a mass (m) of 6kg, moving with a velocity (v) of 66m/s at a latitude (a) of 67 degrees. The Earth’s angular velocity (w) is approximately 0.000072 rad/s. Substituting these
values into the formula, the Coriolis force (F) is estimated to be 0.05249 N.
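The example above can be reproduced in a few lines of Python (a sketch, using the article’s rounded value of Earth’s angular velocity):

```python
import math

def coriolis_force(mass, velocity, latitude_deg, omega=0.000072):
    """F = 2 * m * v * w * sin(a), with the latitude a given in degrees.

    omega defaults to the article's rounded Earth angular velocity in
    rad/s; a more precise value is 7.2921e-5 rad/s.
    """
    return 2 * mass * velocity * omega * math.sin(math.radians(latitude_deg))

print(round(coriolis_force(6, 66, 67), 5))  # 0.05249 (newtons)
```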
Applications of the Coriolis Effect Calculator
Meteorology and Oceanography
In meteorology and oceanography, the calculator aids in predicting weather patterns and ocean currents by estimating the Coriolis force’s influence.
In ballistics, it assists in correcting trajectories to account for the Coriolis Effect over long ranges.
Common FAQs About the Coriolis Effect Calculator
Why is the Coriolis Effect significant in calculations?
The Coriolis Effect has a considerable impact over long distances and time frames. This impact is critical in fields like meteorology, where it affects weather patterns, or ballistics, where it can
shift a projectile’s course.
Is the Coriolis Effect noticeable in everyday life?
While the Coriolis Effect is constantly at play, it’s not perceptible in everyday activities due to the relatively small distances and speeds involved.
How accurate is the Coriolis Effect Calculator?
The calculator provides a precise estimation of the Coriolis force when accurate variables are entered. However, it’s crucial to remember that many other factors can influence an object’s motion.
The Coriolis Effect Calculator provides an intuitive way to understand and calculate this intriguing natural phenomenon. Whether you’re delving into meteorology, ballistics, or simply expanding your
knowledge of Earth’s rotation, mastering the calculator is a step towards a more comprehensive understanding of the world’s mechanics.
Leave a Comment
|
{"url":"https://calculatorshub.net/physics-calculators/coriolis-effect-calculator/","timestamp":"2024-11-09T15:51:43Z","content_type":"text/html","content_length":"116115","record_id":"<urn:uuid:d986a270-c33a-49c8-b2f1-3c17647ac5f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00356.warc.gz"}
|
On geometrical properties of velocity derivatives in numerical turbulence
One of the most basic phenomena and distinctive features of three-dimensional turbulence is the predominant vortex stretching, which is manifested in positive net enstrophy generation σ ≡ ω_i ω_j s_ij, ⟨σ⟩ > 0. This process consists both of vortex stretching (σ > 0) and of vortex compressing (σ < 0) and cannot occur without its concomitants – vortex tilting and folding with large curvature of
vortex lines. The ultimate clarification of relations between curvature and dynamically relevant quantities such as enstrophy ω², strain s² ≡ s_ij s_ij, enstrophy generation σ and its rate α ≡ σ/ω²
can be obtained from looking at global properties. The hope is that some insights can be gained from local analysis. This is what is mostly done in this work on the basis of a DNS data set of the
Navier-Stokes equations without forcing, in a box with periodic boundary conditions and random Gaussian initial conditions. The results below correspond to the time moment right after the total
enstrophy has reached its maximum, at Re_λ ≈ 75 [3].
• Conditional Average
• Large Curvature
• Material Line
• Turbulent Flow Field
• Vortex Line
Dive into the research topics of 'On geometrical properties of velocity derivatives in numerical turbulence'. Together they form a unique fingerprint.
|
{"url":"https://cris.tau.ac.il/en/publications/on-geometrical-properties-of-velocity-derivatives-in-numerical-tu","timestamp":"2024-11-02T05:20:20Z","content_type":"text/html","content_length":"50288","record_id":"<urn:uuid:10c331d9-1763-4db8-9010-014a5c877b37>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00583.warc.gz"}
|
Project Jupyter is an open-source project intended to support interactive scientific computing across numerous languages.
Documentation: https://jupyter.readthedocs.io/en/latest/
Scared of commitment? To try Jupyter without installation, click here.
The Jupyter Notebook is an open-source web application which provides an interactive environment that smoothly integrates live code, images, equations, data visualization, interactive widgets, and narrative text.
Jupyter is not exclusive to Python - it supports over 40 languages, including R, Java, and MATLAB. IPython is the default kernel of Jupyter Notebook but other kernels can be installed to enable the
use of other languages within Jupyter.
Check out the list of available kernels here.
Jupyter Notebooks can include inline and displayed mathematical equations written in LaTeX.
LaTeX is used to typeset mathematical notation in scientific documents.
Example of use
In a Markdown cell,
The Nernst Equation is as follows
$$E_x = \frac{RT}{zF}ln{\frac{[X]_o}{[X]_i}}$$
where $R$ is the gas constant, $T$ the temperature, $z$ the valence of the ion, $F$ the Faraday
constant, and $[X]_o$ and $[X]_i$ the concentrations of the ion outside and inside the cell.
where $$ and $ indicate the start and end of a displayed and inline equation respectively.
This produces
The Nernst Equation is as follows
\[E_x = \frac{RT}{zF}ln{\frac{[X]_o}{[X]_i}}\]
where \(R\) is the gas constant, \(T\) the temperature, \(z\) the valence of the ion, \(F\) the Faraday constant, and \([X]_o\) and \([X]_i\) the concentrations of the ion outside and inside the cell.
ipywidgets are interactive HTML widgets that can be used to build GUIs within Jupyter Notebooks. Available widgets include buttons, sliders, textboxes, and checkboxes.
$ pip install ipywidgets
$ jupyter nbextension enable --py widgetsnbextension
To use:
import ipywidgets as widgets
nbviewer renders Jupyter Notebooks in a browser.
nbconvert converts Jupyter Notebooks (.ipynb files) to other formats, including HTML and PDF.
Installing Jupyter (pip install jupyter) also installs nbconvert. To use nbconvert from the command line, enter the following command in the directory in which the notebook is stored.
$ jupyter nbconvert --to format notebook.ipynb
Replace format with the desired format and notebook.ipynb with the notebook file.
Saving as different formats is also possible within Jupyter. To see the available formats:
□ In Jupyter Notebook,
click on File then hover over Download as.
□ In Jupyter Lab,
click on File then hover over Export Notebook As….
Jupyter Lab is the web-based user interface intended to replace Jupyter Notebook. It has all the classic features of its predecessor plus some cool new ones, most notably it offers a flexible and
unified workspace that can include a code console, terminal, text editor, and Notebook.
Syzygy is a service provided by the Pacific Institute for the Mathematical Sciences (PIMS), Compute Canada, and Cybera that launches Jupyter notebooks in a browser. It is accessed by logging in with
a CWL through https://ubc.syzygy.ca/.
Each user is allocated 1GB of space.
|
{"url":"https://ubcbraincircuitsonboarding.readthedocs.io/en/latest/jupyter.html","timestamp":"2024-11-09T11:07:23Z","content_type":"text/html","content_length":"19017","record_id":"<urn:uuid:c58c5aed-ddf9-48dd-8280-9e934681dbd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00850.warc.gz"}
|
OpenPGP Message Format
(1 octet, Boolean)
This is a flag in a User ID's self-signature that states whether this User ID is the main User ID for this key. It is reasonable for an implementation to resolve ambiguities in preferences, etc. by
referring to the primary User ID. If this flag is absent, its value is zero. If more than one User ID in a key is marked as primary, the implementation may resolve the ambiguity in any way it sees
fit, but it is RECOMMENDED that priority be given to the User ID with the most recent self-signature.
When appearing on a self-signature on a User ID packet, this subpacket applies only to User ID packets. When appearing on a self-signature on a User Attribute packet, this subpacket applies only to
User Attribute packets. That is to say, there are two different and independent "primaries" -- one for User IDs, and one for User Attributes.
|
{"url":"https://datatracker.ietf.org/doc/html/draft-koch-openpgp-2015-rfc4880bis","timestamp":"2024-11-13T08:48:02Z","content_type":"text/html","content_length":"532625","record_id":"<urn:uuid:fad5a403-b3cc-4337-9eae-f65dd56804a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00192.warc.gz"}
|
coherent space
Disambiguation note
diff, v5, current
Renamed and cross-linked with coherent topos.
diff, v4, current
Added a reference.
diff, v3, current
Pointed out a relation to Stone duality for Boolean algebras.
diff, v3, current
Reference to Johnstone’s Stone spaces.
diff, v2, current
Added some redirects.
diff, v2, current
Stone duality for coherent spaces and coherent locales.
diff, v2, current
On the other hand, the references currently offered at coherence space all say “coherent space” (e.g. here). In fact, Wikipedia here insists that “coherent space” is for the concept in linear logic,
and instead redirects the reader interested in topology to “spectral space”.
Not sure what this implies, but maybe the disambiguation deserves more commentary.
|
{"url":"https://nforum.ncatlab.org/discussion/9896/coherent-space/?Focus=79472","timestamp":"2024-11-08T05:35:19Z","content_type":"application/xhtml+xml","content_length":"48371","record_id":"<urn:uuid:fb0ceca7-552f-4bb2-bbe7-37159fe2484c>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00298.warc.gz"}
|
Adjunction space - Wikipedia Republished // WIKI 2
In mathematics, an adjunction space (or attaching space) is a common construction in topology where one topological space is attached or "glued" onto another. Specifically, let X and Y be topological
spaces, and let A be a subspace of Y. Let f : A → X be a continuous map (called the attaching map). One forms the adjunction space X ∪[f] Y (sometimes also written as X +[f] Y) by taking the
disjoint union of X and Y and identifying a with f(a) for all a in A. Formally,
${\displaystyle X\cup _{f}Y=(X\sqcup Y)/\sim }$
where the equivalence relation ~ is generated by a ~ f(a) for all a in A, and the quotient is given the quotient topology. As a set, X ∪[f] Y consists of the disjoint union of X and (Y − A). The
topology, however, is specified by the quotient construction.
Intuitively, one may think of Y as being glued onto X via the map f.
• A common example of an adjunction space is given when Y is a closed n-ball (or cell) and A is the boundary of the ball, the (n−1)-sphere. Inductively attaching cells along their spherical
boundaries to this space results in an example of a CW complex.
• Adjunction spaces are also used to define connected sums of manifolds. Here, one first removes open balls from X and Y before attaching the boundaries of the removed balls along an attaching map.
• If A is a space with one point then the adjunction is the wedge sum of X and Y.
• If X is a space with one point then the adjunction is the quotient Y/A.
The continuous maps h : X ∪[f] Y → Z are in 1-1 correspondence with the pairs of continuous maps h[X] : X → Z and h[Y] : Y → Z that satisfy h[X](f(a))=h[Y](a) for all a in A.
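In symbols, this correspondence can be sketched as follows (a hypothetical formalization of the statement above, not taken from the article):

```latex
% Universal property of the adjunction space X \cup_f Y:
% maps out of the pushout correspond to compatible pairs of maps.
h \colon X \cup_f Y \to Z
\quad\longleftrightarrow\quad
\bigl(h_X \colon X \to Z,\; h_Y \colon Y \to Z\bigr)
\quad\text{with}\quad
h_X(f(a)) = h_Y(a)\ \text{for all } a \in A.
```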
In the case where A is a closed subspace of Y one can show that the map X → X ∪[f] Y is a closed embedding and (Y − A) → X ∪[f] Y is an open embedding.
Categorical description
The attaching construction is an example of a pushout in the category of topological spaces. That is to say, the adjunction space is universal with respect to the following commutative diagram:
Here i is the inclusion map and Φ[X], Φ[Y] are the maps obtained by composing the quotient map with the canonical injections into the disjoint union of X and Y. One can form a more general pushout by
replacing i with an arbitrary continuous map g—the construction is similar. Conversely, if f is also an inclusion the attaching construction is to simply glue X and Y together along their common
subspace.
• Stephen Willard, General Topology, (1970) Addison-Wesley Publishing Company, Reading, Massachusetts. (Provides a very brief introduction.)
• Ronald Brown, "Topology and Groupoids" (pdf available), (2006) available from amazon sites. Discusses the homotopy type of adjunction spaces, and uses adjunction spaces as an introduction to
(finite) cell complexes.
• J.H.C. Whitehead, "Note on a theorem due to Borsuk", Bull. AMS 54 (1948), 1125-1132, is the earliest outside reference I know of using the term "adjunction space".
This page was last edited on 17 April 2024, at 01:53
|
{"url":"https://wiki2.org/en/Adjunction_space","timestamp":"2024-11-05T19:08:15Z","content_type":"application/xhtml+xml","content_length":"44148","record_id":"<urn:uuid:b424814b-b04c-4afa-8050-ab31d7c7034d>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00457.warc.gz"}
|
Nuclear Force - Cosmos
The Standard Model of particle physics has been well measured from almost every angle. Here’s a scorecard of the results. The currently accepted and experimentally well-tested theory of
electromagnetic and weak interactions is called the Standard Model. The Standard Model is based on relativistic quantum gauge field theory. When physicists in the 1920s tried to […]
Relativistic quantum gauge field theory- Gauge bosons Read More »
Why String theory?
Once special relativity was on a firm observational and theoretical footing, it was appreciated that the Schrödinger equation of quantum mechanics was not Lorentz invariant, therefore quantum
mechanics as it was so successfully developed in the 1920s was not a reliable description of nature when the system contained particles that would move at or near the speed of light.
{"url":"https://cosmos.theinsightanalysis.com/tag/nuclear-force/","timestamp":"2024-11-14T04:53:49Z","content_type":"text/html","content_length":"129743","record_id":"<urn:uuid:43f13170-9239-4b6c-9142-4d299b2afa7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00889.warc.gz"}
|
If a randomly chosen seed germinates, what is the probability ... | Filo
Question asked by Filo student
If a randomly chosen seed germinates, what is the probability it is of type A? 2. Let be the feasible region of a linear programming problem and let be the objective function. When has an optimal
value (max. or min.), where the variables and are subject to constraints described by linear inequalities, this optimal value occurs at a corner point (vertex) of the feasible region. Based on the
above information, answer the following questions: (A) What is an objective function of an LPP? 1 (B) In solving an LPP "minimize subject to constraints , ", which among is a redundant constraint? Edu Cart
Mathematics Class
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
15 mins
Uploaded on: 3/10/2023
Updated Mar 10, 2023
Topic Coordinate Geometry
Subject Mathematics
Class Class 12
Answer Video solution: 1
Upvotes 144
Video 15 min
|
{"url":"https://askfilo.com/user-question-answers-mathematics/if-a-randomly-choosen-seed-germinates-what-is-the-34353636303238","timestamp":"2024-11-14T03:17:41Z","content_type":"text/html","content_length":"518298","record_id":"<urn:uuid:190e10de-47fa-48e7-aba6-eec83e7390d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00124.warc.gz"}
|
Scavenger Hunt
26-01-2023, 02:36 AM
Lawn Sprinkler
26-01-2023, 09:16 AM
"Toto, I've a feeling we're not in Kansas anymore."
26-01-2023, 09:10 PM
drop the microphone please
27-01-2023, 12:07 AM
Kiss Me Quick hat
27-01-2023, 12:37 AM
a relaxing get away please
27-01-2023, 02:02 AM
Long road
27-01-2023, 02:51 AM
duck pond
27-01-2023, 03:19 AM
Australian football
27-01-2023, 07:28 AM
Nice cup of coffee please
27-01-2023, 09:13 AM
Oscar the grouch
02-02-2023, 06:05 PM
a Pablo Picasso painting please
02-02-2023, 07:40 PM
Amblin Entertainment logo
02-02-2023, 09:22 PM
An apple cart please
02-02-2023, 09:28 PM
Charming cottage
02-02-2023, 11:07 PM
Cherry Blossoms
03-02-2023, 01:31 AM
03-02-2023, 03:48 AM
a bright idea
03-02-2023, 09:07 AM
17-02-2023, 06:52 PM
Hmmm... reminds me of Meg Ryan
or maybe:
a classic MGB please (once my dream car)
17-02-2023, 07:56 PM
American Pie
17-02-2023, 09:55 PM
where the next Olympic games will be held?
18-02-2023, 12:22 AM
Sammy Davis jnr
18-02-2023, 12:38 AM
The Rat Pack please
18-02-2023, 01:02 AM
18-02-2023, 02:01 AM
a Netflix show you enjoy, please.
18-02-2023, 06:12 PM
18-02-2023, 11:34 PM
Painting the house
19-02-2023, 12:14 AM
a dime
28-01-2024, 02:04 AM
Next a valentine please
|
{"url":"https://forumofgames.com/showthread.php?tid=118&pid=76322","timestamp":"2024-11-09T09:24:18Z","content_type":"application/xhtml+xml","content_length":"120769","record_id":"<urn:uuid:e6265b2c-7822-4d5b-8845-38f6f76ab9c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00439.warc.gz"}
|
Matrix Rigidity
Nanda Raghunathan points me to a
new paper
by Gatis Midrijanis giving a simple proof of the best known rigidity lower bounds for the Sylvester matrices.
The rigidity of a matrix M is a function R[M](r) equal to the minimum number of entries of M that you need to change in order to reduce the rank to r. Strong rigidity bounds would have applications
for circuit and communication complexity.
Let N=2^n. We define the N×N Sylvester matrix S by labeling the rows and columns by n-bit vectors and letting s[i,j]=(-1)^{i·j}, where i·j is the inner product of the bit vectors i and j.
Theorem: If r ≤ N/2 is a power of 2 then R[S](r) ≥ N^2/4r.
Proof: Divide S uniformly into (N/2r)^2 submatrices of size 2r×2r. One can easily verify that these submatrices each have full rank. So we need to change at least r elements of each submatrix to reduce its rank to r, which is a necessary condition for reducing the rank of S to r. QED
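The construction in the proof can be checked numerically. The sketch below (our own helper names, pure Python, not from the paper) builds the Sylvester matrix for n = 3 and verifies that every aligned 2r × 2r block has full rank for r = 2:

```python
# Build the N x N Sylvester matrix, N = 2^n, with s[i][j] = (-1)^(i.j),
# and check that every aligned 2r x 2r submatrix has full rank --
# the fact the counting argument relies on.
from fractions import Fraction

def sylvester(n):
    N = 1 << n
    # i.j is the inner product of the bit vectors, i.e. popcount(i & j) mod 2.
    return [[-1 if bin(i & j).count("1") % 2 else 1 for j in range(N)]
            for i in range(N)]

def rank(M):
    """Rank via exact Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(A), len(A[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

n, r = 3, 2                 # N = 8, r = 2, so blocks are 2r = 4 on a side
N, t = 1 << n, 2 * r
S = sylvester(n)
assert all(rank([row[j:j + t] for row in S[i:i + t]]) == t
           for i in range(0, N, t) for j in range(0, N, t))
print("all", (N // t) ** 2, "blocks have full rank")
```

Each aligned block is ± a smaller Sylvester matrix, which is why the full-rank check succeeds.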
This proof works for any matrix whose submatrices have full rank. Consider the N×N matrix B where b[i,j]=1 if i ≡ j (mod 2r) and 0 otherwise. By the same proof R[B](r)=N^2/4r even though the rank of
B is only 2r.
The moral of this story: We conjecture that the Sylvester matrices have very high rigidity but we still lack the tools that make full use of the structure of these matrices.
4 comments:
1. Why was this very simple proof missed before? Is it because people focused on so-called generalized Hadamard matrices and were thus determined to use spectral techniques? This isn't the proof
Nisan used for "totally non-singular" matrices?
2. I believe this is a well-known proof for showing that the Hadamard matrix has rigidity function n^2/r, and it is due (in unpublished form) independently to Grigoriev and Nisan (over ten years ago).
There are more sophisticated counting techniques based on the same principle (i.e. the non-singularity of sub-matrices). See, for instance,
3. It's a very good point to write up simple results (because you never know if it's as simple for the others or not), but of course this simple proof hasn't been missed before. In fact, it's so
trivial that no one ever thinks about explicitly mentioning it in a publication! I guess every sensible person who has just tried to (rather seriously) think about matrix rigidity comes up with a
similar result after a short time. So this is more or less a "warm-up exercise," but of course, as I said, is still worth putting on arXiv. Unfortunately, the proof doesn't give much to us, as it
doesn't deeply go down into the specific combinatorial structure of the matrix. Moreover, the result of Shokrollahi et al. (1997) shows a stronger lower bound of $\Omega(n^2/r log(n/r))$ for the
same problem based on a similar principle (minors with high ranks), still far from implying nontrivial complexity results. Their proof is already simple enough.
Finally, one can as easily show a lower bound $\Omega(n^2/r)$ for the rigidity of any matrix whose $t \times t$ submatrices have rank $\Omega(t)$ "on average." (apologies for using TeX macros)
Precisely, let $A+B=C$, where $rk(C) <= r$, and the "expected rank" of a random $t \times t$ minor be at least $t/c$, for some constant $c$. Now, let $t=2rc$ and A0 be a $t \times t$ minor of A.
Let B0 and C0 be the corresponding minors in B and C (minors chosen with the same pattern). We have:
$2r = t/c <= E[rk(A0)] <= E[rk(C0)]+E[rk(B0)] <= E[rk(C)]+E[rk(B0)] <= r+E[rk(B0)] <= r+E[wt(B0)] <= r+(t/n)^2 wt(B)$
Hence, $r <= (4c^2r^2/n^2) Rig_A(r)$ and we're done.
One example of such matrices is the generalized (complex) Hadamard matrix. So we've just proved the best known lower bound for rigidity of the general case of Hadamard matrices (modulo showing
that the expected ranks are high which is a bit messy but straightforward spectral analysis for the case of Hadamard matrices). In fact, the proof of Kashin and Razborov (1998) goes along similar
In any case, it's very nice to see new works being done on rigid matrices. They hopefully draw researchers' attention to put more effort into this long-standing but nowadays less-cared-about problem.
4. ----------------------------------------
Why was this very simple proof missed before? Is it because people focused on so-called generalized Hadamard matrices and were thus determined to use spectral techniques?
Well, as I understand the applications of this problem: a lower bound for ANY EXPLICIT matrix is welcome from the point of view of circuit complexity. For communication complexity, any explicit (0,1)-matrix is interesting.
This isn't the proof Nisan used for "totally non-singular" matrices?
I have not read Nisan's proof (and how could I?), but I guess I didn't prove that the Sylvester matrix is totally non-singular.
In fact, it's so trivial that no one ever thinks about explicitly mentioning it in a publication!
Hmm, I didn't find this so-trivial fact even mentioned in many papers I went through, despite more trivial facts being mentioned. Not even in a low-level survey paper by B. Codenotti (Matrix Rigidity, Linear Algebra and its Applications, 304(1-3):181–192, 2000.)
I guess every sensible person who has just tried to (rather seriously) think about matrix rigidity comes up with a similar result after a short time.
I know at least one researcher in this field who didn't know this fact before.
The moral of this story: those who write serious publications and survey papers should sometimes mention trivial facts as well, for newcomers to the field. Anyway, I agree, this paper is not much more than a curiosity.
|
{"url":"https://blog.computationalcomplexity.org/2005/07/matrix-rigidity.html?m=0","timestamp":"2024-11-08T00:44:22Z","content_type":"application/xhtml+xml","content_length":"184222","record_id":"<urn:uuid:c08c3a4b-7346-4c8a-bf4b-f455a4970ef6>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00340.warc.gz"}
|
Flexible polyhedra / Etudes // Mathematical Etudes
The first mathematical examples of flexible polyhedra (necessarily non-convex), together with a classification of such objects, were constructed by the Belgian engineer R. Bricard in 1897. Mathematical, because these polyhedra were not only non-convex but also self-intersecting: their faces passed through each other. From a mathematician's point of view such an object is still a polyhedron, even though it cannot be realized in our three-dimensional space. In 1975 the American mathematician R. Connelly found a way to get rid of the self-intersections, and the first genuinely flexible polyhedra appeared. The simplest one known today, consisting of 9 vertices, 17 edges and 14 faces, will now be constructed. It was invented in 1978 by the German mathematician Claus Steffen.
|
{"url":"https://en.etudes.ru/etudes/flexible-polyhedra/9/","timestamp":"2024-11-09T19:32:10Z","content_type":"application/xhtml+xml","content_length":"50014","record_id":"<urn:uuid:2abc4e22-f874-4fa2-9e7b-5c8cb5366571>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00121.warc.gz"}
|
Salt Pond Hydrogeologic Investigation – APPENDIX 2. A Note About Modeling
APPENDIX 2. A Note About Modeling
Why Do We Need Models?
1. Models are useful in understanding of water flow and chemical transport processes and for improving on theories.
2. They are useful for the design of field data collection schemes, emphasizing critically needed information.
3. They are used in a prediction mode to run numerical experiments (scenarios). They are more convenient and cheaper than real experiments. For example, I can add as many wells as I like with a button click and analyze the consequences. Models can also be used to study future conditions before they occur.
Conceptual Model: A representation of a physical groundwater system, in this case the Pond and surrounding area. The conceptual model reflects our understanding of the system and can change based on
available information and the objective of the study. The conceptual model can also change from one modeler to another. It includes various controlling processes, e.g., water flow and storage and
chemical fate and transport.
Mathematical Model: Equations describing the controlling processes. For example, the equation describing water flux between two points is estimated based on the difference in water levels at the two
points. Realistic equations are complex and change based on conceptualization as described above.
Numerical Models: Software used to solve the controlling equations based on the defined conceptual model. It is used first to calibrate the model and then to run numerical experiments (scenarios),
such as what would be the response of the water level in the Pond to sea level rise.
The Modeling Process
1. Identify specific modeling objectives, e.g., solve for water levels, salinity, contamination level, etc.
2. Identify input variables and model needs
• Modeled area and conditions at the boundaries, such as ocean level or mountain discharge into the area
• Area divided in small cells where the solution would be estimated at the center of each cell.
• Aquifer data, such as permeability or hydraulic conductivity, which reflects the ease (or difficulty) of water movement (called parameters). Some of these may be available (directly measured or inferred from available information); however, some might not be, or may need adjustment through calibration (see below).
• Data for model calibration (see below), such as sensor water level data and salinities.
3. Run the model to solve for the variables identified in the objectives.
Model Calibration (Parameter estimation mode): Intended to gain confidence in the model. It is done by using available information (such as sensor readings) to estimate or improve on parameter
estimates. The model is run many times to identify the parameter values that give the best match between measured and estimated data.
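A drastically simplified sketch of this parameter-estimation loop, with an invented one-parameter forward model and fabricated observations (nothing here comes from the actual Pond model):

```python
# Toy calibration: find the hydraulic conductivity K whose simulated water
# levels best match "observed" ones. The linear forward model and the sensor
# readings below are fabricated purely for illustration.

def forward_model(K, distances):
    # Hypothetical head that declines with distance, scaled by 1/K.
    return [10.0 - d / K for d in distances]

distances = [100.0, 200.0, 300.0]   # m, invented
observed = [9.0, 8.0, 7.0]          # m, invented sensor readings

def misfit(K):
    # Sum of squared differences between measured and estimated levels.
    return sum((s - o) ** 2
               for s, o in zip(forward_model(K, distances), observed))

# "Run the model many times" over candidate parameter values.
best_K = min((k / 10 for k in range(1, 2001)), key=misfit)
print(best_K)   # 100.0 reproduces the observations exactly
```

Real calibration uses the same structure, just with a far more expensive forward model and a smarter search than a brute-force sweep.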
Model Use (Prediction Mode): Apply the model to address the modeling objectives (understanding, data collection, and scenario assessment).
Hydraulic Conductivity versus Permeability: Flow is estimated by utilizing Darcy’s law (e.g., Fetter, 2018)^[30], which is defined by:
Q = K i A
in which Q is groundwater discharge through an area A under a water-table slope i. K is called hydraulic conductivity and has units of length per unit time (such as meters per day). It reflects the ability of the aquifer to transmit water.
Conductivity is related to permeability k by:
K = k ρ g / μ
in which ρ and μ are the density and viscosity of water (or any liquid or gas) and g is the acceleration of gravity. K is thus a function of both the aquifer material and the type of liquid, while k (units of length squared, e.g., square meters) is a function of the aquifer material only. As an example, k can be the same throughout a coastal aquifer, while K can change with salinity, which affects water density.
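As a quick numeric illustration of the two relations above (all values invented for illustration; they are not from this study):

```python
# Illustrative numbers only (not from this appendix): convert permeability k
# to hydraulic conductivity K, then apply Darcy's law Q = K * i * A.
rho = 1000.0    # water density, kg/m^3
mu = 1.0e-3     # water viscosity, Pa*s
g = 9.81        # gravitational acceleration, m/s^2

k = 1.0e-11     # permeability, m^2 (a typical sand-like value)
K = k * rho * g / mu    # hydraulic conductivity, m/s

i = 0.001       # water-table slope, dimensionless
A = 50.0        # flow cross-section, m^2
Q = K * i * A   # groundwater discharge, m^3/s

print(K)   # about 9.8e-05 m/s
print(Q)   # about 4.9e-06 m^3/s
```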
|
{"url":"https://seagrant.soest.hawaii.edu/coastal-and-climate-science-and-resilience/ccs-projects/kauai-sp-study-appendix-2/","timestamp":"2024-11-06T05:56:45Z","content_type":"text/html","content_length":"93547","record_id":"<urn:uuid:a9d2022d-3524-4fb5-b291-580c13c2e44a>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00104.warc.gz"}
|
Data Science Design Pattern #5: Combining Source Variables
Variable selection is perhaps the most challenging activity in the data science lifecycle. The phrase is something of a misnomer, unless we recognize that mathematically speaking we’re selecting
variables from the set of all possible variables—not just the raw source variables currently available from a given data source.[i] Among these possible variables are many combinations of source
variables. When a combination of source variables turns out to be an important variable in its own right, we sometimes say that the source variables interact, or that one variable mediates another.
We’ll coin the phrase synthetic variable to mean an independent variable that is a function of several source variables, regardless of the nature of the function.
There are several reasons to synthesize variables.
Known or plausible relationship. Often a synthetic variable has a natural interpretation. The product or ratio of two variables may express naturally the idea that one variable scales up or down the
other’s effect. For example, body-mass index varies with one’s waist-to-height ratio. We discover such relationships through interviews, literature review, data-source review (including third-party
datasets), and data visualization.[ii]
Dimensionality reduction. Especially when dealing with big data, we’d like to minimize the number of independent variables in a model, so we use a dimensionality reduction technique that combines
many source variables into a few more powerful synthetic variables.[iii] Dimensionality reduction is common in the bag of words[iv] approach to text mining.
Relationship discovery. In some cases we don’t know beforehand which functional forms best fit the data. We want to cast a wide net, to maximize goodness of fit. For small sets of source variables in
polynomial models of at most third order[v] (which may contain synthetic variables that are cross-product terms of the form x[i]^m x[j]^n), we can use standard goodness-of-fit tests in an exhaustive
search of the possible model forms, and test the importance of cross-product terms within the best fitting models. When we have many source variables (perhaps after enriching our data with
third-party datasets), or when we want to explore a broader set of possible functional forms, we may use exhaustive search, heuristic search, stochastic search, or MapReduce to synthesize variables
while constructing and evaluating models.
This post underscores some important features of the variable-selection process generally (including variable synthesis):
1. A model’s power derives from all of its input variables together. One measures the incremental contribution (importance) of a single variable to the model, given that certain other variables are
already in the model. One does not generally attempt to measure a variable’s importance in the abstract.[vi]
2. A variable’s contribution also depends on the type of model. The same variable may contribute more or less in an ordinary linear regression than in a piecewise-linear regression, for example.
3. There is a great deal of interplay between variable selection and model selection. While there are model-independent measures of variable importance, the safest course is to measure variable
contribution within the context of a given model type.
4. In business data science, one must ultimately measure variable importance in economic terms. Seeing the optimization problem implicit in a business data science problem is critical to get the
right measure of variable importance. Variable A may have less predictive power than variable B by some measure, but still have more influence on the business’ optimization criterion.
Here is the general pattern one employs in variable selection:
1. Discover (or synthesize) a variable to test.
2. Add the variable to the current collection of variables.
3. Test the collection’s modeling power.
4. If you’ve reached a stopping condition (the power is high enough, or you’re out of resources), stop.
5. Otherwise, return to step 1.
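The loop above can be sketched as greedy forward selection. This is a generic illustration, not any specific library's API: `score` stands in for whatever measure of modeling power the project uses, and the toy weights are invented:

```python
# Greedy forward selection sketch of steps 1-5. `score` stands in for any
# measure of a variable collection's modeling power.

def forward_select(candidates, score, min_gain=1e-6, budget=10):
    selected, best = [], score([])
    for _ in range(budget):                    # stop: out of resources
        gains = {v: score(selected + [v]) - best
                 for v in candidates if v not in selected}
        if not gains:
            break
        v, g = max(gains.items(), key=lambda kv: kv[1])
        if g < min_gain:                       # stop: no further power gained
            break
        selected.append(v)                     # add the variable, then repeat
        best += g
    return selected

# Toy scoring: 'a' and 'b' add power, 'c' adds none, so selection stops at two.
weights = {"a": 0.5, "b": 0.4, "c": 0.0}
chosen = forward_select(["a", "b", "c"],
                        lambda vs: sum(weights[v] for v in vs))
print(chosen)   # ['a', 'b']
```

Note that this greedy loop measures each candidate's incremental contribution given the variables already selected, which is exactly point 1 above.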
This procedure is often embedded in a similar procedure that tests different types of models, measuring goodness of fit or degree of optimality for each tested model. When both procedures are complex
enough to involve automated search, the searches are often combined in a single algorithm.[vii]
There are many best practices that help you perform each variable-selection step intelligently, so you complete it with a minimum of effort and time. The remainder of this post is devoted to
illustrating some of these practices, emphasizing variable synthesis throughout.
How it Works
Bare knuckles discovery of variable interactions. Clever algorithms and unlimited computing power have not yet eliminated the advantages of human judgment in variable selection. For one thing, even
automated variable-selection techniques are only useful when the right variables are in the source data to start with. Several approaches remain important:
1. Interviews with subject-matter experts are an important source of conjectures about interactions and mediating variables. It can be useful to familiarize oneself with and apply the
knowledge-extraction techniques used by artificial intelligence pioneers.[viii] For example, it’s often necessary to give an expert specific cases to work through, to elicit their knowledge of
e.g. which variables matter at all, and which interact.
2. Many of the business problems one encounters in data science have been the subject of years of management science research. A cursory literature review is often enough for one to surmise quickly
which synthetic variables researchers have identified. For example, if you want to model human longevity, you could survey over half a century of scientific research.[ix]
3. Sometimes during a dataset review, merely seeing several variables occur in the same dataset will suggest that the variables may interact. For example, while surveying the metadata in the Centers
for Disease Control and Prevention’s Health Indicators Database[x], you might see high cholesterol and tobacco in the same list, and recall that both contribute to hypertension individually, so
that they may interact to contribute much more together than either alone.[xi]
4. A variety of data visualization techniques such as three-dimensional scatterplots and geospatial maps can reveal or suggest important synthetic variables. For example, Figure 1 below is an NOAA
geospatial graphic of all hurricanes passing within 65 nautical miles of Cape Hatteras, N.C. since 1990.[xii] The graphic immediately suggests that most hurricanes in the North Atlantic Basin
turn right, and that they only turn to a certain heading (about northeast) before straightening out. This observation suggests the approach to variable synthesis described in #5 below.
Figure 1: Hurricanes Approaching Cape Hatteras, N.C. Since 1990
5. Sometimes one’s intuition leans towards a given class of models, and that class often features a specific type of synthetic variable. For example, one may have a discrete-time process {X[i]}, and
one may wish to predict X[i]. In such cases it can be useful to synthesize S[n] = ∑[i=1,n]X[i], and then to compute p(X[n+1]|S[n], X[n]). Even when the original process {X[i]} lacks the (one step)
Markov property, the new process {(S[i], X[i])} may well have it; and p(X[n+1]|S[n], X[n]) may predict X[n+1] much more accurately than p(X[n+1]|X[n]). In the hurricane example above, we start with a
sequence {X[i]} of tracks of the form X[i] = (lat[i], long[i], dir[i]). We can enrich the 3-tuple with two synthetic variables: the change in bearing δ[i] = dir[i] – dir[i-1], and then S[i] = ∑[k=
1,i]δ[k]. We can see in the hurricane tracks in #4 above that p(dir[i+1]|dir[i], δ[i]) should let us predict hurricane bearing much better than p(dir[i+1]|dir[i]). One can similarly construct a count
of the number of consecutive tracks at which a hurricane (or an airplane!) has turned in the same direction, to help predict whether it will continue to turn in that direction at the next track.
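The bearing-change variable δ[i] and its running sum from #5 can be synthesized in a few lines. The track data below are invented, and the `wrap` helper is our own addition to keep angle differences in [-180, 180):

```python
# Synthesize the bearing change delta_i = dir_i - dir_{i-1} and its running
# sum S_i from a toy sequence of hurricane headings in degrees.

bearings = [350, 355, 5, 20, 40, 45]

def wrap(a):
    """Map an angle difference into [-180, 180)."""
    return (a + 180) % 360 - 180

deltas = [wrap(b - a) for a, b in zip(bearings, bearings[1:])]

cum_turn, s = [], 0
for d in deltas:
    s += d
    cum_turn.append(s)

print(deltas)     # [5, 10, 15, 20, 5] -- every step turns right
print(cum_turn)   # [5, 15, 30, 50, 55] -- cumulative right turn so far
```

Wrapping matters: the step from 355° to 5° is a 10° right turn, not a −350° one, and the cumulative sum only behaves as intended when each δ is wrapped first.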
Computational discovery of variable interactions. Searching for a good combination of source and synthetic variables (starting with a finite set of source variables and a finite number of ways to
synthetize variables) is a combinatorial optimization problem.[xiv] There are several families of computational techniques for synthesizing variables during combinatorial search:
1. The increasingly common accrual of high-dimensional datasets (those having many variables) creates a variety of analytical challenges. For example, whereas traditional datasets have far more
records than variables, for some high-dimensional (wide) datasets the opposite is true. Techniques such as spike-and-slab variable selection have been developed that can select important source
variables from such datasets gracefully.[xv] Another way to manage high-dimensional data is to synthesize from it a small number of synthetic variables that capture most of the variation in the
source data. This is dimensionality reduction.[xvi] Principal components analysis (PCA) is an important, frequently used dimensionality-reduction technique. It produces a set of uncorrelated
variables called principal components. Each succeeding component accounts for the largest possible amount of remaining variation in the original data. As a result, the first few components often
account for most of the source data’s variability, letting you use a model having far fewer variables than the originals. Singular value decomposition and factor analysis are related methods.
2. When the set of possible synthetic variables is sufficiently small, or one’s computing resources are sufficiently large, one can evaluate all possible combinations of source and synthetic
variables. Combinatorial explosion severely limits the utility of naïve exhaustive search, especially when dealing with high-dimensional data. Tree algorithms such as branch and bound[xvii] can
exhaust the possibilities far more efficiently than brute-force exhaustive search.
3. A heuristic search algorithm explores a subspace of the search space by following simple rules of thumb that (one hopes) lead to a near-optimal result. Greedy algorithms (which are stepwise
optimal, rather than globally optimal) are an example. One greedy approach to variable selection is to measure individual source variable importance in isolation, and then include in the model
all variables having sufficiently high individual importance.[xviii] One can apply the same technique to evaluating possible synthetic variables, if the computation does not take too long. This
approach assumes that combinations of source variables having high individual importance capture all important interactions, which is a strong assumption that one should make with care. An
alternative heuristic is constructive search, where one pieces together a complete solution iteratively.
4. A stochastic (local) search algorithm explores the space of possible combinations by generating random variables that determine how the algorithm explores the solution space. Simulated annealing
and evolutionary algorithms are two important classes.[xix] In simulated annealing the algorithm starts with an arbitrary solution. At each iteration the algorithm transitions from the current
solution to a neighboring solution with a probability that varies with three quantities: the quality of the current solution, the quality of the neighboring solution, and time. The probability
decreases with time, all other things being equal, so that the algorithm initially explores many possible solutions and eventually settles on a good one. Evolutionary programming mimics various
features of natural selection, including fitness, mutation, crossover, and inheritance. The algorithm maintains a population of candidate solutions at each iteration. The population evolves when
the algorithm applies inheritance, mutation, and crossover stochastically, with more fit candidates selected randomly to breed the next generation of solutions. Frequently solutions have binary
representations, to simplify the mechanics of the recombination operations. Other encodings are possible, and finding a suitable encoding is perhaps the most important part of building an
evolutionary algorithm.[xx]
5. MapReduce is embarrassing parallelism[xxi] implemented on a computing cluster or grid. Typically a large dataset is divided across several servers. The master node “maps” the same analytical
operation to each server, which performs the operation on the portion of the data stored there. Then, the master node “reduces” the output of all of the other servers’ analytical operations to a
single overall solution. Apache Hadoop[xxii] implements MapReduce. One can use MapReduce to make most of the above automated techniques execute more quickly. For example, stochastic search has
now been implemented in the MapReduce framework.[xxiii] Thus MapReduce is not an analytical technique in its own right, but a framework for distributing certain kinds of analytical loads.
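Returning to technique #1: a minimal, library-free sketch of PCA extracts the first principal component of a small 2-D dataset by power iteration on its covariance matrix (the data points are invented for illustration):

```python
# PCA sketch: first principal component of centered 2-D data via power
# iteration on the 2x2 covariance matrix.
import math

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]

# Center each variable.
mx = sum(x for x, _ in data) / len(data)
my = sum(y for _, y in data) / len(data)
centered = [(x - mx, y - my) for x, y in data]

# Sample covariance matrix entries.
n = len(data) - 1
cxx = sum(x * x for x, _ in centered) / n
cyy = sum(y * y for _, y in centered) / n
cxy = sum(x * y for x, y in centered) / n

# Power iteration: repeated multiplication converges to the top eigenvector.
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(w[0], w[1])
    v = (w[0] / norm, w[1] / norm)

# Each record's projection onto v is its value of the new synthetic variable.
scores = [x * v[0] + y * v[1] for x, y in centered]
print(v)   # roughly (0.70, 0.72)
```

Keeping only `scores` replaces two correlated source variables with one synthetic variable capturing most of the variation; production implementations use an SVD routine rather than hand-rolled power iteration.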
When to Use It
One synthesizes variables either to reduce dimensionality or to increase model power. Dimensionality reduction is appropriate when there is substantial correlation among some of the source variables.
Otherwise, synthetic variables are useful when two or three variables significantly interact in ways a model cannot capture merely by including the variables separately.
A well-known early example of automated heuristic search that discovered a model containing e.g. cross-product terms is the series of Bacon algorithms developed by Pat Langley in a production-rule
environment based on LISP.[xxiv] The Bacon algorithms used 16 heuristics, and rediscovered Ohm’s law, Newton’s Law (of gravitation), one of Kepler’s Laws (of planetary motion), Boyle’s Ideal Gas Law,
Snell’s Law (in optics), and Black’s Law (of specific heat). All of these laws contain synthetic variables.
Read the rest of our blog series here!
[i] Thus the careful data scientist routinely considers how they might profitably enrich the datasets available to them within their employer’s organization with third-party datasets. Our list of
third-party data sources is a great place to start your search for third-party data.
[ii] A classic early treatment of visualization is John W. Tukey, Exploratory Data Analysis (Pearson, 1977). See also William S. Cleveland,Visualizing Data (AT&T, 1993) and Winston Chang, R Graphics
Cookbook (O’Reilly, 2013). NIST’s online handbook of engineering statistics has a chapter on exploratory data analysis that includes visualization techniques: http://www.itl.nist.gov/div898/handbook/
eda/eda.htm(visited March 19, 2014).
[iii] The rise of big data has motivated a great deal of recent research about dimensionality reduction. Here is a sample: John A. Lee and Michel Verleysen, Nonlinear Dimensionality Reduction
(Springer, 2007); Michael Kirby, Geometric Data Analysis: An Empirical Approach to Dimensionality Reduction and the Study of Patterns (Wiley, 2001); Oliver Kramer, Dimensionality Reduction with
Unsupervised Nearest Neighbors (Springer, 2013); Haiping Lu, Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data (Chapman and Hall/CRC, 2013); B. Sarojini and N. Ramaraj,
Dimensionality Reduction Techniques in Medical Informatics: An Empirical Study on Feature Ranking Approaches (Lambert, 2012); Liang Sun, Shuiwang Ji, and Jieping Ye, Multi-Label Dimensionality
Reduction (Chapman and Hall/CRC, 2014); Jianzhong Wang, Geometric Structure of High-Dimensional Data and Dimensionality Reduction (Springer, 2012).
[iv] We’ll return to bag of words text mining in a separate post.
[v] Polynomial models of higher than third order are rarely used. John Neter, William Wasserman, and Michael H. Kutner, Applied Linear Statistical Models, 3^rd Ed. (Irwin 1990), p. 318.
[vi] There are model-independent measures of relative variable importance that one can apply to collections of variables, notably Akaike’s Information Criterion and its analogs. See Gerda Claeskens
and Nils Lid Hjort, Model Selection and Model Averaging (Cambridge University Press, 2010). We will review variable-importance measures in a separate post.
[vii] A separate post will treat assessing classification and prediction model fit, and optimization model performance.
[viii] See for example Robert R. Hoffman, “The Problem of Extracting the Knowledge of Experts from the Perspective of Experimental Psychology,” AI Magazine Vol. 8 No. 2 (AAAI, 1987), pp. 53-67,
available at http://aaaipress.org/ojs/index.php/aimagazine/article/viewFile/583/519 (visited March 19, 2014); and Judith Reitman and Henry H. Rueter, “Extracting Expertise from Experts: Methods for
Knowledge Acquisition.” Expert Systems 4(3) (1987),
pp. 152-168.
[ix] Such as Jacob Yerushalmy, “Factors in Human Longevity,” American Journal of Public Health Nations Health Vol. 53 No. 2 (Feb. 1962), pp. 148-162, available at http://www.ncbi.nlm.nih.gov/pmc/
articles/PMC1253894/ (visited March 20, 2014).
[x] http://www.healthindicators.gov/Indicators/ (visited March 20, 2014).
[xi] Naturally the research literature could lead you to the same conclusion. J. Hata et/al, “Combined Effects of Smoking and Hypercholesterolemia on the Risk of Stroke and Coronary Heart Disease in
Japanese: the Hisayama Study,” Cerebrovascular Diseases Vol. 31 No. 5 (2011), pp. 477-484, available at http://www.ncbi.nlm.nih.gov/pubmed/21358199 (visited March 20, 2014).
[xii] http://www.noaanews.noaa.gov/stories2010/20100930_hurricanetrack.html (visited March 20, 2014).
[xiii] When the count merely determines whether something is turning at all, one has implemented a variant of the Wald-Wolfowitz runs test. http://en.wikipedia.org/wiki/
Wald%E2%80%93Wolfowitz_runs_test (visited March 20, 2014).
[xiv] http://www.cs.ubc.ca/labs/beta/Courses/CPSC532D-05/Slides/ch1-slides-2p.pdf (visited March 20, 2014).
[xv] Hemant Ishwaran and J. Sunii Rao, “Spike and Slab Variable Selection: Frequentist and Bayesian Strategies,” The Annals of Statistics Vo. 33 No. 2(2005), pp. 730-773, available at http://
arxiv.org/pdf/math/0505633.pdf. See also http://journal.r-project.org/archive/2010-2/RJournal_2010-2_Ishwaran~et~al.pdf and http://cran.r-project.org/web/packages/spikeslab/spikeslab.pdf (visited
March 20, 2014). We’ll return to variable-selection algorithms in a future post.
[xvi] The academic community has published many references on the general subject. Here are a few: http://www.math.uwaterloo.ca/~aghodsib/courses/f06stat890/readings/tutorial_stat890.pdf, http://www.cs.binghamton.edu/~lyu/SDM07/DR-SDM07.pdf (visited March 2, 2014).
[xvii] http://en.wikipedia.org/wiki/Branch_and_bound, http://www.stanford.edu/class/ee364b/lectures/bb_slides.pdf (visited March 2, 2014).
[xviii] This is essentially what Google did in its flu model. Jeremy Ginsberg et/al, “Detecting Influenza Epidemics Using Search Engine Query Data,” Nature Vol. 457 (Feb. 2009), available at http://
static.googleusercontent.com/media/research.google.com/en/us/archive/papers/detecting-influenza-epidemics.pdf (visited March 17, 2014).
[xix] See http://en.wikipedia.org/wiki/Stochastic_optimization#Randomized_search_methods for a longer list of randomized-search strategies (visited March 20, 2014). See also James C. Spall,
Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control (Wiley, 2005).
[xx] http://www.cse.unsw.edu.au/~billw/cs9414/notes/ml/05ga/05ga.html (visited March 20, 2014).
[xxi] http://en.wikipedia.org/wiki/Embarrassingly_parallel (visited March 20, 2014).
[xxii] http://hadoop.apache.org (visited March 20, 2014).
[xxiii] Xin Du et al., “High Performance Parallel Evolutionary Algorithm Model Based on MapReduce Framework,” International Journal of Computer Applications in Technology Vol. 46 No. 3 (March 2013), pp. 290-295, available at http://dl.acm.org/citation.cfm?id=2463891; Pedro Fazenda et al., “A Library to Run Evolutionary Algorithms in the Cloud Using MapReduce,” Applications of Evolutionary Computation, Lecture Notes in Computer Science Vol. 7248 (Springer 2012), pp. 416-425, available at http://link.springer.com/export-citation/chapter/10.1007/978-3-642-29178-4_42 (visited March 20, 2014).
[xxiv] Pat Langley et al., “The Bacon Programs,” Scientific Discovery: Computational Explorations of the Creative Processes (MIT Press, 1987), pp. 65-194. Computational model discovery remains an area
of active research. See Saso Dzeroski and Ljupco Todorovski, eds., Computational Discovery of Scientific Knowledge: Introduction, Techniques, and Applications in Environmental and Life Sciences
(Springer, 2007).
|
{"url":"https://mosaicdatascience.com/2014/03/26/data-science-design-pattern-5-variable-selection/","timestamp":"2024-11-05T20:00:05Z","content_type":"text/html","content_length":"155692","record_id":"<urn:uuid:0b4e2b20-5992-4893-b4bb-afaaeff00ef7>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00531.warc.gz"}
|
Various Graphs Used in Statistical Analysis
Looking at a bunch of numbers and trying to make sense of them can be really confusing and overwhelming. That’s why people who work with data a lot often use graphs to help summarize and present the
information in a clear, visual way.
Graphs are super important tools for anyone who needs to understand and share insights from data. People like statisticians, scientists, economists, teachers, and even political analysts use graphs
all the time in their work and to help explain their ideas to others.
But here’s the good news: you don’t need to be an expert to understand a graph. In fact, graphs are designed to make it easy for anyone to quickly grasp the main points and trends in a set of data.
In this review, we’ll show you how to analyze different types of graphs and use them to come up with meaningful interpretations and conclusions. By the end, you’ll be a pro at reading and
understanding graphs!
What are Graphs and Why are They Useful?
Graphs are like pictures that help us understand numbers and information more easily. They make it simpler to see patterns and trends in data without having to look at a bunch of confusing numbers.
Imagine a big high school with 850 students in different grades. If the school wants to know how many students are in each grade, they could look at every student’s enrollment form, but that would
take a long time.
Instead, they can use a graph to show the information in a way that’s quick and easy to understand. They might make a bar graph, which uses tall or short bars to represent the number of students in
each grade.
By looking at the bar graph, the school can quickly see which grade has the most or least students, without having to count each enrollment form. Graphs make it much faster and easier to understand
important information and make decisions based on the data.
Various Graph Types Used in Statistical Analysis
1. Bar Graphs
Bar graphs are used to show the quantity or amount of different categories. They can be either vertical or horizontal.
Vertical Bar Graphs:
• The horizontal axis shows the categories
• The vertical axis shows the quantity or amount for each category
Horizontal Bar Graphs:
• The horizontal axis shows the quantity or amount for each category
• The vertical axis shows the categories
Bar graphs are best for comparing quantities across different categories. For example, you can easily see which month had the highest or lowest sales.
Multiple Bar Graphs
Multiple bar graphs show more than one piece of information for each category. They include a legend to explain what each bar represents.
For example, a multiple bar graph could show the average pretest and posttest scores for different classes. Each class would have two bars: one for the pretest score and one for the posttest score.
Example Problem
A bar graph shows the number of people from different professions at a science conference:
• Physicists: 120
• Chemists: 90
• Biologists: 80
• Geologists: 60
To find the difference between the number of physicists and geologists, subtract:
120 physicists – 60 geologists = 60 more physicists than geologists
So bar graphs make it easy to visually compare quantities and amounts across categories. The bars clearly show which categories have higher or lower values.
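If you ever want to answer these questions from the raw data instead of reading them off the graph, a few lines of code will do it. This is just an illustrative Python sketch using the counts from the example above:

```python
# Conference attendance from the bar graph example above.
attendance = {
    "Physicists": 120,
    "Chemists": 90,
    "Biologists": 80,
    "Geologists": 60,
}

# The questions a bar graph answers visually can also be
# answered directly from the data.
largest = max(attendance, key=attendance.get)    # tallest bar
smallest = min(attendance, key=attendance.get)   # shortest bar
difference = attendance["Physicists"] - attendance["Geologists"]

print(largest, smallest, difference)  # Physicists Geologists 60
```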
2. Line Graph
A line graph is a type of graph that uses straight lines to show how data changes over time. It has two axes: the horizontal axis (usually representing time) and the vertical axis (representing the
quantity being measured).
When to Use a Line Graph?
Line graphs are best used to show trends or patterns in data over a period of time. They help us understand how values change from one point to another.
Multiple Line Graphs
Multiple line graphs have more than one line plotted on the same graph. They are used to compare how different variables change over time. Each line represents a different variable and is identified
by a legend.
Example Problem
Given a multiple-line graph showing the annual net profit of three companies from 2017 to 2020, answer these questions:
1. Which company had the highest net profit in 2017?
2. Estimate the difference in annual net profit between Companies A and C in 2018.
1. Company C had the highest net profit in 2017, as its line is plotted higher than the others for that year.
2. Based on the graph, we can estimate that in 2018, Company A’s net profit was around ₱590,000, and Company C’s was about ₱500,000. The estimated difference is 590,000 – 500,000 = ₱90,000.
3. Pie Chart
A pie chart is a circular graph that is divided into slices. The entire chart represents a complete set of data, while each slice represents a portion of that data. The size of each slice corresponds
to its numerical value.
Here’s an example of a pie chart that displays the annual budget of a small city.
It shows how the budget is distributed among different expenses such as construction, health, education, research and development, and environmental preservation. The chart also indicates the
percentage of each expense relative to the total budget. For example, 38.46% of the total budget is allocated for construction.
When is a Pie Chart Used?
A pie chart is used to illustrate the “part-to-whole” relationship in a given set of data. It helps us understand the composition of the entire data by dividing it into different categories. In the
example of the annual budget of a small city, we use a pie chart to show the distribution of the budget across different areas like construction, health, education, etc.
How to Interpret a Pie Chart?
Let’s consider the annual budget of Lemongate City, which is presented as a pie chart.
Suppose the total budget is ₱130,000,000. We want to determine the allocated budget for construction based on the given pie chart.
According to the pie chart, 38.46% is allocated for construction. To find out the amount allocated for this category, we multiply the total budget (₱130,000,000) by the percentage allocation for construction (38.46%):
₱130,000,000 x 0.3846 = ₱49,998,000
Therefore, out of the ₱130,000,000 annual budget of Lemongate City, ₱49,998,000 is allocated for construction.
Sample Problem: Favorite Movie Genres
Now, let’s consider a different example. Below is a pie chart representing the favorite movie genres of 200 students from Lemongate City High School. We will answer the following questions based on
the chart:
• How many students have romance as their favorite movie genre?
• How many students have science fiction as their favorite movie genre?
• How many students have a favorite movie genre that is not action?
According to the pie chart, 10% of the 200 students have romance as their favorite movie genre. Therefore, 0.10 x 200 = 20 students have romance as their favorite genre.
According to the pie chart, 25% of the 200 students have science fiction as their favorite movie genre. Therefore, 0.25 x 200 = 50 students have science fiction as their favorite genre.
According to the pie chart, 25% of the 200 students have action as their favorite movie genre. This means that the remaining 75% of students do not have action as their favorite genre. Therefore,
0.75 x 200 = 150 students have a favorite movie genre that is not action.
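All of the pie chart calculations above follow the same pattern: multiply the total by the percentage written as a decimal. Here is a small illustrative Python sketch using the figures from the two examples in this section (the function name is ours, not standard):

```python
def share(total, percent):
    """Return the part of the whole that corresponds to `percent`."""
    return total * percent / 100

# Lemongate City budget: 38.46% of ₱130,000,000 goes to construction.
construction = share(130_000_000, 38.46)   # about ₱49,998,000

# Favorite movie genres of 200 surveyed students.
romance = share(200, 10)           # 20 students
sci_fi = share(200, 25)            # 50 students
not_action = share(200, 100 - 25)  # 150 students
```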
4. Pictograph
A pictograph is a type of graph that uses pictures or symbols to represent data. The word “picto” means picture, so a pictograph is literally a “picture graph”.
For example, let’s say a clothing store wants to show how many T-shirts they sold each day of the week. They could use a pictograph like this:
Monday: 🎽🎽🎽🎽🎽
Tuesday: 🎽🎽🎽
Wednesday: 🎽🎽🎽🎽
Thursday: 🎽🎽
Friday: 🎽🎽🎽🎽🎽
In this pictograph, each T-shirt symbol 🎽 represents a certain number of shirts sold, like 10 shirts. A key or legend explains what each symbol means.
Why Use Pictographs?
Pictographs are used to make data more visually appealing and easier to understand quickly. The pictures help tell the story of the data in a simple way.
Pictographs work best for fairly simple data. More complex data is better shown with other types of graphs.
How to Make a Pictograph?
To create a pictograph:
1. Collect the data you want to show
2. Choose a simple picture or symbol to represent the data
3. Decide how many data points each symbol will represent
4. Draw the pictograph, using the right number of symbols for each data point
5. Include a key that explains what each symbol represents
6. Give the pictograph a title and label the data
That’s the basic idea of pictographs – using pictures to create a simple visual representation of data!
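The six construction steps can even be automated. A small illustrative Python sketch using the T-shirt example above, where each 🎽 stands for 10 shirts:

```python
# Step 1: the data to show (T-shirts sold per day).
sales = {"Monday": 50, "Tuesday": 30, "Wednesday": 40,
         "Thursday": 20, "Friday": 50}

# Steps 2-3: choose a symbol and decide how much it represents.
SYMBOL = "\U0001F3BD"  # the 🎽 running-shirt symbol
PER_SYMBOL = 10        # key: each symbol represents 10 shirts

def pictograph_row(count):
    """Step 4: draw the right number of symbols for one data point."""
    return SYMBOL * (count // PER_SYMBOL)

# Steps 5-6: print a title with the key, then label each row.
print("T-Shirt Sales This Week (each", SYMBOL, "= 10 shirts)")
for day, count in sales.items():
    print(f"{day}: {pictograph_row(count)}")
```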
|
{"url":"https://www.sisigexpress.com/various-graphs-used-in-statistical-analysis/","timestamp":"2024-11-05T07:16:42Z","content_type":"text/html","content_length":"175126","record_id":"<urn:uuid:a9611d4e-afb4-478f-8120-c6d9bf8d24f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00623.warc.gz"}
|
Jeff Plaisance
I've been thinking about load balancing lately and how to dynamically grow a cluster and its data set. I'd like to define "balance factor" as follows:
If the number of nodes in the cluster grows by a factor of x, at some future point when the data set has also grown by a factor of x the balance factor is the ratio of the smallest node to the
largest node.
In my use case it is impractical for any node to know the size of all the nodes. Load balancing decisions must be made probabilistically on limited data. Today I happened across this blog post, which is very applicable to my scenario. It presents a very good solution for load balancing between a fixed number of bins, but when adding nodes without taking the system offline it is useful to look at more than 2 random points to maximize balance factor.
I did some experiments on different values for x and n (where n is the number of random points examined), and experimentally determined an approximate equation for balance factor b:
b = 1-(1/x)^(n-1)
(by approximate I mean that it is close enough for the range of values I care about, which is x between 1.1 and 10 and n between 2 and 25. It might be exactly right but I don't have time to do a
proof before band practice.)
If you actually want to use this though, you need to determine n given x and the desired b. With some manipulation we get:
n = 1 + log(1-b)/log(1/x)
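Both formulas are easy to sanity-check numerically. A quick sketch (the function names are mine):

```python
import math

def balance_factor(x, n):
    """Approximate balance factor b for growth factor x when each
    placement examines n random candidate nodes: b = 1 - (1/x)^(n-1)."""
    return 1 - (1 / x) ** (n - 1)

def points_needed(x, b):
    """Inverse of the above: n = 1 + log(1-b)/log(1/x)."""
    return 1 + math.log(1 - b) / math.log(1 / x)

# Round-trip check: for x = 2 and a target balance factor of 0.9
# we need n ≈ 4.32, i.e. examine 5 random points per placement.
n = points_needed(2.0, 0.9)
assert abs(balance_factor(2.0, n) - 0.9) < 1e-12
```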
Let me know if you found this useful.
|
{"url":"http://www.jeffplaisance.com/2010/06/","timestamp":"2024-11-02T02:29:27Z","content_type":"text/html","content_length":"35814","record_id":"<urn:uuid:3b119caf-c21d-4679-93c5-0e68eac9ba67>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00824.warc.gz"}
|
Module – Nuclear and Particle Physics
FHEQ Level: Level 6 (Third Year)
Credits: 20
Module Code: tbc
Course Reference Number (CRN): 60831
Delivery: September Start, Trimester 1 (Short Fat)
Syllabus Outline
• Nuclear Masses and Binding Energy
• Models of Nuclear Structure
• Radioactivity and Nuclear Decay
• Nuclear Fission
• Nuclear Fusion
• Particle Accelerators and Detectors
• The Standard Model
• Leptons, Quarks and Hadrons
• Fermions and Bosons
• Exchange Particles and Fundamental Interactions
• Relativity and Invariance
• The Klein Gordon Equation
• Conservation Laws and Symmetry
• Reactions and Decay
Coursework: Assignment, 30%
Written: Examination, 3 hours, 70%
More detailed information may be found in the Assessments section.
Nuclear and Particle Physics, B Martin (2019) Wiley
Modern Particle Physics, M Thomson (2013) CUP
Further updates and supplementary texts may be found in the University Reading Lists system.
You will learn about Nuclear Physics and gain an understanding of nuclear stability in terms of the Liquid Drop Model and of nuclear reactions involving neutrons, protons, electrons and neutrinos,
and major experimental techniques and practical applications. The particle physics element will cover the theoretical basis of modern particle physics alongside the experimental evidence. The module
is taught by a combination of lectures and problem-solving tutorials.
1. To develop a knowledge and critical understanding in the areas of Nuclear and Particle Physics, including the origin and limitations of the associated laws.
2. To develop a knowledge and critical understanding of mathematical techniques associated with Nuclear and Particle Physics.
3. To develop analytical, numerical and computer-based problem-solving skills in the areas of Nuclear and Particle Physics.
Knowledge & Understanding
On successful completion of this module, you will be able to:
1. Demonstrate a critical understanding of the laws and their origins in the areas of Nuclear and Particle Physics.
2. Demonstrate competence in the specification of problems using the laws of physics applied to Nuclear and Particle Physics and their analytical and numerical solution.
3. Demonstrate communication through written material.
Learning, Teaching and Assessment
The module is taught through a combination of lectures and tutorial classes.
Interactive tutorial classes will prepare students for assessments through a series of problem-solving exercises with associated formative feedback.
Assignment – An extended problem-solving exercise requiring a description and justification of methodology used together with the use of analytical and computational means to provide final solutions
and a critical evaluation of the solution obtained.
Exam – A series of questions demonstrating an understanding of the topic together with application to straightforward problems that can be solved using analytical means.
|
{"url":"https://salfordphysics.com/index.php/module-nuclear-and-particle-physics/","timestamp":"2024-11-04T11:55:47Z","content_type":"text/html","content_length":"33265","record_id":"<urn:uuid:b01a5603-1aa1-402d-a709-8871645e2663>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00806.warc.gz"}
|
Calculus 4 - Multivariable
New! DMAT 431 - Computational Abstract Algebra with MATHEMATICA!
Asynchronous + Flexible Enrollment = Work At Your Own Best Successful Pace = Start Now!
Earn Letter of Recommendation • Customized • Optional Recommendation Letter Interview
Mathematica/LiveMath Computer Algebra Experience • STEM/Graduate School Computer Skill Building
NO MULTIPLE CHOICE • All Human Teaching, Grading, Interaction • No AI Nonsense
1 Year To Finish Your Course • Reasonable FastTrack Options • Multimodal Letter Grade Assessment
Calculus 4 - Multivariable Calculus - Vector Calculus Course Information
The Calculus 4 course can best be described as "the extension of the first-semester course in Differential and Integral Calculus to functions of many variables".
This course goes by many equivalent names:
• Calculus 3
• Calculus 4
• Calculus III
• Calculus IV
• Vector Calculus
• Calculus of Many Variables
• Calculus of Several Variables
At Distance Calculus, we call our "Calculus 4" course Multivariable Calculus - DMAT 355 - 4 credits.
Below are some links for further information about the Calculus 4 course via Distance Calculus @ Roger Williams University.
Distance Calculus - Student Reviews
Date Posted: Jan 12, 2020
Review by: Mark Neiberg
Courses Completed: Calculus I, Calculus II, Multivariable Calculus
Review: Curriculum was high quality and allowed student to experiment with concepts which resulted in an enjoyable experience. Assignment Feedback was timely and meaningful.
Date Posted: Jan 13, 2020
Review by: Daniel Marasco
Courses Completed: Multivariable Calculus
Review: This course was more affordable than many, and the flexible format was terrific for me, as I am inclined to work very diligently on tasks on my own. It could be dangerous for a person who
requires external discipline more, but it works well for self-starters, allowing you to prioritize when you have other pressing work. I was a full time teacher adding a math certification, and this
course allowed me to master the math while working around my teaching schedule and fitting work into moments here and there when I had time. I was able to transfer the credits to Montana State
University, Bozeman for my teaching internship program without a hitch. The instructors were all very helpful and patient, even when I failed to see a ridiculously simple solution on one problem
after 20 emails back and forth. Overall, I was more pleased with my experience in this class than I was with any of my other 9 courses.
Transferred Credits to: Montana State University, Bozeman
Date Posted: Apr 13, 2020
Review by: Jorgen M.
Courses Completed: Calculus I
Review: I really enjoyed this course, much more than I thought I would. I needed to finish this course very fast before starting my graduate degree program @ Kellogg. I was able to finish in 3 weeks.
I liked the video lectures and the homework process. I highly recommend this course.
Transferred Credits to: Kellogg School of Business, Northwestern Univ
|
{"url":"https://www.distancecalculus.com/calculus-4/","timestamp":"2024-11-03T17:12:17Z","content_type":"text/html","content_length":"41862","record_id":"<urn:uuid:f4f35158-909c-4717-9d62-82c492d95fbd>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00232.warc.gz"}
|
GWTC-2.1: Deep extended catalog of compact binary coalescences observed by LIGO and Virgo during the first half of the third observing run
The second Gravitational-Wave Transient Catalog, GWTC-2, reported on 39 compact binary coalescences observed by the Advanced LIGO and Advanced Virgo detectors between 1 April 2019 15∶00 UTC and 1
October 2019 15∶00 UTC. Here, we present GWTC-2.1, which reports on a deeper list of candidate events observed over the same period. We analyze the final version of the strain data over this period
with improved calibration and better subtraction of excess noise, which has been publicly released. We employ three matched-filter search pipelines for candidate identification, and estimate the
probability of astrophysical origin for each candidate event. While GWTC-2 used a false alarm rate threshold of 2 per year, GWTC-2.1 includes 1201 candidates that pass a false alarm rate
threshold of 2 per day. We calculate the source properties of a subset of 44 high-significance candidates that have a probability of astrophysical origin greater than 0.5. Of these candidates, 36
have been reported in GWTC-2. We also calculate updated source properties for all binary black hole events previously reported in GWTC-1. If the eight additional high-significance candidates
presented here are astrophysical, the mass range of events that are unambiguously identified as binary black holes (both objects ≥3M⊙) is increased compared to GWTC-2, with total masses from ∼14M⊙
for GW190924_021846 to ∼182M⊙ for GW190426_190642. Source properties calculated using our default prior suggest that the primary components of two new candidate events (GW190403_051519 and
GW190426_190642) fall in the mass gap predicted by pair-instability supernova theory. We also expand the population of binaries with significantly asymmetric mass ratios reported in GWTC-2 by an
additional two events (the mass ratio is less than 0.65 and 0.44 at 90% probability for GW190403_051519 and GW190917_114630 respectively), and find that two of the eight new events have effective
inspiral spins χeff>0 (at 90% credibility), while no binary is consistent with χeff<0 at the same significance. We provide updated estimates for rates of binary black hole and binary neutron star
coalescence in the local Universe.
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
|
{"url":"https://cris.iucc.ac.il/en/publications/gwtc-21-deep-extended-catalog-of-compact-binary-coalescences-obse-2","timestamp":"2024-11-01T22:31:17Z","content_type":"text/html","content_length":"44382","record_id":"<urn:uuid:c5a641fb-7f51-41cc-bd6d-efe85b25d03f>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00151.warc.gz"}
|
289 RWTH Publication No: 47089 2008   IGPM289.pdf
TITLE Instance Optimal Decoding by Thresholding in Compressed Sensing
AUTHORS Albert Cohen, Wolfgang Dahmen, Ronald DeVore
ABSTRACT Compressed Sensing seeks to capture a discrete signal x ∈ ℝ^N with a small number n of linear measurements. The information captured about x from such measurements is given by the vector y = Φx ∈ ℝ^n, where Φ is an n × N matrix. The best matrices, from the viewpoint of capturing sparse or compressible signals, are generated by random processes, e.g. their entries are given by i.i.d. Bernoulli or Gaussian random variables. The information y holds about x is extracted by a decoder Δ mapping ℝ^n into ℝ^N. Typical decoders are based on ℓ1-minimization and greedy pursuit. The present paper studies the performance of decoders based on thresholding. For quite general random families of matrices Φ, decoders Δ are constructed which are instance-optimal in probability, by which we mean the following. If x is any vector in ℝ^N, then with high probability applying Δ to y = Φx gives a vector x̄ := Δ(y) such that ‖x − x̄‖ ≤ C₀ σ_k(x)_{ℓ2} for all k ≤ an/log N, provided a is sufficiently small (depending on the probability of failure). Here σ_k(x)_{ℓ2} is the error that results when x is approximated by the k-sparse vector which equals x in its k largest coordinates and is otherwise zero. It is also shown that results of this type continue to hold even if the measurement vector y is corrupted by additive noise: y = Φx + e, where e is some noise vector. In this case σ_k(x)_{ℓ2} is replaced by σ_k(x)_{ℓ2} + ‖e‖_{ℓ2}.
KEYWORDS Compressed sensing, best k-term approximation, instance optimal decoders, thresholding, noisy measurements, random matrices.
PUBLICATION 8th International Conference on Harmonic Analysis and Partial Differential Equations, June 16 - 20, 2008, El Escorial, Madrid, Spain / Patricio Cifuentes ... ed. Contemporary mathematics 505, 28 S. (2010)
|
{"url":"https://www.igpm.rwth-aachen.de/forschung/preprints/289","timestamp":"2024-11-05T09:57:32Z","content_type":"text/html","content_length":"31070","record_id":"<urn:uuid:7e2ca116-1600-47a1-a13b-9123f37d62dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00683.warc.gz"}
|
Mathematical optimization (convex and non-convex) is a large and mature field of research, as is policy optimization in reinforcement learning. A fusion of ideas from the two fields gives rise to
improvements upon naive gradient descent as used in policy gradient algorithms like REINFORCE. The improvements considered in the present text boil down to optimizing a regularized or constrained
objective function rather than just the discounted expected sum of rewards. While trust-region policy optimization (TRPO) essentially uses natural gradients and restricts the step size to enforce
proximity constraints on policy updates, the significantly simpler proximal policy optimization (PPO) introduces constraints that modify the original objective in a more RL-specific fashion. The
actual practical algorithms are motivated by rigorous analysis but involve several approximations and simplifications. Experimental results of TRPO and PPO display much more stability and reliability
than their non-regularized counterparts and thereby show that many desired properties are not lost in the approximations. PPO is a state-of-the-art algorithm which is implemented in most major RL
frameworks, like rllib, stable-baselines and others.
1 Introduction to Trust Region methods
Before we dive into the main results of the papers, let us briefly describe trust region methods and other general methods for numerical optimization.1 1 For a self-contained introduction, we
recommend [Noc06N]. As we will see, several main results and improvements in the TRPO and PPO papers are very natural and follow standard approaches from the point of view of optimization.
In this section we will be concerned with minimizing a twice differentiable function $f :\mathbb{R}^N \rightarrow \mathbb{R}$. First notice that several optimization algorithms can be understood as
the successive application of the following steps:
Algorithm 1: Approximation Based Optimization
Select some $x_{0}$ as starting point. At each iteration $k$:
1. Find a local approximation to $f$ close to the current point $x_{k}$ that can be minimized (over the entire domain of definition $\mathcal{D}_{x}$) with less effort than $f$ itself. Such
approximations are often called models, and we will denote them by $m_{k}$. Find a minimum of the model.2 2 Note how here we allow for $m_{k}$ without unique minima, as is the case in
applications in RL. However, in many applications $m_{k}$ will be convex and $\operatorname{argmin}m_{k}$ will be a single point. For gradient based optimization methods this distinction is
usually of little practical importance - one uses the point where gradient descent has converged.
\[ x_{\min} \approx x^{\ast} \in \operatorname{argmin}_{x \in \mathcal{D}_{x}} (m_{k} (x)) . \]
2. Set $x_{k + 1} \leftarrow x_{\min}$ and repeat.
For example, the following quadratic approximation to $f$:
\begin{eqnarray*} m^{\operatorname{grad}}_{k} (x) & := & f (x_{k}) + (x - x_{k})^{\top} \cdot \nabla f (x_{k}) \\ & & \quad + \frac{1}{2 t_{k}} | x - x_{k} |^2 \end{eqnarray*}
yields the update rule of gradient descent with step size $t_{k}$:
\[ x_{k + 1} = x_{k} - t_{k} \nabla f (x_{k}) . \]
For small enough $t_{k}$ and $f$ with Lipschitz continuous gradients, the above model actually forms a majorant of $f$ (many proofs of convergence of gradient descent are based on this). However, the
above approximation is clearly very crude and does not take into account the curvature of $f$. Constructing the model with the second order expansion of $f$ results in Newton’s Method. With the model
\begin{eqnarray*} m^{\operatorname{Nwt}}_{k} (x) & := & f (x_{k}) + (x - x_{k})^{\top} \cdot \nabla f (x_{k}) \\ & & \quad + \frac{1}{2} (x - x_{k})^{\top} H_{k} (x - x_{k}), \label{newton-model}\tag{1} \end{eqnarray*}
where $H_{k}$ is the Hessian of $f$ at $x_{k}$, we obtain the update rule:
\[ x_{k + 1} = x_{k} - H^{- 1}_{k} \nabla f (x_{k}) . \]
The strategy outlined above clearly has a flaw: the models will almost always be poor approximations to $f$ far from $x_{k}$ - something that we have not taken into account. This situation can be
improved by restricting the parameter region where we trust our model prior to optimization - hence the name trust region.
Usually, we cannot judge a priori whether the model is sufficiently good within the selected region (if it is not, then the region was too large) or whether on the contrary the region can be extended
to allow for larger step sizes. A posteriori, however, we can compare the actual improvement in the objective, $\Delta f$, to the improvement in the model $\Delta m$ (the predicted improvement). If
the ratio $\Delta f / \Delta m$ is too small, we should reject the step and shrink the trust region. If the ratio is too large, we should increase it. To summarize, the improved algorithm works as
follows [Con00T]:
Algorithm 2: Trust Region Optimization
Select some $x_{0}$ as starting point. At each iteration $k$:
1. Choose an approximation of $f$ around $x_{k}$, call it $m_{k} :\mathbb{R}^N \rightarrow \mathbb{R}$.
2. Choose a trust region $U_{k}$ that contains $x_{k}$. Usually
\[ U_{k} := \lbrace x : | x - x_{k} |_{k} \leqslant \Delta _{k} \rbrace, \]
for some norm $| \cdot |_{k}$ and radius $\Delta _{k} > 0$.
3. Find an approximation of a minimum of $m_{k}$ within the trust region
\[ x_{\min} \approx x^{\ast} \in \operatorname{argmin}_{x \in U_{k} } m_{k} (x) . \]
4. Compute the ratio of actual-to-predicted improvement:
\[ \rho _{k} := \frac{f (x_{k}) - f (x_{\min})}{m_{k} (x_{k}) - m_{k} (x_{\min})} . \]
If $\rho _{k}$ is sufficiently large, accept $x_{\min}$ as the next point (i.e. $x_{k + 1} \leftarrow x_{\min}$); otherwise don’t move ($x_{k + 1} \leftarrow x_{k}$).
5. If $\rho _{k}$ is sufficiently large, increase the next trust region; if it is too small, then shrink it. With $U_{k}$ as above, this step usually amounts to scaling the radius, i.e. $\Delta _{k +
1} = \varepsilon _{k} \Delta _{k}$ with the scaling factor $\varepsilon _{k}$ depending on $\rho _{k}$.
6. Increment $k$ by $1$ and go to step 1.
Figure 1 illustrates this scheme with the contour lines of a quadratic approximation of the true objective shown at several steps.
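As a concrete illustration, here is a toy NumPy implementation of Algorithm 2 with a quadratic model and a spherical trust region. The acceptance thresholds and scaling factors (0.1, 0.25, 0.75, doubling and halving) are common illustrative choices rather than canonical values, the Hessian is assumed positive-definite, and the constrained subproblem in step 3 is solved crudely by clipping the Newton step to the trust radius:

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta=1.0, tol=1e-8, max_iter=100):
    """Toy trust-region optimizer following Algorithm 2 (spherical region)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        # Step 3: approximately minimize the quadratic model in the ball
        # by clipping the Newton step to the trust radius.
        step = -np.linalg.solve(H, g)
        if np.linalg.norm(step) > delta:
            step *= delta / np.linalg.norm(step)
        predicted = -(g @ step + 0.5 * step @ H @ step)  # model decrease
        actual = f(x) - f(x + step)                      # true decrease
        rho = actual / predicted if predicted > 0 else 0.0
        # Step 4: accept the step only if the model proved trustworthy.
        if rho > 0.1:
            x = x + step
        # Step 5: grow or shrink the trust radius depending on rho.
        if rho > 0.75:
            delta *= 2.0
        elif rho < 0.25:
            delta *= 0.5
    return x

# Example: minimize the convex quadratic f(x) = x.Ax/2 - b.x,
# whose exact minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = trust_region_minimize(
    f=lambda x: 0.5 * x @ A @ x - b @ x,
    grad=lambda x: A @ x - b,
    hess=lambda x: A,
    x0=np.array([5.0, -5.0]),
)
```

On this quadratic the model is exact, so every step is accepted and the radius keeps growing until a full Newton step lands on the minimizer.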
2 Update rules from trust region methods
Let us ignore steps 4 and 5 from the above algorithm for a moment and analyze what kind of update rules we obtain from different trust region constraints. With a linear model
$$m^{\operatorname{linear}}_{x_{k}} (x) = f (x_{k}) + (x - x_{k})^{\top} \cdot \nabla f (x_{k}), \label{linear-model}\tag{2}$$
and a trust region $U_{k} := \left\lbrace x : \frac{1}{2} | x - x_{k} |^2 \leqslant \delta^2 \right\rbrace$ we have the update rule
\[ x_{k + 1} = x_{k} - \delta \frac{\nabla f (x_{k})}{| \nabla f (x_{k}) |} . \]
This update is commonly known as normalized gradient descent.
Using the quadratic approximation (1) as model with the same constraint as above we obtain the rule:
\[ x_{k + 1} = x_{k} - (H_{k} + \lambda)^{- 1} \nabla f (x_{k}), \]
where $\lambda$ is the Lagrange multiplier of the constraint. This update rule is the essence of the widely used Levenberg-Marquardt algorithm [Mar63A].
A Euclidean sphere in the parameter space $\mathbb{R}^N$ typically does not reflect the geometry relevant for the optimization problem. Some problem-specific distance on $\mathbb{R}^N$ might do a
better job in specifying what it means for $x$ to be too far away from $x_{k}$. A particularly simple and useful choice is often a suitable symmetric and positive-definite matrix $F (x_{k})$, which
could be e.g. a metric tensor induced by some distance function. The trust region constraint then becomes
$$\frac{1}{2} (x - x_{k})^{\top} F (x_{k}) (x - x_{k}) \leqslant \delta^2, \label{custom-trust-region}\tag{3}$$
and the update rule from the linear model (2) takes the form
$$x_{k + 1} = x_{k} - D_{k} \nabla f (x_{k}), \label{custom-metric-update}\tag{4}$$
\[ D_{k} := \frac{\delta}{\sqrt{\nabla f^{\top} (x_{k}) F^{- 1} (x_{k}) \nabla f (x_{k})}} F^{- 1} (x_{k}) . \]
An interesting choice for $F$ is the Hessian $H_{k}$ of $f$ at $x_{k}$, which essentially means that distances between $x$ and $x_{k}$ are measured in units of the curvature of $f$ at $x_{k}$. Then
the update rule (4) corresponds to a variant of the damped Newton method. As an alternative to the full Hessian, one could use only the diagonal components of $H_{k}$ for setting up the constraint,
i.e. $F_{k} := \operatorname{diag} (H_{k})$ (a choice that is supported by heuristics). Then, using a trust region of the form (3) together with the second order model (1) results in Fletcher’s
modification of the Levenberg-Marquardt update rule [Fle71M]:
\[ x_{k + 1} = x_{k} - [H_{k} + \lambda \operatorname{diag} (H_{k})]^{- 1} \nabla f (x_{k}) . \]
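Numerically, Fletcher's damped step is a linear solve (sketch; the inputs below are assumed given, and the test values are illustrative):

```python
import numpy as np

def fletcher_lm_step(x, grad, H, lam):
    """Fletcher-damped Levenberg-Marquardt step:
    x_{k+1} = x_k - [H + lam * diag(H)]^{-1} grad."""
    damped = H + lam * np.diag(np.diag(H))   # damping scaled by the Hessian diagonal
    return x - np.linalg.solve(damped, grad)
```

Scaling the damping by $\operatorname{diag} (H_{k})$ makes the step invariant to rescaling individual coordinates, which plain Levenberg-Marquardt damping ($H + \lambda I$) is not.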
Using linear or quadratic approximations with different trust region constraints gives rise to many of the standard optimization algorithms that are commonly used in practice. As we will see below,
several improvements of the naive policy gradient algorithm in reinforcement learning are based on exactly the same ideas that we highlighted in this section.
3 Notation and the RL objective
In all that follows we will consider an infinite horizon Markov decision process with states $s_{t}$ and actions $a_{t}$. For notational simplicity, we assume that state and action spaces are
discrete.3 3 With some effort, the results stated can be transferred to continuous spaces. Here we are in the context of model-free reinforcement learning and we want to find a policy from a family
of functions $\lbrace \pi _{\theta} : \theta \in \mathbb{R}^n \rbrace$ differentiable wrt. their parameters. The goal is to find the $\theta$ maximizing the expected sum of discounted rewards
$$\eta _{\pi _{\theta}} =\mathbb{E}_{ \pi _{\theta}} \left[ \sum_{t = 0}^{\infty} \gamma^t r (s_{t}, a_{t}) \right] . \label{sum-rewards}\tag{5}$$
In what follows, we will often omit the subscript $\theta$ when it is clear which policy is meant, or if a statement applies to any policy $\pi$. We write $\pi _{k}$ for $\pi _{\theta _{k}}$.
We assume stochastic transition dynamics and, for notational simplicity, denote by $\tau _{t}$ the conditional distribution of $s_{t + 1} |a_{t}, s_{t}$.4 4 This is not to be confused with the usual
notation of $\tau$ for the distribution of trajectories. The value, Q and advantage functions are defined as
\begin{eqnarray*} V_{\pi} (s_{t}) & = & \mathbb{E}_{\pi} \left[ \sum^{\infty}_{k = 0} \gamma^k r (s_{t + k}) \right]\\ & = & r (s_{t}) + \gamma \mathbb{E}_{a_{t} \sim \pi (s_{t})} [\mathbb{E}_{\tau _{t}} [V_{\pi} (s_{t + 1})]], \end{eqnarray*}
\[ Q_{\pi} (s_{t}, a_{t}) = r (s_{t}) + \gamma \mathbb{E}_{\tau _{t}} [V_{\pi} (s_{t + 1})], \]
\[ A_{\pi} (s_{t}, a_{t}) = Q_{\pi} (s_{t}, a_{t}) - V_{\pi} (s_{t}) . \]
From these definitions directly follows:
$$\begin{array}{lll} V_{\pi} (s_{t}) & = & \mathbb{E}_{a_{t} \sim \pi (s_{t})} [Q_{\pi} (s_{t}, a_{t})],\\ 0 & = & \mathbb{E}_{a_{t} \sim \pi (s_{t})} [A_{\pi} (s_{t}, a_{t})], \end{array} \label{vanishing-advantage}\tag{6}$$
\begin{eqnarray*} A_{\pi} (s_{t}, a_{t}) & = & r (s_{t}) + \gamma \mathbb{E}_{\tau _{t}} [V_{\pi} (s_{t + 1})] \label{advantage-function}\tag{7}\\ & & \quad - V_{\pi} (s_{t}) . \end{eqnarray*}
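These identities are easy to check numerically on a small synthetic MDP. The following sketch (the MDP, rewards and policy are random, purely for illustration) solves for $V_{\pi}$ exactly and verifies both identities in (6):

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9

# random MDP: transition kernel P[s, a, s'], state rewards r(s), stochastic policy pi[s, a]
P = rng.random((nS, nA, nS)); P /= P.sum(axis=2, keepdims=True)
r = rng.random(nS)
pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)

# state-transition matrix under pi, then solve V = r + gamma * P_pi V exactly
P_pi = np.einsum('sa,sat->st', pi, P)
V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r)

Q = r[:, None] + gamma * P @ V   # Q(s, a) = r(s) + gamma * E_{s'}[V(s')]
A = Q - V[:, None]               # advantage A(s, a) = Q(s, a) - V(s)

# identities (6): V(s) = E_a[Q(s, a)] and E_a[A(s, a)] = 0
assert np.allclose((pi * Q).sum(axis=1), V)
assert np.allclose((pi * A).sum(axis=1), 0)
```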
3.1 Approximating the objective
As explained in Section 1, before we start the optimization at each step $k$ we need a local model around $\theta _{k}$ for the full objective (5).5 5 In the literature referenced, the notation $\theta _{\operatorname{old}}$ is used instead of $\theta _{k}$. We will use the following approximation to $\eta _{\pi}$:
$$m_{k} (\theta) := \eta _{\pi _{k}} + \sum_{s} \rho _{\pi _{k}} (s) \mathbb{E}_{a \sim \pi _{k} (s)} \left[ \frac{\pi _{\theta} (a|s)}{\pi _{k} (a|s)} A_{\pi _{k}} (a, s) \right], \label{surrogate-model}\tag{8}$$
where
$$\rho _{\pi} (s) := \sum_{t = 0}^{\infty} \gamma^t \mathbb{P}_{\pi} (s_{t} = s) \label{state-density}\tag{9}$$
is the (un-normalized) discounted state density, with $s_{0} \sim \rho _{0}$ for some initial state distribution $\rho _{0}$.6 6 If there is a discount factor γ<1, the state density is not a
probability distribution since $\sum_{s} \rho _{\pi} (s) = \frac{1}{1 - \gamma} .$ If no discount factor is used, $\rho _{\pi}$ usually denotes the stationary distribution of states after following $\pi$ for a long time, which is assumed to exist. I.e. $\rho _{\pi} (s) := \underset{T \rightarrow \infty}{\lim} \frac{1}{T} \sum^T_{t = 0} \mathbb{P}_{\pi} (s_{t} = s) .$ This is a proper probability distribution. In the RL literature one can often find expressions of the type $\mathbb{E}_{\rho _{\pi} (s)} [f (s)]$, which
should be interpreted as $\sum_{s} \rho _{\pi} (s) f (s)$, even when a discount factor is present and $\rho _{\pi}$ is not properly normalized. Due to the vanishing expectation of the advantage
function (6), we immediately see that
\[ m_{k} (\theta _{k}) = \eta _{\pi _{k}} . \]
Moreover,
\[ \nabla m_{k} (\theta _{k}) =\mathbb{E}_{\pi _{k}} [\nabla \log \pi _{\theta _{k}} (a|s) Q_{\pi _{k}} (a, s)], \]
which is a well-known expression for the policy gradient [Sut99P]. Therefore, (8) is a first order approximation to $\eta _{\pi _{\theta}}$ around $\theta _{k}$.
With a slight abuse of notation, in modern literature (including the PPO paper [Sch17P]) equation (8) is usually written as
\[ m_{k} (\theta) := \eta _{\pi _{k}} +\mathbb{E}_{\pi _{k}} \left[ \frac{\pi _{\theta} (a|s)}{\pi _{k} (a|s)} A_{\pi _{k}} (a, s) \right] . \]
4 Natural gradients, TRPO and PPO
With the very basic definitions that we have set up above, we can already make a lightning summary of the results of three important papers in RL. All one has to do is to combine ideas from trust
region optimization with the objective/model (8).
Since policies are probability distributions, a natural notion of distance for them is given by the KL-divergence.7 7 One reason for the “naturality” of KL-divergence for constraints is its
information-theoretic interpretation. Note that it also forms an upper bound of the total variation divergence $\delta$ through Pinsker’s inequality $\delta (P, Q) \leq \sqrt{\frac{1}{2} D_{\operatorname{KL}} (P \| Q)}$, where $P, Q$ are arbitrary probability distributions. However, arguably the main reason that makes it natural is that its second derivative (for distributions from a parametrized
function family), the Fisher information matrix that we use below to formulate trust-region constraints, gives a metric tensor on the corresponding space of probability distributions. This means that
distances between policies measured according to the Fisher information are parametrization invariant. This line of thought was used in the paper that originally introduced natural gradients for RL [Kak01N]. Even though it is not symmetric and therefore cannot really be understood as a proper distance, its second order approximation is the positive definite Fisher information matrix which can be
used as a metric tensor for constructing the constraint
\[ D_{\operatorname{KL}} (\pi _{k} (\cdot |s), \pi _{\theta} (\cdot |s)) \approx \frac{1}{2} (\theta - \theta _{k})^{\top} F (\theta _{k}, s) (\theta - \theta _{k}), \]
where
\[ F (\theta _{k}, s)_{i, j} = \frac{\partial^2}{\partial \theta _{i} \partial \theta _{j}} D_{\operatorname{KL}} (\pi _{k} (\cdot |s), \pi _{\theta} (\cdot |s)) \Big|_{\theta = \theta _{k}} . \]
Thus, a natural bound for $\pi _{\theta}$ not being too far from $\pi _{k}$ is to demand that on average the second order approximation of the KL-divergence of $\pi _{\theta}$ and $\pi _{k}$ is
smaller than some radius $\delta$, resulting in the trust region constraint
$$\frac{1}{2} (\theta - \theta _{k})^{\top} \mathbb{E}_{\pi _{k}} [F (\theta _{k}, s)] (\theta - \theta _{k}) \leqslant \delta . \label{trpo-constraint}\tag{10}$$
This is a constraint of the type (3) which hence results in the following instantiation of the update rule (4):8 8 Originally, the trust region approach based on (an approximation of) the
KL-divergence was motivated in a different way. One can prove that optimizing a certain surrogate objective with the KL-divergence as penalty results in guaranteed monotonic improvement in the
expected sum of rewards. See Section 5 for more details.
$$\theta = \theta _{k} + D_{k} \nabla m_{k} (\theta _{k}), \label{trpo-update-rule}\tag{11}$$
\[ D_{k} := \frac{\delta}{\sqrt{\nabla m^T_{k} (\theta _{k})
\bar{F}^{- 1} (\theta _{k}) \nabla m_{k} (\theta _{k})}} \bar{F}^{- 1}, \]
where $\bar{F}$ is the average Fisher information:
$$\bar{F} (\theta) := \mathbb{E}_{\pi _{\theta}} [F (\theta, s)] . \label{average-fisher}\tag{12}$$
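In code, this update is a rescaled solve against the averaged Fisher matrix. A sketch (inputs assumed given; the sign is chosen for ascent, since the model $m_{k}$ is maximized):

```python
import numpy as np

def natural_gradient_step(theta, grad_m, F_bar, delta):
    """Update rule (11): move along F̄^{-1} ∇m_k, scaled so the quadratic
    KL proxy stays within the trust region of radius delta."""
    nat = np.linalg.solve(F_bar, grad_m)               # F̄^{-1} ∇m_k
    return theta + (delta / np.sqrt(grad_m @ nat)) * nat
```

With $\bar{F} = I$ this recovers a normalized gradient step, which makes the connection to Section 2 explicit.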
4.1 Natural Gradient and TRPO
In the paper that introduced the natural gradient for RL [Kak01N], the above update rule was implemented with a fixed step size multiplying the update direction $\bar{F}^{- 1} (\theta _{k}) \cdot \nabla m_{k} (\theta _{k})$ instead of the adaptive step size coming from the constraint. This choice was already an improvement over the naive policy gradient. The only practical difference between
the method introduced in the trust region policy optimization paper [Sch15T] and natural gradient is that the constraint (10) is taken more strictly in TRPO (it will actually be fulfilled with each
update) and the step size is not fixed.
The reader might have noticed that in all considerations above we kept silently ignoring a crucial element of the general trust region optimization algorithm - namely the possibility to reject an
update and to adjust the trust region based on differences between predicted and actual improvement, steps 4 and 5 in algorithm (2). Indeed, the natural gradient completely foregoes these steps,
which is precisely why it performs worse than TRPO. However, what about TRPO itself - is there any rejection of points and region adjustment happening? In fact, nothing of the like is mentioned in
the paper’s main text. However, in Appendix C, where the implementation details are described, it is mentioned that a backtracking line search is performed in the direction $\bar{F}^{- 1} (\theta _
{k}) \cdot \nabla m_{k} (\theta _{k})$, starting from the maximal step size $\frac{\delta}{\sqrt{\nabla m_{k}
\bar{F}^{- 1} \nabla m_{k}}}$, until a sufficient increase in the objective is observed. This procedure can be understood as a way of rejecting updates and reducing the trust region if the true
increase is too small. The authors also say that without such a line search, from time to time the algorithm would take too large steps with catastrophic drops in performance. Such behavior is known
to happen in trust region algorithms where the region is never adjusted, see [Con00T]. Part of the general specification of a trust region algorithm is thus still not applied in TRPO - namely comparing predicted and actual improvements, and possibly extending the trust region instead of only shrinking it. Nevertheless, by following algorithm (2) more closely than the fixed-size natural gradient updates, the authors of TRPO introduced a very popular and robust state-of-the-art (at that time) policy optimization algorithm.9 9 From the point of view presented here, it seems natural to wonder whether the performance would increase even more by following the general trust region approach even more closely and adjusting the trust region based on the same criteria as described in it (see chapter 6 in [Con00T] for a slightly stricter but less general specification of the trust region algorithm).
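The backtracking procedure from the paper's Appendix C can be sketched as follows (the shrink factor and the simple improvement test are illustrative choices, not the paper's exact criteria):

```python
def backtracking_line_search(theta, direction, objective, max_step, shrink=0.5, max_tries=10):
    """Start at the maximal step allowed by the trust region and shrink it
    until the surrogate objective actually increases; otherwise keep theta."""
    base = objective(theta)
    step = max_step
    for _ in range(max_tries):
        candidate = theta + step * direction
        if objective(candidate) > base:   # accept on (simplified) sufficient increase
            return candidate
        step *= shrink                    # reject the step and shrink the region
    return theta                          # no acceptable step: keep the old parameters
```

This captures the reject-and-shrink half of steps 4 and 5; re-expanding the region on a good step is the part TRPO omits.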
4.2 Proximal Policy Optimization
The observation that trust region extension and shrinking were not fully applied in TRPO but some version of them was still needed to prevent catastrophic collapse brings us to the next improvement
upon TRPO - proximal policy optimization (PPO) [Sch17P]. The main idea here is the following: all we really care about is optimizing the model (8) without taking too large steps in policy space.
While using the KL divergence is a natural choice for bounding deviations of general probability distributions, one could make an even more suitable choice that is specific to the actual model of the
objective. In the PPO paper this choice can be phrased as follows: the new policy is only allowed to improve upon the old one by staying close to it in a pointwise, objective-specific sense,
specified in equation (13) below. It is, however, allowed to make large steps in $\theta$ which decrease the expected sum of rewards. This approach is different from the KL-based constraint which is
indifferent to whether deviations in a policy lead to an improvement or not. Moreover, the KL-based constraint is based on differences of policies as a whole (as probability distributions), allowing
for large pointwise differences.
Introducing the probability ratio
\[ r_{\theta} (a, s) := \frac{\pi _{\theta} (a |s)}{\pi _{k} (a|s)}, \]
the quantity that is to be maximized at the $k$-th iteration is:
\[ \mathbb{E}_{\pi _{k}} [r_{\theta} (a, s) A_{\pi _{k}} (a, s)] . \]
The PPO idea for restricting steps results in the following recipe, for some (small) $\epsilon > 0$ that is chosen as hyperparameter:
• If $A_{\pi _{k}} (a, s) > 0$, then we don’t allow $r_{\theta} (a, s)$ to become larger than $1 + \epsilon$.
• If $A_{\pi _{k}} (a, s) < 0$, then $r_{\theta} (a, s)$ should not become smaller than $1 - \epsilon$.
A very similar behavior can also be implemented as trust-region constraints on $\theta$ of the form
$$\operatorname{sign} (A_{\pi _{k}} (a, s)) (r_{\theta} (a, s) - 1) \leqslant \epsilon, \label{ppo-constraints}\tag{13}$$
where we would have one constraint for each $(a, s )$. However, in the PPO paper these conditions were not included as constraints but rather the objective itself was modified to
$$m^{\operatorname{PPO}}_{k} (\theta) := \eta _{\pi _{k}} +\mathbb{E}_{\pi _{k}, t} [\min (r_{\theta} A_{\pi _{k}}, \operatorname{clip} (r_{\theta}, 1 - \epsilon, 1 + \epsilon) A_{\pi _{k}})], \label{ppo-objective}\tag{14}$$
where the clip function clips $r_{\theta}$ to the interval $[1 - \epsilon, 1 + \epsilon]$ if it is too large or too small. To first order this coincides with the previous model (8) used in natural
gradients and TRPO. Note that improvements to the objective due to policies too far from $\pi _{k}$ are cut off, meaning that $m^{\operatorname{PPO}}_{k} (\theta) \leqslant$$m_{k} (\theta)$.
Therefore, the clipped objective can be understood as a minorant for the unclipped one. Thus, schematically PPO algorithms take the form:
Algorithm 3: PPO-style optimization
1. Unroll the current policy $\pi _{k}$ in order to estimate
\[ \mathbb{E}_{\pi _{k}, t} [\min (r_{\theta} A_{\pi _{k}}, \operatorname{clip} (r_{\theta}, 1 - \epsilon, 1 + \epsilon) A_{\pi _{k}})] . \]
This may involve batching and different strategies for estimating advantages.
2. Run gradient ascent in $\theta$ in order to maximize the objective obtained in the previous step, thereby obtaining $\theta _{\max}$.
3. Set $\theta _{k} \leftarrow \theta _{\max}$ and repeat from step 1.
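The per-sample clipped objective from (14) is very short in code (the numbers in the test are illustrative, with $\epsilon = 0.2$):

```python
import numpy as np

def ppo_surrogate(ratio, adv, eps=0.2):
    """min(r * A, clip(r, 1 - eps, 1 + eps) * A), applied elementwise."""
    return np.minimum(ratio * adv, np.clip(ratio, 1 - eps, 1 + eps) * adv)
```

For $A > 0$ the objective is capped at $(1 + \epsilon) A$, so pushing the ratio beyond $1 + \epsilon$ yields no further gradient; for $A < 0$, ratios below $1 - \epsilon$ are cut off in the same way.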
From a practical point of view, optimizing the original objective (8) with a sufficiently expressive policy $\pi _{\theta}$ and gradient methods while satisfying the constraints (13) is essentially
equivalent to optimizing the clipped objective (14), meaning that PPO can be viewed as a trust-region optimization as well. Figure 2 can be helpful to understand this statement.
Optimizing the clipped objective will occasionally lead to slight violations of the constraints. This is because the clipped objective only discourages going into the constrained region and does not
strictly enforce it. One can view the clipping strategy as a computationally efficient way to deal with constraints of the type (13).10 10 To the author’s knowledge, the point of view that clipping
the objective for gradient-based optimization is essentially equivalent to imposing trust-region constraints, albeit not the same ones as in the TRPO paper, has not been mentioned in the literature.
This perspective allows for a more straightforward comparison of PPO and other approaches.
PPO is much simpler than TRPO both conceptually and in implementation, while often equaling or outperforming the latter. Intuitively, this might be due to the much more RL-specific constraints (13)
on the steps in $\theta$, compared to the general-purpose KL-divergence based constraint (10). Again, as in previous algorithms, adjustments to the trust region based on predicted vs. actual
improvements do not form part of the PPO algorithm and one might reasonably speculate that adjusting $\epsilon$ based on such criteria could lead to further improvements of performance.
5 TRPO from monotonic improvement
The derivation of the TRPO update rule (11) in the original paper [Sch15T] was based on quite a different line of thought from the one presented in Section 4 above. The constraint in average Fisher
information (10) was motivated by a proof of monotonic improvement of $\eta _{\pi}$ from optimization of a surrogate objective where the KL-divergence is used as penalty. We will give a brief
overview of the ideas and calculations that form the backbone of the original paper.
The following useful identity expresses the rewards of one policy $\tilde{\pi}$ in terms of expectation of advantages of another policy $\pi$:
$$\eta _{\tilde{\pi}} = \eta _{\pi} +\mathbb{E}_{\tilde{\pi}} \left[ \sum_{t = 0}^{\infty} \gamma^t A_{\pi} (s_{t}, a_{t}) \right] . \label{policy-diff}\tag{15}$$
This can be easily proved by observing that
\[ \mathbb{E}_{s_{t}, a_{t} \sim \tilde{\pi}} \mathbb{E}_{\tau _{t}} [V_{\pi} (s_{t + 1})] =\mathbb{E}_{\tilde{\pi}} [V_{\pi} (s_{t + 1})], \]
and using the telescopic nature of the sum $\sum_{t = 0}^{\infty} \gamma^t A_{\pi} (s_{t}, a_{t})$. After some computation, equation (15) can be rewritten in state space as:
\[ \eta _{\tilde{\pi}} = \eta _{\pi} + \sum_{s} \rho _{\tilde{\pi}} (s) \mathbb{E}_{a \sim \tilde{\pi} (s)} [A_{\pi} (s, a)] . \]
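For completeness, the telescoping computation behind (15) can be written out, using (7) and the reward convention $r (s_{t})$ from the definitions above (the boundary term vanishes for bounded $V_{\pi}$ and $\gamma < 1$):
\begin{eqnarray*} \mathbb{E}_{\tilde{\pi}} \left[ \sum_{t = 0}^{\infty} \gamma^t A_{\pi} (s_{t}, a_{t}) \right] & = & \mathbb{E}_{\tilde{\pi}} \left[ \sum_{t = 0}^{\infty} \gamma^t r (s_{t}) \right] + \mathbb{E}_{\tilde{\pi}} \left[ \sum_{t = 0}^{\infty} \left( \gamma^{t + 1} V_{\pi} (s_{t + 1}) - \gamma^t V_{\pi} (s_{t}) \right) \right]\\ & = & \eta _{\tilde{\pi}} - \mathbb{E}_{s_{0} \sim \rho _{0}} [V_{\pi} (s_{0})] = \eta _{\tilde{\pi}} - \eta _{\pi} , \end{eqnarray*}
since the telescoping sum collapses to $- V_{\pi} (s_{0})$ and $\mathbb{E}_{s_{0} \sim \rho _{0}} [V_{\pi} (s_{0})] = \eta _{\pi}$.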
One might expect that the state distribution $\rho _{\pi}$ does not change too quickly when $\pi$ is smoothly varied, since even if different actions are taken, the unchanged system dynamics should
smooth out the behavior of $\rho _{\pi}$. This intuition is useful for finding good approximations: replacing $\rho _{\tilde{\pi}}$ by $\rho _{\pi}$ in the above expression gives a first order
approximation to $\eta _{\tilde{\pi}}$, namely exactly the model $m_{\pi} (\tilde{\pi})$ (8) that we have already used above (where we have also shown that it is truly a first order approximation).11
11 In the TRPO paper this model is called $L_{\pi}$. Now one would hope that finding a new policy $\tilde{\pi}$ that improves the approximation given by $m_{\pi}$ could guarantee an improvement in $\eta _{\tilde{\pi}}$, as long as the improved policy $\tilde{\pi}$ stays within a sufficiently small region around $\pi$. Since the KL-divergence gives a way of measuring proximity of distributions, it is
maybe not too surprising that such a bound can be derived using it. One of the central results of the TRPO paper is the following relation:
\[ \eta _{\tilde{\pi}} \geqslant m_{\pi} (\tilde{\pi}) - CD^{\max}_{\operatorname{KL}} (\pi, \tilde{\pi}), \]
where $C$ is a constant depending on $\pi$ and $\gamma$ and
\[ D^{\max}_{\operatorname{KL}} (\pi, \tilde{\pi}) = \max _{s} [D_{\operatorname{KL}} (\pi (\cdot |s), \tilde{\pi} (\cdot |s))] . \]
Thus, optimizing $m_{\pi} (\tilde{\pi}) - CD^{\max}_{\operatorname{KL}} (\pi, \tilde{\pi})$ w.r.t. $\tilde{\pi}$ leads to guaranteed improvement in the objective. The term $CD^{\max}_{\operatorname{KL}} (\pi, \tilde{\pi})$ acts as a penalty for moving $\tilde{\pi}$ too far from $\pi$. In practice, however, this penalty leads to steps that are too small, and $D^{\max}_{\operatorname{KL}}$ is difficult to
estimate. Therefore, the authors propose to replace $D^{\max}_{\operatorname{KL}}$ by the average KL-divergence and to incorporate the requirement to stay close to $\pi$ in KL-divergence as
constraint rather than as penalty. This argumentation leads to the trust-region algorithm that we presented in Section 4.1.
6 Conclusion
The TRPO and PPO papers achieved significant improvements to existing RL optimization methods. The core ideas in both can be summarized as: optimize the discounted sum of rewards through steps that
don’t bring you too far away from the previous policy by either enforcing constraints or modifying the objective in a smart way.
While naive policy gradient formulates “too far away” as keeping the update in parameter space $\Delta \theta = \theta - \theta _{k}$ small, in TRPO “too far away” is formulated in a more natural
sense for maps from states to probability distributions, i.e. using an approximation to average KL divergence, (12). In PPO “too far away” is defined in an intrinsically RL-like way: by not allowing
the approximation to improve the expected sum of advantages through point-wise constraints on $\pi (a|s)$ over all states and actions (equations (13) and (14)). Many of these ideas seem very natural
from the point of view of optimization theory [Nes04N], especially of trust region methods [Con00T, Noc06N].
Logarithm Function: https://pico-8.fandom.com/wiki/Math
A simple calculator cartridge.
(D-pad) Move Cursor
(O) Selected Button
• Numbers only go up to 32767 before wrapping around
• Numbers are rounded to 4 digits
• Logarithm function is approximated
• Generally bad code
I tried to get the square root of 7, @girres42, and it only gave back zero.
I pushed two buttons. 7 and the SQRT button.
@dw817 In that case, you would do (7), (√), (2), then (=). I tried making the UI as clear as possible within the 64x64 size.
Wow. That is - very different from my calculator, @girres42. Here I just type 7 and the square root key. I was expecting instant calculation thus: answer=sqrt(x)
I see what you are doing here is quite clever x^(1/y) so you can get any root you want, square or cube if desired. Neat !
Let me try that.
Okay 27 √ 3 = 2.9996
Hmm .. Oh, but that's not right. It should be 3.
@dw817 PICO-8 rounds (1/3) to .3333. So, it's actually doing 27^(.3333) (which equals 2.9996). I can't really think of a work around.
Let me try in Blitz, @girres42.
Oh, even worse. Isn't there a way to get a nice cube root of 27 in Pico-8 with the correct result of 3 ?
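One workaround is to refine the rounded power result with Newton's method instead of raising to a rounded 1/3. Sketched here in Python (the same iteration could be ported to PICO-8's fixed-point Lua, subject to its 16.16 precision):

```python
def cbrt(n, iters=40):
    """Cube root via Newton's method: x <- x - (x^3 - n) / (3 * x^2)."""
    if n == 0:
        return 0.0
    x = float(n)                       # crude starting guess
    for _ in range(iters):
        x -= (x**3 - n) / (3 * x**2)   # Newton step for f(x) = x^3 - n
    return x
```

For n = 27 this converges to 3 to within floating-point accuracy, rather than the 2.9996 that 27^0.3333 gives.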
Tried Windows Calculator:
Ah ! It got 3. So it is possible anyways.
What is the difference between by and CLASS in Proc means?
The primary difference is that the BY statement computes many analyses, each on a subset of the data, whereas the CLASS statement computes a single analysis of all the data. Specifically, The BY
statement repeats an analysis on every subgroup.
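A minimal sketch of this difference in SAS code (the data set and variable names are hypothetical):

```sas
proc sort data=mydata; by sex; run;

proc means data=mydata mean;
  by sex;      /* one analysis per BY group; input must be sorted */
  var height;
run;

proc means data=mydata mean;
  class sex;   /* single analysis reported by CLASS level; no sorting needed */
  var height;
run;
```

The BY version produces a separate block of output per group, while the CLASS version produces one table with a row per level of the classification variable.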
What does Proc mean do in SAS?
PROC MEANS is one of the most common SAS procedure used for analyzing data. It is mainly used to calculate descriptive statistics such as mean, median, count, sum etc. It can also be used to
calculate several other metrics such as percentiles, quartiles, standard deviation, variance and sample t-test.
What are SAS classes?
The CLASS statement names variables to be analyzed as categorical variables. The CLASS variables are one or more variables in the DATA= input data set. These variables can be either character or
numeric. The formatted values of the CLASS variables determine the categorical variable levels.
What is CLASS and model in SAS?
The CLASS statement names the classification variables to be used in the model. Typical classification variables are Treatment , Sex , Race , Group , and Replication . If you use the CLASS statement,
it must appear before the MODEL statement. Classification variables can be either character or numeric.
What is var statement in SAS?
The VAR statement identifies the analysis variables and their order in the output. If you omit the VAR statement, then PROC HPSUMMARY analyzes all numeric variables that are not listed in the other
statements. When all variables are character variables, the procedure produces a simple count of observations.
How do you find the average value in SAS?
The arithmetic mean is the value obtained by summing the values of a numeric variable and then dividing the sum by the number of observations. It is also called the average. In SAS the arithmetic mean is calculated using PROC MEANS.
What are the basics of SAS?
SAS is an acronym for Statistical Analysis System. The main purpose of SAS is to retrieve, report and analyze statistical data. Each statement in the SAS environment ends with a semicolon; otherwise the statement will produce an error message. It is a powerful tool for running SQL queries and automating user’s tasks through macros.
What are SAS procedures?
SAS procedures are an inseparable part of the SAS programming language. In the course you are going to learn a variety of procedures that perform data manipulation, statistical analysis and reporting.
What is SAS syntax?
SAS syntax is the set of rules that dictate how your program must be written in order for SAS to understand it. There are some conventions of SAS syntax that new users should know before getting started.
What are class variables in SAS?
Examples of classification variables (called CLASS variables in SAS) are gender, race, and treatment. The levels of the categorical variable are encoded in the columns of a design matrix. The columns
are often called dummy variables. The design matrix is used to form the “normal equations” for least squares regression.
Physics Archives - David The Maths Tutor
Please refer to the previous posts in this series if this post is to make any sense.
Believe it or not, I made a small error in the equations from the last post. I used kilometers instead of meters in part of the equation for the force due to gravity. The two equations for the thrust
phase and the coasting phase are:
\[ F = 2{,}800{,}000 - (16000 - 57.2222 t) \times 9.8 \left( \frac{6400000}{6400000 + x (t)} \right)^{2} = (16000 - 57.2222 t) \times a \]
and
\[ F = - 9.8 \left( \frac{6400000}{6400000 + x (t)} \right)^{2} = a \]
Now as mentioned before, these are rather difficult to solve. However, first class rocket scientists rarely solve equations by hand. They resort to numerical methods to solve them. Numerical methods
means that they enter the equations into a program and then let the computer solve them.
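As an illustration of such a numerical method, here is a sketch of my own (not the author's program): a simple Euler integration of the two-phase equations, using the constants from the text.

```python
def simulate_rocket(dt=0.01, t_end=600.0):
    """Euler integration of the two-phase rocket equations from the text."""
    R = 6_400_000.0           # distance to Earth's centre at launch, metres
    x, v, t = 0.0, 0.0, 0.0   # altitude, velocity, time
    while t < t_end:
        g = 9.8 * (R / (R + x)) ** 2        # gravity weakens with altitude
        if t < 270.0:                       # thrust phase: mass decreases as fuel burns
            m = 16000.0 - 57.2222 * t
            a = (2_800_000.0 - m * g) / m
        else:                               # coasting phase: gravity only
            a = -g
        v += a * dt
        x += v * dt
        t += dt
    return x, v
```

Consistent with the plots described below, the velocity at burnout far exceeds the local escape velocity, so the simulated rocket never falls back.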
I have done that and have plotted the results. Below is a plot of the distance the rocket is after t seconds:
Looks like we sent the rocket into space with quite a kick! At 270 seconds, the thrust stops and the rocket keeps going with no sign of slowing down. Its velocity when the thrust stops is greater
than the escape velocity at that height. Escape velocity is the speed an object needs to break free of the earth’s gravity. The rocket never falls back to earth.
I also plotted the velocity of the rocket. See how the velocity remains relatively constant after thrust cutoff because gravity is too weak to slow it down:
So that’s all I will say about Newton’s laws, at least for this series of posts. Next week, what’s the deal with this clock?
Newton’s Laws, Part 6
Now let’s use a bigger rocket and remove two of the assumptions we made with the model rocket. If you have not read the previous posts in this series, I suggest that you do before starting cold on
this one.
As I said before, the assumption that the rocket’s mass stays the same throughout its flight is not realistic for larger rockets. The fuel is the majority of the mass of a rocket and that is spent as
the rocket ascends. The assumption that gravity is constant throughout a rocket flight is also not realistic as gravity does get weaker the higher the rocket goes.
Let’s assume a hypothetical rocket that has a fuelled mass of 16,000 kg. The unfuelled rocket (the rocket structure and payload) weighs 550 kg. We will continue to assume a constant thrust (again not
realistic) of 2,800,000 N that lasts 270 seconds. As the fuel burns, let’s assume a constant rate of fuel being ejected. The mass of the rocket, as time t increases is:
m(t) = 16000 – 57.2222 t
this makes sense since at t = 0, the mass of the rocket is 16000 kg and after 270 seconds, the mass is 550 kg. Note that this mass equation is only valid during the thrust phase of the rocket. As
before, we have to deal with two separate phases of the rocket’s flight because the equations to be solved are different in each phase.
The change in gravity requires a little explanation. The acceleration due to gravity is due to the attraction between two bodies (you and the earth). The acceleration due to gravity follows what is
called the inverse square law. This means that the acceleration due to gravity decreases by the square of the distance between them.
Now the distance between you and the center of the earth is about 6400 km. We use the center of large objects from which to measure distance. As our rocket travels up, this distance is getting
greater so gravity is becoming weaker. If x(t) is the distance from the launchpad after t seconds (in meters), then g(t), the acceleration due to gravity at t seconds, is:

g(t) = 9.8 × (6400000/(6400000 + x(t)))²
So during the burn phase, there are two forces acting on the rocket: gravity and the rocket engine. We will follow the same convention as before, anything pointing down is negative, and up is positive. So for phase 1, Newton’s second law, F = ma, is

2800000 − (16000 − 57.2222t) × 9.8 × (6400000/(6400000 + x(t)))² = (16000 − 57.2222t) × a

As you can see, removing those two assumptions really complicates the equation. Things get a little simpler for phase 2, after the engines burn out, because the mass is now constant at 550 kg and can be divided out from both sides of the equation:

−9.8 × (6400000/(6400000 + x(t)))² = a
These equations from both phases are rather nasty ones. To get technical, these are non-linear second order differential equations. You can see why rocket scientists make the big bucks.
So this post is long enough so I will say more about these equations next time.
Newton’s Laws, Part 5
Please read the prior posts in this series if you have not been following along. I ended the last post with two equations, each relating to a different phase of our model rocket’s flight: powered and
coasting. Let’s look at the powered phase (phase 1).
For this phase, Newton’s second law (F = ma) reduced down to:
5.08 = 0.4a
From this equation, a first class rocket scientist can use calculus to find the velocity and the distance from the launch pad at any time in seconds after launch. These equations are:
v(t) = 12.7t, x(t) = 6.35t²
where v(t) is the velocity t seconds after launch and x(t) is the distance from the launch pad t seconds after launch. Now I’ve introduced what is called functional notation. Instead of saying the
velocity or distance after 3 seconds, I can just say v(3) or x(3). Maths is full of shorthand notations.
These two equations assume that we start the clock at 0 seconds and that velocity, acceleration, and distance are all 0 at 0 seconds. Now remember, these equations are only valid for the first 3.3
seconds of flight (see previous post) because the engine stops burning at 3.3 seconds.
So how fast is our rocket going at 3.3 seconds? Well, just replace t with 3.3 in the velocity equation and calculate it:
v(3.3) = 12.7×3.3 = 41.91 m/s
To give you a perspective of how fast this is, this is equivalent to almost 151 km/h. How high is the rocket at engine burnout? Let’s replace t with 3.3 in the distance equation and calculate it:
x(3.3) = 6.35×(3.3)² = 69.15 m
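These phase 1 numbers are easy to check in a few lines of Python (a sketch built directly from the equations above):

```python
# Phase 1 (engine burning): a = 5.08/0.4 = 12.7 m/s^2, so
# v(t) = 12.7 t and x(t) = 6.35 t^2 while the engine burns.
a = 5.08 / 0.4           # net acceleration during the burn, m/s^2

def v(t):
    return a * t              # velocity t seconds after launch, m/s

def x(t):
    return (a / 2) * t ** 2   # height t seconds after launch, m

burnout = 3.3
print(f"v({burnout}) = {v(burnout):.2f} m/s")   # 41.91 m/s
print(f"x({burnout}) = {x(burnout):.2f} m")     # 69.15 m
```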
Is this the highest the rocket goes, a measly 69 meters? Well remember, at burnout, the rocket is going up very fast. It will take gravity a while to turn that around. Enter the phase 2 equations.
From my last post, Newton’s second law for phase 2 is:
a = -9.8
Again, using calculus, our friend, the first class rocket scientist generates the two equations (applicable only to phase 2):
v(t) = -9.8t + 74.25, x(t) = -4.9t² + 74.25t -122.514
These equations are a bit more complex because they have to take into account that at 3.3 seconds, the velocity is 41.91 m/s and the rocket is 69.15 m high.
So how high does our rocket go? At the peak of its travels, the velocity goes from positive (going up) to negative (going down). That is, it passes through 0. So in order to find the highest that our
rocket goes, we need to find when the velocity equals 0. So we use our velocity equation and set it equal to 0, then solve for the time that makes that happen:
v(t) = -9.8t + 74.25 = 0
-9.8t = -74.25
t = -74.25/-9.8 = 7.58 seconds
So now we use the distance equation and replace t with 7.58:
x(7.58) = -4.9×(7.58)² + 74.25×7.58 -122.514 = 158.76 m
So now remember that we are not using a parachute. So the next two questions to ask are: when does it hit the ground, and how fast is it going when it does?
When the rocket hits the ground, its distance is 0. So now we use the distance equation, set it equal to 0 and find the value of t to make that happen:
x(t) = -4.9t² + 74.25t -122.514 = 0
Now you can solve this using the quadratic formula, which I have covered in a previous post. Using this formula, you get two answers: 1.884 s and 13.269 s. The first answer is less than 3.3, but
these phase 2 equations are only valid for t greater than 3.3 seconds, so we can reject that answer and choose 13.269 seconds. So the total flight time is a bit over 13 seconds.
Now how fast does it hit the ground? Put the time 13.269 into the velocity equation to get:
v(13.269) = -9.8×13.269 + 74.25 = -55.79 m/s
The velocity is negative because it is going down. So the rocket is going its fastest when it hits the ground, not when the engine burns out. 55.79 m/s is equivalent to 200.84 km/h. What are the odds
that we can launch this rocket again?
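The phase 2 arithmetic can be reproduced the same way. This sketch finds the apogee and the impact, using the quadratic formula for the impact time:

```python
import math

# Phase 2 (coasting): v(t) = -9.8 t + 74.25 and
# x(t) = -4.9 t^2 + 74.25 t - 122.514, valid for t > 3.3 s.
def v(t):
    return -9.8 * t + 74.25

def x(t):
    return -4.9 * t ** 2 + 74.25 * t - 122.514

# Apogee: solve v(t) = 0.
t_peak = 74.25 / 9.8
print(f"apogee at t = {t_peak:.2f} s, height {x(t_peak):.2f} m")   # 7.58 s, 158.76 m

# Impact: solve x(t) = 0 with the quadratic formula; keep the root > 3.3 s.
qa, qb, qc = -4.9, 74.25, -122.514
disc = math.sqrt(qb ** 2 - 4 * qa * qc)
roots = ((-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa))
t_hit = max(roots)
print(f"impact at t = {t_hit:.3f} s, speed {v(t_hit):.1f} m/s")    # about -55.8 m/s
```

Tiny differences in the last decimal place compared with the post come from rounding the intermediate values by hand.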
In my next post, let’s do the same problem but use a bigger rocket (since our model rocket is now one with the earth).
Newton’s Laws, Part 4
Well congratulations! If you get through (and understand) this and the last three posts on Newton’s laws, consider yourself a rocket scientist third class.
In my high school days, my friend Byron and I formed a two-member rocket club. We would pool our money to buy and build model rockets. We also got solid propellant cartridges that were inserted into
the rocket. We would put the rocket on a launch pad (a wooden block) and ignite the propellant, either with a fuse or electrically with a small coil of wire that was heated with an electric current.
The rocket would ascend very fast, then a small charge at the end of the propellant cartridge would push out the nose cone and the folded parachute inside. Then the fun part was to try and retrieve
the rocket, especially on windy days!
So today, I am going to find the equation of motion and the velocity equation of these rockets using Newton’s second law, F = ma.
We are going to make several assumptions. I will relax some of these later, but seeing that you are only a third class rocket scientist, let’s start out making things as simple as possible.
1. The force exerted by the cartridge is constant from ignition to the time it cuts off.
2. The mass of the rocket is constant throughout its flight.
3. The rocket is going straight up and down. (We will not be using a parachute.)
4. The flight time is short so we will not take into account the rotation of the earth.
5. The force due to gravity is constant throughout the flight.
6. Air drag is ignored.
A first class rocket scientist would not be making these assumptions and would include the effects of these into the equations. Assumption 1 is not valid for real rockets and the varying force would
have to be accounted for.
Assumption 2 would make the equations very inaccurate in reality. Depending on the type of propellant, the rocket fuel can be from 83% to 96% of the total mass of the rocket. As the fuel is burned,
the mass of the rocket is getting less and less, and as you can see from F = ma, this means that the acceleration must increase if m×a is to continue equalling the constant force.
Assumption 3 is made so that the velocity, acceleration, and position are measured along the same line. For big rockets, these three quantities are measured with something called vectors,
which provide a direction as well as a value.
Assumption 4 is made because the earth does rotate and this rotation adds to the speed, not in the vertical direction but in a sideways direction.
Assumption 5 is OK for our model rocket because it will not go that high, but the force of gravity does lessen with altitude and this must be taken into account for larger rockets.
Assumption 6 allows us to keep the equations simple (as do all of them).
Now let’s look at an actual model rocket. These numbers are realistic numbers that are possible for model rockets.
Suppose we launch a model rocket that weighs 400 g. The engine provides 9 N of force for 3.3 s. Let’s answer the following questions:
1. How high does the rocket go?
2. What is the maximum speed of the rocket?
3. What is the total flight time?
Before I set this up, let’s agree on what’s positive and what’s negative. Let’s agree that up is positive and down is negative. So the force due to the rocket engine is positive because it is pushing
the rocket up and the force due to gravity is negative because it is pulling the rocket down.
Now there are two phases of the rocket’s flight: the first phase is when the engine is burning and the second phase begins when the engine stops. Let’s look at F = ma during the first phase.
Phase 1 – engine is burning: There are two forces acting on the rocket, the one due to the engine and the other due to gravity. In part 2 of this series of posts, I explained how to find the force
due to gravity. The gravity force points down so it is negative. The other force, of course, is due to the rocket engine and it is positive. So putting this into F = ma:
9 N – (0.4 kg)(9.8 m/s²) = 0.4 kg × a
I use 0.4 kg instead of 400 g because we are using SI units (explained in the last post). So simplifying this equation and removing the units gives:
5.08 = 0.4a. This is the starting equation for the first phase.
Phase 2 – engine stops: Now let’s look at the second phase. After the engine cuts off, the only force acting on the rocket is gravity, so the only acceleration the rocket experiences is due
to gravity. So F = ma becomes:
– (0.4 kg)(9.8 m/s²) = 0.4 kg × a
But you can see from this that a = -9.8 m/s² to make this a true equation. This simple equation is the starting equation for the second phase.
This post is now longer than I thought it would be, so I will continue with this in my next post.
Newton’s Laws, Part 3
What I would like to eventually get to, is to develop the equation of motion of a rocket. An equation of motion is just an equation that calculates an object’s position given a time. I did this
without a lot of detail, in my Springy Thingy posts back in February. For this set of posts, I would like to add a bit more development.
Let’s go back to our basic equation that describes motion: F = ma. Let’s look at the a (acceleration) part. Acceleration is a rate of change of velocity. If a car goes from 0 to 100 km/h in 10
seconds, its acceleration is the difference in velocity divided by the time interval:
acceleration = (100 km/h - 0 km/h) ÷ 10 s = 10 km/h per second
This means the car increases its velocity 10 km/h each second. So acceleration is the rate that velocity changes per unit of time. However, velocity is also a rate of change measurement. Velocity is
the rate of change of position (or distance) per unit of time. An equation of motion is finding the position of an object given a time.
So acceleration is the rate of change of velocity which is the rate of change of position, and it’s position that we want. How do we get there? This is where calculus comes in.
Calculus essentially deals with rate of change equations. It can find the rate of change of position, that is velocity, given a position equation. It can also go backwards and find a position
equation, given a velocity equation. In our case, it can take the rate of change of velocity (acceleration) and find the velocity equation and then take the velocity equation and find the position
equation, that is the equation of motion.
This ability to take a simple equation like F = ma, and from it, describe the motion of objects is one of the many reasons I love maths.
With this as a background, next time, let’s launch a rocket, then cut off its motors and answer questions like:
How high is the rocket when the motors cut off?
How fast is the rocket going when the motors cut off?
How high does the rocket go?
When will the rocket hit the ground (or will it hit the ground)?
Newton’s Laws, Part 2
So I left off last time with the equation form of Newton’s second law: F = ma. It is important to use consistent units for the three quantities, the force F, the mass m, and the acceleration a.
Now in the USA, they use English units where the unit for acceleration is ft/sec² (feet per second squared), the unit of mass is something called a slug, and the unit for force is lbf (pound-force).
We will not be using these units.
In the civilised world (I’m not biased), we use SI units. SI comes from the French Système international which means the international system of units. For our equation, these units are kg
(kilogram) for mass, m/sec² (meters/second squared) for acceleration, and the combination of these units on the right side of F = ma gives kg×m/sec² as the unit of force. This unit is given a special
name in SI units, the newton, N, in honour of guess who. So 1 N is the force required to accelerate 1 kg of mass, 1 m/sec². 1 N is about the force an average size apple exerts on your hand.
Now I said I would also explain the difference between mass and weight. Weight is a force exerted by an object due to gravity. An object’s weight changes when measured on different planets or moons.
Its mass however, is an intrinsic property and remains unchanged regardless of where the object is. Mass is the amount of stuff that makes up the object.
Now the confusion between these two things arises because we commonly use weight, say in kilograms, to mean force. We feel the weight of a 1 kg object in our hands. But this unit is really a
kilogram-force (kgf). A kilogram in SI units is a unit of mass, not weight. But fortunately, an object on earth that exerts a force of 1 kgf due to gravity, is defined as having a mass of 1 kg, so it
is easy to interchange these units on earth. But elsewhere, the object will exert a different force (kgf) due to gravity but it will still be an object of 1 kg mass.
The kilogram-force unit is not an SI unit – we just naturally use it in everyday conversation. As seen before, the newton (N) is the SI unit of force. So what is the weight of a 1 kg mass in SI
units? On earth, when you drop something from a height, its velocity starts as 0 m/s but its speed increases as time goes on. This is why you can jump from a half a meter height without injuring
yourself but will do considerable damage if you jump from a 50 meter height. Increasing speed means acceleration. So there is an acceleration due to gravity on earth as well as other planets. On
earth, the acceleration due to gravity is 9.8 m/sec². When an object is dropped on earth, its speed increases by 9.8 m/sec every second.
So if we want to calculate the weight of a 1 kg object, we use Newton’s second law F = ma, where we use the acceleration due to gravity for a:
F = 1 kg × 9.8 m/sec² = 9.8 N
If you want to know your weight in newtons, just take your mass in kg (which equals your weight in kgf), and multiply by 9.8. Your weight in newtons is almost 10 times your weight in kgf, which
explains why people prefer to use kgf.
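The weight calculation is a one-liner worth playing with (a sketch using g = 9.8 m/s²):

```python
# Weight in newtons from mass in kilograms, using F = m * g with g = 9.8 m/s^2.
g = 9.8   # acceleration due to gravity on Earth, m/s^2

def weight_newtons(mass_kg):
    """Force exerted by mass_kg of mass under Earth gravity, in newtons."""
    return mass_kg * g

print(f"{weight_newtons(1):.1f} N")    # 9.8 N, the weight of a 1 kg mass
print(f"{weight_newtons(70):.1f} N")   # 686.0 N for a 70 kg person
```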
Newton’s Laws, Part 1
As my interest in maths grew out of seeing it applied, I thought I should start writing posts on its applications. Physics applications are almost always maths related and you can’t start a
conversation about physics without starting with Newton’s Laws.
Sir Isaac Newton is famous for his work in physics and maths. I find it amazing that his accomplishments occurred in the 1600’s. One of his most famous works, some would say his most famous, was his
Philosophiae Naturalis Principia Mathematica (The Mathematical Principles of Natural Philosophy). In his day, the term natural philosophy meant science. In this publication, Newton set out his three
laws of motion:
1. Every object in a state of uniform motion will remain in that state of motion unless an external force acts on it.
2. Force equals mass times acceleration.
3. For every action there is an equal and opposite reaction.
The first law is sometimes called the law of inertia. You experience inertia every day when you try to push an object or stop an object from moving. You have to apply a force to start an object
moving or stop one from moving. The second law explains why a greater force is needed to stop a moving car than a baby stroller. The third law explains why rockets work, what happens when you release
a balloon before it’s tied, and why a gun or rifle has a recoil.
This series of posts will be mainly about Newton’s second law. This law, in equation form, is
F = ma , where F is force, m is mass, and a is acceleration.
Before we work with this equation using numbers, let’s see what this equation means.
If you try to stop a rolling car, you are trying to decrease its velocity from a certain value to zero. In other words, you are trying to decelerate it. Deceleration is negative acceleration, and
according to Newton’s second law, because the mass of the car is rather large, a large force is required to stop it from rolling. A baby stroller going the same speed requires less force because its
mass, m, is much smaller than a car’s mass.
Even though this equation is very simple, there are entire books dedicated to this equation. The rest of my posts could easily be about this equation alone, but I will try to keep it down to just a few.
In my next post, I will talk about the units that we will use for the three things that make up this equation. I will also talk about the difference between mass and weight.
What Goes Up …
I was a mediocre student in maths before I started at a university. But in university, I discovered the amazing things that maths can do. Maths can be used to describe all sorts of physical processes
and this can be used to control those processes. Things like the cruise control in your car or the flight controls of an aircraft or spacecraft need a mathematical model of the thing being
controlled. One of the first examples used to introduce students to modelling (that is mathematically describing something), is the simple act of throwing a ball into the air.
Now to start modelling this process from scratch requires calculus which I haven’t covered yet. But I will give you the final result and we will work with that.
If a tennis ball is hit straight up in the air, there are two main forces acting on it after it leaves the racquet: gravity and drag from the air. Though the air drag can be modelled as well, it
complicates the model so it is usually assumed to be negligible when introducing this to students. The effect of gravity is to reduce the initial speed given to the ball by 9.8 m/s every second. So
if the ball has an initial speed of 60 m/s, after 1 second its speed is 60 – 9.8 = 50.2 m/s. After the next second, its speed is 50.2 – 9.8 = 40.4 m/s, and so on. By the way, I am using metric units
here. The same thing can be done with American units where the effect of gravity reduces the speed of the ball by 32 ft/s every second. But as most of the world uses metric, we’ll stay with that.
So let’s stick with the initial velocity of the ball as 60 m/s when it leaves the racquet. Now the first thing to do is agree on a coordinate system. It’s natural to agree that up is positive and down
is negative. We’ll also agree that time starts at 0 as soon as the ball leaves the racquet and that the height in meters at the point where the ball leaves the racquet is also 0. We’ll call this the
ground level. So using calculus with the force of gravity and the initial speed of the ball (ignoring air drag), we can get three equations that describe the motion of the ball: one equation for its
acceleration, one for its velocity, and one for its height from the ground. With a = acceleration, v = velocity, h = height, and t = time in seconds, these equations are:
a = -9.8, v = 60 - 9.8t, h = 60t - 4.9t²
With these equations, we can answer the following questions:
1. When will the ball hit the ground?
2. How fast is the ball travelling when it is at its maximum height?
3. How high does the ball go?
4. How fast is the ball going when it hits the ground?
Before I answer these questions, let’s look at the graph of the height equation. Now before, we talked about x and y values on a graph. But for physical processes, we can use different letters that
are more meaningful. Instead of x, we will use t for time, and instead of y, we will use h for height. The graph of this equation is below:
The curve is an upside down parabola. The graph goes on forever below the t-axis, but I’m only showing the part of the graph that makes physical sense.
The graph and the equation make sense at t = 0 seconds as h is 0 on the graph at t = 0, and if you let t = 0 in the equation, you also get h = 0.
So you can see that the ball goes up, reaches a maximum height, then falls back to the ground. To answer question 1, from the graph, it looks like the ball hits the ground at a little over 12 seconds
because that is where the graph shows h = 0 again. To find the exact value, we must set the height equation equal to zero, and find the times when h = 0. This will require us to factor the equation
and use the null factor law (explained in a previous post):
60t - 4.9t² = t(60 - 4.9t) = 0
t = 0 or 60 - 4.9t = 0, which gives t = 60/4.9 = 12.245 seconds
So the ball travels for 12.245 seconds before it hits the ground (that’s a powerful tennis player!). Notice that there is also another solution, t = 0, which is when the ball is initially hit.
Question 2 can be answered by the physics of the problem and this answer will help us answer question 3. As explained, the ball has an initial velocity of 60 m/s, but is continually slowing down due
to gravity. When it reaches its maximum height, the ball reverses direction then goes back down. At the maximum height, the velocity is 0 because velocity is positive going up and negative going
down, so it must be 0 right when the ball reverses direction.
Question 3 can be answered using the answer to question 2. If we set the velocity equation to zero, that will give us the time that the ball is at maximum height. We then use this time in the height
equation to find the height at that time:
v = 60 - 9.8t = 0, which gives t = 60/9.8 = 6.122 seconds
h = 60×6.122 - 4.9×(6.122)² = 183.67 m
We also could have solved this by noticing that a parabola is symmetric and the maximum height would occur halfway between 0 and 12.245 seconds. This also would have given us t = 6.122 seconds to use
in the height equation.
The last question is answered by using the time that the ball hits the ground in the velocity equation:
v = 60 - 9.8×12.245 = -60 m/s
Notice that the velocity is the same as when the ball was first hit except it is negative since it is now going down instead of up.
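All four answers can be checked with a short Python sketch, using the velocity and height equations for a 60 m/s launch with g = 9.8 m/s²:

```python
# Ball thrown straight up at 60 m/s, ignoring air drag:
# v(t) = 60 - 9.8 t and h(t) = 60 t - 4.9 t^2.
def v(t):
    return 60 - 9.8 * t

def h(t):
    return 60 * t - 4.9 * t ** 2

t_ground = 60 / 4.9   # h(t) = t(60 - 4.9 t) = 0, taking the nonzero root
t_peak = 60 / 9.8     # v(t) = 0 at the top (answer to question 2)
print(f"hits ground at t = {t_ground:.3f} s")       # 12.245 s
print(f"max height = {h(t_peak):.2f} m")            # 183.67 m
print(f"impact velocity = {v(t_ground):.1f} m/s")   # -60.0 m/s
```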
Isn’t it amazing how we can find out all sorts of things about throwing a ball without actually doing it!
Latitude Aptitude
How did the early sailors determine their latitude position without GPS? That is the topic of today’s post.
Now first, a little background. The earth’s axis is tilted with respect to its orbit about the sun. The angle of this tilt is approximately 23.5°. This causes the northern and southern hemispheres to
get more sun in summer and less in winter, which is the reason for seasons to exist. The tilted axis also causes our days to be shorter in the winter and longer in the summer. There are two times
during the year when the days and nights are equal in length. The times are called the vernal and autumnal equinoxes. In the northern hemisphere, these equinoxes occur on the first days of spring and
autumn. Here in Australia in the southern hemisphere, we elected to start spring on the 1st of September and autumn on the 1st of March, about 21 days short of the respective equinoxes.
Perhaps this is because it is easier to remember. The main point here is that twice a year, at an equinox, the days and nights are equal.
At any time of the year other than an equinox, the highest height of the sun around noon is affected by the tilt of the earth’s axis. But at an equinox, the earth is in a neutral position where the
axis tilt does not affect the highest sun height. At the equator (0° latitude), the sun would be directly overhead and a vertical stick in the ground would cast no shadow. As you go up or down in
latitude, the highest sun height goes down and a vertical stick would cast the shortest shadow when the sun is at its highest. The below graphic shows the earth at an equinox with the sun at its
maximum height. If a vertical stick is placed in the ground at your location, the sun’s rays would make an angle with it that is the same as your latitude angle.
Below is a blow-up of the vertical stick. You can see from the above picture that at the equator, the sun would be directly overhead at noon and there would be no shadow. At the poles, the sun would
be at the horizon and the shadow would be very long (technically infinite). But in between, a measurable shadow would be made.
Now you could measure the angle directly with a sextant, but I hardly know what a sextant is, let alone how to use one. But I am good at maths and I have a good calculator. The shadow, stick, and the line
from the top of the stick to the shadow end forms a right triangle. If you remember the post on trig functions, the tangent of an angle is the length of the opposite side divided by the length of the
adjacent side. We want to measure the angle θ, so the adjacent side is the stick and the opposite side is the shadow:
tan(θ) = shadow length ÷ stick length, so θ = arctan(shadow length ÷ stick length)
On your calculator, if you have the trig functions, you would also have keys labelled “arctan” or “tan^-1“. These keys mean “what is the angle that has what you entered as its tangent”. So if you
enter the results of the division and then hit this key (making sure that your calculator is in “degrees” mode), you will get your latitude.
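If your calculator happens to be a computer, the same arctan step looks like this in Python (the stick and shadow lengths are hypothetical measurements):

```python
import math

# Latitude from a vertical stick at local noon on an equinox (a sketch):
# tan(latitude) = shadow length / stick length, so latitude = arctan(...).
def latitude_degrees(stick_len, shadow_len):
    return math.degrees(math.atan(shadow_len / stick_len))

# Hypothetical measurements: a 1 m stick casting a 0.7 m shadow.
print(f"{latitude_degrees(1.0, 0.7):.1f} degrees")   # 35.0 degrees
```

Note that math.atan returns radians, so math.degrees does the job of your calculator's "degrees" mode.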
Now this method will not tell you if the latitude is positive (North) or negative (South). But if you are so lost that you don’t even know what hemisphere you are in, finding your latitude is
probably the least of your troubles.
Also, waiting for noon to find your latitude is not too bad, but waiting for an equinox is fairly restrictive. Fortunately, our early sailors had tables to correct the angle found depending on the
time of the year.
Department Of CSE
Department Of Computer Science And Engineering
The principal aim of the Sylhet International University is to provide high quality education at undergraduate and postgraduate levels relevant to the needs of a dynamic society. The courses and
curricula are so designed as to enable a student to enter into the world of work or pursue higher academic and professional goals with a sound academic foundation. The medium of instructions in
Sylhet International University is English. The academic goal of the university is not just to make the students pass the examination and get the degree but to equip them with the means to become
productive members of the community and continue the practice of lifelong learning.
The mission of the CSE Department is to provide quality education for those students who are able to compete nationally and internationally, able to produce creative and effective solutions to the
national needs, conscious to the universal moral values, adherent to the professional ethical codes and to generate and disseminate knowledge and technological essentials to the local and global
needs in the fields of Computer Science and Engineering.
The vision of the CSE Department is to become a nationally and internationally leading institution of higher learning, building upon the culture and the values of universal science and a center of
education and research that creates knowledge and technologies which form the groundwork in shaping the future of the Computer Science and Engineering fields.
23 Jun 2020
Now students can pay all kinds of fee using Rocket Platform. Follow the following steps.
12 Feb 2019
Sylhet International University ICT Fest 2019 The Sub-Committees for SIU ICT FEST 2019 has been Formed as Follow-
27 Nov 2018
For the information of all students of this university's CSE Department from semester 2-1 through semester 4-2, it is hereby notified that today, Tuesday 27/11/2018, the CSE Department's...
Message From Dept. Head
Welcome to the Computer Science and Engineering Department of Sylhet International University. You are here because you dream of a future with a strong foundation in knowledge and skill in the world
of technology. We, the teachers of this department, nurture a congenial atmosphere for self-development. With utmost sincerity everyone here strives for the continual development of self and society.
Our students enjoy what they learn and get hands on experience of all possible applications of the knowledge they gather. Thus, knowledge and practical application are developed hand in hand. The
department offers a healthy atmosphere of competitiveness in both skill development and academic research through initiatives of the SIU Computer Society, Center for Research, Testing & Consultancy
(CRTC), Programmer’s Den etc. The academic atmosphere is enhanced by.......
Fees For B.Sc.Engg. in CSE (Day)(4 years)
Fees For B.Sc.Engg. in CSE (Eve)(3 years)
• First Year : Semester I
Course Code Course Title Credit Hours
CSE-101 Computer Fundamentals 3.0
CSE-102 Computer Fundamentals Lab 1.0
PHY-101 Mechanics, Properties of Matter, Waves, Optics, Heat and Thermodynamics 3.0
MTH-105 Differential and Integral Calculus 3.0
CSE-105 Structured Programming Language 3.0
CSE-106 Structured Programming Language Lab 1.5
HUM-105 Oral and Written Communication in English Language 3.0
HUM-113 Bangladesh Studies: History and Society of Bangladesh 3.0
Total 20.50
• First Year : Semester II
Course Code Course Title Credit Hours
CSE-107 Object Oriented Programming I 3.0
CSE-108 Object Oriented Programming I Lab 1.5
MTH-107 Geometry and Linear Algebra 3.0
PHY-103 Electromagnetism and Modern Physics 3.0
PHY-102 Physics Lab 1.5
ECE-101 Basic Electrical Engineering 3.0
ECE-102 Basic Electrical Engineering Lab 1.5
ECN-101 Principles of Economics 3.0
HUM-103 Language Composition and Comprehension 3.0
Total 22.50
• Second Year : Semester I
Course Code Course Title Credit Hours
CSE-201 Discrete Mathematics 3.0
CSE-211 Object Oriented Programming II 3.0
CSE-212 Object Oriented Programming II Lab 1.5
CSE-205 Data Structures 3.0
CSE-206 Data Structures Lab 1.5
MTH-205 Vector Analysis and Complex Variable 3.0
ECE-201 Electronic Devices and Circuits 3.0
ECE-202 Electronic Devices and circuits Lab 1.5
ACN-203 Cost and Management Accounting 3.0
Total 22.50
• Second Year : Semester II
Course Code Course Title Credit Hours
CSE-207 Algorithms 3.0
CSE-208 Algorithms Lab 1.5
CSE-209 Numerical Methods 3.0
CSE-210 Numerical Methods Lab 1.5
CSE-231 Digital Logic Design 3.0
CSE-232 Digital Logic Design Lab 1.5
MTH-207 Differential Equations , Laplace Transforms and Fourier Analysis 3.0
CSE-200 Project Work 2.0
Total 18.50
• Third Year : Semester I
Course Code Course Title Credit Hours
CSE-321 Database Systems 3.0
CSE-322 Database Systems Lab 1.5
CSE-331 Computer Architecture 3.0
CSE-323 Web Engineering 2.0
CSE-324 Web Engineering Lab 1.5
MTH-301 Statistics and Probability 2.0
CSE-309 Cyber crime and Intellectual Property Law 3.0
CSE-310 Technical Report Writing and Presentation 1.5
CSE-326 Engineering Drawing 1.0
Total 18.50
• Third Year : Semester II
Course Code Course Title Credit Hours
CSE-300 Software Development 2.0
CSE-303 Operating Systems 3.0
CSE-304 Operating Systems Lab 1.5
CSE-315 Data Communication 3.0
CSE-313 Microprocessors and Microcontroller 3.0
CSE-314 Microprocessors and Microcontroller Lab 1.5
CSE-337 System Analysis and Software Engineering 3.0
CSE-338 System Analysis and Software Engineering Lab 1.5
Total 18.50
• Fourth Year : Semester I
Course Code Course Title Credit Hours
CSE-425 Digital Signal Processing 3.0
CSE-426 Digital Signal Processing Lab 1.5
CSE-403 Compiler Design 3.0
CSE-404 Compiler Design Lab 1.5
CSE-421 Computer Network 3.0
CSE-422 Computer Network Lab 1.5
CSE-4** Option 3.0
CSE-4** Option Lab 1.5
Total 18.00
• Fourth Year : Semester II
Course Code Course Title Credit Hours
CSE-415 Artificial Intelligence 3.0
CSE-416 Artificial Intelligence Lab 1.5
CSE-431 Computer Graphics 3.0
CSE-432 Computer Graphics Lab 1.5
CSE-435 Computer Interfacing 3.0
CSE-436 Computer Interfacing Lab 1.5
CSE-4** Option 3.0
CSE-4** Option Lab 1.5
CSE-400 Project / Thesis 3.0
CSE-402 Comprehensive Viva Voce 2.0
Total 23.00
• Optional
Course Code Course Title Credit Hours
CSE-437 Pattern Recognition 3.0
CSE-438 Pattern Recognition Lab 1.5
CSE-411 VLSI Design 3.0
CSE-412 VLSI Design Lab 1.5
CSE-419 Graph Theory 3.0
CSE-420 Graph Theory Lab 1.5
CSE-423 Computer System Performance Evaluation 3.0
CSE-424 Computer System Performance Evaluation Lab 1.5
ECE-421 Digital Communication 3.0
ECE-422 Digital Communication Lab 1.5
CSE-407 Simulation and Modeling 3.0
CSE-408 Simulation and Modeling Lab 1.5
CSE-453 Digital Image Processing 3.0
CSE-454 Digital Image Processing Lab 1.5
CSE-455 Wireless and sensor Networks 3.0
CSE-456 Wireless sensor Networks Lab 1.5
CSE-409 Computer Security and Cryptography 3.0
CSE-410 Computer Security and Cryptography Lab 1.5
CSE-457 Bioinformatics 3.0
CSE-458 Bioinformatics Lab 1.5
CSE-461 Neural Networks 3.0
CSE-462 Neural Networks Lab 1.5
CSE-463 Machine Learning 3.0
CSE-464 Machine Learning Lab 1.5
CSE-465 Contemporary course on CSE 3.0
CSE-466 Contemporary course Lab on CSE 1.5
CSE-467 Advanced Database Systems 3.0
CSE-468 Advanced Database Systems Lab 1.5
CSE-469 Natural Language Processing 3.0
CSE-470 Natural Language Processing Lab 1.5
Total Credit Hours Required for Degree 162.00
• Detailed Syllabus
Detailed Syllabus
CSE-101 Computer Fundamentals
3 Credits
Introduction: definition, history & some applications of computers. Classification of computers: hardware and software computer components. Number systems: binary, octal, hexadecimal number systems and
operations, computer codes. Boolean algebra. Data processing techniques. Arithmetic & logic operations. Logic gates. Operating systems: MS-Windows, UNIX. Application software: word processors,
WordPerfect, MS-Word, Excel, FoxPro. Programming languages: machine language, assembly language, high-level languages, source & object languages, 4th generation languages, compilers, translators &
interpreters. Elements of computer hardware. Data transmission & networking.
Books Recommended:
1. Introduction to Computers – Subramanian
2. Inside the PC – P. Norton
3. Introduction to Computer – Norton
4. Computer Fundamentals – Pradeep K. Sinha
CSE-102 Computer Fundamentals Lab
1.0 Credits
Laboratory works based on CSE 101.
PHY-101 Mechanics, Properties of Matter, Waves, Optics, Heat & Thermodynamics
3 Credits
Mechanics : Measurements, Motion in one Dimension, Motion in a Plane, Particle Dynamics, Work & Energy, Circular Motion, Simple Harmonic Motion, Rotation of Rigid Bodies, Central Force, Structure
of Matter, Mechanical Properties of Materials. Properties of Matter: Elasticity, Stresses & Strains, Young’s Modulus, Bulk Modulus, Rigidity Modulus, Elastic Limit, Poisson’s Ratio, Relation
between Elastic Constants, Bending of Beams. Fluid Motion, Equation of Continuity, Bernoulli’s Theorem, Viscosity, Stokes’ Law. Surface Energy & Surface Tension, Capillarity, Determination of
Surface Tension by Different Methods. Waves: Wave Motion & Propagation, Simple Harmonic Motion, Vibration Modes, Forced Vibrations, Vibration in Strings & Columns, Sound Wave & Its Velocity,
Doppler Effect, Elastic Waves, Ultrasonics, Practical Applications. Optics: Theories of Light, Huygens' Principle, Electromagnetic Waves, Velocity of Light, Reflection, Refraction, Lenses,
Interference, Diffraction, Polarization. Heat & Thermodynamics : Temperature and Zeroth Law of Thermodynamics, Calorimetry, Thermal Equilibrium & Thermal Expansion, First Law of Thermodynamics,
Specific Heat, Heat Capacities, Equation of State, Change of Phase, Heat Transfer, Second Law of Thermodynamics, Carnot Cycle, Efficiency, Entropy, Kinetic Theory of Gases.
Books Recommended:
1. Fundamentals of Physics (Part I) – Halliday, Resnick & Walker
2. Modern Physics – Bernstein
3. Concepts of Modern Physics – Beiser
4. Electromagnetism and Modern Physics
5. Fundamental of Optics – Brizlal
6. Optics – Ghatak
7. Heat & Thermodynamics – Brizlal
8. University Physics with Modern Physics – Young
9. Essential University Physics Volume I – Wolfson
10. Essential University Physics Volume II – Wolfson
MTH-105 Differential and Integral Calculus
3 Credits
Differential Calculus: Real number System. Relations and functions, Functions of single variable, their Domain, Range, Graphs, Limit, Continuity and Differentiability. Successive Differentiation,
Leibnitz’s theorem, Rolle’s theorem, Mean value theorem, Taylor’s theorem, Maclaurin’s theorem, Lagrange’s and Cauchy’s forms of Remainder. Expansion of Functions in Taylor’s and Maclaurin’s
Series. Maximum and Minimum Values of Functions. Evaluation of Indeterminate forms of limit, L’Hospital’s Rule. Tangent and Normal. Curvature, Radius of Curvature, Centre of Curvature. Functions
of more than one variable, Limit, Continuity, Differentiability, Partial Derivatives, Euler’s Theorem. Jacobians. Integral Calculus: Indefinite Integrals and its definition. Methods of
Integration (Integration by substitution, Integration by parts, Integration by successive reduction). Fundamental theorem of Integral calculus. Definite Integral and its properties. Definite
Integral as the limit of a sum. Improper Integrals, Beta and Gamma Function, Its application in evaluating Integrals. Evaluation of Arc length, Areas, Surfaces of Revolution, Volumes of solids of
Revolution, Multiple Integrals.
Books Recommended:
1. Calculus – Howard Anton; 10^th Edition; John Wiley and Sons
2. Differential Calculus – B. C. Das & B. N. Mukharjee; 54^th Edition; U. N. Dhur & Sons PTL
3. Integral Calculus – C. Das & B. N. Mukharjee; 54^th Edition; U. N. Dhur & Sons PTL
4. A Text Book on Differential Calculus – Mohammad, Bhattacharjee & Latif, 4^th Edition, 2014; S. Chakravarty, Gonith Prokashan
5. A Text Book on Integral Calculus – Mohammad, Bhattacharjee & Latif; 4^th Edition, 2014; S. Chakravarty, Gonith Prokashan.
CSE-105 Structured Programming Languages
3 Credits
Programming language: Basic concepts; overview of programming languages. C language: Preliminaries; Elements of C; program constructs; variables and data types in C; Input and output; character
and formatted I/O; Arithmetic expressions and assignment statements; loops and nested loops; Decision making; Arrays; Functions; Arguments and local variables; Calling functions and arrays;
Recursion and recursive functions; Structures and structures within structures; Files; File functions for sequential and random I/O. Pointers; Pointers and structures; Pointers and functions;
Pointers and arrays; Operations on pointers; Pointers and memory addresses; Operations on bits; Bit operations; Bit fields; Advanced features; Standard and library functions.
Books Recommended:
1. The C Programming Language – Kernighan & Ritchie
2. Teach Yourself C – H. Schildt
3. The Complete Reference, Turbo C/C++ – H. Schildt
4. Programming with ANSI C – E. Balagurusamy
5. Programming with C, Schaum’s Outline Series – Gottfried
CSE-106 Structured Programming Languages Lab
1.5 Credits
Laboratory works based on CSE 105.
HUM-105 Oral and written Communication in English Language
3 Credits
Oral & written communication skills include communicative expressions for day to day activities, both for personal and professional requirement. Grammar items will mainly emphasize the use of
articles, numbers, tense, modal verbs, pronouns, punctuation, etc. Sentence formation, question formation, transformation of sentence, simple passive voice construction, and conditionals will
also be covered.
Books Recommended:
1. Paragraph in English – Tibbits
2. Exercise in Reading Comprehension – Tibbits
3. Essential English Grammar – Raymond Murphy
4. English Vocabulary in Use – Stuart
5. English Vocabulary in Use – McCarthy
6. Intermediate English Grammar – Raymond Murphy
HUM-113 Bangladesh Studies : History and Society of Bangladesh
3 Credits
Bangladesh-Geography of Bangladesh-History of Bangladesh: ancient, medieval, British periods, politics of 1930’s and 1940’s, Language movement, 6-point & 11-point programs, liberation war and
emergence of Bangladesh and constitutional transformation of the state. Social structure of Bangladesh-Social problems such as repression of women, eve-teasing, urbanization, terrorism,
communalism, corruption etc.
Books Recommended:
1. Bangladesh Encyclopedia (English Version)
2. History of Bengal (English Version) – K. Ali
3. History of Bengal (English Version) – Majumder
4. Economy of Bangladesh (Economic Journal)
CSE-107 Object Oriented Programming I
3 Credits
Introduction to Java: History of Java, Java class libraries, Introduction to Java programming, and a simple program. Developing Java Applications: Introduction, Algorithms, Pseudocode, Control
structures, The If/Else selection structure, The While repetition structure, Assignment operators, Increment and decrement operators, Primitive data types, Common escape sequences, Logical
operators. Control Structures: Introduction, For structure, Switch structure, Do/While structure, Break and Continue statements. Methods: Introduction, Program modules in Java, Math class methods,
Method definitions, Java API packages, Automatic variables, Recursion, Method overloading, Methods of the Applet class. Arrays: Introduction, Arrays, Declaring and allocating arrays, Passing
arrays to methods, Sorting arrays, Searching arrays, Multiple-subscripted arrays. Inheritance: Introduction, Superclasses, Subclasses, Protected members, Using constructors and finalizers in
subclasses, Composition vs. inheritance, Introduction to polymorphism, Dynamic method binding, Final methods and classes, Abstract superclasses and concrete classes, Exception handling.
Books Recommended:
1. Java, How to Program- H. M. Deitel & P. J. Deitel
2. Core Java (Vol. 1 and 2)- Sun Press
3. Beginning Java 2, Wrox – Ivor Horton
4. Java 2 Complete Reference – H. Schildt
CSE-108 Object Oriented Programming I Lab
1.5 Credits
Laboratory works based on CSE 107.
MTH-107 Geometry and Linear Algebra
3 Credits
Geometry: Two dimensional Geometry: Transformation of Co-ordinates. Pair of straight lines, General Equation of Second Degree, Circle, Parabola, Ellipse and Hyperbola. Three Dimensional Geometry:
Three Dimensional Co-ordinates, Direction Cosines and Direction Ratios. Plane and Straight line. Linear Algebra: Determinant and properties of Determinants, Matrix, Types of matrices, Matrix
operations, Laws of matrix Algebra, Invertible matrices. Elementary row and Column operations and Row-reduced echelon matrices, Rank of matrices. System of Linear equations (homogeneous and
non-homogeneous) and their solutions. Vectors in R^n and C^n , Inner product, Norm and Distance in R^n and C^n . Vector Spaces, Subspace, Linear combination of vectors, Linear dependence and
independence of vectors. Basis and Dimension of vector spaces. Inner product spaces, Orthogonality and Orthonormal sets, Eigenvalues and Eigenvectors, Diagonalization, Cayley-Hamilton theorem
and its application.
Books recommended:
1. Analytical Geometry of Conic Section – J. M. Kar
2. An Elementary Treatise on Co-ordinate Geometry of three dimensions –J. T. Bell; Macmillan India Ltd
3. A Text Book on Co-ordinate Geometry – Rahman & Bhattacharjee; 12^th Edition, 2014; S. Chakravarty, Gonith Prokashan
4. Schaum’s Outline Series of the Theory and Problems on Linear Algebra – Seymour Lipschutz; 3^rd Edition; McGraw Hill Book Company
5. Linear Algebra with Applications – Anton
6. Linear Algebra – Dewan Abdul Quddus; Latest Edition; Titash Publications
7. Linear Algebra – Saikia
PHY-103 Electromagnetism and Modern Physics
3 Credits
Electrostatics, Electric Charge, Coulomb’s Law, Electric Field & Electric Potential, Electric Flux Density, Gauss’s Law, Capacitors and Dielectrics, Steady Current, Ohm’s Law, Magnetostatics,
Magnetic Field, Biot-Savart Law, Ampere’s Law, Electromagnetic Induction, Faraday’s Law, Lenz’s Law, Self Inductance & Mutual Inductance, Magnetic Properties of Matter, Permeability,
Susceptibility, Diamagnetism, Paramagnetism & Ferromagnetism, Maxwell’s Equations of Electromagnetic Waves, Waves in Conducting & Non-Conducting Media, Total Internal Reflection, Transmission
along Wave Guides. Special Theory of Relativity, Length Contraction & Time Dilation, Mass-Energy Relation, Photoelectric Effect, Quantum Theory, X-rays and X-ray Diffraction, Compton Effect,
Dual Nature of Matter & Radiation, Atomic Structure, Nuclear Dimensions, Electron Orbits, Atomic Spectra, Bohr Atom, Radioactive Decay, Half-Life, α, β and γ Rays, Isotopes, Nuclear Binding
Energy, Fundamentals of Solid State Physics, Lasers, Holography.
Books Recommended:
1. Fundamentals of Physics (Part II) – Halliday, Resnick & Walker
2. Modern Physics – Bernstein
3. Concepts of Modern Physics – Beiser
4. Electromagnetism and Modern Physics
5. Fundamental of Optics – Brizlal
6. Optics – Ghatak
7. Heat & Thermodynamics – Brizlal
8. University Physics with Modern Physics – Young
9. Essential University Physics Volume II – Wolfson
PHY-102 Physics Lab
1.5 Credits
Laboratory works based on PHY-101 & PHY-103.
ECE-101 Basic Electrical Engineering
3 Credits
Fundamental electrical concepts, Kirchhoff’s Laws, Equivalent resistance. Electrical circuits: Series circuits, parallel circuits, series-parallel networks. Network analysis: Source conversion,
Star/Delta conversion, Branch-current method, Mesh analysis, Nodal analysis. Network theorems: Superposition theorem, Thevenin’s theorem, Norton’s theorem. Capacitors. Magnetic circuits,
Inductors. Sinusoidal alternating waveforms: Definitions, phase relations, Instantaneous value, Average value, Effective (rms) value. Phasor algebra. Series, parallel and series-parallel AC
networks. Power: Apparent power, Reactive power, Power triangle, Power factor correction. Pulse waveforms and the R-C response. Three-phase systems. Transformers.
Books Recommended:
1. Introductory Circuit Analysis – Robert L. Boylestad
2. Introduction to Electrical Engineering – P. Ward
3. Electrical Technology (Volume 1) – B. L. Theraja, A. K. Theraja
4. Alternating Current Circuits – R. M. Kerchner, G. F. Corcoran
5. Electric Circuits – James W. Nilsson
ECE-102 Basic Electrical Engineering Lab
1.5 Credits
Laboratory works based on ECE 101.
ECN-101 Principles of Economics
3 Credits
Introduction: The nature, scope and methods of Economics; Economics and Engineering; some fundamental concepts commonly used in Economics. Micro Economics: The theory of demand and supply and
their elasticities. Market price determination; competition in theory and practice. Indifference curve technique. Marginal analysis. Factors of production and production functions. Scale of
production: internal and external economies and diseconomies. The short run and the long run. Fixed cost and variable cost. Macro Economics: National income analysis. Inflation and its effects.
Savings, Investments. The basis of trade and the terms of trade. Monetary policy, Fiscal policy, Trade policy with reference to Bangladesh. Planning in Bangladesh.
Books Recommended:
1. Economics – Samuelson & Nordhaus
2. Economics – Don Bush Fisher
HUM-103 Language Composition and Comprehension
3 Credits
This course aims to make students proficient in the composition and comprehension of the English used in formal write-ups such as articles, essays and treatises. Texts will be given for
comprehension; exercises in writing essays, paragraphs and reports will be carried out; and the construction of proper sentences expressing formal ideas will be taught. Sufficient exercises in
translation and re-translation will be included.
Books Recommended:
1. Exercise in Reading Comprehension – Tibbits
2. Essential English Grammar – Ramon Murphy
3. English Vocabulary in use – Stuart
4. English Vocabulary in use – McCarthy
5. Intermediate English Grammar – Ramon Murphy
6. Paragraph in English – Tibbits
CSE-201 Discrete Mathematics
3 Credits
Mathematical Models and Reasoning: Propositions, Predicates and Quantifiers, Logical operators, Logical inference, Methods of proof. Sets: Set theory, Relations between sets, Operations on sets.
Induction, The natural numbers, Set operations on Σ*. Binary Relations: Binary relations and Digraphs, Graph theory, Trees, Properties of relations, Composition of relations, Closure operations
on relations, Order relations, Equivalence relations and partitions. Functions: Basic properties, Special classes of functions. Counting and Algorithm Analysis: Techniques, Asymptotic behavior of
functions, Recurrence systems, Analysis of algorithms. Infinite sets: Finite and Infinite sets, Countable and uncountable sets, Comparison of cardinal numbers. Algebras: Structure, Varieties of
algebras, Homomorphism, Congruence relations.
Books Recommended:
1. Schaum’s Outline of Theory and Problems of Discrete Mathematics- Seymour Lipschutz
2. Discrete Mathematics and its Applications – Kenneth H. Rosen
3. Discrete Mathematical Structures- Bernard Kolman, Robert C. Busby, Sharon Cutler Ross
4. Concrete Mathematics – Graham, Knuth & Patashnik
CSE-211 Object Oriented Programming II
3 Credits
String, StringBuffer and StringBuilder classes, Files and Streams, Java Database Connectivity: Statement and PreparedStatement interfaces, CRUD operations using Statement and PreparedStatement,
JDBC Transaction Management, Object Relational Mapping, Java Persistence API: Introduction, Entity class annotations, EntityManager interface, EntityTransaction interface, CRUD
operations using JPA, Primary Key Generation Strategies, Entity Inheritance, Entity Mapping, Java Persistence Query Language: Select, Update, Delete and Named Queries, Servlets: Servlet
interface, GenericServlet and HttpServlet, Servlet lifecycle, Java Server Pages: JSP life cycle methods, Tags in JSP, JSP Implicit Objects, JSP Standard Tag Library, JavaServer Faces:
Introduction, JSF Architecture and Application Development, JSF Page Navigation and Managed Beans, JSF Core Tag Library, JSF Event Handling Model, JSF Validation Model, JSF Data Conversion Model,
JPA-JSF Integration, Java API, Utility classes, 2D Graphics, GUI, Swing, Events.
Books Recommended:
1. Introduction to Programming in Java – Robert Sedgewick & Kevin Wayne
2. An Introduction to Object-Oriented Programming – Timothy Budd
CSE-212 Object Oriented Programming II Lab
1.5 Credits
Laboratory works based on CSE 211.
CSE-205 Data Structures
3 Credits
Concepts and examples: Introduction to Data structures. Elementary data structures: Arrays, records, pointer. Arrays: Type, memory representation and operations with arrays. Linked lists:
Representation, Types and operations with linked lists. Stacks and Queues: Implementations, operations with stacks and queues. Graphs: Implementations, operations with graph. Trees:
Representations, Types, operations with trees. Memory Management: Uniform size records, diverse size records. Sorting: Internal sorting, external sorting. Searching : List searching, tree
searching. Hashing: Hashing functions, collision resolution.
Books Recommended:
1. Fundamental of Data Structures – Horowitz & S. Sahni
2. Data Structures – Reingold
3. Data Structures, Schaum’s Outline Series – Lipschutz
4. Data Structures & Programming Design – Robert L. Kruse
CSE-206 Data Structures Lab
1.5 Credits
Laboratory works based on CSE 205.
MTH-205 Vector Analysis and Complex Variable
3 Credits
Vector Analysis: Vector Algebra – Vectors in three dimensional space, Algebra of Vectors, Rectangular Components, Addition, Subtraction and Scalar multiplication, Scalar and Vector product of two
vectors. Scalar and Vector triple product. Application in Geometry. Vector Calculus – Limit, Continuity and Differentiability of Scalar and Vector point functions. Scalar and Vector field.
Gradient, Divergence and Curl of point functions. Vector Integration, Line, Surface and Volume Integrals. Green’s theorem, Gauss’s theorem, Stokes’ theorem. Complex Variable: Field of Complex
numbers, De Moivre’s theorem and its applications. Limit and Continuity of complex functions, Derivatives, Analytic functions, Harmonic functions, Cauchy-Riemann equations. Line Integrals of
Complex functions. Cauchy’s Integral theorem and Cauchy’s Integral formula. Liouville’s theorem, Taylor’s and Laurent’s theorems, Singularity, Residue, Cauchy’s Residue theorem. Contour
Integration. Bilinear transformation. Mapping of Elementary functions. Conformal mapping.
Book Recommended:
1. Schaum’s Outline Series of the Theory and Problems on Vector Analysis – Murray R. Spiegel; SI (Metric Edition); McGraw Hill Book Company
2. Schaum’s Outline Series of the Theory and Problems on Complex Variable – Murray R. Spiegel; 2^nd Edition; McGraw Hill Book Company
3. Functions of a Complex Variable – Dewan Abdul Quddus; Latest Edition; Titash Publications
ECE-201 Electronic Devices & Circuits
3 Credits
Introduction to semiconductors, Junction diode characteristics & diode applications, Bipolar Junction Transistor characteristics, Transistor biasing, Small-signal low-frequency h-parameter model
& hybrid-pi model, AC analysis of transistors, Frequency response of transistors, Operational amplifiers, Linear applications of operational amplifiers, DC performance of operational amplifiers,
AC performance of operational amplifiers, Introduction to JFET, MOSFET, PMOS, NMOS & CMOS, Introduction to SCR, TRIAC, DIAC & UJT, Active filters, Introduction to IC fabrication techniques & VLSI.
Book Recommended:
1. Electronic Devices & Circuits – Jacob Millman & Christos C. Halkias; McGraw-Hill
2. Electronic Devices and Circuits – S. Salivahanan, N. S. Kumar and A. Vallavaraj; Tata McGraw-Hill
3. Electronics Fundamentals: Circuits, Devices, and Applications – Ronald J. Tocci
ECE-202 Electronic Devices & Circuits Lab
1.5 Credits
Laboratory works based on ECE 201.
ACN-203 Cost and Management Accounting
3 Credits
Introduction: Cost Accounting: Definition, Limitations of Financial Accounting, Importance, Objectives, Functions and Advantages of Cost Accounting, Financial Accounting vs. Cost Accounting vs.
Managerial Accounting, Techniques and Methods of Cost Accounting, International Cost Accounting Systems. Managerial Accounting: Definition, Evolution, Objectives, Scope, Importance, Functions,
Techniques, Differences among Managerial Accounting, Cost Accounting and Financial Accounting, Management Accounting for Planning and Control. Cost Classification: Cost Concepts, Cost Terms,
Cost Expenses and Losses, Cost Center, Cost Unit, Classification of Costs, Cost Accounting Cycle, Cost Statement, The Flow of Costs in a Manufacturing Enterprise, Reporting and Results of
Operation. Materials: Indirect & Direct Material, Procurement of Materials, Purchase Control, Purchase Department, Purchase Quantity, Fixed Order, Economic Order Quantity, Stock-out Cost,
Re-order Level, Purchase Order, Receipts and Inspection, Classification and Codification of Materials, Stock Verification, ABC Method of Store Control, Pricing of Materials Issued, LIFO, FIFO
and Average Pricing, Inventory Control. Labor: Labor Cost Control, Time Recording Systems, Manual and Mechanical Methods, Time Booking, Necessary Documents Maintained for Labor Control, Methods
of Remuneration, Treatment of Idle and Over Time. Overhead: Definition, Classifications of Overheads, Methods of Overhead Distribution, Distribution of Factory Overhead to Service Departments,
Redistribution of Service Department Cost, Uses of Predetermined Overhead Rates, Treatment of Over- and Under-absorbed Overhead, Treatment of Administration Overhead, Selling and Distribution
Overheads, Calculation of Machine Hour Rate. Job Order Costing: Features, Advantages, Limitations, Accounting for Materials, Labor and Factory Overhead in Job Costing, Accounting for Jobs
Completed and Products Sold, Spoilage, Defective Work and Scrap in Job Costing Systems, The Job Cost Sheet, Job Order Costing in Service Companies, Nature and Uses of Batch Costing, Determination
of Economic Batch Quantity. Contract Costing: Introduction, Procedures, Types of Contract, Retention Money, Profit or Loss on Incomplete Contracts, Cost-plus Contract Systems. Operation Costing:
Nature, Procedures, Costing for Transport and Hospital. Cost Behavior: Analysis of Cost Behavior, Measurement of Cost Behavior, Methods of Measuring Cost Functions, Analysis of Mixed Costs,
High and Low Point Method, Scattergraph Method, Least Squares Method, Use of Judgment in Cost Analysis. Cost-Volume-Profit Relationship: Profit Planning, Break-Even Point, Break-Even Chart,
Changes in Underlying Factors, Profit-Volume Graph, Income Tax Effect on Break-Even Point, Break-Even Point in Decision Making, Risk and Profit Analysis, Limitations.
Books Recommended:
1. Cost Accounting, A Managerial Emphasis: C. T. Horngren et al.
2. Managerial Accounting: Ray H. Garrison
3. Management Accounting: R. N. Anthony
4. Management Accounting: R. S. Kaplan
5. Cost Accounting: Usry & Hammer
6. Cost Accounting: G. Rayburn
7. Cost Accounting: S. P. Iyengar
8. Accounting Principles – Kieso
9. Financial & Managerial Accounting- Needles
10. Theory and Practice of Costing- Basu & Das
CSE-207 Algorithms
3 Credits
Analysis of Algorithm: Asymptotic analysis: Recurrences, Substitution method, Recurrence tree method, Master method. Divide-and-Conquer: Binary search, Powering a number, Fibonacci numbers,
Matrix Multiplication, Strassen’s Algorithm for Matrix Multiplication. Sorting: Insertion sort, Merge sort, Quick sort, Randomized quick sort, Decision tree, Counting sort, Radix sort. Order
Statistics: Randomized divide and conquer, worst case linear time order statistics. Graph: Representation, Traversing a graph, Topological sorting, Connected Components. Dynamic Programming:
Elements of DP (Optimal substructure, Overlapping subproblem), Longest Common Subsequence finding problem, Matrix Chain Multiplication. Greedy Method: Greedy choice property, elements of greedy
strategy, Activity selection problem, Minimum spanning tree (Prim’s algorithm, Kruskal’s algorithm), Huffman coding. Shortest Path Algorithms: Dynamic and Greedy properties, Dijkstra’s algorithm
with its correctness and analysis, Bellman-Ford algorithm, All-pairs shortest path: Floyd-Warshall algorithm, Johnson’s algorithm. Network flow: Maximum flow, Max-flow min-cut, Bipartite matching.
Backtracking/Branch-and-Bound: Permutation, Combination, 8-queen problem, 15-puzzle problem. Geometric algorithms: Segment-segment intersection, Convex hull, Closest pair problem.
NP-Completeness: NP-hard and NP-complete problems.
Books Recommended:
1. Introduction to Algorithms – Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest & Clifford Stein
2. Algorithms – Robert Sedgewick and Kevin Wayne
3. The Art of Computer Programming, Volume 1: Fundamental Algorithms – Donald E. Knuth; Addison-Wesley Professional; 3rd edition, 1997
CSE-208 Algorithms Lab
1.5 Credits
Using different well known algorithms to solve the problem of Matrix-Chain Multiplication, Longest Common Subsequence, Huffman codes generation, Permutation, Combination, 8-queen problem,
15-puzzle, BFS, DFS, flood fill using DFS, Topological sorting, Strongly connected component, finding minimum spanning tree, finding shortest path (Dijkstra’s algorithm and Bellman-Ford’s
algorithm), Flow networks and maximum bipartite matching, Finding the convex hull, Closest pair.
CSE-209 Numerical Methods
3 Credits
Errors and Accuracy. Iterative process: Solution of f(x) = 0, existence and convergence of a root, convergence of the iterative method, geometrical representation, Aitken’s Δ² process of
acceleration. System of Linear Equations. Solution of Non-Linear equations. Finite Differences and Interpolation. Finite Difference Interpolation. Numerical Differentiation. Numerical
Integration. Differential Equations.
Books Recommended:
1. Introductory methods of Numerical Analysis – S. S. Sastry
2. Numerical Methods for Engineers –Steven C. Chapra
3. Numerical Mathematical Analysis – James B. Scarborough
CSE-210 Numerical Methods Lab
1.5 Credits
Laboratory works based on CSE 209.
CSE-231 Digital Logic Design
3 Credits
Binary Logic. Logic Gates: IC digital logic families, positive and negative logic. Boolean Algebra. Simplification of Boolean Functions: Karnaugh map method, SOP and POS simplification, NAND,
NOR, wired-AND, wired-OR implementation, nondegenerate forms, Don’t care conditions, Tabulation method – prime implicant chart. Combinational Logic: Arithmetic circuits – half and full adders and
subtractors, multilevel NAND and NOR circuits, Ex-OR and Equivalence functions. Combinational Logic in MSI and LSI: Binary parallel adder, decimal and BCD adders, Comparators, Decoders and
Encoders, Demultiplexors and Multiplexors. Sequential Logic. Registers and Counters. Synchronous Sequential Circuits. Asynchronous Sequential Circuits. Digital IC terminology, TTL logic family,
TTL series characteristics, open-collector TTL, tristate TTL, ECL family, MOS digital ICs, MOSFET, CMOS characteristics, CMOS tristate logic, TTL-CMOS-TTL interfacing, memory terminology, general
memory operation, semiconductor memory technologies, different types of ROMs, semiconductor RAMs, static and dynamic RAMs, magnetic bubble memory, CCD memory, FPGA Concept.
Books Recommended:
1. Digital Logic & Computer Design-M. Morris Mano
2. Digital Fundamentals- Floyd
3. Modern Digital Electronics-R. P. Jain
4. Digital Systems- R. J. Tocci
5. Digital Electronics- Green
CSE-232 Digital Logic Design Lab
1.5 Credits
Laboratory works based on CSE 231.
MTH-207 Differential Equations, Laplace Transforms and Fourier Analysis
3 Credits
Differential Equation: Formation of Differential equation, Degree and Order of differential equation, Complete and Particular solution. Ordinary differential equation – Solution of ordinary
differential equation of first order and first degree (special forms). Linear differential equation with constant coefficients. Homogeneous linear differential equation. Solution of differential
equation by the method of Variation of parameters. Solution of linear differential equations in series by Frobenius method. Bessel’s function and Legendre’s Polynomials and their properties.
Simultaneous equation of the form dx/P=dy/Q=dz/R. Partial differential equation – Lagrange’s linear equation, Equation of linear and non-linear first order standard forms, Charpit’s method.
Laplace Transforms: Definition, Laplace transforms of some elementary functions, sufficient conditions for existence of Laplace transforms, Inverse Laplace transforms, Laplace transforms of
derivatives, Unit step function, Periodic function, Some special theorems on Laplace transforms, Partial fraction, Solution of differential equations by Laplace transforms, Evaluation of Improper
Integrals. Fourier Analysis: Fourier series (Real and complex form). Finite transforms, Fourier Integrals, Fourier transforms and application in solving boundary value problems.
Books Recommended:
1. Differential Equations – H. T. H. Piaggio; 1^st Indian Edition, 1985, S. K. Jain for CBS Publishers
2. A Text Book on Integral Calculus with Differential Equations – Mohammad, Bhattacharjee & Latif, 4^th Edition, 2010; S. Chakravarty, Gonith Prokashon
3. Schaum’s Outline Series of the Theory and Problems on Laplace Transforms – Murray R. Spiegel; Revised Edition, 2003; McGraw Hill Book Company
4. Differential Equation – Md. Abu Eusuf; Latest Edition; Abdullah Al Mashud Publisher
CSE-200 Project Work
2 Credits
Projects focusing on an object-oriented programming approach and using standard algorithms are preferable. Every project should maintain a goal so that it can be used as a useful tool in the IT
field. Innovative project ideas that require different types of scripting/programming languages or programming tools can also be accepted with the consent of the corresponding project supervisor.
CSE-321 Database Systems
3 Credits
Introduction: Purpose of Database Systems, Data Abstraction, Data Models, Instances and Schemes, Data Independence, Data Definition Language, Data Manipulation Language, Database Manager,
Database administrator, Database Users, Overall System Structure, Advantages and Disadvantages of a Database System. Data Mining and Analysis, Database Architecture, History of Database Systems.
Entity-Relationship Model: Entities and Entity Sets, Relationships and Relationship Sets, Attributes, Composite and Multivalued Attributes, Mapping Constraints, Keys, Entity-Relationship Diagram,
Reduction of E-R Diagrams to Tables, Generalization, Attribute Inheritance, Aggregation, Alternative E-R Notations, Design of an E-R Database Scheme.
Relational Model: Structure of Relational Database, Fundamental Relational Algebra Operations, The Tuple Relational Calculus, The Domain Relational Calculus, Modifying the Database. Relational
Commercial Language: SQL, Basic structure of SQL Queries, Query-by-Example, Quel, Nested Subqueries, Complex queries, Integrity Constraints, Authorization, Dynamic SQL, Recursive Queries.
Relational Database Design: Pitfalls in Relational Database Design, Functional Dependency Theory, Normalization using Functional Dependencies, Normalization using Multivalued Dependencies,
Normalization using join Dependencies, Database Design Process. File And System Structure: Overall System Structure, Physical Storage Media, File Organization, RAID, Organization of Records into
Blocks, Sequential Files, Mapping Relational Data to Files, Data Dictionary Storage, Buffer Management. Indexing And Hashing: Basic Concepts, Ordered Indices, B+ -Tree Index Files, B-Tree Index
Files, Static and Dynamic Hash Function, Comparison of Indexing and Hashing, Index Definition in SQL, Multiple Key Access.
Query Processing and Optimization: Query Interpretation, Equivalence of Expressions, Estimation of Query-Processing Cost, Estimation of Costs of Access Using Indices, Join Strategies, Join
Strategies for parallel Processing, Structure of the query Optimizer, Transformation of Relational Expression. Concurrency Control: Schedules, Testing for Serializability, Lock-Based Protocols,
Timestamp-Based Protocols, Validation Techniques, Multiple Granularity, Multiversion Schemes, Insert and Delete Operations, Deadlock Handling. Distributed Database: Structure of Distributed
Databases, Trade-offs in Distributing the Database, Design of Distributed Databases, Transparency and Autonomy, Distributed Query Processing, Recovery in Distributed Systems, Commit Protocols,
Concurrency Control. Data Mining and Information Retrieval: Data analysis and OLAP, Data Warehouse, Data Mining, Relevance Ranking Using Terms, Relevance Ranking Using Hyperlink, Synonyms,
Homonyms, Ontology, Indexing of Document, Measuring Retrieval Efficiencies, Information Retrieval and Structured Data.
Books Recommended:
1. Database System Concepts – Abraham Silberschatz, Henry F. Korth, S. Sudarshan (5th edition)
2. Fundamentals of Database Systems – Ramez Elmasri, Shamkant B. Navathe, Benjamin/Cummings, 1994
3. Database: Principles, Programming, Performance – Patrick O'Neil, Morgan Kaufmann, 1994
4. A First Course in Database Systems – Jeffrey D. Ullman, Jennifer Widom, Prentice Hall, 1997
5. Database Management Systems – Raghu Ramakrishnan, McGraw-Hill, 1996
CSE-322 Database Systems Lab
1.5 Credits
Introduction: What is a database, MySQL, Oracle, SQL, Datatypes, SQL/PLSQL, Oracle Software Installation, User Types, Creating Users, Granting Privileges. Basic Parts of Speech in SQL: Creating a
Newspaper Table, Select Command (WHERE, ORDER BY), Creating Views, Getting Text Information and Changing It, Concatenation, Cut and Paste Strings (RPAD, LPAD, TRIM, LTRIM, RTRIM, LOWER, UPPER,
INITCAP, LENGTH, SUBSTR, INSTR, SOUNDEX). Playing the Numbers: Addition, Subtraction, Multiplication, Division, NVL, ABS, FLOOR, MOD, POWER, SQRT, EXP, LN, LOG, ROUND, AVG, MAX, MIN, COUNT, SUM,
DISTINCT, Subqueries for MAX and MIN. Grouping Things Together: GROUP BY, HAVING, ORDER BY, Views, Renaming Columns with Aliases. When One Query Depends upon Another: UNION, INTERSECT, MINUS, NOT
IN, NOT EXISTS. Changing Data: INSERT, UPDATE, MERGE, DELETE, ROLLBACK, AUTOCOMMIT, COMMIT, SAVEPOINTS, Multi-table INSERT, DELETE, UPDATE, MERGE. Creating and Altering Tables and Views: Altering
Tables, Dropping Tables, Creating Views, Creating a Table from a Table. By What Authority: Creating Users, Granting Privileges, Password Management.
An Introduction to PL/SQL: Implement a few problems using PL/SQL (e.g., prime numbers, factorials, calculating the area of a circle). An Introduction to Triggers and Procedures: Implement a few
problems using triggers and procedures. An Introduction to Indexing: Implement indexing on a large database and observe the difference between indexed and non-indexed databases.
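The SELECT (WHERE, ORDER BY) and GROUP BY/HAVING exercises above can be sketched with Python's built-in sqlite3 module, used here only as a lightweight stand-in for the Oracle/MySQL environment the lab targets; the NEWSPAPER table and its rows are hypothetical sample data:

```python
import sqlite3

# In-memory SQLite database standing in for the lab's Oracle/MySQL setup;
# the newspaper table and its rows are made-up example data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE newspaper (feature TEXT, section TEXT, page INTEGER)")
cur.executemany(
    "INSERT INTO newspaper VALUES (?, ?, ?)",
    [("National News", "A", 1), ("Sports", "D", 1),
     ("Editorials", "A", 12), ("Weather", "C", 2)],
)

# SELECT with WHERE and ORDER BY
cur.execute("SELECT feature FROM newspaper WHERE section = 'A' ORDER BY page")
section_a = [row[0] for row in cur.fetchall()]

# Aggregation with GROUP BY / HAVING
cur.execute("SELECT section, COUNT(*) FROM newspaper "
            "GROUP BY section HAVING COUNT(*) >= 1 ORDER BY section")
per_section = cur.fetchall()
print(section_a, per_section)
```

The same statements run essentially unchanged on Oracle or MySQL once a connection to those servers replaces the in-memory database.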
CSE-331 Computer Architecture
3 Credits
Introduction to Computer Architecture: Overview and history; Cost factor; Performance metrics and evaluating computer designs. Instruction set design: Von Neumann machine cycle, Memory
addressing, Classifying instruction set architectures, RISC versus CISC, Microprogrammed vs. hardwired control unit. Memory System Design: Cache memory; Basic cache structure and design; Fully
associative, direct, and set associative mapping; Analyzing cache effectiveness; Replacement policies; Writing to a cache; Multiple caches; Upgrading a cache; Main Memory; Virtual memory
structure, and design; Paging; Replacement strategies. Pipelining: General considerations; Comparison of pipelined and nonpipelined computers; Instruction and arithmetic pipelines, Structural,
Data and Branch hazards. Multiprocessors and Multi-core Computers: SISD, SIMD, and MIMD architectures; Centralized and distributed shared memory- architectures; Multi-core Processor architecture.
Input/output Devices: Performance measures, Types of I/O devices, Buses and interface to CPU, RAID. Parallel Processing.
Books Recommended:
1. Computer Architecture and Organization – John P. Hayes, 3rd Edition, McGraw-Hill
2. Computer Organization and Design: The Hardware/Software Interface – David A. Patterson and John L. Hennessy
CSE-323 Web Engineering
2 Credits
Introduction to Web Engineering, Requirements Engineering and Modeling Web Applications, Web Application Architectures, Technologies and Tools for Web Applications, Testing and Maintenance of Web
Applications, Usability and Performance of Web Applications, Security of Web Applications, The Semantic Web.
Books Recommended:
1. Web Engineering: The Discipline of Systematic Development of Web Applications – Editors: Gerti Kappel, Birgit Proll, Siegfried Reich, Werner Retschitzegger
2. Web Engineering: A Practitioner’s Approach- Roger Pressman, David Lowe
3. MIT Open Course Materials for the course Software Engineering for Web Applications
4. MIT Open Course Materials for the course Database, Internet, and Systems Integration Technologies
CSE-324 Web Engineering Lab
1.5 Credits
Understanding the Web Application: Web Engineering introduces a structured methodology utilized in software engineering to Web development projects. The course addresses the concepts, methods,
technologies, and techniques of developing Web Sites that collect, organize and expose information resources. Topics covered include requirements engineering for Web applications, design methods
and technologies, interface design, usability of web applications, accessibility, testing, metrics, operation and maintenance of Web applications, security and project management. Specific
technologies covered in this course include client-side (XHTML, JavaScript and CSS) and server-side (Perl and PHP). Using the described concepts students should be able to understand the Web
engineering concepts behind the frameworks of Joomla, Drupal, WordPress. Server-side technology: LAMP, Web application frameworks, (example: Silverlight, Adobe Flex), Web 2.0 and Web APIs.
Front-end technology: HTML, XHTML, XML. CSS Styling, layout, selector, Document object model and JavaScript. Client-Programming: Web APIs with JavaScript (example: Google AJAX API). MVC:
Understanding model, view and controller model. Understanding Web APIs: REST, XML, JSON, RSS Parsing. JavaScript Exercise: The goal of this assignment is to allow you to explore and use as many
of JavaScript’s objects, methods and properties as possible in a small assignment. Some functions must be written from scratch. Other functions, appropriately attributed, may be downloaded from
the web and used as a part of the system or as the basis for your own functions. PHP Exercise: Build a set of PHP scripts that perform some dynamic server-side functionality. Understanding
plug-ins: Develop a Firefox extension.
MTH-301 Statistics and Probability
2 Credits
Frequency distribution; mean, median, mode and other measures of central tendency, Standard deviation and other measures of dispersion, Moments, skewness and kurtosis, Elementary probability
theory and discrete probability distributions, e.g. binomial, Poisson and negative binomial, Continuous probability distributions, e.g. normal and exponential, Characteristics of
distributions, Hypothesis testing and regression analysis.
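The measures of central tendency and dispersion, and the Poisson distribution, can be illustrated with Python's standard statistics module; the marks sample and the Poisson parameters are made-up examples:

```python
import statistics
from math import exp, factorial

# Hypothetical sample of exam marks
marks = [55, 60, 60, 65, 70, 75, 80, 85, 90, 60]

mean = statistics.mean(marks)      # arithmetic mean (central tendency)
median = statistics.median(marks)  # middle value of the sorted sample
mode = statistics.mode(marks)      # most frequent value
stdev = statistics.pstdev(marks)   # population standard deviation (dispersion)

# Poisson pmf P(X = k) = e^(-lam) * lam^k / k!  for lam = 3, k = 2
lam, k = 3, 2
poisson_pk = exp(-lam) * lam ** k / factorial(k)

print(mean, median, mode, round(stdev, 3), round(poisson_pk, 4))
```

For hypothesis testing and regression, the same sample statistics feed directly into the test statistics covered later in the course.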
Books Recommended:
1. Introduction to Mathematical Statistics – Hogg
2. Probability and Statistics for Scientists and Engineers – Walpole
CSE-309 Cyber Crime and Intellectual Property Law
3 Credits
Introduction: the problem of computer crime, what is cybercrime? Cybercrime: the invisible threat, Information and other assets in need of assurance, Computer-focused and computer-assisted
crimes, the hacker, hacking tactics, the victim, Data: surveys, network flow and IPS/IDS, Data: honey pots and incidents, Cyber terrorism, Cyber laws and regulations, Investigating cybercrime,
Preventing cybercrime and Future opportunities for managing cybercrime. Intellectual Property: Introduction, Philosophical Perspectives and Overview of Intellectual Property: Trade Secret;
Patent; Copyright; Trademark/Trade Dress; Problem; Copyright and patent; need for intellectual property laws, Copyright for software, software-copyright cases, Databases, the focus shifts from
copyright to patent, the nature of patent law, some software-patent cases. Film and video, Pornography meets the internet, differences between downloads and publications, censoring videos.
Books Recommended:
1. Understanding and Managing Cybercrime-McQuade III, Samuel C. 2006. ISBN 0-205-43973-X
2. The Transformation of Crime in the Information Age –Wall, David. 2006. ISBN 0-745-62736-6
3. Cyber Crime and Digital Evidence: Materials and Cases –Thomas K. Clancy, First Edition 2011, LexisNexis, ISBN: 9781422494080
4. Cybercrime: investigating high-technology computer crime – Moore, Robert, 2011 (2nd Ed.), Elsevier
5. Cybercrime: The Investigation, Prosecution and Defense of a Computer-related Crime –Ralph D. Clifford, August 1, 2011
6. Intellectual Property in the New Technological Age – Merges, Menell & Lemley, 2011 (6th Edition)
7. Intellectual property: Law & the information society- James Boyle, Jennifer Jenkins, First Edition, 2014.
8. International Intellectual Property law- Jonathan Franklin, 2013
CSE-310 Technical Report Writing and Presentation
1.5 Credits
Issues of technical writing and effective oral presentation in Computer Science and Engineering; Writing styles of definitions, propositions, theorems and proofs; Preparation of reports, research
papers, theses and books: abstract, preface, contents, bibliography and index; Writing of book reviews and referee reports; Writing tools: LATEX; Diagram drawing software; presentation tools.
Books Recommended:
1. Technical Report Writing- Daniel G. Riordan, Houghton Mifflin Company, 8th edition, 2001
CSE-326 Engineering Drawing
1 Credit
Introduction; Instruments and their uses; First and third angle projection; Orthographic drawing; Sectional views and conventional practices; Auxiliary views; Isometric views; Missing lines and views.
Books Recommended:
1. Engineering Drawing & Design– David A. Madsen, David P. Madsen
CSE-300 Software Development
2 Credits
Students will work in groups or individually to produce high quality software in different languages. Students will write structured programs and use proper documentation. Advanced programming
techniques in Mobile Application.
Books Recommended:
1. Android Application Development Cookbook- Wei-Meng Lee
2. The Complete Android Guide- Kevin Purdy
CSE-303 Operating Systems
3 Credits
Introduction: Operating Systems Concepts, Computer System Structures, Operating System Structures, Operating System Operations, Protection and Security, Special-Purpose Systems. Fundamentals of
OS: OS services and components, multitasking, multiprogramming, time sharing, buffering, spooling. Process Management: Process Concept, Process Scheduling, Process State, Process Management,
Interprocess Communication, interaction between processes and OS, Communication in Client-Server Systems, Threading, Multithreading, Process Synchronization. Concurrency control: Concurrency and
race conditions, mutual exclusion requirements, semaphores, monitors, classical IPC problems and solutions, Deadlocks: characterization, detection, recovery, avoidance and prevention. Memory
Management: Memory partitioning, Swapping, Paging, Segmentation, Virtual memory – Concepts, Overlays, Demand Paging, Performance of demand paging, Page replacement algorithm, Allocation
algorithms. Storage Management: Principles of I/O hardware, Principles of I/O software, Secondary storage structure, Disk structure, Disk scheduling, Disk Management, Swap-space Management, Disk
reliability, Stable storage implementation. File Concept: File support, Access methods, Allocation methods, Directory systems, File Protection, Free Space Management. Protection & Security: Goals
of protection, Domain of protection, Access matrix, Implementation of access matrix, Revocation of access rights, The security problem, Authentication, One-time passwords, Program threats, System
threats, Threat monitoring, Encryption, Computer-security classification. Distributed Systems: Types of Distributed Operating Systems, Communication Protocols, Distributed File Systems, Naming and
Transparency, Remote File Access, Stateful Versus Stateless Service, File Replication. Case Studies: Study of a representative Operating System.
Books Recommended:
1. Operating System Concepts – Silberschatz & Galvin, Wiley, 2000 (7th Edition)
2. Operating Systems – Achyut S. Godbole, Tata McGraw-Hill (2nd Edition)
3. Understanding Operating Systems – Flynn & McHoes, Thomson (4th Edition)
4. Operating Systems Design & Implementation – Andrew S. Tanenbaum, Albert S. Woodhull, Pearson
5. Modern Operating Systems – Andrew S. Tanenbaum
CSE-304 Operating Systems Lab
1.5 Credits
Thread programming: Creating threads and thread synchronization. Process Programming: The Process ID, Running a New Process, Terminating a Process, Waiting for Terminated Child Processes, Users
and Groups, Sessions and Process Groups. Concurrent Programming: Using fork, exec for multi-task programs. File Operations: File sharing across processes, System lock table, Permission and file
locking, Mapping Files into Memory, Synchronized, Synchronous, and Asynchronous Operations, I/O Schedulers and I/O Performance.
Communicating across processes: Using different signals, Pipes, Message queue, Semaphore, Semaphore arithmetic and Shared memory.
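The thread-creation and synchronization exercise above can be sketched as follows; the worker function and iteration counts are illustrative, and the Lock is what makes the shared update race-free:

```python
import threading

# Several threads increment a shared counter; the Lock serializes
# the read-modify-write so no updates are lost.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for all workers to finish

print(counter)
```

Removing the lock turns the increment into a classic race condition, the same hazard the semaphore and shared-memory exercises address for separate processes.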
Books Recommended:
1. The ‘C’ Odyssey UNIX-The Open, Boundless C – Meeta Gandhi, Tilak Shetty, Rajiv Shah.
2. Beginning Linux Programming – Neil Matthew and Richard Stones
3. Linux System Programming – Robert Love
CSE-315 Data Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
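The PCM quantization step above can be illustrated with a small uniform quantizer sketch; the 8-level (3-bit) codebook and the input range are assumptions for the example:

```python
def quantize(sample, levels=8, vmin=-1.0, vmax=1.0):
    """Uniform quantizer: map an analog sample in [vmin, vmax) to one of
    `levels` PCM codes (3 bits for 8 levels) and give the decoder output."""
    step = (vmax - vmin) / levels
    code = int((sample - vmin) / step)
    code = max(0, min(levels - 1, code))        # clip to the valid code range
    reconstructed = vmin + (code + 0.5) * step  # mid-point of the chosen bin
    return code, reconstructed

code, x_hat = quantize(0.3)
print(code, x_hat)   # PCM code and the quantized (reconstructed) value
```

The difference between the sample and its reconstruction is the quantization error, whose variance sets the signal-to-quantization-noise ratio of the PCM system.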
Books Recommended:
1. Introduction to Data Communications-Eugene Blanchard
2. Data Communication Principles – Ahmad, Aftab
3. Data Communication & Networking – S. Bagad, I. A. Dhotre
4. Data Communications and Networking- Behrouz A. Forouzan
CSE-313 Microprocessors and Microcontroller
3 Credits
Introduction to 8-bit, 16-bit, and 32-bit microprocessors: architecture, addressing modes, instruction set, interrupts, multi-tasking and virtual memory; Memory interface; Bus interface;
Arithmetic co-processor; Microcontrollers; Integrating microprocessor with interfacing chips.
Books Recommended:
1. Microprocessors & Interfacing- Douglas V. Hall
CSE-314 Microprocessors and Microcontroller Lab
1.5 Credits
Laboratory works based on CSE-313.
CSE-337 System Analysis and Software Engineering
3 Credits
Concepts of Software Engineering; Software Engineering paradigms; Different phases of software system development; Different types of information, qualities of information. Project Management
Concepts; Software process and project Metrics; Software Project Planning; Risk Analysis and management; Project Scheduling and Tracking. Analysis Concepts and principles: requirement analysis,
Analysis modeling, data modeling. Design concepts and principles, Architectural design, User Interface design, Object Oriented software development and design: Iterative Development and the
Unified Process. Sequential waterfall life cycles, Inception. Use case model for requirement writing, Elaboration using System Sequence Diagram, Domain Model. Visualizing concept classes. UML
diagrams, Interaction and Collaboration Diagram for designing Software. Designing Objects with responsibilities. GRASP patterns with General Principles in assigning responsibilities: Information
expert, Creator, Low Coupling and High Cohesion, Creating design class diagrams and mapping design to codes. Advanced GRASP patterns: Polymorphism, Pure Fabrication, Indirection, Project
Variation. GoF Design Patterns: Adapter, Factory, Singleton, Strategy, Composite, Facade, and Observer. Software Testing: White Box and Black Box testing. Basis Path Testing. Testing for
specialized environment. Software testing strategies: Unit Testing, Integration Testing, Validation Testing, System Testing, Art of debugging. Analysis of System Maintenance and upgrading:
Software repair, downtime, errors and faults, specification and correction, Maintenance cost models, documentation. Software Quality Assurance, Quality factors. Software quality measures. Cost
impact of Software defects. Concepts of Software reliability, availability and safety. Function based metrics and bang metrics. Metrics for analysis and design model. Metrics for source code,
testing and maintenance.
Books Recommended:
1. Software Engineering-Ian Sommerville, Addison Wesley, 6th edition, 2000.
2. Software Engineering: A Practitioner's Approach – Roger S. Pressman, McGraw-Hill, 6th edition, 2004.
3. Systems Analysis and Design of Real-Time Management Information Systems- Robert J. Thierauf, Prentice Hall, 1975.
4. Analysis and Design of Information Systems – V. Rajaraman, Prentice-Hall of India Pvt. Ltd., 2004.
CSE-338 System Analysis and Software Engineering Lab
1.5 Credits
The Software Engineering lab work is designed to give hands-on experience of the architectural design, documentation and testing of software, so that students can develop the software from the
documents alone.
Step1 (Requirement Engineering): Choose a company/institute/client for which software will be developed (make sure that they will provide required information whenever necessary). Follow the
steps for eliciting requirements and generate use-case diagram. Also analyze the sufficiency of the requirement engineering outcome for steps to follow.
Step 2 (Analysis model to Architectural and Component level design): Generate Activity diagram, Data flow diagram (DFD), Class diagram, State diagram, Sequence diagram and follow other relevant
steps for creating complete architectural and component level design of the target software.
Step 3 (User Interface design, Design evaluation, Testing strategies and Testing Tactics): Perform the user interface design with the help of swimlane diagram. Carry out the design evaluation
steps. Generate all test cases for complete checking of the software using black box, white box testing concept.
Step 4 Software testing and debugging
Step 5 (Managing Software Projects): Analyze the estimation and project schedule.
CSE-425 Digital Signal Processing
3 Credits
Introduction to digital signal processing (DSP): Discrete-time signals and systems, analog to digital conversion, impulse response, finite impulse response (FIR) and infinite impulse response
(IIR) of discrete-time systems, difference equation, convolution, transient and steady state response. Discrete transformations: Discrete Fourier series, discrete-time Fourier series, discrete
Fourier transform (DFT) and properties, fast Fourier transform (FFT), inverse fast Fourier transform, z-transformation – properties, transfer function, poles and zeros and inverse z-transform.
Correlation: circular convolution, auto-correlation and cross correlation. Digital Filters: FIR filters- linear phase filters, specifications, design using window, optimal and frequency sampling
methods; IIR filters – specifications, design using impulse invariance, bilinear z-transformation, least-squares methods and finite precision effects. Digital signal processors: the TMS family.
Applications of digital signal processing.
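The convolution and FIR-filter topics above can be illustrated with a direct implementation of the convolution sum; the 3-point moving-average impulse response and the input sequence are hypothetical examples:

```python
def convolve(x, h):
    """Direct evaluation of the linear convolution sum
    y[n] = sum_k x[k] * h[n-k] for finite-length sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Input sequence and a 3-point moving-average FIR impulse response
x = [3.0, 6.0, 9.0]
h = [1 / 3, 1 / 3, 1 / 3]
y = convolve(x, h)
print(y)
```

Convolving with the unit impulse [1.0] returns the input unchanged, which is the defining property of an LTI system's impulse response.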
Books Recommended:
1. Digital Signal Processing-John G. Proakis
2. Signals and Systems-Simon Haykin and Barry Van Veen
3. Digital Signal Processing-R. W. Schafer
4. Digital Signal Processing-Ifeachor
5. Introduction to DSP-Johnny R. Johnson
CSE-426 Digital Signal Processing Lab
1.5 Credits
Laboratory works based on CSE 425.
CSE-403 Compiler Design
3 Credits
Introduction to compilers: Introductory concepts, types of compilers, applications, phases of a compiler. Lexical analysis: Role of the lexical analyzer, input buffering, token specification,
recognition of tokens, symbol tables. Parsing: Parser and its role, context free grammars, top-down parsing. Syntax-directed translation: Syntax-directed definitions, construction of syntax
trees, top-down translation. Type checking: Type systems, type expressions, static and dynamic checking of types, error recovery. Run-time organization: Run-time storage organization, storage
strategies. Intermediate code generation: Intermediate languages, declarations, assignment statements. Code optimization: Basic concepts of code optimization, principal sources of optimization.
Code generation. Features of some common compilers: Characteristic features of C, Pascal and Fortran compilers.
Books Recommended:
1. Compilers: Principles, Techniques, and Tools – Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Second Edition.
CSE-404 Compiler Design Lab
1.5 Credits
How to use scanner and parser generator tools (e.g., Flex, JFlex, CUP, Yacc). For a given simple source language, design and implement a lexical analyzer, symbol tables, a parser, an intermediate
code generator and a code generator.
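A minimal lexical analyzer of the kind the lab asks for can be sketched with Python's re module; the token classes and the sample input define a toy language, not a required specification:

```python
import re

# Token classes for a toy source language (illustrative assumptions)
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),          # integer literals
    ("ID",     r"[A-Za-z_]\w*"), # identifiers
    ("OP",     r"[+\-*/=]"),     # single-character operators
    ("SKIP",   r"\s+"),          # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    """Scan the source string left to right, emitting (class, lexeme) pairs."""
    tokens = []
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 42 + y1"))
```

A generator tool such as Flex produces the same kind of scanner from an equivalent table of regular expressions.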
CSE-421 Computer Network
3 Credits
Network architectures: layered architectures and the ISO reference model; data link protocols, error control, HDLC, X.25, flow and congestion control, virtual terminal protocol, data security. Local
area networks, satellite networks, packet radio networks. Introduction to ARPANET, SNA and DECNET. Topological design and queuing models for network and distributed computing systems.
Books Recommended:
1. Computer Networks-A. S. Tanenbaum
2. Introduction to Networking- Barry Nance
3. Data Communications, Computer Networks & Open Systems- F. Halsall
4. TCP/IP – Sidnie Feit
5. Data Communications and Networking-Behrouz A. Forouzan
CSE-422 Computer Network Lab
1.5 Credits
Laboratory works based on CSE 421.
CSE-415 Artificial Intelligence
3 Credits
What is Artificial Intelligence: The AI problems, The underlying assumption, What is an AI technique. Problems, Problem spaces and Search: Defining the problem as a state space search, Production
system, Problem characteristics. Heuristics Search Techniques: Generate and Test, Hill climbing, Best First Search, Problem Reduction, Constraint Satisfaction, Means-Ends Analysis. Knowledge
Representation Issues: Representation and Mappings, Approaches to knowledge Representation, Issues in Knowledge representation. Using Predicate logic: Representing simple facts in logic,
Representing Instance and Isa relationships, Computable functions and Predicates, Resolution. Representing Knowledge using Rules: Procedural versus Declarative Knowledge, Logic Programming,
Forward versus Backward Reasoning, Matching. Game playing: Overview, The Minimax Search Procedure, Adding Alpha-Beta Cutoffs, Additional Refinements, Iterative Deepening. Planning: Overview, An
example Domain: The Blocks World, Components of a planning system, Goal stack planning. Understanding: What is Understanding, What makes Understanding hard, Understanding as constraint
satisfaction. Natural Language Processing: Introduction, Syntactic Processing, Semantic Analysis, Discourse and Pragmatic Processing. Expert systems: representing and using domain knowledge,
Expert system shells explanation, Knowledge Acquisition.
AI Programming Language: Python, Prolog, LISP
Books Recommended:
1. Introduction to Artificial Intelligence and Expert System-Dan W. Peterson
2. Artificial Intelligence-E. Rich and K. Knight
3. An Introduction to Neural Computing-C. F. Chabris and T. Jackson
4. Artificial Intelligence: A Modern Approach-S. Russel and P. Norvig
5. Artificial Intelligence using C – H. Schieldt
CSE-416 Artificial Intelligence Lab
1.5 Credits
Students will have to understand the functionalities of intelligent agents and how the agents will solve general problems. Students have to use a high-level language (Python, Prolog, LISP) to
solve the following problems:
Backtracking: State space, Constraint satisfaction, Branch and bound. Examples: 8-queens, 8-puzzle, Cryptarithmetic. BFS and production: Water jugs problem, The missionaries and cannibals problem.
Heuristics and recursion: Tic-tac-toe, Simple block world, Goal stack planning, The tower of Hanoi. Question answering: The monkey and bananas problem.
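The water jugs problem listed above can be solved with a breadth-first search over (a, b) states; the 4-litre/3-litre jugs with goal 2 are the classic instance:

```python
from collections import deque

def water_jugs(cap_a=4, cap_b=3, goal=2):
    """BFS over (a, b) states; returns a shortest sequence of fill/empty/pour
    states leaving `goal` litres in the first jug."""
    start = (0, 0)
    parent = {start: None}   # visited set doubling as the path tree
    q = deque([start])
    while q:
        a, b = q.popleft()
        if a == goal:        # reconstruct the path back to the start
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)   # how much a -> b can transfer
        pour_ba = min(b, cap_a - a)   # how much b -> a can transfer
        moves = [
            (cap_a, b), (a, cap_b),                       # fill a jug
            (0, b), (a, 0),                               # empty a jug
            (a - pour_ab, b + pour_ab),                   # pour a -> b
            (a + pour_ba, b - pour_ba),                   # pour b -> a
        ]
        for s in moves:
            if s not in parent:
                parent[s] = (a, b)
                q.append(s)
    return None

path = water_jugs()
print(path)
```

Because BFS expands states level by level, the first goal state dequeued is reached by a minimum number of moves (six for the 4-3-2 instance).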
CSE-431 Computer Graphics
3 Credits
Introduction to Graphical data processing. Fundamentals of interactive graphics programming. Architecture of display devices and connectivity to a computer. Implementation of graphics concepts of
two-dimensional and three-dimensional viewing, clipping and transformations. Hidden line algorithms. Raster graphics concepts: Architecture, algorithms and other image synthesis methods. Design
of interactive graphic conversations.
Books Recommended:
1. Principles of Interactive Computer Graphics –William M., Newman, McGraw-Hill, 2nd edition, 1978
2. Computer Graphics: Principle and Practice in C-James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley, 2nd edition, 1995
CSE-432 Computer Graphics Lab
1.5 Credits
Laboratory works based on CSE 431.
CSE-435 Computer Interfacing
3 Credits
Interface components and their characteristics, microprocessor I/O. Disks, drums, and printers. Optical displays and sensors. High-power interface devices, transducers, stepper motors and
peripheral devices.
Books Recommended:
1. Microprocessors & Interfacing-Douglas V. Hall
2. Microprocessor & Microcomputer based System Design – Rafiquzzaman
3. Microcomputer Interfacing-Artwick
4. Microcomputer Interfacing-Ramesh Goanker
5. Designing User Interfaces-James E. Powell
CSE-436 Computer Interfacing Lab
1.5 Credits
Laboratory works based on CSE 435.
CSE-437 Pattern Recognition
3 Credits
Introduction to pattern recognition: features, classifications, learning. Statistical methods, structural methods and hybrid method. Applications to speech recognition, remote sensing and
biomedical areas, Learning algorithms. Syntactic approach: Introduction to pattern grammars and languages, parsing techniques. Pattern recognition in computer-aided design.
Books Recommended:
1. Pattern Recognition – S. Theodoridis, K. Koutroumbas
2. Pattern Recognition and Machine Learning- Christopher M. Bishop
3. Pattern Recognition for Neural Networks- Brian Ripley
CSE-438 Pattern Recognition Lab
1.5 Credits
Laboratory works based on CSE 437.
CSE-411 VLSI Design
3 Credits
Design and analysis techniques for VLSI circuits. Design of reliable VLSI circuits, noise considerations, design and operation of large fan out and fan in circuits, clocking methodologies,
techniques for data path and data control design. Simulation techniques. Parallel processing, special purpose architectures in VLSI. VLSI layouts partitioning and placement routing and wiring in
VLSI. Reliability aspects of VLSI design.
Books Recommended:
1. Basic VLSI Design-Douglas A Pucknell, Kamran Eshraghian
2. VLSI Technology – S. M. Sze
3. Introduction to VLSI Systems – C. A. Mead and L. A. Conway
CSE-412 VLSI Design Lab
1.5 Credits
Laboratory works based on CSE-411.
CSE-419 Graph Theory
3 Credits
Introduction, Fundamental concepts, Trees, Spanning trees in graphs, Distance in graphs, Eulerian graphs, Digraphs, Matching and factors, Cuts and connectivity, k-connected graphs, Network flow
problems, Graph coloring: vertex coloring and edge coloring, Line graphs, Hamiltonian cycles, Planar graphs, Perfect graphs.
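The vertex coloring topic above can be illustrated with the first-fit greedy algorithm, which uses at most Δ+1 colors; the 5-cycle below is a small example input (an odd cycle, so 3 colors are required):

```python
def greedy_coloring(adj):
    """First-fit greedy vertex coloring: give each vertex the smallest
    color index not already used by a colored neighbor."""
    color = {}
    for v in sorted(adj):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Adjacency list of a 5-cycle (hypothetical example graph)
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(cycle5)
print(coloring, max(coloring.values()) + 1)
```

Greedy coloring is not optimal in general; its output depends on the vertex order, which is why the chromatic number problem itself is NP-hard.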
Books Recommended:
1. Graph Theory and Its Applications – Jonathan L. Gross, Jay Yellen
2. A Textbook of Graph Theory – R. Balakrishnan, K. Ranganathan
CSE-420 Graph Theory Lab
1.5 Credits
Laboratory works based on CSE 419.
CSE-423 Computer System Performance Evaluations
3 Credits
Review of system analysis, approaches to system development, feasibility assessment, hardware and software acquisition. Procurement, workload characterization, the representation of measurement
data, instrumentation: software monitors, hardware monitors, capacity planning, bottleneck detection, system and program tuning, simulation and analytical models and their application, case studies.
Books Recommended:
1. Computer Systems Performance Evaluation and Prediction – Paul J. Fortier and Howard E. Michel
2. The Art of Computer Systems Performance Analysis- Jain
CSE-424 Computer System Performance Evaluation Lab
1.5 Credits
Laboratory based on CSE 423.
ECE-421 Digital Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
Books Recommended:
1. Digital Communication- John G. Proakis
2. Digital Communication –Bernard Sklar
3. Introduction to Digital Communication- Roger L. Peterson
4. Digital Communication-Prof. N. Sarkar
5. Communication Systems-Simon Haykin
ECE-422 Digital Communication Lab
1.5 Credits
Laboratory works based on ECE 421.
CSE-407 Simulation and Modeling
3 Credits
Simulation methods, model building, random number generators, statistical analysis of results, validation and verification techniques, Digital simulation of continuous systems. Simulation and
analytical methods for the analysis of computer systems and practical problems in business and practice. Introduction to simulation packages.
Books Recommended:
1. System Modeling and Simulation- V.P. Singh
2. System Design, Modeling, and Simulation Using Ptolemy II – Claudius Ptolemaeus
CSE-408 Simulation and Modeling Lab
1.5 Credits
Laboratory works based on CSE 407.
CSE-453 Digital Image Processing
3 Credits
Image Processing: Image Fundamentals, Image Enhancement: Background, Enhancement by Point-Processing, Spatial Filtering, Enhancement in Frequency Domain, Color Image Processing. Image
Restoration: Degradation Model, Diagonalization of Circulant and Block-Circulant Matrices, Algebraic Approach to Restoration, Inverse Filtering, Geometric Transformation. Image Segmentation:
Detection of Discontinuities, Edge Linking and Boundary Detection, Thresholding, Region-Oriented Segmentation, The use of Motion in Segmentation. Image-Compression.
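The thresholding step of image segmentation can be sketched on a tiny grayscale array; the pixel values and the threshold are illustrative:

```python
def threshold(image, t):
    """Global thresholding: pixels with intensity >= t become 1 (object),
    all others become 0 (background)."""
    return [[1 if px >= t else 0 for px in row] for row in image]

# Tiny hypothetical 8-bit grayscale image (values in 0..255)
img = [[ 12,  40, 200],
       [180,  90,  30],
       [220, 210,  15]]

binary = threshold(img, 128)
print(binary)
```

Choosing the threshold automatically (e.g., from the image histogram) is the refinement the segmentation section develops further.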
Books Recommended:
1. Digital Image Processing-Rafael C. Gonzalez and Richard E. Woods, Pearson Education Asia.
2. Nonlinear Digital Filters: Principles and Applications – I. Pitas and A. N. Venetsanopoulos, Kluwer Academic Publishers.
CSE-454 Digital Image Processing Lab
1.5 Credits
Laboratory works based on CSE 453.
CSE-455 Wireless Sensor Networks
3 Credits
Introduction: applications; Localization and tracking: tracking multiple objects; Medium Access Control: S-MAC, IEEE 802.15.4 and ZigBee; Geographic and energy-aware routing; Attribute-Based
Routing: directed diffusion, rumor routing, geographic hash tables; Infrastructure establishment: topology control, clustering, time synchronization; Sensor tasking and control: task-driven
sensing, information-based sensor tasking, joint routing and information aggregation; Sensor network databases: challenges, querying the physical environment, in-network aggregation, data indices
and range queries, distributed hierarchical aggregation; Sensor network platforms and tools: sensor node hardware, sensor network programming challenges; Other state-of-the-art related topics.
Books Recommended:
1. Wireless Sensor Networks – C. S. Raghavendra, Krishna M. Sivalingam and Taieb Znati
2. Wireless Sensor Networks: An Information Processing Approach (The Morgan Kaufmann Series in Networking) – Feng Zhao, Leonidas Guibas
CSE-456 Wireless Sensor Networks Lab
1.5 Credits
Laboratory works based on CSE 455.
CSE-409 Computer Security and Cryptography
3 Credits
Network Security Practice: Authentication, Digital certificates and Public key infrastructure, X.500, Applications, Electronic Mail Security, IP Security, Web Security, System Security, Intruders,
Malicious Software, Firewalls, Threats and Attacks, Various Attack Techniques and Prevention; Cryptography: Overview, Terminology, Substitution and Transposition ciphers, One-time pads, Symmetric
Ciphers, Classical Encryption Techniques, Block Ciphers and the Data Encryption Standard, Introduction to Finite Fields, Advanced Encryption Standard, Contemporary Symmetric Ciphers,
Confidentiality Using Symmetric Encryption, Public Key Encryption, One-way functions and Hash Functions, Introduction to Number Theory, Prime number generation, Public-Key Cryptography and RSA,
Key Management, Key exchange algorithms; Other Public-Key Cryptosystems, Message Authentication and Hash Functions, Hash Algorithms, MD5, SHA, Digital Signatures and Authentication Protocols, DSA.
Books Recommended:
1. Applied Cryptography – Bruce Schneier, John Wiley & Sons, Inc.
2. Computer Security – Dieter Gollmann, John Wiley and Sons Ltd., 1999. ISBN 0-471-97844-2.
3. Fundamentals of Computer Security Technology – Edward Amoroso, Prentice Hall. ISBN 0-13-108929-3.
4. Cryptography and Network Security: Principles and Practice – W. Stallings, Prentice Hall, New Jersey, 1999.
5. Differential Cryptanalysis of the Data Encryption Standard – E. Biham and A. Shamir, Springer-Verlag, 1993.
6. Cryptography and Data Security – D. Denning, Addison-Wesley, 1982.
7. A Course in Number Theory and Cryptography – N. Koblitz, Springer-Verlag, 1994.
CSE-410 Computer Security and Cryptography Lab
1.5 Credits
Laboratory works based on CSE 409.
CSE-457 Bio-Informatics
3 Credits
Cell concept: Structural organization of plant and animal cells, nucleus, cell membrane and cell wall. Cell division: Introducing chromosome, Mitosis, Meiosis and production of haploid/diploid
cell. Nucleic acids: Structure and properties of different forms of DNA and RNA; DNA replication. Proteins: Structure and classification, Central dogma of molecular biology. Genetic code: A brief
account. Genetics: Mendel’s laws of inheritance, Organization of genetic material of prokaryotes and eukaryotes, C-Value paradox, repetitive DNA, structure of chromatin – euchromatin and
heterochromatin, chromosome organization and banding patterns, structure of gene – intron, exon and their relationships, overlapping gene, regulatory sequence (lac operon), Molecular mechanism of
general recombination, gene conversion, Evolution and types of mutation, molecular mechanisms of mutation, site-directed mutagenesis, transposons in mutation. Introduction to Bioinformatics:
Definition and History of Bioinformatics, Human Genome Project, Internet and Bioinformatics, Applications of Bioinformatics. Sequence alignment: Dynamic programming. Global versus local. Scoring
matrices. The Blast family of programs. Significance of alignments, Aligning more than two sequences. Genomes alignment. Structure-based alignment. Hidden Markov Models in Bioinformatics:
Definition and applications in Bioinformatics. Examples of the Viterbi, the Forward and the Backward algorithms. Parameter estimation for HMMs. Trees: The Phylogeny problem. Distance methods,
parsimony, bootstrap. Stationary Markov processes. Rate matrices. Maximum likelihood. Felsenstein’s post-order traversal. Finding regulatory elements: Finding regulatory elements in aligned and
unaligned sequences. Gibbs sampling. Introduction to microarray data analysis: Steady state and time series microarray data. From microarray data to biological networks. Identifying regulatory
elements using microarray data. Pi calculus: Description of biological networks; stochastic Pi calculus, Gillespie algorithm.
Books Recommended:
1. An Introduction to Bioinformatics Algorithms – Neil C. Jones and Pavel A. Pevzner
2. Introduction to Bioinformatics – Stephen A. Krawetz, David D. Womble
3. Introduction to Bioinformatics – Arthur M. Lesk
CSE-458 Bio-Informatics Lab
1.5 Credits
Laboratory works based on CSE-457.
CSE-461 Neural Networks
3 Credits
Fundamentals of Neural Networks; Backpropagation and related training algorithms; Hebbian learning; Cohen-Grossberg learning; The BAM and the Hopfield Memory; Simulated Annealing; Different
types of Neural Networks: Counterpropagation, Probabilistic, Radial Basis Function, Generalized Regression, etc.; Adaptive Resonance Theory; Dynamic Systems and Neural Control; The Boltzmann
Machine; Self-organizing Maps; Spatiotemporal Pattern Classification; The Neocognitron; Practical Aspects of Neural Networks.
Books Recommended:
1. An Introduction to Neural Networks – Prof. Leslie Smith
2. Fundamentals of Artificial Neural Networks – Mohamad H. Hassoun
CSE-462 Neural Networks Lab
1.5 Credits
Laboratory works based on CSE 461.
CSE-463 Machine Learning
3 Credits
Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system- training data, concept representation, function approximation.
Inductive Classification: The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses. Finding maximally specific hypotheses.
Version spaces and the candidate elimination algorithm. Learning conjunctive concepts. The importance of inductive bias. Decision Tree Learning: Representing concepts as decision trees. Recursive
induction of decision trees. Picking the best splitting attribute: entropy and information gain. Searching for simple trees and computational complexity. Occam’s razor. Overfitting, noisy data,
and pruning. Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms- cross-validation, learning curves, and statistical
hypothesis testing. Computational Learning Theory: Models of learnability- learning in the limit; probably approximately correct (PAC) learning. Sample complexity- quantifying the number of
examples needed to PAC learn. Computational complexity of training. Sample complexity for finite hypothesis spaces. PAC results for learning conjunctions, kDNF, and kCNF. Sample complexity for
infinite hypothesis spaces, Vapnik-Chervonenkis dimension. Rule Learning, Propositional and First-Order: Translating decision trees into rules. Heuristic rule induction using separate and conquer
and information gain. First-order Horn-clause induction (Inductive Logic Programming) and FOIL. Learning recursive rules. Inverse resolution, Golem, and Progol. Artificial Neural Networks:
Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training. Multilayer networks and backpropagation. Hidden layers and
constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks. Support Vector Machines: Maximum margin linear separators. Quadratic
programming solution to finding maximum margin separators. Kernels for learning non-linear functions. Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm.
Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies. Instance-Based Learning: Constructing explicit
generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning. Text Classification: Bag of words representation. Vector space model and cosine
similarity. Relevance feedback and Rocchio algorithm. Versions of nearest neighbor and Naive Bayes for text. Clustering and Unsupervised Learning: Learning from unclassified data. Clustering.
Hierarchical Agglomerative Clustering. k-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised learning with EM using labeled and unlabeled data.
Books Recommended:
1. Artificial Intelligence: A Modern Approach (2nd edition) – S. Russell and P. Norvig, Prentice Hall, 2003
2. Introduction to Machine Learning (2nd edition) – Ethem Alpaydin, MIT Press, 2010
3. Machine Learning – Tom Mitchell, McGraw Hill
4. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods – Nello Cristianini and John Shawe-Taylor, Cambridge University Press
CSE-464 Machine Learning Lab
1.5 Credits
Students should learn methods for extracting rules and learning from data, and acquire the mathematical background needed to understand how the methods work and how to get the best performance
from them. To achieve these goals, students should implement the following algorithms in the lab: K-Nearest Neighbor Classifier, Decision Trees, Model Selection and Empirical Methodologies,
Linear Classifiers: Perceptron and SVM, Naive Bayes Classifier, Basics of Clustering Analysis, K-means Clustering Algorithm, Hierarchical Clustering Algorithm. Upon completion of the course, the
student should be able to perform the following: a. Evaluate whether a learning system is required to address a particular problem. b. Understand how to use data for learning, model selection,
and testing to achieve the goals. c. Understand generally the relationship between model complexity and model performance, and be able to use this to design a strategy to improve an existing
system. d. Understand the advantages and disadvantages of the learning systems studied in the course, and decide which learning system is appropriate for a particular application. e. Make a
naive Bayes classifier and interpret the results as probabilities. f. Be able to apply clustering algorithms to simple data sets for clustering.
CSE-465 Contemporary Course on CSE
3 Credits
The course and its contents will be proposed by the department and will be relevant to current CSE technology.
CSE-466 Contemporary Course Lab on CSE
1.5 Credits
Laboratory works based on CSE 465.
CSE-467 Advanced Database Systems
3 Credits
Introduction: Object Oriented Database, Data Model, Design, Languages; Object Relational Database: Complex data types, Querying with complex data types, Design; Distributed Database: Levels of
distribution transparency, Translation of global queries to fragment queries, Optimization of access strategies, Management of distributed transactions, Concurrency control, Reliability,
Administration; Parallel Database: Different types of parallelism, Design of parallel databases; Multimedia Database Systems: Basic concepts, Design, Optimization of access strategies, Management
of Multimedia Database Systems, Reliability; Data Warehousing/Data Mining: Basic concepts and algorithms.
Books Recommended:
1. Oracle Advanced PL/SQL Programming with CD-ROM – Scott Urman.
CSE-468 Advanced Database System Lab
1.5 Credits
Laboratory works based on CSE-467.
CSE 469 Natural Language Processing
3 Credits
Introduction; Word Modeling: Automata and Linguistics, Statistical Approaches and Part of Speech Tagging; Linguistics and Grammars; Parsing Algorithms; Parsing Algorithms and the Lexicon;
Semantics; Feature Parsing; Tree Banks and Probabilistic Parsing; Machine Translation; Evolutionary Models of Language Learning and Origins.
Books Recommended:
1. Speech and Language Processing – Jurafsky, D. and Martin, J. H.
2. Foundations of Statistical Natural Language Processing – Manning, C. D. and H. Schütze
3. Computational Complexity and Natural Language – Barton, E., Berwick, R., and Ristad, E.
4. Natural Language Understanding – Allen, J.
5. Computational Models of Discourse – Brady, J., and Berwick, R.
CSE-470 Natural Language Processing Lab
1.5 Credits
Processing of words, Phrase structure parsing, Semantic Interpretation with Phrase Structure Grammars
Books Recommended:
1. Speech and Language Processing – Jurafsky, D. and Martin, J. H.
2. Foundations of Statistical Natural Language Processing – Manning, C. D. and H. Schütze
3. Computational Complexity and Natural Language – Barton, E., Berwick, R., and Ristad, E.
4. Natural Language Understanding – Allen, J.
5. Computational Models of Discourse – Brady, J., and Berwick, R.
CSE-400 Project / Thesis
3 Credits
Study of problems in the field of Computer Science and Engineering. This course will be initiated in the 3rd year or early in the 4th year.
CSE-402 Comprehensive Viva Voce
2 Credits
• First Year : Semester I
Course Code Course Title Credit Hours
HUM 103 Language Composition & Comprehension 3
CSE 103 Computer Programming in C 3
CSE 104 Computer Programming in C Lab 1.5
PHY 101E Physics for Engineers 3
PHY 102 Physics for Engineers Lab 1.5
MTH 101E Geometry, Differential & Integral Calculus 3
ECE 101 Basic Electrical Engineering 3
ECE 102 Basic Electrical Engineering Lab 1.5
ECN 101 Principles of Economics 2
ACN 201 Principles of Accounting 2
• First Year : Semester II
Course Code Course Title Credit Hours
CSE 200 Project Work 2
CSE 201 Discrete Mathematics 3
CSE 203 Object Oriented Programming Language 3
CSE 204 Object Oriented Programming Language Lab 1.5
CSE 205 Data Structures 3
CSE 206 Data Structures Lab 1.5
ECE 201 Electronic Devices & Circuits 3
ECE 202 Electronic Devices & Circuits Lab 1.5
MTH 103E Linear Algebra, Vector Analysis & Complex Variables 3
IMG 201 Principles of Management 2
• Second Year : Semester I
Course Code Course Title Credit Hours
CSE 207 Algorithms 3
CSE 208 Algorithms Lab 1.5
CSE 209 Numerical Methods 3
CSE 231 Digital Logic Design 3
CSE 232 Digital Logic Design Lab 1.5
CSE 321 Database Systems 3
CSE 322 Database Systems Lab 1.5
CSE 331 Computer Architecture 3
MTH 203E Differential Equations, Laplace Transforms & Fourier Analysis 3
• Second Year : Semester II
Course Code Course Title Credit Hours
CSE 300 Software Development 2
CSE 301 E-Commerce and Web Engineering 3
CSE 302 E-Commerce and Web Engineering Lab 1.5
CSE 303 Operating Systems 3
CSE 304 Operating Systems Lab 1.5
CSE 315 Data Communication 3
CSE 351 Management Information System 3
CSE 403 Compiler Design 3
CSE 404 Compiler Design Lab 1.5
MTH 301 Statistics & Probability 2
• Third Year : Semester I
Course Code Course Title Credit Hours
CSE 333 Microprocessors and Assembly Language 3
CSE 334 Microprocessors and Assembly Language Lab 1.5
CSE 339 Theory of Computation 2
CSE 401 Software Engineering 3
CSE 421 Computer Network 3
CSE 422 Computer Network Lab 1.5
CSE 435 Computer Interfacing 3
CSE 436 Computer Interfacing Lab 1.5
CSE 4** Option 3
CSE 4** Option Lab 1.5
• Third Year : Semester II
Course Code Course Title Credit Hours
CSE 405 Artificial Intelligence & Expert Systems 3
CSE 406 Artificial Intelligence & Expert Systems Lab 1.5
CSE 425 Digital Signal Processing 3
CSE 426 Digital Signal Processing Lab 1.5
CSE 431 Computer Graphics 3
CSE 432 Computer Graphics Lab 1.5
CSE 400 Project /Thesis 3
CSE 402 Comprehensive Viva Voce 2
CSE 4** Option 3
CSE 4** Option Lab 1.5
• Optional
Course Code Course Title Credit Hours
CSE 407 Simulation & Modeling 3
CSE 408 Simulation & Modeling Lab 1.5
CSE 411 VLSI Design 3
CSE 412 VLSI Design Lab 1.5
CSE 413 Information System Design 3
CSE 414 Information System Design Lab 1.5
CSE 419 Graph Theory 3
CSE 420 Graph Theory Lab 1.5
CSE 423 Computer System Performance Evaluation 3
CSE 424 Computer System Performance Evaluation Lab 1.5
CSE 437 Pattern Recognition 3
CSE 438 Pattern Recognition Lab 1.5
CSE 453 Digital Image Processing 3
CSE 454 Digital Image Processing Lab 1.5
CSE 455 Wireless Sensor Networks 3
CSE 456 Wireless Sensor Networks Lab 1.5
CSE 457 Bioinformatics 3
CSE 458 Bioinformatics Lab 1.5
CSE 461 Neural Networks 3
CSE 462 Neural Networks Lab 1.5
CSE 463 Machine Learning 3
CSE 464 Machine Learning Lab 1.5
CSE 465 Contemporary course on CSE 3
CSE 466 Contemporary course on CSE Lab 1.5
• Detailed Syllabus
HUM-103 Language Composition and Comprehension
3 Credits
This course aims to make students proficient in the composition and comprehension of English as used in formal write-ups such as articles, essays and treatises. Texts will be given for
comprehension; exercises in writing essays, paragraphs and reports will be done; and the construction of proper sentences expressing formal ideas will be taught. Sufficient exercises in
translation and re-translation will be included.
Books Recommended:
1. Exercise in Reading Comprehension – Tibbits
2. Essential English Grammar – Raymond Murphy
3. English Vocabulary in Use – Stuart
4. English Vocabulary in Use – McCarthy
5. Intermediate English Grammar – Raymond Murphy
6. Paragraph in English – Tibbits
CSE-103 Computer Programming in C
3 Credits
Programming language: Basic concepts; overview of programming languages. C language: Preliminaries; Elements of C; program constructs; variables and data types in C; Input and output; character
and formatted I/O; Arithmetic expressions and assignment statements; loops and nested loops; Decision making; Arrays; Functions; Arguments and Local Variables; Calling functions and arrays;
Recursion and recursive functions; structures within structures; Files; File functions for sequential and random I/O; Pointers; Pointers and Structures; Pointers and functions; Pointers and
arrays; Operations on pointers; Pointers and memory addresses; Operations on bits; Bit operations; Bit fields; Advanced features; Standard and Library functions.
Books Recommended:
1. The C Programming Language – Kernighan & Ritchie
2. Teach Yourself C – H. Schildt
3. Programming with ANSI C – E. Balagurusamy
4. The Complete Reference, Turbo C/C++ – H. Schildt
5. Programming with C, Schaum's Outline Series – Gottfried
CSE-104 Computer Programming in C Lab
1.5 Credits
Laboratory works based on CSE 103.
PHY-101E Physics for Engineering
3 Credits
Properties of matter: Elasticity, Stress & Strain, Young's Modulus, Surface Tension. Heat & Thermodynamics: Heat, Temperature, Zeroth Law of Thermodynamics, Thermal Equilibrium, Seebeck effect,
Reversible & Irreversible Processes, First and Second Laws of Thermodynamics, Heat Engine, Carnot Cycle. Electromagnetism: Electric charge, Charge density, Coulomb's and Ohm's laws, Electric field
and electric potential, Electric dipole, Electric flux, Gauss's law and its applications, Capacitance, Magnetic field, Biot-Savart law, Ampere's law and its applications, Electromagnetic Induction,
Faraday’s law, Lenz’s law, Self Inductance and Mutual Inductance. Optics: Nature and Propagation of light, Reflection and Refraction of light, Total Internal Reflection, Interference,
Diffraction, Dispersion, Polarization. Modern Physics: Theory of Relativity, Length Contraction and Time Dilation, Mass-Energy Relation, Compton Effect, Photoelectric Effect, Quantum Theory,
Atomic Structure, X-ray Diffraction, Atomic Spectra, Electron Orbital Wavelength, Bohr radius, Radioactivity, de Broglie theory, Nuclear Fission and Fusion.
Books Recommended:
1. Modern Physics – Bernstein
2. Concepts of Modern Physics – Beiser
3. Heat & Thermodynamics – Brij Lal
4. University Physics with Modern Physics – Young
PHY 102 Physics for Engineers Lab
1.5 Credits
Laboratory works based on PHY 101E.
MTH-101E Geometry, Differential and Integral Calculus
3 Credits
Geometry: Two dimensional geometry: Straight lines, pair of straight lines, Circle, Parabola, Ellipse and Hyperbola, General Equation of Second Degree. Three Dimensional Geometry:
Three dimensional Co-ordinates, Direction Cosines and Direction Ratios, Plane and Straight line. Differential Calculus: Real number system. Functions of single variables, its Graphs, Limit,
Continuity and Differentiability. Successive Differentiation, Leibnitz's theorem, Rolle's theorem, Mean value theorem, Taylor's theorem, Maclaurin's theorem, Lagrange's and Cauchy's forms of
Remainder. Expansion of Functions in Taylor's and Maclaurin's Series. Maximum and Minimum Values of Functions. Evaluation of Indeterminate forms of limits, L'Hospital's Rule. Tangent and Normal.
Functions of more than one variable, Limit, Continuity, Differentiability, Partial Derivatives, Euler’s Theorem. Jacobians. Integral Calculus: Indefinite Integrals and its definition. Methods of
Integration (Integration by substitution, Integration by parts, Integration by successive reduction). Fundamental theorem of Integral calculus. Definite Integral and its properties. Definite
Integral as the limit of a sum. Improper Integrals, Beta and Gamma Function, Its application in evaluating Integrals. Evaluation of Arc length, Areas, Surfaces of Revolution, Volumes of solids of
Revolution, Multiple Integrals.
Books Recommended:
1. Analytical Geometry of Conic Section – J.M. Kar.
2. A Text Book on Co-ordinate Geometry – Rahman & Bhattacharjee; S. Chakrabarty, Gonith Prokashon.
3. Calculus with Analytic Geometry – Thomas and Finney
4. Calculus – Howard Anton; 10th Edition; John Wiley and Sons
5. Differential Calculus – C. Das & B. N. Mukharjee; 54th Edition; U. N. Dhur & Sons PTL
6. Integral Calculus – C. Das & B. N. Mukharjee; 54th Edition; U. N. Dhur & Sons PTL
ECE 101 Basic Electrical Engineering
3 Credits
Fundamental electrical concepts, Kirchhoff's Laws, Equivalent resistance. Electrical circuits: Series circuits, parallel circuits, series-parallel networks. Network analysis: Source conversion,
Star/Delta conversion, Branch-current method, Mesh analysis, Nodal analysis. Network theorems: Superposition theorem, Thevenin's theorem, Norton's theorem. Capacitors. Magnetic circuits,
Inductors. Sinusoidal alternating waveforms: Definitions, phase relations, Instantaneous value, Average value, Effective (rms) value. Phasor algebra. Series, parallel and series-parallel AC
networks. Power: Apparent power, Reactive power, Power triangle, Power factor correction. Pulse waveforms and the R-C response. Three-phase systems. Transformers.
Books Recommended:
1. Introductory Circuit Analysis – R. L. Boylestad
2. Introduction to Electrical Engineering – P. Ward
3. Electrical Technology (Volume 1) – B. L. Theraja, A. K. Theraja
4. Alternating Current Circuits – R. M. Kerchner, G. F. Corcoran
5. Electric Circuits – James W. Nilsson
ECE 102 Basic Electrical Engineering Lab
1.5 Credits
Laboratory works based on ECE 101.
ECN 101 Principles of Economics
2 Credits
Introduction: The Nature, scope and methods of Economics, Economics and Engineering. Some fundamental concepts commonly used in Economics. Micro Economics: The theory of demand and supply and
their elasticities. Market price determination; competition in theory and practice. Indifference curve technique. Marginal analysis. Factors of production and production function. Scale of
production – internal and external economies and diseconomies. The short run and the long run. Fixed cost and variable cost. Macro Economics: National income analysis. Inflation and its effects.
Savings, Investments. The basis of trade and the terms of trade. Monetary policy, Fiscal policy, Trade policy with reference to Bangladesh. Planning in Bangladesh.
Books Recommended:
1. Economics – Samuelson & Nordhaus
2. Economics – Don Bush Fisher
ACN 201 Principles of Accounting
2 Credits
This course aims at developing basic concepts and principles of accounting. It will cover topics like working at journal entries, preparation of ledger, checking the accuracy through trial
balance, and preparation of financial statements. Concepts and practices of cost accounting will be discussed by covering topics like job order and process costing, contract costing, differential
costing and responsibility accounting. Contemporary practices of accounting principles will be discussed under the current legal framework.
Books Recommended:
1. Accounting Principles – Kieso
2. Financial & Managerial Accounting – Needles
CSE 200 Project Work
2 Credits
Project focusing on an object oriented programming approach and using standard algorithms is preferable. Every project should maintain a goal so that it can be used as a useful tool in the IT
field. Innovative project ideas that require different types of scripting/programming languages or programming tools can also be accepted with the consent of the corresponding project supervisor.
CSE-201 Discrete Mathematics
3 Credits
Mathematical Models and Reasoning: Propositions, Predicates and Quantifiers, Logical operators, Logical inference, Methods of proof. Sets: Set theory, Relations between sets, Operations on sets.
Induction, The natural numbers, Set operations on Σ*. Binary Relations: Binary relations and Digraphs, Graph theory, Trees, Properties of relations, Composition of relations, Closure operations
on relations, Order relations, Equivalence relations and partitions. Functions: Basic properties, Special classes of functions. Counting and Algorithm Analysis: Techniques, Asymptotic behavior of
functions, Recurrence systems, Analysis of algorithms. Infinite sets: Finite and Infinite sets, Countable and uncountable sets, Comparison of cardinal numbers. Algebras: Structure, Varieties of
algebras, Homomorphism, Congruence relations.
Books Recommended:
1. Discrete Mathematics and its Applications – Kenneth H. Rosen
2. Discrete Mathematical Structures – Bernard Kolman, Robert C. Busby, Sharon Cutler Ross
3. Concrete Mathematics – Ronald L. Graham, Donald E. Knuth, Oren Patashnik
CSE 203 Object Oriented Programming Language
3 Credits
Introduction to Java: History of Java, Java class libraries, Introduction to Java programming, and a simple program. Developing Java Applications: Introduction, Algorithms, Pseudocode, Control
Structures, The If/Else selection structure, the while repetition structure, Assignment operators, Increment and decrement operators, Primitive data types, common escape sequences, Logical
operators. Control Structures: Introduction, for structure, switch structure, do/while structure, break and continue statements. Methods: Introduction, Program modules in Java, Math class
methods, method definitions, Java API packages, Automatic variables, Recursion, Method overloading, Methods of the Applet class. Arrays: Introduction, Arrays, declaring and allocating arrays,
passing arrays to methods, sorting arrays, searching arrays, multiple-subscripted arrays. Inheritance: Introduction, Superclass, Subclass, Protected members, using constructors and finalizers in
subclasses, composition vs. inheritance, Introduction to polymorphism, Dynamic method binding, Final methods and classes, Abstract superclasses and concrete classes, Exception Handling.
Books Recommended:
1. Java, How to Program- H. M. Deitel & P. J. Deitel
2. Core Java (Vol. 1 and 2)- Sun Press
3. Beginning Java 2, Wrox – Ivor Horton
4. Java 2 Complete Reference – H. Schildt
CSE 204 Object Oriented Programming Language Lab
1.5 Credits
Laboratory works based on CSE 203.
CSE-205 Data Structures
3 Credits
Concepts and Examples: Introduction to Data Structures. Elementary data structures: Arrays, records, pointers. Arrays: Types, memory representation and operations with arrays. Linked lists:
Representation, Types and operations with linked lists. Stacks and Queues: Implementations, operations with stacks and queues. Graphs: Implementations, operations with graphs. Trees:
Representations, Types, operations with trees. Memory Management: Uniform size records, diverse size records. Sorting: Internal sorting, external sorting. Searching: List searching, tree
searching. Hashing: Hashing functions, collision resolution.
Books Recommended:
1. Fundamentals of Data Structures – Horowitz & S. Sahni
2. Data Structures – Reingold
3. Data Structures, Schaum's Outline Series – Lipschutz
4. Data Structures & Programming Design – Robert L. Kruse
CSE-206 Data Structures Lab
1.5 Credits
Laboratory works based on CSE 205.
ECE-201 Electronic Devices & Circuits
3 Credits
Introduction to semiconductors, Junction diode characteristics & diode applications, Bipolar Junction Transistor characteristics, Transistor biasing, Small signal low frequency h-parameter model
& hybrid-pi model, AC analysis of transistors, Frequency response of transistors, Operational amplifiers, Linear applications of operational amplifiers, DC performance of operational amplifiers,
AC performance of operational amplifiers, Introduction to JFET, MOSFET, PMOS, NMOS & CMOS, Introduction to SCR, TRIAC, DIAC & UJT, Active filters, Introduction to IC fabrication techniques & VLSI.
Books Recommended:
1. Electronic Devices & Circuits – Jacob Millman & Christos C. Halkias, McGraw-Hill
2. Electronic Devices and Circuits – S. Salivahanan, N. S. Kumar and A. Vallavaraj, Tata McGraw-Hill
3. Electronics Fundamentals: Circuits, Devices, and Applications – Ronald J. Tocci
ECE 202 Electronic Devices & Circuits Lab
1.5 Credits
Laboratory works based on ECE 201.
MTH-103E Linear Algebra, Vector Analysis and Complex Variables
3 Credits
Linear Algebra: Matrix, Types of Matrices, Matrix operations, Laws of matrix algebra, Invertible matrices, System of Linear equations (homogeneous and non-homogeneous) and their solution.
Elementary row and column operations and Row reduced echelon matrices, Different types of matrices, Rank of matrices. Eigenvalues and eigenvectors. Vector Analysis: Vector Algebra – Vectors in
three dimensional space, Algebra of Vectors, Rectangular components, Addition and Scalar multiplication, Scalar and Vector product of two Vectors, Scalar and Vector triple product. Vector
Calculus – Vector differentiation and Integration. Gradient, Divergence and Curl. Green's theorem, Stokes' theorem. Complex Variables: Limit, Continuity and differentiability of complex
functions. Analytic functions, Harmonic functions, Cauchy-Riemann equations. Complex Integration. Cauchy's integral theorem and Cauchy's Integral formula. Liouville's theorem. Taylor's and
Laurent's theorems.
Singularities. Residue, Cauchy’s Residue theorem. Contour Integration.
Books Recommended:
1. Schaum's Outline Series of the Theory and Problems on Linear Algebra – Seymour Lipschutz, 3rd ed., McGraw Hill Book
2. Linear Algebra with Applications – R. Antone
3. Schaum's Outline Series of the Theory and Problems on Vector Analysis – Murray R. Spiegel, SI (Metric ed.), McGraw Hill
4. Functions of a Complex Variable – Dewan Abdul Quddus, Titash Publications.
IMG 201 Principles of Management
2 Credits
This course aims at providing students with concepts and tools of general management. The course covers concepts of planning, organizing, motivating and controlling, and its importance in
attaining organizational objectives. Some current issues and trends in general management will also be discussed.
Books Recommended:
1. Principles of Management – Mason Carpenter
2. Principles of Management – Robert Kreitner
3. Principles of Management : A Modern Approach – P.K.Saxena
4. Principles of Management – P.C. Tripathi, P N Reddy, McGraw-Hill
CSE-207 Algorithms
3 Credits
Analysis of Algorithm: Asymptotic analysis: Recurrences, Substitution method, Recurrence tree method, Master method. Divide-and-Conquer: Binary search, Powering a number, Fibonacci numbers,
Matrix Multiplication, Strassen’s Algorithm for Matrix Multiplication. Sorting: Insertion sort, Merge sort, Quick sort, Randomized quick sort, Decision tree, Counting sort, Radix sort. Order
Statistics: Randomized divide and conquer, worst case linear time order statistics. Graph: Representation, Traversing a graph, Topological sorting, Connected Components. Dynamic Programming:
Elements of DP (Optimal substructure, Overlapping subproblem), Longest Common Subsequence finding problem, Matrix Chain Multiplication. Greedy Method: Greedy choice property, elements of greedy
strategy, Activity selector problem, Minimum spanning tree (Prim's algorithm, Kruskal's algorithm), Huffman coding. Shortest Path Algorithms: Dynamic and Greedy properties, Dijkstra's algorithm with
its correctness and analysis, Bellman-Ford algorithm, All pair shortest path: Floyd-Warshall algorithm, Johnson's algorithm. Network flow: Maximum flow, Max-flow-min-cut, Bipartite matching.
Backtracking/Branch-and-Bound: Permutation, Combination, 8-queen problem, 15-puzzle problem. Geometric algorithms: Segment-segment intersection, Convex hull, Closest pair problem. NP-Completeness:
NP-hard and NP-complete problems.
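As a brief illustration of one topic in this course, the following is a minimal sketch of Dijkstra's shortest-path algorithm using a binary heap (the graph and node names are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a weighted digraph.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical example graph:
g = {"a": [("b", 4), ("c", 1)], "c": [("b", 2)], "b": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 3, 'c': 1}
```

The lazy-deletion trick (skipping stale heap entries instead of decreasing keys) is the usual way to implement Dijkstra with Python's `heapq`.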
Books Recommended:
1. Introduction to Algorithms – Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein.
2. Algorithms – Robert Sedgewick and Kevin Wayne.
3. The Art of Computer Programming, Volume 1: Fundamental Algorithms – Donald E. Knuth, Addison-Wesley Professional, 3rd edition, 1997.
CSE-208 Algorithms Lab
1.5 Credits
Using different well known algorithms to solve the problem of Matrix-Chain Multiplication, Longest Common Subsequence, Huffman codes generation, Permutation, Combination, 8-queen problem,
15-puzzle, BFS, DFS, flood fill using DFS, Topological sorting, Strongly connected component, finding minimum spanning tree, finding shortest path (Dijkstra’s algorithm and Bellman-Ford’s
algorithm), Flow networks and maximum bipartite matching, Finding the convex hull, Closest pair.
CSE-209 Numerical Methods
3 Credits
Errors and Accuracy. Iterative process: Solution of f(x) = 0, existence and convergence of a root, convergence of the iterative method, geometrical representation, Aitken's Δ²-process of
acceleration. System of Linear Equations. Solution of Non-Linear equations. Finite Differences and Interpolation. Finite Difference Interpolation. Numerical Differentiation. Numerical
Integration. Differential Equations.
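As a small sketch of the iterative-process topic above, fixed-point iteration for x = g(x) accelerated with Aitken's Δ² process (the tolerance and test function are illustrative choices):

```python
import math

def aitken(x0, x1, x2):
    """Aitken's Δ² extrapolation from three successive iterates."""
    d = x2 - 2 * x1 + x0
    return x2 - (x2 - x1) ** 2 / d if d != 0 else x2

def fixed_point(g, x, tol=1e-10, max_iter=100):
    """Solve x = g(x), accelerating convergence with Aitken's Δ² process."""
    for _ in range(max_iter):
        x1, x2 = g(x), g(g(x))
        x_new = aitken(x, x1, x2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x = cos(x) has a unique fixed point near 0.739085
root = fixed_point(math.cos, 1.0)
print(round(root, 6))  # 0.739085
```

Plain iteration of cos converges linearly; the Δ²-accelerated version reaches full precision in a handful of steps.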
Books Recommended:
1. Introductory methods of Numerical Analysis – S. S. Sastry
2. Numerical Methods for Engineers –Steven C. Chapra
CSE-231 Digital Logic Design
3 Credits
Binary Logic. Logic Gates: IC digital logic families, positive and negative logic. Boolean Algebra. Simplification of Boolean Functions: Karnaugh map method, SOP and POS simplification, NAND,
NOR, wired-AND, wired-OR implementation, nondegenerate forms, Don’t care conditions, Tabulation method – prime implicant chart. Combinational Logic: Arithmetic circuits – half and full adders and
subtractors, multilevel NAND and NOR circuits, Ex-OR and Equivalence functions. Combinational Logic in MSI and LSI: Binary parallel adder, decimal and BCD adders, Comparators, Decoders and
Encoders, Demultiplexors and Multiplexors. Sequential Logic. Registers and Counters. Synchronous Sequential Circuits. Asynchronous Sequential Circuits. Digital IC terminology, TTL logic family,
TTL series characteristics, open-collector TTL, tristate TTL, ECL family, MOS digital ICs, MOSFET, CMOS characteristics, CMOS tristate logic, TTL-CMOS-TTL interfacing, memory terminology, general
memory operation, semiconductor memory technologies, different types of ROMs, semiconductor RAMs, static and dynamic RAMs, magnetic bubble memory, CCD memory, FPGA Concept.
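To illustrate the adder circuits listed above, a tiny simulation of a half adder and a full adder built from two half adders and an OR gate, verified against its full truth table:

```python
def half_adder(a, b):
    """Half adder: Sum = a XOR b, Carry = a AND b."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Full adder composed of two half adders and an OR gate."""
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

# Exhaustive truth-table check: 2*Cout + Sum must equal a + b + Cin
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("full adder verified")
```

The arithmetic identity in the assertion is exactly what the gate-level construction must satisfy for all eight input combinations.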
Books Recommended:
1. Digital Logic & Computer Design-M. Morris Mano
2. Digital Fundamentals- Floyd
3. Modern Digital Electronics-R. P. Jain
4. Digital Systems- R. J. Tocci
5. Digital Electronics- Green
CSE-232 Digital Logic Design Lab
1.5 Credits
Laboratory works based on CSE 231.
CSE-321 Database Systems
3 Credits
Introduction: Purpose of Database Systems, Data Abstraction, Data Models, Instances and Schemes, Data Independence, Data Definition Language, Data Manipulation Language, Database Manager,
Database Administrator, Database Users, Overall System Structure, Advantages and Disadvantages of Database Systems, Data Mining and Analysis, Database Architecture, History of Database Systems.
Entity-Relationship Model: Entities and Entity Sets, Relationships and Relationship Sets, Attributes, Composite and Multivalued Attributes, Mapping Constraints, Keys, Entity-Relationship Diagram,
Reduction of E-R Diagrams to Tables, Generalization, Attribute Inheritance, Aggregation, Alternative E-R Notations, Design of an E-R Database Scheme.
Relational Model: Structure of Relational Database, Fundamental Relational Algebra Operations, The Tuple Relational Calculus, The Domain Relational Calculus, Modifying the Database. Relational
Commercial Language: SQL, Basic structure of SQL Queries, Query-by-Example, Quel, Nested Subqueries, Complex queries, Integrity Constraints, Authorization, Dynamic SQL, Recursive Queries.
Relational Database Design: Pitfalls in Relational Database Design, Functional Dependency Theory, Normalization using Functional Dependencies, Normalization using Multivalued Dependencies,
Normalization using join Dependencies, Database Design Process. File And System Structure: Overall System Structure, Physical Storage Media, File Organization, RAID, Organization of Records into
Blocks, Sequential Files, Mapping Relational Data to Files, Data Dictionary Storage, Buffer Management. Indexing And Hashing: Basic Concepts, Ordered Indices, B+-Tree Index Files, B-Tree Index
Files, Static and Dynamic Hash Function, Comparison of Indexing and Hashing, Index Definition in SQL, Multiple Key Access.
Query Processing and Optimization: Query Interpretation, Equivalence of Expressions, Estimation of Query-Processing Cost, Estimation of Costs of Access Using Indices, Join Strategies, Join
Strategies for parallel Processing, Structure of the query Optimizer, Transformation of Relational Expression. Concurrency Control: Schedules, Testing for Serializability, Lock-Based Protocols,
Timestamp-Based Protocols, Validation Techniques, Multiple Granularity, Multiversion Schemes, Insert and Delete Operations, Deadlock Handling. Distributed Database: Structure of Distributed
Databases, Trade-off in Distributing the Database, Design of Distributed Database, Transparency and Autonomy, Distributed Query Processing, Recovery in Distributed Systems, Commit Protocols,
Concurrency Control. Data Mining and Information Retrieval: Data analysis and OLAP, Data Warehouse, Data Mining, Relevance Ranking Using Terms, Relevance Ranking Using Hyperlink, Synonyms,
Homonyms, Ontology, Indexing of Document, Measuring Retrieval Efficiencies, Information Retrieval and Structured Data.
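As a minimal taste of the SQL topics in this course, a self-contained example using Python's built-in `sqlite3` module (the table and rows are invented for illustration):

```python
import sqlite3

# In-memory database; schema and data are hypothetical examples.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
con.executemany("INSERT INTO student (name, dept) VALUES (?, ?)",
                [("Rahim", "CSE"), ("Karim", "EEE"), ("Salma", "CSE")])

# A basic query with a WHERE clause and ORDER BY, using a bound parameter
rows = con.execute(
    "SELECT name FROM student WHERE dept = ? ORDER BY name", ("CSE",)
).fetchall()
print(rows)  # [('Rahim',), ('Salma',)]
```

Bound parameters (`?`) rather than string concatenation are the standard defense against SQL injection, a point that recurs in the integrity and authorization topics above.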
Books Recommended:
1. Database System Concepts – Abraham Silberschatz, Henry F. Korth, S. Sudarshan, 5th edition
2. Fundamentals of Database Systems – Ramez Elmasri, Shamkant B. Navathe, Benjamin/Cummings, 1994
3. Database Principles, Programming, Performance – Patrick O'Neil, Morgan Kaufmann, 1994
4. A First Course in Database Systems – Jeffrey D. Ullman, Jennifer Widom, Prentice Hall, 1997
5. Database Management Systems – Raghu Ramakrishnan, McGraw-Hill, 1996
CSE-322 Database Systems Lab
1.5 Credits
Introduction: What is a database, MySQL, Oracle, SQL, Datatypes, SQL/PLSQL, Oracle Software Installation, User Types, Creating Users, Granting. Basic Parts of Speech in SQL: Creating a Newspaper
Table, Select Command (WHERE, ORDER BY), Creating Views, Getting Text Information and Changing It, Concatenation, Cut-and-paste string functions (RPAD, LPAD, TRIM, LTRIM, RTRIM, LOWER, UPPER,
INITCAP, LENGTH, SUBSTR, INSTR, SOUNDEX). Playing the Numbers: Addition, Subtraction, Multiplication, Division, NVL, ABS, FLOOR, MOD, POWER, SQRT, EXP, LN, LOG, ROUND, AVG, MAX, MIN, COUNT,
SUM, DISTINCT, subqueries for MAX/MIN. Grouping Things Together: GROUP BY, HAVING, ORDER BY, Views, Renaming Columns with Aliases. When One Query Depends upon Another: UNION, INTERSECT, MINUS, NOT
IN, NOT EXISTS. Changing Data: INSERT, UPDATE, MERGE, DELETE, ROLLBACK, AUTOCOMMIT, COMMIT, SAVEPOINTS, Multi-table INSERT, DELETE, UPDATE, MERGE. Creating and Altering Tables and Views: Altering
a table, Dropping a table, Creating a view, Creating a table from a table. By What Authority: Creating Users, Granting Privileges, Password Management.
An Introduction to PL/SQL: Implement a few problems using PL/SQL (e.g., prime number testing, factorial, calculating the area of a circle). An Introduction to Triggers and Procedures: Implement a
few problems using triggers and procedures. An Introduction to Indexing: Implement indexing on a large database and observe the difference between indexed and non-indexed databases.
CSE-331 Computer Architecture
3 Credits
Introduction to Computer Architecture: Overview and history; Cost factor; Performance metrics and evaluating computer designs. Instruction set design: Von Neumann machine cycle, Memory
addressing, Classifying instruction set architectures, RISC versus CISC, Micro programmed vs. hardwired control unit. Memory System Design: Cache memory; Basic cache structure and design; Fully
associative, direct, and set associative mapping; Analyzing cache effectiveness; Replacement policies; Writing to a cache; Multiple caches; Upgrading a cache; Main Memory; Virtual memory
structure, and design; Paging; Replacement strategies. Pipelining: General considerations; Comparison of pipelined and nonpipelined computers; Instruction and arithmetic pipelines, Structural,
Data and Branch hazards. Multiprocessors and Multi-core Computers: SISD, SIMD, and MIMD architectures; Centralized and distributed shared memory- architectures; Multi-core Processor architecture.
Input/output Devices: Performance measure, Types of I/O device, Buses and interface to CPU, RAID. Pipelining: Basic pipelining, Pipeline Hazards. Parallel Processing.
Books Recommended:
1. Computer Architecture and Organization – John P. Hayes, 3rd Edition, McGraw-Hill
2. Computer Organization and Design: The Hardware/Software Interface – David A. Patterson and John L. Hennessy
MTH-203E Differential Equations, Laplace Transforms and Fourier Analysis
3 Credits
Differential Equation: Formation, Degree and Order of differential equation, Complete and Particular solution. Solution of ordinary differential equation of first order and first degree (special
forms). Linear differential equation with constant coefficients. Homogeneous linear differential equation. Solution of equation by the method of Variation of parameters. Solution of linear
differential equations in series by the Frobenius method. Solution of simultaneous equations of the form dx/P = dy/Q = dz/R. Laplace Transforms: Definition, Laplace transforms of some elementary functions,
sufficient conditions for existence of Laplace transforms, Inverse Laplace transforms, Laplace transforms of derivatives, Unit step function, Periodic function, Some special theorems on Laplace
transforms, Partial fraction, Solution of differential equations by Laplace transforms, Evaluation of Improper Integrals. Fourier Analysis: Fourier series (Real and complex form). Finite
transforms, Fourier Integrals, Fourier transforms and application in solving boundary value problems.
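A representative worked example from the Laplace transform portion of this course: the transform of an elementary exponential function follows directly from the definition.

```latex
\mathcal{L}\{e^{at}\}
  = \int_{0}^{\infty} e^{-st}\, e^{at}\, dt
  = \int_{0}^{\infty} e^{-(s-a)t}\, dt
  = \left[ \frac{e^{-(s-a)t}}{-(s-a)} \right]_{0}^{\infty}
  = \frac{1}{s-a}, \qquad s > a.
```

The condition s > a is the sufficient condition for existence mentioned above: the integrand must decay as t → ∞ for the improper integral to converge.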
Books Recommended:
1. Differential Equations – H. T. H. Piaggio; 1st Indian Edition, 1985, S. K. Jain for CBS Publishers
2. A Text Book on Integral Calculus with Differential Equations – Mohammad, Bhattacharjee & Latif, 4th Edition, 2010; S. Chakravarty, Gonith Prokashon
3. Schaum’s Outline Series of the Theory and Problems on Laplace Transforms – Murray R. Spiegel; Revised Edition, 2003; McGraw Hill Book Company
4. Differential Equation – Md. Abu Eusuf; Latest Edition; Abdullah Al Mashud Publisher
CSE 300 Software Developments
1.5 Credits
Students will work in groups or individually to produce high quality software in different languages. Students will write structured programs and use proper documentation. Advanced programming
techniques in Mobile Application.
Books Recommended:
1. Android Application Development Cookbook- Wei-Meng Lee
2. The Complete Android Guide- Kevin Purdy
CSE 301: E-Commerce and Web Engineering
3 Credits
E-Commerce Basics: E-Commerce Definition, Internet History and E-Commerce Development, Business-to-Business E-Commerce, Business-to-Consumer E-Commerce, E-Commerce Stages and Processes,
E-Commerce Challenges, E-Commerce Opportunities. E-Commerce Options: Internet Access Requirements, Web Hosting Requirements, Entry-Level Options, Storefront and Template Services, E-Commerce
Software Packages, E-Commerce Developers, E-Business Solutions. Marketing Issues: Online and Offline Market Research, Data Collection, Domain Names, Advertising Options, E-Mail Marketing, Search
Engines, Web Site Monitoring, Incentives. Planning and Development: Web Site Goals, International Issues, Planning Stages, Resource Allocation, Content Development, Site Map Development, Web Site
Design Principles, Web Site Design Tools, Web Page Programming Tools, Data-Processing Tools. E-Commerce Components: Navigation Aids, Web Site Search Tools, Databases, Forms, Shopping Carts,
Checkout Procedures, Shipping Options. Payment Processing: Electronic Payment Issues, E-Cash, Credit Card Issues, Merchant Accounts, Online Payment Services, Transaction Processing, Taxation
Issues, Mobile Commerce (M-Commerce). Security Issues: Security Issues and Threats, Security Procedures, Encryption, Digital Certificates, SSL and SET Technologies, Authentication and
Identification, Security Providers, Privacy Policies, Legal Issues. Customer Service: Customer Service Issues, E-Mail Support, Telephone Support, Live Help Services, Customer Discussion Forums,
Value-Added Options. ASP.NET programming model, Web development in Microsoft Visual Studio .NET, Anatomy of an ASP.NET page, ASP.NET core server controls, ADO.NET data providers, ADO.NET data
containers, The data-binding model.
Books Recommended:
1. E-Commerce – Jeffrey F. Rayport, Bernard J. Jaworski, McGraw-Hill
2. Understanding Electronic Commerce – David Kosiur, Microsoft Press
3. Introduction to E-Commerce – Jeffrey F. Rayport, et al., McGraw-Hill
4. E-Commerce Strategies, Charles Trepper
CSE 302: E-Commerce and Web Engineering Lab
1.5 Credits
Laboratory works based on CSE 301.
CSE-303 Operating Systems
3 Credits
Introduction: Operating System Concepts, Computer System Structures, Operating System Structures, Operating System Operations, Protection and Security, Special-Purpose Systems. Fundamentals of
OS: OS services and components, multitasking, multiprogramming, time sharing, buffering, spooling. Process Management: Process Concept, Process Scheduling, Process State, Process Management,
Interprocess Communication, interaction between processes and OS, Communication in Client-Server Systems, Threading, Multithreading, Process Synchronization. Concurrency control: Concurrency and
race conditions, mutual exclusion requirements, semaphores, monitors, classical IPC problem and solutions, Dead locks – characterization, detection, recovery, avoidance and prevention. Memory
Management: Memory partitioning, Swapping, Paging, Segmentation, Virtual memory – Concepts, Overlays, Demand Paging, Performance of demand paging, Page replacement algorithm, Allocation
algorithms. Storage Management: Principles of I/O hardware, Principles of I/O software, Secondary storage structure, Disk structure, Disk scheduling, Disk Management, Swap-space Management, Disk
reliability, Stable storage implementation. File Concept: File support, Access methods, Allocation methods, Directory systems, File Protection, Free Space Management. Protection & Security: Goals
of protection, Domain of protection, Access matrix, Implementation of access matrix, Revocation of access rights, The security problem, Authentication, One-time passwords, Program threats, System
threats, Threat monitoring, Encryption, Computer-security classification. Distributed Systems: Types of Distributed Operating System, Communication Protocols, Distributed File Systems, Naming and
Transparency, Remote File Access, Stateful Versus Stateless Service, File Replication. Case Studies: Study of a representative Operating Systems.
Books Recommended:
1. Operating System Concepts – Silberschatz & Galvin, Wiley, 2000 (7th Edition)
2. Operating Systems – Achyut S. Godbole, Tata McGraw-Hill (2nd Edition)
3. Understanding Operating Systems – Flynn & McHoes, Thomson (4th Edition)
4. Operating Systems Design & Implementation – Andrew S. Tanenbaum, Albert S. Woodhull, Pearson
5. Modern Operating System – Andrew S. Tanenbaum
CSE-304 Operating Systems Lab
1.5 Credits
Thread programming: Creating thread and thread synchronization. Process Programming: The Process ID, Running a New Process, Terminating a Process, Waiting for Terminated Child Processes, Users
and Groups, Sessions and Process Groups. Concurrent Programming: Using fork, exec for multi-task programs. File Operations: File sharing across processes, System lock table, Permission and file
locking, Mapping Files into Memory, Synchronized, Synchronous, and Asynchronous Operations, I/O Schedulers and I/O Performance.
Communicating across processes: Using different signals, Pipes, Message queue, Semaphore, Semaphore arithmetic and Shared memory.
Books Recommended:
1. The ‘C’ Odyssey UNIX-The Open, Boundless C – Meeta Gandhi, Tilak Shetty, Rajiv Shah.
2. Beginning Linux Programming – Neil Matthew and Richard Stones
3. Linux System Programming – Robert Love
CSE-315 Data Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
Books Recommended:
1. Introduction to Data Communications-Eugene Blanchard
2. Data Communication Principles – Ahmad, Aftab
3. Data Communication & Networking – S. Bagad, I. A. Dhotre
4. Data Communications and Networking- Behrouz A. Forouzan
CSE-351 Management Information Systems
3 Credits
Introduction to MIS: Management Information System Concept. Definitions, Role of MIS, Approaches of MIS development. MIS and Computer: Computer Hardware for Information System, Computer Software
for Information System, Data Communication System, Database Management Technology, Client-Server Technology. Decision-Support System: Introduction, Evolution of DSS, Future development of DSS.
Application of MIS: Applications in manufacturing Sector, Applications in service sector, Case Studies.
Books Recommended:
1. Management Information Systems – James O'Brien, Tata McGraw-Hill
2. Management Information Systems – Post and Anderson, Tata McGraw-Hill
CSE-403 Compiler Design
3 Credits
Introduction to compilers: Introductory concepts, types of compilers, applications, phases of a compiler. Lexical analysis: Role of the lexical analyzer, input buffering, token specification,
recognition of tokens, symbol tables. Parsing: Parser and its role, context free grammars, top-down parsing. Syntax-directed translation: Syntax-directed definitions, construction of syntax
trees, top-down translation. Type checking: Type systems, type expressions, static and dynamic checking of types, error recovery. Run-time organization: Run-time storage organization, storage
strategies. Intermediate code generation: Intermediate languages, declarations, assignment statements. Code optimization: Basic concepts of code optimization, principal sources of optimization.
Code generation. Features of some common compilers: Characteristic features of C, Pascal and Fortran compilers.
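To illustrate the lexical analysis phase described above, a toy lexer for arithmetic expressions built on regular expressions (the token names and grammar are invented for illustration; real compilers use generator tools such as Flex):

```python
import re

# Each token class is a named group; order matters (NUMBER before IDENT).
TOKEN_SPEC = [
    ("NUMBER", r"\d+(?:\.\d+)?"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(code):
    """Yield (kind, lexeme) pairs, skipping whitespace."""
    for m in MASTER.finditer(code):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

print(list(tokenize("area = 3.14 * r")))
# [('IDENT', 'area'), ('OP', '='), ('NUMBER', '3.14'), ('OP', '*'), ('IDENT', 'r')]
```

The alternation of named groups is the standard pure-Python tokenizer pattern; maximal munch falls out of the regex engine trying each alternative left to right at every position.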
Books Recommended:
1. Compilers: Principles, Techniques, and Tools – Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Second Edition.
CSE-404 Compiler Design Lab
1.5 Credits
How to use scanner and parser generator tools (e.g., Flex, JFlex, CUP, Yacc). For a given simple source language, designing and implementing a lexical analyzer, symbol tables, parser,
intermediate code generator and code generator.
MTH-301 Statistics and Probability
2 Credits
Frequency distribution; mean, median, mode and other measures of central tendency; standard deviation and other measures of dispersion; moments, skewness and kurtosis; elementary probability
theory and discontinuous probability distributions, e.g. binomial, Poisson and negative binomial; continuous probability distributions, e.g. normal and exponential; characteristics of
distributions; hypothesis testing and regression analysis.
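As a small illustration of the discrete distributions above, the binomial and Poisson probability mass functions computed from their definitions (the chosen parameters are illustrative):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

# Binomial(10, 0.1) is well approximated by Poisson(1) when p is small
print(round(binomial_pmf(1, 10, 0.1), 4))  # 0.3874
print(round(poisson_pmf(1, 1.0), 4))       # 0.3679
```

The closeness of the two values demonstrates the classical Poisson limit of the binomial when n is large, p is small, and np is moderate.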
Books Recommended:
1. Introduction to Mathematical Statistics – Hogg
2. Probability and Statistics for Scientists and Engineers – Walpole
CSE-333 Microprocessors and Assembly Language
3 Credits
Introduction to different types of microprocessors, Microprocessor architecture, instruction set, interfacing, I/O operation, interrupt structure, DMA, Microprocessor interface ICs. Advanced
microprocessor concept of microprocessor based system design. Machine and Assembly instruction types and their formats. Character representation instructions. Instruction execution. Machine
language programming. Instruction sets and their implementations. The Assembly process. Addressing methods. Subroutines, macros and files. I/O programming interrupts and concurrent processes.
Books Recommended:
1. Microprocessors & Interfacing- Douglas V. Hall
2. Microprocessors – Harunur Rashid
3. Microprocessor & Microcomputer Based System Design – Rafiquzzaman
4. Microcomputer Systems: The 8086/8088 Family – Y. Liu & G. A. Gibson
CSE-334 Microprocessors and Assembly Language Lab
1.5 Credits
Laboratory works based on CSE 333.
CSE-339 Theory of Computation
2 Credits
Finite Automata: Deterministic and nondeterministic finite automata and their equivalence. Equivalence with regular expressions. Closure properties. The pumping lemma and applications.
Context-free Grammars: Definitions. Parse trees. The pumping lemma for CFLs and applications. Normal forms. General parsing. Sketch of equivalence with pushdown automata. Turing Machines:
Designing simple TMs. Variations in the basic model (multi-tape, multi-head, nondeterminism). Church-Turing thesis and evidence to support it through the study of other models. Undecidability: The
undecidability of the halting problem. Reductions to other problems. Reduction in general.
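A minimal illustration of the finite automata topic above: simulating a DFA as a transition table (the example machine, accepting strings with an even number of 1s, is a standard textbook exercise):

```python
def run_dfa(delta, start, accept, s):
    """Simulate a DFA; delta maps (state, symbol) -> state."""
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accept

# DFA over {0, 1} accepting strings with an even number of 1s
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd",  ("odd", "1"): "even"}
print(run_dfa(delta, "even", {"even"}, "1001"))  # True
```

The simulation runs in time linear in the input length with constant state, which is precisely what "finite automaton" means computationally.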
Books Recommended:
1. Introduction to Languages and the Theory of Computation, 2nd Edition – John C. Martin, McGraw-Hill Publications, 1997.
CSE 401 Software Engineering
3 credits
Concepts of software engineering: requirements definition, modularity, structured design, data specifications, functional specifications, verification, documentation, software maintenance.
Software support tools. Software project organization, quality assurance, management and communication skills.
Books Recommended:
1. Software Engineering: A Practitioner’s Approach, 6th Edition – Roger S. Pressman
2. Software Engineering Concepts – Richard Fairley
3. Software Engineering Environments – Robert N. Charette
CSE-421 Computer Network
3 Credits
Network architectures-layered architectures and ISO reference model: data link protocols, error control, HDLC, X.25, flow and congestion control, virtual terminal protocol, data security. Local
area networks, satellite networks, packet radio networks. Introduction to ARPANET, SNA and DECNET. Topological design and queuing models for network and distributed computing systems.
Books Recommended:
1. Computer Networks-A. S. Tanenbaum
2. Introduction to Networking- Barry Nance
3. Data Communications, Computer Networks & Open Systems- F. Halsall
4. TCP/IP – Sidnie Feit
5. Data Communications and Networking-Behrouz A. Forouzan
CSE-422 Computer Network Lab
1.5 Credits
Laboratory works based on CSE 421.
CSE-435 Computer Interfacing
3 Credits
Interface components and their characteristics, microprocessor I/O. Disk, Drums, and Printers. Optical displays and sensors. High power interface devices, transducers, stepper motors and
peripheral devices.
Books Recommended:
1. Microprocessors & Interfacing-Douglas V. Hall
2. Microprocessor & Microcomputer based System Design – Rafiquzzaman
3. Microcomputer Interfacing-Artwick
4. Microcomputer Interfacing-Ramesh Goanker
5. Designing User Interfaces-James E. Powell
CSE-436 Computer Interfacing Lab
1.5 Credits
Laboratory works based on CSE 435.
CSE-405 Artificial Intelligence & Expert System
3 Credits
What is Artificial Intelligence: The AI problems, The underlying assumption, What is an AI technique. Problems, Problem spaces and Search: Defining the problem as a state space search, Production
system, Problem characteristics. Heuristics Search Techniques: Generate and Test, Hill climbing, Best First Search, Problem Reduction, Constraint Satisfaction, Means-Ends Analysis. Knowledge
Representation Issues: Representation and Mappings, Approaches to knowledge Representation, Issues in Knowledge representation. Using Predicate logic: Representing simple facts in logic,
Representing Instance and Isa relationships, Computable functions and Predicates, Resolution. Representing Knowledge using Rules: Procedural versus Declarative Knowledge, Logic Programming,
Forward versus Backward Reasoning, Matching. Game Playing: Overview, The Minimax Search Procedure, Adding Alpha-Beta Cutoffs, Additional Refinements, Iterative Deepening. Planning: Overview, An
Example Domain: The Blocks World, Components of a Planning System, Goal Stack Planning. Understanding: What is Understanding, What Makes Understanding Hard, Understanding as Constraint
Satisfaction. Natural Language Processing: Introduction, Syntactic Processing, Semantic Analysis, Discourse and Pragmatic Processing. Expert Systems: Representing and Using Domain Knowledge,
Expert System Shells, Explanation, Knowledge Acquisition. AI Programming Languages: Prolog, LISP, Python.
Books Recommended:
1. Artificial Intelligence: A Modern Approach – S. Russell and P. Norvig
2. Introduction to Artificial Intelligence and Expert System-Dan W. Peterson
3. Artificial Intelligence-E. Rich and K. Knight
4. An Introduction to Neural Computing-C. F. Chabris and T. Jackson
5. Artificial Intelligence Using C – Herbert Schildt
CSE-406 Artificial Intelligence & Expert System Lab
1.5 Credits
Students will have to understand the functionalities of intelligent agents and how the agents will solve general problems. Students have to use a high-level language (Python, Prolog, LISP) to
solve the following problems:
Backtracking: State space, Constraint satisfaction, Branch and bound. Example: 8-queen, 8- puzzle, Crypt-arithmetic. BFS and production: Water jugs problem, The missionaries and cannibal problem.
Heuristic and recursion: Tic-tac-toe, Simple block world, Goal stack planning, The Tower of Hanoi. Question answering: The monkey and bananas problem.
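A sketch of the water jugs problem listed above, solved as a BFS over (a, b) states (jug capacities and goal are the classic 4/3/2 instance; the state encoding is one reasonable choice):

```python
from collections import deque

def water_jugs(cap_a, cap_b, goal):
    """BFS over jug states (a, b); returns a shortest state sequence or None."""
    start = (0, 0)
    parent = {start: None}
    q = deque([start])
    while q:
        a, b = q.popleft()
        if goal in (a, b):
            path, s = [], (a, b)
            while s is not None:          # walk parents back to the start
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)       # amount movable from A to B
        pour_ba = min(b, cap_a - a)       # amount movable from B to A
        moves = [(cap_a, b), (a, cap_b),  # fill A, fill B
                 (0, b), (a, 0),          # empty A, empty B
                 (a - pour_ab, b + pour_ab),
                 (a + pour_ba, b - pour_ba)]
        for s in moves:
            if s not in parent:
                parent[s] = (a, b)
                q.append(s)
    return None

print(water_jugs(4, 3, 2))
```

Because BFS explores states in order of distance from the start, the first state reaching the goal yields a minimum-length pour sequence.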
CSE-425 Digital Signal Processing
3 Credits
Introduction to digital signal processing (DSP): Discrete-time signals and systems, analog to digital conversion, impulse response, finite impulse response (FIR) and infinite impulse response
(IIR) of discrete-time systems, difference equation, convolution, transient and steady state response. Discrete transformations: Discrete Fourier series, discrete-time Fourier series, discrete
Fourier transform (DFT) and properties, fast Fourier transform (FFT), inverse fast Fourier transform, z-transformation – properties, transfer function, poles and zeros and inverse z-transform.
Correlation: circular convolution, auto-correlation and cross correlation. Digital Filters: FIR filters- linear phase filters, specifications, design using window, optimal and frequency sampling
methods; IIR filters- specifications, design using impulse invariant, bi-linear z-transformation, least-square methods and finite precision effects. Digital signal processor TMS family,
Application of digital signal processing.
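As a compact illustration of the DFT topic above, a naive O(N²) transform computed directly from the definition (a production system would use an FFT; the test signal is an illustrative cosine):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform of a sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure cosine at bin 1 concentrates its energy in bins 1 and N-1
x = [math.cos(2 * math.pi * t / 8) for t in range(8)]
X = dft(x)
print([round(abs(v), 6) for v in X])  # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

The two symmetric peaks of magnitude N/2 are the expected spectrum of a real-valued cosine, since cos splits into two complex exponentials of half amplitude.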
Books Recommended:
1. Digital Signal Processing-John G. Proakis
2. Signals and Systems-Simon Haykin and Barry Van Veen
3. Digital Signal Processing-R. W. Schafer
4. Digital Signal Processing-Ifeachor
5. Introduction to DSP-Johnny R. Johnson
CSE-426 Digital Signal Processing Lab
1.5 Credits
Laboratory works based on CSE 425.
CSE-431 Computer Graphics
3 Credits
Introduction to Graphical data processing. Fundamentals of interactive graphics programming. Architecture of display devices and connectivity to a computer. Implementation of graphics concepts of
two-dimensional and three-dimensional viewing, clipping and transformations. Hidden line algorithms. Raster graphics concepts: Architecture, algorithms and other image synthesis methods. Design
of interactive graphic conversations.
Books Recommended:
1. Principles of Interactive Computer Graphics – William M. Newman and Robert F. Sproull, McGraw-Hill, 2nd edition, 1978
2. Computer Graphics: Principle and Practice in C-James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley, 2nd edition, 1995
CSE-432 Computer Graphics Lab
1.5 Credits
Laboratory works based on CSE 431.
CSE-400 Project / Thesis
3 Credits
Study of problems in the field of Computer Science and Engineering. This course will be initiated in the 3rd year or early in the 4th year.
CSE-402 Comprehensive Viva Voce
2 Credits
OPTIONAL COURSES
CSE-407 Simulation and Modeling
3 Credits
Simulation methods, model building, random number generator, statistical analysis of results, validation and verification techniques, Digital simulation of continuous systems. Simulation and
analytical methods, for analysis of computer systems and practical problems in business and practice. Introduction to the simulation packages.
Books Recommended:
1. System Modeling and Simulation- V.P. Singh
2. System Design, Modeling, and Simulation Using Ptolemy II – Claudius Ptolemaeus
CSE-408 Simulation and Modeling Lab
1.5 Credits
Laboratory works based on CSE 407.
CSE-411 VLSI Design
3 Credits
Design and analysis techniques for VLSI circuits. Design of reliable VLSI circuits, noise considerations, design and operation of large fan out and fan in circuits, clocking methodologies,
techniques for data path and data control design. Simulation techniques. Parallel processing, special purpose architectures in VLSI. VLSI layouts partitioning and placement routing and wiring in
VLSI. Reliability aspects of VLSI design.
Books Recommended:
1. Basic VLSI Design – Douglas A. Pucknell, Kamran Eshraghian
2. VLSI Technology – S. M. Sze
3. Introduction to VLSI Systems – C. A. Mead and L. A. Conway
CSE-412 VLSI Design Lab
1.5 Credits
Laboratory works based on CSE 411.
CSE-413 Information System Design
3 Credits
Information, general concepts of formal information systems, analysis of information requirements for modern organizations, modern data processing technology and its application, information
systems structures, designing information outputs, classifying and coding data, physical storage media considerations, logical data, organization, systems analysis, general systems design, detail
systems design. Project management and documentation. Group development of an information system project. Includes all phases of software life cycles from requirement analysis to the completion
of a fully implemented system.
Books Recommended:
1. Information Systems Analysis and Design – Phil Agre, Christine Borgman
2. Analysis and Design of Information Systems-Langer, Arthur M.
CSE-414 Information System Design Lab
1.5 Credits
Laboratory works based on CSE 413.
CSE-419 Graph Theory
3 Credits
Introduction, Fundamental concepts, Trees, Spanning trees in graphs, Distance in graphs, Eulerian graphs, Digraphs, Matching and factors, Cuts and connectivity, k-connected graphs, Network flow
problems, Graph coloring: vertex coloring and edge coloring, Line graphs, Hamiltonian cycles, Planar graphs, Perfect graphs.
Books Recommended:
1. Graph Theory and Its Applications – Jonathan L. Gross, Jay Yellen
2. A Textbook of Graph Theory – R. Balakrishnan, K. Ranganathan
CSE-420 Graph Theory Lab
1.5 Credits
Laboratory works based on CSE 419.
CSE-423 Computer System Performance Evaluations
3 Credits
Review of system analysis, approaches to system development, feasibility assessment, hardware and software acquisition. Procurement, workload characterization, the representation of measurement
data, instrumentation: software monitors, hardware monitors, capacity planning, bottleneck detection, system and program tuning, simulation and analytical models and their application, case
studies.
Books Recommended:
1. Computer Systems Performance Evaluation and Prediction– Paul J. Fortier and Howard E. Michel
2. The Art of Computer Systems Performance Analysis- Jain
CSE-424 Computer System Performance Evaluation Lab
1.5 Credits
Laboratory based on CSE 423.
CSE-437 Pattern Recognition
3 Credits
Introduction to pattern recognition: features, classifications, learning. Statistical methods, structural methods and hybrid method. Applications to speech recognition, remote sensing and
biomedical area, Learning algorithms. Syntactic approach: Introduction to pattern grammars and languages, parsing techniques. Pattern recognition in computer aided design.
Books Recommended:
1. Pattern Recognition – S. Theodoridis, K. Koutroumbas
2. Pattern Recognition and Machine Learning- Christopher M. Bishop
3. Pattern Recognition and Neural Networks – Brian D. Ripley
CSE-438 Pattern Recognition Lab
1.5 Credits
Laboratory works based on CSE 437.
CSE-453 Digital Image Processing
3 Credits
Image Processing: Image Fundamentals, Image Enhancement: Background, Enhancement by Point-Processing, Spatial Filtering, Enhancement in Frequency Domain, Color Image Processing. Image
Restoration: Degradation Model, Diagonalization of Circulant and Block-Circulant Matrices, Algebraic Approach to Restoration, Inverse Filtering, Geometric Transformation. Image Segmentation:
Detection of Discontinuities, Edge Linking and Boundary Detection, Thresholding, Region-Oriented Segmentation, The use of Motion in Segmentation. Image Compression.
Books Recommended:
1. Digital Image Processing-Rafael C. Gonzalez and Richard E. Woods, Pearson Education Asia.
2. Non-Linear Digital Filter : Principles and Applications –I. Pitas and A. N. Venetsanopoulos, Kluwer Academic Publications.
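The thresholding step listed under Image Segmentation can be sketched in a few lines. This is a pure-Python illustration on a toy "image" (nested lists of 8-bit values); the pixel values and the threshold are hypothetical:

```python
# Global thresholding: the simplest segmentation operation in the course
# outline. A pixel becomes foreground (1) if it exceeds the threshold t.

def threshold(image, t):
    """Return a binary image: 1 where pixel > t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in image]

img = [[ 12,  40, 200],
       [250,  90,  30],
       [128, 128, 129]]
print(threshold(img, 128))
# [[0, 0, 1], [1, 0, 0], [0, 0, 1]]
```

Real image-processing libraries apply the same idea over arrays; choosing t automatically (e.g. from the histogram) is part of the thresholding topic.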
CSE-454 Digital Image Processing Lab
1.5 Credits
Laboratory works based on CSE 453.
CSE-455 Wireless and Sensor Networks
3 Credits
Introduction: applications; Localization and tracking: tracking multiple objects; Medium Access Control: S-MAC, IEEE 802.15.4 and ZigBee; Geographic and energy-aware routing; Attribute-Based
Routing: directed diffusion, rumor routing, geographic hash tables; Infrastructure establishment: topology control, clustering, time synchronization; Sensor tasking and control: task-driven
sensing, information-based sensor tasking, joint routing and information aggregation; Sensor network databases: challenges, querying the physical environment, in-network aggregation, data indices
and range queries, distributed hierarchical aggregation; Sensor network platforms and tools: sensor node hardware, sensor network programming challenges; Other state-of-the-art related topics.
Books Recommended:
1. Wireless Sensor Networks – C. S. Raghavendra, Krishna M. Sivalingam and Taieb Znati
2. Wireless Sensor Networks: An Information Processing Approach (The Morgan Kaufmann Series in Networking) – Feng Zhao, Leonidas Guibas
CSE-456 Wireless and Sensor Networks Lab
1.5 Credits
Laboratory works based on CSE 455.
CSE-457 Bio-Informatics
3 Credits
Cell concept: Structural organization of plant and animal cells, nucleus, cell membrane and cell wall. Cell division: Introducing chromosome, Mitosis, Meiosis and production of haploid/diploid
cell. Nucleic acids: Structure and properties of different forms of DNA and RNA; DNA replication. Proteins: Structure and classification, Central dogma of molecular biology. Genetic code: A brief
account. Genetics: Mendel’s laws of inheritance, Organization of genetic material of prokaryotes and eukaryotes, C-Value paradox, repetitive DNA, structure of chromatin – euchromatin and
heterochromatin, chromosome organization and banding patterns, structure of gene – intron, exon and their relationships, overlapping gene, regulatory sequence (lac operon), Molecular mechanism of
general recombination, gene conversion, Evolution and types of mutation, molecular mechanisms of mutation, site-directed mutagenesis, transposons in mutation. Introduction to Bioinformatics:
Definition and History of Bioinformatics, Human Genome Project, Internet and Bioinformatics, Applications of Bioinformatics. Sequence alignment: Dynamic programming. Global versus local. Scoring
matrices. The Blast family of programs. Significance of alignments, Aligning more than two sequences. Genomes alignment. Structure-based alignment. Hidden Markov Models in Bioinformatics:
Definition and applications in Bioinformatics. Examples of the Viterbi, the Forward and the Backward algorithms. Parameter estimation for HMMs. Trees: The Phylogeny problem. Distance methods,
parsimony, bootstrap. Stationary Markov processes. Rate matrices. Maximum likelihood. Felsenstein’s post-order traversal. Finding regulatory elements: Finding regulatory elements in aligned and
unaligned sequences. Gibbs sampling. Introduction to microarray data analysis: Steady state and time series microarray data. From microarray data to biological networks. Identifying regulatory
elements using microarray data. Pi calculus: Description of biological networks; stochastic Pi calculus, Gillespie algorithm.
Books Recommended:
1. An Introduction to Bioinformatics Algorithms – Neil C. Jones and Pavel A. Pevzner
2. Introduction to Bioinformatics – Stephen A. Krawetz, David D. Womble
3. Introduction to Bioinformatics – Arthur M. Lesk
CSE-458 Bio-Informatics Lab
1.5 Credits
Laboratory works based on CSE-457.
CSE-461 Neural Networks
3 Credits
Fundamentals of Neural Networks; Backpropagation and related training algorithms; Hebbian learning; Kohonen-Grossberg learning; The BAM and the Hopfield Memory; Simulated Annealing; Different
types of Neural Networks: Counterpropagation, Probabilistic, Radial Basis Function, Generalized Regression, etc.; Adaptive Resonance Theory; Dynamic Systems and Neural Control; The Boltzmann
Machine; Self-organizing Maps; Spatiotemporal Pattern Classification; The Neocognitron; Practical Aspects of Neural Networks.
Books Recommended:
1. An Introduction to Neural Networks – Prof. Leslie Smith
2. Fundamentals of Artificial Neural Networks – Mohamad H. Hassoun
CSE-462 Neural Networks Lab
1.5 Credits
Laboratory works based on CSE 461.
CSE-463 Machine Learning
3 Credits
Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system- training data, concept representation, function approximation.
Inductive Classification: The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses. Finding maximally specific hypotheses.
Version spaces and the candidate elimination algorithm. Learning conjunctive concepts. The importance of inductive bias. Decision Tree Learning: Representing concepts as decision trees. Recursive
induction of decision trees. Picking the best splitting attribute: entropy and information gain. Searching for simple trees and computational complexity. Occam’s razor. Overfitting, noisy data,
and pruning. Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms- cross-validation, learning curves, and statistical
hypothesis testing. Computational Learning Theory: Models of learnability- learning in the limit; probably approximately correct (PAC) learning. Sample complexity- quantifying the number of
examples needed to PAC learn. Computational complexity of training. Sample complexity for finite hypothesis spaces. PAC results for learning conjunctions, kDNF, and kCNF. Sample complexity for
infinite hypothesis spaces, Vapnik-Chervonenkis dimension. Rule Learning, Propositional and First-Order: Translating decision trees into rules. Heuristic rule induction using separate and conquer
and information gain. First-order Horn-clause induction (Inductive Logic Programming) and Foil. Learning recursive rules. Inverse resolution, Golem, and Progol. Artificial Neural Networks:
Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training. Multilayer networks and backpropagation. Hidden layers and
constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks. Support Vector Machines: Maximum margin linear separators. Quadratic
programming solution to finding maximum margin separators. Kernels for learning non-linear functions. Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm.
Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies. Instance-Based Learning: Constructing explicit
generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning. Text Classification: Bag of words representation. Vector space model and cosine
similarity. Relevance feedback and Rocchio algorithm. Versions of nearest neighbor and Naive Bayes for text. Clustering and Unsupervised Learning: Learning from unclassified data. Clustering.
Hierarchical Agglomerative Clustering. k-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised learning with EM using labeled and unlabeled data.
Books Recommended:
1. Artificial Intelligence: a modern approach (2nd edition), Russell, S. and P. Norvig, Prentice Hall, 2003
2. Introduction to Machine Learning – Ethem ALPAYDIN
3. Machine Learning – Tom Mitchell, McGraw Hill
4. Introduction to machine learning (2nd edition), Alpaydin, Ethem, MIT Press, 2010
5. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods – Nello Cristianini and John Shawe-Taylor, Cambridge University Press
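The "entropy and information gain" criterion from the Decision Tree Learning unit can be shown with a worked example. The toy label lists below are made up for illustration:

```python
# Entropy of a label set and the information gain of a candidate split,
# the quantities used to pick the best splitting attribute in ID3/C4.5.
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(parent, subsets):
    """Parent entropy minus the size-weighted entropy of the child subsets."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

parent = ["+", "+", "+", "-", "-", "-"]      # 3 vs 3: maximally impure
split  = [["+", "+", "+"], ["-", "-", "-"]]  # a split into pure children
print(entropy(parent))           # 1.0
print(info_gain(parent, split))  # 1.0 (a perfect split)
```

A split that leaves its children as mixed as the parent would score a gain near zero, which is why the attribute with the highest gain is chosen at each node.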
CSE-464 Machine Learning Lab
1.5 Credits
Students should learn the methods for extracting rules or learning from data, and get the necessary mathematical background to understand how the methods work and how to get the best performance
from them. To achieve these goals students should learn the following algorithms in the lab: K-Nearest Neighbor Classifier, Decision Trees, Model Selection and Empirical Methodologies, Linear
Classifiers: Perceptron and SVM, Naive Bayes Classifier, Basics of Clustering Analysis, K-means Clustering Algorithm, Hierarchical Clustering Algorithm. Upon completion of the course, the student
should be able to perform the following: a. Evaluate whether a learning system is required to address a particular problem. b. Understand how to use data for learning, model selection, and
testing to achieve the goals. c. Understand generally the relationship between model complexity and model performance, and be able to use this to design a strategy to improve an existing system.
d. Understand the advantages and disadvantages of the learning systems studied in the course, and decide which learning system is appropriate for a particular application. e. Make a naive Bayes
classifier and interpret the results as probabilities. f. Be able to apply clustering algorithms to simple data sets for clustering analysis.
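The K-Nearest Neighbor classifier, the first algorithm on the lab list, fits in a dozen lines. The 2-D points and labels below are invented for the sketch:

```python
# Minimal k-nearest-neighbor classification: predict the majority label
# among the k training points closest to the query (squared Euclidean
# distance; no ties handled — illustrative only).
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs; returns the majority label."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1)))  # a
print(knn_predict(train, (5, 4)))  # b
```

Varying k on a validation set is a natural first exercise in the model-selection topic that follows it on the list.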
CSE-465 Contemporary course on CSE
03 Credits
CSE-466 Contemporary course on CSE Lab
1.5 Credits
Laboratory works based on CSE 465.
• Detailed Syllabus (Fall 2015)
Department of Computer Science and Engineering (Day)
Syllabus for Fall 2015
First Year First Semester
Course Code Course Title Credit Hours
CSE-101 Computer Fundamentals 3.0
CSE-102 Computer Fundamentals Lab 1.0
PHY-101 Mechanics, Properties of Matter, Waves, Optics, Heat and Thermodynamics 3.0
MTH-101D Differential and Integral Calculus 3.0
CSE-105 Structured Programming Language 3.0
CSE-106 Structured Programming Language Lab 1.5
HUM-101D Oral and Written Communication in English Language 3.0
HUM-111D Bangladesh Studies: History and Society of Bangladesh 3.0
Total 20.50
First Year Second Semester
Course Code Course Title Credit Hours
CSE-107 Object Oriented Programming I 3.0
CSE-108 Object Oriented Programming I Lab 1.5
MTH-103D Geometry and Linear Algebra 3.0
PHY-103 Electromagnetism and Modern Physics 3.0
PHY-102 Physics Lab 1.5
ECE-101 Basic Electrical Engineering 3.0
ECE-102 Basic Electrical Engineering Lab 1.5
ECN-101 Principles of Economics 3.0
HUM-103 Language Composition and Comprehension 3.0
Total 22.50
Second Year First Semester
Course Code Course Title Credit Hours
CSE-201 Discrete Mathematics 3.0
CSE-211 Object Oriented Programming II 3.0
CSE-212 Object Oriented Programming II Lab 1.5
CSE-205 Data Structures 3.0
CSE-206 Data Structures Lab 1.5
MTH-201D Vector Analysis and Complex Variable 3.0
ECE-201 Electronic Devices and Circuits 3.0
ECE-202 Electronic Devices and circuits Lab 1.5
ACN-203 Cost and Management Accounting 3.0
Total 22.50
Second Year Second Semester
Course Code Course Title Credit Hours
CSE-207 Algorithms 3.0
CSE-208 Algorithms Lab 1.5
CSE-209 Numerical Methods 3.0
CSE-210 Numerical Methods Lab 1.5
CSE-231 Digital Logic Design 3.0
CSE-232 Digital Logic Design Lab 1.5
MTH-203D Differential Equations, Laplace Transforms and Fourier Analysis 3.0
CSE-200 Project Work 2.0
Total 18.50
Third Year First Semester
Course Code Course Title Credit Hours
CSE-321 Database Systems 3.0
CSE-322 Database Systems Lab 1.5
CSE-323 Web Engineering 2.0
CSE-324 Web Engineering Lab 1.5
CSE-331 Computer Architecture 3.0
MTH-301 Statistics and Probability 2.0
CSE-309 Cyber crime and Intellectual Property Law 3.0
CSE-310 Technical Report Writing and Presentation 1.5
CSE-326 Engineering Drawing 1.0
Total 18.50
Third Year Second Semester
Course Code Course Title Credit Hours
CSE-300 Software Development 2.0
CSE-303 Operating Systems 3.0
CSE-304 Operating Systems Lab 1.5
CSE-315 Data Communication 3.0
CSE-313 Microprocessors and Microcontroller 3.0
CSE-314 Microprocessors and Microcontroller Lab 1.5
CSE-337 System Analysis and Software Engineering 3.0
CSE-338 System Analysis and Software Engineering Lab 1.5
Total 18.50
Fourth Year First Semester
Course Code Course Title Credit Hours
CSE-425 Digital Signal Processing 3.0
CSE-426 Digital Signal Processing Lab 1.5
CSE-403 Compiler Design 3.0
CSE-404 Compiler Design Lab 1.5
CSE-421 Computer Network 3.0
CSE-422 Computer Network Lab 1.5
CSE-4** Option 3.0
CSE-4** Option Lab 1.5
Total 18.00
Fourth Year Second Semester
Course Code Course Title Credit Hours
CSE-415 Artificial Intelligence 3.0
CSE-416 Artificial Intelligence Lab 1.5
CSE-431 Computer Graphics 3.0
CSE-432 Computer Graphics Lab 1.5
CSE-435 Computer Interfacing 3.0
CSE-436 Computer Interfacing Lab 1.5
CSE-4** Option 3.0
CSE-4** Option Lab 1.5
CSE-400 Project/Thesis 3.0
CSE-402 Comprehensive Viva Voce 2.0
Total 23.00
Course Code Course Title Credit Hours
CSE-437 Pattern Recognition 3.0
CSE-411 VLSI Design 3.0
CSE-438 Pattern Recognition Lab 1.5
CSE-419 Graph Theory 3.0
CSE-420 Graph Theory Lab 1.5
CSE-423 Computer System Performance Evaluation 3.0
ECE-421 Digital Communication 3.0
CSE-424 Computer System Performance Evaluation Lab 1.5
CSE-408 Simulation and Modeling Lab 1.5
ECE-422 Digital Communication Lab 1.5
CSE-453 Digital Image Processing 3.0
CSE-454 Digital Image Processing Lab 1.5
CSE-455 Wireless and Sensor Networks 3.0
CSE-409 Computer Security and Cryptography 3.0
CSE-410 Computer Security and Cryptography Lab 1.5
CSE-456 Wireless and Sensor Networks Lab 1.5
CSE-457 Bioinformatics 3.0
CSE-458 Bioinformatics Lab 1.5
CSE-461 Neural Networks 3.0
CSE-462 Neural Networks Lab 1.5
CSE-463 Machine Learning 3.0
CSE-464 Machine Learning Lab 1.5
CSE-465 Contemporary course on CSE 3.0
CSE-466 Contemporary course Lab on CSE 1.5
CSE-467 Advanced Database System 3.0
CSE-468 Advanced Database System Lab 1.5
CSE-469 Natural Language Processing 3.0
CSE-470 Natural Language Processing Lab 1.5
CSE-400 Project / Thesis 3.0
CSE-402 Comprehensive Viva Voce 2.0
Total Credit Hours Required for Degree 162.00
Detailed Syllabus
CSE-101 Computer Fundamentals
3 Credits
Introduction: Definition, history & some applications of computers. Classification of computers: H/W and S/W computer components. Number systems: binary, octal, hexadecimal number systems and
operations, computer codes. Boolean algebra. Data processing techniques. Arithmetic & logic operations. Logic gates. Operating systems: MS Windows, UNIX. Application software: word processors,
WordPerfect, MS Word, Excel, FoxPro. Programming languages: machine language, assembly language, high-level languages, source & object language, 4th generation languages, compilers, translators &
interpreters. Elements of computer H/W. Data transmission & networking.
Books Recommended:
1. Introduction to Computers – Subramanian
2. Inside the PC – P. Norton
3. Introduction to Computer – Norton
4. Computer Fundamentals – Pradeep K. Sinha
CSE 102 Computer Fundamentals Lab
1.0 Credits
Laboratory works based on CSE 101.
PHY-101 Mechanics, Properties of Matter, Waves, Optics, Heat & Thermodynamics
3 Credits
Mechanics : Measurements, Motion in one Dimension, Motion in a Plane, Particle Dynamics, Work & Energy, Circular Motion, Simple Harmonic Motion, Rotation of Rigid Bodies, Central Force, Structure
of Matter, Mechanical Properties of Materials. Properties of Matter: Elasticity, Stresses & Strains, Young’s Modulus, Bulk Modulus, Rigidity Modulus, Elastic Limit, Poisson’s Ratio, Relation
between Elastic Constants, Bending of Beams. Fluid Motion, Equation of Continuity, Bernoulli’s Theorem, Viscosity, Stokes’ Law. Surface Energy & Surface Tension, Capillarity, Determination of
Surface Tension by Different Methods. Waves: Wave Motion & Propagation, Simple Harmonic Motion, Vibration Modes, Forced Vibrations, Vibration in Strings & Columns, Sound Wave & Its Velocity,
Doppler Effect, Elastic Waves, Ultrasonics, Practical Applications. Optics: Theories of Light, Huygens’ Principle, Electromagnetic Waves, Velocity of Light, Reflection, Refraction, Lenses,
Interference, Diffraction, Polarization. Heat & Thermodynamics: Temperature and Zeroth Law of Thermodynamics, Calorimetry, Thermal Equilibrium & Thermal Expansion, First Law of Thermodynamics,
Specific Heat, Heat Capacities, Equation of State, Change of Phase, Heat Transfer, Second Law of Thermodynamics, Carnot Cycle, Efficiency, Entropy, Kinetic Theory of Gases.
Books Recommended:
1. Fundamentals of Physics (Part I) – Halliday, Resnick & Walker
2. Modern Physics – Bernstein
3. Concepts of Modern Physics – Beiser
4. Electromagnetism and Modern Physics
5. Fundamental of Optics – Brizlal
6. Optics – Ghatak
7. Heat & Thermodynamics – Brizlal
8. University Physics with Modern Physics – Young
9. Essential University Physics Volume I – Wolfson
10. Essential University Physics Volume II – Wolfson
MTH-101 Differential and Integral Calculus
3 Credits
Differential Calculus: Real number System. Relations and functions, Functions of single variable, their Domain, Range, Graphs, Limit, Continuity and Differentiability. Successive Differentiation,
Leibnitz’s theorem, Rolle’s theorem, Mean value theorem, Taylor’s theorem, Maclaurin’s theorem, Lagrange’s and Cauchy’s forms of Remainder. Expansion of Functions in Taylor’s and Maclaurin’s
Series. Maximum and Minimum Values of Functions. Evaluation of Indeterminate forms of limit, L’Hospital’s Rule. Tangent and Normal. Curvature, Radius of Curvature, Centre of Curvature. Functions
of more than one variable, Limit, Continuity, Differentiability, Partial Derivatives, Euler’s Theorem. Jacobians. Integral Calculus: Indefinite Integrals and its definition. Methods of
Integration (Integration by substitution, Integration by parts, Integration by successive reduction). Fundamental theorem of Integral calculus. Definite Integral and its properties. Definite
Integral as the limit of a sum. Improper Integrals, Beta and Gamma Function, Its application in evaluating Integrals. Evaluation of Arc length, Areas, Surfaces of Revolution, Volumes of solids of
Revolution, Multiple Integrals.
Books Recommended:
1. Calculus – Howard Anton; 10th Edition; John Wiley and Sons
2. Differential Calculus – B. C. Das & B. N. Mukherjee; 54th Edition; U. N. Dhur & Sons PTL
3. Integral Calculus – B. C. Das & B. N. Mukherjee; 54th Edition; U. N. Dhur & Sons PTL
4. A Text Book on Differential Calculus – Mohammad, Bhattacharjee & Latif; 4th Edition, 2014; S. Chakravarty, Gonith Prokashan
5. A Text Book on Integral Calculus – Mohammad, Bhattacharjee & Latif; 4th Edition, 2014; S. Chakravarty, Gonith Prokashan.
CSE-105 Structured Programming Languages
3 Credits
Programming language: Basic concept; overview of programming languages, C-language: Preliminaries; Elements of C; program constructs; variables and data types in C; Input and output; character
and formatted I/O; Arithmetic expressions and assignment statements; loops and nested loops; Decision making’ Arrays; Functions; Arguments and Local Variables; Calling functions and arrays;
Recursion and recursive functions; structures within structure; Files; File functions for sequential and Random I/O. Pointers, Pointers and Structures; Pointers and functions; Pointer and arrays;
Operations on pointers; Pointer and memory addresses; Operations on bits; Bit operation; Bit field; Advanced features; Standard and Library functions.
Books Recommended:
1. The C Programming Language – Kernighan & Ritchie
2. Teach Yourself C – H. Schildt
3. The Complete Reference, Turbo C/C++ – H. Schildt
4. Programming with ANSI C – E. Balagurusamy
5. Programming with C, Schaum’s Outline Series – Gottfried
CSE 106 Structured Programming Languages Lab
1.5 Credits
Laboratory works based on CSE 105.
HUM-101 Oral and written Communication in English Language
3 Credits
Oral & written communication skills include communicative expressions for day to day activities, both for personal and professional requirement. Grammar items will mainly emphasize the use of
articles, numbers, tense, modal verbs, pronouns, punctuation, etc. Sentence formation, question formation, transformation of sentence, simple passive voice construction, and conditionals will
also be covered.
Books Recommended:
1. Paragraph in English – Tibbits
2. Exercise in Reading Comprehension – Tibbits
3. Essential English Grammar – Raymond Murphy
4. English Vocabulary in use – Stuart
5. English Vocabulary in use – McCarthy
6. Intermediate English Grammar – Raymond Murphy
HUM-111 Bangladesh Studies : History and Society of Bangladesh
3 Credits
Bangladesh-Geography of Bangladesh-History of Bangladesh: ancient, medieval, British periods, politics of 1930’s and 1940’s, Language movement, 6-point & 11-point programs, liberation war and
emergence of Bangladesh and constitutional transformation of the state. Social structure of Bangladesh-Social problems such as repression of women, eve-teasing, urbanization, terrorism,
communalism, corruption etc.
Books Recommended:
1. Bangladesh Encyclopedia (English Version)
2. History of Bengal (English Version) – K. Ali
3. History of Bengal (English Version) – Majumder
4. Economy of Bangladesh (Economic Journal)
CSE 107 Object Oriented Programming I
3 Credits
Introduction to Java: History of Java, Java class Libraries, Introduction to java programming, and a simple program. Developing java Application: Introduction, Algorithms, Pseudo code, control
Structure, The If/Else selection structure, the while Repetition structure, Assignment operators, Increment and decrement operators, Primitive data types, common Escape sequences, Logical
operator. Control Structure: Introduction, for Structure, switch structure, Do while structure, Break and continue Structure. Methods: Introduction, Program module in Java, Math class methods,
method definitions, java API packages, Automatic variables, Recursions, Method overloading, Method of the Applet class. Arrays: Introduction, Arrays, declaring and allocating arrays, passing
arrays to methods, sorting arrays, searching arrays, multiple subscripted Arrays. Inheritance: Introduction, Super class, Subclass, Protected members, using constructors and finalizers in
subclasses, composition vs. Inheritance, Introduction to polymorphism, Dynamic method binding, Final methods and classes, Abstract super classes and concrete classes, Exception Handling.
CSE 108 Object Oriented Programming I Lab
1.5 Credits
Laboratory works based on CSE 107.
MTH-103 Geometry and Linear Algebra
3 Credits
Geometry: Two dimensional Geometry: Transformation of Co-ordinates. Pair of straight lines, Equation of General Equation of Second Degree, Circle, Parabola, Ellipse and Hyperbola. Three
Dimensional Geometry: Three Dimensional Co-ordinates, Direction Cosines and Direction Ratios. Plane and Straight line. Linear Algebra: Determinant and properties of Determinants, Matrix, Types of
matrices, Matrix operations, Laws of matrix Algebra, Invertible matrices, System of Linear equations (homogeneous and non-homogeneous) and their solution. Elementary row and Column operations and
Row-reduced echelon matrices, Rank of matrices. Vectors in R^n and C^n, Inner product, Norm and Distance in R^n and C^n. Vector Spaces, Subspace, Linear combination of vectors, Linear
dependence and independence of vectors. Basis and Dimension of vector spaces. Inner product spaces, Orthogonality and Orthonormal sets, Eigen values and Eigen vectors, diagonalization,
Cayley-Hamilton theorem and its application.
Books recommended:
1. Analytical Geometry of Conic Section – M. Kar
2. An Elementary Treatise on Co-ordinate Geometry of three dimensions – T. Bell; Macmillan India Ltd
3. A Text Book on Co-ordinate Geometry – Rahman & Bhattacharjee; 12^th Edition, 2014; S. Chakravarty, Gonith Prokashan
4. Schaum’s Outline Series of the Theory and Problems on Linear Algebra – Seymour Lipschutz; 3^rd Edition; McGraw Hill Book Company
5. Linear Algebra with Applications – Anton
6. Linear Algebra – Dewan Abdul Quddus; Latest Edition; Titash Publications
7. Linear Algebra – Saikia
PHY 103 Electromagnetism and Modern Physics
3 Credits
Electrostatics, Electric Charge, Coulomb’s Law, Electric Field & Electric Potential, Electric Flux Density, Gauss’s Law, Capacitors and Dielectrics, Steady Current, Ohm’s Law, Magnetostatics,
Magnetic Field, Biot-Savart Law, Ampere’s Law, Electromagnetic Induction, Faraday’s Law, Lenz’s Law, Self Inductance & Mutual Inductance, Magnetic Properties of Matter, Permeability,
Susceptibility, Diamagnetism, Paramagnetism & Ferromagnetism, Maxwell’s Equations of Electromagnetic Waves, Waves in Conducting & Non-Conducting Media, Total Internal Reflection, Transmission
along Wave Guides. Special Theory of Relativity, Length Contraction & Time Dilation, Mass-Energy Relation, Photo Electric Effect, Quantum Theory, X-rays and X-ray Diffraction, Compton Effect,
Dual Nature of Matter & Radiation, Atomic Structure, Nuclear Dimensions, Electron Orbits, Atomic Spectra, Bohr Atom, Radioactive Decay, Half-Life, α, β and γ Rays, Isotopes, Nuclear Binding
Energy, Fundamentals of Solid State Physics, Lasers, Holography.
Books Recommended:
1. Fundamentals of Physics (Part II) – Halliday, Resnick & Walker
2. Modern Physics – Bernstein
3. Concepts of Modern Physics – Beiser
4. Electromagnetism and Modern Physics
5. Fundamental of Optics – Brizlal
6. Optics – Ghatak
7. Heat & Thermodynamics – Brizlal
8. University Physics with Modern Physics – Young
9. Essential University Physics Volume II – Wolfson
PHY 102 Physics Lab
1.5 Credits
Laboratory works based on PHY 101 & PHY 103.
ECE 101 Basic Electrical Engineering
3 Credits
Fundamental electrical concepts, Kirchhoff’s Laws, Equivalent resistance. Electrical circuits: Series circuits, parallel circuits, series-parallel networks. Network analysis: Source conversion,
Star/Delta conversion, Branch-current method, Mesh analysis, Nodal analysis. Network theorems: Superposition theorem, Thevenin’s theorem, Norton’s theorem. Capacitors. Magnetic circuits,
Inductors. Sinusoidal alternating waveforms: Definitions, phase relations, Instantaneous value, Average value, Effective (rms) Value. Phasor algebra. Series, parallel and series-parallel AC
networks. Power: Apparent power, Reactive power, Power triangle, Power factor correction. Pulse waveforms and the R-C response. Three-phase systems. Transformers.
Books Recommended:
1. Introductory Circuit Analysis – R. L. Boylestad
2. Introduction to Electrical Engineering – P. Ward
3. Electrical Technology (Volume 1) – B. L. Theraja, A. K. Theraja
4. Alternating Current Circuits – R. M. Kerchner, G. F. Corcoran
5. Electric Circuits – James W. Nilsson
ECE 102 Basic Electrical Engineering Lab
1.5 Credits
Laboratory works based on ECE 101.
ECN 101 Principles of Economics
3 Credits
Introduction: The nature, scope and methods of Economics; Economics and Engineering; some fundamental concepts commonly used in Economics. Micro Economics: The theory of demand and supply and
their elasticities. Market price determination; competition in theory and practice. Indifference curve technique. Marginal analysis. Factors of production and production function. Scale of
production – internal and external economies and diseconomies. The short run and the long run. Fixed cost and variable cost. Macro Economics: National income analysis. Inflation and its effects.
Savings, Investments. The basis of trade and the terms of trade. Monetary policy, fiscal policy, trade policy with reference to Bangladesh. Planning in Bangladesh.
Books Recommended:
1. Economics – Samuelson & Nordhaus
2. Economics – Don Bush Fisher
HUM-103 Language Composition and Comprehension
3 Credits
This course purports to make the student proficient in composition and comprehension of the English language used in formal write-ups like articles, essays and treatises. Here texts will be given
for comprehension, exercises in writing essays, paragraphs and reports will be done, and construction of proper sentences expressing formal ideas will be taught. Sufficient exercises in
translation and re-translation will be included.
Books Recommended:
1. Exercise in Reading Comprehension – Tibbits
2. Essential English Grammar – Raymond Murphy
3. English Vocabulary in use – Stuart
4. English Vocabulary in use – McCarthy
5. Intermediate English Grammar – Raymond Murphy
6. Paragraph in English – Tibbits
CSE-201 Discrete Mathematics
3 Credits
Mathematical Models and Reasoning: Propositions, Predicates and Quantifiers, Logical operators, Logical inference, Methods of proof. Sets: Set theory, Relations between sets, Operations on sets.
Induction, The natural numbers, Set operations on Σ*. Binary Relations: Binary relations and Digraphs, Graph theory, Trees, Properties of relations, Composition of relations, Closure operations
on relations, Order relations, Equivalence relations and partitions. Functions: Basic properties, Special classes of functions. Counting and Algorithm Analysis: Techniques, Asymptotic behavior of
functions, Recurrence systems, Analysis of algorithms. Infinite sets: Finite and Infinite sets, Countable and uncountable sets, Comparison of cardinal numbers. Algebras: Structure, Varieties of
algebras, Homomorphism, Congruence relations.
Books Recommended:
1. Discrete Mathematics and its Applications – Kenneth H. Rosen
2. Discrete Mathematical Structures – Bernard Kolman, Robert C. Busby, Sharon Cutler Ross
3. Concrete Mathematics – Ronald L. Graham, Donald E. Knuth, Oren Patashnik
CSE-203 Object Oriented Programming II
3 Credits
String, String Buffer and String Builder classes, Files and Stream, Java Database Connectivity: Statement and Prepared Statement Interfaces, CRUD operations using Statement and Prepared
Statement, JDBC Transaction Management, Object Relational Mapping, Java Persistency API: Introduction, Entity class annotations, Entity Manager interface, Entity Transaction interface, CRUD
operations using JPA, Primary Key Generation Strategies, Entity Inheritance, Entity Mapping, Java Persistency Query Language: Select, Update, Delete and Named Queries, Servlets: Servlet
Interface, Generic Servlet and HTTP Servlet, Servlet lifecycle, Java Server Pages: JSP Life cycle methods, Tags in JSP, JSP Implicit Objects, JSP Standard Tag Library, Java Server Faces:
Introduction, JSF Architecture and Application Development, JSF Page Navigation and Managed Bean, JSF Core Tag Library, JSF Event Handling Model, JSF Validation Model, JSF Data Conversion Model,
JPA JSF Integration, Java API, Utility classes, 2D Graphics, GUI, Swing, Events.
Books Recommended:
1. Introduction to Programming in Java, Robert Sedgewick & Kevin Wayne
2. An Introduction to Object-Oriented Programming, Timothy Budd
CSE-204 Object Oriented Programming II Lab
1.5 Credits
Laboratory works based on CSE 203.
CSE-205 Data Structures
3 Credits
Concepts & examples: Introduction to Data structures. Elementary data structures: Arrays, records, pointers. Arrays: Type, memory representation and operations with arrays. Linked lists:
Representation, Types and operations with linked lists. Stacks and Queues: Implementations, operations with stacks and queues. Graphs: Implementations, operations with graph. Trees:
Representations, Types, operations with trees. Memory Management: Uniform size records, diverse size records. Sorting: Internal sorting, external sorting. Searching: List searching, tree
searching. Hashing: Hashing functions, collision resolution.
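The stack operations covered in the course can be sketched with a minimal fixed-capacity array implementation (a sketch of ours, not prescribed by the syllabus):

```java
// Array-based stack of ints: push, pop, isEmpty.
// Fixed capacity keeps the sketch short; a production version would grow.
public class IntStack {
    private final int[] data;
    private int top = -1; // index of the current top element; -1 = empty

    public IntStack(int capacity) { data = new int[capacity]; }

    public void push(int x) {
        if (top == data.length - 1) throw new IllegalStateException("overflow");
        data[++top] = x;
    }

    public int pop() {
        if (top < 0) throw new IllegalStateException("underflow");
        return data[top--];
    }

    public boolean isEmpty() { return top < 0; }

    public static void main(String[] args) {
        IntStack s = new IntStack(4);
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop()); // 3 — LIFO order
        System.out.println(s.pop()); // 2
    }
}
```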
Books Recommended:
1. Fundamentals of Data Structures – E. Horowitz & S. Sahni
2. Data Structures – Reingold
3. Data Structures, Schaum’s Outline Series – Lipschutz
4. Data Structures & Programming Design – Robert L. Kruse
CSE-206 Data Structures Lab
1.5 Credits
Laboratory works based on CSE 205.
MTH 201 Vector Analysis and Complex Variable
3 Credits
Vector Analysis: Vector Algebra – Vectors in three dimensional space, Algebra of Vectors, Rectangular Components, Addition, Subtraction and Scalar multiplication, Scalar and Vector product of two
vectors. Scalar and Vector triple product. Application in Geometry. Vector Calculus – Limit, Continuity and Differentiability of Scalar and Vector point functions. Scalar and Vector field.
Gradient, Divergence and Curl of point functions. Vector Integration, Line, Surface and Volume Integrals. Green’s theorem, Gauss’s theorem, Stokes’ theorem. Complex Variable: Field of Complex
numbers, De Moivre’s theorem and its applications. Limit and Continuity of complex functions, Derivatives, Analytic functions, Harmonic functions, Cauchy-Riemann equations. Line Integrals of Complex
functions. Cauchy’s Integral theorem and Cauchy’s Integral formula. Liouville’s theorem, Taylor’s and Laurent’s theorems, Singularities, Residues, Cauchy’s Residue theorem. Contour Integration. Bilinear
transformation. Mapping of Elementary functions. Conformal mapping.
Book Recommended:
1. Schaum’s Outline Series of the Theory and Problems on Vector Analysis – Murray R. Spiegel; SI (Metric Edition); McGraw Hill Book Company
2. Schaum’s Outline Series of the Theory and Problems on Complex Variable – Murray R. Spiegel; 2nd Edition; McGraw Hill Book Company
3. Functions of a Complex Variable – Dewan Abdul Quddus; Latest Edition; Titash Publications
ECE-201 Electronic Devices & Circuits
3 Credits
Introduction to semiconductors, Junction diode characteristics & diode applications, Bipolar Junction transistor characteristics, Transistor biasing, Small signal low frequency h-parameter model
& hybrid-pi model, AC analysis of transistor, Frequency response of transistor, Operational amplifiers, Linear applications of operational amplifiers, DC performance of operational amplifiers,
AC performance of operational amplifiers, Introduction to JFET, MOSFET, PMOS, NMOS & CMOS, Introduction to SCR, TRIAC, DIAC & UJT, Active filters, Introduction to IC fabrication techniques & VLSI.
ECE 202 Electronic Devices & Circuits Lab
1.5 Credits
Laboratory works based on ECE 201.
ACN-203 Cost and Management Accounting
3 Credits
Introduction: Cost accounting: Definition, Limitations of Financial Accounting, Importance, Objectives, Functions and Advantages of Cost Accounting, Financial Accounting vs. Cost Accounting vs.
Managerial Accounting, Techniques and Methods of Cost Accounting, International Cost Accounting Systems. Managerial accounting: Definition, Evolution, Objectives, Scope, Importance, Functions,
Techniques, Differences among Managerial Accounting, Cost Accounting and Financial Accounting, Management Accounting for Planning and Control. Cost Classification: Cost Concepts, Cost Terms,
Cost Expenses and Losses, Cost Center, Cost Unit, Classification of Costs, Cost Accounting Cycle, Cost Statement, The Flow of Costs in a Manufacturing Enterprise, Reporting and Results of
Operation. Materials: Indirect & Direct Material, Procurement of Materials, Purchase Control, Purchase Department, Purchase Quantity, Fixed Order, Economic Order Quantity, Stock-out Cost,
Re-order Level, Purchase Order, Receipts and Inspection, Classification and Codification of Materials, Stock Verification, ABC Method of Store Control, Pricing of Materials Issued, LIFO, FIFO
and Average Pricing, Inventory Control. Labor: Labor Cost Control, Time Recording Systems, Manual and Mechanical Methods, Time Booking, Necessary Documents Maintained for Labor Control,
Methods of Remuneration, Treatment for Idle and Over Time. Overhead: Definition, Classifications of Overheads, Methods of Overhead Distribution, Distribution of Factory Overhead to Service
Departments, Redistribution of Service Department Cost, Uses of Predetermined Overhead Rates, Treatment of Over- and Under-absorbed Overhead, Treatment of Administration Overhead, Selling and
Distribution Overheads, Calculation of Machine Hour Rate. Job Order Costing: Features, Advantages, Limitations, Accounting for Materials, Labor and Factory Overhead in Job Costing, Accounting
for Jobs Completed and Products Sold, Spoilage, Defective Work and Scrap in a Job Costing System, The Job Cost Sheet, Job Order Costing in Service Companies, Nature and Uses of Batch Costing,
Determination of Economic Batch Quantity. Contract Costing: Introduction, Procedures, Types of Contract, Retention Money, Profit or Loss on Incomplete Contracts, Cost-plus Contract Systems.
Operation Costing: Nature, Procedures, Costing for Transport and Hospitals. Cost Behavior: Analysis of Cost Behavior, Measurement of Cost Behavior, Methods of Measuring Cost Functions,
Analysis of Mixed Costs, High and Low Point Method, Scattergraph Method, Least Squares Method, Use of Judgment in Cost Analysis. Cost-Volume-Profit Relationship: Profit Planning, Break-Even
Point, Break-Even Chart, Changes in Underlying Factors, Profit-Volume Graph, Income Tax Effect on Break-Even Point, Break-Even Point in Decision Making, Risk and Profit Analysis, Limitations.
Books Recommended:
1. Cost Accounting, A Managerial Emphasis: Charles T. Horngren et al.
2. Managerial Accounting: Ray H. Garrison
3. Management Accounting: R. N. Anthony
4. Management Accounting: S. Kaplan
5. Cost Accounting: Usry & Hammer
6. Cost Accounting: G. Rayburn
7. Cost Accounting: S. P. Iyengar
8. Accounting Principles: Kieso
9. Financial & Managerial Accounting: Needles
10. Theory and Practice of Costing: Basu & Das
CSE-207 Algorithms
3 Credits
Analysis of Algorithm: Asymptotic analysis: Recurrences, Substitution method, Recurrence tree method, Master method. Divide-and-Conquer: Binary search, Powering a number, Fibonacci numbers,
Matrix Multiplication, Strassen’s Algorithm for Matrix Multiplication. Sorting: Insertion sort, Merge sort, Quick sort, Randomized quick sort, Decision tree, Counting sort, Radix sort. Order
Statistics: Randomized divide and conquer, worst case linear time order statistics. Graph: Representation, Traversing a graph, Topological sorting, Connected Components. Dynamic Programming:
Elements of DP (Optimal substructure, Overlapping subproblem), Longest Common Subsequence finding problem, Matrix Chain Multiplication. Greedy Method: Greedy choice property, elements of greedy
strategy, Activity selector problem, Minimum spanning tree (Prim’s algorithm, Kruskal’s algorithm), Huffman coding. Shortest Path Algorithms: Dynamic and Greedy properties, Dijkstra’s algorithm with
its correctness and analysis, Bellman-Ford algorithm, All-pairs shortest path: Floyd-Warshall algorithm, Johnson’s algorithm. Network flow: Maximum flow, Max-flow min-cut, Bipartite matching.
Backtracking/Branch-and-Bound: Permutation, Combination, 8-queen problem, 15-puzzle problem. Geometric algorithms: Segment-segment intersection, Convex hull, Closest pair problem. NP-Completeness:
NP-hard and NP-complete problems.
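One of the dynamic-programming topics above, Longest Common Subsequence, admits a compact table-filling sketch (illustrative code of ours, not part of the syllabus):

```java
// Longest Common Subsequence length via dynamic programming.
// dp[i][j] = LCS length of the prefixes a[0..i) and b[0..j); O(m*n) time.
public class Lcs {
    public static int lcs(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] dp = new int[m + 1][n + 1]; // row/column 0 = empty prefix
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                dp[i][j] = (a.charAt(i - 1) == b.charAt(j - 1))
                        ? dp[i - 1][j - 1] + 1                      // characters match
                        : Math.max(dp[i - 1][j], dp[i][j - 1]);     // drop one character
        return dp[m][n];
    }

    public static void main(String[] args) {
        System.out.println(lcs("ABCBDAB", "BDCABA")); // 4 (e.g. "BCBA")
    }
}
```

The two defining properties named in the syllabus are visible here: optimal substructure (each cell is computed from smaller prefixes) and overlapping subproblems (each cell is reused by its neighbors).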
Books Recommended:
1. Introduction to Algorithms- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein.
2. Algorithms –Robert Sedgewick and Kevin Wayne.
3. The Art of Computer Programming, Volume 1: Fundamental Algorithms- Donald E. Knuth; Addison-Wesley Professional, 3rd edition, 1997.
CSE-208 Algorithms Lab
1.5 Credits
Using different well known algorithms to solve the problem of Matrix-Chain Multiplication, Longest Common Subsequence, Huffman codes generation, Permutation, Combination, 8-queen problem,
15-puzzle, BFS, DFS, flood fill using DFS, Topological sorting, Strongly connected component, finding minimum spanning tree, finding shortest path (Dijkstra’s algorithm and Bellman-Ford’s
algorithm), Flow networks and maximum bipartite matching, Finding the convex hull, Closest pair.
CSE-209 Numerical Methods
3 Credits
Errors and Accuracy. Iterative process: Solution of f(x) = 0, existence and convergence of a root, convergence of the iterative method, geometrical representation, Aitken’s Δ² process of
acceleration. System of Linear Equations. Solution of Non-Linear equations. Finite Differences and Interpolation. Finite Difference Interpolation. Numerical Differentiation. Numerical
Integration. Differential Equations.
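The iterative solution of f(x) = 0 can be illustrated with Newton-Raphson applied to f(x) = x² − 2 (our choice of example function; the course text does not prescribe one):

```java
// Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)
// applied to f(x) = x^2 - 2, whose positive root is sqrt(2).
public class Newton {
    public static double sqrt2(double x0, int iters) {
        double x = x0;
        for (int i = 0; i < iters; i++) {
            double f = x * x - 2; // f(x)
            double fp = 2 * x;    // f'(x)
            x = x - f / fp;       // Newton update
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(sqrt2(1.0, 8)); // converges to ~1.41421356…
    }
}
```

Starting from x₀ = 1, the iterates 1.5, 1.41667, 1.4142157, … show the quadratic convergence the course analyzes.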
Books Recommended:
1. Introductory methods of Numerical Analysis – S. S. Sastry
2. Numerical Methods for Engineers –Steven C. Chapra
3. Numerical Mathematical Analysis – James B. Scarborough
CSE-210 Numerical Methods Lab
1.5 Credits
Laboratory works based on CSE 209.
CSE-231 Digital Logic Design
3 Credits
Binary Logic. Logic Gates: IC digital logic families, positive and negative logic. Boolean Algebra. Simplification of Boolean Functions: Karnaugh map method, SOP and POS simplification, NAND,
NOR, wired-AND, wired-OR implementation, nondegenerate forms, Don’t care conditions, Tabulation method – prime implicant chart. Combinational Logic: Arithmetic circuits – half and full adders and
subtractors, multilevel NAND and NOR circuits, Ex-OR and Equivalence functions. Combinational Logic in MSI and LSI: Binary parallel adder, decimal and BCD adders, Comparators, Decoders and
Encoders, Demultiplexors and Multiplexors. Sequential Logic. Registers and Counters. Synchronous Sequential Circuits. Asynchronous Sequential Circuits. Digital IC terminology, TTL logic family,
TTL series characteristics, open-collector TTL, tristate TTL, ECL family, MOS digital ICs, MOSFET, CMOS characteristics, CMOS tristate logic, TTL-CMOS-TTL interfacing, memory terminology, general
memory operation, semiconductor memory technologies, different types of ROMs, semiconductor RAMs, static and dynamic RAMs, magnetic bubble memory, CCD memory, FPGA Concept.
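The half and full adders from the combinational-logic unit reduce to two Boolean expressions, sum = a ⊕ b ⊕ cin and carry = ab + cin(a ⊕ b), which can be simulated directly (an illustrative sketch of ours):

```java
// One-bit full adder simulated with Java's boolean operators.
// sum = a XOR b XOR cin;  carryOut = (a AND b) OR (cin AND (a XOR b))
public class FullAdder {
    /** Returns {sum, carryOut} for one-bit inputs a, b and carry-in cin. */
    public static boolean[] add(boolean a, boolean b, boolean cin) {
        boolean sum = a ^ b ^ cin;
        boolean cout = (a && b) || (cin && (a ^ b));
        return new boolean[] { sum, cout };
    }

    public static void main(String[] args) {
        boolean[] r = add(true, true, false); // 1 + 1 + 0
        System.out.println(r[0] + " " + r[1]); // false true  (binary 10)
    }
}
```

Chaining the carry-out of one such stage into the carry-in of the next gives the binary parallel adder listed under MSI combinational logic.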
Books Recommended:
1. Digital Logic & Computer Design-M. Morris Mano
2. Digital Fundamentals- Floyd
3. Modern Digital Electronics-R. P. Jain
4. Digital Systems- R. J. Tocci
5. Digital Electronics- Green
CSE-232 Digital Logic Design Lab
1.5 Credits
Laboratory works based on CSE 231.
MTH-203 Differential Equations, Laplace Transforms and Fourier Analysis
3 Credits
Differential Equation: Formation of Differential equation, Degree and Order of differential equation, Complete and Particular solution. Ordinary differential equation – Solution of ordinary
differential equation of first order and first degree (special forms). Linear differential equation with constant coefficients. Homogeneous linear differential equation. Solution of differential
equation by the method of Variation of parameters. Solution of linear differential equations in series by Frobenius method. Bessel’s function and Legendre’s Polynomials and their properties.
Simultaneous equations of the form dx/P = dy/Q = dz/R. Partial differential equation – Lagrange’s linear equation, Equations of linear and non-linear first order in standard forms, Charpit’s method.
Laplace Transforms: Definition, Laplace transforms of some elementary functions, sufficient conditions for existence of Laplace transforms, Inverse Laplace transforms, Laplace transforms of
derivatives, Unit step function, Periodic function, Some special theorems on Laplace transforms, Partial fraction, Solution of differential equations by Laplace transforms, Evaluation of Improper
Integrals. Fourier Analysis: Fourier series (Real and complex form). Finite transforms, Fourier Integrals, Fourier transforms and application in solving boundary value problems.
Books Recommended:
1. Differential Equations – H. T. H. Piaggio; 1st Indian Edition, 1985, S. K. Jain for CBS Publishers
2. A Text Book on Integral Calculus with Differential Equations – Mohammad, Bhattacharjee & Latif, 4th Edition, 2010; S. Chakravarty, Gonith Prokashon
3. Schaum’s Outline Series of the Theory and Problems on Laplace Transforms – Murray R. Spiegel; Revised Edition, 2003; McGraw Hill Book Company
4. Differential Equation – Md. Abu Eusuf; Latest Edition; Abdullah Al Mashud Publisher
CSE 200 Project Work
2 Credits
A project focusing on an object-oriented programming approach and using standard algorithms is preferable. Every project should maintain a goal so that it can serve as a useful tool in the IT field.
Innovative project ideas that require different types of scripting/programming languages or programming tools can also be accepted with the consent of the corresponding project supervisor.
CSE-321 Database Systems
3 Credits
Introduction: Purpose of Database Systems, Data Abstraction, Data Models, Instances and Schemes, Data Independence, Data Definition Language, Data Manipulation Language, Database Manager,
Database administrator, Database Users, Overall System Structure, Advantages and Disadvantages of Database Systems, Data Mining and Analysis, Database Architecture, History of Database Systems.
Entity-Relationship Model: Entities and Entity Sets, Relationships and Relationship Sets, Attributes, Composite and Multivalued Attributes, Mapping Constraints, Keys, Entity-Relationship Diagram,
Reduction of E-R Diagrams to Tables, Generalization, Attribute Inheritance, Aggregation, Alternative E-R Notations, Design of an E-R Database Scheme.
Relational Model: Structure of Relational Database, Fundamental Relational Algebra Operations, The Tuple Relational Calculus, The Domain Relational Calculus, Modifying the Database. Relational
Commercial Language: SQL, Basic structure of SQL Queries, Query-by-Example, Quel, Nested Subqueries, Complex Queries, Integrity Constraints, Authorization, Dynamic SQL, Recursive Queries.
Relational Database Design: Pitfalls in Relational Database Design, Functional Dependency Theory, Normalization using Functional Dependencies, Normalization using Multivalued Dependencies,
Normalization using join Dependencies, Database Design Process. File And System Structure: Overall System Structure, Physical Storage Media, File Organization, RAID, Organization of Records into
Blocks, Sequential Files, Mapping Relational Data to Files, Data Dictionary Storage, Buffer Management. Indexing And Hashing: Basic Concepts, Ordered Indices, B+ -Tree Index Files, B-Tree Index
Files, Static and Dynamic Hash Function, Comparison of Indexing and Hashing, Index Definition in SQL, Multiple Key Access.
Query Processing and Optimization: Query Interpretation, Equivalence of Expressions, Estimation of Query-Processing Cost, Estimation of Costs of Access Using Indices, Join Strategies, Join
Strategies for parallel Processing, Structure of the query Optimizer, Transformation of Relational Expression. Concurrency Control: Schedules, Testing for Serializability, Lock-Based Protocols,
Timestamp-Based Protocols, Validation Techniques, Multiple Granularity, Multiversion Schemes, Insert and Delete Operations, Deadlock Handling. Distributed Database: Structure of Distributed
Databases, Trade-off in Distributing the Database, Design of Distributed Databases, Transparency and Autonomy, Distributed Query Processing, Recovery in Distributed Systems, Commit Protocols,
Concurrency Control. Data Mining and Information Retrieval: Data analysis and OLAP, Data Warehouse, Data Mining, Relevance Ranking Using Terms, Relevance Ranking Using Hyperlink, Synonyms,
Homonyms, Ontology, Indexing of Document, Measuring Retrieval Efficiencies, Information Retrieval and Structured Data.
Books Recommended:
1. Database System Concepts – Abraham Silberschatz, Henry F. Korth, S. Sudarshan (5th edition)
2. Fundamentals of Database Systems – Ramez Elmasri, Shamkant B. Navathe; Benjamin/Cummings, 1994
3. Database: Principles, Programming, Performance – Patrick O’Neil; Morgan Kaufmann, 1994
4. A First Course in Database Systems – Jeffrey D. Ullman, Jennifer Widom; Prentice Hall, 1997
5. Database Management Systems – Raghu Ramakrishnan; McGraw Hill, 1996
CSE-322 Database Systems Lab
1.5 Credits
Introduction: What is a database; MySQL, Oracle, SQL, Datatypes, SQL/PLSQL, Oracle Software Installation, User Types, Creating Users, Granting. Basic Parts of Speech in SQL: Creating a Newspaper
Table, Select Command (WHERE, ORDER BY), Creating Views, Getting Text Information & Changing It, Concatenation, Cut & Paste Strings (RPAD, LPAD, TRIM, LTRIM, RTRIM, LOWER, UPPER, INITCAP, LENGTH,
SUBSTR, INSTR, SOUNDEX). Playing the Numbers: Addition, Subtraction, Multiplication, Division, NVL, ABS, FLOOR, MOD, POWER, SQRT, EXP, LN, LOG, ROUND, AVG, MAX, MIN, COUNT, SUM, DISTINCT,
Subqueries for MAX and MIN. Grouping Things Together: GROUP BY, HAVING, ORDER BY, Views, Renaming Columns with Aliases. When One Query Depends upon Another: UNION, INTERSECT, MINUS, NOT IN,
NOT EXISTS. Changing Data: INSERT, UPDATE, MERGE, DELETE, ROLLBACK, AUTOCOMMIT, COMMIT, SAVEPOINTS, Multi-table INSERT, DELETE, UPDATE and MERGE. Creating and Altering Tables & Views: Altering
a table, Dropping a table, Creating a view, Creating a table from a table. By What Authority: Creating Users, Granting Privileges, Password Management.
An Introduction to PL/SQL: Implement a few problems using PL/SQL (e.g. Prime Number, Factorial, Calculating the Area of a Circle). An Introduction to Triggers and Procedures: Implement a few
problems using Triggers and Procedures. An Introduction to Indexing: Implement indexing on a large database and observe the difference between indexed and non-indexed access.
CSE-331 Computer Architecture
3 Credits
Introduction to Computer Architecture: Overview and history; Cost factor; Performance metrics and evaluating computer designs. Instruction set design: Von Neumann machine cycle, Memory
addressing, Classifying instruction set architectures, RISC versus CISC, Micro programmed vs. hardwired control unit. Memory System Design: Cache memory; Basic cache structure and design; Fully
associative, direct, and set associative mapping; Analyzing cache effectiveness; Replacement policies; Writing to a cache; Multiple caches; Upgrading a cache; Main Memory; Virtual memory
structure, and design; Paging; Replacement strategies. Pipelining: General considerations; Comparison of pipelined and nonpipelined computers; Instruction and arithmetic pipelines, Structural,
Data and Branch hazards. Multiprocessors and Multi-core Computers: SISD, SIMD, and MIMD architectures; Centralized and distributed shared memory- architectures; Multi-core Processor architecture.
Input/output Devices: Performance measure, Types of I/O device, Buses and interface to CPU, RAID. Parallel Processing.
Books Recommended:
1. Computer Architecture and Organization – John P. Hayes, 3rd Edition, McGraw Hill
2. Computer Organization and Design: The Hardware/Software Interface – David A. Patterson and John L. Hennessy
CSE-351 Management Information Systems
3 Credits
Introduction to MIS: Management Information System Concept. Definitions, Role of MIS, Approaches of MIS development. MIS and Computer: Computer Hardware for Information System, Computer Software
for Information System, Data Communication System, Database Management Technology, Client-Server Technology. Decision-Support System: Introduction, Evolution of DSS, Future development of DSS.
Application of MIS: Applications in manufacturing Sector, Applications in service sector, Case Studies.
Books Recommended:
1. Management Information Systems – James O’Brien, Tata McGraw-Hill
2. Management Information Systems – Post and Anderson, Tata McGraw-Hill
CSE-301 Web Engineering
3 Credits
Introduction to Web Engineering, Requirements Engineering and Modeling Web Applications, Web Application Architectures, Technologies and Tools for Web Applications, Testing and Maintenance of Web
Applications, Usability and Performance of Web Applications, Security of Web Applications, The Semantic Web.
Books Recommended:
1. Web Engineering: The Discipline of Systematic Development of Web Applications – Editors: Gerti Kappel, Birgit Proll, Siegfried Reich, Werner Retschitzegger
2. Web Engineering: A Practitioner’s Approach- Roger Pressman, David Lowe
3. MIT Open Course Materials for the course Software Engineering for Web Applications
4. MIT Open Course Materials for the course Database, Internet, and Systems Integration Technologies
CSE-302 Web Engineering Lab
1.5 Credits
Understanding the Web Application: Web Engineering introduces a structured methodology utilized in software engineering to Web development projects. The course addresses the concepts, methods,
technologies, and techniques of developing Web Sites that collect, organize and expose information resources. Topics covered include requirements engineering for Web applications, design methods
and technologies, interface design, usability of web applications, accessibility, testing, metrics, operation and maintenance of Web applications, security and project management. Specific
technologies covered in this course include client-side (XHTML, JavaScript and CSS) and server-side (Perl and PHP). Using the described concepts students should be able to understand the Web
engineering concepts behind the frameworks of Joomla, Drupal, WordPress. Server-side technology: LAMP, Web application frameworks, (example: Silverlight, Adobe Flex), Web 2.0 and Web APIs.
Front-end technology: HTML, XHTML, XML. CSS Styling, layout, selector, Document object model and JavaScript. Client-Programming: Web APIs with JavaScript (example: Google AJAX API). MVC:
Understanding model, view and controller model. Understanding Web APIs: REST, XML, JSON, RSS Parsing. JavaScript Exercise: The goal of this assignment is to allow you to explore and use as many
of JavaScript’s objects, methods and properties as possible in a small assignment. Some functions must be written from scratch. Other functions, appropriately attributed, may be downloaded from
the web and used as a part of the system or as the basis for your own functions. PHP Exercise: Build a set of PHP scripts that perform some dynamic server-side functionality. Understanding
plug-ins: Develop a Firefox extension.
MTH-301 Statistics and Probability
2 Credits
Frequency distribution; mean, median, mode and other measures of central tendency, Standard deviation and other measures of dispersion, Moments, skewness and kurtosis, Elementary probability
theory and discontinuous probability distributions, e.g. binomial, Poisson and negative binomial, Continuous probability distributions, e.g. normal and exponential, Characteristics of
distributions, Hypothesis testing and regression analysis
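The first measures listed above, the mean and (population) standard deviation, can be illustrated with a short worked sketch (the example data is our own):

```java
// Mean and population standard deviation of a sample.
// For data {2,4,4,4,5,5,7,9}: mean = 40/8 = 5, variance = 32/8 = 4, sd = 2.
public class Stats {
    public static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    public static double stdDev(double[] xs) {
        double m = mean(xs), ss = 0;
        for (double x : xs) ss += (x - m) * (x - m); // sum of squared deviations
        return Math.sqrt(ss / xs.length);            // divide by n: population form
    }

    public static void main(String[] args) {
        double[] xs = {2, 4, 4, 4, 5, 5, 7, 9};
        System.out.println(mean(xs));   // 5.0
        System.out.println(stdDev(xs)); // 2.0
    }
}
```

The sample (rather than population) standard deviation divides by n − 1 instead of n; the course distinguishes the two under measures of dispersion.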
Books Recommended:
1. Introduction to Mathematical Statistics – Hogg
2. Probability and Statistics for Scientists and Engineers – Walpole
CSE-309 Cyber Crime and Intellectual Property Law
3 Credits
Introduction: the problem of computer crime, what is Cybercrime? Cybercrime: the invisible threat, Information and other assets in need of assurance, Computer focused and computer assisted
crimes, the hacker, hacking tactics, the victim, Data: surveys, network flow and IPS/IDS, Data: honeypots and incidents, Cyber terrorism, Cyber laws and regulations, Investigating cybercrime,
Preventing cybercrime and Future opportunities for managing cybercrime. Intellectual Property: Introduction, Philosophical Perspectives and Overview of Intellectual Property: Trade Secret;
Patent; Copyright; Trademark/Trade Dress; Problem; Copyright and patent; the need for intellectual property laws, Copyright for software, software-copyright cases, Databases, the focus shifts from
copyright to patent, the nature of patent law, some software-patent cases. Film and video, Pornography meets the internet, differences between downloads and publications, censoring videos.
Books Recommended:
1. Understanding and Managing Cybercrime-McQuade III, Samuel C. 2006. ISBN 0-205-43973-X
2. The Transformation of Crime in the Information Age –Wall, David. 2006. ISBN 0-745-62736-6
3. Cyber Crime and Digital Evidence: Materials and Cases –Thomas K. Clancy, First Edition 2011, LexisNexis, ISBN: 9781422494080
4. Cybercrime, investigating high-technology computer crime –Moore, Robert, (2011), (2ndEd.). Elsevier.
5. Cybercrime: The Investigation, Prosecution and Defense of a Computer-related Crime –Ralph D. Clifford, August 1, 2011
6. Intellectual Property in the New Technological Age – Merges, Menell & Lemley, 2011 (6th Edition)
7. Intellectual property: Law & the information society- James Boyle, Jennifer Jenkins, First Edition, 2014.
8. International Intellectual Property law- Jonathan Franklin, 2013
CSE-310 Technical Report Writing and Presentation
1.5 Credits
Issues of technical writing and effective oral presentation in Computer Science and Engineering; Writing styles of definitions, propositions, theorems and proofs; Preparation of reports, research
papers, theses and books: abstract, preface, contents, bibliography and index; Writing of book reviews and referee reports; Writing tools: LATEX; Diagram drawing software; presentation tools.
Books Recommended:
1. Technical Report Writing- Daniel G. Riordan, Houghton Mifflin Company, 8th edition, 2001
CSE-326 Engineering Drawing
1 Credit
Introduction; Instruments and their uses; First and third angle projection; Orthographic drawing; Sectional views and conventional practices; Auxiliary views; Isometric views; Missing lines and views.
Books Recommended:
1. Engineering Drawing & Design– David A. Madsen, David P. Madsen
CSE 300 Software Developments
1.5 Credits
Students will work in groups or individually to produce high quality software in different languages. Students will write structured programs and use proper documentation. Advanced programming
techniques in Mobile Application
Books Recommended:
1. Android Application Development Cookbook- Wei-Meng Lee
2. The Complete Android Guide- Kevin Purdy
CSE-303 Operating Systems
3 Credits
Introduction: Operating Systems Concept, Computer System Structures, Operating System Structures, Operating System operations, Protection and Security, Special-Purpose Systems. Fundamentals of
OS: OS services and components, multitasking, multiprogramming, time sharing, buffering, spooling. Process Management: Process Concept, Process Scheduling, Process State, Process Management,
Interprocess Communication, interaction between processes and OS, Communication in Client-Server Systems, Threading, Multithreading, Process Synchronization. Concurrency control: Concurrency and
race conditions, mutual exclusion requirements, semaphores, monitors, classical IPC problems and solutions, Deadlocks – characterization, detection, recovery, avoidance and prevention. Memory
Management: Memory partitioning, Swapping, Paging, Segmentation, Virtual memory – Concepts, Overlays, Demand Paging, Performance of demand paging, Page replacement algorithm, Allocation
algorithms. Storage Management: Principles of I/O hardware, Principles of I/O software, Secondary storage structure, Disk structure, Disk scheduling, Disk Management, Swap-space Management, Disk
reliability, Stable storage implementation. File Concept: File support, Access methods, Allocation methods, Directory systems, File Protection, Free Space Management. Protection & Security: Goals
of protection, Domain of protection, Access matrix, Implementation of access matrix, Revocation of access rights, The security problem, Authentication, One-time passwords, Program threats, System
threats, Threat monitoring, Encryption, Computer-security classification. Distributed Systems: Types of Distributed Operating System, Communication Protocols, Distributed File Systems, Naming and
Transparency, Remote File Access, Stateful Versus Stateless Service, File Replication. Case Studies: Study of representative Operating Systems.
Books Recommended:
1. Operating System Concepts – Silberschatz & Galvin; Wiley, 2000 (7th Edition)
2. Operating Systems – Achyut S. Godbole; Tata McGraw-Hill (2nd Edition)
3. Understanding Operating Systems – Flynn & McHoes; Thomson (4th Edition)
4. Operating Systems: Design & Implementation – Andrew S. Tanenbaum, Albert S. Woodhull; Pearson
5. Modern Operating System – Andrew S. Tanenbaum
CSE-304 Operating Systems Lab
1.5 Credits
Thread programming: Creating thread and thread synchronization. Process Programming: The Process ID, Running a New Process, Terminating a Process, Waiting for Terminated Child Processes, Users
and Groups, Sessions and Process Groups. Concurrent Programming: Using fork, exec for multi-task programs. File Operations: File sharing across processes, System lock table, Permission and file
locking, Mapping Files into Memory, Synchronized, Synchronous, and Asynchronous Operations, I/O Schedulers and I/O Performance.
Communicating across processes: Using different signals, Pipes, Message queue, Semaphore, Semaphore arithmetic and Shared memory.
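The thread-synchronization topic above can be sketched in Java (an illustration of ours; the lab itself may target C and POSIX threads, as the recommended books suggest). Two threads increment a shared counter, and the `synchronized` method keeps the result deterministic:

```java
// Two threads racing on a shared counter. Without synchronization the
// final count would be unpredictable; the synchronized increment makes
// each read-modify-write atomic, so the result is exact.
public class Counter {
    private int count = 0;

    public synchronized void increment() { count++; }

    public int get() { return count; }

    /** Runs two threads, each incrementing perThread times; returns the total. */
    public static int run(int perThread) {
        Counter c = new Counter();
        Runnable task = () -> { for (int i = 0; i < perThread; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return c.get();
    }

    public static void main(String[] args) {
        System.out.println(run(10000)); // 20000, deterministic
    }
}
```

Removing `synchronized` reintroduces the race condition the lab asks students to observe and then fix.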
Books Recommended:
1. The ‘C’ Odyssey UNIX-The Open, Boundless C – Meeta Gandhi, Tilak Shetty, Rajiv Shah.
2. Beginning Linux Programming – Neil Matthew and Richard Stones
3. Linux System Programming – Robert Love
CSE-315 Data Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
Books Recommended:
1. Introduction to Data Communications-Eugene Blanchard
2. Data Communication Principles – Ahmad, Aftab
3. Data Communication & Networking – S. Bagad, I. A. Dhotre
4. Data Communications and Networking- Behrouz A. Forouzan
CSE-307 Theory of Computation
2 Credits
Finite Automata: Deterministic and nondeterministic finite automata and their equivalence. Equivalence with regular expressions. Closure properties. The pumping lemma and applications.
Context-free Grammars: Definitions. Parse trees. The pumping lemma for CFLs and applications. Normal forms. General parsing. Sketch of equivalence with pushdown automata. Turing Machines:
Designing simple TMs. Variations in the basic model (multi-tape, multi-head, nondeterminism). Church-Turing thesis and evidence to support it through the study of other models. Undecidability: The
undecidability of the halting problem. Reductions to other problems. Reduction in general.
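A DFA from the first unit can be simulated in a few lines. As an illustrative machine of our own choosing, here is a two-state DFA over {0,1} that accepts exactly the strings with an even number of 1s:

```java
// Two-state DFA: state 0 = even number of 1s seen so far (accepting),
// state 1 = odd. Reading '1' flips the state; reading '0' keeps it.
public class EvenOnesDfa {
    public static boolean accepts(String input) {
        int state = 0; // start state, also the only accepting state
        for (char c : input.toCharArray()) {
            if (c == '1') state = 1 - state; // transition on '1'
            // on '0' the state is unchanged
        }
        return state == 0;
    }

    public static void main(String[] args) {
        System.out.println(accepts("1010")); // true  (two 1s)
        System.out.println(accepts("111"));  // false (three 1s)
    }
}
```

The same language is denoted by the regular expression (0*10*1)*0*, matching the equivalence of DFAs and regular expressions stated in the syllabus.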
Books Recommended:
1. Introduction to Languages and the Theory of Computation, 2nd Edition – John C. Martin; McGraw Hill Publications, 1997.
CSE-333 Microprocessors and Microcontroller
3 Credits
Introduction to 8-bit, 16-bit, and 32-bit microprocessors: architecture, addressing modes, instruction set, interrupts, multi-tasking and virtual memory; Memory interface; Bus interface;
Arithmetic co-processor; Microcontrollers; Integrating microprocessor with interfacing chips.
Books Recommended:
1. Microprocessors & Interfacing- Douglas V. Hall
CSE-334 Microprocessors and Microcontroller Lab
1.5 Credits
Laboratory works based on CSE 333.
CSE-337 System Analysis and Software Engineering
3 Credits
Concepts of Software Engineering; Software Engineering paradigms; Different phases of software System Development; Different types of information, qualities of information. Project Management
Concepts; Software process and project Metrics; Software Project Planning; Risk Analysis and management; Project Scheduling and Tracking. Analysis Concepts and principles: requirement analysis,
Analysis modeling, data modeling. Design concepts and principles, Architectural design, User Interface design, Object Oriented software development and design: Iterative Development and the
Unified Process. Sequential waterfall life cycles, Inception. Use case model for requirement writing, Elaboration using System Sequence Diagram, Domain Model. Visualizing concept classes. UML
diagrams, Interaction and Collaboration Diagram for designing Software. Designing Objects with responsibilities. GRASP patterns with General Principles in assigning responsibilities: Information
expert, Creator, Low Coupling and High Cohesion, Creating design class diagrams and mapping design to codes. Advanced GRASP patterns: Polymorphism, Pure Fabrication, Indirection, Project
Variation. GoF Design Patterns: Adapter, Factory, Singleton, Strategy, Composite, Facade, and Observer. Software Testing: White Box and Black Box testing. Basis Path Testing. Testing for
specialized environment. Software testing strategies: Unit Testing, Integration Testing, Validation Testing, System Testing, Art of debugging. Analysis of System Maintenance and upgrading:
Software repair, downtime, error and faults, specification and correction, Maintenance cost models, documentation. Software Quality Assurance, Quality factors. Software quality measures. Cost
impact of Software defects. Concepts of Software reliability, availability and safety. Function based metrics and bang metrics. Metrics for analysis and design model. Metrics for source code,
testing and maintenance.
Books Recommended:
1. Software Engineering-Ian Sommerville, Addison Wesley, 6th edition, 2000.
2. Software Engineering: A Practitioner's Approach – Roger S. Pressman, McGraw-Hill, 6th edition, 2004.
3. Systems Analysis and Design of Real-Time Management Information Systems- Robert J. Thierauf, Prentice Hall, 1975.
4. Analysis and Design of Information Systems – Rajaraman, Prentice-Hall of India Pvt. Ltd., 2004.
CSE-338 System Analysis and Software Engineering Lab
1.5 Credits
The Software Engineering lab work is designed to give hands-on experience of architectural design, documentation and testing of software, so that students can develop the software following
the documents alone.
Step 1 (Requirement Engineering): Choose a company/institute/client for which software will be developed (make sure that they will provide required information whenever necessary). Follow the
steps for eliciting requirements and generate use-case diagram. Also analyze the sufficiency of the requirement engineering outcome for steps to follow.
Step 2 (Analysis model to Architectural and Component level design): Generate Activity diagram, Data flow diagram (DFD), Class diagram, State diagram, Sequence diagram and follow other relevant
steps for creating complete architectural and component level design of the target software.
Step 3 (User Interface design, Design evaluation, Testing strategies and Testing Tactics): Perform the user interface design with the help of swimlane diagram. Carry out the design evaluation
steps. Generate all test cases for complete checking of the software using black box, white box testing concept.
Step 4 (Software Testing and Debugging)
Step 5 (Managing Software Projects): Analyze the estimation and project schedule.
CSE-425 Digital Signal Processing
3 Credits
Introduction to digital signal processing (DSP): Discrete-time signals and systems, analog to digital conversion, impulse response, finite impulse response (FIR) and infinite impulse response
(IIR) of discrete-time systems, difference equation, convolution, transient and steady state response. Discrete transformations: Discrete Fourier series, discrete-time Fourier series, discrete
Fourier transform (DFT) and properties, fast Fourier transform (FFT), inverse fast Fourier transform, z-transformation – properties, transfer function, poles and zeros and inverse z-transform.
Correlation: circular convolution, auto-correlation and cross correlation. Digital Filters: FIR filters- linear phase filters, specifications, design using window, optimal and frequency sampling
methods; IIR filters- specifications, design using impulse invariant, bi-linear z-transformation, least-square methods and finite precision effects. Digital signal processor TMS family,
Application of digital signal processing
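As a purely illustrative sketch of the DFT topic above (not course material), the naive O(N^2) transform can be written directly from its definition; a single-cycle cosine over 8 samples concentrates its energy in bins 1 and 7.

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

# One full cycle of a cosine over 8 samples: energy appears in bins 1 and 7 only.
signal = [math.cos(2 * math.pi * m / 8) for m in range(8)]
spectrum = dft(signal)
print([round(abs(c), 6) for c in spectrum])
```

The FFT covered in the course computes exactly these values in O(N log N) by recursively splitting even- and odd-indexed samples.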
Books Recommended:
1. Digital Signal Processing-John G. Proakis
2. Signals and Systems-Simon Haykin and Barry Van Veen
3. Digital Signal Processing-R. W. Schafer
4. Digital Signal Processing-Ifeachor
5. Introduction to DSP-Johnny R. Johnson
CSE-426 Digital Signal Processing Lab
1.0 Credits
Laboratory works based on CSE 425.
CSE-403 Compiler Design
3 Credits
Introduction to compilers: Introductory concepts, types of compilers, applications, phases of a compiler. Lexical analysis: Role of the lexical analyzer, input buffering, token specification,
recognition of tokens, symbol tables. Parsing: Parser and its role, context free grammars, top-down parsing. Syntax-directed translation: Syntax-directed definitions, construction of syntax
trees, top-down translation. Type checking: Type systems, type expressions, static and dynamic checking of types, error recovery. Run-time organization: Run-time storage organization, storage
strategies. Intermediate code generation: Intermediate languages, declarations, assignment statements. Code optimization: Basic concepts of code optimization, principal sources of optimization.
Code generation. Features of some common compilers: Characteristic features of C, Pascal and Fortran compilers.
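The role of the lexical analyzer described above can be sketched with a regex-driven tokenizer; the token names and toy expression language below are invented for illustration and are not tied to any particular compiler.

```python
import re

# A minimal lexical analyzer for a toy expression language (illustrative only).
TOKEN_SPEC = [
    ('NUMBER', r'\d+'),
    ('IDENT',  r'[A-Za-z_]\w*'),
    ('OP',     r'[+\-*/=]'),
    ('LPAREN', r'\('),
    ('RPAREN', r'\)'),
    ('SKIP',   r'\s+'),
]
MASTER = re.compile('|'.join(f'(?P<{name}>{pat})' for name, pat in TOKEN_SPEC))

def tokenize(source):
    tokens = []
    for match in MASTER.finditer(source):
        kind = match.lastgroup
        if kind != 'SKIP':                 # drop whitespace between tokens
            tokens.append((kind, match.group()))
    return tokens

print(tokenize('x = 42 + y'))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]
```

Generator tools such as Flex automate exactly this step, producing the token stream that the parser then consumes.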
Books Recommended:
1. Compilers: Principles, Techniques, and Tools – Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Second Edition.
CSE-404 Compiler Design Lab
1.5 Credits
How to use scanner and parser generator tools (e.g., Flex, JFlex, CUP, Yacc). For a given simple source language designing and implementing lexical analyzer, symbol tables, parser,
intermediate code generator and code generator.
CSE-421 Computer Network
3 Credits
Network architectures-layered architectures and ISO reference model: data link protocols, error control, HDLC, X.25, flow and congestion control, virtual terminal protocol, data security. Local
area networks, satellite networks, packet radio networks. Introduction to ARPANET, SNA and DECNET. Topological design and queuing models for network and distributed computing systems.
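The error-control topic above (e.g., CRC framing as used in HDLC) can be illustrated with the standard textbook polynomial-division example; the frame and generator bit strings below are common classroom values, not tied to any specific protocol.

```python
def crc_remainder(bits, generator):
    """Compute the CRC remainder by binary long division over GF(2)."""
    bits = list(bits + '0' * (len(generator) - 1))   # append r zero bits
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == '1':                            # divide only where a 1 leads
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return ''.join(bits[-(len(generator) - 1):])

# Textbook example: frame 1101011011 with generator 10011 leaves remainder 1110,
# so the transmitted frame is 1101011011 + 1110.
print(crc_remainder('1101011011', '10011'))   # -> 1110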
Books Recommended:
1. Computer Networks-A. S. Tanenbaum
2. Introduction to Networking- Barry Nance
3. Data Communications, Computer Networks & Open Systems- F. Halsall
4. TCP/IP – Sidnie Feit
5. Data Communications and Networking-Behrouz A. Forouzan
CSE-422 Computer Network Lab
1.5 Credits
Laboratory works based on CSE 421.
CSE-405 Artificial Intelligence
3 Credits
What is Artificial Intelligence: The AI problems, The underlying assumption, What is an AI technique. Problems, Problem spaces and Search: Defining the problem as a state space search, Production
system, Problem characteristics. Heuristics Search Techniques: Generate and Test, Hill climbing, Best First Search, Problem Reduction, Constraint Satisfaction, Means-Ends Analysis. Knowledge
Representation Issues: Representation and Mappings, Approaches to knowledge Representation, Issues in Knowledge representation. Using Predicate logic: Representing simple facts in logic,
Representing Instance and Isa relationships, Computable functions and Predicates, Resolution. Representing Knowledge using Rules: Procedural versus Declarative Knowledge, Logic Programming,
Forward versus Backward Reasoning, Matching. Game playing: Overview, The Minimax Search Procedure, Adding Alpha-Beta cutoffs, Additional refinements, Iterative Deepening. Planning: Overview, An
example Domain: The Blocks World, Components of a planning system, Goal stack planning, Understanding: What is Understanding, What makes Understanding hard, Understanding as constraint
satisfaction. Natural Language Processing: Introduction, Syntactic Processing, Semantic Analysis, Discourse and Pragmatic Processing. Expert systems: representing and using domain knowledge,
Expert system shells, explanation, Knowledge Acquisition.
AI Programming Language: Python, Prolog, LISP
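The minimax procedure with alpha-beta cutoffs listed above can be sketched in a few lines; the nested-list game tree below is a made-up two-ply example, not course material.

```python
import math

def minimax(state, maximizing, alpha=-math.inf, beta=math.inf):
    """Alpha-beta minimax over a game tree given as nested lists of leaf values."""
    if not isinstance(state, list):          # leaf node: static evaluation
        return state
    best = -math.inf if maximizing else math.inf
    for child in state:
        value = minimax(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:                    # cutoff: this line won't be allowed
            break
    return best

# Two-ply example: MAX picks the branch whose minimum leaf is largest.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # -> 3
```

The alpha-beta cutoffs prune branches that cannot affect the final choice, which is what makes deeper search (and iterative deepening) practical.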
Books Recommended:
1. Introduction to Artificial Intelligence and Expert System-Dan W. Peterson
2. Artificial Intelligence-E. Rich and K. Knight
3. An Introduction to Neural Computing-C. F. Chabris and T. Jackson
4. Artificial Intelligence: A Modern Approach-S. Russel and P. Norvig
5. Artificial Intelligence using C – H. Schieldt
CSE-406 Artificial Intelligence Lab
1.5 Credits
Students will have to understand the functionalities of intelligent agents and how the agents will solve general problems. Students have to use a high-level language (Python, Prolog, LISP) to
solve the following problems:
Backtracking: State space, Constraint satisfaction, Branch and bound. Example: 8-queen, 8-puzzle, Crypt-arithmetic. BFS and production: Water jugs problem, The missionaries and cannibals problem.
Heuristic and recursion: Tic-tac-toe, Simple block world, Goal stack planning, The tower of Hanoi. Question answering: The monkey and bananas problem.
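As a sketch of the BFS exercises above (illustrative only), the classic water jugs problem — measure 2 gallons with a 4-gallon and a 3-gallon jug — reduces to breadth-first search over (a, b) states.

```python
from collections import deque

def water_jugs(cap_a=4, cap_b=3, goal=2):
    """BFS over (a, b) states of the classic 4- and 3-gallon water jug puzzle."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal:                        # goal: exactly `goal` gallons in jug A
            path, node = [], (a, b)
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        moves = [
            (cap_a, b), (a, cap_b),          # fill a jug
            (0, b), (a, 0),                  # empty a jug
            (a - min(a, cap_b - b), b + min(a, cap_b - b)),  # pour A -> B
            (a + min(b, cap_a - a), b - min(b, cap_a - a)),  # pour B -> A
        ]
        for nxt in moves:
            if nxt not in parent:            # visit each state once
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None

print(water_jugs())   # a shortest 6-move sequence of (a, b) states
```

Because BFS expands states level by level, the first goal state found gives a shortest solution, here 6 moves.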
CSE-431 Computer Graphics
3 Credits
Introduction to Graphical data processing. Fundamentals of interactive graphics programming. Architecture of display devices and connectivity to a computer. Implementation of graphics concepts of
two-dimensional and three-dimensional viewing, clipping and transformations. Hidden line algorithms. Raster graphics concepts: Architecture, algorithms and other image synthesis methods. Design
of interactive graphic conversations.
Books Recommended:
1. Principles of Interactive Computer Graphics –William M., Newman, McGraw-Hill, 2nd edition, 1978
2. Computer Graphics: Principle and Practice in C-James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley, 2nd edition, 1995
CSE-432 Computer Graphics Lab
1.5 Credits
Laboratory works based on CSE 431.
CSE-407 Simulation and Modeling
3 Credits
Simulation methods, model building, random number generators, statistical analysis of results, validation and verification techniques, digital simulation of continuous systems. Simulation and
analytical methods for analysis of computer systems and practical problems in business and practice. Introduction to simulation packages.
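The random-number-generation and statistical-analysis topics above can be sketched with a Monte Carlo simulation (illustrative only): estimating pi from the fraction of random points that fall inside the unit quarter-circle.

```python
import random

def estimate_pi(samples=100_000, seed=1):
    """Monte Carlo simulation: fraction of random points in the unit quarter-circle."""
    rng = random.Random(seed)               # seeded generator for reproducible runs
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / samples

print(estimate_pi())   # close to 3.1416; error shrinks as samples grow
```

The same pattern — draw pseudo-random inputs, accumulate a statistic, study how its error falls with sample size — underlies the validation and verification techniques the course covers.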
Books Recommended:
1. System Modeling and Simulation- V.P. Singh
2. System Design, Modeling, and Simulation Using Ptolemy II – Claudius Ptolemaeus
CSE-408 Simulation and Modeling Lab
1.5 Credits
Laboratory works based on CSE 407.
CSE-400 Project / Thesis
3 Credits
Study of problems in the field of Computer Science and Engineering. This course will be initiated in the 3^rd year or early in 4^th year.
CSE-402 Comprehensive Viva Voce
2 Credits
CSE-335 Digital System Design
3 Credits
Design using MSI and LSI components. Design of various components of a computer: ALU, memory and control unit: hardwired and micro-programmed. Microprocessor based designs. Computer bus
standards. Design using special purpose controllers, floppy disk controller. Digital-control system. Computers in telecommunication and control.
Books Recommended:
1. Digital Systems: Principles and Applications- J. Tocci, N. S. Widmer and G. L. Moss, 9th ed., Prentice Hall, 2003
2. Microprocessors and Interfacing: Programming and Hardware- V. Hall, 2nd ed., Glencoe McGraw-Hill, 1992
CSE-336 Digital System Design Lab
1.5 Credits
Laboratory works based on CSE 335.
CSE-435 Computer Interfacing
3 Credits
Interface components and their characteristics, microprocessor I/O. Disk, Drums, and Printers. Optical displays and sensors. High power interface devices, transducers, stepper motors and
peripheral devices.
Books Recommended:
1. Microprocessors & Interfacing-Douglas V. Hall
2. Microprocessor & Microcomputer based System Design – Rafiquzzaman
3. Microcomputer Interfacing-Artwick
4. Microcomputer Interfacing-Ramesh Goanker
5. Designing User Interfaces-James E. Powell
CSE-436 Computer Interfacing Lab
1.5 Credits
Laboratory works based on CSE 435.
ECE-301 Electrical Drives and Instrumentation
3 Credits
Introduction to three phase circuits, alternators and transformers, principles of operation of DC, synchronous, induction, universal, and stepper motors. Thyristor and microprocessor based speed
control of motors. Instrumentation amplifiers: differential, logarithmic and chopper amplifiers. Frequency and voltage measurements using digital techniques. Recorders and display devices, spectrum
analyzers and logic analyzers. Data acquisition and interfacing to microprocessor based systems. Transducers: terminology, types, principles and application of photovoltaic, piezoelectric,
thermoelectric, variable reactance and opto-electronic transducers. Noise reduction in instrumentation.
Books Recommended:
1. Instrumentation for Engineers and Scientists – John Turner and Martyn Hill
2. Electronic Instrumentation Fundamentals-Albert Paul Malvino
3. Principles of Electronic Instrumentation – James Diefenderfer
4. Teaching Electrical Drives and Power Electronics Control-Mats Alaküla
ECE-302 Electrical Drives and Instrumentation Lab
1.5 Credits
Laboratory works based on ECE 301.
ECE-359 Communication Systems
3 Credits
Telephony and Data transmission: Introduction to telephone system exchanges, numbering, switching principles; Subscriber’s apparatus; Dialing and Signaling; Relays construction, characteristics,
types, polarized and non-polarized; Types of signaling – different tones, metering; Basics of switching – Strowger, EMD, Crossbar, trunking; Digital switching – switching controls, SPC, space
division switching, time division switching, two dimensional switching, STS and TST; Basics of telegraphy; Introduction to data transmission; Codes – telegraph codes; Telex and Facsimile.
Information and Modulation; Introduction to communication – elements of communication systems, necessity of modulation, fundamental limitations; Information – measurement, capacity, communication
entropy; Signal transmission through linear networks – Filter bandwidth requirements; Response of idealized networks; Impulse response of linear networks; Digital communications – sampling,
demodulation of sampled signals, quantization noise, SQR, nonlinear quantization, companding (A-law, u-law), delta modulation; Binary modulation schemes – ASK, FSK and PSK; Introduction to advanced
digital modulation schemes. Communication systems: Introduction to HF, VHF/ UHF microwave and fiber optic communication systems; FDM and TDM as per CCITT; Satellite communication – Orbits, power
budgets, multiple access, ground station; Introduction to different applications of satellite technology.
Books Recommended:
1. Communication Systems-Simon Haykin
2. Communication Systems-John G. Proakis
3. Telecommunication Switching Systems and Networks – Thiagarajan Viswanathan
4. Radio Engineering-G. K. Mithal
ECE-360 Communication System Lab
1.5 Credits
Laboratory works based on ECE 359.
CSE-411 VLSI Design
3 Credits
Design and analysis techniques for VLSI circuits. Design of reliable VLSI circuits, noise considerations, design and operation of large fan out and fan in circuits, clocking methodologies,
techniques for data path and data control design. Simulation techniques. Parallel processing, special purpose architectures in VLSI. VLSI layouts partitioning and placement routing and wiring in
VLSI. Reliability aspects of VLSI design.
Books Recommended:
1. Basic VLSI Design-Douglas A Pucknell, Kamran Eshraghian
2. VLSI Technology – S. M. Sze
3. Introduction to VLSI Systems – C. A. Mead and L. A. Conway
CSE-413 Information System Design
3 Credits
Information, general concepts of formal information systems, analysis of information requirements for modern organizations, modern data processing technology and its application, information
systems structures, designing information outputs, classifying and coding data, physical storage media considerations, logical data organization, systems analysis, general systems design, detail
systems design. Project management and documentation. Group development of an information system project. Includes all phases of software life cycles from requirement analysis to the completion
of a fully implemented system.
Books Recommended:
1. Information Systems Analysis and Design – Phil Agre, Christine Borgman
2. Analysis and Design of Information Systems-Langer, Arthur M.
CSE-414 Information System Design Lab
1.0 Credits
Laboratory works based on CSE 413.
CSE-423 Computer System Performance Evaluations
3 Credits
Review of system analysis, approaches to system development, feasibility assessment, hardware and software acquisition. Procurement, workload characterization, the representation of measurement
data, instrumentation: software monitors, hardware monitors, capacity planning, bottleneck detection, system and program tuning, simulation and analytical models and their application, case studies.
Books Recommended:
1. Computer Systems Performance Evaluation and Prediction– Paul J. Fortier and Howard E. Michel
2. The Art of Computer Systems Performance Analysis- Jain
CSE-424 Computer System Performance Evaluation Lab
1.5 Credits
Laboratory works based on CSE 423.
CSE-437 Pattern Recognition
3 Credits
Introduction to pattern recognition: features, classifications, learning. Statistical methods, structural methods and hybrid method. Applications to speech recognition, remote sensing and
biomedical areas. Learning algorithms. Syntactic approach: introduction to pattern grammars and languages, parsing techniques. Pattern recognition in computer-aided design.
Books Recommended:
1. Pattern Recognition – S. Theodoridis and K. Koutroumbas
2. Pattern Recognition and Machine Learning- Christopher M. Bishop
3. Pattern Recognition and Neural Networks – Brian D. Ripley
CSE-438 Pattern Recognition Lab
1.0 Credits
Laboratory works based on CSE 437.
CSE-457 Bio-Informatics
3 Credits
Cell concept: Structural organization of plant and animal cells, nucleus, cell membrane and cell wall. Cell division: Introducing chromosome, Mitosis, Meiosis and production of haploid/diploid
cell. Nucleic acids: Structure and properties of different forms of DNA and RNA; DNA replication. Proteins: Structure and classification, Central dogma of molecular biology. Genetic code: A brief
account. Genetics: Mendel’s laws of inheritance, Organization of genetic material of prokaryotes and eukaryotes, C-Value paradox, repetitive DNA, structure of chromatin – euchromatin and
heterochromatin, chromosome organization and banding patterns, structure of gene – intron, exon and their relationships, overlapping gene, regulatory sequence (lac operon), Molecular mechanism of
general recombination, gene conversion, Evolution and types of mutation, molecular mechanisms of mutation, site-directed mutagenesis, transposons in mutation. Introduction to Bioinformatics:
Definition and History of Bioinformatics, Human Genome Project, Internet and Bioinformatics, Applications of Bioinformatics. Sequence alignment: Dynamic programming. Global versus local. Scoring
matrices. The Blast family of programs. Significance of alignments, Aligning more than two sequences. Genomes alignment. Structure-based alignment. Hidden Markov Models in Bioinformatics:
Definition and applications in Bioinformatics. Examples of the Viterbi, the Forward and the Backward algorithms. Parameter estimation for HMMs. Trees: The Phylogeny problem. Distance methods,
parsimony, bootstrap. Stationary Markov processes. Rate matrices. Maximum likelihood. Felsenstein’s post-order traversal. Finding regulatory elements: Finding regulatory elements in aligned and
unaligned sequences. Gibbs sampling. Introduction to microarray data analysis: Steady state and time series microarray data. From microarray data to biological networks. Identifying regulatory
elements using microarray data. Pi calculus: Description of biological networks; stochastic Pi calculus, Gillespie algorithm.
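The dynamic-programming sequence alignment described above can be sketched as a minimal global (Needleman-Wunsch) scorer; the scoring values (match +1, mismatch -1, gap -2) and the two short sequences are arbitrary illustrative choices.

```python
def global_align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):                 # aligning a prefix against gaps
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                      # match/mismatch
                              score[i - 1][j] + gap,     # gap in b
                              score[i][j - 1] + gap)     # gap in a
    return score[-1][-1]

print(global_align_score('GATTACA', 'GATCA'))   # 5 matches, 2 gaps -> 1
```

Local alignment (Smith-Waterman) differs only by flooring each cell at zero and taking the matrix maximum; BLAST approximates this with seeded heuristics.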
Books Recommended:
1. An Introduction to Bioinformatics Algorithms – Neil C. Jones and Pavel A. Pevzner
2. Introduction to Bioinformatics – Stephen A. Krawetz, David D. Womble
3. Introduction to Bioinformatics – Arthur M. Lesk
CSE-458 Bio-Informatics Lab
1.5 Credits
Laboratory works based on CSE-457.
CSE-463 Machine Learning
3 Credits
Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system- training data, concept representation, function approximation.
Inductive Classification: The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses. Finding maximally specific hypotheses.
Version spaces and the candidate elimination algorithm. Learning conjunctive concepts. The importance of inductive bias. Decision Tree Learning: Representing concepts as decision trees. Recursive
induction of decision trees. Picking the best splitting attribute: entropy and information gain. Searching for simple trees and computational complexity. Occam’s razor. Overfitting, noisy data,
and pruning. Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms- cross-validation, learning curves, and statistical
hypothesis testing. Computational Learning Theory: Models of learnability- learning in the limit; probably approximately correct (PAC) learning. Sample complexity- quantifying the number of
examples needed to PAC learn. Computational complexity of training. Sample complexity for finite hypothesis spaces. PAC results for learning conjunctions, kDNF, and kCNF. Sample complexity for
infinite hypothesis spaces, Vapnik-Chervonenkis dimension. Rule Learning, Propositional and First-Order: Translating decision trees into rules. Heuristic rule induction using separate and conquer
and information gain. First-order Horn-clause induction (Inductive Logic Programming) and Foil. Learning recursive rules. Inverse resolution, Golem, and Progol. Artificial Neural Networks:
Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training. Multilayer networks and backpropagation. Hidden layers and
constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks. Support Vector Machines: Maximum margin linear separators. Quadratic
programming solution to finding maximum margin separators. Kernels for learning non-linear functions. Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm.
Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies. Instance-Based Learning: Constructing explicit
generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning. Text Classification: Bag of words representation. Vector space model and cosine
similarity. Relevance feedback and Rocchio algorithm. Versions of nearest neighbor and Naive Bayes for text. Clustering and Unsupervised Learning: Learning from unclassified data. Clustering.
Hierarchical Agglomerative Clustering. k-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised learning with EM using labeled and unlabeled data.
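The k-nearest-neighbor algorithm named above is compact enough to sketch in full; the tiny two-cluster dataset below is invented purely for illustration.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbor classification: majority label among the k closest points."""
    ranked = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Tiny illustrative dataset: two well-separated clusters labeled 'a' and 'b'.
train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
print(knn_predict(train, (0.5, 0.5)))   # -> 'a'
print(knn_predict(train, (5.5, 5.5)))   # -> 'b'
```

Unlike the explicit-generalization learners in the course, k-NN stores the training set verbatim and defers all computation to query time, which is exactly the instance-based trade-off the syllabus highlights.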
Books Recommended:
1. Artificial Intelligence: a modern approach (2nd edition), Russell, S. and P. Norvig, Prentice Hall, 2003
2. Introduction to Machine Learning – Ethem ALPAYDIN
3. Machine Learning – Tom Mitchell, McGraw Hill
4. Introduction to machine learning (2nd edition), Alpaydin, Ethem, MIT Press, 2010
5. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods – Nello Cristianini and John Shawe-Taylor, Cambridge University Press
CSE-464 Machine Learning Lab
1.5 Credits.
Students should learn the methods for extracting rules or learning from data, and get the necessary mathematical background to understand how the methods work and how to get the best performance
from them. To achieve these goals students should learn the following algorithms in the lab: K-Nearest-Neighbor Classifier, Decision Trees, Model Selection and Empirical Methodologies, Linear
Classifiers: Perceptron and SVM, Naive Bayes Classifier, Basics of Clustering Analysis, K-means Clustering Algorithm, Hierarchical Clustering Algorithm. Upon completion of the course, the student
should be able to perform the following:
a. Evaluate whether a learning system is required to address a particular problem.
b. Understand how to use data for learning, model selection, and testing to achieve the goals.
c. Understand generally the relationship between model complexity and model performance, and be able to use this to design a strategy to improve an existing system.
d. Understand the advantages and disadvantages of the learning systems studied in the course, and decide which learning system is appropriate for a particular application.
e. Make a naive Bayes classifier and interpret the results as probabilities.
f. Be able to apply clustering algorithms to simple data sets for clustering.
CSE-353 Digital Image Processing
3 Credits
Image Processing: Image Fundamentals, Image Enhancement: Background, Enhancement by Point-Processing, Spatial Filtering, Enhancement in Frequency Domain, Color Image Processing. Image
Restoration: Degradation Model, Diagonalization of Circulant and Block-Circulant Matrices, Algebraic Approach to Restoration, Inverse Filtering, Geometric Transformation. Image Segmentation:
Detection of Discontinuities, Edge Linking and Boundary Detection, Thresholding, Region-Oriented Segmentation, The use of Motion in Segmentation. Image Compression.
Books Recommended:
1. Digital Image Processing-Rafael C. Gonzalez and Richard E. Woods, Pearson Education Asia.
2. Non-Linear Digital Filter : Principles and Applications –I. Pitas and A. N. Venetsanopoulos, Kluwer Academic Publications.
CSE-354 Digital Image Processing Lab
1.5 Credits
Laboratory works based on CSE 353.
CSE-419 Graph Theory
3 Credits
Introduction, Fundamental concepts, Trees, Spanning trees in graphs, Distance in graphs, Eulerian graphs, Digraphs, Matching and factors, Cuts and connectivity, k-connected graphs, Network flow
problems, Graph coloring: vertex coloring and edge coloring, Line graphs, Hamiltonian cycles, Planar graphs, Perfect graphs.
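The vertex-coloring topic above has a standard greedy heuristic that is short enough to sketch; the 4-cycle below is an illustrative example, not course material.

```python
def greedy_coloring(adj):
    """Greedy vertex coloring: each vertex gets the smallest color unused by neighbors."""
    colors = {}
    for v in adj:                            # visiting order matters; this is a heuristic
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# A 4-cycle is bipartite, so two colors suffice.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coloring = greedy_coloring(cycle4)
print(coloring)   # -> {0: 0, 1: 1, 2: 0, 3: 1}
```

Greedy coloring never uses more than (max degree + 1) colors but can be far from optimal for bad vertex orderings; finding the true chromatic number is NP-hard.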
Books Recommended:
1. Graph Theory and Its Applications – Jonathan L. Gross, Jay Yellen
2. A Textbook of Graph Theory – R. Balakrishnan, K. Ranganathan
CSE-420 Graph Theory Lab
1.5 Credits
Laboratory works based on CSE 419.
ECE-421 Digital Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
Books Recommended:
1. Digital Communication- John G. Proakis
2. Digital Communication –Bernard Sklar
3. Introduction to Digital Communication- Roger L. Peterson
4. Digital Communication-Prof. N. Sarkar
5. Communication Systems-Simon Haykin
ECE-422 Digital Communication Lab
1.5 Credits
Laboratory works based on ECE 421.
CSE-455 Wireless Sensor Networks
3 Credits
Introduction: applications; Localization and tracking: tracking multiple objects; Medium Access Control: S-MAC, IEEE 802.15.4 and ZigBee; Geographic and energy-aware routing; Attribute-Based
Routing: directed diffusion, rumor routing, geographic hash tables; Infrastructure establishment: topology control, clustering, time synchronization; Sensor tasking and control: task-driven
sensing, information-based sensor tasking, joint routing and information aggregation; Sensor network databases: challenges, querying the physical environment, in-network aggregation, data indices
and range queries, distributed hierarchical aggregation; Sensor network platforms and tools: sensor node hardware, sensor network programming challenges; Other state-of-the-art related topics.
Books Recommended:
1. Wireless Sensor Networks – C. S. Raghavendra, Krishna M. Sivalingam and Taieb Znati
2. Wireless Sensor Networks: An Information Processing Approach (The Morgan Kaufmann Series in Networking) – Feng Zhao, Leonidas Guibas
CSE-456 Wireless Sensor Networks Lab
1.5 Credits
Laboratory works based on CSE 455.
CSE-461 Neural Networks
3 Credits
Fundamentals of Neural Networks; Back propagation and related training algorithms; Hebbian learning; Cohen-Grossberg learning; The BAM and the Hopfield Memory; Simulated Annealing; Different
types of Neural Networks: Counter propagation, Probabilistic, Radial Basis Function, Generalized Regression, etc.; Adaptive Resonance Theory; Dynamic Systems and Neural Control; The Boltzmann
Machine; Self-organizing Maps; Spatiotemporal Pattern Classification, The Neocognitron; Practical Aspects of Neural Networks.
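The simplest unit in the networks above — a single linear threshold unit trained by the perceptron rule — can be sketched directly; learning logical AND (which is linearly separable) is a standard illustrative target.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Train a single linear threshold unit (perceptron) on labeled 2-D inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out               # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical AND, which is linearly separable (unlike XOR).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])   # -> [0, 0, 0, 1]
```

XOR has no such separating line — the representational limitation that motivates the multilayer networks and backpropagation covered in the course.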
Books Recommended:
1. An Introduction to Neural Networks – Prof. Leslie Smith
2. Fundamentals of Artificial Neural Networks – Mohamad H. Hassoun
CSE-462 Neural Networks Lab
1.5 Credits
Laboratory works based on CSE 461.
CSE-467 Advanced Database Systems
3.0 Credits
Introduction: Object-oriented Database, Data Model, Design, Languages; Object Relational Database: Complex data types, Querying with complex data types, Design; Distributed Database: Levels of
distribution transparency, Translation of global queries to fragment queries, Optimization of access strategies, Management of distributed transactions, Concurrency control, reliability,
Administration; Parallel Database: Different types of parallelism, Design of parallel database; Multimedia Database Systems: Basic concepts, Design, Optimization of access strategies, Management
of Multimedia Database Systems, Reliability; Data Warehousing/Data Mining: Basic concepts and algorithms.
Books Recommended:
1. Oracle Advanced PL/SQL Programming with CD-ROM – Scott Urman.
CSE-468 Advanced Database System Lab
1.5 Credits
Laboratory works based on theory classes.
CSE-469 Natural Language Processing
3 Hours/Week, 3 Credits
Introduction; Word Modeling: Automata and Linguistics, Statistical Approaches and Part of Speech Tagging; Linguistics and Grammars; Parsing Algorithms; Parsing Algorithms and the Lexicon;
Semantics; Feature Parsing; Tree Banks and Probabilistic Parsing; Machine Translation; Evolutionary Models of Language Learning and Origins.
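The statistical part-of-speech tagging topic above has a classic baseline — tag each word with its most frequent tag from a tagged corpus — that can be sketched in a few lines; the tiny corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# A most-frequent-tag baseline for part-of-speech tagging (illustrative corpus).
tagged_corpus = [
    ('the', 'DET'), ('dog', 'NOUN'), ('runs', 'VERB'),
    ('the', 'DET'), ('cat', 'NOUN'), ('runs', 'VERB'),
    ('runs', 'NOUN'),      # 'runs' occasionally appears as a noun
]

counts = defaultdict(Counter)
for word, t in tagged_corpus:
    counts[word][t] += 1

def tag(sentence):
    """Assign each word its most frequent tag, defaulting to NOUN for unknown words."""
    return [counts[w].most_common(1)[0][0] if w in counts else 'NOUN'
            for w in sentence]

print(tag(['the', 'dog', 'runs']))   # -> ['DET', 'NOUN', 'VERB']
```

Real statistical taggers (e.g., HMM taggers) improve on this baseline by also modeling tag-to-tag transition probabilities, but the baseline already scores surprisingly well on English text.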
Books Recommended:
1. Speech and Language Processing –Jurafsky, D. and Martin, J. H
2. Foundations of Statistical Natural Language Processing – Manning, C. D. and H. Schütze
3. Computational Complexity and Natural Language– Barton, E., Berwick, R., and Ristad, E
4. Natural Language Understanding -Allen, J.
5. Computational Models of Discourse–Brady, J., and Berwick, R.
CSE-470 Natural Language Processing Lab
1.5 Credits
Processing of words, Phrase structure parsing, Semantic Interpretation with Phrase Structure Grammars
Books Recommended:
1. Speech and Language Processing –Jurafsky, D. and Martin, J. H
2. Foundations of Statistical Natural Language Processing – Manning, C. D. and H. Schütze
3. Computational Complexity and Natural Language– Barton, E., Berwick, R., and Ristad, E
4. Natural Language Understanding -Allen, J.
5. Computational Models of Discourse–Brady, J., and Berwick, R.
OLD SYLLABUS (Evening Program)
• Detailed Syllabus
Semester 1-1 (23.5 Credits)
Course Code Course Title Credit Hours
HUM 103 Language Composition & Comprehension 3
CSE 103 Computer Programming in C 3
CSE 104 Computer Programming in C Lab 1.5
PHY 101E Physics for Engineers 3
PHY 102 Physics for Engineers Lab 1.5
MTH 101E Geometry, Differential & Integral Calculus 3
ECE 101 Basic Electrical Engineering 3
ECE 102 Basic Electrical Engineering Lab 1.5
ECN 101 Principles of Economics 2
ACN 201 Principles of Accounting 2
Semester 1-2 (23.5 Credits)
Course Code Course Title Credit Hours
CSE 200 Project Work 2
CSE 201 Discrete Mathematics 3
CSE 203 Object Oriented Programming Language 3
CSE 204 Object Oriented Programming Language Lab 1.5
CSE 205 Data Structures 3
CSE 206 Data Structures Lab 1.5
ECE 201 Electronic Devices & Circuits 3
ECE 202 Electronic Devices & Circuits Lab 1.5
MTH 103E Linear Algebra, Vector Analysis & Complex Variables 3
IMG 201 Principles of Management 2
Semester 2-1 (22.5 Credits)
Course Code Course Title Credit Hours
CSE 207 Algorithms 3
CSE 208 Algorithms Lab 1.5
CSE 209 Numerical Methods 3
CSE 231 Digital Logic Design 3
CSE 232 Digital Logic Design Lab 1.5
CSE 321 Database Systems 3
CSE 322 Database Systems Lab 1.5
CSE 331 Computer Architecture 3
MTH 203E Differential Equations, Laplace Transforms & Fourier Analysis 3
Semester 2-2 (23.5 Credits)
Course Code Course Title Credit Hours
CSE 300 Software Development 2
CSE 301 E-Commerce and Web Engineering 3
CSE 302 E-Commerce and Web Engineering Lab 1.5
CSE 303 Operating Systems 3
CSE 304 Operating Systems Lab 1.5
CSE 315 Data Communication 3
CSE 351 Management Information System 3
CSE 403 Compiler Design 3
CSE 404 Compiler Design Lab 1.5
MTH 301 Statistics & Probability 2
Semester 3-1 (23 Credits)
Course Code Course Title Credit Hours
CSE 333 Microprocessors and Assembly Language 3
CSE 334 Microprocessors and Assembly Language Lab 1.5
CSE 339 Theory of Computation 2
CSE 401 Software Engineering 3
CSE 421 Computer Network 3
CSE 422 Computer Network Lab 1.5
CSE 435 Computer Interfacing 3
CSE 436 Computer Interfacing Lab 1.5
CSE 4** Option 3
CSE 4** Option Lab 1.5
Semester 3-2 (23 credits)
Course Code Course Title Credit Hours
CSE 405 Artificial Intelligence & Expert Systems 3
CSE 406 Artificial Intelligence & Expert Systems Lab 1.5
CSE 425 Digital Signal Processing 3
CSE 426 Digital Signal Processing Lab 1.5
CSE 431 Computer Graphics 3
CSE 432 Computer Graphics Lab 1.5
CSE 400 Project /Thesis 3
CSE 402 Comprehensive Viva Voce 2
CSE 4** Option 3
CSE 4** Option Lab 1.5
Total Credit Hours Required for Degree 139.00
Optional Courses:
Course Code Course Title Credit Hours
CSE 407 Simulation & Modeling 3
CSE 408 Simulation & Modeling Lab 1.5
CSE 411 VLSI Design 3
CSE 412 VLSI Design Lab 1.5
CSE 413 Information System Design 3
CSE 414 Information System Design Lab 1.5
CSE 419 Graph Theory 3
CSE 420 Graph Theory Lab 1.5
CSE 423 Computer System Performance Evaluation 3
CSE 424 Computer System Performance Evaluation Lab 1.5
CSE 437 Pattern Recognition 3
CSE 438 Pattern Recognition Lab 1.5
CSE 453 Digital Image Processing 3
CSE 454 Digital Image Processing Lab 1.5
CSE 455 Wireless and Sensor Networks 3
CSE 456 Wireless and Sensor Networks Lab 1.5
CSE 457 Bioinformatics 3
CSE 458 Bioinformatics Lab 1.5
CSE 461 Neural Networks 3
CSE 462 Neural Networks Lab 1.5
CSE 463 Machine Learning 3
CSE 464 Machine Learning Lab 1.5
CSE 465 Contemporary course on CSE 3
CSE 466 Contemporary course on CSE Lab 1.5
Detailed Syllabus
HUM-103 Language Composition and Comprehension
3 Credits
This course purports to make the student well up in composition and comprehension of English language used in formal write ups like articles, essays and treatises. Here text will be given for
comprehension, exercises of writing essays, paragraphs and reports will be done and construction of proper sentences expressing formal ideas will be taught. Sufficient exercises of translation
and re-translations will be included.
Books Recommended:
1. Exercise in Reading Comprehension – Tibbits
2. Essential English Grammar – Raymond Murphy
3. English Vocabulary in Use – Stuart
4. English Vocabulary in Use – McCarthy
5. Intermediate English Grammar – Raymond Murphy
6. Paragraph in English – Tibbits
CSE-103 Computer Programming in C
3 Credits
Programming language: Basic concepts; Overview of programming languages. C-language: Preliminaries; Elements of C; Program constructs; Variables and data types in C; Input and output; Character
and formatted I/O; Arithmetic expressions and assignment statements; Loops and nested loops; Decision making; Arrays; Functions; Arguments and local variables; Calling functions and arrays;
Recursion and recursive functions; Structures within structures; Files; File functions for sequential and random I/O; Pointers; Pointers and structures; Pointers and functions; Pointers and arrays;
Operations on pointers; Pointers and memory addresses; Operations on bits; Bit operations; Bit fields; Advanced features; Standard and library functions.
Books Recommended:
1. The C Programming Language – Kernighan & Ritchie
2. Teach Yourself C – H. Schildt
3. Programming with ANSI C – E. Balagurusamy
4. The Complete Reference, Turbo C/C++ – H. Schildt
5. Programming with C, Schaum’s Outline Series – Gottfried
CSE-104 Computer Programming in C Lab
1.5 Credits
Laboratory works based on CSE 103.
PHY-101E Physics for Engineers
3 Credits
Properties of matter : Elasticity, Stress & Strain, Young’s Modulus, Surface Tension. Heat & Thermodynamics: Heat, Temperature, Zeroth Law of Thermodynamics, Thermal Equilibrium, Seebeck effect,
Reversible & Irreversible Processes, First and Second Law of Thermodynamics, Heat Engine, Carnot Cycle. Electromagnetism: Electric charge, Charge density, Coulomb’s and Ohm’s law, Electric field
and electric potential, Electric dipole, Electric flux, Gauss’s law and its application, Capacitance, Magnetic field, Biot-Savart law, Ampere’s law and its application, Electromagnetic Induction,
Faraday’s law, Lenz’s law, Self Inductance and Mutual Inductance. Optics: Nature and Propagation of light, Reflection and Refraction of light, Total Internal Reflection, Interference,
Diffraction, Dispersion, Polarization. Modern Physics: Theory of Relativity, Length Contraction and Time Dilation, Mass-Energy Relation, Compton Effect, Photoelectric Effect, Quantum Theory,
Atomic Structure, X-ray Diffraction, Atomic Spectra, Electron Orbital Wavelength, Bohr radius, Radioactivity, de Broglie theory, Nuclear Fission and Fusion.
Books Recommended:
1. Modern Physics – Bernstein
2. Concepts of Modern Physics – Beiser
3. Heat & Thermodynamics – Brijlal
4. University Physics with Modern Physics – Young
PHY 102 Physics Lab
1.5 Credits
Laboratory works based on PHY 101E.
MTH-101E Geometry, Differential and Integral Calculus
3 Credits
Geometry: Two dimensional geometry: Straight lines, Pair of straight lines, Circle, Parabola, Ellipse and Hyperbola, General Equation of Second Degree. Three Dimensional Geometry:
Three dimensional Co-ordinates, Direction Cosines and Direction Ratios, Plane and Straight line. Differential Calculus: Real number system. Functions of single variables, their Graphs, Limit,
Continuity and Differentiability. Successive Differentiation, Leibnitz’s theorem, Rolle’s theorem, Mean value theorem, Taylor’s theorem, Maclaurin’s theorem, Lagrange’s and Cauchy’s forms of
Remainder. Expansion of Functions in Taylor’s and Maclaurin’s Series. Maximum and Minimum Values of Functions. Evaluation of Indeterminate forms of limit, L’Hospital’s Rule. Tangent and Normal.
Functions of more than one variable, Limit, Continuity, Differentiability, Partial Derivatives, Euler’s Theorem. Jacobians. Integral Calculus: Indefinite Integrals and its definition. Methods of
Integration (Integration by substitution, Integration by parts, Integration by successive reduction). Fundamental theorem of Integral calculus. Definite Integral and its properties. Definite
Integral as the limit of a sum. Improper Integrals, Beta and Gamma Function, Its application in evaluating Integrals. Evaluation of Arc length, Areas, Surfaces of Revolution, Volumes of solids of
Revolution, Multiple Integrals.
Books Recommended:
1. Analytical Geometry of Conic Section – J.M. Kar.
2. A Text Book on Co-ordinate Geometry – Rahman & Bhattacharjee; S. Chakrabarty, Gonith Prokashon.
3. Calculus with Analytic Geometry – Thomas and Finney
4. Calculus – Howard Anton; 10^th Edition; John Wiley and Sons
5. Differential Calculus – B. C. Das & B. N. Mukherjee; 54^th Edition; U. N. Dhur & Sons PTL
6. Integral Calculus – B. C. Das & B. N. Mukherjee; 54^th Edition; U. N. Dhur & Sons PTL
ECE 101 Basic Electrical Engineering
3 Credits
Fundamental electrical concepts, Kirchhoff’s Laws, Equivalent resistance. Electrical circuits: Series circuits, parallel circuits, series-parallel networks. Network analysis: Source conversion,
Star/Delta conversion, Branch-current method, Mesh analysis, Nodal analysis. Network theorems: Superposition theorem, Thevenin’s theorem, Norton’s theorem. Capacitors. Magnetic circuits.
Inductors. Sinusoidal alternating waveforms: Definitions, phase relations, Instantaneous value, Average value, Effective (rms) value. Phasor algebra. Series, parallel and series-parallel AC
networks. Power: Apparent power, Reactive power, Power triangle, Power factor correction. Pulse waveforms and the R-C response. Three-phase systems. Transformers.
Books Recommended:
1. Introductory Circuit Analysis – R. L. Boylestad
2. Introduction to Electrical Engineering – P. Ward
3. Electrical Technology (Volume 1) – B. L. Theraja, A. K. Theraja
4. Alternating Current Circuits – R. M. Kerchner, G. F. Corcoran
5. Electric Circuits – James W. Nilsson
ECE 102 Basic Electrical Engineering Lab
1.5 Credits
Laboratory works based on ECE 101.
ECN 101 Principles of Economics
2 Credits
Introduction: The Nature, scope and methods of Economics; Economics and Engineering; Some fundamental concepts commonly used in Economics. Micro Economics: The theory of demand and supply and
their elasticities. Market price determination; competition in theory and practice. Indifference curve technique. Marginal analysis. Factors of production and production function. Scale of
production – Internal and external economies and diseconomies. The short run and the long run. Fixed cost and variable cost. Macro Economics: National income analysis. Inflation and its effects.
Savings, Investments. The basis of trade and the terms of trade. Monetary policy, Fiscal policy, Trade policy with reference to Bangladesh. Planning in Bangladesh.
Books Recommended:
1. Economics – Samuelson & Nordhaus
2. Economics – Don Bush Fisher
ACN 201 Principles of Accounting
2 Credits
This course aims at developing basic concepts and principles of accounting. It will cover topics like working at journal entries, preparation of ledger, checking the accuracy through trial
balance, and preparation of financial statements. Concepts and practices of cost accounting will be discussed by covering topics like job order and process costing, contract costing, differential
costing and responsibility accounting. Contemporary practices of accounting principles will be discussed under the current legal framework.
Books Recommended:
1. Accounting Principles – Kieso
2. Financial & Managerial Accounting – Needles
CSE 200 Project Work
2 Credits
A project focusing on an object oriented programming approach and using standard algorithms is preferable. Every project should have a clear goal so that it can serve as a useful tool in the IT field.
Innovative project ideas that require different types of scripting/programming languages or programming tools can also be accepted with the consent of the corresponding project supervisor.
CSE-201 Discrete Mathematics
3 Credits
Mathematical Models and Reasoning: Propositions, Predicates and Quantifiers, Logical operators, Logical inference, Methods of proof. Sets: Set theory, Relations between sets, Operations on sets.
Induction, The natural numbers, Set operations on Σ*. Binary Relations: Binary relations and Digraphs, Graph theory, Trees, Properties of relations, Composition of relations, Closure operations
on relations, Order relations, Equivalence relations and partitions. Functions: Basic properties, Special classes of functions. Counting and Algorithm Analysis: Techniques, Asymptotic behavior of
functions, Recurrence systems, Analysis of algorithms. Infinite sets: Finite and Infinite sets, Countable and uncountable sets, Comparison of cardinal numbers. Algebras: Structure, Varieties of
algebras, Homomorphism, Congruence relations.
Books Recommended:
1. Discrete Mathematics and its Applications – Kenneth H. Rosen
2. Discrete Mathematical Structures – Bernard Kolman, Robert C. Busby, Sharon Cutler Ross
3. Concrete Mathematics – Ronald L. Graham, Donald E. Knuth, Oren Patashnik
CSE 203 Object Oriented Programming Language
3 Credits
Introduction to Java: History of Java, Java class libraries, Introduction to Java programming, and a simple program. Developing Java Applications: Introduction, Algorithms, Pseudocode, Control
structures, The if/else selection structure, The while repetition structure, Assignment operators, Increment and decrement operators, Primitive data types, Common escape sequences, Logical
operators. Control Structures: Introduction, for structure, switch structure, do-while structure, break and continue statements. Methods: Introduction, Program modules in Java, Math class methods,
Method definitions, Java API packages, Automatic variables, Recursion, Method overloading, Methods of the Applet class. Arrays: Introduction, Arrays, Declaring and allocating arrays, Passing
arrays to methods, Sorting arrays, Searching arrays, Multiple-subscripted arrays. Inheritance: Introduction, Superclass, Subclass, Protected members, Using constructors and finalizers in
subclasses, Composition vs. inheritance, Introduction to polymorphism, Dynamic method binding, Final methods and classes, Abstract superclasses and concrete classes, Exception handling.
Books Recommended:
1. Java, How to Program – H. M. Deitel & P. J. Deitel
2. Core Java (Vol. 1 and 2) – Sun Press
3. Beginning Java 2, Wrox – Ivor Horton
4. Java 2 Complete Reference – H. Schildt
CSE 204 Object Oriented Programming Language Lab
1.5 Credits
Laboratory works based on CSE 203.
CSE-205 Data Structures
3 Credits
Concepts and Examples: Introduction to Data structures. Elementary data structures: Arrays, records, pointer. Arrays: Type, memory representation and operations with arrays. Linked lists:
Representation, Types and operations with linked lists. Stacks and Queues: Implementations, operations with stacks and queues. Graphs: Implementations, operations with graph. Trees:
Representations, Types, operations with trees. Memory Management: Uniform size records, diverse size records. Sorting: Internal sorting, external sorting. Searching : List searching, tree
searching. Hashing: Hashing functions, collision resolution.
Books Recommended:
1. Fundamentals of Data Structures – E. Horowitz & S. Sahni
2. Data Structures – Reingold
3. Data Structures, Schaum’s Outline Series – Lipschutz
4. Data Structures & Program Design – Robert L. Kruse
CSE-206 Data Structures Lab
1.5 Credits
Laboratory works based on CSE 205.
ECE-201 Electronic Devices & Circuits
3 Credits
Introduction to semiconductors, Junction diode characteristics & diode applications, Bipolar Junction transistor characteristics, Transistor biasing, Small signal low frequency h-parameter model
& hybrid -pi model, AC analysis of transistor, Frequency response of transistor, Operational amplifiers, Linear applications of operational amplifiers, DC performance of operational amplifiers,
AC performance of operational amplifiers, Introduction to JFET, MOSFET, PMOS, NMOS & CMOS, Introduction to SCR, TRIAC, DIAC & UJT, Active filters, Introduction to IC fabrication techniques & VLSI.
Books Recommended:
1. Electronic Devices & Circuits – Jacob Millman & Christos C. Halkias; McGraw-Hill
2. Electronic Devices and Circuits – S. Salivahanan, N. S. Kumar and A. Vallavaraj; Tata McGraw-Hill
3. Electronics Fundamentals: Circuits, Devices, and Applications – Ronald J. Tocci
ECE 202 Electronic Devices & Circuits Lab
1.5 Credits
Laboratory works based on ECE 201.
MTH-103E Linear Algebra, Vector Analysis and Complex Variables
3 Credits
Linear Algebra: Matrix, Types of Matrices, Matrix operations, Laws of matrix algebra, Invertible matrices, System of Linear equations (homogeneous and non-homogeneous) and their solution.
Elementary row and column operations and Row reduced echelon matrices, Different types of matrices, Rank of matrices. Eigenvalues and Eigenvectors. Vector Analysis: Vector Algebra – Vectors in
three dimensional space, Algebra of Vectors, Rectangular components, Addition and Scalar multiplication, Scalar and Vector product of two Vectors, Scalar and Vector triple product. Vector
Calculus – Vector differentiation and Integration. Gradient, Divergence and Curl. Green’s theorem, Stoke’s theorem. Complex Variable: Limit, Continuity and differentiability of complex functions.
Analytic functions, Harmonic functions, Cauchy-Riemann equations. Complex Integration. Cauchy’s integral theorem and Cauchy’s Integral formula. Liouville’s theorem. Taylor’s and Laurent’s theorems.
Singularities. Residue, Cauchy’s Residue theorem. Contour Integration.
Books Recommended:
1. Schaum’s Outline Series of the Theory and Problems on Linear Algebra – Seymour Lipschutz, 3^rd ed., McGraw Hill Book Company
2. Linear Algebra with Applications – R. Antone
3. Schaum’s Outline Series of the Theory and Problems on Vector Analysis – Murray R. Spiegel, SI (Metric ed.), McGraw Hill
4. Functions of a Complex Variable – Dewan Abdul Quddus, Titash Publications.
IMG 201 Principles of Management
2 Credits
This course aims at providing students with concepts and tools of general management. The course covers the concepts of planning, organizing, motivating and controlling, and their importance in
attaining organizational objectives. Some current issues and trends in general management will also be discussed.
Books Recommended:
1. Principles of Management – Mason Carpenter
2. Principles of Management – Robert Kreitner
3. Principles of Management : A Modern Approach – P.K.Saxena
4. Principles of Management – P.C. Tripathi, P N Reddy, McGraw-Hill
CSE-207 Algorithms
3 Credits
Analysis of Algorithm: Asymptotic analysis: Recurrences, Substitution method, Recurrence tree method, Master method. Divide-and-Conquer: Binary search, Powering a number, Fibonacci numbers,
Matrix Multiplication, Strassen’s Algorithm for Matrix Multiplication. Sorting: Insertion sort, Merge sort, Quick sort, Randomized quick sort, Decision tree, Counting sort, Radix sort. Order
Statistics: Randomized divide and conquer, worst case linear time order statistics. Graph: Representation, Traversing a graph, Topological sorting, Connected Components. Dynamic Programming:
Elements of DP (Optimal substructure, Overlapping subproblems), Longest Common Subsequence finding problem, Matrix Chain Multiplication. Greedy Method: Greedy choice property, elements of greedy
strategy, Activity selection problem, Minimum spanning tree (Prim’s algorithm, Kruskal’s algorithm), Huffman coding. Shortest Path Algorithms: Dynamic and Greedy properties, Dijkstra’s algorithm with
its correctness and analysis, Bellman-Ford algorithm, All pair shortest path: Floyd-Warshall algorithm, Johnson’s algorithm. Network flow: Maximum flow, Max-flow-min-cut, Bipartite matching.
Backtracking/Branch-and-Bound: Permutation, Combination, 8-queen problem, 15-puzzle problem. Geometric algorithms: Segment-segment intersection, Convex hull, Closest pair problem. NP-Completeness:
NP-hard and NP-complete problems.
Books Recommended:
1. Introduction to Algorithms – Thomas H. Cormen, Charles E. Leiserson
2. Algorithms – Robert Sedgewick and Kevin Wayne
3. The Art of Computer Programming, Volume 1: Fundamental Algorithms – Donald E. Knuth; Addison-Wesley Professional, 3rd edition, 1997
CSE-208 Algorithms Lab
1.5 Credits
Using different well known algorithms to solve the problem of Matrix-Chain Multiplication, Longest Common Subsequence, Huffman codes generation, Permutation, Combination, 8-queen problem,
15-puzzle, BFS, DFS, flood fill using DFS, Topological sorting, Strongly connected component, finding minimum spanning tree, finding shortest path (Dijkstra’s algorithm and Bellman-Ford’s
algorithm), Flow networks and maximum bipartite matching, Finding the convex hull, Closest pair.
CSE-209 Numerical Methods
3 Credits
Errors and Accuracy. Iterative process: Solution of f(x) = 0, existence and convergence of a root, convergence of the iterative method, geometrical representation, Aitken’s Δ²-process of
acceleration. System of Linear Equations. Solution of Non-Linear equations. Finite Differences and Interpolation. Finite Difference Interpolation. Numerical Differentiation. Numerical
Integration. Differential Equations.
Books Recommended:
1. Introductory methods of Numerical Analysis – S. S. Sastry
2. Numerical Methods for Engineers –Steven C. Chapra
CSE-231 Digital Logic Design
3 Credits
Binary Logic. Logic Gates: IC digital logic families, positive and negative logic. Boolean Algebra. Simplification of Boolean Functions: Karnaugh map method, SOP and POS simplification, NAND,
NOR, wired-AND, wired-OR implementation, nondegenerate forms, Don’t care conditions, Tabulation method – prime implicant chart. Combinational Logic: Arithmetic circuits – half and full adders and
subtractors, multilevel NAND and NOR circuits, Ex-OR and Equivalence functions. Combinational Logic in MSI and LSI: Binary parallel adder, decimal and BCD adders, Comparators, Decoders and
Encoders, Demultiplexors and Multiplexors. Sequential Logic. Registers and Counters. Synchronous Sequential Circuits. Asynchronous Sequential Circuits. Digital IC terminology, TTL logic family,
TTL series characteristics, open-collector TTL, tristate TTL, ECL family, MOS digital ICs, MOSFET, CMOS characteristics, CMOS tristate logic, TTL-CMOS-TTL interfacing, memory terminology, general
memory operation, semiconductor memory technologies, different types of ROMs, semiconductor RAMs, static and dynamic RAMs, magnetic bubble memory, CCD memory, FPGA Concept.
Books Recommended:
1. Digital Logic & Computer Design-M. Morris Mano
2. Digital Fundamentals- Floyd
3. Modern Digital Electronics-R. P. Jain
4. Digital Systems- R. J. Tocci
5. Digital Electronics- Green
CSE-232 Digital Logic Design Lab
1.5 Credits
Laboratory works based on CSE 231.
CSE-321 Database Systems
3 Credits
Introduction: Purpose of Database Systems, Data Abstraction, Data Models, Instances and Schemes, Data Independence, Data Definition Language, Data Manipulation Language, Database Manager,
Database administrator, Database Users, Overall System Structure, Advantages and Disadvantages of a Database System. Data Mining and Analysis, Database Architecture, History of Database Systems.
Entity-Relationship Model: Entities and Entity Sets, Relationships and Relationship Sets, Attributes, Composite and Multivalued Attributes, Mapping Constraints, Keys, Entity-Relationship Diagram,
Reduction of E-R Diagram to Tables, Generalization, Attribute Inheritance, Aggregation, Alternative E-R Notations, Design of an E-R Database Scheme.
Relational Model: Structure of Relational Database, Fundamental Relational Algebra Operations, The Tuple Relational Calculus, The Domain Relational Calculus, Modifying the Database. Relational
Commercial Languages: SQL, Basic structure of SQL Queries, Query-by-Example, Quel, Nested Subqueries, Complex queries, Integrity Constraints, Authorization, Dynamic SQL, Recursive Queries.
Relational Database Design: Pitfalls in Relational Database Design, Functional Dependency Theory, Normalization using Functional Dependencies, Normalization using Multivalued Dependencies,
Normalization using join Dependencies, Database Design Process. File And System Structure: Overall System Structure, Physical Storage Media, File Organization, RAID, Organization of Records into
Blocks, Sequential Files, Mapping Relational Data to Files, Data Dictionary Storage, Buffer Management. Indexing And Hashing: Basic Concepts, Ordered Indices, B+ -Tree Index Files, B-Tree Index
Files, Static and Dynamic Hash Function, Comparison of Indexing and Hashing, Index Definition in SQL, Multiple Key Access.
Query Processing and Optimization: Query Interpretation, Equivalence of Expressions, Estimation of Query-Processing Cost, Estimation of Costs of Access Using Indices, Join Strategies, Join
Strategies for parallel Processing, Structure of the query Optimizer, Transformation of Relational Expression. Concurrency Control: Schedules, Testing for Serializability, Lock-Based Protocols,
Timestamp-Based Protocols, Validation Techniques, Multiple Granularity, Multiversion Schemes, Insert and Delete Operations, Deadlock Handling. Distributed Database: Structure of Distributed
Databases, Trade-off in Distributing the Database, Design of Distributed Database, Transparency and Autonomy, Distributed Query Processing, Recovery in Distributed Systems, Commit Protocols,
Concurrency Control. Data Mining and Information Retrieval: Data analysis and OLAP, Data Warehouse, Data Mining, Relevance Ranking Using Terms, Relevance Ranking Using Hyperlink, Synonyms,
Homonyms, Ontology, Indexing of Document, Measuring Retrieval Efficiencies, Information Retrieval and Structured Data.
Books Recommended:
1. Database System Concepts – Abraham Silberschatz, Henry F. Korth, S. Sudarshan (5^th edition)
2. Fundamentals of Database Systems – Benjamin/Cummings, 1994
3. Database Principles, Programming, Performance – Morgan Kaufmann 1994
4. A First Course in Database Systems – Prentice Hall, 1997
5. Database Management Systems, McGraw Hill, 1996
CSE-322 Database Systems Lab
1.5 Credits
Introduction: What is a database, MySQL, Oracle, SQL, Datatypes, SQL/PLSQL, Oracle Software Installation, User Types, Creating Users, Granting. Basic Parts of Speech in SQL: Creating a Newspaper
Table, Select Command (WHERE, ORDER BY), Creating Views, Getting Text Information & Changing it, Concatenation, Cut & paste strings (RPAD, LPAD, TRIM, LTRIM, RTRIM, LOWER, UPPER, INITCAP, LENGTH,
SUBSTR, INSTR, SOUNDEX). Playing The Numbers: Addition, Subtraction, Multiplication, Division, NVL, ABS, FLOOR, MOD, POWER, SQRT, EXP, LN, LOG, ROUND, AVG, MAX, MIN, COUNT,
SUM, DISTINCT, Subquery for MAX, MIN. Grouping things together: GROUP BY, HAVING, ORDER BY, Views, Renaming Columns with Aliases. When one query depends upon another: UNION, INTERSECT, MINUS, NOT
IN, NOT EXISTS. Changing Data: INSERT, UPDATE, MERGE, DELETE, ROLLBACK, AUTOCOMMIT, COMMIT, SAVEPOINTS, MULTI TABLE INSERT, DELETE, UPDATE, MERGE. Creating And Altering Tables & Views: Altering
tables, Dropping tables, Creating views, Creating a table from a table. By What Authority: Creating Users, Granting Privileges, Password Management.
An Introduction to PL/SQL: Implement a few problems using PL/SQL (e.g. prime number, factorial, calculating the area of a circle). An Introduction to Triggers and Procedures: Implement a few
problems using triggers and procedures. An Introduction to Indexing: Implement indexing on a large database and observe the difference between an indexed and a non-indexed database.
CSE-331 Computer Architecture
3 Credits
Introduction to Computer Architecture: Overview and history; Cost factor; Performance metrics and evaluating computer designs. Instruction set design: Von Neumann machine cycle, Memory
addressing, Classifying instruction set architectures, RISC versus CISC, Micro programmed vs. hardwired control unit. Memory System Design: Cache memory; Basic cache structure and design; Fully
associative, direct, and set associative mapping; Analyzing cache effectiveness; Replacement policies; Writing to a cache; Multiple caches; Upgrading a cache; Main Memory; Virtual memory
structure, and design; Paging; Replacement strategies. Pipelining: General considerations; Comparison of pipelined and nonpipelined computers; Instruction and arithmetic pipelines; Structural,
Data and Branch hazards. Multiprocessors and Multi-core Computers: SISD, SIMD, and MIMD architectures; Centralized and distributed shared memory architectures; Multi-core Processor architecture.
Input/output Devices: Performance measures, Types of I/O devices, Buses and interface to CPU, RAID. Parallel Processing.
Books Recommended:
1. Computer Architecture and Organization – John P. Hayes; 3rd Edition, McGraw Hill
2. Computer Organization and Design: The Hardware/Software Interface – David A. Patterson and John L. Hennessy
MTH-203E Differential Equations, Laplace Transforms and Fourier Analysis
3 Credits
Differential Equation: Formation, Degree and Order of differential equation, Complete and Particular solution. Solution of ordinary differential equation of first order and first degree (special
forms). Linear differential equation with constant coefficients. Homogeneous linear differential equation. Solution of equation by the method of Variation of parameters. Solution of linear
differential equations in series by Frobenius method. Solution of simultaneous equations of the form dx/P = dy/Q = dz/R. Laplace Transforms: Definition, Laplace transforms of some elementary functions,
sufficient conditions for existence of Laplace transforms, Inverse Laplace transforms, Laplace transforms of derivatives, Unit step function, Periodic function, Some special theorems on Laplace
transforms, Partial fraction, Solution of differential equations by Laplace transforms, Evaluation of Improper Integrals. Fourier Analysis: Fourier series (Real and complex form). Finite
transforms, Fourier Integrals, Fourier transforms and application in solving boundary value problems.
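As a worked illustration of solving a differential equation by Laplace transforms (my own example, not taken from the syllabus):

```latex
\text{Solve } y' + y = 0,\quad y(0) = 1.\\
\mathcal{L}\{y'\} + \mathcal{L}\{y\} = 0
\;\Rightarrow\; sY(s) - y(0) + Y(s) = 0
\;\Rightarrow\; Y(s) = \frac{1}{s+1}.\\
\text{Inverting: } y(t) = \mathcal{L}^{-1}\!\left\{\frac{1}{s+1}\right\} = e^{-t}.
```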
Books Recommended:
1. Differential Equations – H. T. H. Piaggio; 1^st Indian Edition, 1985, S. K. Jain for CBS Publishers
2. A Text Book on Integral Calculus with Differential Equations – Mohammad, Bhattacharjee & Latif, 4^th Edition, 2010; S. Chakravarty, Gonith Prokashon
3. Schaum’s Outline Series of the Theory and Problems on Laplace Transforms – Murray R. Spiegel; Revised Edition, 2003; McGraw Hill Book Company
4. Differential Equation – Md. Abu Eusuf; Latest Edition; Abdullah Al Mashud Publisher
CSE 300 Software Development
1.5 Credits
Students will work in groups or individually to produce high quality software in different languages. Students will write structured programs and use proper documentation. Advanced programming
techniques in Mobile Application.
Books Recommended:
1. Android Application Development Cookbook- Wei-Meng Lee
2. The Complete Android Guide- Kevin Purdy
CSE 301: E-Commerce and Web Engineering
3 Credits
E-Commerce Basics: E-Commerce Definition, Internet History and E-Commerce Development, Business-to-Business E-Commerce, Business-to-Consumer E-Commerce, E-Commerce Stages and Processes,
E-Commerce Challenges, E-Commerce Opportunities. E-Commerce Options: Internet Access Requirements, Web Hosting Requirements, Entry-Level Options, Storefront and Template Services, E-Commerce
Software Packages, E-Commerce Developers, E-Business Solutions. Marketing Issues: Online and Offline Market Research, Data Collection, Domain Names, Advertising Options, E-Mail Marketing, Search
Engines, Web Site Monitoring, Incentives. Planning and Development: Web Site Goals, International Issues, Planning Stages, Resource Allocation, Content Development, Site Map Development, Web Site
Design Principles, Web Site Design Tools, Web Page Programming Tools, Data-Processing Tools. E-Commerce Components: Navigation Aids, Web Site Search Tools, Databases, Forms, Shopping Carts,
Checkout Procedures, Shipping Options. Payment Processing: Electronic Payment Issues, E-Cash, Credit Card Issues, Merchant Accounts, Online Payment Services, Transaction Processing, Taxation
Issues, Mobile Commerce (M-Commerce). Security Issues: Security Issues and Threats, Security Procedures, Encryption, Digital Certificates, SSL and SET Technologies, Authentication and
Identification, Security Providers, Privacy Policies, Legal Issues. Customer Service: Customer Service Issues, E-Mail Support , Telephone Support , Live Help Services, Customer Discussion Forums,
Value-Added Options. ASP.NET programming model, Web development in Microsoft Visual Studio .NET, Anatomy of an ASP.NET page, ASP.NET core server controls, ADO.NET data providers, ADO.NET data
containers, The data-binding model.
Books Recommended:
1. E-Commerce – Jeffrey F. Rayport, Bernard J. Jaworski; McGraw-Hill
2. Understanding Electronic Commerce – David Kosiur; Microsoft Press
3. Introduction to E-Commerce – Jeffrey F. Rayport, et al.; McGraw-Hill
4. E-Commerce Strategies – Charles Trepper
CSE 302: E-Commerce and Web Engineering Lab
1.5 Credits
Laboratory works based on CSE 301.
CSE-303 Operating Systems
3 Credits
Introduction: Operating Systems Concept, Computer System Structures, Operating System Structures, Operating System operations, Protection and Security, Special-Purpose Systems. Fundamentals of OS:
OS services and components, multitasking, multiprogramming, time sharing, buffering, spooling. Process Management: Process Concept, Process Scheduling, Process State, Process Management,
Interprocess Communication, interaction between processes and OS, Communication in Client-Server Systems, Threading, Multithreading, Process Synchronization. Concurrency control: Concurrency and
race conditions, mutual exclusion requirements, semaphores, monitors, classical IPC problem and solutions, Dead locks – characterization, detection, recovery, avoidance and prevention. Memory
Management: Memory partitioning, Swapping, Paging, Segmentation, Virtual memory – Concepts, Overlays, Demand Paging, Performance of demand paging, Page replacement algorithm, Allocation
algorithms. Storage Management: Principles of I/O hardware, Principles of I/O software, Secondary storage structure, Disk structure, Disk scheduling, Disk Management, Swap-space Management, Disk
reliability, Stable storage implementation. File Concept: File support, Access methods, Allocation methods, Directory systems, File Protection, Free Space management. Protection & Security: Goals
of protection, Domain of protection, Access matrix, Implementation of access matrix, Revocation of access rights, The security problem, Authentication, One-time passwords, Program threats, System
threats, Threat monitoring, Encryption, Computer-security classification. Distributed Systems: Types of Distributed Operating System, Communication Protocols, Distributed File Systems, Naming and
Transparency, Remote File Access, Stateful Versus Stateless Service, File Replication. Case Studies: Study of a representative Operating Systems.
Books Recommended:
1. Operating System Concepts – Silberschatz & Galvin, Wiley, 2000 (7th Edition)
2. Operating Systems – Achyut S. Godbole, Tata McGraw Hill (2nd Edition)
3. Understanding Operating Systems – Flynn & McHoes, Thomson (4th Edition)
4. Operating Systems Design & Implementation – Andrew Tanenbaum, Albert S. Woodhull, Pearson
5. Modern Operating Systems – Andrew S. Tanenbaum
CSE-304 Operating Systems Lab
1.5 Credits
Thread programming: Creating thread and thread synchronization. Process Programming: The Process ID, Running a New Process, Terminating a Process, Waiting for Terminated Child Processes, Users
and Groups, Sessions and Process Groups. Concurrent Programming: Using fork, exec for multi-task programs. File Operations: File sharing across processes, System lock table, Permission and file
locking, Mapping Files into Memory, Synchronized, Synchronous, and Asynchronous Operations, I/O Schedulers and I/O Performance.
Communicating across processes: Using different signals, Pipes, Message queue, Semaphore, Semaphore arithmetic and Shared memory.
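One of the listed exercises — creating threads and synchronizing them — can be sketched as follows. Python's threading module stands in here for the pthreads API the lab actually targets, and the function name and parameters are illustrative, not part of the syllabus:

```python
import threading

def increment_shared_counter(num_threads=4, increments=10_000):
    """Spawn threads that bump a shared counter under a lock.

    Without the lock, the read-modify-write on `count` can interleave
    across threads and lose updates; the lock makes the critical
    section atomic, so the final value is exactly
    num_threads * increments.
    """
    count = 0
    lock = threading.Lock()

    def worker():
        nonlocal count
        for _ in range(increments):
            with lock:  # critical section: one thread at a time
                count += 1

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return count
```

The same structure (create, run under mutual exclusion, join) carries over directly to pthread_create/pthread_mutex_lock/pthread_join in C.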
Books Recommended:
1. The ‘C’ Odyssey UNIX-The Open, Boundless C – Meeta Gandhi, Tilak Shetty, Rajiv Shah.
2. Beginning Linux Programming – Neil Matthew and Richard Stones
3. Linux System Programming – Robert Love
CSE-315 Data Communication
3 Credits
Introduction to modulation techniques: Pulse modulation; pulse amplitude modulation, pulse width modulation and pulse position modulation. Pulse code modulation; quantization, Delta modulation.
TDM, FDM, OOK, FSK, PSK, QPSK; Representation of noise; threshold effects in PCM and FM. Probability of error for pulse systems, concepts of channel coding and capacity. Asynchronous and
synchronous communications. Hardware interfaces, multiplexers, concentrators and buffers. Communication medium, Fiber optics.
Books Recommended:
1. Introduction to Data Communications-Eugene Blanchard
2. Data Communication Principles – Ahmad, Aftab
3. Data Communication & Networking– S.Bagad, I.A.Dhotre
4. Data Communications and Networking- Behrouz A. Forouzan
CSE-351 Management Information Systems
3 Credits
Introduction to MIS: Management Information System Concept. Definitions, Role of MIS, Approaches of MIS development. MIS and Computer: Computer Hardware for Information System, Computer Software
for Information System, Data Communication System, Database Management Technology, Client-Server Technology. Decision-Support System: Introduction, Evolution of DSS, Future development of DSS.
Application of MIS: Applications in manufacturing Sector, Applications in service sector, Case Studies.
Books Recommended:
1. Management Information Systems – James O'Brien, Tata McGraw-Hill
2. Management Information Systems – Post and Anderson, Tata McGraw-Hill
CSE-403 Compiler Design
3 Credits
Introduction to compilers: Introductory concepts, types of compilers, applications, phases of a compiler. Lexical analysis: Role of the lexical analyzer, input buffering, token specification,
recognition of tokens, symbol tables. Parsing: Parser and its role, context free grammars, top-down parsing. Syntax-directed translation: Syntax-directed definitions, construction of syntax
trees, top-down translation. Type checking: Type systems, type expressions, static and dynamic checking of types, error recovery. Run-time organization: Run-time storage organization, storage
strategies. Intermediate code generation: Intermediate languages, declarations, assignment statements. Code optimization: Basic concepts of code optimization, principal sources of optimization.
Code generation. Features of some common compilers: Characteristic features of C, Pascal and Fortran compilers.
Books Recommended:
1. Compilers: Principles, Techniques, and Tools – Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Second Edition.
CSE-404 Compiler Design Lab
1.5 Credits
How to use scanner and parser generator tools (e.g., Flex, JFlex, CUP, Yacc, etc). For a given simple source language designing and implementing lexical analyzer, symbol tables, parser,
intermediate code generator and code generator.
MTH-301 Statistics and Probability
2 Credits
Frequency distribution; mean, median, mode and other measures of central tendency, Standard deviation and other measures of dispersion, Moments, skewness and kurtosis, Elementary probability
theory and discontinuous probability distribution, e.g. binomial, Poisson and negative binomial, Continuous probability distributions, e.g. normal and exponential, Characteristics of
distributions, Hypothesis testing and regression analysis
Books Recommended:
1. Introduction to Mathematical Statistics – Hogg
2. Probability and Statistics for Scientists and Engineers – Walpole
CSE-333 Microprocessors and Assembly Language
3 Credits
Introduction to different types of microprocessors, Microprocessor architecture, instruction set, interfacing, I/O operation, interrupt structure, DMA, Microprocessor interface ICs. Advanced
microprocessor concept of microprocessor based system design. Machine and Assembly instruction types and their formats. Character representation instructions. Instruction execution. Machine
language programming. Instruction sets and their implementations. The Assembly process. Addressing methods. Subroutines, macros and files. I/O programming interrupts and concurrent processes.
Books Recommended:
1. Microprocessors & Interfacing- Douglas V. Hall
2. Microprocessors – Harunur Rashid
3. Microprocessor & Microcomputer Based System Design – Rafiquzzaman
4. Microprocessor Systems: 8086/8088 Family – Y. Liu & G. A. Gibson
CSE-334 Microprocessors and Assembly Language Lab
1.5 Credits
Laboratory works based on CSE 333.
CSE-339 Theory of Computation
2 Credits
Finite Automata: Deterministic and nondeterministic finite automata and their equivalence. Equivalence with regular expressions. Closure properties. The pumping lemma and applications.
Context-free Grammars: Definitions. Parse trees. The pumping lemma for CFLs and applications. Normal forms. General parsing. Sketch of equivalence with pushdown automata. Turing Machines:
Designing simple TMs. Variations in the basic model (multi-tape, multi-head, nondeterminism). Church-Turing thesis and evidence to support it through the study of other models. Undecidability: The
undecidability of the halting problem. Reductions to other problems. Reduction in general.
Books Recommended:
1. Introduction to Languages and the Theory of Computation, 2nd Edition – J. C. Martin, McGraw Hill Publications, 1997.
CSE 401 Software Engineering
3 credits
Concepts of software engineering: requirements definition, modularity, structured design, data specifications, functional specifications, verification, documentation, software maintenance.
Software support tools. Software project organization, quality assurance, management and communication skills.
Books Recommended:
1. Software Engineering: A Practitioner’s Approach, 6th Edition – Roger S. Pressman
2. Software Engineering Concepts – Richard Fairley
3. Software Engineering Environments – Robert N. Charette
CSE-421 Computer Network
3 Credits
Network architectures-layered architectures and ISO reference model: data link protocols, error control, HDLC, X.25, flow and congestion control, virtual terminal protocol, data security. Local
area networks, satellite networks, packet radio networks. Introduction to ARPANET, SNA and DECNET. Topological design and queuing models for network and distributed computing systems.
Books Recommended:
1. Computer Networks-A. S. Tanenbaum
2. Introduction to Networking- Barry Nance
3. Data Communications, Computer Networks & Open Systems- F. Halsall
4. TCP/IP – Sidnie Feit
5. Data Communications and Networking-Behrouz A. Forouzan
CSE-422 Computer Network Lab
1.5 Credits
Laboratory works based on CSE 421.
CSE-435 Computer Interfacing
3 Credits
Interface components and their characteristics, microprocessor I/O. Disk, Drums, and Printers. Optical displays and sensors. High power interface devices, transducers, stepper motors and
peripheral devices.
Books Recommended:
1. Microprocessors & Interfacing-Douglas V. Hall
2. Microprocessor & Microcomputer based System Design – Rafiquzzaman
3. Microcomputer Interfacing-Artwick
4. Microcomputer Interfacing-Ramesh Goanker
5. Designing User Interfaces-James E. Powell
CSE-436 Computer Interfacing Lab
1.5 Credits
Laboratory works based on CSE 435.
CSE-405 Artificial Intelligence & Expert System
3 Credits
What is Artificial Intelligence: The AI problems, The underlying assumption, What is an AI technique. Problems, Problem spaces and Search: Defining the problem as a state space search, Production
system, Problem characteristics. Heuristics Search Techniques: Generate and Test, Hill climbing, Best First Search, Problem Reduction, Constraint Satisfaction, Means-Ends Analysis. Knowledge
Representation Issues: Representation and Mappings, Approaches to knowledge Representation, Issues in Knowledge representation. Using Predicate logic: Representing simple facts in logic,
Representing Instance and Isa relationships, Computable functions and Predicates, Resolution. Representing Knowledge using Rules: Procedural versus Declarative Knowledge, Logic Programming,
Forward versus Backward Reasoning, Matching. Game playing: Overview, The Minimax Search Procedure, Adding Alpha-Beta cutoffs, Additional refinements, Iterative Deepening. Planning: Overview, An
example Domain: The Blocks World, Components of a planning system, Goal stack planning. Understanding: What is Understanding, What makes Understanding hard, Understanding as constraint
satisfaction. Natural Language Processing: Introduction, Syntactic Processing, Semantic Analysis, Discourse and Pragmatic Processing. Expert systems: representing and using domain knowledge,
Expert system shells, explanation, Knowledge Acquisition. AI Programming Languages: Prolog, LISP, Python.
Books Recommended:
1. Artificial Intelligence: A Modern Approach-S. Russel and P. Norvig
2. Introduction to Artificial Intelligence and Expert System-Dan W. Peterson
3. Artificial Intelligence-E. Rich and K. Knight
4. An Introduction to Neural Computing-C. F. Chabris and T. Jackson
5. Artificial Intelligence using C – H. Schieldt
CSE-406 Artificial Intelligence & Expert System Lab
1.5 Credits
Students will have to understand the functionalities of intelligent agents and how the agents will solve general problems. Students have to use a high-level language (Python, Prolog, LISP) to
solve the following problems:
Backtracking: State space, Constraint satisfaction, Branch and bound. Examples: 8-queen, 8-puzzle, Crypt-arithmetic. BFS and production: Water jugs problem, The missionaries and cannibals problem.
Heuristic and recursion: Tic-tac-toe, Simple block world, Goal stack planning, The Tower of Hanoi. Question answering: The monkey and bananas problem.
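A minimal sketch of the water jugs problem listed above, solved with BFS over jug states. Jug capacities 4 and 3 litres with target 2 are the classic instance; the code and function name are illustrative:

```python
from collections import deque

def water_jugs(cap_a=4, cap_b=3, target=2):
    """BFS over jug states (a, b); returns the shortest sequence of
    states that reaches `target` litres in either jug, or None."""
    start = (0, 0)
    parent = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, s = [], (a, b)    # reconstruct path back to the start
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)  # amount moved when pouring A -> B
        pour_ba = min(b, cap_a - a)  # amount moved when pouring B -> A
        moves = [
            (cap_a, b), (a, cap_b),              # fill A, fill B
            (0, b), (a, 0),                      # empty A, empty B
            (a - pour_ab, b + pour_ab),          # pour A -> B
            (a + pour_ba, b - pour_ba),          # pour B -> A
        ]
        for nxt in moves:
            if nxt not in parent:
                parent[nxt] = (a, b)
                queue.append(nxt)
    return None
```

Because BFS explores states in order of distance from the start, the first goal state dequeued yields a shortest solution.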
CSE-425 Digital Signal Processing
3 Credits
Introduction to digital signal processing (DSP): Discrete-time signals and systems, analog to digital conversion, impulse response, finite impulse response (FIR) and infinite impulse response
(IIR) of discrete-time systems, difference equation, convolution, transient and steady state response. Discrete transformations: Discrete Fourier series, discrete-time Fourier series, discrete
Fourier transform (DFT) and properties, fast Fourier transform (FFT), inverse fast Fourier transform, z-transformation – properties, transfer function, poles and zeros and inverse z-transform.
Correlation: circular convolution, auto-correlation and cross correlation. Digital Filters: FIR filters- linear phase filters, specifications, design using window, optimal and frequency sampling
methods; IIR filters- specifications, design using impulse invariant, bi-linear z-transformation, least-square methods and finite precision effects. Digital signal processor TMS family,
Application of digital signal processing.
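The convolution operation listed above can be sketched directly from its definition (pure-Python, illustrative):

```python
def convolve(x, h):
    """Linear convolution of two finite discrete-time sequences:
    y[n] = sum_k x[k] * h[n - k], output length len(x) + len(h) - 1."""
    y = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm   # x[k] * h[m] contributes to y[k + m]
    return y
```

This direct form costs O(N·M) multiplications; the FFT-based approach covered in the course computes the same result in O(N log N) for long sequences.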
Books Recommended:
1. Digital Signal Processing-John G. Proakis
2. Signals and Systems-Simon Haykin and Barry Van Veen
3. Digital Signal Processing-R. W. Schafer
4. Digital Signal Processing-Ifeachor
5. Introduction to DSP-Johnny R. Johnson
CSE-426 Digital Signal Processing Lab
1.5 Credits
Laboratory works based on CSE 425.
CSE-431 Computer Graphics
3 Credits
Introduction to Graphical data processing. Fundamentals of interactive graphics programming. Architecture of display devices and connectivity to a computer. Implementation of graphics concepts of
two-dimensional and three-dimensional viewing, clipping and transformations. Hidden line algorithms. Raster graphics concepts: Architecture, algorithms and other image synthesis methods. Design
of interactive graphic conversations.
Books Recommended:
1. Principles of Interactive Computer Graphics – William M. Newman, McGraw-Hill, 2nd edition, 1978
2. Computer Graphics: Principle and Practice in C-James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley, 2nd edition, 1995
CSE-432 Computer Graphics Lab
1.5 Credits
Laboratory works based on CSE 431.
CSE-400 Project / Thesis
3 Credits
Study of problems in the field of Computer Science and Engineering. This course will be initiated in the 3rd year or early in 4th year.
CSE-402 Comprehensive Viva Voce
2 Credits
OPTIONAL COURSES
CSE-407 Simulation and Modeling
3 Credits
Simulation methods, model building, random number generator, statistical analysis of results, validation and verification techniques, Digital simulation of continuous systems. Simulation and
analytical methods, for analysis of computer systems and practical problems in business and practice. Introduction to the simulation packages.
Books Recommended:
1. System Modeling and Simulation- V.P. Singh
2. System Design, Modeling, and Simulation Using Ptolemy II – Claudius Ptolemaeus
CSE-408 Simulation and Modeling Lab
1.5 Credits
Laboratory works based on CSE 407.
CSE-411 VLSI Design
3 Credits
Design and analysis techniques for VLSI circuits. Design of reliable VLSI circuits, noise considerations, design and operation of large fan out and fan in circuits, clocking methodologies,
techniques for data path and data control design. Simulation techniques. Parallel processing, special purpose architectures in VLSI. VLSI layouts partitioning and placement routing and wiring in
VLSI. Reliability aspects of VLSI design.
Books Recommended:
1. Basic VLSI Design – Douglas A. Pucknell, Kamran Eshraghian
2. VLSI Technology – S. M. Sze
3. Introduction to VLSI Systems – C. A. Mead and L. A. Conway
CSE-412 VLSI Design Lab
1.5 Credits
Laboratory works based on CSE 411.
CSE-413 Information System Design
3 Credits
Information, general concepts of formal information systems, analysis of information requirements for modern organizations, modern data processing technology and its application, information
systems structures, designing information outputs, classifying and coding data, physical storage media considerations, logical data organization, systems analysis, general systems design, detailed
systems design. Project management and documentation. Group development of an information system project. Includes all phases of software life cycles from requirement analysis to the completion
of a fully implemented system.
Books Recommended:
1. Information Systems Analysis and Design – Phil Agre, Christine Borgman
2. Analysis and Design of Information Systems-Langer, Arthur M.
CSE-414 Information System Design Lab
1.5 Credits
Laboratory works based on CSE 413.
CSE-419 Graph Theory
3 Credits
Introduction, Fundamental concepts, Trees, Spanning trees in graphs, Distance in graphs, Eulerian graphs, Digraphs, Matching and factors, Cuts and connectivity, k-connected graphs, Network flow
problems, Graph coloring: vertex coloring and edge coloring, Line graphs, Hamiltonian cycles, Planar graphs, Perfect graphs.
Books Recommended:
1. Graph Theory and Its Applications – Jonathan L. Gross, Jay Yellen
2. A Textbook of Graph Theory – R. Balakrishnan, K. Ranganathan
CSE-420 Graph Theory Lab
1.5 Credits
Laboratory works based on CSE 419.
CSE-423 Computer System Performance Evaluations
3 Credits
Review of system analysis, approaches to system development, feasibility assessment, hardware and software acquisition. Procurement, workload characterization, the representation of measurement
data, instrumentation: software monitors, hardware monitors, capacity planning, bottleneck detection, system and program tuning, simulation and analytical models and their application, case studies.
Books Recommended:
1. Computer Systems Performance Evaluation and Prediction – Paul J. Fortier and Howard E. Michel
2. The Art of Computer Systems Performance Analysis- Jain
CSE-424 Computer System Performance Evaluation Lab
1.5 Credits
Laboratory based on CSE 423.
CSE-437 Pattern Recognition
3 Credits
Introduction to pattern recognition: features, classifications, learning. Statistical methods, structural methods and hybrid method. Applications to speech recognition, remote sensing and
biomedical area, learning algorithms. Syntactic approach: Introduction to pattern grammars and languages, parsing techniques. Pattern recognition in computer-aided design.
Books Recommended:
1. Pattern Recognition – K. Koutroumbas
2. Pattern Recognition and Machine Learning- Christopher M. Bishop
3. Pattern Recognition for Neural Networks- Brian Ripley
CSE-438 Pattern Recognition Lab
1.0 Credits
Laboratory works based on CSE 437.
CSE-453 Digital Image Processing
3 Credits
Image Processing: Image Fundamentals, Image Enhancement: Background, Enhancement by Point-Processing, Spatial Filtering, Enhancement in Frequency Domain, Color Image Processing. Image
Restoration: Degradation Model, Diagonalization of Circulant and Block-Circulant Matrices, Algebraic Approach to Restoration, Inverse Filtering, Geometric Transformation. Image Segmentation:
Detection of Discontinuities, Edge Linking and Boundary Detection, Thresholding, Region-Oriented Segmentation, The use of Motion in Segmentation. Image Compression.
Books Recommended:
1. Digital Image Processing-Rafael C. Gonzalez and Richard E. Woods, Pearson Education Asia.
2. Nonlinear Digital Filters: Principles and Applications – I. Pitas and A. N. Venetsanopoulos, Kluwer Academic Publishers.
CSE-454 Digital Image Processing Lab
1.5 Credits
Laboratory works based on CSE 453.
CSE-455 Wireless and Sensor Networks
3 Credits
Introduction: applications; Localization and tracking: tracking multiple objects; Medium Access Control: S-MAC, IEEE 802.15.4 and ZigBee; Geographic and energy-aware routing; Attribute-Based
Routing: directed diffusion, rumor routing, geographic hash tables; Infrastructure establishment: topology control, clustering, time synchronization; Sensor tasking and control: task-driven
sensing, information-based sensor tasking, joint routing and information aggregation; Sensor network databases: challenges, querying the physical environment, in-network aggregation, data indices
and range queries, distributed hierarchical aggregation; Sensor network platforms and tools: sensor node hardware, sensor network programming challenges; Other state-of-the-art related topics.
Books Recommended:
1. Wireless Sensor Networks – C. S. Raghavendra, Krishna M. Sivalingam and Taieb Znati
2. Wireless Sensor Networks: An Information Processing Approach (The Morgan Kaufmann Series in Networking) – Feng Zhao, Leonidas Guibas
CSE-456 Wireless and Sensor Networks Lab
1.5 Credits
Laboratory works based on CSE 455.
CSE-457 Bio-Informatics
3 Credits
Cell concept: Structural organization of plant and animal cells, nucleus, cell membrane and cell wall. Cell division: Introducing chromosome, Mitosis, Meiosis and production of haploid/diploid
cell. Nucleic acids: Structure and properties of different forms of DNA and RNA; DNA replication. Proteins: Structure and classification, Central dogma of molecular biology. Genetic code: A brief
account. Genetics: Mendel’s laws of inheritance, Organization of genetic material of prokaryotes and eukaryotes, C-Value paradox, repetitive DNA, structure of chromatin – euchromatin and
heterochromatin, chromosome organization and banding patterns, structure of gene – intron, exon and their relationships, overlapping gene, regulatory sequence (lac operon), Molecular mechanism of
general recombination, gene conversion, Evolution and types of mutation, molecular mechanisms of mutation, site-directed mutagenesis, transposons in mutation. Introduction to Bioinformatics:
Definition and History of Bioinformatics, Human Genome Project, Internet and Bioinformatics, Applications of Bioinformatics Sequence alignment: Dynamic programming. Global versus local. Scoring
matrices. The Blast family of programs. Significance of alignments, Aligning more than two sequences. Genomes alignment. Structure-based alignment. Hidden Markov Models in Bioinformatics:
Definition and applications in Bioinformatics. Examples of the Viterbi, the Forward and the Backward algorithms. Parameter estimation for HMMs. Trees: The Phylogeny problem. Distance methods,
parsimony, bootstrap. Stationary Markov processes. Rate matrices. Maximum likelihood. Felsenstein’s post-order traversal. Finding regulatory elements: Finding regulatory elements in aligned and
unaligned sequences. Gibbs sampling. Introduction to microarray data analysis: Steady state and time series microarray data. From microarray data to biological networks. Identifying regulatory
elements using microarray data. Pi calculus: Description of biological networks; stochastic Pi calculus, Gillespie algorithm.
Books Recommended:
1. Introduction to Bioinformatics Algorithms –Jones and Pavel A. Pevzner
2. Introduction to Bioinformatics – Stephen A. Krawetz, David D. Womble
3. Introduction to Bioinformatics – Arthur M. Lesk
CSE-458 Bio-Informatics Lab
1.5 Credits
Laboratory works based on CSE-457.
CSE-461 Neural Networks
3 Credits
Fundamentals of Neural Networks; Back propagation and related training algorithms; Hebbian learning; Cohen-Grossberg learning; The BAM and the Hopfield Memory; Simulated Annealing; Different
types of Neural Networks: Counter propagation, Probabilistic, Radial Basis Function, Generalized Regression, etc.; Adaptive Resonance Theory; Dynamic Systems and Neural Control; The Boltzmann
Machine; Self-organizing Maps; Spatiotemporal Pattern Classification; The Neocognitron; Practical Aspects of Neural Networks.
Books Recommended:
1. An Introduction to Neural Networks – Prof. Leslie Smith
2. Fundamentals of Artificial Neural Networks – Mohamad H. Hassoun
CSE-462 Neural Networks Lab
1.5 Credits
Laboratory works based on CSE 461.
CSE-463 Machine Learning
3 Credits
Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system- training data, concept representation, function approximation.
Inductive Classification: The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses. Finding maximally specific hypotheses.
Version spaces and the candidate elimination algorithm. Learning conjunctive concepts. The importance of inductive bias. Decision Tree Learning: Representing concepts as decision trees. Recursive
induction of decision trees. Picking the best splitting attribute: entropy and information gain. Searching for simple trees and computational complexity. Occam’s razor. Overfitting, noisy data,
and pruning. Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms- cross-validation, learning curves, and statistical
hypothesis testing. Computational Learning Theory: Models of learnability- learning in the limit; probably approximately correct (PAC) learning. Sample complexity- quantifying the number of
examples needed to PAC learn. Computational complexity of training. Sample complexity for finite hypothesis spaces. PAC results for learning conjunctions, kDNF, and kCNF. Sample complexity for
infinite hypothesis spaces, Vapnik-Chervonenkis dimension. Rule Learning, Propositional and First-Order: Translating decision trees into rules. Heuristic rule induction using separate and conquer
and information gain. First-order Horn-clause induction (Inductive Logic Programming) and Foil. Learning recursive rules. Inverse resolution, Golem, and Progol. Artificial Neural Networks:
Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training. Multilayer networks and backpropagation. Hidden layers and
constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks. Support Vector Machines: Maximum margin linear separators. Quadratic
programming solution to finding maximum margin separators. Kernels for learning non-linear functions. Bayesian Learning: Probability theory and Bayes rule. Naive Bayes learning algorithm.
Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies. Instance-Based Learning: Constructing explicit
generalizations versus comparing to past specific examples. k-Nearest-neighbor algorithm. Case-based learning. Text Classification: Bag of words representation. Vector space model and cosine
similarity. Relevance feedback and Rocchio algorithm. Versions of nearest neighbor and Naive Bayes for text. Clustering and Unsupervised Learning: Learning from unclassified data. Clustering.
Hierarchical Agglomerative Clustering. k-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised learning with EM using labeled and unlabeled data.
Books Recommended:
1. Artificial Intelligence: a modern approach (2nd edition), Russell, S. and P. Norvig, Prentice Hall, 2003
2. Introduction to Machine Learning – Ethem ALPAYDIN
3. Machine Learning – Tom Mitchell, McGraw Hill
4. Introduction to machine learning (2nd edition), Alpaydin, Ethem, MIT Press, 2010
5. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods – Nello Cristianini and John Shawe-Taylor, Cambridge University Press
CSE-464 Machine Learning Lab
1.5 Credits.
Students should learn the methods for extracting rules or learning from data, and get the necessary mathematical background to understand how the methods work and how to get the best performance
from them. To achieve these goals students should learn the following algorithms in the lab: K Nearest Neighbor Classifier, Decision Trees, Model Selection and Empirical Methodologies, Linear
Classifiers: Perceptron and SVM, Naive Bayes Classifier, Basics of Clustering Analysis, K-means Clustering Algorithm, Hierarchical Clustering Algorithm. Upon completion of the course, the student
should be able to perform the following: a. Evaluate whether a learning system is required to address a particular problem. b. Understand how to use data for learning, model selection, and
testing to achieve the goals. c. Understand generally the relationship between model complexity and model performance, and be able to use this to design a strategy to improve an existing system.
d. Understand the advantages and disadvantages of the learning systems studied in the course, and decide which learning system is appropriate for a particular application. e. Make a naive Bayes
classifier and interpret the results as probabilities. f. Be able to apply clustering algorithms to simple data sets for clustering analysis.
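The first lab algorithm, the k-Nearest-Neighbor classifier, can be sketched in a few lines using Euclidean distance and majority vote; the function name and the data in the test are invented for illustration:

```python
from collections import Counter
from math import dist

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    # Pair each point with its label, sort by distance to the query,
    # and keep the k closest.
    neighbors = sorted(zip(train_points, train_labels),
                       key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Note the design trade-off k-NN illustrates for goal (c) above: there is no training cost at all, but every prediction scans the whole training set, and the choice of k controls model complexity.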
CSE-465 Contemporary course on CSE
3 Credits
CSE-466 Contemporary course on CSE Lab
1.5 Credits
Laboratory works based on CSE 465.
Paper ID D1-S5-T2.1
Paper Fourier-Reflexive Partitions Induced by Poset Metric
Authors Yang Xu, Fudan University/The University of Hong Kong, China; Haibin Kan, Fudan University, China; Guangyue Han, The University of Hong Kong, China
Session D1-S5-T2: Distance Metrics
Chaired Monday, 12 July, 23:20 - 23:40
Engagement Monday, 12 July, 23:40 - 00:00
Abstract Let $\mathbf{H}=\prod_{i\in \Omega}H_{i}$ be the cartesian product of finite abelian groups $H_{i}$ indexed by a finite set $\Omega$. Any partition of $\mathbf{H}$ gives rise to a dual
partition of its character group $\hat{\mathbf{H}}$. A given poset (i.e., partially ordered set) $\mathbf{P}$ on $\Omega$ gives rise to the corresponding poset metric on $\mathbf{H}$,
which further leads to a partition $\Gamma$ of $\mathbf{H}$. We prove that if $\Gamma$ is Fourier-reflexive, then its dual partition $\widehat{\Gamma}$ coincides with the partition of
$\hat{\mathbf{H}}$ induced by $\mathbf{\overline{P}}$, the dual poset of $\mathbf{P}$, and moreover, $\mathbf{P}$ is necessarily hierarchical. This result establishes a conjecture proposed
by Heide Gluesing-Luerssen in \cite{4}. We also show that with some other assumptions, $\widehat{\Gamma}$ is finer than the partition of $\hat{\mathbf{H}}$ induced by
$\mathbf{\overline{P}}$. We prove these results by relating the partitions with a certain family of polynomials, whose basic properties are studied in a slightly more general setting.
A failed attempt to prove P == NP
I wasn’t originally going to write about this, but people keep sending it to me asking for comments.
In computer science, we have one really gigantic open question about complexity. In the lingo, we ask “Does P == NP?”. (I’ll explain what that means below.) On March 9th, a guy named Michael LaPlante
posted a paper to ArXiv that purports to prove, once and for all, that P == NP. If this were the case, if Mr. LaPlante (I'm assuming Mr.; if someone knows differently, i.e. that it should be Doctor, or Miss, please let me know!) had in fact proved that P==NP, it would be one of the most amazing events in computer science history. And it wouldn't only be a theoretical triumph – it would have
real, significant practical results! I can’t think of any mathematical proof that would be more exciting to me: I really, really wish that this would happen. But Mr. LaPlante’s proof is, sadly,
wrong. Trivially wrong, in fact.
In order to understand what all of this means, why it matters, and where he went wrong, we need to take a step back, and briefly look at computational complexity, what P and NP mean, and what are the
implications of P == NP? (Some parts of the discussion that follows are re-edited versions of sections of a very old post from 2007.)
Before we can get to the meat of this, which is talking about P versus NP, we need to talk about computational complexity. P and NP are complexity classes of problems – that is, groups of problems
that have similar bounds on their performance.
When we look at a computation, one of the things we want to know is: “How long will this take?”. A specific concrete answer to that depends on all sorts of factors – the speed of your computer, the
particular programming language you use to run the program, etc. But independent of those, there’s a basic factor that describes something important about how long a computation will take – the
algorithm itself fundamentally requires some minimum number of operations. Computational complexity is an abstract method of describing how many operations a computation will take, expressed in terms
of the size or magnitude of the input.
For example: let’s take a look at insertion sort. Here’s some pseudocode for insertion sort.
def insertion_sort(lst):
    result = []
    for i in lst:
        inserted = False
        for j, v in enumerate(result):
            if i < v:
                result.insert(j, i)   # i belongs before v
                inserted = True
                break
        if not inserted:
            result.append(i)          # i is the largest so far
    return result
This is, perhaps, the simplest sorting algorithm to understand - most of us figured it out on our own in school, when we had an assignment to alphabetize a list of words. You take the elements of the
list to be sorted one at a time; then you figure out where in the list they belong, and insert them.
In the worst possible case, how long does this take?
1. Inserting the first element requires 0 comparisons: just stick it into the list.
2. Inserting the second element takes exactly one comparison: it needs to be compared to the one element in the result list, to determine whether it goes before or after it.
3. Inserting the third element could take either one or two comparisons. (If it's smaller than the first element of the result list, then it can be inserted in front without any more comparisons; otherwise, it needs to be compared against the second element of the result list.) So in the worst case, it takes 2 comparisons.
4. In general, for the nth element of the list, it will take at most n-1 comparisons.
So, in the worst case, it's going to take 0 + 1 + 2 + ... + (n-1) comparisons to produce a sorted list of n elements. There's a nice shorthand for computing that series: $\frac{n(n-1)}{2}$, which simplifies to $\frac{n^2 - n}{2}$, which is $O(n^2)$.
So while we can't say "sorting a list of 100 elements will take 2.3 seconds" (because that depends on a ton of factors - the specific implementation of the code, the programming language, the machine it's running on, etc.), we can say that the time it takes to run increases roughly proportionally to the square of the size of the input - which is what it means when we say that insertion
sort is O(n^2).
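To see that quadratic growth concretely, here's a quick experiment (my own sketch, not from the post): count the comparisons insertion sort performs on an already-sorted input, which forces this variant's worst case, and compare against n(n-1)/2.

```python
# Count comparisons made by insertion sort on an already-sorted list:
# each new element is compared against everything already in the result.
def insertion_sort_count(lst):
    result, comparisons = [], 0
    for i in lst:
        inserted = False
        for j, v in enumerate(result):
            comparisons += 1
            if i < v:
                result.insert(j, i)
                inserted = True
                break
        if not inserted:
            result.append(i)
    return result, comparisons

for n in (10, 20, 40):
    _, c = insertion_sort_count(list(range(n)))
    print(n, c, n * (n - 1) // 2)   # the two counts match exactly
```

Doubling the input size quadruples the comparison count, which is exactly what O(n^2) predicts.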
That's the complexity of the insert sort algorithm. When we talk about complexity, we can talk about two different kinds of complexity: the complexity of an algorithm, and the complexity of a problem
. The complexity of an algorithm is a measure of how many steps the algorithm takes to execute on an input of a particular size. It's specific to the algorithm, that is, the specific method used to
solve the problem. The complexity of the problem is a lower bound on the complexity of any possible algorithm that can solve that problem.
For example, when you look at sort, you can say that there's a minimum number of steps that's needed to compute the correct sorted order of the list. In fact, you can prove that to sort a list of
elements, you absolutely require $n \lg n$ bits of information: there's no possible way to be sure you have the list in sorted order with less information than that. If you're using an algorithm that puts things into sorted order by comparing values, that means that you absolutely must do $O(n \lg n)$ comparisons, because each comparison gives you one bit of information. That means that sorting is an $O(n \lg n)$ problem. We don't need to know which algorithm you're thinking about - it doesn't matter. There is no possible comparison-based sorting algorithm that takes less than $O(n \lg n)$
steps. (It's worth noting that there's some weasel-words in there: there are some theoretical algorithms that can sort in less than O(n lg n), but they do it by using algorithms that aren't based on
binary comparisons that yield one bit of information.)
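Where does the $n \lg n$ figure come from? A list of $n$ elements has $n!$ possible orderings, and telling them apart takes $\lg(n!)$ bits, which grows as $\Theta(n \lg n)$ by Stirling's approximation. A quick numeric sanity check (my own sketch, not from the post):

```python
# lg(n!) grows like n*lg(n): it sits between (n/2)*lg(n/2) and n*lg(n).
# lgamma(n + 1) = ln(n!), so dividing by ln(2) gives lg(n!) without overflow.
import math

for n in (10, 100, 1000):
    bits = math.lgamma(n + 1) / math.log(2)   # lg(n!)
    print(n, round(bits, 1), round(n * math.log2(n), 1))
```

The ratio of the two columns tends to 1 as n grows, which is what "Theta(n lg n)" promises.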
We like to describe problems by their complexity in that way when we can. But it's very difficult. We're very good at finding upper bounds: that is, we can in general come up with ways of saying "the
execution time will be less than O(something)", but we are very bad at finding ways to prove that "the minimum amount of time needed to solve this problem is O(something)". That distinction, between
the upper bound (maximum time needed to solve a problem), and lower bound (minimum time needed to solve a problem) is the basic root of the P == NP question.
When we're talking about the complexity of problems, we can categorize them into complexity classes. There are problems that are O(1), which means that they're constant time, independent of the size
of the input. There are linear time problems, which can be solved in time proportional to the size of the input. More broadly, there are two basic categories that we care about: P and NP.
P is the collection of problems that can be solved in polynomial time. That means that in the big-O notation for the complexity, the expression inside the parens is a polynomial: the exponents are
all fixed values. Speaking very roughly, the problems in P are the problems that we can at least hope to solve with a program running on a real computer.
NP is the collection of problems that can be solved in non-deterministic polynomial time. We'll just gloss over the "non-deterministic" part, and say that for a problem in NP, we don't know of a
polynomial time algorithm for producing a solution, but given a solution, we can check if it's correct in polynomial time. For problems in NP, the best solutions we know of have worst-case bounds
that are exponential - that is, the expression inside of the parens of the O(...) has an exponent containing the size of the problem.
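Subset-sum is a compact illustration of that asymmetry (my own example, not from the post): verifying a proposed answer takes one pass over it, while the obvious exact search tries all 2^n subsets.

```python
# Checking a certificate is polynomial; brute-force solving is exponential.
from itertools import combinations

def check(nums, subset, target):
    # Polynomial time: a membership test and one sum.
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    # Exponential time: enumerates up to 2^len(nums) subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

Given a candidate, `check` answers instantly; without one, `solve` has nothing better to do than grind through the subsets.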
NP problems are things that we can't solve perfectly with a real computer. The best exact solutions we know of take an amount of time that's exponential in the size of their inputs. For an algorithm that takes $2^n$ steps, adding just ten elements to the input multiplies the execution time by roughly 1,000; adding twenty multiplies it by roughly 1,000,000; adding thirty multiplies it by over 1,000,000,000. For NP problems, we're currently stuck using heuristics - shortcuts that will quickly produce a good guess at the real solution, but which will sometimes be wrong.
NP problems are, sadly, very common in the real world. For one example, there's a classic problem called the travelling salesman. Suppose you've got a door-to-door vacuum cleaner salesman. His
territory has 15 cities. You want to find the best route from his house to those 15 cities, and back to his house. Finding that solution isn't just important from a theoretical point of view: the
time that the salesman spends driving has a real-world cost! We don't know how to quickly produce the ideal path.
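The brute-force approach makes the cost obvious (a sketch of mine, not the post's): fixing the home city, there are (k-1)! possible routes through k cities, so 15 cities already means 14! ≈ 87 billion orderings.

```python
# Exhaustive travelling salesman: try every ordering of the cities.
from itertools import permutations
import math

def shortest_tour(dist):
    # dist[i][j] is the distance between city i and city j; city 0 is home.
    n = len(dist)
    best = math.inf
    for order in permutations(range(1, n)):   # (n-1)! candidate routes
        route = (0,) + order + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(route, route[1:])))
    return best
```

This is fine for a handful of cities and hopeless beyond a dozen or so, which is precisely the NP predicament.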
The big problem with NP is that we don't know lower bounds for anything in it. That means that while we know of slow algorithms for finding the solution to problems in NP, we don't know if those
algorithms are actually the best. It's possible that there's a fast solution - a solution in polynomial time which will give the correct answer. Many people who study computational complexity believe
that if you can check a solution in polynomial time, then computing a solution should also be polynomial time with a higher-order polynomial. (That is, they believe that there should be some sort of
bound like "the time to find a solution is no more than the cube of the time to check a solution".) But so far, no one has been able to actually prove a relationship like that.
When you look at NP problems, some of them have a special, amazing property called NP completeness. If you could come up with a polynomial time solution for any single NP-complete problem, then you'd
also discover exactly how to come up with a polynomial time solution for every other problem in NP.
In Mr. LaPlante's paper, he claims to have implemented a polynomial time solution to a problem called the maximum clique problem. Maximum clique is NP complete - so if you could find a P-time
solution to it, you'd have proven that P == NP, and that there are polynomial time solutions to all NP problems.
The problem that Mr. LaPlante looked at is the maximal clique problem:
• Given:
1. a set $V$ of atomic objects called vertices;
2. a set $E$ of objects called edges, where each edge is an unordered pair $(x, y)$, where $x$ and $y$ are vertices.
• Find:
□ The largest set of vertices $C=\{v_1, ..., v_n\}$ such that for any $v_i$, there is an edge between $v_i$ and every other vertex in $C$.
Less formally: given a bunch of dots, where some of the dots are connected by lines, find the largest set of dots where every dot in the set is connected to every other dot in the set.
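A direct, obviously-correct (and obviously exponential) reference solution, as my own sketch: test every subset of vertices, largest first, and return the first one whose members are all pairwise connected.

```python
# Brute-force maximum clique: checks subsets from largest to smallest,
# so the first fully-connected subset found is a maximum clique.
from itertools import combinations

def max_clique(vertices, edges):
    adj = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):   # exponentially many subsets
        for subset in combinations(vertices, size):
            if all(frozenset(p) in adj for p in combinations(subset, 2)):
                return set(subset)
    return set()
```

Any claimed P-time algorithm has to beat this exhaustive search on *every* graph, which is exactly what needs careful proof.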
The author claims to have come up with a simple P-time solution to that.
The catch? He's wrong. His solution isn't P-time. It's sloppy work.
His algorithm is pretty easy to understand. Each vertex has a finite set of edges connecting it to its neighbors. You have each node in the graph send its list of its neighbors to its neighbors. With that information, each node knows what 3-cliques it's a part of. Every clique of size larger than 3 is made up of overlapping 3-cliques - so you can have the cliques merge themselves into ever larger ones.
If you look at this, it's still basically considering every possible clique. But his "analysis" of the complexity of his algorithm is so shallow and vague that it's easy to get things wrong. It's a
pretty typical example of a sloppy analysis. Complexity analysis is hard, and it's very easy to get wrong. I don't want to be too hard on Mr. LaPlante, because it's an extremely easy mistake to make.
Analyzing algorithmic complexity needs to be done in a careful, exacting, meticulous way - and while Mr. LaPlante didn't do that, most people who are professional programmers could easily make a
similar mistake! But the ultimate sloppiness of it is that he never bothers to finish computing the complexity. He makes vague hand-wavy motions at showing the complexity of certain phases of his
algorithm, but he never even bothers to combine them and come up with an estimate of the full upper-bound of his algorithm!
I'm not going to go into great detail about this. Instead, I'll refer you to a really excellent paper by Patrick Prosser, which looks at a series of algorithms that compute exact solutions to the
maximum clique problem, and how they're analyzed. Compare their analysis to Mr. LaPlante's, and you'll see quite clearly how sloppy LaPlante was. I'll give you a hint about one thing LaPlante got
wrong: he's taking some steps that take significant work, and treating them as if they were constant time.
But we don't even really need to look at the analysis. Mr. LaPlante provides an implementation of his supposedly P-time algorithm. He should be able to show us execution times for various randomly
generated graphs, and show how that time grows as the size of the graph grows, right? I mean, if you're making claims about something like this, and you've got real code, you'll show your
experimental verification as well as your theoretical analysis, right?
Nope. He doesn't. And I consider that to be a really, really serious problem. He's claiming to have reduced an NP-complete problem to a small-polynomial complexity: where are the numbers?
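The missing experiment is easy to sketch (a hypothetical harness of my own; the names are mine, not LaPlante's): generate random graphs of increasing size, time a clique solver on each, and look at how the measured time grows.

```python
# Minimal timing harness: graph size vs. wall-clock seconds for any solver.
import random
import time
from itertools import combinations

def random_graph(n, p, rng):
    vertices = list(range(n))
    edges = [e for e in combinations(vertices, 2) if rng.random() < p]
    return vertices, edges

def measure(solver, sizes, p=0.5, seed=1):
    rng = random.Random(seed)
    rows = []
    for n in sizes:
        vertices, edges = random_graph(n, p, rng)
        start = time.perf_counter()
        solver(vertices, edges)
        rows.append((n, time.perf_counter() - start))
    return rows
```

A genuine polynomial-time claim should come with a table like this showing the growth curve; the paper shows nothing of the kind.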
I'll give you a good guess about the answer: the algorithm doesn't complete in a reasonable amount of time for moderately large graphs. You could argue that even if it's polynomial time, you're
looking at exponents that are no smaller than 3 (exactly what he claims the bound to be is hard to determine, since he never bothers to finish the analysis!) - a cubic algorithm on a large graph
takes a very long time. But... not bothering to show any runtime data? Nothing at all? That's ridiculous. If you look at the Prosser paper above, he manages to give actual concrete measurements of
the exponential time algorithms. LaPlante didn't bother to do that. And I can only conclude that he couldn't gather actual numbers to support his idea.
15 thoughts on “A failed attempt to prove P == NP”
1. John Fringe
> P and NP are complexity classes of algorithms – that is, groups of algorithms that have similar performance properties
Not to nitpick, but you may be oversimplifying there by equating algorithm and problem, and I believe it can confuse people more than help.
1. markcc Post author
Yeah, you’re right. Silly mistake.
2. David Starner
I’m not sure why you say “theoretical algorithms” for sorting in less than O(n log n) time; if you’re sorting something from a small set (like 16-bit integers) and many of them (way more than 2^16), a counting sort is a very practical O(n) sort on the problem.
I find the belief that P=NP to be surprising; so many problems would need polynomial-time algorithms that nobody has found yet, for one thing.
You’re more generous to LaPlante than I would have been. The instant you start thinking you have a solution to a deep problem that’s evaded many, check and double-check your answer.
As a final nitpick, the first link, “posted a paper to ArXiv” seems to link back here.
1. markcc Post author
I think you’re talking about radix sort, which is kind-of a cheat. It’s still O(n lg n), it just bakes the lg n factor into a constant,
because on a real computer, the integer representation has a fixed length. It’s still essentially bounded by O(lg(n)) radix partitions, each of which takes O(n).
There are other algorithms that get below O(n lg n) without relying on anything remotely fishy – but they’re completely impractical. The constants are huge!
1. David Starner
Not radix sort per se, except in the degenerate case.
short[] shortSort(short[] initialArray) {
    int[] buckets = new int[65536];   // one counter per possible short value
    for (int i = 0; i < buckets.length; i++) buckets[i] = 0;
    for (short s : initialArray) buckets[s + 32768]++;
    short[] finalArray = new short[initialArray.length];
    int index = 0;
    for (int currVal = 0; currVal < buckets.length; currVal++)
        for (int i = 0; i < buckets[currVal]; i++) { finalArray[index] = (short) (currVal - 32768); index++; }
    assert index == initialArray.length;
    return finalArray;
}
It's something of a quirk of the Java platform that int will always be long enough, since initialArray must be int indexable, but a 64-bit integer will be large enough to count the values in any computer currently available. Naturally, unless initialArray is larger (much larger?) than buckets, this is going to be less efficient than a normal n log n sort, but it is O(n) for all observable behavior. The log n cost of handling integers that can hold a count of n elements is usually ignored, just like the log n cost of handling a pointer to one of n elements.
2. Ingvar Mattsson
Actually, counting sort is O(n) in time and O(2**b) [b being the bits of the bounded set] in memory for a bounded integer set.
Pseudo-code (memory is taken as an integer array, large enough to have all possible input numbers as an index and wide enough to not overflow):
# O(n)
for n in numbers:
    memory[n] += 1
# O(2^b) [fixed] + O(n)
for i in possible_numbers:
    emitted = 0
    while emitted < memory[i]:
        emit i
        emitted += 1
This is subtly different from radix sort, in that we never keep a partially sorted input around. But we do have an awful lot of memory allocated and it is only ever practical for
relatively small bounded integer ranges.
3. Pavel
The sum of 0 + 1 + 2 + … + (n – 1) is equal to n(n – 1)/2, not (n – 1)(n – 2)/2. Of course both functions are still O(n^2), so it’s just a nitpick.
4. Audun
Another nitpick: You say that for NP problems we only have an exponential algorithm. However, P is contained in NP, so many NP problems have algorithms in P. I think you are actually talking
about the NP-hard complexity class.
1. markcc Post author
No, I’m not talking about NP-hard. I’m just simplifying. Colloquially, when we talk about NP problems, we mean NP-complete and harder – that is, the set of problems in NP for which we don’t
know whether or not they’re in P.
The problem with writing about stuff like this is that each additional level of detail you add loses another group of readers. Sometimes making a simplification – using the colloquial meaning
of a term without explaining it – makes the difference between people understanding it, and not understanding it.
When I was writing this, I thought that presenting NP as the class of NP-complete and harder, the way we usually do colloquially, made sense. I still think it’s good; if it’s a bit fuzzy to
experts, that’s OK – as you did, they’ll read this, and see what I meant.
1. Audun
Fair enough. My worry is that it will confuse a certain kind of reader who takes this to mean that all NP problems are hard, so the statement that P = NP is trivially false. I am not that
reader, as I have been exposed enough to this subject to figure out what’s going on.
Of course, this choice is entirely up to you!
5. Daniel
Thank you, Mark.
6. Dave W.
Wouldn’t your colloquial usage really mean “NP-complete and easier, but not known to be in P”? Any problem in NP can’t be harder, in some sense, than an NP-complete problem, because it can always
be reduced to SAT (Cook’s Theorem). But it could be of intermediate complexity – if NP-complete problems required exponential time, an intermediate problem in NP might require something like O(n^(log n)) time.
7. Dennis
In a weird way, I commend LaPlante. He had the courage to publish his algorithm which meant 1. he believed his assertions and 2. he gave a proof which was refutable. I wish that everyone opened
themselves to the potential of ridicule.
1. markcc Post author
Me too. I went much easier on him than on, say, Chris Langan(? the CTMU guy) – because LaPlante really made an effort, and put his work up where he knew people would be able to critique it
severely if he was wrong. Langan is careful to never give enough details on his “theory” to make it specific enough to be refuted. There’s always enough wiggle-room for him to take *any*
refutation, and say “That’s not what it says, you’re just too stupid to understand it.”
LaPlante is completely wrong. But he’s wrong in a respectable way.
8. vegafrank
THE P VERSUS NP PROBLEM
We define an interesting problem called $MAS$. We show $MAS$ is actually a succinct version of the well known $\textit{NP–complete}$ problem $\textit{SUBSET–PRODUCT}$. When we accept or reject
the succinct instances of $MAS$, then we are accepting or rejecting the equivalent and large instances of $\textit{SUBSET–PRODUCT}$. Moreover, we show $MAS \in \textit{NP–complete}$.
In our proof we start assuming that $P = NP$. But, if $P = NP$, then $MAS$ and $\textit{SUBSET–PRODUCT}$ would be in $\textit{P–complete}$, because all currently known $\textit{NP–complete}$ problems are $\textit{NP–complete}$ under logarithmic-space reduction, including our new problem $MAS$. A succinct version of a problem that is complete for $P$ can be shown not to lie in $P$, because it will
be complete for $EXP$. Indeed, in Papadimitriou’s book is proved the following statement: “$NEXP$ and $EXP$ are nothing else but $P$ and $NP$ on exponentially more succinct input”. Since $MAS$ is
a succinct version of $\textit{SUBSET–PRODUCT}$ and $\textit{SUBSET–PRODUCT}$ would be in $\textit{P–complete}$, then we obtain that $MAS$ should be also in $\textit{EXP–complete}$.
Since the classes $P$ and $EXP$ are closed under reductions, and $MAS$ is complete for both $P$ and $EXP$, then we could state that $P = EXP$. However, as result of Hierarchy Theorem the class
$P$ cannot be equal to $EXP$. To sum up, we obtain a contradiction under the assumption that $P = NP$, and thus, we can claim that $P \neq NP$ as a direct consequence of the Reductio ad absurdum.
You could see more on (version 3)…
Best Regards,
Sine Problem
In this 'mesh' of sine graphs, one of the graphs is the graph of the sine function. Find the equations of the other graphs to reproduce the pattern.
Many thanks Aditya Sahu from Bristol Grammar School for sending us this puzzle.
Getting Started
What happens to the equation when you translate or reflect its graph?
Student Solutions
Solutions below are from Monika Pawlowska, Warsaw, Poland; Andrei Lazanu, Bucharest, Romania; Chris Tynan, St Bees School, Cumbria; Shu Cao, Oxford High School.
There are several ways to draw the graphs to achieve the given pattern. Can you produce the same set of graphs using the cosine function?
Here is Monika's method using reflections and translations of the graph of $\sin x$.
To form the pattern, you need the functions $\pm \sin x + 2n$ (I mean $\sin x + 2n$ and $-\sin x + 2n$, where $n$ is an integer).
The graph of $-\sin x$ is symmetrical to $\sin x$ with respect to the x-axis - when you change the sign, the function is reflected; when $n$ increases or decreases, the curve 'goes' 2 units upwards
or downwards (it's translated). The graphs visible in the picture are for $n\in\{-4,-2,0,2,4\}$.
Chris's method uses only translations of the graph of $\sin x$.
First let's say $f(x) = \sin x$. It's obvious that this satisfies one of the lines given. Also, the transformations to $f(x) + a$ translate the graph $a$ units in the y direction (1) ($a$ may be
positive or negative).
Also, $f(x+a)$ translates the graph $-a$ units in the x direction (2).
Using (1), we can identify the equations of four more graphs, which will be:
$$\eqalign{ f_1(x) &= \sin x + 2 \cr f_2(x) &= \sin x + 4 \cr f_3(x) &= \sin x - 2 \cr f_4(x) &= \sin x - 4.}$$
We can also observe that the remaining 5 lines are just the above functions moved either $+a\pi$ or $-a\pi$ where $a$ is any odd number. So, a possible solution for the remaining 5 lines is:
$$\eqalign{ f_5(x) &= \sin (x-\pi) \cr f_6(x) &= \sin (x-\pi) + 2 \cr f_7(x) &= \sin (x-\pi) + 4 \cr f_8(x) &= \sin (x-\pi) - 2 \cr f_9(x) &= \sin (x-\pi) - 4.}$$ And this is one solution that
satisfies the pattern.
NB. It can also be said that the pattern can be reproduced infinitely. This can be done by generalising our equations to the following: $$f(x) = \sin (x - a\pi)+b$$ where $a$ is 0 or 1 and $b$ is any
even integer.
This is summed up by Shu Cao as follows:
Owing to the fact that the sine function is a periodic oscillating function, if we move it $2n\pi$ to the right or to the left parallel to the x-axis, ($n$ being an integer), we will have the same
graph. So we can write the sine function as $f(x)=\sin (x+2n\pi).$ When the graph is turned upside down, it is because it has been moved $n\pi$ parallel to the x-axis, where $n$ is an odd integer. We
can write it as $f(x)=\sin (x+n\pi).$ When the graphs are shifted up or down parallel to the y-axis, the function is $f(x)+n$.
Therefore, we can summarize the equations of the family of graphs in the problem as $f(x)=\sin (x+z\pi)+ 2n$, where $z$ and $n$ are integers.
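The two identities behind this family, $\sin(x+n\pi) = -\sin x$ for odd $n$ and $\sin(x+2n\pi) = \sin x$, are easy to spot-check numerically (my own sketch, not part of the submitted solutions):

```python
# Verify sin(x - pi) == -sin(x) and sin(x + 2*pi) == sin(x) at many points.
import math

for k in range(200):
    x = -10 + 0.1 * k
    assert abs(math.sin(x - math.pi) + math.sin(x)) < 1e-9
    assert abs(math.sin(x + 2 * math.pi) - math.sin(x)) < 1e-9
print("identities hold at all sampled points")
```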
Elements of Modern Physics Notes for Bsc pdf download 2023
Elements of Modern Physics Notes PDF
Free Elements of Modern Physics notes pdf are provided here for Elements of Modern Physics students so that they can prepare and score high marks in their Elements of Modern Physics exam.
In these free Elements of Modern Physics notes pdf, we will study the physical and mathematical foundations necessary for learning various topics in modern physics which are crucial for understanding
atoms, molecules, photons, nuclei, and elementary particles. These concepts are also important to understand phenomena in laser physics, condensed matter physics, and astrophysics.
We have provided complete Elements of Modern Physics handwritten notes pdf for any university student of BCA, MCA, B.Sc, B.Tech, M.Tech branch to enhance more knowledge about the subject and to score
better marks in their Elements of Modern Physics exam.
Free Elements of Modern Physics notes pdf are very useful for Elements of Modern Physics students in enhancing their preparation and improving their chances of success in their Elements of Modern Physics exam.
These free Elements of Modern Physics pdf notes will help students tremendously in their preparation for the Elements of Modern Physics exam. Please help your friends score good marks by sharing these free Elements of Modern Physics handwritten notes pdf from the links below:
Topics in our Elements of Modern Physics Notes PDF
The topics we will cover in these modern physics notes for bsc pdf will be taken from the following list:
Planck’s quantum, Planck’s constant and light as a collection of photons; Blackbody Radiation: Quantum theory of Light; Photo-electric effect and Compton scattering. De Broglie wavelength and matter
waves; Davisson-Germer experiment. Wave description of particles by wave packets. Group and Phase velocities and relation between them. Double slit experiment with electrons. Probability. Wave
amplitude and wave functions.
Position measurement: gamma-ray microscope thought experiment; Wave-particle duality leading to Heisenberg uncertainty principle; Uncertainty relations involving canonical pair of variables:
Derivation from Wave Packets; Impossibility of a particle following a trajectory; Estimating minimum energy of a confined particle using uncertainty principle.
Energy-time uncertainty principle: origin of natural width of emission lines as well as estimation of the mass of the virtual particle that mediates a force from the observed range of the force
Two-slit interference experiment with photons, atoms, and particles; linear superposition principle as a consequence; Schrodinger equation for non-relativistic particles; Momentum and Energy
operators; stationary states; physical interpretation of a wave function, probabilities, and normalization; Probability and probability current densities in one dimension.
One dimensional infinitely rigid box: energy eigenvalues, eigenfunctions, and their normalization; Quantum dot as an example; Quantum mechanical scattering and tunneling in one dimension: across a
step potential & across a rectangular potential barrier.
Lasers: Metastable states. Spontaneous and Stimulated emissions. Optical Pumping and Population Inversion.
Size and structure of atomic nucleus and its relation with atomic weight; Impossibility of an electron being in the nucleus as a consequence of the uncertainty principle. Nature of nuclear force, N-Z
graph, Liquid Drop model: semi-empirical mass formula and binding energy.
Radioactivity: stability of the nucleus; Law of radioactive decay; Mean life and half-life; Alpha decay; Beta decay: energy released, spectrum and Pauli’s prediction of neutrino; Gamma-ray emission,
energy-momentum conservation: electron-positron pair creation by gamma photons in the vicinity of a nucleus. Fission and fusion: mass deficit, relativity, and generation of energy.
Fission: nature of fragments and emission of neutrons. Fusion and thermonuclear reactions driving stellar evolution (brief qualitative discussions).
Elements of Modern Physics Notes PDF FREE Download
Elements of Modern Physics students can easily make use of all these complete Elements of Modern Physics notes pdf by downloading them from below links:
How to Download FREE Elements of Modern Physics Notes PDF?
Elements of Modern Physics students can easily download free Elements of Modern Physics notes pdf by following the below steps:
1. Visit TutorialsDuniya.com to download free Elements of Modern Physics notes pdf
2. Select ‘College Notes’ and then select ‘Physics Course’
3. Select ‘Elements of Modern Physics Notes’
4. Now, you can easily view or download free Elements of Modern Physics handwritten notes pdf
Benefits of FREE Elements of Modern Physics Notes PDF
Free Elements of Modern Physics notes pdf provide learners with a flexible and efficient way to study and reference Elements of Modern Physics concepts. Benefits of these complete free Elements of
Modern Physics pdf notes are given below:
1. Accessibility: These free Elements of Modern Physics handwritten notes pdf files can be easily accessed on various devices that makes it convenient for students to study Elements of Modern
Physics wherever they are.
2. Printable: These Elements of Modern Physics free notes pdf can be printed, allowing learners to have physical copies of their Elements of Modern Physics notes for reference and offline study.
3. Structured content: These free Elements of Modern Physics notes pdf are well-organized with headings, bullet points and formatting that make complex topics easier to follow and understand.
4. Self-Paced Learning: Free Elements of Modern Physics handwritten notes pdf offer many advantages for both beginners and experienced students, making them a valuable resource for self-paced learning and reference.
5. Visual Elements: These free Elements of Modern Physics pdf notes include diagrams, charts and illustrations to help students visualize complex concepts in an easier way.
We hope our free Elements of Modern Physics notes pdf has helped you and please share these Elements of Modern Physics handwritten notes free pdf with your friends as well 🙏
Download FREE Study Material App for school and college students for FREE high-quality educational resources such as notes, books, tutorials, projects and question papers.
If you have any questions feel free to reach us at [email protected] and we will get back to you at the earliest.
TutorialsDuniya.com wishes you Happy Learning! 🙂
Physics Notes
Elements of Modern Physics Notes FAQs
Q: Where can I get complete Elements of Modern Physics Notes pdf FREE Download?
A: TutorialsDuniya.com have provided complete Elements of Modern Physics free Notes pdf so that students can easily download and score good marks in your Elements of Modern Physics exam.
Q: How to download Elements of Modern Physics notes pdf?
A: Elements of Modern Physics students can easily make use of all these complete free Elements of Modern Physics pdf notes by downloading them from TutorialsDuniya.com